Support Vector Machine (SVM) is a supervised learning algorithm widely used for classification and nonlinear function approximation. The goal of SVM is to find the optimum separating hyperplane, i.e., the hyperplane that classifies the data points correctly while separating the two classes with the largest possible margin. This hyperplane is determined entirely by the support vectors chosen from the training set. On the other hand, training an SVM involves solving a constrained quadratic programming problem, which requires large amounts of memory and training time for large-scale problems. Yet only a small part of the training set, the support vectors, actually determines the optimum separating hyperplane.

In this paper, we propose a method for selecting a reduced subset of the training data for SVM training. For this purpose, we use the Principal Component Analysis (PCA) technique to eliminate non-critical examples from the training set. With the help of PCA, the data in the multi-dimensional space is projected onto a one-dimensional space. Then, using the mean and the standard deviation of the one-dimensional instances, the non-critical points are identified. These non-critical instances are removed from the original training set, and the reduced training set is used for the training process. Our experimental results show that the proposed method reduces computational time without degrading the classification results.
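The reduction step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it projects the data onto the first principal component and, as an assumed criterion, treats points whose projection lies more than `k` standard deviations from the overall mean as non-critical (the threshold `k` and the helper name `reduce_training_set` are illustrative choices).

```python
import numpy as np

def reduce_training_set(X, y, k=1.0):
    """Project X onto its first principal component and drop points whose
    1-D projection lies more than k standard deviations from the mean.
    The cutoff k and the 'far from the mean means non-critical' rule are
    illustrative assumptions, not the paper's exact criterion."""
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    z = Xc @ Vt[0]                               # 1-D projection onto first PC
    mask = np.abs(z - z.mean()) <= k * z.std()   # keep points near the mean
    return X[mask], y[mask]

# toy data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xr, yr = reduce_training_set(X, y)
print(len(Xr), "of", len(X), "points kept")
```

The reduced pair `(Xr, yr)` would then be passed to any standard SVM trainer in place of the full training set.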