Abstract: |
The support vector machine (SVM) is widely used in machine learning, computer vision, data mining, and related areas because of its strong classification performance. Although many approaches improve the accuracy and efficiency of SVM models, few address how to eliminate redundant data from the input training vectors. It is well known that most support vectors lie near the class boundary, so vectors near the center of a class contribute little to the model. In this paper, we propose a new approach based on a Gaussian model that preserves the training vectors near the class boundary and eliminates those near the class center. Experiments show that our approach removes most of the input training vectors while preserving the support vectors, which significantly reduces the computational cost while maintaining accuracy.
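The abstract only outlines the idea of Gaussian-based pruning; a minimal sketch of one way such a scheme could look is given below, assuming a per-class Gaussian fit and a Mahalanobis-distance percentile threshold. The function name, the `keep_fraction` parameter, and the thresholding rule are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact method): fit a Gaussian to each
# class, keep only the samples farthest from the class mean (likely boundary
# points), then train an SVM on the reduced training set.
import numpy as np
from sklearn.svm import SVC

def prune_by_gaussian(X, y, keep_fraction=0.4):
    """Keep, per class, the fraction of samples with the largest squared
    Mahalanobis distance to the class mean (assumed boundary candidates)."""
    keep_idx = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Xc = X[idx]
        mean = Xc.mean(axis=0)
        # Slight regularization so the covariance matrix is invertible.
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        inv_cov = np.linalg.inv(cov)
        diff = Xc - mean
        # Squared Mahalanobis distance of each sample to the class center.
        dist = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        n_keep = max(1, int(keep_fraction * len(idx)))
        keep_idx.extend(idx[np.argsort(dist)[-n_keep:]])
    return np.array(keep_idx)

# Usage: train the SVM only on the retained (boundary-like) vectors.
# X, y = ...  # full training data
# kept = prune_by_gaussian(X, y, keep_fraction=0.4)
# clf = SVC(kernel='rbf').fit(X[kept], y[kept])
```

Under this kind of scheme, the samples closest to each class center are discarded before training, so the SVM solver sees far fewer vectors while the boundary region, where support vectors tend to lie, is retained.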