Distance difference and linear programming nonparallel plane classifier

https://doi.org/10.1016/j.eswa.2011.01.131

Abstract

We first propose Distance Difference GEPSVM (DGEPSVM), a binary classifier that obtains two nonparallel planes by solving two standard eigenvalue problems. Compared with GEPSVM, this algorithm need not deal with the singularity that can occur in GEPSVM, yet achieves better classification accuracy. The formulation can handle XOR problems with different distributions because it keeps the genuine geometrical interpretation of the primal GEPSVM. Moreover, the proposed algorithm gives classification accuracy comparable to that of LSTSVM and TWSVM, but with fewer unknown parameters. We then incorporate regularization techniques into TWSVM. With the help of the regularized formulation, a linear programming formulation of TWSVM, called FETSVM, is proposed to improve the sparsity of TWSVM and thereby suppress input features. In the linear case, this means FETSVM reduces the number of input features; in the nonlinear case, it means only a few kernel functions determine the classifier. Lastly, the algorithms are compared on artificial and public datasets. To further illustrate their effectiveness, we also apply them to USPS handwritten digits.

Research highlights

► Compared with TWSVM and LSTSVM, DGEPSVM has fewer free parameters.
► It obtains higher performance on XOR datasets with different distributions.
► The singularity that occurs in GEPSVM is avoided by DGEPSVM.
► In contrast to other multisurface classifiers, FETSVM can reduce the number of input features.

Introduction

Eigenvalue-based techniques are attractive for the classification of very large sparse datasets (Guarracino, Cifarelli, Seref, & Pardalos, 2007), a notable example being the generalized proximal SVM (GEPSVM for short) (Mangasarian & Wild, 2006). GEPSVM obtains each of its nonparallel planes from the eigenvector corresponding to the smallest eigenvalue of a generalized eigenvalue problem, so that each plane is as close as possible to the samples of its own class and at the same time as far as possible from the samples of the other class (Mangasarian & Wild, 2006). The advantages of two-class GEPSVM are its lower computational complexity and its better classification performance on XOR-type problems relative to the standard SVM, which finds a single plane separating the two classes. In Mangasarian and Wild (2006), Mangasarian et al. presented a simple "cross planes" example, a generalization of the XOR example, which indicated the effectiveness of GEPSVM over PSVM and SVM; Fig. 1 of that paper shows GEPSVM attaining 100% classification accuracy in the XOR case.

Recently, many GEPSVM-based algorithms have been proposed. To improve the generalization of GEPSVM, Jayadeva et al. proposed fuzzy GEPSVM (FGEPSVM) together with its multi-category formulation. In 2007, Guarracino et al. (2007) introduced a new regularization technique that reduces the time complexity of GEPSVM, but at the price of two unknown parameters in the linear case. These algorithms obtain two planes by solving generalized eigenvalue problems, as GEPSVM does. However, if the symmetric matrices occurring in these algorithms, such as $H$ and $M$ in formulations (5) and (6), are both positive semidefinite and singular, the generalized eigenvalue problem becomes ill-defined. Moreover, these algorithms weaken the genuine geometrical interpretation of the nonparallel plane classifier because of the regularization term they adopt to improve generalization.

A twin SVM algorithm (TWSVM for short), proposed by Jayadeva et al., was recently published in TPAMI (Jayadeva & Chandra, 2007). This algorithm, in the spirit of GEPSVM, obtains two planes by solving two quadratic programming problems (QPPs), each smaller than the single QPP of the standard SVM. Experimental results show the effectiveness of TWSVM over SVM and GEPSVM (Arun Kumar & Gopal, 2009; Jayadeva & Chandra, 2007). TWSVM takes $O(m^3/4)$ operations, a quarter of the cost of the standard SVM, whereas GEPSVM takes $O(n^3/4)$, where $m$ is the number of training samples, $n$ is the dimensionality, and $m \gg n$ (Arun Kumar & Gopal, 2009; Jayadeva & Chandra, 2007). GEPSVM is therefore by far the faster of the two. To reduce the time complexity while keeping the effectiveness of the twin SVM classifier, a least squares version (LSTSVM for short) was proposed in 2009 (Arun Kumar & Gopal, 2009; Ghorai et al., 2009). In effect, LSTSVM determines two nonparallel planes by solving two PSVM-type (Fung & Mangasarian, 2001) problems. Compared with TWSVM, LSTSVM needs less computational time because it solves two systems of linear equations instead of two QPPs. TWSVM and LSTSVM, however, also lose the genuine geometrical interpretation of the nonparallel plane classifier. GEPSVM was proposed to solve complex examples, such as the XOR example, that are difficult for typical linear classifiers (Mangasarian & Wild, 2006).
Each of the planes obtained by GEPSVM is as close as possible to the samples of its own class and at the same time as far as possible from the samples of the other class (Mangasarian & Wild, 2006). TWSVM, by contrast, requires each plane to be as close as possible to the samples of its own class while lying at a distance of at least 1 from the samples of the other class (Jayadeva & Chandra, 2007), and LSTSVM requires each plane to lie at a distance of exactly 1 from the samples of the other class. Intuitively, when handling XOR examples with different distributions, TWSVM and LSTSVM may yield poor classification performance because their optimization criteria differ from that of GEPSVM, even though the loss functions they use give good performance on UCI datasets. Another flaw of TWSVM and LSTSVM is that two penalty parameters are introduced into their objective functions, instead of the single regularization parameter of GEPSVM, which undoubtedly complicates parameter selection. In addition, when there are many noise variables, the 1-norm SVM (Zou, 2007; Zhou et al., 2002) has advantages over the 2-norm SVM because the former generates sparse solutions, which make the classifier easier to store and faster to compute. However, the GEPSVM-based algorithms, including GEPSVM itself, cannot generate very sparse solutions even if we write their 1-norm formulations as in the 1-norm SVM (Zou, 2007), because the direction $w_i$ and the threshold that determine the $i$th separating plane are tied to all of the input samples.
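To make the difference between the two criteria concrete, the first-plane problems of GEPSVM and TWSVM can be written side by side, in the plane convention $x^T w - b = 0$ used below (the regularized GEPSVM objective follows Mangasarian & Wild, 2006, with Tikhonov factor $\delta$; the TWSVM problem follows Jayadeva & Chandra, 2007):

```latex
% GEPSVM: ratio of distances -- the geometrical criterion
\min_{(w,b)\neq 0}\;
\frac{\|Aw - e_1 b\|^2 + \delta\,\|(w;\,b)\|^2}{\|Bw - e_2 b\|^2}

% TWSVM (first problem): closeness plus a unit-margin constraint
\min_{w,\,b,\,\xi}\; \tfrac{1}{2}\|Aw - e_1 b\|^2 + c\,e_2^T\xi
\quad\text{s.t.}\quad -(Bw - e_2 b) + \xi \ge e_2,\quad \xi \ge 0
```

The ratio couples closeness and farness into a single geometric quantity, whereas the TWSVM constraint fixes the farness at a unit margin; this substitution is exactly the loss of geometrical interpretation discussed above.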

In this paper, we first propose a new and fast algorithm, termed Distance Difference GEPSVM (DGEPSVM). DGEPSVM need not consider the singularity occurring in GEPSVM because it uses a formulation similar to the MMC (Jiang & Zhang, 2004). We show that solving DGEPSVM reduces to solving two simple eigenvalue problems. This property makes DGEPSVM fast, with speed at least comparable to that of GEPSVM. Moreover, DGEPSVM can deal with XOR examples with different distributions because it keeps the genuine geometrical interpretation of GEPSVM. We then propose a feature selection algorithm for TWSVM, called FETSVM, which overcomes the flaw that GEPSVM and the other GEPSVM-based algorithms cannot generate very sparse solutions. Lastly, the two algorithms are compared on artificial and UCI datasets, and we go on to illustrate their effectiveness on a USPS handwritten digits application.

Our contributions rest on four facts: (1) DGEPSVM need not care about the singularity occurring in GEPSVM and achieves better classification accuracy than GEPSVM; (2) DGEPSVM surpasses TWSVM and LSTSVM in solving XOR examples with different distributions and gives comparable classification accuracy on standard datasets; (3) DGEPSVM has fewer unknown parameters than TWSVM and LSTSVM; and (4) FETSVM runs faster than TWSVM and suppresses input features while giving comparable classification accuracy. A sketch of the FETSVM-style linear program appears below.
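As a concrete illustration of point (4), the following is a minimal sketch of how a 1-norm, feature-suppressing variant of the first TWSVM problem can be posed as a single linear program, in the spirit of FETSVM. The weights `c1` and `lam` and the exact regularized objective are our assumptions for illustration, not the paper's precise formulation; scipy's `linprog` is used only as an off-the-shelf LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def fetsvm_plane(A, B, c1=1.0, lam=0.1):
    """Sparse plane x^T w - b = 0 for class 1 via one linear program.

    Solves   min  sum(t) + c1*sum(xi) + lam*||(w; b)||_1
             s.t. t >= |A w - e1 b|                 (1-norm closeness loss)
                  -(B w - e2 b) + xi >= e2, xi >= 0 (unit margin from B)
    using the splits w = p - q, b = bp - bn so every LP variable is >= 0.
    Variable layout: [p (n), q (n), bp, bn, t (m1), xi (m2)].
    """
    m1, n = A.shape
    m2 = B.shape[0]
    e1 = np.ones((m1, 1))
    e2 = np.ones((m2, 1))
    cost = np.concatenate([lam * np.ones(2 * n + 2),
                           np.ones(m1), c1 * np.ones(m2)])
    Z1 = np.zeros((m1, m2))
    Z2 = np.zeros((m2, m1))
    # A w - e1 b - t <= 0   and   -(A w - e1 b) - t <= 0
    r1 = np.hstack([A, -A, -e1, e1, -np.eye(m1), Z1])
    r2 = np.hstack([-A, A, e1, -e1, -np.eye(m1), Z1])
    # B w - e2 b - xi <= -e2   (the unit-margin constraint, rearranged)
    r3 = np.hstack([B, -B, -e2, e2, Z2, -np.eye(m2)])
    res = linprog(cost, A_ub=np.vstack([r1, r2, r3]),
                  b_ub=np.concatenate([np.zeros(2 * m1), -np.ones(m2)]),
                  bounds=(0, None), method="highs")
    z = res.x
    return z[:n] - z[n:2 * n], z[2 * n] - z[2 * n + 1]   # w, b
```

Zero entries of `w` flag input features the plane ignores, which is the sparsity and feature-suppression effect claimed for FETSVM; repeating the construction with the roles of A and B exchanged gives the second plane.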

Section snippets

Generalized Proximal Support Vector Machines (GEPSVM) (Mangasarian & Wild, 2006)

Given $m$ training points in the $n$-dimensional input space $\mathbb{R}^n$, denote by the $m_1 \times n$ matrix $A$ the points belonging to class 1 and by the $m_2 \times n$ matrix $B$ the points belonging to class $-1$, where $m_1 + m_2 = m$.

The main purpose of GEPSVM is to find two nonparallel hyperplanes in $n$-dimensional space, i.e.,

$$x^T w_1 - b_1 = 0, \qquad x^T w_2 - b_2 = 0,$$

where $(w_i, b_i) \in (\mathbb{R}^n \times \mathbb{R})$, $i = 1, 2$. This algorithm requires each plane to be as close as possible to the samples of its own class and at the same time as far as possible from the samples of the other class.

Suppose $(w_i, b_i) \neq 0$,
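For reference, here is a minimal numerical sketch of the linear GEPSVM training step, following the regularized Rayleigh-quotient formulation of Mangasarian and Wild (2006): each plane comes from the eigenvector at the smallest eigenvalue of a generalized eigenvalue problem. The Tikhonov factor `delta` is the single regularization parameter; the helper name `gepsvm_plane` is ours.

```python
import numpy as np
from scipy.linalg import eig

def gepsvm_plane(A, B, delta=1e-4):
    """Plane x^T w - b = 0 close to the rows of A, far from the rows of B.

    Solves the generalized eigenvalue problem G z = lambda H z, where
    G = [A -e]^T [A -e] + delta*I and H = [B -e]^T [B -e], and returns
    the eigenvector z = (w; b) of the smallest eigenvalue.
    """
    G = np.hstack([A, -np.ones((A.shape[0], 1))])
    H = np.hstack([B, -np.ones((B.shape[0], 1))])
    G = G.T @ G + delta * np.eye(G.shape[1])  # Tikhonov regularization
    H = H.T @ H                               # may be singular: the issue noted below
    vals, vecs = eig(G, H)                    # generalized eigenproblem
    vals = np.real(vals)
    finite = np.isfinite(vals)                # guard against infinite eigenvalues
    i = np.where(finite)[0][np.argmin(vals[finite])]
    z = np.real(vecs[:, i])
    return z[:-1], z[-1]                      # w, b
```

The second plane is obtained by swapping the roles of A and B; a new point is then assigned to the class of the nearer plane.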

Linear DGEPSVM

In GEPSVM, the matrices $H$ and $M$ are always singular. In Mangasarian and Wild (2006), Mangasarian et al. claimed that GEPSVM can still obtain a perfect two-plane classifier on XOR examples even when $H$ and $M$ are singular; that is, GEPSVM can handle XOR examples well without additional constraints. Observing the constraints in TWSVM and LSTSVM, we easily find that they lose the genuine geometrical interpretation of the primal GEPSVM. In TWSVM, the constraints require the plane to be at
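The discussion above motivates replacing GEPSVM's ratio objective with a distance difference in the spirit of the MMC (Jiang & Zhang, 2004), so that only standard eigenvalue problems arise. The following sketch makes that idea concrete under our assumption that each DGEPSVM plane minimizes $\|Aw - e_1 b\|^2 - \beta\|Bw - e_2 b\|^2$ over unit-norm $(w; b)$; the trade-off `beta` is then the classifier's single free parameter.

```python
import numpy as np

def dgepsvm_plane(A, B, beta=1.0):
    """Distance-difference plane: eigenvector of a standard eigenproblem.

    Minimizes ||A w - e1 b||^2 - beta * ||B w - e2 b||^2 over ||(w; b)|| = 1,
    i.e. takes the smallest-eigenvalue eigenvector of the symmetric matrix
    G - beta * H.  No inversion is required, so singular H is harmless.
    """
    G = np.hstack([A, -np.ones((A.shape[0], 1))])
    H = np.hstack([B, -np.ones((B.shape[0], 1))])
    M = G.T @ G - beta * (H.T @ H)   # symmetric, possibly indefinite
    vals, vecs = np.linalg.eigh(M)   # standard (not generalized) eigenproblem
    z = vecs[:, 0]                   # eigh sorts eigenvalues in ascending order
    return z[:-1], z[-1]             # w, b
```

Because only a standard symmetric eigenproblem is solved, the singularity of $H$ never enters, which is exactly the property claimed for DGEPSVM.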

Nonlinear MPDMC

We first discuss the optimization problem (16). Following Jiang and Zhang (2004) and Mika, Ratsch, Weston, Scholkopf, and Mullers (1999), we note that every solution $w \in \mathcal{H}$ (KFS: kernel feature space) can be written as an expansion in terms of the mapped training data, thereby obtaining

$$w_1 = \sum_{i=1}^{m} (a_i)_1 \phi(x_i) = \phi(X) a_1,$$

where

$$\phi(X) = \big(\phi(x_1^1), \phi(x_2^1), \ldots, \phi(x_{m_1}^1), \phi(x_1^2), \phi(x_2^2), \ldots, \phi(x_{m_2}^2)\big),$$
$$a_1 = \big(a_1^1, a_2^1, \ldots, a_{m_1}^1, a_1^2, a_2^2, \ldots, a_{m_2}^2\big)^T.$$

Substituting (39) into (16) gives an explicit expression:

$$\min f(a_1, b_1) = \big(K(A, C^T) a_1 - e_1 b_1\big)^T \big(K(A, C^T) a_1 - e_1 b_1\big) - \beta \big(K \ldots$$
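In the kernelized problem the only data-dependent quantities are the Gram blocks $K(A, C^T)$ and $K(B, C^T)$, where $C = [A; B]$ stacks all training points. A small sketch of building these blocks with a Gaussian kernel (the kernel choice, the parameter `gamma`, and the toy data are ours for illustration):

```python
import numpy as np

def rbf_gram(X, C, gamma=1.0):
    """K(X, C^T): Gram matrix with K_ij = exp(-gamma * ||x_i - c_j||^2)."""
    sq = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

A = np.random.randn(40, 2)   # class 1 points (toy data)
B = np.random.randn(60, 2)   # class -1 points (toy data)
C = np.vstack([A, B])        # all m = m1 + m2 training points

KA = rbf_gram(A, C)          # m1 x m block entering K(A, C^T) a_1 - e_1 b_1
KB = rbf_gram(B, C)          # m2 x m block entering the beta-weighted term
```

Replacing $A$ and $B$ in the linear solvers above by these Gram blocks, and the direction $w$ by the coefficient vector $a$, yields the nonlinear classifier; since only a few coefficients survive in a sparse solution, only a few kernel functions determine it, as stated in the abstract.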

Experimental results on XOR examples and standard datasets

We experimented on publicly available datasets from the UCI database (Murphy & Aha, 1992) as well as two synthetic datasets to demonstrate the effectiveness of DGEPSVM and FETSVM. The two synthetic "Crossplanes" datasets are designed to visually illustrate the effectiveness of the proposed DGEPSVM. In the first "Crossplanes" dataset, which differs from the one used in Arun Kumar and Gopal (2009), there exist some points with different labels distributed at the cross-location of the two classes of

Conclusion

We have proposed a new but simple classifier, termed DGEPSVM, for solving data classification problems in data mining with a single unknown parameter. Keeping the genuine geometrical interpretation of the nonparallel plane classifier, DGEPSVM is capable of dealing with XOR examples from different distributions, in contrast to TWSVM and LSTSVM, which weaken that interpretation. In addition, DGEPSVM on publicly available datasets

Acknowledgments

The authors are extremely thankful to the Research Foundation for the Doctoral Program of Higher Education of China (20093219120025) and the National Science Foundation of China (90820306) for support.
