Discriminant neighborhood embedding for classification
Introduction
Manifold learning plays an important role in many applications such as pattern representation and classification. Several nonlinear techniques [1], [2], [3] have recently been introduced to learn the intrinsic manifold embedded in a high-dimensional ambient space. However, these nonlinear techniques yield maps that are defined only on the training data points, and it remains unclear how to evaluate the maps on novel test points. To address this problem, locality preserving projection (LPP) was proposed [4]. Specifically, LPP finds an embedding that preserves local information and can be straightforwardly applied to any new sample. Since LPP makes no use of class label information, it may not perform well in classification. Local discriminant embedding (LDE) [5] incorporates class information into the construction of the embedding and derives an embedding suited to nearest-neighbor classification in a low-dimensional space. Nevertheless, LDE does not efficiently de-emphasize distant points, which may degrade classification performance. Furthermore, both LPP and LDE suffer from matrix singularity when the number of training samples is much smaller than the dimensionality of each sample. To sidestep this, LPP and LDE first project the data to a PCA subspace, which may discard discriminative information useful for classification.
In this paper, a novel method called discriminant neighborhood embedding (DNE) is proposed. DNE is inspired by an intuition from dynamics: multi-class data points in a high-dimensional space are assumed to be pulled or pushed by their discriminant neighbors, so as to form an optimal low-dimensional embedding for classification. DNE requires no matrix inversion, so the complications caused by matrix singularity are avoided.
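The pull-and-push intuition above can be sketched in code. The following is a minimal numpy implementation of the DNE idea as described here: same-class neighbors attract (+1 edges), different-class neighbors repel (−1 edges), and the projection comes from an eigenproblem with no matrix inversion. The function name and parameter defaults are illustrative, not from the paper.

```python
import numpy as np

def dne_embedding(X, y, n_neighbors=5, n_components=2):
    """Sketch of discriminant neighborhood embedding (DNE).

    X : (n_samples, n_features) data matrix, y : (n_samples,) class labels.
    Returns a projection matrix P of shape (n_features, n_components).
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # exclude self-neighbors
    knn = np.argsort(d2, axis=1)[:, :n_neighbors]

    # signed discriminant adjacency: +1 pulls same-class neighbors
    # together, -1 pushes different-class neighbors apart
    F = np.zeros((n, n))
    for i in range(n):
        for j in knn[i]:
            F[i, j] = F[j, i] = 1.0 if y[i] == y[j] else -1.0

    S = np.diag(F.sum(axis=1))
    L = S - F                       # Laplacian of the signed graph
    M = X.T @ L @ X                 # note: no matrix inverse is needed
    w, V = np.linalg.eigh(M)        # eigenvalues in ascending order
    return V[:, :n_components]      # directions of smallest eigenvalues
```

Because `M` is symmetric, `eigh` returns orthonormal eigenvectors, so the learned projection satisfies the usual orthogonality constraint without any PCA preprocessing step.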
Section snippets
Discriminant neighborhood embedding
Suppose multi-class data are sampled from an underlying manifold embedded in a high-dimensional ambient space, and any subset of data points in the same class is assumed to lie on a submanifold of it. We seek an embedding characterized by intra-class compactness and inter-class separability. Assume that there is an interaction between each pair of data points in the ambient space: the larger the distance between two points, the weaker the interaction becomes. Therefore each …
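In the standard DNE formulation (reconstructed here from the published method, since the snippet above is truncated), the interaction is encoded as a signed neighborhood graph and the embedding is found by minimizing a signed graph energy:

```latex
F_{ij} =
\begin{cases}
 +1, & x_i,\, x_j \text{ are neighbors in the same class},\\
 -1, & x_i,\, x_j \text{ are neighbors in different classes},\\
 \phantom{+}0, & \text{otherwise},
\end{cases}
\qquad
\min_{P^{\mathsf T}P = I}\;
\Phi(P) \;=\; \sum_{i,j} \bigl\lVert P^{\mathsf T}x_i - P^{\mathsf T}x_j \bigr\rVert^{2}\, F_{ij}.
```

Minimizing $\Phi(P)$ shrinks distances between same-class neighbors (positive $F_{ij}$) while expanding distances between different-class neighbors (negative $F_{ij}$), yielding exactly the intra-class compactness and inter-class separability sought above.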
Comparisons with LPP and LDE
LPP finds a low-dimensional embedding that preserves locality regardless of the class of the neighbors. That is, for multi-class points, LPP mistakenly treats the repulsions between inter-class neighbors as attractions, which degrades classification performance. As an example, the insular point (see Fig. 1) and its inter-class neighbors will be pulled even closer together by LPP. The same is true for marginal points, e.g. the point in Fig. 1. To avoid this drawback of LPP, DNE divides …
Experimental results
In this section two databases, namely the UMIST database of faces and the MNIST database of handwritten digits, are used to compare the proposed DNE method with state-of-the-art approaches: PCA, LPP, and LDE. All experiments employ the nearest-neighbor classifier.
The UMIST face database, available at http://images.ee.umist.ac.uk/danny/database.html, contains 575 images of 20 individuals. Each image was cropped and downsampled. Fig. 2 shows several samples of one individual in …
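A minimal sketch of the evaluation protocol described above, using the PCA baseline with a 1-nearest-neighbor classifier. Since the UMIST and MNIST data are not bundled here, scikit-learn's small digits dataset stands in; the split ratio, random seed, and number of components are illustrative assumptions, not the paper's settings.

```python
# Hypothetical stand-in for the paper's protocol: reduce dimensionality,
# then classify with the nearest-neighbor rule.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0)

# PCA baseline followed by a 1-NN classifier in the reduced subspace
clf = make_pipeline(PCA(n_components=30), KNeighborsClassifier(n_neighbors=1))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"PCA + 1-NN accuracy: {acc:.3f}")
```

Swapping the `PCA` step for an LPP, LDE, or DNE projection (e.g. a fitted projection matrix applied as `X @ P`) reproduces the comparison structure of the experiments.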
Acknowledgements
This work is supported in part by the National Natural Science Foundation of China under Grants No. 60373020, 60402007, and 60533100, and by MoE R&D Foundation Grant 104075.
References (5)
- J.B. Tenenbaum et al., A global geometric framework for nonlinear dimensionality reduction, Science (2000)
- S.T. Roweis et al., Nonlinear dimensionality reduction by locally linear embedding, Science (2000)