Elsevier

Pattern Recognition

Volume 39, Issue 11, November 2006, Pages 2240-2243

Discriminant neighborhood embedding for classification

https://doi.org/10.1016/j.patcog.2006.05.011

Abstract

In this paper a novel subspace learning method called discriminant neighborhood embedding (DNE) is proposed for pattern classification. We suppose that multi-class data points in high-dimensional space tend to move due to local intra-class attraction or inter-class repulsion, and the optimal embedding for classification is discovered accordingly. After being embedded into a low-dimensional subspace, data points in the same class form compact submanifolds, whereas the gaps between submanifolds corresponding to different classes become wider than before. Experiments on the UMIST and MNIST databases demonstrate the effectiveness of our method.

Introduction

Manifold learning plays an important role in many applications such as pattern representation and classification. Several nonlinear techniques [1], [2], [3] have recently been introduced to learn the intrinsic manifold embedded in the ambient space of high dimensionality. However, these nonlinear techniques yield maps that are defined only on the training data points, and it remains unclear how to evaluate the maps on novel testing points. To address this problem, locality preserving projection (LPP) was proposed [4]. Specifically, LPP finds an embedding that preserves local information and can be simply extended to any new sample. Since LPP does not make use of class label information, however, it cannot perform well in classification. Local discriminant embedding (LDE) [5] incorporates the class information into the construction of the embedding and derives an embedding suited to nearest-neighbor classification in a low-dimensional space. Nevertheless, distant points are not deemphasized efficiently by LDE, which may degrade classification performance. Furthermore, both LPP and LDE suffer from matrix singularity when the number of training samples is much smaller than the dimensionality of each sample. To address this problem, LPP and LDE first project the data to a PCA subspace, which may lose some discriminative information for classification.

In this paper, a novel method called discriminant neighborhood embedding (DNE) is proposed. DNE is inspired by an intuition from dynamics: multi-class data points in high-dimensional space are supposed to be pulled or pushed by their discriminant neighbors, forming an optimal low-dimensional embedding for classification. DNE requires no matrix inversion, so the complication due to matrix singularity is avoided.

Section snippets

Discriminant neighborhood embedding

Suppose N multi-class data points {x1, x2, …, xN} are sampled from an underlying manifold M embedded in the high-dimensional ambient space Rn, and any subset of data points in the same class is assumed to lie in a submanifold of M. We seek an embedding characterized by intra-class compactness and inter-class separability. Assume that there is an interaction between each pair of data points in the ambient space. The larger the distance between two points is, the weaker the interaction becomes. Therefore each
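The attraction/repulsion idea above can be illustrated with a minimal NumPy sketch. It assumes the standard DNE formulation (weights of +1 for intra-class neighbor pairs, -1 for inter-class neighbor pairs, and minimization of tr(P'X(S-F)X'P) under P'P = I); the function and variable names are ours, not the paper's:

```python
import numpy as np

def dne_embedding(X, y, k=5, d=2):
    """Sketch of discriminant neighborhood embedding.

    X : (n_features, N) data matrix, one sample per column.
    y : (N,) class labels.
    k : neighborhood size.
    d : target dimensionality.
    Returns an (n_features, d) projection matrix P with P.T @ P = I.
    """
    n, N = X.shape
    # Pairwise squared Euclidean distances between all samples.
    D = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    # k nearest neighbors of each point (column 0 is the point itself).
    knn = np.argsort(D, axis=1)[:, 1:k + 1]
    F = np.zeros((N, N))
    for i in range(N):
        for j in knn[i]:
            # +1 for intra-class neighbors (attraction),
            # -1 for inter-class neighbors (repulsion); symmetrized.
            F[i, j] = F[j, i] = 1.0 if y[i] == y[j] else -1.0
    S = np.diag(F.sum(axis=1))
    M = X @ (S - F) @ X.T        # n x n; no matrix inverse is needed
    # Eigenvectors of the smallest (possibly negative) eigenvalues
    # minimize the objective under the orthonormality constraint.
    w, V = np.linalg.eigh(M)
    return V[:, :d]
```

Because the solution is an ordinary (not generalized) eigen-problem, the singularity issue that forces LPP and LDE into a preliminary PCA step does not arise here.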

Comparisons with LPP and LDE

LPP finds a low-dimensional embedding that preserves locality regardless of the class of neighbors. That is, for multi-class points, LPP mistakes repulsions between inter-class neighbors for attractions, which degrades classification performance. As an example, the insular point C (see Fig. 1) and its inter-class neighbors will be pulled even closer together by LPP. The same holds for marginal points, e.g., point H in Fig. 1. To avoid this drawback of LPP, DNE divides

Experimental results

In this section two databases, namely the UMIST database of faces and the MNIST database of handwritten digits, are used to compare the proposed DNE method with state-of-the-art approaches: PCA, LPP, and LDE. All experiments employ the nearest-neighbor classifier.
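The evaluation step shared by all methods can be sketched as follows: project both training and test samples with a learned projection matrix P, then classify each test point by its nearest training neighbor in the subspace. The samples-as-columns layout and the function name are our assumptions, not specified by the paper:

```python
import numpy as np

def nn_classify(P, X_train, y_train, X_test):
    """Project with an (n_features x d) matrix P, then 1-NN classify.

    X_train and X_test hold one sample per column.
    Returns the predicted label for each test column.
    """
    Z_train = P.T @ X_train      # d x N_train embedded training set
    Z_test = P.T @ X_test        # d x N_test embedded test set
    # Squared distances from every test point to every training point.
    D = ((Z_test[:, :, None] - Z_train[:, None, :]) ** 2).sum(axis=0)
    return y_train[np.argmin(D, axis=1)]
```

The same routine serves PCA, LPP, LDE, and DNE; only the projection matrix P differs between methods.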

The UMIST face database, available at http://images.ee.umist.ac.uk/danny/database.html, contains 575 images of 20 individuals. Each image was cropped and downsampled to 56×46 pixels. Fig. 2 shows several samples of one individual in

Acknowledgements

This work is supported in part by the National Science Foundation of China under Grant Nos. 60373020, 60402007, and 60533100, and by MoE R&D Foundation 104075.

References (5)

  • J.B. Tenenbaum et al., A global geometric framework for nonlinear dimensionality reduction, Science (2000)
  • S. Roweis et al., Nonlinear dimensionality reduction by locally linear embedding, Science (2000)
