Article

CESA-MCFormer: An Efficient Transformer Network for Hyperspectral Image Classification by Eliminating Redundant Information

School of Software, Tongji University, Shanghai 201800, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(4), 1187; https://doi.org/10.3390/s24041187
Submission received: 3 December 2023 / Revised: 5 February 2024 / Accepted: 7 February 2024 / Published: 11 February 2024
(This article belongs to the Section Remote Sensors)

Abstract: Hyperspectral image (HSI) classification is a highly challenging task, particularly in fields such as crop yield prediction and agricultural infrastructure detection. These applications often involve complex image types, such as soil, vegetation, water bodies, and urban structures, encompassing a variety of surface features. In HSI, the strong correlation between adjacent bands leads to redundancy in spectral information, while using image patches as the basic unit of classification causes redundancy in spatial information. To extract key information from this massive redundancy more effectively for classification, we propose the CESA-MCFormer model, which builds on the transformer architecture by introducing a Center Enhanced Spatial Attention (CESA) module and Morphological Convolution (MC). The CESA module combines hard coding and soft coding to provide the model with prior spatial information before spatial features are mixed, introducing comprehensive spatial information. MC employs a series of learnable pooling operations that not only extract key details in both the spatial and spectral dimensions but also effectively merge this information. By integrating the CESA module and MC, the CESA-MCFormer model adopts a “Selection–Extraction” feature processing strategy, enabling precise classification with minimal samples and without relying on dimensionality reduction techniques such as PCA. To thoroughly evaluate our method, we conducted extensive experiments on the IP, UP, and Chikusei datasets, comparing it with the latest advanced approaches. The experimental results demonstrate that CESA-MCFormer achieved outstanding performance on all three test datasets, with Kappa coefficients of 96.38%, 98.24%, and 99.53%, respectively.

1. Introduction

With the continuous advancement of spectral imaging technology, hyperspectral data have achieved significant improvements in both spatial and spectral resolution. Compared to multispectral and RGB images, hyperspectral images (HSI) possess narrower bandwidths and a greater number of bands, allowing them to provide more detailed and continuous spectral information [1,2,3]. As a result, HSI have demonstrated tremendous potential in various earth observation fields [4] such as precision agriculture [5,6], urban planning [7,8], environmental management [9,10,11], and target detection [12,13,14,15]. Consequently, research on HSI classification has rapidly progressed.
While traditional HSI classification methods, such as nearest neighbor [16], Bayesian estimation [17], multinomial logistic regression [18,19], and Support Vector Machine (SVM) [20,21,22,23], have their merits in certain scenarios, these methods often have limitations in data representation and fitting capability, struggling to produce satisfactory classification results on more complex datasets. In contrast, in recent years, methods underpinned by deep learning, thanks to their outstanding feature extraction capabilities, have gradually become the focus of research in HSI classification.
Convolutional Neural Networks (CNNs) dominate the field of deep learning and are capable of accumulating in-depth spatial features through layered convolution. As such, CNNs have been extensively applied to and researched in the classification of HSI [24,25]. Notably, Roy and colleagues [26] introduced a model named HybridSN. This model initially employs a 3D-CNN to extract spatial–spectral features from spectral bands that have undergone PCA dimensionality reduction. Subsequently, it uses a 2D-CNN to delve deeper into more abstract spatial feature hierarchies. Compared to a 3D-CNN, this hybrid approach simplifies the model architecture while effectively merging spatial and spectral information. Building on this, subsequent researchers have incorporated one-dimensional convolution based on central pixels to compensate for spectral information that might be lost after PCA reduction. Examples of this approach include the Cubic-CNN model proposed by J. Wang et al. [27] and the JigsawHSI model introduced by Moraga and others [28].
The Vision Transformer (ViT) model [29], which evolved from the natural language processing (NLP) domain, has also increasingly become a focal point in the field of deep learning. The ViT model segments images into fixed-size patches and leverages embedding techniques to obtain a broader receptive field. Furthermore, with the help of multi-head attention mechanisms, it adeptly captures the dependencies between different patches, thereby achieving higher processing efficiency and remarkable image recognition performance. Consequently, numerous studies have been dedicated to exploring the application of this model in HSI classification. For instance, a research team proposed the Spatial–Spectral Transformer (SST) model in [30]. They utilized VGGNet [31], from which several convolutional layers were removed, as a feature extractor to capture spatial characteristics from hyperspectral images. Subsequently, they employed the DenseTransformer to discern relationships between spectral sequences and used a multi-layer perceptron for the final classification task. Qing et al. introduced SATNet in [32], which effectively captures spectral continuity by adding position encoding vectors and learnable embedding vectors. Meanwhile, Hong and colleagues presented the SpectralFormer (SF) model in [33]. This model adopts the Group-wise Spectral Embedding (GSE) module to encode adjacent spectra, ensuring spectral information continuity, and utilizes the Cross-layer Adaptive Fusion (CAF) technique to minimize information loss during hierarchical transmission. X. He and their team introduced the SSFTT network in [34]. This model significantly simplifies the SST structure and incorporates Gaussian-weighted feature tokenization for feature transformation, thus reducing computational complexity while enhancing classification performance. In recent studies, researchers have continued to explore more lightweight and effective methods for feature fusion and extraction based on the transformer architecture. For instance, Xuming Zhang and others proposed the CLMSA and PLMSA modules [35], while Shichao Zhang and colleagues introduced the ELS2T [36].
Due to the high correlation between adjacent bands in HSI, there is a significant amount of redundant information within HSI. To mitigate the impact of this redundancy, the methods mentioned above [26,30,31] commonly preprocess HSI using Principal Component Analysis (PCA). However, when PCA is omitted, their prediction accuracy drops significantly, revealing a deficiency in extracting key spectral information. As a non-learnable dimensionality reduction technique, PCA is irreversible and can cause information loss, such as the loss of spectral continuity [37]. Models reliant on PCA may thus produce suboptimal results. In transfer learning or few-shot image classification tasks, HSI must be fed into the model with a large number of channels to preserve as much original information as possible. This extensive channel input raises the demands on feature extractors, which must efficiently process and extract key information from these numerous channels. Moreover, different datasets might require dimensionality reduction to different extents, making the selection of appropriate dimensions for each dataset a time-consuming operation. Therefore, we propose the CESA-MCFormer, which effectively extracts key information from HSI under conditions of limited samples and numerous channels, achieving higher classification accuracy in downstream tasks without relying on PCA for dimensionality reduction. To achieve this, we incorporate attention mechanisms and mathematical morphology.
Attention mechanisms have been extensively applied in various domains of machine learning and artificial intelligence. Hu et al. [38] introduced a “channel attention module” in their SE network structure to capture inter-channel dependencies. Woo et al. [39] proposed CBAM, which combines channel and spatial attention, adaptively learning weights in both dimensions to enhance the network’s expressive power and robustness. Meanwhile, Zhong et al. [40] presented a deep convolutional neural network model that integrates a “global attention mechanism” and a “local attention mechanism” in sequence to capture both global and local contextual information. Inspired by these advancements, researchers began incorporating spatial attention into HSI classification. Several studies [41,42,43,44] combine spectral and spatial attention mechanisms, enabling adaptive selection of key features within HSI. However, in HSI classification, a common practice is to segment the HSI into small patches and classify each patch according to its center pixel, which makes the information provided by the center pixel crucial; yet these methods do not sufficiently consider its importance. Recent studies have recognized this, such as those cited in [45,46], which employed the Central Attention Module (CAM). This module determines feature weights by analyzing the correlation of each pixel with the center pixel. However, considering the phenomena of “same material, different spectra” and “different materials, same spectra” in HSI, relying solely on similarity to the center pixel for weight allocation might overlook important spatial information provided by other pixels. Therefore, effectively weighting the center pixel while taking global spatial information into account remains a challenge.
Mathematical Morphology (MM) primarily focuses on studying the characteristics of object morphology, processing and describing object shapes and structures using mathematical tools such as set theory, topology, and functional analysis [47]. In previous HSI classification tasks, researchers often utilized attribute profiles (APs) and extended morphological profiles (EPs) to extract spatial features more effectively [48,49,50,51]. However, this approach typically requires many structuring elements (SEs), which are non-trainable and thus unable to effectively capture dynamic feature changes. To overcome these limitations, Roy et al. proposed the Morphological Transformer (morphFormer) in [52], combining trainable MM operations with transformers, thereby enhancing the interaction between HSI features and the CLS token through learnable pooling operations. However, this method involves simultaneous dilation and erosion of spatio-spectral features, where each SE introduces a significant number of parameters. This not only risks losing fine-grained feature information during feature selection but also leads to model overfitting and reduced robustness, especially in scenarios with limited data. Hence, there is substantial room for improvement in the application of MM in HSI classification.
The core contributions of this study are as follows:
  • We designed a flexible and efficient Center Enhanced Spatial Attention (CESA) module specifically for hyperspectral image feature extraction. This module can be easily integrated into various models, enhancing focus on areas around the center pixel while considering global spatial information;
  • We introduced Morphological Convolution (MC) to replace the traditional linear layer feature extraction mechanism in the transformer encoder. MC selects fine-grained features through a strategy of separating and then integrating spatial and spectral features, significantly reducing the number of parameters and enhancing the model’s robustness;
  • Utilizing these modules, we developed the CESA-MCFormer feature extractor, capable of effectively extracting key features from a multitude of channels, supporting various downstream classification tasks. We conducted in-depth ablation experiments to provide practical and theoretical insights for researchers exploring and applying similar modules.
The rest of the paper is organized as follows: In Section 2, we provide an overview of the CESA-MCFormer’s overall framework and detail our proposed CESA and MC modules. Section 3 describes the experimental datasets, results under various parameter settings, and an analysis of the model parameters. Finally, Section 4 concludes with our research findings.

2. Methodology

The architecture of CESA-MCFormer is illustrated in Figure 1. For an HSI patch of size c × h × w , the spectral continuity information is initially extracted through a 3D–2D Conv Block [52], and the dimensionality is transformed to 64. Subsequently, the HSI feature of size 64 × h × w is fed into the Emb Block for mixing spatial and spectral features, generating a 64 × 64 feature matrix. Then, a learnable CLS token, initialized to zero, is introduced for feature aggregation, along with a learnable matrix of size 65 × 64 , also initialized to zero, for spatial–spectral position encoding. After combining the feature map with the position encoding, it is passed through multiple iterations of the Transformer Encoder for deep feature extraction, and the extracted features are then input into the Classifier Head for downstream classification tasks. Next, we will provide a detailed introduction to the Emb Block and Transformer Encoder.
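To make the data flow above concrete, the following minimal PyTorch sketch wires the stages together. The ConvBlock, EmbBlock, and encoder-layer bodies are placeholders for the 3D–2D Conv Block, Emb Block, and Transformer Encoder described in this section, and all names are ours rather than the authors' released code.

```python
import torch
import torch.nn as nn

# Minimal wiring sketch of the CESA-MCFormer pipeline described above.
# conv_block, emb_block and encoder_layers are stand-ins for the 3D-2D Conv
# Block, Emb Block and Transformer Encoder; each encoder layer is assumed to
# take (cls, hsi_tokens) and return their updated versions.
class CESAMCFormerSketch(nn.Module):
    def __init__(self, conv_block, emb_block, encoder_layers, num_classes, dim=64):
        super().__init__()
        self.conv_block = conv_block                    # (B, c, h, w) -> (B, 64, h, w)
        self.emb_block = emb_block                      # (B, 64, h, w) -> (B, 64, 64) tokens
        self.encoder_layers = nn.ModuleList(encoder_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))   # zero-initialized CLS token
        self.pos_embed = nn.Parameter(torch.zeros(1, 65, dim))  # 65 x 64 position encoding
        self.head = nn.Linear(dim, num_classes)                 # Classifier Head

    def forward(self, x):                               # x: (B, c, h, w) HSI patch
        tokens = self.emb_block(self.conv_block(x))     # (B, 64, 64)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        seq = torch.cat([cls, tokens], dim=1) + self.pos_embed   # (B, 65, 64)
        cls, hsi = seq[:, :1], seq[:, 1:]
        for layer in self.encoder_layers:               # deep feature extraction
            cls, hsi = layer(cls, hsi)
        return self.head(cls.squeeze(1))                # downstream classification
```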

2.1. Emb Block

Given that the HSI patches input into the model are generally small (with a spatial size of 11 × 11 adopted in this study), we introduce the Emb Block to directly mix and encode global spatial features. This approach equips the model with a global receptive field before deep feature extraction, as illustrated in Figure 2. Since the information provided by the central pixel of the HSI patch is crucial, CESA is first used to weight information at different positions, aiding the model in actively eliminating redundant information. Then, we introduce a learnable weight matrix $W_a \in \mathbb{R}^{64 \times 64}$ initialized using Xavier normal initialization, composed of 64 scoring vectors. By calculating the dot product between HSI features and each scoring vector, we score the features of each pixel. The scores are then transformed into mixing weights using the softmax function. Another learnable weight matrix $W_b \in \mathbb{R}^{64 \times 64}$, initialized in the same manner, is introduced to remap the HSI features of each pixel through matrix multiplication. Finally, by multiplying the two matrices, we mix the spatial features based on the mixing weights to obtain the final feature encoding matrix.
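The following sketch shows one plausible implementation of this mixing step; the softmax axis (over the h × w pixel positions) and the tensor layout are our reading of the description above, not the authors' code.

```python
import torch
import torch.nn as nn

# Sketch of the Emb Block mixing described above: W_a scores every pixel with
# 64 scoring vectors, softmax over the pixel axis yields mixing weights, W_b
# remaps per-pixel features, and the two results are combined into 64 tokens.
class EmbBlockSketch(nn.Module):
    def __init__(self, dim=64, cesa=None):
        super().__init__()
        self.cesa = cesa if cesa is not None else nn.Identity()  # CESA weighting (Section 2.1.1/2.1.2)
        self.W_a = nn.Parameter(torch.empty(dim, dim))           # 64 scoring vectors
        self.W_b = nn.Parameter(torch.empty(dim, dim))           # feature remapping matrix
        nn.init.xavier_normal_(self.W_a)
        nn.init.xavier_normal_(self.W_b)

    def forward(self, f):                                 # f: (B, 64, h, w)
        x = self.cesa(f).flatten(2)                       # (B, 64, h*w) pixel features
        scores = torch.einsum('bcn,cd->bdn', x, self.W_a) # dot product with each scoring vector
        weights = scores.softmax(dim=-1)                  # mixing weights over the h*w pixels
        remapped = torch.einsum('bcn,cd->bdn', x, self.W_b)      # remapped per-pixel features
        return torch.einsum('bdn,ben->bde', weights, remapped)   # (B, 64, 64) token matrix
```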
The overall architecture of CESA is illustrated in Figure 3. To comprehensively consider both global information and the importance of the central pixel, we meticulously designed two modules: Soft CESA and Hard CESA. Hard CESA, a non-learnable module, statically assigns higher weights to pixels closer to the center. Soft CESA, conversely, is a learnable module that uses global information as a reference, enabling the model to adaptively select more important spatial information. This design aims to effectively integrate both global and local information, enhancing the overall performance of the model.
Specifically, CESA takes an HSI or its feature map $F_{in}$ as input. Hard CESA and Soft CESA compute the hard probabilistic diversity map $M_h$ and the soft probabilistic diversity map $M_s$, respectively. $M_h$ and $M_s$ are added together and then expanded along the channel dimension to match the size of $F_{in}$ before being element-wise multiplied with $F_{in}$. Finally, an optional simple convolutional module is used to adjust the dimensions of the output feature $F_{out}$. The implementation details of both Hard CESA and Soft CESA are presented in the following sections.

2.1.1. Hard CESA

The output $M_h$ of Hard CESA depends only on the size of $F_{in}$ and the hyperparameter $K$. For a pixel $q$ in $F_{in}$, its position coordinates are defined as $(x, y)$, and its spectral features are denoted by $p = [p_1, p_2, \ldots, p_c] \in \mathbb{R}^{1 \times c}$. We define $q_c$ as the center pixel of the patch, and its coordinates in the image are defined as $(x_c, y_c)$. The distance $d$ between $q_c$ and $q$ is defined by the following Equation (1):
$$d = \max(|x - x_c|, |y - y_c|) \qquad (1)$$
The weight $q_w$ for pixel $q$ is defined as follows in Equation (2):
$$q_w = K - \frac{d}{h} \times (2K - 1) \qquad (2)$$
where $h$ is the spatial side length of $F_{in}$. The hyperparameter $K$ ($K \in [0.5, 1)$) controls the importance gap between the center and edge pixels: as $K$ becomes larger, the weight of the center pixels increases and the weight of the edge pixels decreases. When $K = 0.5$, all pixels in the patch have equal weights, and Hard CESA therefore has no effect.
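A small NumPy sketch of the hard weight map under our reconstruction of Equation (2); the exact normalization in the published formula may differ slightly from this reading.

```python
import numpy as np

# Hard CESA weight map: each pixel's weight depends only on its Chebyshev
# distance d to the patch center, the patch side length h, and K.
def hard_cesa_map(h: int, w: int, K: float = 0.8) -> np.ndarray:
    yc, xc = (h - 1) / 2, (w - 1) / 2                    # patch center
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.maximum(np.abs(xs - xc), np.abs(ys - yc))     # Equation (1)
    return K - d / h * (2 * K - 1)                       # Equation (2), our reading

M_h = hard_cesa_map(11, 11, K=0.8)
print(M_h[5, 5], M_h[0, 0])   # center weight vs. corner weight; K = 0.5 gives a flat map
```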

2.1.2. Soft CESA

As shown in Figure 4, Soft CESA processes $F_{in}$ into three feature maps, $F_1$, $F_2$, and $F_3$. $F_1$ and $F_2$ are used to represent the overall features of $F_{in}$, while $F_3$ is used to introduce the feature of the center pixel.
Specifically, for a pixel $q$ in $F_{in}$, its position coordinates are defined as $(x, y)$, and its spectral features are denoted by $p = [p_1, p_2, \ldots, p_c] \in \mathbb{R}^{1 \times c}$. The value of $F_1$ at position $(x, y)$, denoted as $m_1(x, y)$, can be calculated as follows:
$$m_1(x, y) = \max(p_1, p_2, \ldots, p_c)$$
The value of $F_2$ at position $(x, y)$, denoted as $m_2(x, y)$, can be calculated as follows:
$$m_2(x, y) = \frac{1}{c} \sum_{i=1}^{c} p_i$$
To effectively extract the center overall feature in Soft CESA, we introduce a central weight vector $r = [r_1, r_2, \ldots, r_c] \in \mathbb{R}^c$ to weight $F_{in}$. Therefore, the value of $F_3$ at position $(x, y)$ can be represented as follows:
$$m_3(x, y) = \frac{1}{c} \sum_{i=1}^{c} p_i r_i$$
We extract the spectral features of the central pixel and its eight neighboring pixels, flatten them into a one-dimensional vector, and use this as the central feature vector $v_c \in \mathbb{R}^{9c}$. We introduce a matrix $A_b \in \mathbb{R}^{c \times 9c}$ composed of $c$ learnable spectral feature encoding vectors and a vector $l_b \in \mathbb{R}^c$ comprised of $c$ bias terms to weight and sum the spectral bands at each position. The corresponding $r$ is calculated as follows:
$$r = \mathrm{softmax}(A_b v_c + l_b)$$
Finally, we concatenate $F_1$, $F_2$, and $F_3$ along the channel dimension to form the final feature matrix $F$. After passing through a convolutional layer with a kernel size of 3 × 3, a sigmoid activation function is applied to produce the final soft probabilistic diversity map $M_s$:
$$M_s = \mathrm{sigmoid}(\mathrm{conv}(F))$$
It can be observed that the entire CESA module uses only $c \times (9c + 1) + (9 \times 3 + 1)$ learnable parameters, and the parameter $c$ can be flexibly adjusted through the preceding Conv Block. This means that the computational cost of CESA is very low, allowing it to be easily embedded into other models without significantly increasing the complexity of the original model.
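A minimal PyTorch sketch of Soft CESA and of the Hard/Soft combination, assuming the center pixel is not on the patch border and reading the 3 × 3 convolution as taking the three maps $F_1$, $F_2$, $F_3$ as its input channels; the module and function names are ours.

```python
import torch
import torch.nn as nn

# Soft CESA sketch: F1 = channel-wise max, F2 = channel-wise mean, F3 = mean
# weighted by the center-conditioned vector r; a 3x3 conv + sigmoid yields M_s.
class SoftCESASketch(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.center_proj = nn.Linear(9 * c, c)              # A_b and bias l_b
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, f):                                   # f: (B, c, h, w)
        b, c, h, w = f.shape
        f1 = f.max(dim=1, keepdim=True).values              # (B, 1, h, w)
        f2 = f.mean(dim=1, keepdim=True)
        yc, xc = h // 2, w // 2                              # center 3x3 neighborhood
        v_c = f[:, :, yc - 1:yc + 2, xc - 1:xc + 2].reshape(b, -1)   # (B, 9c)
        r = self.center_proj(v_c).softmax(dim=-1)            # central weight vector
        f3 = (f * r.view(b, c, 1, 1)).mean(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([f1, f2, f3], dim=1))).squeeze(1)

def cesa(f, m_h, soft_cesa):
    """Add the hard map M_h and soft map M_s, expand over channels, reweight F_in."""
    m = torch.as_tensor(m_h, dtype=f.dtype, device=f.device) + soft_cesa(f)
    return f * m.unsqueeze(1)                                # an optional conv may follow
```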

2.2. Transformer Encoder

The primary function of the Transformer Encoder module is to extract deep spatial–spectral features through multiple iterations. As shown in Figure 5, in each iteration, HSI features are first processed through Spectral Morph and Spatial Morph for feature selection and extraction, followed by an interaction with the CLS token through Cross Attention, aggregating the spatial–spectral features into the CLS token.
To capture multi-dimensional features, we employ a multi-head attention mechanism in Cross Attention [29]. The input CLS token and HSI features are uniformly divided into eight parts along the spectral feature dimension, each with a feature length of eight. For each segmented feature, the CLS token serves as the query $q \in \mathbb{R}^{1 \times 8}$, and the matrix formed by concatenating the CLS token and HSI features is used as the key and value $k, v \in \mathbb{R}^{65 \times 8}$. The calculation method for Cross Attention is as follows:
$$X_{attn} = \mathrm{dropout}\left( \mathrm{softmax}\left( \frac{(q \times w_q) \times (k \times w_k)^{T}}{\sqrt{l}} \right) \right) \times (v \times w_v)$$
In this process, $w_q$, $w_k$, and $w_v \in \mathbb{R}^{8 \times 8}$ are all learnable parameters, while $l$ is the feature length, set to eight in this study. After obtaining all eight groups of $X_{attn}$, they are reassembled along the spectral feature dimension. Then, they are processed through a linear layer followed by a dropout layer, resulting in the updated $CLS\_token_n^{*} \in \mathbb{R}^{1 \times 64}$. This is then added to the input $CLS\_token_n$ to produce the final $CLS\_token_{n+1}$.
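A compact PyTorch sketch of this cross-attention update; the $\sqrt{l}$ scaling follows standard scaled dot-product attention, and the dropout rate is an assumed placeholder.

```python
import torch
import torch.nn as nn

# Cross Attention sketch: the CLS token is the query, the concatenation of CLS
# token and HSI tokens is key/value, split into 8 heads of width 8, followed by
# a linear layer, dropout, and a residual connection onto the CLS token.
class CrossAttentionSketch(nn.Module):
    def __init__(self, dim=64, heads=8, drop=0.1):
        super().__init__()
        self.heads, self.hd = heads, dim // heads
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.attn_drop = nn.Dropout(drop)
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.Dropout(drop))

    def forward(self, cls, hsi):                    # cls: (B, 1, 64), hsi: (B, 64, 64)
        b = cls.size(0)
        kv = torch.cat([cls, hsi], dim=1)           # (B, 65, 64)
        q = self.wq(cls).view(b, 1, self.heads, self.hd).transpose(1, 2)
        k = self.wk(kv).view(b, -1, self.heads, self.hd).transpose(1, 2)
        v = self.wv(kv).view(b, -1, self.heads, self.hd).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.hd ** 0.5    # scaled dot product
        x = self.attn_drop(attn.softmax(dim=-1)) @ v         # (B, heads, 1, hd)
        x = x.transpose(1, 2).reshape(b, 1, -1)              # reassemble the 8 heads
        return cls + self.proj(x)                            # CLS_token_{n+1}
```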
Inspired by morphFormer [52], we have also incorporated a Spectral Morph Block and a Spatial Morph Block into our model. The overall architecture of these two modules is identical, as shown in Figure 6. Both modules process HSI features through erosion and dilation modules. After processing, the Spectral Morph block utilizes a 1 × 1 convolution layer (corresponding to the blue Conv block in Figure 6) to extract deeper channel information, while the Spatial Morph block uses a 3 × 3 convolution layer to aggregate information from neighboring spatial positions. The Morphological Convolution (MC) we propose is represented by the erosion and dilation modules in Figure 6. Next, we will elaborate on how MC is implemented.
MC’s primary function is to eliminate redundant data during the feature extraction process, ensuring that as the depth of the encoder increases, the HSI feature retains only pivotal information. To accomplish this, we apply multiple learnable Structuring Elements (SEs) to the HSI feature for morphological convolution. Through dilation, we select maximum values from adjacent features, emphasizing boundary details. In contrast, erosion allows us to identify the minimum values, effectively attenuating minor details. Additionally, directly employing SEs might inflate the parameter count, posing overfitting risks. To mitigate this, we separate the spectral and spatial SEs, significantly reducing parameters and thereby boosting the model’s resilience.
Specifically, when using SEs with a spatial size of $k \times k$, to maintain the consistency of the input and output dimensions of the module, we first reshape the spatial dimension of the HSI feature into two dimensions and then pad its boundaries, resulting in the feature matrix $H \in \mathbb{R}^{(8+(k-1)) \times (8+(k-1)) \times 64}$. Next, by adopting a sliding window with a stride of 1, we segment $H$ into 64 sub-blocks of size $k \times k \times 64$, referred to as $X_{patch}$. Subsequently, we further decompose $X_{patch}$ in both the spatial and spectral directions. Spatially, $X_{patch}$ is divided into $k \times k$ vectors of dimension 64, denoted as $\{X_a^1, X_a^2, \ldots, X_a^{k \times k}\}$. Spectrally, $X_{patch}$ is parsed into 64 vectors of dimension $k \times k$, represented as $\{X_b^1, X_b^2, \ldots, X_b^{64}\}$. We then introduce multiple groups of SEs, where each group consists of a spatial vector of length $k \times k$ and a spectral vector of length 64. For simplicity, we name one group of SEs $W$, with its spectral vector labeled $W_a$ and the spatial vector $W_b$. For any given $X_{patch}$ and $W$, the dilation operation of the morphological convolution is shown in Figure 7.
First, we add each segmented feature vector to the corresponding $W_a$ and $W_b$ at their respective positions, then take the maximum value to obtain $h_{dil} \in \mathbb{R}^1$:
$$h_{dil}(X_a^i, W_a) = \max_{j \in \{1, 2, \ldots, 64\}} \left( X_a^i(j) + W_a(j) \right)$$
$$h_{dil}(X_b^j, W_b) = \max_{i \in \{1, 2, \ldots, k \times k\}} \left( X_b^j(i) + W_b(i) \right)$$
Then, we introduce two learnable vectors $h_a \in \mathbb{R}^{k \times k}$ and $h_b \in \mathbb{R}^{64}$, along with two learnable bias terms $\beta_a$ and $\beta_b$. We concatenate the results from the previous step into two one-dimensional vectors, which are then dot-multiplied with $h_a$ and $h_b$, respectively, and added to $\beta_a$ and $\beta_b$, resulting in $g_{dil} \in \mathbb{R}^1$:
$$g_{dil}(X_a, W_a) = \mathrm{concat}\left( h_{dil}(X_a^1, W_a), h_{dil}(X_a^2, W_a), \ldots, h_{dil}(X_a^{k \times k}, W_a) \right) \times h_a + \beta_a$$
$$g_{dil}(X_b, W_b) = \mathrm{concat}\left( h_{dil}(X_b^1, W_b), h_{dil}(X_b^2, W_b), \ldots, h_{dil}(X_b^{64}, W_b) \right) \times h_b + \beta_b$$
Finally, we concatenate the two obtained feature values to form the convolution result of that $X_{patch}$ under the specified $W_a$ and $W_b$, referred to as $f_{dil} \in \mathbb{R}^2$:
$$f_{dil}(X_{patch}, W) = \mathrm{concat}\left( g_{dil}(X_a, W_a), g_{dil}(X_b, W_b) \right)$$
In actual experiments, 16 groups of $W$ were used in the dilation block. Therefore, after computing all the $W$ with each $X_{patch}$, the final HSI feature size obtained through the dilation module is $32 \times 64$. Similarly, for any $X_{patch}$ and $W$, the following formulas describe the erosion operation $f_{ero}(X_{patch}, W)$ of the morphological convolution:
$$f_{ero}(X_{patch}, W) = \mathrm{concat}\left( g_{ero}(X_a, W_a), g_{ero}(X_b, W_b) \right)$$
$$g_{ero}(X_a, W_a) = \mathrm{concat}\left( h_{ero}(X_a^1, W_a), h_{ero}(X_a^2, W_a), \ldots, h_{ero}(X_a^{k \times k}, W_a) \right) \times h_a + \beta_a$$
$$g_{ero}(X_b, W_b) = \mathrm{concat}\left( h_{ero}(X_b^1, W_b), h_{ero}(X_b^2, W_b), \ldots, h_{ero}(X_b^{64}, W_b) \right) \times h_b + \beta_b$$
$$h_{ero}(X_a^i, W_a) = \min_{j \in \{1, 2, \ldots, 64\}} \left( X_a^i(j) - W_a(j) \right)$$
$$h_{ero}(X_b^j, W_b) = \min_{i \in \{1, 2, \ldots, k \times k\}} \left( X_b^j(i) - W_b(i) \right)$$
Overall, we process the 64 $X_{patch}$ sub-blocks using 32 sets of SEs. Specifically, 16 sets are responsible for the dilation operation, while the other 16 sets handle the erosion operation. This results in two $32 \times 64$ feature matrices. After spatial–spectral separation, the required parameter count for the SEs is reduced from $2 \times 32 \times k \times k \times 64$ to $2 \times 2 \times 16 \times (k \times k + 64 + 1)$. Additionally, MC operates similarly to traditional convolutional layers, allowing it to directly replace convolutional layers in models. This attribute endows MC with significant versatility and adaptability.
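The dilation branch of MC can be sketched as follows in PyTorch; the tensor layouts, initialization, and unfold-based patch extraction are our reading of the description above (erosion is analogous, with subtraction and a minimum), not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Morphological Convolution, dilation branch: 64 tokens are reshaped to an
# 8x8 grid, unfolded into k x k windows (X_patch), and each window interacts
# with 16 separable SE groups (spectral vector W_a of length 64, spatial
# vector W_b of length k*k) via "add then max", followed by the learnable
# weighted sums h_a, h_b with biases beta_a, beta_b.
class MorphDilationSketch(nn.Module):
    def __init__(self, dim=64, k=3, groups=16):
        super().__init__()
        assert k % 2 == 1, "odd kernel keeps 64 output patches after padding"
        self.k, self.g, self.dim = k, groups, dim
        self.W_a = nn.Parameter(torch.zeros(groups, dim))      # spectral SEs
        self.W_b = nn.Parameter(torch.zeros(groups, k * k))    # spatial SEs
        self.h_a = nn.Parameter(torch.full((groups, k * k), 1.0 / (k * k)))
        self.h_b = nn.Parameter(torch.full((groups, dim), 1.0 / dim))
        self.beta_a = nn.Parameter(torch.zeros(groups))
        self.beta_b = nn.Parameter(torch.zeros(groups))

    def forward(self, x):                                      # x: (B, 64 tokens, 64 channels)
        b = x.size(0)
        grid = x.transpose(1, 2).reshape(b, self.dim, 8, 8)    # (B, C, 8, 8)
        patches = F.unfold(grid, self.k, padding=self.k // 2)  # (B, C*k*k, 64)
        patches = patches.view(b, self.dim, self.k * self.k, -1)
        pa = patches.permute(0, 3, 2, 1).unsqueeze(1)          # (B, 1, 64, k*k, C)
        pb = patches.permute(0, 3, 1, 2).unsqueeze(1)          # (B, 1, 64, C, k*k)
        hd_a = (pa + self.W_a.view(1, self.g, 1, 1, -1)).amax(-1)   # max over channels
        hd_b = (pb + self.W_b.view(1, self.g, 1, 1, -1)).amax(-1)   # max over spatial positions
        g_a = (hd_a * self.h_a.view(1, self.g, 1, -1)).sum(-1) + self.beta_a.view(1, -1, 1)
        g_b = (hd_b * self.h_b.view(1, self.g, 1, -1)).sum(-1) + self.beta_b.view(1, -1, 1)
        return torch.cat([g_a, g_b], dim=1)                    # (B, 32, 64) dilated features
```

With 16 dilation groups and 16 erosion groups, each sample yields the two 32 × 64 feature matrices mentioned above.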

3. Results and Discussion

3.1. Dataset Description

To validate the effectiveness of the CESA-MCFormer feature extractor, we tested its performance in two types of classification tasks. Specifically, for the semantic segmentation task, we used the Indian Pines dataset (IP), Pavia University dataset (UP), and Chikusei dataset; while, for the few-shot learning (FSL) task, the datasets included the IP, UP, Chikusei dataset, Botswana dataset, KSC dataset, and Salinas Valley dataset. The detailed information about these datasets is presented in Table 1.

3.1.1. Semantic Segmentation Task

In the semantic segmentation task, we randomly selected 1% of the pixels from the UP and Chikusei datasets as the training set, with the remaining pixels as the test set. Given that the Oats class in the IP dataset has only 20 pixels, we randomly extracted 5% of the pixels from the IP dataset as the training set and the rest as the test set. In constructing the training set, we did not employ any augmentation methods nor use any dimensionality reduction techniques on the datasets. The specific number of samples for each category in each dataset is shown in Table 2, Table 3 and Table 4.
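A small sketch of how such a split can be drawn; we read the sampling as per-class random selection of the stated fraction (5% for IP, 1% for UP and Chikusei), which is one plausible implementation of the protocol.

```python
import numpy as np

# Per-class random train/test split: sample a fraction of labeled pixels per
# class for training and keep the rest for testing; label 0 is background.
def random_split(labels: np.ndarray, ratio: float, seed: int = 0):
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels[labels > 0]):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(ratio * idx.size)))   # at least one training pixel
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:])
    return np.asarray(train_idx), np.asarray(test_idx)

# e.g. train_ip, test_ip = random_split(ip_ground_truth, ratio=0.05)
```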

3.1.2. Few-Shot Learning Task

Given the extensive category requirements for few-shot learning (FSL) training, our study utilized six datasets, with Chikusei, Botswana, KSC, and Salinas Valley used for model pretraining and the IP and UP datasets used for testing. To ensure consistency in input data sizes across all datasets in the FSL experiments, we standardized the spectral dimensionality of each dataset to 100 bands using BS-Nets [53].
For the datasets involved in pretraining, we selected classes with over 250 samples, randomly allocating 50 samples to the support set and 200 to the query set for each class. Specifically, Chikusei contributed 17 classes, Botswana 8, KSC 9, and Salinas Valley 16, totaling 50 distinct training classes. After pretraining, we randomly selected 10 samples from each class in the IP and UP datasets for model fine-tuning and testing.
In our study, during pretraining for the IP target dataset, each task randomly selected 16 of the 50 available classes, following a 16-way, 10-shot protocol. For the UP target dataset, each training iteration randomly chose nine classes, also using a 10-shot approach. For each class, the support set consisted of 10 samples randomly selected from the 50, while the query set used all 200 samples. Moreover, no form of data augmentation was used to expand the datasets, neither in the pretraining nor in the fine-tuning stage.

3.2. Training Details and Evaluation Indicators

3.2.1. Configuration

All experiments were designed and conducted using PyTorch on an Ubuntu 18.04 x64 machine with a 13th Gen Intel(R) Core(TM) i5-13600KF CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 4080 16 GB GPU.

3.2.2. Training Details

In our semantic segmentation task, we classify directly by connecting the CLS token to a fully connected layer, as depicted in the Classifier Head block in Figure 1. The Adam optimizer is used with a learning rate of 0.001, and cross-entropy loss serves as the loss criterion. For models such as HybridSN [26], ViT [29], SF [33], and SSFTT [34], we maintain a batch size of 64. For morphFormer [52] and our developed CESA-MCFormer, the batch size is set at 32. Table 5 displays the floating point operations (FLOPs) and the number of parameters for various models.
For the FSL task, we incorporate two sets of trainable weights that sum the HSI features weighted along the spatial and spectral directions, creating two feature vectors of length 64 each. These are concatenated with the CLS token, forming a final vector of length 192. We average the features of the 10 support-set samples from each class to represent class prototypes, and classification is based on the distances between query-set features and these prototypes. For the convolutional network (HybridSN), we alter the final linear layer’s output to a 192-length feature vector for uniformity in feature output length across models. The SGD optimizer is employed, with the learning rate set at 0.00001 and weight decay at 0.0005.
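The prototype-based decision rule described above can be sketched as follows; `extract_features` stands in for the CESA-MCFormer feature extractor (or the modified HybridSN), and the Euclidean distance is our assumption for the distance measure.

```python
import torch

# Prototype-based few-shot classification: class prototypes are the mean of
# the support-set feature vectors (length 192) per class, and each query
# sample is assigned to the nearest prototype.
def prototype_classify(extract_features, support_x, support_y, query_x, n_classes):
    sup = extract_features(support_x)                    # (N_support, 192)
    qry = extract_features(query_x)                      # (N_query, 192)
    protos = torch.stack([sup[support_y == c].mean(0)    # average of 10 samples per class
                          for c in range(n_classes)])    # (n_classes, 192)
    dists = torch.cdist(qry, protos)                     # pairwise distances to prototypes
    return dists.argmin(dim=1)                           # predicted class indices
```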

3.2.3. Evaluation Indicators

We used four quantitative evaluation metrics, namely overall accuracy (OA), average accuracy (AA), the kappa coefficient (κ), and class-specific accuracy, to quantitatively analyze the effectiveness of CESA-MCFormer. The higher the values of these metrics, the better the classification performance.
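For reference, the three summary metrics can be computed from a confusion matrix as in the sketch below.

```python
import numpy as np

# OA, AA, and Cohen's kappa from a confusion matrix C, where C[i, j] counts
# test pixels of true class i predicted as class j.
def oa_aa_kappa(C: np.ndarray):
    total = C.sum()
    oa = np.trace(C) / total                                  # overall accuracy
    per_class = np.diag(C) / C.sum(axis=1)                    # class-specific accuracy
    aa = per_class.mean()                                     # average accuracy
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa, per_class
```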

3.3. Semantic Segmentation Task Experimental Results

3.3.1. Classification Results

To verify the advanced nature of our CESA-MCFormer model, we conducted comparative experiments with several recently proposed models, including HybridSN [26], ViT [29], SF [33], and SSFTT [34], which originally required PCA for dimensionality reduction in their respective papers. Conversely, morphFormer [52] does not require such reduction. In our experiments, we used a patch size of 11 × 11 as the input for all models, with K set to 0.8 in Hard CESA.
Table 6, Table 7 and Table 8 display the classification results of the various models without PCA dimensionality reduction, while Figure 8 and Figure 9 present the visualization results on the IP and UP datasets. The experimental data demonstrate that CESA-MCFormer exhibits superior performance across all datasets, highlighting its exceptional feature extraction capability in the presence of abundant redundant information. Moreover, as observed from Figure 8 and Figure 9, the combination of the Emb Block and MC to eliminate redundancy notably enhances the model’s accuracy in classifying complex pixels, especially in edge areas and categories with limited samples. This further underscores the outstanding performance of CESA-MCFormer.
Table 9 and Table 10 present the classification results of HybridSN, SpectralFormer, and SSFTT on the IP and UP datasets after applying PCA dimensionality reduction. Specifically, the dimensionality of the IP dataset was reduced to 30, while that of the UP dataset was reduced to 15. The results indicate that the performance of HybridSN, SpectralFormer, and SSFTT improved significantly after PCA reduction. However, their accuracy still falls short of our CESA-MCFormer model.

3.3.2. Ablation Experiment

To better understand the roles of CESA and MC within the model, we conducted several ablation studies on the IP dataset. In these experiments, we continued to adopt the hyperparameter settings from the previous section. Given the similar overall architecture of CESA-MCFormer and morphFormer, and the outstanding performance of morphFormer when compared to other models, we chose morphFormer as our baseline and built upon it by adding modules.
Table 11 demonstrates the influence of MC and CESA on the final classification performance of the model. It is evident from the table that both modules have significantly enhanced the OA. Furthermore, when both modules are used in conjunction, there is an additional improvement in accuracy. However, in contrast to the pronounced improvement in AA brought about by CESA, the contribution of MC is relatively limited. This can be primarily attributed to the lower classification accuracy for classes with fewer samples. In scenarios with limited sample sizes, prior information becomes particularly crucial. Without the spatial prior information provided by CESA, relying solely on MC to process hyperspectral features that encapsulate comprehensive spatial information proves to be more challenging.
To further validate the superiority of CESA, we replaced it with the traditional Spatial Attention block (SA) and CAM in the CESA-MCFormer model and conducted comparative experiments. The results, presented in Table 12, demonstrate that CESA achieved the highest classification accuracy. This is attributed to CESA’s combination of Hard and Soft components, which not only incorporate prior information but also ensure the learnability of the entire module. Thus, CESA effectively amalgamates the advantages of SA and CAM, leading to improved performance.

3.4. Few-Shot Learning Task Experimental Results

To assess the generality and effectiveness of CESA-MCFormer with extremely limited samples, we conducted FSL experiments on the IP and UP datasets, using only 10 samples per class. The experimental results, as shown in Table 13 and Table 14, indicate that CESA-MCFormer achieved optimal performance on both datasets.

4. Conclusions

This paper presents the CESA-MCFormer feature extractor, which boosts the model’s feature extraction capabilities with a “selection–extraction” strategy, enabling effective image feature extraction without reliance on PCA. CESA enables the model to incorporate spatial prior knowledge guided by an attention mechanism while maintaining the learnability of the module. The MC module introduces learnable pooling operations that effectively filter key information during deep feature extraction. Additionally, CESA-MCFormer adapts to various classification tasks by modifying the classifier head, and both CESA and MC modules can be flexibly integrated into other models to improve their feature extraction performance. Comparisons with other models in semantic segmentation and FSL tasks confirm the versatility and effectiveness of CESA-MCFormer, and ablation studies of CESA and MC attest to the efficacy of these two components.
The CESA-MCFormer has demonstrated exceptional versatility in surface object classification tasks. In future research, we intend to further explore the model’s application to subsurface exploration tasks (such as soil composition analysis) and are committed to further optimizing its performance.

Author Contributions

Methodology, S.L.; Validation, H.Z.; Writing—original draft, S.L.; Supervision, C.Y. and H.Z.; Funding acquisition, C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Tongji University grant number 09-H30G02-9001-20/22.

Institutional Review Board Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roy, S.K.; Kar, P.; Hong, D.; Wu, X.; Plaza, A.; Chanussot, J. Revisiting deep hyperspectral feature extraction networks via gradient centralized convolution. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5516619. [Google Scholar] [CrossRef]
  2. Roy, S.K.; Mondal, R.; Paoletti, M.E.; Haut, J.M.; Plaza, A. Morphological convolutional neural networks for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2021, 14, 8689–8702. [Google Scholar] [CrossRef]
  3. Ahmad, M.; Shabbir, S.; Roy, S.K.; Hong, D.; Wu, X.; Yao, J.; Khan, A.M.; Mazzara, M.; Distefano, S.; Chanussot, J. Hyperspectral image classification-traditional to deep models: A survey for future prospects. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 968–999. [Google Scholar] [CrossRef]
  4. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef]
  5. Li, K.; Huang, J.; Liu, X. Mapping wheat plant height using a crop surface model from unmanned aerial vehicle imagery and 3D feature points. Comput. Electron. Agric. 2019, 164, 104881. [Google Scholar] [CrossRef]
  6. Atherton, D.; Choudhary, R.; Watson, D. Hyperspectral Remote Sensing for Advanced Detection of Early Blight (Alternaria solani) Disease in Potato (Solanum tuberosum) Plants Prior to Visual Disease Symptoms. In Proceedings of the 2017 ASABE Annual International Meeting, American Society of Agricultural and Biological Engineers, Spokane, WA, USA, 16–19 July 2017; p. 1. [Google Scholar]
  7. Shafri, H.Z.M.; Taherzadeh, E.; Mansor, S.; Ashurov, R. Hyperspectral Remote Sensing of Urban Areas: An Overview of Techniques and Applications. Res. J. Appl. Sci. Eng. Technol. 2012, 4, 1557–1565. [Google Scholar]
  8. Navin, M.S.; Agilandeeswari, L. Multispectral and Hyperspectral Images Based Land Use/Land Cover Change Prediction Analysis: An Extensive Review. Multimedia Tools Appl. 2020, 79, 29751–29774. [Google Scholar] [CrossRef]
  9. Andrew, M.E.; Ustin, S.L. The role of environmental context in mapping invasive plants with hyperspectral image data. Remote Sens. Environ. 2008, 112, 4301–4317. [Google Scholar] [CrossRef]
  10. Stuart, M.B.; McGonigle, A.J.S.; Willmott, J.R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors 2019, 19, 3071. [Google Scholar] [CrossRef]
  11. Rajabi, R.; Zehtabian, A.; Singh, K.D. Hyperspectral imaging in environmental monitoring and analysis. Front. Environ. Sci. 2024. [Google Scholar] [CrossRef]
  12. Nasrabadi, N.M. Hyperspectral target detection: An overview of current and future challenges. IEEE Signal Process. Mag. 2013, 31, 34–44. [Google Scholar] [CrossRef]
  13. Chang, C.-I. Hyperspectral anomaly detection: A dual theory of hyperspectral target detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–20. [Google Scholar] [CrossRef]
  14. Lee, G.; Lee, J.; Baek, J.; Kim, H.; Cho, D. Channel sampler in hyperspectral images for vehicle detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 2022. [Google Scholar] [CrossRef]
  15. Shi, P.; Jiang, Q.; Li, Z. Hyperspectral Characteristic Band Selection and Estimation Content of Soil Petroleum Hydrocarbon Based on GARF-PLSR. J. Imaging 2023, 9, 87. [Google Scholar] [CrossRef]
  16. Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.A.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675. [Google Scholar] [CrossRef]
  17. SahIn, Y.E.; Arisoy, S.; Kayabol, K. Anomaly detection with Bayesian Gauss background model in hyperspectral images. In Proceedings of the 26th Signal Processing and Communications Applications Conference, SIU, Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar]
  18. Haut, J.; Paoletti, M.; Paz-Gallardo, A.; Plaza, J.; Plaza, A. Cloud implementation of logistic regression for hyperspectral image classification. In Proceedings of the 17th International Conference on Computational and Mathematical Methods in Science and Engineering (CMMSE), Cadiz, Spain, 4–8 July 2017; Costa Ballena: Cádiz, Spain, 2017; Volume 3, pp. 1063–1073. [Google Scholar]
  19. Li, J.; Bioucas-Dias, J.; Plaza, A. Spectral-spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. IEEE Trans. Geosci. Remote Sens. 2012, 50, 809–823. [Google Scholar] [CrossRef]
  20. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  21. Ye, Q.; Huang, P.; Zhang, Z.; Zheng, Y.; Fu, L.; Yang, W. Multiview learning with robust double-sided twin SVM. IEEE Trans. Cybern. 2021, 52, 12745–12758. [Google Scholar] [CrossRef]
  22. Ye, Q.; Zhao, H.; Li, Z.; Yang, X.; Gao, S.; Yin, T.; Ye, N. L1-norm distance minimization-based fast robust twin support vector k-plane clustering. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 4494–4503. [Google Scholar] [CrossRef]
  23. Chen, Y.-N.; Thaipisutikul, T.; Han, C.-C.; Liu, T.-J.; Fan, K.-C. Feature line embedding based on support vector machine for hyperspectral image classification. Remote Sens. 2021, 13, 130. [Google Scholar] [CrossRef]
  24. Lee, H.; Kwon, H. Going deeper with contextual CNN for hyperspectral image classification. IEEE Trans. Image Process. 2017, 26, 4843–4855. [Google Scholar] [CrossRef]
  25. Luo, Y.; Zou, J.; Yao, C.; Zhao, X.; Li, T.; Bai, G. HSI-CNN: A novel convolution neural network for hyperspectral image. In Proceedings of the International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–17 July 2018. [Google Scholar]
  26. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
  27. Wang, J.; Song, X.; Sun, L.; Huang, W.; Wang, J. A novel cubic convolutional neural network for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4133–4148. [Google Scholar] [CrossRef]
  28. Moraga, J.; Duzgun, H.S. JigsawHSI: A network for hyperspectral image classification. arXiv 2022, arXiv:2206.02327. [Google Scholar]
  29. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  30. He, X.; Chen, Y.; Lin, Z. Spatial-Spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
  31. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  32. Qing, Y.; Liu, W.; Feng, L.; Gao, W. Improved Transformer Net for Hyperspectral Image Classification. Remote Sens. 2021, 13, 2216. [Google Scholar] [CrossRef]
  33. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. arXiv 2021, arXiv:2107.02988. [Google Scholar] [CrossRef]
  34. He, X.; Chen, Y.; Lin, Z. Spectral-Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6824–6836. [Google Scholar]
  35. Zhang, X.; Su, Y.; Gao, L.; Bruzzone, L.; Gu, X.; Tian, Q. A Lightweight Transformer Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5517617. [Google Scholar] [CrossRef]
  36. Zhang, S.; Zhang, J.; Wang, X.; Wang, J.; Zhu, Z. ELS2T: Efficient Lightweight Spectral-Spatial Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5518416. [Google Scholar] [CrossRef]
  37. Geiger, B.C.; Kubin, G. Relative information loss in the PCA. In Proceedings of the 2012 IEEE Information Theory Workshop, San Diego, CA, USA, 5 February 2012; pp. 562–566. [Google Scholar]
  38. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  39. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  40. Zhong, Y.; Liu, L.; Yang, Y.; Loy, C.C. Dual Attention Network for Scene Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2018; pp. 3146–3154. [Google Scholar]
  41. Zhang, Y.; Wang, X.; Liu, Y.; Zhang, X. Discriminative spectral-spatial attention-aware residual network for hyperspectral image classification. IEEE Access 2020, 8, 226169–226184. [Google Scholar]
  42. Li, J.; Li, G.; Sun, Q.; Li, L. Residual Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 5, 449–462. [Google Scholar]
  43. Liu, L.; Zhang, P.; Zhang, W.; Lu, J.; Wang, J. Spectral-Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3232–3245. [Google Scholar]
  44. Gevaert, A.S.; De Backer, S.S.; Schiavon, A.C.; Philips, W. Spectral–Spatial Fused Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58. [Google Scholar] [CrossRef]
  45. Wang, C.; Ma, X.; Chen, Y.; Ren, Y.; Han, Z. Center Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59. [Google Scholar] [CrossRef]
  46. Wang, Y.; Zhang, L.; Zhang, L. Spatial Attention Guided Residual Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5980. [Google Scholar] [CrossRef]
  47. Serra, J.; Soille, P. Morphological Image Analysis: Principles and Applications; Springer: Berlin, Germany, 1994. [Google Scholar]
  48. Pesaresi, M.; Benediktsson, J.A. A new approach for the morphological segmentation of high-resolution satellite imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 309–320. [Google Scholar] [CrossRef]
  49. Pedergnana, M.; Marpu, P.R.; Mura, M.D.; Benediktsson, J.A.; Bruzzone, L. Classification of remote sensing optical and LiDAR data using extended attribute profiles. IEEE J. Sel. Top. Signal Process. 2012, 6, 856–865. [Google Scholar] [CrossRef]
  50. Roy, S.K.; Chanda, B.; Chaudhuri, B.B.; Ghosh, D.K.; Dubey, S.R. Local morphological pattern: A scale space shape descriptor for texture classification. Digit. Signal Process. 2018, 82, 152–165. [Google Scholar] [CrossRef]
  51. Hong, D.; Wu, X.; Ghamisi, P.; Chanussot, J.; Yokoya, N.; Zhu, X.X. Invariant attribute profiles: A spatial-frequency joint feature extractor for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3791–3808. [Google Scholar] [CrossRef]
  52. Roy, S.K.; Deria, A.; Shah, C.; Haut, J.M.; Du, Q.; Plaza, A. Spectral–Spatial Morphological Attention Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5503615. [Google Scholar] [CrossRef]
  53. Cai, Y.; Liu, X.; Cai, Z. BS-Nets: An End-to-End Framework for Band Selection of Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1969–1984. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of CESA-MCFormer.
Figure 2. Overall architecture of the Emb Block. The symbol “T” denotes matrix transpose, “·” denotes element-wise multiplication, and “×” denotes matrix multiplication.
Figure 3. Overall architecture of CESA.
Figure 4. Overall architecture of Soft CESA.
Figure 5. Overall architecture of Transformer Encoder.
Figure 6. Overall architecture of the Spectral Morph Block and Spatial Morph Block.
Figure 7. Overall architecture of dilation block.
Figure 8. Visualization results on the IP dataset.
Figure 9. Visualization results on the UP dataset.
Table 1. Dataset Information.
Dataset | Image Size | Number of Classes | Number of Bands
IP | 145 × 145 | 16 | 220
UP | 610 × 340 | 9 | 103
Chikusei | 2517 × 2335 | 19 | 128
Botswana | 1476 × 256 | 14 | 145
KSC | 512 × 614 | 13 | 176
Salinas Valley | 512 × 217 | 16 | 224
Table 2. Detailed information on the training and testing data samples for each class in the IP dataset.
Class No. | Class Name | Training | Testing
1 | Alfalfa | 3 | 43
2 | Corn-notill | 72 | 1356
3 | Corn-mintill | 42 | 788
4 | Corn | 12 | 225
5 | Grass-pasture | 25 | 458
6 | Grass-trees | 37 | 693
7 | Grass-pasture-mowed | 2 | 26
8 | Hay-windrowed | 24 | 454
9 | Oats | 1 | 19
10 | Soybean-notill | 49 | 923
11 | Soybean-mintill | 123 | 2332
12 | Soybean-clean | 30 | 563
13 | Wheat | 11 | 194
14 | Woods | 64 | 1201
15 | Buildings-Grass-Trees-Drives | 20 | 366
16 | Stone-Steel-Towers | 5 | 88
Total | | 520 | 9729
Table 3. Detailed information on the training and testing data samples for each class in the UP dataset.
Class No. | Class Name | Training | Testing
1 | Asphalt | 67 | 6564
2 | Meadows | 187 | 18,462
3 | Gravel | 21 | 2078
4 | Trees | 31 | 3033
5 | Painted metal sheets | 14 | 1331
6 | Bare Soil | 51 | 4978
7 | Bitumen | 14 | 1316
8 | Self-Blocking Bricks | 37 | 3645
9 | Shadows | 10 | 937
Total | | 432 | 42,344
Table 4. Detailed information on the training and testing data samples for each class in the Chikusei dataset.
Class No. | Class Name | Training | Testing
1 | Water | 29 | 2816
2 | Bare soil (school) | 29 | 2830
3 | Bare soil (park) | 3 | 283
4 | Bare soil (farmland) | 49 | 4830
5 | Natural plants | 43 | 4254
6 | Weeds in farmland | 12 | 1096
7 | Forest | 206 | 20,310
8 | Grass | 66 | 6449
9 | Rice field (grown) | 134 | 13,235
10 | Rice field (first stage) | 13 | 1255
11 | Row crops | 60 | 5901
12 | Plastic house | 22 | 2171
13 | Manmade (non-dark) | 13 | 1207
14 | Manmade (dark) | 77 | 7587
15 | Manmade (blue) | 5 | 426
16 | Manmade (red) | 3 | 219
17 | Manmade grass | 11 | 1029
18 | Asphalt | 9 | 792
19 | Paved ground | 2 | 143
Total | | 786 | 76,806
Table 5. The floating point operations (FLOPs) and the number of parameters for various models. “CESA-MCFormer *” refers to the CESA-MCFormer model without the inclusion of the CESA.
Model | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer | CESA-MCFormer *
FLOPs (M) | 44.80 | 75.34 | 24.91 | 29.88 | 41.78 | 32.24 | 27.67
Params (M) | 1.12 | 0.61 | 0.11 | 0.51 | 0.25 | 0.36 | 0.25
Table 6. Classification accuracy of various models on the IP dataset (without PCA).
Class No. | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer
1 | 62.79 | 39.53 | 83.72 | 58.14 | 97.67 | 93.02
2 | 73.45 | 74.78 | 64.16 | 91.81 | 92.92 | 96.61
3 | 70.05 | 69.29 | 50.25 | 94.54 | 88.07 | 95.81
4 | 67.56 | 57.33 | 53.78 | 77.78 | 91.11 | 90.22
5 | 76.86 | 46.51 | 57.42 | 79.91 | 89.74 | 95.85
6 | 94.95 | 86.29 | 81.53 | 98.70 | 99.42 | 100.00
7 | 46.15 | 42.31 | 76.92 | 100.00 | 100.00 | 100.00
8 | 96.70 | 95.15 | 92.51 | 100.00 | 99.34 | 99.34
9 | 63.16 | 26.32 | 73.68 | 89.47 | 31.58 | 84.21
10 | 76.60 | 73.67 | 76.71 | 87.22 | 95.56 | 95.12
11 | 81.86 | 84.95 | 83.40 | 97.51 | 97.13 | 97.64
12 | 44.76 | 51.33 | 45.83 | 82.77 | 88.28 | 95.38
13 | 100.00 | 100.00 | 98.97 | 100.00 | 97.94 | 100.00
14 | 98.17 | 96.92 | 99.25 | 98.83 | 99.75 | 99.17
15 | 59.84 | 73.50 | 84.43 | 85.79 | 91.53 | 87.98
16 | 55.68 | 95.45 | 51.14 | 97.73 | 100.00 | 100.00
OA (%) | 79.24 | 78.38 | 75.59 | 93.15 | 94.96 | 96.82
AA (%) | 73.04 | 69.58 | 73.36 | 90.01 | 91.25 | 95.65
κ (%) | 76.28 | 75.17 | 71.96 | 92.17 | 94.26 | 96.38
Table 7. Classification accuracy of various models on the UP dataset (without PCA).
Class No. | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer
1 | 89.98 | 91.97 | 83.32 | 94.70 | 98.00 | 98.35
2 | 95.54 | 93.90 | 95.38 | 99.70 | 99.69 | 99.92
3 | 75.07 | 47.50 | 59.58 | 87.73 | 87.05 | 89.56
4 | 94.16 | 92.81 | 89.48 | 96.24 | 96.74 | 95.12
5 | 96.39 | 99.85 | 89.41 | 99.92 | 93.09 | 100.00
6 | 84.39 | 84.23 | 85.01 | 99.38 | 99.08 | 100.00
7 | 82.14 | 62.54 | 63.91 | 90.20 | 97.04 | 98.33
8 | 92.76 | 91.39 | 87.46 | 95.80 | 95.69 | 99.18
9 | 98.40 | 99.79 | 99.89 | 98.51 | 97.87 | 97.55
OA (%) | 91.70 | 89.23 | 88.36 | 97.40 | 97.85 | 98.67
AA (%) | 89.87 | 84.89 | 83.72 | 95.80 | 96.27 | 97.56
κ (%) | 89.00 | 85.75 | 84.59 | 96.56 | 97.15 | 98.24
Table 8. Classification accuracy of various models on the Chikusei dataset (without PCA).
Class No. | HybridSN | ViT | SF | SSFTT | morphFormer | CESA-MCFormer
1 | 97.41 | 94.74 | 91.58 | 99.61 | 99.96 | 99.15
2 | 95.23 | 94.03 | 99.01 | 95.30 | 99.82 | 99.72
3 | 0.00 | 18.37 | 23.67 | 46.64 | 22.61 | 85.16
4 | 99.79 | 97.67 | 98.44 | 97.48 | 98.69 | 99.88
5 | 99.79 | 96.94 | 97.32 | 99.98 | 100.00 | 99.98
6 | 97.90 | 97.26 | 93.89 | 91.88 | 98.54 | 91.79
7 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
8 | 99.74 | 98.71 | 99.18 | 100.00 | 100.00 | 99.94
9 | 99.81 | 99.89 | 99.99 | 100.00 | 99.77 | 100.00
10 | 97.37 | 99.20 | 95.30 | 100.00 | 100.00 | 100.00
11 | 100.00 | 99.75 | 100.00 | 99.83 | 99.76 | 100.00
12 | 96.96 | 93.74 | 98.11 | 96.32 | 97.14 | 96.78
13 | 95.53 | 95.53 | 94.86 | 95.53 | 95.53 | 95.53
14 | 99.79 | 98.87 | 99.00 | 99.97 | 99.91 | 100.00
15 | 99.06 | 93.19 | 95.54 | 100.00 | 99.30 | 99.53
16 | 100.00 | 98.17 | 99.54 | 93.61 | 100.00 | 100.00
17 | 99.03 | 86.10 | 97.57 | 100.00 | 100.00 | 100.00
18 | 79.80 | 91.41 | 88.76 | 90.03 | 97.85 | 99.37
19 | 16.78 | 31.47 | 42.66 | 88.11 | 88.11 | 95.10
OA (%) | 98.65 | 97.97 | 98.38 | 99.01 | 99.34 | 99.59
AA (%) | 88.10 | 88.69 | 90.23 | 94.44 | 94.58 | 98.00
κ (%) | 98.44 | 97.65 | 98.13 | 98.85 | 99.24 | 99.53
Table 9. Classification accuracy of various models on the IP dataset (with PCA). “CESA-MCFormer *” refers to the CESA-MCFormer model without PCA.
Class No. | HybridSN | SF | SSFTT | CESA-MCFormer | CESA-MCFormer *
1 | 72.09 | 51.16 | 100.00 | 76.74 | 93.02
2 | 92.99 | 78.17 | 94.91 | 96.02 | 96.61
3 | 98.98 | 73.86 | 98.35 | 95.18 | 95.81
4 | 94.67 | 61.78 | 100.00 | 88.89 | 90.22
5 | 90.83 | 84.50 | 98.69 | 90.17 | 95.85
6 | 99.86 | 93.94 | 99.13 | 99.42 | 100.00
7 | 57.69 | 11.54 | 73.08 | 100.00 | 100.00
8 | 94.49 | 100.00 | 99.56 | 100.00 | 99.34
9 | 68.42 | 10.53 | 52.63 | 100.00 | 84.21
10 | 91.12 | 85.59 | 97.29 | 94.58 | 95.12
11 | 96.57 | 87.78 | 97.60 | 98.37 | 97.64
12 | 86.50 | 69.45 | 84.37 | 88.28 | 95.38
13 | 97.94 | 98.45 | 100.00 | 100.00 | 100.00
14 | 98.92 | 90.67 | 99.67 | 99.92 | 99.17
15 | 96.45 | 62.02 | 93.44 | 90.16 | 87.98
16 | 48.86 | 81.82 | 95.45 | 100.00 | 100.00
OA (%) | 94.60 | 83.33 | 96.78 | 96.23 | 96.82
AA (%) | 86.65 | 71.33 | 92.76 | 94.86 | 95.65
κ (%) | 93.84 | 80.97 | 96.33 | 95.70 | 96.38
Table 10. Classification accuracy of various models on the UP dataset (with PCA). “CESA-MCFormer *” refers to the CESA-MCFormer model without PCA.
Class No. | HybridSN | SF | SSFTT | CESA-MCFormer | CESA-MCFormer *
1 | 98.86 | 88.82 | 98.28 | 98.49 | 98.35
2 | 99.96 | 96.59 | 99.97 | 99.87 | 99.92
3 | 85.51 | 71.80 | 90.38 | 91.77 | 89.56
4 | 92.91 | 91.16 | 97.36 | 95.19 | 95.12
5 | 100.00 | 95.87 | 100.00 | 100.00 | 100.00
6 | 99.76 | 79.23 | 98.49 | 99.16 | 100.00
7 | 97.04 | 75.08 | 98.94 | 98.02 | 98.33
8 | 92.76 | 84.94 | 91.44 | 98.30 | 99.18
9 | 93.17 | 92.32 | 93.06 | 95.73 | 97.55
OA (%) | 97.69 | 89.95 | 97.96 | 98.56 | 98.67
AA (%) | 95.55 | 86.20 | 96.44 | 97.39 | 97.56
κ (%) | 96.93 | 86.58 | 97.29 | 98.09 | 98.24
Table 11. Ablation study results for CESA and MC. “CESA” stands for replacing the Emb Block, and “MC” stands for replacing the Transformer Encoder.
CESA | MC | OA (%) | AA (%) | κ (%)
✗ | ✗ | 94.96 | 91.25 | 94.26
✓ | ✗ | 95.86 | 93.06 | 95.27
✗ | ✓ | 95.80 | 91.50 | 95.21
✓ | ✓ | 96.82 | 95.65 | 96.38
Table 12. Comparative experiments of the traditional Spatial Attention block (SA), CAM, and CESA.
Module | OA (%) | AA (%) | κ (%)
SA | 95.79 | 91.76 | 95.19
CAM | 95.94 | 93.17 | 95.37
CESA | 96.82 | 95.65 | 96.38
Table 13. Classification accuracy of various models on the IP dataset.
Metric | HybridSN | SSFTT | morphFormer | CESA-MCFormer
OA (%) | 68.90 | 69.56 | 71.26 | 73.29
AA (%) | 81.10 | 81.87 | 82.97 | 84.61
κ (%) | 65.17 | 65.85 | 67.82 | 70.05
Table 14. Classification accuracy of various models on the UP dataset.
Metric | HybridSN | SSFTT | morphFormer | CESA-MCFormer
OA (%) | 70.81 | 74.43 | 74.03 | 77.54
AA (%) | 77.90 | 79.39 | 78.90 | 83.55
κ (%) | 63.44 | 67.17 | 66.71 | 71.46

