Feature selection based on neighborhood rough sets and Gini index

Neighborhood rough set is considered an essential approach for dealing with incomplete data and inexact knowledge representation, and it has been widely applied in feature selection. The Gini index is an indicator used to evaluate the impurity of a dataset and is also commonly employed to measure the importance of features in feature selection. This article proposes a novel feature selection methodology based on these two concepts. In this methodology, we present the neighborhood Gini index and the neighborhood class Gini index and then extensively discuss their properties and relationships with attributes. Subsequently, two forward greedy feature selection algorithms are developed using these two metrics as a foundation. Finally, to comprehensively evaluate the performance of the proposed algorithm, comparative experiments were conducted on 16 UCI datasets from various domains, including industry, food, medicine, and pharmacology, against four classical neighborhood rough set-based feature selection algorithms. The experimental results indicate that the proposed algorithm improves the average classification accuracy on the 16 datasets by over 6%, with improvements exceeding 10% on five of them. Furthermore, statistical tests reveal no significant differences between the proposed algorithm and the four classical neighborhood rough set-based feature selection algorithms. However, the proposed algorithm demonstrates high stability, effectively eliminating most redundant or irrelevant features while enhancing classification accuracy. In summary, the algorithm proposed in this article outperforms classical neighborhood rough set-based feature selection algorithms.

Rough set theory, proposed by Pawlak (1982) and continuously improved by subsequent researchers, is a mathematical tool for dealing with uncertain and inexact information. Recently, this theory has been widely applied to feature selection in data mining and machine learning (Sang et al., 2022; Huang, Li & Qian, 2022; Zhang et al., 2023; Wan et al., 2023; Yang et al., 2023). Rough set theory divides the dataset into equivalence classes to reveal the dependency relationships among attributes and the process of generating decision rules. For classification problems, rough set theory uses features to induce binary relations and divides samples into different information granules based on these relations. These information granules are then used to approximate decision variables and represent upper and lower approximations of decisions. On this basis, a feature evaluation function called the dependency function is defined. Different types of binary relations lead to different granulation mechanisms, resulting in various rough set models, such as the classical rough set (Pawlak, 1982), similarity relation-based rough set (Dai, Gao & Zheng, 2018), dominance relation-based rough set (Greco, Matarazzo & Slowinski, 1999; Shao & Zhang, 2004), fuzzy rough set (Pawlak, 1985), and other rough set models (Sang et al., 2018).
The neighborhood rough set (NRS) (Hu et al., 2008) is one of the most important rough set models, proposed to address the difficulty of handling continuous features in classical rough set theory. Since its application to feature selection (Hu et al., 2008), NRS has gained widespread attention in data mining and machine learning (Liu et al., 2016; Zeng, She & Niu, 2014). Many scholars have proposed different feature evaluation functions based on this model and developed corresponding feature selection algorithms. Hu et al. (2011) proposed neighborhood information entropy to address the fact that Shannon entropy cannot directly evaluate uncertainty on continuous features. Wang et al. (2018) explored several neighborhood distinguishability measures to assess data uncertainty. They also proposed the K-nearest-neighbor NRS, which combines the advantages of neighborhoods and K-nearest neighbors while accounting for data distribution (Wang et al., 2019b). Wang et al. (2019a) proposed neighborhood self-information to better exploit both deterministic and uncertain information. Sun et al. (2019a, 2019b) introduced the Lebesgue measure into NRS, enabling feature selection on infinite sets (Sun et al., 2019a) and incomplete sets (Sun et al., 2019b). Li, Xiao & Wang (2018) extended discernibility matrices to NRS and applied them to power system stability assessment.
Many of the evaluation methods above extend evaluation metrics for discrete features to continuous random variables through neighborhood relations. For example, neighborhood information entropy extends Shannon entropy to continuous random variables, the neighborhood discernibility matrix extends the rough set discernibility matrix, and neighborhood self-information extends self-information to continuous random variables. The Gini index (GI) (Breiman et al., 1984), first introduced by Breiman in 1984 and applied to node splitting in decision trees, accurately quantifies the impurity of a dataset. In classification problems especially, it effectively measures the contribution of features to classification results and has been widely used in feature selection in data mining and machine learning (Breiman, 2001; Wang, Deng & Xu, 2023).
This article combines NRS with GI from two perspectives and proposes two novel feature importance evaluation metrics. First, from the standpoint of sample neighborhoods, the Neighborhood Gini index is proposed to measure the importance of features through neighborhood information. Second, from the standpoint of class neighborhoods, the Neighborhood Class Gini index is proposed to reveal the differences in features among different classes. The properties of these two evaluation metrics and their relationships with attributes are discussed. Based on these metrics, two forward greedy algorithms are designed for feature selection. Finally, the effectiveness and stability of the proposed algorithms are validated through experiments.
The structure of this article is as follows: In the "Materials and Methods" section, the fundamental concepts of NRS and GI are reviewed, and their combination is used to propose two distinct feature importance evaluation indicators. The properties of these two indicators and their relationships with attributes are discussed. Subsequently, the importance of candidate features is defined based on the two indicators, and two forward greedy feature selection algorithms are formulated on this basis. In the "Experimental Analysis and Discussion" section, the effectiveness and stability of the proposed algorithms are verified. In the "Conclusions" section, we conclude the article with possible directions for future research.

Neighborhood rough set
In rough sets, an information table is often represented by $\langle U, A \rangle$, where $U = \{x_1, x_2, \ldots, x_n\}$ is a non-empty finite set of samples and $A = \{a_1, a_2, \ldots, a_m\}$ is a non-empty finite set of attributes used to describe those samples.
Let $\langle U, A \rangle$ be an information table, $B \subseteq A$, and $d_B$ a binary function defined on $U$ with respect to the attribute subset $B$. Then $d_B$ is said to be a distance metric on $U$ when, for any $x, y, z \in U$, it satisfies: (1) $d_B(x, y) \ge 0$, with $d_B(x, y) = 0$ if and only if $x = y$; (2) $d_B(x, y) = d_B(y, x)$; and (3) $d_B(x, z) \le d_B(x, y) + d_B(y, z)$. The Euclidean distance is a commonly used distance metric, and all subsequent distance references in this article are in terms of the Euclidean distance. For any two samples, the Euclidean distance is calculated as

$$d_B(x_i, x_j) = \sqrt{\sum_{a \in B} \big(a(x_i) - a(x_j)\big)^2},$$

where $a(x)$ denotes the value of sample $x$ on attribute $a$. In an information table, for any $x \in U$, its neighborhood similarity class $[x]_B^r$ is defined as

$$[x]_B^r = \{\, y \in U \mid d_B(x, y) \le r \,\},$$

where $r \ge 0$ is a user-defined constant. In an information system, neighborhood similarity classes are also referred to as neighborhood information granules, abbreviated as neighborhood granules. Here, $r$ is called the radius of the neighborhood granules. In the information table $\langle U, A \rangle$, the neighborhood granule family $\{[x_i]_B^r \mid i = 1, 2, \ldots, n\}$ forms a covering of $U$. The neighborhoods of all objects in the domain constitute the granulation of the domain, and the neighborhood granule family constitutes the fundamental concept system of the domain space. Through these fundamental concepts, we can approximate any concept in the space.
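As a concrete illustration, a neighborhood granule $[x]_B^r$ can be computed directly from these definitions. The sketch below is illustrative only (the function name and toy data are assumptions, not from the article) and uses the Euclidean distance restricted to an attribute subset:

```python
import numpy as np

def neighborhood(X, i, B, r):
    """Return indices of samples in the r-neighborhood [x_i]^r_B,
    using Euclidean distance restricted to the attribute subset B."""
    diff = X[:, B] - X[i, B]                 # differences on attributes in B
    dist = np.sqrt((diff ** 2).sum(axis=1))  # Euclidean distance to every sample
    return np.where(dist <= r)[0]            # all samples within radius r

# Toy data: 4 samples, 2 attributes, values normalized to [0, 1]
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 0.9], [1.0, 1.0]])
print(neighborhood(X, 0, [0, 1], 0.2))  # → [0 1]
```

Collecting the granules of every sample yields the covering of $U$ described above.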
For any sample set $X \subseteq U$, its lower approximation $\underline{R}_B^r(X)$ and upper approximation $\overline{R}_B^r(X)$ are defined as

$$\underline{R}_B^r(X) = \{\, x \in U \mid [x]_B^r \subseteq X \,\}, \qquad \overline{R}_B^r(X) = \{\, x \in U \mid [x]_B^r \cap X \ne \emptyset \,\}.$$

Let $D$ be a classification decision attribute defined on $U$, with $A \cap D = \emptyset$. In this case, the triple $\langle U, A, D \rangle$ is referred to as a decision table. In the decision table $\langle U, A, D \rangle$, the attribute $D$ divides $U$ into $r$ decision classes, denoted as $U/D = \{E_1, E_2, \ldots, E_r\}$. Here, each $E_i$ $(i = 1, 2, \ldots, r)$ is an equivalence class, meaning that all samples in $E_i$ share the same class label. In a decision table $\langle U, A, D \rangle$, where $B \subseteq A$ and the neighborhood similarity relation on $B$ has radius $r$, the upper approximation $\overline{R}_B^r(D)$ and lower approximation $\underline{R}_B^r(D)$ of the decision attribute $D$ with respect to $B$ are defined as

$$\underline{R}_B^r(D) = \bigcup_{i=1}^{r} \underline{R}_B^r(E_i), \qquad \overline{R}_B^r(D) = \bigcup_{i=1}^{r} \overline{R}_B^r(E_i).$$

The positive domain of the decision table is written as $POS_B^r(D) = \underline{R}_B^r(D)$, and the boundary domain as $BND_B^r(D) = \overline{R}_B^r(D) - \underline{R}_B^r(D)$. The dependency function $\gamma_B^r(D)$ of $D$ with respect to $B$ is formulated as

$$\gamma_B^r(D) = \frac{|POS_B^r(D)|}{|U|},$$

where $|\cdot|$ denotes the cardinality of a set.
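The dependency function above can be sketched in a few lines: a sample belongs to the positive region exactly when its whole neighborhood carries its own class label. This is a minimal sketch under the standard NRS definitions, with illustrative names and toy data:

```python
import numpy as np

def dependency(X, y, B, r):
    """gamma^r_B(D): fraction of samples in the positive region, i.e. those
    whose entire r-neighborhood on attribute subset B lies in their own class."""
    XB = X[:, B]
    pos = 0
    for i in range(len(X)):
        nbr = np.sqrt(((XB - XB[i]) ** 2).sum(axis=1)) <= r
        if np.all(y[nbr] == y[i]):   # neighborhood is class-consistent
            pos += 1
    return pos / len(X)

X = np.array([[0.0], [0.1], [0.5], [0.9], [1.0]])
y = np.array([0, 0, 1, 1, 1])
print(dependency(X, y, [0], 0.45))  # → 0.6 (two samples near the class border are inconsistent)
```

Shrinking the radius to 0.15 makes every neighborhood pure, and the dependency rises to 1.0.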

Gini index
GI is a metric used to measure the impurity of a dataset and is commonly employed in feature selection for decision tree algorithms. The values of GI range from 0 to 1. When GI = 0, the dataset's impurity is minimal, meaning all elements in the dataset belong to the same class. Conversely, as GI approaches 1, the dataset's impurity grows, indicating that the elements are spread across many classes. For a dataset $D$ with $r$ categories, where the proportion of samples in the $i$th category is denoted $p_i$, GI is calculated as

$$GI(D) = 1 - \sum_{i=1}^{r} p_i^2.$$

GI evaluates the impurity of a dataset based on the distribution of class probabilities in order to determine the importance of the corresponding features. A smaller GI indicates higher dataset purity and better discriminative power of the feature. However, on the same dataset, different feature subsets yield the same class probability distribution, making GI unsuitable for directly evaluating the classification performance of different feature subsets. Therefore, a new influencing factor must be introduced so that the class probability distributions vary across feature subsets. For instance, Classification and Regression Trees introduce a tree structure to partition the dataset; the data subsets created by different feature divisions differ substantially, producing distinct class probability distributions, which is what enables the Gini index to measure feature importance. In NRS, when the neighborhood radius is fixed, different attribute sets lead to distinct neighborhoods for samples; conversely, when the attribute set is fixed, different neighborhood radii yield different neighborhoods. Different neighborhoods can lead to different class probability distributions, and hence different GI values. This makes GI applicable to feature selection in NRS.
Next, two different feature importance evaluation metrics integrating NRS and GI will be proposed.

Neighborhood Gini index
Samples with certain similarities should be grouped into the same category, and from a distance perspective, samples within the neighborhood range of a sample are considered similar to it. The similarity of samples is determined by their features. However, for reasons such as data collection, some features may be redundant or irrelevant to the class labels. Under such a subset of features, the class labels of samples within a neighborhood range may therefore be inconsistent. It is necessary to select features that can effectively represent the characteristics of all categories, so that the class labels within the neighborhood range of each sample are as consistent as possible.
The more consistent the class labels of the samples within a neighborhood range, i.e., the higher the class purity within that range, the better the corresponding feature subset represents the local characteristics of that class. If a feature subset represents the local characteristics of all classes well, i.e., the class purity within the neighborhood range of every sample in the dataset is high, then the feature subset can distinguish all classes well. In this case, the importance of the features is also higher.
Based on this idea, we use GI to represent the impurity of the dataset and propose the Neighborhood Gini index (NGI). NGI evaluates the importance of a feature subset by assessing the purity of every sample's neighborhood range under that subset. The definition of NGI is given below.

Definition 1: Given a decision table $\langle U, A, D \rangle$, for any $x_i \in U$ and $B \subseteq A$, the impurity of the $r$-neighborhood $R_B^r(x_i)$ is defined as

$$GI\big(R_B^r(x_i)\big) = 1 - \sum_{j=1}^{r} p_j^2,$$

where $r$ represents the number of categories and $p_j$ is the proportion of the $j$th category within the $r$-neighborhood of $x_i$. Over the whole decision table, the impurity is the mean of the impurities within the neighborhood of each sample:

$$NGI_B^r(D) = \frac{1}{n} \sum_{i=1}^{n} GI\big(R_B^r(x_i)\big).$$

From Definition 1, it can be observed that NGI is influenced by two parameters: the feature subset $B$ and the neighborhood radius $r$. Since GI depends on the distribution of classes, changes in the number of samples within the neighborhood range can alter the class distribution and thereby the magnitude of GI. However, the response of NGI to changes in the feature subset and the neighborhood radius is not strictly monotonic. The following analyzes the variation of NGI with the feature subset and with the neighborhood radius separately.
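Definition 1 can be sketched directly: gather each sample's neighborhood, take the Gini impurity of the labels inside it, and average. This is an illustrative implementation with toy data, not the authors' code:

```python
import numpy as np

def ngi(X, y, B, r):
    """NGI: mean Gini impurity of the class labels inside every sample's
    r-neighborhood, computed on attribute subset B (per Definition 1)."""
    XB = X[:, B]
    total = 0.0
    for i in range(len(X)):
        nbr = np.sqrt(((XB - XB[i]) ** 2).sum(axis=1)) <= r
        _, counts = np.unique(y[nbr], return_counts=True)
        p = counts / counts.sum()
        total += 1.0 - np.sum(p ** 2)
    return total / len(X)

X = np.array([[0.0], [0.1], [0.5], [0.9], [1.0]])
y = np.array([0, 0, 1, 1, 1])
print(ngi(X, y, [0], 0.15))  # tight radius: every neighborhood is pure -> 0.0
print(ngi(X, y, [0], 0.45))  # wider radius: mixed neighborhoods -> ~0.178
```

The two printed values show the dependence on $r$ discussed next: widening the radius pulls in samples of other classes and raises the impurity.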

Impact of feature on NGI
For any feature subsets $B_1 \subseteq B_2 \subseteq A$, adding one or more features to $B_1$ to obtain $B_2$ does not guarantee that $NGI_{B_2}^r(D)$ will always be smaller than $NGI_{B_1}^r(D)$; the change is not completely monotonic, as shown in Fig. 1.
When the features are continuous, the variation in the samples within the neighborhood range is small, leading to minor changes in the class distribution. As a result, the variation curve is relatively smooth, as shown in Fig. 1A. However, when the features are discrete, introducing new features might drastically reduce the number of samples within the neighborhood range, causing larger changes in the class distribution. This results in a fluctuating variation curve, as depicted in Fig. 1B.
From Fig. 1, it is evident that the variation of NGI is not strictly monotonic locally, yet it exhibits a descending trend as a whole. This can be attributed to the overall effect that, as the number of features increases, the data portrays the samples more precisely, making their inherent characteristics more prominent. When the features a sample emphasizes are more aligned with its class attributes, the purity of the sample's neighborhood increases, leading to a smaller NGI. Conversely, when the emphasized features deviate from the class attributes, the neighborhood's purity decreases, resulting in a larger NGI. As the number of features grows, characteristics relevant to the class gradually come into sharper focus, which accounts for the observed overall decreasing trend.
It is worth noting that not all continuous feature subsets follow smooth and monotonic variation curves, and not all discrete features yield fluctuating curves: continuous features might also fluctuate, while discrete features can behave smoothly and monotonically. However, regardless of whether the features are continuous or discrete, the overall tendency is a decrease.
The following illustrates the variation of NGI through changes in the class distribution within the neighborhood feature subspace of a sample $x_i$. We use the change in GI within the neighborhood feature subspace of $x_i$ to represent the overall change in NGI across the entire dataset. The distribution of samples within this localized neighborhood feature subspace is depicted in Fig. 2 ("Sample distribution in original neighborhood feature subspace"): the hollow circle class accounts for $\frac{1}{3}$ of the samples (3 of 9) and the solid circle class for $\frac{2}{3}$ (6 of 9), so at this point $GI = 1 - \big((\tfrac{1}{3})^2 + (\tfrac{2}{3})^2\big) = \tfrac{4}{9}$. Let $a_i \in A - B$. Then $NGI_{B \cup \{a_i\}}^r$ changes relative to $NGI_B^r$ as follows:

1. NGI increases when the class with the larger proportion loses proportionally more samples from the neighborhood feature subspace than the class with the smaller proportion. As shown in Fig. 3A, the hollow circle class loses 1 sample and the solid circle class loses 4, so each class now accounts for $\frac{2}{4}$, and $GI = 1 - \big((\tfrac{1}{2})^2 + (\tfrac{1}{2})^2\big) = \tfrac{1}{2} > \tfrac{4}{9}$.

2. NGI remains unchanged when the samples in the neighborhood feature subspace do not change, or when the classes lose samples in proportion to their shares. As shown in Fig. 3B, the hollow circle class still accounts for $\frac{1}{3}$ and the solid circle class for $\frac{2}{3}$, so GI is unchanged.

3. NGI decreases when the class with the larger proportion loses proportionally fewer samples than the class with the smaller proportion. As shown in Fig. 3C, the hollow circle class loses 2 samples (now $\frac{1}{5}$) and the solid circle class loses 2 (now $\frac{4}{5}$), so $GI = 1 - \big((\tfrac{1}{5})^2 + (\tfrac{4}{5})^2\big) = \tfrac{8}{25} < \tfrac{4}{9}$.
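The three cases can be checked with a few lines of arithmetic. This is a small verification sketch (the helper name is illustrative); the counts correspond to the 3-hollow/6-solid starting configuration of Fig. 2:

```python
def gini_from_counts(counts):
    """GI = 1 - sum p_i^2 for a list of per-class sample counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

base  = gini_from_counts([3, 6])  # 1/3 hollow, 2/3 solid -> 4/9
case1 = gini_from_counts([2, 2])  # hollow -1, solid -4   -> 1/2  (increase)
case2 = gini_from_counts([2, 4])  # proportional removal  -> 4/9  (unchanged)
case3 = gini_from_counts([1, 4])  # hollow -2, solid -2   -> 8/25 (decrease)
print(base, case1, case2, case3)
```

The printed values reproduce the three directions of change described above.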

Impact of neighborhood radius on NGI
In addition to the influence of feature subsets on NGI, the size of the neighborhood radius also affects the class distribution within the neighborhood feature subspace, and consequently the magnitude of NGI. We therefore examined the impact of varying neighborhood radius sizes on NGI. We set the neighborhood radius to range from 0 to 1 with a step size of 0.025; the relationship between the neighborhood radius and NGI is depicted in Fig. 4. Figure 4 illustrates this relationship for a fixed feature subset: the x-axis is the size of the neighborhood radius, and the y-axis is the NGI at that radius. In Fig. 4A, the feature subset consists of 10 continuous features, namely [46, 8, 3, 2, 44, 53, 59, 24, 25, 42], sourced from the "Sonar" dataset in the UCI Machine Learning Repository. In Fig. 4B, the feature subset comprises five discrete features, namely [7, 14, 10, 1, 17], sourced from the "anneal" dataset in the UCI Machine Learning Repository.
As the value of $r$ gradually increases, the number of samples within the neighborhood range also increases, leading to a rise in impurity. When $r$ is relatively small, the change in the number of samples within the neighborhood is small, and the newly added samples are mostly from the same category; consequently, the change curve remains relatively stable. When $r$ exceeds a certain threshold (e.g., 0.195 in Fig. 4A), the category labels of the newly added samples start to deviate from those of the original samples. This changes NGI, which eventually converges to the GI of the entire dataset. While the overall trend is increasing as the neighborhood radius enlarges, the change is not necessarily monotonic. The reasons behind the variation of NGI with $r$ are analogous to the reasons for its variation with the size of the feature subset and will not be reiterated here.

Neighborhood Class Gini index
In the context of neighborhood rough sets based on decision tables, the upper approximation of a category is the set of samples whose neighborhoods intersect that category. This sample set includes all samples from the current category and some from others. Clearly, the fewer categories represented in the upper approximation and the fewer samples from other categories, the higher the purity of the category's upper approximation. Higher purity indicates that the corresponding features describe that category more accurately, making it easier to distinguish from others. If the upper approximations of all categories have high purity, all classes within the dataset can be better distinguished, and the corresponding features are more important.
Following Definition 1, the impurity of the upper approximation $\overline{R}_B^r(E_i)$ of a class $E_i$ is its Gini index, where $p_j$ is the proportion of the $j$th category within the upper approximation. Over the entire decision table, the impurity is the average of the impurities of all category upper approximations, which we call the Neighborhood Class Gini index (NCGI):

$$NCGI_B^r(D) = \frac{1}{r} \sum_{i=1}^{r} GI\big(\overline{R}_B^r(E_i)\big).$$

Similar to NGI, the magnitude of NCGI is also influenced by the neighborhood radius $r$ and the feature subset $B$. The following comparison illustrates how the two evaluation metrics change with the number of features and with the neighborhood radius. The data in Fig. 5 correspond to the data in Fig. 1, and the data in Fig. 6 correspond to those in Fig. 4.
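NCGI can be sketched as follows: for each class, collect its upper approximation (every sample whose neighborhood intersects the class) and average the Gini impurities. This is an illustrative reconstruction from the text's description, not the authors' code:

```python
import numpy as np

def ncgi(X, y, B, r):
    """NCGI: mean Gini impurity of the upper approximation of each decision
    class, i.e. of all samples whose r-neighborhood on B intersects it."""
    XB = X[:, B]
    # pairwise Euclidean distances on the attribute subset B
    d = np.sqrt(((XB[:, None, :] - XB[None, :, :]) ** 2).sum(axis=2))
    classes = np.unique(y)
    total = 0.0
    for c in classes:
        # upper approximation: samples with at least one class-c neighbor
        upper = np.where((d[:, y == c] <= r).any(axis=1))[0]
        _, counts = np.unique(y[upper], return_counts=True)
        p = counts / counts.sum()
        total += 1.0 - np.sum(p ** 2)
    return total / len(classes)

X = np.array([[0.0], [0.1], [0.5], [0.9], [1.0]])
y = np.array([0, 0, 1, 1, 1])
print(ncgi(X, y, [0], 0.15))  # tight radius: pure upper approximations -> 0.0
```

As with NGI, enlarging the radius pulls foreign samples into the upper approximations and raises the impurity.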
In the case of continuous features, the trend of NCGI with changing $r$ closely resembles that of NGI, displaying relatively smooth changes. NCGI exhibits lower overall sensitivity to variations in the number of features and the neighborhood radius, yet higher sensitivity within certain intervals, such as when $r$ ranges from 0.2 to 0.375 in Fig. 5A. This phenomenon stems from NGI being rooted in the sample neighborhood, whose class distribution changes as the neighborhood radius expands, whereas NCGI assesses feature importance from a class neighborhood perspective. As the neighborhood radius expands, the number of samples within the neighborhood increases. However, when $r$ is small, the newly added samples within a neighborhood share the same category as the current sample, so the category distribution in the upper approximation remains unchanged. When $r$ is large enough, the upper approximation of each class encompasses all samples in the dataset, so NCGI equals the overall GI and ceases to change with the neighborhood radius.
In the case of discrete data, as the neighborhood radius varies, a sudden influx of samples within the neighborhood range can significantly alter the class distribution, causing larger fluctuations in the change curve, as is particularly noticeable in Fig. 5B. Overall, however, NCGI changes with smaller amplitude than NGI.
The number of samples within the neighborhood range gradually decreases as features are added. For continuous features, the reduction in sample count is relatively smooth, as shown in Fig. 6A; consequently, the class distribution of the neighborhoods changes gradually as well. As the purity within the neighborhood increases, GI decreases until it converges to 0. For categorical features, the introduction of new features can exert a substantial influence on the class distribution within the neighborhood, leading to larger fluctuations, as shown in Fig. 6B. This is particularly evident upon the inclusion of the 11th feature, where both NGI and NCGI exhibit a sharp decline, implying that this feature enhances the purity within the neighborhood and facilitates the differentiation of categories. Beyond the 11th feature, the results for sample neighborhoods and class neighborhoods diverge: the remaining features decrease the impurity within the sample neighborhood but, paradoxically, increase the impurity within the class neighborhood.

Feature selection
Definition 3: Given a decision table $\langle U, A, D \rangle$, $B \subseteq A$, and $a_i \in A - B$, the importance of $a_i$ with respect to $B$ is calculated as

$$SIG(a_i, B, D) = GI_B^r(D) - GI_{B \cup \{a_i\}}^r(D),$$

where $GI_B^r$ stands for either the NGI or the NCGI proposed in this article. Definition 3 defines the importance of a feature $a_i$ relative to a given feature subset $B$: with $B$ known, we add $a_i$ to $B$ and observe the resulting GI (i.e., NGI or NCGI). If the GI decreases, $a_i$ is a crucial feature relative to $B$. Conversely, if the GI remains unchanged or increases, $a_i$ is redundant relative to $B$, or even irrelevant with respect to the decision table $\langle U, A, D \rangle$. To achieve better classification performance, we aim to select at each step the feature $a_i$ that is most crucial relative to $B$. We therefore design two feature selection algorithms using a forward greedy approach: the heuristic algorithm based on the Neighborhood Gini index (HANGI) and the heuristic algorithm based on the Neighborhood Class Gini index (HANCGI). The two algorithms differ only in how $SIG(a_i, B, D)$ is calculated; their processes are illustrated in Fig. 7.
In HANGI and HANCGI, the algorithm takes as input a decision table $\langle U, A, D \rangle$, a neighborhood radius $r$, and a minimum threshold $\beta$ for the importance of candidate features relative to the reduced subset. The reduced subset and the candidate feature subset are then initialized. If the candidate feature subset is empty, the current reduced subset is output directly. Otherwise, all candidate features are iterated through, and the importance of each candidate feature with respect to the reduced subset is computed using NGI or NCGI, denoted $SIG(a_i, red, D)$. The feature with the highest importance is selected, and its importance is recorded as $SIG_{max}$. If $SIG_{max} > \beta$, this feature is removed from the candidate subset and added to the reduced subset, and the candidate subset is revisited. The reduced subset is output once the candidate subset is empty or $SIG_{max} \le \beta$. Assuming a dataset contains $n$ samples, $m$ features, and $r$ categories, the worst-case number of feature evaluations over all iterations is $(m^2 + m)/2$. Determining the neighborhood relation between samples takes $n(n-1)/2$ distance computations, and computing the Gini index within a neighborhood takes $nr$ operations. Therefore, the time complexity of both the NGI and NCGI forward greedy feature selection algorithms is $O(m^2 n^2)$.
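The forward greedy loop described above can be sketched compactly. This is an illustrative skeleton, not the authors' implementation: names are assumptions, the NGI routine restates Definition 1, and the empty subset's impurity is taken as the dataset GI (with no features, every neighborhood is all of $U$):

```python
import numpy as np

def ngi(X, y, B, r):
    """Mean Gini impurity of every sample's r-neighborhood on subset B."""
    XB = X[:, B]
    total = 0.0
    for i in range(len(X)):
        nbr = np.sqrt(((XB - XB[i]) ** 2).sum(axis=1)) <= r
        _, counts = np.unique(y[nbr], return_counts=True)
        p = counts / counts.sum()
        total += 1.0 - np.sum(p ** 2)
    return total / len(X)

def dataset_gini(y):
    """GI of the whole dataset: the impurity of the empty feature subset."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def forward_greedy(X, y, r, beta, gi_func):
    """HANGI/HANCGI skeleton: repeatedly add the candidate feature with the
    largest importance SIG(a, red, D) = GI_red - GI_{red + a}, stopping when
    SIG_max <= beta or no candidates remain."""
    red, candidates = [], list(range(X.shape[1]))
    current = dataset_gini(y)
    while candidates:
        sigs = [current - gi_func(X, y, red + [a], r) for a in candidates]
        best = int(np.argmax(sigs))
        if sigs[best] <= beta:
            break
        current -= sigs[best]            # GI of the enlarged subset
        red.append(candidates.pop(best))
    return red

# Feature 0 separates the classes; feature 1 is noise.
X = np.array([[0.0, 0.9], [0.1, 0.1], [0.9, 0.8], [1.0, 0.2]])
y = np.array([0, 0, 1, 1])
print(forward_greedy(X, y, r=0.2, beta=0.0, gi_func=ngi))  # → [0]
```

Swapping `gi_func` for an NCGI routine turns the same skeleton from HANGI into HANCGI.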
HANGI and HANCGI have two parameters, $r$ and $\beta$. Parameter $r$ controls the neighborhood radius, which determines the granularity of the neighborhood granules. Parameter $\beta$ is a threshold that stops the algorithm when the reduction of the GI falls below a given value. Theoretically, the optimal values for these two parameters should be searched over the dataset's entire parameter space. Fortunately, as discussed in Hu et al. (2008, 2011), for two-parameter algorithms such as the neighborhood rough set model, near-optimal performance can be obtained by fixing one parameter at a particular value and searching the other over the whole space. Since an evaluation metric of the same magnitude carries different meanings in different algorithms, all $\beta$ values in all algorithms are set to 0, meaning that a feature is added only if it yields some improvement. Accordingly, in the experimental analysis section, the parameter $\beta$ is fixed at 0, and the optimal neighborhood radius $r$ is searched within the interval [0, 1] with a step size of 0.025.
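The radius search just described amounts to a one-dimensional grid search. A minimal sketch, where `run(r)` is an assumed callback that performs feature selection at radius $r$ and returns the resulting accuracy:

```python
def tune_radius(run, radii=None):
    """Grid search for the neighborhood radius: evaluate run(r) at each
    candidate r (step 0.025 over (0, 1), as in the text) and keep the best."""
    if radii is None:
        radii = [round(0.025 * k, 3) for k in range(1, 40)]  # 0.025 .. 0.975
    scores = {r: run(r) for r in radii}
    best_r = max(scores, key=scores.get)
    return best_r, scores[best_r]

# Toy stand-in whose "accuracy" peaks at r = 0.3
best_r, best_acc = tune_radius(lambda r: 1.0 - abs(r - 0.3))
print(best_r, best_acc)  # → 0.3 1.0
```

In the actual experiments, `run` would wrap feature selection plus the cross-validated classifier evaluation described in the next section.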

EXPERIMENTAL ANALYSIS AND DISCUSSION
In this section, we conduct experiments to validate the effectiveness and stability of the proposed methods. We select four classic NRS-based feature importance evaluation metrics to form corresponding forward greedy feature selection algorithms: Neighborhood Rough Set Dependency (HANRS) (Hu et al., 2008), Neighborhood Entropy (HANRE) (Hu et al., 2011), Neighborhood Discrimination Index (HANDI) (Wang et al., 2018), and Neighborhood Self-Information (HANSI) (Wang et al., 2019b). We compare these algorithms with the two proposed methods. The stopping parameter $\beta = 0$ is employed as the termination condition for all of these algorithms.
All datasets are sourced from the UCI Machine Learning Repository, and their specific descriptions are provided in Table 1, where "Continuous" and "Categorical" denote the number of continuous and categorical features in each dataset. Before feature selection, all attributes are normalized to the interval [0, 1], and missing values are filled using the mean.
We compare the number of selected features and the corresponding classification accuracy to evaluate the algorithms' performance comprehensively. We employ four classical classifiers, support vector classifier (SVC), K-nearest neighbors (KNN), Extreme Gradient Boosting (XGBoost), and artificial neural network (ANN), to assess the performance of the feature selection algorithms. Since our primary focus is evaluating the feature selection algorithms, default parameter settings from the scikit-learn library are used for SVC and ANN, and XGBoost also uses default parameters. For the KNN classifier, K is set to 3.
Ten-fold cross-validation is employed to perform feature selection on these datasets. Specifically, for a given neighborhood radius $r$, stopping parameter $\beta$, and dataset, the dataset is randomly divided into ten parts, with nine parts used as the training set and one as the test set. During the training phase, feature selection is performed on the training set to identify an optimal feature subset, which is then used to extract a sub-dataset from the original dataset. During the testing phase, ten-fold cross-validation is applied to the sub-dataset, computing the accuracy of the four classifiers. Finally, the mean of the accuracy values from the four classifiers serves as the ultimate evaluation metric, providing a comprehensive assessment of feature selection effectiveness across the entire dataset.
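The second-stage evaluation on a selected subset can be sketched with scikit-learn. This is an illustrative reduced version of the protocol (two classifiers instead of the article's four, and the helper name, demo dataset, and feature indices are assumptions):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate_subset(X, y, subset, classifiers=None):
    """Ten-fold cross-validation on the sub-dataset restricted to the selected
    features; returns the mean accuracy over the supplied classifiers."""
    if classifiers is None:
        classifiers = [SVC(), KNeighborsClassifier(n_neighbors=3)]
    Xs = X[:, subset]
    scores = [cross_val_score(clf, Xs, y, cv=10).mean() for clf in classifiers]
    return float(np.mean(scores))

data = load_iris()
acc = evaluate_subset(data.data, data.target, [2, 3])  # petal length & width
print(round(acc, 3))
```

In the article's setup, the subset passed in would come from one of the six feature selection algorithms run on the training folds.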

Training parameters
In NRS-based models, the size of the neighborhood granule significantly impacts the results, so determining it is essential to achieving optimal experimental outcomes. We therefore employ ten-fold cross-validation with a step size of 0.025 over the range (0, 1) (Wang et al., 2019b) to obtain the optimal neighborhood radius parameter $r$ for each algorithm. (For the "Spambase" dataset, the search range is (0, 0.225).) We use four datasets and one algorithm to illustrate the selection process: Fig. 8 displays the variation of classification accuracy with the neighborhood radius for different datasets, using NGI as the evaluation metric.
Evidently, the neighborhood radius has a pronounced impact on classification accuracy. As the parameter changes, the four datasets exhibit varying accuracy levels across all classifiers. We select the radius corresponding to relatively higher accuracy across all classifiers as the optimal radius; for instance, in the "Anneal" dataset, $r = 0.05$ is deemed optimal. Using the same training methodology, we determine the optimal neighborhood radius for each algorithm on each dataset, as presented in Table 2. In subsequent comparisons of algorithm performance, the neighborhood radius parameters are set according to this table.
In Table 2, the first column represents the dataset name, and each subsequent column header corresponds to the algorithm's name.The values inside the table indicate the optimal neighborhood radius for each algorithm.
It is important to note that for the "Voting_records" dataset, HANRS cannot select features at any neighborhood radius. Therefore, we set its radius to the minimum value of 0.025 for subsequent comparisons.

Evaluation of feature validity
In the context of classification problems, feature selection algorithms aim to extract the most representative and discriminative features from the original feature set, creating a more compact subset. If a classification model constructed from the selected feature subset achieves higher accuracy, these features are more effective for the classification task on the given data. Based on the optimal neighborhood radius, the feature selection algorithms (HANRS, HANRE, HANDI, HANSI, HANGI, and HANCGI) were applied to the 16 datasets, and the number of features selected is presented in Table 3, where "Original" denotes the number of features in the original dataset and each subsequent column gives the average number of features selected by each algorithm over ten runs.
Underscored numbers indicate the fewest selected features relative to the other algorithms. Notably, HANRS did not select any features on the "Voting_records" dataset and is therefore not included in the comparison.
Comparing the numbers of selected features in Table 3, we observe that HANGI and HANCGI successfully achieve feature reduction. There is no significant difference in the average number of features retained among the six algorithms. HANMI shows the strongest reduction capability, while HANSI demonstrates the weakest. Across the 16 datasets, HANGI reduces the average number of features to 18, ranking fifth among the six algorithms, while HANCGI reduces the average number of features to 11, ranking second.
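The principle behind the validity evaluation — that a well-chosen subset can beat the full feature set — can be demonstrated with a minimal, self-contained sketch. This is not the paper's experimental pipeline: the data are synthetic, the classifier is a bare 1-nearest-neighbour implementation rather than the SVC/KNN/XGBoost/ANN models used below, and the "selected" subset is fixed by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn1_accuracy(X_train, y_train, X_test, y_test):
    """Accuracy of a minimal 1-nearest-neighbour classifier."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances to training set
        preds.append(y_train[np.argmin(d)])       # label of the closest training sample
    return float(np.mean(np.array(preds) == y_test))

# synthetic data: feature 0 separates the classes, features 1-9 are pure noise
n = 200
y = rng.integers(0, 2, n)
informative = y[:, None] * 3.0 + rng.normal(0, 0.5, (n, 1))
noise = rng.normal(0, 3.0, (n, 9))
X = np.hstack([informative, noise])

tr, te = slice(0, 150), slice(150, n)
acc_all = knn1_accuracy(X[tr], y[tr], X[te], y[te])          # all 10 features
acc_sel = knn1_accuracy(X[tr, :1], y[tr], X[te, :1], y[te])  # "selected" subset {0}
```

Here `acc_sel` is at least as high as `acc_all`: discarding the nine noise features removes the distance distortion they cause, which is exactly the effect the feature selection algorithms aim for on the real datasets.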
Next, we employ SVC, KNN, XGBoost, and ANN to train on the selected feature subsets and compare their classification accuracies, as presented in Tables 4-7. Table 8 presents the mean accuracy of each dataset across the four classifiers.

Note:
Underlines represent that the results corresponding to all neighborhood radii under an algorithm are exactly the same.

In these tables, underscored numbers indicate the best classification accuracy achieved through feature reduction relative to the other algorithms. Tables 4-8 demonstrate that the HANGI and HANCGI algorithms proposed in this article effectively improve classification accuracy. On average, HANGI improved classification accuracy on 14 datasets across the four classifiers, with improvements exceeding 10% on four datasets and an average accuracy improvement of 7%. HANCGI improved classification accuracy on 12 datasets, with improvements exceeding 10% on five datasets and an average accuracy improvement of 6.6%. Among the classifiers, XGBoost showed the smallest improvement in classification accuracy, with an average decrease of 0.8% for the features selected by HANCGI. This is because XGBoost not only acts as a classifier but is also an embedded feature selection model, automatically selecting features during the classification process to enhance accuracy. The results in Table 6 indicate that XGBoost's feature selection results are similar to those of HANGI and HANCGI, with no significant difference in overall classification accuracy. On some datasets, the proposed algorithms improved the classification accuracy of XGBoost by removing redundant and irrelevant features; for example, on the Toxicity dataset, both HANGI and HANCGI improved classification accuracy with XGBoost.
For most of the 16 datasets, classification accuracy improved after feature selection with the six feature selection algorithms. Although HANMI exhibited the strongest feature reduction capability, it had the poorest classification accuracy across the four classifiers, suggesting that HANMI may have discarded some crucial features during the selection process. HANDI, HANSI, and the proposed HANGI and HANCGI all achieved relatively similar classification accuracy across the four classifiers, but there were significant differences on some datasets. For example, on the DARWIN dataset, the features selected by HANCGI performed significantly better with SVC than those selected by HANDI, and HANCGI also selected far fewer features than HANDI. On the "Parkinsons" dataset, despite a similar number of selected features, HANCGI exhibited significantly lower classification accuracy than HANSI with XGBoost.
It is worth mentioning that on the "Spambase" dataset, the features selected by all six algorithms achieved the same XGBoost accuracy as the original dataset, even though the number of selected features differed among the algorithms. This suggests that all six feature selection algorithms captured the features that XGBoost considers crucial, with HANDI selecting the fewest features and HANGI and HANCGI following closely behind.
Through comprehensive analysis of the experimental results, it is evident that the proposed methods often select fewer features while maintaining or improving classification accuracy. This suggests that the proposed methods can effectively eliminate more redundant attributes. Next, we evaluate the statistical significance of the performance differences among the six feature selection algorithms through hypothesis testing. We first examine whether there are significant differences among the six algorithms on these datasets using the Friedman test statistic (Friedman, 1940):

χ²_F = (12n / (k(k+1))) (Σ_{i=1}^{k} r_i² − k(k+1)²/4),  F = ((n−1) χ²_F) / (n(k−1) − χ²_F),

where r_i represents the average rank of the i-th algorithm, n denotes the number of datasets, and k represents the number of algorithms. The random variable F follows an F-distribution with k−1 and (k−1)(n−1) degrees of freedom. The critical value of the F-distribution at significance level α can be obtained by invoking 'scipy.stats.f.ppf(1-α, k-1, (k-1)*(n-1))' in Python 3.9. Thus, for α = 0.05, we obtain the critical value F(5, 75) = 2.337. If the performance of the six algorithms were similar, the value of the Friedman statistic should not exceed the critical value F(5, 75); otherwise, there is a significant difference in feature selection performance among the six algorithms. Table 9 displays the ranks of the six algorithms' selected features across the four classifiers, arranged in ascending order; larger values indicate better classification performance. According to the Friedman test, we obtain F = 2.507 > 2.337 for the four classifiers. Evidently, there is a significant difference among the six algorithms on the four classifiers.
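The Friedman computation can be reproduced with a few lines of Python. The average ranks below are illustrative placeholders (they sum to k(k+1)/2 = 21 as valid average ranks must), not the actual values from Table 9; only the critical-value call follows the paper's setup.

```python
import numpy as np
from scipy.stats import f as f_dist

# average ranks of the k = 6 algorithms over n = 16 datasets
# (illustrative values; the paper's actual ranks are in Table 9)
ranks = np.array([2.5, 2.9, 3.1, 3.4, 4.2, 4.9])
k, n = len(ranks), 16

# Friedman's chi-square statistic and its F-distributed refinement (Friedman, 1940)
chi2 = 12 * n / (k * (k + 1)) * (np.sum(ranks ** 2) - k * (k + 1) ** 2 / 4)
F = (n - 1) * chi2 / (n * (k - 1) - chi2)

# critical value at significance level alpha = 0.05: F(5, 75) ≈ 2.337
crit = f_dist.ppf(1 - 0.05, k - 1, (k - 1) * (n - 1))
```

If `F > crit`, the null hypothesis that all six algorithms perform alike is rejected and a post hoc test is warranted.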
At this point, further post hoc tests are necessary to examine the differences among the six algorithms. The post hoc test employed here is the Nemenyi test. This test requires determining the critical distance between average rank values, defined by

CD_α = q_α √(k(k+1) / (6n)),

where q_α is the critical tabulated value for this test. From Demšar (2006), q_0.05 = 2.850 when the number of algorithms is 6 and α = 0.05. It follows from the above formula that CD_0.05 = 1.885 (k = 6, n = 16). If the distance between two algorithms' average ranks exceeds the critical distance CD_0.05, there is a significant difference between the two algorithms.
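The critical distance above is a one-line computation; the values plugged in are exactly those stated in the text (q_0.05 = 2.850 from Demšar, 2006, with k = 6 and n = 16).

```python
import math

def nemenyi_cd(q_alpha, k, n):
    """Critical distance for the Nemenyi post hoc test: q_alpha * sqrt(k(k+1)/(6n))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# q_0.05 = 2.850 for k = 6 algorithms; n = 16 datasets
cd = nemenyi_cd(2.850, 6, 16)
```

Evaluating this gives CD_0.05 ≈ 1.885, matching the value used in the comparison below.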
It is easy to observe from Table 9 that the average rank distances of HANSI and HANDI from HANMI are greater than 1.885, indicating a significant difference in performance. However, the average rank distances between the proposed HANGI and HANCGI and the other four algorithms are all less than 1.885. This suggests that the algorithms proposed in this article do not exhibit a significant difference in average performance from the other four algorithms across the four classifiers.

Evaluation of algorithm stability
The stability of a feature selection algorithm refers to its ability to produce consistent or similar feature selection results when the dataset undergoes certain perturbations, such as removing or adding some samples. To assess stability, we simulate removing a portion of the samples as follows. First, the samples are randomly divided into ten subsets. In each iteration, nine subsets are chosen and the feature selection algorithm is applied to obtain an optimal feature subset. This process is repeated ten times, yielding ten feature subsets. Features that appear in at least five of the ten subsets are then selected by a majority voting principle to form the final feature subset. Table 10 shows the number of features appearing at least once in each dataset's ten feature selection results, and Table 11 shows the number of features appearing at least five times. Table 12 presents the ratio of feature numbers before and after voting, where a higher ratio indicates higher stability of the corresponding feature selection algorithm.
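The voting step of this protocol can be sketched directly; the function and its name are illustrative, and the ratio computed matches the before/after-voting ratio reported in Table 12 under the assumption that it divides the number of features surviving the vote by the number that appeared at least once.

```python
from collections import Counter

def vote_features(subsets, min_votes=5):
    """Majority-vote aggregation over repeated feature selections.

    subsets: the ten feature subsets (iterables of feature indices), one per
    leave-one-fold-out run. Features kept in at least `min_votes` runs form
    the final subset.
    """
    counts = Counter(f for s in subsets for f in set(s))
    appeared = set(counts)                                    # features seen at least once
    final = {f for f, c in counts.items() if c >= min_votes}  # features kept by voting
    ratio = len(final) / len(appeared) if appeared else 0.0   # stability ratio
    return final, ratio

# toy example: features 0 and 1 are selected in most runs, the others rarely
runs = [[0, 1], [0, 1], [0, 1, 7], [0, 1], [0, 2],
        [0, 1], [0, 1], [0, 3], [0, 1, 7], [0, 1]]
final, ratio = vote_features(runs)   # final subset {0, 1}
```

In this toy run, five features appear at least once but only two survive the five-vote threshold, giving a stability ratio of 0.4.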
If a feature occurs repeatedly across the ten feature subsets, it has a high level of reproducibility. The frequently occurring features are selected through voting; if the final feature subset retains more of them, the algorithm is more stable. Table 12 presents the stability of the six algorithms across the datasets, with the most stable algorithm for each dataset underlined.
Table 12 shows that HANSI has the highest stability, reaching 0.64, followed by HANCGI with a stability of 0.59. The remaining algorithms have relatively similar stability, with HANMI showing the poorest. This difference can be attributed to the fact that HANSI and HANCGI assess feature importance based on the class distribution within the class neighborhood, whereas the other algorithms evaluate it based on the class distribution within the sample neighborhood. When some samples are perturbed or removed, the class distribution within their respective neighborhoods inevitably changes, influencing the assessment of feature importance. Algorithms that assess feature importance from the class distribution within sample neighborhoods focus primarily on the local class distribution, so local perturbations directly impact the evaluation, resulting in lower stability. In contrast, HANCGI and HANSI pay more attention to the class distribution within the class neighborhood. When local interference occurs, it first affects the neighborhood of the corresponding class, where it is averaged out by the unaffected samples of that class; the subsequent feature importance assessment over class neighborhoods is then further averaged out by the other, unaffected classes. Therefore, these algorithms exhibit higher stability. HANCGI considers the distribution of all classes within the class neighborhood, while HANSI only considers whether the classes within the class neighborhood match the primary class. Therefore, HANSI exhibits stronger robustness to disturbances.
The experimental results above demonstrate that the algorithms proposed in this article exhibit high stability and strong feature reduction capabilities, particularly in removing redundant and irrelevant features, resulting in improved classification accuracy on most datasets. HANSI demonstrates the highest stability and achieves the best classification performance across the four classifiers, but it has the weakest feature reduction capability. HANGI, proposed in this article, has a stronger feature reduction capability than HANSI with slightly lower stability, and its selected features exhibit classification performance only 0.06% worse than HANSI's on average. HANCGI has a markedly stronger feature reduction capability than HANSI with slightly lower stability, and its selected features perform only 0.3% worse than HANSI's on average.
In conclusion, compared to the four classical feature selection algorithms based on neighborhood rough sets, the algorithms proposed in this article outperform three of them and trade strengths and weaknesses with HANSI.

CONCLUSIONS
The assessment of feature subset importance is crucial in classification learning and feature selection. Many metrics are available for evaluating feature importance, and the Gini index has already proven effective in classification learning and feature selection. In this article, we introduce the Gini index into the framework of neighborhood rough sets and propose two metrics for measuring the importance of feature subsets. These metrics apply the Gini index at the level of sample neighborhoods and class neighborhoods, respectively, assessing the importance of a feature subset by the purity of the class distributions within those neighborhoods. We then examine the properties of the two metrics and their relationships with attributes. Leveraging the assessment of candidate features' importance relative to the current feature subset, we put forth two greedy heuristic algorithms to eliminate redundant and irrelevant features.
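The sample-neighborhood idea can be illustrated with a minimal sketch. This is an illustrative reading of "Gini impurity over sample neighborhoods", not the paper's exact NGI definition (the formal definitions appear in the properties section); the function name, the Euclidean metric, and the averaging over samples are assumptions made for the example.

```python
import numpy as np

def neighborhood_gini(X, y, radius):
    """Illustrative neighborhood Gini impurity (not the paper's exact NGI).

    For each sample, collect its neighborhood (samples within `radius` under
    the Euclidean distance) and compute the Gini impurity of the class labels
    inside it; the index is the mean impurity over all samples. Lower values
    mean purer neighborhoods, i.e. a more discriminative feature subset.
    """
    impurities = []
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        labels = y[d <= radius]                     # classes inside the neighborhood
        _, cnt = np.unique(labels, return_counts=True)
        p = cnt / cnt.sum()
        impurities.append(1.0 - np.sum(p ** 2))     # Gini impurity of the neighborhood
    return float(np.mean(impurities))

# two well-separated classes give pure neighborhoods at a small radius
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
gini = neighborhood_gini(X, y, radius=0.5)   # every neighborhood is pure -> 0.0
```

At a huge radius every neighborhood contains both classes equally, and the index rises to 0.5, illustrating how the radius parameter controls the granularity of the evaluation.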
To comprehensively assess the performance of the proposed algorithms, we conducted comparative experiments on 16 UCI datasets spanning various domains, including the industrial, food, medical, and pharmaceutical fields, against four classical feature selection algorithms based on neighborhood rough sets. The experimental results demonstrate that HANGI and HANCGI effectively remove a substantial portion of redundant and irrelevant features, enhancing classification accuracy while exhibiting high stability. Across the 16 UCI datasets, the average classification accuracy improved by more than 6%, with five datasets showing an improvement exceeding 10%.
Compared to the four classical feature selection algorithms based on neighborhood rough sets, the two proposed algorithms showed no statistically significant difference in the average classification accuracy of the selected features across the four classifiers. However, HANCGI selected fewer features while maintaining the same level of classification accuracy, indicating a superior capability to eliminate redundant and irrelevant features. Additionally, the proposed algorithms demonstrated high stability, with performance slightly below that of HANSI.
In conclusion, the algorithms proposed in this article outperformed three classical feature selection algorithms based on neighborhood rough sets and had their own strengths and weaknesses in comparison to HANSI.
It is worth noting that in the section discussing properties, we explored the relationships between NGI, NCGI, and feature subsets. Our examination reveals that while expanding the feature subset generally decreases the overall Gini index, this trend is not strictly monotonic. Hence, the feature subset found by the proposed forward greedy algorithm is a local optimum rather than a global one. Future research may explore more sophisticated feature selection mechanisms, such as intelligent optimization algorithms, to find globally optimal feature subsets and attain superior results.

Figure 8
Variation of classification accuracies with a neighborhood radius. Full-size DOI: 10.7717/peerj-cs.1711/fig-8

Based on this principle, this article proposes the Neighborhood Class Gini index (NCGI). It evaluates features' importance by assessing the impurity of the upper approximation under different feature subsets. The definition of NCGI is provided below. Definition 2: Given a decision table ⟨U, A, D⟩, let B ⊆ A, E_k ∈ U/D (k = 1, 2, …, r), and

Table 1
Description of datasets.
Note: Continuous and categorical respectively denote the numbers of continuous and categorical features in each dataset.

Table 2
Optimal neighborhood radius parameters.

Table 3
Number of selected features.
Note: Underscored numbers indicate the fewest selected features relative to other algorithms.

Zhang et al. (2023), PeerJ Comput. Sci., DOI 10.7717/peerj-cs.1711

Table 4
Average accuracy on SVC.
Note: Underscored numbers indicate the best classification accuracy achieved through feature reduction relative to other algorithms.

Table 5
Average accuracy on KNN.
Note: Underscored numbers indicate the best classification accuracy achieved through feature reduction relative to other algorithms.

Table 6
Average accuracy on XGBoost.
Note: Underscored numbers indicate the best classification accuracy achieved through feature reduction relative to other algorithms.

Table 7
Average accuracy on ANN.
Note: Underscored numbers indicate the best classification accuracy achieved through feature reduction relative to other algorithms.

Table 8
Average accuracy on four classifiers.

Table 9
Rank of the six algorithms with the average accuracy on four classifiers.

Table 10
Number of features before voting.

Table 11
Number of features after voting.

Table 12
Ratio of feature numbers before and after voting.
Note: Underlined numbers indicate higher stability relative to other algorithms.