Article

Metaheuristic Optimization-Based Feature Selection for Imagery and Arithmetic Tasks: An fNIRS Study

by Amad Zafar 1,†, Shaik Javeed Hussain 2,†, Muhammad Umair Ali 1,* and Seung Won Lee 3,*

1 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
2 Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
3 Department of Precision Medicine, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2023, 23(7), 3714; https://doi.org/10.3390/s23073714
Submission received: 2 March 2023 / Revised: 23 March 2023 / Accepted: 30 March 2023 / Published: 3 April 2023
(This article belongs to the Special Issue Monitoring and Sensing in Neuroscience)

Abstract: In recent decades, the brain–computer interface (BCI) has emerged as a leading area of research. Feature selection is vital for reducing the dataset's dimensionality, increasing computing effectiveness, and enhancing BCI performance. Using activity-related features leads to a high classification rate among the desired tasks. This study presents a wrapper-based metaheuristic feature selection framework for BCI applications using functional near-infrared spectroscopy (fNIRS). Here, the temporal statistical features (i.e., the mean, slope, maximum, skewness, and kurtosis) were computed from all the available channels to form a training vector. Seven metaheuristic optimization algorithms were tested for their classification performance using a k-nearest neighbor-based cost function: particle swarm optimization, cuckoo search optimization, the firefly algorithm, the bat algorithm, flower pollination optimization, whale optimization, and grey wolf optimization (GWO). The presented approach was validated on an openly available dataset of motor imagery (MI) and mental arithmetic (MA) tasks from 29 healthy subjects. The results showed that the classification accuracy was significantly improved by utilizing the features selected by the metaheuristic optimization algorithms relative to those obtained from the full set of features. All of the abovementioned metaheuristic algorithms improved the classification accuracy and reduced the feature vector size. GWO yielded the highest average classification rates (p < 0.01) of 94.83 ± 5.5%, 92.57 ± 6.9%, and 85.66 ± 7.3% for the MA, MI, and four-class (left- and right-hand MI, MA, and baseline) tasks, respectively. The presented framework may be helpful in the training phase for selecting the appropriate features for robust fNIRS-based BCI applications.

1. Introduction

Brain–computer interface (BCI) technology enables a direct connection between the brain and an external device by bypassing traditional channels, such as peripheral nerves and muscles [1]. BCIs are designed to provide impaired people (such as those with locked-in syndrome or spinal cord injuries) with new ways to communicate and manage their surroundings. BCIs may also be employed in industries such as gaming, transportation [2], recreation [3], virtual reality, human–machine interfaces [4], and neurological rehabilitation [5,6,7,8]. BCIs are classified into two types: invasive and non-invasive. Invasive BCIs capture brain activity using electrodes placed directly into the brain. This approach provides the highest-quality signals but poses significant risks, including the chance of infection and lasting brain tissue damage. Non-invasive BCIs, by contrast, do not require electrodes to be implanted in the brain. Instead, they capture signals from the scalp or other areas of the head using methods such as electroencephalography (EEG) [9,10], functional magnetic resonance imaging [11,12], magnetoencephalography [13], and functional near-infrared spectroscopy (fNIRS) [14,15]. Non-invasive BCIs are less dangerous and less obtrusive than invasive BCIs but provide lower-resolution signals. The choice between invasive and non-invasive BCIs is determined by the specific application and the trade-off between resolution and invasiveness. Despite these potential benefits, significant technical and clinical hurdles must be overcome before BCIs can be widely adopted and utilized.
Each non-invasive BCI has its own advantages and disadvantages. As a relatively new technique, fNIRS offers a balance between temporal and spatial resolution and a variety of distinctive benefits [16,17,18]. fNIRS measures oxyhemoglobin (HbO/∆HbO) and deoxyhemoglobin (HbR/∆HbR) concentration changes using pairs of near-infrared light sources (650–1000 nm range) whose light penetrates the superficial cortical areas; both absolute and relative concentration changes can be measured [19,20]. Neocortex activation caused by brain stimulation results in increased blood flow and oxygenation levels (reflected by increases in ∆HbO or decreases in ∆HbR). These changes can then be used to provide control signals for fNIRS-based BCI applications.
The development of rapid systems for quicker command decoding is one of the three primary research objectives in the field of BCIs; the other two are maximizing the number of decoded commands and improving the classification performance. To speed up the BCI system, features are extracted using various small window sizes (0–2, 0–2.5, 0–5, and 2–7 s, etc.) [21,22,23]. fNIRS is used with other measuring modalities (e.g., EEG) to increase the number of commands that can be decoded from the brain [24,25]. The selection of channels and features is a key component for enhancing the BCI classification accuracy. In fNIRS-based BCI research, active channel selection has been performed using various methods, such as averaging over a particular region of interest [26,27], computing the Pearson correlation coefficient [28], performing vector phase analyses [21,29,30,31], averaging across all the channels [32,33,34], applying baseline corrections [24], calculating contrast-to-noise ratios [35], using t-statistics and z-statistics [23,30,36], using a least absolute shrinkage and selection operator homotopy-based sparse representation [37], and utilizing joint-channel-connectivity methods [38].
Various types of features have been used to decode fNIRS signals in the literature, including statistical features [39], graph-based features [40], Mel frequency cepstral coefficients [41], vector phase analysis-based features [29], and frequency domain-based features [42]. Prior studies have evaluated various two- and three-feature combinations of temporal statistical features to determine the best combinations for classifying different activities [43,44]. However, selecting the best features by visual inspection can be challenging, particularly when all the channels are used for the feature extraction and classification. Various studies have proven the impact of feature selection on BCIs [29,45,46]. It helps to reduce the dataset's dimensionality, increase the computing effectiveness, and enhance the classification performance among the tasks. By identifying the features that are related to the activity, feature selection increases the robustness of the classification system. Therefore, in one study [47], the authors used a genetic algorithm (GA) to determine the best window size for the computation of the temporal features; their suggested framework improved the model's classification capability. A recent study [45] presented a systematic approach for choosing subject-specific features, using filter-based techniques to remove the redundant features and boost the classification efficiency. Similarly, the "ReliefF" filter merged with a GA was used to classify upper limb movement intentions [48]. In another study, a minimum-redundancy maximum-relevance filter was used to reduce the feature vector size and enhance the classification performance, and the relevant features were also identified using a sparse representation classification method [49]. Recently, Dokeroglu et al. [50] reviewed various wrapper-based feature selection methods. Wrapper approaches examine the performance of each subset of features and combine a metaheuristic optimization algorithm with a classifier. Except for GAs, other wrapper-based feature selection methods have not been explored for fNIRS-based BCIs in the literature. The no-free-lunch (NFL) theorem states that no single heuristic is sufficient to address every optimization problem; most metaheuristics excel in at least one area compared to the others. The optimal subset of features from several domains may therefore not always be discovered using a single metaheuristic algorithm.
Therefore, herein, wrapper-based approaches operating in conjunction with metaheuristic optimization algorithms were explored for an fNIRS BCI to improve the accuracy and accelerate the processing. The following are the main highlights of the presented work.
  • First, the data of all the channels of the fNIRS signals were pre-processed to remove physiological noise.
  • The most commonly used statistical temporal features were computed for the fNIRS signals in all the channels.
  • Wrapper approaches with various metaheuristic optimization algorithms, such as particle swarm optimization (PSO), cuckoo search optimization (CSO), the firefly algorithm (FA), the bat algorithm (BA), flower pollination optimization (FPO), the whale optimization algorithm (WOA), and grey wolf optimization (GWO), were applied to observe the classification performance using a k-nearest neighbor (k-NN) approach.
  • The performance of the proposed framework was evaluated using the online motor imagery (MI) and mental arithmetic (MA) datasets.
  • A statistical analysis was also performed to determine the significance of the obtained results.

2. Proposed Framework

This section explains a framework for the feature selection for the fNIRS signals using optimization algorithms. This framework entails choosing the most pertinent and significant features from the fNIRS signals. Here, the acquired fNIRS data were pre-processed to remove the physiological noise, and further details are provided in Section 3. After that, the temporal statistical features were extracted for all the channels in a 10-s window. A wrapper-based feature selection method was applied to retrieve the most important features using PSO, CSO, the FA, the BA, FPO, the WOA, and GWO. All the algorithms were implemented using "Jx-WFST", a wrapper feature selection toolbox in MATLAB (https://www.mathworks.com/matlabcentral/fileexchange/84139-wrapper-feature-selection-toolbox, accessed on 7 February 2023). The k-NN model was selected for the classification, as it is a non-parametric machine learning approach that is accurate, simple, and widely used [51,52]. In this approach, the neighbors' votes determine how a sample is classified: a sample is assigned to the class with the greatest probability among its k closest training samples [53]. Further details on the k-NN can be found in [54]. The framework of the proposed approach is shown in Figure 1.
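To make the wrapper loop concrete, the sketch below shows how a candidate feature subset (a binary mask over the feature vector) could be scored with a k-NN classifier. This is an illustrative Python sketch, not the MATLAB toolbox used here; the function name and the holdout split are assumptions based on the settings described in Section 6.

```python
# Minimal sketch of the wrapper evaluation step: train a k-NN on the
# columns selected by a binary mask and return the test accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def evaluate_subset(X, y, mask, k=5, test_size=0.2, seed=0):
    """X: (trials, features); y: labels; mask: binary vector over features."""
    X_sel = X[:, mask.astype(bool)]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_sel, y, test_size=test_size, random_state=seed, stratify=y)
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)  # accuracy fed into the cost function
```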

3. Experimental Data and Pre-Processing

Here, the fNIRS data of 29 participants from an online dataset were used to validate the presented methodology [55]. The prefrontal, motor, and occipital brain regions were covered by 36 channels formed by fourteen sources and sixteen detectors spaced three centimeters apart, placed according to the international 10-5 system with Fp1, Fp2, Fpz, C3, C4, and Oz as the references. The fNIRS data were measured at a 12.5 Hz sampling rate and were down-sampled to 10 Hz. The dataset was composed of trigger, fNIRS, and EEG data from six different sessions. The dataset was divided into datasets A and B, corresponding to the left-hand motor imagery (LHMI) and right-hand motor imagery (RHMI) sessions (i.e., 1, 3, and 5) and the MA and baseline sessions (i.e., 2, 4, and 6), respectively. There was a 1-min pre-rest period at the start of each session, followed by 20 trials (10 for each activity) and another 1-min post-rest interval. Each trial consisted of a 2-s visual introduction, a 10-s task phase, and a rest period of random duration between 15 and 17 s. Here, only the fNIRS and associated trigger data were used. The fNIRS data (i.e., only ΔHbO in this study) were passed through a 3rd-order Butterworth low-pass filter with a 0.1 Hz cutoff frequency and a Butterworth high-pass filter with a 0.01 Hz cutoff frequency to reduce the physiological noise. Figure 2 shows the fNIRS optode placement and the experimental paradigm [55].
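The described band-pass pre-processing could be reproduced as sketched below. This is an assumption-laden illustration: the order of the high-pass filter is not stated in the text (3rd order is assumed here), and whether zero-phase filtering was used is also unspecified (filtfilt is a common choice for offline fNIRS analysis).

```python
# Sketch of the pre-processing: Butterworth low-pass (0.1 Hz) followed by
# a Butterworth high-pass (0.01 Hz) applied to ΔHbO sampled at 10 Hz.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # sampling rate after down-sampling (Hz)
b_lp, a_lp = butter(3, 0.1 / (FS / 2), btype="low")
b_hp, a_hp = butter(3, 0.01 / (FS / 2), btype="high")   # order assumed

def preprocess(dhbo):
    """dhbo: ΔHbO array of shape (samples, channels)."""
    low = filtfilt(b_lp, a_lp, dhbo, axis=0)   # remove high-frequency noise
    return filtfilt(b_hp, a_hp, low, axis=0)   # remove slow drifts
```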

4. Feature Extraction

The fNIRS trials for both the MI and MA were retrieved after pre-processing, and a 10-s section of each trial was used for the feature extraction. In this investigation, only the five most commonly used statistical features—the mean, peak, slope, skewness, and kurtosis—were retrieved [39,56,57,58]. The mean, skewness, and kurtosis were determined using the following formulas, wherein the peak value is the maximum value, and the slope is obtained using curve fitting.
$$\mu = \frac{1}{N}\sum_{k=k_1}^{k_2} X(k) \tag{1}$$

$$S_x = E_x\!\left[\frac{(X_x - \mu_x)^3}{\sigma^3}\right] \tag{2}$$

$$K_x = E_x\!\left[\frac{(X_x - \mu_x)^4}{\sigma^4}\right] \tag{3}$$
In the above, $X$ represents the fNIRS signal (i.e., ΔHbO), and $\mu$ and $\sigma$ denote the mean and standard deviation, respectively. $S_x$, $K_x$, and $E_x$ are the skewness, kurtosis, and expectation of $X$, respectively.
Herein, all of the fNIRS channels were initially used for the feature extraction. In total, 180 features (36 channels × 5 features) were retrieved for a single trial. Thus, feature selection was necessary to choose the right features, minimize the feature vector size, and enhance the classification performance.
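As an illustration of this step, the sketch below computes the 180-dimensional feature vector for one trial. The slope is taken from a first-order polynomial fit, one plausible reading of the curve-fitting description; `fisher=False` makes the kurtosis match the Pearson definition in Equation (3).

```python
# Sketch: five temporal statistics per channel -> 36 * 5 = 180 features.
import numpy as np
from scipy.stats import kurtosis, skew

def temporal_features(trial, fs=10.0):
    """trial: ΔHbO task window of shape (samples, channels)."""
    t = np.arange(trial.shape[0]) / fs
    mean = trial.mean(axis=0)
    peak = trial.max(axis=0)
    slope = np.polyfit(t, trial, 1)[0]              # linear-fit slope per channel
    skw = skew(trial, axis=0)
    kur = kurtosis(trial, axis=0, fisher=False)     # Pearson kurtosis, Eq. (3)
    return np.concatenate([mean, peak, slope, skw, kur])
```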

5. Feature Selection Method

In the feature selection process, a subset is chosen from the larger set of features to develop the machine learning model. The quality of the created candidate subsets is assessed using a predetermined criterion [59]. The feature selection aims to improve the model's performance, decrease overfitting, and enhance the interpretability. The test classification accuracy was used to validate the outcome of the feature selection method. In general, feature selection algorithms may be divided into three primary categories, i.e., filter, embedded, and wrapper approaches, depending on the assessment criteria and the techniques used to generate the subsets.
Following the assumption that features with a high variance provide the most relevant information, filter techniques are geared toward retaining the features with the greatest variance [60]. These methods typically take little time to execute, although they can select redundant variables.
Embedded methods do not separate the feature selection procedure from the classification method [61]. The feature selection is conducted within the classification process; therefore, it is incorporated as an algorithmic component or a functionality enhancement.
Wrapper algorithms are machine learning techniques that assess the performance of a subset of features using a particular machine learning model (the "wrapper") [62]. The wrapper's objective is to assess how the chosen features affect the model's performance. Based on the evaluation findings, the wrapper algorithm either keeps the current subset of features or searches for a better one; the best subset of features is eventually discovered by repeating this procedure. The wrapper approach concept is presented in Figure 3. This work uses wrapper-based techniques with various metaheuristic algorithms for the optimal feature selection.

5.1. Metaheuristic Algorithms for Wrapper-Based Methods

Metaheuristic algorithms aim to approximate solutions to challenging problems. They are called “meta” because they manage high-level optimization problems by integrating several low-level heuristics [63,64]. PSO, CSO, the FA, the BA, FPO, the WOA, and GWO, among others, are examples of metaheuristic algorithms [50,65].
Learning algorithms are used in wrapper feature selection approaches to assess the classification performance of the generated feature subsets. The metaheuristics serve as the search algorithms to generate new potential optimal subsets [66]. The cost function for all of the optimization algorithms is defined as shown in Equation (4) [67].
$$\min(J) = \alpha\,(1 - \text{Accuracy}) + \beta\left(\frac{\text{no. of selected features}}{\text{no. of total features}}\right) \tag{4}$$
Here, $\alpha$ and $\beta$ are selected with the default values of 0.99 and 0.01, respectively.
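A direct transcription of Equation (4) is shown below as a sketch; it could be paired with the `evaluate_subset` example from Section 2 so that each candidate mask produced by a metaheuristic receives a scalar cost.

```python
# Sketch of the fitness in Equation (4): weighted error rate plus a
# penalty on the fraction of retained features.
def cost(accuracy, n_selected, n_total, alpha=0.99, beta=0.01):
    return alpha * (1.0 - accuracy) + beta * (n_selected / n_total)
```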

5.1.1. Particle Swarm Optimization (PSO)

PSO is a technique for solving complicated optimization problems. It relies on the coordinated behavior of a collection of particles traveling through a solution space while being guided by the experiences of their neighbors to locate the best solution [68,69]. It is inspired by the coordinated activity of a flock of birds or a school of fish. Each particle in PSO represents a potential solution to the problem; the particles are initially placed and moved randomly. The particles adjust their locations ($X$) after each iteration depending on their current velocity ($V$), their own personal best solution (pbest), and the group's overall best solution (gbest). The velocity update combines the particle's current velocity with the pull toward the pbest and gbest solutions and a random element to promote exploration. The process continues until a stopping requirement is satisfied, e.g., achieving a suitable solution quality or completing a predetermined number of iterations [70]. The equations below can be used to determine each particle's velocity and update its position.
$$\left.\begin{aligned} V_{ij}^{t} &= \alpha V_{ij}^{t-1} + a_1 b_{1j}^{t-1}\left[\text{pbest} - X_{ij}^{t-1}\right] + a_2 b_{2j}^{t-1}\left[\text{gbest} - X_{ij}^{t-1}\right] \\ X_{ij}^{t} &= X_{ij}^{t-1} + V_{ij}^{t} \end{aligned}\right\} \tag{5}$$
where $\alpha$, $a_1$, $a_2$, $b_1$, and $b_2$ are the parameters of PSO; $b_1$ and $b_2$ can be randomly selected. Further details about PSO can be found in [71]. Algorithm 1 shows the pseudo code of PSO.
Algorithm 1 Pseudo code of PSO [71]
1. Generate a random population of particles (N)
2. while the termination condition is not satisfied do
3. for each particle do
4. Evaluate the fitness of all the particles using the fitness function
5. Update the velocity and position of all the particles using Equation (5)
6. Evaluate the fitness f(Pr_i^t)
7. if f(Pr_i^t) < f(pbest_i^t) then
8. pbest_i^t ← Pr_i^t
9. if f(Pr_i^t) < f(gbest^t) then
10. gbest^t ← Pr_i^t
11. return gbest
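A vectorized sketch of the update in Equation (5) is given below. The inertia weight `w` plays the role of α in Equation (5), and the defaults are common textbook values, not parameters reported here; for feature selection, the continuous positions would be thresholded into a binary mask before evaluating the cost.

```python
# Sketch of one PSO iteration over all particles (Equation (5)).
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, pbest, gbest, w=0.9, a1=2.0, a2=2.0):
    """X, V, pbest: (n_particles, n_features); gbest: (n_features,)."""
    b1 = rng.random(X.shape)          # random pull toward personal best
    b2 = rng.random(X.shape)          # random pull toward global best
    V = w * V + a1 * b1 * (pbest - X) + a2 * b2 * (gbest - X)
    return X + V, V
```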

5.1.2. Cuckoo Search Optimization (CSO)

The CSO metaheuristic optimization method was inspired by the cuckoo bird's tendency to lay its eggs in the nests of other bird species. The CSO algorithm was proposed by Yang in 2009 [72]. In CSO, a population of potential solutions to a problem is retained and iteratively improved using random walk and exploitation operations. The cuckoo's propensity to deposit its eggs in other birds' nests served as the inspiration for the random walk operation, which represents the discovery of new solution areas. The exploitation operation allows the algorithm to concentrate on and enhance the most promising solutions. In contrast to other optimization algorithms, CSO finds high-quality solutions more quickly owing to its careful balance between exploration and exploitation. Additionally, CSO may be easily parallelized for complicated optimization problems and is very easy to implement; it also does not require extensive parameter adjustment. The following equations can be utilized to model CSO.
$$\left.\begin{aligned} x_i^{t+1} &= x_i^{t} + \alpha \oplus \text{Levy}(\lambda) \\ \text{Levy} &\sim u = t^{-\lambda} \end{aligned}\right\} \tag{6}$$
where $x_i$ is the solution, $\alpha$ is the step size, and $\lambda$ is the variance of the Lévy distribution. Further details about CSO can be found in [73]. Algorithm 2 shows the pseudo code of CSO.
Algorithm 2 Pseudo code of CSO [73]
1. Generate a random population of host nests (N)
2. while the termination condition is not satisfied do
3. Select a random cuckoo (i) and determine a new solution using Equation (6)
4. Evaluate the fitness using the fitness function; F_i
5. Randomly choose a nest (j) from the population
6. if (F_i > F_j) then
7. Replace j with the new solution
8. end if
9. Abandon a fraction (P_a) of the worst nests and build new ones at new locations using Lévy flights
10. Rank the solutions and find the current best
11. return S_best
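The Lévy-flight move in Equation (6) could be implemented as below. Mantegna's algorithm is used to draw heavy-tailed step lengths; the text does not specify the sampler, so this choice is an assumption.

```python
# Sketch of a cuckoo move via Lévy flight (Equation (6)).
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def levy_step(shape, lam=1.5):
    """Draw Lévy-distributed steps using Mantegna's algorithm."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / lam)

def cuckoo_move(x, alpha=0.01):
    return x + alpha * levy_step(x.shape)
```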

5.1.3. Firefly Algorithm (FA)

In 2010, Yang presented the FA, which was motivated by the flashing behavior of fireflies [74]. These flashes draw potential mates or warn off predators. In the FA, the flashing properties are formalized and applied as functions to solve combinatorial optimization problems [75]. Fireflies attract other fireflies according to their levels of brightness and move toward brighter ones. The farther apart two fireflies are, the less attractive they appear to each other, and a firefly moves randomly when no brighter one is present. As a result, the critical influences on the FA are the light intensity and the attractiveness level. A selected function can be used to control the brightness at a particular location; the attractiveness depends on the distance and the light absorption coefficient. The FA can be modeled as follows.
$$\left.\begin{aligned} I &= I_o e^{-\gamma r} \\ \beta &= \beta_o e^{-\gamma r^2} \\ x_i^{t} &= x_i^{t-1} + \beta_o e^{-\gamma r_{ij}^2}\left(x_j^{t-1} - x_i^{t-1}\right) + \alpha_t \epsilon_i^{t} \end{aligned}\right\} \tag{7}$$
where $I$ and $I_o$ are the current and default light intensities; $\gamma$ and $r$ are the light absorption coefficient and the distance, respectively; $\beta$ and $\beta_o$ are the current and default attractiveness (at zero distance); $x_i$ is the position; $j$ indexes the firefly with the higher intensity; and $\alpha_t$ and $\epsilon_i^t$ are the randomization parameters. Further details about the FA can be found in [76]. Algorithm 3 shows the pseudo code of the FA.
Algorithm 3 Pseudo code of the FA [76]
1. Generate a random population of fireflies (N)
2. Evaluate the fitness value f(x_i)
3. Initialize the parameters T, γ
4. while (t < T) do
5. for i = 1 to N do
6. for j = 1 to i do
7. if (I_j < I_i) then
8. Compute the attractiveness and move firefly i towards j using Equation (7)
9. end if
10. end for
11. end for
12. Evaluate the fitness value
13. Rank the fireflies and determine the current best
14. end while
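A single firefly move from Equation (7) could look as follows; the parameter defaults are illustrative values, not those used in this work.

```python
# Sketch of the firefly move (Equation (7)): firefly i is pulled toward a
# brighter firefly j with distance-decayed attractiveness plus noise.
import numpy as np

rng = np.random.default_rng(0)

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    r2 = np.sum((x_i - x_j) ** 2)             # squared distance r_ij^2
    beta = beta0 * np.exp(-gamma * r2)        # attractiveness at distance r
    eps = rng.random(x_i.shape) - 0.5         # random perturbation
    return x_i + beta * (x_j - x_i) + alpha * eps
```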

5.1.4. Bat Algorithm (BA)

The BA is a nature-inspired optimization algorithm introduced by Yang in 2010 [77]. It is based on how bats use echolocation to locate their prey. The method is modeled after how bats fly around randomly while searching for food, emitting sounds and listening for echoes to locate their meal [78]. Using random walk and exploitation procedures, the BA maintains a population of alternative solutions to an optimization problem and iteratively enhances these solutions. While the exploitation operation enables the algorithm to concentrate on the most promising answers, the random walk operation imitates the unpredictable search behavior of bats. Its frequency-tuning mechanism is motivated by the frequency modulation of bat sounds; this mechanism allows the algorithm to escape local optima and discover superior solutions. Furthermore, the BA is easy to deploy and does not require complicated parameter tuning. Echolocation can thus be codified as a method for optimizing an objective function [79]. The equations below can be used to model this algorithm.
$$\left.\begin{aligned} f_i &= f_{\min} + (f_{\max} - f_{\min})\,\beta \\ v_i^{t} &= v_i^{t-1} + \left(x_i^{t-1} - x_*\right) f_i \\ x_i^{t} &= x_i^{t-1} + v_i^{t} \\ x_{\text{new}} &= x_{\text{old}} + \alpha A^{t} \end{aligned}\right\} \tag{8}$$
where $x_i$ and $v_i$ are the position and velocity; $f_i$ is the frequency in the range between $f_{\min}$ and $f_{\max}$; $\beta$ is a random vector; $x_*$ is the best available solution; and $\alpha$ and $A^t$ are a random number in the range [0, 1] and the maximum loudness of the bats, respectively. Further details about the BA can be found in [80]. Algorithm 4 shows the pseudo code of the BA.
Algorithm 4 Pseudo code of the BA [80]
1. Generate a random population of bats (N) and an initial velocity v_i
2. Evaluate the fitness value f(x_i)
3. Define f_i
4. Initialize the parameters T, α, and A^t
5. while (t < T) do
6. Update the frequency, velocity, and position using Equation (8)
7. if (rand < r_i) then
8. Select the best result and create a local result around the best result
9. end if
10. if (rand < A_i) and (f(x_i) < f(x_*)) then
11. Store the new result, decrease A_i, and increase r_i
12. end if
13. Rank the bats and determine the current best
14. end while
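The frequency-tuned update in Equation (8) could be sketched as below; `f_min`, `f_max`, and the local-walk scaling are assumed values for illustration only.

```python
# Sketch of the bat update (Equation (8)): tune a frequency, move toward
# the best solution, and optionally take a loudness-scaled local walk.
import numpy as np

rng = np.random.default_rng(0)

def bat_step(x, v, x_best, f_min=0.0, f_max=2.0):
    """x, v: (n_bats, n_features); x_best: (n_features,)."""
    beta = rng.random((x.shape[0], 1))      # random frequency draw per bat
    f = f_min + (f_max - f_min) * beta      # frequency tuning
    v = v + (x - x_best) * f
    return x + v, v

def local_walk(x_old, loudness, alpha=0.1):
    # x_new = x_old + α·A^t, here with a random direction for the walk
    return x_old + alpha * loudness * rng.uniform(-1, 1, x_old.shape)
```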

5.1.5. Flower Pollination Optimization (FPO)

The metaheuristic optimization method known as FPO was motivated by the pollination process in flowers [81]. It was developed to address the optimization issues in various fields, including computer science, engineering, mathematics, physics, and finance. The optimization procedure in FPO is based on how pollen moves between flowers. The quality of a solution is expressed by how much “nectar” it contains, and each solution in the optimization problem is represented as a “flower.” The nectar is transferred between the flowers in a way that is similar to how the optimization problem solutions are selected and merged. FPO is renowned for its ease of use, quick convergence speed, and simplicity. It has been used to address numerous optimization issues, including function optimization, scheduling, and data clustering issues [82]. FPO can be modeled using the following equations.
$$\left.\begin{aligned} x_i^{t} &= x_i^{t-1} + \alpha L(\lambda)\left(g_* - x_i^{t-1}\right) \\ L(\lambda) &= \frac{\lambda\,\Gamma(\lambda)\sin(\pi\lambda/2)}{\pi}\cdot\frac{1}{s^{1+\lambda}} \\ x_i^{t} &= x_i^{t-1} + \varepsilon\left(x_j^{t-1} - x_k^{t-1}\right) \end{aligned}\right\} \tag{9}$$
where $x_i^t$ is the pollen; $g_*$ is the best available solution; $\alpha$ is a scaling factor in [0, 1]; $L(\lambda)$ and $\Gamma(\lambda)$ are the Lévy step size and the gamma function, respectively; and $x_j$ and $x_k$ are pollen from different flowers of the same plant species. Further details about FPO can be found in [83]. Algorithm 5 shows the pseudo code of FPO.
Algorithm 5 Pseudo code of FPO [83]
1. Generate random pollinators and flowers
2. Initialize the parameters
3. Evaluate the solutions of the population using the fitness function f(x)
4. Determine the global best solution (g*)
5. while (t < T) do
6. for i = 1:N
7. if (p < rand) then
8. Draw a (d-dimensional) step vector L that obeys a Lévy distribution and perform global pollination
9. else
10. Pick ε randomly from [0, 1]
11. Randomly choose j and k among all the solutions and perform local pollination
12. end if
13. Evaluate f_min of the new solution
14. If the new solution is better than the previous one, update solution i in the population; if it is also better than the global best solution, update the global best solution
15. end for
16. Store the global best solution
17. t = t + 1
18. end while
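The switch between the two pollination moves in Equation (9) could be sketched as follows, reusing the `levy_step` helper from the CSO sketch above; the switch probability `p = 0.8` is an assumed default, not a value reported in this work.

```python
# Sketch of one FPO move (Equation (9)): global pollination via a Lévy
# step toward the global best, or local pollination mixing two flowers.
import numpy as np

rng = np.random.default_rng(0)

def fpa_move(X, i, g_best, p=0.8, alpha=0.1):
    """X: (n_flowers, n_features); i: index of the flower to move."""
    if rng.random() < p:                              # global pollination
        L = levy_step(X[i].shape)                     # from the CSO sketch
        return X[i] + alpha * L * (g_best - X[i])
    j, k = rng.choice(len(X), size=2, replace=False)  # local pollination
    return X[i] + rng.random() * (X[j] - X[k])
```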

5.1.6. Whale Optimization Algorithm (WOA)

In 2016, Mirjalili and Lewis [84] introduced the WOA, which is inspired by the hunting habits of humpback whales. These whales hunt in groups, demonstrating their gregarious nature; when they come across clusters of prey (such as small fish or krill), they blow bubble nets to catch them. The WOA mathematical model represents a fresh approach to addressing complex optimization problems. The algorithm's primary operations are finding prey, encircling it, and moving in spiral bubble-net patterns. Starting from a random solution, the WOA uses a variety of strategies: after the top search agent is chosen, the other agents adjust their positions by moving toward a target, either the best search agent or a random whale. The WOA has helped resolve various optimization problems, including function optimization, scheduling, and data clustering [50]. The following equations can be used to model the WOA.
$$\left.\begin{aligned} x_i^{t} &= D' \cdot e^{bl}\cos(2\pi l) + x_*^{t-1} \\ D' &= \left|x_*^{t-1} - x_i^{t-1}\right| \end{aligned}\right\} \tag{10}$$
where $x$ and $x_*$ are the current and best locations, respectively; $b$ and $l$ are a constant determining the logarithmic spiral shape and a random number, respectively; and $D'$ is the distance between the whale and the prey. Further details about the WOA can be found in [85,86]. Algorithm 6 shows the pseudo code of the WOA.
Algorithm 6 Pseudo code of the WOA [86]
1. Generate a random population (N)
2. Initialize the parameters
3. Evaluate the solutions of the population using the fitness function
4. Determine the best solution (x*)
5. while (t < T) do
6. for each solution do
7. Update all the parameters (i.e., a, A, C, l, and p)
8. if (p < 0.5) then
9. if |A| < 1 then
10. Update the position of the current solution using x(t+1) = x*(t) − A·D
11. else if |A| ≥ 1 then
12. Select a random solution from the population
13. Update the position of x(t) using x(t+1) = x_rand − A·D
14. end if
15. else if (p ≥ 0.5) then
16. Update the position of x(t) using Equation (10)
17. end if
18. end for
19. Evaluate the fitness value for each solution
20. Update x*
21. t = t + 1
22. end while
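The spiral bubble-net move of Equation (10) could be sketched as follows; the spiral constant `b = 1.0` is a commonly used default and an assumption here.

```python
# Sketch of the WOA spiral update (Equation (10)): the whale spirals
# toward the current best position x*.
import numpy as np

rng = np.random.default_rng(0)

def woa_spiral(x, x_star, b=1.0):
    D = np.abs(x_star - x)                  # distance to the prey
    l = rng.uniform(-1.0, 1.0)              # random number in [-1, 1]
    return D * np.exp(b * l) * np.cos(2 * np.pi * l) + x_star
```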

5.1.7. Grey Wolf Optimization (GWO)

GWO was presented by Mirjalili et al. in 2014 [87], and its multi-objective variant was introduced in 2016 [88]. Predators such as grey wolves live and hunt in groups. Grey wolves live in social packs composed of five to twelve individuals. Depending on their level of authority over others, the group members are referred to as alpha, beta, omega, or subordinates in this social system. Each pack has a single alpha (dominant) wolf that leads the group; as a result, the alpha is in charge of the majority of tasks. The second most dominant wolf is the beta, which is expected to eventually become the alpha. The beta supports the alpha's decision-making, communicates the alpha's orders to the group, and sees them through. The lowest ranking wolf in the group (ruled by all the other wolves) is named the omega; the remaining wolves are referred to as subordinate or delta wolves. In addition to this social organization, grey wolves also hunt in groups. The group tracks and pursues a victim when it is within range; they surround the target whenever feasible, then each attacks in turn. These characteristics of grey wolves are considered in modeling GWO, whose mathematical representation comprises the social structure and the prey-hunting techniques. In GWO, the alpha, beta, and delta wolves are chosen as the top three solutions; the other wolves in the pack are considered omega wolves and do not influence the choices made in the following iteration. The GWO technique was created to address optimization problems prevalent in many disciplines, including computer science, engineering, mathematics, physics, and finance [67,89]. Variants of GWO have also been proposed and applied to solve various engineering problems [90,91]. GWO can be modeled using the following equations.
$$\left.\begin{aligned} x^{t} &= x_p^{t-1} - A \cdot D \\ D &= \left|C \cdot x_p^{t-1} - x^{t-1}\right| \\ A &= 2a \cdot r_1 - a \\ C &= 2 r_2 \end{aligned}\right\} \tag{11}$$
where $x$ and $x_p$ are the grey wolf and prey locations, respectively; $A$ and $C$ are coefficient vectors; $D$ is the distance between the grey wolf and the prey; $r_1$ and $r_2$ are random vectors; and $a$ is a linearly decreasing variable. It can be calculated using the following equation.
$$a = 2 - \frac{2t}{\text{max iterations}} \tag{12}$$
where $t$ is the iteration number. The following equations can be used to update the grey wolves' positions.
$$x^{t} = \frac{x_1 + x_2 + x_3}{3} \tag{13}$$
where
$$\left.\begin{aligned} x_1 &= x_\alpha - A_1 \cdot D_\alpha, & D_\alpha &= \left|C_1 \cdot x_\alpha - x\right| \\ x_2 &= x_\beta - A_2 \cdot D_\beta, & D_\beta &= \left|C_2 \cdot x_\beta - x\right| \\ x_3 &= x_\delta - A_3 \cdot D_\delta, & D_\delta &= \left|C_3 \cdot x_\delta - x\right| \end{aligned}\right\} \tag{14}$$
Further details about the GWO can be found in [92]. Algorithm 7 shows the pseudo code of GWO.
Algorithm 7 Pseudo code of GWO [92]
1. Generate a random population (N)
2. Initialize the parameters
3. Evaluate the fitness of each wolf
4. while (t < T) do
5. for each wolf do
6. Update the position using Equations (13) and (14)
7. end for
8. Update the parameters
9. Evaluate the fitness value
10. Update the wolf’s position
11. t = t + 1
12. end while
13. return x α
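One full GWO position update (Equations (11)–(14)) could be sketched as follows: each wolf moves to the average of the positions dictated by the alpha, beta, and delta leaders. The function name and the per-leader loop are illustrative choices.

```python
# Sketch of one GWO update (Equations (11)-(14)).
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(x, x_alpha, x_beta, x_delta, t, max_iter):
    """x and the three leaders: (n_features,) position vectors."""
    a = 2.0 - 2.0 * t / max_iter            # Equation (12)
    candidates = []
    for leader in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2 * a * r1 - a                  # Equation (11)
        C = 2 * r2
        D = np.abs(C * leader - x)
        candidates.append(leader - A * D)   # x1, x2, x3 (Equation (14))
    return sum(candidates) / 3.0            # Equation (13)
```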

6. Results and Discussion

This study applied wrapper-based metaheuristic feature selection approaches to enhance the classification accuracy of fNIRS signals. The MA and MI datasets were used to test the suggested feature selection algorithms [55]. The discriminations between the MA and baseline; the LHMI and RHMI; and the LHMI, RHMI, MA, and baseline were accomplished by utilizing a feature set composed of five statistical temporal features, as discussed above. The wrapper-based metaheuristic algorithms discussed above (PSO, CSO, the FA, the BA, FPO, the WOA, and GWO) were employed to extract valuable information. The extracted feature subset was classified using a k-NN with a holdout validation scheme (0.2 test ratio). The parameter K of the k-NN was set to 5; N denotes the population size and T the maximum number of iterations. Each algorithm was run ten times. All the parameters for each algorithm are listed in Table 1.
The results from all three cases are presented in Table 2, Table 3 and Table 4. This study used the classification accuracy and the feature vector size as the performance comparison metrics.
Table 2, Table 3 and Table 4 present the results for all three cases with each optimization algorithm for the specific subjects, using the accuracy and the feature vector size as the comparison metrics. In the case of the MA tasks, the full feature vector (180 features for each task) resulted in a 91.67% classification accuracy for subject one. By contrast, all of the wrapper-based optimization algorithms achieved a higher classification accuracy with a significantly reduced feature vector size, as shown in Table 2. A careful analysis of the results shows that the GWO algorithm had a 99.1 ± 2.6% classification rate with a feature vector size of only 24.1 ± 5.1 for subject one; the algorithm improved the classification rate by more than 8% while using almost 150 fewer features. The WOA used the fewest features for the classification and still produced a good classification rate relative to using all the channel features. For instance, in the cases of subjects three and four, the WOA utilized only 12.5 ± 14 and 4.8 ± 2.8 features to classify the MA task against the baseline and produced classification rates of 81.6 ± 8.6% and 88.3 ± 8.9%, respectively, which are almost the same results as those obtained using all the channel features.
In the case of the MI tasks, the full channel features yielded an accuracy of only 41.67% for subject one. By contrast, the WOA produced the highest classification rate of 92.5 ± 8.2% with 35.7 ± 13.7 features. The FA showed an 85.8 ± 8.8% classification performance for subject 14 with 80.5 ± 6.8 features. The BA showed an 84.1 ± 8.2% classification rate for subject six with 75.2 ± 6.3 features, i.e., 10% more than using all the features. The results for the other subjects for the MI classification tasks are shown in Table 3.
Table 4 shows the results from the merged MA and MI datasets, which were combined to assess the proposed strategy's effectiveness in a multi-class environment. As depicted in Table 4, the accuracy when using all the features was reduced for all of the subjects compared to the MA task alone, whereas the proposed approach remained effective in the multi-class setting. For a better understanding of the results, the average classification accuracies, processing times, and feature vector sizes of all the subjects for the various tasks are presented in Figure 4 and Table 5.
A deeper analysis of the results shows that GWO produced the highest classification rates of 94.83 ± 5.5%, 92.57 ± 6.9%, and 85.66 ± 7.3% for the MA, MI, and four-class (left- and right-hand MI, MA, and baseline) tasks, respectively. The full channel features yielded classification rates of only 81.32%, 71.26%, and 62.79% for the same tasks. All of the optimization algorithms significantly improved the classification rate and reduced the feature vector size. However, GWO produced the best and most stable results, as it is scalable, versatile, simple to use, and does not require derivative information from the search space. The algorithm's search benefits from a balance between exploration and exploitation, producing excellent convergence [93].
A two-sample t-test was also utilized to determine the statistical significance of the results. The superiority of GWO held when compared to the full feature set and to the results of all the other optimization methods (p < 0.01). Table 5 presents the average processing time of each algorithm. The WOA had the lowest processing time and the smallest feature vector size with a reasonable accuracy. GWO used 29 features on average with only 2.17 s of processing time to yield the highest classification accuracy, as shown in Figure 4 and Table 5. As shown in Table 6, the results from this study were also compared to those from other studies using the same dataset.
Several researchers have presented task-related feature selection approaches to improve the performance of fNIRS-based BCIs [36,45,94,95]. Aydin utilized both filter-based and wrapper-based methods to reduce the redundant features and extract useful information [45], and showed that the wrapper-based approach had a better performance. In filtering techniques, features are selected based on their relevance to the dependent variable, without any direct link between a feature and the model's performance; wrapper methods, by contrast, train a model on a subset of features to verify their usefulness. In some filter-based approaches, a threshold value is selected to remove the redundant information. Wrapper-based techniques can take slightly longer to process, but they have been shown to produce an excellent classification accuracy. In this work, a wrapper-based metaheuristic feature selection framework was proposed to enhance the classification performance of fNIRS-based BCIs. The proposed technique and the literature are compared in Table 6. It is evident from the results that the proposed approach had a better classification performance than the others.
In this study, only continuous metaheuristic feature selection approaches were applied; different binary metaheuristic feature selection approaches could be explored in the future to further improve the classification accuracy and reduce the feature vector size [96,97,98]. This study's practical application lies in the BCI training phase: the model can be trained offline using the features selected by the presented wrapper-based approach. During online testing, only the selected features are fed to the trained model for a better classification performance.

7. Conclusions

In this study, wrapper-based metaheuristic optimization algorithms were utilized for activity-related feature selection in fNIRS-based BCI applications. The performance of seven metaheuristic optimization algorithms (i.e., PSO, CSO, the FA, the BA, FPO, the WOA, and GWO) was analyzed using a k-NN-based cost function. The results demonstrated that all of the metaheuristic optimization algorithms significantly improved the classification accuracy and reduced the feature vector size. After extensive training and testing, GWO obtained the highest classification rates of 94.83 ± 5.5%, 92.57 ± 6.9%, and 85.66 ± 7.3% for the MA, MI (left- and right-hand), and four-class (left- and right-hand MI, MA, and baseline) tasks, respectively. Furthermore, the statistical analysis (p < 0.01) showed that GWO yielded better and more stable results. Therefore, the wrapper-based metaheuristic optimization algorithms are considered helpful for selecting the appropriate activity-related features for robust fNIRS-based BCI applications. In the future, the latest optimization algorithms and their binary versions can be applied to assess their performance in fNIRS-based BCI applications.

Author Contributions

Conceptualization, A.Z. and S.J.H.; formal analysis, M.U.A.; investigation, A.Z. and S.J.H.; methodology, A.Z. and S.J.H.; resources, S.W.L.; software, M.U.A.; supervision, S.W.L.; validation, M.U.A.; writing—original draft, A.Z. and S.J.H.; writing—review and editing, M.U.A. and S.W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF2021R1I1A2059735) (S.W.L.).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khosrowabadi, R.; Quek, C.; Ang, K.K.; Tung, S.W.; Heijnen, M. A Brain-Computer Interface for classifying EEG correlates of chronic mental stress. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 757–762. [Google Scholar]
  2. Yue, K.; Wang, D. EEG-based 3D visual fatigue evaluation using CNN. Electronics 2019, 8, 1208. [Google Scholar] [CrossRef] [Green Version]
  3. Royer, A.S.; Doud, A.J.; Rose, M.L.; He, B. EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 581–589. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Daly, J.J.; Wolpaw, J.R. Brain–computer interfaces in neurological rehabilitation. Lancet Neurol. 2008, 7, 1032–1043. [Google Scholar] [CrossRef] [PubMed]
  5. Zabcikova, M.; Koudelkova, Z.; Jasek, R.; Lorenzo Navarro, J.J. Recent advances and current trends in brain-computer interface research and their applications. Int. J. Dev. Neurosci. 2022, 82, 107–123. [Google Scholar] [CrossRef]
  6. McFarland, D.J.; Wolpaw, J.R. Brain-computer interface operation of robotic and prosthetic devices. Computer 2008, 41, 52–56. [Google Scholar] [CrossRef]
  7. Müller-Putz, G.; Leeb, R.; Tangermann, M.; Höhne, J.; Kübler, A.; Cincotti, F.; Mattia, D.; Rupp, R.; Müller, K.-R.; Millan, J.d.R. Towards noninvasive hybrid brain–computer interfaces: Framework, practice, clinical application, and beyond. Proc. IEEE 2015, 103, 926–943. [Google Scholar] [CrossRef]
  8. Liu, Z.; Shore, J.; Wang, M.; Yuan, F.; Buss, A.; Zhao, X. A systematic review on hybrid EEG/fNIRS in brain-computer interface. Biomed. Signal Process. Control 2021, 68, 102595. [Google Scholar] [CrossRef]
  9. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain–computer interface paradigms. J. Neural Eng. 2019, 16, 011001. [Google Scholar] [CrossRef]
  10. Rashid, M.; Sulaiman, N.; PP Abdul Majeed, A.; Musa, R.M.; Bari, B.S.; Khatun, S. Current status, challenges, and possible solutions of EEG-based brain-computer interface: A comprehensive review. Front. Neurorobot. 2020, 14, 25. [Google Scholar] [CrossRef]
  11. Weiskopf, N. Real-time fMRI and its application to neurofeedback. NeuroImage 2012, 62, 682–692. [Google Scholar] [CrossRef]
  12. Tursic, A.; Eck, J.; Lührs, M.; Linden, D.E.; Goebel, R. A systematic review of fMRI neurofeedback reporting and effects in clinical populations. NeuroImage Clin. 2020, 28, 102496. [Google Scholar] [CrossRef]
  13. Lopes da Silva, F. EEG and MEG: Relevance to Neuroscience. Neuron 2013, 80, 1112–1128. [Google Scholar] [CrossRef] [Green Version]
  14. Hong, K.-S.; Zafar, A. Existence of initial dip for BCI: An illusion or reality. Front. Neurorobot. 2018, 12, 69. [Google Scholar] [CrossRef] [Green Version]
  15. Yücel, M.A.; Selb, J.J.; Huppert, T.J.; Franceschini, M.A.; Boas, D.A. Functional near infrared spectroscopy: Enabling routine functional brain imaging. Curr. Opin. Biomed. Eng. 2017, 4, 78–86. [Google Scholar] [CrossRef]
  16. Quaresima, V.; Ferrari, M. Functional near-infrared spectroscopy (fNIRS) for assessing cerebral cortex function during human behavior in natural/social situations: A concise review. Organ. Res. Methods 2019, 22, 46–68. [Google Scholar] [CrossRef]
  17. Boas, D.A.; Elwell, C.E.; Ferrari, M.; Taga, G. Twenty years of functional near-infrared spectroscopy: Introduction for the special issue. NeuroImage 2014, 85, 1–5. [Google Scholar] [CrossRef]
  18. Khan, M.A.; Ghafoor, U.; Yoo, H.-R.; Hong, K.-S. Acupuncture enhances brain function in patients with mild cognitive impairment: Evidence from a functional-near infrared spectroscopy study. Neural Regen. Res. 2022, 17, 1850. [Google Scholar]
  19. Pinti, P.; Scholkmann, F.; Hamilton, A.; Burgess, P.; Tachtsidis, I. Current status and issues regarding pre-processing of fNIRS neuroimaging data: An investigation of diverse signal filtering methods within a general linear model framework. Front. Hum. Neurosci. 2019, 12, 505. [Google Scholar] [CrossRef] [Green Version]
  20. Pinti, P.; Tachtsidis, I.; Hamilton, A.; Hirsch, J.; Aichelburg, C.; Gilbert, S.; Burgess, P.W. The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience. Ann. N. Y. Acad. Sci. 2020, 1464, 5–29. [Google Scholar] [CrossRef]
  21. Zafar, A.; Hong, K.-S. Detection and classification of three-class initial dips from prefrontal cortex. Biomed. Opt. Express 2017, 8, 367–383. [Google Scholar]
  22. Gateau, T.; Durantin, G.; Lancelot, F.; Scannella, S.; Dehais, F. Real-time state estimation in a flight simulator using fNIRS. PLoS ONE 2015, 10, e0121279. [Google Scholar] [CrossRef] [PubMed]
  23. Hong, K.-S.; Santosa, H. Decoding four different sound-categories in the auditory cortex using functional near-infrared spectroscopy. Hear. Res. 2016, 333, 157–166. [Google Scholar] [CrossRef] [PubMed]
  24. Khan, M.J.; Hong, K.-S. Hybrid EEG–fNIRS-based eight-command decoding for BCI: Application to quadcopter control. Front. Neurorobot. 2017, 11, 6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Li, Z.; Zhang, S.; Pan, J. Advances in hybrid brain-computer interfaces: Principles, design, and applications. Comput. Intell. Neurosci. 2019, 2019, 3807670. [Google Scholar] [CrossRef]
  26. Zhang, S.; Zheng, Y.; Wang, D.; Wang, L.; Ma, J.; Zhang, J.; Xu, W.; Li, D.; Zhang, D. Application of a common spatial pattern-based algorithm for an fNIRS-based motor imagery brain-computer interface. Neurosci. Lett. 2017, 655, 35–40. [Google Scholar] [CrossRef]
  27. Khan, M.J.; Hong, K.-S. Passive BCI based on drowsiness detection: An fNIRS study. Biomed. Opt. Express 2015, 6, 4063–4078. [Google Scholar] [CrossRef] [Green Version]
  28. Hasan, M.A.; Khan, M.U.; Mishra, D. A computationally efficient method for hybrid EEG-fNIRS BCI based on the Pearson correlation. BioMed Res. Int. 2020, 2020, 1838140. [Google Scholar] [CrossRef]
  29. Nazeer, H.; Naseer, N.; Khan, R.A.; Noori, F.M.; Qureshi, N.K.; Khan, U.S.; Khan, M.J. Enhancing classification accuracy of fNIRS-BCI using features acquired from vector-based phase analysis. J. Neural Eng. 2020, 17, 056025. [Google Scholar] [CrossRef]
  30. Zafar, A.; Hong, K.-S. Neuronal activation detection using vector phase analysis with dual threshold circles: A functional near-infrared spectroscopy study. Int. J. Neural Syst. 2018, 28, 1850031. [Google Scholar] [CrossRef]
  31. Zafar, A.; Hong, K.-S. Reduction of onset delay in functional near-infrared spectroscopy: Prediction of HbO/HbR signals. Front. Neurorobot. 2020, 14, 10. [Google Scholar] [CrossRef] [Green Version]
  32. Naseer, N.; Hong, K.-S. Classification of functional near-infrared spectroscopy signals corresponding to the right-and left-wrist motor imagery for development of a brain–computer interface. Neurosci. Lett. 2013, 553, 84–89. [Google Scholar] [CrossRef]
  33. Naseer, N.; Hong, M.J.; Hong, K.-S. Online binary decision decoding using functional near-infrared spectroscopy for the development of brain–computer interface. Exp. Brain Res. 2014, 232, 555–564. [Google Scholar] [CrossRef]
  34. Scarpa, F.; Brigadoi, S.; Cutini, S.; Scatturin, P.; Zorzi, M.; Dell’Acqua, R.; Sparacino, G. A reference-channel based methodology to improve estimation of event-related hemodynamic response from fNIRS measurements. NeuroImage 2013, 72, 106–119. [Google Scholar] [CrossRef]
  35. Lee, J.; Mukae, N.; Arata, J.; Iihara, K.; Hashizume, M. Comparison of feature vector compositions to enhance the performance of NIRS-BCI-triggered robotic hand orthosis for post-stroke motor recovery. Appl. Sci. 2019, 9, 3845. [Google Scholar] [CrossRef] [Green Version]
  36. Nazeer, H.; Naseer, N.; Mehboob, A.; Khan, M.J.; Khan, R.A.; Khan, U.S.; Ayaz, Y. Enhancing classification performance of fNIRS-BCI by identifying cortically active channels using the z-score method. Sensors 2020, 20, 6995. [Google Scholar] [CrossRef]
  37. Gulraiz, A.; Naseer, N.; Nazeer, H.; Khan, M.J.; Khan, R.A.; Shahbaz Khan, U. LASSO Homotopy-Based Sparse Representation Classification for fNIRS-BCI. Sensors 2022, 22, 2575. [Google Scholar] [CrossRef]
  38. Huang, M.; Zhang, X.; Chen, X.; Mai, Y.; Wu, X.; Zhao, J.; Feng, Q. Joint-channel-connectivity-based feature selection and classification on fNIRS for stress detection in decision-making. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 1858–1869. [Google Scholar] [CrossRef]
  39. Zafar, A.; Dad Kallu, K.; Atif Yaqub, M.; Ali, M.U.; Hyuk Byun, J.; Yoon, M.; Su Kim, K.; Memmolo, V. A Hybrid GCN and Filter-Based Framework for Channel and Feature Selection: An fNIRS-BCI Study. Int. J. Intell. Syst. 2023, 2023, 8812844. [Google Scholar] [CrossRef]
  40. Petrantonakis, P.C.; Kompatsiaris, I. Single-trial NIRS data classification for brain–computer interfaces using graph signal processing. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1700–1709. [Google Scholar] [CrossRef]
  41. Ghaffar, M.S.B.A.; Khan, U.S.; Iqbal, J.; Rashid, N.; Hamza, A.; Qureshi, W.S.; Tiwana, M.I.; Izhar, U. Improving classification performance of four class FNIRS-BCI using Mel Frequency Cepstral Coefficients (MFCC). Infrared Phys. Technol. 2021, 112, 103589. [Google Scholar] [CrossRef]
  42. Paulmurugan, K.; Vijayaragavan, V.; Ghosh, S.; Padmanabhan, P.; Gulyás, B. Brain–Computer Interfacing Using Functional Near-Infrared Spectroscopy (fNIRS). Biosensors 2021, 11, 389. [Google Scholar] [CrossRef] [PubMed]
  43. Zafar, A.; Ghafoor, U.; Yaqub, M.A.; Hong, K.-S. Initial-dip-based classification for fNIRS-BCI. Proc. Neural Imaging Sens. 2019, 2019, 116–124. [Google Scholar]
  44. Asam, M.; Khan, S.H.; Akbar, A.; Bibi, S.; Jamal, T.; Khan, A.; Ghafoor, U.; Bhutta, M.R. IoT malware detection architecture using a novel channel boosted and squeezed CNN. Sci. Rep. 2022, 12, 15498. [Google Scholar] [CrossRef] [PubMed]
  45. Aydin, E.A. Subject-Specific feature selection for near infrared spectroscopy based brain-computer interfaces. Comput. Methods Programs Biomed. 2020, 195, 105535. [Google Scholar] [CrossRef] [PubMed]
  46. Naseer, N.; Hong, K.-S. fNIRS-based brain-computer interfaces: A review. Front. Hum. Neurosci. 2015, 9, 3. [Google Scholar] [CrossRef] [Green Version]
  47. Noori, F.M.; Naseer, N.; Qureshi, N.K.; Nazeer, H.; Khan, R.A. Optimal feature selection from fNIRS signals using genetic algorithms for BCI. Neurosci. Lett. 2017, 647, 61–66. [Google Scholar] [CrossRef]
  48. Li, C.; Xu, Y.; He, L.; Zhu, Y.; Kuang, S.; Sun, L. Research on fNIRS Recognition Method of Upper Limb Movement Intention. Electronics 2021, 10, 1239. [Google Scholar] [CrossRef]
  49. Li, H.; Gong, A.; Zhao, L.; Zhang, W.; Wang, F.; Fu, Y. Decoding of walking imagery and idle state using sparse representation based on fNIRS. Comput. Intell. Neurosci. 2021, 2021, 6614112. [Google Scholar] [CrossRef]
  50. Dokeroglu, T.; Deniz, A.; Kiziloz, H.E. A comprehensive survey on recent metaheuristics for feature selection. Neurocomputing 2022, 494, 269–296. [Google Scholar] [CrossRef]
  51. Yang, L.; Jin, R. Distance metric learning: A comprehensive survey. Mich. State Univ. 2006, 2, 4. [Google Scholar]
  52. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  53. Kim, K.S.; Choi, H.H.; Moon, C.S.; Mun, C.W. Comparison of k-nearest neighbor, quadratic discriminant and linear discriminant analysis in classification of electromyogram signals based on the wrist-motion directions. Curr. Appl. Phys. 2011, 11, 740–745. [Google Scholar] [CrossRef]
  54. Ali, M.U.; Saleem, S.; Masood, H.; Kallu, K.D.; Masud, M.; Alvi, M.J.; Zafar, A. Early hotspot detection in photovoltaic modules using color image descriptors: An infrared thermography study. Int. J. Energy Res. 2022, 46, 774–785. [Google Scholar] [CrossRef]
  55. Shin, J.; von Lühmann, A.; Blankertz, B.; Kim, D.-W.; Jeong, J.; Hwang, H.-J.; Müller, K.-R. Open access dataset for EEG+ NIRS single-trial classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 1735–1745. [Google Scholar] [CrossRef]
  56. Hong, K.S.; Bhutta, M.R.; Liu, X.; Shin, Y.I. Classification of somatosensory cortex activities using fNIRS. Behav. Brain Res. 2017, 333, 225–234. [Google Scholar] [CrossRef]
  57. Hwang, H.-J.; Lim, J.-H.; Kim, D.-W.; Im, C.-H. Evaluation of various mental task combinations for near-infrared spectroscopy-based brain-computer interfaces. J. Biomed. Opt. 2014, 19, 077005. [Google Scholar] [CrossRef]
  58. Hong, K.-S.; Khan, M.J.; Hong, M.J. Feature Extraction and Classification Methods for Hybrid fNIRS-EEG Brain-Computer Interfaces. Front. Hum. Neurosci. 2018, 12, 246. [Google Scholar] [CrossRef]
  59. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef]
  60. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  61. Khorram, T.; Baykan, N.A. Feature selection in network intrusion detection using metaheuristic algorithms. Int. J. Adv. Res. Ideas Innov. Technol. 2018, 4, 704–710. [Google Scholar]
  62. Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324. [Google Scholar] [CrossRef] [Green Version]
  63. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Chapter 10—Metaheuristic Algorithms: A Comprehensive Review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Sangaiah, A.K., Sheng, M., Zhang, Z., Eds.; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar]
  64. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  65. Yang, X.-S. (Ed.) Chapter 1—Introduction to Algorithms. In Nature-Inspired Optimization Algorithms (Second Edition); Academic Press: Cambridge, MA, USA, 2021; pp. 1–22. [Google Scholar]
  66. Liu, W.; Wang, J. A Brief Survey on Nature-Inspired Metaheuristics for Feature Selection in Classification in this Decade. In Proceedings of the 2019 IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), Banff, AB, Canada, 9–11 May 2019; pp. 424–429. [Google Scholar]
  67. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381. [Google Scholar] [CrossRef]
  68. Ekinci, S.; Hekimoğlu, B. Improved Kidney-Inspired Algorithm Approach for Tuning of PID Controller in AVR System. IEEE Access 2019, 7, 39935–39947. [Google Scholar] [CrossRef]
  69. Mannan, J.; Kamran, M.A.; Ali, M.U.; Mannan, M.M.N. Quintessential strategy to operate photovoltaic system coupled with dual battery storage and grid connection. Int. J. Energy Res. 2021, 45, 21140–21157. [Google Scholar] [CrossRef]
  70. Yang, X.-S. (Ed.) Chapter 8—Particle Swarm Optimization. In Nature-Inspired Optimization Algorithms, 2nd ed.; Academic Press: Cambridge, MA, USA, 2021; pp. 111–121. [Google Scholar]
  71. Thaher, T.; Chantar, H.; Too, J.; Mafarja, M.; Turabieh, H.; Houssein, E.H. Boolean Particle Swarm Optimization with various Evolutionary Population Dynamics approaches for feature selection problems. Expert Syst. Appl. 2022, 195, 116550. [Google Scholar] [CrossRef]
  72. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar]
  73. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  74. Yang, X.-S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  75. Fister, I.; Fister, I., Jr.; Yang, X.-S.; Brest, J. A comprehensive review of firefly algorithms. Swarm Evol. Comput. 2013, 13, 34–46. [Google Scholar] [CrossRef]
  76. Altabeeb, A.M.; Mohsen, A.M.; Ghallab, A. An improved hybrid firefly algorithm for capacitated vehicle routing problem. Appl. Soft Comput. 2019, 84, 105728. [Google Scholar] [CrossRef]
  77. Yang, X.-S. A New Metaheuristic Bat-Inspired Algorithm; Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  78. Yang, X.-S. (Ed.) Chapter 11—Bat Algorithms. In Nature-Inspired Optimization Algorithms, 2nd ed.; Academic Press: Cambridge, MA, USA, 2021; pp. 157–173. [Google Scholar]
  79. Yang, X.-S.; He, X. Bat algorithm: Literature review and applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  80. Yildizdan, G.; Baykan, Ö.K. A novel modified bat algorithm hybridizing by differential evolution algorithm. Expert Syst. Appl. 2020, 141, 112949. [Google Scholar] [CrossRef]
  81. Rodrigues, D.; Yang, X.-S.; De Souza, A.N.; Papa, J.P. Binary flower pollination algorithm and its application to feature selection. Recent Adv. Swarm Intell. Evol. Comput. 2015, 585, 85–100. [Google Scholar]
  82. Yang, X.-S. (Ed.) Chapter 12—Flower Pollination Algorithms. In Nature-Inspired Optimization Algorithms, 2nd ed.; Academic Press: Cambridge, MA, USA, 2021; pp. 175–195. [Google Scholar]
  83. Ong, K.M.; Ong, P.; Sia, C.K. A new flower pollination algorithm with improved convergence and its application to engineering optimization. Decis. Anal. J. 2022, 5, 100144. [Google Scholar] [CrossRef]
  84. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  85. Sharawi, M.; Zawbaa, H.M.; Emary, E. Feature selection approach based on whale optimization algorithm. In Proceedings of the 2017 Ninth International Conference on Advanced Computational Intelligence (ICACI), Doha, Qatar, 4–6 February 2017; pp. 163–168. [Google Scholar]
  86. Hassouneh, Y.; Turabieh, H.; Thaher, T.; Tumar, I.; Chantar, H.; Too, J. Boosted Whale Optimization Algorithm With Natural Selection Operators for Software Fault Prediction. IEEE Access 2021, 9, 14239–14258. [Google Scholar] [CrossRef]
  87. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  88. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.d.S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 2016, 47, 106–119. [Google Scholar] [CrossRef]
  89. Tu, Q.; Chen, X.; Liu, X. Multi-strategy ensemble grey wolf optimizer and its application to feature selection. Appl. Soft Comput. 2019, 76, 16–30. [Google Scholar] [CrossRef]
  90. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  91. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Zamani, H.; Bahreininejad, A. GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems. J. Comput. Sci. 2022, 61, 101636. [Google Scholar] [CrossRef]
  92. Aljarah, I.; Mafarja, M.; Heidari, A.A.; Faris, H.; Mirjalili, S. Clustering analysis using a novel locality-informed grey wolf-inspired clustering approach. Knowl. Inf. Syst. 2020, 62, 507–539. [Google Scholar] [CrossRef]
  93. Faris, H.; Aljarah, I.; Al-Betar, M.A.; Mirjalili, S. Grey wolf optimizer: A review of recent variants and applications. Neural Comput. Appl. 2018, 30, 413–435. [Google Scholar] [CrossRef]
  94. Ergün, E.; Aydemir, Ö. Decoding of Binary Mental Arithmetic Based Near-Infrared Spectroscopy Signals. In Proceedings of the 2018 3rd International Conference on Computer Science and Engineering (UBMK), Sarajevo, Bosnia and Herzegovina, 20–23 September 2018; pp. 201–204. [Google Scholar]
  95. Jiang, X.; Gu, X.; Xu, K.; Ren, H.; Chen, W. Independent decision path fusion for bimodal asynchronous brain–computer interface to discriminate multiclass mental states. IEEE Access 2019, 7, 165303–165317. [Google Scholar] [CrossRef]
  96. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Abualigah, L. Binary Aquila Optimizer for Selecting Effective Features from Medical Data: A COVID-19 Case Study. Mathematics 2022, 10, 1929. [Google Scholar] [CrossRef]
  97. Nadimi-Shahraki, M.H.; Banaie-Dezfouli, M.; Zamani, H.; Taghian, S.; Mirjalili, S. B-MFO: A Binary Moth-Flame Optimization for Feature Selection from Medical Datasets. Computers 2021, 10, 136. [Google Scholar] [CrossRef]
  98. Taghian, S.; Nadimi-Shahraki, M.H.; Zamani, H. Comparative Analysis of Transfer Function-based Binary Metaheuristic Algorithms for Feature Selection. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; pp. 1–6. [Google Scholar]
Figure 1. Proposed wrapper-based feature selection approach for the functional near-infrared spectroscopy (fNIRS) signals.
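The pipeline in Figure 1 begins by computing temporal statistical features from every available fNIRS channel. As a rough illustration only — the function names and the (channels × samples) trial layout below are our own assumptions, not the authors' code — this sketch builds a per-trial feature vector from the five temporal statistics used in this study (mean, slope, maximum, skewness, and kurtosis); a montage of 36 channels would yield the 180-element full feature vector reported in Tables 2–4.

```python
# Hypothetical sketch of the per-trial feature extraction; the helper names
# and trial layout are our assumptions, not the authors' implementation.
import numpy as np
from scipy.stats import skew, kurtosis

def channel_features(signal: np.ndarray) -> list:
    """Five temporal statistics for one channel of one trial."""
    t = np.arange(signal.size)
    slope = np.polyfit(t, signal, 1)[0]  # slope of a least-squares linear fit
    return [signal.mean(), slope, signal.max(), skew(signal), kurtosis(signal)]

def feature_vector(trial: np.ndarray) -> np.ndarray:
    """trial: (n_channels, n_samples) array -> flat vector of 5 * n_channels."""
    return np.concatenate([channel_features(ch) for ch in trial])

# Example: 36 channels of 100 samples each -> 180 features, matching the
# full feature vector size in Tables 2-4.
rng = np.random.default_rng(0)
print(feature_vector(rng.standard_normal((36, 100))).shape)  # (180,)
```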
Figure 2. (a) Experimental paradigm; (b) fNIRS optode placement.
Figure 3. Schematic of the operation of the wrapper-based method.
Figure 4. Comparison of all the presented metaheuristic algorithms and all the channel features for the various tasks (data represented as the mean ± the standard deviation). * p < 0.01.
Table 1. Parameters for each algorithm (T: number of iterations; N: population size).

| Algorithm | Parameters |
|---|---|
| PSO | T = 100, N = 10, c1 = 1, c2 = 2, α = 2 |
| CSO | T = 100, N = 10, α = 1, λ = 1.5 |
| FA | T = 100, N = 10, α = 1, β0 = 1, γ = 1 |
| BA | T = 100, N = 10, fmax = 2, fmin = 0, α = β = 0.9, A = 2 |
| FPO | T = 100, N = 10, λ = 1.5 |
| WOA | T = 100, N = 10, b = 1, l = 1 |
| GWO | T = 100, N = 10 |
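To make Table 1 concrete, the following is a minimal sketch of how a wrapper such as the GWO (T = 100 iterations, N = 10 search agents, as in Table 1) can drive feature selection with a k-nearest-neighbor cost function. It uses the common threshold-based binarization of the continuous GWO update (cf. [67]) and is an illustration under our own assumptions, not the authors' implementation.

```python
# Minimal sketch (our assumptions, not the authors' code) of wrapper-based
# feature selection: a threshold-binarized grey wolf optimizer with a
# k-nearest-neighbor cross-validation accuracy driving the cost function.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

def cost(mask: np.ndarray, X: np.ndarray, y: np.ndarray, k: int = 5) -> float:
    """Cost = 1 - mean 5-fold CV accuracy of kNN on the selected features."""
    if not mask.any():          # an empty feature subset is invalid
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=k)
    return 1.0 - cross_val_score(knn, X[:, mask], y, cv=5).mean()

def binary_gwo(X: np.ndarray, y: np.ndarray, T: int = 100, N: int = 10):
    """T iterations and N wolves, matching T and N in Table 1."""
    D = X.shape[1]
    pos = rng.random((N, D))                       # positions in [0, 1]^D
    fit = np.array([cost(p > 0.5, X, y) for p in pos])
    for t in range(T):
        a = 2.0 * (1.0 - t / T)                    # a decreases from 2 to 0
        alpha, beta, delta = pos[np.argsort(fit)[:3]]  # three leading wolves
        for i in range(N):
            new = np.zeros(D)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(D), rng.random(D)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                new += leader - A * np.abs(C * leader - pos[i])
            pos[i] = np.clip(new / 3.0, 0.0, 1.0)
            fit[i] = cost(pos[i] > 0.5, X, y)
    best = np.argmin(fit)
    return pos[best] > 0.5, 1.0 - fit[best]        # feature mask, CV accuracy
```

With a (trials × 180) feature matrix X and task labels y, `mask, acc = binary_gwo(X, y)` returns the selected-feature mask and its cross-validation accuracy; thresholding the continuous positions at 0.5 is only one of several binarization schemes surveyed in [98].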
Table 2. Classification performance of each subject for the mental arithmetic (MA) and baseline tasks in terms of the accuracy (Acc) and the feature vector size (F.V.S.) (data represented as the mean ± the standard deviation). The full feature vector size is 180 for every subject.

| Subject | PSO F.V.S. | PSO Acc (%) | CSO F.V.S. | CSO Acc (%) | FA F.V.S. | FA Acc (%) | BA F.V.S. | BA Acc (%) | FPO F.V.S. | FPO Acc (%) | WOA F.V.S. | WOA Acc (%) | GWO F.V.S. | GWO Acc (%) | Full Features Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 57.9 ± 8.4 | 96.6 ± 4.3 | 64.6 ± 6.1 | 96.6 ± 5.8 | 77.6 ± 8.4 | 97.5 ± 4 | 72.5 ± 5.9 | 98.3 ± 3.5 | 78 ± 3.9 | 97.5 ± 4 | 36.1 ± 35.5 | 94.1 ± 6.8 | 24.1 ± 5.1 | 99.1 ± 2.6 | 91.67 |
| 2 | 58.8 ± 12.8 | 94.1 ± 5.6 | 59 ± 6.4 | 90.8 ± 8.2 | 76 ± 6.4 | 91.6 ± 10.3 | 70.6 ± 6.4 | 89.1 ± 7.9 | 78.3 ± 7.8 | 91.6 ± 5.5 | 6.6 ± 3.9 | 90 ± 8.6 | 13.6 ± 5.2 | 94.1 ± 8.8 | 91.67 |
| 3 | 56.9 ± 5.8 | 85.8 ± 11.1 | 67.9 ± 5.7 | 85.8 ± 10.4 | 81.2 ± 7.4 | 83.3 ± 9.6 | 74.3 ± 5 | 78.3 ± 8.9 | 81 ± 5.8 | 74.1 ± 13.2 | 12.5 ± 14 | 81.6 ± 8.6 | 21.3 ± 4.6 | 93.3 ± 6.5 | 83.33 |
| 4 | 57.6 ± 6.2 | 93.3 ± 6.5 | 58.7 ± 4.6 | 88.3 ± 8.9 | 75.1 ± 5.7 | 90 ± 6.5 | 69.4 ± 5.5 | 89.1 ± 8.8 | 75.3 ± 5.6 | 84.1 ± 7.2 | 4.8 ± 2.8 | 88.3 ± 8.9 | 14.4 ± 4.4 | 93.3 ± 6.5 | 91.67 |
| 5 | 64.4 ± 8.7 | 82.5 ± 13.2 | 70.9 ± 4.5 | 82.5 ± 14.4 | 80.6 ± 6.4 | 83.3 ± 11.7 | 80.6 ± 7.1 | 84.1 ± 13.8 | 79.3 ± 5.5 | 83.3 ± 10.3 | 10.1 ± 14.7 | 85.8 ± 6.8 | 27.3 ± 5.8 | 94.1 ± 5.6 | 58.33 |
| 6 | 58.8 ± 7.5 | 85.8 ± 7.9 | 70.4 ± 7.4 | 89.1 ± 7.9 | 80.1 ± 6.5 | 84.1 ± 9.1 | 77 ± 6.6 | 84.1 ± 8.2 | 79.8 ± 6.6 | 77.5 ± 13 | 12.6 ± 14.5 | 88.3 ± 9.7 | 22.8 ± 7.6 | 94.1 ± 8.8 | 91.67 |
| 7 | 59.9 ± 6.9 | 84.1 ± 9.1 | 69.7 ± 9 | 91.6 ± 7.8 | 86.2 ± 6.4 | 87.5 ± 7 | 76.6 ± 4.7 | 82.5 ± 9.1 | 79.6 ± 6.5 | 84.1 ± 9.1 | 15.3 ± 29.9 | 90.8 ± 7.2 | 27.7 ± 8.1 | 95.8 ± 7 | 75 |
| 8 | 63.6 ± 5.3 | 89.1 ± 6.8 | 68.6 ± 7 | 89.1 ± 6.8 | 80.8 ± 7.8 | 81.6 ± 10.2 | 76.1 ± 5.8 | 84.1 ± 11.4 | 81.9 ± 5.7 | 83.3 ± 12.4 | 17.9 ± 25.8 | 91.6 ± 7.8 | 21.6 ± 3.6 | 94.1 ± 6.8 | 75 |
| 9 | 63.1 ± 8.2 | 96.6 ± 5.8 | 66.4 ± 4.6 | 95 ± 7 | 81 ± 4.4 | 92.5 ± 8.2 | 79.9 ± 6.2 | 97.5 ± 4 | 81.3 ± 3.5 | 90 ± 6.5 | 18 ± 20.4 | 96.6 ± 5.8 | 20.2 ± 6.2 | 99.1 ± 2.6 | 91.67 |
| 10 | 54.2 ± 6.9 | 100 ± 0 | 54.4 ± 5.1 | 100 ± 0 | 72.3 ± 5 | 100 ± 0 | 65.5 ± 3.7 | 100 ± 0 | 76.4 ± 5.6 | 100 ± 0 | 7.2 ± 7 | 100 ± 0 | 6.9 ± 2.8 | 100 ± 0 | 100 |
| 11 | 67.5 ± 6.7 | 90.8 ± 7.2 | 73.7 ± 6.9 | 81.6 ± 6.5 | 81.5 ± 5.2 | 77.5 ± 5.6 | 82.3 ± 5.5 | 88.3 ± 8.9 | 83.4 ± 8 | 85 ± 6.5 | 17.9 ± 27.6 | 81.6 ± 7.6 | 28 ± 6 | 91.6 ± 6.8 | 75 |
| 12 | 60.6 ± 8.5 | 85 ± 6.5 | 62.6 ± 6 | 80 ± 12.5 | 84.3 ± 8 | 80 ± 10.5 | 77.6 ± 6.5 | 80.8 ± 7.9 | 80.1 ± 6.4 | 75 ± 10.3 | 5.7 ± 4.1 | 89.1 ± 5.6 | 18.8 ± 7.8 | 93.3 ± 5.2 | 75 |
| 13 | 66.7 ± 7.5 | 87.5 ± 14.8 | 74.2 ± 3.9 | 85 ± 9.4 | 82 ± 4.5 | 80.8 ± 7.9 | 80.5 ± 5.3 | 81.6 ± 11.6 | 79.5 ± 5.9 | 82.5 ± 9.1 | 21.9 ± 28.5 | 86.6 ± 5.8 | 27.1 ± 5.1 | 93.3 ± 8.6 | 66.67 |
| 14 | 66.4 ± 8.9 | 84.1 ± 10.7 | 67.4 ± 4.5 | 87.5 ± 7 | 83.1 ± 6.4 | 85.8 ± 5.6 | 77.9 ± 6.8 | 78.3 ± 9.7 | 82 ± 6.6 | 79.1 ± 9 | 17.3 ± 21.7 | 84.1 ± 10.7 | 23.9 ± 5 | 94.1 ± 4 | 83.33 |
| 15 | 71.8 ± 7.2 | 87.5 ± 9.8 | 71.7 ± 3.5 | 77.5 ± 11.8 | 85.6 ± 7.9 | 81.6 ± 10.2 | 82.5 ± 8.8 | 81.6 ± 7.6 | 81.6 ± 5.5 | 78.3 ± 7 | 17.1 ± 37.2 | 79.1 ± 5.8 | 28.9 ± 7.4 | 94.1 ± 9.6 | 66.67 |
| 16 | 67.8 ± 7.8 | 80.8 ± 9.6 | 72.4 ± 3.6 | 80 ± 11.9 | 83.8 ± 5 | 81.6 ± 14 | 76 ± 5.7 | 81.6 ± 9.4 | 78.4 ± 4.7 | 77.5 ± 15.2 | 6.5 ± 10.1 | 79.1 ± 10.5 | 26.1 ± 4.6 | 85.8 ± 8.8 | 75 |
| 17 | 61.6 ± 5.1 | 94.1 ± 5.6 | 67.1 ± 4.4 | 95.8 ± 5.8 | 78.3 ± 5 | 91.6 ± 6.8 | 78.3 ± 4.6 | 92.5 ± 6.1 | 80.7 ± 5.1 | 90.8 ± 8.2 | 37.2 ± 40.5 | 93.3 ± 5.2 | 17.5 ± 4.7 | 96.6 ± 4.3 | 83.33 |
| 18 | 56.8 ± 4.1 | 95.8 ± 4.3 | 62.6 ± 8.1 | 96.6 ± 4.3 | 81.3 ± 4.4 | 96.6 ± 5.8 | 76.3 ± 5.3 | 93.3 ± 7.6 | 77.3 ± 7.7 | 97.5 ± 4 | 8 ± 9.2 | 95.8 ± 4.3 | 12.5 ± 5.7 | 96.6 ± 4.3 | 83.33 |
| 19 | 57.8 ± 5.8 | 86.6 ± 8 | 70.1 ± 10.2 | 86.6 ± 11.9 | 80.2 ± 7.3 | 90 ± 11.6 | 77.8 ± 6 | 86.6 ± 8.9 | 80.4 ± 5 | 84.1 ± 9.9 | 19.8 ± 27.3 | 89.1 ± 9.6 | 17.8 ± 6.4 | 94.1 ± 5.6 | 83.33 |
| 20 | 61.8 ± 6.9 | 83.3 ± 10.3 | 68.1 ± 7.8 | 81.6 ± 11.6 | 78.6 ± 7.8 | 85 ± 10.9 | 79 ± 6.6 | 83.3 ± 9.6 | 84.8 ± 7.8 | 82.5 ± 8.2 | 11.3 ± 12.3 | 85 ± 7.6 | 23.8 ± 4.7 | 93.3 ± 6.5 | 83.33 |
| 21 | 60.5 ± 8.6 | 90.8 ± 8.2 | 69.7 ± 6.6 | 91.6 ± 8.7 | 86.5 ± 6.3 | 86.6 ± 5.8 | 79.6 ± 7.6 | 90 ± 9.4 | 79.4 ± 3.2 | 90.8 ± 10.7 | 14.1 ± 12.5 | 93.3 ± 8.6 | 24.6 ± 5.7 | 99.1 ± 2.6 | 100 |
| 22 | 61.1 ± 5.7 | 90 ± 11.6 | 73.1 ± 7.6 | 93.3 ± 6.5 | 81.9 ± 6.2 | 85 ± 10.9 | 79.6 ± 6.6 | 85.8 ± 13 | 78.6 ± 7.5 | 84.1 ± 9.9 | 14.8 ± 15 | 89.1 ± 5.6 | 23.9 ± 6.6 | 95.8 ± 4.3 | 83.33 |
| 23 | 65.8 ± 7.5 | 87.5 ± 11.2 | 65.4 ± 6.9 | 88.3 ± 8.9 | 78.9 ± 5.6 | 76.6 ± 15.6 | 77.6 ± 5.1 | 77.5 ± 15.2 | 76.9 ± 4.9 | 77.5 ± 13 | 16.9 ± 16.4 | 89.1 ± 7.9 | 22.1 ± 7.7 | 94.1 ± 6.8 | 83.33 |
| 24 | 62.3 ± 5.4 | 83.3 ± 15.7 | 70.2 ± 3 | 85 ± 10.2 | 81.7 ± 4.9 | 86.6 ± 8 | 77.7 ± 4.3 | 76.6 ± 12.9 | 82.6 ± 9.1 | 80 ± 8.9 | 7.6 ± 12.6 | 83.3 ± 9.6 | 26.8 ± 7.2 | 89.1 ± 4 | 66.67 |
| 25 | 59.5 ± 10.1 | 92.5 ± 7.2 | 68.1 ± 9.8 | 92.5 ± 9.9 | 77.2 ± 6.7 | 90 ± 8.6 | 79 ± 8.8 | 93.3 ± 6.5 | 79.4 ± 5.9 | 90.8 ± 8.2 | 23.2 ± 31.1 | 93.3 ± 5.2 | 19.4 ± 7.6 | 99.1 ± 2.6 | 83.33 |
| 26 | 61.6 ± 6.5 | 92.5 ± 7.2 | 65.5 ± 4.4 | 92.5 ± 7.2 | 83.1 ± 8.6 | 88.3 ± 8 | 79.9 ± 6.1 | 91.6 ± 6.8 | 81.6 ± 4.4 | 88.3 ± 8.9 | 24.1 ± 25.4 | 91.6 ± 7.8 | 22 ± 9.7 | 94.1 ± 6.8 | 91.67 |
| 27 | 62.9 ± 7.4 | 90.8 ± 6.1 | 73 ± 3 | 96.6 ± 4.3 | 78.7 ± 7.2 | 87.5 ± 8 | 77.8 ± 8.2 | 92.5 ± 8.2 | 82.7 ± 3.4 | 85.8 ± 7.9 | 22.3 ± 27.2 | 91.6 ± 5.5 | 22.7 ± 6.4 | 95.8 ± 5.8 | 83.33 |
| 28 | 60 ± 5.8 | 86.6 ± 9.7 | 67.6 ± 7.8 | 90 ± 11.6 | 77.3 ± 6.7 | 89.1 ± 5.6 | 76.7 ± 6 | 88.3 ± 7 | 81.5 ± 7.4 | 86.6 ± 12.5 | 9.9 ± 9.8 | 85 ± 10.2 | 19.1 ± 4 | 95 ± 5.8 | 66.67 |
| 29 | 56.7 ± 7 | 85 ± 8.6 | 63.1 ± 9.1 | 93.3 ± 7.6 | 78.1 ± 5.7 | 93.3 ± 6.5 | 77.1 ± 10.1 | 90 ± 8.6 | 79.7 ± 6.9 | 85.8 ± 13 | 14.2 ± 14.7 | 90.8 ± 2.6 | 22.4 ± 11.4 | 98.3 ± 3.5 | 75 |
Table 3. Classification performance of each subject for the left-hand motor imagery (LHMI) and right-hand motor imagery (RHMI) tasks in terms of the accuracy (Acc) and the feature vector size (F.V.S.) (data represented as the mean ± the standard deviation). The full feature vector size is 180 for every subject.

| Subject | PSO F.V.S. | PSO Acc (%) | CSO F.V.S. | CSO Acc (%) | FA F.V.S. | FA Acc (%) | BA F.V.S. | BA Acc (%) | FPO F.V.S. | FPO Acc (%) | WOA F.V.S. | WOA Acc (%) | GWO F.V.S. | GWO Acc (%) | Full Features Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 69.4 ± 5.5 | 82.5 ± 9.1 | 74.8 ± 6.9 | 76.6 ± 10.2 | 85.1 ± 6.9 | 70 ± 8.9 | 81.4 ± 5.1 | 70 ± 7 | 83.2 ± 6.3 | 70 ± 8 | 17.5 ± 32.5 | 77.5 ± 5.6 | 35.7 ± 13.7 | 92.5 ± 8.2 | 41.67 |
| 2 | 63.6 ± 4.7 | 87.5 ± 9 | 68.7 ± 6.7 | 94.1 ± 8.8 | 79.3 ± 6.7 | 85 ± 7.6 | 76 ± 5.3 | 91.6 ± 6.8 | 79.2 ± 6.5 | 87.5 ± 5.8 | 21.1 ± 22.1 | 89.1 ± 6.8 | 23.3 ± 7.8 | 94.1 ± 6.8 | 91.67 |
| 3 | 66.9 ± 9.2 | 83.3 ± 11.7 | 69.1 ± 5.9 | 80 ± 8 | 82.9 ± 7.6 | 80 ± 12.5 | 85.4 ± 7.9 | 77.5 ± 9.6 | 85.2 ± 7.3 | 78.3 ± 8.9 | 23.2 ± 43.3 | 86.6 ± 5.8 | 24.7 ± 6.5 | 94.1 ± 5.6 | 66.67 |
| 4 | 62.6 ± 6.4 | 83.3 ± 11.1 | 66.2 ± 6.7 | 84.1 ± 11.4 | 81.8 ± 5.9 | 82.5 ± 10.7 | 80.2 ± 4.3 | 87.5 ± 8 | 80.7 ± 5.6 | 79.1 ± 13.7 | 9.1 ± 12 | 83.3 ± 13 | 22.4 ± 6.6 | 92.5 ± 8.2 | 75 |
| 5 | 66.2 ± 4.1 | 88.3 ± 8 | 71.2 ± 7.1 | 78.3 ± 9.7 | 77.5 ± 4 | 80 ± 13.7 | 81.2 ± 9.6 | 74.1 ± 13.2 | 82 ± 5.6 | 80 ± 8 | 17.2 ± 16.7 | 85 ± 5.2 | 27.8 ± 6.5 | 90.8 ± 9.1 | 66.67 |
| 6 | 56.3 ± 7.8 | 88.3 ± 11.9 | 61.7 ± 2.9 | 85.8 ± 7.9 | 78 ± 7.4 | 86.6 ± 10.5 | 75.2 ± 6.3 | 84.1 ± 8.2 | 77.4 ± 6.8 | 80 ± 8 | 8.1 ± 10.4 | 90 ± 7.6 | 16.8 ± 6.4 | 86.6 ± 7 | 75 |
| 7 | 59.2 ± 7 | 90.8 ± 8.2 | 67.7 ± 5.3 | 90 ± 9.4 | 79.9 ± 4.8 | 86.6 ± 8 | 78 ± 6.2 | 86.6 ± 7 | 82.2 ± 8.9 | 85.8 ± 7.9 | 19.8 ± 27.8 | 82.5 ± 6.1 | 23.3 ± 6.1 | 95 ± 5.8 | 75 |
| 8 | 63.7 ± 7.8 | 79.1 ± 11.9 | 69.4 ± 5.7 | 78.3 ± 7 | 82.4 ± 9.6 | 74.1 ± 15.9 | 75.9 ± 6.1 | 77.5 ± 13.6 | 79.5 ± 7.5 | 81.6 ± 10.9 | 8.2 ± 7.8 | 81.6 ± 9.4 | 24.1 ± 3.9 | 87.5 ± 9.8 | 83.33 |
| 9 | 68.4 ± 5.3 | 83.3 ± 12.4 | 70.4 ± 7.5 | 77.5 ± 13.6 | 82.6 ± 7.2 | 79.1 ± 8 | 81.3 ± 4.9 | 75 ± 11.1 | 86.6 ± 9.1 | 73.3 ± 12.9 | 15.4 ± 12.4 | 84.1 ± 4.7 | 23.3 ± 6.7 | 96.6 ± 5.8 | 66.67 |
| 10 | 66.8 ± 5.7 | 86.6 ± 9.7 | 74 ± 4.6 | 89.1 ± 6.8 | 80.2 ± 6.5 | 82.5 ± 7.2 | 83.3 ± 6.2 | 80.8 ± 7.9 | 81.8 ± 4.2 | 83.3 ± 9.6 | 15.3 ± 25.6 | 84.1 ± 10.7 | 31.1 ± 10.3 | 92.5 ± 7.2 | 83.33 |
| 11 | 61.3 ± 9.6 | 79.1 ± 9.8 | 73.6 ± 6 | 85 ± 9.4 | 82.3 ± 6.4 | 79.1 ± 7 | 80 ± 7.6 | 80 ± 5.8 | 83.5 ± 5.6 | 81.6 ± 10.2 | 8 ± 9 | 83.3 ± 8.7 | 19.5 ± 5.1 | 91.6 ± 9.6 | 75 |
| 12 | 63.4 ± 7.1 | 83.3 ± 7.8 | 64.7 ± 6.6 | 76.6 ± 10.9 | 82.5 ± 7.1 | 80.8 ± 10.4 | 75.2 ± 5.3 | 75.8 ± 14.9 | 78.4 ± 6.4 | 80.8 ± 10.4 | 16.5 ± 20.5 | 80.8 ± 8.8 | 23.5 ± 8.7 | 90 ± 8.6 | 58.33 |
| 13 | 62.5 ± 5.1 | 89.1 ± 6.8 | 73.4 ± 7.5 | 88.3 ± 8 | 80.6 ± 6.8 | 84.1 ± 9.1 | 79.1 ± 4.5 | 80 ± 9.7 | 81 ± 4 | 80 ± 7 | 14.9 ± 19.9 | 85 ± 10.2 | 25.9 ± 8.6 | 96.6 ± 4.3 | 66.67 |
| 14 | 65.8 ± 6.4 | 83.3 ± 5.5 | 67 ± 6.2 | 87.5 ± 7 | 81.3 ± 5.1 | 89.1 ± 9.6 | 80.3 ± 4.1 | 83.3 ± 9.6 | 80.5 ± 6.8 | 85.8 ± 8.8 | 12.4 ± 19.9 | 83.3 ± 7.8 | 27.4 ± 5.8 | 97.5 ± 4 | 66.67 |
| 15 | 60.3 ± 7.6 | 74.1 ± 10.7 | 70 ± 6.5 | 80 ± 11.2 | 86.2 ± 7.2 | 77.5 ± 9.6 | 78.3 ± 9.5 | 88.3 ± 9.7 | 81.1 ± 3.4 | 80 ± 11.2 | 10.8 ± 7.4 | 85.8 ± 10.4 | 22.1 ± 5.4 | 93.3 ± 6.5 | 66.67 |
| 16 | 58.7 ± 5.4 | 81.6 ± 15.1 | 66.5 ± 6.1 | 83.3 ± 11.1 | 80 ± 5.6 | 69.1 ± 6.8 | 76.6 ± 6.2 | 80 ± 15.8 | 78.7 ± 6.1 | 76.6 ± 12.2 | 11.1 ± 23.3 | 79.1 ± 10.5 | 29.3 ± 16.6 | 94.1 ± 4 | 91.67 |
| 17 | 64.8 ± 7.2 | 82.5 ± 11.4 | 74.2 ± 5.9 | 85 ± 13.4 | 80.5 ± 6.4 | 80.8 ± 9.6 | 80 ± 9 | 75 ± 12.4 | 82.5 ± 5.7 | 73.3 ± 7.6 | 19 ± 25.4 | 80 ± 8.9 | 27.8 ± 9.4 | 91.6 ± 9.6 | 83.33 |
| 18 | 62.8 ± 5.9 | 88.3 ± 8 | 68.3 ± 4.8 | 87.5 ± 9.8 | 84.7 ± 8.3 | 82.5 ± 6.1 | 79.9 ± 10.4 | 83.3 ± 8.7 | 78.3 ± 4.6 | 75.8 ± 11.4 | 8.5 ± 10.1 | 81.6 ± 7.6 | 26.7 ± 5.3 | 87.5 ± 7 | 75 |
| 19 | 70.7 ± 4.5 | 79.1 ± 13.7 | 74.4 ± 6.8 | 72.5 ± 12.4 | 85.1 ± 7.1 | 71.6 ± 13.7 | 75.2 ± 5.2 | 58.3 ± 15.2 | 84.9 ± 6.6 | 64.1 ± 12.4 | 4.2 ± 3.8 | 80 ± 5.8 | 32.8 ± 8.2 | 86.6 ± 8 | 41.67 |
| 20 | 67.2 ± 7 | 87.5 ± 7 | 71.6 ± 7.8 | 85 ± 9.4 | 83.4 ± 9.5 | 80 ± 8.9 | 75.9 ± 5.4 | 83.3 ± 11.1 | 79.2 ± 4 | 78.3 ± 10.5 | 25.8 ± 27.6 | 88.3 ± 8 | 25.8 ± 5.3 | 97.5 ± 4 | 83.33 |
| 21 | 62.8 ± 10.1 | 79.1 ± 12.5 | 74.2 ± 8.1 | 85.8 ± 10.4 | 80.7 ± 8.4 | 80.8 ± 7.9 | 78.4 ± 5.1 | 81.6 ± 16.5 | 79.9 ± 7.3 | 80 ± 9.7 | 17.8 ± 21.1 | 86.6 ± 9.7 | 29.5 ± 8.2 | 94.1 ± 7.9 | 66.67 |
| 22 | 59.4 ± 7.8 | 80.8 ± 9.6 | 67.7 ± 5.5 | 78.3 ± 7 | 80.7 ± 4.6 | 75 ± 12.4 | 74.9 ± 3.9 | 83.3 ± 5.5 | 84 ± 7.4 | 75 ± 8.7 | 8.1 ± 13.1 | 82.5 ± 8.2 | 23.5 ± 7.8 | 90 ± 5.2 | 66.67 |
| 23 | 68.3 ± 8.4 | 88.3 ± 12.5 | 68.8 ± 7.9 | 88.3 ± 8 | 83.6 ± 7.6 | 85 ± 6.5 | 75.9 ± 5.5 | 86.6 ± 5.8 | 81.6 ± 6.3 | 85 ± 10.2 | 20.5 ± 29.8 | 87.5 ± 8 | 26.9 ± 10 | 94.1 ± 5.6 | 75 |
| 24 | 63.1 ± 8.4 | 77.5 ± 17.5 | 69.8 ± 4.8 | 74.1 ± 13.8 | 82.1 ± 5.5 | 74.1 ± 9.1 | 82.1 ± 7.8 | 77.5 ± 11.1 | 77.4 ± 5.4 | 72.5 ± 11.1 | 8.7 ± 12.6 | 79.1 ± 7 | 23.3 ± 4.2 | 90 ± 6.5 | 75 |
| 25 | 63.6 ± 8.9 | 78.3 ± 5.8 | 72.8 ± 5.7 | 84.1 ± 6.1 | 82.2 ± 6.3 | 80 ± 7 | 79.8 ± 10 | 81.6 ± 5.2 | 80.6 ± 5.1 | 69.1 ± 12.4 | 17.1 ± 12.8 | 85 ± 6.5 | 31.3 ± 7.5 | 91.6 ± 7.8 | 66.67 |
| 26 | 62.1 ± 5 | 84.1 ± 8.2 | 72.3 ± 5.9 | 82.5 ± 9.1 | 78.2 ± 5.2 | 81.6 ± 14 | 77.3 ± 3.8 | 87.5 ± 14.8 | 81.3 ± 5.2 | 80 ± 8 | 28.6 ± 30.5 | 81.6 ± 5.2 | 30.2 ± 13.6 | 91.6 ± 11.1 | 66.67 |
| 27 | 66.9 ± 5.8 | 84.1 ± 10.7 | 69.9 ± 5.3 | 90 ± 6.5 | 81.6 ± 8.2 | 83.3 ± 9.6 | 78.7 ± 4.8 | 85 ± 10.9 | 81.9 ± 7.7 | 84.1 ± 7.2 | 18.1 ± 28.2 | 90 ± 8.6 | 24.4 ± 7.6 | 96.6 ± 8 | 75 |
| 28 | 68.7 ± 5.1 | 82.5 ± 9.1 | 73 ± 7.5 | 86.6 ± 7 | 84.8 ± 6.1 | 83.3 ± 7.8 | 78.1 ± 5.4 | 85 ± 10.2 | 85.7 ± 8.4 | 82.5 ± 9.9 | 12.6 ± 6.5 | 88.3 ± 9.7 | 25.7 ± 7.7 | 93.3 ± 5.2 | 75 |
| 29 | 64.4 ± 6.1 | 85 ± 12.2 | 72.2 ± 9.1 | 85 ± 9.4 | 80.2 ± 6.7 | 82.5 ± 9.9 | 79.1 ± 4.1 | 75 ± 10.3 | 82.2 ± 7.6 | 75 ± 12.4 | 24.6 ± 37.1 | 84.1 ± 9.1 | 28.1 ± 6 | 95 ± 5.8 | 66.67 |
Table 4. Classification performance of each subject for the LHMI, RHMI, MA, and baseline tasks in terms of the accuracy (Acc) and the feature vector size (F.V.S.) (data represented as the mean ± the standard deviation). The full feature vector size is 180 for every subject.

| Subject | PSO F.V.S. | PSO Acc (%) | CSO F.V.S. | CSO Acc (%) | FA F.V.S. | FA Acc (%) | BA F.V.S. | BA Acc (%) | FPO F.V.S. | FPO Acc (%) | WOA F.V.S. | WOA Acc (%) | GWO F.V.S. | GWO Acc (%) | Full Features Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 81.1 ± 6.1 | 77.5 ± 7.6 | 81.3 ± 5.5 | 74.1 ± 8.7 | 86.1 ± 5.7 | 75 ± 5.5 | 84 ± 5.2 | 73.3 ± 6.8 | 86.9 ± 4.3 | 68.3 ± 4.8 | 51.6 ± 39.5 | 69.5 ± 6.8 | 44.8 ± 7.7 | 82.9 ± 8.2 | 66.67 |
| 2 | 76.6 ± 8.1 | 79.5 ± 4.5 | 77.8 ± 4.1 | 79.1 ± 7.3 | 83.6 ± 5.4 | 72.5 ± 5.9 | 85.8 ± 7.2 | 76.6 ± 4.4 | 86.7 ± 5.6 | 75 ± 9.8 | 53.9 ± 37.4 | 72.9 ± 11.3 | 35.3 ± 5.5 | 87.9 ± 5.3 | 70.83 |
| 3 | 71.8 ± 7.1 | 75.8 ± 7.2 | 75.9 ± 6 | 72.9 ± 8.1 | 83 ± 4.4 | 73.7 ± 5.5 | 83.8 ± 8.4 | 70 ± 11.7 | 83.7 ± 4.8 | 70.4 ± 5.3 | 71.3 ± 51.8 | 72.9 ± 5.2 | 36.3 ± 7 | 83.7 ± 8.8 | 58.33 |
| 4 | 72.6 ± 8.2 | 78.7 ± 4.1 | 73.9 ± 3.9 | 77 ± 6.2 | 83.9 ± 3 | 72.9 ± 7.6 | 83.3 ± 7.4 | 74.5 ± 5.7 | 84.9 ± 6.1 | 73.7 ± 3.4 | 60.8 ± 31.4 | 71.2 ± 8.2 | 35.4 ± 9.8 | 86.2 ± 5.9 | 66.67 |
| 5 | 78.6 ± 12.6 | 73.3 ± 5.6 | 81.4 ± 6 | 75 ± 6.8 | 85.9 ± 5.2 | 67.9 ± 7.3 | 82.6 ± 6.7 | 72 ± 5.5 | 83.5 ± 7.6 | 67 ± 6.9 | 42 ± 36.1 | 73.3 ± 6.5 | 39.4 ± 9.5 | 86.6 ± 2.6 | 58.33 |
| 6 | 69.5 ± 4.8 | 69.5 ± 7.3 | 76.9 ± 7.1 | 68.7 ± 7.4 | 81.4 ± 6 | 60.4 ± 10.2 | 82.1 ± 5.6 | 67 ± 4.9 | 82.4 ± 8.7 | 65 ± 7.6 | 34.1 ± 25.8 | 63.7 ± 6.5 | 35.5 ± 5.8 | 83.7 ± 6.3 | 50 |
| 7 | 76.2 ± 8.2 | 79.1 ± 6.5 | 76 ± 7.6 | 85 ± 7.9 | 82.2 ± 5.4 | 77.9 ± 5.5 | 84.5 ± 4.9 | 72.5 ± 6.8 | 84.1 ± 6.1 | 77.9 ± 6.2 | 43.6 ± 31.8 | 74.1 ± 5.1 | 37.9 ± 4 | 90 ± 5.9 | 58.33 |
| 8 | 77.9 ± 8.1 | 65.8 ± 7.5 | 75.1 ± 6.8 | 68.3 ± 10.6 | 85.3 ± 7.4 | 67.9 ± 5.5 | 83.1 ± 9 | 61.6 ± 7 | 85.2 ± 6.9 | 61.2 ± 7 | 20.3 ± 17.5 | 62 ± 7.9 | 35.2 ± 4.5 | 84.5 ± 5.2 | 62.50 |
| 9 | 79.1 ± 6.6 | 80 ± 7.2 | 76.1 ± 3.5 | 79.5 ± 7.4 | 84.5 ± 5.7 | 77.9 ± 9.6 | 85.1 ± 6 | 76.6 ± 5.9 | 85.9 ± 6.2 | 76.6 ± 6.5 | 51.8 ± 37.9 | 75.8 ± 8.2 | 38.4 ± 5.5 | 91.6 ± 5.8 | 75 |
| 10 | 70.9 ± 8 | 92.9 ± 3.4 | 75.2 ± 6.7 | 90 ± 4 | 83.9 ± 5 | 88.7 ± 6.8 | 84 ± 7.9 | 89.5 ± 5.2 | 83.7 ± 5.5 | 88.3 ± 5.1 | 53.9 ± 27.1 | 86.6 ± 5.4 | 33.3 ± 5.8 | 97.5 ± 2.1 | 83.33 |
| 11 | 75.6 ± 7.3 | 71.2 ± 9.9 | 74.3 ± 5.9 | 65.4 ± 6.8 | 84.8 ± 6.4 | 64.1 ± 9.2 | 87.6 ± 5 | 67 ± 4.9 | 83.9 ± 4.6 | 67.9 ± 3.4 | 58.8 ± 44.9 | 62 ± 10.6 | 43.7 ± 13.5 | 76.6 ± 7.9 | 58.33 |
| 12 | 67.9 ± 8.4 | 70.4 ± 8.6 | 71.9 ± 6.5 | 72.5 ± 11.9 | 83 ± 4.3 | 70 ± 8.9 | 79.3 ± 7.7 | 70.4 ± 6.6 | 81.2 ± 6 | 62.9 ± 7.4 | 17 ± 15.5 | 63.3 ± 6.4 | 30.4 ± 6.5 | 77 ± 5.6 | 58.33 |
| 13 | 76.7 ± 6.9 | 77 ± 8.6 | 77.9 ± 6.4 | 71.6 ± 6.4 | 87.7 ± 5.3 | 67.9 ± 6.2 | 81.7 ± 5.7 | 69.5 ± 7.3 | 87.3 ± 6 | 65 ± 5.2 | 39.8 ± 26.8 | 69.1 ± 8.3 | 41.9 ± 6.2 | 87 ± 7.7 | 58.33 |
| 14 | 77.2 ± 6.3 | 77 ± 6.2 | 75.4 ± 6.5 | 77 ± 5.6 | 85 ± 6.2 | 73.7 ± 7.3 | 83.9 ± 4.6 | 75.4 ± 6 | 84.5 ± 6.3 | 67.9 ± 8.5 | 35.2 ± 30.2 | 70.4 ± 8.2 | 41.4 ± 11.5 | 85 ± 7.4 | 75 |
| 15 | 75.3 ± 4.3 | 69.1 ± 7.6 | 78.5 ± 9 | 68.3 ± 6.2 | 85.5 ± 6.3 | 61.6 ± 6.1 | 86.1 ± 5.1 | 64.1 ± 5.9 | 84.9 ± 5 | 58.7 ± 5.3 | 54.3 ± 44.4 | 62.5 ± 6.8 | 39.1 ± 5.9 | 78.7 ± 6.6 | 41.67 |
| 16 | 77 ± 8.3 | 74.5 ± 6 | 80.2 ± 6.2 | 70 ± 6.4 | 84.3 ± 6.1 | 64.5 ± 7.4 | 80.8 ± 5.3 | 65.8 ± 9.5 | 85.1 ± 5.5 | 62.9 ± 10.2 | 39.8 ± 39.4 | 65.8 ± 10.9 | 41.1 ± 6.9 | 81.6 ± 5.9 | 66.67 |
| 17 | 77.5 ± 6.6 | 82 ± 7.6 | 76.1 ± 6.3 | 75.8 ± 9.7 | 83.9 ± 7.1 | 70 ± 3.8 | 86.6 ± 7.3 | 72.9 ± 5.9 | 86 ± 7 | 69.1 ± 6.8 | 60.6 ± 41.1 | 70.4 ± 4.9 | 37.4 ± 7.9 | 89.1 ± 4 | 58.33 |
| 18 | 76.9 ± 8.4 | 81.6 ± 10.4 | 74.8 ± 6.9 | 80 ± 7 | 82.1 ± 5 | 72.5 ± 7.6 | 84.6 ± 5.6 | 77.5 ± 7.1 | 87.2 ± 5.7 | 72.9 ± 9.2 | 39.6 ± 26.8 | 75.8 ± 4.7 | 33.8 ± 5.8 | 89.1 ± 6.5 | 66.67 |
| 19 | 78 ± 5.9 | 70.8 ± 8.5 | 77.6 ± 6.1 | 59.5 ± 7 | 83.9 ± 5.2 | 59.1 ± 5.4 | 82.2 ± 7.3 | 55.8 ± 7.9 | 81.8 ± 4.3 | 55 ± 6.4 | 19.5 ± 28.5 | 60.4 ± 9 | 36.9 ± 6.7 | 78.7 ± 7.4 | 50 |
| 20 | 77.1 ± 8 | 77.5 ± 9 | 73.5 ± 4.7 | 72.5 ± 5.6 | 89.6 ± 5.5 | 68.3 ± 6.2 | 82.8 ± 5.8 | 67.5 ± 7.8 | 87 ± 5.2 | 70 ± 7.5 | 54.7 ± 33.6 | 72 ± 7.8 | 37.2 ± 7 | 85 ± 6.5 | 62.50 |
| 21 | 78.2 ± 5 | 80.4 ± 10.2 | 80.7 ± 6.6 | 77.9 ± 6.8 | 86.7 ± 7.8 | 79.1 ± 5.5 | 85.4 ± 3.2 | 74.1 ± 8.7 | 85.2 ± 6.5 | 75 ± 5.8 | 53.5 ± 42.2 | 76.2 ± 7 | 40.6 ± 7.3 | 88.3 ± 5.8 | 62.50 |
| 22 | 78.1 ± 7.5 | 78.3 ± 8 | 79.8 ± 5.7 | 73.3 ± 6.5 | 86.2 ± 6.3 | 72 ± 3.9 | 85.4 ± 6.8 | 70.8 ± 4.8 | 85 ± 3.3 | 66.6 ± 7 | 46 ± 25.2 | 72 ± 7.6 | 44.1 ± 9.9 | 83.3 ± 5.8 | 70.83 |
| 23 | 70.9 ± 5.6 | 82 ± 9.4 | 78.2 ± 7.8 | 82 ± 6.5 | 83.8 ± 5.9 | 75.8 ± 8.2 | 86.2 ± 8.1 | 78.3 ± 10.1 | 84.1 ± 7.4 | 77.5 ± 6.5 | 52.5 ± 44.6 | 74.1 ± 5.4 | 38 ± 13 | 85 ± 10 | 75 |
| 24 | 78.4 ± 7.4 | 75.8 ± 6.1 | 74.6 ± 4.8 | 66.2 ± 6.3 | 84.6 ± 7 | 70 ± 6.4 | 85.5 ± 6.5 | 65 ± 6.5 | 79.8 ± 6.5 | 62.5 ± 7 | 38 ± 35.6 | 72 ± 7.6 | 32.4 ± 4.8 | 80.8 ± 5.6 | 54.17 |
| 25 | 76 ± 3.6 | 80.8 ± 5.6 | 80.8 ± 6.5 | 70 ± 8.2 | 87.4 ± 7.1 | 69.1 ± 10.6 | 83.1 ± 7.5 | 65 ± 8.3 | 84 ± 6.1 | 63.3 ± 9.9 | 65.3 ± 26.5 | 69.5 ± 9.6 | 41 ± 7.7 | 86.2 ± 4.8 | 58.33 |
| 26 | 74.8 ± 4.2 | 76.6 ± 6.8 | 78.4 ± 5.5 | 72.5 ± 8.3 | 86.1 ± 5.8 | 68.7 ± 7.9 | 83.1 ± 5.5 | 70.8 ± 8.7 | 82.7 ± 3.6 | 65.8 ± 5.8 | 50.2 ± 33.3 | 70 ± 7.2 | 37.6 ± 5.7 | 83.7 ± 5.7 | 54.17 |
| 27 | 77 ± 8.7 | 85 ± 7.6 | 81.1 ± 4.3 | 75 ± 6.2 | 83.6 ± 4.4 | 73.3 ± 8.1 | 86.1 ± 7.2 | 70.8 ± 7 | 80.7 ± 7.9 | 73.3 ± 8.3 | 51.5 ± 30.3 | 72 ± 7 | 39.7 ± 8.9 | 82.9 ± 8.4 | 70.83 |
| 28 | 75.5 ± 7.9 | 69.5 ± 9 | 78.7 ± 8 | 67.5 ± 5.4 | 82.1 ± 5.7 | 65 ± 4 | 85.1 ± 6.6 | 59.5 ± 9.2 | 84.2 ± 9 | 63.7 ± 9.4 | 37.8 ± 21.6 | 64.5 ± 7.6 | 37.8 ± 6.6 | 78.3 ± 8.2 | 62.50 |
| 29 | 68.9 ± 3.7 | 74.5 ± 6 | 77.2 ± 7.4 | 74.5 ± 9.3 | 81.4 ± 6 | 73.3 ± 6.8 | 81.5 ± 5.2 | 75.8 ± 5.8 | 87.1 ± 5.8 | 70 ± 7.5 | 57.4 ± 46 | 75.4 ± 5.3 | 33.9 ± 7.5 | 84.5 ± 9.6 | 66.67 |
Table 5. Average feature vector size and processing time of the optimization algorithms.

| Metaheuristic Algorithm | Feature Vector Size | Processing Time (s) |
|---|---|---|
| PSO | 67 | 2.37 |
| CSO | 72 | 4.48 |
| FA | 82 | 10.08 |
| BA | 80 | 2.32 |
| FPO | 82 | 2.27 |
| WOA | 26 | 2.00 |
| GWO | 29 | 2.17 |
Table 6. Comparison of the proposed framework with the studies using the same dataset.

| Ref. | MA (%) | MI (%) | Four-Class (LHMI, RHMI, MA, and Baseline) (%) |
|---|---|---|---|
| [36] | 88.1 | 87.2 (LHMI), 88.4 (RHMI) | - |
| [45] | 86.83 | 77.41 | - |
| [55] | 83.6 | 63.5 | - |
| [94] | 84.94 | 70.14 | - |
| [95] | 82.76 | 65.86 | - |
| Proposed approach | 94.83 ± 5.5 | 92.57 ± 6.9 | 85.66 ± 7.3 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.