Review

Recent Progress in Smart Electronic Nose Technologies Enabled with Machine Learning Methods

1 Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA 22030, USA
2 Applied Materials, Sunnyvale, CA 94085, USA
* Author to whom correspondence should be addressed.
Sensors 2021, 21(22), 7620; https://doi.org/10.3390/s21227620
Submission received: 24 October 2021 / Revised: 8 November 2021 / Accepted: 13 November 2021 / Published: 16 November 2021

Abstract

Machine learning methods enable the electronic nose (E-Nose) for precise odor identification with both qualitative and quantitative analysis. Advanced machine learning methods are crucial for the E-Nose to gain high performance and strengthen its capability in many applications, including robotics, food engineering, environment monitoring, and medical diagnosis. Recently, many machine learning techniques have been studied, developed, and integrated into feature extraction, modeling, and gas sensor drift compensation. The purpose of feature extraction is to keep robust pattern information in raw signals while removing redundancy and noise. With the extracted feature, a proper modeling method can effectively use the information for prediction. In addition, drift compensation is adopted to relieve the model accuracy degradation due to the gas sensor drifting. These recent advances have significantly promoted the prediction accuracy and stability of the E-Nose. This review is engaged to provide a summary of recent progress in advanced machine learning methods in E-Nose technologies and give an insight into new research directions in feature extraction, modeling, and sensor drift compensation.

1. Introduction

An electronic nose (or E-Nose) is an aroma analyzer that uses mechanical and electronic components to emulate the human olfactory system. Unlike conventional aroma analysis methods performed in a laboratory environment, the E-Nose is developed for applications that demand quick measurements while avoiding human subjectivity, and it has proven promising in robotics [1,2,3,4], food engineering [5,6,7,8,9,10,11], environment monitoring [12,13,14,15], and the diagnosis of diseases [16,17,18,19,20,21].
Compared to the human olfactory system, an E-Nose uses a gas sensor array to convert gas molecular signals into electric signals (Figure 1). Although no highly specific receptors are used in an E-Nose, unique patterns can be generated for various odors and serve as their fingerprints for future predictions through proper machine learning techniques. According to Yan et al. [22], most optimizations adopted by recent studies for E-Nose systems belong to one of three categories: sensitive material selection and sensor array optimization, the feature extraction and selection method, and the pattern recognition method. Despite the advancements in finding more selective and sensitive materials and mechanisms for gas sensing, such as functionalized graphene [15,23,24,25,26], conductive polymers [27,28,29,30], and acoustic wave gas sensors [31,32], improving the differentiation capability and long-term signal consistency of an E-Nose remains a challenge for machine learning and data processing.
The general machine learning framework of the E-Nose for specific applications involves feature extraction, modeling, and drift compensation. An E-Nose produces high-dimensional time-series raw signals in response to target gases, which contain noise and redundant information. Feature extraction preserves only the information that uniquely characterizes the pattern of an odor signal. The extracted features can be used for qualitative and quantitative aroma analysis assisted by proper modeling techniques. Qualitative aroma analysis aims to distinguish different odors; quantitative aroma analysis predicts a particular property associated with the target odor. However, many gas sensors suffer from drifting problems [34,35,36]; that is, sensor responses to the same gas change over time due to sensor aging and environmental change, and the inconsistency of sensor responses may void a model built on previous data. The sensor drifting problem can be relieved by drift compensation algorithms through machine learning instead of rebuilding the model on new data.
Although there have been a few previous reviews on the E-Nose, they focus either on specific applications [37,38,39] or only a portion of the entire E-Nose data processing pipeline [22,40]. In comparison, this review aims to provide a comprehensive study of machine learning techniques for general E-Nose applications. Moreover, recent years have seen high-performance neural network approaches adopted by various machine learning tasks for audio and image processing, but few works review their effectiveness in E-Nose data processing. Therefore, we are motivated to include the latest practices that have applied neural networks to the E-Nose in this work. Here, we review the recent advances in E-Nose machine learning techniques, with a focus on three important aspects: (1) feature extraction, (2) modeling, and (3) gas sensor drift compensation. By surveying machine learning methods for different E-Nose applications, this work tries to evaluate the performance of an E-Nose in existing applications and to inspire new applications in many emerging fields.

2. Machine Learning for E-Nose

2.1. Feature Extraction

E-Nose data are high-dimensional time-series arrays that reflect the concentration of the target gas; a typical sensor response is shown in Figure 2. The signal strength increases during the response phase and decreases during the sensor recovery phase in each measurement. Odors are distinguished and identified based on the distinct features of both phases in their responses. The features can either be manually extracted or learned with a neural network.

2.1.1. Manual Feature Extraction

In general, manually extracted features are selected based on the prior knowledge of data processing and E-Nose data. Features extracted from raw signals can be either from the time domain or frequency domain [41,42,43].
Time-domain features can be extracted from the original response curve. The commonly used features are summarized by [22] in Table 1, where x(t) represents either the voltage or resistance change signal generated by the E-Nose. These features characterize the local pattern and can be calculated from a small section of the complete signal.
Nallon et al. [23] fabricated a graphene gas sensor and tested its response towards 11 different analytes. The signals were first normalized to the range 0~1, after which the areas under the curve during the sensor response and recovery phases were used as features. Zhi et al. [5] used a commercial E-Nose with 18 metal oxide semiconductor (MOX) gas sensors to distinguish teas of different qualities. Maximum response values and average response values during the measurement period were extracted for classification.
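As an illustration of such time-domain features, the following sketch computes the maximum response, average response, areas under the curve for the response and recovery phases, and the maximum slope for one sensor signal. The function and variable names are illustrative and not taken from the cited works.

```python
import numpy as np

def time_domain_features(x, dt=1.0):
    """A few common time-domain E-Nose features (Table 1 style) for one sensor signal.

    x  : 1D array of baseline-corrected responses (voltage or resistance change)
    dt : sampling interval in seconds
    """
    x = np.asarray(x, dtype=float)
    x_norm = (x - x.min()) / (x.max() - x.min() + 1e-12)       # normalize to 0~1
    peak = int(np.argmax(x_norm))
    return {
        "max_response": float(x_norm.max()),                    # maximum response value
        "mean_response": float(x_norm.mean()),                  # average response value
        "area_response": float(x_norm[: peak + 1].sum() * dt),  # area under curve, response phase
        "area_recovery": float(x_norm[peak:].sum() * dt),       # area under curve, recovery phase
        "max_slope": float(np.max(np.diff(x_norm)) / dt),       # steepest rise during response
    }

# One feature vector per measurement: concatenate the features of every sensor.
# signals = ...  # array of shape (n_sensors, n_timesteps)
# feature_vector = np.concatenate([list(time_domain_features(s).values()) for s in signals])
```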
In addition to the low-level features abstracting local characteristics of E-Nose signals, there are also high-level feature extraction practices such as parametric fitting with predefined functions [29,30]. Nallon et al. [24] modeled the resistance response of a graphene gas sensor as $R_s(t) = \alpha_s (1 - e^{-\beta_s t}) + \gamma_s$ for the sensing period and $R_r(t) = \alpha_r e^{-\beta_r t} + \gamma_r$ for the recovery period. Thus, six function parameters were obtained for each sensor and used as features for gas discrimination. It was reported that these parametric features showed better discrimination towards gases than other time-domain features. Yan et al. [44] compared the performance of different curve fitting functions in wound pathogen detection. In the study, signals were collected on seven pathogen samples using an E-Nose device with 5 MOX gas sensors. After the features were extracted using the different feature extraction methods, they were used to train the same radial basis function network (RBFN), and the resulting test accuracies were compared. The selection of the parametric fitting function was essential to classification; a template function with more parameters might be influenced by the noise in the signal and result in low-quality features.
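A least-squares fit of these two exponential templates can be sketched as follows. The template forms follow the reconstruction above (the signs of the exponents are assumed from the usual rise/decay shape), and the initial guesses and helper names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def sensing_model(t, alpha, beta, gamma):
    # exponential rise during the sensing period
    return alpha * (1.0 - np.exp(-beta * t)) + gamma

def recovery_model(t, alpha, beta, gamma):
    # exponential decay during the recovery period
    return alpha * np.exp(-beta * t) + gamma

def parametric_features(t_sense, r_sense, t_recov, r_recov):
    """Fit both phases of one sensor response; the six fitted parameters are the features."""
    p_sense, _ = curve_fit(sensing_model, t_sense, r_sense,
                           p0=(r_sense.max(), 0.1, r_sense[0]), maxfev=10000)
    p_recov, _ = curve_fit(recovery_model, t_recov - t_recov[0], r_recov,
                           p0=(r_recov[0], 0.1, r_recov[-1]), maxfev=10000)
    return np.concatenate([p_sense, p_recov])
```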
Inspired by the parametric curve fitting method, Liu et al. [45] proposed a non-parametric modeling-based feature extraction method that decomposes a sensor signal into impulse responses. An arbitrary signal $v(t)$ can be represented as $v(t \mid \theta_s) = \sum_{\tau=1}^{D} \theta_s(\tau)\, u_s(t - \tau)$, where $u_s$ are the ideal step inputs and $\theta_s$ are the coefficients used as features. A Mercer kernel was applied to regularize the solution and improve the finite impulse response model, which converts the problem into solving $\hat{\theta}_s = \arg\min_{\theta_s} (V_s - X_s \theta_s)^T (V_s - X_s \theta_s) + \theta_s^T K^{-1} \theta_s$, where $V_s$ are the sensor response vectors, $X_s$ is the matrix representation of $u_s(t - \tau)$, and $K$ is the kernel matrix. The method generates a feature matrix of size $D \times S$, where $D$ is the order of the impulse response and $S$ is the number of sensors. The extracted features were reported to be effective for odor classification and robust enough to noise to skip the denoising preprocessing.
The windowing method slices signals in the time domain with window functions such as Hamming and Gaussian. Guo et al. [46] used a moving window to compute the area enclosed by the window and the sensor signal (Figure 3). A window was placed around the peak of the normalized signal and moved both left and right by the width of the window, which generated three features corresponding to the window positions. Various hyperparameters were tested, including the width and the type of window, of which a window width of 480 s produced the best classification accuracy.
Frequency-domain features can be extracted after transforming the original time-domain signal to the frequency domain through the Fourier transform or wavelet transform [47,48,49,50,51]. In the experiment by Dai et al. [6], who used MOX sensors to classify different teas, the original signals were transformed by wavelet packet decomposition with a Daubechies wavelet as the wavelet base. The three-level decomposition produced eight energy strengths corresponding to distinct frequency bands for each sensor, and the average and maximum energy strengths were then calculated as features. Men et al. [52] applied the same wavelet decomposition method as Dai while using variable importance of projection scores generated by partial least squares (PLS) to choose the features with the most explanatory power.
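A minimal sketch of this wavelet-packet energy feature is shown below, using the PyWavelets package and assuming a third-order Daubechies wavelet (the exact Daubechies order is not specified in the cited works).

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_packet_energies(signal, wavelet="db3", level=3):
    """Three-level wavelet packet decomposition of one sensor signal.

    Returns the energy in each of the 2**level frequency bands together with
    the average and maximum band energies used as features in [6,52].
    """
    wp = pywt.WaveletPacket(data=np.asarray(signal, dtype=float),
                            wavelet=wavelet, mode="symmetric", maxlevel=level)
    bands = wp.get_level(level, order="freq")                 # 8 bands when level = 3
    energies = np.array([np.sum(node.data ** 2) for node in bands])
    return energies, energies.mean(), energies.max()
```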
Ye et al. [53] transformed the original signal to the frequency domain using the discrete Fourier transform and used the AC noise shift as a differentiation indicator. The AC noise originating from the power supply of the measurement equipment appears at 60 Hz and 120 Hz, and these components shift in the measurement due to the surface reaction between the molecules and the gas sensor. Similarly, Gomri et al. [54] analyzed the power spectral density (PSD) of the noise measured at the gas sensor terminals during the sensor's interaction with target analytes. Features including the derivative and the maximum power of the noise PSD were shown to effectively discriminate two different gases.
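The sketch below estimates the noise PSD with Welch's method and derives two simple descriptors in the spirit of [54]; the sampling rate and the exact choice of descriptors are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import welch

def psd_noise_features(signal, fs=1000.0):
    """Welch estimate of the sensor-noise power spectral density plus two descriptors."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                           # remove the DC component
    freqs, psd = welch(x, fs=fs, nperseg=min(1024, len(x)))
    max_power = psd.max()                                      # maximum power of the noise PSD
    mean_abs_slope = np.mean(np.abs(np.diff(psd) / np.diff(freqs)))  # average PSD derivative
    return freqs, psd, max_power, mean_abs_slope
```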

2.1.2. Feature Extraction through Learning

In addition to manual feature extraction, which requires domain knowledge about the E-Nose and gas sensors, features can also be "learned" by optimizing a neural network. Different from the convolutional neural network (CNN) in the next section, the methods discussed in this section do not adopt an end-to-end (E2E) scheme, which means they are not trained directly for prediction. Instead, these methods learn intermediate feature representations of E-Nose signals via supervised and unsupervised learning. Shi et al. [55] developed an odor classification pipeline with a CNN and a support vector machine (SVM). The CNN was pretrained to learn feature embeddings, after which the SVM was trained on the features for odor classification. It was reported that the workflow cascading the CNN with the SVM had better differentiation ability than training the two classifiers individually. With a similar architecture, [56] used a CNN to extract fusion features from E-Nose and hyperspectral data for rice classification. Signals collected from the E-Nose and the hyperspectral imager were concatenated and reshaped into a 2D image, after which the CNN was pretrained on the images for feature extraction. An extreme learning machine (ELM) classifier was then trained on the resulting features for prediction.
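A possible sketch of such a two-stage pipeline in PyTorch and scikit-learn is shown below; the network architecture and layer sizes are illustrative assumptions and are not taken from [55].

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SensorCNN(nn.Module):
    """Small 1D CNN over E-Nose windows of shape (n_sensors, n_timesteps)."""
    def __init__(self, n_sensors, n_classes, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, n_classes)   # used only for pretraining

    def forward(self, x):
        return self.head(self.backbone(x))

def extract_features(model, x):
    """Frozen backbone embeddings on which a downstream SVM is trained."""
    model.eval()
    with torch.no_grad():
        return model.backbone(x).cpu().numpy()

# Stage 1: pretrain SensorCNN with cross-entropy on labeled E-Nose windows.
# Stage 2: svm = SVC(kernel="rbf").fit(extract_features(model, x_train), y_train)
```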
$P(v_i = 1 \mid h) = \sigma\big(c_i + \sum_j W_{ij} h_j\big)$  (1)
$P(h_j = 1 \mid v) = \sigma\big(b_j + \sum_i W_{ij} v_i\big)$  (2)
Langkvist and Loutfi [57] proposed a method that uses a Restricted Boltzmann Machine (RBM) to extract features from E-Nose signals. An RBM is a generative model that learns to reconstruct its input from a hidden representation. More specifically, the researchers used a variation of the RBM called the conditional RBM (cRBM) to learn the feature representation of time-series gas sensor array signals (Figure 4). As an unsupervised learning method, the RBM does not require labeled data during training; the goal of training is to learn the conditional probability pair in Equations (1) and (2), where h is the latent representation of the input and v is the original input. After pretraining the cRBM, the learned weights projecting the input to latent features can be used to train or fine-tune another model to perform classification.
Other unsupervised models have also been tried for feature extraction, such as the autoencoder and the deep belief network [34,57,58,59]. An autoencoder is a feed-forward network trained to recover its input at its output by minimizing the mean square error; a stacked autoencoder refers to an autoencoder with more than one hidden layer. Essentially, an autoencoder is a non-linear version of principal component analysis (PCA) whose non-linearity is introduced by the activation functions between hidden layers, and both PCA and the autoencoder can be used for dimension reduction [60]. Compared to dimension reduction with PCA, an autoencoder can preserve more non-linear relationships in the resulting feature space. A recent study [61] also showed that an autoencoder trained on unlabeled data can generate highly discriminative features for another labeled dataset. Zhao et al. [62] proposed a stacked sparse autoencoder model (SSAE), which was combined with a backpropagation neural network (BPNN) to perform feature extraction for Chinese liquor classification (Figure 5). After the model was trained, an extra prediction layer was appended to the encoder of the autoencoder for prediction. Lu et al. [63] replaced the hand-crafted features for the E-Nose with latent representations generated from a gated recurrent unit-based autoencoder (GRU-AE). Compared to other dimensionality reduction methods, including PCA and kernel PCA, the feature representations from the GRU-AE were more distinguishable and effectively improved classification performance.
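A minimal reconstruction-based autoencoder of this kind might look as follows in PyTorch; the layer widths and training hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Fully connected autoencoder; the bottleneck activations are the learned features."""
    def __init__(self, in_dim, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_autoencoder(model, loader, epochs=50, lr=1e-3):
    """Unsupervised training: minimize the reconstruction mean square error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for (x,) in loader:                  # unlabeled batches of flattened E-Nose signals
            recon, _ = model(x)
            loss = mse(recon, x)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```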
There have been many time-series unsupervised feature extraction methods proposed but not yet tried with gas sensor array signals. A more advanced model involves a temporal autoencoder [64] to capture both short-term features using CNN and temporal changes using long short-term memory (LSTM), or directly applying an LSTM autoencoder to extract features [65].

2.2. Modeling

Machine learning models have been heavily researched for mapping E-Nose features to target predictions such as odor categories and gas mixtures of different chemical concentrations.

2.2.1. Qualitative Aroma Analysis

Qualitative aroma analysis with an E-Nose aims to distinguish unknown gas samples based on the responses generated by the E-Nose device. The diversity of the gas sensor array gives an E-Nose its differentiation capability towards different target gases, and this capability can be further improved with advanced modeling techniques. Table 2 lists the surveyed E-Nose modeling practices for qualitative aroma analysis.
Many studies [5,6] have adopted linear models such as linear discriminant analysis (LDA) or the simple k-nearest neighbor (KNN) model due to their easy implementation. Some studies have experimented with simple neural networks [66,75], most of which consist of only one or two layers with a small number of parameters. E-Nose data are hard to obtain since there is no standard E-Nose system configuration or setup. In addition, environmental conditions vary among experiments, which holds the E-Nose back from adopting deep learning methods that demand large amounts of data samples.
However, some studies have applied deep learning models such as deep multi-layer perceptron networks, LSTM [78], and convolutional neural networks (CNN) for odor classification [79,80,81]. It was reported in [82] that a deep neural network (DNN) with five hidden layers outperformed an SVM and an MLP with a single but wide hidden layer in classifying wine. The proposed DNN still required manual feature extraction, and the maximum responses of the sensors were used as features.
A regular feed-forward neural network architecture such as the MLP increases the number of parameters tremendously as the network goes deeper. The convolutional neural network (CNN), in contrast, improves efficiency by reusing the same set of parameters across different segments of the input, which makes it well suited to applications whose inputs are highly correlated in local areas, such as images and time-series signals. Compared to traditional machine learning methods, the CNN does not require a separate feature extraction step.
The architecture of a CNN can vary depending on the interpretation of E-Nose data. Some recent studies considered E-Nose data as a time-series array, while others interpreted E-Nose signals as images and applied CNN architectures similar to those used in image processing. Zhao et al. [79] processed E-Nose signals as time-series data using a 1D-CNN, under the assumption that signal responses from the gas sensor array are correlated along time steps while signals from different sensors are independent. The proposed structure (Figure 6) combines two topologies: (1) the signals of each sensor are processed by the same convolution operation and then concatenated along a depth channel, and (2) cross-sensor relationships are considered in the following three 1D convolution layers. A dropout layer was added during training to avoid overfitting, and uniform Xavier initialization was used for the convolution layer parameters. A total of 593 samples were used for model training and evaluation on differentiating ethylene, CO, and methane. It was reported that the 1D-CNN could outperform the SVM, MLP, KNN, and random forest by around 10% on average.
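A sketch of this two-topology idea is given below in PyTorch: a shared per-sensor convolution followed by cross-sensor 1D convolutions. The layer sizes, kernel widths, and class names are illustrative assumptions, not the configuration reported in [79].

```python
import torch
import torch.nn as nn

class TwoStage1DCNN(nn.Module):
    """Shared per-sensor convolution, then cross-sensor 1D convolutions (illustrative)."""
    def __init__(self, n_sensors, n_classes):
        super().__init__()
        self.per_sensor = nn.Sequential(                 # same weights reused for every sensor
            nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2))
        self.cross_sensor = nn.Sequential(               # mixes information across sensors
            nn.Conv1d(8 * n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Dropout(0.5))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                                # x: (batch, n_sensors, timesteps)
        b, s, t = x.shape
        y = self.per_sensor(x.reshape(b * s, 1, t))      # apply the shared conv to each sensor
        y = y.reshape(b, s * 8, -1)                      # concatenate along the depth channel
        return self.classifier(self.cross_sensor(y))
```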
On the other hand, Qi et al. [83] treated the E-Nose signals as an image and used a CNN with two convolution layers for Chinese liquor classification. All the values in the gas sensor array signal map were normalized to between 0 and 1 to generate a grayscale image. Wei et al. [73] adapted a LeNet-5 network structure, originally designed for handwritten character recognition, to classify gases. They first down-sampled the signals to smaller feature maps, rescaled all the values to 0~255, and then fed these to the CNN network as shown in Figure 7. To remedy the lack of data, a data augmentation technique was applied by translating the down-sampled data by steps of 2n to obtain new feature maps. It was reported that the resulting model outperformed the multi-layer perceptron (MLP) and other linear models when classifying three gases and their mixtures.
Peng et al. [84] built a deeper CNN with a total of 12 convolution layers and two pooling layers. Inspired by ResNet, shortcuts across convolution blocks were included to overcome the gradient vanishing problem and speed up training. In their setting, four types of gases were measured with eight MOX gas sensors, with each gas sampled 300 times to form a dataset of 1200 samples. The proposed GasNet structure achieved 95.2% classification accuracy, outperforming an MLP and SVM by a large margin at the cost of training speed and model size. Zhang et al. [85] added a channel attention module to the CNN backbone, which learned the dependencies between different channels to refine the features. Compared to manual feature extraction, the proposed model achieved the best classification accuracy on 10 Chinese liquors.
$H_c(i, j) = \frac{1 + \cos(\theta_i + \theta_j)}{2}, \quad 1 \le i, j \le T$  (3)
$H_s(i, j) = \frac{1 - \cos(\theta_i + \theta_j)}{2}, \quad 1 \le i, j \le T$  (4)
Some recent works encoded E-Nose data into an image before adopting a CNN for classification. Liu et al. [86] encoded the time-series signal of each sensor into a three-channel image to preserve the temporal dependencies. Two of the three channels in the encoded image were built from a polar transition field, and the remaining channel was built from a Markov transition matrix. To convert the time-series signal to a polar transition matrix, the response strength was normalized by the min–max method to the range 0~1, so that an angle could be calculated as the inverse cosine of the normalized response. Two T × T angle transition matrices were then built with each position given by Equations (3) and (4), where T is the number of time steps in the signal. In addition, the Markov transition matrix represents the probability that one state appears at a specific time after a state at another time point; the states were constructed with a binning method that quantizes the response values into Q bins and assigns each value the index of its bin as its state (Figure 8). The Q value was reported to affect the classification performance and was set to 32 for the optimal result. The images encoded from the nine sensors were tiled together to form an image of dimension (3T) × (3T) × 3 in RGB format for visualization and CNN classification. A recent study by Wang et al. [87] surveyed several ways to convert E-Nose data to an image-like 2D structure, including (1) taking the E-Nose time-series data directly as an image, (2) reshaping the data from each sensor into a small image patch and tiling all the patches together in order, and (3) the same as (2) but placing the most relevant sensors closer together during tiling. A modified ResNet-based CNN was proposed to perform classification, and the conversion through method (3) was shown to outperform the other two in classification accuracy. Jong et al. [88] converted the correlation coefficient table of the sensor responses into a heat map image so that a regular image-processing CNN could handle the resulting images. Shi et al. [89] treated the correlation coefficient table as a complete graph and used a graph convolutional neural network for modeling.
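The per-sensor encoding can be sketched as follows. This is an illustrative reimplementation of the idea, not the authors' code; the quantile-based binning and the clipping details are assumptions.

```python
import numpy as np

def encode_signal_as_image(x, Q=32):
    """Encode one sensor's time series as a 3-channel image: two channels from
    Equations (3) and (4) and one Markov-transition-style channel."""
    x = np.asarray(x, dtype=float)
    x_norm = (x - x.min()) / (x.max() - x.min() + 1e-12)    # min-max normalize to 0~1
    theta = np.arccos(np.clip(x_norm, 0.0, 1.0))            # angle for each time step
    sum_angles = theta[:, None] + theta[None, :]
    h_c = (1 + np.cos(sum_angles)) / 2                      # Equation (3)
    h_s = (1 - np.cos(sum_angles)) / 2                      # Equation (4)

    # Markov transition field: quantize responses into Q bins, estimate transition
    # probabilities between bins, then look up the probability for every (i, j) pair.
    edges = np.quantile(x_norm, np.linspace(0, 1, Q + 1)[1:-1])
    states = np.digitize(x_norm, edges)                     # state index in 0..Q-1
    trans = np.zeros((Q, Q))
    for a, b in zip(states[:-1], states[1:]):
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)
    mtf = trans[states[:, None], states[None, :]]

    return np.stack([h_c, h_s, mtf], axis=-1)               # shape: (T, T, 3)
```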
Terros-Tello et al. [90] investigated the performance of 1D-CNN, LSTM, and traditional machine learning models on classifying odors from explosives of different amounts. The LSTM model was able to produce accurate prediction by examining only a short portion of the entire time-series of E-Nose data.
To further improve the differentiation capability for gases, sensor fusion has been introduced by some studies that consider the signals from an electronic tongue together with an E-Nose [5,66,72,91]; Figure 9 shows one such sensor fusion framework. Two sets of features are extracted from the E-Nose and E-Tongue, respectively, and are used to train separate classifiers; a decision is then made by fusing the results from both classifiers using decision-level fusion based on Dempster–Shafer evidence theory.

2.2.2. Quantitative Aroma Analysis

Quantitative analysis of E-Nose signals aims to estimate continuous properties associated with gases or odors, such as molecular concentration and strength. Compared to qualitative analysis, where the label of each sample indicates only the identity of a certain gas, the target labels/properties in quantitative analysis are more flexible. For instance, Zhang et al. [15] used a gas sensor array to estimate the concentration of formaldehyde, ammonia, and their mixtures. An MLP model was built in the experiment for prediction, and the mean absolute errors were 0.27 ppm and 0.37 ppm for ammonia and formaldehyde, respectively. In addition to the concentration of specific molecular components, other properties associated with a gas can be predicted if they are related to its molecular composition. For example, coffee pH may relate to the concentrations of different volatile compounds and can be predicted based on the signal generated by a gas sensor array [66]. Table 3 provides a summary of practices for quantitative aroma analysis with an E-Nose.
Linear models are the most common models used in quantitative aroma analysis with an E-Nose because of their simplicity. Partial least squares regression (PLSR) is preferred over regular linear regression [70]. The preference stems from a limitation of E-Nose data: features are much cheaper to compute than additional data samples are to collect, which introduces the curse of dimensionality. In this case, a high correlation might be observed among features, and overfitting can occur. PLS decorrelates the features by projecting them into a latent space and reduces the feature count by keeping the top-k latent variables that best explain the variance of the target variables.
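A minimal PLSR sketch with scikit-learn is shown below, where the number of latent variables is chosen by cross-validation; the feature matrix X and concentration vector y are hypothetical placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def fit_plsr(X, y, max_components=10):
    """Pick the number of latent variables by cross-validated R^2, then fit PLSR."""
    best_k, best_score = 1, -np.inf
    for k in range(1, min(max_components, X.shape[1]) + 1):
        score = cross_val_score(PLSRegression(n_components=k), X, y,
                                cv=5, scoring="r2").mean()
        if score > best_score:
            best_k, best_score = k, score
    return PLSRegression(n_components=best_k).fit(X, y), best_k
```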
Recently, neural network structures have become increasingly popular for quantitative aroma analysis. The multi-layer perceptron (MLP) is the simplest feed-forward neural network structure and has been applied by many studies. At each hidden layer, a matrix multiplication is performed between the input vector and the weight matrix to produce the output of that layer. Most researchers adopt an architecture with one to two hidden layers and hidden units of different sizes, but the network structure can vary with the prediction targets. Zhang et al. [92] reported an experiment using an E-Nose to predict individual chemical concentrations in gas mixtures with an MLP (Figure 10). They compared the performance between a single multiple-input multiple-output MLP (SMIMO) and multiple multiple-input single-output MLPs (MMISO).
The radial basis function network (RBFN) is a type of neural network with a special architecture that normally does not go "deep" (Figure 11). A typical RBFN consists of an input layer, a hidden layer, and an output layer. Unlike the MLP, whose parameters are all randomly initialized before training and whose hidden layers all perform matrix multiplications, the RBFN uses a radial basis function $\phi(d)$ to calculate the response of each hidden unit, where d represents the distance between the "center" of the unit and the input vector. The Gaussian function (Equation (5)) is often used as the radial basis function in an RBFN; it plays a similar role to the RBF kernel in a support vector machine, adding non-linearity by virtually projecting the input vector into a higher-dimensional space. Moreover, the RBFN is trained differently from the MLP, which relies mainly on gradient-based methods. The training procedure for an RBFN depends on the choice of variable setting: if the center vectors and receptive widths are to be updated during training, then a gradient-based method can be used; otherwise, training adopts a two-step manner in which the hidden layer and output parameters are trained separately. To train the hidden layer, a center-based clustering algorithm such as K-means is used to determine the mean and variance of each hidden unit, and these variables are fixed while the output layer is trained with a gradient-based method [95]. The RBFN is generally faster and more robust to train than the MLP and is likely to perform better than the MLP for qualitative prediction.
$\phi(x, c) = \exp\!\left(-\frac{\|x - c\|^2}{2\sigma^2}\right)$  (5)
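The two-step training procedure described above can be sketched as follows, assuming K-means for the centers, a single shared receptive width, and a ridge-regression output layer; all of these implementation choices are illustrative rather than prescribed by [95].

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

class SimpleRBFN:
    """Two-step RBFN: K-means fixes the hidden-unit centers/width, then only
    the linear output layer is fitted."""
    def __init__(self, n_hidden=20):
        self.n_hidden = n_hidden

    def _hidden(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))           # Gaussian basis, Equation (5)

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_hidden, n_init=10).fit(X)
        self.centers = km.cluster_centers_
        # one shared receptive width; per-unit widths are a common refinement
        self.sigma = np.mean(np.linalg.norm(X - self.centers[km.labels_], axis=1)) + 1e-12
        self.out = Ridge(alpha=1e-3).fit(self._hidden(X), y)
        return self

    def predict(self, X):
        return self.out.predict(self._hidden(X))
```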
There have been many recent studies using deep learning models for quantitative aroma analysis with an E-Nose [96]. Wang et al. [97] trained different recurrent neural network models on an open-source dataset of air pollutant concentrations. Compared to the vanilla recurrent neural network (RNN) and the gated recurrent unit (GRU), the LSTM showed the lowest prediction error on all four pollutants. Guo et al. [98] proposed an E-Nose framework to predict odor descriptors using a CNN–LSTM model. E-Nose data collected from 16 gas sensors were first sliced into small patches, each of which represented the same time span. The data patches were then fed into multiple CNN–LSTM models, each of which ended with a fully connected layer that regressed a combination of odor descriptors. The CNN–LSTM model used both the spatial and temporal information while avoiding the gradient vanishing problem of the LSTM on long time series. To address the data contamination caused by noise, Wijaya et al. [99] performed noise filtering through a wavelet transform on the raw E-Nose signals before feeding them into an LSTM model. The most suitable mother wavelet for the wavelet decomposition was chosen based on the information quality ratio between the raw and filtered signals. The noise filtering step was shown to be important to the LSTM model's performance in predicting the microbial population in beef samples.
Various metrics can be used to evaluate the performance of modeling for a gas property estimation task. Table 4 summarizes the descriptions and equations (if any) of these evaluation metrics, among which a large t-value, R-value, and R2 value are desirable, while a small RMSEP, RMSE, and MSE are desirable.
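For reference, the most common of these metrics can be computed as in the small sketch below (standard NumPy and scikit-learn calls).

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def regression_metrics(y_true, y_pred):
    """Common quantitative-analysis metrics from Table 4 (MSE, RMSE, R, R^2)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "R": np.corrcoef(y_true, y_pred)[0, 1],
        "R2": r2_score(y_true, y_pred),
    }
```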

2.3. Sensor Drift Compensation

One of the biggest problems in gas sensor applications is sensor drifting. There are two causes of drifting: (1) natural drift due to the aging of the sensor, and (2) secondary drift due to environmental influences such as temperature and humidity. Unlike other sensors such as the gyroscope or accelerometer, gas sensors require reference gases of specific concentrations for calibration. Romain et al. [101] conducted a long-term stability test of commonly used MOX gas sensors and found that some sensors drifted by more than 200% after 7 years. A dataset with gas sensor signals spanning 36 months for investigating sensor drift was released by Vergara et al. [35] and has been used by many related studies as the benchmark for gas sensor drift compensation. The dataset is divided into 10 batches, with all the data samples in the same batch corresponding to a specific time range. Multiple machine learning techniques have been proposed to address the sensor drifting problem, such as ensemble learning and domain transfer learning. Table 5 summarizes the methods adopted by recent E-Nose drift compensation studies based on classifier ensembles; all of the methods were evaluated on the dataset collected in [35].
Ensemble learning for drift compensation trains a set of predictors on data collected at different times [106,107]. Each of the predictors is assigned a weight when predicting new data samples. The average accuracy and standard deviation over all data batches and the accuracy on the final data batch are compared among different methods. Vergara et al. [35] proposed an ensemble learning method with SVM classifiers to counteract sensor drifting. An SVM classifier was trained on each batch of newly collected data at time t, denoted $f_t(x)$. To predict the data at time step t+1, the decision was made as a weighted sum of the classifiers trained previously, i.e., $h_{t+1}(x) = \sum_{i=1}^{t} \beta_i f_i(x)$, where $\beta_i$ is the weight of each classifier. For simplicity, the prediction accuracy of each $f_i(x)$ on the current batch of data was used as $\beta_i$. The results showed that the ensemble classifier improved classification stability (Figure 12). Liu et al. [103] improved the ensemble method by introducing extra weights when training each classifier, called the 2D dimension ensemble. For each batch of data with k different classes, k(k − 1)/2 classifiers were trained to solve the multi-class classification problem. Each of the classifiers was assigned a weight based on its performance on the current batch of data. Verma et al. [36] modified the original optimization step in ensemble drift compensation by introducing a regularization term. The regularization term restricts the data distribution change using KL divergence and norm-based terms, resulting in higher accuracy than the original approach. Zhao et al. [105] proposed an ensemble learning framework with SVM and LSTM classifiers for drift compensation. The dataset was preprocessed in four different configurations for training both the SVM and the LSTM, which added extra robustness to the ensemble learning.
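A minimal sketch of this weighted classifier ensemble is given below, assuming all batches share the same class set; weighting each classifier by its accuracy on the most recent labeled batch follows the simple scheme in [35], while the use of class probabilities is an implementation convenience.

```python
import numpy as np
from sklearn.svm import SVC

def train_batch_classifiers(batches):
    """Train one SVM per data batch; `batches` is a list of (X, y) collected over time."""
    return [SVC(kernel="rbf", probability=True).fit(X, y) for X, y in batches]

def weighted_ensemble_predict(classifiers, weights, X_new):
    """Weighted-sum decision h_{t+1}(x) = sum_i beta_i f_i(x), here over class probabilities.
    Assumes every classifier saw the same classes, so the probability columns align."""
    scores = sum(w * clf.predict_proba(X_new) for clf, w in zip(classifiers, weights))
    return classifiers[0].classes_[np.argmax(scores, axis=1)]

# The weights beta_i can simply be each classifier's accuracy on the most recent
# labeled batch:
# weights = [clf.score(X_latest, y_latest) for clf in classifiers]
```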
Domain transfer learning aims to find a feature space that maximizes the similarity between samples from the source domain and the target domain while preserving the discrimination capability of the features [108,109,110]. Zhang et al. [111] investigated domain transfer learning with the extreme learning machine (ELM) for gas sensor drift compensation. The ELM is a special MLP whose hidden-layer weights are randomly assigned and fixed, and only the output-layer weights are tunable. The two proposed methods (DAELM-S and DAELM-T) update the output weights β of the ELM obtained from previous data (source domain) by incorporating the latest partially labeled and unlabeled data (target domain). DAELM-S (Equation (6)) updates β by minimizing the weighted sum of errors on the labeled data from the source domain and the target domain, where t is the target value and H is the input to the output layer; DAELM-T (Equation (7)) updates β by minimizing the error on the latest labeled data while regularizing the parameter change based on the source domain parameters.
$\min_{\beta_S} L_{DAELM\text{-}S} = \min_{\beta_S} \frac{1}{2}\|\beta_S\|^2 + \frac{C_S}{2}\sum_{i=1}^{N_S}\|t_S^i - H_S^i\beta_S\|^2 + \frac{C_T}{2}\sum_{j=1}^{N_T}\|t_T^j - H_T^j\beta_S\|^2$  (6)
$\min_{\beta_T} L_{DAELM\text{-}T} = \min_{\beta_T} \frac{1}{2}\|\beta_T\|^2 + \frac{C_T}{2}\sum_{i=1}^{N_T}\|t_T^i - H_T^i\beta_T\|^2 + \frac{C_{Tu}}{2}\sum_{j=1}^{N_{Tu}}\|H_{Tu}^j\beta_B - H_{Tu}^j\beta_T\|^2$  (7)
$\min_{\beta_T} L_{WDAELM\text{-}T} = \min_{\beta_T} \frac{1}{2}\|\beta_T\|^2 + \frac{C_T}{2}\sum_{i=1}^{N_T}\|t_T^i - H_T^i\beta_T\|^2 + \frac{C_{Tu}}{2}\sum_{j=1}^{N_{Tu}} w_j\|H_{Tu}^j\beta_B - H_{Tu}^j\beta_T\|^2$  (8)
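As an illustration, the DAELM-S objective in Equation (6) admits a closed-form solution for the output weights. The sketch below is a simplified reconstruction under the assumptions of a sigmoid ELM hidden layer and one-hot label matrices; it is not the authors' implementation.

```python
import numpy as np

def random_hidden_layer(X, n_hidden, rng):
    """ELM hidden layer: random input weights/biases followed by a sigmoid."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b))), (W, b)

def daelm_s(Xs, Ts, Xt, Tt, n_hidden=200, C_s=1.0, C_t=10.0, seed=0):
    """Closed-form solve of Equation (6): the output weights beta minimize the
    regularized, weighted errors on the labeled source batch (Xs, Ts) and the
    small labeled target batch (Xt, Tt); Ts and Tt are one-hot label matrices."""
    rng = np.random.default_rng(seed)
    Hs, (W, b) = random_hidden_layer(Xs, n_hidden, rng)
    Ht = 1.0 / (1.0 + np.exp(-(Xt @ W + b)))              # same random mapping for target data
    A = np.eye(n_hidden) + C_s * Hs.T @ Hs + C_t * Ht.T @ Ht
    rhs = C_s * Hs.T @ Ts + C_t * Ht.T @ Tt
    beta = np.linalg.solve(A, rhs)
    predict = lambda X: np.argmax((1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta, axis=1)
    return beta, predict
```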
Based on [111], Ma et al. [112] proposed the weighted domain transfer extreme learning machine (WDTELM), which focuses on reducing the impact of wrongly classified unlabeled data samples on the parameter update. The final objective function is a variation of DAELM-T with an extra weight (Equation (8)). The unlabeled data are first clustered, and only a few samples in each cluster need labeling. The weight for each remaining unlabeled sample is assigned based on its distance to the labeled data from the same cluster. The method showed a 4% improvement on average compared to the previous DAELM-T.
Zhang et al. [113] proposed finding a feature projection that mitigates the difference between the source and target domains while regularizing the data distortion. Similarly, [114] used a kernel transformation for domain transfer. Based on [113], Yi et al. [115] proposed a feature subspace projection method that minimizes the local intra-class variance and maximizes the local inter-class variance. An autoencoder-based domain transfer approach was proposed in [34] to learn a feature projection in an unsupervised manner. The method modeled sensor drift as a result of device variation and time variation and encoded both variations into a domain feature vector. The autoencoder (Figure 13) was optimized to recover the input feature at its output. For the labeled data in the target domain, the resulting feature representations from the autoencoder were fed into an MLP for further classification. During training, a regularization term was added to enforce similarity between the encoded features of samples from the source and target domains.
Tao et al. [116] developed an adversarial training framework based on neural networks for E-Nose domain adaptation, in which the Wasserstein distance was used to measure the difference between the source domain and target domain (Figure 14). The framework consists of a feature extractor and a domain discriminator, where the feature extractor is trained to generate similar features for samples from the source and target domains, and the domain discriminator is trained to maximize the dissimilarity between the two. The classification performance was added as another constraint on the extracted features.
In addition to domain transfer learning, Atiq et al. [117] proposed a method to select drift-insensitive features. Discrete binary particle swarm optimization (DBPSO) searched for top-M feature combinations from the feature space that are the most resistant to drift. A cosine similarity model was built to evaluate feature combinations’ drift resistance by training on the first data batch and testing on other data batches collected at different time points. Yu et al. [118] also stated that by using a deep belief network for data preprocessing, the resulting features are more resistant to drifting due to the strengthened coupling among different sensors.

3. Conclusions

Due to the complex VOC composition of odors, machine olfaction is rather challenging. The recent significant improvement in the stability and performance of the E-Nose in both qualitative and quantitative analysis is a result of adopting machine learning methods. This review presented an overview of machine learning methods in smart sensing, with a focus on feature extraction, modeling, and sensor drift compensation.
Previous works in E-Nose technology have extensively studied time-domain and frequency-domain features in the signal analysis of an E-Nose. Manually extracted features are sufficient for odor discrimination in many cases and have been widely used in various applications. However, manual feature extraction requires prior knowledge of gas sensor technology and involves careful, time-consuming feature selection. In contrast, recent studies have shown successful feature learning from raw sensing signals with neural networks such as the deep belief network and the autoencoder, which need only minimal data preprocessing while achieving very competitive odor prediction accuracy. In addition, many practices have adopted neural networks for modeling in both qualitative and quantitative analysis with an E-Nose. Even with limited data samples, the reported CNN and LSTM architectures led to performance boosts compared to conventional machine learning models. Moreover, gas sensor drifting affects the signal and feature consistency, which is a critical problem for an E-Nose's performance. Although gas sensor drift compensation has been addressed by many recent works with machine learning methods such as ensemble learning and domain adaptation learning, it remains a significant obstacle for E-Nose technology. Additionally, the performance of feature selection and models depends on the E-Nose system setup and target gas [119,120,121], which might be another challenge to overcome.
Given that many advanced machine learning techniques are well established in other fields such as audio processing and computer vision [122], we hope that more attempts will be made to migrate those methods to E-Nose applications in the future. In addition, most works reported the performance of their proposed machine learning algorithms on datasets collected with their own devices, which makes it difficult to compare algorithms across different works. Although a few public E-Nose datasets are available [123,124,125], they have a limited number of samples and focus on specific target gases. Therefore, there is a need for a benchmark E-Nose dataset with a standard system setup and data collection scheme. Moreover, many current works addressed the effectiveness of their methods only on certain target gases. However, we envision that common patterns are shared among different gases of similar odors, and transfer learning across various gases should be further exploited [120].

Author Contributions

Writing-original draft preparation, Z.Y.; writing-review and editing, Z.Y., Y.L. and Q.L.; supervision, Q.L.; funding acquisition, Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NASA’s STTR grant, and the Virginia Microelectronics Consortium’s (VMEC) research seed grant. The APC was funded by VMEC research seed grant.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The author would like to acknowledge the support of NASA’s STTR grant on hybrid gas sensors and the Virginia Microelectronics Consortium’s (VMEC) research seed grant.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yuan, H.; Xiao, C.; Zhan, W.; Wang, Y.; Shi, C.; Ye, H.; Jiang, K.; Ye, Z.; Zhou, C.; Wen, Y.; et al. Target Detection, Positioning and Tracking Using New UAV Gas Sensor Systems: Simulation and Analysis. J. Intell. Robot. Syst. 2019, 94, 871–882. [Google Scholar] [CrossRef]
  2. Wei, G.; Gardner, J.W.; Cole, M.; Xing, Y. Multi-Sensor Module for a Mobile Robot Operating in Harsh Environments. In Proceedings of the 2016 IEEE SENSORS, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3. [Google Scholar]
  3. Xing, Y.; Vincent, T.A.; Cole, M.; Gardner, J.W.; Fan, H.; Bennetts, V.H.; Schaffernicht, E.; Lilienthal, A.J. Mobile Robot Multi-Sensor Unit for Unsupervised Gas Discrimination in Uncontrolled Environments. In Proceedings of the 2017 IEEE SENSORS, Glasgow, UK, 29 October–1 November 2017; pp. 1–3. [Google Scholar]
  4. Xing, Y.; Vincent, T.A.; Cole, M.; Gardner, J.W. Real-Time Thermal Modulation of High Bandwidth MOX Gas Sensors for Mobile Robot Applications. Sensors 2019, 19, 1180. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Zhi, R.; Zhao, L.; Zhang, D. A Framework for the Multi-Level Fusion of Electronic Nose and Electronic Tongue for Tea Quality Assessment. Sensors 2017, 17, 1007. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Dai, Y.; Zhi, R.; Zhao, L.; Gao, H.; Shi, B.; Wang, H. Longjing Tea Quality Classification by Fusion of Features Collected from E-Nose. Chemom. Intell. Lab. Syst. 2015, 144, 63–70. [Google Scholar] [CrossRef]
  7. Xu, M.; Wang, J.; Zhu, L. The Qualitative and Quantitative Assessment of Tea Quality Based on E-Nose, E-Tongue and E-Eye Combined with Chemometrics. Food Chem. 2019, 289, 482–489. [Google Scholar] [CrossRef]
  8. Gu, D.-C.; Liu, W.; Yan, Y.; Wei, W.; Gan, J.; Lu, Y.; Jiang, Z.-L.; Wang, X.-C.; Xu, C.-H. A Novel Method for Rapid Quantitative Evaluating Formaldehyde in Squid Based on Electronic Nose. LWT 2019, 101, 382–388. [Google Scholar] [CrossRef]
  9. Jia, W.; Liang, G.; Tian, H.; Sun, J.; Wan, C. Electronic Nose-Based Technique for Rapid Detection and Recognition of Moldy Apples. Sensors 2019, 19, 1526. [Google Scholar] [CrossRef] [Green Version]
  10. Wijaya, D.R.; Sarno, R. Mobile Electronic Nose Architecture for Beef Quality Detection Based on Internet of Things. Technology 2015, 2, 10. [Google Scholar]
  11. Rusinek, R.; Kmiecik, D.; Gawrysiak-Witulska, M.; Malaga-Toboła, U.; Tabor, S.; Findura, P.; Siger, A.; Gancarz, M. Identification of the Olfactory Profile of Rapeseed Oil as a Function of Heating Time and Ratio of Volume and Surface Area of Contact with Oxygen Using an Electronic Nose. Sensors 2021, 21, 303. [Google Scholar] [CrossRef]
  12. Herrero, J.L.; Lozano, J.; Santos, J.P.; Suárez, J.I. On-Line Classification of Pollutants in Water Using Wireless Portable Electronic Noses. Chemosphere 2016, 152, 107–116. [Google Scholar] [CrossRef]
  13. Blanco-Rodríguez, A.; Camara, V.F.; Campo, F.; Becherán, L.; Durán, A.; Vieira, V.D.; de Melo, H.; Garcia-Ramirez, A.R. Development of an Electronic Nose to Characterize Odours Emitted from Different Stages in a Wastewater Treatment Plant. Water Res. 2018, 134, 92–100. [Google Scholar] [CrossRef]
  14. Cho, J.; Kim, Y.; Na, K.; Jeon, G. Wireless Electronic Nose System for Real-Time Quantitative Analysis of Gas Mixtures Using Micro-Gas Sensor Array and Neuro-Fuzzy Network. Sens. Actuators B Chem. 2008, 134, 104–111. [Google Scholar] [CrossRef]
  15. Zhang, D.; Liu, J.; Jiang, C.; Liu, A.; Xia, B. Quantitative Detection of Formaldehyde and Ammonia Gas via Metal Oxide-Modified Graphene-Based Sensor Array Combining with Neural Network Model. Sens. Actuators B Chem. 2017, 240, 55–65. [Google Scholar] [CrossRef]
  16. Chen, C.-Y.; Lin, W.-C.; Yang, H.-Y. Diagnosis of Ventilator-Associated Pneumonia Using Electronic Nose Sensor Array Signals: Solutions to Improve the Application of Machine Learning in Respiratory Research. Respir. Res. 2020, 21, 45. [Google Scholar] [CrossRef]
  17. Sánchez, C.; Santos, J.; Lozano, J. Use of Electronic Noses for Diagnosis of Digestive and Respiratory Diseases through the Breath. Biosensors 2019, 9, 35. [Google Scholar] [CrossRef] [Green Version]
  18. Dragonieri, S.; Pennazza, G.; Carratu, P.; Resta, O. Electronic Nose Technology in Respiratory Diseases. Lung 2017, 195, 157–165. [Google Scholar] [CrossRef]
  19. Liao, Y.-H.; Wang, Z.-C.; Zhang, F.-G.; Abbod, M.F.; Shih, C.-H.; Shieh, J.-S. Machine Learning Methods Applied to Predict Ventilator-Associated Pneumonia with Pseudomonas Aeruginosa Infection via Sensor Array of Electronic Nose in Intensive Care Unit. Sensors 2019, 19, 1866. [Google Scholar] [CrossRef] [Green Version]
  20. Tiele, A.; Wicaksono, A.; Ayyala, S.K.; Covington, J.A. Development of a Compact, IoT-Enabled Electronic Nose for Breath Analysis. Electronics 2020, 9, 84. [Google Scholar] [CrossRef] [Green Version]
  21. Saidi, T.; Zaim, O.; Moufid, M.; El Bari, N.; Ionescu, R.; Bouchikhi, B. Exhaled Breath Analysis Using Electronic Nose and Gas Chromatography–Mass Spectrometry for Non-Invasive Diagnosis of Chronic Kidney Disease, Diabetes Mellitus and Healthy Subjects. Sens. Actuators B Chem. 2018, 257, 178–188. [Google Scholar] [CrossRef]
  22. Yan, J.; Guo, X.; Duan, S.; Jia, P.; Wang, L.; Peng, C.; Zhang, S. Electronic Nose Feature Extraction Methods: A Review. Sensors 2015, 15, 27804–27831. [Google Scholar] [CrossRef]
  23. Nallon, E.C.; Schnee, V.P.; Bright, C.; Polcha, M.P.; Li, Q. Chemical Discrimination with an Unmodified Graphene Chemical Sensor. ACS Sens. 2016, 1, 26–31. [Google Scholar] [CrossRef]
  24. Nallon, E.C.; Schnee, V.P.; Bright, C.J.; Polcha, M.P.; Li, Q. Discrimination Enhancement with Transient Feature Analysis of a Graphene Chemical Sensor. Anal. Chem. 2016, 88, 1401–1406. [Google Scholar] [CrossRef] [PubMed]
  25. Choi, S.-J.; Jang, B.-H.; Lee, S.-J.; Min, B.K.; Rothschild, A.; Kim, I.-D. Selective Detection of Acetone and Hydrogen Sulfide for the Diagnosis of Diabetes and Halitosis Using SnO 2 Nanofibers Functionalized with Reduced Graphene Oxide Nanosheets. ACS Appl. Mater. Interfaces 2014, 6, 2588–2597. [Google Scholar] [CrossRef] [PubMed]
  26. Cho, B.; Yoon, J.; Gwan Hahm, M.; Kim, D.-H.; Ra Kim, A.; Ho Kahng, Y.; Park, S.-W.; Lee, Y.-J.; Park, S.-G.; Kwon, J.-D.; et al. Graphene-Based Gas Sensor: Metal Decoration Effect and Application to a Flexible Device. J. Mater. Chem. C 2014, 2, 5280–5285. [Google Scholar] [CrossRef]
  27. Fang, Q.; Chetwynd, D.G.; Covington, J.A.; Toh, C.-S.; Gardner, J.W. Micro-Gas-Sensor with Conducting Polymers. Sens. Actuators B Chem. 2002, 84, 66–71. [Google Scholar] [CrossRef]
  28. Ma, Z.; Chen, P.; Cheng, W.; Yan, K.; Pan, L.; Shi, Y.; Yu, G. Highly Sensitive, Printable Nanostructured Conductive Polymer Wireless Sensor for Food Spoilage Detection. Nano Lett. 2018, 18, 4570–4575. [Google Scholar] [CrossRef]
  29. Sharma, H.J.; Sonwane, N.D.; Kondawar, S.B. Electrospun SnO2/Polyaniline Composite Nanofibers Based Low Temperature Hydrogen Gas Sensor. Fibers Polym. 2015, 16, 1527–1532. [Google Scholar] [CrossRef]
  30. Su, P.-G.; Peng, Y.-T. Fabrication of a Room-Temperature H2S Gas Sensor Based on PPy/WO3 Nanocomposite Films by in-Situ Photopolymerization. Sens. Actuators B Chem. 2014, 193, 637–643. [Google Scholar] [CrossRef]
  31. Wang, J.; Ai, F.; Sun, Q.; Liu, T.; Li, H.; Yan, Z.; Liu, D. Diaphragm-Based Optical Fiber Sensor Array for Multipoint Acoustic Detection. Opt. Express OE 2018, 26, 25293–25304. [Google Scholar] [CrossRef]
  32. Li, L.; Yang, K.; Bian, X.; Liu, Q.; Yang, Y.; Ma, F. A Gas Leakage Localization Method Based on a Virtual Ultrasonic Sensor Array. Sensors 2019, 19, 3152. [Google Scholar] [CrossRef] [Green Version]
  33. Turner, A.P.F.; Magan, N. Electronic Noses and Disease Diagnostics. Nat. Rev. Microbiol. 2004, 2, 161–166. [Google Scholar] [CrossRef]
  34. Yan, K.; Zhang, D. Correcting Instrumental Variation and Time-Varying Drift: A Transfer Learning Approach with Autoencoders. IEEE Trans. Instrum. Meas. 2016, 65, 2012–2022. [Google Scholar] [CrossRef]
  35. Vergara, A.; Vembu, S.; Ayhan, T.; Ryan, M.A.; Homer, M.L.; Huerta, R. Chemical Gas Sensor Drift Compensation Using Classifier Ensembles. Sens. Actuators B Chem. 2012, 166–167, 320–329. [Google Scholar] [CrossRef]
  36. Verma, M.; Asmita, S.; Shukla, K.K. A Regularized Ensemble of Classifiers for Sensor Drift Compensation. IEEE Sens. J. 2016, 16, 1310–1318. [Google Scholar] [CrossRef]
  37. Lekha, S.; Suchetha, M. Recent Advancements and Future Prospects on E-Nose Sensors Technology and Machine Learning Approaches for Non-Invasive Diabetes Diagnosis: A Review. IEEE Rev. Biomed. Eng. 2021, 14, 127–138. [Google Scholar] [CrossRef]
  38. Tan, J.; Xu, J. Applications of Electronic Nose (e-Nose) and Electronic Tongue (e-Tongue) in Food Quality-Related Properties Determination: A Review. Artif. Intell. Agric. 2020, 4, 104–115. [Google Scholar] [CrossRef]
  39. Liu, K.; Zhang, C. Volatile Organic Compounds Gas Sensor Based on Quartz Crystal Microbalance for Fruit Freshness Detection: A Review. Food Chem. 2021, 334, 127615. [Google Scholar] [CrossRef]
  40. Hu, W.; Wan, L.; Jian, Y.; Ren, C.; Jin, K.; Su, X.; Bai, X.; Haick, H.; Yao, M.; Wu, W. Electronic Noses: From Advanced Materials to Sensors Aided with Data Processing. Adv. Mater. Technol. 2018, 4, 1800488. [Google Scholar] [CrossRef] [Green Version]
  41. Ye, H.; Nallon, E.C.; Schnee, V.P.; Shi, C.; Yuan, H.; Jiang, K.; Gu, K.; Feng, S.; Wang, H.; Xiao, C.; et al. Optimization of the Transient Feature Analysis for Graphene Chemical Vapor Sensors: A Comprehensive Study. IEEE Sens. J. 2017, 17, 6350–6359. [Google Scholar] [CrossRef]
  42. Precise Gas Discrimination with Cross-Reactive Graphene and Metal Oxide Sensor Arrays: Applied Physics Letters: Vol 113, No 22. Available online: https://aip.scitation.org/doi/full/10.1063/1.5063375 (accessed on 21 October 2021).
  43. Ye, H.; Nallon, E.C.; Schnee, V.P.; Shi, C.; Jiang, K.; Xu, J.; Feng, S.; Wang, H.; Li, Q. Enhance the Discrimination Precision of Graphene Gas Sensors with a Hidden Markov Model. Anal. Chem. 2018, 90, 13790–13795. [Google Scholar] [CrossRef]
  44. Yan, J.; Tian, F.; He, Q.; Shen, Y.; Xu, S.; Feng, J.; Chaibou, K. Feature Extraction from Sensor Data for Detection of Wound Pathogen Based on Electronic Nose. Sens. Mater. 2012, 24, 57–73. [Google Scholar] [CrossRef] [Green Version]
  45. Liu, T.; Zhang, W.; Ye, L.; Ueland, M.; Forbes, S.L.; Su, S.W. A Novel Multi-Odour Identification by Electronic Nose Using Non-Parametric Modelling-Based Feature Extraction and Time-Series Classification. Sens. Actuators B Chem. 2019, 298, 126690. [Google Scholar] [CrossRef]
  46. Guo, X.; Peng, C.; Zhang, S.; Yan, J.; Duan, S.; Wang, L.; Jia, P.; Tian, F. A Novel Feature Extraction Approach Using Window Function Capturing and QPSO-SVM for Enhancing Electronic Nose Performance. Sensors 2015, 15, 15198–15217. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. He, A.; Yu, J.; Wei, G.; Chen, Y.; Wu, H.; Tang, Z. Short-Time Fourier Transform and Decision Tree-Based Pattern Recognition for Gas Identification Using Temperature Modulated Microhotplate Gas Sensors. J. Sens. 2016, 2016, 7603931. [Google Scholar] [CrossRef]
  48. Peng, C.; Yan, J.; Duan, S.; Wang, L.; Jia, P.; Zhang, S. Enhancing Electronic Nose Performance Based on a Novel QPSO-KELM Model. Sensors 2016, 16, 520. [Google Scholar] [CrossRef] [Green Version]
  49. Wang, Z.; Chen, W.; Gu, S.; Wang, Y.; Wang, J. Evaluation of Trunk Borer Infestation Duration Using MOS E-Nose Combined with Different Feature Extraction Methods and GS-SVM. Comput. Electron. Agric. 2020, 170, 105293. [Google Scholar] [CrossRef]
  50. Yin, Y.; Hao, Y.; Yu, H.; Liu, Y.; Hao, F. Detection Potential of Multi-Features Representation of E-Nose Data in Classification of Moldy Maize Samples. Food Bioprocess Technol. 2017, 10, 2226–2239. [Google Scholar] [CrossRef]
  51. Shi, Y.; Jia, X.; Yuan, H.; Jia, S.; Liu, J.; Men, H. Origin Traceability of Rice Based on an Electronic Nose Coupled with a Feature Reduction Strategy. Meas. Sci. Technol. 2021, 32, 025107. [Google Scholar] [CrossRef]
  52. Men, H.; Shi, Y.; Jiao, Y.; Gong, F.; Liu, J. Electronic Nose Sensors Data Feature Mining: A Synergetic Strategy for the Classification of Beer. Anal. Methods 2018, 10, 2016–2025. [Google Scholar] [CrossRef]
  53. Ye, H.; Shi, C.; Li, J.; Tian, L.; Zeng, M.; Wang, H.; Li, Q. New Alternating Current Noise Analytics Enables High Discrimination in Gas Sensing. Anal. Chem. 2020, 92, 824–829. [Google Scholar] [CrossRef]
  54. Gomri, S.; Bedoui, S.; Morati, N.; Fiorido, T.; Contaret, T.; Seguin, J.-L.; Kachouri, A.; Masmoudi, M. A Noise Spectroscopy-Based Features Extraction Method to Detect Two Gases Using One Single MOX Sensor. IEEE Sens. J. 2019, 19, 9063–9070. [Google Scholar] [CrossRef]
  55. Shi, Y.; Gong, F.; Wang, M.; Liu, J.; Wu, Y.; Men, H. A Deep Feature Mining Method of Electronic Nose Sensor Data for Identifying Beer Olfactory Information. J. Food Eng. 2019, 263, 437–445. [Google Scholar] [CrossRef]
  56. Shi, Y.; Yuan, H.; Xiong, C.; Zhang, Q.; Jia, S.; Liu, J.; Men, H. Improving Performance: A Collaborative Strategy for the Multi-Data Fusion of Electronic Nose and Hyperspectral to Track the Quality Difference of Rice. Sens. Actuators B Chem. 2021, 333, 129546. [Google Scholar] [CrossRef]
  57. Längkvist, M.; Loutfi, A. Unsupervised Feature Learning for Electronic Nose Data Applied to Bacteria Identification in Blood. In Proceedings of the NIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning, Sierra Nevada, Spain, 17 December 2011. [Google Scholar]
  58. Längkvist, M.; Coradeschi, S.; Loutfi, A.; Rayappan, J. Fast Classification of Meat Spoilage Markers Using Nanostructured ZnO Thin Films and Unsupervised Feature Learning. Sensors 2013, 13, 1578–1592. [Google Scholar] [CrossRef] [Green Version]
  59. Dey, P.; Saurabh, K.; Kumar, C.; Pandit, D.; Chaulya, S.K.; Ray, S.K.; Prasad, G.M.; Mandal, S.K. T-SNE and Variational Auto-Encoder with a Bi-LSTM Neural Network-Based Model for Prediction of Gas Concentration in a Sealed-off Area of Underground Coal Mines. Soft Comput. 2021, 25, 14183–14207. [Google Scholar] [CrossRef]
  60. Wetzel, S.J. Unsupervised Learning of Phase Transitions: From Principal Component Analysis to Variational Autoencoders. Phys. Rev. E 2017, 96, 022140. [Google Scholar] [CrossRef] [Green Version]
  61. He, P.; Jia, P.; Qiao, S.; Duan, S. Self-Taught Learning Based on Sparse Autoencoder for E-Nose in Wound Infection Detection. Sensors 2017, 17, 2279. [Google Scholar] [CrossRef] [Green Version]
  62. Zhao, W.; Meng, Q.-H.; Zeng, M.; Qi, P.-F. Stacked Sparse Auto-Encoders (SSAE) Based Electronic Nose for Chinese Liquors Classification. Sensors 2017, 17, 2855. [Google Scholar] [CrossRef] [Green Version]
  63. Lu, B.; Fu, L.; Nie, B.; Peng, Z.; Liu, H. A Novel Framework with High Diagnostic Sensitivity for Lung Cancer Detection by Electronic Nose. Sensors 2019, 19, 5333. [Google Scholar] [CrossRef] [Green Version]
  64. Madiraju, N.S.; Sadat, S.M.; Fisher, D.; Karimabadi, H. Deep Temporal Clustering: Fully Unsupervised Learning of Time-Domain Features. arXiv 2018, arXiv:1802.01059. [Google Scholar]
  65. Freitag, M.; Amiriparian, S.; Pugachevskiy, S.; Cummins, N.; Schuller, B. AuDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks. J. Mach. Learn. Res. 2017, 18, 6340–6344. [Google Scholar]
  66. Dong, W.; Zhao, J.; Hu, R.; Dong, Y.; Tan, L. Differentiation of Chinese Robusta Coffees According to Species, Using a Combined Electronic Nose and Tongue, with the Aid of Chemometrics. Food Chem. 2017, 229, 743–751. [Google Scholar] [CrossRef]
  67. Liu, H.; Li, Q.; Yan, B.; Zhang, L.; Gu, Y. Bionic Electronic Nose Based on MOS Sensors Array and Machine Learning Algorithms Used for Wine Properties Detection. Sensors 2019, 19, 45. [Google Scholar] [CrossRef] [Green Version]
  68. Konduru, T.; Rains, G.; Li, C. A Customized Metal Oxide Semiconductor-Based Gas Sensor Array for Onion Quality Evaluation: System Development and Characterization. Sensors 2015, 15, 1252–1273. [Google Scholar] [CrossRef] [Green Version]
  69. Khorramifar, A.; Rasekh, M.; Karami, H.; Malaga-Toboła, U.; Gancarz, M. A Machine Learning Method for Classification and Identification of Potato Cultivars Based on the Reaction of MOS Type Sensor-Array. Sensors 2021, 21, 5836. [Google Scholar] [CrossRef]
  70. Cui, S.; Wang, J.; Yang, L.; Wu, J.; Wang, X. Qualitative and Quantitative Analysis on Aroma Characteristics of Ginseng at Different Ages Using E-Nose and GC–MS Combined with Chemometrics. J. Pharm. Biomed. Anal. 2015, 102, 64–77. [Google Scholar] [CrossRef]
  71. Du, D.; Wang, J.; Wang, B.; Zhu, L.; Hong, X. Ripeness Prediction of Postharvest Kiwifruit Using a MOS E-Nose Combined with Chemometrics. Sensors 2019, 19, 419. [Google Scholar] [CrossRef] [Green Version]
  72. Zhu, L.; Yan, Y.; Gu, D.-C.; Lu, Y.; Gan, J.-H.; Tao, N.-P.; Wang, X.-C.; Xu, C.-H. Rapid Quality Discrimination and Amino Nitrogen Quantitative Evaluation of Soy Sauces by Tri-Step IR and E-Nose. Food Anal. Methods 2018, 11, 3201–3210. [Google Scholar] [CrossRef]
  73. Wei, G.; Li, G.; Zhao, J.; He, A. Development of a LeNet-5 Gas Identification CNN Structure for Electronic Noses. Sensors 2019, 19, 217. [Google Scholar] [CrossRef] [Green Version]
  74. Wang, T.; Zhang, H.; Wu, Y.; Jiang, W.; Chen, X.; Zeng, M.; Yang, J.; Su, Y.; Hu, N.; Yang, Z. Target Discrimination, Concentration Prediction, and Status Judgment of Electronic Nose System Based on Large-Scale Measurement and Multi-Task Deep Learning. Sens. Actuators B Chem. 2021, 351, 130915. [Google Scholar] [CrossRef]
  75. Heredia, A.P.D.; Cruz, F.R.; Balbin, J.R.; Chung, W.-Y. Olfactory Classification Using Electronic Nose System via Artificial Neural Network. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 3569–3574. [Google Scholar]
  76. Rasekh, M.; Karami, H.; Wilson, A.D.; Gancarz, M. Performance Analysis of MAU-9 Electronic-Nose MOS Sensor Array Components and ANN Classification Methods for Discrimination of Herb and Fruit Essential Oils. Chemosensors 2021, 9, 243. [Google Scholar] [CrossRef]
  77. Rasekh, M.; Karami, H.; Wilson, A.D.; Gancarz, M. Classification and Identification of Essential Oils from Herbs and Fruits Based on a MOS Electronic-Nose Technology. Chemosensors 2021, 9, 142. [Google Scholar] [CrossRef]
  78. Zhang, H.; Ye, W.; Zhao, X.; Teng, R.K.F.; Pan, X. A Novel Convolutional Recurrent Neural Network Based Algorithm for Fast Gas Recognition in Electronic Nose System. In Proceedings of the 2018 IEEE International Conference on Electron Devices and Solid State Circuits (EDSSC), Shenzhen, China, 6–8 June 2018; pp. 1–2. [Google Scholar]
  79. Zhao, X.; Wen, Z.; Pan, X.; Ye, W.; Bermak, A. Mixture Gases Classification Based on Multi-Label One-Dimensional Deep Convolutional Neural Network. IEEE Access 2019, 7, 12630–12637. [Google Scholar] [CrossRef]
  80. Yu, D.; Gu, Y. A Machine Learning Method for the Fine-Grained Classification of Green Tea with Geographical Indication Using a MOS-Based Electronic Nose. Foods 2021, 10, 795. [Google Scholar] [CrossRef] [PubMed]
  81. Rodriguez Gamboa, J.C.; da Silva, A.J.; Araujo, I.C.S.; Albarracin, E.S.; Duran, A.C.M. Validation of the Rapid Detection Approach for Enhancing the Electronic Nose Systems Performance, Using Different Deep Learning Models and Support Vector Machines. Sens. Actuators B Chem. 2021, 327, 128921. [Google Scholar] [CrossRef]
  82. Grodniyomchai, B.; Chalapat, K.; Jitkajornwanich, K.; Jaiyen, S. A Deep Learning Model for Odor Classification Using Deep Neural Network. In Proceedings of the 2019 5th International Conference on Engineering, Applied Sciences and Technology (ICEAST), Luang Prabang, Laos, 2–5 July 2019; pp. 1–4. [Google Scholar]
  83. Qi, P.-F.; Meng, Q.-H.; Jing, Y.-Q.; Liu, Y.-J.; Zeng, M. A Bio-Inspired Breathing Sampling Electronic Nose for Rapid Detection of Chinese Liquors. IEEE Sens. J. 2017, 17, 4689–4698. [Google Scholar] [CrossRef]
  84. Peng, P.; Zhao, X.; Pan, X.; Ye, W. Gas Classification Using Deep Convolutional Neural Networks. Sensors 2018, 18, 157. [Google Scholar] [CrossRef] [Green Version]
  85. Zhang, S.; Cheng, Y.; Luo, D.; He, J.; Wong, A.K.Y.; Hung, K. Channel Attention Convolutional Neural Network for Chinese Baijiu Detection With E-Nose. IEEE Sens. J. 2021, 21, 16170–16182. [Google Scholar] [CrossRef]
  86. Liu, Y.-J.; Meng, Q.-H.; Zhang, X.-N. Data Processing for Multiple Electronic Noses Using Sensor Response Visualization. IEEE Sens. J. 2018, 18, 9360–9369. [Google Scholar] [CrossRef]
  87. Wang, S.-H.; Chou, T.-I.; Chiu, S.-W.; Tang, K.-T. Using a Hybrid Deep Neural Network for Gas Classification. IEEE Sens. J. 2021, 21, 6401–6407. [Google Scholar] [CrossRef]
  88. Jong, G.-J.; Hendrick; Wang, Z.-H.; Hsieh, K.-S.; Horng, G.-J. A Novel Feature Extraction Method an Electronic Nose for Aroma Classification. IEEE Sens. J. 2019, 19, 10796–10803. [Google Scholar] [CrossRef]
  89. Shi, Y.; Liu, M.; Sun, A.; Liu, J.; Men, H. A Fast Pearson Graph Convolutional Network Combined With Electronic Nose to Identify the Origin of Rice. IEEE Sens. J. 2021, 21, 21175–21183. [Google Scholar] [CrossRef]
  90. Torres-Tello, J.; Guaman, A.V.; Ko, S.-B. Improving the Detection of Explosives in a MOX Chemical Sensors Array With LSTM Networks. IEEE Sens. J. 2020, 20, 14302–14309. [Google Scholar] [CrossRef]
  91. Han, F.; Huang, X.; Teye, E.; Gu, F.; Gu, H. Nondestructive Detection of Fish Freshness during Its Preservation by Combining Electronic Nose and Electronic Tongue Techniques in Conjunction with Chemometric Analysis. Anal. Methods 2014, 6, 529–536. [Google Scholar] [CrossRef]
  92. Zhang, L.; Tian, F. Performance Study of Multilayer Perceptrons in a Low-Cost Electronic Nose. IEEE Trans. Instrum. Meas. 2014, 63, 1670–1679. [Google Scholar] [CrossRef]
  93. Kiani, S.; Minaei, S.; Ghasemi-Varnamkhasti, M.; Ayyari, M. An Original Approach for the Quantitative Characterization of Saffron Aroma Strength Using Electronic Nose. Int. J. Food Prop. 2017, 20, S673–S683. [Google Scholar] [CrossRef]
  94. Gonzalez Viejo, C.; Fuentes, S.; Godbole, A.; Widdicombe, B.; Unnithan, R.R. Development of a Low-Cost e-Nose to Assess Aroma Profiles: An Artificial Intelligence Application to Assess Beer Quality. Sens. Actuators B Chem. 2020, 308, 127688. [Google Scholar] [CrossRef]
  95. Xie, T.; Yu, H.; Wilamowski, B. Comparison between Traditional Neural Networks and Radial Basis Function Networks. In Proceedings of the 2011 IEEE International Symposium on Industrial Electronics, Gdansk, Poland, 27–30 June 2011; pp. 1194–1199. [Google Scholar]
  96. Liu, H.; Li, Q.; Gu, Y. A Multi-Task Learning Framework for Gas Detection and Concentration Estimation. Neurocomputing 2020, 416, 28–37. [Google Scholar] [CrossRef]
  97. Wang, Q.; Xie, T.; Wang, S. Research on Air Pollution Gases Recognition Method Based on LSTM Recurrent Neural Network and Gas Sensors Array. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018; pp. 3486–3491. [Google Scholar]
  98. Guo, J.; Cheng, Y.; Luo, D.; Wong, K.-Y.; Hung, K.; Li, X. ODRP: A Deep Learning Framework for Odor Descriptor Rating Prediction Using Electronic Nose. IEEE Sens. J. 2021, 21, 15012–15021. [Google Scholar] [CrossRef]
  99. Wijaya, D.R.; Sarno, R.; Zulaika, E. DWTLSTM for Electronic Nose Signal Processing in Beef Quality Monitoring. Sens. Actuators B Chem. 2021, 326, 128931. [Google Scholar] [CrossRef]
  100. Kaewtapee, C.; Khetchaturat, C.; Bunchasak, C. Comparison of Growth Models between Artificial Neural Networks and Nonlinear Regression Analysis in Cherry Valley Ducks. J. Appl. Poult. Res. 2011, 20, 421–428. [Google Scholar] [CrossRef]
  101. Romain, A.; Nicolas, J. Long Term Stability Of Metal Oxide-Based Gas Sensors For E-nose Environmental Applications: An Overview. AIP Conf. Proc. 2009, 1137, 443–445. [Google Scholar] [CrossRef] [Green Version]
  102. Adhikari, S.; Saha, S. Multiple Classifier Combination Technique for Sensor Drift Compensation Using ANN & KNN. In Proceedings of the 2014 IEEE International Advance Computing Conference (IACC), Gurgaon, India, 21–22 February 2014; pp. 1184–1189. [Google Scholar]
  103. Liu, H.; Chu, R.; Tang, Z. Metal Oxide Gas Sensor Drift Compensation Using a Two-Dimensional Classifier Ensemble. Sensors 2015, 15, 10180–10193. [Google Scholar] [CrossRef] [Green Version]
  104. Wiezbicki, T.; Ribeiro, E.P. Sensor Drift Compensation Using Weighted Neural Networks. In Proceedings of the 2016 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Natal, Brazil, 23–25 May 2016; pp. 92–97. [Google Scholar]
  105. Zhao, X.; Li, P.; Xiao, K.; Meng, X.; Han, L.; Yu, C. Sensor Drift Compensation Based on the Improved LSTM and SVM Multi-Class Ensemble Learning Models. Sensors 2019, 19, 3844. [Google Scholar] [CrossRef] [Green Version]
  106. Cheng, Y.-C.; Chou, T.-I.; Chiu, S.-W.; Tang, K.-T. A Concentration-Based Drift Calibration Transfer Learning Method for Gas Sensor Array Data. IEEE Sens. Lett. 2020, 4, 7003704. [Google Scholar] [CrossRef]
  107. Zhang, Y.; Yan, J.; Wang, Z.; Peng, X.; Tian, Y.; Duan, S. TDACNN: Target-Domain-Free Domain Adaptation Convolutional Neural Network for Drift Compensation in Gas Sensors. arXiv 2021, arXiv:2110.07509. [Google Scholar]
  108. Ma, Z.; Luo, G.; Qin, K.; Wang, N.; Niu, W. Online Sensor Drift Compensation for E-Nose Systems Using Domain Adaptation and Extreme Learning Machine. Sensors 2018, 18, 742. [Google Scholar] [CrossRef] [Green Version]
  109. Liu, T.; Li, D.; Chen, Y.; Wu, M.; Yang, T.; Cao, J. Online Drift Compensation by Adaptive Active Learning on Mixed Kernel for Electronic Noses. Sens. Actuators B Chem. 2020, 316, 128065. [Google Scholar] [CrossRef]
  110. Liu, T.; Li, D.; Chen, J.; Chen, Y.; Yang, T.; Cao, J. Gas-Sensor Drift Counteraction with Adaptive Active Learning for an Electronic Nose. Sensors 2018, 18, 4028. [Google Scholar] [CrossRef] [Green Version]
  111. Zhang, L.; Zhang, D. Domain Adaptation Extreme Learning Machines for Drift Compensation in E-Nose Systems. IEEE Trans. Instrum. Meas. 2015, 64, 1790–1801. [Google Scholar] [CrossRef] [Green Version]
  112. Ma, Z.; Luo, G.; Qin, K.; Wang, N.; Niu, W. Weighted Domain Transfer Extreme Learning Machine and Its Online Version for Gas Sensor Drift Compensation in E-Nose Systems. Wirel. Commun. Mob. Comput. 2018, 2018, 2308237. [Google Scholar] [CrossRef] [Green Version]
  113. Zhang, L.; Liu, Y.; He, Z.; Liu, J.; Deng, P.; Zhou, X. Anti-Drift in E-Nose: A Subspace Projection Approach with Drift Reduction. Sens. Actuators B Chem. 2017, 253, 407–417. [Google Scholar] [CrossRef]
  114. Tao, Y.; Xu, J.; Liang, Z.; Xiong, L.; Yang, H. Domain Correction Based on Kernel Transformation for Drift Compensation in the E-Nose System. Sensors 2018, 18, 3209. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Yi, Z.; Shang, W.; Xu, T.; Guo, S.; Wu, X. Local Discriminant Subspace Learning for Gas Sensor Drift Problem. IEEE Trans. Syst. Man Cybern. Syst. 2020, 1–13. [Google Scholar] [CrossRef]
  116. Tao, Y.; Li, C.; Liang, Z.; Yang, H.; Xu, J. Wasserstein Distance Learns Domain Invariant Feature Representations for Drift Compensation of E-Nose. Sensors 2019, 19, 3703. [Google Scholar] [CrossRef] [Green Version]
  117. Ur Rehman, A.; Bermak, A. Drift-Insensitive Features for Learning Artificial Olfaction in E-Nose System. IEEE Sens. J. 2018, 18, 7173–7182. [Google Scholar] [CrossRef]
  118. Luo, Y.; Wei, S.; Chai, Y.; Sun, X. Electronic Nose Sensor Drift Compensation Based on Deep Belief Network. In Proceedings of the 2016 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 3951–3955. [Google Scholar]
  119. Wang, X.; Gu, Y.; Liu, H. A Transfer Learning Method for the Protection of Geographical Indication in China Using an Electronic Nose for the Identification of Xihu Longjing Tea. IEEE Sens. J. 2021, 21, 8065–8077. [Google Scholar] [CrossRef]
  120. Yang, Y.; Liu, H.; Gu, Y. A Model Transfer Learning Framework With Back-Propagation Neural Network for Wine and Chinese Liquor Detection by Electronic Nose. IEEE Access 2020, 8, 105278–105285. [Google Scholar] [CrossRef]
  121. Zhang, L.; Liu, Y.; Deng, P. Odor Recognition in Multiple E-Nose Systems With Cross-Domain Discriminative Subspace Learning. IEEE Trans. Instrum. Meas. 2017, 66, 1679–1692. [Google Scholar] [CrossRef]
  122. Längkvist, M.; Karlsson, L.; Loutfi, A. A Review of Unsupervised Feature Learning and Deep Learning for Time-Series Modeling. Pattern Recognit. Lett. 2014, 42, 11–24. [Google Scholar] [CrossRef] [Green Version]
  123. Wijaya, D.R.; Sarno, R.; Zulaika, E. Electronic Nose Dataset for Beef Quality Monitoring in Uncontrolled Ambient Conditions. Data Brief 2018, 21, 2414–2420. [Google Scholar] [CrossRef]
  124. Rodriguez Gamboa, J.C.; Albarracin E., E.S.; da Silva, A.J.; Ferreira, T.A.E. Electronic Nose Dataset for Detection of Wine Spoilage Thresholds. Data Brief 2019, 25, 104202. [Google Scholar] [CrossRef]
  125. Durán Acevedo, C.M.; Cuastumal Vasquez, C.A.; Carrillo Gómez, J.K. Electronic Nose Dataset for COPD Detection from Smokers and Healthy People through Exhaled Breath Analysis. Data Brief 2021, 35, 106767. [Google Scholar] [CrossRef]
Figure 1. An analogy between the human olfactory system and the E-Nose [33].
Figure 2. A typical electrical response from a gas sensor [24].
Figure 3. The schematic diagram of moving window function capturing (MWFC) [46].
Figure 4. (a) Deep RBM and (b) deep cRBM models [57].
Figure 5. Autoencoder for feature extraction [62].
Figure 6. Architecture of the 1d-DCNN algorithm [79].
Figure 7. LeNet-5 structure for odor identification [73].
Figure 8. Illustration of encoding a sensor response into an RGB image [86].
Figure 9. A sensor fusion framework to predict teas of different grades [5].
Figure 10. Structure of (a) SMIMO- and (b) MMISO-based MLP concentration estimation models [92].
Figure 11. MLP and RBFN [100].
Figure 12. Validation results for different drift compensation methods [35].
Figure 13. DCAE with correction layer and hidden layers [34].
Figure 14. Adversarial training framework for gas sensor drift compensation [116].
Table 1. Summary of commonly used features extracted from time-response curves.
Feature | Description
Maximum response | Maximum value of the response curve
Response at a specific time | Response value at a specific time point in the whole response curve
Time of a specific response | Time at which a specific response value is reached in the whole response curve
Area | Area enclosed by the sensor response curve and the time axis
Integral | Area under the response curve between two time points
Derivative | $D = \frac{dx(t)}{dt}$
Difference | Magnitude difference of the response between two time points
Second derivative | $D = \frac{d^2x(t)}{dt^2}$
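To make the feature definitions in Table 1 concrete, the short Python sketch below computes them for a single sensor's time-response curve. It is a minimal illustration: the function name, the chosen time points t1/t2, the target response level, and the synthetic curve are assumptions for demonstration and are not taken from any cited study.

import numpy as np

def extract_curve_features(t, x, t1=10.0, t2=60.0, target_response=0.5):
    """Compute the Table 1 features for one sensor's time-response curve.
    t: 1-D array of time stamps (s); x: 1-D array of sensor responses.
    t1, t2 and target_response are illustrative choices only."""
    features = {}
    features["max_response"] = float(np.max(x))                      # maximum response
    features["response_at_t1"] = float(np.interp(t1, t, x))          # response at a specific time
    # time at which a specific response value is first reached
    idx = np.argmax(x >= target_response)
    features["time_of_response"] = float(t[idx]) if x[idx] >= target_response else np.nan
    features["area"] = float(np.trapz(x, t))                         # area under the whole curve
    mask = (t >= t1) & (t <= t2)
    features["integral_t1_t2"] = float(np.trapz(x[mask], t[mask]))   # area between two time points
    dxdt = np.gradient(x, t)                                         # first derivative dx(t)/dt
    features["max_derivative"] = float(np.max(dxdt))
    features["difference_t1_t2"] = float(np.interp(t2, t, x) - np.interp(t1, t, x))
    d2xdt2 = np.gradient(dxdt, t)                                    # second derivative
    features["max_second_derivative"] = float(np.max(d2xdt2))
    return features

# Example with a synthetic first-order response curve (illustrative data only)
t = np.linspace(0, 120, 1201)
x = 1.0 - np.exp(-t / 20.0)
print(extract_curve_features(t, x))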
Table 2. Summary of commonly used models in E-Nose odor differentiation.
Gas Type | Gas Number | Sensor Number | Models | Reference
Tea | 4 | 18 | KNN and variations, LDA | [6]
Tea | 4 | 18 | KNN ensemble | [5]
Coffee | 7 | 6 | KNN, PLS-DA, Multi-Layer Perceptron | [66]
Beer | 5 | 10 | SVM | [52]
Beer | 5 | 10 | CNN–SVM | [55]
Wine | 3 | 6 | SVM, XGBoost, Multi-Layer Perceptron | [67]
Onion | 2 | 7 | LDA | [68]
Potato | 5 | 9 | LDA, Multi-Layer Perceptron | [69]
Rice | 6 | 10 | Extreme Learning Machine | [56]
Ginseng | 4 | 18 | LDA, Hierarchical Cluster Analysis | [70]
Kiwifruit | 8 | 10 | LDA | [71]
Soy Sauce | 4 | 18 | – | [72]
Single Chemicals | 20 | 1 | KNN, LDA | [24]
Single Chemicals | 4 | 1 | KNN, LDA, Random Forest | [23]
Single Chemicals | 3 | 1 | SVM | [53]
Single Chemicals | 3 | 12 | CNN | [73]
Single Chemicals | 12 | 8 | CNN | [74]
Polluted water | 12 | 4 | Multi-Layer Perceptron | [12]
Smell mixture | 10 | 7 | Multi-Layer Perceptron | [75]
Essential Oils | 6 | 9 | Multi-Layer Perceptron | [76]
Essential Oils | 6 | 9 | LDA, SVM | [77]
Table 3. Summary of commonly used models in E-Nose gas concentration estimation.
Gas Type | Predicting Property | Predicting Target | Sensor Number | Models | Evaluation Method | Reference
Gas Mixture | Concentration | 3 | 2 | Neural-fuzzy network | RMSE | [14]
Gas Mixture | Concentration | 6 | 4 | MLP and its variations | MSEP | [92]
Gas Mixture | Concentration | 2 | 3 | MLP | MSE, MAE | [15]
Ginseng | Chemical Concentration | 7 | 18 | PLSR, MLP | RMSE, R2 | [35]
Tea | Chemical Concentration | 4 | 10 | SVMR, Random Forest Regression | RMSE, R2 | [7]
Fish | TVC | 1 | 9 | SVMR, RBFN | RMSEP, R-value | [91]
Flower | Aroma Strength | 1 | 11 | MLP, RBFN | RMSE, R2 | [93]
Coffee | pH, Solid%, Acid%, Soluble% | 4 | 6 | PLSR | R-value, RPD, RMSE | [66]
Beer | Chemical Concentration | – | 9 | MLP | R-value, MSE | [94]
Squid | Chemical Concentration | 1 | 18 | PLSR | R2, t-test | [8]
Polluted water | Odor Concentration | 1 | 5 | PLSR | RMSE, R2 | [13]
Kiwifruit | Ripeness Index | 3 | 10 | PLSR, SVMR, Random Forest Regression | RMSE, R2 | [71]
Table 4. Modeling evaluation metrics for gas property estimation.
Metric | Equation | Description
t-value | – | Significance of how close the prediction model is to the real model
r-value | $r = \frac{\sum_{i=1}^{n}(\hat{y}_i-\bar{\hat{y}})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(\hat{y}_i-\bar{\hat{y}})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}$ | Correlation between the predicted values and the real values
R2 | $R^2 = 1-\frac{\sum_{i=1}^{n}(\hat{y}_i-y_i)^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2}$ | Extent to which the prediction model explains the variation of the data
RMSEP | $\mathrm{RMSEP} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\hat{y}_i-y_i}{y_i}\right)^2}$ | Root of the average squared relative (percentage) deviation of the predicted values from the real values
RMSE | $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)^2}$ | Root of the average squared deviation of the predicted values from the real values
MSE | $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i-y_i)^2$ | Average squared deviation of the predicted values from the real values
MAE | $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert\hat{y}_i-y_i\rvert$ | Average absolute deviation of the predicted values from the real values
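As a worked illustration of the formulas in Table 4, the minimal Python sketch below evaluates r-value, R2, RMSEP, RMSE, MSE, and MAE for a set of predicted and real values; the sample concentrations at the end are made up for demonstration and do not come from any cited study.

import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute the Table 4 metrics for real values y_true and predictions y_pred."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # r-value: correlation between predicted and real values
    r = np.sum((y_pred - y_pred.mean()) * (y_true - y_true.mean())) / (
        np.sqrt(np.sum((y_pred - y_pred.mean()) ** 2))
        * np.sqrt(np.sum((y_true - y_true.mean()) ** 2)))
    # R2: fraction of the data variation explained by the model
    r2 = 1.0 - np.sum((y_pred - y_true) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    rmsep = np.sqrt(np.mean(((y_pred - y_true) / y_true) ** 2))  # relative (percentage) error
    mse = np.mean((y_pred - y_true) ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(y_pred - y_true))
    return {"r": r, "R2": r2, "RMSEP": rmsep, "RMSE": rmse, "MSE": mse, "MAE": mae}

# Illustrative gas-concentration values (ppm); not data from the cited studies
print(regression_metrics([10.0, 20.0, 30.0, 40.0], [11.0, 19.0, 33.0, 38.0]))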
Table 5. Summary of drift compensation by ensemble methods.
Ensemble Method | Accuracy Mean | Accuracy Std. Dev. | Accuracy on Final Batch | Reference
SVM | ~81.6% | ~12% | ~68% | [35]
MLP and KNN | 63.93% (MLP), 75.59% (KNN) | ~29% (MLP), ~17% (KNN) | 38% (MLP), 53% (KNN) | [102]
SVM with 2D weights | 84.8% | ~15% | ~60% | [103]
SVM with regularization | ~79.3% | ~8% | ~80% | [36]
MLP | ~83.1% | ~10% | 72.89% | [104]
SVM, LSTM | 83.2% (SVM), 77.8% (LSTM), 89.26% (SVM and LSTM) | 16.63% (SVM), 9.21% (LSTM), 10.0% (SVM and LSTM) | 70.6% (SVM), 83.3% (LSTM), 83.4% (SVM and LSTM) | [105]
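Table 5 compares ensemble strategies for coping with sensor drift. As a rough illustration of the underlying idea only, and not a reproduction of the specific methods in [102,103,104,105], the Python sketch below trains one classifier per measurement batch and weights each classifier's vote by its accuracy on the most recent labeled batch, so that models trained on less drifted data count more; all names and the synthetic data are assumptions for demonstration.

import numpy as np
from sklearn.svm import SVC

def weighted_ensemble_predict(batches, X_test, classes):
    """batches: list of (X, y) arrays from successive measurement sessions.
    Votes are weighted by accuracy on the newest labeled batch."""
    X_ref, y_ref = batches[-1]                      # most recent labeled batch
    models, weights = [], []
    for X, y in batches:
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        models.append(clf)
        # Note: the newest model scores itself on its own training data,
        # which biases its weight upward; acceptable for this sketch.
        weights.append((clf.predict(X_ref) == y_ref).mean() + 1e-6)
    weights = np.array(weights) / np.sum(weights)
    votes = np.zeros((len(X_test), len(classes)))   # weighted votes per class
    class_index = {c: i for i, c in enumerate(classes)}
    for w, clf in zip(weights, models):
        for row, pred in enumerate(clf.predict(X_test)):
            votes[row, class_index[pred]] += w
    return np.array(classes)[np.argmax(votes, axis=1)]

# Illustrative use with synthetic drifting data
rng = np.random.default_rng(0)
batches = []
for b in range(3):
    X = rng.normal(size=(40, 4)) + 0.3 * b          # small shift mimics drift
    y = (X[:, 0] + X[:, 1] > 0.3 * b).astype(int)
    batches.append((X, y))
X_test = rng.normal(size=(10, 4)) + 0.9
print(weighted_ensemble_predict(batches, X_test, classes=[0, 1]))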
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
