Article

Hybrid Deep Feature Generation for Appropriate Face Mask Use Detection

Emrah Aydemir, Mehmet Ali Yalcinkaya, Prabal Datta Barua, Mehmet Baygin, Oliver Faust, Sengul Dogan, Subrata Chakraborty, Turker Tuncer and U. Rajendra Acharya

1 Department of Management Information, College of Management, Sakarya University, Sakarya 54050, Turkey
2 Department of Computer Engineering, Engineering Faculty, Kirsehir Ahi Evran University, Kirsehir 40100, Turkey
3 School of Management & Enterprise, University of Southern Queensland, Toowoomba, QLD 4350, Australia
4 Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
5 Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
6 Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan 75000, Turkey
7 Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield S1 1WB, UK
8 Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig 23119, Turkey
9 School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
10 Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
11 Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
12 Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
13 Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2022, 19(4), 1939; https://doi.org/10.3390/ijerph19041939
Submission received: 1 January 2022 / Revised: 29 January 2022 / Accepted: 30 January 2022 / Published: 9 February 2022
(This article belongs to the Section Digital Health)

Abstract

Mask usage is one of the most important precautions to limit the spread of COVID-19. Therefore, hygiene rules enforce the correct use of face coverings. Automated mask usage classification might be used to improve compliance monitoring. This study deals with the problem of inappropriate mask use. To address that problem, 2075 face mask usage images were collected. The individual images were labeled as mask, no mask, or improper mask. Based on these labels, the following three cases were created: Case 1: mask versus no mask versus improper mask; Case 2: mask versus no mask + improper mask; and Case 3: mask versus no mask. These data were used to train and test a hybrid deep feature-based masked face classification model. The presented method comprises three primary stages: (i) pre-trained ResNet101 and DenseNet201 were used as feature generators, each extracting 1000 features from an image; (ii) the most discriminative features were selected using an improved RelieF selector; and (iii) the chosen features were used to train and test a support vector machine classifier. The resulting model attained 95.95%, 97.49%, and 100.0% classification accuracy on Case 1, Case 2, and Case 3, respectively. These high accuracy values indicate that the proposed model is fit for a practical trial to detect appropriate face mask use in real time.

1. Introduction

Pandemics have occurred throughout human history. The deadliest was an outbreak of bubonic plague between 1347 and 1352, which caused approximately 30 million deaths, corresponding to about 40 percent of the population of medieval Europe at that time [1,2]. The first known flu pandemic occurred in the 18th century; approximately 70% of the world population was infected, but the death rate remained low. The Spanish flu was the first pandemic of the 20th century. The outbreak, which began in 1918, caused between 50 and 100 million fatalities. Later, in 1957, the Asian flu caused approximately 1 to 3 million fatalities. The first known pandemic of the 21st century was caused by the H1N1 virus; it occurred in 2009 and caused 125,000 to 400,000 deaths [3,4]. On 11 March 2020, the outbreak of COVID-19 was officially classified as a pandemic by the World Health Organization (WHO) [5]. The COVID-19 virus was first reported in Wuhan, China, in December 2019 [6]. Since then, the disease has spread rapidly to communities worldwide because the virus is easily transmitted from person to person through the air [7,8]. Crowded environments and a lack of face coverings increase the risk of spreading the virus [9]. The virus had affected 71 million people and caused 1.5 million deaths worldwide as of December 2020 [10].
Face masks are an essential tool to block COVID-19-contaminated aerosols and, thereby, slow the spread of the virus [11]. Hence, the WHO has encouraged wearing face masks in public places to control the COVID-19 outbreak [12,13]. Some governments passed laws and regulatory frameworks that made face masks mandatory in public places. Even where masks are mandatory, some people disobey this rule without a permissible reason [14,15,16]. These people pose a major threat because of their unrestrained ability to spread COVID-19. Therefore, enforcing face covering laws and social standards has become a priority for governments and local authorities. A prerequisite for enforcement is an adequate method for detecting people who fail to wear face masks. However, face mask detection in public spaces is a hard problem [17]. Face masks consist of protective material used to cover the mouth and nose. This definition allows a wide range of face masks with different visual features. Furthermore, head shape, hair, and the face itself differ considerably from person to person. This makes appropriate face mask use detection a hard computer vision problem.
In this study, we propose a machine learning model to automate the detection of appropriate face mask use. We created a novel Threshold RelieF Iterative RelieF (TRFIRF) algorithm to select features extracted with DenseNet201 and ResNet101 from still images. The model was trained and tested on a hand-curated dataset in which each image belongs to one of the following three classes: mask (Class 1), no mask (Class 2), and improper mask (Class 3). The model is structured into three main parts: deep feature generation, feature selection using TRFIRF, and classification using a support vector machine (SVM). The key contributions of the presented deep hybrid feature- and TRFIRF-based model are given below:
1. We have curated a new dataset and made it publicly available at https://websiteyonetimi.ahievran.edu.tr/_Download/MaskDataset.rar (accessed on 20 December 2021).
2. We have improved the RelieF feature selector by creating an iterative version of the algorithm. Subsequently, we addressed the high time complexity of the iterative RelieF (IRF). The result of these efforts is the TRFIRF algorithm.
3. A novel transfer learning method for feature generation was created by combining DenseNet201 and ResNet101 with TRFIRF. The extracted features were used to train and test an SVM classifier. The test results indicate that a high-performance face mask detection model was obtained.
The remainder of this paper is organized as follows. The next section provides some background on medical decision support through artificial intelligence. Section 3 details the methods used to design the hybrid deep feature generator for appropriate face mask use detection. After that, Section 4 details the performance results. These results do not stand in isolation. Therefore, in the discussion section, we relate them to findings from other researchers. Furthermore, we introduce limitations and future work before we conclude the paper.

2. Background

Machine learning is a powerful technique for automatic feature extraction [18,19,20,21]. Many machine learning techniques have been presented in the literature for the detection of different diseases [22,23,24,25,26]. Machine learning techniques developed especially for the early diagnosis of COVID-19 have achieved successful results [27,28]. Moreover, deep learning models are the most widely used techniques to detect COVID-19, since they are better solutions for COVID-19 classification than feature engineering models [29,30]. Deep learning methods achieve high accuracy when sufficient labeled data is available. Thus, deep learning-based automatic diagnosis systems are of great interest in cases where human expertise is not accessible [31]. Such systems can also serve as adjunct tools used by clinicians to confirm their findings. Machine learning methods have been used to detect face masks automatically [32,33]. A wide range of deep learning models, especially convolutional neural networks (CNNs) [34,35], have been used to solve computer vision problems. Deep learning models are state-of-the-art networks in artificial intelligence, and they are likely to yield high classification performance, even with large datasets. Table 1 presents a selection of recent studies conducted to address the face mask detection problem.
It can be noted from Table 1 that many databases have been used and various models proposed. Most of the developed models delivered high classification performance. To improve and, indeed, generalize these results, we have incorporated two widely used pre-trained deep learning models, DenseNet201 [58] and ResNet101 [59], into our model as feature generators. The novel TRFIRF algorithm was used as the feature selector, and the selected features were fed to an SVM [60,61] classifier.

3. Methods

As part of this study, we have designed and implemented a deep feature engineering model to detect face mask wearing. The main objective of this model was to achieve high classification performance with low time complexity. Therefore, transfer learning has been incorporated as an integral part of the proposed model. Figure 1 provides a schematic illustration of the proposed hybrid deep feature and TRFIRF-based face mask detection model. The remainder of this section introduces the individual processing steps in more detail.
In this study, photographs of individuals wearing a mask (Class 1), wearing no mask (Class 2), and wearing a mask improperly (Class 3) were collected by the researchers via internet searches. The discovered photos were combined with 4072 photos uploaded to the Kaggle website by Larxel [62]; note that the database may contain more than one photo of the same individual. A face detection application, coded in C#, was created to automatically extract face images from all photos in the database. Through visual inspection, we eliminated a few low-quality face mask images. Finally, we collected 529 improper mask, 992 mask, and 554 no mask face images. Figure 2 shows example images for each of the three classes.
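The authors' face extraction tool was written in C#; as a hedged illustration only, the MATLAB sketch below shows an equivalent cascade-based face cropping step. The folder names and the choice of detector are our assumptions, not the authors' implementation.

```matlab
% Illustrative face-cropping sketch (assumption: a MATLAB equivalent of the
% authors' C# tool). Requires the Computer Vision Toolbox.
detector = vision.CascadeObjectDetector();    % Viola-Jones face detector
srcFiles = dir(fullfile('photos', '*.jpg'));  % hypothetical input folder
for k = 1:numel(srcFiles)
    im = imread(fullfile('photos', srcFiles(k).name));
    bboxes = detector(im);                    % one row [x y width height] per face
    for f = 1:size(bboxes, 1)
        face = imcrop(im, bboxes(f, :));      % crop each detected face
        imwrite(face, fullfile('faces', sprintf('%d_%d.jpg', k, f)));
    end
end
```

Each cropped face would then be inspected visually, as described above, before being assigned to one of the three classes.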
The attributes of the collected face mask dataset are listed in Table 2.
The collected dataset can be downloaded from https://websiteyonetimi.ahievran.edu.tr/_Download/MaskDataset.rar (accessed on 20 December 2021).
As can be seen in Table 2, the dataset contains 2075 facial images taken from different profiles. Using this dataset, a model for mask-sensitive automatic doors has been proposed (see Figure 9). Moreover, this is a hybrid dataset: we created it from open-source face mask datasets combined with images collected online. The most important attribute that distinguishes this dataset from other datasets is the inclusion of the improper mask class.
Feature generation was accomplished with two pre-trained deep learning models: ResNet101 [59] and DenseNet201 [58], with 101 and 201 layers, respectively. These models were initially trained on the ImageNet dataset [34] and have since been used extensively for transfer learning applications [63,64]. Transfer learning models can be used for both feature generation and classification; in our study, we used them for feature generation. To be specific, the fully connected layers of the two pre-trained models were used for this task. Figure 3 shows a block diagram of ResNet101 and DenseNet201, in which we have highlighted the layers used for deep feature generation. The following sections provide more details on ResNet101 and DenseNet201.
Numerous studies have shown that CNNs provide good solutions for computer vision problems [65,66]. The CNN network structure is inspired by pyramidal cells in the cerebral cortex [67,68]. A drawback of that approach is that CNNs tend to suffer from exploding/vanishing gradient problems; hence, they are difficult to optimize [69]. To solve these problems, various models have been proposed, one of which is ResNet. This architecture uses residual connections, which allow some information to bypass specific network layers [59]. The most widely used ResNet implementations are ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152. These versions are named after their number of layers; for example, ResNet18 and ResNet34 have 18 and 34 layers, respectively, and are categorized as small networks. In this work, we used ResNet101, which has 101 layers, for feature generation through transfer learning [59].
Huang et al. [58] presented a densely connected CNN, widely known as DenseNet. It uses hierarchical dense connections, which shorten the paths between layers. DenseNet builds on ResNet and employs dense connectivity layers, composite functions (batch normalization and ReLU), pooling, a growth-rate setting, bottleneck layers, and compression layers. DenseNet201 can be used for both classification and feature generation; in this work, it was used as a feature generator [58].
In this section, we present our transfer learning approach to feature extraction and selection. Both the ResNet101 and DenseNet201 networks were used for automated hybrid deep feature extraction. Feature selection was performed with the novel TRFIRF algorithm. Finally, an SVM was used as the classification algorithm. The steps of the proposed model are given below.
Step 0: Load face images.
Step 1: Generate 1000 features by deploying the pre-trained ResNet101.
Step 2: Generate 1000 features by deploying the pre-trained DenseNet201.
Step 3: Merge features and assemble a 2000-dimensional feature vector for each face image.
Step 4: Use the TRFIRF algorithm on the feature vector.
Step 4a: Apply RelieF to the feature vector and obtain weights.
Step 4b: Select features with weights greater than a threshold.
Step 4c: Deploy an iterative RelieF (IRF) to the selected features.
Step 5: Feed the chosen features to the SVM classifier.
In this section, we describe how the deep features were generated from the face images. DenseNet201 and ResNet101 were used in transfer learning mode. The face images were fed to these networks, and 1000 features were obtained from each network; specifically, these features were obtained from the last fully connected layer (FC1000) of each network. The primary objective of this phase is to combine the classification abilities of DenseNet201 and ResNet101 such that the combination outperforms each individual network. To achieve this objective, a deep feature engineering model has been introduced, and the processing steps of the deep feature generation algorithm are given below:
Step 0: Load the collected face images.
Step 1: Generate features using the pre-trained ResNet101 and DenseNet201 networks.

$feat_{ResNet101} = ResNet101(Im)$,  (1)

$feat_{DenseNet201} = DenseNet201(Im)$,  (2)

where $ResNet101(\cdot)$ and $DenseNet201(\cdot)$ denote the deep feature generation functions, $Im$ is the input image, and $feat_{ResNet101}$ and $feat_{DenseNet201}$ are 1000-dimensional feature vectors.
Step 2: Merge the generated deep features.

$feat(i) = feat_{ResNet101}(i), \; i \in \{1, 2, \ldots, 1000\}$,  (3)

$feat(i + 1000) = feat_{DenseNet201}(i)$,  (4)

where $feat$ denotes the concatenated feature vector of length 2000.
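Assuming the pre-trained networks shipped with MATLAB's Deep Learning Toolbox, Steps 1 and 2 (Equations (1)-(4)) can be sketched as follows; in both MATLAB models the last fully connected layer is named 'fc1000'. This is a minimal sketch, not the authors' exact code, and the input file name is illustrative.

```matlab
% Hybrid deep feature generation sketch for Equations (1)-(4).
% Requires the Deep Learning Toolbox model packages for both networks.
netR = resnet101;                                   % pre-trained on ImageNet
netD = densenet201;                                 % pre-trained on ImageNet

im  = imread('face.jpg');                           % hypothetical face image
imR = imresize(im, netR.Layers(1).InputSize(1:2));  % 224 x 224 network input
imD = imresize(im, netD.Layers(1).InputSize(1:2));  % 224 x 224 network input

% 1000 features from the last fully connected layer of each network
featResNet101   = activations(netR, imR, 'fc1000', 'OutputAs', 'rows');
featDenseNet201 = activations(netD, imD, 'fc1000', 'OutputAs', 'rows');

feat = [featResNet101, featDenseNet201];            % 2000-dimensional vector
```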
One of the most important steps in machine learning algorithm design is feature selection, which must establish feature significance and rank the features accordingly. In this work, we use the novel TRFIRF method, a variation of RelieF [70]. The algorithm creates a feature weighting matrix based on Manhattan distance calculations. The individual weights can be negative or positive; negative weights represent redundant features. Figure 4 provides a graphical representation of TRFIRF.
The TRFIRF algorithm is composed of two layers. In the first layer, the algorithm calculates the feature weights for threshold-based feature selection. In the second layer, the indices of the most relevant features are generated for iterative selection. The steps below describe how the algorithm unfolds:
Step 0: Apply RelieF to the 2000 merged features to generate weights.

$w = ReliefF(feat, actout)$,  (5)

where $ReliefF(\cdot, \cdot)$ denotes the RelieF algorithm, $w$ is the weight vector of length 2000, and $actout$ represents the actual class labels.
Step 1: Select features using the calculated weights ($w$) and a threshold value ($trs$). In this work, $trs$ was set to $10^{-2}$.

$feat_T(count) = feat(i) \;\; \text{if} \;\; w(i) > trs, \quad count = count + 1$,  (6)

where $feat_T$ represents the features selected by thresholding.
Step 2: Calculate the weights of $feat_T$ by reapplying the RelieF function from Equation (5).
Step 3: Determine the initial and final numbers of features to evaluate. In this work, the range was set between 100 and 500.
Step 4: Select a loss generator. In this work, an SVM classifier with 10-fold cross-validation was used to generate loss values.
Step 5: Choose features iteratively.

$w_T = ReliefF(feat_T, actout)$,  (7)

where $w_T$ denotes the weights of $feat_T$.

$idx = sort(w_T)$.  (8)

In Equation (8), the indices ($idx$) of the most relevant features are obtained by sorting the weights in descending order.

$fv^k(j) = feat_T(idx(j)), \; j \in \{1, 2, \ldots, 100 + k\}, \; k \in \{1, 2, \ldots, 400\}$,  (9)

where $fv^k$ is the $k$th selected feature vector.
Step 6: Calculate the loss values.

$loss(k) = SVM(fv^k, actout)$.  (10)

Step 7: Find the index of the minimum loss value.

$ind = \arg\min_k \, loss(k)$.  (11)

Step 8: Select the optimal/final features.

$final = feat_T(idx(j)), \; j \in \{1, 2, \ldots, 100 + ind\}$.  (12)

The nine steps outlined above (Steps 0-8) define the TRFIRF feature selector.
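A minimal MATLAB sketch of these nine steps is given below, assuming `feat` holds the N x 2000 feature matrix and `actout` the class labels. The nearest-neighbor count passed to `relieff` is our assumption, since the paper does not state it.

```matlab
% TRFIRF sketch: threshold-based RelieF followed by iterative RelieF.
% Assumes feat (N x 2000) and actout (N x 1 labels) are in the workspace.
[~, w] = relieff(feat, actout, 10);        % RelieF weights (k = 10 assumed)
featT  = feat(:, w > 1e-2);                % threshold selection, trs = 10^-2

[~, wT]  = relieff(featT, actout, 10);     % weights of the thresholded set
[~, idx] = sort(wT, 'descend');            % most relevant features first

tmpl = templateSVM('KernelFunction', 'polynomial', 'PolynomialOrder', 3);
loss = zeros(1, 400);
for k = 1:400                              % candidate sizes 101 ... 500
    fvk = featT(:, idx(1:100 + k));
    cv  = fitcecoc(fvk, actout, 'Learners', tmpl, 'KFold', 10);
    loss(k) = kfoldLoss(cv);               % 10-fold CV misclassification rate
end
[~, ind] = min(loss);                      % index of the minimum loss value
final = featT(:, idx(1:100 + ind));        % optimal/final feature set
```

Running the SVM once per candidate size is what gives IRF its high time complexity; the threshold stage reduces the 2000 features before this loop, which is the motivation behind TRFIRF.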
The final step of the presented face mask detection model is classification. The selected features are fed to the SVM classifier [60,61], which was trained and tested using 10-fold cross-validation. The attributes of the developed SVM classifier are given in the list below; a configuration sketch follows the list.
  • Kernel function: 3rd degree polynomial kernel, also known as Cubic SVM.
  • Kernel scale: Automatic.
  • Box constraint level: One.
  • Coding: One-vs-One.
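Under the assumption that MATLAB's `fitcecoc` was used, the listed settings translate into the following configuration sketch (variable names are illustrative):

```matlab
% Cubic SVM with the listed settings, evaluated by 10-fold cross-validation.
tmpl = templateSVM('KernelFunction', 'polynomial', 'PolynomialOrder', 3, ...
                   'KernelScale', 'auto', 'BoxConstraint', 1);
mdl = fitcecoc(final, actout, 'Learners', tmpl, 'Coding', 'onevsone');
cv  = crossval(mdl, 'KFold', 10);          % 10-fold cross-validation
acc = 1 - kfoldLoss(cv);                   % overall classification accuracy
```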

4. Results

The presented model was trained using the dataset described in Section 3. MATLAB (2020a) was used as the programming environment. The model was evaluated based on three test cases. These cases are defined in the text below. In addition, a descriptive view of these cases is presented in Figure 5.
  • Case 1: Creates a three-class classification problem by using the categories mask, no mask, and improper mask as individual classes. This case contains 2075 images.
  • Case 2: Creates a two-class classification problem by combining improper mask and no mask to form a ‘non-compliance’ set. This case also contains 2075 images.
  • Case 3: Creates a two-class classification problem by excluding the improper mask set. This allowed us to compare our results with outcomes from other studies. There are 1546 images in this case.
We have evaluated the classification performance of the SVM model with 10-fold cross-validation. The individual performance measures were accuracy (ACC), average precision (AP), unweighted average recall (UAR), Matthews correlation coefficient (MCC), F1-score, Cohen’s kappa (CK), and geometric mean (GM) [71,72]. The results obtained for the defined cases are presented in Table 3.
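To illustrate how such measures follow from the cross-validated predictions, the sketch below computes accuracy, UAR, and AP from a confusion matrix; this is our illustration, not the authors' evaluation code, and `pred` denotes the pooled cross-validation predictions.

```matlab
% Accuracy, UAR, and AP derived from a confusion matrix.
C   = confusionmat(actout, pred);          % rows: true class, columns: predicted
acc = sum(diag(C)) / sum(C(:));            % overall accuracy
rec = diag(C) ./ sum(C, 2);                % per-class recall
uar = mean(rec);                           % unweighted average recall
prc = diag(C) ./ sum(C, 1)';               % per-class precision
ap  = mean(prc);                           % average precision
```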
Figure 6 communicates the classification results in the form of a confusion matrix for each of the three cases.
Table 3 shows that the presented model has obtained 100.0% classification accuracy for Case 3, which resulted from 100% accuracy in each of the ten folds. Figure 7 shows the graph of accuracy (%) versus each fold of ten-fold cross-validation for Case 1 and Case 2.
Fold-wise classification accuracies for Case 3 are not depicted in Figure 7, since the model attained 100% classification accuracy on that case; all ten fold-wise accuracies for Case 3 are equal to 100%.

5. Discussion

Compulsory face covering, introduced to slow the spread of COVID-19, significantly impacted the lives of ordinary people worldwide. To reduce the transmission rate, it has become mandatory to wear face masks in some public spaces. However, enforcing that demand is difficult. Systems that can detect people without face masks, or with incorrectly worn ones, might help to enforce regulations and, thereby, control the spread of COVID-19. In this work, we propose a tool to address that problem. We use a transfer learning-based feature generation technique to detect face covering violations. To be specific, our method takes still images of faces as input and determines whether the person shown in the image wears a mask correctly. As part of this study, we developed a feed-forward feature generation model with low computational complexity. We generated the features with two transfer learning networks (ResNet101 and DenseNet201); in other words, we fused two deep learning models. Hence, the resulting feature extractor captured subtle variations in the data, which led to good classification performance. Various deep networks were tested before ResNet and DenseNet were selected, and pre-trained fully connected network layers were used to obtain features and speed up model generation. The accuracy (%) obtained using various transfer learning models on our hand-curated face mask image dataset is shown in Table 4.
Table 4 indicates that the best performing transfer learning methods were ResNet101 and DenseNet201. Therefore, we have selected these two CNNs as feature generators. The TRFIRF algorithm was created to facilitate feature selection. Three cases were used to obtain the results. The TRFIRF selected 406, 478, and 345 features for Case 1, Case 2, and Case 3, respectively. Figure 8 shows a graph of loss value versus number of features using the TRFIRF algorithm for Case 1, Case 2, and Case 3.
Figure 8 depicts the iterative feature selection process by plotting loss values against the number of features for the three cases. TRFIRF is a parametric feature selection function, and the number of features ranges from 100 to 500. Figure 8 shows that the minimum loss values (close to 0) are obtained for Case 3. Hence, the proposed model yielded the highest classification accuracy of 100.0% (accuracy = 1 − loss) for Case 3. We take this high accuracy as a strong indication that Case 3 poses the easiest problem: discriminating mask from no mask images is easier than the problems posed by Cases 1 and 2. In this research, the biggest challenge was to detect face images of people wearing their mask improperly; therefore, we added the improper mask class to the dataset.
To establish that our model has generalizable face mask detection knowledge, we validated it with the MaskedFace-Net dataset [8]. MaskedFace-Net is a widely used open-access dataset and one of the largest containing correctly/incorrectly masked face images; it combines actual photographs with artificially masked images. Table 5 lists the properties of this dataset.
The proposed method achieved 99.75% accuracy when asked to classify the images from the MaskedFace-Net dataset. The high accuracy that was achieved on this large dataset confirms both the performance and the practical applicability of the proposed model. The practical applicability arises from the fact that none of the MaskedFace-Net images was used for training the model.
Unlike MaskedFace-Net, the dataset we curated contains only real masked and unmasked images. This property distinguishes our dataset from MaskedFace-Net.
The advantages of this work are given below:
  • A new face mask dataset consisting of real face mask images was developed.
  • A new problem, named improper mask, has been defined in this work: wearing a face mask improperly is treated as its own class. In Turkey especially, improper mask wearing is a common violation of face covering rules, and this behavior is believed to be a major contributor to the spread of COVID-19. To detect this rule violation, we defined the improper mask category and demonstrated the classification capability of the proposed hybrid deep feature extractor-based model on this class.
  • Our literature review indicates that most face mask-wearing detection methods have been tested only on the categories mask and no mask. Our proposal attained 100% classification accuracy for this case (Case 3). We solved this problem by deploying a hybrid deep feature engineering model with transfer learning; therefore, our model also has low time complexity.
  • A highly accurate deep feature-based model is presented.
  • The presented model used two pre-trained transfer learning networks for feature generation. Therefore, it extracted more salient features with low execution time.
  • A new version of RelieF selector, named TRFIRF, was developed. It selects an optimal number of features automatically.
  • This model can also be used for the automated classification of abnormal classes from normal classes.
The disadvantages of this work can be summarized as follows:
  • ResNet101 and DenseNet201 are not cognitive or lightweight methods; new-generation lightweight and cognitive models could be used instead.
  • Bigger face mask datasets are required to test the model further.
In the future, real-time automated face mask detection can be developed with the following steps: (i) collecting public images with wearable and fixed-position cameras, (ii) face recognition, (iii) face region segmentation, (iv) face mask detection with the proposed model, and (v) reporting of unmasked people.
The presented deep feature engineering-based face mask wearing detection application can be used in medical centers and other locations to detect violations of face covering rules. A camera can be placed at a door to take a frontal picture of a person’s face. That picture can be processed with the presented hybrid deep model, and the processing results will indicate possible violations of face covering rules. Deploying such a system will automate and objectify the detection aspect of face covering rule enforcement. A schematic demonstration of our project is shown in Figure 9.

6. Conclusions

Automated detection of appropriate mask use based on face images is a challenging and popular problem in machine learning. In this work, an accurate model was developed using deep feature generation, TRFIRF-based feature selection, and classification techniques. We assembled a face mask image dataset consisting of mask, no mask, and improper mask categories, from which three cases were created. Our proposed model attained 95.95%, 97.49%, and 100.0% accuracies for Case 1, Case 2, and Case 3, respectively. In the future, we aim to extend this work to create a real-time face mask detection system. Such a system might reduce the risk of spreading the virus by monitoring and subsequently enforcing face covering rules.

Author Contributions

Conceptualization, E.A., M.A.Y., P.D.B., M.B., O.F., S.D., S.C., T.T., U.R.A.; formal analysis, E.A., M.A.Y., P.D.B., M.B., O.F., S.D., S.C., T.T., U.R.A.; investigation, E.A., M.A.Y., P.D.B.; software, T.T.; methodology, E.A., M.A.Y., P.D.B.; project administration, U.R.A.; resources, E.A., M.A.Y., P.D.B., M.B., O.F., S.D., S.C., T.T., U.R.A.; supervision, U.R.A.; validation, E.A., M.A.Y., P.D.B., M.B.; visualization, E.A., M.A.Y., P.D.B., M.B.; writing—original draft, E.A., M.A.Y., P.D.B., M.B., O.F., S.D., S.C., T.T., U.R.A.; writing—review and editing, E.A., M.A.Y., P.D.B., M.B., O.F., S.D., S.C., T.T., U.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors state that this work has not received any funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are publicly available at https://websiteyonetimi.ahievran.edu.tr/_Download/MaskDataset.rar (accessed on 20 December 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schmid, B.V.; Büntgen, U.; Easterday, W.R.; Ginzler, C.; Walløe, L.; Bramanti, B.; Stenseth, N.C. Climate-driven introduction of the Black Death and successive plague reintroductions into Europe. Proc. Natl. Acad. Sci. USA 2015, 112, 3020–3025. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Akin, H. A catastrophe foretold: An assessment of the great pestilence of the middle ages and its social consequences. Kebikec Insan Bilimleri Icin Kaynak Arast. Derg. 2018, 46, 247–296. [Google Scholar]
  3. Katz, R. Use of revised International Health Regulations during influenza A (H1N1) epidemic, 2009. Emerg. Infect. Dis. 2009, 15, 1165. [Google Scholar] [CrossRef] [PubMed]
  4. Chowell, G.; Bertozzi, S.M.; Colchero, M.A.; Lopez-Gatell, H.; Alpuche-Aranda, C.; Hernandez, M.; Miller, M.A. Severe respiratory disease concurrent with the circulation of H1N1 influenza. N. Engl. J. Med. 2009, 361, 674–679. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Liu, X.; Zhang, S. COVID-19: Face masks and human-to-human transmission. Influenza Other Respir. Viruses 2020, 14, 472–473. [Google Scholar] [CrossRef] [PubMed]
  6. Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R. A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. 2020, 382, 727–733. [Google Scholar] [CrossRef]
  7. Wang, B.; Zheng, J.; Chen, C.P. A Survey on Masked Facial Detection Methods and Datasets for Fighting Against COVID-19. IEEE Trans. Artif. Intell. 2021, 1–21. [Google Scholar] [CrossRef]
  8. Cabani, A.; Hammoudi, K.; Benhabiles, H.; Melkemi, M. MaskedFace-Net–A dataset of correctly/incorrectly masked face images in the context of COVID-19. Smart Health 2021, 19, 100144. [Google Scholar] [CrossRef]
  9. Huang, C.; Wang, Y.; Li, X.; Ren, L.; Zhao, J.; Hu, Y.; Zhang, L.; Fan, G.; Xu, J.; Gu, X. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet 2020, 395, 497–506. [Google Scholar] [CrossRef] [Green Version]
  10. WHO. Coronavirus Disease (COVID-19) Dashboard. 2020. Available online: https://covid19.who.int/ (accessed on 7 December 2020).
  11. Feng, S.; Shen, C.; Xia, N.; Song, W.; Fan, M.; Cowling, B.J. Rational use of face masks in the COVID-19 pandemic. Lancet Respir. Med. 2020, 8, 434–436. [Google Scholar] [CrossRef]
  12. Kirby, T. Australian Government releases face masks to protect against coronavirus. Lancet Respir. Med. 2020, 8, 239. [Google Scholar] [CrossRef]
  13. Turan, A.; Çelikyay, H. Fight against COVID-19 in Turkey: Policies and actors. Intern. J. Manag. Acad. 2020, 3, 1–25. [Google Scholar]
  14. Eikenberry, S.E.; Mancuso, M.; Iboi, E.; Phan, T.; Eikenberry, K.; Kuang, Y.; Kostelich, E.; Gumel, A.B. To mask or not to mask: Modeling the potential for face mask use by the general public to curtail the COVID-19 pandemic. Infect. Dis. Model. 2020, 5, 293–308. [Google Scholar] [CrossRef] [PubMed]
  15. Betsch, C.; Korn, L.; Sprengholz, P.; Felgendreff, L.; Eitze, S.; Schmid, P.; Böhm, R. Social and behavioral consequences of mask policies during the COVID-19 pandemic. Proc. Natl. Acad. Sci. USA 2020, 117, 21851–21853. [Google Scholar] [CrossRef]
  16. Batagelj, B.; Peer, P.; Štruc, V.; Dobrišek, S. How to Correctly Detect Face-Masks for COVID-19 from Visual Information? Appl. Sci. 2021, 11, 2070. [Google Scholar] [CrossRef]
  17. Loey, M.; Manogaran, G.; Taha, M.H.N.; Khalifa, N.E.M. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic. Measurement 2020, 167, 108288. [Google Scholar] [CrossRef]
  18. Akilan, T.; Wu, Q.J.; Zhang, H. Effect of fusing features from multiple DCNN architectures in image classification. IET Image Process. 2018, 12, 1102–1110. [Google Scholar] [CrossRef]
  19. Ma, L.; Jiang, W.; Jie, Z.; Jiang, Y.-G.; Liu, W. Matching image and sentence with multi-faceted representations. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2250–2261. [Google Scholar] [CrossRef]
  20. Zhang, W.; Wu, Q.J.; Yang, Y.; Akilan, T. Multimodel feature reinforcement framework using Moore-Penrose Inverse for big data analysis. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 5008–5021. [Google Scholar] [CrossRef]
  21. Huynh-The, T.; Hua, C.-H.; Kim, D.-S. Encoding pose features to images with data augmentation for 3-D action recognition. IEEE Trans. Ind. Inform. 2019, 16, 3100–3111. [Google Scholar] [CrossRef]
  22. Pahuja, G.; Nagabhushan, T. A comparative study of existing machine learning approaches for parkinson’s disease detection. IETE J. Res. 2021, 67, 4–14. [Google Scholar] [CrossRef]
  23. Deivasigamani, S.; Senthilpari, C.; Yong, W.H. Machine learning method based detection and diagnosis for epilepsy in EEG signal. J. Ambient Intell. Humaniz. Comput. 2021, 12, 4215–4221. [Google Scholar] [CrossRef]
  24. Tuncer, T.; Dogan, S.; Pławiak, P.; Acharya, U.R. Automated arrhythmia detection using novel hexadecimal local pattern and multilevel wavelet transform with ECG signals. Knowl.-Based Syst. 2019, 186, 104923. [Google Scholar] [CrossRef]
  25. Tuncer, T.; Dogan, S.; Subasi, A. Surface EMG signal classification using ternary pattern and discrete wavelet transform based feature extraction for hand movement recognition. Biomed. Signal Processing Control 2020, 58, 101872. [Google Scholar] [CrossRef]
  26. Jahmunah, V.; Sudarshan, V.K.; Oh, S.L.; Gururajan, R.; Gururajan, R.; Zhou, X.; Tao, X.; Faust, O.; Ciaccio, E.J.; Ng, K.H. Future IoT tools for COVID-19 contact tracing and prediction: A review of the state-of-the-science. Int. J. Imaging Syst. Technol. 2021, 31, 455–471. [Google Scholar] [CrossRef]
  27. Sharifrazi, D.; Alizadehsani, R.; Roshanzamir, M.; Joloudari, J.H.; Shoeibi, A.; Jafari, M.; Hussain, S.; Sani, Z.A.; Hasanzadeh, F.; Khozeimeh, F. Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images. Biomed. Signal Process. Control 2021, 68, 102622. [Google Scholar] [CrossRef]
  28. Abdar, M.; Salari, S.; Qahremani, S.; Lam, H.-K.; Karray, F.; Hussain, S.; Khosravi, A.; Acharya, U.R.; Nahavandi, S. UncertaintyFuseNet: Robust Uncertainty-aware Hierarchical Feature Fusion with Ensemble Monte Carlo Dropout for COVID-19 Detection. arXiv 2021, arXiv:2105.08590. [Google Scholar]
  29. Wang, S.; Zha, Y.; Li, W.; Wu, Q.; Li, X.; Niu, M.; Wang, M.; Qiu, X.; Li, H.; Yu, H. A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. Eur. Respir. J. 2020, 56, 2000775. [Google Scholar] [CrossRef]
  30. Jamshidi, M.; Lalbakhsh, A.; Talla, J.; Peroutka, Z.; Hadjilooei, F.; Lalbakhsh, P.; Jamshidi, M.; La Spada, L.; Mirmozafari, M.; Dehghani, M. Artificial intelligence and COVID-19: Deep learning approaches for diagnosis and treatment. IEEE Access 2020, 8, 109581–109595. [Google Scholar] [CrossRef]
  31. Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Al-Turjman, F.; Pinheiro, P.R. Covidgan: Data augmentation using auxiliary classifier gan for improved covid-19 detection. IEEE Access 2020, 8, 91916–91923. [Google Scholar] [CrossRef]
  32. Chowdary, G.J.; Punn, N.S.; Sonbhadra, S.K.; Agarwal, S. Face mask detection using transfer learning of InceptionV3. arXiv 2020, arXiv:2009.08369. [Google Scholar]
  33. Mbunge, E.; Simelane, S.; Fashoto, S.G.; Akinnuwesi, B.; Metfula, A.S. Application of deep learning and machine learning models to detect COVID-19 face masks-A review. Sustain. Oper. Comput. 2021, 2, 235–245. [Google Scholar] [CrossRef]
  34. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  35. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  36. Nieto-Rodríguez, A.; Mucientes, M.; Brea, V.M. System for medical mask detection in the operating room through facial attributes. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Santiago de Compostela, Spain, 17–19 June 2015; pp. 138–145. [Google Scholar]
  37. Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in ’Real-Life’ Images: Detection, Alignment, and Recognition. 2008. Available online: https://hal.inria.fr/inria-00321923/ (accessed on 20 December 2021).
  38. Rowley, H.A.; Baluja, S.; Kanade, T. Neural network-based face detection. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 23–38. [Google Scholar] [CrossRef]
  39. Frischholz, R. Bao Face Database at the Face Detection Homepage. 2012. Available online: https://facedetection.com/ (accessed on 20 December 2021).
  40. Ejaz, M.S.; Islam, M.R.; Sifatullah, M.; Sarker, A. Implementation of Principal Component Analysis on masked and non-masked face recognition. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–5. [Google Scholar]
  41. AT&T Laboratories Cambridge. The ORL Database of Faces. 2014. Available online: https://cam-orl.co.uk/facedatabase.html (accessed on 20 December 2021).
  42. Qin, B.; Li, D. Identifying facemask-wearing condition using image super-resolution with classification network to prevent COVID-19. Sensors 2020, 20, 5236. [Google Scholar] [CrossRef]
  43. Witkowski, M. Medical Masks Dataset. 2020. Available online: https://www.kaggle.com/vtech6/medical-masks-dataset (accessed on 15 November 2020).
  44. Li, C.; Wang, R.; Li, J.; Fei, L. Face detection based on YOLOv3. In Recent Trends in Intelligent Computing, Communication and Devices; Springer: Berlin/Heidelberg, Germany, 2020; pp. 277–284. [Google Scholar]
  45. Liu, Z.; Luo, P.; Wang, X.; Tang, X. Large-scale celebfaces attributes (celeba) dataset. Retrieved August 2018, 15, 2018. [Google Scholar]
  46. Yang, S.; Luo, P.; Loy, C.-C.; Tang, X. Wider face: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5525–5533. [Google Scholar]
  47. Hussain, S.A.; Al Balushi, A.S.A. A real time face emotion classification and recognition using deep learning model. J. Phys. Conf. Ser. 2020, 1432, 012087. [Google Scholar] [CrossRef]
  48. Lundqvist, D.; Flykt, A.; Öhman, A. The karolinska directed emotional faces (KDEF). CD ROM Dep. Clin. Neurosci. Psychol. Sect. Karolinska Inst. 1998, 91, 2. [Google Scholar]
  49. Wang, Z.; Wang, G.; Huang, B.; Xiong, Z.; Hong, Q.; Wu, H.; Yi, P.; Jiang, K.; Wang, N.; Pei, Y. Masked face recognition dataset and application. arXiv 2020, arXiv:2003.09093. [Google Scholar]
  50. SMFD. A Simulated Masked Face Dataset, SMFD. 2020. Available online: https://github.com/prajnasb/observations (accessed on 15 November 2020).
  51. Loey, M.; Manogaran, G.; Taha, M.H.N.; Khalifa, N.E.M. Fighting against COVID-19: A novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection. Sustain. Cities Soc. 2020, 65, 102600. [Google Scholar] [CrossRef]
  52. Makwana, D. Face Mask Dataset, FMD. 2020. Available online: https://www.kaggle.com/andrewmvd/face-mask-detection (accessed on 15 November 2020).
  53. Roy, B.; Nandy, S.; Ghosh, D.; Dutta, D.; Biswas, P.; Das, T. MOXA: A deep learning based unmanned approach for real-time monitoring of people wearing medical masks. Trans. Indian Natl. Acad. Eng. 2020, 5, 509–518. [Google Scholar] [CrossRef]
  54. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  55. Mohan, P.; Paul, A.J.; Chirania, A. A tiny CNN architecture for medical face mask detection for resource-constrained endpoints. arXiv 2020, arXiv:2011.14858. [Google Scholar]
  56. Jangra, A. Face Mask 12k Images Dataset. 2020. Available online: https://www.kaggle.com/ashishjangra27/face-mask-12k-images-dataset (accessed on 20 December 2021).
  57. Bhadani, A.K.; Sinha, A. A facemask detector using machine learning and image processing techniques. Eng. Sci. Technol. Int. J. 2020, 1–8. [Google Scholar]
  58. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  60. Vapnik, V. The support vector method of function estimation. In Nonlinear Modeling; Springer: Berlin/Heidelberg, Germany, 1998; pp. 55–85. [Google Scholar]
  61. Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  62. Larxel. Face Mask Detection. Available online: www.kaggle.com (accessed on 10 November 2020).
  63. Morid, M.A.; Borjali, A.; Del Fiol, G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput. Biol. Med. 2021, 128, 104115. [Google Scholar] [CrossRef]
  64. Cohn, R.; Holm, E. Unsupervised machine learning via transfer learning and k-means clustering to classify materials image data. Integr. Mater. Manuf. Innov. 2021, 10, 231–244. [Google Scholar] [CrossRef]
  65. Hussein, B.R.; Malik, O.A.; Ong, W.-H.; Slik, J.W.F. Application of computer vision and machine learning for digitized herbarium specimens: A systematic literature review. arXiv 2021, arXiv:2104.08732. [Google Scholar]
  66. Chai, J.; Zeng, H.; Li, A.; Ngai, E.W. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
  67. Sasongko, A.T.; Fanany, M.I. Indonesia toll road vehicle classification using transfer learning with pre-trained Resnet models. In Proceedings of the 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 5–6 December 2019; pp. 373–378. [Google Scholar]
  68. Kumar, D.; Zhang, X.; Su, H.; Wei, S. Accurate object detection based on faster R-CNN in remote sensing imagery. In Proceedings of the 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China, 26–29 November 2019; pp. 1–6. [Google Scholar]
  69. Shao, J.; Qu, C.; Li, J. A performance analysis of convolutional neural network models in SAR target recognition. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; pp. 1–6. [Google Scholar]
  70. Kira, K.; Rendell, L.A. The feature selection problem: Traditional methods and a new algorithm. In Proceedings of the National Conference on Artificial Intelligence (AAAI), San Jose Convention Center, San Jose, CA, USA, 12–16 July 1992; pp. 129–134. [Google Scholar]
  71. Warrens, M.J. On the equivalence of Cohen’s kappa and the Hubert-Arabie adjusted Rand index. J. Classif. 2008, 25, 177–183. [Google Scholar] [CrossRef] [Green Version]
  72. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  74. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  75. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  76. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
Figure 1. Illustration of the proposed hybrid deep features and TRFIRF-based face mask detection model.
Figure 2. Sample images from the three classes in the dataset: (a) mask images, (b) no mask images, (c) improper mask images.
Figure 3. ResNet101 and DenseNet201 deep network architectures.
Figure 4. Snapshot of the presented TRFIRF model. In this model, RelieF is applied two times.
Figure 5. Test cases used in the study.
Figure 6. Confusion matrices resulting from training and testing the model with the three different cases: (a) Case 1, (b) Case 2, and (c) Case 3.
Figure 7. Accuracy (%) versus each fold of ten-fold cross-validation for Cases 1 and 2.
Figure 8. Graph of loss value versus number of features using the TRFIRF selector for Case 1, Case 2, and Case 3.
Figure 9. Mask-sensitive automatic door.
Table 1. Summary of current studies conducted on face mask detection.

| Study | Method | Dataset | Accuracy (%) |
|---|---|---|---|
| Nieto-Rodríguez et al. [36] 2015 | Mixture of Gaussians | LFW [37], CMU [38], BAO [39] | 95.00 |
| Ejaz et al. [40] 2019 | Principal Component Analysis | ORL [41] | 72.00 |
| Qin and Li [42] 2020 | Super-resolution with classification network | MMD [43] | 98.70 |
| Li et al. [44] 2020 | You Only Look Once (YOLOv3) | CelebA [45], WIDER FACE [46] | 93.90 |
| Hussain et al. [47] 2020 | Convolutional Neural Networks | KDEF [48] | 88.00 |
| Loey et al. [17] 2020 | Convolutional Neural Networks, Support Vector Machine | RMFD [49], SMFD [50], LFW [37] | 100.00 |
| Loey et al. [51] 2020 | Convolutional Neural Networks, You Only Look Once (YOLOv2) | MMD [43], FMD [52] | 81.00 |
| Chowdary et al. [32] 2020 | Convolutional Neural Networks | SMFD [50] | 100.00 |
| Roy et al. [53] 2020 | You Only Look Once (YOLOv3) | Moxa3K [53,54] | 63.00 |
| Mohan et al. [55] 2020 | Convolutional Neural Networks | FMD [52], FM12kID [56] | 99.83 |
| Bhadani and Sinha [57] 2020 | Deep Neural Networks, Principal Component Analysis | Collected Data | 95.67 |
Table 2. Amount of class-specific data within the dataset.

| Class | Number of Face Images |
|---|---|
| Mask | 992 |
| No mask | 554 |
| Improper mask | 529 |
| Total | 2075 |
Table 3. Summary of overall performance (%) obtained for the three cases.

| Performance Measure | Case 1 | Case 2 | Case 3 |
|---|---|---|---|
| Accuracy (%) | 95.95 | 97.49 | 100.0 |
| AP (%) | 95.56 | 97.47 | 100.0 |
| UAR (%) | 95.36 | 97.51 | 100.0 |
| MCC (%) | 93.42 | 94.98 | 100.0 |
| F1-score (%) | 95.45 | 97.49 | 100.0 |
| CK (%) | 93.62 | 94.98 | 100.0 |
| GM (%) | 95.31 | 97.51 | 100.0 |
Table 4. Accuracy results obtained using various transfer learning models with our face mask image dataset. These results were obtained for Case 1.

| Number | CNN | Accuracy (%) |
|---|---|---|
| 1 | ResNet101 [59] | 93.83 |
| 2 | DenseNet201 [58] | 93.54 |
| 3 | InceptionResNetv2 [73] | 92.72 |
| 4 | Inceptionv3 [73] | 92.43 |
| 5 | ResNet50 [59] | 92.34 |
| 6 | SqueezeNet [74] | 91.90 |
| 7 | MobileNetv2 [34] | 91.04 |
| 8 | GoogLeNet [75] | 90.89 |
| 9 | ResNet18 [59] | 90.70 |
| 10 | VGG19 [76] | 90.51 |
| 11 | AlexNet [34] | 89.93 |
| 12 | VGG16 [76] | 89.88 |
Table 5. The properties of the MaskedFace-Net dataset.

| Class | Number of Face Images |
|---|---|
| Correctly Masked Face Dataset (CMFD) | 67,049 |
| Incorrectly Masked Face Dataset (IMFD) | 66,734 |
| Total | 133,783 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

