Review

The Applications of Artificial Intelligence in Chest Imaging of COVID-19 Patients: A Literature Review

1 Artificial Intelligence Center, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
2 Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
3 Department of Radiology, IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, 20089 Milan, Italy
4 Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Fondazione Policlinico Universitario Agostino Gemelli—IRCCS, 00168 Rome, Italy
5 Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Via Roma 67, 56126 Pisa, Italy
6 Italian Society of Medical and Interventional Radiology, SIRM Foundation, Via della Signora 2, 20122 Milano, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally.
Diagnostics 2021, 11(8), 1317; https://doi.org/10.3390/diagnostics11081317
Submission received: 10 June 2021 / Revised: 2 July 2021 / Accepted: 9 July 2021 / Published: 22 July 2021
(This article belongs to the Special Issue Artificial Intelligence for COVID-19 Diagnosis)

Abstract:
Diagnostic imaging is regarded as fundamental in the clinical work-up of patients with suspected or confirmed COVID-19 infection. Recent progress has been made in diagnostic imaging with the integration of artificial intelligence (AI) and machine learning (ML) algorithms, leading to greater accuracy in exam interpretation and to the extraction of prognostic information useful in the decision-making process. Considering the ever-expanding volume of imaging data generated during this pandemic, COVID-19 has catalyzed a rapid expansion in the application of AI to combat disease. In this context, many recent studies have explored the role of AI in each of the presumed applications of chest imaging in COVID-19 infection, suggesting that implementing AI applications for chest imaging can be a great asset for fast and precise disease screening, identification and characterization. However, various biases must be overcome in the development of further ML-based algorithms to give them sufficient robustness and reproducibility for integration into clinical practice. In this literature review, we therefore focus on the applications of AI in chest imaging, in particular deep learning, radiomics and advanced imaging such as quantitative CT.

1. Introduction

Infection with Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which causes the disease named COVID-19 (coronavirus disease 2019), triggered a global healthcare and economic crisis. The first cases were observed in Wuhan, China, in December 2019, and the virus spread rapidly across the world, so that in early March 2020 the WHO classified COVID-19 as a pandemic.
Diagnostic imaging has a fundamental role in the clinical work-up of patients with suspected or confirmed COVID-19 infection, enabling disease identification, screening and stratification based on the severity of lung involvement, as well as prediction of the risk of complications and of the need for intensive care unit (ICU) admission. Imaging also helps in the differential diagnosis of COVID-19 from other lung infections and diseases. However, due to the rapid spread of the COVID-19 pandemic, many hospitals and primary and secondary care structures found themselves unprepared, with difficulty obtaining personal protective equipment (PPE) [1], making diagnostic imaging procedures difficult and risky to perform [2], also considering the difficulty of fully and promptly cleaning CT scanners between examinations.
In fact, imaging should be reserved for the following specific cases, as suggested in the WHO advice guide for the diagnosis and management of COVID-19 [3]:
  • For the diagnostic workup of COVID-19 when RT-PCR testing is not available; when RT-PCR testing is available but results are delayed; and when initial RT-PCR testing is negative but clinical suspicion of COVID-19 is high.
  • In addition to clinical and laboratory data, for patients with suspected or confirmed COVID-19 who are not currently hospitalized and have mild symptoms, in order to decide on hospital admission versus home discharge, or on regular ward versus intensive care unit admission.
  • In addition to clinical and laboratory data, for the therapeutic management of patients with suspected or confirmed COVID-19 who are currently hospitalized with moderate to severe symptoms.
Due to its high availability, portability and cost-effectiveness, chest X-ray (CXR) is the most widely used diagnostic imaging modality against COVID-19, contributing to the first assessment of patients with respiratory symptoms. Patients affected by COVID-19 can present with a pattern varying from normal lung to bilateral interstitial involvement, to opacification, based on the stage of the disease and the clinical presentation [4].
Chest computed tomography (CT) is usually performed in critically ill patients, in whom there may also be a need to rule out pulmonary thromboembolism, a potentially fatal complication of COVID-19 infection. CT is more accurate than CXR and is also used in cases of equivocal findings on radiographs: CT patterns are represented by peribronchial and peripheral ground-glass opacities (GGO), mostly basal and bilateral, with involvement of two or more lung lobes, increasing in severity with consolidation and/or a crazy-paving pattern as the disease advances into the middle and late stages. However, it is important to note that the CT patterns of COVID-19 pneumonia are not specific and overlap with those of many other infectious and non-infectious pneumonias [5,6,7].
Lung ultrasound (US) does not have a clear role in the diagnostic approach to a suspected or confirmed COVID-19 case. Due to its wide availability and portability, it can be of great use for bedside evaluation of subpleural consolidations, pneumothorax and alveolar damage, even though its diagnostic accuracy depends greatly on operator experience [8,9,10]. Recent progress has been made in diagnostic imaging with the integration of artificial intelligence (AI) into computer-aided detection (CAD) software [11], leading to greater accuracy in exam interpretation and to the extraction of prognostic information useful in the decision-making process [12,13,14,15].
Specifically, COVID-19 has catalyzed a rapid expansion in the application of AI to combat disease. As a result, previous authors have summarized the work performed and the discriminatory ability of AI in its various diagnostic imaging applications.
Ghaderzadeh et al., in their systematic review, analyzed papers published between 1 November 2019 and 20 July 2020 regarding the application of deep learning (DL) to chest X-ray and CT. They suggested that DL-based models achieve high accuracy in the detection and diagnosis of COVID-19 and that the application of DL reduces false-positive and false-negative errors compared to radiological examination performed by a radiologist [16].
Another review article, by Shi et al., focused on the role of AI in chest CT and CXR in patients affected by COVID-19. They gave an overview of the whole pipeline for the implementation of DL in chest imaging, from image acquisition and segmentation to diagnosis, also giving insights into follow-up and the public datasets available [17].
In this review, we explore the role of AI/ML in the diagnostic imaging of patients with COVID-19, including deep learning integration, radiomics features and quantitative CT imaging algorithms. We discuss its wide-ranging applications in the following domains:
  • the identification and screening of COVID-19 pneumonia;
  • the differential diagnosis between COVID-19 pneumonia and other types of infectious pneumonia;
  • the stratification and definition of severity and complications of COVID-19 pneumonia.

2. Search Strategy

Before setting up our search strategy, we aimed to answer the following questions:
(1) What are the main indications for COVID-19 imaging?
(2) What is the workflow followed in image elaboration for AI solutions?
(3) Does DL improve the diagnostic abilities of radiologists in COVID-19 patients?
(4) What are the other applications of AI in COVID-19 patients (apart from the identification of lesions)?
(5) Are there any limitations of AI in this field?
After defining the aforementioned research questions, we searched the PubMed database using the following keywords: “COVID-19,” “diagnosis,” “artificial intelligence,” “detection,” “chest x-ray,” “chest CT,” “deep learning,” “stratification,” “prognosis” and “differential diagnosis”; the related published studies were then extracted and reviewed. We set inclusion criteria to refine the selection of manuscripts based on our assessment of their relevance and novelty and on their being written in English.
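The keyword combination above can be expressed as a single boolean PubMed query. The grouping of terms below is our illustrative reconstruction, not the authors' verbatim search string:

```python
# Illustrative reconstruction of the boolean PubMed query; the exact
# grouping of terms is an assumption, not the verbatim search string.
topic_terms = ["COVID-19"]
method_terms = ["artificial intelligence", "deep learning"]
task_terms = ["diagnosis", "detection", "stratification",
              "prognosis", "differential diagnosis"]
modality_terms = ["chest x-ray", "chest CT"]

def or_group(terms):
    """Join quoted terms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND together one OR-group per concept (topic, method, task, modality).
query = " AND ".join(or_group(g) for g in
                     (topic_terms, method_terms, task_terms, modality_terms))
print(query)
```

Concept-based grouping like this keeps recall high (any task or modality keyword matches) while requiring every search to mention both COVID-19 and an AI method.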

3. Workflow of Images Segmentation, Annotation and Elaboration

Development of AI-based COVID-19 classification/segmentation models starts from training with various image sources, usually represented by normal and abnormal (COVID-19, non-COVID-19) chest images. Data collection is, therefore, considered mandatory.
The whole workflow of image annotation, segmentation, and elaboration is shown in Figure 1.
Patients’ data must be downloaded, queried, correctly de-identified and safely stored after ethical consent. The best approach to de-identification is pseudonymization: when DICOM images are pseudonymized, the information that can point to the identity of a subject is replaced by “pseudonyms” or identifiers [18].
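A minimal sketch of the pseudonymization idea, using only the Python standard library: a direct identifier is replaced by a stable, non-reversible pseudonym derived from a keyed hash. The field names mirror DICOM attributes, but this is a toy illustration, not a complete DICOM de-identification profile:

```python
import hashlib

SITE_SECRET = "replace-with-per-project-secret"  # assumption: a project-level salt kept off-site

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    digest = hashlib.sha256((SITE_SECRET + patient_id).encode()).hexdigest()
    return "SUBJ-" + digest[:10].upper()

# The same patient always maps to the same pseudonym, so multiple studies of
# one subject stay linked after de-identification.
record = {"PatientName": "DOE^JANE", "PatientID": "12345"}
deidentified = {
    "PatientName": pseudonymize(record["PatientID"]),
    "PatientID": pseudonymize(record["PatientID"]),
}
print(deidentified)
```

In practice the mapping (or the secret) must be stored securely and separately from the images, and all other identifying DICOM attributes (dates, accession numbers, burned-in annotations) need handling as well.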
Manual selection of similar images according to basic criteria (age, technique, imaging findings) is always performed by expert radiologists to obtain the best training dataset. Image segmentation is a fundamental part of image processing and analysis for the assessment of pathologic examinations. Segmentation is based on the delineation of regions of interest (ROIs), such as lung lobes, airways, and focal or diffuse pathologies in the images [19,20,21,22,23]. A robust training model needs sufficient labeled images, which are usually lacking in the case of COVID-19, mostly due to the time-consuming nature of labeling in a pandemic setting; in these cases, the radiologist can be asked to interact with the segmentation network to supervise the machine learning methods [24]. Appropriate segmentation may help in monitoring the progression of COVID-19 pneumonia and in assessing severity. AI models can be trained using available datasets or with the “transfer learning” method, making the most of already available models, which also avoids mixing training and test data [25]. Features obtained from different convolutional neural network models can be classified with a support vector machine (SVM) classifier [26]. After training and testing, one or more further sets of images can be used for external validation of the model.
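To make the ROI idea concrete, the sketch below marks voxels falling in a ground-glass-like attenuation window on a synthetic slice. The HU window is our illustrative assumption; real COVID-19 pipelines use trained networks, not fixed thresholds:

```python
import numpy as np

def segment_roi(ct_slice: np.ndarray, lower: float = -700, upper: float = -200) -> np.ndarray:
    """Toy ROI delineation: flag pixels whose attenuation (in HU) falls in a
    ground-glass-like window. Illustrative only; trained networks replace this."""
    return (ct_slice >= lower) & (ct_slice <= upper)

# Synthetic 2D "slice": air background (-1000 HU) with a 10x10 GGO-like patch (-500 HU).
ct = np.full((64, 64), -1000.0)
ct[20:30, 20:30] = -500.0

mask = segment_roi(ct)
affected_fraction = mask.mean()   # fraction of the slice inside the ROI
```

The resulting binary mask is exactly the kind of object a radiologist would correct interactively when labeled data are scarce, and its per-exam `affected_fraction` is the raw ingredient of the severity quantifications discussed later.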

4. Artificial Intelligence in Chest X-ray

Several studies have focused on the automatic classification of COVID-19 from CXR images [27,28,29,30,31,32,33,34,35], considering how useful this could be in emergency departments, urgent care, and resource-limited settings. Moreover, by matching CXR findings to clinical data, prognostic models can be developed to predict disease severity and to stratify patients on the basis of their risk of developing severe disease and/or complications.

4.1. AI in the Identification of COVID-19 Pneumonia at Chest X-ray

CXR can help identify signs of pneumonia, even in the case of a negative RT-PCR test: the sensitivity of CXR greatly depends on the stage of the lung infection and on the extent of the disease, as well as on the technical quality of the exam (usually performed at the bedside in critically ill patients), ranging from 50% to 84% [36,37,38]. Specificity is low, reported at 33% [36]. However, the COVID-19 pandemic kickstarted the worldwide development of AI-based models for the automatic detection of pneumonia signs on CXR images, which yielded great results: using automated machine learning algorithms and deep convolutional neural networks (DCNN), as well as deep transfer learning techniques, various authors reported COVID-19 detection results with a sensitivity ranging from 97.9% to 100%, a specificity between 95% and 98.8%, an accuracy ranging from 83.5% to 98%, and a precision of up to 97.95% [27,35,39,40,41,42].
Accuracy can be improved by up to 99.41% when using support vector machines (SVM), which are supervised learning methods based on statistical learning theory [43] that work by dividing the dataset into training and test subsets [44,45], and up to 100% when using twice transfer learning (also known as transfer learning in three steps) and output neuron keeping (keeping the output neurons that classify similar classes between the second and third steps of the twice transfer learning), which improves training speed or performance, particularly in the first phases of the training process [46]. Other approaches to COVID-19 pneumonia identification used several convolutional layers with filters applied to each layer [33], introduced stochastic pooling in DCNN [47], or used multiresolution approaches with improved results compared to deep learning methods [48,49].
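A minimal sketch of the SVM principle described above, assuming toy 2D "deep features" in place of CNN outputs: a linear SVM is trained by sub-gradient descent on the hinge loss, with a simple train/test split. This is our numpy-only stand-in, not any cited paper's implementation:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: minimize hinge loss + L2 penalty by sub-gradient
    descent. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:        # inside the margin: hinge is active
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                 # outside: only the regularizer acts
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# Toy "deep features": two well-separated clusters (COVID-19 = +1, other = -1).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
perm = rng.permutation(100)
X, y = X[perm], y[perm]
split = 80                                        # simple train/test split, as in the SVM workflow above
w, b = train_linear_svm(X[:split], y[:split])
accuracy = (predict(X[split:], w, b) == y[split:]).mean()
```

In the cited pipelines the role of `X` is played by feature vectors extracted from a pretrained CNN, and library implementations with kernel support replace this hand-rolled solver.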
Moreover, Sahlol et al. used an efficient hybrid classification approach combining a CNN with an improved swarm-based feature selection algorithm. This combination targets both high classification performance and reduced resource consumption and storage requirements. They also proposed a novel robust optimizer, the Fractional-order Marine Predators Algorithm (FO-MPA), to efficiently select from the huge feature vector produced by the CNN. They then tested and evaluated the proposed approach through extensive comparisons with several state-of-the-art feature selection algorithms, recent CNN architectures and relevant existing classification methods for COVID-19 images [50].
Table 1 provides a summary of the papers included in the review, focused on AI in the identification of COVID-19 pneumonia signs at CXR. Figure 2 shows the distribution of subjects included considering those studies where it was clearly stated.

4.2. AI in the First Assessment of COVID-19 Pneumonia at Chest X-ray

As CXR is often the first-line diagnostic imaging modality for a patient with suspected COVID-19 infection, even if less sensitive than lung CT, it plays a great role in the first assessment of the patient. Even though confirmation of COVID-19 infection should always come from RT-PCR tests performed on naso-pharyngeal swabs [51], these tests may not be readily available and may take time to give a result; therefore, a rapid CXR assessment of patients with respiratory symptoms should be performed, and AI can play an important role, especially when dealing with a large number of requests in the emergency setting [52]. Most literature studies use AI in CXR to distinguish between COVID-19 patients, patients with other pneumonias and healthy subjects [53,54,55]. Xia et al. described the use of a rapid and economic classifier for screening COVID-19 from influenza-A/B pneumonia which combined CXR (or CT-localizer scanogram) data with clinical features, with 91.5% sensitivity, 81.2% specificity and an AUC of 0.971 (95% CI 0.964–0.980) [56].
In Table 2, we provided a summary of the papers included in our review focused on AI in the screening of COVID-19 pneumonia at Chest X-ray. Figure 3 shows the distribution of subjects included considering those studies where it was clearly stated.

4.3. AI in the Stratification and Definition of Severity and Complications of COVID-19 Pneumonia at Chest X-ray

As diagnostic images in COVID-19 correlate with disease severity, AI can be used as a prognostic tool, helping to monitor disease evolution and course and to identify patients at risk of ICU admission [57,58]. However, there is no standardized method for reporting CXR findings in terms of disease severity. Li et al. used the pulmonary x-ray severity (PXS) score, a DL-based algorithm providing quantitative measures of COVID-19 severity on CXR, as an adjuvant tool for radiologists, who nonetheless always decided on the severity grading and the definitive radiological report; they noticed an improvement in the assessment of severity on a 4-point scale (normal/minimal, mild, moderate, severe) and in inter-reader agreement, with no need for radiologists to be trained in the use of the score [59,60]. Li et al. also found that the severity scores were significantly associated with intubation/death within 3 days of admission for CXRs rated moderate or severe [59]. Mushtaq et al. reported in their retrospective study an AI-powered severity score based on the percentage of pixels involved by opacity or consolidation in each lung at CXR; adjusted at multivariate analysis for demographics and comorbidities, a score ≥30 at the hospital admission CXR was an independent predictor of mortality and ICU admission for COVID-19 (p < 0.001), and was significantly linked with admission pO2/FiO2 levels [61]. Zhu et al. compared the evaluation by an AI algorithm with that of independent expert radiologists on CXRs of patients with suspected COVID-19 in terms of disease severity, using criteria based on the degree of lung opacity and the geographical extent of the opacity, finding a strong correlation between the two severity scores [62].
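The pixel-percentage score described for Mushtaq et al. can be sketched directly: given a lung mask and an opacity/consolidation mask, the score is the percentage of lung pixels flagged as abnormal. The synthetic masks and the exact mask-combination logic are our simplified reading of the method:

```python
import numpy as np

def lung_severity_score(opacity_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Percentage of lung pixels involved by opacity/consolidation
    (simplified reading of the AI-powered CXR severity score)."""
    return 100.0 * (opacity_mask & lung_mask).sum() / lung_mask.sum()

# Synthetic masks: an 80x80 lung field, 35% of it opacified.
lung = np.zeros((100, 100), dtype=bool)
lung[10:90, 10:90] = True                 # 6400 lung pixels
opacity = np.zeros_like(lung)
opacity[10:45, 10:74] = True              # 35 x 64 = 2240 pixels, all inside the lung

score = lung_severity_score(opacity, lung)   # 2240 / 6400 -> 35.0
high_risk = score >= 30                      # admission threshold reported as a mortality/ICU predictor
```

In the real pipeline both masks come from the segmentation network rather than hand-drawn regions; the thresholding of the resulting percentage is where the prognostic claim (score ≥30 predicting mortality/ICU admission) is applied.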
Table 3 provides a summary of the papers included in our review focused on AI in the stratification and definition of severity and complications of COVID-19 pneumonia at CXR. Figure 4 shows the distribution of subjects included considering those studies where it was clearly stated.

4.4. AI in the Differential Diagnosis of COVID-19 Pneumonia from Other Pneumonia at Chest X-ray

Various authors have also investigated the effectiveness of supervised AI learning models in aiding medical professionals in the differential diagnosis between COVID-19 pneumonia and other lung diseases, in particular non-COVID-19 viral pneumonia, with a reported accuracy of up to 87% [33,39,41,42,63,64]. Jin et al. proposed a three-step hybrid model, incorporating a feature extractor, a feature selector and an SVM classifier, reporting an overall accuracy of 98.6% with a remarkable reduction in training time and training set size [65].
However, the differential diagnosis is hampered by the nonspecific appearance of COVID-19 pneumonia, similar to that of other viral and non-viral interstitial diseases. AI models should be adequately trained to achieve state-of-the-art diagnostic efficacy in external validation and in the real-life radiological workflow: CXRs obtained in different views (postero-anterior (PA), latero-lateral, as well as bedside views) must be differentiated, and the same goes for age groups, distinguishing pediatric patients from adults. Some authors chose to train models only on PA views, as this is the most common view used in the emergency department, even though bedside CXRs are becoming more and more important in first diagnosis and in monitoring critically ill patients [66,67].
AI could eventually assist diagnostic radiology in screening, diagnosing and grading CXRs autonomously, even though there are serious concerns about the potential risks of such a scenario [68].
Table 4 provides a summary of the papers included in the review focused on AI in the differential diagnosis of COVID-19 pneumonia from other pneumonia at Chest X-ray.

5. Artificial Intelligence in Chest CT

Machine learning approaches applied to CT images in COVID-19 pneumonia show great potential for improving diagnostic accuracy as well as for predicting patient outcomes, and many studies have focused on this topic.
Indeed, AI takes advantage of the large quantity of imaging data that can be used to train algorithms and, if effective, could revolutionize the identification and triage of patients with suspected COVID-19.

5.1. AI in the Identification of COVID-19 Pneumonia and Its Complications at Chest CT

From the beginning of the COVID-19 pandemic, the use of AI for the detection of radiological signs of pneumonia on CT imaging has been investigated, including in cases of false-negative RT-PCR results [69] and of increased radiologist workload [70].
Considering the central role of imaging in the management of infected patients, multiple deep-learning algorithms have been developed to meet the increased needs, some within just 10 days [71]. A pilot study by Yang et al., performed in the first two months of 2020, evaluated the performance of a DenseNet algorithm model, an improved CNN, for COVID-19 detection on HRCT. It yielded an AUC of 0.98 and a sensitivity of 97%, but its accuracy of 92% and specificity of 87% were slightly lower than those of an experienced radiologist. The authors concluded that their DL model had human-level performance and saved time thanks to a rapid diagnosis in about 30 s versus the 5–10 min needed by a radiologist. A limitation of this study was the restricted number of included patients (146 with COVID-19 and 149 controls), further divided into training, validation and test sets [72].
To overcome this limit, multiple studies utilized datasets composed of thousands of patients derived from public sources or collected in multicenter trials. For instance, Harmon et al. analyzed a heterogeneous multinational CT dataset of 2617 patients, overcoming limited applicability to different populations, demographics or geographies and maximizing the potential for generalizability. The 922 included cases of COVID-19 were from China, Italy and Japan, while the balanced control population was identified either from 2 US institutions or from a publicly available dataset (LIDC). Their image classification model used both hybrid 3D and full 3D models based on a DenseNet-121 architecture, and they achieved an AUC of 0.949, resulting in 90.8% accuracy for COVID-19 identification on chest CT [73].
In addition to public datasets, previously validated AI algorithms are available for further confirmation of their performance or as assistant tools for clinicians and radiologists [74]. In this regard, Chen et al. created a cloud-based open-access AI platform to improve the diagnosis of COVID-19 pneumonia. They developed a UNet++-based model with an accuracy of 96% for COVID-19 detection on HRCT in multiple testing datasets, both internal (retrospective and prospective) and external. Furthermore, such a deep-learning-based model has the potential to reduce the number of missed diagnoses, especially in early phases, because the lung infection foci can be mild and require thin-slice (0.625 mm) scanning to be observed [75].
Other authors focused not only on the pneumonia detection on a CT scan, but also on a quantitative assessment [74]. In fact, Zhang et al. analyzed images from 2460 patients using the uAI Intelligent Assistant Analysis System (a modified 3D CNN and a combined V-Net with bottle-neck structures) to segment anatomical lung structures and to accurately localize infected regions, according to the specific lobes and segments. Their findings were consistent with those of previous studies [76] that demonstrate a typical bilateral involvement, mainly in the dorsal segments, with GGOs as the most common CT feature [77].
These results have also been confirmed in other studies on the role of quantitative CT [78]. Du et al. evaluated pre-discharge CT scans in asymptomatic patients with negative RT-PCR using an AI-assisted system (InferRead CT Pneumonia software). Their quantitative image analysis found fibrosis, characterized by heterogeneous density and rigid reticulation, to be the second most common manifestation after GGOs [79].
To ease the evaluation of COVID-19 patients according to chest CT findings, the standardized CO-RADS score has been introduced to grade the level of suspicion from very low (1) up to very high (5), providing higher performance in patients with moderate and severe symptoms (average AUC 0.91 for predicting RT-PCR outcome and 0.95 for clinical diagnosis) and higher interobserver agreement for categories 1 and 5 [80]. Lessmann et al. aimed to develop a CO-RADS AI system to obtain an automated assessment of the suspicion value. CO-RADS AI included three deep-learning algorithms based on a U-Net architecture that automatically performed lobe and lesion segmentation, prediction of a CT severity score according to the percentage of affected parenchymal tissue per lobe and, finally, assignment of the CO-RADS value. The key result of this study was a high diagnostic performance in the identification of COVID-19 patients, with an AUC of 0.95 in the internal test set and 0.88 in the external cohort [81]. However, its use is controversial because it does not take clinical and laboratory findings into consideration when building a diagnosis of COVID-19, even when AI-assisted.
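The per-lobe CT severity scoring mentioned above can be sketched as follows. The 0–5 cut-offs per lobe are the commonly used chest-CT severity scheme; the exact thresholds inside CO-RADS AI are not given in the source, so treat them as an illustrative assumption:

```python
def lobe_score(affected_fraction: float) -> int:
    """Map a lobe's fraction of affected parenchyma to a 0-5 score.
    Cut-offs follow the commonly used chest-CT severity scheme (assumed)."""
    pct = 100 * affected_fraction
    if pct == 0:
        return 0
    if pct < 5:
        return 1
    if pct <= 25:
        return 2
    if pct <= 50:
        return 3
    if pct <= 75:
        return 4
    return 5

def ct_severity_score(per_lobe_fractions) -> int:
    """Total CT severity score: sum of the five lobar scores (range 0-25)."""
    return sum(lobe_score(f) for f in per_lobe_fractions)

# Five lobes with 0%, 3%, 20%, 60% and 80% involvement -> 0+1+2+4+5 = 12.
total = ct_severity_score([0.0, 0.03, 0.20, 0.60, 0.80])
```

In CO-RADS AI the `affected_fraction` inputs come from the U-Net lobe and lesion segmentations; the severity score then feeds, together with the lesion pattern, into the final CO-RADS category assignment.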
In fact, a study by Liu et al. demonstrated that a combined clinical-radiological model outperformed both CO-RADS and a clinical model in the diagnosis of COVID-19. Their preliminary study investigated the performance of a combined radiomics model that, after multivariate logistic regression analysis, included 5 clinical features and a radiomic signature: age, lesion distribution (central or peripheral), neutrophil ratio, lymphocyte count, CT score and mean Radscore. The latter was calculated from 8 radiomic features, selected by applying an mRMR algorithm and a LASSO logistic regression algorithm. The result was an open-source radiomics model with an AUC of 0.98, a sensitivity of 0.94 and a specificity of 0.93 [82]. Similar results were achieved in another study, which confirmed a mixed model, presented as a nomogram, as the best predictor of COVID-19 with an AUC of 0.955 (versus an AUC of 0.626 for the clinical model). It included both CT characteristics of the lesions (distribution, maximum lesion range, involvement of lymph nodes and pleural effusions) and a RadScore based on a signature of 3 features selected by LASSO regression [83].
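A Radscore of this kind is simply the linear predictor of the fitted LASSO logistic model: an intercept plus a weighted sum of the selected radiomic features, which a logistic link turns into a probability (the quantity a nomogram displays). The feature count and coefficients below are hypothetical placeholders, not the values from Liu et al.:

```python
import numpy as np

# Hypothetical coefficients for three LASSO-selected radiomic features;
# the real signatures used 8 (Liu et al.) or 3 features with unreported weights.
coefficients = np.array([0.8, -0.5, 1.2])
intercept = -0.3

def radscore(features: np.ndarray) -> float:
    """Radscore = intercept + weighted sum of the selected radiomic features
    (the linear predictor of the LASSO logistic model)."""
    return float(intercept + features @ coefficients)

def covid_probability(features: np.ndarray) -> float:
    """Logistic link maps the score to a probability, as read off a nomogram."""
    return 1.0 / (1.0 + np.exp(-radscore(features)))

x = np.array([1.0, 0.5, 0.2])
p = covid_probability(x)   # radscore = -0.3 + 0.8 - 0.25 + 0.24 = 0.49
```

LASSO's L1 penalty is what shrinks most candidate feature coefficients exactly to zero, which is why the published signatures contain only a handful of the hundreds of extracted radiomic features.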
Another use of radiomic models has been described in the non-invasive monitoring of ARDS, a life-threatening COVID-19 complication. Chen et al. compared the performance of traditional quantitative analysis and radiomics analysis of CT images. While the former quantified the infected regions through the calculation of the volume and percentage of infection, the latter included 30 radiomic features selected by regression analysis and combined into a risk score. Results showed that the radiomics model was the most promising because of its highest accuracy and specificity, despite a similar AUC of 0.94. According to the authors, sensitivity is more important than specificity in ARDS screening because of the high risk related to delayed oxygen treatment in the event of false-negative results [84].
Voulodimos et al. adopted a semantic segmentation approach, which can be implemented as a two-step process: (i) feature extraction over an image patch and (ii) a training process using annotated datasets. With this method, each pixel is described by feature values extracted locally over a typically small area, denoted as a “patch”. Deep learning approaches perform both steps for a given set of data [85].
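Step (i) of this two-step process can be sketched with plain numpy: tile the image into patches and describe each patch with a few local feature values. The hand-crafted features (mean, standard deviation, range) are our illustrative stand-ins for what a deep network would learn:

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 8):
    """Tile the image into non-overlapping square patches
    (step i of the patch-based semantic segmentation process)."""
    h, w = image.shape
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]

def patch_features(p: np.ndarray) -> np.ndarray:
    """Hand-crafted local features for one patch (mean, std, range);
    a deep network learns such descriptors instead."""
    return np.array([p.mean(), p.std(), p.max() - p.min()])

# Synthetic 32x32 image -> (32/8)^2 = 16 patches, each a 3-feature vector.
img = np.arange(32 * 32, dtype=float).reshape(32, 32)
feats = np.array([patch_features(p) for p in extract_patches(img)])
```

In step (ii), each feature vector would be paired with the annotation of its patch's pixels and fed to a classifier, so that at inference time every pixel inherits the label predicted for its surrounding patch.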
The transferability of segmentation in COVID-19 CT has been investigated by Wang et al. They presented a set of experiments to better understand how different non-COVID-19 lung lesions influence the performance of COVID-19 infection segmentation, and how they transfer under different transfer learning strategies. They concluded that pre-training on non-COVID-19 lung lesion datasets brings clear benefits when public labeled COVID-19 datasets are inadequate to train a robust deep learning model [86].
Saood et al. proposed a new fully automated deep learning framework for rapid quantification and differentiation of lung lesions in COVID-19 pneumonia on both contrast and non-contrast CT images using convolutional Long Short-Term Memory (ConvLSTM) networks. They showed strong agreement between expert manual and automatic segmentation of lung lesions, with excellent correlations of 0.978 and 0.981 for ground-glass opacity and high-opacity volumes, respectively [87].
Akram et al. presented a novel entropy-based fitness optimizer function, which selects the chromosomes carrying maximum information. Only the chromosome with the maximum fitness value is selected, yielding a sub-optimal solution in the minimum number of iterations. To conserve maximum information and to eliminate redundant features at the initial level, a preliminary selection process is run on each feature set using the entropy-controlled fitness optimizer. To exploit the complementary strengths of all features, a feature fusion approach is then used which combines all the competing features into a resultant feature vector. Previous machine learning methods use either sole or hybrid approaches for feature extraction; although both have advantages and drawbacks, the fused feature space has a greater capacity to retain discriminative features, and for this flexibility hybrid approaches have gained much popularity among researchers. However, selecting the most appropriate feature extraction technique is a sensitive task which needs to be handled carefully, as it may otherwise result in feature redundancy and, therefore, increased correlation. In this work, they utilized four different techniques belonging to two categories, statistical and texture; two feature families, color and shape, were not considered because of their limited impact and significance in this application. Using the proposed framework, the accuracy achieved with the Naive Bayes classifier was 92.6%, whereas the other classifiers (EBT, L-SVM and F-KNN) achieved average accuracies of 92.2%, 92.1% and 92.0%, respectively. The sensitivity and specificity values show that the proposed framework successfully achieved high true-positive and true-negative rates [88].
Mukherjee et al. developed a CNN-tailored DNN for COVID-19 diagnosis integrating both CT and CXR images. Their proposed DNN, based on a mixed database of integrated modalities, reached an AUC of 0.9808, higher than those of other existing DNNs (Inception, MobileNet and ResNet). Moreover, the performance scores using separate datasets were higher for CXRs, with an AUC of 0.9908 vs. 0.9731 for CT scans [89].
Table 5 provides a summary of the papers included in the review focused on AI in the diagnosis of COVID-19 pneumonia at Chest CT. Figure 5 shows the distribution of subjects included considering those studies where it was clearly stated.

5.2. AI in the Screening of COVID-19 Pneumonia at Chest CT

The application of AI to CT images for the immediate triage of COVID-19 patients may be of assistance, given that the results of RT-PCR, the definitive viral test, can be delayed.
Javor et al. used an open-source dataset of 6868 CT images to train their CNN model ResNet50, which achieved high accuracy with an AUC of 0.956, higher than that of radiologists. They highlighted the value of the ML model in patient triage, since it can identify rule-in and rule-out thresholds for COVID-19 diagnosis rather than the dichotomous decision of radiologists. In case of a high level of suspicion, the patient should be isolated until the diagnosis is confirmed or rejected by an RT-PCR test [90]. However, CT may have a low negative predictive value, especially in the early phases of the disease. A joint AI algorithm that integrated chest CT findings and clinical history enabled a rapid diagnosis of COVID-19 with an AUC of 0.98 and might have a fundamental role in triage, allowing rapid isolation of infected people and avoiding delayed treatments. The evaluated model first used a CNN to learn imaging characteristics on initial CT scans and then an MLP to classify patients according to clinical information (sex, age, exposure history, clinical symptoms such as fever and cough, and laboratory findings such as WBCs). Finally, a neural network model combined the radiological and clinical data to predict COVID-19 status [91].
Another study performed in an emergency department confirmed the good performance of a mixed predictive ML model in triage. It was based on the CO-RADS score from chest CT plus additional data: laboratory findings (ferritin, leukocytes, CK), diarrhea and the number of days from disease onset. The added value of the prediction model compared with CT alone was an increased AUC (0.953 vs. 0.930) and accuracy (93.1% vs. 90.4%), probably driven by specific laboratory anomalies. Nevertheless, the authors noted that 9% of the included patients with positive RT-PCR were false negatives according to the prediction model, so the nasopharyngeal swab should remain the primary standard test [92].
In Table 6, we provided a summary of the papers included in our review focused on AI in the screening of COVID-19 pneumonia at Chest CT. Figure 6 shows the distribution of subjects included considering those studies where it was clearly stated.

5.3. AI in the Stratification and Definition of Severity and Complications of COVID-19 Pneumonia at Chest CT

Different studies have already demonstrated the correlation between conventional CT scores and the prognosis of COVID-19 patients, using semi-quantitative methods based on visual scores [93,94,95]. To avoid subjective and time-consuming evaluations, multiple AI models have been developed and tested to accurately stratify patients into severity stages and improve clinical decision making. According to the American Thoracic Society (ATS), the major criteria for the definition of severe pneumonia are respiratory failure requiring mechanical ventilation (MV) or septic shock treated with vasopressors; minor criteria include increased respiratory rate (>30/min), P/F ratio < 250 or hypotension requiring fluid resuscitation [96]. These are, therefore, the most common endpoints used to identify potential high-risk patients.
According to Chatzitofis et al., a volume-of-interest (VoI) aware DNN could assess patients' conditions and prognosis even without laboratory test results, as is often the case shortly after ED admission. They introduced a two-stage data-driven approach to classify patients into three classes (moderate, severe and extreme) according to their risk of being discharged, hospitalized or admitted to the ICU, respectively. The proposed algorithm was trained on the COVID-19_CHDSET dataset, composed of CT images from Milan, an extensively affected area during the first months of the COVID-19 pandemic. The DenseNet201-VoI model reached an AUC of 0.97, 0.92 and 1.00 for the three groups, respectively, with an accuracy of 88.88%, specificity of 94.73% and sensitivity of 89.77% [93]. Xiao et al. developed and tested a DL-based model using multiple instance learning and a CNN (ResNet34) on CT imaging. It showed excellent performance in predicting disease severity (AUC of 0.892), which is, in turn, positively correlated with the area and density of lung lesions. Moreover, the clinical significance of the model lies in the possibility of identifying mild disease in early stages that could progress to a more severe form, characterized by a lower survival probability [97].
The possibility of rapid deterioration of mild cases has been further analyzed by Zhu et al., whose joint regression and classification model was able to predict the conversion time from a mild to a severe case in a unified framework, with a sensitivity of 76.97% and an average conversion time of 4.59 days [98]. Another fully automated DL model succeeded in the diagnostic and prognostic analysis of COVID-19 after training on a large dataset of 4106 patients. The authors defined the length of hospital stay as the prognostic end event, as longer hospitalization may imply worse prognosis and longer recovery time. COVID-19Net showed good diagnostic and prognostic performance in the stratification of low- and high-risk patients, with significant differences in days of hospital stay [99].
A DL prognostic model developed by Meng et al. predicted the probability of a patient's death within two weeks. This 3D-CNN, De-COVID19-Net, outperformed clinical, radiomics-based and pure CNN models (without incorporation of the clinical model), with an AUC of 0.943 in the identification of high-risk patients, i.e., those who died within 14 days and required more intensive care [100].
Specific laboratory measurements can be combined with CT features to create AI-based prediction models for the stratification of severe patients, as demonstrated by Li et al. (AUROC of 0.93). They segmented CT images with a deep CNN to extract essential features and selected the 12 laboratory tests showing the largest change between the two groups of patients, mainly D-dimer, LDH and lymphocytes as predictors of higher mortality risk. Moreover, lymphocytes, neutrophils, D-dimer and the platelet-large cell ratio demonstrated a significant correlation with selected CT features [101].
An additional DL model combined an artificial neural network (ANN) for clinical and laboratory data with a CNN for 3D CT imaging data to classify patients as at high risk of severe progression (event) or low risk (event-free). The considered events included respiratory deterioration (high-flow nasal cannula, MV, ICU admission), septic shock, renal failure or death. In the correlation heatmap of clinical and laboratory features, CRP and WBC had a strong positive correlation with the endpoint and age was a significant related risk factor, while oxygen saturation and female sex were negatively correlated with the endpoint. This mixed ACNN model obtained high performance, with an AUC of 0.916, accuracy of 93.9% and specificity of 96.9% [102].
One approach to estimating the prognostic utility of CT findings is quantitative image assessment, using computer-aided software for segmentation and quantification of lung volumes according to different Hounsfield unit (HU) ranges. Hu et al. performed a pilot study in the first two months of 2020 to demonstrate the validity of quantitative CT in comparing findings between mild and severe patients. Using a total-lung and a per-lobe severity score to estimate pulmonary involvement and a 2D UNet model for automatic lesion segmentation, they found a prevalence of consolidative and progressive lesions (crazy paving and "white lung"), mainly in the lower lobes, in the severe group. However, this cross-sectional study lacked analysis of follow-up images, even though the analysis of dynamic CT images could be useful for prognostic purposes [103].
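The core of such quantitative assessment is simple voxel counting over HU ranges within a segmented lung mask. A minimal sketch is given below; the HU thresholds are illustrative, loosely based on the ranges reported in the studies discussed in this section (individual studies differ in the exact cut-offs), and the toy volume stands in for a real segmented CT.

```python
import numpy as np

# Illustrative HU ranges; the cited studies use varying thresholds.
HU_RANGES = {
    "normal":        (-900, -701),
    "ground_glass":  (-700, -501),
    "consolidation": (-500,  100),
}

def lung_composition(ct_hu, lung_mask):
    """Percentage of the segmented lung volume falling in each HU range."""
    voxels = ct_hu[lung_mask]
    total = voxels.size
    return {name: 100.0 * np.count_nonzero((voxels >= lo) & (voxels <= hi)) / total
            for name, (lo, hi) in HU_RANGES.items()}

# Toy volume: half normally aerated lung (-800 HU), half consolidation (-200 HU)
ct = np.full((4, 10, 10), -800, dtype=np.int16)
ct[:, :, 5:] = -200
mask = np.ones_like(ct, dtype=bool)
pct = lung_composition(ct, mask)   # {'normal': 50.0, 'ground_glass': 0.0, 'consolidation': 50.0}
```

In practice the mask comes from an automatic segmentation model (e.g. a U-Net), and the resulting percentages feed severity scores such as the %CL or SQNLP metrics described below.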
Accordingly, a Chinese retrospective study quantitatively evaluated lung involvement on serial CT scans with a deep-learning model, tracking the change in the percentage of lung opacification (QTC-PLO) as a single parameter. The authors divided the 126 included patients into four categories (mild, moderate, severe and critical) according to clinical features at baseline; inclusion required at least two CT scans (median interval between baseline and first follow-up: 4 days) and, eventually, a second follow-up CT. The results showed a significant difference in QTC-PLO among clinical groups at baseline (0%, 2.2%, 28.9% and 49.6%, respectively), with a sustained progression of imaging findings at the first follow-up CT (median: 3.6% vs. 8.7%) and a plateau on the second follow-up CT [19].
Similarly, Li et al. developed a fully automated AI system using a U-Net architecture to assess disease severity and progression in severe and non-severe patients, considering the portion of infection (POI) and the average infection HU (iHU) in longitudinal CT scans. The two imaging biomarkers reached an AUC of 0.97 for POI and 0.69 for iHU, with a significant difference between the two severity states; the authors concluded that only POI, given its high specificity and sensitivity, can be considered an effective indicator of COVID-19 severity, whereas iHU could be affected by respiratory status and reconstruction slice thickness [104].
Zhang et al. analyzed temporal changes of quantitative lung lesions on CT scans from the onset of symptoms in common and severe groups, according to the percentages of GGO volume (PGV), consolidation volume (PCV) and total lesion volume (PTV). Their AI system combined a CNN and thresholding methods for lung segmentation and detection of patchy shadows, followed by automatic calculation of the quantitative features. Severe patients exhibited greater PGV, PCV and PTV in all 5 stages of the disease (0–30 days), a longer time to peak (17 vs. 12 days), a higher peak percentage (22–25% vs. 2.5–5%) and a longer recovery time [105].
Similar results were reported by Pan et al., who observed an earlier peak in the moderate group than in the severe group (18 vs. 23 days, respectively, from symptom onset), with faster lesion absorption. Moreover, their DL model, COVID-Lesion Net, showed good correlation with conventional CT scores (Spearman's correlation coefficient 0.920) [106].
Other authors focused on the correlation of quantitative CT data with clinical features or laboratory values. Cheng et al. employed the uAI Discover-2019nCoV software to quantify images, reporting a positive correlation between quantitative parameters (GGOs, consolidations and total lesions) and CRP and ESR, and a negative correlation with lymphocyte count. The proportion of total lesions was also positively correlated with LDH [107].
An Italian retrospective study found similar correlations, extending the results to parameters of respiratory function (PaO2, pH, HCO3−, P/F). In fact, all 108 included patients required supplemental oxygen with NIV, CPAP or invasive ventilation via endotracheal tube. Their semi-automatic software showed a strong correlation between the analyzed CT volumes and the P/F ratio (negative, as an expression of hypoxia) or hypercapnia [108]. Moreover, the Dense-UNet used by Mergen et al. further confirmed the previously described positive correlations with CRP and leukocytes. The authors underlined the negative correlation of the percentage of opacity (PO) and the percentage of high opacity (PHO, consolidations) with SO2, an additional demonstration that patients requiring supplemental oxygen have a higher proportion of involved lung [109].
In this regard, multiple studies have examined the utility of quantitative CT analysis for the prediction of respiratory deterioration and consequent ICU admission. A single-center retrospective study by Lanza et al. explored the role of quantitative computer-aided CT analysis as an outcome predictor. The compromised lung volume (%CL), the sum of poorly aerated and non-aerated parenchyma (−500 to 100 HU), could predict the need for oxygenation support, both low- and high-flow (%CL 6–23%, AUROC 0.83), and intubation (%CL > 23%, AUROC 0.86); moreover, %CL showed a negative correlation with the P/F ratio, a sign of deteriorating respiratory function, and was predictive of in-hospital mortality (HR 1.02) [110].
Similar results were obtained in a retrospective study that confirmed an AI-calculated percentage of total opacity >51% as the main predictor of MV (AUC 0.87) and of all-cause mortality during hospitalization (AUC 0.88). Moreover, the authors proposed a prognostic model that included biochemical variables (LDH level for mortality and troponin I for MV) and imaging data (total opacity for mortality and CT severity score for MV), with good risk classification of hospitalized patients [111]. A multiparametric model of imaging-derived features (affected lung volume) and inflammatory laboratory parameters (CRP and IL-6) was tested in a German cohort to estimate the need for ICU treatment. The multivariate random forest model showed an AUC of 0.79, sensitivity of 0.72, specificity of 0.86 and accuracy of 0.80; involvement of the upper lung lobes could be considered an important parameter in the risk estimation (mean importance 0.184) [112].
Liu et al. showed that quantitative CT evaluation of radiographic changes in the first 4 days after admission had excellent predictive capability (AUC 0.93) for severe disease, outperforming APACHE-II, NLR and D-dimer; their AI algorithms calculated the percentages of GGOs (PGV), consolidation (PCV) and semi-consolidation (PSV) [113]. A further retrospective study assessed the feasibility of automated quantification of GGOs (−700 to −501 HU), one of the most significant lesions of COVID-19 pneumonia, and of normally aerated parenchyma (−900 to −701 HU). The authors affirmed that GGOs could be an objective biomarker of lung injury, given a statistically significant correlation between the measured volumes and a respiratory severity scale ranging from no hospitalization with inability to resume normal activity (1) to death (7) [114]. On the other hand, a software-based quantitative CT assessment of the normal lung parenchyma percentage (SQNLP) proved to accurately predict ICU admission if <81.1% (sensitivity 86.5% and specificity 86.7%). Furthermore, an SQNLP <82.45% can indicate severe pneumonia (sensitivity 83.1%, specificity 84.2%), characterized by an increased presence of the crazy-paving pattern (specificity 97.2%) [115]. Wang et al. focused on the risk of ARDS, the primary cause of ventilation in COVID-19 patients. Their retrospective study used a VB-Net model to segment lesions, finding that a high proportion of lesion density in the range −549 to −450 HU conferred a high risk of ARDS; in fact, total volume and average density of lung lesions were not statistically related to ARDS [116].
Radiomics analysis represents an additional approach to predicting the prognostic outcome of COVID-19 patients. A first attempt quantitatively analyzed pulmonary lesions, dividing them into mild (Grade I) or moderate/severe (Grade II). After feature preselection with a LASSO algorithm, a radiomic signature was built with 9 features and achieved an AUC of 0.87 in the test set. The grading has implications for subsequent treatment strategies, because mild lesions usually need only supportive treatment, while more severe ones need symptomatic treatment, up to invasive ventilation [117].
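LASSO preselection, recurring in the radiomics studies of this section, shrinks uninformative feature coefficients to exactly zero, leaving a sparse signature. The sketch below is an illustrative pure-NumPy solver (iterative soft-thresholding, ISTA) run on synthetic data; the data sizes, penalty strength and iteration count are assumptions, and the cited studies used their own implementations.

```python
import numpy as np

def lasso_ista(X, y, alpha=0.05, n_iter=2000):
    """Minimal LASSO solver via iterative soft-thresholding (ISTA)."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n       # gradient of (1/2n)||Xw - y||^2
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * alpha, 0.0)  # soft-threshold
    return w

# Synthetic "radiomic" design: 120 patients, 30 candidate features,
# only 4 of which truly drive the outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 30))
true_w = np.zeros(30)
true_w[:4] = [2.0, -1.5, 1.0, 0.8]
y = X @ true_w + 0.1 * rng.normal(size=120)
w = lasso_ista(X, y)
signature = np.flatnonzero(np.abs(w) > 1e-3)   # indices of the selected features
```

The surviving indices play the role of a radiomic signature, which the cited studies then combine into a RadScore or feed to a classifier, optionally alongside clinical variables.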
Similarly, a tested radiomic model was able to predict not only the extent of pulmonary opacities (AUC 0.99) but also the type of lesions (AUC 0.77). Here, skewness and small-area low gray-level emphasis were the best indicators of GGOs, given that the category of pulmonary opacities plays an important role in pneumonia severity in addition to the volume of affected parenchyma [118].
Fu et al. performed a retrospective study in a cohort of patients divided into stable and progressive groups according to clinical manifestations, laboratory tests and CT imaging findings (a statistically significant difference in the number of lesions). They tested the discriminatory capacity of a radiomic signature of 7 features, obtained after application of mRMR and LASSO algorithms, with significant differences in the RadScore between the 2 groups. Moreover, cough and abnormal CRP values could improve the detection of patients in the progressive group [119].
Indeed, other studies have reported an improved performance of a radiomic nomogram in prognosis prediction after the integration of clinical factors. One example is the model described in the retrospective analysis by Chen et al., composed of a RadScore of 15 features integrated with clinical information (age, gender, neutrophil count, percentage of NK cells and CD3) [120].
Wu et al. demonstrated that the integration of a radiomic signature with clinical risk factors (age, sex, type on admission, comorbidities) is most valuable in the early phases of COVID-19, accurately predicting poor outcome (death, MV, ICU admission) with an AUC of 0.862 (vs. 0.816 for the RadScore alone) [121].
A peculiar merged model, based on 6 significant radiomic features and a DL model based on 3D-ResNet-10, was analyzed to distinguish severe and critical cases of COVID-19. In the test cohort, the merged model yielded an AUC of 0.861, compared to 0.838 and 0.787 for the single radiomic and DL models, respectively, demonstrating the complementarity of the two types of features [122].
A Chinese retrospective multicenter study showed accuracy in predicting hospital stay in COVID-19 patients as a surrogate of prognosis. The authors determined 10 days as the optimal cut-off value, classifying patients into short-term (<10 days) and long-term (>10 days) hospital stay. Their radiomic models of 6 features were based on logistic regression (LR) and random forest (RF) and reached an AUC of 0.97 and 0.92, respectively [21].
Differently from the previous studies, which analyzed the focus of pneumonia for patient stratification, Tan et al. tested their automatic radiomics ML model on the non-focus lung areas of the first CT scan of COVID-19 patients, arguing that initial areas of interstitial inflammation can be difficult to distinguish by eye in early CT images. The authors included the first chest CT of 219 patients with moderate and severe symptoms, from which they extracted image texture features to construct classification models. The proposed model showed good prediction of COVID-19 pneumonia and its different clinical types based on differences in the non-focus areas, with an AUC > 0.95 [21].
Moreover, a radiomic model combining CT features and clinical data has been tested for its ability to predict RT-PCR negativity, in order to identify the right retesting time and thereby avoid unnecessary repeated tests and prolonged hospital stays. Cai et al. included 203 patients in their retrospective study, divided into RT-PCR-negative and RT-PCR-positive groups according to the results of 3 RT-PCR tests performed 3–5 days after symptom disappearance. For each patient, 20 different features (clinical, quantitative and radiomic) were collected and compared between the two groups. The authors concluded that the RT-PCR-negative group had a longer interval from symptom onset to CT scan (23 vs. 16 days) and that a radiomic model of 9 features performed well in differentiating the RT-PCR-negative group, with an AUC of 0.812 [123].
Among the risk factors for severe COVID-19, comorbidities have been associated with increased risk of progression, probably due to a persistent pro-inflammatory state and attenuation of the immune response [124].
Lu et al. analyzed the effect of diabetes mellitus (DM) on chest CT features and COVID-19 severity in 3 groups of patients divided according to their clinical history of DM and HbA1c level. Their CT images were quantitatively evaluated, focusing on the percentage of total lung lesion volume (PLV), percentage of ground-glass opacity volume (PGV) and percentage of consolidation volume (PCV) as parameters of pneumonia severity.
They demonstrated a positive correlation between blood glucose level at admission (also measured as fasting blood glucose) and pulmonary involvement (higher PLV, PGV and PCV), which, in turn, predicted poor clinical outcomes (AUC of 0.796, 0.783 and 0.816, respectively) [125]. Another retrospective study quantified pneumonia lesions on CT images through a UNet neural network to assess the influence of comorbidities on COVID-19 patients.
Differently from the previous study, Zhang et al. included hypertension (the most common comorbidity), COPD and cerebrovascular disease in addition to DM, already described as major risk factors [126]: the authors found a significant correlation with age, length of incubation period, abnormal laboratory findings and severity status. Moreover, a higher number of comorbidities resulted in a higher number of CT lesions, with DM the main risk factor for lung volume involvement [127].
In Table 7, we provided a summary of the papers included in our review focused on AI in the stratification and definition of severity and complications of COVID-19 pneumonia at Chest CT. Figure 7 shows the distribution of subjects included considering those studies where it was clearly stated.

5.4. AI in the Differential Diagnosis of COVID-19 Pneumonia from Other Pneumonia at Chest CT

The differentiation of pneumonia related to COVID-19 from that caused by other pathogens is challenging due to overlapping clinical and radiological characteristics, but it is critical for early diagnosis and pandemic control.
Multiple studies have evaluated the diagnostic performance of different AI systems in the detection of COVID-19 and in the differential diagnosis with other common pneumonias, demonstrating AUCs ranging from 0.903 to 0.99 [128,129,130,131,132,133,134].
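Since the AUC is the headline metric across these studies, a brief sketch of how it is computed may be useful: it equals the Mann-Whitney probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The toy labels and scores below are illustrative only.

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC as the Mann-Whitney U probability that a random positive
    scores higher than a random negative (ties count 0.5)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    pos, neg = y_score[y_true], y_score[~y_true]
    wins = np.count_nonzero(pos[:, None] > neg[None, :])
    ties = np.count_nonzero(pos[:, None] == neg[None, :])
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = auc_score(labels, scores)   # 8 of the 9 positive/negative pairs are ranked correctly
```

Unlike accuracy, the AUC is threshold-free, which is why it is the preferred summary when studies propose different operating cut-offs.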
A Chinese retrospective, multi-center study developed a 3D DL model, COVNet, to detect COVID-19 and distinguish it from community-acquired pneumonia (CAP) due to typical and atypical bacteria or viruses. The calculated AUCs for COVID-19 and CAP were 0.95 and 0.94, respectively, tested on a dataset of 3322 patients. The application of Gradient-weighted Class Activation Mapping (Grad-CAM) improved the interpretability of the proposed model: an automatically generated heatmap highlighted in red the suspected regions associated with the predicted class [133]. Other studies aimed to evaluate not only the performance of a proposed AI model in the differential diagnosis, but also the radiologist's performance with and without AI assistance [131].
A retrospective study employed an EfficientNet architecture for the pneumonia classification task and a Grad-CAM-generated heatmap for the visualization of the important image regions. The proposed model achieved an AUC of 0.95 and higher accuracy, sensitivity and specificity than experienced radiologists (96% vs. 85%, 95% vs. 79% and 96% vs. 88%, respectively). The authors found that radiologists performed better with AI assistance than with manual interpretation alone, reaching higher accuracy (90%), sensitivity (88%) and specificity (91%) [133].
An observational study by Zeng et al. tested a ML algorithm based on radiomic texture analysis of CT images to distinguish pneumonia due to COVID-19 (NCP) from Influenza A pneumonia (IAP). After application of a LASSO regression model, their nomogram included 8 radiomic features as independent discriminators of NCP, which were subsequently combined into a radiomics score (higher values suggesting COVID-related pneumonia). Their data indicated an excellent performance of the nomogram, with an AUC of 0.87, helping clinicians choose the right management [135].
Table 8 provides a summary of the papers included in the review focused on AI in the differential diagnosis of COVID-19 pneumonia from other pneumonia at Chest CT. Figure 8 shows the distribution of subjects included considering those studies where it was clearly stated.

6. Computational Cost

A brief introduction to the concept of computational cost is due. Computational cost is a generic name for the computational resources (usually the number of operations and the memory) required to run an algorithm. Even the most demanding algorithms can be executed in reasonable time when more computational resources are provided. Generally speaking, pipelines not based on deep learning have a rather low computational cost, both during training and inference. Indeed, studies based on radiomics and quantitative CT do not require expensive or high-performance hardware to reach very low run times. Deep learning models, on the other hand, require modern, dedicated hardware (GPUs) to train in reasonable time and may still require multiple days of training.
This does not hinder their effectiveness or their use in production, as the inference time is usually significantly lower than the training time. Among deep learning architectures, some are designed specifically for a lower computational cost [136], while others focus on performance, disregarding computational efficiency [137]. In particular, studies employing 3D convolutions [74] or leveraging multiple large models [81] are very computationally intensive and would probably require an amount of resources that few hospitals can provide. Nonetheless, for pipelines dedicated to a single disease, the required throughput is not too high and larger models can still provide value.
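A back-of-the-envelope estimate illustrates why 3D convolutions dominate the cost. The sizes below (224x224 slices, a 64-slice volume, 64 channels) are illustrative assumptions, not taken from any cited model.

```python
def conv_macs(out_spatial, c_in, c_out, kernel):
    """Multiply-accumulate count of one convolution layer:
    output positions x output channels x (input channels x kernel taps)."""
    positions = 1
    for s in out_spatial:
        positions *= s
    taps = 1
    for k in kernel:
        taps *= k
    return positions * c_out * c_in * taps

# A 3x3 2D conv on a 224x224 slice vs. a 3x3x3 3D conv on a
# 224x224x64 volume, both mapping 64 -> 64 channels.
macs_2d = conv_macs((224, 224), 64, 64, (3, 3))
macs_3d = conv_macs((224, 224, 64), 64, 64, (3, 3, 3))
ratio = macs_3d / macs_2d   # 192x more work: 64 output slices x 3 extra kernel taps
```

The factor grows linearly with both the number of slices and the kernel depth, which is why volumetric models quickly exceed the hardware budget of a typical hospital.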

7. Discussion

In this literature review, we presented a structured overview of the applications of AI to chest imaging in COVID-19 patients, describing the performance of several DL/radiomics models in the identification, screening and stratification of patients, as well as in the differential diagnosis from other pneumonias.
Some of the previously described models showed very high performance, suggesting that the implementation of AI techniques could aid radiologists in clinical practice, leading to a significant increase in accuracy and improving their daily workflow.
However, the potential utility of the machine learning-based models using CXR and CT images for diagnostic and prognostic purposes in COVID-19 has been analyzed in a systematic review that included some of the previously discussed studies [21,84,91,98,99,121,132].
According to Roberts et al. [138], none of the studies included in their systematic review showed sufficient robustness and reproducibility to be integrated into regular clinical practice, due to biases in datasets (either too small or too heterogeneous), poor data integration or insufficient validation. In addition, some machine learning models may show over- or under-fitting bias.
Specifically, concerning the quality of the training data of the analyzed studies [138], the authors raised the following key issues:
  • a warning against using online repositories, because of (1) the potential bias attributable to source issues and the inability to match demographics across populations, (2) the possible overfitting on the shared dataset and (3) the potentially low resolution of the shared images, unbalanced across classes;
  • attention to CXR projections (anteroposterior vs. posteroanterior), since models can wrongly correlate more severe disease with the view of the radiograph rather than with the actual radiographic findings;
  • most studies did not report the timing between imaging and RT-PCR tests, even though a negative RT-PCR test is a definitive exclusion criterion for COVID-19 infection.
The authors also recommended greater attention in the development of further ML-based algorithms, suggesting external validation and assessment with established frameworks (e.g., QUADAS, CLAIM, RQS) and checklists to identify these weaknesses [138].
Furthermore, other authors advised the sampling of large datasets to reduce predictive uncertainty, even though most works used small image samples, due to the lack of large open COVID-19 datasets (particularly for CXR) [139,140,141,142].
This is why further studies are needed to implement AI capacities in the above discussed settings (identification, screening, patients’ stratification and differential diagnosis), in order to guide the development of AI-empowered tools to reduce human error and assist radiologists in their decision-making process.
Limitations of the study:
Firstly, we would like to cite some limitations of the reviewed studies, which include inadequate verification of datasets [138], the limited time available given the on-going pandemic, and the lack of large datasets for some authors. It is worth mentioning that the first published work reviewing the usability of X-ray images to detect COVID-19 relied on a very limited dataset [143]. In some investigations, the number of positive images used for training was fewer than 100, which greatly limits the generalization power of the models under the CNN paradigm [144]. The rapidly evolving and emerging applications of AI/ML in COVID-19 can also represent another hurdle for reviewing previous work. Some authors have managed to release newer versions of their early pandemic studies, reinforcing their algorithms with larger datasets, including clinical information and overcoming some of the technical issues that were raised earlier, such as over-fitting. Additionally, to limit selection bias, we set structured criteria for the inclusion and exclusion of the selected studies.

8. Conclusions

The combination of chest imaging and artificial intelligence can enable fast, accurate and precise quantification of disease extent, as well as the identification of patients with severe short-term outcomes. AI/ML and radiomics have feasible applications and promising potential to support radiologists' workflow in the current pandemic. In other words, multiple domains can benefit from AI applications in chest imaging, including the identification, screening and risk stratification of COVID-19 cases. As aforementioned, the basic stages in tackling the pandemic include early and accurate identification of COVID-19, and ML can play a crucial role in this setting.
The integration of ML techniques will help to diagnose this condition faster, more cheaply and more safely in the upcoming years. However, various biases must be overcome in the development of further ML-based algorithms to guarantee sufficient robustness and reproducibility for their integration into clinical practice.
However, as previously stated by Roberts et al. [138], many of the developed ML models have not been shown to be ready for translation into clinical practice.
Datasets of higher quality, articles with enough documentation to be repeatable as well as external validation are required to give the currently developed ML models a sufficient robustness and reproducibility to integrate them into clinical practice.

Author Contributions

Conceptualization, M.E.L.; formal analysis, M.E.L., A.A., A.P.; investigation M.E.L., A.A., A.P.; writing—original draft preparation, M.E.L., A.A., A.P.; writing—review and editing, M.E.L., A.A., A.P., P.C., E.N., S.S., V.S.; supervision, M.E.L., V.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tan, B.S.; Dunnick, N.R.; Gangi, A.; Goergen, S.; Jin, Z.-Y.; Neri, E.; Nomura, C.H.; Pitcher, R.D.; Yee, J.; Mahmood, U. RSNA International Trends: A Global Perspective on the COVID-19 Pandemic and Radiology in Late 2020. Radiology 2021, 299, E193–E203. [Google Scholar] [CrossRef]
  2. Coppola, F.; Faggioni, L.; Neri, E.; Grassi, R.; Miele, V. Impact of the COVID-19 Outbreak on the Profession and Psychological Wellbeing of Radiologists: A Nationwide Online Survey. Insights Imaging 2021, 12, 23. [Google Scholar] [CrossRef] [PubMed]
  3. Akl, E.A.; Blažić, I.; Yaacoub, S.; Frija, G.; Chou, R.; Appiah, J.A.; Fatehi, M.; Flor, N.; Hitti, E.; Jafri, H.; et al. Use of Chest Imaging in the Diagnosis and Management of COVID-19: A WHO Rapid Advice Guide. Radiology 2021, 298, E63–E69. [Google Scholar] [CrossRef] [PubMed]
  4. Jacobi, A.; Chung, M.; Bernheim, A.; Eber, C. Portable Chest X-Ray in Coronavirus Disease-19 (COVID-19): A Pictorial Review. Clin. Imaging 2020, 64, 35–42. [Google Scholar] [CrossRef]
  5. Li, X.; Zeng, W.; Li, X.; Chen, H.; Shi, L.; Li, X.; Xiang, H.; Cao, Y.; Chen, H.; Liu, C.; et al. CT Imaging Changes of Corona Virus Disease 2019 (COVID-19): A Multi-Center Study in Southwest China. J. Transl. Med. 2020, 18, 154. [Google Scholar] [CrossRef] [Green Version]
  6. Bao, C.; Liu, X.; Zhang, H.; Li, Y.; Liu, J. Coronavirus Disease 2019 (COVID-19) CT Findings: A Systematic Review and Meta-Analysis. J. Am. Coll. Radiol. 2020, 17, 701–709. [Google Scholar] [CrossRef] [PubMed]
  7. Cao, Y.; Liu, X.; Xiong, L.; Cai, K. Imaging and Clinical Features of Patients with 2019 Novel Coronavirus SARS-CoV-2: A Systematic Review and Meta-Analysis. J. Med. Virol. 2020, 92, 1449–1459. [Google Scholar] [CrossRef] [Green Version]
  8. Buonsenso, D.; Piano, A.; Raffaelli, F.; Bonadia, N.; de Gaetano Donati, K.; Franceschi, F. Point-of-Care Lung Ultrasound Findings in Novel Coronavirus Disease-19 Pnemoniae: A Case Report and Potential Applications during COVID-19 Outbreak. Eur. Rev. Med. Pharmacol. Sci. 2020, 24, 2776–2780. [Google Scholar] [CrossRef]
  9. Moore, S.; Gardiner, E. Point of Care and Intensive Care Lung Ultrasound: A Reference Guide for Practitioners during COVID-19. Radiography 2020, 26, e297–e302. [Google Scholar] [CrossRef] [PubMed]
  10. Soldati, G.; Smargiassi, A.; Inchingolo, R.; Buonsenso, D.; Perrone, T.; Briganti, D.F.; Perlini, S.; Torri, E.; Mariani, A.; Mossolani, E.E.; et al. Is There a Role for Lung Ultrasound During the COVID-19 Pandemic? J. Ultrasound Med. 2020, 39, 1459–1462. [Google Scholar] [CrossRef] [Green Version]
  11. Neri, E.; Coppola, F.; Miele, V.; Bibbolino, C.; Grassi, R. Artificial Intelligence: Who Is Responsible for the Diagnosis? Radiol. Med. 2020, 125, 517–521. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Neri, E.; Miele, V.; Coppola, F.; Grassi, R. Use of CT and Artificial Intelligence in Suspected or COVID-19 Positive Patients: Statement of the Italian Society of Medical and Interventional Radiology. Radiol. Med. 2020, 125, 505–508. [Google Scholar] [CrossRef]
  13. Grassi, R.; Belfiore, M.P.; Montanelli, A.; Patelli, G.; Urraro, F.; Giacobbe, G.; Fusco, R.; Granata, V.; Petrillo, A.; Sacco, P.; et al. COVID-19 Pneumonia: Computer-Aided Quantification of Healthy Lung Parenchyma, Emphysema, Ground Glass and Consolidation on Chest Computed Tomography (CT). Radiol. Med. 2021, 126, 553–560. [Google Scholar] [CrossRef]
  14. Scapicchio, C.; Gabelloni, M.; Barucci, A.; Cioni, D.; Saba, L.; Neri, E. A Deep Look into Radiomics. Radiol. Med. 2021. [Google Scholar] [CrossRef]
  15. Santos, M.K.; Ferreira Júnior, J.R.; Wada, D.T.; Tenório, A.P.M.; Barbosa, M.H.N.; Marques, P.M.D.A. Artificial Intelligence, Machine Learning, Computer-Aided Diagnosis, and Radiomics: Advances in Imaging towards to Precision Medicine. Radiol. Bras. 2019, 52, 387–396. [Google Scholar] [CrossRef] [Green Version]
  16. Ghaderzadeh, M.; Asadi, F. Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A Systematic Review. J. Healthc. Eng. 2021, 2021, 6677314. [Google Scholar] [CrossRef]
  17. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2021, 14, 4–15. [Google Scholar] [CrossRef] [Green Version]
  18. MIRC CTP—MircWiki. Available online: https://mircwiki.rsna.org/index.php?title=CTP-The_RSNA_Clinical_Trial_Processor (accessed on 8 April 2021).
  19. Cao, Y.; Xu, Z.; Feng, J.; Jin, C.; Han, X.; Wu, H.; Shi, H. Longitudinal Assessment of COVID-19 Using a Deep Learning–Based Quantitative CT Pipeline: Illustration of Two Cases. Radiol. Cardiothorac. Imaging 2020, 2, e200082. [Google Scholar] [CrossRef] [Green Version]
  20. Huang, L.; Han, R.; Ai, T.; Yu, P.; Kang, H.; Tao, Q.; Xia, L. Serial Quantitative Chest CT Assessment of COVID-19: A Deep Learning Approach. Radiol. Cardiothorac. Imaging 2020, 2, e200075. [Google Scholar] [CrossRef] [Green Version]
  21. Yue, H.; Yu, Q.; Liu, C.; Huang, Y.; Jiang, Z.; Shao, C.; Zhang, H.; Ma, B.; Wang, Y.; Xie, G.; et al. Machine Learning-Based CT Radiomics Method for Predicting Hospital Stay in Patients with Pneumonia Associated with SARS-CoV-2 Infection: A Multicenter Study. Ann. Transl. Med. 2020, 8, 859. [Google Scholar] [CrossRef]
  22. Gozes, O.; Frid-Adar, M.; Greenspan, H.; Browning, P.D.; Zhang, H.; Ji, W.; Bernheim, A.; Siegel, E. Rapid AI Development Cycle for the Coronavirus (COVID-19) Pandemic: Initial Results for Automated Detection & Patient Monitoring Using Deep Learning CT Image Analysis. arXiv 2020, arXiv:2003.05037. [Google Scholar]
  23. Tang, L.; Zhang, X.; Wang, Y.; Zeng, X. Severe COVID-19 Pneumonia: Assessing Inflammation Burden with Volume-Rendered Chest CT. Radiol. Cardiothorac. Imaging 2020, 2, e200044. [Google Scholar] [CrossRef] [Green Version]
  24. Shan, F.; Gao, Y.; Wang, J.; Shi, W.; Shi, N.; Han, M.; Xue, Z.; Shen, D.; Shi, Y. Abnormal Lung Quantification in Chest CT Images of COVID-19 Patients with Deep Learning and Its Application to Severity Prediction. Med. Phys. 2021, 48, 1633–1645. [Google Scholar] [CrossRef]
  25. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  26. Sethy, P.K.; Behera, S.K. Detection of Coronavirus Disease (COVID-19) Based on Deep Features. Available online: https://www.preprints.org/manuscript/202003.0300/v1 (accessed on 8 April 2021).
  27. Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic Detection from X-Ray Images Utilizing Transfer Learning with Convolutional Neural Networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef] [Green Version]
  28. Asif, S.; Wenhui, Y.; Jin, H.; Tao, Y.; Jinhai, S. Classification of COVID-19 from Chest X-Ray Images Using Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020. [Google Scholar]
  29. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Emadi, N.A.; et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  30. Islam, M.Z.; Islam, M.M.; Asraf, A. A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-Ray Images. Inform. Med. Unlocked 2020, 20, 100412. [Google Scholar] [CrossRef]
  31. Nour, M.; Cömert, Z.; Polat, K. A Novel Medical Diagnosis Model for COVID-19 Infection Detection Based on Deep Features and Bayesian Optimization. Appl. Soft Comput. 2020, 97, 106580. [Google Scholar] [CrossRef]
  32. Oh, Y.; Park, S.; Ye, J.C. Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets. IEEE Trans. Med. Imaging 2020, 39, 2688–2700. [Google Scholar] [CrossRef]
  33. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated Detection of COVID-19 Cases Using Deep Neural Networks with X-Ray Images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef] [PubMed]
  34. Pereira, R.M.; Bertolini, D.; Teixeira, L.O.; Silla, C.N.; Costa, Y.M.G. COVID-19 Identification in Chest X-Ray Images on Flat and Hierarchical Classification Scenarios. Comput. Methods Programs Biomed. 2020, 194, 105532. [Google Scholar] [CrossRef]
  35. Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest X-Ray Images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef]
  36. Yoon, S.H.; Lee, K.H.; Kim, J.Y.; Lee, Y.K.; Ko, H.; Kim, K.H.; Park, C.M.; Kim, Y.H. Chest Radiographic and CT Findings of the 2019 Novel Coronavirus Disease (COVID-19): Analysis of Nine Patients Treated in Korea. Korean J. Radiol. 2020, 21, 494–500. [Google Scholar] [CrossRef]
  37. Wong, H.Y.F.; Lam, H.Y.S.; Fong, A.H.; Leung, S.T.; Chin, T.W.; Lo, C.S.Y.; Lui, M.M.; Lee, J.C.Y.; Chiu, K.W.; Chung, T.; et al. Frequency and Distribution of Chest Radiographic Findings in COVID-19 Positive Patients. Radiology 2020, 296, E72–E78. [Google Scholar] [CrossRef] [Green Version]
  38. Lomoro, P.; Verde, F.; Zerboni, F.; Simonetti, I.; Borghi, C.; Fachinetti, C.; Natalizi, A.; Martegani, A. COVID-19 Pneumonia Manifestations at the Admission on Chest Ultrasound, Radiographs, and CT: Single-Center Study and Comprehensive Radiologic Literature Review. Eur. J. Radiol. Open 2020, 7, 100231. [Google Scholar] [CrossRef]
  39. Borkowski, A.A.; Viswanadhan, N.A.; Thomas, L.B.; Guzman, R.D.; Deland, L.A.; Mastorides, S.M. Using Artificial Intelligence for COVID-19 Chest X-Ray Diagnosis. Fed. Pract. 2020, 37, 398–404. [Google Scholar] [CrossRef]
  40. Chowdhury, N.K.; Rahman, M.M.; Kabir, M.A. PDCOVIDNet: A Parallel-Dilated Convolutional Neural Network Architecture for Detecting COVID-19 from Chest X-Ray Images. Health Inf. Sci. Syst. 2020, 8, 27. [Google Scholar] [CrossRef]
  41. Toraman, S.; Alakus, T.B.; Turkoglu, I. Convolutional Capsnet: A Novel Artificial Neural Network Approach to Detect COVID-19 Disease from X-Ray Images Using Capsule Networks. Chaos Solitons Fractals 2020, 140, 110122. [Google Scholar] [CrossRef]
  42. Ouchicha, C.; Ammor, O.; Meknassi, M. CVDNet: A Novel Deep Learning Architecture for Detection of Coronavirus (Covid-19) from Chest x-Ray Images. Chaos Solitons Fractals 2020, 140, 110245. [Google Scholar] [CrossRef]
  43. Kilicarslan, S.; Adem, K.; Celik, M. Diagnosis and Classification of Cancer Using Hybrid Model Based on ReliefF and Convolutional Neural Network. Med. Hypotheses 2020, 137, 109577. [Google Scholar] [CrossRef]
  44. Toğaçar, M.; Ergen, B.; Cömert, Z. COVID-19 Detection Using Deep Learning Models to Exploit Social Mimic Optimization and Structured Chest X-Ray Images Using Fuzzy Color and Stacking Approaches. Comput. Biol. Med. 2020, 121, 103805. [Google Scholar] [CrossRef]
  45. Özkaya, U.; Öztürk, Ş.; Budak, S.; Melgani, F. Classification of COVID-19 in Chest CT Images Using Convolutional Support Vector Machines. arXiv 2020, arXiv:2011.05746. [Google Scholar]
  46. Bassi, P.R.A.S.; Attux, R. A Deep Convolutional Neural Network for COVID-19 Detection Using Chest X-Rays. Res. Biomed. Eng. 2021. [Google Scholar] [CrossRef]
  47. Zhang, Y.-D.; Satapathy, S.C.; Liu, S.; Li, G.-R. A Five-Layer Deep Convolutional Neural Network with Stochastic Pooling for Chest CT-Based COVID-19 Diagnosis. Mach. Vis. Appl. 2021, 32, 14. [Google Scholar] [CrossRef] [PubMed]
  48. Ismael, A.M.; Şengür, A. The Investigation of Multiresolution Approaches for Chest X-Ray Image Based COVID-19 Detection. Health Inf. Sci. Syst. 2020, 8, 29. [Google Scholar] [CrossRef]
  49. Hassantabar, S.; Ahmadi, M.; Sharifi, A. Diagnosis and Detection of Infected Tissue of COVID-19 Patients Based on Lung x-Ray Image Using Convolutional Neural Network Approaches. Chaos Solitons Fractals 2020, 140, 110170. [Google Scholar] [CrossRef] [PubMed]
  50. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-Qaness, M.A.A.; Damasevicius, R.; Elaziz, M.A. COVID-19 Image Classification Using Deep Features and Fractional-Order Marine Predators Algorithm. Sci. Rep. 2020, 10, 15364. [Google Scholar] [CrossRef] [PubMed]
  51. Sule, W.F.; Oluwayelu, D.O. Real-Time RT-PCR for COVID-19 Diagnosis: Challenges and Prospects. Pan Afr. Med. J. 2020, 35, 121. [Google Scholar] [CrossRef] [PubMed]
  52. Murphy, K.; Smits, H.; Knoops, A.J.G.; Korst, M.B.J.M.; Samson, T.; Scholten, E.T.; Schalekamp, S.; Schaefer-Prokop, C.M.; Philipsen, R.H.H.M.; Meijers, A.; et al. COVID-19 on Chest Radiographs: A Multireader Evaluation of an Artificial Intelligence System. Radiology 2020, 296, E166–E172. [Google Scholar] [CrossRef]
  53. Wang, L.; Wong, A. COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest Radiography Images. arXiv 2020, arXiv:abs/2003.09871. [Google Scholar]
  54. Narin, A.; Kaya, C.; Pamuk, Z. Automatic Detection of Coronavirus Disease (COVID-19) Using X-Ray Images and Deep Convolutional Neural Networks. arXiv 2020, arXiv:abs/2003.10849. [Google Scholar]
  55. Zhang, J.; Xie, Y.; Li, Y.; Shen, C.; Xia, Y. COVID-19 Screening on Chest X-Ray Images Using Deep Learning Based Anomaly Detection. arXiv 2020, arXiv:2003.12338. [Google Scholar]
  56. Xia, Y.; Chen, W.; Ren, H.; Zhao, J.; Wang, L.; Jin, R.; Zhou, J.; Wang, Q.; Yan, F.; Zhang, B.; et al. A Rapid Screening Classifier for Diagnosing COVID-19. Int. J. Biol. Sci. 2021, 17, 539–548. [Google Scholar] [CrossRef] [PubMed]
  57. Salehi, S.; Abedi, A.; Balakrishnan, S.; Gholamrezanezhad, A. Coronavirus Disease 2019 (COVID-19) Imaging Reporting and Data System (COVID-RADS) and Common Lexicon: A Proposal Based on the Imaging Data of 37 Studies. Eur. Radiol. 2020, 30, 4930–4942. [Google Scholar] [CrossRef]
  58. Toussie, D.; Voutsinas, N.; Finkelstein, M.; Cedillo, M.A.; Manna, S.; Maron, S.Z.; Jacobi, A.; Chung, M.; Bernheim, A.; Eber, C.; et al. Clinical and Chest Radiography Features Determine Patient Outcomes in Young and Middle-Aged Adults with COVID-19. Radiology 2020, 297, E197–E206. [Google Scholar] [CrossRef] [PubMed]
  59. Li, M.D.; Little, B.P.; Alkasab, T.K.; Mendoza, D.P.; Succi, M.D.; Shepard, J.-A.O.; Lev, M.H.; Kalpathy-Cramer, J. Multi-Radiologist User Study for Artificial Intelligence-Guided Grading of COVID-19 Lung Disease Severity on Chest Radiographs. Acad. Radiol. 2021, 28, 572–576. [Google Scholar] [CrossRef] [PubMed]
  60. Li, M.D.; Arun, N.T.; Gidwani, M.; Chang, K.; Deng, F.; Little, B.P.; Mendoza, D.P.; Lang, M.; Lee, S.I.; O’Shea, A.; et al. Automated Assessment and Tracking of COVID-19 Pulmonary Disease Severity on Chest Radiographs Using Convolutional Siamese Neural Networks. Radiol. Artif. Intell. 2020, 2, e200079. [Google Scholar] [CrossRef] [PubMed]
  61. Mushtaq, J.; Pennella, R.; Lavalle, S.; Colarieti, A.; Steidler, S.; Martinenghi, C.M.A.; Palumbo, D.; Esposito, A.; Rovere-Querini, P.; Tresoldi, M.; et al. Initial Chest Radiographs and Artificial Intelligence (AI) Predict Clinical Outcomes in COVID-19 Patients: Analysis of 697 Italian Patients. Eur. Radiol. 2021, 31, 1770–1779. [Google Scholar] [CrossRef]
  62. Zhu, J.; Shen, B.; Abbasi, A.; Hoshmand-Kochi, M.; Li, H.; Duong, T.Q. Deep Transfer Learning Artificial Intelligence Accurately Stages COVID-19 Lung Disease Severity on Portable Chest Radiographs. PLoS ONE 2020, 15, e0236621. [Google Scholar] [CrossRef]
  63. Varela-Santos, S.; Melin, P. A New Approach for Classifying Coronavirus COVID-19 Based on Its Manifestation on Chest X-Rays Using Texture Features and Neural Networks. Inf. Sci. 2021, 545, 403–414. [Google Scholar] [CrossRef]
  64. Bai, H.X.; Hsieh, B.; Xiong, Z.; Halsey, K.; Choi, J.W.; Tran, T.M.L.; Pan, I.; Shi, L.-B.; Wang, D.-C.; Mei, J.; et al. Performance of Radiologists in Differentiating COVID-19 from Non-COVID-19 Viral Pneumonia at Chest CT. Radiology 2020, 296, E46–E54. [Google Scholar] [CrossRef]
  65. Jin, W.; Dong, S.; Dong, C.; Ye, X. Hybrid Ensemble Model for Differential Diagnosis between COVID-19 and Common Viral Pneumonia by Chest X-Ray Radiograph. Comput. Biol. Med. 2021, 131, 104252. [Google Scholar] [CrossRef]
  66. Sharma, A.; Rani, S.; Gupta, D. Artificial Intelligence-Based Classification of Chest X-Ray Images into COVID-19 and Other Infectious Diseases. Int. J. Biomed. Imaging 2020, 2020, 8889023. [Google Scholar] [CrossRef]
  67. Tsiknakis, N.; Trivizakis, E.; Vassalou, E.E.; Papadakis, G.Z.; Spandidos, D.A.; Tsatsakis, A.; Sánchez-García, J.; López-González, R.; Papanikolaou, N.; Karantanas, A.H.; et al. Interpretable Artificial Intelligence Framework for COVID-19 Screening on Chest X-Rays. Exp. Ther. Med. 2020, 20, 727–735. [Google Scholar] [CrossRef] [PubMed]
  68. Fleishon, H.; Haffty, B. Comments of the American College of Radiology Regarding the Evolving Role of Artificial Intelligence in Radiological Imaging 2020. Available online: https://www.acr.org/-/media/ACR/NOINDEX/Advocacy/acr_rsna_comments_fda-ai-evolvingrole-ws_6-30-2020.pdf (accessed on 8 April 2021).
  69. Li, D.; Wang, D.; Dong, J.; Wang, N.; Huang, H.; Xu, H.; Xia, C. False-Negative Results of Real-Time Reverse-Transcriptase Polymerase Chain Reaction for Severe Acute Respiratory Syndrome Coronavirus 2: Role of Deep-Learning-Based CT Diagnosis and Insights from Two Cases. Korean J. Radiol. 2020, 21, 505–508. [Google Scholar] [CrossRef] [PubMed]
  70. Dong, D.; Tang, Z.; Wang, S.; Hui, H.; Gong, L.; Lu, Y.; Xue, Z.; Liao, H.; Chen, F.; Yang, F.; et al. The Role of Imaging in the Detection and Management of COVID-19: A Review. IEEE Rev. Biomed. Eng. 2021, 14, 16–29. [Google Scholar] [CrossRef]
  71. Anastasopoulos, C.; Weikert, T.; Yang, S.; Abdulkadir, A.; Schmülling, L.; Bühler, C.; Paciolla, F.; Sexauer, R.; Cyriac, J.; Nesic, I.; et al. Development and Clinical Implementation of Tailored Image Analysis Tools for COVID-19 in the Midst of the Pandemic: The Synergetic Effect of an Open, Clinically Embedded Software Development Platform and Machine Learning. Eur. J. Radiol. 2020, 131, 109233. [Google Scholar] [CrossRef]
  72. Yang, S.; Jiang, L.; Cao, Z.; Wang, L.; Cao, J.; Feng, R.; Zhang, Z.; Xue, X.; Shi, Y.; Shan, F. Deep Learning for Detecting Corona Virus Disease 2019 (COVID-19) on High-Resolution Computed Tomography: A Pilot Study. Ann. Transl. Med. 2020, 8, 450. [Google Scholar] [CrossRef] [PubMed]
  73. Harmon, S.A.; Sanford, T.H.; Xu, S.; Turkbey, E.B.; Roth, H.; Xu, Z.; Yang, D.; Myronenko, A.; Anderson, V.; Amalou, A.; et al. Artificial Intelligence for the Detection of COVID-19 Pneumonia on Chest CT Using Multinational Datasets. Nat. Commun. 2020, 11, 4080. [Google Scholar] [CrossRef] [PubMed]
  74. Ni, Q.; Sun, Z.Y.; Qi, L.; Chen, W.; Yang, Y.; Wang, L.; Zhang, X.; Yang, L.; Fang, Y.; Xing, Z.; et al. A Deep Learning Approach to Characterize 2019 Coronavirus Disease (COVID-19) Pneumonia in Chest CT Images. Eur. Radiol. 2020, 30, 6517–6527. [Google Scholar] [CrossRef]
  75. Chen, J.; Wu, L.; Zhang, J.; Zhang, L.; Gong, D.; Zhao, Y.; Chen, Q.; Huang, S.; Yang, M.; Yang, X.; et al. Deep Learning-Based Model for Detecting 2019 Novel Coronavirus Pneumonia on High-Resolution Computed Tomography. Sci. Rep. 2020, 10, 19196. [Google Scholar] [CrossRef]
  76. Ye, Z.; Zhang, Y.; Wang, Y.; Huang, Z.; Song, B. Chest CT Manifestations of New Coronavirus Disease 2019 (COVID-19): A Pictorial Review. Eur. Radiol. 2020, 30, 4381–4389. [Google Scholar] [CrossRef] [Green Version]
  77. Zhang, H.-T.; Zhang, J.-S.; Zhang, H.-H.; Nan, Y.-D.; Zhao, Y.; Fu, E.-Q.; Xie, Y.-H.; Liu, W.; Li, W.-P.; Zhang, H.-J.; et al. Automated Detection and Quantification of COVID-19 Pneumonia: CT Imaging Analysis by a Deep Learning-Based Software. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2525–2532. [Google Scholar] [CrossRef] [PubMed]
  78. Ma, C.; Wang, X.-L.; Xie, D.-M.; Li, Y.-D.; Zheng, Y.-J.; Zhang, H.-B.; Ming, B. Dynamic Evaluation of Lung Involvement during Coronavirus Disease-2019 (COVID-19) with Quantitative Lung CT. Emerg. Radiol. 2020, 27, 671–678. [Google Scholar] [CrossRef] [PubMed]
  79. Du, S.; Gao, S.; Huang, G.; Li, S.; Chong, W.; Jia, Z.; Hou, G.; Wáng, Y.X.J.; Zhang, L. Chest Lesion CT Radiological Features and Quantitative Analysis in RT-PCR Turned Negative and Clinical Symptoms Resolved COVID-19 Patients. Quant. Imaging Med. Surg. 2020, 10, 1307–1317. [Google Scholar] [CrossRef] [PubMed]
  80. Prokop, M.; van Everdingen, W.; van Rees Vellinga, T.; Quarles van Ufford, H.; Stöger, L.; Beenen, L.; Geurts, B.; Gietema, H.; Krdzalic, J.; Schaefer-Prokop, C.; et al. CO-RADS: A Categorical CT Assessment Scheme for Patients Suspected of Having COVID-19-Definition and Evaluation. Radiology 2020, 296, E97–E104. [Google Scholar] [CrossRef]
  81. Lessmann, N.; Sánchez, C.I.; Beenen, L.; Boulogne, L.H.; Brink, M.; Calli, E.; Charbonnier, J.-P.; Dofferhoff, T.; van Everdingen, W.M.; Gerke, P.K.; et al. Automated Assessment of COVID-19 Reporting and Data System and Chest CT Severity Scores in Patients Suspected of Having COVID-19 Using Artificial Intelligence. Radiology 2021, 298, E18–E28. [Google Scholar] [CrossRef]
  82. Liu, H.; Ren, H.; Wu, Z.; Xu, H.; Zhang, S.; Li, J.; Hou, L.; Chi, R.; Zheng, H.; Chen, Y.; et al. CT Radiomics Facilitates More Accurate Diagnosis of COVID-19 Pneumonia: Compared with CO-RADS. J. Transl. Med. 2021, 19, 29. [Google Scholar] [CrossRef]
  83. Fang, X.; Li, X.; Bian, Y.; Ji, X.; Lu, J. Radiomics Nomogram for the Prediction of 2019 Novel Coronavirus Pneumonia Caused by SARS-CoV-2. Eur. Radiol. 2020, 30, 6888–6901. [Google Scholar] [CrossRef]
  84. Chen, Y.; Wang, Y.; Zhang, Y.; Zhang, N.; Zhao, S.; Zeng, H.; Deng, W.; Huang, Z.; Liu, S.; Song, B. A Quantitative and Radiomics Approach to Monitoring ARDS in COVID-19 Patients Based on Chest CT: A Retrospective Cohort Study. Int. J. Med. Sci. 2020, 17, 1773–1782. [Google Scholar] [CrossRef]
  85. Voulodimos, A.; Protopapadakis, E.; Katsamenis, I.; Doulamis, A.; Doulamis, N. Deep Learning Models for COVID-19 Infected Area Segmentation in CT Images. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 29 June–2 July 2021. [Google Scholar] [CrossRef]
  86. Wang, Y.; Zhang, Y.; Liu, Y.; Tian, J.; Zhong, C.; Shi, Z.; Zhang, Y.; He, Z. Does Non-COVID-19 Lung Lesion Help? Investigating Transferability in COVID-19 CT Image Segmentation. Comput. Methods Programs Biomed. 2021, 202, 106004. [Google Scholar] [CrossRef]
  87. Saood, A.; Hatem, I. COVID-19 Lung CT Image Segmentation Using Deep Learning Methods: U-Net versus SegNet. BMC Med. Imaging 2021, 21, 19. [Google Scholar] [CrossRef]
  88. Akram, T.; Attique, M.; Gul, S.; Shahzad, A.; Altaf, M.; Naqvi, S.S.R.; Damaševičius, R.; Maskeliūnas, R. A Novel Framework for Rapid Diagnosis of COVID-19 on Computed Tomography Scans. Pattern Anal. Appl. 2021, 1–14. [Google Scholar] [CrossRef]
  89. Mukherjee, H.; Ghosh, S.; Dhar, A.; Obaidullah, S.M.; Santosh, K.C.; Roy, K. Deep Neural Network to Detect COVID-19: One Architecture for Both CT Scans and Chest X-Rays. Appl. Intell. 2020. [Google Scholar] [CrossRef]
  90. Javor, D.; Kaplan, H.; Kaplan, A.; Puchner, S.B.; Krestan, C.; Baltzer, P. Deep Learning Analysis Provides Accurate COVID-19 Diagnosis on Chest Computed Tomography. Eur. J. Radiol. 2020, 133, 109402. [Google Scholar] [CrossRef]
  91. Mei, X.; Lee, H.-C.; Diao, K.-Y.; Huang, M.; Lin, B.; Liu, C.; Xie, Z.; Ma, Y.; Robson, P.M.; Chung, M.; et al. Artificial Intelligence-Enabled Rapid Diagnosis of Patients with COVID-19. Nat. Med. 2020, 26, 1224–1228. [Google Scholar] [CrossRef] [PubMed]
  92. Hermans, J.J.R.; Groen, J.; Zwets, E.; Boxma-De Klerk, B.M.; Van Werkhoven, J.M.; Ong, D.S.Y.; Hanselaar, W.E.J.J.; Waals-Prinzen, L.; Brown, V. Chest CT for Triage during COVID-19 on the Emergency Department: Myth or Truth? Emerg. Radiol. 2020, 27, 641–651. [Google Scholar] [CrossRef]
  93. Francone, M.; Iafrate, F.; Masci, G.M.; Coco, S.; Cilia, F.; Manganaro, L.; Panebianco, V.; Andreoli, C.; Colaiacomo, M.C.; Zingaropoli, M.A.; et al. Chest CT Score in COVID-19 Patients: Correlation with Disease Severity and Short-Term Prognosis. Eur. Radiol. 2020, 30, 6808–6817. [Google Scholar] [CrossRef] [PubMed]
  94. Zhao, W.; Zhong, Z.; Xie, X.; Yu, Q.; Liu, J. Relation Between Chest CT Findings and Clinical Conditions of Coronavirus Disease (COVID-19) Pneumonia: A Multicenter Study. AJR Am. J. Roentgenol. 2020, 214, 1072–1077. [Google Scholar] [CrossRef]
  95. Li, K.; Wu, J.; Wu, F.; Guo, D.; Chen, L.; Fang, Z.; Li, C. The Clinical and Chest CT Features Associated With Severe and Critical COVID-19 Pneumonia. Investig. Radiol. 2020, 55, 327–331. [Google Scholar] [CrossRef]
  96. Metlay, J.P.; Waterer, G.W.; Long, A.C.; Anzueto, A.; Brozek, J.; Crothers, K.; Cooley, L.A.; Dean, N.C.; Fine, M.J.; Flanders, S.A.; et al. Diagnosis and Treatment of Adults with Community-Acquired Pneumonia. An Official Clinical Practice Guideline of the American Thoracic Society and Infectious Diseases Society of America. Am. J. Respir. Crit. Care Med. 2019, 200, e45–e67. [Google Scholar] [CrossRef]
  97. Chatzitofis, A.; Cancian, P.; Gkitsas, V.; Carlucci, A.; Stalidis, P.; Albanis, G.; Karakottas, A.; Semertzidis, T.; Daras, P.; Giannitto, C.; et al. Volume-of-Interest Aware Deep Neural Networks for Rapid Chest CT-Based COVID-19 Patient Risk Assessment. Int. J. Environ. Res. Public Health 2021, 18, 2842. [Google Scholar] [CrossRef]
  98. Zhu, X.; Song, B.; Shi, F.; Chen, Y.; Hu, R.; Gan, J.; Zhang, W.; Li, M.; Wang, L.; Gao, Y.; et al. Joint Prediction and Time Estimation of COVID-19 Developing Severe Symptoms Using Chest CT Scan. Med. Image Anal. 2021, 67, 101824. [Google Scholar] [CrossRef]
  99. Wang, S.; Zha, Y.; Li, W.; Wu, Q.; Li, X.; Niu, M.; Wang, M.; Qiu, X.; Li, H.; Yu, H.; et al. A Fully Automatic Deep Learning System for COVID-19 Diagnostic and Prognostic Analysis. Eur. Respir. J. 2020, 56, 2000775. [Google Scholar] [CrossRef]
  100. Meng, L.; Dong, D.; Li, L.; Niu, M.; Bai, Y.; Wang, M.; Qiu, X.; Zha, Y.; Tian, J. A Deep Learning Prognosis Model Help Alert for COVID-19 Patients at High-Risk of Death: A Multi-Center Study. IEEE J. Biomed. Health Inform. 2020, 24, 3576–3584. [Google Scholar] [CrossRef]
  101. Li, D.; Zhang, Q.; Tan, Y.; Feng, X.; Yue, Y.; Bai, Y.; Li, J.; Li, J.; Xu, Y.; Chen, S.; et al. Prediction of COVID-19 Severity Using Chest Computed Tomography and Laboratory Measurements: Evaluation Using a Machine Learning Approach. JMIR Med. Inform. 2020, 8, e21604. [Google Scholar] [CrossRef] [PubMed]
  102. Ho, T.T.; Park, J.; Kim, T.; Park, B.; Lee, J.; Kim, J.Y.; Kim, K.B.; Choi, S.; Kim, Y.H.; Lim, J.-K.; et al. Deep Learning Models for Predicting Severe Progression in COVID-19-Infected Patients. JMIR Med. Inform. 2021, 9, e24973. [Google Scholar] [CrossRef]
  103. Hu, X.; Zeng, W.; Zhang, Y.; Zhen, Z.; Zheng, Y.; Cheng, L.; Wang, X.; Luo, H.; Zhang, S.; Wu, Z.; et al. CT Imaging Features of Different Clinical Types of COVID-19 Calculated by AI System: A Chinese Multicenter Study. J. Thorac. Dis. 2020, 12, 5336–5346. [Google Scholar] [CrossRef]
  104. Li, Z.; Zhong, Z.; Li, Y.; Zhang, T.; Gao, L.; Jin, D.; Sun, Y.; Ye, X.; Yu, L.; Hu, Z.; et al. From Community-Acquired Pneumonia to COVID-19: A Deep Learning-Based Method for Quantitative Analysis of COVID-19 on Thick-Section CT Scans. Eur. Radiol. 2020, 30, 6828–6837. [Google Scholar] [CrossRef] [PubMed]
  105. Zhang, Y.; Liu, Y.; Gong, H.; Wu, L. Quantitative Lung Lesion Features and Temporal Changes on Chest CT in Patients with Common and Severe SARS-CoV-2 Pneumonia. PLoS ONE 2020, 15, e0236858. [Google Scholar] [CrossRef] [PubMed]
  106. Pan, F.; Li, L.; Liu, B.; Ye, T.; Li, L.; Liu, D.; Ding, Z.; Chen, G.; Liang, B.; Yang, L.; et al. A Novel Deep Learning-Based Quantification of Serial Chest Computed Tomography in Coronavirus Disease 2019 (COVID-19). Sci. Rep. 2021, 11, 417. [Google Scholar] [CrossRef]
  107. Cheng, Z.; Qin, L.; Cao, Q.; Dai, J.; Pan, A.; Yang, W.; Gao, Y.; Chen, L.; Yan, F. Quantitative Computed Tomography of the Coronavirus Disease 2019 (COVID-19) Pneumonia. Radiol. Infect. Dis. 2020, 7, 55–61. [Google Scholar] [CrossRef] [PubMed]
  108. Ippolito, D.; Ragusi, M.; Gandola, D.; Maino, C.; Pecorelli, A.; Terrani, S.; Peroni, M.; Giandola, T.; Porta, M.; Talei Franzesi, C.; et al. Computed Tomography Semi-Automated Lung Volume Quantification in SARS-CoV-2-Related Pneumonia. Eur. Radiol. 2020, 31, 2726–2736. [Google Scholar] [CrossRef]
  109. Mergen, V.; Kobe, A.; Blüthgen, C.; Euler, A.; Flohr, T.; Frauenfelder, T.; Alkadhi, H.; Eberhard, M. Deep Learning for Automatic Quantification of Lung Abnormalities in COVID-19 Patients: First Experience and Correlation with Clinical Parameters. Eur. J. Radiol. Open 2020, 7, 100272. [Google Scholar] [CrossRef] [PubMed]
  110. Lanza, E.; Muglia, R.; Bolengo, I.; Santonocito, O.G.; Lisi, C.; Angelotti, G.; Morandini, P.; Savevski, V.; Politi, L.S.; Balzarini, L. Quantitative Chest CT Analysis in COVID-19 to Predict the Need for Oxygenation Support and Intubation. Eur. Radiol. 2020, 30, 6770–6778. [Google Scholar] [CrossRef]
  111. Kimura-Sandoval, Y.; Arévalo-Molina, M.E.; Cristancho-Rojas, C.N.; Kimura-Sandoval, Y.; Rebollo-Hurtado, V.; Licano-Zubiate, M.; Chapa-Ibargüengoitia, M.; Muñoz-López, G. Validation of Chest Computed Tomography Artificial Intelligence to Determine the Requirement for Mechanical Ventilation and Risk of Mortality in Hospitalized Coronavirus Disease-19 Patients in a Tertiary Care Center in Mexico City. Rev. Investig. Clin. 2020, 73, 111–119. [Google Scholar] [CrossRef] [PubMed]
  112. Burian, E.; Jungmann, F.; Kaissis, G.A.; Lohöfer, F.K.; Spinner, C.D.; Lahmer, T.; Treiber, M.; Dommasch, M.; Schneider, G.; Geisler, F.; et al. Intensive Care Risk Estimation in COVID-19 Pneumonia Based on Clinical and Imaging Parameters: Experiences from the Munich Cohort. J. Clin. Med. 2020, 9, 1514. [Google Scholar] [CrossRef]
  113. Liu, F.; Zhang, Q.; Huang, C.; Shi, C.; Wang, L.; Shi, N.; Fang, C.; Shan, F.; Mei, X.; Shi, J.; et al. CT Quantification of Pneumonia Lesions in Early Days Predicts Progression to Severe Illness in a Cohort of COVID-19 Patients. Theranostics 2020, 10, 5613–5622. [Google Scholar] [CrossRef]
114. Noll, E.; Soler, L.; Ohana, M.; Ludes, P.-O.; Pottecher, J.; Bennett-Guerrero, E.; Veillon, F.; Goichot, B.; Schneider, F.; Meyer, N.; et al. A Novel, Automated, Quantification of Abnormal Lung Parenchyma in Patients with COVID-19 Infection: Initial Description of Feasibility and Association with Clinical Outcome. Anaesth. Crit. Care Pain Med. 2020, 40, 100780.
115. Durhan, G.; Düzgün, S.A.; Demirkazık, F.B.; Irmak, İ.; İdilman, İ.; Akpınar, M.G.; Akpınar, E.; Öcal, S.; Telli, G.; Topeli, A.; et al. Visual and Software-Based Quantitative Chest CT Assessment of COVID-19: Correlation with Clinical Findings. Diagn. Interv. Radiol. 2020, 26, 557–564.
116. Wang, Y.; Chen, Y.; Wei, Y.; Li, M.; Zhang, Y.; Zhang, N.; Zhao, S.; Zeng, H.; Deng, W.; Huang, Z.; et al. Quantitative Analysis of Chest CT Imaging Findings with the Risk of ARDS in COVID-19 Patients: A Preliminary Study. Ann. Transl. Med. 2020, 8, 594.
117. Qiu, J.; Peng, S.; Yin, J.; Wang, J.; Jiang, J.; Li, Z.; Song, H.; Zhang, W. A Radiomics Signature to Quantitatively Analyze COVID-19-Infected Pulmonary Lesions. Interdiscip. Sci. 2021, 13, 61–72.
118. Homayounieh, F.; Babaei, R.; Mobin, H.K.; Arru, C.D.; Sharifian, M.; Mohseni, I.; Zhang, E.; Digumarthy, S.R.; Kalra, M.K. Computed Tomography Radiomics Can Predict Disease Severity and Outcome in Coronavirus Disease 2019 Pneumonia. J. Comput. Assist. Tomogr. 2020, 44, 640–646.
119. Fu, L.; Li, Y.; Cheng, A.; Pang, P.; Shu, Z. A Novel Machine Learning-Derived Radiomic Signature of the Whole Lung Differentiates Stable From Progressive COVID-19 Infection: A Retrospective Cohort Study. J. Thorac. Imaging 2020, 35, 361.
120. Chen, H.; Zeng, M.; Wang, X.; Su, L.; Xia, Y.; Yang, Q.; Liu, D. A CT-Based Radiomics Nomogram for Predicting Prognosis of Coronavirus Disease 2019 (COVID-19) Radiomics Nomogram Predicting COVID-19. Br. J. Radiol. 2021, 94, 20200634.
121. Wu, Q.; Wang, S.; Li, L.; Wu, Q.; Qian, W.; Hu, Y.; Li, L.; Zhou, X.; Ma, H.; Li, H.; et al. Radiomics Analysis of Computed Tomography Helps Predict Poor Prognostic Outcome in COVID-19. Theranostics 2020, 10, 7231–7244.
122. Li, C.; Dong, D.; Li, L.; Gong, W.; Li, X.; Bai, Y.; Wang, M.; Hu, Z.; Zha, Y.; Tian, J. Classification of Severe and Critical Covid-19 Using Deep Learning and Radiomics. IEEE J. Biomed. Health Inform. 2020, 24, 3585–3594.
123. Cai, Q.; Du, S.-Y.; Gao, S.; Huang, G.-L.; Zhang, Z.; Li, S.; Wang, X.; Li, P.-L.; Lv, P.; Hou, G.; et al. A Model Based on CT Radiomic Features for Predicting RT-PCR Becoming Negative in Coronavirus Disease 2019 (COVID-19) Patients. BMC Med. Imaging 2020, 20, 118.
124. Yang, J.; Zheng, Y.; Gou, X.; Pu, K.; Chen, Z.; Guo, Q.; Ji, R.; Wang, H.; Wang, Y.; Zhou, Y. Prevalence of Comorbidities and Its Effects in Patients Infected with SARS-CoV-2: A Systematic Review and Meta-Analysis. Int. J. Infect. Dis. 2020, 94, 91–95.
125. Lu, X.; Cui, Z.; Pan, F.; Li, L.; Li, L.; Liang, B.; Yang, L.; Zheng, C. Glycemic Status Affects the Severity of Coronavirus Disease 2019 in Patients with Diabetes Mellitus: An Observational Study of CT Radiological Manifestations Using an Artificial Intelligence Algorithm. Acta Diabetol. 2021, 58, 575–586.
126. Wang, B.; Li, R.; Lu, Z.; Huang, Y. Does Comorbidity Increase the Risk of Patients with COVID-19: Evidence from Meta-Analysis. Aging 2020, 12, 6049–6057.
127. Zhang, C.; Yang, G.; Cai, C.; Xu, Z.; Wu, H.; Guo, Y.; Xie, Z.; Shi, H.; Cheng, G.; Wang, J. Development of a Quantitative Segmentation Model to Assess the Effect of Comorbidity on Patients with COVID-19. Eur. J. Med. Res. 2020, 25, 49.
128. Song, J.; Wang, H.; Liu, Y.; Wu, W.; Dai, G.; Wu, Z.; Zhu, P.; Zhang, W.; Yeom, K.W.; Deng, K. End-to-End Automatic Differentiation of the Coronavirus Disease 2019 (COVID-19) from Viral Pneumonia Based on Chest CT. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 2516–2524.
129. Yan, T.; Wong, P.K.; Ren, H.; Wang, H.; Wang, J.; Li, Y. Automatic Distinction between COVID-19 and Common Pneumonia Using Multi-Scale Convolutional Neural Network on Chest CT Scans. Chaos Solitons Fractals 2020, 140, 110153.
130. Liu, C.; Wang, X.; Liu, C.; Sun, Q.; Peng, W. Differentiating Novel Coronavirus Pneumonia from General Pneumonia Based on Machine Learning. Biomed. Eng. Online 2020, 19, 66.
131. Yang, Y.; Lure, F.Y.M.; Miao, H.; Zhang, Z.; Jaeger, S.; Liu, J.; Guo, L. Using Artificial Intelligence to Assist Radiologists in Distinguishing COVID-19 from Other Pulmonary Infections. J. Xray Sci. Technol. 2020, 29, 1–17.
132. Bai, H.X.; Wang, R.; Xiong, Z.; Hsieh, B.; Chang, K.; Halsey, K.; Tran, T.M.L.; Choi, J.W.; Wang, D.-C.; Shi, L.-B.; et al. Artificial Intelligence Augmentation of Radiologist Performance in Distinguishing COVID-19 from Pneumonia of Other Origin at Chest CT. Radiology 2020, 296, E156–E165.
133. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Using Artificial Intelligence to Detect COVID-19 and Community-Acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy. Radiology 2020, 296, E65–E71.
134. Ardakani, A.A.; Acharya, U.R.; Habibollahi, S.; Mohammadi, A. COVIDiag: A Clinical CAD System to Diagnose COVID-19 Pneumonia Based on CT Findings. Eur. Radiol. 2021, 31, 121–130.
135. Zeng, Q.-Q.; Zheng, K.I.; Chen, J.; Jiang, Z.-H.; Tian, T.; Wang, X.-B.; Ma, H.-L.; Pan, K.-H.; Yang, Y.-J.; Chen, Y.-P.; et al. Radiomics-Based Model for Accurately Distinguishing between Severe Acute Respiratory Syndrome Associated Coronavirus 2 (SARS-CoV-2) and Influenza A Infected Pneumonia. MedComm 2020, 1, 240–248.
136. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
137. Real, E.; Aggarwal, A.; Huang, Y.; Le, Q.V. Regularized Evolution for Image Classifier Architecture Search. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Palo Alto, CA, USA, 2019; Volume 33.
138. Roberts, M.; Driggs, D.; Thorpe, M.; Gilbey, J.; Yeung, M.; Ursprung, S.; Aviles-Rivero, A.I.; Etmann, C.; McCague, C.; Beer, L.; et al. Common Pitfalls and Recommendations for Using Machine Learning to Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT Scans. Nat. Mach. Intell. 2021, 3, 199–217.
139. Alizadehsani, R.; Roshanzamir, M.; Hussain, S.; Khosravi, A.; Koohestani, A.; Zangooei, M.H.; Abdar, M.; Beykikhoshk, A.; Shoeibi, A.; Zare, A.; et al. Handling of Uncertainty in Medical Data Using Machine Learning and Probability Theory Techniques: A Review of 30 Years (1991–2020). Ann. Oper. Res. 2021, 1–42.
140. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 Image Data Collection. arXiv 2020, arXiv:2006.11988.
141. Calderon-Ramirez, S.; Yang, S.; Moemeni, A.; Colreavy-Donnelly, S.; Elizondo, D.A.; Oala, L.; Rodriguez-Capitan, J.; Jimenez-Navarro, M.; Lopez-Rubio, E.; Molina-Cabello, M.A. Improving Uncertainty Estimation With Semi-Supervised Deep Learning for COVID-19 Detection Using Chest X-Ray Images. IEEE Access 2021, 9, 85442–85454.
142. Zhou, J.; Jing, B.; Wang, Z.; Xin, H.; Tong, H. SODA: Detecting COVID-19 in Chest X-Rays with Semi-Supervised Open Set Domain Adaptation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021.
143. Ilyas, M.; Rehman, H.; Nait-ali, A. Detection of Covid-19 from Chest X-Ray Images Using Artificial Intelligence: An Early Review. arXiv 2020, arXiv:2004.05436.
144. López-Cabrera, J.D.; Orozco-Morales, R.; Portal-Diaz, J.A.; Lovelle-Enríquez, O.; Pérez-Díaz, M. Current Limitations to Identify COVID-19 Using Artificial Intelligence with Chest X-Ray Imaging. Health Technol. 2021, 1–14.
Figure 1. Workflow of image annotation, segmentation, and elaboration. The diagram illustrates the steps to follow when building a ML model using the radiological images.
Figure 2. Distribution of subjects included in the studies for the development of ML models for the diagnosis of COVID-19 pneumonia at CXR. The plot shows the distribution of the subjects included in the studies: in the legend in the right upper corner of the figure, the red bar represents the COVID-19 pneumonia group of patients, the yellow bar represents the non-COVID-19 pneumonia group of patients.
Figure 3. Distribution of subjects included in the studies for the development of ML models for the screening of COVID-19 pneumonia at CXR. The plot shows the distribution of the subjects included in the studies: in the legend in the right upper corner of the figure, the red bar represents the COVID-19 pneumonia group of patients, the yellow bar represents the non-COVID-19 pneumonia group of patients.
Figure 4. Distribution of subjects included in the studies for the development of ML models for the stratification and definition of severity and complications of COVID-19 pneumonia at CXR. The plot shows the distribution of the subjects included in the studies: in the legend in the right upper corner of the figure, the red bar represents the COVID-19 pneumonia group of patients.
Figure 5. Distribution of subjects included in the studies for the development of ML models for the diagnosis of COVID-19 pneumonia at Chest CT. The plot shows the distribution of the subjects included in the studies: the red bar represents the COVID-19 pneumonia group of patients, the yellow bar represents the non-COVID-19 pneumonia group of patients, the green bar represents the group of healthy patients, the blue bar represents the group of patients for which their health status was unclear.
Figure 6. Distribution of subjects included in the studies for the development of ML models for the screening of COVID-19 pneumonia at Chest CT. The plot shows the distribution of the subjects included in the studies: in the legend in the right upper corner of the figure, the red bar represents the COVID-19 pneumonia group of patients, the yellow bar represents the non-COVID-19 pneumonia group of patients.
Figure 7. Distribution of subjects included in the studies for the development of ML models for the stratification and definition of severity and complications of COVID-19 pneumonia at Chest CT. The plot shows the distribution of the subjects included in the studies: in the legend in the right upper corner of the figure, the red bar represents the COVID-19 pneumonia group of patients, the yellow bar represents the non-COVID-19 pneumonia group of patients.
Figure 8. Distribution of subjects included in the studies for the development of ML models for the differential diagnosis of COVID-19 pneumonia from other pneumonia at Chest CT. The plot shows the distribution of the subjects included in the studies: in the legend in the right upper corner of the figure, the red bar represents the COVID-19 pneumonia group of patients, the yellow bar represents the non-COVID-19 pneumonia group of patients.
Table 1. AI in the identification of COVID-19 pneumonia at Chest X-ray.

| Authors | Year | Population (No. of Patients) | ML Model | Results |
|---|---|---|---|---|
| Apostolopoulos et al. | 2020 | First dataset: 224 COVID+, 1204 COVID-; second dataset: 224 COVID+, 1218 COVID- | Different CNNs (VGG19, MobileNet v2, Inception, Xception, Inception ResNet v2) | acc 96.78%, sen 98.66%, spe 96.46% (binary); acc 93.48% (multi-class) |
| Ozturk et al. | 2020 | 127 COVID+ | DarkNet | acc 98.08%, sen 95.13%, spe 95.3% (binary); acc 87.02%, sen 85.35%, spe 92.18% (multi-class) |
| Wang et al. | 2020 | 358 COVID+, 13,604 COVID- | COVID-Net | acc 95%, sen 93%, spe 96% (multi-class) |
| Borkowski et al. | 2020 | Training: 484 COVID+, 1000 COVID-; validation: 10 COVID+, 20 COVID- | Microsoft Custom Vision | acc 97%, sen 100%, spe 95% (binary) |
| Chowdhury et al. | 2020 | 219 COVID+, 2659 COVID- | PDCOVID-Net | acc 96.58%, pre 96.58%, rec 96.59%, F1 96.58% (multi-class: COVID, normal, viral pneumonia) |
| Toraman et al. | 2020 | 231 COVID+ (1050 with data augmentation), 2100 COVID- | CapsNet | acc 89.48%, sen 84.22%, spe 92.11% (multi-class: COVID, normal, pneumonia) |
| Ouchicha et al. | 2020 | 219 COVID+, 2686 COVID- | CVDNet | acc 97.79%, sen 96.83%, spe 98.02% (multi-class: COVID, normal, pneumonia) |
| Togacar et al. | 2020 | 295 COVID+, 163 COVID- | MobileNet + SqueezeNet + SVM | acc 98.83%, sen 97.04%, spe 99.15% (multi-class: COVID, normal, pneumonia) |
| Hassantabar et al. | 2020 | 315 COVID+, 367 COVID- | CNN and DNN | CNN: acc 93.2%, sen 96.1%; DNN: acc 83.4%, sen 86% |
| Mukherjee et al. | 2021 | Various datasets | CNN | acc 96.13% |
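Tables 1–4 summarize study performance chiefly as accuracy (acc), sensitivity (sen) and specificity (spe). As a purely illustrative aside (our sketch, not code from any of the reviewed studies), these metrics are derived from the binary confusion matrix with COVID+ as the positive class:

```python
# Minimal sketch: accuracy, sensitivity and specificity from binary labels.
# The helper name `binary_metrics` and the toy data are our own illustration.

def binary_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)                      # all correct / all cases
    sensitivity = tp / (tp + fn) if tp + fn else 0.0        # COVID+ correctly flagged
    specificity = tn / (tn + fp) if tn + fp else 0.0        # COVID- correctly cleared
    return accuracy, sensitivity, specificity

# Toy example: 10 scans, 6 COVID+ and 4 COVID-
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
acc, sen, spe = binary_metrics(y_true, y_pred)  # 0.8, 0.833..., 0.75
```

Note that on the heavily imbalanced cohorts common in these tables (e.g., 358 COVID+ vs. 13,604 COVID-), accuracy alone can look high even for a weak classifier, which is why sensitivity and specificity are reported separately.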
Table 2. AI in the screening of COVID-19 pneumonia at Chest X-ray.

| Authors | Year | Population (No. of Patients) | ML Model | Results |
|---|---|---|---|---|
| Murphy et al. | 2020 | 217 COVID+, 237 COVID- | CAD4COVID-XRay | AUC 0.81, spe 85% |
| Wang et al. | 2020 | 53 COVID+, 13,592 COVID- | COVID-Net | acc 92.4% |
| Narin et al. | 2020 | 50 COVID+, 50 COVID- | ResNet-50, Inception V3, Inception-ResNet V2, ResNet101, ResNet152 | acc 98% (ResNet-50) |
| Zhang et al. | 2020 | Various datasets for internal and external validation | ResNet-18 | sen 72.00%, spe 97.97%, AUC 95.18% (binary) |
| Xia et al. | 2021 | 512 COVID+, 106 COVID- | DNN | AUC 0.919 (combining CXR and clinical features: AUC 0.952, sen 91.5%, spe 81.2%) |
| Bassi et al. | 2021 | 439 COVID+, 1625 COVID- | DenseNet201 and DenseNet121 | acc 100% |
Table 3. AI in the stratification and definition of severity and complications of COVID-19 pneumonia at CXR.

| Authors | Year | Population (No. of Patients) | ML Model | Results |
|---|---|---|---|---|
| Li et al. | 2020 | Various datasets | Convolutional Siamese NN | AUC 0.80 |
| Mushtaq et al. | 2021 | 697 COVID+ | qXR | Statistically significant prediction of negative outcome in ED patients |
| Zhu et al. | 2020 | 131 COVID+ | VGG16 | AI-predicted scores highly correlated with radiologist scores |
Table 4. AI in the differential diagnosis of COVID-19 pneumonia from other pneumonia at Chest X-ray.

| Authors | Year | Population (No. of Patients) | ML Model | Results |
|---|---|---|---|---|
| Varela-Santos et al. | 2021 | Various datasets (Cohen, Kermany) | FFNN, CNN | Various AUC values depending on the dataset/population/network considered |
| Jin et al. | 2021 | Various datasets (NIH Chest X-ray database and others): 543 COVID+, 600 COVID-, 600 normal | Hybrid ensemble model (AlexNet with ReliefF algorithm and SVM classifier) | acc 98.642%, spe 98.644%, sen 98.643%, AUC 0.9997 |
| Sharma et al. | 2020 | Various datasets | CovidPred | acc 93.8% |
| Tsiknakis et al. | 2020 | Various datasets (Cohen, QUIBIM imagingcovid19): 137 COVID+, 150 COVID-, 150 normal | Inception-V3 | sen 99%, spe 100%, acc 100%, AUC 1 (binary: COVID vs. other pneumonia) |
Table 5. AI in the identification of COVID-19 pneumonia and its complications at Chest CT.

| Authors | Year | ML Model | Population (No. of Patients) | Results |
|---|---|---|---|---|
| Anastasopoulos et al. | 2020 | U-Net | 197 COVID+, 141 COVID- | Dice coefficient: 0.97 |
| Yang et al. | 2020 | DenseNet | 146 COVID+, 149 COVID- | AUC: 0.98 |
| Harmon et al. | 2020 | AH-Net (segmentation), DenseNet 3D/2D+1 (classification) | 922 COVID+, 1695 COVID- | AUC: 0.949 (original design), 0.941 (independent population) |
| Ni et al. | 2020 | MVP-Net, 3D U-Net | 14,435 (training): 2154 COVID+, 12,281 COVID-; 96 COVID+ (testing) | Accuracy: 0.82 (per-lobe level), 0.94 (per-patient level) |
| Chen et al. | 2020 | U-Net++ with a ResNet50 backbone | 106 (training and retrospective testing): 51 COVID+, 55 COVID-; 27 (internal prospective testing): 16 COVID+, 11 COVID-; 100 (external prospective testing): 50 COVID+, 50 COVID- | Accuracy: 95.24 (retrospective testing), 92.59 (internal prospective testing), 96 (external prospective testing) |
| Zhang et al. | 2020 | QCT | 2460 COVID+ | Identification of lesions |
| Ma et al. | 2020 | QCT | 18 COVID+ | Identification of lesions and dynamic changes |
| Du et al. | 2020 | QCT | 125 COVID+ | Identification of lesions and dynamic changes |
| Lessmann et al. | 2020 | Two-stage U-Net (lobe segmentation and labeling), 3D U-Net with nnU-Net framework (CT severity score prediction), 3D-inflated Inception (CO-RADS score prediction) | 476 (training); 105 (internal test): 58 COVID+, 47 COVID-; 262 (external test): 179 COVID+, 83 COVID- | AUC: 0.95 (internal testing), 0.88 (external testing) |
| Liu et al. | 2021 | Radiomics | 115 COVID+, 435 COVID- | AUC: 0.93 |
| Fang et al. | 2020 | Radiomics | 239 (training): 136 COVID+, 103 COVID-; 90 (validation): 56 COVID+, 34 COVID- | AUC: 0.955 |
| Chen et al. | 2020 | Radiomics | 84 COVID+ | AUC: 0.94 |
| Voulodimos et al. | 2020 | FCN, U-Net | 10 COVID+ | Unclear data; FCN accuracy ~0.9, U-Net accuracy >0.9 (validation) |
| Sahood et al. | 2021 | U-Net, SegNet | 100 single-slice CT scans | Accuracy: SegNet 0.954, U-Net 0.949 |
| Mukherjee et al. | 2021 | CNN | 336 COVID+, 336 COVID- (CXR + CT) | AUC: 0.9808 (CXR + CT), 0.9731 (CT only) |
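Several segmentation studies in Table 5 report a Dice coefficient (e.g., 0.97 for the U-Net of Anastasopoulos et al.). As an illustrative sketch under our own toy data (not code from the cited papers), the Dice coefficient measures the overlap between a predicted lesion mask and the reference mask:

```python
# Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 no overlap.
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty: perfect

# Toy 4x4 slices: the predicted lesion covers 3 of the 4 reference voxels
ref  = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
score = dice_coefficient(pred, ref)  # 2*3 / (3+4) ≈ 0.857
```

Unlike per-voxel accuracy, Dice is insensitive to the large background of non-lesion voxels, which is why it is the standard score for lesion and lobe segmentation.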
Table 6. AI in the screening of COVID-19 pneumonia at Chest CT.

| Authors | Year | ML Model | Population (No. of Patients) | Results |
|---|---|---|---|---|
| Javor et al. | 2020 | ResNet50 | 209 COVID+, 209 COVID- | AUC: 0.956 |
| Mei et al. | 2020 | LeNet, YOLO, DenseNet (pipeline developed in previous work) | 419 COVID+, 486 COVID- | AUC: 0.92 |
| Hermans et al. | 2020 | Logistic regression (no DL) | 133 COVID+, 16 COVID- | AUC: 0.953 |
Table 7. AI in the stratification and definition of severity and complications of COVID-19 pneumonia at Chest CT.

| Authors | Year | ML Model | Population (No. of Patients) | Results |
|---|---|---|---|---|
| Chatzitofis et al. | 2021 | DenseNet201 | 497 COVID+ | AUC: 0.79–0.97 (moderate risk), 0.81–0.92 (severe risk), 0.93–1.00 (extreme risk) |
| Xiao et al. | 2020 | Instance-aware ResNet34 | 408 COVID+ | AUC: 0.892 |
| Zhu et al. | 2020 | DL | 408 COVID+ | Accuracy: 85.91 |
| Wang et al. | 2020 | DenseNet121-FPN (lung segmentation), COVID-19Net (novel; diagnostic and prognostic analysis) | 924 COVID+, 4448 COVID- | AUC (3 sets): 0.87, 0.88, 0.86 |
| Meng et al. | 2020 | De-COVID19-Net (novel) | 366 COVID+ | AUC: 0.943 |
| Li et al. | 2020 | DenseNet | 46 COVID+ | AUC: 0.93 |
| Ho et al. | 2021 | Custom plus assorted existing architectures | 297 COVID+ | AUC: 0.916 |
| Hu et al. | 2020 | Custom plus assorted existing architectures | 164 COVID+ | Identification of lesions |
| Li et al. | 2020 | QCT | 196 COVID+ | AUC: 0.97 |
| Zhang et al. | 2020 | QCT | 73 COVID+ | Identification of volumes and dynamic changes |
| Pan et al. | 2021 | QCT | 95 COVID+ | Correlation with CT score (Spearman's correlation coefficient 0.920) |
| Cheng et al. | 2020 | QCT | 30 COVID+ | Significant correlation with laboratory data, PSI and CT score |
| Ippolito et al. | 2020 | QCT | 108 COVID+ | Significant correlation with laboratory data and CT score |
| Mergen et al. | 2020 | QCT | 60 COVID+ | Significant correlation with laboratory and clinical data |
| Lanza et al. | 2020 | QCT | 222 COVID+ | AUC: 0.83 (oxygenation support), 0.86 (intubation) |
| Kimura-Sandoval et al. | 2020 | QCT | 166 COVID+ | AUC: 0.884 (mechanical ventilation), 0.876 (mortality) |
| Burian et al. | 2020 | QCT | 65 COVID+ | AUC: 0.79 |
| Liu et al. | 2020 | QCT | 134 COVID+ | AUC: 0.93 |
| Noll et al. | 2020 | QCT | 37 COVID+ | Correlation with clinical data |
| Durhan et al. | 2020 | QCT | 90 COVID+ | AUC: 0.902 (severe pneumonia), 0.944 (ICU admission) |
| Wang et al. | 2020 | QCT | 27 COVID+ | Correlation with clinical data |
| Qiu et al. | 2021 | Radiomics | 84 COVID+ | AUC: 0.87 |
| Homayounieh et al. | 2020 | Radiomics | 92 COVID+ | AUC: 0.99 (disease severity), 0.90 (outcome) |
| Fu et al. | 2020 | Radiomics | 64 COVID+ | AUC: 0.833 |
| Chen et al. | 2021 | Radiomics | 40 COVID+ | AUC (3 classifiers): 0.82, 0.88, 0.86; nomogram C-index: 0.85 |
| Wu et al. | 2020 | Radiomics | 492 COVID+ | AUC: 0.862 (early-phase group), 0.976 (late-phase group) |
| Li et al. | 2020 | DL-Radiomics | 217 COVID+ | AUC: 0.861 |
| Yue et al. | 2020 | Radiomics | 31 COVID+ | AUC (2 models): 0.97, 0.92 |
| Tan et al. | 2020 | Radiomics | 219 COVID+ | AUC (3 cohorts): 0.95, 0.95, 0.98 |
| Cai et al. | 2020 | Radiomics | 203 COVID+ | AUC: 0.812 |
| Lu et al. | 2021 | QCT | 126 COVID+ | AUC: 0.796 (PLV), 0.783 (PGV), 0.816 (PCV) |
| Zhang et al. | 2020 | QCT | 294 COVID+ | Dice coefficients >0.85, all accuracies >0.95 |
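The many QCT entries in Table 7 rest on density-based quantification: lung voxels are binned by Hounsfield-unit (HU) ranges and the fraction of poorly or non-aerated parenchyma is reported. The sketch below is our own illustration; the function name `aeration_fractions` and the toy data are assumptions, and the thresholds (well aerated −950 to −700 HU, poorly aerated −700 to −100 HU) are common conventions rather than the exact cut-offs used by the cited studies:

```python
# Hedged QCT sketch: fraction of well-, poorly- and non-aerated lung voxels.
import numpy as np

def aeration_fractions(hu, lung_mask):
    """Return (well, poor, non) aeration fractions inside the lung mask."""
    voxels = hu[lung_mask.astype(bool)]
    n = voxels.size
    well = np.sum((voxels >= -950) & (voxels < -700)) / n  # normally aerated
    poor = np.sum((voxels >= -700) & (voxels < -100)) / n  # e.g., ground-glass
    non = np.sum(voxels >= -100) / n                       # e.g., consolidation
    return well, poor, non

# Toy 1D "scan": 8 lung voxels spanning the aeration spectrum
hu = np.array([-900, -850, -800, -650, -500, -300, -50, 20])
mask = np.ones_like(hu)
well, poor, non = aeration_fractions(hu, mask)  # 0.375, 0.375, 0.25
```

In the reviewed studies, such fractions (often multiplied by lung volume) are the quantities correlated with CT severity scores, oxygenation support and ICU admission.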
Table 8. AI in the differential diagnosis of COVID-19 pneumonia from other pneumonia at Chest CT.

| Authors | Year | ML Model | Population (No. of Patients) | Results |
|---|---|---|---|---|
| Song et al. | 2020 | BigBiGAN | 98 COVID+, 103 COVID- | AUC: 0.972 (internal test), 0.850 (external validation) |
| Yan et al. | 2020 | EfficientNet-B0 | 206 COVID+, 412 COVID- | AUC: 0.962 (per-slice), 0.934 (per-scan) |
| Liu et al. | 2020 | Radiomics | 61 COVID+, 27 COVID- | AUC: 0.99 |
| Yang et al. | 2020 | ResUNet | 118 COVID+, 576 COVID- | AUC: 0.903 |
| Bai et al. | 2020 | EfficientNet-B4 | 521 COVID+, 665 COVID- | AUC: 0.95 (internal testing), 0.90 (independent testing) |
| Li et al. | 2020 | COVNet (novel) | 468 COVID+, 2854 COVID- | AUC: 0.96 |
| Abbasian Ardakani et al. | 2021 | COVIDiag | 306 COVID+, 306 COVID- | AUC: 0.965 |
| Zeng et al. | 2020 | Radiomics | 41 COVID+, 37 COVID- | AUC: 0.87 |
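The dominant metric across the CT tables is the AUC. As an illustrative aside (our sketch, not code from the reviewed studies), the AUC equals the probability that a randomly chosen COVID+ case receives a higher model score than a randomly chosen COVID- case, which can be computed directly from the pairwise rank comparisons:

```python
# AUC via the Mann-Whitney (rank-sum) formulation, ties counted as 0.5.
# O(n_pos * n_neg) is fine for an illustration; libraries use sorting instead.

def auc_score(y_true, scores):
    """AUC for binary labels 0/1 given continuous model scores."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5  # tie: half credit
    return wins / (len(pos) * len(neg))

# Toy example: one COVID+ case is ranked below a COVID- case
auc = auc_score([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])  # 3 of 4 pairs correct = 0.75
```

Because AUC is computed over score rankings rather than a single decision threshold, it is the natural summary for comparing models whose operating points differ, as they do across the heterogeneous studies in Tables 5–8.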
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Laino, M.E.; Ammirabile, A.; Posa, A.; Cancian, P.; Shalaby, S.; Savevski, V.; Neri, E. The Applications of Artificial Intelligence in Chest Imaging of COVID-19 Patients: A Literature Review. Diagnostics 2021, 11, 1317. https://doi.org/10.3390/diagnostics11081317
