Korean J Radiol. 2023 May;24(5):454-464. English.
Published online Apr 19, 2023.
Copyright © 2023 The Korean Society of Radiology
Original Article

A Nationwide Web-Based Survey of Neuroradiologists’ Perceptions of Artificial Intelligence Software for Neuro-Applications in Korea

Hyunsu Choi,1 Leonard Sunwoo,1,2 Se Jin Cho,1 Sung Hyun Baik,1 Yun Jung Bae,1 Byung Se Choi,1 Cheolkyu Jung,1 and Jae Hyoung Kim1
    • 1Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea.
    • 2Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, Korea.
Received November 21, 2022; Revised February 19, 2023; Accepted March 06, 2023.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Objective

We aimed to investigate current expectations and clinical adoption of artificial intelligence (AI) software among neuroradiologists in Korea.

Materials and Methods

In April 2022, a 30-item online survey was conducted among neuroradiologists of the Korean Society of Neuroradiology (KSNR) to assess current user experiences, perceptions, attitudes, and future expectations regarding AI for neuro-applications. Respondents with experience in using AI software were further investigated in terms of the number and type of software used, period of use, clinical usefulness, and future scope. Results were compared between respondents with and without experience with AI software through multivariable logistic regression and mediation analyses.

Results

The survey was completed by 73 respondents, accounting for 21.9% (73/334) of the KSNR members; 72.6% (53/73) were familiar with AI and 58.9% (43/73) had used AI software, with approximately 86% (37/43) using 1–3 AI software programs and 51.2% (22/43) having up to one year of experience with AI software. Among AI software types, brain volumetry software was the most common (62.8% [27/43]). Although 52.1% (38/73) considered AI currently useful in practice, 86.3% (63/73) expected it to be useful for clinical practice within 10 years. The main expected benefits were reducing the time spent on repetitive tasks (91.8% [67/73]) and improving reading accuracy and reducing errors (72.6% [53/73]). Respondents who had used AI software were more familiar with AI (adjusted odds ratio, 7.1 [95% confidence interval, 1.81–27.81]; P = 0.005). More than half of the respondents with AI software experience (55.8% [24/43]) agreed that AI should be included in training curricula, while almost all (95.3% [41/43]) believed that radiologists should coordinate to improve its performance.

Conclusion

A majority of respondents had experience with AI software and showed a proactive attitude toward adopting AI in clinical practice, suggesting that AI should be incorporated into training and that active participation in AI development should be encouraged.

Keywords
Artificial intelligence; Neuroradiology; Web-based survey; User experience; Perceptions

INTRODUCTION

Artificial intelligence (AI), especially deep learning, is rapidly transforming the medical field, including radiology [1, 2]. Radiologists are at the forefront of AI innovation because AI deals with various complex tasks, including image acquisition and processing, target detection, segmentation of the target organ, and classification of disease [1, 3, 4, 5]. AI research has shown promising outcomes by improving diagnostic accuracy, outcome prediction, and work efficiency [1, 6, 7]. To date, many commercial AI software packages have been released, some of which are actively used in clinical practice.

Neuroradiology is among the major radiology subspecialties in terms of the number and diversity of AI applications [8, 9, 10, 11]. More than one-third of recent AI-related publications on medical imaging are related to the central nervous system [1]. Furthermore, various commercial AI products for neuroimaging have been developed, such as brain volumetry software for evaluating neurodegenerative diseases, perfusion analysis software for evaluating acute ischemic stroke, and software for automated detection of intracranial hemorrhage or large vessel occlusion [10, 12, 13, 14, 15]. To date, of the 202 Conformité Européenne (CE)-marked AI software products, 71 (35.1%) target neuroradiology [16].

With the rapid development of AI-related technology, expectations are growing that AI will address unmet clinical needs, such as enhancing diagnostic accuracy and efficiency or generating new knowledge; at the same time, fear and concern about its side effects are also growing [17, 18, 19, 20]. The replacement of radiologists by AI software has been a major concern [1, 18]. In previous surveys, 48.6% of medical students cited AI as an obstacle to choosing radiology as a specialty, and 39% of radiologists were fearful of AI implementation [18, 20]. However, several recent studies have revealed that, although many radiologists agree that AI will have a significant impact on their work in the near future, their fear of AI decreases as their knowledge of AI increases [18, 19, 21].

There is considerable interest in AI research and its application in clinical practice in Korea [22]. As of 2021, the Ministry of Food and Drug Safety of Korea had approved 101 AI-based medical devices, 16 of which target neuroradiology [23]. However, no survey in Korea has yet investigated radiologists’ attitudes toward AI and its prospects. In addition, the interaction between commercial AI software and radiologists has not been investigated. Thus, we assessed the user experiences, perceptions, and attitudes of neuroradiologists regarding AI for neuro-applications using a nationwide web-based survey in Korea.

MATERIALS AND METHODS

The requirement for approval from the institutional review board of Seoul National University Bundang Hospital was waived (No. X-2211-790-901).

Survey

Two radiologists (L.S. and H.C., with 13 and 3 years of experience in radiology, respectively) generated the questionnaire. The survey was created using Google Forms (Google LLC) and distributed via email to 334 members of the Korean Society of Neuroradiology (KSNR) for two weeks in April 2022, inviting voluntary and anonymous participation. During the two weeks, three reminders were sent to encourage participation.

Questionnaire

The questionnaire (Supplementary Table 1) comprised 30 questions organized into three sections: demographics, user experiences and benefits/concerns regarding AI software, and perceptions and attitudes toward AI.

Demographics and Baseline Characteristics

The first part of the survey assessed the participants’ age, sex, years of practice, type of hospital, and professional position, as well as whether they had any experience in attending AI-related lectures or conducting AI research. This section comprised seven questions.

User Experience and Benefits/Concerns Regarding AI Software

Participants with experience in using AI software were further asked about the current state of their AI software use, including the frequency of use, number of software packages used, and route of purchase. We specifically investigated the types of AI software currently in use and included several questions regarding the direction of future AI use. This section comprised 12 questions.

Self-Assessed Perceptions, Attitudes, and Expectations Toward AI

Participants’ perceptions and attitudes were investigated, including whether they currently felt familiar with AI, whether they thought it would threaten their jobs in the future, and how well prepared they felt. For the questions on the benefits of and concerns about AI software, six and five options were given, respectively, and respondents chose the two items they considered most important. This section comprised ten questions.

Statistical Analysis

Results are presented as proportions of respondents. The Mann–Whitney U and Kruskal–Wallis tests were used to analyze differences in the degree of familiarity and preparedness according to demographic characteristics and experience in using AI software or conducting AI research. The Pearson correlation coefficient was used to assess the relationships among the degree of familiarity, the degree of preparedness, and the perceived threat to future jobs. Items on self-assessed perceptions of and attitudes toward AI (questions 8, 9, 10, 12, 13, 25, 26, 28, 29, and 30) were rated on a Likert scale.
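
To make these tests concrete, the following is a minimal Python sketch of the same group comparisons and correlation analysis using SciPy. The data file and every column name (familiarity, preparedness, used_ai, hospital_type) are hypothetical placeholders for the survey variables, not part of the actual analysis, which was performed in SPSS.

```python
# Minimal sketch of the nonparametric comparisons and correlation analysis.
# All column names are hypothetical placeholders for the survey variables.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical export of responses

# Mann-Whitney U test: familiarity (Likert score) by AI-software experience
users = df.loc[df["used_ai"] == 1, "familiarity"]
nonusers = df.loc[df["used_ai"] == 0, "familiarity"]
_, p_mw = stats.mannwhitneyu(users, nonusers, alternative="two-sided")

# Kruskal-Wallis test: preparedness across more than two hospital types
groups = [g["preparedness"].to_numpy() for _, g in df.groupby("hospital_type")]
_, p_kw = stats.kruskal(*groups)

# Pearson correlation: familiarity vs. preparedness
r, p_r = stats.pearsonr(df["familiarity"], df["preparedness"])

print(f"Mann-Whitney P = {p_mw:.3f}; Kruskal-Wallis P = {p_kw:.3f}; "
      f"r = {r:.3f} (P = {p_r:.3f})")
```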

The associations of independent variables with the degree of familiarity, fear, and perceived usefulness of AI within 10 years were assessed using multivariable logistic regression. Age, sex, type of hospital, professional position, and experience in using AI software or conducting AI-related research were selected as covariates. Results of the logistic regression analyses are presented as adjusted odds ratios (ORs) with 95% confidence intervals (CIs). Additionally, mediation analysis using the Sobel test was performed to determine whether experience in using AI software or conducting AI research mediated familiarity or fear. Statistical analyses were performed by the authors using IBM SPSS Statistics for Windows (version 21.0; IBM Corp.). Because this cross-sectional survey was intended to generate hypotheses, a nominal P value < 0.05 was deemed statistically significant; no Bonferroni adjustment was applied for multiple testing, given the concern of a substantial loss of statistical power [24].
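
Similarly, the regression and mediation steps could be sketched as follows. This is an illustrative reimplementation with statsmodels under assumed variable names (familiar, used_ai, ai_research, and so on); the published analysis itself was run in SPSS.

```python
# Minimal sketch of the multivariable logistic regression (adjusted ORs with
# 95% CIs) and a Sobel mediation test; all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("survey_responses.csv")

# Logistic regression: binary outcome (e.g., above-average familiarity)
# regressed on the selected covariates.
fit = smf.logit(
    "familiar ~ age_over_40 + sex + academic_hospital + professor"
    " + used_ai + ai_research",
    data=df,
).fit()
ors = np.exp(fit.params).rename("adjusted_OR")  # exponentiate coefficients
ci = np.exp(fit.conf_int())                     # 95% CIs on the OR scale
print(pd.concat([ors, ci], axis=1))

# Sobel test: does AI-software use mediate the research -> familiarity path?
# a: exposure -> mediator; b: mediator -> outcome (adjusted for exposure).
m1 = smf.ols("used_ai ~ ai_research", data=df).fit()
m2 = smf.ols("familiarity ~ used_ai + ai_research", data=df).fit()
a, sa = m1.params["ai_research"], m1.bse["ai_research"]
b, sb = m2.params["used_ai"], m2.bse["used_ai"]
z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)  # Sobel z-statistic
p_sobel = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"Sobel z = {z:.2f}, P = {p_sobel:.3f}")
```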

RESULTS

Demographics and Baseline Characteristics

Of the 334 KSNR members, 73 (21.9%) completed the survey. A summary of the respondents’ demographics is presented in Table 1. Most respondents were aged 30–49 years (71.2%), and the sex ratio was approximately even (female, 46.6% [34/73]). Most respondents worked at academic hospitals (91.8%, 67/73).

Table 1
Demographic Distribution and Baseline Characteristics of Respondents (n = 73)

Most respondents had experience with lectures or training related to AI software (83.6%, 61/73). Additionally, 60.3% (44/73) of the respondents had experience in conducting AI-related research, and 58.9% (43/73) had used AI software. The type of hospital differed significantly between the groups with and without experience in using AI software; a larger proportion of the experienced group worked at a university hospital (P = 0.029).

Current Usage Experience and Benefits of AI

More than half of the respondents (52.1%, 38/73) thought that AI is currently useful in practice. Respondents working at academic hospitals more frequently answered that current AI software is useful (P = 0.001) (Table 2 and Supplementary Table 2).

Table 2
Subgroup Comparison of Attitudes toward Artificial Intelligence in 73 Respondents

Among the 43 respondents who had used AI software, approximately 86% (37/43) had experience with 1–3 AI software programs (Table 3). More than half (51.2%, 22/43) had up to one year of experience with AI software, and 41.9% (18/43) had 1–3 years of experience. Most respondents had tried demo versions of AI software (60.5%, 26/43), while 30.2% (13/43) had purchased software through their institutions. Respondents who had used AI software for more than one year felt more familiar with AI software and more ready for its future introduction than those with less than one year of experience (P = 0.022 and P = 0.035, respectively).

Table 3
Self-assessed Current User Experiences, Perceptions, Attitudes, and Future Expectations Regarding AI in 43 Respondents Who Had Experience with AI Software

Figure 1 shows that brain volumetry software for neurodegenerative diseases was the most commonly used (62.8%, 27/43), followed by brain tumor analysis software (32.6%, 14/43), intracranial hemorrhage detection software (25.6%, 11/43), and cerebral infarction detection software (16.3%, 7/43). There was no significant difference in the distribution of answers between the software currently used and the software considered to need future development (P = 0.756), suggesting that many respondents thought the current software requires further improvement. Although only a few respondents had used software for detecting cerebral aneurysms and/or vascular stenosis, 48.8% felt that such software should be developed in the future.

Fig. 1
AI software currently in use and software that needs to be improved or developed in the future. The percentages do not sum to 100% because this was a multiple-response question. Data are shown as percentages calculated based on input from 43 respondents. *Difference in the distribution of answers between the software currently used and software considered to need future development. N/A = not applicable

When asked whether AI software helped with decision-making, 34.9% (15/43) answered positively, 41.9% (18/43) were neutral, and 20.9% (9/43) answered negatively. Participants responded that AI software improved diagnostic accuracy, reduced reading time and inter-reader variability, and increased research applicability (Fig. 2).

Fig. 2
Responses for the current benefit of using artificial intelligence (AI) software. Data are shown as percentages calculated based on input from 43 respondents. N/A = not applicable

Current Perceptions and Attitudes

The respondents’ perceptions of and attitudes toward AI are shown in Figure 3. Among the respondents, 72.6% (53/73) felt familiar with AI to an average extent or more relative to their peers. Respondents with experience in using AI software more often worked at university hospitals (P = 0.029), were more familiar with AI (P < 0.001; adjusted OR, 7.1 [95% CI, 1.81–27.81]; P = 0.005 in the multivariable model) (Table 4), and more often considered AI useful (P = 0.050). Mediation analysis showed that experience in using AI software partially mediated the association with familiarity (approximately 11.2% of the effect; P = 0.027).

Fig. 3
Respondents’ perceptions and expectations toward artificial intelligence (AI). Data are shown as percentages calculated based on input from 73 respondents.

Table 4
Predictors of Familiarity toward AI in a Multivariable Logistic Regression Model in 43 Respondents

Answers to whether AI could threaten or replace the role of radiologists in the future were approximately evenly distributed: 37.0% (27/73) of respondents answered that AI could replace radiologists, while 38.4% (28/73) answered that it would not. Respondents older than 40 years and those with more than 10 years of practice showed higher levels of fear (P = 0.029 and P = 0.011, respectively). Those working at academic hospitals felt more prepared (P = 0.043) (Table 2 and Supplementary Table 2). More than half (56.2%, 41/73) felt prepared for the introduction of current AI software to an average extent or higher.

Those with experience in AI research were more familiar with AI (adjusted OR, 4.4 [95% CI, 1.01–19.08]; P = 0.047) (Table 4) and less likely to fear replacement (adjusted OR, 0.2 [95% CI, 0.04–0.95]; P = 0.043). They also felt more ready for the introduction of AI software (P = 0.024). The degree of familiarity correlated positively with readiness for the introduction of AI software (r = 0.417, P < 0.001), whereas the degree of feeling ready correlated negatively with the degree of feeling threatened in the future (r = -0.397, P = 0.001).

Regardless of the fear of job replacement by AI, most neuroradiologists predicted that AI software would become useful in practice within 10 years (86.3%, 63/73), with professors working at academic hospitals being more confident about this (adjusted OR, 3.4 [95% CI, 1.00–11.78]; P = 0.05). Additionally, 84.9% (62/73) of respondents were willing to purchase AI software in the future.

Expectations

The most anticipated benefit was a reduction in the time spent on repetitive tasks (91.8%, 67/73), followed by improved reading accuracy and fewer errors (72.6%, 53/73) (Fig. 4). The greatest concern was making an incorrect decision because of AI (54.8%, 40/73), followed by mistrust of the rationale behind AI assessments (47.9%, 35/73); thus, most concerns involved the reliability of the software (Fig. 5). Approximately 40% (29/73) of respondents worried about losing the initiative in the healthcare industry to AI companies and about a reduced scope for the role of radiologists.

Fig. 4
Responses for the perceived future benefits of artificial intelligence (AI) use. The percentages do not sum to 100% because this was a multiple-response question. Data are shown as percentages calculated based on input from 73 respondents.

Fig. 5
Responses for the perceived concerns about the use of artificial intelligence (AI). The percentages do not sum to 100% because this was a multiple-response question. Data are shown as percentages calculated based on input from 73 respondents.

Among the 43 respondents who had used AI software, more answered that AI software could assist clinicians’ decision-making when a radiologist is unavailable than answered that it helped their own decision-making (62.8% vs. 34.9%, P = 0.011). However, almost all respondents (95.3%, 41/43) answered that the coordination of radiologists is essential to improving AI performance. Responses were mixed on having AI software first triage studies as normal or abnormal so that only the abnormal studies are read in urgent clinical settings (positive vs. negative, 32.6% vs. 41.9%). More than half of the respondents (55.8%, 24/43) agreed that topics related to the use of AI software should be added to medical school curricula and hospital training (Table 3).

DISCUSSION

The results of this survey suggest that neuroradiologists have proactive attitudes toward, and high expectations of, the future use of AI software. Respondents showed some fear about the introduction of AI, but the more prepared they felt, the less fear they reported. In addition, respondents with experience using commercial AI software found it useful in terms of improved reading accuracy, time savings, and research applicability. However, participants expressed concerns regarding the reliability of AI software.

In this study, approximately 60% of the respondents had experience using AI software. This is higher than in a 2021 study of 230 Australian and New Zealand radiologists, in which less than 20% had experience using AI software [21]. The high proportion of neuroradiologists working at academic hospitals may explain this high exposure to AI. In addition, more than half of the respondents (51.2%) had less than one year of experience using AI software, reflecting the recent, rapid introduction of AI software into clinical practice. Most respondents (86.3%) agreed that AI software would have a noticeable impact on practice within 10 years, consistent with previous surveys [18, 21].

Fear that AI software could replace radiologists was found in 37.0% of participants, similar to a study of 1041 European radiologists [18]. In that study, increasing age was associated with decreasing fear, whereas in our study, older respondents felt more fear related to AI. This may reflect a concern among older individuals about keeping pace with technological advances. Moreover, the longer the experience of using AI, the more familiar the respondents felt, and those who felt familiar with AI were more ready to adopt AI software. This is consistent with previous findings that the more knowledgeable a person is about AI, the less afraid they are of introducing the new technology [17, 18, 25]. Another survey of European radiologists found that neuroradiology is among the radiology subspecialties expected to be most affected by AI [26], which may explain the concern of neuroradiologists.

The most anticipated advantage of introducing AI was the optimization of radiologists’ work, such as reduced reading time (91.8%) and increased diagnostic accuracy with fewer errors (72.6%), consistent with a previous study [18]. The most common concerns about adopting AI software were incorrect decisions caused by machine errors (54.8%) and a lack of trust in the basis of the AI’s judgment (47.9%), as AI models are considered black boxes whose results are not well explained. Whereas a survey by the Italian Society of Medical and Interventional Radiology (SIRM) found that most concerns related to a decreased professional reputation of radiologists (60.3%) and reduced learning opportunities (25.5%), our survey revealed doubts about the AI software itself [18]. The differences between the two surveys may result from differences in AI usage experience and the distribution of AI software in each country [18].

In this survey, neuroradiologists used various types of AI software, and they expected improvements in the current software packages rather than the development of different kinds of AI software. However, although software for detecting cerebral arterial aneurysms or stenosis is not widely used, many neuroradiologists (48.8%) expected its future development. Radiologists thought that AI software would be more helpful to clinicians than to themselves. Approximately 95% of respondents said that the coordination of radiologists is necessary to improve the performance of AI software, suggesting that although AI software can provide some useful information, it cannot replace the role of radiologists in its current form. Additionally, we found mixed responses regarding the use of AI software for patient triage in urgent settings such as the emergency room, with 23.3% expressing strong disagreement. This may reflect concerns about a reduced role for radiologists and distrust of AI software reliability. To improve familiarity with and trust in AI software and to narrow the gap between AI software and its users, AI software developers should communicate with radiologists through symposia or seminars, obtain feedback, and enable radiologists to participate in the development and validation process.

To the best of our knowledge, this is the first study to investigate user experiences with commercial AI software and the perception of AI among radiologists in Korea. Since the introduction of AI in the medical field, several studies have evaluated the attitudes of radiologists and radiology residents toward AI, but none have focused on the use of commercial AI software [18, 19, 21]. We therefore believe it is important to portray the current state of AI software use by radiologists. In addition, this study had a higher response rate than previous surveys: 22% of KSNR members responded to the questionnaire, compared with response rates of less than 10% in previous surveys [18, 19, 21, 26]. This suggests that the current status of the target group was reflected relatively well.

Our study had some limitations. First, the participants were limited to members of the KSNR and may not reflect the opinions of all radiologists. However, neuroradiology is one of the most active radiology subspecialties in AI research, and many commercial neuroimaging AI products have already been developed and are currently used in clinical practice. Thus, the current results may help predict the changes that will occur across the entire radiological community in the near future. Second, most respondents worked at academic hospitals, reflecting the high proportion of KSNR members in academic positions. As these radiologists are more frequently exposed to AI software, the overall fear of AI may have been underestimated. It is also possible that young neuroradiologists in their 30s or 40s, who have more proactive attitudes and are more familiar with AI, participated more actively in this survey. However, the level of fear in this study was comparable to that reported in a previous study [18], and many respondents without experience in using AI software also participated. Third, full KSNR membership is not limited to neuroradiology subspecialists, and some non-neuroradiologists may have participated in the survey; it would therefore have been better to collect information on subspecialty and detailed field of practice. Additionally, some respondents only had experience using AI applications for other subspecialties, whereas this survey primarily aimed to evaluate user experiences with neuroimaging AI software; the opinions of these respondents may not fully represent views on neuro-applications. Finally, although the response rate was higher than in other studies, future studies should consider providing incentives to encourage more active participation. A follow-up study targeting all radiology subspecialties and conducted over a longer period would better assess the general perception of AI and user experience with AI software. In addition, because AI software can be deployed in various ways, an in-depth comparison of user experiences and attitudes for each deployment strategy is needed.

In conclusion, a majority of the responding neuroradiologists had experience with AI software and showed a proactive attitude toward adopting AI in clinical practice. The survey suggests that, to address distrust of AI software reliability, AI should be incorporated into training, and active participation in AI development should be encouraged.

Supplement

The Supplement is available with this article at https://doi.org/10.3348/kjr.2022.0905.


Notes

Conflicts of Interest: Cheolkyu Jung, a contributing editor of the Korean Journal of Radiology, was not involved in the editorial evaluation or decision to publish this article. All remaining authors have declared no conflicts of interest.

Author Contributions:

  • Conceptualization: Leonard Sunwoo.

  • Data curation: Hyunsu Choi.

  • Formal analysis: Hyunsu Choi.

  • Funding acquisition: Leonard Sunwoo.

  • Investigation: Hyunsu Choi.

  • Methodology: Hyunsu Choi, Leonard Sunwoo.

  • Project administration: Leonard Sunwoo.

  • Resources: Hyunsu Choi, Leonard Sunwoo.

  • Software: Hyunsu Choi, Leonard Sunwoo.

  • Supervision: Leonard Sunwoo, Se Jin Cho, Sung Hyun Baik, Yun Jung Bae, Byung Se Choi, Cheolkyu Jung, Jae Hyoung Kim.

  • Validation: Leonard Sunwoo.

  • Visualization: Hyunsu Choi.

  • Writing—original draft: Hyunsu Choi, Leonard Sunwoo.

  • Writing—review & editing: all authors.

Funding Statement: This research was funded by the SNUBH Research Fund (No. 09-2019-006).

Availability of Data and Material

The datasets generated or analyzed during the study are available from the corresponding author on reasonable request.

Acknowledgments

The authors acknowledge the participation of the members of the Korean Society of Neuroradiology (KSNR) in this survey.

References

    1. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018;2:35.
    2. Chartrand G, Cheng PM, Vorontsov E, Drozdzal M, Turcotte S, Pal CJ, et al. Deep learning: a primer for radiologists. Radiographics 2017;37:2113–2131.
    3. Havaei M, Guizard N, Larochelle H, Jodoin PM. Deep learning trends for focal brain pathology segmentation in MRI. In: Holzinger A, editor. Machine learning for health informatics: state-of-the-art and future challenges. Cham: Springer; 2016. pp. 125-148.
    4. Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging 2016;35:1207–1216.
    5. Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, et al. Artificial intelligence in medical imaging of the liver. World J Gastroenterol 2019;25:672–682.
    6. Hwang EJ, Goo JM, Yoon SH, Beck KS, Seo JB, Choi BW, et al. Use of artificial intelligence-based software as medical devices for chest radiography: a position paper from the Korean Society of Thoracic Radiology. Korean J Radiol 2021;22:1743–1748.
    7. Lee S, Shin HJ, Kim S, Kim EK. Successful implementation of an artificial intelligence-based computer-aided detection system for chest radiography in daily clinical practice. Korean J Radiol 2022;23:847–852.
    8. Sakai K, Yamada K. Machine learning studies on major brain diseases: 5-year trends of 2014-2018. Jpn J Radiol 2019;37:34–72.
    9. Olthof AW, van Ooijen PMA, Rezazade Mehrizi MH. Promises of artificial intelligence in neuroradiology: a systematic technographic review. Neuroradiology 2020;62:1265–1278.
    10. van Leeuwen KG, Schalekamp S, Rutten MJCM, van Ginneken B, de Rooij M. Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur Radiol 2021;31:3797–3804.
    11. Choi KS, Sunwoo L. Artificial intelligence in neuroimaging: clinical applications. Investig Magn Reson Imaging 2022;26:1–9.
    12. Persson K, Barca ML, Cavallin L, Brækhus A, Knapskog AB, Selbæk G, et al. Comparison of automated volumetry of the hippocampus using NeuroQuant® and visual assessment of the medial temporal lobe in Alzheimer’s disease. Acta Radiol 2018;59:997–1001.
    13. Sheth SA, Lopez-Rivera V, Barman A, Grotta JC, Yoo AJ, Lee S, et al. Machine learning–enabled automated determination of acute ischemic core from computed tomography angiography. Stroke 2019;50:3093–3100.
    14. Chilamkurthy S, Ghosh R, Tanamala S, Biviji M, Campeau NG, Venugopal VK, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018;392:2388–2396.
    15. Yahav-Dovrat A, Saban M, Merhav G, Lankri I, Abergel E, Eran A, et al. Evaluation of artificial intelligence-powered identification of large-vessel occlusions in a comprehensive stroke center. AJNR Am J Neuroradiol 2021;42:247–254.
    16. Radboud UMC. Products. AI for Radiology.com Web site. [Accessed October 25, 2022].
    17. Gallix B, Chong J. Artificial intelligence in radiology: who’s afraid of the big bad wolf? Eur Radiol 2019;29:1637–1639.
    18. Huisman M, Ranschaert E, Parker W, Mastrodicasa D, Koci M, Pinto de Santos D, et al. An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. Eur Radiol 2021;31:7058–7066.
    19. Coppola F, Faggioni L, Regge D, Giovagnoni A, Golfieri R, Bibbolino C, et al. Artificial intelligence: radiologists’ expectations and opinions gleaned from a nationwide online survey. Radiol Med 2021;126:63–71.
    20. Gong B, Nugent JP, Guest W, Parker W, Chang PJ, Khosa F, et al. Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: a national survey study. Acad Radiol 2019;26:566–577.
    21. Scheetz J, Rothschild P, McGuinness M, Hadoux X, Soyer HP, Janda M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep 2021;11:5193.
    22. Shin SY. Current status and future direction of digital health in Korea. Korean J Physiol Pharmacol 2019;23:311–315.
    23. Ministry of Food and Drug Safety. The Ministry of Food and Drug Safety has become a leading country in regulating AI medical devices. Ministry of Food and Drug Safety Web site. [Published May 22, 2022]. [Accessed September 24, 2022].
    24. Perneger TV. What’s wrong with Bonferroni adjustments. BMJ 1998;316:1236–1238.
    25. Pinto Dos Santos D, Giese D, Brodehl S, Chon SH, Staab W, Kleinert R, et al. Medical students’ attitude towards artificial intelligence: a multicentre survey. Eur Radiol 2019;29:1640–1646.
    26. European Society of Radiology. Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 2019;10:105.
