
Theory of radiologist interaction with instant messaging decision support tools: A sequential-explanatory study

  • John Lee Burns ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    jolburns@iu.edu

    Affiliations Department of Radiology & Imaging Sciences, Indiana University School of Medicine, Indianapolis, Indiana, United States of America, Department of BioHealth Informatics, Indiana University Luddy School of Informatics, Computing, and Engineering, Indianapolis, Indiana, United States of America

  • Judy Wawira Gichoya,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia, United States of America

  • Marc D. Kohli,

    Roles Conceptualization, Validation, Writing – review & editing

    Affiliation Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, United States of America

  • Josette Jones,

    Roles Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliation Department of BioHealth Informatics, Indiana University Luddy School of Informatics, Computing, and Engineering, Indianapolis, Indiana, United States of America

  • Saptarshi Purkayastha

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation Department of BioHealth Informatics, Indiana University Luddy School of Informatics, Computing, and Engineering, Indianapolis, Indiana, United States of America

Abstract

Radiology-specific clinical decision support systems (CDSS) and artificial intelligence are poorly integrated into the radiologist workflow. Current research and development efforts in radiology CDSS focus on 4 main interventions organized around exam-centric time points: after image acquisition, intra-report support, post-report analysis, and radiology workflow adjacent. We review the literature surrounding CDSS tools at these time points, requirements for CDSS workflow augmentation, and technologies that support clinician-to-computer workflow augmentation. We develop a theory of radiologist-decision tool interaction using a sequential explanatory study design. The study consists of 2 phases: a quantitative survey followed by a qualitative interview study. The phase 1 survey identifies differences between average users and radiologist users of software interventions using the User Acceptance of Information Technology: Toward a Unified View (UTAUT) framework. Phase 2 semi-structured interviews provide narratives on why these differences are found. To build this theory, we propose a novel solution called Radibot: a conversational agent capable of engaging clinicians with CDSS as an assistant using the existing instant messaging systems that support hospital communications. This work contributes an understanding of how radiologist users differ from the average user and can be utilized by software developers to increase satisfaction with CDSS tools within radiology.

Author summary

There is a need for human-machine interfaces between radiologists and clinical decision support systems (CDSS). Among the variety of systems radiologists interact with, the literature presents no best fit for CDSS. After reviewing the current literature surrounding CDSS use in healthcare and radiology, we propose a novel solution: a conversational agent capable of engaging clinicians as a team member using the existing instant messaging systems that support hospital communications.

We test the acceptance of this intervention using the User Acceptance of Information Technology: Toward a Unified View (UTAUT) framework in survey and interview formats. Within our sample group, we found that radiologists have a high intent to use and a positive attitude towards this intervention. Our sample of radiologists deviated from the standard user that UTAUT expects, suggesting that radiologists' acceptance of software tools differs from that of the standard user. This work builds a theory of radiologist-decision support tool interaction that may be useful for software developers and systems integrators.

1 Introduction

Clinical decision support systems (CDSS) are software designed to enhance clinical decision making, capable of combining clinical knowledge bases and data to provide suggestions for patient care [1]. Radiology domain-specific CDSS applications are poorly integrated into the radiologist workflow [2]. In 2017, Dreyer and Geis described a transition in radiology toward integrating Artificial Intelligence (AI) into the radiologist workflow: "In the past, radiology was reinvented as a fully digital domain when new tools, PACS and digital modalities, were combined with new workflows and environments that took advantage of the tools. Similarly, a new cognitive radiology domain will appear when AI tools combine with new human-plus-computer workflows and environments." They describe the concept of a "Centaur Radiologist" as a physician utilizing AI-augmented CDSS workflows to increase efficiency [3]. We expand this term to "future radiologist," inclusive of non-AI techniques in CDSS.

However, the future radiologist concept will not materialize if the tools are poorly integrated, with cumbersome human-computer interfaces [4]. Deliberate and sustained effort applying inter-disciplinary knowledge from human-centered computing, psychology, cognitive sciences, and medicine is required to build CDSS for the future radiologist [5]. In this work we create a basis of knowledge for a theory of radiologist-decision tool interaction using a sequential explanatory study design. The study consists of 2 phases: a quantitative survey followed by a qualitative interview study. The phase 1 survey identifies differences between average users and radiologist users of software interventions using the User Acceptance of Information Technology: Toward a Unified View (UTAUT) framework [6]. Phase 2 semi-structured interviews provide narratives on why these differences are found. To build this theory, we propose a novel solution called Radibot: a conversational agent (CA) capable of engaging clinicians with CDSS as an assistant using the existing instant messaging (IM) systems that support hospital communications. This work contributes an understanding of how radiologist users differ from the average user and can be utilized by software developers to increase satisfaction with CDSS tools within radiology.

1.1 Background

We expect that the future radiologist will routinely interact with CDSS at each stage of their workflow. We designed Radibot for diagnostic radiologists, with interventions at each of the following time points: after image acquisition, during report creation, after report creation, and between studies. A brief overview of existing interventions at each time point follows; Section C.1 in S1 Appendix provides an extended background of radiology CDSS, including standards and features of radiology workflow and associated systems, an overview of the backend workflow engines that support radiology CDSS tools, and interventions at these time points.

  • After Image Acquisition: radiologists combine a variety of data to make interpretations of images. Interventions include Computer-Aided Detection (CAD), where regions of interest are highlighted for later interpretation; Computer-Aided Diagnosis (CADx), where the computer presents a diagnosis but does not necessarily highlight regions of interest; and patient history/metadata presentation. These interventions generally function within the tool radiologists use to view images, the Picture Archiving and Communication System (PACS), though some interface with the Radiology Information System (RIS) that houses scheduling/billing/patient metadata, the Voice Recognition system (VR) used for report dictation, or external purpose-built clients [7–17].
  • During Report Creation: these interventions embed evidence-based guideline processes during dictation and are found within VR. Guidelines are navigated using drill-through commands or natural language processing (NLP) of the dictation to generate report text [18,19].
  • After Report Creation: in most RIS, reports are stored as unstructured text. Interventions in post-report analysis include extracting categorical data, automating radiologist-clinician communication, and quality improvement systems. By generating summative report metadata, these interventions enable context switching and reduce fatigue when a radiologist is asked to return to a finished report [20–32].
  • Between Studies: existing adjacent to the radiologist workflow, these interventions influence decision making at an individual or business level and consist of workflow-prioritization, management, and feedback tools. These tools utilize metadata found in Health Level 7 (HL7) or Digital Imaging and Communications in Medicine (DICOM) messages (a minimal parsing sketch follows this list). Users interface with them outside of clinical systems, e.g., via web dashboards, or they are integrated into PACS/RIS/VR presentation layers [33–40].
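These metadata-driven tools consume routine HL7 feeds. As a minimal illustration of that parsing step (not the interface of any particular RIS or dashboard), the Python sketch below indexes the pipe-delimited segments of an HL7 v2 order message and pulls out a few worklist-relevant fields; the sample message and the choice of fields are illustrative assumptions.

```python
# Minimal sketch: extracting worklist metadata from an HL7 v2 order message.
# The pipe-delimited segment grammar is standard HL7 v2; the sample message
# and the fields a dashboard would need are illustrative assumptions.
SAMPLE_HL7 = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|202012011230||ORM^O01|12345|P|2.3",
    "PID|1||MRN001||DOE^JANE",
    "OBR|1|ACC123||71020^CHEST XRAY 2 VIEWS|||202012011200",
])

def parse_segments(message: str) -> dict:
    """Index HL7 segments by their segment ID (first occurrence only)."""
    segments = {}
    for raw in message.split("\r"):
        fields = raw.split("|")
        segments.setdefault(fields[0], fields)
    return segments

segs = parse_segments(SAMPLE_HL7)
order_id = segs["OBR"][2]                      # OBR-2, placer order number
code, _, desc = segs["OBR"][4].partition("^")  # OBR-4, universal service ID
patient = segs["PID"][5].replace("^", " ")     # PID-5, patient name
print(f"Order {order_id}: {desc} ({code}) for {patient}")
```

A production feed would arrive via an interface engine and use a vetted HL7 library rather than hand-rolled string splitting.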

Diagnostic radiologists' clinical work is mostly completed using systems including PACS, RIS, and VR, with nearly every interaction being digitally augmented [41]. Given these mostly digital clinical workflows, radiology-specific CDSS implementations are uniquely positioned to provide support and effect change. Radiology-specific guidelines for "advisor systems" were laid out by Teather et al. in 1985, while Khorasani in 2006 provided features for the development of clinical decision support systems [42,43]. Outside of radiology, CDSS are built following the Ten Commandments for Effective Clinical Decision Support: Making the Practice of Evidence-Based Medicine a Reality. These 10 commandments summarize elements the authors found critical for successful implementation of decision support in clinical workflows:

  1. “Speed is everything
  2. Anticipate needs and deliver in real time
  3. Fit into the user's workflow
  4. Little things can make a big difference (usability of CDSS tools)
  5. Recognize that physicians will strongly resist stopping
  6. Changing direction is easier than stopping
  7. Simple interventions work best
  8. Ask for additional information only when you really need it
  9. Monitor impact, get feedback, and respond
  10. Manage and maintain your knowledge-based systems” [44].

Commandments 2, 3, 7, 10, and 1 (anticipate needs, fit into user workflow, simple interventions, knowledge system maintenance, and speed) appear with higher frequency when aligned with radiology-specific guidance. An alignment of the general CDSS and radiology-specific CDSS guidelines is found in Table 1. Differences in CDSS priorities underscore the need for more research in this area and are mapped to UTAUT concepts and the phase 1 hypotheses.

Table 1. Commandments compared to published considerations.

https://doi.org/10.1371/journal.pdig.0000297.t001

Other frameworks exist for testing usability and user experience of software designs in survey form. However, UTAUT is unique in the number of constructs it can capture quickly. UTAUT was developed as a theoretical model that combines measures like the System Usability Scale and the Technology Acceptance Model. Other measures can capture intent to use, but do not create the linkages to potential moderating factors of interest for this study, including expected effort, expected performance, anxiety, age, and experience with similar tools [6]. UTAUT is an accepted and comprehensive model for technology adoption [6,45–47]. Within the UTAUT framework we focused on the factors below; where appropriate, the UTAUT concepts are linked to the CDSS commandments described above [5] (a toy scoring sketch follows this list):

  • Behavioral intention to use the system
    ○ Positive behavioral intent indicates stronger intent to use the system if created
  • Attitude toward using the technology
    ○ Positive attitude indicates a positive reaction to using the application
  • Performance Expectancy
    ○ Positive expected performance indicates perceived performance gains
    ○ 1. Speed is everything
    ○ 2. Anticipate needs and deliver in real time
    ○ 5. Recognize that physicians will strongly resist stopping
    ○ 7. Simple interventions work best
  • Effort Expectancy
    ○ Positive expected effort indicates perceived increased ease of use of the system compared to similar applications
    ○ 3. Fit into the user's workflow
    ○ 4. Little things can make a big difference
    ○ 6. Changing direction is easier than stopping
    ○ 8. Ask for additional information only when you really need it
  • Anxiety
    ○ Positive anxiety indicates increasing negative emotions towards the system
    ○ 5. Recognize that physicians will strongly resist stopping
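To make the construct structure concrete, the sketch below shows one conventional way such an instrument is scored: averaging the Likert items belonging to each construct into a construct score. The item keys and the single respondent's answers are invented for illustration; the study's actual items are listed in Table A.2 in S1 Appendix.

```python
# Toy scoring sketch: average 1-5 Likert items into UTAUT construct scores.
CONSTRUCT_ITEMS = {
    "behavioral_intention":   ["BI1", "BI2"],
    "attitude":               ["AT1", "AT2"],
    "performance_expectancy": ["PE1", "PE2"],
    "effort_expectancy":      ["EE1", "EE2"],
    "anxiety":                ["AX1", "AX2"],
}

# One hypothetical respondent's answers.
response = {"BI1": 5, "BI2": 4, "AT1": 4, "AT2": 5, "PE1": 4,
            "PE2": 4, "EE1": 3, "EE2": 4, "AX1": 2, "AX2": 1}

scores = {construct: sum(response[item] for item in items) / len(items)
          for construct, items in CONSTRUCT_ITEMS.items()}
print(scores)  # {'behavioral_intention': 4.5, 'attitude': 4.5, ...}
```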

1.2 Instant Messaging and Conversational Agents (CA) in Healthcare

IM is found throughout the healthcare enterprise, including disease management, patient-clinician interactions, medical education, and extra-clinical activities among patient populations and workforce members. IM can be inclusive of voice, video calling, and file sharing [48]. Extra-clinically, IM tools facilitate socialization, catharsis, and professional connectedness when applied in clinical settings [49,50]. IM is asynchronous and short-form, leading to advantages over other communication methods, particularly in the area of articulation work: answering medical questions, coordinating logistics, addressing social information for patients, and querying staff/equipment locations or status [51]. IM is integrated into many PACS, RIS, and VR, serving many purposes within radiology, including care discussions and facilitating remote teleradiology communications [30,52–62].

CA are natural language human-machine interfaces capable of synthesizing a variety of information and conversing in less programmatic/fixed ways than other language interfaces like chatbots. CA can apply 4 methods for coordinating user interruptions: immediate, negotiated, mediated, and scheduled [63]. Consumer health care CA currently schedule appointments, provide basic symptom identification and recommendations, and assist with long-term care such as sensor monitoring/alerting and medication reminders [64]. Most healthcare CA are built for patients (interview, data collection, or telemonitoring), while clinician-focused CA are designed around data collection [65]. Other efforts in clinician-focused CA include interpreting spoken language into clinical facts and drug interaction/alternative drug recommendation systems [66–68]. IM's impact on task completion is not fully understood, especially in the context of automated IM interventions. There is evidence that non-relevant messages can increase or reduce task completion times depending on the message initiator, at a cost to the quality of task output [69]. The disruptiveness of IM-specific interventions is reduced when messages are relevant to the task being completed or are delivered at time points that fit the user's workflow [70]. IM interactions among a professional workforce are found to support task completion, accuracy, and quality of outcomes [69]. Historically, CA were powered by rules-based systems or 'small' AI language models, while modern CA like ChatGPT use Large Language Models (LLM) [64–68,71–74].
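McFarlane's four coordination methods can be made concrete with a small dispatcher: an immediate message is delivered at once, a negotiated message asks the user first, a mediated message passes through a relevance filter, and a scheduled message waits for a workflow breakpoint (for radiologists, plausibly the gap between studies). The sketch below is our own illustration of that taxonomy, not code from any deployed IM system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Message:
    text: str
    method: str            # "immediate" | "negotiated" | "mediated" | "scheduled"
    relevant: bool = True  # mediated delivery drops irrelevant messages

@dataclass
class Dispatcher:
    ask_user: Callable[[str], bool]    # negotiated: user accepts or declines
    at_breakpoint: Callable[[], bool]  # scheduled: e.g. between studies
    queue: List[Message] = field(default_factory=list)

    def deliver(self, msg: Message) -> None:
        if msg.method == "immediate":
            print(msg.text)
        elif msg.method == "negotiated" and self.ask_user(msg.text):
            print(msg.text)            # declined messages are dropped here
        elif msg.method == "mediated" and msg.relevant:
            print(msg.text)
        elif msg.method == "scheduled":
            self.queue.append(msg)     # held until flush() at a breakpoint

    def flush(self) -> None:
        while self.at_breakpoint() and self.queue:
            print(self.queue.pop(0).text)

d = Dispatcher(ask_user=lambda prompt: True, at_breakpoint=lambda: True)
d.deliver(Message("STAT head CT is available", "immediate"))
d.deliver(Message("Adrenal nodule guideline ready", "scheduled"))
d.flush()
```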

2 Methods

2.1 Population

Our study population consists of radiologists at a large academic health system: 112 attendings and 62 resident or fellow trainees, acquired through convenience sampling. Of 174 possible participants, 98 responded affirmatively that they would complete the survey and 3 declined to participate. 39 participants responded that they would complete an interview, and 11 responded that they would participate in the survey but not the interview. In total, 88 surveys were submitted and 23 interviews were conducted.

2.2 Survey

An electronic survey was created using Qualtrics [75] to collect population composition and quantitative data surrounding intervention feasibility, usability, and acceptance. We chose not to use questions on social influence, facilitating conditions, and self-efficacy, as they are not applicable to a prospective study of a tool not yet implemented in practice. A full listing of UTAUT questions by construct and factor is found on EDUTECH's wiki [47]. Due to respondent time constraints we used 12 of the 19 questions in the chosen constructs, with at least 2 questions asked per construct [47]. Questions were eliminated if they were not relevant to a system that does not yet exist (example: "Working with the system is fun"). Construct validity and reliability are confirmed with structural equation modeling (SEM).

The survey in full is included in Section A.1 in S1 Appendix. Fig 1 highlights the intervention and proposed capabilities.

Fig 1. Sample PACS workstation before/after IM based intervention, and details of intervention presented to survey takers.

Source images for Lung X-Ray [76], Report [77], and IM transaction [78]. LUNG-RAD scenario and output text [79].

https://doi.org/10.1371/journal.pdig.0000297.g001

2.3 Interview

Using the research statements developed with the survey (Table A.5 in S1 Appendix), we generated hypotheses and began developing the semi-structured interviews. As we did not have a working system, we prototyped 5 interventions and created video examples of each to use during the interview. Fig 2 highlights what these videos looked like during a demo. The videos highlighted interventions during each workflow time point in the following ways:

  • After Image Acquisition
    ○ Video 1: Radibot identifies the potential for 3D reconstruction, asks the radiologist for permission to process, and then suggests the correct VR template.
  • During Report Creation
    ○ Video 2: The radiologist engages Radibot to query the Electronic Medical Record (EMR) for cardiac risk factors. Radibot performs this query while the radiologist returns to reviewing images, then returns all risk factors that meet the criteria (a sketch of this interaction pattern follows this list).
    ○ Video 3: Radibot identifies VR dictation of a left adrenal nodule, then engages the radiologist in stepping through the adrenal nodule flow chart. Completing the flowchart inserts guideline-recommended text and a citation into the report.
  • After Report Creation
    ○ Video 4: Based on the text of the report, Radibot engages the radiologist for follow-up communication.
  • Between Studies
    ○ Video 5: Radibot presents possible studies for radiologists to engage with, removing the need to navigate the worklist. Includes suggestions for cross-coverage of busier worklists and high-priority studies.
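As an illustration of the Video 2 pattern, the sketch below routes an IM command to a record lookup and replies in the same channel. The FHIR base URL, patient identifier, and risk-factor vocabulary are hypothetical; a real deployment would use the site's EMR API and vetted terminology rather than free-text matching.

```python
import requests

FHIR_BASE = "https://fhir.example.org"  # hypothetical FHIR endpoint
CARDIAC_RISK = {"hypertension", "diabetes mellitus", "hyperlipidemia",
                "tobacco use", "obesity"}  # illustrative, not a vetted list

def cardiac_risk_factors(patient_id: str) -> list[str]:
    """Return the patient's coded conditions that match the risk list."""
    resp = requests.get(f"{FHIR_BASE}/Condition",
                        params={"patient": patient_id}, timeout=10)
    resp.raise_for_status()
    names = [entry["resource"]["code"]["text"].lower()
             for entry in resp.json().get("entry", [])]
    return [name for name in names if name in CARDIAC_RISK]

def handle_im(text: str, patient_id: str) -> str:
    """Tiny command router for the instant-messaging channel."""
    if "risk factors" in text.lower():
        found = cardiac_risk_factors(patient_id)
        return "Cardiac risk factors: " + (", ".join(found) or "none on file")
    return "This sketch only understands 'risk factors'."
```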
Fig 2. Capture from Video 2 highlighting radiologist query and Radibot response.

https://doi.org/10.1371/journal.pdig.0000297.g002

An interview guide was created (Figure B.1 in S1 Appendix) following the UTAUT framework. The guide begins with video 1, loops through each video asking the same questions, then ends with a set of questions after all videos have completed. A portable interview setup was created consisting of one laptop, a 4K portrait monitor mimicking a diagnostic monitor, and a microphone for collecting audio. Interviews occurred in offices and conference rooms located near interview candidates' normal work locations. Subjects were presented with consent and informed that no names would be used during the interview, for confidentiality. Zoom was used to record the screen and interview narrative to the laptop [80].

39 survey participants responded that they would complete an interview; 23 interviews occurred before the research team agreed that response saturation was achieved. Interviews were transcribed using Otter.AI, then a research assistant and a study team member reviewed each video separately and corrected any transcription errors [81]. Transcriptions were downloaded in docx format, then loaded into ATLAS.ti 9.0.19.0 for qualitative analysis. The study team created labels for text analysis (Table B.2 in S1 Appendix) and linked these by semantic domain (UTAUT construct). 2 research assistants were hired and trained by the study team to annotate interview text using ATLAS.ti. The research assistants separately annotated interview 1, then the study team reviewed and provided additional guidance. They then separately annotated the remaining interview narratives; the annotated narratives were merged, and inter-rater agreement was measured. Because semantic domains were established and we did not segment quotes in advance, Krippendorff's CU Alpha was used to measure semantic domain agreement by quote. An overall agreement level of α ≥ .8 was set for all documents [82]. A second round of interviews was planned if text analysis was finding new semantic linkages, defined as quotes with more than 1 label linking constructs together. No individual interview presented new semantic linkages differing from the remaining interviews, confirming that saturation was reached.
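For readers unfamiliar with the agreement statistic, the sketch below computes the ordinary nominal-data Krippendorff's alpha via the coincidence-matrix formulation in [82]; it is not the CU Alpha variant ATLAS.ti reports for multi-valued semantic-domain coding, and the demo data are invented.

```python
from collections import Counter
from itertools import combinations

def nominal_alpha(units: list[list[str]]) -> float:
    """Krippendorff's alpha, nominal metric, no missing data.
    `units` holds, per coded unit, the labels all coders assigned to it."""
    pairable = [u for u in units if len(u) > 1]
    n = sum(len(u) for u in pairable)
    # Observed disagreement: label mismatches within each unit.
    d_o = sum(sum(a != b for a, b in combinations(u, 2)) / (len(u) - 1)
              for u in pairable)
    # Expected disagreement: mismatches if labels were shuffled across units.
    freq = Counter(label for u in pairable for label in u)
    d_e = sum(f1 * f2 for v1, f1 in freq.items()
              for v2, f2 in freq.items() if v1 != v2) / (2 * (n - 1))
    return 1 - d_o / d_e if d_e else 1.0

# Three quotes, each labeled by two coders; one disagreement.
print(round(nominal_alpha([["PE", "PE"], ["EE", "AT"], ["AX", "AX"]]), 3))
```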

3 Results

3.1 Survey data analysis

Resulting data were downloaded from Qualtrics in Comma Separated Values (CSV) format and analyzed using Excel. Irrelevant metadata fields were removed. A total of 88 survey responses were used for analysis, representing 50.6 percent of the total sample population. After removing 4 outliers who took over an hour to complete the survey, the average completion time was 6 minutes and 45 seconds. Raw survey data are available in S2 Survey Data.
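The cleanup steps described above reduce to a few lines of pandas. In the sketch below the column names are placeholders, since Qualtrics export headers vary with survey configuration.

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical Qualtrics export
# Drop metadata columns not used in the analysis (names are assumptions).
df = df.drop(columns=["IPAddress", "RecipientEmail"], errors="ignore")

# Remove outliers that took over an hour, then report mean completion time.
df = df[df["Duration (in seconds)"] <= 3600]
mins, secs = divmod(df["Duration (in seconds)"].mean(), 60)
print(f"{len(df)} responses, average completion {int(mins)}m {int(secs)}s")
```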

Qualitative questions were bucketed into numbers ranging from 0–5 (i.e., 0 to 5 years = 1; 5 to 10 years = 2; etc.). A full set of questions, response bucketing, and UTAUT constructs is included in Table A.2 in S1 Appendix. Summary data for the survey responses used in the analysis are listed in Table 2.
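The bucketing itself is a simple ordinal mapping; the sketch below mirrors the example scheme above, with the ranges beyond the two given in the text assumed for illustration (the authoritative scheme is in Table A.2 in S1 Appendix).

```python
# Ordinal coding of an experience question (later ranges are assumptions).
EXPERIENCE_BUCKETS = {
    "0 to 5 years": 1,
    "5 to 10 years": 2,
    "10 to 15 years": 3,
    "15 to 20 years": 4,
    "more than 20 years": 5,
}

def bucket(answer: str) -> int:
    return EXPERIENCE_BUCKETS.get(answer, 0)  # unanswered/none maps to 0

print(bucket("5 to 10 years"))  # 2
```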

Table 2. Summary data for Likert scale questions in survey responses, generated using SmartPLS v. 3.2.9.

https://doi.org/10.1371/journal.pdig.0000297.t002

Partial Least Squares (PLS) SEM was used to investigate the relationships between constructs. PLS-SEM calculations were performed using SmartPLS v. 3.2.9. Complete data analysis steps are included in the Supplemental Data Analysis (Section A.3 in S1 Appendix). SEM began by connecting all possible paths, then eliminating construct relationships that were insignificant. The final SEM is presented in Fig 3, with details in Table 3. T statistics for each path are greater than 1.95 and p values are below 0.05, indicating that each relationship is statistically significant.
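The significance test behind those t statistics can be illustrated with a generic bootstrap: resample respondents with replacement, re-estimate the path, and divide the original estimate by the bootstrap standard error. The sketch below stands in a simple correlation for a PLS path and uses synthetic data; it shows the logic of a bootstrapping report, not SmartPLS's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized association between two construct scores
    (a stand-in for a single PLS path coefficient)."""
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic data: 88 'respondents' with moderately related constructs.
n = 88
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.9, size=n)

estimate = path_coefficient(x, y)
boot = np.array([path_coefficient(x[idx], y[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(5000))])
t_stat = estimate / boot.std(ddof=1)
print(f"path = {estimate:.3f}, t = {t_stat:.2f}")  # |t| > ~1.96 implies p < .05
```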

Fig 3. Final Path Model generated using SmartPLS v. 3.2.9 Bootstrapping.

https://doi.org/10.1371/journal.pdig.0000297.g003

Table 3. Final Path Coefficient Report generated using SmartPLS v. 3.2.9 Bootstrapping.

https://doi.org/10.1371/journal.pdig.0000297.t003

The Cronbach's Alpha report (Table A.6 in S1 Appendix) shows that the t statistic is greater than double the standard deviation, indicating the model fits 95% of the data. The Average Variance Extracted report (Table A.7 in S1 Appendix) and the Construct Reliability and Validity report (Table A.8 in S1 Appendix) show strong model fit, reliability, and validity of the remaining constructs. The Fig 4 Partial Least Squares model was created to determine path coefficients (Table 4). These reports explain the model and the variance encountered in it. The weakest relationships surround anxiety. Based on this analysis, we know that Clinical Tools strongly influence anxiety; however, Clinical Tools has the lowest Cronbach's Alpha and Adjusted Rho of all reviewed items. Anxiety also has a less-than-ideal Cronbach's Alpha, but other indicators show that it is likely a reliable concept.
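For reference, both reliability measures have short closed forms: Cronbach's alpha is k/(k-1) * (1 - sum of item variances / variance of the summed scale), and AVE is the mean squared standardized loading, with AVE >= 0.5 a common adequacy threshold. The sketch below computes each from toy data; the simulated items and loadings are assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores for one construct."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean squared standardized loading."""
    return float((loadings ** 2).mean())

rng = np.random.default_rng(1)
latent = rng.normal(size=(88, 1))                      # one latent construct
items = latent + rng.normal(scale=0.6, size=(88, 3))   # 3 noisy indicators
print(round(cronbach_alpha(items), 2),
      round(ave(np.array([0.80, 0.85, 0.90])), 2))
```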

Fig 4. Partial Least Squares Path Model generated using SmartPLS v. 3.2.9 PLS.

https://doi.org/10.1371/journal.pdig.0000297.g004

Table 4. Partial Least Squares Path Coefficients Report generated using SmartPLS v. 3.2.9 PLS.

https://doi.org/10.1371/journal.pdig.0000297.t004

3.2 Interview data analysis

The average interview time was 39.93 minutes. Krippendorff's CU Alpha was generated at the individual narrative level (Table B.3 in S1 Appendix) and the overall level. Interviews were eliminated until the overall level reached α ≥ .8, resulting in 5 interviews eliminated and an overall α = 0.82. Code co-occurrence was measured by hypothesis and Sankey diagrams were generated (Fig 5). Section B.4 in S1 Appendix includes hypothesis testing, interpretation, and Sankey diagrams (figures B.4.1–B.4.7) for each hypothesis.
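The elimination procedure can be phrased as a greedy loop: while overall agreement sits below the threshold, drop the interview whose removal raises alpha the most. The sketch below is one way to express that loop; `alpha_of` is a placeholder for whichever agreement function is in use (for example, the `nominal_alpha` sketch earlier, whereas the study used Krippendorff's CU Alpha in ATLAS.ti).

```python
from typing import Callable, Dict, List

def prune_to_threshold(
    interviews: Dict[str, List[list]],       # interview id -> its coded units
    alpha_of: Callable[[List[list]], float],
    threshold: float = 0.8,
) -> Dict[str, List[list]]:
    """Greedily drop interviews until overall agreement meets the threshold."""
    kept = dict(interviews)
    while len(kept) > 1 and alpha_of(sum(kept.values(), [])) < threshold:
        # Drop the interview whose removal most improves overall alpha.
        worst = max(kept, key=lambda k: alpha_of(
            sum((units for j, units in kept.items() if j != k), [])))
        kept.pop(worst)
    return kept
```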

Fig 5. First level interactions Sankey diagram generated using ATLAS.ti 9.0.19.0.

https://doi.org/10.1371/journal.pdig.0000297.g005

3.3 Survey results

Table 5 includes the outcomes of each hypothesis for the survey. Results are expanded upon in Section A.4 in S1 Appendix. Hypotheses were tested at the 95% confidence level.

Table 5. Survey Hypotheses tested, P-values, and outcomes.

https://doi.org/10.1371/journal.pdig.0000297.t005

3.4 Interview results

Table 6 includes the outcomes of each hypothesis for the interview. Interpretation, code co-occurrence tables, and Sankey diagrams supporting the results are found in Section B.4 (figures B.4.1–B.4.7) in S1 Appendix. Quotes related to common themes are included in Section B.5 in S1 Appendix.

4 Discussion

Radiologists have a high intent to use and positive attitudes towards IM-based CDSS and the presented interventions overall. We determined that years of experience and Consumer Tools (IM and CA) were not moderating variables in our model: in every path involving them, the t statistic was too low and the p value too high to retain them in the analysis. These questions are not part of the UTAUT model, and we found them not to be factors relevant to our efforts. The following UTAUT-expected paths were additionally removed; speculation as to why is included below.

4.1 Age and intent to use

This is a deviation from UTAUT, which expects younger users to be more accepting of new software than older users. Potentially, radiologists are technologically saturated users: they perform their work functions using a wide variety of complex technological solutions, and among clinicians, radiologists may have chosen this specialty because of an interest in technology solutions within healthcare. We were unable to measure this result during interviews.

4.2 Expected effort's influence on attitude

The survey and interview studies have differing results for expected effort's influence on attitude. The survey deviated from UTAUT in not finding a statistically significant association between effort and attitude. However, the interview study showed that decreasing effort is linked to positive attitude and positive intent to use, which is what we would expect in any technology usability study. Potentially, the single example given in the survey was not enough to reveal this connection. Common themes on effort/attitude interactions:

  • Reducing time to acquire and apply clinical knowledge. The task of looking up non-imaging data within the EMR or clinical guidelines adds up quickly. CDSS’s ability to reduce this time is valued.
  • Increasing the ability to multitask by enhancing images with useful information such as patient risk factors that are related to the exam being read.
  • Trusting CDSS as safety nets that ensure every necessary step of the workflow is automated or confirmed, for example incidental finding or critical results communications.

4.3 Anxiety's influence on attitude

The survey shows a small negative relationship between anxiety and attitude, which is the expected path in UTAUT: as anxiety increases, the user's attitude toward using the technology decreases. The interview study asked many questions to understand anxiety surrounding this intervention; however, we were unable to strongly correlate it with attitude. Overall, anxiety is the least grounded concept throughout the interviews.

4.4 Expected performance as the major influencer of attitude and intent to use

Overall, expected performance is a major influence on attitude and intent to use. Within the survey results it has significantly more influence than any other factor. However, the interview results show a stronger correlation of expected effort with attitude and intent to use. There is a strong negative relationship between performance and effort present in both phases of the study, another deviation from the UTAUT model. Potentially, radiologists' system use is driven by performance, perhaps measured in clinical outcomes. However, we cannot assume these performance metrics overcome effort needs. Common themes from factors influencing attitude/intent to use:

  • Radiologists expect to be interrupted or context switch quickly. CDSS tools for radiologists can leverage this expectation, but there is a lot of room to reduce mental load in simple tasks such as worklist management.
  • Reducing effort is highly embraced. Tools that automate routine workflow steps such as looking up clinical guidelines or communicating with providers or staff are spoken of frequently.
  • Radiologists will trade effort for performance. Even if they have to parse more information, if that information is relevant to the study they are reading, they find it useful. Interventions that lead to higher reimbursement rates are accepted even if they require more effort. Decreasing overall productivity can be acceptable if the quality of their work also increases.

4.5 Limitations

The survey data was collected using convenience sampling at one academic health care institution. Future studies could be done sampling radiologists from a broader audience.

While CA are in vogue now, this study was completed in 2019 and 2020, when this was a novel design for a CDSS tool. This study is a proof of concept, and further development and study are warranted. In light of recent advancements in the field surrounding LLM and generative AI models, the results of this study may be different today.

UTAUT is designed to collect data on a system that exists and that the users can interact with. We have leveraged aspects of UTAUT to collect data on a system that could exist and performed SEM analysis to generate a new model.

4.6 Conclusion

Radiologists' interactions with decision support tools, or at least with this intervention, differ from the standard user software interaction model. The positive relationship from performance to effort is the largest deviation: radiologists will accept increasing effort if the outcomes are desirable enough. This relationship is supported by both the survey and interview studies. Further, because performance and effort make up most of attitude and intent to use, there are many opportunities for CDSS to provide novel workflow changes that increase patient outcomes. CDSS should be designed to streamline activities, and we see particular interest in tools that enable clinical knowledge gathering and context switching.

Anxiety is another deviation from the standard user model. In both parts of the study, anxiety had the weakest relationships and was often secondary to the excitement of new clinical solutions. The most common source of anxiety surrounds the maintenance of CDSS. This suggests that radiologists are users with low technological anxiety compared to the general population and that they may be more accepting of advancements in their tools. This is reflected in radiology's transformation from analog to digital over the last 50 years [14].

Radiologists deviate from the standard clinician with regard to the 10 commandments of CDSS. Commandments 2, 3, 7, 10, and 1 (anticipate needs, fit into user workflow, simple interventions, knowledge system maintenance, and speed) are all highlighted within radiology-specific guidance, and we do find these present for radiologists in our study. However, the relationship between performance and effort shows that radiologist CDSS need not always satisfy every commandment. Radiologists expect workflow modification, they routinely use complex interventions, and they are not overwhelmed by CDSS information gathering. As we design for the future radiologist, we can trade effort in these commandments for increased positive outcomes.

Supporting information

S1 Appendix. Detailed information on the study including the full survey, expanded hypothesis results, Semi-structured interview guide, code co-occurrence tables, full data analysis/findings, and additional diagrams.

https://doi.org/10.1371/journal.pdig.0000297.s001

(DOCX)

S2 Survey Data. Raw quantitative survey data in CSV format.

https://doi.org/10.1371/journal.pdig.0000297.s002

(CSV)

References

  1. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digital Medicine. 2020;3(1):17. pmid:32047862
  2. Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, et al. Current Applications and Future Impact of Machine Learning in Radiology. Radiology. 2018;288(2):318–28. pmid:29944078
  3. Dreyer KJ, Geis JR. When Machines Think: Radiology's Next Frontier. Radiology. 2017;285(3):713–8. pmid:29155639
  4. Gichoya JW, Alarifi M, Bhaduri R, Tahir B, Purkayastha S, editors. Using cognitive fit theory to evaluate patient understanding of medical images. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2017 Jul 11–15.
  5. Gichoya JW, Nuthakki S, Maity PG, Purkayastha S. Phronesis of AI in radiology: Superhuman meets natural stupidity. arXiv preprint arXiv:1803.11244. 2018.
  6. Venkatesh V, Morris M, Davis G, Davis F. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly. 2003;27:425–78.
  7. Lodwick GS, Turner AH Jr., Lusted LB, Templeton AW. Computer-aided analysis of radiographic images. Journal of chronic diseases. 1966;19(4):485–96. pmid:4895057
  8. Agrawal JP, Erickson BJ, Kahn CE Jr. Imaging Informatics: 25 Years of Progress. Yearbook of medical informatics. 2016;Suppl 1:S23–31.
  9. Nowinski WL, Qian G, Hanley DF. A CAD System for Hemorrhagic Stroke. The neuroradiology journal. 2014;27(4):409–16. pmid:25196612
  10. Stivaros SM, Gledson A, Nenadic G, Zeng XJ, Keane J, Jackson A. Decision support systems for clinical radiological practice—towards the next generation. Br J Radiol. 2010;83(995):904–14. pmid:20965900
  11. Wang Y, Yan F, Lu X, Zheng G, Zhang X, Wang C, et al. IILS: Intelligent imaging layout system for automatic imaging report standardization and intra-interdisciplinary clinical workflow optimization. EBioMedicine. 2019;44:162–81. pmid:31129095
  12. Barinov L, Jairaj A, Becker M, Seymour S, Lee E, Schram A, et al. Impact of Data Presentation on Physician Performance Utilizing Artificial Intelligence-Based Computer-Aided Diagnosis and Decision Support Systems. J Digit Imaging. 2019;32(3):408–16. pmid:30324429
  13. Berbaum KS, Franken EA Jr. Commentary: does clinical history affect perception? Acad Radiol. 2006;13(3):402–3. pmid:16488852
  14. Berbaum KS, Franken EA Jr., Dorfman DD, Lueben KR. Influence of clinical history on perception of abnormalities in pediatric radiographs. Acad Radiol. 1994;1(3):217–23. pmid:9419489
  15. Leslie A, Jones AJ, Goddard PR. The influence of clinical information on the reporting of CT by radiologists. Br J Radiol. 2000;73(874):1052–5. pmid:11271897
  16. Reiner BI. Medical Imaging Data Reconciliation, Part 3: Reconciliation of Historical and Current Radiology Report Data. Journal of the American College of Radiology. 2011;8(11):768–71. pmid:22051459
  17. Gorniak RJ, Sevenster M, Flanders AE, Deshmukh SP, Ford RW, Katzman GL, et al. A PACS-Integrated Tool to Automatically Extract Patient History From Prior Radiology Reports. J Am Coll Radiol. 2016;13(10):1249–52.
  18. Boland GW, Thrall JH, Gazelle GS, Samir A, Rosenthal DI, Dreyer KJ, Alkasab TK. Decision support for radiologist report recommendations. J Am Coll Radiol. 2011;8(12):819–23. pmid:22136994
  19. Do BH, Wu AS, Maley J, Biswal S. Automatic retrieval of bone fracture knowledge using natural language processing. J Digit Imaging. 2013;26(4):709–13. pmid:23053906
  20. Kohli M, Alkasab T, Wang K, Heilbrun ME, Flanders AE, Dreyer K, Kahn CE Jr. Bending the Artificial Intelligence Curve for Radiology: Informatics Tools From ACR and RSNA. J Am Coll Radiol. 2019. pmid:31319078
  21. Liu Y, Zhu LN, Liu Q, Han C, Zhang XD, Wang XY. Automatic extraction of imaging observation and assessment categories from breast magnetic resonance imaging reports with natural language processing. Chin Med J (Engl). 2019;132(14):1673–80. pmid:31268905
  22. Esmaeili M, Ayyoubzadeh SM, Ahmadinejad N, Ghazisaeedi M, Nahvijou A, Maghooli K. A decision support system for mammography reports interpretation. Health Inf Sci Syst. 2020;8(1):17. pmid:32257128
  23. Bozkurt S, Gimenez F, Burnside ES, Gulkesen KH, Rubin DL. Using automatically extracted information from mammography reports for decision-support. Journal of biomedical informatics. 2016;62:224–31. pmid:27388877
  24. Patel TA, Puppala M, Ogunti RO, Ensor JE, He T, Shewale JB, et al. Correlating mammographic and pathologic findings in clinical decision support using natural language processing and data mining methods. Cancer. 2017;123(1):114–21. pmid:27571243
  25. European Society of Radiology. The future role of radiology in healthcare. Insights Imaging. 2010;1(1):2–11. pmid:22347897
  26. Weiss DL, Kim W, Branstetter BF 4th, Prevedello LM. Radiology reporting: a closed-loop cycle from order entry to results communication. J Am Coll Radiol. 2014;11(12 Pt B):1226–37. pmid:25467899
  27. Larson PA, Berland LL, Griffith B, Kahn CE Jr., Liebscher LA. Actionable findings and the role of IT support: report of the ACR Actionable Reporting Work Group. J Am Coll Radiol. 2014;11(6):552–8. pmid:24485759
  28. Meng X, Ganoe CH, Sieberg RT, Cheung YY, Hassanpour S. Assisting radiologists with reporting urgent findings to referring physicians: A machine learning approach to identify cases for prompt communication. Journal of biomedical informatics. 2019;93:103169. pmid:30959206
  29. Lacson R, Prevedello LM, Andriole KP, O'Connor SD, Roy C, Gandhi T, et al. Four-year impact of an alert notification system on closed-loop communication of critical test results. AJR Am J Roentgenol. 2014;203(5):933–8. pmid:25341129
  30. Rosenkrantz AB, Sherwin J, Prithiani CP, Ostrow D, Recht MP. Technology-Assisted Virtual Consultation for Medical Imaging. J Am Coll Radiol. 2016;13(8):995–1002. pmid:27084068
  31. Reiner BI. Redefining the Practice of Peer Review Through Intelligent Automation-Part 3: Automated Report Analysis and Data Reconciliation. J Digit Imaging. 2018;31(1):1–4. pmid:28744581
  32. Reiner BI. Quantifying Analysis of Uncertainty in Medical Reporting: Creation of User and Context-Specific Uncertainty Profiles. J Digit Imaging. 2018;31(4):379–82. pmid:29427140
  33. Burns JL, Hasting D, Gichoya JW, McKibben B 3rd, Shea L, Frank M. Just in Time Radiology Decision Support Using Real-time Data Feeds. J Digit Imaging. 2019.
  34. Chen R, Mongkolwat P, Channin DS. RadMonitor: radiology operations data mining in real time. J Digit Imaging. 2008;21(3):257–68. pmid:17534683
  35. Nance JW Jr., Meenan C, Nagy PG. The future of the radiology information system. AJR Am J Roentgenol. 2013;200(5):1064–70. pmid:23617491
  36. Nagy PG, Warnock MJ, Daly M, Toland C, Meenan CD, Mezrich RS. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence. Radiographics: a review publication of the Radiological Society of North America, Inc. 2009;29(7):1897–906. pmid:19734469
  37. Morgan MB, Branstetter BF 4th, Lionetti DM, Richardson JS, Chang PJ. The radiology digital dashboard: effects on report turnaround time. J Digit Imaging. 2008;21(1):50–8. pmid:17334871
  38. Awan OA, van Wagenberg F, Daly M, Safdar N, Nagy P. Tracking delays in report availability caused by incorrect exam status with Web-based issue tracking: a quality initiative. J Digit Imaging. 2011;24(2):300–7. pmid:20798973
  39. HL7 International. 2019. Available from: http://www.hl7.org/.
  40. DICOM Library. About DICOM. 2019. Available from: https://www.dicomlibrary.com/dicom/.
  41. Kohli M, Dreyer KJ, Geis JR. Rethinking Radiology Informatics. American Journal of Roentgenology. 2015;204(4):716–20. pmid:25794061
  42. Teather D, Morton BA, du Boulay GH, Wills KM, Plummer D, Innocent PR. Computer assistance for C.T. scan interpretation and cerebral disease diagnosis. Stat Med. 1985;4(3):311–5. pmid:3840604
  43. Khorasani R. Clinical decision support in radiology: what is it, why do we need it, and what key features make it effective? J Am Coll Radiol. 2006;3(2):142–3. pmid:17412025
  44. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. Journal of the American Medical Informatics Association: JAMIA. 2003;10(6):523–30.
  45. Ayaz A, Yanartaş M. An analysis on the unified theory of acceptance and use of technology theory (UTAUT): Acceptance of electronic document management system (EDMS). Computers in Human Behavior Reports. 2020;2:100032.
  46. Batucan GB, Gonzales GG, Balbuena MG, Pasaol KRB, Seno DN, Gonzales RR. An Extended UTAUT Model to Explain Factors Affecting Online Learning System Amidst COVID-19 Pandemic: The Case of a Developing Economy. Frontiers in Artificial Intelligence. 2022;5. pmid:35573898
  47. EduTech Wiki. Usability and user experience surveys. 2019 [updated 2019 Aug 16]. Available from: http://edutechwiki.unige.ch/en/Usability_and_user_experience_surveys#UTAUT.
  48. Cheeseman SE. Communication and collaboration technologies. Neonatal Netw. 2012;31(2):115–9. pmid:22397797
  49. Pimmer C, Mhango S, Mzumara A, Mbvundula F. Mobile instant messaging for rural community health workers: a case from Malawi. Glob Health Action. 2017;10(1):1368236. pmid:28914165
  50. Bautista JR, Lin TTC. Nurses' use of mobile instant messaging applications: A uses and gratifications perspective. Int J Nurs Pract. 2017;23(5). pmid:28752519
  51. Iversen TB, Melby L, Toussaint P. Instant messaging at the hospital: supporting articulation work? Int J Med Inform. 2013;82(9):753–61. pmid:23746431
  52. Rosset C, Rosset A, Ratib O. General consumer communication tools for improved image management and communication in medicine. J Digit Imaging. 2005;18(4):270–9. pmid:15988626
  53. Fratt L. PACS Powers the Enterprise. Health Imaging: Insights in Imaging & Informatics [Internet]. 2007 [cited 2019 Oct 21]. Available from: https://www.healthimaging.com/topics/advanced-visualization/pacs-powers-enterprise.
  54. Philips Adds Options to PACS. Imaging Technology News [Internet]. 2007 [cited 2019 Oct 21]. Available from: https://www.itnonline.com/content/philips-adds-options-pacs.
  55. Grabb A. Early experience with electronic messaging tightly integrated within PACS. J Am Coll Radiol. 2011;8(2):141–2. pmid:21292192
  56. INFINITT North America. INFINITT PACS. 2019. Available from: https://www.infinittna.com/solutions/radiology/infinitt-pacs/.
  57. IBM Watson Health. Merge PACS: Innovative Reading Workflows for Enterprise Radiology. 2019. Available from: https://www.merge.com/Solutions/Radiology/Merge-PACS.aspx.
  58. Carestream Health Inc. RIS Module: Streamlined Productivity. 2018.
  59. Saince. Saince Merge Enterprise PACS. 2019. Available from: https://www.saince.com/international-solutions/saince-enterprise-pacs/.
  60. Sectra Medical. Sectra PACS and RIS—Examples of supported radiology workflows: Communication. 2019. Available from: https://medical.sectra.com/product/sectra-radiology-pacs-ris/.
  61. FUJIFILM Holdings America Corporation. Synapse EIS Features. 2019. Available from: https://www.fujifilmusa.com/products/medical/medical-informatics/radiology/RIS/index.html#features.
  62. Agfa HealthCare. XERO Viewer: All Images, One View. 2019. Available from: https://global.agfahealthcare.com/us/enterprise-imaging/universal-viewer/.
  63. McFarlane D. Comparison of four primary methods for coordinating the interruption of people in human-computer interaction. Hum-Comput Interact. 2002;17(1):63–139.
  64. Bates M. Health Care Chatbots Are Here to Help. IEEE Pulse. 2019;10(3):12–4. pmid:31135345
  65. Laranjo L, Dunn AG, Tong HL, Kocaballi AB, Chen J, Bashir R, et al. Conversational agents in healthcare: a systematic review. Journal of the American Medical Informatics Association: JAMIA. 2018;25(9):1248–58.
  66. Beveridge M, Fox J. Automatic generation of spoken dialogue from medical plans and ontologies. Journal of biomedical informatics. 2006;39(5):482–99. pmid:16495159
  67. Mesko B, Hetenyi G, Gyorffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health Serv Res. 2018;18(1):545. pmid:30001717
  68. Safe in Breastfeeding. A virtual assistant to help doctors in their daily work. 2016. Available from: https://www.safeinbreastfeeding.com/safedrugbot-chatbot-medical-assistant/.
  69. Gupta A, Li H, Sharda R. Should I send this message? Understanding the impact of interruptions, social hierarchy and perceived task complexity on user performance and perceived workload. Decis Support Syst. 2013;55(1):135–45.
  70. Czerwinski M, Cutrell E, Horvitz E. Instant Messaging: Effects of Relevance and Timing. 2000.
  71. Rao A, Kim J, Kamineni M, Pang M, Lie W, Succi MD. Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making. medRxiv. 2023.
  72. Rao A, Pang M, Kim J, Kamineni M, Lie W, Prasad AK, et al. Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study. J Med Internet Res. 2023;25:e48659. pmid:37606976
  73. Şendur HN, Şendur AB, Cerit MN. ChatGPT from radiologists' perspective. Br J Radiol. 2023;96(1148):20230203. pmid:37183840
  74. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930–40. pmid:37460753
  75. Qualtrics. QualtricsXM. 2019. Available from: https://www.qualtrics.com/.
  76. Sudraben. Lung X-ray.jpg [image]. Wikimedia Commons; 2018.
  77. Orbit Imaging. Jane_Doe_CBCT_NEW_Report.jpg [image]. Available from: http://www.orbitimaging.com/imaging-services/radiologist-interpretation/.
  78. screen-0.jpg [image].
  79. Hsu W. Capturing Data Elements and the Role of Imaging Informatics. [cited 2019 Nov 2]. Available from: http://amos3.aapm.org/abstracts/pdf/99-27434-359478-111844-1383861762.pdf.
  80. Zoom Video Communications Inc. Zoom. 2021. Available from: https://zoom.us/.
  81. Otter.AI. 2021. Available from: https://otter.ai.
  82. Krippendorff K. Content Analysis: An Introduction to Its Methodology. Sage; 2004.