Diagnostic Reasoning by Expert Clinicians: What Distinguishes Them From Their Peers?

Objectives Expert clinicians (ECs) are defined in large part as physicians recognized by their peers for their diagnostic reasoning abilities. However, their reasoning skills have not been quantitatively compared to those of other clinicians using a validated instrument. Methods We surveyed Internal Medicine physicians at the University of Iowa to identify ECs. These clinicians were administered the Diagnostic Thinking Inventory, along with an equivalent number of their peers from the general population of internists. Scores were tabulated for structure and thinking, as well as for four previously identified elements of diagnostic reasoning (data acquisition, problem representation, hypothesis generation, and illness script search and selection). Scores were compared between the two groups using the two-sample t-test. Results Seventeen ECs completed the inventory (100%). Of 25 randomly selected non-EC internists (IM), 19 completed the inventory (76%). Mean total scores were 187.2 and 175.8 for the EC and IM groups, respectively. Thinking and structure subscores were 95.7 and 91.5 for ECs, compared to 90.3 and 85.5 for IMs (p-values: 0.0783 and 0.1199, respectively). The mean data acquisition, problem representation, hypothesis generation, and illness script selection subscores for ECs were 4.46, 4.57, 4.71, and 4.46, compared to 4.13, 4.38, 4.45, and 4.13 in the IM group (p-values: 0.2077, 0.4528, 0.095, and 0.029, respectively). Conclusions ECs have greater proficiency in searching for and selecting illness scripts compared to their peers; no other differences in scores or subscores reached statistical significance. These results will help inform continuing medical education efforts to improve diagnostic reasoning.


Introduction
In recent years, the term "expert clinician" (EC) has been used to describe physicians from any specialty or subspecialty who have attained high levels of proficiency in a variety of skills considered essential to clinical practice and education [1]. Although a precise definition has remained elusive, ECs are acknowledged to be superior diagnosticians from whom trainees can learn valuable lessons [2]. For that reason, "Expert clinician" and "Master clinician" programs have been established at sites such as the University of Iowa. This study aims to identify elements of diagnostic reasoning that distinguish ECs from their peers, using the previously validated Diagnostic Thinking Inventory (DTI) [3].

Materials And Methods
This study was approved by the Institutional Review Board at the University of Iowa. The investigation was split into three parts.
First, each question in the DTI was classified into one of four categories: data acquisition, problem representation, hypothesis generation, and illness script search and selection. Definitions for these four steps of diagnostic reasoning were adopted from Bowen and colleagues [4]. The wording of each question was scrutinized independently by two investigators (BK and MS), and results were compared. Two rounds of reconciliation were pursued in order to obtain a consensus classification of these questions.
Second, ECs were identified through a survey of all Internal Medicine physicians at the University of Iowa. Inclusion criteria were permanent faculty status, appointment within the Department of Internal Medicine, and over 50% clinical effort. Exclusion criteria were adjunct or visiting status, and research or administrative effort greater than 50%. Respondents were asked to identify one or more colleagues whom they considered ECs based on their diagnostic skills. Physicians nominated by at least five colleagues were designated as ECs and were then administered the DTI.
Finally, a third investigator (KF) used a random number generator to identify a sample of 25 internists not recognized as ECs (IM), who were then administered the DTI. Demographic information about both the EC and the IM groups was also obtained, including years in practice, age, under-represented minority status, international medical graduate status, and residency location. For binary variables, the Fisher exact test was used for comparisons; for continuous variables, unpaired t-tests were used.
Results were tabulated and uploaded into SAS® (SAS Institute, Cary, North Carolina). Descriptive statistics, including mean, median, and variance were calculated for the scores and subscores. The two-sample t-test was employed to compare means between the EC and IM groups.
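For readers who wish to reproduce this style of comparison, the analysis can be sketched in Python with `scipy.stats` (the study itself used SAS). The scores and counts below are hypothetical placeholders for illustration only, not the study's raw data.

```python
# Sketch of the statistical comparisons described above, using
# hypothetical placeholder values (not the study's actual data).
from scipy import stats

# Hypothetical total DTI scores for each group
ec_scores = [195, 178, 190, 201, 182, 176, 188]   # expert clinicians
im_scores = [160, 182, 171, 155, 190, 168, 174]   # general internists

# Two-sample (unpaired) t-test comparing group means
t_stat, p_value = stats.ttest_ind(ec_scores, im_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Fisher exact test for a binary demographic variable
# (e.g., international medical graduate status); counts are made up
table = [[5, 12],   # EC group: yes / no
         [7, 12]]   # IM group: yes / no
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact p = {p_fisher:.4f}")
```

The two-sample t-test compares mean scores between independent groups, while the Fisher exact test is appropriate for 2x2 contingency tables with small cell counts, as in this study's demographic comparisons.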

Results
The DTI was split into the four categories given in the Materials and Methods section, each composed of eight to 14 questions (Table 1). The two investigators agreed on 38 of 41 questions in the first round of classification and on all 41 after the second round of reconciliation. ECs exhibited a higher mean total score (187.2), thinking subtotal (95.7), and structure subtotal (91.5) than the IM group, whose means were 175.8, 90.3, and 85.5, respectively (Figure 1). The standard deviations among ECs were lower for the total (14.6) as well as the thinking and structure subtotals (8.6 and 7.1), compared to IMs (21.8, 11.4, and 11.8, respectively). The differences in means were not statistically significant (p = 0.0783 for thinking and p = 0.1199 for structure) (Table 2).

Discussion
This study demonstrates that the diagnostic approach of ECs may differ from that of their peers. Specifically, ECs appear to be more proficient in searching for and selecting illness scripts, defined as "conceptual models, such as groups of diseases [or] representational memories of specific syndromes" [5]. Illness scripts are the product of experience and deliberate practice, suggesting that peer-recognized ECs continuously hone their understanding of the key discriminating features, risk factors, and pathophysiologic mechanisms that define them [6]. This helps explain previously published findings that expert diagnosticians are able to diagnose conditions using relatively few pieces of clinical data [7]. It also reinforces the observation that ECs improve their diagnostic skills through continuous reflection [4,8].
Of note, there were no statistically significant differences between the two groups with respect to gender, age, under-represented minority status, international medical graduate status, MD/PhD training, or residency location in Iowa. The mean years in practice was slightly higher among expert clinicians (25.4) compared to non-expert clinicians (21.3), but it is unclear how much this difference in seniority may impact diagnostic reasoning skills.
Interestingly, there was no statistically significant difference between the groups in the other elements of diagnostic reasoning. Hypothesis generation approached significance (p = 0.095) but did not reach the p = 0.05 threshold. Of note, illness script selection was tied for the lowest subscore in both groups, suggesting that it is a more advanced skill to master than the other three.
Likewise, there was no statistically significant difference between the groups on the "structure" and "thinking" subscores of the DTI, the two traditional categories used in prior analyses. Based on our analysis, it may therefore be more appropriate to characterize the diagnostic reasoning process with this modified four-category breakdown, using the same questions.
The strengths of our study include a robust prospective study design with high participation of staff physicians, including all of the identified ECs. The DTI has been validated as a tool to identify diagnostic reasoning skills [3]. The methodology by which the investigators categorized the questions was predetermined based on established definitions and criteria, enabling high rates of agreement after two rounds of reconciliation.
However, there are some notable limitations. The DTI is a self-administered test and is therefore subject to social desirability bias. This study was also conducted at a single institution, the University of Iowa; however, the mean scores for the EC group were higher than previously reported values for physicians in general, suggesting that they truly possess stronger diagnostic reasoning skills [8]. The sample size is relatively modest (36 participants), which may explain why the other elements of diagnostic reasoning did not demonstrate statistically significant differences. While more participants might reduce the standard deviations, it is unclear how the means would change, particularly if the EC group were expanded. Similarly, our investigation did not focus on how demographic features correlate with DTI scores, so the lack of statistically significant demographic differences between the two groups, with the exception of years in practice, may be due to an inadequate sample size to detect such differences. Lastly, the DTI was not designed to identify which aspects of illness script search and selection are most discriminating. Nevertheless, our data suggest that, among these four steps of diagnostic reasoning, the ability to search for and select illness scripts is the most specific marker of being a peer-recognized EC.

Conclusions
ECs are recognized by peers in large part due to their diagnostic reasoning abilities. Compared to their peers in the general population of internists, ECs have greater proficiency in searching for and selecting illness scripts. This aligns well with prior observations that ECs engage in deliberate practice to build upon their prior experiences. Replication of these findings at other institutions may bolster such conclusions. Furthermore, these findings inform the development of EC and continuing medical education programs at other institutions.

Additional Information
Disclosures