Academic Radiology

Volume 20, Issue 3, March 2013, Pages 351-357

Radiology Education
Improving Accuracy in Reporting CT Scans of Oncology Patients: Assessing the Effect of Education and Feedback Interventions on the Application of the Response Evaluation Criteria in Solid Tumors (RECIST) Criteria

https://doi.org/10.1016/j.acra.2012.12.002

Rationale and Objectives

In February 2010, our radiology department adopted the Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 criteria for newly diagnosed oncology patients. Before staff began using RECIST 1.1, we hypothesized that education and feedback interventions could help clarify the differences between RECIST 1.0 and the newly adopted RECIST 1.1 guidelines and result in appropriate and accurate use of both reporting systems. This study evaluates the effect of education and feedback interventions on the accuracy of computed tomography (CT) reporting using RECIST criteria.

Materials and Methods

Consecutive CT scan reports and images were retrospectively reviewed during three different periods to assess compliance with and adherence to RECIST guidelines. Data collected included the interpreting faculty member, the resident, and the type and total number of errors per report. Significance testing of differences between cohorts was performed using an unequal variance t-test. Group 1 (baseline) used RECIST 1.0, before adoption of the RECIST 1.1 criteria; group 2 (post distributed educational materials) followed adoption of the RECIST 1.1 criteria and distribution of educational materials; group 3 (post audit and feedback) followed the audit and feedback intervention.
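
The unequal variance (Welch's) t-test compares the mean number of errors per report between two cohorts without assuming equal variances. A minimal sketch of how such a comparison could be run in Python with SciPy is shown below; the per-report error counts are hypothetical placeholders, not the study data.

    # Hypothetical sketch: Welch's (unequal variance) t-test on per-report error counts.
    # The arrays below are placeholders; the study's per-report data are not reproduced here.
    import numpy as np
    from scipy import stats

    baseline_errors = np.array([0, 1, 0, 2, 0, 0, 1, 0, 3, 0])       # errors per report, cohort 1 (hypothetical)
    post_feedback_errors = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])  # errors per report, cohort 3 (hypothetical)

    # equal_var=False selects Welch's t-test, which does not assume equal variances
    t_stat, p_value = stats.ttest_ind(baseline_errors, post_feedback_errors, equal_var=False)
    print(f"t = {t_stat:.2f}, P = {p_value:.3f}")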

Results

The percentage of reports with errors decreased from 30% (baseline) to 28% (group 2) to 22% (group 3). Only the difference in error rate between the baseline and group 3 was significant (P = .03).

Conclusion

The combination of distributed educational materials and audit and feedback interventions improved the quality of radiology reports requiring RECIST criteria by reducing the number of studies with errors.

Section snippets

Materials and methods

The study was approved by the Institutional Committee for the Protection of Human Subjects.

The CT scan images and reports of all oncology scans performed over three 1-month periods were evaluated for adherence to RECIST guidelines. The three periods (cohorts) were: 1) pre-RECIST 1.1 introduction, 2) post RECIST 1.1 adoption and distribution of educational materials intervention, and 3) post audit and feedback intervention.

Total Number of Errors by Cohort

The error types and totals for each cohort are listed in Table 3. The baseline group (A) consisted of 246 consecutive CT scans reported by 20 different staff radiologists, with a total of 96 errors committed. The post distributed educational materials (DEM) group (B) consisted of 246 consecutive CT scans reported by 21 different staff radiologists, with 93 total errors committed. The post audit and feedback (A & F) group (C) consisted of 218 consecutive CT scans reported by 24 different staff radiologists, with 68 total errors committed. There was a
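
As a rough check on these totals, the mean number of errors per report can be computed directly from the counts above (a small Python sketch using only the figures stated in the text; note that this is not the same quantity as the percentage of reports containing at least one error given in the abstract).

    # Errors per report by cohort, using the scan and error counts reported above
    cohorts = {
        "baseline (A)": (246, 96),      # (reports, total errors)
        "post DEM (B)": (246, 93),
        "post A & F (C)": (218, 68),
    }
    for name, (reports, errors) in cohorts.items():
        print(f"{name}: {errors / reports:.2f} errors per report")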

Discussion

Accurate interpretations of imaging studies and clear communication of their results are imperative to appropriately determine disease response to therapy for routine clinical care and clinical trials. In the 1990s, the RECIST criteria were created by an international working group to standardize and simplify tumor response reporting (1). The goals were to standardize the methods of obtaining tumor measurements and the definitions of response, and to increase accuracy in determining tumor

Conclusion

The results of our study are consistent with the conflicting evidence base regarding the effectiveness of DEM and A & F. This study highlights some of the difficulties of training a variable cohort of radiologists to use a standard image reporting format. Despite extensive passive and active individual-based educational initiatives, some of which required significant physician time and resources, we were still only able to minimally impact the accuracy of reports. Although this adds to the

Cited by (10)

  • Interobserver Variation in Response Evaluation Criteria in Solid Tumors 1.1

    2019, Academic Radiology
    Citation Excerpt:

    However a clear and comprehensive consensus on improving consistency of reads is still lacking. Since significant variations can lead to nonreliable trial results, institutions that employ RECIST 1.1 also use well-established quality checks such as: Double independent reads by different radiologists (or referred as readers in this study) (7); Adjudication of discordant reports by a more experienced radiologist; Periodic training and re-training of radiologists; Continuous feedback (8) between the Project Manager and Adjudicator and the Readers; Internal Peer review (6); Auditing of completed trial reports either alone or as part of the Blinded independent central review (BICR) (9); and collaboration and communication with medical oncologists (10) especially during baseline lesions selection and determining progression. Such measures although useful for finalizing the "right" report for a trial does not resolve the primary conflicts leading to interobserver variation itself.

  • Developing Quality Measures for Diagnostic Radiologists: Part 2

    2018, Journal of the American College of Radiology
    Citation Excerpt:

    Referring physicians depend on both radiologists’ interpretations of studies and any recommendations for follow-up imaging in order to provide quality patient care. A recent analysis found that the majority of referring physicians (84%-90%) rely on radiologists’ interpretations all or most of the time, and half of referring physicians look for radiologists to include recommendations on next steps in management [1]. The written radiology report is critical for the timely and accurate communication not only of imaging results but also of any follow-up recommendations [2].

  • Pitfalls in RECIST Data Extraction for Clinical Trials: Beyond the Basics

    2015, Academic Radiology
    Citation Excerpt:

    We propose that errors in RECIST assessments are best avoided through a combination of educational and operational support strategies, both of which should be considered by radiologists considering adding RECIST assessments to their workflow or embarking on new consultative or core laboratory arrangements at their institutions. Educational and feedback interventions have been shown to improve performance with RECIST assessments (11), as would be predicted for a learnable skill requiring both didactic and experiential training. Adherence to best practices may be further enhanced by workflow tools including “smart” eCRFs configured to calculate measurement totals and disallow inappropriate entries, as well as by next-generation informatic applications for lesion tracking and annotation (12).
