INTRODUCTION

Clinical practice guidelines (CPGs) are systematically developed statements that aim to assist stakeholders in making appropriate healthcare decisions for specific clinical circumstances 1. High-quality CPGs are essential in bridging gaps between policy, best practice, and patient choice, to enhance healthcare quality and patient outcomes 1. From a methodological perspective, three types of tools help CPG developers and authors produce high-quality CPGs: tools for assessing the quality of existing CPGs relevant to their topic, methods for conducting a CPG, and checklists for reporting a CPG after it is completed.

The AGREE (Appraisal of Guidelines for REsearch and Evaluation) II assessment tool is widely used to appraise the quality of existing CPGs. AGREE II updated the original 2003 version and was published in 2010 2. The GRADE (Grading of Recommendations, Assessment, Development and Evaluations) approach is a widely accepted methodology for rating the quality of evidence (also called the “certainty” of evidence) and generating recommendations in the context of CPG development 3. A guideline reporting checklist, in contrast, is intended to aid guideline authors (including methodologists and clinicians) and journal editors in preparing the final guideline manuscript once the CPG is finished; it thus differs from a development handbook (the GRADE approach) and from an assessment tool (AGREE II). Clearly reporting the key steps of the process after completing a CPG is important for maintaining transparency, because CPG users judge whether a CPG is of high quality based on the reported details, which in turn affects their confidence in implementing the CPG in clinical practice.

The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network (http://www.equator-network.org/) is an international initiative promoting transparent and accurate reporting statements for original studies, systematic reviews, and CPGs, to improve the quality and reliability of the health research literature 4. The EQUATOR Network is the only resource recommended by the International Committee of Medical Journal Editors (ICMJE) to guide authors and medical journal editors on which reporting checklist to use when preparing a scientific manuscript. Currently, two reporting checklists for CPGs are available on the EQUATOR Network website: the AGREE checklist, published in 2016 5, and the RIGHT (Reporting Items for practice Guidelines in HealThcare) checklist, published in 2017 6. The AGREE reporting checklist 5 was derived from the AGREE II CPG assessment tool 2, but they remain two independent tools used for different purposes by end users. The authors of the RIGHT reporting checklist intended to add new items and provide detailed information and examples that the AGREE reporting checklist lacks 6. CPG developers and authors often struggle to choose between the two because they are unsure of the advantages and disadvantages of each checklist. The objective of this study was to illustrate the differences between the AGREE and RIGHT reporting checklists and to discuss their potential impact on the reporting quality of CPGs.

METHODS

Two epidemiologists (DK and AU) who had no experience using either AGREE or RIGHT, but who were familiar with Evidence-Based Medicine methodology and fluent in English, independently compared AGREE with RIGHT item by item to determine whether and how the two reporting checklists differed. Their assessments were compiled on a pre-designed data form, and any disagreements were resolved through discussion. Three other co-authors (XY and IF, familiar with AGREE; QW, familiar with RIGHT) then independently compared AGREE with RIGHT and decided whether they agreed with the first two co-authors' comparison results. The five co-authors reached consensus through discussion. Finally, another co-author (JM), a methodologist and statistician with no experience of conducting a CPG or using either reporting checklist, reviewed the comparison of AGREE and RIGHT to ensure that the description was clear and understandable.

RESULTS

The AGREE and RIGHT reporting checklists present similar content using different structures. AGREE has 23 items, with 2–7 reporting criteria to evaluate under each item 5. The RIGHT checklist has 22 topics, with 1–3 items under each topic (35 items in total) 6. Six relationships between the two checklists were observed: (1) 11 items from AGREE (items 1, 2, 5, 9, 10, 11, 15, 16, 17, 20, and 23) completely matched 12 items from RIGHT (items 6, 8b, 10a, 14a, 12, 15, 13a, 13b, 2, 14b, 19a, and 19b) (Table 1); (2) four items (items 8, 14, 18, and 21) were listed in AGREE only; (3) 12 items (items 1a, 1b, 1c, 3, 4, 5, 10b, 11a, 14c, 17, 21, and 22) were listed in RIGHT only; (4) three items in AGREE (items 6, 7, and 13) were partially covered by three items in RIGHT (items 8a, 11b, and 16); (5) six items in RIGHT (items 7a, 7b, 9a, 9b, 18a, and 18b) were partially covered by three items in AGREE (items 3, 4, and 22); and (6) two items intersected across AGREE (items 12 and 19) and RIGHT (items 13c and 20) (Table 2). Based on these comparison results, the potential impact of selecting either checklist is summarized in Table 2.

Table 1 Completely Matched Items Between AGREE and RIGHT
Table 2 Differences Between the AGREE and RIGHT Reporting Checklists

DISCUSSION

Both the AGREE and RIGHT checklists have their own strengths and limitations. Approximately half of the items from the two checklists completely overlap. The items unique to RIGHT emphasize presentation format, e.g., the elements to include in a CPG title, a list of abbreviations and acronyms, and the contact information of the corresponding developer; these items are useful reminders for CPG developers. “How the outcomes were selected and sorted” increases a CPG's clarity about how outcomes were chosen and ranked relative to one another. “Whether the CPG is based on new systematic reviews (SRs) or whether existing SRs were used” indicates that searching for existing SRs is a necessary step when conducting a CPG. “Describing other factors taken into consideration when formulating the recommendations, such as equity, feasibility, and acceptability” reminds CPG developers to consider multiple factors when making recommendations. Likewise, noting “whether the CPG was subjected to a quality assurance process,” “the gaps in the evidence and/or suggestions for future research,” and “any limitations in the CPG development and how these limitations may have affected the validity of the recommendations” prompts CPG developers to keep the CPG transparent and to consider directions for future research.

Conversely, AGREE emphasizes “evidence selection criteria” for the primary literature search; without this information, readers cannot know how the SR was conducted. “Describing the procedure for updating the CPG” reflects that any CPG should be updated over time, as an out-of-date CPG may harm patients. “The facilitators and barriers to the CPG's application” and “monitoring and/or auditing criteria to measure the application of CPGs” inform readers of the benefits, challenges, and requirements of CPG application.

It is worth noting that the AGREE reporting checklist 5 differs from the AGREE II assessment tool 2: the former helps CPG developers and authors report a CPG once it is complete, whereas the latter is used to appraise the quality of an existing CPG. As an assessment tool, AGREE II has a scoring system that yields a quality score for each of its six domains, but the six domain scores are independent and should not be aggregated into a single quality score 7.

The AGREE and RIGHT reporting checklists, like the other clinical research reporting checklists available on the EQUATOR Network 4, only ask CPG developers or authors to address the recommended items. As reporting checklists, they should not have a scoring system. Thus, we consider it inappropriate to compare and report the correlation between the AGREE II assessment tool and a CPG reporting checklist (i.e., RIGHT) for a single CPG (e.g., Wayant et al.'s paper 8), since the two were developed for different purposes.

We recommend that CPG developers use either AGREE plus the items unique to RIGHT, or RIGHT plus the items unique to AGREE, to ensure that their CPG reporting is complete. We further suggest that, in the near future, the authors of the AGREE and RIGHT reporting checklists collaborate to develop a new CPG reporting checklist that incorporates the strengths of both current checklists.