Evaluating assessment tools of the quality of clinical ethics consultations: a systematic scoping review from 1992 to 2019

Amidst expanding roles in education and policy making, questions have been raised about the ability of Clinical Ethics Committees (CECs) to carry out effective ethics consultations (CECons). However, recent reviews of CECs suggest that there is no uniformity to CECons and no effective means of assessing their quality. To address this gap, a systematic scoping review of prevailing tools used to assess CECons was performed to foreground and guide the design of a tool to evaluate the quality of CECons. Guided by Levac et al.'s (2010) methodological framework for conducting scoping reviews, the research team performed independent literature reviews of accounts of assessments of CECons published in six databases. The included articles were independently analyzed using content and thematic analysis to enhance the validity of the findings. Nine thousand and sixty-six abstracts were identified, 617 full-text articles were reviewed, and 104 articles were analyzed; four themes were identified: the purpose of the CECons evaluation, the various domains assessed, the methods of assessment used and the long-term impact of these evaluations. This review found prevailing assessments of CECons to be piecemeal owing to variable goals, contextual factors and practical limitations. The diversity in the domains assessed and the tools used foregrounds the lack of minimum standards upheld to ensure baseline efficacy. To advance a contextually appropriate, culturally sensitive, program-specific tool for assessing CECons, clear structural and competency guidelines must be established in the curation of CECons programs, so that their true efficacy can be evaluated and clinical, legal and ethical standards maintained.


Introduction
Facing shifts in sociocultural paradigms, resource pressures and the increasing complexity of medical care [1], the role of Clinical Ethics Committees (CECs) has evolved. Whilst retaining their original role of facilitating "the process and outcomes of patient care by helping to identify, analyze, and resolve" ethical, moral and legal issues in clinical care [2], CECs have come to adopt active roles in education and policy making. To meet these goals, the CEC, understood to be "[a team of] physicians, social workers, attorneys, and theologians…which serves to review the individual circumstances of ethical dilemma and which has [previously shown to provide] much in the way of assistance and safeguards for patients and their medical caretakers" [3], now educates patients, their families, clinicians, and the host organization as it guides them through the conflicts and uncertainties impacting their specific healthcare situation [4,5]. CECs have also engaged in policy-making roles to ensure consistency, transparency and accountability in resolving ethical issues in the clinical setting [3,4].
However, despite the establishment of the American Society for Bioethics and Humanities (ASBH)'s Core Competencies, there are few means of assessing the quality of CECons [19,20,27].

Need for this review
Focusing upon determining if and how CECs meet their 'fundamental' role of carrying out CECons, and whether these consults meet prevailing requirements, a systematic scoping review (SSR) of prevailing tools to assess the quality of CECons is proposed. This narrow area of study sets this SSR apart from previous reviews of CECs, which have taken a more generalized view of assessing CEC function [30,31]. It is hoped that mapping prevailing methods of assessing CECons will guide the design of a robust CECons assessment tool. The need for an assessment tool to evaluate the approach, quality and content of CECons [32], assess their long-term effects on patient care and safety [6,33] and standardize and benchmark practice [34] is further underlined by evidence of variations in CEC practice and CECons methods that will ultimately undermine the efficacy and standing of CECs as a whole. Better understanding of how CECs meet this key role will also improve oversight of, and improvements to, the quality standards and guidelines of CECs [35,36].

Methods
An SSR of prevailing methods and tools to assess CECons is proposed to map the size and scope of the available literature in peer-reviewed and grey literature studies [37-41]. The flexible nature of an SSR enables systematic extraction and synthesis of actionable and applicable information [42] across a wide range of practice settings [43,44], whilst summarizing available literature on CECons assessments [45,46] and circumventing limitations posed by a dearth of relevant literature [43,44,47-49]. These data, along with the identification of commonalities within CEC practice, could lay the foundations for a consistent approach to assessing CECons [37-41]. Levac et al.'s [37] methodological framework for conducting scoping reviews was adopted to map "the key concepts underpinning a research area and the main sources and types of evidence available" [40] and to "produce a profile of the existing literature in a topic area, creating a rich database of literature that can serve as a foundation" to inform practice and guide further research [38,51,52]. Guided by the PRISMA-P 2015 checklist [45], a six-stage systematic scoping review protocol was developed for this study [37-41].

Stage 1: identifying the research question
To better understand prevailing CECons assessment tools, the ten-member research team discussed prevailing concerns regarding evaluations of CECons with a team of experts consisting of two medical librarians, five CEC members at the National Cancer Centre Singapore and Singapore General Hospital; academics from the Centre for BioMedical Ethics at the National University Singapore and the Palliative Care Institute Liverpool at the University of Liverpool; and clinicians and educationalists from the Yong Loo Lin School of Medicine at the National University of Singapore (NUS) and Duke-NUS Medical School (henceforth the expert team).
To further focus this review on assessments of CECons, Post et al.'s (2015) description of CECons was adopted to guide this process; areas to consider included "the goals of ethics consultation, who may perform ethics consultation, who may request ethics consultations, what requests are appropriate for the ethics consultation service, what requests are appropriate for ethics case consultation, which consultation model(s) may be used and when, who must be notified when an ethics case consultation has been requested, how the confidentiality of participants will be protected, how ethics consultations will be performed, how ethics consultations will be documented, who is accountable for the ethics consultation service and how the quality of ethics consultation will be assessed and assured" ([4], p.144). From this description, it is evident that assessments of CECons must necessarily include evaluations of the personnel and processes involved in CECons, the methods used to assess CECons, and the outcomes of the CECons.
To this end, the expert and research teams determined the primary research question to be "what tools are available to evaluate the quality of CECons?" The secondary research questions include "what domains of CECons were evaluated in prevailing assessment tools, or were proposed to be evaluated?" and "how were they assessed, or proposed to be assessed?" These questions were designed around the population, concept and context elements of the inclusion and exclusion criteria [53], using a PICOS format (Table 1). The draft protocol was designed and shaped by feedback from the panel of experts and the research team.

Stage 2: identifying relevant studies
Independent pilot searches were carried out by the ten members of the research team using variations of "clinical ethics consultations" and "assessment" that appeared in the titles and abstracts of research papers in PubMed between 1st January 1992 and 17th December 2019. The searches were confined to articles published after 1992, in acknowledgment of the year the Joint Commission first recognized the CEC's role in patient care [4]. The detailed search strategy for PubMed is shown in Table 1 in Additional file 1. Based on these findings, the research team, guided by the expert team, created the search terms and strategies for the other databases.
The research team adopted the search strategies set out for each database and carried out independent searches of each database. The results of the independent pilot searches were discussed online and at face-to-face meetings, where Sambunjak et al.'s (2010) 'negotiated consensual validation' approach was used to achieve consensus on the final list of abstracts to be included [54]. Guided by the expert team, the research team conducted independent searches of the PubMed, Embase, JSTOR, ERIC, Scopus and PsycInfo databases between 18th October 2019 and 17th December 2019.

Stage 3: selecting studies to be included
Each member of the research team independently screened the titles and abstracts using the same screening tool. The lists of articles identified by each member of the research team were shared and discussed online and at face-to-face meetings. The 'negotiated consensual validation' approach was again employed to achieve consensus on the final list of full-text articles to be studied and analyzed.

Stage 4: data characterization and analysis
With a focus on evaluating the personnel, processes and engagement of stakeholders in CECons, three members of the research team adopted Hsieh and Shannon's (2005) approach to directed content analysis to independently assess the included articles [55,56]. Four categories were drawn from Adams et al.'s (2014) [57] review of single IRBs, Chenneville's IRB Researcher's Assessment Tool [58] and the core characteristics of CECs highlighted by Post et al. (2015) [4] and Flamm (2012) [59].
Concurrently, in keeping with Krishna's 'Split Approach' [60], adopted to enhance the trustworthiness and reproducibility of the analysis, three other members of the study team employed Braun and Clarke's (2006) [61] thematic analysis approach to independently analyze the included articles. All articles were thus analyzed through independent use of both thematic analysis and directed content analysis. Use of Krishna's 'Split Approach' [60] served as a means of confirming and triangulating the findings [62]. Concurrently, 'negotiated consensual validation' served as a means of peer debriefing, thus enhancing the validity of the findings [54,63]. Nine thousand and sixty-six abstracts were identified, 617 full-text articles were reviewed and 104 full-text articles were analyzed (Fig. 1: PRISMA Flowchart). When compared, the findings of the concurrent thematic and content analyses revealed the same themes/categories, allowing them to be presented together.
Stage 5: collating, summarizing and reporting the results
The narrative produced was guided by the Best Evidence Medical Education (BEME) Collaboration guide [64,65] and the STORIES (STructured apprOach to the Reporting In healthcare education of Evidence Synthesis) statement [66]. Critical appraisals were deemed unnecessary for this scoping review, as it aimed to consider a wide landscape of articles and thus did not seek to exclude articles on the basis of critical appraisal scores.

Stage 6: consultation with key stakeholders
Feedback was sought from key stakeholders after the results were collated and reported.

Results
The four themes/categories elucidated were the purpose of the CECons evaluation, the various domains assessed, the methods of assessment used and the long-term impact of these evaluations. These are outlined in Fig. 2.

Fig. 2 Results of This Review

Long-term Impact of CECons Evaluations
Positive long-term impacts of CECons evaluations include the development of new guidelines [15], the formalization of ethics consultations [69], and increased self-reflection by CEC consultants [76,108].

Agreement of results by key stakeholders
The stakeholders and expert team were in agreement that these findings reflected prevailing practice and called for the design of a new holistic and longitudinal assessment tool for CECons based upon the disparate findings of this review.

Discussion
In addressing its primary and secondary research questions, this systematic scoping review identifies a variety of tools designed to assess different aspects of CECons. The diversity of these assessment tools stems from the overall goals of assessing CECons, which are largely driven by a combination of objectives, including accrediting CEC members, evaluating the CECons process, benchmarking it against prevailing standards and/or other programs, and determining the overall impact on patient care.
Notably, the four domains assessed were the CEC personnel's attributes and skillsets; the approach employed in the CECons process; the CECons decisions; and the presence of support for the CECs.
With tools ranging from self-appraisals to single-time-point and longitudinal assessments, perhaps just as significant is the diversity in methods and in the quality and type of data generated from them. Such variability in the domains and tools used explains the lack of consistency in CECons assessments. Whilst it may be argued that such diversity merely reflects practical limitations [133] or adaptations to sociocultural and clinical factors [134], a minimum standard must be upheld to ensure baseline efficacy. CEC programs must be rigorously structured and core competencies for CEC members consistently adopted. The Healthcare Ethics Consultant-Certified (HEC-C) program and the Core Competencies for Healthcare Ethics Consultation set out by the ASBH would set the tone for such structuring and training [135], establish the minimum data sets to be evaluated, and guide standardization of the methods used to collect these data [128].
However, such a standard-setting process must be mindful of the prevailing clinical, educational, ethical, legal, sociocultural, financial and contextual factors influencing CECs as they continue to expand across North America [6,7], Asia [8-12] and Europe [13-15]. It may prove useful for CECs to adopt a modified Delphi approach to consider the key elements of an effective CECons process and design an assessment tool that better suits their context, focus, capabilities and capacities [118]. Indeed, like pieces of a jigsaw, bringing together and carefully deliberating the disparate considerations and domains discerned by this systematic scoping review would allow a more cohesive and comprehensive assessment tool to be curated.

Limitations
There are a number of limitations to this review. Firstly, the use of directed content analysis based on a relatively novel interpretation of the data would have been problematic without the employment of the 'Split Approach'. However, whilst the 'Split Approach' addresses concerns surrounding the validity of using directed content analysis and accounts for researcher reflexivity, the approach itself remains unproven. Despite this, some of these concerns are assuaged by the use of Braun and Clarke's (2006) [61] approach to thematic analysis, which served as a means of confirming the evidence, a form of triangulation, and a method of enhancing the validity of the findings.
Secondly, this review drew its conclusions from a small pool of papers, limited to articles published in or translated into English, primarily from North America and Europe. This may limit the applicability of the findings in wider healthcare settings.

Conclusion
In addressing its primary and secondary research questions, this systematic scoping review highlights the variable goals, contextual factors and practical limitations behind the lack of a consistent approach to assessing CECons. In so doing, this review also highlights the need for a contextually appropriate, culturally sensitive, program-specific assessment tool designed around the key domains identified here. Such a tool could be used not only to evaluate the quality and content of CECons but also to inform the training and assessment of CEC members, improve CECons procedures, assess the efficacy and impact of a program's CECons, and benchmark its practice against other programs and international standards of practice.
With enhancing patient care at the centre of their processes, CECs should employ prevailing design principles and assessment theories to improve their educational and policy-making roles in establishing national standards. Whilst there is still much to be done, and the efficacy of CECs' other roles remains to be evaluated, we believe this systematic scoping review points the way towards more accountable, effective and user-friendly CECs. We look forward to further discourse on this critical aspect of clinical practice.
Additional file 1. Appendix with PubMed Search Strategy and List of Included Articles.