Using a modified Delphi process to develop a structured case-based handover assessment tool

Background Objective assessment of medical handover skills is challenging. This short communication illustrates the use of a modified Delphi technique, drawing on both clinicians and educationalists, to produce a fair and robust tool for assessing medical student competence in handover. Method Using three rounds in a reactive Delphi process, two case-specific assessment tools were formulated. In Round 2, clinical experts responded to pre-defined criteria set in Round 1, rather than being invited to develop their own benchmarks. In Round 3, medical educationalists identified the components that a student should be expected to hand over at their stage of training. Results For Case 1, twenty-eight initial key handover components were refined down to eighteen, which were then encompassed in the final Case 1 assessment tool. For Case 2, thirty-one initial components were refined down to seven key handover components, which were then used to populate the final Case 2 assessment tool. Conclusion The modified Delphi process allows the formulation of a robust and fair assessment tool, but challenges were encountered.


Introduction
The Delphi survey is a recognised, reliable and established method for reaching a consensus of opinion from experts (Boath, Mucklow and Black, 1997; Hsu and Sandford, 2007). There is no standardised method of conducting a Delphi survey and several modifications exist, with variable numbers of feedback rounds and respondents to fit the needs of the research (Gene and George, 1999; Michels, Evans and Blok, 2012).
The teaching of handover in undergraduate education is essential (General Medical Council, 2018); however, its inclusion in curricula is variable (Liston et al., 2014; Thaeter et al., 2018).
A local medical school was identified as not explicitly teaching handover and so an educational workshop was developed.
The modified Delphi approach was selected to produce a handover performance assessment tool to assess medical students' competence in delivering a medical handover before and after the handover teaching intervention.
This approach enabled sampling across a range of experts within a large health board. When used to produce a tool assessing skills competence, it offers high face and concurrent validity (Gibson, 1998; Williams and Webb, 1994), thus strengthening the project's validity. It has been widely accepted and utilised as a research method within healthcare settings (Gibson, 1998; Hasson, Keeney and McKenna, 2000; Williams and Webb, 1994).

Methods
The project's assessment tool was formulated from a modified or "reactive" Delphi model, where panels of multidisciplinary experts respond to pre-defined criteria rather than being invited to develop their own benchmarks (Hasson, Keeney and McKenna, 2000; Davies, Martin and Foxcroft, 2016; Tonni and Oliver, 2013). Three rounds were utilised to finalise the tool.

Modified Delphi Round 1
The researchers, a group of mixed specialty doctors, formed a focus group to decide upon the basic structure of the assessment tool. A Situation-Background-Assessment-Recommendation (SBAR) format was chosen due to its widespread use within our clinical context and endorsement from various professional and regulatory organisations (World Health Organisation, 2007). A pre-existing, validated tool that could be modified to fit our purpose was sought. The "Clinical Handover Assessment Tool" (Moore et al., 2017) was chosen because of its previous use in undergraduate medical education research. The focus group adapted the tool to the SBAR format and derived assessment competency domains within it.
Two mock medical cases were produced in a similar format to local health board medical case notes to provide authenticity. The students utilised these to formulate a handover for the pre- and post-intervention assessments. The medical cases included explicit instructions and a detailed management plan to minimise any confounding effects of the students' clinical knowledge on their handover performance.

Modified Delphi Round 2
A panel of fifteen multidisciplinary consultant doctors was invited to review the cases and assessment tool structure. The materials and responses were sent electronically. The panel were asked to identify components from the case notes that were essential to hand over. When 50% of respondents agreed on a component, a consensus was deemed to have been reached. A list of key components was formed for each case.
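The paper does not specify how the tallies were computed, but the 50% consensus rule used in this round can be sketched in a few lines. In this illustrative Python sketch (the `consensus_components` helper and the component names are hypothetical, not taken from the study), each expert's response is a set of component labels, and a component survives if at least half of the respondents selected it:

```python
from collections import Counter

def consensus_components(responses, threshold=0.5):
    """Return the components selected by at least `threshold` of respondents.

    `responses` is a list of sets, one per expert, each containing the
    component labels that the expert marked as essential to hand over.
    """
    n = len(responses)
    # Count how many experts selected each component.
    counts = Counter(c for response in responses for c in response)
    # Keep components meeting the consensus threshold, sorted for a stable list.
    return sorted(c for c, k in counts.items() if k / n >= threshold)

# Hypothetical example: four experts reviewing one mock case.
panel = [
    {"patient ID", "diagnosis", "escalation status"},
    {"patient ID", "diagnosis"},
    {"patient ID", "outstanding tasks"},
    {"diagnosis", "escalation status"},
]
print(consensus_components(panel))
# → ['diagnosis', 'escalation status', 'patient ID']
```

Here "outstanding tasks" is dropped because only one of the four experts (25%) selected it. The same thresholding would apply unchanged in Round 3, with the educationalists' responses replacing the consultants'.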

Modified Delphi Round 3
The cases, assessment sheets and key component lists were circulated to ten medical education experts from across NHS Lanarkshire Medical Education faculty. From the key components list, they were asked to select those that they expected the students to be able to identify as essential to the handover. The components on which a 50% consensus had been reached populated the final case-specific tools.
During the assessment, to achieve 'observed' for each individual assessment criterion, the student had to hand over all the key information that had been identified by this process under each specific domain.

Results

Case 1
Seven of the fifteen consultants responded. Individually, they identified twenty-eight separate elements as essential for students to hand over, with consensus reached on nineteen.
In the final round, four of the ten medical education experts responded. Consensus was reached for eighteen of the components carried forward from round 2, and therefore these eighteen elements populated the final version of the assessment tool for Case 1 (Supplementary file 1).

Case 2
Thirty-one elements were identified by the seven consultants as being essential to the handover for Case 2. Of these, consensus was reached on eight components. The final round saw consensus on seven of these eight components, and the final assessment sheet for Case 2 was formed (Supplementary file 1).

Discussion
The modified Delphi process allowed us to survey clinical and medical education experts to form a standardised assessment for individual handover scenarios. Using the clinical experts ensured the assessment process reflected current clinical best practice. The medical education faculty's knowledge of the expectations of students at this stage of training helped ensure the assessment was appropriately contextualised.
However, the Delphi process also poses challenges. It is time consuming and dependent on the engagement of multiple experts outwith the immediate research group (Williams and Webb, 1994; Gibson, 1998). Indeed, the number of respondents in our project was small, and the inclusion of larger panels of experts may have increased the project's validity.
Furthermore, the process does not necessarily produce a faultless assessment tool, as it relies on group consensus. It allows anonymity and an expression of opinion without influence, whilst ensuring a controlled feedback mechanism (Michels, Evans and Blok, 2012; Murphy et al., 1998); however, the final tool is invariably affected by the experiences and opinions of the group members. In this project, expert consensus was that escalation of care status should be explicitly handed over. In our clinical experience, unwell patients are deemed for discussion with, and escalation to, the intensive care unit in the event of deterioration, unless explicitly stated otherwise beforehand. Both fictional patients in the cases were previously well individuals, and the mock case notes explicitly recorded them as "for full escalation." In the assessment, the majority of the students omitted this information and did not achieve this assessment component. The researchers feel this omission reflects the authentic clinical practice to which the students are exposed. Potentially, strict adherence to the modified Delphi process compromised the integrity of the research tool by not truly reflecting current clinical practice. This highlights the importance of continual validity judgements in assessment standardisation. A fourth Delphi round informed by student performance data may help with further refinement.

Conclusion
The use of the modified Delphi process is accepted and widespread in healthcare research for forming valid and objective assessments (Williams and Webb, 1994). The Delphi process provides validity to an assessment tool, especially one assessing competency-based skills that require a large-group consensus on expected standards and where consistent best-practice evidence is lacking (Tomasik, 2010). However, the Delphi process is not infallible. Our study highlighted the potential for the process to compromise the integrity of the product if it diverges from true clinical practice and experience (Holt et al., 2020).

Take Home Messages
- The Delphi process provides validity to an assessment tool assessing handover competency-based skills, which require a large-group consensus on expected standards and where consistent best-practice evidence is lacking.
- The Delphi process allows experts to form a standardised assessment for individual handover scenarios which reflects current clinical best practice.
- The process can be time consuming and dependent on the engagement of multiple experts outwith the immediate research group.