International Journal of Healthcare Simulation
Exploring self-led debriefings in simulation-based education: an integrative review protocol

DOI:10.54531/fxbh1520, Pages: 1-10
Article Type: Protocol
Abstract

Background

Facilitator-led debriefing is commonplace in simulation-based education and has been extensively researched. In contrast, self-led debriefing is an emerging field that may yet provide an effective alternative to well-established debriefing practices. The term ‘self-led debriefing’, however, is often used across a variety of heterogeneous practices in a range of contexts, leading to difficulties in expanding the evidence base for this practice. Evidence, specifically exploring in-person group self-led debriefings in the context of immersive simulation-based education, is yet to be appropriately synthesized. This protocol explains the rationale for conducting an integrative review of this topic whilst summarizing and critiquing the key steps of the process.

Research Aim and Question

The aim of this integrative review is to systematically search, analyse and synthesize relevant literature to answer the following research question: With comparison to facilitator-led debriefings, how and why do in-person self-led debriefings influence debriefing outcomes for groups of learners in immersive simulation-based education?

Methods

This is a protocol to conduct an integrative review aligned with Whittemore and Knafl’s established five-step framework. The protocol fully addresses the first two steps of this framework, namely the problem identification and literature search stages. Seven databases (PubMed, Cochrane, EMBASE, ERIC, SCOPUS, CINAHL Plus and PsycINFO) will be searched comprehensively to optimize both the sensitivity and precision of the search in order to effectively answer the research question. The protocol also outlines and appraises the various procedures that will be undertaken in the data evaluation, analysis and presentation stages of the process.

Discussion

This review will attempt to address a gap in the literature concerning self-led debriefing in immersive simulation-based education, as well as identify areas for future research. Integrative reviews aim to provide a deeper understanding of complex phenomena and we detail a comprehensive explanation and justification of the rigorous processes involved in conducting such a review. Finally, this protocol highlights the applicability and relevance of integrative reviews for simulation-based education scholarship in a wider context.


Introduction

Simulation-based education (SBE) has become a widely adopted educational technique across healthcare disciplines. Many experts now broadly agree that the discourse surrounding research in SBE has moved on from ‘does SBE work?’ to an assessment of the contexts and conditions under which it is best employed and most effective, and why this may be the case [1,2]. SBE has wide-ranging applications, from learning complex technical skills to practising and consolidating behavioural skills as part of a multidisciplinary team [1]. Immersive SBE deeply engages learners’ senses and emotions through an environment that offers physical and psychological fidelity, thereby shaping their perception of realism [3]. The concept of immersion relates specifically to the subjective impression of learners that they are participating in an activity as if it were a real-world experience [4]. Many simulated learning events (SLEs) achieve this by having learners interact with either computerized manikins or simulated patients to work through scenario-based simulations. This is commonly followed by structured facilitated debriefings, during which scenarios are reviewed and learner experiences are reflected upon, explored and analysed, such that meaningful learning can be derived and consolidated [5].

Debriefing has been defined as a ‘discussion between two or more individuals in which aspects of performance are explored and analysed’ (p. 658) [6] and is commonly cited as one of the most important aspects of learning in immersive SBE [7,8]. Debriefings should provide a psychologically safe environment for learners to actively reflect on actions and assimilate new information with previously constructed knowledge, thus developing strategies for future improvement within their own real-world context [5,7]. They are typically led by facilitators guiding conversations to ensure content relevance and successful achievement of intended learning outcomes [9]. Evidence suggests that the quality of debriefing is highly dependent on the skills and expertise of the facilitator and the establishment of a safe psychological space for learning [10-12]. Indeed, some commentators claim that facilitator skill is the strongest independent predictor of successful learning [7]. However, this interpretation has been challenged, with self-led debriefings (SLDs) being presented as potentially effective alternatives [13,14].

Several literature reviews have reported on SLDs in comparison to facilitator-led debriefings (FLDs) within the umbrella of debriefing effectiveness more generally [6-9,11,14-18]. The consensus is that there is limited evidence of superiority of one approach over the other. However, the broad scope of these reviews limits the authors’ ability to critically appraise the evidence, and detail relating specifically to SLDs is lacking. To our knowledge, only one published review has specifically investigated the impact of SLDs on debriefing effectiveness [19]. Two questions were asked in this review: (1) what are the characteristics of self-debriefs used in healthcare simulation? and (2) to what extent do self-debriefs found in the literature align with the standards of best practice for debriefing? Whilst that review concluded that well-designed SLDs and FLDs produce equivalent outcomes, its findings included virtual settings and were limited to debriefings of individual learners only. The value and place of in-person SLDs for groups of learners post-immersive SLEs, either in isolation or in comparison with FLDs, therefore warrants dedicated exploration.

SLDs are a relatively new concept offering a potential alternative to well-established FLDs. Their utility has important implications for SBE due to the resources required to support faculty development programmes [7,10]. Evidence is emerging regarding how and why in-person SLDs influence debriefing outcomes for groups of learners in immersive SBE, but that evidence is yet to be appropriately synthesized. This integrative review (IR) aims to address this current gap in the evidence base, thereby informing simulation-based educators of best practices moving forward in immersive SBE, whilst highlighting gaps for further research.

What is self-led debriefing?

There is currently no consensus definition for SLDs within the literature, and as such the term encompasses a wide variety of heterogeneous practices, leading to a confusing narrative for commentators to navigate as they report on debriefing practices. To ensure clarity for the purposes of this study, we have refined the definition of ‘self-led debriefing’ to describe debriefings that occur without the immediate presence of a trained faculty member, such that the debriefing is conducted by the learners themselves.

Rationale for an integrative review

Alignment between the research paradigm, research question and methodology is vital for conducting high-quality research and is often only invoked at a superficial level in health professions education (HPE) research [20]. To ensure such alignment, we have chosen to undertake an IR to answer the research question. This approach will fulfil the need for new insights and innovative approaches to SBE, such that the nature of science with which we engage and the subsequent knowledge formation it generates are not constricted [2].

Whilst systematic reviews may have traditionally been viewed as a gold-standard review type, this perspective is now regularly challenged, especially within HPE research [21]. An IR is one that integrates findings from studies with diverse and differing designs, in which both quantitative and qualitative, and in some cases theoretical, data sets are examined in a rigorous systematic manner, thus aiming to provide a more comprehensive understanding of a particular phenomenon [22-25]. Such an approach is particularly pertinent in SBE, with researchers employing wide-ranging study designs to examine complex phenomena from a variety of differing perspectives and paradigms. There are legitimate concerns that the inherent complexity of integrating findings from diverse sources and study designs can lead to inaccuracies, biases and a lack of rigour [26]. Such concerns have prompted attempts to develop frameworks with which to conduct IRs in a manner that enhances the methodological rigour of the work [22-25,27,28].

Integrative review framework

This IR protocol is aligned with Whittemore and Knafl’s [22] framework (Figure 1). To ensure methodological rigour, their five-step process is explicitly framed, structured, protocolized and documented [22]. Multiple publications describe various modifications of this framework [23-25,27,28]. However, the original version allows the flexibility to sub-categorize different elements of the methods and processes to suit the specific requirements of this IR.

Figure 1: Five-step IR framework [22].

This protocol fully addresses the first two steps of Whittemore and Knafl’s [22] framework. It then outlines the various processes that will be undertaken in steps three through five, detailing several facets of the evaluation, analysis and presentation stages. By documenting this process in detail, we aim to inform and inspire other SBE researchers to utilize this hitherto underemployed review method in their own work.

The research paradigm

This study is rooted in constructivism and constructionism, with important elements originating from both perspectives. Constructivism is a paradigm in which individuals socially construct concepts, models and schemas to develop personal meanings whilst making sense of the world and reality from their subjective experiences [29]. Constructionism emphasizes the deep influence that culture and society have on how such subjective experiences shape an individual’s formulation of meaning within the world, or context, in which they reside, thereby informing their ongoing thoughts and behaviours [30]. Therefore, in the context of immersive SBE, we reject the notion of ‘one objective reality’, instead recognizing the presence of multiple subjective realities that are constructed by individuals or groups. Participant experiences influence their view of reality and therefore different meanings may be derived from the same nominal experience. Furthermore, construction of meaning may be influenced and shaped by the presence or absence of facilitators within debriefings. In this IR, by applying theory to interpret data already collected and analysed by other parties, we will use theory inductively to formulate a new understanding and interpretation of the evidence already available. Several theories, such as experiential learning [31], transformative learning theory [32], situated learning theory [33] and social constructivist theory [34], inform learning in immersive SBE contexts, and will be drawn upon in the analysis of the findings in this IR to explore how and why in-person SLDs influence debriefing outcomes for groups of learners.

Research aim and question

The aim of this IR is to systematically search, analyse and synthesize relevant literature to explore in-person SLDs in immersive SBE for groups of learners. Emerging from this aim, the overarching research question is: With comparison to facilitator-led debriefings, how and why do in-person self-led debriefings influence debriefing outcomes for groups of learners in immersive simulation-based education?

Methods

Step 1: Problem identification stage

To formulate the research question, we deconstructed its various parts, initially comparing both the PICOS (Population, Intervention, Comparison, Outcome, Study design) [35] and SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type) [36] frameworks (Table 1). We chose the PICOS framework as the inclusion of a comparator suits this study, whilst safeguarding the integrative study design to ensure a range of diverse quantitative and qualitative studies can be identified. Whilst such frameworks may derive from positivist disciplines, their use can extend to other paradigms [37]. However, there remains dubiety as to whether these frameworks are too simplistic to aid in formulating integrative questions, which require searching across a cross-section of study designs to adequately capture the complexity of contexts involved in HPE research [38].

Table 1:
Comparison of PICOS and SPIDER frameworks for formulation of the research question [35,36]
PICOS framework (Methley et al. [35]) | SPIDER framework (Cooke et al. [36])
Population: In-person immersive SBE debriefing participants | Sample: In-person immersive SBE debriefing participants
Intervention/interest: Self-led debriefings | Phenomenon of interest: Self-led debriefings
Comparison/context: Facilitator- or instructor-led debriefings | Design: Integrative
Outcome: Any outcomes | Evaluation: Any outcomes
Study design: Integrative: both quantitative and qualitative studies included | Research type: Integrative: both quantitative and qualitative studies included

Step 2: Literature search stage

The search strategy in any knowledge synthesis is critical to ensure that relevant literature within the scope of the study is identified, thus minimizing bias and enhancing methodological rigour [22,39]. It should be clearly documented and transparent such that readers are able to reproduce the search themselves with the same results [28,40]. We sought the expertise of a librarian to help ensure a focused, appropriate and rigorous search strategy [20,28,40,41].

We iteratively designed an extensive and broad search strategy that optimizes both the sensitivity and precision of the search, thereby ensuring that relevant literature pertaining to the review is identified whilst avoiding non-relevant studies [41]. By framing the keywords articulated within the research question, along with their associated potential synonyms, into the PICOS framework, we constructed a logic grid to document the key search terms (Table 2) [40]. To ensure these terms are comprehensive, we have analysed and incorporated free-text words, keywords and index terms of relevant articles identified during a preliminary scoping literature search. These terms were then employed in pilot iterative searches of the PubMed database, and based upon preliminary results, have been subsequently refined to formulate the finalized list of relevant, inclusive and precise search terms (Table 2). Lefebvre et al. [41] assert that whilst research questions often articulate specific comparators and outcomes, this is not always replicated in the titles or abstracts of articles and hence not well indexed. They, therefore, recommend that a search strategy typically encompass terms from three categories of the PICOS framework as opposed to all five [41]:

  1. Terms to search for the situation of interest (i.e. the population)
  2. Terms to search for intervention(s) evaluated
  3. Terms to search for the study design types to be included

Table 2:
Logic grid aligned with the PICOS elements of the review question, omitting the outcome/study design categories [39,40]
PICOS framework category | Key search terms
Population/problem/setting | Simulation training [MeSH], Simulation-based, Simulation-enhanced, Simulation training, Simulation teaching, Simulation event, Immersion, Simulation, Simul*, Debrief*, Conversation*
Intervention | Self-led, Peer-led, Group-led, Participant-led, Student-led, Self-directed, Student-directed, Self-guided, Self-facilitated, Peer-facilitated, Group-facilitated, Student-facilitated, Self-debrief*, Peer-debrief*, Group-debrief*, Self debrief*, Peer debrief*, Group debrief*, Within-team
Comparison | Facilitator-led, Instructor-led, Faculty-led, Instructor debrief*, Facilitated
Outcome | n/a
Study design | n/a

Due to the well-established practice of FLDs within SBE [13], we have modified this guidance by choosing to include FLDs as a comparator term in the search strategy. Furthermore, in addition to omitting outcome terms, we have chosen to forgo specifying types of study design as, by definition, IRs encourage the incorporation of diverse study methodologies [22]. The final search strategy used in the PubMed search can be found in Table 3. The search strategy will be customized to accommodate the characteristics of each specific database.

Table 3:
PubMed search conducted on 14/10/2022 with resultant number of records
#1 Simulation training [MeSH] OR simulation-based OR simulation-enhanced OR “simulation training” OR “simulation teaching” OR “simulation event” OR (immersion AND simulation). 26,658
#2 (Facilitator-led OR Instructor-led OR Faculty-led OR “Instructor debrief*” OR Facilitated) OR (Search: Self-led OR Peer-led OR Group-led OR Participant-led OR Student-led OR Self-directed OR Student-directed OR Self-guided OR Self-facilitated OR Peer-facilitated OR Group-facilitated OR Student-facilitated OR Self-debrief* OR Peer-debrief* OR Group-debrief* OR “Self debrief*” OR “Peer debrief*” OR “Group debrief*” OR Within-team) 659,522
#3 Debrief* OR Conversation* 31,513
#4 (#2 AND #3): ((Facilitator-led OR Instructor-led OR Faculty-led OR "Instructor debrief*" OR Facilitated) OR (Search: Self-led OR Peer-led OR Group-led OR Participant-led OR Student-led OR Self-directed OR Student-directed OR Self-guided OR Self-facilitated OR Peer-facilitated OR Group-facilitated OR Student-facilitated OR Self-debrief* OR Peer-debrief* OR Group-debrief* OR "Self debrief*" OR "Peer debrief*" OR "Group debrief*" OR Within-team)) AND (Debrief* OR Conversation*) 3,795
#5 (#1 AND #4): (((Facilitator-led OR Instructor-led OR Faculty-led OR "Instructor debrief*" OR Facilitated) OR (Search: Self-led OR Peer-led OR Group-led OR Participant-led OR Student-led OR Self-directed OR Student-directed OR Self-guided OR Self-facilitated OR Peer-facilitated OR Group-facilitated OR Student-facilitated OR Self-debrief* OR Peer-debrief* OR Group-debrief* OR "Self debrief*" OR "Peer debrief*" OR "Group debrief*" OR Within-team)) AND (Debrief* OR Conversation*)) AND (Simulation training [MeSH] OR simulation-based OR simulation-enhanced OR "simulation training" OR "simulation teaching" OR "simulation event" OR (immersion AND simulation)) 381
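
Purely as an illustration, and not as part of the protocol itself, the sketch below shows how the Boolean structure of Table 3 follows from the Table 2 logic grid: each block of terms is collapsed into a parenthesized OR statement, and the blocks are then combined with AND. The term lists are abridged, and quoting and truncation syntax would need adapting to each database interface.

```python
# Illustrative sketch only: assembling a PubMed-style Boolean query from the
# Table 2 logic grid. Term lists are abridged; field tags, quoting and
# truncation rules differ between databases, so each string would be adapted
# per interface.

population_terms = ['Simulation training [MeSH]', 'simulation-based',
                    '"simulation training"', '(immersion AND simulation)']
debrief_terms = ['Debrief*', 'Conversation*']
intervention_comparison_terms = ['Self-led', 'Peer-led', 'Facilitator-led',
                                 'Instructor-led', '"Self debrief*"', 'Within-team']

def or_block(terms):
    """Join a list of search terms into a single parenthesized OR block."""
    return '(' + ' OR '.join(terms) + ')'

# Mirrors lines #1, #2, #3 and #5 of Table 3: the three blocks ANDed together.
query = ' AND '.join([or_block(intervention_comparison_terms),
                      or_block(debrief_terms),
                      or_block(population_terms)])
print(query)
```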

Searching only one database is inadequate and may lead to identifying a small and unrepresentative selection of relevant studies [28,41]. Therefore, guided by the research question, researchers should select several electronic bibliographic databases that are aligned with specific scientific and academic fields [42]. Whilst there is no consensus as to what constitutes an acceptable number of databases searched, it is not necessarily the number of databases searched that should be questioned, but rather which databases were searched and why [39]. Taking these factors into consideration, to ensure our search strategy best suits our research topic, we will search seven key electronic bibliographic databases: PubMed, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, ERIC, SCOPUS, CINAHL Plus and PsycINFO. Furthermore, we will conduct supplementary manual searches of reference lists from relevant studies identified via the search strategy and a manual search of relevant SBE internet resources, such as healthysimulation.com, ResearchGate and Google Scholar, to avoid missing key studies.

We will use the bibliographical software package EndNote™ 20 to organize search results due to its functionality, familiarity and ability to directly communicate with and retrieve references from the databases being searched [43]. Storage of the search strategies and methods will ensure a transparent and auditable process that will be available for external review.
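
Record handling will be done within EndNote; the sketch below is simply an illustration, under assumed field names, of the kind of de-duplication step involved when exports from seven databases are merged into a single library.

```python
# Illustrative only: merging exports from several databases and removing
# duplicates. Field names ('doi', 'title', 'source_db') are assumptions, not
# an EndNote API; the protocol itself will use EndNote 20 for this step.
def deduplicate(records):
    """Keep the first occurrence of each record, keyed on DOI or, failing that, a normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get('doi') or rec.get('title', '')).strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

merged = deduplicate([
    {'doi': '10.1000/example', 'title': 'Within-team debriefing', 'source_db': 'PubMed'},
    {'doi': '10.1000/example', 'title': 'Within-team debriefing', 'source_db': 'EMBASE'},
])
print(len(merged))  # 1
```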

Inclusion and exclusion criteria

Inclusion and exclusion criteria allow researchers to refine their searches to locate specific data that explicitly answers the research question [28], manage the volume of research generated and focus their reviews more reliably [44]. However, such criteria may introduce implicit and explicit biases into the search results [45]. For example, by only including peer-reviewed published studies, this IR will potentially be confounded by publication bias, whereby its findings may be skewed by the types of studies or associated reported findings that are more likely to be published compared with types of studies or results that are deemed to be less important or valid [46]. Furthermore, non-peer-reviewed grey literature such as essays, commentaries, editorials, letters, blogs, conference abstracts, theses and course evaluations will all be omitted, despite being potentially relevant to the results of the review [40]. Importantly, whilst the peer-review process is being increasingly questioned by academics across scientific disciplines [47], it provides quality assurance and scrutiny for academic work and remains a cornerstone in scientific publishing [48]. It thus remains part of the inclusion criteria. Finally, we have excluded studies examining non-immersive SLEs and studies examining debriefings that were either virtual, related to clinical events or included only one learner, as their findings may not be applicable to the contexts described in our research question. Having carefully considered the advantages, disadvantages, logistics and practicalities of these matters in relation to our research question, the criteria presented in Table 4 will be applied to the search strategy.

Table 4:
Inclusion and exclusion criteria
Inclusion criteria | Exclusion criteria
Original empirical research | Non-empirical research
Live in-person SLE and debriefing studies | Virtual/online/tele-simulation and debriefing studies
Immersive SLE debriefings | Non-immersive SLE debriefings (e.g. mastery learning workshops and procedural skills SLEs)
Debriefings including more than one learner/participant (excluding faculty) | Debriefings involving only one learner/participant (excluding faculty)
Peer-reviewed studies | Non-peer-reviewed studies
Studies reported in English | Studies reported in a language other than English
Studies describing SLDs with or without inclusion of or comparison to FLDs | Studies describing FLDs exclusively or comparing FLDs to no debriefing
Healthcare professional or student participants | Non-healthcare professional or student participants
Any date | Grey literature (including doctoral theses or dissertations, conference or poster abstracts, opinion or commentary pieces, letters, websites, blogs, instruction manuals and policy documents)
n/a | Clinical event debriefing
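
Screening against Table 4 will be a manual, judgement-based process; purely to make the logic of the criteria concrete, the hypothetical sketch below encodes each criterion as a yes/no field and admits a record only when every inclusion criterion holds and no exclusion criterion applies. All field names are assumptions introduced for illustration.

```python
# Hypothetical illustration of the Table 4 screening logic; in practice this
# judgement is made by the reviewers, not by code.
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    empirical: bool                  # original empirical research
    live_in_person: bool             # not virtual/online/tele-simulation
    immersive_sle: bool              # not mastery learning or procedural skills only
    group_debriefing: bool           # more than one learner/participant
    peer_reviewed: bool
    reported_in_english: bool
    describes_sld: bool              # SLDs alone or compared with FLDs
    healthcare_participants: bool
    clinical_event_debriefing: bool  # exclusion criterion

def include(record: ScreeningRecord) -> bool:
    """True only if every inclusion criterion holds and no exclusion applies."""
    return all([record.empirical, record.live_in_person, record.immersive_sle,
                record.group_debriefing, record.peer_reviewed,
                record.reported_in_english, record.describes_sld,
                record.healthcare_participants]) and not record.clinical_event_debriefing
```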

Documenting the search process

Documenting the search process ensures transparency such that the strategy can be reproduced by any reader [41]. It allows readers to gauge how extensive the search was or, conversely, whether there were gaps in the process [28]. Whilst guidelines differ, the consensus remains that the databases and interfaces used to conduct the searches, the explicit search strategies, and any use of limits or inclusion and exclusion criteria should be documented in some format [39]. To adhere to such standards, many authors visually present these practices via the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting tool [49]. Due to its simple style and familiarity to fellow SBE researchers, we will use the PRISMA flow chart to document and present the search process in this review [49].
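
As a hypothetical illustration of the bookkeeping behind a PRISMA 2020 flow chart, the categories of counts that will be recorded at each stage are listed below; the zeros are placeholders, not results of this review.

```python
# Placeholder counts for the PRISMA 2020 flow; values will be populated as the
# review proceeds and presented in the flow chart.
prisma_flow = {
    'records_identified_databases': 0,   # across the seven databases
    'records_identified_other': 0,       # reference lists and internet resources
    'duplicates_removed': 0,
    'records_screened': 0,               # title/abstract screening
    'records_excluded': 0,
    'full_texts_assessed': 0,
    'full_texts_excluded': 0,            # with reasons documented
    'studies_included': 0,
}
```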

Step 3: Data evaluation

Quality assessment and risk of bias

Studies identified through the search strategy should undergo a quality assessment (QA) process. This identifies the methodological quality and risk of bias within studies, which in turn informs their individual contribution, weighting and interpretation in the data analysis stage of a review [19,22,28]. Bias within the study design, conduct and analysis may impact the validity of any results [50]. There are several established QA tools, such as the Critical Appraisal Skills Programme checklists [51], the Joanna Briggs Institute critical appraisal checklists [50] and the Mixed Methods Appraisal Tool (MMAT) [52].

Within IRs, the QA process is complex due to the potential for a diverse range of empirical study designs being assessed, with each type of design generally necessitating differing criteria to demonstrate quality. Furthermore, there is no established standard ‘quality measure’, and the sampling frame of the study dictates how quality is judged [22]. We chose the MMAT because, in the context of this complexity, it aligns well with IRs, detailing distinct criteria specifically tailored across five study designs: qualitative, quantitative randomized controlled trials, quantitative non-randomized studies, quantitative descriptive and mixed methods [52].

Hierarchy of evidence

The hierarchy of evidence is an established concept in evidence-based medicine and HPE scholarship and should be considered when reviewing and appraising literature [53]. It proposes that certain types of study design, and by extension the results they present, are deemed ‘better’ or ‘stronger’ than others [54]. The ranking system, often presented as a pyramid, is mainly based upon the probability of bias, with studies having the least risk of systematic errors ranked highest [54]. The concept has been adapted with relation to SBE scholarship, splitting different study designs into filtered and unfiltered information, with evidence syntheses and clinical guidelines being highlighted as a separate category ranked just below systematic reviews [53]. However, these hierarchical models can be overly simplistic and are not necessarily the most appropriate choice for evaluating evidence and best practices in HPE [55]. First, the notion of a hierarchy of evidence assumes that all studies are conducted, within their design parameters, to the same high standard. This is not always the case. Second, especially in HPE research, the best available evidence to answer specific research questions may indeed be from alternative study designs [54]. Pilcher and Bedford [55] propose a more integrated and contemporary model which recognizes the value of differing study designs and places the goal of evidence-based education at its centre. Recognizing the value of multiple sources and study designs is the core ethos of an IR, and whilst not specific to HPE scholarship, Pilcher and Bedford’s [55] model usefully integrates this notion with the convention of hierarchy of evidence levels. Using their model in conjunction with the MMAT quality assessment tool will allow for the identification and interpretation of the best available evidence to answer our specific research question.

Step 4: Data analysis

The data analysis stage of an IR includes the processing, ordering, categorizing and summarizing of data from primary sources, with the overall aim of synthesizing information such that a unified and integrated conclusion can be made [22]. Whittemore and Knafl [22] emphasize the importance of a systematic analytic approach to this endeavour to minimize bias and the chance of error. They advocate using a four-phase constant comparison method originally described for qualitative data analysis [56]. With this method, data are compared item by item and categorized and grouped together, before further comparison between different groups allows for an analytical synthesis of the varied data. These phases will be applied to this IR and are detailed below.

Phase 1: Data reduction

Data reduction involves creating a classification system to manage data from diverse methodologies [22]. In this IR, the studies will be classified according to their study design, as per the MMAT tool to allow logical analysis, ease of review and comparison. A standardized data extraction tool ensures consistency, minimizes bias and allows easy comparison of primary sources on specific variables, demographics, data, interpretations and key findings [22,28]. Li et al. [57] propose a four-stage approach to data reduction: develop outlines of tables and figures expected to appear in the review, assemble and group data elements, identify optimal ways to frame the data points, and pilot and review the forms to ensure data are presented and structured correctly. We followed this approach to formulate a data extraction tool which will be applied consistently across multiple study designs (Table 5).

Table 5:
Data extraction tool to be used in this IR
Bibliographic details
Author(s)
Publication date
Study title and citation
Study characteristics
Study aim and objectives
Study design
Location/setting
Theoretical framework/approach
Learner characteristics
Sample size
Sample characteristics
Sample demographics
Methods
Research design
Recruitment methods
Inclusion criteria
Exclusion criteria
Ethical approval
SLE activity description
SLD activity description
Comparator description (if applicable)
Results
Outcome measures
Key findings
Conclusions/limitations and weaknesses/comments
Authors’ conclusions
Reviewer’s comments
Limitations and weaknesses
Quality assessment
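
To illustrate how the Table 5 fields could be held consistently for every included study, a minimal sketch of the extraction form as a structured record follows; the field names simply mirror the table and carry no significance beyond illustration.

```python
# Illustrative sketch of the Table 5 data extraction form as a structured record.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    # Bibliographic details
    authors: str
    publication_date: str
    title_and_citation: str
    # Study characteristics
    aim_and_objectives: str
    study_design: str
    location_setting: str
    theoretical_framework: Optional[str]
    # Learner characteristics
    sample_size: Optional[int]
    sample_characteristics: str
    sample_demographics: str
    # Methods
    research_design: str
    recruitment_methods: str
    inclusion_criteria: str
    exclusion_criteria: str
    ethical_approval: str
    sle_description: str
    sld_description: str
    comparator_description: Optional[str]
    # Results
    outcome_measures: str
    key_findings: str
    # Conclusions, limitations and quality
    authors_conclusions: str
    reviewer_comments: str
    limitations_and_weaknesses: str
    quality_assessment: str  # e.g. MMAT appraisal outcome
```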

Phase 2: Data display

Extracted data need to be assembled and displayed such that relationships across primary sources can be visualized and analysed [22]. In the case of this IR, the extracted data will be displayed in tabular form.

Phase 3: Data comparison

This stage involves an iterative process of scrutinizing data displays of primary sources such that patterns, themes or relationships can be identified [22]. This requires researchers to move beyond simply summarizing study findings, instead forming new perspectives and understandings of a topic or phenomenon whilst formulating questions that can guide further research into gaps in the literature [24].

Whilst most commonly used in qualitative data analysis, thematic analysis is a collection of techniques also regularly used in integrative research to achieve such aims [22,58,59]. It is defined as a flexible and practical method to identify, analyse and report emerging themes within data that can be rich, powerful and complex [60]. Reflexive thematic analysis (RTA) refers to an approach in which thematic analysis is fully conceptualized and underpinned by qualitative paradigms, whereby the researcher, through their reflexive interpretative analysis of the patterns of data and their meanings, has an active and central role in knowledge formulation [60]. In this approach, it is accepted, even expected, that two researchers may derive differing interpretations from the same nominal data set, and as such this approach shuns any notion of positivistic data interpretation [61]. To this end, researchers should embrace subjectivity, creativity and reflexivity in the research process [22,60]. To demonstrate reflexivity, we will be transparent in how we critically interrogate our engagement with the research process such that readers can assess how our assumptions and perspectives as simulation educators who engage primarily in FLDs may influence the analysis and interpretation of data [62]. We will build reflexivity into our process using strategies such as reflexive journaling, peer debriefing and reflection on positionality [60]. Contrary to codebook approaches where themes are often predefined prior to coding, in RTA, themes should be produced organically and be thought of as the final outcome of data coding that the researcher has interpreted from the data [60,61]. Braun and Clarke’s [60] framework provides a sound methodological, practical and theoretically flexible method with which to analyse the data that will be extracted in this IR. The framework includes the following six phases:

  1. Familiarizing yourself with the data set
  2. Coding
  3. Generating initial themes
  4. Developing and reviewing themes
  5. Refining, defining and naming themes
  6. Writing up [60]

Whilst these phases are ordered sequentially, analysis can occur as a recursive process in which researchers move back and forth between the different stages as insights deepen [60]. Themes do not simply emerge from the data, but are constructed by the researcher as they analyse, compare and map the codes and are best conceptualized as output of the analytic process that researchers undertake [60]. In this IR, we will follow this six-step process to develop the final themes.

Phase 4: Conclusion drawing and verification

This final phase in an IR is the synthesis of important elements from each subgroup analysis to form an integrated summation of the topic or phenomenon. This involves moving from the interpretative phase of the patterns, themes and relationships to higher levels of abstract and conceptual processing [22,56]. Conclusions and conceptual models can then be developed and modified to ensure that they are inclusive of as much primary data as possible, but do not exceed the evidence from which they are drawn [22,56]. The review conclusions should be explained within the context of the review parameters and limitations [28], but may be difficult to delineate and explain if there is conflicting evidence from the primary studies [22].

Step 5: Presentation

The final step in the review process is the presentation of data which, depending on the researcher’s preferences and the type of data, conclusions and interpretations formed, can occur in a variety of formats [22]. The results of this IR will be presented in a combination of tables, thematic maps and prose, thus ensuring the breadth and depth of the topic are captured [22]. An accompanying narrative summary will ensure that the results, interpretations and conclusions are aligned with the research question. Finally, the limitations of the study will be acknowledged and gaps for further research will be identified and documented.

Conclusion

SLDs are a relatively new concept that may offer an effective alternative debriefing experience to established FLD practices. The evidence regarding how and why they influence debriefing outcomes for groups of learners in immersive SBE is yet to be appropriately synthesized. This IR will attempt to address this gap as well as identify areas for future research. The purpose of this protocol is to detail, explain and justify the underlying rationale for performing an IR, and report and critically appraise the specific elements of the process in relation to our research question. Finally, through this protocol, we hope to have highlighted the applicability and relevance of IRs for SBE scholarship in a wider context.

Acknowledgements

We would like to thank Scott McGregor, University of Dundee librarian, for his invaluable help in formulating and refining the search strategy employed in this review. We would also like to thank Drs Kathleen Collins, Kathryn Sharp and Michael Basler for their critical eyes when reviewing this work and manuscript.

Declarations

Authors’ contributions

PK led the conception and design of this protocol as part of his Masters in Medical Education (University of Dundee) research project. SS supervised the development of the protocol. Both authors contributed to the writing and editing of this article and have reviewed and approved the final manuscript.

Funding

No funding declared.

Availability of data and materials

Data supporting findings in this protocol are available within the article or on special request from the lead author, Dr Prashant Kumar.

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors give consent for this manuscript to be published.

Competing interests

No conflicts of interest declared.

Authors’ information

PK currently works as an anaesthetic registrar in NHS Greater Glasgow & Clyde and has previously completed a 2-year clinical simulation fellowship. He has extensive experience of debriefing within immersive simulation in both undergraduate and postgraduate settings. He has a specialist interest in debriefing practices, interprofessional simulation-based education and faculty development.

SS currently works as a Senior Lecturer in Simulation for Health Professions Education, Dundee Institute for Healthcare Simulation, School of Medicine, University of Dundee. She is interested in flexible and blended learning in postgraduate health professions education. She has over 20 years’ teaching experience in undergraduate clinical skills education, postgraduate medical education and simulation-based faculty development, which includes many international collaborations.

References

1. 

Battista A, Nestel D. Simulation in medical education. In: Swanwick T, Forrest K, O’Brien C, editors. Understanding medical education: evidence, theory and practice. 3rd edition. Oxford: John Wiley & Sons Ltd. 2019. p.151-162.

2. 

Eppich W, Reedy G. Advancing healthcare simulation research: innovations in theory, methodology, and method. Advances in Simulation. 2022;7:23.

3. 

Lioce L, Lopreiato JO, Downing D, et al. Healthcare simulation dictionary. 2nd edition. Rockville, MD: Agency for Healthcare Research and Quality. 2020; AHRQ Publication No. 20-0019. Available from: https://www.ahrq.gov/sites/default/files/wysiwyg/patient-safety/resources/simulation/sim-dictionary-2nd.pdf [Accessed 6 June 2023].

4. 

Dede C. Immersive interfaces for engagement and learning. Science. 2009;323(5910):66-69. doi: 10.1126/science.1167311

5. 

Krogh K, Bearman M, Nestel D. “Thinking on your feet” – a qualitative study of debriefing practice. Advances in Simulation. 2016;1:12.

6. 

Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Medical Education. 2014;48:657-666.

7. 

Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simulation in Healthcare. 2007;2(2):115-125.

8. 

Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Education Today. 2014;34:e58-e63. doi: 10.1016/j.nedt.2013.09.020

9. 

Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief: a critical review of healthcare simulation debriefing methods. Simulation in Healthcare. 2016;11(3):209-217.

10. 

Cheng A, Grant V, Huffman J, et al. Coaching the debriefer: peer coaching to improve debriefing quality in simulation programs. Simulation in Healthcare. 2017;12(5):319-325.

11. 

Endacott R, Gale T, O’Connor A, Dix S. Frameworks and quality measures used for debriefing in team-based simulation: a systematic review. BMJ Simulation & Technology Enhanced Learning. 2019;5:61-72. doi: 10.1136/bmjstel-2017-000297

12. 

Kumar P, Paton C, Simpson HM, King CM, McGowan N. Is interprofessional co-debriefing necessary for effective interprofessional learning within simulation-based education? International Journal of Healthcare Simulation. 2021;1(1):49-55.

13. 

Boet S, Bould MD, Sharma B, et al. Within-team debriefing versus instructor-led debriefing for simulation-based education: a randomized controlled trial. Annals of Surgery. 2013;258(1):53-58.

14. 

Garden AL, Le Fevre DM, Waddington HL, Weller JM. Debriefing after simulation-based non-technical skill training in healthcare: a systematic review of effective practice. Anaesthesia and Intensive Care. 2015;43(3):300-308. doi: 10.1177/0310057X1504300303

15. 

Dufrene C, Young A. Successful debriefing – best methods to achieve positive learning outcomes: a literature review. Nurse Education Today. 2014;34:372-376. doi: 10.1016/j.nedt.2013.06.026

16. 

Kim Y, Yoo J. The utilization of debriefing for simulation in healthcare: a literature review. Nurse Education in Practice. 2020;43:102698. doi: 10.1016/j.nepr.2020.102698

17. 

Lee J, Lee H, Kim S, et al. Debriefing methods and learning outcomes in simulation nursing education: a systematic review and meta-analysis. Nurse Education Today. 2020;87:104345. doi: 10.1016/j.nedt.2020.104345

18. 

Niu Y, Liu T, Li K, et al. Effectiveness of simulation debriefing methods in nursing education: a systematic review and meta-analysis. Nurse Education Today. 2021;107:105113. doi: 10.1016/j.nedt.2021.105113

19. 

MacKenna V, Díaz DA, Chase SK, Boden CJ, Loerzel V. Self-debriefing in healthcare simulation: an integrative literature review. Nurse Education Today. 2021;102:104907. doi: 10.1016/j.nedt.2021.104907

20. 

Maggio LA, Sewell JL, Artino Jr AR. The literature review: a foundation for high-quality medical education research. Journal of Graduate Medical Education. 2016;8(3):297-303. doi: 10.4300/JGME-D-16-00175.1

21. 

Norman G, Sherbino J, Varpio L. The scope of health professions education requires complementary and diverse approaches to knowledge synthesis. Perspectives on Medical Education. 2022;11(3):139-143.

22. 

Whittemore R, Knafl K. The integrative review: updated methodology. Journal of Advanced Nursing. 2005;52(5):546-553. doi: 10.1111/j.1365-2648.2005.03621.x

23. 

Christmals CD, Gross JJ. An integrative literature review framework for postgraduate nursing research reviews. European Journal of Research in Medical Sciences. 2017;5(1):7-15.

24. 

Kutcher AM, LeBaron VT. A simple guide for completing an integrative review using an example article. Journal of Professional Nursing. 2022;40:13-19. doi: 10.1016/j.profnurs.2022.02.004

25. 

Soares CB, Hoga LAK, Peduzzi M, Sangaleti C, Yonekura T, Silva DRAD. Integrative review: concepts and methods used in nursing. Revista da Escola de Enfermagem da USP. 2014;48(2):335-345. doi: 10.1590/s0080-6234201400002000020

26. 

O’Mathuna DP. Evidence-based practice and reviews of therapeutic touch. Journal of Nursing Scholarship. 2000;32(3):279-285. doi: 10.1111/j.1547-5069.2000.00279.x

27. 

de Souza MT, da Silva MD, de Carvalho R. Integrative review: what is it? How to do it? Einstein. 2010;8(1):102-107.

28. 

Dhollande S, Taylor A, Meyer S, Scott M. Conducting integrative reviews: a guide for novice nursing researchers. Journal of Research in Nursing. 2021;26(5):427-438. doi: 10.1177/1744987121997907

29. 

Brown MEL, Dueñas AN. A medical science educator’s guide to selecting a research paradigm: building a basis for better research. Medical Science Educator. 2020;30:545-553. doi: 10.1007/s40670-019-00898-9

30. 

Rees CE, Crampton PES, Monrouxe LV. Revisioning academic medicine through a constructionist lens. Academic Medicine. 2020;95(6):846-850.

31. 

Kolb DA. Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice Hall. 1984.

32. 

Mezirow J. Transformative dimensions in adult learning. San Francisco, CA: Jossey-Bass. 1991.

33. 

Lave J, Wenger E. Situated learning: legitimate peripheral participation. Cambridge: Cambridge University Press. 1991.

34. 

Vygotsky LS. Mind in society: the development of higher psychological processes. Cole M, John-Steiner V, Scribner S, Souberman E, editors. Cambridge, MA: Harvard University Press. 1978.

35. 

Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Services Research. 2014;14(1):579. doi: 10.1186/s12913-014-0579-0

36. 

Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research. 2012;22(10):1435-1443. doi: 10.1177/1049732312452938

37. 

Kainth R, Reedy G. A systematic meta-ethnography of simulation debrief practice: a study protocol to investigate debrief interactions and the relationship to participant learning. International Journal of Healthcare Simulation. 2023;1-11. doi: 10.54531/tsvw4493

38. 

Booth A, Noyes J, Flemming K, Moore G, Tunçalp Ö, Shakibazadeh E. Formulating questions to explore complex interventions within qualitative evidence synthesis. BMJ Global Health. 2019;4:e001107. doi: 10.1136/bmjgh-2018-001107

39. 

Cooper C, Booth A, Varley-Campbell J, Britten N, Garside R. Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies. BMC Medical Research Methodology. 2018;18:85. doi: 10.1186/s12874-018-0545-3

40. 

Aromataris E, Riitano D. Constructing a search strategy and searching for evidence: a guide to the literature search for a systematic review. The American Journal of Nursing. 2014;114(5):49-56.

41. 

Lefebvre C, Glanville J, Briscoe S, et al. Chapter 4: Searching for and selecting studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane handbook for systematic reviews of interventions (version 6.3). 2022. Available from: https://training.cochrane.org/handbook [Accessed 6 April 2023].

42. 

Lorenzetti DL, Topfer LA, Dennett L, Clement F. Value of databases other than MEDLINE for rapid health technology assessments. International Journal of Technology Assessment in Health Care. 2014;30(2):173-178. doi: 10.1017/S0266462314000166

43. 

Bramer WM, Milic J, Mast F. Reviewing retrieved references for inclusion in systematic reviews using EndNote. Journal of the Medical Library Association. 2017;105(1):84-87.

44. 

Stern C, Jordan Z, McArthur A. Developing the review question and inclusion criteria. American Journal of Nursing. 2014;114(4):53-56.

45. 

Hammerstrøm K, Wade A, Jørgensen A. Searching for relevant studies. In: Heyvaert M, Hannes K, Onghena P, editors. Using mixed methods research synthesis for literature reviews. Thousand Oaks, CA: SAGE Publications. 2017. p.69-112.

46. 

Dalton JE, Bolen SD, Mascha EJ. Publication bias: the elephant in the review. Anesthesia & Analgesia. 2016;123(4):812-813. doi: 10.1213/ANE.0000000000001596

47. 

Stahel PF, Moore EE. Peer review for biomedical publications: we can improve the system. BMC Medicine. 2014;12:179. doi: 10.1186/s12916-014-0179-1

48. 

Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Medicine. 2016;14:85. doi: 10.1186/s12916-016-0631-5

49. 

Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. British Medical Journal. 2021;372:n71.

50. 

Aromataris E, Munn Z, editors. JBI manual for evidence synthesis. The Joanna Briggs Institute. 2020. Available from: https://jbi-global-wiki.refined.site/space/MANUAL [Accessed 6 April 2023].

51. 

Critical Appraisal Skills Programme. CASP checklists. 2008. Available from: https://casp-uk.net/casp-tools-checklists/ [Accessed 6 April 2023].

52. 

Hong QN, Pluye P, Fàbregues S, et al. Mixed Methods Appraisal Tool (MMAT), Version 2018 user guide. Ontario: McGill University, Department of Family Medicine. 2018. Available from: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/146002140/MMAT_2018_criteria-manual_2018-08-08c.pdf [Accessed 6 April 2023].

53. 

Kessler DO, Auerbach M, Chang TP. Seeking, reviewing and reporting on healthcare simulation research. In: Nestel D, Hui J, Kunkler K, Scerbo MW, Calhoun AW, editors. Healthcare simulation research: a practical guide. 3rd edition. Cham: Springer. 2019. p.51-54.

54. 

Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. Plastic and Reconstructive Surgery. 2011;128(1):305-310. doi: 10.1097/PRS.0b013e318219c171

55. 

Pilcher J, Bedford LA. Hierarchies of evidence in education. Journal of Continuing Education in Nursing. 2011;42(8):371-377. doi: 10.3928/00220124-20110401-03

56. 

Miles MB, Huberman AM, Saldana J. Qualitative data analysis: a methods sourcebook. 3rd edition. Thousand Oaks, CA: SAGE Publications. 2014.

57. 

Li T, Higgins JPT, Deeks JJ. Chapter 5: Collecting data. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane handbook for systematic reviews of interventions (version 6.3). 2022. Available from: https://training.cochrane.org/handbook [Accessed 6 April 2023].

58. 

Dwyer PA. Analysis and synthesis. In: Toronto CE, Remington R, editors. A step-by-step guide to conducting an integrative review. Cham: Springer. 2020. p.57-70.

59. 

Broome ME. Integrative literature reviews for the development of concepts. In: Rodgers BL, Knafl KA, editors. Concept development in nursing: foundations, techniques, and applications. Philadelphia, PA: W.B. Saunders Company. 1993. p.231-250.

60. 

Braun V, Clarke V. Thematic analysis: a practical guide. Thousand Oaks, CA: SAGE Publications. 2021.

61. 

Byrne D. A worked example of Braun and Clarke’s approach to reflexive thematic analysis. Quality & Quantity. 2022;56:1391-1412.

62. 

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care. 2007;19(6):349-357. doi: 10.1093/intqhc/mzm042