
Assessing fidelity of a community based psychosocial intervention for people with mild dementia within a large randomised controlled trial

Abstract

Background

Understanding whether an intervention was delivered as intended, particularly a complex intervention, should be underpinned by good quality fidelity assessment. We present the findings from a fidelity assessment embedded within a trial of a complex community-based psychosocial intervention, Journeying through Dementia (JtD). The intervention was designed to equip individuals with the knowledge and skills to successfully self-manage, maintain independence and live well with dementia, and involves both group and individual sessions. The methodological challenges of developing a conceptual framework for fidelity assessment, and of creating and applying purposely designed measures derived from this framework, are discussed to inform future studies.

Methods

A conceptual fidelity framework was created out of core components of the intervention (including the intervention manual and training for delivery), associated trial protocols and pre-defined fidelity standards and criteria against which intervention delivery and receipt could be measured. Fidelity data collection tools were designed and piloted for reliability and usability. Data collection in four selected sites (fidelity sites) was via non-participatory observations of the group aspect of the intervention, attendance registers and interventionist (facilitator and supervisor) self-report.

Results

Interventionists from all four fidelity sites attended intervention training. The majority of group participants at the four sites (71%) received the therapeutic dose of 10 out of 16 sessions. Weekly group meeting attendance (including at ‘out of venue’ sessions) was excellent at 80%. Additionally, all but one individual session was attended by the participants who completed the intervention. It proved feasible to create tools derived from the fidelity framework to assess the in-venue group aspects of this complex intervention. Fidelity assessment results for the observed groups were good, with substantial inter-rater reliability between researchers (weighted kappa 0.68, 95% CI 0.58–0.78). Self-report by interventionists concurred with researcher assessments.

Conclusions

There was good fidelity to training and delivery of the group aspect of the intervention at four sites. However, the methodological challenges of assessing all aspects of this complex intervention could not be overcome due to practicalities, assessment methods and ethical considerations. Questions remain regarding how we can assess fidelity in community-based complex interventions without impacting upon intervention or trial delivery.

Trial registration

ISRCTN17993825.

Background

Despite growing recognition of the value of psychosocial interventions in helping people to adapt and live well with dementia [1, 2], there is still a paucity of high quality research evidence for intervention effectiveness [3]. These psychosocial interventions are by their nature complex and therefore present evaluation challenges, including establishing the extent to which the intervention is delivered as intended. Factors to take into account include the impact of context upon intervention delivery [4] and whether the desired behaviour change is achieved [5]. Interest in intervention fidelity originated in response to treatment integrity concerns and demands for accountability in research. This was followed by a wider focus upon compliance (the extent to which those taking part in a trial follow the protocol) and an increase in the use of strategies such as intervention manualisation and training for systematic implementation and maintenance of fidelity [6, 7]. Reporting fidelity is now considered essential to determine the credibility, validity and replicability of findings [8, 9]. As part of well-designed randomised controlled trials, fidelity studies can also help to establish intervention effectiveness and thereby support implementation into practice [10]. Embedded fidelity studies are therefore a feature of recent psychosocial dementia trials [11, 12]. Mixed-method trial designs are recommended to capture intervention effectiveness as well as fidelity of delivery [13, 14].

Intervention manualisation, as well as intervention-specific training and supervision, can improve compliance and outcomes [15], thereby enhancing fidelity [6, 16]. Researchers are also encouraged to adopt outcome measures that offer validity [7, 17] and that can be measured consistently through rigorous processes, such as randomised controlled trials, to evidence effectiveness. However, the nature of complex tailored interventions could be considered in opposition to the idea of measuring consistent delivery [18]. For example, complex interventions often rely upon the judgement of those delivering the intervention to make any necessary adaptations. Fidelity assessment therefore seeks to understand to what extent adaptations can be made without the intervention becoming different from what was intended [19]. Creating and applying appropriate measures to assess adaptation can improve our understanding of how different intervention components affect delivery and receipt of the intervention in context [20]. However, the uniqueness of complex interventions usually demands the creation of purposely designed measures for fidelity assessment, which lack demonstrable psychometric properties. Consequently, bespoke fidelity measures and evaluation criteria (behaviours and activities observed) need to be formulated and located in the theoretical underpinnings, as well as in the aims and core content, of the specific intervention [4].

This paper reports the results of a fidelity assessment embedded within a large randomised controlled trial to explore delivery of a psychosocial intervention, Journeying through Dementia [21]. The primary aim of the fidelity assessment was to evaluate how well the Journeying through Dementia intervention was delivered according to the trial protocol and intervention manual, applying pre-defined fidelity standards. To achieve this, it was necessary to create appropriate fidelity documents and materials. This multi-component community-based intervention was co-designed by people living with dementia [22]. It includes mechanisms to increase independence, self-efficacy and effective problem solving in people living with early-stage dementia, thereby enabling individuals to live as well as possible with the condition. This paper provides a working example of how fidelity assessment, using a range of methods and purposely designed measures, can be successfully implemented within the context of a trial of a complex psychosocial intervention, to inform future studies.

Methods

Trial design

We conducted a pragmatic, two-arm, parallel-group randomised controlled trial of the Journeying through Dementia intervention plus usual care versus usual care alone, which included an embedded fidelity assessment and qualitative sub-study [21]. Participants were randomised using a secure, centralised, internet-based interface. The assignment sequence was computer-generated with a block size of 4 in a 1:1 ratio, stratified by trial site. The primary outcome for the effectiveness study was quality of life, measured by the Dementia Related Quality of Life measure (DEMQOL) at 8 months post-randomisation. A range of secondary outcomes measured other key components of the intervention: health and social care resource use; self-efficacy; well-being; self-management; activities of daily living; and quality of life. A total of 480 participants with mild dementia (score of > 18 on the MMSE) were recruited and randomised, of whom 241 were allocated to receive the intervention. The trial was registered (ISRCTN17993825) on 11 October 2016.
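To make the allocation procedure concrete, here is a minimal sketch of stratified block randomisation with a block size of 4 and 1:1 allocation. This is an illustration only, not the trial's software (which was a secure, centralised, internet-based system); all names are hypothetical.

```python
# Illustrative sketch: stratified block randomisation, block size 4, 1:1
# allocation, one allocation stream per stratum (trial site).
import random

ARMS = ["intervention plus usual care", "usual care alone"]

def new_block(block_size=4):
    """Return one randomly permuted block containing equal numbers of each arm."""
    block = ARMS * (block_size // len(ARMS))
    random.shuffle(block)
    return block

class StratifiedBlockRandomiser:
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.queues = {}  # pending allocations, keyed by stratum (site)

    def allocate(self, site):
        queue = self.queues.setdefault(site, [])
        if not queue:  # current block exhausted: start a new one
            queue.extend(new_block(self.block_size))
        return queue.pop(0)

randomiser = StratifiedBlockRandomiser()
for i in range(4):
    print(f"Site A, participant {i + 1}: {randomiser.allocate('Site A')}")
```

Within each site, every consecutive block of four participants contains two allocations to each arm, which keeps the arms balanced over time within strata.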

Sample

A convenience sample of four of the 13 recruiting sites participating in the trial was approached and consented to take part in the fidelity assessment. Site selection was pragmatic, the criteria being that sites were actively recruiting and delivering the intervention. Geographical location and population size were also considered, to ensure that the selected sites were representative of those taking part in the trial. Unless otherwise stated, the data reported were collected during a 12-week programme at each of the four participating sites.

Intervention

The Journeying through Dementia intervention, underpinned by social cognitive theory [23], is a manualised self-management multi-component intervention designed in consultation with people with dementia [22, 24]. It consists of 12 consecutive facilitated weekly group meetings held in a regular venue (in-venue) and four individual sessions with a facilitator to focus on personal goals. A minimum of three of the 12 group meetings are activities held in the community outside the regular venue (out of venue) to consolidate learning and practice neglected skills with support from others. Participation is designed to promote independence and self-management and support meaningful occupation including social interaction. Although manualised, the intervention enables tailoring of activities according to the needs of individuals and the group. This is achieved through a layered approach to learning and behaviour change within each of the different components (in-venue group meetings, individual sessions and out of venue activities). The intervention aims to elicit behaviour change through supporting greater self-efficacy, increased self-management and effective problem solving [25].

Michie’s theory of behaviour change informed the intervention design and delivery [26]. Within this theoretical framework, the anticipated change was improved self-management and engagement in meaningful activity. The theory emphasises the importance of capability, addressed here by imparting knowledge and training. Motivation is addressed by increasing understanding through experiential learning, which leads to behaviour change being associated with positive feelings. Opportunity for change is addressed by enabling participants to experience change within the groups as well as in the community.

Prior to the trial starting, it was agreed that a therapeutic dose would constitute attendance at 10 of the potential 16 sessions (group and individual). This was a pragmatic decision taken by the trial team with clinical advice, taking account of experience of delivering similar interventions.
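As a concrete illustration of this rule, the sketch below classifies a participant's attendance against the agreed threshold. The function and constant names are hypothetical; the 10-of-16 threshold and the 12 + 4 session structure are as described above.

```python
# Illustrative therapeutic dose rule: at least 10 of the 16 offered
# sessions (12 weekly group meetings + 4 individual sessions) attended.
THERAPEUTIC_DOSE = 10
GROUP_SESSIONS, INDIVIDUAL_SESSIONS = 12, 4

def received_therapeutic_dose(group_attended, individual_attended):
    if not (0 <= group_attended <= GROUP_SESSIONS
            and 0 <= individual_attended <= INDIVIDUAL_SESSIONS):
        raise ValueError("attendance counts out of range")
    return group_attended + individual_attended >= THERAPEUTIC_DOSE

print(received_therapeutic_dose(8, 3))  # True: 11 of 16 sessions
print(received_therapeutic_dose(5, 2))  # False: 7 of 16 sessions
```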

Training

An important aspect of fidelity is reducing variability between interventionists; inadequate or limited training can be a factor in poor fidelity [7, 8]. Therefore, to maximise intervention delivery as intended, a two-day training package was prepared for the purposes of the trial. It was delivered by the manual author, a highly experienced trainer, with support from a second tutor experienced in working with people with dementia. A standardised approach to training promoted consistent future delivery by interventionists. Following recommendations and learning from other studies [24, 27], training included experiential work to practise and model the intervention. The two-day training was intended to be delivered as close as possible to the beginning of intervention delivery for those who attended.

Supervision

Once intervention delivery commenced, all facilitators received weekly supervision from an experienced clinical professional from within their organisation who did not have experience of the intervention. This was not ideal, as in ‘real world’ delivery supervisors would have direct experience of delivering the intervention. To account for this, members of the trial team experienced in delivering and supervising psychosocial interventions within trials provided supervision to site-based supervisors via monthly face-to-face or telephone meetings [28]. Those in the site supervisor role were encouraged to attend day one of the two-day training alongside facilitators from their site. This was to ensure that all interventionists received the same information about the intervention, and to build relationships and support shared learning. In addition, all supervisors attended a separate half-day training session led by the trial team to discuss their role and raise any concerns. A supervision protocol (see supplementary document 1) was provided to all sites for the purposes of the study and to support intervention fidelity.

Aims and objectives of the fidelity assessment

The aim of the fidelity assessment was to evaluate how well the Journeying through Dementia intervention was delivered according to the trial protocol and intervention manual, applying pre-defined fidelity standards and criteria against which to measure these standards. The conceptual framework for this assessment is presented in Table 1. This framework was purposely designed using quality assurance parameters for provider training, delivery, receipt, and enactment based on criteria identified by the Behaviour Change Consortium [8] and NICE guidance on behaviour change [29].

Table 1 Fidelity conceptual assessment strategy for Journeying through Dementia

Data collection methods

The design of the fidelity framework (see Table 1) and the accompanying data collection instruments were adapted from the authors’ experiences of a previous trial of a complex psychosocial intervention [27]. Throughout, the value of obtaining self-report and multiple perspectives, including the views of those both delivering and receiving the intervention training, was prioritised. Consequently, in addition to the researcher observations, we obtained the perspectives of interventionists. Data collection completed by the interventionists also acted as a training tool to reinforce intervention delivery as intended and reduce facilitator drift [8]. However, to encourage completion as well as reduce the burden upon interventionists, acceptability and usability were prioritised when designing these data collection tools [17].

Study participants were not asked for their views as part of the fidelity assessment. This was because it was considered too burdensome to ask individuals to engage in activities beyond those already being requested of them, which included completing a number of questionnaires at multiple time points for all and additional qualitative interviews for some.

A range of assessment methods were used to reflect both researcher and interventionists’ perspectives of intervention delivery [8]. Data were collected at multiple time points [30, 31] to enable findings to be compared for similarity or differences over time. Methods used included researcher completion of itemised checklists of non-participatory (unobtrusive) observations [32] of training and of a purposive sample of in-venue group meetings.

Observation is an established research method, enabling understanding of complex relationships and lived reality through observable phenomena and behaviour in context [33]. For this fidelity assessment observations were conducted to enable the researcher to have the same experience as participants whilst remaining detached from the group. Maintaining detachment during an observation can be challenging when present in the same space as participants. A level of engagement is sometimes inevitable for example when making introductions and explaining observer presence or putting participants at ease. In-person observations were selected over video recorded observations in response to learning from previous studies where the technology was found to be challenging for those involved [12].

Researchers also kept comprehensive observation field notes to support and evidence scoring decisions. Attendance registers for both group and individual sessions were analysed as an objective measure of engagement with the intervention [7]. In addition, interventionists at the fidelity sites completed self-report itemised checklists for the two-day training, in-venue group meetings, individual sessions and supervision, to cross-validate researcher observations.

All methods were performed in accordance with the relevant guidelines and regulations [34, 35].

Data collection tools

Assessment tools were developed by the fidelity lead and members of the Trial Management Group (TMG) including the authors of the manualised intervention. Materials used during facilitator training, the manualised intervention and associated resources, and trial protocols all informed the content of these tools. They were all designed to interrogate:

  • evidence of learning and skills acquisition

  • receipt and enactment of core skills

  • anticipated observed behaviours e.g. role-play (for training fidelity) and contributions to the group (for group intervention fidelity).

Since these were tools purposely designed to measure fidelity of the Journeying through Dementia intervention, they were deemed sufficiently sensitive to the complexity of the components and contextual variables specific to this intervention [4]. Two researchers, one of whom had managed the feasibility study, scored pre-defined criteria using itemised checklists to rate the core components of the training or intervention [19]. The draft assessment tools were piloted and refined using observations of the first training session and the first in-venue group session at two of the fidelity sites. This process was conducted to identify items that could not be reliably scored by the researchers, to improve consistency of scoring, and to make revisions before further application [36].

Details of the tools are listed below and copies of the final checklists and registers are provided as supplementary document 2.

Training observation checklists

To evaluate interventionist training, observation checklists were completed concurrently by two researchers during delivery. A simplified version of the checklist was also completed by the interventionists immediately following training to record what they understood they had received during the two successive days. Of the six two-day training sessions offered to the interventionists from the four sites, the first three were observed and scored using an itemised checklist by the fidelity lead (KS) and a second researcher (SM or JBD).

Supervision registers and checklists

Data reported here for fidelity to the supervision protocol are those completed for two intervention programmes at three of the four fidelity sites and three programmes at the fourth; completion was dependent on how many groups each site delivered. All supervisors were asked to complete a register recording facilitator attendance and the format of every session they conducted (expected to be weekly throughout the entire intervention delivery period). Both facilitators and supervisors completed an itemised fidelity checklist derived from the supervision protocol to detail their views of delivery/receipt of supervision. This was requested at the end of the first, fifth and twelfth weeks of supervision for all interventions delivered.

Attendance registers – group meetings and individual sessions

Attendance registers maintained by the facilitators during delivery of all group (including out of venue) and individual sessions recorded compliance with attendance, including the offer and acceptance, or decline, of sessions by participants.

In-venue group meeting observation checklists

To assess fidelity of the in-venue group meetings, the observation checklists were completed by the researchers during delivery of two sessions at each site. Comprehensive notes were also taken to evidence researcher scoring decisions. A simplified version of the checklist was also completed by the facilitators immediately after the group finished. Observations of eight in-venue meetings (one group per site, two meetings per group at approximately week three and week eight of delivery) were conducted in total. Observing two meetings per group also enabled us to examine any learning effects and identify potential facilitator drift as intervention delivery progressed [37].

Individual session checklist

Direct observations or recording of individual sessions were considered too intrusive, particularly as these sessions were required to be in the participant’s home or local community [12]. Facilitators were asked to complete a summary fidelity checklist evaluating their experience of delivering these sessions as intended. It was requested that these be completed immediately after each session.

Out of venue activities

Out of venue activities were not observed or recorded as there were significant ethical and practical considerations; for example, when interacting in the community with people who were not part of the group or the trial.

Methods of data analysis

Time between training and delivery

The time lapse between the date training was received and the date intervention delivery commenced was recorded by the research team, with dates extracted from the trial database.

Completion rates

Checklist and register completion rates were analysed to identify adherence to intervention as intended.

Descriptive analysis of observational data

Frequency scores from the observation checklists for training and in-venue group sessions were compared to identify researcher rates of agreement on criteria achievement. Where there was disagreement, free text notes taken during observations were used to reach agreement on the criterion score. The categories for scoring were ‘0’ never observed, ‘1’ sometimes observed and ‘2’ observed most of the time. Several criteria within the checklists were rated on a binary ‘Yes’ or ‘No’ scale for presence or absence; for the purpose of analysis, a ‘Yes’ rating was converted to ‘2’ and a ‘No’ to ‘0’. All interventionist-completed checklists were rated on the binary ‘Yes’/‘No’ scale for ease of completion. Fidelity scores were calculated based on percentage agreement on the final score (after moderation) obtained by each researcher, as follows:

  • 0–60% Unsatisfactory

  • 61–70% Satisfactory

  • 71–80% Good

  • 81–90% Very good

  • 91–100% Excellent

The percentage fidelity score obtained by the researchers on the observation checklists was then compared to that of the interventionists on the associated self-report checklist to look for convergence or divergence.
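As a worked illustration of the scoring just described, the sketch below converts mixed 0/1/2 and ‘Yes’/‘No’ ratings to the 0–2 scale, computes a percentage fidelity score, and maps it to the bands above. The item ratings are invented, and computing the percentage against the maximum achievable checklist score is our assumption about one plausible reading of the scoring rule.

```python
# Illustrative fidelity scoring: Yes -> 2, No -> 0; 0/1/2 kept as-is;
# the fidelity score is the percentage of the maximum possible score.
BANDS = [(60, "Unsatisfactory"), (70, "Satisfactory"),
         (80, "Good"), (90, "Very good"), (100, "Excellent")]

def to_score(rating):
    if rating in ("Yes", "No"):
        return 2 if rating == "Yes" else 0
    return int(rating)  # already on the 0-2 scale

def fidelity_percentage(ratings):
    scores = [to_score(r) for r in ratings]
    return 100 * sum(scores) / (2 * len(scores))

def band(percentage):
    for upper, label in BANDS:
        if percentage <= upper:
            return label

checklist = [2, 1, "Yes", 2, "No", 2, 1, 2]  # hypothetical observed items
pct = fidelity_percentage(checklist)
print(f"{pct:.0f}% -> {band(pct)}")          # 75% -> Good
```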

Inter-rater reliability

Inter-rater reliability assessed the extent to which the two researchers attributed the same score to the same checklist item during session observations only. A weighted kappa using a predefined table of weights was applied to estimate the degree of agreement between the two researchers. Following Cohen’s kappa [38], values for inter-rater agreement are interpreted as follows (an illustrative computation is sketched after the list):

  • ≤ 0 indicating no agreement

  • 0.01–0.20 none to slight

  • 0.21–0.40 fair

  • 0.41–0.60 moderate

  • 0.61–0.80 substantial

  • 0.81–1.00 almost perfect agreement
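For illustration, agreement on paired 0/1/2 ratings can be computed as a weighted kappa; the sketch below uses scikit-learn's linear weighting on invented paired ratings. Note that the study applied a predefined table of weights, which may differ from the linear weights assumed here.

```python
# Illustrative weighted Cohen's kappa between two raters' 0/1/2 scores.
from sklearn.metrics import cohen_kappa_score

researcher_a = [2, 2, 1, 0, 2, 1, 2, 2, 1, 2]  # hypothetical ratings
researcher_b = [2, 1, 1, 0, 2, 2, 2, 2, 1, 2]

kappa = cohen_kappa_score(researcher_a, researcher_b, weights="linear")
print(f"weighted kappa = {kappa:.2f}")  # interpret against the bands above
```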

Results

Training fidelity

Interventionist training

The first three out of six two-day training sessions were observed. These sessions were attended by a range of sites taking part in the trial in addition to the four fidelity sites as shown in Table 2.

Table 2 Two day training delivery and attendance

Overall fidelity to the training was excellent in all three observed training sessions as scored by the researchers, averaging 95% achievement (range 91–97%); see Table 3. Facilitator and supervisor (trainee) scores averaged 94% achievement (range 88–97%).

Table 3 Coder training overall fidelity score agreement

As Table 3 illustrates, the lowest fidelity score was obtained for the second of the three observed two-day training sessions (23rd–24th January 2017). This could be explained by the delivery modifications necessary to accommodate the numbers attending, which exceeded what was planned (see Table 2). These modifications included reducing the time spent on a topic/activity or excluding it altogether, as well as limiting opportunities for practical activities/exploration. The researchers and trainees agreed on the top three items not delivered during this modified session, which were:

  • Item 7 ‘Did the trainer discuss the supporter attended sessions and relationship dynamics’ (8 out of 18 trainees)

  • Item 11 ‘Were you able to reflect on and share your own facilitation style and skills’ (8 out of 18 trainees).

  • Item 17 ‘Did you discuss the value and principles of supervision’ (12 out of 18 trainees). This rating related predominantly to the second training session and was one of the topics affected by the modifications made to day two of the training.

Implementation of training

Analysis of trial records found that on average, the time from training to delivery of a first intervention group at the four sites was 106 days (range 79–134).

Supervision fidelity

At the fidelity sites, there were no changes to facilitators or supervisors during the assessment period. Supervision registers were completed and returned by 11 facilitators and five supervisors (one site had two supervisors who shared supervision responsibilities). Of the 111 supervision opportunities identified and recorded by the four sites, 105 were recorded as completed. Five supervision sessions were not achieved due to annual or sick leave, and one was cancelled by the supervisor. On three occasions, once at each of three sites, supervisors held an additional supervision session prior to commencement of a new intervention programme. Supervision with more than one facilitator, referred to as joint supervision in the supervision protocol, was held on 59 occasions and individual supervision on 42; four sessions were recorded as a combination of joint and individual supervision within the allocated time. Delivery formats comprised 97 face-to-face sessions and eight using remote methods (telephone or Skype). The average duration was 61 min (range 30–125 min) for joint supervision and 51 min (range 25–70 min) for individual supervision. Only one of the four fidelity sites met the protocol requirement for a minimum of four individual supervision sessions. A further two sites achieved the stated requirements during delivery of their second intervention programme, after receiving feedback and encouragement from the clinical experts on the trial team.

Supervisors rated their fidelity to the supervision protocol as very good on all three self-report checklists, averaging 82% achievement (range 77–86%) (see Table 4). Supervisors recorded that they had delivered most components of supervision during each session. The component most frequently not fulfilled was ‘Did you use a reflective diary as part of the supervision session?’, which was optional for facilitators to complete.

Table 4 Supervisor completed fidelity scores for individual sessions

Intervention fidelity

Attendance registers

Participant attendance was good, with 25 out of 35 (71%) participants receiving the therapeutic dose of 10 out of 16 sessions (12 weekly group meetings and four individual sessions). Five (14%) attended all 16 sessions.

Group session attendance was also good: of the 331 sessions available to participants, 264 (80%) were attended.

All 35 participants took part in the first individual session. Eight participants then withdrew from the intervention between the first and second individual sessions, and one withdrew before the third. Of the remaining 26 participants, 25 took part in all four sessions.

Group meeting checklist fidelity

Observed fidelity to the group aspect of the intervention was very good, with researchers reaching between 88 and 95% agreement on observed items delivered (see Table 5). Across all eight observed sessions, a weighted Cohen’s kappa of 0.68 (95% CI 0.58–0.78) demonstrated substantial inter-rater reliability between the two researchers.

Table 5 Fidelity and Kappa scores by sites

The in-venue group meeting checklists for facilitators were all completed as requested (100%). The recorded ratings reflected those of the observing researchers, with fidelity across all eight groups averaging 93% (range 84–100%).

Individual session checklist fidelity

Individual session checklists for 20 of a possible 35 participants were completed by the facilitators as part of intervention delivery. Seven of the 20 participants had incomplete records that could not be accounted for by withdrawal from the intervention. An average of 77% achievement (range 22–100%) was found for items delivered during each session. The two lowest scoring items were item 5, ‘Did you help the participant set any goals’ (40% achievement), and item 8, ‘Did you enable the participant to rehearse skills learned in their everyday life’ (22% achievement).

Discussion

The aim of this embedded fidelity assessment was to determine how well the Journeying through Dementia intervention was delivered according to the trial protocol and intervention manual. A second aim was to further examine what methodologies can be practically, ethically and reliably employed for assessment of fidelity during pragmatic trials of complex interventions.

The assessment tools (checklists and registers) derived from the fidelity framework were found to be fit for purpose, as suggested by good completion rates and inter-rater reliability, and we were able to demonstrate compliance with the intervention and fidelity to delivery of the in-venue group component. The caveat is that we were not able to observe all core components of the intervention, because delivery of the out of venue and individual sessions could not be captured. Interwoven with this were ethical considerations, such as participants’ right to confidentiality and the intrusive and impractical nature of observing these sessions [12]. Indeed, the findings from the qualitative sub-study [39] indicated that the components we were not able to assess were the more challenging aspects for facilitators to deliver and where participant behaviour change was critical for success. Examples included enabling participants to practise learning in the community with the support of others and being assisted to identify and work towards individual goals. The most appropriate methods for monitoring fidelity in psychosocial community-based complex interventions are therefore yet to be identified.

Methodological challenges

Understanding social constructs and interpreting observed behaviours are influenced by the subjective interpretations of researchers [40]. The inherent challenge of measuring subjectively observed outcomes of complex interventions, such as those promoted through Journeying through Dementia, together with the limitations of the observation method itself, means that by design any bespoke instruments will lack established psychometric properties. Researchers have little choice but to use bespoke assessment tools to evaluate complex behavioural interventions, and such tools need to be designed underpinned by intervention theory and content [4].

Certain behaviours and criteria detailed on the observation measure were more concrete and therefore easier to observe and score. One example concerned the practicalities of delivery: whether two facilitators were present, as required by the protocol. Other criteria were more subjective, relying on the researcher observing and recognising specific behaviours, for example evidence of facilitators enabling participants. This required the scoring researchers to fully understand how such behaviours manifest and to use their judgement when interpreting what they observed [41]. A significant amount of prior work was therefore needed by those undertaking the observations to agree how to identify and rate the criteria on the observation checklist [42]. Piloting was essential: it quickly established that several criteria were unlikely to be observed during the in-venue group aspect of the intervention over a limited number of sessions, for example practising learning in the community. It is essential that those who are to observe and score intervention fidelity are involved in identifying how criteria might manifest and how to score them. We therefore recommend that observers take comprehensive notes in addition to scoring the checklist, as evidence to explain scoring decisions.

Evaluation of this intervention required a comprehensive and detailed understanding of the manualised programme and training package. The primary trainer was the author of the manual, and some members of the research team, including the Chief Investigator and fidelity lead, had gained extensive experience of the manual during its use in the prior feasibility study [24]. This knowledge and experience aided the development of the conceptual framework for fidelity assessment and the creation of assessment instruments. One of the observing researchers had detailed knowledge of the intervention from working on the feasibility study. Whether someone who is not fully immersed in an intervention, or supported by an experienced person, can understand and therefore score its nuances is debatable; we therefore posit that including researchers with experience of the intervention is preferable for fidelity assessment of complex interventions.

The tools and measures used for this fidelity assessment were grounded in the underpinning methodology and ethos of the intervention, yet the question of robustness remains [43, 44]. To increase credibility we incorporated data from multiple sources, including researcher observation and interventionist self-report [30, 31]. However, observer presence and interactions, together with reliance on interventionist self-report, may have produced a Hawthorne effect and therefore reporting bias, with individuals potentially trying harder to achieve optimum scores on the fidelity checklists. As observers attended only two meetings at each site, the fidelity researchers’ presence was evident to both facilitators and participants and had to be explained. However, the similarity between the scores obtained from the researchers and the interventionists suggests that the potential for bias was limited. In addition, conducting observations over time [33] and using multiple perspectives helped to reduce these potential biases [10]. We are, however, unable to comment on how comparable the remaining eight sites were in their intervention delivery to the four fidelity sites.

The research team developed and piloted bespoke itemised checklists for observations, including agreeing scoring guidelines. Facilitators, although given simplified versions of the checklist with binary (Yes/No) scoring for usability, were not asked to score observations but to indicate whether they recalled a specified item being completed or achieved. Inconsistencies may therefore have arisen from facilitators’ memory recall. However, as our findings showed similarity between the scores obtained from the observing researchers and facilitators’ self-report, this potential bias appears to have been limited. Encouraging and implementing simple, effective protocols for timely completion of self-report checklists could assist with completion rates as well as data quality.

Timing of training in trials of group interventions is a well-known challenge [45]. The time lag between training and delivery of the intervention could have impacted upon fidelity, but we mitigated against this through site supervisors being in place and through supervision of the site supervisors by the research team.

Lastly, the assessment process was not applied to the individual sessions or out of venue activities due to several ethical considerations. The first was the need to maintain participant confidentiality during individual sessions: it is essential that this component of the intervention is experienced as a safe space in which the person can speak freely if they are to maximise their participation. The second was confidentiality and consent when interacting with people who were not part of the trial during community-based out of venue activities. We therefore relied on facilitator self-report for these components of the intervention. We considered asking participants themselves to provide self-report on these components but, given the other demands the trial already placed on them, this was deemed too burdensome. Obtaining participants’ contemporaneous views of the intervention would, however, provide nuanced data to better understand the intervention in action. If we are to evidence fidelity of delivery as intended, as well as in relation to intervention effectiveness, then all components of complex interventions need to be included in assessments.

Conclusion

We have conducted a fidelity evaluation of a complex psychosocial intervention, demonstrating that the assessment approach was fit for assessing several core components specific to the intervention. In addition, we have demonstrated that non-participatory observations in the community are possible when carried out in a regular venue. This approach can be used as a model for the development of fidelity assessment for community-based complex interventions. However, we were not able to observe or record all core components of the intervention. Questions therefore remain about how best to observe and assess fidelity in community-based complex psychosocial interventions where methodological and ethical issues prevent the use of established assessment methods.

Availability of data and materials

The datasets generated and analysed for this study will be available upon request from the corresponding author.

References

  1. McDermott O, Charlesworth G, Hogervorst E, et al. Psychosocial interventions for people with dementia: a synthesis of systematic reviews. Aging Ment Health. 2019;23(4):393–403. https://doi.org/10.1080/13607863.2017.1423031.

  2. Moniz-Cook E, Vernooij-Dassen M, Woods B, et al. Psychosocial interventions in dementia care research: the INTERDEM manifesto. Aging Ment Health. 2011;15(3):283–90. https://doi.org/10.1080/13607863.2010.543665 [published Online First: 2011/04/15].

  3. Datta J, Petticrew M. Challenges to evaluating complex interventions: a content analysis of published papers. BMC Public Health. 2013;13(1):568. https://doi.org/10.1186/1471-2458-13-568.

  4. Mars T, Ellard D, Carnes D, et al. Fidelity in complex behaviour change interventions: a standardised approach to evaluate intervention integrity. BMJ Open. 2013;3(11):e003555. https://doi.org/10.1136/bmjopen-2013-003555.

  5. Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. https://doi.org/10.1136/bmj.a1655 [published Online First: 2008/10/01].

  6. Carroll C, Patterson M, Wood S, et al. A conceptual framework for implementation fidelity. Implement Sci. 2007;2(1):40. https://doi.org/10.1186/1748-5908-2-40.

  7. Gearing RE, El-Bassel N, Ghesquiere A, et al. Major ingredients of fidelity: A review and scientific guide to improving quality of intervention research implementation. Clin Psychol Rev. 2011;31(1):79–88. https://doi.org/10.1016/j.cpr.2010.09.007.

  8. Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443–51. https://doi.org/10.1037/0278-6133.23.5.443 [published Online First: 2004/09/16].

  9. Roy R, Colquhoun H, Byrne M, et al. Addressing fidelity within complex health behaviour change interventions: A protocol of a scoping review of intervention fidelity frameworks and models. [version 1; peer review: 2 approved]. HRB Open Res. 2018;1(25):25. https://doi.org/10.12688/hrbopenres.12892.1.

  10. Borrelli B. The assessment, monitoring, and enhancement of treatment Fidelity in public health clinical trials. J Public Health Dent. 2011;71(s1):S52–63. https://doi.org/10.1111/j.1752-7325.2011.00233.x.

  11. Clare L, Kudlicka A, Oyebode JR, et al. Individual goal-oriented cognitive rehabilitation to improve everyday functioning for people with early-stage dementia: a multicentre randomised controlled trial (the GREAT trial). Int J Geriatr Psychiatry. 2019;34(5):709–21. https://doi.org/10.1002/gps.5076.

  12. Walton H, Tombor I, Burgess J, et al. Measuring fidelity of delivery of the community occupational therapy in dementia-UK intervention. BMC Geriatr. 2019;19(1):364. https://doi.org/10.1186/s12877-019-1385-7.

  13. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. https://doi.org/10.1136/bmj.h1258.

  14. O'Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy. 2008;13(2):92–8. https://doi.org/10.1258/jhsrp.2007.007074 [published Online First: 2008/04/18].

  15. Beck C, McSweeney JC, Richards KC, et al. Challenges in tailored intervention research. Nurs Outlook. 2010;58(2):104–10. https://doi.org/10.1016/j.outlook.2009.10.004.

  16. Moncher FJ, Prinz RJ. Treatment fidelity in outcome studies. Clin Psychol Rev. 1991;11(3):247–66. https://doi.org/10.1016/0272-7358(91)90103-2.

  17. Lohr KN. Assessing health status and quality-of-life instruments: attributes and review criteria. Qual Life Res. 2002;11(3):193–205. https://doi.org/10.1023/A:1015291021312.

  18. Bragstad LK, Bronken BA, Sveen U, et al. Implementation fidelity in a complex intervention promoting psychosocial well-being following stroke: an explanatory sequential mixed methods study. BMC Med Res Methodol. 2019;19(1):59. https://doi.org/10.1186/s12874-019-0694-z.

  19. O’Malley KA, Qualls SH. Application of treatment fidelity in tailored caregiver interventions. Aging Ment Health. 2019:1–9. https://doi.org/10.1080/13607863.2019.1647134.

  20. Walton H, Spector A, Tombor I, et al. Measures of fidelity of delivery of, and engagement with, complex, face-to-face health behaviour change interventions: a systematic review of measure quality. Br J Health Psychol. 2017;22(4):872–903. https://doi.org/10.1111/bjhp.12260.

  21. Wright J, Foster A, Cooper C, et al. Study protocol for a randomised controlled trial assessing the clinical and cost-effectiveness of the journeying through dementia (JtD) intervention compared to usual care. BMJ Open. 2019;9(9):e029207. https://doi.org/10.1136/bmjopen-2019-029207.

  22. Mountain GA, Craig CL. What should be in a self-management programme for people with early dementia? Aging Ment Health. 2012;16(5):576–83. https://doi.org/10.1080/13607863.2011.651430 [published Online First: 2012/03/01].

  23. Bandura A. Self-efficacy: the exercise of control. New York: Freeman; 1997.

  24. Sprange K, Mountain GA, Shortland K, et al. Journeying through dementia, a community-based self-management intervention for people aged 65 years and over: a feasibility study to inform a future trial. Pilot Feasibility Stud. 2015;1(1):42. https://doi.org/10.1186/s40814-015-0039-6.

  25. Bandura A. Self-efficacy mechanism in human agency. Am Psychol. 1982;37(122):122–47.

  26. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42. https://doi.org/10.1186/1748-5908-6-42.

  27. Mountain G, Windle G, Hind D, et al. A preventative lifestyle intervention for older adults (lifestyle matters): a randomised controlled trial. Age Ageing. 2017;46(4):627–34. https://doi.org/10.1093/ageing/afx021.

  28. Karlin BE, Cross G. From the laboratory to the therapy room: National dissemination and implementation of evidence-based psychotherapies in the U.S. Department of Veterans Affairs Health Care System. Am Psychol. 2014;69(1):19–33. https://doi.org/10.1037/a0033888 [published Online First: 2013/09/05].

  29. National Institute for Health and Care Excellence (NICE). Behaviour change: general approaches. Public health guideline [PH6]. 2007.

  30. Keller-Margulis MA. Fidelity of implementation framework: a critical need for response to intervention models. Psychol Sch. 2012;49(4):342–52. https://doi.org/10.1002/pits.21602.

  31. Munafo MR, Davey SG. Robust research needs many lines of evidence. Nature. 2018;553(7689):399–401. https://doi.org/10.1038/d41586-018-01023-3 [published Online First: 2018/01/26].

  32. Gorman GE, Clayton P. Qualitative research for the information professional: a practical handbook. 2nd ed. London: Facet; 2005.

  33. Baker LM. Observation: A Complex Research Method. Libr Trends. 2006;55(1):173.

  34. Department of Health. Mental Capacity Act 2005 London 2005 [15 Jan 2021]. Available from: https://www.legislation.gov.uk/ukpga/2005/9/contents. Accessed 15 Jan 2021.

  35. Health Research Authority. UK Policy Framework for Health and Social Care Research [15 Jan 2021]. Available from: https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/uk-policy-framework-health-social-care-research/. Accessed 15 Jan 2021.

  36. Lorencatto F, West R, Bruguera C, et al. A method for assessing fidelity of delivery of telephone behavioral support for smoking cessation. J Consult Clin Psychol. 2014;82(3):482–91. https://doi.org/10.1037/a0035149 [published Online First: 2013/12/04].

  37. Masterson-Algar P, Burton CR, Rycroft-Malone J, et al. Towards a programme theory for fidelity in the evaluation of complex interventions. J Eval Clin Pract. 2014;20(4):445–52. https://doi.org/10.1111/jep.12174 [published Online First: 2014/05/21].

  38. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med. 2012;22(3):276–82 [published Online First: 2012/10/25].

  39. Sprange K, Beresford-Dent J, Mountain G, et al. Journeying through dementia randomised controlled trial of a psychosocial intervention for people living with early dementia: embedded qualitative study with participants, carers and interventionists. Clin Interv Aging. 2021; in press.

  40. Stafford MR, Stafford TF. Participant observation and the pursuit of truth: methodological and ethical considerations. Market Res Soc J. 1993;35(1):1–16. https://doi.org/10.1177/147078539303500105.

  41. Mowbray CT, Holter MC, Teague GB, et al. Fidelity criteria: development, measurement, and validation. Am J Eval. 2003;24(3):315–40. https://doi.org/10.1177/109821400302400303.

  42. Hardeman W, Michie S, Fanshawe T, et al. Fidelity of delivery of a physical activity intervention: predictors and consequences. Psychol Health. 2008;23(1):11–24. https://doi.org/10.1080/08870440701615948 [published Online First: 2008/01/01].

  43. Schinckus L, Van den Broucke S, Housiaux M. Assessment of implementation fidelity in diabetes self-management education programs: a systematic review. Patient Educ Couns. 2014;96(1):13–21. https://doi.org/10.1016/j.pec.2014.04.002 [published Online First: 2014/05/06].

  44. Toomey E, Matthews J, Guerin S, et al. Development of a feasible implementation Fidelity protocol within a complex physical therapy–led self-management intervention. Phys Ther. 2016;96(8):1287–98. https://doi.org/10.2522/ptj.20150446.

  45. Biggs K, Hind D, Gossage-Worrall R, et al. Challenges in the design, planning and implementation of trials evaluating group interventions. Trials. 2020;21(1):116. https://doi.org/10.1186/s13063-019-3807-4.

Acknowledgements

Members of the Journeying through Dementia PPI Advisory Group, Michael Andrews and our Experts by Experience Group (based at the University of Bradford). The sponsor Nicholas Bell, Sheffield Health and Social Care NHS Foundation Trust. Stephen Walters, Ellen Lee, Amanda Loban, Emily Turton, Esme Moniz-Cook, Tom Dening, Tracey Young, Peter Bowie, Daniel Blackburn and Jasper Palmier-Claus of the Trial Management Group (TMG). Catherine Hewitt (Chair), University of York, Wendy Mitchell, PPI Representative, and Jennifer Wenborn, University College London, of the Trial Steering Committee (TSC); and Mona Kanaan, University of York, Jane Burgess, North East London NHS Foundation Trust, and Emily Robinson, Kings College London, of the Data Monitoring and Ethics Committee (DMEC), who all advised on and critically reviewed the trial protocol including the fidelity assessment. All the facilitators, supervisors and support staff from the four sites who took part in the fidelity assessment.

Funding

This study was funded by the National Institute for Health Research (NIHR) Health Technology Assessment Programme (14/140/80). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.

Author information

Contributions

Fidelity assessment lead – KS. Author of the manualised programme – CCr. Intervention trainers – CCr, GM, CM. Research supervisor to recruiting sites – KB. Design of the protocol, framework and measures – KS, JBD, GM, CCr, KB, JW, CCo. Data collection – KS, JBD, GM, CM, KB, JW, SM, BT. Data analysis – KS, JBD, GM, CCo. Development of manuscript – KS, GM. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Kirsty Sprange.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was obtained in July 2016 (ref no. 16/YH/0238) from a United Kingdom National Health Service Research Ethics Committee. United Kingdom Health Research Authority approval was given (IRAS reference 199383) in August 2016. Participating sites gained permission from their local NHS Trust Research and Development Department prior to commencing research activities as a study site.

Consent for publication

We obtained written informed consent from the participants who took part in the interviews via the Trial Consent Form. This information is held as part of the archived record of the trial. Only anonymised, non-identifiable data are used in this report, as per written consent.

Competing interests

Clare Craig is the author of the Journeying through Dementia manual. All other authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Sprange, K., Beresford-Dent, J., Mountain, G. et al. Assessing fidelity of a community based psychosocial intervention for people with mild dementia within a large randomised controlled trial. BMC Geriatr 21, 119 (2021). https://doi.org/10.1186/s12877-021-02070-8
