Attrition in Conversational Agent–Delivered Mental Health Interventions: Systematic Review and Meta-Analysis

Background: Conversational agents (CAs), or chatbots, are computer programs that mimic human conversation. They have the potential to improve access to mental health interventions through automated, scalable, and personalized delivery of psychotherapeutic content. However, digital health interventions, including those delivered by CAs, often have high attrition rates. Identifying the factors associated with attrition is critical to improving future clinical trials.

Objective: This review aims to estimate the overall and differential rates of attrition in CA-delivered mental health interventions (CA interventions), evaluate the impact of study design and intervention-related aspects on attrition, and describe study design features aimed at reducing or mitigating study attrition.

Methods: We searched PubMed, Embase (Ovid), PsycINFO (Ovid), the Cochrane Central Register of Controlled Trials, and Web of Science, and conducted a gray literature search on Google Scholar in June 2022. We included randomized controlled trials that compared CA interventions against control groups, and we excluded studies that lasted for only 1 session or that used Wizard of Oz interventions. We assessed the risk of bias in the included studies using the Cochrane Risk of Bias Tool 2.0. Random-effects proportional meta-analysis was applied to calculate the pooled dropout rates in the intervention groups, and random-effects meta-analysis was used to compare the attrition rate in the intervention groups with that in the control groups. We used a narrative review to summarize the findings.

Results: The systematic search retrieved 4566 records from peer-reviewed databases and citation searches, of which 41 (0.90%) randomized controlled trials met the inclusion criteria. The meta-analytic overall attrition rate in the intervention group was 21.84% (95% CI 16.74%-27.36%; I²=94%). Short-term studies that lasted ≤8 weeks showed a lower attrition rate (18.05%, 95% CI 9.91%-27.76%; I²=94.6%) than long-term studies that lasted >8 weeks (26.59%, 95% CI 20.09%-33.63%; I²=93.89%). Intervention group participants were more likely to drop out than control group participants in both short-term (log odds ratio 1.22, 95% CI 0.99-1.50; I²=21.89%) and long-term studies (log odds ratio 1.33, 95% CI 1.08-1.65; I²=49.43%). Intervention-related characteristics associated with higher attrition included stand-alone CA interventions without human support, the absence of a symptom-tracking feature, no visual representation of the CA, and comparison against waitlist controls. No participant-level factor reliably predicted attrition.

Conclusions: Our results indicate that approximately one-fifth of participants will drop out of CA interventions in short-term studies. High heterogeneity made it difficult to generalize the findings. Our results suggest that future CA interventions should adopt a blended design with human support, use symptom tracking, compare CA intervention groups against active controls rather than waitlist controls, and include a visual representation of the CA to reduce the attrition rate.

Trial Registration: PROSPERO International Prospective Register of Systematic Reviews CRD42022341415; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022341415
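The Methods describe two random-effects syntheses: a proportional meta-analysis of per-arm dropout rates and a comparison of intervention versus control attrition on the log odds ratio scale. As an illustrative sketch only, the widely used DerSimonian-Laird random-effects estimator can pool logit-transformed dropout proportions as below; the per-study counts here are hypothetical, and published analyses of this kind typically rely on dedicated packages (e.g., metafor in R) rather than hand-rolled code.

```python
import math

def dl_pool(y, v):
    """DerSimonian-Laird random-effects pooling.
    y: per-study effect estimates (e.g., logit proportions or log odds ratios)
    v: their within-study variances
    Returns (pooled estimate, 95% CI lower, 95% CI upper, I^2 in %)."""
    w = [1.0 / vi for vi in v]                     # inverse-variance (fixed-effect) weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                  # between-study variance estimate
    wr = [1.0 / (vi + tau2) for vi in v]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2

def expit(x):
    """Inverse of the logit transform."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-study data: (dropouts, randomized) in the intervention arm.
arms = [(12, 60), (25, 100), (8, 55), (30, 90)]

# Proportional meta-analysis on the logit scale, back-transformed to a rate.
y = [math.log(d / (n - d)) for d, n in arms]
v = [1.0 / d + 1.0 / (n - d) for d, n in arms]
est, lo, hi, i2 = dl_pool(y, v)
print(f"pooled attrition {expit(est):.1%} (95% CI {expit(lo):.1%}-{expit(hi):.1%}), I^2={i2:.0f}%")
```

The same `dl_pool` function serves the arm comparison: feed it per-study log odds ratios, log((a/b)/(c/d)), with variances 1/a + 1/b + 1/c + 1/d, where a/b are dropouts/completers in the intervention arm and c/d in the control arm.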


PRISMA 2020 Checklist

Topic | No. | Item | Location where item is reported
Objectives | 4 | Provide an explicit statement of the objective(s) or question(s) the review addresses. |
Eligibility criteria | 5 | Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses. |
Information sources | 6 | Specify all databases, registers, websites, organisations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted. | 8-9
Search strategy | 7 | Present the full search strategies for all databases, registers and websites, including any filters and limits used. | Multimedia Appendix 2
Selection process | 8 | Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process. |
Data collection process | 9 | Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process. |
Study risk of bias assessment | 11 | Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process. | 10
Effect measures | 12 | Specify for each outcome the effect measure(s) (e.g., risk ratio, mean difference) used in the synthesis or presentation of results. | 10-11
Synthesis methods | 13a | Describe the processes used to decide which studies were eligible for each synthesis (e.g., tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item 5)). | 10-13
Synthesis methods | 13b | Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions. | 10-13
Synthesis methods | 13c | Describe any methods used to tabulate or visually display results of individual studies and syntheses. | 10-13
Synthesis methods | 13d | Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used. | 10-13
Synthesis methods | 13e | Describe any methods used to explore possible causes of heterogeneity among study results (e.g., subgroup analysis, meta-regression). | 12-13, Multimedia Appendix 1
Synthesis methods | 13f | Describe any sensitivity analyses conducted to assess robustness of the synthesized results. |
Reporting bias assessment | 14 | Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases). | Multimedia Appendix 1
Certainty assessment | 15 | Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome. |
Study selection | 16a | Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram. | 12-13
Study selection | 16b | Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded. |
Study characteristics | 17 | Cite each included study and present its characteristics. |
Risk of bias in studies | 18 | Present assessments of risk of bias for each included study. |
Results of individual studies | 19 | For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g., confidence/credible interval), ideally using structured tables or plots. |
Results of syntheses | 20a | For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies. | 17-18
Results of syntheses | 20b | Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g., confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect. | 20-36
Results of syntheses | 20c | Present results of all investigations of possible causes of heterogeneity among study results. | 20-36
Results of syntheses | 20d | Present results of all sensitivity analyses conducted to assess the robustness of the synthesized results. | 20-36
Reporting biases | 21 | Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed. |
Certainty of evidence | 22 | Present assessments of certainty (or confidence) in the body of evidence for each outcome assessed. |
Discussion | 23a | Provide a general interpretation of the results in the context of other evidence. | 36-40
Discussion | 23b | Discuss any limitations of the evidence included in the review. | 40-41
Discussion | 23c | Discuss any limitations of the review processes used. | 40-41
Discussion | 23d | Discuss implications of the results for practice, policy, and future research. |
Registration and protocol | 24a | Provide registration information for the review, including register name and registration number, or state that the review was not registered. | 7
Registration and protocol | 24b | Indicate where the review protocol can be accessed, or state that a protocol was not prepared. |

Support | 25 | Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review. |

PRISMA 2020 for Abstracts Checklist

Topic | No. | Item | Reported (Yes/No)
Objectives | 2 | Provide an explicit statement of the main objective(s) or question(s) the review addresses. |
Eligibility criteria | 3 | Specify the inclusion and exclusion criteria for the review. | Yes
Information sources | 4 | Specify the information sources (e.g., databases, registers) used to identify studies and the date when each was last searched. | Yes
Risk of bias | 5 | Specify the methods used to assess risk of bias in the included studies. |
Synthesis of results | 6 | Specify the methods used to present and synthesize results. | Yes
Included studies | 7 | Give the total number of included studies and participants and summarise relevant characteristics of studies. |
Synthesis of results | 8 | Present results for main outcomes, preferably indicating the number of included studies and participants for each. If meta-analysis was done, report the summary estimate and confidence/credible interval. If comparing groups, indicate the direction of the effect (i.e., which group is favoured). |
Limitations of evidence | 9 | Provide a brief summary of the limitations of the evidence included in the review (e.g., study risk of bias, inconsistency and imprecision). |
Funding | 11 | Specify the primary source of funding for the review. | No
Registration | 12 | Provide the register name and registration number. | No