Skill Enactment and Knowledge Acquisition in Digital Cognitive Behavioral Therapy for Depression and Anxiety: Systematic Review of Randomized Controlled Trials

Background: Digital cognitive behavioral therapy (CBT) interventions can effectively prevent and treat depression and anxiety, but engagement with these programs is often low. Although extensive research has evaluated program use as a proxy for engagement, the extent to which users acquire knowledge and enact skills from these programs has been largely overlooked.

Objective: This study aimed to investigate how skill enactment and knowledge acquisition have been measured, evaluate postintervention changes in skill enactment and knowledge acquisition, examine whether mental health outcomes are associated with skill enactment or knowledge acquisition, and evaluate predictors of skill enactment and knowledge acquisition.

Methods: PubMed, PsycINFO, and Cochrane CENTRAL were searched for randomized controlled trials (RCTs) published between January 2000 and July 2022. We included RCTs comparing digital CBT with any comparison group in adolescents or adults (aged ≥12 years) for anxiety or depression. Eligible studies reported quantitative measures of skill enactment or knowledge acquisition. The methodological quality of the studies was assessed using the Joanna Briggs Institute Critical Appraisal Checklist for RCTs. Narrative synthesis was used to address the review questions.

Results: In total, 43 papers were included, of which 29 (67%) reported a skill enactment measure and 15 (35%) reported a knowledge acquisition measure. Skill enactment was typically operationalized as the frequency of enacting skills, measured by the completion of in-program activities (ie, formal skill enactment; 13/29, 45%) and by intervention-specific (9/29, 31%) or standardized (8/29, 28%) questionnaires. Knowledge measures included tests of CBT knowledge (6/15, 40%) or mental health literacy (5/15, 33%) and self-report questionnaires (6/15, 40%). In total, 17 studies evaluated postintervention changes in skill enactment or knowledge acquisition, and findings were mostly significant for skill enactment (6/8, 75% of the studies), CBT knowledge (6/6, 100%), and mental health literacy (4/5, 80%). Of the 12 studies that evaluated the association between skill enactment and postintervention mental health outcomes, most reported ≥1 significant positive finding on standardized questionnaires (4/4, 100%), formal skill enactment indicators (5/7, 71%), or intervention-specific questionnaires (1/1, 100%). None of the 4 studies that evaluated the association between knowledge acquisition and primary mental health outcomes reported significant results. A total of 13 studies investigated predictors of skill enactment; only type of guidance and improvements in psychological variables were associated with increased skill enactment in ≥2 analyses. Predictors of knowledge acquisition were evaluated in 2 studies.

Conclusions: Digital CBT for depression and anxiety can improve skill enactment and knowledge acquisition. However, only skill enactment appears to be associated with mental health outcomes, which may depend on the type of measure examined. Additional research is needed to understand what types and levels of skill enactment and knowledge acquisition are most relevant for outcomes and to identify predictors of these constructs.

Trial Registration: PROSPERO CRD42021275270; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=275270


Information sources
6. Specify all databases, registers, websites, organisations, reference lists and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted. (Location: p. 4, Appendix 2)

Search strategy
7. Present the full search strategies for all databases, registers and websites, including any filters and limits used. (Location: Appendix 2)

Selection process
8. Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process. (Location: p. 6)

Data collection process
9. Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process. (Location: pp. 6-7)

Data items
10a. List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g. for all measures, time points, analyses), and if not, the methods used to decide which results to collect. (Location: pp. 7-8)
10b. List and define all other variables for which data were sought (e.g. participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information. (Location: pp. 6-7)

Study risk of bias assessment
11. Specify the methods used to assess risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process. (Location: p. 7)

Effect measures
12. Specify for each outcome the effect measure(s) (e.g. risk ratio, mean difference) used in the synthesis or presentation of results. (Location: pp. 7-8)

Synthesis methods
13a. Describe the processes used to decide which studies were eligible for each synthesis (e.g. tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item #5)). (Location: pp. 7-8)
13b. Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions. (Location: pp. 7-8)
13c. Describe any methods used to tabulate or visually display results of individual studies and syntheses. (Location: p. 7)
13d. Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used. (Location: p. 7)
13e. Describe any methods used to explore possible causes of heterogeneity among study results (e.g. subgroup analysis, meta-regression). (Location: p. 7)
13f. Describe any sensitivity analyses conducted to assess robustness of the synthesized results. (Location: NA)

Reporting bias assessment
14. Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases). (Location: NA)

Certainty assessment
15. Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome. (Location: NA)

Study selection
16a. Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram. (Location: pp. 8-9, Figure 1)
16b. Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded. (Location: p. 9)

Study characteristics
17. Cite each included study and present its characteristics.

Availability of data, code and other materials
27. Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review.