Using Mobile Apps to Assess and Treat Depression in Hispanic and Latino Populations: Fully Remote Randomized Clinical Trial

Background Most people with mental health disorders fail to receive timely access to adequate care. US Hispanic/Latino individuals are particularly underrepresented in mental health care and are historically a very difficult population to recruit into clinical trials; however, they have increasing access to mobile technology, with over 75% owning a smartphone. This technology has the potential to overcome known barriers to accessing and utilizing traditional assessment and treatment approaches. Objective This study aimed to compare recruitment and engagement in a fully remote trial of individuals with depression who did or did not self-identify as Hispanic/Latino. A secondary aim was to assess treatment outcomes in these individuals using three different self-guided mobile apps: iPST (based on evidence-based therapeutic principles from problem-solving therapy, PST), Project Evolution (EVO; a cognitive training app based on cognitive neuroscience principles), and health tips (a health information app that served as an information control). Methods We recruited Spanish- and English-speaking participants through social media platforms, internet-based advertisements, and traditional fliers in select locations in each state across the United States. Assessment and self-guided treatment were conducted on each participant's smartphone or tablet. We enrolled 389 Hispanic/Latino and 637 non-Hispanic/Latino adults with mild to moderate depression, as determined by a Patient Health Questionnaire-9 (PHQ-9) score ≥5 or related functional impairment. Participants were first asked about their preferences among the three apps and then randomized to their top two choices. Outcomes were depressive symptom severity (measured using the PHQ-9) and functional impairment (assessed with the Sheehan Disability Scale), collected over 3 months. Engagement in the study was assessed based on the number of times participants completed active surveys.
Results We screened 4502 participants and enrolled 1040 participants from throughout the United States over 6 months, yielding a sample of 348 active users. Long-term engagement surfaced as a key issue among Hispanic/Latino participants, who dropped out of the study 2 weeks earlier than their non-Hispanic/Latino counterparts (P<.02). No significant differences were observed for treatment outcomes between those identifying as Hispanic/Latino or not. Although depressive symptoms improved (beta=–2.66, P=.006) over the treatment course, outcomes did not vary by treatment app. Conclusions Fully remote mobile-based studies can attract a diverse participant pool including people from traditionally underserved communities in mental health care and research (here, Hispanic/Latino individuals). However, keeping participants engaged in this type of "low-touch" research study remains challenging. Hispanic/Latino populations may be less willing to use mobile apps for assessing and managing depression. Future research endeavors should use a user-centered design to determine the role of mobile apps in the assessment and treatment of depression for this population, app features they would be interested in using, and strategies for long-term engagement. Trial Registration ClinicalTrials.gov NCT01808976; https://clinicaltrials.gov/ct2/show/NCT01808976 (Archived by WebCite at http://www.webcitation.org/70xI3ILkz)

Items numbered 1., 2., 3., 4a., 4b., etc. are original CONSORT or CONSORT-NPT (non-pharmacologic treatment) items. Items with Roman numerals (i., ii., iii., iv., etc.) are proposed CONSORT-EHEALTH extensions/clarifications. BELOW, PLEASE RATE THE IMPORTANCE OF THESE PROPOSED NEW SUB-ITEMS AND/OR COMMENT ON EACH ITEM (comments could also include original references to be cited to support the subitem). This is a Delphi survey to obtain feedback from ehealth and reporting guidelines experts (including journal editors). If something important is missing which in your opinion must be part of EVERY reported ehealth trial, please add an item, but remember that there will be a separate Explanation & Elaboration document: the checklist should only contain essential and universally applicable items.
Subitems should be included in the new CONSORT-EHEALTH checklist if one of the following conditions for the subitem is met:
- if not conducted properly, it may lead to empirical evidence of bias or threaten internal validity
- if not reported properly, this is associated with empirical evidence of bias
- it may be associated with the success of the trial
- it may be associated with external validity (applicability or success of the application/intervention in other settings)
- it reflects crucial trial results
- it aids in the interpretation of results
At the same time, the CONSORT-EHEALTH checklist should be reasonably brief and universally applicable, so only essential items should be included.

Other guidelines
Are you aware of any other guidelines that should be cited? (give references and provide a short description)

TITLE AND ABSTRACT

1a) Identification as a randomized trial in the title

1a-i) Identify the mode of delivery in the title
Identify the mode of delivery. Preferably use "web-based" and/or "mobile" and/or "electronic game" in the title. Avoid ambiguous terms like "online", "virtual", "interactive". Use "Internet-based" only if the intervention includes non-web-based Internet components (e.g., email); use "computer-based" or "electronic" only if offline products are used. Use "virtual" only in the context of "virtual reality" (3-D worlds). Use "online" only in the context of "online support groups". Complement or substitute product names with broader terms for the class of products (such as "mobile" or "smart phone" instead of "iPhone"), especially if the application runs on different platforms.
Rate subitem (1 = not at all important, 5 = essential)

Comment on subitem 1a-i)
"Using mobile apps to assess and treat depression in Hispanics and Latinos"

1a-ii) Non-web-based components or important co-interventions in title
Mention non-web-based components or important co-interventions in title, if any (e.g., "with telephone support").

Comment on subitem 1a-ii)
The study was completely mobile with minimal staff contact (restricted to technical support, payment, or reminders to use intervention/study apps only).

1b-i) Key features/functionalities/components of the intervention and comparator in the abstract
Mention key features/functionalities/components of the intervention and comparator in the abstract. If possible, also mention theories and principles used for designing the site. Keep in mind the needs of systematic reviewers and indexers by including important synonyms.

Comment on subitem 1b-i)
"The apps were Project: EVO™, a cognitive training app theorized to mitigate depressive symptoms by improving cognitive control; iPST, an app based on an evidence-based psychotherapy for depression; and Health Tips, a control app condition."

1b-ii) Level of human involvement in the abstract
Clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention number and expertise of providers involved, if any).

Comment on subitem 1b-ii)
"Treatment and assessment were conducted remotely on each participant's smart phone and/or tablet with minimal contact with study staff."

1b-iii) Open vs. closed, web-based (self-assessment) vs. face-to-face assessments in abstract
Mention how participants were recruited (online vs. offline), e.g., from an open access website (open trial) or from a clinic or other closed user group (closed trial), and clarify if this was a purely web-based trial, or whether there were face-to-face components (as part of the intervention or for assessment). Clearly say if outcomes were self-assessed through questionnaires (as common in web-based trials).

Comment on subitem 1b-iii)
"This was a fully remote, 12-week randomized controlled trial (RCT) conducted across the United States. Participants were recruited through online advertisements and social media with an emphasis on targeting Hispanics/Latinos. Treatment and assessment were conducted remotely on each participant's smart phone and/or tablet with minimal contact with study staff."

1b-iv) Results in abstract must contain use data
Report number of participants enrolled/assessed in each group, the use/uptake of the intervention (e.g., attrition/adherence metrics, use over time, number of logins etc.), in addition to primary/secondary outcomes.

1b-v) Conclusions/Discussions in abstract for negative trials
Conclusions/Discussions in abstract for negative trials: Discuss the primary outcome. If the trial is negative (primary outcome not changed) and the intervention was not used, discuss whether the negative results are attributable to lack of uptake, and discuss reasons.

Comment on subitem 1b-v)
Sixty-six percent of participants did not download their intervention app. There was close to fifty percent further drop-off after week 1. Depression improved significantly in about thirty percent of participants after 4 weeks of treatment, with improvement persisting at the 12-week final assessment.
"Long-term engagement surfaced as a key issue with Spanish-speaking participants, as these individuals dropped out two weeks earlier than their English-speaking counterparts." "Our findings suggest that fully remote mobile-based studies can attract a diverse participant pool including people from lower socioeconomic backgrounds (in this case Hispanics/Latinos). However, keeping participants engaged in this type of 'low-touch' research study remains challenging. Further research, including user-centered approaches, is needed to understand culturally adequate incentive levels, notification strategies, and effective mobile content formatting to enhance the recruitment and retention of minority populations."

2a-i) Problem and the type of system/solution
Describe the problem and the type of system/solution that is the object of the study: intended as a stand-alone intervention vs. incorporated in broader health care.

Comment on subitem 2a-i)
After confirming completion of baseline assessments (or 72 hours after the initiation of these assessments, whichever came first), participants were sent an online survey which described each of the 3 possible treatment arms. Following this description, participants were asked to select the two apps they were most inclined to use in this study (thus, we employed an equipoise stratified design; Lavori et al., 2001). Participants were then randomly assigned to one of these two preferred conditions and sent a link to download the intervention app.

3b) Important changes to methods after trial commencement (such as eligibility criteria), with reasons

3b-i) Bug xes, Downtimes, Content Changes
Bug xes, Downtimes, Content Changes: ehealth systems are often dynamic systems. A description of changes to methods therefore also includes important changes made on the intervention or comparator during the trial (e.g., major bug xes or changes in the functionality or content) (5-iii) and other "unexpected events" that may have in uenced study design such as staff changes, system failures/downtimes, etc. [2].

Add a subitem under CONSORT item 3b
4a) Eligibility criteria for participants

4a-i) Computer / Internet literacy
Computer / Internet literacy is often an implicit "de facto" eligibility criterion; this should be explicitly clarified [1].

Comment on subitem 4a-i)
and own either a) an iPhone with Wi-Fi or 3G/4G/LTE capabilities or b) an Android phone along with an Apple iPad version 2.0 or newer device. iOS ownership was required as one of our intervention apps was only available on iOS devices at the time of the study. Participants had to endorse clinically significant symptoms of depression, as indicated by either a score of 5 or higher on the Patient Health Questionnaire (PHQ-9 [21]), or a score of 2 or greater on PHQ item 10 (indicating that they felt disabled in their life because of their mood)."

4a-ii) Open vs. closed, web-based vs. face-to-face assessments
Mention how participants were recruited (online vs. offline), e.g., from an open access website or from a clinic, and clarify if this was a purely web-based trial, or whether there were face-to-face components (as part of the intervention or for assessment), i.e., to what degree the study team got to know the participant. In online-only trials, clarify whether participants were quasi-anonymous and whether having multiple identities was possible, or whether technical or logistical measures (e.g., cookies, email confirmation, phone calls) were used to detect/prevent these.

Comment on subitem 4a-ii)

4a-iii) Information given during recruitment
Information given during recruitment. Specify how participants were briefed for recruitment and in the informed consent procedures (e.g., publish the informed consent documentation as appendix, see also item X26), as this information may have an effect on user self-selection, user expectation and may also bias results.

Add a subitem under CONSORT item 4a
4b) Settings and locations where the data were collected

"explain the study. Participants had to pass a quiz that confirmed their understanding that participation was voluntary, was not a substitute for treatment, and that they were to be randomized to treatment conditions. Each question had to be answered correctly before moving on to baseline assessment and randomization. Individuals who indicated that they speak Spanish were given a 7-question multiple-choice survey assessing language proficiency. Eligibility was established after consent was obtained. Once eligible, participants were sent a link to download their assessment application (Surveytory)."

4b-i) Report if outcomes were (self-)assessed through online questionnaires
Clearly report if outcomes were (self-)assessed through online questionnaires (as common in web-based trials) or otherwise.
Comment on subitem 4b-i)
"All treatment and assessment was delivered over the participants' smart devices, using assessment and intervention software."

4b-ii) Report how institutional affiliations are displayed
"Report how institutional affiliations are displayed to potential participants [on ehealth media], as affiliations with prestigious hospitals or universities may affect volunteer rates, use, and reactions with regards to an intervention" [1].

Add a subitem under CONSORT item 4b
5) The interventions for each group with sufficient details to allow replication, including how and when they were actually administered

5-i) Mention names, credentials, affiliations of the developers, sponsors, and owners
Mention names, credentials, affiliations of the developers, sponsors, and owners [6] (if authors/evaluators are owners or developers of the software, this needs to be declared in a "Conflict of interest" section).

Comment on subitem 5-i)
"AG is cofounder, chief science advisor, and shareholder of Akili Interactive Labs, a company that develops cognitive training software. AG has a patent pending for a game-based cognitive training intervention, 'Enhancing cognition in the presence of distraction and/or interruption', on which the cognitive training application (PROJECT: EVO) that was used in this study was based. No other author has any conflict of interest to report."

5-ii) Describe the history/development process
Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.

5-iii) Revisions and updating
Revisions and updating. Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial.
Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).

5-iv) Quality assurance methods
Provide information on quality assurance methods to ensure accuracy and quality of information provided [1], if applicable.

Comment on subitem 5-iv)
N/A; all assessments were self-reported.

5-v) Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used
Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used. Replicability (i.e., other researchers should in principle be able to replicate the study) is a hallmark of scientific reporting.

5-vi) Digital preservation
Digital preservation: Provide the URL of the application; however, as the intervention is likely to change or disappear over the course of the years, also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages which are accessible without login.
Comment on subitem 5-vi)

5-vii) Access
Access: Describe how participants accessed the application, in what setting/context, whether they had to pay (or were paid) or not, and whether they had to be a member of a specific group. If known, describe how participants obtained "access to the platform and Internet" [1]. To ensure access for editors/reviewers/readers, consider providing a "backdoor" login account or demo mode for reviewers/readers to explore the application (also important for archiving purposes, see vi).

Comment on subitem 5-vii)
"Once participants completed the consent process, a secure, one-user valid link to a secure webpage was sent to participants' email address that contained a brief video explaining how to download and then use their assigned intervention. This webpage also contained a link to automatically download said apps to the participants' phone or iPad."

Comment on subitem 5-viii)

5-ix) Describe use parameters
Describe use parameters (e.g., intended "doses" and optimal timing for use) [1]. Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, heaviness of use [1], if any, or was the intervention used ad libitum.

Comment on subitem 5-ix)
For all treatments and assessments, participants were sent reminders/notifications through the mobile app interfaces.

5-x) Clarify the level of human involvement
Clarify the level of human involvement (care providers or health professionals, also technical assistance) in the e-intervention or as co-intervention (detail number and expertise of professionals involved, if any, as well as the "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered" [6]). It may be necessary to distinguish between the level of human involvement required for the trial and the level of human involvement required for routine application outside of an RCT setting (discuss under item 21, generalizability).

Comment on subitem 5-x)
"Study staff contacted participants to remind them to use their intervention and/or assessment app if they had three consecutive days of missing data via email or SMS. Aside from this, participants were only contacted when they A) were due payment, or B) when they reached out to study staff for technical support."

5-xi) Report any prompts/reminders used
Report any prompts/reminders used: Clarify if there were prompts (letters, emails, phone calls, SMS) to use the application, what triggered them, their frequency, etc. [1]. It may be necessary to distinguish between the prompts/reminders required for the trial and those required for routine application outside of an RCT setting (discuss under item 21, generalizability).
The first app was a cognitive intervention video game (Project: Evolution™ [EVO]) designed to modulate cognitive control abilities, a common neurological deficit underlying depression [22]. The second intervention was an app based on problem-solving therapy (iPST), an evidence-based treatment for depression [23]. The final intervention app, an information control, provided daily health tips (HTips) for overcoming depressed mood, such as self-care (e.g., taking a shower) or physical activity (e.g., taking a walk; see supplemental materials from Anguera et al. 2016 for a full description of each).

Comment on subitem 5-xi)
"Study staff contacted participants to remind them to use their intervention and/or assessment app if they had three consecutive days of missing data via email or SMS. Aside from this, participants were only contacted when they A) were due payment, or B) when they reached out to study staff for technical support."

5-xii) Describe any co-interventions (incl. training/support)
Describe any co-interventions (incl. training/support): Clearly state any "interventions that are provided in addition to the targeted eHealth intervention" [1], as an ehealth intervention may not be designed as a standalone intervention. This includes training sessions and support [1]. It may be necessary to distinguish between the level of training required for the trial and the level of training for routine application outside of an RCT setting (discuss under item 21, generalizability).

Comment on subitem 5-xii)
There were no additional interventions beyond the mobile mental health applications apart from technical assistance in downloading the intervention application itself.

Add a subitem under CONSORT item 5
6a) Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed

6a-i) Online questionnaires: describe if they were validated for online use [6] and apply CHERRIES items to describe how the questionnaires were designed/deployed
If outcomes were obtained through online questionnaires, describe if they were validated for online use [6] and apply CHERRIES items to describe how the questionnaires were designed/deployed [9].

Comment on subitem 6a-i)

6a-ii) Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored
Describe whether and how "use" (including intensity of use/dosage) was defined/measured/monitored (logins, log file analysis, etc.). Use/adoption metrics are important process outcomes that should be reported in any ehealth trial.

Comment on subitem 6a-ii)

6a-iii) Describe whether, how, and when qualitative feedback from participants was obtained
Describe whether, how, and when qualitative feedback from participants was obtained (e.g., through emails, feedback forms, interviews, focus groups).

6b) Any changes to trial outcomes after the trial commenced, with reasons
(no EHEALTH-specific subitems under CONSORT item 6b)

Comment below to suggest a subitem
There were no changes to trial outcomes after the trial commenced.
7a) How sample size was determined
NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed

7a-i) Describe whether and how expected attrition was taken into account when calculating the sample size
Describe whether and how expected attrition was taken into account when calculating the sample size.

Comment below to suggest a subitem
Participants ended the study once they reached the 12-week mark.
8a) Method used to generate the random allocation sequence NPT: When applicable, how care providers were allocated to each trial group

Comment below to suggest a subitem
No care providers were allocated to any of the trial groups. Participants were assigned randomly into one of three conditions (see below).
8b) Type of randomisation; details of any restriction (such as blocking and block size)

(no EHEALTH-specific subitems under CONSORT item 8b)
Comment below to suggest a subitem
"Participants were randomly assigned to one of the three apps using a random number generator built into the eligibility survey."

9) Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned

(no EHEALTH-specific subitems under CONSORT item 9)
Comment below to suggest a subitem
See above: a random number generator was used to determine which treatment group participants would be assigned to.
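The allocation mechanism described above (a random number generator choosing between each participant's two preferred apps, per the equipoise stratified design noted under item 2a-i) can be sketched in Python. This is a hypothetical illustration only: the arm labels, the `assign_arm` function, and the uniform coin flip between the two preferred arms are assumptions, not the study's actual implementation.

```python
import random

ARMS = ["iPST", "EVO", "HealthTips"]  # illustrative arm labels

def assign_arm(preferred, rng):
    """Equipoise stratified design (Lavori et al., 2001): each participant
    ranks the arms, and randomization happens only between that
    participant's two most-preferred conditions."""
    top_two = preferred[:2]
    if len(top_two) != 2 or not set(top_two) <= set(ARMS):
        raise ValueError("participant must rank at least two valid arms")
    return rng.choice(top_two)  # fair coin flip between the two favorites

rng = random.Random(42)  # seeded so the allocation sequence is reproducible
print(assign_arm(["EVO", "iPST", "HealthTips"], rng))  # "EVO" or "iPST"
```

A participant who ranked Health Tips last can thus never be assigned to it, which is what distinguishes this design from simple three-arm randomization.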
10) Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions

(no EHEALTH-specific subitems under CONSORT item 10)
Comment below to suggest a subitem

11a) If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how
NPT: Whether or not those administering co-interventions were blinded to group assignment

11a-i) Specify who was blinded, and who wasn't
Specify who was blinded, and who wasn't. Usually, in web-based trials it is not possible to blind the participants [1, 3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis or those administering co-interventions (if any).

Comment on subitem 11a-i)
Study staff were not blind to treatment condition.
11a-ii) Discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator"
Informed consent procedures (4a-ii) can create biases and certain expectations; discuss e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator".

Add a subitem under CONSORT item 11a
11b) If relevant, description of the similarity of interventions (this item is usually not relevant for ehealth trials, as it refers to similarity of a placebo or sham intervention to an active medication/intervention)

(no EHEALTH-specific subitems under CONSORT item 11b)
Comment below to suggest a subitem
The Health Tips application (control) functions similarly to supportive therapy in clinical trials comparing the effectiveness of psychotherapies. "Although it provided daily advice on improving one's health, it is not tied to any specific theory, similar to supportive-control treatments. Participants were not required to act on the health tip."

12a) Statistical methods used to compare groups for primary and secondary outcomes
NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed

12a-i) Imputation techniques to deal with attrition / missing values
Imputation techniques to deal with attrition / missing values: Not all participants will use the intervention/comparator as intended and attrition is typically high in ehealth trials. Specify how participants who did not use the application or dropped out from the trial were treated in the statistical analysis (a complete case analysis is strongly discouraged, and simple imputation techniques such as LOCF may also be problematic [4]).

Comment on subitem 12a-i)
Missing assessments were not imputed for the analysis.

Add a subitem under CONSORT item 12a
12b) Methods for additional analyses, such as subgroup analyses and adjusted analyses

(no EHEALTH-specific subitems under CONSORT item 12b)
Comment below to suggest a subitem

X26) (not a CONSORT item)
To assess the marginal effect (i.e., the association in the entire sample) between longitudinal weekly PHQ-9 and SDS scores and treatment arms, we used generalized estimating equations (GEEs) (Liang & Zeger, 1986). Briefly, GEE models extend generalized linear models to longitudinal or clustered data. GEEs use a working correlation structure that accounts for within-subject correlations of participant responses, thereby estimating robust and unbiased standard errors compared to ordinary least squares regression (Ballinger, 2004; Liang & Zeger, 1986).

X26-iii) Safety and security procedures
Safety and security procedures, incl. privacy considerations, and "any steps taken to reduce the likelihood or detection of harm (e.g., education and training, availability of a hotline)" [1].

RESULTS
13a) For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center

(no EHEALTH-specific subitems under CONSORT item 13a)
Comment below to suggest a subitem
363 (33.51%) of the initially enrolled participants were active in the study. The remaining 720 (66.48%) did not respond to any post-enrollment surveys or provided only passive data and, as a result, were considered to have dropped out of the study. Of the 363 active participants, 80 were in Health Tips, 118 in iPST, and 88 in EVO; the remaining 77 were enrolled but never downloaded their assigned treatment app and were therefore included in a fourth, enrolled-but-not-randomized arm.
13b) For each group, losses and exclusions after randomisation, together with reasons

13b-i) Attrition diagram
Strongly recommended: an attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) [5], or other figures or tables demonstrating usage/dose/engagement.
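The survival-curve-style summary recommended here can be derived from each participant's last recorded activity. A minimal sketch (the `attrition_curve` function and the last-activity data are hypothetical, not from the study):

```python
def attrition_curve(last_active_week, horizon):
    """Proportion of participants still active at each week, akin to a
    survival curve: a participant counts as active at week w if their
    last recorded activity occurred at week w or later."""
    n = len(last_active_week)
    return [sum(1 for w in last_active_week.values() if w >= week) / n
            for week in range(horizon + 1)]

# Hypothetical last-activity weeks for five participants in a 12-week trial
last_seen = {"p1": 12, "p2": 2, "p3": 5, "p4": 12, "p5": 0}
curve = attrition_curve(last_seen, 12)
print(curve[0], curve[3], curve[12])  # -> 1.0 0.6 0.4
```

Plotting one such curve per study arm over the 12 weeks yields exactly the kind of attrition diagram this subitem asks for.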

Comment on subitem 13b-i)
A CONSORT flow diagram can be found in the main Word document.

14a) Dates defining the periods of recruitment and follow-up

Comment on subitem 14a-i)
"4,502 participants were screened in the span of over 6 months." "The primary outcome measures were of depression (PHQ-9) and function (SDS [25]), with these scores captured weekly for the first four weeks of treatment, then at 8 and 12 weeks (see Supplementary materials for discussion of other exploratory outcomes)."

Add a subitem under CONSORT item 14a
14b) Why the trial ended or was stopped (early)

(no EHEALTH-specific subitems under CONSORT item 14b)
Comment below to suggest a subitem
The trial ended upon completion of the 12-week assessment.

15) A table showing baseline demographic and clinical characteristics for each group
NPT: When applicable, a description of care providers (case volume, qualification, expertise, etc.) and centers (volume) in each group

15-i) Report demographics associated with digital divide issues
In ehealth trials it is particularly important to report demographics associated with digital divide issues, such as age, education, gender, socioeconomic status, and computer/Internet/ehealth literacy of the participants, if known.

Comment on subitem 15-i)
16) For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned groups

16-i) Report multiple "denominators" and provide definitions
Report multiple "denominators" and provide definitions: Report N's (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants who "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.

Comment on subitem 16-i)
Primary information can be found in the CONSORT diagram in the main document.

16-ii) Primary analysis should be intent-to-treat
Primary analysis should be intent-to-treat; secondary analyses could include comparing only "users", with the appropriate caveats that this is no longer a randomized sample (see 18-i).

17a) For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval)

17a-i) Presentation of process outcomes such as metrics of use and intensity of use
In addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical. This does not only refer to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description of how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a).

17b) For binary outcomes, presentation of both absolute and relative effect sizes is recommended

(no EHEALTH-specific subitems under CONSORT item 17b)

18) Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory

18-i) Subgroup analysis of comparing only users
A subgroup analysis of comparing only users is not uncommon in ehealth trials, but if done, it must be stressed that this is a self-selected sample and no longer an unbiased sample from a randomized trial (see 16-iii).

19) All important harms or unintended effects in each group
(for specific guidance see CONSORT for harms)

19-i) Include privacy breaches, technical problems
Include privacy breaches, technical problems. This does not only include physical "harm" to participants, but also incidents such as perceived or real privacy breaches [1], technical problems, and other unexpected/unintended incidents. "Unintended effects" also includes unintended positive effects [2].

Comment on subitem 19-i)
"App use was collected and ported to a secure data server at UCSF, which met all HIPPA and security requirements imposed by the university."

19-ii) Include qualitative feedback from participants or observations from staff/researchers
Include qualitative feedback from participants or observations from staff/researchers, if available, on strengths and shortcomings of the application, especially if they point to unintended/unexpected effects or uses. This includes (if available) reasons for why people did or did not use the application as intended by the developers.

DISCUSSION

22) Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence
NPT: In addition, take into account the choice of the comparator, lack of or partial blinding, and unequal expertise of care providers or centers in each group

22-i) Restate study questions and summarize the answers suggested by the data [2], starting with primary outcomes and process outcomes (use)
Restate study questions and summarize the answers suggested by the data [2], starting with primary outcomes and process outcomes (use).

20-i) Typical limitations in ehealth trials
Typical limitations in ehealth trials: Participants in ehealth trials are rarely blinded. Ehealth trials often look at a multiplicity of outcomes, increasing the risk of a Type I error. Discuss biases due to non-use of the intervention/usability issues, biases through informed consent procedures, and unexpected events.

21-i) Generalizability to other populations
Generalizability to other populations: In particular, discuss generalizability to a general Internet population outside of an RCT setting and to the general patient population, including applicability of the study results for other organizations [2].

21-ii) Discuss if there were elements in the RCT that would be different in a routine application setting
Discuss if there were elements in the RCT that would be different in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions, or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of an RCT setting.