Acceptability of Web-Based Mental Health Interventions in the Workplace: Systematic Review

Background: Web-based interventions have proven to be effective not only in clinical populations but also in the occupational setting. Recent studies conducted in the work environment have focused on the effectiveness of these interventions. However, the role of employees’ acceptability of web-based interventions and programs has not yet enjoyed a similar level of attention.

Objective: The objective of this systematic review was to conduct the first comprehensive study of employees’ acceptability of web-based mental health interventions based on direct and indirect measures, outline the utility of different types of web-based interventions for work-related mental health issues, and build a research base in the field.

Methods: The search was conducted between October 2018 and July 2019 and allowed for any study design. The studies used either qualitative or quantitative data sources. The web-based interventions were generally aimed at supporting employees with their mental health issues. The study characteristics were outlined in a table and graded for quality using a traffic light schema. The level of acceptability was individually rated using commonly applied methods, including percentile quartiles ranging from low to very high.

Results: A total of 1303 studies were identified through multiple database searches and additional resources, of which 28 (2%) were rated as eligible for the synthesis. The results of employees’ acceptability levels were mixed, and the studies were very heterogeneous in design, intervention characteristics, and population. Approximately 79% (22/28) of the studies outlined acceptability measures from high to very high, and 54% (15/28) reported acceptability levels from low to moderate (overlapping when studies reported both quantitative and qualitative results). Qualitative studies also provided insights into barriers and preferences, including simple and tailored application tools as well as a preference for nonstigmatized language. However, there were multiple flaws in the methodology of the studies, such as the blinding of participants and personnel.

Conclusions: The results outline the need for further research with more homogeneous acceptability studies before a final conclusion can be drawn. However, the underlying results show a tendency toward general acceptability of web-based interventions in the workplace, with findings of general applicability to the use of web-based mental health interventions.


Background
There is an increasing level of awareness regarding the importance of health and well-being in the workplace [1]. Anxiety, stress, and depression are the dominant mental health issues for workers in the United Kingdom, with a prevalence of 1320 cases per 100,000 workers, causing close to 18 million lost working days per year [2]. Employers have a responsibility to take care of their employees and provide support for both their physical and mental health [3,4].
Web-based mental health interventions are increasingly being used in the work environment as they have the advantage of being cost-effective, efficient, anonymous, location-independent, flexible, and empowering. They are regularly used for both prevention and intervention [5-10].
Web-based interventions also have multiple flaws, including technical difficulties, ethical concerns, increased attrition rates, and low engagement in the absence of guidance by professional support [6,11]. Therefore, it is important to understand the barriers that reduce engagement and acceptability of web-based interventions. Multiple systematic studies provide evidence of the effectiveness of web-based mental health interventions at work [12,13]. Importantly, they outline the need to tailor interventions to populations' needs, which requires greater insight into barriers and the acceptability of web-based mental health interventions in the workplace.
The acceptability of an intervention includes users' emotional and cognitive responses to the intervention [14], including affective perceptions, burden and barriers, perceived benefits, understanding of the intervention, opportunity costs, and usability. In practice, this takes into account the individuals' preferences for features and tools, their willingness to use web-based interventions, their engagement (eg, dropout and attrition rate), and users' perceived utility or satisfaction with the intervention.
Studying users' acceptability of new treatments has ethical, methodological (validity), and practical applications [15]. Specifically, ethical obligations include the exploration of reasons for acceptable or unacceptable treatments as perceived by the users. It is important to understand potential barriers to intervention engagement before introducing the intervention to employees. Awareness of intervention efficacy alone does not mean that employees accept web-based interventions as a useful tool for self-help.
Sekhon et al [14] outlined studies assessing interventions' acceptability by using operational definitions in line with measurable acceptability data (dropout rate and satisfaction rating) and qualitative studies focusing on in-depth user experiences. Current research has been limited to studies on clinical populations. Clinical populations differ significantly in symptom severity, level of risks, functionality, and response to treatment; thus, the results might not be generalizable to occupational populations [13]. Therefore, it is relevant to explicitly assess employees' acceptability of web-based interventions.

Objectives
This systematic review aimed to assess employees' acceptability of web-based interventions to improve their mental health. It also aimed to inform intervention design and utility by evaluating user experience as well as the barriers to and facilitators of using web-based mental health interventions in the workplace.

Methods
This systematic review was conducted in line with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [16] and followed the ENTREQ (Enhancing the Transparency in Reporting the Synthesis of Qualitative Research) guidelines [17,18].

Eligibility Criteria
Eligible studies met the Population, Intervention, Comparison, Outcome criteria and included qualitative interviews, quantitative studies including scale measures of satisfaction and forms of attrition rates, and mixed methods studies. Acceptability was assessed by means of both direct (acceptability, satisfaction, and experience) and indirect (compliance, completion, adherence, attrition, and dropout rate) measures. Studies were included if they were available in English and published after 2005.

Population
The population was narrowed down to people aged ≥18 years as child and adult interventions differ. Participants could be employed part-time or full-time or be self-employed. Studies were included if >60% of the participants were employed. This threshold ensured that the main outcome could be generalized to the eligible population for the study's purpose.

Intervention
Following the guidance of the meta-review by Joyce et al [19] on general workplace mental health interventions, web-based interventions were kept broad to include those that were conducted at work, had a work-related component, or aimed to treat work-related risk factors (eg, stress, depression, or anxiety). However, eligible interventions had to be exclusively web-based programs or interventions that targeted employed people or were applied in an occupational setting. Interventions or programs could be delivered via a computer program, app, or website. They could also differ in the device used to deliver the content (computer, laptop, or mobile phone) as well as include various forms of multimedia. All interventions aimed to change employees' behavior or mental health. They could have the aim of preventing, treating, or rehabilitating mental health issues.

Comparison
This review compared randomized controlled trials (RCTs), nonrandomized comparative trials, noncomparative trials, explorative studies, and qualitative studies published between 2005 and 2019.

Outcome
Studies were included if they measured acceptability directly or indirectly by means of qualitative assessment of acceptability, satisfaction, and experience or the indirect measure of acceptability through compliance, completion, adherence, attrition, or dropout rate. Studies were included that assessed the potential willingness to use interventions or the potential features of interventions that were preferred or addressed as disadvantageous for utility.

Exclusion Criteria
Studies were excluded if they did not meet the Population, Intervention, Comparison, Outcome criteria; that is, if they included guidance by coaches or therapists or face-to-face interactions, or if >40% of the participants were retired or unemployed. In addition, studies were excluded if they did not measure acceptability or willingness to use as an outcome variable or used interventions that were not focused on the users' mental health.

Data Sources
The search was conducted in July 2019 and included the following electronic databases: PsycINFO (Ovid), Embase (Ovid), MEDLINE (Ovid), Global Health (Ovid), and the Cochrane Library Trials (CENTRAL). Backward searching was used to ensure that no key papers were missed.

Search Strategy
Databases were searched for studies published between 2005 and 2019. Duplicates were removed (Ovid search option). The Boolean operators AND and OR were applied (Textbox 1) to combine different terminologies of 4 key concepts included in a free-text and keyword search. Specific occupations were added to the general search of employees to increase the likelihood of finding studies on work settings with high stress exposure (eg, military or firefighting professions). The search terms were categorized as occupational settings (employee), web-based interventions, mental ill health, and acceptability of interventions. All key terms accounted for both American and British spellings.

Textbox 1. Search terms organized into 4 concepts.

Study Selection
Duplicates were removed, and titles, abstracts, and full texts were scanned for the inclusion criteria. After the assessment of the full texts' eligibility by the first author (JS), all the included studies were summarized and synthesized. The study selection process is outlined in the PRISMA flowchart (Figure 1).

Data Collection Process
Data were collected according to the following criteria: reference, characteristics of the intervention, its aims and objectives, study design, population, setting and recruitment, results, acceptability, and, if available, reasons for dropout, as well as qualitative data.

Quality Assessment
The quality of the studies was assessed using the Critical Appraisal Skills Programme (CASP) checklist [20] for both qualitative studies and RCTs. Studies were evaluated based on research design, representativeness, recruitment procedure, presence of a comparison group, dropout rate, validity, reliability, and relevance of the measurement tools. Quality was graded with a traffic light system based on the 10 quality questions of the CASP: yes (green) if the information was present, no (red) if it was not, and not available (yellow) if the information was not apparent or clearly outlined within the study (Multimedia Appendix 1). Studies were included in this systematic review irrespective of quality judgment.

Synthesis of Data
Similar to corresponding systematic reviews on the acceptability of web-based interventions [10], the level of acceptability was categorized into the following quartiles: low (− −), moderate (−), high (+), and very high (+ +). This was specifically used for studies that reported a satisfaction rating on a scale, the percentage results of which could then be transferred to the suggested levels of acceptability. In addition, studies reporting dropout rates and compliance percentages were organized according to the 4-quartile rating system for acceptability. If studies reported mixed results, including positive and negative outcomes on different acceptability factors, they were rated with a tilde (~). Qualitative studies were synthesized in an integrative, meta-aggregative style following methodological guidance on the use of meta-aggregations [49] as well as similar systematic reviews [50]. The key data were extracted and are outlined in Table 1 (see Multimedia Appendix 2 [31-33,35,38,39,45-48] for direct measures and qualitative data sources).
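The quartile banding described above can be expressed as a simple mapping. As a minimal sketch only: the review does not publish its exact percentage cutoffs, so the 25/50/75 thresholds below are an illustrative assumption, and the tilde (~) rating for mixed results is omitted.

```python
def acceptability_level(score_pct: float) -> str:
    """Map a satisfaction or compliance percentage to one of the
    review's four quartile bands. The 25/50/75 cutoffs are assumed
    for illustration; the review does not state exact thresholds."""
    if score_pct >= 75:
        return "very high (++)"
    if score_pct >= 50:
        return "high (+)"
    if score_pct >= 25:
        return "moderate (-)"
    return "low (--)"

# Example: the mean satisfaction score of 82.6% reported in this review
# falls in the very high (++) band under these assumed cutoffs.
print(acceptability_level(82.6))
```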

Review Process
The characteristics of the studies are outlined in Table 1 as well as in Multimedia Appendix 2, which includes more details. Within the review process, 1303 papers were identified, of which 363 (28%) were removed as duplicates, as not published in English, or as published before 2005. The titles of the remaining 940 (72%) papers were then screened, followed by the abstracts of 379 (29%). Papers were excluded if the interventions were independent of work environments, did not include a majority of employed participants, or did not focus on mental health issues. Most studies were excluded owing to the involvement of face-to-face or telephone guidance by a coach or therapist. Ultimately, 28 studies were identified for further analysis, which either reported indirect measures of the acceptability of web-based interventions (n=17, 61%) or provided qualitative data on acceptability (n=11, 39%; Figure 1).

Study Characteristics
The 28 included studies had an overall sample of 9739 participants, with sample sizes ranging from 8 to 1140. The mean age of the participants was 40.7 years, and most participants were White and employed full-time.

Methodological Quality
The methodological quality of the studies is summarized in Multimedia Appendix 1. The studies were assessed for quality using the CASP [20] qualitative and quantitative templates and reported in the form of a traffic light schema. Various quality flaws were outlined in the studies, and no study met all 10 criteria marked by the CASP [20]. Independent of quality, all studies (28/28, 100%) were included in the final synthesis. Allocation bias appeared to be low in the quantitative studies as participants were mostly randomly distributed to their condition (15/17, 88%). However, the quantitative studies often indicated performance and detection bias as they frequently missed reporting on blinding status or the researchers' awareness of the participants' condition. Selection bias was generally high as participants repeatedly originated from specific population samples (eg, male-dominated industry workers or female educational staff). Attrition bias was predominantly high as various studies reported a high dropout rate, which weakened their generalizability. Several studies missed reporting the specific demographics of their samples and, thus, might risk the presence of confounders, whereas other studies (4/28, 14%) clearly outlined their risk of confounding [25,26,30,42]. The analysis of the quantitative studies was generally good as all studies used data from all participant groups in their final analysis. The qualitative studies showed generally good quality in the guidance of clear questions, attention to ethical considerations, and the provision of clear information on methodology. However, various studies missed accounting for the potential bias caused by the relationship between the researchers and the participants.

Intervention Characteristics and Country of Conduct
As outlined in the Study Characteristics section, most studies used cognitive behavioral therapy (CBT) in their administered interventions (9/28, 32%). CBT was relatively equally distributed across Western countries, including the United States (2/9, 22%), Germany (2/9, 22%), the Netherlands (2/9, 22%), the United Kingdom (2/9, 22%), and Australia (1/9, 11%). Summarizing the CBT studies, the acceptability level indicated that 44% (4/9) of the studies had a low to moderate level of acceptability, whereas 33% (3/9) of the studies showed a high to very high acceptability level. Approximately 11% (1/9) of the studies had a mix of moderate and high acceptability levels. Other analyzed intervention types (mindfulness, psychological education, cognitive appraisal, emotional regulation, acceptance and commitment therapy, problem solving, cognitive strategies, exercise, and tracking tools) did not indicate any pattern of acceptability level. Broadly speaking, the intervention type, country of conduct, and outcome of the study did not indicate any notable patterns. However, most of the studies (26/28, 93%) were conducted in Western countries.

Measure of Acceptability
Relevant studies measured acceptability in different ways. They used direct measures of acceptability, which included qualitative data through questionnaires and interviews, or indirect quantitative measures by means of take-up, dropout, compliance, adherence, attrition, or completion rates. Some studies used both direct and indirect measures. All measures of acceptability are outlined in either Table 1 or Multimedia Appendix 2 (qualitative synthesized data) in the context of the reference, intervention, sample, study design, recruitment, outcome, indirect and direct acceptability measures, available reasons for dropout, example quotations from interviews, and an individually rated acceptability level. Table 1 and Multimedia Appendix 2 present the direct outcome of employees' acceptability of web-based therapy in the workplace.

When the qualitative outcomes were categorized into key themes, the following topics commonly emerged: (1) general interest in or willingness to use web-based interventions, (2) employees' satisfaction ratings of the utility of the interventions, and (3) preferred features of the design and application style of the interventions. Most participants reported a generally positive interest in and acceptability of web-based interventions [31,33,35,38,46]. However, other studies reported mixed results and negative opinions [32,33,39,48]. Commonly preferred features of web-based mental health interventions were the use of nonstigmatized language [45,48], interactive support [39,45], and a broad application spectrum, as well as short, mobile, and interactive multimedia interventions [31,35,38,48]. The synthesized outputs of the studies were written as descriptions of each theme and provided within the context of the setting and intervention type. To deliver deeper insight into the common themes, Multimedia Appendix 2 provides quotations from interviewees in the primary studies.
As this systematic review synthesized key themes in an integrative, meta-aggregative way, quotations aid in the understanding of the summarized key themes. Most of the studies reported scale-based satisfaction ratings of the web-based interventions. Employees were mostly satisfied with the interventions and rated their utility positively.

Direct Measure of Acceptability
Multiple studies assessed acceptability using satisfaction, usability, or interest ratings of the intervention (Table 1). Satisfaction ratings were frequently used [22,24,26,29,31,36-38,41,43]. The average satisfaction score was 82.6%, corresponding to a very high individually defined acceptability level (++) of web-based interventions. Moreover, 14% (4/28) of the studies reported a mean score of 0.85 (85%) for practical use [22,31,38,41], equivalent to high (+) acceptability. In particular, Wilson et al [47] reported a rate of 75% for "comfortability" with using a mental health program on the computer and a rate of 84% for "willingness to use." In contrast, Hennemann et al [34] reported that 89.1% of participants rated the "acceptability" of general occupational web-based mental health interventions as low. The two studies were very heterogeneous in intervention specificity and sample population. Hennemann et al [34] explained the negative outcome through the direct predictors of acceptability: "social influence," "effort" and "performance expectancy," "time spent on the web," and "frequency of searching online for health information."

Indirect Measure of Acceptability
This systematic review included indirect (hypothetical) measures of acceptability characterized by dropout, attrition, compliance, adherence, uptake, and completion rates. These indirect measures of the acceptability of web-based interventions in the workplace are summarized in Table 1. The mean dropout rate across the included studies was 50.9%, with a range of 15.3% [25] to 67.7% [36], which is equivalent to a moderate individually defined level of acceptability [23-25,27,36,39,44]. A few studies reported the reasons for dropout or termination of the interventions. Repeated reasons were lack of time [21,26,40], technical difficulties [26,27,39,40], younger age [23,27,36], lower education [36], lack of motivation [26,40], no need for help [40], ability to manage stress personally [22], dissatisfaction with the intervention [26,39], higher initial level of psychological distress [30], and privacy concerns [31]. Other measures of acceptability included an average attrition rate of 32% [24,37,42], an average adherence rate of 54% [25,28], an uptake and intervention start rate of 11% [23,27], and a completion rate of 68% [30,40]. As these results show, there was no clear consensus on the acceptability level, and comparison of the studies was difficult as they were heterogeneous in study design, sample, and methodology. However, the most frequently reported indirect measure of acceptability was the dropout rate, which corresponded to a moderate (−) level of acceptability.
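For indirect measures such as dropout, a high percentage implies low acceptability, so the rate must be inverted before banding. A minimal sketch, again assuming 25/50/75 quartile cutoffs that the review does not state explicitly:

```python
def dropout_acceptability(dropout_pct: float) -> str:
    """Band a dropout rate by the retained share (100 - dropout).
    The quartile cutoffs are an assumption for illustration only."""
    retained = 100.0 - dropout_pct  # invert: high dropout = low acceptability
    if retained >= 75:
        return "very high (++)"
    if retained >= 50:
        return "high (+)"
    if retained >= 25:
        return "moderate (-)"
    return "low (--)"

# The mean dropout rate of 50.9% leaves 49.1% retained, which lands in
# the moderate (-) band, consistent with the review's overall rating.
print(dropout_acceptability(50.9))
```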

Principal Findings
This systematic review assessed the levels of employees' acceptability of web-based interventions aimed at improving mental health. The findings showed a generally positive level of acceptability and highlighted various factors to be considered in making interventions acceptable, engaging, and useful for employees. Themes to be addressed with caution when introducing interventions are the use of stigmatized terminology, including words of ill health and mental illness. In terms of implementation, applications are recommended to be short and use interactive multimedia tools.
Results were obtained from 28 separate studies. Satisfaction ratings and feedback appeared positive, particularly when the interventions included multimedia and nonstigmatizing language. In particular, 79% (22/28) of the studies showed acceptability measures from high to very high, and 54% (15/28) reported acceptability levels from low to moderate (overlapping when studies reported both quantitative and qualitative results). The average satisfaction rating was >80%, and employees rated the interventions' utility as good overall. However, the indirect quantitative measures contradicted this uniformly positive picture: the most commonly measured indicator, the dropout rate, was approximately 50%. Hence, attrition was very high in multiple studies, which calls into question the efficacy of unguided, self-applied interventions.
Collectively, these results are in line with other acceptability studies that supported the general acceptability of web-based interventions in clinical settings [51]. Various studies have outlined barriers to assessing acceptability; for example, negative results from indirect measures. In addition, complications in synthesis owing to the heterogeneity of the interventions have been repeatedly reported.
Stigma and attitudes toward mental health at work were an emerging theme. Acceptability levels may relate to the web-based interventions themselves or to the fact that the intervention relates to mental health. This is supported by other studies showing that there is fear of stigmatization when seeking support [52]. It may also be difficult to successfully implement web-based interventions within an organization as employees prefer to separate health matters and their workplace [32]. Hence, the issue around mental health and stigma, especially at work, may be strongly influenced by the organizational culture that influences the use of mental health interventions [45].
The relationship between dropout and acceptability requires further assessment to interpret the current evidence. Although dropout from web-based workplace interventions was high (the mean across the included studies was 50.9%), explorations of the reasons for this were limited. Indeed, studies have outlined that high dropout rates might not result from disinterest in occupational web-based interventions for mental health issues but appear to be generally high in computerized interventions [53]. This suggests that such interventions are not as engaging as guided or face-to-face sessions and that people might not feel committed enough to complete the treatment or program. Consequently, web-based interventions should be tailored and made as interactive and attractive as possible by using animation tools, pictures, and videos, as well as made as short and simple as possible to increase engagement and decrease the likelihood of technical issues [12]. Furthermore, the findings of this study suggest that, before interventions are applied in organizations, people's needs, the environment, and the culture should be assessed; the interventions should be tailored accordingly; and awareness of the benefits and understanding of the use should be addressed.

Strengths and Limitations
The generalizability of the findings across workplaces may be limited because of the diversity of individual workplaces; for example, their organizational culture and stigma or attitudes toward mental health. In addition, assessing for confounding variables, including recruitment, setting, intervention characteristics, and country of conduct, did not reveal significant information. However, most of the included studies were conducted in Western countries and used CBT-based interventions, which may further limit generalizability.
Assessing acceptability using indirect measures may be flawed as there could be multiple reasons for employees to stop the intervention. Specifically, dropping out could be the result of feeling rehabilitated and seeing no further benefit in using the intervention. Nevertheless, dropout data provide valuable insight into the acceptability of interventions, although more in-depth analyses of the reasons for dropping out should be conducted.
Analysis of the specific assessment of acceptability of occupational web-based interventions was limited because of the heterogeneity of the study designs, intervention types, sample characteristics, and conditions under which the interventions were provided to employees. The quantitative studies relied on cross-sectional self-report methods, whereas the qualitative studies used small samples. Data collection and analysis biases may arise from the role of the researchers [54]. As the qualitative acceptability results were generally higher than those from the indirect measures, this further raises the question of researcher bias. In addition, limitations regarding the consistent and objective measurement of acceptability in the wider literature prevent robust conclusions from being drawn. However, the inclusion and critical appraisal of qualitative studies may have added depth to the factors captured within acceptability in this study [55].
Despite these limitations, this study offers a comprehensive insight into multiple forms of acceptability measures [56]. Using both qualitative and quantitative as well as direct and indirect measures of acceptability provided a deeper insight into the options for assessing the acceptability of interventions in general. Although this study focused on the workplace, it examined the acceptability of web-based interventions that could be applied more generally to support people's mental health. For example, the findings could support the implementation of interventions outside of the workplace (eg, as part of clinical mental health treatments). These results might help clinicians, developers, researchers, and the health technology industry create effective and engaging tools in the future.

Implications
In relation to workplace practice, before applying interventions, it would be beneficial to increase people's knowledge of web-based interventions as well as assess their needs in general to improve their attitude toward interventions [13,34]. This is supported by Murray et al [57], whose study found that participants who rejected computerized treatments had significantly lower expectations of the usefulness of self-help and had general concerns, anxiety, and misunderstandings about computerized treatments. Hence, acceptability may be increased by identifying and correcting misperceptions before participation. Similarly, tailoring interventions to the environment and employees' needs could increase their general interest and willingness to use them [13]. In other words, web-based interventions for employees should be adapted to the specific environment in which they are applied as well as to the users' needs to increase engagement and acceptability levels. Generally, the acceptability of interventions might increase if employees and organizations are made aware of evidence-based web-based interventions that have multiple practical benefits and the potential to increase individuals' mental health and well-being in the long run. Finally, the ability of web-based interventions to engage and retain users is critical for ensuring reduced dropout and increased acceptability.
Regarding future research, the results of acceptability studies could be influenced by the general stigma around mental health topics and interventions. Therefore, first, future research should incorporate acceptability measures of mental health issues into the analysis to assess for confounding variables. Second, regarding quantitative data on acceptability, it would be beneficial if future research included a more in-depth analysis of the reasons for dropout or attrition. Third, future research should address the conceptual and methodological limitations of work in the field. If more organizations from various settings used mental health interventions, the research analysis could be more homogeneous.
Organizations might lack the knowledge on how to apply personal health support but could provide their employees with interventions that range from broader aspects of stress management to specified apps that tackle specific mental health issues (eg, depression). Finally, this research was conducted before the COVID-19 pandemic, which changed work styles and environments and affected how people sought and received mental health support. Further research should analyze changes in acceptability as a result of the pandemic to examine shifts in use and acceptability of mental health interventions both within and outside of the workplace.

Conclusions
This study assessed the acceptability of web-based workplace interventions for mental health. In general, workers are open to web-based mental health interventions. However, qualitative and quantitative studies suggested varying levels of acceptability, raising the possibility of bias. The importance of stigma, organizational culture, and the implementation of the intervention was highlighted, the latter relating to the engaging design and quality of the intervention as well as the approach to delivery in the workplace itself. Several factors were identified that need to be considered to ensure the effective implementation of web-based interventions in the workplace, some of which may also apply to the general use of such interventions in supporting people's mental health. Interventions should be tailored to individual needs and cultural context, use nonstigmatized language, and be made interactive and easy to use. Fostering an understanding of the potential value of an intervention is also recommended to increase its acceptability. Methodological limitations were highlighted to guide the cautious interpretation and generalization of early evidence in this area, along with the need to improve methodological rigor in emerging research.