Offline Digital Education for Postregistration Health Professions: Systematic Review and Meta-Analysis by the Digital Health Education Collaboration

Background: The worldwide shortage and disproportionate distribution of health care workers are further aggravated by the inadequacy of training programs, difficulties in implementing conventional curricula, deficiencies in learning infrastructure, and a lack of essential equipment. Offline digital education has the potential to improve the quality of health professions education. Objective: The primary objective of this systematic review was to evaluate the effectiveness of offline digital education compared with various controls in improving learners' knowledge, skills, attitudes, satisfaction, and patient-related outcomes. The secondary objectives were (1) to assess the cost-effectiveness of the interventions and (2) to assess adverse effects of the interventions on patients and learners. Methods: We searched 7 electronic databases and 2 trial registries for randomized controlled trials published between January 1990 and August 2017. We used Cochrane systematic review methods. Results: A total of 27 trials involving 4618 individuals were included in this systematic review. Meta-analyses found that compared with no intervention, offline digital education (CD-ROM) may increase knowledge in nurses (standardized mean difference [SMD] 1.88; 95% CI 1.14 to 2.62; participants=300; studies=3; I²=80%; low certainty evidence). A meta-analysis of 2 studies found that compared with no intervention, the effects of offline digital education (computer-assisted training [CAT]) on nurses' and physical therapists' knowledge were uncertain (SMD 0.55; 95% CI –0.39 to 1.50; participants=64; I²=71%; very low certainty evidence). A meta-analysis of 2 studies found that compared with traditional learning, a PowerPoint presentation may improve the knowledge of patient care personnel and pharmacists (SMD 0.76; 95% CI 0.29 to 1.23; participants=167; I²=54%; low certainty evidence).
A meta-analysis of 4 studies found that compared with traditional training, the effects of computer-assisted training on skills in community (mental health) therapists, nurses, and pharmacists were uncertain (SMD 0.45; 95% CI –0.35 to 1.25; participants=229; I²=88%; very low certainty evidence). A meta-analysis of 4 studies found that compared with traditional training, offline digital education may have little effect or no difference on satisfaction scores in nurses and mental health therapists (SMD –0.07; 95% CI –0.42 to 0.28; participants=232; I²=41%; low certainty evidence). A total of 2 studies found that offline digital education may have little or no effect on patient-centered outcomes when compared with blended learning. For skills and attitudes, the results were mixed and inconclusive. None of the studies reported adverse or unintended effects of the interventions. Only 1 study reported the costs of the interventions. The risk of bias was predominantly unclear, and the certainty of the evidence ranged from low to very low. Conclusions: There is some evidence to support the effectiveness of offline digital education in improving learners' knowledge, and insufficient evidence, in both quality and quantity, for the other outcomes. Future high-quality studies are needed to increase generalizability and inform the use of this modality of education.


Background
There is no health care system without health professionals. The health outcomes of populations rely on well-educated nurses, pharmacists, dentists, and other allied health professionals [1]. Unfortunately, these professionals are in short supply and high demand [2,3]. Almost 1 billion people are negatively affected by the lack of access to adequately trained health professionals, suffering ill health or dying as a result [4,5]. In many low- and middle-income countries (LMICs), this situation is further aggravated by difficulties in implementing traditional learning programs; deficiencies in health care systems and infrastructure; and a lack of essential supplies, poor management, corruption, or low remuneration [6].
Digital education, also known as e-learning, is an umbrella term encompassing a broad spectrum of educational interventions characterized by their tools, technological contents, learning objectives or outcomes, pedagogical approaches, and delivery settings. It includes, but is not limited to, online and offline computer-based digital education, massive open online courses (MOOCs), mobile learning (mLearning), serious gaming and gamification, digital psychomotor skill trainers, virtual reality, and virtual patient scenarios [7]. Digital education aims to improve the quality of teaching by facilitating access to resources and services, as well as the remote exchange of information and peer-to-peer collaboration [8]; it is also increasingly recognized as one of the key strategic platforms for building strong education and training systems for health professionals worldwide [9]. The United Nations and the World Health Organization consider digital education an effective means of addressing the educational needs of health professionals, especially in LMICs.
This review focused on offline digital education. This refers to the use of personal computers or laptops to assist in delivering stand-alone multimedia materials without the need for the internet or local area network connections [10]. The educational content can be delivered via videoconferences, emails, and audio-visual learning materials kept in either magnetic storage, for example, floppy disks, or optical storage, for example, CD-ROM, digital versatile disk, flash memory, multimedia cards, external hard disks, or downloaded from a networked connection, as long as the learning activities do not rely on this connection [11].
Offline digital education has several potential benefits, such as unrestricted knowledge transfer and improved accessibility and relevance of health professions education [12]. Further benefits include the flexibility and adaptability of educational content [13], so that learners can absorb curricula at a convenient pace, place, and time [14]. The interventions can also deliver an interactive, associative, and perceptual learning experience by combining text, images, audio, and video through combined visual, auditory, and spatial components, further improving health professionals' learning outcomes [15,16]. In doing so, offline digital education can potentially stimulate neurocognitive development (memory, thinking, and attention) by enhancing the efficiency of chemical synaptic transmission between neurons, strengthening specific neuronal connections, creating new patterns of neuronal connectivity, and generating new neurons [17]. Finally, health professionals better equipped with knowledge, skills, or professional attitudes as a result of offline digital education might improve the quality of health care service provision, as well as patient-centered and public health outcomes, and reduce the costs of health care.

Objectives
This systematic review was one of a series of reviews evaluating the scope for implementation and the potential impact of a wide range of digital health education interventions for postregistration and preregistration health professionals. The objective of this systematic review was to evaluate the effectiveness of offline digital education compared with various controls in improving learners' knowledge, skills, attitudes, satisfaction, and patient-centered outcomes.

Methods
At the time of conducting and reporting the review, we used and adhered to the systematic review methods as recommended by the Cochrane Collaboration [18]. For a detailed description of the methodology, please refer to the study by Car et al [7].

Search Strategy and Data Sources
We searched the following databases (from January 1990 to August 2017): MEDLINE (via Ovid), Excerpta Medica dataBASE (via Elsevier), Web of Science, Educational Resource Information Center (via Ovid), Cochrane Central Register of Controlled Trials, The Cochrane Library, PsycINFO (via Ovid), and the Cumulative Index to Nursing and Allied Health Literature (via EBSCO). The search strategy for MEDLINE is presented in Multimedia Appendix 1. We searched for papers in English but considered eligible studies in any language. We also searched 2 trial registries (EU Clinical Trials Register and ClinicalTrials.gov), screened reference lists of all included studies and pertinent systematic reviews, and contacted the relevant investigators for further information.

Eligibility Criteria
Randomized controlled trials (RCTs) and cluster RCTs (cRCTs) of postregistration health professionals, except medical doctors (who were covered in a separate review [19]), were eligible for inclusion in this review if they used either stand-alone or blended offline digital education with any type of control (active or inactive) and measured knowledge, skills, attitudes, satisfaction, or patient-centered outcomes (as primary outcomes) or adverse effects or costs (as secondary outcomes).
We excluded crossover trials, stepped wedge designs, interrupted time series, controlled before-and-after studies, and studies of doctors (including studies of medical diagnostics and treatment technologies) or medical students. Participants were not excluded on the basis of sociodemographic characteristics such as age, gender, ethnicity, or any other related characteristics.

Data Selection, Extraction, and Management
The search results from the different electronic databases were combined in a single EndNote (X8.2) library, and duplicate records of the same reports were removed. In total, 2 reviewers independently screened titles and abstracts to identify studies that potentially met the inclusion criteria. The full text versions of these articles were retrieved and read in full. Finally, 2 review authors independently assessed articles against the eligibility criteria, and 2 reviewers independently extracted the data for each of the included studies using a structured data extraction form and the Covidence Web-based software (Veritas Health Innovation, Melbourne, Australia). We extracted all relevant data on the characteristics of participants, interventions, comparator groups, and outcome measures. For continuous data, we reported means and SDs; for dichotomous data, we reported odds ratios (ORs) and their 95% CIs. For studies with multiple arms, we compared the relevant intervention arm with the least active control arm to avoid double counting of data. Any disagreements were resolved through discussion between the 2 authors; if no consensus was reached, a third author acted as an arbiter.
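For continuous outcomes, per-arm means and SDs of the kind extracted above are typically converted into a standardized mean difference (Hedges' g) with its variance before pooling. The following sketch illustrates the standard conversion; the function name and the example numbers are hypothetical and not taken from any included study.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) and its variance
    from per-arm means, SDs, and sample sizes."""
    df = n1 + n2 - 2
    # pooled SD across both arms
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sp                # Cohen's d
    j = 1 - 3 / (4 * df - 1)         # small-sample bias correction
    g = j * d
    # approximate sampling variance of g
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

# Hypothetical two-arm study: intervention 35 (SD 10), control 33 (SD 7), n=30 each
g, var_g = hedges_g(35, 10, 30, 33, 7, 30)
print(f"g={g:.2f}, var={var_g:.4f}")
```

Each study's (g, var_g) pair is what enters the meta-analysis as its effect estimate and weight.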

Assessment of Risk of Bias
In total, 2 reviewers independently assessed the risk of bias of the included studies using the Cochrane Collaboration's Risk of Bias tool [18]. Studies were assessed for risk of bias in the following domains: random sequence generation; allocation concealment; blinding of participants or personnel; blinding of outcome assessment; completeness of outcome data (attrition bias); selective outcome reporting (reporting bias); validity and reliability of outcome measures; baseline comparability; and consistency in intervention delivery. For cRCTs, we also assessed and reported the risk of bias associated with an additional domain: selective recruitment of cluster participants. Judgments concerning the risk of bias for each study fell under 3 categories: high, low, or unclear risk of bias.

Data Synthesis
Data were synthesized using Review Manager version 5.3. Where studies were homogeneous enough (in terms of their populations, interventions, comparator groups, outcomes, and study designs) to allow meaningful conclusions, we pooled them in a meta-analysis using a random-effects model and presented results as standardized mean differences (SMDs). We assessed heterogeneity through a visual inspection of the overlap of CIs in forest plots and by calculating the chi-square test and the I² inconsistency statistic [18].
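The random-effects pooling and heterogeneity statistics reported in this review (pooled SMD, τ², the χ² [Cochran's Q] test, and I²) follow the standard DerSimonian-Laird procedure implemented in Review Manager. A minimal sketch of that procedure, using hypothetical study effects and variances rather than data from this review:

```python
import math

def random_effects_smd(effects, variances):
    """DerSimonian-Laird random-effects pooling of SMDs,
    returning the pooled SMD, its 95% CI, tau^2, I^2, and Q."""
    k = len(effects)
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the I^2 inconsistency statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = k - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird estimate of between-study variance tau^2
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # re-weight with tau^2 and pool
    ws = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(ws, effects)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2, q

# Hypothetical SMDs and variances for 2 studies (not from this review)
pooled, ci, tau2, i2, q = random_effects_smd([0.2, 1.0], [0.05, 0.08])
print(f"SMD={pooled:.2f}; 95% CI {ci[0]:.2f} to {ci[1]:.2f}; I^2={i2:.0f}%")
```

Note how a high I² widens the CI by inflating τ²; this is why the heterogeneous pooled results in this review carry wide, often null-crossing, intervals.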

Summary of Findings Tables
We prepared Summary of Findings (SoF) tables to present the results for each of the primary outcomes. We converted results into absolute effects when possible and provided a source and rationale for each assumed risk cited in the table(s). A total of 2 authors (PP and MS) independently rated the overall quality of evidence as implemented and described in GRADEpro (GRADEpro GDT Web-based version) and Chapter 11 of the Cochrane Handbook for Systematic Reviews of Interventions [20]. We considered the following criteria to assess the quality of evidence: limitations of studies (risk of bias), inconsistency of results, indirectness of the evidence, imprecision, and publication bias, and we downgraded the quality where appropriate. This was done for all primary outcomes reported in the review.

Results
Our searches yielded a total of 30,532 citations; 27 studies with 4618 participants were included in the review. The study selection process is presented in Figure 1. For the characteristics of excluded studies, please refer to Multimedia Appendix 2.

Risk of Bias in Included Studies
We present our judgments about each risk of bias item for all included studies as (summary) percentages in Figure 2. Figure 3 shows separate judgments about each risk of bias item for each included study.
The risk of bias was predominantly low for random sequence generation (55.5% of the studies), selective reporting, baseline comparability, and consistency in intervention delivery. The risk of bias was predominantly unclear for allocation concealment and blinding of participants, personnel, or outcome assessors. A total of 12 studies (44.4%) had a high risk of attrition bias; 6 studies (22.2%) had a high risk of bias for validity and reliability of outcome measures; and 5 studies (18.5%) had a high risk of bias for baseline comparability. In total, 3 studies (11.1%) had a high risk of performance bias, and 1 study (3.7%) had a high risk of detection bias. For cRCTs, all 5 studies had a low risk of bias for selective recruitment of cluster participants.
A total of 2 studies did not report sufficient data that could be included in the meta-analysis. Weingardt [47] reported that compared with no intervention, CD-ROM probably improves substance abuse counselors' knowledge (P<.01; moderate certainty evidence). Albert [43] reported that compared with no intervention, CD-ROM and email may slightly improve dentists' knowledge (P<.05; low certainty evidence).

Skills
Schneider [37] reported an increase in nurses' skills (decreased core 1 error rates) between baseline and postintervention periods in the intervention group (OR 0.38, 95% CI 0.19 to 0.74; P=.004; low certainty evidence). Albert [43] reported that compared with no intervention, the offline digital education (CD-ROM and email) intervention may slightly improve dentists' skills (P<.01; low certainty evidence). Gasko [31] reported that the CD-ROM intervention may have little or no effect on nurse anesthetists' skills compared with traditional learning (mean 33 [SD 7] vs mean 35 [SD 10]; low certainty evidence). Schermer [24] reported that compared with traditional training (joint baseline workshop), CD-ROM may slightly improve the rate of adequate tests (32.9% vs 29.8%; OR 1.2, 95% CI 0.6 to 2.5; P=.663; low certainty evidence).

Satisfaction
Liu [33] reported that 87% of participants in the CD-ROM groups agreed or strongly agreed that the program was flexible (mean 4.28; low certainty evidence). There was no comparison group for this outcome. For a summary of the effects of these comparisons on knowledge, skills, and satisfaction, see SoF in Multimedia Appendix 4.

Knowledge
A meta-analysis of 2 studies [26,45] considered to be homogeneous enough found that compared with no intervention, the effects of offline digital education (computer-assisted training [CAT]) on nurses' and physical therapists' knowledge were uncertain (SMD 0.55; 95% CI –0.39 to 1.50; very low certainty evidence; Figure 5).
A substantial level of heterogeneity of the pooled studies was detected (τ²=0.33; χ²=3.40; P=.07; I²=71%). One study [44] did not present data that could be included in the meta-analysis for this outcome. Hsieh [44] reported that compared with no intervention, offline digital education may improve dentists' knowledge (P<.01; low certainty evidence).
Beidas [42] reported that compared with routine training, offline digital education (computer training) may have little or no effect on knowledge. Taken together, these results suggest that computer-assisted interventions may slightly improve various health professionals' knowledge, but the quality of the evidence was low and the results were mixed.

Attitudes
In Moran [45], 93% of respondents reported a strong agreement or an agreement with the statement that computer-assisted instructions were helpful. There was no comparison group for this outcome (low certainty evidence). Hsieh [44] reported that compared with no intervention, the computer-based tutorial group may improve dentists' attitudes (P<.01; low certainty evidence). Lawson [41] found that compared with traditional education, offline digital education may have little or no effect on participants' attitudes concerning expected helpfulness (P=.082; low certainty evidence).

Satisfaction
A meta-analysis of 4 studies [25,28,36,42] considered to be homogeneous enough found that compared with traditional training, offline digital education may have little effect or no difference on satisfaction scores in nurses and mental health therapists (SMD –0.07; 95% CI –0.42 to 0.28; low certainty evidence; Figure 7). A moderate level of heterogeneity of the pooled studies was detected (τ²=0.05; χ²=5.10; P=.16; I²=41%).
A total of 2 studies [23,38] were not included in the meta-analysis for this outcome as they did not report a sufficient amount of data for pooling. Boh [38] found that compared with traditional learning, offline digital education (audio cassette and microcomputer simulation) may have little or no effect on pharmacists' satisfaction postintervention (low certainty evidence). Rosen [23] found that compared with usual education, offline digital education (computer-based training) may improve nurses' satisfaction at 6 months (P<.0001; low certainty evidence).

Patient-Centered Outcomes
Ismail [22] reported that compared with no intervention, offline digital education may have little or no effect on the average percentage of women reporting perineal pain on sitting and walking at 10 to 12 days (mean difference [MD]=0.7%; 95% CI −10.1 to 11.4; P=.89; low certainty evidence).

Secondary Outcomes
Only 1 study [27] mentioned the costs of offline digital education. Bayne and Bindler [27] reported the costs as US $54 per participant in the computer-assisted group compared with US $23 per participant in the no intervention control group. For a summary of the effects of these comparisons on all outcomes, see SoF Multimedia Appendix 4.

Knowledge
A meta-analysis of 2 studies [40,46] considered to be homogeneous enough found that compared with traditional learning, a PowerPoint presentation may improve the knowledge of patient care personnel and pharmacists (SMD 0.76; 95% CI 0.29 to 1.23; low certainty evidence; Figure 8). A substantial level of heterogeneity of the pooled studies was detected (τ²=0.06; χ²=2.19; P=.14; I²=54%).
One study did not report sufficient data to be included in the meta-analysis. Donyai [39] reported that compared with traditional learning, a PowerPoint presentation may improve pharmacy professionals' knowledge (MD=9.9; 95% CI 0.4 to 19.3; P=.04).

Satisfaction
de Beurs [25] reported that compared with blended learning, offline digital education (software) may have little effect or no difference on patients' satisfaction at 3 months (mean [SD] 6.8 [4.4] vs 6.8 [4.3]; low certainty evidence).

Patient-Centered Outcomes
de Beurs [25] reported that compared with blended learning, offline digital education (software) may have little effect or no difference on patients' suicidal ideation at 3 months (mean [SD] 4.2 [13.4] vs 4.9 [10.5]; low certainty evidence). For a summary of the effects of these comparisons on all outcomes, see SoF Multimedia Appendix 4.
There were not enough data in any of the pooled analyses to allow sensitivity analyses to be conducted. Similarly, given the small number of trials contributing data to outcomes within the different comparisons in this review, a formal assessment of potential publication bias was not feasible.

Discussion
We summarized and critically evaluated the evidence for the effectiveness of offline digital education in improving knowledge, skills, attitudes, satisfaction, and patient-centered outcomes in postregistration health professionals other than medical doctors. A total of 27 studies with 4618 participants met the eligibility criteria. The included studies were highly diverse, spanning different professions, and we found evidence to support the effectiveness of certain types of offline digital education, such as CD-ROM and PowerPoint, compared with no intervention or traditional learning in improving learners' knowledge. For the other outcomes (skills, attitudes, satisfaction, and patient-related outcomes) and comparators, the evidence was less compelling.

Quality of the Evidence
Overall, the quality of evidence was low or very low. We assessed the quality of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. A total of 12 studies (44.4%) had a high risk of attrition bias; reducing the dropout rate might reduce this risk and further improve the quality of the studies. Only 8 studies (29.6%) had a low risk of bias for validity and reliability of outcome measures. This issue of nonvalidated measurement tools has repeatedly been raised and is paramount to advancing the field [49]. Only 2 studies (7.4%) adequately described blinding of participants and personnel. As with many educational interventions, blinding of participants or personnel might prove challenging. However, we highlight the need for more adequate descriptions of masking to further reduce the risk of performance bias and allow clearer judgments to be made. We also downgraded the overall quality of evidence for inconsistency where there was a high level of heterogeneity (ie, I²>50%). Overall, there was a moderate-to-considerable level of heterogeneity across the meta-analyses (I² range 41% to 88%), and 4 (out of 5) meta-analyses had I²>50%. Further reasons for downgrading included indirectness (we downgraded once, for 1 outcome only, where there were differences in the populations studied). Participants were not homogeneous, ranging from nurses, pharmacists, mental health therapists, dentists, midwives and obstetricians, physical therapists, and patient care personnel to substance abuse counselors. Other sources of indirectness stemmed from heterogeneous interventions (their duration, frequency, and intensity), comparison groups, and outcome assessment tools, which ranged from multiple choice or single choice questionnaires, tests, observations, checklists, scales, surveys, and visual analogue scales to simulations. Finally, we also downgraded for imprecision where the sample size was small.
The included studies also failed to provide details of sample size and power calculations and may have therefore been underpowered and unable to detect change in learning outcomes.

Strengths and Limitations of the Review
This systematic review has several important strengths, including comprehensive searches without language restrictions; robust screening, data extraction, and risk of bias assessment; and a critical appraisal of the evidence. However, some limitations must be acknowledged when interpreting the results of this study. First, we considered subgroup analyses to be unfeasible because of the insufficient number of studies under the respective outcomes and professional groups. However, we minimized potential biases in the review process and maintained its internal validity by strictly adhering to the guidelines outlined by Higgins et al [18].

Agreements and Disagreements With Other Studies or Reviews
A review by Al-Jewair [50] found some evidence to support the effectiveness of computer-assisted learning in improving knowledge gains in undergraduate or postgraduate orthodontic students or orthodontic educators, but no definite conclusions were reached, and future research was recommended. Rosenberg [51] concluded that computer-aided learning is as effective as other methods of teaching and can be used as an adjunct to traditional education or as a means of self-instruction for dental students. Based on 4 RCTs with mixed results, Rosenberg [52] was unable to reach any conclusions on knowledge gains and recommended more high-quality trials evaluating the effectiveness of computer-aided learning in orthodontics. Newer technologies are also currently being evaluated for the same outcomes; MOOCs and mLearning can play a very important role in health professions education, such as improving clinical knowledge and promoting lifelong learning [53,54]. We are also aware of recent reviews that reached similar conclusions [55-62]. For example, digital education seems to be at least as effective as (and sometimes more effective than) traditional education in improving dermatology, diabetes management, or smoking cessation-related skills and knowledge [58,61,62]. Most of these reviews, however, stressed the inconclusiveness of the overall findings, mainly because of the low certainty of the evidence.

Conclusions
Offline digital education may play a role in the education of health professionals, especially in LMICs, where access to Web-based digital education is lacking for a variety of reasons, including cost, and there is some evidence to support the effectiveness of these interventions in improving the knowledge of health professionals. However, because of the existing gaps in the evidence base, including limited evidence for other outcomes; the lack of subgroup analyses (eg, by intervention type such as CD-ROM or PowerPoint); and the low and very low quality of the evidence, the overall findings are inconclusive. More research is needed, especially research evaluating patient-centered outcomes, costs, and safety (adverse effects); examining those subgroups; and originating from LMICs. Such research should be adequately powered, be underpinned by learning theories, use valid and reliable outcome measures, and blind outcome assessors.