Implementation Strategies for Knowledge Products in Primary Health Care: Systematic Review of Systematic Reviews

Background: The underuse or overuse of knowledge products leads to waste in health care, and primary care is no exception.

Objective: This study aimed to characterize which knowledge products are frequently implemented, the implementation strategies used in primary care, and the implementation outcomes that are measured.

Methods: We performed a systematic review (SR) of SRs using the Cochrane systematic approach to include eligible SRs. The inclusion criteria were any primary care contexts, health care professionals and patients, any Effective Practice and Organization of Care implementation strategies of specified knowledge products, any comparators, and any implementation outcomes based on the Proctor framework. We searched the MEDLINE, EMBASE, CINAHL, Ovid PsycINFO, Web of Science, and Cochrane Library databases from their inception to October 2019 without any restrictions. We searched the references of the included SRs. Pairs of reviewers independently performed selection, data extraction, and methodological quality assessment using A Measurement Tool to Assess Systematic Reviews 2. Data extraction was informed by the Effective Practice and Organization of Care taxonomy for implementation strategies and the Proctor framework for implementation outcomes. We performed a descriptive analysis and summarized the results using a narrative synthesis.

Results: Of the 11,101 records identified, 81 (0.73%) SRs were included. Of these 81 SRs, 47 (58%) involved health care professionals alone, and 15 had a high or moderate methodological quality. Most SRs addressed a single type of knowledge product (56/81, 69%), most commonly clinical practice guidelines (26/56, 46%) or management and behavioral or pharmacological health interventions (24/56, 43%). Mixed strategies were used for implementation (67/81, 83%), predominantly education-based strategies (meetings in 60/81, 74%; materials distribution in 59/81, 73%; and academic detailing in 45/81, 56%), reminders (53/81, 65%), and audit and feedback (40/81, 49%). Education meetings (P=.13) and academic detailing (P=.11) seemed to be used more when the population was composed of health care professionals alone. Improvement in the adoption of knowledge products was the most commonly measured outcome (72/81, 89%). The evidence level was reported in 12% (10/81) of SRs for 62 outcomes (including 48 improvements in adoption), of which 16 (26%) were of moderate or high level.

Conclusions: Clinical practice guidelines and management and behavioral or pharmacological health interventions are the most commonly implemented knowledge products, implemented through the mixed use of educational, reminder, and audit and feedback strategies. There is a need for methodologically strong SRs of randomized controlled trials to explore their effectiveness and the entire cascade of implementation outcomes.


Background
The effective implementation of knowledge products is essential for improving and sustaining the well-being of populations and reducing waste in health care. In 2019, health care spending represented 17.7% of the US gross domestic product [1] and 11.5% of that of Canada [2]. However, the underuse of effective knowledge products that would benefit the population, combined with the misuse or overuse of knowledge products that offer no added value or even cause more harm than benefit, contributes to this lack of impact and waste [3,4]. Knowledge products include a wide range of health interventions or policies, programs, practices, or processes of a technological, pharmacological, behavioral, or managerial nature, as well as guidelines [5,6].
Given this gap between the production of knowledge products and their application in clinical practice and health policy, a growing emphasis has been placed on knowledge translation (KT) [7,8] and implementation strategies [8-10]. Implementation strategies can be understood as an actively planned and deliberately initiated set of processes, methods, techniques, activities, and resources intended to translate a given knowledge product into practice within a particular setting and context [5,11-13].
In recent years, given the many constraints on resources (human and financial) faced by most, if not all, health care systems, which have recently been made even worse by the COVID-19 pandemic [14], there has been a growing urgency to synthesize what is known about effective implementation strategies [9,15-24]. Despite these efforts, gaps in KT remain: existing overviews are of variable methodological and reporting quality [25], which sometimes leads to conflicting conclusions and makes it challenging for health care stakeholders to decide which strategies are effective for implementing a given knowledge product. This concern has not been explicitly addressed in the existing literature. Therefore, we planned a 3-phase project with the ultimate goal of identifying, for each category of knowledge product, the most effective implementation strategies for their uptake into health care professionals' clinical practice. The first phase was to critically analyze the existing overviews to determine their strengths and weaknesses. This allowed us to highlight many methodological challenges, such as the definition of eligibility criteria and the literature search, the way in which data were synthesized, the methodological quality assessment of the included literature reviews, and the assessment of the evidence level. These points informed the design of the present systematic review (SR) of SRs, which is the second phase of our project.

Objective
We sought to characterize which knowledge products are frequently implemented, the implementation strategies used in primary care, and the implementation outcomes measured.

Project Design and Registration
To optimize the identification of effective implementation strategies in the area of primary care, we conducted a 3-phase project using SR methodologies. In phase 1 (completed review), we conducted a critical analysis of the methodological strengths and weaknesses of the existing overviews. In phase 2 (current overview), we conducted an SR of SRs to characterize the most frequently implemented knowledge products, implementation strategies used, implementation outcomes measured, and reported levels of evidence in individuals or stakeholders participating in the provision of health care (referred to as health care professionals) or in health care professionals and end users (patients and clients) in the context of primary health care. In the included SRs, the primary studies may be of more robust experimental designs (randomized controlled trials [RCTs]) or less robust designs. Therefore, the effectiveness of key knowledge products and key implementation strategies will be measured in a separate phase 3 (future review) using an SR of RCTs.

Population and Clinical Context
We included any person involved in health care provision, that is, health care professionals or caregivers, and end users (patients). By caregivers, we mean parents, guardians, or friends of patients, community health workers, or any other nonclinician who provides health care. The empirical studies in the included reviews could concern either health care professionals or caregivers alone, or health care professionals or caregivers together with patients. Reviews were excluded if they concerned patients only. We did not place restrictions on age, gender, or health conditions. Reviews had to cover the primary care setting [31], as it is a major level of health service use. Rather than targeting the physical location of activities, we were interested in primary health care services, such as health promotion and prevention, diagnosis, and treatment of illness and injury. By primary health care providers, we refer to family physicians, nurse practitioners, and pharmacists who directly provide health care services to clients and coordinate with higher levels of care to ensure continuity of care [31].

Intervention
We focused on implementation strategies that were predetermined in our protocol [27] and based on the Effective Practice and Organization of Care (EPOC) taxonomy [8], including the following: audit, feedback, audit and feedback, clinical incident reporting, monitoring the performance of the delivery of health care, communities of practice, continuous quality improvement, educational games, educational materials, educational meetings, educational outreach visits or academic detailing, clinical practice guidelines, interprofessional education, local consensus processes, local opinion leaders, managerial supervision, patient-mediated interventions, public release of performance data, reminders, routine patient-reported outcome measures, and tailored interventions. A review may have included primary studies that used exclusively 1 type of implementation strategy (mono-faceted) or exclusively more than 1 type (multifaceted). Within the same review, some primary studies may have used exclusively one implementation strategy, whereas others may have used exclusively more than one implementation strategy (mixed). We excluded interventions used to develop the knowledge product, as well as the scaling up and sustainability of interventions (topics addressed in a separate project). Knowledge products are tools used to share knowledge with users [32,33]. They include tools such as clinical practice guidelines, decision support tools, policy briefs or decision-making tools, one pagers, and health interventions (technological, pharmacological, behavioral, or management). In the clinical practice guidelines category, we included clinical practice guidelines, disease management protocols, clinical recommendations, and clinical procedures.
For health interventions, we included knowledge products for which the implementation aimed to change professional behavior or attitude (behavioral), professional competencies or processes or quality of care (management), prescribing or testing (pharmacological), and the use of technologies (technological). In the shared decision-making and support tools category, we included clinical decision support systems and tools aimed at improving clinical decision-making. In a given SR, 1 type of knowledge product (single) or more than 1 type (multiple) may have been implemented. A review was included if the knowledge product and implementation strategies were specified.

Comparators
We considered as comparators either usual practice (none of the predetermined implementation strategies defined earlier) or any of those predetermined implementation strategies.

Outcomes
Our interest was focused on implementation outcomes, including acceptability, adoption, appropriateness, feasibility, adherence or fidelity, implementation costs, and penetration or reach of a knowledge product, as defined in the taxonomies by Proctor et al [34] and Lewis et al [35]. Detailed definitions are provided in Multimedia Appendix 1. Several of these outcomes may have been studied in the same SR.

Design of Included Reviews
We included both Cochrane and non-Cochrane SRs (with or without meta-analyses) and mixed method reviews that used a comprehensive and reproducible approach and met our inclusion criteria. The reviews may have included one or more types of experimental or observational primary study designs. We excluded reviews of reviews, non-SRs, original research, protocols, comments, editorials, conference abstracts, working groups and colloquium reports, experts' opinions, and pilot studies.

Information Sources and Search Strategy
We searched the MEDLINE (Ovid), EMBASE, CINAHL (EBSCO), PsycINFO (Ovid), Web of Science, and Cochrane Library databases from their inception to October 18, 2019, without restrictions on language or geographic setting. We searched the bibliographies of the included reviews to identify additional relevant ones.
We followed an extensive literature search process to identify SRs of interventions that implement health knowledge products. In March 2017, an information specialist (NR) designed the search strategy for each database. The initial search strategy developed in MEDLINE was reviewed and approved by some of the team members before its translation into other bibliographic databases by the information specialist. During the selection process, gaps were identified in the search strategy. The search strategy was modified and rerun in October 2019. We used the following main concepts: KT, strategies, reviews, health professionals, and primary care. Multimedia Appendix 2 details the search strategy for each of the aforementioned databases. The records found were exported to the EndNote software (Clarivate), and duplicates were removed.

Study Selection
We used a Microsoft Excel form developed for our review to perform the study selection in 3 steps. First, our reviewers performed a pilot selection and discussed any discordance to ensure a common understanding of the eligibility criteria before proceeding. Second, pairs of reviewers independently screened the titles and abstracts; records coded as included or unclear were eligible for full-report review against the inclusion criteria by pairs of reviewers. Third, full reports were coded as included or unclear on the one hand, or as excluded on the other. At each step, consensus discussions were held to resolve disagreements. A senior reviewer validated the final list of included SRs. We did not need to contact any of the review authors. A flow diagram, according to the PRISMA guidelines [29], was produced to summarize the study selection process (Figure 1).

Data Extraction
We used the piloted Microsoft Excel form developed for our review to extract the data. To develop the form, we used the EPOC taxonomy [8] for the categories of implementation strategies and complemented this information by specifying whether the implementation strategies were mono-faceted, multifaceted, or mixed. For the outcome definitions, we used the Proctor et al [34] and Lewis et al [35] evaluation frameworks. These frameworks integrate dimensions not found in other frameworks, such as acceptability, appropriateness, feasibility, and implementation costs. They also provide outcome synonyms found in the literature, thus facilitating recategorization when needed. For each outcome, we specified whether the measurement was objective and recorded the measurement tools used, if reported. Evidence-based interventions are practices in which health professionals use available evidence-based information to make decisions for individual patients or community health [6,36]. This approach operates by appraising evidence and formulating recommendations or guidelines [6,37,38] and by integrating evidence and community preferences for policy and practice changes at the public health level (health interventions) [6,38]. We were unable to find a formal taxonomy of knowledge products; therefore, we used the literature [5,6,32,33] to categorize them as clinical practice guidelines, health interventions, or shared decision-making and support tools. Where they were health interventions, we specified their technological, pharmacological, behavioral, or management nature. Furthermore, we extracted information on whether the type of implemented knowledge product was single (eg, clinical practice guidelines alone) or multiple (eg, clinical practice guidelines and health interventions).
The population was defined as health care professionals only or health care professionals and patients and their number and characteristics of age and gender were extracted where available.
To give context to our review, the following additional information was also extracted: general characteristics of the included review (such as year of publication, number and names of databases searched, search date ranges considered, any language restriction, method of synthesizing data, medical area of concern, settings, designs, and number of primary studies), whether the authors of the included reviews completed methodological quality assessment (tools used and overall result), whether they completed publication bias assessment (tool used and whether any treatment was done), and whether they completed the assessment of quality of evidence (tool used and level of evidence by each reported outcome).
Pairs of reviewers piloted the tool on at least 2 reviews and independently carried out extractions and validations by comparing the extracted information. Discussions for consensus were held in case of discrepancies.

Assessment of Methodological Quality
The methodological quality of the included reviews was assessed using A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR 2) [39]. In contrast to the first version, this updated version allows the assessment of SRs that include RCTs, nonrandomized studies of health interventions, or both [39]. We conducted a pilot phase and held discussions on discordance; where necessary, the pilot phase was extended until a common understanding of the assessment criteria was achieved. Pairs of assessors independently scored each of the tool's 16 items. An overall rating was also provided: high (no or one noncritical flaw), moderate (more than one noncritical flaw), low (one critical flaw with or without noncritical flaws), or critically low (more than one critical flaw with or without noncritical flaws) [39]. Critical flaws included a protocol not registered before the beginning of the review (standard 2), lack of adequacy and comprehensiveness of the search strategy (standard 4), no justification for excluding individual reviews (standard 7), use of an unsatisfactory technique to assess the risk of bias of the individual included reviews (standard 9), inappropriate meta-analytical methods (standard 11), no consideration of the risk of bias when interpreting the results of the review (standard 13), and an unsuitable assessment of the presence and likely impact of publication bias (standard 15) [39]. Reviewers compared their results and, in cases of disagreement, reached a consensus by discussion or by the arbitration of a third reviewer.
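The AMSTAR 2 overall rating rule described above can be written out as a simple decision function. The sketch below is only an illustration of that published rule, assuming the item-level judgments (counts of critical and noncritical flaws) have already been made; the function and variable names are ours, not part of the tool.

```python
def amstar2_overall(critical_flaws: int, noncritical_flaws: int) -> str:
    """Map counts of critical and noncritical AMSTAR 2 flaws to an overall rating."""
    if critical_flaws > 1:
        return "critically low"  # more than one critical flaw, with or without noncritical flaws
    if critical_flaws == 1:
        return "low"             # one critical flaw, with or without noncritical flaws
    if noncritical_flaws > 1:
        return "moderate"        # no critical flaws, more than one noncritical flaw
    return "high"                # no or one noncritical flaw

# Examples
print(amstar2_overall(0, 1))  # high
print(amstar2_overall(0, 3))  # moderate
print(amstar2_overall(1, 5))  # low
print(amstar2_overall(2, 0))  # critically low
```

Note that the rule is hierarchical: critical flaws dominate, so a review with one critical and many noncritical flaws is still rated low, not critically low.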

Data Synthesis
For the second phase, reanalysis by meta-analysis was not performed [27]. Using SAS software (SAS Institute Inc) and taking the included review as the unit of analysis, we performed a descriptive analysis that aimed to summarize the characteristics of the implemented knowledge products, the implementation strategies used, the outcomes measured, and the levels of evidence reported. We summarized the data as numbers and percentages for categorical variables and as means and SDs or medians and IQRs for continuous variables. Counts were performed overall and then stratified by methodological quality score (high, moderate, low, and critically low). We grouped technological health interventions and decision support tools together because the implemented clinical decision support tools were electronic or computerized decision support systems. For reviews in which the level of evidence of outcomes was measured and reported, we summarized the reported level of evidence for each implementation outcome by the implementation strategy used and by the specific single knowledge product implemented, using the number of outcomes as the unit of analysis.
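The descriptive analysis was run in SAS on the 81 extracted reviews; purely as an illustration of the kind of count-and-percentage summary described above (overall and stratified by quality score), the same tabulation can be sketched in Python with hypothetical records. All names and values below are invented for the example.

```python
from collections import Counter

# Hypothetical extraction records (one per included review); the real analysis
# used SAS with the included review as the unit of analysis.
reviews = [
    {"quality": "high", "product": "guideline"},
    {"quality": "low", "product": "guideline"},
    {"quality": "critically low", "product": "health intervention"},
    {"quality": "moderate", "product": "guideline"},
    {"quality": "low", "product": "health intervention"},
]

# Overall counts and percentages for a categorical variable ...
overall = Counter(r["product"] for r in reviews)
pct = {k: round(100 * v / len(reviews), 1) for k, v in overall.items()}

# ... and the same counts stratified by methodological quality score.
stratified = Counter((r["product"], r["quality"]) for r in reviews)

print(pct)                               # {'guideline': 60.0, 'health intervention': 40.0}
print(stratified[("guideline", "low")])  # 1
```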

Search and Selection Process
Our database search identified 11,101 records, of which 6915 (62.29%) titles and abstracts were screened after removing duplicates. Among these 6915, a total of 428 (6.19%) full reports were screened for eligibility, after which 81 (18.9%) admissible SRs remained (Figure 1). The reasons for the exclusion of each examined full report are provided in Multimedia Appendix 3. Table 1 presents the general characteristics of the included reviews, overall and by methodological quality score (reviews: N=81).
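Each percentage in the selection flow above is taken relative to the previous stage; a minimal arithmetic sketch makes the chain explicit (stage names are paraphrased from Figure 1):

```python
# PRISMA-style selection flow: each percentage is relative to the previous stage.
stages = [
    ("records identified", 11101),
    ("titles/abstracts screened after deduplication", 6915),
    ("full reports assessed for eligibility", 428),
    ("systematic reviews included", 81),
]

for (_, prev), (name, n) in zip(stages, stages[1:]):
    print(f"{name}: {n} ({100 * n / prev:.2f}% of previous stage)")
```

Running this reproduces 62.29% and 6.19% exactly, and 18.93% for the final stage (reported as 18.9% in the text).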
We compared the proportions of implementation strategies used when the population was health care professionals alone versus health care professionals and patients. Education meetings and academic detailing seemed to be used more often when the population was composed of health care professionals alone, although the differences were not statistically significant (Table 3).

Reported Effectiveness of Implementation Strategies and Level of Evidence of Outcomes in the Included Reviews
We synthesized the evidence by combining information on the reported level of evidence for the outcomes measured among single knowledge products implemented, by the implementation strategy used (Table 4). For SRs in which single knowledge products were implemented (56/81, 69%), the level of evidence was reported in 10 SRs for 62 specified outcomes (Table 4). Of these, 50 outcomes were related to the implementation of clinical practice guidelines in 6 SRs [74,77,83,91,110,114], 5 outcomes to management and behavioral or pharmacological health interventions in 3 SRs [60,67,103], and 7 outcomes to health technology interventions or decision support tools in 1 SR [47] (Table 4).
Table 4 (fragment). Level of evidence by knowledge product, implementation strategy (a), and category of outcome (b,c):

Clinical practice guidelines (n=50 outcomes)
• Continuous quality improvement (n=10)
• Monitoring the performance of the delivery of health care (n=3): adoption (n=3)

Clinical practice guidelines (n=1)
• Moderate (n=1): implementation costs (n=1)

(a) Categories are not mutually exclusive. (b) Positive outcome (eg, increase in adoption and increase in knowledge). (c) The same review may have implemented 1 type of single knowledge product (eg, clinical practice guidelines) but used different specific practices (eg, general obstetric care guidelines and emergency obstetric care guidelines). Although these practices may report the same category of implementation outcome (eg, adoption), if they presented and reported different levels of evidence (eg, low for general obstetric care guidelines and moderate for emergency obstetric care guidelines), their outcomes were extracted and analyzed separately.

Principal Findings
In this paper, we report the results of an SR of SRs, thus providing a detailed portrait of (1) the knowledge products or innovations implemented in primary health care, (2) the implementation strategies used by health care professionals in primary care, and (3) implementation outcomes evaluated as well as their reported level of evidence in primary care.
The findings of this review will be used to inform future SRs of RCTs on the effectiveness of implementation strategies for specific knowledge products.
In this review, which summarized a total of 81 SRs, most (56/81, 69%) of the included SRs implemented only 1 type of knowledge product (single), the majority of which were clinical practice guidelines or health interventions (of a management, behavioral, or pharmacological nature). Implementation strategies commonly combined education-based strategies (material distribution, meetings, and outreach), reminders, and audit and feedback. Improvement in the adoption of knowledge products was the most commonly measured outcome.
Education-based strategies, audits, feedback, and reminders were mainly used to improve the adoption of clinical practice guidelines and health interventions related to management, behavior, or pharmacology. In contrast, reminders and audit and feedback were used to improve the acceptability and implementation costs of health technology interventions. The reported effectiveness of these strategies was of a high or moderate level of evidence in a few cases and of a low or very low level of evidence in most cases.

Comparison With Prior Work
Clinical practice guidelines; management, behavioral, or pharmacological health interventions; and health technology interventions and decision support tools have been developed to improve clinical practice and patient health outcomes. Despite their comparable effectiveness, their level or degree of implementation varies widely. For instance, as seen in this review, health technology interventions and decision support tools appear to be less implemented or less frequently reported. This does not mean that they are less developed than other knowledge products; rather, they are probably less commonly addressed in formal research or possibly less known to end users. These interventions, generally delivered in mobile-based or computer-based formats, are created to accelerate the accessibility and use of KT interventions. A decade ago, such interventions were considered new in the health domain, and it was reasonable that they were minimally implemented [65]. Currently, it is unclear why this situation persists, when health technology interventions and decision support tools are generally recognized as important aspects of care and the way of the future. The reasons may lie at the policy-making and funding level (eg, health technologies are often short-term projects with no long-term vision; policies and regulations may be nonexistent or change unpredictably; financial constraints such as affordability; lack of infrastructure such as office space, supplies, and equipment; limited human resource availability; and low digital literacy) [121,122] or at the implementation level (eg, unawareness of the technology or limited perceived usefulness, ie, acceptability) [121,122]. Finally, barriers may differ across settings and cultures [121,122].
The predominance of education-based, reminder, and audit and feedback implementation strategies suggests that they were prioritized based on existing barriers to and facilitators of the implementation of the aforementioned knowledge products. In fact, some of the most recent systematic and scoping reviews on the topic highlighted a lack of provider awareness and knowledge of the existence of guidelines, as well as unfavorable attitudes toward them [123-125], in response to which educational and audit and feedback strategies were judged to be suitable [123,125]. A lack of access to guidelines and the limited time available to providers were also mentioned [124,125], calling for the use of decision support systems or reminders [125]. However, it is important to know whether these strategies are effective in implementing knowledge products. The included SRs showed effects in all directions: sometimes consistently positive or negative, and sometimes inconsistent, depending on factors such as single versus combined strategies [41,50,71,100] or the type of comparator [110]. For example, in a study by Al Zoubi et al [41], single educational strategies appeared to have a small effect, whereas multifaceted strategies combining educational strategies with other types of strategies, such as reminders, appeared to be more effective, although inconsistently. Kovacs et al [81] found the opposite result, showing that a single intervention was more effective. Others found that effectiveness may depend on the format in which education strategies are delivered, for example, by multimedia and computers [96].
Adoption, which is also referred to as "uptake or utilization," is "the intention, initial decision, or action to try or employ an innovation or evidence-based practice" [34]. This outcome occurs early or in the middle of the implementation process, is preceded by acceptability and appropriateness, and occurs at the same time as feasibility, followed by fidelity, implementation costs, penetration, and sustainability [34]. All proximal and distal implementation outcomes are important for measurement. The ongoing focus of the literature on the proximal outcome of adoption is more easily understood for recently introduced health technology interventions and strategies but is more difficult to explain when traditional strategies, such as education, are predominantly used.
Regarding the level of evidence of effectiveness, very few SRs evaluated it, as most reviews were narrative and their authors were unable to perform meta-analyses owing to high heterogeneity. Among the few reviews that did assess the level of evidence, most outcomes scored a low or very low grade. This makes it difficult to recognize potentially effective strategies and calls for more methodologically strong SRs to obtain reliable conclusions on the topic.

Strengths and Limitations
One strength of this SR is its broad objective, which included all EPOC strategies used to implement a variety of health knowledge products, with consideration given to the different implementation outcomes. No type of health care provider was excluded, and even though our target was primary health care, most included SRs covered both primary and secondary health care settings. We did not target any particular health area. We performed an extensive search and included both Cochrane and non-Cochrane reviews; it has been estimated that including only Cochrane reviews may lead to a loss or change in a median of 31% of the outcome data [126]. The objective of this phase was to characterize rather than to measure effectiveness. Together, these SRs offer a large database of 81 reviews for future projects dealing with individual RCTs. Therefore, we believe that our conclusions can be applied to many different contexts.
Few of the included reviews were of high or moderate methodological quality. The criteria lowering the scores may be linked to the unavailability of reporting guidelines at the time of publication or to nonadherence to those guidelines when they were available. As in many other overviews, we used the AMSTAR tool and, as suggested, did so in a dual independent team format with a consensus process [25]. It is also acceptable not to exclude reviews on the basis of methodological quality when the aim is to produce a detailed picture of a topic [127], as in our case.
With regard to the quality and completeness of the extracted information, in many of the included SRs, the categories of knowledge products, implementation strategies, and outcomes were not reported as per the standard taxonomies used, thereby requiring us to recategorize. This may have introduced some misclassification of information. We addressed this issue and its potential impact on our conclusion by piloting our data extraction process and reaching a consensus for all disagreements (by reviewing the discordant information together).
In the field of overviews, overlapping occurs when one primary study is included in more than one review or when more than one review addresses the same topic [128]. We cannot guarantee that our review is free of overlapping issues. In addition, we did not evaluate the quality of evidence of the outcomes of the included reviews, as we did not intend to demonstrate the effectiveness of the interventions. These 2 issues will be addressed in future projects on the effectiveness of strategies using the design of SRs of individual RCTs. For the comprehensiveness of our strategy, we searched 6 key databases in the field of intervention studies. In addition, we searched the reference lists of all included SRs. However, gray literature was not searched. For efficiency, we planned to update the search strategy in phase 3 of the project to avoid missing any recently published RCTs on the effectiveness of implementation strategies; gray literature could be sought in that phase.

Conclusions and Future Directions
Through this SR of SRs, we demonstrated that in the field of implementation, clinical practice guidelines and management, behavioral, or pharmacological health interventions are the most commonly implemented knowledge products, mainly through educational, reminder, and audit and feedback implementation strategies. However, the literature still focuses on the proximal outcomes of improving the adoption of knowledge products, generally with a limited level of evidence.
This SR aimed to provide insight into which knowledge products are frequently implemented, how they are implemented (implementation strategies), and the implementation outcomes measured, rather than providing information on the effectiveness of the implementation strategies. Therefore, in this step, we do not suggest changes in practice; rather, this review provides a good foundation for planning future research on effectiveness. Only detailed and contextualized information on knowledge products and implementation strategies will lead to changes in practice.
We constructed a database of SRs that may be used to strengthen the methodology for the SR of RCTs to overcome the issue of the variable effectiveness of commonly used implementation strategies, such as educational, reminder, and audit and feedback strategies. Future well-designed SRs of RCTs should fully describe the implementation strategy attributes of dose and intensity, format and duration of delivery, geographic location of interventions, and so on. In addition, qualitative studies and reviews involving a variety of collaborators from different domains and levels should be conducted to better understand the barriers and facilitators that contribute to why health technology interventions remain poorly implemented. Future implementation research should explore the entire cascade of implementation outcomes, including proximal and distal outcomes.