Do evidence summaries increase health policy‐makers' use of evidence from systematic reviews? A systematic review

Plain language summary

Policy briefs make systematic reviews easier to understand, but there is little evidence of impact on the use of study findings

It is likely that evidence summaries are easier to understand than complete systematic reviews. Whether these summaries increase the use of evidence from systematic reviews in policymaking is not clear.

What is this review about?
Systematic reviews are long and technical documents that may be hard for policymakers to use when making decisions. Evidence summaries are short documents that describe research findings in systematic reviews. These summaries may simplify the use of systematic reviews. Other names for evidence summaries are policy briefs, evidence briefs, summaries of findings, or plain language summaries. The goal of this review was to learn whether evidence summaries help policymakers use evidence from systematic reviews. This review also aimed to identify the best ways to present the evidence summary to increase the use of evidence.

What is the aim of this review?
This review summarizes the evidence from six randomized controlled trials that assessed the effectiveness of systematic review summaries on policymakers' decision making, or the most effective ways to present evidence summaries to increase policymakers' use of the evidence.

What are the main findings of this review?
This review included six randomized controlled studies. A randomized controlled study is one in which the participants are divided randomly (by chance) into separate groups to compare different treatments or other interventions. This method of dividing people into groups means that the groups will be similar and that the effects of the treatments they receive will be compared more fairly. At the time the study is done, it is not known which treatment is the better one. The researchers who did these studies invited people from Europe, North America, South America, Africa, and Asia to take part in them.
Two studies looked at “policy briefs,” one study looked at an “evidence summary,” two looked at a “summary of findings table,” and one compared a “summary of findings table” to an evidence summary. None of these studies looked at how policymakers directly used evidence from systematic reviews in their decision making, but two studies found that there was little to no difference in how they used the summaries. The studies relied on reports from decision makers. These studies included questions such as, “Is this summary easy to understand?” Some of the studies looked at users' knowledge, understanding, beliefs, or how credible (trustworthy) they believed the summaries to be. There was little to no difference in the studies that looked at these outcomes. Study participants rated the graded entry format higher for usability than the full systematic review. The graded entry format allows the reader to select how much information they want to read. The study participants felt that all evidence summary formats were easier to understand than full systematic reviews.


What do the findings of this review mean?
Our review suggests that evidence summaries help policymakers to better understand the findings presented in systematic reviews, and summaries should therefore be developed to make systematic review evidence easier for policymakers to understand. However, there is currently very little evidence on the best way to present systematic review evidence to policymakers.

How up to date is this review?
The authors of this review searched for studies through June 2016.

Executive summary/Abstract
Background
Systematic reviews are important for decision makers. They offer many potential benefits but are often written in technical language, are long, and lack contextual details, which makes them hard to use in decision-making. Strategies to promote the use of evidence by decision makers are required, and evidence summaries have been suggested as a facilitator. Evidence summaries include policy briefs, briefing papers, briefing notes, evidence briefs, abstracts, summary of findings tables, and plain language summaries. Many organizations develop and disseminate systematic review evidence summaries for different populations or subsets of decision makers. However, evidence on the usefulness and effectiveness of systematic review summaries is lacking. We present an overview of the available evidence on systematic review evidence summaries.

Objectives
This systematic review aimed to 1) assess the effectiveness of evidence summaries on policymakers' use of the evidence and 2) identify the most effective summary components for increasing policymakers' use of the evidence.

Search methods
We searched several online databases (Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, Global Health Library, Popline, Africa-wide, Public Affairs Information Service, Worldwide Political Science Abstracts, Web of Science, and DFID), websites of research groups and organizations which produce evidence summaries, and reference lists of included summaries and related systematic reviews. These databases were searched in March-April 2016.

Selection criteria
Eligible studies included randomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-after (CBA) studies, and interrupted time series (ITS) studies. We included studies of policymakers at all levels as well as health system managers.
We included studies examining any type of "evidence summary", "policy brief", or other product derived from systematic reviews that presented evidence in a summarized form. These interventions could be compared to active comparators (e.g. other summary formats) or no intervention.
The primary outcomes were: 1) use of systematic review summaries in decision-making (e.g. self-reported use of the evidence in policy-making or decision-making) and 2) policymaker understanding, knowledge, and/or beliefs (e.g. changes in knowledge scores about the topic included in the summary). We also assessed perceived relevance, credibility, usefulness, understandability, and desirability (e.g. format) of the summaries.

Results
Our database search combined with our grey literature search yielded 10,113 references after removal of duplicates. From these, 54 were reviewed in full text and we included 6 studies (reported in 7 papers, 1661 participants) as well as protocols from 2 ongoing studies. Two studies assessed the use of evidence summaries in decision-making and found little to no difference in effect. There was also little to no difference in effect for knowledge, understanding or beliefs (4 studies) and perceived usefulness or usability (3 studies). Summary of Findings tables and graded entry summaries were perceived as slightly easier to understand compared to complete systematic reviews. Two studies assessed formatting changes and found that for Summary of Findings tables, certain elements were preferred, such as reporting study event rates and absolute differences, and avoiding the use of footnotes. No studies assessed adverse effects. The risk of bias in these studies was mainly assessed as unclear or low; however, two studies were assessed as being at high risk of bias for incomplete outcome data due to very high rates of attrition.

Authors' conclusions
Evidence summaries may be easier to understand than complete systematic reviews. However, their ability to increase the use of systematic review evidence in policymaking is unclear.

Background
Policy makers are increasingly utilizing systematic reviews for decision-making (Lavis et al., 2006; Petticrew et al., 2004; Welch et al., 2012). The shift from single studies has occurred because systematic reviews offer additional benefits to policymakers, such as a lower risk of bias and greater confidence in the results than single studies provide (Lavis et al., 2006a). However, health policies are often made without the use of research evidence (Oxman et al., 2009). Barriers to the use of research, specifically systematic reviews, in policymaking have been identified. Systematic reviews are often written using technical language, lack important contextual information, and can be quite long. Because of this, research groups and organizations have begun creating summaries of the evidence. Strategies to promote the use of research evidence by policy-makers are required, and evidence summaries have been suggested as a facilitator of evidence-informed decision-making (Bunn & Sworn, 2011).

The problem, condition or issue
There are several organizations that develop and disseminate evidence summaries for different populations or subsets of decision makers. For example, within the Cochrane Collaboration, the Evidence Aid Project was developed in response to the 2004 Indian Ocean Tsunami as a means of providing decision makers and health practitioners 'on the ground' with summaries of the best available evidence needed to respond to emergencies and natural disasters (Kayabu et al., 2013). SUPPORT Summaries were developed for policy-makers in low- and middle-income countries (LMICs) making decisions about maternal and child health programs and interventions (www.support-collaboration.org). Health Systems Evidence provides a one-stop shop for systematic reviews related to health systems, including policy briefs for policymakers and other stakeholders (www.healthsystemsevidence.org/). Other examples include Cochrane Summaries (http://www.cochrane.org/evidence), Communicate to vaccinate (COMMVAC, http://www.commvac.com), Rx for change (www.cadth.ca/resources/rx-forchange), and Harvesting Evidence (http://www.harvesting-evidence.org). A document analysis conducted by Adam et al. identified 16 organizations involved in the production of summaries for policymakers in LMICs (Adam et al., 2014). A needs assessment conducted by Evidence Aid found that while complete systematic reviews were perceived to be useful for workers 'on the ground' (i.e. non-governmental organizations (NGOs) and health care providers), summaries containing contextual information were considered helpful for decision-making about the applicability of the findings to the local setting (Kayabu et al., 2013).

The intervention
Evidence summaries of systematic reviews are identified using many different terms including evidence summaries, policy briefs, briefing papers, briefing notes, evidence briefs, abstracts, summary of findings, and plain language summaries (Adam et al., 2014). They are intended to assist decision makers in understanding the evidence and encourage its use in their decision-making. These user-friendly formats highlight the policy-relevant information and allow policymakers to quickly scan the document for relevance (Lavis et al., 2005a; Lavis et al., 2006a). The various products have some differences. For example, abstracts, evidence summaries, and summary of findings tables usually summarize evidence from a single systematic review, while policy briefs often utilize evidence from one or more systematic reviews and may use additional sources to provide contextual or economic information (Adam et al., 2014).

How the intervention might work
Systematic review summaries consist of summarized evidence from systematic reviews intended to assist policy-makers in understanding the systematic review evidence and using it in their decision-making. These interventions may include structured summaries (e.g. SUPPORT summaries, Evidence Aid), policy briefs based on systematic reviews (e.g. Health Systems Evidence), and plain language summaries, structured abstracts, and Summary of Findings tables (e.g. Cochrane reviews). These may be provided in print or web-based formats and are aimed at policy-makers and other decision makers making decisions about health. The summaries may include information about the context in which the studies were conducted, the applicability of the results (e.g. SUPPORT Summaries comment on the relevance of the findings for disadvantaged communities), as well as the findings, methods, and conclusions.
Evidence suggests that policy makers are more likely to use systematic reviews when the evidence is provided in a timely manner and aligns with the interests, values, and political goals of policymakers (Lavis et al., 2005a; Lavis et al., 2005b). Evidence summaries may increase the use of systematic review evidence by policymakers because they address these factors by: 1) providing "user friendly" and plain language summaries of the evidence, 2) providing evidence "at-a-glance" with links to the complete systematic reviews, and 3) focusing on policy-relevant topics (Adam et al., 2014; Lavis et al., 2009). In addition, they may improve access to systematic reviews, because most organizations make summaries freely available through online databases and repositories (Adam et al., 2014).

Why it is important to do the review
Interest in the production and use of systematic review summaries is increasing, as evidenced by the growing number of organizations developing and disseminating them (Adam et al., 2014). However, evidence on the usefulness and effectiveness of systematic review derivatives is lacking. Previously conducted systematic reviews have looked at interventions to increase the use of systematic reviews among decision makers. However, these have focused on the use of complete systematic reviews in decision-making, and none focused specifically on derivatives of systematic reviews. For example, one systematic review examined the effectiveness of interventions for improving the use of systematic reviews in decision-making by health system managers, policymakers, and clinicians. This review included eight studies, and the authors concluded that information provided as a single, clear message may improve evidence-based practice, but increasing awareness and knowledge of systematic review evidence might require a multi-faceted intervention. Similarly, another systematic review assessed interventions encouraging the use of systematic reviews by health policymakers and managers. Four studies were included, and the authors concluded that future research should identify how systematic reviews are accessed and the formats used to present the information. A systematic review by Wallace et al. found that a description of benefits as well as harms and costs, and a graded entry approach (in which evidence is available as a 1-page summary, 3-page summary, or 25-page full report), facilitated systematic review use by policymakers (Wallace et al., 2012). Similarly, a systematic review by Oliver et al. assessed barriers and facilitators to the use of research by policymakers; they found that access to high-quality, relevant research as well as collaboration between researchers and policymakers were the most important factors for increasing research use.
In addition, we focused on studies of evidence summaries for health policy-makers and health system managers making decisions on behalf of a large jurisdiction or organization but did not include studies related to decision-making for an individual person or patient.

Objectives
The objectives of this review were to: 1) assess the effectiveness of evidence summaries on policy-makers' use of the evidence and; 2) identify the most effective summary components for increasing policy-makers' use of the evidence.

Methods
The protocol for this review was published in the Campbell Library on 3 August 2017.

Types of studies
Eligible studies included randomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-after (CBA) studies, and interrupted time series (ITS) studies.

Types of participants
We included studies whose participants were health policymakers at all levels. We defined policymakers as health ministers and their political staff, civil servants, and health system managers, and health-system stakeholders as civil society groups, patient groups, professional associations, non-governmental organizations, donors, and international agencies (Lavis et al., 2004). We included populations involved in the development of clinical practice guidelines. To be included, the population had to be responsible for decision-making on behalf of a jurisdiction or organization; we did not include studies related to decision-making for an individual person or patient (Lavis et al., 2004). For the purposes of this review, we defined 'health policy-makers' as those responsible for making decisions about healthcare policies and programs, which are those intended to restore or maintain physical, mental, or emotional wellbeing (WHO, 2017). We included studies with mixed participants as long as some participants were directly involved in health policy-making.

Types of interventions
We included studies of interventions examining any type of "friendly front end", "evidence summary", "policy brief", or other product derived from systematic reviews or from guidelines based on systematic reviews that presents evidence in a summarized form to policy-makers and health system managers. Interventions had to include a summary of a systematic review and be actively "pushed" to target users, meaning that the summary had to be disseminated or shown to the study participants via email, mail, or other means. We included any comparison, including active comparators (e.g. other summary formats) or no intervention.

Types of outcome measures
Primary Outcomes
1. Use of systematic review derivative products in decision-making (e.g. self-reported use of the evidence in policy-making or decision-making, as well as self-reported access of research, appraisal of research, or commissioning of further research within the decision-making process) (Redman et al., 2015). We defined "use" as instrumental, conceptual, or symbolic use of research in decision-making. Instrumental use is the direct use of research, conceptual use includes using research to gain an understanding of a problem or intervention, and symbolic use is the use of research to confirm a policy or program already implemented (Amara et al., 2004).
2. Understanding, knowledge, and/or beliefs (e.g. changes in knowledge scores about the topic included in the summary) as reported by the authors of the included studies (Amara et al., 2004).

Secondary Outcomes
• Perceived relevance of systematic review summaries
• Perceived credibility of systematic review summaries
• Perceived usefulness of systematic review summaries
• Perceived understandability of systematic review summaries
• Perceived desirability (e.g. format) of systematic review summaries
Some studies may use different terms to describe these outcomes; therefore, our team assessed each outcome and categorized it according to the list above. Studies were not excluded on the basis of outcomes. Outcome measures could include Likert scales to assess understandability, credibility, likelihood of using a summary in decision-making, or other outcomes; knowledge scores or responses to questions regarding summary content; and preferred format styles.
Two reviewers independently screened titles and abstracts to identify relevant studies meeting the pre-specified inclusion criteria. The full text of each potentially included study was then screened independently by two authors.

Electronic searches
Information Specialists (APA, HC) developed and translated the search strategy using the PRESS Guideline (Sampson et al., 2008). We expected that the indexing for eligible studies would be poor. Therefore, our search strategy was intentionally broad and we were prepared to retrieve a high number of citations for a low yield of included studies.
We used the search strategy developed by Perrier et al. and Murthy et al. for their systematic reviews of interventions to encourage the use of systematic reviews by health managers and policymakers to inform our search. This search included the following databases: Medline, EMBASE, CINAHL, and the Cochrane Central Register of Controlled Trials. We expanded the Perrier search by including additional databases, as suggested by John Eyres of the International Initiative for Impact Evaluation (3ie) and the Campbell International Development Review Group. These included Global Health Library (from WHO), Popline, Africa-wide, Public Affairs Information Service, Worldwide Political Science Abstracts, Web of Science, and DFID (Research for Development database). The search strategies were translated using each database platform's command language and appropriate search fields. Both controlled vocabulary terms and text-words were used for the search concepts of policymaking, evidence synthesis, systematic reviews, knowledge translation, and dissemination. No date restrictions were used. The complete MEDLINE search strategy is available in online supplement 1. All databases were searched in March-April 2016.

Searching other resources
We identified and searched websites of research groups and organizations which produce evidence summaries, building on the list of organizations identified by Adam et al. (Adam et al., 2014). We searched for unpublished studies evaluating the effectiveness of systematic review derivatives in increasing policymakers' understanding (e.g. Health Systems Evidence, the Canadian Agency for Drugs and Technologies in Health, SUPPORT Summaries). A complete list of grey literature sources is provided in online supplement 2; these sources were searched in June 2016. We also checked the reference lists of included studies and related systematic reviews to identify additional studies. We contacted researchers to identify ongoing and completed work. The results of the search are reported in Figure 1.

Selection of studies
References identified through database and grey literature searching were screened independently, in duplicate by two members of the research team using Covidence software.

Data extraction and management
The data extraction form was pre-tested, and included factors related to the population, intervention, comparison, and outcomes. Data extraction was completed by two authors independently using a structured Excel sheet. Disagreements were resolved by discussion and with a third member of the research team when necessary. Data were extracted for the following:

Assessment of risk of bias in included studies
The methodological quality of included studies was assessed using the risk of bias tool for randomised trials from the Cochrane Handbook. If we had identified eligible ITS, CBA, or non-randomised studies, we planned to use the Effective Practice and Organisation of Care (EPOC) Review Group criteria for ITS and CBA studies (Ballini et al., 2011; Higgins & Green, 2011) and the Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) tool (Sterne et al., 2014; M. G. Wilson et al., 2015).
We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the quality of evidence for the outcomes reported in this review (Guyatt et al., 2011).

Dealing with missing data
We attempted to contact the authors of studies with missing data and the authors of ongoing studies.

Assessment of heterogeneity
Our included studies assessed interventions and reported outcomes that were too different to pool; therefore, meta-analysis was not possible. If it had been, we planned to explore heterogeneity using forest plots and the I² statistic according to guidance of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al.). We were also thus unable to conduct planned meta-regression to assess the role of mediating factors, such as: target audience of summary (e.g. focused on specific local context, generic summary); type of decision maker (e.g. federal policy-maker versus hospital administrator); and components of friendly front end (e.g. bulleted list, text, summary of findings table, causal chain).
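For reference, the I² statistic mentioned above has a standard definition (this formula is not taken from the review itself): it expresses the proportion of total variation across study estimates that is due to heterogeneity rather than chance, and is derived from Cochran's Q over k studies:

```latex
% Cochran's Q with k studies has df = k - 1 degrees of freedom.
% I^2 is truncated at zero when Q < df.
I^{2} = \max\!\left(0,\; \frac{Q - \mathrm{df}}{Q}\right) \times 100\%
```

Values of I² near 0% suggest that pooling would have been reasonable; values above roughly 50% are conventionally read as substantial heterogeneity.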

Assessment of reporting biases
We did not assess publication bias using funnel plots because only six studies were included.

Data synthesis
We planned to synthesize the results using meta-analysis if possible, but, as stated in the protocol, we planned to present a narrative summary if the results could not be pooled. The interventions assessed in the included studies were too different with respect to their methods, populations, and outcomes to meaningfully pool results across studies.

Subgroup analysis and investigation of heterogeneity
Meta-analysis was not possible; therefore, we were unable to conduct subgroup analyses or assess heterogeneity.

Sensitivity analysis
We planned to assess the impact of including studies assessed as high risk of bias, or studies with unit of analysis errors that could not be reanalysed, but meta-analysis was not possible, so no sensitivity analyses were conducted.

Results of the search
The search strategy yielded 11,733 references (10,113 after removal of duplicates). Figure 1 depicts the results of the search and screening. During the title and abstract screening process we excluded 10,059 references for failing to meet one or more of our inclusion criteria. The remaining 50 references were reviewed in full text, plus three additional references identified through reference-list checking and one additional reference identified through grey-literature searching. We excluded 45 studies that did not meet our eligibility criteria (see online supplement 1).

Description of the interventions
Details of the different evidence summary formats are reported in table 3. Briefly, two studies assessed policy briefs (Brownson et al., 2011; Masset et al., 2013), one assessed an "evidence summary" (Dobbins et al., 2009), two assessed different formats of summary of findings tables, which are distinct table formats presenting the main findings of the review (absolute and relative effects for each important outcome) and the quality of the evidence (Carrasco-Labra et al., 2016; Vandvik et al., 2012), and one compared a summary of findings table alone to a summary of findings table as part of a "graded entry" evidence summary (a short one-page summary, then a narrative report, followed by access to the complete systematic review) (Opiyo et al., 2013). Two studies assessed evidence summaries which included recommendations for programs or policies (Brownson et al., 2011; Dobbins et al., 2009), while the others did not specify whether recommendations were provided within the summary (Masset et al., 2013; Opiyo et al., 2013; Vandvik et al., 2012).
Carrasco-Labra et al. compared a standard format summary of findings table to a new format that presented some of the data in a different way as well as provided supplementary data (Carrasco-Labra et al., 2016). All the other included studies tested evidence summary formats using multiple arms. Brownson et al. compared four versions of a policy brief: a state-level data-focused brief, a local-level data-focused brief, a story-focused brief with state-level data, and a story-focused brief with local-level data (Brownson et al., 2011). Dobbins et al. had three groups. The first had access to the online database, the second received targeted, tailored messages in addition to access to an online database, and the third group received the same intervention as the second group plus access to a full-time knowledge broker (Dobbins et al., 2009).
Masset et al., and the companion paper by Beynon et al., assessed three versions of a policy brief. The first was the standard policy brief, the second was the same policy brief with an additional commentary by a sector expert (the director of the institution that conducted the review), and the third was the same except that the commentary was attributed to an unnamed research fellow (Beynon et al., 2012; Masset et al., 2013).
The study by Opiyo et al. compared a systematic review alone to a systematic review with a summary of findings table, and to a 'graded-entry' format, which included a short one-page summary and a contextually framed narrative report with an interpretation of the main findings and conclusions (including a summary of findings table), followed by access to the full systematic review (Opiyo et al., 2013).
Finally, the study by Vandvik et al. compared two versions of summary of findings tables that differed in four formatting elements (the placement of additional information, the placement of the overall rating for quality of evidence, the study event rates, and the absolute risk differences) (Vandvik et al., 2012). These were labelled as Table A and Table B. Table A contained additional information within the table, presented an overall assessment of the quality of evidence under a 'quality assessment' heading, and reported the absolute risk differences but not the study event rates. Table B presented the additional information in footnotes, included the overall quality of the evidence under a 'summary of findings' heading, and reported study event rates but not absolute risk differences.

Excluded studies
After title and abstract screening, 10,059 references were excluded because they did not meet our eligibility criteria. After reviewing 55 full-text references, an additional 46 studies were excluded. The details of these exclusions are included in online supplement 3.

Risk of bias in included studies
The summary of the Risk of Bias assessments is presented in Figure 2 and details are provided in online supplement 4.

Incomplete outcome data
Incomplete outcome data were assessed as low risk of bias for four studies (Carrasco-Labra et al., 2016; Dobbins et al., 2009; Opiyo et al., 2013; Vandvik et al., 2012) but high for two studies (Beynon et al., 2012; Brownson et al., 2011; Masset et al., 2013). These two studies had very high rates of attrition: Brownson et al. had an overall response rate of 35% (Brownson et al., 2011) and the Masset study had 50% attrition between baseline and first follow-up.

Knowledge of allocated interventions
Knowledge of allocated interventions was assessed as unclear for four of the studies (Beynon et al., 2012; Brownson et al., 2011; Dobbins et al., 2009; Masset et al., 2013; Opiyo et al., 2013). One study reported that panelists, data collection, and data analysis were blinded (Vandvik et al., 2012) and one reported that allocation was done in real time when the survey was completed; these were therefore assessed as low risk of bias (Carrasco-Labra et al., 2016).

Protection from contamination
Adequate protection from contamination was assessed as unclear for four studies. The Dobbins study included public health departments from across Canada, so little risk of contamination was expected (Dobbins et al., 2009), and Carrasco-Labra et al. reported that allocation was done in real time when participants completed the survey, leaving little risk of contamination (Carrasco-Labra et al., 2016); these two studies were assessed as low risk.

Selective outcome reporting
All studies were assessed as low risk of bias for selective outcome reporting.

GRADE
Most outcomes were assessed as moderate certainty of evidence using GRADE (Guyatt et al., 2011). Evidence was downgraded due to unclear risk of bias. 'Perceived credibility' was assessed as low certainty of evidence due to unclear risk of bias and because only one eligible study reported this outcome. The assessments are included in table 4.

Evidence of effectiveness
We generated a Summary of Findings table for this review (Table 4). This is a narrative summary of all studies assessing a particular outcome domain, grouped across different policy brief formats. We did not conduct a meta-analysis because the interventions and outcomes of the included studies were too different to combine.

Use of summaries in decision-making
Two studies assessed self-reported use of summaries in decision-making. First, Dobbins et al. assessed the change in global evidence-informed decision-making (EIDM), defined as the extent to which research evidence was considered in a decision, 18 months after the intervention. The authors found that the intervention had no significant effect on EIDM: the post-intervention change was -0.42 (95% CI: -1 to 0.26) for the group receiving targeted, tailored messages and -0.09 (95% CI: -0.78 to 0.60) for the group with access to a knowledge broker. This study also reported on evidence-based public health policies and programs, measured as the actual number of strategies, policies, and interventions for healthy body weight promotion among children that were implemented by the health department. For this outcome, the group that received the targeted, tailored messages had a statistically significant increase in evidence-based public health policies and programs (post-intervention change 1.67, 95% CI: 0.37 to 2.97) compared with no change for the group with access to a knowledge broker (-0.09, 95% CI: -0.78 to 0.60) (Dobbins et al., 2009).
The study by Brownson et al. asked policymakers how likely they would be to use the evidence summary in decision-making. On a 5-point Likert scale, where 1 is strongly disagree and 5 is strongly agree, there was little to no difference based on the type of policy brief (data-driven versus story-driven) (range 3.3 to 3.4). However, there were differences in self-reported likelihood of using the policy brief depending on the type of policymaker. Staff members reported being most likely to use the story-focused brief with state-level data (mean rating of 3.4, 95% confidence interval (CI) 3.0 to 3.9) and least likely to use the data-focused brief with state-level data (2.5, 95% CI 2.0 to 3.0). Legislators reported being most likely to use the data-focused brief with state-level data (4.1, 95% CI 3.6 to 4.6) and least likely to use the story-focused brief with state-level data (3.1, 95% CI 2.6 to 3.6) (Brownson et al., 2011).

Understanding, knowledge, and/or beliefs
Carrasco-Labra et al. found that respondents receiving the new summary of findings format had a higher proportion of correct answers for almost all questions. These included the ability to interpret footnotes (risk difference (RD) 7%, p=0.18), ability to determine risk difference (RD 63%, p<0.001), understanding of quality of evidence and treatment effect (RD 62%, p<0.001), understanding of the quality of evidence (RD 7%, p=0.06), and ability to quantify risk (RD 6%, p=0.06) (Carrasco-Labra et al., 2016). However, for one question, the ability to relate the number of participants and studies to outcomes, the group receiving the standard summary of findings scored slightly higher (RD -3%, p=1.0).
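As an aside on the metrics reported here: a "risk difference" (RD) between two groups is simply the difference in the proportions experiencing the outcome (in this case, answering correctly), and an odds ratio (OR) compares the odds of that outcome between groups. A minimal sketch, using made-up proportions rather than any study's actual data:

```python
def risk_difference(events_a, n_a, events_b, n_b):
    """Absolute difference in event proportions between two groups."""
    return events_a / n_a - events_b / n_b

def odds_ratio(events_a, n_a, events_b, n_b):
    """Ratio of the odds of the event in group A versus group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Hypothetical example: 80/100 correct answers with a new format
# versus 17/100 with a standard format (illustrative numbers only).
rd = risk_difference(80, 100, 17, 100)
print(f"RD = {rd:.0%}")  # RD = 63%
```

Note that a risk difference of 63% therefore means the proportion answering correctly was 63 percentage points higher in one group than the other.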
The Masset study examined changes in beliefs about the effectiveness of the intervention as well as the strength of the evidence included in the policy briefs. The authors found that the policy brief increased the number of participants who had an opinion about the strength of the evidence; for example, those who did not have an opinion at baseline formed an opinion based on the policy brief. The difference-in-difference coefficients indicate that the policy brief increased the percentage of participants with an opinion about the strength of the evidence by 20-25 percentage points (Masset et al., 2013). However, the intervention was less effective in changing participants' ratings of the strength of the evidence or the effectiveness of the intervention: the policy brief did not change the opinions of those who already had an opinion about the evidence and effectiveness at baseline (Masset et al., 2013).

The Opiyo study found little to no difference between interventions in the odds of correct responses to questions about the intervention. The adjusted odds ratio (OR) for the summary of findings table compared to the systematic review alone was 0.59 (95% CI 0.32 to 1.07), and for the graded entry format compared to the systematic review alone was 0.66 (95% CI 0.36 to 1.21); the confidence intervals for both estimates were wide and included the possibility of no difference. When comparing groups of participants, both the summary of findings tables and the graded entry formats slightly improved understanding for policymakers: the adjusted OR for the summary of findings table compared to the systematic review alone was 1.5 (95% CI 0.15 to 15.15) and for the graded entry format compared to the systematic review alone was 1.5 (95% CI 0.64 to 3.54) (Opiyo et al., 2013).

The Vandvik study found differences between table formats in participants' ability to correctly interpret the effect estimates (58% correct compared to 11% correct, p<0.0001) and the range in which the effect may lie (95% correct versus 54% correct, p<0.0001) (Vandvik et al., 2012).
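For clarity, the difference-in-difference coefficients reported by Masset et al. follow the standard estimator, which compares the change over time in the group receiving the brief to the change in the comparison group:

DiD = (ȳ_treated,post − ȳ_treated,pre) − (ȳ_control,post − ȳ_control,pre)

where ȳ is the mean outcome (here, the proportion of participants holding an opinion) in each group and period. Subtracting the control group's change nets out shifts that would have occurred without the intervention.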

Secondary outcomes:
Credibility of the summaries
Brownson et al. reported little to no differences in credibility for the different intervention formats (low certainty of evidence). Mean scores for perceived credibility ranged from 4.4 to 4.5 on a 5-point Likert scale in which 5 indicated "strongly agree" (Brownson et al., 2011). For different policymaker groups there were also little to no differences, with mean scores ranging from 4.2 to 4.5 for staff members, 4.3 to 4.7 for legislators, and 4.3 to 4.6 for executives (Brownson et al., 2011).

Perceived usefulness and usability of the summaries
The Carrasco-Labra study reported that the new summary of findings format was more accessible than the standard format (Carrasco-Labra et al., 2016). This was assessed using a 7-point scale for which 1 indicates "strongly disagree" and 7 indicates "strongly agree". Participants who received the new SOF format more often reported that it was easy to find the information about the effects (adjusted mean difference (MD) 0.4, SE 0.19, p=0.04, representing a 5.7% difference) and that the information was easy to understand (MD 0.5, SE 0.2, p=0.011, representing a 7.1% difference). The respondents also reported that the new format displayed results in a way that was more helpful for decision-making (MD 0.5, SE 0.18, p=0.011, 7.1% difference).
Opiyo et al. measured this outcome by assessing the 'value and accessibility' of each intervention. The graded entry format received a higher mean score than the systematic review alone (MD 0.52, 95% CI 0.06 to 0.99). The odds of a one-point increase for the graded entry format compared to the systematic review alone were 1.52 (95% CI: 1.06 to 2.20). There was little to no difference in effect when comparing the summary of findings table and the systematic review alone (MD -0.11, 95% CI -0.71 to 0.48). The odds of a one-point increase were 0.91 (95% CI: 0.57 to 1.46) (Opiyo et al., 2013).
Vandvik et al. reported that accessibility of information for quality of evidence as well as absolute and relative effects was rated similarly with no significant differences between groups (Vandvik et al., 2012). Only pooled results were presented.

Perceived understandability of the summaries
All the groups in the Brownson et al. study reported that the summaries were easy to understand (Brownson et al., 2011). Mean ratings ranged from 4.3 to 4.4 on a 5-point Likert scale. For the different policymaker groups, there was little to no difference with mean scores ranging from 4.3 to 4.5 for staff members and legislators and 4.1 to 4.4 for executives (Brownson et al., 2011).
The study by Opiyo et al. reported that 60% (95% CI: 48% to 73%) of participants found systematic reviews more difficult to read than the narrative reports included in the graded entry formats. Fifty-one percent of participants (95% CI: 38% to 63%) found systematic reviews easier to read, compared to 26% (95% CI: 15% to 37%) who found summary of findings tables easier. Fifty-three percent (95% CI: 41% to 65%) of participants preferred the narrative report format (graded entry) compared to 25% (95% CI: 14% to 36%) who preferred the full systematic review (Opiyo et al., 2013). The majority of participants interviewed reported that narrative formats were clearer, easier to read, and easier to understand, and some participants reported that summary of findings tables were difficult to understand as a stand-alone product.

Perceived desirability of the summaries
Two studies of different summary of findings formats assessed this outcome. One study found that participants preferred the summary of findings table which presented study event rates versus the one without them (median 1, interquartile range (IQR) 1, on a 1-7 scale in which 1 indicates 'strong preference for' and 7 indicates 'strong preference against'). Participants preferred the table with the absolute risk differences versus presentation of absolute risks only (median 2, IQR 3). Additional information embedded in the table was preferred over having it as footnotes (median 1, IQR 2). No significant differences were found for the placement of the column for 'overall quality of evidence' (either as the final column or before the effect size) or for the overall table format (Vandvik et al., 2012).
The other study found that overall, respondents preferred the new summary of findings format (MD 2.8, SD 1.6) compared to the standard format (Carrasco-Labra et al., 2016).
None of the included studies reported on policymakers' perceived relevance of the summaries.

Effect modifiers
One study found that organizational research culture influenced the effect of the intervention on evidence-based public health policies and programs: tailored, targeted messages were more effective than access to a database alone (healthevidence.org) or access to a knowledge broker when the organization valued research evidence in decision-making (Dobbins et al., 2009). In this study, organizational culture referred to characteristics such as the value placed on research in decision-making, the expectation that research would be used in decision-making, and staff training in research methods and critical appraisal, and was assessed on a seven-point scale.
The Carrasco-Labra study conducted regression analyses to assess potential effect modifiers, such as the number of years of experience in guideline development, as a researcher or as a health care provider. They found that the number of years of experience modified the effect on understanding by more than 10% (adjusted OR 1.83; 95% CI 0.91 to 3.67) for the questions about the ability to determine a risk difference. For the question assessing whether respondents understand the quality of evidence and treatment effect combined, the authors found that years of experience, familiarity with GRADE, and level of training modified the effect by more than 10% (adjusted OR 0.72; 95% CI 0.20 to 2.56) (Carrasco-Labra et al., 2016).

Summary of main results
This review has summarized the evidence on the use of systematic review summaries in policy-making, policy-makers' understanding of systematic review evidence, and different components and design features. Overall, the results suggest that evidence summaries may be easier to understand than complete systematic reviews. However, their ability to increase the use of systematic review evidence in policymaking is unclear because not enough evidence is available.
Six studies were included in this review. For our primary outcome, use of systematic review evidence in decision-making, one study found that targeted, tailored messages increased the number of evidence-based public health policies and programs. However, for the two studies that assessed the effect on decision-making or likelihood of using the summary in decision-making, there was little to no difference between intervention groups (Brownson et al., 2011; Dobbins et al., 2009). We assessed these results as having moderate certainty using GRADE.
For the secondary outcome of understanding, knowledge, and beliefs, there was little to no difference in effect and moderate certainty evidence in three of the four studies assessing this outcome (Masset et al., 2013;Opiyo et al., 2013;Vandvik et al., 2012). There was a slight increase in understanding for summary of findings tables and graded entry formats compared to systematic reviews alone. The fourth study found that participants provided with an alternate version of the summary of findings table had greater understanding (Carrasco-Labra et al., 2016).
For perceived desirability of summaries we found moderate certainty of evidence. One study found that the alternate version of the summary of findings table was preferred (Carrasco-Labra et al., 2016), and the other study found that certain formatting elements, such as study event rates and absolute risk differences, were preferred, as well as additional information provided in the table rather than in footnotes (Vandvik et al., 2012). One study found the alternate format to be more accessible than the standard format (Carrasco-Labra et al., 2016); however, the other study assessing formatting changes found little to no difference in effect for perceived usefulness (Vandvik et al., 2012).
For perceived usefulness and usability of the summaries, we found moderate certainty evidence. The graded entry summary was rated higher than a systematic review alone for usability (Opiyo et al., 2013). Summaries were perceived as easier to understand than systematic reviews (moderate certainty evidence) (Brownson et al., 2011; Opiyo et al., 2013). For perceived credibility of the summaries, there was little to no difference in effect between different versions of the policy brief (data-driven versus story-driven, local versus state-level data; moderate certainty evidence) (Brownson et al., 2011).

Overall completeness and applicability of evidence
We identified two protocols for ongoing studies, which is promising as the results of these studies will enhance the available evidence about the effectiveness of evidence summaries. We also identified other relevant studies assessing the effectiveness of systematic review derivatives that did not use an eligible study design (e.g. interviews or other methods without a control group). One of these studies was intended to be an RCT and process evaluation but was not eligible for our review because poor recruitment (only 15% of the planned sample) resulted in the termination of the trial. This demonstrates the difficulty of recruiting these types of participants. Recruitment for the process evaluation remained low, and the authors noted that those included are likely those already more interested in using systematic review derivatives. The authors noted that, for future RCTs, recruitment may be more successful if divisions rather than individuals are randomized, because policymaking is complex and often not completed at the individual level. Additionally, we identified other studies that focused not on policymakers but on clinicians (Laure Perrier et al., 2015) or the public. These studies demonstrated that evidence summaries can improve understanding of research evidence within these populations; however, use of evidence in decision-making was not assessed.

Quality of the evidence
We used the GRADE approach to assess the overall quality or certainty of the evidence included in this review. We assessed most of the evidence to be 'moderate' certainty. For perceived desirability of the summaries, we assessed the evidence as 'high' certainty.

Limitations and potential biases in the review process
Our primary outcome, policymakers' use of systematic review evidence in decision-making, is challenging to measure. Other studies have noted the inherent challenges in measuring this outcome because many factors contribute to decision-making and it is often difficult for an individual to identify which factors had a role in their final decision (Dobbins et al., 2009). Instead of determining the actual use of research in decision-making, studies assessed self-reported use of research or other outcomes, such as perceived credibility or relevance, since these may affect the likelihood of research use in decision-making.
Our primary outcome was policymakers' use of evidence, and we therefore restricted inclusion to studies that would provide a quantitative measure: RCTs, NRCTs, CBA studies, and ITS studies. We planned to extract qualitative data for these outcomes when provided, but only one study presented qualitative data, and five studies were excluded because they were qualitative studies. These studies may provide additional data regarding the credibility, desirability, relevance, understandability, and usefulness of evidence summaries, which would be helpful for understanding how and why evidence summaries could be useful in policymaking.
We did not assess the use of health behaviour theory. This may be an important concept for understanding the effectiveness of evidence summaries, and we therefore plan to include it in future updates of this review. Our review is limited by the indexing of studies in this area. To address this issue, we conducted a broad search using search strategies adapted from similar systematic reviews. Our search identified over 10,000 references but had a low yield of included studies. The methods used in the included studies were poorly reported. For example, only two studies adequately reported on random sequence generation or allocation concealment, which means that most studies have unclear risk of bias.

Agreements and disagreements with other studies or reviews
Previous systematic reviews have assessed the effectiveness of interventions for improving the use of systematic reviews in decision-making by health system managers, policymakers, and clinicians; interventions encouraging the use of systematic reviews by health policymakers and managers; and facilitators to the uptake of systematic review evidence (Wallace et al., 2012).
Murthy et al. assessed the effectiveness of interventions for improving the use of systematic reviews in decision-making by health system managers, policymakers, and clinicians. Similar to our findings, they found that evidence presented as a single, clear message may improve evidence-based practice. The review by Perrier et al. found little evidence on interventions that encourage the use of systematic reviews by policymakers and managers. The review by Wallace et al. assessed potential facilitators to the use of systematic review evidence by policy-makers and identified five potential factors: the perception that systematic reviews may be used for improving knowledge, research, and evidence-based medicine skills; content that assesses benefits, harms, and costs and is perceived as current, transparent, and timely; a 'graded entry' format; training in the use of systematic reviews; and peer-group support (Wallace et al., 2012). We did not assess facilitators to the use of systematic review evidence in our review; however, we did find that the 'graded entry' format may be easier to understand than complete systematic reviews.

Authors' conclusions
The interventions assessed in the studies included in our review are quite diverse, with a variety of outcome measures. We included a broad range of interventions to provide an overview of the evidence on systematic review derivative products. These products have important differences in design and source material. For example, a policy brief includes evidence from one or more systematic reviews as well as information from additional sources (Adam et al., 2014), whereas a summary of findings table reports results for a single systematic review. We chose to include all systematic review derivative products as there are limited studies on any single product type. We recognize that this creates a challenge for interpreting the results because the interventions were quite different. Therefore, we have provided a narrative summary of each study and presented an overview of the available evidence.

Implications for practice and policy
Overall, the studies included in our review suggest that evidence summaries may be easier to understand than complete systematic reviews. To facilitate the use of systematic review evidence by policymakers, reviews need to be perceived as high quality and to address a relevant research question. Systematic reviews and their derivative products should be developed in collaboration with their intended users.

Implications for research
Future studies should include an assessment of delivery strategies because the effectiveness of the systematic review derivative product in practice will be affected by policymakers' knowledge of and access to the summaries themselves. Our included studies suggest that evidence summaries have a small effect on improving knowledge and understanding and may be easier for policymakers to understand. However, we have very little evidence to inform the design of evidence summaries because we only found a handful of different formats (none the same), and there was little to no difference between formats when compared directly.
It is important to note that only two of the included studies compared the evidence summary to a full systematic review or access to a database of systematic reviews. The others compared different versions of evidence summaries and, in general, found little to no differences in the effects. Had these studies included systematic reviews as a control group, the results might have been different.
Future studies should ensure that derivative products are compared to complete systematic reviews. Qualitative methods may be appropriate for future evaluations to assess the effectiveness, credibility, desirability, understandability, usefulness, and relevance of evidence summaries. Similarly, future reviews may consider qualitative synthesis which may add a deeper level of understanding to our findings.
Additional research on the use of evidence summaries derived from systematic reviews is needed. Researchers should consider the primary goal of these derivative products, which is to increase the uptake of systematic review evidence, and aim to assess this in future studies. The production of evidence summaries and other systematic review derivative products is increasing; more evaluation of their effectiveness is therefore needed to ensure they have the desired effect of increasing the use of systematic review evidence in decision-making by policymakers and health system managers.
Tricco, A.C., Cardoso, R., Thomas, S.M., Motiwala, S., Sullivan, S., Kealey, M.R., Hemmelgarn, B., Ouimet, M., Hillmer, M.P., Perrier, L., et al. (2016). Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review. Implement Sci, 11:4.
Wallace, J., Byrne, C., et al. (2012). Making evidence more wanted: a systematic review of facilitators to enhance the uptake of evidence from systematic reviews and meta-analyses. Int J Evid Based Healthc, 10:338-346.
Wilson, M.G., Moat, K.A., Lavis, J.N. (2013). The global stock of research evidence relevant to health systems policymaking. Health Res Policy Syst, 11:32.
Yavchitz, A., Ravaud, P., Hopewell, S., Baron, G., Boutron, I. (2014). Impact of adding a limitations section to abstracts of systematic reviews on readers' interpretation: a randomized controlled trial.

One ongoing study will evaluate an evidence service for HIV/AIDS decision makers. At baseline, all participants will receive the 'self-serve' evidence service (a listing of relevant systematic reviews, links to PubMed records, and worksheets to help find and use research evidence). During the intervention, one group will receive the 'full-serve' version of SHARE ('Synthesized HIV/AIDS Research Evidence'), which includes access to a database of HIV systematic reviews, emailed updates, access to user-friendly summaries, links to scientific abstracts, peer relevance assessments (indicating how useful the information is), an interface for comments in the records, links to the full text, and access to worksheets to help find and use evidence. The control group will continue to receive the 'self-serve' evidence service. During the final two-month period, both groups will receive the 'full-serve' version of SHARE. The primary outcome measure will be the mean number of logins/month/organization. The secondary outcome will be intention to use research evidence (measured with a survey administered to one key decision maker from each organization).
Another ongoing study (P. M. Wilson, 2015; a CBA design) includes Clinical Commissioning Groups (CCGs): governing body and executive members, clinical leads, and any other individuals involved in commissioning decision-making processes. It has three arms: 1) consulting plus a responsive push of tailored evidence (access to an evidence briefing service provided by the Centre for Reviews and Dissemination (CRD), plus advice and support via phone, email, and face-to-face contact; a monthly check-in to discuss further evidence needs and issues around the use of evidence; and alerts to new systematic reviews and other synthesized evidence relevant to priorities); 2) consulting plus an unsolicited push of non-tailored evidence (access to intervention 1, but with evidence briefings without contextual information instead of tailored evidence briefings); and 3) a 'standard' service (CRD will disseminate evidence briefings generated in intervention 1 and any other non-tailored briefings produced by CRD over the intervention period). The primary outcome is the change at 12 months from baseline in a CCG's ability to acquire, assess, adapt, and apply research evidence to support decision-making. Secondary outcomes will measure individuals' intentions to use research evidence in decision-making.

The Campbell Collaboration | www.campbellcollaboration.org

Pack C, compared to pack A, was associated with a significantly higher mean 'value and accessibility' score, and with 1.5 times higher odds of judgments about the quality of evidence being clear and accessible. More than half of participants preferred narrative report formats to the full version of the SR (53% versus 25%). A higher percentage of respondents (60%) found SRs more difficult to read than narrative reports, although some (17%) said that SRs were easy to read. About half of the participants (51%) found SRs easier to read than summary-of-findings tables (26%).

Vandvik 2012 (Vandvik et al., 2012). Summary of Findings table, delivered by email. Tables presented outcomes, number of participants, summary of findings, and quality assessment using GRADE. Participants preferred the presentation of study event rates over no study event rates, absolute risk differences over absolute risks, and additional information in table cells over footnotes.
Panelists presented with time-frame information in the tables, rather than only in footnotes, were more likely to answer questions about the time frame correctly, and those presented with risk differences rather than absolute risks were more likely to interpret confidence intervals for absolute effects correctly. Regardless of table format, participants considered the information easy to find and to comprehend, and helpful in making recommendations.

Use of systematic review evidence in decision-making
Little to no difference in effect on evidence-informed decision-making when compared to access to a knowledge broker or an online registry of research (Dobbins et al., 2009). Little to no difference in effect on self-reported likelihood of using data-driven versus story-driven policy briefs (with state-level or local-level data) (Brownson et al., 2011). 399 participants (2 studies); ⊕⊕⊕⊝ moderate 1

Understanding, knowledge and/or beliefs
One study found little to no effect on understanding of information when provided in different Summary of Findings table formats (Vandvik et al., 2012), while the other found that those provided with a new version of the Summary of Findings table had consistently higher proportions of correct answers on questions assessing understanding of the key findings in the table (Carrasco-Labra et al., 2016). Little to no effect on understanding of information for a graded entry format compared to a summary of findings table or systematic review alone (Opiyo et al., 2013). Little to no effect on changing participants' beliefs about the strength of the evidence among those who already held beliefs, but an increase in the number of participants who held beliefs about the strength of the evidence (Beynon et al., 2012; Masset et al., 2013). 676 participants (4 studies); ⊕⊕⊕⊝ moderate 1

Perceived credibility of the summaries
Little to no difference in perceived credibility for different versions of the policy brief (data-driven versus story-driven, local versus state-level data) (Brownson et al., 2011). 291 participants (1 study); ⊕⊕⊝⊝ low 1,2

Perceived usefulness and usability of systematic review summaries
The graded entry format was rated higher than the systematic review alone, and there was little to no difference between the ratings for the summary of findings table. Different summary of findings table formats had little to no effect in one study (Vandvik et al., 2012), but a new summary of findings format was found to be more accessible than the standard summary of findings format in another (Carrasco-Labra et al., 2016).

Perceived understandability of the summaries
All formats of the policy brief were reported as easy to understand (Brownson et al., 2011). Graded entry formats were easier to understand than summary of findings tables or systematic reviews alone (Opiyo et al., 2013). 356 (