Updating Systematic Reviews: An International Survey

Background
Systematic reviews (SRs) should be up to date to maintain their value in informing healthcare policy and practice. However, little guidance is available about when and how to update SRs, and the updating policies and practices of organizations that commission or produce SRs are unclear.

Methodology/Principal Findings
The objective was to describe the updating practices and policies of agencies that sponsor or conduct SRs. An Internet-based survey was administered to a purposive non-random sample of 195 healthcare organizations within the international SR community. Survey results were analyzed using descriptive statistics. The completed response rate was 58% (n = 114) across 26 countries, with 70% (75/107) of participants identified as producers of SRs. Among responders, 79% (84/107) characterized the importance of updating as high or very high, and 57% (60/106) of organizations reported having a formal policy for updating. However, only 29% (35/106) of organizations made reference to a written policy document. Several groups (62/105; 59%) reported irregular updating practices, and over half (53/103) of organizational respondents estimated that more than 50% of their SRs were likely out of date. Authors of the original SR (42/106; 40%) were most often deemed responsible for ensuring SRs remain current. Barriers to updating included resource constraints, reviewer motivation, lack of academic credit, and limited publishing formats. Most respondents (70/100; 70%) indicated that they supported centralization of updating efforts across institutions or agencies, and 84% (83/99) favoured the development of a central registry of SRs, analogous to efforts within the clinical trials community.

Conclusions/Significance
Most organizations that sponsor and/or carry out SRs consider updating important. Despite this recognition, updating practices are irregular, and many organizations lack a formal written policy for updating SRs. This research provides the first baseline data on updating from an organizational perspective.


Introduction
Systematic reviews (SRs) have become a gold standard for evidence-based decision-making and are key building blocks for clinical practice guidelines (CPGs) and health technology assessments (HTAs). Because evidence continually evolves, results from SRs are prone to change over time; if this is ignored, their validity can be undermined. [1] To maximize the potential of evidence from SRs, it is important to continually consider the currency of their information and to emphasize the relevance of keeping them up to date.
Few of the estimated 2,500 new English-language SRs indexed annually in Medline are reported as updates [2] according to a proposed definition for updating. [3] It has been suggested that indicators for updating may occur frequently, and within relatively short timelines. [1] Signals for updating can be quantitative (i.e., a change in statistical significance using a conventional threshold or a change in the magnitude of an effect estimate) or qualitative (i.e., 'a different description of effectiveness, a new harm that would alter decision-making, a better alternate therapy, a caution that affects clinical decision-making, or the growth of treatment to a new patient group'). Nonetheless, the level at which updates are undertaken remains unclear, as does the basis upon which updating is conducted (e.g., ad hoc versus a formal process).
Healthcare organizations are both large-scale consumers and producers of evidence from SRs. Some organizations have made recommendations about the frequency with which the evidence base should be updated to keep it current and valid. [4,5] For example, the Cochrane Collaboration recommends that SRs be assessed for the need for updating every two years, or that a commentary be provided to explain why this is done less frequently. [4] As with the conduct of original SRs, Cochrane's commitment to updating is predicated on authors volunteering their time to periodically ensure a review is current. This differs from most other organizations, which are more likely to be prompted to update by an immediate need, usually accompanied by specific funding.
Emerging research suggests that updating SRs at fixed time intervals is perhaps too simple an approach for what appears to be a complex issue influenced by several factors, such as the context in which 'update' decisions are made, approaches to monitoring the literature, how meaningful changes or signals are defined such that their detection may trigger updating, and the subsequent update procedures used. [6] Nonetheless, little is known about updating policies and practices across organizations, and it has not yet been determined how best to balance being up to date with the resources required to achieve this goal. The lack of adequately developed, globally coordinated, reasonable and cost-effective methodologies for updating may be why those who conduct and/or fund SRs do not commonly update. On several levels it seems advantageous to work towards developing the most efficient updating approaches, which must start with an understanding of current updating experiences. Therefore, the aim of this survey was to examine and describe the updating policies and procedures used by healthcare organizations that produce and/or sponsor SRs worldwide.

Methods
The survey was developed with input from a team of methodologists and systematic reviewers, and was guided by a conceptual framework on updating SRs. [6] The survey consisted of 48 questions (including skip-logic functionality) on the following topics: a) updating policies; b) responsibility for updating; c) changes in estimates of outdated reviews; d) updating strategies and practices (e.g., when to update, how to update, surveillance, and triggers impacting updating decisions); e) barriers and facilitators to the updating process; f) views on harmonization of updating and openness to collaboration between groups; and g) descriptive characteristics of the organization and the representative key informant. For the purposes of this research, an 'update' was defined as 'a discrete event aiming to search for and identify new evidence to incorporate into a previously completed SR; with new evidence taken to mean any evidence not included in the previously completed SR irrespective of its chronological appearance in the literature.' [3] In addition, the term harmonization was defined as 'the coming together of different organizations that are involved in the funding, conduct, or reporting of SRs in order to bring into line or to harmonize on issues of conduct, reporting and policy as it relates to updating SRs.' To ascertain what organizations understand 'updating policy' to mean (i.e., mere guidance, or a set of formal procedures implemented either informally or on a compulsory basis), no formal definition was provided to respondents. All items provided non-response options (e.g., not sure), and participants were allowed to skip questions they did not wish to answer. A pilot was administered (January 29 to April 4, 2007) to a sub-sample of 22 organizations, including the Evidence-based Practice Centers designated by the Agency for Healthcare Research and Quality (AHRQ). [7]
The survey was provided to participants via the Survey Monkey web-based service, [8] a suitable format given the distribution of the sample across a wide international geographical area and the fact that key informants had email addresses and were therefore likely familiar with Internet use. [9,10] Emails were sent directly to organizational Directors or to the highest-ranking scientific or administrative official, asking them to identify the most appropriate internal respondent to answer the survey, which took an estimated 20 to 30 minutes to complete.
Recommended survey methods were employed to maximize Internet survey participation, including offering a small monetary incentive to all participants (e.g., a $10 gift certificate from Amazon.com or iTunes). [10-13] The main survey was administered between April 12 and June 8, 2007, with reminder emails sent at approximately 1, 3 and 6 weeks from the first point of contact.
Using a purposive non-random sampling approach, organizations or groups commonly involved in undertaking and/or funding SRs were sampled as potential respondents. The sample was expanded to include entities involved in HTAs and CPGs to allow for stratification by type of evidence synthesis conducted. Key international membership lists of established networks and associations known in the field (i.e., Health Technology Assessment International (HTAi), the International Network of Agencies for Health Technology Assessment (INAHTA), and the Guidelines International Network (G-I-N)) were used to create the sampling pool of relevant research organizations. In addition, the 52 Cochrane Collaboration Review Groups (CRGs) were invited to participate in the survey. Thus, a direct attempt was made to broadly select organizations likely to provide the most relevant knowledge and insight into updating. The final sampling frame consisted of 195 different organizations.
Closed-ended questions were analyzed using a descriptive summary of findings in the form of frequencies and percentages. In addition, other details reported in the text were summarized in tabular form. A subgroup analysis was performed comparing CRGs to non-Cochrane organizations in terms of their responses across select updating characteristics.
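As an illustration of this kind of descriptive summary, frequencies and percentages for a closed-ended item can be tabulated in a few lines. The data and variable names below are hypothetical and do not come from the actual survey dataset; this is only a sketch of the analytic approach described above:

```python
from collections import Counter

def summarize(responses):
    """Tabulate closed-ended survey responses as (frequency, percentage) pairs."""
    counts = Counter(responses)
    total = len(responses)
    # Percentages are rounded to whole numbers, matching the reporting style used here.
    return {option: (n, round(100 * n / total)) for option, n in counts.items()}

# Hypothetical responses to "Does your organization have an updating policy?"
answers = ["yes"] * 60 + ["no"] * 40 + ["not sure"] * 6
print(summarize(answers))
```

Running this on the hypothetical data above reproduces the paper's style of reporting, e.g., 57% (60/106) answering 'yes'.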
Participating organizations are not identified in the results as only aggregate data are reported. The institutional ethics review boards of the University of Toronto and the Children's Hospital of Eastern Ontario approved the survey.

Results
Of the 195 Internet surveys sent by email, 127 organizations responded, yielding an overall response rate of 65%. Of those organizations that initially responded, 10% (13/127) formally declined participation and 58% (114/195) completed the survey. Eighty-eight percent (100/114) of respondents completed more than 80% of all questions. (Figure 1: Survey Flow Diagram)

General Findings
Approximately 96% of the survey sample (103/107) 'strongly' or 'somewhat' agreed with our definition of updating. [3] The majority of respondents (84/107; 79%) viewed the importance of updating SRs as high or very high.
Of the respondent organizations, 57% (60/106) indicated having an updating policy; however, only 29% (35/106) made reference to a written policy document.
Responsibility. Respondents considered authors of the original review (42/106; 40%), the funder(s) of the original review (16/106; 15%), and policy-makers utilizing the evidence (14/106; 13%) to be most responsible for updating SRs. Sixteen percent of respondents suggested that responsibility for updating was a collective effort shared among all of the above together with information specialists involved with SRs, although information specialists on their own were considered least responsible (2/106; 2%).
Surveillance. Regarding updating methods and strategies used to monitor the literature, 63% (66/105) of respondents claimed to be engaged in regular literature searches to identify new evidence, while 28% (29/105) reported no such activity. Of groups reporting regular searching, search frequencies varied: 11% (7/65) searched monthly; 9% (6/65) searched every six months; 20% … Of organizations reporting to monitor the literature, 74% (46/62) reported 'always' or 'often' using the same search strategy as the original SR, while 43% (25/48) reported using a modified search. Eighteen percent (9/50) reported 'always' or 'often' developing a new search strategy, thereby discarding the original.
When assessing specific issues that factor into determining 'when' to update, a formal request from a policy or healthcare decision-maker was the most frequently cited factor (80/99; 81%), followed by the number of new studies identified (77/100; 77%). Additional issues are provided in Table 4.
Conducting Updates. Most organizations seldom use existing methods, such as cumulative meta-analytic approaches, when undertaking updating. The most frequently used approach was a pre-set, time-based updating frequency (66/99; 67%). (Table 5)
Updating Action. When examining levels of updating, 84% (85/101) of organizations reported (always, often, or sometimes) carrying out full updates of all sections of the original SR; 66% (66/100) reported undertaking partial updates of only certain sections of original SRs; and 60% (59/99) reported conducting an entirely new review upon updating. Seventy percent of groups (71/101) indicated knowing an SR was out of date but being unable to commence updating due to lack of resources (e.g., funding, personnel, or time). (Figure 3) Seventy-three percent (74/101) of organizations reported 'sometimes' or 'always' identifying literature published after the date of the last search but before completion of the original SR. Organizations reported that this information is usually incorporated as an addendum to the review (47/101; 47%) or as a formal revision to the analysis (40/101; 40%). Twenty-two percent (22/101) handled the issue in other ways: referring to studies as awaiting assessment, ongoing or unclassified; referencing studies in the section of the review where included; or noting and/or incorporating studies qualitatively into the discussion. An additional 9% (9/101) were unsure how their organization dealt with literature identified post hoc.
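Cumulative meta-analysis, one of the existing methods noted above, re-pools the evidence each time a new study appears, which makes it a natural tool for judging whether newly identified studies would change a review's conclusions. A minimal fixed-effect (inverse-variance) sketch follows; the effect estimates are entirely hypothetical and the function name is our own, not from any particular software package:

```python
import math

def cumulative_meta_analysis(studies):
    """Fixed-effect inverse-variance pooling, recomputed as each study is added.

    `studies` is a list of (effect_estimate, standard_error) tuples in
    chronological order; returns the running pooled estimate with its 95% CI.
    """
    results = []
    sum_w = sum_wy = 0.0
    for effect, se in studies:
        w = 1.0 / se ** 2            # inverse-variance weight
        sum_w += w
        sum_wy += w * effect
        pooled = sum_wy / sum_w      # weighted mean of all studies so far
        se_pooled = math.sqrt(1.0 / sum_w)
        results.append((pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled))
    return results

# Hypothetical log-odds-ratio estimates from three successive trials
trials = [(-0.40, 0.30), (-0.25, 0.20), (-0.35, 0.15)]
for step, (est, lo, hi) in enumerate(cumulative_meta_analysis(trials), 1):
    print(f"after study {step}: {est:.2f} [{lo:.2f}, {hi:.2f}]")
```

In an updating context, an organization could inspect whether the latest pooled interval crosses a decision threshold the previous one did not, which is one quantitative signal for triggering an update.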
Forecasting the need for a future update within the text of a SR was reported by 53% (52/99) of groups. Withdrawal of at least one SR from circulation after judging the review out of date was reported by 56% (55/98) of groups, while formal retirement of at least one SR deemed out of date, or no longer in need of investigation, was noted by 38% (37/97) of respondents. Approximately 54% of organizations (56/103) reported being able to (always or often) draw on the same people involved in the original SR.

Discussion
We believe that this is the first survey to examine updating practices of organizations engaged in knowledge synthesis. This survey is guided by a conceptual framework, [6] has a response rate higher than those typically obtained from Internet surveys (58%), [11] and includes strong international representation.
Importantly, this survey revealed inconsistencies between belief in the importance of updating and the limited updating activity among respondents outside the Cochrane Collaboration (nearly 70% of respondents). Analysis of Cochrane Review Groups (CRGs) and non-Cochrane organizations showed several significant differences in approaches to updating. Most fundamental is the Cochrane Collaboration's policy that Cochrane Reviews should be assessed for updating within two years of publication, and its policy that authors must agree to keep reviews up to date when registering with the Collaboration. [4] Not surprisingly, the CRGs perform updates in greater numbers than the other entities that responded.
Within the Cochrane Collaboration, responsibility for updating resides predominantly with the authors. This yielded an advantage in that CRGs were able to draw on the same review team for updating to a greater extent than non-Cochrane respondents. Still, when reviewers had responsibility for updating, reviewer motivation was the most prominent barrier to updating: every responding CRG reported this to be a moderate or serious obstacle, compared to only 39% of those who did not place primary responsibility with the original authors.
For both Cochrane and non-Cochrane respondents, funding is the most prominent barrier to updating, with 70% reporting inadequate resources. As many of these groups estimated that half or more of their SRs were outdated, supporting updates becomes a crucial issue to address within the overall allocation of evidence synthesis funding. Value of Information (VOI) analysis is a possible mechanism for establishing the costs and benefits of further information gathering and subsequent organizational priority-setting, [14] and it may have relevant application to the field of updating SRs. A conceptual updating framework [6] and evidence-based update decision-making tools are needed to guide this process.
Placing the onus for updating mainly on authors of SRs has had some success within the Cochrane Collaboration, although it may not be a practical approach for agencies that do not share its values and culture. Journal publishers and academic organizations can contribute to overcoming some of the known motivational challenges faced by authors with updating duties. [15] Academic institutions can support updating by according it academic recognition on par with conducting and publishing original SRs. Journals can increase publishing outlets for updates, for instance by committing, when accepting a review for publication, to publishing any future updates. Organizations can make updates more prominent by tying them to the original review. For example, the Public Library of Science (PLoS) electronically links SR updates to freely available original reports. [16] Updates are also indexed in PubMed and Medline, provided that authors explicitly identify that the review is an update of a previously published article. [17] Although useful, currently few SRs are cited as updates in Medline. [2]
Despite the paucity of methodological tools for updating [18] and the limited adoption of those that do exist (Table 5), very few organizations reported a lack of updating methodologies as a barrier to updating. Most reported composite factors as the drivers for when to update, and most reported using a variety of established methods to identify new evidence. Thus, the problem seems largely a lack of resources, financial and human. Therefore, solutions that either infuse additional resources or greatly reduce the work required for updating seem most needed.
One approach may be for organizations to cooperate in harmonizing updating efforts, sharing resources and knowledge on issues of surveillance, conduct, reporting and policy for updating. The idea of harmonizing efforts was supported by the majority of organizations responding to this survey but such cooperation is in its infancy.
Although the generalizability of the survey findings is limited by the use of a non-random sampling frame, it is apparent that, at the time of the survey, several organizations lacked a formal policy for maintaining the SRs they produce. Given that review findings have been empirically demonstrated to be overturned by new evidence, [1] often within short time horizons, it is likely prudent for those organizations to address this issue. Even if resources are not available for updating, some mechanism is necessary to monitor emerging evidence so that SRs, or the guidelines based upon them, can be formally withdrawn when they can no longer inform best practice. Agencies, as opposed to individual authors who might undertake a SR purely as a scientific endeavour, have a responsibility to manage the information they sponsor or commission throughout its lifecycle.
Updating is a complex and resource-intensive process, often weighed down by barriers, and it needs to be balanced with other research endeavours. Nonetheless, updating should be viewed as a worthy undertaking that ensures health practice and policy are based on the best and most current evidence.