Registration of randomized clinical trials (RCTs) in easily accessible electronic databases was broadly promoted at the turn of the century as a means of enhancing the transparency and validity of human subjects research. Currently, 17 different web-based databases are available for this purpose, sponsored by national or international initiatives such as ClinicalTrials.gov (USA), REBEC (http://www.ensaiosclinicos.gov.br, Brazil), and ISRCTN.org (international). With prospective registration of all RCTs, key stakeholders would be able to access the details of any contemporary trial. Systematic reviews and meta-analyses of these trials would become more valid, because the potential for publication bias (inclusion of only those trials that get published, which often differ systematically from those that do not) would at least be clarified and possibly mitigated. Finally, prospective disclosure of other key details of RCTs, such as the target sample size and the primary outcome, might combat the myriad temptations that arise during the arduous conduct of RCTs to alter these and other key design features in light of lessons learned.

Despite the obvious virtues of trial registration, it was quickly recognized that the launch of controlled-trials.com in 1998, ClinicalTrials.gov in 2000, and other registries would only be fruitful if mechanisms to promote (or better yet, require) prospective trial registration were enacted. The most important step in this direction was the ICMJE's 2004 policy, in which the International Committee of Medical Journal Editors established several criteria for trial registration (Table 1), including the requirement that all trials launched in 2005 or later be registered prior to enrollment of the first patient in order to be published in a member journal [1].

Table 1 Summary of the ICMJE's clinical trial registration policy: a quick guide for researchers

How well has this worked? Unfortunately, as reported by Dr. Anand and colleagues in the current issue of Intensive Care Medicine, intensive care RCTs frequently fail to live up to the originally intended goals. The study by Anand et al. shows that only two-thirds of published intensive care RCTs were registered, and that in only about a quarter of such trials were the protocols registered prior to enrollment. Furthermore, the authors note that fewer than 15 % of trials met their criteria for adequate registration—that is, prospective registration with specification of, and no subsequent changes to, the primary outcome or target sample size [2].

These striking findings may help explain several of the flaws in RCTs published in the past decade [3], as alterations in these variables may affect a trial's internal validity by representing shifts from the original study design or aim. Unfortunately for both clinicians and patients, problems with the registration and reporting of trials do not seem to be limited to critical care. For example, Mathieu et al. [4] reported that only 45.5 % of the registered trials in cardiology, rheumatology, and gastroenterology published in high-impact-factor journals had been adequately registered. In a study evaluating compliance with mandatory reporting of clinical trial results across the entire ClinicalTrials.gov database and Drugs@FDA, Prayle et al. [5] demonstrated that only 22 % of completed trials had reported their results. Similarly, in an analysis of the ten highest-ranked surgical journals (according to JCR 2011), Hardt et al. [6] found that only 12 % of RCTs had a data summary or results posted on ClinicalTrials.gov within 1 year after trial completion.

The study by Anand and colleagues adds important new information to this evolving story. Nonetheless, it should be interpreted with a few limitations in mind. First, the ICMJE guidelines have likely experienced differential diffusion and uptake, spreading first among high-impact general medical journals and only later among specialty journals. Because the study considered a relatively small sample of trials published in the era immediately following the ICMJE policy, it may be unreasonable to expect that all problems would have already vanished. It would be important to know whether the incidence of the problems that Anand and colleagues identified has declined since 2005, and whether specialty journals in critical care have improved less than general medical journals. Our anecdotal impression is that the problems identified by Anand and colleagues are far less likely to manifest in the top general medical journals. For example, one of us (S.D.H.) recently completed an RCT that was registered after enrollment of the first patient. Although no changes were made to key elements of the trial, four high-impact general medical journals refused even to review the ensuing manuscript because of our accidental non-compliance with the ICMJE guidelines.

Additionally, the authors’ method for classifying studies as having “changed” may have been overly sensitive. Trialists may enter data intermittently over a period of days or weeks, such that some of the “changes” noted by the authors in the sample size or primary outcome may have reflected corrections of errors within a few days rather than fundamental changes in trial design following experience with recruitment. Refined methods for tracking changes in these registries may help surmount similar methodological challenges in future studies.

Despite these limitations, the authors make clear that new steps are needed to combat publication bias and the potential dissemination of invalid data. How should journals and investigators act to improve transparency? One approach would be for more journal editors to verify appropriate registration of an RCT (much as they verify IRB approval) as an essential condition for initiating the peer-review process. As noted, several of the highest-impact journals already do this. Another step in the right direction would be to require that all changes in study design (such as in sample size, outcome ascertainment, eligibility criteria, or statistical methods) be clearly recorded in RCT registries [7] and made transparent to journal editors at the time of submission.

Third, more journals might choose to follow the policies of the New England Journal of Medicine and other top journals whereby online publication of original trial protocols and any protocol revisions is required for all accepted manuscripts reporting RCTs. This could reduce the need to rely on registries to obtain assurances that trial reports are robust and valid. Finally, because the utility of databases such as clinicaltrials.gov depends upon the completeness and accuracy of the registration process conducted by the investigator, checklists might be developed and disseminated to researchers, reviewers, and editors to facilitate the process of reporting and registering studies in online registries. The recent publication of the Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (www.icmje.org) represents an important step in this direction. Table 1 summarizes its recommendations.

In summary, the advancement of knowledge through RCTs is a difficult task that requires a partnership among participants, investigators, funders, reviewers, editors, and others. The study by Anand et al. sheds light on ways in which this partnership can fall apart if investigators and editors do not attend earnestly to their parts of the bargain. These and other improvements in the trial registration and dissemination processes may help improve the transparency and validity of RCTs in critical care, thereby enabling investigators and editors to better fulfill their commitments to RCT participants and funders for their contributions to public health [8].