
The Quality of Registration of Clinical Trials: Still a Problem

  • Roderik F. Viergever,

    rikviergever@gmail.com

    Affiliations Department of Primary and Community Care, Radboud University Nijmegen Medical Center, Nijmegen, The Netherlands, Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, London, United Kingdom

  • Ghassan Karam,

    Affiliation International Clinical Trials Registry Platform (ICTRP), Department of Ethics and Social Determinants of Health (ESD), World Health Organization, Geneva, Switzerland

  • Andreas Reis,

    Affiliation Department of Ethics and Social Determinants of Health (ESD), World Health Organization, Geneva, Switzerland

  • Davina Ghersi

    Affiliation NHMRC Clinical Trials Centre, Sydney Medical School, University of Sydney, Sydney, Australia

Abstract

Introduction

The benefits of clinical trials registration include improved transparency on clinical trials for healthcare workers and patients, increased accountability of trialists, the potential to address publication bias and selective reporting, and possibilities for research collaboration and prioritization. However, poor quality of information in registered records of trials has been found to undermine these benefits in the past. Trialists' increasing experience with trial registration and recent developments in registration systems may have positively affected data quality. This study was conducted to investigate whether the quality of registration has improved.

Methods

We repeated a study from 2009, using the same methods and the same research team. A random sample of 400 records of clinical trials that were registered between 01/01/2012 and 01/01/2013 was taken from the International Clinical Trials Registry Platform (ICTRP) and assessed for the quality of information on 1) contact details, 2) interventions and 3) primary outcomes. Results were compared to the equivalent assessments from our previous study.

Results

There was a small and not statistically significant increase from 81.0% to 85.5% in the percentage of records that provided a name of a contact person. There was a significant increase from 68.7% to 74.9% in the number of records that provided either an email address or a telephone number. There was a significant increase from 44.2% to 51.9% in the number of intervention arms that were complete in registering intervention specifics. There was a significant increase from 38.2% to 57.6% in the number of primary outcomes that were specific measures with a meaningful timeframe. Approximately half of all trials continued to be retrospectively registered.

Discussion

There have been small but significant improvements in the quality of registration since 2009. Important problems with quality remain and continue to constitute an impediment to the meaningful utilization of registered trial information.

Introduction

Clinical trials registration is now broadly considered an ethical and scientific responsibility.[1]-[8] In the past fifteen years, national and regional trial registries have been established in Africa, Asia, Australia/Oceania, Europe, North America and South America.[9] The WHO International Clinical Trials Registry Platform (ICTRP) was established in 2005 with the aim of bringing registered trial data from different trial registries together and creating a single point of access to information on all clinical trials conducted globally.[10] It now combines data from 15 national and regional clinical trial registries, offering access to data from more than 200,000 trials.

There are important advantages to the increased transparency in clinical trial conduct and reporting brought about by these developments. It improves access to information on clinical trials for healthcare workers, researchers and patients [11], [12]; it allows for steps to be taken against publication bias and selective reporting [2], [12]-[20]; it carries the potential to increase the accountability of those conducting clinical trial research; and it makes the identification of gaps in the health research landscape possible, thus facilitating priority setting in research [21]-[27].

The degree to which registered trial data can be used for these purposes depends on the completeness and meaningfulness of the data registered. The quality of data in registered records has been shown to be poor in the past.[2], [14], [15], [28]-[41] However, clinical trials registration has matured in recent years. Trialists may have become more proficient at registering their trials. Moreover, registries are likely to have improved their registration systems after the implementation of the International Standards for Clinical Trial Registries in 2010.[42]

This study was conducted to investigate whether the poor quality of registration observed in the past was due to trial registration being in its infancy, or whether it is a more persistent problem. To do so, we repeated a study we conducted in 2009, using the same methods and the same research team.[2]

Methods

A random sample of 400 records of clinical trials registered between 1 January 2012 and 1 January 2013 was taken from the ICTRP database. Records of trials that were registered as having an observational study design were not eligible for the sample. For trials that were registered in more than one registry (duplicate records), only the record with the earliest registration date was considered eligible.[43] At the time the sample was taken, the database included trials registered in fifteen different registries.[9]
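For illustration only, a minimal sketch of this eligibility filtering and deduplication in Python/pandas, assuming a flat export with hypothetical column names (study_type, registration_date, duplicate_group_id); the real ICTRP export format differs, and duplicate records are in practice identified through the ICTRP's bridging of related records.[43]

```python
import pandas as pd

# Hypothetical flat export of ICTRP records; real field names differ.
records = pd.read_csv("ictrp_export.csv", parse_dates=["registration_date"])

# Keep interventional records registered in the sampling window.
eligible = records[
    (records["study_type"] == "Interventional")
    & (records["registration_date"] >= "2012-01-01")
    & (records["registration_date"] < "2013-01-01")
]

# For trials registered in more than one registry, keep only the record with
# the earliest registration date within each group of duplicate records.
eligible = (
    eligible.sort_values("registration_date")
    .drop_duplicates(subset="duplicate_group_id", keep="first")
)

# Draw a random sample of 400 unique trials (fixed seed for reproducibility).
sample = eligible.sample(n=400, random_state=2013)
```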

Sample size calculation

A sample size of 380 records was chosen to ensure that all upper and lower 95% confidence intervals for extrapolation to the entire ICTRP dataset, calculated using the Wilson score interval (see Analysis), would deviate at most 5% from the estimated proportion. A sample size of 380 also fulfilled this study's requirements to detect relatively minor changes in the quality of the three primary outcomes: the quality of contact details, interventions and outcomes (minor changes were defined as a 10% increase or decrease in the proportion of adequately registered records). It allowed for detecting an increase or decrease of 10% (using a two-tailed test and α = 0.05) with a power (1 − β) greater than 0.85 for the quality of contact details and interventions, and greater than 0.95 for the quality of primary outcomes.
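As an illustrative check of these power figures (not necessarily the calculation used in the study, and using equal group sizes of roughly 380, whereas the actual numbers of intervention arms and primary outcomes per sample differ), a normal-approximation power computation for a two-proportion comparison can be sketched as follows; the 68.7% baseline is the 2008/2009 value for the presence of an email address or telephone number:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_power(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for proportions
    (normal approximation, contribution of the opposite tail ignored)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return norm.cdf(abs(p1 - p2) / se - z_alpha)

# Power to detect a 10% absolute improvement on the 68.7% baseline
# with roughly 380 records in each sample.
print(round(two_proportion_power(0.687, 0.787, 380, 380), 2))  # ~0.88
```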

In our previous study in 2009, 3% of trials were incorrectly registered as interventional.[2] Therefore, a final sample size of 400 records was chosen to allow for exclusion of these trials.

Data extraction

Registry name, trial ID, target sample size, inclusion criteria for gender and age of participants, recruitment status, date of registration, date of first enrolment and the public and scientific title for each record were downloaded from the ICTRP database and imported into Excel on 13 February 2013. Records were checked for the presence of entries in each of these fields.

All information that had to be extracted manually from the registered records was collected between 13 February 2013 and 23 February 2013. Information was always extracted manually from the complete registered record in the source registry.

Descriptive information on study design was extracted manually. Data on interventions and sponsorship were also extracted manually and were then coded. The system used to code interventions was adapted from the codes used for intervention types on ClinicalTrials.gov.[44] Primary sponsors were coded as foundation, government, industry, university/hospital, or other. Trials were coded as industry funded (primary sponsor was industry), partially industry funded (primary sponsor was non-industry, but a secondary sponsor or source of monetary or material support was industry), or non-industry funded.

Records of trials that were registered as interventional but, during manual data extraction, turned out to be records of observational trials, diagnostic accuracy trials or treatment protocols for continuation of treatment after inclusion in a study protocol were excluded from further data extraction.

Descriptive statistics were generated for registry name, primary sponsor category, intervention type, study phase, study design, target sample size, randomization status and inclusion criteria for gender and age of participants. Registration dates and dates of first enrolment were compared to assess the degree of retrospective registration.

Contact information.

The presence or absence of the following contact details was evaluated: name of a contact person (investigator or other), email address and telephone number. The WHO 20-item Trial Registration Data Set requires registration of separate scientific and public contact details.[45] There was, however, variation in registration formats for contact details between registries. Some registries had one field for contact details, others had two separate fields for public and scientific contact details, and others had multiple contact fields. For records with only one contact field, the presence of contact information was extracted from that field. For records with multiple contact fields, contact information was denoted as present if it appeared in any of the fields.

Interventions.

Given the considerable variability in the types of interventions evaluated in trials, comparison of registration quality between different intervention categories is difficult. Therefore, the evaluation of the quality of registered intervention data was limited to trials that investigated drugs, biologicals or vaccines, including active comparators. Placebo comparators were not evaluated. For each intervention and active comparator, the presence or absence of the following five intervention specifics was collected: name, dose, duration of the intervention, frequency of administration and route of administration. All intervention arms were assessed separately. Name was denoted as present if a company serial number or a drug name was provided. Only interventions and active comparators mentioned in the intervention field were assessed. Other texts in the record were scanned for additional information on mentioned interventions. To assess the overall completeness of registration of intervention specifics, a binary outcome variable was used: incomplete versus complete registration of the intervention. Complete registration entailed the reporting of drug name, dose, duration, frequency and route.
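A minimal sketch of this completeness rule for one intervention arm, assuming the five specifics have already been extracted into a dictionary (field names and example values are hypothetical):

```python
REQUIRED_SPECIFICS = ("name", "dose", "duration", "frequency", "route")

def arm_is_complete(arm: dict) -> bool:
    """An arm counts as completely registered only if all five specifics are present."""
    return all(arm.get(field) for field in REQUIRED_SPECIFICS)

# Example: the dose is missing, so this arm is incompletely registered.
example_arm = {"name": "drug X", "dose": None, "duration": "12 weeks",
               "frequency": "once daily", "route": "oral"}
print(arm_is_complete(example_arm))  # False
```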

Outcome measures.

The number of primary outcomes per record was collected. Each primary outcome was evaluated for specificity, using a classification system adapted from the system used by Zarin et al. in their assessment of the quality of outcomes.[30], [37] If a record contained multiple outcomes, all were assessed separately. Outcomes were classified as being a specific measure, a domain, vague, an unexplained abbreviation, or a part of safety monitoring.

Besides assessing the specificity of each outcome, the presence or absence of a time frame was collected for every outcome. Some outcomes assessed the duration of an event or the time to an event, or were safety monitoring outcomes. For these outcomes, reporting a time frame is not possible, and the time frame was therefore denoted as irrelevant. Time frames were denoted as not meaningful when they did not specify a point in time at which the outcome was to be measured.

Only outcomes mentioned in the outcome fields were assessed. Other texts in the record were scanned for additional information on mentioned outcomes.

To assess the overall quality of registration of primary outcomes, a binary outcome variable was used: registration of a specific measure with a meaningful time frame (or for which a time frame was irrelevant), versus any other outcome.
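A minimal sketch of this binary quality variable, using hypothetical labels for the specificity and time frame classifications described above:

```python
from dataclasses import dataclass

@dataclass
class PrimaryOutcome:
    specificity: str  # "specific measure", "domain", "vague",
                      # "unexplained abbreviation" or "safety monitoring"
    time_frame: str   # "meaningful", "not meaningful", "absent" or "irrelevant"

def outcome_is_adequate(outcome: PrimaryOutcome) -> bool:
    """True only for a specific measure with a meaningful time frame,
    or a specific measure for which a time frame is irrelevant."""
    return (outcome.specificity == "specific measure"
            and outcome.time_frame in {"meaningful", "irrelevant"})

print(outcome_is_adequate(PrimaryOutcome("specific measure", "meaningful")))  # True
print(outcome_is_adequate(PrimaryOutcome("domain", "meaningful")))            # False
```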

Finally, when there was more than one intervention (or active comparator) arm registered for a trial, or when there was more than one primary outcome registered, all intervention (and active comparator) arms and primary outcomes were assessed in this study. Multiple intervention arms and primary outcomes from one registered record are not independent. The effects of this non-independence on our reported outcomes are expected to be limited.

Internal inconsistency in study design

Internal inconsistencies in study design fields were identified.[46] Internal inconsistencies were defined as records with multiple descriptors that were not compatible, such as “single-group” and “controlled or randomized”; “open-label” and “blinded”; and “double-blinded” without subject or investigator blinding.
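A minimal sketch of such a rule-based consistency check, assuming design descriptors have been normalized into a set of hypothetical labels; real registry design fields use varying vocabularies:

```python
# Pairs of design descriptors that should not occur together in one record.
INCOMPATIBLE_PAIRS = [
    ({"single-group"}, {"controlled", "randomized"}),
    ({"open-label"}, {"single-blinded", "double-blinded"}),
]

def is_internally_inconsistent(descriptors):
    """Flag a record whose study design descriptors contradict each other."""
    for left, right in INCOMPATIBLE_PAIRS:
        if descriptors & left and descriptors & right:
            return True
    # "Double-blinded" claimed while neither subjects nor investigators are blinded.
    if "double-blinded" in descriptors and not descriptors & {"subjects-blinded",
                                                              "investigators-blinded"}:
        return True
    return False

print(is_internally_inconsistent({"open-label", "double-blinded"}))  # True
print(is_internally_inconsistent({"randomized", "double-blinded",
                                  "subjects-blinded", "investigators-blinded"}))  # False
```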

Assessment rules

The assessment rules and methods for data extraction in this study are analogous to the rules and methods used in our previous study on the quality of registration.[2] As then, all records were assessed for eligibility by RV, who then extracted and coded the data. A more detailed overview of the rules used in all assessments is provided in the supporting file that accompanies our previous publication.[2]

Analysis

95% confidence intervals (CI) were calculated for proportions of trials in the samples using continuity-corrected Wilson score intervals with the Singleton et al. adjustment for finite populations.[47]-[49] These 95% CIs reflect the confidence with which these proportions, measured in our samples of records, predict the true proportions in the overall populations of all interventional trials on the ICTRP. The quality of registration was compared between trials registered between 17 June 2008 and 17 June 2009 [2] and trials registered between 1 January 2012 and 1 January 2013 using the Newcombe-Wilson test with continuity correction (with α = 0.05).[47]-[49]
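For readers who want to reproduce the interval calculations, a sketch of the continuity-corrected Wilson score interval and the Newcombe score interval for the difference between two independent proportions is given below (in Python rather than the Excel sheets cited in [47]-[49], and without the Singleton et al. finite-population adjustment, which is small at this study's sampling fraction). A difference is taken as significant when the interval for the difference excludes zero.

```python
from math import sqrt
from scipy.stats import norm

def wilson_cc(k, n, alpha=0.05):
    """Continuity-corrected Wilson score interval for a proportion k/n
    (finite-population adjustment omitted)."""
    z = norm.ppf(1 - alpha / 2)
    p = k / n
    denom = 2 * (n + z ** 2)
    lower = (2 * n * p + z ** 2
             - (z * sqrt(z ** 2 - 1 / n + 4 * n * p * (1 - p) + (4 * p - 2)) + 1)) / denom
    upper = (2 * n * p + z ** 2
             + (z * sqrt(z ** 2 - 1 / n + 4 * n * p * (1 - p) - (4 * p - 2)) + 1)) / denom
    return max(0.0, lower), min(1.0, upper)

def newcombe_diff_ci(k1, n1, k2, n2, alpha=0.05):
    """Newcombe score-based confidence interval for the difference p1 - p2,
    built from the two single-proportion Wilson intervals."""
    p1, p2 = k1 / n1, k2 / n2
    l1, u1 = wilson_cc(k1, n1, alpha)
    l2, u2 = wilson_cc(k2, n2, alpha)
    d = p1 - p2
    return (d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
            d + sqrt((u1 - p1) ** 2 + (l2 - p2) ** 2))

# 330 of 386 records in 2012 named a contact person; this reproduces the
# reported interval of 85.5% [81.5%-88.8%] up to rounding.
print(wilson_cc(330, 386))
```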

This study intended to analyse changes in the quality of registration across all registries from 2008/2009 to 2012. However, the distribution of clinical trials across the registries changed from the former dataset to the latter, and several new registries were added to the ICTRP database. To be able to draw conclusions about changes in the quality of registration among the registries that were included in our first study, we conducted a sensitivity analysis for changes in data quality in the registered records from the three largest registries from 2008/2009 (ClinicalTrials.gov, ISRCTN and ANZCTR).

Statistical analyses were performed using MS Excel and SPSS 20.

Results

A sample of 400 records was taken from a total of 23,046 unique interventional trials that were registered between 1 January 2012 and 1 January 2013. Fourteen records were excluded from data extraction because the corresponding trials were of an observational or diagnostic accuracy study design or were a treatment protocol for continuation of treatment after inclusion in a study protocol. A total of 386 records was included for data extraction, of which 221 (57.3% [52.2%–62.2%]) investigated drugs, biologicals or vaccines (Figure 1).

Figure 1. Flowcharts for the old 2009 study and for the new 2013 study.

https://doi.org/10.1371/journal.pone.0084727.g001

Baseline data on registry name, primary sponsor category, intervention type, study phase, study design, randomization status and inclusion criteria for gender of participants are presented in Table 1.

Table 1. General descriptive information from the two samples of clinical trials registered in 2008/2009 and in 2012.

https://doi.org/10.1371/journal.pone.0084727.t001

Records were additionally checked for the presence of entries in the fields for recruitment status, date of first enrolment and the public and scientific title. Recruitment status, date of first enrolment and the public title were present in all records; the scientific title was reported in 379 records (98.2% [96.1%–99.2%]), which constituted a significant improvement from the observed 95.8% in 2008/2009.

Furthermore, information was collected on sample size and age of participants. Sample size was reported in 384 records (99.5% [97.9%–99.9%]), which was not statistically different from the observed 98.6% in 2008/2009. The median target sample size was 77 [IQR 39–200]. Age of participants was reported in 375 records (97.2% [94.8%–98.5%]), which was not statistically different from the observed 95.8% in 2008/2009. 56 records (14.5% [11.2%–18.5%]) mentioned inclusion of participants <18 years of age.

Finally, registration dates and dates of first enrolment were compared. The majority of records in our sample did not provide a day for the date of first enrolment but only a month and a year, which limited this analysis to comparing the month in which trials were registered to the month in which the first participant was recruited. The registration date was in a later month than the date of first enrolment in 185 records (47.9% [42.9%–53.0%]), which was not statistically different from the observed 53.4% in 2008/2009. This difference was more than one month in 158 records (40.9% [36.0%–46.0%]), which was not statistically different from the observed 43.6% in 2008/2009. The median of this difference was 8 months. Registration date and date of first enrolment were in the same month in 76 records (19.7% [15.9%–24.1%]). The registration date was in an earlier month than the date of first enrolment in 125 records (32.4% [27.8%–37.3%]). The median of this difference was 2 months.
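A minimal sketch of this month-level comparison between registration date and date of first enrolment, assuming both have been parsed into dates (column names and example dates are hypothetical):

```python
import pandas as pd

records = pd.DataFrame({
    "registration_date": pd.to_datetime(["2012-03-15", "2012-06-02"]),
    "first_enrolment":   pd.to_datetime(["2011-07-01", "2012-06-01"]),
})

# Most records give only a month and a year for the date of first enrolment,
# so the comparison is done at the level of calendar months.
reg, enr = records["registration_date"].dt, records["first_enrolment"].dt
records["months_difference"] = (reg.year - enr.year) * 12 + (reg.month - enr.month)
records["retrospective"] = records["months_difference"] > 0
print(records[["months_difference", "retrospective"]])
```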

Quality of registration of contact information

Overall, 330 records reported a name of a contact person (85.5% [81.5%–88.8%]). 259 records provided an email address (67.1% [62.2%–71.7%]) and 272 records a telephone number (70.5% [65.6%–74.9%]). 289 records provided either an email address or a telephone number (74.9% [70.2%–79.0%]). These constituted significant improvements compared to 2008/2009 for the presence of an email address, the presence of a telephone number and the presence of either (Table 2). The improvement in the presence of a name of a contact person was not significant. None of the changes within the subcategories of industry, non-industry and partially industry sponsored records were significant. Contact details remained present less frequently among industry sponsored trials than among non-industry sponsored trials.

Table 2. The presence of contact details in registered records in 2008/2009 and 2012.

https://doi.org/10.1371/journal.pone.0084727.t002

The presence of contact details was disaggregated according to trials' recruitment status (Table 3). The presence of names of contact persons did not differ markedly for trials with a different recruitment status, but email addresses, telephone numbers or either were present more frequently among recruiting or not yet recruiting trials than among completed trials, especially for industry sponsored trials.

Table 3. The presence of contact details according to recruitment status for trials registered in 2012.

https://doi.org/10.1371/journal.pone.0084727.t003

Sensitivity analysis among the three largest registries showed effects that were congruent with the changes found in the full dataset. From 2008/2009 (693 trials) to 2012 (260 trials), reporting improved from 79.9% to 86.2% for the name of a contact person, from 57.9% to 61.9% for an email address, from 62.5% to 67.7% for a telephone number, and from 67.2% to 70.8% for the presence of either.

Quality of registration of interventions involving drugs, biologicals or vaccines

There were 221 records of trials that investigated drugs, biologicals or vaccines. These reported 351 experimental or active comparator arms (Table 4). Completeness of registration of the name of the intervention, the duration of the intervention, the frequency of administration and the route of administration did not significantly change between 2008/2009 and 2012. Information on the dose was present significantly more often in 2012 than in 2008/2009. 182 arms (51.9% [46.5%–57.1%]) were complete in registering intervention specifics, which also constituted a significant improvement from the observed 44.2% in 2008/2009.

Table 4. The completeness of intervention specifics in registered records in 2008/2009 and 2012.

https://doi.org/10.1371/journal.pone.0084727.t004

Sensitivity analysis showed small improvements for the completeness of registration of all intervention characteristics in registered records from the three largest registries. From 2008/2009 (696 intervention arms) to 2012 (217 intervention arms), reporting improved from 98.9% to 100.0% for the name of the drug, from 71.3% to 82.0% for dose, from 71.0% to 79.3% for duration, from 76.7% to 84.3% for frequency, and from 74.7% to 84.8% for route. The proportion of arms that were complete in registering intervention specifics rose from 44.7% to 57.6%.

Quality of registration of outcome measures

The 386 included trial records reported 705 primary outcomes. 261 records (67.6% [62.7%–72.2%]) reported one primary outcome, 62 (16.1% [12.6%–20.2%]) reported two, 29 (7.5% [5.2%–10.7%]) reported three and 32 (8.3% [5.8%–11.6%]) reported four or more. The maximum number of primary outcomes reported in one record was 52. Two records (0.5% [0.1%–2.1%]) reported no primary outcome at all.

The degree of specificity of reported outcomes was assessed (Table 5). To prevent skewing of the data, the outcomes in the record with 52 outcomes were counted as one for this analysis (the second-highest number of outcomes in any record was 12). 377 primary outcomes (57.6% [53.8%–61.4%]) were specific measures for which a meaningful time frame was present or for which a time frame was irrelevant. This constituted a significant improvement from the observed 38.2% in 2008/2009.

Table 5. Degree of specificity of primary outcomes in 2008/2009 and 2012.

https://doi.org/10.1371/journal.pone.0084727.t005

Sensitivity analysis also showed improvements for the quality of reported primary outcomes in registered records from the three largest registries. From 2008/2009 (1186 primary outcomes) to 2012 (401 primary outcomes), the proportion of primary outcomes that were specific measures for which a meaningful time frame was present or for which a time frame was irrelevant improved from 38.5% to 66.1%.

Internal inconsistencies in study design

Internal inconsistencies in the study design fields were encountered in 10 records (2.6% [1.3%–4.9%]). This was a significant improvement from the observed 9.3% in 2008/2009.[46]

Differences between registries

Differences between registries in the quality of information on contact details, interventions and primary outcomes were assessed (Table 6). Only registries with more than ten records, intervention arms or outcomes, respectively, were included in this comparison. There are differences between registries in the quality of reporting, yet few registries score well on all aspects of quality, or poorly on all aspects.

Table 6. The quality of information on contact details, interventions and primary outcomes per registry for trials registered in 2012.

https://doi.org/10.1371/journal.pone.0084727.t006

To learn more about how data recording formats might influence data quality, data recording formats for contact details, interventions and primary outcomes were denoted for each of the registries that provided data to the WHO ICTRP at the time of the study (Table 7).

Table 7. Data recording formats for the three primary outcomes of this study (contact information, intervention specifics and outcome quality) at the registries that provided data to the ICTRP at the time of the study in 2013.

https://doi.org/10.1371/journal.pone.0084727.t007

Discussion

A persistent problem

This study was conducted using the same methods and the same research team as our previous study on the quality of registration.[2] There have been small but significant improvements in the quality of registration since 2009. However, important problems with quality remain and continue to constitute an impediment to the meaningful utilization of registered trial information.

There have been small improvements in the presence of contact details overall. This is partially due to the larger proportion of non-industry trials in the analysis of trials registered in 2012, which perform better at registering contact details. But quality also improved across all sponsor categories, the main exception being that industry sponsors continued to omit the names of contact persons. Explicit mention of the name of the principal investigator is important to increase the accountability of trialists. Furthermore, despite improvements, contact information such as a telephone number or email address often remains absent. Remarkably, trialists appear to remove contact details when trials have been completed or stopped, in particular for industry sponsored trials. To allow patients, healthcare workers and other researchers to inform themselves about clinical trials, it is important that trialists can be contacted at any stage of a trial. Such information should remain available after a trial is completed or stopped.

There was some improvement in the completeness of intervention specifics for drug trials; however, the improvement was minor. In contrast, the improvement in the quality of registered outcomes was marked. This is a hopeful development for systematic reviewers, since in the absence of a complete trial protocol, registered clinical trial data constitute the only way to identify selective reporting.[2], [13], [15]-[19] However, specific information about the outcome in registered records is necessary to detect selective outcome reporting as part of systematic reviews, and still almost half of all primary outcomes do not constitute a specific measure with a meaningful time frame. Moreover, it has been proposed that the specificity of outcomes should be assessed at a greater level of granularity, to take into account more subtle forms of selective reporting.[30]

Finally, a very large percentage of records remains registered retrospectively, as has also been concluded by others.[50] Without prospective registration, before enrolment of the first participant, we cannot be certain that trial outcomes were not registered retrospectively in a way that favours a particular result.

In conclusion, there have been small improvements in the quality of registered trial data, but poor quality is a persistent problem. Recent publications have also shown results reporting at individual registries, such as ClinicalTrials.gov, to be problematically incomplete [51]-[53], despite legal obligations in the US to report the findings of trials.[51], [54]

The causes of poor quality (and learning from other registries)

The persistent nature of poor quality of registered clinical trial data suggests one or more pervasive causes. Although trialists themselves have a responsibility to ensure that the information in registered records is complete and accurate, registries can encourage high-quality registration through quality control processes and appropriate data recording practices.[2] Both are addressed in the International Standards for Clinical Trial Registries.[42]

Our analysis suggests that there are important differences between registries with regard to registration quality. Notably, there are few registries that score badly on all three aspects of quality that we tested, or well on all. Rather, there are differences depending on which aspect is assessed, as becomes clear from Table 6, and from our sensitivity analyses, which showed that the three largest registries score better on intervention and primary outcome quality, but worse on the presence of most contact details. One explanation for these differences is the variation in data recording formats between registries.[31] For example, some registries specifically ask trialists for the methods of measurement for each outcome; others have only free text fields for outcomes. Some registries ask for specific details on interventions; others, again, have only free text fields. Some registries ask trialists to categorize interventions and outcomes, others do not. For data quality and data aggregation purposes,[23] it is important that discrete options are offered where there is a limited set of possible answers (supplemented by a free text field to allow for additional explanation where needed), that different sub-aspects of data set items are specifically queried (Table 7), and that data recording formats are harmonized across all individual registries. A second explanation for the differences in the quality of registration between registries is the level of quality control that registries apply.

The differences in the quality of registration of different data items found in this study suggest that registries can learn from each other. Differences between registries in terms of data recording formats and their consequences for data quality deserve to be studied in more detail, so that registries can improve their formats based on the lessons from other registries. Registries could also draw lessons from each other about quality control, for example with regard to the information that is considered mandatory and a precondition for registration, and the different tiers of data checking (e.g. automated checks and manual checks [30], [42]) that can be implemented to detect incomplete or non-meaningful entries. The International Standards for Clinical Trial Registries state that benchmarking of registries should be one of the next steps in standards development for registries.[42]

Enforcement

To be able to make use of the potential benefits that clinical trials registration offers, it is of paramount importance that registration is complete and accurate. However, it must also be comprehensive.[2] Enforcement of clinical trials registration has increased substantially over the past decade,[55] owing to national legislation on registration [4], [30], [56]; policies by journal editors and publishers making registration a prerequisite for publication [1], [5]-[7], [57]; ethics committees and national research ethics oversight agencies requiring registration as part of procedures for ethics approval [3], [55], [58]; policies by funders making registration a prerequisite for grant approval [59]; international codes of research practice that recommend trial registration, such as the SPIRIT 2013 and CONSORT 2010 statements, which recommend the inclusion of trial registration details in both clinical trial protocols and reports [60], [61]; international codes of research ethics, such as the Declaration of Helsinki [62]; and self-regulation by universities [55] and the pharmaceutical industry [8]. Despite these measures, a proportion of trials currently remains unregistered, especially in countries lacking legislation on trial registration.[63]-[67]

National legislation is crucial in enforcing the registration of all clinical trials.[4] Several of the other enforcement measures outlined above have been instrumental in creating momentum for clinical trials registration, such as journal and ethics review board requirements for registration, yet not all journal editors require registration as a precondition for publication,[67] not all clinical trials are conducted with the goal of publication,[4] and not all ethics committees have policies on clinical trials registration in place.[58] Therefore, it is imperative that all countries that have not implemented legislation on trial registration do so.[4] Furthermore, it is important that the remit of legislation on registration covers all possible clinical trials, as is being recognized in the US and the EU.[68], [69] Currently, in those countries where legislation to enforce registration is present, its remit is often limited to a subset of trials.[30], [68]

With regard to enforcement, the commitment of the pharmaceutical industry to clinical trials registration is important, and the past development of a Joint Position of several pharmaceutical associations on the disclosure of clinical trial information via clinical trial registries and databases is laudable.[8] However, the Joint Position needs revisiting on three important aspects. First, it currently allows for registration after commencement of patient enrolment. This contradicts the policies on clinical trial registration of WHO and the International Committee of Medical Journal Editors (ICMJE).[1], [2] Second, it allows trialists to withhold data specified by the WHO Minimum Trial Registration Data Set if they consider them sensitive. This, too, contradicts the policies on clinical trial registration of WHO and the ICMJE.[1], [2] Third, the Joint Position mentions that “registration of clinical trials on any one of a number of free, publicly accessible, internet-based registries should achieve the intended objectives”. To ensure the quality of registered trial data, the WHO ICTRP search portal only provides access to data from trials registered at registries that meet certain quality standards (excluding, for example, registries managed by for-profit agencies).[70] To realize a single point of access to data on all clinical trials conducted globally, it is important that the pharmaceutical associations include a commitment to registration in WHO-approved registries in the next update of their Joint Position, as the ICMJE already has.[7] Finally, enforcement of trial registration by the pharmaceutical industry would be further advanced if support for clinical trials registration and results reporting were not limited to statements from the pharmaceutical associations, and if more individual pharmaceutical companies subscribed to the AllTrials campaign, following the example of GlaxoSmithKline.[71]

Besides increasing the number of trials that are registered, enforcement measures could also help improve the quality of registration. Journal editors, for example, have been called upon not only to enforce registration itself, but also to implement quality control procedures.[19] Although editors have made clear that trial registration with missing or uninformative fields for the minimum data elements is inadequate,[1], [5], [57], [72] little is known about the degree to which journals are putting such measures into practice.[6] Similarly, legislation in both the EU and the US supports the WHO Minimum Trial Registration Data Set – the minimum amount of trial information that must appear in a register in order for a given trial to be considered fully registered.[73], [74] Failure to comply with registration legislation may result in penalties or withholding of federal grants.[54] Yet, little is known about the extent to which legislators are planning to invoke such measures, and whether the quality of registration could play a role in such decisions. For both legislators and journal editors, discussion needs to be initiated on how far measures should go to discourage incomplete or inadequate registration. This applies both to the initial registration of a clinical trial, which was the subject of this study, and to results reporting in registry databases.[51]-[53]

Conclusion

There have been small but significant improvements in the quality of registration since 2009. However, important problems with quality remain and continue to constitute an impediment to the meaningful utilization of registered trial information. More effort needs to be made to improve data recording formats, enhance quality control measures and scale up enforcement of trial registration.

Acknowledgments

Disclaimer

Ghassan Karam and Andreas Reis are staff members of the World Health Organization. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy or views of the World Health Organization.

Data sharing statement

Source data can be requested from the corresponding author at rikviergever@gmail.com.

Author Contributions

Conceived and designed the experiments: RFV GK AR DG. Performed the experiments: RFV. Analyzed the data: RFV. Wrote the paper: RFV GK AR DG.

References

  1. Clinical Trial Registration: A statement from the International Committee of Medical Journal Editors (2004). Available: http://www.icmje.org/clin_trial.pdf. Accessed 31 August 2012.
  2. Viergever RF, Ghersi D (2011) The Quality of Registration of Clinical Trials. PLoS One 6: e14701. Available: http://dx.plos.org/10.1371/journal.pone.0014701. Accessed 27 February 2011.
  3. Ghersi D, Pang T (2008) En route to international clinical trial transparency. Lancet 372: 1531–1532. Available: http://www.ncbi.nlm.nih.gov/pubmed/18984176. Accessed 20 July 2012.
  4. Bian Z-X, Wu T-X (2010) Legislation for trial registration and data transparency. Trials 11: 64. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2882906&tool=pmcentrez&rendertype=abstract. Accessed 1 September 2012.
  5. World Association of Medical Editors (WAME) (2005) The Registration of Clinical Trials. Available: http://www.wame.org/resources/policies#trialreg. Accessed 1 September 2012.
  6. Drazen JM, Zarin DA (2007) Salvation by registration. N Engl J Med 356: 184–185. Available: http://www.ncbi.nlm.nih.gov/pubmed/17215537. Accessed 1 September 2012.
  7. Laine C, Horton R, DeAngelis CD, Drazen JM, Frizelle FA, et al. (2007) Clinical trial registration: looking back and moving ahead. Lancet 369: 1909–1911. Available: http://www.ncbi.nlm.nih.gov/pubmed/17560431. Accessed 29 June 2011.
  8. IFPMA/EFPIA/JPMA/PhRMA (2009) Joint Position on the Disclosure of Clinical Trial Information via Clinical Trial Registries and Databases. Available: http://clinicaltrials.ifpma.org/clinicaltrials/fileadmin/files/pdfs/EN/November_10_2009_Updated_Joint_Position_on_the_Disclosure_of_Clinical_Trial_Information_via_Clinical_Trial_Registries_and_Databases.pdf. Accessed 2 September 2012.
  9. International Clinical Trials Registry Platform (ICTRP): Data providers (n.d.). Available: http://www.who.int/ictrp/search/data_providers/en/index.html. Accessed 6 September 2012.
  10. WHO International Clinical Trials Registry Platform (ICTRP) (n.d.). Available: http://www.who.int/ictrp.
  11. Antes G, Dickersin K (2004) Trial registration to prevent duplicate publication. JAMA 291: 2432. Available: http://www.ncbi.nlm.nih.gov/pubmed/15161892. Accessed 12 February 2013.
  12. Antes G (2004) Registering clinical trials is necessary for ethical, scientific and economic reasons. Bull World Health Organ 82: 321. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2622841&tool=pmcentrez&rendertype=abstract. Accessed 12 February 2013.
  13. Van Enst WA, Scholten RJPM, Hooft L (2012) Identification of additional trials in prospective trial registers for Cochrane systematic reviews. PLoS One 7: e42812. Available: http://dx.plos.org/10.1371/journal.pone.0042812. Accessed 31 August 2012.
  14. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM (2009) Trial publication after registration in ClinicalTrials.Gov: a cross-sectional analysis. PLoS Med 6: e1000144. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2728480&tool=pmcentrez&rendertype=abstract. Accessed 7 February 2013.
  15. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P (2009) Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302: 977–984. Available: http://www.ncbi.nlm.nih.gov/pubmed/19724045. Accessed 12 February 2013.
  16. You B, Gan HK, Pond G, Chen EX (2012) Consistency in the analysis and reporting of primary end points in oncology randomized controlled trials from registration to publication: a systematic review. J Clin Oncol 30: 210–216. Available: http://www.ncbi.nlm.nih.gov/pubmed/22162583. Accessed 27 February 2013.
  17. Gandhi R, Jan M, Smith HN, Mahomed NN, Bhandari M (2011) Comparison of published orthopaedic trauma trials following registration in Clinicaltrials.gov. BMC Musculoskelet Disord 12: 278. Available: http://www.biomedcentral.com/1471-2474/12/278. Accessed 27 February 2013.
  18. Wildt S, Krag A, Gluud L (2011) Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis. BMJ Open 1: e000309. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3211057&tool=pmcentrez&rendertype=abstract. Accessed 27 February 2013.
  19. Huić M, Marušić M, Marušić A (2011) Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy. PLoS One 6: e25258. Available: http://dx.plos.org/10.1371/journal.pone.0025258. Accessed 27 February 2013.
  20. The OPEN project: To Overcome failure to Publish nEgative fiNdings (n.d.). Available: http://www.open-project.eu/.
  21. Pandolfini C, Bonati M, Rossi V, Santoro E, Choonara I, et al. (2008) The DEC-net European register of paediatric drug therapy trials: contents and context. Eur J Clin Pharmacol 64: 611–617. Available: http://www.ncbi.nlm.nih.gov/pubmed/18351329. Accessed 12 February 2013.
  22. Viergever RF, Rademaker CMA, Ghersi D (2011) Pharmacokinetic research in children: an analysis of registered records of clinical trials. BMJ Open 1: e000221. Available: http://bmjopen.bmj.com/cgi/content/full/bmjopen-2011-000221. Accessed 10 August 2011.
  23. Viergever RF, Terry RF, Karam G (2013) Use of data from registered clinical trials to identify gaps in health research and development. Bull World Health Organ 91: 416–425C. Available: http://www.who.int/bulletin/volumes/91/6/12-114454/en/.
  24. Viergever RF, Olifson S, Ghaffar A, Terry RF (2010) A checklist for health research priority setting: nine common themes of good practice. Health Res Policy Syst 8: 36. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3018439&tool=pmcentrez&rendertype=abstract. Accessed 17 December 2010.
  25. Røttingen J-A, Regmi S, Eide M, Young AJ, Viergever RF, et al. (2013) Mapping available health R&D data: what's there, what's missing and what role for a Global Observatory. Lancet 382: 1286–1307. Available: http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(13)61046-6/fulltext.
  26. Viergever RF (2013) The mismatch between the health research and development (R&D) that is needed and the R&D that is undertaken: an overview of the problem, the causes, and solutions. Glob Health Action 6: 22450. Available: http://www.ncbi.nlm.nih.gov/pubmed/24119660. Accessed 15 October 2013.
  27. Viergever RF, Rademaker CMA (n.d.) Finding better ways to fill gaps in pediatric health research. Pediatrics: In press.
  28. Viergever RF, Ghersi D (2012) Information on blinding in registered records of clinical trials. Trials 13: 210. Available: http://www.trialsjournal.com/content/13/1/210. Accessed 21 November 2012.
  29. Reveiz L, Chan A-W, Krleza-Jerić K, Granados CE, Pinart M, et al. (2010) Reporting of methodologic information on trial registries for quality assessment: a study of trial records retrieved from the WHO search portal. PLoS One 5: e12484. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2930852&tool=pmcentrez&rendertype=abstract. Accessed 9 August 2011.
  30. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC (2011) The ClinicalTrials.gov results database–update and key issues. N Engl J Med 364: 852–860. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3066456&tool=pmcentrez&rendertype=abstract. Accessed 1 September 2012.
  31. Liu X, Li Y, Yu X, Feng J, Zhong X, et al. (2009) Assessment of registration quality of trials sponsored by China. J Evid Based Med 2: 8–18. Available: http://www.ncbi.nlm.nih.gov/pubmed/21348977. Accessed 7 February 2013.
  32. Scherer M, Trelle S (2008) Opinions on registering trial details: a survey of academic researchers. BMC Health Serv Res 8: 18. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2245930&tool=pmcentrez&rendertype=abstract. Accessed 12 February 2013.
  33. Sekeres M, Gold JL, Chan A-W, Lexchin J, Moher D, et al. (2008) Poor reporting of scientific leadership information in clinical trial registers. PLoS One 3: e1610. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2229844&tool=pmcentrez&rendertype=abstract. Accessed 12 February 2013.
  34. US Food and Drug Administration (2005) Food and Drug Administration Modernization Act (FDAMA) Section 113: Status Report on Implementation. Available: http://www.fda.gov/ForConsumers/ByAudience/ForPatientAdvocates/.
  35. Moja LP, Moschetti I, Nurbhai M, Compagnoni A, Liberati A, et al. (2009) Compliance of clinical trial registries with the World Health Organization minimum data set: a survey. Trials 10: 56. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2734552&tool=pmcentrez&rendertype=abstract. Accessed 12 February 2013.
  36. Glasziou P, Chalmers I, Altman DG, Bastian H, Boutron I, et al. (2010) Taking healthcare interventions from trial to practice. BMJ 341: c3852. Available: http://www.ncbi.nlm.nih.gov/pubmed/20709714. Accessed 12 February 2013.
  37. Zarin DA, Tse T, Ide NC (2005) Trial Registration at ClinicalTrials.gov between May and October 2005. N Engl J Med 353: 2779–2787. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1568386&tool=pmcentrez&rendertype=abstract. Accessed 12 February 2013.
  38. Scoggins JF, Patrick DL (2009) The use of patient-reported outcomes instruments in registered clinical trials: evidence from ClinicalTrials.gov. Contemp Clin Trials 30: 289–292. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2683916&tool=pmcentrez&rendertype=abstract. Accessed 12 February 2013.
  39. Jones CW, Platts-Mills TF (2012) Quality of registration for clinical trials published in emergency medicine journals. Ann Emerg Med 60: 458–64.e1. Available: http://www.ncbi.nlm.nih.gov/pubmed/22503374. Accessed 27 February 2013.
  40. Pino C, Boutron I, Ravaud P (2012) Inadequate description of educational interventions in ongoing randomized controlled trials. Trials 13: 63. Available: http://www.trialsjournal.com/content/13/1/63. Accessed 27 February 2013.
  41. Dekkers OM, Soonawala D, Vandenbroucke JP, Egger M (2011) Reporting of noninferiority trials was incomplete in trial registries. J Clin Epidemiol 64: 1034–1038. Available: http://www.ncbi.nlm.nih.gov/pubmed/21444195. Accessed 27 February 2013.
  42. International Standards for Clinical Trial Registries (2012). Geneva.
  43. Linking related records on the ICTRP Search Portal (n.d.). Available: http://www.who.int/ictrp/unambiguous_identification/bridging/en/index.html. Accessed 29 August 2012.
  44. ClinicalTrials.gov Protocol Data Element Definitions (DRAFT) (2013). Available: http://prsinfo.clinicaltrials.gov/definitions.html.
  45. WHO Trial Registration Data Set (n.d.). Available: http://www.who.int/ictrp/network/trds/en/.
  46. Viergever RF, Ghersi D (2011) The ClinicalTrials.gov results database. N Engl J Med 364: 2169–2170. Available: http://www.ncbi.nlm.nih.gov/pubmed/21631344. Accessed 3 June 2011.
  47. Wallis S (2012) Binomial confidence intervals and contingency tests: Mathematical fundamentals and the evaluation of alternative methods. London. Available: http://www.ucl.ac.uk/english-usage/staff/sean/resources/binomialpoisson.pdf.
  48. Wallis S (2012) Comparing χ2 tests for separability: Interval estimation for the difference between a pair of differences between two proportions, and related tests. London. Available: http://www.ucl.ac.uk/english-usage/staff/sean/resources/comparing-x2-tests.pdf.
  49. Wallis S (2012) Wilson score interval with Singleton et al. adjustment. Available: www.ucl.ac.uk/english-usage/staff/sean/resources/wilson-s-pop-interval.xls. Accessed 1 March 2013.
  50. Huser V, Cimino JJ (2013) Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration. J Am Med Inform Assoc 20: e169–74. Available: http://www.ncbi.nlm.nih.gov/pubmed/23396544. Accessed 12 October 2013.
  51. Prayle AP, Hurley MN, Smyth AR (2012) Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study. BMJ 344: d7373. Available: http://www.bmj.com/content/344/bmj.d7373. Accessed 11 February 2013.
  52. Gopal RK, Yamashita TE, Prochazka AV (2012) Research without results: inadequate public reporting of clinical trial results. Contemp Clin Trials 33: 486–491. Available: http://www.ncbi.nlm.nih.gov/pubmed/22342449. Accessed 27 February 2013.
  53. Gill CJ (2012) How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the Clinicaltrials.gov database. BMJ Open 2. Available: http://bmjopen.bmj.com/content/2/4/e001186.full. Accessed 21 March 2013.
  54. FACTSHEET Registration at ClinicalTrials.gov: As required by Public Law 110–85, Title VIII (2009). Available: http://prsinfo.clinicaltrials.gov/s801-fact-sheet.pdf.
  55. International Clinical Trials Registry Platform (ICTRP): About Trial Registration: Organizations with Policies (n.d.). Available: http://www.who.int/ictrp/trial_reg/en/index2.html. Accessed 9 August 2013.
  56. Krleza-Jerić K, Lemmens T, Reveiz L, Cuervo L, Bero L (2011) Prospective registration and results disclosure of clinical trials in the Americas: a roadmap toward transparency. Rev Panam Salud Publica 30: 87–96.
  57. Costa LOP, Christine Lin C-W, Grossi DB, Mancini MC, Swisher AK, et al. (2013) Clinical trial registration in physiotherapy journals: recommendations from the International Society of Physiotherapy Journal Editors. Man Ther 18: 1–3. Available: http://www.ncbi.nlm.nih.gov/pubmed/23158021. Accessed 27 February 2013.
  58. Tharyan P (2007) Ethics committees and clinical trials registration in India: opportunities, obligations, challenges and solutions. Indian J Med Ethics IV: 168–169.
  59. European & Developing Countries Clinical Trials Partnership (EDCTP): EDCTP Policy on Health Research Ethics Review (n.d.). Available: http://www.edctp.org/Ethics_and_Regulatory.507.0.html. Accessed 24 July 2013.
  60. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, et al. (2010) CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 340: c869. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2844943&tool=pmcentrez&rendertype=abstract. Accessed 27 July 2010.
  61. Chan A-W, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, et al. (2013) SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 346: e7586. Available: http://www.bmj.com/content/346/bmj.e7586#ref-31. Accessed 12 June 2013.
  62. World Medical Association Declaration of Helsinki - Ethical Principles for Medical Research Involving Human Subjects (2013). Available: http://www.wma.net/en/30publications/10policies/b3/. Accessed 29 October 2013.
  63. Patrone D (2010) Discrepancies between research advertisements and disclosure of study locations in trial registrations for USA-sponsored research in Russia. J Med Ethics 36: 431–434. Available: http://www.ncbi.nlm.nih.gov/pubmed/20605999. Accessed 25 September 2012.
  64. Glickman SW, McHutchison JG, Peterson ED, Cairns CB, Harrington RA, et al. (2009) Ethical and scientific implications of the globalization of clinical research. N Engl J Med 360: 816–823. Available: http://www.ncbi.nlm.nih.gov/pubmed/19228627. Accessed 5 September 2012.
  65. Li Z-J, Liu M-L, Wang J-N, Liang F-R (2012) [Method and current situations on acupuncture clinical trial registration in the world]. Zhen Ci Yan Jiu 37: 86, inside back cover. Available: http://www.ncbi.nlm.nih.gov/pubmed/22574577. Accessed 27 February 2013.
  66. Reveiz L, Bonfill X, Glujovsky D, Pinzon CE, Asenjo-Lobos C, et al. (2012) Trial registration in Latin America and the Caribbean's: study of randomized trials published in 2010. J Clin Epidemiol 65: 482–487. Available: http://www.ncbi.nlm.nih.gov/pubmed/22285461. Accessed 27 February 2013.
  67. Pinto RZ, Elkins MR, Moseley AM, Sherrington C, Herbert RD, et al. (2013) Many randomized trials of physical therapy interventions are not adequately registered: a survey of 200 published trials. Phys Ther 93: 299–309. Available: http://www.ncbi.nlm.nih.gov/pubmed/23125281. Accessed 3 June 2013.
  68. Drazen JM (2012) Transparency for Clinical Trials – The TEST Act. N Engl J Med 367: 120808140026005. Available: http://www.ncbi.nlm.nih.gov/pubmed/22873430. Accessed 9 August 2012.
  69. Proposal for a regulation of the European Parliament and of the Council on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC (2012/192(COD)) (2012). Brussels: European Commission.
  70. International Clinical Trials Registry Platform (ICTRP): About Registries - WHO Registry Criteria (n.d.). Available: http://www.who.int/ictrp/network/criteria_summary/en/index.html. Accessed 24 September 2012.
  71. Coombes R (2013) Andrew Witty: the acceptable face of big pharma? BMJ 346: f1458. Available: http://www.bmj.com/content/346/bmj.f1458. Accessed 9 March 2013.
  72. Laine C, Horton R, DeAngelis CD, Drazen JM, Frizelle FA, et al. (2007) Clinical trial registration. BMJ 334: 1177–1178. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1889969&tool=pmcentrez&rendertype=abstract. Accessed 1 September 2012.
  73. Communication from the Commission regarding the guideline on the data fields contained in the clinical trials database provided for in Article 11 of Directive 2001/20/EC to be included in the database on medicinal products provided for in Article 57 of Re (2008). Off J Eur Union C 168: 3–4. Available: http://ec.europa.eu/health/files/eudralex/vol-10/2008_07/c_16820080703en00030004_en.pdf.
  74. Food and Drug Administration Amendments Act of 2007, Title VIII, Section 801. Expanded clinical trial registry data bank (2007). Available: www.govtrack.us/congress/billtext.xpd?bill=h110-3580.