
Evaluation of the Quality of Reporting of Observational Studies in Otorhinolaryngology - Based on the STROBE Statement

  • Martine Hendriksma ,

    Contributed equally to this work with: Martine Hendriksma, Michiel H. M. A. Joosten

    Affiliation Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Utrecht, the Netherlands

  • Michiel H. M. A. Joosten ,

    Contributed equally to this work with: Martine Hendriksma, Michiel H. M. A. Joosten

    Affiliation Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Utrecht, the Netherlands

  • Jeroen P. M. Peters,

    Affiliations Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Utrecht, the Netherlands, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands

  • Wilko Grolman,

    Affiliations Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Utrecht, the Netherlands, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands

  • Inge Stegeman

    i.stegeman@umcutrecht.nl

    Affiliations Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Utrecht, the Netherlands, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands

Abstract

Background

Observational studies are the most frequently published studies in the literature. When randomized controlled trials cannot be conducted because of ethical or practical considerations, an observational study design is the first choice. The STROBE Statement (STrengthening the Reporting of OBservational studies in Epidemiology) was developed to provide guidance on how to adequately report observational studies.

Objectives

The objectives were 1) to evaluate the quality of reporting of observational studies in the otorhinolaryngologic literature using the STROBE Statement checklist, 2) to compare the quality of reporting of observational studies in the top 5 Ear, Nose, Throat (ENT) journals with that in the top 5 general medical journals and 3) to formulate recommendations to improve the reporting of observational research in the otorhinolaryngologic literature.

Methods

The top 5 general medical journals and top 5 otorhinolaryngologic journals were selected based on their ISI Web of Knowledge impact factors. On August 3rd, 2015, we performed a PubMed search using different filters to retrieve observational articles from these journals. Studies were selected from 2010 to 2014 for the general medical journals and from 2015 for the ENT journals. We assessed all STROBE items to examine how many items were reported adequately for each journal type.

Results

The articles in the top 5 general medical journals (n = 11) adequately reported a mean of 69.2% of STROBE items (95% confidence interval (CI): 65.8%–72.7%; median 70.6%), whereas the articles in the top 5 ENT journals (n = 29) adequately reported a mean of 51.4% (95% CI: 47.7%–55.0%; median 50.0%). The two journal types differed significantly in the reporting of STROBE items (p < .001).

Conclusion

The quality of reporting of observational studies in otorhinolaryngologic articles can be considerably improved. The quality of reporting was better in general medical journals than in ENT journals. To improve the quality of reporting of observational studies, we recommend that authors and editors endorse and actively implement the STROBE Statement.

Introduction

Most published studies in the literature are observational studies. Observational studies are recognized as useful for identifying best practices and addressing new hypotheses. Sometimes observational studies create the need for further investigation and lead to the emergence of randomized controlled trials (RCTs) [1]. Observational studies also offer the opportunity to establish high external validity in a practical setting, which is difficult to achieve in RCTs [2]. While RCTs are advocated as the gold standard for the evaluation of treatment-oriented interventions, well designed observational studies have been shown to provide similar results [3–5]. Because of their design, observational studies are prone to bias, confounding and chance [6]. However, when RCTs cannot be conducted because of ethical or practical considerations, an observational study design is the first choice [7,8]. Clear presentation of the methods, execution and analyses of an observational study is crucial for valid implementation of its results in clinical care. Poor reporting of observational studies can further reduce their usefulness.

Therefore, the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement was introduced in 2007 to improve the quality of reporting of observational studies [9]. The aim of the STROBE Statement is to increase transparency in reporting [9]. A checklist of 22 items was constructed in order to assess the quality of reporting of observational studies. The STROBE Statement is currently endorsed by a growing number of biomedical journals [10].

To our knowledge, the quality of reporting of observational studies in otorhinolaryngology has not been assessed before. In otorhinolaryngology, observational studies are a frequently used study design. Therefore, our objective was to assess the quality of reporting of observational research in the otorhinolaryngologic literature using the STROBE checklist. As a second objective, we aimed to compare the quality of reporting between the top 5 otorhinolaryngologic journals and the top 5 general medical journals, the latter serving as a benchmark for presumably high quality of reporting. Finally, based on our findings, recommendations were formulated to improve the reporting of observational studies in otorhinolaryngology.

Methods

Journals

We selected the top 5 medical journals (New England Journal of Medicine (NEJM), The Lancet, Journal of the American Medical Association (JAMA), British Medical Journal (BMJ) and PLOS Medicine (PLOS Med)) and the top 5 Ear Nose Throat (ENT) journals (Rhinology, Hearing Research (Hear Res), Ear & Hearing (Ear Hear), Head & Neck and Journal of the Association for Research in Otolaryngology (JARO)) based on their ISI Web of Knowledge impact factors. Table 1 presents the ISI Web of Knowledge impact factors of 2012 for general medical journals and of 2014 for ENT journals (www.webofknowlegde.com, date of access August 3rd, 2015).

Table 1. Impact factors and STROBE endorsement of the top 5 general medical journals and top 5 ENT journals.

https://doi.org/10.1371/journal.pone.0169316.t001

In addition, we evaluated the ‘Instructions to authors’ section on the websites of the included journals to check whether they endorse the STROBE Statement (see also Table 1).

Search

We performed a PubMed search on August 3rd, 2015 using several search syntaxes. First, to retrieve only observational studies, a syntax developed by the Scottish Intercollegiate Guidelines Network was used [11]. Second, an adapted version of the Cochrane ENT search syntax was used to retrieve otorhinolaryngologic articles [12]. Third, a filter was used to retrieve articles published in the top 5 general medical journals (Table 1) and articles published in the top 5 ENT journals (Table 1) separately. Fourth, a date restriction filter was applied. To retrieve sufficient otorhinolaryngologic articles published in general medical journals, we searched from January 1st, 2010 until December 31st, 2014. We did not search for articles published before 2010, since the STROBE Statement was first published in 2007. Hereby, we allowed sufficient time for implementation of the STROBE Statement by study authors. Naturally, it is easier to retrieve otorhinolaryngologic articles in ENT journals, so for articles published in ENT journals we applied a date restriction from January 1st, 2015 until August 3rd, 2015. Finally, a combination of search syntaxes was made to retrieve observational studies from general medical journals and ENT journals, respectively (see S1 File).

Study selection

Two authors (MH and MHMAJ) independently assessed titles, abstracts and full texts of the retrieved articles to check whether the study was indeed an observational study and whether it was conducted in the otorhinolaryngologic field. To be considered an observational study, studies must have had a cross-sectional, cohort or case-control design. To be considered a study in the otorhinolaryngologic field, studies must have assessed patient populations generally treated by ENT physicians or procedures generally performed by ENT physicians, including procedures performed by head and neck surgeons. Discrepancies between the two reviewers were discussed until consensus was reached.

STROBE Statement adherence

To score the quality of reporting, the most recent version of the STROBE Statement was used [9]. The included articles were read and scored independently by two authors (MH and MHMAJ). We evaluated which items of the STROBE checklist were adequately reported. The total number of items on the STROBE checklist is 34 (subitems included). Some items (6a, 6b, 12d, 14c, 15) are specific to certain study designs only (e.g. cohort or case-control). Consequently, if an item was not applicable to the study design, it was scored as ‘not applicable’. For a more detailed description of the requirements to score ‘adequately reported’, see S2 File.

Because of an imbalance in the number of articles per journal category, we scored one article from the general medical journals for every three articles from the ENT journals, so that a possible learning effect in the use of the STROBE checklist was distributed evenly across both journal categories. Differences in opinion were discussed until consensus was reached.

Data analysis

We divided the number of adequately reported items by the total number of applicable items, which resulted in a proportion of adequately reported items. A higher proportion reflects more adequate reporting. Means, medians and 95% confidence intervals (CIs) were calculated per item.
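
The analyses in this study were performed in SPSS (see below). Purely as an illustration of the calculation described above, the following is a minimal Python sketch using hypothetical item scores and article proportions; none of these values come from the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical STROBE item scores for one article:
# 1 = adequately reported, 0 = inadequately reported, None = not applicable.
item_scores = [1, 0, 1, 1, None, 1, 0, 1, 1, 1, None, 0, 1, 1]

applicable = [s for s in item_scores if s is not None]
proportion = sum(applicable) / len(applicable)  # adequately reported / applicable
print(f"Adequately reported: {proportion:.1%}")

# Hypothetical per-article proportions for one journal category.
proportions = np.array([0.71, 0.65, 0.74, 0.68, 0.70])
mean = proportions.mean()
median = np.median(proportions)
# 95% confidence interval of the mean, based on the t distribution.
ci_low, ci_high = stats.t.interval(0.95, df=len(proportions) - 1,
                                   loc=mean, scale=stats.sem(proportions))
print(f"mean {mean:.1%}, median {median:.1%}, 95% CI {ci_low:.1%}-{ci_high:.1%}")
```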

The two-tailed Mann-Whitney U test for two independent samples was used to test whether there was a statistically significant difference between the STROBE scores of articles published in the top 5 general medical journals and those published in the top 5 ENT journals.
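
For illustration only, such a comparison could be run as follows in Python; the per-article proportions below are hypothetical placeholders, not the study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-article proportions of adequately reported STROBE items.
general_medical = [0.74, 0.71, 0.68, 0.70, 0.65, 0.72, 0.69, 0.71, 0.66, 0.73, 0.70]
ent = [0.55, 0.48, 0.50, 0.52, 0.44, 0.49, 0.53, 0.47, 0.51, 0.50, 0.46, 0.54]

# Two-tailed Mann-Whitney U test for two independent samples.
u_stat, p_value = mannwhitneyu(general_medical, ent, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```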

Furthermore, the interobserver agreement (Cohen’s kappa) was calculated to determine whether there were large differences in the scoring of items between the two authors.
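
Again purely as an illustration, Cohen’s kappa for two raters can be computed as sketched below; the item-level judgements are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level judgements of the two raters for the same items:
# 1 = adequately reported, 0 = inadequately reported.
rater_1 = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
rater_2 = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.61-0.80 is interpreted as good agreement [13]
```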

Statistical tests were performed using the SPSS v21 statistics package. A p-value < .05 was considered statistically significant.

Results

Search

The search process is shown in Fig 1. The combined search syntaxes yielded 42 articles from general medical journals and 44 from ENT journals.

Fig 1. Flowchart of search.

Date of search: August 3rd, 2015. For the complete search syntax, see S1 File. ENT = Ear, Nose, Throat; OS = observational study; N = number.

https://doi.org/10.1371/journal.pone.0169316.g001

Study selection

Of the 42 articles in the general medical journals, 14 articles were not considered otorhinolaryngologic articles, 14 articles were considered not to be an observational study and 3 articles were neither considered an otorhinolaryngologic nor an observational study. Finally, we included 11 observational studies on otorhinolaryngologic topics from the general medical journals.

Of the 44 articles in the ENT journals, 11 articles were not considered to be an observational study, 3 articles were not considered otorhinolaryngologic articles, and 1 article was considered neither an otorhinolaryngologic nor an observational study. Finally, we included 29 observational studies on otorhinolaryngologic topics from ENT journals.

STROBE Statement adherence

The 11 articles published in general medical journals (NEJM = 1, Lancet = 1, JAMA = 4, BMJ = 2, PLOS Med = 3) reported a mean of 69.2% (95% CI: 65.8%–72.7%; median 70.6%) of STROBE items adequately. The 29 articles published in ENT journals (Rhinology = 1, Hear Res = 0, Ear & Hearing = 8, Head & Neck = 20 and JARO = 0) reported a mean of 51.4% (95% CI: 47.7%–55.0%; median 50.0%) of STROBE items adequately. The exact percentages of adequately reported STROBE items for studies published in general medical journals and ENT journals are presented in Table 2. A graphic illustration is provided in Fig 2.

Fig 2. Adequately reported STROBE items.

For an overview of all STROBE items, see S2 File. Items highlighted with a # differed significantly between journal categories. ENT = Ear, Nose, Throat.

https://doi.org/10.1371/journal.pone.0169316.g002

Table 2. Data table of Fig 2.

Percentage of adequately reported STROBE items per journal category.

https://doi.org/10.1371/journal.pone.0169316.t002

We compared the difference in reporting of individual items between observational studies published in general medical journals and those published in ENT journals. Several items were reported inadequately more frequently in ENT journals than in general medical journals. The difference in reporting between journal categories was statistically significant for 8 out of 34 items (highlighted with a # in Fig 2). First, less than 25% of the articles published in ENT journals reported funding and the role of the funders, whereas this was reported in 100% of the articles published in general medical journals (item 22). Second, several other items were scored significantly better in general medical journals than in ENT journals. Item 6a (eligibility criteria and the sources and methods of selection of participants) and item 8 (data sources and measurement for each variable of interest) were scored better in general medical journals, as was item 12b (methods used to examine subgroups and interactions). The same held for items 13a, 13b and 13c (reporting of the numbers of individuals at each stage of the study, reasons for non-participation at each stage and the use of a flow diagram). Also items 14a, 14b and 14c (descriptive data of the study participants) and, finally, item 15 (outcome data) were scored significantly better in general medical journals.

Besides these differences, both types of journals scored insufficiently on several other items. For example, both journal categories scored poorly in reporting the variables (item 7); likewise, both categories reported items 9, 12c, 12d and 12e adequately in less than 20% of articles. Moreover, none of the articles published in ENT journals reported sensitivity analyses (item 12e) or translated estimates of relative risk into absolute risk (item 16c).

The overall interobserver agreement (kappa) was 0.64. A kappa score between 0.61 and 0.80 indicates good agreement [13]. S3 File shows the interobserver agreement per STROBE item for each journal category, as well as for both journal categories combined. The items scored with the most discrepancy between the two authors were the definition of outcomes (item 7), the handling of quantitative variables (item 11), the reporting of numbers of individuals at each stage of the study (item 13a), the characteristics of study participants (item 14a), the reporting of unadjusted estimates (item 16c), the overall interpretation of results (item 20) and generalizability (item 21).

Discussion

To the best of our knowledge, this study is the first to assess the quality of reporting of observational studies in the otorhinolaryngologic literature. Furthermore, we compared the top 5 general medical journals with the top 5 ENT journals using the STROBE checklist. This study shows a substantial difference in the quality of reporting between ENT journals and general medical journals.

Interpretation of results

First, notable differences were observed regarding the eligibility criteria (item 6a). Eligibility criteria help readers understand the applicability and generalizability of the reported results. In observational studies, these characteristics are often not adequately reported [14].

Second, sub-analyses were reported in only about 50% of the articles published in the top 5 ENT journals. Although it is one of the items of the STROBE checklist, this item (item 12b) is subject to debate [15,16]. The reporting of sub-analyses allows one to examine whether effects or associations differ across groups [17].

Third, missing data are common in observational research (item 14b). It is essential to describe for which variables of interest data were missing, because (in)voluntary exclusion of patients can lead to a distorted picture of the results.

Fourth, other analyses were reported in only 3% of the articles published in ENT journals (item 17). However, these analyses are important, because they may address specific subgroups, display potential interaction between risk factors, calculate attributable risks or use alternative definitions of study variables in sensitivity analyses [17].

Fifth, almost 80% of the articles published in ENT journals did not report funding (item 22), although there are strong associations between the source of funding and conclusions [18,19]. It is therefore important to be transparent about financial support, as specific types of sponsorship may be associated with positive study results [20].

Sixth, the use of a flow diagram is an easy way to display extensive information in a compact form (item 13c); 7% of the articles published in ENT journals used a flow diagram, compared to 45% of the articles published in general medical journals.

Seventh, reasons for non-participation (item 13b) were reported in 3% of the articles published in ENT journals. This item lets readers judge whether the study population was representative of the target population [17].

Last, translating estimates of relative risk into absolute risk (item 16c) was not reported in any of the ENT articles. However, in some cases this item may not be relevant.

As for the top 5 general medical journals, 11 items scored notably lower than expected, i.e. less than 50%: items 4, 7, 9, 12c, 12d, 12e, 13b, 13c, 14b, 16c and 17. Of these, 6 items (items 4, 7, 9, 12c, 12d and 12e) scored similarly poorly compared to the top 5 ENT journals (no statistically significant difference).

Comparison with literature

In other research fields, such as ophthalmology and hand surgery, items such as bias (item 9), the explanation of missing data (item 12c), the use of a flow diagram (item 13c) and indicating missing data for each variable of interest (item 14b) scored similarly poorly to the articles published in ENT journals [21,22]. Items that scored better in these research fields were presenting key elements of study design early in the paper (item 4), defining all outcomes, exposures, predictors, potential confounders and effect modifiers (item 7), describing loss to follow-up/matching of cases and controls/sampling (item 12d) and displaying characteristics of study participants (item 14c). Only item 10, the explanation of how the study size was arrived at, scored better in the articles published in ENT journals in our sample. In another study, Delaney et al. evaluated platelet transfusion studies according to the STROBE criteria [23]. They also scored missing data as inadequately reported, in accordance with our results. Unfortunately, an average compliance score was not calculated [23]. Parsons et al. [24] evaluated general orthopedic journals and found an overall average compliance with the STROBE checklist of 58% (95% CI: 56%–60%), close to our findings. There were 9 items that scored similarly poorly (items 9, 12c, 12d, 12e, 13b, 13c, 14b and 16c). It was unclear whether the journals in which the included articles were published endorsed the STROBE Statement. Since the articles included by Parsons et al. were selected from the years 2005 to 2010, active endorsement was not entirely possible, as the STROBE Statement was first published in 2007.

Similar results were seen in the research field of plastic surgery, where the 9 items mentioned above, along with item 17, scored similarly poorly compared with our results [25]. This study also suggested that reporting could be improved by making the STROBE checklist mandatory at submission [25].

One other study showed that the quality of reporting of confounding had improved in some aspects, but the authors concluded that the overall quality remains suboptimal [26]. Quantitative bias analysis (item 9) was rarely reported adequately, which we also observed.

Two reporting guidelines (Consolidated Standards of Reporting Trials (CONSORT) and Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA)) were published before the STROBE Statement. Several studies found that the endorsement of the CONSORT and PRISMA Statements led to a better quality of reporting [27–34]. Two other articles evaluating the CONSORT Statement concluded that active endorsement would further increase the quality of reporting [35,36]. Unfortunately, no studies have been conducted evaluating the STROBE Statement across multiple disciplines simultaneously or investigating the implementation of the STROBE Statement. For better applicability, some papers have adapted the STROBE Statement to their specific research field [37,38].

Methodological considerations

Strengths of our study include the independent assessment of all articles by two authors. In addition, to distribute a possible learning effect evenly despite the imbalance in the number of articles per journal type, we scored one article from general medical journals for every three articles from ENT journals. All details of our search are reported clearly and transparently, and our search can therefore easily be reproduced. Furthermore, some of the authors were involved in the assessment of the PRISMA and CONSORT Statement checklists [39,40]. This scientific context and experience could have improved the overall quality of our study. Additionally, the interobserver agreement (Cohen’s kappa) was calculated to determine which items were often disputed (these items deserve extra attention when assessing, and possibly even rephrasing by the STROBE workgroup, because they may be hard to interpret).

However, our study also has limitations. First, scoring of items remains a subjective task, with differences between observers. Several items were prone to discussion between the scoring authors. However, our analysis showed that we reached good interobserver agreement (Cohen’s kappa 0.64). Second, only eleven studies were found for the top 5 general medical journals. This small sample size makes it more difficult to draw appropriate conclusions for this journal category. Third, the inclusion periods (top 5 general medical journals: January 1st, 2010 – December 31st, 2014; top 5 ENT journals: January 1st, 2015 – August 3rd, 2015) of the two journal types were not identical. As expected, fewer otorhinolaryngologic articles were published in general medical journals than in ENT journals. Even with a broader time frame in the search syntax for articles from general medical journals, this still resulted in fewer articles (n = 11) than the number of articles included from ENT journals (n = 29). However, retrieving earlier studies would not lead to a more accurate comparison, since the STROBE Statement was first published in 2007. Consequently, earlier studies would not have had time to implement the STROBE Statement. On the other hand, we did not broaden our date restriction for ENT journals, since 29 articles form a good sample to base conclusions on. Including otorhinolaryngologic articles from ENT journals from 2010–2014 would have resulted in many more articles to score, leading to incomparable samples. Moreover, including more articles published in ENT journals would probably not have changed our conclusions. Finally, by choosing the most recent articles published in ENT journals, we would expect the greatest improvement in the quality of reporting, because more time had been available to implement the STROBE Statement. It would be interesting to evaluate the quality of reporting over a period of time. However, eleven articles from five years do not provide enough statistical power for such a comparison.

Recommendations

Clear and transparent reporting in research papers enables clinicians and researchers to evaluate the validity of articles. The STROBE Statement was developed to help researchers report their observational studies adequately. Therefore, we think it is highly plausible that using the STROBE Statement will improve the quality of reporting. We recommend that authors of otorhinolaryngologic articles use the STROBE Statement and that editors of otorhinolaryngologic journals actively endorse this statement in their guidelines for authors.

Conclusion

The current quality of reporting of observational studies in otorhinolaryngologic journals is suboptimal. According to the STROBE Statement checklist, the reporting of otorhinolaryngologic observational studies is significantly better in articles published in general medical journals (2010–2014) than in articles published in ENT journals (2015). We recommend that authors of otorhinolaryngologic articles use the STROBE checklist to help them improve the quality of reporting. Furthermore, we suggest that editors of ENT journals actively endorse the STROBE Statement in their submission process.

Supporting Information

S1 File. Search Syntaxes.

Date of search: August 3rd, 2015. We used a search syntax developed by the Scottish Intercollegiate Guidelines Network [11]. An adapted version of the Cochrane ENT search syntax was used to retrieve otorhinolaryngologic articles [12]. Finally, a date restriction was applied from January 1st, 2010 until December 31st, 2014 for the top 5 general medical journals and from January 1st, 2015 until August 3rd, 2015 for the top 5 ENT journals.

https://doi.org/10.1371/journal.pone.0169316.s001

(DOCX)

S2 File. Explanation of reported items of the STROBE checklist.

The original STROBE Statement can be accessed via the STROBE website [10] or in Von Elm et al. [9]. For explanation and elaboration, see Vandenbroucke et al. [17]. Items were scored as ‘adequately reported’, ‘inadequately reported’ or ‘not applicable’. Five items of the STROBE Statement (6a, 6b, 12d, 14c, 15) are specific to certain study designs. If any of these items was not applicable to the study design, it was scored as ‘not applicable’. ‘Not applicable’ items were not counted in the total number of items to score.

https://doi.org/10.1371/journal.pone.0169316.s002

(DOCX)

S3 File. Kappa per STROBE item.

The interobserver agreement for each STROBE item was calculated for both journal categories, as well as both journal categories together. For an explanation of scale division for the interobserver agreement, see Altman [13].

https://doi.org/10.1371/journal.pone.0169316.s003

(DOCX)

Author Contributions

  1. Conceptualization: IS JPMP.
  2. Data curation: MHMAJ MH JPMP.
  3. Formal analysis: JPMP.
  4. Investigation: MHMAJ MH.
  5. Methodology: IS JPMP.
  6. Project administration: MHMAJ MH JPMP IS.
  7. Resources: MHMAJ MH JPMP IS.
  8. Supervision: JPMP IS WG.
  9. Validation: IS JPMP.
  10. Visualization: MHMAJ MH JPMP.
  11. Writing – original draft: MHMAJ MH.
  12. Writing – review & editing: MHMAJ MH JPMP IS WG.

References

  1. Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ. 2004;328: 39–41. pmid:14703546
  2. Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996;312: 1215–1218. pmid:8634569
  3. Song JW, Chung KC. Observational studies: cohort and case-control studies. Plast Reconstr Surg. 2010;126: 2234–2242. pmid:20697313
  4. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342: 1878–1886. pmid:10861324
  5. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342: 1887–1892. pmid:10861325
  6. Jepsen P, Johnsen SP, Gillman MW, Sorensen HT. Interpretation of observational studies. Heart. 2004;90: 956–960. pmid:15253985
  7. The periodic health examination. Canadian Task Force on the Periodic Health Examination. Can Med Assoc J. 1979;121: 1193–1254. pmid:115569
  8. Sackett DL. Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest. 1989;95: 2S–4S. pmid:2914516
  9. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4: e296. pmid:17941714
  10. STROBE statement: Strengthening the reporting of observational studies in epidemiology. Jan 2016. Available: http://www.strobe-statement.org.
  11. Scottish Intercollegiate Guidelines Network. Search filter for observational studies (adapted). Accessed August 3rd, 2015. Available: http://www.sign.ac.uk/methodology/filters.html#obs.
  12. The editorial team, Cochrane Ear Nose and Throat Disorders Group. About the Cochrane Collaboration (Cochrane Review Groups (CRGs)). 2012, issue 7, art. no.: ENT. CENTRAL search strategy.
  13. Altman DG. Practical statistics for medical research. Chapman and Hall. 1991.
  14. Tooth L, Ware R, Bain C, Purdie DM, Dobson A. Quality of reporting of observational longitudinal research. Am J Epidemiol. 2005;161: 280–288. pmid:15671260
  15. Pocock SJ, Collier TJ, Dandreo KJ, de Stavola BL, Goldman MB, Kalish LA, et al. Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ. 2004;329: 883. pmid:15469946
  16. Gotzsche PC. Believability of relative risks and odds ratios in abstracts: cross sectional study. BMJ. 2006;333: 231–234. pmid:16854948
  17. Vandenbroucke JP, von Elm E, Altman DG, Gotzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007;4: e297. pmid:17941715
  18. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003;289: 454–465. pmid:12533125
  19. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003;326: 1167–1170. pmid:12775614
  20. Sun GH, Houlton JJ, MacEachern MP, Bradford CR, Hayward RA. Influence of study sponsorship on head and neck cancer randomized trial results. Head Neck. 2013;35: 1515–1520. pmid:22987508
  21. Fung AE, Palanki R, Bakri SJ, Depperschmidt E, Gibson A. Applying the CONSORT and STROBE statements to evaluate the reporting quality of neovascular age-related macular degeneration studies. Ophthalmology. 2009;116: 286–296. pmid:19091408
  22. Sorensen AA, Wojahn RD, Manske MC, Calfee RP. Using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement to assess reporting of observational trials in hand surgery. J Hand Surg Am. 2013;38: 1584–9.e2. pmid:23845586
  23. Delaney M, Meyer E, Cserti-Gazdewich C, Haspel RL, Lin Y, Morris A, et al. A systematic assessment of the quality of reporting for platelet transfusion studies. Transfusion. 2010;50: 2135–2144. pmid:20497518
  24. Parsons NR, Hiskens R, Price CL, Achten J, Costa ML. A systematic survey of the quality of research reporting in general orthopaedic journals. J Bone Joint Surg Br. 2011;93: 1154–1159. pmid:21911523
  25. Agha RA, Lee SY, Jeong KJ, Fowler AJ, Orgill DP. Reporting Quality of Observational Studies in Plastic Surgery Needs Improvement: A Systematic Review. Ann Plast Surg. 2016;76: 585–589. pmid:25643190
  26. Pouwels KB, Widyakusuma NN, Groenwold RH, Hak E. Quality of reporting of confounding remained suboptimal after the STROBE guideline. J Clin Epidemiol. 2016;69: 217–224. pmid:26327488
  27. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185: 263–267. pmid:16948622
  28. Moher D, Jones A, Lepage L, CONSORT Group (Consolidated Standards for Reporting of Trials). Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285: 1992–1995. pmid:11308436
  29. Turner L, Shamseer L, Altman DG, Weeks L, Peters J, Kober T, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012;11: MR000030. pmid:23152285
  30. Tao KM, Li XQ, Zhou QH, Moher D, Ling CQ, Yu WF. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One. 2011;6: e27611. pmid:22110690
  31. Fleming PS, Seehra J, Polychronopoulou A, Fedorowicz Z, Pandis N. A PRISMA assessment of the reporting quality of systematic reviews in orthodontics. Angle Orthod. 2013;83: 158–163. pmid:22720835
  32. Panic N, Leoncini E, de Belvis G, Ricciardi W, Boccia S. Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses. PLoS One. 2013;8: e83138. pmid:24386151
  33. Tunis AS, McInnes MD, Hanna R, Esmail K. Association of study quality with completeness of reporting: have completeness of reporting and quality of systematic reviews and meta-analyses in major radiology journals changed since publication of the PRISMA statement? Radiology. 2013;269: 413–426. pmid:23824992
  34. Stevens A, Shamseer L, Weinstein E, Yazdi F, Turner L, Thielman J, et al. Relation of completeness of reporting of health research to journals' endorsement of reporting guidelines: systematic review. BMJ. 2014;348: g3804. pmid:24965222
  35. Pandis N, Shamseer L, Kokich VG, Fleming PS, Moher D. Active implementation strategy of CONSORT adherence by a dental specialty journal improved randomized clinical trial reporting. J Clin Epidemiol. 2014;67: 1044–1048. pmid:24837296
  36. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344: e4178. pmid:22730543
  37. White RG, Hakim AJ, Salganik MJ, Spiller MW, Johnston LG, Kerr L, et al. Strengthening the Reporting of Observational Studies in Epidemiology for respondent-driven sampling studies: "STROBE-RDS" statement. J Clin Epidemiol. 2015;68: 1463–1471. pmid:26112433
  38. Nicholls SG, Quach P, von Elm E, Guttmann A, Moher D, Petersen I, et al. The REporting of Studies Conducted Using Observational Routinely-Collected Health Data (RECORD) Statement: Methods for Arriving at Consensus and Developing Reporting Guidelines. PLoS One. 2015;10: e0125620. pmid:25965407
  39. Peters JP, Hooft L, Grolman W, Stegeman I. Assessment of the quality of reporting of randomised controlled trials in otorhinolaryngologic literature—adherence to the CONSORT statement. PLoS One. 2015;10: e0122328. pmid:25793517
  40. Peters JP, Hooft L, Grolman W, Stegeman I. Reporting Quality of Systematic Reviews and Meta-Analyses of Otorhinolaryngologic Articles Based on the PRISMA Statement. PLoS One. 2015;10: e0136540. pmid:26317406