The relationship between endorsing reporting guidelines or trial registration and the impact factor or total citations in surgical journals

Background A journal’s impact factor (IF) and total citations are often used as indicators of its publication quality. Furthermore, journals that require authors to abide by reporting guidelines or conduct trial registration generally have a higher quality of reporting. In this study, we sought to explore the potential associations between the enforcement of reporting guidelines or trial registration and a surgical journal’s IF or total citations in order to find new approaches and ideas for improving journal publication quality. Methods We examined surgical journals indexed in the Science Citation Index Expanded of the 2018 Journal Citation Reports (JCR) to quantify the use of reporting guidelines and study registration. We reviewed the “instructions for authors” of each journal and used multivariable linear regression analysis to determine which requirements were associated with the journal IF and total citations. The dependent variable was the logarithm base 10 of the IF in 2018 or the logarithm base 10 of total citations in 2018 (the results were presented as geometric means, specifically the ratio of the “endorsed group” results to the “not endorsed group” results). The independent variable was one of the requirements (endorsed vs. not endorsed). Models adjusted for publication region, language, start year, publisher, and journal size (the latter only used to adjust total citations). Results We included 188 surgical journals in our study.
The results of multivariable linear regression analysis showed that journal IF was associated (P < 0.01) with the following requirements: randomized controlled trial (RCT) registration (geometric means ratio (GR) = 1.422, 95% CI [1.197–1.694]), Consolidated Standards of Reporting Trials (CONSORT) statement (1.318, [1.104–1.578]), Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (1.390, [1.148–1.683]), Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement (1.556, [1.262–1.919]), Standards for Reporting Diagnostic Accuracy (STARD) statement (1.585, [1.216–2.070]), and Meta-analysis of Observational Studies in Epidemiology (MOOSE) statement (2.113, [1.422–3.133]). We found associations between the endorsement of RCT registration (GR = 1.652, 95% CI [1.268–2.153]), CONSORT (1.570, [1.199–2.061]), PRISMA (1.698, [1.271–2.270]), STROBE (2.023, [1.476–2.773]), STARD (2.173, [1.452–3.243]), and MOOSE statements (2.249, [1.219–4.150]) and the number of total citations. Conclusion The presence of reporting guidelines and trial registration was associated with a higher IF or more total citations in surgical journals. If more surgical journals incorporate these policies into their submission requirements, this may improve publication quality, thus increasing their IF and total citations.


INTRODUCTION
For research institutions, universities, and individual scholars to understand a publication's impact, the use of bibliometric indices is crucial (Roldan-Valadez et al., 2019). There are many bibliometric tools used to measure journal quality, including the impact factor (IF), total citations, eigenfactor score, h-index, and source normalized impact per paper (SNIP). These tools correspond to different aspects of journal performance, such as impact, output, and reputation (Jones, Huggett & Kamalski, 2011). The IF and total citations are the two indicators considered most important to authors, medical editors, funding agencies, and the journal itself (Jones, Huggett & Kamalski, 2011; Roldan-Valadez et al., 2019). Garfield (1996) from the Institute of Scientific Information (ISI) first proposed using reference counting to measure a publication's impact. The IF was first used for the 1961 Science Citation Index (SCI) in 1963 and has since been widely regarded as one of the primary indicators for evaluating the quality, importance, and impact of medical journals in their respective disciplines (Garfield, 1996; Roldan-Valadez et al., 2019). A given journal's IF is calculated by dividing the number of citations the journal received in one year for articles published in the previous 2 years (numerator) by the number of articles published over the previous 2 years (denominator) (Kumar, Upadhyay & Medhi, 2009). For example, Journal X's IF for 2018 ("IF2018") would be the total number of times the articles published in Journal X in 2016 and 2017 were cited in 2018, divided by the total number of articles published by Journal X in 2016 and 2017. Journal size is a factor when determining journal subscriptions, and studies have shown that journal size is associated with longitudinal journal IF stability (Koelblinger et al., 2019).
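The two-year IF arithmetic above can be sketched in a few lines (an illustrative helper of our own; the function name and the figures for "Journal X" are hypothetical, not taken from any real journal):

```python
def impact_factor(citations_to_prev_two_years: int,
                  items_published_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    if items_published_prev_two_years == 0:
        raise ValueError("no citable items published in the window")
    return citations_to_prev_two_years / items_published_prev_two_years

# Hypothetical Journal X: its 2016-2017 articles were cited 1,200 times
# in 2018, and it published 400 citable items in 2016-2017.
print(impact_factor(1200, 400))  # -> 3.0
```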
A journal may request that authors include references from its previous publications in order to manipulate and increase its IF through self-citations (Ioannidis & Thombs, 2019). Even so, a journal's IF is frequently cited in the scientific world when determining its quality and scientific output (Kumar, Upadhyay & Medhi, 2009). A journal's total citations refer to the total number of times it has been cited by all journals in the journal citation reports (JCR) database in a year, and are used to reflect the journal's value, role, and status in the scientific community.
In seeking to improve their journals' competitiveness and publication quality, scientific journal publishers rely on peer review (Wierzbinski-Cross, 2017), statistical review (Dexter & Shafer, 2017), and editorial policy. Editorial policy is known to affect the quality of a journal's articles and usually includes requirements such as the disclosure of conflicts of interest (COI), copyright issues, article layout, chart format, reporting guidelines, trial registration, and data availability.
Reporting guidelines may improve reporting quality by promoting openness and transparency of research information, as well as controlling selective reporting (Sims et al., 2018). Reporting guidelines specify in detail how to standardize and comprehensively report each part of a study, from the abstract to the conclusion, especially in cases where a bias may be present. The guidelines typically dictate the use of checklists, flow diagrams, or explicit text. Certain reporting guidelines are already well known by researchers, e.g., the Consolidated Standards of Reporting Trials (CONSORT) statement published in 1996 (Junker et al., 1996), the Standards for Reporting Diagnostic Accuracy (STARD) statement published in 2003 (Bossuyt & Reitsma, 2003), the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement published in 2007 (von Elm et al., 2007), and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement published in 2009 (Moher et al., 2009). Experts have systematically developed reporting guidelines applicable to different types of research, added suggestions to improve scientific writing, and developed instructions for authors in specific journals. The Enhancing the Quality and Transparency of Health Research (EQUATOR) network was founded in 2006 and put into practice in 2008. Its website includes comprehensive reporting guidelines, and it seeks to raise awareness of and promote the adoption of good publishing practices (Moher et al., 2008; Simera et al., 2010). The International Committee of Medical Journal Editors (ICMJE) also has recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals that are widely regarded as an effective solution for standardizing manuscript preparation and formatting (Matheson, 2011). These recommendations cover ethical issues, publication problems, and the preparation, structure, and submission of manuscripts (http://www.icmje.org/).
Clinical trial registration is also used to improve the reporting quality of articles (De Angelis et al., 2004). Trial registration promotes transparency and accountability and may also limit bias (Sims et al., 2018). Researchers and journal editors tend to publish positive rather than negative trials, which leads to the selective reporting of experimental results (Bonati & Pandolfini, 2006; Pansieri, Pandolfini & Bonati, 2015). The full implementation of clinical trial registration enables each trial to be publicly recorded, which is of great benefit to those who want to obtain comprehensive clinical evidence (McFadden et al., 2015). Section 801 of the Food and Drug Administration Amendments Act (FDAAA 801) requires that relevant clinical trials be registered within 21 days of participant enrollment. The ICMJE and some journals require clinical trial registration prior to participant enrollment (Sim et al., 2006).
Currently, most medical professionals access cutting-edge knowledge and the subject dynamics of their profession via medical journals. Additionally, surgeons' clinical treatment decisions depend to a large extent on the reported results of clinical trials. Surgical journals therefore tend to impose stricter requirements on the articles they publish. Empirical studies have shown that journals that require authors to abide by reporting guidelines and conduct trial registration generally have higher quality reporting (Smith et al., 2015; Toews et al., 2017), and these policies are effective at achieving research repeatability and improving reporting transparency (Kim et al., 2012; Simera et al., 2010).
We therefore conducted a systematic review of several disciplines related to clinical medicine (including internal medicine, oncology, nursing, obstetrics and gynecology, and anesthesiology), and collected each journal's policy requirements and related information for analysis. We found that a large number of journals had not adopted reporting guidelines or study registration, which suggests that these measures are not being fully utilized. Given that journals implementing these policies publish higher-quality literature, are these policies related to publication quality and journal reputation? How is this association reflected in a journal's IF and total citations? To investigate these questions, we took surgical journals as an example and explored the association between the presence of reporting guidelines or study registration and the IF or total citations. We hypothesized that the implementation of these measures was associated with a higher IF and more total citations in surgical journals. If these policies gain the attention of surgical journals, they may provide new approaches and ideas for improving publication quality and journal reputation.

Study design
We conducted a cross-sectional study to investigate surgical journals' compliance with reporting guidelines and study registration. At the same time, we extracted journal characteristic information in order to carry out a follow-up analysis. The STROBE guidelines were observed during the design, performance, analysis, and reporting stages of our study.

Reporting guidelines or study registration
Data were collected from the sample journals with regard to adopting reporting guidelines or study registration. All requirements were sorted into the "endorsed group" or the "not endorsed group", and we used these groupings to explore which requirements were associated with the IF and total citations. We determined whether compliance with various requirements was "required", "recommended", or "not mentioned". The representative words for "recommended" compliance were "refer to", "encourage", or "suggest". For "required" compliance, the representative words were "should", "must", or "otherwise the manuscript will not be considered for publication". We classified "recommended" and "required" compliance as an endorsement. If these requirements were not mentioned, then this was considered not an endorsement.
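As an illustration only (in this study the classification was performed manually by two independent reviewers, not by software), the keyword scheme above could be sketched as a simple screen; the function names and term lists are ours:

```python
# Keyword cues described in the Methods; checked case-insensitively.
RECOMMENDED_TERMS = ("refer to", "encourage", "suggest")
REQUIRED_TERMS = ("should", "must",
                  "otherwise the manuscript will not be considered")

def classify(instruction_text: str) -> str:
    """Map a passage of author instructions to 'required',
    'recommended', or 'not mentioned' (required cues take precedence)."""
    t = instruction_text.lower()
    if any(term in t for term in REQUIRED_TERMS):
        return "required"
    if any(term in t for term in RECOMMENDED_TERMS):
        return "recommended"
    return "not mentioned"

def endorsed(instruction_text: str) -> bool:
    """Both 'required' and 'recommended' count as an endorsement."""
    return classify(instruction_text) != "not mentioned"

print(classify("Authors must follow the CONSORT checklist."))   # required
print(classify("We encourage authors to use PRISMA."))          # recommended
print(endorsed("Please format references in Vancouver style.")) # False
```

A real keyword screen would of course need human review, since words like "should" also occur in unrelated formatting instructions.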

Journal characteristics
We collected characteristic information about the sample journals, including the IF in 2018, total citations in 2018, publication region, publisher, start year, language, and journal size (all citable items of the journal found in the 2018 JCR). We divided the languages into two groups: English and non-English (which included French, German, Spanish, and Turkish).

Data collection
We selected surgical journals from the Science Citation Index Expanded of the 2018 JCR. We excluded journals with incomplete data, those that did not publish original research, and journals whose author instructions could not be accessed. Our search resulted in a total of 188 surgical journals (Fig. 1). We collected data from March to June 2020 by evaluating each journal's instructions for authors and related information, including author guides and guidelines, information to contributors, submission guidelines, COI information, journal policies, manuscript guidelines, publisher policies, instructions for manuscript preparation, information for authors, and submission policies. At the beginning of the study, two authors (JZ and JGZ) independently extracted data for analysis. Any discrepancies in the results were evaluated by a third author (XBZ), who rendered a final decision after discussion with all authors. Raw measurements obtained from the journals are shown in Data S1. We listed the information of journals that were excluded from this study in Data S2.

Model hypothesis
We hypothesized that the implementation of these measures was associated with a higher IF or more total citations in surgical journals. Residual analysis showed that the data did not follow a normal distribution. Therefore, we performed a logarithmic transformation of the dependent variable for the subsequent multivariable linear regression analysis. The dependent variable was the logarithm base 10 of the IF in 2018 or the logarithm base 10 of total citations in 2018 (the results were converted back to the original scale and presented as geometric means, specifically the ratio of the results of the "endorsed group" to those of the "not endorsed group"). The independent variable was one of the requirements (endorsed vs. not endorsed), and the adjusted covariables were publication region, language, start year, publisher, and journal size (only used to adjust total citations). Among these, start year and journal size were continuous variables, and the remaining covariables were categorical variables. The model hypothesis is shown in Fig. 2.
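As a minimal numeric sketch of this back-transformation (the coefficient value below is hypothetical, chosen only to show how a slope on the log10 scale maps to a geometric means ratio):

```python
# Hypothetical fitted coefficient for "endorsed" (1) vs "not endorsed" (0)
# from a regression whose dependent variable is log10(IF).
beta_endorsed = 0.1529  # slope on the log10 scale (made-up value)

# Back-transforming the coefficient gives the ratio of geometric means
# between the endorsed and not-endorsed groups:
gr = 10 ** beta_endorsed
print(round(gr, 3))  # prints 1.422, i.e. a 42.2% higher geometric-mean IF
```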

Data analysis
Categorical variables were presented as frequencies. Based on the results of the P-P plot (or histogram), the data did not follow a normal distribution; therefore, continuous variables were presented as the median and interquartile range (P25–P75). After univariable linear regression analysis, we performed multivariable linear regression analysis to adjust for possible confounding factors (publication region, language, start year, publisher, and journal size) and to assess whether the use of reporting guidelines and study registration was associated with the IF and total citations. We used residual analysis to diagnose the models and the variance inflation factor (VIF) to judge collinearity between independent variables. All statistical analyses were performed using SPSS Statistics 18.0 (IBM Corporation, Armonk, NY, USA). All reported P values were two-sided, and P values ≤ 0.05 were considered statistically significant.
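Our analyses were run in SPSS; as a rough, self-contained sketch of the same workflow (simulated data and a hypothetical effect size; `vif` is our own helper, not an SPSS routine), one multivariable model and its VIF diagnostics might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 188  # same sample size as the journal set; the data are simulated

# Simulated covariates: endorsement (0/1), start year, journal size
endorsed = rng.integers(0, 2, n)
start_year = rng.integers(1900, 2010, n)
size = rng.integers(50, 500, n)

# Simulated outcome on the log10 scale with a made-up endorsement effect
log_if = 0.15 * endorsed + 0.001 * (start_year - 1950) + rng.normal(0, 0.2, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), endorsed, start_year, size])
beta, *_ = np.linalg.lstsq(X, log_if, rcond=None)

def vif(X, j):
    """VIF of column j: 1 / (1 - R^2) from regressing column j on the
    remaining predictors (intercept column included in X)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, y, rcond=None)
    resid = y - others @ coef
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

gr = 10 ** beta[1]  # geometric means ratio for endorsement
print(round(gr, 3), [round(vif(X, j), 2) for j in (1, 2, 3)])
```

Because the simulated covariates are independent, the VIFs land near 1; in the real analysis they ranged from 1.061 to 2.116, indicating weak collinearity.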

Descriptive characteristics
A total of 188 of the 203 surgical journals met the inclusion criteria and were included in our study (Fig. 1). Each journal had a website from which we obtained information for our research. The distribution of data with regard to the journal characteristics is shown in Table 1. English language journals accounted for 92.6% (n = 174) of the journals, and 51.1% (n = 96) of journal editorial offices were located in North America. The median IF was 1.909 (P25–P75: 1.159–2.998). The median total citations was 3,124 (P25–P75: 1,131–7,849). The specific distribution of the IF and total citations in surgical journals is shown in Figs. S1 and S2.

Endorsement of requirements
A total of 21 requirements (including reporting guidelines, study registration, disclosure of COI, and the EQUATOR network) were endorsed by the 188 surgical journals. The frequency of endorsement for each requirement is shown in Fig. 3. COI disclosure (n = 170, 90.4%) was the most likely to be adopted, followed by the ICMJE recommendations (n = 155, 82.4%) and randomized controlled trial (RCT) registration (n = 101, 53.7%). The CONSORT statement was the most frequently endorsed reporting guideline (n = 94, 50.0%), followed by the PRISMA statement (n = 66, 35.1%). None of the other reporting guidelines were endorsed by more than 30% of the journals. The least-endorsed reporting guideline was the Meta-analysis of Observational Studies in Epidemiology (MOOSE) statement (n = 9, 4.8%), and systematic review/meta-analysis (SR/MA) registration was the least-endorsed requirement overall (n = 9, 4.8%). Most journals that endorsed reporting guidelines and study registration provided a corresponding website for further reference.

Multivariable linear regression analysis
Residual analysis showed that the multivariable linear models basically conformed to linearity, normality, and homoscedasticity. Collinearity across the 42 multivariable linear models was weak and had little influence on the stability of the results (VIF: 1.061–2.116). After adjusting for start year, language, region, publisher, and journal size (only used to adjust total citations), a journal's IF was associated with the endorsement of RCT registration (geometric means ratio (GR) = 1.422, 95% CI [1.197–1.694]), the CONSORT statement (1.318, [1.104–1.578]), the PRISMA statement (1.390, [1.148–1.683]), the STROBE statement (1.556, [1.262–1.919]), the STARD statement (1.585, [1.216–2.070]), and the MOOSE statement (2.113, [1.422–3.133]). Endorsement of RCT registration (GR = 1.652, 95% CI [1.268–2.153]) and of the CONSORT (1.570, [1.199–2.061]), PRISMA (1.698, [1.271–2.270]), STROBE (2.023, [1.476–2.773]), STARD (2.173, [1.452–3.243]), and MOOSE (2.249, [1.219–4.150]) statements was also associated with more total citations.

DISCUSSION
We investigated the extent to which author instructions in surgical journals endorsed different reporting guidelines and study registration procedures. The relationship between these requirements and the journal's IF or total citations was then analyzed.
Our research data showed that COI disclosure was the requirement most likely to be endorsed (90.4%). Most journals (86.7%) endorsed one or more reporting guidelines, and approximately half (54.8%) endorsed implementing study registration. Throughout, GR (95% CI) denotes the geometric means ratio (95% confidence interval), calculated with the "not endorsed" group as the reference; for example, GR = 1.422 means that the IF was 42.2% higher for journals that endorsed RCT registration, and likewise for the other results.

These requirements also showed an association with a greater number of total citations. Journals that endorsed these requirements were more likely to have a higher IF and more total citations, perhaps because strict manuscript submission standards improve publication quality to some extent. Additionally, empirical studies have demonstrated a real improvement in research quality after the introduction of reporting guidelines and a study registration mechanism (Limb et al., 2019; Matheson, 2011; Moher et al., 2008; Smith et al., 2015).
Reputable journals are widely read because of their influence on medical practices, and the influence of journals is traditionally measured using the IF and total citations (Jones, Huggett & Kamalski, 2011; Trueger et al., 2015). Authors are primarily responsible for the quality of their manuscripts and should completely and accurately report their findings. However, many authors do not have the ability or experience to do so (Kunath et al., 2012). Esene et al. (2018) found that research design was frequently misrepresented in the neurosurgical literature and that mislabeling research impairs the indexing, classification, and sorting of evidence. Journals that continue to accept manuscripts with substandard reporting provide little incentive for authors to meet the higher standards outlined by their reporting guidelines (Camm, Agha & Edison, 2015). A journal's submission guidelines are a basic threshold for a paper's publication. As the "gatekeepers" of scientific research, journal publishers play a vital role in controlling the quality of published papers. Surgical journal policies may help improve article quality by enforcing compliance with reporting guidelines and the registration of clinical trials. We recommend that journals endorse reporting guidelines and study registration and take steps to promote the adequate reporting of methods and results in accordance with these guidelines. We also recommend that trial registration numbers be submitted in manuscripts. By implementing these policies and raising submission requirements, journals can ensure that each paper meets the minimum standards for publication, while simultaneously providing the public with more comprehensive trial information and increasing confidence in their results.
Although our study included a variety of reporting guidelines, it had some limitations that might have impacted our results. We did not collect requirements that were only made known during the submission process, so our results may differ slightly from journal editorial policies. Additionally, we did not know how the various requirements were enforced, which means the journals' endorsement rates for reporting guidelines and study registration may be overestimated. Our statistical analyses considered some confounding factors that may have influenced the results, but other confounding factors (such as the number of reviewers, the review cycle, and whether statisticians were involved) were not included in this study. Future studies should try to collect as many of these factors as possible to better correct for confounding, which may require researchers to contact journal editors by email to gather information that is not publicly available. Additionally, since we surveyed only the two most common types of study registration (RCT and systematic review/meta-analysis), our study is not comprehensive. Some journals do not publish certain types of research, so some reporting guidelines did not apply, which may have led to an inaccurately low rate of endorsement for the corresponding reporting guidelines. To minimize this bias, researchers can try to contact the editor to determine what types of articles a journal does not publish. Lastly, the cross-sectional nature of the study means that temporality could not be assessed, and the explanation that higher-quality journals were simply more likely to endorse reporting guidelines could not be excluded.

CONCLUSION
In conclusion, the endorsement of reporting guidelines and study registration was associated with higher IF and more total citations in surgical journals. Good publishing practices are critical for the long-term development of journals and the improvement of their reputations. Therefore, we encourage journals and publishers to follow these requirements to strengthen the value of health research and journals themselves as much as possible. Additional studies are needed to obtain empirical data on the relationship between reporting guidelines or study registration and the quality of journals in other fields of medicine.

ADDITIONAL INFORMATION AND DECLARATIONS

Funding
This study was supported by the Improvement Program for the Education of Graduate Students in Shandong Province, China (Grant Number: SDYAL18047), the National Steering Committee for Education of Medical Degree Postgraduate (Grant Number: B2-YX20180203-01), and the 2018 Qingdao University Graduate Case Database Construction Project. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.