Enhancing the Reporting Quality of Meta-Analyses for Trustworthy Decision-Making in Breast Cancer Intervention: A Cross-Sectional Survey

Background Meta-analyses of RCTs have been widely employed to evaluate the effectiveness of interventions for breast cancer, but little is known about their reporting validity. Previous studies have shown that meta-analyses may mislead clinical practice when the reporting is uninformative. The purpose of this study was to assess the reporting quality of meta-analyses of RCTs for breast cancer interventions and to explore potential factors associated with the reporting.


Background
Meta-analysis (MA) is an important tool for summarizing the available findings of studies on the same topic for healthcare interventions [1]. However, MAs may mislead healthcare decisions if they are poorly reported. Transparent reporting of MAs is a critical issue, as the reported information can crucially impact users' decision-making [2]. A related study showed that, compared with adequately reported MAs, inadequately reported MAs yielded exaggerated estimates of treatment effect [3]. Therefore, it is urgently necessary to pay more attention to the reporting quality of MAs in various types of studies.
Breast cancer is the most common malignancy in women and the second leading cause of cancer-related death [4]. More than 1.68 million new cases were diagnosed in 2012, resulting in 521,900 deaths worldwide [5]. It was expected that nearly 276,480 new cases of invasive breast cancer would be diagnosed in women in the US [6]. Despite the dramatically increasing trend in its incidence rate, the breast cancer survival rate has improved markedly [7]. This can be attributed to the great progress made in screening, diagnosis, and therapeutic strategies for breast cancer management. Evidence shows that past and ongoing research has played an important role in improving strategies for breast cancer [8].
There has been a sharp increase in meta-analyses of healthcare interventions for breast cancer over the past decades, and many of them have been utilized in clinical practice guidelines. However, whether these meta-analyses are well reported enough to support informed decision-making remains unclear; poor reporting may lead to inaccurate interpretation of the results and may even mislead clinical practice. It was in this context that we planned a cross-sectional survey to specifically assess the extent to which current breast cancer MAs comply with the PRISMA guideline, and to explore factors associated with the reporting.

Method
Research protocol
A protocol was developed in advance and is available in supplementary file 1. The original protocol covered both reporting and methodological issues; the current study focused on the former. No substantial changes to the protocol were made for the current study.

Eligibility criteria
We included meta-analyses of RCTs that focused on healthcare interventions for breast cancer. Meta-analyses published as a short report, a commentary, a letter, or a conference abstract were not considered. We also excluded meta-analyses of individual participant data and those with multiple-arm comparisons, since they involve more sophisticated reporting considerations. In addition, systematic reviews without a meta-analysis, overviews, scoping reviews, narrative reviews, and study protocols were excluded.

Literature search
We searched Medline, Embase, and the Cochrane Database of Systematic Reviews from their inception to November 2019 and selected only English-language publications. A combination of keywords and Medical Subject Headings related to meta-analysis, RCTs, and breast cancer was used for each database. This was done by a well-trained author (Z.B). The search strategy was first developed by the principal author (a methodologist) and then discussed with three clinicians to identify further potential terminologies in this area. The final full search strategy is presented in supplementary file 2.

Study screening
Two authors (SSL and WZ) screened the titles and abstracts independently and in duplicate to exclude obviously ineligible records; they then read the full texts for final eligibility. A third reviewer (PLJ) was involved in cases where consensus could not be reached. This process was carried out with the Rayyan online application (https://rayyan.qcri.org/welcome), which allowed the two authors to screen records blinded to each other's decisions [9]. The Kappa statistic was used to measure the agreement of the screening process.
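As an illustration, the Kappa statistic used here can be computed directly from the two reviewers' include/exclude decisions. A minimal sketch in Python (the decision lists below are hypothetical, not our actual screening data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed proportion of agreement
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    p_expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical screening decisions: 1 = include, 0 = exclude
decisions_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
decisions_b = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]
print(round(cohens_kappa(decisions_a, decisions_b), 3))  # 0.583
```

Values above roughly 0.6 are conventionally interpreted as substantial agreement between reviewers.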

Data collection
Using pre-standardized and pilot-tested forms, the following information was extracted: publication year, journal name, country of the first author, department of the first author, number of authors, number of RCTs in a meta-analysis and the total number of patients enrolled, reporting guideline utilized, registration information, and funding source (profit, non-profit, no funding, or not reported).

Quality assessment
We used the PRISMA statement, which consists of 27 items, to assess the reporting quality of the included MAs [1]. Each item was assigned one of three response options: "Yes" for full compliance, "Partially yes" for partial compliance, and "No" for non-compliance. A list of the required components was available to help raters qualitatively make a judgement on each item. A group discussion was undertaken to clarify the definition of each item and the distinctions between items. We then developed a data extraction form, pre-tested it on 15 randomly selected included studies, and refined it accordingly. The quality assessment was first conducted by a well-trained clinician (ZB) and then checked by a methodologist (PLJ).

Data analysis
We used descriptive statistics to summarize the characteristics of the included studies. Dichotomous measures were presented as numbers and percentages, and continuous variables were described as medians with interquartile ranges (IQR). We used the adherence rate to measure the compliance of each item. The adherence rate was categorized into three levels: 80% to 100% as well reported, 30% to 80% as moderately reported, and less than 30% as poorly reported [10].
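The three-level categorization above can be expressed as a simple mapping. A sketch (the handling of the exact 80% and 30% boundaries is our assumption, as the cited cutoffs [10] do not specify it):

```python
def reporting_level(adherence_rate):
    """Map an item's adherence rate (a fraction in [0, 1]) to a reporting level."""
    if adherence_rate >= 0.80:
        return "well reported"
    if adherence_rate >= 0.30:
        return "moderately reported"
    return "poorly reported"

# Example: an item adequately reported by 142 of 296 meta-analyses
print(reporting_level(142 / 296))  # moderately reported
```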
To investigate the potential association between reporting quality and study characteristics, we pre-specified five factors: journal type (cancer-specific journal vs. other journals), use of reporting guidance (yes vs. no), financial support (funded vs. not funded), development of a protocol (yes vs. no), and year of publication (≤2009 vs. ≥2010). The year of publication was categorized based on the publication year of the PRISMA statement [1].
We stratified each of the 27 items by these variables using the predefined cutoffs and compared the rates of "No" responses in each group. The rate difference (RD) was used to measure potential differences, since this effect estimator has been proven valid for handling potential zero events in one or both arms. We used RevMan 5 for the analyses. The characteristics of the included studies are summarized in Table 1.
Results

Adherence of breast cancer meta-analyses to the PRISMA
Overall, of the 27 PRISMA items, only six were well reported and 12 were moderately reported, while as many as nine were poorly reported (see Figure 3).
Among the four items related to title, abstract, and background reporting, the items "description of the rationale for the review" (n=296, 100%) and "provision of a structured summary in the abstract" (n=294, 99.32%) were well reported. Two items, "identification of the report as a systematic review or meta-analysis" (n=236, 79.73%) and "explicit statement of the research objectives" (n=115, 38.85%), were moderately reported.
Regarding the methods domain (12 items), two items were well reported: specification of the eligibility criteria (n=274, 92.57%) and description of the information sources searched (n=294, 99.32%). Five items were moderately reported: presentation of the full search strategy (n=93, 31.42%), statement of the process for study selection (n=113, 38.18%), assessment of the risk of bias across studies (n=142, 47.49%), description of additional analysis methods (n=179, 60.47%), and provision of the methods for handling data and combining results (n=191, 63.88%). The remaining five poorly reported items were: documentation of risk of bias in individual studies (n=53, 17.91%), listing and definition of the data items (n=54, 18.24%), statement of the principal summary measures (n=65, 21.74%), indication of the protocol and registration information (n=65, 21.96%), and description of the data collection process (n=88, 29.43%).
Of the seven items in the results domain, two were well reported: presentation of the results of the study selection (n=243, 82.09%) and presentation of the characteristics of each study (n=253, 85.47%). Three items were moderately reported: clarification of the results of risk of bias assessment across studies (n=114, 38.51%), synthesis of the results of each meta-analysis (n=192, 64.86%), and specification of the results of additional analyses (n=194, 65.54%). Poorly reported items included clarification of the results of risk of bias assessment within studies (n=67, 22.63%) and presentation of the results for individual studies (n=86, 29.05%).
Factors associated with the reporting quality
Figures S1-S5 (supplementary file 4) show the potential factors associated with reporting. Our results suggested that meta-analyses published in recent years (RD=-0.07, 95% CI: -0.12 to -0.03), those that complied with a reporting guideline (RD=-0.04, 95% CI: -0.07 to -0.02), and those with a pre-specified protocol (RD=-0.09, 95% CI: -0.09 to -0.01) were associated with a decreased rate of reporting issues. However, meta-analyses published in cancer-related journals were associated with a higher likelihood of poor reporting compared with those published in other journals (RD=0.04, 95% CI: 0.01 to 0.07). There was no statistically significant difference in the reporting rate of funded meta-analyses compared with those not funded (RD=0.00, 95% CI: -0.01 to 0.01).
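For readers unfamiliar with the rate difference, the estimates above can be illustrated with a minimal sketch. This uses a plain Wald interval with hypothetical counts; RevMan applies additional handling for zero events, which this sketch omits:

```python
import math

def rate_difference(events1, n1, events2, n2, z=1.96):
    """Rate difference between two groups with a Wald 95% confidence interval."""
    p1, p2 = events1 / n1, events2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Hypothetical: 10/100 "No" responses in one group vs. 5/100 in the other
rd, lower, upper = rate_difference(10, 100, 5, 100)
print(f"RD={rd:.2f}, 95% CI: {lower:.2f} to {upper:.2f}")
```

A confidence interval that excludes zero, as in the journal-type comparison above, indicates a statistically significant difference in reporting between the two groups.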

Discussion
Our study identified 296 MAs related to breast cancer. To the best of our knowledge, this is the first study to comprehensively assess the reporting of meta-analyses of healthcare interventions for breast cancer with the PRISMA checklist. We found that more than two thirds (n=21, 77.78%) of the 27 items were inadequately reported among the 296 MAs, and these issues were mainly concentrated in the methods and results sections. The reporting issues were more serious in cancer-specific journals. We further identified some potential factors that may benefit reporting, including referring to a reporting guideline and developing a research protocol in advance.
Overall, our results suggest that the reporting quality of these breast cancer MAs was poor. This is consistent with other studies addressing reporting quality in urology [11], orthopedics [12], acupuncture [13], and burn care management [14]. However, some discrepancies were also documented. In our study, MAs published in cancer-specific journals showed poorer reporting quality than those published in other journals, whereas Moher et al. reported a contrasting finding [15]. The inconsistent reporting across studies may be attributed to differing editorial attitudes toward reporting guidelines [16]. For example, some surgery journals instruct authors to follow a reporting guideline when submitting an MA [17]. Thus, editorial interest in reporting guidelines may partially explain the high-quality MAs in certain journals [18]. Our results support this: MAs that followed the PRISMA guideline showed better reporting.
This is also supported by our finding that the reporting of MAs has improved over time since the publication of the PRISMA guideline in 2009. A similar improvement was observed in the reporting quality of RCTs after the introduction of the CONSORT guideline [19].
Our findings showed that few authors developed a research protocol in advance, although this is a vital step in conducting an MA. The absence of a protocol leaves MAs vulnerable to selective reporting of results or conclusions and facilitates post hoc modifications of methods [20]. A survey found that 68% of Cochrane reviews had important methodological changes from their protocols that were highly likely to alter the reviews' results [21]. Likewise, our results suggest that MAs with a pre-specified protocol were associated with better reporting quality. We would therefore argue that more attention should be paid to protocol development for systematic reviews and meta-analyses [22].
The reporting weaknesses we identified in methodological areas include the risk of bias assessment (in both the methods and results sections) and the discussion of limitations. Risk of bias evaluation is central to the conduct of unbiased MAs [23]. There were correlations among the reporting quality of this information across the three sections. One may reasonably infer that adequate use of rigorous methodologies is less likely among MAs with serious reporting limitations. Few MAs in this study reported limitations, and most of those highlighted issues related to the quality of the included studies (study-level bias) rather than weaknesses in their own approach or conduct (review-level bias). Bias may be introduced in the literature search, study selection, appraisal, and data analysis processes of an MA [2]. Discussing all potential biases is important for improving the generalizability of MAs.
The strengths of our research include a full-sample study covering nearly all published breast cancer MAs, a representative research conclusion, a protocol-driven design, a comprehensive search, explicit eligibility criteria, rigorous methods for screening studies and collecting data, and widely accepted checklists. Our study also has limitations. First, we only included MAs of RCTs, and the findings may not be applicable to MAs that include observational studies. Second, the quality assessment was based on the information published in a journal, which may not accurately reflect the true process of an MA.

Conclusion
The reporting of meta-analyses of breast cancer interventions was too uninformative to support decision-making, especially for those published in cancer-specific journals. Although improvement has been seen over time, further efforts are still needed. Some easy-to-implement measures could be considered, such as referring to a reporting guideline and developing a protocol in advance, to help researchers improve the reporting of their meta-analyses.

Authors' contributions
PLJ and BZ conceived and designed the study; SSL and WZ screened the studies; BZ and PLJ conducted the quality assessment; CX analyzed the data; PLJ drafted the manuscript; PLJ and JSW Kwong provided careful comments and revised the manuscript. All authors read and approved the manuscript.