Design
As part of a systematic review on the global prevalence of PDR in key populations living with HIV,[12] we conducted a separate methodological study on reporting quality.
Data sources
We searched PubMed, Scopus, CINAHL, LILACS, WHO Global Health Libraries, Ovid Global Health, Sociological Abstracts, PsycINFO, EMBASE, and POPLINE from inception to January 2019 (See Additional File 1 for search strategy).
Eligibility
We included studies of any design that reported PDR and were published in full text. We excluded abstracts because they are unlikely to report all relevant items.
Data extraction and management
Screening and data extraction were performed using DistillerSR (Evidence Partners, Ottawa, Canada). We extracted basic bibliometric information: author name, year of publication, and country of study (organised by region and income level). Region was determined based on the WHO regional groupings of countries,[13] and income level was determined based on the World Bank classification.[14] We collected study characteristics such as sample size, design, location, setting, source of funding, and whether the studies performed a sample size calculation.
We checked for the reporting of baseline characteristics such as: age, gender, sexual orientation, transmission risk group, profession, country of residence, ethnicity, education, income level and prior exposure to ART.
We also assessed the availability and completeness of the following information on drug resistance: the type of resistance testing used (e.g., population-based Sanger sequencing vs next-generation sequencing); the number of participants enrolled as well as the number of available genotypes; the drug classes for which resistance testing was conducted; the definition and interpretation of drug resistance; and whether the authors distinguished between major (greater reductions in drug susceptibility) and minor drug resistance mutations. A complete list of the 23 data items assessed is available in Table 2.
Assessment of methodological quality
We assessed the risk of bias in the reporting of prevalence using an adapted version of the tool proposed by Hoy et al.[15] In this tool, risk of bias is judged on the representativeness of the sample, the sampling frame, the sampling technique, response bias, the use of proxies, the case definition, the validity of measurements, the uniformity of data collection, the prevalence period, and the appropriateness of the numerator and denominator. We judged each study’s overall risk of bias as high, moderate, or low based on an appraisal of these items. A judgment of high risk of bias implies that further research is very likely to have an important impact on our confidence in the estimate of prevalence and is likely to change the estimate; moderate risk of bias implies that further research is likely to have an important impact on our confidence in the estimate and may change it; low risk of bias implies that further research is very unlikely to change our confidence in the estimate.
All data were extracted in duplicate by pairs of reviewers (OM, COZ, BZ, FM, AN, AW, MK, HE, AL, MY, NR), and disagreements were adjudicated by a third reviewer (LM). Agreement was computed separately for data extraction and risk of bias using the Kappa statistic,[16] since we used a tool that has not been previously validated for prevalence studies of drug resistance.
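The inter-rater agreement calculation can be sketched as follows. This is an illustrative implementation of Cohen's kappa for two reviewers' categorical judgments, not the authors' actual analysis code; the example data are hypothetical "reported"/"not reported" ratings.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two reviewers' judgments on ten reporting items
r1 = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "no"]
r2 = ["yes", "yes", "no", "no", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.58
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is why it is preferred over simple concordance when reporting duplicate extraction.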
Data analyses
Our findings are reported as counts and percentages, and as mean (standard deviation) or median (quartile 1; quartile 3), as appropriate. We used the number of items reported as a measure of reporting quality: we created a summary score (possible range 0–23) and investigated the factors associated with reporting using linear regression models. We entered factors that may explain reporting completeness as a block (year of publication, source of funding, income level, region, and sample size). For these analyses, studies that reported on more than one country were excluded when they had overlapping income levels and regions. Sample size data were highly skewed (skewness statistic = 6.9), so we grouped them into large (n > 239) and small (n ≤ 239) studies based on the median sample size. We added risk of bias to the model to determine whether reporting quality was associated with risk of bias. We assessed model fit using the R2 statistic. Beta coefficients (β), 95% confidence intervals, and p-values are reported.
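The block regression of the 0–23 summary score on study-level factors could be sketched as below. This is a minimal illustration using ordinary least squares via NumPy rather than the authors' statistical software; all data and variable names (`scores`, `large`, `high_income`) are hypothetical, and dummy coding of the categorical predictors is assumed.

```python
import numpy as np

# Hypothetical study-level data: reporting score (0-23) and predictors
scores = np.array([15, 18, 9, 21, 12, 17, 14, 20], dtype=float)
year = np.array([2010, 2015, 2008, 2018, 2012, 2016, 2011, 2017], dtype=float)
large = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)        # n > 239 vs n <= 239
high_income = np.array([1, 1, 0, 1, 0, 1, 0, 1], dtype=float)  # dummy-coded income

# Enter all factors as a single block, plus an intercept column
X = np.column_stack([np.ones_like(scores), year - year.mean(), large, high_income])
beta, _, _, _ = np.linalg.lstsq(X, scores, rcond=None)

# Model fit: R^2, the share of score variance explained by the block
fitted = X @ beta
ss_res = np.sum((scores - fitted) ** 2)
ss_tot = np.sum((scores - scores.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"coefficients: {np.round(beta, 2)}, R^2 = {r2:.2f}")
```

Because the predictors are entered simultaneously as one block, each coefficient is adjusted for the others; in practice the confidence intervals and p-values reported in the paper would come from a full regression package rather than a least-squares solve like this.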