Evaluating the impact of malaria rapid diagnostic tests on patient-important outcomes in sub-Saharan Africa: a systematic review of study methods to guide effective implementation

ABSTRACT Objective To perform critical methodological assessments of the designs, outcomes, quality and implementation limitations of studies evaluating the impact of malaria rapid diagnostic tests (mRDTs) on patient-important outcomes in sub-Saharan Africa. Design A systematic review of study methods. Data sources MEDLINE, EMBASE, Cochrane Library, African Index Medicus and clinical trial registries were searched up to May 2022. Eligibility criteria Primary quantitative studies that compared mRDTs to alternative diagnostic tests for malaria on patient-important outcomes within sub-Saharan Africa. Data extraction and synthesis Studies were identified by an information specialist, and two reviewers independently screened records for eligibility and extracted data using a predesigned form in Covidence. Methodological quality was assessed using the National Institutes of Health tools. Descriptive statistics and thematic analysis guided by the Supporting the Use of Research Evidence framework were used for analysis. Findings were presented narratively, graphically and by quality ratings. Results Our search yielded 4717 studies, of which we included 24 quantitative studies: 15 (62.5%) experimental, 5 (20.8%) quasi-experimental and 4 (16.7%) observational studies. Most studies (17, 70.8%) were conducted within government-owned facilities. Of the 24 included studies, 21 (87.5%) measured the therapeutic impact of mRDTs. Prescription patterns were the most reported outcome (20, 83.3%). Only 13 (54.2%) of all studies reported statistically significant findings, of which 11 (45.8%) demonstrated mRDTs' potential to reduce over-prescription of antimalarials. Most studies (17, 70.8%) were of good methodological quality; however, reporting of sample size justification needs improvement. The implementation limitations reported mostly concerned health system constraints, the unacceptability of the test by patients and low trust among health providers.
Conclusion Impact evaluations of mRDTs in sub-Saharan Africa are mostly randomised trials measuring mRDTs’ effect on therapeutic outcomes in real-life settings. Though their methodological quality remains good, process evaluations can be incorporated to assess how contextual concerns influence their interpretation and implementation. PROSPERO registration number CRD42018083816.

Blinding means that one does not know to which group (intervention or control) the participant is assigned. It is also sometimes called "masking." The reviewer assessed whether each of the following was blinded to knowledge of treatment assignment: (1) the person assessing the primary outcome(s) for the study; (2) the person receiving the intervention; and (3) the person providing the intervention.
Sometimes the individual providing the intervention is the same person performing the outcome assessment. This should be noted.
Were the people assessing the outcomes blinded to the participants' group assignments?
Were the groups similar at baseline on important characteristics that could affect outcomes (e.g., demographics, risk factors, co-morbid conditions)?
This question relates to whether the intervention and control groups have similar baseline characteristics, on average, especially those characteristics that may affect the intervention or outcomes. The point of randomized trials is to create groups that are as similar as possible except for the intervention(s) being studied in order to compare the effects of the interventions between groups. When reviewers abstracted baseline characteristics, they noted when there was a significant difference between groups.
Was the overall drop-out rate from the study at endpoint 20% or lower of the number allocated to treatment?
"Dropouts" in a clinical trial are individuals for whom there are no end point measurements, often because they dropped out of the study and were lost to follow up.
Generally, an acceptable overall dropout rate is considered 20 percent or less of participants who were randomized or allocated into each group.
Was the exposure for each person measured more than once during the course of the study period?
Multiple measurements with the same result increase our confidence that the exposure status was correctly classified. Also, multiple measurements enable investigators to look at changes in exposure over time.
Were the outcome measures (dependent variables) clearly defined, valid, reliable, and implemented consistently across all study participants?
Were the outcomes defined in detail? Were the tools or methods for measuring outcomes accurate and reliable (for example, have they been validated, or are they objective)?
This issue is important because it influences confidence in the validity of study results. Also important is whether the outcomes were assessed in the same manner within groups and between groups.
Were the outcome assessors blinded to the exposure status of participants?
Blinding means that outcome assessors did not know whether the participant was exposed or unexposed. It is also sometimes called "masking." Sometimes the person measuring the exposure is the same person conducting the outcome assessment. In this case, the outcome assessor would most likely not be blinded to exposure status because they also took measurements of exposures. If so, make a note of that in the comments section. Think about whether it is likely that the person(s) doing the outcome assessment would know (or be able to figure out) the exposure status of the study participants.
Was loss to follow-up after baseline 20% or less?
Higher overall follow-up rates are always better than lower ones; higher rates are expected in shorter studies, whereas lower overall follow-up rates are often seen in studies of longer duration. Usually, an acceptable overall follow-up rate is considered 80 percent or more of participants whose exposures were measured at baseline.
Were key potential confounding variables measured and adjusted statistically for their impact on the relationship between exposure(s) and outcome(s)?
Were key potential confounding variables measured and adjusted for, such as by statistical adjustment for baseline differences? Logistic regression or other regression methods are often used to account for the influence of variables not of interest. This is a key issue in cohort studies: statistical analyses need to control for potential confounders, in contrast to an RCT, where the randomization process controls for potential confounders.
All key factors that may be associated both with the exposure of interest and the outcome, and that are not themselves of interest to the research question, should be controlled for in the analyses.

Africa Index Medicus search strategy
Malaria or plasmodium [Words] and diagnosis or diagnostic or RDT$ [Words] and endpoint$ or outcome$ or mortality or prognosis or prescription$ or attitude$ or experience or perception or benefit [Words]
Clinical trial registries search strategies
• Clinicaltrials.gov: Rapid diagnostic test | Malaria
• WHO ICTRP: Malaria and (rapid diagnostic test* or RDT*)
• Meta-register of controlled trials (mRCT): Malaria and (rapid diagnostic test* or RDT*)
• Pan African Clinical Trials Registry: Malaria and (rapid diagnostic test* or RDT*)
Supplementary file 3: National Institutes of Health (NIH) tool used to assess the methodological quality of included studies
Part 1: Quality assessment of controlled intervention studies
Criteria Description
Was the study described as randomized, a randomized trial, a randomized clinical trial, or an RCT?

Part 2: Quality assessment tool for observational cohort and cross-sectional studies
Criteria Description
An acceptable differential dropout rate is an absolute difference between groups of 15 percentage points at most (calculated by subtracting the dropout rate of one group from that of the other).
Did participants in each treatment group adhere to the protocols for assigned interventions?
For example, adherence would be a concern if a group assigned to receive a particular drug at a particular dose had a large percentage of participants who did not end up taking the drug or the dose as designed in the protocol.
For example, have they been validated, or are they objective? This is important as it indicates the confidence you can have in the validity of the study results.
Investigators should prespecify the outcomes reported in a study for hypothesis testing, which is the reason for conducting an RCT. Without prespecified outcomes, the study may be reporting ad hoc analyses, simply looking for differences supporting desired findings. Investigators also should prespecify the subgroups being examined.
That is, did they use an intention-to-treat analysis? Intention-to-treat (ITT) means everybody who was randomized is analysed according to the original group to which they were assigned. This is an extremely important concept because conducting an ITT analysis preserves the whole reason for doing a randomized trial: to compare groups that differ only in the intervention being tested.
Such misclassification can make it harder to detect an association between exposure and outcome even if one exists. Also important is whether the exposures were assessed in the same manner within groups and between groups; if not, bias may result.
Was the exposure(s) assessed more than once over time?
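The two dropout thresholds described here (overall dropout of 20 percent or less, and a differential dropout of at most 15 percentage points between arms) can be illustrated with a short worked calculation. The sketch below is purely illustrative; the function name and example counts are our own and are not part of the NIH tool.

```python
def dropout_rates(allocated_a, completed_a, allocated_b, completed_b):
    """Return (overall, differential) dropout rates in percent.

    Overall rate: all dropouts across both arms divided by all allocated.
    Differential rate: absolute difference between the two arms' rates.
    """
    rate_a = 100 * (allocated_a - completed_a) / allocated_a
    rate_b = 100 * (allocated_b - completed_b) / allocated_b
    overall = 100 * ((allocated_a - completed_a) + (allocated_b - completed_b)) \
        / (allocated_a + allocated_b)
    differential = abs(rate_a - rate_b)
    return overall, differential

# Hypothetical trial: 100 allocated per arm; 90 complete in one arm, 82 in the other.
overall, differential = dropout_rates(100, 90, 100, 82)
print(overall)       # 14.0 -> acceptable (20 percent or less)
print(differential)  # 8.0  -> acceptable (at most 15 percentage points)
```

Note that a trial can pass the overall threshold while failing the differential one (e.g., 5% dropout in one arm and 25% in the other averages to 15% overall but differs by 20 percentage points), which is why the tool checks both.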

Framework for Supporting the Use of Research Evidence (SURE) for identifying the implementation challenges facing studies that evaluate mRDTs' impact on patient-important outcomes
Level Barriers and enablers Description
Recipients of care
Supplementary file 4: Number of patients in the intervention arm (RDT); C - Number of patients in the comparator arm (clinical diagnosis or microscopy); Sens. - Sensitivity; Spec. - Specificity; CHW - Community Health Worker; ACT - Artemisinin-based Combination Therapy; Quasi-experimental studies - Non-randomized studies of intervention.

Supplementary file 9: Summary of the methodological quality of observational studies Criteria Bonful 2019 Bonko 2019 Ikwuobe 2013 Yuckich 2010
Of the observational studies, except for Yuckich et al., which was a cohort study, the remaining three studies were cross-sectional. Therefore, items regarding exposure measurement preceding outcome and a sufficient time frame were not applicable to the cross-sectional studies. Assessment of exposure at different levels was not applicable to any of the observational studies because mRDTs were performed at a single point in time. For experimental studies, blinding was not necessary given the nature of the intervention. For quasi-experimental studies, details on randomization were not applicable due to the nature of the design.