Critical Appraisal of Cohort Studies


Introduction
Cohort studies classify participants into groups according to exposure status: exposed or not exposed to a given factor. Their defining feature is the follow-up of study subjects over time to evaluate whether the outcome of interest occurs in relation to the exposure (Figure 1) [1][2][3][4][5][6][7][8].

Appraisal questions
Did the study address a clearly focused issue? (The population studied · The risk factors studied · The outcomes considered · Is it clear whether the study tried to detect a beneficial or harmful effect?) The two groups being studied are selected from source populations that are comparable in all respects other than the factor under investigation. The study indicates how many of the people asked to take part did so, in each of the groups being studied. The likelihood that some eligible subjects might already have the outcome at the time of enrolment is assessed and taken into account in the analysis. What percentage of individuals or clusters recruited into each arm of the study dropped out before the study was completed?
Comparison is made between full participants and those lost to follow up, by exposure status.
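As a minimal sketch of the attrition check above, the dropout percentage per arm can be computed as follows; all counts here are invented for illustration only.

```python
# Hypothetical sketch: dropout percentage per study arm.
# All counts below are invented for illustration.
recruited = {"exposed": 1200, "unexposed": 1150}
completed = {"exposed": 1044, "unexposed": 1058}

for arm in recruited:
    dropped = recruited[arm] - completed[arm]
    pct = 100 * dropped / recruited[arm]
    print(f"{arm}: {dropped}/{recruited[arm]} dropped out ({pct:.1f}%)")
```

With these invented counts the exposed arm loses 13.0% and the unexposed arm 8.0%; differential attrition of this kind, if related to exposure or outcome, is itself a warning sign for bias.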
Was the cohort recruited in an acceptable way? (Was the cohort representative of a defined population? · Was there something special about the cohort? · Was everybody included who should have been included?)

Was the exposure accurately measured to minimise bias? (Did they use subjective or objective measurements? · Do the measurements truly reflect what you want them to (have they been validated)? · Were all the subjects classified into exposure groups using the same procedure?)

Was the outcome accurately measured to minimise bias? [Did they use subjective or objective measurements? · Do the measures truly reflect what you want them to (have they been validated)? · Has a reliable system been established for detecting all the cases (for measuring disease occurrence)? · Were the measurement methods similar in the different groups? · Were the subjects and/or the outcome assessor blinded to exposure (does this matter)?] The outcomes are clearly defined. The assessment of outcome is made blind to exposure status; if the study is retrospective this may not be applicable. Where blinding was not possible, there is some recognition that knowledge of exposure status could have influenced the assessment of outcome.

Have the authors identified all important confounding factors? Have they taken account of the confounding factors in the design and/or analysis? (Look for restriction in the design, and for techniques such as modelling, stratified analysis, regression, or sensitivity analysis to correct, control, or adjust for confounding factors.)
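The stratified-analysis technique mentioned above can be sketched with a Mantel-Haenszel risk ratio pooled across strata of a confounder (e.g. age group); all counts below are invented for illustration.

```python
# Hypothetical sketch: crude vs. Mantel-Haenszel (confounder-adjusted)
# risk ratio in a cohort study. All counts are invented.
# Each stratum: (exposed cases, exposed total, unexposed cases, unexposed total)
strata = [
    (8, 200, 2, 100),    # younger stratum
    (30, 100, 40, 200),  # older stratum
]

# Crude (unadjusted) risk ratio, ignoring the strata
cases_exp = sum(a for a, n1, c, n0 in strata)
total_exp = sum(n1 for a, n1, c, n0 in strata)
cases_unexp = sum(c for a, n1, c, n0 in strata)
total_unexp = sum(n0 for a, n1, c, n0 in strata)
rr_crude = (cases_exp / total_exp) / (cases_unexp / total_unexp)

# Mantel-Haenszel pooled risk ratio, adjusted for the stratifying confounder
num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
rr_mh = num / den

print(f"crude RR = {rr_crude:.2f}, adjusted (MH) RR = {rr_mh:.2f}")
```

In this invented example the crude estimate suggests a protective effect (RR below 1) while the stratum-adjusted estimate shows increased risk (RR above 1.5), illustrating how an unadjusted comparison can be distorted when exposure is unevenly distributed across strata of a confounder.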
The method of assessment of exposure is reliable. Evidence from other sources is used to demonstrate that the method of outcome assessment is valid and reliable. Exposure level or prognostic factor is assessed more than once.
Was the follow-up of subjects complete enough? Was the follow-up of subjects long enough? (The good or bad effects should have had long enough to reveal themselves · Persons lost to follow-up may have different outcomes than those available for assessment · In an open or dynamic cohort, was there anything special about the outcome of the people leaving, or the exposure of the people entering, the cohort?) The main potential confounders are identified and taken into account in the design and analysis. Have confidence intervals been provided?
What are the results of this study? [What are the bottom line results? · Have they reported the rate or proportion of the outcome in the exposed and unexposed groups, and the ratio or rate difference? · How strong is the association between exposure and outcome (RR)? · What is the absolute risk reduction (ARR)?] How precise are the results? Look for the range of the confidence intervals, if given.
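As a hypothetical illustration of the quantities above, the risk ratio, the risk difference, and a 95% confidence interval for the RR (computed on the log scale, the standard large-sample approach) can be derived from a 2x2 cohort table; the counts below are invented.

```python
import math

# Hypothetical sketch: RR, risk difference, and 95% CI for the RR
# from a 2x2 cohort table. All counts are invented for illustration.
a, n1 = 30, 1000   # cases among exposed, exposed total
c, n0 = 15, 1000   # cases among unexposed, unexposed total

risk_exp, risk_unexp = a / n1, c / n0
rr = risk_exp / risk_unexp          # relative risk
rd = risk_exp - risk_unexp          # risk difference (ARR if exposure is protective)

# 95% CI for the RR on the log scale (large-sample standard error)
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f}); risk difference = {rd:.3f}")
```

Here the interval excludes 1, so with these invented counts the association would be judged statistically precise; a wide interval straddling 1 would indicate an imprecise, inconclusive estimate.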
Do you believe the results? [A big effect is hard to ignore! · Can it be due to bias, chance or confounding? · Are the design and methods of this study sufficiently flawed to make the results unreliable? · Consider the Bradford Hill criteria (e.g. time sequence, dose-response gradient, biological plausibility, consistency)] Can the results be applied to the local population? (A cohort study was the appropriate method to answer this question · The subjects covered in this study could be sufficiently different from your population to cause concern · Your local setting is likely to differ much from that of the study · You can quantify the local benefits and harms).
What are the implications of this study for practice? (One observational study rarely provides sufficiently robust evidence to recommend changes to clinical practice or within health policy decision making · For certain questions observational studies provide the only evidence · Recommendations from observational studies are always stronger when supported by other evidence) Do the results of this study fit with other available evidence?
How well was the study done to minimise the risk of bias or confounding?
Taking into account clinical considerations, your evaluation of the methodology used, and the statistical power of the study, do you think there is clear evidence of an association between exposure and outcome?
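One way to reason about the statistical power mentioned above is the standard normal-approximation formula for the sample size needed to detect a difference between two outcome proportions; the assumed risks below are invented for illustration.

```python
import math
from statistics import NormalDist

# Hypothetical sketch: approximate sample size per group to detect a
# difference between two outcome proportions with 80% power at
# two-sided alpha = 0.05. The assumed risks are invented.
p1, p2 = 0.03, 0.015          # assumed risks in exposed vs unexposed
alpha, power = 0.05, 0.80

z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
z_b = NormalDist().inv_cdf(power)           # power quantile

n_per_group = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2
print(f"approx. {math.ceil(n_per_group)} subjects per group")
```

A study substantially smaller than such a calculation suggests may simply lack the power to detect a real association, which bears on whether a null result is clear evidence of no effect.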
Can the results be applied to your organization?
Conflicts of interest are declared.
Rate the overall methodological quality of the study, using the following as a guide: High quality (++): majority of criteria met; little or no risk of bias. Low quality (-): either most criteria not met, or significant flaws relating to key aspects of study design. Reject (0): poor-quality study with significant flaws, the wrong study type, or not relevant to the guideline. Using this checklist can improve the evaluation of cohort studies.