The Odds Ratio: A Primer for Geriatric Clinicians

Introduction
The odds ratio (OR) is a statistic used to quantify the increase or decrease in risk of a particular outcome given the presence of a particular risk factor. It is used when the outcome variable is binary (e.g., disease/no disease; yes/no), and the predictor variable(s) may be either categorical or continuous [1,2]. The value of 1.00 is the anchor against which ORs are interpreted: an OR of 1.00 indicates no increase or decrease in risk, values greater than 1.00 indicate a risk increase, and values less than 1.00 indicate a risk decrease.
The OR can be interpreted as a percent risk increase or decrease. For example, an OR of 1.26 would be interpreted as a 26% risk increase. A risk decrease, expressed as a percent, is derived by subtracting the OR from 1.00; the resulting value is the percent risk decrease associated with the predictor variable. For example, an OR of 0.89 would be interpreted as an 11% risk decrease, since 1.00 - 0.89 = 0.11, or 11%. When the OR is 2.00 or greater, the wording of the interpretation can be changed so that the OR indicates the number of times more likely the outcome is to occur given the presence of the risk factor (e.g., an OR of 3.00 would mean that X is 3 times more likely to occur given the presence of Y, rather than saying that X is 200% more likely to occur).
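The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not a clinical tool: the 2x2 counts and the function names are hypothetical, and the phrasing rules simply mirror the conventions described in the text.

```python
# Sketch of computing an OR from a 2x2 table and phrasing its interpretation
# using the conventions described above. All counts are hypothetical.

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """OR = (a/b) / (c/d) for a standard 2x2 exposure-by-outcome table."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

def interpret(or_value):
    """Phrase an OR as a percent risk change, or as 'X times more likely' when OR >= 2.00."""
    if or_value >= 2.00:
        return f"{or_value:.2f} times more likely"
    if or_value > 1.00:
        return f"{(or_value - 1.00) * 100:.0f}% risk increase"
    if or_value < 1.00:
        return f"{(1.00 - or_value) * 100:.0f}% risk decrease"
    return "no risk increase or decrease"

print(interpret(1.26))  # 26% risk increase
print(interpret(0.89))  # 11% risk decrease
print(interpret(3.00))  # 3.00 times more likely
```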
When the predictor variable is continuous, the risk increase or decrease is interpreted per unit change in the predictor. An OR of 1.26 would indicate that for every one-unit increase in X, the risk of Y increases by 26%. For an OR of 3.00 with a continuous predictor, the interpretation is that for every one-unit increase in X, Y is 3 times more likely to occur. Conversely, an OR of 0.89 for a continuous variable would indicate that for every one-unit increase in X, the risk of Y decreases by 11%.
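A consequence of the per-unit interpretation, worth making explicit: because ORs from logistic regression act multiplicatively on the odds, a change of k units multiplies the odds by the OR raised to the k-th power. The sketch below illustrates this with the OR of 1.26 from the text; the function name is an assumption for illustration.

```python
# With a continuous predictor, the OR applies per one-unit change.
# A k-unit change multiplies the odds by OR**k (a multiplicative property
# of logistic regression odds ratios). Values here are illustrative.

def odds_multiplier(or_per_unit, units):
    """Multiplicative change in the odds for a `units`-unit change in the predictor."""
    return or_per_unit ** units

# An OR of 1.26 per unit compounds: a 5-unit increase multiplies the odds by ~3.18,
# not by 5 * 26% = 130%.
print(round(odds_multiplier(1.26, 5), 2))  # 3.18
```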
The OR is usually reported in medical studies with a 95% confidence interval (CI). Although p-values often accompany the OR and its 95% CI, the CI itself can indicate whether the OR is statistically significant: when the CI contains 1.00, the OR is considered not statistically significant, since 1.00 indicates no increase or decrease in risk. In terms of precision, a wider CI indicates a less precise estimate and lower statistical power, whereas a CI that lies closer around the estimate indicates greater precision and power [3].
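The significance rule described above reduces to a single comparison, sketched here with hypothetical CI bounds:

```python
# A 95% CI that contains 1.00 implies the OR is not statistically significant
# at the 0.05 level, as described above. The CI bounds are hypothetical.

def is_significant(ci_lower, ci_upper):
    """An OR is statistically significant when its 95% CI excludes 1.00."""
    return not (ci_lower <= 1.00 <= ci_upper)

print(is_significant(1.05, 1.48))  # True: the CI excludes 1.00
print(is_significant(0.92, 1.31))  # False: the CI contains 1.00
```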

Cautions in Interpreting the OR
Although the OR lends itself to a relatively straightforward and practical interpretation of disease/risk-factor associations, several issues must be considered when using and interpreting it.
The OR is highly dependent on the prevalence of the outcome (i.e., the disease) and can be significantly biased when prevalence is high, often yielding an inflated estimate of the association. Similarly, when the baseline risk for a particular outcome is high in the population of interest, the OR will often overestimate the degree of association between a risk factor and the outcome [4]. Another point of caution arises when the predictor variable is continuous: because the OR is interpreted as a per-unit risk increase or decrease, the clinical significance of a one-unit change in the predictor must be considered before interpreting the associated OR [1].
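The divergence between the OR and the risk ratio for common outcomes can be made concrete. One widely used approximation (the Zhang-Yu correction) converts an OR to an approximate risk ratio given the baseline risk in the unexposed group; it is offered here as an illustration and is not drawn from the sources cited above.

```python
# When the outcome is common, the OR exaggerates the risk ratio. The
# Zhang-Yu correction approximates the risk ratio from the OR and the
# baseline risk p0 in the unexposed group. Values are illustrative.

def or_to_rr(odds_ratio, p0):
    """Approximate risk ratio: OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# With a rare outcome (p0 = 1%), the OR and risk ratio nearly agree...
print(round(or_to_rr(2.0, 0.01), 2))  # 1.98
# ...but with a common outcome (p0 = 40%), the same OR of 2.0 corresponds
# to a much smaller risk ratio.
print(round(or_to_rr(2.0, 0.40), 2))  # 1.43
```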
One of the larger issues in interpreting an OR is using it as a measure of risk without considering the baseline risk for the outcome. Because the OR assesses relative, not absolute, risk, it is important to consider other risk factors associated with the outcome. For example, a 30% relative risk increase may have little impact on someone who is relatively healthy and has few, if any, additional risk factors for the outcome, whereas an individual who already has other significant risk factors may be affected far more by the same 30% increase. In such cases, clinicians can better inform patients about the impact a risk factor may have at the individual level by discussing disease/risk-factor associations in the context of the patient's baseline risk for the disease or outcome.
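The point above can be sketched numerically. For simplicity this treats the 30% figure as a relative risk applied directly to baseline risk; the baseline risks and function name are hypothetical.

```python
# The same relative increase yields very different absolute risk changes
# depending on baseline risk. For illustration, the 30% figure is treated
# as a relative risk applied to a hypothetical baseline risk.

def absolute_increase(baseline_risk, relative_increase):
    """Absolute risk change implied by a relative (fractional) increase."""
    return baseline_risk * relative_increase

# A 30% relative increase on a 2% baseline adds 0.6 percentage points...
print(round(absolute_increase(0.02, 0.30), 3))  # 0.006
# ...but on a 20% baseline it adds 6 percentage points.
print(round(absolute_increase(0.20, 0.30), 3))  # 0.06
```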