Odds Ratios
What are odds ratios?
The measure of association generated by case-control studies is the ‘odds ratio’ (OR), although other measures such as ‘relative risk’ may also be reported. The odds ratio is not the same as the more familiar ‘relative risk’, although the two are often (and, where the outcome is rare, justifiably) used interchangeably. The OR is an estimate of the effect of the factor under investigation, and therefore comes with an associated ‘confidence interval’ that must be considered alongside it. While odds ratios are an apparently simple measure of association, interpreting the outcome of a case-control study is not always straightforward.
In the table below, the final two columns report the odds ratio (OR) associated with room sharing (baby in a cot in the parents’ room) compared with sleeping in a room alone. Odds ratios indicate the magnitude of the difference in risk between one state and another. When the OR = 1, there is no difference between the states. Here we are considering the risk of SIDS, so an OR of 1 would mean that the risk of SIDS is the same in each state (e.g. each sleep location).
If the OR is greater than one, the exposure is more common among the cases (babies who died) than among the controls (a group of comparable babies who did not die), indicating a higher risk of SIDS associated with that exposure. If, as here, the ORs are less than one, the exposure is associated with a lower risk of SIDS. In the studies reported below a clear protective effect (lower risk of SIDS) is demonstrated for room sharing in every study, both in univariate analyses (looking only at sleep location: parental room compared with alone) and in multivariate analyses (comparing the two locations while controlling for confounding factors, such as the baby’s sleep position, parental smoking, socio-economic variables, and other factors that may cause the risk to vary).
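To make the arithmetic concrete, the sketch below computes an odds ratio from a 2x2 case-control table. This is a minimal illustration, not taken from any of the studies cited here; the counts and the function name `odds_ratio` are invented for the example.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 case-control table.

    a = exposed cases,    b = unexposed cases
    c = exposed controls, d = unexposed controls
    """
    return (a / b) / (c / d)

# Hypothetical counts: the exposure is rarer among cases than among
# controls, so the OR comes out below 1 (a 'protective' association).
print(odds_ratio(20, 80, 40, 60))  # 0.375
```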
Note: In the research summaries provided on this site, ‘aOR’ and ‘mOR’ refer to ‘adjusted’ or ‘multivariate’ odds ratios, as they are reported in the paper summarised.
| Author | Country | Cases exposed (%) | Controls exposed (%) | Univariate OR | Multivariate OR |
|---|---|---|---|---|---|
| Scragg 1996 | New Zealand | 20.7 | 37.1 | 0.44 | 0.25 |
| Blair 1999 | England | 25.3 | 39.0 | 0.53 | 0.51 |
| Hauck 2003 | United States | 20.8 | 28.1 | 0.67 | Not reported |
| Carpenter 2004 | Europe | 28.0 | 44.5 | 0.49 | 0.32 |
| Tappin 2005 | Scotland | 35.8 | 63.5 | 0.32 | 0.31 |
Table after Mitchell 2009:1715
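Because an odds ratio depends only on the odds of exposure in each group, the univariate ORs above can be recovered from the two percent-exposed columns alone. A quick check (the figures are taken directly from the table; the variable names are ours):

```python
# Percent exposed among cases and controls, from the table above.
studies = {
    "Scragg 1996":    (20.7, 37.1),
    "Blair 1999":     (25.3, 39.0),
    "Hauck 2003":     (20.8, 28.1),
    "Carpenter 2004": (28.0, 44.5),
    "Tappin 2005":    (35.8, 63.5),
}

for study, (case_pct, control_pct) in studies.items():
    case_odds = case_pct / (100 - case_pct)           # odds of exposure among cases
    control_odds = control_pct / (100 - control_pct)  # odds of exposure among controls
    print(f"{study}: univariate OR = {case_odds / control_odds:.2f}")
# Reproduces 0.44, 0.53, 0.67, 0.49 and 0.32 respectively.
```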
What are confidence intervals?
With every odds ratio, a confidence interval (CI) is also generated. The format varies somewhat, but the CI is usually reported alongside the OR as a range (e.g. OR 7.4, 95% CI 1.4-26.7).
With any test of association, researchers need to work out whether an identified association is ‘real’ (statistically significant) or whether it could plausibly have occurred by chance. In a test of whether (for example) mean adult height differs between two populations, a t-test could be used, which produces an associated p-value. A p-value smaller than 0.05 indicates that, if there were truly no difference between the populations, a difference at least as large as the one observed would arise by chance less than 5% of the time; by convention, such a result is taken as evidence of a real difference.
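The height example might look like the sketch below (the sample sizes, means, and standard deviations are assumed for illustration; scipy’s independent-samples t-test does the work):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical samples of adult heights (cm), drawn from
# populations whose true means differ by 4 cm.
heights_a = rng.normal(loc=170, scale=7, size=100)
heights_b = rng.normal(loc=174, scale=7, size=100)

result = stats.ttest_ind(heights_a, heights_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```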
Similarly, researchers use confidence intervals to evaluate both the precision and the statistical significance of the association represented by the odds ratio. The CI indicates how precisely the effect estimate (the OR) has been measured. Conventionally a ‘95% CI’ is used, meaning that if the study were repeated many times, 95% of the intervals constructed in this way would contain the true OR. There are two key points to note here. Firstly, if the CI contains 1 (e.g. OR 7.4, 95% CI 0.12-12.8), the association is not considered statistically significant, since 1 is the null (‘no difference’) value. Secondly, the smaller (or ‘narrower’) the CI, the more confidence we can have in the OR (so for an OR of 7.4, a CI of 4.4-9.2 is ‘better’ than a CI of 1.91-26.3).
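The papers summarised here do not all say how their intervals were calculated, but a common textbook approximation is the log-based (Woolf) method: take the natural log of the OR, add and subtract 1.96 standard errors, and exponentiate back. A minimal sketch, again with invented counts:

```python
import math

def or_with_ci(a, b, c, d, z=1.96):
    """OR and approximate 95% CI from a 2x2 table (log/Woolf method).

    a = exposed cases,    b = unexposed cases
    c = exposed controls, d = unexposed controls
    """
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts; an interval that straddled 1 would not be significant.
or_, lower, upper = or_with_ci(10, 90, 30, 70)
print(f"OR {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")  # OR 0.26, 95% CI 0.12-0.57
```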