My colleague Gary Schwitzer at HealthNewsReview.org has a post today questioning the validity of observational studies (where epidemiological researchers look at selected variables in large populations to see if there is a relationship between a cause and an effect). He’s legitimately frustrated by constant media attention on small and poorly constructed observational studies that generate misleading headlines like “Vitamin D Cures Cancer,” only to be followed a few weeks later by “Vitamin D Causes Cancer.” “Such research CAN NOT PROVE CAUSE-AND-EFFECT,” he screams in frustration.
Alas, Schwitzer has gone overboard to make a valid point. Carefully constructed observational studies have been crucial to advancing medicine and safety. Where would drug safety be today if David Graham of the FDA hadn’t done the observational study of a quarter million Kaiser Permanente Vioxx patients that proved the pain pill caused an excess of heart attacks and strokes? And where would the campaign against smoking be without the pioneering observational studies conducted by Richard Doll and Austin Bradford Hill (and later studies by Richard Peto) that showed smoking causes lung cancer?
Retrospective observational studies may not be the gold standard of double-blind, placebo-controlled trials, or even of prospective observational studies (like the Framingham Heart Study). But if done properly, they are valid and sometimes crucial to learning how medical interventions work in real-world populations (not in the controlled environment of a clinical trial), and what impact environmental insults are having on human health.
Let’s not throw the baby out with the bathwater in our efforts to increase journalistic skepticism about poorly constructed observational studies.