Historical Roots of Epidemiology – Part 1

This post is the first of many that will review articles pertaining to the historical roots of epidemiology. It is in preparation for comprehensive examinations and a class this fall semester. My goal is to give roughly a one-page summary of each article.

Do we really know what makes us healthy?
Gary Taubes
New York Times, Sept. 16, 2007
Link: http://www.nytimes.com/2007/09/16/magazine/16epidemiology-t.html

Author Gary Taubes has written several newspaper articles critical of the field of epidemiology. In a nutshell, Mr. Taubes blames epidemiology for the “here-today-gone-tomorrow nature of medical wisdom”. His favorite (but not only) example is the story of hormone replacement therapy (HRT) for women. The prior scientific consensus was that if women took estrogen to replace what declined with age (especially after menopause), it would protect their health – especially by lowering the risk of a heart attack. HRT became a common treatment until a pair of studies concluded that it could actually increase a woman’s risk for “heart disease, stroke, blood clots, breast cancer, and perhaps even dementia.” In his view, this “flip-flop rhythm of science” is emblematic of the field.


In the opinion of the journalist, the heart of the problem is the reliance on observational studies, which cannot definitively prove that some exposure (like HRT) causes some outcome (like prevention of heart disease in women). The only surefire way (according to the article) of proving causation is a randomized controlled trial (RCT), which the later HRT studies showing an increased health risk had been. But even then, “…it may be right…[i]t may not.” The only way to be more certain is to do another RCT.

Epidemiology is given praise for certain uses, such as predicting adverse outcomes of new prescription drugs, identifying certain risk factors for disease, and showing the distribution of diseases in the population, but the rest is by and large “rubbish”. RCTs often fail to replicate the findings of prior observational studies – sometimes even finding an opposite effect. And even large-scale observational studies cannot reliably detect small-to-moderate effects on health.

The author does briefly mention some counterarguments. First, that epidemiologists do not focus on any single study but on the totality of the evidence, which includes not only epidemiological studies but also basic biological science that can support the findings. Second, that the issue is not with epidemiology as a science, but with the press that reports on it and sensationalizes every new (and contradictory) study that comes out on a particular health topic. In response, the author points to epidemiology’s “track record” and insists that it is poor.

He then discusses why RCTs are the “gold standard” of research: they are able to control for known and unknown “variables…that might affect the outcome”, but they are limited by high costs, usually long duration, and ethical inappropriateness when the randomized exposure would be potentially harmful. He also discusses the bias of recruiting relatively healthy subjects into an RCT, which can limit the generalizability of a study’s results. These limitations leave researchers little choice but to rely on observational studies to investigate long-term health effects in certain circumstances.

Mr. Taubes does highlight some of epidemiology’s successes, such as its early history with Snow’s work on cholera and Goldberger’s work on pellagra. Of course, the link between cigarette smoking and lung cancer is also a favorite example, but his argument is that these kinds of ‘smoking gun’ exposures, where the elevated risk is large, are rare; many of the exposure-outcome associations studied today are small to modest.

In explaining the story of HRT in women, the issue raised is the possibility of biases that might affect the results. One is the healthy-user bias: people who take care of their health (for example, by being faithful to HRT treatment) may differ from those who do not in ways that make them healthier to begin with – and this, rather than HRT itself, could explain the apparently reduced risk. A related one is the compliance effect, whereby women in observational studies who faithfully take their estrogen treatment may be systematically different from women who do not. Yet another bias could come from the doctors of the study subjects themselves, the so-called prescriber effect, whereby healthier patients are given a treatment more often than sicker patients whom doctors might feel would not benefit anyway.
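
To make the healthy-user idea concrete, here is a toy simulation of my own (not from the article; every number is invented) in which HRT has no effect at all on heart attacks, yet an unmeasured “health-consciousness” trait drives both who uses HRT and who stays healthy, so the naive observational comparison makes HRT look protective while randomization recovers the true null:

# Toy simulation of the healthy-user bias (illustrative only; all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unmeasured trait: health-conscious women are more likely to use HRT
# and, independently of HRT, less likely to have a heart attack.
health_conscious = rng.random(n) < 0.5
uses_hrt = rng.random(n) < np.where(health_conscious, 0.6, 0.2)

# HRT itself is given zero effect: risk depends only on the hidden trait.
heart_attack = rng.random(n) < np.where(health_conscious, 0.02, 0.05)

# Naive observational comparison: looks 'protective' (risk ratio well below 1).
rr_obs = heart_attack[uses_hrt].mean() / heart_attack[~uses_hrt].mean()
print("Observational risk ratio:", round(rr_obs, 2))

# Randomized assignment breaks the link with the hidden trait: risk ratio near 1.
randomized = rng.random(n) < 0.5
rr_rct = heart_attack[randomized].mean() / heart_attack[~randomized].mean()
print("Randomized risk ratio:", round(rr_rct, 2))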

Three explanations are offered to interpret the results of the studies on HRT in women. First, the observational studies are wrong and the RCTs are right. Second, the RCTs are wrong and the observational studies are right. Third, both are right but answer different questions, in that HRT might be beneficial during certain periods of a woman’s life but a health risk at others.

Finally, the author lists a few ‘rules’ for deciding what to believe. First, don’t believe the first study; wait until it has been replicated in different populations. Second, if an effect is consistent across many studies but its size is small, doubt it. Third, if the association involves some aspect of human behavior, doubt its validity, unless the finding is of some unexpected harm.

As for my own take on the article, while the author does a decent job of highlighting some of the limitations of epidemiological research, I think the ultimate reason medical science seems to “flip-flop” has more to do with how results are communicated to the public. One could easily write an article about how journalists fail to report, or de-emphasize, the caveats that are part of just about any epidemiological paper. Certainty is conveyed where probabilistic statements would be more accurate. But certainty (at least about health issues) sells newspapers, as does controversy. Why wouldn’t contradictory reports get more press than studies that confirm previous findings?


Publication Progress

I wanted to note here that I finally re-submitted my paper, “Estimated cannabis-associated risk of developing a newly incident depression spell: Focus on early-onset cannabis use”, to Psychological Medicine over the past weekend.

Summer Projects

This summer is shaping up to be very busy for me. As such, I want to jot down some of the projects/classes/responsibilities that ought to be on my mind every week. Every Monday, I should be asking myself “What progress can I make towards these items?”, and every Friday, I should be asking myself “What progress have I made?”

  1. Submission of my F31 grant this August.
  2. Completion of my College on Problems of Drug Dependence poster on cigar and blunt use.
  3. Writing a draft of my cigar and blunt research project for publication.
  4. Preparing for the comprehensive examination.
  5. Studying for EPI 495 (Epi & Behavioral Health).
  6. Studying for EPI 823 (Cancer Epi).
  7. Self-study for survival analysis.
  8. Self-study for historical roots of epidemiology.
  9. Preparation for field study in fall.
  10. Undetermined brief research report.

Previously published papers

In this first post, I am just going to highlight a paper on which I served as a co-author. The abstract is copied below. The full text may be available through PubMed.

Early cannabis use and estimated risk of later onset of depression spells: Epidemiologic evidence from the population-based World Health Organization World Mental Health Survey Initiative.

Am J Epidemiol. 2010 Jul 15;172(2):149-59. Epub 2010 Jun 9

de Graaf R, Radovanovic M, van Laar M, Fairman B, Degenhardt L, Aguilar-Gaxiola S, Bruffaerts R, de Girolamo G, Fayyad J, Gureje O, Haro JM, Huang Y, Kostychenko S, Lépine JP, Matschinger H, Mora ME, Neumark Y, Ormel J, Posada-Villa J, Stein DJ, Tachimori H, Wells JE, Anthony JC.

Abstract

Early-onset cannabis use is widespread in many countries and might cause later onset of depression. Sound epidemiologic data across countries are missing. The authors estimated the suspected causal association that links early-onset (age <17 years) cannabis use with later-onset (age ≥17 years) risk of a depression spell, using data on 85,088 subjects from 17 countries participating in the population-based World Health Organization World Mental Health Survey Initiative (2001-2005). In all surveys, multistage household probability samples were evaluated with a fully structured diagnostic interview for assessment of psychiatric conditions. The association between early-onset cannabis use and later risk of a depression spell was studied using conditional logistic regression with local area matching of cases and controls, controlling for sex, age, tobacco use, and other mental health problems. The overall association was modest (controlled for sex and age, risk ratio = 1.5, 95% confidence interval: 1.4, 1.7), was statistically robust in 5 countries, and showed no sex difference. The association did not change appreciably with statistical adjustment for mental health problems, except for childhood conduct problems, which reduced the association to nonsignificance. This study did not allow differentiation of levels of cannabis use; this issue deserves consideration in future research.
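
The analytic approach described in the abstract (conditional logistic regression with local area matching of cases and controls) can be sketched in a few lines of code. This is only a rough illustration under assumed, hypothetical variable and file names, using statsmodels as one way to fit such a model; it is not the authors’ actual code, data, or results:

# Sketch of a matched (conditional) logistic regression like the one described
# in the abstract. File name, column names, and data are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Assumed one row per respondent with columns:
#   depression      0/1 later-onset depression spell (outcome)
#   early_cannabis  0/1 cannabis use before age 17 (exposure)
#   female, age, tobacco   covariates mentioned in the abstract
#   area_id         matching stratum (local geographic area)
df = pd.read_csv("wmh_analysis_file.csv")  # placeholder file name

y = df["depression"]
X = df[["early_cannabis", "female", "age", "tobacco"]]

# Conditioning on area_id mimics the local-area matching of cases and controls.
result = ConditionalLogit(y, X, groups=df["area_id"]).fit()

# Exponentiated coefficients give matched odds ratios with confidence intervals;
# the published estimate (RR = 1.5, 95% CI: 1.4, 1.7) comes from the authors'
# own analysis, not from this sketch.
print(np.exp(result.params))
print(np.exp(result.conf_int()))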