A study in the NEJM reports that after 11 years of follow-up in a very large cohort of men randomized either to PSA screening every 4 years (~73,000 subjects) or to no screening (~89,000 subjects) there was both a reduction in prostate cancer deaths and yet no overall mortality advantage. How confusing can things get? Here is a screenshot of today's headlines about it from Google News:
How can the same test cut prostate cancer deaths and at the same time not save lives? This is counter-intuitive. Yet I hope that a regular reader of this blog is not surprised at all. For the rest of you, here is a clue to the answer: competing risks.
What are competing risks? It is a mental model of life and death that says there are multiple causes competing to claim your life. If you are an obese smoker, you may die of a heart attack or diabetes complications or a cancer, or something altogether different. So, if I put you on a statin and get you to lose weight, but you continue to smoke, I may save you from dying from a heart attack, but not from cancer. One major feature of the competing risks model that confounds the public and students of epidemiology alike is that these risks can actually add up to over 100% for an individual. How is this possible? Well, the person I describe may have (and I am pulling these numbers out of thin air) a 50% risk of dying from a heart attack, 30% from lung cancer, 20% from head and neck cancer, and 30% from complications of diabetes. This adds up to 130%; how can this be? In an imaginary world of risk prediction anything is possible. The point is that he can only die of one thing, and whatever claims him first becomes his 100% cause of death.
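To make that concrete, here is a minimal simulation sketch in Python. Every number in it is invented: the hazard rates and the 30-year horizon are chosen only to roughly reproduce the out-of-thin-air percentages above. Taken one cause at a time, the risks sum to about 130%; once the causes are made to compete, each simulated person dies of at most one thing, so the realized proportions can never exceed 100%.

```python
import math
import random

random.seed(0)

HORIZON = 30  # years; an invented time frame for the illustration

# Hazard rates invented so that each cause ALONE carries roughly the
# made-up risks above: 50%, 30%, 20%, and 30% over 30 years.
hazards = {
    "heart attack": 0.0231,
    "lung cancer": 0.0119,
    "head and neck cancer": 0.0074,
    "diabetes complications": 0.0119,
}

# One cause at a time, ignoring the competition: these sum past 100%.
naive = {c: 1 - math.exp(-h * HORIZON) for c, h in hazards.items()}
print({c: f"{r:.0%}" for c, r in naive.items()})
print(f"naive sum: {sum(naive.values()):.0%}")  # ~130%

# Now let the causes compete: each simulated person dies of whichever
# cause strikes first (if any strikes within the horizon).
n = 100_000
deaths = dict.fromkeys(hazards, 0)
for _ in range(n):
    times = {c: random.expovariate(h) for c, h in hazards.items()}
    first = min(times, key=times.get)
    if times[first] <= HORIZON:
        deaths[first] += 1

print({c: f"{k / n:.0%}" for c, k in deaths.items()})
print(f"realized sum: {sum(deaths.values()) / n:.0%}")  # never above 100%
```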
Before I get to translating this to the PSA data, I want to say that I find the second paragraph of the Results section quite problematic. It tells me how many of the PSA tests were positive, how many screenings each man underwent on average, what percentage of those with a positive test underwent a biopsy, and how many of those biopsies turned up cancer. What I cannot tell from this is precisely how many men had a false positive test and still had to undergo a biopsy: the denominators in this paragraph shape-shift from tests to men. The best I can do is estimate: 136,689 screening tests, of which 16.6% (15,856) were positive. Dividing by the average of 2.27 tests per subject yields 6,985 men with a positive PSA screen, of whom 6,963 had biopsy-proven prostate cancer. And here is what is most unsettling: at a PSA cut-off of 4.0 ng/ml or higher, the specificity of this test for cancer is only 60-70%. What this means is that, at this cut-off, 30-40% of men without the disease would still test positive (a false positive). But if my calculations are anywhere in the ballpark of correct, only about 0.3% of the positive screens in this trial were false positives. This makes me think that either I am reading this paragraph incorrectly, or there is some mistake. I am especially concerned because the PSA cut-off used in the current study was 3.0 ng/ml, which raises sensitivity at the expense of specificity and should therefore produce even more false positives. So this is indeed bothersome, but I am willing to write it off to poor reporting of the data.
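For what it's worth, here is that arithmetic laid out as a short Python snippet, using the figures exactly as I read them out of that paragraph (so any misreading on my part carries straight through):

```python
# Back-of-envelope check using the screening-arm figures as quoted above.
positive_tests = 15_856  # positive PSA screens, as quoted
tests_per_man = 2.27     # average screens per subject
cancers = 6_963          # biopsy-proven prostate cancers

men_positive = positive_tests / tests_per_man  # ~6,985 men
false_positives = men_positive - cancers       # ~22 men

print(f"men with a positive screen: {men_positive:,.0f}")
print(f"implied false positives:    {false_positives:,.0f}")
print(f"fraction of positive screens that are false: "
      f"{false_positives / men_positive:.1%}")  # ~0.3%
# A 0.3% false-positive fraction is wildly at odds with a test whose
# specificity is only 60-70%, hence the suspicion of a misreading or
# a reporting problem.
```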
Let's get to mortality. The authors state that the death rates from prostate cancer were 0.39 in the screening group and 0.50 in the control group per 1,000 patient-years. Recall from the meat post that patient-years are roughly the number of subjects observed multiplied by the number of years of observation. To put the numbers in perspective, then, an individual's risk of prostate cancer death over 10 years drops from 0.50% to 0.39%, a microscopic absolute risk reduction of about 0.1 percentage point. Nevertheless, the relative risk reduction was a significant 21%. But of course we are only talking about deaths from prostate cancer, not from all the other competitors. And this is the crux of the matter: a man in the screening group was just as likely to die as a similar man in the non-screening group; it is simply that causes other than prostate cancer were more likely to claim his life.
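In code, the translation from rates to individual risks looks like this. Note this is a rough linearization of my own, not the paper's actuarial math, which is why it lands within rounding distance of the reported 21% rather than on it exactly:

```python
# The reported rates, translated into an individual's 10-year risk
# (a simple linearization, which is fine for rates this small).
rate_screen = 0.39 / 1000   # prostate cancer deaths per patient-year
rate_control = 0.50 / 1000
years = 10

risk_screen = rate_screen * years    # 0.39% over 10 years
risk_control = rate_control * years  # 0.50% over 10 years

arr = risk_control - risk_screen
rrr = arr / risk_control

print(f"absolute risk reduction: {arr:.2%}")  # ~0.11 percentage points
print(f"relative risk reduction: {rrr:.0%}")  # ~22%; the paper reports 21%
```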
The authors go through the motions of calculating the number needed to invite for screening (NNI) in order to avoid a single prostate cancer death, and it turns out to be 1,055. But really this number is only meaningful if we decide to get into death design, an "I don't want to die of this, but that other cause is OK" kind of a choice. And although I don't doubt that there may be takers for such a plan, I am pretty sure that my tax dollars should not pay for it. And thus I cast my vote for "doesn't."
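(For completeness: the NNI is essentially the reciprocal of the absolute risk difference per man invited. A quick sanity check using my rough 10-year figures, rather than the paper's exact cumulative-incidence math, lands in the same neighborhood:

```python
# NNI ~ 1 / (absolute risk difference per man invited), using the
# rough 10-year risks from above rather than the paper's exact math.
risk_control = 0.0050
risk_screen = 0.0039
nni = 1 / (risk_control - risk_screen)
print(f"NNI ~ {nni:,.0f}")  # ~909; the paper's exact figure is 1,055
```

Either way, you have to invite on the order of a thousand men to screening to trade one prostate cancer death for a death from something else.)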