Over at Scientific American Mind, Gerd Gigerenzer and his colleagues have published a terrific article documenting the statistical illiteracy that sometimes runs rampant in health care fields. The article, “Knowing Your Chances,” appears in the April/May/June 2009 edition.
The authors point out numerous medical care fallacies caused by statistical illiteracy, including Rudy Giuliani’s 2007 claim that because 82% of Americans survived prostate cancer, compared to only 44% in England, he was lucky to be living in the United States and not in England. This sort of claim is based on Giuliani’s failure to understand statistics. Yes, in the United States men are more quickly diagnosed as having prostate cancer (because many more of them are given PSA tests), and then many more of them are treated. Despite the stark differences in survival rates (the percentage of patients who survive the cancer for at least five years), “mortality rates in the two countries are close to the same: about 26 prostate cancer deaths per 100,000 American men versus 27 per 100,000 in Britain.” That fact suggests the PSA test has needlessly flagged prostate cancer in many American men, resulting in a lot of unnecessary surgery and radiation treatment, which often leads to impotence or incontinence. Because of overdiagnosis and lead-time bias, changes in five-year survival rates have no reliable relation to changes in mortality when patterns of diagnosis differ. And yet many official agencies continue to talk about five-year survival rates.
Gigerenzer and his colleagues give a highly disturbing example regarding mammogram results. Assume that a woman has just received a positive test result (suggesting breast cancer) and asks her doctor, “What are the chances that I have breast cancer?” In a dramatic study, researchers asked 160 gynecologists taking a continuing education course to give their best estimate based upon the following facts:
A) The probability that a woman has breast cancer (prevalence) is 1%.
B) If a woman has breast cancer, the probability that she tests positive (sensitivity) is 90%.
C) If a woman does not have breast cancer, the probability that she nonetheless tests positive (false-positive rate) is 9%.
The best answer can be quickly derived from the above three statements: only about one out of every 10 women who test positive actually has breast cancer; the other nine out of 10 have been falsely diagnosed. Only 21% of the physicians picked the right answer, and 60% of the gynecologists believed that there was either an 81% or a 90% chance that a woman with a positive test result actually had cancer, suggesting that they routinely cause horrific and needless fear in their patients.
What I found amazing is that you can quickly and easily determine that roughly 10% is the correct answer based upon the above three statements: simply assume that there are 100 patients, that one of them (1%) actually has breast cancer and tests positive, and that about nine of the remaining 99 (9%) test falsely positive. This is grade-school mathematics: only about 10% of the women testing positive actually have breast cancer.
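To make that shortcut concrete, here is a minimal Python sketch (my own, not from the article) that simply counts the expected positives among 100 women using the three figures above:

```python
# Natural-frequency shortcut: out of 100 women, 1 (1%) has breast cancer
# (assume she tests positive), and about 9 of the remaining 99 (9%) test
# falsely positive.
women = 100
true_positives = women * 0.01                       # 1 woman with cancer
false_positives = (women - true_positives) * 0.09   # about 8.9 healthy women

share_with_cancer = true_positives / (true_positives + false_positives)
print(round(share_with_cancer, 2))  # ~0.1, i.e. roughly 1 in 10 positives is real
```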
As the article describes, false diagnosis and bad interpretations often combine (e.g., in the case of HIV tests) to result in suicides, needless treatment and immense disruption in the lives of the patients.
The authors also discuss the (tiny) increased risk of blood clots caused by taking third-generation oral contraceptives. Because the news media and consumers so often exhibit innumeracy, this news about the risk was communicated in a way that caused great anxiety. People learned that the third-generation pill increased the risk of blood clots by “100%.” The media should have packaged the risk in a more meaningful way: whereas one out of 7,000 women who took the second-generation pill had a blood clot, this increased to two in 7,000 women who took the new pill. The “absolute risk increase” should have been more clearly communicated.
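To see the difference in plain numbers, here is a quick Python sketch (mine, not from the article) using the one-in-7,000 and two-in-7,000 figures from the paragraph above:

```python
# Relative vs. absolute risk increase for the pill example above.
baseline_risk = 1 / 7000   # second-generation pill: 1 clot per 7,000 women
new_risk = 2 / 7000        # third-generation pill: 2 clots per 7,000 women

relative_increase = (new_risk - baseline_risk) / baseline_risk
absolute_increase = new_risk - baseline_risk

print(f"Relative risk increase: {relative_increase:.0%}")              # 100%
print(f"Extra clots per 7,000 women: {absolute_increase * 7000:.0f}")  # 1
```

The same underlying change sounds alarming as “100%” and almost negligible as “one extra case per 7,000 women,” which is the whole point about absolute versus relative risk.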
Check out the full article for additional reasons to be concerned about statistical illiteracy.
Actually, the figure of about 10% of positives being true is not at all simple to arrive at.
Sorry if the following is hard to follow, but I'm having to type it out for the second time because I accidentally deleted it just a minute ago.
1% of women have breast cancer, but only 90% of those will test positive for it, leading to a 0.9% true-positive rate (1 * .9).
99% of women do not have breast cancer in the example, but 9% of them will test positive for it, leading to an 8.91% false-positive rate (99 * .09).
Thus the probability that a woman who tests positive actually has breast cancer is found as follows: the percentage of true positives divided by the percentage of total positives, which is the percentages of true and false positives added together. Filling in the numbers, we get .9% / (.9% + 8.91%), which becomes .9% / 9.81%, which is about 9.2%, meaning that only about 9.2% of positive breast cancer test results are actually true positives.
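Here is the same arithmetic as a quick Python sketch (my own shorthand, just to check the numbers; none of it comes from the article):

```python
# Check of the figures above.
prevalence = 0.01       # 1% of women have breast cancer
sensitivity = 0.90      # 90% of those test positive
false_pos_rate = 0.09   # 9% of the healthy 99% also test positive

true_pos = prevalence * sensitivity            # 0.009  -> 0.9%
false_pos = (1 - prevalence) * false_pos_rate  # 0.0891 -> 8.91%

print(true_pos / (true_pos + false_pos))  # ~0.0917, i.e. about 9.2%
```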
9.2% could be considered to be about 10%, but it is hardly child's play to determine that number, and so I don't find it at all surprising that there is a high level of statistical illiteracy.
Voynix, it wouldn't be easy for a typical patient to figure this out. But if you are a doctor, you had damned better know what that positive test result means and be able to explain it to your patients in simple language. To not know is malpractice. Most doctors in the survey thought that the positive result meant you almost certainly had cancer, whereas the truth is that a positive result means merely that you have an (approximately) 10% chance of having cancer. Sure, it takes a bit of effort to know what the numbers mean. It would take about two minutes for anyone with basic math skills.
My criticism was aimed at the doctors, not the patients. Again, if doctors don't know enough math to understand the test result in context, they're not qualified to be giving advice to their patients.
Voynix, I believe the method described in the article is valid and close enough. Much easier for folks to grasp.
http://chance.dartmouth.edu/chancewiki/ has more examples of "chance" in the media.
Erich, you're quite right. I just wanted to point out that it's not super simple or obvious and that thus patients shouldn't really be expected to work out this sort of thing. But you're completely right about how doctors absolutely should be able to understand and explain statistical phenomena like this.
sosman, I saw the method used in the article, which wasn't fully explained, I assume because it isn't central to the main point; it gets roughly the right answer using slightly sloppy work, which is why I wrote my comment. The site you linked looks interesting.