Statistical illiteracy afflicts health care professionals and their patients

Over at Scientific American Mind, Gerd Gigerenzer and his colleagues have published a terrific article documenting the statistical illiteracy that sometimes runs rampant in health care fields. The article, "Knowing Your Chances," appears in the April/May/June 2009 edition. The authors point out numerous medical care fallacies caused by statistical illiteracy, including Rudy Giuliani's 2007 claim that because 82% of Americans survived prostate cancer, compared to only 44% in England, he was lucky to be living in the United States and not in England. This sort of claim is based on Giuliani's failure to understand statistics. Yes, in the United States, men will be more quickly diagnosed as having prostate cancer (because many more of them are given PSA tests), and then many more of them will be treated. But despite the stark differences in survival rates (the percentage of patients who survive the cancer for at least five years), mortality rates in the two countries are close to the same: about 26 prostate cancer deaths per 100,000 American men versus 27 per 100,000 in Britain. That fact suggests the PSA test has needlessly flagged prostate cancer in many American men, resulting in a lot of unnecessary surgery and radiation treatment, which often leads to impotence or incontinence. Because of overdiagnosis and lead-time bias, changes in five-year survival rates have no reliable relation to changes in mortality when patterns of diagnosis differ. And yet many official agencies continue to talk about five-year survival rates.

Gigerenzer and his colleagues give a highly disturbing example regarding mammogram results. Assume that a woman has just received a positive test result (suggesting breast cancer) and asks her doctor, "What are the chances that I have breast cancer?" In a dramatic study, researchers asked 160 gynecologists taking a continuing education course to give their best estimate based upon the following facts:

A) The probability that a woman has breast cancer (prevalence) is 1%.
B) If a woman has breast cancer, the probability that she tests positive (sensitivity) is 90%.
C) If a woman does not have breast cancer, the probability that she nonetheless tests positive (false-positive rate) is 9%.

The best answer can be quickly derived from the above three statements: only about one out of 10 women who test positive actually has breast cancer. The other nine out of 10 have been falsely diagnosed. Only 21% of the physicians picked the right answer. 60% of the gynecologists believed that there was either an 81% or 90% chance that a woman with a positive test result actually had cancer, suggesting that they routinely cause horrific and needless fear in their patients. What I found amazing is that you can quickly and easily determine that roughly 10% is the correct answer based upon the above three statements: simply assume that there are 100 patients, that one of them (1%) actually has breast cancer, and that about nine of them (9% of the other 99) test false positive. This is grade school mathematics: of the roughly 10 women testing positive, only about one, or 10%, actually has breast cancer.

As the article describes, false diagnoses and bad interpretations often combine (e.g., in the case of HIV tests) to result in suicides, needless treatment and immense disruption in the lives of patients. The authors also discuss the (tiny) increased risk of blood clots caused by taking third-generation oral contraceptives. Because the news media and consumers so often exhibit innumeracy, this news about the risk was communicated in a way that caused great anxiety. People learned that the third-generation pill increased the risk of blood clots by "100%." The media should have packaged the risk in a more meaningful way: whereas one out of 7,000 women who took the second-generation pill had a blood clot, this increased to two in 7,000 women who took the new pill. The "absolute risk increase" should have been more clearly communicated.
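The counting argument above is just Bayes' rule in disguise, and it takes only a few lines to check. Here is a minimal sketch using the article's three numbers (the function name is mine; the figures are the illustrative values from statements A, B and C):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule, using simple counting."""
    true_positives = prevalence * sensitivity              # sick women who test positive
    false_positives = (1 - prevalence) * false_positive_rate  # healthy women who test positive
    return true_positives / (true_positives + false_positives)

# Prevalence 1%, sensitivity 90%, false-positive rate 9%:
ppv = positive_predictive_value(0.01, 0.90, 0.09)
print(f"{ppv:.1%}")  # about 9%: roughly 1 in 10 positives is a true case
```

Out of 100 women, about 0.9 true positives and about 9 false positives test positive, so a positive result means cancer only about one time in ten, exactly the grade-school arithmetic described above.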
Check out the full article for additional reasons to be concerned about statistical illiteracy.


Just run the Monty Hall experiment and get it over with

Monty Hall was the host of a TV game show called "Let's Make a Deal." I watched it when I was a boy, and it was quite entertaining. One of the specific games on the show involved allowing a contestant to pick one of three wrapped prizes. Some of the prizes were valuable, but one was often a worthless gag. After the contestant chose one gift, Monty invariably removed one of the two wrapped gifts that the contestant did not take. He then asked the contestant whether he/she wanted to trade the box originally chosen for the other remaining gift. Should the contestant stay put, or should he/she switch?

My gut feeling says that there is nothing to gain by switching, but there are many experts who disagree with me. Frankly, I’m tired of hearing about the Monty Hall problem. Many mathematically inclined experts insist that you should ALWAYS switch after Monty takes away one of the three hidden prizes. There’s all kinds of high-end mathematics involved in many of these analyses (see here for instance). The dispute gets really high-pitched sometimes, which is usually a clue that experts are claiming to be certain when they really don’t have any right to be.

What I’m wondering is this: why don’t some social scientists simply gather empirical data in a lab? Assign someone to be Monty and let college students play the roles of contestants. Set the experimental parameters precisely (this needs to be done carefully, because there is some question as to what, exactly, Monty knows and does) and run the test over and over until you’ve got LOTS of data. Have some students always make the switch. Have others never make the switch. Then tally the results and tell the mathematicians that you have the real answer. So that’s my thought: let real-world trials tell the theoreticians the answer. Then let’s move on, please. Or has someone actually run the Monty Hall skit over and over in a lab yet and added up the results?
I haven’t seen it yet, if this has been done.
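Short of a lab full of college students, the experiment is easy to run in software. Here is a quick sketch under the standard rules assumed in most analyses (Monty knows where the prize is and always opens a losing door that the contestant did not pick; the parameters matter, as noted above):

```python
import random

def play_round(switch, rng):
    """One game under the standard rules; returns True if the contestant wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)   # where the real prize is hidden
    pick = rng.choice(doors)    # contestant's initial choice
    # Monty opens a door that is neither the contestant's pick nor the prize
    opened = rng.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        # trade for the one remaining unopened door
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

rng = random.Random(42)  # fixed seed so the run is repeatable
trials = 100_000
stay_rate = sum(play_round(False, rng) for _ in range(trials)) / trials
switch_rate = sum(play_round(True, rng) for _ in range(trials)) / trials
print(f"stay wins:   {stay_rate:.3f}")    # hovers near 1/3
print(f"switch wins: {switch_rate:.3f}")  # hovers near 2/3
```

In runs like this the switchers win roughly twice as often as the stayers, which is what the mathematicians predict; a lab version with human contestants would add the behavioral data the simulation cannot supply.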


How to weed out junk science when discussing climate change.

George Will's recent journalistic malpractice has inspired much discussion by many people concerned about climate change. It's a critically important issue given that 41% of Americans currently think that the threat of global warming is being exaggerated by the media. The intellectual energy runs even deeper than criticism of George Will, though, leading us to the fundamental issue of how journalists and readers can distinguish legitimate science from sham (or politicized) science. The Washington Post recently agreed to publish a precisely-worded response to Will by Christopher Mooney. Here's Mooney's opener:

A recent controversy over claims about climate science by Post op-ed columnist George F. Will raises a critical question: Can we ever know, on any contentious or politicized topic, how to recognize the real conclusions of science and how to distinguish them from scientific-sounding spin or misinformation?

Mooney methodically takes Will to task on point after point. For instance, weather is not the same thing as climate. The state of the art in 1970s climate science has been superseded by 2007 climate science. You can't determine long-term trends in Arctic ice by comparing ice thickness on only two strategically picked days. The bottom line is not surprising: if you want to do science well, you have to do it with precision, measuring repeatedly, crunching the numbers every which way and then drawing your conclusions self-critically. What is not allowed is cherry-picking.

Readers and commentators must learn to share some practices with scientists -- following up on sources, taking scientific knowledge seriously rather than cherry-picking misleading bits of information, and applying critical thinking to the weighing of evidence. That, in the end, is all that good science really is. It's also what good journalism and commentary alike must strive to be -- now more than ever.

Mooney has given considerable thought to these topics. His byline indicates that he is the author of "The Republican War on Science" and co-author of the forthcoming "Unscientific America: How Scientific Illiteracy Threatens Our Future." I would supplement Mooney's well-written points, borrowing from our federal courts. They have long been faced with the struggle to determine what is real science and what is junk science, and they have settled on what is now called the "Daubert" test (named after the case first applying the test, Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993)). The Daubert analysis is applied many times every day in all federal courts (and many state courts) all across America. The problem facing judges is that the parties to lawsuits often produce experts who express scientific theories and explanations that are never heard outside of courtrooms. This justifiably makes judges suspicious. Is the witness doing "real" science, or is he/she doing sham science to further the interests of the party paying his/her bills? The Daubert test asks the judge to serve as gatekeeper, to make sure that only legitimate science sees the light of day in courtrooms. Here are the relevant factors:
  • Does the method involve empirical testing (is the theory or technique falsifiable, refutable, and testable)?
  • Has the method been subjected to peer review and publication?
  • Do we know the error rate of the method and the existence and maintenance of standards concerning its operation?
  • Are the theory and technique generally accepted by a relevant scientific community?
Positive answers to each of these factors suggest that the witness is doing real science. Astrology would fail this test miserably. Applied to climate science, the Daubert test would require that we listen carefully to what the scientists talk about with each other, in person and in their peer-reviewed journals. Daubert would require that we know enough about the techniques of climate science to know how it makes its measurements and draws its conclusions. Daubert would certainly require that we know the difference between the weather and the climate. Applying Daubert is not simply a matter of listening to the scientists. Quite often, scientists are bought and paid for (e.g., scientists working for tobacco companies and corrupt pharmaceutical companies). Applying Daubert requires taking the time to understand how the science works to solve real-world questions and problems, and then taking the time to see that its methodology is being used with rigor in this application. There are no shortcuts, especially for outsider non-scientists. No shortcuts. No cherry-picking.


George Will’s irresponsible article denying climate change and the Washington Post’s irresponsible fact-checking

George Will has written an irresponsible article denying climate change (AKA global warming). Here’s the basic problem with George Will’s writing, as stated succinctly by The Wonk Room:

In “Dark Green Doomsayers,” Will attacked Secretary of Energy Steven Chu for discussing a worst-case scenario of California drought caused by the decimation of Sierra snowpack, falsely claiming Chu predicted this will come to pass “no later than 10 years away.” Will also incorrectly claimed that “global sea ice levels now equal those of 1979” — based on a 45-day-old blog post by Daily Tech’s Michael Asher, one of Marc Morano’s climate denial jokers.

Will’s article is riddled with falsehoods. The radically untrue nature of Will’s article is beyond dispute. Confronted with Will’s cauldron of conservative climate denial propaganda, the Washington Post was faced with a stark choice. It could either A) confess that it failed to do any competent fact-checking or B) compound Will’s lies with its own by claiming that it did real fact-checking. It chose “B.”


Reading On The Rise

According to this report, reading is on the rise in America for the first time in a quarter century. It's difficult for me to express how pleased this makes me. Civilization and its discontents have been in the back of my mind since I became aware of how little reading most people do. To go into a house---a nice house, well-furnished, a place of some affluence---and see no books at all has always given me a chill, especially if there are children in the house. Over the last 30 years, since I've been paying attention to the issue, I've found a bewildering array of excuses among people across all walks of life as to why they never read. I can understand fatigue, certainly---it is easier to just flip on the tube and veg out to canned dramas---but in many of these instances, reading has simply never been important. To someone for whom reading has been the great salvation, this is simply baffling. Reading, I believe, is the best way we have to gain access to the world short of physically immersing ourselves in different places and cultures. Even for those who have the opportunity and resources to travel that extensively, reading provides a necessary background for the many places that would otherwise be inaccessibly alien to our sensibilities.
