The wrong type of math

In this three-minute talk, Mathemagician Art Benjamin urges that we change our emphasis when we teach our children math. I couldn't agree with him more. It saddens me to consider the immense amount of self-inflicted damage that Americans could have avoided if only they had been more savvy regarding probability and statistics. For example, very few Americans die of "terrorism," whereas the lives of millions of Americans are severely damaged or destroyed every year by crappy schools, lack of health care (including the failure to obtain colonoscopies), wars begun on the basis of lies, various risky behaviors and other problems almost too numerous to mention, all of which leave the actual danger of "terrorism" in the dust. Yet Americans spend a massively lopsided portion of their tax dollars each year preventing "terrorism." Each of the serious causes of death we face would be much more preventable if only Americans had a better grasp of statistics and probability. With better training in statistics and probabilities, Americans could better understand the risks they face and the probabilities of success of various proposed "solutions." With better training, as Art Benjamin suggests, we would be better able to order our national priorities so as to prevent the things that are most likely to harm us.


Training people to quantify risks

In the October 29, 2009 edition of Nature (available online only to subscribers), writer Michael Bond considered whether members of the general public could benefit from specialized training so that they could better appreciate risks. Believe it or not, there's a controversy in this field. According to Bond, many specialists think that the public "will never really be capable of making the best decision on the basis of the available scientific information." This pessimistic position advocates that risk-related decision-making should be conducted by paternalistic expert agencies, which should nudge people into making better decisions without educating them deeply as to why they should make the choices they are being encouraged to make. A classic example is changing the default on one's driver's license with regard to whether one would like to donate one's organs after death: making donation the default unless a box is checked dramatically increases participation in the program. The optimistic camp is represented by a variety of experts, including psychologist Gerd Gigerenzer, who argues that people can be taught to improve their decision-making skills. As the Nature article points out, poor decision-making is ubiquitous, and it seriously undermines people's well-being. When faced with unfamiliar, emotion-fraught situations, "most people suspend their powers of reasoning and go with an instinctive reaction that will often lead them astray." Bond gives the examples of people refusing measles-mumps-rubella vaccinations and the unjustified fear many people have of genetically modified crops. He also mentions the statistical deficiencies of health care providers, an issue I discussed in an earlier post. We still get all exercised about snakes, even though we rarely encounter them, while we ignore such things as peak oil and the danger of getting into automobiles.
Why do people have such a hard time evaluating risks?

The problem, as many researchers in cognitive neuroscience and psychology have concluded, is that people use two main brain systems to make decisions. One is instinctive: it operates below the level of conscious control and is often driven by emotions. The other is conscious and rational. The first system is automatic, quick and highly effective in situations such as walking along a crowded pavement, which requires the near-instantaneous integration of complex information and the carrying out of well-practiced actions. The second system is more useful in novel situations, such as deciding on a savings plan, which call for deliberate analysis. Unfortunately, the first system has a way of kicking in even when deliberation would serve best.
Gigerenzer argues that proper education and training could help people put the rational system in charge of the instinctive one. He claims that even one half-hour of training in statistics significantly improves people's ability to quantify risk. Bond lists several promising methods for improving critical thinking: training people to look at problems from an outsider's perspective; training them to weigh multiple options simultaneously rather than looking at options one at a time; and "actively open-minded thinking," which requires people to intentionally consider more than the first conclusion that comes to mind.

How important is it that people learn to better evaluate risks? In addition to the examples cited at the top of this article, research suggests that people who suffer from innumeracy overestimate the likelihood of terrorist attacks. They also "tend to have a high body-mass index and tend to be poor at managing their own health."

Those who believe that people can be trained to better appreciate statistics argue that people need to be taught to "feel the numbers," using real-life situations to illustrate the statistics. Many students don't receive any training in statistics at all. In fact, the article mentions that only one law school in the United States requires a course in statistical thinking, which means that many judges and lawyers are not properly prepared to assess risks in our modern world. Younger students are neglected too: they are taught only the mathematics of certainty (such as geometry and trigonometry), not the mathematics of uncertainty.

Bond's article concludes with the suggestion that we now have a society of people who don't understand that they don't understand. He argues that society would see long-term benefits if we would only stress the need for a rigorous education in the statistics of risk.


How foreclosed homes affect the rest of us

Arianna Huffington referred to the Brennan Center's recent study in reminding us that 300,000 homes are foreclosed every month in the U.S. This is terrible news for the people who used to make those houses their homes. But the problem is bad for the rest of us too:

[A]n estimated 40 million homes are located next door to a foreclosed property. The value of these homes drops an average of $8,667 following a foreclosure. This translates into a total property value loss of $352 billion. And vacant properties take a heavy toll on already strapped local governments: an estimated $20,000 per foreclosure (California is estimated to have lost approximately $4 billion in tax revenue in 2008). And the negative impact of a foreclosed home can affect the entire community: a one percent increase in foreclosures translates into a 2.3 percent rise in violent crimes.
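The quoted totals are easy to sanity-check yourself, which is exactly the kind of "feeling the numbers" this site keeps advocating. A quick sketch in Python (variable names are mine):

```python
# Figures quoted from the Brennan Center study
homes_next_door = 40_000_000   # homes located next to a foreclosed property
avg_value_drop = 8_667         # average per-home value drop, in dollars

total_loss = homes_next_door * avg_value_drop
print(f"${total_loss / 1e9:.0f} billion")  # about $347 billion
```

The product comes to roughly $347 billion; the study's $352 billion figure presumably reflects unrounded per-home inputs rather than the rounded $8,667 average.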


Statistical illiteracy afflicts health care professionals and their patients

Over at Scientific American Mind, Gerd Gigerenzer and his colleagues have published a terrific article documenting the statistical illiteracy that sometimes runs rampant in the health care fields. The article, "Knowing Your Chances," appears in the April/May/June 2009 edition. The authors point out numerous medical-care fallacies caused by statistical illiteracy, including Rudy Giuliani's 2007 claim that because 82% of Americans survived prostate cancer, compared to only 44% in England, he was lucky to be living in the United States and not in England. This sort of claim is based on Giuliani's failure to understand statistics. Yes, in the United States men will be more quickly diagnosed as having prostate cancer (because many more of them are given PSA tests), and then many more of them will be treated. But despite the stark differences in survival rates (the percentage of patients who survive the cancer for at least five years), mortality rates in the two countries are close to the same: about 26 prostate cancer deaths per 100,000 American men versus 27 per 100,000 in Britain. That fact suggests the PSA test has needlessly flagged prostate cancer in many American men, resulting in a lot of unnecessary surgery and radiation treatment, which often leads to impotence or incontinence. Because of overdiagnosis and lead-time bias, changes in five-year survival rates have no reliable relation to changes in mortality when patterns of diagnosis differ. And yet many official agencies continue to talk about five-year survival rates.
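The overdiagnosis effect is easier to feel with a toy calculation. In the sketch below, the mortality figures are the ones quoted above; the diagnosis counts are illustrative numbers I chose to reproduce the quoted survival percentages, not real epidemiology:

```python
def five_year_survival(diagnosed_per_100k, deaths_per_100k):
    """Share of diagnosed patients still alive at five years."""
    return 1 - deaths_per_100k / diagnosed_per_100k

# Nearly identical mortality (26 vs. 27 deaths per 100,000 men), but
# wider PSA screening sharply inflates the number of US diagnoses.
us = five_year_survival(diagnosed_per_100k=144, deaths_per_100k=26)
uk = five_year_survival(diagnosed_per_100k=48, deaths_per_100k=27)
print(f"US survival {us:.0%}, UK survival {uk:.0%}")  # US survival 82%, UK survival 44%
```

Diagnosing three times as many men, most of whom were never going to die of the disease, makes the survival statistic soar even though virtually the same number of men die.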

Gigerenzer and his colleagues give a highly disturbing example regarding mammogram results. Assume that a woman has just received a positive test result (suggesting breast cancer) and asks her doctor, "What are the chances that I have breast cancer?" In a dramatic study, researchers asked 160 gynecologists taking a continuing education course to give their best estimate based upon the following facts:

A) The probability that a woman has breast cancer (prevalence) is 1%.
B) If a woman has breast cancer, the probability that she tests positive (sensitivity) is 90%.
C) If a woman does not have breast cancer, the probability that she nonetheless tests positive (false-positive rate) is 9%.

The best answer can be quickly derived from these three statements: only about one out of 10 women who test positive actually has breast cancer. The other nine out of ten have been falsely diagnosed. Yet only 21% of the physicians picked the right answer, and 60% of the gynecologists believed that a woman with a positive test result had either an 81% or a 90% chance of actually having cancer, suggesting that they routinely cause horrific and needless fear in their patients. What I found amazing is how quickly and easily the correct answer falls out of the three statements above: simply assume that there are 100 patients, that one of them (1%) actually has breast cancer, and that roughly nine of the others (9%) falsely test positive. This is grade-school mathematics: only about one in ten of the women testing positive actually has breast cancer.

As the article describes, false diagnoses and bad interpretations often combine (e.g., in the case of HIV tests) to result in suicides, needless treatment and immense disruption in the lives of patients. The authors also discuss the (tiny) increased risk of blood clots caused by taking third-generation oral contraceptives. Because the news media and consumers so often exhibit innumeracy, this news about the risk was communicated in a way that caused great anxiety. People learned that the third-generation pill increased the risk of blood clots by "100%." The media should have packaged the risk in a more meaningful way: whereas one out of 7,000 women who took the second-generation pill had a blood clot, this increased to two in 7,000 women who took the new pill. The "absolute risk increase" should have been more clearly communicated.
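The natural-frequency reasoning in the mammogram example is just Bayes' rule, and it fits in a few lines of Python (the numbers are the ones from the study; the function name is mine):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability of actually having the disease, given a positive test."""
    true_positives = prevalence * sensitivity                   # sick women who test positive
    false_positives = (1 - prevalence) * false_positive_rate    # healthy women who test positive
    return true_positives / (true_positives + false_positives)

# Prevalence 1%, sensitivity 90%, false-positive rate 9%
ppv = positive_predictive_value(0.01, 0.90, 0.09)
print(f"{ppv:.1%}")  # about 9.2% -- roughly 1 in 10, nowhere near 81-90%
```

Plugging in the numbers that 60% of the gynecologists believed (81% or 90%) makes the size of their error vivid: they overestimated the true probability by roughly a factor of nine.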
Check out the full article for additional reasons to be concerned about statistical illiteracy.
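The contraceptive story above comes down to one line of arithmetic: the same change in risk sounds alarming framed relatively and negligible framed absolutely. A minimal sketch (function and variable names are mine):

```python
def risk_change(baseline, new):
    """Return the same risk increase in relative and absolute terms."""
    relative = (new - baseline) / baseline   # what the media reported: "+100%"
    absolute = new - baseline                # what patients needed to hear
    return relative, absolute

rel, abs_inc = risk_change(1 / 7000, 2 / 7000)
print(f"relative: +{rel:.0%}; absolute: {abs_inc * 7000:.0f} extra clot per 7,000 women")
```

Both numbers describe exactly the same facts; only the absolute framing tells a woman what the change means for her.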
