It’s bad enough that we often have to listen to blowhards while we’re out and about; I’m referring to people who render long strings of opinions even though they have no credentials, expertise, or curiosity about the facts. But what do you think when you hear experts pontificating about the future? I’m talking about the many experts the media serves up to us, people talking with great confidence about upcoming catastrophes, the prices of houses or stocks, or the consequences of social unrest (and, what the hell, let’s add sports “experts” to the mix).
Dan Gardner wondered about this, and he wrote a book titled “Future Babble: Why Expert Predictions Are Next to Worthless and You Can Do Better.” I learned about Gardner in a well-written article by Ronald Bailey in Reason, “It’s Hard to Make Predictions, Especially About the Future”:
In Future Babble, Gardner acknowledges his debt to political scientist Philip Tetlock, who set up a 20-year experiment in which he enrolled nearly 300 experts in politics. Tetlock then solicited thousands of predictions about the fates of scores of countries and later checked how well they did. Not so well. Tetlock concluded that most of his experts would have been beaten by “a dart-throwing chimpanzee.” Tetlock found that the experts wearing rose-tinted glasses “assigned probabilities of 65 percent to rosy scenarios that materialized only 15 percent of the time.” Doomsters did even worse: “They assigned probabilities of 70 percent to bleak scenarios that materialized only 12 percent of the time.”
The problem with experts was also discussed in a March 2011 issue of Scientific American, “Financial Flimflam: Why Economic Experts’ Predictions Fail,” which offers this refinement of Tetlock’s findings:
There was one significant factor in greater prediction success, however, and that was cognitive style: “foxes” who know a little about many things do better than “hedgehogs” who know a lot about one area of expertise. Low scorers, Tetlock wrote, were “thinkers who ‘know one big thing,’ aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who ‘do not get it,’ and express considerable confidence that they are already pretty proficient forecasters.” High scorers in the study were “thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ‘ad hocery’ that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.”
I suppose the bottom-line advice is that you need a psychological profile of an expert before deciding whether to believe him or her. But maybe a long, impressive track record would be a reasonable substitute.
Anti-science generalizes to anti-expertise: actually knowing something is bound to conflict with belief. An apologist is not an expert on the Bible, just someone with a talented tongue for weaving confusion.