Archive for March 23rd, 2009
George Will’s recent journalistic malpractice has inspired much discussion by many people concerned about climate change. It’s a critically important issue given that 41% of Americans currently think that the threat of global warming is being exaggerated by the media.
The intellectual energy runs even deeper than criticism of George Will, though, leading us to the fundamental issue of how journalists and readers can distinguish legitimate science from sham (or politicized) science. The Washington Post recently agreed to publish a precisely worded response to Will by Christopher Mooney. Here’s Mooney’s opener:
A recent controversy over claims about climate science by Post op-ed columnist George F. Will raises a critical question: Can we ever know, on any contentious or politicized topic, how to recognize the real conclusions of science and how to distinguish them from scientific-sounding spin or misinformation?
Mooney methodically takes Will to task on point after point. For instance, weather is not the same thing as climate. The state of the art in 1970s climate science has been superseded by 2007 climate science. You can’t determine long-term trends in Arctic ice by comparing ice thickness on only two strategically picked days.
The bottom line is not surprising. If you want to do science well, you have to do it with precision, measuring repeatedly, crunching the numbers every which way and then drawing your conclusions self-critically. What is not allowed is cherry-picking.
Readers and commentators must learn to share some practices with scientists — following up on sources, taking scientific knowledge seriously rather than cherry-picking misleading bits of information, and applying critical thinking to the weighing of evidence. That, in the end, is all that good science really is. It’s also what good journalism and commentary alike must strive to be — now more than ever.
Mooney has given considerable thought to these topics. His byline indicates that he is the author of “The Republican War on Science” and co-author of the forthcoming “Unscientific America: How Scientific Illiteracy Threatens Our Future.”
I would supplement Mooney’s well-written points by borrowing from our federal courts. They have long struggled to determine what is real science and what is junk science, and they have settled on what is now called the “Daubert” test (named after the case that first applied it, Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993)). The Daubert analysis is applied many times every day in federal courts (and many state courts) all across America.
The problem facing judges is that the parties to lawsuits often produce experts who express scientific theories and explanations that are never heard outside of courtrooms. This justifiably makes judges suspicious. Is the witness doing “real” science, or is he or she doing sham science to further the interests of the party paying the bills? The Daubert test asks the judge to serve as gatekeeper, making sure that only legitimate science sees the light of day in courtrooms. Here are the relevant factors:
- Does the method involve empirical testing (is the theory or technique falsifiable, refutable, and testable)?
- Has the method been subjected to peer review and publication?
- Do we know the error rate of the method and the existence and maintenance of standards concerning its operation?
- Is the theory and technique generally accepted by a relevant scientific community?
Positive answers to each of these factors suggest that the witness is doing real science. Astrology would fail this test miserably.
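The four factors above can be read as a checklist. Here is a purely illustrative sketch in Python: the names and the all-factors-must-pass rule are my own simplification, since an actual Daubert ruling is a judge’s holistic judgment, not a mechanical boolean test.

```python
from dataclasses import dataclass

# Hypothetical model of the four Daubert factors as yes/no answers.
@dataclass
class DaubertFactors:
    empirically_testable: bool  # falsifiable, refutable, and testable?
    peer_reviewed: bool         # subjected to peer review and publication?
    known_error_rate: bool      # error rate and operating standards known?
    generally_accepted: bool    # accepted by the relevant scientific community?

def looks_like_real_science(f: DaubertFactors) -> bool:
    """Crude proxy: every factor must be answered positively."""
    return all((f.empirically_testable, f.peer_reviewed,
                f.known_error_rate, f.generally_accepted))

# Astrology fails on every factor.
astrology = DaubertFactors(False, False, False, False)
print(looks_like_real_science(astrology))  # False
```

The point of the sketch is only that the test is structured: each factor is a concrete question an outsider can ask, rather than a vague appeal to authority.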
Applied to climate science, the Daubert test would require that we listen carefully to what the scientists say to each other, in person and in their peer-reviewed journals. Daubert would require that we know enough about the techniques of climate science to understand how it makes its measurements and draws its conclusions. Daubert would certainly require that we know the difference between the weather and the climate.
Applying Daubert is not simply a matter of listening to the scientists. Quite often, the scientists are bought and paid for (e.g., scientists working for tobacco companies and corrupt pharmaceutical companies). Applying Daubert requires taking the time to understand how the science works to solve real-world questions and problems, and then taking the time to see that its methodology is being used with rigor in this application. There are no shortcuts, especially for outsider non-scientists.
No shortcuts. No cherry-picking.