
On Climategate

As you know, emails leaked (read: stolen) from a climate research group suggested some inappropriate attitudes amongst the scientists – at least when carefully edited and combed for inflammatory material. Reviews by major publications found that the emails did not constitute evidence of fraud, but the public perception was quite the opposite.

Of course, lots of people are emotionally invested in climate research. If it’s true, a lot of our habits will have to change. If it’s not true, it is a very expensive mistake. Furthermore, lots of scientists have staked their careers on the proposition that it’s a big deal. So, yes, there is a bit of incentive to defend that proposition. But a lot of the public discussion concerns the “consensus” among “scientists.”

Wrong question: “do scientists believe in global warming?”

Right question: “do specialists in the field of climate science find a credible risk?”

With regard to the second question, there is an answer. There is consensus. Yes, there is a risk. Consensus does not equal truth, of course. Nor does credible risk imply a guaranteed catastrophe. Nor does outright fraud imply a bankrupt field. Let me explain these three.

Consensus is not truth. If you had asked a well-educated ornithologist to describe swans a few hundred years ago (prior to 1790), he would have told you about white, majestic birds. There was consensus, based on thousands of observations, that all swans are white. This was credible science built on good evidence, and it would have been wise to respect the conclusion as the best available at the time. It was entirely wrong, of course. There are black swans. But a consensus based on the preponderance of evidence is often the most trustworthy guideline available, and we would be foolish to discount it just because it might be disproven tomorrow. Of course, we must keep collecting data, and we must be prepared to throw out formerly cherished beliefs if the data contradicts them.

Credible risk does not imply a guaranteed catastrophe. It’s a risk, like in gambling, and lots of people are trying to estimate the odds. There is some pressure to estimate high – that gets the headlines. There is another pressure to estimate high: the precautionary principle. An editorial in the WSJ gave this version: “precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”

The precautionary principle is reasonable for governments and individuals, but not for scientists who are actively trying to fully establish those cause and effect relationships. Those relationships determine the risk, and we have to be honest about them. We don’t get to cheat and say “as a precaution, I estimate the risk to be 90%.”

If the best estimate of the risk is 10%, that may not scare people enough. It doesn’t matter – we still have to report 10%. Let the politicians explain why avoiding a 10% risk of total economic shutdown matters more than avoiding a 99% risk of inconvenience (a rough version of that arithmetic is sketched below). But the precautionary principle only applies to the evaluation of the conclusions, not to the interpretation of the data. Did the East Anglia group fall into this trap? I have no idea, but at least that is a legitimate worry.
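To make that comparison concrete, here is a minimal expected-loss sketch. The 10% and 99% probabilities echo the rhetorical figures above; the dollar amounts are entirely hypothetical illustrations, not estimates from any study or from this post:

```python
# A rough sketch of the expected-loss comparison from the paragraph above.
# The probabilities echo the 10% and 99% figures used rhetorically there;
# the dollar amounts are entirely made up for illustration.

p_catastrophe = 0.10             # assumed risk of the very large loss
cost_catastrophe = 10_000e9      # hypothetical size of that loss, in dollars

p_inconvenience = 0.99           # assumed near-certain cost of acting now
cost_inconvenience = 100e9       # hypothetical size of the inconvenience

expected_loss_inaction = p_catastrophe * cost_catastrophe      # 1.0e12
expected_loss_action = p_inconvenience * cost_inconvenience    # 9.9e10

print(f"Expected loss if we do nothing: ${expected_loss_inaction:,.0f}")
print(f"Expected loss if we act:        ${expected_loss_action:,.0f}")
```

The point is only that an honestly reported 10% can still justify action once the stakes are weighed – and that weighing is the politicians’ job, not the scientists’.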

Isolated fraud should not discredit the whole community. To be clear, these emails do not constitute fraud. But even if they did, and a retraction of a publication were required, that would not be the same as the whole conclusion being false. If a gold-medal sprinter turns out to have used steroids, we don’t conclude that all fast people are dishonest or that it’s impossible to run fast. There was an incident a while back in which a Korean researcher claimed some amazing breakthroughs with stem cells. It was completely fabricated. Some time later, other groups actually did much of what he had falsely claimed to have done. Just because one guy cheated didn’t make the achievement impossible. It was just really hard.

What we have with climate change is a consensus, based on the available data, that there is a credible risk to humans from anthropogenic warming. A few people have gone to great lengths to present this in a black-and-white manner. I suspect they were trying to strip away ambiguity out of a decent moral impulse (the precautionary principle) without considering the proper distinction between interpretation (assessing the risk) and evaluation (determining the appropriate response to that risk). When scientists do that, they erode the credibility of science in general, as an opinion piece in the WSJ points out. But, then, this sort of philosophizing isn’t really stressed in our training. Maybe it should be.

Cheers,
Peter