Category Archives: Philosophy

Tea Parties and the hunger for meaning, intimacy and social power

I had written off the “Tea Party” as the right-wing equivalent of a sale on anarchy T-shirts. Corporate-sponsored populist rage does not a “movement” make. It’s viral marketing on behalf of the Right Wing directed at conspiracy theorists. However, the Reverend Billy Talen of the Church of Stop Shopping has a different, interesting take. The Reverend Billy was the amusing main character of the wonderful documentary What Would Jesus Buy?

Reverend Billy suggests that the Tea Party is the natural outgrowth of a more fundamental current in American culture. People feel impotent and powerless. To wit, “We are surrounded by a creeping dullness. A lack of traction with the outside world.” The Tea Partiers have an undirected sense of their own lack of freedom and they are expressing this by blaming Washington. Of course, the choice of scapegoat is partially the result of cynical politicians and media personalities capitalizing on the vulnerability of angry people to crass manipulation.

But I think that we all sense a creeping loss of liberty in what Billy describes as “this bizarro ‘built environment’ of Consumerism.” As we accumulate incentives to buy panacea products, the associated feelings of powerlessness inspire a desire to rebel. And that desire is another chink in our psychological armor against marketing science. Buying into this manipulation – literally in many cases – is a way to feel instantly righteous and to couch life in an epic narrative of freedom and tyranny.

But buying into that feeling of instant righteousness does not address the root cause of powerlessness. Concrete, local, effective action addresses that need. Reverend Billy sees hunger riots in our future, but not literal hunger. Rather, he fears “the hunger for meaning, for community intimacy, for the satisfaction of our social souls.”

That is a perspective on the Tea Party that makes sense to me.

-Peter

On Climategate

As you know, emails leaked (read: stolen) from a climate research group suggested some inappropriate attitudes amongst the scientists – at least when carefully edited and combed for inflammatory material. Review by major publications found that the emails did not constitute evidence of fraud, but the public perception was quite the opposite.

Of course, lots of people are emotionally invested in climate research. If it’s true, a lot of our habits will have to change. If it’s not true, it is a very expensive mistake. Furthermore, lots of scientists have staked their careers on the proposition that it’s a big deal. So, yes, there is a bit of incentive to defend that proposition. But a lot of the public discussion concerns the “consensus” among “scientists.”

Wrong question: “do scientists believe in global warming?”

Right question: “do specialists in the field of climate science find a credible risk?”

With regard to the second question, there is an answer. There is consensus. Yes, there is a risk. Consensus does not equal truth, of course. Nor does credible risk imply a guaranteed catastrophe. Nor does outright fraud imply a bankrupt field. Let me explain these three.

Consensus is not truth. If you had asked a well-educated ornithologist to describe swans a few hundred years ago (prior to 1790), he would have told you about white, majestic birds. There was consensus based on thousands of observations that all swans are white. This was credible science based on good evidence, and it would have been wise to respect the conclusion as the best given the available evidence. It was entirely wrong, of course. There are black swans. But a consensus based on the preponderance of evidence is often the most trustworthy guideline available, and we would be foolish to discount it because it might be disproven tomorrow. Of course, we must keep collecting data, and we must be prepared to throw out formerly cherished beliefs if the data contradicts them.

Credible risk does not imply a guaranteed catastrophe. It’s a risk. Like in gambling. And lots of people are trying to estimate the odds. There is some pressure to estimate high – that gets the headlines. There is another pressure to make the estimate high: the precautionary principle. An editorial in the WSJ gave this version: “precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”
The precautionary principle is reasonable for governments and individuals, but not for scientists who are actively trying to fully establish the cause and effect relationships. Those relationships determine the risk, and we have to be honest about them. We don’t get to cheat and say “as a precaution, I estimate the risk to be 90%.”
If the best estimate of the risk is 10%, that may not scare people enough. It doesn’t matter – we still have to report 10%. Let the politicians explain why avoiding a 10% risk of total economic shutdown is a lot more important than a 99% risk of inconvenience. But that principle only applies to the evaluation of the conclusions, not the interpretation of the data. Did the East Anglia researchers fall into this trap? I have no idea, but at least that is a legitimate worry.

Isolated fraud should not discredit the whole community. To be clear, these emails do not constitute fraud. But even if they had, and a retraction of a publication were required, that’s not the same as the whole conclusion being false. If a gold medal sprinter turns out to have used steroids, we don’t conclude that all fast people are dishonest, and that it’s impossible to run fast. There was an incident a while back where a Korean researcher claimed some amazing breakthroughs with stem cells. It was completely fabricated. Some time later, other groups actually did much of what he falsely claimed to have done. Just because one guy cheated didn’t make the achievement impossible. It was just really hard.

What we have with climate change is a consensus based on available data that there is a credible risk to humans due to anthropogenic climate change. A few people have gone to lengths to present this in a black-and-white manner. I suspect that they were trying to strip ambiguities because of a decent moral impulse (the precautionary principle) without considering the proper distinction between interpretation (assessing the risk) and evaluation (determining the appropriate response to that risk). When scientists do that, they erode the credibility of science in general, as an opinion piece in the WSJ points out. But, then, this sort of philosophizing isn’t really stressed in our training. Maybe it should be.

Cheers,
Peter

Believing we are right: Why Dr. House is a good role model

Humans, doctors and grad students included, are all prone to rationalization. This can be a good thing. Think about the TV show House, M.D. The main character is not always right, but he is always totally convinced of his own opinions. If you think about that, it’s pretty remarkable.

When his opinions are refuted by hard evidence, he drops them without remorse. But up to that point, he is sufficiently certain to risk your life on the basis of his conviction. That’s actually a pretty good thing, in the following sense: if he were unwilling to change his opinion after finding new evidence, he would be an extremely dangerous person to have as a physician. By a similar token, if he wanted conclusive proof of a given diagnosis before starting treatment, he would lose patients because they would die before he was certain.

The following formula is reasonable: get the information you can and act decisively on it until better information is available. But it’s only reasonable so long as you keep the information channels open. That’s why Dr. House is a good role model. Despite being a jerk and despite seldom acknowledging that he is wrong, he never persists in a wrong opinion once it’s disproven.

The problem is that we are prone to rationalize the facts based on the diagnosis we had before. Take people who still believe that Saddam Hussein was involved in the September 11 attack. Presented with new evidence, many people will choose to ignore or rationalize around that evidence in order to preserve their old, erroneous conclusion.

And with just a few simple mental sleights-of-hand, we can preserve that belief. Here’s another fine example, from the NYT: an Iraqi official purchased several million dollars’ worth of totally useless “electrostatic magnetic ion attraction” detectors that are billed by the manufacturer as being able to detect bombs and ammunition. A few simple tests are sufficient to show that they are capable of no such thing.

Why would someone believe something patently false in light of clear data to the contrary? Before we get all proud about how we are different from them, those other people, I would offer the following words of caution: believing that we are right is seductive to all of us. The only shared standard against which anyone can test his opinions is the physical world and the data that comes from it and that’s not an easy standard to uphold.

Cheers,
Peter

I.Q. and Wisdom for Pre-Med: worry less about your MCAT

Today’s Big Upshot concerns IQ. I’m not going to do this as well as Malcolm Gladwell who has a great section in his book Outliers: The Story of Success. But, nonetheless, I think it’s worth talking about in the context of a discussion of medical careers. It might be presumed that I.Q. measures intelligence and that intelligence is an important quality in a physician. If intelligence is the brightness of your mental spotlight, then in diagnosing disease it would probably be good to have lots of it.

However, it is at least as important to be concerned with where that spotlight is pointing as it is to have it be very bright. I hope everyone has heard the med-school-admissions-anomaly stories (i.e. “this happened to a friend of a friend”). There was this guy who got a 4.0 GPA in college and got a perfect MCAT score and then went to his med school interviews and didn’t get admitted to any of the schools to which he applied. He ended up working at Kaplan, teaching kids how to do well on their MCAT. Weird, huh? If you have not met this guy, you probably will. There’s one in any big school’s pre-med program at any given time. You won’t see much of him, though, because he has a 16-hour-a-day study schedule.

The guy is smart. He has a high I.Q. But the admissions committees knew better than to let him in their institutions’ doors. They knew that a certain degree of wisdom is a prerequisite to being a decent doctor.

Gladwell tells a great tale about a large-scale study of I.Q. in California kids. The researchers followed the fates of these super-smart kids through their lives. Their fates turned out to be remarkable only in their ordinariness. These super-genius kids did not turn out to be the captains of industry and leaders of tomorrow. In fact, most telling, there were two Nobel prize winners in the original, large sample. They were dropped from the study because their I.Q.s were not high enough.

The New Scientist has an article up this morning that explores some clever ways of testing another aspect of cognitive ability – the analytical, careful-reasoning side. What the article really stresses (correctly, in my estimation) is that high I.Q. is only useful if it is fully engaged on the problem at hand. What’s scary is that for lots of questions in life and on tests, people (even really smart people) don’t fully engage their careful reasoning abilities.

So, here’s my point – the Big Upshot, if you will. Tests do help open doors – they validate other achievements, in a way. If grades are grossly disproportionate to SAT or MCAT scores, it might be a red flag. But a standardized test score is only one data point in the minds of any admissions committee, and they’re the only people who care at all. Frankly, a personal connection of any kind trumps any score hands-down. So if you’re pre-med (or on an admissions committee, for that matter) keep that in mind. Being wise enough to really engage with the right questions is at least as important as having the strongest possible abilities which could (potentially) be engaged.

Cheers,
Peter

Happy birthday to the iron lung – forgotten legacy of polio

[Image credit: Wikimedia Commons]

Wired magazine has a piece this morning on the iron lung, the amazing machine that let polio-stricken children breathe (instead of suffocating when their nerve damage became severe enough to cause respiratory failure). What is hard for us to understand in this modern age is that this hellish contraption was an amazing success – being trapped in a metal cylinder was better than dying for lots of little kids. This is the real picture of polio: life in a tube. That’s why it irritates me when people go disparaging vaccines in general.

-Peter

Addendum: There’s a nice article over at the Huffington Post that covers some more details on the “debate” over vaccines that’s going on in the news. My favorite part:

There is a lot of fear-mongering about the dangers or effectiveness of vaccines, particularly swine flu vaccine. But if you look at the sources of this information, they come from less than credible sources.

That “less than credible source” is David Icke. Less than credible, indeed.