Category Archives: Philosophy

Things concerned with the love of wisdom

On Climategate

As you know, emails leaked (read: stolen) from a climate research group suggested some inappropriate attitudes amongst the scientists – at least when carefully edited and combed for inflammatory material. Reviews by major publications found that the emails did not constitute evidence of fraud, but the public perception was quite the opposite.

Of course, lots of people are emotionally invested in climate research. If it’s true, a lot of our habits will have to change. If it’s not true, it is a very expensive mistake. Furthermore, lots of scientists have staked their careers on the proposition that it’s a big deal. So, yes, there is a bit of incentive to defend that proposition. But a lot of the public discussion concerns the “consensus” among “scientists.”

Wrong question: “do scientists believe in global warming?”

Right question: “do specialists in the field of climate science find a credible risk?”

With regard to the second question, there is an answer. There is consensus. Yes, there is a risk. Consensus does not equal truth, of course. Nor does credible risk imply a guaranteed catastrophe. Nor does outright fraud imply a bankrupt field. Let me explain these three.

Consensus is not truth. If you had asked a well-educated ornithologist to describe swans a few hundred years ago (prior to 1790), he would have told you about white, majestic birds. There was a consensus, based on thousands of observations, that all swans are white. This was credible science based on good evidence, and it would have been wise to respect the conclusion as the best given the available evidence. It was entirely wrong, of course. There are black swans. But a consensus based on the preponderance of evidence is often the most trustworthy guideline available, and we would be foolish to discount it because it might be disproven tomorrow. Of course, we must keep collecting data, and we must be prepared to throw out formerly cherished beliefs if the data contradict them.

Credible risk does not imply a guaranteed catastrophe. It’s a risk. Like in gambling. And lots of people are trying to estimate the odds. There is some pressure to estimate high – that gets the headlines. There is another pressure to make the estimate high: the precautionary principle. An editorial in the WSJ gave this version: “precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”
The precautionary principle is reasonable for governments and individuals, but not for scientists who are actively trying to fully establish those cause and effect relationships. Those relationships determine the risk, and we have to be honest about them. We don’t get to cheat and say, “as a precaution, I estimate the risk to be 90%.”
If the best estimate of the risk is 10%, that may not scare people enough. It doesn’t matter – we still have to report 10%. Let the politicians explain why avoiding a 10% risk of total economic shutdown is a lot more important than accepting a 99% risk of inconvenience. But that principle applies only to the evaluation of the conclusions, not the interpretation of the data. Did the East Anglia group fall into this trap? I have no idea, but at least that is a legitimate worry.
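To see why that division of labor still works out, here is a toy expected-cost calculation. Every probability and cost below is an invented, unitless illustration number – not a real climate estimate – and the point is only that a small chance of a huge cost can outweigh a near-certain small one:

```python
# Toy expected-cost comparison. All probabilities and costs are invented,
# unitless illustration numbers -- not real climate estimates.
p_catastrophe = 0.10        # honestly reported risk of the bad outcome
cost_catastrophe = 1000.0   # e.g., "total economic shutdown"

p_inconvenience = 0.99      # near-certain cost of taking precautions
cost_inconvenience = 10.0   # e.g., the price of mitigation

expected_cost_inaction = p_catastrophe * cost_catastrophe    # 100.0
expected_cost_action = p_inconvenience * cost_inconvenience  # 9.9

print(f"Expected cost of doing nothing:      {expected_cost_inaction}")
print(f"Expected cost of taking precautions: {expected_cost_action}")
```

The scientist’s job is to report the 10% honestly; weighing the costs against each other is the politicians’ side of the ledger.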

Isolated fraud should not discredit the whole community. To be clear, these emails do not constitute fraud. But even if they did, and a retraction of a publication were required, that’s not the same as the whole conclusion being false. If a gold medal sprinter turns out to have used steroids, we don’t conclude that all fast people are dishonest, and that it’s impossible to run fast. There was an incident a while back where a Korean researcher claimed some amazing breakthroughs with stem cells. It was completely fabricated. Some time later, other groups actually did much of what he falsely claimed to have done. Just because one guy cheated didn’t make the achievement impossible. It was just really hard.

What we have with climate change is a consensus based on available data that there is a credible risk to humans due to anthropogenic climate change. A few people have gone to great lengths to present this in a black-and-white manner. I suspect that they were trying to strip ambiguities because of a decent moral impulse (the precautionary principle) without considering the proper distinction between interpretation (assessing the risk) and evaluation (determining the appropriate response to that risk). When scientists do that, they erode the credibility of science in general, as an opinion piece in the WSJ points out. But, then, this sort of philosophizing isn’t really stressed in our training. Maybe it should be.

Cheers,
Peter

Believing we are right: Why Dr. House is a good role model

Humans, doctors and grad students included, are all prone to rationalization. This can be a good thing. Think about the TV show House, M.D. The main character is not always right, but he is always totally convinced of his own opinions. If you think about it, that’s pretty remarkable.

When his opinions are refuted by hard evidence, he drops them without remorse. But up to that point, he is sufficiently certain to risk your life on the basis of his conviction. That’s actually a pretty good thing, in the following sense: if he were unwilling to change his opinion after finding new evidence, he would be an extremely dangerous person to have as a physician. By the same token, if he required conclusive proof of a given diagnosis before starting treatment, he would lose patients because they would die before he was certain.

The following formula is reasonable: get the information you can and act decisively on it until better information is available. But it’s only reasonable so long as you keep the information channels open. That’s why Dr. House is a good role model. Despite being a jerk, and despite seldom acknowledging that he is wrong, he never persists in a wrong opinion once it’s disproven.

The problem is that we are prone to rationalize the facts based on the diagnosis we had before. Take people who still believe that Saddam Hussein was involved in the September 11 attacks. Presented with new evidence, many people will choose to ignore or rationalize around that evidence in order to preserve their old, erroneous conclusion.

And with just a few simple mental sleights-of-hand, we can preserve that belief. Here’s another fine example, from the NYT: an Iraqi official purchased several million dollars’ worth of totally useless “electrostatic magnetic ion attraction” detectors that are billed by the manufacturer as being able to detect bombs and ammunition. A few simple tests are sufficient to show that they are capable of no such thing.

Why would someone believe something patently false in light of clear data to the contrary? Before we get all proud about how we are different from them – those other people – I would offer the following words of caution: believing that we are right is seductive to all of us. The only shared standard against which anyone can test his opinions is the physical world and the data that come from it, and that’s not an easy standard to uphold.

Cheers,
Peter

I.Q. and Wisdom for Pre-Med: worry less about your MCAT

Today’s Big Upshot concerns I.Q. I’m not going to do this as well as Malcolm Gladwell, who has a great section on it in his book Outliers: The Story of Success. But, nonetheless, I think it’s worth talking about in the context of a discussion of medical careers. It might be presumed that I.Q. measures intelligence and that intelligence is an important quality in a physician. If intelligence is the brightness of your mental spotlight, then in diagnosing disease it would probably be good to have lots of it.

However, it is at least as important to be concerned with where that spotlight is pointing as it is to have it be very bright. I hope everyone has heard the med-school-admissions-anomaly stories (i.e., “this happened to a friend of a friend”). There was this guy who got a 4.0 GPA in college, earned a perfect MCAT score, went to his med school interviews, and didn’t get admitted to any of the schools to which he applied. He ended up working at Kaplan, teaching kids how to do well on their MCAT. Weird, huh? If you have not met this guy, you probably will. There’s one in any big school’s pre-med program at any given time. You won’t see much of him, though, because he has a 16-hour-a-day study schedule.

The guy is smart. He has a high I.Q. But the admissions committees knew better than to let him in their institutions’ doors. They knew that a certain degree of wisdom is a prerequisite to being a decent doctor.

Gladwell tells a great tale about a large-scale study of high-I.Q. California kids. The researchers followed the fates of these super-smart kids through their lives. Their fates turned out to be remarkable only in their ordinariness. These super-genius kids did not turn out to be the captains of industry and leaders of tomorrow. In fact, most telling of all, there were two Nobel prize winners in the original, larger sample – and they were dropped from the study because their I.Q.s were not high enough.

The New Scientist has an article up this morning that explores some clever ways of testing another aspect of cognitive ability – the analytical, careful-reasoning side. What the article really stresses (correctly, in my estimation) is that a high I.Q. is only useful if it is fully engaged on the problem at hand. What’s scary is that for lots of questions, in life and on tests, people (even really smart people) don’t fully engage their careful reasoning abilities.

So, here’s my point – the Big Upshot, if you will. Tests do help open doors – they validate other achievements, in a way. If grades are grossly disproportionate to SAT or MCAT scores, that might be a red flag. But a standardized test score is only one data point in the mind of any admissions committee, and they’re the only people who care about it at all. Frankly, a personal connection of any kind trumps any score hands-down. So if you’re pre-med (or on an admissions committee, for that matter), keep that in mind. Being wise enough to really engage with the right questions is at least as important as having the strongest possible abilities that could (potentially) be engaged.

Cheers,
Peter

Happy birthday to the iron lung – forgotten legacy of polio

(Image credit: Wikimedia Commons)
Wired magazine has a piece this morning on the iron lung, the amazing machine that let polio-stricken children breathe (instead of suffocating when their nerve damage became severe enough to cause respiratory failure). What is hard for us to understand in this modern age is that this hellish contraption was an amazing success – being trapped in a metal cylinder was better than dying, for lots of little kids. This is the real picture of polio: life in a tube. That’s why it irritates me when people go around disparaging vaccines in general.

-Peter

Addendum: There’s a nice article over at the Huffington Post that covers some more details on the “debate” over vaccines that’s going on in the news. My favorite part:

There is a lot of fear-mongering about the dangers or effectiveness of vaccines, particularly swine flu vaccine. But if you look at the sources of this information, they come from less than credible sources.

That “less than credible source” is David Icke. Less than credible, indeed.

Vaccines post – ‘moral statistics’ case-in-point


Let’s say that you’re a doctor some day. Or a professor, for that matter. And someone (e.g., your patient, a party guest, or a friend on Facebook) starts talking about the Dangers of Immunization. You could respond with “The anti-vaccination people are misled, crazy, or amoral…” but that would be highly counterproductive. I think I’ve got a better approach. I must preface it, however: this argument applies to life-threatening, endemic childhood diseases like polio, not so much to optional flu shots and the like, whose risks and efficacy are less well known.

Why a person might not want a vaccine: All medical treatments carry a certain measure of risk. Like crossing the street or taking a bus, everything is a risk at some level. The problem is causation. If you get hit by a bus, it’s not your fault. If you choose to get vaccinated and there are some side effects, then you feel like you’ve screwed yourself. And that is a terrible feeling.
But, look, we need to evaluate risk in a sane and rational way. Let’s say that the choice is between:
1. Doing nothing and accepting a 1 in 100 risk of contracting a life-threatening disease,

or

2. Taking a concrete action that carries a 1 in 1,000 risk of complications.

Clearly, your odds are better with option 2. But a 1 in 100 chance of being screwed by external random events may feel preferable to a 1 in 1,000 chance of screwing yourself. So why not just say “screw statistics, I’m going with my gut”? Because there’s more going on than a choice between possible regret and ‘leaving the matter to fate’. There’s a moral imperative at work.
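To make the arithmetic explicit, here’s a minimal sketch. The 1-in-100 and 1-in-1,000 figures are the illustrative numbers from above, not real epidemiological data:

```python
# Compare expected harm under the two (illustrative) options above.
population = 100_000

p_disease = 1 / 100         # option 1: do nothing, risk the disease
p_complication = 1 / 1_000  # option 2: vaccinate, risk a complication

expected_harm_nothing = population * p_disease       # ~1000 people harmed
expected_harm_vaccine = population * p_complication  # ~100 people harmed

print(f"Do nothing: ~{expected_harm_nothing:.0f} harmed per {population:,}")
print(f"Vaccinate:  ~{expected_harm_vaccine:.0f} harmed per {population:,}")
```

Ten times fewer people are harmed under option 2 – and yet option 1 can still feel safer, because nobody “chose” the harm.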

Why we are morally obligated to get vaccinated: Now, if everyone but one selfish guy gets vaccinated, then he will still be safe (because there’s nobody from whom he can catch the disease), and he bears no risk of side effects. So he gets all the reward without any of the sacrifice. That makes him a freeloader. It’s profiting from others’ sacrifice. It’s cheating.
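The freeloader logic only works while coverage stays high. A standard back-of-the-envelope model makes the point; it assumes the textbook approximation threshold = 1 − 1/R0, and the R0 values below are rough, commonly cited figures used purely for illustration:

```python
# Herd-immunity threshold: the fraction of a population that must be
# immune so that each infection causes, on average, fewer than one new
# infection. Textbook approximation: threshold = 1 - 1/R0, where R0 is
# the basic reproduction number (how many people one case infects in a
# fully susceptible population). R0 values below are rough illustrations.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for disease, r0 in [("polio", 6.0), ("measles", 15.0)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} must be immune")
```

One freeloader doesn’t break the threshold; a community of them does – and then everyone, freeloaders included, is exposed again.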

How good people avoid seeing this moral issue: If we can avoid the statistics and just say “vaccines are poison” then the vaccine looks worse than the disease it was meant to prevent. The moral/statistics problem is solved. Some people have heard that there is mercury in vaccines. That happens to be partially true for some vaccines. Mercury is not healthy. Thus the logic progresses.

But is mercury really so toxic? There is potassium in lethal injections, and there is chlorine in bleach. Is potassium bad? Is chlorine? No. Very different chemistry, scenarios, and concentrations can give rise to wholly different levels of toxicity. Some mercury compounds are pretty nasty. Others are pretty benign. All told, there was more mercury in a single salmon than in an infant’s entire first-year course of vaccines. And that was before it was removed completely from infant vaccines in the last few years.

In some other vaccines, there is a small amount of a mercury compound called thiomersal (not metallic mercury or methylmercury, which are the relatively nasty kinds). No mercury compound is good for you, but a little mercury-based preservative turns out to be statistically better than the risk of a bad batch of vaccine. Vaccines are made of protein – they are like broth. They will rot. Rotten vaccine is useless, and useless vaccine leaves you vulnerable to the very disease it was supposed to prevent.

A slight risk of low-level toxicity is better than risking polio. The odds are still in your favor if you get vaccinated. But since there is a known risk (“mercury!”) set against an unknown risk (“nobody gets polio any more, right?”), people will be misled into false beliefs about the relative risks.

The point is: this all comes down to statistics. We have to weigh the relative risks of a terrible disease becoming endemic again versus the risks of mass-scale injections. We have to weigh the risks of a trace quantity of mercury versus the risks of inactive or contaminated medicine. There’s math involved. And to someone who sees the world in terms of “us” and “them” – who sees Nefarious Motivations in the hearts of his fellow men – this can all look like obfuscation. I wish it were as simple as “it either works or it doesn’t” but in actual real life, things work with some probability, and weighing those probabilities is never an easy job.

Strangely enough, sound moral reasoning requires statistical analysis. And that puts us all at a disadvantage when trying to Do the Right Thing. Try telling someone that on Facebook. Or, for that matter, good luck getting your patient’s HMO to cover your time explaining all of that to your patient.

-Peter