Tag Archives: ethics

Short post: Green, compensatory ethics

There was an article in the Guardian arguing that “green” consumers may be more likely to engage in socially irresponsible or unethical behavior, thanks to a phenomenon called “compensatory ethics.” I take that to mean that if a person feels good about himself for one reason (he only buys fair-trade organic coffee, for instance), he won’t feel as bad about himself if he steals from the barista’s tip jar.

In the words of Dieter Frey, a social psychologist at the University of Munich quoted in the Guardian piece, “at the moment in which you have proven your credentials in a particular area, you tend to allow yourself to stray elsewhere.”

Does that mean your hippie friends are cheaters? No – but it does explain why my vegetarian roommate was habitually rude and inconsiderate.

NatureBlog picked up on it too!


On Climategate

As you know, emails leaked (read: stolen) from a climate research group suggested some inappropriate attitudes among the scientists – at least when carefully edited and combed for inflammatory material. Reviews by major publications found that the emails did not constitute evidence of fraud, but the public perception was quite the opposite.

Of course, lots of people are emotionally invested in climate research. If it’s true, a lot of our habits will have to change. If it’s not true, it’s a very expensive mistake. Furthermore, many scientists have staked their careers on the proposition that it’s a big deal, so yes, there is some incentive to defend that proposition. But much of the public discussion concerns the “consensus” among “scientists.”

Wrong question: “do scientists believe in global warming?”

Right question: “do specialists in the field of climate science find a credible risk?”

With regard to the second question, there is an answer. There is consensus. Yes, there is a risk. Consensus does not equal truth, of course. Nor does credible risk imply a guaranteed catastrophe. Nor does outright fraud imply a bankrupt field. Let me explain these three.

Consensus is not truth. If you had asked a well-educated ornithologist to describe swans a few hundred years ago (prior to 1790), he would have told you about white, majestic birds. There was consensus, based on thousands of observations, that all swans are white. This was credible science based on good evidence, and it would have been wise to respect the conclusion as the best available. It was entirely wrong, of course. There are black swans. But a consensus based on the preponderance of evidence is often the most trustworthy guideline available, and we would be foolish to discount it just because it might be disproven tomorrow. Of course, we must keep collecting data, and we must be prepared to throw out formerly cherished beliefs if the data contradicts them.

Credible risk does not imply a guaranteed catastrophe. It’s a risk. Like in gambling. And lots of people are trying to estimate the odds. There is some pressure to estimate high – that gets the headlines. There is another pressure to make the estimate high: the precautionary principle. An editorial in the WSJ gave this version: “precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”
The precautionary principle is reasonable for governments and individuals, but not for scientists who are actively trying to establish those cause-and-effect relationships. The relationships determine the risk, and we have to be honest about them. We don’t get to cheat and say “as a precaution, I estimate the risk to be 90%.”
If the best estimate of the risk is 10%, that may not scare people enough. It doesn’t matter – we still have to report 10%. Let the politicians explain why avoiding a 10% risk of total economic shutdown is a lot more important than avoiding a 99% risk of inconvenience. But that principle applies only to the evaluation of conclusions, not the interpretation of the data. Did the East Anglia group fall into this trap? I have no idea, but at least that is a legitimate worry.

Isolated fraud should not discredit the whole community. To be clear, these emails do not constitute fraud. But even if they did, and a retraction of a publication were required, that would not make the whole conclusion false. If a gold-medal sprinter turns out to have used steroids, we don’t conclude that all fast people are dishonest and that it’s impossible to run fast. There was an incident a while back in which a Korean researcher claimed some amazing breakthroughs with stem cells. It was completely fabricated. Some time later, other groups actually did much of what he falsely claimed to have done. One man’s cheating didn’t make the achievement impossible. It was just really hard.

What we have with climate change is a consensus, based on available data, that there is a credible risk to humans due to anthropogenic climate change. A few people have gone to great lengths to present this in a black-and-white manner. I suspect that they were trying to strip ambiguities because of a decent moral impulse (the precautionary principle) without considering the proper distinction between interpretation (assessing the risk) and evaluation (determining the appropriate response to that risk). When scientists do that, they erode the credibility of science in general, as an opinion piece in the WSJ points out. But, then, this sort of philosophizing isn’t really stressed in our training. Maybe it should be.


Happy birthday to the iron lung – forgotten legacy of polio

(Image credit: Wikimedia Commons)

Wired magazine has a piece this morning on the iron lung, the amazing machine that let polio-stricken children breathe (instead of suffocating when their nerve damage became severe enough to cause respiratory failure). What is hard for us to understand in this modern age is that this hellish contraption was an amazing success – being trapped in a metal cylinder was better than dying, for lots of little kids. This is the real picture of polio: life in a tube. That’s why it irritates me when people go disparaging vaccines in general.


Addendum: There’s a nice article over at the Huffington Post that covers some more details on the “debate” over vaccines that’s going on in the news. My favorite part:

There is a lot of fear-mongering about the dangers or effectiveness of vaccines, particularly swine flu vaccine. But if you look at the sources of this information, they come from less than credible sources.

That “less than credible source” is David Icke. Less than credible, indeed.

Vaccines post – ‘moral statistics’ case-in-point


Let’s say that you’re a doctor some day. Or a professor, for that matter. And someone (e.g. your patient, a party guest, or a friend on Facebook) starts talking about the Dangers of Immunization. You could respond with “The anti-vaccination people are misled, crazy, or amoral…” but that would be highly counterproductive. I think I’ve got a better approach. I must preface, however: this argument applies to life-threatening childhood endemic diseases like polio, not so much to optional flu shots and such, whose risks and efficacy are less well known.

Why a person might not want a vaccine: All medical treatments carry a certain measure of risk. Like crossing the street or taking a bus, everything is a risk at some level. The problem is causation. If you get hit by a bus, it’s not your fault. If you choose to get vaccinated and there are some side effects, then you feel like you’ve screwed yourself. And that is a terrible feeling.
But, look, we need to evaluate risk in a sane and rational way. Let’s say that the choice is between:
1. Doing nothing and taking a 1:100 risk of contracting a life-threatening disease, or
2. Taking a concrete action that carries a 1:1000 risk of complications.

Clearly, your odds are better with option 2. But a 1:100 chance of being screwed by external random events may feel preferable to a 1:1000 chance of screwing yourself. So why not just say “screw statistics, I’m going with my gut”? Because there’s more at work than a choice between possible regret and ‘leaving the matter to fate’. There’s a moral imperative at work.
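A quick back-of-the-envelope simulation makes the comparison above concrete. These are the same purely illustrative 1:100 and 1:1000 odds from the list, not real vaccine statistics:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible
N = 100_000      # simulated people per group

# Option 1: do nothing; each person has a 1-in-100 chance of contracting the disease.
harmed_unvaccinated = sum(random.random() < 1 / 100 for _ in range(N))

# Option 2: vaccinate; each person has a 1-in-1000 chance of a complication.
harmed_vaccinated = sum(random.random() < 1 / 1000 for _ in range(N))

print(harmed_unvaccinated, harmed_vaccinated)  # roughly 1000 vs. roughly 100
```

Under these hypothetical odds, doing nothing harms about ten times as many people – which is exactly the asymmetry our gut instincts manage to ignore.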

Why we are morally obligated to get vaccinated: Now, if everyone but one selfish guy gets vaccinated, then he will still be safe (because there’s nobody from whom he can catch the disease) and he runs no risk of side effects. He gets all the reward without any of the sacrifice. That makes him a freeloader. It’s profiting from others’ sacrifice. It’s cheating.

How good people avoid seeing this moral issue: If we can avoid the statistics and just say “vaccines are poison” then the vaccine looks worse than the disease it was meant to prevent. The moral/statistics problem is solved. Some people have heard that there is mercury in vaccines. That happens to be partially true for some vaccines. Mercury is not healthy. Thus the logic progresses.

But is mercury really so toxic? There is potassium in lethal injections and there is chlorine in bleach. Is potassium bad? Chlorine? No. Very different chemistry, scenarios, and concentrations can give rise to wholly different levels of toxicity. Some mercury compounds are pretty nasty; others are pretty benign. All told, there was more mercury in one salmon than in the whole first-year vaccine course for an infant – and that was before it was removed completely from infant vaccines in the last few years.

In some other vaccines, there is a small amount of a mercury compound called thiomersal (not metallic mercury or methylmercury, which are the relatively nasty kinds). No mercury compound is good for you, but a little mercury-based preservative turns out to be statistically better than the risk of a bad batch of vaccine. Vaccines are made of protein – they are like broth. They will rot. Rotten vaccine is useless, and useless vaccine leaves you vulnerable to the very disease it was supposed to prevent.

A slight risk of low-level toxicity is better than risking polio – the odds are still in your favor if you get vaccinated. But because a known risk (mercury!) is being weighed against a risk that feels remote (nobody gets polio any more, right?), people are easily misled into false beliefs about the relative risks.

The point is: this all comes down to statistics. We have to weigh the relative risks of a terrible disease becoming endemic again versus the risks of mass-scale injections. We have to weigh the risks of a trace quantity of mercury versus the risks of inactive or contaminated medicine. There’s math involved. And to someone who sees the world in terms of “us” and “them” – who sees Nefarious Motivations in the hearts of his fellow men – this can all look like obfuscation. I wish it were as simple as “it either works or it doesn’t” but in actual real life, things work with some probability, and weighing those probabilities is never an easy job.

Strangely enough, sound moral reasoning requires statistical analysis. And that puts us all at a disadvantage when trying to Do the Right Thing. Try telling someone that on Facebook. Or, for that matter, good luck getting your patient’s HMO to cover the time it takes to explain all of that to your patient.


Statistics, drugs, and hard ethical questions

The New York Times has an article this morning on the FDA’s drug-approval process and some interesting controversy surrounding experimental cancer treatments. (Did you know you can get the NYT on the Kindle? Cool stuff.) The FDA’s lead cancer guy is under attack from both sides of the debate: some people say that he’s letting unsafe, unproven drugs through, and others say he’s holding back life-saving treatments with unnecessary bureaucracy. Strangely, both camps are talking about the same drugs. It’s a pretty good article, and it gets to the heart of the matter: Gleevec has obvious, amazing benefits, so it went through FDA review really quickly. Other drugs are subtle. And in that subtlety lies the controversy.

Arthur Benjamin did a TED talk in which he suggested that high schools forgo calculus in favor of statistics. I tend to agree – calculus is really important for scientists, but they can get it in their freshman year at university. To make sense of this and many other important ethical issues, everyone needs some statistics background. How many people need to suffer as an untreated control group in order to assess whether a drug is subtly helping? This is a pretty hard statistical question. “Just look in your heart and your conscience will be your guide” doesn’t cut it for these kinds of questions.
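The “how many people must go untreated” question has a textbook answer: the standard normal-approximation sample-size formula for comparing two proportions. Here is a sketch using that formula; `n_per_arm` is my own helper name, and the event rates are hypothetical (chosen to be the same order of magnitude as the aspirin figures discussed in this post):

```python
def n_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Approximate participants needed per arm of a two-proportion trial.

    Standard normal-approximation formula, with defaults of a 5% two-sided
    significance level (z = 1.96) and 80% power (z = 0.84).
    """
    return (alpha_z + power_z) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

# Hypothetical drug that cuts a yearly event rate from 0.44% to 0.25%:
print(round(n_per_arm(0.0044, 0.0025)))  # roughly 15,000 people per arm
```

To detect a subtle effect on a rare event, you need tens of thousands of participants – half of whom get no treatment. That is the uncomfortable arithmetic behind control groups.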

For instance: aspirin seems to help prevent heart attacks. In one study, out of roughly 22,000 people taking aspirin, 56 per year had heart attacks, compared with 96 in a similar group not on aspirin. That implies that taking aspirin is a good idea – but without thousands of data points, it would be impossible to tell. With only 220 people, you would get absolutely no conclusion. Look at it this way: if you take aspirin every day and don’t have a heart attack this year, you may be one of the 40 people whom aspirin saved, or one of the 21,904 who wouldn’t have had a heart attack anyway! All you know for sure is that you’re not one of the 56 who did.
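You can check the “220 people tells you nothing” claim with a standard two-proportion z-test. This is a sketch assuming 22,000 people per group, as the numbers above suggest; `two_proportion_z` is my own helper, not a library function:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic comparing two event rates, using a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Full trial: 56 heart attacks among 22,000 on aspirin vs. 96 among 22,000 without.
print(two_proportion_z(56, 22_000, 96, 22_000))   # about 3.25 – well past 1.96, significant

# Same event rates scaled down to 220 people per group
# (fractional expected counts are fine for the formula):
print(two_proportion_z(0.56, 220, 0.96, 220))     # about 0.3 – no conclusion at all
```

Same underlying rates, a hundredth of the sample, and the signal drops below the noise floor – which is why the study needed tens of thousands of participants.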

What we know is not what we think we know, or what our gut instinct or common sense might tell us. At 546:1 odds against low-dose aspirin making the difference for you, it seems stupid to take it – except that it’s so cheap it’s almost free, it has virtually no side effects, and heart attacks are serious as… um… well, they are really serious. If aspirin cost $10 per dose and caused erectile dysfunction, I doubt it would be worth taking. But what if it cost $1 per dose and sometimes (1:10,000) caused permanent deafness? Should your grandmother take it? Let your heart be your guide.