Category Archives: Philosophy

Vaccines post – ‘moral statistics’ case-in-point

 

Let’s say that you’re a doctor some day. Or a professor, for that matter. And someone (your patient, a party guest, or a friend on Facebook) starts talking about the Dangers of Immunization. You could respond with “The anti-vaccination people are misled, crazy or amoral…” but that would be highly counterproductive. I think I’ve got a better approach. One preface, however: this argument applies to life-threatening, endemic childhood diseases like polio, not so much to optional flu shots and the like, whose risks and efficacy are less well established.

Why a person might not want a vaccine: All medical treatments carry a certain measure of risk. Like crossing the street or taking a bus, everything is a risk at some level. The problem is causation. If you get hit by a bus, it’s not your fault. If you choose to get vaccinated and there are some side effects, then you feel like you’ve screwed yourself. And that is a terrible feeling.
But, look, we need to evaluate risk in a sane and rational way. Let’s say that the choice is between:
1. Doing nothing, and accepting a 1:100 risk of contracting a life-threatening disease,

or

2. Taking a concrete action which carries a 1:1000 risk of complications.

Clearly, your odds are better with option 2. But a 1:100 chance of being screwed by external random events may feel preferable to a 1:1000 chance of screwing yourself. So why not just say “screw statistics, I’m going with my gut”? Because there’s more at work than a choice between possible regret and ‘leaving the matter to fate’. There’s a moral imperative at work.
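The arithmetic can be made concrete in a two-line sketch. The 1:100 and 1:1000 figures are the illustrative numbers from the choice above, not real epidemiological data:

```python
# Illustrative risk comparison between the two hypothetical options above.
p_disease = 1 / 100       # option 1: chance of contracting the disease
p_side_effect = 1 / 1000  # option 2: chance of a vaccine complication

print(f"Do nothing: {p_disease:.1%} chance of harm")      # 1.0%
print(f"Vaccinate:  {p_side_effect:.1%} chance of harm")  # 0.1%
print(f"Vaccinating is {p_disease / p_side_effect:.0f}x safer")
```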

Why we are morally obligated to get vaccinated: Now, if everyone but one selfish guy gets vaccinated, then he will still be safe (because there’s nobody from whom he can catch the disease) and he runs no risk of side effects. He gets all the reward without any of the sacrifice. That makes him a freeloader. It’s profiting from others’ sacrifice. It’s cheating.
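The freeloader only stays safe while everyone else keeps the disease suppressed. The standard way to quantify that is the herd-immunity threshold, 1 − 1/R0, where R0 is the basic reproduction number. The R0 values below are rough textbook figures used only for illustration:

```python
# Herd-immunity threshold: the fraction of the population that must be
# immune to stop sustained transmission. Standard epidemiological result:
# threshold = 1 - 1/R0, where R0 is the average number of people one
# case infects in a fully susceptible population.

def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

# Illustrative, order-of-magnitude R0 values (not exact figures):
for disease, r0 in [("polio", 6.0), ("measles", 15.0)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} must be immune")
```

The higher R0 is, the less room there is for freeloaders before outbreaks return.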

How good people avoid seeing this moral issue: If we can avoid the statistics and just say “vaccines are poison” then the vaccine looks worse than the disease it was meant to prevent. The moral/statistics problem is solved. Some people have heard that there is mercury in vaccines. That happens to be partially true for some vaccines. Mercury is not healthy. Thus the logic progresses.

But is mercury really so toxic? There is potassium in lethal injections and there is chlorine in bleach. Is potassium bad? Chlorine? No. Very different chemistry, scenarios, and concentrations can give rise to wholly different levels of toxicity. Some mercury compounds are pretty nasty. Others are pretty benign. And in absolute terms, there was more mercury in a single salmon than in an infant’s entire first-year vaccine course. That was before mercury was removed from infant vaccines entirely in the last few years.

In some other vaccines, there is a small amount of a mercury compound called thiomersal (not metallic mercury or methyl-mercury which are relatively nasty kinds). No mercury compound is good for you, but a little mercury-based preservative turns out to be statistically better than the risk of a bad batch of vaccine. Vaccines are made of protein – they are like broth. They will rot. Rotten vaccine is useless. Useless vaccine leaves you vulnerable to the disease that is supposed to be prevented.

A slight risk of low level toxicity is better than risking polio. The odds are still in your favor if you get vaccinated. But since there is a known risk (mercury!) versus an unknown risk (nobody gets polio any more, right?) people will be misled into false beliefs about relative risks.

The point is: this all comes down to statistics. We have to weigh the relative risks of a terrible disease becoming endemic again versus the risks of mass-scale injections. We have to weigh the risks of a trace quantity of mercury versus the risks of inactive or contaminated medicine. There’s math involved. And to someone who sees the world in terms of “us” and “them” – who sees Nefarious Motivations in the hearts of his fellow men – this can all look like obfuscation. I wish it were as simple as “it either works or it doesn’t” but in actual real life, things work with some probability, and weighing those probabilities is never an easy job.

Strangely enough, sound moral reasoning requires statistical analysis. And that puts us all at a disadvantage when trying to Do the Right Thing. Try telling someone that on Facebook. Or, for that matter, good luck getting your patient’s HMO to cover your time explaining all of that to your patient.

-Peter

Risk, statistics and ethics: the AIDS Vaccine

This idea of risk that we have been discussing on TBU for a while now has come up again. The results of a new HIV/AIDS vaccine study were released this week. The Thai trial has shown some promise: the incidence of HIV was 30% lower in the group vaccinated with RV 144 than in the control group.

First: the basic bioethics question. Was it OK to give a lot of people a placebo which might let them think they were protected from HIV when they were not at all protected? (Answer: yes.) First off, they were not told that they were getting an effective drug. They were told they were getting either a placebo or a probably ineffective, experimental vaccine. So the subjects knew beforehand that this shouldn’t be considered a real vaccine.

Second question: why not just give the vaccine to everyone in the study (16,000 people) and compare the effectiveness to the general population? The problem is that you would have a change in HIV incidence that was due to lots of factors. Behavior, knowledge, unknown risk factors (maybe people at higher risk had a greater desire to be in the study than the general population) all could affect the measured efficacy. How would you know which produced your result?

If the vaccine were 100% effective, then there would be no need for a placebo controlled trial. But nothing is 100% effective and – besides – how would you know before you tried?
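For illustration, here is how efficacy falls out of a placebo-controlled trial. The counts below are hypothetical, picked only to reproduce the ~30% headline number; they are not the actual RV 144 data:

```python
# Sketch of how vaccine efficacy is computed from a placebo-controlled
# trial. All counts are hypothetical, not the real study numbers.

def efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Efficacy = 1 - (incidence in vaccinated / incidence in placebo)."""
    incidence_vax = cases_vax / n_vax
    incidence_placebo = cases_placebo / n_placebo
    return 1.0 - incidence_vax / incidence_placebo

# Hypothetical: 8,000 people per arm, 56 infections vs. 80 infections.
print(f"efficacy ~ {efficacy(56, 8000, 80, 8000):.0%}")  # 30%
```

The placebo arm supplies the incidence you divide by; compare against the general population instead and all the confounders listed above get folded into that denominator.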

Now, here’s the more difficult bioethics question: if you have a 30% effective vaccine, who should get it?

This is more tricky. You don’t want to encourage risky behavior (the ‘conservatives’ are always concerned about this). So there is a question to be answered by a careful psychology study: do people modify their behavior after receiving a drug that may or may not prevent a transmissible disease? It seems like they might, but scientists don’t make decisions on “might” if they can avoid it. We make decisions based on what is demonstrably consistent with experiment.

But then it becomes a quantitative statistics problem (more statistics!). It’s only worth vaccinating people if their behavior changes don’t outweigh the efficacy. And then it’s only worth vaccinating people who are at risk… but what if the higher-risk people are more prone to behavior modifications? Is it possible to isolate a medium-risk category?
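One toy way to frame the break-even point, assuming infection risk scales linearly with exposure (a strong simplification):

```python
# Back-of-envelope model of risk compensation. If a vaccine is `eff`
# effective but recipients raise their exposure by factor k, their net
# risk relative to an unvaccinated person with unchanged behavior is
#     relative_risk = k * (1 - eff)
# The vaccine stops helping once relative_risk reaches 1.

def break_even_exposure(eff: float) -> float:
    """Exposure multiplier at which behavior change cancels the vaccine."""
    return 1.0 / (1.0 - eff)

k = break_even_exposure(0.30)
print(f"A 30% effective vaccine helps until exposure rises {k:.2f}x")
```

With only 30% efficacy the margin is thin: a ~1.43x increase in risky behavior wipes out the benefit, which is exactly why the psychology study matters.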

And in all of this, there are massive political problems, not the least of which come from the anti-vaccination people, whom I will talk about next week. The vaccine is a real achievement, in any case. Lots of people thought it wouldn’t work. And it reminds us how complicated it gets when trying to do the right thing with imperfect tools.

-Peter

The data will not lie

Consider empirical skepticism: a high standard for physical evidence before committing to a belief. One might be tempted to think that this means “I’ll believe it when I see it.” Strangely, this is not the case. Example: a few short centuries ago, doctors “saw” imbalanced humors and bad vapors, and they “saw” people cured when those conditions were relieved. The “evidence” confirmed their incorrect theories because it was limited and not properly examined. Skepticism was the idea that the deductive reasoning from the theoretical premises of the day (four Aristotelian elements, humors, etc.) might be flawed, because the theories were affecting the perception of the outcomes.

Skepticism said: “Don’t trust your eyes. Trust the data.”

Modern surgical technique was developed by Joseph Lister who reportedly said “it’s as important to wash your hands before surgery as after.”

Think of how radical that is! It is tantamount to saying “don’t trust what you can see. I know you can’t see the thing on your hands that will kill your patient. I don’t even know what it is. Some Frenchman named Pasteur thinks maybe he’s on to something about that. Look, just trust my blind data that tells me that more people survive surgery if I wash my hands.”

We fancy now that it’s so obvious that there are these invisible things called “bacteria,” that anyone with any sense would figure that out from a few simple observations and “common sense.” Quite the contrary. As a doctor or a researcher, it is critical to remember that your conception of the world changes your perception of the world. The data is the only thing that will tell the truth, and it will only tell you the truth insofar as you ask the right question with your experiment.

Cheers,
Peter

Statistics, drugs, and hard ethical questions

The New York Times has an article this morning on the FDA, the drug approval process, and some interesting controversy surrounding experimental cancer treatments. (Did you know you can get the NYT on the Kindle? Cool stuff.) The FDA’s lead cancer guy is under attack from both sides of the debate. Some people say that he’s letting unsafe, unproven drugs get through and others say he’s holding back life-saving treatments with unnecessary bureaucracy. Strangely, both camps are talking about the same drugs. It’s a pretty good article. It gets to the heart of the matter: Gleevec has obvious, amazing benefits. It goes through FDA review really quickly. Other drugs are subtle. And in that subtlety is the controversy.

Arthur Benjamin did a TED talk where he suggested that high schools forgo calculus in favor of statistics. I tend to agree – calculus is really important for scientists, but they can get it in their freshman year at university. To make sense of this and many other important ethical issues, everyone needs some statistics background. How many people need to suffer as a control group without treatment in order to assess whether a drug is subtly helping? This is a pretty hard statistical question. “Just look in your heart and your conscience will be your guide” just doesn’t cut it for these kinds of questions.
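The “how many people” question has a standard back-of-envelope answer via the normal-approximation sample-size formula for comparing two proportions. The event rates below are hypothetical:

```python
import math

# Rough sample-size estimate for detecting a difference between a treated
# group and an untreated control group, using the standard
# normal-approximation formula for two proportions. Rates are hypothetical.

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """People needed per group for ~80% power at 5% significance."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop from a 1.0% event rate to 0.7% (a "subtle" 30% effect):
print(n_per_arm(0.010, 0.007))  # tens of thousands per arm
```

The subtler the effect, the more the required sample size blows up – the denominator shrinks quadratically – which is why a control group of real, suffering people can’t be made small by wishing.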

For instance: aspirin seems to help prevent heart attacks. Out of 22,000 people, 56 per year had heart attacks on aspirin, compared with 96 not on aspirin. That implies that taking aspirin is a good idea, but without thousands of data points it would be impossible to tell. If you only had 220 people, you would get absolutely no conclusion. Look at it this way: if you take aspirin every day and don’t have a heart attack this year, you may be one of the 40 people whom aspirin saved, or one of the 21,904 who wouldn’t have had a heart attack anyway! All you know for sure is that you’re not one of the 56 people who had heart attacks.
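A quick sketch of why 22,000 works and 220 doesn’t, using a two-proportion z-test. I’m reading “out of 22,000” as 22,000 per group, which is an assumption about the study design:

```python
import math

# Two-proportion z-test on the aspirin figures above, assuming equal
# group sizes of n people each. |z| > 1.96 means significance at the
# usual 5% level.

def z_score(p1, p2, n):
    """z for the difference of two proportions, n people per group."""
    pooled = (p1 + p2) / 2  # pooled rate (valid for equal group sizes)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    return (p2 - p1) / se

p_aspirin, p_control = 56 / 22000, 96 / 22000
print(f"n = 22,000: z = {z_score(p_aspirin, p_control, 22000):.2f}")  # well above 1.96
print(f"n =    220: z = {z_score(p_aspirin, p_control, 220):.2f}")    # nowhere near significant
```

Same underlying rates, hundred times fewer people: the signal simply drowns in noise.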

What we know is not what we think we know, or what our gut instinct or common sense might tell us. At roughly 546:1 odds against low-dose aspirin making any difference for you personally, it seems stupid to take it – except that it’s so cheap it’s almost free, it has virtually no side effects, and heart attacks are serious as… um… well, they are really serious. If aspirin cost $10 per dose and caused erectile dysfunction, I doubt it would be worth taking. But what if it cost $1 per dose and sometimes (1:10,000) caused permanent deafness? Should your grandmother take it? Let your heart be your guide.

Cheers,
Peter

Smart Dogs

There have been a few articles recently about the finding that dogs are about as smart as two-year-old human children. They have a similar level of vocabulary and mathematical ability, and they can deliberately deceive – something that human children only learn to do later. So that’s where dogs stand today.

 

Here’s a question I have been considering for a while: how long would it take to selectively breed dogs with human level intelligence? I’m not considering transgenic dogs or gene-splicing. Genetic mapping for mate selection is OK. What are we talking about, here? I imagine it would be a logarithmic curve: quick at first as we collected all the smart genes in one dog, then slow as we wait for mutation to produce a breakthrough.

But if we don’t look for anything but intelligence – that is, let the breed characteristics fall where they may – how close could we get and how fast?
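One way to put rough numbers on “how fast” is the breeder’s equation from quantitative genetics, R = h²S: the per-generation response equals heritability times the selection differential. Everything numeric below – the dog “IQ” scale, the heritability, the selection differential – is a made-up assumption for illustration:

```python
# Toy model of response to selection using the breeder's equation,
# R = h^2 * S. All numbers are illustrative assumptions, not data.

def generations_needed(start, target, heritability, selection_diff):
    """Generations of selective breeding to move the mean trait value."""
    gain_per_generation = heritability * selection_diff
    gens = 0
    trait = start
    while trait < target:
        trait += gain_per_generation
        gens += 1
    return gens

# E.g. dog "IQ" 30 (two-year-old level) -> 100, heritability 0.4,
# selecting breeders 15 points above the mean each generation:
print(generations_needed(30, 100, 0.4, 15))  # 12
```

A linear model like this only captures the early, fast phase of the curve; it ignores the plateau once the standing variation is used up and progress has to wait on new mutations.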

Let’s say we have a pool big enough to get to human level intelligence in 200 years. That’s probably quite optimistic – about 100 generations. What are the ethical implications? Moral implications? Did we just create a creature with a soul? Was that morally right or wrong? Are we morally obligated to do this, if it is possible?

Strange.

-Peter