Category Archives: Philosophy

Things concerned with the love of wisdom

Risk, statistics and ethics: the AIDS Vaccine

This idea of risk that we have been discussing on TBU for a while now has come up again. The results of a new HIV/AIDS vaccine study were released this week. The Thai trial (RV144) has shown some promise: the incidence of HIV was 30% lower in the vaccinated group than in the control group.
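
For the curious, here is roughly how a “30% lower incidence” figure gets computed. This is just a sketch: the arm sizes and case counts below are made up for illustration, not the actual trial data.

```python
# How a "30% lower incidence" figure gets computed. The arm sizes and case
# counts here are made up for illustration; they are NOT the actual trial data.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    """Efficacy = 1 - (incidence in the vaccine arm / incidence in the placebo arm)."""
    return 1 - (cases_vaccinated / n_vaccinated) / (cases_placebo / n_placebo)

# Hypothetical arms of 8,000 people each, with 56 vs. 80 infections:
print(round(vaccine_efficacy(56, 8000, 80, 8000), 2))  # 0.3, i.e. "30% lower incidence"
```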

First: the basic bioethics question. Was it OK to give a lot of people a placebo which might let them think they were protected from HIV when they were not protected at all? (Answer: yes.) For one thing, they were not told that they were getting an effective drug. They were told they were getting either a placebo or an experimental vaccine that was probably ineffective. So the subjects knew beforehand that this shouldn’t be counted on for real protection.

Second question: why not just give the vaccine to everyone in the study (16,000 people) and compare the effectiveness to the general population? The problem is that any change in HIV incidence would be due to lots of factors. Behavior, knowledge, and unknown risk factors (maybe people at higher risk were more motivated to join the study than the general population) could all affect the measured efficacy. How would you know which one produced your result?
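
Here is a toy simulation of why that matters. Every number in it is invented; the point is only that if the volunteers differ from the general population, comparing against the general population gives a badly biased answer, while randomizing within the study does not.

```python
import random

random.seed(0)

def infections(n, risk):
    """Count infections among n people who each face the given annual risk."""
    return sum(random.random() < risk for _ in range(n))

GENERAL_RISK   = 0.01   # invented: incidence in the general population
VOLUNTEER_RISK = 0.02   # invented: study volunteers happen to be at higher risk
TRUE_EFFICACY  = 0.30   # the effect we are trying to measure

n = 8000

# Design A: vaccinate every volunteer and compare against the general population.
vaccinated_all = infections(n, VOLUNTEER_RISK * (1 - TRUE_EFFICACY))
general        = infections(n, GENERAL_RISK)
print("Design A estimate:", round(1 - vaccinated_all / general, 2))  # negative: the vaccine looks harmful

# Design B: randomize volunteers into vaccine and placebo arms.
vax     = infections(n // 2, VOLUNTEER_RISK * (1 - TRUE_EFFICACY))
placebo = infections(n // 2, VOLUNTEER_RISK)
print("Design B estimate:", round(1 - vax / placebo, 2))  # close to the true 0.30
```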

If the vaccine were 100% effective, then there would be no need for a placebo-controlled trial. But nothing is 100% effective, and besides, how would you know before you tried?

Now, here’s the more difficult bioethics question: if you have a 30% effective vaccine, who should get it?

This is trickier. You don’t want to encourage risky behavior (the ‘conservatives’ are always concerned about this). So there is a question to be answered by a careful psychology study: do people modify their behavior after receiving a drug that may or may not prevent a transmissible disease? It seems like they might, but scientists don’t make decisions on “might” if we can avoid it. We make decisions based on what is demonstrably consistent with experiment.

But then it becomes a quantitative statistics problem (more statistics!). It’s only worth vaccinating people if their behavior changes don’t outweigh the efficacy. And then it’s only worth vaccinating people who are at risk… but what if the higher-risk people are more prone to behavior modifications? Is it possible to isolate a medium-risk category?
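
A back-of-envelope version of that trade-off, assuming (and it is a real assumption) that infection risk scales roughly linearly with the number of risky exposures:

```python
# Back-of-envelope trade-off, assuming infection risk scales roughly linearly
# with the number of risky exposures (a big simplification).

def net_relative_risk(efficacy, behavior_multiplier):
    """Relative infection risk after vaccination, if risky exposures
    scale by behavior_multiplier. Below 1.0 means vaccinating still helps."""
    return (1 - efficacy) * behavior_multiplier

print(round(net_relative_risk(0.30, 1.0), 2))   # 0.7: clear win if behavior doesn't change
print(round(net_relative_risk(0.30, 1.43), 2))  # ~1.0: a ~43% rise in exposures cancels a 30% vaccine
print(round(net_relative_risk(0.30, 1.60), 2))  # 1.12: vaccination plus this much extra risk is a net loss
```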

And in all of this, there are massive political problems, not the least of which come from the anti-vaccination people, whom I will talk about next week. The vaccine is a real achievement, in any case. Lots of people thought it wouldn’t work. And it reminds us how complicated it gets when trying to do the right thing with imperfect tools.

-Peter

The data will not lie

Consider empirical skepticism: a high standard for physical evidence before committing to a belief. One might be tempted to think that this means “I’ll believe it when I see it.” Strangely, this is not the case. Example: a few short centuries ago, doctors “saw” imbalanced humors and bad vapors, and they “saw” people cured when those conditions were relieved. The “evidence” confirmed their incorrect theories because it was limited and not properly examined. Skepticism was the idea that the deductive reasoning from the theoretical premises of the day (four Aristotelian elements, humors, etc.) might be flawed because the theories were affecting the perception of the outcome.

Skepticism said: “Don’t trust your eyes. Trust the data.”

Modern antiseptic surgical technique was pioneered by Joseph Lister, who reportedly said “it’s as important to wash your hands before surgery as after.”

Think of how radical that is! It is tantamount to saying “don’t trust what you can see. I know you can’t see the thing on your hands that will kill your patient. I don’t even know what it is. Some Frenchman named Pasteur thinks maybe he’s on to something about that. Look, just trust my blind data that tells me that more people survive surgery if I wash my hands.”

We fancy now that it’s so obvious that there are these invisible things called “bacteria” that anyone with any sense would figure that out from a few simple observations and “common sense.” Quite the contrary. As a doctor or a researcher, it is critical to remember that your conception of the world changes your perception of the world. The data is the only thing that will tell the truth, and it will only tell you the truth insofar as you ask the right question with your experiment.

Cheers,
Peter

Statistics, drugs, and hard ethical questions

The New York Times has an article this morning on the FDA’s drug-approval process and some interesting controversy surrounding experimental cancer treatments. (Did you know you can get the NYT on the Kindle? Cool stuff.) The FDA’s lead cancer guy is under attack from both sides of the debate. Some people say that he’s letting unsafe, unproven drugs get through, and others say he’s holding back life-saving treatments with unnecessary bureaucracy. Strangely, both camps are talking about the same drugs. It’s a pretty good article, and it gets to the heart of the matter: Gleevec has obvious, amazing benefits, and it went through FDA review really quickly. Other drugs are subtle. And in that subtlety is the controversy.

Arthur Benjamin did a TED talk in which he suggested that high schools forgo calculus in favor of statistics. I tend to agree: calculus is really important for scientists, but they can get it in their freshman year at university. To make sense of this and many other important ethical issues, everyone needs some statistics background. How many people need to suffer as a control group, without treatment, in order to assess whether a drug is subtly helping? This is a pretty hard statistical question. “Just look in your heart and your conscience will be your guide” doesn’t cut it for these kinds of questions.
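
To give a flavor of what that question looks like, here is the standard two-proportion sample-size formula, sketched in Python. It assumes SciPy is available, and the 10%-versus-8% mortality numbers are invented purely for illustration.

```python
from math import sqrt
from scipy.stats import norm

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """People needed in EACH arm to detect the difference between two event
    rates, using the standard two-proportion sample-size formula."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_treated) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_treated * (1 - p_treated))) ** 2
    return numerator / (p_control - p_treated) ** 2

# A drug that nudges one-year mortality from 10% down to 8% (real, but subtle):
print(round(n_per_arm(0.10, 0.08)))  # roughly 3,200 people per arm, untreated controls included
```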

For instance: aspirin seems to help prevent heart attacks. Out of 22,000 people, 56 per year had heart attacks on aspirin, compared with 96 not on aspirin. That implies that taking aspirin is a good idea, but without thousands of data points, it would be impossible to tell. If you only had 220 people, you would get absolutely no conclusion. Look at it this way: if you take aspirin every day and don’t have a heart attack this year, you may be one of the 40 people whom aspirin saved, or one of the 21,904 who wouldn’t have had a heart attack anyway! All you know for sure is that you’re not one of the 56 people who had heart attacks.
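
If you want to see the arithmetic, here is a quick sketch using the numbers above (treating 22,000 as the size of each group, which is how the 21,904 figure is computed), with SciPy’s Fisher exact test standing in for the formal analysis:

```python
from scipy.stats import fisher_exact

# Using the post's numbers, treating 22,000 as the size of each group.
full_study = [[56, 22000 - 56],    # aspirin:    heart attacks vs. no heart attacks
              [96, 22000 - 96]]    # no aspirin
print(fisher_exact(full_study)[1])  # p-value well below 0.05: the difference is real

# With only 110 people per group you'd expect well under one heart attack per
# arm; even a 0-versus-1 split tells you nothing:
tiny_study = [[0, 110],
              [1, 109]]
print(fisher_exact(tiny_study)[1])  # p = 1.0: no conclusion at all

# And the "did it help ME?" arithmetic: 96 - 56 = 40 people saved per 22,000,
# so the odds that this year's aspirin made any difference for you personally
# are about 21,904 to 40, i.e. roughly 550 to 1 against.
print(round((22000 - 56 - 40) / 40))  # ~548
```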

What we know is not what we think we know, or what our gut instinct or common sense might tell us. At 546:1 odds against the aspirin having made any difference for you personally, it seems stupid to take it, except that it’s so cheap it’s almost free, it has virtually no side effects, and heart attacks are serious as… um… well, they are really serious. If aspirin cost $10 per dose and caused erectile dysfunction, I doubt it would be worth taking. But what if it cost $1 per dose and sometimes (1 in 10,000) caused permanent deafness? Should your grandmother take it? Let your heart be your guide.
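
For what it’s worth, here is the back-of-envelope version of that last question. The probabilities come from the numbers above; the “badness” weights are completely made up, and that is exactly the point: the statistics carry you this far, and no further.

```python
# Back-of-envelope version of the $1, 1-in-10,000-deafness aspirin question.
# The probabilities reuse the numbers above; the "badness" weights are invented,
# and that part is the ethics, not the statistics.

p_benefit = 40 / 22000   # ~0.18% chance this year's aspirin prevents your heart attack
p_harm    = 1 / 10000    # 0.01% chance of permanent deafness

badness_of_heart_attack = 10.0   # invented utility weight
badness_of_deafness     = 3.0    # invented utility weight

expected_gain = p_benefit * badness_of_heart_attack - p_harm * badness_of_deafness
print(expected_gain)  # positive with these weights; it only flips if you judge
                      # deafness to be roughly 18 times worse than a heart attack
```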

Cheers,
Peter

Smart Dogs

There have been a few articles recently about the finding that dogs are about as smart as two-year-old human children. They have a similar level of vocabulary and mathematical ability, and they can deliberately deceive, something that children only learn to do later. So that’s where dogs stand today.


Here’s a question I have been considering for a while: how long would it take to selectively breed dogs with human-level intelligence? I’m not considering transgenic dogs or gene splicing; genetic mapping for mate selection is OK. What are we talking about here? I imagine it would be a logarithmic-looking curve: quick at first as we collect all the smart genes in one dog, then slow as we wait for mutation to produce a breakthrough.
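
Just to illustrate the shape of that curve, here is a toy model based on the breeder’s equation (response per generation = heritability × selection differential), with the standing variation depleting a little each generation. Every parameter is invented, and new mutations (the slow part) aren’t modeled at all.

```python
# Toy model of the "quick at first, then slow" curve, using the breeder's
# equation: response per generation = heritability * selection differential.
# Every number here is invented; it only shows the shape of the curve and
# says nothing quantitative about real dogs.

h2 = 0.4         # assumed heritability of the trait being selected
selection = 2.0  # assumed selection differential per generation (arbitrary units)
depletion = 0.95 # existing genetic variation gets used up a little each generation

trait = 100.0    # arbitrary starting value
for generation in range(1, 101):    # ~100 generations, i.e. roughly 200 years
    trait += h2 * selection
    h2 *= depletion                 # gains slow down as the easy variants get fixed
    if generation % 20 == 0:
        print(generation, round(trait, 1))
```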

But if we don’t look for anything but intelligence – that is, let the breed characteristics fall where they may – how close could we get and how fast?

Let’s say we have a gene pool big enough to get to human-level intelligence in 200 years: about 100 generations, which is probably quite optimistic. What are the ethical implications? Moral implications? Did we just create a creature with a soul? Was that morally right or wrong? Are we morally obligated to do this, if it is possible?

Strange.

-Peter

Peter’s Take on the 7 Habits of Highly Effective People

Introducing the 7 Habits is rather silly at this point considering how old and well regarded it is. Time magazine said “Over the past two decades, Stephen Covey’s best seller The 7 Habits of Highly Effective People has become a management bible in the boardroom.” Its merits are well known. I happen to love the book, but I have some reservations.

The Jester is right on the following point: the 7 Habits of Highly Effective People is not about “Success” in the common sense of the word. In my estimation, that is a very good thing. It is extremely different from How to Win Friends and Influence People, for instance. The 7 Habits addresses the distinction between effectiveness and conventional success pretty directly: It’s more important to develop character than it is to learn any given technique.

A large part of Character is responsibility (read “response ability”): the ability to respond to a stimulus by choice rather than by instinct and emotion. Character means never saying “I was so mad/sad/frustrated that I couldn’t help what I did.”

The book talks about the reasons for building responsibility and some methods for doing so. The first half really focuses on personal responsibility, and the second half is more about social responsibility (read, “not being a jerk”). Despite what the Jester may believe, people can learn to lead, communicate carefully, and really consider the needs of all stakeholders involved. That builds trust, and with trust there can be a whole different level of productivity.

I have never read Machiavelli, but I’ve learned a bit from others who have. From what I can tell, the 7 Habits principles are really a better approach. Even if both achieve the same result, effectiveness can leave a real legacy; brutality only leaves a power vacuum. I talked a while back about ways to motivate others, and there really are not all that many. Reward and fear are the two most basic (and widely used). There are better ways, based on trust and mutual aspirations. But the Jester is right about this: the higher path is not the easy path.

I will say this about the 7 Habits: I find some of it to be a little hokey. Nonetheless, it is of real value that someone has spelled out a clear framework of concepts around the central principles of personal growth, trust, and shared enterprise.

Next week, I’ll talk about the differences between Allen’s Getting Things Done system and Covey’s First Things First.

-Peter