
2011 Japanese Earthquake, 6 Months Later

The devastating March 2011 earthquake in Japan is now six months behind us. Tragically, more than 15,000 people lost their lives.

I heard a speaker a few weeks ago suggest that her father’s cancer may have resulted from living too close to the Three Mile Island incident. Admittedly (and to her credit) she did not insist that the nuclear accident was the cause, only suggested that it might be. And (like all wobble language) such claims are hard to argue against. About 140,000 people evacuated from a region around the plant with a radius of 20 miles. Afterward, roughly 2 additional cases of cancer resulted beyond what would be expected, statistically. The normal incidence of cancer is about 400-500 per 100,000 people per year. So it is completely impossible to discriminate between the people living around TMI who would have gotten cancer anyway and the two extra cases that resulted from the accident. Maybe her father was one of those two. But the likelihood is slim.
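To see just how slim, here is a back-of-envelope sketch using the figures above. The 450-per-100,000 rate is my assumption (the midpoint of the 400-500 range); the 140,000 population and ~2 excess cases come from the text.

```python
# Back-of-envelope check of the TMI numbers.
population = 140_000              # evacuees (from the text)
background_rate = 450 / 100_000   # assumed midpoint of 400-500 per 100k per year
excess_cases = 2                  # estimated excess cases (from the text)

baseline_cases_per_year = population * background_rate
fraction_attributable = excess_cases / (baseline_cases_per_year + excess_cases)

print(f"Expected baseline cancers per year: {baseline_cases_per_year:.0f}")
print(f"Chance a given case is accident-related: {fraction_attributable:.2%}")
```

With these numbers, any individual cancer in the evacuation zone has well under a 1% chance of being accident-related.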

I (of course) did not confront a grieving woman after the death of her father. Her statistical analysis was off, but her emotions took precedence. Yet her attribution of that cancer to TMI leaves her audience with a little more fear of radiation and nuclear power.

With that in mind, I think back to the Japanese earthquake. Again, tragically, 15,000 people lost their lives. A dam broke and drowned at least four people. Living downstream of a dam carries some risk. Are dams a threat to humanity? No, and neither are nuclear power plants. So far, not one person has died from radiation.

Alarmists would have everyone believe that the nuclear disaster associated with the accident was far more terrible than the earthquake itself. If that were true, we would expect that there would be many thousands of sick or dead people due to radiation. But there are none yet. Now, if the nuclear tragedy doubles the risk of cancer across a 30 mile swath of Japan containing 100,000 people, we might see 500 additional cases of cancer per year. That is a terrible (and unrealistically bad) scenario, but it is (still) not remotely so terrible as a single tragedy killing 15,000 people without warning.
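The arithmetic behind that comparison, sketched out (the 450-per-100,000 background rate is my assumed midpoint of the 400-500 range quoted earlier; the rest comes from the scenario above):

```python
# The hypothetical worst case from the paragraph above:
# doubling the cancer risk for 100,000 people.
population = 100_000
background_rate = 450 / 100_000   # assumed midpoint of 400-500 per 100k per year

# Doubling the risk adds one extra background's worth of cases.
extra_cases_per_year = population * background_rate

quake_deaths = 15_000
years_to_match = quake_deaths / extra_cases_per_year

print(f"Extra cancer cases per year: {extra_cases_per_year:.0f}")
print(f"Years of extra cases to equal the quake toll: {years_to_match:.0f}")
```

Even in this deliberately pessimistic scenario, it would take decades of excess cases (and cancer is not uniformly fatal) to equal a single day's death toll from the earthquake.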

So it is strange to me that irrational fears resulting from the Fukushima nuclear disaster will have so great an emotional impact relative to the natural disaster. And the emotion may well have a far bigger political impact than the reality.


P.S. To be fair, one rad-worker has leukemia, but that disease takes years to develop, so it is not likely due to the meltdown.

Fukushima Daiichi reactor faults


Donate to the Red Cross

Let’s assume that the Fukushima Daiichi reactors collectively produce a plume one square kilometer in size with radiation levels of 400 mSv/hour. To get comparable numbers, we need the dose per year:

400 × 24 × 365 ≈ 3.5 million mSv/year.

Now let’s take that as a uniform distribution over 1 square km and spread it out over the whole Earth: divide by 500 million square km (roughly the Earth's surface area).

3.5 million ÷ 500 million = 0.007 millisieverts/year

One Japanese reactor site is not going to sustain that level of emission for a week, much less for a year. Also, 400 mSv/hour is probably a peak value, not an average. The real numbers are much, much less. So, the absolute crazy-absurd worst-case scenario is less than 0.007 mSv/year globally.

Typical natural background radiation levels are about 2.4 millisieverts (mSv) per year. You are already being irradiated at this moment at 342 times the absolute worst-case dose from that reactor. So crunch your iodine tablets if it makes you happy, but people in Japan are suffering from quake and tsunami damage, not radiation. How about we spend our iodine budget on helping out the Red Cross?
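The whole calculation above, in one place, using exactly the round numbers from the text:

```python
# Worst-case global dose estimate, step by step (numbers from the text).
peak_dose_rate = 400        # mSv/hour over a 1 km^2 plume, taken as sustained worst case
annual_dose = peak_dose_rate * 24 * 365   # mSv/year over that square km

earth_area_km2 = 500e6      # rounded global surface area used in the text
global_dose = annual_dose / earth_area_km2  # mSv/year if spread uniformly worldwide

background = 2.4            # typical natural background, mSv/year

print(f"Annual plume dose:  {annual_dose / 1e6:.1f} million mSv/year")
print(f"Spread globally:    {global_dose:.4f} mSv/year")
print(f"Natural background is {background / global_dose:.0f}x larger")
```

Even granting the absurd premise that the plume persists at peak intensity for a full year, natural background dwarfs the globally averaged dose by a factor of a few hundred.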

Adapted from Pournelle http://www.jerrypournelle.com/

Correlation and causation: meditations on violence

XKCD looks at causation and correlation


With XKCD’s comic firmly in mind, I considered the news today. Constance Holden with the ScienceNOW Daily News over at Science Magazine drew my attention to an article linking violence to childhood sugar consumption. The telling quote is this:

Although lower education levels correlated with daily sweet-eating, the connection with violence remained significant even when the researchers controlled for factors such as family circumstances, parental attitudes, and IQ. “Try as I did, I couldn’t get rid of the sweets-violence connection,” says Morris.

So… it may be that highly sugared children (sugartots?) are made more violent by sugar… or it could be that violent people are drawn to sugar as children… the candy manufacturers may be drugging children to reduce their impulse control… or kids whose parents failed to help them learn impulse control as children lack it in adulthood… it’s all very complicated.


Risk, statistics and ethics: the AIDS Vaccine

This idea of risk that we have been discussing on TBU for a while now has come up again. The results of a new HIV/AIDS vaccine study were released this week. The Thai trial has shown some promise. The incidence of HIV was 30% lower in the group vaccinated with RV 144 than the control group.

First: the basic bioethics question. Was it OK to give a lot of people a placebo that might let them think they were protected from HIV when they were not protected at all? (Answer: yes.) First off, they were not told that they were getting an effective drug. They were told they were getting either a placebo or a probably ineffective, experimental vaccine. So the subjects knew beforehand that this shouldn’t be considered a real vaccine.

Second question: why not just give the vaccine to everyone in the study (16,000 people) and compare the effectiveness against the general population? The problem is that any change in HIV incidence would be due to lots of factors. Behavior, knowledge, and unknown risk factors (maybe people at higher risk had a greater desire to be in the study than the general population) could all affect the measured efficacy. How would you know which produced your result?
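A placebo control sidesteps that problem because both groups share those confounders, so the incidence ratio between the arms isolates the vaccine's effect. Here is a minimal sketch of how efficacy is computed from a two-arm trial; the infection counts below are made up for illustration and are not the RV 144 results.

```python
# Illustrative efficacy calculation for a placebo-controlled trial.
# Counts are hypothetical, chosen only to demonstrate the arithmetic.
vaccine_group = 8_000
placebo_group = 8_000
vaccine_infections = 35
placebo_infections = 50

rate_vaccine = vaccine_infections / vaccine_group
rate_placebo = placebo_infections / placebo_group

# Efficacy = relative reduction in incidence versus placebo.
efficacy = 1 - rate_vaccine / rate_placebo
print(f"Efficacy: {efficacy:.0%}")
```

Because both arms are drawn from the same volunteer pool, behavior and risk-factor differences cancel out of the ratio; that cancellation is exactly what a general-population comparison cannot offer.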

If the vaccine were 100% effective, then there would be no need for a placebo controlled trial. But nothing is 100% effective and – besides – how would you know before you tried?

Now, here’s the more difficult bioethics question: if you have a 30% effective vaccine, who should get it?

This is trickier. You don’t want to encourage risky behavior (the ‘conservatives’ are always concerned about this). So there is a question to be answered by a careful psychology study: do people modify their behavior after receiving a drug that may or may not prevent a transmissible disease? It seems like they might, but scientists don’t make decisions on “might” if they can avoid it. We make decisions based on what is demonstrably consistent with experiment.

But then it becomes a quantitative statistics problem (more statistics!). It’s only worth vaccinating people if their behavior changes don’t outweigh the efficacy. And then it’s only worth vaccinating people who are at risk… but what if the higher-risk people are more prone to behavior modifications? Is it possible to isolate a medium-risk category?

And in all of this, there are massive political problems, not the least of which come from the anti-vaccination people, whom I will talk about next week. The vaccine is a real achievement, in any case. Lots of people thought it wouldn’t work. And it reminds us how complicated it gets when we try to do the right thing with imperfect tools.


The data will not lie

Consider empirical skepticism: a high standard for physical evidence before committing to a belief. One might be tempted to think that this means “I’ll believe it when I see it.” Strangely, this is not the case. Example: a few short centuries ago, doctors “saw” imbalanced humors and bad vapors, and they “saw” people cured when those conditions were relieved. The “evidence” confirmed their incorrect theories because it was limited and not properly examined. Skepticism was the idea that maybe the deductive reasoning from the theoretical premises of the day (four Aristotelian elements, humors, etc.) was flawed because the theories were affecting the perception of the outcomes.

Skepticism said: “Don’t trust your eyes. Trust the data.”

Modern antiseptic surgical technique was developed by Joseph Lister, who reportedly said “it’s as important to wash your hands before surgery as after.”

Think of how radical that is! It is tantamount to saying “don’t trust what you can see. I know you can’t see the thing on your hands that will kill your patient. I don’t even know what it is. Some Frenchman named Pasteur thinks maybe he’s on to something about that. Look, just trust my blind data that tells me that more people survive surgery if I wash my hands.”

We fancy now that it’s so obvious that there are these invisible things called “bacteria,” that anyone with any sense would figure that out from a few simple observations and “common sense.” Quite the contrary. As a doctor or a researcher, it is critical to remember that your conception of the world changes your perception of the world. The data is the only thing that will tell the truth, and it will only tell you the truth insofar as you ask the right question with your experiment.