Heparin, analytical chemists to the rescue, and how bias could hurt open science

I was really impressed by the science that tracked down the problem with the contaminated heparin. It made me think about how hard it is to come up with good data on contentious issues like this. I have to wonder what kinds of problems a loaded question like “is this heparin contaminated?” would present for open science.

For people who are unfamiliar, heparin is a drug derived from meat animals that prevents blood coagulation. It’s really important for dialysis patients because in dialysis, blood is passed through a machine to remove the wastes that would usually go out as urine. The machine doesn’t like coagulated blood, and your body doesn’t want the coagulated blood back. So it’s important that the blood stay un-coagulated, which makes heparin pretty important. A bunch of people got sick taking heparin recently (66 died, sadly) and it was up to the analytical chemists to figure out why. It turned out that an impurity was causing the adverse reaction, but the impurity was so similar to the real heparin that it was being missed by the usual tests. In fact, there have been allegations that the impurity was deliberately introduced because it is cheaper, and in standard tests it shows up as the legitimate compound.

[Heparin structure diagram, from Wikipedia]
So, to get this figured out, the FDA did something clever. They gave samples of both the questionable heparin and the good heparin to some analytical chemists, but they didn’t tell the chemists which was which. Two groups of chemists looked for a contaminant and independently found the same thing. The fact that they both found the same impurity in the same sample is good evidence on its own. But the fact that they did so without knowing which sample had caused the health problems made the conclusion all the stronger.
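The blinding step itself is simple to sketch. Here is a minimal Python illustration of the idea: a coordinator assigns opaque codes to the samples and keeps the code-to-label key to themselves, so the analysts can’t let expectations color their results. The function name, labels, and structure here are my own invention for illustration, not a description of the FDA’s actual procedure.

```python
import random

def blind_samples(samples, seed=None):
    """Assign opaque codes to samples so analysts can't tell which is which.

    Returns (coded, key): `coded` maps code -> sample data, handed to the
    analysts; `key` maps code -> original label, held back by the coordinator.
    Hypothetical sketch only.
    """
    rng = random.Random(seed)
    labels = list(samples)
    codes = [f"S{i:03d}" for i in range(1, len(labels) + 1)]
    rng.shuffle(codes)  # randomize which code lands on which sample
    coded = {code: samples[label] for code, label in zip(codes, labels)}
    key = {code: label for code, label in zip(codes, labels)}
    return coded, key

# The coordinator keeps `key`; the analysts see only `coded`.
lots = {"lot_A": "suspect material", "lot_B": "control material"}
coded, key = blind_samples(lots, seed=42)
```

Only after both groups report their findings does the coordinator unblind with `key` and check whether the impurity turned up in the suspect lot.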

But I would like to point out that this kind of science is hard. We’re looking at a lot of work, yes. But in this kind of business, it’s so murky and so easy to be biased that careful people need to blind themselves to the facts that might affect their interpretations. I’ve been reading a lot about open science recently, and I think it’s a marvelous idea: share data the way people share code so that the maximum good can come from whatever work people are doing. The problem with the idea is that whole “science is hard” thing. If people start sharing preliminary data, and they have biases, and they share those biases, we could end up with open science discrediting itself pretty fast. Unlike broken code that just doesn’t work, broken science can persist for a long time. It would be a shame to see something so promising become the home of charlatans and the self-delusional.