
Thanksgiving turkey with data

I am a huge fan of the Martha Stewart turkey recipe. It always comes out moist and flavorful. This year I substituted quartered onions, thyme, and garlic for the stuffing. I also decided to collect temperature data over time and see if I could get the bird to hit 180 °F about 40 minutes before plating. Here’s how that worked out.

I thawed my turkey in cold water overnight. Estimated cooking times for a defrosted 16-pound turkey range from 4 to 6 hours. I decided to plan for the worst and give myself 6. Once it was apparent I would be done in 4, I reduced my oven temperature (top trace).

(Figure: turkey and oven temperature over time)

We had people coming over to eat at 3:00. I hit 180 at 1:30, but I took it to 190 just to be sure and pulled the bird at 1:50. That gave it more than an hour to rest. So long as nobody is late, that’s not bad timing. I’m glad I built the spreadsheet.
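For anyone who wants to skip the spreadsheet, here is a minimal Python sketch of the same kind of estimate: extrapolate the most recent readings to predict when the bird reaches temperature. The readings below are placeholders, not my actual log.

```python
# A rough sketch: fit a line to the last few temperature readings and predict
# when the breast hits the 180 °F target. Placeholder numbers, not real data.
import numpy as np

TARGET_F = 180.0
minutes = np.array([180, 190, 200, 210])          # minutes since the bird went in
temps_f = np.array([150.0, 157.0, 163.0, 169.0])  # breast temperature, °F

slope, intercept = np.polyfit(minutes[-3:], temps_f[-3:], 1)  # last three points
eta = (TARGET_F - intercept) / slope
print(f"about {eta - minutes[-1]:.0f} more minutes to {TARGET_F:.0f} °F")
```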


FP binding assays, time to give up

After too many hours, I’m giving up on fluorescence polarization (FP) as a generic binding assay. Clearly, it works in some cases with some labels. I got a weak response from one of the aptamers I tried, but another two failed to show anything, and I’m screaming for signal even in the “positive” case. Published results suggest several other aptamers have an FP shift upon binding. But, clearly, not all of them.

For a dye like fluorescein, the local environment around the molecule is probably more important than the size of the complex. So, unless binding changes the conformation near the dye, there’s no FP change. It would be interesting to survey published DNA aptamers and their targets to see which show an FP change and which don’t. The results could be related to structural elements, and that could inform design. One group did do some design work to make an FP aptasensor for small molecules: the binding site and the fluorophore modification were strategically placed close together so that binding was more likely to affect FP. More work in this direction could make such designs more systematic. But, ultimately, FP is not as generically useful as I would like.

Today I tried thermofluorimetric analysis, based on work by the Easley lab at Auburn University. I got immediate results from two published aptamers, comparable to the results in their Analytical Methods paper.

The basic idea is to melt the aptamer and look for changes in the melt curve after adding the target. The target should stabilize the aptamer, so a new peak should appear in the melt curve at higher temperature. Indeed, when I add protein to my aptamer, I see such a peak. I did it at multiple concentrations, and it looks like a binding curve. But when I fit the data I get a weaker Kd than the original papers suggested.
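The fit itself is just a 1:1 binding model. Here’s a minimal sketch of how I do it in Python, assuming the new melt peak’s area scales with the fraction of aptamer bound; the aptamer concentration, target concentrations, and peak areas below are placeholders, not my actual data.

```python
# Fit peak area vs. target concentration to an exact 1:1 binding isotherm.
import numpy as np
from scipy.optimize import curve_fit

APT_TOTAL = 20e-9  # M; assumed total aptamer concentration in the melt

def bound_fraction(target_total, kd, apt_total=APT_TOTAL):
    """Fraction of aptamer bound, from the quadratic mass-balance solution."""
    s = apt_total + target_total + kd
    return (s - np.sqrt(s**2 - 4 * apt_total * target_total)) / (2 * apt_total)

def model(target_total, kd, peak_max):
    """Assume the high-temperature peak area is proportional to bound fraction."""
    return peak_max * bound_fraction(target_total, kd)

# placeholder titration: target concentration (M) and melt-peak area (arb. units)
target = np.array([0, 5e-9, 10e-9, 25e-9, 50e-9, 100e-9, 250e-9])
peak = np.array([0.00, 0.10, 0.19, 0.35, 0.48, 0.60, 0.68])

(kd_fit, peak_max_fit), _ = curve_fit(model, target, peak, p0=[10e-9, 0.7])
print(f"Kd ~ {kd_fit * 1e9:.0f} nM, max peak area ~ {peak_max_fit:.2f}")
```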

It seems to me that any dissociation constant derived this way will inherently be the equilibrium constant at an elevated temperature. The methods paper showed strong binding, but they chose a particularly strong sub-nanomolar aptamer as proof of concept, and they didn’t show a Kd calculation. So maybe at 62 °C the 0.1 nM aptamer is acting like a 1 nM aptamer. That would be a lot weaker than the original but would still give a clear binding curve.

I need to try more conditions to prove that I can derive a Kd from a thermofluorimetric assay. Or not.

I also need to do a protein-only control. My DNA dye should be pretty specific, but it’s important to check that it’s not interacting with the protein and giving spurious signals.

FP binding assays, sanity check

When I am confronted with a frustrating problem, I like to run a “sanity check” to test my assumptions. For instance: I’m running this FP assay. When I add protein, I should get a change in the fluorescence polarization of my sample. There’s lots of literature suggesting that it does happen. But is my instrument capable of detecting it?

As a sanity check, I ran the same control samples in my plate reader 36 times, taking the plate out and putting it back in between runs. I need to know about my reader’s stability.

I expect a relatively small ΔFP. The fluorescein is attached to a 20 kD DNA aptamer that binds a 16 kD protein, so binding is not a very big change in molecular weight. The previous binding assay gave a max ΔFP of 10 mP. I need a standard deviation of less than 3 mP to have any confidence in this measurement.

Thankfully, the standard deviation of the ΔFP across 22 wells was 2 mP. The well-to-well variability, on the other hand, was terrible, so each well needs its own control. I can’t measure absolute FP with any accuracy, but I should be able to measure a ΔFP greater than about 6 mP. That gives me more confidence about the measurements from the last few days.
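That 6 mP threshold is just a three-sigma rule of thumb on the control spread. A sketch of the arithmetic, with placeholder ΔFP values standing in for my 22 control wells:

```python
# Spread of the before/after difference across untouched control wells,
# then a three-sigma cutoff for calling a ΔFP real. Placeholder values.
import numpy as np

delta_fp = np.array([1.8, -1.0, 2.6, -2.4, 0.5, -2.9, 1.4, 0.3])  # mP, controls
sigma = delta_fp.std(ddof=1)
print(f"SD of control ΔFP ~ {sigma:.1f} mP; 3-sigma threshold ~ {3 * sigma:.1f} mP")
```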

(Figure: summary of repeated control ΔFP measurements, 384-well plate rows B and C)

I looked for some literature to compare. A small fluorescein-labeled peptide (MW 1.3 kD) binding to a large fusion protein (49 kD) gives a ΔFP of ~200 mP, and the scatter around their curve is about 6 mP, similar to my own. If I get a ΔFP of that magnitude, I should be able to measure it easily. I measured the FP of erythrosin and got ~340 mP (literature value 316 mP), so I can definitely detect FP when it’s strong.

(Figure: reference FP values from the Beacon fluorescence polarization guide)

From Invitrogen product literature

FP binding assays, continued

More science for my students. Yesterday, I showed a fluorescence polarization (FP) binding assay with frustratingly large error bars. The standard deviation among three replicates was disturbingly large. Put another way: three samples that were supposed to be the same looked different from one another. It would be like pouring three glasses of wine from the same bottle only to discover that one glass was red, one white, and one rosé.

One possibility is that the samples are actually not as similar as we thought. Just because the three glasses of wine came from the same bottle doesn’t mean they were poured the same way. Maybe the bottle had sediment that made one glass look darker.

The other possibility is that the instrument is just not very good. In my wine analogy, maybe it’s not that the three glasses are different; maybe I just need to get my eyes checked. Or to stop drinking.

I tested that yesterday. The FP assay, not stopping drinking. Heaven forbid.

I filled seven wells with the exact same sample. I used a microscope to confirm that there were no bubbles (bubbles play havoc with the FP measurement). I ran the same plate with seven identical samples through the machine 15 times. Here’s what it looked like:

(Figure: FP of seven identical samples over 15 repeated scans)

What it should look like is seven overlapping flat lines. That word “should” is a dead giveaway that something is amiss. What I see here is a whole lot of noise. How can I correct for this, short of buying a new plate reader (which I covet)?

I can compensate by averaging. When I average across all 15 replications, I get one halfway decent measurement. I added protein to the experimental wells and took 15 more measurements. The control sample changed dramatically (-8.5 mP). To be clear, that control sample was not touched between the initial and final measurements. That apparent change in the first sample was introduced by the instrument.

(Figure: per-well FP averaged over 15 scans, before and after adding protein)

I can use the control as an internal standard to help correct that kind of variability. I took every individual fluorescence measurement and normalized it to the control sample. The run-to-run standard deviation increased from 6 mP to 15 mP (presumably because I was adding the independent noise of the standard well to the noise in the experimental wells). So there is no systematic across-the-row error to cancel within a scan, but normalizing does correct the drifting background between measurements.
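The bookkeeping is simple. Here’s a minimal numpy sketch, assuming the scans end up in a CSV with one row per scan and one column per well, with the control in the first column; the file name and layout are placeholders.

```python
# Average repeated scans per well, with and without normalizing to the control.
import numpy as np

fp = np.loadtxt("plate_scans.csv", delimiter=",")  # shape (15 scans, n wells), mP

# straight averaging: one value per well, error bars from the standard error
well_mean = fp.mean(axis=0)
well_sem = fp.std(axis=0, ddof=1) / np.sqrt(fp.shape[0])

# internal-standard version: subtract the control well (column 0) from each scan,
# which cancels drift that hits the whole plate but adds the control's own noise
fp_rel = fp - fp[:, [0]]
rel_mean = fp_rel.mean(axis=0)
rel_sem = fp_rel.std(axis=0, ddof=1) / np.sqrt(fp.shape[0])
```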

(Figure: FP normalized to the control well)

With all of that work, we can get marginally reliable measurements. The error bars are the standard error of the average with n=15. The Kd came out to 100 nM again, which gives me more confidence. Going about it this way has the advantage that I can get those error bars down just by adding more replicates. It also has the advantage of not costing $20,000 for a new plate reader.

Fluorescence polarization binding assays

Here’s a little science for my students. I am going a little bit crazy trying to get a binding assay to work using fluorescence polarization (FP). The basic idea is this: take a constant amount of a fluorescent molecule (aptamer, Apt), add something that binds it and changes its fluorescence polarization (ligand, L), and measure the FP. As ligand is added, the fluorescence polarization should change.

(Figure: schematic of the FP binding assay)

We measure the initial FP, add ligand, measure the FP again, and look at the change. The total FP should be the weighted average of the bound and unbound populations, so we can model the delta-FP as a binding curve. We know the total Apt and L. For a given Kd we can calculate predicted concentrations of all the species, and delta-FP is proportional to the concentration of aptamer bound up in the Apt·L product. That’s freshman chemistry and algebra.
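Here’s that algebra as a short sketch. The 20 nM aptamer is the real number; the ligand concentration, Kd, and maximum delta-FP are just the kind of guesses the fit tunes.

```python
# Mass balance plus the Kd definition gives a quadratic for the bound complex:
#   Apt_t = [Apt] + [AptL],  L_t = [L] + [AptL],  Kd = [Apt][L] / [AptL]
#   =>  [AptL] = (Apt_t + L_t + Kd - sqrt((Apt_t + L_t + Kd)^2 - 4*Apt_t*L_t)) / 2
import math

def predicted_delta_fp(apt_total, lig_total, kd, delta_fp_max):
    s = apt_total + lig_total + kd
    bound = (s - math.sqrt(s * s - 4.0 * apt_total * lig_total)) / 2.0  # [AptL]
    return delta_fp_max * bound / apt_total  # delta-FP scales with bound fraction

# example: 20 nM aptamer, 100 nM ligand, guessed Kd = 100 nM, guessed max = 10 mP
print(predicted_delta_fp(20e-9, 100e-9, 100e-9, 10.0))  # about 4.8 mP
```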

(Figure: delta-FP binding data (dots) with fitted curve (line))

If we guess the equilibrium constant and guess the maximum delta-FP, we can compare the prediction to the experimental results. After a lot of guess-and-check (called a nonlinear fit, and done automatically with the Excel Solver add-in) we get a binding curve (line) that sort-of matches the data (dots). It suggests a Kd of ~100 nM, which is within an order of magnitude of the Kd of this aptamer as measured by dot blot… but look at those error bars. That’s the standard deviation among 3 replicates. Not good.

Why are these error bars so big? Sample preparation or instrument? We pipetted 24 samples of our 20 nM aptamer across one row of wells on the 384-well plate. The same solution went into each well. Results were disappointingly inconsistent.

(Figure: FP of 24 identical samples across row O of a 384-well plate, two repeated scans)

The well-to-well standard deviation is 0.02, which is as large as our maximum delta-FP signal. That’s not usable. The scan-to-scan repeatability is not as bad. The orange and blue data are repeated scans of the same row. Since the scan-to-scan repeatability is OK, we used delta-FP (before and after adding ligand) for the binding assay (rather than raw FP). The standard deviation of the delta-FP is 0.002. The change after adding ligand is as large as 0.015. So, maybe there’s something, but it’s still not good.

Why is it so bad, and how can we fix it? We can go higher on aptamer concentration. That will give better SNR and maybe overwhelm whatever the variable interference is from well to well. We can also take numerous data points for each well and average them. If the plate reader’s positional reproducibility is the problem, averaging should help.