Optimally Designing Experiments in Academic Research

When we publish a paper, we usually show only the end result: the one experiment that highlights the point we’re trying to make or the “discovery” we have just made. Behind that one figure, there are typically many, many optimization experiments and titrations that narrowed down the best experimental parameters to illustrate this one point. These optimization experiments are often trial-and-error, or experiments changing just one condition (or factor) at a time. For example, a simple Western blot has the following factors to consider: primary antibody concentration; blocking buffer composition and concentration; primary antibody incubation time, temperature, and amount of shaking; secondary antibody concentration, incubation time, temperature, and shaking; and washing conditions. That’s 11 factors for just one experiment, and the optimal conditions could differ from one experiment to the next. These factors often interact with each other: the optimal amount of antibody might be higher for shorter incubation times, and the optimal temperature for a short incubation is higher than for a long one. With all these factors to consider, we could spend (waste?) months of time and thousands of dollars just figuring out how to do the experiment before actually testing the thing we’re interested in. This is where the concept of Design of Experiments (DoE) comes in: using statistical principles (and software) to (1) consider the possible factors that could influence an outcome, (2) build a model of how you think they might interact, (3) design a set of conditions and measurements that covers the appropriate “experimental space,” and (4) measure the outcomes and model the effects of the factors. This principle is used in biotech, pharma, agriculture, and engineering to identify the optimal conditions for a particular process.
It is sort of sad that biomedical research and undergraduate life-science education ignore this tool, but we can change that.

This got me thinking about an old experiment I did when I was in grad school. In this experiment, I was measuring nuclear translocation of the glucocorticoid receptor (GR) in response to cortisol. We hypothesized that knocking out an adapter protein would affect GR translocation in response to cortisol. So a simple experiment would be: knock out the gene, add cortisol, measure nuclear translocation … profit. And this is what I presented in the paper.

But there are lots of hidden experiments here. How much cortisol should I use? Too much, and the system is saturated and the effect of my gene knockout would be lost (too much is also toxic). Too little, and there won’t be any effect to observe. How long after adding cortisol should I measure? Too soon, and not enough time will have passed for the translocation to happen; too late, and the effect might have worn off or been lost to diffusion. The medium the cells are in contains plasma, which contains cortisol-binding proteins. How much plasma? Too little, and the cells will be stressed and not survive; too much, and all the cortisol will be bound up in the medium and never reach the cells. All these conditions had to be optimized before I could even start my experiment on the adapter protein, and these conditions interact with each other.

This is where DoE would have helped me in grad school.

Set of parameters for cortisol concentration (y-axis) and plasma (x-axis). The “experimental space” for these two factors is constrained by the inequality.

First, let’s look at cortisol concentration and plasma. The plasma concentration has to be between 1% and 10%: at 1%, the cells respond more slowly to stimuli; at 10%, they are healthy and responsive. The cortisol could range anywhere from 5 to 500 nM; the less cortisol, the more time is needed to see a response and the dimmer it is. I also already have a good idea about the cortisol-binding proteins in the plasma: I estimate that at the highest plasma concentration (10%), there needs to be at least 250 nM cortisol for it to have a chance of reaching the cells. So I can plot cortisol vs. plasma, put a point at 1% & 5 nM and another at 10% & 250 nM, and connect the dots; there’s no point in running any condition below that line, which limits my experimental space. I can even write this as an inequality (with y = cortisol in nM and x = plasma in %): 9y − 240x ≥ 81.
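That inequality is easy to sanity-check in code. A minimal pure-Python sketch (the function names are mine, not from any DoE package) that tests whether a condition is feasible and computes the minimum usable cortisol at a given plasma concentration:

```python
def cortisol_ok(cortisol_nM, plasma_pct):
    # The constraint from the text: 9y - 240x >= 81,
    # with y = cortisol (nM) and x = plasma (%)
    return 9 * cortisol_nM - 240 * plasma_pct >= 81

def min_cortisol(plasma_pct):
    # Solve the constraint boundary for cortisol at a given plasma level
    return (81 + 240 * plasma_pct) / 9

print(round(min_cortisol(1), 1))   # → 35.7 nM at 1% plasma
print(round(min_cortisol(10), 1))  # → 275.7 nM at 10% plasma
```

Reassuringly, those two boundary values match the lowest cortisol concentrations that appear at 1% and 10% plasma in the design table below.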

Now I’m ready to input my factors (time [5 to 60 minutes], cortisol concentration [5 to 500 nM], and plasma concentration [1 to 10%]), along with the constraint relating plasma and cortisol. But now I need to think about my outcome, which is nuclear translocation of GR. I’ve decided to express it as percent translocation (100% meaning ALL the GR is in the nucleus, 0% meaning none is). Should I design my experiment to get 100%? No. My gene manipulations are going to try to speed translocation up and slow it down, so optimally I would want 50% translocation; then my gene manipulations, all other factors staying the same, would either increase or decrease it. So I set my target at 50 and my possible range from 0 to 100.
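A “target is best” goal like this is usually encoded as a desirability function: a score of 1 exactly at the target, falling to 0 at the edges of the acceptable range. A minimal sketch of the classic Derringer–Suich linear form (this is the standard textbook version, not JMP’s internal code; the 50 target and 0–100 range come from this setup):

```python
def desirability(y, target=50.0, low=0.0, high=100.0):
    """Derringer-Suich 'target is best' desirability:
    1.0 exactly at the target, falling linearly to 0 at the range edges."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

print(desirability(50))  # → 1.0
print(desirability(75))  # → 0.5
```

The optimizer then searches for factor settings whose predicted outcome maximizes this score.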

There’s just one more thing to consider: how many experimental units (i.e., measurements) can I afford? It takes about 3 hours to image and quantify each one, and I want the answer in a week, so I can really only afford about two per day, assuming I can get time on the microscope and work on Saturday. Our tissue-culture plates come in 12- and 24-well formats, so I settle on 12 runs: one 12-well plate, with the microscopy spread over one week, and I’ll be all set to test my real hypothesis. I plug this into my stats software (I use JMP), and it spits out the table of conditions below. I can plot the “experimental space” too: as you can see, it covers the borders of the space across the different factors, with one or two replicates in the middle.

Experimental Runs for Nuclear Translocation Experiment
Run   Cortisol (nM)   Plasma (%)   Time (min)
  1        35.7           1.0         60
  2       376.7           2.8         46.25
  3       500.0          10.0         60
  4        99.4           3.4         18.75
  5       500.0          10.0          5
  6       500.0           1.0         60
  7       210.9           7.6         46.25
  8       500.0           1.0          5
  9       275.7          10.0          5
 10       275.7          10.0         60
 11        35.7           1.0          5
 12       392.1           8.0         18.75
Design Space of optimization experiment. Showing plasma concentrations (x-axis), Cortisol Concentrations (left y-axis, circles), and Time (right y-axis, plus-signs). Testing for one outcome across three parameters using only twelve replicates.
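JMP builds the 12-run design with an optimal-design algorithm I won’t reproduce here. As a rough, purely illustrative substitute, here’s a constrained space-filling sketch in NumPy: sample random candidates in the factor box, keep only those satisfying the cortisol–plasma inequality, then greedily pick 12 points that are maximally spread out (a maximin heuristic). The exact points won’t match JMP’s, but the idea is the same: spread the runs over the feasible region.

```python
import numpy as np

rng = np.random.default_rng(0)

# Factor ranges: cortisol 5-500 nM, plasma 1-10 %, time 5-60 min
lo = np.array([5.0, 1.0, 5.0])
hi = np.array([500.0, 10.0, 60.0])

# Sample many candidate conditions, keep only those that satisfy
# the plasma/cortisol constraint 9y - 240x >= 81
cand = rng.uniform(lo, hi, size=(5000, 3))
feasible = cand[9 * cand[:, 0] - 240 * cand[:, 1] >= 81]

# Greedy maximin: repeatedly add the candidate farthest (in normalized
# factor units) from every point already picked
scaled = (feasible - lo) / (hi - lo)
picked = [0]
for _ in range(11):
    dists = np.min(
        np.linalg.norm(scaled[:, None, :] - scaled[picked][None, :, :], axis=2),
        axis=1,
    )
    picked.append(int(np.argmax(dists)))

design = feasible[picked]
print(design.shape)  # → (12, 3)
```

Like the JMP design above, the 12 points hug the borders of the feasible region, with a few landing in the interior.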

Results

A week later, I have my results. I can put them into a model and get the predicted set of conditions that will give me 50% translocation. (I set up a simulation, given estimates of what I already know; those simulated results are in the table below.) In the profiler figure, I set my desirability target at 50%; it shows how changing each factor influences the effect of all the other factors. So when I do my experiment to knock out the adapter protein, I will use 124 nM cortisol, 4.3% plasma, and stop the experiment at 6.5 minutes. The cool thing is that I can also estimate my measurement variability at about 3%, so my gene manipulations need to increase or decrease the translocation by at least 3% for me to see an effect.

Simulated Results of Optimization Experiment
Run   Nuclear Translocation (%)
  1          47.65
  2          74.67
  3          71.75
  4          55.35
  5          35.32
  6          85.53
  7          55.64
  8          51.30
  9          38.78
 10          56.55
 11          56.67
 12          52.85
Prediction Profiler to optimize three parameters for obtaining 50% nuclear translocation.
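JMP’s Prediction Profiler fits a response-surface model behind the scenes. As an illustrative stand-in (the model form here is my own choice, not what JMP fit), here’s a least-squares fit of a main-effects-plus-two-factor-interactions model to the two tables above, followed by a grid search for the feasible conditions predicted closest to 50% translocation:

```python
import numpy as np

# Design runs (cortisol nM, plasma %, time min) and the simulated
# translocation results (%), copied from the tables above
runs = np.array([
    [35.7,  1.0, 60.0], [376.7, 2.8, 46.25], [500.0, 10.0, 60.0],
    [99.4,  3.4, 18.75], [500.0, 10.0, 5.0], [500.0, 1.0, 60.0],
    [210.9, 7.6, 46.25], [500.0, 1.0, 5.0], [275.7, 10.0, 5.0],
    [275.7, 10.0, 60.0], [35.7,  1.0, 5.0], [392.1, 8.0, 18.75],
])
y = np.array([47.65, 74.67, 71.75, 55.35, 35.32, 85.53,
              55.64, 51.30, 38.78, 56.55, 56.67, 52.85])

def features(x):
    # Intercept, three main effects, and three two-factor interactions
    c, p, t = x[..., 0], x[..., 1], x[..., 2]
    return np.stack([np.ones_like(c), c, p, t, c * p, c * t, p * t], axis=-1)

coef, *_ = np.linalg.lstsq(features(runs), y, rcond=None)

# Grid-search the constrained region for the prediction closest to 50%
c = np.linspace(5, 500, 60)
p = np.linspace(1, 10, 40)
t = np.linspace(5, 60, 40)
grid = np.stack(np.meshgrid(c, p, t, indexing="ij"), axis=-1).reshape(-1, 3)
grid = grid[9 * grid[:, 0] - 240 * grid[:, 1] >= 81]  # plasma/cortisol constraint
pred = features(grid) @ coef
best = grid[np.argmin(np.abs(pred - 50))]
print(best)  # [cortisol nM, plasma %, time min] predicted nearest 50%
```

Don’t expect this sketch to reproduce the 124 nM / 4.3% / 6.5 min optimum exactly; a different model form (and a grid search instead of the profiler’s optimizer) gives a somewhat different answer, which is itself a useful lesson about how much the model matters.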

Why don’t we teach this in grad school or undergrad?

The short answer is: because the scientists in charge right now don’t know, or don’t seem to care, about these principles. They teach this in engineering schools, but increasingly I’m finding that pharmaceutical, biotech, medical-device, and therapeutics companies want to hire scientists who can use resources wisely, find answers rationally, and design experiments well. We need to give our microbio, mol bio, genetics, immunology, neuroscience, biochem, or even zoology graduates an edge and incorporate some DoE into our teaching labs. So I’m working in my spare time on an advanced microbiology laboratory course that teaches these principles. It will start with the basics (making one measurement with two pieces of equipment that have different variances and a limited number of runs) and build up to harder problems: finding the optimal formulation of an antibiotic that kills pathogenic bacteria but preserves desirable bacteria; finding fermentation conditions that hit a target CO2 and ethanol content; and drafting regulations on temperature and humidity to prevent Listeria growth, given epidemiology data and lab experimentation. Looking back, I’m interested to know: could DoE have helped anyone in grad school? Will you use it in the future in discovery science, academic labs, or NIH-funded “basic” research?
