# Significance of E. Coli Evolution Experiments

Blount, Borland, and Lenski^{[1]} claimed that a key evolutionary innovation was observed during a laboratory experiment. That claim is false: it rests on an incorrect measurement of statistical significance. Rather than using an established test from the statistics literature, the authors contrived a flawed test ("mean mutation generation") that produced artificially low p-values.

## Test Statistics

The test statistic used in Blount, Borland, and Lenski was the mean mutation generation. For example, the mean mutation generation for experiment one is (see Tables 1 and 2 of the paper)

- <math>\frac{1}{4}\left(30500+31500+2\times32500\right)=31750.</math>
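This average can be checked directly. A minimal sketch; the mutant generations are the ones listed in the experiment one data table below (one mutant at generation 30500, one at 31500, and two at 32500):

```python
# Generations at which mutants appeared in experiment one of the paper
mutant_generations = [30500, 31500, 32500, 32500]

mean_generation = sum(mutant_generations) / len(mutant_generations)
print(mean_generation)  # 31750.0
```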

The null hypothesis in the paper was a constant mutation rate over all generations. However, the mean mutation generation can fail to detect deviations from that null hypothesis. Consider an experiment where the mutation probabilities for generations 1, 2, and 3 are <math>p_1</math>, <math>p_2</math>, and <math>p_3</math>, respectively. If the mutation probabilities are <math>p_1=p_2=p_3=p</math>, then the expected mean mutation generation is 2. But if the mutation probabilities are <math>p_1=p_3=p/2</math> and <math>p_2=2p</math>, then the expected mean mutation generation is still 2, even though the mutation rate now varies across generations. The data have deviated from the null hypothesis, yet the mean mutation generation is insensitive to the change. Because the test can fail to detect such deviations from the null hypothesis, levels of statistical significance computed from it (p-values) are meaningless.
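The insensitivity described above can be illustrated numerically. The sketch below computes the expected mean mutation generation (each generation weighted by its mutation probability) for the uniform and the non-uniform case; the probability value is illustrative, and exact fractions are used to avoid rounding:

```python
from fractions import Fraction

def expected_mean_generation(generations, probabilities):
    """Expected mean mutation generation: each generation weighted by
    its mutation probability."""
    return (sum(g * p for g, p in zip(generations, probabilities))
            / sum(probabilities))

generations = [1, 2, 3]
p = Fraction(1, 100)  # illustrative per-generation mutation probability

uniform = expected_mean_generation(generations, [p, p, p])             # null hypothesis
peaked = expected_mean_generation(generations, [p / 2, 2 * p, p / 2])  # rate doubled mid-experiment
print(uniform, peaked)  # 2 2 -- the statistic cannot tell the two apart
```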

If the trials from an experiment have two outcomes (e.g. success or failure), then the chi-square test statistic for independence can be written

- <math>X^2=\sum\limits_{i=1}^M\frac{\left(x_i-\hat{p}N_i\right)^2}{\hat{p}\left(1-\hat{p}\right)N_i}</math>

where <math>x_i</math> is the number of successes (e.g. mutations) in the *i*-th experiment, <math>N_i</math> is the number of trials in the *i*-th experiment, and

- <math>\hat{p}=\frac{\sum\limits_{i=1}^Mx_i}{\sum\limits_{i=1}^MN_i}</math>

is the fraction of trials that are successful across all experiments. The chi-square statistic is, on average, at a minimum when all success probabilities are equal (<math>p_i=p</math> for <math>i=1,2,\ldots,M</math>) and will, on average, increase whenever the data follow any other hypothesis (<math>\sum\limits_{i=1}^M\left(p_i-\bar{p}\right)^2>0</math>, where <math>\bar{p}</math> is the mean of the success probabilities). Thus the chi-square test is an effective hypothesis test for the data and hypotheses from Blount et al.
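The statistic above is straightforward to implement. A sketch, exercised on a small synthetic data set (two experiments of ten trials each, chosen only for illustration):

```python
def chi_square_statistic(successes, trials):
    """X^2 from the formula above: successes x_i and trials N_i per
    experiment, with the pooled success fraction p_hat."""
    p_hat = sum(successes) / sum(trials)
    return sum((x - p_hat * n) ** 2 / (p_hat * (1 - p_hat) * n)
               for x, n in zip(successes, trials))

# Two synthetic experiments: 1 success in 10 trials, 3 successes in 10 trials.
# Here p_hat = 0.2, each denominator is 0.2 * 0.8 * 10 = 1.6, and each
# numerator is (x_i - 2)^2 = 1, so X^2 = 2 / 1.6 = 1.25.
print(round(chi_square_statistic([1, 3], [10, 10]), 4))  # 1.25
```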

## Comparison of p-Values

The following table compares the p-values reported in Table 2 of Blount et al. to the chi-square p-values for the same experiments. For experiments one and three, the chi-square p-values are much larger than the "mean generation" test p-values from the paper.

| | Experiment 1 | Experiment 2 | Experiment 3 |
|---|---|---|---|
| p-value from paper | 0.0085 | 0.0007 | 0.082 |
| Chi-square p-value | 0.19 | 0.0004 | 0.22 |

The chi-square p-values are computed by comparing the test statistic to the chi-square distribution. This comparison is generally considered reliable when the expected cell frequencies are greater than five. However, there is no consensus about what minimum expected cell frequency is necessary, or about how many cells must exceed that threshold.

### Experiment One Data

The data from experiment one (Table 1 of the paper) are shown below, together with the expected outcomes under the null hypothesis (no evolutionary innovation occurs).

| Generation | Trials | Mutants | Statics | Expected Mutants | Expected Statics |
|---|---|---|---|---|---|
| 0 | 6 | 0 | 6 | 0.333 | 5.667 |
| 10000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 20000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 25000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 27500 | 6 | 0 | 6 | 0.333 | 5.667 |
| 29000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 30000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 30500 | 6 | 1 | 5 | 0.333 | 5.667 |
| 31000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 31500 | 6 | 1 | 5 | 0.333 | 5.667 |
| 32000 | 6 | 0 | 6 | 0.333 | 5.667 |
| 32500 | 6 | 2 | 4 | 0.333 | 5.667 |
| Total | 72 | 4 | 68 | 4 | 68 |

When the flawed test is used to compute the significance of these data, the p-value is 0.0085 (see Table 2 of the paper), which is considered statistically significant. However, when the same data are analyzed using a standard method (the chi-square test), the p-value is 0.19. This p-value is much larger than the one from the paper and gives no reason to reject the null hypothesis. The chi-square p-value for experiment two is small (0.0004), but experiment three is not statistically significant because its chi-square p-value is 0.22.

The chi-square test is a common statistical method.^{[2]} It can be implemented in Microsoft Excel. If the numbers from the last four columns of the experiment one data table (excluding the “totals” row) are entered into Excel in rows 1-12 and columns A-D, then the p-value can be computed by entering “=CHITEST(A1:B12,C1:D12)” into any empty cell of the spreadsheet.
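The same computation can also be done without a spreadsheet. The sketch below recomputes the experiment-one statistic and p-value using only the Python standard library; the chi-square survival function is evaluated with the standard series for the regularized lower incomplete gamma function (an implementation detail not taken from the paper):

```python
import math

def chi2_sf(x, df):
    """Chi-square survival function P(X^2 > x), via the series expansion
    of the regularized lower incomplete gamma function."""
    a, half = df / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    n = 0
    while term > 1e-15 * total:
        n += 1
        term *= half / (a + n)   # next series term: x^n / (a(a+1)...(a+n))
        total += term
    p_lower = math.exp(a * math.log(half) - half - math.lgamma(a)) * total
    return 1.0 - p_lower

# Experiment-one data: mutants per generation, 6 trials each (table above)
mutants = [0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2]
trials = [6] * 12

p_hat = sum(mutants) / sum(trials)  # pooled mutation fraction, 4/72
x2 = sum((m - p_hat * n) ** 2 / (p_hat * (1 - p_hat) * n)
         for m, n in zip(mutants, trials))
df = len(mutants) - 1               # 11 degrees of freedom
print(round(x2, 2), round(chi2_sf(x2, df), 2))  # 14.82 0.19
```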

### Experiment Three Data

The experiment three data from Blount et al. are shown in the table below, together with the expected numbers of mutants under the null hypothesis (constant mutation rate).

| Generation | Trials | Mutants | Statics | Expected Mutants | Expected Statics |
|---|---|---|---|---|---|
| 0 | 200 | 0 | 200 | 0.571 | 199.429 |
| 10000 | 200 | 0 | 200 | 0.571 | 199.429 |
| 20000 | 200 | 0 | 200 | 0.571 | 199.429 |
| 25000 | 200 | 0 | 200 | 0.571 | 199.429 |
| 27500 | 200 | 2 | 198 | 0.571 | 199.429 |
| 29000 | 200 | 0 | 200 | 0.571 | 199.429 |
| 30000 | 200 | 2 | 198 | 0.571 | 199.429 |
| 30500 | 200 | 0 | 200 | 0.571 | 199.429 |
| 31000 | 200 | 0 | 200 | 0.571 | 199.429 |
| 31500 | 200 | 0 | 200 | 0.571 | 199.429 |
| 32000 | 200 | 1 | 199 | 0.571 | 199.429 |
| 32500 | 200 | 1 | 199 | 0.571 | 199.429 |
| Total | 2800 | 8 | 2792 | 8 | 2792 |

## References

- ↑ http://www.pnas.org/content/105/23/7899.full.pdf
- ↑ Wackerly, Mendenhall, and Scheaffer, *Mathematical Statistics with Applications*, Section 14.4.