Talk:Flaws in Richard Lenski Study

From Conservapedia

See Talk:Letter to PNAS for a focused discussion on the letter that was actually sent to PNAS. The discussion below is broader.

Now, is this page a report on other people thinking Lenski's paper is flawed, or is this Aschlafly reporting about himself and putting it in the headline of the Front Page? And please specify by references which "two other" experiments are referenced, and explain how you can see that the "historical contingency" is not true. --Stitch75 23:22, 12 July 2008 (EDT) And if you think you argue that well, why don't you submit it as a comment on the paper to PNAS. --Stitch75 23:24, 12 July 2008 (EDT)

This article is seriously partisan. I see no outside or third-party analysis of the paper in the references, just a bunch of cites to the article or to Conservapedia itself. Wisdom89 00:34, 13 July 2008 (EDT)

Folks, you're in the wrong place if you're more interested in who says what rather than determining the truth itself. A true wiki gets at the substantive truth rather than trying to rely on biased gatekeepers and filters of the truth.

The flaws in the statistical analysis in Lenski's paper are clearly set forth and well-referenced. If you're interested in the truth, then look at the paper and see the flaws yourself. If you're not interested in the truth and think you can distract people's attention from it by using other tactics, then you're wasting your time here.--Aschlafly 00:42, 13 July 2008 (EDT)

I agree with Aschlafly on this: there needs to be some kind of admission or response from Lenski, but little has been forthcoming, and Conservapedia itself has taken it on. Notice how no one, aside from Conservapedia (and I think CreationWiki?), has asked such questions of Lenski? All the magazines etc. have taken his study at face value without actually taking the time to critique his claims. Aside from the "peer reviewers," of course. JJacob 00:47, 13 July 2008 (EDT)

The "peer reviewers" who spent somewhere between 0 and only 14 days looking at the paper, and missed an obvious contradiction between Figure 3 (specifying the "Historical contingency" hypothesis) and Table 1, Third Experiment. The statistical analysis in the paper appears so shoddy to me that I doubt anyone with real statistical knowledge or expertise even reviewed it.--Aschlafly 00:59, 13 July 2008 (EDT)
I have expertise in research and statistics and I'm just not seeing this shoddiness that you make reference to. You are allowed to have your doubts, but we should get a bunch of people familiar with such fields to examine the paper's statistical analysis. Wisdom89 01:01, 13 July 2008 (EDT)
I have the same problem. I hold a Dr. rer. nat. title and have done some statistics (although I am no expert on it) and fail to see the "shoddiness". Please help my underdeveloped mind, Mr. Schlafly, and enlighten me. If it is so obvious, it should be a one-liner to formulate. --Stitch75 09:54, 13 July 2008 (EDT)
Please try harder then. I've expanded the explanations a bit also.--Aschlafly 10:42, 13 July 2008 (EDT)
At least now I can recognize your statements more or less clearly. Yet I think this passage (quoted from the paper) has to be addressed more specifically than you do in order to discredit the statistics used: "We also used the Z-transformation method (49) to combine the probabilities from our three experiments, and the result is extremely significant (P < 0.0001) whether or not the experiments are weighted by the number of independent Cit+ mutants observed in each one." --Stitch75 11:54, 13 July 2008 (EDT)
Just for comparative purposes and a frame of reference: a P-value that is less than the significance level of 0.05 is considered significant. Wisdom89 13:10, 13 July 2008 (EDT)
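Since the Z-transformation (Stouffer) method quoted above keeps coming up in this thread, here is a minimal sketch of how it combines p-values, using only the Python standard library. This is purely an illustration of the general technique: the p-values and weights below are invented for demonstration and are not the values from the Blount et al. paper.

```python
# Illustrative sketch of Stouffer's Z-transformation for combining p-values.
# The p-values and weights below are made up for demonstration; they are
# NOT the values from the Blount et al. paper.
import math
from statistics import NormalDist

def stouffer_combined_p(p_values, weights=None):
    """Combine one-sided p-values via Stouffer's Z-transformation.

    Each p is converted to a z-score; the weighted sum of z-scores,
    normalized by the root of the summed squared weights, is again
    standard normal under the joint null hypothesis.
    """
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(p_values)
    zs = [nd.inv_cdf(1.0 - p) for p in p_values]
    z_combined = sum(w * z for w, z in zip(weights, zs)) / math.sqrt(
        sum(w * w for w in weights)
    )
    return 1.0 - nd.cdf(z_combined)

# Three hypothetical experiment p-values, one "marginally significant" (0.08):
p_unweighted = stouffer_combined_p([0.01, 0.03, 0.08])
# The same p-values weighted, e.g., by a hypothetical count of mutants
# observed in each experiment:
p_weighted = stouffer_combined_p([0.01, 0.03, 0.08], weights=[4, 5, 8])
print(f"unweighted: {p_unweighted:.5f}, weighted: {p_weighted:.5f}")
```

Note how the combined value can fall well below any individual p-value, and how the weighting question raised in this thread enters only through the `weights` argument; whether the combination is legitimate at all depends on the three experiments being independent tests of the same hypothesis.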

I wonder, do the PNAS allow questions to raised and asked of the "peer reviewers" themselves? Are we able to find out who/what experience they themselves have? Perhaps that is an avenue that we could look at? I apologise in advance if this has already been asked or answered. JJacob 01:07, 13 July 2008 (EDT)

Definitely not. Peer reviewers are anonymous and known only to the editor, for good reason. If peer reviewers were not anonymous, they would be very careful not to step on anybody's toes, to avoid retaliation. --Stitch75 09:54, 13 July 2008 (EDT)
No, PNAS probably won't disclose who supposedly did the 14-days-or-less peer review on the Lenski paper. You're right that such disclosure could shed some light on the final product.
"Wisdom89", your claim that you "have expertise" and don't see the flaws only makes me conclude that you don't really have the expertise that you claim. Judging by your silly user name, perhaps you've tried that approach before. We're not fooled by it here.--Aschlafly 01:14, 13 July 2008 (EDT)

(removed silly comment by person who has since been blocked for 90/10 talk violation)

Marginally significant

In #1, that's not inconsistent. The figure makes it clear that, according to the hypothesis, the mutations should also occur earlier than 31,000 generations, but should become more common at that point; 12/17 mutants occurred after 31,000. The point in #3 is not clear. What do you mean by weighting? In #5, the article claims that Lenski's paper is not clear in explaining the results of his largest experiment, and that referring to his largest experiment as "marginally ... significant" serves to obscure its statistical insignificance. Actually, "marginally significant" is clear. It means that the p-value is between .05 and .10 (in this case it's .08, Table 2). It's a pretty standard phrase to describe an effect that falls into that range. Murray 13:41, 13 July 2008 (EDT)

Your defense of Lenski per point #1 is contradicted by the paper's own abstract, and by the comments by the other Lenski defender below. In point #3, proper weighting is needed to combine multiple studies. In point #5, you don't cite any authority for the unscientific claim of being "marginally significant."--Aschlafly 20:45, 13 July 2008 (EDT)
Define "proper weighting". Are you referring to how they interpreted the sum of the work or to a specific analysis? You're right, I didn't cite any scientific authority. Here's one: Motulsky, H. (1995). Intuitive Biostatistics, Oxford Univ. Press. Chapter 12. Also, try searching for the phrase in Google Scholar or PubMed, you'll find plenty of uses of it. It's a shorthand way of describing an effect that came close to the arbitrary threshold for statistical significance but did not reach it. Murray 21:55, 13 July 2008 (EDT)
You seem unwilling to accept that Point #1 identifies a clear mistake between the figure and the abstract in the paper, which requires correction by Lenski or PNAS. Given that unwillingness, I doubt it will be productive to discuss the other mistakes further with you. You're right that other usage of the dubious concept of "marginally significant" can be found on the internet, but the first link returned by my internet search for it was the non-rigorous "Intuitive Biostatistics" and the second link returned a criticism of the concept similar to the criticism expressed here. [1] --Aschlafly 00:42, 14 July 2008 (EDT)
It's not a clear mistake in my mind. There does seem to be some confusion or contradiction in terms of the number of generations, but I haven't seen a clear explanation of why this is or is not a mistake. What seems like a contradiction is the abstract's statement that no mutations occurred before 31,500, but it's not clear to me whether I'm misunderstanding that. As for "I doubt it will be productive discussing the other mistakes further with you": of course you doubt it, because you are unlikely to be willing to concede anything no matter what anyone says. Why do you call the concept dubious? The procedure of determining whether an effect is significant requires the setting of an arbitrary threshold, which is usually .05. That means, in analyses of the sort in the Blount et al. paper, that there's less than a 5% chance that the findings are due to sampling error. When an effect comes close to the threshold, it is worth noting, because of the problems inherent in significance testing, which is itself widely criticized in the statistical literature. I am not clear what you mean by weighting, as I mentioned before - in the interpretation or statistically? Murray 13:30, 14 July 2008 (EDT)
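Murray's point about the arbitrariness of the .05 threshold can be made concrete with a small stdlib-only calculation: under a true null hypothesis (a fair coin), an exact computation shows how often a chosen rejection cutoff fires anyway. The flip counts and cutoffs below are made-up illustration values, unrelated to the paper's data.

```python
# Sketch of what an "arbitrary threshold" means in practice: the exact
# probability that a fair coin (the null hypothesis) nevertheless lands
# outside a two-sided rejection cutoff. Numbers are illustrative only.
from math import comb

def two_sided_null_rate(n_flips, cutoff):
    """P(|heads - n/2| >= cutoff) when every flip is fair."""
    total = 2 ** n_flips
    extreme = sum(
        comb(n_flips, k)
        for k in range(n_flips + 1)
        if abs(k - n_flips / 2) >= cutoff
    )
    return extreme / total

# With 100 flips, rejecting whenever the head count deviates from 50 by
# 10 or more gives a false-positive rate a bit above the usual 0.05 line,
# while a cutoff of 11 drops it below - the threshold is a chosen line,
# not a property of the data:
rate = two_sided_null_rate(100, 10)
print(f"null rejection rate at cutoff 10: {rate:.4f}")
```

This is why an effect with p just above .05 ("marginally significant") and one just below it differ less than the binary significant/not-significant language suggests.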

Richard Lenski incorrectly included generations of the E. coli already known to contain Cit+ variants in his experiments.[3] Once these generations are removed from the analysis, the data disprove Len

The author of the above statement must not have read the paper or the supplement carefully. Lenski took clones from those cultures that weren't Cit+. Careful reading of the paper or the supplement reveals that Cit+ mutants appeared at 750 generations or later into the replay experiment. The authors write: "New Cit+ variants emerged between 750 and 3,700 generations..."--Argon 15:03, 13 July 2008 (EDT)

Your explanation is not convincing. Show us how Lenski proved that the samples did not have any Cit+ variants, if you really think he's claiming that.--Aschlafly 20:51, 13 July 2008 (EDT)
I count at least four distinct diagnostic methods used in the paper for distinguishing whether clones are Cit+. One in particular they describe as being very sensitive to weakly citrate-using cells. It's not that I 'think' Blount et al. claim they started with Cit- cells. They say that in the 'Supporting information' document.
Andy, for tonight, I'll leave it as an exercise for you to answer your own question of how one might test a clone to be sure that it was Cit-. Use the paper if you'd like, or present another means. Tomorrow evening I'll cite the methods Blount used in the paper.
Meanwhile, here's another question to ponder: If the colonies used to start the 'replay' cultures were Cit+ at the start, why is it that no Cit+ cells were found in such cultures before 750 generations? These cultures were all started from single clones which must have either had the Cit+ phenotype or not.--Argon 18:38, 14 July 2008 (EDT)

- How did Lenski obtain Cit- clones from later populations after Cit+ had become the dominant phenotype? It's pretty standard procedure. First, keep in mind that these populations contained a minority of Cit- cells. Now, a small liquid culture is made from the frozen glycerol stock of cells from a given generation, say 32,000. This is grown to an appropriate OD and then plated on a standard rich-media agar plate. Provided that the plated sample isn't too dense, this plating will deposit somewhere between 15-100 individual cells on the agar, with plenty of spacing between them. The plate is then incubated at 37 C for a given time, during which the individual cells replicate to form small, visible colonies. These colonies will consist solely of cells that are genetically identical to the original cell deposited on the agar. This is all well and good, but how do you find which of the colonies are Cit-? Here you use a technique called replica plating. The agar plate is gently inverted onto a sterile swatch of velvet so that some of the cells from each colony are deposited onto the fabric. Next, a citrate-only agar plate is pressed against the velvet, transferring cells from the fabric to the new plate in exactly the same spatial orientation as on the first plate. The cells are then allowed to grow on the citrate-only plate. When we then compare the two plates, we look for colonies on the rich plate that did not grow on the citrate-only plate. These colonies will be Cit-, and can be used for the replay experiments. Gerlach 13:45, 14 July 2008 (EDT)

Just a correction: the Cit+ cells didn't dominate the cultures at generations 32,000 and 32,500. Cit+ mutants represented 12% and 19% of the population at these time points, respectively. Replica plating is definitely one means of identifying Cit- and Cit+ clones, but because the Cit+ cells represented a fraction of the total, it is simple enough to streak the samples to individual colonies (founded by single cells) and test each colony individually. Otherwise, a nice description of replica plating (a technique developed by Joshua Lederberg that allowed him to do the research for which he was awarded a share of the 1958 Nobel Prize in Medicine). --Argon 18:38, 14 July 2008 (EDT)
Oops, you're right, Argon. Cit+ wasn't dominant at generation 32,000, so replica plating might not be the best option. Still, in the case of almost 20% Cit+, I think replica plating might be labor-saving. I'm a biochemist by training, not a microbiologist, but I'm sure there are other methods that could be used. For example, Christensen's agar appears to provide a sensitive, colorimetric method of identifying even weakly citrate-utilizing colonies, so one might be able to plate cells on a Christensen's agar plate and pick uncolored colonies. Again, though, I'm no microbiologist, so I don't know the best method. However, there are ways to easily pick Cit- clones from later generations in this case. That said, I think that this specific "flaw" cited in Lenski's paper should be removed from the main article. Clearly, Cit+ clones were not used in the replays from later generations, so Conservapedia's objection is totally baseless. Gerlach 19:19, 14 July 2008 (EDT)
All appropriate descriptions. For generating estimates of the population distribution (Cit-/Cit+) in a culture, replica plating would certainly be applicable. The main point is that identifying and isolating Cit- clones from the generations used in the experiment is straightforward. I agree that it would be a mistake to keep that objection in the article (actually that thread runs across several points in the article).--Argon 20:12, 14 July 2008 (EDT)
Nothing in the paper rules out contamination of those samples by Cit+ variants, and you quote nothing in the paper to rule it out. There was no reason to use and rely on these samples that already have Cit+ variants.--Aschlafly 23:04, 14 July 2008 (EDT)
The whole point of the discussion above was to illustrate that there are methods to isolate cells with a particular genotype from a mixed population. Suppose we used the replica plating method (maybe not the best choice, but it's standard procedure). Care to explain where the Cit+ contaminants could possibly come from? And the naivete inherent in objecting that "nothing in the paper" rules it out is simply astounding. Standard laboratory procedures, like isolating specific clones, typically don't show up in papers. For example, if I were to write a paper on my research, it would not include a detailed discussion of the construction of my expression plasmid. I might mention that I cloned my gene into a particular vector, but there wouldn't be a discussion of how I did my PCR, restriction digests, or transformations. Things like that are simply extraneous details that are taken for granted by experienced researchers. There is simply no reason for Lenski et al. to include a discussion of their method for isolating specific clones. This is why this whole exercise of Conservapedia criticizing Lenski's paper is folly, because most Conservapedia users simply don't know standard laboratory techniques. It's kind of important to know what scientists today are able to do on a routine basis before wading in to claim that they couldn't have done what they claim. Gerlach 09:05, 15 July 2008 (EDT)
Actually, the methods cited in the paper would separate the Cit+ lines from the Cit-. They took single colonies (founded by single cells) and tested them on selective agar (minimal citrate medium - MC agar) and an indicator medium, Christensen's citrate agar (product information is available from Sigma). As Gerlach notes from the Blount paper, Christensen's citrate agar is very sensitive to citrate utilization. Typical E. coli strains do not produce a color change, but other citrate-using enterics like S. typhimurium and Cit+ E. coli mutants appear pink/red on the plates.
This conclusion is further validated (as I mention above) by the fact that Cit+ mutants did not appear immediately in the replay experiments (others have also noted this). For example, in replay set 1, it took 750 generations before the first Cit+ mutants were isolated. Many cultures didn't produce Cit+ cells even after 3,700 generations. If the starting line was Cit+, *all* the cells in the culture would have shown up as Cit+ in the first pass.
Overall, it is abundantly clear that the cell lines used in the replay experiments were not Cit+ at the beginning.--Argon 19:44, 15 July 2008 (EDT)

1. Lenski's "historical contingency" hypothesis, as specifically depicted in Figure 3, is contradicted by the data presented...

Again, more issues with comprehension: the authors did not know when the potentiating mutation first arose, but they knew it was before generation 31,500. Figure 3 is merely figurative, being an illustration, and not quantitative. Their analyses suggested that the potentiating mutation arose at about the 20,000-generation point or later. Their conclusion is that the Cit+ mutation rate is low even in a potentiated background but apparently distinguishable from a low-incidence, single, unpotentiated event. --Argon 15:04, 13 July 2008 (EDT)

Statistics101 package:

What does the author imply by writing: "...Lenski himself does not have any obvious expertise in statistics. In fact, Richard Lenski admits in his paper that he based his statistical conclusions on use of a website called 'statistics101'"?

From the web site: "Professionals: Although it was originally developed to aid students, the Statistics101 program is suitable for all levels of statistical sophistication. It is especially useful for Monte Carlo, resampling, and bootstrap applications. It has been used by professionals in many fields. These include anthropology, biology, ecology, evolutionary biology, epidemiology, marine biology, psychology, toxicology, veterinary pathology." It would appear that the package from the web site (not the web site itself) was used to perform the Monte Carlo resampling tests. Is there any evidence that the package produces incorrect results?--Argon 19:41, 13 July 2008 (EDT)

Formal Response to PNAS

I've moved this thread from the open section above since this page is being categorized. --DinsdaleP 21:52, 13 July 2008 (EDT)

I think this article presents very sound arguments. Conservapedia should now take action, offering to publish a rebuttal of Lenski in the PNAS journal.--JBoley 11:31, 13 July 2008 (EDT)

I agree with JBoley in that if Conservapedia wants to present a formal, professional response to Professor Lenski's paper that questions specifics within his paper, then it should happen. That is the proper execution of the scientific method, and I'm certain that a professional response to PNAS would yield better results than vague "give us all the data" demands. Is a formal response to PNAS from Conservapedia in the works, or is this article the only place these questions/objections were intended to be raised? --DinsdaleP 11:52, 13 July 2008 (EDT)

I don't know if PNAS would embarrass itself by printing a rebuttal, or whether it has the integrity to retract Lenski's paper. Conservapedia's audience is probably bigger than PNAS's, and we're certainly not going to suspend our exposure of the truth here in order to await correction by PNAS.
PNAS publishes easy-to-find rules for rebuttals (they call them "Letters"). They are specifically what you want: PNAS: Information for Authors. I quote: "Letters are brief online-only comments that contribute to the discussion of a PNAS research article published within the last 3 months. Letters may not include requests to cite the letter writer's work, accusations of misconduct, or personal comments to an author. Letters are limited to 250 words and no more than five references." --Stitch75 13:25, 14 July 2008 (EDT)

In addition, Lenski has already demonstrated how he reads this site and he can certainly correct his own paper, and he should do so. Indeed, professionalism might support giving Lenski the time to correct it himself first.--Aschlafly 12:14, 13 July 2008 (EDT)
I understand what you're saying, but there's nothing improper or unprofessional in submitting a formal request to PNAS to have the points in this article addressed by Professor Lenski and his team. To be frank, you've been adamant in your insistence that PNAS has been less than rigorous in the review of Lenski's paper, so if one of your intentions is to demonstrate this then having PNAS respond to a formally submitted response to the paper in public would serve that purpose. This can be done in addition to publishing these objections on Conservapedia--DinsdaleP 12:21, 13 July 2008 (EDT)
If not PNAS, then perhaps some other public forum? I know Andy Schlafly has appeared on television, effectively arguing against Gardasil and other dangerous vaccines. Perhaps if a TV program were interested you could argue against Lenski? You could be the spokesperson against these false claims of evolution.--JBoley 12:24, 13 July 2008 (EDT)
I'm not opposed to the above suggestions, but the future is here, folks. Lenski, PNAS editors and television producers have free will to reject or ignore the truth, and I'm more interested in getting the truth out here than trying to persuade someone in dying media like print or television. Lenski and his defenders can see the truth here, and they can decide for themselves whether to reject or admit it.--Aschlafly 12:30, 13 July 2008 (EDT)
I agree with you about the dying nature of print (I don't think television is dying, merely changing). The problem is that information about the flaws in Lenski's study are not registering outside of sites like Conservapedia. In effect, Conservapedia is an echo chamber. People that come to this site already agree with its point of view. I encourage you to attempt to attract the attention of other forms of media, or Lenski's false claims will simply be accepted as fact by the public and even worse, by educators.--JBoley 12:36, 13 July 2008 (EDT)
While there's nothing wrong with taking one's message to various forums or outlets, I believe there's a specific value in submitting these objections as a formal response to PNAS. Conservapedia was established as a trustworthy resource for students, and in my mind all of its actions should be done with the goal of informing and educating. The Lenski debate is over the findings published in a scientific journal after undergoing a peer-review process. The objections to this paper by the CP leadership are not just about its content, but about the process by which it was reviewed and published in the timeframe it was. Talking about these objections is fine, but it's more instructional to the students using Conservapedia, and a better example of the scientific method in action, to respond to a scientific paper published in a journal through the formal process by which such papers are either defended or corrected. In the end, Lenski's work will either stand up as good science, or any errors will be addressed and the paper's conclusions modified accordingly, which is also good science. Seeing this process in action regarding such a significant paper is a great learning opportunity, and the Conservapedia leadership would be remiss in not standing by their conviction in these objections and submitting them formally to PNAS. --DinsdaleP 12:38, 13 July 2008 (EDT)

PNAS has a letters section available with the online edition. Many journals have a letters section for rebuttals or clarification. Legitimate corrections are welcome. Andy, have you run your list of 'flaws' past any biologists?--Argon 15:36, 13 July 2008 (EDT)

Funny, Argon, how you don't apply your demand of expertise to Lenski himself. What are Lenski's credentials with respect to statistical analysis? Has he even taken and passed an upper-level statistics course of any substance?--Aschlafly 00:45, 14 July 2008 (EDT)
I don't follow. Are you suggesting that Lenski doesn't understand the proper application of the statistical methods used in his paper? If so, I haven't seen a description of which alternate methods you'd employ, let alone any output from such an analysis. Here's a thought: Why don't you substantiate your claims by writing up the work and submitting it as a correction letter to PNAS?--Argon 18:46, 14 July 2008 (EDT)
Argon, if you really "don't follow," then try harder. You insist on credentials by others who criticize Lenski, and yet you do not insist on expertise by Lenski in statistics with respect to his "analysis". Perhaps Lenski should first take and try to pass "Statistics 101" before trying to use a website by its name to draw flawed conclusions.--Aschlafly 22:36, 14 July 2008 (EDT)
Do you really think that Lenski went to the statistics101 web site to learn about statistics and how to apply them? That simply is not the case. The Lenski group knew that they needed to do a Monte Carlo resampling analysis on the results of their replay experiments. In this situation, they were faced with two choices: either code an appropriate program themselves, or utilize one that has already been developed and is readily available to researchers. Since statistics101 had such a program available, they chose the latter option. The statistics101 site was simply the source of the program that the Lenski group used to perform the statistical analysis. If you want to argue against that choice, then you need to examine the source code of the statistics101 package and enumerate why it should not have been used. Gerlach 11:40, 15 July 2008 (EDT)
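For readers unfamiliar with the Monte Carlo resampling idea Gerlach describes, here is a toy sketch in Python. It is not the statistics101 program, and every number in it is invented: hypothetical replay cultures founded from clones of six generations, and hypothetical Cit+ outcomes, just to show how a resampling p-value is formed.

```python
# Toy Monte Carlo resampling test in the spirit described above: given
# replay cultures founded from clones of various generations, ask whether
# the cultures that yielded Cit+ mutants skew later than chance predicts.
# All data here are invented for illustration only.
import random

random.seed(42)  # reproducible resampling

# Hypothetical source generation of each replay culture (6 cultures per
# generation sampled):
replay_generations = (
    [10000] * 6 + [15000] * 6 + [20000] * 6
    + [25000] * 6 + [30000] * 6 + [32000] * 6
)
# Hypothetical source generations of the cultures that produced Cit+ mutants:
mutant_generations = [25000, 30000, 30000, 32000, 32000]

observed_mean = sum(mutant_generations) / len(mutant_generations)

def resample_p(n_trials=100_000):
    """Fraction of null resamples whose mean generation is at least as late
    as the observed mean, when mutants are assigned to cultures at random."""
    hits = 0
    for _ in range(n_trials):
        draw = random.sample(replay_generations, len(mutant_generations))
        if sum(draw) / len(draw) >= observed_mean:
            hits += 1
    return hits / n_trials

p = resample_p()
print(f"resampling p-value: {p:.4f}")
```

The null hypothesis here is that a Cit+ mutant was equally likely to arise in any replay culture regardless of its source generation; a small p-value means the later-generation skew of the (invented) mutants would rarely occur by that kind of chance.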
I don't insist on credentials per se. I *recommend* developing a working understanding of the experiments, or a willingness to do the necessary background research to get the details straight, before heading off in possibly the wrong direction. It saves a lot of thrashing about. It's perfectly OK to raise questions, but before leveling accusations it might be nice to do that privately and discuss it with others who can provide useful feedback. Just my 2 cents.--Argon 19:48, 15 July 2008 (EDT)
Mr. Schlafly, does your response above, "I'm more interested in getting the truth out here than trying to persuade someone in dying media like print or television", mean that you will not be submitting a response with these identified flaws directly to PNAS? As I mentioned above, it takes nothing away from the value of posting these statements here on CP to also submit them to PNAS, but the proper way to prompt a journal to review and correct an article is through a direct response, not publication on an unrelated website like CP. It's not proper for anyone but the author(s) of the objections in this article to make that submission, so I'm hoping they step up with the conviction of their beliefs and respond to PNAS directly. Thanks. --DinsdaleP 12:32, 14 July 2008 (EDT)
By now Lenski has probably seen the flaws identified on the content page here. What's "proper" is for him to correct his own paper in PNAS. The criticism will likely continue as long as he declines to do so. If anyone here would like to educate the PNAS editors about the flaws, then please feel free to do so.--Aschlafly 22:36, 14 July 2008 (EDT)
I'll be glad to submit these items to PNAS on behalf of Conservapedia using the "Letters" forum described by Stitch75 above. Should I cite you as the author of this analysis, and is there an email you'd prefer me to include instead of my personal one for any PNAS response? I'll put up a draft of the Letters submission here for your approval before sending anything out. Thanks. --DinsdaleP 18:39, 15 July 2008 (EDT)
It would have a very strange scientific taste should somebody other than Mr. Schlafly be the first author of the letter (however, if you insist on that, I suggest "personal communication" as the right way of citing the work here). It is his turn to stand up for his claims in this way (by publishing in a forum which he does not own). I personally share the doubts of many here about his argumentation (and his understanding of the experiments), but I am sure only more of his dismissive comments will follow. However, I see that he comes up with a clear alternate hypothesis (contamination), so he is free to show that it is more likely (calculations, please) and to perform Monte Carlo simulations on it (for a person complaining that statistics101 is too simple, that should be no difficult task). As far as I understand and see the data, Lenski and coworkers did their best to exclude this hypothesis; however, I did not run my own simulations (and I won't, because I think nothing will come of it - furthermore, Mr. Schlafly's personal style of communication, "you have to try harder," is not the style I am used to being addressed in by people whose qualification in a subject is apparently no more than mine). So running the simulation and evaluating his own hypothesis using a valid statistical method is now Mr. Schlafly's job - if he comes up with a decent calculation showing his hypothesis is more likely, the letter would surely be accepted and Lenski would have to react. If Mr. Schlafly is not the first author of the letter, he could evade the critics afterwards by saying that he was misunderstood, which means somebody else takes the risk of submitting the letter, but in case of success Mr. Schlafly would take the glory. --Stitch75 20:10, 15 July 2008 (EDT)
I would prefer that Mr. Schlafly submit his objections to PNAS directly as well, but since he has declined to do so the next best response is to submit it "on behalf of Conservapedia", which he has authorized above. I'll post the draft letter tomorrow, and it will credit him as the author unless I'm asked to include other individuals who contributed to the analysis. --DinsdaleP 22:05, 15 July 2008 (EDT)
I hope ASchlafly does offer a rebuttal of Lenski's flawed "study." However, if he does not, I am all for DinsdaleP's suggestion. I look forward to reading your draft. If you take all the objections that Conservapedians have raised to Lenski's paper, I do not see how PNAS can possibly object.--JBoley 11:28, 16 July 2008 (EDT)
They could object because Conservapedia's criticisms are obviously false. If this page hasn't made that clear, I don't know what could.Gerlach 13:44, 16 July 2008 (EDT)
They would have to prove that Conservapedia's criticisms are false, and they cannot do that. You sound like a Lenski supporter. You need to open your mind to the truth.--JBoley 13:59, 16 July 2008 (EDT)
Gerlach, the most obvious lesson from this page is how a few, yourself included, seem determined to defend a flawed paper no matter what the truth brings. You have free will to reject whatever you want, but you're only hurting yourself by that approach. People who do open their minds are amazed by the insights and happiness it brings.--Aschlafly 21:54, 16 July 2008 (EDT)
This is such a strange response, Andy. The attitude that you and several of your defenders seem to have is that the criticisms you have made of Lenski's paper cannot be rebutted. However, the fact that you can level criticism at something doesn't make that criticism true. That is, you may be wrong. And that is the case here. Others and I have pointed out that your alleged flaws in Lenski's paper are incorrect, and based on misunderstanding it or, even worse, simply not reading it carefully. There has been little to no substantive response to our detailed rebuttals. Replies from you or other defenders of the Conservapedia article amount to little more than brazen declarations that we are wrong or "nonresponsive," and that the criticisms in the original article remain unscathed. But for people who claim to have "the truth" on their side, this is simply baffling. If the veracity of your claims against Lenski's paper is so obvious, then it should be an easy task to provide in-depth responses to the points that we have been raising against your article. I'm not above criticizing scientific papers. Bad papers get published frequently, after all. However, Lenski's paper does not appear to be one of those. Notice, though, that I am not claiming it to be perfect; no paper is. That said, whatever flaws the paper may have, those presented in the Conservapedia article are not among them. Gerlach 11:43, 17 July 2008 (EDT)
Gerlach, now you seem to admit that the Lenski paper may be flawed, but that Conservapedia has not identified any of those flaws! With all due respect, you seem to have taken closed-mindedness to new heights.
Lenski claims there is a mutation rate, yet his presented data show that the number of mutations do not scale with sample size. His presented data disprove both of his hypotheses.--Aschlafly 13:27, 17 July 2008 (EDT)
First, we need to clarify what we mean by "flaw". Are we talking about serious methodological flaws or errors in interpretation that fatally undermine the paper's conclusion? If so, I fail to see any such flaws. Of course, any honest scientist recognizes that any work at the frontier of science has the potential to be wrong. It may be that future work shows Lenski to be wrong. But let's not confuse this possibility with actuality: saying that something might be wrong is not the same as saying that it is. On the other hand, by "flaw" we may mean minor experimental details that could have been better. In this case, the paper is flawed, and Lenski himself admits this in the supporting information. But every paper is flawed in this manner, and I doubt you'll find any investigator who wouldn't say that they wish they had done some things differently during the course of their research. As an example of what I'm talking about, the Lenski group's statistical analysis would have been improved if they could have accounted for the evolution of increased cell size (and, therefore, decreased cell density) in later generations. As it stands, their analysis underestimates the potentiation effect in these generations because replays of later generations involved fewer cells. But this flaw, and other flaws of this type, do not undermine the conclusion of the paper.
Regarding the mutation rate argument, see the earlier discussion on this page. The third experiment was significantly different from the first two. The experimental scheme was different, it was performed at a different time and under different conditions, and it utilized, for the most part, different clones than the first two. And Lenski doesn't just claim there is a mutation rate in potentiated cells, he actually measures it. But frankly, I don't understand your argumentation here. It seems that you are trying to suggest that all the Cit+ mutants isolated were the result of contamination. This clearly isn't the case, for several reasons. However, this isn't the place for this particular discussion, as there is already such a discussion elsewhere on this page. I still haven't seen any adequate response to our points against the Conservapedia article, and I don't think I am closed-minded for expecting such a response. Gerlach 15:09, 17 July 2008 (EDT)
Gerlach, look at how many words you used to sidestep my simple explanation of a flaw in Lenski's work: "Lenski claims there is a mutation rate, yet his presented data show that the number of mutations do not scale with sample size. His presented data disprove both of his hypotheses." If the third experiment of Lenski's was independently flawed as implied by your response, then that does not help your defense. Note that in Lenski's second experiment the mutations also failed to scale with sample size.--Aschlafly 18:26, 17 July 2008 (EDT)
I think there is a misunderstanding. Blount et al. did not say that the absolute mutation rate was constant for the generation of Cit+. After all, they found that variations in culture conditions had some effect. The replay experiment was performed to determine if clones from later cultures were more or less likely to give rise to Cit+ cells than clones from earlier generations. The authors' hypothesis was that a potentiating mutation was required before the Cit+ mutations could arise. If that were the case, then clones from later generations would be more likely to produce Cit+ cells. If the Cit+ capability were the result of an extremely rare, single mutation, then clones from any generation would be equally likely to produce Cit+ cells. The hypothesis is that in a particular experiment, the relative probability of generating a Cit+ mutant would be greater with clones from later generations. Absolute mutation rates (which appear to be Andy's concern) may be contingent on the growth conditions, which differed between the three replay experiments; but within any particular set of conditions, one might expect the relationship between Cit+ recovery and the generations from which the starting clones were derived to still hold.
From page 7902 of the journal article:
"According to the rare-mutation hypothesis, Cit+ variants should evolve at the same low rate regardless of the generation of origin of the clone with which a replay started. By contrast, the historical-contingency hypothesis predicts that the mutation rate to Cit+ should increase after some potentiating genetic background has evolved. Thus, Cit+ variants should re-evolve more often in the replays using clones sampled from later generations of the Ara-3 population."
From the abstract:
"The long-delayed and unique evolution of this function might indicate the involvement of some extremely rare mutation. Alternately, it may involve an ordinary mutation, but one whose physical occurrence or phenotypic expression is contingent on prior mutations in that population. We tested these hypotheses in experiments that ‘‘replayed’’ evolution from different points in that population’s history. We observed no Cit+ mutants among 8.4 x 10^12 ancestral cells, nor among 9 x 10^12 cells from 60 clones sampled in the first 15,000 generations. However, we observed a significantly greater tendency for later clones to evolve Cit+, indicating that some potentiating mutation arose by 20,000 generations. This potentiating change increased the mutation rate to Cit+ but did not cause generalized hypermutability."
What the authors found was that the later generations really did produce more Cit+ mutants than you'd expect if the Cit+ mutation was instead randomly distributed (or, as Andy has claimed and not yet retracted, contaminated by Cit+ cells). Yes, the absolute rates didn't scale across the three different conditions but within each experiment it is clear that the Cit+ mutants arose from cells taken at later generations. Yes, there were differences in the absolute rates under different conditions but that doesn't mean the results and conclusions about potentiated clones are wrong.
As for calculations of mutation rates: Blount et al. performed additional fluctuation experiments (journal pg. 7903) in an attempt to estimate the relative effect of the potentiating mutation and to calculate a rough estimate of the mutation rates. Keep in mind, those rates are referenced to growth under the specific conditions used in that particular experiment and were used to provide ballpark estimates for comparison to other classes of known mutations. As for the second experiment not 'scaling' (with the first?), I wouldn't expect that. The first involved clones grown in continuous liquid subcultures and both the total number of generations, cells/generation and growth conditions (e.g. liquid with nutrient replenishment vs. solid agar) are very different.--Argon 19:20, 17 July 2008 (EDT)
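The distinction Argon draws - absolute rates varying with growth conditions while the within-experiment ordering still favors later-generation clones - can be illustrated with a toy simulation. This is only a hedged sketch: all rates, cell counts, and generation numbers below are invented for illustration and are not taken from Blount et al.

```python
import random

def replay(clone_gens, cells_per_replay, base_rate,
           potentiated_rate, potentiation_gen, replicates=20):
    """Toy replay experiment: for each starting-clone generation, count how
    many replicate replays yield at least one Cit+ mutant. All numbers are
    illustrative, not the paper's data."""
    hits = {}
    for gen in clone_gens:
        rate = potentiated_rate if gen >= potentiation_gen else base_rate
        # Probability of at least one Cit+ mutant among n plated cells
        p_any = 1 - (1 - rate) ** cells_per_replay
        hits[gen] = sum(random.random() < p_any for _ in range(replicates))
    return hits

random.seed(42)
gens = [5000, 10000, 25000, 30000]
# Two "conditions" with very different absolute per-cell rates...
run_a = replay(gens, 10**6, 1e-12, 1e-5, 20000)
run_b = replay(gens, 10**7, 1e-13, 1e-6, 20000)
# ...yet in both runs, clones sampled after the (hypothetical) potentiating
# mutation at generation 20,000 dominate the Cit+ yield.
```

The point of the sketch is that the absolute yields of the two runs need not "scale" with each other at all, while the qualitative signature of historical contingency - later clones re-evolving Cit+ more often - appears in each run separately.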
I have not "sidestepped" your point, Andy. Your claim is that the number of Cit+ mutants obtained did not scale with sample size in the third experiment. In response, I reiterated what Argon and I have said previously, specifically that there are differences between the third and second experiments that make a straight comparison between the two inappropriate. That said, if anyone is sidestepping the issue, it is you. Argon and I have addressed your objection previously, but your only response has been to stubbornly restate the original claim. If you don't agree with our statements, then you must explicate why. Additionally, I did not say or imply in any way that the third experiment was flawed. I said that it was significantly different from the second, but this is not the same as saying it was flawed. Argon has provided a response to the scaling of the second experiment to the first. Gerlach 20:21, 17 July 2008 (EDT)
In response to earlier postings about drafting and submitting a letter to PNAS, I'm for it and would be happy to contribute. In response to a comment above, I don't want any "glory" and learned a long time ago that nobody gets credit or money for telling the truth. More often those who speak the truth are reviled and insulted, but mockery doesn't bother the truth as much as it bothers falsehoods.--Aschlafly 22:01, 16 July 2008 (EDT)
(removed false and baseless claim by Argon about sponsorship of this site)

Draft of PNAS Letters Response from Conservapedia

Apologies for the delay in following up on this - I've spent the past few days attending to family priorities, and this is my first CP-related priority now that I have time at my PC again.

Since the guidelines for submitting PNAS Letters restrict submissions to 250 words, the following is the draft submission I'd like to send, pending Andy Schlafly's approval:

( Note - The sole copy of the draft has been relocated to Letter to PNAS so only one copy is tracked and referred to in the submission. Please apply further revisions there, thanks.)

Mr. Schlafly, please let me know if this is acceptable, and apply any revisions as you see fit, thanks. --DinsdaleP 10:38, 21 July 2008 (EDT)

It's an excellent draft, DinsdaleP. I made a few minor revisions above. After others improve this, then I'll plan on sending it to PNAS later this week. —The preceding unsigned comment was added by Aschlafly (talk)
I like it. I'll sign my name to it when the time comes.--DamianJohn 09:35, 22 July 2008 (EDT)


Thanks for the feedback - When applying changes, please keep in mind that the "Text" section in the final version needs to be 250 words or less. If there are important points to add that would exceed this limit, they could be added to the main Flaws in Richard Lenski Study article instead since PNAS is being asked to respond to the full list there, and not just the summary. --DinsdaleP 09:47, 22 July 2008 (EDT)

I think if I had presented a draft of that quality to my supervisor, I would not have reached the door of his office alive and in one piece. It starts with the fact that the correct citation of the article is missing. Please use the appropriate form, including journal number and page. Please have a look at other PNAS Letters. Restate the central issue you criticize in the first sentence, then explicitly describe what your claim about that same issue is, and state by what method you come to your conclusion. Keep a neutral tone. Don't make requests. It is obvious that the original author should respond (please look at PNAS for examples of responses, which are published at the same location). Please fill in your numbers and precise arguments at the points where I left the dots in the following suggestion. (Please note that none of this is my opinion; I have just tried to rephrase your opinions in a way that gives them the chance to be exposed to a broader view. I skipped the details, because I will not rephrase your arguments, just the structure):
Recently ...... inferred from their experiments (1) that ...... . We analyzed the statistical analysis in terms of ..... and conclude that several variables do not scale as .... . Using hypothesis tests under such circumstances is, in our opinion, ...... , and we do not understand how the authors of the original publication ..... their results.
The replay experiments yield an ..... scaling with .... . We do not find a consistent value of ..... between the experiments. Furthermore, the statistical deviation due to ...... in each sample set does not allow one to infer ..... with sufficient precision. This lack of scaling makes, in our opinion, a constant or random source of contamination a likely explanation for a random observation of the ...... dependence of the mutation rate claimed in the paper. The following calculation supports this hypothesis: ....... .
Furthermore we point out that Fig. ... contains a serious disagreement with .....: while the data would suggest ..... from gen. ..... the figure suggests .....
We find that the material cited in the original article (2)...(n) about the same long-term experiment does not describe the following procedures and experimental constants in a way accessible to us: handling of ..., contamination rates of ...., and ..... We would kindly ask the authors to clarify these issues. --Stitch75 12:48, 22 July 2008 (EDT)
No offense taken. I have no experience with these types of submissions, and would appreciate it if you could restructure the submission to improve its quality while adhering to the 250-word limit. As I suggested above, it makes the most sense to incorporate these revisions into the main page for this article, where length is not an issue. --DinsdaleP 12:52, 22 July 2008 (EDT)
I realized that you obviously do not have much experience with this; getting the right tone for a scientific publication is hard. I have had to try it quite a few times at conferences, and I still don't always get it right - and from what you said, you seem to be a student. Sadly, it is against my conviction to rephrase the original arguments in the right way, because it would make me a co-author of argumentation I strongly object to. In case you did not realize it, helping here to get the structure right doesn't mean I agree. Actually, the two reasons I would like to see it published are, first, that the (wrong) idea that scientific journals do not accept criticism could then obviously be put aside, and second, that I would like to see the needed scientific rigor applied to the arguments presented here, because this would put the discussion onto a scientific basis. Quite frankly, I am a liberal by the standards of this site. But I believe the discussion must be carried out with all respect, to define the borders of science. The more effectively the discussion is carried out, the better the outcome will be. I am willing to listen, as I have proven here, even while being treated by people like Mr. Schlafly as if I were one of his students, while evaluating his qualifications in the natural sciences quickly shows that I could more likely supervise him in the issues he discusses here (which is something he has proven all along). Regarding that, I am close to giving up, but nevertheless I have seen that a lot of conservatives actually are willing to lead this discussion in a scientific way, which gives me hope. I recommend you not fight a fight where you don't understand the arguments. Don't pick up arguments from others. If you cannot fill in the missing words, numbers, and arguments in my text, I can't help you. I see what Mr. Schlafly believes; however, I do not know how to get the calculation right to support his hypothesis (a random or constant mutation rate) - and this is most likely not because of a lack of statistical knowledge. The only way I would see is to use the crudest form of descriptive statistics and aggregate the data in a very specific way, while ignoring the structure of the experiment - and ignoring the fact that the authors pointed out the problems they saw and addressed them. So I can only give you a few hints (maybe I can turn these into a short contribution to Conservapedia; I am just thinking about the title):
If you claim something is wrong, put your opposing claim in a positive formulation, with a supporting calculation, in contrast. Even if the calculation is simple, it is very important to provide it. E.g., we estimate a rate of x+-y per z for dataset N, in which we aggregated generations a, b, c, d, etc. In the end, you should either prove a mathematical mistake (which was not done) or show your hypothesis is more likely.
Don't be rude. You are not the referee, and you are not a member of a committee examining scientific misbehaviour. Don't act like one (and even referees usually have a friendlier tone). Don't act like a personal enemy either. Don't ask for retraction of the article. It is up to the author to draw the conclusion to respond or retract. This happens more often than you may think as a response to a criticism (actually, it's fun to read the "reply sections" of scientific journals - sometimes you find things like: "yes, the commenter was right, we copied the paper and retract it"). And you are never requesting; you are kindly asking. Everybody understands that "kindly asking" does not really mean "kindly asking" in this context.
Always give full and specific citations which back your claims. Give them in the form required by the specific journal. General citations like "materials on his website" will get your text thrown out in the editorial screening (because you cannot expect that somebody will read through all the information to find something backing your claim - this is your job). See [] for specific styles. You may even reference a page/paragraph/equation/figure number to point the reader to what you mean (for papers longer than 4 pages I usually do that).
Run a style checker over your text to eliminate common style mishaps.
Most important: go to your university library. Take the time to just read a few PNAS Letters and replies, and the original articles (try to find some with an easily understandable subject). Understanding how these are written and how authors usually reply will help you get your own right. You are writing against somebody who has twenty years of experience publishing in a field of the natural sciences. You seem to have little experience, and Andrew Schlafly, honestly, neither. This is an uphill battle and an unfair game anyway. Make sure you maximize your chances by understanding the rules of the game.
Focus on a single claim you are sure about. It is better to present one claim well than two claims badly.
Good luck. You will need it. --Stitch75 14:22, 22 July 2008 (EDT)
"Stitch75", you seem to think that the truth depends on whether PNAS accepts it. It doesn't. Lenski's paper is badly flawed regardless of whether he admits it, PNAS admits it, or you admit it. That's the beauty of the truth: it doesn't require admission by anyone. I'm fine with Lenski and PNAS refusing to admit the flaws in their paper. After all, if they really cared about quality then I doubt they would have published their flawed paper after merely 14 days or less of peer review.--Aschlafly 15:38, 22 July 2008 (EDT)
Maybe it's because English is not my native language, but somehow you seem not to understand what I am saying. I will try to rephrase it so that there is no room for misunderstanding: I can never talk about truth, which is a religious thing. I talk about science and observable reality. Primarily, I can only tell you what it takes for your thoughts to at least be looked at. If rejection happens due to formal reasons (like using unsuitable formulations, etc.), it is not good to draw conclusions about the content or the scientific community. The strong formalism is there to save time, and in turn the taxpayer's money. I screen the titles of approx. 70 articles per day (10 minutes), namely everything that comes in on the preprint servers on my subject. Of those approx. 70 titles, 10 are interesting enough to read the abstract (6 minutes), 2 are interesting enough to look at the summary (4 minutes), one every two days is interesting enough to read Section 2 (10 minutes), skipping the introduction, and one a week is interesting enough to print out and read (2-4 hours). Something which does not follow the form ends up being thrown out of my RSS feed quite quickly. The editors of the journals know that, and in a refereed journal such things may even be thrown out by the editor (and not the referee). Claiming, upon not getting a response published, that "the article is still wrong, no matter what others say, and I am right anyway" doesn't sound very scientific to me. From everything you have said here, you are unwilling to learn, and you don't expect a response. If you formulate your comment in that way, then skip it. Publishing a letter should stimulate a discussion; if, judging from the style, it is not meant to, it will not be accepted. Moreover, according to everything I have seen here, Mr. Schlafly, you seem to have no clue what you are talking about.
Please at least consider for once that you could be wrong, and try to follow the statistical arguments in the paper - and build up your own on a real calculation (and show the numbers you get). --Stitch75 20:55, 22 July 2008 (EDT)
Stitch75 - you are too far inside the belly of the beast to understand what ASchlafly is talking about. You have probably spent your (young, naive?) life as a scientist, or inside the sciences, and as such you are blinded and cannot think with real, free logic. If you look at ASchlafly's arguments with an open mind, and not the blinkered mentality you seem to want to perpetuate, you'll see he's got many excellent points, and I'm convinced that the PNAS will ask Lenski to retract major conclusions of his obviously flawed study once ASchlafly submits this letter. Try to open your mind more to other ways of thinking and you'll see the truth for what it is. RobCross 21:01, 22 July 2008 (EDT)
Stitch75 just went far out of his way to be helpful, and yet you still continue to respond with hostility and a self-righteous attitude. You explicitly stated that no matter what response you get, you won't accept the conclusions of the paper. That is your prerogative, but you are directly stating that their replies, other than a retraction, will be irrelevant and/or wrong, despite the many decades of combined practical experience of the authors compared with your complete lack of such. So why is it again that you're even bothering? Several contributors to this forum and others related to it on this website have addressed your questions in great detail- repeatedly- and you continue to make vague, unspecific accusations. As Stitch75 discussed, something like "table X is wrong because of figure Y" is nonproductive. Furthermore, if you want to have PNAS readers listen to you, your flaws should be real. For instance, even now your fifth "flaw" is that the statistical results of the third experiment are "obscured" and "not defin[ed]... in the traditional way". Well, the p-value for the third experiment is 0.0823. It took me literally four seconds of flipping through the PDF to find it. It's even in a nice table to make it easy to find. How is that obscure? (I might add that considering P<0.05 significant and P=0.0823 not statistically significant is completely arbitrary. If the observed pattern wasn't at all what was expected, the P value would be roughly around 0.8-1.0 or so. So 0.0823 does suggest that the underlying idea is correct in exactly the same way that P=0.05 would. That is a fundamental concept in basic statistics.) The final problem is that your arguments have drifted from your original concern- whether or not a new trait evolved- since only one of your listed "flaws" actually addresses this issue. The others are tangential and relate to interpretation and mechanisms.
Even if 6 of your 7 flaws were so, the core finding of the paper- around which all else is based- would still hold: in earlier generations, there weren't Cit+ E. coli, in later generations, there were. The mathematical analyses and timeline of occurrence don't change this qualitative, directly observable fact.
I do encourage you to refine the letter and submit it to PNAS- I really do- but Stitch75 is very correct in saying that if you submit it in its current shoddy form it will be laughed at and promptly ignored. You would do well to take the advice of people who actually work in the field when it comes to considering how something will be received. Also, your comment about peer review above indicates that you still have not learned anything at all about how it actually works, despite the long discussions previously.
As you said, the truth doesn't require admission. But it does require empirical support, which Lenski provided in ample quantities- but you still have yet to provide anything even beginning to resemble scientific rigor. That is why Stitch75 wrote that response- to help you improve your letter from that condition. Kallium 21:36, 22 July 2008 (EDT)
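For readers following the p-value back-and-forth above: a one-sided p-value is simply the probability, under the null hypothesis, of data at least as extreme as what was observed. A minimal sketch in Python makes the idea concrete; the counts here are hypothetical, chosen only for illustration, and are not the paper's data.

```python
from math import comb

def binom_p_value(k, n, p_null):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p_null), i.e. the
    chance of seeing at least k successes if the null rate were p_null."""
    return sum(comb(n, i) * p_null**i * (1 - p_null)**(n - i)
               for i in range(k, n + 1))

# Hypothetical example: 4 of 10 later-generation replays yield Cit+,
# against a null expectation of a 10% per-replay chance.
p = binom_p_value(4, 10, 0.10)  # ≈ 0.013: unlikely under the null
```

A small p-value says the observed result would be surprising if the null hypothesis were true; the threshold at which one calls it "significant" (0.05, 0.01, or otherwise) is a convention, which is the point of contention in the comments above.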
You have it backwards, "Kallium". Lenski has the burden of providing "scientific rigor," and his paper falls short. In fact, the data presented in the paper tend to disprove his hypothesis about a mutation rate, as the mutations identified in his paper do not scale in a meaningful way with sample size.--Aschlafly 23:11, 22 July 2008 (EDT)
I honestly don't know how to make this any more clear. Yes, Lenski does have the burden of providing scientific rigor, as any scientist does. I never said otherwise. However, to claim flaws in any study requires that they be presented with equal rigor. This is just rewording the last statement of my previous post. Simply making vague, unspecified and unsubstantiated hand-waving claims won't get you anywhere. Consider an analogous situation in an appeals court: someone doesn't like the decision of a lower court, so they go to a higher level to present their objections. Now, to do this requires legal rigor to the same degree with which the decision was originally made. The defendant would need to address specifics in the decision and explain exactly why they were incorrectly interpreted. If, however, that defendant were to waltz into the higher court and simply read quotes from the decision and say "that's illogical" or "those statements are self-contradictory" or "that's unfair", without going into detail to explain their reasoning and without addressing the previous cases cited in the decision or providing any further legal precedents, they would utterly fail to make their case and likely be chastised by the judge for wasting the court's time. Now replace the court references with their respective scientific analogs and you have exactly what is going on here. That is why both Stitch75 and I have been giving you this advice- to help you present your case. Lastly, you haven't developed your arguments since you first posted them but keep relisting them. You have yet to show in detail how the data disprove the hypothesis (which you also vaguely define), and as others have explained repeatedly, mutation rates are only expected to scale with sample size under identical conditions, which were not used. That's why your letter needs improvement- it simply won't be taken seriously if it shows a flawed understanding of basic experimental biology. 
You can't pass rigor completely off of your shoulders; you have to make your case or it will get thrown out of court. Kallium 12:12, 23 July 2008 (EDT)
You are correct that the truth does not require acceptance. Kallium is not quite correct in saying that it requires empirical support. It does not; the truth is the truth is the truth, and that’s an end to it. The search for truth, however, requires certain actions and, in this case, Stitch75’s advice should be welcome. Following Stitch75’s advice will not alter the truth one little bit, but will aid the search for it. According to you, the truth is that this was a flawed paper that was published, demonstrating flaws in the peer review process. It gets science nowhere, the public scrutiny of science nowhere, and the use of public money nowhere for this simply to be the truth. The scientific community would need to see your objections in the "correct" format before it would do anything about it.
You may say that the scientific community should be doing this already. (We might also question whether the format stipulated by the scientific community really is "correct"). Whether they should or should not is moot: the simple fact is that they aren’t and are unlikely to start anytime soon without a “correctly” prepared objection. Stitch75’s advice is designed to help you prepare the objection “correctly” and, in doing so, aid the search for truth.--YoungA 09:26, 24 July 2008 (EDT)
YoungA, I think we're thinking along the same lines, just using different vocabulary. As you said, reality is what reality is (in that sense of the word "truth"- I changed words to avoid the usual connotation of the vague philosophical ideal, which is itself unproductive). My meaning was the same as yours, referring to "truth" requiring empirical support in the sense that any given claim to it must be backed up by real evidence. But thanks for explaining the situation from a different angle. Kallium 15:55, 24 July 2008 (EDT)

I tend to agree with you, Andy. I say we get this thing sent to PNAS and see what happens. If they refuse to answer it, then we know what that means, and if they thumb their noses at you, that's fine too. However, I have a little more faith than you in the system and I hold out hope that they'll respond to our queries. Anyway, let's get this thing sent. --DamianJohn 15:50, 22 July 2008 (EDT)


I'd like to thank Stitch75, because he took the time to explain his points constructively, and I learned something from them. (I'm actually an IT Specialist in my 40's, not a full-time student, but learning is a never-ending process and I appreciated the lesson). I consider myself bound by the same ethical constraints on editing that he mentioned, because these objections to Lenski's work are Mr. Schlafly's, not my own. I tend to believe that the Lenski experiment was properly executed, but I'm a strong believer in the scientific process, and Mr. Schlafly's objections deserve a fair hearing whether one believes in them or not. My contribution is to help in the process of getting these objections to the proper forum, namely PNAS, and leaving the response up to them. --DinsdaleP 16:20, 22 July 2008 (EDT)

While some here are willing to discuss, Mr. Schlafly obviously seems not to be. This will be the last thing I say before I see a calculation by Mr. Schlafly; everything else has been said already. I point out that submitting a Letter to any journal will involve the editor-in-chief exactly if the internal handling process of the journal involves this. So it is arrogant to prescribe to the journal who should read the submission. Furthermore, I think that if you send this with a cc to "watchdog" groups, you should read the publication guidelines of PNAS: it might collide with those guidelines to publish the contribution somewhere else at the same time. This most likely holds for articles, and maybe for comments/letters as well. To put congressmen on the cc is, in my opinion, a waste of taxpayers' money. It would be much better to wait until you have something in your hands. Right now you haven't. The only rational reason for congressmen in the cc is to hope for an intimidating effect on somebody. Be assured, if you have been in science long enough, you are not scared easily. If this is meant to intimidate the editors of PNAS, be assured that - if they are in the mood - they will pin your Letter to their door to have something to laugh at. And as for New Scientist, I can assure you, nothing will happen before the reaction of PNAS; based on that, you can write one more comment on the "New Scientist" article. --Stitch75 12:15, 24 July 2008 (EDT)

Listen, Stitch75- Right now, there are some pretty serious flaws in the Lenski study. Conservapedia has identified enough of them to put the entire conclusion in doubt, and even if one of our arguments turns out not to be valid, I am sure that because we raised so many valid points, at least part of the paper will need to be reconsidered. You claim to have so much respect for the scientific process and scientific work, but you contradict this because you yourself refuse to allow a piece of scientific work to be legitimately challenged. If you truly believed that the work was infallible, you wouldn't mind us scrutinizing it. Please stop telling us how to follow a good scientific method, when as you know the most important part is checking your work. We are contributing to the scientific community by revising a conclusion that is fatally flawed. Try to open your mind a bit more.--RoyS 15:26, 24 July 2008 (EDT)
By using "fatally flawed" in an a priori fashion, you contradict your own words regarding opening one's mind. Read more carefully- Stitch75 did not try to keep you from submitting your letter-, but rather tried to help you improve it. It seems you immediately criticize anyone you've decided to disagree with regardless of what they say. As several others here have said, you should submit a letter but with its current form of poor scholarship and misunderstanding of basic microbiology techniques, it won't be taken seriously. You haven't given it a reason to be. And Stitch75 is right about all the CCs: they serve only to undermine your argument and erode your credibility. Unfortunately, with each revision the letter becomes less and less tactful, thus further reducing the likelihood of it being posted by PNAS. Stitch75 gave that detailed advice because he does have "respect for the scientific process and scientific work"- thus helping you develop your letter in a scientifically acceptable format. Also, if you've already decided that you won't agree with any response other than retraction, there's not much point in the effort because that in itself is not a scientific approach. Through this and already calling it "fatally flawed" before any formal scientific response to those claims, you've already made up your minds. Again, submit a letter, really, but if you want them to take you seriously you should reciprocate in due course and not pass judgment before even sending it.Kallium 15:55, 24 July 2008 (EDT)
ASchlafly's statement, "In fact, while the paper states the generation periods for the First (replay) Experiment, it does not disclose the generation periods for the Second and Third Experiments" demonstrates a surprising lack of comprehension of the procedures described for the second and third replay experiments. There were no generation periods for the second and third experiments, because the bacteria in those experiments were not serially cultured. They were plated once onto MC agar (which contains no glucose) and allowed to sit on those same plates for 59 days (replay 2) or 49 days (replay 3).
If you continue your insulting tone, then you'll join others who have been blocked. If you're claiming that the Second and Third replay Experiments did not have new generations during their 59- and 49-day periods, then please clearly say so. Note that Lenski does not state when the Cit+ variants were allegedly observed during those periods.--Aschlafly 23:07, 24 July 2008 (EDT)
The second and third replay experiments did not have 'generations' in the way that the LTEE and the first replay experiment did. Some of the Cit- cells may have divided once or twice before they ran out of stored glucose, but for the vast majority of the 59 or 49 days they were sitting dormant on the plates until they became Cit+, at which point they started to form colonies. Lenski does state that Cit+ mutant colonies were noted between 8 and 28 days after plating, and states that control plates that started out with a mix of Cit+ and Cit- cells showed Cit+ colonies after two to three days. Furthermore, the paper points out that when the new Cit+ cells are replated on MC agar, they also form visible colonies after 2 days - so they are not inherently slow-growing. --Brossa 10:22, 25 July 2008 (EDT)
Similarly, his statement "The same reason that the paper admits an inability to "exclude an earlier origin" (p. 7901) for the Cit+ variants also results in an inability to exclude Cit+ variants from the samples taken after generation 31,000" is also wrong. It is impossible to exclude an earlier origin in a population of millions of cells because it is impractical to plate all of the cells for each generation out so thinly that each and every cell gives rise to a unique colony which can then be tested for the Cit+ trait. For each generation you would have to run thousands of plates to determine exactly when the first Cit+ cell arose. Conversely, if you wish to sample only a few hundred or thousand cells, rather than multiple millions, it is straightforward to plate them out at such a great dilution that every cell in the sample gives rise to a unique colony, separated in space from all the others, which can then be tested individually for the traits you wish to identify. Since each colony arises from a single cell in the sample, after generation 31,000 you will have colonies that are entirely Cit+ and those that are entirely Cit-. His claim that "there is no scientific basis for including these Cit+ populations in this study, and it only serves to distort the results" is also wrong - there is a basis to include those populations - or to be more specific, the Cit- subset of those populations. At the time the replay experiments were designed, before there was any evidence for a potentiating mutation at 20,000 generations, it was impossible to know which generations were more likely to give rise to the Cit+ trait. Suppose that a potentiating mutation arose at the 25,000th generation and expanded through the population or simply persisted at a low level through the 31,000th generation, by which time it was outcompeted by some other mutation, or was lost through drift. 
It could have been the case that the Cit+ mutation could ONLY have arisen between, say, the 25,000th and 31,500th generations, with no 'potentiated-but-not-Cit+' cells lasting past the 31,500th generation. As it turns out, it appears that the potentiated cells continue to persist in the Cit- population through the 32,500th generation. If you are trying to determine whether the Cit+ trait is contingent or simply random, it absolutely makes sense to include Cit- cells of later generations.
The alleged potentiating mutation must have occurred prior to the 31,500th generation, and it makes no sense to test subsequent generations for the potentiating mutation. Rather, including later generations only distorts the statistical analysis.--Aschlafly 23:07, 24 July 2008 (EDT)
The replay experiments were a test of the hypothesis of evolutionary contingency. The hypothesis would be supported by a pattern of Cit+ mutants that only arise after a certain generation. It would be even more supported by a pattern of Cit+ mutants that arise after a certain generation and then extinguish after a later generation. If you are going to the trouble of running such a massive experiment, it makes perfect sense to include the later generations. It is entirely possible that the generations past a certain point have no ability to generate Cit+ mutants because the potentiating mutation has vanished from the population. In what way, exactly, does the inclusion of post-31,500 generations distort the statistical analysis? If the contingency hypothesis were false, the later generations would be no more likely to form Cit+ mutants than the early generations. If your argument boils down to 'the later generations may have been contaminated with Cit+ cells', remember that there is evidence that this was not the case.--Brossa 10:22, 25 July 2008 (EDT)
His statement "The paper incorrectly combined the Third Experiment with the other two based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance" is meaningless. If ASchlafly objects to the statistical method used, he should recalculate the results using his preferred test and then present both his results and a defense of the alternate statistical method. If you send a letter to PNAS that includes such obvious errors of fact and unsupported conclusions, it will only embarrass ASchlafly and Conservapedia further.--Brossa 16:20, 24 July 2008 (EDT)
You resort to insults when you lack substance. One need not propose an alternative, or a solution, in order to identify a flaw. One may announce that a bridge is defective and should be closed prior to figuring out how it can be fixed, if indeed it can be fixed.--Aschlafly 23:07, 24 July 2008 (EDT)
It is quite true that one does not have to be able to repair a bridge in order to declare it defective. One should, however, be able to defend an assertion of defectiveness with actual data. If you want a bridge to be closed, you can't simply assert that the engineers' math was wrong; you have to show why it was wrong. You can't just claim that the concrete was contaminated; you have to prove it. If you went before the city council demanding that a bridge be condemned because the math was wrong and the concrete contaminated, with no support other than your claim, would you expect to be successful? I stand by my statement that "The paper incorrectly combined the Third Experiment with the other two based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance" has no mathematical meaning and represents an assertion without support.--Brossa 10:22, 25 July 2008 (EDT)
I agree that you don't really need to use an alternative statistical test per se to identify an alleged flaw. However, you do need to meet statistics with statistics and use the method that was used in the paper (the Monte Carlo algorithms, preferably with the same software) to show quantitatively where the error was made and what you think the result should have been. The reason I suggest discussing your result is that in most correspondence of this nature in scientific journals, the discussion is usually something like "Well, the authors calculated this result, but we recalculated and got something different. This is why we think that happened." So while I don't think you need to propose an alternative technique, you should discuss what the result should have been (using the data in Table 1) as that is the standard approach to such concerns. To borrow from your analogy, you don't need to figure out how the bridge can be fixed before announcing it is defective, as you said, but announcing defectiveness isn't enough to close it- you must show that it is. Hope that helps.Kallium 09:52, 25 July 2008 (EDT)
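For what a quantitative recombination might look like: Fisher's method is one textbook way to combine independent p-values from several experiments. This is offered purely as an illustration of the kind of explicit calculation Kallium describes- it is not necessarily the method Blount et al. used (theirs was a resampling approach), and the per-experiment p-values below are invented placeholders, not values from the paper.

```python
import math

# Fisher's method for combining independent p-values- a textbook
# technique, shown only to illustrate what a quantitative recombination
# would look like. These per-experiment p-values are placeholders,
# NOT the values reported by Blount et al.
p_values = [0.01, 0.04, 0.60]  # hypothetical: two significant runs, one not

# Fisher's statistic: X = -2 * sum(ln p_i), distributed as
# chi-squared with 2k degrees of freedom under the null.
X = -2.0 * sum(math.log(p) for p in p_values)
k = len(p_values)  # half the degrees of freedom

def chi2_sf_even(x, half_df):
    """Survival function P(chi2 with 2*half_df df > x); closed form for even df."""
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(half_df))

combined_p = chi2_sf_even(X, k)
print(f"Fisher statistic X = {X:.2f}, combined p = {combined_p:.4f}")
```

Note that even with one clearly insignificant p-value in the mix, the combined result can remain significant- which is why a claim that the combination was done "incorrectly" carries weight only when backed by an explicit recalculation of this kind.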
Dear RoyS, my mind is open enough and I have listened long enough, and I would appreciate it if you would contribute to the discussion instead of accusing me of being closed-minded and acting as if we sat in the pub. If the flaws are so obvious, please present your calculations. Lenski presented his, so unless you show that, using a specific method of calculation, you get other numbers, there is not much of an argument. --Stitch75 20:27, 24 July 2008 (EDT)

Expertise in Statistics

I don’t mind admitting that I have none! The article states that Lenski combined the three trials incorrectly and that he doesn’t make the insignificance of the third clear. Now it may well be very clear to others but, unfortunately, the article doesn’t make clear to people like me what was wrong. Can someone expand on this point? Is anyone able to explain to a layman what Lenski should have done and the conclusions he should have reached? It's all getting a little technical for me. --Bill Dean 12:10, 14 July 2008 (EDT)

OK, here's point 4 in layman's terms: Lenski's hypotheses of a mutation rate imply that a ten-fold increase in sample size should result in a ten-fold increase in mutations. But it doesn't. In fact, a nearly ten-fold increase in sample size results in only a slight increase in mutations in Lenski's data. These data, as presented by Lenski in his paper, suggest (if properly interpreted) that there is no mutation rate at all. Rather, these data are more consistent with occasional contamination, broadly defined.--Aschlafly 23:15, 14 July 2008 (EDT)
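For readers following the scaling argument: under a constant per-cell mutation rate, the expected number of mutants does scale linearly with the number of cells plated, and a simple Poisson calculation quantifies how surprising a given shortfall would be under that assumption. The sketch below is a toy model with an invented rate and invented sample sizes, not Lenski's actual numbers.

```python
import math

# Toy sketch of the scaling argument (invented rate and sample sizes,
# NOT Lenski's data). Under a constant per-cell mutation rate, the
# expected number of Cit+ mutants is rate * cells_plated, so roughly
# ten-fold more cells means a ten-fold larger expected count.
rate = 1e-9          # assumed per-cell Cit+ mutation probability
small_sample = 1e9   # cells plated in the smaller experiment
large_sample = 1e10  # roughly ten-fold more cells

lam_small = rate * small_sample   # expected mutants: about 1
lam_large = rate * large_sample   # expected mutants: about 10

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson(lam) count."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

# How surprising would it be to see only 2 mutants when 10 are expected?
p_shortfall = poisson_cdf(2, lam_large)
print(f"P(X <= 2 | lam = 10) = {p_shortfall:.4f}")
```

Of course, whether the replay experiments are directly comparable (plating density, media batches, incubation time) is itself disputed in this discussion, and a shortfall is only "surprising" to the extent that identical conditions can be assumed.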
Except that the growth pattern of the Cit+ cells in the third replay experiment demonstrated that there were no Cit+ cells in the cultures at the time of plating. Control plates with a mix of Cit- and Cit+ cells were done, which showed rapid development of Cit+ colonies, whereas the experimental plates did not show any Cit+ colonies for at least 8 days and up to 28 days. If you believe that the plates became randomly contaminated during the course of the incubation, rather than at the initial plating, the post-plating contamination should have affected all generations equally, but it did not.--Brossa 12:46, 15 July 2008 (EDT)
Blount et al. discussed the unexpectedly low Cit+ conversion rate for the third replay set in the supplementary document. They have some speculations but don't know why it occurred. They are dealing with low probability events. I don't agree that the 'proper' interpretation is contamination. First, they isolated and tested each Cit+ isolate for the markers and sequences we've discussed on earlier Conservapedia pages. Second, the distribution of Cit+ isolates does not appear to be random: they correlate strongly with the later generations. If undetected contamination were random, it's unlikely the Monte Carlo resampling tests would reject the 'rare-mutation' null hypothesis in favor of the 'potentiated' hypothesis. As Brossa correctly notes, contamination would have affected all generations. Blount et al. write in the paper's supplement: "To facilitate handling and minimize possible confounding variables, we divided this third experiment into 20 blocks of 14 clones each. All of the clones within a block came from different generations, and the single ancestral clone was included in all 20 blocks."--Argon 20:01, 15 July 2008 (EDT)
Good and detailed analysis and conclusion, Argon. I had no time to read everything in that detail, but I think the arguments you present (a constant background would favour the rare-mutation hypothesis) are exactly the ones I had in mind. So let's see how Mr. Schlafly will bend his own Monte Carlo simulations- showing no mutation several thousand times- into accordance with any reasonable background explaining a significant number of counts in *some* samples. --Stitch75 20:20, 15 July 2008 (EDT)
The above comments are non-responsive. The essence of both of Lenski's hypotheses is that there is a fixed, or stepped, mutation rate. But any such rate would be roughly proportional with sample size. Yet his three experiments prove otherwise, which Lenski fails to address in a satisfactory manner. Indeed, Lenski's presentation of his data disproves the very thing he claims to have shown.--Aschlafly 20:40, 15 July 2008 (EDT)
Mr. Schlafly, please answer: Which the of the two hypotheses is, according to your calculations the most likely? --Stitch75 20:47, 15 July 2008 (EDT)
The first replay experiment was run under different conditions - roughly 3700 generations in continuous subcultures. One can't compare population sizes or rates between the first and last two replay experiments. As I noted, Blount et al. acknowledged the possible anomaly of lower than expected mutants in the third run. Still, the second and third experiments were run under different conditions. The plates in the second replay experiment were seeded with fewer cells per plate than in the third experiment. That change can affect the survival rates of cells on plates over time (e.g. different rates of nutrient exhaustion). Given the extremely low mutation rates involved, there is no simple means of normalizing the numbers of mutants recovered for the second and third experiments. One might expect 'roughly' 10x more mutants in the third experiment but that's truly a 'rough' estimate that would be affected by conditions under which cells are exposed. It's certainly a question that remains, and one they may be able to clear up with future research. In any case, the pattern of data does not support Andy's claims of contamination or that the hypotheses of the paper are in error.--Argon 21:27, 15 July 2008 (EDT)
Argon, you have free will and it's clear to me that you are going to exercise it to the point of embracing absurdities. This time you claim that "extremely low mutation rates" would not result in mutations that scale with sample size (of course they would), and that density completely alters mutation rate (if that silly claim were true, then Lenski's experiment was flawed from the get-go). Your belief system is remarkable, but it's not logical.--Aschlafly 23:38, 15 July 2008 (EDT)
Fair enough, but let's be clear: that's my take on the paper, and that of most of the scientists who reviewed the paper, read the paper and commented upon the paper so far. For that matter, Michael Behe didn't call the data flawed, nor did the commentators at Dembski's Uncommon Descent blog, nor did Dr. Georgia Purdom at Answers in Genesis. In contrast, those who think the work is flawed appear to be limited mostly to you, Andy. It is true that 'scaling' was not seen in this case, but as we've seen, the conditions were not quite the same, and it is known that this can have an impact.
However, the fact that the conditions were not identical doesn't detract from the fact that the emergence of Cit+ clones *still* correlated with the sampling of the later generations. What this means is that with three separate experiments and under three sets of conditions, the constant-rate-mutation hypothesis doesn't hold. What the differences between the second and third replay experiments demonstrate is that they were run under different conditions that affected the overall rate of conversion, not that the 'potentiated mutation' hypothesis is wrong. Those are actually distinct questions. Andy, the data simply does not support your claims that the cultures were contaminated (we'd expect a random distribution), or that the 'scaling' variations ruined the experiment. In my opinion, you seem focused on red herrings to the exclusion of evaluating the data in the overall context of the experiment, which demonstrates a correlation of Cit+ clones emerging from samples taken at later generations. Would you care to address that pattern and discuss why your 'contamination hypothesis' doesn't appear to hold up?--Argon 09:29, 16 July 2008 (EDT)
There is nothing absurd in what Argon said. As he mentioned, the third experiment was performed under different conditions and at a different time than the first or second experiments. The key here is that there are plenty of variables, some unknown, that simply aren't controlled for between the experiments. For example, take the fact that it was performed at a different point in time than the first two. The third experiment, then, is almost certainly being performed with different batches of growth media, liquid and solid. Anyone who has spent any significant time growing cells knows that media can vary significantly in growth characteristics between batches, despite the same recipe being used. The reasons for this can be many. Perhaps the balance or pH meter was off calibration one day, or a different bottle of reagent was used. Take an analytical chemistry course, you'll spend plenty of time talking about this. It is true that, for the most part, this sort of variability has minimal impact. But when you're examining something like an extremely rare mutation, or you're trying to make an extremely accurate measurement, intra-lab variability like this can be significant. For an extremely rare mutation like Cit+, which involved at least two additional mutations in potentiated cells, any change in mutation rate can have a significant effect on your ability to obtain mutants. Mutation rate is sensitive to growth conditions, so cells grown in different conditions are likely to experience a different rate of Cit+ mutation.
Regardless of the reason for the lower-than-expected number of Cit+ mutants in the third experiment, however, the Cit+ mutants isolated absolutely did not arise from contamination. This is clear if you read the paper. You're still left, then, with the two hypotheses presented, and the results support historical contingency.Gerlach 09:45, 16 July 2008 (EDT)

Is there anyone here with expertise in statistics who could give an analysis? Fyezall 16:15, 15 July 2008 (EDT)

Hello Fyezall. I rearranged the position of your question to hopefully keep the conversations clearer. It boils down to this: The researchers found that mutant Cit+ strains arose over the course of time in their long term growth experiment. They wanted to learn something about how that strain acquired this ability. They wondered, 'Was this the result of a single, very low frequency mutation or did some other mutation have to precede it in earlier generations, followed by the final mutation(s) that allowed the cells to grow on citrate?'
If the Cit+ change required a single mutation with a low but constant probability over time one would expect Cit+ mutations to be distributed across cells taken from any generation of the experiment. On the other hand, if a 'potentiating' mutation had to arise at some point in the cultures before the final Cit+ mutation could function, then one would expect the probability for Cit- cells to mutate to Cit+ cells would increase with samples of cells taken from later cultures.
Blount ran the experiment and found that Cit+ mutants arose more frequently in cells taken from later generations in the culture. The Monte Carlo resampling tests were used to assess how well the pattern of results fit the models. The statistical significance (smaller P-values mean greater significance) was calculated for each experiment and for the combination of experiments. The numbers suggest the mutants were not randomly distributed across the experimental generations; they tended to appear in cells taken from the later generations. This would argue against the single-mutation, constant, low-probability hypothesis. It appears that a pre-adaptive mutation had to have arisen first, followed by the mutation that finally allowed the cells to utilize citrate. Future work in Lenski's lab will focus on trying to identify the various mutations involved. I hope this brief explanation helps.--Argon 20:25, 15 July 2008 (EDT)
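The resampling logic described above can be sketched in a few lines. Under the null (rare-mutation) hypothesis, the Cit+ mutants should be a random draw from all plated clones regardless of generation; the test asks how often purely random draws look as "late" as the observed ones. All numbers below are invented placeholders, not the data from Blount et al.

```python
import random

# Toy illustration of the resampling test (invented numbers, NOT the
# data from Blount et al.). Each replay clone comes from a known
# generation; a few clones yielded Cit+ mutants.
random.seed(0)

# 20 clones sampled from each of generations 0, 2000, ..., 32000.
source_generations = [g for g in range(0, 32001, 2000) for _ in range(20)]

# Suppose the observed Cit+ mutants came from these (late) generations:
observed_cit_plus = [26000, 28000, 30000, 32000, 32000]

def mean(xs):
    return sum(xs) / len(xs)

obs_stat = mean(observed_cit_plus)  # test statistic: mean source generation

# Null hypothesis: mutants are a random draw from all plated clones,
# regardless of generation. Count how often a random draw is at least
# as "late" as the observed mutants.
n_resamples = 100_000
count_ge = 0
for _ in range(n_resamples):
    draw = random.sample(source_generations, len(observed_cit_plus))
    if mean(draw) >= obs_stat:
        count_ge += 1

p_value = count_ge / n_resamples
print(f"observed mean generation: {obs_stat:.0f}")
print(f"one-sided resampling p-value: {p_value:.5f}")
```

With toy numbers like these, the late-generation clustering sits far out in the tail of the null distribution, which is the sense in which a small P-value "argues against the single-mutation, constant, low-probability hypothesis."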

Social Text

This well known hoax does support Aschlafly's position that the paper was reviewed by PNAS rather quickly, since (apparently) the paper supported the editors'/reviewers' point of view. Perhaps this would have been better placed on the Richard Lenski page rather than here, since this deals only with the 6/10/08 PNAS paper per se and not the review process. Marge 12:45, 16 July 2008 (EDT)

I fail to see the relevance of the Sokal Hoax here. You should also know that Social Text was not a peer-reviewed journal when Sokal submitted his paper. That doesn't excuse the credulity of its editors, but Sokal's paper was not sent out for review. Not to say that it would have mattered, postmodernism is just meaningless word salad.Gerlach 13:39, 16 July 2008 (EDT)

The letter is way too long...

from what I understand, PNAS has a 250 word limit on letters. The above proposed letter is too long. Leonard 00:04, 25 July 2008 (EDT)

Looks like we need to start trimming, then. Intelligent suggestions are welcome.--Aschlafly 09:05, 25 July 2008 (EDT)

Science is self-correcting by nature

Very interesting to see all of this. As a researcher myself, I care very much about the nature of research, research methods, science, and scientific communication. Without spending any time to support or criticize Lenski, I would like to think that some here have noticed that this controversy is potentially no different from critiques of methods and conclusions that have abounded in scientific discourse for many decades, with one significant difference. Usually, methods or conclusions are criticized by other scientists whose own research has shown potential deficiencies in published research. While in some fields, there can be some kind of political or academic/cultural agenda that drives criticisms, such corrections/suggestions are usually purely technical in nature. Like some other contributors to this "talk" page, I have frequently seen letters with corrections or comments about articles or other research reports. Even more satisfying and in the true scientific spirit are subsequent articles or studies (sometimes by the same author who wrote the flawed study) that fill in gaps, correct methodological errors, and generally contribute to the body of knowledge of the given subject. I don't see the latter happening here, though. Science is cumulative. What is the motivation behind the criticism of the Lenski study? Is it by a fellow scientist who cares about methodologies used to investigate this topic? Or by a specialist who is also working in this area and also hopes to advance knowledge? No, it seems rather to be an attempt by non-scientists (who, given the stated philosophy of Conservapedia, likely espouse a belief in Biblical Creationism) to discredit the scientific work of a specialist. This seems to be done, not in order to advance scientific knowledge of evolution (of bacteria in this case), but rather to disprove evolution. 
If the criticisms have any validity, by all means let them be known; however, I would think that sound scientific complaints would carry more weight coming from a peer (a scientist) in the same field, rather than from Biblical Absolutists, who (correct me if I'm wrong here!) believe in scientific progress and methods only insofar as they do not contradict the Bible. Nevertheless, scientific communication is open to all serious participants and if there are sufficiently sound scientific arguments to legitimately correct or carry forward the findings of Lenski, then by all means let such comments contribute to the evolutionary nature that is science! If the arguments aren't sound, then of course they won't contribute to knowledge of the subject and will not warrant serious discussion anywhere. If this journal (PNAS) is like some others, the editors will provide the criticized author with an opportunity to respond and of course an opportunity (that all researchers have) for the authors of the critique to correct or supplement the work with their own research in a peer-reviewed study of their own on the same topic. I wonder, though, if the editor or editorial board will seriously consider the criticisms if 1) they find them to lack substance or 2) believe that the motivation of the critique is not related to scientific inquiry. If the corrections/criticisms are legit, then why not air out the whole thing in the public PNAS forum? The National Academy of Sciences is one of the most highly-regarded academic institutions in the world and its PNAS, being very highly cited, is extremely visible. I always read the "Letters" and other sections and look forward to see what happens with this. 
But I do have a question for Aschlafly: are your criticisms of Lenski's work motivated by a sudden scientific interest in the evolution of Escherichia coli and the communication of accurate research findings to the world, or because Lenski's findings collide with a world view that cannot be contradicted (because it is True) and there must therefore be something suspect about them? If the latter is true, then you have a very long road ahead of you that will involve debunking tens of thousands of articles that support evolution. CPlantin 19:34, 4 August 2008 (EDT)

CPlantin, could you make your point in two or three sentences? Honestly, I wish I had time to read all your stuff, but I don't. I can tell you this: motive is not a basis for disqualifying people from searching for the truth.--Aschlafly 20:39, 4 August 2008 (EDT)
Sigh... lack of time to read has gotten us all into trouble now and again, hasn't it? You'd likely agree that neither liberal nor conservative attitudes alone should disqualify people from searching for the truth. I'd agree. In fact, if there IS some kind of motive besides a genuine desire to see E. coli research advance, then why not state it outright and publicly, for instance in the letter to PNAS? Seek the truth and also be honest and open about your motives! CPlantin 20:56, 4 August 2008 (EDT)
Lenski's claims seemed incorrect to me. The more I looked at it, the more flaws that I saw. Liberals are awfully conspiracy-minded.--Aschlafly 22:23, 4 August 2008 (EDT)

"Not in citation given"

Now that the distractions are out of the way, I remain curious as to your justification. It seems to me that a thorough reading of the discussion bears out the assertion. Lenski refuses to release raw data and cultures to anyone he doesn't consider qualified. Opinions may vary on whether or not this refusal is justifiable, but the refusal itself is a matter of fact. --Benp 10:54, 21 August 2011 (EDT)



Could you please clarify why you feel that the citation provided for Lenski's refusal to release his results is insufficient? While he did provide reasons, the fact remains that he did refuse to release the raw data and accompanying cultures to members of the general public. --Benp 10:01, 21 August 2011 (EDT)

Hi, Benp. Here's why. In the first citation, Lenski actually said that the data for the primary conclusions are in the paper (which is public), and regarding the secondary conclusions, "We will gladly post those additional data on my website." In fact, further data and details about methods are here. He said he wouldn't ship the bacteria to just anyone, and he quoted a Conservapedia user who said that the bacteria are the real data. Maybe that caused the confusion.
Regarding the second citation, I thought it was citing a different paper, but now I see that it was citing the Lenski paper. Indeed, as you said in your edit summary, one wouldn't necessarily expect the paper to go over its flaws. So, I was mistaken: no {{failed verification}} there. However, the sentence wording and placement of the citation are extremely misleading: "the following serious flaws are emerging about his work[2]". This suggests recent and continuing publication of scientific papers critical of Lenski's results, and that the cited paper describes the flaws that follow.
What do you think? Leave the first {{failed verification}} and reword the sentence to avoid the confusion I fell into?
Ben Kovitz 11:20, 21 August 2011 (EDT)
Lenski didn't simply acknowledge the Conservapedia user's comments; he affirmed them. "One of your acolytes, Dr. Richard Paley, actually grasped this point. He does not appear to understand the practice and limitations of science, but at least he realizes that we have the bacteria, and that they provide “the real data that we [that’s you and your gang] need.”" Note that he's acknowledging here that Paley was correct, and that the bacteria DO provide "the real data."
I agree that a minor reword would be useful on the second sentence, but I remain unconvinced that the first "failed verification" tag is warranted. --Benp 11:31, 21 August 2011 (EDT)
Here are a couple more thoughts, then.
In ordinary usage, "data" means recorded representations: numerical measurements, written observations, etc., rather than the actual things that the measurements and observations are about. To illustrate: if you asked a solar researcher for his data, you would expect him to send you some records of observations, not the Sun itself. Calling the bacteria themselves "data" strikes me as figurative language to make a point.
I understand that Dr. Paley's point was: you can't trust data (in the ordinary sense) from Darwinists, because it will contain misrepresentations and omissions designed to protect their theories from refutation; we should instead demand the bacteria themselves in order to examine them without bias. Lenski didn't affirm that point, of course. He said that Paley understood that you don't have to rely on his recorded observations (data), because the bacteria themselves still exist and can still be observed. Lenski said he would share the bacteria, but only with qualified scientists, not the general public.
So, it seems to me extremely misleading to say that Lenski refused to make his data public and to support that claim by citing his letter. I'll leave it to you to correct or clarify the page as you see fit.
Ben Kovitz 13:43, 21 August 2011 (EDT)
I removed the tag, but added a qualifier to indicate that Lenski was willing to release certain data, but didn't fully comply with Mr. Schlafly's request. --Benp 14:26, 21 August 2011 (EDT)

Actually, as I read the second letter, Lenski didn't refuse to release his data to the public. He never addressed that question one way or the other. Instead, he called Mr. Schlafly on the carpet for bad manners. —Ben Kovitz 02:16, 23 August 2011 (EDT)