Difference between revisions of "Talk:Flaws in Richard Lenski Study"

From Conservapedia
Revision as of 15:28, July 16, 2008

Now, is this page a report on other people thinking Lenski's paper is flawed, or is this Aschlafly reporting about himself and putting it in the headline of the front page? And please specify by references which "two other" experiments are referenced, and explain how you can see that the "historical contingency" is not true. --Stitch75 23:22, 12 July 2008 (EDT) And if you think you argue that well, why don't you submit it as a comment on the paper to PNAS? --Stitch75 23:24, 12 July 2008 (EDT)

This article is seriously partisan. I see no outside or third-party analysis of the paper in the references, just a bunch of cites to the article or Conservapedia itself. Wisdom89 00:34, 13 July 2008 (EDT)

Folks, you're in the wrong place if you're more interested in who says what rather than determining the truth itself. A true wiki gets at the substantive truth rather than trying to rely on biased gatekeepers and filters of the truth.

The flaws in the statistical analysis in Lenski's paper are clearly set forth and well-referenced. If you're interested in the truth, then look at the paper and see the flaws yourself. If you're not interested in the truth and think you can distract people's attention from it by using other tactics, then you're wasting your time here.--Aschlafly 00:42, 13 July 2008 (EDT)

I agree with Aschlafly on this: there needs to be some kind of admission or response from Lenski, but little has been forthcoming, and Conservapedia itself has taken it on. Notice how no one, aside from Conservapedia (and I think CreationWiki?), has asked such questions of Lenski? All the magazines etc. have taken his study at face value without actually taking the time to critique his claims. Aside from the "peer reviewers", of course. JJacob 00:47, 13 July 2008 (EDT)

The "peer reviewers" who spent somewhere between 0 and only 14 days looking at the paper, and missed an obvious contradiction between Figure 3 (specifying the "Historical contingency" hypothesis) and Table 1, Third Experiment. The statistical analysis in the paper appears so shoddy to me that I doubt anyone with real statistical knowledge or expertise even reviewed it.--Aschlafly 00:59, 13 July 2008 (EDT)
I have expertise in research and statistics and I'm just not seeing this shoddiness that you make reference to. You are allowed to have your doubts, but we should get a bunch of people familiar with such fields to examine the paper's statistical analysis. Wisdom89 01:01, 13 July 2008 (EDT)
I have the same problem. I hold a Dr. rer. nat. title and did some statistics (although I am no expert on it) and fail to see the "shoddiness". Please help my underdeveloped mind, Mr. Schlafly, and enlighten me. If it is so obvious, it should be a one-liner to formulate. --Stitch75 09:54, 13 July 2008 (EDT)
Please try harder then. I've expanded the explanations a bit also.--Aschlafly 10:42, 13 July 2008 (EDT)
At least now I can recognize your statements more or less clearly. Yet I think the following (cited from the paper) has to be addressed more specifically than you do in order to discredit the statistics used: "We also used the Z-transformation method (49) to combine the probabilities from our three experiments, and the result is extremely significant (P < 0.0001) whether or not the experiments are weighted by the number of independent Cit+ mutants observed in each one." --Stitch75 11:54, 13 July 2008 (EDT)
Just for comparative purposes and a frame of reference, a P value that is less than the significance level of 0.05 is considered significant. Wisdom89 13:10, 13 July 2008 (EDT)
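The Z-transformation (Stouffer's) method quoted from the paper above can be sketched in a few lines. This is a minimal illustration; the three p-values below are invented placeholders, not the paper's actual per-experiment values.

```python
# A minimal sketch of the Z-transformation (Stouffer's) method for
# combining p-values from independent experiments. The inputs are
# illustrative placeholders, not the paper's actual values.
from statistics import NormalDist

def stouffer_combine(p_values):
    """Combine one-sided p-values: convert each to a z-score, sum,
    rescale by sqrt(k), and convert back to a combined p-value."""
    nd = NormalDist()
    z_scores = [nd.inv_cdf(1 - p) for p in p_values]
    z_combined = sum(z_scores) / len(p_values) ** 0.5
    return 1 - nd.cdf(z_combined)

print(stouffer_combine([0.01, 0.04, 0.08]))
```

The combined p-value can be much smaller than any individual input, which is why combining three modest results can yield something like P < 0.0001.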

I wonder, does PNAS allow questions to be raised and asked of the "peer reviewers" themselves? Are we able to find out who they are and what experience they themselves have? Perhaps that is an avenue that we could look at? I apologise in advance if this has already been asked or answered. JJacob 01:07, 13 July 2008 (EDT)

Definitely not. Peer reviewers are anonymous and only known to the editor, for good reason. Having non-anonymous peer reviewers would cause them to be very careful not to step on anybody's toes, to avoid revenge. --Stitch75 09:54, 13 July 2008 (EDT)
No, PNAS probably won't disclose who supposedly did the 14-days-or-less peer review on the Lenski paper. You're right that such disclosure could shed some light on the final product.
"Wisdom89", your claim that you "have expertise" and don't see the flaws only makes me conclude that you don't really have the expertise that you claim. Judging by your silly user name, perhaps you've tried that approach before. We're not fooled by it here.--Aschlafly 01:14, 13 July 2008 (EDT)

(removed silly comment by person who has since been blocked for 90/10 talk violation)

Marginally significant

In #1, that's not inconsistent. The figure makes it clear that, according to the hypothesis, the mutations should also occur earlier than 31,000, but should become more common at that point. 12/17 mutants occurred after 31,000. The point in #3 is not clear: what do you mean by weighting? As for #5, the article states: "Lenski's paper is not clear in explaining how the results of his largest experiment...his paper refers to his largest experiment as 'marginally ... significant,' which serves to obscure its statistical insignificance." Actually, "marginally significant" is clear. It means that the p-value is between .05 and .10 (in this case it's .08, table 2). It's a pretty standard phrase to describe an effect that falls into that range. Murray 13:41, 13 July 2008 (EDT)

Your defense of Lenski per point #1 is contradicted by the paper's own abstract, and by the comments by the other Lenski defender below. In point #3, proper weighting is needed to combine multiple studies. In point #5, you don't cite any authority for the unscientific claim of being "marginally significant."--Aschlafly 20:45, 13 July 2008 (EDT)
Define "proper weighting". Are you referring to how they interpreted the sum of the work or to a specific analysis? You're right, I didn't cite any scientific authority. Here's one: Motulsky, H. (1995). Intuitive Biostatistics, Oxford Univ. Press. Chapter 12. Also, try searching for the phrase in Google Scholar or PubMed, you'll find plenty of uses of it. It's a shorthand way of describing an effect that came close to the arbitrary threshold for statistical significance but did not reach it. Murray 21:55, 13 July 2008 (EDT)
You seem unwilling to accept that Point #1 identifies a clear mistake between the figure and the abstract in the paper, which requires correction by Lenski or PNAS. Given that unwillingness, I doubt it will be productive to discuss the other mistakes further with you. You're right that other usage of the dubious concept of "marginally significant" can be found on the internet, but the first link to such usage in my internet search for it returned the non-rigorous "Intuitive Biostatistics" and the second link returned a criticism of the concept similar to the criticism expressed here. [1] --Aschlafly 00:42, 14 July 2008 (EDT)
It's not a clear mistake in my mind. There does seem to be some confusion or contradiction in terms of the number of generations, but I haven't seen a clear explanation of why this is or is not a mistake. What seems like a contradiction is the abstract's statement that no mutations occurred before 31,500, but it's not clear to me whether I'm misunderstanding that. "I doubt it will [be] productive discussing the other mistakes further with you." Of course you doubt it, because you are unlikely to be willing to concede anything no matter what anyone says. Why do you call the concept dubious? The procedure of determining whether an effect is significant requires the setting of an arbitrary threshold, which is usually .05. That means, in analyses of the sort in the Blount et al. paper, that there's less than a 5% chance that the findings are due to sampling error. When an effect comes close to the threshold it is worth noting, because of the problems inherent in significance testing, which itself is widely criticized in the statistical literature. I am not clear what you mean by weighting, as I mentioned before - in the interpretation or statistically? Murray 13:30, 14 July 2008 (EDT)
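The labeling convention Murray describes can be made concrete with a tiny sketch. The .05 and .10 cutoffs are the standard conventions he mentions, applied to the third experiment's reported p = .08; they are conventions, not properties of the data.

```python
# A concrete rendering of the convention discussed above: p < .05 is
# called "significant" and .05 <= p < .10 is often reported as
# "marginally significant". Thresholds are conventions, not facts
# about the data.
def significance_label(p, alpha=0.05, marginal=0.10):
    if p < alpha:
        return "significant"
    if p < marginal:
        return "marginally significant"
    return "not significant"

# The third replay experiment's reported p-value of .08 falls in the
# marginal band:
print(significance_label(0.08))  # -> marginally significant
```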

Richard Lenski incorrectly included generations of the E. coli already known to contain Cit+ variants in his experiments.[3] Once these generations are removed from the analysis, the data disprove Len

The author of the above statement must not have read the paper or the supplement carefully. Lenski took clones from those cultures that weren't Cit+. Careful reading of the paper or the supplement reveals that Cit+ mutants appeared at 750 generations or later into the replay experiment. The authors write: "New Cit+ variants emerged between 750 and 3,700 generations..."--Argon 15:03, 13 July 2008 (EDT)

Your explanation is not convincing. Show us how Lenski proved that the samples did not have any Cit+ variants, if you really think he's claiming that.--Aschlafly 20:51, 13 July 2008 (EDT)
I count at least four distinct diagnostic methods used in the paper for distinguishing whether clones are Cit+. One in particular they describe as being very sensitive to weakly citrate-using cells. It's not that I 'think' Blount et al. claim they started with Cit- cells. They say that in the 'Supporting information' document.
Andy, for tonight, I'll leave it as an exercise for you to answer your own question of how one might test a clone to be sure that it was Cit-. Use the paper if you'd like, or present another means. Tomorrow evening I'll cite the methods Blount used in the paper.
Meanwhile, here's another question to ponder: If the colonies used to start the 'replay' cultures were Cit+ at the start, why is it that no Cit+ cells were found in such cultures before 750 generations? These cultures were all started from single clones which must have either had the Cit+ phenotype or not.--Argon 18:38, 14 July 2008 (EDT)

- How did Lenski obtain Cit- clones from later populations after Cit+ had become the dominant phenotype? It's pretty standard procedure. First, keep in mind that these populations contained a minority of Cit- cells. Now, a small liquid culture is grown from the frozen glycerol stock of cells from a given generation, say 32,000. This is grown to an appropriate OD and then plated on a standard rich-media agar plate. Provided that the plated sample isn't too dense, this plating will deposit somewhere between 15-100 individual cells on the agar, with plenty of spacing between them. The plate is then incubated at 37 C for a given time, during which the individual cells replicate to form small, visible colonies. These colonies will consist solely of cells that are genetically identical to the original cell deposited on the agar. This is all well and good, but how do you find which of the colonies are Cit-? Here you use a technique called replica plating. The agar plate is gently inverted onto a sterile swatch of velvet so that some of the cells from each colony are deposited onto the fabric. Next, a citrate-only agar plate is pressed against the velvet, transferring cells from the fabric to the new plate in exactly the same spatial orientation as the first plate. The cells are then allowed to grow on the citrate-only plate. When we then compare the two plates, we look for colonies on the rich plate that did not grow on the citrate-only plate. These colonies will be Cit-, and can be used for the replay experiments. Gerlach 13:45, 14 July 2008 (EDT)
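Logically, the replica-plating readout described above reduces to a set difference: colonies that grow on the rich plate but show no growth at the matching position on the citrate-only plate are the candidate Cit- clones. A toy sketch, with colony IDs made up for illustration:

```python
# A toy model of the replica-plating comparison: colonies that grow on
# the rich plate but not at the matching position on the citrate-only
# plate are candidate Cit- clones. Colony IDs are invented.
rich_plate = {"c1", "c2", "c3", "c4", "c5"}   # every colony grows on rich media
citrate_only_plate = {"c2", "c5"}             # only Cit+ colonies grow on citrate

cit_minus_candidates = rich_plate - citrate_only_plate
print(sorted(cit_minus_candidates))  # -> ['c1', 'c3', 'c4']
```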

Just a correction: The Cit+ cells didn't dominate the cultures at generations 32,000 & 32,500. Cit+ mutants represented 12% and 19% of the population at these time points, respectively. Replica plating is definitely one means of identifying Cit- and Cit+ clones, but because the Cit+ cells represented a fraction of the total, it is simple enough to streak the samples to individual colonies (founded by single cells) and test each colony individually. Otherwise, a nice description of replica plating (a technique developed by Joshua Lederberg that allowed him to do the research for which he was awarded a share of the 1958 Nobel Prize in Physiology or Medicine). --Argon 18:38, 14 July 2008 (EDT)
Oops, you're right, Argon. Cit+ wasn't dominant at generation 32,000, so replica plating might not be the best option. Still, in the case of almost 20% Cit+, I think replica plating might reduce labor. I'm a biochemist by training, not a microbiologist, but I'm sure there are other methods that could be used. For example, Christensen's agar appears to provide a sensitive, colorimetric method of identifying even weakly citrate-utilizing colonies, so one might be able to plate cells on a Christensen's agar plate and pick uncolored colonies. Again, though, I'm no microbiologist, so I don't know the best method. However, there are ways to easily pick Cit- clones from later generations in this case. That said, I think that this specific "flaw" cited in Lenski's paper should be removed from the main article. Clearly, Cit+ cells were not used in the replays from later generations, so Conservapedia's objection is totally baseless. Gerlach 19:19, 14 July 2008 (EDT)
All appropriate descriptions. For generating estimates of the population distribution (Cit-/Cit+) in a culture, replica plating would certainly be applicable. The main point is that identifying and isolating Cit- clones from the generations used in the experiment is straightforward. I agree that it would be a mistake to keep that objection in the article (actually that thread runs across several points in the article).--Argon 20:12, 14 July 2008 (EDT)
Nothing in the paper rules out contamination of those samples by Cit+ variants, and you quote nothing in the paper to rule it out. There was no reason to use and rely on these samples that already have Cit+ variants.--Aschlafly 23:04, 14 July 2008 (EDT)
The whole point of the discussion above was to illustrate that there are methods to isolate cells with a particular genotype from a mixed population. Suppose we used the replica plating method (maybe not the best choice, but it's standard procedure). Care to explain where the Cit+ contaminants could possibly come from? And the naivete inherent in objecting that "nothing in the paper" rules it out is simply astounding. Standard laboratory procedures, like isolating specific clones, typically don't show up in papers. For example, if I were to write a paper on my research, it would not include a detailed discussion of the construction of my expression plasmid. I might mention that I cloned my gene into a particular vector, but there wouldn't be a discussion of how I did my PCR, restriction digests, or transformations. Things like that are simply extraneous details that are taken for granted by experienced researchers. There is simply no reason for Lenski et al. to include a discussion of their method for isolating specific clones. This is why this whole exercise of Conservapedia criticizing Lenski's paper is folly: most Conservapedia users simply don't know standard laboratory techniques. It's kind of important to know what scientists today are able to do on a routine basis before wading in to claim that they couldn't have done what they claim. Gerlach 09:05, 15 July 2008 (EDT)
Actually the methods cited in the paper would separate the Cit+ lines from the Cit-. They took single colonies (founded by single cells), and tested them on selective agar (minimal citrate media - MC agar) and an indicator medium, Christensen's citrate agar (Product information from Sigma here: http://www.sigmaaldrich.com/sigma/datasheet/c7595dat.pdf). As Gerlach notes from the Blount paper, Christensen's citrate agar is very sensitive to citrate utilization. Typical E. coli strains do not produce a color change but other citrate-using enterics like S. typhimurium and Cit+ E. coli mutants appear pink/red on the plates.
This conclusion is further validated (as I mention above) by the fact that Cit+ mutants did not appear immediately in the replay experiments (others have also noted this). For example, in replay set 1, it took 750 generations before the first Cit+ mutants were isolated. Many didn't produce Cit+ cells after 3,700 generations. If the starting line were Cit+, *all* the cells in the culture would have shown up as Cit+ in the first pass.
Overall, it is abundantly clear that the cell lines used in the replay experiments were not Cit+ at the beginning.--Argon 19:44, 15 July 2008 (EDT)

1. Lenski's "historical contingency" hypothesis, as specifically depicted in Figure 3, is contradicted by the data presented...

Again, more issues with comprehension: The authors did not know when the potentiating mutation first arose but they knew it was before generation 31,500. Figure 2 is merely figurative, being an illustration, and not quantitative. Their analyses suggested that the potentiating mutation did arise at about the 20,000 generation point or later. Their conclusion is that Cit+ mutation rate is low even in a potentiated background but apparently distinguishable from a low-incidence single, unpotentiated event.--Argon 15:04, 13 July 2008 (EDT)

Statistics101 package

What does the author imply by writing "...Lenski himself does not have any obvious expertise in statistics. In fact, Richard Lenski admits in his paper that he based his statistical conclusions on use of a website called 'statistics101'"?

From the web site: "Professionals: Although it was originally developed to aid students, the Statistics101 program is suitable for all levels of statistical sophistication. It is especially useful for Monte Carlo, resampling, and bootstrap applications. It has been used by professionals in many fields. These include anthropology, biology, ecology, evolutionary biology, epidemiology, marine biology, psychology, toxicology, veterinary pathology." It would appear that the package from the web site (not the web site itself) was used to perform the Monte Carlo resampling tests. Is there any evidence that the package produces incorrect results?--Argon 19:41, 13 July 2008 (EDT)
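For readers unfamiliar with what a resampling package does, here is a minimal bootstrap of a sample mean using only the Python standard library. The data values are invented, and this is a generic sketch of the technique, not the Statistics101 program itself.

```python
# A minimal illustration of the resampling techniques such a package
# automates: a bootstrap of the sample mean. Data values are invented.
import random

random.seed(0)  # make the run reproducible
data = [3, 5, 7, 2, 8, 4, 6, 5, 9, 4]

means = []
for _ in range(10_000):
    resample = [random.choice(data) for _ in data]  # sample with replacement
    means.append(sum(resample) / len(resample))

means.sort()
low, high = means[249], means[9749]  # rough 95% percentile interval
print(round(low, 2), round(high, 2))
```

The point of such tools is that the sampling distribution of a statistic is approximated empirically, by repeated resampling, rather than derived analytically.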

Formal Response to PNAS

I've moved this thread from the open section above since this page is being categorized. --DinsdaleP 21:52, 13 July 2008 (EDT)

I think this article presents very sound arguments. Conservapedia should now take action, offering to publish a rebuttal of Lenski in the PNAS journal.--JBoley 11:31, 13 July 2008 (EDT)

I agree with JBoley: if Conservapedia wants to present a formal, professional response to Professor Lenski's paper that questions specifics within his paper, then it should happen. That is the proper execution of the scientific method, and I'm certain that a professional response to PNAS would yield better results than vague "give us all the data" demands. Is a formal response to PNAS from Conservapedia in the works, or is this article the only place these questions/objections were intended to be raised? --DinsdaleP 11:52, 13 July 2008 (EDT)

I don't know if PNAS would embarrass itself by printing a rebuttal, or whether it has the integrity to retract Lenski's paper. Conservapedia's audience is probably bigger than PNAS's, and we're certainly not going to suspend our exposure of the truth here in order to await correction by PNAS.
PNAS publishes easy-to-find rules for rebuttals (they call them "Letters"). It is specifically what you want: PNAS: Information for Authors. I cite: "Letters are brief online-only comments that contribute to the discussion of a PNAS research article published within the last 3 months. Letters may not include requests to cite the letter writer's work, accusations of misconduct, or personal comments to an author. Letters are limited to 250 words and no more than five references." --Stitch75 13:25, 14 July 2008 (EDT)


In addition, Lenski has already demonstrated how he reads this site and he can certainly correct his own paper, and he should do so. Indeed, professionalism might support giving Lenski the time to correct it himself first.--Aschlafly 12:14, 13 July 2008 (EDT)
I understand what you're saying, but there's nothing improper or unprofessional in submitting a formal request to PNAS to have the points in this article addressed by Professor Lenski and his team. To be frank, you've been adamant in your insistence that PNAS has been less than rigorous in the review of Lenski's paper, so if one of your intentions is to demonstrate this then having PNAS respond to a formally submitted response to the paper in public would serve that purpose. This can be done in addition to publishing these objections on Conservapedia--DinsdaleP 12:21, 13 July 2008 (EDT)
If not PNAS, then perhaps some other public forum? I know Andy Schlafly has appeared on television, effectively arguing against Gardasil and other dangerous vaccines. Perhaps if a TV program were interested you could argue against Lenski? You could be the spokesperson against these false claims of evolution.--JBoley 12:24, 13 July 2008 (EDT)
I'm not opposed to the above suggestions, but the future is here, folks. Lenski, PNAS editors and television producers have free will to reject or ignore the truth, and I'm more interested in getting the truth out here than trying to persuade someone in dying media like print or television. Lenski and his defenders can see the truth here, and they can decide for themselves whether to reject or admit it.--Aschlafly 12:30, 13 July 2008 (EDT)
I agree with you about the dying nature of print (I don't think television is dying, merely changing). The problem is that information about the flaws in Lenski's study is not registering outside of sites like Conservapedia. In effect, Conservapedia is an echo chamber: people who come to this site already agree with its point of view. I encourage you to attempt to attract the attention of other forms of media, or Lenski's false claims will simply be accepted as fact by the public and, even worse, by educators.--JBoley 12:36, 13 July 2008 (EDT)
While there's nothing wrong with taking one's message to various forums or outlets, I believe there's a specific value in submitting these objections as a formal response to PNAS. Conservapedia was established as a trustworthy resource for students, and in my mind all of its actions should be done with the goal of informing and educating. The Lenski debate is over the findings published in a scientific journal after undergoing a peer-review process. The objections to this paper by the CP leadership are not just about its content, but about the process by which it was reviewed and published in the timeframe it was. Talking about these objections is fine, but it's more instructional to the students using Conservapedia, and a better example of the scientific method in action, to respond to a scientific paper published in a journal through the formal process by which such papers are either defended or corrected. In the end, Lenski's work will either stand up as good science, or any errors will be addressed and the paper's conclusions modified accordingly, which is also good science. Seeing this process in action regarding such a significant paper is a great learning opportunity, and the Conservapedia leadership would be remiss in not standing by their conviction in these objections and submitting them formally to PNAS. --DinsdaleP 12:38, 13 July 2008 (EDT)

PNAS has a letters section available with the online edition. Many journals have a letters section for rebuttals or clarification. Legitimate corrections are welcome. Andy, have you run your list of 'flaws' past any biologists?--Argon 15:36, 13 July 2008 (EDT)

Funny, Argon, how you don't apply your demand of expertise to Lenski himself. What are Lenski's credentials with respect to statistical analysis? Has he even taken and passed an upper-class statistics course of any substance?--Aschlafly 00:45, 14 July 2008 (EDT)
I don't follow. Are you suggesting that Lenski doesn't understand the proper application of the statistical methods used in his paper? If so, I haven't seen a description of which alternate methods you'd employ, let alone any output from such an analysis. Here's a thought: Why don't you substantiate your claims by writing up the work and submitting it as a correction letter to PNAS?--Argon 18:46, 14 July 2008 (EDT)
Argon, if you really "don't follow," then try harder. You insist on credentials by others who criticize Lenski, and yet you do not insist on expertise by Lenski in statistics with respect to his "analysis". Perhaps Lenski should first take and try to pass "Statistics 101" before trying to use a website by its name to draw flawed conclusions.--Aschlafly 22:36, 14 July 2008 (EDT)
Do you really think that Lenski went to statistics101.net to learn about statistics and how to apply them? That simply is not the case. The Lenski group knew that they needed to do a Monte Carlo resampling analysis on the results of their replay experiments. In this situation, they were faced with two choices: either code an appropriate program themselves, or utilize one that has already been developed and is readily available to researchers. Since statistics101 had such a program available, they chose the latter option. Statistics101.net was simply the source of the program that the Lenski group used to perform the statistical analysis. If you want to argue against that choice, then you need to examine the source code for the statistics101 package and enumerate why it should not have been used. Gerlach 11:40, 15 July 2008 (EDT)
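A hedged sketch of the kind of Monte Carlo resampling test described above: under the null hypothesis, each Cit+ mutant is equally likely to have come from any tested generation, and we count how often random draws produce a mean source generation at least as late as the observed one. The generation values and "observed" origins below are invented, not the paper's data.

```python
# Sketch of a Monte Carlo resampling test: how often does a random
# assignment of mutants to generations look at least as "late" as the
# observed assignment? All values here are invented for illustration.
import random

random.seed(1)  # reproducible
generations = list(range(0, 32_001, 2_000))           # candidate source generations
observed = [26_000, 28_000, 30_000, 32_000, 32_000]   # hypothetical Cit+ origins
observed_mean = sum(observed) / len(observed)

n_trials = 20_000
at_least_as_late = 0
for _ in range(n_trials):
    draw = [random.choice(generations) for _ in observed]
    if sum(draw) / len(draw) >= observed_mean:
        at_least_as_late += 1

p_value = at_least_as_late / n_trials
print(p_value)  # small value -> mutants cluster in later generations
```

A small p-value here would mean the observed clustering of Cit+ origins in late generations is unlikely under the "any generation is equally likely" null, which is the logic behind favoring the potentiation hypothesis.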
I don't insist on credentials per se. I *recommend* developing a working understanding of the experiments, or a willingness to do the necessary background research to get the details straight, before heading off in possibly the wrong direction. It saves a lot of thrashing about. It's perfectly OK to raise questions, but before leveling accusations it might be nice to do that privately and discuss it with others who can provide useful feedback. Just my 2 cents.--Argon 19:48, 15 July 2008 (EDT)
Mr. Schlafly, does your response above, "I'm more interested in getting the truth out here than trying to persuade someone in dying media like print or television", mean that you will not be submitting a response with these identified flaws directly to PNAS? As I mentioned above, it takes nothing away from the value of posting these statements here on CP to also submit them to PNAS, but the proper way to prompt a journal to review and correct an article is through a direct response, not publication on an unrelated website like CP. It's not proper for anyone but the author(s) of the objections in this article to make that submission, so I'm hoping they step up with the conviction of their beliefs and respond to PNAS directly. Thanks. --DinsdaleP 12:32, 14 July 2008 (EDT)
By now Lenski has probably seen the flaws identified on the content page here. What's "proper" is for him to correct his own paper in PNAS. The criticism will likely continue as long as he declines to do so. If anyone here would like to educate the PNAS editors about the flaws, then please feel free to do so.--Aschlafly 22:36, 14 July 2008 (EDT)
I'll be glad to submit these items to PNAS on behalf of Conservapedia using the "Letters" forum described by Stitch75 above. Should I cite you as the author of this analysis, and is there an email you'd prefer me to include instead of my personal one for any PNAS response? I'll put up a draft of the Letters submission here for your approval before sending anything out. Thanks. --DinsdaleP 18:39, 15 July 2008 (EDT)
It would have a very strange scientific taste should somebody other than Mr. Schlafly be the first author of the letter (however, if you insist on that, I suggest "personal communication" as the right way of citing the work here). It is his turn to stand up for his claims in this way (by publishing them in a forum which he is not the owner of). I personally share the doubts of many here about his argumentation (and his understanding of the experiments), but I am sure only more of his dismissive comments will follow. However, I see that he comes up with a clear alternate hypothesis (contamination), so he is free to show that this is more likely (calculations, please) and perform Monte Carlo simulations on it (for a person complaining that Statistics101 is too simple, that should be no difficult task). As far as I understand and see the data, Lenski and coworkers did their best to exclude this hypothesis; however, I did not run my own simulations (and I won't, because I think nothing will come of it; furthermore, Mr. Schlafly's personal style of communication, "you have to try harder", is not the style I am used to being addressed in by people whose qualification in a subject is apparently no greater than mine). So running the simulation and evaluating his own hypothesis using a valid statistical method is now Mr. Schlafly's job; if he comes up with a decent calculation showing this hypothesis is more likely, the letter would surely be accepted and Lenski would have to react. If Mr. Schlafly is not the first author of the letter, he could evade the critics afterwards by saying that he was misunderstood, which means somebody else takes the risk of submitting the letter, but in case of success Mr. Schlafly would take the glory. --Stitch75 20:10, 15 July 2008 (EDT)
I would prefer that Mr. Schlafly submit his objections to PNAS directly as well, but since he has declined to do so the next best response is to submit it "on behalf of Conservapedia", which he has authorized above. I'll post the draft letter tomorrow, and it will credit him as the author unless I'm asked to include other individuals who contributed to the analysis. --DinsdaleP 22:05, 15 July 2008 (EDT)
I hope ASchlafly does offer a rebuttal of Lenski's flawed "study." However, if he does not, I am all for DinsdaleP's suggestion. I look forward to reading your draft. If you take all the objections that Conservapedians have raised to Lenski's paper, I do not see how PNAS can possibly object.--JBoley 11:28, 16 July 2008 (EDT)

Expertise in Statistics

I don’t mind admitting that I have none! The article states that Lenski combined the three trials incorrectly and that he doesn’t make the insignificance of the third clear. Now it may well be very clear to others but, unfortunately, the article doesn’t make clear to people like me what was wrong. Can someone expand on this point? Is anyone able to explain to a layman what Lenski should have done and the conclusions he should have reached? It's all getting a little technical for me. --Bill Dean 12:10, 14 July 2008 (EDT)

OK, here's point 4 in layman's terms: Lenski's hypotheses of a mutation rate imply that a ten-fold increase in sample size should result in a ten-fold increase in mutations. But it doesn't. In fact, a nearly ten-fold increase in sample size results in only a slight increase in mutations in Lenski's data. These data, as presented by Lenski in his paper, suggest (if properly interpreted) that there is no mutation rate at all. Rather, these data are more consistent with occasional contamination, broadly defined.--Aschlafly 23:15, 14 July 2008 (EDT)
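The scaling argument above rests on a simple property of a constant per-cell mutation rate: the expected number of mutants grows linearly with the number of cells tested, so ten times the cells should yield roughly ten times the mutants. A minimal Python sketch of that expectation (the rate and cell counts below are made-up illustrative values, not figures from the paper):

```python
# Sketch of the "constant mutation rate" expectation: under a fixed
# per-cell mutation probability, expected mutant counts scale linearly
# with sample size. The rate below is illustrative, not Lenski's value.
MUTATION_RATE = 1e-9  # hypothetical per-cell probability of a Cit+ mutation

def expected_mutants(cells_tested, rate=MUTATION_RATE):
    """Expected number of mutants under a constant per-cell rate."""
    return cells_tested * rate

small_sample = expected_mutants(1e10)   # smaller replay experiment
large_sample = expected_mutants(1e11)   # ten-fold more cells tested
print(small_sample, large_sample)       # the second is ten times the first
```

This is only the expectation; actual counts in any one experiment would fluctuate around it, which is part of what the debate below is about.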
Except that the growth pattern of the Cit+ cells in the third replay experiment demonstrated that there were no Cit+ cells in the cultures at the time of plating. Control plates with a mix of Cit- and Cit+ cells were run, and these showed rapid development of Cit+ colonies, whereas the experimental plates did not show any Cit+ colonies for at least 8 days and up to 28 days. If the plates had become randomly contaminated during the course of the incubation, rather than at the initial plating, that post-plating contamination should have affected all generations equally, but it did not. --Brossa 12:46, 15 July 2008 (EDT)
Blount et al. discussed the unexpectedly low Cit+ conversion rate for the third replay set in the supplementary document. They offer some speculations but don't know why it occurred. They are dealing with low-probability events. I don't agree that the 'proper' interpretation is contamination. First, they isolated and tested each Cit+ isolate for the markers and sequences we've discussed on earlier Conservapedia pages. Second, the distribution of Cit+ isolates does not appear to be random: they correlate strongly with the later generations. If undetected contamination were random, it is unlikely that the Monte Carlo resampling tests would have rejected the 'rare-mutation' hypothesis in favor of the 'potentiated' hypothesis. As Brossa correctly notes, contamination would have affected all generations. Blount et al. write in the paper's supplement: "To facilitate handling and minimize possible confounding variables, we divided this third experiment into 20 blocks of 14 clones each. All of the clones within a block came from different generations, and the single ancestral clone was included in all 20 blocks." --Argon 20:01, 15 July 2008 (EDT)
Good and detailed analysis and conclusion, Argon. I had no time to read everything in that detail, but the argument you present (a constant background would favor the rare-mutation hypothesis) is exactly the one I had in mind. So let's see how Mr. Schlafly bends his own Monte Carlo simulations, which would show several thousand runs with no mutation, into accordance with any reasonable background that explains a significant number of counts in *some* samples. --Stitch75 20:20, 15 July 2008 (EDT)
The above comments are non-responsive. The essence of both of Lenski's hypotheses is that there is a fixed, or stepped, mutation rate. But any such rate would produce mutant counts roughly proportional to sample size. Yet his three experiments prove otherwise, which Lenski fails to address in a satisfactory manner. Indeed, Lenski's presentation of his data disproves the very thing he claims to have shown.--Aschlafly 20:40, 15 July 2008 (EDT)
Mr. Schlafly, please answer: Which of the two hypotheses is, according to your calculations, the more likely? --Stitch75 20:47, 15 July 2008 (EDT)
The first replay experiment was run under different conditions - roughly 3700 generations in continuous subcultures. One can't compare population sizes or rates between the first and last two replay experiments. As I noted, Blount et al. acknowledged the possible anomaly of lower-than-expected mutants in the third run. Moreover, the second and third experiments were themselves run under different conditions. The plates in the second replay experiment were seeded with fewer cells per plate than in the third experiment. That change can affect the survival rates of cells on plates over time (e.g. different rates of nutrient exhaustion). Given the extremely low mutation rates involved, there is no simple means of normalizing the numbers of mutants recovered for the second and third experiments. One might expect 'roughly' 10x more mutants in the third experiment, but that's truly a 'rough' estimate that would be affected by the conditions to which the cells were exposed. It's certainly a question that remains, and one they may be able to clear up with future research. In any case, the pattern of data does not support Andy's claims of contamination or that the hypotheses of the paper are in error.--Argon 21:27, 15 July 2008 (EDT)
Argon, you have free will and it's clear to me that you are going to exercise it to the point of embracing absurdities. This time you claim that "extremely low mutation rates" would not result in mutations that scale with sample size (of course they would), and that density completely alters mutation rate (if that silly claim were true, then Lenski's experiment was flawed from the get-go). Your belief system is remarkable, but it's not logical.--Aschlafly 23:38, 15 July 2008 (EDT)
Fair enough, but let's be clear: That's my take on the paper, and that of most of the scientists who have read, reviewed, and commented upon it so far. For that matter, Michael Behe didn't call the data flawed, nor did the commentators at Dembski's Uncommon Descent blog, nor did Dr. Georgia Purdom at Answers in Genesis. In contrast, those who think the work is flawed appear to be limited mostly to you, Andy. It is true that 'scaling' was not seen in this case, but as we've seen, the conditions were not quite the same, and it is known that this can have an impact.
However, the fact that the conditions were not identical doesn't detract from the fact that the emergence of Cit+ clones *still* correlated with the sampling of the later generations. What this means is that across three separate experiments, under three sets of conditions, the constant-rate-mutation hypothesis doesn't hold. What the differences between the second and third replay experiments demonstrate is that they were run under different conditions that affected the overall rate of conversion, not that the 'potentiated mutation' hypothesis is wrong. Those are actually distinct questions. Andy, the data simply do not support your claims that the cultures were contaminated (we'd expect a random distribution) or that the 'scaling' variations ruined the experiment. In my opinion, you seem focused on red herrings to the exclusion of evaluating the data in the overall context of the experiment, which demonstrates a correlation of Cit+ clones emerging from samples taken at later generations. Would you care to address that pattern and discuss why your 'contamination hypothesis' doesn't appear to hold up?--Argon 09:29, 16 July 2008 (EDT)
There is nothing absurd in what Argon said. As he mentioned, the third experiment was performed under different conditions and at a different time than the first or second experiments. The key here is that there are plenty of variables, some unknown, that simply aren't controlled for between the experiments. For example, take the fact that it was performed at a different point in time than the first two. The third experiment, then, was almost certainly performed with different batches of growth media, liquid and solid. Anyone who has spent any significant time growing cells knows that media can vary significantly in growth characteristics between batches, despite the same recipe being used. The reasons for this can be many. Perhaps the balance or pH meter was off calibration one day, or a different bottle of reagent was used. Take an analytical chemistry course and you'll spend plenty of time talking about this. It is true that, for the most part, this sort of variability has minimal impact. But when you're examining something like an extremely rare mutation, or trying to make an extremely accurate measurement, intra-lab variability like this can be significant. For an extremely rare mutation like Cit+, which involved at least two additional mutations in potentiated cells, any change in mutation rate can have a significant effect on your ability to obtain mutants. Mutation rate is sensitive to growth conditions, so cells grown under different conditions are likely to experience a different rate of Cit+ mutation.
Regardless of the reason for the lower-than-expected number of Cit+ mutants in the third experiment, however, the Cit+ mutants isolated absolutely did not arise from contamination. This is clear if you read the paper. You're still left, then, with the two hypotheses presented, and the results support historical contingency. --Gerlach 09:45, 16 July 2008 (EDT)

Is there anyone here with expertise in statistics who could give an analysis? Fyezall 16:15, 15 July 2008 (EDT)

Hello Fyezall. I rearranged the position of your question to hopefully keep the conversations clearer. It boils down to this: The researchers found that mutant Cit+ strains arose over the course of time in their long term growth experiment. They wanted to learn something about how that strain acquired this ability. They wondered, 'Was this the result of a single, very low frequency mutation or did some other mutation have to precede it in earlier generations, followed by the final mutation(s) that allowed the cells to grow on citrate?'
If the Cit+ change required a single mutation with a low but constant probability over time, one would expect Cit+ mutations to be distributed evenly across cells taken from any generation of the experiment. On the other hand, if a 'potentiating' mutation had to arise at some point in the cultures before the final Cit+ mutation could take effect, then one would expect the probability of Cit- cells mutating to Cit+ to increase in samples of cells taken from later generations.
Blount ran the experiment and found that Cit+ mutants arose more frequently in cells taken from later generations in the culture. Monte Carlo resampling tests were used to assess how likely the observed pattern of results would be under each model. The statistical significance (smaller P-values mean greater significance) was calculated for each experiment and for the combination of experiments. The numbers suggest the mutants were not randomly distributed across the experimental generations; they tended to appear in cells taken from the later generations. This argues against the single-mutation, constant, low-probability hypothesis. It appears that a pre-adaptive mutation had to have arisen first, followed by the mutation that finally allowed the cells to utilize citrate. Future work in Lenski's lab will focus on trying to identify the various mutations involved. I hope this brief explanation helps.--Argon 20:25, 15 July 2008 (EDT)
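For readers curious what such a Monte Carlo resampling (permutation) test looks like in practice, here is a minimal Python sketch. The clone counts, generation labels, and Cit+ outcomes below are invented for illustration, not Blount et al.'s data, and the test statistic (the mean generation of the clones that yielded Cit+ mutants) is just one reasonable choice:

```python
import random

random.seed(42)  # make the resampling reproducible

# Made-up illustrative data (NOT Blount et al.'s actual numbers): 140
# replayed clones, one from each of seven generations in 20 blocks.
generations = [0, 5000, 10000, 15000, 20000, 25000, 30000] * 20

# Hypothetical outcomes: Cit+ re-evolved only in some late-generation
# clones, as the 'potentiated' hypothesis would predict.
cit_plus = [1 if g >= 25000 and i % 4 == 0 else 0
            for i, g in enumerate(generations)]

def mean_generation_of_mutants(gens, hits):
    """Test statistic: average generation of the clones that yielded Cit+."""
    mutants = [g for g, h in zip(gens, hits) if h]
    return sum(mutants) / len(mutants)

observed = mean_generation_of_mutants(generations, cit_plus)

# Null hypothesis (rare-mutation): Cit+ outcomes are exchangeable across
# generations, so shuffling the outcome labels should produce statistics
# like the observed one reasonably often.
n_resamples = 10000
extreme = 0
for _ in range(n_resamples):
    shuffled = cit_plus[:]
    random.shuffle(shuffled)
    if mean_generation_of_mutants(generations, shuffled) >= observed:
        extreme += 1

p_value = extreme / n_resamples  # one-sided P-value
print(f"observed mean mutant generation: {observed:.0f}, P = {p_value:.4f}")
```

With the invented data above, mutants cluster in late generations, so the resampled P-value comes out very small and the null (constant-rate) model would be rejected; with mutants spread uniformly it would not be.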

== What I have noticed ==

I'll tell you what strikes me the most about this discussion: it's watching evolutionists do whatever they can to keep their precious beliefs above question or scrutiny. It's as if they know it will all come toppling down, so they must resort to side-lining the hard questions. --JJacob 21:43, 15 July 2008 (EDT)