Difference between revisions of "Talk:PNAS Response to Letter"

Notice: misrepresentations are not going to be allowed on this page.  Substantive comments only, please.

In this day and age, scientists have their own agenda and have corrupted science. Just look at global warming or cloning or stem cells as proof. With that said, the only way to get the real truth is by suing in court. Unfortunately, scientists are backed by vast wealth and have the power to defend themselves vigorously. If ever a fund were set up to pay for a suit, I would contribute. It is a classic case whereby, once the truth be known, the truth will prevail. -- [[Image:50 star flag.png|14px]] [[User:Jpatt|jp]] 22:14, 12 September 2008 (EDT)
Note: earlier posts are archived [[Talk:PNAS_Response_to_Letter/Archive1|here]]. --[[User:BillA|BillA]] 17:05, 20 September 2008 (EDT)
  
: Thanks, Jpatt.  One additional beauty of the truth is that it remains the truth no matter how much some deny it.  PNAS can deny its errors all it likes, but that doesn't change the fact that they are errors.--[[User:Aschlafly|Aschlafly]] 22:21, 12 September 2008 (EDT)
== Point 5 Confirmed ==
  
::Well said, Andy and Jpatt. It is perhaps worth pointing out that the President of the NAS is a "climate scientist". If the Academy is dominated by [[pseudoscience]] of that kind, it's hardly a surprise that their response was to cover up and deny the truth. Nevertheless, they had to be given their chance to make good before further steps are taken. I suggest now that the issue be put to potentially supportive congressmen/women and senators, given the public funding for Lenski's activities. [[User:Bugler|Bugler]] 05:46, 13 September 2008 (EDT)
I would like to contribute to this discussion because I have taught statistics to graduate biology students for 16 years.
  
::: Right.  The next step is to criticize the taxpayer funding of this junk science.  When the authors and the publishing organization will not even address statistical errors in the work, then it's time to pull the public funding.--[[User:Aschlafly|Aschlafly]] 10:13, 13 September 2008 (EDT)
The combination of data from several experiments is a specialist and sometimes difficult area of statistical theory, but a simple example shows why Aschlafly's concern about combining the results of three different experiments is not justified, and why this aspect of his criticism of Lenski's recent paper in PNAS is not valid.
  
::::They did address your claims of statistical errors: they said that you were wrong to the degree that they were able to determine what you were talking about. You made a qualitative argument and got a qualitative response.--[[User:Brossa|Brossa]] 11:00, 13 September 2008 (EDT)
Suppose we want to conduct a test of whether or not men are taller than women on average. For the sake of the example, I generated random heights of people from a population in which men had an average height of 175cm (about 5'9") and women of 165cm (about 5'5"). The standard deviations of height in both sexes were 7cm. I think these numbers are approximately correct for people in the UK, but the details aren't important.
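(For anyone who wants to reproduce this kind of toy data set, the sampling step is a couple of lines of Python; this is a minimal sketch assuming NumPy, not the original script, and a different seed will of course give different numbers.)

<pre>
import numpy as np

rng = np.random.default_rng(0)      # fixed seed so the run is repeatable
men = rng.normal(175, 7, size=2)    # two male heights: mean 175cm, SD 7cm
women = rng.normal(165, 7, size=2)  # two female heights: mean 165cm, SD 7cm
print(men.round(1), women.round(1))
</pre>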
  
::::: Which of the 5 specific errors do you think they addressed?  None, as far as I can tell.--[[User:Aschlafly|Aschlafly]] 11:11, 13 September 2008 (EDT)
Suppose we take 5 samples, each consisting of 2 men and 2 women. Here are the numbers I generated:
  
:::::The response addresses your qualitative claims about the paper's statistical methods raised in points two, three, and five by the following: "Nevertheless, from a statistical point of view, it is proper to combine the results of independent experiments, as Blount et al. '''did correctly in their original paper'''" (emphasis added); in fact the longest paragraph in the response deals entirely with the statistical claims of the letter and dismisses them.--[[User:Brossa|Brossa]] 11:32, 13 September 2008 (EDT)
{| class="wikitable"
 +
|-
 +
! Man1 !! Man2 !! Woman1 !! Woman2 !! Men mean !! Women mean !! Mean difference !! t !! P
 +
|-
 +
| 176 || 179 || 157 || 148 || 177.5 || 152.5 || 25 || 5.27 || 0.017
 +
|-
 +
| 180 || 176 || 160 || 164 || 178  || 162  || 16 || 5.66 || 0.015
 +
|-
 +
| 176 || 175 || 167 || 165 || 175.5 || 166  || 9.5 || 8.50 || 0.0068
 +
|-
 +
| 169 || 171 || 168 || 173 || 170 || 170.5 || -0.5 || -0.19 || 0.57
 +
|-
 +
| 179 || 175 || 166 || 178 || 177 || 172 || 5 || 0.79 || 0.26
 +
|-
 +
|}
  
::::But that's never going to happen, because the data availability requirements for public funding have already been met. [[User:Jirby|Jirby]] 11:03, 13 September 2008 (EDT)
''P'' in the last column is the t-test probability for a one-sided test of women being shorter than men. (Formally, it's the probability of getting a value of t at least as large as that calculated from the data if men and women in fact have the same average height.)
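Any row of the table can be checked with a standard one-sided two-sample t-test. Here is a minimal sketch (assuming Python with SciPy; it is not the original calculation) that reproduces the first row:

<pre>
from scipy import stats

men = [176, 179]    # first sample, men
women = [157, 148]  # first sample, women

# Pooled-variance two-sample t-test; one-sided alternative: men taller
t, p = stats.ttest_ind(men, women, alternative='greater')
print(round(t, 2), round(p, 3))  # t = 5.27, P = 0.017, matching row 1
</pre>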
  
::::: No, I don't think the researchers have met NSF guidelines as referenced in the letter.--[[User:Aschlafly|Aschlafly]] 11:11, 13 September 2008 (EDT)
Should the fact that, in the fourth sample, the average height of the women is greater than that of the men make us doubt that men are in fact taller on average? Should we be concerned about the last sample, in which the difference in height of the two sexes is rather small, though in the expected direction? No, in both cases. When we combine the data on all 10 men and all 10 women, we get this:
  
Proof? [[User:Jirby|Jirby]] 11:26, 13 September 2008 (EDT)
{| class="wikitable"
 +
|-
 +
! Men mean !! Women mean !! Mean difference !! t !! ''P''
 +
|-
 +
| 175.6 || 164.6 || 11 || 3.85 || 0.00058
 +
|-
 +
|}
  
: As I said, the NSF guidelines are referenced in the [[Letter to PNAS|letter]].--[[User:Aschlafly|Aschlafly]] 11:29, 13 September 2008 (EDT)
Clearly, combining the data from several similar experiments strengthens the conclusions considerably, as shown by the fact that ''P'' is much smaller for the combined data than for any individual sample.
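The combined row can be checked the same way by pooling all ten men and all ten women from the table above (again a sketch assuming SciPy, not the original calculation):

<pre>
from scipy import stats

men = [176, 179, 180, 176, 176, 175, 169, 171, 179, 175]
women = [157, 148, 160, 164, 167, 165, 168, 173, 166, 178]

# Same one-sided t-test as before, now on the combined samples
t, p = stats.ttest_ind(men, women, alternative='greater')
print(round(t, 2), round(p, 5))  # t = 3.85, P = 0.00058, as in the table
</pre>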
  
You mean the notebooks, etc.? [[User:Jirby|Jirby]] 11:32, 13 September 2008 (EDT)
Although the combination of data from several experiments is a specialised area of statistics, I see nothing particularly incorrect about the approach used by Lenski and his colleagues. The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so. (For example: A. Combining the results of five samples of the heights of men and women is clearly valid. B. Combining three samples of heights of men and women with two samples of lengths of male and female squid clearly isn’t.) Generally speaking, the outcome of a combined analysis of several small experiments which all point in the same direction (or at least in a similar direction) will be more significant than that of any one of those experiments, as is shown in the larger table above.
  
:Oh my dear God, I can't believe this!! Where has this beautiful country gone if even science is not reliable anymore? Hope things will change in the future. Good thing there still are people like Mr. Schlafly, who have the brains and power to stand up and turn the people of America in the right direction again. [[User:Raul|Raul]] 12:24, 13 September 2008 (EDT)
I hope this clarifies the extensive discussion on this point and puts Aschlafly's mind at rest on this subject. [[User:KennyMac|KennyMac]] 08:20, 18 September 2008 (EDT)
  
Mr. Schlafly, I have a question, BTW. Was this letter received on paper, or electronically? Because if it was on paper, perhaps it would be a good idea to scan it and post it. It would add a lot to the encyclopedic value of the article. [[User:Raul|Raul]] 12:26, 13 September 2008 (EDT)
:That's very nicely put, thanks. You should work on some of the stats pages here. Of course, technically any sample is ultimately just a combination of ''n'' samples of size 1. [[User:MikeR|MikeR]] 13:28, 18 September 2008 (EDT)
  
: PNAS procedures required me to submit the letter electronically using its own electronic submission software.  When the PNAS acknowledged that my submission complied with all its requirements, it also said that the authors of the original paper had been notified of my letter.--[[User:Aschlafly|Aschlafly]] 12:39, 13 September 2008 (EDT)
:I'll take a look at this Friday.  It's not immediately obvious what the point is to your analysis above.--[[User:Aschlafly|Aschlafly]] 23:46, 18 September 2008 (EDT)
::"''The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so"''--[[User:KingOfNothing|KingOfNothing]] 00:57, 19 September 2008 (EDT)
  
:::This makes no sense as an argument. It may be true in this simple case that you can do one large or several small samples and get similar results - which is quite obvious and wouldn't need such a detailed rant. However, you provide no mathematical proof, just one example. [[User:Etc|Etc]] 01:08, 19 September 2008 (EDT)
:''"It's not immediately obvious what the point is to your analysis above"''. No surprise, Aschlafly, really. Maybe you should take your own advice: ''"I suggest you try harder with an open mind"''. --[[User:CrossC|CrossC]] 02:46, 19 September 2008 (EDT)
  
Too bad. It doesn't make much sense though; I guess that tells a lot about PNAS. What they should care about is the actual text, not the medium it is in. That's not the case IMHO for encyclopedias, however. If not for anything else, a scan would have been useful as a reference for the digital text. Oh well... [[User:Raul|Raul]] 12:53, 13 September 2008 (EDT)
It is with great sadness that I note that the author of this - the only significant statistical explanation and discussion in this entire fiasco - has [http://www.conservapedia.com/index.php?title=Special%3ALog&type=block&user=DeanS&page=User%3AKennyMac just been blocked] for five years.  Even his email is blocked, so he can't even appeal the action.  I don't see such manoeuvres as having contributed to the much vaunted "open mind" of which various people here speak. [[User:BenHur|BenHur]] 10:27, 19 September 2008 (EDT)
  
Honest question, is it against the rules to disagree with Andrew Schlafly or criticize that letter? I just want to know so I don't end up in the same situation as other people who have been censored here.--[[User:IanG|IanG]] 17:03, 13 September 2008 (EDT)
'''REPLY''': I have now reviewed the above analysis, and it supports Point 5 rather than the PNAS paper.  Point 5 stated, "The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance."  The analysis above does nothing more than reinforce Point 5 by combining experiments based on sample size.
:I believe this page is only for discussion of the response, which is quite straightforward.  Criticism of the letter should have gone on its talk page, but it's too late now.  Oh well.  There is no censorship on Conservapedia. Your comment is not substantive - please refactor it. Praise Jesus, [[User:Pila|Pila]] 17:26, 13 September 2008 (EDT)
  
:: If you REALLY believe that Lenski has committed academic FRAUD, then lodge a formal complaint with his university. Such complaints are taken very seriously and can lead to loss of tenure and dismissal from the university, and with that on his record no other institution would hire him on any basis. [[User:Markr|Markr]] 19:40, 13 September 2008 (EDT)
In Pavlovian manner, some Lenski types nod their heads here in agreement at the above analysis, apparently unaware that it reinforces Point 5.

When combining results from samples that are vastly different in sample size, it is necessary to factor in the different sample sizes.  Apparently the PNAS paper failed to do that, which helps explain why it refuses to provide a meaningful response to Point 5.--[[User:Aschlafly|Aschlafly]] 19:24, 19 September 2008 (EDT)
  
(deleted non-substantive comments).  Again, the heading on this page will be enforced:  "Substantive comments only, please."  If you have a substantive comment about the identified errors and the PNAS's failure to address them, then please comment.  Non-substantive comments will be removed.  This is an encyclopedia-based search for the truth, not a blog or a place for those who refuse to contribute in a substantive manner.--[[User:Aschlafly|Aschlafly]] 20:29, 14 September 2008 (EDT)
(rants below were deleted for being non-substantive in violation of this page's rules.)--[[User:Aschlafly|Aschlafly]] 19:24, 19 September 2008 (EDT)
  
:Since you've taken the liberty of deciding what is substantive or not in deleting posts like my last one, I have a serious, respectful question to ask: What exactly do you mean by "substantive"?  I didn't attack you or your letter; I was attempting to state that the PNAS response did, in fact, address the points of your letter.  Whether one considers the PNAS response to be ''correct'' or not is a separate matter - they read your objections and responded to them instead of ignoring them, that's all.
:::I understand point 5 now, or at least I think I do.  What we have as a "sample" is either:
  
:My last post would therefore seem to have met Webster's definition of ''substantive'' - "having or expressing substance", but apparently the measure of "substantive" for a comment on this page appears to be whether it agrees with your view or not. That's your prerogative, but if you intended to allow comments on this page other than endorsements of your view, then please let me know what I did wrong.  --[[User:DinsdaleP|DinsdaleP]] 21:00, 14 September 2008 (EDT)
::::1. Individual cultures (Schlafly)
::::2. Cultures that developed cit+. (Lenski)
  
:: Dinsdale, we're here to think and learn.  You can look at my letter, look at the PNAS's response, and provide some ''substantive'' insights.  We're not here to say something like, uh, go ask someone else if a (9th grade-level) graph is correct or not.  If you think the substantive issues are beyond your depth (and I don't think they are), then comment on them in a substantive and intellectual and specific way.  This is not another waste-of-time blog, and it's not going to become one.--[[User:Aschlafly|Aschlafly]] 21:19, 14 September 2008 (EDT)
:::Schlafly contends that the sample should be all the cultures and that Lenski has, improperly, filtered the sample by excluding the vast majority of it (i.e. all those cultures that did not become cit+).  Am I right in thinking this is the argument? --[[User:Toffeeman|Toffeeman]] 19:57, 19 September 2008 (EDT)
  
:::DinsdaleP, you did attack ASchlafly, at least indirectly. Suggesting that the PNAS response has merit might also be interpreted by some to mean that the letter ASchlafly sent wasn't the very best it could be. Now, contrast that to my deleted comment suggesting that a time-tested response would be to actually try reproducing the experiment. Many bad experiments are exposed when others fail to get the same results as the original authors. I think this would be an excellent, substantive avenue to pursue.--[[User:Argon|Argon]] 21:23, 14 September 2008 (EDT)
:::: No, we're talking about how Lenski combined a large study (which did not really support Lenski's hypothesis) with small studies (which Lenski claims do support his hypothesis). The studies were not combined in a logical manner with proper weighting given to the much bigger size of the large study.--[[User:Aschlafly|Aschlafly]] 23:15, 19 September 2008 (EDT)
:::::Aschlafly, have you read the paper on z-transforms which explains the statistical technique used? You can download a .pdf copy of the paper for free [http://www3.interscience.wiley.com/journal/118663174/abstract here]. --[[User:BillA|BillA]] 06:30, 20 September 2008 (EDT)
  
ASchlafly- you said above, "PNAS can deny its errors all it likes, but that doesn't change the fact they are errors".  As you say, in a fair discussion of the merits of two sides of an argument, it's important that both sides take a good, hard look at their own propositions.  Since your position is that PNAS has errors on its own side, I'm just curious to know if you are in any way prepared to accept that there might be errors in your own argument, or are you ''absolutely 100% certain'' that your position is error-free?  I'm wondering if perhaps before submitting this issue to funding authorities, you would be prepared to have an independent statistical expert take a look at your proposal?  [[User:BenHur|BenHur]] 22:17, 14 September 2008 (EDT)
:::::: You say "statistical technique used," but you should have said "statistical technique ''cited''."  In fact, a close reading of the Z-transform paper provides more support for Point 5: combined studies must be weighted based on sample size:
  
:The only thing I was criticizing in my original comment today was the Main Page headline statement that "PNAS refuses to address the 5 errors in the Lenski study identified by the Letter to PNAS".  What I pointed out is the fact that they ''did'' in fact respond, by criticizing the statistical analysis used by Aschlafly.  I'm not supporting or attacking Mr. Schlafly's analysis, because I'm the first one to admit that I have no expertise in this area.  My conclusion was a constructive suggestion that Mr. Schlafly present a rebuttal to the PNAS decision by showing how his analysis and conclusions were not erroneous in the manner they claimed.  A public, statistical defense of Mr. Schlafly's work, perhaps accompanied by the endorsement of some regarded experts in the field, would be the best response to PNAS choosing to respond by email instead of through the journal.
::::::: "When there is variation in the sample size across studies, there can be a noticeable difference in the power of the two methods, with the weighted Z-approach being superior in all cases. As such, we should always prefer the weighted Z to the unweighted Z-approach when the independent studies test the same hypothesis."[http://www3.interscience.wiley.com/journal/118663174/abstract see p. 1371].
  
:I wrote both the original draft letter to PNAS from Mr. Schlafly's notes and my earlier comments today with the intent of contributing constructively.  I hope this clarification of my view is substantive enough to remain. --[[User:DinsdaleP|DinsdaleP]] 22:24, 14 September 2008 (EDT)
:::::: In other words, the cited paper actually ''supports'' Point 5.--[[User:Aschlafly|Aschlafly]] 09:34, 20 September 2008 (EDT)
  
Folks, I've pointed out five ''very specific'' statistical (logical) errors.  The torrent of nonsense above even includes an absurd demand for me to try to repeat the experiments, as though that would somehow correct a flawed paper.
(unindent) Lenski used the weighted method.  See note 49 to the paper and the text around the combination.  Of course there is the question of on what basis Lenski weighted the results.  Lenski weighted the results on the basis of the Cit+ numbers, and we may think it would have been better to weight on the basis of the number of replicates.  I have below the calculations (not mine) of combined P-values based on 1) no weighting, 2) weighting on the basis of Cit+, and 3) weighting on the basis of replicates.  The weighted Z statistic is SUM(weight x Z-score for each run) / SQRT(SUM(weight^2 for each run)).
  
The math is wrong in the PNAS paper. No one at PNAS is even willing to put his name on a response claiming that the math is correct, because it isn't. I'm not going to allow further nonsensical postings here. If you want to address the statistical (logical) errors in a specific way, fine. If you feel it is beyond your depth to do so, then move on. Thanks and Godspeed.--[[User:Aschlafly|Aschlafly]] 22:49, 14 September 2008 (EDT)
{| class="wikitable"
 +
|-
 +
! Expt# !! p-value !! z-score !! #Cit+ muts !! #replicates
 +
|-
 +
| 1 || 0.0085 || 2.387 || 4 || 72
 +
|-
 +
| 2 || 0.0007 || 3.195 || 5 || 340
 +
|-
 +
| 3 || 0.0823 || 1.390 || 8 || 2800
 +
|-
 +
|}
  
:"The paper incorrectly applied a Monte Carlo resampling test to exclude the null hypothesis for rarely occurring events." Specifically, why is it incorrect to apply a Monte Carlo test in this circumstance, or why was their application incorrect? Do your own calculations produce a p-value that differs from the published p-value of 0.08?
Applying the formula described above:
  
:"The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." This sounds as though you disagree with the use of the Z-transform technique used to combine the data from the three replay experiments, or believe that the Z-transform analysis was performed incorrectly. Which do you disagree with - the technique, the application, or both, and why? --[[User:Brossa|Brossa]] 23:31, 14 September 2008 (EDT)
{| class="wikitable"
 +
|-
 +
! Weighting !! Z transformed !! p
 +
|-
 +
| Equal || 4.025 || <0.001
 +
|-
 +
|By total Cit+|| 3.576 || <0.001
 +
|-
 +
|By total replicates|| 1.825 || 0.034
 +
|-
 +
|}
  
:: (removed personal and non-substantive attack that violated clear rules for this discussion; also, unsubstantiated claims of expertise are not allowed --[[User:Aschlafly|Aschlafly]] 19:16, 15 September 2008 (EDT))
So weighting on the basis of the number of replicates considerably increases the P-value.  It remains, however, well within the range of statistical significance (P < 0.05).  If we hold that Lenski should have weighted on the basis of replicates, then he would still have rejected the null hypothesis and reached exactly the same conclusions that he did.  The entire paper would have been exactly the same, except the sentence "the result is extremely significant (P < 0.0001) whether or not…" would read "the result is significant (P < 0.04) whether or not".  Point 5 establishes one number and an "extremely".  Point 5, therefore, has no weight (excuse the pun).
--[[User:Toffeeman|Toffeeman]] 15:18, 20 September 2008 (EDT)
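The combined figures in these two tables are easy to verify from the per-experiment z-scores. Here is a short sketch applying the stated formula (my own check, not Toffeeman's calculation; it assumes Python with SciPy for the normal tail probability):

<pre>
from math import sqrt
from scipy.stats import norm

z = [2.387, 3.195, 1.390]           # z-scores of the three experiments
weightings = {
    'Equal':               [1, 1, 1],
    'By total Cit+':       [4, 5, 8],
    'By total replicates': [72, 340, 2800],
}

# Weighted Z = SUM(w * z) / SQRT(SUM(w^2)); norm.sf(Z) is the one-sided P
for name, w in weightings.items():
    Z = sum(wi * zi for wi, zi in zip(w, z)) / sqrt(sum(wi ** 2 for wi in w))
    print(name, round(Z, 3), round(norm.sf(Z), 6))
# Prints Z = 4.025, 3.576 and 1.825 (P = 0.034), matching the table
</pre>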
  
::: I'm still a little unclear on your position, ASchlafly - are you ''absolutely 100% certain'' your own statistical analysis is correct on this?  Before you proceed further it's important to know that the technical analysis you are presenting is indeed indisputable.  [[User:BenHur|BenHur]] 12:01, 15 September 2008 (EDT)
: Sorry, Toffeeman, a falsehood is still a falsehood.  Based on your own posting, if Lenski had applied the Whitlock Z-transform paper in a logical manner, the results would not have been nearly as striking as Lenski claimed (his paper said the results were "extremely significant").  Moreover, I found Lenski's description of his application of the Whitlock paper to be particularly misleading.  Lenski's use of "whether or not"<ref>"We also used the Z-transformation method (49) to combine the probabilities from our three experiments, and '''the result is extremely significant (P < 0.0001) whether or not''' the experiments are weighted by the number of independent Cit+ mutants observed in each one." (Lenski paper at 7902).</ref> obscures the basic error that he did not apply Whitlock's paper in the straightforward, correct manner.  I think the wording in the Lenski paper deliberately obscures this falsehood from the reader.
  
:::: Ben, if you have at least a 9th grade-level education, then you can look at the 5 errors and decide for yourself, and comment in a substantive manner.  Yes, they are obvious and basic errors, and the fact that the reviewer of my letter at PNAS would not attach his name to a specific denial speaks volumes.--[[User:Aschlafly|Aschlafly]] 17:11, 15 September 2008 (EDT)
: People have free will to embrace and defend falsehoods.  I don't expect them to change quickly or admit they were wrong.  But you'll find me defending and promoting the truth.
  
:::::Aschlafly, I'm a little confused as to why you [http://www.conservapedia.com/index.php?title=Talk:PNAS_Response_to_Letter&curid=73292&diff=517424&oldid=517408 removed this comment of mine]?  I did not cast any aspersions on your argument, and was merely answering your question as you posed it to me?  Is "declining to comment" an indictable offense?  [[User:BenHur|BenHur]] 19:20, 15 September 2008 (EDT)
: Point 5 remains valid and the falsehood remains uncorrected by PNAS or Lenski.  Four other points of higher priority remain uncorrected by them also.--[[User:Aschlafly|Aschlafly]] 16:47, 20 September 2008 (EDT)
  
:::::: Your comment was not an "indictable offense," but it violated the rules of this page:  "Substantive comments only, please." Got it?  Either say something substantive, or edit somewhere else. Thanks.--[[User:Aschlafly|Aschlafly]] 19:33, 15 September 2008 (EDT)
::'''''"A falsehood is still a falsehood."'''''  Precisely: the null hypothesis should be considered false, and the conclusions of the paper stand.
  
::::::: This is confusing.  Is "I agree with your thesis" or "your methods are 100% correct" a substantive comment?  It can be very hard to infer your intent, Mr. Schlafly, I'm sorry to say.  I have no quarrel with you, but I'm becoming confused as to what is and isn't an appropriate comment on what is labelled a "Talk Page".  Are there special rules for this Talk Page?  If so, perhaps the title on the page might be changed?  [[User:BenHur|BenHur]] 19:42, 15 September 2008 (EDT)
::Oh? Do you mean ''Lenski's falsehood''? And what falsehood is that?  Lenski said that he had calculated the P-value without weighting and that it had come out at <0.0001.  That is true, not false.  Lenski said he had calculated the P-value weighting on the basis of the Cit+ replicates and that it had come out at <0.0001.  That is true, not false.  There is no falsehood.  Lenski did not mention the results of weighting on the basis of replicate numbers.  He thus made no claim about weighting on the basis of replicate numbers.  If he made no claim he cannot have made a false claim.
  
:::::::: No, your quoted phrases are obviously not substantive comments.  Your statement of agreement means nothing.  I doubt you are even using your real name, for starters, which renders your agreement even sillier.  I repeat for the nth time: say something substantive or edit somewhere else.--[[User:Aschlafly|Aschlafly]] 19:52, 15 September 2008 (EDT)
::How do the words Lenski uses "mislead"?  If you read them as written, what conclusion do you come to?  You are led to the conclusion that the mutation was not "rare-but-equal"; instead it was contingent.  That is the right conclusion.  If Lenski had presented the data in a different manner (perhaps by including the results of weighting on the basis of replicate numbers), what conclusion do you come to?  You are again led to the conclusion that the mutation was not "rare-but-equal"; instead it was contingent.  That is the right conclusion.  To be misled, you must be led to a conclusion that is incorrect.  By Lenski's paper you are not led to a conclusion that is incorrect.  Thus it cannot be said to be "misleading".
  
::::::::: (removed another non-substantive posting)--[[User:Aschlafly|Aschlafly]] 20:28, 15 September 2008 (EDT)
::I shall not comment on your second paragraph; the temptation to ''"Tu Quoque"'' would be too great.
  
:::::Aschlafly, perhaps you did not realize that my earlier questions were meant for you. I wish to address the statistical errors in a specific way, which requires a better understanding of your position. I will repeat my main questions: why was it incorrect to apply Monte Carlo techniques to the data in the paper, or in what way was the Monte Carlo technique performed incorrectly? Second, why was it incorrect to apply the Z-transform to the data from the three replays, or in what way was the Z-transform performed incorrectly? In lieu of the Monte Carlo/Z-transform techniques, what statistical calculations ''should'' have been performed? Feel free to be technical; I have more than a ninth grade education.--[[User:Brossa|Brossa]] 17:38, 15 September 2008 (EDT)
--[[User:Toffeeman|Toffeeman]] 17:13, 20 September 2008 (EDT)
  
:::::: If you're skipping over the main points, then concede their validity or explain why you've skipped over them.--[[User:Aschlafly|Aschlafly]] 19:14, 15 September 2008 (EDT)
::: The falsehood consists of pretending to apply the Whitlock Z-transform in a straightforward, logical and correct manner. I think the Lenski paper is intentionally misleading by using the "whether or not" wording, when both alternatives are nonsensical.  Point 5 has been proven above to be correct in identifying an error in the Lenski paper.
  
::::::: If you would like me to go through the original letter point by point, and respond to all of the claims in detail, I'd be happy to - but only after I have your explicit permission to do so on this talk page (or some other page of your choosing). Until I'm given that permission, I'll await your response to my previously-stated questions about points three and five: your statements about the incorrect application of Monte Carlo resampling and the erroneous combination of the three replay experiments (the Z-transformation). --[[User:Brossa|Brossa]] 22:00, 15 September 2008 (EDT)
::: "Toffeeman", your blocking history suggests you have been less than straightforward yourself.  Go elsewhere if you seek to be deceitful. You're not fooling anyone here.--[[User:Aschlafly|Aschlafly]] 17:54, 20 September 2008 (EDT)
  
:::::::: Substantive postings are welcome, but I still don't have an explanation for why you skipped over the main points 1 and 2. Do you concede them?--[[User:Aschlafly|Aschlafly]] 22:04, 15 September 2008 (EDT)
::::Lenski did apply the Z-transformation correctly, both weighted and unweighted. The data points extracted from each replay are the ''generation numbers of those replicates that gave rise to Cit+ mutants''. Thus Replay 1 produced four data points: 30,500 31,500 32,500 32,500. Replay 2 produced five data points: 32,000 32,000 32,000 32,000 32,500. Replay 3 produced eight data points: 20,000 20,000 27,000 27,000 31,000 31,500 32,000 32,000. Thus the N for replay 1 is 4; replay 2 is 5 and replay 3 is 8. The fact that replay 3 used 38 times as many replicates as replay 1 does not mean that it should be weighted 38 times as much; it only produced twice as much data, not 38 times as much.
  
:::::::::: With all due respect, Aschlafly, and excuse me if I feel the need to tiptoe as gently as possible here, but I would cautiously disagree with your statement that "Substantive postings are welcome".  It seems to me that you have explicitly removed ''only'' the substantive comments from this page.  What is left are weak and non-substantive comments, to be sure.  It seems very difficult to comment in any way here on this matter, as you seem to be of the opinion that you and you alone are correct. With the best will in the world, I certainly don't feel able to speak freely here, despite being well qualified to do so.  Perhaps a slightly more lenient approach might help your own cause?  [[User:BenHur|BenHur]] 22:23, 15 September 2008 (EDT)
::::Suppose I want to find out what the average age of a murderer is in three cities. In L.A. I interview 72 random people and find that 4 of them were convicted of murder; I record the ages of the four. In Seattle I interview 340 people and find that 5 of them are convicted murderers; likewise in Singapore I interview 2800 and find 8 murderers. In the end, I have 4,5, and 8 data points from the three cities; the number of people I had to interview to obtain those data points doesn't factor into the analysis of what the average age of the murderers is.--[[User:Brossa|Brossa]] 00:06, 21 September 2008 (EDT)
::::::::::: Since I am new here I am a bit timid about chiming in on this discussion, but I feel compelled to say that (as I pointed out in the section below), I think this page should be reserved for appropriate responses to the PNAS letter - not for opinions regarding how Aschlafly is running this website. If you think you have valid points to bring out about that - maybe post them to his talk page? --[[User:DRamon|DRamon]] 22:31, 15 September 2008 (EDT)
  
(unindent) No, I do not concede points one and two; I've commented on them on the [[Talk:Letter to PNAS]] page. I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to. Similarly, your point four is not something that can be resolved by argument. The only way to prove that the results were due to contamination would be to repeat the experiment with different controls in place and demonstrate different results - which might spur some other labs to run a third or fourth trial to see who was right. It seemed to me that the only points that could be solved by discussion were three and five, since they made statements about the statistical methods that can potentially be resolved through debate. So, once again, what is your response to my questions regarding points three and five?--[[User:Brossa|Brossa]] 22:34, 15 September 2008 (EDT)
::::: In your first paragraph you simply repeat the error underlying Lenski's paper. You, like the paper, incorrectly apply Whitlock's Z-transform.
  
: Brossa, I could not find a meaningful rebuttal by you of point 1, and I found no comments by you on point 2, at [[Talk:Letter to PNAS]].  Are you trying the classic trick of "the answer is over there," when it isn't?  Point 2 alone completely disproves the PNAS paper's thesis, and yet you avoid it and skip towards less obvious errors.  I'm happy to address the subtle errors once you address the obvious ones.--[[User:Aschlafly|Aschlafly]] 16:10, 16 September 2008 (EDT)
::::: The quality and reliability of data is proportional to sample size, and when different studies are combined they need to be weighted accordingly.  The results from a very large sample size would not be weighted equally with the results from a small sample size, as you and Lenski have done. That's basic logic, though I'm not optimistic that you or Lenski will admit it. Open-minded people who respect logic have no difficulty elevating logic over personal whim.--[[User:Aschlafly|Aschlafly]] 11:30, 21 September 2008 (EDT)
  
::Aschlafly, you are avoiding Brossa's question.  Why can't you just answer it?  Brossa said that he wanted to limit the discussion to these points.  The reason that he wants to do that is that the PNAS response targets your approach to the statistical analysis.  [[User:MickA|MickA]] 16:24, 16 September 2008 (EDT)
::::::Andy, if you read Whitlock's paper you would see that it says, and I quote, "Ideally each study is weighted proportional to the inverse of its error variance, that is, by the '''reciprocal of its squared standard error'''." It says nothing about weighting according to sample size, which is what you seem to be insisting should be done.
:I don't mind addressing points one and two; I just don't think that we'll come to an agreement about them. Point one represents a misunderstanding of figure 3, which is labeled "Alternative hypotheses for the origin of the Cit+ function..." The figure does not represent the results of the experiments and does not conflict with them. It is a cartoon of the a priori hypothesis that was generated before the experiments were performed; it is not itself the hypothesis (the map is not the country). Note that the vertical axis lacks a scale; there is no way of knowing what the actual mutation rates are ahead of time. The location of the vertical jump on the graph is arbitrary; it has to lie somewhere between 0 and 31,500 generations, but that point could be anywhere. Quoting from the paper: "The historical contingency hypothesis predicts that the mutation rate to Cit+ should increase after some potentiating genetic background has evolved. Thus, Cit+ variants should re-evolve more often in the replays using clones sampled from later generations of the Ara-3 population." The hypothesis as stated does not specify a generation at which the potentiating mutation occurred. The hypothesis is not that potentiation took place at generation 31,000 rather than some other generation; it is that there ''was'' a potentiating mutation ''rather than'' a rare-mutation event. The results of the experiment do not disprove the contingency hypothesis; they confirm it and suggest that the potentiating mutation took place at generation 20,000. You think that the figure is the hypothesis; I think that the hypothesis is what is stated explicitly in the text of the paper; I doubt that we'll agree.
::::::Also Whitlock acknowledges in the paper that there is no preference for weighted versus equal weighting, so the fact that both equal weighting and weighting by the standard error give a statistically significant result shows that the 3 experiments combined support rejection of the null hypothesis. [[User:DanB|DanB]] 20:39, 21 September 2008 (EDT)
  
:Point two states: "Both hypotheses propose fixed mutation rates, but the failure of mutations to increase with sample size disproves this." I disagree with this statement. The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were similar, but not the same. One could imagine a hypothesis that men commit murder most often between the ages of 25 and 35, with samples taken from the male populations of L.A., Singapore, and London. One would find different murder rates among the men of those three cities, but still might find (or not) that murderers in those cities tend to be between 25 and 35 years old. Comparing the 'sample sizes' of replays two and three runs into the same difficulty: the experimental conditions were not the same, just as Singapore is not the same as Los Angeles. The rare-mutation hypothesis does not mean that the mutation rate to Cit+ is the same for ''all experimental conditions anywhere''; just that the mutation rate is constant ''given the conditions of a particular replay''. It is possible to have different baseline mutation rates among the three replays, all of which follow the historical contingency pattern. Or the mutation rate could actually be the same across all three replays, and the results seen here are just a statistical fluke that would vanish if the replays could be run thousands of times. Either way, it's not fatal to the paper's conclusions.
:::::::ASchlafly, you state that I incorrectly apply "Whitlock's Z-transform" (actually the test belongs to Mosteller & Bush<ref>Mosteller, F. & Bush, R.R. 1954. Selected quantitative techniques. In: ''Handbook of Social Psychology,'' Vol. 1 (G. Lindzey, ed.)</ref> and/or Liptak<ref>Liptak, T. 1958. On the combination of tests. ''Magyar Tud. Akad. Mat. Kutato Int. Kozl.'' '''3''': 171-197</ref>). Whitlock describes weighting by the reciprocal of the squared standard error. The standard error of the mean is proportional to 1/sqrt(N), so the reciprocal of the squared standard error is proportional to N. Thus larger studies ''are'' given more weight. I maintain that the sample sizes N of the three replays are 4, 5, and 8 respectively. Weighting based on those three N does not weight all three replays equally as you claim: it gives replay 2 25% more weight and replay 3 100% more weight than replay 1. Rather than simply repeating that I am wrong, will you please state what ''you'' think the sample sizes of the three replay experiments are, and, in your opinion, what the ''correct'' application of the Z-transformation would be?--[[User:Brossa|Brossa]] 18:00, 22 September 2008 (EDT)
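:::::::To make that concrete: weighting by 1/SE², which is proportional to the n of 4, 5 and 8, gives the following (a sketch of the arithmetic, mine rather than anything from the paper or the posts above; it assumes SciPy for the tail probability). Note that it necessarily reproduces the "By total Cit+" figure in the table further up, since the Cit+ counts are exactly these n:

<pre>
from math import sqrt
from scipy.stats import norm

z = [2.387, 3.195, 1.390]  # per-replay z-scores from the table above
n = [4, 5, 8]              # sample sizes as defined in this post

Z = sum(ni * zi for ni, zi in zip(n, z)) / sqrt(sum(ni ** 2 for ni in n))
print(round(Z, 3), round(norm.sf(Z), 5))  # Z = 3.576, P is about 0.00017
</pre>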
  
:Point two also states "If the authors claim that it is inappropriate to compare for scale the Second and Third Experiments to each other and to the First Experiment, then it was also an error to treat them similarly statistically." This is also incorrect in my view. One can state that the murder rate is different between two cities, and yet the murderers have some characteristic in common. Combining results from different samples is the bread and butter of statistical analysis. Meta-analysis, for example, is used to combine the results of studies that are much different than replay experiments one, two, and three. If you wish to make a more specific argument about the '''techniques''' used to combine the three replay experiments into a single result, I'll address it. I think that the second part of point two is essentially the same as point five in that it criticizes the statistical method used to combine the results of the three replays (the Z-transform), as opposed to point three which mentions the Monte Carlo technique separately.  
:::::::: The Lenski paper states how it weighted the experiments, and that weighting is incorrect. Admit it.  Moreover, the incorrect weighting in the Lenski paper was not likely an inadvertent error, as it inflated the significance of the results. I found the wording used by the Lenski paper to describe its (incorrect) weighting to be artfully misleading.
  
:I therefore repeat my questions about points three and five, as stated previously. If you have other questions that I must answer first, please list them all at once, as I'm eager to move on to that discussion.--[[User:Brossa|Brossa]] 17:53, 16 September 2008 (EDT)
:::::::: Provide me with federal funding as Lenski received, and I'll write a paper for you.  But I don't have to write an alternative paper to point out glaring errors in Lenski's paper.--[[User:Aschlafly|Aschlafly]] 08:35, 23 September 2008 (EDT)
  
:: It's foolish to debate someone who has already closed his mind. Point 1 is plainly correct: the PNAS article should admit and disclose that the false hypothesis was indeed proven to be false. If you stand behind the falsehood, then you'll refuse to admit other errors also.
::::::::: I've been following this discussion for a while and I have to agree with ASchlafly. It hardly seems fair that he should have to, in his spare time, replicate an experiment done by a professional just to "earn" the right to criticize it. I am unfamiliar with statistics, but if some complicated transform goes against common sense, common sense should prevail. After all, there are lies, damned lies, and statistics... [[User:AndyM|AndyM]] 10:57, 23 September 2008 (EDT)
  
:: Your refusal to concede point 2 is even more egregious.  Lenski combined his experiments, and you can't claim that combination was simultaneously correct and incorrect.  Moreover, the lack of scale also exists between Experiments 2 and 3, again disproving the underlying thesis of the paper.--[[User:Aschlafly|Aschlafly]] 18:31, 16 September 2008 (EDT)
 
  
:::I already said that I didn't think that we would reach common ground on points one, two, and four, and that it would be a waste of time to rehash them. You asked me to, so I did. It seems foolish to then treat those points as a shibboleth and refuse to discuss the other points, which I am fully prepared to concede provided you elaborate on why the Monte Carlo technique and the Z-transform were either the wrong tests or performed incorrectly. I am capable of accepting two items from a list of five even if I reject the other three. Even if I were completely incapable of agreeing with you, I still don't see why you won't put your best mathematical argument forward about the paper's statistics. Were there too few Monte Carlo resamplings? Do you feel that the Z-transformation was performed incorrectly? Is the Z-transform itself suspect? Do you propose some other statistical analysis? If you do, there are any number of us here who can crunch the numbers again. What a coup it would be for you if we could use a technique that you suggested to obtain results that smashed Lenski's smug complacency!
(unindent)I'm not asking anyone to write a paper or replicate an experiment. I'm asking ASchlafly to support his statement "The results from a very large sample size would not be weighted equally with the results from a small sample size, '''as you and Lenski have done'''"(bolding mine). I have stated publicly, subject to challenge by others, that the sample sizes (n) of the three replays are four, five, and eight respectively. Furthermore, using n of 4, 5, and 8 in the weighted Z-method DOES NOT weight all the replay experiments equally - it weights replay 3 twice as much as replay 1 and 8/5 as much as replay 2. Tell you what: I'll drop all my questions about Monte Carlo and the Z-transform, and simply ask ASchlafly one question: '''what is the sample size, n, of the second replay experiment?''' He need not even do any calculations - a statement in words that will allow someone else to do the calculation will suffice. This is not a complicated question to answer; the paper states how many replicate cultures there were (340), how many cells there were in each replicate (3.9x10^8), how many replicates gave rise to Cit+ cells (5), and which generations those Cit+ replicates came from (4 from 32,000 and one from 32,500). I will even give ''my'' answer: '''five'''. Furthermore, I will say ''why'' I believe that, using the murderer/age analogy: performing the 340 replicates is the same as interviewing 340 people in order to find out if any of them are convicted murderers. Finding that five replicates gave rise to Cit+ mutants is the same as the survey finding that 5 of those 340 people were convicted murderers. Finding that the Cit+ mutants arose from 4 replicates from generation 32,000 and 1 from generation 32,500 is the same as finding the ages of the murderers. The five data points in the Lenski study allow one to calculate the 'mean generation of clones yielding Cit+': 32,100. This is the same as finding the mean age of the five murderers. If I want to compare this hypothetical murderer age study to some other study of the mean age of murderers, I would weight the studies based on how many murderers were in each study, not on how many non-murderers were included in the initial survey.
  
:::I don't claim that the combination was simultaneously correct and incorrect, by the way - I claim that it was correct. Where do I imply otherwise? Also, it's not only improper to compare replay one with replays two and three ''for scale'': it's impossible. Replay one involved constantly changing numbers of cells whereas replays two and three started with fixed numbers. How do you count the number of cells in the first replay to compare it to the other two? Is it the number of cells transferred each time? Is it the maximum population achieved in each flask prior to transfer? 750 generations passed in one case and 3700 generations in another before the Cit+ trait was seen - how do you factor that into the 'scale' equation? It is only the superficial resemblance of replays two and three that brings up the concept of 'scale'. The "underlying thesis" of the paper is not that there is a unique rate of mutation to Cit+ that applies across ''any and all'' experimental conditions.--[[User:Brossa|Brossa]] 19:39, 16 September 2008 (EDT)
Surely ASchlafly can say ''what'' he thinks the n of the second replay is, even if he won't say ''why'' he thinks it. Is it five? 340? The number of replicates times the number of cells per replicate? Something else? No analysis need be performed on the resulting number.--[[User:Brossa|Brossa]] 15:54, 23 September 2008 (EDT)
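To put numbers on that description, the five data points from the second replay give (a sketch of the arithmetic only, not code from either side of the dispute):

<pre>
# Generations of the five replicates that yielded Cit+ mutants in replay 2
generations = [32000, 32000, 32000, 32000, 32500]

n = len(generations)             # sample size as defined above: 5
mean_gen = sum(generations) / n  # 32100.0, the 'mean generation of clones
                                 # yielding Cit+' quoted above
print(n, mean_gen)
</pre>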
  
== Please provide your statistical analysis ==
:OK - would you care to put in writing that after ASchlafly gives you his response, you won't start obfuscating the issue with Monte Carlo and Z-transform issues? You understand: it is typical of liberals, after being proven wrong, to start pretending that they were talking about an entirely different issue altogether. '''After ASchlafly states the sample size of the second replay experiment you will consider yourself answered.''' Correct?
  
Andrew, if you are so sure that your statistical analysis of the Lenski paper is correct, you should publish it on Conservapedia. [[User:MickA|MickA]] 17:38, 15 September 2008 (EDT)
:Brossa, your rant is misplaced.  One cannot salvage an error in logic by questioning which of the superior alternatives should be used instead.  The sample size of an experiment is the number that comprises the underlying sample used in the experiment, not the number of a certain outcome from the experiment.  Maybe you can debate yourself over what the correct underlying sample size is, but it is plainly not the number of a certain outcome from the experiment.--[[User:Aschlafly|Aschlafly]] 19:28, 23 September 2008 (EDT)
  
: I did. Which point don't you understand?--[[User:Aschlafly|Aschlafly]] 19:13, 15 September 2008 (EDT)
Maybe the journal ''Nature'' (the ''International Weekly Journal of Science'') can be your next letter submission source. -- [[Image:50 star flag.png|14px]] [[User:Jpatt|jp]] 21:17, 22 September 2008 (EDT)
  
::Could you direct me to the page showing your calculations? [[User:MickA|MickA]] 08:50, 16 September 2008 (EDT)
==Point 5 Rejected==
  
I think regardless of what the details behind the statistical analysis are, the shame here is that PNAS refused to address anything specific in their response. They simply glossed over everything that was said in the letter sent to them and gave a generic, unsubstantive response. All the people here that are trying to argue with ASchlafly about his position should instead focus on why it is that PNAS refuses to directly address our concerns. --[[User:DRamon|DRamon]] 21:01, 15 September 2008 (EDT)
Referring to ASchlafly's latest revert: OK, so let's forget about my specialist knowledge of the subject. It really doesn't matter. In science, what matters is what is said, not who says it. Remember that the most outstanding scientific discoveries of the 20th century were made by a patent clerk; the facts that he had no academic position at the time and a rather moderate educational record were irrelevant.
  
:: It is not the function of the editorial board to "defend" a paper. They mainly said that, from what Mr. Schlafly wrote, they could not see the mistake in Dr. Lenski's statistics that he suggested. Nor is it the function of a reviewer to "put his name" on something. A reviewer should check whether the average reader of a journal will be able to learn something from a letter. If he thinks there is nothing to learn, either because the letter fails to bring new insight or because readers may not understand why using only conservative, mathematically robust statistical methods is preferable, then he should reject the letter. From the wording of the response, though, I had the impression that the reviewer did have at least a brief look at the original article. And PNAS refused the letter for the very same reason people here keep asking about: the complete absence of alternative estimates for the numbers. I don't want to check the calculations; if Mr. Schlafly states that, by using a well-defined method, he finds another result, and states this result clearly, I am a priori very willing to believe that, because I see no reason not to take his word on it (since he has, to my best knowledge, no record of scientific fraud). After he states his result, I may find the time to try to reproduce it (although I am very busy). --[[User:Stitch75|Stitch75]] 13:00, 16 September 2008 (EDT)
So, I repeat: if you don't agree with me, present a reasoned argument against my points and don't just delete my comments (it's not wiki etiquette to delete other people's comments on a Talk page unless they're abusive or obscene). At least you haven't been so feeble as to block me to prevent me from contributing to this page, as you've done to other contributors. So I now repeat once again my cut-and-paste from Longstop's contribution a couple of weeks ago - as I said, if you don't agree with it, present a rational argument against it.
  
:::Could you direct me to the page showing your calculations? [[User:MickA|MickA]] 15:41, 16 September 2008 (EDT)
:The main point he [KennyMac] makes is quite correct: combining experiments will tend to give a more significant result (i.e. a lower P-value) than any single experiment on its own.
  
==Improper==
:The point that ASchlafly makes (headed '''REPLY''', shortly after KennyMac's example) is slightly misleading. You don't choose which experiments to combine on the basis of their outcomes - you either combine all relevant experiments or none. In this case, Professor Lenski chose to combine all the experiments and present a single analysis of them. That's fine.
I have no knowledge of this branch of science, so I won't comment on whether the results were correct, or if PNAS's letter was unbiased. Then again, neither does Schlafly who, from what is publicly known about him, is not a biologist. I find it somewhat ''inappropriate'' for someone with no training in a specific field to tell someone who has actually studied that field that he hasn't done his job right. It's common sense. [[User:Fred1776|Fred1776]] 14:30, 16 September 2008 (EDT)
:That's an extraordinary and offensive remark from someone who admits complete ignorance. How can you - Fred1776 - know exactly how expert Andrew Schlafly is on this issue? He may not be a mathematician, but he is an experienced educator, amongst other things, with a legal training that enables his mind to get to the crux of a problem, as has been demonstrated very many times in this project. Biologists are properly open to the scrutiny of others. [[User:Bugler|Bugler]] 15:34, 16 September 2008 (EDT)
::I simply can't turn down the opportunity of agreeing with Bugler!  Biologists, as with all experts, should be open to scrutiny from everyone. It's good to use people's expertise, but not at the expense of raising their pronouncements to the status of dogma. True, the lay person may not be able to formulate much criticism but sometimes, and especially when they stray outside the purely technical, all experts make mistakes that are evident to others. We should respect expertise, but stopping all questioning is giving respect beyond what is due. --[[User:Toffeeman|Toffeeman]] 15:45, 16 September 2008 (EDT)
  
::(EC)Well, the PNAS seems to think his understanding of statistical analysis is less than "elementary". The question is, given that virtually none of us are experts in statistics, why wouldn't we believe the PNAS? I mean, they are the trained scientists, right? If they told me my mis-understanding of statistical analysis was too fundamental to warrant a response in their journal, I'd show some humility and accept that as valid criticism. But hey, that's just me. [[User:KimEide|KimEide]] 15:48, 16 September 2008 (EDT)
:The issue of whether to weight the Z-test according to sample size is not straightforward. Before Whitlock's recent paper, the consensus was that the P-value already depends on sample size, so it should not be weighted according to sample size. (Basically, a high degree of significance, i.e. a low P-value, can be achieved by having either a large difference between the treatments or a large experiment or, of course, both.) Whitlock's work, based on computer simulations, is an interesting contribution but cannot be regarded as the last word on the subject, because any computer simulation involves assumptions about the structure of the particular experiment simulated. I cannot imagine any editor of a scientific journal rejecting a paper because it used an unweighted Z-test rather than the weighted version (or vice-versa).
:::Kim, if you're naive enough to believe everything that the [[Liberal]] establishment tells you, well, hey, don't let us stop you. But don't think that you will be allowed to infect this site with your credulity. [[User:Bugler|Bugler]] 15:54, 16 September 2008 (EDT)
::::Well, if I'm credulous because I'd accept the opinion of an expert in a very difficult and technical field I have no formal training in, then so be it. [[User:KimEide|KimEide]] 15:57, 16 September 2008 (EDT)
:::::Bugler, is it possible for you to talk without threatening someone? Try some civility. --[[User:IanG|IanG]] 16:08, 16 September 2008 (EDT)
:::::::::::(At the risk of a block) Not "believe everything on the basis that they are experts", but neither "disbelieve everything on the basis that they are assumed to be Liberal".  If one is to make use of expertise, then one needs to accept ''something'' on the basis of that expertise: you will have to revise at least one belief because of what the expert says. --[[User:Toffeeman|Toffeeman]] 16:09, 16 September 2008 (EDT)
  
KimEide, the experts on Christianity are overwhelmingly in agreement:  Jesus rose from the dead. Yet I expect that you don't accept that expert view.  Meanwhile, you seem to accept the "expert" view of Lenski about statistics despite his having, as far as I can tell, no expertise in that subject.
  
Those who don't want to think for themselves can return to Wikipedia and other playpens.  Those who have substantive insights about the logical and statistical issues here, please do comment.--[[User:Aschlafly|Aschlafly]] 16:16, 16 September 2008 (EDT)
:I happen to know for a fact that Lenski is thoroughly trained in the statistical analysis of experiments like the one in question. In fact, he's published extensively in that area. And I'm afraid the historicity of the Resurrection is a minority view among New Testament scholars. [[User:KimEide|KimEide]] 16:22, 16 September 2008 (EDT)
::What might wash with [[Liberal]] [[agnostic]] [[theology]] [[professor values|professors]] won't wash with true [[Christians]]. [[User:Bugler|Bugler]] 16:27, 16 September 2008 (EDT)
:::I agree completely. That doesn't change the empirical fact that it's a minority view among scholars. [[User:KimEide|KimEide]] 16:28, 16 September 2008 (EDT)
::::I disagree. I have found that when people use the word "scholars" that it is very specific to their side. So what is your definition of a scholar? In my life I have had the privilege of attending a number of different churches, three of which were run by men with PhDs. All three of them believed in the bodily resurrection of Christ. Taken across America, that number would be much higher. [[User:Learn together|Learn together]] 17:39, 16 September 2008 (EDT)
:::::If those men had PhDs in New Testament studies (or something related) from a serious University, I would certainly count them as scholars. But they would be in the minority in the field of New Testament studies. That's just an empirical fact I happen to know from being employed by a Liberal Arts University. In general though, I think of a scholar as someone who has 1) studied the literature extensively, 2) defended his knowledge before a panel of other people who have studied the literature extensively (i.e. been awarded an advanced degree), 3) published in peer-reviewed journals, and 4) is or was actively engaged in the professional field, either through writing, teaching (or preaching), researching, giving papers at conferences, etc. That's just off the top of my head though. Most scholars will have all four of those. Some will only have three. Maybe occasionally one might just have two. [[User:KimEide|KimEide]] 17:49, 16 September 2008 (EDT)
  
::::::This is a ridiculous argument. The resurrection of Jesus is a matter of faith and cannot be verified except by referring to the Bible. Statistical analysis is a branch of mathematics. The truths of mathematics are not subject to debate and are not matters of faith. Appealing to authorities in mathematics is completely different than appealing to a religious authority. [[User:MickA|MickA]] 17:52, 16 September 2008 (EDT)
(undent)We're not arguing whether the opinion of a majority of biblical scholars is a reliable indicator of whether Jesus rose from the dead or not. We're just arguing what that majority opinion is. It may seem irrelevant, but it is crucially important to the issue at hand...ummm...somehow. [[User:KimEide|KimEide]] 18:02, 16 September 2008 (EDT)
  
: Christian theologians hold the equivalent of PhDs, and yet you reject the conclusion of those "experts". I have found nothing in Lenski's published background to indicate any expertise by him in statistics.  Yet you accept Lenski's view on statistics without even thinking through the issues on your own, while rejecting the consensus of Christian experts.  Why?  The answer is obvious: bias and lack of open-mindedness.--[[User:Aschlafly|Aschlafly]] 18:25, 16 September 2008 (EDT)
  
:: Biological studies routinely require at least some application of statistical theory, especially when studying populations of organisms or molecules which molecular (and evolutionary) biology usually involves. Biostatistics (the statistical theories commonly used in biomedicine) is often a required course for PhD students for this reason. Established researchers in the life sciences do not need to have a degree in statistics to indicate expertise - Lenski's publication record itself indicates that he has successfully applied statistical theory in his analyses many times. At any rate, if someone can simply claim without proof that he has taken (and understood) statistics courses on the level needed to evaluate Lenski's paper, then he has no basis on which to accuse others of lacking expertise.
  
::: Malarkey.  I've taken and excelled in upper-level statistics courses, and there were no biology students, college or graduate, in them.  If Lenski has expertise in statistics then let's see it.  His own "biographical sketch" doesn't even disclose what his undergraduate major was at Oberlin or what his PhD concentration at the University of North Carolina was in.[http://myxo.css.msu.edu/BioSketch.html]--[[User:Aschlafly|Aschlafly]] 19:26, 16 September 2008 (EDT)
  
==Clarification on peer review and anonymity==
There has been, I think, undue outrage over the fact that the reviewer's name was not included in the response to Schlafly's letter. To clarify, this is standard procedure when reviewing manuscripts for scientific journals and does NOT in any way indicate cowardice or uncertainty. Anonymous peer review allows for reviews that are rigorous, honest, and, most importantly, objective, since if no one knows your identity, no one can threaten you, bribe you, or otherwise influence your evaluation of the material. Thus, you can carry out your professional duties without concern over how your review of Person A's paper will affect Person A's opinion of you or his/her review of your paper in the future, for example.
  
The anonymity of the response is standard procedure in scientific publishing and necessary to ensure objectivity of the evaluation process, and should not be construed negatively as has been done here.

Latest revision as of 23:10, November 13, 2008

Notice: misrepresentations are not going to be allowed on this page. Substantive comments only, please.

Note: earlier posts are archived here. --BillA 17:05, 20 September 2008 (EDT)

Point 5 Confirmed

I would like to contribute to this discussion because I have taught statistics to graduate biology students for 16 years.

The combination of data from several experiments is a specialist and sometimes difficult area of statistical theory but a simple example shows why Aschlafly’s concern about combining the results of three different experiments is not justified and why this aspect of his criticism of Lenski’s recent paper in PNAS is not valid.

Suppose we want to conduct a test of whether or not men are taller than women on average. For the sake of the example, I generated random heights of people from a population in which men had an average height of 175cm (5’10”) and women of 165cm (5’6”). The standard deviations of height in both sexes were 7cm. I think these numbers are approximately correct for people in the UK but the details aren’t important.

Suppose we take 5 samples of 2 men and 2 women. Here are the numbers I generated:

Man1   Man2   Woman1   Woman2   Men mean   Women mean   Mean difference      t        P
176    179    157      148      177.5      152.5         25                 5.27     0.017
180    176    160      164      178        162           16                 5.66     0.015
176    175    167      165      175.5      166            9.5               8.50     0.0068
169    171    168      173      170        170.5         -0.5              -0.19     0.57
179    175    166      178      177        172            5                 0.79     0.26

P in the last column is the t-test probability for a one-sided test of women being shorter than men. (Formally, it’s the probability of getting a value of t at least as large as the one calculated from the data if men were in fact no taller than women on average.)

Should the fact that, in the fourth sample, the average height of the women is greater than that of the men make us doubt that men are in fact taller on average? Should we be concerned about the last sample, in which the difference in height of the two sexes is rather small, though in the expected direction? No, in both cases. When we combine the data on all 10 men and all 10 women, we get this:

Men mean   Women mean   Mean difference     t        P
175.6      164.6         11                3.85     0.00058

Clearly, combining the data from several similar experiments strengthens the conclusions considerably, as shown by the fact that P is much smaller for the combined data than for any individual sample.
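
The whole example is easy to reproduce. Here is a minimal sketch of my own (not part of the original post), assuming NumPy and SciPy 1.6 or later (for the alternative= keyword); the exact numbers will differ from the tables above because the random draws differ:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    MEN_MEAN, WOMEN_MEAN, SD = 175.0, 165.0, 7.0  # population values from the example

    # five samples of 2 men and 2 women each
    samples = [(rng.normal(MEN_MEAN, SD, 2), rng.normal(WOMEN_MEAN, SD, 2))
               for _ in range(5)]

    for men, women in samples:
        # one-sided two-sample t-test: are men taller on average?
        t, p = stats.ttest_ind(men, women, alternative='greater')
        print(f"mean difference = {men.mean() - women.mean():6.2f}   t = {t:6.2f}   P = {p:.4f}")

    # combine all five samples: one t-test on all 10 men vs all 10 women
    all_men = np.concatenate([m for m, _ in samples])
    all_women = np.concatenate([w for _, w in samples])
    t, p = stats.ttest_ind(all_men, all_women, alternative='greater')
    print(f"combined: t = {t:.2f}   P = {p:.5f}")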

Although the combination of data from several experiments is a specialised area of statistics, I see nothing particularly incorrect about the approach used by Lenski and his colleagues. The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so. (For example: A. Combining the results of five samples of the heights of men and women is clearly valid. B. Combining three samples of heights of men and women with two samples of lengths of male and female squid clearly isn’t.) Generally speaking, the outcome of a combined analysis of several small experiments which all point in the same direction (or at least in a similar direction) will be more significant than that of any one of those experiments, as is shown in the larger table above.

I hope this clarifies the extensive discussion on this point and puts Aschlafly’s mind at rest on this subject. KennyMac 08:20, 18 September 2008 (EDT)

That's very nicely put, thanks. You should work on some of the stats pages here. Of course, technically any sample is ultimately just a combination of n samples of size 1. MikeR 13:28, 18 September 2008 (EDT)
I'll take a look at this Friday. It's not immediately obvious what the point is to your analysis above.--Aschlafly 23:46, 18 September 2008 (EDT)
"The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so"--KingOfNothing 00:57, 19 September 2008 (EDT)
This makes no sense as an argument. It may be true in this simple case that you can do one large or several small samples and get similar results - which is quite obvious and wouldn't need such a detailed rant. However, you provide no mathematical proof, just one example. Etc 01:08, 19 September 2008 (EDT)
"It's not immediately obvious what the point is to your analysis above". No surprise, Aschlafly, really. Maybe you should take your own advice: "I suggest you try harder with an open mind". --CrossC 02:46, 19 September 2008 (EDT)

It is with great sadness that I note that the author of this - the only significant statistical explanation and discussion in this entire fiasco - has just been blocked for five years. Even his email is blocked, so he can't even appeal the action. I don't see such maneuvers as having contributed to the much vaunted "open mind" of which various people here speak. BenHur 10:27, 19 September 2008 (EDT)

REPLY: I have now reviewed the above analysis, and it supports Point 5 rather than the PNAS paper. Point 5 stated, "The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." The analysis above does nothing more than reinforce Point 5 by combining experiments based on sample size.

In Pavlovian manner, some Lenski types nod their head here in agreement at the above analysis, apparently unaware that it reinforces Point 5.

When combining results from samples that are vastly different in sample size, it is necessary to factor in the different sample sizes. Apparently the PNAS paper failed to do that, which helps explain why it refuses to provide a meaningful response to Point 5.--Aschlafly 19:24, 19 September 2008 (EDT)

(rants below were deleted for being non-substantive in violation of this page's rules.)--Aschlafly 19:24, 19 September 2008 (EDT)

I understand point 5 now, or at least I think I do. What we have as a "sample" is either:
1. Individual cultures (Schlafly)
2. Cultures that developed cit+. (Lenski)
Schlafly contends that the sample should be all the cultures and that Lenski has, improperly, filtered the sample by excluding the vast majority of it (i.e. all those cultures that did not become cit+). Am I right in thinking this is the argument? --Toffeeman 19:57, 19 September 2008 (EDT)
No, we're talking about how Lenski combined a large study (which did not really support Lenski's hypothesis) with small studies (which Lenski claims does support his hypothesis). The studies were not combined in a logical manner with proper weighting given to the much bigger size of the large study.--Aschlafly 23:15, 19 September 2008 (EDT)
Aschlafly, have you read the paper on z-transforms which explains the statistical technique used? You can download a .pdf copy of the paper for free here. --BillA 06:30, 20 September 2008 (EDT)
You say "statistical technique used," but you should have said "statistical technique cited." In fact, a close reading of the Z-transform paper provides more support for Point 5: combined studies must be weighted based on sample size:
"When there is variation in the sample size across studies, there can be a noticeable difference in the power of the two methods, with the weighted Z-approach being superior in all cases. As such, we should always prefer the weighted Z to the unweighted Z-approach when the independent studies test the same hypothesis."see p. 1371.
In other words, the cited paper actually supports Point 5.--Aschlafly 09:34, 20 September 2008 (EDT)

(unindent)Lenski used the weighted method. See note 49 to the paper and the text around the combination. Of course there is the question of on what basis Lenski weighted the results. Lenski weighted the results on the basis of the Cit+ numbers and we may think it would have been better to weight on the basis of the number of replicates. I have below the calculations (not mine) of combined P-values based on 1) no weighting, 2) weighting on the basis of Cit+ and 3) weighting on the basis of replicates. The weighted Z-transform is: Z = SUM(weight × Z-score for each run) / SQRT(SUM(weight² for each run))

Expt#   p-value   z-score   #Cit+ muts   #replicates
1       0.0085    2.387      4             72
2       0.0007    3.195      5            340
3       0.0823    1.390      8           2800

Applying the formula described above:

Weighting             Z transformed      p
Equal                 4.025             <0.001
By total Cit+         3.576             <0.001
By total replicates   1.825              0.034

So weighting on the basis of the number of replicates considerably increases the P-value. It remains, however, well within the range of statistical significance (0<P<0.05). If we hold that Lenski should have weighted on the basis of replicates then he should have rejected the null hypothesis and reached exactly the same conclusions that he did. The entire paper would have been exactly the same except the sentence “the result is extremely significant (P<0.0001) whether or not…” would read “the result is significant (P<0.04) whether or not”. Point 5 establishes one number and an “extremely”. Point 5, therefore, has no weight (excuse the pun). --Toffeeman 15:18, 20 September 2008 (EDT)
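
For readers who want to check these figures, here is a minimal sketch of my own (not Toffeeman's calculation) of the weighted Z-method described above, using SciPy's normal distribution to convert between P-values and Z-scores; it reproduces the three combined values in the table:

    from math import sqrt
    from scipy.stats import norm

    p_values   = [0.0085, 0.0007, 0.0823]    # one-sided P-values of the three replays
    cit_counts = [4, 5, 8]                   # Cit+ mutants per replay
    replicates = [72, 340, 2800]             # replicate cultures per replay

    z_scores = [norm.isf(p) for p in p_values]   # convert each P-value to a Z-score

    def combined_z(z, weights):
        # weighted Z-method: Z = sum(w_i * z_i) / sqrt(sum(w_i^2))
        return sum(w * zi for w, zi in zip(weights, z)) / sqrt(sum(w * w for w in weights))

    for label, w in [("equal", [1, 1, 1]),
                     ("by total Cit+", cit_counts),
                     ("by total replicates", replicates)]:
        zc = combined_z(z_scores, w)
        print(f"{label:20s} Z = {zc:5.3f}   P = {norm.sf(zc):.2g}")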

Sorry, Toffeeman, a falsehood is still a falsehood. Based on your own posting, if Lenski had applied the Whitlock Z-transform paper in a logical manner, the results would not have been nearly as striking as Lenski claimed (his paper said the results were "extremely significant"). Moreover, I found Lenski's description of his application of the Whitlock paper to be particularly misleading. Lenski's use of "whether or not"[1] obscures the basic error that he did not apply Whitlock's paper in the straightforward, correct manner. I think the wording in the Lenski paper deliberately obscures this falsehood from the reader.
People have free will to embrace and defend falsehoods. I don't expect them to change quickly or admit they were wrong. But you'll find me defending and promoting the truth.
Point 5 remains valid and the falsehood remains uncorrected by PNAS or Lenski. Four other points in higher priority remain uncorrected by them also.--Aschlafly 16:47, 20 September 2008 (EDT)
"a falsehood is still a falsehood". Precisely, the null hypothesis should be considered false and the conclusions of the paper stand.
Oh? Do you mean Lenski's falsehood? And what falsehood is that? Lenski said that he had calculated the P-value without weighting and that had come out at <0.0001. That is true, not false. Lenski said he had calculated the P-value weighting on the basis of the Cit+ replicates and that had come out at <0.0001. That is true, not false. There is no falsehood. Lenski did not mention the results of weighting on the basis of replicate numbers. He thus made no claim about weighting on the basis of replicate numbers. If he made no claim he cannot have made a false claim.
How do the words Lenski uses "mislead"? If you read them as written, what conclusion do you come to? You are led to the conclusion that the mutation was not "rare-but-equal", instead it was contingent. That is the right conclusion. If Lenski had presented the data in a different manner (perhaps by including the results of weighting on the basis of replicate numbers) what conclusion do you come to? You are again led to the conclusion that the mutation was not "rare-but-equal", instead it was contingent. That is the right conclusion. To mislead you must be led to a conclusion that is incorrect. By Lenski's paper you are not led to a conclusion that is incorrect. Thus it cannot be said to be "misleading".
I shall not comment on your second paragraph, the temptation to "Tu Quoque" would be too great.

--Toffeeman 17:13, 20 September 2008 (EDT)

The falsehood consists of pretending to apply the Whitlock Z-transform in a straightforward, logical and correct manner. I think the Lenski paper is intentionally misleading by using the "whether or not" wording, when both alternatives are nonsensical. Point 5 has been proven above to be correct in identifying an error in the Lenski paper.
"Toffeeman", your blocking history suggests you have been less than straightforward yourself. Go elsewhere if you seek to be deceitful. You're not fooling anyone here.--Aschlafly 17:54, 20 September 2008 (EDT)
Lenski did apply the Z-transformation correctly, both weighted and unweighted. The data points extracted from each replay are the generation numbers of those replicates that gave rise to Cit+ mutants. Thus Replay 1 produced four data points: 30,500 31,500 32,500 32,500. Replay 2 produced five data points: 32,000 32,000 32,000 32,000 32,500. Replay 3 produced eight data points: 20,000 20,000 27,000 27,000 31,000 31,500 32,000 32,000. Thus the N for replay 1 is 4; replay 2 is 5 and replay 3 is 8. The fact that replay 3 used 38 times as many replicates as replay 1 does not mean that it should be weighted 38 times as much; it only produced twice as much data, not 38 times as much.
Suppose I want to find out what the average age of a murderer is in three cities. In L.A. I interview 72 random people and find that 4 of them were convicted of murder; I record the ages of the four. In Seattle I interview 340 people and find that 5 of them are convicted murderers; likewise in Singapore I interview 2800 and find 8 murderers. In the end, I have 4,5, and 8 data points from the three cities; the number of people I had to interview to obtain those data points doesn't factor into the analysis of what the average age of the murderers is.--Brossa 00:06, 21 September 2008 (EDT)
In your first paragraph you simply repeat the error underlying Lenski's paper. You, like the paper, incorrectly apply Whitlock's Z-transform.
The quality and reliability of data is proportional to sample size, and when different studies are combined they need to be weighted accordingly. The results from a very large sample size would not be weighted equally with the results from a small sample size, as you and Lenski have done. That's basic logic, though I'm not optimistic that you or Lenski will admit it. Open-minded people who respect logic have no difficulty elevating logic over personal whim.-Aschlafly 11:30, 21 September 2008 (EDT)
Andy, if you read Whitlock's paper you would see that it says, and I quote, "Ideally each study is weighted proportional to the inverse of its error variance, that is, by the reciprocal of its squared standard error." It says nothing about weighting according to sample size, which is what you seem to be insisting should be done.
Also Whitlock acknowledges in the paper that there is no preference for weighted versus equal weighting, so the fact that both equal weighting and weighting by the standard error give a statistically significant result shows that the 3 experiments combined support rejection of the null hypothesis. DanB 20:39, 21 September 2008 (EDT)
ASchlafly, you state that I incorrectly apply "Whitlock's Z-transform" (actually the test belongs to Mosteller & Bush[2] and/or Liptak[3]). Whitlock describes weighting by the reciprocal of the squared standard error. The standard error of the mean is proportional to 1/sqrt(N), so the reciprocal of the squared standard error is proportional to N. Thus larger studies are given more weight. I maintain that the sample sizes N of the three replays are 4, 5, and 8 respectively. Weighting based on those three N does not weight all three replays equally as you claim: it gives replay 2 25% more weight and replay 3 100% more weight than replay 1. Rather than simply repeating that I am wrong, will you please state what you think the sample sizes of the three replay experiments are, and, in your opinion, what the correct application of the Z-transformation would be?--Brossa 18:00, 22 September 2008 (EDT)
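
One detail worth noting about the formula Brossa invokes: the combined Z-statistic depends only on the relative weights, since a constant factor applied to every weight cancels between numerator and denominator. So weighting by n = (4, 5, 8), or by anything proportional to it, gives the same answer. A quick sketch (my addition, not part of the discussion) illustrating this:

    from math import sqrt

    def combined_z(z, weights):
        # weighted Z-method: Z = sum(w_i * z_i) / sqrt(sum(w_i^2))
        return sum(w * zi for w, zi in zip(weights, z)) / sqrt(sum(w * w for w in weights))

    z = [2.387, 3.195, 1.390]           # Z-scores of the three replays (table above)

    print(combined_z(z, [4, 5, 8]))     # 3.576...
    print(combined_z(z, [8, 10, 16]))   # identical: doubling every weight changes nothing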
The Lenski paper states how it weighted the experiments, and that weighting is incorrect. Admit it. Moreover, the incorrect weighting in the Lenski paper was not likely an inadvertent error, as it inflated the significance of the results. I found the wording used by the Lenski paper to describe its (incorrect) weighting to be artfully misleading.
Provide me with federal funding as Lenski received, and I'll write a paper for you. But I don't have to write an alternative paper to point out glaring errors in Lenski's paper.--Aschlafly 08:35, 23 September 2008 (EDT)
I've been following this discussion for a while and I have to agree with ASchlafly. It hardly seems fair that he should have to, in his spare time, replicate an experiment done by a professional just to "earn" the right to criticize it. I am unfamiliar with statistics, but if some complicated transform goes against common sense, common sense should prevail. After all, there are lies, damned lies, and statistics... AndyM 10:57, 23 September 2008 (EDT)


(unindent)I'm not asking anyone to write a paper or replicate an experiment. I'm asking ASchlafly to support his statement "The results from a very large sample size would not be weighted equally with the results from a small sample size, as you and Lenski have done"(bolding mine). I have stated publicly, subject to challenge by others, that the sample sizes (n) of the three replays are four, five, and eight respectively. Furthermore, using n of 4, 5, and 8 in the weighted Z-method DOES NOT weight all the replay experiments equally - it weights replay 3 twice as much as replay 1 and 8/5 as much as replay 2. Tell you what: I'll drop all my questions about Monte Carlo and the Z-transform, and simply ask ASchlafly one question: what is the sample size, n, of the second replay experiment? He need not even do any calculations - a statement in words that will allow someone else to do the calculation will suffice. This is not a complicated question to answer; the paper states how many replicate cultures there were (340), how many cells there were in each replicate (3.9x10^8), how many replicates gave rise to Cit+ cells (5), and which generations those Cit+ replicates came from (4 from 32,000 and one from 32,500). I will even give my answer: five. Furthermore, I will say why I believe that, using the murderer/age analogy: performing the 340 replicates is the same as interviewing 340 people in order to find out if any of them are convicted murderers. Finding that five replicates gave rise to Cit+ mutants is the same as the survey finding that 5 of those 340 people were convicted murderers. Finding that the Cit+ mutants arose from 4 replicates from generation 32,000 and 1 from generation 32,500 is the same as finding the ages of the murderers. The five data points in the Lenski study allow one to calculate the 'mean generation of clones yielding Cit+': 32,100. This is the same as finding the mean age of the five murderers. If I want to compare this hypothetical murderer age study to some other study of the mean age of murderers, I would weight the studies based on how many murderers were in each study, not on how many non-murderers were included in the initial survey.

Surely ASchlafly can say what he thinks the n of the second replay is, even if he won't say why he thinks it. Is it five? 340? The number of replicates times the number of cells per replicate? Something else? No analysis need be performed on the resulting number.--Brossa 15:54, 23 September 2008 (EDT)

OK - would you care to put in writing that after ASchlafly gives you his response, you won't start obfuscating the issue with Monte Carlo and Z-transform issues? You understand: it is typical of liberals to, after being proven wrong, to start pretending that they were talking about an entirely different issue altogether. After ASchlafly states the sample size of the second replay experiment you will consider yourself answered. Correct?
Brossa, your rant is misplaced. One cannot salvage an error in logic by questioning which of the superior alternatives should be used instead. The sample size of an experiment is the number that comprises the underlying sample used in the experiment, not the number of a certain outcome from the experiment. Maybe you can debate yourself over what the correct underlying sample size is, but it is plainly not the number of a certain outcome from the experiment.--Aschlafly 19:28, 23 September 2008 (EDT)

Maybe the Journal of Nature can be your next letter submission source. International Weekly Journal of Science -- jp 21:17, 22 September 2008 (EDT)

Point 5 Rejected

Referring to ASchlafly's latest revert: OK, so let's forget about my specialist knowledge of the subject. It really doesn't matter. In science, what matters is what is said, not who says it. Remember that the most outstanding scientific discoveries of the 20th century were made by a patent clerk; the facts that he had no academic position at the time and a rather moderate educational record were irrelevant.

So, I repeat: if you don't agree with me, present a reasoned argument against my points and don't just delete my comments (it's not wiki etiquette to delete other people's comments on a Talk page unless they're abusive or obscene). At least you haven't been so feeble as to block me to prevent me from contributing to this page, as you've done to other contributors. So I now repeat once again my cut-and-paste from Longstop's contribution a couple of weeks ago - as I said, if you don't agree with it, present a rational argument against it.

The main point he [Kennymac] makes is quite correct, that combining experiments will tend to give a more significant result (i.e. a lower P-value) than any single experiment on its own.
The point that ASchlafly makes (headed REPLY, shortly after KennyMac's example) is slightly misleading. You don't choose which experiments to combine on the basis of their outcomes - you either combine all relevant experiments or none. In this case, Professor Lenski chose to combine all the experiments and present a single analysis of them. That's fine.
The issue of whether to weight the Z-test according to sample size is not a straightforward issue. Before Whitlock's recent paper, the consensus was that the P-value already depends on sample size so it should not be weighted according to sample size. (Basically, a high degree of significance, i.e. a low P-value, can be achieved by having either a large difference between the treatments or a large experiment or, of course, both.) Whitlock's work, based on computer simulations, is an interesting contribution but cannot be regarded as the last word on the subject because any computer simulation involves assumptions about the structure of the particular experiment simulated. I cannot imagine any editor of a scientific journal rejecting a paper because it used an unweighted Z-test rather than the weighted version (or vice-versa).
The conclusion, therefore, is that while there is a quantitative difference in the level of significance obtained by weighted and unweighted Z-tests, there is undeniably a significant biological effect. ASchlafly and other contributors should not be concerned that there is anything incorrect about the biological conclusions of Professor Lenski and his students or that there was anything at all underhand about their analysis or presentation of their data. DavyJones 19:02, 2 November 2008 (EST)
DavyJones, Whitlock's paper is correct and Lenski should have applied it. You provide nothing meaningful to the contrary. Read my posting below and contribute to this encyclopedia, or go rant somewhere else. Your account will be blocked if you simply continue to rant here.
Moreover, as I already explained above, I found Lenski's description of his application of the Whitlock paper to be particularly misleading. Lenski's use of "whether or not" obscures the basic error that he did not apply Whitlock's paper in the straightforward, correct manner. I think the wording in the Lenski paper deliberately obscures this falsehood from the reader.--Aschlafly 19:30, 2 November 2008 (EST)
Michael Whitlock does not agree with your interpretation. He states that "the appropriate weightings should be 4, 5, and 8, the sample sizes."[4]--Brossa 18:10, 13 November 2008 (EST)
Aschlafly,
1. A reasoned argument is not a rant. No-one should be afraid of a reasoned argument and it would be cowardly to use superior power (i.e. control over who can and cannot contribute to CP) to avoid having one.
2. Please see above for my comment regarding Whitlock's paper on whether or not to weight the Z-test. That paper is not decisive (it cannot be decisive as it only considered a limited number of scenarios in computer simulations) so it cannot be considered "correct" (your word). I would say that whether or not to weight the Z-test is purely a matter of choice at the present state of scientific understanding.
3. I and many others have pointed out on this page (including discussions which you've archived) that it doesn't actually matter in biological terms whether Lenski used the weighted or unweighted Z-test. The result is still significant, albeit at different levels.
4. I understand from the fact that you're now arguing about the precise level of significance of Lenski's result that you've conceded my main point, that your Point 5 is not tenable. Can we at least agree on that question? DavyJones 18:19, 3 November 2008 (EST)

Refereeing of Scientific Papers

Another point that Aschlafly and others deleted was my explanation of why CP readers shouldn't be concerned about the fact that the Lenski paper was reviewed within 14 days. This is perfectly normal practice for a leading journal. If you're invited to review a paper for a leading journal and they give you 7/10/14/etc days, you only agree to review it if you're sure you're going to have time before the journal's deadline. So there's absolutely no reason to suppose the review wasn't done properly.

Again, if you disagree, discuss, don't delete and don't block. DavyJones 20:39, 31 October 2008 (EDT)

References

  1. "We also used the Z-transformation method (49) to combine the probabilities from our three experiments, and the result is extremely significant (P < 0.0001) whether or not the experiments are weighted by the number of independent Cit+ mutants observed in each one." (Lenski paper at 7902).
  2. Mosteller, F. & Bush, R.R. 1954. Selected quantitative techniques. In: Handbook of Social Psychology, Vol. 1 (G. Lindzey, ed.)
  3. Liptak, T. 1958. On the combination of tests. Magyar Tud. Akad. Mat. Kutato Int. Kozl. 3: 171-197
  4. Personal email communication with M. Whitlock, 9/22/08

Have we given up?

Not much activity here in the past few weeks. Have we given up on pursuing this matter further? Conservapedia should work on continuing to keep the Lenski study and its flaws in the news. Is there anything else we can do? Perhaps write to local officials about the misuse of public funds for studies like Lenski's, and also to complain about PNAS? --DRamon 16:39, 31 October 2008 (EDT)

We exposed five fundamental flaws in the study, and told the truth. The truth doesn't care whether some refuse to accept it. There will always be people who deny the truth, particularly when they stand to profit from falsehoods. For now, I've moved on to more pressing issues on this site.--Aschlafly 16:56, 31 October 2008 (EDT)
What do you mean by "profit from falsehoods"? Are you seriously suggesting that everyone who has a different point of view from your own must have mercenary motives for disagreeing with you? That's a very bizarre point of view. DavyJones 20:39, 31 October 2008 (EDT)
The suggestion is that those who lie very frequently do so for reasons of greed. Your denial of this self-evident fact, and your deceitful twisting of Mr Schlafly's words, speak volumes about you, DavyJones. Bugler 06:56, 1 November 2008 (EDT)
Well, I don't see how you can accuse me of twisting Aschlafly's words. The implication of his statement above is that he believes that what he wrote was true (although his Point 5, at least, was incorrect) and that people who disagree with him ("deny the truth", in his words) are seeking to profit by doing so, which is not correct.
More importantly, I don't see how Aschlafly can give up on an argument he started. It's plain from what KennyMac and others wrote that his Point 5 is not correct. Now, is he going to do CP readers the favour of reaching a rational conclusion to this discussion or is he going to resort to his ploy of blocking contributors with whom he doesn't agree? DavyJones 20:03, 1 November 2008 (EST)
"DavyJones" you're making false statements here and that will not be allowed. Point 5 is confirmed and you've provided nothing meaningful to rebut it. Also, evolution syndrome is not allowed here either. If you do not contribute in a meaningful way to non-evolution entries in this encyclopedia, then your account will be blocked.--Aschlafly 19:26, 2 November 2008 (EST)