Talk:PNAS Response to Letter

Notice: misrepresentations are not going to be allowed on this page. Substantive comments only, please.

Initial discussion

In this day and age, scientists have their own agenda and have corrupted science. Just look at global warming or cloning or stem cells as proof. With that said, the only way to get the real truth is by suing in court. Unfortunately, scientists are bound to vast wealth and have the power to defend themselves vigorously. If ever a fund was set up to pay for a suit, I would contribute. It is a classic case whereby, the truth be known, the truth will prevail. -- jp 22:14, 12 September 2008 (EDT)

Thanks, Jpatt. One additional beauty of the truth is that it remains the truth no matter how much some deny it. PNAS can deny its errors all it likes, but that doesn't change the fact that they are errors.--Aschlafly 22:21, 12 September 2008 (EDT)
Well said, Andy and Jpatt. It is perhaps worth pointing out that the President of the NAS is a "climate scientist". If the Academy is dominated by pseudoscience of that kind, it's hardly a surprise that their response was to cover up and deny the truth. Nevertheless, they had to be given their chance to make good before further steps are taken. I suggest now that the issue be put to potentially supportive congressmen/women and senators, given the public funding for Lenski's activities. Bugler 05:46, 13 September 2008 (EDT)
Right. The next step is to criticize the taxpayer funding of this junk science. When the authors and the publishing organization will not even address statistical errors in the work, then it's time to pull the public funding.--Aschlafly 10:13, 13 September 2008 (EDT)
They did address your claims of statistical errors: they said that you were wrong to the degree that they were able to determine what you were talking about. You made a qualitative argument and got a qualitative response.--Brossa 11:00, 13 September 2008 (EDT)
Which of the 5 specific errors do you think they addressed? None, as far as I can tell.--Aschlafly 11:11, 13 September 2008 (EDT)
The response addresses your qualitative claims about the paper's statistical methods raised in points two, three, and five by the following:"Nevertheless, from a statistical point of view, it is proper to combine the results of independent experiments, as Blount et al. did correctly in their original paper"(emphasis added); in fact the longest paragraph in the response deals entirely with the statistical claims of the letter and dismisses them.--Brossa 11:32, 13 September 2008 (EDT)
But that's never going to happen because the data availability requirements for public funding have already been met. Jirby 11:03, 13 September 2008 (EDT)
No, I don't think the researchers have met NSF guidelines as referenced in the letter.--Aschlafly 11:11, 13 September 2008 (EDT)

Proof?Jirby 11:26, 13 September 2008 (EDT)

As I said, the NSF guidelines are referenced in the letter.--Aschlafly 11:29, 13 September 2008 (EDT)

You mean the notebooks, etc.? Jirby 11:32, 13 September 2008 (EDT)

Oh my dear God, I can't believe this!! Where has this beautiful country gone to if even science is not reliable anymore? Hope things will change in the future. Good thing there still are people like Mr. Schlafly, who have the brains and power to stand up and turn the people of America in the right direction again. Raul 12:24, 13 September 2008 (EDT)

Mr. Schlafly, I have a question, BTW. Was this letter received on paper, or electronically? Because if it was on paper, perhaps it would be a good idea to scan it and post it. It would add a lot to the encyclopedic value of the article. Raul 12:26, 13 September 2008 (EDT)

PNAS procedures required me to submit the letter electronically using its own electronic submission software. When the PNAS acknowledged that my submission complied with all its requirements, it also said that the authors of the original paper had been notified of my letter.--Aschlafly 12:39, 13 September 2008 (EDT)


Too bad. It doesn't make much sense, though; guess that tells a lot about PNAS. What they should care about is the actual text, not the medium it is in. That's not the case IMHO for encyclopedias, however. If not for anything else, a scan would have been useful as a reference for the digital text. Oh well... Raul 12:53, 13 September 2008 (EDT)

Honest question, is it against the rules to disagree with Andrew Schlafly or criticize that letter? I just want to know so I don't end up in the same situation as other people who have been censored here.--IanG 17:03, 13 September 2008 (EDT)

I believe this page is only for discussion of the response, which is quite straightforward. Criticism of the letter should have gone on its talk page, but it's too late now. Oh well. There is no censorship on Conservapedia. Your comment is not substantive - please refactor it. Praise Jesus, Pila 17:26, 13 September 2008 (EDT)
If you REALLY believe that Lenski has committed academic FRAUD, then lodge a formal complaint with his university. Such complaints are taken very seriously and can lead to loss of tenure and dismissal from the university, and with that on his record no other institution would hire him on any basis. Markr 19:40, 13 September 2008 (EDT)

(deleted non-substantive comments). Again, the heading on this page will be enforced: "Substantive comments only, please." If you have a substantive comment about the identified errors and the PNAS's failure to address them, then please comment. Non-substantive comments will be removed. This is an encyclopedic-based search for the truth, not a blog or a place to refuse to contribute in a substantive manner.--Aschlafly 20:29, 14 September 2008 (EDT)

Since you've taken the liberty of deciding what is substantive or not in deleting posts like my last one, I have a serious, respectful question to ask: what exactly do you mean by "substantive"? I didn't attack you or your letter; I was attempting to state that the PNAS response did, in fact, address the points of your letter. Whether one considers the PNAS response to be correct or not is a separate matter - they read your objections and responded to them instead of ignoring them, that's all.
My last post would therefore seem to have met Webster's definition of substantive - "having or expressing substance" - but apparently the measure of "substantive" for a comment on this page is whether it agrees with your view or not. That's your prerogative, but if you intended to allow comments on this page other than endorsements of your view, then please let me know what I did wrong. --DinsdaleP 21:00, 14 September 2008 (EDT)
Dinsdale, we're here to think and learn. You can look at my letter, look at the PNAS's response, and provide some substantive insights. We're not here to say something like, uh, go ask someone else if a (9th grade-level) graph is correct or not. If you think the substantive issues are beyond your depth, and I don't, then comment on them in a substantive and intellectual and specific way. This is not another waste-of-time blog, and it's not going to become one.--Aschlafly 21:19, 14 September 2008 (EDT)
DinsdaleP, you did attack ASchlafly at least indirectly. Suggesting that the PNAS response has merit might also be interpreted by some to mean that the letter ASchlafly sent wasn't the very best it could be. Now, contrast that to my deleted comment suggesting that a time-tested response would be to actually try reproducing the experiment. Many bad experiments are exposed when others fail to get the same results as the original authors. I think this would be an excellent, substantive avenue to pursue.--Argon 21:23, 14 September 2008 (EDT)

ASchlafly- you said above, "PNAS can deny its errors all it likes, but that doesn't change the fact they are errors". As you say, in a fair discussion of the merits of two sides of an argument, it's important that both sides take a good, hard look at their own propositions. Since your position is that PNAS has errors on its own side, I'm just curious to know if you are in any way prepared to accept that there might be errors in your own argument, or are you absolutely 100% certain that your position is error-free? I'm wondering if perhaps before submitting this issue to funding authorities, you would be prepared to have an independent statistical expert take a look at your proposal? BenHur 22:17, 14 September 2008 (EDT)

The only thing I was criticizing in my original comment today was the Main Page headline statement that "PNAS refuses to address the 5 errors in the Lenski study identified by the Letter to PNAS". What I pointed out is that they did, in fact, respond, by criticizing the statistical analysis used by Aschlafly. I'm not supporting or attacking Mr. Schlafly's analysis, because I'm the first one to admit that I have no expertise in this area. My conclusion was a constructive suggestion that Mr. Schlafly present a rebuttal to the PNAS decision by showing how his analysis and conclusions were not erroneous in the manner they claimed. A public, statistical defense of Mr. Schlafly's work, perhaps accompanied by the endorsement of some regarded experts in the field, would be the best response to PNAS choosing to respond by email instead of through the journal.
I wrote both the original draft letter to PNAS from Mr. Schlafly's notes and my earlier comments today with the intent of contributing constructively. I hope this clarification of my view is substantive enough to remain. --DinsdaleP 22:24, 14 September 2008 (EDT)

Folks, I've pointed out five very specific statistical (logical) errors. The torrent of nonsense above even includes an absurd demand for me to try to repeat the experiments, as though that would somehow correct a flawed paper.

The math is wrong in the PNAS paper. No one at PNAS is even willing to put his name on a response claiming that the math is correct, because it isn't. I'm not going to allow further nonsensical postings here. If you want to address the statistical (logical) errors in a specific way, fine. If you feel it is beyond your depth to do so, then move on. Thanks and Godspeed.--Aschlafly 22:49, 14 September 2008 (EDT)

"The paper incorrectly applied a Monte Carlo resampling test to exclude the null hypothesis for rarely occurring events." Specifically, why is it incorrect to apply a Monte Carlo test in this circumstance, or why was their application incorrect? Do your own calculations produce a p-value that differs from the published p-value of 0.08?
"The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." This sounds as though you disagree with the use of the Z-transform technique used to combine the data from the three replay experiments, or believe that the Z-transform analysis was performed incorrectly. Which do you disagree with - the technique, the application, or both, and why? --Brossa 23:31, 14 September 2008 (EDT)
(removed personal and non-substantive attack that violated clear rules for this discussion; also, unsubstantiated claims of expertise are not allowed)--Aschlafly 19:16, 15 September 2008 (EDT)
I'm still a little unclear on your position ASchlafly - are you absolutely 100% certain your own statistical analysis is correct on this? Before you proceed further it's important to know that the technical analysis you are presenting is indeed indisputable. BenHur 12:01, 15 September 2008 (EDT)
Ben, if you have at least a 9th grade-level education, then you can look at the 5 errors and decide for yourself, and comment in a substantive manner. Yes, they are obvious and basic errors, and the fact that the reviewer of my letter at PNAS would not attach his name to a specific denial speaks volumes.--Aschlafly 17:11, 15 September 2008 (EDT)
Aschlafly, I'm a little confused as to why you removed this comment of mine? I did not cast any aspersions on your argument, and was merely answering your question as you posed it to me? Is "declining to comment" an indictable offense? BenHur 19:20, 15 September 2008 (EDT)
Your comment was not an "indictable offense," but it violated the rules of this page: "Substantive comments only, please." Got it? Either say something substantive, or edit somewhere else. Thanks.--Aschlafly 19:33, 15 September 2008 (EDT)
This is confusing. Is "I agree with your thesis" or "your methods are 100% correct" a substantive comment? It can be very hard to infer your intent, Mr. Schlafly, I'm sorry to say. I have no quarrel with you, but I'm becoming confused as to what is and isn't appropriate comment on what is labelled a "Talk Page". Are there special rules for this Talk Page? If so, perhaps the title on the page might be changed? BenHur 19:42, 15 September 2008 (EDT)
No, your quoted phrases are obviously not substantive comments. Your statement of agreement means nothing. I doubt you are even using your real name, for starters, which renders your agreement even sillier. I repeat for the nth time, say something substantive or edit somewhere else.--Aschlafly 19:52, 15 September 2008 (EDT)
(removed another non-substantive posting)--Aschlafly 20:28, 15 September 2008 (EDT)
Aschlafly, perhaps you did not realize that my earlier questions were meant for you. I wish to address the statistical errors in a specific way, which requires a better understanding of your position. I will repeat my main questions: why was it incorrect to apply Monte Carlo techniques to the data in the paper, or in what way was the Monte Carlo technique performed incorrectly? Second, why was it incorrect to apply the Z-transform to the data from the three replays, or in what way was the Z-transform performed incorrectly? In lieu of the Monte Carlo/Z-transform techniques, what statistical calculations should have been performed? Feel free to be technical; I have more than a ninth grade education.--Brossa 17:38, 15 September 2008 (EDT)
If you're skipping over the main points, then concede their validity or explain why you've skipped over them.--Aschlafly 19:14, 15 September 2008 (EDT)
If you would like me to go through the original letter point by point, and respond to all of the claims in detail, I'd be happy to - but only after I have your explicit permission to do so on this talk page (or some other page of your choosing). Until I'm given that permission, I'll await your response to my previously-stated questions about points three and five: your statements about the incorrect application of Monte Carlo resampling and the erroneous combination of the three replay experiments (the Z-transformation). --Brossa 22:00, 15 September 2008 (EDT)
Substantive postings are welcome, but I still don't have an explanation for why you skipped over the main points 1 and 2. Do you concede them?--Aschlafly 22:04, 15 September 2008 (EDT)
With all due respect, Aschlafly - and excuse me if I feel the need to tiptoe as gently as possible here - I would cautiously disagree with your statement that "Substantive postings are welcome". It seems to me that you have explicitly removed only the substantive comments from this page. What is left are weak and non-substantive comments, to be sure. It seems very difficult to comment in any way here on this matter, as you seem to be of the opinion that you and you alone are correct. With the best will in the world, I certainly don't feel able to speak freely here, despite being well qualified to do so. Perhaps a slightly more lenient approach might help your own cause? BenHur 22:23, 15 September 2008 (EDT)
Since I am new here I am a bit timid about chiming in on this discussion, but I feel compelled to say that (as I pointed out in the section below), I think this page should be reserved for appropriate responses to the PNAS letter - not for opinions regarding how Aschlafly is running this website. If you think you have valid points to bring out about that - maybe post them to his talk page? --DRamon 22:31, 15 September 2008 (EDT)

(unindent) No, I do not concede points one and two; I've commented on them on the Talk:Letter to PNAS page. I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to. Similarly, your point four is not something that can be resolved by argument. The only way to prove that the results were due to contamination would be to repeat the experiment with different controls in place and demonstrate different results - which might spur some other labs to run a third or fourth trial to see who was right. It seemed to me that the only points that could be solved by discussion were three and five, since they made statements about the statistical methods that can potentially be resolved through debate. So, once again, what is your response to my questions regarding points three and five?--Brossa 22:34, 15 September 2008 (EDT)

Brossa, I could not find a meaningful rebuttal by you of point 1, and I found no comments by you on point 2, at Talk:Letter to PNAS. Are you trying the classic trick of "the answer is over there," when it isn't? Point 2 alone completely disproves the PNAS paper's thesis, and yet you avoid it and skip towards less obvious errors. I'm happy to address the subtle errors once you address the obvious ones.--Aschlafly 16:10, 16 September 2008 (EDT)
Aschlafly, you are avoiding Brossa's question. Why can't you just answer it? Brossa said that he wanted to limit the discussion to these points. The reason that he wants to do that is because the PNAS response targets your approach to the statistical analysis. MickA 16:24, 16 September 2008 (EDT)
I don't mind addressing points one and two; I just don't think that we'll come to an agreement about them. Point one represents a misunderstanding of figure 3, which is labeled "Alternative hypotheses for the origin of the Cit+ function..." The figure does not represent the results of the experiments and does not conflict with them. It is a cartoon of the a priori hypothesis that was generated before the experiments were performed; it is not itself the hypothesis (the map is not the country). Note that the vertical axis lacks a scale; there is no way of knowing what the actual mutation rates are ahead of time. The location of the vertical jump on the graph is arbitrary; it has to lie somewhere between 0 and 31,500 generations, but that point could be anywhere. Quoting from the paper: "The historical contingency hypothesis predicts that the mutation rate to Cit+ should increase after some potentiating genetic background has evolved. Thus, Cit+ variants should re-evolve more often in the replays using clones sampled from later generations of the Ara-3 population." The hypothesis as stated does not specify a generation at which the potentiating mutation occurred. The hypothesis is not that potentiation took place at generation 31,000 rather than some other generation; it is that there was a potentiating mutation rather than a rare-mutation event. The results of the experiment do not disprove the contingency hypothesis; they confirm it and suggest that the potentiating mutation took place at generation 20,000. You think that the figure is the hypothesis; I think that the hypothesis is what is stated explicitly in the text of the paper; I doubt that we'll agree.
Point two states: "Both hypotheses propose fixed mutation rates, but the failure of mutations to increase with sample size disproves this." I disagree with this statement. The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were similar, but not the same. One could imagine a hypothesis that men commit murder most often between the ages of 25 and 35, with samples taken from the male populations of L.A., Singapore, and London. One would find different murder rates among the men of those three cities, but still might find (or not) that murderers in those cities tend to be between 25 and 35 years old. The experimental conditions of the replays were likewise not the same, just as Singapore is not the same as Los Angeles. The rare-mutation hypothesis does not mean that the mutation rate to Cit+ is the same for all experimental conditions anywhere; just that the mutation rate is constant given the conditions of a particular replay. It is possible to have different baseline mutation rates among the three replays, all of which follow the historical contingency pattern. Or the mutation rate could actually be the same across all three replays, and the results seen here are just a statistical fluke that would vanish if the replays could be run thousands of times. Either way, it's not fatal to the paper's conclusions.
Point two also states "If the authors claim that it is inappropriate to compare for scale the Second and Third Experiments to each other and to the First Experiment, then it was also an error to treat them similarly statistically." This is also incorrect in my view. One can state that the murder rate is different between two cities, and yet the murderers have some characteristic in common. Combining results from different samples is the bread and butter of statistical analysis. Meta-analysis, for example, is used to combine the results of studies that differ from one another far more than replay experiments one, two, and three do. If you wish to make a more specific argument about the techniques used to combine the three replay experiments into a single result, I'll address it. I think that the second part of point two is essentially the same as point five in that it criticizes the statistical method used to combine the results of the three replays (the Z-transform), as opposed to point three which mentions the Monte Carlo technique separately.
I therefore repeat my questions about points three and five, as stated previously. If you have other questions that I must answer first, please list them all at once, as I'm eager to move on to that discussion.--Brossa 17:53, 16 September 2008 (EDT)
It's foolish to debate someone who has already closed his mind. Point 1 is plainly correct: the PNAS article should admit and disclose that the false hypothesis was indeed proven to be false. If you stand behind the falsehood, then you'll refuse to admit other errors also.
Your refusal to concede point 2 is even more egregious. Lenski combined his experiments, and you can't claim that combination was simultaneously correct and incorrect. Moreover, the lack of scale also exists between Experiment 2 and 3, again disproving the underlying thesis of the paper.--Aschlafly 18:31, 16 September 2008 (EDT)
I already said that I didn't think that we would reach common ground on points one, two, and four, and that it would be a waste of time to rehash them. You asked me to, so I did. It seems foolish to then treat those points as a shibboleth and refuse to discuss the other points, which I am fully prepared to concede provided you elaborate on why the Monte Carlo technique and the Z-transform were either the wrong tests or performed incorrectly. I am capable of accepting two items from a list of five even if I reject the other three. Even if I were completely incapable of agreeing with you, I still don't see why you won't put your best mathematical argument forward about the paper's statistics. Were there too few Monte Carlo resamplings? Do you feel that the Z-transformation was performed incorrectly? Is the Z-transform itself suspect? Do you propose some other statistical analysis? If you do, there are any number of us here who can crunch the numbers again. What a coup it would be for you if we could use a technique that you suggested to obtain results that smashed Lenski's smug complacency!
I don't claim that the combination was simultaneously correct and incorrect, by the way - I claim that it was correct. Where do I imply otherwise? Also, it's not only improper to compare replay one with replays two and three for scale: it's impossible. Replay one involved constantly changing numbers of cells whereas replays two and three started with fixed numbers. How do you count the number of cells in the first replay to compare it to the other two? Is it the number of cells transferred each time? Is it the maximum population achieved in each flask prior to transfer? 750 generations passed in one case and 3700 generations in another before the Cit+ trait was seen - how do you factor that into the 'scale' equation? It is only the superficial resemblance of replays two and three that brings up the concept of 'scale'. The "underlying thesis" of the paper is not that there is a unique rate of mutation to Cit+ that applies across any and all experimental conditions.--Brossa 19:39, 16 September 2008 (EDT)
Brossa, you don't address how Lenski did combine the three experiments, and how Experiment 3 does not scale with Experiment 2. Given that you don't address the main errors, it's foolish to waste time discussing more subtle points with you. Put another way, there are plenty of open-minded contributors on this site. Why would one waste time discussing with a close-minded person instead?
Your account will be blocked for your 90/10 rule violation unless you improve soon. Thanks and Godspeed.--Aschlafly 20:06, 16 September 2008 (EDT)
ASchlafly:"Brossa, you don't address how Lenski did combine the three experiments"
Me, earlier on this page:"This sounds as though you disagree with the use of the Z-transform technique used to combine the data from the three replay experiments." "Second, why was it incorrect to apply the Z-transform to the data from the three replays, or in what way was the Z-transform performed incorrectly?" "...and the erroneous combination of the three replay experiments (the Z-transformation)." "If you wish to make a more specific argument about the techniques used to combine the three replay experiments into a single result, I'll address it. I think that the second part of point two is essentially the same as point five in that it criticizes the statistical method used to combine the results of the three replays (the Z-transform)..."
Blount and Lenski used Monte Carlo to derive a p-value for each of the three replay experiments; then used the Z-transformation to derive a final p-value from the three Monte Carlo p-values. I've mentioned the Z-transformation multiple times in connection to combining the results of the three experiments.
ASchlafly: "Brossa, you don't address...how Experiment Three does not scale with Experiment 2."
Me, earlier on this page: "The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were similar, but not the same." "It is possible to have different baseline mutation rates among the three replays, all of which follow the historical contingency pattern. Or the mutation rate could actually be the same across all three replays, and the results seen here are just a statistical fluke that would vanish if the replays could be run thousands of times. Either way, it's not fatal to the paper's conclusions." "It is only the superficial resemblance of replays two and three that brings up the concept of 'scale'. The "underlying thesis" of the paper is not that there is a unique rate of mutation to Cit+ that applies across any and all experimental conditions"
ASchlafly: "Your account will be blocked for your 90/10 rule violation unless you improve soon."
Me, earlier on this page: "If you would like me to go through the original letter point by point, and respond to all of the claims in detail, I'd be happy to - but only after I have your explicit permission to do so on this talk page (or some other page of your choosing). Until I'm given that permission, I'll await your response..." "I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to." "I already said that I didn't think that we would reach common ground on points one, two, and four, and that it would be a waste of time to rehash them. You asked me to, so I did."
I came into this discussion wanting to discuss specific issues: the basis of your objections to the Monte Carlo technique and the Z-transformation used to combine the results of the three replays into one final p-value. This could have been resolved quite quickly, but you said that I had to address all the points of your letter before you would answer my questions. I said repeatedly that I did not wish to do so without your explicit permission, which it seemed that you gave. So I jumped through those hoops, only to be accused of a 90/10 violation. Have I simply walked into a cunning trap?--Brossa 21:20, 16 September 2008 (EDT)
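For readers following this thread, the Z-transformation Brossa refers to (often called Stouffer's method) combines several one-sided p-values into a single one. A minimal sketch, with hypothetical p-values standing in for the three Monte Carlo results:

 from math import sqrt
 from scipy.stats import norm

 p_values = [0.08, 0.04, 0.02]  # hypothetical per-experiment p-values, not the paper's

 # Convert each one-sided p-value to a z-score, sum, and renormalize by
 # the square root of the number of experiments.
 z = [norm.isf(p) for p in p_values]   # isf(p) gives the z with P(Z > z) = p
 z_combined = sum(z) / sqrt(len(z))
 p_combined = norm.sf(z_combined)      # convert back to a one-sided p-value

 print("combined Z =", round(z_combined, 3), "combined p =", round(p_combined, 4))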

Please provide your statistical analysis

Andrew, if you are so sure that your statistical analysis of the Lenski paper is correct, you should publish it on Conservapedia. MickA 17:38, 15 September 2008 (EDT)

I did. Which point don't you understand?--Aschlafly 19:13, 15 September 2008 (EDT)
Could you direct me to the page showing your calculations? MickA 08:50, 16 September 2008 (EDT)

I think regardless of what the details behind the statistical analysis are, the shame here is that PNAS refused to address anything specific in their response. They simply glossed over everything that was said in the letter sent to them and gave a generic, unsubstantive response. All the people here that are trying to argue with ASchlafly about his position should instead focus on why it is that PNAS refuses to directly address our concerns. --DRamon 21:01, 15 September 2008 (EDT)

It is not the function of the editorial board to "defend" a paper. They mainly said that, from what Mr. Schlafly has written, they could not see the mistake in Dr. Lenski's statistics that Mr. Schlafly suggested. Nor is it the function of a reviewer to "put his name" on something. A reviewer should check whether the average reader of a journal will be able to learn something from a letter. If he thinks there is nothing to learn - either because the letter fails to bring new insight, or because the readers may not understand why using only conservative statistical methods, and nothing considered mathematically fragile by people with average mathematical knowledge, is favourable - then he should reject the letter. From the wording of the response, I had the impression that the reviewer did at least have a brief look at the original article. And PNAS rejected the letter for the very same reason people here are asking, namely the complete absence of alternative estimates of the numbers. I don't want to check the calculations; if Mr. Schlafly states that by using a well-defined method he finds another result, and states this result clearly, I am a priori very willing to believe him, because I see no reason not to take his word on it (since he has, to my best knowledge, no record of scientific fraud). After he states his result, I may find the time to try to reproduce it (although I am very busy). --Stitch75 13:00, 16 September 2008 (EDT)
Could you direct me to the page showing your calculations? MickA 15:41, 16 September 2008 (EDT)

Improper

I have no knowledge of this branch of science, so I won't comment on whether the results were correct, or whether PNAS's letter was unbiased. Then again, neither does Schlafly, who, from what is publicly known about him, is not a biologist. I find it somewhat inappropriate for someone with no training in a specific field to tell someone who has actually studied that field that he hasn't done his job right. It's common sense. Fred1776 14:30, 16 September 2008 (EDT)

That's an extraordinary and offensive remark from someone who admits complete ignorance. How can you - Fred1776 - know exactly how expert Andrew Schlafly is on this issue? He may not be a mathematician, but he is an experienced educator, amongst other things, with a legal training that enables his mind to get to the crux of a problem, as has been demonstrated very many times in this project. Biologists are properly open to the scrutiny of others. Bugler 15:34, 16 September 2008 (EDT)
I simply can't turn down the opportunity of agreeing with Bugler! Biologists, as with all experts, should be open to scrutiny from everyone. It's good to use people's expertise, but not at the expense of raising their pronouncements to the status of dogma. True, the lay person may not be able to formulate much criticism but sometimes, and especially when they stray outside the purely technical, all experts make mistakes that are evident to others. We should respect expertise, but stopping all questioning is giving respect beyond what is due. --Toffeeman 15:45, 16 September 2008 (EDT)
(EC)Well, the PNAS seems to think his understanding of statistical analysis is less than "elementary". The question is, given that virtually none of us are experts in statistics, why wouldn't we believe the PNAS? I mean, they are the trained scientists, right? If they told me my mis-understanding of statistical analysis was too fundamental to warrant a response in their journal, I'd show some humility and accept that as valid criticism. But hey, that's just me. KimEide 15:48, 16 September 2008 (EDT)
Kim, if you're naive enough to believe everything that the Liberal establishment tells you, well, hey, don't let us stop you. But don't think that you will be allowed to infect this site with your credulity. Bugler 15:54, 16 September 2008 (EDT)
Well if I'm credulous because I'd accept the opinion of an expert in very difficult and technical field I have no formal training in, then so be it. KimEide 15:57, 16 September 2008 (EDT)
Bugler, is it possible for you to talk without threatening someone? Try some civility. --IanG 16:08, 16 September 2008 (EDT)
(At the risk of a block) Not "believe everything on the basis that they are experts", but neither "disbelieve everything on the basis that they are assumed to be Liberal". If one is to make use of expertise then you need to accept something told you on the basis of expertise: you will have to revise at least one belief because of what the expert says. --Toffeeman 16:09, 16 September 2008 (EDT)

KimEide, the experts on Christianity are overwhelmingly in agreement: Jesus rose from the dead. Yet I expect that you don't accept that expert view. Meanwhile, you seem to accept the "expert" view of Lenski about statistics despite his having, as far as I can tell, no expertise in that subject.

Those who don't want to think for themselves can return to Wikipedia and other playpens. Those who have substantive insights about the logical and statistical issues here, please do comment.--Aschlafly 16:16, 16 September 2008 (EDT)

I happen to know for a fact that Lenski is thoroughly trained in the statistical analysis of experiments like the one in question. In fact, he's published extensively in that area. And I'm afraid the historicity of the Resurrection is a minority view among New Testament scholars. KimEide 16:22, 16 September 2008 (EDT)
What might wash with Liberal agnostic theology professors won't wash with true Christians. Bugler 16:27, 16 September 2008 (EDT)
I agree completely. That doesn't change the empirical fact that it's a minority view among scholars. KimEide 16:28, 16 September 2008 (EDT)
I disagree. I have found that when people use the word "scholars", it is very specific to their side. So what is your definition of a scholar? In my life I have had the privilege of attending a number of different churches, three of which were run by men with PhDs. All three of them believed in the bodily resurrection of Christ. Taken across America, that number would be much higher. Learn together 17:39, 16 September 2008 (EDT)
If those men had PhDs in New Testament studies (or something related) from a serious University, I would certainly count them as scholars. But they would be in the minority in the field of New Testament studies. That's just an empirical fact I happen to know from being employed by a Liberal Arts University. In general though, I think of a scholar as someone who has 1)Studied the literature extensively 2)Defended his knowledge before a panel of other people who have studied the literature extensively (i.e. been awarded an advanced degree) 3)Published in peer-reviewed journals 4)And is or was actively engaged in the professional field, either through writing, teaching (or preaching), researching, giving papers at conferences, etc. That's just off the top of my head though. Most scholars will have all four of those. Some will only have three. Maybe occasionally one might just have two. KimEide 17:49, 16 September 2008 (EDT)
This is a ridiculous argument. The resurrection of Jesus is a matter of faith and cannot be verified except by referring to the Bible. Statistical analysis is a branch of mathematics. The truths of mathematics are not subject to debate and are not matters of faith. Appealing to authorities in mathematics is completely different from appealing to a religious authority. MickA 17:52, 16 September 2008 (EDT)

(undent) We're not arguing whether the opinion of a majority of biblical scholars is a reliable indicator of whether Jesus rose from the dead or not. We're just arguing what that majority opinion is. It may seem irrelevant, but it is crucially important to the issue at hand...ummm...somehow. KimEide 18:02, 16 September 2008 (EDT)

Christian theologians hold the equivalent of PhDs, and yet you reject the conclusion of those "experts". I have found nothing in Lenski's published background to indicate any expertise by him in statistics. Yet you accept Lenski's view on statistics without even thinking through the issues on your own, while rejecting the consensus of Christian experts. Why? The answer is obvious: bias and lack of open-mindedness.--Aschlafly 18:25, 16 September 2008 (EDT)
Biological studies routinely require at least some application of statistical theory, especially when studying populations of organisms or molecules which molecular (and evolutionary) biology usually involves. Biostatistics (the statistical theories commonly used in biomedicine) is often a required course for PhD students for this reason. Established researchers in the life sciences do not need to have a degree in statistics to indicate expertise - Lenski's publication record itself indicates that he has successfully applied statistical theory in his analyses many times. At any rate, if someone can simply claim without proof that he has taken (and understood) statistics courses on the level needed to evaluate Lenski's paper, then he has no basis on which to accuse others of lacking expertise.
Malarkey. I've taken and excelled in upperclass statistics courses, and there were no biology students, college or graduate, in them. If Lenski has expertise in statistics then let's see it. His own "biographical sketch" doesn't even disclose what his undergraduate major was at Oberlin or what his PhD concentration at the University of North Carolina was in.[1]--Aschlafly 19:26, 16 September 2008 (EDT)

Andy, the bottom line is the PNAS publishes letters online criticizing their publication all the time. Every single issue. It's the whole function of "Letters to the PNAS". It's the only reason the letters section of the PNAS exists. They would have published your letter if it had shown the requisite understanding of the issues at hand. How do we know this? Because they do it all the time. Why didn't they publish yours? Because your understanding of how to statistically analyze experiments involving bacteria populations isn't up to snuff. How could it be? A man who was truly interested in learning would show humility, accept criticism, and dive into the relevant literature about Monte Carlo, z-transformations, etc. A man who was only interested in protecting his ego would carry on the lie that Lenski and the PNAS don't know what they are talking about. KimEide 09:27, 17 September 2008 (EDT)
PNAS won't publish any letter critical of evolution in any way, even when the letter points out 5 obvious statistical flaws.--Aschlafly 09:58, 17 September 2008 (EDT)
I still, having read your previous comments on the original letter, don't feel that you've explained either a) why the use of Monte Carlo was wrong, b) why the use of Z-transformation was wrong, nor c) critically, what alternative analysis you would have used, and what p-values you would have obtained with it. That last point is what I feel you really need to address if you want PNAS to really take notice. What would you use, and what p-values would you obtain?
Additionally, graduate level statistics is really quite basic by scientific standards. The gap between a Ph.D. and degree level knowledge is like the gap between a degree and junior high school. Biology as a field is hugely intertwined with complex statistics, and many biologists will have gained an understanding well beyond graduate level.
I'm not "anti" this letter, I'm not a liberal agitator, I've made decent contributions in my time here, but I really feel that this letter needs somebody to just take a step back, and consider that maybe the first attempt can be improved. As I mentioned previously, I think some good, proactive steps would be to concentrate more on the missing data claim, and to bring on board somebody with an expert knowledge of statistics or biology to help refine the claims, and double check whether you're right on some of these points or not. MikeR 09:55, 17 September 2008 (EDT)
MikeR, PNAS never publishes a letter containing a meaningful criticism of a pro-evolution article, no matter how obvious and egregious the flaws. If the Lenski evolution article had incorporated a flaw tantamount to claiming that 2+2=5, the PNAS would still not admit the error. Check out evolution syndrome.--Aschlafly 10:14, 17 September 2008 (EDT)
Perhaps but Andy, with respect, I'm not convinced you were correct in your belief that all of those 5 points were genuine flaws. That's why I think it's worth getting a second opinion from an unbiased outsider with post-graduate expertise in statistics. You've not really given an explanation of why the Monte Carlo or Z-Transformation approaches were wrong, nor have you stated what you believe the correct p-value should have been, and until you do I'm going to continue to have doubts about your analysis, based on my own knowledge. MikeR 12:27, 17 September 2008 (EDT)
MikeR, I obviously welcome another unbiased outside opinion. Let us know when you get one. But beware, the evolution syndrome you've seen here will make anyone think twice before they criticize any aspect of any paper that promotes evolution. Anyone seeking funding or tenure will think twice before daring to question any aspect of an evolution paper, lest they be subjected to the hysteria you've seen here. So I'm not optimistic that you will be able to find anyone willing to attach his name to this dispute. You may have to use your own mind on this one.--Aschlafly 12:32, 17 September 2008 (EDT)
I've not really seen any "hysteria" here, it's just that a number of us are not entirely convinced that you're right. You could persuade us by explaining, specifically, why you believe that the Monte Carlo or Z-transformation approaches were wrong, and what p-values you believe should have been obtained. I think the PNAS letter is somewhat rude, but there's nothing much in it that I can really disagree with regarding the statistics (although they don't exactly say a lot either). I don't see a problem with combining samples of different sizes using a Z-transformation, for example, as it's a technique I use on a regular basis and what I believe is a pretty standard statistical approach at this level. Indeed, that's one reason Z-transformation techniques were invented: to combine samples of different sizes. I'm genuinely interested to know what the issue is with it in this case.
As a side note, some of these statistical terms might be worth elaborating on in articles here on Conservapedia. I might make that my next project actually, if anyone wants to help. MikeR 15:13, 17 September 2008 (EDT)
Aschlafly, would you accept that your description of PNAS is directly analogous to your own resistance to criticism of an anti-evolution stance? I ask you again - i) are you 100% certain that your analysis is correct, ii) do you believe you are the best person to make that judgement, and iii) would you consider bringing in an unbiased third-party statistician to review your methodology? What could be wrong with that if you are certain you are correct? It could only but strengthen your argument. BenHur 13:15, 17 September 2008 (EDT)
Aschlafly - here are three links to letters published in 2008 by PNAS which do, in fact, contain meaningful criticism of pro-evolution articles:

Perhaps you might care to modify your claims in the light of this... BillK 12:41, 17 September 2008 (EDT)

Your first example is a defense of a theory of evolution, and thus tends to support my point. Given the failure of your first citation to support your point, I did not bother to look at your other two.--Aschlafly 14:48, 17 September 2008 (EDT)
The first article clearly is criticizing a pro-evolution article. You said "MikeR, PNAS never publishes a letter containing a meaningful criticism of a pro-evolution article, no matter how obvious and egregious the flaws." Above are the links to three articles that "contain meaningful criticism of a pro-evolution article." I don't know how to be more clear. There are pro-evolution articles published in the PNAS. Three of them are criticized in the letters above. What more do you want? JohnDee 21:38, 17 September 2008 (EDT)

What if the other two were indeed the type of articles you were asking for? Fred1776 16:02, 17 September 2008 (EDT)

Clarification on peer review and anonymity

There has been, I think, undue outrage over the fact that the reviewer's name was not included in the response to Schlafly's letter. To clarify, this is standard procedure when reviewing manuscripts for scientific journals and does NOT in any way indicate cowardice or uncertainty. Anonymous peer review allows for reviews that are rigorous, honest, and, most importantly, objective, since if no one knows your identity, no one can threaten you, bribe you, or otherwise influence your evaluation of the material. Thus, you can carry out your professional duties without concern over how your review of Person A's paper will affect Person A's opinion of you or his/her review of your paper in the future, for example.

The anonymity of the response is standard procedure in scientific publishing and necessary to ensure objectivity of the evaluation process, and should not be construed negatively as has been done here.

Statistics examples

I see lots of people are clamoring for some specific statistical data on here, and although I am no expert on statistics, I can give a simple example that demonstrates flaw #5 in the letter to PNAS (that combining different samples is invalid). So let's say I am testing some hypothesis, and I have 100 samples, of which 48 are "pro" (support) and 52 are "con" (against) the hypothesis. So this doesn't support my hypothesis at all. But just for fun let's do another, smaller, experiment, with just 10 samples. And suppose in this smaller (statistically insignificant by itself) experiment 8 were "pro" and 2 were "con." If I combine the 2 experiments, I get a total of 56 "pro" and 54 "con", so more than 50% "pro" (in a large total sample size!), appearing to support the hypothesis, even though that's obviously not the case! --DRamon 14:33, 17 September 2008 (EDT)
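DRamon's arithmetic can be checked in a few lines. Note, as a caveat, that pooling raw counts like this is a different operation from combining per-experiment p-values with a Z-transform, which is what Brossa describes above; the sketch below only reproduces the toy example:

 # DRamon's toy numbers: a large experiment that fails to support the
 # hypothesis, and a small lopsided one that appears to support it.
 experiments = [
     {"pro": 48, "con": 52},  # 100 samples: 48% pro
     {"pro": 8,  "con": 2},   # 10 samples: 80% pro
 ]

 pro = sum(e["pro"] for e in experiments)
 con = sum(e["con"] for e in experiments)
 share = pro / (pro + con)
 print(pro, "pro vs", con, "con ->", round(100 * share, 1), "% pro")  # 56 vs 54 -> 50.9% pro

The pooled majority flips because the small experiment's lopsided counts (8 vs 2) outweigh the large experiment's small deficit (48 vs 52).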

Data from several experiments

I would like to contribute to this discussion because I have taught statistics to graduate biology students for 16 years.

The combination of data from several experiments is a specialist and sometimes difficult area of statistical theory, but a simple example shows why Aschlafly's concern about combining the results of three different experiments is not justified and why this aspect of his criticism of Lenski's recent paper in PNAS is not valid.

Suppose we want to conduct a test of whether or not men are taller than women on average. For the sake of the example, I generated random heights of people from a population in which men had an average height of 175cm (5'9") and women of 165cm (5'6"). The standard deviations of height in both sexes were 7cm. I think these numbers are approximately correct for people in the UK but the details aren't important.

Suppose we take 5 samples of 2 men and 2 women. Here are the numbers I generated:

 Man1  Man2  Woman1  Woman2   Men mean  Women mean  Mean difference      t       P
 176   179    157     148       177.5      152.5           25.0         5.27    0.017
 180   176    160     164       178.0      162.0           16.0         5.66    0.015
 176   175    167     165       175.5      166.0            9.5         8.50    0.0068
 169   171    168     173       170.0      170.5           -0.5        -0.19    0.57
 179   175    166     178       177.0      172.0            5.0         0.79    0.26

P in the last column is the t-test probability for a one-sided test of women being shorter than men. (Formally, it's the probability of getting a value of t greater than that calculated from the data if men are in fact no taller than women on average.)

Should the fact that, in the fourth sample, the average height of the women is greater than that of the men make us doubt that men are in fact taller on average? Should we be concerned about the last sample, in which the difference in height of the two sexes is rather small, though in the expected direction? No, in both cases. When we combine the data on all 10 men and all 10 women, we get this:

 Men mean  Women mean  Mean difference     t       P
  175.6      164.6          11.0          3.85    0.00058

Clearly, combining the data from several similar experiments strengthens the conclusions considerably, as shown by the fact that P is much smaller for the combined data than for any individual sample.

Although the combination of data from several experiments is a specialised area of statistics, I see nothing particularly incorrect about the approach used by Lenski and his colleagues. The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so. (For example: A. Combining the results of five samples of the heights of men and women is clearly valid. B. Combining three samples of heights of men and women with two samples of lengths of male and female squid clearly isn’t.) Generally speaking, the outcome of a combined analysis of several small experiments which all point in the same direction (or at least in a similar direction) will be more significant than that of any one of those experiments, as is shown in the larger table above.

I hope this clarifies the extensive discussion on this point and puts Aschlafly's mind at rest on this subject. KennyMac 08:20, 18 September 2008 (EDT)
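KennyMac's per-sample and combined t-tests can be reproduced directly. A minimal sketch, assuming ordinary equal-variance two-sample t-tests (which match the t values in the tables above):

 from scipy.stats import ttest_ind

 # KennyMac's generated heights (cm): five samples of 2 men and 2 women.
 men = [[176, 179], [180, 176], [176, 175], [169, 171], [179, 175]]
 women = [[157, 148], [160, 164], [167, 165], [168, 173], [166, 178]]

 # Per-sample one-sided tests (alternative hypothesis: men are taller).
 for m, w in zip(men, women):
     t, p = ttest_ind(m, w, alternative="greater")
     print("t =", round(t, 2), " one-sided P =", round(p, 4))

 # Combining all five samples into a single test of 10 men vs 10 women.
 all_men = [h for pair in men for h in pair]
 all_women = [h for pair in women for h in pair]
 t, p = ttest_ind(all_men, all_women, alternative="greater")
 print("combined: t =", round(t, 2), " one-sided P =", round(p, 5))

(The alternative="greater" argument requires scipy 1.6 or later.)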

That's very nicely put, thanks. You should work on some of the stats pages here. Of course, technically any sample is ultimately just a combination of n samples of size 1. MikeR 13:28, 18 September 2008 (EDT)
I'll take a look at this Friday. It's not immediately obvious what the point is to your analysis above.--Aschlafly 23:46, 18 September 2008 (EDT)
"The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so"--KingOfNothing 00:57, 19 September 2008 (EDT)
This makes no sense as an argument. It may be true in this simple case that you can do one large or several small samples and get similar results - which is quite obvious and wouldn't need such a detailed rant. However, you provide no mathematical proof, just one example. Etc 01:08, 19 September 2008 (EDT)
"It's not immediately obvious what the point is to your analysis above". No surprise, Aschlafly, really. Maybe you should take your own advice: "I suggest you try harder with an open mind". --CrossC 02:46, 19 September 2008 (EDT)

It is with great sadness that I note that the author of this - the only significant statistical explanation and discussion in this entire fiasco - has just been blocked for five years. Even his email is blocked, so he can't even appeal the action. I don't see such manoeuvres as having contributed to the much vaunted "open mind" of which various people here speak. BenHur 10:27, 19 September 2008 (EDT)

REPLY: I have now reviewed the above analysis, and it supports Point 5 rather than the PNAS paper. Point 5 stated, "The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." The analysis above does nothing more than reinforce Point 5 by combining experiments based on sample size.

In Pavlovian manner, some Lenski types nod their heads here in agreement at the above analysis, apparently unaware that it reinforces Point 5.

When combining results from samples that are vastly different in sample size, it is necessary to factor in the different sample sizes. Apparently the PNAS paper failed to do that, which helps explain why it refuses to provide a meaningful response to Point 5.--Aschlafly 19:24, 19 September 2008 (EDT)
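To make the disagreement over Point 5 concrete: a weighted version of the Z-transform is one standard way to factor differing sample sizes into a combination of p-values. A minimal sketch contrasting it with the unweighted version, using hypothetical p-values and deliberately unequal, hypothetical sample sizes (not the paper's):

 from math import sqrt
 from scipy.stats import norm

 p_values = [0.08, 0.04, 0.02]  # hypothetical per-experiment p-values
 sizes = [10000, 1000, 100]     # hypothetical, deliberately unequal sample sizes

 z = [norm.isf(p) for p in p_values]

 # Unweighted Stouffer combination: every experiment counts equally.
 z_plain = sum(z) / sqrt(len(z))

 # Weighted variant: one common choice weights each z-score by the square
 # root of its sample size, so larger experiments count for more.
 w = [sqrt(n) for n in sizes]
 z_wtd = sum(wi * zi for wi, zi in zip(w, z)) / sqrt(sum(wi * wi for wi in w))

 print("unweighted p =", round(norm.sf(z_plain), 4))
 print("weighted   p =", round(norm.sf(z_wtd), 4))

Whether the weighted or unweighted combination is appropriate for the replay experiments is exactly the point in dispute on this page; the sketch only shows that the two can differ.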

(rants below were deleted for being non-substantive in violation of this page's rules.)--Aschlafly 19:24, 19 September 2008 (EDT)

I understand point 5 now, or at least I think I do. What we have as a "sample" is either:
1. Individual cultures (Schlafly)
2. Cultures that developed cit+. (Lenski)
Schlafly contends that the sample should be all the cultures and that Lenski has, improperly, filtered the sample by excluding the vast majority of it (i.e. all those cultures that did not become cit+). Am I right in thinking this is the argument? --Toffeeman 19:57, 19 September 2008 (EDT)