Talk:PNAS Response to Letter

From Conservapedia
This is an old revision of this page, as edited by Brossa (Talk | contribs) at 18:39, 16 September 2008. It may differ significantly from current revision.


Notice: misrepresentations are not going to be allowed on this page. Substantive comments only, please.

In this day and age, scientists have their own agenda and have corrupted science. Just look at global warming or cloning or stem cells as proof. With that said, the only way to get the real truth is by suing in court. Unfortunately, scientists are bound to vast wealth and have the power to defend themselves vigorously. If ever a fund was set up to pay for a suit, I would contribute. It is a classic case whereby, the truth being known, the truth will prevail. -- jp 22:14, 12 September 2008 (EDT)

Thanks, Jpatt. One additional beauty of the truth is that it remains the truth no matter how much some deny it. PNAS can deny its errors all it likes, but that doesn't change the fact they are errors.--Aschlafly 22:21, 12 September 2008 (EDT)
Well said, Andy and Jpatt. It is perhaps worth pointing out that the President of the NAS is a "climate scientist". If the Academy is dominated by pseudoscience of that kind, it's hardly a surprise that their response was to cover up and deny the truth. Nevertheless, they had to be given their chance to make good before further steps are taken. I suggest now that the issue be put to potentially supportive congressmen/women and senators, given the public funding for Lenski's activities. Bugler 05:46, 13 September 2008 (EDT)
Right. The next step is to criticize the taxpayer funding of this junk science. When the authors and the publishing organization will not even address statistical errors in the work, then it's time to pull the public funding.--Aschlafly 10:13, 13 September 2008 (EDT)
They did address your claims of statistical errors: they said that you were wrong to the degree that they were able to determine what you were talking about. You made a qualitative argument and got a qualitative response.--Brossa 11:00, 13 September 2008 (EDT)
Which of the 5 specific errors do you think they addressed? None, as far as I can tell.--Aschlafly 11:11, 13 September 2008 (EDT)
The response addresses your qualitative claims about the paper's statistical methods raised in points two, three, and five by the following:"Nevertheless, from a statistical point of view, it is proper to combine the results of independent experiments, as Blount et al. did correctly in their original paper"(emphasis added); in fact the longest paragraph in the response deals entirely with the statistical claims of the letter and dismisses them.--Brossa 11:32, 13 September 2008 (EDT)
But that's never going to happen because the data availability requirements for public funding have already been met. Jirby 11:03, 13 September 2008 (EDT)
No, I don't think the researchers have met NSF guidelines as referenced in the letter.--Aschlafly 11:11, 13 September 2008 (EDT)

Proof?Jirby 11:26, 13 September 2008 (EDT)

As I said, the NSF guidelines are referenced in the letter.--Aschlafly 11:29, 13 September 2008 (EDT)

You mean the notebooks, etc.? Jirby 11:32, 13 September 2008 (EDT)

Oh my dear God, I can't believe this!! Where has this beautiful country gone if even science is not reliable anymore? Hope things will change in the future. Good thing there still are people like Mr. Schlafly, who have the brains and power to stand up and turn the people of America in the right direction again. Raul 12:24, 13 September 2008 (EDT)

Mr. Schlafly, I have a question BTW. Was this letter received on paper, or electronically? Because if it was on paper, perhaps it would be a good idea to scan it, and post it. It would add a lot to the encyclopedic value of the article. Raul 12:26, 13 September 2008 (EDT)

PNAS procedures required me to submit the letter electronically using its own electronic submission software. When the PNAS acknowledged that my submission complied with all its requirements, it also said that the authors of the original paper had been notified of my letter.--Aschlafly 12:39, 13 September 2008 (EDT)


Too bad. It doesn't make much sense, though; I guess that tells a lot about PNAS. What they should care about is the actual text, not the medium it is in. That's not the case, IMHO, for encyclopedias however. If not for anything else, a scan would have been useful as a reference for the digital text. Oh well... Raul 12:53, 13 September 2008 (EDT)

Honest question, is it against the rules to disagree with Andrew Schlafly or criticize that letter? I just want to know so I don't end up in the same situation as other people who have been censored here.--IanG 17:03, 13 September 2008 (EDT)

I believe this page is only for discussion of the response, which is quite straightforward. Criticism of the letter should have gone on its talk page, but it's too late now. Oh well. There is no censorship on Conservapedia. Your comment is not substantive - please refactor it. Praise Jesus, Pila 17:26, 13 September 2008 (EDT)
If you REALLY believe that Lenski has committed academic FRAUD, then lodge a formal complaint with his university. Such complaints are taken very seriously and can lead to loss of tenure and dismissal from the university, and with that on his record no other institution would hire him on any basis. Markr 19:40, 13 September 2008 (EDT)

(deleted non-substantive comments). Again, the heading on this page will be enforced: "Substantive comments only, please." If you have a substantive comment about the identified errors and the PNAS's failure to address them, then please comment. Non-substantive comments will be removed. This is an encyclopedic-based search for the truth, not a blog or a place to refuse to contribute in a substantive manner.--Aschlafly 20:29, 14 September 2008 (EDT)

Since you've taken the liberty of deciding what is substantive or not in deleting posts like my last one, I have a serious, respectful question to ask: what exactly do you mean by "substantive"? I didn't attack you or your letter; I was attempting to state that the PNAS response did, in fact, address the points of your letter. Whether one considers the PNAS response to be correct or not is a separate matter - they read your objections and responded to them instead of ignoring them, that's all.
My last post would therefore seem to have met Webster's definition of substantive - "having or expressing substance", but apparently the measure of "substantive" for a comment on this page appears to be whether it agrees with your view or not. That's your prerogative, but if you intended to allow comments on this page other than endorsements of your view, then please let me know what I did wrong. --DinsdaleP 21:00, 14 September 2008 (EDT)
Dinsdale, we're here to think and learn. You can look at my letter, look at the PNAS's response, and provide some substantive insights. We're not here to say something like, uh, go ask someone else if a (9th grade-level) graph is correct or not. If you think the substantive issues are beyond your depth, and I don't, then comment on them in a substantive and intellectual and specific way. This is not another waste-of-time blog, and it's not going to become one.--Aschlafly 21:19, 14 September 2008 (EDT)
DinsdaleP, you did attack ASchlafly at least indirectly. Suggesting that the PNAS response has merit might also be interpreted by some to mean that the letter ASchlafly sent wasn't the very best it could be. Now, contrast that to my deleted comment suggesting that a time-tested response would be to actually try reproducing the experiment. Many bad experiments are exposed when others fail to get the same results as the original authors. I think this would be an excellent, substantive avenue to pursue.--Argon 21:23, 14 September 2008 (EDT)

ASchlafly- you said above, "PNAS can deny its errors all it likes, but that doesn't change the fact they are errors". As you say, in a fair discussion of the merits of two sides of an argument, it's important that both sides take a good, hard look at their own propositions. Since your position is that PNAS has errors on its own side, I'm just curious to know if you are in any way prepared to accept that there might be errors in your own argument, or are you absolutely 100% certain that your position is error-free? I'm wondering if perhaps before submitting this issue to funding authorities, you would be prepared to have an independent statistical expert take a look at your proposal? BenHur 22:17, 14 September 2008 (EDT)

The only thing I was criticizing in my original comment today was the Main Page headline statement that "PNAS refuses to address the 5 errors in the Lenski study identified by the Letter to PNAS". What I pointed out is the fact that they did in fact respond, by criticizing the statistical analysis used by Aschlafly. I'm not supporting or attacking Mr. Schlafly's analysis, because I'm the first one to admit that I have no expertise in this area. My conclusion was a constructive suggestion that Mr. Schlafly present a rebuttal to the PNAS decision by showing how his analysis and conclusions were not erroneous in the manner they claimed. A public, statistical defense of Mr. Schlafly's work, perhaps accompanied by the endorsement of some regarded experts in the field, would be the best response to PNAS choosing to respond by email instead of through the journal.
I wrote both the original draft letter to PNAS from Mr. Schlafly's notes and my earlier comments today with the intent of contributing constructively. I hope this clarification of my view is substantive enough to remain. --DinsdaleP 22:24, 14 September 2008 (EDT)

Folks, I've pointed out five very specific statistical (logical) errors. The torrent of nonsense above even includes an absurd demand for me to try to repeat the experiments, as though that would somehow correct a flawed paper.

The math is wrong in the PNAS paper. No one at PNAS is even willing to put his name on a response claiming that the math is correct, because it isn't. I'm not going to allow further nonsensical postings here. If you want to address the statistical (logical) errors in a specific way, fine. If you feel it is beyond your depth to do so, then move on. Thanks and Godspeed.--Aschlafly 22:49, 14 September 2008 (EDT)

"The paper incorrectly applied a Monte Carlo resampling test to exclude the null hypothesis for rarely occurring events." Specifically, why is it incorrect to apply a Monte Carlo test in this circumstance, or why was their application incorrect? Do your own calculations produce a p-value that differs from the published p-value of 0.08?
"The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." This sounds as though you disagree with the use of the Z-transform technique used to combine the data from the three replay experiments, or believe that the Z-transform analysis was performed incorrectly. Which do you disagree with - the technique, the application, or both, and why? --Brossa 23:31, 14 September 2008 (EDT)
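For readers following this exchange, a minimal sketch of how a Monte Carlo resampling test of the kind under discussion works. This is illustrative only: the generation numbers, clone counts, and test statistic below are invented for the example and are not Blount et al.'s actual data or procedure.

```python
import random

def monte_carlo_p(observed_origins, pool, n_resamples=10_000, seed=0):
    """Estimate a one-sided p-value by Monte Carlo resampling.

    Null hypothesis: each Cit+ mutant was equally likely to arise from
    any replayed clone, regardless of the generation it was sampled from.
    Test statistic: the mean generation-of-origin of the observed mutants.
    The p-value is the fraction of null resamples at least as extreme as
    the observed statistic.
    """
    rng = random.Random(seed)
    k = len(observed_origins)
    observed_mean = sum(observed_origins) / k
    hits = 0
    for _ in range(n_resamples):
        # Draw k origins at random from the pool of replayed generations
        resample = [rng.choice(pool) for _ in range(k)]
        if sum(resample) / k >= observed_mean:
            hits += 1
    return hits / n_resamples

# Invented example: clones replayed from six generations, with the
# observed mutants all arising from late-generation clones.
pool = [5000, 10000, 15000, 20000, 25000, 30000]
p = monte_carlo_p([25000, 30000, 30000], pool)
```

If the observed mutants cluster in late generations far more than random draws would produce, the estimated p-value is small; whether such a test is appropriate for rarely occurring events is exactly the question point three raises.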
(removed personal and non-substantive attack that violated clear rules for this discussion; also, unsubstantiated claims of expertise are not allowed --Aschlafly 19:16, 15 September 2008 (EDT))
I'm still a little unclear on your position ASchlafly - are you absolutely 100% certain your own statistical analysis is correct on this? Before you proceed further it's important to know that the technical analysis you are presenting is indeed indisputable. BenHur 12:01, 15 September 2008 (EDT)
Ben, if you have at least a 9th grade-level education, then you can look at the 5 errors and decide for yourself, and comment in a substantive manner. Yes, they are obvious and basic errors, and the fact that the reviewer of my letter at PNAS would not attach his name to a specific denial speaks volumes.--Aschlafly 17:11, 15 September 2008 (EDT)
Aschlafly, I'm a little confused as to why you removed this comment of mine? I did not cast any aspersions on your argument, and was merely answering your question as you posed it to me? Is "declining to comment" an indictable offense? BenHur 19:20, 15 September 2008 (EDT)
Your comment was not an "indictable offense," but it violated the rules of this page: "Substantive comments only, please." Got it? Either say something substantive, or edit somewhere else. Thanks.--Aschlafly 19:33, 15 September 2008 (EDT)
This is confusing. Is "I agree with your thesis" or "your methods are 100% correct" a substantive comment? It can be very hard to infer your intent, Mr. Schlafly, I'm sorry to say. I have no quarrel with you, but I'm becoming confused as to what is and isn't appropriate comment on what is labelled a "Talk Page". Are there special rules for this Talk Page? If so, perhaps the title on the page might be changed? BenHur 19:42, 15 September 2008 (EDT)
No, your quoted phrases are obviously not substantive comments. Your statement of agreement means nothing. I doubt you are even using your real name, for starters, which renders your agreement even sillier. I repeat for the nth time, say something substantive or edit somewhere else.--Aschlafly 19:52, 15 September 2008 (EDT)
(removed another non-substantive posting)--Aschlafly 20:28, 15 September 2008 (EDT)
Aschlafly, perhaps you did not realize that my earlier questions were meant for you. I wish to address the statistical errors in a specific way, which requires a better understanding of your position. I will repeat my main questions: why was it incorrect to apply Monte Carlo techniques to the data in the paper, or in what way was the Monte Carlo technique performed incorrectly? Second, why was it incorrect to apply the Z-transform to the data from the three replays, or in what way was the Z-transform performed incorrectly? In lieu of the Monte Carlo/Z-transform techniques, what statistical calculations should have been performed? Feel free to be technical; I have more than a ninth grade education.--Brossa 17:38, 15 September 2008 (EDT)
If you're skipping over the main points, then concede their validity or explain why you've skipped over them.--Aschlafly 19:14, 15 September 2008 (EDT)
If you would like me to go through the original letter point by point, and respond to all of the claims in detail, I'd be happy to - but only after I have your explicit permission to do so on this talk page (or some other page of your choosing). Until I'm given that permission, I'll await your response to my previously-stated questions about points three and five: your statements about the incorrect application of Monte Carlo resampling and the erroneous combination of the three replay experiments (the Z-transformation). --Brossa 22:00, 15 September 2008 (EDT)
Substantive postings are welcome, but I still don't have an explanation for why you skipped over the main points 1 and 2. Do you concede them?--Aschlafly 22:04, 15 September 2008 (EDT)
With all due respect Aschlafly, and excuse me if I feel the need to tiptoe as gently as possible here, but I would cautiously disagree with your statement that "Substantive postings are welcome". It seems to me that you have explicitly removed only the substantive comments from this page. What is left are weak and non-substantive comments, to be sure. It seems very difficult to comment in any way here on this matter, as you seem to be of the opinion that you and you alone are correct. With the best will in the world, I certainly don't feel able to speak freely here, despite being well qualified to do so. Perhaps a slightly more lenient approach might help your own cause? BenHur 22:23, 15 September 2008 (EDT)
Since I am new here I am a bit timid about chiming in on this discussion, but I feel compelled to say that (as I pointed out in the section below), I think this page should be reserved for appropriate responses to the PNAS letter - not for opinions regarding how Aschlafly is running this website. If you think you have valid points to bring out about that - maybe post them to his talk page? --DRamon 22:31, 15 September 2008 (EDT)

(unindent) No, I do not concede points one and two; I've commented on them on the Talk:Letter to PNAS page. I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to. Similarly, your point four is not something that can be resolved by argument. The only way to prove that the results were due to contamination would be to repeat the experiment with different controls in place and demonstrate different results - which might spur some other labs to run a third or fourth trial to see who was right. It seemed to me that the only points that could be solved by discussion were three and five, since they made statements about the statistical methods that can potentially be resolved through debate. So, once again, what is your response to my questions regarding points three and five?--Brossa 22:34, 15 September 2008 (EDT)

Brossa, I could not find a meaningful rebuttal by you of point 1, and I found no comments by you on point 2, at Talk:Letter to PNAS. Are you trying the classic trick of "the answer is over there," when it isn't? Point 2 alone completely disproves the PNAS paper's thesis, and yet you avoid it and skip towards less obvious errors. I'm happy to address the subtle errors once you address the obvious ones.--Aschlafly 16:10, 16 September 2008 (EDT)
Aschlafly, you are avoiding Brossa's question. Why can't you just answer it? Brossa said that he wanted to limit the discussion to these points. The reason that he wants to do that is because the PNAS response targets your approach to the statistical analysis. MickA 16:24, 16 September 2008 (EDT)
I don't mind addressing points one and two; I just don't think that we'll come to an agreement about them. Point one represents a misunderstanding of figure 3, which is labeled "Alternative hypotheses for the origin of the Cit+ function..." The figure does not represent the results of the experiments and does not conflict with them. It is a cartoon of the a priori hypothesis that was generated before the experiments were performed; it is not itself the hypothesis (the map is not the country). Note that the vertical axis lacks a scale; there is no way of knowing what the actual mutation rates are ahead of time. The location of the vertical jump on the graph is arbitrary; it has to lie somewhere between 0 and 31,500 generations, but that point could be anywhere. Quoting from the paper: "The historical contingency hypothesis predicts that the mutation rate to Cit+ should increase after some potentiating genetic background has evolved. Thus, Cit+ variants should re-evolve more often in the replays using clones sampled from later generations of the Ara-3 population." The hypothesis as stated does not specify a generation at which the potentiating mutation occurred. The hypothesis is not that potentiation took place at generation 31,000 rather than some other generation; it is that there was a potentiating mutation rather than a rare-mutation event. The results of the experiment do not disprove the contingency hypothesis; they confirm it and suggest that the potentiating mutation took place at generation 20,000. You think that the figure is the hypothesis; I think that the hypothesis is what is stated explicitly in the text of the paper; I doubt that we'll agree.
Point two states: "Both hypotheses propose fixed mutation rates, but the failure of mutations to increase with sample size disproves this." I disagree with this statement. The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were similar, but not the same. One could imagine a hypothesis that men commit murder most often between the ages of 25 and 35, with samples taken from the male populations of L.A., Singapore, and London. One would find different murder rates among the men of those three cities, but still might find (or not) that murderers in those cities tend to be between 25 and 35 years old. Just as Singapore is not the same as Los Angeles, the conditions of replays two and three were not the same. The rare-mutation hypothesis does not mean that the mutation rate to Cit+ is the same for all experimental conditions anywhere; just that the mutation rate is constant given the conditions of a particular replay. It is possible to have different baseline mutation rates among the three replays, all of which follow the historical contingency pattern. Or the mutation rate could actually be the same across all three replays, and the results seen here are just a statistical fluke that would vanish if the replays could be run thousands of times. Either way, it's not fatal to the paper's conclusions.
Point two also states "If the authors claim that it is inappropriate to compare for scale the Second and Third Experiments to each other and to the First Experiment, then it was also an error to treat them similarly statistically." This is also incorrect in my view. One can state that the murder rate is different between two cities, and yet the murderers have some characteristic in common. Combining results from different samples is the bread and butter of statistical analysis. Meta-analysis, for example, is used to combine the results of studies that are much different than replay experiments one, two, and three. If you wish to make a more specific argument about the techniques used to combine the three replay experiments into a single result, I'll address it. I think that the second part of point two is essentially the same as point five in that it criticizes the statistical method used to combine the results of the three replays (the Z-transform), as opposed to point three which mentions the Monte Carlo technique separately.
I therefore repeat my questions about points three and five, as stated previously. If you have other questions that I must answer first, please list them all at once, as I'm eager to move on to that discussion.--Brossa 17:53, 16 September 2008 (EDT)
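For readers following the statistical thread: the "Z-transform" at issue above is Stouffer's method for combining one-sided p-values from independent experiments into a single test. A minimal sketch follows; the p-values in the example are invented for illustration and are not taken from the paper.

```python
import math
from statistics import NormalDist

def stouffer_combined_p(p_values, weights=None):
    """Combine one-sided p-values from independent experiments.

    Each p-value is converted to a standard-normal Z score; the weighted
    sum of Z scores, renormalized, is itself standard normal under the
    joint null hypothesis, yielding a single combined p-value.
    """
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(p_values)
    # Z score for each experiment (larger Z = stronger evidence)
    zs = [nd.inv_cdf(1.0 - p) for p in p_values]
    z = sum(w * zi for w, zi in zip(weights, zs)) / math.sqrt(
        sum(w * w for w in weights)
    )
    return 1.0 - nd.cdf(z)

# Three experiments, each individually short of the 0.05 threshold,
# can still be jointly significant when combined (invented values):
combined = stouffer_combined_p([0.08, 0.08, 0.08])
```

Weighting the experiments, e.g. by a function of sample size rather than equally, is a common variant of the method; which weighting (if any) was appropriate here is exactly the kind of question points two and five dispute.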
It's foolish to debate someone who has already closed his mind. Point 1 is plainly correct: the PNAS article should admit and disclose that the false hypothesis was indeed proven to be false. If you stand behind the falsehood, then you'll refuse to admit other errors also.
Your refusal to concede point 2 is even more egregious. Lenski combined his experiments, and you can't claim that combination was simultaneously correct and incorrect. Moreover, the lack of scale also exists between Experiment 2 and 3, again disproving the underlying thesis of the paper.--Aschlafly 18:31, 16 September 2008 (EDT)
I already said that I didn't think that we would reach common ground on points one, two, and four, and that it would be a waste of time to rehash them. You asked me to, so I did. It seems foolish to then treat those points as a shibboleth and refuse to discuss the other points, which I am fully prepared to concede provided you elaborate on why the Monte Carlo technique and the Z-transform were either the wrong tests or performed incorrectly. I am capable of accepting two items from a list of five even if I reject the other three. Even if I were completely incapable of agreeing with you, I still don't see why you won't put your best mathematical argument forward about the paper's statistics. Were there too few Monte Carlo resamplings? Do you feel that the Z-transformation was performed incorrectly? Is the Z-transform itself suspect? Do you propose some other statistical analysis? If you do, there are any number of us here who can crunch the numbers again. What a coup it would be for you if we could use a technique that you suggested to obtain results that smashed Lenski's smug complacency!
I don't claim that the combination was simultaneously correct and incorrect, by the way - I claim that it was correct. Where do I imply otherwise? Also, it's not only improper to compare replay one with replays two and three for scale: it's impossible. Replay one involved constantly changing numbers of cells whereas replays two and three started with fixed numbers. How do you count the number of cells in the first replay to compare it to the other two? Is it the number of cells transferred each time? Is it the maximum population achieved in each flask prior to transfer? 750 generations passed in one case and 3700 generations in another before the Cit+ trait was seen - how do you factor that into the 'scale' equation? It is only the superficial resemblance of replays two and three that brings up the concept of 'scale'. The "underlying thesis" of the paper is not that there is a unique rate of mutation to Cit+ that applies across any and all experimental conditions.--Brossa 19:39, 16 September 2008 (EDT)

Please provide your statistical analysis

Andrew, If you are so sure that your statistical analysis of the Lenski paper is correct, you should publish it on Conservapedia. MickA 17:38, 15 September 2008 (EDT)

I did. Which point don't you understand?--Aschlafly 19:13, 15 September 2008 (EDT)
Could you direct me to the page showing your calculations? MickA 08:50, 16 September 2008 (EDT)

I think regardless of what the details behind the statistical analysis are, the shame here is that PNAS refused to address anything specific in their response. They simply glossed over everything that was said in the letter sent to them and gave a generic, unsubstantive response. All the people here that are trying to argue with ASchlafly about his position should instead focus on why is it that PNAS refuses to directly address our concerns. --DRamon 21:01, 15 September 2008 (EDT)

It is not the function of the editorial board to "defend" a paper. They mainly said that, from what Mr. Schlafly has written, they could not see the mistake in Dr. Lenski's statistics suggested by Mr. Schlafly. Nor is it the function of a reviewer to "put his name" on something. A reviewer should check whether the average reader of a journal will be able to learn something from a letter. If he thinks there is nothing to learn - either because the letter fails to bring new insight, or because the readers may not understand why using only conservative statistical methods, and nothing considered mathematically fragile by people of average mathematical knowledge, is favourable - then he should reject the letter. I had the impression from the wording of the response that the reviewer did at least have a brief look at the original article. And PNAS refused the letter for the very same reason people here are asking, namely the complete absence of alternative estimates of the numbers. I don't want to check the calculations; if Mr. Schlafly states that by using a well-defined method he finds another result, and states this result clearly, I am a priori very willing to believe that, because I see no reason not to believe his word on it (since he has, to my best knowledge, no record of scientific fraud). After he states his result, I may find the time to try to reproduce it (although I am very busy). --Stitch75 13:00, 16 September 2008 (EDT)
Could you direct me to the page showing your calculations? MickA 15:41, 16 September 2008 (EDT)

Improper

I have no knowledge of this branch of science, so I won't comment on whether the results were correct, or on whether PNAS's letter was unbiased. Then again, neither does Schlafly, who, from what information about him is known to the public, is not a biologist. I find it somewhat inappropriate for someone with no training in a specific field to tell someone who has actually studied that field that he hasn't done his job right. It's common sense. Fred1776 14:30, 16 September 2008 (EDT)

That's an extraordinary and offensive remark from someone who admits complete ignorance. How would you - Fred1776 - know exactly how expert Andrew Schlafly's knowledge of the issue is? He may not be a mathematician, but he is an experienced educator, amongst other things, with a legal training that enables his mind to get to the crux of a problem, as has been demonstrated very many times in this project. Biologists are properly open to the scrutiny of others. Bugler 15:34, 16 September 2008 (EDT)
I simply can't turn down the opportunity of agreeing with Bugler! Biologists, as with all experts, should be open to scrutiny from everyone. It's good to use people's expertise, but not at the expense of raising their pronouncements to the status of dogma. True, the lay person may not be able to formulate much criticism but sometimes, and especially when they stray outside the purely technical, all experts make mistakes that are evident to others. We should respect expertise, but stopping all questioning is giving respect beyond what is due. --Toffeeman 15:45, 16 September 2008 (EDT)
(EC)Well, the PNAS seems to think his understanding of statistical analysis is less than "elementary". The question is, given that virtually none of us are experts in statistics, why wouldn't we believe the PNAS? I mean, they are the trained scientists, right? If they told me my mis-understanding of statistical analysis was too fundamental to warrant a response in their journal, I'd show some humility and accept that as valid criticism. But hey, that's just me. KimEide 15:48, 16 September 2008 (EDT)
Kim, if you're naive enough to believe everything that the Liberal establishment tells you, well, hey, don't let us stop you. But don't think that you will be allowed to infect this site with your credulity. Bugler 15:54, 16 September 2008 (EDT)
Well if I'm credulous because I'd accept the opinion of an expert in very difficult and technical field I have no formal training in, then so be it. KimEide 15:57, 16 September 2008 (EDT)
Bugler, is it possible for you to talk without threatening someone? Try some civility. --IanG 16:08, 16 September 2008 (EDT)
(At the risk of a block) Not "believe everything on the basis that they are experts", but neither "disbelieve everything on the basis that they are assumed to be Liberal". If one is to make use of expertise then you need to accept something told you on the basis of expertise: you will have to revise at least one belief because of what the expert says. --Toffeeman 16:09, 16 September 2008 (EDT)

KimEide, the experts on Christianity are overwhelmingly in agreement: Jesus rose from the dead. Yet I expect that you don't accept that expert view. Meanwhile, you seem to accept the "expert" view of Lenski about statistics despite his having, as far as I can tell, no expertise in that subject.

Those who don't want to think for themselves can return to Wikipedia and other playpens. Those who have substantive insights about the logical and statistical issues here, please do comment.--Aschlafly 16:16, 16 September 2008 (EDT)

I happen to know for a fact that Lenski is thoroughly trained in the statistical analysis of experiments like the one in question. In fact, he's published extensively in that area. And I'm afraid the historicity of the Resurrection is a minority view among New Testament scholars. KimEide 16:22, 16 September 2008 (EDT)
What might wash with Liberal agnostic theology professors won't wash with true Christians. Bugler 16:27, 16 September 2008 (EDT)
I agree completely. That doesn't change the empirical fact that it's a minority view among scholars. KimEide 16:28, 16 September 2008 (EDT)
I disagree. I have found that when people use the word "scholars" that it is very specific to their side. So what is your definition of a scholar? In my life I have had the privilege of attending a number of different churches, three of which were run by men with PhDs. All three of them believed in the bodily resurrection of Christ. Taken across America, that number would be much higher. Learn together 17:39, 16 September 2008 (EDT)
If those men had PhDs in New Testament studies (or something related) from a serious University, I would certainly count them as scholars. But they would be in the minority in the field of New Testament studies. That's just an empirical fact I happen to know from being employed by a Liberal Arts University. In general though, I think of a scholar as someone who has 1) studied the literature extensively, 2) defended his knowledge before a panel of other people who have studied the literature extensively (i.e. been awarded an advanced degree), 3) published in peer-reviewed journals, 4) and is or was actively engaged in the professional field, either through writing, teaching (or preaching), researching, giving papers at conferences, etc. That's just off the top of my head though. Most scholars will have all four of those. Some will only have three. Maybe occasionally one might just have two. KimEide 17:49, 16 September 2008 (EDT)
This is a ridiculous argument. The resurrection of Jesus is a matter of faith and cannot be verified except by referring to the Bible. Statistical analysis is a branch of mathematics. The truths of mathematics are not subject to debate and are not matters of faith. Appealing to authorities in mathematics is completely different from appealing to a religious authority. MickA 17:52, 16 September 2008 (EDT)

(undent) We're not arguing whether the opinion of a majority of biblical scholars is a reliable indicator of whether Jesus rose from the dead or not. We're just arguing what that majority opinion is. It may seem irrelevant, but it is crucially important to the issue at hand...ummm...somehow. KimEide 18:02, 16 September 2008 (EDT)

Christian theologians hold the equivalent of PhDs, and yet you reject the conclusion of those "experts". I have found nothing in Lenski's published background to indicate any expertise by him in statistics. Yet you accept Lenski's view on statistics without even thinking through the issues on your own, while rejecting the consensus of Christian experts. Why? The answer is obvious: bias and lack of open-mindedness.--Aschlafly 18:25, 16 September 2008 (EDT)
Biological studies routinely require at least some application of statistical theory, especially when studying populations of organisms or molecules, which molecular (and evolutionary) biology usually involves. Biostatistics (the statistical theory commonly used in biomedicine) is often a required course for PhD students for this reason. Established researchers in the life sciences do not need to have a degree in statistics to indicate expertise - Lenski's publication record itself indicates that he has successfully applied statistical theory in his analyses many times. At any rate, if someone can simply claim without proof that he has taken (and understood) statistics courses on the level needed to evaluate Lenski's paper, then he has no basis on which to accuse others of lacking expertise.
Malarkey. I've taken and excelled in upperclass statistics courses, and there were no biology students, undergraduate or graduate, in them. If Lenski has expertise in statistics then let's see it. His own "biographical sketch" doesn't even disclose what his undergraduate major was at Oberlin or what his PhD concentration at the University of North Carolina was in.[1]--Aschlafly 19:26, 16 September 2008 (EDT)

Clarification on peer review and anonymity

There has been, I think, undue outrage over the fact that the reviewer's name was not included in the response to Schlafly's letter. To clarify, this is standard procedure when reviewing manuscripts for scientific journals and does NOT in any way indicate cowardice or uncertainty. Anonymous peer review allows for reviews that are rigorous, honest, and, most importantly, objective: if no one knows your identity, no one can threaten you, bribe you, or otherwise influence your evaluation of the material. Thus, you can carry out your professional duties without concern over how your review of Person A's paper will affect Person A's opinion of you or his/her review of your paper in the future, for example.

The anonymity of the response is standard procedure in scientific publishing, is necessary to ensure the objectivity of the evaluation process, and should not be construed negatively as has been done here.