Tuesday, January 04, 2011

Luke Muehlhauser of Common Sense Atheism interviewed me at the CSH Conference for his podcast, on historical method and the historical Jesus. I discuss my upcoming books and their content and progress. We also digress on other topics like education, the McGrews' use of Bayes' Theorem on the resurrection, and Bayes' Theorem's application to the fine tuning argument. The audio is now available to the public here. A transcript is included.

38 comments:

Loved the interview and your analysis of the McGrews' flawed Bayesian argument. I listened to Luke's interview with Lydia McGrew and was frustrated by her apparent certainty that she had Bayes down, combined with her evidently flawed approach to the theorem. I hope you find a publisher soon for the works you discussed.

I hope the following extracts are not regarded as taken 'out of context', but merely 'juxtaposed'.

And while my question of what counts towards making somebody a professional historian is rather pointed, I am curious to know the answer.

CARRIER: I’m just following up on what other scholars have done, demonstrating that the current methodology is bankrupt; it’s invalid. It’s this, what they call the “criteria of historicity,” that they’re using.

LUKE: You’re saying it’s hard to blame historians for not taking the Jesus myth theory seriously when all they’ve had to read are poorly argued Jesus myth theories.

CARR: If historians are good at spotting ‘poorly argued’ theories, why does peer review allow through so many articles that use methods that are ‘bankrupt’ and ‘invalid’, to quote Carrier?

CARRIER: The one chapter I have refuting all the historicity criteria is like the deconstructive part of the whole project, because once you see that their methods are wrong, they don’t have any valid basis….

CARRIER: That’s the problem with the criticism that I’ve made before about the pro-myth community: that they’re outside of academia.

They act like outsiders and mavericks and accuse historians of all these awful things.

CARR: What sort of ‘awful things’? Pointing out that every single criterion used is wrong?

CARRIER: …the historians today assume that “Oh, that (the myth theory) was refuted 80 years ago.”

CARR: What qualifies somebody as an historian? Is it the ability to use these 31 or so criteria, all of which are logically invalid?

To choose a name, Bart Ehrman has a BA from Wheaton (an evangelical college) and ‘At Princeton I did both a master of divinity degree—training to be a minister—and, eventually, a Ph.D. in New Testament studies.’

Does training to be a minister, or a Ph.D. in ‘New Testament Studies’, qualify you as a ‘professional historian’, in the way that studying the Iliad would qualify you as a professional historian?

Crossan is an expert on the criterion of double dissimilarity, the criterion of embarrassment, etc. – all the criteria that Carrier shows are logically invalid and bankrupt.

What qualifies somebody as a ‘professional historian’, so that people like Bart Ehrman and JD Crossan are professional historians and Earl Doherty isn’t?

Steven Carr said... If historians are good at spotting ‘poorly argued’ theories, why does peer review allow so many articles through that use methods that are ‘bankrupt’ and ‘invalid’, to quote Carrier?

One could say the same of scientific papers that have subsequently been proved false or even have had their methodologies repudiated (such papers would number easily in the thousands and counting). A major recent example is the damage done to the entire regime of standard statistical methodology, which has been shown to be almost entirely unreliable despite having been believed to be the gold standard for over fifty years (Tom Siegfried, “Odds Are, It’s Wrong: Science Fails to Face the Shortcomings of Statistics,” Science News 177.7 (March 27, 2010): pp. 26-29, w. suppl. materials at http://bit.ly/aq1x28).

Sometimes the peers just don't realize the methods they regard as fundamental are in fact invalid. And how would they know, if no one tells them?

What sort of ‘awful things’? Pointing out that every single criterion used is wrong?

No. You must have missed my remark that this "pointing out" is not maverick: every single historian who has examined these methods has confirmed their invalidity (I cite half a dozen, all prominent scholars in the field of Jesus studies). I just summarize what they argue, and build on it. The problem here is that (as I also mentioned in the interview) the quantity of publications has become so huge that experts are failing to even become aware of crucial works in their field. Most peers are still operating in complete ignorance of the fact that other peers have refuted the methods they still regard as standard. This is a common time lag. It usually takes about fifteen years for a firmly replicated finding like this to diffuse through the entire expert community and become standard knowledge (changing the paradigm).

In reference to mythicists, I'm referring to charges many mythers make that all biblical scholars are liars or dogmatists or insane, etc., as a way of "explaining" why they don't "listen" to the likes of Tom Harpur or D.M. Murdock.

Steven Carr said... Does training to be...a Ph.D. in ‘New Testament Studies’ qualify you as a ‘professional historian’, in the way that studying the Iliad would qualify you as a professional historian?

Yes. Ehrman is a good example: his professional work is superb (e.g. Orthodox Corruption of Scripture). He has received a fully respectable training in the "current" standards of the field (which are not all bankrupt, e.g. textual criticism).

That the field of history in general suffers a problematic lack of a properly established methodological paradigm is something I hope to help solve (as C. Behan McCullagh has been doing these last twenty years, and a few other notables).

What qualifies somebody as a ‘professional historian’, so that people like Bart Ehrman and JD Crossan are professional historians and Earl Doherty isn’t?

I actually answer this question in chapter two of my forthcoming book. In short, in addition to what I will argue there, a professional historian (1) has extensive coached practice making arguments in the field (so that a huge range of common mistakes are purged, thus greatly reducing their future rate of error), (2) has extensive coached practice in researching historical claims and questions (so they become fully acquainted with what sources of evidence exist, what's been said and surmised about them, and how to go about finding all of these things), (3) has completed years of coursework and supervised reading that build a vast, accurate database of background knowledge in their field (e.g. in my case, papyrology, paleography, Roman law, politics and sociology, Greek linguistics, etc.), and (4) has been vetted in oral exams by peers on multiple subjects in their area (to verify that they are sane, well-read, knowledgeable, critical, aware of the arguments and merits on all sides of a debate, and using the methodologies sanctioned by the field and not using methods repudiated by the field). And that's just the short list.

The key to becoming a competent historian is the same as in any other field of expertise: you need to complete 10,000 hours of training under the supervision of an expert who can point out your mistakes, direct you in how to improve, and tell you the things you need to know to accelerate your training. This 10,000 hour rule is pretty well accepted for every field (even athletics). That's the equivalent of about seven to ten years of full-time schooling. Which is about what a Ph.D. is.

BTW, Doherty has nearly this much expertise (he has a formal undergraduate degree in classical studies and has done nearly a doctorate's worth of personal study) and I consider him nearly as competent as any other Ph.D. in history. I say that knowing many such Ph.D.'s who make disastrously wrong claims and arguments (competence does not entail infallibility); Doherty's books are not perfect, but are nevertheless still superior to that standard. Thus he should be taken seriously. But there is no way for other historians to know that. That's what degrees are for: to verify (the same way a driver's license does) that the recipient has done what's required to produce a competent professional. Otherwise he's just one more unqualified outsider. The effort required to vet thousands of mavericks, so as to find the one Doherty among them, is beyond any historian's resources. That's why we have degree granting institutions: to divide the labor of verifying these things.

Nevertheless, for an idea of how much better Doherty's works would be if he formally trained for a Ph.D., you can get some idea from my criticisms of how his book fell just short of Ph.D. quality.

Dr. Carrier, in the interview you note the importance of peer review. You mentioned peer review would be done once you got a publisher. You also said that you have shown some of your work to C. B. McCullagh, who was pleased with it. I'm not sure how much of this you would like to share, but have you been getting peer review from both NT scholars and other historians all along as you have worked on both books?

I do find Carrier's rhetoric to be unnecessarily hyperbolic. Superficially it sounds as though he's refuted all of historical inquiry, when in reality what he means is that most criteria fall short of proper articulation. It's not as if there is logic and then Bayes's theorem; there's a spectrum of rigor in between (and this is discussed plainly in the interview), and things get a little weird on the surface when a harsh dichotomy of right and wrong is at play. He could be consistently inclusive with his rhetoric rather than exclusive, and then the "issue" would go away. But he seems to really like talking in terms of "all of history sucks! OMGBayesTheorem!!"

Doubting said...Have you been getting peer review from both NT scholars or other historians all along the way as you have worked on both books?

The first book has been seen by numerous historians and NT scholars. But they haven't commented much and I don't expect them to, simply because the stuff in it that relates to positive conclusions about Jesus is unsurprising (e.g. that the method of criteria is invalid is already established by mainstream scholars, as my book shows) and the rest (on the new method) is hard to evaluate until they see it in action (as in my next book). I'm happy that none have found anything demonstrably wrong in it, but since they aren't being paid to do a thorough vet, I don't count this as a proper peer review.

Morrison said... Dr. Carrier, do you have academic credentials in mathematics?

I do not have a mathematics degree. But I do have college training in the field (in electronics engineering, calculus, statistics, and others). I have also worked in a mathematics profession (sonar) and published a mathematics paper in a peer reviewed journal (Biology and Philosophy). So I'm not out of my element.

But since I am not doing anything original in mathematics (Bayes' Theorem is already established, and nothing I'm doing with it is complex), my qualifications don't matter as long as mathematics peers approve the work (hence that is what I will require be done). I have already had a few mathematics Ph.D.'s read the manuscript, and have made some revisions in light of their remarks; the work has so far met their expectations.

Morrison said... And since many of the contributors to the book you contributed to with John Loftus do not have a Ph.D. (including Loftus himself), or do not have one in a relevant field, can I dismiss them as well?

I've only been speaking here of historians (note my reasoning for normally requiring a Ph.D. in history was all based on things historians need to know). Expanding the question to other fields is off the present topic, but the need of a Ph.D. in philosophy is vastly less, IMO (since the only skill philosophers actually need is formal logic; the rest is just history of philosophy, which isn't really "doing philosophy"). As for the sciences, the only science chapters in TCD are Tarico's and Long's, and both have science Ph.D.'s, and yet neither is doing anything original: they are merely summarizing existing science (done by fully qualified persons). They both have the relevant qualifications to do that.

Similarly for the other chapters: Tobin and Babinsky simply summarize existing scholarship. They are not doing original work in any field, nor are they challenging any established consensus of experts in any field. So their chapters aren't even relevant to the issue here. Even Avalos, who cites existing scholarship to make his case for the Nazi program, at least has a Ph.D. in a related field (history), just not in that specialty field (he's biblical and near eastern, not Hitler studies), which at most would prompt me to check his sources to be sure he is correctly reporting the observations he makes (and I did, and he does). By analogy, I'm published under peer review in Hitler studies: even though it's not my specialty field, my work in that field has been professionally vetted as accurate. Everything Avalos argues is based entirely on such work.

As for Loftus, he has several pertinent graduate degrees. So he has been considerably vetted, just not to the full standard of a Ph.D. Does lacking a full Ph.D. matter for the things he argues? Evidently not, as even the Evangelical Philosophical Society has taken him seriously (enough to organize a conference panel in honor of his most original argument, the OTF). Since nothing he argues challenges any established consensus in any field, you don't need to hold him to a Ph.D. standard. It's enough that he has actual qualifications at the graduate level.

But more directly to the point, since Loftus is still just doing philosophy in his chapters, even you can be qualified to vet his arguments, as long as you are adequately skilled in formal logic (for which any graduate degree in a related field, e.g. theology, would suffice).

Of course, if you deem someone who lacks a Ph.D. in philosophy as unqualified to reach correct conclusions in philosophy, then you must deem yourself incompetent to reach correct conclusions in philosophy. Unless you have a Ph.D. in philosophy. Conversely, if you count yourself qualified to verify whether Loftus' philosophical arguments are sound and valid without a Ph.D., then you must certainly count him so.

WAR_ON_ERROR said......It's not like there is logic and then Bayes's theorem.

The rest of what you say is correct. But the actual issue pertaining to this remark specifically is not "logic vs. BT" but "the absence of any formal logic vs. the presence of formal logic," and when this is resolved in favor of the right-hand competitor, what "formal logic" leads you to is BT (thus it's not logic vs. BT but logic and BT vs. neither).
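To make the point concrete (every number below is an invented placeholder, not an estimate from the interview or the books), here is a minimal sketch of the theorem that valid historical arguments implicitly conform to:

```python
# Minimal sketch of Bayes' Theorem applied to a historical hypothesis h
# given evidence e. All probability assignments are illustrative only.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(h|e) = P(e|h)P(h) / [P(e|h)P(h) + P(e|~h)P(~h)]."""
    numerator = p_e_given_h * prior_h
    denominator = numerator + p_e_given_not_h * (1 - prior_h)
    return numerator / denominator

# Suppose even prior odds, and evidence twice as likely on h as on ~h:
p = posterior(0.5, 0.6, 0.3)
print(round(p, 3))  # 0.667
```

A "criterion of historicity," formally stated, is just a claim that some feature of the evidence is more expected on one hypothesis than another, which is exactly the likelihood-ratio input above.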

There has been no discussion in history as a field of the underlying formal logic or the formal logical validity of any of its methods (the closest I've ever seen is Gottschalk in 1950, but no one picked up the approach after him, nor gave his work the formal logical underpinning his words on occasion only imply; the closest anyone has come since is McCullagh forty years later, and yet even he doesn't justify his conclusions with formal logical demonstrations).

Historians have just been muddling through with a "that just looks right to me" approach (even Gottschalk and McCullagh, to be honest), the only difference from amateurism being that a large community of critical, knowledgeable historians pursues a consensus on what "just looks right" to all of them (or rather, most of them; the community divides itself into camps, e.g. modernist and postmodernist, which have largely incommensurate methodologies; something similar plagued the science of psychology throughout the 20th century, and it was only by the end of that century that a consensus was brewing as to which camps are validly scientific and which are not; and yet history as a field containing a formal discussion of methodology is actually younger than psychology: it begins right around WWII, a whole generation after the opus of Freud).

Thus "obviously" bad methods are rejected and "obviously good" ones accepted. And that's well enough (most sciences are in the same boat, e.g. the methods of geology have never been validated by a formal logical analysis that I know of). But that leaves everything in between (i.e. what do you do when the conclusion does not "obviously" follow from the premises?), as well as the formal validation of the obvious (i.e. though a method may be "obviously" bad, one still can ask what the formal logical explanation is for why it is bad; likewise, though a method may be "obviously" good, one still can ask what the formal logical explanation is for why it is good; historians have so far done neither).

The second problem (lack of logical analysis of the "obvious") is not a major problem since by definition such inferences are so clear one does not need to know why (e.g. we don't have to know the formal logical basis for trusting our knowledge of where our house is in order to know that our inferences in that regard are obviously correct almost all the time; analogously, we can be fully certain that the Gospel of Mark was not written in 18th century China without having to know the formal logic that validates that inference).
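A hedged sketch of what the formal validation of such an "obvious" inference would look like, using the China example (the probability assignments are invented purely to illustrate scale):

```python
# Sketch: why an "obviously" false hypothesis (e.g. "Mark was written in
# 18th-century China") needs no formal analysis in practice: any remotely
# sane assignment gives it a negligible posterior. Numbers are invented.

prior = 1e-12            # background knowledge makes the hypothesis absurdly rare
p_e_given_h = 1e-6       # the evidence (Greek text, early attestation) is
                         # wildly unexpected on the hypothesis
p_e_given_not_h = 0.99   # and entirely expected on its denial

post = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(post < 1e-15)  # True
```

The conclusion is robust: vary any of these inputs by several orders of magnitude and the posterior remains negligible, which is why no one needs the formal demonstration to trust the inference.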

But the first problem (everything in between) only isn't a problem in science, since the consensus reached by scientists is that things in between cannot obtain a scientific standard and thus are excluded as unscientific (which does not mean false, it just means not accessible with scientific certainty). If historians of Jesus did that, they would all have to declare on page one of every book "we can know nothing about Jesus or early Christianity with any discernible certainty." Such broad agnosticism might seem fine to propose, but (a) it's never going to happen and (b) it's inadequate, because the stuff in the middle is not all commensurate in uncertainty (e.g. it's not as if things that are not scientifically certain are all therefore completely unknowable: some are, but some are knowable to a modest probability, some to a much higher probability, and so on), and that's what history as a field is about: sifting through those levels of certainty regarding our past (in particular contrast with actual scientific histories, like geology).

Thus historians have more need of attention to the logical validity of their methods than scientists do. And yet they have given it substantially less attention than scientists have. Which is perverse. So I do indeed aim to make a start at filling that gap.

I don't think I'm disagreeing with you in substance, only in presentation. You seem to agree that historians are using informal logic and even formal logic that may reach Bayesian articulation naturally. And we recognize those better conclusions when we see them as a result. You want greater self-awareness and greater levels of precision in addition to skipping to the part where all the historian's cards are on the table in proper form for everyone else to help evaluate.

I wouldn't know what technical name to give what appears to me to be a common cognitive bias (among those used to raging against popular opinions): applying unnecessary levels of rhetorical exclusion to popular fixations that happen to have elements we need to keep. We throw the baby out with the bathwater because we want to jog people out of their slumber like in a cliché movie scene ("Snap out of it!"), and then put the baby back on the changing table, because, well, we needed that.

When you say "all historical criteria are completely hosed," for example, but then you go on to say that criterion x *is* valid *when* augmented by a further consideration, I consider that an example of this bias. Is the argument from silence bad? Oh, throw that out. But put it back on the table and add in another consideration or two, and it's valid. Is the criterion of embarrassment trash? Oh yeah, throw that rubbish out. You don't need it. But oh, by the way, you'll need it if only you add in considerations x and y. Etc.

And the only reason to bring this up is that this appears to me to be the weakest point of your presentation as far as people with anti-mythicist axes to grind go (to the small extent that perhaps you can be blamed for them misunderstanding, vs. them just finding completely inexcusable means to misunderstand). The cliché is that mythicists so badly want to prove mythicism that they'll chew through their own limbs of proper historical inquiry in order to attain the desired conclusion. Obviously, contextually, that's not what you are doing at all. You're rebuilding the engine of historical method to pimp it out to the fullest, embracing all the parts that are good and adding in the things that are missing. In a perfect world that would be obvious to everyone given the things you've already said.

But as we are well aware, we are competing with the cognitive biases of others, such as confirmation bias, and any superficial quote that too easily plugs into the well-known, well-established pejorative narratives of others might best be avoided when better methods of presentation are available. I know you are aware of countless such quote-minings of your own work, even in the context of clarifying sentences which are literally only a sentence or two away. It really doesn't matter *if they find what they are looking for* (whether through deceit, delusion, or brute carelessness).

If this feedback strikes you as helpful, great. This appears to be a common enough element in your presentation of the topic when discussing the perils of current historical method. If not, I see no reason to press the issue. My sensibilities may not be properly tuned. I don't think it hurts to mention it though.

I know you must be very busy, but I hope you have the time to read and maybe even respond to this.

I sympathize with your criticisms of historical research, but I'm a bit puzzled by what you're proposing to put in its place. For instance, you talk a lot about Bayes' theorem, but how can we use it to settle historical questions? Just to give you an example, I recently read Xenophon's Anabasis, and of course I was moved by that passage which I have since learned is quite famous, wherein the Greeks catch sight of the Euxine from Theches, and raise the cry, "The sea, the sea!" They proceed to construct a cairn as a monument to their moment of relief (the remains of which some modern-day archaeologist named Mitford claims to have discovered, though I cannot seem to find any detailed information on that bit). Where does Bayes' theorem fit into deciding how much of this melodramatic scene is historically accurate? Or if you're not familiar with that particular story, then perhaps you can share a different example of how we can use Bayes' theorem to generate a historical conclusion.

It seems to me that Bayes' theorem helps us maintain logical CONSISTENCY, but it doesn't help us actually form any of our historical conclusions. After all, we need priors to plug into our calculations, and those need to originate from somewhere! But if we use non-Bayesian methods to determine the priors, then Bayes' theorem is going to generate conclusions no more reliable than the non-Bayesian methods on which they are ultimately founded. It seems to me, then, that we should use non-Bayesian methods to form historical conclusions, and then, wherever possible, use Bayes' theorem to verify that we are being logically consistent when we accept all those conclusions to the respective degrees of confidence that we do. Now, I should probably confess at this point that I have no formal training in history, so I openly acknowledge the possibility that I'm just way off, here, in my perspective. But I just can't imagine how else a statistics theorem is going to help a historian draw his historical conclusions.
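The worry about priors can be made concrete (all numbers below are invented for illustration): with weak evidence the posterior tracks whatever prior you chose, while strong evidence washes the prior out, so the source of the priors matters most exactly where the evidence is weakest.

```python
# Sketch of prior sensitivity using the odds form of the update.
# Likelihood ratios and priors here are invented placeholders.

def bayes_update(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio; return probability."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

for prior in (0.1, 0.5, 0.9):
    weak = bayes_update(prior, 2)      # evidence only 2x likelier on h
    strong = bayes_update(prior, 100)  # evidence 100x likelier on h
    print(prior, round(weak, 3), round(strong, 3))
```

Running this shows the weak-evidence column spread widely across the three priors while the strong-evidence column clusters near 1, which is one way of seeing both the commenter's point and its limit.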

In addition to logical consistency, though, historians ought to be concerned about ordinary consistency, i.e. not using double standards. (The fact that both of these otherwise unrelated principles are called "consistency" is purely coincidental.) After all, bias is an ever-present danger, and forcing oneself to be consistent is one way to fight bias. I think this business of consistency is why historians have developed those ~31 criteria you've been discussing. It must help, I would think, to standardize one's methods as much as possible. And of course at bottom historical conclusions are still going to depend on subjective judgments, but we might temper that subjective element by doing our best to operate under pre-determined rules, i.e. the ~31 criteria.

You rightly point out that those criteria are deductively invalid. But if what I just described really does capture the role of those criteria in historical research, then it doesn't matter that they are deductively invalid! We don't need the guarantee of truth preservation which deductive procedures provide in order to draw historical conclusions. All we need to do is to make explicit the intuitive principles that we're going to have to use anyway. Once we accomplish that, then we can work towards the goal of consistency, which will hopefully reduce the impact of our biases.

If you don't get a chance to respond, no worries. But I'm very curious to hear what you think about the views I just expressed.

Dr. Carrier, do you plan on commenting on the Common Sense Atheism article discussing this latest podcast?

I am also wondering if you plan on responding to Lydia McGrew's rebuttal to the claims you made about her writing: http://lydiaswebpage.blogspot.com/2011/01/odds-form-of-bayess-theorem.html

I also noticed Luke M made this comment on that page:"Carrier apparently has only a few years' experience with Bayes' Theorem, as the historical method section in Sense & Goodness Without God (2005) does not mention Bayes. When asked to guess at the competence in probability theory between two people who have been publishing peer-reviewed philosophy literature on probability theory for at least a decade vs. someone who discovered Bayes' Theorem in the last few years, I'm going to bet on the former in a heartbeat."

So I am also curious how long you have had experience with Bayes' Theorem.

WAR_ON_ERROR said...I don't think I'm disagreeing with you in substance, only in presentation. You seem to agree that historians are using informal logic and even formal logic that may reach Bayesian articulation naturally.

Right: when historians make valid arguments, their arguments conform to Bayes' Theorem, even though they are (usually) unaware of this.

But historians have not been consistently good at distinguishing valid arguments from invalid ones.

They have mitigated the damage that could have resulted from this confusion only by using a standard of obviousness: obviously correct reasoning is accepted as persuasive, and everything else as either an unresolved battle of opinions (if disagreements persist) or a tentative "consensus." The latter often happens when otherwise invalidly argued opinions happen to agree.

The unstated assumption is that when there is near universal agreement of opinions in an expert community, the frequency of such opinions being correct will be substantially higher than that of opinions on that subject among non-experts. That is valid reasoning (that frequency, and hence probability, will indeed be higher), but it does not obtain the degree of certainty or reliability that such consensuses are typically (and thus erroneously) assigned.

I think the support of Q as a distinct document is an example of this: everyone cites the consensus in support of the consensus, without actually checking to see if the arguments supporting it are logically valid; they are not, as Goodacre has demonstrated. Of course, a conclusion reached invalidly could still nevertheless be true, and I think the jury is still out on whether Goodacre's positive arguments against Q produce sufficient certainty to conclude there was no Q, but here is exactly where I think Bayes' Theorem could be put to good use (although I won't be doing that myself).
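As a hedged sketch of what such a Bayesian treatment of the Q question could look like (the hypothesis labels come from the discussion above, but every number here is a placeholder I've invented for illustration, not a real estimate of the Q evidence):

```python
# Odds-form comparison of two source hypotheses ("Q existed" vs. "no Q",
# per Goodacre). All inputs are invented placeholders for illustration.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each item of
    evidence (treated as independent for this sketch)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical: even odds a priori, three pieces of evidence each
# modestly favoring "no Q" (an LR below 1 favors the alternative).
odds = posterior_odds(1.0, [0.5, 0.8, 0.7])
prob_q = odds / (1 + odds)
print(round(prob_q, 3))  # 0.219
```

The value of the exercise is not the final number but the discipline: each likelihood ratio must be stated and defended separately, which is precisely what citing the consensus in support of the consensus fails to do.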

Correction: only the thirty or so named criteria in Jesus studies that we were discussing; I was not referring to all criteria in the whole of all fields of history.

...but then you go on to say that criterion x is valid when augmented by a further consideration, I consider that an example of this bias.

The criteria are "completely hosed" precisely because they do not include these "further considerations." As I explain in the interview (and in my article in CAESAR and now the chapter in Sources of the Jesus Tradition, which I'll soon blog about, but a heads up: they erroneously published an uncorrected draft of my chapter; the correct version appears in CAESAR), when the criteria are "fixed" (so as to no longer be hosed), they become so altered as to have little to no application in Jesus studies (because they then require a much better source situation).

I don't see anything incorrect in my characterization of this so far.

Is the argument from silence bad?

We never discussed the AfS, and it is not one of the "criteria of historicity" we discussed in the interview. This is an example of my point above: you seem to have skipped the part where we narrowed the subject of discussion to a particular list of criteria (the "criteria of historicity" used in Jesus studies). Thus you mistakenly took everything then said as pertaining to all other criteria (like those forming an AfS), when that was clearly not the case.

The cliché is that mythicists so badly want to prove mythicism that they'll chew through their own limbs of proper historical inquiry in order to attain the desired conclusion. Obviously, contextually, that's not what you are doing at all. You're rebuilding the engine of historical method to pimp it out to the fullest, embracing all the parts that are good and adding in the things that are missing. In a perfect world that would be obvious to everyone given the things you've already said.

All correct. But even more so, it should be most obvious in how I practice what I preach: e.g. anyone who follows my work at all will have seen me criticize and correct mythers as much as (indeed if not more often than) historicists. Only an ignoramus will mistake me for someone who embraces an invalid method when it supports him. So I'm not worried about being misperceived here. I can't expect ignoramuses to be anything but ignorant. Everyone else will get where I'm at.

But then, this thread discussion here can only assist in that goal, so thank you.

Morrison said... I have also had a few courses in History and Philosophy, so does that mean I can claim the same expertise in those fields that you claim in Math?

Do you have any peer reviewed papers published in those fields?

Which leads me to wonder why you are always flaunting a Ph.D., since you did a lot of work before you got a Ph.D.

A complaint that I always find amusingly ironic. Everyone criticized my work before because I "didn't have a Ph.D." Now that I have one, they change their tune and criticize me for mentioning that I have a Ph.D.

Well, which is it? Does a Ph.D. in history make me more qualified as a historian? If not, then why do you care if I have one or if any author of The Christian Delusion has one? (And if you don't care, why do you make an issue of it?) And if it does make me more qualified, then how would it make any sense for me to conceal the fact that I have one?

It is also ludicrous to ask a professional, who believes in the value of a credentials process enough to have gotten one, why he defends the value of a credential. It is doubly ludicrous to make such a complaint whilst completely ignoring every point of his detailed argument proving the value of that credential.

Was that [prior] work not up to par?

To Ph.D. par? No. Nor would I have ever claimed such, for any of my works in history. And I didn't.

And yet that was all produced with a college degree in both history and classics under my belt, and most of it with a masters degree in ancient history under my belt.

The sole exceptions are some papers in journalism, not history, and of course a few old works in philosophy, for which I already mentioned I do not deem a Ph.D. necessary--and yet several of those papers were still published under peer review (rather proving my point).

Morrison said...I know you would not operate under a double standard, so I would like a little more clarification.

I have no double standard. But you might perhaps be laboring under a false dichotomy: all writings are either of Ph.D. quality or of no account.

There are instead degrees of qualification, and different claims demand differing degrees of qualification to have prima facie merit, depending on their degree of difficulty, their method of citation and argument, their conformity to consensus, and the demands they place on the reader's belief (thus you will see a huge difference in the sophistication and specificity of my postgraduate work compared to my pregraduate work: the latter did not expect any reader to believe what he could not himself verify, and merely provided the references to facilitate that).

And as I've said here (and often elsewhere), no qualification is necessary to have secunda facie merit. Rather, as I've explained above, experts have no way of vetting all such work to ascertain what has secunda facie merit. Therefore an advanced degree allows them to narrow the field of what they should devote their time to. Thus when I lacked a Ph.D. I never expected experts to pay attention to my work (unlike some mythers, and many commentators on my blog, I never went around complaining that they were ignoring me). But now I do. And I have earned the right to. That's the point I have made here, and in considerable detail I might add.

The only things near an exception are my earlier peer-reviewed publications, but peer review sets no standard of college qualifications, only the intrinsic standards of the field (sound method of argument and citation to pass initial muster; formal peer review to pass final muster). Hence in fact by submitting those works for peer review I was requesting that they be vetted by fully qualified experts before being acknowledged as worth the expert community's time. But then, that is again my point: such systems are in place for precisely that purpose. People who deliberately bypass them and then complain the experts are ignoring them are simply not being wholly rational.

And that only follows for novel, unvetted claims. You don't need a Ph.D. or peer review to write authoritatively on what other people with Ph.D.'s have concluded. Experts don't need to read such things (since they already know what they contain) and non-experts can verify it all themselves (if they feel the need to) by reading the expert references cited.

hatsoff said...you talk a lot about Bayes' theorem, but how can we use it to settle historical questions?

My entire book Bayes' Theorem and Historical Method answers that question, in 320 pages. It is finished, has been read by numerous experts, and is now being shopped for a publisher. You will just have to wait for that to come out and read it.

After all, we need priors to plug into our calculations, and those need to originate from somewhere!

Indeed. And all historians routinely argue from unstated assumptions of prior probability, in every argument they make. The problem is that they are unaware of this, and the very problems it presents, all of which are laid bare when you start using Bayes' Theorem.

As recent scientific papers have shown, even scientists have been making this mistake, and Bayes' Theorem is now recommended there as the solution (since it compels people to make their assumptions explicit and thus subject to questioning, debate, and test). See, for example, George Diamond & Sanjay Kaul, "Prior Convictions: Bayesian Approaches to the Analysis and Interpretation of Clinical Megatrials," Journal of the American College of Cardiology 43.11 (2 June 2004): 1929-39. The same arguments made there obviously hold for all empirical fields, like history.

All we need to do is to make explicit the intuitive principles that we're going to have to use anyway. Once we accomplish that, then we can work towards the goal of consistency, which will hopefully reduce the impact of our biases.

And I prove in my book (on several standard methodologies in the field, not just the method of historicity criteria) that when you do exactly that, you always end up in the same place: Bayes' Theorem. So why not just start there instead? My book explains how.
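To illustrate the point about unstated priors, here is a minimal sketch (with hypothetical numbers, not taken from the book or the cited paper) of how Bayes' Theorem forces a prior into the open: the same evidence yields very different conclusions depending on the prior you (perhaps unknowingly) assumed.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' Theorem:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Identical evidence (four times likelier on H than on ~H),
# but two different starting assumptions, now made explicit:
agnostic_start = posterior(0.5, 0.8, 0.2)   # prior of 50%
skeptical_start = posterior(0.1, 0.8, 0.2)  # prior of 10%

print(round(agnostic_start, 2))   # 0.8
print(round(skeptical_start, 2))  # 0.31
```

Two historians looking at the same evidence can thus honestly disagree, and the formula shows exactly why: their disagreement lives in the prior, which can then be debated on its own merits instead of hiding inside an intuition.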

Hi Richard, it's being put across by many on the McGrew blog that your apology concedes more than it does. Indeed John Fraser (a student of the McGrews) is putting it across as a kicking. I don't read it like that, and I'm not suggesting that the McGrews are condoning such an interpretation, but would it be worth expanding on what you meant to say, rather than what you did say, in the CSA interview? Thanks.

I'm not aware of Lydia McGrew taking it that way. Can you point me to an actual comment URL where she does? My apology fully explains where I was wrong and what I meant to say but didn't. Of course Lydia and I agree that I still think she's wrong, but for reasons other than the ones I retracted and apologized for (and those actual reasons I did state in my apology). If that is how Fraser is framing what I said, then there is nothing controversial about that.

Hi Richard - see the following links. John Fraser is a personal friend of the McGrews (according to him). Link 1 quote: "As I've already pointed out, Richard Carrier tried to criticize it and ended up embarrassing himself. So I don't think other critics are going to be very hasty to try to attack it on those grounds, because the McGrews are experts in this field."

What is the article in CAESAR? For that matter, what is "CAESAR"? Also, I've heard from other sources that you'll be having a paper out in an academic journal on the Josephus passages. Could you blog (or email) an update on all of this sometime? I'd also be interested in finding out how volume 2 is coming along.

Landon Hedrick: I just realized I misremembered. The CAESAR article wasn't corrected, the SJT version was supposed to have been (I sent the editors an email with some of the corrections even before receiving a galley proof, and would have caught the others if I had ever received a galley). So both are defective. Not fatally (just terminological issues). I'll mention this when I blog about SJT later this month.