
I choose not to be associated with any political ideology – not libertarianism, liberalism, or conservatism – because I believe doing so reinforces deep-seated tribal instincts that affect our stance and decision making on important issues. It does so by constraining our ability to form good conclusions (motivated reasoning) and by fostering conformity (groupthink). Ideologies allow us to evaluate what's right and wrong, give us a cultural identity and let us turn ideas into action. So I see the value of an ideology, and not adhering to one probably makes me politically impotent, but I won't compromise critical thinking just to be politically viable. Some may argue that all belief systems within a culture are in a sense just principles, which no one escapes, and so my worldview is really just an ideology too. I agree with this retort, but here I'm talking about ideologies that have dogmatic assertions, well-defined agendas, and ingroup members who uphold these things at any cost. Moreover, although principles and ideals are important, I believe they should often be subordinated to pragmatism – the idea that you do what works.

To the wider point, by distancing myself from groups – for example, not identifying with a conservative pundit on TV – I avoid dysfunctions such as groupthink, a highly flawed way of solving problems and making decisions that results from often unconscious ingroup influences. For our purposes, bias is a preferential treatment or selection of ideas, objects or people. In an ingroup situation, like conservatism, there's an effort to undermine dissenting and controversial viewpoints, indoctrinate members, isolate the group from others, and bias judgment toward the group's principles and ideas, all to maintain conformity and harmony. The result is that the ingroup overrates its abilities, underrates its opponents, and consequently has an inflated sense of certainty that the right decision has been made [wikipedia]. A good example of this is the Bush administration's belief that there were weapons of mass destruction in Iraq. Other effects are demonization of outgroup members and collective confirmation bias – preferential treatment of the group's theory as new evidence appears.

Becoming the victim of indoctrination, a facet of groupthink and an uncritical way of obtaining information, shouldn't be surprising to anyone, as we all see how one can inherit a religion or political persuasion, often unquestioningly. We also know how easily novices, particularly when young, can be influenced to believe persuasive people's viewpoints, especially those of the revered. How we become indoctrinated, on the other hand, is complex, and numerous pathways probably exist. Although I believe that those who are well versed in logic and critical thinking, and who are informed in the field of inquiry, are less prone, I think much of it is unavoidable. I believe this because the belief formation process – deciding whether to accept or reject a claim – is easily corrupted by emotional appeal, psychological biases and evolutionary forces. To illustrate emotional appeal: we can join an ideology out of pure preference or sentiment without any analysis; for example, "I like liberals because they seem more compassionate to those in need." This path is probably the more frequent avenue, compared with sitting down and evaluating every claim and position a group proposes. That is unfortunate but understandable, as assessing arguments from economics, history and political science is very time consuming. One of the reasons we are ideologues is that it serves as a shorthand to being politically relevant – but at a cost.

I recall how easily I was indoctrinated by conservative talk radio – in a matter of months I was parroting all of its arguments and principles – and liberals became the subject of my caricatures and demonization. Likewise, when I went through a period in which I identified as a liberal, reading exclusively the New York Times and the Huffington Post, I showed much prejudice towards conservatives. So conforming to an ideology not only comes at the cost of possible indoctrination; it also creates stark ingroups and outgroups. And this has real consequences, as it can evoke tribal instincts of contempt towards the outsider and affinity towards the insider. This chasm between groups creates defensive behavior when people are challenged on their viewpoints, which inevitably leads to obstinacy and bigotry – our amygdala probably hijacks our prefrontal cortex. In fact, you can think of conservatism, liberalism and libertarianism as different tribes with their own languages. As The Three Languages of Politics by libertarian Arnold Kling puts it:

Humans evolved to send and receive signals that enable us to recognize people we can trust. One of the most powerful signals is that the person speaks our language. If someone can speak like a native, then almost always he or she is a native, and natives tend to treat each other better than they treat strangers. The language that resonates with one tribe does not connect with the others. As a result, political discussions do not lead to agreement. Instead, most political commentary serves to increase polarization. The points that people make do not open the minds of people on the other side. They serve to close the minds of the people on one’s own side.

If you haven't realized it already, most of us actually get our beliefs first and then look for evidence to support them, not the other way around. This is known as motivated reasoning: we focus on what we want the cause to be. But these beliefs (causes) come from being indoctrinated within an ingroup in the first place. It's no coincidence that a majority of liberals believe many people are oppressed by society; that is, after all, an inherited tenet, but it may or may not be true as a general rule. Some liberals, like conservatives, will hunt for evidence that conforms to their belief – society causes apparent oppression – without ever considering alternative causes. Liberals and conservatives have other heuristics and causes by which to assess the political landscape, but they are finite. The real world, however, is much more nuanced and complex, creating the need to look at many other causes. But ideologies restrict you from doing just that, since you are stuck working with inherited principles, e.g., big government is always bad. The following quote by libertarian Arnold Kling reinforces the points on obstinacy and motivated reasoning.

If people were open-minded, you would think that the more information they had, the more they would tend to come to agreement on issues. Surprisingly, political scientists and psychologists have found the opposite. More polarization exists among well-informed voters than among poorly informed voters. Moreover, when you give politically engaged voters on opposite sides an identical piece of new information, each side comes away believing more strongly in its original point of view. That phenomenon has been called “motivated reasoning.” When we engage in motivated reasoning, we are like lawyers arguing a case. We muster evidence to justify or reinforce our preconceived opinions. We welcome new facts or opinions that support our views, while we carefully scrutinize and dispute any evidence that appears contradictory. With motivated reasoning, when we explain phenomena, we focus on what we want the cause to be. The philosopher Robert Nozick jokingly referred to this as “normative sociology.” For example, what accounts for the high incarceration rates of young African American males? A progressive would look to racism in our justice system and society as the cause. A conservative would look to high crime rates as the cause. And a libertarian would look to drug laws as the cause.

We discussed the challenges of accepting a miracle as true because it contradicts how we know the world works. We then asked: what happens if we have good evidence for the miracle? The solution to that problem was to translate our hypothesis, evidence and background knowledge into degrees of certainty and input them into the logically valid formula developed by Thomas Bayes. We could always resort to informal argumentation, but using Bayes' theorem gives us special insight by breaking the problem down into components – hypothesis, prior knowledge, new evidence, etc. – and preventing us from falling victim to confirmation bias. This bias is reduced because the theorem gives us the opportunity to consider alternative hypotheses, which are often ignored in "armchair" arguments. In this post, we will try to show that extraordinary claims do indeed require extraordinary evidence. The hypothesis we will be testing is whether Jesus was raised from the dead by a supernatural agent. The formula is given again below for reference.
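The formula referenced here is presumably the standard form of Bayes' theorem used throughout these posts; reconstructed, it reads:

```latex
P(h \mid e, b) = \frac{P(h \mid b)\, P(e \mid h, b)}
{P(h \mid b)\, P(e \mid h, b) + P(\lnot h \mid b)\, P(e \mid \lnot h, b)}
```

where h is the hypothesis, e the evidence, b our background knowledge, and ~h the alternatives to the hypothesis taken collectively.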

It's helpful to see this formula in terms of an argument to the best explanation (ABE), which is often used by historians, especially apologists; it is a type of abductive reasoning. You can think of prior probability as how plausible your hypothesis is, combined with its ad hocness. Plausibility has to do with how typical our explanation is – that is, how much it accords with how we know the world works (background knowledge). If you have to invent excuses (ad hoc assumptions) to make the evidence fit your hypothesis, you lower the prior probability, P(h|b). The other components of ABE affect the probability of the evidence given the hypothesis, P(e|h,b), and given its alternatives, P(e|~h,b). The first is explanatory power, which asks whether the hypothesis makes the evidence more probable than competing hypotheses do; it should account for the facts without forcing them. Increasing your explanatory power increases P(e|h,b). Explanatory scope, on the other hand, is about explaining a wider range of evidence better than the competing hypotheses. Increasing explanatory scope lowers P(e|~h,b) relative to competing hypotheses, and if ~h has greater scope, then P(e|h,b) drops. Lastly, explanatory fitness requires that the hypothesis not contradict any well-established beliefs; it functions like explanatory power in terms of the probabilities affected.
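One way to see how the ABE criteria line up with these probabilities – a common presentation, not one the post itself gives – is the odds form of Bayes' theorem:

```latex
\underbrace{\frac{P(h \mid e, b)}{P(\lnot h \mid e, b)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(h \mid b)}{P(\lnot h \mid b)}}_{\substack{\text{prior odds:}\\ \text{plausibility, ad hocness}}}
\times
\underbrace{\frac{P(e \mid h, b)}{P(e \mid \lnot h, b)}}_{\substack{\text{likelihood ratio:}\\ \text{power, scope, fitness}}}
```

Ad hoc assumptions shrink the first factor; greater explanatory power and scope grow the second.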

Getting back to priors, an example of ad hocness could be the assumption that at least one person can defy gravity and fly into the sky without using any special technology [Matthew Ferguson]. We would have to make this assumption, or excuse, in order to make plausible the hypothesis that Jesus was raised into the sky by natural means. Now, of course, if we said that b, our background knowledge, includes the assumptions of metaphysical theism, then God can do anything, and it wouldn't be ad hoc. However, if we accept that premise, then we are at the mercy of God's will and whims. And I have yet to be convinced that a theistic God or any other god is at work behind the scenes. Even if you have the hypothesis that God interferes in everyday life, you can't derive any reliable predictions from it. God is not like a subatomic particle that we can't see, because in the case of the particle we can make predictions as to how it behaves, but not so with God. We are at a loss when it comes to modeling God, for God is said to be knowable and incomprehensible at the same time. Moreover, as theistic Christianity defines him, he's a philosophical construct, not a scientific one. But, to be fair, we didn't even assume naturalism in our reference class. We actually relaxed our constraints and claimed that we'll consider all cases of a god raising purported persons from the dead.

Quality of Evidence

Now, before we continue, it's worth acknowledging that we will all interpret evidence in different ways based on our presuppositions and on how skeptical we are or are not. For example, if we assume miracles are impossible to begin with, then we will underestimate P(h|b) and P(h|e,b). Likewise, if we assume that miracles occur all the time, then we will overestimate P(h|b) and P(h|e,b). We did see how we could reduce these biases in the determination of P(h|b) by keeping the reference class as narrow as possible and by being conservative in our estimates. But how do we reduce the chances of inaccurately estimating P(e|h,b)? To be candid, at this point in my investigation I have yet to find a systematic way of determining consequent probabilities. Thus there will be an amount of subjectivity involved, which is true of epistemic probabilities in general, which come as 'degrees of belief' when assessing the uncertainty of a particular situation [wikipedia]. The best way to minimize bias is to be as charitable as possible when ascertaining our probabilities. We will be relying chiefly on Matthew Ferguson's analysis of the evidence, in addition to Richard Carrier's, while contrasting it against Mike Licona's and William Lane Craig's.
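One way to cope with that subjectivity – a sketch of a sensitivity analysis, not a method the post prescribes – is to sweep P(e|h,b) over a charitable range and see how far the posterior can move. All numbers below are illustrative placeholders:

```python
# Posterior via Bayes' theorem for a hypothesis h vs. its negation ~h.
# prior = P(h|b); p_e_h = P(e|h,b); p_e_nh = P(e|~h,b).
prior = 0.1
p_e_nh = 0.5  # held fixed while we sweep the consequent probability

for p_e_h in (0.1, 0.3, 0.5, 0.7, 0.9):
    post = prior * p_e_h / (prior * p_e_h + (1 - prior) * p_e_nh)
    print(f"P(e|h,b)={p_e_h:.1f} -> P(h|e,b)={post:.3f}")
```

If the conclusion survives across the whole range, the subjectivity of any single estimate matters less.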

Here's the evidence we have for the fantastical claim that Jesus was raised from the dead by God. Why is this claim essential to the Christian faith? Well, because the apostle Paul aptly declared, "If Christ has not been raised, then our preaching is vain, and your faith is vain" (1 Corinthians 15:14).

It's important that we consider the quality of the evidence. The above evidence comes from the Gospels, written forty to fifty years after Jesus' death, and the Epistles, written about twenty years after his death, all contained within the New Testament. So, first, we have evidence recorded decades after the fact. Keep in mind also that the above claims are not necessarily facts. After all, some of them are elements in stories contained in hagiographies (i.e., the Gospels) that mainly serve as propaganda to promote a theology or movement. We have already concluded that the genre of the Gospels is legendary biography, meaning that the authors' chief objective was to proselytize a message about Jesus Christ, not to teach us history. The sources also lack independent attestation, since I consider the Gospels dependent on one another, with the possible exception of John; the Gospels may or may not have had access to the Epistles, but we will grant independence. And it's not until the second century that we get extra-biblical (outside of the Bible) evidence of the crucifixion of Jesus Christ. Moreover, the Gospels are all anonymous, surely not based on eyewitness testimony, and are most likely a combination of literary creations and oral tradition. Lastly, although we don't know with certainty, it's reasonable to assume that superstition and belief in gods, demons and spirits were more widely accepted by the population then than they would be now, probably because the ancient world did not have the luxury of experiencing the Enlightenment. The following paragraph, by Matthew Ferguson, underscores the type of evidence we have.

Instead, our knowledge for many historical events in the ancient world relies solely on the reports found in ancient texts. Ancient texts, however, cannot be scrutinized as thoroughly as the evidence studied in forensic science, and thus using ancient texts as a form of evidence for knowing about the past is often far more speculative and less certain (especially when such texts are highly literary or symbolic in their composition). Frequently, the ancient sources that a historian consults will be biased, vague, misinformed, speculative, or simply outright liars (see Bart Ehrman’s Forged: Writing in the Name of God).

Methodology

So how do we approach the analysis of the evidence – what's our methodology? Do we take the authors' – often anonymous – word for it, for example, when they say that Jesus Christ's empty tomb was discovered by women? Or do we take the skeptical stance, since we don't know who the authors were, the authors had an agenda, and no extra-biblical corroboration exists near the event? The assumptions and skepticism one exercises affect the outcome, so this is very important. So important that skepticism drives Matthew Ferguson and Richard Carrier to conclude that the resurrection is very improbable, while Mike Licona and William Lane Craig conclude that it's very probable because they take the evidence at face value. Given the kind of evidence described in the previous paragraphs, and knowing human nature and that era of history, I find no reason other than to approach the evidence with skepticism, giving credence to evidence that is independently corroborated by disinterested sources and discounting that which is not. This may not seem fair, since we really don't have any disinterested sources and absence of evidence is not evidence of absence. However, this presumption – taking a skeptical stance – is one that most reasonable persons would adhere to if they were assessing others' faiths. It's also worth noting, in justification of my stance, that we have better evidence for other supernatural events – for example, the Salem witch trials – and we don't accept those as true. As another good example, see Why the Resurrection is Unbelievable by Richard Carrier, where he asks: if we don't believe Herodotus' accounts, why would we believe the Gospels or Paul's? Before we get into the analysis of the evidence, I believe this excerpt by Richard Carrier in "Proving History" illustrates that we always have to be mindful of our background knowledge, since P(e|h,b) is conditioned on it:

The smell test is a common methodological principle in the study of legend, myth and hagiography. This test can be most simply stated as "if it sounds unbelievable, it probably is." When we hear tales of talking dogs and flying wizards, we don't take them seriously, even for a moment. We immediately rule them out as fabrications. We usually don't investigate. We don't wait until we can find evidence against the claim. We know right from the start the tale is bogus. Yet the only basis for this judgment is the smell test. Is that test valid?

It is certainly ubiquitously accepted by historians in every field. It is suspiciously only rejected by religious believers, and then only when it's applied to amazing claims they prefer to believe. They ground this rejection in the claim that we shouldn't be biased against the supernatural, and God can do anything. Yet if they honestly believed in those principles they would be compelled to concede the miracle claims of every religion because "you shouldn't be biased against the supernatural, and God can do anything." This includes all the pagan miracles (incredible apparitions of goddesses, mass resurrections of cooked fish, wondrous healing, and teleportation), Muslim miracles (splitting moons, wailing trees, flights to outer space), Buddhist miracles (bilocation, levitation, creating golden ladders with a mere thought), and indeed every and any amazing claim whatever. Tales "proving" reincarnation? We can't reject them – because God can do anything. Ghosts confirming to the living that heaven is run by a Chinese magnate and his staff? We can't rule it out. That would be bias against the supernatural.

Honestly living that way would be impossible. You would believe everything you read … In other words, our bias against the supernatural is warranted, just as our bias against the honesty of politicians is warranted: we've caught them being dishonest so many times it would be foolish to implicitly trust anyone in politics. Likewise, amazing tales: we've caught them being fabricated so many times it would be foolish to implicitly trust any of them.

The smell test thus represents an intuitive recognition of the low prior probability of the events described (i.e., P(h|b) << 0.5); the ease with which the evidence could be fabricated (i.e., P(e|~h,b) is always high, unless we have sufficient evidence to the contrary); in fact, often the ease with which such an event, if real, would produce or entail much better evidence (i.e., P(e|h,b) is often low); how typically miracle claims are deliberately positioned in places and times where reliable verification is impossible, which fact alone makes them all inherently suspicious; and sometimes the similarity of a miracle story to other tales told in the same time and culture is additionally suspect, like the odd frequency with which gods in the ancient West rose from the dead, transformed water into wine, or resurrected dead fish – oddities that curiously never occur anymore, and which are so culturally specific as to suggest more obvious origins in storytelling.

In the previous post, we focused on how you can use the principle of analogy to conclude that if something contradicts what you already know (what we will call background knowledge), then, unsurprisingly, it is in all likelihood false. We saw the limitations of this principle, however, especially when generalized to cover rare and new events, not just miracles. So we need some other tool or insight that lets us avoid ruling out something rare and new, assuming we have good evidence for it. To illustrate, we can view our background knowledge as all the accepted truths in the world, and take a miracle occurring in the Gospels as our hypothesis. We can then assign a degree of certainty (probability) to our belief that the miracles are true, given our background knowledge; this is known as the prior probability. Now, we were essentially arguing in the last post that the prior probability of miracles is low because our background knowledge consists of truths that contradict them. But what if, say, as Mike Licona and others claim, we have good evidence for a miracle, for example, the resurrection? We need a way to incorporate that "good evidence" into our argument. After all, we can't rule out miracles a priori, because we can never be one hundred percent certain that the supernatural doesn't exist.

Bayes’ Theorem

The solution to this problem is Bayes' theorem. The theorem's conclusion takes into account both our background knowledge and the evidence we have for our hypothesis. Whereas before we jumped to the conclusion that the probability of a miracle is low based on our prior knowledge of how the world works, we will now take into account the actual evidence for the miracle. Quantitatively speaking, the theorem makes us express our premises in degrees rather than absolutes by forcing us to label them numerically as probabilities. This is important because most claims in life, especially about historical events, can only be discussed in terms of probabilities, with us often saying things like "most likely" or "more likely." So, to those who object to doing math in history: think again, because we already speak in terms of probabilities every day. Moreover, the theorem forces you to think of alternative hypotheses, reducing confirmation bias. This formalized and systematic approach to viewing your hypothesis allows for a clarity unrivaled by other methods. I offer two quotes below that explain its history and importance.

In simple terms, Bayes’s Theorem is a logical formula that deals with cases of empirical ambiguity, calculating how confident we can be in any particular conclusion, given what we know at the time. The theorem was discovered in the late eighteenth century and has since been formally proved, mathematically and logically, so we now know its conclusions are always necessarily true if its premises are true (probabilities). [Richard Carrier]

Bayes’s theorem is at the heart of everything from genetics to Google, from health insurance to hedge funds. It is a central relationship for thinking concretely about uncertainty, and–given quantitative data, which is sadly not always a given–for using mathematics as a tool for thinking clearly about the world. [Chris Wiggins, Scientific American]

It's probably best to jump into the formula, because the mathematical relationship between the premises reveals a lot about the theorem's mechanics. We are trying to find how likely or unlikely (the probability) our hypothesis is. A hypothesis we've been working with is that miracles occurred as purported in the Gospels. So our question is: how probable is it that miracles occurred in the Gospels, given the evidence and background knowledge we have? This is P(h|e,b), the posterior or epistemic probability. The prior probability, P(h|b), is how typical or plausible our explanation (hypothesis) is relative to our background knowledge. And, finally, the evidence we have for the miracles, e, is best thought of in terms of how expected the evidence is if our explanation (hypothesis) is true, P(e|h,b), often called the consequent or explanatory probability. In summary, we have a prior probability, P(h|b), which, when it takes new evidence into account through P(e|h,b), gets updated to the posterior probability, P(h|e,b). Two forms of the equation are given below: one takes into account a single hypothesis and its negation, while the other takes into account multiple hypotheses. The second equation can be thought of as the ratio of your hypothesis (h1) to the competing hypotheses (h2, h3, ...), again reducing our confirmation bias. The prior probabilities must sum to one, while the consequent probabilities need not. Please see below for more detail.
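The two forms referenced are presumably these (reconstructed here; the first pits h against its negation, the second against multiple competing hypotheses):

```latex
(1)\quad P(h \mid e, b) = \frac{P(h \mid b)\, P(e \mid h, b)}
{P(h \mid b)\, P(e \mid h, b) + P(\lnot h \mid b)\, P(e \mid \lnot h, b)}

(2)\quad P(h_1 \mid e, b) = \frac{P(h_1 \mid b)\, P(e \mid h_1, b)}
{\sum_i P(h_i \mid b)\, P(e \mid h_i, b)}
```

Form (1) is just form (2) with the alternatives lumped into a single ~h.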

Calculating our prior probability is of great importance because it can mean the difference between a probable event and an improbable one. A prior is derived from all known information about your hypothesis, which leads us to the concept of reference classes. A reference class can be thought of as a category of claims that all address a similar scenario. This information can be used (referenced) to help us find how typical our explanation is; in other words, it estimates our prior. As an example from "Proving History" by Richard Carrier, a hypothesis you may propose to explain the evidence in the Gospels (the empty tomb, etc.) is that Jesus Christ was raised from the dead by a supernatural agent. How do you derive a prior probability for this from your background knowledge? You can look for similar scenarios that were believed to have occurred in the past. For instance, Romulus, Asclepius, Zalmoxis, Inanna, Lazarus, the many saints in Matthew and the Moabite of 2 Kings have all been purported to have been raised from the dead by a supernatural agent. So our reference class is all persons purported to have been raised from the dead by a supernatural agent. We have at least ten of them from antiquity, and probably more. Since the prior probability is based only on background knowledge and not conditioned on the evidence, we can treat each of these claims as equivalent; that is, there's no more reason to believe one story over another, since all equally contradict our background knowledge. In that case, classical probability theory says we can divide the sample space into equivalent pieces that sum to one. The prior probability that Jesus Christ rose from the dead by a supernatural agent would then be 1/10, or 0.1.
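The reference-class arithmetic above can be sketched in a few lines; the consequent probabilities at the end are purely hypothetical placeholders, not estimates the post makes:

```python
def posterior(prior, p_e_h, p_e_nh):
    """Bayes' theorem for a hypothesis h against its negation ~h.

    prior = P(h|b); p_e_h = P(e|h,b); p_e_nh = P(e|~h,b).
    """
    num = prior * p_e_h
    return num / (num + (1 - prior) * p_e_nh)

# At least ten purported supernatural raisings from antiquity; with no
# evidence considered yet, classical probability splits the prior evenly.
n_claims = 10
prior = 1 / n_claims  # 0.1

# Hypothetical likelihoods, for illustration only: evidence judged twice
# as expected if the claim is false as if it is true.
print(posterior(prior, 0.3, 0.6))  # ≈ 0.053
```

Even a modest likelihood ratio against the hypothesis pulls an already low prior lower.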

It's important to realize that we could have used a broader reference class and obtained a much lower probability. For this reason and others, probabilities formed via Bayesian reasoning are classified as subjective, but they are not arbitrary, since we have good reasons for our methodology. To be conservative, we picked the narrowest reference class possible. But we could have found other attributes in common and defined a more generic class. For example, all of these claims of risen figures also share the attribute of being supernatural, miraculous or mythological. If that's the new class, then we know there are hundreds of thousands of cases where people wrote, spoke or claimed that a miracle was true when in fact it wasn't. That would give us a prior on the order of 1/100,000 or lower. But as a heuristic we will pick the narrowest reference class, producing the estimate most generous to the hypothesis; this is known as an a fortiori estimate. This rule of thumb reduces the chance that our presuppositions or biases will influence the estimate. Getting back to reference classes, it's worth noting that our background knowledge of all accepted truths is quite large – that is, there are a lot of truths that can contradict our belief in a miracle occurring. For example: we are creative and inventive, which suggests that a lot of miracles are fabricated; we are meaning-making machines that see agency in inanimate objects; we make things up in order to influence others; we can be honestly mistaken; we can be credulous; and so forth. All of these are part of b, and all are viable hypotheses that can explain the evidence. If incorporated, they can have the effect of lowering the posterior probability, though creating reference classes for multiple hypotheses can be challenging, even if the principle is the same as in the single-hypothesis case.
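A short sketch of how much the choice of reference class alone moves the result. The likelihoods are held equal (and are illustrative), so the posterior simply mirrors the prior:

```python
def posterior(prior, p_e_h, p_e_nh):
    # Bayes' theorem for h vs. ~h
    num = prior * p_e_h
    return num / (num + (1 - prior) * p_e_nh)

# Narrowest class: ~10 purported supernatural raisings (the a fortiori
# choice, most generous to the hypothesis).
narrow = posterior(1 / 10, 0.5, 0.5)
# Broader class: all miracle claims, hundreds of thousands of cases.
broad = posterior(1 / 100_000, 0.5, 0.5)

print(narrow)  # ≈ 0.1, posterior equals prior when likelihoods match
print(broad)   # ≈ 0.00001
```

Arguing from the generous narrow-class prior means any negative conclusion holds a fortiori under the broader class.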

Epistemic Probabilities

It's worth making the distinction between epistemic probabilities and physical probabilities. Epistemic probabilities are degrees of belief that a claim is true (versus someone making it up or being mistaken), while physical probabilities are relative frequencies of events occurring. An example of the latter is the probability of a randomly chosen person having a myocardial bridge in their heart, which is pretty small, with an incidence of about 3%. But the probability that you believe a particular person has a myocardial bridge can be quite high, since it is based on, or conditioned on, the evidence at hand – say, a recent angiogram. Moreover, epistemic probabilities often concern events that occur just once, like historical claims, whereas physical probabilities are often statistical averages of repeated phenomena. So you can't empirically derive epistemic probabilities by repeating an experiment – say, by taking the long-term average of flipping a coin, resulting in a relative frequency of 0.5 – instead you must rely on thought experiments, by deriving a reference class. The former method is known as the frequentist approach, while the latter is known as the Bayesian approach. It's best to think of these as different approaches designed for different kinds of problems rather than as rivals. Please see the quote below, emphasizing the fidelity of the Bayesian method. The next post will discuss the consequent probability and eventually compute an epistemic probability for our hypothesis; we'll stick with the miraculous hypothesis that Jesus was raised from the dead by a supernatural agent in order to explain a wide range of claims found in the Gospels and Epistles.
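The myocardial-bridge example can be made concrete with assumed test characteristics. Only the ~3% incidence comes from the text; the sensitivity and specificity of the angiogram below are invented for illustration:

```python
# Physical probability: base-rate incidence of a myocardial bridge.
prior = 0.03

# Assumed (illustrative) performance of the angiogram:
sensitivity = 0.95   # P(positive result | bridge present)
specificity = 0.98   # P(negative result | bridge absent)

# Epistemic probability after a positive angiogram, via Bayes' theorem.
p_positive = prior * sensitivity + (1 - prior) * (1 - specificity)
posterior = prior * sensitivity / p_positive

# A small physical probability becomes a high degree of belief once it
# is conditioned on the evidence at hand.
print(round(posterior, 3))
```

The same 3% base rate yields a posterior well above one half, which is the whole point of the epistemic/physical distinction.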

The specification of the prior is often the most subjective aspect of Bayesian probability theory, and it is one of the reasons statisticians held Bayesian inference in contempt. But closer examination of traditional statistical methods reveals that they all have their hidden assumptions and tricks built into them. Indeed, one of the advantages of Bayesian probability theory is that one's assumptions are made up front, and any element of subjectivity in the reasoning process is directly exposed. [Olshausen]

We deem them myths not because of a prior bias that there can be no miracles, but because of the Principle of Analogy, the only alternative to which is believing everything in the National Enquirer. If we do not use the standard of current-day experience to evaluate claims from the past, what other standard is there? And why should we believe that God or Nature used to be in the business of doing things that do not happen now? Isn't God supposed to be the same yesterday, today, and forever? [Robert Price]

The principle of analogy, presented above, is really just a summary of David Hume’s argument about the likelihood of miracle stories. Hume’s argument says that since our uniform experience does not include miracles, it’s very unlikely that miracles have occurred in the past. This is analogous reasoning. It’s not that miracles are impossible; it’s just that we should remain agnostic towards them, since it’s far more likely that a natural explanation is the cause. Now the principle of analogy has its shortcomings, especially when generalized to cover rare events. What if, for example, Einstein hadn’t moved forward with his theory of relativity because it contradicted his uniform present-day experience? What if we observed a nuclear explosion and explained it as a conventional explosion just because we had no experience with nuclear physics? Moreover, there are many quantum mechanical phenomena that appear to violate the laws of Newtonian physics. Should we also have rejected these, at the time, very rare events outright just because they didn’t agree with our background knowledge?

The answer to the above questions is obviously no. If the new observation, model or prediction pans out empirically and is corroborated by other means, then we must accept it. So it has been argued by many philosophers that Hume’s argument is circular, that the definition of miracle need not be a violation of natural law, and that Hume, in a Bayesian sense, does not take into account all the probabilities involved, to name just a few objections. So the principle of analogy, in my view, is most useful as a heuristic – a guiding principle rather than a law. It’s there to tell us that if a claim doesn’t square well with what we already know to be plausible (things measured against our background knowledge), then it’s probably false. The caveat is that if something conflicts with our background knowledge but we suspect it’s true because of good evidence, then we must test it, or observe many other instances, in order to change the state of our current background knowledge. Since Hume primarily had miracles (violations of natural law) in mind, we will investigate miracles further.

So, more specifically, how can we adjudicate the validity of miracle claims in the Gospels? I argue here that a historian can’t say much of anything about them. This is not borne out of a prejudice from metaphysical naturalism; in other words, I’m not saying this because my worldview doesn’t permit me to. A historian can’t say much because the way in which we understand how the world works – today, yesterday and tomorrow – is at odds with the way in which miracles occur. This understanding, accumulated over the centuries, is known as background knowledge, and it comes from our scientific testing of claims and observations. Miracles contradict our present-day background knowledge. We will show in the next post, through Bayes’s theorem, that if things aren’t plausible (a technical term), then, not surprisingly, the probability that they occurred is low. And in the case of miracles there is, in my view, an insurmountable amount of contradictory evidence, present as background knowledge, to overcome.

Our scientific knowledge base gives us a relatively reliable way of understanding how the world works – from predicting the speed of light to understanding complex human behavior – it has a pretty good track record. It doesn’t have all the answers – for example, why we are here, or why there is something rather than nothing – but it’s the best we’ve got to work with. Moreover, it’s important to understand that history is not the proper purview for establishing the kind of world we live in; that should be reserved for the scientific community. And, as of today, there are no peer-reviewed articles by forensic scientists or medical professionals confirming a single miracle. That is not to say that one can’t or hasn’t occurred, but they just haven’t been able to stand up to scientific scrutiny. A good example, one of many, is a large 2006 study on whether prayer would assist recovery from major cardiac surgery. It was found that prayer had no effect on the experimental group, while in some cases the control group actually fared better. So, as to another point, miracles are within the realm of experimental science – that is, they can be tested.

Some of our background knowledge includes not only our inability to observe and reproduce miracles in the scientific community but also our understanding of how human nature functions. For example, we know that humans as a species are fairly creative and inventive, in addition to having a deep need for meaning making. In fact, humans have lots of faculties that collectively make them receptive to mythology, like a strong tendency to see agency (intentionality in the inanimate), to see design and purpose in the natural world (a river’s purpose is for us to fish in), and to detect patterns out of otherwise meaningless stuff (Marian apparitions). In other words, we have a tendency, starting at a very early age, to connect-the-dots in order to explain why things happen. These meaningful patterns we create become our beliefs and guide us to understand reality, regardless of their accuracy. And these explanations can give us psychological comfort as a reward.

The above is a powerful way of explaining why people believe and create mythology, but it falls short in explaining the genesis of the Gospels. I believe, rather, that the Gospel writers invented literary miracles to highlight Jesus Christ’s role as Lord and savior, miracles that function primarily as propaganda. Lastly, we know that plenty of mythology created in the past is not believed now; see “Why is Jesus so special?”. So why do we dismiss other mythology but not the mythology of the Gospels? There are various answers to this question, but argument by analogy says that since those were creative inventions, it is likely that the Gospels’ mythology is as well. In conclusion, our background knowledge thus far tells us that miracles aren’t plausible. However, we noted early on that things can contradict our background knowledge and nevertheless be true if good evidence exists. In the next post we will gain insight into this problem with the aid of probability theory. Probability theory says that there exist two probabilities: a prior probability – the chance a miracle occurred in light of what we already know – and an explanatory probability – how expected the evidence is if the miracle is true. As a hint, the explanatory probability is a miracle’s only hope.
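
The interplay of these two probabilities can be previewed with a toy calculation. Every number below is entirely made up; only the mechanics of Bayes’s theorem are real. The sketch shows why the prior dominates: even evidence that is quite expected on the hypothesis can’t rescue a sufficiently low prior.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes's theorem: P(h|e) = P(e|h)P(h) / [P(e|h)P(h) + P(e|~h)P(~h)].

    h = the hypothesis (a miracle occurred), e = the reported evidence.
    p_e_given_h is the explanatory probability; prior reflects background knowledge.
    """
    numer = p_e_given_h * prior
    denom = numer + p_e_given_not_h * (1 - prior)
    return numer / denom

# Hypothetical values: a very low prior (background knowledge says miracles
# are implausible) against evidence fairly expected if the miracle is true.
p = posterior(prior=1e-6, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(p)  # still tiny (~9e-6): a 9x likelihood ratio barely moves a 1-in-a-million prior
```

Under these assumed numbers, the evidence is nine times more expected on the hypothesis than on its negation, yet the posterior remains minuscule, which is the quantitative sense in which the explanatory probability is a miracle’s only hope.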

In this post I go over some of the biases and influences we encounter that can interfere with our ability to form well-reasoned opinions. Since this is my first post, I feel it’s important to show how I was affected by them and how I was eventually led toward free thought.

Bias is a personal inclination, feeling or opinion – usually not reasoned out – that interferes with obtaining the truth. As far as how we process information concerning ourselves and the world around us, psychologists have identified numerous cognitive biases, but none are more important than confirmation bias. It essentially says that we confirm a hypothesis or belief by looking for evidence that supports it and discarding evidence that does not. This bias makes our beliefs and hypotheses very resistant to change. A good example of confirmation bias would be if I were convinced that prayer will assist me in getting an interview, and I counted all the times that I got an interview with prayer but ignored the interviews I got without prayer. In other words, as you may have heard before, we count the hits but ignore the misses. The remedy, in addition to trying to prove our beliefs, is to always try to falsify them – in this case, by showing that you’d get an interview without prayer. Moreover, you need to be cautious of the correlation fallacy; that is, just because an interview was obtained when you prayed, it doesn’t mean the prayer caused the interview. You must either look for other explanations (hypotheses) as to why you got that interview or admit falling prey to confirmation bias. [Once you take all that into consideration, and you are still convinced that prayer is the causal factor, see my discussion on Miracles and Probability I, II, and III.]
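
Counting the misses as well as the hits amounts to filling in a full 2x2 table. The tallies below are hypothetical, invented purely to illustrate the arithmetic of the prayer-and-interviews example.

```python
# Hypothetical tallies for the prayer-and-interviews example. Confirmation
# bias tempts us to record only the first row's "hits" column.
hits_with_prayer = 8        # prayed, got the interview (the "hits" we count)
misses_with_prayer = 12     # prayed, no interview (the "misses" we ignore)
hits_without_prayer = 9     # no prayer, got the interview (also ignored)
misses_without_prayer = 13  # no prayer, no interview

# Compare the interview *rates* under each condition, not the raw hit counts.
p_given_prayer = hits_with_prayer / (hits_with_prayer + misses_with_prayer)
p_given_no_prayer = hits_without_prayer / (hits_without_prayer + misses_without_prayer)

print(round(p_given_prayer, 2), round(p_given_no_prayer, 2))
# 0.4 vs 0.41: once the misses are counted too, prayer shows no effect here
```

The eight “hits” look impressive in isolation; only when all four cells are tallied does it become clear that, in this invented data, the interview rate is essentially the same with or without prayer.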

There is an even more hideous offender against the truth, namely ingroup bias. Once we affiliate with a certain ideology, we frequently inherit the beliefs and preferences of that ingroup. And once we are in an ingroup, we often, unconsciously, argue in favor of that group’s positions. In fact, the supporting of other members’ ideas in our ingroup can be so dysfunctional that a phenomenon known as groupthink can occur. Groupthink is when uniformity in the group’s decisions is elevated at the cost of rational thinking. Examples of groupthink abound – from the Bay of Pigs invasion to simple meetings at work. To illustrate, think of a meeting at work where you had a good idea but failed to mention it because you knew it wouldn’t be well received by influential others. That is a classic example of how groups can be forced into making poor decisions, mainly by refusing anything contrary to the group’s opinion, typically as articulated by the higher-ranking or more well-liked individuals.

Prior to being educated in the necessary fields, such as economics, history and logic, I personally was affected by the conservative movement ingroup. I started my political life by listening to Rush Limbaugh and Sean Hannity. The result was disastrous. I became a staunch conservative zealot. I recall that the hate for liberals spewed by these pundits was contagious, and I used to call my teachers liberals with a contemptuous tone. I reverberated with a tribal instinct, an “us” against “them” mentality. I inherited all of their assumptions, arguments and preferences as well. So, for example, I regarded government and government intervention as bad, gun control as the government trying to take our guns away, Christians as the founders of our nation, unbridled pride as something to be cherished, protesting against war as wrong, protecting the environment as unnecessary, and, finally, welfare as doing a person harm, with tough love to be practiced in its stead.

So what explains this? Evolutionary psychologists believe that, for our ancestors, there was survival value in group members demonstrating uniformity and favoritism while showing prejudice towards outsiders. This conformity would allow for cohesion and cooperation in order to meet similar ends. According to the logic of evolution, if a behavior occurs frequently enough in a high enough percentage of the population, then it’s a good candidate for being an adaptation. If it’s an adaptation, then it’s probably hard-wired into our psychology. So once we are in an ingroup, it comes quite naturally for us to adopt its beliefs and fight for its cause, while shunning outsiders. The consequences are clear: we end up forming conclusions without any forethought. We kind of just back into these beliefs, as Michael Shermer says, and then defend them at all costs. I recall watching a debate between Dr. Richard Carrier and Michael Licona, and they were perplexed at how they had arrived at different conclusions as to whether Jesus Christ rose from the dead. I have often wondered how the ingroup each was associated with contributed to his conclusions.

Cognitive dissonance is an uneasy feeling that one gets when there is a contradiction between a deep-seated belief and another, opposing belief. It’s not a cognitive bias in and of itself, but it nonetheless assists people in maintaining erroneous views. I felt this dissonance myself when I was a diehard fan of evolutionary psychology. To recall a specific incident, I was listening to a debate between Steven Pinker and Steven Rose on whether the brain was designed as a general problem-solving tool or a domain-specific one. I had so much invested in evolutionary psychology – countless hours of reading it and using it as an explanatory tool to understand the hardships of life – that I was vulnerable. It was perfect, sacred and infallible to me. Because the central tenet of evolutionary psychology is that the mind is a Swiss army knife filled with individual tools for solving specific problems, I couldn’t imagine the mind being anything else. It sounds frivolous, but I fought for days to resist the thought that the alternative might be true. Although I don’t know evolutionary psychology’s official stance today, ten years later I’ve conceded that it could be both, with the help of “Adapting Minds” by Buller.

A worldview is a framework of beliefs that allows us to make judgments and decisions about our environment. So not only do beliefs function as cohesive and cooperative mechanisms in groups, they also serve to pragmatically guide us throughout the day. The system of beliefs and values that I had when I worshiped evolutionary psychology constituted a worldview. This worldview allowed me to make sense of a lot of things in life. However, I was too emotionally entrenched in it, and it was perhaps acting as a distorter of truth by not allowing other, more plausible explanations for phenomena. I haven’t abandoned evolutionary psychology; I’m just no longer holding on to it so tightly. I have since reassessed what’s worth holding on to, and that happens to be principles relating to relationships, civics, justice, etc. Moreover, I now try to be as independent as possible, rarely adhering to an ideology. By not adhering to an ideology you are not as liable to be indoctrinated with inflexible assumptions and beliefs that don’t work for every problem. For example, becoming a libertarian forces you to believe that government intervention will almost always result in an undesirable outcome, but this may not always be the case. Problems need to be looked at individually, free from dogma.

So we see the disadvantages of affiliating with an ingroup and holding on to beliefs too strongly. But it’s impossible not to have some sort of worldview. As Michael Shermer expounds, we are meaning-making machines and demand an interpretive framework to understand the world and the humans around us. We can’t function without beliefs, assumptions and values. But that’s not to say that one group can’t hold a better set of heuristics for minimizing these biases than another. I believe that philosophical point of view is to be found in free thought. It’s less likely to be plagued by bias, since its very nature is to be skeptical of bias in the first place. Since free thought relies on empiricism, as science does, any assumption or hypothesis can be disputed and discarded. An ideology or religion, by contrast, has undisputed truths that will probably never be overturned.