When engaging believers, one is often confronted with one of the following claims: “you have faith that the sun will rise tomorrow,” or “you have faith that your chair will hold you up when you sit on it.” The purpose of these kinds of accusations is to equivocate between two levels of uncertainty, pulling them both under the umbrella of “faith,” and thus to open the skeptic to the possibility of accepting the superstitious claims of the believer. In order to explain why this tactic is wrongheaded at best, and downright disingenuous at worst, I should first explain a few things about probabilistic reasoning.

Our beliefs do not fall cleanly into a dichotomy of certainty. We are not either 0% certain or 100% certain of a proposition. The certainty of our beliefs distributes on a continuum from 0 to 100% (or 0 to 1, whichever scale you prefer). I may be only 60% certain of the directions to a certain pizza place, but if I consult a map, having gained more evidence, I can increase my certainty to perhaps 99%. That’s what evidence does: it increases or decreases your certainty in a proposition (falsifying evidence decreases it).
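The jump from 60% to 99% after consulting the map can be framed as a Bayesian update. Here is a minimal sketch; the likelihoods are made-up numbers for illustration, not measurements of real map reliability:

```python
# Bayesian update on "my directions to the pizza place are correct".
# All numbers are illustrative.

prior = 0.60                  # initial certainty in the directions
p_map_agrees_if_true = 0.99   # a map almost always confirms correct directions
p_map_agrees_if_false = 0.01  # a map rarely confirms wrong ones

# Bayes' theorem: P(directions correct | map agrees)
posterior = (p_map_agrees_if_true * prior) / (
    p_map_agrees_if_true * prior + p_map_agrees_if_false * (1 - prior)
)
print(round(posterior, 3))  # ~0.993: the evidence raised certainty from 60% to ~99%
```

The same machinery runs in reverse: if the map had disagreed, the posterior would have dropped well below 60%, which is what “falsifying evidence decreases it” means quantitatively.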

Now, I have lots of evidence that the sun will rise tomorrow. I’ve seen it rise over 10,000 times, and I understand the deterministic laws of physics. Very strong evidence exists that it will rise tomorrow. I can probably assign a 99.99999…% certainty to that belief.

On the other hand, I don’t have a high degree of certainty that God exists. I haven’t seen any evidence to establish a belief in God. Without any evidence, our certainty in a proposition should be 50% — that is, the proposition is as likely as it is unlikely. And in the case of the Christian God, my certainty is actually less than 1%.

The believer takes any certainty less than 100% as “faith” and equates them. Since you’re not 100% sure the sun will rise tomorrow, and you’re not 100% sure that God exists, your beliefs (or lack thereof) are both based on faith and are equivalent. I think you can see now why this reasoning is erroneous. A certainty of 99.999% is not equivalent to a certainty of 70%, or 50%, or 10%, or less than 1%.

Now, I said that without any evidence one way or another, our certainty should be 50%, but that my certainty of the Christian God is less than 1%. How is this so? In order to explain that, let me use an example.

Let’s say we’re in a public square surrounded by many people. You pull a man out of the crowd and ask me if he is an insurance salesman. Let’s say that I already know insurance salesmen constitute 10% of the general public (yes, I know that number is inaccurate, but let it suffice for the example). I would then have to say that I’m only 10% sure that the man in question is an insurance salesman. I have no other information, and our certainty when randomly pulling someone off the street should reflect the background probability.

Let’s say that you further ask if I believe he is an insurance salesman and a father. I know that 60% of adult men are fathers. I would have to tell you that I’m only 6% sure that he’s an insurance salesman and a father (.10 x .60 = .06, assuming the two traits are independent).

You see, every time you add another proposition, you have to multiply the probability of each proposition being true together, so the probability decreases. If you claim that a vague “god,” “prime mover,” or “universal energy” exists, without defining the concept any further, I can say I have 50% certainty in that claim. But if you start enumerating a set of characteristics for that God — without any other evidence — my certainty in that exact God decreases.
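The arithmetic is easy to verify. Using the essay’s illustrative numbers (and the independence assumption noted above), each added proposition multiplies into the total, so the conjunction can only shrink:

```python
# Each added (independent) claim multiplies into the total probability,
# so certainty shrinks as a god-concept accumulates attributes.
# Numbers are the essay's illustrative ones.

p_salesman = 0.10
p_father = 0.60

p_both = p_salesman * p_father
print(round(p_both, 2))  # 0.06 -- already lower than either claim alone

# Pile on more independent claims and watch the conjunction collapse:
claims = [0.5] * 20  # twenty coin-flip-level propositions
p_all = 1.0
for p in claims:
    p_all *= p
print(p_all)  # about 9.5e-07 -- under one in a million
```

Twenty fifty-fifty propositions together are already less likely than one in a million, which is the engine behind the argument that follows.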

The Christian God is described in great detail in the Bible. How certain can I be of a God who created the universe in six “days,” who created two people named Adam and Eve, who sent a flood, who dispersed the people of the earth after an incident at the Tower of Babel, who made the Israelites his chosen people, who sent 10 plagues against the Egyptians, who gave Moses a set of 10 commandments inscribed on two tablets, who impregnated a virgin, who took the form of a demigod named Jesus, who walked on water, who fed a multitude with a loaf of bread, who died on a cross, who rose from the dead, who… Well, you get the picture.

With all those propositions about the Christian God, my certainty in that exact god is way, way less than one percent.

Which is a lot less than 99.9999…% — my certainty that the sun will rise tomorrow.

So yes, I’m not 100% certain that the sun will rise. QM theory tells me that there’s something like a 1 in 10^500 chance that all the particles in the sun will vanish overnight. So at best my certainty in the sun rising is only 99.999…% taken to 499 decimal places. But that’s a lot more than my certainty in the Christian God, and they are not equivalent.

The most common argument for the existence of God these days seems to be the Argument from Design. Other arguments have come and gone, but the Design Argument remains popular. I want to finally put this argument to rest.

The argument has several variations, but it usually points out that in order for life to exist, a large number of physical constants must be within extremely narrow ranges. This is true. Given a random distribution of values, the likelihood of our universe having exactly the constants that it does is infinitesimally small. It appears as though it was made for life.

However, even within this exquisitely fine-tuned universe, 99.99999…% of it is uninhabitable. Most of it is vast emptiness. We are aware of billions of stars and an increasingly large number of planetary systems that don’t support life. Even on our own planet, in this tiny corner of the universe, the conditions for life are limiting. Life doesn’t thrive in earth’s deserts, from the Sahara to Antarctica, or beyond certain temperature extremes. If you increased your altitude by just 10 miles — thinner than the paint on a desktop globe — you would not be able to survive for a prolonged period due to the low temperature and lack of oxygen.

When viewed in that light, the universe doesn’t appear well suited to life. Life struggles to survive in an exceptionally narrow range of this supposedly fine-tuned universe.

Further, I don’t see how anyone could argue that biological systems are well-designed. There are thousands of known hereditary diseases, which means there are thousands of ways for the human body to fail. We spend more as a percentage of GDP on medicine — the organized effort to control organic design failure — than just about anything else. And that doesn’t include all the environmental insults and communicable diseases (as if the innate design of the body wasn’t bad enough, God had to throw in thousands of other hurdles to thwart our ability to live, right?).

No, living things do not exhibit good design. They exhibit a patchwork of good and bad designs, struggling against each other to survive — which is exactly what we would expect from a system that builds complexity through selection pressures on random modifications and historical constraints.

Every engineer knows that good design involves compartmentalization of subsystems, so if one part of the system fails, the whole system doesn’t have to fail. Yet what we see in living things is pleiotropy — parts get reused in many places, and the various subsystems constitute overlapping networks (that’s why drugs have side-effects). That’s bad design!

And why would a god produce a Creation that was innately pitted against itself? Predator against prey, parasite against host. It’s certainly not a harmonious existence. On the other hand, ecosystem dynamics are perfectly in accordance with what we would expect from evolutionary agents sampling the behavior space for local fitness maximization.

Finally, what about those constants? Why are they so fine-tuned? We don’t know. One hypothesis holds that we live in a multiverse, and that each universe takes on different values for those constants. Some are capable of supporting self-organizing chemistry (which is all that life is, at bottom), and some are not. Naturally, we would only end up in one that does, and we would marvel at how exquisitely designed it was “just for us.”

We have no evidence for the multiverse explanation, but then, we have no evidence for the existence of gods either. Without evidence to arbitrate between these explanations, they are equally likely. So you can’t claim that fine-tuning of the physical constants is ipso facto evidence of gods. I provided one alternative explanation, but there could be many more. Our inability to formulate better answers doesn’t make the ones that we have correct.

And it seems to me that the explanation involving a Supernatural Man, especially a Supernatural Man with all the character flaws of earthly humans, including jealousy and anger and vindictiveness, isn’t the best explanation we can come up with. It’s the kind of explanation we would expect from primitive people who couldn’t think beyond their own psychosocial paradigm.

We can do better than to commit a Mind Projection Fallacy in order to explain the universe. The Argument from Design is insufficient.

So he did it. PZ Myers desecrated an allegedly blessed communion wafer by driving a rusty nail through it and pegging it to a few pages of the Koran and The God Delusion.

I think it’s the best blog post PZ has ever written. He takes the time to put the historical rantings and lunacy surrounding the Eucharist into context. Given the historical actions of the Catholic Church and its followers, you should feel like a fool if this desecration upsets you.

PZ summarizes the Catholic reaction to threats of a desecration quite poignantly: “In my years of loud and often inflammatory blogging, it is the most impressive demonstration of mass lunacy I have ever seen.”

Like many people, I was excited when I first heard about netbooks. Most of my computing experience revolves around the Internet, so a low-cost machine for that purpose would be useful to me, and it would lower the barrier to entry for many people who still aren’t on the net. Then I found out how small the screens are. Sorry, but the Web is a visual experience, and the most important part of the machine is the display. How can you view web pages on a seven inch screen?

Well, it looks like a solution is in development. TechCrunchIT has announced that they want to build a “web tablet” that does nothing more than boot into an instance of Firefox. It could run on 512 MB RAM and a 4 GB SSD. In my mind, as long as it has a 15.4″ or 17″ screen with 1440×900 resolution or higher, and sells for around $200, I’m sold!

Here’s a mock up of their web tablet:

With the latest and best wireless networking, this would be a marvelous gadget for many use cases: traveling on business, going on vacation, attending conferences, browsing at the coffee shop, or sitting on the bus.

The most interesting part is that they want to build it completely open-source, and they’ve received an overwhelming response on their blog from people who want to help (here, here, and here). Hopefully they will accomplish their goals.

A self-described mother of five posted an article on Linux.com about a new content filtering add-on for Firefox called Glubble. Like NetNanny, Dansguardian, and all the rest, it relies on white lists and black lists of key words. This approach has its cumbersome limitations. My anonymous suggestion in the comments (which you should recognize after reading this post) is that content filters should tap into the wisdom of crowds.

James Surowiecki wrote an excellent book about the Wisdom of Crowds, where he described the power of crowdsourcing for solutions to difficult problems. This concept goes back to the days of “Wanted” posters in the Wild West. Even if the sheriff couldn’t find a criminal, with enough eyes the culprit could be caught. Today we have Amber Alerts for the same reason.

Eric S. Raymond applied this principle to software development when he famously said, “Given enough eyeballs, all bugs are shallow.”

Likewise, many companies use crowds to build their business models. Dell has IdeaStorm and Ubuntu / Canonical have Brainstorm. Powerset, which was recently bought by Microsoft for US$100 million, is developing its natural language search engine by allowing people to rate the results of its search.

Gmail’s spam filter went from allowing 20-30% of spam to slip through down to less than 1% because it was “taught” the difference between spam and non-spam by users who click the Spam button.
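The feedback loop behind that improvement can be sketched in a few lines. This is a toy illustration of user-taught filtering, not Gmail’s actual algorithm; the messages and word scoring are invented for the example:

```python
from collections import Counter

# Toy sketch of feedback-driven filtering (not Gmail's real algorithm):
# every "Spam" click updates per-word counts, and future messages are
# scored by which class their words have been seen in more often.

spam_words = Counter()
ham_words = Counter()

def teach(message, is_spam):
    """A user labels a message; the filter remembers its words."""
    target = spam_words if is_spam else ham_words
    target.update(message.lower().split())

def looks_spammy(message):
    """Crude score: do the words appear more often in spam than in ham?"""
    words = message.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

teach("cheap pills buy now", is_spam=True)
teach("meeting notes attached", is_spam=False)
print(looks_spammy("buy cheap pills"))     # True
print(looks_spammy("notes from meeting"))  # False
```

Every user click makes the counts a little more accurate, which is why the filter improves with scale — the crowd does the teaching.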

A good content filtering system could be developed if users — parents — were allowed to rate web sites based on their appropriateness for various age groups: say, 0-3 years, 3-6, 6-10, 10-14, and 14-17. The filter would perform poorly at first (thus it would probably have to go through a beta phase where a few thousand select users contributed to its knowledge, much like the development phase of the Powerset search algorithm), but eventually it would achieve much better results than current content filtering systems.
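The aggregation step could be as simple as averaging the crowd’s ratings. Here is a minimal sketch; the site names and rating data are hypothetical, and a real system would need weighting, abuse resistance, and far more data:

```python
from statistics import mean

# Sketch of crowd-rated age filtering (hypothetical data):
# parents rate each site with the minimum appropriate age, and the
# filter blocks a site when the crowd's average exceeds the child's age.

ratings = {
    # site: minimum-age ratings submitted by parents (made up)
    "cartoons.example.com": [3, 3, 6, 3],
    "news.example.com": [10, 14, 10],
    "forum.example.com": [14, 17, 14, 17],
}

def allowed(site, child_age):
    """Allow a site if the crowd's average minimum age is at or below the child's age."""
    if site not in ratings:
        return False  # unrated sites blocked by default until the crowd weighs in
    return mean(ratings[site]) <= child_age

print(allowed("cartoons.example.com", 5))  # True  (average rating 3.75)
print(allowed("forum.example.com", 10))    # False (average rating 15.5)
```

Blocking unrated sites by default is one design choice; a gentler filter might fall back on keyword lists for sites the crowd hasn’t rated yet.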

In the best of all worlds, it would go beyond using average ratings for particular domains or web pages and implement a natural language algorithm that could parse sentences and “understand” context. Thus a search for “urinary tract infection” would not block a bunch of pages because of all the associated words that tend to get flagged.

Given that Powerset’s bread and butter is the development of algorithms that understand natural language, this seems like a good business opportunity for them / Microsoft.