
Here’s what I’ve been hearing about “Robin Hood: Prince of Thieves,” pretty consistently ever since it came out: It’s a terrible, stupid movie. Kevin Costner is unwatchable. But Alan Rickman as the Sheriff of Nottingham kicks ass, and is worth sitting through the rest of the movie for. (Or is almost worth sitting through the rest of the movie for. Opinions vary on this point.)

So it showed on HBO recently, and I thought: What the hell. I’ll Tivo it, I’ll watch it out of the corner of my eye while I’m working on my laptop, and when Alan Rickman comes on I’ll pay attention.

Here’s my assessment.

First: Oh, my dog, is this a terrible movie. Bloated, obvious, as formulaic as bad mainstream porn, with ham-handed attempts at both humor and heroism, it’s everything people hate about costume flicks. And Kevin Costner is even more unwatchable than usual.

But who cares about that. What about the RickmanWatch?

Yes, Rickman is great as the Sheriff of Nottingham. As you would expect him to be. But I didn’t think he was quite all that and a bag of chips. It’s a pretty one-note role: Snarling, Cackling, Snidely Whiplash Bad Guy. And that’s not enough… even for a Tivo’ed “Hot Moments with Alan Rickman” fast-forward session.

That’s hardly Rickman’s fault. That’s how the part was written, and I’m sure it’s how it was directed as well. But I like interesting movie villains, movie villains who seem human, movie villains who shed some light on why people do what they do. They’re more compelling — and more pertinently, they’re more hot.

But Rickman does do one thing with the role that makes it stand out: He makes it funny.

The scene where the Sheriff is trying to marry Maid Marian against her will, and the Merry Men keep trying to break the door down, is a great example. He’s not enraged, he’s not frightened — he’s just incredibly annoyed at the constant interruption. He’s not like Snidely Whiplash at all. He’s like an irritable co-worker who’s being interrupted for the tenth time that day and gets snippy.

And that isn’t something you see a lot of in Standard Snidely Whiplash Movie Villains. Standard Snidely Whiplash Movie Villains are usually too entranced with their beautiful wickedness to let themselves be funny. Rickman is very good at finding the kernel of humor in the humorless, self-important prat — he does it in the Harry Potter movies, he did it to perfection in “Galaxy Quest.” And he does it really well here.

But not quite well enough. Rickman is pretty entertaining in “Robin Hood: Prince of Feebs,” and he does the best he can with what he has; but it’s just not a well-crafted enough role. And there’s just not enough of him in it, even with fast-forwarding through on Tivo. I’m coming down on the “almost worth sitting through the rest of the movie for” side on this one.

So here’s the current Rickman Roundup:

Robin Hood: Prince of Thieves: D-. (And it only escapes getting an F because it has Alan Rickman in it.) Alan Rickman in the movie: B.

The Harry Potter series: Ranging from C+ to B+. Alan Rickman in the movies: A++.

Galaxy Quest: A. Alan Rickman in the movie: A+.

Dogma: A-. Alan Rickman in the movie: A+.

Hitchhiker’s Guide to the Galaxy: B+. Alan Rickman in the movie: A-.

Something the Lord Made: B. Alan Rickman in the movie: B+.

Sense and Sensibility: C. Alan Rickman in the movie: B+. (Been a while since I’ve seen this one, though. I don’t much like the story in the first place, and I thought Rickman was wrong for the part — but he was awfully damn hot. Too hot for the role, actually.)

There’s a piece by Tony Mauro over on Law.com: an interview with Daniel Metcalfe, a former senior attorney at the Department of Justice who retired in January, about Alberto Gonzales’s term of office as Attorney General. And it absolutely gives me the chills. (Found it via Dispatches from the Culture Wars.)

The gist of it: The Justice Department under Gonzales hasn’t just been among the most corrupt and politicized in American history. It’s also been one of the most incompetent.

The quote that jumped out at me: “Most significantly for present purposes, there was an almost immediate influx of young political aides beginning in the first half of 2005 (e.g., counsels to the AG, associate deputy attorneys general, deputy associate attorneys general, and deputy assistant attorneys general) whose inexperience in the processes of government was surpassed only by their evident disdain for it.” (Emphasis mine.)

This is exactly what I was talking about in Hurricane Katrina, and What Government Is For. When government is run — and staffed — by people who think government is a bad idea and hold it in contempt, then that government fails in even its most basic, obvious obligations.

I mean — the Attorney General’s office. The Department of Justice. That’s the central agency for enforcement of federal laws. That’s the people whose job it is to prosecute violations of federal law, from fraud to terrorism to, you know, things like kidnapping and murder. That’s the law and order stuff that conservatives are supposed to be all excited about.

And the man in charge of it, the man who staffed it with people whose “inexperience in the processes of government was surpassed only by their evident disdain for it”… this is the man George Bush calls “our No. 1 crime fighter”.

I think I’m going to be sick.

Oh, the other quote that jumped out at me: “I used to think that they (John Mitchell and Ed Meese) had politicized the department more than anyone could or should. But nothing compares to the past two years under Alberto Gonzales.”

Worse than John Mitchell and Ed Meese. That’s actually an amazing accomplishment.

I’ll admit it: I sometimes get frustrated when I hear people dis science. I feel like people who dis science don’t get how hard scientists work, how careful they are, how passionately they value the truth — even more than they value being right. (And like most people, they value being right a lot.) And I feel like people who dis science don’t appreciate the unbelievably vast degree to which it’s improved our lives, from polio vaccines to clean drinking water, from AIDS drugs to iPods.

But I also get it. Dealing with science as a layperson can be frustrating. You have to have a lot of trust in people who are talking a gobbledygook lingo that you don’t understand, about concepts that are often baffling at best and wildly counter-intuitive at worst. And while both experimental methods and results are theoretically transparent and available to anyone for review, the chances that a layperson will be able to make heads or tails of some paper on low-resolution structures of thyroid hormone receptor dimers and tetramers in solution are, shall we say, slim.

So I want to talk here about some of the reasons science gets a bad rap — and why I think that bad rap is much less deserved than many people think.

*****

“Frontier” science and the news media. Most of the scientific research that most of us read or hear about in the news is what’s called “frontier science” — new research, new theories, new results. And pretty much by definition, frontier science isn’t very solid. Frontier science is one study, one person’s theory, one surprising set of results. It’s important, but it hasn’t yet gone through the whole process of replication and review to see whether it holds up. Some of it pans out — a lot of it doesn’t. And when it doesn’t, people’s reaction is to say, “See? Scientists don’t know what they’re talking about.”

You see this a lot in the science of nutrition. Because it’s a subject of tremendous personal importance to most people, new/frontier nutrition science gets a HUGE amount of news coverage. But because what’s being reported on is frontier science, the new research frequently gets discarded or discredited. And so people’s reaction to new discoveries in nutrition is often to say, “God, every week it’s some new damn theory — how the hell am I supposed to decide what to eat?”

The problem, of course, is that while frontier science isn’t solid science, it makes for excellent news. No news agency in the world is going to run with the headline “Scientific Consensus Finally Reached After Years of Careful Replication and Peer Review.” (Unless it’s about global warming — then sometimes they will.) It reads like something you’d see in the Onion. And no news agency in the world is going to run the headline, “No, Really, We Keep Telling You — More Fruits and Vegetables, More Whole Grains, Less Junk Food, And A Whole Lot More Exercise.” It doesn’t sell ad space.

Anyway, this problem doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Medical science. Medicine is almost certainly the branch of science that most people have the most immediate personal experience with. And medical science can be extremely frustrating. The excruciatingly long, uncertain in the short term, “there’s way too much information that we just don’t have yet” nature of scientific research… that can be unbearable when you’re trying to get treatment for your cancer or your depression or your bad knee. And I think it leads a lot of people to think that doctors and scientists don’t know anything. It’s not very fair — science is science, and it’s slow and stuttering whether you’re researching gamma rays or HIV. But it’s awfully damn hard to wait fifty years for the research to play out when you’re in unalleviated pain, or you’ve only got one year left.

But as painful as this problem is, it doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

The devaluing of the intellect in our society. Modern American culture is, to put it mildly, not one that values reason, intellect, or education. (To the great detriment of our school system, I might point out.) We’re a culture that sees intelligent, educated people as smarty-pants know-it-alls who think they’re better than the rest of us.

This is a problem for a lot of reasons. I could write a whole post about it, and at some point I might. But one of the biggest reasons is political. When we don’t value reason or evidence, we play right into the hands of leaders who know how to manipulate us by pulling our emotional strings; leaders who get us to trust our gut — i.e., our fears and prejudices — rather than the evidence.

And we still have the central assertion on the table — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Crappy science education. This, I think, is directly related to the previous bit about the devaluing of the intellect. Science education in our schools often seriously sucks. And I’m not even talking about the whole creationism/ evolution controversy; go read Evolutionblog if you don’t know enough about that, and how badly it fucks up our schools.

I got extraordinarily lucky. I got an elementary and high school education that didn’t just teach scientific facts and theories, but that taught — as early as third grade — the scientific method. (Really. In third grade science class, we had these weird little comics explaining, among other things, the difference between observation and inference.) Most kids don’t get that. And the grown-up news media doesn’t do a very good job of explaining it (see “Frontier science and the news media” above). So most adults don’t understand that much about it. (Oh, and for the record: I don’t think this is the fault of science teachers. I think most of them are great and do the best they can with what they have. I think it’s mostly to do with politics: lousy funding, and No Child Left Behind, and pressure from anti-science parents’ groups, and the like.)

And we still have the central assertion on the table — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Limited resources. The complaint Layne made — about how no scientist is going to use their Magnetic Resonance Imaging equipment to examine the theory of interpsychic sexuality — is a very common one. People especially make this complaint about alternative medicine — that scientists dismiss it for not having been tested carefully, but then refuse to devote their resources to doing that testing.

And there’s some truth to that.

The problem is resources. Had they but world enough and time, I’m sure the researchers at Stanford would be delighted to use their MRI equipment for studies on telepathy in SM sex. (If for no other reason, it would be a whole lot more entertaining for the research staff than whatever they’re working on now.) But science is both ungodly time-consuming and ungodly expensive, and researchers aren’t going to put their very limited time and budget into avenues of research they think are unlikely to bear fruit. And the reality is that every single serious, careful study that’s been done on other forms of telepathy has failed to find any evidence of it.

So when there are a hundred scientists in line to use the MRI equipment and the only slot you could get was on Labor Day between two and four a.m., you’re not going to spend it testing sadomasochistic telepathy. You’re going to spend it testing your theory about calcium supplements and bone density, or brain damage in alcoholics. (And even if you do want to spend your time and budget testing sadomasochistic telepathy, the people whose job it is to allocate time slots on the equipment aren’t likely to do it — for exactly the same reason.)

To believers in paranormal phenomena, that can seem really unfair. But here’s the thing we have to remember. In the early days of modern science, metaphysical theories were considered a lot more credible, and they got a fair amount of serious scientific attention. But when they were seriously tested, those theories fell apart — and the more the scientific method improved, the harder they fell. It isn’t that scientists are unwilling to do the research because they don’t believe the theory. It’s the exact opposite — they’re unwilling to seriously consider the theory because the research doesn’t support it, and never has.

Anyway, that’s what CSI (the Committee for Skeptical Inquiry, formerly CSICOP) is for. That’s what they do. They take claims of paranormal or spiritual phenomena, and subject them to the same careful scrutiny and controlled experimental protocols that physical phenomena get subjected to. (They do non-paranormal research and analysis as well, on subjects ranging from magnet therapy to sex predator panic, and from the Kennedy assassination to Bigfoot.)

And as unfair as it may seem, this problem still doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Scientists can’t think outside the box. Again, this is one of the most common complaints leveled against scientists by believers in the spiritual, paranormal, and/or woo. It’s related to the argument above: people argue that scientists won’t even consider a theory of, say, telepathy or reincarnation, since it’s outside their narrow beliefs and expectations about the world. And of course, there’s some truth to this. Scientists are human, and like most people they have a difficult time thinking outside the box of their beliefs and expectations.

But it’s also true that in science, revolutionary thinking is highly prized. Possibly even too much. In fact, it’s a complaint I’m beginning to hear a lot of: every goddamn researcher wants to be Galileo or Darwin or Einstein. Nobody wants to be the reliable workhorse who clarifies a fine point of the existing theory. Everyone wants to be the world-famous breakthrough person who changes the paradigm.

And while this trait can be annoying, it also goes a long way towards counteracting the “inability to think outside the box” problem.

Ingrid once gave me a great example of this. She was watching a TV show where the guest was a woman who claimed to be from the planet Venus, who claimed that there were domed cities on Venus and she still had relatives there. The show also had an astronomer on, who asked this woman, “Okay, can you show me where on Venus I can find these domed cities?” The woman hemmed and hawed and said, “Oh, there’s no point, you won’t believe me, you scientists don’t want to believe this.” And the astronomer replied, “Are you kidding? I would LOVE to be able to prove that there are domed cities on Venus! If I could prove that, I’d be the most famous astronomer since Galileo!”

And that’s true of all this other stuff we’ve been talking about. If a scientist could prove — really prove, with hard, carefully-gathered, carefully controlled, replicable evidence — that there was life after death, or metaphysical telepathic communication, or an animating force infusing all living things, or any of this stuff we’ve been talking about — it would be an ENORMOUS contribution to science. They’d be more famous than Freud and Oliver Sacks combined. They’d probably become the most famous scientist in history.

In any case, this problem still doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Science keeps changing — so how can we trust it? One of the problems is that people who distrust or dismiss science often say things like Layne did, that “history is also littered with disproved and discredited science” — and that this somehow discredits science.

But people who value science don’t see this as a sign of science’s failure. On the contrary — we see it as a sign of its success, of science working exactly the way it’s supposed to. When enough evidence comes along that contradicts a theory, that theory gets discarded and replaced by a better one. A theory is only as good as the most recent results.

Now, obviously, there’s a limit to this “most recent result” thing. As a science professor of mine once pointed out, if one of his students got a result that the density of helium and the density of lead were identical, that professor would not be rushing off to publish the results in “Science.” He would, instead, be checking to see whether that student had turned on their scale.

That’s where the whole “extraordinary theories require extraordinary evidence” thing comes in. If a theory has stood up for decades or centuries, if it’s explained all the evidence so far and done a good job of predicting new evidence, then one anomalous result won’t be enough to make everyone question the theory. And it shouldn’t. Anomalous results happen too often — and they too often turn out to be explainable by something in the “they forgot to turn on their scale” department. A really solid theory that’s held up for a long time needs a metric shitload of evidence for it to be discarded and replaced.

And here’s the thing: Of course it’s true that scientific theories have been discarded and replaced. But they’ve consistently been replaced with other scientific theories, other naturalistic explanations of the world. This is the point I was making in The Unexplained, The Unproven, and The Unlikely — not that naturalistic theories never get replaced, but that they never get replaced by supernatural ones. (Not ones that are supported by mountains of carefully collected, carefully controlled, peer-reviewed, replicated, etc. evidence, anyway.)

Anyway, this problem still doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Science has resulted in bad things. From tasteless agribusiness food to the atom bomb, from racist intelligence testing to gays and sadomasochists being diagnosed as insane, from thalidomide to eugenics to the Tuskegee syphilis study, history is full of terrible, harmful results of science. (And I haven’t even mentioned Cool Ranch Doritos…) In many of these cases the science was bad science, and eventually got corrected — not just the results, but the scientific method itself. But it sucks deeply when the slow, self-correcting process of the scientific method is being corrected on your back. And in some of these cases the science wasn’t bad. It was flawless. It was just applied in a profoundly unethical way.

I’m not going to pretend that this stuff isn’t real. I don’t think it’s fair to praise the benefits we’ve gained from science — and they are legion, from vaccines to clean drinking water to HIV medicine to the Interweb — and not acknowledge the curses it’s handed us.

All I can say is this: It’s not like human beings need science to do terrible, stupid things to each other. And it’s not like the religious/ spiritual impulses of humanity haven’t led to horrors as well. For every atom bomb and toxic farm and electroshocked homosexual you can show me, I can show you a religious war, a witch-burning, a piece of knowledge being violently suppressed, a fraudulent psychic preying on the hopes and fears of the gullible, a child getting beaten up for being Catholic or Jewish or Muslim.

And unlike the scientific method, religious or spiritual beliefs often don’t have a built-in self-correcting mechanism. Quite the contrary. Any religious or spiritual belief that’s based on the idea that faith/ feeling/ doctrine/ intuition trumps evidence (and many of them are) has the exact opposite — a built-in self-perpetuating mechanism.

Anyway, as troubling as this problem is, it still doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Scientists are arrogant.

Well — yeah.

Some of them are, for sure, what with them being human and all. And it could easily be a disproportionate amount. I don’t know if there’s ever been a double-blind, peer-reviewed, replicated study comparing the arrogance of people in different careers — but if there were, it wouldn’t surprise me to see scientists on the high end of the list. (Especially if you include doctors.) Science is not a field for fragile egos — it’s a field where you’re constantly having to defend your ideas, and the discourse is not always polite.

But even if scientists are, on the whole, more arrogant than average (and I’m not totally convinced that they are — arrogance is a pretty common human trait)… I’m not quite sure how to say this, but it’s a different kind of arrogance. There is nothing in the world like the arrogance and smugness of someone who has been vigorously trained to admit when they’re wrong — whose entire life’s work and professional community are based on the principle that people have to admit when they’re wrong or else the whole thing goes kaflooey — and who knows they’re capable of doing it, and has almost certainly done it dozens or even hundreds of times in their career. (There was just an episode of The Office about this, when Michael says, “It takes a big man to admit his mistake — and I am that big man.”) As smug and self-righteous as people can be when they’re loudly insisting that they’re right, it does not even come close to the smug self-righteousness of people who are loudly pointing out that they’re big enough to admit their mistakes.

And while that kind of arrogance can be extremely annoying personally, it’s not the kind of arrogance that gets in the way of the truth.

Ingrid was actually just talking about this. Ingrid is a nurse practitioner in the field of HIV and AIDS, and she goes to conferences where researchers report the results of their studies. And she says that people report surprising results ALL THE TIME. She says it’s a world full of enormous egos… and yet, people are CONSTANTLY reporting that, “We went into this study completely expecting A to happen, but much to our surprise, B happened instead.” (I keep saying this, but it bears repeating: You can have all the expectations in the world, but if your research protocols are good, the outcome is going to be the outcome no matter what you expected.)

And, she points out, nobody stands around them pointing and laughing and saying, “Ha ha, you thought A was right, you stupid twit, boy were you wrong.” There is hearty and fierce debate in the scientific world… but there’s also a basic understanding that having your hypotheses proven wrong is an essential part of how the process works.

Besides… you know, it’s not like there aren’t arrogant jerks in religion and spirituality. From Ted Haggard to Deepak Chopra, the world is full of arrogant spiritual leaders. Not just the leaders, either: ordinary spiritual believers can be every bit as condescending about their world view as scientists. And much of the time, they’re NOT in a field that’s founded on the principle that a theory is only as good as the last piece of evidence supporting it. Quite the opposite. (See Science has resulted in bad things above.)

This is the big difference, the thing I keep coming back to. Scientists may be arrogant — but they can back up their arrogant opinions with carefully gathered, rigorously examined, replicable evidence. (And if they’re wrong, they’ll get some equally arrogant scientist smacking them across the head and telling them so.) Spiritual believers are much more likely to back up their beliefs with, “Well, that’s just how I feel,” or, “I know it in my heart,” or, “That’s what the Bible/ Torah/ Koran/ whatever tells me,” or, “It’s just intuitively obvious.” And they’re more likely to think that this somehow ends the conversation — that their intuition and/or doctrine and/or personal experience is good enough evidence, by itself, to base their beliefs on. While there are certainly exceptions (the Quakers are a good counter-example, as they so often are), many religious and spiritual beliefs have, at their very foundation, the idea that faith is more important than reason or evidence. (Again, see Science has resulted in bad things above.)

And this problem still doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

Science is limited — there are certain kinds of questions it simply can’t answer. Is the scientific method limited? You betcha. There are huge, important questions about life and human experience — what kind of art we like, what kind of sex we like, where we decide to live, what career we pursue, who we fall in love with — that are fundamentally subjective, that are about how we experience the world and not how the world is, and that are to a great extent best understood by introspection and emotion. (Although I sure do know a lot of people who would seriously benefit from applying a little more reason and evidence in their romantic and sex lives…)

And of course, in everyday life, we have to make quick decisions about the world without subjecting them to years of careful research and replicability and peer review. Whether to pet the dog or stay three feet away from it, whether we have time to make that left turn before the light turns red, whether someone we pass on the street might be a threat… all these evaluations and thousands more have to be made fast, with limited evidence and our gut feeling. (Although again, I think a little more reason and evidence could help improve these decisions for a lot of people. It could go a long way towards minimizing the “clutching your purse when a black man passes you on the street” phenomenon, just for example…)

But when you’re arguing — for instance — that the soul is a real metaphysical entity that survives death, or that prayer can help treat illness, or that people can communicate telepathically… that’s a completely different ball game. Those are claims, not about our personal subjective experience, but about the objective world. And those are exactly the kinds of claims that the scientific method is suited to investigate.

And once more, this problem still doesn’t contradict the central assertion — that the scientific method is the best method we have for minimizing human error and bias in observing the world and trying to explain it.

*****

Finally: Even if, after all this, you remain dubious about science and the scientific method — how does that dubiosity support a claim of the paranormal? Even if someone could convince me that the scientific method was hopelessly flawed, how would that support a claim that paranormal phenomena are real? Even if someone convinced me that scientists are far too subjective and attached to their opinions to be able to observe the world accurately, why would that show that paranormal claimants are less subjective, less attached to their opinions? Even if someone convinced me not to trust CSICOP, why would that convince me that I should trust Deepak Chopra? (Or whoever.)

In other words: If you’re not going to rely on some sort of methodical system for evaluating what is and isn’t true — apart from “It just seems right,” which we know to be among the worst arguments in history — then on what basis are you trying to convince me (or yourself) that your beliefs are right?

There’s something Richard Dawkins says about this. It’s his response to the saying, “There are no atheists in foxholes.” His response: “There are no cultural relativists at 30,000 feet.” If you’re in an airplane, he says, and it stays in the air, it’s because a whole bunch of scientists got their sums right.

And this is one of the things that bugs me most about the “What has science ever done for me?” argument. Science is why you can fly to London. Science is why you don’t have to be afraid of getting smallpox or polio. Science is why we understand that our planet is not the center of the universe, and that our species is not the center of the planet. Science is why you can go off about how science can’t be trusted… on the Internet. Science is why you have friends with AIDS with a life expectancy of more than six months. All of this is because scientists used the scientific method, instead of their gut, to make sure that their orbital calculations and polio vaccines really worked.

Remember what I was saying earlier about the devaluing of the intellect in our society and the political consequences of it? One of the most crucial examples: Science is why we know global warming is happening. And the mistrust and scorn of science and scientists is a huge part of why we’re not doing enough about it. It’s not the motivation for the denial — the motivation is greed and inertia, mostly — but it sure gives a handy excuse. “Oh, those scientists, they don’t even all agree, and what do they know anyway?”

Scientists aren’t perfect. But they work really hard doing something really important, something fundamental to what makes us human — trying to understand who we are, and what the world around us is, and what our place is in that world. They make mistakes, sometimes whoppers. But in the long run those mistakes tend to get filtered out — and in the meantime, they’ve contributed not only practical assistance to our daily lives, but insight into who we are. And their method really is the best one we know of for minimizing human error and bias in observing the world and trying to explain it. I certainly haven’t seen or heard of one that’s better. We should, of course, view scientists with as much skepticism as they view one another. But I strongly believe that, on the whole, they deserve our respect and trust.

I talk about science a lot in this blog. I am passionate about science, especially for someone who’s only studied it as a humanities major and an educated layperson. Scientists are my heroes — most obviously scientists like Galileo or Darwin, who’ve forced people to radically rethink the universe and our place in it, but also Joe and Jane Nerdiac slogging away in a lab or a swamp, trying to figure out some minute detail about the world with more patience and diligence than I could ever muster up.

And periodically, both in this blog and elsewhere, I run into people who try to convince me that my faith in science is misplaced. I hear/read people say things like, “Scientists are human, therefore science is flawed… therefore science is not to be trusted, and/or can’t really tell us anything useful about the world.”

The thing is? The first part of that is absolutely true. Science isn’t perfect. It’s a human endeavor, and it’s therefore fraught with imperfection. It’s shaped by bias, and arrogance, and the intense desire to be right, and the ability to be fooled, and the difficulty people have in seeing or imagining what they don’t expect.

I’ve never met, or read, a scientist who thought otherwise.

Which is exactly why the scientific method has developed the way it has.

People talk a lot about science as if it were a set of beliefs — like a religion, a body of theories and opinions about how things are. But while there’s some truth to this on a practical day-to-day basis, it really isn’t the big picture, or even the medium-sized picture. What science is, ultimately, is a method — a method for observing the world, and trying to explain it.

And here’s the thing about the scientific method: It’s been developed over the years to do one very specific thing — to minimize the effects of human error and bias, as much as is humanly possible.

See, scientists KNOW that they, like the rest of the human race, are arrogant, stubborn bastards who crave recognition and have axes to grind. Believe me: when you point out that many scientists are arrogant, you’ll get a dozen or more scientists laughing and saying, “Buddy, you don’t know the half of it.” And they have therefore developed this method for trying to figure out what is and isn’t real about the world — one that minimizes, as far as we know how, the effects of that arrogance and stubbornness and the rest of it.

It doesn’t do it perfectly. And it takes time, not to mention extremely hard, often tedious work. But I would argue that it does this job better than any other method we have of gathering information about the world and coming up with theories to explain it.

So I want to talk a little about the scientific method — what exactly it is, and how it works, and why it’s done the way it’s done. (FYI, this isn’t meant to be a comprehensive summary of the scientific method — just a quickie tour of the features that I think are most pertinent to these conversations.)

*****

Transparency, of both results and methodology. When scientists publish papers, they don’t just report the results of experiments. They also report — in mind-numbingly boring detail — exactly how those experiments were done.

They do this for two reasons. They do it so other people can repeat the experiment and see if they get the same results (see Replicating Results below). And they do it so other people can examine and analyze their methodology, and point out any problems there might be with it. Scientists know that outside observers can often spot mistakes that an insider can’t — especially when that insider has been working on their research for years, and has a certain rabid attachment to the outcome.

Replicating results. One of the first things that happens when a scientist reports a surprising result is that a hundred other scientists run to their labs to repeat the experiment and see if they get the same result. So even if one scientist gets a particular result because they expected or wanted it and somehow skewed their experiment to make it happen… when the hundred other scientists repeat the experiment and try to replicate the results, it’s not going to come out the same. (BTW, this doesn’t just work to screen out bias — it also works to screen out fraud.)

Peer review. Again, scientists know that outside observers can often spot mistakes that an insider can’t, either because that insider cares too passionately about the outcome, or because they’re simply too close to the work to have perspective on it. So before it’s even published, research has to be reviewed by other scientists in the field — scientists who don’t have the same personal stake in the outcome as the researcher, and some of whom may even have opposing or competing stakes.

Careful control groups. As much as is humanly possible, scientists set up control groups for their experiments that are identical in every way to the testing group except in the area being tested. (And if they don’t do a good job with this, it’s likely to get caught in the peer review process — and even more likely to get caught in the attempts to replicate the research.) It’s impossible to do this perfectly — especially when you’re doing your testing on human beings and not, say, hydrogen atoms — but they do it as well as they can, and they run it by their peers to see if they missed anything (see Peer Review above). They do this because they know, from experience and history, that a hundred different variables can affect the outcome of an experiment — and a variable that you thought was trivial could turn out to be crucial.

I learned about a wonderful example of the importance of careful controls when I was in middle-school science class. We were learning about the polio vaccine, and our teacher explained that when the vaccine was first being tested, the researchers went to the schools and asked parents for permission to test this experimental vaccine on their kids. Some parents said yes, some said no… so the researchers said, “Great. We’ll test the vaccine on the kids whose parents said Yes, and the ones whose parents said No will be our control group.” But when they went to publish their results, they were told that the experiment was flawed and they had to repeat it. There was an important difference between their control group and their testing group, one that hadn’t occurred to them — namely, whether the parents had said Yes or No to the experiment. So they repeated the study, this time splitting the kids whose parents said Yes into a testing group and a control group.

And when they compared their results to the results of the original experiment, they found that, in fact, kids whose parents had refused the experiment WERE more likely to get polio than kids whose parents allowed it. Regardless of whether they’d gotten the vaccine or not. They would never in a hundred years have expected that outcome — but that’s the outcome they got. And they got it — as well as an accurate answer to the rather more important question of whether the polio vaccine worked — because of the combination of peer review and careful use of controls. (I don’t have space here to go into why they think this outcome happened — if you’re curious, ask me in the comments.)
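The confounding problem in that first polio study can be illustrated with a toy simulation. This is a hypothetical sketch with invented numbers, not the actual trial data: suppose consenting families’ kids have a higher baseline polio risk than refusing families’ kids, and the vaccine halves whatever risk a kid starts with. The flawed design (vaccinate the consenters, use the refusers as controls) can then mask the vaccine’s effect entirely, while randomizing within the consenting group reveals it.

```python
import random

random.seed(42)

# Invented numbers for illustration only: consenting families' kids
# start with double the baseline risk of refusing families' kids,
# and the vaccine cuts a kid's risk in half.
BASE_RISK = {"consented": 0.004, "refused": 0.002}
VACCINE_EFFECT = 0.5

def infected(consent, vaccinated):
    """Simulate one kid; True means they got polio."""
    risk = BASE_RISK[consent]
    if vaccinated:
        risk *= VACCINE_EFFECT
    return random.random() < risk

def rate(kids):
    return sum(kids) / len(kids)

N = 200_000

# Flawed design: vaccinate all consenters, use refusers as controls.
flawed_test = [infected("consented", True) for _ in range(N)]
flawed_ctrl = [infected("refused", False) for _ in range(N)]

# Corrected design: randomize within the consenting group only.
good_test = [infected("consented", True) for _ in range(N)]
good_ctrl = [infected("consented", False) for _ in range(N)]

print("flawed design:    vaccinated %.4f%% vs control %.4f%%"
      % (100 * rate(flawed_test), 100 * rate(flawed_ctrl)))
print("corrected design: vaccinated %.4f%% vs control %.4f%%"
      % (100 * rate(good_test), 100 * rate(good_ctrl)))
```

With these made-up numbers, the flawed design shows vaccinated and control kids getting sick at nearly the same rate — the hidden consent variable exactly cancels the vaccine’s benefit — while the corrected design shows the vaccinated group’s rate at about half the control group’s.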

Double-blind and placebo-controlled testing. Scientists know — especially when it comes to doing tests on people, such as medical or psychological research — that unconscious biases of the testers can influence the results of the tests. (You jiggle the test tubes of your experimental group just a little harder than your control group, and your results are fucked.) And when it comes to medical testing, scientists know about the placebo effect. So as much as possible, experiments are carefully set up so that even the researchers don’t know, for instance, which batch of blood samples came from the group that got the drug, and which batch came from the group that got the placebo — until the testing is all completed.
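The bookkeeping behind double-blinding can be sketched in a few lines. This is a hypothetical setup, not any real trial’s protocol: a coordinator who never handles the samples randomizes each participant into an arm and issues an opaque code; the bench researchers record results against codes only, and the code-to-arm key stays sealed until every measurement is in.

```python
import random

random.seed(1)

participants = [f"P{i:03d}" for i in range(10)]

sealed_key = {}      # code -> "drug" or "placebo"; unopened during the trial
blinded_sheet = []   # (participant, code): all the researchers ever see

# random.sample guarantees the codes are unique.
codes = random.sample(range(10_000), len(participants))
for pid, n in zip(participants, codes):
    code = f"S-{n:04d}"
    sealed_key[code] = random.choice(["drug", "placebo"])
    blinded_sheet.append((pid, code))

# During the trial, results are recorded by code, with no way
# to tell which arm a sample belongs to:
results = {code: f"measurement for {code}" for _, code in blinded_sheet}

# Only after data collection does the analyst "break the seal"
# and join each result back to its arm:
unblinded = [(code, sealed_key[code], results[code]) for code in results]
```

The point of the structure is that nobody who touches a sample ever holds both a result and an arm assignment at the same time, so there’s nothing for unconscious bias to act on.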

Falsifiability. This is one of the most important principles of science. If you have a theory that can’t be disproven — if any evidence at all can be made to fit into your theory — then you don’t have a useful theory. It has no predictive power, no explanatory power. So when you offer a theory, you have to be willing to say, “If A, B, or C happens, that would support my theory; if X, Y, or Z happens, that would contradict it.”

This is one of the reasons so many science-lovers and skeptics get so frustrated with so many religious or spiritual beliefs (not all of those beliefs, but many). Anything at all that could ever happen can get twisted around somehow to fit into the belief system. And from a scientific method point of view, that makes the belief system useless.

Which is what I was trying to get at before (somewhat clumsily) in my Lattice of Coincidence post, when I was asking, “If paranormal phenomena were ‘shy’ (i.e., inconsistent and unpredictable and tending to disappear when tested) but real, how would that information be useful?” If you have a theory about the paranormal or metaphysical (or about anything else), and no possible result or evidence — or lack thereof — could contradict that theory or convince you that it’s wrong… then it’s not a useful theory. It has no power to explain past results or predict future ones. And that’s not just a practical problem. It’s a philosophical problem, and a big one. If you have no way of knowing whether you’re wrong, then you have no way of knowing whether you’re right.

*****

Does this system sometimes screw up? Fuck, yeah. Especially in the short run. Early results seem promising but then don’t pan out. Surprising new evidence gets explained by boatloads of new theories that turn out to be ca-ca. And I’m sure everyone can think of (or Google) many, many examples of times when scientists have taken one or more of the abovementioned principles and massively screwed it up.

But when the method is followed, it works. Slowly, in the long run, with lots of stops and slowdowns and detours along the way, it works. And even when it isn’t carefully followed by an individual scientist, the method works in the long run to catch that scientist’s mistakes — and to catch mistaken assumptions and incorrect theories made by all scientists, and provide a new and more accurate theory.

And maybe more to the point:

What else do we have? What other method do we have for gathering information about the world, and coming up with explanations of what that information means, that has anywhere near the same power to minimize bias, and the desire to be right, and the difficulty in seeing what you don’t expect, and all the other obstacles our brains put in the way of understanding the world?

Intuition and inspiration are great. Scientists rely on them heavily to come up with ideas in the first place. But intuition is a starting place — not a final answer. We KNOW that intuition is heavily slanted by bias and expectations and what we want to be true. Intuition gives us ideas, gets us started on roads to explore — but if we want to be really, really sure that our ideas reflect reality, as sure as we can be with our imperfect brains and our huge and mystifying world, then we need a method to test those inspired, intuitive ideas. And as imperfect as it is, I think the scientific method is the best one we have.

In tomorrow’s post: Common objections to science and the scientific method — and my replies to them. If you have arguments against my little love letter, I’d like to ask you to hold them until then.

Continuing with this week’s series of Hilarious Videos I Found On Other People’s Blogs, we have the second of the Tom Lehrer gems I found on Dispatches from the Culture Wars — this one an animated video set to “The Masochism Tango.” Very, very silly. Enjoy! (Video below the fold)

And yet another in this week’s series of Hilarious Videos I Found On Other Blogs. Today’s gem is from Dispatches from the Culture Wars. It’s an animation done to the Tom Lehrer song about the periodic table of the elements, and it’s… well, just enjoy. (Video below the fold.)

Continuing with Hilarious Video Week, we have this gem that I ran into on the Other Magazine Blog. This one falls into the “funny social commentary” category. It’s hilarious, it’s smart, it’s sexy — and it’s saying stuff that I think is important. (Video below the fold.)

The last week has been Hilarious Video Week at some of my favorite blogs. So I’m going to share some of them. This one comes via Pharyngula, and it (a) made me laugh uncontrollably, and (b) got me strangely hot. (Video below the fold.)

I don’t have anything to say, except this: My deepest sympathy goes out to the families and friends of the victims.

That seems so inadequate. But I hate that people are already starting to spew about whether we should blame video games, or lax immigration laws, or inadequate gun control, or the lack of prayer in the schools. It’s too soon, and besides, we still know so little about what happened and why. So for now, I just want to say again: This is a horrible tragedy, and the families and friends of the victims have my deepest and sincerest sympathy.