The other day, in writing about John McPhee's interesting New Yorker piece on "frames of reference," I noted some gentle gibes John takes at alleged frame-of-reference fails in "the wonderful book" Captured by Aliens, by Joel Achenbach, a former student in his Princeton writing class whom I referred to as "the great Washington Post science writer."

(This post, Joel explains, is actually "cross-posted from the Post's new Inspired Life blog, edited by my former Rough Draft editor Sydney Trent." Joel adds, "Please note that there is not a single reference to Hillary Clinton's emails. Well, now there's one, I guess.")

"I spend an inordinate amount of time thinking about the end of the world," Joel begins. "I’m on the catastrophe beat around here."

If we all faced certain doom, I know exactly what I’d be doing in my final hours of existence: Trying to file on deadline. (And then tweeting the link.)

Earthquakes, oil spills, hurricanes – that’s just for starters. I’m also tasked with thinking about low-probability, high-consequence events, such as an asteroid impact, or an eruption of the Yellowstone caldera (you know Yellowstone’s a big, bad-ass volcano, right?).

You see: "I get paid to worry about the future, and talk to other worriers." Isn't it nice to come across someone who loves his work?

HERE'S SOME OF THE FUN JOEL'S BEEN HAVING

"Recently I talked to scientists about 'synthetic biology' and whether something cooked up in a lab might escape and run amok."

The consensus is: not likely. Nature has already “invented” (if you will) an unbelievable array of organisms that invade every niche and exploit every available resource. [As recently noted here on the A-blog, there’s a name for a system that invents novel organisms, and it’s called “evolution.”]

"Then there are the very, very exotic hazards."

The other day I was working on a story about space aliens, and whether we should beam signals to them, shouting into the void, trying to make contact. A major argument against such a move is that the aliens might be unfriendly, and potentially visit us, lugging along their “To Serve Man” cookbooks (obligatory Twilight Zone reference).

I’m going to mark that down as not-gonna-happen, on account of it being way too much hassle to come all the way here from anywhere else. We live in that kind of universe – you can’t get anywhere because everything’s too spread out.

"What if we made contact with the space aliens and they informed us, gently, that everything in the universe is just a big computer simulation, and that our lives here on Earth are not actually 'real'?"

This, Joel explains, is "another crazy thought, mentioned to me by an astronomer."

One has to think, immediately, of Samuel Johnson kicking the stone and saying of Bishop Berkeley’s nothing-is-real thesis, “I refute it THUS.” Besides: If a simulated universe is exactly like a real universe then I don’t think anyone has anything to complain about. (Or is that giving up too easily?)

"My colleague Matt McFarland recently did a post on the 12 biggest existential threats facing humanity, and you'll be surprised by what's number one: Computers. Artificial intelligence. 'Skynet' from The Terminator."

At some point, computers may begin to program themselves, achieving what is known in the worrying trade as Superintelligence. Is this a rational fear? I will say this: My computer is already a lot smarter than I am. Even my phone is. I didn’t expect to live to the day when I wasn’t as smart as a phone. But I’m with Walter Isaacson, who in “The Innovators” says humans-plus-computers will always be smarter than computers alone.

"There are other threats."

Nuclear war. That’s not going away anytime soon. You also have the nanotechnology concerns that Matt wrote about: Again, hard to do risk analysis on something we know very little about.

OKAY, WE'RE GETTING TO THE "UPLIFTING" PART (WE'RE JUST NOT QUITE THERE YET)

It would be easy to make a list of possible threats, hazards, natural disasters and cosmic catastrophes (have I mentioned solar flares that fry the grid??), and then decide to curl up in a fetal ball and start whimpering. Except go back to Matt’s post: Multiply all those existential threats together and it’s still a longshot. Chances are, the future will be different from the present, just as the present is different from the past — but we’ll probably still be here in some form, and possibly more or less recognizably so.

The biggest issues aren’t existential so much as qualitative. What kind of world will it be? Beautiful? Clean? Marked by freedom and justice? Will children grow up in societies that give them a chance to fulfill their dreams? Will girls be given access to education? Will peace and harmony be the norm, or conflict and war? Utopia, dystopia, or some complicated situation like the one we’re in right now?

ANYTIME NOW FOR THAT "UPLIFT," JOEL . . .

Aha! "I am cautiously optimistic," he says.

I believe human beings are highly adaptive creatures. I think we can create a sustainable and beautiful civilization that doesn’t destroy the planet or impose totalitarianism. Optimism, I should note, is not a fashionable attitude in some circles. Indeed there are those who would argue that optimism is counter-productive — an actual impediment to making changes and being adaptive. But optimism isn’t complacency. It has to be paired with a certain level of wariness and a general alertness. We have to find ways to look beyond our immediate horizons. The future isn’t simply something that happens, but rather it’s something that we’ll actively create, through hard choices, clever engineering and social progress. I believe that.

There you go! Consider yourself (cautiously) uplifted.

[NOTE: Joel's Why Things Are: Answers to Every Essential Question in Life (1991), pictured above, was followed by Why Things Are, Volume II: The Big Picture in 1993 and Why Things Are and Why Things Aren't in 1996 -- and a bunch of books since, most recently A Hole at the Bottom of the Sea: The Race to Kill the BP Oil Gusher (2011).]