The Onion A.V. Club recently began airing a series of short films titled Pop Pilgrims. Their intro sums up the purpose of Pop Pilgrims better than I could:

“When the A.V. Club travels, we always make time to visit pop culture landmarks. If something memorable happened in the world of film, TV, books, or music, we want to go there. We’re not just tourists, we’re pop pilgrims.”

The series is a lot of fun, and very informative. Yet up to now, I hadn’t really given much thought to how they were getting their information.

Most of the shorts include interviews with local “experts,” people with firsthand (or at least close secondhand) knowledge of the sites: a pastor from the church in the final scene of The Graduate, say, or the former special counsel who helped to bring Animal House to the University of Oregon campus. That’s a great way to add to the pop lore, especially when the interviewees let us in on some lesser-known facts about the site. The short about Friday Night Lights was particularly illustrative of the ingenious use of a single physical location as many different on-screen places.

In their latest installment, the first of three in Chicago, they take on The Blues Brothers. And beyond the location interview at the Music Court bridge in Jackson Park—site of the Nazi rally in the movie—it would appear that a major portion of the three-minute short was put together by someone sitting down with some editing software, a DVD of The Blues Brothers, and a web browser displaying my site: Chicago Filming Locations of The Blues Brothers.

I say this because of the similarities in the captions that accompany several of the locations—not merely addresses, but phrasings that are somewhat distinctive due to my choice of words and their order. A standout example is their “Jackson Park between East Lagoon and 59 Street Harbor, Chicago, IL,” a near-verbatim copy of my notation, plus a typo and minus “South of Museum of Science and Industry.” (For whatever reason, both in their location shots and the caption, the A.V. Club has obfuscated the proximity of the bridge to MSI—just as the movie did.)

I’ll even go so far as to suspect that all of the on-screen captions, even the addresses, were cribbed from my site. Of course it’s impossible to say that for certain, unless the folks at the A.V. Club fess up—which is why, despite my desire for 100% perfect accuracy, I realize now in hindsight that I should have included a few “ringers.”

In his excellent book Brainiac, Jeopardy! über-champ Ken Jennings describes how trivia writers will often add ringers: little bits of unique, often incorrect data, used as markers to let the writers know when their work has been borrowed by others. The classic example Jennings cites is that of “Columbo’s first name: Philip,” a falsity inserted by Fred L. Worth into his Trivia Encyclopedia in the early 1970s—and which subsequently appeared in the first edition of the Trivial Pursuit game.

Worth’s subsequent lawsuit, and its dismissal in court, made clear that factual data, raw information, is not copyrightable. I’m not complaining about infringement or anything like that; that would be silly. I didn’t create the data—I merely compiled it from numerous sources (which I credited) and built on it with quite a bit of legwork (i.e., on-site location scouting).

An offhanded credit by the A.V. Club, for saving them from that same legwork—even just in the accompanying text, not on-screen—would have been the forthright, ingenuous thing to do. No matter, though; I remain their avid reader and fan, and I get pleasure out of knowing their little secret: that they visited my site and found it useful, regardless of how they used it.

You’re welcome, A.V. Club. Sincerely.

[Follow-up: Less than three hours after I posted this, I wound up in a friendly email exchange with A.V. Club general manager Josh Modell, who admitted that he “most definitely” used my site as a resource and offered to add a note and link to the bottom of their piece (now already in place). If you’ll pardon a cliché, I must say this: The Onion A.V. Club—too cool for school.]

This week Atlantis made the final landing of the thirty-year-long Space Shuttle program. It was a momentous day; thousands of people flocked to the Kennedy and Johnson Space Centers to witness the end of an era. Fully aware of the historic nature of the event, everyone involved shared some finely crafted words, including these from NASA commentator Rob Navias as the Shuttle rolled out on Runway 15:

“Having fired the imagination of a generation, a ship like no other, its place in history secured, the Space Shuttle pulls into port for the last time, its voyage at an end.”

When Commander Chris Ferguson spoke an uncharacteristically wordy version of the standard end-of-mission call “WHEELS STOP,” I—like many Americans—burst into tears. The finality of the moment, combined with the uncertainty of the future of American human spaceflight, was deeply emotional. Space Shuttle has been, as a friend put it so aptly, “our generation’s technological icon,” pervading everything to do with space for three-quarters of my life—nine-tenths if we go back to the 1972 announcement by President Nixon that got the ball rolling.

I’m proud of what the United States has accomplished in spaceflight, and I sincerely hope that this country continues in a leadership role for future spaceflight endeavours. But in the back of my mind, even as my tears dried, I felt a deeper regret—not for Thursday’s closure, but for what might have been.

Someday, if humanity has sufficient luck and foresight, we’ll find a way to live beyond this planet before we make it so uninhabitable that we kill ourselves off. If that happens—and I’m not confident it will, but that’s a different subject—I suspect that those humans living on such far-flung worlds as Mars and Titan and (apologies to A. C. Clarke) Europa and perhaps even beyond this solar system will look back on their distant past, the early days of human spaceflight, and remember the Shuttle and say, “What the heck were they thinking?”

Because, putting aside all tributes to an amazing piece of technology and the hard work of thousands that made it possible, it must be said: Space Shuttle was, from its inception through its final flight, a boondoggle.

It was born on a promise of efficient and economical access to space, a promise it was never capable of delivering. Turnaround time, theoretically touted as less than two weeks, rarely fell below two months, and usually ran to four or five.

It was, as I once heard it called, “a camel—a horse designed by committee.” Compromises and political necessities, fettered only by engineering realities, held sway over the design process. The military imposed its own set of rules, even though the final design had a cargo bay too small to contain the tour-bus-sized spy satellites the DOD was already building. By the time it flew, Space Shuttle was a delivery vehicle that satisfied the needs of none of its intended customers.

It was a transport without a destination. The original proposal was for a spaceplane and a space station for it to go to, yet for the first seventeen years of Space Shuttle operation there was no International Space Station in orbit. (For five years prior, the Soviet/Russian Mir acted as an occasional stand-in, mainly as an excuse to prop up a faltering Russian space industry.)

It was dangerous to fly, even by “spaceflight is inherently dangerous” standards. No launch escape system was included, despite one having been standard on Mercury, Gemini, and Apollo—and on today’s Soyuz vehicles too. And, as the crew of Columbia fatefully discovered, strapping life-supporting hardware to the side (rather than the nose) of a launch vehicle, a machine that inevitably sheds debris during ascent, is a Very Bad Idea.

I could go on, but the point is this: if the United States had chosen instead to continue with stripped-down, Earth-orbit-capable Apollo-style spacecraft; stuck with existing launch vehicles and worked to improve and simplify them, maintaining a human-rated launch capability using expendable rockets; and put its effort into constructing a space station that could serve as both science laboratory and orbital way-station to deep space; then where would we be today?

Figure a few years to get started; the first module might not have flown until 1975 or so (unless Skylab became module 1, in which case 1973; but no matter). Get the Russians on board, as a more substantial (and genuine) act of détente than Apollo–Soyuz, and figure construction would take about as long as ISS—fourteen years.

That puts completion at 1989. More than twenty years ago, to get us to the point where we are now. And yet ahead of where we are now as well, because we still would have had usable flight hardware, like the Russians do with Soyuz. We would not have been staring down a gauntlet of untold years before private enterprise might fill the launch gap, as we are now. NASA estimates it will be five years or so until human spaceflight from American shores resumes; I’ll wager it will be at least ten years, perhaps as many as fifteen—at which point, ISS will be nearing retirement.

Where will America go next? There is no clear answer to that question. U.S. space policy is in “disarray,” to put it mildly. Massive budget cuts are coming to NASA. Robotic exploration, for all its scientific advancement, doesn’t spark the public interest: the arrival of the Dawn spacecraft at asteroid Vesta last week was met by a yawning apathy—even from me, and I have a distinct, specific interest in that particular mission.

That’s what makes me truly melancholy today. Not the end of the Space Shuttle program—it had a good run, and a lot of good things came out of it. Rather, the broad chasm standing before us, one lacking exploration to spark the imagination and challenges to inspire the next generation of scientists and engineers. As the classic IMAX film put it, “The Dream Is Alive.” But for how long?

Last weekend, that chilly rainy Sunday morning before Memorial Day, I walked over to the local bakery to pick up a few treats for a stay-at-home brunch. There was a bit of a line. Ahead of me was a man in his early 30s; ahead of him, a woman about the same age. They were not together. The woman was holding an infant maybe nine months old. The baby was whining and fussing and close to tears; she was looking over her mother’s shoulder at the man between us. The man, meanwhile, stared fixedly into space with a quiet glower of grouchiness.

It was a feedback loop: the man was grouchy because he was stuck in line next to a crying baby; the baby was crying because of the grouchy man. The mother, used to the noise, had—like most mothers would in similar conditions—tuned out.

I figured I alone had a chance to break this vicious cycle. I caught the eye of the baby and started making my usual goofy “hello, baby” face: wide, smiling eyes, puffed-out cheeks, a look of joyous surprise. It took the baby about half a second to switch from fussy to happy, her whole demeanour changing almost instantaneously.

At that moment, the woman shifted the weight to her other arm, meaning the baby was now looking over her mother’s other shoulder. The baby could no longer see me, her view blocked by the man in front of me—but her happy smile remained, and now was directed at the man. Within moments he was smiling too, and saying hello to the baby. The mother turned around and struck up a friendly conversation with the man, and by the time they were done ordering everybody was in a cheerful mood.

Neither the man nor the woman had any awareness of me. They never had a clue how my input had improved their Sunday morning. That it also saved me from standing in line with a crying baby was just icing on the cupcake.

For the past few months I’ve had a puzzler simmering on the back burner.

It was triggered by MeTV airing, the Sunday before Valentine’s Day, a marathon of Love, American Style. Amidst the episodes was the segment “Love and the Happy Days” which, as any classic TV fan knows, spun off into the long-running sitcom Happy Days—a show that spawned several spin-offs of its own. More recently, Fox aired an episode of Bones that was obviously—nay, blatantly—a set-up for its next mid-season replacement series, The Finder, something known in the biz as a “backdoor pilot.”

At any rate, here’s my puzzler: What’s the longest chain of spin-offs in television history?

Nor any of the other electronic readers on the market, of which the Kindle seems to be the most popular (and is definitely the best named) option. Even though it’s all too easy to use its name generically—like Kleenex or Band-aid—I’ll refrain and stick to “reader” for the remainder of this post.

Too many arguments against readers are aesthetic: they’re flat-out hard to read, or sunshine causes screen glare, or they lack the feel of a real book in your hands.

Whatever. Those arguments are all too easily dismissed as the carping of Luddites. “I cannot manoeuvre this horseless carriage with its newfangled round steering wheel. Give me the tiller of my flivver any day!”

I’m all for new gadgets, new ways of receiving information. I read enough on a computer screen, day in day out, that it’s much more habitual to me than, say, paging through a newspaper. And there is a lot of stuff out there that is out of print, and hard to find, but which has been digitized and placed online, and that’s a Very Good Thing. So there’s nothing inherently wrong with a reader, at least in principle, as a medium for the printed word.

But here’s why I will not purchase a reader.

A few weeks ago, I got onto an Apollo kick, spurred in part by my recent reading of Moondust by Andrew Smith. I wanted to re-watch the brilliant HBO series From the Earth to the Moon under the contrary-to-popular-opinion mindset, proposed by Smith, that President Kennedy’s famous challenge “killed ‘manned’ Deep-Space exploration, stone dead, for at least the next four decades and probably many more.” At the same time, I decided to re-read the primary source for the HBO series, Andrew Chaikin’s A Man on the Moon.

As I pulled Chaikin’s book from my shelf, I noted my handwritten inscription on the flyleaf, which records that I bought the book in December 1994. I’ve read this book all the way through a few times since then, and used its appendices as a reference many, many times more. I have certainly gotten my money’s worth (a $15.95 cover price) from this book.

Then I thought about that time span. In the sixteen-plus years since I bought that book, how many times have I replaced my desktop computer, my laptop, or their operating systems? A very rough estimate: 7. Technology changes, hardware obsolesces.

If I had a reader, and bought this (or any) book today… how easy would it be, sixteen years from now, to read that book again or even pick it up for a quick fact-check? Would my old reader still work? Would I still have it? Would I be able to load that old digital file into my current reader? How long would it take to find the file, much less upload it? By the time all that was done, would I still care about the fact I was attempting to check, or would I have already resorted to Googling the darn thing and hoping to find an accurate answer elsewhere?

Or—might I have to buy the book all over again?

Instead, I walk into the room that doubles as my home office and library, pull the book off the shelf—it’s right there, easily found under the “C”s—and get the answer I need following a quick riffle through the pages. No waiting for an upload to finish, nor any need to reboot.

In short, it’s all about time—both mine, and the book’s. I’ve spent enough time in my life waiting for computers to do what I’ve asked them to do that I’m no longer willing to wait for them; besides, a real book never needs a reboot. And while with just a modicum of care a real book will last well beyond my lifetime, I’m skeptical that an e-book will last even long enough for me to get around to reading it a second time.

But yes, sure, I have less tangible reasons as well for liking this book in its old-fashioned, physical form—and moreover, personal ones.

On a wintry day in December 1994, I’m reading this book for the first time, sitting in the fondly remembered Bagel–Fragel deli. In walks one of my bosses, Professor Brian Silver, chair of the political science department. He says hi, and asks what I’m reading. I show him the cover.

He snatches the book out of my hands and starts flipping quickly through the pages. What’s he up to? I wonder. He’s paging through too quickly to read anything; he’s not trying to get the gist of the book. It’s almost like he’s seen it before…

He gets to one of the photo pages between pages 430 and 431, stabs his finger at the top photo, and hands the open book back to me. “That’s my uncle,” he says, with a hint of pride.

I look at the photo, read its caption: Standing on a mountaintop in Colorado amid the primary and backup crews of Apollo 15 is Professor Lee Silver, Caltech geologist extraordinaire, the man most responsible for turning a bunch of type-A fighter jocks into able field geologists. (He was portrayed with delightful, eccentric earnestness by David Clennon in From the Earth to the Moon, one of the standout roles in the series.) His nephew’s familial pride is well-earned.

I remember that moment so distinctly because it was the first time in my life that I’d had an inkling of my own connection, albeit tertiary, to some of my biggest heroes—the men who walked on the moon. All at once they were real people, not just faceless spacesuited gnomes humping around a lumpy grey lunar landscape in old NASA footage, nor smiling, aviator-glasses-wearing, crew-cut-sporting military men in grainy photo reprints in a book. And I knew someone who knew someone who knew them.

And that moment, that spark of recognition that everyone in the world is interconnected in some way, is tied to this book, this particular copy of this book, the one I hold in my hands now.