Adam Victor Brandizzi

Shared posts

Next year, the Model M turns 30. But to many people, it’s still the only keyboard worth using. It was recently spotted on the desk of Minecraft creator Markus "Notch" Persson, attached to a gaming PC whose graphics cards alone cost thousands of dollars. "The Model M is basically the best keyboard ever made," he told PC Gamer. YouTube has dozens of Model M typing demos, unboxing videos, and sound comparisons between it and other mechanical keyboards. Since its introduction, the Model M has been the standard to meet for keyboard excellence.

"I enjoy using an iPad, it’s a wonderful device; the Kindle e-reader is a beautiful thing," says Brandon Ermita, a Princeton University IT manager. "But I could never write a story, I could never write my dissertation, I could never produce work with a touchscreen." Ermita is devoted to keeping the Model M alive: he recovers them from supply depots and recycling centers, sells them through his site, ClickyKeyboards, and runs a veritable private Model M museum. He estimates he’s put between 4,000 and 5,000 of the keyboards under the fingertips of aficionados over the past decade.

Like many people, I have vague memories of using a Model M as a kid. Last month, though, I took a trip to suburban New Jersey to meet Ermita and rediscover the magic of one of the most beloved keyboards of all time.

The day I visited his spacious office, two dozen or so keyboards were ensconced in a rack like fine wines. Above them, a single black keyboard sat protected in a glass case — a prototype Model M that’s one of the oldest pieces in Ermita’s collection. A hamper held recent acquisitions that still needed to be taken apart and cleaned of Doritos, sewing needles, and other pieces of detritus from their former owners. Looking at a Model M for the first time in years, what was most remarkable about the keyboard was just how unremarkable it looks. The Model M might be a relic of the past, but its DNA remains in almost every keyboard we use today.


The QWERTY keyboard layout was designed for typewriters in the late 19th century and quickly became universal. But by the time IBM released its first PC in 1981, layout was no longer a simple matter of spaces and capital letters — users now needed special keys to communicate with word processors, terminals, and "microcomputers." In hindsight, keyboards from the '70s and '80s range from familiar to counterintuitive to utterly foreign: in the IBM PC’s original 83-key keyboard — known as the PC / XT — the all-important Shift and Return keys were undersized and pushed to the side, their labels replaced by enigmatic arrows. The entire thing looks like a mess of tiny buttons and inexplicable gaps. In August of 1984, IBM announced the far more palatable PC / AT keyboard. Compared to the previous model, "the AT keyboard is unassailable," said PC Magazine. The AT couldn’t pass for a present-day keyboard: the function keys are arranged in two rows on the far left instead of along the top, Escape is nestled in the numeric keypad, and Ctrl and Caps Lock have been switched. Even so, it’s cleaner and far more comprehensible than its predecessor to modern eyes.

But IBM wanted something more than merely acceptable. In the early ’80s the company had assembled a 10-person task force to build a better keyboard, informed by experts and users. The design for the previous iteration was done "quickly, expeditiously — not the product of a lot of focus group activity," says David Bradley, a member of the task force who also happens to be the creator of the now-universal Ctrl+Alt+Delete function. The new group brought in novice computer users to test a friendlier keyboard, making important controls bigger and duplicating commonly used keys like Ctrl and Alt so they could be reached by either hand. Many of the keys were detachable from their bases, letting users swap them around as needed. And the Model M was born.

Introduced in 1985 as part of the IBM 3161 terminal, the Model M was initially called the "IBM Enhanced Keyboard." A PC-compatible version appeared the following spring, and it officially became standard with the IBM Personal System / 2 in 1987. The very first Model M that Ermita can verify — a terminal version — was produced on June 10th, 1985. That’s an awfully specific date, and it’s available because every Model M keyboard comes with an ID and production date printed on its back — Ermita does steady business with 20-somethings looking for a keyboard made on their birthday. He also curates the Model M Archive Project, a set of dauntingly long spreadsheets that track keyboards that have passed through his business as well as ones submitted (with ID, production date, and plant number) by other users.


Ermita’s collection includes many specialized, industry-specific keyboards, like one with baked-in labels for travel-agent booking, or a small model with the keys grouped into thirds, possibly for cashiers. "When computers were introduced, they were introduced as business machines," says Neil Muyskens, a former IBM manager. Vintage keyboards still bear stickers with commands for specific programs, and reviewers judged keyboards partly on how well they worked with software like WordStar and Lotus 1-2-3.

One reviewer was frustrated by the once again reshuffled keyboard layout that the Model M presented, but had a nagging suspicion that this design would stick. "I have the uneasy feeling IBM is telling me, ‘You’d better learn to love it, because this is the keyboard of the future,’" wrote a PC Magazine reviewer, in what would prove to be one of computing’s bigger understatements.

[Figure: annotated layouts of the IBM PC/XT, IBM PC/AT, IBM Model M, and Unicomp Ultra Classic keyboards, with key groups color-coded: control keys, function keys, typing (alphanumeric) keys, navigation keys, and numeric keypad.]

That layout of the Model M has been around so long that today it’s simply taken for granted. But the keyboard’s descendants have jettisoned one of the Model M’s most iconic features — "buckling springs," a key mechanism introduced in the PC / XT. Unlike mechanical switches that are depressed straight down like plungers, the Model M has a spring under each key that compresses until it snaps sideways, or "buckles," then springs back into place when released. The springs demand attention in a way that the soft, silent rubber domes in most modern keyboards don’t. This isn’t always a good thing; Model M owners sometimes ruefully post stories of spouses and coworkers who can’t stand the incessant chatter. But fans say the springs’ resistance and their audible "click" make it clear when a keypress is registered, reducing errors. Maybe more importantly, typing on the Model M is a special, tangible experience. Much like on a typewriter, the sharp click gives every letter a physical presence.

Soon after its emergence, Model M clones flooded the market. For its part, IBM gave new versions of the keyboard only the barest of redesigns. As a result, nostalgia for the Model M spans generations. "People contact me often via email, thanking me for reminding them of when they were a 20-something engineering student back in the 1980s," says Ermita. Younger buyers recall rearranging a classmate’s keyboard as a middle-school prank — "I’ve heard that story a few times."

In 1990, IBM spun off its US typewriter, keyboard, and printer business into a new company called Lexmark. Six years later, Lexmark dropped its keyboard division during what Muyskens calls an industry-wide shift towards cheaper products. IBM continued to commission products from a factory in Scotland and, briefly, a company called Maxi-Switch, but the last IBM Model M — as far as we know — rolled off the production line in 1999.


You can still buy an official Model M for about $80, but it won’t wear the IBM badge. After Lexmark left the business, Muyskens and other former employees began slowly purchasing the keyboard’s intellectual property rights and manufacturing equipment, working under the name Unicomp. "We’ve had to change the electronics," Muyskens says. "The clamshell cover material was changed back in ’99. But pretty much everything else has remained the same."

For some, that’s not authentic enough. "We get asked all the time — can we sell [someone] an IBM logo-ed product? And the answer is no, IBM owns the logo," says Muyskens. He says IBM still orders some keyboards for existing commercial customers, but if you want the old-school logo, you’ll have to turn to eBay or people like Ermita. For others, the inherent superiority and versatility of the Model M trump nostalgic notions of authenticity: some users are adapting them to work wirelessly over Bluetooth. One Reddit user posted a custom modification with backlit keys that evoke the over-the-top designs of Razer or Alienware. But with a limited supply, all Model M fans are typing on borrowed time.

"This is like oil. One day oil will run out. It’ll be a big crash," says Ermita. For now, though, that crash seems far away. The oldest Model Ms have already lasted 30 years, and Ermita hopes they’ll make it for another 10 or 20 — long enough for at least one more generation to use a piece of computing history.

The Model M is an artifact from a time when high-end computing was still the province of industry, not pleasure. The computer that standardized it, the PS / 2, sold for a minimum of $2,295 (or nearly $5,000 today) and was far less powerful and versatile than any modern smartphone. In the decades since, computers have become exponentially more capable, and drastically cheaper. But in that shift, manufacturers have abandoned the concept of durability and longevity: in an environment where countless third-party companies are ready to sell customers specialty mice and keyboards at bargain basement prices, it’s hard to justify investing more than the bare minimum.

That disposability has made us keenly aware of what we’ve lost, and inspired a passion for hardware that can, well, take a licking and keep on clicking. As one Reddit user recently commented, "Those bastards are the ORIGINAL gaming keyboards. No matter how much you abuse it, you’ll die before it does."

Doing Good Better opens, just as you would expect, with an uplifting story of a wonderful person with a brilliant idea to save the world. The PlayPump uses a merry-go-round to pump water. Fun transformed into labor and life-saving clean water! The energetic driver of the idea quits his job and invests his life in the project. Africa! Children on merry-go-rounds! Innovation! What could be better? It’s the perfect charitable meme, and the idea attracts millions of dollars of funding from celebrities like Steve Case, Jay-Z, Laura Bush and Bill Clinton.

Then MacAskill subverts the narrative and drops the bomb:

…despite the hype and the awards and the millions of dollars spent, no one had really considered the practicalities of the PlayPump. Most playground merry-go-rounds spin freely once they’ve gained sufficient momentum–that’s what makes them fun. But in order to pump water, PlayPumps need constant force, and children playing on them would quickly get exhausted.

The women whose labor was supposed to be saved end up pushing the merry-go-round themselves, which they find demeaning and more exhausting than using a hand-pump. Moreover, the device is complicated and requires extensive maintenance that cannot be done in the village. The PlayPump is a disaster.

MacAskill, however, isn’t interested in castigating donors for their first-world hubris. A frugal Scottish philosopher, MacAskill simply doesn’t like waste. Money, time and genuine goodwill are wasted in poorly conceived charitable efforts, and when lives are at stake that kind of waste is offensive. MacAskill is convinced that a hard-headed approach–randomized trials, open data, careful investigation of effectiveness–can do better. As he puts it:

When it comes to helping others, being unreflective often means being ineffective.

Of course, there are systematic problems with charitable giving. Most importantly, the feedback mechanism is never going to work as well when people are buying something to be consumed by others (as Milton Friedman explains). That problem, however, doesn’t explain why people do invest large amounts of money and their own time on wasteful projects. A large part of the problem is cultural. MacAskill asks us to consider the following thought experiment:

Imagine, for example, that you’re walking down a commercial street in your hometown. An attractive and frighteningly enthusiastic young woman nearly assaults you in order to get you to stop and speak with her. She clasps a tablet and wears a T-shirt displaying the words, Dazzling Cosmetics…she explains that she’s representing a beauty products company that is looking for investment. She tells you about how big the market for beauty products is, and how great the products they sell are, and how because the company spends more than 90 percent of its money on making the products and less than 10 percent on staff, distribution, and marketing, the company is extremely efficient and therefore able to generate an impressive return on investment. Would you invest?

MacAskill says, “Of course, you wouldn’t…you would consult experts…which is why the imaginary situation I described here never occurs.” Actually, it’s even worse than that, because what he describes does occur. It’s what boiler rooms do to sell stocks (à la The Wolf of Wall Street). Thus, charities raise money using precisely the techniques that in other contexts are widely regarded as deceitful, disreputable, and preying on the weak. Once you have seen how peculiar our charitable institutions are, it’s difficult to unsee.

Fortunately, effective altruism doesn’t require Mother Teresa-like levels of altruism or Spock-like levels of hard-headedness. What is needed is a cultural change, so that people become proud of how they give and not just how much they give. Imagine, for example, that it becomes routine to ask “How does GiveWell rate your charity?” Or, “GiveDirectly gives poor people cash–can you demonstrate that your charity is more effective than cash?” The goal is not the questioning. The goal is to give people the warm glow when they can answer.

The website of Berkshire Hathaway is famously crude. You might even call it ugly. But look at the HTML code of this spartan “WEB page” and you’ll see that the best term for it is simply old school. It is a blast from the internet’s past.

So let’s learn how a WEB page was made in the 1990s by looking at the 2015 website of one of the most successful companies ever.

Here is Berkshire’s site today versus the version saved by the Internet Archive on May 30, 1997. It’s basically the same, nearly 20 years on:

Berkshire Hathaway, then and now. (Berkshire Hathaway)

Style, lack thereof

There are two things fundamental to the aesthetics of every website: styles and positioning. What elements on the page look like, and how they are placed. Both the ’90s and current Berkshire sites have almost exactly the same styles. Perhaps it would be more accurate to say that neither has much in the way of styles at all.

For example, in 1997, Berkshire Hathaway applied exactly five color rules to its site:

Make the background white.

Unvisited links should be this color.

And visited links this color.

Links you are actively clicking on, this color.

All other text should be this color.

There were five colors in 1997. Two decades later, Berkshire has actually simplified the number of colors to four, dropping the one for active links. That’s some extreme minimalism. Even Craigslist, another site renowned for its simplicity, has 40 colors.
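In 1990s-era HTML, rules like these didn’t live in a style sheet at all; they were written as attributes directly on the `<body>` tag. A plausible sketch of what such markup looks like (the hex values here are illustrative browser defaults of the era, not Berkshire’s actual colors):

```html
<!-- Legacy HTML: all five color rules as attributes on <body>, no CSS.
     bgcolor = background, text = all other text,
     link / vlink / alink = unvisited / visited / actively clicked links. -->
<body bgcolor="#FFFFFF" text="#000000"
      link="#0000EE" vlink="#551A8B" alink="#FF0000">
  <!-- page content -->
</body>
```

Dropping the `alink` attribute is all it takes to go from the five rules of 1997 to today’s four.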

The language of web styles today is CSS, or Cascading Style Sheets. Berkshire does not speak this language.

CSS applies style rules to particular elements. So you could apply the rule color: magenta to some element to make it magenta, or set text-align: center for some text. Here’s what Quartz looks like with no CSS—not pretty:

Quartz without style.

Berkshire didn’t use CSS at all in 1997. That was common at the time. But it still doesn’t today, when CSS is virtually ubiquitous. Quartz has over 1,600 CSS rules; Craigslist, 1,300. Berkshire has zero. The method it uses for style and positioning—HTML attributes—is so obsolete that the documentation explaining how to use it either tells you not to or hasn’t been updated since 2002.

Redesign

The Berkshire site has undergone one dramatic design change: the addition of HTML tables around the beginning of 2000. This is why the links today are fancily positioned side-by-side, whereas in 1997 they were displayed in a vertical list. Even tables, however, are obsolete.

In the ’90s and into the aughts, makers of WEB sites used tables to position things. They even used tables to structure the entire layout of a site, for example to put some hyperlinks on the left side of the main content area. Check out the image below of the now-defunct Cigar Dude’s Smoking Room for the kind of layout that was commonly created with tables in the days of GeoCities, the free service that hosted many people’s first-ever web presence. (Cigar Dude’s site is actually built with frames, not tables, but in terms of positioning the idea is the same.)

A ’90s layout in full force. (The Smoking Room)
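The table trick described above can be sketched in a few lines. A hypothetical two-column, links-beside-content layout of the GeoCities era (the filenames and sizes are invented for illustration):

```html
<!-- Layout-by-table, 1990s style: one row, two cells.
     The left cell holds navigation links; the right cell holds content. -->
<table border="0" cellpadding="8">
  <tr>
    <td valign="top" width="150">
      <a href="page1.html">Link one</a><br>
      <a href="page2.html">Link two</a>
    </td>
    <td valign="top">
      Main content goes here.
    </td>
  </tr>
</table>
```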

Like the outdated HTML attributes above, tables are almost never used in this way today, and positioning too is now the job of CSS. One blog post from 2009 called CSS versus tables “the debate that won’t die,” but the debate died years ago, and tables lost.

The main reason tables lost is flexibility. Cigar Dude is trapped by his layout: if he wanted to move the links to the top or right of the page, he would have to change a lot of code, and doing something other than a basic grid might be downright impossible. CSS allows you to do a lot more and to change things more easily. Unlike Cigar Dude’s, Berkshire Hathaway’s tables are still around.
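For contrast, a minimal CSS sketch of the links-beside-content layout the article describes (class names and sizes are invented for illustration). Flipping the navigation to the other side becomes a small stylesheet tweak instead of restructuring the markup:

```html
<style>
  /* Positioning lives in the stylesheet, not in the markup. */
  .nav  { float: left; width: 150px; }  /* swap to float: right to flip sides */
  .main { margin-left: 170px; }         /* (and mirror this margin) */
</style>

<div class="nav">
  <a href="page1.html">Link one</a><br>
  <a href="page2.html">Link two</a>
</div>
<div class="main">Main content goes here.</div>
```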

Another thing that the Berkshire site does not have is JavaScript. That’s the language that allows websites to become interactive and dynamic. These days even the simplest sites contain a bit of JavaScript, at the very least to track who is coming to the site or display the current date.

With no CSS and no JavaScript, the Berkshire site is pure HTML. This is the technological equivalent of preparing for a road trip by printing out driving directions and burning a few CDs. It is Times New Roman. It is a Nokia in a belt holster.

So old it’s new

Berkshire Hathaway’s website is built on technologies so outdated that new web developers probably don’t even know they exist. Is that a bad thing?

Sites today will use whatever new technology helps them compete for your eyeballs. They track your movements and interactions. They yell at you to do things you’d rather not. So Berkshire’s folksiness is a relief. People praise its simplicity.

It is to the web what normcore is to fashion: an oasis of unpretentious calm. And because the tech is so old, it is supported by every browser and device going back twenty years. You could read it on a PalmPilot.

Like normcore, it’s unclear whether the folksiness is natural or intentional. Some Berkshire subsidiaries do have modern-looking websites, like Berkshire Home Services, so it’s not as if the company is fundamentally behind the times or unaware of JavaScript and CSS.

Maybe it’s a conscious choice. Berkshire Hathaway and chairman Warren Buffett’s down-home reputation gives shareholders confidence through nostalgia for simpler times, when investors really knew their companies and stuck with them. The website brings us back to a time when people really knew their websites, when they wrote HTML by hand, character by character. That is to say, this ’90s site is completely on-brand.

It does have a new competitor for greatest cash-to-simplicity ratio on the web, though: muskfoundation.org.

Everyone feels something when they’re in a really good starry place on a really good starry night and they look up and see this:

Some people stick with the traditional, feeling struck by the epic beauty or blown away by the insane scale of the universe. Personally, I go for the old “existential meltdown followed by acting weird for the next half hour.” But everyone feels something.

Physicist Enrico Fermi felt something too—”Where is everybody?”

________________

A really starry sky seems vast—but all we’re looking at is our very local neighborhood. On the very best nights, we can see up to about 2,500 stars (roughly one hundred-millionth of the stars in our galaxy), and almost all of them are less than 1,000 light years away from us (or 1% of the diameter of the Milky Way). So what we’re really looking at is this:

When confronted with the topic of stars and galaxies, a question that tantalizes most humans is, “Is there other intelligent life out there?” Let’s put some numbers to it—

As many stars as there are in our galaxy (100 – 400 billion), there are roughly an equal number of galaxies in the observable universe—so for every star in the colossal Milky Way, there’s a whole galaxy out there. All together, that comes out to the typically quoted range of between 10²² and 10²⁴ total stars, which means that for every grain of sand on every beach on Earth, there are 10,000 stars out there.

The science world isn’t in total agreement about what percentage of those stars are “sun-like” (similar in size, temperature, and luminosity)—opinions typically range from 5% to 20%. Going with the most conservative side of that (5%), and the lower end for the number of total stars (10²²), gives us 500 quintillion, or 500 billion billion sun-like stars.

There’s also a debate over what percentage of those sun-like stars might be orbited by an Earth-like planet (one with similar temperature conditions that could have liquid water and potentially support life similar to that on Earth). Some say it’s as high as 50%, but let’s go with the more conservative 22% that came out of a recent PNAS study. That suggests that there’s a potentially-habitable Earth-like planet orbiting at least 1% of the total stars in the universe—a total of 100 billion billion Earth-like planets.

So there are 100 Earth-like planets for every grain of sand in the world. Think about that next time you’re on the beach.

Moving forward, we have no choice but to get completely speculative. Let’s imagine that after billions of years in existence, 1% of Earth-like planets develop life (if that’s true, every grain of sand would represent one planet with life on it). And imagine that on 1% of those planets, the life advances to an intelligent level like it did here on Earth. That would mean there were 10 quadrillion, or 10 million billion intelligent civilizations in the observable universe.

Moving back to just our galaxy, and doing the same math on the lowest estimate for stars in the Milky Way (100 billion), we’d estimate that there are 1 billion Earth-like planets and 100,000 intelligent civilizations in our galaxy.[1]

[1] The Drake Equation provides a formal method for this narrowing-down process we’re doing.
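The chain of estimates above is easy to reproduce. A quick sketch in Python, using the article’s own conservative (and, for the last two steps, frankly speculative) percentages:

```python
# Drake-equation-style narrowing, with the article's conservative picks.
# None of these fractions are settled science.

stars_in_universe = 1e22   # low end of the 10^22 - 10^24 range
frac_sun_like     = 0.05   # conservative 5% of stars are sun-like
frac_earth_like   = 0.22   # PNAS estimate for Earth-like planets
frac_with_life    = 0.01   # pure speculation
frac_intelligent  = 0.01   # pure speculation

sun_like   = stars_in_universe * frac_sun_like          # 5e20: 500 billion billion
earth_like = sun_like * frac_earth_like                 # ~1e20: 100 billion billion
intelligent = earth_like * frac_with_life * frac_intelligent  # ~1e16: 10 million billion

# Same math for the Milky Way alone (low estimate: 100 billion stars)
stars_in_galaxy = 1e11
galaxy_earth_like = stars_in_galaxy * frac_sun_like * frac_earth_like
galaxy_civilizations = galaxy_earth_like * frac_with_life * frac_intelligent

print(f"{sun_like:.2e} sun-like stars in the universe")
print(f"{earth_like:.2e} Earth-like planets in the universe")
print(f"{intelligent:.2e} intelligent civilizations in the universe")
print(f"{galaxy_civilizations:,.0f} intelligent civilizations in the Milky Way")
```

Rounded to one significant figure, these come out to the numbers in the text: roughly 10²⁰ Earth-like planets universe-wide, about a billion in our galaxy, and on the order of 100,000 galactic civilizations.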

SETI (Search for Extraterrestrial Intelligence) is an organization dedicated to listening for signals from other intelligent life. If we’re right that there are 100,000 or more intelligent civilizations in our galaxy, and even a fraction of them are sending out radio waves or laser beams or other modes of attempting to contact others, shouldn’t SETI’s radio telescope arrays pick up all kinds of signals?

But it hasn’t. Not one. Ever.

Where is everybody?

It gets stranger. Our sun is relatively young in the lifespan of the universe. There are far older stars with far older Earth-like planets, which should in theory mean civilizations far more advanced than our own. As an example, let’s compare our 4.54 billion-year-old Earth to a hypothetical 8 billion-year-old Planet X.

If Planet X has a similar story to Earth, let’s look at where their civilization would be today (using the orange timespan as a reference to show how huge the green timespan is):

The technology and knowledge of a civilization only 1,000 years ahead of us could be as shocking to us as our world would be to a medieval person. A civilization 1 million years ahead of us might be as incomprehensible to us as human culture is to chimpanzees. And Planet X is 3.4 billion years ahead of us…

There’s something called The Kardashev Scale, which helps us group intelligent civilizations into three broad categories by the amount of energy they use:

A Type I Civilization has the ability to use all of the energy on their planet. We’re not quite a Type I Civilization, but we’re close (Carl Sagan created a formula for this scale which puts us at a Type 0.7 Civilization).

A Type II Civilization can harness all of the energy of their host star. Our feeble Type I brains can hardly imagine how someone would do this, but we’ve tried our best, imagining things like a Dyson Sphere.

A Type III Civilization blows the other two away, accessing power comparable to that of the entire Milky Way galaxy.
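Sagan’s version of the scale is a simple logarithmic interpolation on total power use: K = (log₁₀ P − 6) / 10, with P in watts. A quick sketch (humanity’s power figure below is an approximate order of magnitude, not a precise measurement):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is on the order of 2e13 W (approximate),
# which lands at the "Type 0.7" mentioned in the text.
print(round(kardashev(2e13), 2))

# The scale's calibration points under this formula:
print(kardashev(1e16))  # Type I   (planetary)
print(kardashev(1e26))  # Type II  (stellar)
print(kardashev(1e36))  # Type III (galactic)
```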

If this level of advancement sounds hard to believe, remember Planet X above and their 3.4 billion years of further development. If a civilization on Planet X were similar to ours and were able to survive all the way to Type III level, the natural thought is that they’d probably have mastered inter-stellar travel by now, possibly even colonizing the entire galaxy.

One hypothesis as to how galactic colonization could happen is by creating machinery that can travel to other planets, spend 500 years or so self-replicating using the raw materials on their new planet, and then send two replicas off to do the same thing. Even without traveling anywhere near the speed of light, this process would colonize the whole galaxy in 3.75 million years, a relative blink of an eye on a timescale of billions of years.
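The arithmetic behind a figure like that can be sketched as a doubling process: each probe makes two copies, so the number of colonized stars doubles once per "generation" (one interstellar hop plus ~500 years of replication). The numbers below are assumptions chosen to land in the same ballpark as the quoted estimate, not the original model:

```python
import math

# Assumed inputs (illustrative, not from a specific paper):
stars_to_reach = 100e9          # low-end star count for the Milky Way
years_per_generation = 100_000  # one slow interstellar hop + ~500 years building

# Coverage doubles each generation, so we need log2(stars) doublings.
generations = math.ceil(math.log2(stars_to_reach))
total_years = generations * years_per_generation

print(generations)  # doublings needed to cover every star
print(total_years)  # total elapsed time in years
```

With these assumptions it takes 37 doublings, or 3.7 million years, the same order of magnitude as the 3.75 million years in the text.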

Continuing to speculate, if 1% of intelligent life survives long enough to become a potentially galaxy-colonizing Type III Civilization, our calculations above suggest that there should be at least 1,000 Type III Civilizations in our galaxy alone—and given the power of such a civilization, their presence would likely be pretty noticeable. And yet, we see nothing, hear nothing, and we’re visited by no one.

So where is everybody?

_____________________

Welcome to the Fermi Paradox.

We have no answer to the Fermi Paradox—the best we can do is “possible explanations.” And if you ask ten different scientists what their hunch is about the correct one, you’ll get ten different answers. You know when you hear about humans of the past debating whether the Earth was round or if the sun revolved around the Earth or thinking that lightning happened because of Zeus, and they seem so primitive and in the dark? That’s about where we are with this topic.

In taking a look at some of the most-discussed possible explanations for the Fermi Paradox, let’s divide them into two broad categories—those explanations which assume that there’s no sign of Type II and Type III Civilizations because there are none of them out there, and those which assume they’re out there and we’re not seeing or hearing anything for other reasons:

Explanation Group 1: There are no signs of higher (Type II and III) civilizations because there are no higher civilizations in existence.

Those who subscribe to Group 1 explanations point to something called the non-exclusivity problem, which rebuffs any theory that says, “There are higher civilizations, but none of them have made any kind of contact with us because they all _____.” Group 1 people look at the math, which says there should be so many thousands (or millions) of higher civilizations, that at least one of them would be an exception to the rule. Even if a theory held for 99.99% of higher civilizations, the other .01% would behave differently and we’d become aware of their existence.

Therefore, say Group 1 explanations, it must be that there are no super-advanced civilizations. And since the math suggests that there are thousands of them just in our own galaxy, something else must be going on.

This something else is called The Great Filter.

The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.

If this theory is true, the big question is, Where in the timeline does the Great Filter occur?

It turns out that when it comes to the fate of humankind, this question is very important. Depending on where The Great Filter occurs, we’re left with three possible realities: We’re rare, we’re first, or we’re fucked.

1. We’re Rare (The Great Filter is Behind Us)

One hope we have is that The Great Filter is behind us—we managed to surpass it, which would mean it’s extremely rare for life to make it to our level of intelligence. The diagram below shows only two species making it past, and we’re one of them.

This scenario would explain why there are no Type III Civilizations…but it would also mean that we could be one of the few exceptions now that we’ve made it this far. It would mean we have hope. On the surface, this sounds a bit like people 500 years ago suggesting that the Earth is the center of the universe—it implies that we’re special. However, something scientists call “observation selection effect” suggests that anyone who is pondering their own rarity is inherently part of an intelligent life “success story”—and whether they’re actually rare or quite common, the thoughts they ponder and conclusions they draw will be identical. This forces us to admit that being special is at least a possibility.

And if we are special, when exactly did we become special—i.e. which step did we surpass that almost everyone else gets stuck on?

One possibility: The Great Filter could be at the very beginning—it might be incredibly unusual for life to begin at all. This is a candidate because it took about a billion years of Earth’s existence to finally happen, and because we have tried extensively to replicate that event in labs and have never been able to do it. If this is indeed The Great Filter, it would mean that not only is there no intelligent life out there, there may be no other life at all.

Another possibility: The Great Filter could be the jump from the simple prokaryote cell to the complex eukaryote cell. After prokaryotes came into being, they remained that way for almost two billion years before making the evolutionary jump to being complex and having a nucleus. If this is The Great Filter, it would mean the universe is teeming with simple prokaryote cells and almost nothing beyond that.

There are a number of other possibilities—some even think the most recent leap we’ve made to our current intelligence is a Great Filter candidate. While the leap from semi-intelligent life (chimps) to intelligent life (humans) doesn’t at first seem like a miraculous step, Steven Pinker rejects the idea of an inevitable “climb upward” of evolution: “Since evolution does not strive for a goal but just happens, it uses the adaptation most useful for a given ecological niche, and the fact that, on Earth, this led to technological intelligence only once so far may suggest that this outcome of natural selection is rare and hence by no means a certain development of the evolution of a tree of life.”

Most leaps do not qualify as Great Filter candidates. Any possible Great Filter must be a one-in-a-billion-type thing, where one or more total freak occurrences need to happen to provide a crazy exception—for that reason, something like the jump from single-cell to multi-cellular life is ruled out, because it has occurred as many as 46 times, in isolated incidents, on this planet alone. For the same reason, if we were to find a fossilized eukaryote cell on Mars, it would rule out the above “simple-to-complex cell” leap as a possible Great Filter (as well as anything before that point on the evolutionary chain)—because if it happened on both Earth and Mars, it’s almost definitely not a one-in-a-billion freak occurrence.

If we are indeed rare, it could be because of a fluky biological event, but it also could be attributed to what is called the Rare Earth Hypothesis, which suggests that though there may be many Earth-like planets, the particular conditions on Earth—whether related to the specifics of this solar system, its relationship with the moon (a moon that large is unusual for such a small planet and contributes to our particular weather and ocean conditions), or something about the planet itself—are exceptionally friendly to life.

2. We’re the First

For Group 1 thinkers, if the Great Filter is not behind us, the one hope we have is that conditions in the universe are just recently, for the first time since the Big Bang, reaching a place that would allow intelligent life to develop. In that case, we and many other species may be on our way to super-intelligence, and it simply hasn’t happened yet. We happen to be here at the right time to become one of the first super-intelligent civilizations.

One example of a phenomenon that could make this realistic is the prevalence of gamma-ray bursts, insanely huge explosions that we’ve observed in distant galaxies. In the same way that it took the early Earth a few hundred million years before the asteroids and volcanoes died down and life became possible, it could be that the first chunk of the universe’s existence was full of cataclysmic events like gamma-ray bursts that would incinerate everything nearby from time to time and prevent any life from developing past a certain stage. Now, perhaps, we’re in the midst of an astrobiological phase transition and this is the first time any life has been able to evolve for this long, uninterrupted.

3. We’re Fucked (The Great Filter is Ahead of Us)

If we’re neither rare nor early, Group 1 thinkers conclude that The Great Filter must be in our future. This would suggest that life regularly evolves to where we are, but that something prevents life from going much further and reaching high intelligence in almost all cases—and we’re unlikely to be an exception.

One possible future Great Filter is a regularly-occurring cataclysmic natural event, like the above-mentioned gamma-ray bursts, except they’re unfortunately not done yet and it’s just a matter of time before all life on Earth is suddenly wiped out by one. Another candidate is the possible inevitability that nearly all intelligent civilizations end up destroying themselves once a certain level of technology is reached.

This is why Oxford University philosopher Nick Bostrom says that “no news is good news.” The discovery of even simple life on Mars would be devastating, because it would cut out a number of potential Great Filters behind us. And if we were to find fossilized complex life on Mars, Bostrom says “it would be by far the worst news ever printed on a newspaper cover,” because it would mean The Great Filter is almost definitely ahead of us—ultimately dooming the species. Bostrom believes that when it comes to The Fermi Paradox, “the silence of the night sky is golden.”

Explanation Group 2: Type II and III intelligent civilizations are out there—and there are logical reasons why we might not have heard from them.

Group 2 explanations get rid of any notion that we’re rare or special or the first at anything—on the contrary, they believe in the Mediocrity Principle, whose starting point is that there is nothing unusual or rare about our galaxy, solar system, planet, or level of intelligence, until evidence proves otherwise. They’re also much less quick to assume that the lack of evidence of higher-intelligence beings is evidence of their nonexistence—emphasizing the fact that our search for signals stretches only about 100 light years away from us (0.1% across the galaxy) and suggesting a number of possible explanations. Here are 10:

Possibility 1) Super-intelligent life could very well have already visited Earth, but before we were here. In the scheme of things, sentient humans have only been around for about 50,000 years, a little blip of time. If contact happened before then, it might have made some ducks flip out and run into the water and that’s it. Further, recorded history only goes back 5,500 years—a group of ancient hunter-gatherer tribes may have experienced some crazy alien shit, but they had no good way to tell anyone in the future about it.

Possibility 2) The galaxy has been colonized, but we just live in some desolate rural area of the galaxy. The Americas may have been colonized by Europeans long before anyone in a small Inuit tribe in far northern Canada realized it had happened. There could be an urbanization component to the interstellar dwellings of higher species, in which all the neighboring solar systems in a certain area are colonized and in communication, and it would be impractical and purposeless for anyone to deal with coming all the way out to the random part of the spiral where we live.

Possibility 3) The entire concept of physical colonization is a hilariously backward concept to a more advanced species. Remember the picture of the Type II Civilization above with the sphere around their star? With all that energy, they might have created a perfect environment for themselves that satisfies their every need. They might have crazy-advanced ways of reducing their need for resources and zero interest in leaving their happy utopia to explore the cold, empty, undeveloped universe.

An even more advanced civilization might view the entire physical world as a horribly primitive place, having long ago conquered their own biology and uploaded their brains to a virtual reality, eternal-life paradise. Living in the physical world of biology, mortality, wants, and needs might seem to them the way we view primitive ocean species living in the frigid, dark sea. FYI, thinking about another life form having bested mortality makes me incredibly jealous and upset.

Possibility 4) There are scary predator civilizations out there, and most intelligent life knows better than to broadcast any outgoing signals and advertise their location. This is an unpleasant concept and would help explain the lack of any signals being received by the SETI satellites. It also means that we might be the super naive newbies who are being unbelievably stupid and risky by ever broadcasting outward signals. There’s a debate going on currently about whether we should engage in METI (Messaging to Extraterrestrial Intelligence—the reverse of SETI) or not, and most people say we should not. Stephen Hawking warns, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.” Even Carl Sagan (a general believer that any civilization advanced enough for interstellar travel would be altruistic, not hostile) called the practice of METI “deeply unwise and immature,” and recommended that “the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand.” Scary.[2] Thinking about this logically, I think we should disregard all the warnings and get the outgoing signals rolling. If we catch the attention of super-advanced beings, yes, they might decide to wipe out our whole existence, but that’s not that different than our current fate (to each die within a century). And maybe, instead, they’d invite us to upload our brains into their eternal virtual utopia, which would solve the death problem and also probably allow me to achieve my childhood dream of bouncing around on the clouds. Sounds like a good gamble to me.

Possibility 5) There’s only one instance of higher-intelligent life—a “superpredator” civilization (like humans are here on Earth)—which is far more advanced than everyone else and keeps it that way by exterminating any intelligent civilization once it gets past a certain level. This would suck. The way it might work is that it’s an inefficient use of resources to exterminate all emerging intelligences, maybe because most die out on their own. But past a certain point, the super beings make their move—because to them, an emerging intelligent species becomes like a virus as it starts to grow and spread. This theory suggests that whoever was the first in the galaxy to reach intelligence won, and now no one else has a chance. This would explain the lack of activity out there because it would keep the number of super-intelligent civilizations to just one.

Possibility 6) There’s plenty of activity and noise out there, but our technology is too primitive and we’re listening for the wrong things. Like walking into a modern-day office building, turning on a walkie-talkie, and when you hear no activity (which of course you wouldn’t hear because everyone’s texting, not using walkie-talkies), determining that the building must be empty. Or maybe, as Carl Sagan has pointed out, it could be that our minds work exponentially faster or slower than another form of intelligence out there—e.g. it takes them 12 years to say “Hello,” and when we hear that communication, it just sounds like white noise to us.

Possibility 7) We are receiving contact from other intelligent life, but the government is hiding it. The more I learn about the topic, the more this seems like an idiotic theory, but I had to mention it because it’s talked about so much.

Possibility 8) Higher civilizations are aware of us and observing us (AKA the “Zoo Hypothesis”). For all we know, super-intelligent civilizations exist in a tightly-regulated galaxy, and our Earth is treated like part of a vast and protected national park, with a strict “Look but don’t touch” rule for planets like ours. We wouldn’t notice them, because if a far smarter species wanted to observe us, it would know how to easily do so without us realizing it. Maybe there’s a rule similar to Star Trek’s “Prime Directive” which prohibits super-intelligent beings from making any open contact with lesser species like us or revealing themselves in any way, until the lesser species has reached a certain level of intelligence.

Possibility 9) Higher civilizations are here, all around us. But we’re too primitive to perceive them. Michio Kaku sums it up like this:

Let’s say we have an ant hill in the middle of the forest. And right next to the ant hill, they’re building a ten-lane super-highway. And the question is, “Would the ants be able to understand what a ten-lane super-highway is? Would the ants be able to understand the technology and the intentions of the beings building the highway next to them?”

So it’s not that we can’t pick up the signals from Planet X using our technology, it’s that we can’t even comprehend what the beings from Planet X are or what they’re trying to do. It’s so beyond us that even if they really wanted to enlighten us, it would be like trying to teach ants about the internet.

Along those lines, this may also be an answer to “Well if there are so many fancy Type III Civilizations, why haven’t they contacted us yet?” To answer that, let’s ask ourselves—when Pizarro made his way into Peru, did he stop for a while at an anthill to try to communicate? Was he magnanimous, trying to help the ants in the anthill? Did he become hostile and slow his original mission down in order to smash the anthill apart? Or was the anthill of complete and utter and eternal irrelevance to Pizarro? That might be our situation here.

Possibility 10) We’re completely wrong about our reality. There are a lot of ways we could just be totally off with everything we think. The universe might appear one way and be something else entirely, like a hologram. Or maybe we’re the aliens and we were planted here as an experiment or as a form of fertilizer. There’s even a chance that we’re all part of a computer simulation by some researcher from another world, and other forms of life simply weren’t programmed into the simulation.

________________

As we continue along with our possibly-futile search for extraterrestrial intelligence, I’m not really sure what I’m rooting for. Frankly, learning either that we’re officially alone in the universe or that we’re officially joined by others would be creepy, which is a theme with all of the surreal storylines listed above—whatever the truth actually is, it’s mindblowing.

Beyond its shocking science fiction component, The Fermi Paradox also leaves me with a deep humbling. Not just the normal “Oh yeah, I’m microscopic and my existence lasts for three seconds” humbling that the universe always triggers. The Fermi Paradox brings out a sharper, more personal humbling, one that can only happen after spending hours of research hearing your species’ most renowned scientists present insane theories, change their minds again and again, and wildly contradict each other—reminding us that future generations will look at us the same way we see the ancient people who were sure that the stars were the underside of the dome of heaven, and they’ll think “Wow, they really had no idea what was going on.”

Compounding all of this is the blow to our species’ self-esteem that comes with all of this talk about Type II and III Civilizations. Here on Earth, we’re the king of our little castle, proud ruler of the huge group of imbeciles who share the planet with us. And in this bubble with no competition and no one to judge us, it’s rare that we’re ever confronted with the concept of being a dramatically inferior species to anyone. But after spending a lot of time with Type II and III Civilizations over the past week, our power and pride are seeming a bit David Brent-esque.

That said, given that my normal outlook is that humanity is a lonely orphan on a tiny rock in the middle of a desolate universe, the humbling fact that we’re probably not as smart as we think we are, and the possibility that a lot of what we’re sure about might be wrong, sounds wonderful. It opens the door just a crack that maybe, just maybe, there might be more to the story than we realize.


IT BEGAN with some marshmallows. In the 1960s Walter Mischel, a psychologist then working at Stanford University, started a series of experiments on young children. A child was left alone for 15 minutes with a marshmallow or similar treat, with the promise that, if it remained uneaten at the end of this period, a second would be added. Some of the children, who were aged four or five at the time, succumbed to temptation before time was up. Others resisted, and held out for the reward.

Then, it was Dr Mischel’s turn to wait. He followed the children’s progress as they grew up. Those who had resisted, he found, did better at school than those who had given in. As adults they got better jobs, were less likely to use drugs and got into trouble with the law less frequently. Moreover, children’s family circumstances suggested that impulsive behaviour was as much learned as inherited. This suggested that it could be unlearned—improving the child in question’s chances in life.

Study after study has confirmed Dr Mischel’s insight, and it is now starting to change public policy—particularly in America, where the Administration for Children and Families, a part of the Department of Health & Human Services, is trying to develop programmes that will teach children the art of self-control. Recent observations, however, raise the possibility that developing self-control is not always an unalloyed good.

Work published two years ago by Gene Brody of the University of Georgia, who looked at a group of young black Americans, showed that those who exhibited self-control as teenagers did indeed get the expected benefits. But if such self-controllers came from deprived backgrounds, they developed higher blood pressure, were more likely to be obese and had higher levels of stress hormones than their less-self-controlled peers. That correlation did not apply to people who started farther up the social ladder.

Dr Brody and his colleagues have followed this study with one that comes to an equally astonishing conclusion: for people born at the bottom of the social heap, self-control speeds up the process of ageing. This research, just published in the Proceedings of the National Academy of Sciences, looked at DNA methylation, a phenomenon which involves the addition of chemicals called methyl groups to genetic material in chromosomes.

Cells use methylation to shut down genes whose services are no longer needed, and observation has shown that people’s methylation patterns change in predictable ways as they get older—thus acting as markers of a cell’s apparent age. Dr Brody and his colleagues followed almost 300 black American teenagers of different backgrounds as they aged from 17 to 22. For the first few years the researchers assessed their volunteers’ levels of self-control, and also looked for signs of depression, aggression and drug use. They assessed, too, those volunteers’ socioeconomic backgrounds. But the last examination, when participants were 22 years old, was different. Then, the researchers took a blood sample, recorded the DNA-methylation patterns of cells in it, and worked out how much these deviated from the pattern expected at that particular age.

As the chart shows, for people from high-status backgrounds, higher self-control meant lower cellular ages. For those whose background was low-status, the reverse was true. Their cells were ageing faster. Add this to the previous data on blood pressure, stress and obesity, and the medical prognosis of these initially low-status individuals does not look promising.

Dr Brody’s findings are both intriguing and worrying. No biologist would find surprising the idea that an animal—any animal—which was rising through its social hierarchy would find the experience stressful. And research into gene methylation, part of a field called epigenetics, suggests changing methylation patterns are a common response to changing circumstances as well as changing age, as the body’s physiology struggles to keep up.

That such epigenetic changes happen to human beings is a salutary reminder that people are subject to the same rules as other species. Unlike other species, though, people can change their circumstances in rational ways: the lesson of the marshmallows shows that. If Dr Brody’s result is confirmed, the challenge it poses will be to work out how to circumvent the adverse effects of self-control.

Although I do not agree with everything, it is a great text that adds a lot of context to our world view. This is just a small synopsis; you should go to the site.

Something similar happened with storage, where the growth rate was even faster than Moore's Law. I remember the state-of-the-art 1MB hard drive in our computer room in high school. It cost a thousand dollars.

Here's a photo of a multi-megabyte hard drive from the seventies. I like to think that the guy in the picture didn't have to put on the bunny suit, it was just what he liked to wear.

Modern hard drives are a hundred times smaller, with a hundred times the capacity, and they cost a pittance. Seagate recently released an 8TB consumer hard drive.

But again, we've chosen to go backwards by moving to solid state storage, like you find in smartphones and newer laptops. Flash storage sacrifices capacity for speed, efficiency and durability.

Or else we put our data in 'the cloud', which has vast capacity but is orders of magnitude slower.

These are the victories of good enough. This stuff is fast enough.

Intel could probably build a 20 GHz processor, just like Boeing can make a Mach 3 airliner. But they won't. There's a corollary to Moore's law: every time you double the number of transistors, your production costs go up. Every two years, Intel has to build a completely new factory and production line for this stuff. And the industry is turning away from super high performance, because most people don't need it.

The hardware is still improving, but it's improving along other dimensions, ones where we are already up against hard physical limits and can't use the trick of miniaturization that won us all that exponential growth.

Battery life, for example. The limits on energy density are much more severe than on processor speed. And it's really hard to make progress. So far our advances have come from making processors more efficient, not from any breakthrough in battery chemistry.

Another limit that doesn't grow exponentially is our ability to move information. There's no point in having an 8 TB hard drive if you're trying to fill it over an AT&T network. Data constraints hit us on multiple levels. There are limits on how fast cores can talk to memory, how fast the computer can talk to its peripherals, and above all how quickly computers can talk to the Internet. We can store incredible amounts of information, but we can't really move it around.

So the world of the near future is one of power constrained devices in a bandwidth-constrained environment. It's very different from the recent past, where hardware performance went up like clockwork, with more storage and faster CPUs every year.

And as designers, you should be jumping up and down with relief, because hard constraints are the midwife to good design. The past couple of decades have left us with what I call an exponential hangover.

"Certain damp crevices were of great interest to them; other damp crevices were carefully avoided. There appeared to be little logic behind the distinction, but there it was all the same."

"Hands that had very recently been used to pet a cat were now inserted inside another human being's vulnerabilities."

"Although both parties were close enough to one another to be heard using only a very quiet voice, they both insisted on speaking to one another quite loudly, preferring vague and meaningless vocalizations over specific words. Had they used words familiar to the both of them, things might not have become so confusing."

Time Stands Still is a wonderful puzzle platformer that takes years to complete – four hundred years, to be precise!

You play an ancient stone being who has foreseen a huge disaster that will take place in four hundred years’ time, and must travel across the land solving puzzles that involve the passage of time. You’re not the most agile of creatures, but you do have time on your side – being made of stone means that you can stand and wait a LONG time. This comes in handy in a variety of ways on your journey – from waiting for a tree to grow to letting the sea freeze over so you can walk on it.

It’s a very clever premise, well implemented and wrapped up in some charming pixel art animation. Time waits for no-one, but if you’re made of stone there’s no rush!

In a fantastic attempt at urban renewal, the government of Mexico recently collaborated with a group of local street artists called Germen Crew to paint a 20,000 square meter mural across the facades of 209 homes in the district of Palmitas in Pachuca, Mexico. The project was intended to bring about visual and social transformation by temporarily providing jobs and, according to some reports, reducing crime and violence in the neighborhood. You can see a few more photos of the endeavor here. (via StreetArtNews)