Ok, ok, so the advances are all wild speculation... but you know what else is completely ridiculous? The "consumer impact".

How the hell would volumetric and flexible displays be anything BUT massively consumer-driven? Procedural storytelling is the same.

Also, the key for a graphic like this should be at the top. It needs to be viewed at a large size, so its current layout forces you to scroll down, then back up. Obtuse "data" presentation isn't really beautiful.

What they're saying is that the consumer impact isn't particularly major compared to the technology involved. I mean, flexible displays? They're cool, no argument, but nobody out there is thinking "wow, the ability to bend my laptop monitor would change my life."

Gesture recognition and speech recognition are the exact opposite - people are already raving about the half-baked, barely functional approaches that Apple and Google have put out. Making those work better will cause an incredible consumer impact.

I know. All of the future tech in SimCity 2000 was based on hypothetical but possible things. It's just that I think SC2000 and this graphic are the only times in my 27-year existence that I've heard of them.

Important to keep in mind that technology grows exponentially. It took us 66 years to get from just above the dunes of Kitty Hawk, NC, to the Sea of Tranquility. I wouldn't be surprised if in our lifetimes, we will be absolutely blown away with technology that emerges.

I wouldn't be surprised if in our lifetimes, we will be absolutely blown away with technology that emerges.

I'm 30 and I already feel this way. Yesterday I copied a bunch of files from one computer to another. I realized, looking at the copy dialog, that the computer was happily moving five times the size of my entire first hard drive . . . every second.

I can get a chip the size of my thumbnail that could store my first computer a thousand times over. For $30 I could get a computer the size of a deck of cards that stores the entire state of that computer in RAM, hard drive included, and pretends to be it . . . ten times, simultaneously.

A Galaxy Nexus smartphone has twice as much RAM as an Xbox 360.

Every few months I hear someone lamenting that the pace of progress is going to slow down soon because of (insert physical limit here). The thing is, I've been hearing this lament for a decade straight. It ain't slowing down. If anything, it's speeding up.

We're within a stone's throw of duplicating the computing power of the human brain. Things are about to get crazy.

But the last decade hasn't been as rapid as the one before it or the one before that.

What on earth are you talking about? You realize that Google barely existed a decade ago, right? That Facebook didn't exist and Wikipedia had just gotten started? That the iPod had just been released, and the whole always-connected smartphone revolution was nothing more than a twinkle in the eye of a few clever manufacturers?

We have millions of times more information at our fingertips, 24/7, than we did ten years ago. That's important. That's really important.

Many of these are just brand names or incremental steps. Many of them are also inaccurate, or inextricably linked to each other - you're certainly not going to see a GUI before graphics cards, y'know?

If you're going to mention the NES as being important, why not the Wii? (Or the Atari 2600, for that matter - 1977.) If you're going to mention WordPerfect, why not OpenOffice?

The first computer virus hit in 1971. Email was started in 1965. The first PC sound card was 1990, and of course, game consoles before that - like the 1977 Atari 2600 - had integrated sound systems. The first online chat system was 1974, video games date back to 1962.

I don't get why you're focusing on desktop RAM as an important factor. Early on, RAM was a bottleneck. Today it isn't. Most people have more RAM than they'll ever use. Today, the big push is price and portability - I can get a portable device with 1GB of RAM for $200. Compare that to the PDA of a decade ago, and what is that? Oh, it's a 128-fold increase, at less than half the price.

(Although note that the PEG-S300 had no storage besides its RAM and a small chunk of ROM, while the Nexus 7 has 8GB of internal storage. If we count those too, we're up to a 768-fold increase.)
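Just to show the arithmetic behind those figures - a back-of-envelope sketch in Python, where the PEG-S300's ~4 MB of ROM is my assumption and the rest comes from the comments above:

    # Rough check on the "128-fold" and "768-fold" figures.
    # The PEG-S300's ~4 MB of ROM is an assumed round number.
    peg_s300_ram_mb = 8
    peg_s300_rom_mb = 4              # assumption
    nexus7_ram_mb = 1024             # 1 GB
    nexus7_storage_mb = 8192         # 8 GB internal flash

    ram_only = nexus7_ram_mb / peg_s300_ram_mb
    everything = (nexus7_ram_mb + nexus7_storage_mb) / (peg_s300_ram_mb + peg_s300_rom_mb)
    print(f"RAM only: {ram_only:.0f}x, RAM + storage: {everything:.0f}x")  # 128x and 768x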

First mainstream home gaming system. Atari came out in '77 but was a flop. But the NES is the grandfather of all modern console gaming.

WordPerfect

Modern word processing. Though really, saying GUI covered this sufficiently.

The first computer virus hit in 1971. Email was started in 1965. The first PC sound card was 1990, and of course, game consoles before that - like the 1977 Atari 2600 - had integrated sound systems. The first online chat system was 1974, video games date back to 1962.

These all have varying levels of debatability. IRC really was the father of most chat systems, though. Even WoW today uses a type of IRC for its chat system. Computer games became ubiquitous. Yeah, Pong existed long before then, but most people in North America saw their first computer game in the '80s.

price and portability

Yep. This has been the major improvement of the 2000s. Laptops became common and overtook desktops. Totally agree.

What an odd graph. It completely disregards parallelism, both in the form of multicore CPUs and in the form of vector processors.

Yes, I can't really argue: if you entirely ignore the kind of major performance improvement we've seen in the last half-decade, then there haven't been any performance improvements. I'm not sure why people are so eager to ignore those advances, though - why do we care about performance per CPU core when performance per dollar is drastically more important?

Maybe this graph and this graph show what I mean a little better? It's per-die, not per-dollar, but it should get the idea across.

That isn't skyscraper-type construction though. You are talking about a massive base that tapers up - huge quantities of steel. The math is actually pretty simple when you aren't going into detail; I remember doing it in my second year of university, though I don't recall the height limit I got for steel.

But that doesn't really help; in order to really utilize a space elevator, it needs to extend past, or be counterweighted at, geosynchronous orbit.

An Earth space elevator will require carbon nanotubes. We don't yet have a method for making nanotubes that long and strong, but theoretically they can be made strong enough, with roughly 50% more strength than what would be needed.
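For anyone curious about the "simple math" version: the usual back-of-envelope number is the breaking length L = sigma / (rho * g), i.e. how long a constant-cross-section strand of a material can be before it fails under its own weight. A rough sketch in Python - the strength and density values here are assumed round numbers, not figures from the graphic:

    # Breaking length L = sigma / (rho * g): how long a constant-cross-section
    # strand of a material can be before it snaps under its own weight.
    g = 9.81  # m/s^2

    materials = {
        # name: (tensile strength in Pa, density in kg/m^3) - assumed round numbers
        "structural steel": (500e6, 7850),
        "carbon nanotube (theoretical)": (60e9, 1300),
    }

    for name, (sigma, rho) in materials.items():
        breaking_length_km = sigma / (rho * g) / 1000
        print(f"{name}: ~{breaking_length_km:,.0f} km")

Steel comes out around 6-7 km; theoretical nanotubes land in the thousands of km. That gap is why a tapered nanotube tether counterweighted past geosynchronous orbit is even worth discussing, while a steel one isn't.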

This is funny. First off, what does the center of a circle mean? Is that the year when it's 'invented', when it hits the consumer sphere, or when it becomes 'impactful'? (Where would computers go - 1950? 1980? 1990? 2000?)

Second, some of these are completely out of order. Seriously, Weather Control (something no one has even proposed a mechanism for, unless we're talking about the rudimentary and insane policy China used where they shoot clouds) in 8 years, while carbon sequestration and vertical farming, which both have mechanisms but are just currently impractical/undesired, are put at 15 and 25 years?

How does optogenetics happen before gene therapy? Optogenetics is the process of knocking a gene into a cell and then controlling it with light; it is a form of gene therapy (certainly a less direct one, as you can make the cells outside the body and put them in, but this would still be 'gene therapy'). He puts synthetic biology after the three 'main targets' of synthetic biology. Maybe he means personal-computer-style synthetic biology?

I do appreciate the fact he didn't include hard AI.

I'd be really interested to approach this question scientifically - perhaps look at the number of citations and the number of patents in a field and try to predict based on previous patterns in the literature. This, on the other hand, looks like rampant speculation.

Awesome! Thanks for making this. I spent like 2 hours going through it and talking about it with friends; it's a really interesting and controversial piece. I'll definitely email or PM when I have some free time - I'm interested in hearing more.

I'd be really interested to approach this question scientifically - perhaps look at the number of citations and the number of patents in a field and try to predict based on previous patterns in the literature. This, on the other hand, looks like rampant speculation.

Well, you could do it using Google Scholar to collect the 'articles citing this article', and a lot (most?) of sites have a free 'export references' button. Patents are generally free online. But (a) there are no APIs, which makes the whole process a real bear (PubMed has APIs but is field-limited and might not have a cited-by field), and (b) this is big data on a scale I can't even fathom dealing with. I mean, even going just two citations deep for one paper, you might be talking about millions of connections. Then text-parsing out a 'field' or invention scope (from the abstract/keywords, as that's the only thing free online) and working out how to analyze that given the complexity of the data seems very hard.
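The crude publication-count side of it is at least doable with the PubMed API mentioned above; it's the citation graph that's the real bear. A minimal sketch, assuming you only want raw hit counts per year for a search term ('optogenetics' here is just an example query, not anything from the graphic):

    # Count PubMed hits per publication year for a search term, via NCBI's
    # public E-utilities esearch endpoint. No citation graph, no patents -
    # just the crude "how fast is this field growing" signal.
    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term, year):
        query = f"{term} AND {year}[pdat]"  # [pdat] restricts to publication date
        params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmode": "json"})
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    for year in range(2000, 2012):
        print(year, pubmed_count("optogenetics", year))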

These #G terms for generations of mobile networks/wireless systems have a different actual meaning than what cell providers would like you to infer from their advertisements. 4G is just the "fourth generation"; generations have traditionally lasted almost exactly 10 years since the "first generation" appeared in 1979 in Japan (it hit the US in 1983).

These generations don't refer to a specific technology, but rather specifications potential technologies would have to meet to be considered a member of that "generation."

For 4G, the basic requirements include peak speeds of 100 megabits per second (Mbit/s) for high-mobility communication (such as from trains and cars) and 1 gigabit per second (Gbit/s) for low-mobility communication (such as pedestrians and stationary users). Most of what is advertised as 4G right now (in the US at least) is LTE networks that don't come close to meeting those peak speeds.
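To put those targets in everyday terms, here's a quick back-of-envelope conversion (the 700 MB file size is just an arbitrary example I picked, and these are peak rates, not real-world throughput):

    # Rough download times at the ITU 4G peak-speed targets.
    FILE_MB = 700  # arbitrary example, roughly a CD image

    for label, mbit_per_s in [("high-mobility target", 100), ("low-mobility target", 1000)]:
        seconds = FILE_MB * 8 / mbit_per_s  # 8 bits per byte
        print(f"{label}: {mbit_per_s} Mbit/s -> ~{seconds:.0f} s for {FILE_MB} MB")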

The ITU-R (International Telecommunication Union - Radiocommunication Sector), the UN body that sets these standards, decided in late 2010 that it would allow companies advertising LTE as 4G to keep doing so, since LTE does provide significant improvements over previous technology.

We'll probably have a 5G standard come out to represent technology in the 2020s, but what exactly it will entail will depend on what we develop this decade.

tl;dr, we don't really have 4G yet, but LTE networks are allowed to advertise as 4G because they're seen as a significant step towards reaching the standard.

4G isn't yet mainstream. The idea was to indicate when each individual tech reaches maturity/the mainstream, which wasn't the case for 4G a year ago. (U.S. mobile providers' pseudo-4G branding notwithstanding!)

Alright, here we go. Biomedical researcher here, working in the field of cancer therapeutics, specifically via oncolytic viruses; I'm part of a consortium of a handful of researchers in Canada currently working on this. I regularly use a number of the "future" technologies you've indicated in my research - specifically, gene therapy. This is a commonly used technique across many disciplines. It has not, however, come of age in terms of actual medical applications with the success that was predicted back in the late '90s when the technique was first developed.

Nanomedicines also already exist, as do artificial retinas; rapid personal gene sequencing is VERY close (J. Craig Venter); in-vitro meat has been done; synthetic biology has been done (google J. Craig Venter); and anti-aging drugs... well, basically that's been done since the first drug was used back when we looked more like apes. True anti-aging drugs also exist, and I've played with some of them in a drosophila lab I worked in. Also, personalized medicine basically already exists, though its success is intrinsically tied to the success of whole-genome sequencing. When we can sequence a genome for less than $1000 and in under an hour, personalized medicine will explode.