I’ve decided to once again leave my prediction for when human level AGI will arrive unchanged. That is, I give it a log-normal distribution with a mean of 2028 and a mode of 2025, under the assumption that nothing crazy happens like a nuclear war. I’d also like to add to this prediction that I expect to see an impressive proto-AGI within the next 8 years. By this I mean a system with basic vision, basic sound processing, basic movement control, and basic language abilities, with all of these things being essentially learnt rather than preprogrammed. It will also be able to solve a range of simple problems, including novel ones.

Some of you might remember the talk I gave at the 2010 Singularity Summit about Algorithmic IQ, or AIQ for short. It was an attempt to convert the theoretical Universal Intelligence Measure into a working practical test of machine intelligence. The results were preliminary, but it seemed to work…

It’s now over a year later, so I guess some of you are wondering what happened to AIQ! I’ve been very busy working on other cool stuff; however, Joel Veness and I have been tinkering with AIQ in our spare time. We’re pleased to report that it has continued to perform well, surprisingly well in fact. There was some trickiness to do with getting it to work efficiently, but that aside, it worked perfectly straight out of the box.

We recently wrote a paper on AIQ that was accepted to the Solomonoff Memorial Conference. You can get the paper here, the talk slides here, and we have also released all the Python AIQ source code here. It’s designed to be easy to plug in your own agents, or other reference machines, if you fancy having a go at that too.

If you’re not sure you want to read any of that, here’s the summary:

We implemented the simple BF reference machine and extended it in the obvious ways to compute RL environments. We then sampled random BF programs to generate the environments, and tested against each of these. This can be a bit slow, so we used variance reduction techniques to speed things up. We then implemented a number of agents. Firstly, MC-AIXI, a model-based RL agent that can learn to play simple games such as TicTacToe, Kuhn poker and PacMan, but is rather slow to learn. Then HLQ(lambda), a tabular RL agent similar to Q learning but with an automatic learning rate. Then Q(lambda), a standard RL agent, and Q(0), a weaker special case. Finally, Freq, a simple agent that just takes the most rewarding action most of the time, occasionally trying a random action. There was also a random agent, but that always got an AIQ of zero, as expected. The results appear below, across various episode lengths:

The error bars are 95% confidence intervals for the estimates of the mean. As you can see, AIQ orders the agents exactly as we would expect, including picking up the fact that MC-AIXI, while quite powerful compared to the other agents, is also rather slow to learn and thus needs longer episode lengths. We ran additional tests where we scaled the size of the context used by MC-AIXI, and the amount of search effort used, and in both cases the AIQ score scaled sensibly. See the talk slides for more details, or the paper itself.
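For concreteness, the Freq baseline and the error bars above are both easy to sketch. Below is my own minimal reconstruction in Python (the class and function names are mine, not from the released AIQ code): an epsilon-greedy frequency agent, plus the usual normal-approximation 95% confidence interval for a sample mean.

```python
import math
import random

class FreqAgent:
    """Epsilon-greedy frequency agent: mostly takes the action with the
    best average reward so far, occasionally trying a random action."""

    def __init__(self, num_actions, epsilon=0.1):
        self.num_actions = num_actions
        self.epsilon = epsilon
        self.totals = [0.0] * num_actions   # summed reward per action
        self.counts = [0] * num_actions     # times each action was tried
        self.last_action = None

    def act(self):
        if self.last_action is None or random.random() < self.epsilon:
            a = random.randrange(self.num_actions)  # explore
        else:
            # exploit: pick the highest average reward so far (untried -> 0)
            avg = [t / c if c else 0.0
                   for t, c in zip(self.totals, self.counts)]
            a = max(range(self.num_actions), key=lambda i: avg[i])
        self.last_action = a
        return a

    def observe(self, reward):
        self.totals[self.last_action] += reward
        self.counts[self.last_action] += 1

def mean_ci95(scores):
    """Sample mean with the half-width of a 95% normal-approximation CI."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = 1.96 * math.sqrt(var / n)  # 95% half-width: 1.96 * std error
    return mean, half
```

On a toy two-action bandit the agent quickly locks onto the better action; the real AIQ scores are averages of such returns over many sampled BF environments.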

Prof. Rich Sutton, probably the most famous person in the field of reinforcement learning, gave a talk today at the Gatsby Unit. I was expecting a standard introduction to reinforcement learning to begin with, but it wasn’t to be. Instead he kicked off with 20 minutes about the singularity.

Audience: So when do you expect human level AI?

Rich: Roughly 2030.

Whether or not you agree, views like this seem to be becoming more common in academia.

Looking over my predictions for the teenies from a year ago, they already look pretty lame. Take 1/3 off USA’s PPP GDP and you already get China, the latest Sony portable device has a 4 core processor, Intel’s latest set of CPUs are once again pretty awesome, schemes to let you pay for stuff with your phone are already getting under way (via both screen bar codes and near field communication), and a graphics card review I read the other day noted that the most graphically demanding games on high resolution monitors with all the graphical bells and whistles switched on now run very well on the latest “mid range” graphics cards.

At the time that I made my teenies predictions I thought they seemed a bit like predicting the obvious. But I’m now starting to wonder whether many of my predictions could have been more tightly assigned to the following 3-4 years, rather than the next decade.

One thing I’ve noted in the past is that it’s usually easier to predict fundamental things like FLOPS per dollar than it is to predict how these technological fundamentals will translate into applications. That might be true, but knowing that your computer of five years hence will have X bytes of storage and perform Y computations per second is a bit abstract for most purposes. What will be the new toys, the new applications, the new businesses? These are the things that impact people.

If predicting specific applications is a bit much to ask for (and if I could I might not want to tell you!), perhaps the next best is to predict the general nature of applications during a period of time. What you might call the “technological theme” of a period.

1980 to about 1995 was the period of the PC. Starting with hobbyists and niche applications and spreading to take over a large chunk of the office. The IBM PC marked the point at which this went mainstream. The defining characteristic was that the communication was typically local, if the machines were networked at all.

1995 to about 2010 was the period of the internet. First emails and basic web pages, search, then ordering online, online banking, music, video, etc. Netscape marked the point at which this went mainstream. The defining characteristic was that the communication was now global but the interface with the world was usually pretty traditional: keyboard, mouse, monitor.

So what’s the next theme? Mobile internet might be an answer, but I think that it’s more general than that. As great as the internet is, most of the important stuff still occurs in that other place called reality. Maybe it’s a new house with a swimming pool, throwing a party with friends or coming down with a serious illness. I think the next theme will be for technology to interface more effectively with the world; being mobile is only one aspect of that. If I had to pin the start of this going mainstream on one thing, I’d say it was the iPhone, as that’s when the internet started to show up in the day to day moments of people’s lives as they’re out and about doing things.

Once the location, state and function of many everyday objects starts to spread onto the internet, all sorts of creative efficiencies become possible. Need to pay for the coffee? Just press a button on your phone. Not sure where your car is? Ask your phone to show you the way. Need a cab or a pizza or… just select what you want on some menus on your phone. Prices, special deals, time you’ll need to wait — it will all be there. Need to keep a close eye on your health? Get a small sensor implanted that monitors your blood insulin, oxygenation, pressure, cholesterol, heart rate and so on and wirelessly updates this information to your phone. Should a problem arise, your phone can let medics know where you are and what the problem seems to be.

I don’t expect this to be a sudden change, but rather a gradual absorption of goods, services and various everyday objects into an all pervasive information network. I think this will be a hot area until about 2025. Yeah, it’s going to take a while, not so much for many of these things to become possible, but rather for them to become cheap enough to be economic.

What’s my pick for the theme that comes after that? Well, once you have so much of the economy automated and hooked up together, with vast amounts of information about anything and everything swirling around, the key leverage point becomes how well you can intelligently process all this in order to control and coordinate things. Thus my pick for the theme from 2025 to 2040 is machine intelligence.

Well, well, another year is drawing to a close. That means it’s once again time to review what has happened and where things are going.

It’s been a very eventful year for me, both personally and on the work front. I keep my personal life off this blog, and as for work… um, significant things are happening but I’m not ready to talk about them yet 🙂 Thus, I’ll just stick to my general predictions this time around.

First of all, my set of predictions for the teenies. We’re only 1 year in so it’s not surprising that I’m still pretty comfortable with the predictions I’ve made. The only tweak I’ll make is that over the last year I’ve become slightly more confident that we’ll have a decent understanding of how cortex works before the end of the decade. That’s my only update.

My longest running prediction, since 1999, has been the time until roughly human level AGI. It’s been consistent since then, though last year I decided to clarify things a bit and put down an actual distribution and some parameters. Basically, I gave it a log-normal distribution with a mean of 2028, and a mode of 2025. Over the last year computer power has increased as expected, and so it looks like we’re still on target to have supercomputers with 10^18 FLOPS around 2018. In terms of neuroscience and machine learning, I think things are progressing well, maybe a little faster than I’d expected. I was toying with the idea of moving the prediction very slightly closer, but decided to play it safe and keep the prediction unmoved at 2028. With many people thinking I’m too optimistic, showing restraint is perhaps wise 🙂 I can always move my prediction nearer in a year or two.

One thing I screwed up last year was the 90% credibility region. Going by the log-normal CDF that David McFadzean computed for my predicted mean and mode (see bottom of this page), the upper end should be a bit higher, at 2045, i.e. at a CDF of 0.95. It seems that I got the lower end right, however, as the CDF is about 0.05 at 2018. With 5% at each end, that gives the 90% interval.
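For the record, the two parameters of a log-normal are pinned down by its mean and mode, so the interval can be checked directly. The sketch below is my own reconstruction, measuring time as years after 2000 (that offset is an assumption on my part; the original fit isn’t spelled out here):

```python
import math

# Log-normal over t = years after 2000 (the offset is my assumption).
# mean = exp(mu + sigma^2 / 2), mode = exp(mu - sigma^2); solving these
# for a mean of 28 (i.e. 2028) and a mode of 25 (i.e. 2025):
mean_t, mode_t = 28.0, 25.0
sigma2 = (2.0 / 3.0) * math.log(mean_t / mode_t)
mu = math.log(mode_t) + sigma2
sigma = math.sqrt(sigma2)

def cdf(year):
    """P(arrival by `year`) under this log-normal, via the erf formula."""
    t = year - 2000
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With this particular offset the tails land near, though not exactly on, the 5% and 95% points quoted above, so the fitted curve presumably used a slightly different parametrisation.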

Another great Singularity Summit. I liked the focus on neuroscience this time. I think it will be a major driving force behind AGI over the next 20 years. The talk by Demis Hassabis is the one to look for in this area, once they become available online. My own talk was well received — I had applause during the talk as I put up results, something that I’ve certainly never experienced before. Due to a manic schedule of meetings, deadlines and last minute results, I unfortunately didn’t get to spend much time socialising this year. Hopefully things will be a bit more sane next time around and I’ll be able to catch up with everybody properly. Looking forward to it already.

I don’t know if anybody has thought of a theme for next year’s conference yet, but I’d like to make a suggestion: ethics and AGI safety. The conference has been around for a few years now and has attracted some fairly big names and serious academics. How about a return to the core mission of SIAI? As I think AGI is approaching, we seriously need much deeper and broader thinking on these topics. One other suggestion: while big names draw the crowds, in my opinion they often give the least interesting talks. How about a couple of the most popular and accessible LessWrong posts get selected and their authors present them as Summit talks?

This short film, The Third & The Seventh, by Alex Roman, is a great example of cutting edge computer graphics. The airy elegant style reminds me a bit of Kubrick. I’m not sure what impressed me the most: the wonderful cinematography, the fact that it’s entirely computer generated, or that one guy did it alone in his spare time — including putting the sound track together. Be sure to watch it full screen and in high definition.

I’ve decided to christen the next decade the teenies. Firstly, I’ve still heard no other suggestions; secondly, it’s phonetically consistent with the noughties and the twenties; and thirdly, the name is so downright awfully bad it’s almost quite good. So the teenies it is.

I’ve been scratching my head about these predictions for the last few days. By and large, I feel like I’m just predicting the obvious — which is a bit of a let down. However, when I look at the noughties, while the specific details were not predictable, the general trends were pretty obvious already in 2000. So perhaps predicting the seemingly obvious is not such a bad idea. And what seems obvious to me often is anything but obvious to others, indeed many will flatly disagree with my predictions. So, here goes. Hopefully these predictions are specific enough that I’ll be able to perform a decent analysis come 2020 to see how well I fared.

First up, things generally will become more energy efficient and we will see more solar power. But overall not much will change in energy — we’ll keep on using oil and coal and pumping out lots of CO2.

Chinese GDP on a PPP basis will be roughly comparable to that of the US and the EU (i.e. within 25%). India will be about half their size. The UK and France won’t be in the top 10 countries anymore, though they will still like to think that they are. China will become increasingly associated with luxury designer goods.

Computers will become about 50x faster, though I’m a bit nervous about this prediction. Later in the decade we will have major trouble with silicon chip technology. We might also see computer power overshoot general consumer demand, which would spell serious trouble for the big chip manufacturers. Everything goes very multi-core, even your cell phone. The graphics card market collapses due to them overshooting consumer demand* and possibly being subsumed by new CPUs.

All things internet and mobile will continue to grow. Smart “phones” will become fully functional computers. You’ll be able to connect your smart phone to a large monitor, keyboard, mouse, projector etc., just like you’d do with a PC today. It will even become your wallet as you’ll be able to use it to pay for things at the supermarket. The expanding internet will swallow up most of TV and radio. High definition video conferencing will become common, making distance collaboration significantly more natural. High definition matters as it will allow people to have a wider field of view and to more clearly see facial expressions.

Machine learning will grow in importance due to ever increasing quantities of data, computer power, and better algorithms. It mostly won’t be publicly seen, however, much like how it’s heavily used in Google and a few financial and pharmaceutical companies at the moment.

Significant progress will be made in understanding the brain. We will have a rough high level sketch of how the brain works, and some of its processes we will understand quite well. We probably still won’t understand cortical function very well, that will take longer.

More groups will start AGI projects, particularly from 2015 onwards. These groups will become increasingly mainstream, serious and well funded. This will be driven by faster computers, better machine learning algorithms and a better understanding of the brain’s architecture. Some of these groups will produce small AGIs that will learn to do some interesting things, but they will be nowhere near human level intelligence. They will, however, be preparing the way for this. Concern at the dangers of artificial intelligence will become less fringe but it won’t go mainstream.

In short, I’m predicting a bigger brighter expanded version of the last few years — nothing particularly radical. I think the real significance of the teenies will be to lay the foundations for more important things to come.

* UPDATE 15/1/2010: I’ve thought a bit about the main criticism of my predictions above, namely that the graphics chip business will collapse. As a result I’ve decided to soften my prediction. I’m now thinking that 10 more years probably won’t be enough for it to collapse due to overshooting demand. Going to 3D creates 2x the computational demand, going to higher resolution can create 5x demand, and better quality and more sophisticated graphics techniques can drive another 10x, maybe a bit more. Overall this approximately 100x might be enough to drive demand through until the end of the teenies. If a collapse does come, I think it will more likely be due to somebody like Intel getting aggressive and building cutting edge GPUs into their CPU chips thus making GPUs redundant.

The start of the Noughties for me was Y2K. It was a non-event, thanks, I might add, to people like me making ourselves mentally unwell fixing endless date issues in crappy database code. Next was the massive dot com crash — our wonderful future of super internet everything was an illusion… except, well, the biggest technological development of the decade was in fact the growth of the internet and all its related technologies. The problem existed in the mind of the market, not in the soundness or long term significance of the underlying technology.

It’s hard to believe that almost everybody was on dial-up internet in 2000; broadband existed, but it was slow and not many had it. The rise of blogging was interesting. To start with, many more traditional media sources were freaking out about the idea that some 15 year old in his bedroom could get as much exposure as their latest newspaper article. Now blogging is just another part of the information ecosystem. Wikipedia: the traditional encyclopedias went through the classic Gandhi stages of ignore, ridicule, attack and then lose. The iPod completely changed the music business, especially combined with file sharing. Nobody I knew had DVDs before 2000; this was the decade they became big. Same for flatscreen monitors and TVs. I got a digital camera in 2000 when they were just coming out and still cost a fortune. During the Noughties they revolutionised photography. Wifi: nobody I knew had it in 2000, now it’s almost everywhere. Same for internet on the phone. Or text messages; that’s been quite a change. I remember when online banking was seen as strange and a bit risky; now it’s how many people do most of their banking. Google existed, but they really only became huge during this decade. Youtube was another big change in how many people used the internet. Same for Facebook. I still remember how people would react to my enthusiasm for open source software: basically it was seen as a hippy movement that wasn’t something most serious business people would entertain. That certainly has changed. The iPhone revolutionised the smart phone industry.

In a nutshell, I’d say that the Noughties were all about a massive proliferation of digital communication. In a way the dot coms had roughly the right idea, but it took another decade for the vision to mature.

Outside of technology, 9/11, Bush and Iraq feature strongly in my mind. I think the rise of robotic weapons is something that is currently underappreciated. The rise of China and the way in which global warming went from fringe to mainstream were also significant. For me seeing a black man elected president of the US was one of the most surprising, and thrilling, things to happen in the last ten years. If you’d asked me in 2000 about the probability of that happening, I’d have put it at something like 1%. Was I grossly miscalibrated, or was Obama really a rare event? I’m still not sure. Then finally we have the financial crisis and the continuing repercussions from that now. I can only presume that the next decade is likely to bring a similar amount of change. It should be an interesting time to be alive…

First question, what will we call the next decade? The “teens”? That seems kind of lame to me! Second question, what do you think are likely to be the changes of the coming decade? Are we in for some big surprises, or just a continuation of current trends?