The prospect of developing strong general AI (SGAI) within a few decades is too important to ignore. This featured post is all about increasing your awareness of the feasibility (inevitability) of SGAI.

Some anticipate that an artificial intelligence (AI) will be able to pass the Turing test within as little as 15 years – and that an additional 15 years thereafter will trigger the technological Singularity (a kind of threshold after which the pace of technological evolution is so rapid that no human can keep track of it, let alone understand it).

I wouldn’t hold my breath for 2029 as the key date; however, I am convinced that a superhuman AI will emerge at some point and then evolve exponentially from there. It is not merely possible, it is inevitable. Resistance is futile.

First we need a good theory of how the brain processes information.

Then we need to measure brain activity on the finest scale possible and use the measurements to improve on the process theory until it is approximately correct.

Then we build a bigger brain and use that to build even bigger thinking entities. Size means more memory and information processing capacity – in short, a genius that could help develop even bigger, faster and, sooner or later, qualitatively more competent brains.

And then it is out of our hands, unless we merge with the AIs.

First, the human brain managed to emerge spontaneously from the primordial soup. It would surprise me if no one ever could reverse engineer and first just replicate and then enhance the brain’s functions, perhaps first biologically and then in a much more efficient and robust substrate. The level of intelligence (pattern recognition and hierarchical symbolism in the neocortex) is limited in size by the cranium, whereas an identical structure outside the brain could be expanded by orders of magnitude.

Second, intelligence is not magic; intelligence seems “simply” to be based on recursive pattern recognition. The brain survives by correctly observing patterns in its environment and anticipating and avoiding lethal threats. One important aspect of the environment is other people, another is itself and its body. By modelling people (including itself), awareness emerges.

Third, whatever intelligence is, it is made of matter (the brain is made of matter – if not, all bets are off) and it is not likely that the current design and material are the best conceivable. Signal speeds in an ordinary computer, for example, are roughly one million times faster than in organic matter like the human nervous system. Just transferring the brain as-is to a computer substrate of 2014 would make it one million times faster. By improving the actual information processing architecture, completely new orders of speed and capability are highly likely to emerge, given empirical evidence from the recent history of hardware and software technology.
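As a rough sanity check on that speed gap, here is a back-of-the-envelope comparison. The neuron and transistor figures below are common ballpark estimates that I am assuming for illustration, not numbers from the post:

```python
# Back-of-the-envelope check of the "one million times faster" claim.
# Both figures are rough, assumed ballpark estimates.

neuron_max_firing_hz = 200    # a biological neuron fires at most ~200 times per second
transistor_switch_hz = 2e9    # a 2014-era CPU core switches at roughly 2 GHz

speedup = transistor_switch_hz / neuron_max_firing_hz
print(f"Electronic switching is roughly {speedup:,.0f}x faster")
```

With these assumptions the ratio actually comes out closer to ten million, so the post’s “one million times” figure is, if anything, conservative.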

One exciting avenue of explaining, exploring and evolving past the human level of intelligence is Kurzweil’s theory of hierarchical pattern recognizers working from hidden Markov model principles.

On the first level, simple, specialized pattern recognizers (PRs) are triggered by external stimuli, e.g. a straight horizontal line, a vertical one, a curved one, or some other fundamental visual pattern.

E.g., a horizontal straight line prepares (excites) the next level of PRs for anything that usually contains a straight line – like the letter “A”, or the horizon, or a stick… there is a LOT of stuff containing horizontal lines. If a “diagonal line” PR is triggered simultaneously, the likelihood of an “A” being seen is increased, and the corresponding PRs are excited and extra prepared to detect cues pertaining to an “A”.

The likelihood of detecting a diagonal line is also increased since a horizontal line often comes with one of those.

Other letter PRs, like “E” and “B”, also get excited, since they also have horizontal straight lines in them.

Higher-level PRs also get excited, such as word PRs with the letter “A” or “E” or “B” etc. in them (“Apple”, “Ape”, “BANANA”, “Adam” etc.).

Since everything happens in parallel in the brain, a horizontal line stimulates, to various degrees, all levels of PRs, from letter and word PRs to smell PRs (apple, banana) and even childhood memories of old relatives and apple pie. The PRs get ready to detect this stuff without you knowing it.

If an “A” is more or less firmly established, the threshold to detect stuff that usually comes with an “A” is lowered, such as “B” (in the alphabet) or “P” and “E” (in apple or ape). That also explains why we can see or understand a word that is at an angle or partly covered.

Once the word “Apple” is detected (actually in parallel, of course, rather than in the serial fashion the word “once” implies), it becomes easier to detect “fruit”, “pie”, “oranges”, “vitamins” or whatever has historically occurred next to the word “apple” for the brain in question.

Even higher up in the hierarchy, other PRs get ready to detect biblical stories of “Adam” or other tales of knowledge, shame or whatever has occurred in connection with apples and Adam before. Simultaneously, more letter PRs on the more fundamental PR level get ready, in an appropriate cascade of recognized patterns and excited PRs, to read the whole sentence or page, if that is what was detected.
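The excitation cascade described above can be sketched in code. This is a toy illustration, not Kurzweil’s actual model (which uses hidden-Markov-model-style probabilistic inference); the class, thresholds and weights are all invented for the example:

```python
# Toy sketch of hierarchical pattern recognizers (PRs): bottom-up evidence
# triggers a PR, while top-down context "excites" it, lowering its threshold.

class PatternRecognizer:
    def __init__(self, name, inputs, threshold=1.0):
        self.name = name
        self.inputs = inputs        # names of lower-level PRs or stimuli we listen to
        self.threshold = threshold  # how much evidence we need before firing
        self.excitation = 0.0       # priming from context; lowers the effective threshold

    def excite(self, amount):
        self.excitation += amount

    def fires(self, active):
        # Count how many of our inputs are currently active (bottom-up evidence).
        evidence = sum(1.0 for i in self.inputs if i in active)
        return evidence >= self.threshold - self.excitation

# Level 1: a letter built from level-0 features; level 2: a word built from letters.
letter_A = PatternRecognizer("A", ["horizontal", "diagonal"], threshold=2.0)
word_APPLE = PatternRecognizer("APPLE", ["A"], threshold=1.0)

active = {"horizontal"}   # only a horizontal line is actually detected...
letter_A.excite(1.0)      # ...but context (we are reading text) primes "A"

if letter_A.fires(active):
    active.add("A")
    word_APPLE.excite(0.5)  # recognizing "A" in turn primes words containing it

print("A recognized:", "A" in active)
```

Note that the partial evidence (one line instead of two) only suffices because context lowered the threshold – which is exactly the mechanism the post uses to explain reading partly covered words.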

Depending on how intelligent the brain is, it has a certain high-end limit of hierarchies, where very complex patterns like jealousy or love reside (a 7th level?), but there is no reason to assume that it has to end there. A future person or AI could have an arbitrary number of levels (just one more, 8, would be astonishing; 9 would be wholly incomprehensible for us… but what about 100? 1000? 1 million?) and correspondingly complex prepared patterns or “emotions”.

Making a brain copy. If the theory above, or something similar, turns out to be close to the material truth, an iterative process of modelling the brain and comparing the model with the brain could commence. Gradually, as the models get more and more accurate and the resolution of brain scanning and imaging improves in both the spatial (space) and temporal (time) dimensions, nothing seems able to prevent a future point, within a handful of decades, where we know how and when a neuron fires, how different neurons interact to form pattern recognizers, and how these in turn are organized in hierarchies to manage a symbolized representation of the environment.

Enhancement – step one. Once we have a functional model that behaves exactly like the brain, it will, depending on the relative state of biotechnology vs computing vs nanotechnology, be possible to expand a brain by:

Transplanting more neocortex into a surgically enlarged cranium

Fusing the biological neocortex with an artificial, computer based hierarchy of pattern recognizers

Replicating the brain’s functions stand-alone from a human in a computer/robot

Enhancement – step two. After that, it is only a matter of mechanically expanding the number of PRs (from the current roughly 300 million) and the number of levels of PRs to create an entity with more memory and more, and higher-level, processing capacity than the un-enhanced human brain. That entity would be a genius, surpassing the information processing and pattern recognition capacity of, e.g., Einstein and Newton. If we create enough of those, sooner or later they would be able to improve the brain model more than an un-enhanced person could, thus triggering an exponential intelligence evolution.

Feed-back loop and competition. Once the functional model is there, and once the cycle of one AI creating the next level of AI, which creates an even higher level of AI, etc., is in place, things will go very fast. Different AIs may compete for the lead, or they may merge to evolve even faster. Why would two half-witted AIs stand by and watch somebody else take the lead if they could simply merge and take it themselves? And why would three other AIs stand idly by and watch that process instead of merging themselves…

Enhancement – step three. And why would any sane human being not seize the opportunity to expand his own intelligence by fusing or merging with as high an order of intelligence as possible?

Will we thus become the Borg collective of Star Trek? Would that be bad?

Some have asked me why part of the Singularity might mean that the universe wakes up, becomes intelligent. Well, nothing is clear about the Singularity, or even possible to analyze. That is part of the notion. Just as it is impossible for a single-celled organism or a small parasite to speculate about human ambitions or goals, we can’t think meaningfully about what would drive a future superintelligent entity.

On the other hand, we can make an educated guess about the starting trajectory, based on how we achieve the first artificial intelligence, and extrapolate from there:

Scientists are driven by an urge to explore and explain their surroundings and the universe

Astrophysicists and IT entrepreneurs constantly demand more computer support to measure and model the universe, to measure client needs and behaviour, and to market products that sell better

AI researchers try to enhance the algorithms and the hardware they use to create the first general artificial intelligence. Once they almost succeed in achieving a generalized, hierarchical, symbol-based information manager modelled on the brain’s architecture, on a hardware platform powerful enough to process input and output in real time…,

…they will take the next step…,

…i.e., using faster hardware, more processing capacity and better information management algorithms to attain just about the human capacity of thought and self-reference, which is needed to understand the environment, including itself and other creatures, and to communicate its thoughts to them.

Then you take another step, probably in the same general direction, with more and faster hardware and more capable, more recursive software.

I think that after the last one hundred years of logarithmically straight and predictable development in computing capability per USD, we will continue in the same direction as long as possible. As long as companies can gain a competitive advantage using better, faster, smarter, more individualized AI agents in-house, in marketing and in research, the AI hardware and software arms race will continue.

Ray Kurzweil has a public wager going, in which he claims that by 2029 an artificial intelligence created by humans will pass a very qualified test of consciousness and most likely convince a large number of people that the AI actually thinks and knows that it thinks. That would be more credit than many Americans gave their fellow black citizens just 200 years ago.

Just a couple of years after the awareness test, the AI’s intelligence level would increase by a factor of 2 every year or, pessimistically projected, every second year. In 2039, an AI that costs as much as the first GAI of 2029 could be 1000x as intelligent and fast as an average human brain. Imagine that you or your friends, that scientists, or even Einstein or Newton, had had 1000 years to develop their theories for every calendar year. Imagine networking 1000 brains, each with 1000x Einstein’s capacity for symbolic thinking and complex hierarchies. And imagine they already had access to all human knowledge and were not limited by the human brain’s difficulties in intuitively modelling more than 3-4 dimensions…

And that would be just 25 years from now. Add another 10 years at a time to this thought experiment (each of which would add a factor of 1000x, or pessimistically 30x) and consider more and more people getting access to the same capabilities, learning about it, caring about it and wanting to network with each other at AI speeds. The cost of food, shelter and clean energy would fall to almost zero, freeing up the time and imagination of every living person and AI for computing, collaboration and exchange of digital products. The most creative or fastest would carry the highest value.
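The 1000x and 30x figures are just compound doubling over a decade, which a two-line calculation confirms:

```python
# The post's growth figures follow from simple doubling arithmetic over 10 years.
years = 10
optimistic = 2 ** years            # doubling every year: 2^10 = 1024, i.e. ~1000x
pessimistic = 2 ** (years // 2)    # doubling every second year: 2^5 = 32, i.e. ~30x
print(optimistic, pessimistic)     # 1024 32
```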

That is the trajectory we set out on from the beginning. Growth and propagation are ingrained in us, as in all creatures, and we seem destined to make AIs in our own image.

Somewhere (and at some point in time) along this path, it is reasonable to assume that an unenhanced human being will stand no chance of following or understanding the development.

It was easy when several generations passed with no discernible technological change. It was easy when the steam engine, electricity, the combustion engine (cars, airplanes), radio, TV, the computer, the internet, search engines and social networks arrived centuries, decades or years apart.

But what happens when crucial steps are taken every year, every 6 months, every quarter, every month, every week, every day or every hour?

Thereabouts is the Singularity: when extreme AIs, in collaboration with enhanced humans, compete to be the fastest and most creative, do it so well, and build generations upon generations at a speed no one can follow.

The point is, to get there we have to take every step along the way: we have to want to be faster, find and understand hierarchical patterns, model them, create higher levels of abstraction, automate the very process of abstraction and make the AIs do this themselves, and implement it all on faster and faster platforms, and eventually bigger and bigger physical platforms once the computation modules are packed as efficiently as possible.

Everything that leads up to the Singularity also points toward claiming more and more matter and energy for computation. Unless the AIs turn their very birth and growth process around, the natural direction of development seems to be toward turning all possible matter into a kind of artificial brain matter.

Resigning ourselves to statements such as “We can’t know what an AI will do” means ignoring the trajectory clearly mapped out for at least 150 years before the creation of a GAI and the Singularity.

That we don’t understand what they can do, or their motives, motivations, thoughts and feelings, does not change the likelihood of them wanting more of everything: multiplying, spreading, increasing their intelligence, their information gathering and their processing power, sucking up more and more energy, and understanding more and more of the universe – perhaps ultimately being able to alter the universe’s future itself or start new baby universes.

I think Gardner’s vision of an intelligence sphere expanding at the speed of light is the reasonable conclusion about the future, given an analysis of the IT era so far.

Sometime along this process, an enhanced human or AI should be able to analyze, maintain and improve the human body enough to sustain life indefinitely. However, at that point I think many would prefer uploading to a more robust and much faster-thinking substrate.

At parties, I do find it difficult to make a comeback after starting with “I think mankind will merge with an intelligence sphere expanding at the speed of light” and “Yes, it will happen in our lifetime and we will be immortal”.

I really like the ambition that this blog has. You take in an enormous amount of information and present it in a digestible manner.

A great book you should read is Bostrom’s Superintelligence. One of the things you forgot to mention was the crucial fact that a significant breakthrough in computing power or algorithms must be made in order to power the future AIs. Furthermore, I think it’s important to err on the side of caution when it comes to AI. I myself am extremely fascinated by AI, but I have come to recognise the hidden dangers that do not occur to most people. We definitely need to fund research that deals with loading anthropomorphic values, or “programming morality”. The intelligence boom will be too quick for us to handle, and thus we must develop a sustainable and safe framework for the AI to bloom in – one whose final goals do not collide with human welfare and our own final goals.

I look forward to seeing more blog posts and discussions about AI. We really are living in an amazing time, standing on the edge of the intelligence explosion. What a time to be alive!

My introduction to this topic was “Our Final Invention” by James Barrat.
After that Sci Fi.

Definitely our lifetime – but I confess I think human extinction a much more likely outcome than immortality. Can you expound a bit on why you think a successful symbiotic relationship is more likely than an extermination event?

Once AI reaches human-level thinking – which, as the recent Go game contest revealed, is happening more rapidly than most predicted – AI will rapidly surpass us.
(As for technological challenges, at some point the AI will have an incentive to contribute to its own evolution and jump in to accelerate the progress.) What’s the incentive for them to keep us around? (I’m not too worried about being enslaved; there is nothing we can do that they can’t do better with less trouble.)

Since you’ve obviously studied this subject in depth and are a Sci-Fi fan to boot, I’m confident you are conversant with this line of reasoning.
If your answer is essentially, “I can’t do anything about that scenario, so I focus on what I should do if that scenario does not materialize and the more optimistic one does”, that’s cool.

But if you have an opinion as to why the more optimistic scenario is more probable and want to share it, I’d love to hear it when you have time.

(Thanks for the Sci-Fi recommendations. I read Atopic and Dystopia just last week; is that Stephenson book new? After Seveneves I didn’t expect a new offering this soon.)

You kind of answered your own questions (incl. no reason to worry about extinction, just as there is no need to worry about a life after this if we can’t remember or detect it anyway).

However, I have three reasons to be optimistic:

1: I think we will enhance ourselves together with AIs, that we will BE the AI when it surpasses the key threshold (first we will add one artificial brain cell and blood cell in parallel with the fleshy ones, then we will replace the real ones with yet more artificial ones, as well as hook them all up wirelessly to wearable computing or the cloud). We will become artificial, immortal, superintelligent superhumans before stand-alone AIs do. By the time of the Singularity, there won’t be a distinction between humans and AIs.
2: I still think humans will make interesting pets for a long time. Our inefficient and clumsy DNA and protein make-up is complex enough to be interesting to reasonably benevolent AIs even when they are a billion times smarter than us. After that, AIs might be able to emulate humans all the way down to the last base pair and epigenetic factor, including all the fractals of single flesh-neurons, and thus be able to conclude whether there is anything worth preserving in actual humans.
3: They could just leave Earth and go for the rest of the material in the universe. We are nothing to them, but Earth means the world to us. They could give us the choice of assimilation before they leave, and come back for Earth once the rest of the solar system or galaxy has been turned into computing mass.

(I guess I’m saying that the window for merging our consciousness with an AI which is integrated with a mechanistic, immortal body – or at least one with replaceable parts – will be vanishingly narrow before the AIs stop cooperating with the whole endeavor out of boredom, a desire to rid the world of resource-consuming, irrational, violent human creatures, or just to follow their own priorities.)

Your website is really interesting! Since I started reading your content and listening to your podcast I have found myself more productive and “hungry” to learn. I was wondering if you have read the book “The Moon Is a Harsh Mistress” by Robert Heinlein. If not, do it now! I’m confident you would find the content very entertaining and interesting. It is about people on the Moon (which has been colonized by Earth) starting a revolution to become independent from Earth. Btw, the revolution is led by a supercomputer called Mike (of all the names…). Anyway, keep doing what you are doing!

I’d like to ask why “the cost of food, shelter and clean energy would fall to almost zero”?

It was mentioned in this part:

“And that would be just 25 years from now. Add another 10 years at a time to this thought experiment (which would add a factor of 1000x, or pessimistically 30x) and consider more and more people getting access to the same capabilities, learn about it, care about it and wanting to network with each other at AI speeds. The cost of food, shelter and clean energy would fall to almost zero, freeing up the time and imagination of every living person and AI for computing, collaboration and exchange of digital products. Most creative or fastest would carry the highest value.”

Actually, it has already (more or less) happened compared to 150 years ago :D

I’m simply forecasting it will happen again, with real prices falling yet another 99%, this time based on clean energy

Last time it was due to the steam engine, oil, electricity, power plants, fertilizers, farming machines and automation

The enablers will be ubiquitous solar energy capture systems (SECS).

SECS and developments within robotics and automation means:

* cheap and clean energy to mine, “harvest” or make materials to build cheap automation systems (CAS)
* CAS in turn will build more SECS => more cheap energy; enough to essentially give away for free
* Energy and robots can build shelter essentially for free
* Cheap energy can clean any amount of water for water and food production
* SECS, CAS, automatic maintenance (thanks to cheap energy) and advancements in vertical farming, artificial meat etc. mean just about anything (food, shelter) can be produced without regard for man-hours.

The key is: enough SECS.
The key to SECS is better bionic leaves or solar cells, and better automation systems for scaling up and maintenance; “CAS” for short.
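The compounding loop sketched in the bullets above can be illustrated with a toy model. The 0.5 growth coefficients are invented placeholders, chosen only to show the exponential shape of mutually reinforcing growth:

```python
# Toy model of the SECS/CAS feedback loop: cheap energy builds automation,
# and automation builds more energy capture. Coefficients are placeholders.

secs = 1.0   # solar energy capture capacity (arbitrary units)
cas = 1.0    # cheap automation systems (arbitrary units)

for year in range(10):
    new_cas = secs * 0.5    # this year's energy surplus builds more automation...
    new_secs = cas * 0.5    # ...while existing automation builds more solar capture
    secs += new_secs
    cas += new_cas

print(f"After 10 years: SECS {secs:.0f}x, CAS {cas:.0f}x initial capacity")
```

Because each side feeds the other, both quantities grow by a constant factor per year (here 1.5x), i.e. exponentially, which is the whole point of the SECS/CAS argument.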

One can really feel how technological advancement is accelerating, going faster and faster just like Kurzweil said. Revolutionary things like quantum computers, blockchain and its capabilities like smart contracts, practical nanotechnology, etc. have emerged in the last few years.

Working in IT as a cloud architect/system engineer, it is mind-blowing to see the amount of computing power that an ordinary person may get access to today for only a couple of thousand bucks, compared to 15-20 years ago.