Machines take another big step toward superintelligence

Summary: The next step in computer evolution is machine learning. Practical applications, such as self-driving cars, are moving from science fiction to daily news. But the leading edge is further ahead, with achievements that look like small miracles.

In 1976, Northwestern University’s Chess 4.5 became the first computer to win a human tournament, taking the Class B section of the Paul Masson American Chess Championship. In 1978 the program achieved the first computer victory against a Master-class player. In 1996 IBM’s Deep Blue defeated world champion Garry Kasparov in a game. In 1997 it defeated him in a match, 3½–2½. Computers have continued to improve.

Another step for machine evolution

“Artificial Intelligence (AI) was the Next Big Thing back in the 1980s but didn’t really deliver for another 30 years because we grossly under-estimated the required computing power. Running Moore’s Law in reverse we can see that one dollar’s worth of computing today cost $1,048,576 back in 1987. …

“{W}e defined AI back then as encapsulating knowledge we already had while AI today mainly means generating whole new data-driven understandings of how the world really works.”
— Robert X. Cringely at his website.

“The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades.

“In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.”

“{Google’s} DeepMind has now revealed the first multi-skilled AI board-game champ. …AlphaZero …can teach itself to be super-human in any of three challenging games: chess, Go, or Shogi {Japanese chess} ….

“But the ability of one program to learn three different, complex games to such a high level is striking because AI systems — including those that can “learn” — typically are extremely specialized, honed to tackle a particular problem. Even the best AI systems can’t generalize between problems — one reason why many experts say we still have a long way to go before machines rival human abilities.

“AlphaZero could be a small step towards making AI systems less specialized. …AlphaZero can learn to play each of the three games in its repertoire from scratch, although it needs to be programmed with the rules of each game. The program becomes expert by playing against itself to improve its skills, experimenting with different moves to discover what leads to a win. …

“Humans are no longer the best players at chess, Go, and Shogi, so AlphaZero was tested against the best specialized artificial players available. The new software beat all three—quickly. AlphaZero required four hours to become world-beating at chess, two hours to reach that level in Shogi, and eight hours to get good enough to beat DeepMind’s previous best Go player, AlphaGoZero.

“20 years after DeepBlue defeated Garry Kasparov in a match, chess players have awoken to a new revolution. The AlphaZero algorithm developed by Google and DeepMind took just four hours of playing against itself to synthesise the chess knowledge of one and a half millennium and reach a level where it not only surpassed humans but crushed the reigning World Computer Champion Stockfish 28 wins to 0 in a 100-game match. All the brilliant stratagems and refinements that human programmers used to build chess engines have been outdone, and like Go players we can only marvel at a wholly new approach to the game. …

“We’ve long recognised our human inferiority, but we could take comfort from the fact that the chess engines that beat us were also the works of human ingenuity and effort. That was about to change. …

“Generic machine-learning algorithms are game-changers, and not just for chess but the world around us.”
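The self-play idea these articles describe can be made concrete with a toy. AlphaZero’s real method pairs deep neural networks with Monte Carlo tree search; the sketch below instead uses simple tabular learning, but it shows the same tabula-rasa loop: given nothing but the rules of a game (here Nim, where players alternately remove 1–3 stones and whoever takes the last stone wins), the program plays against itself and learns which moves lead to a win. All names and parameters here are illustrative, not DeepMind’s.

```python
import random

def train_selfplay(stones=10, episodes=30000, alpha=0.2, epsilon=0.2, seed=0):
    """Learn Nim by self-play, given only the rules.

    Rules: players alternately remove 1-3 stones; whoever takes the
    last stone wins. Both 'players' share one value table Q, updated
    toward the final game outcome from the mover's perspective.
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(1, stones + 1)
         for a in (1, 2, 3) if a <= s}
    for _ in range(episodes):
        s, history = stones, []
        while s > 0:
            moves = [a for a in (1, 2, 3) if a <= s]
            # Epsilon-greedy: mostly play the best known move, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(moves)
            else:
                a = max(moves, key=lambda m: Q[(s, m)])
            history.append((s, a))
            s -= a
        # Whoever moved last won (+1); credit alternates back up the game.
        outcome = 1.0
        for (s, a) in reversed(history):
            Q[(s, a)] += alpha * (outcome - Q[(s, a)])
            outcome = -outcome
    return Q

def best_move(Q, s):
    """The learned move from a position with s stones remaining."""
    return max((a for a in (1, 2, 3) if a <= s), key=lambda a: Q[(s, a)])
```

With these settings the program discovers, from random play, the classic winning strategy of always leaving the opponent a multiple of four stones. The scale is trivial next to chess, but the structure of the learning loop is the point.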

Thanks for the link to the McKinsey report. In general, I find their work well-researched (a great source of data) but with conclusions that echo the prevailing “wisdom.” But this one looks unusually good.

Re: software engineers

Predictions of future skill demand have two constants: their conclusions are stated with unwarranted confidence, and they never mention how wrong past estimates have been. In high school (early 1970s) we were taught that many or most jobs would need programming skills — so we all learned BASIC (plus other languages in college). Like most people, I never used them (learning foreign human languages would have been far more helpful).

Today we hear similar predictions. Now you point to something I’ve wondered about, as machine learning advances rapidly — how many software engineers we will need in the future.

It seems to me that the success of this sort of ‘tabula rasa reinforcement learning’ system would depend entirely upon the extent to which we can quantify the variables and the victory conditions, and upon the extent to which the system could test different scenarios.

In other words, if you can define the world in which the system exists, and then allow the system to play around in that world, then the system will quickly become a master of the world you have defined.

With board games like chess, that is very easy to do. With other games like StarCraft, it is less easy but still quite doable. With the stock market, the variables are simple, but the testing would require a large amount of money to play with.

But I don’t know how well it could be used for things like running a business (or a city, or anything big and complicated), where ‘victory conditions’ are amorphous and subjective, where the many variables are hard to quantify (or even to identify), and where experimentation would be slow or undesirable (we wouldn’t want to experiment with people’s lives). AI would still be quite good (even game-changing) at meta-analysis in those areas, but I don’t think it could master them in the same way as it does now for chess, until it reaches the next level.
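The point about ‘victory conditions’ can be made concrete: a reinforcement learner needs an explicit reward function. For chess the rules dictate it exactly; for ‘running a business,’ even a toy version forces arbitrary choices about the weights and about how to measure the inputs at all. Everything in the second function below is hypothetical, chosen only to illustrate the problem:

```python
def chess_reward(result):
    # Fully specified by the rules of the game: win, draw, or loss.
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]

def business_reward(profit, staff_morale, customer_trust):
    # Hypothetical: the weights are arbitrary choices, and the inputs
    # themselves (how do you quantify 'trust'?) are hard to measure.
    return 0.6 * profit + 0.2 * staff_morale + 0.2 * customer_trust
```

The first function is objective; any two programmers would write it the same way. The second encodes someone’s values, which is precisely the amorphous, subjective part of the problem.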

Both articles discuss this limitation, of which machine learning researchers are quite well aware. But the progress of machine learning systems in fields such as self-driving vehicles and reading X-rays shows that this barrier is already being surmounted.

“But I don’t know how well it could be used for things like running a business”

This is the standard “rebuttal” to progress in machine intelligence: it can do “x”, but it cannot run the world. That misses the point. These systems are improving at a rapid rate — probably equivalent to electrical power systems in the 19th century — and a thousand other technologies. Most of them went through the stages of public reaction: from “it’s a toy” and “it will never catch on” to “wow” to “it’s just business as usual.”

“until it reaches the next level.”

Does anyone disagree? But how soon it reaches the “next level” is the relevant question. And the level after that, and the one after that. That’s how tech revolutions run.

I suggest that you look at the rate of progress as the key factor, not people’s opinion of what it will be in a hundred years (something at which people have a poor record of success).

Larry, thank you for your response.
I’m not trying to rebut your premise that technology marches on. I’m attempting to frame the current limits, in order to understand (at least for myself) the matter of *how* we will get to the next level (not *if* or *when*).
Also, a lot of amateur readers might not understand how machine intelligence is different from human intelligence. For example, whereas human chess players are often smart in many different areas, computer players (at least until very recently) have been extremely task-specific. That’s why I think it’s important to define the limits as they are at present.

“I’m attempting to frame the current limits, in order to understand (at least for myself) the matter of *how* we will get to the next level (not *if* or *when*).”

That’s done — usually quite well — by experts in machine learning in articles about the field, such as those cited here. None say that AIs will be “running a business” as the next level, or even in the foreseeable future.

“a lot of amateur readers might not understand how machine intelligence is different from human intelligence.”

That’s why this innovation is so important, as described in these two articles — and in more detail in the paper.

“That’s why I think it’s important to define the limits as they are at present.”

You are kidding, right? It’s done all the time with each crackpot fad that comes along and gets taken up for no obvious reason at all (anyone remember graphology?), except that someone influential is keen on it, or desperate for a solution to a problem, or just wants to make a name for themselves.

I think your point about ‘victory conditions’ is well made. All of the games have well-defined ‘win’ conditions. It would be interesting to see how you’d manage to *specify* the win conditions for an AI running a company tasked with supplying water to homes. There are many groups to satisfy, some with opposing needs, and not all of the factors are under the control of the AI.

We’ve got a long way to go, but we should remember that an AI doesn’t have to be perfect; it just has to be better than a human.

Would you be as upset if the AI said you were fired if you knew it didn’t care? Tennis players don’t argue about Hawkeye because they know it’s more reliable than the humans. If we can ensure AI doesn’t show bias (no easy task in itself) then it’s already way better than people. It didn’t hire Freda because she’s the daughter of the division head, it didn’t fire you because you’re black, and that guy over there didn’t get his job because he’s a Mason. Result.

“You are kidding, right? It’s done all the time with each crackpot fad that comes along and gets taken up for no obvious reason at all”

That’s a powerful point! I phrased that poorly.

” it would be interesting to see how you’d manage to *specify* the win conditions for an AI running a company tasked with supplying water to homes. …I for one, welcome our robot overlords”

I suggest you rethink your conclusion. Since the people who run America today will program the goals for the machines that will run it in the future, this seems likely to be “meet the new boss, same as the old boss.”

I’m constantly reminded of Asimov’s ‘Foundation’ series. It pretty much foreshadowed what we’re beginning to see the start of. Though his ideas were, IIRC, based on gas mechanics, the principle is the same: big datasets give powerful predictive capabilities when properly analysed.

In the UK the government has a ‘nudge’ unit, the idea being that it’s possible to get people to significantly change their behaviour by making small changes to their social, political, and economic environment. At the moment this stuff is crude and correspondingly easy to spot. http://www.behaviouralinsights.co.uk/

But given better analysis from larger datasets it could be a powerful tool for making us jump through government approved hoops.

Color me skeptical that in the next century people or machines will develop reliable forecasting capabilities for social or individual behaviors. Asimov writes about psychohistory as an effective tool developed dozens or perhaps scores of millennia in the future (i.e., after a galactic empire has ruled for 12,000 years).

Steve Crook raises the concept of AI postulated by Asimov. As I recall, Asimov considered the development of AI as being controlled by a set of rules (and I am not talking about the rules of robots). That too is how I view it. That is, in order for a machine to be “self-taught,” it must be programmed to do so: its learning will be tied to a set of rules devised by a programmer. Even with the multi-state possibilities offered by quantum computing, I believe this will be the case.

On the other hand, Heinlein postulated the possibility of a machine becoming sentient, i.e., coming alive (“The Moon is a Harsh Mistress”). Being a fan of Heinlein, I enjoyed this book more than I did the Foundation series. Not that I don’t enjoy Asimov’s work, but in trying to describe the societal framework he was developing, I felt the novels became a bit too rigid. The Heinlein book, I felt, was more imaginative and spontaneous.

I don’t believe that a machine will spontaneously become sentient. Unfortunately I won’t be around to see if I am wrong. Even if it were possible I believe the machine would have to be programmed to accomplish it. Now there’s a thought!

Bayesian statistics remains incomprehensible to many analysts. Amazed by the incredible power of machine learning, a lot of us have become unfaithful to statistics. Our focus has narrowed to exploring machine learning. Isn’t it true?

We fail to understand that machine learning is only one way to solve real world problems. In several situations, it does not help us solve business problems, even though there is data involved in these problems. To say the least, knowledge of statistics will allow you to work on complex analytical problems, irrespective of the size of data.

In the 1760s, Thomas Bayes’ theorem was published (posthumously). Even centuries later, the importance of Bayesian statistics hasn’t faded. In fact, today the topic is taught in great depth at some of the world’s leading universities.

With this idea, I’ve created this beginner’s guide to Bayesian statistics. I’ve tried to explain the concepts in a simple manner with examples. Prior knowledge of basic probability and statistics is desirable. By the end of this article, you will have a concrete understanding of Bayesian statistics and its associated concepts.
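As a minimal illustration of the theorem the comment describes (the numbers below are made up for the example), here is the classic diagnostic-test calculation: even a fairly accurate test for a rare condition yields a surprisingly low posterior probability, because the prior matters.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem:
    P(A|B) = P(B|A) * P(A) / P(B)."""
    p_positive = (sensitivity * prior
                  + false_positive_rate * (1 - prior))
    return sensitivity * prior / p_positive

# A condition with a 1% base rate, tested with 95% sensitivity and a
# 5% false-positive rate: the posterior is only about 16%, because
# false positives from the healthy 99% swamp the true positives.
p = posterior(0.01, 0.95, 0.05)
```

The arithmetic: 0.95 × 0.01 = 0.0095 true positives versus 0.05 × 0.99 = 0.0495 false positives, so a positive result indicates the condition only 0.0095 / 0.059 ≈ 16% of the time.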

Steve was talking about Asimov’s Foundation stories, which don’t involve AI (until, I’ve heard, much later when he attempted to merge his fictional universes). That quibble aside, you raise important questions. Especially this:

“its learning will be tied to a set of rules devised by a programmer.”

That’s the core question for the future. The point of machine learning is that it is, to a degree, writing its own software. As machines get better at that, they’ll probably be given more scope to do so. But — the problem, as many experts have noted, is that machine written software is largely opaque to us. We can’t translate it back to rules that we understand.

That’s a problem in many applications. For example, when a machine gives an interpretation of data to a medical doctor, he can’t act on it and then, if there are bad results, just say “the machine made me do it!” He has to own his actions, and can’t act on machine advice that the machine can’t explain in terms he understands. At least, not yet.

This problem will get bigger, as a generation of human-written software is measured in months and years — while a generation of machine-written software is measured in hours or days.

“I don’t believe that a machine will spontaneously become sentient.”

These kinds of discussions quickly get bogged down in sci-fi. Few or no experts believe that machines will become “sentient” in any time horizon relevant to us today. Our problems are about what they can realistically do!

“Alexa, play Prokofiev Symphony No. 5” produces “I can’t find ‘Symphony No. 5’ by Prokofiev.” Amazon Music has any number of copies, and I even own one in my library. Apparently she gets confused between composer, conductor, and orchestra.

“Apparently she gets confused between composer, conductor, and orchestra.”

As does any minimum-wage worker serving as a cashier or customer service rep. The difference is the machines’ ability to learn from mistakes, and the long-term impacts and implications. It doesn’t have to be perfect to replace low-level jobs.

That’s true. The point was, though, that Alexa still has problems with queries outside of what she’s used to (going to use anthropomorphic language without apology) — “Bernstein” could be either composer or conductor, for example. The problem doesn’t seem that difficult, so I’m going to guess that classical music isn’t high on the Alexa team’s priority list. The bigger question is whether AI will ever get to the stage where it doesn’t have to be taught each new skill, i.e., it can figure it out for itself.

In other words, Alexa would realize that she’s returning a lot of “I can’t find” results for a particular category and fix that deficiency without being told to. And then do the same where the category isn’t clear or hasn’t been defined.

That nails it. What answer would we have given a decade ago if Alexa had been described to us, and we were asked when it would be commercially available for $30 — a mass-market electronic component? In 2004 the iPhone was three years in the future.

The key to understanding this new industrial revolution is that progress is rapid, producing unexpected marvels. Of course the initial products are flawed. More important is that new versions come each year, each more powerful.

One of the most interesting themes in The Innovators (Walter Isaacson, Simon & Schuster, 2014) is the longstanding tension between those digital pioneers who believed the future of computing was artificial intelligence and those who believed that the role of the computer was to enhance/extend human intelligence. Excel lets us efficiently analyze scenarios that we couldn’t have dreamed of doing when we were in school. But if we don’t frame the questions right, the answer still makes no sense and can be quite damaging. Engineers don’t need to use trigonometry to calculate angles when designing a structure; that’s all calculated by their software. That doesn’t mean there is no need for engineers, just that engineers no longer get paid for doing routine calculations. As a result they can design much more challenging structures much more efficiently.

Machine learning is impressive, but in the end it can be viewed as one more tool to extend the human using it. The computer beat the human expert because it could run through more possible scenarios than a human could in a thousand lifetimes and determine the probabilities based on real data rather than approximations. For chess players that’s disappointing because it’s taken one challenge away from humans, just as the invention of the sextant systemized the previously demanding challenge of navigation, only to be replaced by GPS, which made sighting by the stars irrelevant. Personally I’m pretty happy that my pilot knows exactly where he is halfway across the Atlantic Ocean.

My guess is that AI and machine learning (not the same thing) will enable some humans to rise a few notches above their competitors, but that will not replace the role of human insight. On the other hand it will create a few new entrants to the .1%. The NYT recently estimated that there are only about 10,000 qualified AI engineers in the world, and they easily command salaries in the mid-six-figure range. https://www.top500.org/news/ai-engineers-commanding-six-figure-salaries/

That is an impressive example of missing the point. I and hundreds of other people warn about the dangers of semi-intelligent machines (to use James Blish’s term from the 1950s). See my posts listed in the For More Information section for details.

This will cause massive job destruction. While nice if some AI engineers do well, it is unlikely that all the unemployed truck drivers will get new jobs — at least in their lifetimes.

About those AI engineers.

(a) The minimum income to be in the 1% in California is about $500,000. Few AI engineers make that kind of money. The Bay Area, with its stratospheric cost of living, has an even higher minimum.

(b) US history shows that such temporary shortages of a skill vs. demand are filled with astonishing speed — often producing surpluses in a decade. When the promised demand fails to appear, the result is ugly. In the 1970s, what did you call an aerospace engineer? “Waiter!” In the 1980s, what did you call a petroleum geologist? “Waiter!”

Good editing, Larry. I sidelined my main point with the comment about software engineers. I agree fully with your comment about our education system’s ability to miss the mark and overproduce skills that are perceived to be in short supply. That’s probably happening with nursing, a profession that will be heavily impacted by automation.

The primary point is that we don’t know what AI will enable, any more than the impact of prior technological advances could have been fully predicted at the time. My point was aimed not at the issue of worker displacement by automation, which is a real issue that likely can only be solved in a political context. I was speaking about the Clash of Titans that drove economic advance throughout the industrial era and beyond. In each era, the importance of a new technology has been perceived by a group of prescient (or lucky) individuals who took advantage of its capabilities to build entirely new industries.

Some examples:

1. The railroads. While there was money made (and lost) by a few railroad barons, and the Chinese and Irish workers who built them had jobs for a while (however bad they appear in retrospect), the real wealth was created in the industries the railroads enabled. The ability to cheaply bring cattle and grain to market enabled Chicago to become one of the world’s great trading cities, and the beginnings of modern industry would not have been possible without the demand and access the railroads created. Steel barons built the rails. Agricultural equipment was built to break the lands newly opened by efficient rail access to market, etc.

2. The Interstate Highways. The modern hospitality and over-the-road trucking industries were made possible by the new transportation infrastructure. Some good construction jobs were made possible by the direct spend, but millions of jobs were made possible by the ancillary developments.

3. The Internet. e-Commerce pioneers leveraged the new information infrastructure and existing transportation networks such as FedEx to build an entirely new approach to the distribution of goods. That has created tens of millions of jobs, primarily in Asia. You can complain about globalization, but this transformation has had a major impact, one result of which has been that billions of people globally are now moving rapidly out of a subsistence existence.

Similar developments will occur as a result of the advances in computer power and computer “intelligence.” One is clearly that tens of millions of jobs will be made irrelevant through the automation of routine tasks, including many that are considered white-collar and well-paid — as happened when digital systems replaced the need for thousands of attorneys personally conducting initial review of documents and emails during legal discovery.

We still don’t know what will be enabled on the other side. Perhaps Elon Musk will succeed in creating a global industry based on the colonization of Mars. More likely the big wins will be more mundane and more surprising. Perhaps the application of smart systems to government and education will transform moribund activities to make cities more efficient and healthier for their inhabitants. That one is a long shot, but it has happened before, as when Paris was completely rebuilt in the 19th Century replacing squalid slums with the grand city we see today.

What I do know is that our energies are better served looking out for those next big opportunities being created by AI rather than cowering in fear that a robot is about to steal our job.

Actually the only reason I don’t fear the future is that I am committed to live in it and invest my personal energy preparing myself to do so. I fully understand that there are Americans, who by reason of age, disability, social circumstance, discrimination in education, inadequate healthcare and a broken criminal justice system will have difficulties coping with the changes to come. Addressing the resulting dislocations will be the major public policy challenge of the coming decade(s) and one that is being effectively ignored in Washington for now. From a personal standpoint I can’t solve the political quagmire, but can hopefully play a role in creating solutions to the extent I can help the innovators of the next era build the industries that will create the employment opportunities of the future.

“I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the fear has gone there will be nothing. Only I will remain.”

Dune is great fiction, but I disagree with that statement. Saying that “I must not fear” is like saying “I must not pee.” Fear is hard-wired into us — because of its powerful ability to improve our ability to survive. Fear is the antidote to complacency – a stimulus to action.

This is complacency: ““What I do know is that our energies are better served looking out for those next big opportunities being created by AI rather than cowering in fear that a robot is about to steal our job.”

Development of future technology will happen by normal economic and social processes. History shows that the resulting social turmoil — and massive harm to people affected — can be mitigated only by forethought and bold action. Fear of consequences is the most powerful force to make that happen.

Actually I believe that the Dune quote about fear complements yours. We all fear, sometimes little things, sometimes big things. It’s only by accepting those fears for what they are and letting them wash over us, rather than consume us, that we can be freed to act.

I learned a long time ago that I don’t have the stomach for politics, so my focus has to be on what I do effectively. Back to sci-fi: the Foundation trilogy is based on moving society softly and slowly by well-timed interventions. You have shown great frustration at the failure of the body politic to wake up and take command of its own destiny. I have little more faith in the body politic than did the Founding Fathers, and have seen little evidence of positive outcomes from populism.

I’ve actually been involved for the past couple of years in a political microcosm, working to save our historical center city park from depredations by the local Zoo, which wants to pave over heavily used greenspace in the center of the park for a parking lot. What I’ve learned is that most people couldn’t care less, but a few dedicated people can make a difference. I was given the luxury of pursuing that effort by some modicum of success in my profession. Perhaps one day, if I am successful in my efforts in the automation space, I can have some impact on the bigger puzzle of addressing the social impact of the new technologies, but, if I spend my time fighting ineffectual battles, I will never be in a position to do that.

That in no way diminishes your contribution, which is extraordinary. The traditional media skims the surface (or, in the case of the New Yorker and the Atlantic, goes so deep you may never resurface). The blogosphere just likes to yell. Your posts are challenging and well documented, which is why I, and apparently quite a few others, pay them serious attention.

There is, of course, no answer to how we should regard fear. Literature and philosophy have a thousand and one answers. These express the ones I believe.

“Fear is sharp-sighted, and can see things underground, and much more in the skies.”
— Miguel de Cervantes, Don Quixote de la Mancha (1605-1615).

“None but a coward dares to boast that he has never known fear.”
— Ferdinand Foch, Marshal of the Allies during the last year of WWI. The soldiers I know all agree with him. They say to fear the commander who knows no fear, as he is a fool or ignorant.

“I learned that courage was not the absence of fear, but the triumph over it. The brave man is not he who does not feel afraid, but he who conquers that fear.”
— Nelson Mandela, Long Walk to Freedom (1995).

“Only the paranoid survive.”
— Andrew Grove, who built Intel into a giant.

Here’s an answer to your quandary over where the jobs will come from to replace those lost to automation: “Millennial Agenda (and how to pay for it!)” by J.D. Alt. The short version of the article: once the Republicans alienate enough of the public to shift control of Congress, the far left plans to issue fiat currency (USD) to address the shortfall in public-goods investment. Here is the proposed agenda, to be paid for with deficit spending. A Millennial Agenda to-do list:

Free college or technical school education for every American high school graduate.
Immediate forgiveness/pay-off of all student loans in America.
Free pre-school day-care available in every American neighborhood and community.
Free medical and pharmacy clinics in every American neighborhood and community.
Free universal health-care for all American citizens.
A national housing COOP to enable the creation of affordable, workforce and retirement co-housing.
A national “higher-ground” relocation and rebuilding program for coastal communities.
A guaranteed living wage in exchange for useful community service.
A national workforce program with the following specific targets:
Coal-mine reclamation and watershed restoration
Nuclear and chemical toxic clean-up
Local water and sewage treatment systems
Local renewable energy micro-grids
Desert rain-harvesting and reforestation
Coastal wetland reclamation
Wildlife and fisheries habitat restoration

Quite delusional, like most of the stuff from our far-left and far-right these days. Nice to see that they’re having fun. Let’s encourage more of this, keeping themselves busy and from interfering with the rest of us.

Not so much talk about Unicorns these days. It’s all about Initial Coin Offerings. When I was a kid, I collected coins. There were several companies that filled market demand by creating medals as collectors’ items and selling them for high prices. I bet more than a few of those have ended up in yard sales.