The book is organized into three sections, each focused on a major trend that’s reshaping the business world: the rapidly expanding capabilities of machines; the emergence of large, asset-light platform companies; and the newfound ability to leverage the knowledge, expertise and enthusiasm of the crowd. These three trends are combining into a triple revolution, causing companies to rethink the balance between minds and machines; between products and platforms; and between the core and the crowd.

I cannot possibly do justice to all three trends in one blog post, so let me summarize the key themes of the Mind and Machine section, which I found to be an excellent explanation of the current state of AI.

The standard partnership

With the advent of ERP systems and the Internet in the 1990s, businesses settled on what McAfee and Brynjolfsson call the standard partnership between people and computers. The machines would handle routine processes, record keeping, and quantitative tasks, leaving more time for people to exercise their judgement, intuition, creativity, and interactions with each other.

Underlying the standard partnership is the belief that human decisions are generally well thought out and rational, and that our judgement and intuition are far superior to those of any computer. But, this isn’t quite the case, as shown in the pioneering research of Princeton Professor Emeritus Daniel Kahneman, for which he was awarded the 2002 Nobel Prize in Economics, and his longtime collaborator Amos Tversky, who died in 1996.

Their work was explained in Kahneman’s 2011 bestseller Thinking, Fast and Slow. Its central thesis is that our mind is composed of two very different systems of thinking, System 1 and System 2. System 1 is the intuitive, fast and emotional part of our mind. Thoughts come automatically and very quickly to System 1, without us doing anything to make them happen. System 2, on the other hand, is the slower, logical, more deliberate part of the mind. It’s where we evaluate and choose between multiple options, because only System 2 can think of multiple things at once and shift its attention between them.

System 1 typically works by developing a coherent story based on the observations and facts at its disposal. This helps us deal efficiently with the myriad simple situations we encounter in everyday life. Research has shown that the intuitive System 1 is actually more influential in our decisions, choices and judgements than we generally realize.

But, while enabling us to act quickly, System 1 is prone to mistakes. It tends to be overconfident, creating the impression that the world we live in is simpler and more coherent than it actually is. It suppresses complexity and information that might contradict its coherent story, unless System 2 intervenes because it realizes that something doesn’t quite feel right. System 1 does better the more expertise we have on a subject. Mistakes tend to happen when we operate outside our areas of expertise.

“The twenty-year old standard partnership of minds and machines more often than not places too much emphasis on human judgment, intuition and gut…” write McAfee and Brynjolfsson. “[O]ur fast, effortless System 1 style of reasoning is subject to many different kinds of bias. Even worse, it is unaware when it’s making an error, and it hijacks our rational System 2 to provide a convincing justification for what is actually a snap judgement. The evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even expert humans… Many decisions, judgements, and forecasts now made by humans should be turned over to algorithms.”

But, algorithms are far from perfect. Inaccurate or biased data will lead to inaccurate or biased predictions. Machines lack common sense, that is, the ordinary, pragmatic, comprehensive understanding of the world that we get from all the information we’re constantly taking in. Machines have a deep but narrow view of the world they were designed for. It’s generally a good idea to have a person check the machine’s decisions to make sure they make sense, while being careful not to let our intuitive System 1 override a good but counter-intuitive machine decision.

We know more than we can tell

In March of 2016, AlphaGo, a Go-playing application developed by Google DeepMind, claimed victory against Lee Sedol, one of the world’s top Go players. Go is a much more complex game than chess, with far more possible board positions than there are atoms in the universe. Nobody can explain how the top human players make smart Go moves, not even the players themselves. As one such top player explained, “I’ll see a move and be sure it’s the right one, but won’t be able to tell you exactly how I know. I just see it.”

Playing world-class Go is an example of tacit knowledge, a concept first introduced in the 1950s by scientist and philosopher Michael Polanyi. Explicit knowledge is formal, codified, and can be readily explained to people and captured in a computer program. Tacit knowledge, on the other hand, is the kind of knowledge we are often not aware we have, and is therefore difficult to transfer to another person, let alone to a machine.

“We can know more than we can tell,” noted Polanyi in what’s become known as Polanyi’s paradox. This seeming paradox succinctly captures the fact that we tacitly know a lot about the way the world works, yet aren’t able to explicitly describe this knowledge. Tacit knowledge is best transmitted through personal interactions and practical experiences. Everyday examples include speaking a language, riding a bike, driving a car, and easily recognizing many different objects.

The DeepMind team trained AlphaGo using deep learning algorithms, which are partly modeled on the way a young child learns a human language: by listening, speaking, repetition and feedback. AlphaGo was given access to 30 million board positions from an online repository of games, and was essentially told “Use this to figure out how to win” by detecting subtle patterns between actions and outcomes. It also played many games against itself, generating another 30 million board positions which it then further analyzed and learned from.

Deep learning is part of the broad class of machine learning systems that enable computers to acquire capabilities by ingesting and analyzing large amounts of data instead of being explicitly programmed, thus getting around Polanyi’s paradox. Machine learning methods are now being applied to vision, speech recognition, language translation, and other capabilities that not long ago seemed impossible but are now approaching human levels of performance in many domains.

While playing a central role in AI’s recent achievements, machine learning still has a long way to go. It’s been most successful with supervised learning, where the training data are tagged. But, it’s made little progress with unsupervised learning, which is the main way humans learn about the world.
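The supervised/unsupervised distinction can be made concrete with a toy sketch in plain Python (no ML library; the data points and function names are invented for illustration and bear no relation to AlphaGo’s actual models). A supervised learner is given points tagged with labels and fits one centroid per label; an unsupervised learner gets the same points untagged and must discover the groups itself, here with a bare-bones k-means pass:

```python
# Toy sketch: supervised vs. unsupervised learning on 2-D points.
# All data and names are invented for illustration only.

def nearest(point, centers):
    """Index of the center closest to point (squared Euclidean distance)."""
    return min(range(len(centers)),
               key=lambda i: (point[0] - centers[i][0]) ** 2 +
                             (point[1] - centers[i][1]) ** 2)

def mean(points):
    """Centroid of a list of 2-D points."""
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

# Supervised: each training point arrives tagged with its class label.
labeled = [((1.0, 1.2), "a"), ((0.8, 1.0), "a"),
           ((4.0, 4.1), "b"), ((4.2, 3.9), "b")]

def fit_supervised(data):
    """Learn one centroid per label from tagged examples."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: mean(pts) for label, pts in by_label.items()}

centroids = fit_supervised(labeled)
labels = list(centroids)
# Classify a new point by its nearest labeled centroid.
prediction = labels[nearest((0.9, 1.1), [centroids[l] for l in labels])]

# Unsupervised: same points, labels stripped; k-means must find the groups.
points = [p for p, _ in labeled]

def kmeans(points, k, iters=10):
    """Minimal k-means: assign points to nearest center, recompute centers."""
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p, centers)].append(p)
        centers = [mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = kmeans(points, 2)
```

The supervised half converges immediately because every example carries its answer; the unsupervised half has to infer the same structure from geometry alone, which is why progress there has been so much harder.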

A kind of Cambrian Explosion

The Cambrian geological period marked a profound change in life on Earth. Before it, most organisms were composed of individual cells or were simple multi-cell organisms. Then, around 550 million years ago, a dramatic change took place, which is known as the Cambrian Explosion. Over the next 70 to 80 million years, evolution essentially took off in a different direction, leading to the development of all kinds of innovative life forms and ushering in a diverse set of organisms far larger and more complex than anything that existed before. By the end of the Cambrian period, the diversity and complexity of life began to resemble that of today.

Robotics is now undergoing its own kind of Cambrian Explosion. All computers are defined by what their brains, that is, their hardware and software, are capable of computing and controlling. Robots are computers that have both a brain and a body. A robot’s capabilities are defined by what its brain and body can jointly do. Digital technology advances have greatly benefitted robots’ brains, as they have those of all other computers. In addition, the electromechanical components used in robotic devices are also advancing rapidly, making it possible to imagine a future when robots will be ubiquitous in manufacturing, health care, security, transportation, our homes and many other areas.

“Yet the objective of robotics is not to replace humans by mechanizing and automating tasks; it is to find ways for machines to assist and collaborate with humans more effectively,” wrote MIT professor Daniela Rus in a 2015 Foreign Affairs article. “Robots are better than humans at crunching numbers, lifting heavy objects, and, in certain contexts, moving with precision. Humans are better than robots at abstraction, generalization, and creative thinking, thanks to their ability to reason, draw from prior experience, and imagine. By working together, robots and humans can augment and complement each other’s skills.”

Which abilities will continue to be uniquely human as technology races ahead?

This is the most common question they get asked about minds and machines, said Brynjolfsson and McAfee. “As the digital toolkit challenges human superiority in routine information processing, pattern recognition, language, intuition, judgement, prediction, physical dexterity, and so many other things, are there any areas where we should not expect to be outstripped?”

In a 2015 HBR interview, they noted that, at least for now, humans are still far superior in three skill areas:

High-end creativity. People keep coming up with gripping novels, great business ideas and new scientific breakthroughs. But, computers’ creative abilities are expanding rapidly, especially in areas like industrial design. This is potentially promising for man-machine collaborations, where machines can generate initial proposals that people will then extend and improve.

Emotion and interpersonal relations. Millions of years of evolution enable humans to broadly interpret a situation and read people’s emotions and body language, skills that are crucial for interpersonal activities like nurturing, coaching, motivating and leading. The most difficult jobs to automate are increasingly those requiring high-level social skills.

Dexterity and mobility. Humans are very good at many tacitly learned, common-sense tasks, such as being a waiter, which might involve walking across a crowded restaurant, serving a table, taking dishes back into the kitchen and putting them in the sink without breaking them. Such tasks are quite hard for robots.

Hopefully, as has been the case with previous powerful technologies, AI will have a positive, long-term impact on humanity. But, for that to happen, we must learn to adapt to and work with our increasingly smart machines.
