To assemble Watson, IBM crated in a mammoth configuration of hardware, about 10 refrigerators’ worth. Watson didn’t go to Jeopardy!; Jeopardy! came to Watson, setting up a temporary game show studio within IBM’s T. J. Watson Research Center.
Double Jeopardy!—Would Watson Win?
Watson was no sure bet to win. During sparring games against human champions, Watson had tweaked its way up to a 71 percent winning record. It didn’t always win, and these trial runs didn’t pit it against the lethal competition it was preparing to face in the televised match: all-time leading Jeopardy! champions Ken Jennings and Brad Rutter.
The Jeopardy! match was set to draw full-scale media attention, putting IBM’s analytical prowess, or its failure, on public display. The top-rated quiz show in syndication, Jeopardy! attracts nearly nine million viewers every day and would draw an audience of 34.5 million for this special man-versus-machine match.

Grockit: This test preparation company predicts which GMAT, SAT, and ACT questions a test taker will get wrong in order to target areas for which he or she needs more study.
Table 8 Predictive Analytics in Human Language Understanding, Thought, and Psychology

What’s predicted | Example organizations that use predictive analytics
Answers to questions | IBM: Developed with predictive modeling the Watson question-answering computer, which defeated the two all-time human champions of the TV quiz show Jeopardy! in a televised standoff (more details in Chapter 6).
Lies | University at Buffalo: Researchers trained a system to detect lies with 82 percent accuracy by observing eye movements alone.
Lies | Researchers: Predict deception with 76 percent accuracy within written statements by persons of interest in military base criminal investigations.

They considered creating an enormous wall of Watson. It would take over much of the Jeopardy set, perhaps in the form of a projected brain, with neurons firing, or maybe a virtual sketchpad, dancing with algorithms and formulas as the machine cogitated. “They were pretty grand ideas,” said David Korchin, the project’s creative director.
In talking to Jeopardy executives, though, it quickly became clear that they’d have to think smaller. If IBM’s Watson passed muster, it would be a guest on the show. It would not take it over. Its branding space, like that of any other contestant, would be limited to the face behind the podium—or whatever fit there. Jeopardy held the power and exercised it. If IBM’s computer was to benefit from an appearance on Jeopardy, the quiz show would lay down the rules.
Now that Watson was reduced from a possible Jumbotron to a human-sized space, what sort of creature would occupy it?

…

Ferrucci’s response, while cordial, was noncommittal. Jeopardy, not IBM, was in charge of selecting Watson’s sparring partners.
Before going on Jeopardy, Craig had long relied on traditional strategies. He’d read books on the game, including the 1998 How to Get on Jeopardy—And Win, by Michael DuPee. He’d also gone to Google Scholar, the search engine’s repository of academic works, and downloaded papers on Final Jeopardy betting. Craig was steeped in the history and lore of the games, as well as various strategies, many of them named for players who had made them famous. One Final Jeopardy technique, Marktiple Choice, involves writing down a number of conceivable answers and then eliminating the unlikely ones. Formulated by a 2003 champion, Mark Dawson, it prods players to extend the search beyond the first response that pops into their mind.

…

and appeared to pay tribute to its creators, responding: “What is IBM?”
Watson’s greatest weakness was Final Jeopardy. According to the statistics, after the first sixty clues Watson was leading in an astounding 91 percent of its games. Yet that final clue, with its more difficult wording and complex wagering dynamics, lowered its winning percentage to 67 percent. Final Jeopardy turned Watson from winner to loser in roughly a quarter of its games. This was its vulnerability going into the match, and it would no doubt loom even larger against the likes of Ken Jennings and Brad Rutter. The average human got Final Jeopardy right about half the time, according to Gondek. Watson hovered just below 50 percent. Ken Jennings, by contrast, aced Final Jeopardy clues at a 68 percent rate. That didn’t bode well for the machine.
Brad Rutter, undefeated in his Jeopardy career, walked into the cavernous Wheel of Fortune studio.

With this book, which is aimed at a broad audience rather than just the technical community, we hope to greatly expand the search for answers by stimulating new thinking within industry, government, and academia. And, just as importantly, we hope to inspire university and high school students to pursue careers in science, technology, engineering, and mathematics. Together, we can drive the exploration and invention that will shape society, the economy, and business for the next fifty years.
1
A NEW ERA OF COMPUTING
IBM’s Watson computer created a sensation when it bested two past grand champions on the TV quiz show Jeopardy! Tens of millions of people suddenly understood how “smart” a computer could be. This was no mere parlor trick; the scientists who designed Watson built upon decades of research in the fields of artificial intelligence and natural-language processing and produced a series of breakthroughs. Their ingenuity made it possible for a system to excel at a game that requires both encyclopedic knowledge and lightning-quick recall.

…

We depend on them to produce the surprising advances that knock the world off kilter and, ultimately, have the potential to make it a better place. We will need many of them to make the transition to the era of cognitive systems. In the end, this era is not about machines but about the people who design and use them.
2
BUILDING LEARNING SYSTEMS
In November 2009, when IBM’s Watson was under development for its showdown on Jeopardy!, the machine made one laughable mistake after another in test matches. In one particularly funny instance, it was prompted to identify what the “Al” in the company name Alcoa stands for. It fired back, “What is Al Capone?” Everybody in the room cracked up. The machine had confused the first two letters of aluminum with the name of a famous gangster.1
No harm was done.

At the time, the story of IBM’s “Deep Blue” computer and how it had defeated world chess champion Garry Kasparov in 1997 was perhaps the most impressive demonstration of AI in action. Once again, I was taken by surprise when IBM introduced Deep Blue’s successor, Watson—a machine that took on a far more difficult challenge: the television game show Jeopardy! Chess is a game with rigidly defined rules; it is the sort of thing we might expect a computer to be good at. Jeopardy! is something else entirely: a game that draws on an almost limitless body of knowledge and requires a sophisticated ability to parse language, including even jokes and puns. Watson’s success at Jeopardy! is not only impressive, it is highly practical; in fact, IBM is already positioning Watson to play a significant role in fields like medicine and customer service.
It’s a good bet that nearly all of us will be surprised by the progress that occurs in the coming years and decades.

…

WorkFusion has found that, as the system’s machine learning algorithms incrementally automate the process further, costs typically drop by about 50 percent after one year and still another 25 percent after a second year of operation.13
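The compounding in these figures is easy to misread: the second-year reduction applies to the already-halved cost, not to the original baseline. A minimal sketch, using only the percentages stated above and an illustrative $100 baseline:

```python
# Illustrative sketch of the compounding cost reduction described above:
# about 50 percent off after year one, then a further 25 percent off
# the remaining cost after a second year of operation.
def remaining_cost(initial_cost: float) -> list[float]:
    """Return the cost left after each of the first two years."""
    year_one = initial_cost * (1 - 0.50)  # 50% reduction in year one
    year_two = year_one * (1 - 0.25)      # further 25% in year two
    return [year_one, year_two]

print(remaining_cost(100.0))  # a $100 baseline falls to $50, then $37.50
```

So the two stated reductions leave about 37.5 percent of the original cost, not 25 percent.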
Cognitive Computing and IBM Watson
In the fall of 2004, IBM executive Charles Lickel had dinner with a small team of researchers at a steakhouse near Poughkeepsie, New York. Members of the group were taken aback when, at precisely seven o’clock, people suddenly began standing up from their tables and crowding around a television in the bar area. It turned out that Ken Jennings, who had already won more than fifty straight matches on the TV game show Jeopardy!, was once again attempting to extend his historic winning streak. Lickel noticed that the restaurant’s patrons were so engaged that they abandoned their dinners, returning to finish their steaks only after the match concluded.14
That incident, at least according to many recollections, marked the genesis of the idea to build a computer capable of playing—and beating the very best human champions at—Jeopardy!

…

This WorkFusion information is based on a telephone conversation between the author and Adam Devine, vice president of Product Marketing & Strategic Partnerships at WorkFusion, on May 14, 2014.
14. This incident is recounted in Steven Baker, Final Jeopardy: Man vs. Machine and the Quest to Know Everything (New York: Houghton Mifflin Harcourt, 2011), p. 20. The story of the steakhouse dinner is also told in John E. Kelly III, Smart Machines: IBM’s Watson and the Era of Cognitive Computing (New York: Columbia University Press, 2013), p. 27. However, Baker’s book indicates that some IBM employees believe the idea to build a Jeopardy!-playing computer predates the dinner.
15. Rob High, “The Era of Cognitive Systems: An Inside Look at IBM Watson and How it Works,” IBM Redbooks, 2012, p. 2, http://www.redbooks.ibm.com/redpapers/pdfs/redp4955.pdf.
16. Baker, Final Jeopardy: Man vs. Machine and the Quest to Know Everything, p. 30.
17. Ibid., pp. 9 and 26.
18.

The current debate, Berner adds, revives an old one in economics, pointing to a 1947 article, “Measurement Without Theory,” by Tjalling Koopmans, a Dutch-American economist who later won a Nobel Prize. The Koopmans article was a critique of the hard-line “empiricist” approach to the study of business cycles back then.
Few people have wielded the power of data to such dramatic effect as David Ferrucci. He led the IBM research team that created Watson, the Jeopardy! winner. That contest ended with Ken Jennings, the all-time champion of the TV quiz show, writing on his video screen, in a gesture of genial surrender, “I, for one, welcome our new computer overlords.”
The human face of Watson, to the extent that there was one, was Ferrucci, a goateed computer scientist who was always articulate and at ease in front of a camera or microphone. Yet at the end of 2012, Ferrucci joined Bridgewater Associates, a giant hedge fund, after what he describes as “a great, great career” at IBM, spanning twenty years.

…

In a technical sense, the law, formulated by Intel’s cofounder Gordon Moore in 1965, is the observation that transistor density on computer chips doubles about every two years and that computing power improves at that exponential pace. But in a practical sense, it also means that seemingly quantitative changes become qualitative, opening the door to new possibilities and new capabilities. In computing, you start by calculating the flight trajectory of artillery shells, the task assigned to the ENIAC (Electronic Numerical Integrator and Computer) in 1946. And by 2011, you have IBM’s Watson beating the best humans in the question-and-answer game Jeopardy!
To a computer, it’s all just the 1’s and 0’s of digital code. Yet the massive quantitative improvement in performance over time drastically changes what can be done. Trained physicists in the data world often compare the quantitative-to-qualitative transformation to a “phase change,” or change of state, as when a gas becomes a liquid or a liquid becomes a solid.
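The scale of that quantitative improvement can be put in back-of-the-envelope terms. The 1946 and 2011 dates and the two-year doubling period come from the passage above; the rest is a rough illustration, not a precise measurement:

```python
# Back-of-the-envelope sketch of Moore's-law-style growth:
# capability doubling roughly every two years.
def doublings(start_year: int, end_year: int, period: float = 2.0) -> float:
    """Number of doubling periods between two years."""
    return (end_year - start_year) / period

def growth_factor(start_year: int, end_year: int) -> float:
    """Multiplicative improvement implied by doubling every two years."""
    return 2 ** doublings(start_year, end_year)

# ENIAC (1946) to Watson (2011): 32.5 doublings, a factor of roughly
# six billion. The point is not the exact number but the "phase change"
# that this kind of exponential growth produces.
print(growth_factor(1946, 2011))
```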

…

Forty thousand IBM consultants, engineers, sales people, and scientists working in the data business are spread across the company’s services, software, and research divisions. In early 2014, Rometty announced that its prototype projects with the Watson technology in health care and other industries were sufficiently encouraging to justify creating a new business division. IBM will invest $1 billion in the Watson business and grow the unit to 2,000 people. Watson has become a “cloud” software service, delivered Google-style over the Internet from remote data centers. IBM is sharing Watson technology with outside software developers and start-ups so they can write applications that run on top of Watson, and it has created a $100 million equity fund to jump-start that third-party development. The company hopes that Watson can become the equivalent of an operating system for artificial intelligence software.

Computers and robots remain lousy at doing anything outside the frame of their programming. Watson, for example, is an amazing Jeopardy! player, but would be defeated by a child at Wheel of Fortune, The Price is Right, or any other TV game show unless it was substantially reprogrammed by its human creators. Watson is not going to get there on its own.
Instead of conquering other game shows, however, the IBM team behind Watson is turning its attention to other fields such as medicine. Here again, it will be limited by its frame. Make no mistake: we believe that Watson will ultimately make an excellent doctor. Right now human diagnosticians reign supreme, but just as Watson soon got good enough to beat Ken Jennings, Brad Rutter, and all other human Jeopardy! players, we predict that Dr. Watson will soon be able to beat Dr. Welby, Dr. House, and real human doctors at their own game.

…

The translation services company Lionbridge has partnered with IBM to offer GeoFluent, an online application that instantly translates chats between customers and troubleshooters who do not share a language. In an initial trial, approximately 90 percent of GeoFluent users reported that it was good enough for business purposes.14
Human Superiority in Jeopardy!
Computers are now combining pattern matching with complex communication to quite literally beat people at their own games. In 2011, the February 14 and 15 episodes of the TV game show Jeopardy! included a contestant that was not a human being. It was a supercomputer called Watson, developed by IBM specifically to play the game (and named in honor of legendary IBM CEO Thomas Watson, Sr.). Jeopardy! debuted in 1964 and in 2012 was the fifth most popular syndicated TV program in America.15 On a typical day almost 7 million people watch host Alex Trebek ask trivia questions on various topics as contestants vie to be the first to answer them correctly.*
The show’s longevity and popularity stem from its being easy to understand yet extremely hard to play well.

Instead, by capturing and reusing huge bodies of past experience, this technology provides an approach to professional work that simply was not possible in the past. In the words of Patrick Winston, a leading voice for decades in the world of artificial intelligence, ‘there are lots of ways of being smart that aren’t smart like us’.46
IBM’s Watson
In the same spirit, IBM’s system Watson, which we regard as a landmark development in artificial intelligence, was not designed to solve problems in the way that human beings do.47 Watson was developed in part to demonstrate that machines could indeed attain exceptional levels of apparently intelligent performance. Named after the founder of IBM, the system was developed to compete on Jeopardy!—a TV quiz show in the United States. This represented IBM’s latest contribution to the branch of AI that in the 1980s was called ‘game-playing’. Previously, IBM had developed Deep Blue, a computer system that beat the world chess champion Garry Kasparov in 1997.

…

In section 4.6 we point to a variety of existing techniques and technologies that are already achieving remarkable levels of performance in a wide range of tasks. Perhaps the most dramatic is IBM’s Watson, the computer system that was catapulted to fame by its appearance in 2011 on Jeopardy!, a TV quiz show, on which, in a live broadcast, it beat the two best-ever human contestants. Much has been said and written about this feat, but nothing perhaps as witty and insightful as a headline in the Wall Street Journal to an opinion piece by the philosopher John Searle. It read: ‘Watson Doesn’t Know It Won on “Jeopardy!”’2 We could add that Watson, after its great triumph, had no apparent inclination to laugh or cry, to go for a celebratory drink, to share the moment with a close friend, to chat about what it felt like, or to commiserate with its vanquished opponents.

…

In contemplating the potential of future machines to outperform human beings, what really matters is not how the systems operate but whether, in terms of the outcome, the end product is superior. In other words, whether machines will replace human professionals is not about the capacity of systems to perform tasks as people do. It is about whether systems can outperform human beings—full stop. And so, when IBM’s Watson beat the best-ever human champions on a TV quiz show, what mattered was not whether Watson had cognitive states in common with its flesh-and-blood opponents, but whether its score was higher.
To be more precise, then, the fundamental question to be asked and answered is whether machines and systems can undertake tasks that for human beings require cognitive, affective, manual, and moral capabilities, even if they discharge these tasks by quite different means.

Moore, “Cramming More Components onto Integrated Circuits,” Electronics 38, no. 8 (1965).
10. Ronda Hauben, “From the ARPANET to the Internet,” last modified June 23, 1998, http://www.columbia.edu/~rh120/other/tcpdigest_paper.txt.
11. For the proverbially impaired, here’s the original: “Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.”
12. Joab Jackson, “IBM Watson Vanquishes Human Jeopardy Foes,” PC World, February 16, 2011, http://www.pcworld.com/article/219893/ibm_watson_vanquishes_human_jeopardy_foes.html.
2. TEACHING ROBOTS TO HEEL
1. For a firsthand narrative of some of these events by the inventor himself, see Vic Scheinman’s interview at Robotics History: Narratives and Networks, accessed November 25, 2014, http://roboticshistory.indiana.edu/content/vic-scheinman.
2. I’m indebted to my friend Carl Hewitt, known for his early logic programming language Planner, for his eyewitness report on this incident.

…

They are best understood as developing their own intuitions and acting on instinct: a far cry from the old canard that they “can only do what they are programmed to do.”
I’m happy to report that IBM long ago came around to accepting the potential of AI and to recognizing its value to its corporate mission. In 2011, the company demonstrated its in-house expertise with a spectacular victory over the world’s champion Jeopardy! player, Ken Jennings. IBM is now parlaying this victory into a broad research agenda and has, characteristically, coined its own term for the effort: cognitive computing. Indeed, it is reorganizing the entire company around this initiative.
It’s worth noting that IBM’s program, named Watson, had access to 200 million pages of content consuming four terabytes of memory.12 As of this writing, three years later, you can purchase four terabytes of disk storage from Amazon for about $150. Check back in two years, and the price will likely be around $75.

…

For me, the practice of medicine today conjures the image of a Hieronymus Bosch painting, with tiny, pitchfork-wielding devils inflicting their own unique forms of pain.
As a patient, you would ideally prefer to be treated by a superdoc who is expert in all the specialties and is up to date on all of the latest medical information and best practices. But of course no such human exists.
Enter IBM’s Watson program. Fresh off its Jeopardy! victory over champions Brad Rutter and Ken Jennings, Watson was immediately redeployed to tackle this new challenge. In 2011, IBM and WellPoint, the nation’s largest healthcare benefits manager, entered into a collaboration to apply Watson technology to help improve patient care. The announcement says, “Watson can sift through an equivalent of about one million books or roughly 200 million pages of data, and analyze this information and provide precise responses in less than three seconds.”

The driverless car might well be safer than one controlled by a fallible Homo sapiens.
If the driverless car weren’t enough of a challenge to human superiority, who could have watched IBM’s Watson supercomputer defeat the Jeopardy Hall of Famers in 2011 and not fretted about the future of physicians, or any highly skilled workers, for that matter? “Just as factory jobs were eliminated in the twentieth century by new assembly-line robots,” wrote all-time (human) Jeopardy champion Ken Jennings soon after the lopsided match ended, “Brad [Rutter, the other defeated champ] and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”
Soon after the well-publicized trouncing, IBM announced that one of its first “use cases” for Watson would be medicine.

…

—Atul Gawande, Complications: A Surgeon’s Notes on an Imperfect Science
When IBM announced that Watson’s first post-Jeopardy focus would be healthcare, the media immediately ran with the Man Versus Machine meme, dubbing the computer “Dr. Watson.” “Meet Dr. Watson: Jeopardy Winning Supercomputer Heads into Healthcare,” proclaimed one headline. “Paging Dr. Watson: IBM’s Medical Advisor for the Future,” read another.
IBMers immediately steered clear of the “Dr. Watson” narrative, with its implied cockiness and gauntlet throwing. Paul Grundy, IBM’s global director of healthcare transformation, told me: “Certainly none of us on the clinical side ever talked about this being Dr. Watson. That’s not what it does.” Added Michael Weiner, who runs IBM’s healthcare strategies: “Sure, we said, ‘Look, a machine beat a man at a quiz show,’ but I don’t think that’s the power of this conversation.”

…

,” Artificial Intelligence in Medicine 5:93–106 (1993).
102 He calls today’s medical IT programs “Version 0” Khosla, “20-Percent Doctor Included.”
103 These cases illustrate a perennial debate in AI See, for example, M. van Emden, “Scruffies and Neats in Artificial Intelligence: A Programmer’s Place,” September 11, 2011, available at http://vanemden.wordpress.com/2011/09/11/scruffies-and-neats-in-artificial-intelligence/.
103 When he was asked about the difference between human thinking E. Brown, “IBM’s ‘Watson’ in Layman’s Terms by Dr. Eric W. Brown,” available at https://www.youtube.com/watch?v=gRVjFhEnLRQ.
Chapter 10: David and Goliath
105 “There is a science in what we do” A. Gawande, Complications: A Surgeon’s Notes on an Imperfect Science (New York: Metropolitan Books, 2002).
105 dubbing the computer “Dr. Watson” The “Meet Dr. Watson” headline is from S. Kliff, “Meet Dr. Watson: ‘Jeopardy’-Winning Super Computer Heads into Health Care,” Washington Post, September 12, 2011; the “Paging Dr. Watson” headline is from J. Jackson, “Paging Dr. Watson, IBM’s Medical Advisor for the Future,” PC World, August 28, 2014.
105 Paul Grundy … told me Interview of Grundy by the author, July 21, 2014.
105 Added Michael Weiner, who runs IBM’s healthcare strategies Interview of Weiner by the author, July 28, 2014.
106 The name of the child, and the software, is Isabel www.isabelhealthcare.com.
106 It wasn’t in Jason Maude’s life plan Interview of Maude by the author, July 21, 2014, as well as L.

However, the advance of Big Data technology doesn’t stop with tomorrow. Beyond tomorrow probably holds surprises that no one has even imagined yet. As technology marches ahead, so will the usefulness of Big Data. A case in point is IBM’s Watson, an artificial intelligence computer system capable of answering questions posed in natural language. In 2011, as a test of its abilities, Watson competed on the quiz show Jeopardy!, in the show’s only human-versus-machine match to date. In a two-game, combined-point match, broadcast in three episodes aired February 14–16, Watson beat Brad Rutter, the biggest all-time money winner on Jeopardy!, and Ken Jennings, the record holder for the longest championship streak (74 wins).
Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage, including the full text of Wikipedia, but was not connected to the Internet during the game.

…

Watson demonstrated that there are new ways to deal with Big Data and new ways to measure results, perhaps exemplifying where Big Data may be headed.
So what’s next for Watson? IBM has stated publicly that Watson was a client-driven initiative, and the company intends to push Watson in directions that best serve customer needs. IBM is now working with financial giant Citi to explore how the Watson technology could improve and simplify the banking experience. Watson’s applicability doesn’t end with banking, however; IBM has also teamed up with health insurer WellPoint to turn Watson into a machine that can support the doctors of the world.
According to IBM, Watson is best suited for use cases involving critical decision making based on large volumes of unstructured data. To drive the Big Data–crunching message home, IBM has stated that 90 percent of the world’s data was created in the last two years, and 80 percent of that data is unstructured.

…

Furthering the value proposition of Watson and Big Data, IBM has also stated that five new research documents come out of Wall Street every minute, and medical information is doubling every five years.
IBM views the future of Big Data a little differently than other vendors do, most likely based on its Watson research. In IBM’s future, Watson becomes a service—as IBM calls it, Watson-as-a-Service—which will be delivered as a private or hybrid cloud service.
Watson aside, the health care industry seems ripe as a source of prediction for how Big Data will evolve. Examples abound for the benefits of Big Data and the medical field; however, getting there is another story altogether. Health care (or in this context, “Big Medicine”) has some specific challenges to overcome and some specific goals to achieve to realize the potential of Big Data:
Big Medicine is drowning in information while also dying of thirst.

Phase two, which we believe we’re in now, has a start date that’s harder to pin down. It’s the time when science fiction technologies—the stuff of movies, books, and the controlled environments of elite research labs—started to appear in the real world. In 2010, Google unexpectedly announced that a fleet of completely autonomous cars had been driving on US roads without mishap. In 2011, IBM’s Watson supercomputer beat two human champions at the TV quiz show Jeopardy! By the third quarter of 2012, there were more than a billion users of smartphones, devices that combined the communication and sensor capabilities of countless sci-fi films. And of course, the three advances described at the start of this chapter happened in the past few years. As we’ll see, so did many other breakthroughs. They are not flukes or random blips in technological progress.

…

‡ Watson doesn’t (yet) understand language the way humans do, but it does find patterns and associations in written text that it can use to populate its knowledge base.
§ Fast Company journalist Mark Wilson loved the “Bengali Butternut” barbecue sauce that Watson came up with (Mark Wilson, “I Tasted BBQ Sauce Made by IBM’s Watson, and Loved It,” Fast Company, May 23, 2014, https://www.fastcodesign.com/3027687/i-tasted-bbq-sauce-made-by-ibms-watson-and-loved-it), but called its “Austrian Chocolate Burrito” the worst he’d ever had (Mark Wilson, “IBM’s Watson Designed the Worst Burrito I’ve Ever Had,” Fast Company, April 20, 2015, https://www.fastcodesign.com/3045147/ibms-watson-designed-the-worst-burrito-ive-ever-had).
¶ A mechanical fillet is a smooth transition from one area of a part to another—for example, a rounded corner between two surfaces that meet at a right angle.
# Or it might not be a good idea.

…

However, the process of eliciting knowledge in interviews would consume a lot of time, would take people away from their job, and probably wouldn’t work very well. The people doing the less routine back-office work are, in all likelihood, not able to accurately and completely tell someone else how to do their job.
The Japanese insurer Fukoku Mutual Life is taking a different approach. In December of 2016, it announced an effort to use IBM’s Watson AI technology to at least partially automate the work of human health insurance claim processors. The system will begin by extracting relevant information from documents supplied by hospitals and other health providers, using it to fill in the proper codes for insurance reimbursement, then presenting this information to people. But over time, the intent is for the system to “learn the history of past payment assessment to inherit the experience and expertise of assessor workers.”

Intelligent algorithms automatically detect credit card fraud, fly and land airplanes, guide intelligent weapons systems, help design products with intelligent computer-aided design, keep track of just-in-time inventory levels, assemble products in robotic factories, and play games such as chess and even the subtle game of Go at master levels.
Millions of people witnessed the IBM computer named Watson play the natural-language game of Jeopardy! and obtain a higher score than the best two human players in the world combined. It should be noted that not only did Watson read and “understand” the subtle language in the Jeopardy! query (which includes such phenomena as puns and metaphors), but it obtained the knowledge it needed to come up with a response from understanding hundreds of millions of pages of natural-language documents including Wikipedia and other encyclopedias on its own. It needed to master virtually every area of human intellectual endeavor, including history, science, literature, the arts, culture, and more.

…

Secondly, they provide a solid basis for the lower levels of the conceptual hierarchy so that the automated learning can begin to learn higher conceptual levels.
As mentioned above, Watson represents a particularly impressive example of the approach of combining hand-coded rules with hierarchical statistical learning. IBM combined a number of leading natural-language programs to create a system that could play the natural-language game of Jeopardy! On February 14–16, 2011, Watson competed with the two leading human players: Brad Rutter, who had won more money than anyone else on the quiz show, and Ken Jennings, who had previously set the Jeopardy! record with a seventy-four-game winning streak.
By way of context, I had predicted in my first book, The Age of Intelligent Machines, written in the mid-1980s, that a computer would take the world chess championship by 1998. I also predicted that when that happened, we would either downgrade our opinion of human intelligence, upgrade our opinion of machine intelligence, or downplay the importance of chess, and that if history was a guide, we would minimize chess.

…

The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.
Note that the linear programming that Grötschel cites above as having benefited from an improvement in performance of 43 million to 1 is the mathematical technique that is used to optimally assign resources in a hierarchical memory system such as HHMM that I discussed earlier. I cite many other similar examples like this in The Singularity Is Near.6
Regarding AI, Allen is quick to dismiss IBM’s Watson, an opinion shared by many other critics. Many of these detractors don’t know anything about Watson other than the fact that it is software running on a computer (albeit a parallel one with 720 processor cores). Allen writes that systems such as Watson “remain brittle, their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific areas.”

They don’t initiate analyses on their own, they don’t understand the larger purpose of what they’re doing, and they don’t tell you when they aren’t up to the task at hand. As Mike Rhodin, the head of IBM’s Watson business unit, noted, “Watson doesn’t have the ability to think on its own,” and neither does any other intelligent system thus far created.13
There is some progress, however, in the area of telling humans whether the output of smart machines should be used and trusted. The statistically based systems used for analyzing words and images are increasingly capable of this latter task. In fact, it should be a requirement that all such systems tell you when you should and shouldn’t trust their results, and some already do this. You may recall, for example, that when Watson dominated the Jeopardy! game in 2011, the program displayed a “confidence bar” ranking the top three answers and its confidence level in each one.
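The "confidence bar" idea, ranking the top candidates and flagging whether the best one clears a trust threshold, can be sketched in a few lines of Python. The scores and threshold below are invented for illustration; they are not Watson's actual values.

```python
# Sketch of a confidence display: rank candidate answers and report
# whether the top one clears a trust threshold. Scores are invented.

def rank_answers(scored_answers, trust_threshold=0.5):
    """Return the top three candidates (best first) and a flag saying
    whether the best score is high enough to trust."""
    top3 = sorted(scored_answers.items(), key=lambda kv: kv[1], reverse=True)[:3]
    best_answer, best_score = top3[0]
    return top3, best_score >= trust_threshold

candidates = {"Nancy Reagan": 0.87, "Jane Wyman": 0.08, "Barbara Bush": 0.03}
top3, trustworthy = rank_answers(candidates)
print(top3[0][0], trustworthy)  # Nancy Reagan True
```

The point of the flag is exactly the requirement discussed above: the system tells you not just its best guess but whether that guess deserves your trust.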

…

But with that strong caveat noted, it’s important to keep feeding the knowledge base that will help you make connections between problems your organization has and the solutions that are out there.
For example, it’s been a very recent development that computers have learned to read and make inferences from fast, vast digestion of textual content. If you weren’t part of the AI community, you might have first learned of this when IBM’s Watson won Jeopardy! To come up with each response, Watson (specifically its “Discovery Advisor”) read whole encyclopedias and untold Internet pages. How could you use that power? At the Baylor College of Medicine in Houston, researchers used it to read through more than 70,000 scientific articles, looking for accounts of any protein that could modify p53, a protein that regulates cancer growth. Most scientists would struggle to identify one such protein in a year; Watson took only a few weeks to find six (although, to be fair, it took several years to prepare Watson to do this).6 Other organizations are using similar technologies to glean insights from natural-language content that exists in enormous volume.

CHAPTER 7
165 Thanks to their ability to dissect: Background on Watson comes from: Rashid, Fahmida. “IBM’s Watson Ties for Lead on Jeopardy but Makes Some Doozies.” EWeek, February 14, 2011. http://www.eweek.com/c/a/IT-Infrastructure/IBMs-Watson-Ties-for-Lead-on-Jeopardy-but-Makes-Some-Doozies-237890; and Best, Jo. “IBM Watson: How the Jeopardy-Winning Supercomputer Was Born, and What It Wants to Do Next.” TechRepublic. http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/.
166 IBM collected some of the results: Basulto, Dominic. “How IBM Watson Helped Me to Create a Tastier Burrito Than Chipotle.” Washington Post, April 15, 2015. http://www.washingtonpost.com/blogs/innovations/wp/2015/04/15/how-ibm-watson-helped-me-to-create-a-tastier-burrito-than-chipotle/.
167 “Let’s try poker”: Wise, Gary.

…

Ask the human to perform a calculation, and he’d be much slower, not to mention more error prone, than the computer. Even so, there are still some situations that bots struggle with. When playing Jeopardy!, Watson found the short clues the most difficult. If the host read out a single category and a name—such as “first ladies” and Ronald Reagan—Watson would take too long to search through its database to find the correct response (which is “Who is Nancy Reagan?”). Whereas Watson would beat a human contestant in a race to solve a long, complicated clue, the human would prevail if there were only a few words to go by. In quiz shows, it seems that brevity is the enemy of machines.
The same is true of poker. Bots need time to study their opponents, learning their betting style so it can be exploited. In contrast, human professionals are able to evaluate other players much more quickly.

…

Afterward, the human player told the bot’s creators, “You have a very strong program. Once you add opponent modeling to it, it will kill everyone.”
7
THE MODEL OPPONENT
WHEN IT CAME TO THE GAME SHOW JEOPARDY! KEN JENNINGS and Brad Rutter were the best. It was 2011, and Rutter had netted the most prize money, while Jennings had gone a record seventy-four appearances without defeat. Thanks to their ability to dissect the show’s famous general knowledge clues, they had won over $5 million between them.
On Valentine’s Day that year, Jennings and Rutter returned for a special edition of the show. They would face a new opponent, named Watson, who had never appeared on Jeopardy! before. Over the course of three episodes, Jennings, Rutter, and Watson answered questions on literature, history, music, and sports. It didn’t take long for the newcomer to edge into the lead.

“the smartest medical student we have ever had”: Joanna Stern, “IBM’s Watson Supercomputer Gets Job as Oncologist at Memorial Sloan-Kettering Cancer Center,” ABC News, March 22, 2012, accessed March 26, 2013, abcnews.go.com/Technology/ibms-watson-supercomputer-job-memorial-sloan-kettering-cancer/story?id=15979580#.UVQxTKt5MhM.
A Pew study found that 22 percent of all TV watchers: Aaron Smith and Jan Lauren Boyles, “The Rise of the ‘Connected Viewer’” (Pew Internet & American Life Project, July 17, 2012), 2, accessed March 26, 2013, pewinternet.org/Reports/2012/Connected-viewers.aspx.
rebuslike short forms of expression: I owe this idea to Crystal’s Txtng, 39.
Andy Hickl, an AI inventor: Thompson, “What Is I.B.M.’s Watson?”
As one of the commenters at the site explained: “Reader’s Comments: What Is IBM’s Watson?” accessed March 26, 2013, community.nytimes.com/comments/www.nytimes.com/2010/06/20/magazine/20Computer-t.html?

…

But the Internet has enabled us to exercise new powers, new ways to talk to each other, to pass things around, to quickly broadcast a hunch—“hey, wasn’t there a joke involving Toto in that movie?”—and get feedback. Using the mammoth stores of knowledge online, people were able to scrutinize and pick apart the reasoning of the world’s most sophisticated artificial intelligence.
For serious fun, imagine if we emulated Kasparov fully here and used Watson to create an entirely new game show—a sort of Advanced Jeopardy!, as it were. Imagine two teams facing one another, a human paired with a Watson-style AI. What type of fiendish, seemingly impossible clues could you throw at a cyborg composed of Ken Jennings and Watson—the brute force of a machine paired with the fuzzy intuition of a human? What puzzles could that combination tackle in everyday life?
How should you respond when you get powerful new tools for finding answers?


The Driver in the Driverless Car: How Our Technology Choices Will Create the Future, by Vivek Wadhwa and Alex Salkever

…the narrow and practical stuff that is going to change our lives. The fact is that, no matter what the experts say, no one really knows how A.I. will evolve in the long term.
How A.I. Will Affect Our Lives—And Take Our Jobs
Let’s begin with our bodies. The same type of artificial-intelligence technology that IBM’s Watson used to defeat champions on the TV show Jeopardy! will soon monitor our health data, predict disease, and advise on how to stay fit. Already, IBM Watson has learned about all the advances in oncology and is better at diagnosing cancer than human doctors are.2 Watson and its competitors will soon learn about every other field of medicine and will provide us with better, and better-informed, advice than our doctors do.
A.I. technologies will also be able to analyze a continual flow of data on millions of patients and on the medications they have taken to determine which truly had a positive effect on them, which ones created adverse reactions and new ailments, and which did both.

The ability of patients to take regular tests in the comfort of their homes and upload data to shared servers will make it possible to dramatically increase the quality, and lower the cost, of the health care they receive.
Continuous monitoring of health data by artificial intelligence–based applications will enable the prevention of disease, especially lifestyle diseases such as diabetes and cardiovascular illness. Patients able to operate health systems equipped with a smartly designed user interface will also be able to use IBM Watson or other A.I. systems for personal diagnoses, cutting the doctor entirely out of the loop for initial detection and diagnoses (though we certainly will still need doctors to guide us through more-advanced care choices). So the cost of delivering high-quality care will surely plummet, and acute medical treatments in expensive hospitals rife with nasty resistant bugs will give way to preventive care occurring in our communities and, ultimately, our homes.

Extrapolations based on recent computing trends have a way of turning into fantasies. But even if we assume, contrary to the extravagant promises of big-data evangelists, that there are limits to the applicability and usefulness of correlation-based predictions and other forms of statistical analysis, it seems clear that computers are a long way from bumping up against those limits. When, in early 2011, the IBM supercomputer Watson took the crown as the reigning champion of Jeopardy!, thrashing two of the quiz show’s top players, we got a preview of where computers’ analytical talents are heading. Watson’s ability to decipher clues was astonishing, but by the standards of contemporary artificial-intelligence programming, the computer was not performing an exceptional feat. It was, essentially, searching a vast database of documents for potential answers and then, by working simultaneously through a variety of prediction routines, determining which answer had the highest probability of being the correct one.

The FAA had collected evidence, from crash investigations, incident reports, and cockpit studies, indicating that pilots had become too dependent on autopilots and other computerized systems. Overuse of flight automation, the agency warned, could “lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesired state.” It could, in blunter terms, put a plane and its passengers in jeopardy. The alert concluded with a recommendation that airlines, as a matter of operational policy, instruct pilots to spend less time flying on autopilot and more time flying by hand.1
This is a book about automation, about the use of computers and software to do things we used to do ourselves. It’s not about the technology or the economics of automation, nor is it about the future of robots and cyborgs and gadgetry, though all those things enter into the story.

Now these chips do more than one computation at a time, working together in parallel to solve problems faster.
Let’s look at one machine in particular, IBM’s Watson, which played and won on the game show Jeopardy in episodes first broadcast in February 2011. Watson consisted of ninety IBM POWER 750 servers, each with four POWER7 processors. A POWER7 processor actually contains eight processing units (called cores), so it can run eight computations simultaneously. That’s thirty-two cores per server, for a total of 2,880 cores in the entire Watson system. Watson performs 2,880 simultaneous computations so it can interpret the Jeopardy “answer” and determine whether to buzz in that fraction of a second before its opponents. The 2,880 simultaneous cores will seem a tiny number in the near future as Moore’s law continues to put more on each machine.
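The core counts in this passage multiply out exactly as stated, which a few lines of Python confirm:

```python
# Core-count arithmetic from the passage: 90 servers, each with four
# POWER7 chips, each chip holding eight cores.
servers = 90
chips_per_server = 4
cores_per_chip = 8

cores_per_server = chips_per_server * cores_per_chip   # 32
total_cores = servers * cores_per_server               # 2,880
print(cores_per_server, total_cores)  # 32 2880
```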

Later developments in randomized and quantum computation will show that perhaps we cannot have a fixed notion of efficient computation.
Then in 1971 came Steve Cook’s paper that defined NP, the problems we can verify efficiently, the P versus NP problem, and the first NP-complete problem. A year later Richard Karp gave his paper showing that a number of important problems were NP-complete.
In 1972 the IBM T. J. Watson Research Center hosted the Symposium on the Complexity of Computer Computations, a meeting mostly remembered for Karp’s presentation of his paper. The organizers of the meeting held a panel discussion at the end of the symposium on the future of the field. One of the questions asked was, “How is the theory developing from originally being a scattering of a few results on lower bounds and some algorithms into a more unified theory?”

The more data fed into the program or computer, the more it learns, the better the algorithms, and supposedly the smarter it gets.
Techniques from machine learning and artificial intelligence are what powered the triumph of the IBM Watson supercomputer over humans on Jeopardy. This relied upon quickly answering complex questions that would not be amenable to a Google search.30–32 IBM Watson was taught through hundreds of thousands of questions from prior Jeopardy shows, armed with all the information in Wikipedia, and programmed to do predictive modeling. There’s no prediction of the future here, just prediction that IBM Watson has the correct answer. Underlying its predictive capabilities was quite a portfolio of machine learning systems, including Bayesian nets, Markov chains, support vector machine algorithms, and genetic algorithms.33 I won’t go into any more depth; my brain is not smart enough to understand it all, and fortunately it’s not particularly relevant to where we are going here.

…

The House model is ideally suited for computer automation in medicine, and it is precisely what IBM Watson outputs.70,71 The pretest probability incorporates all of the medical literature published to date. When you submit to IBM Watson all the pieces of evidence about a particular patient in search of the diagnosis, you get a list of the possible ones. Attached to each is a weight or probability (likelihood ratio).
Further, the Bayesian model for computer-assisted diagnosis is quickly becoming part of clinical care and can extend to treatment recommendations. A web-based information resource known as Modernizing Medicine has collected knowledge from over fifteen million patient visits and four thousand physicians, with treatments and outcomes of each patient.72 So added to IBM Watson’s differential diagnosis capability, a list of treatments with weighted assignments of probability could be generated that matches the patient at hand to all the patients in the database.
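The weighting scheme described above is, at its core, the odds form of Bayes' theorem: convert a pretest probability to odds, multiply by each finding's likelihood ratio, and convert back. A minimal sketch, with all numbers invented for illustration rather than drawn from any real clinical system:

```python
# Odds-form Bayesian update with likelihood ratios. All numbers here
# are invented for illustration; they come from no real clinical data.

def update_probability(pretest_prob, likelihood_ratios):
    """Apply each finding's likelihood ratio to the pretest odds and
    return the resulting post-test probability."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# A 5 percent pretest probability updated by two supportive findings
# (likelihood ratios 4.0 and 2.5):
posttest = update_probability(0.05, [4.0, 2.5])
print(round(posttest, 3))  # 0.345
```

Each likelihood ratio is exactly the kind of "weight" attached to a candidate diagnosis: findings with ratios above 1 push the diagnosis up the list, ratios below 1 push it down.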

Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.
RAY KURZWEIL
In 2011, we all watched with awe when IBM’s Watson supercomputer beat the world champions on the television game show Jeopardy! Using artificial intelligence and natural language processing, Watson digested over 200 million pages of structured and unstructured data, which it processed at a rate of eighty teraflops—that’s eighty trillion operations per second. In doing so, it handily defeated Ken Jennings, a human Jeopardy! contestant who had won seventy-four games in a row. Jennings was gracious in his defeat, noting, “I, for one, welcome our new computer overlords.” He might want to rethink that.
Just three years after Watson beat Jennings, the supercomputer achieved a 2,400 percent improvement in performance and shrank by 90 percent, “from the size of a master bedroom to three stacked pizza boxes.”

…

Watson has also now shifted careers, using its vast cognitive powers not for quiz shows but for medicine. The M. D. Anderson Cancer Center is using Watson to help doctors match patients with clinical trials, and at the Sloan Kettering Institute, Watson is voraciously reading 1.5 million patient records and hundreds of thousands of oncology journal articles in an effort to help clinicians come up with the best diagnoses and treatments. IBM has even launched the Watson Business Group with a $1 billion investment earmarked to get companies, nonprofits, and governments to take advantage of Watson’s capabilities. These moves are putting supercomputer-level artificial intelligence into the hands of both small companies and individuals—and in the future likely Crime, Inc. as well.

One night, Horn and his colleagues were dining out at a steak house near IBM’s headquarters and noticed that all the restaurant patrons had suddenly gathered around the televisions at the bar. The crowd had assembled to watch Ken Jennings continue his legendary winning streak at the game show Jeopardy!, a streak that in the end lasted seventy-four episodes. Seeing that crowd forming planted the seed of an idea in Horn’s mind: Could IBM build a computer smart enough to beat Jennings at Jeopardy!?
The system they eventually built came to be called Watson, named after IBM’s founder Thomas J. Watson. To capture as much information about the world as possible, Watson ingested the entirety of Wikipedia, along with more than a hundred million pages of additional data. There is something lovely about the idea of the world’s most advanced thinking machine learning about the world by browsing a crowdsourced encyclopedia.

…

Already Watson is being employed to recommend cancer treatment plans by analyzing massive repositories of research papers and medical data, and answer technical support questions about complex software issues. But still, Watson’s roots are worth remembering: arguably the most advanced form of artificial intelligence on the planet received its education by training for a game show.
You might be inclined to dismiss Watson’s game-playing roots as a simple matter of publicity: beating Ken Jennings on television certainly attracted more attention for IBM than, say, Watson attending classes at Oxford would have. But when you look at Watson in the context of the history of computer science, the Jeopardy! element becomes much more than just a public relations stunt. Consider how many watershed moments in the history of computation have involved games: Babbage tinkering with the idea of a chess-playing “analytic engine”; Turing’s computer chess musings; Thorp and Shannon at the roulette table in Vegas; the interface innovations introduced by Spacewar!

…

As Jennings later described it in an essay:
The computer’s techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels “sure” enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.
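The loop Jennings describes (key words in, candidate answers scored against context, buzz only when "sure" enough) can be sketched schematically. The tiny "data bank," the overlap scoring, and the threshold below are all invented stand-ins; the real system combined hundreds of evidence sources over terabytes of text.

```python
# Schematic sketch of the clue-answering loop described above. The toy
# "data bank" and overlap scoring are invented; real DeepQA weighed
# hundreds of scoring components, not a single keyword overlap.

DATA_BANK = {
    "shoe": {"iron", "hoof", "horse", "casino", "card-dealing"},
    "saddle": {"horse", "leather", "seat"},
}

def answer_clue(clue_words, buzz_threshold=0.6):
    # Score each candidate by how many clue key words it is associated with.
    scores = {
        answer: len(keywords & clue_words) / len(clue_words)
        for answer, keywords in DATA_BANK.items()
    }
    best = max(scores, key=scores.get)
    # Buzz only when the machine feels "sure" enough.
    if scores[best] >= buzz_threshold:
        return best, scores[best]
    return None, scores[best]

clue = {"iron", "hoof", "horse", "casino", "card-dealing"}
print(answer_clue(clue))  # ('shoe', 1.0)
```

Returning `None` when no candidate clears the threshold mirrors the decision not to buzz at all, the "confidence gate" Jennings identifies as the human-like part of the process.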
IBM, of course, has plans for Watson that extend far beyond Jeopardy!.

Computers are excellent tools for producing answers, but they don’t know how to ask questions, at least not in the sense humans do.
In 2014, I got an interesting response to this assertion. I had been invited to speak at the headquarters of the world’s largest hedge fund in Connecticut, Bridgewater Associates. In a revealing turn of events, they had hired Dave Ferrucci, one of the creators of the IBM artificial intelligence project Watson, famous for its triumphs on the American television quiz show Jeopardy. Ferrucci sounded disillusioned by IBM’s focus on a data-driven approach to AI, and how it wanted to exploit the impressive Watson and its sudden celebrity by turning it into a commercial product as quickly as possible. He had been working on more sophisticated “paths” that aimed at explaining the “why” of things, not only finding useful correlations via data mining.

…

As with a chess engine crunching through billions of positions to find the best move, language can be broken down into values and probabilities to produce a response. The faster the machine, the more and better-quality the data, and the smarter the code, the more accurate the response is likely to be.
Adding a bit of irony regarding whether or not computers can ask questions, the format of the television game show Jeopardy, where Watson showed off its capabilities by defeating two human former champions, requires contestants to provide their answers in the form of a question. That is, if the show’s host says, “This Soviet program won the first World Computer Chess Championship in 1974,” the player would press the buzzer and answer, “What was Kaissa?” But this odd convention is simple protocol with no bearing on the machine’s ability to find the answers in its fifteen terabytes of data.

…

Smarter computers are one key to success, but doing a smarter job of humans and machines working together turns out to be far more important.
These investigations led to visits to places like Google, Facebook, and Palantir, companies for whom algorithms are lifeblood. There have also been some more surprising invitations, including one from the headquarters of the world’s largest hedge fund, where algorithms make or lose billions of dollars every day. There I met one of the creators of Watson, the Jeopardy-playing computer that could be called IBM’s successor to Deep Blue. Another trip was to participate in a debate in front of an executive banking audience in Australia on what impact AI was likely to have on jobs in their industry. Their interests are quite different, but they all want to be on the cutting edge of the machine intelligence revolution, or at least to not be cut by it.
I’ve been speaking to business audiences for many years, usually on subjects like strategy and how to improve the decision-making process.

It could happen only after Moore’s law entered the second half of the chessboard and gave us sufficient power to digitize almost everything imaginable—words, photos, data, spreadsheets, voice, video, and music—as well as the capacity to load it all into computers and the supernova, the networking ability to move it all around at high speed, and the software capacity to write multiple algorithms that could teach a computer to make sense of unstructured data, just as a human brain might, and thereby enhance every aspect of human decision making.
When IBM designed Watson to play Jeopardy!, Kelly explained to me, it knew from studying the show and the human contestants exactly how long it could take to digest the question and buzz in to answer it. Watson would have about a second to understand the question, half a second to decide the answer, and a second to buzz in to answer first. It meant that “every ten milliseconds was gold,” said Kelly. But what made Watson so fast, and eventually so accurate, was not that it was actually “learning” per se, but its ability to self-improve by using all its big data capacities and networking to make faster and faster statistical correlations over more and more raw material.

…

Today, the IBM team notes, you can get genetic sequencing of your tumor with a lab test in an hour and the doctor, using Watson, can pinpoint those drugs to which that particular tumor is known to best respond—also in an hour. Today, IBM will feed a medical Watson 3,000 images, 200 of which are of melanomas and 2,800 are not, and Watson then uses its algorithm to start to learn that the melanomas have these colors, topographies, and edges. And after looking at tens of thousands and understanding the features they have in common, it can, much quicker than a human, identify particularly cancerous ones. That capability frees up doctors to focus where they are most needed—with the patient.
In other words, the magic of Watson happens when it is combined with the unique capabilities of a human doctor—such as intuition, empathy, and judgment. The synthesis of the two can lead to the creation and application of knowledge that is far superior to anything either could do on their own. The Jeopardy! game, said Kelly, pitted two human champions against a machine; the future will be all about Watson and doctors—man and machine—solving problems together.

…

—Katarn to Skywalker
Yeah—and I sense it, too.
On February 14, 2011, a turning point of sorts in the history of humanity was reached on—of all places—one of America’s longest-running television game shows, Jeopardy! That afternoon one of the contestants, who went by just his last name, Watson, competed against two all-time great Jeopardy! champions, Ken Jennings and Brad Rutter. Mr. Watson did not try to respond to the first clue, but with the second clue he buzzed in first to answer.
The clue was: “Iron fitting on the hoof of a horse or a card-dealing box in a casino.”
Watson, in perfect Jeopardy! style, responded with the question “What is ‘shoe’?”
That response should go down in history with the first words ever uttered on a telephone, on March 10, 1876, when Alexander Graham Bell, the inventor, called his assistant—whose name, ironically, was Thomas Watson—and said, “Mr. Watson, come here. I want to see you.”

Consumers can tap into that always-on intelligence directly, but also through third-party apps that harness the power of this AI cloud. Like many parents of a bright mind, IBM would like Watson to pursue a medical career, so it should come as no surprise that the primary application under development is a medical diagnosis tool. Most of the previous attempts to make a diagnostic AI have been pathetic failures, but Watson really works. When, in plain English, I give it the symptoms of a disease I once contracted in India, it gives me a list of hunches, ranked from most to least probable. The most likely cause, it declares, is giardia—the correct answer. This expertise isn’t yet available to patients directly; IBM provides Watson’s medical intelligence to partners like CVS, the retail pharmacy chain, helping it develop personalized health advice for customers with chronic diseases based on the data CVS collects.

…

In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM’s Watson, Google’s DeepMind and search engine, and Facebook’s algorithms.
This perfect storm of cheap parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue—and there’s no reason to think they won’t—AI will keep improving.
As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life.

…

And this was before smartphones became the norm.
We are just starting to get good at giving great answers. Siri, the audio phone assistant for the iPhone, delivers spoken answers when you ask her a question in natural English. I use Siri routinely. When I want to know the weather, I just ask, “Siri, what’s the weather for tomorrow?” Android folks can audibly ask Google Now for information about their calendars. IBM’s Watson proved that for most kinds of factual reference questions, an AI can find answers fast and accurately. Part of the increasing ease in providing answers lies in the fact that past questions answered correctly increase the likelihood of another question. At the same time, past correct answers increase the ease of creating the next answer, and increase the value of the corpus of answers as a whole.

If the company is marketing the deodorant to Jeff's teenage kids, the "Ugh" from their father might not even be a negative. Jeff makes it easy for Umbria's computer by putting his age and gender on the blog. (We even learn that he's a Leo.) This type of research turns traditional surveying on its head. Unprompted by marketers, bloggers like Jeff volunteer the answers to millions of potential questions. "In a sense, we're very similar to the game show Jeopardy!" Kaushansky says. "People have already said that they like a certain car or dislike a movie. It's our job to formulate the questions."
Kaushansky's team is also starting to divide bloggers into different groups, or tribes. Kaushansky envisions nearly endless tribal affiliations. Doritos munchers, bikers for Obama, MINI Cooper enthusiasts. Once the company has sorted bloggers into tribes, it can start digging for correlations between tribes and products.

…

And it's a matter of time before management starts recording such behavior. The very thought fills me with such regret that I click on the video once more, not so much to laugh at the dog as to soak up the on-the-job freedom it represents.
On a late spring morning I drive over the Tappan Zee Bridge, across the wide expanse of the Hudson. Then I hook left, away from New York City and up into the forests of Westchester County, to the headquarters of IBM's Thomas J. Watson Research Laboratory. It sits like a fortress atop a hill, a long, curved wall of glass reflecting the cotton-ball clouds floating above. I have a date there with Samer Takriti, the Syrian-born mathematician who launched me on this entire project. He was the one who described to me early on how his team was building mathematical models of thousands of IBM's tech consultants. The idea, he said, was to piece together inventories of all of their skills and then to calculate, mathematically, how best to deploy them.

…

Story has it, Takriti says after he hangs up, that the original Takritis were warriors who marched from Saddam's native city, Tikrit, in Iraq. His branch of the family, he tells me, eventually settled in Syria. A top engineering student in Damascus, Takriti won a fellowship in the mid-1980s to study at the University of Michigan. He fell head over heels for math. In 1996, by then a Ph.D., he landed a research job at IBM's fabled Watson Research Center, a half-hour drive north of New York City. This son of Tikrit warriors now walked among the gods of math.
Takriti's specialty was stochastic analysis. This is the math that attempts to tie predictions to random events. Say it rains in Tucson from zero to six times per month, and you listen to the weather report, which has been right 19 of the past 20 days, only three times a week.
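A toy calculation in the spirit of that example shows what stochastic analysis does: it combines a base rate with an imperfect forecast to get a usable probability. All numbers below are invented for illustration.

```python
# Toy stochastic calculation: given a base rate of rain and a forecaster
# of known accuracy, compute P(rain | forecaster says rain) via Bayes'
# theorem. The base rate and accuracy are invented for illustration.

def p_rain_given_forecast(base_rate, accuracy):
    says_rain_and_rains = accuracy * base_rate          # forecast right, rain
    says_rain_but_dry = (1 - accuracy) * (1 - base_rate)  # forecast wrong, dry
    return says_rain_and_rains / (says_rain_and_rains + says_rain_but_dry)

# Rain on 10 percent of days, forecaster right 95 percent of the time:
print(round(p_rain_given_forecast(0.10, 0.95), 3))  # 0.679
```

Even a 95-percent-accurate forecast of a rare event leaves substantial uncertainty, which is exactly why tying predictions to random events takes careful math rather than intuition.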

As I noted above, anxiety about labor-saving technology is actually a constant through the whole history of capitalism. But we do see many indications that we now have the possibility—although not necessarily the reality—of drastically reducing the need for human labor. A few examples will demonstrate the diverse areas in which human labor is being reduced or eliminated entirely.
In 2011, IBM made headlines with its Watson supercomputer, which successfully competed and won against human competitors on the game show Jeopardy. Although this feat was a somewhat frivolous publicity stunt, it also demonstrated Watson’s suitability for other, more valuable tasks. The technology is already being tested to assist doctors in processing the enormous volume of medical literature to better diagnose patients, which in fact was the system’s original purpose. But it is also being released as the “Watson Engagement Advisor,” which is intended for customer service and technical support applications.

…

In a world where the economy is based on intellectual property, companies will constantly be suing each other for alleged infringements of others’ copyrights and patents, so there will be a need for a lot of lawyers. This will provide employment for some significant fraction of the population, but again it’s hard to see this being enough to sustain an entire economy, particularly because of a theme that we saw in the introductory chapter: just about anything can, in principle, be automated. Watson, IBM’s Jeopardy-playing computer program, is already automating the work of lower-level law firm staff. And it’s easy to imagine big intellectual property firms coming up with procedures for mass-filing lawsuits that rely on fewer and fewer human lawyers, just as there are now systems that detect copyrighted music in online videos and send requests for removal. On the other hand, perhaps an equilibrium will arise where every individual needs to keep a lawyer on retainer, because no one can afford the cost of auto-lawyer software but they must still fight off lawsuits from firms attempting to win big damages for alleged infringement.

She raised enough money, changed the company’s name from My Cybertwin to Cognea, and set up shop in both Silicon Valley and New York. In the spring of 2014, she sold her company to IBM. The giant computer firm followed its 1997 victory in chess over Garry Kasparov with a comparable publicity stunt in which one of its robots competed against two of the best human players of the TV quiz show Jeopardy! In 2011, the IBM Watson system triumphed over Brad Rutter and Ken Jennings. Many thought the win was evidence that AI technologies had exceeded human capabilities. The reality, however, was more nuanced. The human contestants could occasionally anticipate the brief window of time in which they could press the button and buzz in before Watson. In practice, Watson had an overwhelming mechanical advantage that had little to do with artificial intelligence.

…

Watson had originally been designed as a “question-answering” system, a step toward one of the fundamental goals of artificial intelligence. With Cognea, Watson gained the ability to carry on a conversation. How will Watson be used? The choice facing IBM and its engineers is remarkable: Watson can serve as an intelligent assistant to any number of professionals, or it can replace them. At the dawn of artificial intelligence, IBM backed away from the field. What will the company do in the future?
Ken Jennings, the human Jeopardy! champion, saw the writing on the wall: “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”23
7 | TO THE RESCUE
The robot laboratory was ghostly quiet on a weekend afternoon in the fall of 2013.

…

Artificial intelligence researchers like to point out that aircraft can fly just fine without resorting to flapping their wings—an argument that asserts that to duplicate human cognition or behavior, it is not necessary to comprehend it. However, the chasm between AI and IA has only deepened as AI systems have become increasingly facile at human tasks, whether it is seeing, speaking, moving boxes, or playing chess, Jeopardy!, or Atari video games.
Terry Winograd was one of the first to see the two extremes clearly and to consider the consequences. His career traces an arc from artificial intelligence to intelligence augmentation. As a graduate student at MIT in the 1960s, he focused on understanding human language in order to build a software equivalent to Shakey—a software robot capable of interacting with humans in conversation.

It manufactured scales, time cards, and most famously, punch card tabulators that allowed for the storage and analysis of ever-larger amounts of manufacturing, distribution, and sales data. By 1915, a young salesman named Thomas J. Watson had risen from a regional office to take over the company, then named the Computing-Tabulating-Recording Company. Within a decade, he had changed the name of the company to the International Business Machines Corporation, and encouraged the use of its acronym, IBM. Watson Sr.’s famous motto was “Think,” but the driving ethos of the company was “Sell.” In an era when the art of selling was revered as being next to godliness, Watson was known as the world’s greatest salesman (as well as being one of the world’s richest people).12 Watson Sr. understood that IBM had to keep its eye on computers and help to shape their future. During the Second World War, the company cofunded the development of Harvard University’s Mark I computer, and IBM scientists and engineers established and solidified linkages with the military that became increasingly critical to the company in the postwar era.

…

., was positioned to take over the company in the early 1950s, the biggest question they both faced was how to confront the changes that digital technologies would have on their company. Should they invest in digital computing, or would this undercut the profits on their flagship mechanical calculators and paper card tabulating machines? Watson Sr.’s contribution was essentially to hand over the company to his son at the very moment when this decision became central to IBM’s fortunes. In so doing, Watson Sr. ensured that the computer would make it out of the laboratory and into businesses worldwide, with the huge infrastructure of sales and support that IBM had already built up during the prewar years and the postwar boom.
It was to Thomas J. Watson Jr.’s credit that he negotiated the smooth transition between the two regimes at IBM. In the 1950s, he managed the development and release of the 650 series of computers, which was the first major commercial computing endeavor.

…

Just as no one will download mindfully at all times, it is an impossible request to ask people to only upload meaningfully. But setting the bar too high is preferable to not setting the bar at all.
Fifty years ago, the categorizing of meaning was considered to be one of—if not the—chief calling of the critic. The advent of critical theories like poststructuralism, deconstruction, and postmodernism put many of the classic categories in jeopardy: building a canon around the good and the beautiful was “problematized,” high and low ceased to function as viable categories for culture, and progress and truth were discussed as creations of power struggles rather than immutables in the human condition. Theory with a capital T practiced a brilliant negative dialectics, but did not always replace the overthrown concepts with new, more congenial ones.

After Siri translates your query into text, its three other main talents come into play: its NLP facility, searching a vast knowledge database, and interacting with Internet search providers, such as OpenTable, Movietickets, and Wolfram|Alpha.
IBM's Watson is kind of a Siri on steroids, and a champion at NLP. In February 2011, it employed both brain-derived and brain-inspired systems to achieve an impressive victory against human contestants on Jeopardy! Like chess champion computer Deep Blue, Watson is IBM’s way of showing off its computing know-how while moving the ball down the field for AI. The long-running game show promised a formidable challenge because of its open domain of clues and its wordplay. Contestants must understand puns, similes, and cultural references, and they must phrase answers in the form of questions.

…

However, language recognition is not something Watson specializes in. It cannot understand the spoken word. And since it cannot see or feel, it cannot read, so during the competitions the words of the Jeopardy! clues were hand-entered by Watson’s pit crew. And since Watson cannot hear either, audio and video clues were omitted.
Hey, wait a minute, did Watson really win at Jeopardy! or a custom-tailored variation?
Since its victory, to get Watson to understand what people say, IBM has paired it with Nuance speech recognition technology. And Watson is reading terabytes of medical literature. One of IBM’s goals is to shrink Watson down from its present size—a roomful of servers—to refrigerator-size and make it the world’s best medical diagnostician. One day not long from now you may have an appointment with a virtual assistant who’ll pepper you with questions, and provide your physician with a diagnosis.

…

We’ll find out if the challenge of creating software architectures that match human-level intelligence is just too difficult to conquer, and whether or not all that stretches out ahead is a perpetual AI winter.
Chapter Twelve
The Last Complication
How can we be so confident that we will build superintelligent machines? Because the progress of neuroscience makes it clear that our wonderful minds have a physical basis, and we should have learned by now that our technology can do anything that’s physically possible. IBM’s Watson, playing Jeopardy! as skillfully as human champions, is a significant milestone and illustrates the progress of machine language processing. Watson learned language by statistical analysis of the huge amounts of text available online. When machines become powerful enough to extend that statistical analysis to correlate language with sensory data, you will lose a debate with them if you argue that they don’t understand language.

The Big Ten Network uses algorithms to create original pieces posted just seconds after games, eliminating human copywriters.37
Artificial intelligence took a big leap into the future in 2011 when an IBM computer, Watson—named after IBM’s past chairman—took on Ken Jennings, who held the record of 74 wins on the popular TV show Jeopardy!, and defeated him. The showdown, which netted a $1 million prize for IBM, blew away TV viewers as they watched their Jeopardy! hero crumble in the presence of the “all-knowing” Watson. Watson is a cognitive system that is able to integrate “natural language processing, machine learning, and hypothesis generation and evaluation,” says its proud IBM parent, allowing it to think and respond to questions and problems.38
Watson is already being put to work. IBM Healthcare Analytics will use Watson to assist physicians in making quick and accurate diagnoses by analyzing Big Data stored in the electronic health records of millions of patients, as well as in medical journals.39
IBM’s plans for Watson go far beyond serving the specialized needs of the research industry and the back-office tasks of managing Big Data. Watson is being offered up in the marketplace as a personal assistant that companies and even consumers can converse with by typed text or in real-time spoken words. IBM says that this is the first time artificial intelligence is graduating from a simple question-and-answer mode to a conversational mode, allowing for more personal interaction and customized answers to individual queries.40
AI scientists will tell you that the most challenging hurdle for their industry is breaking through the language barrier. Comprehending the rich meaning of complex metaphors and phrases in one language and simultaneously retelling the story in another language is perhaps the most difficult of all cognitive tasks and among the most distinctively human of abilities.

The composer David Cope of the University of California, Santa Cruz, has developed software that generates novel musical compositions in the style of a given composer. And they sound really good. A Scott Joplin–style composition sounds like Joplin. While Cope didn’t create these songs directly, he still can take pride in their construction. His computational creations can provide him with naches. Similarly, the creators of IBM’s Watson might shep naches from the machine’s win over its human opponents on Jeopardy!
We can broaden this sense of naches still more. Many of us support a sports team and take pride in its wins, even though we had nothing to do with them. Or we become excited when a citizen of our country takes the gold in the Olympics, or makes a new discovery and is awarded a prestigious prize. So, too, should it be with our machines for all humanity: we can root for what humans have created, even if it wasn’t our own personal achievement and even if we can’t fully understand it.

…

Even though we continue to specialize in order to handle the more complicated systems we are building, seeing this web of interconnections reminds us that each domain does not stand alone; they are all part of a vast connected framework.
Since these systems are interconnected in many different ways, we will increasingly require the ability to connect one area of knowledge to another. When constructing a computer program that can play Jeopardy!, for example, you need knowledge of everything from linguistics to computer hardware; specialization alone will not work. We need a certain breadth of knowledge. However, as noted earlier, before too long, we will bump up against the limits to what we can truly understand; we just can’t hold all the relevant knowledge in our heads.
In response, we need to cultivate generalists, individuals who not only can see the lay of the land—the abstract physics style of thinking—but can also delight in the details of a system without necessarily understanding them all—the more miscellaneous biological style of thinking.

Dick.
1989: Tim Berners-Lee invents the World Wide Web.
1990: Seiji Ogawa presents the first fMRI machine.
1993: Rodney Brooks and others start the MIT Cog Project, an attempt to build a humanoid robot child in five years.
1997: Deep Blue defeats Garry Kasparov at chess.
2000: Cynthia Breazeal at MIT describes Kismet, a robot with a face that simulates expressions.
2004: DARPA launches the Grand Challenge for autonomous vehicles.
2009: Google builds the self-driving car.
2011: IBM’s Watson wins the TV game show Jeopardy!
2014: Google buys UK company Deep Mind for $650 million.
2014: Eugene Goostman, a computer program that simulates a thirteen-year-old boy, passes the Turing Test.
2014: Estimated number of robots in the world reaches 8.6 million.1
2015: Estimated number of PCs in the world reaches two billion.2
NOTES
Introduction
1 PCs (‘Personal computers’) started becoming widely available in the early 1980s: IBM 5150 in 1981, Commodore PET in 1983.

…

The other seminal event that signalled that something big was changing in the field of Artificial Intelligence took place in February 2011, and was televised. Watson – another computer developed by IBM – beat two former, human, winners of the popular American TV quiz Jeopardy! and won the prize of a million dollars. Watson was a truly amazing machine. It was not a singular entity but a cluster of ninety servers, each one equipped with multiple processors. Its massively parallel hardware architecture was capable of supporting millions of searches into its knowledge base. For the purpose of the TV quiz, the engineers at IBM loaded Watson with 200 million pages of data, including dictionaries, encyclopaedias and literary articles. Moreover, Watson communicated in natural language. You asked it a question, it understood it, and returned an answer. For this to happen, Watson’s designers exploited the whole arsenal of AI tools and techniques, including machine learning, natural language processing and knowledge representation.

…

Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years.’24
Ray Kurzweil adopted Vinge’s argument in a series of popular science books that explore the technological drivers, and potentially devastating impact, of superhuman Artificial Intelligence. Kurzweil marks the year 2030 as a watershed by extrapolating, like Vinge, from today’s exponential improvement of computers according to Moore’s Law:25 2030 thus becomes the year that computer complexity will surpass the complexity of information processing in the human brain. Deep Blue, driverless cars crossing the Mojave Desert, and Watson beating humans at Jeopardy! all seem to validate the arguments made by Vinge and Kurzweil. Brute computer power has made computers more ‘intelligent’. Nevertheless, underneath the correlation between powerful computing and intelligent behaviour lurk two fundamental assumptions that deserve closer examination.
The first assumption is that our computer technology, whose architecture is different from that of the human brain, is nevertheless capable of exhibiting every aspect of human intelligence, including self-awareness.

The algorithm surprised the study’s authors by correctly predicting 75 percent of the verdicts (based on only a handful of different metrics), as compared to the team of legal scholars, who guessed just 59 percent, despite having access to far more specialized information.65 In its own way, the “Supreme Court Forecasting Project” was the legal profession’s equivalent of IBM’s Watson supercomputer winning $1 million on Jeopardy! in 2011—marking, as it did, the culmination of a long-held techno dream first proposed by the jurimetrics movement. In 1897, Oliver Wendell Holmes Jr. wrote enthusiastically of his belief that the legal system, as with science’s natural laws, should be quantifiably predictable. “The object of our study . . . is prediction,” he observed, “the prediction of the incidence of the public force through the instrumentality of the courts.”66
But if anything, the Supreme Court Forecasting Project was a bastardization of the jurimetrics’ utopian vision.

…

In the aftermath of Iamus’s concert, a staff writer for the Columbia Spectator named David Ecker put pen to paper (or rather finger to keyboard) to write a polemic taking aim at the new technology. “I use computers for damn near everything, [but] there’s something about this computer that I find deeply troubling,” Ecker wrote.
I’m not a purist by any stretch. I hate overt music categorization, and I hate most debates about “real” versus “fake” art, but that’s not what this is about. This is about the very essence of humanity. Computers can compete and win at Jeopardy!, beat chess masters, and connect us with people on the other side of the world. When it comes to emotion, however, they lack much of the necessary equipment. We live every day under the pretense that what we do carries a certain weight, partly due to the knowledge of our own mortality, and this always comes through in truly great music. Iamus has neither mortality nor the urgency that comes with it.

Yet had he had different credentials—and not revealed his method—he could have been hailed as the most clever analyst since Charles H. Dow.
As a counterpoint to Koppett’s story, consider now the story of a fellow who does have credentials, a fellow named Bill Miller. For years, Miller maintained a winning streak that, unlike Koppett’s, was compared to Joe DiMaggio’s fifty-six-game hitting streak and the seventy-four consecutive victories by the Jeopardy! quiz-show champ Ken Jennings. But in at least one respect these comparisons were not very apt: Miller’s streak earned him each year more than those other gentlemen’s streaks had earned them in their lifetimes. For Bill Miller was the sole portfolio manager of Legg Mason Value Trust Fund, and in each year of his fifteen-year streak his fund beat the portfolio of equity securities that constitute the Standard & Poor’s 500.

…

Naipaul just another struggling author, and somewhere out there roam the equals of Bill Gates and Bruce Willis and Roger Maris who are not rich and famous, equals on whom Fortune did not bestow the right breakthrough product or TV show or year. What I’ve learned, above all, is to keep marching forward because the best news is that since chance does play a role, one important factor in success is under our control: the number of at bats, the number of chances taken, the number of opportunities seized. For even a coin weighted toward failure will sometimes land on success. Or as the IBM pioneer Thomas Watson said, “If you want to succeed, double your failure rate.”
I have tried in this book to present the basic concepts of randomness, to illustrate how they apply to human affairs, and to present my view that its effects are largely overlooked in our interpretations of events and in our expectations and decisions. It may come as an epiphany merely to recognize the ubiquitous role of random processes in our lives; the true power of the theory of random processes, however, lies in the fact that once we understand the nature of random processes, we can alter the way we perceive the events that happen around us.

…

That may be the case for Marilyn, who is most famous for her response to the following question, which appeared in her column one Sunday in September 1990 (I have altered the wording slightly):
Suppose the contestants on a game show are given the choice of three doors: Behind one door is a car; behind the others, goats. After a contestant picks a door, the host, who knows what’s behind all the doors, opens one of the unchosen doors, which reveals a goat. He then says to the contestant, “Do you want to switch to the other unopened door?” Is it to the contestant’s advantage to make the switch?2
The question was inspired by the workings of the television game show Let’s Make a Deal, which ran from 1963 to 1976 and in several incarnations from 1980 to 1991. The show’s main draw was its handsome, amiable host, Monty Hall, and his provocatively clad assistant, Carol Merrill, Miss Azusa (California) of 1957.
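Marilyn's famously contested answer, that switching doubles the contestant's chance of winning, from one-third to two-thirds, is easy to verify by simulation. A minimal Python sketch (mine, not from the original text):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")
```

Run it and the frequencies settle near 1/3 for staying and 2/3 for switching: the initial pick is right one time in three, and switching wins exactly when the initial pick was wrong.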

Lionbridge’s GeoFluent shows how much progress has been made in computers’ ability to engage in complex communication. Another technology developed at IBM’s Watson labs, this one actually named Watson, shows how powerful it can be to combine these two abilities and how far the computers have advanced recently into territory thought to be uniquely human.
Watson is a supercomputer designed to play the popular game show Jeopardy! in which contestants are asked questions on a wide variety of topics that are not known in advance.1 In many cases, these questions involve puns and other types of wordplay. It can be difficult to figure out precisely what is being asked, or how an answer should be constructed. Playing Jeopardy! well, in short, requires the ability to engage in complex communication.
The way Watson plays the game also requires massive amounts of pattern matching.

…

In January of 2011, however, the translation services company Lionbridge announced pilot corporate customers for GeoFluent, a technology developed in partnership with IBM. GeoFluent takes words written in one language, such as an online chat message from a customer seeking help with a problem, and translates them accurately and immediately into another language, such as the one spoken by a customer service representative in a different country.
GeoFluent is based on statistical machine translation software developed at IBM’s Thomas J. Watson Research Center. This software is improved by Lionbridge’s digital libraries of previous translations. This “translation memory” makes GeoFluent more accurate, particularly for the kinds of conversations large high-tech companies are likely to have with customers and other parties. One such company tested the quality of GeoFluent’s automatic translations of online chat messages. These messages, which concerned the company’s products and services, were sent by Chinese and Spanish customers to English-speaking employees.

…

Although multiplying five-digit numbers is an unnatural and difficult skill for the human mind to master, the visual cortex routinely does far more complex mathematics each time it detects an edge or uses parallax to locate an object in space. Machine computation has surpassed humans in the first task but not yet in the second one.
As digital technologies continue to improve, we are skeptical that even these skills will remain bastions of human exceptionalism in the coming decades. The examples in Chapter 2 of Google’s self-driving car and IBM’s Watson point to a different path going forward. The technology is rapidly emerging to automate truck driving in the coming decade, just as scheduling truck routes was increasingly automated in the last decade. Likewise, the high end of the skill spectrum is also vulnerable, as we see in the case of e-discovery displacing lawyers and, perhaps, in a Watson-like technology displacing human medical diagnosticians.

The gap between a dumb and a clever person may appear large from an anthropocentric perspective, yet in a less parochial view the two have nearly indistinguishable minds.9 It will almost certainly prove harder and take longer to build a machine intelligence that has a general level of smartness comparable to that of a village idiot than to improve such a system so that it becomes much smarter than any human.
Consider a contemporary AI system such as TextRunner (a research project at the University of Washington) or IBM’s Watson (the system that won the Jeopardy! quiz show). These systems can extract certain pieces of semantic information by analyzing text. Although these systems do not understand what they read in the same sense or to the same extent as a human does, they can nevertheless extract significant amounts of information from natural language and use that information to make simple inferences and answer questions. They can also learn from experience, building up more extensive representations of a concept as they encounter additional instances of its use.
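As a toy illustration of the kind of pattern-based fact extraction described above, one can pull "X is a Y" pairs out of text with a single regular expression. The pattern and example sentences here are my own contrivance; real systems such as TextRunner use far more sophisticated statistical methods at web scale:

```python
import re

# Crude "X is/was a Y" extractor: a caricature of open information extraction.
# Only handles single- or multi-word alphanumeric phrases, by design.
PATTERN = re.compile(r"(\w[\w ]*?) (?:is|was) (?:a|an|the) (\w[\w ]*)")

def extract_is_a(text: str) -> list[tuple[str, str]]:
    """Return (subject, category) pairs matched by the toy pattern."""
    return [(s.strip(), o.strip()) for s, o in PATTERN.findall(text)]

facts = extract_is_a("Watson is a computer. Tucson is a city.")
print(facts)  # → [('Watson', 'computer'), ('Tucson', 'city')]
```

The gap between this sketch and a deployed system, handling negation, paraphrase, coreference, and noisy web text, is exactly the gap the passage above describes.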

…

It completes perfectly the puzzle rated most difficult by humans, yet is stumped by a couple of nonstandard puzzles that involve spelling backwards or writing answers diagonally.)48
Scrabble: Superhuman. As of 2002, Scrabble-playing software surpasses the best human players.49
Bridge: Equal to the best. By 2005, contract bridge playing software reaches parity with the best human bridge players.50
Jeopardy!: Superhuman. 2011: IBM’s Watson defeats the two all-time-greatest human Jeopardy! champions, Ken Jennings and Brad Rutter.51 Jeopardy! is a televised game show with trivia questions about history, literature, sports, geography, pop culture, science, and other topics. Questions are presented in the form of clues, and often involve wordplay.
Poker: Varied. Computer poker players remain slightly below the best humans for full-ring Texas hold ‘em but perform at a superhuman level in some poker variants.52
FreeCell: Superhuman. Heuristics evolved using genetic algorithms produce a solver for the solitaire game FreeCell (which in its generalized form is NP-complete) that is able to beat high-ranking human players.53
Go: Very strong amateur level. As of 2012, the Zen series of go-playing programs has reached rank 6 dan in fast games (the level of a very strong amateur player), using Monte Carlo tree search and machine learning techniques.54 Go-playing programs have been improving at a rate of about 1 dan/year in recent years.

…

There are both moral and prudential reasons for favoring outcomes in which everybody gets a share of the bounty. We will not say much about the moral case, except to note that it need not rest on any egalitarian principle. The case might be made, for example, on grounds of fairness. A project that creates machine superintelligence imposes a global risk externality. Everybody on the planet is placed in jeopardy, including those who do not consent to having their own lives and those of their family imperiled in this way. Since everybody shares the risk, it would seem to be a minimal requirement of fairness that everybody also gets a share of the upside.
The fact that the total (expected) amount of good seems greater in collaboration scenarios is another important reason such scenarios are morally preferable.

But spectacular advances in information technology suggest we are approaching a historical discontinuity in humanity’s relationship with machines. In 1997 IBM’s Deep Blue beat chess champion Garry Kasparov. Now, commercially available chess programs can beat any human. In 2011 IBM’s Watson beat Jeopardy! champions Ken Jennings and Brad Rutter. That was a vastly tougher computing challenge, but Watson’s engineers did it. Today, it’s no longer impossible to imagine a forecasting competition in which a supercomputer trounces superforecasters and superpundits alike. After that happens, there will still be human forecasters, but like human Jeopardy! contestants, we will only watch them for entertainment.
So I spoke to Watson’s chief engineer, David Ferrucci. I was sure that Watson could easily field a question about the present or past like “Which two Russian leaders traded jobs in the last ten years?”

…

When I asked Joshua Frankel what he reads for fun, the young Brooklyn filmmaker rattled off the names of highbrow authors like Thomas Pynchon, thought for a moment, and added that he’d recently read a biography of the German rocket scientist Wernher von Braun and various histories of New York, although Frankel was careful to note that the books about New York are also for his work: he is producing an opera about the legendary clash between Robert Moses, New York’s great urban planner, and the free-spirited antiplanner Jane Jacobs. Frankel is not someone to tangle with on Jeopardy!
Are superforecasters better simply because they are more knowledgeable and intelligent than others? That would be flattering for them but deflating for the rest of us. Knowledge is something we can all increase, but only slowly. People who haven’t stayed mentally active have little hope of catching up to lifelong learners. Intelligence feels like an even more daunting obstacle. There are believers in cognitive enhancement pills and computer puzzles who may someday be vindicated, but most people feel that adult intelligence is relatively fixed, a function of how well you did in the DNA lottery at conception and the lottery for loving, wealthy families at birth.

…

Sailors knew they would be doomed if they strayed too far in either direction. Forecasters should feel the same about under- and overreaction to new information, the Scylla and Charybdis of forecasting. Good updating is all about finding the middle passage.
Captain Minto
In the third season of the IARPA tournament, Tim Minto won top spot with a final Brier score of 0.15, an amazing accomplishment, almost in the league of Ken Jennings’s seventy-four-game winning streak on Jeopardy! A big reason why the forty-five-year-old Vancouver software engineer did so well is his skill at updating.
For his initial forecasts, Tim takes less time than some other top forecasters. “I typically spend five to fifteen minutes, which means maybe an hour or so total when a new batch of six or seven questions come out,” he said. But the next day, he’ll come back, take another look, and form a second opinion.
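The Brier score mentioned above is simple to compute: it is the mean squared difference between probability forecasts and what actually happened, with 0 being perfect. A minimal sketch, using made-up forecasts of my own rather than any of Minto's:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    0 is a perfect score. (Note: forecasting tournaments often use a variant
    summed over all answer options, which for binary questions doubles this.)"""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical forecasts: 90% and 80% on events that happened,
# 10% on an event that did not.
print(round(brier_score([0.9, 0.8, 0.1], [1, 1, 0]), 3))  # → 0.02
```

Frequent small updates help precisely because the score punishes squared error: a forecaster who nudges a probability toward the truth as evidence arrives steadily shaves the penalty down.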

Sure, there would be thousands of sniffling people on campus, but the Secret Service probably wouldn’t think anything was amiss.
It was December, after all — cold and flu season.
2.
Does the scenario we’ve just sketched sound like nothing beyond science fiction? If so, consider that since the turn of the twenty-first century, rapidly accelerating technology has shown a distinct tendency to turn the impossible into the everyday in no time at all. A few years back, IBM’s Watson, an artificial intelligence, whipped the human champion, Ken Jennings, on Jeopardy! As we write this, soldiers with bionic limbs are fighting our enemies and autonomous cars are driving down our streets. Yet most of these advances are small in comparison to the great leap forward currently underway in the biosciences — a leap with consequences we’ve only begun to imagine.
More to the point, consider that the Secret Service is already taking extraordinary steps to protect presidential DNA.

The system might track respected blogs about Apple such as Mac Rumors, speeches by industry experts, shipping data out of China (where iPhones were built), employment sites measuring the number of workers with Apple experience looking for jobs (an uptick would indicate a round of layoffs, hence trouble and possibly an earnings miss). The system would scour SEC filings, data on Amazon.com or other retail sites that indicated sales performance, and Twitter feeds that mentioned Apple products.
The AI program crunched all of this information like a magical data grinder and spat out a buy or sell recommendation with a certain probability, much like a Wall Street analyst—or IBM’s Watson submitting a response on Jeopardy! That, at least, was the theory.
The goal: predict a company’s performance before it became public. Effectively, they were building from scratch an AI financial analyst. Ideally, Kinetic would be able to detect a company’s fortunes even before the company’s own executives and employees knew what was happening. Sales trends, buzz on a product, a pricing war coming from a tough competitor—they were crystal balls into the future, if only you could find the right data and make sense of it.
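One plausible (and deliberately simplified) way such a system turns scattered signals into "a buy or sell recommendation with a certain probability" is logistic regression: weight each piece of evidence, sum, and squash the total into a probability. Every signal name and weight below is invented for illustration; a real system would learn the weights from historical data:

```python
import math

def buy_probability(signals, weights, bias=0.0):
    """Squash a weighted sum of evidence into a probability via the
    logistic function: 1 / (1 + e^-z)."""
    z = bias + sum(w * s for w, s in zip(weights, signals))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized signals: shipping volume, layoff chatter,
# Twitter sentiment (positive values = bullish evidence).
signals = [1.2, -0.4, 0.8]
weights = [0.9, -1.5, 0.6]   # in a real system, fitted to history

p = buy_probability(signals, weights)
recommendation = "BUY" if p > 0.5 else "SELL"
print(recommendation, round(p, 2))  # → BUY 0.9
```

The hard part, as the passage suggests, is not this arithmetic but finding signals that actually carry information before the market has priced them in.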

…

But now programmers were attempting to build computers that could beat humans at the trading game itself, buying and selling stocks based on fundamentals such as sales trends and economic variables.
While the effort seemed almost quixotic, there were indications that it could be done. IBM, after all, had recently built an AI computer system called Watson that had defeated the world’s elite Jeopardy! players.
The system Kinetic deployed resembled Watson in certain ways. Kinetic’s task, however, was in reality far harder than cracking Jeopardy! Kinetic was trying to hack the market by mining endless terabytes of information stored on databases throughout the world. Its hacker in chief, Ladopoulos, was a charismatic, intense character with a shaved head, rimless glasses, a fondness for vintage tennis shoes, and a million stories.
Many of those stories went back to his previous incarnation in the early 1990s as an infamous hacker, back when being a hacker was actually cool, the closest a computer nerd could get to rock-star status.

Not everyone is fully aware that the next step—cloud computing—will allow home PCs to tap the computing power of an army of warehouse-size supercomputers. It’s hard to imagine just what gains will emerge from this awesome capacity, but as a demonstration to provoke interest, Google recently used its cloud to decode the human genome . . . in eleven seconds. This shift—from merely crunching data to analyzing information—was illustrated in a viewer-friendly way by an IBM computer named “Watson” when, in early 2011, it dominated the most successful human champion of the popular American TV quiz show Jeopardy!
However large the impact of digital technology will be on economic productivity (and I believe it will be significant), it is likely to be disproportionately large in the leading economies, particularly the United States. As wages rise in emerging nations, they are starting to automate and digitize their manufacturing plants, but nations like Brazil, Russia, India, and China remain well below the global average on automation measures, such as number of robots per employee.

The “outcompute them” strategy is not frightening, because the computer really has no idea what it’s doing. It can count things fast without understanding what it’s counting. It has counting algorithms—that’s it. We saw this with IBM’s Watson program on Jeopardy!
One Jeopardy! question was, “It was the anatomical oddity of U.S. gymnast George Eyser, who won a gold medal on the parallel bars in 1904.”
A human opponent answered that Eyser was missing a hand (wrong). And Watson answered, “What is a leg?” Watson lost too, for failing to note that the leg was “missing.”
Try a Google search on “Gymnast Eyser.” Wikipedia comes up first with a long article about him. Watson depends on Google. If Jeopardy! contestants could use Google, they’d do better than Watson. Watson can translate “anatomical” into “body part,” and Watson knows the names of the body parts. Watson doesn’t know what an “oddity” is, however.

…

Leave the map reading and navigation to your GPS; it isn’t conscious, it can’t think in any meaningful sense, but it’s much better than you are at keeping track of where you are and where you want to go.
Much farther up the staircase, doctors are becoming increasingly dependent on diagnostic systems that are provably more reliable than any human diagnostician. Do you want your doctor to overrule the machine’s verdict when it comes to making a lifesaving choice of treatment? This may prove to be the best—most provably successful, most immediately useful—application of the technology behind IBM’s Watson, and the issue of whether or not Watson can properly be said to think (or be conscious) is beside the point. If Watson turns out to be better than human experts at generating diagnoses from available data, we’ll be morally obliged to avail ourselves of its results. A doctor who defies it will be asking for a malpractice suit. No area of human endeavor appears to be clearly off-limits to such prosthetic performance-enhancers, and wherever they prove themselves, the forced choice will be reliable results over the human touch, as it always has been.

…

When we stop someone to ask for directions, there’s usually an explicit or implicit “I’m sorry to bring you down to the level of Google temporarily, but my phone is dead, see, and I require a fact.” It’s a breach of etiquette, on a spectrum with asking someone to temporarily serve as a paperweight or a shelf.
I’ve seen this breach, also, in brief conversational moments when someone asks a question of someone else—a number, a date, a surname, the kind of question you could imagine being on a quiz show, some obscure point of fact—and the other person grimaces or waves off the query. They’re saying, “I don’t know. You have a phone, don’t you? You have the entire Internet, and you’re disrespecting me, wasting my time, using me.” Not for nothing do we now have the sarcastic catchphrase, “Here, let me Google that for you.”
As things stand, there are still a few arenas in which only a human brain will do the trick—in which the relevant information and experience lives only in humans’ brains and so we have no choice but to trouble those brains when we want something.

Experts have given a name to this era of the hyperintelligent computer: the “intelligence explosion.” Nearly every computer and neural scientist with expertise in the field believes that the intelligence explosion will happen in the next seventy years; most predict it will happen by 2040. In 2015, more than $8.5 billion was invested in the development of new AI technologies. IBM’s Watson supercomputer is hard at work performing tasks ranging from playing (and winning at) Jeopardy! to diagnosing cancer. What will Earth be like when humans are no longer the most intelligent things on the planet? As science fiction writer and computer scientist Vernor Vinge wrote, “The best answer to the question, ‘Will computers ever be as smart as humans?’ is probably ‘Yes, but only briefly.’”5
As the excitement grows, so too does fear. The astrophysicist and Nobel laureate Dr.

…

He wrote in 2007, “Lonely dissent doesn’t feel like going to school dressed in black. It feels like going to school wearing a clown suit.”14
Bullish tech experts point to how AI has already and will continue to benefit society. Ginni Rometty, the CEO of IBM, says, “In the future, every decision that humankind makes is going to be informed by a cognitive system like Watson, and our lives will be better for it.” Another IBM executive discounts the idea that Watson could become a threat, because “the only data [Watson] has access to is the data we provide it with. It is not capable of going out on its own and creating—in some iRobot-type of form—its own data construct.”15 IBM also has good reason for touting the safety and promise of its technology: Watson is anticipated to generate $10 billion in revenue for IBM by 2023.
Noted futurist and AI cheerleader Ray Kurzweil welcomes the advance of superintelligence and believes man and machine will become one in a happy marriage he calls “the singularity.”

…

What will be the role of humankind when machines can do the vast majority of jobs? What does a society look like when the labor force can no longer earn?29
In 1932, every fourth U.S. household had no breadwinner,30 and the unemployment figures in Europe and Russia were just as glum. Franklin D. Roosevelt saw unemployment as the greatest threat to the nation since the Civil War. “There had never been a time when our institutions were in such jeopardy.”31 FDR was right. Unemployment is corrosive to government stability and calls for remarkably deft leadership, lest the nation collapse. In 1932, the U.S. responded with the New Deal. Western Europe responded with Fascism and the imminent rise of Nazism; Russia deepened into Stalinism and five-year plans.
Large-scale unemployment in the current era is no less disruptive and dangerous. The rise of radical Islam throughout the Middle East, the rise of narco-terror in Latin America, and spikes in inner-city gun violence in the United States all have strong correlations with the very low employment rates of young men in those areas.

Instead of choosing from a set of available options, we can create our own. It’s the triumph of design over choice. Instead of ordering from the menu, we are more empowered than any prior generation to become the cooks.
Are You Structuring Your Reality or Having It Structured for You?
The least free are those whose reality is structured for them. “Stay tuned,” they are told before each Jeopardy commercial break, and they do so. At work they are assigned tasks and roles that are clearly defined. They exert very little freedom over their reality.
The middle class has achieved a greater degree of independence and structures a greater share of its own reality. Its members are more likely to shape their families and hobbies in ways they find meaningful, not just as the mass media may instruct them.

He asked me about myself, and I explained that I’m a nonfiction writer of science and philosophy, specifically of the ways in which science and philosophy intersect with daily life, and that I’m fascinated by the idea of the Turing test and of the “Most Human Human.” For one, there’s the romantic notion, as a confederate, of defending the human race, à la Garry Kasparov vs. Deep Blue—and soon, Ken Jennings of Jeopardy! fame vs. the latest IBM system, Watson. (The mind also leaps to other, more Terminator– and The Matrix–type fantasies, although the Turing test promises to involve significantly fewer machine guns.) When I read that the machines came up shy of passing the 2008 test by just one single vote, and realized that 2009 might be the year they finally cross the threshold, a steely voice inside me rose up seemingly out of nowhere.

…

Okuno, “Enabling a User to Specify an Item at Any Time During System Enumeration: Item Identification for Barge-In-Able Conversational Dialogue Systems,” Proceedings of the International Conference on Spoken Language Processing (2009).
18 Brian Ferneyhough, in Kriesberg, “Music So Demanding.”
19 David Mamet, Glengarry Glen Ross (New York: Grove, 1994).
20 For more on back-channel feedback and the (previously neglected) role of the listener in conversation, see, e.g., Bavelas, Coates, and Johnson, “Listeners as Co-narrators.”
21 Jack T. Huber and Dean Diggins, Interviewing America’s Top Interviewers: Nineteen Top Interviewers Tell All About What They Do (New York: Carol, 1991).
22 Clark and Fox Tree, “Using Uh and Um.”
23 Clive Thompson, “What Is I.B.M.’s Watson?” New York Times, June 14, 2010.
24 Nikko Ström and Stephanie Seneff, “Intelligent Barge-In in Conversational Systems,” Proceedings of the International Conference on Spoken Language Processing (2000).
25 Jonathan Schull, Mike Axelrod, and Larry Quinsland, “Multichat: Persistent, Text-as-You-Type Messaging in a Web Browser for Fluid Multi-person Interaction and Collaboration” (paper presented at the Seventh Annual Workshop and Minitrack on Persistent Conversation, Hawaii International Conference on Systems Science, Kauai, Hawaii, January 2006).
26 Deborah Tannen, That’s Not What I Meant!

…

“Speakers can use these announcements,” linguists Clark and Fox Tree write, “to implicate, for example, that they are searching for a word, are deciding what to say next, want to keep the floor, or want to cede the floor.”
We are told by speaking coaches, teachers, parents, and the like just to hold our tongue. The fact of the matter is, however, filling pauses in speech with sound is not simply a tic, or an error—it’s a signal that we’re about to speak. (Consider, as an analogue, your computer turning its pointer into an hourglass before freezing for a second.) A big part of the skill it takes to be a Jeopardy! contestant is the ability to buzz in before you know the answer, but as soon as you know you know the answer—that buzz means, roughly, “Oh! Uh …,” and its successful deployment is part of what separates champions from average players. (By the way, this is part of what has been giving IBM researchers such a hard time preparing their supercomputer Watson for serious competition against humans, especially for short questions that only take Alex Trebek a second or two to read.)

Errors are inevitable, as in any statistical program, but the quickest way to reduce them is to fine-tune the algorithms running the machines. Humans on the ground only gum up the works.
This trend toward automation is leaping ahead as computers make sense of more and more of our written language, in some cases processing thousands of written documents in a second. But they still misunderstand all sorts of things. IBM’s Jeopardy!-playing supercomputer Watson, for all its brilliance, was flummoxed by language or context about 10 percent of the time. It was heard saying that a butterfly’s diet was “Kosher,” and it once confused Oliver Twist, the Charles Dickens character, with the 1980s techno-pop band the Pet Shop Boys.
Such errors are sure to pile up in our consumer profiles, confusing and misdirecting the algorithms that manage more and more of our lives.

Some things, try as we might, are just unpredictable. For the vast middle ground between the two, there’s machine learning.
Paradoxically, even as they open new windows on nature and human behavior, learning algorithms themselves have remained shrouded in mystery. Hardly a day goes by without a story in the media involving machine learning, whether it’s Apple’s launch of the Siri personal assistant, IBM’s Watson beating the human Jeopardy! champion, Target finding out a teenager is pregnant before her parents do, or the NSA looking for dots to connect. But in each case the learning algorithm driving the story is a black box. Even books on big data skirt around what really happens when the computer swallows all those terabytes and magically comes up with new insights. At best, we’re left with the impression that learning algorithms just find correlations between pairs of events, such as googling “flu medicine” and having the flu.

…

For example, if a fever can be caused by influenza or malaria, and you should take Tylenol for a fever and a headache, this can be expressed as follows:
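The logical expression itself appears to have been lost from this excerpt. A plausible reconstruction of the idea, with the two rules written as ordinary Boolean functions rather than the book's own notation:

```python
# Rule 1: influenza causes fever, and so does malaria.
#   Influenza OR Malaria  →  Fever
# Rule 2: fever together with a headache calls for Tylenol.
#   Fever AND Headache    →  Take Tylenol
def fever(influenza, malaria):
    return influenza or malaria

def take_tylenol(has_fever, headache):
    return has_fever and headache

print(take_tylenol(fever(influenza=True, malaria=False), headache=True))  # → True
```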
By combining many such operations, we can carry out very elaborate chains of logical reasoning. People often think computers are all about numbers, but they’re not. Computers are all about logic. Numbers and arithmetic are made of logic, and so is everything else in a computer. Want to add two numbers? There’s a combination of transistors that does that. Want to beat the human Jeopardy! champion? There’s a combination of transistors for that too (much bigger, naturally).
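The claim that "numbers and arithmetic are made of logic" can be shown in a few lines: a half adder builds binary addition out of nothing but AND, OR, and XOR, the very operations transistors implement. A toy two-bit version:

```python
def half_adder(a, b):
    """Add two bits using only logic gates: XOR gives the sum bit,
    AND gives the carry bit."""
    return a ^ b, a & b

def add_2bit(a1, a0, b1, b0):
    """Add two 2-bit numbers (a1 a0) + (b1 b0) with pure logic ops."""
    s0, c0 = half_adder(a0, b0)
    s1 = a1 ^ b1 ^ c0                  # full-adder sum for bit 1
    c1 = (a1 & b1) | (c0 & (a1 ^ b1))  # full-adder carry out
    return c1, s1, s0

print(add_2bit(1, 1, 0, 1))  # 3 + 1 → (1, 0, 0), i.e. binary 100 = 4
```

Chain enough of these adders together and you get a processor's arithmetic unit; scale the same trick up far enough and, as the passage says, there is a combination of transistors for Jeopardy! too.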
It would be prohibitively expensive, though, if we had to build a new computer for every different thing we want to do. Rather, a modern computer is a vast assembly of transistors that can do many different things, depending on which transistors are activated. Michelangelo said that all he did was see the statue inside the block of marble and carve away the excess stone until the statue was revealed.

…

Since then, learning-based methods have swept the field, to the point where it’s hard to find a paper devoid of learning in a computational linguistics conference. Statistical parsers analyze language with accuracy close to that of humans, where hand-coded ones lagged far behind. Machine translation, spelling correction, part-of-speech tagging, word sense disambiguation, question answering, dialogue, summarization: the best systems in these areas all use learning. Watson, the Jeopardy! computer champion, would not have been possible without it.
To this Chomsky might reply that engineering successes are not proof of scientific validity. On the other hand, if your buildings collapse and your engines don’t run, perhaps something is wrong with your theory of physics. Chomsky thinks linguists should focus on “ideal” speaker-listeners, as defined by him, and this gives him license to ignore things like the need for statistics in language learning.

Jack Dorsey’s success without depth is common at this elite level of management. Once we’ve stipulated this reality, we must then step back to remind ourselves that it doesn’t undermine the general value of depth. Why? Because the necessity of distraction in these executives’ work lives is highly specific to their particular jobs. A good chief executive is essentially a hard-to-automate decision engine, not unlike IBM’s Jeopardy!-playing Watson system. They have built up a hard-won repository of experience and have honed and proved an instinct for their market. They’re then presented inputs throughout the day—in the form of e-mails, meetings, site visits, and the like—that they must process and act on. To ask a CEO to spend four hours thinking deeply about a single problem is a waste of what makes him or her valuable.

…

To do so, she did something extreme: She forced each member of the team to take one day out of the workweek completely off—no connectivity to anyone inside or outside the company.
“At first, the team resisted the experiment,” she recalled about one of the trials. “The partner in charge, who had been very supportive of the basic idea, was suddenly nervous about having to tell her client that each member of her team would be off one day a week.” The consultants were equally nervous and worried that they were “putting their careers in jeopardy.” But the team didn’t lose their clients and its members did not lose their jobs. Instead, the consultants found more enjoyment in their work, better communication among themselves, more learning (as we might have predicted, given the connection between depth and skill development highlighted in the last chapter), and perhaps most important, “a better product delivered to the client.”
This motivates an interesting question: Why do so many follow the lead of the Boston Consulting Group and foster a culture of connectivity even though it’s likely, as Perlow found in her study, that it hurts employees’ well-being and productivity, and probably doesn’t help the bottom line?

…

In particular, identify a deep task (that is, something that requires deep work to complete) that’s high on your priority list. Estimate how long you’d normally put aside for an obligation of this type, then give yourself a hard deadline that drastically reduces this time. If possible, commit publicly to the deadline—for example, by telling the person expecting the finished project when they should expect it. If this isn’t possible (or if it puts your job in jeopardy), then motivate yourself by setting a countdown timer on your phone and propping it up where you can’t avoid seeing it as you work.
At this point, there should be only one possible way to get the deep task done in time: working with great intensity—no e-mail breaks, no daydreaming, no Facebook browsing, no repeated trips to the coffee machine. Like Roosevelt at Harvard, attack the task with every free neuron until it gives way under your unwavering barrage of concentration.

Rather than trying to crack the problem, therefore, AI researchers took a different approach entirely—one that emphasized statistical models of data rather than thought processes. This approach, which nowadays is called machine learning, was far less intuitive than the original cognitive approach, but it has proved to be much more productive, leading to all kinds of impressive breakthroughs, from the almost magical ability of search engines to complete queries as you type them to building autonomous robot cars, and even a computer that can play Jeopardy!18
WE DON’T THINK THE WAY WE THINK WE THINK
The frame problem, however, isn’t just a problem for artificial intelligence—it’s a problem for human intelligence as well. As the psychologist Daniel Gilbert describes in Stumbling on Happiness, when we imagine ourselves, or someone else, confronting a particular situation, our brains do not generate a long list of questions about all the possible details that might be relevant.

…

In fact, one can always do this trivially by exhaustively including every item and concept in the known universe in the basket of potentially relevant factors, thereby making what at first seems to be a global problem local by definition. Unfortunately, this approach succeeds only at the expense of rendering the computational procedure intractable.
18. For an introduction to machine learning, see Bishop (2006). See Thompson (2010) for a story about the Jeopardy-playing computer.
19. For a compelling discussion of the many ways in which our brains misrepresent both our memories of past events and our anticipated experience of future events, see Gilbert (2006). As Becker (1998, p. 14) has noted, even social scientists are prone to this error, filling in the motivations, perspectives, and intentions of their subjects whenever they have no direct evidence of them.

But what’s become clear in the years since Deep Blue’s victory is that algorithms will continue to invade professions and skill areas that we have always assumed will remain inherently human. Chess was just the beginning.
In early 2011, IBM’s newest creation, Watson, bested all human contestants on the game show Jeopardy!—including Ken Jennings, the most prolific champion in the show’s history. That a bot could be so intellectually nimble in the way it processed random questions, speedily consulted raw stores of data, and issued answers was impressive. Whereas chess is a game played on a limited board with rigid rules, Jeopardy! is chaotic, arbitrary, and offers almost no guidelines on the content or nature of its queries, which can be pocked with humor, puns, and irony. To do it, IBM stored 200 million pages of content on four terabytes of disk drives that were read by twenty-eight hundred processor cores (the newish Apple computer I used to write this book has two) assisted by sixteen terabytes of memory (RAM).

…

No willy-nilly tests, no gut feelings, just data in, data out. Watson won’t miss clues on those rare cases because he’s simply not prejudiced to rely on the easy answers.
Soon after its Jeopardy! triumph, IBM began working with doctors and researchers at Columbia University to develop a version of Watson that won’t be a mere novelty in health care but a true caregiver and diagnostic authority. In September 2011, the giant health insurer WellPoint announced plans to give Watson a job assisting doctors in their offices with diagnoses, providing a valuable and legitimate second opinion. WellPoint’s main purpose in using Watson is saving money, but in paying IBM for Watson’s time, patients also receive the benefit of more correct initial diagnoses.
Herbert Chase, a clinical medicine professor at Columbia, tested Watson with a vexing case from earlier in his career when he had to treat a woman in her midthirties who complained of fleeting strength and limp muscles.24 The woman’s blood tests revealed low phosphate levels and strangely high readings of alkaline phosphatase, an enzyme.

…

The tryst gave us what Peter Parham, an immunogeneticist at Stanford, calls “hybrid vigor,” endowing us with a powerful immune system that allowed humans to colonize the world.23
A generation from now, algorithms like Patterson’s will scan our DNA and tell us what diseases we’re likely to get and even when they may come. Treating those maladies will be handled by a computer the world knows well: IBM’s Watson.
When you head to the doctor’s office with a health quandary, your appointment usually goes something like this: Your doctor asks a question, you answer; your doctor asks another question, you answer. This pattern goes on until your caregiver can suss out what she thinks is your exact problem. She bases her diagnosis on your answers, which lead her through a tree of possibilities within her head.
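That "tree of possibilities" is literally a decision tree: each answer selects a branch until only a leaf remains. A toy sketch, with every question and outcome invented for illustration:

```python
# Each internal node asks a question; "yes"/"no" lead to a subtree or
# to a leaf (a next step). All medical content here is made up.
TREE = {
    "question": "fever?",
    "yes": {"question": "muscle weakness?",
            "yes": "order blood panel",
            "no": "likely viral; rest and fluids"},
    "no": {"question": "fatigue?",
           "yes": "check thyroid levels",
           "no": "no diagnosis yet; keep observing"},
}

def walk(tree, answers):
    """Follow the answers down the tree until a leaf (a string) remains."""
    node = tree
    while isinstance(node, dict):
        node = node["yes" if answers[node["question"]] else "no"]
    return node

print(walk(TREE, {"fever?": True, "muscle weakness?": True}))  # → order blood panel
```

A system like Watson differs in that it weighs thousands of such possibilities probabilistically at once rather than following one hand-built tree, but the question-by-question narrowing is the same basic shape.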

, 17 September 2013
Positive impacts
– Cost reductions
– Efficiency gains
– Unlocking innovation, opportunities for small business, start-ups (smaller barriers to entry, “software as a service” for everything)
Negative impacts
– Job losses
– Accountability and liability
– Change to legal, financial disclosure, risk
– Job automation (refer to the Oxford Martin study)
The shift in action
Advances in automation were reported on by FORTUNE:
“IBM’s Watson, well known for its stellar performance in the TV game show Jeopardy!, has already demonstrated a far more accurate diagnosis rate for lung cancers than humans – 90% versus 50% in some tests. The reason is data. Keeping pace with the release of medical data could take doctors 160 hours a week, so doctors cannot possibly review the amount of new insights or even bodies of clinical evidence that can give an edge in making a diagnosis.

…

Because of this, the ability to determine our individual genetic make-up in an efficient and cost-effective manner (through sequencing machines used in routine diagnostics) will revolutionize personalized and effective healthcare. Informed by a tumour’s genetic make-up, doctors will be able to make decisions about a patient’s cancer treatment.
While our understanding of the links between genetic markers and disease is still poor, increasing amounts of data will make precision medicine possible, enabling the development of highly targeted therapies to improve treatment outcomes. Already, IBM’s Watson supercomputer system can help recommend, in just a few minutes, personalized treatments for cancer patients by comparing the histories of disease and treatment, scans and genetic data against the (almost) complete universe of up-to-date medical knowledge.11
The ability to edit biology can be applied to practically any cell type, enabling the creation of genetically modified plants or animals, as well as modifying the cells of adult organisms including humans.

…

– Blurring the lines between man and machine
Unknown, or cuts both ways
– Cultural shift
– Disembodiment of communication
– Improvement of performance
– Extending human cognitive abilities will trigger new behaviours
The shift in action
– Cortical computing algorithms have already shown an ability to solve modern CAPTCHAs (widely used tests to distinguish humans from machines).
– The automotive industry has developed systems monitoring attention and awareness that can stop cars when people are falling asleep while driving.
– An intelligent computer program in China scored better than many human adults on an IQ test.
– IBM’s Watson supercomputer, after sifting through millions of medical records and databases, has begun to help doctors choose treatment options for patients with complex needs.
– Neuromorphic image sensors, i.e. sensors inspired by how the eye and brain communicate, will have an impact ranging from battery usage to robotics.
– Neuroprosthetics are allowing disabled people to control artificial members and exoskeletons.

Second, even if machines do not reach this level of sophistication, it’s likely that they’ll become very smart indeed, so what happens to the people who previously did the things that machines will do in the future?
Welcome to the future. It’s metallic and uses lots of batteries. Hopefully, it’s not angry and it won’t work out a way to enslave the human race.
the condensed idea
The machines wake up
timeline
1990 iRobot Corporation founded to manufacture industrial and domestic robots
2011 Watson, an IBM computer, wins Jeopardy!, a US TV show
2027 A $79 toaster passes the Turing test
2040 $750 smartphone contains as much processing power as a human brain
2042 Software virus disables 90 percent of machines
2050 Intelligent robots outnumber human beings
2054 Machines start to paint and compose music
2069 Machines demand equal rights
21 Personalized genomics
It’s now possible to sequence, then analyze, the genome of individuals to predict specific human traits or to forecast the probability that an individual will suffer from certain conditions or diseases.

Gates and Grove knew that eventually—and it wasn’t going to take very long at all—the expensive, customized guts of engineering workstations would become juiced-up PC circuit boards, and that the same evolution would ultimately subsume business minicomputers, mainframes, and even supercomputers, those rare and superexpensive machines used for everything from modeling weather patterns to controlling nuclear devices. (For example, IBM’s Watson, the machine that in 2011 beat Jeopardy! phenomenon Ken Jennings, is one such computer based on a PC-like architecture.) As a result, pretty much every computer that companies relied on to manage their most critical operations would adopt the internal electronic architecture of a PC writ large. All were much, much cheaper and easier to program and operate than unwieldy mainframes, because they were built out of the very same semiconductor components as PCs, and usually used a variation of the Windows operating system software.

For the most part, it is better suited to responding to questions—not so good at asking them. Picasso was onto this truth fifty years ago when he commented, “Computers are useless—they only give31 you answers.”
On the other hand, technology can serve up amazing, innovative, life-changing answers—if we know how to ask for them. The potential is mind-boggling,32 as IBM’s Watson system demonstrates. Its winning appearance in 2011 on the TV quiz show Jeopardy! proved it could answer questions better than any human. Today, IBM is feeding the system a steady diet of, among other things, medical information—so that it can answer just about any question a doctor might throw at it (If patient exhibits symptoms A, B, and C, what might this indicate?). But the doctor still has to figure out what to ask—and then must be able to question Watson’s response, which might be technically accurate but not commonsensical.

Adaptive Markets: Financial Evolution at the Speed of Thought
by
Andrew W. Lo

Paradoxically enough, the early primitive computers could deal with the pinnacles of human thought—chess, logic, mathematics—much more easily than with the fundamentals of human life.
As computer science progressed, computers were able to imitate many more basic human abilities, such as voice recognition and speech synthesis. Today we have integrated expert systems like the iPhone’s Siri, or IBM’s Jeopardy-winning supercomputer Watson, which answer questions as well as any reasonably smart human—but in a way completely unlike any human. Artificial intelligence has achieved many milestones, but the biggest challenge still remains unmet: to produce truly intelligent behavior. However, artificial intelligence may be getting closer to its goal as several different research paths converge.
In 1987, one of the founding fathers of artificial intelligence, the late MIT professor Marvin Minsky, published an important book called The Society of Mind.38 This was Minsky’s sweeping view of human intelligence, in which he laid out his vision for how to reproduce human consciousness and intelligence in machine form.

…

First, they want a seat at the table, giving them timely access to the facts as they’re discovered. Second, they want to be able to offer their interpretation of those facts, or correct mistakes made by others that could reflect poorly on them. And finally, the NTSB’s accident report isn’t admissible as evidence in lawsuits for civil damages, which allows the stakeholders to be much more candid about their role in the accident than they might be if they faced legal jeopardy.

Once all the facts are collected and agreed on by the various parties, the second phase of the investigation begins. Only the NTSB’s internal staff conducts the analysis, to reduce the chances of conflict of interest. This analysis presents a theory of the probable cause of the accident and rules out opposing theories. The final accident report contains the facts of the accident, the analysis of the accident, and policy recommendations regarding the accident.16
Here’s an example of how it works in practice.

…

Lund, in which he wrote:

This letter is written to insure that management is fully aware of the seriousness of the current O-ring erosion problem in the SRM joints from an engineering standpoint. . . . The result would be a catastrophe of the highest order—loss of human life. . . . It is my honest and very real fear that if we do not take immediate action to dedicate a team to solve the problem with the field joint having the number one priority, then we stand in jeopardy of losing a flight along with all the launch pad facilities.

And on a teleconference call during the evening prior to the January 28 launch, a number of Morton Thiokol engineers, including Boisjoly, raised concerns about the cold temperature and argued for postponing the launch but were overruled by Morton Thiokol and NASA senior management. Many studies of the decision-making processes and management structures that led to this tragic event have been completed since 1986, and NASA, Morton Thiokol, and other organizations have changed a number of their operating procedures in response.

“After nearly half a century of continuous population growth,” the news release dolefully continues, “the demand in many countries for food, water, and forest products is simply outrunning the capacity of local life support systems.”8 State of the World 2000 from the Worldwatch Institute warns that population growth “may more directly affect economic progress than any other single trend, exacerbating nearly all other environmental and social problems.”9 And Pakistan is imperiled again: “Pakistan’s projected growth from 146 million today to 345 million by 2050 will shrink its grainland per person from 0.08 hectares at present to 0.03 hectares, an area scarcely the size of a tennis court.”10

The organization Population Action International notes that “the capacity of farmers to feed the world’s future population is also in jeopardy.”11 The Population Institute warns bluntly of “The Four Horsemen of the 21st Century Apocalypse: Overpopulation. Deforestation. Water Scarcity. Famine.” As a result, “Developed countries will be looking at staggering disaster relief budgets as a result . . . and only a few years from now.”12

Not only that but, according to Lester Brown, population grows faster than jobs: “In the absence of an accelerated effort to slow population growth in the years ahead, unemployment could soar to unmanageable levels.”


Rise of the Rocket Girls: The Women Who Propelled Us, From Missiles to the Moon to Mars
by
Nathalia Holt

It took a room with the same square footage as most of the computers’ houses to contain the IBM 701. It wasn’t just one big box but eleven separate components that, together, weighed a whopping 20,516 pounds. Notwithstanding its size, the 701 moved IBM into the computer business. At first the company didn’t think it would have many customers for the machine. At a stockholder meeting, IBM’s president, Thomas Watson Jr., explained that they were expecting to sell only five of them, but “we came home with orders for eighteen.” One of those orders was for JPL.
Despite a monthly rental price starting at $11,900, the 701 came with no instruction manual. To use the machine one had to learn an obscure numerical code. Even the simplest of operations, such as obtaining a square root, involved an incredible amount of programming.
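To get a feel for why even a square root demanded so much programming, here is a minimal sketch in modern Python of the kind of iterative routine (Newton’s method) a 701 programmer would have had to spell out, step by step, in raw numeric machine code—this is an illustration of the general technique, not IBM’s actual routine:

```python
def sqrt_newton(x, tolerance=1e-10):
    """Approximate the square root of x by Newton's method.

    Each pass through the loop is the kind of step that, on the
    IBM 701, had to be hand-coded as individual numeric instructions.
    """
    if x < 0:
        raise ValueError("square root of a negative number")
    if x == 0:
        return 0.0
    guess = x  # crude initial estimate
    while abs(guess * guess - x) > tolerance * x:
        # Average the guess with x/guess; errors shrink quadratically.
        guess = (guess + x / guess) / 2
    return guess

print(sqrt_newton(2.0))  # converges to roughly 1.41421356
```

A handful of Python lines here corresponds to dozens of machine instructions on the 701: the loop, the division, and the convergence test each had to be laid out by hand.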

…

I pressed the pen to my lips and concentrated, trying to balance my pregnant belly while perched on the wobbly edge of a bar stool. It was the summer of 2010, and my husband and I were trying to come up with names for our daughter’s December arrival. Sitting in a bar in Cambridge, Massachusetts, we brainstormed names, each writing them down privately on a napkin before showing the other, as if we were on some bizarre game show: Name Your Baby! We weren’t having much luck. We both have unusual first names—Nathalia and Larkin—so we wanted to find one that wouldn’t subject our daughter to a lifetime of odd nicknames. When Larkin wrote down Eleanor, I immediately rejected it. It sounded so old-fashioned. I couldn’t imagine naming my daughter that. But as the months went by and my belly grew, the name grew on me too. We started coming up with middle names.

…

The problem lay in a midflight ignition that had caused an electrical short in the camera system. With the design issues resolved, everyone looked ahead to Ranger 7. Surely, after six miserable failures, this would be the one. It had to be.
The launch of Ranger 7 took place on a hot, humid afternoon in July 1964. The control room at JPL was tense. Everyone knew that their jobs, and even the fate of the lab, were in jeopardy. To lighten the mood and distract everyone from the pressure, one of the engineers, Richard Wallace, known as Dick, decided to pass out peanuts. Whether it was the good-luck peanuts or simply the hard-won lessons from six failed missions, the launch went off flawlessly. But it wasn’t time to celebrate yet; the ship had to successfully reach the moon’s surface.
A few days later, in the early morning of July 31, Helen sat in the gallery of the new SFOF.

Spam filters are designed to automatically adapt as the types of junk email change: the software couldn’t be programmed to know to block “via6ra” or its infinity of variants. Dating sites pair up couples on the basis of how their numerous attributes correlate with those of successful previous matches. The “autocorrect” feature in smartphones tracks our actions and adds new words to its spelling dictionary based on what we type. Yet these uses are just the start. From cars that can detect when to swerve or brake to IBM’s Watson computer beating humans on the game show Jeopardy!, the approach will revamp many aspects of the world in which we live.
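The adaptive behavior described above can be sketched with a toy word-count filter in the naive-Bayes style (a deliberately simplified, hypothetical illustration—real spam filters are far more elaborate). Nobody tells the program that “via6ra” is spam; it simply updates its word counts as users flag messages, and the new token’s spam score rises on its own:

```python
from collections import Counter

class ToySpamFilter:
    """Minimal word-frequency spam filter sketch (hypothetical)."""

    def __init__(self):
        self.spam_counts = Counter()
        self.ham_counts = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, message, is_spam):
        """Update word counts from one user-labeled message."""
        words = message.lower().split()
        if is_spam:
            self.spam_counts.update(words)
            self.spam_msgs += 1
        else:
            self.ham_counts.update(words)
            self.ham_msgs += 1

    def spam_score(self, message):
        """Ratio of smoothed word frequencies under spam vs. ham.

        A score above 1 means the message looks more like the spam
        the filter has seen than like legitimate mail.
        """
        score = 1.0
        for w in message.lower().split():
            p_spam = (self.spam_counts[w] + 1) / (self.spam_msgs + 2)
            p_ham = (self.ham_counts[w] + 1) / (self.ham_msgs + 2)
            score *= p_spam / p_ham
        return score

f = ToySpamFilter()
f.train("cheap via6ra now", is_spam=True)
f.train("meeting at noon", is_spam=False)
print(f.spam_score("via6ra deal"))  # above 1: learned without being told
```

The point of the sketch is the one the passage makes: the blocking rule was never written down anywhere; it emerged from the data.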
At its core, big data is about predictions. Though it is described as part of the branch of computer science called artificial intelligence, and more specifically, an area called machine learning, this characterization is misleading. Big data is not about trying to “teach” a computer to “think” like humans.

…

For example, using voice-recognition software to characterize complaints to a call center, and comparing that data with the time it takes operators to handle the calls, may yield an imperfect but useful snapshot of the situation. Messiness can also refer to the inconsistency of formatting, for which the data needs to be “cleaned” before being processed. There are a myriad of ways to refer to IBM, notes the big-data expert DJ Patil, from I.B.M. to T. J. Watson Labs, to International Business Machines. And messiness can arise when we extract or process the data, since in doing so we are transforming it, turning it into something else, such as when we perform sentiment analysis on Twitter messages to predict Hollywood box office receipts. Messiness itself is messy.
Suppose we need to measure the temperature in a vineyard. If we have only one temperature sensor for the whole plot of land, we must make sure it’s accurate and working at all times: no messiness allowed.
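The trade-off the passage sets up—one pristine sensor versus many sloppy ones—can be illustrated with a small simulation (all names and numbers here are invented for the sketch): averaging a thousand noisy, “messy” readings recovers the true temperature nearly as well as a single accurate sensor, because independent errors cancel out.

```python
import random

random.seed(42)
TRUE_TEMP = 18.0  # hypothetical true vineyard temperature, in deg C

# One accurate sensor: tiny measurement error (sigma = 0.1 deg C).
accurate = TRUE_TEMP + random.gauss(0, 0.1)

# A thousand cheap sensors, each individually unreliable (sigma = 2 deg C).
messy = [TRUE_TEMP + random.gauss(0, 2.0) for _ in range(1000)]
messy_avg = sum(messy) / len(messy)

print(abs(accurate - TRUE_TEMP))   # small
print(abs(messy_avg - TRUE_TEMP))  # also small: the noise averages out
```

With a thousand sensors the standard error of the mean is about 2/√1000 ≈ 0.06 °C—close to the single accurate sensor—even though any individual cheap reading may be off by several degrees.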

Moreover, while weak artificial intelligence, whereby robots simply specialize in a specific function, is currently advancing exponentially, strong artificial intelligence, whereby robots demonstrate humanlike cognition and intelligence, is advancing only linearly. While inventions like IBM’s Watson (the computer that beat Jeopardy! champions Ken Jennings and Brad Rutter) are exciting, scientists need a better understanding of the brain before these advances progress beyond winning a game show. Watson didn’t actually “think”; it was basically a very comprehensive search engine querying a large database. As robotics expert and UC Berkeley professor Ken Goldberg explains, “Robots are going to become increasingly human. But the gap between humans and robots will remain—it’s so large that it will be with us for the foreseeable future.”

Nouriel Roubini writes, “[T]here is a new perception of the role of technology. Innovators and tech CEOs both seem positively giddy with optimism.”44 The well-known pair of techno-optimists Erik Brynjolfsson and Andrew McAfee assert that “we’re at an inflection point” between a past of slow technological change and a future of rapid change.45 They appear to believe that Big Blue’s chess victory and Watson’s victory on the TV game show Jeopardy! presage an age in which computers outsmart humans in every aspect of human work effort. They remind us that Moore’s Law predicts endless exponential growth of the performance capability of computer chips—but they ignore that chips have fallen behind the predicted pace of Moore’s Law after 2005. The decline in the price of ICT equipment relative to performance was most rapid in the late 1990s, and there has been hardly any decline at all in the past few years.

It was like when HAL 9000 took over the spaceship. Like the moment when, exactly thirteen seconds into “Love Will Tear Us Apart,” the synthesizer overpowers the guitar riff, leaving rock and roll in its dust.43
Except it wasn’t true. Kasparov had been the victim of a large amount of human frailty—and a tiny software bug.
How to Make a Chess Master Blink
Deep Blue was born at IBM’s Thomas J. Watson Center—a beautiful, crescent-shaped, retro-modern building overlooking the Westchester County foothills. In its lobby are replicas of early computers, like the ones designed by Charles Babbage. While the building shows a few signs of rust—too much wood paneling and too many interior offices—many great scientists have called it home, including the mathematician Benoit Mandelbrot, and Nobel Prize winners in economics and physics.

…

The Weather of Supercomputers
The supercomputer labs at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, literally produce their own weather. They are hot: the 77 trillion calculations that the IBM Bluefire supercomputer makes every second generate a substantial amount of radiant energy. They are windy: all that heat must be cooled, lest the nation’s ability to forecast its weather be placed into jeopardy, and so a series of high-pressure fans blast oxygen on the computers at all times. And they are noisy: the fans are loud enough that hearing protection is standard operating equipment.
The Bluefire is divided into eleven cabinets, each about eight feet tall and two feet wide with a bright green racing stripe running down the side. From the back, they look about how you might expect a supercomputer to look: a mass of crossed cables and blinking blue lights feeding into the machine’s brain stem.