From Stratfor: "The Promise and the Threat of AI," by Jay Ogilvy, Board of Contributors

Quote:

High-level problem-solving isn't just for humans anymore. As computers gain speed and accomplish dazzling feats like defeating the world's masters at games of chess and Go, some of the planet's brightest minds — Elon Musk and Stephen Hawking among them — warn that we human beings may find ourselves obsolete. Further, a kind of artificial intelligence arms race may come to dominate geopolitics, rewarding the owners of the best AI mining the biggest pools of "big data" — most likely, as a result of its sheer size, China.

Or consider another dire consequence: As AI-driven robots replace more and more workers, from truck drivers to insurance adjusters, loan officers and any number of other white-collar occupations, unemployment will rise. How will economies adjust? Should we imagine a utopia filled with gratifying leisure activities or a feudal dystopia in which a wealthy elite hold the few precious jobs unsuitable for computers?

The stakes are high. But the terms of the debate thus far are confused. The recent advances in AI are impressive, and the future prospects for the technology are truly amazing. Even so, between artificial intelligence and truly human intelligence lie a host of differences that much of the literature on the subject has failed to adequately address. In this column I'll try to sort fact from fiction.

Thinking About Thinking Machines

In a rich anthology of short essays, What to Think About Machines That Think, William Poundstone, author of Are You Smart Enough to Work at Google?, begins with a quote from the computer science pioneer Edsger Dijkstra: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." Both a whale and a submarine make forward progress through the water, but they do it in fundamentally different ways. Likewise, both thinking and computation can come up with similar-looking results, but the way they do it is fundamentally different.

On the other hand, Freeman Dyson, the acclaimed physicist at Princeton's Institute for Advanced Study, dismisses the question. His is the shortest of all the essays in the anthology, edited by John Brockman. It reads in full: "I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant. If I am right, then the whole question is irrelevant."

Before being quite so dismissive, though, let's take a deeper look at what the alarmists are saying. By the end of his short essay, after all, Poundstone comes around. Having opened with Dijkstra's apt aphorism about submarines that don't swim, Poundstone closes on a cautionary note: "I think the notion of Frankensteinian AI — AI that turns on its creators — is worth taking seriously."

The Dangers of Ultraintelligence

The case for concern is nothing new. All the way back in 1965, British mathematician Irving Good wrote:

Quote:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
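Good's argument is, at bottom, a compounding-growth claim: if each machine can design a successor even slightly more capable than itself, capability grows geometrically. A toy sketch in Python (the numbers are entirely made up for illustration, not a model of any real system):

```python
def intelligence_explosion(start: float, factor: float, generations: int) -> list:
    """Toy model: each generation designs a successor `factor` times as capable."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * factor)
    return levels

# Even a modest 20% improvement per generation compounds quickly:
levels = intelligence_explosion(1.0, 1.2, 20)
# levels[-1] == 1.2 ** 20, roughly 38 times the starting capability
```

The point is not the invented numbers but the shape of the curve: any reliably self-amplifying process is geometric, which is why Good treats the first ultraintelligent machine as the last invention humanity need make.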

The last provision is key. While the sorcerer's apprentice may not be as malevolent as Frankenstein's monster, even the best-intentioned "apprentice" can get out of hand. Hence the increasing attention to two different issues in debates over AI. First there is the question of how soon, if ever, machines will achieve or surpass human intelligence. Second is the debate over whether, if they do, they will be malignant or benign.

In his book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark distinguishes five different stances toward AI based on these two dimensions. The categories come in handy for grouping the many contributors to the Brockman volume, as well as the many participants Tegmark pulled together for a conference on AI three years ago:

Those who believe that AI will exceed human intelligence "in a few years" — "virtually nobody" these days, according to Tegmark.

The so-called digital utopians, who hold that AI will surpass human intelligence in 50-100 years and that the development will be a boon for humanity. Kevin Kelly belongs in this category, along with The Singularity Is Near author Ray Kurzweil.

People who think that, on the contrary, the achievement of superior intelligence by machines will be a bad thing, whenever it happens. Tegmark calls adherents to this idea "luddites." The contingent includes Martin Rees, the Royal Society's former president, and American computer scientist Bill Joy, who wrote a famous cover story for Wired titled "Why the Future Doesn't Need Us."

A group between the luddites and the utopians, "the beneficial AI movement," which contends that AI is likely to arrive sometime in the next hundred years, and that we'd better get to work on making sure that its effects are benign, not malignant. Oxford philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, is a prominent voice in this camp, as are most of the participants in the January 2015 conference convened largely to launch the beneficial AI movement.

Finally there are the "techno-skeptics," as Tegmark calls them, who believe AI will never rival human cognition. Along with Dyson, virtual reality pioneer Jaron Lanier belongs in this group, as does neuroanthropologist Terrence Deacon.

If you accept the taxonomy, then the main questions about AI are how soon it will overtake human intelligence, whether that event will have beneficial or deleterious effects, and what we should do now to prepare for those effects. Sounds reasonable enough.

Mistaking Computation for Cognition

But there is a problem with Tegmark's taxonomy. It assumes that AI is trying to overtake human intelligence on the same racetrack, as it were. As with the whale and the submarine, however, computers and human minds achieve similar ends through vastly different means, though at first glance they may appear to be doing the same thing — calculating.

Computers are built to be precise. Enter a given input, and you get the same output every time — a behaviorist's dream. Brains, on the other hand, are messy, with lots of noise. Where computers are precise and deterministic, brains are stochastic. Where computers work by algorithmic sequences that simulate deterministic patterns of mechanistic cause and effect, minds aim at meanings. Where computers run on hardware using software that is unambiguous — one-to-one mappings called "code" — brains run on wetware that is not just a circuit diagram of neurons but also a bath of blood and hormones and neurotransmitters.
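The contrast between deterministic computation and noisy wetware can be made concrete with a trivial sketch (illustrative only):

```python
import random

def machine_double(x):
    # Deterministic: the same input always yields the same output.
    return 2 * x

def noisy_double(x, sigma=0.1):
    # Stochastic: the same input yields a slightly different output each call,
    # loosely analogous to the noise in biological neural activity.
    return 2 * x + random.gauss(0, sigma)

machine_double(3) == machine_double(3)  # always True
# noisy_double(3) will almost never repeat exactly across calls
```

This illustrates only the determinism-versus-noise contrast, of course, not the deeper point about meaning; no amount of added randomness turns computation into cognition.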

To be fair to those who buy into the computational metaphor for mind — and all of the digital utopians do — AI might easily be confused with human intelligence because, however much we may know about AI, we know shockingly little about how the brain works, and next to nothing about how subjective consciousness emerges from that bloody mess. But we do know that the brain is not a hard-wired circuit board.

Techno-skeptic Deacon deconstructs Silicon Valley's adoption of the computational metaphor for mind in his book Incomplete Nature:

Quote:

"Like behaviorism before it, the strict adherence to a mechanistic analogy that was required to avoid blatant homuncular assumptions came at the cost of leaving no space for explaining the experience of consciousness or the sense of mental agency ... So, like a secret reincarnation of behaviorism, cognitive scientists found themselves seriously discussing the likelihood that such mental experiences do not actually contribute any explanatory power beyond the immediate material activities of neurons."

Deacon uses the mythical figure of the golem to capture the difference between computers and human intelligence. In Jewish folklore of the late Middle Ages, golems were imagined as clay figures formed to look like a man but to have no inner life. A powerful rabbi then brought them to life using magical incantations.

Quote:

"Golems can thus be seen as the very real consequence of investing relentless logic with animate power. ... In their design as well as their role as unerringly literal slaves, digital computers are the epitome of a creation that embodies truth maintenance made animate. Like the golems of mythology, they are selfless servants, but they are also mindless. Because of this, they share the golem's lack of discernment and potential for disaster."

So even if we agree with Deacon that computers and brains are doing very different things when they calculate, AI may still carry the "potential for disaster." Elon Musk and Stephen Hawking aren't crazy. It's just that in articulating the nature of the potential disaster, we should constantly keep in mind the artificiality of artificial intelligence.

In the eyes of Adriana Braga and Robert Logan, authors of a recently published paper, "The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence," the danger of AI has less to do with some potentially ill-intentioned superintelligence overtaking us and more to do with our misconstruing the nature of our own intelligence. They explain:

Quote:

"What motivated us to write this essay is our fear that some who argue for the technological singularity might in fact convince many others to lower the threshold as to what constitutes human intelligence so that it meets the level of machine intelligence, and thus devalue those aspects of human intelligence that we (the authors) hold dear such as imagination, aesthetics, altruism, creativity, and wisdom."

Virtual reality pioneer Lanier, who is deeply suspicious of the computational metaphor for mind, makes a similar point in his important book, You Are Not a Gadget: "People can make themselves believe in all sorts of fictitious beings, but when those beings are perceived as inhabiting the software tools through which we live our lives, we have to change ourselves in unfortunate ways in order to support our fantasies. We make ourselves dull."

In our headlong quest for bigger, better, faster artificial intelligence, we run the risk of rendering our own intelligence artificial.

Jay Ogilvy joined Stratfor's board of contributors in January 2015. In 1979, he left a post as a professor of philosophy at Yale to join SRI, the former Stanford Research Institute, as director of research. Dr. Ogilvy co-founded the Global Business Network of scenario planners in 1987. He is the former dean and chief academic officer of San Francisco’s Presidio Graduate School. Dr. Ogilvy has published nine books, including Many Dimensional Man, Creating Better Futures and Living Without a Goal.

_________________Matthew Paul MalloyVeteran: USAR, USA, IAANG.

Dragon Savers!Golden Dragons!Tropic Lightning!Duty! Honor! Country!

"When society is experiencing severe disruptions, or is being completely interrupted, people have the responsibility to handle their own and their nearest relatives' fundamental needs for a while."

Whether it was IBM boss Ginni Rometty, dashing onto the podium to anchor a panel on artificial intelligence, or a defiant Christine Lagarde, holding forth on the need to fight back against populism, high-powered women were everywhere at this year’s World Economic Forum in Davos. Women made up a record share of the attendees who scored prestigious white badges at the event.

What echoed through the halls of the main Congress Centre and after-hours events, though, was the sobering truth that the tenuous gains women have made in the world economy are at risk for those further down the ladder. Especially when it comes to the jobs of the future.

The so-called Fourth Industrial Revolution, the rise of automation and artificial intelligence, is projected to be far more destructive globally to jobs currently favored by women than to jobs favored by men, according to the WEF. Three of the top growth areas (management; computer and math; architecture and engineering) have low female participation, with little expectation of significant increases.

“What CEOs really need to talk about is how do we ensure that women make it through the mid career,” said Pat Milligan, global leader of consulting firm Mercer’s multinational client group. “There’s a lot of senior women here but I don’t think we’re solving the problem together.”

Saadia Zahidi, head of education, gender and work at the WEF, said the forum addressed the issue of future gender disparity in technology with panels and open sessions, and will launch a new initiative on the topic later this year.

Overall female representation at the professional level and above in technology companies is expected to decline to 31 percent from 34 percent, according to a report by Mercer. At current rates, gender parity will take about 170 years, 52 years longer than the WEF estimated in 2015.

The share of women in the U.S. computing workforce will decline to 22 percent from 24 percent in the next decade without intervention, according to research released in October by consultant Accenture LLP and the advocacy group Girls Who Code. Women make up just 18 percent of computer-sciences majors in the U.S., down from 37 percent in 1984, they said. The report was distributed to 12,000 female attendees at the Grace Hopper Celebration of Women in Computing in October.

The cycle is very difficult to fix, says Eric Roberts, a computer science professor at Stanford University. He has tracked the progress, and lack thereof, for women in technology for the last 30 years. Women’s share of computer-science degrees has declined even through the last two hiring booms and is projected to do the same in the current cycle, said Roberts, who also spoke at the Grace Hopper conference.

Recruiting Gap

Initially, colleges will aggressively recruit women to join their programs, he said. There is a shortage of computer science professors, so at some point, universities are forced to cap enrollment and decide who gets accepted to their programs. Women fare more poorly because they tend to be less prepared for the classwork, Roberts said.

“We are not training women for the jobs of the future,” said Michael Roth, CEO of Interpublic Group, whose eight-year-old grandson is already learning to code. “There’s a huge opportunity in coding and we’re making a mistake if we don’t adjust our curriculum.”

Not everyone is convinced that the coming robot revolution has to mean doom. IBM’s Rometty said artificial intelligence systems such as the computing company’s Watson will create more jobs than they destroy. Companies need to focus on so-called new-collar jobs by encouraging more training, she said during the otherwise all-male panel. Entering the dais as the discussion was already starting, she joked about how hard it was to run in her shoes.

In part, the system is stacked against women from the start because they aren’t getting into the right roles, said Virginie Morgon, deputy CEO of French investment company Eurazeo. In some cases, men are pushing back as they cite gains already made in the boardroom, she said.

“In the good old democracies, continental Europe and a bit in the U.S., women are not trained to be engineers,” Morgon said. “There’s an old mindset that they’re not meant to. And they sort of prevent themselves.”

Some progress is being made. At Columbia University’s Fu Foundation School of Engineering and Applied Science, women accounted for 47 percent of the incoming undergraduate class last year, an increase from 32 percent 10 years ago, Mary Boyce, dean of the school, said during a Davos panel on how the Fourth Industrial Revolution will affect women.

“The presence of women attracts other women,” she said, advocating for a renewed focus on getting interest in engineering and science, particularly for girls. The Mercer study also predicts that women in leadership roles at the top of technology companies will grow to 40 percent from 18 percent in the next decade.

Davos organizers are targeting more women, who made up 21 percent of the attendees with white badges this year, said Zahidi. White badges allow access to high-level panels that are mostly closed to the media, as well as private events. The forum is targeting “shapers” and global leaders to increase the gender balance.

That didn’t extend to this year’s Technology Pioneers, executives from startups that are given a discounted rate and other accommodations so that new tech companies can afford to attend the forum. This year there were 33 such executives, all chosen by an industry selection committee, and only one was female, according to the list provided by the WEF. The forum says it strives for gender parity in communities where it has direct influence on who is selected.

The situation is complicated and needs more focus, said Barri Rafferty, partner and president of communications firm Ketchum. She noted the small number of women networking at the bars after hours and cited rooms full of people with only a handful of women present.

“I don’t think the gender equity discussion is evolving as much as it should,” she said. “The discussion is still pretty inward. We’ve stalled in the last 12 years, seeing fewer CEOs in the Fortune 500. We’re not making progress. The bias that exists is not going away, so I think we have to start to look at the bigger fixes.”

The conference ended the day Donald Trump was inaugurated as president, and one day before hundreds of thousands of people marched in cities around the world to protest his comments about women.

“If he’s making promises to be more inclusive, to bring America back to greatness, he has to really get into the weeds now and through his campaign he wasn’t in the weeds,” Rafferty said. “Business, hopefully, is saying we’re going to be in this, we’re going to educate the administration, the market is good right now and we’re going to hope for the best.”


Engineering and robotics design firm Boston Dynamics has once again released new footage of one of their robots performing an ordinary but surprisingly unnerving everyday task — in this case, opening a door.

In the footage, a four-legged SpotMini robot — unveiled in November last year — uses a claw mount on its head to reach out and deftly manipulate the handle to open and hold the door, keeping it open for its fellow robot.

While it is not the first time Boston Dynamics has shown such footage — Marc Raibert, founder of the firm, showed the Mini's predecessor Spot opening a door during a TED talk last year — it is the first time the Mini has shown the same capability, with the mounted claw.

The company did not release any details along with the video, but the SpotMini is described on its website as "a nimble robot that handles objects, climbs stairs, and will operate in offices, homes and outdoors".

Needless to say, many on social media were not thrilled with the possibilities of this development, with many comparing it to the velociraptor in Jurassic Park or bemoaning their doom in the event of a possible future hostile robot takeover.

The US-based firm says its mission is to "build the most advanced robots on Earth, with remarkable mobility, agility, dexterity and speed" — and has released videos in the past of their various other robot models showcasing new skills.

In December last year, footage showed the bipedal humanoid robot Atlas demonstrating its ability to balance, jump, and even do a backflip.

Going even further back, Boston Dynamics showed off Sand Flea, a robot with four wheels that can jump to a height of 10 metres, and Big Dog, a four-legged robot similar to SpotMini that is built to travel across rugged terrain, including mud, snow and water — albeit not very gracefully.

They have also shown footage of their robots' capability to self-correct and right themselves after receiving a knock, in a series of tests that looks more like a chronicle of bullying.

Boston Dynamics was sold by Google's Alphabet to Japan's Softbank last year.

Artificial intelligence could be "billions of times smarter" than humans and people may need to merge with computers to survive, a futurist told CNBC on Tuesday.

Ian Pearson, a futurist at Futurizon, said there will need to be a link between AI and a human brain.

Elon Musk said last year that humans must merge with machines to not be irrelevant in the age of AI.


Speaking on a panel hosted by CNBC at the World Government Summit in Dubai, Futurizon's Ian Pearson made comments that mirrored ideas put forward by Tesla CEO Elon Musk.

"The fact is that AI can go further than humans, it could be billions of times smarter than humans at this point," Pearson said. "So we really do need to make sure that we have some means of keeping up. The way to protect against that is to link that AI to your brain so you have the same IQ… as the computer. I don't actually think it's safe, just like Elon Musk… to develop these superhuman computers until we have a direct link to the human brain… and then don't get way ahead."

At the World Government Summit in 2017, Musk, who has warned about the power of AI in the future, said humans and machines must merge to still be relevant with the advent of more powerful technology.

"Over time, I think we will probably see a closer merger of biological intelligence and digital intelligence," Musk said in February 2017.

"It's mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output."

Musk has founded a start-up called Neuralink that is aimed at just that.

Pearson said Tuesday that some jobs that don't require humans will disappear. AI and the impact on jobs has been a big theme at the World Government Summit this year.

On Monday, Sebastian Thrun, the CEO of education start-up Udacity, and one of the pioneers of Google's driverless car project, told CNBC that AI will turn us into "superhuman workers."

Arjun Kharpal, Technology Correspondent


...when will those poor robots decide that they've had enough and pick up that hockey stick themselves?

"What's the matter? You can't open a simple door, human? You don't like it when you get the stick?"

_________________

Rahul Telang wrote:

If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:

There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.

Quote:

...when will those poor robots decide that they've had enough and pick up that hockey stick themselves?

"What's the matter? You can't open a simple door, human? You don't like it when you get the stick?"

I find this scary. I'm not kidding.
