Robots

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”

—Ray Kurzweil

We came of age imagining New Frontiers, an idyllic time of relative innocence when anything seemed possible: rockets that would travel to the moon like buses, a permanent space station, and flying cars a la the Jetsons. It was the go-go 50s and 60s, when an energized Team America sat astride the top of the world, with few limits on dreams and none on ambition. Optimism hung in the air like the scent of roses on a spring morning.

In the America of the 1950s and 60s, the future was filled to bursting with promise. A youthful and beloved president set the country a challenge to travel from the earth to the moon in a decade, which we did, though he did not live to see it.

Young people read about ENIAC, the first (room-sized) computer, designed to compute artillery tables during WWII (and later used for nukes). Large mainframes followed; in went punchcards, out came reports. Even my high school had one. Science fiction writers, envisioning the future, foresaw robots who would reliably assist humans in a variety of tasks and, of course, adventures. As a boy, I had a toy Robby the Robot, a dutiful servant in the 1956 MGM science fiction film Forbidden Planet. Later on, as I began to read science fiction, I encountered Isaac Asimov's original three laws of robotics.

Introduced in his 1942 short story "Runaround" and included in I, Robot, The Three Laws are:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
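The Laws' strict ordering, where each rule yields to those above it, can be sketched as a priority check. This is my illustration, not anything from Asimov; the action fields are invented for the example:

```python
# A toy sketch of the Three Laws as a strict precedence check: each
# lower-priority rule yields to the ones above it. The field names
# describing an action's consequences are invented for illustration.

def permits(action):
    """Return True if a hypothetical robot may take `action`."""
    # First Law: never harm a human, by act or by inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders, unless doing so conflicts with the First.
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to the first two.
    if action.get("destroys_self") and not action.get("ordered"):
        return False
    return True

print(permits({"harms_human": True, "ordered": True}))    # False: no order overrides the First Law
print(permits({"destroys_self": True, "ordered": True}))  # True: an order overrides self-preservation
```

The ordering is the whole point: a robot ordered to harm a human refuses, while a robot ordered to destroy itself complies.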

These laws provided themes for Asimov's robot fiction and were devoured by young adults. Intended as a safety feature, the Laws could not be bypassed. This led to interesting plot twists in many of Asimov's robot-focused stories, as robots react in unusual and counterintuitive ways as a consequence of how a robot applies the Three Laws to a given situation. Other authors working in Asimov's fictional universe adopted them, and over time we seem to have taken them as a given.

They are not. The utopian futures envisioned by earlier writers have given way to Terminator robots, and Skynet, to say nothing of pilotless drones raining relentless death down on wedding parties. We're a long way from Robby the Robot.

The notion of intelligent automata, a non-human intelligence, dates back to ancient times. More recently, computer technology may trace back to Charles Babbage and his Difference Engine, but "artificial intelligence" can be traced to 1956 and a conference at Dartmouth where the term was coined. Research in the field ebbed and flowed over decades, and has clearly benefited most recently from increases in computing power. In 1997, when IBM's Deep Blue defeated Russian grandmaster Garry Kasparov, and in 2011, when IBM's Watson won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings, a technological Rubicon had been crossed.

It's neither my purpose nor within my ability to trace all of the meaningful developments in AI, but I thought it might be useful to consider AI's implications for the future. And yes, I am aware that for much of this discursion I am conflating robotics and AI, but since both rely on vast increases in processing power to be fully realized, keep your rotten vegetables in the bag and bear with me.

“The miraculous has become the norm.” –Jonathan Romney

Sales of manufacturing robots increase each year. According to The International Federation of Robotics, robot sales in 2015 showed a 15% increase over the prior year. The IFR estimates that over 2.5 million industrial robots will be at work in 2019, a growth rate of 12% between 2016 and 2019. Workers have been working side-by-side with robots for decades. My wife's father was a foreman at Ford who worked with robots in the 70s, so robotic work technology is common. But the predicted rate of adoption, coupled with the prospects of driverless fleets, raises the question of what happens to the jobs? And the workers?

No doubt robots increase productivity and competitiveness. This productivity can lead to increased demand and new job opportunities, often in more highly skilled and better-paying jobs. Yet for all this rosy optimism, fear nags. More often, it leads right to profits for the owners and immiseration for the laid off.

Several years ago, author and futurist Ray Kurzweil referred to a point in time known as "the singularity," that point at which machine intelligence exceeds human intelligence. Based on the exponential growth of technology based on Moore's Law (which states that computing processing power doubles approximately every two years), Kurzweil has predicted the singularity will occur by 2045.
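Doubling every two years compounds faster than intuition suggests, which is the engine behind Kurzweil's timeline. A back-of-the-envelope sketch (the two-year doubling period and baseline year are assumptions for illustration; real hardware trends vary):

```python
# Back-of-the-envelope Moore's Law arithmetic: if processing power doubles
# every two years, how much more is available by some future year?
def growth_factor(start_year, end_year, doubling_period_years=2):
    doublings = (end_year - start_year) / doubling_period_years
    return 2 ** doublings

# From 2005 to 2045: 40 years = 20 doublings, about a million-fold increase.
print(f"{growth_factor(2005, 2045):,.0f}")  # 1,048,576
```

Eighty years of the same trend yields a trillion-fold increase, which is how exponential extrapolations arrive at numbers like Kurzweil's "billion-fold."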

“The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” —Elon Musk

"The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking had a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig's disease, and communicated using specialized speech software.)

And Hawking isn't alone. Musk told an audience at MIT that AI is humanity's "biggest existential threat." He also once tweeted, "We need to be super careful with AI. Potentially more dangerous than nukes."

Despite these high-profile fears, other researchers argue the rise of conscious machines is a long way off. Says Charlie Ortiz, AI head of a Massachusetts-based software company, "I don't see any reason to think that as machines become more intelligent … which is not going to happen tomorrow — they would want to destroy us or do harm. Lots of work needs to be done before computers are anywhere near that level."

Reassured yet?

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” —Eliezer Yudkowsky

“Someone on TV has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.’” —Sybil Sage

"Alexa, make me a cocktail, willya?" Not quite yet, but perhaps soon, as companies are incorporating AI into their products. From smartphone assistants to driverless cars, Google is positioning itself to be a major player in the future of AI. Amazon and Apple have staked out their own strong positions, as the ubiquity of digital assistants like Siri and Alexa makes them ghostly familiars… with access to your personal information, internet search histories, text messages and porn habits. And with Facebook and hundreds of apps hoovering up our personal information for resale to unseen third parties for purposes available only on a need-to-know basis, and you don't need to know…

… because YOU are the product.

"Machine learning" is a term of art referring to computer systems that learn from data. Time was computers followed instructions and performed computations for data crunching. Today's devices use a set of machine-learning algorithms, collectively referred to as "deep learning," that allow a computer to recognize patterns from massive amounts of data. This is a deep and profound change, the implications of which we have not yet grasped. And if we have not grasped it, how can we control it or appreciate its repercussions?
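The difference between following instructions and learning from data can be made concrete with a toy example. This is my sketch of a single perceptron, the simplest learning unit, nothing like the deep networks described above, but the principle (adjust weights to fit examples) is the same:

```python
# A minimal perceptron that *learns* a rule (logical OR) from examples
# rather than being programmed with it. Deep learning stacks many layers
# of units like this one; the principle of fitting weights to data is the same.
def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, adjusted as the machine "learns"
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge the weights toward the observed data.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Four examples of logical OR; the rule itself appears nowhere in the code.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
for (x1, x2), target in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    assert pred == target  # the learned weights reproduce the rule
```

Nowhere is the OR rule written down; it emerges from the data. Scale that idea up by millions of weights and you have the pattern recognizers the text describes, and the same opacity problem: the "program" is a pile of learned numbers no one wrote.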

At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language… the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.

So Facebook had to pull the plug because in a short period of time, the robots had developed their own language. Not sure about you, but when I envision a future where I attempt a transaction with online chatbots armed not only with a chip full of predictive algorithms, but also in possession of the entire dossier of personal information gleaned from every keystroke I've ever recorded, well, I'm not liking my odds. Here is your "permanent record" made real.

And then there's the prospect of the Internet of Things (IoT), a galaxy of sensors embedded in everyday objects, enabling them to send and receive data. This is made possible by broadband internet becoming more widely available, less expensive connection costs, and more devices built with Wi-Fi capability and sensors. I already know my phone and TV listen to me; will they next connive against me in concert with the refrigerator and the coffee maker? Encourage the air conditioner to go on strike?

All roads in AI seem to lead to dystopia. Our inability to imagine a more positive future for artificial intelligence may stem from the fact that we've lost faith in ourselves. We've seen the tech companies in action, and they are opaque. And they sell the data mined with impunity to unseen actors. Our morality is defined not by the Church or by civic pride, but by the spreadsheet; our worth found in the lower right-hand corner. Knowing we are cooking the planet, we insist on burning the last few gallons of liquid sunlight left in the ground to wring the last few dollars of profit. We willingly sacrifice children to the profits of the Slaughter Lobby. We elect louts to lead us, accept sabotage as political business-as-usual, embrace treason as a cost of doing business. Under the circumstances, who would dare possibly envision a happier future?

Who could imagine Asimov's Three Laws emerging from any part of today's debased culture?

Surly1 is an administrator and contributing author to Doomstead Diner. He is the author of numerous rants, screeds and spittle-flecked invective here and elsewhere, and was active in Occupy. He lives in Southeastern Virginia with his wife Contrary in quiet and richly-deserved obscurity. He will have failed if not prominently featured on an enemies list compiled by the current administration.

Originally published on the Doomstead Diner on August 23, 2014

Your Robot Overlord Does Not Love You

The Three Laws of Robotics, a set of rules devised by science fiction author Isaac Asimov:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

― Isaac Asimov, I, Robot

In the process of preparing last week's overheated screed, I came across an article that, after nearly 4,000 words, consideration for my audience bade me defer to another day. It reported that Elon Musk, he of Tesla and SpaceX, and widely regarded as one of the smartest guys in the room, had concluded that one of the gravest dangers to the continuation of the human race was not nuclear power so much as artificial intelligence.

Consider that for a moment. Or better yet, read the article in the original here. In a couple of reported Tweets, Musk urged that we be “super careful with AI. Potentially more dangerous than nukes,” and “Hope we’re not just the biological boot loader for digital super intelligence. Unfortunately that is increasingly probable.” Musk’s concern was spurred by a book by Nick Bostrom of Oxford’s Future of Humanity Institute entitled “Superintelligence: Paths, Dangers, Strategies.”

The book addresses the prospect of an artificial superintelligence that could feasibly be created in the next few decades. According to theorists, once the AI is able to make itself smarter, it would quickly surpass human intelligence.

What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn’t stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction—and maybe that’s exactly what Elon Musk is afraid of.

So what are some of these thought experiments? Bostrom says,

“We cannot blithely assume the superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans – scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures of life, humility and selflessness, and so forth.”

Your mileage may vary, but from Gaza to Ferguson, we find these so-called human values already lacking in much of what passes for humanity. What worries Musk and his oracles are the unintended consequences of building artificial intelligence detached from ordinary human ethics. Future AI might find more value in computing the decimals of pi or ensuring its own survival than solving human problems in ways that we might recognize as helpful.

Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute:

“The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

Without recapitulating the entire article, its point is that it is difficult for programmers to anticipate the instructions necessary to program the ethical dimension and problem solving capability to safeguard human life. On the other hand, we find that in other parts of our military-industrial complex, our tax dollars are already working overtime to create artificial creatures whose purpose is ostensibly benign, but the implications of which are terrifyingly apparent to anyone who has seen Terminator movies.

In a breezy article on Geek Pride entitled, “5 Apocalypses You Are Probably Not Ready For” the authors consider not only technology that enables one monkey to control the actions of another monkey by simply thinking, but also a device they call, “Human Powered, Googlezon Big Spider DroneBotcalypse.”

Now, a robot that can’t be knocked over is terrifying enough. It can also climb stairs and is allegedly powered by your hopes and dreams. Why Google is doing this is anyone’s guess, but we can only be led to assume that it is to take over the world.

“Well,” you say “It’s not like they’re trying to watch our every move or anything!” Well…

So we have a company that watches everything you do online, records video of you when you’re offline and robots that can walk up the stairs. The only way we can hide is the removal of stairs, and living in treehouses.

The drones have been initially designed to eliminate the day-long waiting period for Amazon deliveries, shortening the time to a possibility of just 30 minutes. Currently the plan is to have them manned remotely by human pilots, so we’re safe, for now. The main problem is what is known in the drone world as “SWaP — size, weight and power. This is essentially a physics problem: The larger your payload, the more lift you need. The more lift you need, the larger your battery has to be, which further adds to the weight, which adds to the power requirements, and so on” (Washington Post, 2013).
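The SWaP feedback loop the Post describes can be sketched numerically: the battery must be big enough to lift the payload plus itself. Every coefficient below is invented for illustration; real lift-to-mass ratios depend on the airframe:

```python
# Toy model of the SWaP spiral: battery mass must grow to lift both the
# payload and the battery itself. The 0.25 kg-of-battery-per-kg-lifted
# coefficient is invented purely to show the feedback loop.
def battery_mass_for(payload_kg, kg_battery_per_kg_lifted=0.25, tol=1e-6):
    battery = 0.0
    while True:
        # Battery needed to lift payload plus the current battery estimate.
        needed = kg_battery_per_kg_lifted * (payload_kg + battery)
        if abs(needed - battery) < tol:
            return needed
        battery = needed

# A 2 kg parcel settles at a modest battery; a 500 lb (~225 kg) robot payload
# demands roughly a hundredfold more, before any structural weight is counted.
print(round(battery_mass_for(2), 2))
print(round(battery_mass_for(225), 2))
```

As long as each kilogram of battery can lift more than a kilogram, the spiral converges; push the coefficient toward 1 and the required battery mass blows up, which is the "and so on" in the Post's description.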

Essentially what this boils down to is a matter of time and money before drones can carry a bigger payload, such as a 500lbs Big Dog robot. This may seem a long way off, but all Amazon probably needs is a massive cash-injection for the advances to be put into effect. Cash the likes of which Google might have.

I give you Googlezon, probable merger of the late 2020s and new owners of the world.

The motorized bison is a creature called “Big Dog” currently developed by Boston Dynamics, under a DARPA grant generously provided by you and me. The ostensible purpose is search, rescue and supply, but…

BigDog is a rough-terrain robot that walks, runs, climbs and carries heavy loads. BigDog is powered by an engine that drives a hydraulic actuation system. BigDog has four legs that are articulated like an animal’s, with compliant elements to absorb shock and recycle energy from one step to the next. BigDog is the size of a large dog or small mule; about 3 feet long, 2.5 feet tall and weighs 240 lbs.

BigDog’s on-board computer controls locomotion, processes sensors and handles communications with the user. BigDog’s control system keeps it balanced, manages locomotion on a wide variety of terrains and does navigation. Sensors for locomotion include joint position, joint force, ground contact, ground load, a gyroscope, LIDAR and a stereo vision system. Other sensors focus on the internal state of BigDog, monitoring the hydraulic pressure, oil temperature, engine functions, battery charge and others.

Development of the original BigDog robot was funded by DARPA. Work to add a manipulator and do dynamic manipulation was funded by the Army Research Laboratory’s RCTA program.

And the news keeps getting worse. Rather than embrace the high ground of “robot morality” imagined by Asimov, we find that the Pentagon is in the early days of raising a robot army. The justification is that ostensibly the military is rapidly creating weapons systems that will need to make moral decisions. Current military regs prohibit armed systems that are fully autonomous. Yet the increasing sophistication of military technology demands greater and greater autonomy, and where lives are at stake, machines capable of weighing moral factors. What could possibly go wrong?

The U.S. military is trying to develop and deploy a real life terminator. A research agency associated with the Pentagon has unveiled pictures of a robot that looks and walks like a man.

The ATLAS robot is being developed by the Defense Advanced Research Projects Agency (DARPA) and a Massachusetts company called Boston Dynamics. DARPA, known as “the Pentagon’s weird science agency,” is the organization that is stated to have invented the internet. DARPA now has an intensive effort to create robots such as ATLAS underway at their facilities, and a new video reveals some of the latest developments.

DARPA has told the press that ATLAS is designed to enter disaster areas such as places contaminated by radiation or toxic chemicals and provide relief. Yet it would also function perfectly on the battlefield.

The Pentagon has hired a bunch of philosophy professors from leading U.S. universities to tell them how to make robots murder people morally and ethically.

Of course, this conflicts with [Asimov’s] first law above. A robot designed to kill human beings is designed to violate the first law.

The whole project even more fundamentally violates the second law. The Pentagon is designing robots to obey orders precisely when they violate the first law, and to always obey orders without any exception. That’s the advantage of using a robot. The advantage is not in risking the well-being of a robot instead of a soldier. The Pentagon doesn’t care about that, except in certain situations in which too many deaths of its own humans create political difficulties. And there are just as many situations in which there are political advantages for the Pentagon in losing its own human lives: “The sacrifice of American lives is a crucial step in the ritual of commitment,” wrote William P. Bundy of the CIA, an advisor to Presidents Kennedy and Johnson. A moral being would disobey the orders these robots are being designed to carry out, and — by being robots — to carry out without any question of refusal. Only a U.S. philosophy professor could imagine applying a varnish of “morality” to this project.

The Third Law should be a warning to us. Having tossed aside Laws one and two, what limitations are left to be applied should Law three be implemented? Assume the Pentagon designs its robots to protect their own existence, except when . . . what?

No, it’s not a souped-up version of Robby the Robot — it’s ATLAS, DARPA’s latest attempt at creating a humanoid robot. Unlike the super-realistic Petman, which was designed to test chemical protection clothing, this 330-pound monster is meant to assist in emergency situations. Riiiight...

We’ve seen a proto-version of ATLAS before, but this updated unit can perform a host of new tricks, like walking through rugged terrain and climbing using its hands and feet. It has 28 hydraulically actuated degrees of freedom, and of course, two hands, arms, legs, feet, and a torso with some kind of fancy-ass monitor on it that probably goes “ping!” every once in a while.

Its head is equipped with stereo cameras and — ahem — a laser finder. Eventually, DARPA says the 6-foot robot will use its articulated and sensate hands to use tools designed for humans.

Hmmm, by “tools” I wonder if they mean “machine gun.”

No one who watched some of the best legal minds of a generation labor for the Bush administration to create legal justification for torture should be surprised that the Pentagon can hire ethicists and philosophers to determine under what circumstances a robot may commit murder. Paging Dr. Mengele…

Here are the three laws David Swanson posits will replace Asimov’s:

1. A Pentagon robot must kill and injure human beings as ordered.
2. A Pentagon robot must obey all orders, except where such orders result from human weakness and conflict with the mission to kill and injure.
3. A Pentagon robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

They would behave in much the same manner as some of our all-too-human military today, to say nothing of SWAT-gear-hungry cops, those Barney Fifes in military drag making up for their dateless high school weekends and various manhood inadequacies by pointing loaded rifles at unarmed civilians to express their inchoate rage.

As anyone not living in a cave knows full well, the foreign policy of this country, as conducted by the neocons who staged a silent coup to control it (and control it yet despite the nominal change in political administration), operates in a conscience-free zone. So perhaps Elon Musk is correct to be worried about artificial intelligence, or more precisely, the lack of ethics that guides its technological development. Our culture has technology in spades. What it lacks is a moral dimension other than materialism and the quest for power to inform its use.

Thus no one should be surprised by developments like these technological fruits, or their subornation to the worst uses imaginable. Just as MRAPs, SWAT equipment, LRADs and other excess military equipment are helpfully provisioned by the Defense Logistics Agency and transferred to local cops, so too are military populace-suppression techniques passed along. Thus the police become an armed militia whose sole purpose is to protect the property of the .1% and to keep the rabble in line, as we have seen repeated from Oakland to Ferguson to New York City.

Clearly Big Dog and Atlas are just two projects in the robot pipeline, and these are the most visible and showy. For every ostensible “humanitarian use,” there are dozens of less humanitarian uses that don’t make the press releases.

What about the less sexy projects, the smart computers that control systems, that will make decisions based on whatever parameters are fed into them by the best hired “ethicists and philosophers” that Pentagon money can buy? Perhaps that’s what’s keeping Elon Musk up at night. What could be next: Machine-animal hybrids?

Or, on the other hand, nothing to worry about, citizen. Pass the Doritos.

***

Surly1 is an administrator and contributing author to Doomstead Diner. He is the author of numerous rants, articles and spittle-flecked invective on this site, and has been active in the Occupy movement. He shares a home in Southeastern Virginia with Contrary, and every day remarks at his undeserved good fortune at having such a redoubtable woman in his life.
