As artificial intelligence grows, so do ethical concerns

Published 5:14 pm, Friday, January 31, 2014

Now that Google is delving even deeper into artificial intelligence, the minds behind "Don't be evil" might face real questions of right and wrong.

This week the Mountain View search titan snatched up DeepMind, which develops artificial intelligence software. The price tag and the specifics of the deal remain unclear, but Google will set up an ethics board to oversee DeepMind's artificial intelligence projects, according to the technology news site The Information.

Google would not confirm this detail, but as increasing processing speeds help artificial intelligence live up to its hype, tech companies will face challenging ethical questions.

And those questions could start popping up soon, says Eliezer Yudkowsky, a fellow at the Machine Intelligence Research Institute. He says ethics boards should be more common because computer technology is evolving so quickly that an explosion of artificial intelligence might not be far off.

"We have no idea when it's time," Yudkowsky says.

Military concerns

The ethics of artificial intelligence is already an area of huge concern when it comes to the military's use of unmanned vehicles. In November 2012, the Department of Defense tried to establish guidelines around minimizing failures and mistakes for robots deployed to kill targets. The United Nations is expected to address the topic this year.

Military robots can save soldiers' lives, and proponents say they aren't susceptible to mistakes caused by passion, revenge, pressure and fatigue. But David Akerson, a lawyer and member of the International Committee for Robot Arms Control, rejects the idea that the military should eventually rely on artificial intelligence to decide when and where to make targeted kills.

"Right now everything we have is remotely controlled and there's always a human in the loop," Akerson says. "We're heading in the direction to give the decision to kill to an algorithm."

But such ethical decisions won't just be made on the battlefield.

Last year, Moshe Vardi, a computational engineering professor at Rice University, argued that computing power was growing at such a rate that human jobs could be automated by 2045. That seems far-fetched, but what if a company developed software that could wipe out a job market overnight - say an algorithm and text-to-speech engine that (actually) mimicked customer service agents? A company's ethics board would have to balance its own gains in profit against the drastic impact on people's livelihoods.

It also remains to be seen how deep learning algorithms, which seek to mimic the way the brain thinks, interpret our lives via the troves of data we upload - often unknowingly - to the Internet every day.

One thing that is certain: Facebook and Google both see that kind of artificial intelligence in their future.

"The real value will be if we can understand the meaning of all the content that people are sharing, we can provide much more relevant experiences in everything we do," Facebook CEO Mark Zuckerberg said on an earnings call this week.

Facebook's push

Facebook is trying to keep pace with Google, and has even been gathering data of users' offline habits, like brick-and-mortar retail spending patterns.

"This is the first academic conference I have attended where there was this much talk about getting rich or being bought out, something that is actually happening to a number of researchers that appeal to Facebook's ambitions," one artificial intelligence researcher wrote.

"I sincerely hope that this flirtation with Silicon Valley won't turn into a marriage."

The benefits to Facebook and Google aren't hard to imagine. Rather than depending on users following brands, they could simply scan people's pictures and know whether they prefer Coke or Pepsi. Or, going a step further, the machine could determine your location, recognize you haven't posted a picture of yourself smiling in some time, and recommend you buy tickets to the funny movie playing around the corner.
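The scenario above is hypothetical, but the logic behind it is simple enough to sketch. The toy Python below - with entirely invented data and helper names, not any real Facebook or Google API - shows how brand preference might be counted from objects detected in photos, and how a lack of recent "smiling" photos could trigger a movie suggestion.

```python
# Illustrative sketch only: the data feed, object labels, and mood tags here
# are hypothetical, invented to show the kind of inference the article imagines.
from datetime import date, timedelta

# Hypothetical record of a user's photo posts: (date, detected objects, detected mood)
posts = [
    (date(2014, 1, 2), {"coke_can", "pizza"}, "neutral"),
    (date(2014, 1, 9), {"coke_can"}, "smiling"),
    (date(2014, 1, 20), {"laptop"}, "neutral"),
]

def brand_preference(posts):
    """Count brand sightings across photos and return the more frequent brand."""
    coke = sum("coke_can" in objects for _, objects, _ in posts)
    pepsi = sum("pepsi_can" in objects for _, objects, _ in posts)
    if coke == pepsi:
        return None  # no clear signal either way
    return "Coke" if coke > pepsi else "Pepsi"

def should_suggest_comedy(posts, today, window_days=14):
    """Suggest the funny movie if no 'smiling' photo appeared in the last window."""
    cutoff = today - timedelta(days=window_days)
    recent_smiles = [d for d, _, mood in posts if mood == "smiling" and d >= cutoff]
    return len(recent_smiles) == 0

print(brand_preference(posts))
print(should_suggest_comedy(posts, today=date(2014, 2, 1)))
```

The hard part in practice is not this bookkeeping but the perception steps it assumes - recognizing a soda can or a smile in a photo - which is exactly the kind of deep learning capability the article says these companies are buying.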

But marketing data doesn't necessarily herald the coming of a "Terminator"-style Skynet. Yudkowsky believes that even though Google possesses some of the world's most powerful computers and is investing in robotics companies, it's far-fetched to worry about Google taking over the world. Its ethical discussions would have more to do with targeting advertisements than deploying killer drones.

"That has very little to do with Google having all your e-mails," he says.
