Ben Dickson
No one will dispute that Artificial Intelligence has taken great strides in recent years. Thanks to AI, we're getting targeted and personalized ads, and we're seeing improvements in education, healthcare, agriculture and beyond.
So what’s preventing Artificial Intelligence from taking the next big leap? Maybe it’s intelligence.
The fact of the matter is, AI algorithms are becoming very smart and efficient at specific tasks, but they're not smart enough to explain their decisions. And neither can their creators.
How does that amount to a problem? It doesn't, as long as AI is making suggestions and not decisions. So for things such as advertisements and purchase suggestions, it's okay to put the robots in charge. Even in domains such as the diagnosis and treatment of illness, AI can make some very good recommendations and help physicians decide on patient treatment. AI can help glean traffic patterns and make recommendations for reducing congestion in cities. And in fact, this has already been enough to disrupt the employment landscape.
But when it comes to making critical decisions, we're still not ready to put autonomous AI-powered systems in full control. After all, everyone makes mistakes, human or not. And when those mistakes have critical or fatal consequences, someone has to be held to account.
But there are plenty of fields where automation has already fully replaced humans; manufacturing is just one example. And those systems go awry often enough. In fact, when well trained, an AI system's margin of error is negligible. Self-driving cars, for instance, are expected to reduce road accidents by over 90 percent. So what makes AI any different from other software?
Transparency—or opacity, depending on your perspective.
Past generations of software were largely transparent. Everything relied on source code, which could be examined to determine the cause of errors. Open source software is available to all for scrutiny. Even closed-source software can be reverse-engineered or, failing that, opened for examination with the right legal warrant. So if a piece of software stops working as it should and causes damage, it's relatively easy to determine culpability. Investigators can determine whether the user was to blame for misusing the application, or whether the developer was responsible for not fixing the bugs.
Things are not so clear-cut with Artificial Intelligence. Developers create algorithms, provide them with data, train them, and then let them learn on their own. Those algorithms usually end up finding patterns and tricks that even their creators can't fathom. They become opaque, as engineers say. AlphaGo, the famous Google AI that beat the world champion at the ancient board game Go, made moves that left its creators (and the world) stunned.
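To make that opacity concrete, here is a minimal, hypothetical sketch in Python: a tiny neural network trained with plain NumPy to learn the XOR function. After training, the network answers correctly, but its learned weight matrices are just arrays of numbers; nothing in them reads as a human-interpretable rule. The model, data, and hyperparameters are illustrative only, not drawn from any system mentioned in the article.

```python
import numpy as np

# A toy "opaque" model: a two-layer network trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                  # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted probabilities
    d_out = (p - y) / len(X)            # gradient of cross-entropy loss
    d_hid = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_hid); b1 -= 0.5 * d_hid.sum(axis=0)

predictions = (p > 0.5).astype(int).ravel()
print(predictions)   # the network has learned XOR
print(W1)            # ...but the only "explanation" it offers is raw numbers
```

The trained network is correct, yet asking *why* it classified an input one way or the other yields only a pile of weights. That gap between working and explainable is the opacity engineers mean.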
And therein lies the problem. No one will object if AlphaGo makes a wrong move, or if the most sophisticated advertising algorithm makes a bad suggestion. The media will talk when Google Photos labels black people as gorillas, or when an AI judge favors white contestants in a beauty contest. Microsoft's disastrous chatbot will be remembered as a bad joke. But no one gets hurt (at least not seriously) by these systems, so we shrug off their mistakes.
But what about more critical circumstances? What happens with the remaining 10 percent of fatalities that self-driving cars can't prevent? Who will explain why a self-driving car ran over a pedestrian, even if the probability was near zero? Who will be held to account? The engineers will say they can't explain every decision their product makes. The driver, if the owner can even be called that, will have had no control over the situation. The car, which committed the act, stays mum.
The same criticality extends to other domains such as healthcare, crime fighting and law. Mistakes in those fields can have social, political, and even fatal repercussions. And while humans make mistakes all the time, at an even higher rate than machines, they take responsibility for their mistakes: they go to court, stand trial, pay fines, go to jail.
So where do we move from here? First, we need more transparency. This means AI developers should make sure both the software artifacts (source code, components…) and the data science (stats, formulas, math…) that powers their products are open to scrutiny. This goes against the current norm, which is to keep secrets away from prying eyes. Fortunately, we’re seeing some systematic efforts in this field. But more needs to be done.
Naturally, not everything can be made completely transparent, especially where deep learning is involved. There will still be significant opacity where complex functionality is involved. So second, I strongly believe that in those cases, humans should remain in exclusive charge. AI systems can complement human efforts, providing experts with research results, patterns, and useful data, and helping them make critical decisions. The point is, the red button must be pressed by someone who can assume responsibility for their actions.
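One way to read "the red button must be pressed by someone" is as a design pattern: the AI system only recommends, and a gate refuses to carry out any critical action until a named human signs off. Here is a minimal sketch of that idea; the names (`Recommendation`, `HumanApprovalGate`, the traffic-rerouting scenario) are hypothetical, invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # what the AI system proposes
    confidence: float    # the model's own score, not a guarantee
    rationale: str       # whatever evidence the system can surface

class HumanApprovalGate:
    """Refuses to carry out a critical action without a named approver."""

    def __init__(self):
        self.audit_log = []   # who approved what, for later accountability

    def execute(self, rec, approver=None):
        if approver is None:
            raise PermissionError("critical action requires a human approver")
        self.audit_log.append((approver, rec.action))
        return f"{rec.action}: approved by {approver}"

gate = HumanApprovalGate()
rec = Recommendation("reroute_traffic", 0.97, "congestion pattern detected")

try:
    gate.execute(rec)                        # the AI alone cannot act
except PermissionError as err:
    print(err)

print(gate.execute(rec, approver="dr_lee"))  # a human takes responsibility
```

The audit log is the point: when something goes wrong, there is a human name attached to the decision, which is exactly what a fully autonomous system cannot offer today.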
Things will not stay this way forever. Artificial General Intelligence is just around the corner (or so we've been saying for quite a while), and when it becomes a reality, we'll have robots and machines that can reason, make decisions, explain those decisions, and bear the consequences. Some say it'll take decades. Others say it'll never come.
Until it does though, AI will still have to take a back seat and let the grownups decide.
This article was originally published on Tech Talks.