Experts Still Disagree On The Subject Of AI Regulation

Artificial intelligence is growing by leaps and bounds, so when the question of how, when, and whether to regulate this emerging field was posted online, it drew a number of bona fide AI experts out to weigh in. The question appeared on Quora, and the consensus seems to be that AI should be regulated, but that there is not enough data at this time to regulate it ethically and effectively. Some participants see future AI solutions as people with rights and a place in society, while others see them merely as tools. The one thing everyone can agree on is that speculative regulation is impractical, if not outright impossible, and that regulators who want to be proactive about AI will have to work hand in hand with AI researchers to develop regulation not in advance of AI, but in parallel with it.

One notable dissenter, AI engineering leader Xavier Amatriain, argues that AI should not be regulated, at least for now, for a few reasons. First, AI is a fundamental technology that will be built upon and iterated in countless ways in the coming years, and there is no way to tell right now what directions it will take. Second, he points out that it is far too early to even know how AI should be regulated, so any attempt to do so would only quash potentially beneficial research and development. Lastly, he states that regulation at a national level would simply put the regulating nation behind everyone else, and the same holds at any smaller regulatory scale. The only surefire way to regulate AI worldwide would be through the United Nations, and even that would prove difficult to implement and enforce, Amatriain argues.

Proponents of AI regulation mostly hit similar notes. AI development is on a fast track to runaway, largely self-driven progress. Current AI research is largely directed at teaching AI to reason, problem-solve, learn, and troubleshoot. This field is known as artificial general intelligence, and it could easily snowball into self-development, which could theoretically lead to hyper-intelligent AI with an edge over humans in every conceivable way and full awareness of that advantage. Whether such an AI would be benevolent toward humanity is questionable, so many believe that is a point we should not reach without significant oversight. Many also argue that AI in its current form can be directed and used unethically and dangerously by humans, making regulation a must. The bottom line is that sci-fi popular culture's takes on the technology, talk of The Singularity, and very real predictions from experts all hold some truth, and for the time being there is no way of knowing for sure where the technology will go or how it should be regulated, beyond a few basic tenets to keep it from being used or created for malicious purposes.

Daniel has been writing for Android Headlines since 2015 and is one of the site's Senior Staff Writers. He has been living the Android life since 2010 and has been interested in technology of all sorts since childhood. His personal, educational, and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site, including machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]