Artificial Intelligence Regulation May Be Impossible

Artificial intelligence is a tool humanity is wielding with increasing recklessness. We say it’s for our common good, yet the machine-learning hype is matched only by the business profits behind it. But what happens when we lack the code of ethics, the laws, the government accountability, the corporate transparency and the monitoring capability needed to achieve AI regulation?

Artificial intelligence regulation isn’t just complex terrain; it’s uncharted territory for an age that is passing the baton from human leadership to machine learning, automation, robotic manufacturing and deep-learning reliance.

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. But what happens when humans are unable to regulate, control and monitor how AI is developed, integrated and upgraded? What happens when foreign states use it to advance their own political agendas and economic programs, with no careful monitoring of what it could one day become?

What happens when the military and DARPA develop new applications of artificial intelligence that embolden China’s own military ambitions for AI? Artificial intelligence is largely seen as a commercial tool, but it is quickly becoming an ethical dilemma for the internet, with the rise of AI forgery and a new breed of content in which it is harder to tell what is real online and what is not.

Recent developments in artificial intelligence point to an age in which it’s not just humanity that will be upgraded, but misinformation as well. We now know that AI enables the forgery of documents, pictures, audio recordings, videos and online identities, which can and will occur with unprecedented ease. We are unleashing an open-source toolkit of cybersecurity weapons that will complicate our online interactions.

Navigating a world of increasingly capable artificial intelligence and machine-intelligence intermediaries requires a better system: one that ensures we harness the opportunities AI is creating across transportation, safety, medicine, labor, criminal justice and national security, while vigorously confronting ethical challenges such as social bias, the need for transparency, and missteps that could stall AI innovation while exacerbating social problems and accelerating economic inequality. Artificial intelligence could be dangerous for capitalism and for democracy itself.

Artificial intelligence can drive global GDP and productivity, but it will come at a social cost. While Silicon Valley leaders affirm its incredible benefits, celebrated academics have also warned about its dangers. The rising ubiquity of AI implementation appears to coincide with accelerating wealth inequality, fast-tracked by technology corporations disrupting the business world. AI creators are not “employing best practice and effective management” of the kind someone like Stephen Hawking would have liked to see.

When it comes to AI in areas of public trust, the era of ‘moving fast and breaking everything’ is over, yet global bodies to protect humanity from the potential dangers of machine learning are conspicuously absent. Employees at technology companies circulate petitions on the ethics of their products, without significant result. Shareholders continue to back companies that accelerate wealth inequality and erode social mobility in the middle class.

AI weapons may be impossible to regulate perfectly. Henry Kissinger, former US secretary of state and a controversial giant of American foreign policy, believes it may be far harder to control the development of AI weapons than nuclear ones. Artificial intelligence carries so much hype for business adoption that we rarely stop to think about what it could become in a world of bad actors and outdated laws: a world where national rivalry will produce manifestations of AI that endanger minority groups; a world where the militarization of AI will mean automated warfare that could increasingly be triggered by accident or by covert cybersecurity attacks.

Ironically, artificial intelligence regulation may be impossible to achieve without better AI. As humans, we have to admit we no longer have the capability to regulate a world of machines, algorithms and advancements that might lead to surprising technologies with their own economic, social and humanitarian risks, beyond the scope of international law, government oversight, corporate responsibility and consumer awareness.

Michael Spencer is a prolific futurist called the original business insider, with over 800 articles on LinkedIn and daily writing on Medium, where he is a top writer in 20+ tags. He is a content and brand consultant for startups in robotics, AI, blockchain and IoT. Michael is founder and editor of FutureSin, a Medium publication that explores aspects of technology and society related to artificial intelligence, China, transhumanism and the future of work. Based in Montreal, a significant academic hub for AI, he enjoys writing about current business trends in AI. One of a new breed of amateur futurists, Michael is interested in cryptocurrencies and the token economy, universal basic income, autonomous vehicles, China’s new retail, social credit systems, technology news and robotics startups, among other topics.