As the risks of deep learning’s continued evolution have received greater attention, a growing refrain has focused on how best to prevent AI from being used for harm. From killer robots to pervasive facial recognition, societies are increasingly talking about the need for new legislation and corporate responsibility pledges to halt the spread of harmful AI. Unfortunately, the reality is that deep learning’s ease of use and decentralized development across the world mean it is simply impossible to constrain how it is used. Instead, societies must focus on how to counteract its most harmful applications.

The public, press, pundits and policymakers speak of laws and pledges to halt the harmful use of AI. Yet “AI” is not a monolithic singular algorithm. It refers to a broad class of machine learning techniques that are being developed by researchers all across the world.

Arguing that we must pass legislation banning “harmful AI” is akin to arguing that we must ban “harmful statistics.” Just as we cannot stop the harmful use of mathematics, we cannot stop the misuse of deep learning techniques given that no single company, government or organization controls the use of deep learning or the broader field of mathematics from which it stems.

Legislation targeting specific societally harmful applications of AI is also of limited utility given the dual-use nature of most AI innovations. At first glance an outright ban on facial recognition might seem reasonable, until one realizes that this would also ban face-based biometric phone unlocking.

A major terror attack in which the perpetrator was well-known and captured clearly on surveillance camera but was missed due to a ban on facial recognition would also likely rapidly reverse any such bans. Indeed, many of the European nations that once fiercely condemned US digital surveillance efforts have rushed to adopt those very same measures in the face of increased terrorist threats.

Bans on “killer robots” might similarly seem quite reasonable until one realizes that driverless cars and package delivery drones are merely killer robots in waiting.

AI systems determining judicial outcomes with the power to literally incarcerate or put to death a human being might at first glance seem beyond the pale until one realizes just how biased and capricious today’s human-based judicial system really is and how arbitrary and evidence-free its decisions can be.

AI-powered robotic factory and warehouse workers will displace jobs and cause mass upheaval. At the same time, they will eliminate inhuman working conditions and create new job opportunities.

AI-powered scams, cyberattacks and falsehoods like “deep fakes” will be increasingly difficult to spot. At the same time, AI-powered anti-fraud, cyberdefense and summarization algorithms will help us see past the falsehoods that already deluge our digital world.

In short, AI is not a singular centralized technology that can be regulated or controlled. It is an abstract term for a decentralized field of study being advanced by researchers all across the world. Many countries with advanced AI development communities have very different perspectives on the deployment of AI-powered weaponry, meaning that even if the US and Europe ban broad swaths of AI applications as immoral and unethical, such bans will carry little weight with the rest of the world, which will rapidly roll out those very applications.

Most AI applications are also dual-use, in which any positive application can be repurposed for harm and vice versa, meaning it is not obvious what specific constraints would have meaning even if codified into law.

In the end, we must accept that we cannot stop harmful applications of deep learning and instead must focus our efforts on countering its impacts.

Based in Washington, DC, I founded my first internet startup the year after the Mosaic web browser debuted, while still in eighth grade, and have spent the last 20 years working to reimagine how we use data to understand the world around us at scales and in ways never before imagined. One of Foreign Policy Magazine's Top 100 Global Thinkers of 2013 and a 2015-2016 Google Developer Expert for Google Cloud Platform, I am a Senior Fellow at the George Washington University Center for Cyber & Homeland Security. From 2013-2014 I was the Yahoo! Fellow in Residence of International Values, Communications Technology & the Global Internet at Georgetown University's Edmund A. Walsh School of Foreign Service, where I was also adjunct faculty. From 2014-2015 I was a Council Member of the World Economic Forum's Global Agenda Council on the Future of Government. My work has appeared in the presses of over 100 nations.