News

New report warns of danger artificial intelligence could pose if used maliciously

We’ve all heard about the potential dangers of designing artificial intelligence without a conscience, but we often overlook the threat AI could pose in the hands of a malicious party. According to a group of experts from organizations including OpenAI and the Centre for the Study of Existential Risk, those dangers are not very far off.

A report titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” was published today, laying out ways AI could become a major threat in the coming years. The report says AI could well be used maliciously to carry out attacks on digital, physical, and political security. Within the next five years, the cost of deploying AI will drop considerably, and its power will come within reach of rogue nations, hacking groups, and even terrorists.

One potential threat is the automation of “spear phishing”: sending people fake emails or messages specifically designed to trick them into giving up personal information, such as bank account and Social Security numbers. While such attacks are already carried out by people, an AI could craft a far more believable message, one that looks exactly like an official email from your bank or a close friend.

The threat, however, runs deeper than that. The report warns that AI could also be used to generate fake video and audio, a prospect that raises alarming scenarios with the potential to severely impact our way of life. AI could be used to create fake videos that sway public opinion on a range of issues, or to fabricate evidence that falsely incriminates people. The potential for abuse is vast, and the danger is alarming.

Along with outlining the threats AI could pose, the researchers also recommended measures to mitigate them. AI research is progressing rapidly, and researchers need to acknowledge that their work can be put to destructive use. Policymakers need to understand and learn from these experts in order to implement policies that strengthen our defenses against AI-enabled attacks. Finally, ethical frameworks need to be developed and followed by researchers and businesses alike when pursuing or applying AI research.

The solutions outlined here are easy to state, but implementing them will be a long road. It is easy to discuss and read about AI, but it is an incredibly complex subject with many layers. Debates about AI among lawmakers will only become more prevalent as research continues to progress. While it is unclear when and in what form the threat will emerge, it is important for us to be prepared for the worst.