Technology is developing at an extraordinary pace; sometimes it seems unbelievable. Yet it is a reality, and the most concerning part is that it can be used for malicious purposes. Researchers at Berkeley have shown that hidden commands can be embedded in music recordings or spoken text. Through this kind of technique, which Chinese researchers have called the Dolphin Attack, smart devices can be ordered to visit malicious websites, place phone calls, take pictures, and send text messages. Although the method has its limits, scientists in different parts of the world are working to create more powerful ultrasonic transmission and reception systems.

The curious and sinister side of the Dolphin Attack is that it could be used for espionage and even cyber-terrorism. If this technique were developed by nations that sponsor terrorism, merely embedding these audio commands in a few key cell phones would give them access to confidential information and let them record it almost immediately on their own computers.

So far, Berkeley researchers warn that the greatest beneficiaries of this technology would be advertising companies and marketers such as Amazon, which currently sells TVs and intelligent audio systems. According to these scientists, a voice command imperceptible to the human ear could be inserted through the Amazon Echo speaker to suggest some other product.

Nicholas Carlini, one of the developers of the system, explained that "they wanted to achieve something furtive". According to this Berkeley doctoral student, the technique has not yet been developed outside of laboratories, but it will undoubtedly be a matter of a few months before its massive use begins. "We want to show that it's possible and then wait for other people to say: 'Okay, that's possible; now let's try to fix it'", the researcher said.

Not much has been said about the ethical issues surrounding this kind of creation. It is not even clear what the technology's legal status will be or who will be allowed to use it, since its indiscriminate and unregulated use could have disastrous economic and political consequences. This also proves once again that modern society is vulnerable to psychological manipulation designed to induce certain political positions and predispose people to consume specific products. Nevertheless, Carlini disagrees, warning that "people with bad intentions already use other people to do this".

The saddest thing is that the praiseworthy advances of artificial intelligence also show their negative side. Techniques such as the Dolphin Attack demonstrate that we can be subjected to subterfuge and manipulation by large companies that use artificial intelligence for profit. Facebook and Cambridge Analytica are the freshest examples of what can be done with new technologies.

The danger lies precisely in modern societies' growing reliance on intelligent devices that employ artificial intelligence, and above all in the popularity and proliferation of voice-activated devices. Some specialized magazines warn that smartphones and smart speakers will outnumber people by 2021. Ovum, a research company, predicts that more than half of homes across the United States will have at least one smart speaker.

For this reason, some experts oppose the further development of this technology because, in their view, autonomous vehicles and aircraft could be manipulated so that their algorithms are disrupted by only a few variations in voice commands. To give an idea, an SUV could change direction if this type of ultrasonic command were embedded in nothing more than a telephone call or a melody. According to analysts, the world will be more exposed to cyber-terrorism, and demands for cyber-ransoms in bitcoins or dollars would grow much greater if this knowledge were leaked.

But what is the Dolphin Attack? Researchers explain that it exploits the gap between human speech perception and machine voice recognition. Voice recognition systems generally translate each sound into a letter and eventually join the letters to form words and phrases. By making minor changes to the audio files, the sound that the voice recognition system should hear is canceled and replaced with another that the machine transcribes differently but that is undetectable to the human ear.
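The idea behind such adversarial audio can be illustrated with a deliberately simplified sketch. The "recognizer" below is a toy linear scorer, not a real speech system, and every name in it is invented for illustration; the point is only that a tiny, carefully chosen perturbation along the model's gradient is enough to change what the machine "hears":

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recognizer": a fixed linear scorer over 100 audio samples, choosing
# between two hypothetical transcriptions (class 0 vs. class 1). Real speech
# systems are deep networks, but the attack idea is the same: nudge the
# waveform in the direction that raises the target transcription's score.
W = rng.normal(size=(2, 100))

def transcribe(audio):
    return int(np.argmax(W @ audio))

audio = rng.normal(size=100)        # stand-in for a benign recording
original = transcribe(audio)
target = 1 - original               # the transcription the attacker wants

# For a linear model the minimal perturbation has a closed form: move just
# far enough along the score-difference direction to flip the decision.
g = W[target] - W[original]                 # gradient of (target - original) score
deficit = (W[original] - W[target]) @ audio # how far the target score lags
delta = (deficit + 1e-3) * g / (g @ g)      # smallest step that flips the margin

adversarial = audio + delta
print(transcribe(adversarial) == target)              # transcription flipped
print(np.linalg.norm(delta) / np.linalg.norm(audio))  # small relative change
```

The perturbation is small relative to the signal, which is the analogue of a change the human ear does not notice.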

Besides the discovery at Berkeley, there is the research conducted by Princeton and Zhejiang University in China, which showed that frequencies inaudible to the human ear can be used to trigger voice recognition.
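The inaudible-frequency trick can be sketched in a few lines of numpy. This is a simplified model, not the researchers' code: a voice-band tone stands in for a spoken command, and a squaring term stands in for the nonlinearity of real microphone hardware, which demodulates an ultrasonic carrier back into the audible band:

```python
import numpy as np

fs = 192_000                            # sample rate high enough for ultrasound
t = np.arange(int(fs * 0.01)) / fs      # 10 ms of signal

# Stand-in for a voice command: a 1 kHz tone in the audible band.
command = np.sin(2 * np.pi * 1_000 * t)

# Amplitude-modulate it onto a 30 kHz carrier, above human hearing (~20 kHz),
# so the transmitted signal itself is inaudible.
carrier = np.sin(2 * np.pi * 30_000 * t)
transmitted = (1 + 0.8 * command) * carrier

# A microphone's nonlinearity (modeled here as a small squaring term)
# demodulates the signal, recreating the command inside the audible band.
received = transmitted + 0.1 * transmitted**2

spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
audible = (freqs > 100) & (freqs < 20_000)   # ignore DC and ultrasound
print(freqs[audible][np.argmax(spectrum[audible])])  # strongest audible tone: 1000.0 Hz
```

The strongest component the "microphone" recovers in the audible band is the original 1 kHz command, even though nothing below 20 kHz was ever transmitted.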

Among the companies expected to be most affected are Apple and Amazon. However, these companies have already responded to the implications for their systems, particularly the security ones. Apple explained that the HomePod, its smart speaker, is designed to reject this kind of command. Apple, considered one of the safest computer companies in the world, points out that iPhones and iPads must be unlocked before Siri obeys voice commands. For its part, Google said that security is a continuous effort and that its Assistant has functions to mitigate undetectable audio commands.

Yet Carlini says that with the techniques he and his colleagues are developing, they could mount attacks against any intelligent system available on the market. Perhaps this innovation will serve precisely to reinforce the security and privacy systems of the many devices that use voice commands. But who knows how far, and in what direction, this good intention will go.

So far, Carlini and his Berkeley colleagues have successfully incorporated these commands into Mozilla's DeepSpeech, an open-source speech-recognition platform, and into music files, including a four-second clip of Verdi's "Requiem".