As artificial intelligence programs become aware and autonomous, law enforcement will have to figure out where responsibility lies.

As an adult in a society of laws, if I were to commit a crime and get caught, I would be expected to pay the penalty. At my age, I could not blame my parents for my mistakes.

So, as computer programs become more advanced and true artificial intelligence becomes a reality, who will be held responsible for crimes committed by computer programs?

“We’re going to go after the person who wrote it,” said Trent Teyema, chief of cyber readiness and cyber chief operating officer at the FBI, during Nextgov and Defense One’s Genius Machines 2018 on March 7. “The different malicious code out there—ransomware, destructive ware that’s going out—we’re always going to trace it down to the person responsible.”

At the same time, a program designed to learn and adapt might go beyond what was originally intended. This was seen globally in the WannaCry ransomware attack, which, per most researchers, was never intended to be released or spread the way it did.

“What concerns me is now we’re getting into malicious worms, malicious code that is slightly self-aware,” Teyema said. “If it gets loose in the wild and has a larger order effect, it’s not about how we arrest it but how do we stop it. So, either way, we’d be seizing it.”

Taken one step further: What about code that was never intended to be malicious but has taken on a mind of its own? Put another way: Is Tony Stark responsible for the actions of Ultron, the artificial intelligence he built that later went rogue?

“Yes,” Teyema said, though he added, “It’s actually a new area that we’re having to explore.”

He cited various legal issues that will come into play, including civil and criminal negligence.

“Like WannaCry that went off, and it affected the world very quickly,” he said. “That was an unintended consequence but we still went after the person who wrote it.”