To deal with the volume of videos, Mohan and other executives decided to override YouTube's usual content moderation process, doing away with human moderators entirely. Instead, they used AI software to instantly identify the most violent parts of the footage and block those videos autonomously.

"We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review," said Mohan, adding that a "trade-off" was that some videos unconnected to the shooting got taken down by the system. He also made the decision to disable YouTube's "recent upload" search tool.

Although the AI software made instantaneous decisions on whether to block a video, it could also be tricked. Many of those uploading copies of the footage made edits to get it past YouTube's safeguards — adding watermarks, cutting together clips, or in some cases even animating people in the video.

These edits meant many videos were slipping through YouTube's "hashing" systems, which detect when footage duplicates known content and are often used to enforce copyright.
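YouTube's actual matching technology is proprietary and likely relies on perceptual rather than exact hashing, but the weakness the article describes can be sketched with a toy example: with an exact fingerprint, even a one-byte edit (a crude stand-in for a watermark or re-cut) produces a completely different hash, so the re-upload no longer matches the known copy. All data and names here are illustrative assumptions.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint (SHA-256) of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Toy stand-in for a video file's bytes (hypothetical data).
original = b"frame1|frame2|frame3"
# The "same" footage with a tiny edit appended, like a watermark.
watermarked = original + b"W"

# The tiny edit changes every bit of the fingerprint, so a system
# matching on exact hashes fails to flag the edited re-upload.
print(fingerprint(original) == fingerprint(watermarked))  # False
```

This is why duplicate-detection systems for abuse content tend to use perceptual hashes, which tolerate small edits; the trade-off, as Mohan notes, is that such fuzzy matching is harder to get right.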

"Like any piece of machine learning software, our matching technology continues to get better, but frankly, it's a work in progress," said Mohan.

YouTube declined to say exactly how many videos of the shooting it was able to remove or block, but said it was in the tens of thousands. In a statement released on Twitter on Monday, YouTube added that it has terminated hundreds of accounts "created to promote or glorify the shooter."

"Frankly, I would have liked to get a handle on this earlier," said Mohan. "Every time a tragedy like this happens we learn something new, and in this case it was the unprecedented volume [of videos]."

He added: "This was a tragedy that was almost designed for the purpose of going viral ... We've made progress, but that doesn't mean we don't have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there's more work to be done."