In London, reporters were given a first-hand look at the inner workings of the algorithm, though they were asked not to share the actual methodology. According to the BBC's Dave Lee, the "algorithm draws on characteristics typical of [The Islamic State] and its online activity."

From what we can piece together, the algorithm appears to use image recognition to examine videos and measure how closely they resemble confirmed extremist videos. After thousands of hours of video training, it begins to spot patterns and distinctive characteristics it can apply to videos outside its training dataset. It uses these characteristics to assign each video a probability score.

When it suspects a video of being extremist content, it flags the video for human review. Humans then make the final call on whether to pull the video.
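The score-then-flag step described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration of that workflow, not ASI Data Science's actual code: the threshold value, function names, and the toy scorer are all assumptions.

```python
# Hypothetical sketch of the flag-for-review pipeline described above.
# The threshold and scoring function are illustrative assumptions.

def flag_for_review(videos, score_fn, threshold=0.99):
    """Return the videos whose probability score crosses the threshold."""
    flagged = []
    for video in videos:
        score = score_fn(video)  # probability in [0, 1] from the trained model
        if score >= threshold:
            flagged.append((video, score))  # queued for human moderators
    return flagged

# Toy usage: a dictionary lookup stands in for the image-recognition model.
fake_scores = {"clip_a": 0.20, "clip_b": 0.999, "clip_c": 0.70}
flagged = flag_for_review(fake_scores, fake_scores.get)
print(flagged)  # only clip_b crosses the threshold
```

The key design point is that the algorithm never removes anything itself; it only narrows millions of uploads down to a short queue for human reviewers.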

Similar tools have been met with criticism by advocates of an open internet. Opponents argue that such tools create more work for moderators, since most flagged videos will be false positives, and that legitimate content may end up blocked simply because an algorithm deemed it offensive. Facebook and YouTube have both attempted a similar algorithmic approach, and neither, if we're being honest, has been all that effective.

The company claims this algorithm is different, however. On a site with five million daily uploads, ASI Data Science reports only 250 flagged videos, about 0.005 percent.
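The quoted rate checks out against the figures in the piece; a quick back-of-the-envelope calculation using only the numbers above:

```python
# Arithmetic behind the quoted flag rate (figures from the article).
daily_uploads = 5_000_000  # "five million daily uploads"
flagged = 250              # videos flagged by the algorithm

rate_percent = flagged / daily_uploads * 100
print(f"{rate_percent:.3f}%")  # prints "0.005%"
```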