Mark Zuckerberg’s vision for AI was initially somewhat creepier than what he shared in his epic 6,000-word manifesto about the future of Facebook.

In the post, Zuckerberg briefly touches on how artificial intelligence can be used to detect terrorist propaganda.

“Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization,” he wrote.

That sounds like a straightforward enough application of AI — one that’s in line with what Zuckerberg and other executives have discussed in the past — but it’s different from what the CEO had originally written.

In an earlier version of the missive, which was shared with a number of news outlets in advance of its publication on Facebook, Zuckerberg took the idea further. The “long-term promise of AI,” he wrote, is that it can be used to “identify risks that nobody would have flagged at all, including terrorists planning attacks using private channels.”

Here’s an expanded version of the quote from the Associated Press (emphasis ours).

The long term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all — including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global. It will take many years to develop these systems.

That’s different from what was described in the final version that was shared Thursday, which made no mention of private communication in relation to AI and terrorism. A Facebook spokesperson confirmed that the above quote appeared in an earlier version of the letter but had since been “revised.” Here is the revised section:

Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community … Going forward, there are even more cases where our community should be able to identify risks related to mental health, disease or crime.

Zuckerberg’s suggestion that AI could be used to monitor “private channels” would seem to be at odds with the same letter’s praise of WhatsApp’s end-to-end encryption and its emphasis on protecting user privacy. There’s a distinction between using AI to distinguish propaganda from news coverage and using it to monitor private communication, which, if taken at face value, would run counter to the CEO’s stated desire in the letter to keep Facebook users safe without “compromising privacy.”

Perhaps that’s why it was removed from the letter before it was published. (We’ve contacted Facebook for further comment.) But the fact that Zuckerberg considered it seriously enough that it appeared in a near-final draft shared with media raises questions about how user privacy fits into his long-term vision for Facebook.