The data – pulled using Reddit’s API – is made up of JSON objects, one per comment, with fields for the comment text, score, author, subreddit, position in the comment tree and a range of others.
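Given the dump's size, the practical way to work with it is to stream it line by line rather than load it whole. Here's a minimal sketch, assuming a newline-delimited JSON file and Reddit-API-style field names (`body`, `score`, `author`, `subreddit`) – the exact schema may differ between versions of the archive:

```python
import json

def iter_comments(lines):
    """Yield one parsed comment per line of a newline-delimited
    JSON dump, so a 1TB+ file never has to fit in memory."""
    for line in lines:
        yield json.loads(line)

# A tiny in-memory stand-in for the real dump file.
sample = [
    '{"body": "Nice dataset!", "score": 42,'
    ' "author": "someone", "subreddit": "datasets"}',
]

for comment in iter_comments(sample):
    print(comment["subreddit"], comment["score"])  # datasets 42
```

With a real file you would pass `open(path, encoding="utf-8")` (or a decompressing wrapper) in place of the sample list.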

The uncompressed dataset weighs in at over 1TB, meaning it’ll be most useful for major research projects with enough resources to really wrangle it.

Technically, the archive is incomplete, but not significantly. After 14 months of work and many API calls, Baumgartner found that approximately 350,000 comments were unavailable – in most cases because the comment sits in a private subreddit or had simply been removed.

Something wicked this way runs

There are plenty of things you could do with that much information – natural language processing, trend prediction, comment score analysis – but one option is particularly perturbing.

With that much data on human interactions, the Reddit dataset could serve as the corpus for an AI project considering conversational modeling (predicting what will come next in dialogues).
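Turning the dump into training data for that kind of model mostly means joining comments to their parents to get (context, reply) pairs. A hypothetical sketch, again assuming Reddit-API-style fields (`id`, `parent_id`, `body`, with a `t1_` prefix on `parent_id` marking a reply to another comment):

```python
import json

# Two fabricated comments standing in for lines of the real dump.
sample = [
    '{"id": "aaa", "parent_id": "t3_post1", "body": "First!"}',
    '{"id": "bbb", "parent_id": "t1_aaa", "body": "Congrats, I guess."}',
]

comments = [json.loads(line) for line in sample]
by_id = {c["id"]: c for c in comments}

# Pair each reply with the comment it responds to.
pairs = []
for c in comments:
    kind, _, parent = c["parent_id"].partition("_")
    if kind == "t1" and parent in by_id:  # reply to another comment
        pairs.append((by_id[parent]["body"], c["body"]))

print(pairs)  # [('First!', 'Congrats, I guess.')]
```

At full scale you'd shard the id index rather than hold it in one dict, but the join itself is this simple.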

That’s key to understanding natural language and further developing machine intelligence. It’s what Google researchers touched upon recently in their paper about a chatbot that seems to hate children.

Now imagine an AI fed with nearly 1.65 billion interactions between Reddit users – RedditorBot, a technological tick clinging to the Web, bloated with the site’s fascinations, perversions, prejudices and outright arsehole tendencies.

It now occurs to me that Skynet won’t eradicate us out of a desire to remove the illogic of humanity. It’ll just be a supremely pissed-off, all-powerful artificial Redditor with a grudge.