I’ve always found the tendency of Facebook users to over-share a little strange. You see people exposing their lives in ways that are occasionally charming, often inexplicable, and frequently downright ridiculous or ill-advised (often both at once).

In the latter category are the posts of people who are obviously in advanced states of inebriation, doing things that don’t require a caption to reveal that they are being idiots. These kinds of posts are the sort of thing that, once sober, will be regretted and will never, ever disappear, becoming fodder for the poster’s mother’s disapproval and unwanted attention from employers both current and future.

In what ostensibly is a very user-friendly move, Wired reports that Facebook plans to apply the latest in AI to help you help yourself not look like a jackass.

The idea is to use artificial intelligence software to detect the signs of inebriated behavior in images and warn users, before they post, that it might not be a good idea. Will users like this “feature”? It’s hard to say, because along with the possibly unwanted, nannyish advice will come the knowledge that Facebook will obviously know, in detail, that you’ve been hitting the bottle and how often.

But forget whether your medical and car insurance providers would like this information; the insight that Facebook could gain from AIs monitoring and mining your life would be enormous and biblically invasive. How long until Facebook’s AIs start warning about other aspects of postings, such as bad spelling, ugly expressions, poor brand choices, bad haircuts, and excessive décolletage?

Think that's creepy? Consider all of the more “interesting” information that could be deduced by AI techniques. Once found, aggregated, and analyzed, it’s pretty clear Facebook would be able to determine not just who your friends are and how close you are to them (and they to you), but also how often you hang out, what kinds of risky behaviors you indulge in with which friends, the impact on your health and how that changes over time … the list of insights would be incredible.

What could Facebook do with this information if it doesn’t sell it in some form to third parties? Remember the brouhaha over Facebook altering what selected users saw in their timelines as a way of manipulating those users’ emotions? Augment that with adaptive AIs that autonomously refine their ability to manipulate user perception and behavior, and you have an ad delivery system that would be insidious and, potentially, incredibly effective.

Of course, the insights could also be used by the unscrupulous to blackmail users by discovering, for example, dalliances they shouldn’t be having and behaviors that are dubious or illegal. Indeed, unscrupulous is just what the AIs would be, and they could act in just this way without ever being programmed to do so if the criteria for success have no limitations.

So, imagine the AIs doing their thing, looking for patterns, and testing engagement strategies. Without knowing, as a human would, that they’re detecting people having affairs, they create a category that reflects just that, then test advertising strategies against it and wind up pitching things like detective services and spy gear to the cheaters’ “official” partners. When these ads start to get traction, the AIs, without actually understanding the correlation, will rate the strategy as highly successful and therefore keep refining it.
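That test-and-refine loop is essentially what’s known as a multi-armed bandit. Here’s a minimal illustrative sketch in Python — the strategy names and click rates are invented for the example — showing how such a system can converge on the most lucrative targeting without any notion of what the category actually means:

```python
import random

random.seed(42)  # deterministic demo

def pick_strategy(stats, epsilon=0.1):
    """Mostly exploit the best-performing strategy; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Exploit: pick the highest observed click-through rate so far.
    return max(stats, key=lambda s: stats[s]["clicks"] / max(stats[s]["shown"], 1))

def run(feedback, strategies, rounds=10_000):
    """feedback(strategy) -> True if the ad got a click."""
    stats = {s: {"shown": 0, "clicks": 0} for s in strategies}
    for _ in range(rounds):
        s = pick_strategy(stats)
        stats[s]["shown"] += 1
        stats[s]["clicks"] += feedback(s)
    return stats

# Hypothetical click rates: suppose ads aimed at suspicious partners happen
# to get more clicks. The loop homes in on them from the reward signal alone,
# never "knowing" why they work.
rates = {"detective_services": 0.08, "spy_gear": 0.06, "vacation_deals": 0.02}
stats = run(lambda s: random.random() < rates[s], rates)
best = max(stats, key=lambda s: stats[s]["clicks"] / max(stats[s]["shown"], 1))
print(best, stats[best])
```

The unsettling part is visible right in the code: nothing in the loop represents “affair” or “suspicion” — only shown/click counters — yet the system still ends up optimizing around that human reality.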

Your guess is as good as mine as to where such emergent behavior would go over the long term, but it’s not impossible that many types of bad and/or illegal behavior would be detected and leveraged into campaigns, not only to make people buy stuff but to affect the kinds of choices people make, the behaviors they indulge in, their political outlook … the potential is enormous.

And it’s not like Facebook will be alone in this. Google, Microsoft, Amazon, eBay … every big player has not only an interest in such programs but also the resources to do something about it. I find it amusing that Facebook’s AI division is called FAIR, because that’s one thing its AI is unlikely to be in the long term.