Corporations and profit from gamified hate

Charlie Stross is a big-picture thinker who connects technology, economics, and history, and the connections he draws provoke thought. In his keynote speech at the 34th Chaos Communication Congress, Stross characterizes corporations as machines, as “very old, very slow AIs”, and as “our current, actually-existing AI overlords”.

[screen grab from video]

While early corporations needed human employees for their internal operation, “they are automating their business processes increasingly rapidly this century.” Humans are functionally interchangeable, much as our cells are for us. Our legal environment is now corporation-oriented. Corporations need to make profit the way we need to breathe.

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs.

We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

Stross claims that the decision to pay for the public internet through advertising was a serious design mistake that has “damaged democratic political processes” and “crippled our ability to truly understand the world around us”. He provides chilling current examples and developing trends.

… the 21st century is throwing up dangerous new technologies—just as our existing strategies for regulating very slow AIs [corporations] have broken down.

… imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app (based on your Grindr profile) and is getting their fellow travellers to queer bash gay men only when they're alone or out-numbered 10:1. (That's the special horror of precise geolocation.)

Someone out there is working on it: a geolocation-aware social media scraping deep learning application, that uses a gamified, competitive interface to reward its "players" for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to get the users accustomed to working in teams and obeying the app's instructions—think Ingress or Pokemon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence—players who have been primed to think of their targets as vermin, by a steady drip-feed of micro-targeted dehumanizing propaganda delivered over a period of months.

And the worst bit of this picture?

Is that the app developer isn't a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, …; it's just a paperclip maximizer [profit-making entity] doing what it does—and you are the paper.

His framing of corporations as an old form of artificial intelligence opened up new vistas. His descriptions of the ways in which an ad-sponsored internet distorts human culture hit home. I can see how organized hate and spreading violence are profit-driven: violence and hate grab eyeballs, and there's a market for them. This means that any ad-supported internet would be inherently destructive to organized society.

A few other thoughts from this sci-fi writer to rub together:

"New technologies always come with an implicit political agenda that seeks to extend its use",

"mechanisms for keeping them in check, ... they don't work well against AIs that deploy the dark arts—especially corruption and bribery—and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the Law to keep up with".

Replies to This Discussion

Corporations are the modern feudal lords, but things have changed. I'm OK with the recent progress in AI, and I understand making the Sophia robot a citizen. What I do not understand is how all of us are now the product, when before they tried to sell us a product. I would claim it is ethically wrong, and algorithms falsely label everything. This could end up being a disaster. Those running the show realize that profit comes first. That's what it's all about.

Unfortunately, ethics are irrelevant to corporations and AIs in pursuit of their primary goals (and, as Stross points out, of their instrumental goals that are essential for survival and operation—like breathing for people and making money for corporations). Our discussion "Terminator movies too comforting" is worth revisiting!

Two bits from there:

...there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…

A recurring problem in much of the literature on “machine ethics” ... is that researchers and commenters often appear to be asking the question “How will this solution work?” rather than “How will this solution fail?”