Some commentators wrote off her concerns as the periodic anxiety of an older generation seeing technology changing the world of their children, comparing it to earlier concerns about books or television. But I don’t see this as Yet Another Moral Panic about changing tech or norms. I see it as an early AI conflict, one that individuals have lost to embryonic corporate AI.

We have greatly advanced algorithms for claiming and retaining human attention, prominently including bulk attacks on shared Commons such as quiet spaces, spare time, and empty mailboxes. This predates the net, but as in many areas, automation has conclusively outpaced our capacity to react. There’s not even an arms race today: on the one hand, we have a few attention-preserving tools, productive norms that increasingly read like firewall instructions, and a few dated regulations in some countries. On the other hand, we have a trillion dollars invested in persuasion, segmentation, attention, and engagement: a growing portion of our economy, our dinner conversations, and our self-image as a civilization.

Persuasion is much more than advertising. The libraries of mind hacks and distractions we have developed are prominent in every networked app and social tool, including simple things like adding a gloss of guilt and performative angst to increase engagement, as when Snap or Duolingo add publicly visible streaks to keep up daily participation.
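Part of why the streak mechanic spread everywhere is how little machinery it needs. A minimal sketch (hypothetical, not any app's actual code) of a consecutive-day counter whose visible number creates the guilt of losing it:

```python
from datetime import date, timedelta

class Streak:
    """Hypothetical streak mechanic: a consecutive-day counter that
    resets on any missed day. Not Snap's or Duolingo's actual code."""

    def __init__(self):
        self.count = 0
        self.last_day = None

    def record_activity(self, today: date) -> str:
        if self.last_day == today:
            pass  # already counted today; streak unchanged
        elif self.last_day == today - timedelta(days=1):
            self.count += 1  # consecutive day: streak grows
        else:
            self.count = 1  # first day, or a miss: streak resets
        self.last_day = today
        # The "gloss of guilt": surface the number so losing it stings.
        return f"{self.count}-day streak!"
```

The design choice doing the persuasive work is the reset: progress that took weeks to build evaporates after one missed day, so the cheapest way to protect it is to open the app again tomorrow.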

We know people can saturate their capacity to track goals and urgencies. We know minds are exploitable and hard sells are possible, and that, coming from a carny, a casino, or a car salesman, such exploits are unethical and bad for you. Yet when the exploit happens at a scale of billions, one new step each week, with a cloak of respectability, we haven’t figured out how to think about it. Indeed most growth hackers and experience designers, at companies whose immersive interfaces absorb centuries of spare time each day, would firmly deny that they are squeezing profit out of the valuable time, focus, and energy of users, even as they might agree that in aggregate, the set of all available interfaces is doing just that.

Twenge suggests people are becoming unhappier the more their attention is hacked: that seems right, up to a point. But go far enough past that point and people will get used to anything, and create new norms around it. If we lose meaningful measures of social wellbeing, then new ones may be designed for us, honoring current trends as the best of all possible worlds: a time-worn solution of cults, castes, frontiers, and empires. Yet letting the few, and the hawkers of the new, set norms for all doesn’t always work out well.

Let us do better.
+ Recognize exploits and failure modes of reason, habit, and thought. Treat these as important to healthy life, not simply a prize for whoever can claim them.
+ Measure maluses like addiction, and negative attractors like monopoly, self-dealing, and information asymmetry.
+ Measure things like learning speed, adaptability, self-sufficiency, teamwork, and contentment over time.
+ Reflect on system properties that seem to influence one or the other.
And build norms around all of this, countering the idea that “whatever norms we have are organic, so they must be good for us”.