When David Rowe put a smart meter in his home, it wasn’t so that he could spy on Amy, his teenage daughter. But that’s what happened anyway.

800 km from home, with the same idle curiosity that has me popping open my Twitter feed when there’s a lull in activity, he decided to check on his Fluksometer. Noticing suspiciously high power usage, he did a little investigating and busted up her unauthorized New Year’s party.

It’s a cute story, but it illustrates a crucial point: surveillance culture is leaky. Primary measurements beget chains of reasoning and implication. Second- and third-order conclusions can be drawn by clever observers, and unintended consequences are the order of the day. That’s how we end up with stories of Target outing pregnant teens to their parents through the ultra-empathetic medium of coupons.
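To make the leakiness concrete, here’s a minimal sketch of the kind of second-order inference a meter enables: raw watt readings in, behavioural conclusions out. The numbers and the threshold are invented for illustration; this isn’t the Fluksometer’s actual output or API.

```python
from statistics import mean, stdev

# Hypothetical half-hourly power readings (watts) from a home meter.
# All numbers here are invented for illustration.
baseline = [210, 195, 220, 205, 215, 198, 202, 190, 225, 208]
tonight  = [205, 212, 980, 1040, 1150, 990, 1100, 1020, 970, 230]

mu, sigma = mean(baseline), stdev(baseline)

# First-order data: watt readings. Second-order inference: flag any
# reading far above the household's normal variation.
suspicious = [w for w in tonight if w > mu + 3 * sigma]

if suspicious:
    print(f"{len(suspicious)} readings far above baseline — "
          "somebody's home, and they're running a lot of gear.")
```

The primary measurement never says “party”; the chain of implication does.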

So far, I’ve been telling you the story about this kind of surveillance that the companies who market these services want you to hear. The stories may be strange and creepy, but they are uncannily accurate.

From here, we get the fantasy that with the right mixture of surveillance and analysis, you too can make terrifyingly accurate predictions about your customers’/children’s/terrorism suspects’ behaviour. It falls to you only to determine how to act on that information in an appropriate manner. And, surely, you know best how to do that.

The truth is illustrated by an infographic halfway through Wired’s scathing overview of Klout. It shows that Klout ranks Robert Scoble as more influential than RZA, Sarah Palin, and Craig Venter. (You can learn a lot about the blinkered nature of Klout by the fact that their official account proudly linked to the piece.)

Any sane marketing organization would look at these results and conclude that Klout’s metrics are utterly flawed. Instead, we learn that some companies are offering perks to people with high Klout scores in the hopes that they’ll spread the word about their VIP treatment. In turn, we learn about people (including the reporter) who find themselves altering their behaviour in the hopes of finding favour with this blind, demented judge.

What’s particularly insidious about Klout is that it’s an opt-out service. You get a Klout score unless you take the time to tell them to fuck off. In this way, it’s of a lineage with Girls Around Me, the app that scraped Foursquare check-ins and Facebook profiles to build up a stalker’s toolkit, and Please Rob Me, which listed empty, burglable houses based on Twitter geo-data.

I’m coming around to Eben Moglen’s view that social networking, as currently designed, is an ecological disaster for the social environment. This isn’t, like, a new insight or anything. We are the product and all that. But sometimes it takes a turn of phrase to drive a point home. Here’s the line that tipped me over the edge: “Every time you tag anything or respond to anything or link to anything, you’re informing on your friends.”

This is a situation that’s profoundly broken. It’s basically an open secret that it’s broken ethically, but it’s also broken empirically. To understand how broken, consider Alexis Madrigal’s attempt to work out how much user data is worth. The answer he comes up with spans roughly five orders of magnitude. Half-a-penny or $1,200. You know. Depending.

All social-networking systems, as currently designed, demonstrably create social awkwardnesses that did not, and could not, exist before. All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options – and those static! A wiser response to them would be to recognize that, in the words of the old movie, “the only way to win is not to play.”

Arnall describes the video like this: “How do robots see the world? How do they gather meaning from our streets, cities, media and from us?”

“Robot Readable World” is a useful shorthand but using the video to ask “how do robots see the world?” is exactly wrong. The images in the video, compelling though they are, don’t depict robots seeing the world any more than the Terminator HUD depicts a realistic view of how a well-designed T-1000 would see the world.

Arnall’s video is actually a depiction of the debug output of machine vision, processed and formatted to be human-readable. It looks the way it does because programmers threw together a visualization to help them understand why the machines weren’t seeing what they were supposed to be seeing, or to confirm that they were seeing what they were supposed to be seeing when everything seemed to work. It’s an attempt to peer into the mind of an algorithm. Its aesthetic core comes from the same place as scrolling lines of program output in a VT-100 terminal or the bright orange of safety vests.

It’s the aesthetic of engineers and function. It’s the aesthetic of hacked-together monitors, using whatever crude rendering was available. It’s the same aesthetic as any debugger, and no more reflective of robot perception than the list of diagnostic printlns we once used to trace crashes in a script is reflective of the game itself.

Got here.
Got here.
Got here.
Got here.
Fatal exception.
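As a sketch of how cheaply such a visualization gets thrown together, here’s a toy version: invented detection boxes dumped onto a character grid, roughly what an engineer hacks up to see what the algorithm thinks it sees. Nothing here comes from a real vision pipeline; the boxes are hypothetical stand-ins for a detector’s output.

```python
# Toy machine-vision debug overlay: detections drawn as crude boxes on a
# character grid. A real pipeline would emit (x, y, w, h) boxes from
# something like a face detector; these are made up.
WIDTH, HEIGHT = 32, 10
detections = [(3, 2, 8, 4), (18, 5, 9, 3)]  # hypothetical (x, y, w, h)

grid = [[" "] * WIDTH for _ in range(HEIGHT)]
for x, y, w, h in detections:
    for cx in range(x, x + w):          # top and bottom edges
        grid[y][cx] = grid[y + h - 1][cx] = "#"
    for cy in range(y, y + h):          # left and right edges
        grid[cy][x] = grid[cy][x + w - 1] = "#"

print("\n".join("".join(row) for row in grid))
```

Ugly, disposable, and purely for the programmer’s benefit: the debugger’s-eye view, not the robot’s.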

As Matt Frost remarked, “we can expect to read sentences like ‘the motive of the algorithm is still unclear’ a lot in the coming years.”

Here’s some New Aesthetic. Consider HP’s much maligned webcam software, helplessly trying to find the person in the frame. You can almost hear the algorithms scream in anguish as they try to make sense of the cacophonous firehose of data, bad lighting, and unanticipated skin tone. There are no overlaid facial recognition squares, just the mute stubborn refusal to recognize Desi.

Ah, but now I’ve made the mistake that Bruce Sterling cautions about. I’ve given the robot a personality. I’ve tried to make it a friend.

We’re not going to be able to gloss over this gaping vacuity by “making the machines our friends.” Because they’re not our friends. Machines are never our friends, even if they’re intimates in our purses and pockets eighteen hours a day. They may very well be our algorithmic investors, but they’re certainly not our art critics, because at that, they suck even worse than they do at running our economy.

It seems to me that this mistake is unavoidable. It may even be at the core of the New Aesthetic, this multiplication of entities and agents. In James Bridle’s post-SxSW roundup, he seems to say as much.

One of the core themes of the New Aesthetic has been our collaboration with technology, whether that’s bots, digital cameras or satellites (and whether that collaboration is conscious or unconscious).

It’s a profoundly human action, to multiply entities. Perhaps it comes from the same root as pareidolia. We see faces in the clouds, we see personality in pets, we see collaboration in algorithms. Perhaps it’s all Pixar’s fault.

We are living inside a Cambrian explosion of entities of varying independence and varying physicality, some quite compact and individual, others smeared across great expanses of space and time. Some tied very much to a medium, others extruding parts of themselves into the biosphere, the noosphere, the memesphere, the digisphere.

I keep thinking about Aujik’s next-nature Shintoist animism. They divide nature into the refined (robotics, artificial intelligence, nanotechnology, augmented reality, body enhancements) and the primitive (plants, soil, organisms, stones). Plenty of room in their cosmology for all sorts of new entities and hybrids.

Why stop at the primitive and the refined? There’s another class of entities to whom we have already granted personhood. I’m speaking, of course, about corporations: immortal entities of terrifying, inhuman thinking, capable of entering into contracts and incurring debts, and owed a subset of the rights we accord to human persons.

I’m interested in the aesthetics of the corporate readable world, and their truly alien gaze.

Corporations communicate to us through money, press releases, and advertising, always advertising. For a glimpse of the corporate readable world, look to Twitter’s routinely useless “who to follow” panel, Klout’s laughable ideas about what you are influential about, Facebook’s clumsy attempts to get you to join a dating site, and Google’s demented, personalized Gmail ads. You can see it in your credit rating, and your position on the actuarial tables. You can see it in Blackwater/Xe/Academi’s attempt to conceal itself by shedding names like a trickster god shedding skins.

These aren’t as visually appealing as most of the examples that show up on Bridle’s Tumblr, but they’re an aesthetic nonetheless.

At its core is the argument that the forces that attempt to regulate computers for the purposes of protecting the current copyright regime are a precursor to a wider battle about general purpose computing. Eventually, he argues, everything will have computation inside it. And the same logic that led copyright holders to embed spyware on compact discs will lead regulators to demand that engineers let them limit the capabilities of, e.g., self-driving cars.

But there’s a problem. We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

The same forces that make copyright untenable make surveillance inevitable. Computers are copying machines. They make copies of everything, including every action that you take within their field of sensation.

Historically, that’s meant the things that happen online, with the main avenue of input being keystrokes. But as we wire up the rest of the planet with cameras, accelerometers, potentiometers, microphones, thermal sensors, pressure plates, and switches, the computer and corporate gaze will reach everything, everywhere, always.
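A toy sketch of that point: even the most minimal event handler holds a copy of its input, and retaining that copy forever is one line of code. The events below are invented stand-ins for keystrokes or sensor readings.

```python
# Toy event loop: to process an input at all, a computer must copy it.
# Keeping the copy is the cheap part.
log = []

def handle(event):
    log.append(event)     # surveillance is this cheap: one append
    return event.upper()  # the "real" work the user asked for

for key in ["h", "i", " ", "m", "o", "m"]:
    handle(key)

print("".join(log))  # the complete record, retained as a side effect
```

Copying isn’t a feature bolted onto computation; it’s what computation is made of, which is why the surveillance falls out for free.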