Tuesday, February 23, 2010

Ben: I guess the question is, now the tech is out there, how long does it remain "opt-in"? Let's suppose it was, say, Facebook that bought this technology and incorporated it into their iPhone / Android client ... i.e. see which people around you have FB accounts.

Then, one day, they find that not enough people have opted in, so they decide to make the system opt-out instead.

I use it for work, and will start putting social projects up there. But you have to be careful about making too many inferences (e.g. about code quality): sometimes I check in code written by colleagues, and they check in my code ...

On the broader point, I'm more convinced than ever we're heading into netocracy. And one of the symptoms is when people start making inferences from the social graph itself, as opposed to just navigating the social graph and then analyzing what they find. I.e. when people start saying "X is probably a good potential hire *because* he's connected to these people and worked on those projects" rather than "I found X via these people, now let's see if he's a good hire".

What I really think is that there's little chance for society as a whole to retreat from netocracy.

And when we arrive, there'll be no way for individuals to opt out. We could try to avoid making our social network explicit and publishing it, but it will become increasingly difficult to function in society without doing so.

In a real netocracy, not being in a social network will be as debilitating as not having any money under capitalism.

What is much more likely, though, is that smart operators will work through secret or more private social networks (much as smart operators under capitalism have Swiss bank accounts or use otherwise dark payment schemes).

However, such secretive individuals and organizations will need to break cover and become partly visible in order to operate at all in a netocracy (witness this story about a criminal gang who court certain kinds of visibility and connections).