
In a few days' time, it will be exactly thirty-two years – a bit more than a billion seconds – since I learned to code. I was lucky enough to attend a high school with its own DEC PDP 11/45, and lucky that it chose to offer computer science courses on a few VT-52 video terminals and a DECWriter attached to it. My first OS was RSTS/E, and my first programming language was – of course – BASIC.

A hundred million seconds before this, a friend dragged me over to a data center his dad managed, sat me down at a DECWriter, typed ‘startrek’ at the prompt, and it was all over. The damage had been done. From that day, all I’ve ever wanted to do is play with computers.

I’ve pretty much been able to keep to that.

Oddly, the only time I didn’t play with computers was at MIT. After MIT, when I began work as a software engineer, I got to play and get paid for it. I’ve written code for every major microprocessor family (with the exception of the 6502), all the common microcontrollers, and every OS from CP/M to Android. I’ve even written a batch-executed RPG II program, typed up on punched cards and run on an IBM 370 mainframe.

(Shudder, shudder.)

At Christmas 1990, I sat down and read a novel published a few years before, by an up-and-coming science fiction writer. That novel – Neuromancer – changed my life. It gave me a vision that I would pursue for an entire decade: a three-dimensional, immersive, visualized Internet. Cyberspace. I dropped everything, moved myself to San Francisco – epicenter of all work in virtual reality – and founded a startup to design and market an inexpensive immersive videogaming console. It was hard work, frequently painful, and I managed to pour my life savings into the company before it went belly up. But I can’t say that any of the other VR companies fared any better. A few of them still exist, shadows of their former selves, selling specialty products into the industrial market.

These companies failed because each of them – my own among them – coveted the whole prize. With the eyes of a megalomaniac, each firm was going to ‘rule the world’. Each did lots of inventing, holding onto every scrap of invention with IP agreements and copyrights and all sorts of patents. I invented a technology very similar to that seen in the Wiimote, fourteen years before the Wiimote was introduced. It’s all patented. I don’t own it. After my company collapsed, the patent went through a series of other owners, until eventually I found myself in a lawyer’s office, being deposed, because my patent – the one I didn’t actually own – was involved in a dispute over priority, theft of intellectual property, and other violations.

Lovely.

With the VR industry in ruins, I set about creating my own networked VR protocol, using a parser donated by my friend Tony Parisi, building upon work from a coder over in Switzerland, a bloke by the name of Tim Berners-Lee, who’d published reams and reams of (gulp) Objective-C code, preprocessed into ANSI C, implementing his new Hypertext Transfer Protocol. I took his code, folded it into my own, and rapidly created a browser for three-dimensional scenes attached to Berners-Lee’s new-fangled World Wide Web.

This happened seventeen years ago this week. Half a billion seconds ago.

When I’d gotten my 3D browser up and running, I was faced with a choice: I could try to hold it tight, screaming ‘Mine! Mine! Mine!’ and struggle for attention, or I could promiscuously share my code with the world. Being the attention-seeking type that I am, the choice was easy. After Dave Raggett – the father of HTML – had christened my work ‘VRML’, I published the source code. A community began to form around the project. With some help from an eighteen-year-old sysadmin at WIRED named Brian Behlendorf, I brought Silicon Graphics to the table, got them to open their own code, and we had a real specification to present at the 2nd International Conference on the World Wide Web. VRML was off and running, precisely because it was open to all, free to all, available to all.

It took about a billion seconds of living before I grokked the value of open source, the penny-drop moment I realized that a resource shared is a resource squared. I owe everything that came afterward – my careers as educator, author, and yes, panelist on The New Inventors – to that one insight. Ever since then, I’ve tried to give away nearly all of my work: ideas, articles, blog posts, audio and video recordings of my talks, slide decks, and, of course, lots of source code. The more I give away, the richer I become – not just or even necessarily financially. There are more metrics to wealth than cash in your bank account, and more ways than one to be rich. Just as there is more than one way to be good, and – oh yeah – more than one way to be evil.

Which brings us to my second penny-drop moment, which came after I’d been programming computers for almost a billion seconds…

I: ZOMFG 574LLm4N W45 r19H7!

Sometimes, the evil we do, we do to ourselves. For about half a billion seconds between the ages of nineteen and thirty-nine, I smoked tobacco, until I realized that anyone who smokes past the age of forty is either a fool or very poorly informed. So I quit. It took five years and many, many, many boxes of nicotine chewing gum, but I’m clean.

A few years ago, Harvard researcher Dr. Nicholas Christakis published some interesting insights on how the behavior of smoking spreads. It’s not the advertising – that’s mostly banned, these days – it’s that we take cues from our peers. If our friends start smoking, we ourselves are more likely to start smoking. There’s a communicative, almost epidemiological relationship at work here. The behavior is transmitted by mimesis – imitation. We’re the imitating primates, so good at imitating one another that we can master language and math and xkcd. When we see our friends smoking, we want to smoke. We want to fit in. We want to be cool. That’s what it feels like inside our minds, but really, we just want to imitate. We see something, and we want to do it. This explains Jackass.

Mimesis is not restricted to smoking. Christakis also studied obesity, and found that it showed the same ‘network’ effects. If you are surrounded by obese people, chances are greater that you will be obese. If your peers start slimming, chances are that you will join them in dieting. The boundaries of mimesis are broad: we can teach soldiers to kill by immersing them in an environment where everyone learns to kill; we can teach children to read by immersing them in an environment where everyone learns to read; we can stuff our faces with Maccas and watch approvingly as our friends do the same. We have learned to use mimesis to our advantage, but equally it makes us its slaves.

Recent research has shown something disturbing: divorce spreads via mimesis. If you divorce, it’s more likely that your friends will also split up. Conversely, if your friends separate, it’s more likely that your marriage will dissolve. Again, this makes sense – you’re observing the behavior of your peers and imitating it, but here it touches the heart, the core of our being.

Booting up into Homo sapiens sapiens meant the acquisition of a facility for mimesis as broadly flexible as the one we have for language. These may even be two views into the same cognitive process. We can imitate nearly anything, but what we choose to imitate is determined by our network of peers, that set of relationships which we now know as our ‘social graph’.

This is why one needs to choose one’s friends carefully. They are not just friends, they are epidemiological vectors. When they sneeze, you will catch a cold. They are puppet masters, pulling your strings, even if they are blissfully unaware of the power they have over you – or the power that you have over them.

All of this is interesting, but little of it has the shock of the new. Our mothers told us to exercise caution when selecting our friends. We all know people who got in with the ‘wrong crowd’, only to see their lives ruined as a consequence. This is common knowledge, and common sense.

But things are different today. Not because the rules have changed – those seem to be eternal – but because we have extended ourselves so suddenly and so completely. Our very new digital ‘social networks’ recapitulate the ones between our ears, in one essential aspect – they become channels for communication, channels through which the messages of mimesis can spread. Viral videos – and ‘viral’ behavior in general – are good examples of this.

Digital social networks are instantaneous, ubiquitous and can be vastly larger than the hundred-and-fifty-or-so limit imposed on our endogenous social networks, the functional bandwidth of the human neocortex. Just as computers can execute algorithms tens of millions of times faster than we can, digital social networks can inflate to elephantine proportions, connecting us to thousands of others.

Most of us keep our social graphs much smaller; the average number of friends on any given user account on Facebook is around 35. That’s small enough that it resembles your endogenous social network, so the same qualities of mimesis come into play. When your connections start talking about a movie or a song or a television series, you’re more likely to become interested in it.

If this is all happening on Facebook – which it normally is – there is another member of your social graph, there whether you like it or not: Facebook itself. You choose to build your social graph by connecting to others within Facebook, store your social graph on Facebook’s servers, and communicate within Facebook’s environment. All of this has been neatly captured, providing an opening for Facebook to do what they will with your social graph.

You have friended Mark Zuckerberg, telling him everything about yourself that you have ever told to any of your friends. More, actually, because an analysis of your social graph reveals much about you that you might not want to ever reveal to anyone else: your sexual preference and fetishes, your social class, your income level – everything that you might choose to hide is entirely revealed because you need to reveal it in order to make Facebook work. Because you do not own it. Because you do not have access to the source code, or the databases. Because it is closed.

Your social graph is the most important thing you have that can be represented in bits. With it, I can manipulate you. I can change your tastes, your attitudes, even your politics. We now know this is possible – and probably even easy. But to do this, I need your social graph. I need you to surrender it to me before I can use it to fuck you over.

We didn’t understand any of this a quarter billion seconds ago, when Friendster went live. Now we have a very good idea of the potency of the social graph, but we find ourselves almost pathetically addicted to the amplified power of communication provided by Facebook. We want to quit it, but we just don’t know how. Just as with tobacco, going cold turkey won’t be easy.

On 28 May 2010, I killed my Facebook profile and signed off once and for all. There is a cost – I’m missing a lot of the information which exists solely within the walled boundaries of Facebook – but I also breathe a bit easier knowing that I am not quite the puppet I was. When someone asks why I quit – an explanation which has taken me over a thousand words this morning – they normally just close down the conversation with, “My grandmother is on Facebook. I have to be there.”

That may be our epitaph.

We are so fucked. We ended up here because we surrendered our most vital personal details to a closed-source system. We should have known better.

And that’s only the half of it.

So much has happened in the last eight weeks that we’ve almost forgotten that before all of this disaster and tragedy afflicted Queensland, we were obsessed with another sort of disaster, rolling out in slow-motion, like a car smash from inside the car. On 29 November 2010, Wikileaks, in conjunction with several well-respected newspapers, began to release the first few of a quarter million cables, written by US State Department officials throughout the world. The US Government did its best to laugh these off as inconsequential, but one has already led more-or-less directly to a revolution in Tunisia. We also know that Hillary Clinton has requested credit card numbers and DNA samples for all of the UN ambassadors in New York City, presumably so she can raise up a clone army of diplomats intent on identity theft. Not a good look.

In early December, as the first cables came to light, and their contents ricocheted through the mediasphere, the US government recognized that it had to act – and act quickly – to staunch the flow of leaks. The government had some help, because an individual seduced by the United States’ projection of power decided to mount a Distributed Denial of Service attack against the Wikileaks website. In the name of freedom. Or liberty. Or something.

Wikileaks went down, but quickly relocated its servers into Amazon.com’s EC2 cloud. This lasted until US Senator Joseph Lieberman started making noises. Wikileaks was quickly turfed out of EC2, with Amazon claiming newly discovered violations of its Terms of Service. Another ‘discovery’ of a violation followed in fairly short order with Wikileaks’ DNS provider, EveryDNS. For the coup de grâce, PayPal had a look at their own terms of service – and, quelle horreur! – found Wikileaks in violation, freezing Wikileaks accounts, which, at that time, must have been fairly overflowing with contributions.

Deprive them of servers, deprive them of name service, deprive them of funds: checkmate. The Powers That Be must have thought this could dent the forward progress of Wikileaks. In fact, it only caused the number of copies of the website and associated databases to multiply. Today, nearly two thousand webservers host mirrors of Wikileaks. Like striking at a dandelion, attacking it only causes the seeds to spread on the wind.

Although Wikileaks successfully resumed its work releasing the cables, the entire incident proved one ugly, mean, nasty point: the Internet is fundamentally not free. Where we thought we breathed the pure air of free speech and free thought, we instead find ourselves severely caged. If we do something that upsets our masters too much, they bring the bars down upon us, leaving us no breathing room at all. That isn’t liberty. That is slavery.

This isn’t some hypothetical. This isn’t a paranoid fantasy. This is what is happening. It will happen again, and again, and again, whenever the State or forces in collusion with the State find themselves threatened. None of it is secure. None of it belongs to us. None of it is free.

This is why we are so truly and wholly fucked. This is why we must stop and rethink everything we are doing. This is why we must consider ourselves victims of another kind of disaster, another tragedy, and must equally and bravely confront another kind of rebuilding. Because if we do not create something new, if we do not restore what is broken, we surrender to the forces of control.

Like it or not, we find ourselves at war. It’s not a war we asked for. It’s not a war we wanted. But war is upon us, the last great gasp of the forces of control as they realize that when they digitized, in pursuit of greater efficiency, profit, or extensions of their own power, whatever they once held onto became so fluid it now drains away completely.

That’s one enemy, the old enemy, the ones whom history has already ruled irrelevant. But there’s the other enemy, who seeks to exteriorize the interior, to make privacy difficult and therefore irrelevant. Without privacy there is no liberty. Without privacy there is no individuality. Without privacy there is only the mindless, endless buzzing of the hive. That’s the new enemy. Although it announces itself with all of the hyperbole of historical inevitability, this is just PR aimed at extending the monopoly power of these forces.

We need weapons. Lots of weapons. I’m not talking about the Low Orbit Ion Cannon. Rather, I’m recommending a layered defensive strategy, one which allows us to carry on with our business, blithely unmolested by the forces which seek to constrain us.

Here, then, is my ‘Design Guide for Anarchists’:

Design Principle One: Distribute Everything

The recording industry used the courts to shut down Napster because they could. Napster had a single throat they could get their legal arms around, choking the life out of it. In a display of natural selection that would have brought a tear to Alfred Russel Wallace’s eye, the selection pressure applied by the recording industry only led to the creation of Gnutella, which, through its inherently distributed architecture, became essentially impossible to eradicate. The Day of the Darknet had begun.

This is an extension of the essential UNIX idea of simple programs which can be piped together to do useful things. ‘Small pieces, loosely joined.’ But these pieces shouldn’t live within a single process, a single processor, a single computer, or a single subnet. They must live everywhere they can live, in every compatible environment, so that they can survive any of the catastrophes of war.
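The 'small pieces, loosely joined' idea can be sketched in a few lines of Python – a minimal, purely illustrative example (none of these functions belong to any real project) of independent filters chained together like a shell pipeline, each stage oblivious to the others:

```python
# A minimal sketch of 'small pieces, loosely joined': each filter is a
# generator that knows nothing about its neighbours, so any stage can be
# replaced -- or moved into another process, or onto another machine --
# without touching the rest. All names here are illustrative.

def read_lines(text):
    """Source stage: yield one line at a time."""
    for line in text.splitlines():
        yield line

def strip_comments(lines):
    """Filter stage: drop lines starting with '#'."""
    for line in lines:
        if not line.lstrip().startswith("#"):
            yield line

def uppercase(lines):
    """Filter stage: transform each line."""
    for line in lines:
        yield line.upper()

def run_pipeline(text):
    # Compose the stages exactly as a shell pipe would: src | f1 | f2
    return list(uppercase(strip_comments(read_lines(text))))

print(run_pipeline("# comment\nhello\nworld"))  # ['HELLO', 'WORLD']
```

The point of the sketch is that the composition, not any single stage, is where the behavior lives – which is what lets the stages scatter across processes, machines, and subnets.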

Design Principle Two: Transport Independence

The inundation of Brisbane and its surrounding suburbs brought a sudden death to all of its networks: mobile, wired, optic. All of these networks are centralized, and for that reason they can all be turned off – either by a natural disaster, or at the whim of The Powers That Be. Just as significantly, they require the intervention of those Powers to reboot them: government and telcos had to work hand-in-hand to bring mobile service back to the worst-affected suburbs. So long as you are in the good graces of the government, this arrangement can be remarkably efficient. But if you find yourself aligned against your government, or your government is afflicted with corruption, as simple a thing as a dial tone can be almost impossible to manifest.

We have created a centralized communications infrastructure. Lines feed into trunks, which feed into central offices, which feed into backbones. This seems the natural order of things, but it is entirely an echo of the commercial requirements of these networks. In order to bill you, your communications must pass through a point where they can be measured, metered and tariffed.

There is another way. Years before the Internet came along, we used UUCP and FidoNet to spread mail and news posts throughout a far-flung, only occasionally connected global network of users. It was slower than we’re used to these days, but no less reliable. Messages would forward from host to host, until they reached their intended destination. It all worked if you had a phone line, or an Internet connection, or, well, pretty much anything else. I presume that a few hardy souls printed out a UUCP transmission on paper tape, physically carried it from one host to another, and fed it through.
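The store-and-forward pattern of UUCP and FidoNet can be sketched as a toy relay: each host knows only its immediate neighbours, and a message hops from queue to queue until it arrives, with no central switch anywhere. The hostnames and link table below are invented for illustration:

```python
# Toy store-and-forward routing, in the spirit of UUCP/FidoNet: no host
# has a global view, yet a message can still find its way across the
# mesh, hop by hop. Hostnames and links here are purely illustrative.

from collections import deque

# Each host's neighbour list -- a sparse, occasionally-connected graph.
LINKS = {
    "alpha": ["beta"],
    "beta": ["alpha", "gamma"],
    "gamma": ["beta", "delta"],
    "delta": ["gamma"],
}

def route(src, dst):
    """Breadth-first search over the link graph: the hop-by-hop path a
    message would take from src to dst. Returns None if no path exists."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("alpha", "delta"))  # ['alpha', 'beta', 'gamma', 'delta']
```

Cut any single link in that graph and the message simply waits for another route to appear – slower than a trunk line, but with no single throat to choke.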

A hierarchy is efficient, but the price of that efficiency is vulnerability. A rhizomatic arrangement of nodes within a mesh is slow, but very nearly invulnerable. It will survive flood, fire, earthquake and revolution. To abolish these dangerous hierarchies, we must reconsider everything we believe about ‘the right way’ to get bits from point A to point B. Every transport must be considered – from point-to-point laser beams to wide-area mesh networks using unlicensed spectrum down to semaphore and smoke signals. Nothing is too slow, only too unreliable. If we rely on TCP/IP and HTTP exclusively, we risk everything for the sake of some speed and convenience. But this is life during wartime, and we must shoulder this burden.

Design Principle Three: Secure Everything

Why would any message traverse a public network in plaintext? The bulk of our communication occurs in the wide open – between Web browsers and Web servers, email servers and clients, sensors and their recorders. This is insanity. It is not our job to make things easy to read for ASIO or the National Security Agency or Google or Facebook or anyone else who has some need to know what we’re saying and what we’re thinking.

As a baseline, everything we do, everywhere, must be transmitted with strong encryption. Until someone perfects a quantum computer, that’s our only line of defense.

We need a security approach that is more comprehensive than this. The migration to cloud computing – driven by its ubiquity and convenience, and baked into Google’s Chrome OS – deprives us of any ability to secure our own information. When we use Gmail or Flickr or Windows Live or MobileMe or even Dropbox (which is better than most, as it stores everything encrypted), we surrender our security for a little bit of simplicity. This is a false trade-off. These systems are insecure because it benefits those who offer these systems to the public. There is value in all of that data, so everything is exposed, leaving us exposed.

If you do not know where it lives, if you do not hold the keys to lock it or release it, if it affects to be more pretty than useful (because locks are ugly), turn your back on it, and tell the ones you love – who do not know what you know – to do the same. Then, go and build systems which are secure, which present nothing but a lock to any prying eyes.

Design Principle Four: Open Everything

I don’t need to offer any detailed explanation for this last point: it is the reason we are here. If you can’t examine the source code, how can you really trust it? This is an issue beyond maintainability, beyond the right to fork; this is the essential element that will prevent paranoia. ‘Transparency is the new objectivity’, and unless any particular program is completely transparent, it is inherently suspect.

Open source has the additional benefit that it can be reused and repurposed; the parts for one defensive weapon can rapidly be adapted to another one, so open source accelerates the responses to new threats, allowing us to stay one step ahead of the forces who are attempting to close all of this down. There’s a certain irony here: in order to compete effectively with us, those who oppose us will be forced to open their own source, to accelerate their own responses to our responses. On this point we must win, simply because open source improves selection fitness.

When all four of these design principles are embodied in a work, another design principle emerges: resilience. Something that is distributed, transport independent, secure and open is very, very difficult to subvert, shut down, or block. It will survive all sorts of disasters. Including warfare. It will adapt at lightning speed. It makes the most of every possible selection advantage. But nothing is perfect. Systems engineered to these design principles will be slower than those built purely for efficiency. The more immediacy you need, the less resilience you get. Sometimes immediacy will overrule other design principles. Such trade-offs must be carefully thought through.

Is all of this more work? Yes. But then, building an automobile that won’t kill its occupants at speed is a lot more work than slapping four wheels and a gear train on a papier-mâché box. We do that work because we don’t want our loved ones hurtling toward their deaths every time they climb behind the wheel. Freedom ain’t free, and ‘extremism in the defense of liberty is no vice.’

Let me take a few minutes to walk you through the design of my own open-source project, so you can see how these design principles have influenced my own work.

III: Plexus

When I announced I would quit Facebook, many of my contacts held what can only be described as an ‘electronic wake’ for me, in the middle of my Facebook comment stream. As if I were about to pass away, and they’d never see me again. I kept pointing them to my Posterous blog, but they simply ignored the links, telling me how much I’d be missed once I departed. ‘But why can’t you just come visit me on Posterous?’ I asked. One contact answered for the lot when he said, ‘That’s too hard, Mark. With Facebook I can check on everyone at once. I don’t need to go over there for you, and over here for someone else, and so on and so on. Facebook makes it easy.’

That’s another epitaph. Yet it precipitated a penny-drop moment. The reason Facebook has such lock-in with its users is because of a network effect: as more people join Facebook, its utility value as a human switchboard increases. It is this access to the social graph which is Facebook’s ‘flypaper’, the reason it is so sticky, and the reason it is surpassing Google as the most-visited site on the Internet.

That social graph is the key thing; it’s what the address book, the rolodex and the contacts database have morphed into, and it forms the foundation for a project that I have named Plexus. Plexus is a protocol for the social web, ‘plumbing’ that allows all social web components to communicate: from each, according to their ability, to each, according to their need. Some components of the social web – Facebook comes to mind – are very poor communicators. Others, like Twitter, have provided every conceivable service to make them easy to talk to.

Plexus provides a ‘meta-API’, based on RFC2822 messaging, so that each service can feed into or be fed by an individual’s social graph. This social graph, the heart of Plexus, is what we might call the ‘Web2.0 address book’. It’s not simply a static set of names, addresses, telephone numbers and emails, but, rather, an active set of connections between services, which you can choose to listen to, or to share with. This is the switchboard, where the real magic takes place, allowing you to listen or be listened to, allowing you to share, or be shared with.
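The talk doesn't publish the Plexus wire format, so this is only a guess at what an RFC2822-style Plexus message might look like, built with Python's standard email machinery. The X-Plexus-* headers and the 'verb' vocabulary are invented for illustration:

```python
# A guessed-at sketch of an RFC 2822-style social message, using only the
# Python standard library. The X-Plexus-* headers are hypothetical -- the
# real Plexus protocol is not specified in the talk.

from email.message import EmailMessage
from email import message_from_string

def make_share(sender, recipient, verb, payload):
    """Wrap a social-graph event in an RFC 2822 envelope."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["X-Plexus-Verb"] = verb        # e.g. 'share', 'listen', 'unlisten'
    msg.set_content(payload)
    return msg.as_string()             # plain text: headers, blank line, body

wire = make_share("mark@example.org", "friend@example.net",
                  "share", "Check out this link: http://example.org/talk")

# The receiving end parses it with the same standard machinery.
parsed = message_from_string(wire)
print(parsed["X-Plexus-Verb"])  # share
```

Because the envelope is plain RFC2822 text, any language with an email parser – which is to say, nearly every language – can produce or consume it.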

Plexus is agnostic; it can talk to any service, and any service can talk to it. It is designed to ‘wire everything together’, so that we never have to worry about going hither and yon to manage our social graph, but neither need we be chained in one place. Plexus gives us as much flexibility as we require. That’s the vision.

Just after New Year, I had an insight. I had originally envisioned Plexus as a monolithic set of Python modules. It became clear that message-passing between the components – using an RFC2822 protocol – would allow me to separate the components, creating a distributed Plexus, parts of which could run anywhere: in a separate process, on a separate subnet, or, really, anywhere. Furthermore, these messages could easily be encrypted and signed using RSA encryption, creating a strong layer of security. Finally, these messages could be transmitted by any means necessary: TCP/IP, UUCP, even smoke signals. And of course, all of it is entirely open. Because it’s a protocol, the pieces of Plexus can be coded in any language anyone wants to use: Python, Node.js, PHP, Perl, Haskell, Ruby, Java, even shell. Plexus is an agreement to speak the same language about the things we want to share.
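Signing-before-transmission can be sketched with nothing but the standard library. The design calls for RSA signatures; HMAC-SHA256 with a shared key is used here only as a stdlib stand-in for the same idea, and the X-Plexus-Signature header and key are invented:

```python
# Sketch of authenticating a serialized message before it goes out over
# whatever transport is handy. The talk calls for RSA signatures; HMAC
# with a shared key is a standard-library stand-in for illustration, and
# the X-Plexus-Signature header is hypothetical.

import hmac
import hashlib

SHARED_KEY = b"not-a-real-key"  # placeholder; a real system would use RSA keys

def sign(wire_message):
    """Return the message with a detached signature line prepended."""
    digest = hmac.new(SHARED_KEY, wire_message.encode(),
                      hashlib.sha256).hexdigest()
    return "X-Plexus-Signature: %s\n%s" % (digest, wire_message)

def verify(signed):
    """Split off the signature line and check it against the body."""
    sig_line, _, body = signed.partition("\n")
    claimed = sig_line.split(": ", 1)[1]
    expected = hmac.new(SHARED_KEY, body.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

signed = sign("From: mark@example.org\n\nhello")
print(verify(signed))                             # True
print(verify(signed.replace("hello", "HELLO")))   # False
```

Because the signature travels as just another header line, it survives any transport – TCP/IP, UUCP, or a person carrying a printout – that preserves the text.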

I could go into mind-numbing detail about the internals of Plexus, but I trust those of you who find Plexus intriguing will find me after I leave the stage this morning. I’m most interested in what you know that could help move this project forward: what pieces already exist that I can rework and adapt for Plexus? I need your vast knowledge, your insights and your critiques. Plexus is still coming to life, but a hundred things must go right for it to be a success. With your aid, that can happen.

The Chinese Taoist laughs at civilization and goes elsewhere.
The Babylonian Chaoist sets termites to the foundations.

Plexus is a white ant set to the imposing foundations of Facebook and every other service which chooses to take the easy path, walling its users in, the better to control them. There is another way. When the network outside the walls has a utility value greater than the network within, the forces of natural selection come into play, and those walls quickly tumble. We saw it with AOL. We saw it with MSN. We’ll see it again with Facebook. We will build the small and loosely-coupled components that individually do very little but altogether add up to something far more useful than anything on offer from any monopolist.

We need to see this happen. This is not just a game.

Conclusion: The Next Billion Seconds

A billion seconds ago, Linux did not exist. The personal computer was an expensive toy. The Internet – well, one of my friends is the sysadmin who got HP onto UUCP, back before the Internet became pervasive – remembers updating his /etc/hosts file weekly, by hand. Every machine on the Internet could be found within a single file, one that could be printed out on two sheets of greenbar. A billion seconds later, we’re a few days away from IPocalypse, the total allocation of the IPv4 number space.

Something is going on.

I’m not as teleological as Kevin Kelly. I do not believe that there is evidence to support a seventh class of life – the technium – which is striving to come into its own. I don’t consider technology as something in any way separate from us. Other animals may use tools, but we have gone further, becoming synonymous with them. Our social instinct for imitation, our language instinct for communication, and our technological instinct for tool using all seem to be reaching new heights. Each instinct reinforces the others, creating a series of rising feedbacks that has only one possible end: the whole system overloads, overflows all its buffers, and – as you might expect – knocks the supervisor out of the box.

Call this a Singularity, if you like. I simply refer to it as the next billion seconds.

The epicenter of this transition, where all three streams collide, sits in the palm of our hands, nearly all the time. The mobile is the most pervasive technology in human history. People who do not have electricity or indoor plumbing or literacy or agriculture have mobiles. Perhaps five and a half billion of the planet’s seven billion souls possess one; that’s everyone who earns more than one dollar a day. Countless studies show that individuals with mobiles improve their economic fitness: they earn more money. Anything that improves selection fitness – and economic fitness is a big part of that – spreads rapidly, as humans imitate, as humans communicate, as humans take the tool and further it, increasing its utility, amplifying its ability to amplify economic fitness. The mobile becomes even more useful, more essential, more indispensable. A billion seconds ago, no one owned a mobile. Today, nearly everyone does.

Hundreds of billions of dollars are being invested to make the mobile more useful, more pervasive, and more effective. The engines of capital are reorganizing themselves around it, just as they did, three billion seconds ago, for the automobile, and a billion seconds ago for the integrated circuit. But unlike the automobile or the IC, the mobile is quintessentially a social technology, a connective fabric for humanity. The next billion seconds will see this fabric become more tangible and more tightly woven, as it becomes increasingly inconceivable to separate ourselves from those we choose to share our lives with.

Call this a Hive Mind, if you like. I simply refer to it as the next billion seconds.

This is starting to push beneath our skins the way it has already colonized our attention. I don’t know that we will literally ‘Borg’ ourselves. But the strict boundaries between ourselves, our machines, and other humans are becoming blurred to the point of meaninglessness. Organisms are defined by their boundaries, by what they admit and what they refuse. In this billion seconds, we are rewriting the definition of Homo sapiens sapiens, irrevocably becoming something else.

Do we own that code? Are parts of that new definition closed off from us, fenced in by the ramparts of privilege or power or capital or law? Will we end up with something foreign inside each of us, a potency unnamed, unobserved, and unavoidable? Will we be invaded, infected, and controlled? This is the choice that confronts us in the next billion seconds, a choice made even in its abrogation. Freedom is not just an ideal. Liberty is not some utopian dream. These must form the baseline human experience in our next billion seconds, or all is lost. We ourselves will be lost.

We have reached the decision point. Our actions today – here, in this room – define the future we will inhabit, the transhumanity we are emerging into. We’ve had our playtime, and it’s been good. We’ve learned a lot, but mostly we’ve learned how to discern right from wrong. We know what to do: what to build up, and what to tear down. This transition is painful and bloody and carries with it the danger of complete loss. But we have no choice. We are too far down within it to change our ways now. ‘The way down is the way up.’

Call it a birth, if you like. It awaits us within the next billion seconds.

The slides for this talk (in OpenOffice.org Impress format) are available here. They contain strong images.

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful. But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’. Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century. Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web. Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real. I was one of them. From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu. This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web. It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs; you could follow a link back from its destination to its source. But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died. The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released. ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s. When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits per second to 56 kilobits per second. That’s the line for all of the traffic heading from one coast to the other. I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a more than million-fold increase. And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand. And I wasn’t impressed. In July 1993 very little content existed for the Web – just a handful of sites, mostly academic. Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense. I walked away from the computer that July afternoon wanting more. Hypertext systems I’d seen before. What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy. Instead of a handful of sites, there were now hundreds. There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list. By Friday evening I was finished. I had surfed the entire Web. It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993. Then things began to explode.

From October on I became a Web evangelist. My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic. That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities. As someone I knew walked in the door at the Salon, I’d walk up to them and take them over to my computer. “What’s something you’re interested in?” I’d ask. They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo! – still running out of a small lab on the Stanford University campus – type in their interest, and up would come at least a few hits. I’d click on one, watch the page load, and let them read. “Wow!” they’d say. “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos. All I did was hook people by their own interests. What happened in January 1994 in San Francisco would happen throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented. The individual, with their needs, their passions, their opinions, their desires and their goals is always paramount. We tend to forget this, or overlook it, or just plain ignore it. We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate. It’s not that we should ignore these considerations, but they are always secondary. The Web is a ground for being. Individuals do not present themselves as receptacles to be filled. They are souls looking to be fulfilled. This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web. I’ll now break some of these down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication. I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing. As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared. The sharing instinct is innate and immediate. We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it. We’ve always known this; it’s part of being a human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another. It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby. Everyone carries that hundred and fifty around inside of them. Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with. It’s automatic, requires no thought. We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing. Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten. We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us. It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does that work with creations that may be Web-based, or similarly constrained? We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter. You have to do more than request sharing. You have to think through the entire goal of sharing, from the user’s perspective. Are they sharing this because it’s interesting? Are they sharing this because they want company? Are they sharing this because it’s a competition or a contest or collaborative? Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project. What is it about the design of your work that excites them to share it with others? Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential? In other words, is there space only for one, or is there room to spread the word? Why would anyone want to share your work? You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question. How will your work be shared?

Your works do not exist in isolation. They are part of a continuum of other works. Where does your work fit into that continuum? How do the instructor and student approach that work? Is it a top-down mandate? Or is it something that filters up from below as word-of-mouth spreads? How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected. Is it simply via email – do all the students have email addresses? Do they know the email addresses of their friends? Or do you want your work shared via SMS? A QRCode, perhaps? Or Facebook or Twitter or, well, who knows? And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today. It becomes painfully obvious when it’s been overlooked. For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend. There was simply no way to do that. (I don’t know if this has changed recently.) That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes. The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts. Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd. Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this. Everyone’s there, but no one is wholly aware of anyone else’s presence. You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another. Most of the connecting for the Wikipedians – the folks who behind-the-scenes make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose. These are the social networks: Facebook, MySpace, LinkedIn, and so on. In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication. But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless. Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting. There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths. Where you can poll your friends on Facebook, on Twitter you can poll a planet. How do I solve this problem? Where should I eat dinner tonight? What’s going on over there? These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles. It is not Facebook-versus-Twitter; it is not tight connections versus loose connections. It’s a bit of both. Where does your work benefit from a tight collective of connected individuals? Is it some sort of group problem-solving? A creative activity that really comes into its own when a whole band of people play together? Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms? When a task constantly makes you think of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’, that’s the kind of task which benefits from loose connectivity. Not every project will need both kinds of connecting, but almost every one will benefit from one or the other. We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed. (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology can not help but amplify.) Life is meaningful because we, together, give it meaning. Life is bearable because we, together, bear the load for one another. Human life is human connection.

The Web today is all about connecting. That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it. So how do your projects allow your users to connect? Does your work leave them alone, helpless, friendless, and lonely? Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic? Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica. That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun. It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute. There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy. (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.) By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of all of what its contributors have to offer. For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook. This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia. Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles. Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us. This is a powerful logic, an attraction which transcends the rational. People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine a time will come when Wikipedia will be complete. If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia. Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005. With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights. Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth. It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought. It will mean that we have captured the better part of human knowledge in a form accessible to all. That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’. It is a work-in-progress. Google understands this and releases interminable beta versions of every product. More than this, it means that nothing needs to offer all the answers. I would suggest that nothing should offer all the answers. Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you. It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility. There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done. This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system. The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up. TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments. RateMyProfessors.com is the holy terror of the academy in the United States. Each of these websites has had to design systems which allow for users to self-regulate peer contributions. In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation. Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material. TripAdvisor gives anonymous reviewers a lower ranking. eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade. Each of these is a social solution to a social problem.
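The ‘report this post’ mechanism described above is simple enough to sketch in code. This is a minimal, hypothetical illustration – the class name, the threshold, and the post identifiers are invented for the example, not drawn from any of these sites:

```python
from collections import defaultdict

class ModerationQueue:
    """A minimal 'report this post' mechanism: users flag content, and
    items flagged by enough distinct users are queued for a human
    moderator to review."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.flags = defaultdict(set)  # post_id -> set of reporting users

    def report(self, post_id, user_id):
        # A set means each user counts at most once per post.
        self.flags[post_id].add(user_id)

    def needs_review(self):
        # Posts flagged by `threshold` or more distinct users.
        return [post for post, users in self.flags.items()
                if len(users) >= self.threshold]

q = ModerationQueue(threshold=2)
q.report("post-42", "alice")
q.report("post-42", "alice")   # duplicate report, doesn't count twice
q.report("post-42", "bob")
print(q.needs_review())        # → ['post-42']
```

The social insight is encoded in the two design choices: distinct users (so one angry individual can’t bury a post), and a human moderator at the end of the queue rather than automatic deletion.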

Web2.0 is not a technology. It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil. It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention. Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration. Nothing is ever complete, nor ever perfect. The perfect is the enemy of the good, so if you wait for perfection, you will never release. Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work. In their more uncharitable moments, do they abuse the freedoms you have given them? If so, how can you redesign your work, and ‘nudge’ them into better behavior? It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem. And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass. Instead, release, observe, adapt, and re-release. All releases are soft releases, everything is provisional, and nothing is quite perfect. That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter. Although they seem to be similar, they couldn’t be more different. Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself. If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page. Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut your application down if they don’t like it, or perceive it as somehow competitive with Facebook. Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach. From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users. Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data. Twitter provided very clear (and remarkably straightforward) instructions on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks. People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, or even a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks! It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself. Twitter has become a building block: when you write a program which needs to send a message, you use Twitter. Facebook isn’t a building block. It’s a monolith.

How do you build for openness? Consider: another position the user might occupy is someone trying to use your work as a building block within their own project. Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again? Or is it opaque, seamless, and closed? What about the data you collect, data the user has generated? Where does that live? Can it be exported and put to work in another application, or on another website? Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful). The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them. If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
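Leaving that door open can be as modest as exposing your data in a standard format. As a sketch, a minimal RSS 2.0 feed can be generated with nothing beyond Python’s standard library – the feed title, links, and items here are invented for the illustration:

```python
from xml.etree import ElementTree as ET

def make_rss(title, link, items):
    """Build a minimal RSS 2.0 feed so other sites and programs can
    re-use our data. `items` is a list of (title, url) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, item_url in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_url
    return ET.tostring(rss, encoding="unicode")

feed = make_rss("Example Feed", "http://example.org/",
                [("First post", "http://example.org/1")])
print(feed)
```

A dozen lines, and anything that speaks RSS – a feed reader, another website, a mashup you never imagined – can now build on your work. That is what it means to be a brick rather than a brick wall.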

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’ You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user. These are not precisely the same Web2.0 domains others might identify. That’s because Web2.0 has become a very ill-defined term. It can mean whatever we want it to mean. But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another. In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create. We need to make room for them. If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

At the close of the first decade of the 21st century, we find ourselves continuously connecting to one another. This isn’t a new thing, although it may feel new. The kit has changed – that much is obvious – but who we are has not. Only from an understanding of who we are can we understand the future we are hurtling toward. Connect, connect, connect. But why? Why are we so driven?

To explain this – and reveal that who we are now is precisely who we have always been – I will tell you two stories. They’re interrelated – one leads seamlessly into the other. I’m not going to say that these stories are the God’s honest truth. They are, as Rudyard Kipling put it, ‘just-so stories’. If they aren’t true, they describe an arrangement of facts so believable that they could very well be true. There is scientific evidence to support both of these stories, but neither is considered scientific canon. So, take everything with a grain of salt; these are more fables than theories, but we have always used fables to help us illuminate the essence of our nature.

For our first story, we need to go back a long, long time. Before the settlement of Australia – by anyone. Before Homo Sapiens, before Australopithecus, before we broke away from the chimpanzees, five million years ago, just after we broke away from the gorillas, ten million years ago. How much do we know about this common ancestor, which scientists call Pierolapithecus? Not very much. A few bits of skeletons discovered in Spain eight years ago. If you squint and imagine some sort of mash-up of the characteristics of humans, chimpanzees and gorillas, you might be able to get a glimmer of what they looked like. Smaller than us, certainly, and not upright – that comes along much later. But one thing we do know, without any evidence from skeletons: Pierolapithecus was a social animal. How do we know this? Its three descendant species – humans, chimps and bonobos – are all highly social animals. We don’t do well on our own. In fact, on our own we tend to make a tasty meal for some sort of tiger or lion or other cat. Together, well, that’s another matter.

Which brings us to the first ‘just-so’ story. Imagine a warm late afternoon, hanging out in the trees in Africa’s Rift Valley. Just you and your mates – probably ten or twenty of them. You’re all males; the females are elsewhere, doing female-type things, which we’ll discuss presently. At a signal from the ‘alpha male’, all of you fall into line, drop out of the trees, and begin a trek that takes you throughout the little bit of land you call your own – with your own trees and plants and bugs that keep you well fed – and you go all the way to the edge of your territory, to the border of the territory of a neighboring troupe of Pierolapithecus. That troupe – about the same size as your own – is dozing in the heat of the afternoon, all over the place, but basically within eyeshot of one another.

Suddenly – and silently – you all cross the border. You fan out, still silent, looking for the adolescent males in this troupe. When you find them, you kill them. As for the rest, you scare them off with your screams and your charges, and, at the end, they’ve lost some of their own territory – and trees and plants and delicious grubs – while you’ve got just a little bit more. And you return, triumphant, with the bodies you’ve acquired, which you eat, with your troupe, in a victory dinner.

This all sounds horrid and nasty and mean and just not cricket. That it is. It’s war. How do we know that ‘war’ stretches this far back into our past? Just last month a paper published in Current Biology and reported in The Economist described how primatologists had seen just this behavior among chimpanzees in their natural habitats in the African rain forests. The scene I just described isn’t ten million years old, or even ten thousand, but current. Chimpanzees wage war. And this kind of warfare is exactly what was commonplace in New Guinea and the upper reaches of Amazonia until relatively recently – certainly within the span of my own lifetime. War is a behavior common to both chimpanzees and humans – so why wouldn’t it be something we inherited from our common ancestor?

War. What’s it good for? If you win your tiny Pierolapithecine war for a tiny bit more territory, you’ll gain all of the resources in that territory. Which means your troupe will be that much better fed. You’ll have stronger immune systems when you get sick, you’ll have healthier children. And you’ll have more children. As you acquire more resources, more of your genes will get passed along, down the generations. Which makes you even stronger, and better able to wage your little wars. If you’re good at war, natural selection will shine upon you.

What makes you good at war? That’s the real question here. You’re good at war if you and your troupe – your mates – can function effectively as a unit. You have to be able to coordinate your activities to attack – or defend – territory. We know that language skills don’t go back ten million years, so you’ve got to do this the old-fashioned way, with gestures and grunts and the ability to get into the heads of your mates. That’s the key skill; if you can get into your mates’ heads, you can think as a group. The better you can do that, the better you will do in war. The better you do in war, the more offspring you’ll have, so that skill, that ability to get into each other’s heads, gets reinforced by natural selection, and becomes, over time, evolution. The generations pass, and you get better and better at knowing what your mates are thinking.

This is the beginning of the social revolution. All the way back here, before we looked anything like human, we grasped the heart of the matter: we must know one another to survive. If we want to succeed, we must know each other well. There are limits to this knowing, particularly with the small brain of Pierolapithecus. Knowing someone well takes a lot of brain capacity, and soon that fills up. When it does, when you can’t know everyone around you intimately, your troupe will grow increasingly argumentative, confrontational, and eventually will break into two independent troupes. All because of a communication breakdown.

There’s strength in numbers; if I can manage a troupe of thirty while all you can manage is twenty, I’ll defeat you in war. So there’s pressure, year after year, to grow the troupe, and, quite literally, to stuff more mates into the space between your ears. For a long time that doesn’t lead anywhere; then there’s a baby born with just a small genetic difference, one which allows just a bit more brain capacity, so that it can hold two or three or four more mates in its head, which makes a big difference. Such a big difference that these genes get passed along very rapidly, and soon everyone can hold a few more mates inside their heads. But that capability comes with a price. Those Pierolapithecines have slightly bigger brains, and slightly bigger heads. They need to eat more to keep those bigger brains well-fed. And those big heads would soon prove very problematic.

This is where we cross over, from our first story, into our second. This is where we leave the world of men behind, and enter the world of women, who have been here, all along, giving birth and gathering food and raising children and mourning the dead lost to wars, as they still do today. As they have done for ten million years. But somewhere in the past few million years, something changed for women, something perfectly natural became utterly dangerous. All because of our drive to socialize.

Human birth is a very singular thing in the animal world. Among the primates, human babies are the only ones born facing downward and away from the mother. They’re also the only ones who seriously threaten the lives of their mothers as they come down the birth canal. That’s because our heads are big. Very big. Freakishly big. So big that one of the very recent evolutionary adaptations in Homo sapiens is a pelvic gap in women that creates a larger birth canal, at the expense of their ability to walk. Women walk differently from men – much less efficiently – because they give birth to such large-brained children.

There are two notable side-effects of this big-brained-ness. The first is well-known: women used to regularly die in childbirth. Until the first years of the 20th century, about one in one hundred pregnancies ended with the death of the mother. That’s an extraordinarily high rate, particularly given that a woman might give birth to seven or eight children over her lifetime. Now that we have survivable caesarian sections and all sorts of other medical interventions, death in childbirth is much rarer – perhaps 1 in 10,000 births. Nowhere else among the mammals can you find this kind of danger surrounding the delivery of offspring. This is the real high price we pay for being big-brained: we very nearly kill our mothers.

The second side-effect is less well-known, but so pervasive we simply accept it as a part of reality: humans need other humans to assist in childbirth. This isn’t true for any other mammal species – or any other species, period. There are very few (one or two) examples of cultures where women give birth by themselves. Until the 20th century medicalization of pregnancy and childbirth, this was ‘women’s work’, and a thriving culture of midwives managed the hard work of delivery. (The image of the chain-smoking father, waiting outside the maternity ward for news of his newborn child, is far older than the 20th century.)

For at least a few hundred thousand years – and probably a great deal longer than that – the act of childbirth has been intensely social. Women come together to help their sisters, cousins, and daughters pass through the dangers and into motherhood. If you can’t rally your sisters together when you need them, childbirth will be a lonely and possibly lethal experience. So this is what it means to be human: we entered the world because of the social capabilities of our mothers. Women who had strong social capabilities, who could bring their sisters to their aid, would have an easier time in childbirth, and would be more likely to live through it, as would their children.

After the child has been born, mothers need even more help from their female peers; in the first few hours, when the mother is weak, other women must provide food and shelter. As that child grows, the mother will periodically need help with childcare, particularly if she’s just been delivered of another child. Mothers who can use their social capabilities to deliver these resources will thrive. Their children will thrive. This means that these capabilities tended to be passed down, through the generations. Just as men had their social skills honed by generations upon generations of warfare, women had their social skills sharpened by generations upon generations of childbirth and child raising.

All of this sounds very much as though it’s Not Politically Correct. But our liberation from our biologically determined sex roles is a very recent thing. Men raise children while women go to war. Yet behind this lies hundreds of thousands of generations of our ancestors who did use these skills along gender-specific lines. That’s left a mark; men tend to favor coordination in groups – whether that’s a war or a footy match – while women tend to concentrate on building and maintaining a closely-linked web of social connections. Women seem to have a far greater sensitivity to these social connections than men do, but men can work together in a team – to slaughter the opponent (on the battlefield or the pitch).

The prefrontal cortex – freakishly large in human beings when compared to chimpanzees – seems to be where the magic happens, where we keep these models of one another. Socialization has limits, because our brains can’t effectively grow much bigger. They already nearly kill our mothers, they consume about 25% of the food we eat, and they’re not even done growing until five years after we’re born – leaving us defenseless and helpless far longer than any other mammals. That’s another price we pay for being so social.

But we’re maxed out. We’ve reached the point of diminishing returns. If our heads get any bigger, there won’t be any mothers left living to raise us. So here we are. An estimate conducted nearly 20 years ago pegs the number of people who can fit into your head at roughly 148, plus or minus a few. That’s not very many. But for countless thousands of years, that was as big as a tribe or a village ever grew. That was the number of people you could know well, and that set the upper boundary on human sociability.

And then, ten thousand years ago, the comfortable steady-state of human development blew apart. Two things happened nearly simultaneously: we learned to plant crops, which created larger food supplies, which meant families could raise more children. We also began to live together in communities much larger than the tribe or village. The first cities – like Jericho – date from around that time, cities with thousands of people in them.

This is where we cross a gap in human culture, a real line that separates that-which-has-come-before from that-which-comes-after. Everyone who has moved from a small town or village to the big city knows what it’s like to cross that line. People have been crossing that line for a hundred centuries. On one side of the line people are connected by bonds that are biological, ancient and customary – you do things because they’ve always been done that way. On the other side, people are bound by bonds that are cultural, modern, and legal. When we can’t know everyone around us, we need laws to protect us, a culture to guide us, and all of this is very new. Still. Ten thousand years of laws and culture, next to almost two hundred thousand years of custom – and that’s just Homo sapiens. Custom extends back, probably all the way to Pierolapithecus.

We wage a constant war within ourselves. Our oldest parts want to be clannish, insular, and intensely xenophobic. That’s what we’re adapted to. That’s what natural selection fitted us for. The newest parts of us realize real benefits from accumulations of humanity too big to get our heads around. The division of labor associated with cities allows for intensive human productivity, hence larger and more successful human populations. The city is the real hub of human progress; more than any technology, it is our ability to congregate together in vast numbers that has propelled us into modernity.

There’s an intense contradiction here: we got to the point where we were able to build cities because we were so socially successful, but cities thwarted that essential sociability. It’s as though we went as far as we could, in our own heads, then leapt outside of them, into cities, and left our heads behind. Our cities are anonymous places, and consequently fraught with dangers.

It’s a danger we seem prepared to accept. In 2008 the UN reported that, for the first time in human history, over half of humanity lived in cities. Half of us had crossed the gap between the social world in our heads and the anonymous and atomized worlds of Mumbai and Chongqing and Mexico City and Cairo and São Paulo. But just in this same moment, at very nearly the same time that half of us resided in cities, half of us also had mobiles. Well more than half of us do now. In the anonymity of the world’s cities, we stare down into our screens, and find within them a connection we had almost forgotten. It touches something so ancient – and so long ignored – that the mobile now contends with the real world as the defining axis of social orientation.

People are often too busy responding to messages to focus on those in their immediate presence. It seems ridiculous, thoughtless and pointless, but the device has opened a passage which allows us to retrieve this oldest part of ourselves, and we’re reluctant to let that go.

Forty-eight years ago, when my mother was pregnant with me, her friends and family threw her a baby shower. Among the gifts, she received a satin-covered ‘Baby Book’, with spaces to record all of the minutiae of the early days of my existence. I know for a fact that Dr. No and Lawrence of Arabia were playing in the movie theatres in Massachusetts at the time I was born, because it is neatly recorded on a page of my baby book. I know how much I weighed when I was born (7 lbs, 7 oz – or 3.3 kg), when I got my first tooth, when I started to walk, and so on. All of it is there, because my mother took the time to write it down as it happened.

What my mother didn’t write down – because it isn’t at all remarkable – was that I was busy reaching out, making connections with everyone I came into contact with. Those connections began with my mother and my father, then my aunts and uncles and grandparents, and, just a year later, my sister. I made those connections because that’s what humans do. It sounds perfectly ordinary because it comes so naturally: in fact, it’s quite profound. From the moment we’re born, we work to embed ourselves within a deep, strong and complex web of social relationships.

This isn’t a recent innovation, something that we ‘thought up’ the way we dreamed up art or writing or the steam engine; you need to go way, way back – at least ten million years, and probably a great deal more – before you find an ancestor of ours who wasn’t thoroughly social. A social animal will, on the whole, outperform a loner. A social animal can harness resources outside itself to ensure its survival and the survival of its children. Ten million years ago, a social animal could share the hunting and gathering of food, childcare, or lookout duties. Those with the best social skills – the best ability to communicate, coordinate, and function effectively as a unit – did better than their less-well-socialized relatives. They survived to pass their genes and behaviors along, down the generations. All along, a constant pressure accompanied them, driving them to become ever more social, better coordinated, and more effective. At some point – no one knows how long ago, or even how it happened – this pressure overflowed, creating the infinitely flexible form of communication we call language.

The more we study other animals – particularly chimpanzees – the less unique we seem to ourselves. Animals think, they even reason. They can carry around within themselves a model of how others think and think about them. They can deceive. They even appear to have empathy and a sense of fairness. But no other animal has the perfect tool of language. Animals can think and feel, but they cannot express themselves, at least, not as comprehensively as we can. The expressiveness of language has one overriding aim: it allows us to connect very effectively.

The more we study ourselves, the more we understand how our need to connect has worked its way into our bodies, colonizing our nervous system. Our big brains are the hardware for our connection into the human network: there’s a direct correlation between the amount of grey matter in our prefrontal cortex and the number of individuals we can maintain connections with. Anthropologist Robin Dunbar came up with a figure of 148, plus or minus a few. That’s the number of individuals you carry around in your head with you, all the time. For a long, long time – tens of thousands of years – that was the largest a tribe of humans could grow, before they hived off into two tribes. When a tribe grows so big you can’t know all of its members, it’s time to divide.
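As a rough sketch of where that figure comes from: Dunbar derived it from a regression of mean group size against neocortex ratio across primate species. The coefficients below are the commonly quoted ones from his 1992 paper, and the human neocortex ratio of about 4.1 is likewise a textbook value, not something stated in this essay – so treat this as an illustration of the arithmetic, not a definitive calculation.

```python
import math

def dunbar_group_size(neocortex_ratio: float) -> float:
    """Predicted mean group size from the primate regression
    log10(N) = 0.093 + 3.389 * log10(CR), where CR is the ratio
    of neocortex volume to the volume of the rest of the brain."""
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# A human neocortex ratio of roughly 4.1 yields the famous figure:
print(round(dunbar_group_size(4.1)))  # prints 148
```

Because the relationship is a power law, even the modest difference between a chimpanzee’s neocortex ratio and ours translates into a much larger predicted group size for humans.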

We’ve grown used to being surrounded by people we have no connection with. That’s what cities are all about. We’ve been building them for close to ten thousand years, and in that time we’ve learned how to live with those we don’t know. It’s not easy – it requires police and courts and prisons – but the advantages of coming together in such great numbers outweigh the disadvantages. In 2008, for the first time in history, half of humanity lived in cities. We’re in the final stages of the urban revolution – a revolution in the making for the past hundred centuries. Urban life is now the default human condition.

Just as that revolution reaches its climax, we find ourselves presented with a new technology, which takes all of our human connections and digitizes them, creating an electronic representation of what we each carry around in our heads. We call this ‘social networking’, though, as I’ve explained, social networks are actually older than our species. Stuffing them into a computer doesn’t change them: We are our connections. They are what make us human. But the computer speeds up and amplifies those connections, taking something natural and ordinary and turning it into something freakish and – hopefully – wonderful.

Before we discuss how these newly amplified connections can be used, it may be useful to step back, and reframe this latest revolution – just three years old – in the context of a child born, not in the early 1960s, but in 2010. I have good friends in Melbourne who are expecting their first child in early September. For the sake of today’s talk, let’s use this child (we’ll call her a daughter, though no one yet knows) as an example of what is now happening, and what is to come.

Will this child have a baby book? Certainly, some beloved relative may provide one to the lucky parents, and mom and dad may even take the time to fill it in – between the 3 AM feedings and the nappy changes. But the true baby book for this child will be the endless stream of digital media created in her wake. From a few minutes after birth, she will be photographed, recorded, videoed, measured and captured in ways that would seem inconceivable (and obsessive) just a generation ago. Yet today we think nothing of a parent who follows a child everywhere with a video camera.

As parents collect all of that media, they’re going to want somewhere to show it off. An eponymous website. YouTube is already cluttered with videos of babies doing the most mundane sorts of things, precisely so they can be shown off to proud grandparents. Photo galleries on Picasa and Snapfish and Flickr exist for precisely the same reason – they provide a venue for sharing. Parents post to blogs documenting every move, every fitful crawl, every illness. What’s the difference between this and what we think of as a baby book? Nothing at all.

It seems natural and wonderful to gather all of this documentation about her. This is who she is in her youngest years. But there’s other information that her parents do not document, at least not yet: who does she connect with? This list is small in her very first years, but as she grows into a toddler and heads off to day care and pre-kindy and grade school, that list grows rather longer. Will her parents keep track of these relationships? Even if they do not, at some point, she will. She’ll go online to a site patrolled by Disney or Apple or Google or Microsoft and be invited to ‘friend’ others on the site, and enroll her own real-world friends. Her social network will begin to twin into its physical and virtual selves. Much of each will be a reflection of the other, but some connections will exist purely in one realm. Some friends or family members will have no presence online; a few friends might remain life-long ‘pen pals’, never meeting in the flesh, but maintaining constant, connected contact.

The most significant difference between these real-world and virtual networks centers on persistence. We only have room for 150 people in our heads. When we fill up, people start to get pushed out, crossing that invisible yet absolutely real line between friend and acquaintance. We may have a lot of acquaintances, but these relationships, in the real world, don’t consist of very much beyond a greeting and a few polite words. Contrast this to the virtual world, the world of Facebook and Twitter and LinkedIn, where connections persist forever unless explicitly deleted by one of the parties to that connection. There is no upper limit to the number of connections a computer can remember. (Facebook has an upper limit of 5000 friends, but that’s entirely artificial and will eventually be abandoned.)

As she passes through life, this child will continue to accrue connections, and these connections will be digitized for safekeeping – just like the photos and videos her parents shot in her youngest years. That list will naturally grow and grow and grow, as she passes through years 1 through 12, moves on to university, and out into the world of adults. By the time she’s 25, she’ll likely have thousands of connections that accreted just by living her life. Each of these people will be able to peer in, and see how she’s doing; she’ll be able to do the same with each of them.

Managing the difference between our real-world connections, which top out, and our virtual connections, which do not, is a task that we’ll be mastering over the next decade. Right now, we’re not very good at it. By the time she’s grown up enough to understand the different qualities of real and virtual connections, we will be able to teach her behaviors appropriate to each sphere of connection. At present there’s a lot of confusion, a fair bit of chaos, and a healthy helping of ignorance around all of this. We can give ourselves a pass: it’s brand new. But already we’re beginning to see that this is a real revolution. In the social sphere, nothing will look like the past.

II: Pillar of Cloud, Pillar of Fire

On Friday evening, my washing machine – which I bought, used, just after I moved to Australia – finally gave up the ghost. The motor on my front loader seemed less and less likely to make it through an entire spin cycle, so I knew this day was coming, and had some thoughts about what I’d do for a replacement. One of my very good friends recommended that I buy a Simpson brand washer, just as she owned, just as her mother owned. ‘Years of trouble-free service,’ she said. ‘It’ll last forever.’ I took that suggestion under advisement. But I knew that I had a larger pool of individuals to interrogate. About thirty minutes after the unfortunate passing of the washer, I posted a message to Twitter, asking for recommendations. Within minutes I was pointed to Choice Magazine, where I read their reliability survey. Many people chimed in with their own love or horror stories about particular brands of washers. I was quickly dissuaded from Simpson: ‘There’s a reason they’re cheap,’ one person replied. A furious argument raged about whether LG should be purchased by anyone, for any reason whatsoever, given that they were caught cheating on a refrigerator efficiency test. Miele owners seemed fanatically in love with their washers – but acknowledged that they paid a big premium for that love. And so on. After reviewing the input from Twitter (and Choice), I made a decision to purchase a Bosch, which seemed both highly reliable and not too expensive, good value for money. I put my decision out to Twitter, and the Bosch owners all chimed in: very happy, except for one, who seemed to have gotten one of those units that inevitably break down a few days after the warranty expires. That settled it. On Saturday morning I played Bing Lee off Harvey Norman, talked one down to a very good price, and made the purchase. Crisis resolved.

Let’s step back from the immediate and get a good look at this whole process. In considering what to replace my dead washing machine with, I first consulted my real-world network – my friend who recommended Simpson. Then I went out to my virtual network, a network which is much, much larger. I follow about 5700 people on Twitter. This means I have access, potentially, to 5700 opinions, 5700 sets of experiences, 5700 people who may be willing to help. Even if only a small proportion of those do decide to offer assistance, that’s a lot of help, and it comes to me more or less immediately. The entire process took about half an hour – and this on a Friday night. If it’d been on a Tuesday afternoon, when people idly monitor Twitter while they work, I would have received double the response.

Wherever I go, I carry this ‘cloud’ of connections with me. These connections have value in themselves – they are a record of my passage through the human universe – but they have far greater value when put to work to accomplish some task. This is it; this is the knife-edge of the present: We have been busily building up our social networks, and though I freely admit that I am better connected than most, this will not long remain the case, as a generation grows into adulthood keeping a perfect record of all of their connections. Within a few years, nearly everyone who wills it will enter every situation with the same cloud of connections, the same reliable web of helpers who can respond to requests as the need arises. That fundamental transition – at the heart of this latest revolution – makes each of us much more effective. We’re carrying around a whole stadium of individuals, who can be called upon as needed to help us make the best decision in every situation. As we grow more comfortable with this new power, every decision of significance we make will be done in consultation with this network of effectiveness. This is already transforming the way we operate.

Some more examples, drawn from my own experience, will help illuminate this transformation. In December I found myself in Canberra for a few days. Where to eat dinner in a town that shuts down at 5 pm? I asked Twitter, and forty-five minutes later I was enjoying some of the best seafood laksa I’ve had in Australia. A few days later, in the Barossa Valley, I asked Twitter which wineries I should visit – and the top five recommendations were very good indeed. In the moment these can seem like trivial affairs, but both together begin to mark the difference between an ordinary holiday and an awesome one. Imagine this stretching out, minute after minute, throughout our lives. We’re not used to thinking in such terms. But just twenty years ago we weren’t used to the idea that we could reach anyone else instantly from wherever we were, or be reached by anyone else, anywhere. Then the mobile came along, and now that’s an accepted part of our reality. We’d find it difficult to go back to a time before the mobile became such an essential tool in our lives. This is the same transition we’re in the midst of right now with social networks. We look at Twitter and Facebook and find them charming ways to stay in touch and while away some empty time. A social network isn’t charming, and it certainly isn’t a waste of time. We are like children, playing with very powerful weapons. And sometimes they go off.

Before we explore that more explosive side to social networks, the ‘pillar of fire’ to this ‘pillar of cloud’, I want to introduce you to one more social networking technology, one which is brand-new, and which you may not have heard of yet. Just over the past month, I’ve become a big fan of Foursquare, a location-based ‘social network’. Using the GPS on my mobile, Foursquare allows me to ‘check in’ when I go to a restaurant, a store, or almost anywhere else. That is, Foursquare records the fact that I am at a particular place at a particular time. Once I’ve checked in, I can then make a recommendation – a ‘tip’ in Foursquare lingo – and share something I’ve observed about that place. It could be anything – something absurdly trivial, or something very relevant. As others have likely been to this place before me, there is already a list of tips. If I peek through those tips, I can learn something that could prove very useful.

As every day passes, and more people use Foursquare (over a million at present, all around the world), this list of tips is rapidly growing longer, more substantial, and more useful. What does this mean? Well, I could walk into a bar that I’ve never been to before and know exactly which cocktail I want to order. I would know which table at a restaurant offers the quietest corner for a romantic date. Or which salesperson to talk to for a good deal on that washing machine. And so on. With Foursquare I have immediate and continuous information in depth, information provided by the hundreds or thousands in my own social network, plus everyone else who chooses to contribute. Foursquare turns the real world into a kind of Wikipedia, where everyone contributes what they know to improve the lot of all. I have a growing range of information about the world around me in my hands. If I put it to work, it will improve my effectiveness.

Last weekend I went to the cinema, to see Iron Man 2. As soon as I left the theatre, I sent out a message to Twitter: “Thought Iron Man 2 better than original. Snappier. Funnier. More comic-book-y.” That recommendation – high praise from me – went out to the 6550 people who follow me. Many of those folks are Australians, who might have been looking for a film to see last weekend. My positive review would have influenced them. I know for a fact that it did influence some, because they sent me messages telling me this.

On the other hand, if I’d sent out a message saying, ‘Worst. Movie. Ever.’ that also would have reached 6550 people, who would, once again, consider it. It might have even dissuaded some from paying the $17.50 to see Iron Man 2 on the big screen. If enough people said the same thing, that could kill the box office. This is precisely what we’ve seen. There’s a direct correlation between the speed at which a motion picture bombs and the rise in the number of users of Twitter. It used to take a few days for word-of-mouth to kill a movie’s box office (think Godzilla). Now it takes a few minutes. As the first showing ends, friends text friends, people post to Twitter and Facebook, and the story spreads. After the second or third showing, the crowds have dropped off: word has gotten out that the film stinks. Where a film could coast an entire weekend, now it has just a Friday matinee to succeed or fail. Positive word-of-mouth kept Avatar at the #1 spot for nine weeks, and the film remained a trending topic on Twitter for half of that time; conversely, The Back-Up Plan disappeared almost without a trace. An opinion, multiplied by hundreds or thousands of connections, carries a lot of weight.

These connections always come with us, part of who we are now. If we have an experience we find objectionable, our connections have a taste of that. A few months ago a friend found herself in Far North Queensland with an American Express card whose credit limit had summarily been cut in half with no warning, leaving her far away from home and potentially caught in a jam. When she called American Express to make an inquiry – and found that their consumer credit division closed at 5 pm on a Friday evening – she lost her temper. The 7500 people who follow her on Twitter heard a solid rant about the evils of American Express, a rant that they will now remember every time they find an American Express invitation letter in the post, or even when they decide which credit card to select while making a purchase.

Every experience, positive or negative, is now amplified beyond all comprehension. We sit here with the social equivalent of tactical nuclear weapons in our hands, toying with the triggers, and act surprised when occasionally they go off. Catherine Deveny, a weekly columnist for The Age, was summarily dismissed last week because of some messages she posted over Twitter during the Logies broadcast. It seems she hadn’t thought through the danger of sending an obscene – but comedic – message to thousands of people, a message that would be picked up and sent again, and sent again, and sent again, until the tabloid newspapers and television shows, smelling blood in the water, got in on the action. When you’re well-connected, everything is essentially public. There’s no firm boundary between your private sphere and your public life once you allow thousands of others a look in. That can be a good thing if one is hungry for celebrity and fame – Kim Kardashian is an excellent example of this – but it can also accelerate a drive to self-destruction (witness Miranda Devine’s comments from Sunday). We live within a social amplifier, and it’s always turned up to 11. When we scream, we can be heard around the world, but now our whispers sound like shouts.

This means that no one can be silenced, anywhere. Last June, the entire world watched as an abortive Iranian revolution broke out on the streets of Tehran, viewing clips shot on mobile handsets, uploaded to YouTube, tagged, then picked up and shared throughout social networks like Twitter, which brought them to the attention of CNN, the New York Times, and the US State Department. Mobiles brought into North Korea puncture the tightly held reins of state control as information and news seeps across the border with China, the human connection amplified by a social technology. It’s no longer the CIA or ASIO station chief who gathers intelligence from far-flung places. It courses through our human networks.

You can begin to see the shape of this revolution-in-progress. Everything is so new, so rough, so raw, so innocent of intention that we really don’t know where we are going. We’re all stumbling through this doorway together. Each of us holds our connections to one another, like balloons that, in sufficient numbers, might cause us to take flight. We’re lifting off and gaining speed. Whether we’re a glider or a guided missile is up to us. We must pause, take stock, and ask ourselves what we want from these powerful new tools. And, in return, ask what we must be prepared to accept.

III: Threat Assessment

Individuals are becoming radically hyper-empowered. Our connections give us capabilities undreamt of a generation ago. As individuals who assess the various risks for your organizations, you’ve just learned about a brand new one, a threat that will – relatively quickly – dwarf nearly all others. The risk of hyperconnectivity is coming at you from three distinct but interrelated axes: hyper-empowered individuals who want to interact with your organizations; hyper-empowered individuals who compose your organizations; and your organizations, when they grasp the nettle of hyperconnectivity.

What do you do when a hyperconnected individual wants to become a customer, or just interact in some way with your organization? What happens when an existing customer becomes hyperconnected? Both of these situations are becoming commonplace. My friend who had her troubles with American Express typifies this sort of threat. She had a long-term relationship with the company, but in the last years of that relationship she became hyper-empowered. American Express didn’t know this – probably wouldn’t have understood it – and failed to manage the relationship when she ran into trouble.

The key attitudes for managing external relationships with hyperconnected individuals are humility and openness. American Express had no idea what was going on because they weren’t plugged into what my friend was saying to thousands of her followers. They didn’t consider her worth listening to. There’s no reason for this sort of thing to happen. Excellent tools exist that allow you to monitor what is being said about your organization, right now, who is saying it, and where. You can keep your finger on the pulse; when a customer has an issue, you can respond in a timely manner, humbly and transparently. Social media places an enormous value on transparency: unless someone’s motives – and connections – are apparent to you, you have no real reason to trust them, and no basis upon which to build that trust.
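For the technically minded, that sort of listening is easy to sketch. The snippet below is a toy illustration only – the messages and usernames are invented, and a real monitoring tool would pull live data from a platform’s search API rather than a hard-coded list:

```python
# Minimal sketch of brand-mention monitoring over a stream of messages.
# The message data here is invented for illustration; a production tool
# would subscribe to a social platform's search or streaming API.

def find_mentions(messages, brand):
    """Return the messages that mention the brand, case-insensitively."""
    needle = brand.lower()
    return [m for m in messages if needle in m["text"].lower()]

messages = [
    {"user": "traveller", "text": "American Express halved my limit with no warning!"},
    {"user": "foodie",    "text": "Best laksa in Brisbane, hands down."},
    {"user": "traveller", "text": "Still on hold with AmEx customer service..."},
]

for hit in find_mentions(messages, "american express"):
    print(hit["user"], "->", hit["text"])
```

The point is less the code than the posture: once mentions are surfaced automatically, someone in the organization can respond while the complaint is still a conversation, not a rant.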

This isn’t a difficult policy to implement, but the responsibility for listening doesn’t lie with a single individual or department within your organization. Responsibility is spread throughout the organization; that’s the only way your organization will be able to handle all of the hyperconnected customers you do business with. Spread the load. As the proverb has it, ‘many hands make light work’, and that same rule applies here. Make listening to customers a priority throughout your organizations. If you don’t, those customers will use their amplified capabilities to make your life a living hell.

Employees within your organizations don’t leave their own networks at the door when they walk into the office. Although employers often block access to services like Facebook and Twitter from employee workstations, mobiles and pervasive high-speed wireless connectivity make that restriction increasingly meaningless. Employees will connect and stay connected throughout the day, regardless of your stated policy. Soon enough, you will be encouraging them to stay connected, in order to share the burden of all that listening. Right now, your employees are well connected, but poorly disciplined. They don’t know the right way to do things. Don’t blame them for this. It’s all very new, and there hasn’t been a lot of guidance.

If you walk out of today’s talk with any one thing buzzing in your head, let it be this: develop a social media policy for your employees. Employees want to know how they can be connected in the office without damaging your reputation or their position. In the absence of a social media policy, organizations will get into all sorts of prangs that could have been avoided. Case in point: last week’s sacking of Age columnist Catherine Deveny happened, in large part, because Fairfax has no social media policy. There were no guidelines for what constituted acceptable behavior, or even which behavior was ‘on the clock’ versus ‘off the clock’. Without these sorts of guidelines, hyperconnected employees will make their own decisions – putting your organizations, your stakeholders and your brands at risk.

Two well-known Australian organizations have established their own social media policies. The ABC boiled theirs down to four simple rules:

1) Do not mix the personal and the professional in ways likely to bring the ABC into disrepute;

2) Do not undermine your effectiveness at work;

3) Do not imply ABC endorsement of your personal views;

4) Do not disclose confidential information obtained through work.

This could be summed up with ‘use common sense’, but spelled out as it is here, the ABC has given its employees a framework that allows them to both regulate and embrace social media.

Telstra’s policy is wordier – it runs to five pages – but it is, in essence, very similar. It is good that Telstra has a social media policy, but that policy was only developed after a very public and very embarrassing incident. Last year, Telstra employee Leslie Nassar, who posted to Twitter pseudonymously under the account ‘Fake Stephen Conroy’, revealed his identity. When Telstra realized that one of their employees daily satirized the senator charged with ministerial oversight of their organization, the company was appalled, and quickly moved to fire Nassar – only to find that it couldn’t, because Nassar had violated no stated policy or conditions of employment. Shortly after that, Telstra developed and promulgated its social media guidelines. Learn from Telstra’s mistake. This same sort of PR and political catastrophe needn’t happen in your organizations, but I guarantee that it will, if you do not develop a social media policy. So please, get started immediately.

Finally, what happens when organizations hyperconnect? For hundreds of years, organizations have been based on rigid hierarchies and restricted flows of information. Hyperconnectivity puts paid to the org chart, replacing it with a dense set of hyperconnections between individuals within the organization, and between organizations: from each according to his ability, to each according to his need. We don’t really understand much about this new form of organization, other than to say that it looks very little like what we are familiar with today. But the pressure from hyperconnected individuals – both within and outside of the organization – will only increase, and to accommodate this pressure, the organization will increasingly find itself embedded in hyperconnections. This is the final leg of the revolution, still some years away, but one which requires careful planning today. Can your organization handle itself as it connects broadly to a planet where everyone is connected broadly? Will it maintain its own integrity, or will it dissolve, merge, or disintegrate? This is a question that businesses need to ask, that schools need to ask, that governments need to ask. Everything from mass production to service delivery is being re-thought and re-shaped by our hyperconnectivity.

Organizations that master hyperconnectivity, putting social media to work, experience a leap forward in productivity. That leap forward comes at a price. Every tool that enhances productivity also changes everyone who uses it. None of us, as individuals or organizations, will be left behind, even if we choose to unplug, because we remain completely connected to a human world which is increasingly hyperconnected. There is no going back, nor any particular safety in the present. Instead, we need to connect, and together use the best of what we’ve got – which is substantial, because there are plenty of smart people in all your organizations, throughout the nation, and the world – to manage this transition. This could be a nearly bloodless revolution, if we can remember that, at our essence, we are the connected species. Though it may seem chaotic, this is not a collapse. It is a culmination.

When I came to Australia six years ago, to seek my fame and fortune, business communications had remained largely unchanged for nearly a century. You could engage in face-to-face conversation – something humans have been doing since we learned to speak, countless thousands of years ago – or, if distance made that impossible, you could drop a letter into the post. Australia Post is an excellent organization, and seems to get all of the mail delivered within a day or two – quite an accomplishment in a country as dispersed and diffuse as ours.

In the twentieth century, the telephone became the dominant form of business communication; the Postmaster-General’s Department wired the nation up, and let us talk to one another. Conversation, mediated by the telephone, became the primary mode of communication. About twenty years ago the facsimile machine dropped in price dramatically, and we could now send images over phone lines.

The facsimile translates images into data and back into images again. That’s when the critical threshold was crossed: from that point on, our communications have always centered on data. The Internet arrived in 1995, and broadband in 2001. In the first years of Internet usage, electronic mail was both the ‘killer app’ and the thing that began to supplant the telephone for business correspondence. Electronic mail is asynchronous – you can always pick it up later. Email is non-local, particularly when used through a service such as Hotmail or Gmail – you can get it anywhere. Until mobiles started to become pervasive for business uses, the telephone was always a hit-or-miss affair. Electronic mail is a hit, every time.

Such was the business landscape when I arrived in Australia. The Web had arrived, and businesses eagerly used it as a publishing medium – a cheap way of getting information to their clients and customers. But the Web was changing. It had taken nearly a decade of working with the Web, day-to-day, before we discovered that the Web could become a fully-fledged two-way medium: the Web could listen as well as talk. That insight changed everything. The Web morphed into a new beast, christened ‘Web 2.0’, and everywhere the Web invited us to interact, to share, to respond, to play, to become involved. This transition has fundamentally changed business communication, and it’s my goal this morning to outline the dimensions of that transformation.

This transformation unfolds in several dimensions. The first of these – and arguably the most noticeable – is how well-connected we are these days. So long as we’re in range of a cellular radio signal, we can be reached. The number of ways we can be reached is growing almost geometrically. Five years ago we might have had a single email address. Now we have several – certainly one for business, and one for personal use – together with an account on Facebook (nearly eight million of the 22 million Australians have Facebook accounts), perhaps another account on MySpace, another on Twitter, another on YouTube, another on Flickr. We can get a message or maintain contact with someone through any of these connections. Some individuals have migrated to Facebook for the majority of their communications – there’s no spam, and they’re assured the message will be delivered. Among under-25s, electronic mail is seen as a technology of the ‘older generation’, something that one might use for work, but has no other practical value. Text messaging and messaging-via-Facebook have replaced electronic mail.

This increased connectivity hasn’t come for free. Each of us is now under a burden to maintain all of the various connections we’ve opened. At the most basic level, we must at least monitor all of these channels for incoming messages. That can easily get overwhelming, as each channel clamors for attention.

But wait. We’ve dropped Facebook and Twitter into the conversation before I’ve even explained what they are and how they work. We just take them as a fact of life these days, but they’re brand new. Facebook was unknown just three years ago, and Twitter didn’t zoom into prominence until eighteen months ago. Let’s step back and take a look at what social networks are. In a very real way, we’ve always known exactly what a social network is: since we were very small we’ve been reaching out to other people and establishing social relationships with them. In the beginning that meant our mothers and fathers, sisters and brothers. As we grew older that list might grow to include some of the kids in the neighborhood, or at pre-kindy, and then our school friends. By the time we make it to university, that list of social relationships is actually quite long. But our brains have limited space to store all those relationships – it’s actually the most difficult thing we do, the most cognitively all-encompassing task. Forget physics – relationships are harder, and take more brainpower.

Nature has set a limit of about one hundred and fifty on the social relationships we can manage in our heads. That’s not a static number – it’s not as though as soon as you reach 150, you’re done, full. Rather, it’s a sign of how many relationships of importance you can manage at any one time. None of us, not even the most socially adept, can go very much beyond that number. We just don’t have the grey matter for it.

Hence, fifty years ago mankind invented the Rolodex – a way of keeping track of all the information we really should remember but can’t possibly begin to absorb. A real, living Rolodex (and there are few of them, these days) is a wonder to behold, with notes scribbled in the margins, business cards stapled to the backs of the Rolodex cards, and a glorious mess of information, all alphabetically organized. The Rolodex was mankind’s first real version of the modern, digital social network. But a Rolodex doesn’t think for itself; a Rolodex cannot draw out the connections between the different cards. A Rolodex does not make explicit what we already know: that we live in a very interconnected world, and many of our friends and associates are also friends and associates of our friends and associates.

That is precisely what Facebook gives us. It makes those implicit connections explicit. It allows those connections to become conduits for ever-greater levels of connection. Once those connections are made, once they become a regular feature of our life, we can grow beyond the natural limit of 150. That doesn’t mean you can manage any of these relationships well – far from it. But it does mean that you can keep the channels of communication open. That’s really what all of these social networks are: turbocharged Rolodexes, which allow you to maintain far more relationships than ever before possible.

Once these relationships are established, something begins to happen quite naturally: people begin to share. What they share is often driven by the nature of the relationship – though we’ve all seen examples where individuals ‘over-share’ inappropriately, confusing business and social channels of communication. That sort of thing is very easy to do on a social network such as Facebook, because it doesn’t provide an easy method to send messages out to different groups of friends. We might want a social network where business friends get something very formal, while close friends get that photo of you doing tequila shots at last weekend’s birthday party. It’s a great idea, isn’t it? But it can’t be done. Not on Facebook, not on Twitter. Your friends are all lumped together into one undifferentiated whole. That’s one way that those social networks are very different from the ones inside our heads. And it’s something to be constantly aware of when sharing through social networks.

That said, this social sharing has become an incredibly potent force. More videos are uploaded to YouTube every day than all television networks all over the world produce in a year. It may not be material of the same quality, but that doesn’t matter – most of those videos are only meant to be seen among a small group of family or friends. We send pictures around, we send links around, we send music around (though that’s been cause for a bit of trouble), we share things because we care about them, and because we care about the people we’re sharing with. Every act of sharing, business or personal, brings the sharer and the recipient closer together. It truly is better to give than receive. On the other hand, we’re also drowning in shared material. There’s so much, coming from every corner, through every one of these social networks, there’s no possible way to keep up. So, most of us don’t. We cherry-pick, listening to our closest friends and associates: the things they share with us are the most meaningful. We filter the noise and hope that we’re not missing anything very important. (We usually are.)

In certain very specific situations, sharing can produce something greater than the sum of its parts. A community can get together and decide to pool what it knows about a particular domain of knowledge, can ‘wise up’ by sharing freely. This idea of ‘collective intelligence’ producing a shared storehouse of knowledge is the engine that drives sites like Wikipedia. We all know Wikipedia, we all know how it works – anyone can edit anything in any article within it – but the wonder of Wikipedia is that it works so well. It’s not perfectly accurate – nothing ever is – but it is good enough to be useful nearly all the time. Here’s the thing: you can come to Wikipedia ignorant and leave it knowing something. You can put that knowledge to work to make better decisions than you would have in your state of ignorance. Wikipedia can help you wise up.

Wikipedia isn’t the only example of shared knowledge. A decade ago a site named TeacherRatings.com went online, inviting university students to provide ratings of their professors, lecturers and instructors. Today it’s named RateMyProfessors.com, is owned by MTV Networks, and has over ten million ratings of one million instructors. This font of shared knowledge has become so potent that students regularly consult the site before deciding which classes they’ll take next semester at university. Universities can no longer saddle students with poor teachers (who may also be fantastic researchers). There are bidding wars taking place for the lecturers who get the highest ratings on the site. This sharing of knowledge has reversed a power relationship between a university and its students which stretches back nearly a thousand years.

Substitute the word ‘business’ for ‘university’ and ‘customers’ for ‘students’ and you see why this is so significant. In an era where we’re hyperconnected, where people share, and share knowledge, things are going to work very differently than they did before. These all-important relationships between businesses and their customers (potential and actual) have been completely rewritten. Let’s talk about that.

II. Linked Out

Of all the challenges you face in your professional practice, the greatest of them comes from a website that, at first glance, seems completely innocuous. LinkedIn is the “professional” social network, where individuals re-create their C.V. online, and, entry by entry, link their profiles to other people they have worked with over the years.

Just that alone is something entirely new and very potent. When a potential employer reads a conventional C.V., they don’t see the network of connections the candidate created at every position – a network which tells the employer much of what they need to know about the candidate’s suitability. On LinkedIn, all of this implicit information is suddenly revealed explicitly. An employer can ‘walk the chain’ of associations long before a candidate submits any references. The LinkedIn profile is, quite literally, the reference.

This means that a LinkedIn profile is more valuable than any hand-crafted C.V., because it is, on the whole, a more accurate read of the candidate. A candidate’s connections tell you everything about who the candidate is. They certainly tell you more than a list of hand-picked referees ever could. LinkedIn is simply a better way of doing business.

This means that LinkedIn has caught on like a bushfire in the Big End of town. Throughout the nation, employers look for the LinkedIn profile of potential candidates, and these profiles carry more weight than any words from the candidate, or a recruiter, or, really, anyone else. This transformation happened suddenly over the last 12 months, as businesspeople reached a critical mass of involvement with LinkedIn. LinkedIn benefits from the ‘network effect’: the more people who create profiles on LinkedIn, the more valuable the service becomes – because it’s more likely you’ll find someone’s profile there. That, in turn, makes it more likely another individual will create a LinkedIn profile, making it more valuable, and so on. It also means that any candidate without a LinkedIn profile is immediately suspect – what’s he or she trying to hide?
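The network effect has a simple arithmetic behind it, often summarized as Metcalfe’s law: the number of possible connections grows roughly with the square of the membership. The figures below are purely illustrative, not LinkedIn’s actual numbers:

```python
# Toy illustration of the network effect (Metcalfe's law): possible
# pairwise connections grow roughly as the square of membership.
# Membership figures are invented for illustration.

def possible_links(n):
    """Number of distinct pairs among n members: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for members in (10, 100, 1_000, 10_000):
    print(f"{members:>6} members -> {possible_links(members):>11,} possible links")
```

Note that multiplying membership by ten multiplies the possible links by roughly a hundred, which is why each new profile makes the whole service disproportionately more valuable.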

LinkedIn has become the new standard in recruiting. But don’t look too closely, or you’ll get scared. LinkedIn takes one of the things the recruiter brings to the table – an extensive and wide-ranging set of contacts – and reproduces it electronically in such a way that anyone can take advantage of it. In other words, everyone is now on a much more equal footing. The time and energy you have dedicated to building up those networks can now be matched by someone spending a lot less time on it – someone who is employing the latest tools.

The big worry, from here forward, is that recruiters as we have known them will be obsolesced by social networking technologies. As we get further into the social media revolution, and these tools become more refined, many of the functions of the recruiter-as-networker, recruiter-as-matchmaker, and recruiter-as-talent-finder will be subsumed into these social networks. Already I can dial and tune searches on LinkedIn to give me, say, a list of electrical engineers who work in Melbourne. That’s a list I can work from, if I’m doing a personnel search. I can message those folks through LinkedIn, to find out if they’re interested in a conversation about a potential opportunity. The platform provides the basic set of capabilities to amplify my effectiveness – without any substantial investment.

People will begin to ask why they need recruiters. People are already beginning to ask this question, as they see the social network providing the same capabilities – and for free. This is something that should scare you a little bit, because it shows you that recruiting, as we’ve known it, has about as much life expectancy as a buggy-whip maker did in 1915. There are still a few years left in which recruiting will be a profitable business, but after that it will simply be overwhelmed by social networking tools which can amplify the powers of the average person so effectively that recruiting simply becomes another task on offer, like sending a message or posting a photo.

As people are drawn together over social networks, they get a better sense of the talents of those around them. This talent-spotting used to be the sine qua non of the recruiter. Now that each of us can manage connections far beyond the natural limit of 150, we each learn our respective strengths. We use systems like LinkedIn to help us keep tally of those strengths. We use the tools to deploy those strengths. Everything happens because the tools empower us. But will they empower us so much that recruiters become redundant?

You need to have a good think about your business, and about the way you practice your business. You need to have a good look at the tools – particularly LinkedIn, but also Twitter and Facebook. You’ll learn that these tools are good at some things, and lousy at others. Here’s the question: are you good at the things the tools aren’t? Tools are no substitute for relationships. Even though the tools give us some false sense of relationship, it’s not the real thing. Recruiting is the real thing. But, is that enough?

III. Social Media Gods

In times long past – and by this, I mean just five years ago – recruiters were the masters of the Rolodex. You survived and thrived by knowing everybody, everywhere, with talent, and everybody, everywhere, who needed that talent. That in itself is quite a talent. But that talent is no longer enough. It is, however, the springboard to get you to the next level.

Fasten your seatbelts. You’re about to get launched headlong into the future. I want you to imagine a time – let’s say, tomorrow afternoon – when the average person now has quite extraordinary Rolodex capabilities, courtesy of the social networks, and where you, the masters, have gone beyond that into regions undreamed of. Imagine being able to take each of your contacts, and use those as starting points for new contacts within new networks. You’d have an inner ring of close contacts – just as you do today, but multiplied by the capabilities of the tools to support and nurture these contacts. Outside that inner ring, you’d have consecutive rings of contacts-to-contacts, and contacts-to-contacts-to-contacts, and so on, all the way out until the network simply becomes too diffuse and too difficult to maintain.

If this sounds familiar, it’s because it echoes the famed ‘six degrees of separation’, the idea that we are all at most six acquaintances away from any other person on the planet. Australia is a lot smaller than the world; within any particular domain of expertise, there’s really only one or two degrees of separation, whether that’s in filmmaking, medicine, or software engineering. There just aren’t that many of us. Fortunately, that means that our networks aren’t deep: we can more-or-less know everyone involved in our field, with the help of a good Rolodex.
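Degrees of separation are easy to measure once the connections are written down: it’s just the shortest path through the contact graph, found with a breadth-first search. The network below is a made-up miniature, but the technique is exactly what a tool like LinkedIn applies at scale:

```python
# Sketch: measuring 'degrees of separation' in a toy contact graph
# with a breadth-first search. The contact network is invented.
from collections import deque

contacts = {
    "you":   ["alice", "bob"],
    "alice": ["you", "carol"],
    "bob":   ["you", "dave"],
    "carol": ["alice", "erin"],
    "dave":  ["bob"],
    "erin":  ["carol"],
}

def degrees(start, target):
    """Smallest number of hops from start to target, or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return hops
        for friend in contacts.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None

print(degrees("you", "erin"))  # 3 hops: you -> alice -> carol -> erin
```

Each additional hop is a contact-of-a-contact, which is precisely the ring structure described above.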

You have more than a good Rolodex. You have the new tools; you can build a Rolodex of Rolodexes, one Rolodex per discipline, and use that to track everybody, everywhere, who matters. In this future – which is really just tomorrow afternoon – you’ve so leveraged your network resources that each of you sits in the middle of a vast web, and each time there’s a twitch upon a thread, you know about it, because that information is shared throughout your networks, and finds its way toward your receptive ears.

You’re going to need good tools to make this ambitious project a reality, and you’re going to need them for two entirely contradictory reasons: first, to be able to listen to everything going on everywhere, and second, because that chaotic din will deafen you. You need tools to help you find out what’s going on, but, more significantly, you need tools to help you winnow the wheat from the chaff. Being well-connected means bearing the burden of drowning in pointless information. Without the right tools, as you grow your networks you will simply sink under the noise.

What tools? They barely exist today. Google Alerts is one tool that will help keep you abreast of news as it is created on the net. Within the next few months, Google will begin to digest the endless ‘feeds’ created by Facebook and Twitter users, and you’ll be able to search through those as well. But again, there’s just too much there. You likely need a more professional tool, such as Sydney’s own PeopleBrowsr, to sift through the wealth of information that will be generated by your ever-more-encompassing networks of networks.

I should point out – for the more entrepreneurial among you – there is now a market for tools that recruiters need to become better recruiters: tools that harness the networks. Such tools will need to be designed by someone who understands the recruiting business and the network. That means it could be one of you. You could partner with a Google or a PeopleBrowsr, or strike out on your own. If you don’t do it, one of your competitors – either in Australia or overseas – certainly will.

The first half of my advice is simply this: build your networks. Build them out to unimaginable reaches. Use the tools to leverage your capabilities. Use the tools as if your livelihood depended upon it. Because it does. Behind you comes a new generation, unafraid to use the tools to build their networks up. When you go head-to-head against them, those with the best networks – and the best tools – will tend to win. That’s what the next decade looks like, as we transition from the Rolodex to the social network: more and more business will go to the well-networked. So really, there is no choice: adapt or die.

There’s another face to this, one that turns itself outward. Sure, you’ve created this vast and nationwide network to feed you information. But you’ve got to do more than listen. You must present yourself within the network. You must be present. Many people and most companies think that they can use social media as an advertising medium. Plenty of firms set up Facebook pages and Twitter accounts and post lots of advertising messages to an ever-decreasing number of followers.

People don’t want to get spammed. They don’t want to hear your marketing messages over a communications channel that they consider personal. So please, don’t make this mistake. In fact, I’ll go even further – don’t think of the Web as an advertising medium. Sure, it had a few good years where a business presence online was simply a great way to get your marketing materials out there inexpensively, but those days are over. Today everything is about engagement. Engagement begins with conversation.

Conversation is a tricky thing: on the one hand it’s the most natural of human capabilities; on the other hand, it’s fraught with disaster. Social media amplifies both sides of this equation. There are more places for more conversations than ever before, and more opportunities for these conversations to run off the rails. Here are some simple rules of thumb which should keep you out of trouble:

Only go where you’re invited. No one likes a salesman who sticks their foot in the door.

Participate in a conversation from a place of authenticity. Let people know who you are and why you’re there.

Spend time building relationships. Social media is a lot like friendship – it takes time and investment and a bit of love to make it work.

Be consistent. Invest time every single day, or at least with regularity. If you can’t do that, it’s probably better you do nothing at all.

Where are these conversations happening? All around you: on Twitter and Facebook and LinkedIn and YouTube and Flickr and a thousand blogs. They’re happening all the time, everywhere. You probably want to spend some time investigating these conversations before you participate. That’s known as ‘lurking’, and it’s the foundation of successful net relationships. Having an appreciation and an understanding of a community before you participate within it shows respect. Respect will be reciprocated.

That’s about it for today – and frankly, that’s quite a lot. I’ve asked you to re-invent yourselves for the mid-21st century. I’ve asked you to become the gods of social media, to translate your natural role as connectors and facilitators into a greatly amplified form, just so you can remain competitive. I’m not saying that this transition will happen overnight. You have at least a few years to become adept with the tools, and a few more to build out those nationwide networks. But I can promise this: at the close of the 2nd decade of the 21st century, recruiting will look entirely different.

Every social network has a few individuals who are ‘superconnected’, who have many more connections than their peers within the network. Those individuals are the glue who keep the network held together. This is your natural role. The challenge, moving forward, is to remain extraordinary when everyone around you becomes superconnected themselves. It will take some work, and some time, but it can be done. Good luck.

In the US state of North Carolina, the New York Times reports, an interesting experiment has been in progress since the first of February. The “Birds and Bees Text Line” invites teenagers with any questions relating to sex or the mysteries of dating to SMS their question to a phone number. That number connects these teenagers to an on-duty adult at the Adolescent Pregnancy Prevention Campaign. Within 24 hours, the teenager gets a reply to their text. The questions range from the run-of-the-mill – “When is a person not a virgin anymore?” – through the unusual – “If you have sex underwater do u need a condom?” – to the utterly heart-rending – “Hey, I’m preg and don’t know how 2 tell my parents. Can you help?”

The Birds and Bees Text Line is a response to the slow rise in teenage pregnancies in North Carolina, which reached their lowest ebb in 2003 and have been climbing since. Teenagers – who are given state-mandated abstinence-only sex education in school – now have access to another resource, unmediated by teachers or parents, to prevent another generation of teenage pregnancies. Although it’s early days yet, the response to the program has been positive. Teenagers are using the Birds and Bees Text Line.

It is precisely because the Birds and Bees Text Line is unmediated by parental control that it has earned the ire of the more conservative elements in North Carolina. Bill Brooks, president of the North Carolina Family Policy Council, a conservative group, complained to the Times about the lack of oversight. “If I couldn’t control access to this service, I’d turn off the texting service. When it comes to the Internet, parents are advised to put blockers on their computer and keep it in a central place in the home. But kids can have access to this on their cell phones when they’re away from parental influence – and it can’t be controlled.”

If I’d stuffed words into a straw man’s mouth, I couldn’t have come up with a better summation of the situation we’re all in right now: young and old, rich and poor, liberal and conservative. There are certain points where it becomes particularly obvious, such as with the Birds and Bees Text Line, but this example simply amplifies our sense of the present as a very strange place, an undiscovered country that we’ve all suddenly been thrust into. Conservatives naturally react conservatively, seeking to preserve what has worked in the past; Bill Brooks speaks for a large cohort of people who feel increasingly lost in this bewildering present.

Let us assume, for a moment, that conservatism were in the ascendant (this is clearly not the case in the United States, though one could make a good argument that the Rudd Government is, in many ways, more conservative than its predecessor). Let us presume that Bill Brooks and the people for whom he speaks could have the Birds and Bees Text Line shut down. Would that, then, be the end of it? Would we have stuffed the genie back into the bottle? The answer, unquestionably, is no.

Everyone who has used or even heard of the Birds and Bees Text Line would be familiar with what it does and how it works. Once demonstrated, it becomes much easier to reproduce. It would be relatively straightforward to take the same functions performed by the Birds and Bees Text Line and “crowdsource” them, sharing the load across any number of dedicated volunteers who might, through some clever software, automate most of the tasks needed to distribute messages throughout the “cloud” of volunteers. Even if it took a small amount of money to set up and get going, that kind of money would be available from donors who feel that teenage sexual education is a worthwhile thing.
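The “clever software” needn’t be clever at all. At its heart it is only a dispatcher that hands each incoming question to the least-burdened volunteer on duty. A minimal sketch in Python – the names and the least-loaded policy are my own assumptions, not a description of any real system:

```python
import heapq

class VolunteerCloud:
    """Distribute incoming questions across a pool of volunteers.

    Volunteers sit in a min-heap keyed on how many questions they
    currently hold, so each new question goes to whoever is
    least burdened at that moment.
    """

    def __init__(self, volunteers):
        # (current_load, name) pairs; heapq keeps the least-loaded on top
        self._pool = [(0, name) for name in volunteers]
        heapq.heapify(self._pool)
        self.assignments = {name: [] for name in volunteers}

    def dispatch(self, question):
        load, name = heapq.heappop(self._pool)   # least-loaded volunteer
        self.assignments[name].append(question)
        heapq.heappush(self._pool, (load + 1, name))
        return name

# Four questions spread themselves across three volunteers
cloud = VolunteerCloud(["alice", "bob", "carol"])
for q in ["Q1", "Q2", "Q3", "Q4"]:
    cloud.dispatch(q)
```

No volunteer ends up holding more than one question beyond any other – which is the whole point: the load, like the knowledge, is shared.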

In other words, the same sort of engine which powers Wikipedia can be put to work across a number of different “platforms”. The power of sharing allows individuals to come together in great “clouds” of activity, and allows them to focus their activity around a single task. It could be an encyclopedia, or it could be providing reliable and judgment-free information about sexuality to teenagers. The form matters not at all: what matters is that it’s happening, all around us, everywhere throughout the world.

The cloud, this new thing, this is really what has Bill Brooks scared, because it is, quite literally, ‘out of control’. It arises naturally out of the human condition of ‘hyperconnection’. We are so much better connected than we were even a decade ago, and this connectivity breeds new capabilities. The first of these capabilities is the pooling and sharing of knowledge – or ‘hyperintelligence’. Consider: everyone who reads Wikipedia is potentially as smart as the smartest person who’s written an article in Wikipedia. Wikipedia has effectively banished ignorance born of want of knowledge. The Birds and Bees Text Line is another form of hyperintelligence, connecting adults with knowledge to teenagers in desperate need of that knowledge.

Hyperconnectivity also means that we can carefully watch one another, and learn from one another’s behaviors at the speed of light. This new capability – ‘hypermimesis’ – means that new behaviors, such as the Birds and Bees Text Line, can be seen and copied very quickly. Finally, hypermimesis means that communities of interest can form around particular behaviors, ‘clouds’ of potential. These communities range from the mundane to the arcane, and they are everywhere online. But only recently have they discovered that they can translate their community into doing, putting hyperintelligence to work for the benefit of the community. This is the methodology of the Adolescent Pregnancy Prevention Campaign. This is the methodology of Wikipedia. This is the methodology of Wikileaks, which seeks to provide a safe place for whistle-blowers who want to share the goods on those who attempt to defraud or censor or suppress. This is the methodology of ANONYMOUS, which seeks to expose Scientology as a ridiculous cult. How many more examples need to be listed before we admit that the rules have changed, that the smooth functioning of power has been terrifically interrupted by these other forces, now powers in their own right?

II: Affairs of State

Don’t expect a revolution. We will not see masses of hyperconnected individuals, storming the Winter Palaces of power. This is not a proletarian revolt. It is, instead, rather more subtle and complex. The entire nature of power has changed, as have the burdens of power. Power has always carried with it the ‘burden of omniscience’ – that is, those at the top of the hierarchy have to possess a complete knowledge of everything of importance happening everywhere under their control. Where they lose grasp of that knowledge, that’s the space where coups, palace revolutions and popular revolts take place.

This new power that flows from the cloud of hyperconnectivity carries a different burden, the ‘burden of connection’. In order to maintain the cloud, and our presence within it, we are beholden to it. We must maintain each of the social relationships, each of the informational relationships, each of the knowledge relationships and each of the mimetic relationships within the cloud. Without that constant activity, the cloud dissipates, evaporating into nothing at all.

This is not a particularly new phenomenon; Dunbar’s Number demonstrates that we are beholden to the ‘tribe’ of our peers, the roughly 150 individuals who can find a place in our heads. In pre-civilization, the cloud was the tribe. Should the members of the tribe interrupt the constant reinforcement of their social, informational, knowledge-based and mimetic relationships, the tribe would dissolve and disperse – as happens to a tribe when it grows beyond the confines of Dunbar’s Number.

In this hyperconnected era, we can pick and choose which of our human connections deserves reinforcement; the lines of that reinforcement shape the scope of our power. Studies of Japanese teenagers using mobiles and twenty-somethings on Facebook have shown that, most of the time, activity is directed toward a small circle of peers, perhaps six or seven others. This ‘co-presence’ is probably a modern echo of an ancient behavior, presumably related to the familial unit.

While we might desire to extend our power and capabilities through our networks of hyperconnections, the cost associated with such investments is very high. Time invested in a far-flung cloud is time lost to networks closer to home. Yet individuals will nonetheless often dedicate themselves to some cause greater than themselves, despite the high price paid, drawn to some higher ideal.

The Obama campaign proved an interesting example of the price of connectivity. During the Democratic primary for the state of New York (which Hillary Clinton was expected to win easily), so many individuals contacted the campaign through its website that the campaign itself quickly became overloaded with the number of connections it was expected to maintain. By election day, the campaign staff in New York had retreated from the web, back to using mobiles. They had detached from the ‘cloud’ connectivity they used the web to foster, instead focusing their connectivity on the older model of the six or seven individuals in co-present connection. The enormous cloud of power which could have been put to work in New York lay dormant, unorganized, talking to itself through the Obama website, but effectively disconnected from the Obama campaign.

For each of us, connectivity carries a high price. For every organization which attempts to harness hyperconnectivity, the price is even higher. With very few exceptions, organizations are structured along hierarchical lines. Power flows from the bottom to the top. Not only does this create the ‘burden of omniscience’ at the highest levels of the organization, it also fundamentally mismatches the flows of power in the cloud. When the hierarchy comes into contact with an energized cloud, the ‘discharge’ from the cloud to the hierarchy can completely overload the hierarchy. That’s the power of hyperconnectivity.

Another example from the Obama campaign demonstrates this power. Project Houdini was touted by the Obama campaign as a system which would get the grassroots of the campaign to funnel their GOTV results into a centralized database, which could then be used to track down individuals who hadn’t voted, in order to offer them assistance in getting to their local polling station. The campaign grassroots received training in Project Houdini, went through a field test of the software and procedures, then waited for election day. On election day, Project Houdini lasted no more than 15 minutes before it crashed under the incredible number of empowered individuals who attempted to plug data into it. Although months in the making, Project Houdini proved that a centralized and hierarchical system for campaign management couldn’t actually cope with the ‘cloud’ of grassroots organizers.

In the 21st century we now have two oppositional methods of organization: the hierarchy and the cloud. Each of them carries its own costs and its own strengths. Neither has yet proven to be wholly better than the other. One could make an argument that both have their own roles in the future, and that we’ll be spending a lot of time learning which works best in a given situation. What we have already learned is that these organizational types are mostly incompatible: unless very specific steps are taken, the cloud overpowers the hierarchy, or the hierarchy dissipates the cloud. We need to think about the interfaces that can connect one to the other. That’s the area that all organizations – and very specifically, non-profit organizations – will be working through in the coming years. Learning how to harness the power of the cloud will mark the difference between a modest success and an overwhelming one. Yet working with the cloud will present organizational challenges of an unprecedented order. There is no way that any hierarchy can work with a cloud without becoming fundamentally changed by the experience.

III: Affaire de Coeur

All organizations are now confronted with two utterly divergent methodologies for organizing their activities: the tower and the cloud. The tower seeks to organize everything in hierarchies, control information flows, and keep the power heading from bottom to top. The cloud isn’t formally organized, pools its information resources, and has no center of power. Despite all of its obvious weaknesses, the cloud can still transform itself into a formidable power, capable of overwhelming the tower. To push the metaphor a little further, the cloud can become a storm.

How does this happen? What is it that turns a cloud into a storm? Jimmy Wales has said that the success of any language-variant version of Wikipedia comes down to the dedicated efforts of five individuals. Once he spies those five individuals hard at work in Pashto or Kazakh or Xhosa, he knows that edition of Wikipedia will become a success. In other words, five people have to take the lead, leading everyone else in the cloud with their dedication, their selflessness, and their openness. This number probably holds true in a cloud of any sort – find five like-minded individuals, and the transformation from cloud to storm will begin.

At the end of that transformation there is still no hierarchy. There are, instead, concentric circles of involvement. At the innermost, those five or more incredibly dedicated individuals; then a larger circle of a greater number, who work with that inner five as time and opportunity allow; and so on, outward, at decreasing levels of involvement, until we reach those who simply contribute a word or a grammatical change, and have no real connection with the inner circle, except in commonality of purpose. This is the model for Wikipedia, for Wikileaks, and for ANONYMOUS. This is the cloud model, fully actualized as a storm. At this point the storm can challenge any tower.

But the storm doesn’t have things all its own way; to present a challenge to a tower is to invite the full presentation of its own power, which is very rude, very physical, and potentially very deadly. Wikipedians at work on the Farsi version of the encyclopedia face arrest and persecution by Iran’s Revolutionary Guards and religious police. Just a few weeks ago, after the contents of the Australian government’s internet blacklist were posted to Wikileaks, the German government raided the home of the man who owns the domain name for Wikileaks in Germany. The tower still controls most of the power apparatus in the world, and that power can be used to squeeze any potential competitors.

But what happens when you try to squeeze a cloud? Effectively, nothing at all. Wikipedia has no head to decapitate. Jimmy Wales is an effective cheerleader and face for the press, but his presence isn’t strictly necessary. There are over 2000 Wikipedians who handle the day-to-day work. Locking all of them away, while possible, would only encourage further development in the cloud, as other individuals moved to fill their places. Moreover, any attempt to disrupt the cloud only makes the cloud more resilient. This has been demonstrated conclusively by the evolution of ‘darknets’, private file-sharing networks, which grew up as the widely available public file-sharing networks, such as Napster, were shut down by the copyright owners. Attacks on the cloud only improve the networks within the cloud, only make the leaders more dedicated, only increase the information and knowledge sharing within the cloud. Trying to disperse a storm only intensifies it.

These are not idle speculations; the tower will seek to contain the storm by any means necessary. The 21st century will increasingly look like a series of collisions between towers and storms. Each time the storm emerges triumphant, the tower will become more radical and determined in its efforts to disperse the storm, which will only result in a more energized and intensified storm. This is not a game that the tower can win by fighting. Only by opening up and adjusting itself to the structure of the cloud can the tower find any way forward.

What, then, is leadership in the cloud? It is not like leadership in the tower. It is not a position wrought from power, but authority in its other, and more primary meaning, ‘to be the master of’. Authority in the cloud is drawn from dedication, or, to use rather more precise language, love. Love is what holds the cloud together. People are attracted to the cloud because they are in love with the aim of the cloud. The cloud truly is an affair of the heart, and these affairs of the heart will be the engines that drive 21st century business, politics and community.

Author and pundit Clay Shirky has stated, “The internet is better at stopping things than starting them.” I reckon he’s wrong there: the internet is very good at starting things that stop things – which means it is very good at starting things. Making the jump from an amorphous cloud of potentiality to a forceful storm requires the love of just five people. That’s not much to ask. If you can’t get that many people in love with your cause, it may not be worth pursuing.

Conclusion: Managing Your Affairs

All 21st century organizations need to recognize and adapt to the power of the cloud. It’s either that or face a death of a thousand cuts, the slow ebbing of power away from hierarchically-structured organizations as newer forms of organization supplant them. But it need not be this way. It need not be an either/or choice. It could be a future of and-and-and, where both forms continue to co-exist peacefully. But that will only come to pass if hierarchies recognize the power of the cloud.

This means you.

All of you have your own hierarchical organizations – because that’s how organizations have always been run. Yet each of you is surrounded by your own clouds: community organizations (both in the real world and online), bulletin boards, blogs, and all of the other Web2.0 supports for the sharing of connectivity, information, knowledge and power. You are already halfway invested in the cloud, whether or not you realize it. And that’s also true for the people you serve, your customers and clients and interest groups. You can’t simply ignore the cloud.

How then should organizations proceed?

First recommendation: do not be scared of the cloud. It might be some time before you can come to love the cloud, or even trust it, but you must at least move to a place where you are not frightened by a constituency which uses the cloud to assert its own empowerment. Reacting out of fright will only lead to an arms race, a series of escalations where your hierarchy attempts to contain the cloud, and the cloud – which is faster, smarter and more agile than you can ever hope to be – outwits you, again and again.

Second: like likes like. If you can permute your organization so that it looks more like the cloud, you’ll have an easier time working with the cloud. Case in point: because of ‘message discipline’, only a very few people are allowed to speak for an organization. Yet, because of the exponential growth in connectivity and Web2.0 technologies, everyone in your organization has more opportunities to speak for your organization than ever before. Can you release control over message discipline, and empower your organization to speak for itself, from any point of contact? Yes, this sounds dangerous, and yes, there are some dangers involved, but the cloud wants to be spoken to authentically, and authenticity has many competing voices, not a single monolithic tone.

Third, and finally, remember that we are all involved in a growth process. The cloud of last year is not the cloud of next year. The answers that satisfied a year ago are not the same answers that will satisfy a year from now. We are all booting up very quickly into an alternative form of social organization which is only just now spreading its wings and testing its worth. Beginnings are delicate times. The future will be shaped by actions in the present. This means there are enormous opportunities to extend the capabilities of existing organizations, simply by harnessing them to the changes underway. It also means that tragedies await those who fight the tide of times too single-mindedly. Our culture has already rounded the corner, and made the transition to the cloud. It remains to be seen which of our institutions and organizations can adapt themselves, and find their way forward into sharing power.

If a picture paints a thousand words, you’ve just absorbed a million, the equivalent of one-and-a-half Bibles. That’s the way it is, these days. Nothing is small, nothing discrete, nothing bite-sized. Instead, we get the fire hose, 24 x 7, a world in which connection and community have become so colonized by intensity and amplification that nearly nothing feels average anymore.

Is this what we wanted? It’s become difficult to remember the before-time, how it was prior to an era of hyperconnectivity. We’ve spent the last fifteen years working out the most excellent ways to establish, strengthen and multiply the connections between ourselves. The job is nearly done, but now, as we put down our tools and pause to catch our breath, here comes the question we’ve dreaded all along…

Why. Why this?

I gave this question no thought at all as I blithely added friends to Twitter, shot past the limits of Dunbar’s Number, through the ridiculous, and then outward, approaching the sheer insanity of 1200 so-called-“friends” whose tweets now scroll by so quickly that I can’t focus on any one saying any thing because this motion blur is such that by the time I think to answer in reply, the tweet in question has scrolled off the end of the world.

This is ludicrous, and can not continue. But this is vital and can not be forgotten. And this is the paradox of the first decade of the 21st century: what we want – what we think we need – is making us crazy.

Some of this craziness is biological.

Eleven million years of evolution, back to Proconsul, the ancestor of all the hominids, have crafted us into quintessentially social creatures. We are human to the degree we are in relationship with our peers. We grew big forebrains, to hold banks of the chattering classes inside our own heads, so that we could engage these simulations of relationships in never-ending conversation. We never talk to ourselves, really. We engage these internal others in our thoughts, endlessly rehearsing and reliving all of the social moments which comprise the most memorable parts of life.

It’s crowded in there. It’s meant to be. And this has only made it worse.

No man is an island. Man is only man when he is part of a community. But we have limits. Homo sapiens sapiens spent two hundred thousand years exploring the resources afforded by a bit more than a liter of neural tissue. The brain has physical limits (we have to pass through the birth canal without killing our mothers) so our internal communities top out at Dunbar’s magic Number of 150, plus or minus a few.

Dunbar’s Number defines the crucial threshold between a community and a mob. Communities are made up of memorable and internalized individuals; mobs are unique in their lack of distinction. Communities can be held in one’s head, can be tended and soothed and encouraged and cajoled.

Four years ago, when I began my research into sharing and social networks, I asked a basic question: Will we find some way to transcend this biological limit, break free of the tyranny of cranial capacity, grow beyond the limits of Dunbar’s Number?

After all, we have the technology. We can hyperconnect in so many ways, through so many media, across the entire range of sensory modalities, it is as if the material world, which we have fashioned in our own image, wants nothing more than to boost our capacity for relationship.

And now we have two forces in opposition, both originating in the mind. Our old mind hews closely to the community and Dunbar’s Number. Our new mind seeks the power of the mob, and the amplification of numbers beyond imagination. This is the central paradox of the early 21st century, this is the rift which will never close. On one side we are civil, and civilized. On the other we are awesome, terrible, and terrifying. And everything we’ve done in the last fifteen years has simply pushed us closer to the abyss of the awesome.

We can not reasonably put down these new weapons of communication, even as they grind communities beneath them like so many old and brittle bones. We can not turn the dial of history backward. We are what we are, and already we have a good sense of what we are becoming. It may not be pretty – it may not even feel human – but this is things as they are.

When the historians of this age write their stories, a hundred years from now, they will talk about amplification as the defining feature of this entire era, the three hundred year span from the industrial revolution to the emergence of the hyperconnected mob. In the beginning, the steam engine amplified the power of human muscle – making both human slavery and animal power redundant. In the end, our technologies of communication amplified our innate social capabilities, which natural selection has consistently favored across eleven million years. Above and beyond all of our other natural gifts, those humans who communicate most effectively stand the greatest chance of passing their genes along to subsequent generations. It’s as simple as that. We talk our partners into bed, and always have.

The steam engine transformed the natural world into a largely artificial environment; the amplification of our muscles made us masters of the physical world. Now, the technologies of hyperconnectivity are translating the natural world, ruled by Dunbar’s Number, into the dominating influence of the maddening crowd.

We are not prepared for this. We have no biological defense mechanism. We are all going to have to get used to a constant state of being which resembles nothing so much as a stack overflow, a consistent social incontinence, as we struggle to retain some aspects of selfhood amidst the constantly eroding pressure of the hyperconnected mob.

Given this, and given that many of us here today are already in the midst of this, it seems to me that the most useful tool any of us could have, moving forward into this future, is a social contextualizer. This prosthesis – which might live in our mobiles, or our nettops, or our Bluetooth headsets – will fill our limited minds with the details of our social interactions.

This tool will make explicit that long, Jacob Marley-like train of lockboxes that are our interactions in the techno-social sphere. Thus, when I introduce myself to you for the first or the fifteen hundredth time, you can be instantly brought up to date on why I am relevant, why I matter. When all else gets stripped away, each relationship has a core of salience which can be captured (roughly), and served up every time we might meet.

I expect that this prosthesis will come along sooner rather than later, and that it will rival Google in importance. Google took too much data and made it roughly searchable. This prosthesis will take too much connectivity and make it roughly serviceable. Given that we are primarily social beings, I expect it to be a greater innovation, and more broadly disruptive.

And this prosthesis has precedents; at Xerox PARC they have been looking into a ‘human memory prosthesis’ for sufferers from senile dementia, a device which constantly jogs human memories as to task, place, and people. The world that we’re making for ourselves, every time we connect, is a place where we are all (in some relative sense) demented. Without this tool we will be entirely lost. We’re already slipping beneath the waves. We need this soon. We need this now.
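The core of such a prosthesis is modest: one line of salience per relationship, captured roughly, served up on every meeting. A toy sketch in Python – every name here is invented for illustration, not a description of any existing device:

```python
from datetime import datetime, timezone

class SocialContextualizer:
    """A toy 'social prosthesis': one salient note per contact,
    recalled whenever that person turns up again."""

    def __init__(self):
        self._salience = {}   # contact -> (note, last_seen)

    def remember(self, contact, note):
        """Capture (roughly) the core of salience for a relationship."""
        self._salience[contact] = (note, datetime.now(timezone.utc))

    def recall(self, contact):
        """Why is this person relevant? Served up every time we meet."""
        if contact not in self._salience:
            return f"{contact}: no prior context – a first meeting."
        note, last_seen = self._salience[contact]
        return f"{contact}: {note} (last contact {last_seen:%Y-%m-%d})"

ctx = SocialContextualizer()
ctx.remember("mark", "spoke on hyperconnectivity; 1200 Twitter friends")
```

The hard part, of course, isn’t the lookup – it’s deciding, automatically, what the one salient line about each of fifteen hundred “friends” ought to be.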

I hope you’ll get inventive.

II. THAT.

Now that we have comfortably settled into the central paradox of our current era, with a world that is working through every available means to increase our connectivity, and a brain that is suddenly overloaded and sinking beneath the demands of the sum total of these connections, we need to ask that question: Exactly what is hyperconnectivity good for? What new thing does that bring us?

The easy answer is the obvious one: crowdsourcing. The action of a few million hyperconnected individuals resulted in a massive and massively influential work: Wikipedia. But the examples only begin there. They range much further afield.

Uni students have been sharing their unvarnished assessments of their instructors and lecturers. Ratemyprofessors.com has become the bête noire of the academy, because researchers who can’t teach find they have no one signing up for their courses, while the best lecturers, with the highest ratings, suddenly find themselves swarmed with offers for better teaching positions at more prestigious universities. A simple and easily implemented system of crowdsourced reviews has carefully undone all of the work of the tenure boards of the academy.

It won’t be long until everything else follows. Restaurant reviews – that’s done. What about reviews of doctors? Lawyers? Indian chiefs? Politicians? ISPs? (Oh, wait, we have that with Whirlpool.) Anything you can think of. Anything you might need. All of it will have been so extensively reviewed by such a large mob that you will know nearly everything that can be known before you sign on that dotted line.
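Part of why these review systems spread so quickly is that their machinery is trivial: collect everyone’s ratings, average them, rank. A sketch of that whole engine, with invented names and data for illustration:

```python
from collections import defaultdict

def crowdsource_ratings(reviews):
    """Aggregate (subject, score) pairs into average ratings,
    ranked best-first – the entire engine of a review site."""
    totals = defaultdict(lambda: [0.0, 0])
    for subject, score in reviews:
        totals[subject][0] += score   # running sum of scores
        totals[subject][1] += 1       # number of reviews
    averages = {s: total / count for s, (total, count) in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# A hypothetical mob of reviewers rates two lecturers
ranked = crowdsource_ratings([
    ("lecturer_a", 5), ("lecturer_a", 4),
    ("lecturer_b", 2), ("lecturer_b", 3),
])
```

Everything difficult about such a site – spam, grudges, gaming – lives outside this loop; the aggregation itself is a dozen lines, which is exactly why the mob can rebuild it anywhere, for anything.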

All of this means that every time we gather together in our hyperconnected mobs to crowdsource some particular task, we become better informed, we become more powerful. Which means it becomes more likely that the hyperconnected mob will come together again around some other task suited to crowdsourcing, and will become even more powerful. That system of positive feedbacks – which we are already quite in the midst of – is fashioning a new polity, a rewritten social contract, which is making the institutions of the 19th and 20th centuries – that is, the industrial era – seem as antiquated and quaint as the feudal systems which they replaced.

It is not that these institutions are dying, but rather, they now face worthy competitors. Democracy, as an example, works well in communities, but can fail epically when it scales to mobs. Crowdsourced knowledge requires a mob, but that knowledge, once it has been collected, can be shared within a community, to hyperempower that community. This tug-of-war between communities and crowds is setting all of our institutions, old and new, vibrating like taut strings.

We already have a name for this small-pieces-loosely-joined form of social organization: it’s known as anarcho-syndicalism. Anarcho-syndicalism emerged from the labor movements that grew in numbers and power toward the end of the 19th century. Its basic idea is simply that people will choose to cooperate more often than they choose to compete, and this cooperation can form the basis for a social, political and economic contract wherein the people manage themselves.

A system with no hierarchy, no bosses, no secrets, no politics. (Well, maybe that last one is asking too much.) Anarcho-syndicalism takes as a given that all men are created equal, and therefore each have a say in what they choose to do.

Somewhere back before Australia became a nation, anarcho-syndicalist trade unions like the Industrial Workers of the World (or, more commonly, the ‘Wobblies’) fought armies of mercenaries in the streets of the major industrial cities of the world, trying to get the upper hand in the battle between labor and capital. They failed because capital could outmaneuver labor in the 19th century. Today the situation is precisely reversed. Capital is slow. Knowledge is fast, the quicksilver that enlivens all our activities.

I come before you today wearing my true political colors – literally. I did not pick a red jumper and black pants by some accident or wardrobe malfunction. These are the colors of anarcho-syndicalism. And that is the new System of the World.

You don’t have to believe me. You can dismiss my political posturing as sheer radicalism. But I ask you to cast your mind further than this stage this afternoon, and look out on a world which is permanently and instantaneously hyperconnected, and I ask you – how could things go any other way? Every day one of us invents a new way to tie us together or share what we know; as that invention is used, it is copied by those who see it being used.

When we imitate the successful behaviors of our hyperconnected peers, this ‘hypermimesis’ means that we are all already in a giant collective. It’s not a hive mind, and it’s not an overmind. It’s something weirdly in-between. Connected we are smarter by far than we are as individuals, but this connection conditions and constrains us, even as it liberates us. No gift comes for free.

I assert, on the weight of a growing mountain of evidence, that anarcho-syndicalism is the place where the community meets the crowd; it is the environment where this social prosthesis meets that radical hyperempowerment of capabilities.

Let me give you one example, happening right now. The classroom walls are disintegrating (and thank heaven for that), punctured by hyperconnectivity, as the outside world comes rushing in to meet the student, and the student leaves the classroom behind for the school of the world. The student doesn’t need to be in the classroom anymore, nor does the false rigor of the classroom need to be drilled into the student. There is such a hyperabundance of instruction and information available that students need a mentor more than a teacher, a guide through the wilderness, and not a penitentiary to prevent their journey.

Now the students, and their parents – and the teachers and instructors and administrators – need to find a new way to work together, a communion of needs married to a community of gifts. The school is transforming into an anarcho-syndicalist collective, where everyone works together as peers, comes together in a “more perfect union”, to educate. There is no more school-as-a-place-you-go-to-get-your-book-learning. School is a state of being, an act of communion.

If this is happening to education, can medicine, and law, and politics be so very far behind? Of course not. But, unlike the elites of education, these other forces will resist and resist and resist all change, until such time as they have no choice but to surrender to mobs which are smarter, faster and more flexible than they are. In twenty years’ time all these institutions will be all but unrecognizable.

All of this is light-years away from how our institutions have been designed. Those institutions – all institutions – are feeling the strain of informational overload. More than that, they’re now suffering the death of a thousand cuts, as the various polities serviced by each of these institutions actually outperform them.

You walk into your doctor’s office knowing more about your condition than your doctor. You understand the implications of your contract better than your lawyer. You know more about a subject than your instructor. That’s just the way it is, in the era of hyperconnectivity.

So we must band together. And we already have. We have come together, drawn by our interests, put our shoulders to the wheel, and moved the Earth upon its axis. Most specifically, those of you in this theatre with me this arvo have made the world move, because the Web is the fulcrum for this entire transformation. In less than two decades we’ve gone from a physicist’s plaything to rewriting the rules of civilization.

But try not to think about that too much. It could go to your head.

III. THE OTHER.

Back in July, just after Vodafone had announced its meager data plans for iPhone 3G, I wrote a short essay for Ross Dawson’s Future of Media blog. I griped and bitched and spat the dummy, summing things up with this line:

“It’s time to show the carriers we can do this ourselves.”

I recommended that we start the ‘Future Australian Carrier’, or FAUC, and proceeded to invite all of my readers to get FAUCed. A harmless little incitement to action. What could possibly go wrong?

Within a day’s time a FAUC Facebook group had been started – without my input – and I was invited to join. Over the next two weeks about four hundred people joined that group, individuals who had simply had enough grief from their carriers and were looking for something better. After that, although there was some lively discussion about a possible logo, and some research into how MVNOs actually worked, nothing happened.

About a month later, individuals began to ping me, both on Facebook and via Twitter, asking, “What happened with that carrier you were going to start, Mark? Hmm?” As if somehow, I had signed on the dotted line to be chief executive, cheerleader, nose-wiper and bottle-washer for FAUC.

All of this caught me by surprise, because I certainly hadn’t signed up to create anything. I’d floated an idea, nothing more. Yet everyone was looking to me to somehow bring this new thing into being.

After I’d been hit up a few times, I started to understand where the epic !FAIL! had occurred. And the failure wasn’t really mine. You see, I’ve come to realize a sad and disgusting little fact about all of us: We need and we need and we need.

We need others to gather the news we read. We need others to provide the broadband we so greedily lap up. We need others to govern us. And god forbid we should be asked to shoulder some of the burden. We’ll fire off a thousand excuses about how we’re so time poor even the cat hasn’t been fed in a week.

So, sure, four hundred people might sign up to a Facebook group to indicate their need for a better mobile carrier, but would any of them think of stepping forward to spearhead its organization, its cash-raising, or its leasing agreements? No. That’s all too much hard work. All any of these people needed was cheap mobile broadband.

Well, cheap don’t come cheaply.

Of course, this happens everywhere up and down the commercial chain of being. QANTAS and Telstra outsource work to southern Asia because they can’t be bothered to pay for local help, because their stockholders can’t be bothered to take a small cut in their quarterly dividends.

There’s no difference in the act itself, just in its scale. And this isn’t even raw economics. This is a case of being penny-wise and pound-foolish. Carve some profit today, spend a fortune tomorrow to recover. We see it over and over and over again (most recently and most expensively on Wall Street), but somehow the point never makes it through our thick skulls. It’s probably because we human beings find it much easier to imagine three months into the future than three years. That’s a cognitive feature which helps if you’re on the African savannah, but sucks if you’re sitting in an Australian boardroom.

So this is the other thing. The ugly thing that no one wants to look at, because to look at it involves an admission of laziness. Well folks, let me be the first one here to admit it: I’m lazy. I’m too lazy to administer my damn Qmail server, so I use Gmail. I’m too lazy to set up WebDAV, so I use Google Docs. I’m too lazy to keep my devices synced, so I use MobileMe. And I’m too lazy to start my own carrier, so instead I pay a small fortune each month to Vodafone, for lousy service.

And yes, we’re all so very, very busy. I understand this. Every investment of time is a tradeoff. Yet we seem to defer, every time, to let someone else do it for us.

And is this wise? The more I see of cloud computing, the more I am convinced that it has become a single-point-of-failure for data communications. The decade-and-a-half that I spent as a network engineer tells me that. Don’t trust the cloud. Don’t trust redundancy. Trust no one. Keep your data in the cloud if you must, but for goodness’ sake, keep another copy locally. And another copy on the other side of the world. And another under your mattress.
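The "keep another copy locally" advice above can be as simple as a few lines of standard-library Python. This is a minimal sketch, not a real backup tool; the function name and paths are illustrative, and a serious setup would add rotation and off-site copies as the text suggests.

```python
# Minimal local-backup sketch: archive a directory into a timestamped
# tarball in a destination folder. Names and paths are illustrative.
import tarfile
import time
from pathlib import Path

def backup_dir(src, dest):
    """Archive the directory at `src` into `dest` with a timestamped name."""
    src_path = Path(src)
    dest_path = Path(dest)
    dest_path.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_path / f"backup-{src_path.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store the directory under its own name inside the archive
        tar.add(src_path, arcname=src_path.name)
    return archive
```

Run on a schedule (cron, launchd), pointed at whatever you currently trust only to the cloud, this gives you the second copy the paragraph above argues for.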

I’m telling you things I shouldn’t have to tell you. I’m telling you things that you already know. But the other, this laziness, it’s built into our culture. Socially, we have two states of being: community and crowd. A community can collaborate to bring a new mobile carrier into being. A crowd can only gripe about their carrier. And now, as the strict lines between community and crowd get increasingly confused because of the upswing in hyperconnectivity, we behave like crowds when we really ought to be organizing like a community.

And this, at last, is the other thing: the message I really want to leave you with. You people, here in this auditorium today, you are the masters of the world. Not your bosses, not your shareholders, not your users. You. You folks, right here and right now. The keys to the kingdom of hyperconnectivity have been given to you. You can contour, shape and control that chaotic meeting point between community and crowd. That is what you do every time you craft an interface, or write a script. Your work helps people self-organize. Your work can engage us at our laziest, and turn us into happy worker bees. It can be done. Wikipedia has shown the way.

And now, as everything hierarchical and well-ordered dissolves into the grey goo which is the other thing, you have to ask yourself, “Who does this serve?”

At the end of the day, you’re answerable to yourself. No one else is going to do the heavy lifting for you. So when you think up an idea or dream up a design, consider this: Will it help people think for themselves? Will it help people meet their own needs? Or will it simply continue to infantilize us, until we become a planet of dummy-spitting, whinging wankers?

It’s a question I ask myself, too, a question that’s shaping the decisions I make for myself. I want to make things that empower people, so I’ve decided to take some time to work with Andy Coffey, and re-think the book for the 21st century. Yes, that sounds ridiculous and ambitious and quixotic, but it’s also a development whose time is long overdue. If it succeeds at all, we will provide a publishing platform for people to share their long-form ideas. Everything about it will be open source and freely available to use, to copy, and to hack, because I already know that my community is smarter than I am.

And it’s a question I have answered for myself in another way. This is my third annual appearance before you at Web Directions South. It will be the last time for some time. You people are my community; where I knew none of you back in 2006, I consider many of you friends in 2008. Yet, when I talk to you like this, I get the uncomfortable feeling that my community has become a crowd. So, for the next few years, let’s have someone else do the closing keynote. I want to be with my peeps, in the audience, and on the Twitter backchannel, taking the piss and trading ideas.

The future – for all of us – is the battle over the boundary between the community and the crowd. I am choosing to embrace the community. It seems the right thing to do. And as I walk off-stage here, this afternoon, I want you to remember that each of you holds the keys to the kingdom. Our community is yours to shape as you will. Everything that you do is translated into how we operate as a culture, as a society, as a civilization. It can be a coming together, or it can be a breaking apart. And it’s up to you.

Recorded in New York City, 23 June 2008 – the day before I delivered “Hyperpolitics, American Style” at the Personal Democracy Forum. A wide-ranging discussion on hyperconnectivity, hyperpolitics, media, hyperdistribution, and lots of other fun things.

In November of 1998, I attended a conference on technology and design in Amsterdam, and brought along two mates itching for an excuse to visit Europe. We all stayed at the flat of my good friends, Neil and Kylin. I dutifully attended the conference every day as the rest of them went out carousing through the various less-reputable quarters of Amsterdam, and we all had a great time. As Kylin tells it – given that she was the only woman on this Cook’s Tour – when we departed, we left a lingering residue of testosterone in their flat, and (if they calculated correctly) the very day after we departed for Los Angeles, they conceived their daughter Bey.

In February 1999, Neil and Kylin emailed all their friends, telling us of their plans to move – immediately – from Amsterdam to Florida. No explanation given. Through some weird intuition, I figured it out: Kylin was pregnant. I called her, and put the question to her directly. “How did you know?” she gasped. “We’ve been keeping it top secret.”

I don’t know how I knew. But I was overjoyed: I’m part of a generation who waited a long, long time to have children – my own nephews weren’t born until 2001 and 2002; none of my close friends had children in 1999. Neil and Kylin were the first.

It got me to pondering, as I ran a little thought experiment: what would the world of their daughter, still in utero, look like? What would her experience of that world be?

A month earlier, my friend Terence McKenna had challenged me to write a book. “You mouth off enough,” he suggested, “so maybe you should get it all down?” When he laid that challenge before me, I had no idea what I’d write a book about.

Somehow, as soon as I heard about Kylin’s pregnancy, I knew. I had to write a book about the world that child would grow up into, because that world would look nothing like the world I had been born into back in 1962. That child wouldn’t need this book. Her parents would.

A few months later I attended another conference, at MIT, where I heard psychologist Sherry Turkle talk about her work with young children. Turkle has been exploring how technology changes children’s behaviors, and, in this specific case, she’d taken a long look at a brand new toy: in fact, that season’s “hot” toy, the “Furby”.

Furby is an electromechanical plush toy, capable of responding to various actions by the child, but Furby also presents the child with demands – to be fed, to be played with, to be put to sleep when tired. More than interactive, the Furby presented children with some of the qualities we recognize as innate to living things. Would a small child recognize Furby as inanimate, like a doll, or animate, like a pet?

From research in developmental psychology we know that children develop the categories of “inanimate” and “animate” when they’re around four years old. The development of these categories is a “constructivist” process – children do not need to be taught the difference between these two states; rather, they intuit the difference through continued interactions with animate and inanimate objects. Thus, an object, like Furby, which displays characteristics associated with both categories, should pose quite a philosophical conundrum for a small child.

Turkle put the question to these children: is Furby like your puppy? Is it like your doll? These children, little philosophical geniuses, gave her an answer she never expected to receive. They said it’s like neither of them. It is a thing itself, something in-between. They had no name for this third category between animate and inanimate, but they knew it existed, for they had direct experience of it.

This was my penny-drop moment: constructivism states that all children learn how the world works through their interactions within it. And we had suddenly changed the rules. We had infused the material world with the fairy dust of interactivity, creating the Pinocchio-like Furby, and, in so doing, created a new ontological category. It is not a category that adults acknowledge – in fact, many adults find Furby slightly “creepy” precisely because it straddles two very familiar categories – but, in another generation, by the time these children are our age, that category will have a name, and will be accepted as a matter of course.

This is what Neil and Kylin – and, really, parents everywhere – need to know: the world has changed, the world is changing, and the world’s going to change a whole lot more. We may be the first beneficiaries of this great upwelling of technology, but the lasting benefits will be conferred upon our posterity, for it is changing the way they think. Their understanding of the world is, in some ways, utterly different from our own. And, just now, just over the last year or two, we’ve thrown a new element into the mix. We’re gracing ourselves with a new kind of connectivity – I call it “hyperconnectivity” – which turbocharges some of the most essential features of human beings. This newest frontier – which did not exist even a decade ago – is what I want to focus upon this morning.

I: Who Are We?

We human beings are smart. Very smart. So smart we run the joint. But there’s a heavy price to be paid for all those brains. To start with, our heads are so big that we very nearly kill our mothers in the act of giving birth. Human births are so dangerous that we’re the only species we know of which can’t handle the act of birth alone.

We need others around – historically, other women – assisting us in the process. This point is essential to our humanity: we need other people. There is no way that a human, alone, can survive.

Yes, there are a few isolated incidents of “wolf boys” and Robinson Crusoe-types, battling against the odds in an indifferent or inimical environment, but, for far longer than we have been human, we have been social.

You can go back through the tree of life, a full eleven million years, to Proconsul, the common ancestor of gorillas, chimpanzees, bonobos and humans, and that animal was a social animal. It’s in our genes. It’s what we are. But why?

The answer is simple enough: eleven million years ago, those of our ancestors with the best social skills could most dependably count on help from others. That help was essential to their survival. That help allowed them to live long enough to pass those social genes and social behaviors along to their children. That help was essential, once our brains grew big enough to create trouble in the birth canal, for the next generation of human beings to come into the world. Cleverly, nature has crafted a species which, from the moment of the first birth pangs, must be social in order to survive. That pressure – a “selection pressure”, as it’s known in biology – is probably the essential, defining feature of humanity.

In an article in the May 17 2008 issue of New Scientist, an author rhapsodized about the end of “human exceptionalism”. Ethology and zoology have taught us that all of the behaviors we consider uniquely human do, in fact, exist broadly among other species. Whales have culture, of a sort. Chimpanzees use gestures to communicate their needs and wants, just like a child does. Dolphins have names. But each of these species, smart as they may be, deliver their young unassisted. They do not need help from their fellows to enter this world.

We are delivered by social means, and live our entire lives in a social order. What was essential at birth becomes even more important as an infant and toddler: because of our huge brains we remain helpless far longer than any other species.

A mother caring for a newborn infant has a full-time task on her hands. She can not devote her energies to finding food or shelter. Her attention is divided, but mostly focused on her child. Here again, the strong bonds of socialization create an environment where women (again) will altruistically bear some of the burden for mother and newborn. This altruism is reciprocal: as other women bear children, these mothers, with older children, will bear some of the burden for them.

This means that the mothers best able to forge strong social bonds with other women will have the most help at hand when they need it. This means, all things being equal, their children will be more likely to survive, and the chain of genes and behaviors gets passed along to another generation. This is another selection pressure which has, over millions of years, turned us into thoroughly social animals.

An interesting point to note here is that women have always had stronger selection pressures toward social behavior than men. I will come back to this.

Given that so much of our success is based upon our ability to socialize with others, and given that additional social skills confer additional advantage which increases selection success, as we evolved into our modern form – Homo Sapiens Sapiens – natural selection tended to emphasize our social characteristics. Being social has ever been the best way to get ahead.

In the last million years, as our brains grew explosively – as one scientist put it, “perhaps the most improbable event in all of evolution, anywhere” – much of the potential of all that new gray matter was put to work for social benefit. The “new brain” or neocortex, which is the most dramatically enlarged portion of the human brain, seems to be the area dedicated to our social relationships.

We know this because, in 1992, British anthropologist Robin Dunbar compared the average troop size of gorillas and chimpanzees against the average tribe sizes of humans. He found that there was a direct correlation between the volume of the neocortex in these three species and their average troop or tribe size. This value, known as “Dunbar’s Number”, is roughly 20 for gorillas, who have the smallest neocortex, about 35 for chimpanzees, and – for us lucky human beings, who have the greatest selection pressures on our social behavior – just under one hundred and fifty. We may not be entirely exceptional, but we’re doing quite well.
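The relationship Dunbar found is a straight line on a log-log plot: group size scales as a power of neocortex ratio. The sketch below illustrates the idea only; the neocortex ratios are rough figures chosen for illustration, not Dunbar's published dataset, and a real fit would use his full primate sample.

```python
# Illustrative fit of Dunbar's log-log relationship between neocortex
# ratio and group size, using the three species named in the text.
# The neocortex ratios here are approximate, for illustration only.
import math

# species: (approximate neocortex ratio, observed mean group size)
species = {
    "gorilla": (2.65, 20),
    "chimpanzee": (3.2, 35),
    "human": (4.1, 150),
}

xs = [math.log10(ratio) for ratio, _ in species.values()]
ys = [math.log10(group) for _, group in species.values()]

# Ordinary least-squares fit: log10(group) = slope * log10(ratio) + intercept
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def predicted_group_size(neocortex_ratio):
    """Group size implied by the fitted power law."""
    return 10 ** (slope * math.log10(neocortex_ratio) + intercept)

for name, (ratio, observed) in species.items():
    print(f"{name}: observed {observed}, predicted {predicted_group_size(ratio):.0f}")
```

Even this toy fit recovers the shape of the result: a modest increase in neocortex ratio implies a steep increase in group size, which is why humans land near 150 while gorillas sit near 20.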

Essentially, inside of each one of our heads, there are a hundred and fifty other people running around. Yes, that sounds a bit crowded (particularly when they’re up partying all night long with their mates), but it’s actually eminently practical. These “little people” inside our heads are models of each person we know well: our family, our friends, our colleagues. For each of these people we build a mental model which helps us to predict their behavior. (It isn’t really them, but rather, our image of them.) This predictive capability smooths our social interactions. We know how to interact with people whom we have in our heads; with others we remain demure, reserved – in a word, predictable. Only with intimacy do we express the quirks of behavior which make us unique, only with intimacy do we take note of them in others.

We all know more than a hundred and fifty people. Some folks on Facebook and MySpace claim thousands of “friends”. But most of these folks aren’t in our heads. There’s a simple rule you can use, to tell whether one of these folks is in your head: I call it the “sharing test”. Let’s suppose you see something – on the Web, in the newspaper, on the telly – so meaningful (funny, or poignant, or just so salient to whatever passions drive you) that in the next moment you think, “Wow, I know Dazza would really enjoy that.” And you flip the link along in an email. Or you send Dazza a text message with, “Hey, mate, did you see that thing just now on TEN?” And if he didn’t see it, you ring and fill him in. It’s that moment of unrestrained sharing – it feels almost automatic, and it’s entirely an essential part of what we are – which defines the most visible quality of those people inside our heads.

Every time we share something with those little people in our heads, we reinforce that relationship; we strengthen the social bonds which tie us to one another. Fifty thousand years ago this had enormous practical benefits: sharing where the best fruit grew – or the location of a predator in the tall grass – kept everyone alive and healthy. The selection pressure for sociability made us expert at sharing.

It’s interesting to watch this behavior as expressed by children; in some ways they share automatically – children love to share their experiences. In other situations – such as with a favorite toy – children must be taught to share, to override the natural selfishness of the singular animal, overruling that intrinsic behavior with the altruistic behavior of the social human. Sharing is one of the most important lessons parents teach their children, and if that lesson is poorly taught, it leaves a child at a permanent disadvantage.

While our genes make us sociable, our sharing behaviors are more software than hardware; this is why they must be taught. It takes time for any child to learn that lesson, just as it took quite a while for humans, as a species, to learn it. Geneticists know that human beings haven’t changed at all in at least 60,000 years, but civilization didn’t kick off in a meaningful way until about ten thousand years ago.

This has been a bit of a puzzler for paleoanthropologists, but a new theory – which I also read about in New Scientist – seems to make sense of that gap: while we had the raw capacity for civilized behavior long ago, it took us 50,000 years to write the cultural software for civilization. Over those years, as we learned about ourselves and our world, our behavior changed and we taught these changes to our children, who improved upon them, passing those changes along.

In short, our entire species spent a long time in primary school (and might even have been kept back a few grades) before graduation. The incredible wealth of cultural learning – which we don’t really even reflect on, because it seems so essential and obvious to us – was painstakingly developed across two thousand generations.

Our secondary studies, as a species, included that most unique of human institutions: the city. The earliest cities, such as Jericho and Çatal Höyük, already housed thousands of inhabitants – far beyond the reach of Dunbar’s Number.

That in itself presented a singular challenge for humanity, because, as near as we can tell, humans in pre-civilization lived in a perpetual state of war – the “war of all against all” – waged against all those not in their own tribes.

At the end of May 2008, we saw photos of a newly discovered tribe in the far reaches of the Amazon, who reacted to the presence of an aircraft by firing bows at it. Human beings possess an inherent xenophobia, and the boundaries of the “in group” conform to the limits of Dunbar’s Number.

Given this, how did we all come to live together in ever-greater numbers? Simply this: the cultural software of civilization provided a greater selection advantage than that afforded by the tribal order which preceded it. Civilization is a broader form of sharing, where altruism is replaced by roles: the butcher, the baker, the candlestick maker. In civilization we share the manifold burdens of life by specializing, then we trade these specialized goods and services amongst ourselves. And it works.

Civilized human beings live in greater numbers, with greater population density, than pre-civilized cultures. It does not work perfectly: we have crime and poverty precisely because there are people in our cities who can fall through the “safety net” of civilized society. These eternal blights are the specific diseases of civilization. Yet the upsides of this broader and more diffuse form of sharing so outweighed the downsides that these evils have been tacitly acknowledged as the “price of progress.”

So things continued, merrily, for the last ten thousand years. Cities rose and fell; empires rose and fell; cultures and languages and entire peoples rose up suddenly, only to vanish just as quickly. All along the way, we continued adding to our cultural software. We learned – fairly early on – to record our learning in permanent form. We codified the essential elements of the software of civilization in laws and commandments.

We experimented with every form of human social organization, from the military dictatorship of Sparta, to the centralized bureaucracy of China, to the open democracy of Athens, to the chaotic anarchism of the Paris Commune. At each step along the way, we passed these lessons along, in an unbroken chain, to the generations that followed.

We are the children of nearly five hundred generations of civilization. The lessons learned over that immense span of time have brought us to the threshold of a revolution as comprehensive as that which obsolesced our tribal natures and replaced them with more civilized forms. Once again, the selection pressures of sociability force us into a narrow passage, toward another birth.

II: Where Are We Going?

We know that our amazingly comprehensive social skills are located in the newest part of our brain; we also know that they are among the last capabilities to mature during our cognitive development. Our sociability depends upon so much: a strong command of language, the ability to empathize and sympathize, the ability to consider the wants and needs of others, the ability to give freely of one’s self – altruism. At any point this complex and delicate process can be interrupted, by nature or by nurture.

My own nephew, Alexander, was diagnosed with an Autism Spectrum Disorder at the end of 2005. For leading-edge brain researchers, autism represents a natural failure of the brain’s inherent capability to model the behavior of others. The hundred and fifty people running around inside of the head of someone with an Autism Spectrum Disorder are shaped differently than the ones running about in mine; they still exist, but they are not (in an admittedly subjective assessment) as complete. Now that we know roughly what autism is, we work with these children intensively, because, while they lack certain inherent features we associate with normalcy, these children, if diagnosed early enough, can learn to become much more sensitive to the world-views and feelings of others.

My nephew attended a state-of-the-art pre-school in his San Diego suburb, where autistic children and “normal” children (such as his year-younger brother, Andrew) mix freely, because it is now known that the autistic children can and will learn necessary social skills through this continuous interaction. Alexander has now been mainstreamed, while my younger nephew remains as a “peer” in this school, showing other children how to be a fully socialized human being.

Then there are the children who have suffered neglect or abuse. Not having been nurtured themselves, they have not learned how to nurture others. This deficit manifests as emotional withdrawal, or in anti-social behaviors. Children who have not received love can not find it within themselves to love others. It is not that love is learned, per se, but rather, that we learn to recognize it as others demonstrate it toward us. The drive to connect with another human being, although entirely inherent, can be so confused, or so atrophied through disuse (these areas of the brain, if under-stimulated, will die away, leaving the child with a permanent deficit), that the child essentially becomes locked into a solitary world, unable to initiate or maintain the social relationships essential to success.

None of us are perfect; all of us feel embarrassment and disappointment and awkwardness in a range of social situations. Yet those sensations, of themselves, are proof of our normalcy: we sense our social shortcomings. We had little awareness of our social nature when we were young. Only as we matured, turning the corner into tweenhood, did we rise into an awareness of the strong social bonds which form the largest part of our experience as human beings. For each and every one of us, this is a painful experience.

The brain, furiously making connections between regions which have been developing from before birth, integrates our comprehensive understanding of human behavior, our own emotional state, and our perceptions of the actions and emotions of others to create a model of how we are viewed by others, our “social standing”. It is this that natural selection has driven us to optimize: individuals with the highest social standing get the lion’s share of attention, affection and resources.

In particular, this burden lies heaviest on young women, who have the additional selection pressure (now more-or-less vestigial) driving them to form the social bonds of altruism with their peers which would, in prehistoric times, lead to greater help with childbearing and child-rearing. Young women emerge into a social consciousness so rich and so complex it makes young men look nearly autistic in comparison.

It is the reason why young women invest themselves so wholly in their looks, in their friends, in their cliques, in the “in group” and the “out group”. Films like Heathers (one of my personal favorites) and Mean Girls tell tales as old as humanity: the rise into social consciousness of that most social of all the animals on the planet – the young woman.

It also provides some explanation for why young women are often emotionally overwrought. It isn’t just hormones. It’s the rising awareness of a vast social game that they don’t know how to play, with rules taught only through trial and error. Every mistake is potentially fatal, every success fleeting. And each of these moments of singular significance is amplified by a genetic imperative, a drive to connect, which leaves them helpless. Resistance is futile, and engagement only brings more learning, and more pain.

Oh, and we just made things a whole lot more complicated.

This generation of young adults, coming of age just now, has access to the best tools for connection and communication ever created by our species.

A few years ago, these kids, bounded by proximity and temporality, took their cues from their immediate peers. But now these connections can be forged via text messages, or MySpace pages, or YouTube videos, and so on. An average fifteen-year-old girl might send and receive a hundred text messages in a single day and think nothing of it. Her inherent drive to connect has been freed from space and time; she can reach out everywhere, at any time; she can be reached anywhere, anytime. We have added a technological dimension – an intense and comprehensive acceleration – to a wholly natural process.

During the two hundred years of the industrial revolution, we amplified our capability for physical work. Steam engines and electric motors replaced muscle. As we moved from physical labor to monitoring and control of our machines, our capacity for work exploded, transforming the world. Still, these changes were entirely external. They did not affect our nature as social beings, but simply extended our physical capabilities. Now – just now – we have moved beyond the physical extension of our capabilities into a comprehensive amplification of our social nature. The mobile and the Internet are already transforming the human world as utterly as the steam engine transformed the landscape; but this transformation is happening in eighth-time.

The transition to industrialization, which took about a hundred years to complete, seems slow when compared to the rise of the Human Network, which will take about fifteen years, end-to-end.

Already, half of humanity owns a mobile phone; within about three years, three-quarters of the planet will own a mobile. That’s everyone except for the most desperately poor among us. No one, anywhere, expected this, because no one reckoned on this most basic of all human drives – the need to connect. The mobile is the steam engine, the electric motor, and the internal combustion engine of the 21st century: every bit of the potential framed by each of these enormous innovations now rests comfortably in the palm of three and a half billion hands.

Getting the tools for the amplification of our social natures is only half the story. That’s just hardware. What really counts is the software. And that’s why we turn, at the end of this tale, to Bey, the child conceived by Neil and Kylin, back in the last days of 1998.

III: Who Will Lead the Way?

Hardware is not enough. We spent fifty thousand years in idle, despite the best cognitive hardware on the planet, before anything truly interesting occurred. We are ensuring that every single person on Earth has a connection to the Human Network, but that doesn’t mean any of us know how to use it. Still, we are learning. And humans excel at learning from one another.

A recent study run with young chimps and toddlers showed that the chimps surpassed the toddlers in their cognitive capabilities, but that the toddlers far surpassed the chimpanzees in their ability to “ape” behavior. Humans learn by mimesis: the observation of our parents, our peers, our mentors and teachers. (Which is why the injunction, “Do as I say, not as I do,” never works.) As such, we closely observe each other to learn what works, and we copy it. This mimetic behavior, which used to be constrained by distance, has itself become a global phenomenon. Whatever works gets copied widely. It could be a good behavior, or a bad behavior: the only metric is the success of the behavior. If it achieves its ends, it will be observed and copied, widely and nearly instantaneously.

It took us two thousand generations to build up the cognitive software for civilization, as individual tribes made the same discoveries, independently, but lacked the means to share them. Even the diffusion of agriculture depended more on the migration of whole peoples than the dissemination of knowledge.

We know how to be social beings, but never before have we been globally and instantaneously social. For this reason, we are learning – and each of us is intensely involved in this education. We are learning from ourselves, applying the lessons of our own socialization, to see if these lessons work in this new world. That’s pure constructivism. We are learning from each other, watching our peers as intently as any young woman would, when desperately trying to defend her position in an ever-more-competitive social circle. That’s pure mimesis. Together they’re a potent combination, and, when multiplied by the accelerator of the Human Network, it means we’re learning very rapidly indeed. Learning is never complete: ignorance is a permanent feature of the human condition. That said, competence can come quickly, when the students are wholly engaged in learning. As we are.

This means that, in another two or three years, when Bey is old enough to get her first mobile phone, at precisely the moment that she begins to awaken to her intense cognitive capabilities as a social animal, those abilities will have been so comprehensively rewritten and transformed by the new software of sociability that she will find herself suddenly both intensely empowered and, most likely, entirely overwhelmed.

Bey will be among the first children who become socially aware within a world where the definition, rules and operating principles of the social universe have utterly changed. That transformation will not be complete, by any means, but it will be far enough along that the basic features and outlines of 21st century social civilization will be present.

This is the only social world that she will ever know. For her, social connections will not end with the classroom and the home. Social connectivity is already edging toward a state where everyone is directly connected to everyone else, all six point eight billion of us, a world where each of us can directly forge a relationship with everyone else. Bey will not know any of the boundaries we consider natural and solid, the boundaries of the classroom, the suburb, the family, or the nation: under the pressure of this intense hyperconnectivity, all of those boundaries dissolve, or are blown over. Only connect. Connection is all that matters. The social instinct, hyperempowered and taken to an entirely new level by hyperconnectivity, is rewriting the rules of culture.

This world looks utterly alien to us, yet it is already here. Author William Gibson says, “The future is already here, it’s just not evenly distributed.” We have moments of hyperconnectivity – as in the thirty-six hours after the Sichuan earthquake, when text messaging and other tools for hyperconnectivity spontaneously created a Human Network, sharing news of the tragedy and working to locate missing people. Such moments are becoming more frequent, gradually merging into a continuum.

But what about Bey? What lessons can we offer her? She will learn everything she can from everyone, everywhere. She will span the planet for best practices in sociability, because she can, and because she must. She will outpace us in every way, because the simultaneous emergence of the Human Network and her own social capabilities makes her potent in ways we can’t wholly predict. Her powers will be greater, but that also means that her crash will be more spectacular – apocalyptic, really – when she tries something, and fails.

We do know this: just as Furby created a new ontological class of being, a nether zone between animate and inanimate which children instinctively recognized and embraced, Bey will be living a new ontology of sociability, connection and relationship. These girls, just on the verge of becoming young women, will lead the way into this new world. They will be the first masters of the Human Network.

I want to close this essay with both a warning — and a hope. The warning is simply this: these young women will be vastly more powerful than we are. Harnessing the immense energies of the Human Network will be, quite literally, child’s play to them. If they sense they are being wronged, and can build a network of peers who concur in this assessment, you will need to watch out, because they will have the capacity to destroy you with a word. We already see students threatening educators with damage to their reputations; multiply that a billion-fold and you can sense the potential for catastrophe. I am not saying that this will inevitably happen, only that it can.

At the same time, despite their thermonuclear potential, it would be a mistake to handle these kids too delicately. Children are all passion, but lack wisdom. Adults have plenty of wisdom, but, all too often, we lack passion.

We need to build strong relationships with these children, using the Human Network of hyperconnectivity, so that each of us can infect the other. We need their passion to move forward without fear in a world where the human universe has shifted beneath our feet. They desperately need our wisdom to guide them into healthy and stable relationships throughout the Human Network. To do this, we need to bring these kids inside our heads, and we need to get ourselves into theirs, so that, together, we can make sense of a world so new, and so different, that we all seem but little children in a big world.