Forty-eight years ago, when my mother was pregnant with me, her friends and family threw her a baby shower. Among the gifts, she received a satin-covered ‘Baby Book’, with spaces to record all of the minutiae of the early days of my existence. I know for a fact that Dr. No and Lawrence of Arabia were playing in the movie theatres in Massachusetts at the time I was born, because it is neatly recorded on a page of my baby book. I know how much I weighed when I was born (7 lbs, 7 oz – or 3.3 kg), when I got my first tooth, when I started to walk, and so on. All of it is there, because my mother took the time to write it down as it happened.

What my mother didn’t write down – because it isn’t at all remarkable – was that I was busy reaching out, making connections with everyone I came into contact with. Those connections began with my mother and my father, then my aunts and uncles and grandparents, and, just a year later, my sister. I made those connections because that’s what humans do. It sounds perfectly ordinary because it comes so naturally: in fact, it’s quite profound. From the moment we’re born, we work to embed ourselves within a deep, strong and complex web of social relationships.

This isn’t a recent innovation, something that we ‘thought up’ the way we dreamed up art or writing or the steam engine; you need to go way, way back – at least ten million years, and probably a great deal more – before you find an ancestor of ours who wasn’t thoroughly social. A social animal will, on the whole, outperform a loner. A social animal can harness resources outside of themselves to ensure their survival and the survival of their children. Ten million years ago, a social animal could share the hunting and gathering of food, childcare, or lookout duties. Those with the best social skills – the best ability to communicate, coordinate, and function effectively as a unit – did better than their less-well-socialized relatives. They survived to pass their genes and behaviors along, down the generations. All along, a constant pressure accompanied them, driving them to become ever more social, better coordinated, and more effective. At some point – no one knows how long ago, or even how it happened – this pressure overflowed, creating the infinitely flexible form of communication we call language.

The more we study other animals – particularly chimpanzees – the less unique we seem to ourselves. Animals think; they even reason. They can carry around within themselves a model of how others think and think about them. They can deceive. They even appear to have empathy and a sense of fairness. But no other animal has the perfect tool of language. Animals can think and feel, but they cannot express themselves, at least not as comprehensively as we can. The expressiveness of language has one overriding aim: it allows us to connect very effectively.

The more we study ourselves, the more we understand how our need to connect has worked its way into our bodies, colonizing our nervous system. Our big brains are the hardware for our connection into the human network: there’s a direct correlation between the amount of grey matter in our prefrontal cortex and the number of individuals we can maintain connections with. Anthropologist Robin Dunbar came up with a figure of 148, plus or minus a few. That’s the number of individuals you carry around in your head with you, all the time. For a long, long time – tens of thousands of years – that was the largest a tribe of humans could grow, before they hived off into two tribes. When a tribe grows so big you can’t know all of its members, it’s time to divide.

We’ve grown used to being surrounded by people we have no connection with. That’s what cities are all about. We’ve been building them for close to ten thousand years, and in that time we’ve learned how to live with those we don’t know. It’s not easy – it requires police and courts and prisons – but the advantages of coming together in such great numbers outweigh the disadvantages. In 2008, for the first time in history, half of humanity lived in cities. We’re in the final stages of the urban revolution – a revolution in the making for the past hundred centuries. Urban life is now the default human condition.

Just as that revolution reaches its climax, we find ourselves presented with a new technology, which takes all of our human connections and digitizes them, creating an electronic representation of what we each carry around in our heads. We call this ‘social networking’, though, as I’ve explained, social networks are actually older than our species. Stuffing them into a computer doesn’t change them: We are our connections. They are what make us human. But the computer speeds up and amplifies those connections, taking something natural and ordinary and turning it into something freakish and – hopefully – wonderful.

Before we discuss how these newly amplified connections can be used, it may be useful to step back, and reframe this latest revolution – just three years old – in the context of a child born, not in the early 1960s, but in 2010. I have good friends in Melbourne who are expecting their first child in early September. For the sake of today’s talk, let’s use this child (we’ll call her a daughter, though no one yet knows) as an example of what is now happening, and what is to come.

Will this child have a baby book? Certainly, some beloved relative may provide one to the lucky parents, and mom and dad may even take the time to fill it in – between the 3 AM feedings and the nappy changes. But the true baby book for this child will be the endless stream of digital media created in her wake. From a few minutes after birth, she will be photographed, recorded, videoed, measured and captured in ways that would seem inconceivable (and obsessive) just a generation ago. Yet today we think nothing of a parent who follows a child everywhere with a video camera.

As parents collect all of that media, they’re going to want somewhere to show it off. An eponymous website. YouTube is already cluttered with videos of babies doing the most mundane sorts of things, precisely so they can be shown off to proud grandparents. Photo galleries on Picasa and Snapfish and Flickr exist for precisely the same reason – they provide a venue for sharing. Parents post to blogs documenting every move, every fitful crawl, every illness. What’s the difference between this and what we think of as a baby book? Nothing at all.

It seems natural and wonderful to gather all of this documentation about her. This is who she is in her youngest years. But there’s other information that her parents do not document, at least not yet: who does she connect with? This list is small in her very first years, but as she grows into a toddler and heads off to day care and pre-kindy and grade school, that list grows rather longer. Will her parents keep track of these relationships? Even if they do not, at some point, she will. She’ll go online to a site patrolled by Disney or Apple or Google or Microsoft and be invited to ‘friend’ others on the site, and enroll her own real-world friends. Her social network will begin to twin into its physical and virtual selves. Much of each will be a reflection of the other, but some connections will exist purely in one realm. Some friends or family members will have no presence online; a few friends might remain life-long ‘pen pals’, never meeting in the flesh, but maintaining constant, connected contact.

The most significant difference between these real-world and virtual networks centers on persistence. We only have room for 150 people in our heads. When we fill up, people start to get pushed out, crossing that invisible yet absolutely real line between friend and acquaintance. We may have a lot of acquaintances, but these relationships, in the real world, don’t consist of very much beyond a greeting and a few polite words. Contrast this to the virtual world, the world of Facebook and Twitter and LinkedIn, where connections persist forever unless explicitly deleted by one of the parties to that connection. There is no upper limit to the number of connections a computer can remember. (Facebook has an upper limit of 5000 friends, but that’s entirely artificial and will eventually be abandoned.)

As she passes through life, this child will continue to accrue connections, and these connections will be digitized for safekeeping – just like the photos and videos her parents shot in her youngest years. That list will naturally grow and grow and grow, as she passes through years 1 through 12, moves on to university, and out into the world of adults. By the time she’s 25, she’ll likely have thousands of connections that accreted just by living her life. Each of these people will be able to peer in, and see how she’s doing; she’ll be able to do the same with each of them.

Managing the difference between our real-world connections, which top out, and our virtual connections, which do not, is a task that we’ll be mastering over the next decade. Right now, we’re not very good at it. By the time she’s grown up enough to understand the different qualities of real and virtual connections, we will be able to teach her behaviors appropriate to each sphere of connection. At present there’s a lot of confusion, a fair bit of chaos, and a healthy helping of ignorance around all of this. We can give ourselves a pass: it’s brand new. But already we’re beginning to see that this is a real revolution. In the social sphere, nothing will look like the past.

II: Pillar of Cloud, Pillar of Fire

On Friday evening, my washing machine – which I bought, used, just after I moved to Australia – finally gave up the ghost. The motor on my front loader seemed less and less likely to make it through an entire spin cycle, so I knew this day was coming, and had some thoughts about what I’d do for a replacement. One of my very good friends recommended that I buy a Simpson brand washer, just as she owned, just as her mother owned. ‘Years of trouble-free service,’ she said. ‘It’ll last forever.’ I took that suggestion under advisement. But I knew that I had a larger pool of individuals to interrogate. About thirty minutes after the unfortunate passing of the washer, I posted a message to Twitter, asking for recommendations. Within minutes I was pointed to Choice Magazine, where I read their reliability survey. Many people chimed in with their own love or horror stories about particular brands of washers. I was quickly dissuaded from Simpson: ‘There’s a reason they’re cheap,’ one person replied. A furious argument raged about whether LG should be purchased by anyone, for any reason whatsoever, given that they were caught cheating on a refrigerator efficiency test. Miele owners seemed fanatically in love with their washers – but acknowledged that they paid a big premium for that love. And so on. After reviewing the input from Twitter (and Choice), I made a decision to purchase a Bosch, which seemed both highly reliable and not too expensive, good value for money. I put my decision out to Twitter, and the Bosch owners all chimed in: very happy, except for one, who seemed to have gotten one of those units that inevitably break down a few days after the warranty expires. That settled it. On Saturday morning I played Bing Lee off Harvey Norman, talked one down to a very good price, and made the purchase. Crisis resolved.

Let’s step back from the immediate and get a good look at this whole process. In considering what to replace my dead washing machine with, I first consulted my real-world network – my friend who recommended Simpson. Then I went out to my virtual network, a network which is much, much larger. I follow about 5700 people on Twitter. This means I have access, potentially, to 5700 opinions, 5700 sets of experiences, 5700 people who may be willing to help. Even if only a small proportion of those do decide to offer assistance, that’s a lot of help, and it comes to me more or less immediately. The entire process took about half an hour – and this on a Friday night. If it’d been on a Tuesday afternoon, when people idly monitor Twitter while they work, I would have received double the response.

Wherever I go, I carry this ‘cloud’ of connections with me. These connections have value in themselves – they are a record of my passage through the human universe – but they have far greater value when put to work to accomplish some task. This is it; this is the knife-edge of the present: We have been busily building up our social networks, and though I freely admit that I am better connected than most, this will not long remain the case, as a generation grows into adulthood keeping a perfect record of all of their connections. Within a few years, nearly everyone who wills it will enter every situation with the same cloud of connections, the same reliable web of helpers who can respond to requests as the need arises. That fundamental transition – at the heart of this latest revolution – makes each of us much more effective. We’re carrying around a whole stadium of individuals, who can be called upon as needed to help us make the best decision in every situation. As we grow more comfortable with this new power, we will make every decision of significance in consultation with this network of effectiveness. This is already transforming the way we operate.

Some more examples, drawn from my own experience, will help illuminate this transformation. In December I found myself in Canberra for a few days. Where to eat dinner in a town that shuts down at 5 pm? I asked Twitter, and forty-five minutes later I was enjoying some of the best seafood laksa I’ve had in Australia. A few days later, in the Barossa Valley, I asked Twitter which wineries I should visit – and the top five recommendations were very good indeed. In the moment these can seem like trivial affairs, but both together begin to mark the difference between an ordinary holiday and an awesome one. Imagine this stretching out, minute after minute, throughout our lives. We’re not used to thinking in such terms. But just twenty years ago we weren’t used to the idea that we could reach anyone else instantly from wherever we were, or be reached by anyone else, anywhere. Then the mobile came along, and now that’s an accepted part of our reality. We’d find it difficult to go back to a time before the mobile became such an essential tool in our lives. This is the same transition we’re in the midst of right now with social networks. We look at Twitter and Facebook and find them charming ways to stay in touch and while away some empty time. A social network isn’t charming, and it certainly isn’t a waste of time. We are like children, playing with very powerful weapons. And sometimes they go off.

Before we explore that more explosive side to social networks, the ‘pillar of fire’ to this ‘pillar of cloud’, I want to introduce you to one more social networking technology, one which is brand-new, and which you may not have heard of yet. Just over the past month, I’ve become a big fan of Foursquare, a location-based ‘social network’. Using the GPS on my mobile, Foursquare allows me to ‘check in’ when I go to a restaurant, a store, or almost anywhere else. That is, Foursquare records the fact that I am at a particular place at a particular time. Once I’ve checked in, I can then make a recommendation – a ‘tip’ in Foursquare lingo – and share something I’ve observed about that place. It could be anything – something absurdly trivial, or something very relevant. As others have likely been to this place before me, there is already a list of tips. If I peek through those tips, I can learn something that could prove very useful.

With every day that passes, and as more people use Foursquare (over a million at present, all around the world), this list of tips grows longer, more substantial, and more useful. What does this mean? Well, I could walk into a bar that I’ve never been to before and know exactly which cocktail I want to order. I would know which table at a restaurant offers the quietest corner for a romantic date. Or which salesperson to talk to for a good deal on that washing machine. And so on. With Foursquare I have immediate and continuous information in depth, information provided by the hundreds or thousands in my own social network, plus everyone else who chooses to contribute. Foursquare turns the real world into a kind of Wikipedia, where everyone contributes what they know to improve the lot of all. I have a growing range of information about the world around me in my hands. If I put it to work, it will improve my effectiveness.

Last weekend I went to the cinema, to see Iron Man 2. As soon as I left the theatre, I sent out a message to Twitter: “Thought Iron Man 2 better than original. Snappier. Funnier. More comic-book-y.” That recommendation – high praise from me – went out to the 6550 people who follow me. Many of those folks are Australians, who might have been looking for a film to see last weekend. My positive review would have influenced them. I know for a fact that it did influence some, because they sent me messages telling me this.

On the other hand, if I’d sent out a message saying, ‘Worst. Movie. Ever.’ that also would have reached 6550 people, who would, once again, consider it. It might have even dissuaded some from paying the $17.50 to see Iron Man 2 on the big screen. If enough people said the same thing, that could kill the box office. This is precisely what we’ve seen. There’s a direct correlation between the speed at which a motion picture bombs and the rise in the number of users of Twitter. It used to take a few days for word-of-mouth to kill a movie’s box office (think Godzilla). Now it takes a few minutes. As the first showing ends, friends text friends, people post to Twitter and Facebook, and the story spreads. After the second or third showing, the crowds have dropped off: word has gotten out that the film stinks. Where a film could coast an entire weekend, now it has just a Friday matinee to succeed or fail. Positive word-of-mouth kept Avatar at the #1 spot for nine weeks, and the film remained a trending topic on Twitter for half of that time; conversely, The Back-Up Plan disappeared almost without a trace. An opinion, multiplied by hundreds or thousands of connections, carries a lot of weight.

These connections always come with us, part of who we are now. If we have an experience we find objectionable, our connections have a taste of that. A few months ago a friend found herself in Far North Queensland with an American Express card whose credit limit had summarily been cut in half with no warning, leaving her far away from home and potentially caught in a jam. When she called American Express to make an inquiry – and found that their consumer credit division closed at 5 pm on a Friday evening – she lost her temper. The 7500 people who follow her on Twitter heard a solid rant about the evils of American Express, a rant that they will now remember every time they find an American Express invitation letter in the post, or even when they decide which credit card to select while making a purchase.

Every experience, positive or negative, is now amplified beyond all comprehension. We sit here with the social equivalent of tactical nuclear weapons in our hands, toying with the triggers, and act surprised when occasionally they go off. Catherine Deveny, a weekly columnist for The Age, was summarily dismissed last week because of some messages she posted over Twitter during the Logies broadcast. It seems she hadn’t thought through the danger of sending an obscene – but comedic – message to thousands of people, a message that would be picked up and sent again, and sent again, and sent again, until the tabloid newspapers and television shows, smelling blood in the water, got in on the action. When you’re well-connected, everything is essentially public. There’s no firm boundary between your private sphere and your public life once you allow thousands of others a look in. That can be a good thing if one is hungry for celebrity and fame – Kim Kardashian is an excellent example of this – but it can also accelerate a drive to self-destruction (witness Miranda Devine’s comments from Sunday). We live within a social amplifier, and it’s always turned up to 11. When we scream, we can be heard around the world, but now our whispers sound like shouts.

This means that no one can be silenced, anywhere. Last June, the entire world watched as an abortive Iranian revolution broke out on the streets of Tehran, viewing clips shot on mobile handsets, uploaded to YouTube, tagged, then picked up and shared throughout social networks like Twitter, which brought them to the attention of CNN, the New York Times, and the US State Department. Mobiles brought into North Korea puncture the tightly held reins of state control as information and news seeps across the border with China, the human connection amplified by a social technology. It’s no longer the CIA or ASIO station chief who gathers intelligence from far-flung places. It courses through our human networks.

You can begin to see the shape of this revolution-in-progress. Everything is so new, so rough, so raw, so innocent of intention that we really don’t know where we are going. We’re all stumbling through this doorway together. Each of us holds our connections to one another, like balloons that, in sufficient numbers, might cause us to take flight. We’re lifting off and gaining speed. Whether we’re a glider or a guided missile is up to us. We must pause, take stock, and ask ourselves what we want from these powerful new tools. And, in return, ask what we must be prepared to accept.

III: Threat Assessment

Individuals are becoming radically hyper-empowered. Our connections give us capabilities undreamt of a generation ago. As individuals who assess the various risks for your organizations, you’ve just learned about a brand new one, a threat that will – relatively quickly – dwarf nearly all others. The risk of hyperconnectivity is coming at you from three distinct but interrelated axes: hyper-empowered individuals who want to interact with your organizations; hyper-empowered individuals who compose your organizations; and your organizations, when they grasp the nettle of hyperconnectivity.

What do you do when a hyperconnected individual wants to become a customer, or just interact in some way with your organization? What happens when an existing customer becomes hyperconnected? Both of these situations are becoming commonplace affairs. My friend who had her troubles with American Express typifies this sort of threat. She had a long-term relationship with the company, but in the last years of that relationship she became hyper-empowered. American Express didn’t know this – probably wouldn’t have understood it – and failed to manage the relationship when she ran into trouble.

The key attitudes for managing external relationships with hyperconnected individuals are humility and openness. American Express had no idea what was going on because they weren’t plugged into what my friend was saying to thousands of her followers. They didn’t consider her worth listening to. There’s no reason for this sort of thing to happen. Excellent tools exist that allow you to monitor what is being said about your organization, right now, who is saying it, and where. You can keep your finger on the pulse; when a customer has an issue, you can respond in a timely manner, humbly and transparently. Social media places an enormous value on transparency: unless someone’s motives – and connections – are apparent to you, you have no real reason to trust them, and no basis upon which to build that trust.

This isn’t a difficult policy to implement, but the responsibility for listening doesn’t lie with a single individual or department within your organization. Responsibility is spread throughout the organization; that’s the only way your organization will be able to handle all of the hyperconnected customers you do business with. Spread the load. As the proverb has it, ‘Many hands make light work.’ The same rule applies here. Make listening to customers a priority throughout your organizations. If you don’t, those customers will use their amplified capabilities to make your life a living hell.

Employees within your organizations don’t leave their own networks at the door when they walk into the office. Although employers often block access to services like Facebook and Twitter from employee workstations, mobiles and pervasive high speed wireless connectivity make that restriction increasingly meaningless. Employees will connect and stay connected throughout the day, regardless of your stated policy. Soon enough, you will be encouraging them to stay connected, in order to share the burden of all that listening. Right now, your employees are well connected, but poorly disciplined. They don’t know the right way to do things. Don’t blame them for this. It’s all very new, and there hasn’t been a lot of guidance.

If you walk out of today’s talk with any one thing buzzing in your head, let it be this: develop a social media policy for your employees. Employees want to know how they can be connected in the office without damaging your reputation or their position. In the absence of a social media policy, organizations will get into all sorts of prangs that could have been avoided. Case in point: last week’s sacking of Age columnist Catherine Deveny happened, in large part, because Fairfax has no social media policy. There were no guidelines for what constituted acceptable behavior, or even which behavior was ‘on the clock’ versus ‘off the clock’. Without these sorts of guidelines, hyperconnected employees will make their own decisions – putting your organizations, your stakeholders and your brands at risk.

Two well-known Australian organizations have established their own social media policies. The ABC boiled theirs down to four simple rules:

1) Do not mix the personal and the professional in ways likely to bring the ABC into disrepute;

2) Do not undermine your effectiveness at work;

3) Do not imply ABC endorsement of your personal views;

4) Do not disclose confidential information obtained through work.

This could be summed up with ‘use common sense’, but spelled out as it is here, the ABC has given its employees a framework that allows them to both regulate and embrace social media.

Telstra’s policy is wordier – it runs to five pages – but it is, in essence, very similar. It is good that Telstra has a social media policy, but that policy was only developed after a very public and very embarrassing incident. Last year, Telstra employee Leslie Nassar, who posted to Twitter pseudonymously under the account ‘Fake Stephen Conroy’, revealed his identity. When Telstra realized that one of their employees daily satirized the senator charged with ministerial oversight of their organization, the company was appalled, and quickly moved to fire Nassar – only to find that it couldn’t, because Nassar had violated no stated policy or conditions of employment. Shortly after that, Telstra developed and promulgated its social media guidelines. Learn from Telstra’s mistake. This same sort of PR and political catastrophe needn’t happen in your organizations, but I guarantee that it will, if you do not develop a social media policy. So please, get started immediately.

Finally, what happens when organizations hyperconnect? For hundreds of years, organizations have been based on rigid hierarchies and restricted flows of information. Hyperconnectivity puts paid to the org chart, replacing it with a dense set of hyperconnections between individuals within the organization, and between organizations: from each according to his ability, to each according to his need. We don’t really understand much about this new form of organization, other than to say that it looks very little like what we are familiar with today. But the pressure from hyperconnected individuals – both within and outside of the organization – will only increase, and to accommodate this pressure, the organization will increasingly find itself embedded in hyperconnections. This is the final leg of the revolution, still some years away, but one which requires careful planning today. Can your organization handle itself as it connects broadly to a planet where everyone is connected broadly? Will it maintain its own integrity, will it dissolve, merge, or disintegrate? This is a question that businesses need to ask, that schools need to ask, that governments need to ask. Everything from mass production to service delivery is being re-thought and re-shaped by our hyperconnectivity.

Organizations that master hyperconnectivity, putting social media to work, experience a leap forward in productivity. That leap forward comes at a price. Every tool that enhances productivity also changes everyone who uses it. None of us, as individuals or organizations, will be left behind, even if we choose to unplug, because we remain completely connected to a human world which is increasingly hyperconnected. There is no going back, nor any particular safety in the present. Instead, we need to connect, and together use the best of what we’ve got – which is substantial, because there are plenty of smart people in all your organizations, throughout the nation, and the world – to manage this transition. This could be a nearly bloodless revolution, if we can remember that, at our essence, we are the connected species. Though it may seem chaotic, this is not a collapse. It is a culmination.

I want to open this afternoon’s talk with a story about my friend Kate Carruthers. Kate is a business strategist, currently working at Hyro, over in Surry Hills. In November, while on a business trip to Far North Queensland, Kate pulled out her American Express credit card to pay for a taxi fare. Her card was declined. Kate paid with another card and thought little of it until the next time she tried to use the card – this time to pay for something rather pricier, and more important – and found her card declined once again.

As it turned out, American Express had cut Kate’s credit line in half, but hadn’t bothered to inform her of this until perhaps a day or two before, via post. So here’s Kate, far away from home, with a crook credit card. Thank goodness she had another card with her, or it could have been quite a problem. When she contacted American Express to discuss that credit line change – on a Friday evening – she discovered that this ‘consumer’ company kept banker’s hours in its credit division. That, for Kate, was the last straw. She began to post a series of messages to Twitter:

“I can’t believe how rude Amex have been to me; cut credit limit by 50% without notice; declined my card while in QLD even though acct paid”

“since Amex just treated me like total sh*t I just posted a chq for the balance of my account & will close acct on Monday”

“Amex is hardly accepted anywhere anyhow so I hardly use it now & after their recent treatment I’m outta there”

“luckily for me I have more than enough to just pay the sucker out & never use Amex again”

“have both a gold credit card & gold charge card with amex until monday when I plan to close both after their crap behaviour”

One after another, Kate sent this stream of messages out to her Twitter followers. All of her Twitter followers. Kate’s been on Twitter for a long time – well over three years – and she’s accumulated a lot of followers. Currently, she has over 8300 followers, although at the time she had her American Express meltdown, the number was closer to 7500.

Let’s step back and examine this for a moment. Kate is, in most respects, a perfectly ordinary (though whip-smart) human being. Yet she now has this ‘cloud’ of connections, all around her, all the time, through Twitter. These 8300 people are at least vaguely aware of whatever she chooses to share in her tweets. They care enough to listen, even if they are not always listening very closely. A smaller number of individuals (perhaps a few hundred, people like me) listen more closely. Nearly all the time we’re near a computer or a mobile, we keep an eye on Kate. (Not that she needs it. She’s thoroughly grown up. But if she ever got into a spot of trouble or needed a bit of help, we’d be on it immediately.)

This kind of connectivity is unprecedented in human history. We came from villages where perhaps a hundred of us lived close enough together that there were no secrets. We moved to cities where the power of numbers gave us all a degree of anonymity, but atomized us into disconnected individuals, lacking the social support of a community. Now we come full circle. This is the realization of the ‘Global Village’ that Marshall McLuhan talked about fifty years ago. At the time McLuhan thought of television as a retribalizing force. It wasn’t. But Facebook and Twitter and the mobiles each of us carry with us during all our waking hours? These are the new retribalizing forces, because they keep us continuously connected with one another, allowing us to manage connections in ever-greater numbers.

Anything Kate says, no matter how mundane, is now widely known. But it’s more than that. Twitter is text, but it is also links that can point to images, or videos, or songs, or whatever you can digitize and upload to the Web. Kate need simply drop a URL into a tweet and suddenly nearly ten thousand people are aware of it. If they like it, they will send it along (‘re-tweet’ is the technical term), and it will spread out quickly, like waves on a pond.

But Twitter isn’t a one-way street. Kate is ‘following’ 7250 individuals; that is, she’s receiving tweets from them. That sounds like a nearly impossible task: how can you pay attention to what that many people have to say? It’d be like trying to listen to every conversation at Central Station (or Flinders Street Station) at peak hour. Madness. And yet, it is possible. Tools have been created that allow you to keep a pulse on the madness, to stick a toe into the raging torrent of commentary.

Why would you want to do this? It’s not something that you need to do (or even want to do) all the time, but there are particular moments – crisis times – when Twitter becomes something else altogether. After an earthquake or other great natural disaster, after some pivotal (or trivial) political event, after some stunning discovery. The 5650 people I follow are my connection to all of that. My connection is broad enough that someone, somewhere in my network is nearly always among the first to know something, and among the first to share it. Which means that I too, if I am paying attention, am among the first to know.

Businesses have been built on this kind of access. An entire sector of the financial services industry, from Dow Jones to Bloomberg, has thrived because it provides subscribers with information before others have it – information that can be used on a trading floor. This kind of information comes freely to the very well-connected. This kind of information can be put to work to make you more successful as an individual, in your business, or in whatever hobbies you might pursue. And it’s always there. All you need do is plug into it.

When you do plug into it, once you’ve gotten over the initial confusion, and you’ve dedicated the proper time and tending to your network, so that it grows organically and enthusiastically, you will find yourself with something amazingly flexible and powerful. Case in point: in December I found myself in Canberra for a few days. Where to eat dinner in a town that shuts down at 5 pm? I asked Twitter, and forty-five minutes later I was enjoying some of the best seafood laksa I’ve had in Australia. A few days later, in the Barossa, I asked Twitter which wineries I should visit – and the top five recommendations were very good indeed. These may seem like trivial instances – though they’re the difference between a good holiday and a lackluster one – but what they demonstrate is that Twitter has allowed me to plug into all of the expertise of all of the thousands of people I am connected to. Human brainpower, multiplied by 5650 makes me smarter, faster, and much, much more effective. Why would I want to live any other way? Twitter can be inane, it can be annoying, it can be profane and confusing and chaotic, but I can’t imagine life without it, just as I can’t imagine life without the Web or without my mobile. The idea that I am continuously connected and listening to a vast number of other people – even as they listen to me – has gone from shocking to comfortable in just over three years.

Kate and I are just the leading edge. Where we have gone, all of the rest of you will soon follow. We are all building up our networks, one person at a time. A child born in 2010 will spend their lifetime building up a social network. They’ll never lose track of any individual they meet and establish a connection with. That connection will persist unless purposely destroyed. Think of the number of people you meet throughout your life, who you establish some connection with, even if only for a few hours. That number would easily reach into the thousands for every one of us. Kate and I are not freaks, we’re simply using the bleeding edge of a technology that will be almost invisible and not really worth mentioning by 2020.

All of this means that the network is even more alluring than it was a few years ago, and will become ever more alluring with the explosive growth in social networks. We are just at the beginning of learning how to use these new social networks. First we kept track of friends and family. Then we moved on to business associates. Now we’re using them to learn, to train ourselves and train others, to explore, to explain, to help and to ask for help. They are becoming a new social fabric which will knit us together into an unfamiliar closeness. This is already creating some interesting frictions for us. We like being connected, but we also treasure the moments when we disconnect, when we can’t be reached, when our time and our thoughts are our own. We preach focus to our children, but find our time and attention increasingly divided by devices that demand service: email, Web, phone calls, texts, Twitter, Facebook, all of it brand new, and all of it seemingly so important that if we ignore any of them we immediately feel the cost. I love getting away from it all. I hate the backlog of email that greets me when I return. Connecting comes with a cost. But it’s becoming increasingly impossible to imagine life without it.

II: Eyjafjallajökull

I recently read a most interesting blog post. Chase Saunders, a software architect and entrepreneur in Maine (not too far from where I was born) had a bit of a brainwave and decided to share it with the rest of the world. But you may not like it. Saunders begins with: “For me to get really mad at a company, it takes more than a lousy product or service: it’s the powerlessness I feel when customer service won’t even try to make things right. This happens to me about once a year.” Given the number of businesses we all interact with in any given year – both as consumers and as client businesses – this figure is far from unusual. There will be times when we get poor value for money, or poor service, or a poor response time, or what have you. The world is a cruel place. It’s what happens after that cruelty which is important: how does the business deal with an upset customer? If they fail the upset customer, that’s when problems can really get out of control.

In times past, an upset customer could cancel their account, taking their business elsewhere. Bad, but recoverable. These days, however, customers have more capability, precisely because of their connectivity. And this is where things start to go decidedly pear-shaped. Saunders gets to the core of his idea:

Let’s say you buy a defective part from ACME Widgets, Inc. and they refuse to refund or replace it. You’re mad, and you want the world to know about this awful widget. So you pop over to AdRevenge and you pay them a small amount. Say $3. If the company is handing out bad widgets, maybe some other people have already done this… we’ll suppose that before you got there, one guy donated $1 and another lady also donated $1. So now we have 3 people who have paid a total of $5 to warn other potential customers about this sketchy company…the 3 vengeful donations will go to the purchase of negative search engine advertising. The ads are automatically booked and purchased by the website…

And there it is. Your customers – your angry customers – have found an effective way to band together and warn every other potential customer just how badly you suck, and will do it every time your name gets typed into a search engine box. And they’ll do it whether or not their complaints are justified. In fact, your competitors could even game the system, stuffing it up with lots of false complaints. It will quickly become complete, ugly chaos.
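Saunders’s pooling mechanic is simple enough to sketch in code. Everything below is hypothetical – AdRevenge does not exist, and every name here is my own invention – but it captures the arithmetic of the scenario he describes: small donations accumulate against a company until the pool can fund a negative search ad.

```python
# Hypothetical sketch of the AdRevenge pooling mechanic.
# 'AdRevenge', 'donate' and 'ads_purchasable' are illustrative names,
# not a real service or API.

class AdRevenge:
    def __init__(self, ad_price=5.00):
        self.ad_price = ad_price   # assumed cost of one negative search ad
        self.pools = {}            # company name -> pooled donations

    def donate(self, company, amount):
        """Add a vengeful donation to a company's pool."""
        self.pools[company] = self.pools.get(company, 0.0) + amount
        return self.pools[company]

    def ads_purchasable(self, company):
        """How many negative ads the current pool can fund."""
        return int(self.pools.get(company, 0.0) // self.ad_price)

# The scenario from the post: two earlier donors, then your $3.
site = AdRevenge()
site.donate("ACME Widgets, Inc.", 1.00)
site.donate("ACME Widgets, Inc.", 1.00)
site.donate("ACME Widgets, Inc.", 3.00)
print(site.ads_purchasable("ACME Widgets, Inc."))  # -> 1
```

The point of the sketch is how low the threshold is: three strangers and five dollars are enough to start automatically advertising against a brand.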

You’re probably all donning your legal hats, and thinking about words like ‘libel’ and ‘defamation’. Put all of that out of your mind. The Internet is extraterritorial and effectively ungovernable, despite all of the neat attempts of governments from China to Iran to Australia to stuff it back into some sort of box. Ban AdRevenge somewhere, it pops up somewhere else – just as long as there’s a demand for it. Other countries – perhaps Iceland or Sweden, and certainly the United States – don’t have the same libel laws as Australia, yet their bits freely enter the nation over the Internet. There is no way to stop AdRevenge or something very much like AdRevenge from happening. No way at all. Resign yourself to this, and embrace it, because until you do you won’t be able to move on, into a new type of relationship with your customers.

Which brings us back to our beginning, and a very angry Kate Carruthers. Here she is, on a Friday night in Far North Queensland, spilling quite a bit of bile out onto Twitter. Every one of the 7500 people who read her tweets will bear her experience in mind the next time they decide whether they will do any business with American Express. This is damage, probably great damage to the reputation of American Express, damage that could have been avoided, or at least remediated before Kate ‘went nuclear’.

But where was American Express when all of this was going on? While Kate expressed her extreme dissatisfaction with American Express, its own marketing arm was busily cooking up a scheme to harness Twitter. Its Open Forum Pulse website shows you tweets from small businesses around the world. Ironic, isn’t it? American Express builds a website to show us what others are saying on Twitter, all the while ignoring what’s being said about it. So the fire rages, uncontrolled, while American Express fiddles.

There are other examples. On Twitter, one of my friends lauded the new VAustralia Premium Economy service to the skies, while VAustralia ran some silly marketing campaign that had four blokes sending three thousand tweets over two days in Los Angeles. Sure, I want to tune into that stream of dreck and drivel. That’s exactly what I’m looking for in the age of information overload: more crap.

This is it, the fundamental disconnect, the very heart of the matter. We all need to do a whole lot less talking, and a whole lot more listening. That’s true for each of us as individuals: we’re so well-connected now that by the time we do grow into a few thousand connections we’d be wiser listening than speaking, most of the time. But this is particularly true for businesses, which make their living dealing with customers. The relationship between businesses and their customers has historically been characterized by a ‘throw it over the wall’ attitude. There is no longer any wall, anywhere. The customer is sitting right beside you, with a megaphone pointed squarely into your ear.

If we were military planners, we’d call this ‘asymmetric warfare’. Instead, we should just give it the name it rightfully deserves: 21st-century business. It’s a battlefield out there, but if you come prepared for a 20th-century conflict – massive armies and big guns – you’ll be overrun by the fleet-footed and omnipresent guerilla warfare your customers will wage against you – if you don’t listen to them. Like volcanic ash, it may not present a solid wall to prevent your progress. But it will jam up your engines, and stop you from getting off the ground.

Listening is not a job. There will be no ‘Chief Listening Officer’, charged with keeping their ear to the ground, wondering if the natives are becoming restless, ready to sound the alarm when a situation threatens to go nuclear. There is simply too much to listen to, happening everywhere, all at once. Any single point which presumed to do the listening for an entire organization – whether an individual or a department – will simply be overwhelmed, drowning in the flow of data. Listening is not a job: it is an attitude. Every employee from the most recently hired through to the Chief Executive must learn to listen. Listen to what is being said internally (therein lies the path to true business success) and learn to listen to what others, outside the boundaries of the organization, are saying about you.

Employees already regularly check into their various social networks. Right now we think of that as ‘slacking off’, not something that we classify as work. But if we stretch the definition just a bit, and begin to recognize that the organization we work for is, itself, part of our social network, things become clearer. Someone can legitimately spend time on Facebook, looking for and responding to issues as they arise. Someone can be plugged into Twitter, giving it continuous partial attention all day long, monitoring and soothing customer relationships. And not just someone. Everyone. This is a shared responsibility. Working for the organization means being involved with and connected to the organization’s customers, past, present and future. Without that connection, problems will inevitably arise, will inevitably amplify, will inevitably result in ‘nuclear events’. Any organization (or government, or religion) can only withstand so many nuclear events before it begins to disintegrate. So this isn’t a matter of choice. This is a basic defensive posture. An insurance policy, of sorts, protecting you against those you have no choice but to do business with.

Yet this is not all about defense. Listening creates opportunity. I get some of my best ideas – such as that AdRevenge article – because I am constantly listening to others’ good ideas. Your customers might grumble, but they also praise you for a job well done. That positive relationship should be honored – and reinforced. As you reinforce the positive, you create a virtuous cycle of interactions which becomes terrifically difficult to disrupt. When that’s gone on long enough, and broadly enough, you have effectively raised up your own army – in the post-modern, guerilla sense of the word – who will go out there and fight for you and your brand when the haters and trolls and chaos-makers bear down upon you. These people are connected to you, and will connect to one another because of the passion they share around your products and your business. This is another network, an important network, an offensive network, and you need both defensive and offensive strategies to succeed on this playing field.

Just as we as individuals are growing into hyperconnectivity, so our businesses must inevitably follow. Hyperconnected individuals working with disconnected businesses is a perfect recipe for confusion and disaster. Like must meet with like before the real business of the 21st century can begin.

III: Services With a Smile

Moving from the abstract to the concrete, let’s consider the types of products and services required in our densely hyperconnected world. First and foremost, we are growing into a pressing, almost fanatical need for continuous connectivity. Wherever we are – even in airplanes – we must be connected. The qualities of that connection – its speed, reliability, and cost – are important co-factors to consider, and it is not always the cheapest connection which serves the customer best. I pay a premium for my broadband connection because I can send the CEO of my ISP a text any time my link goes down – and my trouble tickets are sorted very rapidly! Conversely, I went with a lower-cost carrier for my mobile service, and I am paying the price, with missed calls, failed data connections, and crashes on my iPhone.

As connectivity becomes more important, reliability crowds out other factors. You can offer a premium quality service at a premium price and people will adopt it, for the same reason they will pay more for a reliable car, or for electricity from a reliable supplier, or for food that they’re sure will be wholesome. Connectivity has become too vital to threaten. This means there’s room for healthy competition, as providers offer different levels of service at different price points, competing on quality, so that everyone gets the level of service they can afford. But uptime always will be paramount.

What service, exactly, is on offer? Connectivity comes in at least two flavors: mobile and broadband. These are not mutually exclusive. When we’re stationary we use broadband; when we’re in motion we use mobile services. The transition between these two networks should be as invisible and seamless as possible – as pioneered by Apple’s iPhone.

At home, in the office, at the café or library, in fact, in almost any structure, customers should have access to wireless broadband. This is one area where Australia noticeably trails the rest of the world. The tariff structure for Internet traffic has led Australians to be unusually conservative with their bits, because there is a specific cost incurred for each bit sent or received. While this means that ISPs should always have the funding to build out their networks to handle increases in capacity, it has also meant that users protect their networks from use in order to keep costs down. This fundamental dilemma has subjected wireless broadband in Australia to a subtle strangulation. We do not have the ubiquitous free wireless access that many other countries – in particular, the United States – have on offer, and this consequently alters our imagination of the possibilities for ubiquitous networking.

Tariffs are now low enough that customers ought to be encouraged to offer wireless networking to the broader public. There are some security concerns that need to be addressed to make this safe for all parties, but these are easily dealt with. There is no fundamental barrier to pervasive wireless broadband. It does not compete with mobile data services. Rather, as wireless broadband becomes more ubiquitous, people come to rely on continuous connectivity ever more. Mobile data demand will grow in lockstep as more wireless broadband is offered. Investment in wireless broadband is the best way to ensure that mobile data services continue to grow.

Mobile data services are best characterized principally by speed and availability. Beyond a certain point – perhaps a megabit per second – speed is not an overwhelming lure on a mobile handset. It’s nice but not necessary. At that point, it’s much more about provisioning: how will my carrier handle peak hour in Flinders Street Station (or Central Station)? Will my calls drop? Will I be able to access my cloud-based calendar so that I can grab a map and a phone number to make dinner reservations? If a customer finds themselves continually frustrated in these activities, one of two things will happen: either the mobile will go back into the pocket, more or less permanently, or the customer will change carriers. Since the customer’s family, friends and business associates will not be putting their own mobiles back into their pockets, it is unlikely that any customer will do so for any length of time, irrespective of the quality of their mobile service. If the carrier will not provision, the customers must go elsewhere.

Provisioning is expensive. But it is also the only sure way to retain your customers. A customer will put up with poor customer service if they know they have reliable service. A customer will put up with a higher monthly spend if they have a service they know they can depend upon in all circumstances. And a customer will quickly leave a carrier who cannot be relied upon. I’ve learned that lesson myself. Expect it to be repeated, millions of times over, in the years to come, as carriers, regrettably and avoidably, find that their provisioning is inadequate to support their customers.

Wireless is wonderful, and we think of it as a maintenance-free technology, at least from the customer’s point of view. Yet this is rarely so. Last month I listened to a talk by Genevieve Bell, Intel Fellow and Lead Anthropologist at the chipmaker. Her job is to spend time in the field – across Europe and the developing world – observing how people really use technology when it escapes into the wild. Several years ago she spent some time in Singapore, studying how pervasive wireless broadband works in the dense urban landscape of the city-state. In any of Singapore’s apartment towers – which are everywhere – nearly everyone has access to very high speed wired broadband (perhaps 50 megabits per second) – which is then connected to a wireless router to distribute the broadband throughout the apartment. But wireless is no great respecter of walls. Even in my own flat in Surry Hills I can see nine wireless networks from my laptop, including my own. In a Singapore tower block, the number is probably nearer to twenty or thirty.

Genevieve visited a family who had recently purchased a wireless printer. They were dissatisfied with it, pronouncing it ‘possessed’. ‘What do you mean?’ she inquired. Well, they explained, it doesn’t print what they tell it to print. But it does print other things. Things they never asked for. The family called for a grandfather to come over and practice his arts of feng shui, hoping to rid the printer of its evil spirits. The printer, now repositioned to a more auspicious spot, still misbehaved. A few days later, a knock came on the door. Outside stood a neighbor, a sheaf of paper in his hands, saying, “I believe these are yours…?”

The neighbor had also recently purchased a wireless printer, and it seems that these two printers had automatically registered themselves on each other’s networks. Automatic configuration makes wireless networks a pleasure to use, but it also makes for botched configurations and flaky communication. Most of this is so far outside the skill set of the average consumer that these problems will never be properly remedied. The customer might make a support call, and maybe – just maybe the problem will be solved. Or, the problem will persist, and the customer will simply give up. Even with a support call, wireless networks are often so complex that the problem can’t be wholly solved.

As wireless networks grow more pervasive, Genevieve Bell recommends that providers offer a high-quality hand-holding and diagnostic service to their customers. They need to offer a ‘tune up’ service that will travel to the customer once a year to make sure everything is running well. Consumers need to be educated that wireless networks do not come for free. Like anything else, they require maintenance, and the consumer should come to expect that it will cost them something, every year, to keep it all up and running. In this, a wireless network is no different than a swimming pool or a lawn. There is a future for this kind of service: if you don’t offer it, your competitors soon will.

Finally, let me close with what the world looks like when all of these services are working perfectly. Lately, I’ve become a big fan of Foursquare, a ‘location-based social network’. Using the GPS on my iPhone, Foursquare allows me to ‘check in’ when I go to a restaurant, a store, or almost anywhere else. Once I’ve checked in, I can make a recommendation – a ‘tip’ in Foursquare lingo – or simply look through the tips provided by those who have been there before me. This list of tips is quickly growing longer, more substantial, and more useful. I can walk into a bar that I’ve never been to before and know exactly which cocktail I want to order. I know which table at the restaurant offers the quietest corner for a romantic date. I know which salesperson to talk to for a good deal on that mobile handset. And so on. I have immediate and continuous information in depth, and I put that information to work, right now, to make my life better.

The world of hyperconnectivity isn’t some hypothetical place we’ll never see. We are living in it now. The seeds of the future are planted in the present. But the shape of the future is determined by our actions today. It is possible to blunt and slow Australia’s progress into this world with bad decisions and bad services. But it is also possible to thrust the nation into global leadership if we can embrace the inevitable trend toward hyperconnectivity, and harness it. It has already transformed our lives. It will transform our businesses, our schools, and our government. You are the carriers of that change. Your actions will bring this new world into being.

We live in the age of networks. Wherever we are, five billion of us are continuously and ubiquitously connected. That’s everyone over the age of twelve who earns more than about two dollars a day. The network has us all plugged into it. Yet this is only the more recent, and more explicit network. Networks are far older than this most modern incarnation; they are the foundation of how we think. That’s true at the most concrete level: our nervous system is a vast neural network. It’s also true at a more abstract level: our thinking is a network of connections and associations. This is necessarily reflected in the way we write.

I became aware of this connectedness of our thoughts as I read Ted Nelson’s Literary Machines back in 1982. Perhaps the seminal introduction to hypertext, Literary Machines opens with the basic assertion that all texts are hypertexts. Like it or not, we implicitly reference other texts with every word we write. It’s been like this since we learned to write – earlier, really, because we all crib from one another’s spoken thoughts. It’s the secret to our success. Nelson wanted to build a system that would make these implicit relationships explicit, exposing all the hidden references, making text-as-hypertext a self-evident truth. He never achieved it. But Nelson did influence a generation of hackers – Sir Tim Berners-Lee among them – and pushed them toward the implementation of hypertext.

As the universal hypertext system of HTTP and HTML conquered all, hypertext revealed qualities as a medium which had hitherto been unsuspected. While the great strength of hypertext is its capability for non-linearity – you can depart from the text at any point – no one had reckoned on the force (really, a type of seduction) of those points of departure. Each link presents an opportunity for exploration, and is, in a very palpable sense, similar to the ringing of a telephone. Do we answer? Do we click and follow? A link is pregnant with meaning, and passing a link by necessarily incurs an opportunity cost. The linear text is constantly weighed down with a secondary, ‘centrifugal’ force, trying to tear the reader away from the inertia of the text, and on into another space. The more heavily linked a particular hypertext document is, the greater this pressure.

Consider two different documents that might be served up in a Web browser. One of them is an article from the New York Times Magazine. It is long – perhaps ten thousand words – and has, over all of its length, just a handful of links. Many of these links point back to other New York Times articles. This article stands alone. It is a hyperdocument, but it has not embraced the capabilities of the medium. It has not been seduced. It is a spinster, of sorts, confident in its purity and haughty in its isolation. This article is hardly alone. Nearly all articles I could point to from any professional news source display the same characteristics of separateness and resistance to connect with the medium they employ. We all know why this is: there is a financial pressure to keep eyes within the website, because attention has been monetized. Every link presents an escape route, and a potential loss of income. Hence, links are kept to a minimum, the losses staunched. Disappointingly, this has become a model for many other hyperdocuments, even where financial considerations do not conflict with the essential nature of the medium. The tone has been set.

On the other hand, consider an average article in Wikipedia. It could be short or long – though only a handful reach ten thousand words – but it will absolutely be sprinkled liberally with links. Many of these links will point back into Wikipedia, allowing someone to learn the meaning of a term they’re unfamiliar with, or explore some tangential bit of knowledge, but there also will be plenty of links that face out, into the rest of the Web. This is a hyperdocument which has embraced the nature of the medium, which is not afraid of luring readers away under the pressure of linkage. Wikipedia is a non-profit organization which does not accept advertising and does not monetize attention. Without this competition of intentions, Wikipedia is itself an example of another variety of purity, the pure expression of the tension between the momentum of the text and the centrifugal force of hypertext.

Although commercial hyperdocuments try to fence themselves off from the rest of the Web and the lure of its links, they are never totally immune from its persistent tug. Just because you have landed somewhere that has a paucity of links doesn’t constrain your ability to move non-linearly. If nothing else, the browser’s ‘Back’ button continually offers that opportunity, as do all of your bookmarks, the links that lately arrived in email from friends or family or colleagues, even an advertisement proffered by the site. In its drive to monetize attention, the commercial site must contend with the centrifugal force of its own ads. In order to be situated within a hypertext environment, a hyperdocument must accept the reality of centrifugal force, even as it tries, ever more cleverly, to resist it. This is the fundamental tension of all hypertext, but here heightened and amplified because it is resisted and forbidden. It is a source of rising tension, as the Web-beyond-the-borders becomes ever more comprehensive, meaningful and alluring, while the hyperdocument multiplies its attempts to ensnare, seduce, and retain.

This rising tension has had a consequential impact on the hyperdocument, and, more broadly, on an entire class of documents. It is most obvious in the way we now absorb news. Fifteen years ago, we spread out the newspaper for a leisurely read, moving from article to article, generally following the flow of the sections of the newspaper. Today, we click in, read a bit, go back, click in again, read some more, go back, go somewhere else, click in, read a bit, open an email, click in, read a bit, click forward, and so on. We allow ourselves to be picked up and carried along by the centrifugal force of the links; with no particular plan in mind – except perhaps to leave ourselves better informed – we flow with the current, floating down a channel which is shaped by the links we encounter along the way. The newspaper is no longer a coherent experience; it is an assemblage of discrete articles, each of which has no relation to the greater whole. Our behavior reflects this: most of us already gather our news from a selection of sources (NY Times, BBC, Sydney Morning Herald and Guardian UK in my case), or even from an aggregator such as Google News, which completely abstracts the article content from its newspaper ‘vehicle’.

The newspaper as we have known it has been shredded. This is not the fault of Google or any other mechanical process, but rather is a natural if unforeseen consequence of the nature of hypertext. We are the ones who feel the lure of the link; no machine can do that. Newspapers made the brave decision to situate themselves as islands within a sea of hypertext. Though they might believe themselves singular, they are not the only islands in the sea. And we all have boats. That was bad enough, but the islands themselves are dissolving, leaving nothing behind but metaphorical clots of dirt in murky water.

The lure of the link has a two-fold effect on our behavior. With its centrifugal force, it is constantly pulling us away from wherever we are. It also presents us with an opportunity cost. When we load that 10,000-word essay from the New York Times Magazine into our browser window, we’re making a conscious decision to dedicate time and effort to digesting that article. That’s a big commitment. If we’re lucky – if there are no emergencies or calls on the mobile or other interruptions – we’ll finish it. Otherwise, it might stay open in a browser tab for days, silently pleading for completion or closure. Every time we come across something substantial, something lengthy and dense, we run an internal calculation: Do I have time for this? Does my need and interest outweigh all of the other demands upon my attention? Can I focus?

In most circumstances, we will decline the challenge. Whatever it is, it is not salient enough, not alluring enough. It is not so much that we fear commitment as we feel the pressing weight of our other commitments. We have other places to spend our limited attention. This calculation and decision has recently been codified into an acronym: “tl;dr”, for “too long; didn’t read”. It may be weighty and important and meaningful, but hey, I’ve got to get caught up on my Twitter feed and my blogs.

The emergence of the ‘tl;dr’ phenomenon – which all of us practice without naming it – has led public intellectuals to decry the ever-shortening attention span. Attention spans are not shortening: ten-year-olds will still drop everything to read a nine-hundred-page fantasy novel for eight days. Instead, attention has entered an era of hypercompetitive development. Twenty years ago only a few media clamored for our attention. Now, everything from video games to chatroulette to real-time Twitter feeds to text messages demand our attention. Absence from any one of them comes with a cost, and that burden weighs upon us, subtly but continuously, all figuring into the calculation we make when we decide to go all in or hold back.

The most obvious effect of this hypercompetitive development of attention is the shortening of the text. Under the tyranny of ‘tl;dr’ three hundred words seems just about the right length: long enough to make a point, but not so long as to invoke any fear of commitment. More and more, our diet of text comes in these ‘bite-sized’ chunks. Again, public intellectuals have predicted that this will lead to a dumbing-down of culture, as we lose the depth in everything. The truth is more complex. Our diet will continue to consist of a mixture of short and long-form texts. In truth, we do more reading today than ten years ago, precisely because so much information is being presented to us in short form. It is digestible. But it need not be vacuous. Countless specialty blogs deliver highly-concentrated texts to audiences who need no introduction to the subject material. They always reference their sources, so that if you want to dive in and read the lengthy source work, you are free to commit. Here, the phenomenon of ‘tl;dr’ reveals its Achilles’ heel: the shorter the text, the less invested you are. You give way more easily to centrifugal force. You are more likely to navigate away.

There is a cost incurred both for substance and the lack thereof. Such are the dilemmas of hypertext.

II: Schwarzschild Radius

It appears inarguable that 2010 is the Year of the Electronic Book. The stars have finally aligned: there is a critical mass of usable, well-designed technology, broad acceptance (even anticipation) within the public, and an agreement among publishers that revenue models do exist. Amazon and its Kindle (and various software simulators for PCs and smartphones) have proven the existence of a market. Apple’s recently-released iPad is quintessentially a vehicle for iBooks, its own bookstore-and-book-reader package. Within a few years, tens of millions of both devices, their clones and close copies will be in the hands of readers throughout the world. The electronic book is an inevitability.

At this point a question needs to be asked: what’s so electronic about an electronic book? If I open the Stanza application on my iPhone, and begin reading George Orwell’s Nineteen Eighty-Four, I am presented with something that looks utterly familiar. Too familiar. This is not an electronic book. This is ‘publishing in light’. I believe it essential that we discriminate between the two, because the same commercial forces which have driven links from online newspapers and magazines will strip the term ‘electronic book’ of all of its meaning. An electronic book is not simply a one-for-one translation of a typeset text into UTF-8 characters. It doesn’t even necessarily begin with that translation. Instead, first consider the text qua text. What is it? Who is it speaking to? What is it speaking about?

These questions are important – essential – if we want to avoid turning living typeset texts into dead texts published in light. That act of murder would give us less than we had before, because the published in light texts essentially disavow the medium within which they are situated. They are less useful than typeset texts, purposely stripped of their utility to be shoehorned into a new medium. This serves the economic purposes of publishers – interested in maximizing revenue while minimizing costs – but does nothing for the reader. Nor does it make the electronic book an intrinsically alluring object. That’s an interesting point to consider, because hypertext is intrinsically alluring. The reason for the phenomenal, all-encompassing growth of the Web from 1994 through 2000 was that it seduced everyone who had any relationship to the text. If an electronic book does not offer a new relationship to the text, then what precisely is the point? Portability? Ubiquity? These are nice features, to be sure, but they are not, in themselves, overwhelmingly alluring. This is the visible difference between a book that has been printed in light and an electronic book: the electronic book offers a qualitatively different experience of the text, one which is impossibly alluring. At its most obvious level, it is the difference between Encyclopedia Britannica and Wikipedia.

Publishers will resist the allure of the electronic book, seeing no reason to change what they do simply to satisfy the demands of a new medium. But then, we know that monks did not alter the practices within the scriptorium until printed texts had become ubiquitous throughout Europe. Today’s publishers face a similar obsolescence; unless they adapt their publishing techniques appropriately, they will rapidly be replaced by publishers who choose to embrace the electronic book as a medium. For the next five years we will exist in an interregnum, as books published in light make way for true electronic books.

What does the electronic book look like? Does it differ at all from the hyperdocuments we are familiar with today? In fifteen years of design experimentation, we’ve learned a lot of ways to present, abstract and play with text. All of these are immediately applicable to the electronic book. The electronic book should represent the best that 2010 has to offer and move forward from that point into regions unexplored. The printed volume took nearly fifty years to evolve into its familiar hand-sized editions. Before that, the form of the manuscript volume – chained to a desk or placed upon an altar – dictated the size of the book. We shouldn’t try to constrain our idea of what an electronic book can be based upon what the book has been. Over the next few years, our innovations will surprise us. We won’t really know what electronic books look like until we’ve had plenty of time to play with them.

The electronic book will not be immune from the centrifugal force which is inherent to the medium. Every link, every opportunity to depart from the linear inertia of the text, presents the same tension as within any other hyperdocument. Yet we come to books with a sense of commitment. We want to finish them. But what, exactly, do we want to finish? The electronic book must necessarily reveal the interconnectedness of all ideas, of all writings – just as the Web does. So does an electronic book have a beginning and an end? Or is it simply a densely clustered set of texts with a well-defined path traversing them? From the vantage point of 2010 this may seem like a faintly ridiculous question. I doubt that will be the case in 2020, when perhaps half of our new books are electronic books. The more that the electronic book yields itself to the medium which constitutes it, the more useful it becomes – and the less like a book. There is no way that the electronic book can remain apart, indifferent and pure. It will become a hybrid, fluid thing, without clear beginnings or endings, but rather with a concentration of significance and meaning that rises and falls depending on the needs and intent of the reader. More of a gradient than a boundary.

It remains unclear how any such construction can constitute an economically successful entity. Ted Nelson’s “Project Xanadu” anticipated this chaos thirty-five years ago, and provided a solution: ‘transclusion’, which allows hyperdocuments to be referenced and enclosed within other hyperdocuments, ensuring the proper preservation of copyright throughout the hypertext universe. The Web provides no such mechanism, and although it is possible that one could be hacked into our current models, it seems very unlikely that this will happen. This is the intuitive fear of the commercial publishers: they see their market dissolving as the sharp edges disappear. Hence, they tightly grasp their publications and copyrights, publishing in light because it at least presents no slippery slope into financial catastrophe.
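Nelson’s transclusion can be illustrated with a toy model. This is a hypothetical sketch of the general idea, not Xanadu’s actual design: a document is a sequence of literal strings and references into other documents, and rendering resolves each reference in place while carrying its provenance along, so quotation never severs the link back to the source and its rights-holder. All names here (`documents`, `transclude`, the sample text) are invented for illustration.

```python
# Toy model of transclusion: documents quote one another by reference,
# and rendering resolves those references in place, preserving provenance.

# A registry of source documents, each with an author for attribution.
documents = {
    "origin": {"author": "alice", "text": "The Web provides no such mechanism."},
}

def transclude(parts):
    """Render a document made of literal strings and (doc_id, start, end)
    references, returning the text plus the set of quoted authors."""
    out = []
    credits = set()
    for part in parts:
        if isinstance(part, str):
            out.append(part)  # literal text, owned by this document
        else:
            doc_id, start, end = part  # a reference into another document
            src = documents[doc_id]
            out.append(src["text"][start:end])
            credits.add(src["author"])  # provenance survives the quotation
    return "".join(out), credits

# A quoting document: its middle part is a live reference, not a copy.
quoting_doc = [
    'As one author observed, "',
    ("origin", 0, 24),
    '", and transclusion was proposed instead.',
]

rendered, credits = transclude(quoting_doc)
```

The design choice the sketch makes visible is the one Nelson insisted on: the quoted span is never duplicated, so the source can always be traced (and, in Xanadu’s vision, compensated) whenever it appears inside another document.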

We come now to a line which we need to cross very carefully and very consciously, the ‘Schwarzschild Radius’ of electronic books. (For those not familiar with astrophysics, the Schwarzschild Radius is the boundary to a black hole. Once you’re on the wrong side you’re doomed to fall all the way in.) On one side – our side – things look much as they do today. Books are published in light, the economic model is preserved, and readers enjoy a digital experience which is a facsimile of the physical. On the other side, electronic books rapidly become almost completely unrecognizable. It’s not just the financial model which disintegrates. As everything becomes more densely electrified, more subject to the centrifugal force of the medium, and as we become more familiar with the medium itself, everything begins to deform. The text, linear for tens or hundreds of thousands of words, fragments into convenient chunks, the shortest of which looks more like a tweet than a paragraph, the longest of which only occasionally runs for more than a thousand words. Each of these fragments points directly at its antecedent and descendant, or rather at its antecedents and descendants, because it is quite likely that there is more than one of each, simply because there can be more than one of each. The primacy of the single narrative can not withstand the centrifugal force of the medium, any more than the newspaper or the magazine could. Texts will present themselves as intense multiplicity, something that is neither a branching narrative nor a straight line, but which possesses elements of both. This will completely confound our expectations of linearity in the text.

We are today quite used to discontinuous leaps in our texts, though we have not mastered how to maintain our place as we branch ever outward, a fault more of our nervous systems than our browsers. We have a finite ability to track and backtrack; even with the support of the infinitely patient and infinitely impressionable computer, we lose our way, become distracted, or simply move on. This is the greatest threat to the book, that it simply expands beyond our ability to focus upon it. Our consciousness can entertain a universe of thought, but it can not entertain the entire universe at once. Yet our electronic books, as they thread together and merge within the greater sea of hyperdocuments, will become one with the universe of human thought, eventually becoming inseparable from it. With no beginning and no ending, just a series of ‘and-and-and’, as the various nodes, strung together by need or desire, assemble upon demand, the entire notion of a book as something discrete, and for that reason, significant, is abandoned, replaced by a unity, a nirvana of the text, where nothing is really separate from anything else.

What ever happened to the book? It exploded in a paroxysm of joy, dissolved into union with every other human thought, and disappeared forever. This is not an ending, any more than birth is an ending. But it is a transition, at least as profound and comprehensive as the invention of moveable type. It’s our great good luck to live in the midst of this transition, astride the dilemmas of hypertext and the contradictions of the electronic book. Transitions are chaotic, but they are also fecund. The seeds of the new grow in the humus of the old. (And if it all seems sudden and sinister, I’ll simply note that Nietzsche said that a new era nearly always looks demonic to the age it obsolesces.)

III: Finnegans Wiki

So what of Aristotle? What does this mean for the narrative? It is easy to conceive of a world where non-fiction texts simply dissolve into the universal sea of texts. But what about stories? From time out of mind we have listened to stories told by the campfire. The Iliad, The Mahabharata, and Beowulf held listeners spellbound as the storyteller wove the tale. For hours at a time we maintained our attention and focus as the stories that told us who we are and our place in the world traveled down the generations.

Will we lose all of this? Can narratives stand up against the centrifugal forces of hypertext? Authors and publishers both seem assured that whatever happens to non-fiction texts, the literary text will remain pure and untouched, even as it becomes a wholly electronic form. The lure of the literary text is that it takes you on a singular journey, from beginning to end, within the universe of the author’s mind. There are no distractions, no interruptions, unless the author has expressly put them there in order to add tension to the plot. A well-written literary text – and even a poorly-written but well-plotted ‘page-turner’ – has the capacity to hold the reader tight within the momentum of linearity. Something is a ‘page-turner’ precisely because its forward momentum effectively blocks the centrifugal force. We occasionally stay up all night reading a book that we ‘couldn’t put down’, precisely because of this momentum. It is easy to imagine that every literary text which doesn’t meet this higher standard of seduction will simply fail as an electronic book, unable to counter the overwhelming lure of the medium.

This is something we never encountered with printed books: until the mid-20th century, the only competition for printed books was other printed books. Now the entire Web – already quite alluring and only growing more so – offers itself up in competition for attention, along with television and films and podcasts and Facebook and Twitter and everything else that has so suddenly become a regular feature of our media diet. How can any text hope to stand against that?

And yet, some do. Children unplugged to read each of the increasingly-lengthy Harry Potter novels, as teenagers did for the Twilight series. Adults regularly buy the latest novel by Dan Brown in numbers that boggle the imagination. None of this is high literature, but it is literature capable of resisting all our alluring distractions. This is one path that the book will follow, one way it will stay true to Aristotle and the requirements of the narrative arc. We will not lose our stories, but it may be that, like blockbuster films, they will become more self-consciously hollow, manipulative, and broad. That is one direction, a direction literary publishers will pursue, because that’s where the money lies.

There are two other paths open for literature, nearly diametrically opposed. The first was taken by JRR Tolkien in The Lord of the Rings. Although hugely popular, the three-book series has never been described as a ‘page-turner’, being too digressive and leisurely, yet, for all that, entirely captivating. Tolkien imagined a new universe – or rather, retrieved one from the fragments of Northern European mythology – and placed his readers squarely within it. And although readers do finish the book, in a very real sense they do not leave that universe. The fantasy genre, which Tolkien single-handedly invented with The Lord of the Rings, sells tens of millions of books every year, and the universe of Middle-earth, the archetypal fantasy world, has become the playground for millions who want to explore their own imaginations. Tolkien’s magnum opus lends itself to hypertext; it is one of the few literary works to come complete with a set of appendices to deepen the experience of the universe of the books. Online, the fans of Middle-earth have created seemingly endless resources to explore, explain, and maintain the fantasy. Middle-earth launches off the page, driven by its own centrifugal force, its own drive to unpack itself into a much broader space, both within the reader’s mind and online, in the collective space of all of the work’s readers. This is another direction for the book. While every author will not be a Tolkien, a few authors will work hard to create a universe so potent and broad that readers will be tempted to inhabit it. (Some argue that this is the secret of JK Rowling’s success.)

Finally, there is another path open for the literary text, one which refuses to ignore the medium that constitutes it, which embraces all of the ambiguity and multiplicity and liminality of hypertext. There have been numerous attempts at ‘hypertext fiction’; nearly all of them have been unreadable failures. But there is one text which stands apart, both because it anticipated our current predicament, and because it chose to embrace its contradictions and dilemmas. The book was written and published before the digital computer had been invented, yet even features an innovation which is reminiscent of hypertext. That work is James Joyce’s Finnegans Wake, and it was Joyce’s deliberate effort to make each word choice a layered exploration of meaning that gives the text such power. It should be gibberish, but anyone who has read Finnegans Wake knows it is precisely the opposite. The text is overloaded with meaning, so much so that the mind can’t take it all in. Hypertext has been a help; there are a few wikis which attempt to make linkages between the text and its various derived meanings (the maunderings of four generations of graduate students and Joycephiles), and it may even be that – in another twenty years or so – the wikis will begin to encompass much of what Joyce meant. But there is another possibility. In so fundamentally overloading the text, implicitly creating a link from every single word to something else, Joyce wanted to point to where we were headed. In this, Finnegans Wake could be seen as a type of science fiction, not a dystopian critique like Aldous Huxley’s Brave New World, nor the transhumanist apotheosis of Olaf Stapledon’s Star Maker (both near-contemporary works) but rather a text that pointed the way to what all texts would become, performance by example. As texts become electronic, as they melt and dissolve and link together densely, meaning multiplies exponentially. 
Every sentence, and every word in every sentence, can send you flying in almost any direction. The tension within this text (there will be only one text) will make reading an exciting, exhilarating, dizzying experience – as it is for those who dedicate themselves to Finnegans Wake.

It has been said that all of human culture could be reconstituted from Finnegans Wake. As our texts become one, as they become one hyperconnected mass of human expression, that new thing will become synonymous with culture. Everything will be there, all strung together. And that’s what happened to the book.

We live today in the age of networks. Having grown from nothing just fifteen years ago, the network has become one of the principal influences in our lives. We trust the network; we depend on the network; we use the network to make ourselves more effective. This state of affairs did not develop gradually; rather, we have passed through a series of unpredicted and non-linear shifts in the fabric of culture.

The first of these shifts was coincident with the birth of the Web itself, back in the mid-1990s. From its earliest days the Web was alluring because it represented all things to all people: it could serve as both resource and repository for anything that might interest us, a platform for whatever we might choose to say. The truth of those earliest days is that we didn’t really know what we wanted to say; the stereotype of the page where one went on long and lovingly about one’s cat carries an echo of that search for meaning. The lights were on, but nobody was home.

Drawing the curtain on this more-or-less vapid era of the Web, the second shift began with the collapse of the dot-com bubble in the early 2000s. With the undergrowth cleared away, people could once again focus on the why of the Web. This was when the Web came into its own as an interactive medium. The Web could have been an interactive medium from day one – the technology hadn’t changed one bit – but it took time for people to map out the evolving relationship between user and experience. The Web, we realized, is not a page to read, but rather, a space for exploration, connection and sharing.

This is when things start to get interesting, when ideas like Wikipedia begin to emerge. Wikipedia is not a technology, at least, it’s not a specific technology. Wikis have been around since 1995, nearly as old as the Web itself. Databases are older than the Web, too. So what is new about Wikipedia? Simply this: the idea of sharing. Wikipedia invites us all to share from our expertise, for the benefit of one another. It is an agreement to share what we know to collectively improve our capability. If you strip away all of the technology, and all of the hype – both positive and negative – from Wikipedia, what you’re left with is this agreement to share. In the decade since Wikipedia’s launch we’ve learned to share across a broad range of domains. This sharing supported by technology is a new thing, and dramatically increases the allure of the network. What was merely very interesting back in 1995 became almost overpowering in the years since the turn of the millennium. It has consistently become harder and harder to imagine a life without the network, because the network provides so much utility.

The final shift occurred in 2007, as Facebook introduced F8, its plug-in architecture which opened its design – and its data – to outside developers. Facebook exploded from a few million users to over four hundred million: the third largest nation in the world. Social networks are significant because they harness and amplify our innate human desire and capability to connect with one another. We constantly look to our social networks – that is, our real-world networks – to remind us who we are, where we are, and what we’re doing. These social networks provide our ontological grounding. When translated into cyberspace, these social networks can become almost impossibly potent – which is why, when they’re used to bully or harass someone, they can lead to such disastrous results. It becomes almost too easy, and we become almost too powerful.

A lot of what we’ll see in this decade is an assessment of what we choose to do with our new-found abilities. We can use these social networks to transmit pornographic pictures of one another back and forth at such frequency and density that we simply numb ourselves into a kind of fleshy hypnosis. That is one possible direction for the future. Or, we could decide that we want something different for ourselves, something altogether more substantial and meaningful. But in order to get that sort of clarity, we need to be very clear on what we want – both direction and outcome. At this point we are simply playing around – with a loaded weapon – hoping that it doesn’t accidentally go off.

Of course it does; someone sets up a Facebook page to memorialize a murdered eight-year-old, but leaves the door open to all comers (believing, unrealistically, that others will share their desire to mourn together), only to see the overflowing sewage of the Internet spill bile and hatred and psychopathology onto a Web page. This happens again and again; it happened several times in one week in February. We are not learning the lesson we are meant to learn. We are missing something. Partly this is because it is all so new, but partly it is because we do not know what our own intentions are. Without that, without a stated goal, we can not winnow the wheat from the chaff. We will forget to close the windows and lock the doors. We will amuse ourselves to death.

I mention this because, as educators, it is up to all of us to act as forces for the positive moral good of the culture as a whole. Cultural values are transmitted by educators; and while parents may be a bigger influence, teachers have their role to play. Parents are simply overwhelmed by all of this novelty – the Web wasn’t around when they were children, and social networks weren’t around even five years ago. So, right at this moment in time, educators get to be the adult cultural vanguard, the vital mentoring center.

If we had to do this ourselves, alone, as individuals – or even as individual institutions – the project would almost certainly fail. After all, how could we hope to balance all of the seductions ‘out there’ against the sense which needs to be taught ‘in here’? We would simply be overwhelmed – our current condition. Fortunately, we are as well connected, at least in potential, as any of our students. We have access to better resources. And we have more experience, which allows us to put those resources to work. In short, we are far better placed to make use of social media than our charges, even if they seem native to the medium while we profess to be immigrants.

One thing that has changed, because of the second shift, the trend toward sharing, is that educational resources are available now as never before. Wikipedia led the way, but it is just a small island in a much larger sea of content, provided by individuals and organizations throughout the world. iTunes University, YouTube University, the numberless podcasts and blogs that have sprung up from experts on every subject from macroeconomics to the history of Mesoamerica – all of it searchable by Google, all of it instantaneously accessible – every one of these points to the fact that we have clearly entered a new era, where we are surrounded by and saturated with an ‘educational field’ of sorts. Whatever you need to know, you’re soaking in it.

This educational field is brand-new. No one has made systematic use of it, no teacher, no institution, no administration. But that doesn’t lessen its impact. We all consult Wikipedia when we have some trivial question to answer; that behavior is the archetype for where education is headed in the 21st century – real-time answers on-demand, drawn from the educational field.

Paired with the educational field is the ability for educators to establish strong social connections – not just with other educators, but laterally, through the student to the parents, through the parents to the community, and so on, so that the educator becomes ineluctably embedded in a web of relationships which define, shape and determine the pedagogical relationship. Educators have barely begun to make use of the social networking tools on offer; just to have a teacher ‘friend’ a student in Facebook is, to some eyes, a cause for concern – what could possibly be served by that relationship, one which subverts the neat hierarchy of the 19th century classroom?

The relationship is the essence of the classroom, that which remains when all the other trivia of pedagogy are stripped away. The relationship between the teacher and the student is at the core of the magical moment when knowledge is transmitted between the generations. We now have the greatest tool ever created by the hand of man to reinforce and strengthen that relationship. And we need to use it, or else we will all sink beneath a rising tide of noise and filth and distraction.

But how?

II: The Unfinished Project

The roots of today’s talk lie in a public conversation I had with Dr. Evan Arthur, who manages the Digital Education Revolution Group within the Department of Education, Employment and Workplace Relations. As part of this conversation, I asked him about educational styles, and, in particular, Constructivism. As conceived by Jean Piaget and his successors across the 20th century, Constructivism states that the child learns through play – or rather, through repeated interactions with the world. The child creates schema and puts them to the test, where they either succeed or fail. Failed schema are revised and re-tested, while successful schema are incorporated into ever-more-comprehensive schema. Through many years of research we know that we learn the physics of the real world through a constant process of experimentation. Every time a toddler dumps a cup of juice all over himself, he’s actually conducting an investigation into the nature of the real.

The basic tenets of Constructivism are not in dispute, although many educators have consistently resisted the underlying idea of Constructivism – that it is the child who determines the direction of learning. This conflicts directly with the top-down teacher-to-student model of education which we are all intimately familiar with, which has determined the nature of pedagogy and even the architecture of our classrooms. This is the grand battle between play and work; between ludic exploration and the hard grind of assimilating the skills that situate us within an ever-more-complex culture.

At the moment, this trench warfare has frozen us in a stalemate located, for the most part, between year two and year three. In the first two years education has a strong ludic component, and students are encouraged to explore. But in year three the process becomes routinized, formalized and very strict. Certainly, eight-year-olds are better able to understand restrictions than six-year-olds. They’re better at following the rules, at colouring within the lines. But it seems as though we’ve taken advantage of the fact that an older child is a more compliant one. It is true that as we advance in years, our ludic nature becomes tempered by an adult’s sensibility. But humans retain the urge to play throughout their lives – to a greater degree than any other species we know of. It could very well be that our ability to learn is intimately tied to our desire to play.

If we are prepared to swallow this bitter pill, and acknowledge that play is an essential part of the learning process, we have no choice but to follow this idea wherever it leads us. Which leads me back to my conversation with Dr. Arthur. I asked him about the necessity of play, and he framed his response by talking about “The Unfinished Constructivist Project”. It is a revolution trapped in mid-stride, a revelation that, somehow, hasn’t penetrated all the way through our culture. We still insist that instruction is the preferred mechanism for education, when we have ample evidence to suggest this simply isn’t true. Let me be clear: instruction is not the same thing as guidance. I am not suggesting that children simply do as they please. The more freedom they have, the more need they have for a strong, stabilizing force to guide them as they explore. This may be the significant (if mostly hidden) objection to the Constructivist project: it is simply too expensive. The human resources required to give each child their own mentor as they work their way through the corpus of human knowledge would simply overwhelm any current educational model, with the exception of homeschooling. I don’t know what the student-teacher ratio would need to be in a fully realized Constructivist educational system, but I doubt that twenty-to-one would be sufficient. That’s the level needed to maintain a semblance of order, more a peacekeeping force than an army of mentors.

There have been occasional attempts to create a fully Constructivist educational system, but these, like the manifold utopian communities which have been founded, flourish briefly, then fade or fracture, and do not survive the test of time. The level of dedication and involvement required from both educator/mentors and parents is simply too big an ask. This is the sort of thing that a hunter-gatherer culture has no trouble with: the entire world is the classroom, the child explores it, and an adult is always there to offer an explanation or story to round out the child’s knowledge. We live in an industrial culture (at least, our classrooms do), where there is strict differentiation between ‘education’ and the other activities in life, where adults are ‘educators’ or they are not, where everything is highly formal, almost ritualized. (Consider the highly regulated timings of the school day – equal parts order from chaos, and ritual.) There could never be enough support within such a framework to sustain a Constructivist model. This is why we have the present stalemate; we know the right thing to do, but, heretofore, we have lacked the resources to actualize this knowledge.

That has now changed.

The educational field must be recognized as the key element which will power the unfinished Constructivist revolution. The educational field does not recognize the boundaries of the classroom, the institution, or even the nation. It is simply pervasive, ubiquitous and available as needed. Within that field, both students and educator/mentors can find all of the resources needed to make the Constructivist project a continuing success. There need be no rupture between years two and three, no transformation of educational style from inward- to outward-directed. Instead, there can and should be a continual deepening of the child’s exploration of the corpus of knowledge, under the guidance of a network of mentors who share the burden. We already have most of the resources in place to assure that the child can have a continuous and continually strengthening relationship with knowledge: Wikipedia, while not perfect, points toward the kinds of knowledge sharing systems which will become both commonplace and easily created throughout the 21st century.

Sharing needs to become a foundational component in a modern educational system. Every time a teacher finds a resource to aid a student in their exploration, that should be noted and shared broadly. As students find things on their own – and they will be far better at it than most educators – these, too, should be shared. We should be creating a great, linked trail behind us as we learn, so that others, when exploring, will have paths to guide them – should they choose to follow. We have systems that can do this, but we have not applied these systems to education – in large part because this is not how we conceive of education. Or rather, this is not how we conceive of education in the classroom. I do a fair bit of corporate consulting, and this sort of ‘knowledge capture’ and ‘knowledge management’ is becoming essential to the operation of a 21st century business. Many businesses are creating their own, ad-hoc systems to share knowledge resources among their staff, as they understand how important this is for professional development.

This is a new battle line opened up in the war between the unfinished constructivist project and the older, more formal methods of education. The corporate world doesn’t have time for methodologies which have become obsolete. Employees must be constantly up-to-date. Professionals – particularly doctors and lawyers – must remain continuously well-informed about developments in their respective fields. Those in management need real-time knowledge streams in order to recognize and solve problems as they emerge. This is all much more ludic than formal, much more self-directed than guided, much more juvenile than adult – even though these are all among the most adult of all activities. This disjunction, this desynchronization between the needs of the world-at-large and the delivery capabilities of an ever-more-obsolete educational system is the final indictment of things-as-they-are. Things will change; either education will become entirely corporatized, or educators will wholly embrace the unfinished Constructivist project. Either way the outcome will be the same.

Fortunately, the educational field has something else to offer educators beyond the near-infinite supply of educational resources. It is a network of individuals. It is a social network, connected together via bonds of familiarity and affinity. The student is embedded in a network with his mentors; the mentors are connected to other students, and to other mentors; everyone is connected to the parents, and the community. In this sense, the formal space of the ‘classroom’ collapses, undone by the pressure provided by the social network, which has effectively caused the classroom walls to implode. The outside world wants to connect to what happens within the crucible of the classroom, or, more specifically, with the magical moment of knowledge transference within the student’s mind. This is what we should be building our social networks to support. At present, social networks like Facebook and Twitter are dull, unsophisticated tools, capable of connecting people together, but completely inadequate when it comes to shaping that connection around a task – such as mentoring, or exploring knowledge. A second generation of social networks is already reaching release. These tools display a more sophisticated edge, and will help to support the kinds of connections we need within the educational field.

None of this, as wonderful as it might sound (and I admit that it may also seem pretty frightening) is happening in a vacuum. There are larger changes afoot within Australia, and no vision for the future of education in Australia could ignore them. We must find a way to harmonize those changes with the larger, more fundamental changes overtaking the entire educational system.

III: The National Curriculum

The underlying fear of a Constructivist educational project is that it would simply give children an excuse to avoid the tough work of education. There is a persistent belief that children will simply load up on educational ‘candy’, without eating their all-so-essential ‘vegetables’, that is, the basic skills which form the foundation for future learning. Were children left entirely to their own devices, there might be some danger of this – though, now that we live in the educational field, even that possibility seems increasingly remote. Children do not live in isolation: they are surrounded by adults who want them to grow into successful adults. In prehistoric times, adults simply had to be adults around children for the transference of life-skills to take place. Children copied, imitated, and aped adults – and still do. This learning-by-mimesis is still a principal factor in the education of the child, though it is not one which is often highlighted by the educational system. Industrial culture has separated the adult from the child, putting one into the office, the other into the school. That separation, and the specialization which is the hallmark of the Industrial Age, broke the natural and persistent mentorship of parenting into discrete units: this much in the home, this much in the school. If we do not trust children to consume a nourishing diet of knowledge, it is because we do not trust ourselves to prepare it for them. The separation by function led to a situation where no one is responsible for the whole thread of a life. Parents look to teachers. Teachers look to parents. Everyone, everywhere, looks to authority for responsible solutions.

There is no authority anywhere. Either we do this ourselves, or it will not happen. We have to look to ourselves, build the networks between ourselves, reach out and connect from ourselves, if we expect to be able to resist a culture which wants to turn the entire human world into candy. This is not going to be easy; if it were, it would have happened by itself. Nor is it instantaneous. Nothing like this happens overnight. Furthermore, it requires great persistence. In the ideal situation, it begins at birth and continues on seamlessly until death. In that sense, this connected educational field mirrors our human social networks, the ones we form from our first moments of awareness. But unlike that more ad-hoc network, this one has a specific intent: to bring the child into knowledge.

Knowledge, of course, is very big, very vague, mostly undefined. Meanwhile, there are specific skills and bodies of knowledge which we have nominated as important: the ability to read and write; to add and subtract, multiply and divide; a basic understanding of the physical and living worlds; the story of the nation and its peoples. These have very recently been crystallized in a ‘National Curriculum’, which seeks to standardize the pedagogical outcomes across Australia for all students in years 1 through 10. Parents and educators have already begun to argue about the inclusion or exclusion of elements within that curriculum. I was taught phonics over forty years ago, but apparently it’s still a matter of some debate. The teaching of history is always going to be contentious, because the story we tell ourselves about who we are is necessarily political. So the adults will argue it out – year after year, decade after decade – while the educators and students face this monolithic block of text which seems to be the complete antithesis of the Constructivist project. And, looked at one way, the National Curriculum is exactly the type of top-down, teacher-to-student, sit-down-and-shut-up sort of educational mandate which is no longer effective in the business world.

All of which means it’s probably best that we avoid viewing the National Curriculum as a validation, encouraging us to continue on with things as they are. Instead, it should be used as a mandate for change. There are several significant dimensions to this mandate.

First, putting everyone onto the same page, pedagogically, opens up an opportunity for sharing which transcends anything previously possible. Teachers and students from all over Australia can contribute to or borrow from a wealth of resources shared by those who have passed before them through the National Curriculum. Every teacher and every student should think of themselves as part of a broader collective of learners and mentors, all working through the same basic materials. In this sense, the National Curriculum isn’t a document so much as it is the architecture of a network. It is the way all things educational are connected together. It is the wiring underneath all of the pedagogy, providing both a scaffolding and a switchboard for the learning moment.

Is it possible to conceive of a library organized along the lines of the National Curriculum? Certainly a librarian would have no problem configuring a physical library to meet the needs of the curriculum. It’s even easier to organize similar sorts of resources in cyberspace. Not only is it easy, there’s now a mandate to do so. We know what sorts of resources we’ll need, going forward. Nothing should be stopping us from creating collective resources – similar to an Australian Wikipedia, and perhaps drawing from it – which will serve the pedagogical requirements of the National Curriculum. We should be doing this now.

Second, we need to think of the National Curriculum as an opportunity to identify all of the experts in all of the areas covered by the curriculum, and, once they’ve been identified, we must create a strong social network, with them inside, giving them pride of place as ‘nodes of expertise’. Knowledge is not enough; it must be paired with mentors who have been able to put that knowledge into practice with excellence. The National Curriculum is the perfect excuse to bring these experts together, to make them all connected and accessible to everyone throughout the nation who could benefit from their wisdom.

Here, once again, it is best to think of the National Curriculum not as a document but as a network – a way to connect things, and people, together. The great strength of the National Curriculum is, as Dr. Evan Arthur put it, that it is a ‘greenfields’. Literally anything is possible. We can go in any direction we choose. Inertia would have us do things as we’ve always done them, even as the centrifugal forces of culture beyond the classroom point in a different direction. Inertia cannot be a guiding force. It must be resisted, at every turn, not in the pursuit of some educational utopia or false revolution, but rather because we have come to realize that the network is the educational system.

Moving from where we are to where we need to be seems like a momentous transition. But the Web saw repeated momentous transitions in its first fifteen years and we managed all of those successfully. We can absorb huge amounts of change and novelty so long as the frame which supports us is strong and consistent. That’s the essence of the parent-child relationship: so long as the child feels it is being cared for, it can endure almost anything. This means that we shouldn’t run around freaking out. The sky is not falling. The world is not ending. If anything, we are growing closer together, more connected, becoming more important to one another. It may feel a bit too close from time to time, as we learn how to keep a healthy distance in these new relationships, but that closeness supports us all. It can keep children from falling through the net of opportunity. It can see us advance into a culture where every child has the full benefit of an excellent education, without respect to income or circumstance.

That is the promise. We have the network. We live in the educational field. We now have the National Curriculum to wire it all together. But can we marry the demands of the National Curriculum with the ludic call of Constructivism? Can we create a world where we literally play our way into learning? This is more than video games that have math drills embedded into them. It’s about capturing the interests of a child and using that as a springboard for the investigation of their world, their nation, their home. That can only happen if mentors are deeply involved and embedded in the child’s life from its earliest years.

I don’t have any easy answers here. There is no magic wand to wave over this whole uncoordinated mess to make it all cohere. No one knows what’s expected of them anymore – educators least of all. Are we parents? Are we ‘friends’? Where do we stand? I know this: we stand most securely when we stand connected.

In October of 1993 I bought myself a used SPARCstation. I’d just come off of a consulting gig at Apple, and, flush with cash, wanted to learn UNIX systems administration. I also had some ideas about coding networking protocols for shared virtual worlds. Soon after I got the SPARCstation installed in my lounge room – complete with its thirty-kilo monster of a monitor – I grabbed a modem, connected it to the RS-232 port, configured SLIP, and dialed out onto the Internet. Once online I used FTP, logged into SUNSITE and downloaded the newly released NCSA Mosaic, a graphical browser for the World Wide Web.

I’d first seen Mosaic running on an SGI workstation at the 1993 SIGGRAPH conference. I knew what hypertext was – I’d built a MacOS-based hypertext system back in 1986 – so I could see what Mosaic was doing, but there wasn’t much there. Not enough content to make it really interesting. The same problem had bedeviled all hypertext systems since Douglas Engelbart’s first demo, back in 1968. Without sufficient content, hypertext systems are fundamentally uninteresting. Even HyperCard, Apple’s early experiment in hypertext, never really moved beyond the toy stage. To make hypertext interesting, it must be broadly connected – beyond a document, beyond a hard drive. Either everything is connected, or everything is useless.

In the three months between my first click on NCSA Mosaic and when I fired it up in my lounge room, a lot of people had come to the Web party. The master list of Websites – maintained by CERN, the birthplace of the Web – kept growing. Over the course of the last week of October 1993, I visited every single one of those Websites. Then I was done. I had surfed the entire World Wide Web. I was even able to keep up, as new sites were added.

This gives you a sense of the size of the Web universe in those very early days. Before the explosive ‘inflation’ of 1994 and 1995, the Web was a tiny, tidy place filled mostly with academic websites. Yet even so, the Web had the capacity to suck you in. I’d find something that interested me – astronomy, perhaps, or philosophy – and with a click-click-click find myself deep within something that spoke to me directly. This, I believe, is the core of the Web experience, an experience now so many years behind us that we tend to overlook it. At its essence, the Web is personally seductive.

I realized the universal truth of this statement on a cold night in early 1994, when I dragged my SPARCstation and boat-anchor monitor across town to a house party. This party, a monthly event known as Anon Salon, was notorious for attracting the more intellectual and artistic crowd in San Francisco. People would come to perform, create, demonstrate, and spectate. I decided I would show these people this new-fangled thing I’d become obsessed with. So, that evening, as the front door opened and another person entered, I’d sidle up alongside them and ask, “So, what are you interested in?” They’d mention their current hobby – gardening or vaudeville or whatever it might be – and I’d use the brand-new Yahoo! category index to look up a web page on the subject. They’d be delighted, and begin to explore. At no point did I say, “This is the World Wide Web.” Nor did I use the word ‘hypertext’. I let the intrinsic seductiveness of the Web snare them, one by one.

Of course, a few years later, San Francisco became the epicenter of the Web revolution. Was I responsible for that? I’d like to think so, but I reckon San Francisco was a bit of a nexus. I wasn’t the only one exploring the Web. That night at Anon Salon I met Jonathan Steuer, who walked on up and said, “Mosaic, hmm? How about you type in ‘www.hotwired.com’?” Steuer was part of the crew at work, just a few blocks away, bringing WIRED magazine online. Everyone working on the Web shared the same fervor – an almost evangelical belief that the Web changes everything. I didn’t have to tell Steuer, and he didn’t have to tell me. We knew. And we knew if we simply shared the Web – not the technology, not its potential, but its real, seductive human face – we’d be done.

That’s pretty much how it worked out: the Web exploded from the second half of 1994, because it appeared to every single person who encountered it as the object of their desire. It was, and is, all things to all people. This makes it the perfect love machine – nothing can confirm your prejudices better than the Web. It also makes the Web a very pretty hate machine. It is the reflector and amplifier of all things human. We were completely unprepared, and for that reason the Web has utterly overwhelmed us. There is no going back. If every website suddenly crashed, we would find another way to recreate the universal infinite hypertextual connection.

In the process of overwhelming us – in fact, part of the process itself – the Web has hoovered up the entire space of human culture; anything that can be digitized has been sucked into the Web. Of course, this presents all sorts of thorny problems for individuals who claim copyright over cultural products, but they are, in essence, swimming against the tide. The rest, everything that marks us as definably human, everything that is artifice, has, over the last fifteen years, been neatly and completely sucked into the space of infinite connection. The project is not complete – it will never be complete – but it is substantially underway, and more will simply be more: it will not represent a qualitative difference. We have already arrived at a new space, where human culture is now instantaneously and pervasively accessible to any of the four and a half billion network-connected individuals on the planet.

This, then, is the Golden Age, a time of rosy dawns and bright beginnings, when everything seems possible. But this age is drawing to a close. Two recent developments will, in retrospect, be seen as the beginning of the end. The first of these is the transformation of the oldest medium into the newest. The book is coextensive with history, with the largest part of what we regard as human culture. Until five hundred and fifty years ago, books were handwritten, rare and precious. Moveable type made books a mass medium, and lit the spark of modernity. But the book, unlike nearly every other medium, has resisted its own digitization. This year the defenses of the book have been breached, and ones and zeroes are rushing in. Over the next decade perhaps half or more of all books will ephemeralize, disappearing into the ether, never to return to physical form. That will seal the transformation of the human cultural project.

On the other hand, the arrival of the Web-as-appliance means it is now leaving the rarefied space of computers and mobiles-as-computers, and will now be seen as something as mundane as a book or a dinner plate. Apple’s iPad is the first device of an entirely new class which treats the Web as an appliance, as something that is pervasively just there when needed, and put down when not. The genius of Apple’s design is its extreme simplicity – too simple, I might add, for most of us. It presents the Web as a surface, nothing more. iPad is a portal into the human universe, stripped of everything that is a computer. It is emphatically not a computer. Now, we can discuss the relative merits of Apple’s design decisions – and we will, for some years to come. But the basic strength of the iPad’s simplistic design will influence what the Web is about to become.

eBooks and the iPad bookend the Golden Age; together they represent the complete translation of the human universe into a universally and ubiquitously accessible form. But the human universe is not the whole universe. We tend to forget this as we stare into the alluring and seductive navel of our ever-more-present culture. But the real world remains, and loses none of its importance even as the flashing lights of culture grow brighter and more hypnotic.

II: The Silver Age

Human beings have the peculiar capability to endow material objects with inner meaning. We know this as one of the basic characteristics of humanness. From the time a child anthropomorphizes a favorite doll or wooden train, we imbue the material world with the attributes of our own consciousness. Soon enough we learn to discriminate between the animate and the inanimate, but we never surrender our continual attribution of meaning to the material world. Things are never purely what they appear to be, instead we overlay our own meanings and associations onto every object in the world. This process actually provides the mechanism by which the world comes to make sense to us. If we could not overload the material world with meaning, we could not come to know it or manipulate it.

This layer of meaning is most often implicit; only in works of ‘art’ does the meaning crowd into the definition of the material itself. But none of us can look at a thing and be completely innocent about its hidden meanings. They constantly nip at the edges of our consciousness, unless, Zen-like, we practice an ‘emptiness of mind’, and attempt to encounter the material in an immediate, moment-to-moment awareness. For those of us not in such a blessed state, the material world has a subconscious component. Everything means something. Everything is surrounded by a penumbra of meaning, associations that may be universal (an apple can invoke the Fall of Man, or Newton’s Laws of Gravity), or something entirely specific. Through all of human history the interiority of the material world has remained hidden except in such moments as when we choose to allude to it. It is always there, but rarely spoken of. That is about to change.

One of the most significant, yet least understood implications of a planet where everyone is ubiquitously connected to the network via the mobile is that it brings the depth of the network ubiquitously to the individual. You are – amazingly – connected to the other five billion individuals who carry mobiles, and you are also connected to everything that’s been hoovered into cyberspace over the past fifteen years. That connection did not become entirely apparent until last year, as the first mobiles appeared with both GPS and compass capabilities. Suddenly, it became possible to point through the camera on a mobile, and – using the location and orientation of the device – search through the network.

This technique has become known as ‘Augmented Reality’, or AR, and it promises to be one of the great growth areas in technology over the next decade – but perhaps not for the reasons the leaders of the field currently envision. The strength of AR is not what it brings to the big things – the buildings and monuments – but what it brings to the smallest and most common objects in the material world. At present, AR is flashy, but not at all useful. It’s about to make a transition. It will no longer be spectacular, but we’ll wonder how we lived without it.

Let me illustrate the nature of this transition with three examples drawn from my own experience. These three ‘thought experiments’ represent the different axes of a world which is making the transition between implicit meaning, and a world where the implicit has become explicit. Once meaning is exposed, it can be manipulated: this is something unexpected, and unexpectedly powerful.

Example One: The Book

Last year I read a wonderful book. The Rest is Noise: Listening to the Twentieth Century, by Alex Ross, is a thorough and thoroughly enjoyable history of music in the 20th century. By music, Ross means what we would commonly call ‘classical’ music, even though the Classical period ended some two hundred years ago. That’s not as stuffy as it sounds: George Gershwin and Aaron Copland are both major figures in 20th century music, though their works have always been classed as ‘popular’.

Ross’ book has a companion website, therestisnoise.com, which offers up chapter-by-chapter samples of the composers whose lives and exploits he explores in the text. When I wrote The Playful World, back in 2000, and built a companion website to augment the text, it was considered quite revolutionary, but this is all pretty much standard for better books these days.

As I said earlier, the book is on the edge of ephemeralization. It wants to be digitized, because it has always been a message, encoded. When I dreamed up this example, I thought it would be very straightforward: you’d walk into your bookstore, point your smartphone at a book that caught your fancy, and instantly you’d find out what your friends thought of it, what their friends thought of it, what the reviewers thought of it, and so on. You’d be able to make a well-briefed decision on whether this book is the right book for you. Simple. In fact, Google Labs has already shown a basic example of this kind of technology in a demo running on Android.

But that’s not what a book is anymore. Yes, it’s good to know whether you should buy this or that book, but a book represents an investment of time, and an opportunity to open a window into an experience of knowledge in depth. It’s this intention that the device has to support. As the book slowly dissolves into the sea of fragmentary but infinitely threaded nodes of hypertext which are the human database, the device becomes the focal point, the lens through which the whole book appears, and appears to assemble itself.

This means that the book will vary, person to person. My fragments will be sewn together with my threads, yours with your threads. The idea of unitary authorship – persistent over the last five hundred years – won’t be overwhelmed by the collective efforts of crowdsourcing, but rather by the corrosive effects of hyperconnection. The more connected everything becomes, the less prone we are to linearity. We already see this in the ‘tl;dr’ phenomenon, where any text over 300 words becomes too onerous to read.

Somehow, whatever the book is becoming must balance the need for clarity and linearity against the centrifugal and connective forces of hypertext. The book is about to be subsumed within the network; the device is the place where it will reassemble into meaning. The implicit meaning of the book – that it has a linear story to tell, from first page to last – must be made explicit if the idea and function of the book is to survive.

The book stands on the threshold, between the worlds of the physical and the immaterial. As such it is pulled in both directions at once. It wants to be liberated, but will be utterly destroyed in that liberation. The next example is something far more physical, and, consequently, far more important.

Example Two: Beef Mince

I go into the supermarket to buy myself the makings for a nice Spaghetti Bolognese. Among the ingredients I’ll need some beef mince (ground beef for those of you in the United States) to put into the sauce. Today I’d walk up to the meat case and throw a random package into my shopping trolley. If I were being thoughtful, I’d probably read the label carefully, to make sure the expiration date wasn’t too close. I might also check to see how much fat is in the mince. Or perhaps it’s grass-fed beef. Or organically grown. All of this information is offered up on the label placed on the package. And all of it is so carefully filtered that it means nearly nothing at all.

What I want to do is hold my device up to the package, and have it do the hard work. Go through the supermarket to the distributor, through the distributor to the abattoir, through the abattoir to the farmer, through the farmer to the animal itself. Was it healthy? Where was it slaughtered? Is that abattoir healthy? (This isn’t much of an issue in Australia or New Zealand, but in America things are quite a bit different.) Was it fed lots of antibiotics in a feedlot? Which ones?

And – perhaps most importantly – what about the carbon footprint of this little package of mince? How much CO2 was created? How much methane? How much water was consumed? These questions, at the very core of 21st century life, need to be answered on demand if we are expected to adjust our lifestyles so as to minimize our footprint on the planet. Without a system like this, it is essentially impossible. With such a system it can potentially become easy. As I walk through the market, popping items into my trolley, my device can record and keep me informed of a careful balance between my carbon budget and my financial budget, helping me to optimize both – all while referencing my purchases against sales on offer in other supermarkets.
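The trolley-tracking idea above can be sketched in a few lines of code. This is a minimal illustration, not a real system: the item names, prices, carbon figures and budget numbers are all hypothetical, standing in for the kind of product-level data the device would pull from the network.

```python
# A minimal sketch of a trolley that tracks two budgets at once.
# All data here is made up for illustration; a real device would
# fetch price and embodied-carbon figures from a product database.

from dataclasses import dataclass


@dataclass
class Item:
    name: str
    price: float    # dollars
    co2_kg: float   # embodied carbon, kg CO2-equivalent


class Trolley:
    def __init__(self, money_budget: float, carbon_budget_kg: float):
        self.money_budget = money_budget
        self.carbon_budget_kg = carbon_budget_kg
        self.items: list[Item] = []

    def add(self, item: Item) -> None:
        self.items.append(item)

    def totals(self) -> tuple[float, float]:
        """Total spend and total embodied carbon so far."""
        return (sum(i.price for i in self.items),
                sum(i.co2_kg for i in self.items))

    def status(self) -> str:
        """Report which budgets, if any, the trolley has exceeded."""
        spent, emitted = self.totals()
        warnings = []
        if spent > self.money_budget:
            warnings.append("over money budget")
        if emitted > self.carbon_budget_kg:
            warnings.append("over carbon budget")
        return "; ".join(warnings) or "within both budgets"


trolley = Trolley(money_budget=50.0, carbon_budget_kg=10.0)
trolley.add(Item("beef mince 500g", 8.50, 13.5))   # beef is carbon-heavy
trolley.add(Item("spaghetti 500g", 2.00, 0.75))
print(trolley.totals())   # (10.5, 14.25)
print(trolley.status())   # over carbon budget
```

Even this toy version shows the point of the example: the financial budget is barely dented while the carbon budget is already blown, a trade-off that is invisible at the meat case today.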

Finally, what about the caloric count of that packet of mince? And its nutritional value? I should be tracking those as well – or rather, my device should – so that I can maintain optimal health. I should know whether I’m getting too much fat, or insufficient fiber, or – as I’ll discuss in a moment – too much sodium. Something should be keeping track of this. Something that can watch and record and use that recording to build a model. Something that can connect the real world of objects with the intangible set of goals that I have for myself. Something that could do that would be exceptionally desirable. It would be as seductive as the Web.

The more information we have at hand, the better the decisions we can make for ourselves. It’s an idea so simple it is completely self-evident. We won’t need to convince anyone of this, to sell them on the truth of it. They will simply ask, ‘When can I have it?’ But there’s more. My final example touches on something so personal and so vital that it may become the center of the drive to make the implicit explicit.

Example Three: Medicine

Four months ago, I contracted adult-onset chickenpox. Which was just about as much fun as that sounds. (And yes, since you’ve asked, I did have it as a child. Go figure.) Every few days I had doctors come by to make sure that I was surviving the viral infection. While the first doctor didn’t touch me at all – understandably – the second doctor took my blood pressure, and showed me the reading – 160/120, a bit too uncomfortably high. He suggested that I go on Micardis, a common medication for hypertension. I was too sick to argue, so I dutifully filled the prescription and began taking it that evening.

Whenever I begin taking a new medication – and I’m getting to an age where that happens with annoying regularity – I am always somewhat worried. Medicines are never perfect; they work for a certain large cohort of people. For others they do nothing at all. For a far smaller number, they might be toxic. So, when I popped that pill in my mouth I did wonder whether that medicine might turn out to be poison.

The doctor who came to see me was not my regular GP. He did not know my medical history. He did not know the history of the other medications I had been taking. All he knew was what he saw when he walked into my flat. That could be a recipe for disaster. Not in this situation – I was fine, and have continued to take Micardis – but there are numerous other situations where medications can interact within the patient to cause all sorts of problems. This is well known. It is one of the drawbacks of modern pharmaceutical medicine.

This situation is only going to grow more intense as the population ages and pharmaceutical management of the chronic diseases of aging becomes ever-more-pervasive. Right now we rely on doctors and pharmacists to keep their own models of our pharmaceutical consumption. But that’s a model which is precisely backward. While it is very important for them to know what drugs we’re on, it is even more important for us to be able to manage that knowledge for ourselves. I need to be able to point my device at any medicine, and know, more or less immediately, whether that medicine will cure me or kill me.
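The check I’m describing is, at bottom, a lookup of a candidate medicine against a personal medication list and a table of known interactions. Here is a minimal sketch; the drug names and interaction entries are placeholders, standing in for what a real, curated pharmacological database would supply.

```python
# A minimal sketch of a personal drug-interaction check.
# The interaction table below is invented for illustration; a real
# system would query a curated pharmacological database.

my_medications = {"telmisartan", "aspirin"}

known_interactions = {
    frozenset({"telmisartan", "ibuprofen"}): "may reduce antihypertensive effect",
    frozenset({"aspirin", "warfarin"}): "increased bleeding risk",
}

def check_new_drug(candidate, current, interactions):
    """Return warnings for taking `candidate` alongside `current` medications."""
    warnings = []
    for drug in current:
        note = interactions.get(frozenset({candidate, drug}))
        if note:
            warnings.append(f"{candidate} + {drug}: {note}")
    return warnings

print(check_new_drug("ibuprofen", my_medications, known_interactions))
```

The point is not the code – which is trivial – but where the data lives: the patient’s own device holds the medication list, and queries the shared knowledge base on demand.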

Over the next decade the cost of sequencing an entire human genome will fall from the roughly $5000 it costs today to less than $500 – well within the range of your typical medical test. Once that happens, it will be possible to compile epidemiological data which compares various genomes to the effectiveness of drugs. Initial research in this area has already shown that some drugs are more effective among certain ethnic groups than others. Our genome holds the clue to why drugs work, why they occasionally don’t, and why they sometimes kill.
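That tenfold fall over a decade implies, incidentally, a steady decline of roughly 21 percent a year – a quick calculation using the figures quoted above, not a forecast of its own:

```python
# Implied constant annual rate for a cost falling from $5000 to $500
# over ten years (the figures quoted in the text).
start, end, years = 5000.0, 500.0, 10
annual_factor = (end / start) ** (1 / years)  # ~0.794 of the previous year's cost
annual_decline = 1 - annual_factor            # ~0.206, i.e. roughly 21% per year
print(f"Implied decline: {annual_decline:.1%} per year")
```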

The device is the connection point between our genome – which lives, most likely, somewhere out on a medical cloud – and the medicines we take, and the diagnoses we receive. It is our interface to ourselves, and in that becomes an object of almost unimaginable importance. In twenty years’ time, when I am ‘officially’ a senior, I will have a handheld device – an augmented reality – whose sole intent is to keep me as healthy as possible for as long as possible. It will encompass everything known about me medically, and will integrate with everything I capture about my own life – my activities, my diet, my relationships. It will work with me to optimize everything we know about health (which is bound to be quite a bit by 2030) so that I can live a long, rich, healthy life.

These three examples represent the promise bound up in the collision between the handheld device and the ubiquitous, knowledge-filled network. There are already bits and pieces of much of this in place. It is a revolution waiting to happen. That revolution will change everything about the Web, and why we use it, how, and who profits from it.

III: The Bronze Age

By now, some of you sitting here listening to me this afternoon are probably thinking, “That’s the Semantic Web. He’s talking about the Semantic Web.” And you’re right, I am talking about the Semantic Web. But the Semantic Web as proposed and endlessly promoted by Sir Tim Berners-Lee was always about pushing, pushing, pushing to get the machines talking to one another. What I have demonstrated in these three thought experiments is a world that is intrinsically so alluring and so seductive that it will pull us all into it. That’s the vital difference which made the Web such a success in 1994 and 1995. And it’s about to happen once again.

But we are starting from near zero. Right now, I should be able to hold up my device, wave it around my flat, and have an interaction with the device about what’s in my flat. I cannot. I cannot Google for the contents of my home. There is no place to put that information, even if I had it, nor systems to put that information to work. It is exactly like the Web in 1993: the lights are on, but nobody’s home. We have the capability to conceive of the world-as-a-database. We have the capability to create that database. We have systems which can put that database to work. And we have the need to overlay the real world with that rich set of data.

We have the capability, we have the systems, we have the need. But we have precious little connecting these three. These are not businesses that exist yet. We have not brought the real world into our conception of the Web. That will have to change. As it changes, the door opens to a crescendo of innovations that will make the Web revolution look puny in comparison. There is an opportunity here to create industries bigger than Google, bigger than Microsoft, bigger than Apple. As individuals and organizations figure out how to inject data into the real world, entirely new industry segments will be born.

I cannot tell you exactly what will fire off this next revolution. I doubt it will be the integration of Wikipedia with a mobile camera. It will be something much more immediate. Much more concrete. Much more useful. Perhaps something concerned with health. Or with managing your carbon footprint. Those two seem the most obvious to me. But the real revolution will probably come from a direction no one expects. It’s nearly always that way.

There’s no reason to think that Wellington couldn’t be the epicenter of that revolution. There was nothing special about San Francisco back in 1993 and 1994. But, once things got started, they created a ‘virtuous cycle’ of feedbacks that brought the best-and-brightest to San Francisco to build out the Web. Wellington is doing that to the film industry; why shouldn’t it stretch out a bit, and invent this next-generation ‘web of things’?

This is where the future is entirely in your hands. You can leave here today promising yourself to invent the future, to write meaning explicitly onto the real world, to transform our relationship to the universe of objects. Or, you can wait for someone else to come along and do it. Because someone inevitably will. Every day, the pressure grows. The real world is clamoring to crawl into cyberspace. You can open the door.

This is the era of sharing. When the histories of our time are written a hundred years from now, sharing is the salient feature which historians will focus upon. The entirety of culture, from 1999 forward, looks like a gigantic orgy of sharing.

This morning I want to take a look at this phenomenon in some detail, and tie it into some Australian educational ‘megatrends’ – forces which are altering the landscape throughout the nation. Sharing can be used as an engine to power these forces, but that will only happen if we understand how sharing works.

At some level, sharing is totally familiar to us – we’ve been sharing since we’ve been very small. But sharing, at least in the English language, has two slightly different meanings: we can share things, or we can share thoughts. We adults spend a lot of time teaching children the importance of sharing their things; we never need to teach them to share their thoughts. The sharing of things is a cultural behavior, valued by our civilization, whereas the sharing of thoughts is an innate behavior – probably located somewhere deep in our genes.

Fifteen years ago, Nicholas Negroponte characterized this as the divide between bits and atoms. We have to teach children to share their atoms – their toys and games – but they freely share their bits. In fact, they’re so promiscuous with their bits that this has produced its own range of problems.

It was only a decade ago that Shawn Fanning released a program which he’d written for his mates at Boston’s Northeastern University. Napster allowed anyone with a computer and a broadband internet connection to share their MP3 music files freely. Within a few months, millions of broadband-connected college students were freely trading their music collections with one another – without any thought of copyright or ownership. Let me reiterate: thoughts of copyright or piracy simply didn’t enter into their thinking. To them, this was all about sharing.

This act of sharing was a natural consequence of the ‘hyperconnectivity’ these kids had achieved via their broadband connections. When you connect people together, they will begin to share the things they care about. If you build a system that allows them to share the music they care about, they’ll share that. If you build a system that allows them to share the videos they care about, they’ll share that. If you build a system that allows them to share the links they care about, they’ll share them.

Clever web developers and entrepreneurs have built all of these systems, and many, many more. For the first time we can use technology to accelerate and amplify the innate human desire to share bits, and so, in a case of history repeating itself, we have amplified our social and sharing systems the way the steam engine amplified our physical power two hundred years ago.

In the earliest years of this sharing revolution, people shared the objects of culture: music, videos, jokes, links, photos, writing, and so on. Just this alone has had an enormous impact on business and culture: the recording industries, which were flying high a decade ago, have been humbled. Television networks have gotten in front of the Internet distribution of their own shows, to take the sting out of piracy. Newspapers, caught in the crossfire between a controlled system of distribution and a world where everyone distributes everything, have begun to disappear. And this is just the beginning.

In 2001, another experiment in sharing started in earnest: Wikipedia encouraged a small community of contributors to add their own entries to an ever-expanding encyclopedia. In this case contributors were asked to share their knowledge – however specific or particular – to a greater whole. Although it grew slowly in its earliest days, after about 2 years Wikipedia hit an inflection point and began to grow explosively.

Knowledge seems to have a gravitational quality; when enough of it is gathered together in one place, it attracts more knowledge. That’s certainly the story of Wikipedia, which has grown to encompass more than three million articles in English, on nearly every topic under the sun. Wikipedia is only the most successful of many efforts to produce a ‘collective intelligence’ out of the ‘wisdom of crowds’. There are many others – including one I’ll come to shortly.

One of the singular features of Wikipedia – one that we never think about even though it’s the reason we use Wikipedia – is simply this: Wikipedia makes us smarter. We can approach Wikipedia full of ignorance and leave it knowing a lot of facts. Facts need to be put into practice before they can be transformed into knowledge, but at least with Wikipedia we now have the opportunity to load up on the facts. And this is true globally: because of Wikipedia every single one of us now has the opportunity to work with the best possible facts. We can use these facts to make better decisions, decisions which will improve our lives. Wikipedia may seem innocuous, but it’s really quite profound.

How profound? If we peel away all of the technology behind Wikipedia, all of the servers and databases and broadband connections of the world’s sixth most popular website, what are we left with? Only this: an agreement to share what we know. It’s that agreement, and not the servers or databases or bandwidth which makes Wikipedia special, and it’s that agreement historians will be writing about in a hundred years. That agreement will endure – even if, for some bizarre reason, Wikipedia should cease to exist – because that agreement is one of the engines driving our culture forward.

Another example of sharing, just as relevant to educators, comes from a site which launched back in 1999 as TeacherRatings.com. Like Wikipedia, it grew slowly, and went through ownership changes, emerging finally as RateMyProfessors.com, which is owned by MTV, and which now boasts ten million ratings of one million professors, lecturers and instructors. This huge wealth of ratings came about because RateMyProfessors.com attached itself to the innate desire to share. Students want to share their experiences with their instructors, and RateMyProfessors.com gives them a forum to do just that.

Just as is the case with Wikipedia, anyone can become smarter by using RateMyProfessors.com. You can learn which instructors are good teachers, which grade easily, which will bore you to tears, and so forth. You can then put that information to work to make your life better – avoiding the professors (or schools) which have the worst teachers, taking courses from the instructors who get the highest scores.

That shared knowledge, put to work, changes the power balance within the university. For the last six hundred years, universities have been able to saddle students with lousy instructors – who might happen to be fantastic researchers – and there wasn’t much that students could do about it except grumble. Now, with RateMyProfessors.com, students can pass their hard-won knowledge down to subsequent generations of students. The university proposes, the student disposes. Worse still, the instructors receiving the highest ratings on RateMyProfessors.com have been the subjects of bidding wars, as various universities try to woo them, and add them to their faculties. All of this has given students a power they’ve never had, a power they never could have until they began to share their experiences, and translate that shared knowledge into action.

Sharing is wonderful, but sharing has consequences. We can now amplify and accelerate our sharing so that it can cross the world in a matter of moments, copied and replicated all the way. The power of the network has driven us into a new era. Sharing culture, knowledge, and power has destabilized all of our institutions. Businesses totter and collapse; universities change their practices; governments create task forces to get in front of what everyone calls ‘something-2.0’. It could be web2.0, education2.0, or government2.0. It doesn’t matter. What does matter is that something big is happening, and it’s all driven by our ability to share.

OK, so we can share. But why? How does it matter to us?

II: Greenfields

Before we can look at why sharing matters so much in this particular moment, we need to spend some time examining the three big events which will revolutionize education in Australia over the next decade. Each of them is entirely revolutionary in itself; their confluence will result in a compressed wave of change – a concrescence – that will radically transform all educational practice.

The first of these events will affect all Australians equally. At this moment in time, Australia lives with medium-to-low-end broadband speeds, and most families have broadband connections which, because of metering, fundamentally limit their use. This is how it’s been since the widespread adoption of the Internet in the mid-1990s, and it’s nearly impossible to imagine that things could be different. The hidden lesson of the last fifteen years is that the Internet is something that needs to be rationed carefully, because there’s not enough to go around.

The Government wants us to adopt a different point of view. With the National Broadband Network (NBN), they intend to build a fibre-optic infrastructure which will deliver at least 100 megabit-per-second connections to every home, every school, and every business in Australia. Although no one has come out and said it explicitly, it’s clear that the Government wants this connection to be unmetered – the Internet will finally be freely available in Australia, as it is in most other countries.

How this will change our usage of the Internet is anyone’s guess. And this is the important point – we don’t know what will happen. We have critics of the NBN claiming that there’s no good reason for it, that Australians are already adequately served by the broadband we’ve already got, but I regularly hear stories of schools which block YouTube – not because of its potentially distracting qualities, but because they can’t handle the demand for bandwidth.

That, writ large, describes Australia in 2009. Broadband is the oxygen of the 21st century. Australia has been subjected to a slow strangulation. Once we can breathe freely, new horizons will open to us. We know this is true from history: no one really knew what we’d do with broadband once we got it. No one predicted Napster or YouTube or Skype, no one could have predicted any of them – or any of a thousand other innovations – before we had widespread access to broadband. Critics who argue there’s no need for high-speed broadband have simply failed to learn the lessons of history.

Now, before you think that I’m carrying the Government’s water, let me find fault with a few things. I believe that the Government isn’t thinking big enough – by the time the NBN is fully deployed, around 2017, a hundred megabit-per-second connection will simply be mid-range among our OECD peers. The Government should have accepted the technical challenge and gone for a gigabit network. Eventually, they will. Further, I believe the NBN will come with ‘strings attached’, specifically the filtering and regulatory regime currently being proposed by Senator Conroy’s ministry. The Government wants to provide the nation a ‘clean feed’, sanitized according to its interpretation of the law; when everyone in Australia gets their Internet service from the Commonwealth, we may have no choice in the matter.

The next event – and perhaps the most salient, in the context of this conference – is the Government’s commitment to provide a computer to every student in years 9 through 12. During the 2007 election, the Prime Minister talked about using computers for ‘math drills’ and ‘foreign language training’. The line about providing computers in the classroom was a popular one, although it is now clear that the Government’s ministers didn’t think through the profound effect of pervasive computing in the classroom.

First, it radically alters the power balance in the classroom. Most students have more facility with their computers than their teachers do. Some teachers are prepared to work from humility and accept instruction from their students. For other teachers, such an idea is anathema. The power balance could be righted somewhat with extensive professional development for the teachers – and time for that professional development – but schools have neither the budget nor the time to allow for this. Instead, the computers are being dumped into the classroom without any thought as to how they will affect pedagogy.

Second, these computers are being handed to students who may not be wholly aware of the potency of these devices. We’ve seen how a single text message, forwarded endlessly, can spark a riot on a Sydney beach, or how a party invitation, posted to Facebook, can lead to a crowd of five hundred and a battle with the police. Do teenagers really understand how to use the network to their advantage, how to reinforce their own privacy and protect themselves? Do they know how easy it is to ruin their own lives – or someone else’s – if they abuse the power of the network, that amplifier and accelerator of sharing?

Teachers aren’t the only ones who need some professional development. We need to provide a strong curriculum in ‘digital citizenship’; just as teenagers get instruction before they get a driver’s license, so they need instruction before they get to ‘spin the wheels’ of these ubiquitous educational computers.

This isn’t a problem that can be solved by filtering the networks at the schools. Students are surrounded by too many devices – mobiles as well as computers – which connect to the network and which require a degree of caution and education. This isn’t a job that the schools should be handling alone; this is an opportunity for all of the adult voices of culture – parents, caretakers, mentors, educators and administrators – to speak as one about the potentials and pitfalls of network culture.

Finally, what is the goal here? Right now the students and teachers are getting their computers. Next year the deployment will be nearly complete. What, in the end, is the point? Is it simply to give Kevin Rudd a tick on his ‘promises fulfilled’ list when he goes up for re-election? Or is this an opening to something greater? Is this simply more of the same or something new? I haven’t seen any educator anywhere present anything that looks at all like an integrated vision of what these laptops mean to students, teachers or the classroom. They’re bling: pretty, but an entirely useless accessory. I’m not saying that this is a bad initiative – indeed, I believe the Government should be lauded for its efforts. But everything, thus far, feels only like a beginning, the first meter around a very long course.

Now we come to the most profound of the three events on the educational horizon: the National Curriculum. Although the idea of a national curriculum has been mooted by several successive governments, it looks as though we’ll finally achieve a deliverable curriculum sometime in the early years of the Rudd Government. There’s a long way to go, of course – and a lot of tussling between the states and the various educational stakeholders – but the process is well underway. It’s expected that curricula in ‘English, Mathematics, the Sciences and History’ will be ready for implementation in the start of 2011, not very far away. As these are the core elements in any school curriculum, they will affect every school, every teacher, and every student in Australia.

A few weeks ago I got the opportunity to share the stage with Dr. Evan Arthur, the Group Manager of the Digital Education Group at the Commonwealth Department of Education, Employment and Workplace Relations. During a ‘fireside chat’, in which I asked him a series of questions, the topic turned to the National Curriculum. At this point Dr. Arthur became rather thoughtful, and described the National Curriculum as a “greenfields”. He went on to describe the curriculum documents, when completed, as a set of ‘strings’ which could be handled almost as if they were a Christmas tree, ready to have content hung all over them. The National Curriculum means that every educator in Australia is, for the first time, working to the same set of ‘strings’.

That’s when I became aware that Dr. Arthur saw the National Curriculum as an enormous opportunity to redraw the possibilities for education. We are all being given an opportunity to start again – to throw out the old rule book and start over with another one. But in order to do this we’ll have to take everything we’ve covered already – about sharing, the National Broadband Network, the Digital Education Revolution and the National Curriculum, then blend them together. Together they produce a very potent mix, a nexus of possibilities which could fundamentally transform education in Australia.

III: At The Nexus

Our future is a future of sharing; we’ll be improving constantly, finding better and better ways to share with one another. To this I want to add something more subtle; not a change in technology – we have a lot of technology – but rather, a change of direction and intent. We could choose to see the National Curriculum as simply another mandate from the Federal government, something that will make the educational process even more formal, rigorous, and lifeless. That option is open to us – and, to many of us, that’s the only option visible. I want to suggest that there is another, wildly different path open before us, right next to this well-trodden and much more prosaic laneway. Rather than viewing the National Curriculum as a done deal, wouldn’t it be wiser if we consider it as an open invitation to participation and sharing?

After all, the National Curriculum mandates what must be taught, but says little to nothing about how it gets taught. Teachers remain free to pursue their own pedagogical ends. That said, teachers across Australia will, for the first time, be pursuing the same ends. This opens up a space and a rationale for sharing that never existed before. Everyone is pulling in the same direction; wouldn’t it make sense for teachers, students, administrators and parents to share the experience?

Let’s be realistic: whether or not we seek to formalize this sharing of experience, it will happen anyway, on BoredOfStudies.org, RateMyTeachers.com, a hundred other websites, a thousand blogs, a hundred thousand Facebook profiles, and a million tweets. But if it all happens out there, informally, we miss an enormous opportunity to let sharing power our transition into the National Curriculum. We’d be letting our greatest and most powerful asset slip through our fingers.

So let me turn this around and project us into a future where we have decided to formalize our shared experience of the National Curriculum. What might that look like? A teacher might normally prepare their curriculum and pedagogical materials at the beginning of the school term; during that preparation process they would check into a shared space, organized around the National Curriculum (this should be done formally, through an organization such as Education.AU, but could – and would – happen informally, via Google) to find out what other educators have created and shared as curriculum materials. Educators would find extensive notes, lesson plans, probably numerous recorded podcasts, links to materials on Wikipedia and other online resources, and so forth – everything that an educator might need to create an effective learning experience. Furthermore, educators would be encouraged to share and connect around any particular ‘string’ in the National Curriculum. The curriculum thus becomes a focal point for organization and coordination rather than a brute mandate of performance.

Students, already well-connected, will continue to use informal channels to communicate about their lessons; the National Curriculum gives the educational sector (and perhaps some enterprising entrepreneur) an opportunity to create a space where those curriculum ‘strings’ translate into points of contact. Students working through a particular point in the curriculum would know where they are, and would know where to gather together for help and advice. The same wealth of materials available to educators would be available to students. None of this constitutes ‘peeking at the answers’, but rather is part of an integrated effort to give students every advantage while working their way through the National Curriculum. A student in Townsville might be able to gain some advantage from a podcast of a teacher in Albany, might want to collaborate on research with students from Ballarat, might ask some questions of an educator in Lismore. The student sits in the middle of a nexus of resources designed to offer them every opportunity to succeed; if the methodology of their own classroom is a poor fit to their learning style, chances are high that they’ll find someone else, somewhere else, who makes a better match.

All of this sounds a lot like an educational utopia, but all of it is within our immediate grasp. We live at the confluence of a broadly sharing culture; a nation which is getting ubiquitous high-speed broadband; students and educators who now have pervasive access to computers; and a National Curriculum to act as an organizing principle. It is precisely because the stars are aligned so auspiciously that we can dream big dreams. This is the moment when anything is possible.

This transition could simply reinforce the last hundred years of industrial era education, where one-size-fits-all, where the student enters ‘airplane mode’ when they walk into the classroom – all devices disconnected, eyes up and straight ahead for the boredom of a fifty-minute excursion through some meaningless and disconnected body of knowledge. Where the computer simply becomes an electronic textbook for the distribution of media, rather than a portal for the exploration of the knowledge shared by others. Where the educator finds themselves increasingly bound to a curriculum which limits their freedom to find expression and meaning in their work. And all of this will happen, unless we recognize the other path that has opened before us. Unless we change direction, and set our feet on that path. Because if we keep on as we have been, we’ll simply end up with what we have today. And that would be a big mistake.

It needn’t be this way. We can take advantage of our situation, of the concrescence of opportunities opening to us. It will take some work, some time and some money. But more than anything else it requires a change of heart. We must stop thinking of the classroom as a solitary island of peace and quiet in the midst of a stormy sea, and rather think of it as a node within a network, connected and receptive. We must stop thinking of educators as valiant but solitary warriors, and transform them into a connected and receptive army. And we must recognize that this generation of students are so well connected on every front that they outpace us in every advance. They will be teaching us how to make this transition seem effortless.

Can we do this? Can we screw our courage up and take a leap into a great unknown, into an educational future which draws from our past, but is not bound to it? With parents and politicians crying out for metrics and endless assessments, we are losing the space to experiment, to play, to explore. Next year, the National Curriculum will land like a ton of bricks, even as it presents the opportunity for a Great Escape. The next twelve months will be crucial. If we can only change the way we think about what is possible, we will change what is possible. It’s a big ask. It’s the challenge of our times. Will we rise to meet it? Can we make an agreement to share what we know and what we do? That’s all it takes. So simple and so profound.

In the US state of North Carolina, the New York Times reports, an interesting experiment has been in progress since the first of February. The “Birds and Bees Text Line” invites teenagers with any questions relating to sex or the mysteries of dating to SMS their question to a phone number. That number connects these teenagers to an on-duty adult at the Adolescent Pregnancy Prevention Campaign. Within 24 hours, the teenager gets a reply to their text. The questions range from the run-of-the-mill – “When is a person not a virgin anymore?” – through the unusual – “If you have sex underwater do u need a condom?” – to the utterly heart-rending – “Hey, I’m preg and don’t know how 2 tell my parents. Can you help?”

The Birds and Bees Text Line is a response to the slow rise in the number of teenage pregnancies in North Carolina, which reached its lowest ebb in 2003 and has been climbing since. Teenagers – who are given state-mandated abstinence-only sex education in school – now have access to another resource, unmediated by teachers or parents, to help prevent another generation of teenage pregnancies. Although it’s early days yet, the response to the program has been positive. Teenagers are using the Birds and Bees Text Line.

It is precisely because the Birds and Bees Text Line is unmediated by parental control that it has earned the ire of the more conservative elements in North Carolina. Bill Brooks, president of the North Carolina Family Policy Council, a conservative group, complained to the Times about the lack of oversight. “If I couldn’t control access to this service, I’d turn off the texting service. When it comes to the Internet, parents are advised to put blockers on their computer and keep it in a central place in the home. But kids can have access to this on their cell phones when they’re away from parental influence – and it can’t be controlled.”

If I’d stuffed words into a straw man’s mouth, I couldn’t have come up with a better summation of the situation we’re all in right now: young and old, rich and poor, liberal and conservative. There are certain points where it becomes particularly obvious, such as with the Birds and Bees Text Line, but this example simply amplifies our sense of the present as a very strange place, an undiscovered country that we’ve all suddenly been thrust into. Conservatives naturally react conservatively, seeking to preserve what has worked in the past; Bill Brooks speaks for a large cohort of people who feel increasingly lost in this bewildering present.

Let us assume, for a moment, that conservatism was in the ascendant (though this is clearly not the case in the United States; one could make a good argument that the Rudd Government is, in many ways, more conservative than its predecessor). Let us presume that Bill Brooks and the people for whom he speaks could have the Birds and Bees Text Line shut down. Would that, then, be the end of it? Would we have stuffed the genie back into the bottle? The answer, unquestionably, is no.

Everyone who has used or even heard of the Birds and Bees Text Line would be familiar with what it does and how it works. Once demonstrated, it becomes much easier to reproduce. It would be relatively straightforward to take the same functions performed by the Birds and Bees Text Line and “crowdsource” them, sharing the load across any number of dedicated volunteers who might, through some clever software, automate most of the tasks needed to distribute messages throughout the “cloud” of volunteers. Even if it took a small amount of money to set up and get going, that kind of money would be available from donors who feel that teenage sexual education is a worthwhile thing.

In other words, the same sort of engine which powers Wikipedia can be put to work across a number of different “platforms”. The power of sharing allows individuals to come together in great “clouds” of activity, and allows them to focus their activity around a single task. It could be an encyclopedia, or it could be providing reliable and judgment-free information about sexuality to teenagers. The form matters not at all: what matters is that it’s happening, all around us, everywhere throughout the world.

The cloud, this new thing, this is really what has Bill Brooks scared, because it is, quite literally, ‘out of control’. It arises naturally out of the human condition of ‘hyperconnection’. We are so much better connected than we were even a decade ago, and this connectivity breeds new capabilities. The first of these capabilities is the pooling and sharing of knowledge – or ‘hyperintelligence’. Consider: everyone who reads Wikipedia is potentially as smart as the smartest person who’s written an article in Wikipedia. Wikipedia has effectively banished ignorance born of want of knowledge. The Birds and Bees Text Line is another form of hyperintelligence, connecting adults with knowledge to teenagers in desperate need of that knowledge.

Hyperconnectivity also means that we can carefully watch one another, and learn from one another’s behaviors at the speed of light. This new capability – ‘hypermimesis’ – means that new behaviors, such as the Birds and Bees Text Line, can be seen and copied very quickly. Finally, hypermimesis means that communities of interest can form around particular behaviors, ‘clouds’ of potential. These communities range from the mundane to the arcane, and they are everywhere online. But only recently have they discovered that they can translate their community into doing, putting hyperintelligence to work for the benefit of the community. This is the methodology of the Adolescent Pregnancy Prevention Campaign. This is the methodology of Wikipedia. This is the methodology of Wikileaks, which seeks to provide a safe place for whistle-blowers who want to share the goods on those who attempt to defraud or censor or suppress. This is the methodology of ANONYMOUS, which seeks to expose Scientology as a ridiculous cult. How many more examples need to be listed before we admit that the rules have changed, that the smooth functioning of power has been terrifically interrupted by these other forces, now powers in their own right?

II: Affairs of State

Don’t expect a revolution. We will not see masses of hyperconnected individuals, storming the Winter Palaces of power. This is not a proletarian revolt. It is, instead, rather more subtle and complex. The entire nature of power has changed, as have the burdens of power. Power has always carried with it the ‘burden of omniscience’ – that is, those at the top of the hierarchy have to possess a complete knowledge of everything of importance happening everywhere under their control. Where they lose grasp of that knowledge, that’s the space where coups, palace revolutions and popular revolts take place.

This new power that flows from the cloud of hyperconnectivity carries a different burden, the ‘burden of connection’. In order to maintain the cloud, and our presence within it, we are beholden to it. We must maintain each of the social relationships, each of the informational relationships, each of the knowledge relationships and each of the mimetic relationships within the cloud. Without that constant activity, the cloud dissipates, evaporating into nothing at all.

This is not a particularly new phenomenon; Dunbar’s Number demonstrates that we are beholden to the ‘tribe’ of our peers, the roughly 150 individuals who can find a place in our heads. Before civilization, the cloud was the tribe. Should the members of the tribe interrupt the constant reinforcement of their social, informational, knowledge-based and mimetic relationships, the tribe would dissolve and disperse – as happens when a tribe grows beyond the confines of Dunbar’s Number.

In this hyperconnected era, we can pick and choose which of our human connections deserves reinforcement; the lines of that reinforcement shape the scope of our power. Studies of Japanese teenagers using mobiles and twenty-somethings on Facebook have shown that, most of the time, activity is directed toward a small circle of peers, perhaps six or seven others. This ‘co-presence’ is probably a modern echo of an ancient behavior, presumably related to the familial unit.

While we might desire to extend our power and capabilities through our networks of hyperconnections, the cost associated with such investments is very high. Time spent invested in a far-flung cloud is time lost on networks closer to home. Yet individuals will nonetheless often dedicate themselves to some cause greater than themselves, despite the high price paid, drawn to some higher ideal.

The Obama campaign proved an interesting example of the price of connectivity. During the Democratic primary for the state of New York (which Hillary Clinton was expected to win easily), so many individuals contacted the campaign through its website that the campaign itself quickly became overloaded with the number of connections it was expected to maintain. By election day, the campaign staff in New York had retreated from the web, back to using mobiles. They had detached from the ‘cloud’ connectivity they used the web to foster, instead focusing their connectivity on the older model of the six or seven individuals in co-present connection. The enormous cloud of power which could have been put to work in New York lay dormant, unorganized, talking to itself through the Obama website, but effectively disconnected from the Obama campaign.

For each of us, connectivity carries a high price. For every organization which attempts to harness hyperconnectivity, the price is even higher. With very few exceptions, organizations are structured along hierarchical lines. Power flows from bottom to the top. Not only does this create the ‘burden of omniscience’ at the highest levels of the organization, it also fundamentally mismatches the flows of power in the cloud. When the hierarchy comes into contact with an energized cloud, the ‘discharge’ from the cloud to the hierarchy can completely overload the hierarchy. That’s the power of hyperconnectivity.

Another example from the Obama campaign demonstrates this power. Project Houdini was touted by the Obama campaign as a system which would get the grassroots of the campaign to funnel their GOTV results into a centralized database, which could then be used to track down individuals who hadn’t voted, in order to offer them assistance in getting to their local polling station. The campaign grassroots received training in Project Houdini, went through a field test of the software and procedures, then waited for election day. On election day, Project Houdini lasted no more than 15 minutes before it crashed under the incredible number of empowered individuals who attempted to plug data into it. Although months in the making, Project Houdini proved that a centralized and hierarchical system for campaign management couldn’t actually cope with the ‘cloud’ of grassroots organizers.

In the 21st century we now have two oppositional methods of organization: the hierarchy and the cloud. Each carries its own costs and its own strengths. Neither has yet proven to be wholly better than the other. One could make an argument that both have their own roles in the future, and that we’ll be spending a lot of time learning which works best in a given situation. What we have already learned is that these organizational types are mostly incompatible: unless very specific steps are taken, the cloud overpowers the hierarchy, or the hierarchy dissipates the cloud. We need to think about the interfaces that can connect one to the other. That’s the area that all organizations – and very specifically, non-profit organizations – will be working through in the coming years. Learning how to harness the power of the cloud will mark the difference between a modest success and an overwhelming one. Yet working with the cloud will present organizational challenges of an unprecedented order. There is no way that any hierarchy can work with a cloud without being fundamentally changed by the experience.

III: Affaire de Coeur

All organizations are now confronted with two utterly divergent methodologies for organizing their activities: the tower and the cloud. The tower seeks to organize everything in hierarchies, control information flows, and keep the power heading from bottom to top. The cloud isn’t formally organized, pools its information resources, and has no center of power. Despite all of its obvious weaknesses, the cloud can still transform itself into a formidable power, capable of overwhelming the tower. To push the metaphor a little further, the cloud can become a storm.

How does this happen? What is it that turns a cloud into a storm? Jimmy Wales has said that the success of any language-variant version of Wikipedia comes down to the dedicated efforts of five individuals. Once he spies those five individuals hard at work in Pashto or Kazakh or Xhosa, he knows that edition of Wikipedia will become a success. In other words, five people have to take the lead, leading everyone else in the cloud with their dedication, their selflessness, and their openness. This number probably holds true in a cloud of any sort – find five like-minded individuals, and the transformation from cloud to storm will begin.

At the end of that transformation there is still no hierarchy. There are, instead, concentric circles of involvement. At the innermost, those five or more incredibly dedicated individuals; then a larger circle of a greater number, who work with that inner five as time and opportunity allow; and so on, outward, at decreasing levels of involvement, until we reach those who simply contribute a word or a grammatical change, and have no real connection with the inner circle, except in commonality of purpose. This is the model for Wikipedia, for Wikileaks, and for ANONYMOUS. This is the cloud model, fully actualized as a storm. At this point the storm can challenge any tower.

But the storm doesn’t have things all its own way; to present a challenge to a tower is to invite the full presentation of its own power, which is very rude, very physical, and potentially very deadly. Wikipedians at work on the Farsi version of the encyclopedia face arrest and persecution by Iran’s Revolutionary Guards and religious police. Just a few weeks ago, after the contents of the Australian government’s internet blacklist were posted to Wikileaks, German police raided the home of the man who owns the domain name for Wikileaks in Germany. The tower still controls most of the power apparatus in the world, and that power can be used to squeeze any potential competitors.

But what happens when you try to squeeze a cloud? Effectively, nothing at all. Wikipedia has no head to decapitate. Jimmy Wales is an effective cheerleader and face for the press, but his presence isn’t strictly necessary. There are over 2000 Wikipedians who handle the day-to-day work. Locking all of them away, while possible, would only encourage further development in the cloud, as other individuals moved to fill their places. Moreover, any attempt to disrupt the cloud only makes the cloud more resilient. This has been demonstrated conclusively by the evolution of ‘darknets’, private file-sharing networks, which grew up as the widely available public file-sharing networks, such as Napster, were shut down by the copyright owners. Attacks on the cloud only improve the networks within the cloud, only make the leaders more dedicated, only increase the information and knowledge sharing within the cloud. Trying to disperse a storm only intensifies it.

These are not idle speculations; the tower will seek to contain the storm by any means necessary. The 21st century will increasingly look like a series of collisions between towers and storms. Each time the storm emerges triumphant, the tower will become more radical and determined in its efforts to disperse the storm, which will only result in a more energized and intensified storm. This is not a game that the tower can win by fighting. Only by opening up and adjusting itself to the structure of the cloud can the tower find any way forward.

What, then, is leadership in the cloud? It is not like leadership in the tower. It is not a position wrought from power, but authority in its other, and more primary meaning, ‘to be the master of’. Authority in the cloud is drawn from dedication, or, to use rather more precise language, love. Love is what holds the cloud together. People are attracted to the cloud because they are in love with the aim of the cloud. The cloud truly is an affair of the heart, and these affairs of the heart will be the engines that drive 21st century business, politics and community.

Author and pundit Clay Shirky has stated, “The internet is better at stopping things than starting them.” I reckon he’s wrong there: the internet is very good at starting things that stop things. But it is also, quite simply, very good at starting things. Making the jump from an amorphous cloud of potentiality to a forceful storm requires the love of just five people. That’s not much to ask. If you can’t get that many people in love with your cause, it may not be worth pursuing.

Conclusion: Managing Your Affairs

All 21st century organizations need to recognize and adapt to the power of the cloud. It’s either that or face a death of a thousand cuts, the slow ebbing of power away from hierarchically-structured organizations as newer forms of organization supplant them. But it need not be this way. It need not be an either/or choice. It could be a future of and-and-and, where both forms continue to co-exist peacefully. But that will only come to pass if hierarchies recognize the power of the cloud.

This means you.

All of you have your own hierarchical organizations – because that’s how organizations have always been run. Yet each of you is surrounded by your own clouds: community organizations (both in the real world and online), bulletin boards, blogs, and all of the other Web2.0 supports for the sharing of connectivity, information, knowledge and power. You are already halfway invested in the cloud, whether or not you realize it. And that’s also true for the people you serve, your customers and clients and interest groups. You can’t simply ignore the cloud.

How then should organizations proceed?

First recommendation: do not be scared of the cloud. It might be some time before you can come to love the cloud, or even trust it, but you must at least move to a place where you are not frightened by a constituency which uses the cloud to assert its own empowerment. Reacting out of fright will only lead to an arms race, a series of escalations where your hierarchy attempts to contain the cloud, and the cloud – which is faster, smarter and more agile than you can ever hope to be – outwits you, again and again.

Second: like likes like. If you can permute your organization so that it looks more like the cloud, you’ll have an easier time working with the cloud. Case in point: because of ‘message discipline’, only a very few people are allowed to speak for an organization. Yet, because of the exponential growth in connectivity and Web2.0 technologies, everyone in your organization has more opportunities to speak for your organization than ever before. Can you release control over message discipline, and empower your organization to speak for itself, from any point of contact? Yes, this sounds dangerous, and yes, there are some dangers involved, but the cloud wants to be spoken to authentically, and authenticity has many competing voices, not a single monolithic tone.

Third, and finally, remember that we are all involved in a growth process. The cloud of last year is not the cloud of next year. The answers that satisfied a year ago are not the same answers that will satisfy a year from now. We are all booting up very quickly into an alternative form of social organization which is only just now spreading its wings and testing its worth. Beginnings are delicate times. The future will be shaped by actions in the present. This means there are enormous opportunities to extend the capabilities of existing organizations, simply by harnessing them to the changes underway. It also means that tragedies await those who fight the tide of the times too single-mindedly. Our culture has already rounded the corner, and made the transition to the cloud. It remains to be seen which of our institutions and organizations can adapt themselves, and find their way forward into sharing power.

A spectre is haunting the classroom, the spectre of change. Nearly a century of institutional forms, initiated at the height of the Industrial Era, will change irrevocably over the next decade. The change is already well underway, but this change is not being led by teachers, administrators, parents or politicians. Coming from the ground up, the true agents of change are the students within the educational system. Within just the last five years, both power and control have swung so quickly and so completely in their favor that it’s all any of us can do to keep up. We live in an interregnum, between the shift in power and its full actualization: These wacky kids don’t yet realize how powerful they are.

This power shift does not have a single cause, nor could it be thwarted through any single change, to set the clock back to a more stable time. Instead, we are all participating in a broadly-based cultural transformation. The forces unleashed can not simply be dammed up; thus far they have swept aside every attempt to contain them. While some may be content to sit on the sidelines and wait until this cultural reorganization plays itself out, as educators you have no such luxury. Everything hits you first, and with full force. You are embedded within this change, as much so as this generation of students.

This paper outlines the basic features of this new world we are hurtling towards, pointing out the obvious rocks and shoals that we must avoid being thrown up against, collisions which could dash us to bits. It is a world where even the illusion of control has been torn away from us. A world wherein the first thing we need to recognize is that what is called for in the classroom is a strategic détente, a détente based on mutual interest and respect. Without those two core qualities we have nothing, and chaos will drown all our hopes for worthwhile outcomes. These outcomes are not hard to achieve; one might say that any classroom which lacks mutual respect and interest is inevitably doomed to failure, no matter what the tenor of the times. But just now, in this time, failure happens altogether more quickly.

Hence I come to the title of this talk, “Digital Citizenship”. We have given our children the Bomb, and they can – if they so choose – use it to wipe out life as we know it. Right now we sit uneasily in an era of mutually-assured destruction, all the more dangerous because these kids don’t know how fully empowered they are. They could pull the pin by accident. For this reason we must understand them, study them intently, like anthropologists doing field research with an undiscovered tribe. They are not the same as us. Unwittingly, we have changed the rules of the world for them. When the Superpowers stared each other down during the Cold War, each was comforted by the fact that each knew the other had essentially the same hopes and concerns underneath the patina of Capitalism or Communism. This time around, in this Cold War, we stare into eyes so alien they could be another form of life entirely. And this, I must repeat, is entirely our own doing. We have created the cultural preconditions for this Balance of Terror. It is up to us to create an environment that fosters respect, trust, and a new balance of powers. To do that we must first examine the nature of the tremendous changes which have fundamentally altered the way children think.

I: Primary Influences

I am a constructivist. Constructivism states (in terms that now seem fairly obvious) that children learn the rules of the world from their repeated interactions within it. Children build schema, which are then put to the test through experiment; if these experiments succeed, those schema are incorporated into ever-larger schema, but if they fail, it’s back to the drawing board to create new schema. This all seems straightforward enough – even though Einstein pronounced it, “An idea so simple only a genius could have thought of it.” That genius, Jean Piaget, remains an overarching influence across the entire field of childhood development.

At the end of the last decade I became intensely aware that the rapid technological transformations of the past generation must necessarily impact upon the world views of children. At just the time my ideas were gestating, I was privileged to attend a presentation given by Sherry Turkle, a professor at the Massachusetts Institute of Technology, and perhaps the most subtle thinker in the area of children and technology. Turkle talked about her current research, which involved a recently-released and fantastically popular children’s toy, the Furby.

For those of you who may have missed the craze, the Furby is an animatronic creature which has expressive eyes, touch sensors, and a modest capability with language. When first powered up, the Furby speaks ‘Furbish’, an artificial language which the child can decode by looking up words in a dictionary booklet included in the package. As the child interacts with the toy, the Furby’s language slowly adopts more and more English phrases. All of this is interesting enough, but more interesting, by far, is that the Furby has needs. Furby must be fed and played with. Furby must rest and sleep after a session of play. All of this gives the Furby some attributes normally associated with living things, and this gave Turkle an idea.

Constructivists had already determined that between ages four and six children learn to differentiate between animate objects, such as a pet dog, and inanimate objects, such as a doll. Since Furby showed qualities which placed it into both ontological categories, Turkle wondered whether children would class it as animate or inanimate. What she discovered during her interviews with these children astounded her. When the question was put to them of whether the Furby was animate or inanimate, the children said, “Neither.” The children intuited that the Furby resided in a new ontological class of objects, between the animate and inanimate. It’s exactly this ontological in-between-ness of Furby which causes some adults to find them “creepy”. We don’t have a convenient slot to place them into our own world views, and therefore reject them as alien. But Furby was completely natural to these children. Even the invention of a new ontological class of being-ness didn’t strain their understanding. It was, to them, simply the way the world works.

Writ large, the Furby tells the story of our entire civilization. We make much of the difference between “digital immigrants”, such as ourselves, and “digital natives”, such as these children. These kids are entirely comfortable within the digital world, having never known anything else. We casually assume that this difference is merely a quantitative facility. In fact, the difference is almost entirely qualitative. The schema upon which their world-views are based, the literal ‘rules of their world’, are completely different. Furby has an interiority hitherto only ascribed to living things, and while it may not make the full measure of a living thing, it is nevertheless somewhere on a spectrum that simply did not exist a generation ago. It is a magical object, sprinkled with the pixie dust of interactivity, come partially to life, and closer to a real-world Pinocchio than we adults would care to acknowledge.

If Furby were the only example of this transformation of the material world, we would be able to easily cope with the changes in the way children think. It was, instead, part of a leading edge of a breadth of transformation. For example, when I was growing up, LEGO bricks were simple, inanimate objects which could be assembled in an infinite arrangement of forms. Today, LEGO Mindstorms allow children to create programmable forms, using wheels and gears and belts and motors and sensors. LEGO is no longer passive, but active and capable of interacting with the child. It, too, has acquired an interiority which teaches children that at some essential level the entire material world is poised at the threshold of a transformation into the active. A child playing with LEGO Mindstorms will never see the material world as wholly inanimate; they will see it as a playground requiring little more than a few simple mechanical additions, plus a sprinkling of code, to bring it to life. Furby adds interiority to the inanimate world, but LEGO Mindstorms empowers the child with the ability to add this interiority themselves.

The most significant of these transformational innovations is one of the most recent. In 2004, Google purchased Keyhole, Inc., a company that specialized in geospatial data visualization tools. A year later Google released the first version of Google Earth, a tool which provides a desktop environment wherein the entire Earth’s surface can be browsed, at varying levels of resolution, from high Earth orbit, down to the level of storefronts, anywhere throughout the world. This tool, both free and flexible, has fomented a revolution in the teaching of geography, history and political science. No longer constrained to the archaic Mercator Projection atlas on the wall, or the static globe-as-a-ball perched on one corner of the teacher’s desk, Google Earth presents Earth-as-a-snapshot.

We must step back and ask ourselves the qualitative lesson, the constructivist message of Google Earth. Certainly it removes the problem of scale; the child can see the world from any point of view, even multiple points of view simultaneously. But it also teaches them that ‘to observe is to understand’. A child can view the ever-expanding drying of southern Australia along with data showing the rise in temperature over the past decade, all laid out across the continent. The Earth becomes a chalkboard, a spreadsheet, a presentation medium, where the thorny problems of global civilization and its discontents can be explored in exquisite detail. In this sense, no problem, no matter how vast, no matter how global, will be seen as being beyond the reach of these children. They’ll learn this – not because of what teacher says, or what homework assignments they complete – but through interaction with the technology itself.

The generation of children raised on Google Earth will graduate from secondary schools in 2017, just at the time the Government plans to complete its rollout of the National Broadband Network. I reckon these two tools will go hand-in-hand: broadband connects the home to the world, while Google Earth brings the world into the home. Australians, particularly beset by the problems of global warming, climate, and environmental management, need the best tools and the best minds to solve the problems which already beset us. Fortunately it looks as though we are training a generation for leadership, using the tools already at hand.

The existence of Google Earth as an interactive object changes the child’s relationship to the planet. A simulation of Earth is a profoundly new thing, and naturally is generating new ontological categories. Yet again, and completely by accident, we have profoundly altered the world view of this generation of children and young adults. We are doing this to ourselves: our industries turn out products and toys and games which apply the latest technological developments in a dazzling variety of ways. We give these objects to our children, more or less blindly unaware of how this will affect their development. Then we wonder how these aliens arrived in our midst, these ‘digital natives’ with their curious ways. Ladies and gentlemen, we need to admit that we have done this to ourselves. We and our technological-materialist culture have fostered an environment of such tremendous novelty and variety that we have changed the equations of childhood.

Yet these technologies are only the tip of the iceberg. Each is a technology of childhood, of a world of objects, where the relationship is between child and object. This is not the world of adults, where the relations between objects are thoroughly confused by the relationships between adults. In fact, it can be said that for as much as adults are obsessed with material possessions, we are only obsessed with them because of our relationships to other adults. The corner we turn between childhood and young adulthood is indicative of a change in the way we think, in the objects of attention, and in the technologies which facilitate and amplify that attention. These technologies have also suddenly and profoundly changed, and, again, we are almost completely unaware of what that has done to those wacky kids.

II: Share This Thought!

Australia now has more mobile phone subscribers than people. We have reached 104% subscription levels, simply because some of us own and use more than one handset. This phenomenon has been repeated globally; there are something like four billion mobile phone subscribers throughout the world, representing approximately three point six billion customers. That’s well over half the population of planet Earth. Given that there are only about a billion people in the ‘advanced’ economies in the developed world – almost all of whom now use mobiles – two and a half billion of the relatively ‘poor’ also have mobiles. How could this be? Shouldn’t these people be spending money on food, housing, and education for their children?

As it turns out (and there are numerous examples to support this) a mobile handset is probably the most important tool someone can employ to improve their economic well-being. A farmer can call ahead to markets to find out which is paying the best price for his crop; the same goes for fishermen. Tradesmen can close deals without the hassle and lost time involved in travel; craftswomen can coordinate their creative resources with a few text messages. Each of these examples can be found in any Bangladeshi city or African village. In the developed world, the mobile was nice but non-essential: no one is late anymore, just delayed, because we can always phone ahead. In the parts of the world which never had wired communications, the leap into the network has been explosively potent.

The mobile is a social accelerant; it does for our innate social capabilities what the steam shovel did for our mechanical capabilities two hundred years ago. The mobile extends our social reach, and deepens our social connectivity. Nowhere is this more noticeable than in the lives of those wacky kids. At the beginning of this decade, researcher Mizuko Ito took a look at the mobile phone in the lives of Japanese teenagers. Ito published her research in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, presenting a surprising result: these teenagers were sending and receiving a hundred text messages a day among a close-knit group of friends (generally four or five others), starting when they first arose in the morning, and going on until they fell asleep at night. This constant, gentle connectivity – which Ito named ‘co-presence’ – often consisted of little of substance, just reminders of connection.

At the time many of Ito’s readers dismissed this phenomenon as something to be found among those ‘wacky Japanese’, with their technophilic bent. A decade later this co-presence is the standard behavior for all teenagers everywhere in the developed world. An Australian teenager thinks nothing of sending and receiving a hundred text messages a day, within their own close group of friends. A parent who might dare to look at the message log on a teenager’s phone would see very little of significance and wonder why these messages needed to be sent at all. But the content doesn’t matter: connection is the significant factor.

We now know that the teenage years are when the brain ‘boots’ into its full social awareness, when children leave childhood behind to become fully participating members within the richness of human society. This process has always been painful and awkward, but just now, with the addition of the social accelerant and amplifier of the mobile, it has become almost impossibly significant. The co-present social network can help cushion the blow of rejection, or it can impel the teenager to greater acts of folly. Both sides of the technology-as-amplifier are ever-present. We have seen bullying by mobile and over YouTube or Facebook; we know how quickly the technology can overrun any of the natural instincts which might prevent us from causing damage far beyond our intention – keep this in mind, because we’ll come back to it when we discuss digital citizenship in detail.

There is another side to sociability, both far removed from this bullying behavior and intimately related to it – the desire to share. The sharing of information is an innate human behavior: since we learned to speak we’ve been talking to each other, warning each other of dangers, informing each other of opportunities, positing possibilities, and just generally reassuring each other with the sound of our voices. We’ve now extended that four-billion-fold, so that half of humanity is directly connected, one to another.

We know we say little to nothing with those we know well, though we may say it continuously. What do we say to those we know not at all? In this case we share not words but the artifacts of culture. We share a song, or a video clip, or a link, or a photograph. Each of these is just as important as words spoken, but each of these places us at a comfortable distance within the intimate act of sharing. 21st-century culture looks like a gigantic act of sharing. We share music, movies and television programmes, driving the creative industries to distraction – particularly with the younger generation, who see no need to pay for any cultural product. We share information and knowledge, creating a wealth of blogs, and resources such as Wikipedia, the universal repository of factual information about the world as it is. We share the minutiae of our lives in micro-blogging services such as Twitter, and find that, being so well connected, we can also harvest the knowledge of our networks to become ever-better informed, and ever more effective individuals. We can translate that effectiveness into action, and become potent forces for change.

Everything we do, both within and outside the classroom, must be seen through this prism of sharing. Teenagers log onto video chat services such as Skype, and do their homework together, at a distance, sharing and comparing their results. Parents offer up their kindergartener’s presentations to other parents through Twitter – and those parents respond to the offer. All of this both amplifies and undermines the classroom. The classroom has not dealt with the phenomenal transformation in the connectivity of the broader culture, and is in danger of becoming obsolesced by it.

Yet if the classroom were wholeheartedly to embrace connectivity, what would become of it? Would it simply dissolve into a chaotic sea, or is it strong enough to chart its own course in this new world? This same question confronts every institution, of every size. It affects the classroom first simply because the networked and co-present polity of hyperconnected teenagers has reached it first. It is the first institution that must transform because the young adults who are its reason for being are the agents of that transformation. There’s no way around it, no way to set the clock back to a simpler time, unless, Amish-like, we were simply to dispose of all the gadgets which we have adopted as essential elements in our lifestyle.

This, then, is why these children hold the future of the classroom-as-institution in their hands, and why the power-shift has been so sudden and so complete. This is why digital citizenship isn’t simply an academic interest, but a clear and present problem which must be addressed, broadly and immediately, throughout our entire educational system. We already live in a time of disconnect, where the classroom has stopped reflecting the world outside its walls. The classroom is born of an industrial mode of thinking, where hierarchy and reproducibility were the order of the day. The world outside those walls is networked and highly heterogeneous. And where the classroom touches the world outside, sparks fly; the classroom can’t handle the currents generated by the culture of connectivity and sharing. This cannot go on.

When discussing digital citizenship, we must first look to ourselves. This is more than a question of learning the language and tools of the digital era: we must take the life-skills we have already gained outside the classroom and bring them within. But beyond this, we must relentlessly apply network logic to the work of our own lives. If that work is as educators, so be it. We must accept the reality of the 21st century: that, more than anything else, this is the networked era, and that this network has gifted us with new capabilities even as it presents us with new dangers. Both gifts and dangers are issues of potency; the network has made us incredibly powerful. The network is smarter, faster and more agile than the hierarchy; when the two collide – as they’re bound to, with increasing frequency – the network always wins. A text message can unleash revolution, or land a teenager in jail on charges of peddling child pornography, or spark a riot on a Sydney beach; Wikipedia can drive Britannica, a quarter-millennium-old reference text, out of business; an outsider candidate can get himself elected president of the United States because his team masters the logic of the network. In truth, we already live in the age of digital citizenship, but so many of us don’t know the rules, and hence are poor citizens.

Now that we’ve explored the dimensions of the transition in the understanding of the younger generation, and the desynchronization of our own practice within the world as it exists, we can finally tackle the issue of digital citizenship. Children and young adults who have grown up in this brave new world, who have already created new ontological categories to frame it in their understanding, won’t have time or attention for preaching and screeching from the pulpit in the classroom, or the ‘bully pulpits’ of the media. In some ways, their understanding already surpasses ours, but their apprehension of consequential behavior does not. It is entirely up to us to bridge this gap in their understanding, but I do not mean to imply that educators can handle this task alone. All of the adult forces of the culture must be involved: parents, caretakers, educators, administrators, mentors, authority and institutional figures of all kinds. We must all be pulling in the same direction, lest the threads we are trying to weave together unravel.

III: 20/60 Foresight

While I was on a lecture tour last year, a Queensland teacher said something quite profound to me. “Giving a year 7 student a laptop is the equivalent of giving them a loaded gun.” Just as we wouldn’t think of giving this child a gun without extensive safety instruction, we can’t consider giving this child a computer – and access to the network – without extensive training in digital citizenship. But the laptop is only one device; any networked device has the potential for the same pitfalls.

Long before Sherry Turkle explored Furby’s effect on the world-view of children, she examined how children interact with computers. In her first survey, The Second Self: Computers and the Human Spirit, she applied Lacanian psychoanalysis and constructivism to build a model of how children interacted with computers. In the earliest days of the personal computer revolution, these machines were not connected to any networks, but were instead laboratories where the child could explore themselves, creating a ‘mirror’ of their own understanding.

Now that almost every computer is fully connected to the billion-plus regular users of the Internet, the mirror no longer reflects the self, but the collective yet highly heterogeneous tastes and behaviors of mankind. The opportunity for quiet self-exploration drowns amidst the clamor from a very vital human world. In the space between the singular and the collective, we must provide an opportunity for children to grow into a sense of themselves, their capabilities, and their responsibilities. This liminal moment is the space for an education in digital citizenship. It may be the only space available for such an education, before the lure of the network sets behavioral patterns in place.

Children must be raised to have a healthy respect for the network from their earliest awareness of it. The network access of young children is generally closely supervised, but, as they turn the corner into tweenage and secondary education, we need to provide another level of support, which fully briefs these rapidly maturing children on the dangers, pitfalls, opportunities and strengths of network culture. They already know how to do things, but they do not have the wisdom to decide when it is appropriate to do them, and when it is appropriate to refrain. That wisdom is the core of what must be passed along. But wisdom is hard to transmit in words; it must flow from actions and lessons learned. Is it possible to develop a lesson plan which imparts the lessons of digital citizenship? Can we teach these children to tame their new powers?

Before a child is given their own mobile – something that happens around age 12 here in Australia, though that is slowly dropping – they must learn the right way to use it. Not the perfunctory ‘this is not a toy’ talk they might receive from a parent, but a more subtle and profound exploration of what it means to be directly connected to half of humanity, and how, should that connectivity go awry, it could seriously affect someone’s life – possibly even their own. Yes, the younger generation has different values where the privacy of personal information is concerned, but even they have limits they want to respect, and circles of intimacy they want to defend. Showing them how to reinforce their privacy with technology is a good place to start in any discussion of digital citizenship.

Similarly, before a child is given a computer – either at home or in school – it must be accompanied by instruction in the power of the network. A child may have a natural facility with the network without having any sense of the power of the network as an amplifier of capability. It’s that disconnect which digital citizenship must bridge.

It’s not my role to be prescriptive. I’m not going to tell you to do this or that particular thing, or outline a five-step plan to ensure that the next generation avoids ruining their lives as they come online. This is a collective problem which calls for a collective solution. Fortunately, we live in an era of collective technology. It is possible for all of us to come together and collaborate on solutions to this problem. Digital citizenship is an issue which has global reach; the UK and the US are both confronting similar issues, and both, like Australia, fail to deal with them comprehensively. Perhaps the Australian College of Educators can act as a spearhead on this issue, working in concert with other national bodies to develop a program and curriculum in digital citizenship. It would be a project worthy of your next fifty years.

In closing, let’s cast our eyes forward fifty years, to 2060, when your organization will be celebrating its hundredth anniversary. We can only imagine the technological advances of the next fifty years in the fuzziest of terms. You need only cast yourselves back fifty years to understand why. Back then, a computer as powerful as my laptop wouldn’t have fit within a single building – or even a single city block. It very likely would have filled a small city, requiring its own power plant. If we have come so far in fifty years, judging where we’ll be in fifty years’ time is beyond the capabilities of even the most able futurist. We can only say that computers will become pervasive and nearly invisibly woven through the fabric of human culture.

Let us instead focus on how we will use technology in fifty years’ time. We can already see the shape of the future in one outstanding example – a website known as RateMyProfessors.com. Here, in a database of nine million reviews of one million teachers, lecturers and professors, students can learn which instructors bore, which grade easily, which excite the mind, and so forth. This simple site – which grew out of the power of sharing – has radically changed the balance of power on university campuses throughout the US and the UK. Students can learn from others’ mistakes and triumphs, avoiding the one and repeating the other. Universities, which might try to corral students into lectures with instructors who might not be exemplars of their profession, find themselves unable to fill those courses. Worse yet, bidding wars have broken out between universities seeking to fill their ranks with the instructors who receive the highest rankings.

Alongside the rise of RateMyProfessors.com, there has been an exponential increase in the amount of lecture material you can find online, whether on YouTube, or iTunes University, or any number of dedicated websites. Those lectures also have ratings, so it is already possible for a student to get to the best and most popular lectures on any subject, be it calculus or Mandarin or the medieval history of Europe.

Both of these trends are accelerating because both are backed by the power of sharing, the engine driving all of this. As we move further into the future, we’ll see the students gradually take control of the scheduling functions of the university (and probably of a large number of secondary school classes). These students will pair lecturers with courses, using software to coordinate both. More and more, the educational institution will be reduced to a layer of software sitting between the student, the mentor-instructor and the courseware. As the university dissolves in the universal solvent of the network, the capacity to use the network for education increases geometrically; education will be available everywhere the network reaches. It already reaches half of humanity; in a few years it will cover three-quarters of the population of the planet. Certainly by 2060 network access will be thought of as a human right, much like food and clean water.

In 2060, the Australian College of Educators may be more of an ‘Invisible College’ than anything based in rude physicality. Educators will continue to collaborate, but without much of the physical infrastructure we currently associate with educational institutions. Classrooms will self-organize and disperse organically, driven by need, proximity, or interest, and the best instructors will find themselves constantly in demand. Life-long learning will no longer be a catch-phrase, but a reality for the billions of individuals all focusing on improving their effectiveness within an ever-more-competitive global market for talent. (The same techniques employed by RateMyProfessors.com will impact all the other professions, eventually.)

There you have it. The human future is both more chaotic and more potent than we can easily imagine, even if we have examples in our present which point the way to where we are going. And if this future sounds far away, keep this in mind: today’s year 10 student will be retiring in 2060. This is their world.

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and the University of Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools provide their own material, we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University – you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has – or will soon have – the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share, and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary but are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom, from inside out, melting it down, and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individuals or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved; now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by those students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and capable of using all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, if there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary education, and between tertiary and secondary students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be just a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.