A spectre is haunting the classroom: the spectre of change. Nearly a century of institutional forms, initiated at the height of the Industrial Era, will change irrevocably over the next decade. The change is already well underway, but it is not being led by teachers, administrators, parents or politicians. The true agents of change are coming from the ground up: the students within the educational system. Within just the last five years, both power and control have swung so quickly and so completely in their favor that it’s all any of us can do to keep up. We live in an interregnum, between the shift in power and its full actualization: these wacky kids don’t yet realize how powerful they are.

This power shift does not have a single cause, nor could it be thwarted through any single change to set the clock back to a more stable time. Instead, we are all participating in a broadly based cultural transformation. The forces unleashed cannot simply be dammed up; thus far they have swept aside every attempt to contain them. While some may be content to sit on the sidelines and wait until this cultural reorganization plays itself out, as educators you have no such luxury. Everything hits you first, and with full force. You are embedded within this change, as much as this generation of students.

This paper outlines the basic features of the new world we are hurtling towards, pointing out the obvious rocks and shoals we must avoid, collisions which could dash us to bits. It is a world where even the illusion of control has been torn away from us; a world wherein the first thing we need to recognize is that what is called for in the classroom is a strategic détente, a détente based on mutual interest and respect. Without those two core qualities we have nothing, and chaos will drown all our hopes for worthwhile outcomes. These qualities are not hard to achieve; one might say that any classroom which lacks mutual respect and interest is inevitably doomed to failure, no matter what the tenor of the times. But just now, in this time, the failure happens altogether more quickly.

Hence I come to the title of this talk, “Digital Citizenship”. We have given our children the Bomb, and they can – if they so choose – use it to wipe out life as we know it. Right now we sit uneasily in an era of mutually-assured destruction, all the more dangerous because these kids don’t know how fully empowered they are. They could pull the pin by accident. For this reason we must understand them, study them intently, like anthropologists doing field research with an undiscovered tribe. They are not the same as us. Unwittingly, we have changed the rules of the world for them. When the Superpowers stared each other down during the Cold War, each was comforted by the knowledge that the other had essentially the same hopes and concerns underneath the patina of Capitalism or Communism. This time around, in this Cold War, we stare into eyes so alien they could belong to another form of life entirely. And this, I must repeat, is entirely our own doing. We have created the cultural preconditions for this Balance of Terror. It is up to us to create an environment that fosters respect, trust, and a new balance of powers. To do that we must first examine the nature of the tremendous changes which have fundamentally altered the way children think.

I: Primary Influences

I am a constructivist. Constructivism states (in terms that now seem fairly obvious) that children learn the rules of the world from their repeated interactions within it. Children build schema, which are then put to the test through experiment; if these experiments succeed, those schema are incorporated into ever-larger schema, but if they fail, it’s back to the drawing board to create new schema. This all seems straightforward enough – even though Einstein pronounced it “an idea so simple only a genius could have thought of it.” That genius, Jean Piaget, remains an overarching influence across the entire field of childhood development.

At the end of the last decade I became intensely aware that the rapid technological transformations of the past generation must necessarily affect the world views of children. At just the time my ideas were gestating, I was privileged to attend a presentation given by Sherry Turkle, a professor at the Massachusetts Institute of Technology and perhaps the most subtle thinker in the area of children and technology. Turkle talked about her current research, which involved a recently-released and fantastically popular children’s toy, the Furby.

For those of you who may have missed the craze, the Furby is an animatronic creature with expressive eyes, touch sensors, and a modest capability with language. When first powered up, the Furby speaks ‘Furbish’, an artificial language which the child can decode by looking up words in a dictionary booklet included in the package. As the child interacts with the toy, the Furby’s language slowly adopts more and more English phrases. All of this is interesting enough, but more interesting, by far, is that the Furby has needs. Furby must be fed and played with. Furby must rest and sleep after a session of play. All of this gives the Furby some attributes normally associated with living things, and this gave Turkle an idea.

Constructivists had already determined that between ages four and six children learn to differentiate between animate objects, such as a pet dog, and inanimate objects, such as a doll. Since Furby showed qualities which placed it into both ontological categories, Turkle wondered whether children would class it as animate or inanimate. What she discovered during her interviews with these children astounded her. When the question was put to them of whether the Furby was animate or inanimate, the children said, “Neither.” The children intuited that the Furby resided in a new ontological class of objects, between the animate and inanimate. It’s exactly this ontological in-between-ness which causes some adults to find Furbies “creepy”. We don’t have a convenient slot in our own world views to place them into, and therefore reject them as alien. But Furby was completely natural to these children. Even the invention of a new ontological class of being-ness didn’t strain their understanding. It was, to them, simply the way the world works.

Writ large, the Furby tells the story of our entire civilization. We make much of the difference between “digital immigrants”, such as ourselves, and “digital natives”, such as these children. These kids are entirely comfortable within the digital world, having never known anything else. We casually assume that this difference is merely one of quantitative facility. In fact, the difference is almost entirely qualitative. The schema upon which their world-views are based, the literal ‘rules of their world’, are completely different. Furby has an interiority hitherto ascribed only to living things, and while it may not attain the full measure of a living thing, it is nevertheless somewhere on a spectrum that simply did not exist a generation ago. It is a magical object, sprinkled with the pixie dust of interactivity, come partially to life, and closer to a real-world Pinocchio than we adults would care to acknowledge.

If Furby were the only example of this transformation of the material world, we could easily cope with the changes in the way children think. It was, instead, the leading edge of a broad transformation. For example, when I was growing up, LEGO bricks were simple, inanimate objects which could be assembled in an infinite arrangement of forms. Today, LEGO Mindstorms allows children to create programmable forms, using wheels and gears and belts and motors and sensors. LEGO is no longer passive, but active and capable of interacting with the child. It, too, has acquired an interiority, one which teaches children that at some essential level the entire material world is poised at the threshold of a transformation into the active. A child playing with LEGO Mindstorms will never see the material world as wholly inanimate; they will see it as a playground requiring little more than a few simple mechanical additions, plus a sprinkling of code, to bring it to life. Furby adds interiority to the inanimate world, but LEGO Mindstorms empowers the child to add this interiority themselves.

The most significant of these transformational innovations is also one of the most recent. In 2004, Google purchased Keyhole, Inc., a company that specialized in geospatial data visualization tools. A year later Google released the first version of Google Earth, a tool which provides a desktop environment wherein the entire surface of the Earth can be browsed at varying levels of resolution, from high Earth orbit down to the level of storefronts, anywhere in the world. This tool, both free and flexible, has fomented a revolution in the teaching of geography, history and political science. No longer constrained to the archaic Mercator Projection atlas on the wall, or the static globe-as-a-ball perched on one corner of the teacher’s desk, students can turn to Google Earth’s Earth-as-a-snapshot.

We must step back and ask ourselves what the qualitative lesson, the constructivist message, of Google Earth might be. Certainly it removes the problem of scale; the child can see the world from any point of view, even multiple points of view simultaneously. But it also teaches them that ‘to observe is to understand’. A child can view the ever-expanding drying of southern Australia alongside data showing the rise in temperature over the past decade, all laid out across the continent. The Earth becomes a chalkboard, a spreadsheet, a presentation medium, where the thorny problems of global civilization and its discontents can be explored in exquisite detail. In this sense, no problem, no matter how vast, no matter how global, will be seen as beyond the reach of these children. They’ll learn this not because of what the teacher says, nor because of the homework assignments they complete, but through interaction with the technology itself.

The generation of children raised on Google Earth will graduate from secondary schools in 2017, just as the Government completes its planned rollout of the National Broadband Network. I reckon these two tools will go hand-in-hand: broadband connects the home to the world, while Google Earth brings the world into the home. Australians, particularly affected by the problems of global warming, climate change, and environmental management, need the best tools and the best minds to solve the problems which already beset us. Fortunately it looks as though we are training a generation for leadership, using the tools already at hand.

The existence of Google Earth as an interactive object changes the child’s relationship to the planet. A simulation of Earth is a profoundly new thing, and naturally it generates new ontological categories. Yet again, and completely by accident, we have profoundly altered the world view of this generation of children and young adults. We are doing this to ourselves: our industries turn out products and toys and games which apply the latest technological developments in a dazzling variety of ways. We give these objects to our children, largely blind to how they will affect their development. Then we wonder how these aliens arrived in our midst, these ‘digital natives’ with their curious ways. Ladies and gentlemen, we need to admit that we have done this to ourselves. We and our technological-materialist culture have fostered an environment of such tremendous novelty and variety that we have changed the equations of childhood.

Yet these technologies are only the tip of the iceberg. Each is a technology of childhood, of a world of objects, where the relationship is between child and object. This is not the world of adults, where the relations between objects are thoroughly confused by the relationships between adults. In fact, it can be said that for as much as adults are obsessed with material possessions, we are only obsessed with them because of our relationships to other adults. The corner we turn between childhood and young adulthood marks a change in the way we think, in the objects of our attention, and in the technologies which facilitate and amplify that attention. These technologies have also suddenly and profoundly changed, and, again, we are almost completely unaware of what that has done to those wacky kids.

II: Share This Thought!

Australia now has more mobile phone subscriptions than people. We have reached 104% subscription levels, simply because some of us own and use more than one handset. This phenomenon has been repeated globally; there are something like four billion mobile subscriptions throughout the world, representing approximately three point six billion individual customers. That’s well over half the population of planet Earth. Given that there are only about a billion people in the ‘advanced’ economies of the developed world – almost all of whom now use mobiles – two and a half billion of the relatively ‘poor’ also have mobiles. How could this be? Shouldn’t these people be spending money on food, housing, and education for their children?

As it turns out (and there are numerous examples to support this), a mobile handset is probably the most important tool someone can employ to improve their economic well-being. A farmer can call ahead to markets to find out which is paying the best price for his crop; the same goes for fishermen. Tradesmen can close deals without the hassle and lost time involved in travel; craftswomen can coordinate their creative resources with a few text messages. Each of these examples can be found in any Bangladeshi city or African village. In the developed world, the mobile was nice but non-essential: no one is late anymore, just delayed, because we can always phone ahead. In the parts of the world which never had wired communications, the leap into the network has been explosively potent.

The mobile is a social accelerant; it does for our innate social capabilities what the steam shovel did for our mechanical capabilities two hundred years ago. The mobile extends our social reach, and deepens our social connectivity. Nowhere is this more noticeable than in the lives of those wacky kids. At the beginning of this decade, researcher Mizuko Ito examined the mobile phone in the lives of Japanese teenagers. Ito published her research in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, presenting a surprising result: these teenagers were sending and receiving a hundred text messages a day among a close-knit group of friends (generally four or five others), starting when they first arose in the morning and going on until they fell asleep at night. This constant, gentle connectivity – which Ito named ‘co-presence’ – often consisted of little of substance, just reminders of connection.

At the time many of Ito’s readers dismissed this phenomenon as something to be found among those ‘wacky Japanese’, with their technophilic bent. A decade later this co-presence is the standard behavior for all teenagers everywhere in the developed world. An Australian teenager thinks nothing of sending and receiving a hundred text messages a day, within their own close group of friends. A parent who might dare to look at the message log on a teenager’s phone would see very little of significance and wonder why these messages needed to be sent at all. But the content doesn’t matter: connection is the significant factor.

We now know that the teenage years are when the brain ‘boots’ into its full social awareness, when children leave childhood behind to become fully participating members within the richness of human society. This process has always been painful and awkward, but just now, with the addition of the mobile as social accelerant and amplifier, it has become almost impossibly intense. The co-present social network can help cushion the blow of rejection, or it can impel the teenager to greater acts of folly. Both sides of the technology-as-amplifier are ever-present. We have seen bullying by mobile and over YouTube or Facebook; we know how quickly the technology can overrun any of the natural instincts which might prevent us from causing damage far beyond our intention. Keep this in mind, because we’ll come back to it when we discuss digital citizenship in detail.

There is another side to sociability, both far removed from this bullying behavior and intimately related to it – the desire to share. The sharing of information is an innate human behavior: since we learned to speak we’ve been talking to each other, warning each other of dangers, informing each other of opportunities, positing possibilities, and just generally reassuring each other with the sound of our voices. We’ve now extended that conversation to four billion people, so that more than half of humanity is directly connected, one to another.

We may say little of substance to those we know well, though we say it continuously. What do we say to those we know not at all? In this case we share not words but the artifacts of culture. We share a song, or a video clip, or a link, or a photograph. Each of these is just as important as words spoken, but each places us at a comfortable distance within the intimate act of sharing. 21st-century culture looks like a gigantic act of sharing. We share music, movies and television programmes, driving the creative industries to distraction – particularly with the younger generation, who see no need to pay for any cultural product. We share information and knowledge, creating a wealth of blogs, and resources such as Wikipedia, the universal repository of factual information about the world as it is. We share the minutiae of our lives in micro-blogging services such as Twitter, and find that, being so well connected, we can also harvest the knowledge of our networks to become ever-better informed, ever more effective individuals. We can translate that effectiveness into action, and become potent forces for change.

Everything we do, both within and outside the classroom, must be seen through this prism of sharing. Teenagers log onto video chat services such as Skype and do their homework together, at a distance, sharing and comparing their results. Parents offer up their kindergartener’s presentations to other parents through Twitter – and those parents respond to the offer. All of this both amplifies and undermines the classroom. The classroom has not dealt with the phenomenal transformation in the connectivity of the broader culture, and is in danger of being rendered obsolete by it.

Yet if the classroom were wholeheartedly to embrace connectivity, what would become of it? Would it simply dissolve into a chaotic sea, or is it strong enough to chart its own course in this new world? This same question confronts every institution, of every size. It affects the classroom first simply because the networked and co-present polity of hyperconnected teenagers has reached it first. It is the first institution that must transform, because the young adults who are its reason for being are the agents of that transformation. There’s no way around it, no way to set the clock back to a simpler time – unless, Amish-like, we were simply to dispose of all the gadgets we have adopted as essential elements of our lifestyle.

This, then, is why these children hold the future of the classroom-as-institution in their hands; this is why the power shift has been so sudden and so complete. This is why digital citizenship isn’t simply an academic interest, but a clear and present problem which must be addressed, broadly and immediately, throughout our entire educational system. We already live in a time of disconnect, where the classroom has stopped reflecting the world outside its walls. The classroom is born of an industrial mode of thinking, where hierarchy and reproducibility were the order of the day. The world outside those walls is networked and highly heterogeneous. And where the classroom touches the world outside, sparks fly; the classroom can’t handle the currents generated by the culture of connectivity and sharing. This cannot go on.

When discussing digital citizenship, we must first look to ourselves. This is more than a question of learning the language and tools of the digital era: we must take the life-skills we have already gained outside the classroom and bring them within. But beyond this, we must relentlessly apply network logic to the work of our own lives. If that work is as educators, so be it. We must accept the reality of the 21st century: that, more than anything else, this is the networked era, and that this network has gifted us with new capabilities even as it presents us with new dangers. Both gifts and dangers are issues of potency; the network has made us incredibly powerful. The network is smarter, faster and more agile than the hierarchy; when the two collide – as they’re bound to, with increasing frequency – the network always wins. A text message can unleash revolution, or land a teenager in jail on charges of peddling child pornography, or spark a riot on a Sydney beach; Wikipedia can drive Britannica, a quarter-millennium-old reference text, out of business; an outsider candidate can get himself elected president of the United States because his team mastered the logic of the network. In truth, we already live in the age of digital citizenship, but so many of us don’t know the rules, and hence are poor citizens.

Now that we’ve explored the dimensions of the transition in the understanding of the younger generation, and the desynchronization of our own practice from the world as it exists, we can finally tackle the issue of digital citizenship. Children and young adults who have grown up in this brave new world, who have already created new ontological categories to frame it in their understanding, won’t have time or attention for preaching and screeching from the pulpit in the classroom, or from the ‘bully pulpits’ of the media. In some ways, their understanding already surpasses ours, but their apprehension of consequential behavior does not. It is entirely up to us to bridge this gap in their understanding, but I do not mean to imply that educators can handle this task alone. All of the adult forces of the culture must be involved: parents, caretakers, educators, administrators, mentors, authority and institutional figures of all kinds. We must all be pulling in the same direction, lest the threads we are trying to weave together unravel.

III: 20/60 Foresight

While I was on a lecture tour last year, a Queensland teacher said something quite profound to me: “Giving a year 7 student a laptop is the equivalent of giving them a loaded gun.” Just as we wouldn’t think of giving a child a gun without extensive safety instruction, we can’t even consider giving a child a computer – and access to the network – without extensive training in digital citizenship. But the laptop is only one device; any networked device has the potential for the same pitfalls.

Long before Sherry Turkle explored Furby’s effect on the world-view of children, she examined how children interact with computers. In her first survey, The Second Self: Computers and the Human Spirit, she applied Lacanian psychoanalysis and constructivism to build a model of how children interacted with computers. In the earliest days of the personal computer revolution, these machines were not connected to any networks, but were instead laboratories where the child could explore themselves, creating a ‘mirror’ of their own understanding.

Now that almost every computer is fully connected to the billion-plus regular users of the Internet, the mirror no longer reflects the self, but the collective yet highly heterogeneous tastes and behaviors of mankind. The opportunity for quiet self-exploration drowns amidst the clamor from a very vital human world. In the space between the singular and the collective, we must provide an opportunity for children to grow into a sense of themselves, their capabilities, and their responsibilities. This liminal moment is the space for an education in digital citizenship. It may be the only space available for such an education, before the lure of the network sets behavioral patterns in place.

Children must be raised to have a healthy respect for the network from their earliest awareness of it. The network access of young children is generally closely supervised, but, as they turn the corner into the tweenage years and secondary education, we need to provide another level of support, one which fully briefs these rapidly maturing children on the dangers, pitfalls, opportunities and strengths of network culture. They already know how to do things, but they do not have the wisdom to decide when it is appropriate to do them, and when it is appropriate to refrain. That wisdom is the core of what must be passed along. But wisdom is hard to transmit in words; it must flow from actions and lessons learned. Is it possible to develop a lesson plan which imparts the lessons of digital citizenship? Can we teach these children to tame their new powers?

Before a child is given their own mobile – something that happens around age 12 here in Australia, though that is slowly dropping – they must learn the right way to use it. Not the perfunctory ‘this is not a toy’ talk they might receive from a parent, but a more subtle and profound exploration of what it means to be directly connected to half of humanity, and how, should that connectivity go awry, it could seriously affect someone’s life – possibly even their own. Yes, the younger generation has different values where the privacy of personal information is concerned, but even they have limits they want to respect, and circles of intimacy they want to defend. Showing them how to reinforce their privacy with technology is a good place to start in any discussion of digital citizenship.

Similarly, before a child is given a computer – either at home or in school – it must be accompanied by instruction in the power of the network. A child may have a natural facility with the network without having any sense of the power of the network as an amplifier of capability. It’s that disconnect which digital citizenship must bridge.

It’s not my role to be prescriptive. I’m not going to tell you to do this or that particular thing, or outline a five-step plan to ensure that the next generation avoid ruining their lives as they come online. This is a collective problem which calls for a collective solution. Fortunately, we live in an era of collective technology. It is possible for all of us to come together and collaborate on solutions to this problem. Digital citizenship is an issue with global reach; the UK and the US are both confronting similar issues, and both, like Australia, have failed to deal with them comprehensively. Perhaps the Australian College of Educators can act as a spearhead on this issue, working in concert with other national bodies to develop a program and curriculum in digital citizenship. It would be a project worthy of your next fifty years.

In closing, let’s cast our eyes forward fifty years, to 2060, when your organization will be celebrating its hundredth anniversary. We can only imagine the technological advances of the next fifty years in the fuzziest of terms. You need only cast yourselves back fifty years to understand why: back then, a computer as powerful as my laptop wouldn’t have fit within a single building, or even a single city block. It very likely would have filled a small city, and required its own power plant. If we have come so far in fifty years, judging where we’ll be in fifty years’ time is beyond the capabilities of even the most able futurist. We can only say that computers will become pervasive, woven nearly invisibly through the fabric of human culture.

Let us instead focus on how we will use technology in fifty years’ time. We can already see the shape of the future in one outstanding example – a website known as RateMyProfessors.com. Here, in a database of nine million reviews of one million teachers, lecturers and professors, students can learn which instructors bore, which grade easily, which excite the mind, and so forth. This simple site – which grew out of the power of sharing – has radically changed the balance of power on university campuses throughout the US and the UK. Students can learn from others’ mistakes, and repeat others’ triumphs. Universities, which might try to corral students into lectures with instructors who might not be exemplars of their profession, find themselves unable to fill those courses. Worse yet, bidding wars have broken out between universities seeking to fill their ranks with the instructors who receive the highest rankings.

Alongside the rise of RateMyProfessors.com, there has been an exponential increase in the amount of lecture material you can find online, whether on YouTube, or iTunes University, or any number of dedicated websites. Those lectures also have ratings, so it is already possible for a student to get to the best and most popular lectures on any subject, be it calculus or Mandarin or the medieval history of Europe.

Both of these trends are accelerating because both are backed by the power of sharing, the engine driving all of this. As we move further into the future, we’ll see students gradually take control of the scheduling functions of the university (and probably of a large number of secondary school classes). These students will pair lecturers with courses, using software to coordinate both. More and more, the educational institution will be reduced to a layer of software sitting between the student, the mentor-instructor and the courseware. As the university dissolves in the universal solvent of the network, the capacity to use the network for education increases geometrically; education will be available everywhere the network reaches. It already reaches half of humanity; in a few years it will cover three-quarters of the population of the planet. Certainly by 2060 network access will be thought of as a human right, much like food and clean water.

In 2060, Australian College of Educators may be more of an ‘Invisible College’ than anything based in rude physicality. Educators will continue to collaborate, but without much of the physical infrastructure we currently associate with educational institutions. Classrooms will self-organize and disperse organically, driven by need, proximity, or interest, and the best instructors will find themselves constantly in demand. Life-long learning will no longer be a catch-phrase, but a reality for the billions of individuals all focusing on improving their effectiveness within an ever-more-competitive global market for talent. (The same techniques employed by RateMyProfessors.com will impact all the other professions, eventually.)

There you have it. The human future is both more chaotic and more potent than we can easily imagine, even if we have examples in our present which point the way to where we are going. And if this future sounds far away, keep this in mind: today’s year 10 student will be retiring in 2060. This is their world.

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools add their own, we begin to see the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. Then, after the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen, or a hundred, lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University: you can’t rate the various lectures on offer. You can know which ones have been downloaded most often, but that’s not precisely the same thing as knowing which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as only halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but, without that vital bit of feedback, it’s nearly impossible for us to winnow the wheat from the educational chaff.

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.
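To make the point concrete, here is a minimal sketch – purely hypothetical, since RateMyLectures.com doesn’t exist – of why ratings, rather than download counts, are the vital bit of feedback. The lecture titles, the numbers, and the damped-average scoring rule are all invented for illustration:

```python
# Illustrative sketch only: a toy ranking for a hypothetical
# RateMyLectures-style service. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Lecture:
    title: str
    downloads: int
    ratings: list  # 1-5 star scores from viewers

def bayesian_score(lecture, prior_mean=3.0, prior_weight=10):
    """Damped average: with few ratings a lecture stays near the prior,
    so one five-star vote can't outrank a consistently well-reviewed one."""
    n = len(lecture.ratings)
    return (prior_mean * prior_weight + sum(lecture.ratings)) / (prior_weight + n)

lectures = [
    Lecture("Calculus I, Lecturer A", downloads=90_000, ratings=[3, 2, 3, 3]),
    Lecture("Calculus I, Lecturer B", downloads=4_000, ratings=[5, 5, 4, 5, 5]),
]

# Download counts surface the most popular lecture;
# ratings surface the best-reviewed one. Here they disagree.
print(max(lectures, key=lambda l: l.downloads).title)  # Lecturer A
print(max(lectures, key=bayesian_score).title)         # Lecturer B
```

The popularity ranking and the quality ranking point at different lectures, which is exactly the gap a ratings layer sitting alongside iTunes U would close.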

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever else people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has or soon will have available the absolute best lectures, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share, and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary, but they are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom from the inside out, melting it down and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Stanford University and the Massachusetts Institute of Technology have spearheaded this movement, placing their entire sets of lectures online through iTunes University; yet both institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools – the lectures only have full value in context. This is true, but it discounts the possibility that some individual or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.
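To show how little machinery such an “Open” school would actually need, here is a minimal sketch of the coordination idea: students pool their goals, and a course forms once enough of them sign up. Everything here is an assumption for illustration – the field names, the viability threshold, the matching rule:

```python
# Hypothetical "Open school" coordination: students post learning goals,
# and viable pools of students are paired with willing instructors.
from collections import defaultdict

MIN_STUDENTS = 5  # assumed threshold at which a course becomes viable

goals = defaultdict(list)  # subject -> students who want to learn it

def post_goal(student, subject):
    """A student declares what they want to learn."""
    goals[subject].append(student)

def form_courses(instructors):
    """Pair each pool that has reached critical mass with an instructor."""
    return [
        {"subject": subject, "students": students, "instructor": instructors[subject]}
        for subject, students in goals.items()
        if len(students) >= MIN_STUDENTS and subject in instructors
    ]

for name in ["Ana", "Ben", "Chloe", "Dev", "Eli", "Fay"]:
    post_goal(name, "calculus")

print(form_courses({"calculus": "Dr. Example"}))  # one course, six students
```

Scheduling, payment and government oversight would pile real complexity on top of this, but the core transaction – pooled goals matched to a willing instructor – really is this simple.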

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing body of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. Instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved; now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if they are learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If learning can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This much, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by those students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and able to use all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council for Educational Research; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, if there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary schools and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher, or even the administrator, has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the Online System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major problem in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

So, more than for any other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about it, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually then rapidly sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle, where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until – as of this writing – there are 2.65 million articles in the English-language Wikipedia.
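This critical-mass dynamic can be illustrated with a toy model – not Wikipedia’s actual data, just an invented feedback rule under which growth stalls below a threshold and compounds above it:

```python
# Toy model of a "critical mass" virtuous cycle. The threshold and
# the growth rule are invented purely to illustrate the dynamic.
def simulate(articles, critical_mass=30_000, years=5):
    for year in range(1, years + 1):
        appeal = articles / critical_mass       # assumed: appeal scales with size
        growth = 0.2 if appeal < 1 else appeal  # sub-critical growth crawls
        articles = int(articles * (1 + growth))
        print(f"year {year}: ~{articles:,} articles")

simulate(14_000)  # sub-critical: slow, plodding growth
simulate(40_000)  # super-critical: explosive, compounding growth
```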

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web 2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart had known for forty years that sharing information is the cornerstone of intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I’d have to imagine that this is pleasing Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: How can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best tools we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here, and go back to your own schools none the wiser. Or, you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in the old proverb: “Many hands make light work.” The two hundred of you, here today, can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!

We have been human beings for perhaps sixty thousand years. In all that time, our genome, the twenty-five thousand genes and three billion base pairs which comprise the source code for Homo sapiens sapiens, has hardly changed.

For at least three thousand generations, we’ve had big brains to think with, a descended larynx to speak with, and opposable thumbs to grasp with. Yet, for almost ninety percent of that enormous span of time, humanity remained a static presence.

Our ancestors entered the world and passed on from it, but the patterns of culture remained remarkably stable, persistent and conservative. This posed a conundrum for paleoanthropologists, one long known as ‘the sapient paradox’: if we had the “kit” for it, why did civilization take so long to arise?

Cambridge archeologist Colin Renfrew (more formally, Baron Renfrew of Kaimsthorn) recently proposed an answer. We may have had great hardware, but it took a long, long time for humans to develop software which made full use of it.

We had to pass through symbolization, investing the outer world with inner meaning (in the process, creating some great art), before we could begin to develop the highly symbolic processes of cities, culture, law, and government.

About ten thousand years ago, the hidden interiority of humanity, passed down through myths and teachings and dreamings, built up a cultural reservoir of social capacity which overtopped the dam of the conservative patterns of humanity. We booted up (as it were) into a culture now so familiar we rarely take notice of it.

In Guns, Germs and Steel, evolutionary biologist and geographer Jared Diamond presented a model which elegantly explains how various peoples crossed the gap into civilization.

Cultures located along similar climatic regions on the planet’s surface could and did share innovations, most significantly along the broad swath of land from the Yangtze to the Rhine. This sharing accelerated the development of each of the populations connected together through the material flow of plants and animals and the immaterial flow of ideas and symbols. Where sharing had been a local and generational project for fifty thousand years, it suddenly became a geographical project across nearly half the diameter of the planet. Cities emerged in Anatolia, Palestine and the Fertile Crescent, and civilization spread out, over the next five hundred generations, to cover all of Eurasia.

Civilization proved another conservative force in human culture; despite the huge increases in population, the social order of Jericho looks little different from that of Imperial Rome or the Qin Dynasty or Medieval France.

But when Gutenberg (borrowing from the Chinese) perfected moveable type, he led the way to another and even broader form of cultural sharing; literacy became widespread in the aftermath of the printing press, and savants throughout Europe published their insights, sharing their own expertise, producing the Enlightenment and igniting the Scientific Revolution. Peer review, although portrayed today as a conservative force, initially acted as a radical intellectual accelerant, a mental hormone which again amplified the engines of human culture, leading directly to the Industrial Age.

The conservative empires fell, replaced by demos, the people: the cogs and wheels of a new system of the world which allowed for massive cities, massive markets, mass media, massive growth in human knowledge, and a new type of radicalism, known as Liberalism, which asserted the freedom of capital, labor, and people. That Liberalism, after two hundred and fifty years of ascendancy, has become the conservative order of culture, and faces its own existential threat, the result of another innovation in sharing.

Last month, The Economist, that fountainhead of Ur-Liberalism, proclaimed humanity “halfway there.” Somewhere in the last few months, half the population of the planet became mobile telephone subscribers. In a single decade we’ve gone from half the world having never made a telephone call to half the world owning their own mobile.

It took nearly a decade to get to the first billion, four years to the second, eighteen months to the third, and – sometime during 2011 – over five billion of us will be connected. Mobile handsets will soon be in the hands of everyone except the billion and a half extremely poor; microfinance organizations like Bangladesh’s Grameen Bank work hard to ensure that even this destitute minority have access to mobiles. Why? Mobiles may be the most potent tool yet invented for the elimination of poverty.

To those of us in the developed world this seems a questionable assertion. For us, mobiles are mainly social accelerants: no one is ever late anymore, just delayed. But, for entire populations who have never had access to instantaneous global communication, the mobile unleashes the innate, inherent and inalienable capabilities of sociability. Sociability has always been the cornerstone of human effectiveness. Being social has always been the best way to get ahead.

Until recently, we’d seen little to correlate mobiles with human economic development. But, here again, we see the gap between raw hardware capabilities and their expression in cultural software. Handing someone a mobile is not the end of the story, but the beginning. Nor is this purely a phenomenon of the developing world, or of the poor. We had the Web for almost a decade before we really started to work it toward its potential. Wikis were invented in 1995, marking them as an early web technology; the idea of Wikipedia took another six years.

Even SMS, the true carrier of the Human Network, had been dismissed by the telecommunications giants as uninteresting, a sideshow. Last year we sent forty-three billion text messages.

We have a drive to connect and socialize: this drive has now been accelerated and amplified as comprehensively as the steam engine amplified human strength two hundred and fifty years ago. Just as the steam engine initiated the transformation of the natural landscape into man-made artifice, the ‘hyperconnectivity’ engendered by these new toys is transforming the human landscape of social relations. This time around, fifty thousand years of cultural development will collapse into about twenty.

This is coming as a bit of a shock.

Part Two: Hypermimesis

I have two nephews, Alexander and Andrew, born in 2001 and 2002. Alexander watched his mother mousing around on her laptop, and – from about 18 months – reached out to play with the mouse, imitating her actions. By age three Alex had a fair degree of control over the mouse; his younger brother watched him at play, and copied his actions. Soon, both wrestled for control of a mouse that both had mastered. Children are experts in mimesis – learning by imitation. It’s been shown that young chimpanzees regularly outscore human toddlers on cognitive tasks, while the children far surpass the chimps in their ability to “ape” behavior. We are built to observe and reproduce the behaviors of our parents, our mentors and our peers.

Our peers now number three and a half billion.

Whenever any one of us displays a new behavior in a hyperconnected context, that behavior is inherently transparent, visible and observed. If that behavior is successful, it is immediately copied by those who witnessed the behavior, then copied by those who witness that behavior, and those who witnessed that behavior, and so on. Very quickly, that behavior becomes part of the global behavioral kit. As its first-order emergent quality, hyperconnectivity produces hypermimesis, the unprecedented acceleration of the natural processes of observational learning, where each behavioral innovation is distributed globally and instantaneously.
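
The arithmetic behind hypermimesis is the arithmetic of contagion. As a minimal sketch – a synthetic network with invented parameters, making no claim about any real population – here is how a single observed behavior can saturate a hyperconnected group within a few rounds of copying:

    import random

    # Hypothetical contagion sketch: one person displays a new behavior;
    # anyone who observes it in one of their contacts sometimes copies it.
    random.seed(1)
    N = 10_000
    contacts = [random.sample(range(N), 20) for _ in range(N)]  # 20 contacts each

    adopted = {0}  # the innovator
    rounds = 0
    while len(adopted) < 0.95 * N:
        observers = {c for person in adopted for c in contacts[person]}
        adopted |= {c for c in observers if random.random() < 0.5}  # half copy it
        rounds += 1
        print(f"round {rounds}: {len(adopted):,} adopters")

Run it, and a population of ten thousand saturates in a handful of rounds; the numbers are made up, but the shape of the curve is the point.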

Only a decade ago the network was all hardware and raw potential, but we are learning fast, and this learning is pervasive. Behaviors, once slowly copied from generation to generation, then, still slowly, from location to location, now ‘hyperdistribute’ themselves via the Human Network. We all learn from each other with every text we send, and each new insight becomes part of the new software of a new civilization.

We still do not know much about this nascent cultural form, even as its pieces pop out of the ether all around us. We know that it is fluid, flexible, mobile, pervasive and inexorable. We know that it does not allow for the neat proprieties of privacy and secrecy and ownership which define the fundamental ground of Liberal civilization. We know that, even as it grows, it encounters conservative forces intent on moderating its impact. Yet every assault, every tariff, every law designed to constrain this Human Network has failed.

The Chinese, who gave it a fair go, have conceded the failure of their “Great Firewall,” relying now on self-censorship, situating the policeman within the mind of the dissident netizen.

Record companies and movie studios try to block distribution channels they can not control and can not tariff; every attempt to control distribution only results in an ever-more-pervasive and ever-more-difficult-to-detect “Darknet.”

A band of reporters and bloggers (some of whom are in this room today) took down the Attorney General of the United States, despite the best attempts of Washington’s political machinery to obfuscate then overload the processes of transparency and oversight. Each of these singular examples would have been literally unthinkable a decade ago, but today they are the facts on the ground, unmistakable signs of the potency of this new cultural order.

It is as though we have all been shoved into the same room, a post-modern Panopticon, where everyone watches everyone else, can speak with everyone else, can work with everyone else. We can send out a call to “find the others,” for any cause, and watch in wonder as millions raise their hands. Any fringe (noble or diabolical) multiplied across three and a half billion adds up to substantial numbers. Amplified by the Human Network, the bonds of affinity have delivered us over to a new kind of mob rule.

This shows up, at its most complete, in Wikipedia, which (warts and all) represents the first attempt to survey and capture the knowledge of the entire human race, rather than only its scientific and academic elites. A project of the mob, for the mob, and by the mob, Wikipedia is the mob rule of factual knowledge. Its phenomenal success demonstrates beyond all doubt how the calculus of civilization has shifted away from its Liberal basis. In Liberalism, knowledge is a scarce resource, managed by elites: the scarcer the knowledge, the more highly valued both that knowledge and the elites which conserve it. Wikipedia turns that assertion inside out: the more something is shared, the more valuable it becomes. These newly disproportionate returns on the investment in altruism now trump the ‘virtue of selfishness.’

Paradoxically, Wikipedia is not at all democratic, nor is it actually transparent, though it gives the appearance of both. Investigations conducted by The Register in the UK and other media outlets have shown that the “encyclopedia anyone can edit” is, in fact, tightly regulated by a close network of hyperconnected peers, the “Wikipedians.”

This premise is borne out by the unpleasant fact that article submissions to Wikipedia are being rejected at an ever-increasing rate. Wikipedia’s growth has slowed, and may someday grind to a halt, not because it has somehow encompassed the totality of human knowledge, but because it is the front line of a new kind of warfare, a battle both semantic and civilizational. In this battle, we can see the tracings of hyperpolitics, the politics of the era of hyperconnectivity.

To outsiders like myself, who critique their increasingly draconian behavior, Wikipedians have a simple response: “We are holding the line against chaos.” Wikipedians honestly believe that, in keeping Wikipedia from such effluvia as endless articles on anime characters, or biographies of living persons deemed “insufficiently notable,” they keep their resource “pure.” This is an essentially conservative impulse, as befits the temperament of a community of individuals who are, at heart, librarians and archivists.

The mechanisms through which this purity is maintained, however, are hardly conservative.

Hyperconnected, the Wikipedians create “sock puppet” personae to argue their points on discussion pages, using back-channel, non-transparent communications with other Wikipedians to amass the support (both numerically and rhetorically) to enforce their dictates. Those who attempt to counter the fixed opinion of any network of Wikipedians encounter a buzz-saw of defiance, and, almost invariably, withdraw in defeat.

Now that this ‘Great Game’ has been exposed, hypermimesis comes into play. The next time an individual or community gets knocked back, they have an option: they can choose to “go nuclear” on Wikipedia, using the tools of hyperconnectivity to generate such a storm of protest, from so many angles of attack, that the Wikipedians find themselves overwhelmed, backed into the buzz-saw of their own creation.

This will probably engender even more conservative reaction from the Wikipedians, until, in fairly short order, the most vital center of human knowledge creation in the history of our species becomes entirely fossilized.

Or, just possibly, Wikipedians will bow to the inevitable, embrace the chaos, and find a way to make it work.

That choice, writ large, is the same that confronts us in every aspect of our lives. The entire human social sphere faces the increasing pressures of hyperconnectivity, which arrive hand-in-hand with an increasing empowerment (‘hyperempowerment’) by means of hypermimesis. All of our mass social institutions, developed at the start of the Liberal era, are backed up against the same buzz saw.

Politics, as the most encompassing of our mass institutions, now balances on a knife edge between a past which no longer works and a future of chaos.

Part Three: No Governor

Last Monday, as I waited at San Francisco International for a flight to Logan, I used my mobile to snap some photos of the status board (cheerfully informing me of my delayed departure), which I immediately uploaded to Flickr. As I waited at the gate, I engaged in playful banter with two women d’un certain âge, that clever sort of casual conversation one has with fellow travelers. After we boarded the flight, one of the women approached me. “I just wanted you to know, that other woman, she works for the Treasury Department. And you were making her nervous when you took those photos.”

Now here’s the thing: I wanted to share the frustrations of my journey with my many friends, both in Australia and America, who track my comings and goings on Twitter, Flickr and Facebook. Sharing makes the unpleasant endurable. In that moment of confrontation, I found myself thrust into a realization that had been building over the last four years: Sharing is the threat. Not just a threat. It is the whole of the thing.

A photo snapped on my mobile becomes instantaneously and pervasively visible. No wonder she’s nervous: in my simple, honest and entirely human act of sharing, it becomes immediately apparent that any pretensions to control, or limitation, or the exercise of power have already collapsed into shell-shocked impotence.

We are asked to believe that hyperconnectivity can be embraced by political campaigns, and by politicians in power. We are asked to believe that everything we already know to be true about the accelerating disintegration of hierarchies of all kinds – economic, academic, cultural – will somehow magically suspend itself for the political process. That, somehow, politics will be different.

Bullshit. Ladies and gentlemen, don’t believe a word of it. It’s whistling past the graveyard. It’s clapping for Tinker Bell. Obama may be the best thing since sliced bread, but this isn’t a crisis of leadership. This is not an emergency. And my amateur photography did not bring down the curtain on the Republic.

For the first time, we have a political campaign embracing hyperconnectivity. As is always the case with political campaigns, it is a means to an end. The Obama campaign has built a nationwide social network (using lovely, old-fashioned, human techniques), then activated it to compete in the primaries, dominate in the caucuses, and secure the Democratic nomination. That network is being activated again to win the general election.

Then what? Three months ago, I put this question directly to an Obama field organizer. He paused, as if he’d never given the question any thought, before answering, “I don’t know. I don’t believe anyone’s thought that far ahead.” There are now some statements from candidate Obama about what he’d like to see this network become. They are, of course, noble sentiments. They matter not at all. The mob, now mobilized, will do as it pleases. Obama can lead by example, can encourage or scold as occasion warrants, but he can not control. Not with all the King’s horses and all the King’s men.

And yes, that’s scary.

Fasten your seatbelts and prepare for a rapid descent into the Bellum omnium contra omnes, Thomas Hobbes’ “war of all against all.” A hyperconnected polity – whether composed of a hundred individuals or a hundred thousand – has resources at its disposal which exponentially amplify its capabilities. Hyperconnectivity begets hypermimesis begets hyperempowerment. After the arms race comes the war.

Conserved across nearly four thousand generations, the social fabric will warp and convulse as various polities actualize their hyperempowerment in the cultural equivalent of nuclear exchanges. Eventually (one hopes, with hypermimesis, rather quickly) we will learn to contain these most explosive forces. We will learn that even though we can push the button, we’re far better off refraining. At that point, as in the era of superpower Realpolitik, the action will shift to a few tens of thousands of ‘little’ conflicts, the hyperconnected equivalents of the endless civil wars which plagued Asia, Africa and Latin America during the Cold War.

Naturally, governments will seek to control and mediate these emerging conflicts. This will only result in the guns being trained upon them. The power redistributions of the 21st century have dealt representative democracies out. Representative democracies are a poor fit to the challenges ahead, and ‘rebooting’ them is not enough. The future looks nothing like democracy, because democracy, which sought to empower the individual, is being obsolesced by a social order which hyperempowers him.

Anthropologist Margaret Mead famously pronounced that we should “Never underestimate the ability of a small group of committed individuals to change the world.” Mead spoke truthfully, and prophetically. We are all committed, we are all passionate. We merely lacked the lever to effectively translate the force of our commitment and passion into power. That lever has arrived, in my hand and yours.

To say that we’re living in a time of accelerated change is a truism. What we forget – because it would scare the hell out of us – is exactly how much change we’ve seen. I moved to Australia 4 ½ years ago. When I got here there was no YouTube, no podcasting, no BitTorrent, no Wikipedia (in a practical sense). And no MySpace, Facebook, Bebo, or Twitter.

These are things that I, in my daily life, take for granted. But they’re absolutely brand new. I’m not quite sure how we manage to fool ourselves into believing this is all perfectly normal.

Of course, there is one group of people for whom this is perfectly normal, because they’ve never known anything else – those wacky kids. Consider: I had my very first tour of the World Wide Web at SIGGRAPH, the big computer graphics conference, in Anaheim, California, back in July, 1993. I moused around the recently-released NCSA Mosaic on a hundred-thousand-dollar graphics workstation.

I already knew what hypertext was. I had already written a Macintosh-based hypertext system, just before HyperCard made it completely irrelevant. I knew what I was looking at. And I wasn’t very impressed. Sure, it was hypertext, but there were only a handful of sites to visit.

Eventually, the penny dropped. A few months later I bought a used, huge and heavy SPARCStation, set it up in my lounge room, strung a phone cable across my flat, SLIPped into the Internet, launched NCSA Mosaic, and started surfing. Every night when I got home from work, I surfed some more. And, at the end of that very enjoyable week in mid-October 1993, I was done. I had surfed the entire Web.

If you said that today – that you’d surfed the entire web of a hundred million discrete domains and a hundred and fifty million individual blogs and – who knows? – maybe twenty billion pages – people would either believe you a liar or mad as a cut snake.

And yet, a child, born in July 1993, when I first clicked on a Web link, would just be coming up to her 15th birthday. Probably in the middle of year 10.

For that fifteen year-old, change is the only constant she’s known. All the world has changed. All of culture and human behavior have changed – in some ways we are unrecognizable. Because we are embedded in this change, we only feel the acceleration. To someone whose baseline experience, their entire lifetime, has been this continuous acceleration, there is no sensation at all.

People talk about digital immigrants and digital natives. But that’s too simple. It’s an injustice to the truth of the matter – a truth which is important for us to understand.

In the late 1990s, I’d gotten a sense of what was going on, and wrote a book to usher parents into the world of their children: The Playful World: How Technology is Transforming Our Imagination took a look at three areas – intelligence, activity and presence. For each of these domains of human experience, I selected a toy – the Furby, Lego’s MINDSTORMS, and the Sony PlayStation 2, respectively – as the starting point for an explanation of this startling shift in the inner lives of children.

Let me be clear: I am a strict Constructivist. I believe that children learn through interactions with their environment. I had come to realize that the environment for a child born at the turn of the millennium looked nothing like the world of 1962, the year I was born.

The world is intelligent. The world responds. The world allows us to extend our senses globally – as with Google Earth – or down to the nanoscopic. All of this – all of it – was already showing up in children’s toys! Not in fancy labs, but in toys. And it’s still going on today. The Nintendo Wii is a better bit of virtual reality than anything ever created by NASA.

How can a kid who plays tennis with a virtual racket, or bowls with a virtual ball, ever hope to have the same cognitive relationship to the world of things that I do?

We’re not even on the same planet.

And we forget this. Or rather, we refuse to see it. But we can’t avoid it any longer, because all this tech has turned this sub-15 generation into mutants with strange new powers.

Let’s come back to that 15 year-old, who, of course, owns a mobile phone. What does she do with it? Those of you with teenage children already know the answer: she texts. Continuously.

Mizuko Ito, a Japanese researcher, studied teenagers in Japan a few years ago, and found that these kids – from the moment they wake up in the morning, until they drop off to sleep at night – are engaged in a continuous and mostly trivial conversation with, on average, five other friends. They might be in the flat next door, or on the other side of Tokyo. Proximity doesn’t matter. What does matter is the constant connection. Ito named this phenomenon “co-presence”. At the time, it seemed a bit too science-fiction, too wacky-technophile, too Japanese.

Today, it’s the standard operating procedure for all teenagers everywhere in the developed world.

That typical 15 year-old will blow her prepay budget on texts, up to a hundred a day – which works out to about 6 every waking hour – and then, as the credit runs out and the flow of messages stops, her friends know to find her on MySpace, where she messages for free, and so the flow of co-presence continues.

In some ways, this looks like a new thing, but in reality, it isn’t. It’s an old thing – a very old thing – expressed in an entirely new way.

All of this comes down to what we really are: social animals. That means we live to communicate, and we appear to be better at communication than any other species on the planet.

What we’ve done is given those wacky kids the tools to free this communication, so that it is no longer bound in space and time. We’ve accelerated communication to the speed of light. And all of this is perfectly natural to them.

This much we know.

It’s the unintended, unexpected, unpredictable consequences of all this “hyperconnectivity” which are really putting the screws to us. This is the new stuff. The things that are coming at us from our blind spot.

Consider: 11 December 2005, Cronulla Beach, and every Anglo-Celtic White Supremacist in New South Wales has a text message in hand, forwarded from fellow traveler to fellow traveler, asking them to lend a hand in the beat-down of the Lebs.

That’s what happens when you connect everyone together.

Consider: Also in December 2005, Nature published a peer-reviewed article which stated that Wikipedia, the peer-produced encyclopedia made possible by the fact that half a billion people can connect to it and contribute to it (and, through it, to each other’s thoughts and expertise), is very nearly as accurate as that gold-standard reference work, Encyclopedia Britannica.

Twist the dials one way, you get Cronulla. Twist them another, and you get Wikipedia.

And we’ve given those wacky kids the dial.

II: Those Well-Meaning Adults

These “hyperconnected” and ever-more wacky kids get up in the morning, put on their uniforms and go to school. When they get there, they’ve got to turn off their mobiles, put away their iPods, close the chat windows, unplug themselves from the webs of co-presence which shape their social experiences, sit still and listen to teacher.

And they’ve got to do this inside of an environment – the classroom – which is so thoroughly disconnected from the rest of life as they have always known it that it must, deep in their co-present souls, resemble nothing so much as a medieval torture chamber. An isolation tank. Solitary confinement.

It’s not just that school is a pain in the ass. It’s that it looks – to them – like a completely unrealistic pain in the ass, one which is out of step with the world beyond the classroom walls. It’s as if, every morning, these kids are marched into a time machine which transports them back to 1955.

It’s always important to recognize the hidden elements in any curriculum. The modern school was created not only to produce a literate workforce, but one which understood schedules and timelines, essential elements in the industrial era. Bells and periods trained students in the implicit curriculum. They learned to be timely and orderly, while they explicitly learned their letters and numbers. These curricula – explicit and implicit – fit the needs of the industrial age, and so were highly successful.

So, kids today, stripped of their hyperconnectivity as they walk through the school house door, learn that while timeliness is important, the abilities to communicate, to collaborate, to share and to participate – across time and distance – are not. Oh, we can have practice exercises and whatnot which help to encourage those capabilities, but the hidden curriculum of our schools implicitly denies the value of this experience – the greater part of life experience for those wacky kids.

The trouble with this state of affairs is that it directly contradicts the world these kids have always lived in. In the industrial age, children saw their fathers leave home in time for the morning shift, and return home when that shift was completed. Their experience of regimented time within school perfectly agreed with life at home.

These days, those two worlds have almost nothing in common. Parents work flextime, they telecommute, work all hours of the day or night, across nations, across time zones, across disciplines. Work has changed. Home life has changed. School has not.

This is a very dangerous state of affairs, because in this subtle and invisible argument between school and life as it is really lived, life is always going to win.

What this means, in a practical sense, is that students have lost respect for the classroom, because it has no relevance to their lives. Yes, they will be polite – as they’re polite to their grandparents – but that is no substitute for a real working relationship. School will be endured, because parents and state mandate it. But it’s a waiting game.

This is not the right way to create the next generation of Australia’s leaders. This is only going to create a generation who have learned how to be patient and patronizing, and to excel in the art of ass-kissing.

Australia is not alone in this. In the United States, the No Child Left Behind program, the very epitome of Industrial Age methodology, simply subjects students to assessment after meaningless assessment. Students train for tests. There are no more exploratory moments. Learning is by rote. Asia and Europe fare no better. Everywhere, everything is exactly the same — and exactly wrong.

What, then, is to be done?

It’s not as though educators and educational administrators are entirely unaware of this increasing desynchronization between the classroom and the world beyond it. Far from it. Just like those wacky kids, they live in both of these worlds, and they sense that the classroom has become an antique, a museum piece.

But they don’t know what to do about it.

That’s not to say they aren’t casting about for solutions. They are. The plan to get computers into secondary school classrooms throughout Australia is one such attempt. But no one has thought through what these computers will be used for, once they arrive on students’ desks. The Prime Minister, during the election campaign, uttered a few lines about maths drills and language exercises.

Yuk.

Perhaps Kevin Rudd should have sat and watched his own 14 year-old son go online, play games on his Xbox, or text his mates, to get a sense of the real value of all this hyperconnected technology.

Instead, Rudd relied on the opinions of educational experts, individuals who likely got their post-graduate degrees before there was a World Wide Web.

Hence: give the schools computers, but make them so dull, so meaningless, that the students are guaranteed to recoil in horror.

I have a better idea. Perhaps a school in Queensland can link up with a school in France, so that students learning English in France and students learning French in Australia can talk with each other, in foreign tongues. There are plenty of cheap technologies, like Skype and iChat AV, which can be used for that sort of thing.

Or, how about this: students in Victoria learning about the Eureka Stockade Rebellion might focus on a particular participant, and build a fully-researched and peer-reviewed article for Wikipedia. Teachers can go in and look at the history and discussion pages associated with the article to assess their students’ progress.
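
That sort of assessment is easier than it sounds, because every Wikipedia article exposes its full revision history through the MediaWiki API. As a sketch – the API is real, but the article title and the parameter choices here are purely illustrative – a teacher could pull the last twenty edits to an article in a dozen lines of Python:

    import json
    import urllib.parse
    import urllib.request

    # Fetch recent revisions of a Wikipedia article via the MediaWiki API.
    def revision_history(title, limit=20):
        params = urllib.parse.urlencode({
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "user|timestamp|comment",
            "rvlimit": limit,
            "format": "json",
        })
        url = "https://en.wikipedia.org/w/api.php?" + params
        req = urllib.request.Request(url, headers={"User-Agent": "classroom-demo"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        page = next(iter(data["query"]["pages"].values()))
        return page.get("revisions", [])

    # Example title only; substitute whatever article the class is working on.
    for rev in revision_history("Eureka Rebellion"):
        print(rev["timestamp"], rev["user"], rev.get("comment", ""))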

This isn’t about computers, folks. It’s about what we use computers for. And it’s about an educational administration that does not recognize that the computer, at its very best, is a window that opens up to other people. It is not a robot that drills students into submission.

All of this is light-years away from any curriculum in practice today. Yes, there are experiments – a few brave teachers and administrators sticking their necks out, tall poppies trying to make their classrooms relevant to the world outside. But these are just experiments.

Teachers are already so overworked, so time-poor, and, sometimes, so hide-bound, that technology is too frequently seen as a disruption. Actually, it’s the classroom that’s the disruption. What they see as a disruption is the outside world, clamoring to be let in.

The situation is bound to get worse before it gets better. The tabloid media are full of frightening stories of those wacky kids, inviting all their Facebook mates to come by and party, or MySpace suicide pacts, or cyber-bullying on YouTube.

And I say this knowing full well that I’m one of the pushers.

Although the schools need this technology, this window opening onto the real world, it is, at the same time, a profound threat to the comfortable, tried-and-true ways of doing business. When the computer salesman knocks on the door, they hear the rising winds of a storm that threatens to blow the classroom walls away.

So, something that should be an absolute no-brainer is turning out to be a very hard sell. People – teachers, administrators, parents and politicians – are afraid. When people are afraid, psychologists tell us, they put off making important decisions. They postpone change.

Those well-meaning adults, who really only want to get Australia’s next generation ready for a world that looks nothing like what they expected, are frozen in place, like Bambi in the headlights.

This will not do. It will not do for the kids. It will not do for the nation. And it will not do for you.

III. Breaking Through

Now, truth be told, I’m preaching to the converted. The reason you’re here in this room this morning listening to me rant and rave about those wacky kids and those well-meaning adults is because you want to be part of the solution. You’re voting with your feet. You understand that it’s important we do something – and do it quickly.

But we’re the mutants. We’re the ones who are out-of-step with the educational establishments in the states and the Commonwealth. We watch, with mixed degrees of amusement and horror, as the educational machinery shudders along, even as it groans under the increasing weight of the world outside. And we start to wonder – seriously – when it will all just collapse.

No one likes to set a deadline on these sorts of things; all deadlines inevitably fail. But I’d say that if we aren’t well on our way to transforming education within the next few years, the tide of the times could simply whip past us, and leave the educational establishment in a backwater, an eddy, while the rest of culture and civilization zips away downstream.

But, even as I say that, I reckon such an outcome to be very unlikely. There’s too much pressure, coming from too many points, for education to get off that easily. It’s too important to be ignored or cast aside. Instead, the pressure will continue to rise, as the most extraordinary and unexpected things begin to happen. In fact, this is already happening.

I’d like to tell you a story about my colleague Stephen Collins, who lives and works down in Canberra. His story is a good example of how things are changing so quickly and so unexpectedly. But, before I tell you his story, I need to tell you the story of how I know Stephen Collins, because that will tell you something about just how fast things are moving right now.

Last year I signed up for a new Web service known as “Twitter”. Twitter bills itself as a “social message service” – sort of a cross between a social network (like Facebook or MySpace) and the short message service (or SMS) that we’re all completely familiar with. When I signed up to Twitter, I could elect to “follow” certain other people – that is, my friends, and colleagues, and so forth. Whenever any of these people sends a “tweet” – that is, a 140-character message – I receive it, as do all of their followers. I might receive that tweet via the Twitter website, or one of the growing number of Twitter programs, or I can even have it delivered via SMS to my mobile.
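
The mechanics underneath all this are almost embarrassingly simple. The sketch below is emphatically not Twitter’s real architecture – the handles are made up, and a service at scale does far cleverer things – but it captures the whole of the follow-and-fan-out model I’ve just described:

    from collections import defaultdict

    # Minimal follow/fan-out sketch (not Twitter's actual implementation).
    followers = defaultdict(set)  # author -> the people following them
    inboxes = defaultdict(list)   # user -> tweets they have received

    def follow(who, whom):
        followers[whom].add(who)

    def tweet(author, text):
        assert len(text) <= 140, "a tweet is at most 140 characters"
        for follower in followers[author]:  # fan out to every follower
            inboxes[follower].append((author, text))

    follow("mark", "stephen")  # hypothetical handles
    tweet("stephen", "Coffee in the Strand Arcade before Interesting South?")
    print(inboxes["mark"])     # [('stephen', 'Coffee in the Strand Arcade...')]

Copying each tweet into every follower’s inbox at send time is just one design choice – a real service might assemble timelines on demand instead – but for a sketch, fanning out on write keeps the idea visible.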

I didn’t use Twitter very much for the first several months; there weren’t that many people using it, and not that many folks to follow. So I ignored it. But, just in the last six months, a lot of people in Australia have discovered Twitter – particularly those folks who, like myself, are interested in what’s up-and-coming on the Web. Nearly all of those folks use Twitter these days, and most of them follow one another. I quickly got swept up into this madness, and am now very well “hyperconnected” with a few hundred core Twitter users in Sydney and throughout the nation.

The vast majority of tweets range from the minor to the inane. It’s like cocktail party chatter – often funny, but just as often, meaningless. But, once in a while – and more frequently, these days – there’s a point to all this incessant tweeting. For example, the Sichuan earthquake of Monday 12 May was reported by Twitterers in China a full thirty minutes before it made its way into the media. The folks who felt the temblor shared their reports as it happened. Through Twitter, I knew about the earthquake an hour before most other Australians knew anything about it. In that moment of tragedy, Twitter became a human early warning system.

Over the next 24 hours, I closely followed the tweets of Dedric Lam, who lives in Shanghai, and who acted as a clearing house for a wide range of news reports, articles and videos about the earthquake. As major news organizations struggled to get reporters into the earthquake zone, I received a more consistent and more consistently accurate stream of news, directly from the people affected by the earthquake, via Twitter.

That’s interesting, and, more importantly, it was completely unexpected. The folks who created Twitter thought they were creating a “microblogging” service – something where you’d be able to post short updates about your day. What we’ve turned it into – as we learn what it’s good for – is something completely different. Science fiction writer William Gibson once observed that the street finds its own uses for things – uses the manufacturers never intended. Twitter is a true street technology, and every day every one of its two hundred thousand core users finds new ways to put its hyperconnective capabilities to work.

Twitter is how I came to know Stephen Collins. Stephen is one of the core Twitter users in Australia, a consultant, and power user of “social media”, as we’re now starting to call all these technologies of hyperconnectivity. He’s been tweeting for a year, and has used Twitter to both extend and reinforce his commercial and personal connections. I came to know of him soon after I got sucked into the Australian “Twitteratti”, and followed him, for he’s an individual who frequently makes keen observations.

On the same Monday that the Sichuan earthquake occurred, Stephen came to Sydney for the day, to speak at Interesting South, a local lecture series. Through Twitter, we arranged an afternoon coffee in the Strand Arcade, and chatted away amiably enough, griping about how people just aren’t “getting” social media.

Then he related an interesting story.

Stephen sends his 10 year-old daughter to Canberra’s St. Clare of Assisi Primary School, where she gets “the best education I can afford to give her,” as he wryly puts it. St. Clare of Assisi Primary is a big school – the largest private primary school in the ACT, at 730 kids. Never the passive parent, Stephen has grown progressively more involved in the affairs of St. Clare of Assisi, and found himself, this January – almost inadvertently – elected to the position of Secretary to the Board of the school.

Gah, you must be thinking: what a thankless task. Sit there and take notes at all the meetings. Dull as. And so it would have been, were Stephen a less inventive sort. Instead, during his first meeting, he had a penny-drop moment: rather than just writing up all these notes and sending out a sheaf of emails, he could type all of this information into a ‘wiki’ – that is, a user-editable website, and the technological basis for Wikipedia – so that everyone on the board could have access to his notes, make additional notes, start wiki entries on their own topics, and so on.

Wikis go hand-in-hand with hyperconnectivity: once we’re all connected together in a few dozen or few hundred million ways, we need someplace to pool our common wealth of resources, information, knowledge, and experience. Wikipedia is proof positive that everyone, everywhere, is expert in something, even something terrifically obscure – and it’s proof that someone else, somewhere else, will treasure that expertise.

When the administrators saw the PBwiki that Stephen set up, they were amazed and delighted. All of the hard yards of coordinating via emails could now be handled through a collaborative process, with a common tool accessible anywhere Internet connectivity could be had. “So now,” said the Head of School, “can we bring the staff up to speed on this? And the teachers? Can we get them to start planning their courses on this? And get the parents more involved? And what about the kids – can they use this too?”

With just one simple act – and really, an act that saved him work – Stephen introduced a new way of thinking and working to Canberra’s largest private primary school. It’s early days yet, but as they come to learn to use the wiki, discovering its strengths and weaknesses, it will begin to transform the way they teach. It is opening the way to a broader and more comprehensive revolution in education. This “accidental revolution” is a clear sign that the ground is fertile. Things are breaking through all over. All it takes is one person, in the right place, at the right time, with the right idea.

Which brings me back to all of you, here in this room, this morning. We are the change agents. All of us. We don’t have to leave here today with grand plans. Far from it. All we need to do is share with one another what we’ve learned along the way: what’s worked, what hasn’t, and why. We need to connect with one another – using all the tools at our disposal (and there’s a lot of them), and we need to put the new tools of knowledge sharing to work for us, pooling our own deep reservoirs of expertise, learning from each other as effectively as we can. If each of us can add one good idea – and I reckon each of us has at least one good idea – that means there are a lot of good ideas in this room. Just one of those can change the educational environment of a school. Stephen Collins’ story is proof of that.

For the rest of the day, I’m going to sit back and listen. Hard. I’m going to listen to all of the good ideas you folks have been working up as we all confront this huge challenge. When I hear an idea that strikes me, I’ll be blogging it – on Twitter. At the end of these four events, I’ll be able to go back and read my tweetstream, and see what really interested me. Perhaps it will interest you too. All the while, 660 other folks, all around the world, will be looking in. Some of them might get a good idea, something they want to share with us. We can and must use hyperconnectivity to increase our effectiveness. We can and must use knowledge sharing to increase our intelligence. We can crack this problem.

After all, we’ve been around the block. These wacky kids, they’re just getting started. They have the tools, but lack the wisdom to use them effectively. It’s up to us to teach them how. But first, we’ve got to learn how to use them ourselves. That done, we can transform education, and transform their enormous capacity to learn. But, right now, the teachers must become students.

When I was a young man, I was obsessed by computers. I remember perfectly the first time I sat at a keyboard – at a “line printing” terminal, which had an endless sheet of paper spooling through it – to play a game of “Star Trek”. The fascination I felt at that moment has never really ended, nor the sense of wonder, or the desire to dive in and learn everything about this seemingly magical machinery. My timing was excellent; within a few years the first “microcomputers”, such as the Tandy TRS-80, came onto the market at affordable prices, and I could plumb the guts of computing with my very own machine.

This was incredibly fortuitous, because I was not a good student at University; or rather, I excelled at some classes and completely failed others. I had not yet learned the discipline to apply myself to unpleasant tasks (even today, nearly thirty years later, it presents difficulties), so my grades were a perfect reflection of my obsessions. If something interested me, I got As. Otherwise, well, my transcript speaks for itself. The University noted this as well, and politely asked me to “get lost” for a few years, until I had acquired the necessary discipline to focus on my education. That marked the end of my formal education, but that doesn’t mean I stopped learning. Far from it.

From my earliest years, I have been a sponge for information; my parents bought me the World Book Encyclopedia when I was six – twenty red-and-black leather-bound volumes, full of photographs and illustrations – and by the time I was eight, I’d read the whole thing. (I hadn’t memorized it, but I had read through it.) Once I discovered computers, I devoured anything I could find on the subject, in particular the January 1975 issue of Popular Electronics, which featured the MITS Altair 8800 – the world’s first microcomputer – on its cover. I dived in, learning everything about microcomputers: how they worked, how to program them, what they could be used for, until I had one of my own. Then I learned everything about that computer (the Tandy TRS-80), its CPU (the Zilog Z-80), experimented with programming it in BASIC and assembly language, becoming completely obsessive about it.

When I found myself tossed out of University, my obsession quickly turned into a job offer programming Intel 8080 systems (very similar to the Z-80), which led to a fifteen-year career as a software engineer, for which I was well paid, and within a field where my lack of University degrees in no way hindered my professional advancement. In the 1980s, nearly everyone working within microcomputing was an autodidact; almost none of these people had completed a university degree. I had the fortune to work with a few truly brilliant programmers in my earliest professional years, who mentored me in best programming practices. I learned from their own expertise as they transferred their wealth of experience and helped me to make it my own.

It is said that programming is more of a craft than a profession, in that it takes years of apprenticeship, working under masters of the craft, to reach proficiency. This is equally true of most professions: medicine, the law, even (or perhaps, especially) such arcana as synthetic chemistry. At its best, post-graduate education is a mentorship process which wires the obsessions of the apprentice to the wisdom of the master. The apprentice proposes, the master disposes, until the apprentice surpasses the master. The back-and-forth informs both apprentice and master; both are learning, though each learns different things.

II.

Everyone is an expert. From a toddler, expert in the precarious balance of towering wooden blocks, to a nanotechnologist, expert in the precarious juxtaposition of atom against atom, everyone has some field of study wherein they excel – however esoteric or profane it might seem to the rest of us. The hows and whys of this are essential to human nature; we’re an obsessive species, and our obsessions can form around almost any object which engages our attentions. Most of these obsessions seem completely natural, in context: a Pitjandjara child learns an enormous amount about the flora and fauna of the central Australian desert, knows where to find water and shade, can recite the dreamings which place her within the greater cosmos. In the age before agriculture, all of us grew up with similar skills, each of us entirely obsessed with the world around us, because within that obsession lay the best opportunity for survival. Those of our ancestors who were most apt with obsession (up to a point) would thrive even in the worst of times, passing those behaviors (some genetic, some cultural) down through time to ourselves. But obsession is not a vestigial behavior; the entire bedrock of civilization is built upon it: specialization, that peculiar feature of civilization, where each assumes a particular set of duties for the whole, is simply obsession by another name.

A century ago, Jean Piaget realized that small children are obsessed with the physics of the world. Piaget watched as his own children struggled, inchoate, with elaborate hypotheses of causality, volume, and difference, constantly testing their own theories of how the world works, an operation as intent as any performed in the laboratory.

Language acquisition is arguably the most marvelous of all childish obsessions; in the space of just a few years – coincident with developments in the nervous system – the child moves from sonorous babbling into rich, flexible, meaningful speech – a process which occurs whether or not explicit instruction is given to the child. In fact, the only way to keep a child from learning language is to separate them from the community of other human beings. Even the banter of adults is enough for a child to grow into language.

Somewhere between early childhood and early adulthood the thick rope of obsession unwinds to a few mere threads. Most of us are not that obsessive, most of the time, or rather, our obsessions have shifted from the material to the immaterial. Adolescent girls become obsessive people-watchers, huddling together in cliques whose hierarchies and connections are so rich and so obscure as to be worthy of any hermetic cult. This process occurs precisely at the time their highest brain functions are realized, when they become acutely aware of the social networks within which they operate. Physics pales into insignificance when weighed against the esteem (or contempt) of one’s peers. This, too, is a survival mechanism: women, as the principal caregivers, need strong social networks to ensure that their offspring are well cared for. Women who obsessively establish and maintain strong social networks deliver their children a decisive advantage in life, and so pass this behavior along to their children. Or so the thinking goes.

III.

Mentoring is an embodied relationship, and does not scale beyond individuals. The sharing of expertise, on the other hand, has grown hand in hand with the printing press, the broadcast media, and the Web. Publishing and broadcasting both act as unintentional gatekeepers on the sharing of expertise; the costs of publishing a book (or magazine, or pamphlet) and the costs of broadcast spectrum set a lower limit on which specific examples of individual expertise make the transition into the public mind. For every Julia Child or Nigella Lawson, there are a thousand cooks who produce wonders from their kitchens; for every Simon Schama or David Halberstam, there are a thousand historians (most of whom are not white English-speakers) spinning tales of antiquity. These voices were lost to us because they could not negotiate the transition into popularity. This should not be read as a flat assessment of quality, but as a critique of the function of the market. Mass markets thrive on mass tastes; the specific is sacrificed for the broad. Yet the specific is often far more significant to the individual, containing within itself the quality of salience. Salience – that which is significant to us – is driven by our obsessions; things are salient because we are obsessed by them. The “salience gap” between the expertise delivered by the marketplace and the burning thirst for knowledge of obsessed individuals has finally collapsed with the introduction of the Wiki.

At its most essential, a Wiki is simply a web page that can be edited from within a Web browser. While significant, that alone is not enough to explain why Wikis have unlocked humanity’s vast, hidden reserves of expertise. That you can edit a web page in situ matters less than what the editing is for: it took several years before it occurred to anyone that a Wiki could be used to share expertise. Once that innovation occurred, it was rapidly replicated across countless other Wikis throughout the Internet.
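To make the mechanism concrete, here is a minimal sketch of that core idea – a page anyone can rewrite from the browser – in Python, using only the standard library. This is my own illustration, not the engine behind any actual Wiki: pages live in an in-memory dictionary, and there is no history, no authentication, no markup.

```python
# A minimal "editable web page": GET shows a page with an edit form,
# POST saves the form back over the page. Illustrative only — real Wiki
# engines add persistence, revision history, markup, and access control.
from html import escape
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

pages = {"/": "Edit me."}  # page path -> current text (in memory only)

TEMPLATE = """<html><body>
<p>{text}</p>
<form method="post">
  <textarea name="text" rows="8" cols="60">{text}</textarea><br>
  <input type="submit" value="Save">
</form>
</body></html>"""

class WikiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any path names a page; unknown pages start blank, inviting an edit.
        self._render(pages.get(self.path, ""))

    def do_POST(self):
        # Saving the form rewrites the page in place — the whole trick.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        pages[self.path] = fields.get("text", [""])[0]
        self._render(pages[self.path])

    def _render(self, text):
        body = TEMPLATE.format(text=escape(text)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Visit http://localhost:8000/AnyPageName and start editing.
    HTTPServer(("localhost", 8000), WikiHandler).serve_forever()
```

That the whole mechanism fits in thirty-odd lines is the point: the hard part of a Wiki was never the software, but persuading a community to pour its expertise into it.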

Early in this process, Wikipedia launched and began its completely unexpected rise into utility. In some ways, Wikipedia has an easy job: as an encyclopedia it must provide a basic summary of facts, not a detailed exploration of a topic, and someone with a basic background in a subject can generally provide that much. Yet this dismissal overlooks the immense breadth of Wikipedia (as of this writing, nearly 2.3 million articles in its English-language version). By casting its net wide – inviting all experts, everywhere, to contribute their specific knowledge – Wikipedia has not only covered the basics, it’s also covering everything else. No other encyclopedia could hope to be as comprehensive, because no group of individuals – short of the billion internet-connected individuals who have access to Wikipedia – could be so comprehensively knowledgeable across such a wide range of subjects.

Wikipedia will ever remain a summary of human knowledge; that is its intent, and there are signs that the Wikipedians are becoming increasingly zealous in their enforcement of this goal. Summaries are significant and important (particularly for the mass of us who are only casually interested in a given topic), but summaries do not satisfy our obsessive natures. Although Wikipedia provides an outlet for expertise, it does not cross the salience gap. This left an opening for a new generation of Wikis designed to provide deep immersion in a particular obsession. (Jimmy Wales, the founder of Wikipedia, realized this, and created Wikia.com as a resource where these individuals can build highly detailed Wikis.)

While one individual may have an obsession, it takes a community of individuals, sharing their obsession, to create a successful Wiki. No one’s knowledge is complete, or completely accurate. To create a resource useful to a broader community – who may not be as deeply obsessed – this “start-up” community must pool both their expertise and their criticism. Beginnings are delicate times, and more so for a Wiki, because obsessive individuals too often tie their identity to their expertise; questioning their expertise is taken as a personal affront. If the start-up community can not get through this first crisis, the Wiki will fail.

Furthermore, it takes weeks to months to get a sufficient quantity of expertise into a Wiki. A Wiki must reach “critical mass” before it has enough “gravitational attraction” to lure other obsessive individuals to the Wiki, where it is hoped they will make their own edits and additions to it. Thus, the start-up phase isn’t merely contentious, it’s also thankless – there are few visible results for all of the hard work. If the start-up community lacks discipline in equal measure to their forbearance, the Wiki will fail.

Given these natural barriers, it’s a wonder that Wikis ever succeed. The vast majority of Wikis are stillborn, but those which do attract the attentions of the broader community of obsessive individuals cross the salience gap. In that lucky moment, the Wiki begins to grow on its own, drawing in expertise from a broad but strongly-connected social network – for individuals obsessed with something tend to have strong connections to other, similar individuals. Very quickly the knowledge within the community is immensely amplified, as knowledge and expertise pour out of individual heads and into the Wiki.

This phenomenon – which I have termed “hyperintelligence” – creates a situation where the community is smarter as a whole (and as individuals) because of their interactions with the Wiki. In short, the community will be more effective in the pursuit of its obsession because of the Wiki, and this increase in effectiveness will make them more closely bound to the Wiki. This process feeds back on itself until the idea of the community without the Wiki becomes quite literally unthinkable. The Wiki is the “common mind” of the community; for this reason it will be contentious, but, more significantly, it will be vital, an electronic representation of the power of obsession, an embodied form of the community’s depth of expertise.

What this community does with its newfound effectiveness is the open question.

During the April 2007 Education.AU tour of Australia’s capital cities with Jimmy Wales (founder of Wikipedia), I opened the afternoon panel & workshop sessions with a brief talk about peer-produced knowledge – and how it doesn’t necessarily lead to the truth.


It’s merger season in Australia. Everything must go! Just moments after the new media ownership rules received the Governor-General’s royal assent, James Packer sold off his family’s crown jewel, the NINE NETWORK – consistently Australia’s highest-rated television broadcaster since its inception fifty years ago – along with a basket of other media properties. The sale effectively doubled his already sizeable fortune (now hovering close to 8 billion Australian dollars) and gave him plenty of cash to pursue the 21st century’s real cash cow: gambling. In an era when all media is more-or-less instantaneously accessible anywhere, by anyone, the value of a media distribution empire – built on the toppling pillars of government regulation of the airwaves and a cheap stream of high-quality American television programming – is rapidly approaching zero. Yes, audiences might still tune in to watch the footy – live broadcasting being uniquely exempt from the economics of the network – but even there the number of distribution choices is growing, with cable, satellite and IPTV all demanding a slice of the audience. Television isn’t dying, but it no longer guarantees returns. Time for Packer to turn his attention to the emerging commodity of the third millennium: experience. You can’t download experience; you can only live through it. For those who find the dopamine hit of a well-placed wager the experiential sine qua non, there Packer will be, Asia’s croupier, ready to collect his winnings. Who can blame him? He (and, undoubtedly, his well-paid advisors) has read the trend lines correctly: the mainstream media is dying, slowly starved of attention.

The transformation which led to the sale of NINE NETWORK is epochal, yet almost entirely subterranean. It isn’t as though everyone suddenly switched off the telly in favor of YouTube. It looks more like death by a thousand cuts: DVDs, video games, iPods, and YouTube have all steered eyeballs away from the broadcast spectrum toward something both entirely digital and (for that reason) ultimately pervasive. Chip away at a monolith long enough and you’re left with a pile of rubble and dust.

On a somewhat more modest scale, other media moguls in Australia have begun to hedge their bets. Kerry Stokes, the owner of Channel 7, made a strategic investment in Western Australia Publishing. NEWS Corporation, the original Australian media empire, purchased a minority stake in Fairfax, the nation’s largest newspaper publisher (and is eyeing a takeover of Canadian-owned Channel TEN). To see these broadcasters buying into newspapers, four decades after broadcast news effectively delivered death-blows to newspaper publishing, highlights the sense of desperation: they’re hoping that something, somewhere in the mainstream media will remain profitable. Yet there are substantial reasons to expect that these long-shot bets will fail to pay out.

II. The Vanilla Republic

It’s election season in America. Everyone must go! The mood of the electorate in the darkening days of 2006 could best be described as surly. An undercurrent of rage and exasperation afflicts the body politic. This may result in a left-wing shift in the American political landscape, but we’re still two weeks away from knowing. Whatever the outcome, this electoral cycle signifies another epochal change: the mainstream media have lost their lead as the reporters of political news. The public at large views the mainstream media skeptically – these were, after all, the same organizations which whipped the republic into a frenzied war-fever – and, with the regret typical of a very disgruntled buyer, Americans are refusing to return to the dealership for this year’s model. In previous years, this would have left voters in the dark: it was either the mainstream media or ignorance. But, in the two years since the Presidential election, the “netroots” movement has flowered into a vital and flexible apparatus for news reportage, commentary and strategic thinking. Although the netroots movement is most often associated with left-wing politics, both sides of the political spectrum have learned to harness blogs, wikis, feeds and hyperdistribution services such as YouTube for their own political ends. There is nothing quintessentially new about this; modern political parties, emerging in Restoration-era London, used printing presses, broadsheets and daily newspapers – freely deposited in the city’s thousands of coffeehouses – as the blogs of their era. Political news moved very quickly in 17th-century England, to the endless consternation of King Charles II and his censors.

When broadcast media monopolized all forms of reportage – including political reporting – the mass mind of the 20th century slotted into a middle-of-the-road political persuasion. Neither too liberal nor too conservative, the mainstream media fostered a “Vanilla Republic,” where centrist values came to dominate political discourse. Of course, the definition of “centrist” values is itself highly contentious: who defines the center? The right wing decries the excesses of “liberal bias” in the media, while the left wing points to the “agenda of the owners,” the multi-billionaire stakeholders in these broadcast empires. This struggle for control over the definition of the center characterized political debate at the dawn of the 21st century – a debate which has now been eclipsed, or, more precisely, overrun by events.

In April 2004, Markos Moulitsas Zúniga, a US army veteran who had been raised in civil-war-torn El Salvador, founded dKosopedia, a wiki designed to be a clearing-house for all sorts of information relating to left-wing netroots activities. (The name is a nod to Wikipedia.) While the first-order effect of the network is to gather individuals together into a community, once the community has formed, it begins to explore the bounds of its collective intelligence. Political junkies are the kind of passionate amateurs who defy the neat equation of amateur with amateurish. While they are not professionals – meaning they are not in the employ of politicians or political parties – political junkies are intensely well-informed, regarding this as both a civic virtue and a moral imperative. Political junkies work not for power, but for the greater good. (That opposing parties in political debate demonize their opponents as evil is only to be expected, given this frame of mind.) The greater good has two dimensions: to those outside the community, it is represented as us vs. them; internally, it is articulated through the community’s social network: those with particular areas of expertise are recognized for their contributions, and their standing in the community rises accordingly.

This same process animates dKosopedia’s parent site, Daily Kos (dKos), a political blog where any member can freely write entries – known as “diaries” – on any subject of interest: political, cultural or (more rarely) nearly anything else. The very best of these contributors became the “front page” authors of Daily Kos, their entries presented to the entire community; but part of the responsibility of a front-page contributor is that they must constantly scan the ever-growing set of diaries, looking for the best posts among them to “bump” to front-page status. (This article will be cross-posted to my dKos diary, and we’ll see what happens to it.) Any dKos member can comment on any post, so any community member – whether regular diarist or regular reader – can add their input to the conversation. The strongly self-reinforcing behavior of participation encourages “Kossacks” (as they style themselves) to share, pool, and disseminate the wealth of information gathered by over two million readers. Daily Kos has grown nearly exponentially since its founding, and looks set to reach its highest traffic levels ever as the mid-term elections approach.

III. My Left Eyeball

Salience is the singular quality of information: how much does this matter to me? In a world of restricted media choices, salience is a best-fit affair; something simply needs to be relevant enough to garner attention. In the era of hyperdistribution, salience is a laser-like quality; when there are a million sites to read, a million videos to watch, a million songs to listen to, individuals tailor their choices according to the specifics of their passions. Just a few years ago – as the number of media choices began to grow explosively – this took considerable effort. Today, with the rise of “viral” distribution techniques, it’s a much more straightforward affair. Although most of us still rely on ad-hoc methods – polling our friends and colleagues in search of the salient – it has become so easy to find, filter, and forward media through our social networks that we have each become our own broadcasters, transmitting our own passions through the network. Where systems have been organized around this principle – for instance, YouTube, or Daily Kos – this information flow is greatly accelerated, and the consequent outcomes amplified. A Sick Puppies video posted to YouTube gets four million views in a month, and ends up on NINE NETWORK’s 60 Minutes broadcast. A Democratic senatorial primary in Connecticut becomes the focus of national interest – a referendum on the Iraq war – because millions of Kossacks focus attention on the contest.

Attention engenders salience, just as salience engenders attention. Salience satisfied reinforces relationship; to have received something of interest makes it more likely that I will receive something of interest in the future. This is the psychological engine which powers YouTube and Daily Kos, and, as this relationship deepens, it tends to have a zero-sum effect on its participants’ attention. Minutes spent watching YouTube videos are advertising dollars lost to NINE NETWORK. Time spent reading Daily Kos means eyeballs and click-throughs lost to The New York Times. Furthermore, salience drives out the non-salient. It isn’t simply that a Kossack will read less of the Times; eventually they’ll read it rarely, if at all. Salience has been satisfied, so the search is over.

While this process seems inexorable, given the trends in media, only very recently has it become a ground-truth reality. Just this week I quipped to one of my friends – equally a devotee of Daily Kos – that I wanted “an IV drip of dKos into my left eyeball.” I keep the RSS feed of Daily Kos open all the time, waiting for the steady drip of new posts. I am, to some degree, addicted. But, while I always hunger for more, I am also satisfied. When I articulated the passion I now had for Daily Kos, I also realized that I hadn’t been checking the Times as frequently as before – perhaps once a day – and that I’d completely abandoned CNN. Neither website possessed the salience needed to hold my attention.

I am certainly more technically adept than the average user of the network; my media usage patterns tend to lead broader trends in the culture. Yet there is strong evidence to demonstrate that I am hardly alone in this new era of salience. How do I know this? I recently received a link – through two blogs, Daily Kos and The Left Coaster – to a political campaign advertisement for Missouri senatorial candidate Claire McCaskill. The ad, featuring Michael J. Fox, diagnosed with an early-onset form of Parkinson’s Disease, clearly shows him suffering the worst effects of the disorder. Within a few hours after the ad went up on the McCaskill website, it had already been viewed hundreds of thousands – and probably millions – of times. People are emailing the link to the ad (conveniently provided below the video window, to spur on viral distribution) all around the country, and likely throughout the world. “All politics is local,” Fox says. “But it’s not always the case.” This, in a nutshell, describes both the political and the media landscapes of the 21st century. Nothing can be kept in a box. Everything escapes.

Twenty-five years ago, in The Third Wave, Alvin Toffler predicted the “demassification of media.” Looking at the ever-multiplying number of magazines and television channels, Toffler foresaw a time when the mass market would fragment utterly, into an atomic polity composed entirely of individuals. Writing before the Web (and before the era of the personal computer), he offered no technological explanation for how demassification would come to pass. Yet the trend lines seemed obvious.

The network has grown to cover every corner of the planet in the quarter-century since the publication of The Third Wave – over two billion mobile phones, and nearly a billion networked computers. A third of the world can be reached, and – more significantly – can reach out. Photographs of bombings in the London Underground, captured on mobile phone cameras, reach Flickr before they’re broadcast on the BBC. Islamic insurgents in Iraq videotape, encode and upload their IED attacks to filesharing networks. China fights a losing battle to restrict the free flow of information – while its citizens buy more mobile phones every year than the total number ever purchased in the United States. Give individuals a network, and – sooner rather than later – they’ll become broadcasters.

One final and crucial technological element completes the transition into the era of demassification: the release of Microsoft’s Internet Explorer version 7.0. Long delayed, this most widely-used of all web browsers finally includes support for RSS – the technology behind “feeds.” Suddenly, half a billion PC users can access the enormous wealth of individually-produced and individually-tailored news resources which have grown up over the last five years. But they can also create their own feeds, either by aggregating resources they’ve found elsewhere, or by creating new ones. The revolution that began with Gutenberg is now nearly complete: while the Web turned the network into a printing press, RSS gives us the ability to hyperdistribute publications, so that anyone, anywhere, can reach everyone, everywhere.
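For the technically curious, a feed reader is almost embarrassingly simple, which is part of why hyperdistribution has spread so fast. The sketch below – an illustration only, with a placeholder address rather than any real feed – fetches an RSS 2.0 document and prints each item’s title and link, which is, at bottom, all any aggregator does before layering on polish.

```python
# Fetch an RSS 2.0 feed and list its items — the kernel of every
# feed reader. The URL is a placeholder; substitute any real feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.rss"  # hypothetical feed address

def read_feed(url):
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    # RSS 2.0 layout: <rss><channel><item><title/><link/></item></channel></rss>
    for item in tree.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        print(f"{title}\n  {link}")

if __name__ == "__main__":
    read_feed(FEED_URL)
```

Poll that loop on a timer and you have the “steady drip” of new posts described above; everything else – deduplication, subscriptions, the browser chrome IE7 wraps around it – is refinement.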

Now all is dissolution. The mainstream media will remain potent for some time, centers for the creation of content, but they must now face the rise of the amateurs: a battle of hundreds versus billions. To compete, the media must atomize, delivering discrete chunks of content through every available feed. They will be forced to move from distribution to seduction: distribution has been democratized, so only the seduction of salience will carry their messages around the network. But the amateurs are already masters of this game, having grown up in an environment where salience forms the only selection pressure. This is the time of the amateur, and this is their chosen battlefield. The outcome is inevitable. Deck chairs, meet Titanic.