
Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful. But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’. Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century. Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web. Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real. I was one of them. From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu. This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Xanadu was never released, but we got the Web. It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and Xanadu’s links were two-way affairs; you could follow the destination of a link back to its source. But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died. The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released. ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

If Xanadu had been released in 1987, it would have been next to useless without an Internet to support it, and the Internet was still very tiny in the 1980s. When I started using the Internet, in 1988, the main trunk line across the United States was just about to be upgraded from 9.6 kilobits to 56 kilobits. That’s the line for all of the traffic heading from one coast to the other. I suspect that today this cross-country bandwidth, in aggregate, would be measured in terabits – trillions of bits per second, a million-fold increase. And it keeps on growing, without any end in sight.

Because of my experience with Xanadu, when I first played with NCSA Mosaic – the first publicly available Web browser – I immediately knew what I held in my mousing hand. And I wasn’t impressed. In July 1993 very little content existed for the Web – just a handful of sites, mostly academic. Given that the Web was born to serve the global high-energy-physics community headquartered at CERN and Fermilab, this made sense. I walked away from the computer that July afternoon wanting more. Hypertext systems I’d seen before. What I lusted after was a global system with a reach like Xanadu.

Three months later, when I’d acquired a SUN workstation for a programming project, I immediately downloaded and installed NCSA Mosaic, to find that the Web elves had been busy. Instead of a handful of sites, there were now hundreds. There was a master list of known sites, maintained at NCSA, and over the course of a week in October, I methodically visited every site in the list. By Friday evening I was finished. I had surfed the entire Web. It was even possible to keep up with the new sites as they were added to the bottom of the list, through the end of 1993. Then things began to explode.

From October on I became a Web evangelist. My conversion was complete, and my joy in life was to share my own experience with my friends, using my own technical skills to get them set up with Internet access and their own copies of NCSA Mosaic. That made converts of them; they then began to work on their friends, and so by degrees of association, the word of the Web spread.

In mid-January 1994, I dragged that rather unwieldy SUN workstation across town to show it off at a house party / performance event known as ‘Anon Salon’, which featured an interesting cross-section of San Francisco’s arts and technology communities. As someone familiar walked in the door at the Salon, I walked up to them and took them over to my computer. “What’s something you’re interested in?” I’d ask. They’d reply with something like “Gardening” or “Astronomy” or “Watersports of Mesoamerica” and I’d go to the newly-created category index of the Web, known as Yahoo!, and still running out of a small lab on the Stanford University campus, type in their interest, and up would come at least a few hits. I’d click on one, watch the page load, and let them read. “Wow!” they’d say. “This is great!”

I never mentioned the Web or hypertext or the Internet as I gave these little demos. All I did was hook people by their own interests. This, in January 1994 in San Francisco, is what would happen throughout the world in January 1995 and January 1996, and is still happening today, as two billion Internet-connected individuals sit down before their computers and ask themselves, “What am I passionate about?”

This is the essential starting point for any discussion of what the Web is, what it is becoming, and how it should be presented. The individual, with their needs, their passions, their opinions, their desires and their goals, is always paramount. We tend to forget this, or overlook it, or just plain ignore it. We design from a point of view which is about what we have to say, what we want to present, what we expect to communicate. It’s not that we should ignore these considerations, but they are always secondary. The Web is a ground for being. Individuals do not present themselves as receptacles to be filled. They are souls looking to be fulfilled. This is as true for children as for adults – perhaps more so – and for this reason the educational Web has to be about space and place for being, not merely the presentation of a good-looking set of data.

How we get there, how we create the space for being, is what we have collectively learned in the first seventeen years of the Web. I’ll now break some of these lessons down individually.

I: Sharing

Every morning when I sit down to work at my computer, I’m greeted with a flurry of correspondence and communication. I often start off with the emails that have come in overnight from America and Europe, the various mailing lists which spit out their contents at 3 AM, late night missives from insomniac friends, that sort of thing. As I move through them, I sort them: this one needs attention and a reply, this one can get trashed, and this one – for one reason or another – should be shared. The sharing instinct is innate and immediate. We know, upon hearing a joke, or seeing an image, or reading an article, when someone else will be interested in it. We’ve always known this; it’s part of being human, and for as long as we’ve been able to talk – both as children and as a species – we’ve babbled and shared with one another. It’s a basic quality of humanity.

Who we share with is driven by the people we know, the hundred-and-fifty or so souls who make up our ‘Dunbar Number’, the close crowd of individuals we connect to by blood or by friendship, or as co-workers, or neighbors, or co-religionists, or fellow enthusiasts in pursuit of sport or hobby. Everyone carries that hundred and fifty around inside of them. Most of the time we’re unaware of it, until that moment when we spy something, and immediately know who we want to share it with. It’s automatic, requires no thought. We just do it.

Once things began to move online, and we could use the ‘Forward’ button on our email clients, we started to see an acceleration and broadening of this sharing. Everyone has a friend or two who forwards along every bad joke they come across, or every cute photo of a kitten. We’ve all grown used to this, very tolerant of the high level of randomness and noise, because the flip side of that is a new and incredibly rapid distribution medium for the things which matter to us. It’s been truly said that ‘If news is important, it will find me,’ because once some bit of information enters our densely hyperconnected networks, it gets passed hither-and-yon until it arrives in front of the people who most care about it.

That’s easy enough to do with emails, but how does it work with creations that live on the Web, or within similarly constrained environments? We’ve seen the ‘share’ button show up on a lot of websites, but that’s not the entire matter. You have to do more than request sharing. You have to think through the entire goal of sharing, from the user’s perspective. Are they sharing this because it’s interesting? Are they sharing this because they want company? Are they sharing this because it’s a competition or a contest or collaborative? Or are they only sharing this because you’ve asked them to?

Here we come back – as we will, several more times – to the basic position of the user’s experience as central to the design of any Web project. What is it about the design of your work that excites them to share it with others? Have you made sharing a necessary component – as it might be in a multi-player game, or a collaborative and crowdsourced knowledge project – or is it something that is nice but not essential? In other words, is there space only for one, or is there room to spread the word? Why would anyone want to share your work? You need to be able to answer this: definitively, immediately, and conclusively, because the answer to that question leads to the next question. How will your work be shared?

Your works do not exist in isolation. They are part of a continuum of other works. Where does your work fit into that continuum? How do the instructor and student approach that work? Is it a top-down mandate? Or is it something that filters up from below as word-of-mouth spreads? How does that word-of-mouth spread?

Now you have to step back and think about the users of your work, and how they’re connected. Is it simply via email – do all the students have email addresses? Do they know the email addresses of their friends? Or do you want your work shared via SMS? A QRCode, perhaps? Or Facebook or Twitter or, well, who knows? And how do you get a class of year 3 students, who probably don’t have access to any of these tools, sharing your work?

You do want them to share, right?

This idea of sharing is foundational to everything we do on the Web today. It becomes painfully obvious when it’s been overlooked. For example, the iPad version of The Australian had all of the articles of the print version, but you couldn’t share an article with a friend. There was simply no way to do that. (I don’t know if this has changed recently.) That made the iPad version of The Australian significantly less functional than its website version – because there I could at least paste a URL into an email.

The more something is shared, the more valuable it becomes. The more students use your work, the more indispensable you become to the curriculum, and the more likely your services will be needed, year after year, to improve and extend your present efforts. Sharing isn’t just good design, it’s good business.

II: Connecting

Within the space for being created by the Web, there is room for a crowd. Sometimes these crowds can be vast and anonymous – Wikipedia is a fine example of this. Everyone’s there, but no one is wholly aware of anyone else’s presence. You might see an edit to a page, or a new post on the discussion for a particular topic, but that’s as close as people come to one another. Most of the connecting for the Wikipedians – the folks who behind-the-scenes make Wikipedia work – is performed by that old reliable friend, email.

There are other websites which make connecting the explicit central point of their purpose. These are the social networks: Facebook, MySpace, LinkedIn, and so on. In essence they take the Dunbar Number written into each of our minds and make it explicit, digital and a medium for communication. But it doesn’t end there; one can add countless other contacts from all corners of life, until the ‘social graph’ – that set of connections – becomes so broad it is essentially meaningless. Every additional contact makes the others less meaningful, if only because there’s only so much of you to go around.

That’s one type of connecting. There is another type, as typified by Twitter, in which connections are weaker – generally falling outside the Dunbar Number – but have a curious resilience that presents unexpected strengths. Where you can poll your friends on Facebook, on Twitter you can poll a planet. How do I solve this problem? Where should I eat dinner tonight? What’s going on over there? These loose but far-flung connections provide a kind of ‘hive mind’, which is less precise, and knows less about you, but knows a lot more about everything else.

These are not mutually exclusive principles. It is not Facebook-versus-Twitter; it is not tight connections versus loose connections. It’s a bit of both. Where does your work benefit from a tight collective of connected individuals? Is it some sort of group problem-solving? A creative activity that really comes into its own when a whole band of people play together? Or simply something which benefits from having a ‘lifeline’ to your comrades-in-arms? When a task constantly makes you think of your friends, that’s the sort of task that benefits from close connectivity.

On the other hand, when you’re collaborating on a big task – building up a model or a database or an encyclopedia or a catalog or playing a massive, rich, detailed and unpredictable game, or just trying to get a sense of what is going on ‘out there’, that’s the kind of task which benefits from loose connectivity. Not every project will need both kinds of connecting, but almost every one will benefit from one or the other. We are much smarter together than individually, much wiser, much more sensible, and less likely to be distracted, distraught or depressed. (We are also more likely to reinforce each others’ prejudices and preconceptions, but that’s another matter of longstanding which technology can not help but amplify.) Life is meaningful because we, together, give it meaning. Life is bearable because we, together, bear the load for one another. Human life is human connection.

The Web today is all about connecting. That’s its single most important feature, the one which is serving as an organizing principle for nearly all activity on it. So how do your projects allow your users to connect? Does your work leave them alone, helpless, friendless, and lonely? Does it crowd them together into too-close quarters, so that everyone feels a bit claustrophobic? Or does it allow them to reach out and forge the bonds that will carry them through?

III: Contributing, Regulating, Iterating

In January of 2002, when I had my first demo of Wikipedia, the site had barely 14,000 articles – many copied from the 1911 out-of-copyright edition of Encyclopedia Britannica. That’s enough content for a child’s encyclopedia, perhaps even for a primary school educator, but not really enough to be useful for adults, who might be interested in almost anything under the Sun. It took the dedicated efforts of thousands of contributors for several years to get Wikipedia to the size of Britannica (250,000 articles), an effort which continues today.

Explicit to the design of Wikipedia is the idea that individuals should contribute. There is an ‘edit’ button at the top of nearly every page, and making changes to Wikipedia is both quick and easy. (This leaves the door open to a certain amount of childish vandalism, but that is easily reversed or corrected precisely because it is so easy to edit anything within the site.) By now everyone knows that Wikipedia is the collaboratively created encyclopedia, representing the best of all of what its contributors have to offer. For the next hundred years academics and social scientists will debate the validity of crowdsourced knowledge creation, but what no one can deny is that Wikipedia has become an essential touchstone, our common cultural workbook. This is less because of Wikipedia-as-a-resource than it is because we all share a sense of pride-in-ownership of Wikipedia. Probably most of you have made some small change to Wikipedia; a few of you may have authored entire articles. Every time any of us adds our own voice to Wikipedia, we become part of it, and it becomes part of us. This is a powerful logic, an attraction which transcends the rational. People cling to Wikipedia – right or wrong – because it is their own.

It’s difficult to imagine a time will come when Wikipedia will be complete. If nothing else, events continue to occur, history is made, and all of this must be recorded somewhere in Wikipedia. Yet Wikipedia, in its English-language edition, is growing more slowly in 2010 than in 2005. With nearly 3.5 million articles in English, it’s reasonably comprehensive, at least by its own lights. Certain material is considered inappropriate for Wikipedia – homespun scientific theories, or the biographies of less-than-remarkable individuals – and this has placed limits on its growth. It’s possible that within a few years we will regard Wikipedia as essentially complete – which is, when you reflect upon it, an utterly awesome thought. It will mean that we have captured the better part of human knowledge in a form accessible to all. That we can all carry the learned experience of the species around in our pockets.

Wikipedia points to something else, quite as important and nearly as profound: the Web is not ‘complete’. It is a work-in-progress. Google understands this and releases interminable beta versions of every product. More than this, it means that nothing needs to offer all the answers. I would suggest that nothing should offer all the answers. Leaving that space for the users to add what they know – or are willing to learn – to the overall mix creates a much more powerful relationship with the user, and – counterintuitively – with less work from you. It is up to you to provide the framework for individuals to contribute within, but it is not up to you to populate that framework with every possibility. There’s a ‘sweet spot’, somewhere between nothing and too much, which shows users the value of contributions but allows them enough space to make their own.

User contributions tend to become examples in their own right, showing other users how it’s done. This creates a ‘virtuous cycle’ of contributions leading to contributions leading to still more contributions – which can produce the explosive creativity of a Wikipedia or TripAdvisor or an eBay or a RateMyProfessors.com.

In each of these websites it needs to be noted that there is a possibility for ‘bad data’ to work its way into the system. The biggest problem Wikipedia faces is not vandalism but the more pernicious types of contributions which look factual but are wholly made up. TripAdvisor is facing a class-action lawsuit from hoteliers who have been damaged by anonymous negative ratings of their establishments. RateMyProfessors.com is the holy terror of the academy in the United States. Each of these websites has had to design systems which allow users to self-regulate peer contributions. In some cases – such as on a blog – it’s no more than a ‘report this post’ button, which flags it for later moderation. Wikipedia promulgated a directive that strongly encouraged contributors to provide a footnote linking to supporting material. TripAdvisor gives anonymous reviewers a lower ranking. eBay forces both buyers and sellers to rate each transaction, building a database of interactions which can be used to guide others when they come to trade. Each of these is a social solution to a social problem.
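The ‘report this post’ pattern described above can be sketched in a few lines of code. This is a minimal illustration, not the design of any of the sites named here: the names (Post, ModerationQueue, FLAG_THRESHOLD) and the threshold-then-review policy are all assumptions made for the sake of the example.

```python
# A minimal sketch of community self-regulation: any user can flag a
# contribution, and once enough distinct users have flagged it, the item
# is hidden and queued for a human moderator -- hidden, not destroyed,
# because the final call is social, not automatic.

from dataclasses import dataclass, field

FLAG_THRESHOLD = 3  # distinct flags needed before a moderator must look

@dataclass
class Post:
    author: str
    body: str
    flags: set = field(default_factory=set)  # users who reported it
    visible: bool = True

class ModerationQueue:
    def __init__(self):
        self.pending = []  # posts awaiting a human decision

    def report(self, post: Post, reporter: str) -> None:
        # one flag per user: repeat reports from one person don't pile up
        post.flags.add(reporter)
        if len(post.flags) >= FLAG_THRESHOLD and post not in self.pending:
            post.visible = False       # hide, don't delete
            self.pending.append(post)  # a human makes the final call

    def resolve(self, post: Post, keep: bool) -> None:
        # the moderator's verdict: restore the post, or keep it hidden
        self.pending.remove(post)
        post.visible = keep
        post.flags.clear()
```

The social design lives in the small decisions: flags are deduplicated per user, and a flagged post goes to a queue rather than being deleted, which is what keeps a handful of malicious reporters from silencing legitimate contributions.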

Web2.0 is not a technology. It is a suite of social techniques, and each technique must be combined with a social strategy for deployment, considering how the user will behave: neither wholly good nor entirely evil. It is possible to design systems and interfaces which engage the better angels of our nature, possible to develop wholly open systems which self-regulate and require little moderator intervention. Yet it is not easy to do so, because it is not easy to know in advance how any social technique can be abused by those who employ it.

This means that a Web2.0 concept that should guide you in your design work is iteration. Nothing is ever complete, nor ever perfect. The perfect is the enemy of the good, so if you wait for perfection, you will never release. Instead, watch your users, see if they struggle to work within the place you have created for them, or whether they immediately grasp hold and begin to work. In their more uncharitable moments, do they abuse the freedoms you have given them? If so, how can you redesign your work, and ‘nudge’ them into better behavior? It may be as simple as a different set of default behaviors, or as complex as a set of rules governing a social ecosystem. And although Moses came down from Mount Sinai with all ten commandments, you cannot and should not expect to get it right on a first pass. Instead, release, observe, adapt, and re-release. All releases are soft releases, everything is provisional, and nothing is quite perfect. That’s as it should be.

IV: Opening

Two of the biggest Web2.0 services are Facebook and Twitter. Although they seem to be similar, they couldn’t be more different. Facebook is ‘greedy’, hoarding all of the data provided by its users, all of their photographs and conversations, keeping them entirely for itself. If you want to have access to that data, you need to work with Facebook’s tools, and you need to build an application that works within Facebook – literally within the web page. Facebook has control over everything you do, and can arbitrarily choose to limit what you do, even shut your application down if they don’t like it, or perceive it as somehow competitive with Facebook. Facebook is entirely in control, and Facebook holds onto all of the data your application needs to use.

Twitter has taken an entirely different approach. From the very beginning, anyone could get access to the Twitter feed – whether for a single individual (if their stream of Tweets had been made public), or for all of Twitter’s users. Anyone could do anything they wanted with these Tweets – though Twitter places restrictions on commercial re-use of their data. Twitter provided very clear (and remarkably straightforward) instruction on how to access their data, and threw the gates open wide.

Although Facebook has half a billion users, Twitter is actually more broadly used, in more situations, because it has been incredibly easy for people to adapt Twitter to their tasks. People have developed computer programs that send Tweets when the program is about to crash, created vast art projects which allow the public to participate from anywhere around the world, or even a little belt worn by a pregnant woman which sends out a Tweet every time the baby kicks! It’s this flexibility which has made Twitter a sort of messaging ‘glue’ on the Internet of 2010, and that’s something Facebook just can’t do, because it’s too closed in upon itself. Twitter has become a building block: when you write a program which needs to send a message, you use Twitter. Facebook isn’t a building block. It’s a monolith.
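Twitter’s role as messaging ‘glue’ came down to how little code it took to send a message: in the v1-era API, publishing a tweet was a single authenticated HTTP POST. The sketch below builds such a request in the style of that early API; the endpoint shown is the historical one (long since retired, and the authentication scheme has changed since), so treat the details as illustrative rather than as a working integration.

```python
# A sketch of Twitter-as-building-block, circa 2010: sending a message
# is just one HTTP POST. The endpoint and HTTP Basic auth reflect the
# retired v1 API and are shown for illustration only.

import base64
import urllib.parse
import urllib.request

API_URL = "https://api.twitter.com/1/statuses/update.json"  # historical v1 endpoint

def build_update_request(status: str, user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) the POST that would publish one tweet."""
    # tweets were capped at 140 characters, so truncate defensively
    data = urllib.parse.urlencode({"status": status[:140]}).encode("ascii")
    req = urllib.request.Request(API_URL, data=data)
    # v1 accepted plain HTTP Basic auth -- one reason gluing Twitter into
    # crash reporters, art projects and kick-counting belts was so easy
    token = base64.b64encode(f"{user}:{password}".encode()).decode("ascii")
    req.add_header("Authorization", f"Basic {token}")
    return req

# To actually send it (against a live, compatible endpoint):
# urllib.request.urlopen(build_update_request("the baby kicked!", "user", "pass"))
```

That a crash reporter or a belt sensor could speak the same three-line protocol as a desktop client is the whole argument for openness: the barrier to becoming a Twitter ‘application’ was essentially zero.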

How do you build for openness? Consider: another position the user might occupy is someone trying to use your work as a building block within their own project. Have you created space for your work to be re-used, to be incorporated, to be pieced apart and put back together again? Or is it opaque, seamless, and closed? What about the data you collect, data the user has generated? Where does that live? Can it be exported and put to work in another application, or on another website? Are you a brick or are you a brick wall?

When you think about your design – both technically and from the user’s experience – you must consider how open you want to be, and weigh the price of openness (extra work, unpredictability) against the price of being closed (less useful). The highest praise you can receive for your work is when someone wants to use it in their own. For this to happen, you have to leave the door open for them. If you publish the APIs to access the data you collect; if you build your work modularly, with clearly defined interfaces; if you use standards such as RSS and REST where appropriate, you will create something that others can re-use.
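One concrete way to ‘leave the door open’ is the RSS route mentioned above: publish the data your users generate as a standard feed that any other site or program can consume. Below is a minimal sketch; the item structure (dicts with ‘title’ and ‘link’ keys) is a hypothetical convention for this example, while the element names come from the RSS 2.0 format itself.

```python
# A minimal sketch of openness via standards: render user-generated
# items as an RSS 2.0 feed, so anyone can re-use them without asking.

from xml.etree import ElementTree as ET

def build_rss(title: str, link: str, items: list[dict]) -> str:
    """Render items (each a dict with 'title' and 'link' keys) as RSS 2.0 XML."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for entry in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = entry["title"]
        ET.SubElement(item, "link").text = entry["link"]
    return ET.tostring(rss, encoding="unicode")
```

Serve the resulting string with a Content-Type of application/rss+xml and every feed reader, aggregator and mashup on the Web becomes a potential consumer of your work – no coordination with you required.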

One of my favorite lines comes from science fiction author William Gibson, who wrote, ‘The street finds its own uses for things – uses the manufacturer never imagined.’ You can’t know how valuable your work will be to someone else, what they’ll see in it that you never could, and how they’ll use it to solve a problem.

All of these techniques – sharing, connecting, contributing, regulating, iterating and opening – share a common thread: they regard the user’s experience as paramount and design as something that serves the user. These are not precisely the same Web2.0 domains others might identify. That’s because Web2.0 has become a very ill-defined term. It can mean whatever we want it to mean. But it always comes back to experience, something that recognizes the importance and agency of the user, and makes that the center of the work.

It took us the better part of a decade to get to Web2.0; although pieces started showing up in the late 1990s, it wasn’t until the early 21st century that we really felt confident with the Web as an experience, and could use that experience to guide us into designs that left room for us to explore, to play and to learn from one another. In this decade we need to bring everything we’ve learned to everything we create, to avoid the blind traps and dead ends of a design which ignores the vital reality of the people who work with what we create. We need to make room for them. If we don’t, they will make other rooms, where they can be themselves, where they can share what they’ve found, connect with the ones they care about, collaborate and contribute and create.

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development. DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time. Or, well, perhaps I overstate the matter. But it could be a big deal.

The respondents to the RFP were organizations who already had working relationships with DEECD, and therefore were both familiar with DEECD processes and had been vetted in their earlier relationships. This meant that the entire RFP-to-submission window could be telescoped down to just a bit less than three weeks. The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process. I said I’d be happy to do so, and asked how many proposals I’d have to review. “I doubt it will be more than thirty or forty,” he replied. Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions. But the RFP didn’t result in thirty or forty proposals. The total came to almost ninety. All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting. Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs. If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit. Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out. That took nearly 24 hours by itself – and cost an ungodly sum. I was left with a huge, heavy box of paper which I could barely lug back to my flat. For the next 36 hours, this box would be my ball and chain. I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad. Then I looked back at the box. Then back at the iPad. Then back at the box. I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop. This, for me, would be a bit of a test. For the last decade I’d never traveled anywhere without my laptop. Could I manage a business trip with just my iPad? I looked back at the iPad. Then at the box. You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox. Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it. Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own. I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service. My rationale was that I imagined this iPad would be a ‘cloud-centric’ device. The ‘cloud’ is a term that’s come into use quite recently. It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer. Gmail is a good example of software that’s ‘in the cloud’. Facebook is another. Twitter, another. Much of what we do with our computers – iPad included – involves software accessed over the Internet. Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud. Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work. Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible. In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side. I pored over the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one. My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, I had another set of two large boxes waiting for me. Here again were the proposals, carefully ordered and placed into several large, ringed binders. I’d be expected to tote these to the evaluation meeting. Fortunately, that was only a few floors above my hotel room. That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room. I put those boxes down – and never looked at them again. As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off. She understood completely. I flew home lighter than I would have been had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office. Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth. We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper. Computers as we’ve known them simply can’t replace a piece of paper. For a whole host of reasons, it just never worked. To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper. We have it now. After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written. You will soon have access to every single document you might ever need, right here, right now. We’re not 100% there yet – but that’s not the fault of the device. We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment. At that point, your iPad becomes the page which contains all other pages within it. You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text. The world is richer than that. iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever. It is already the radio (Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world. And it is every one of a hundred-million-plus websites and maybe a trillion web pages. All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: iPad at Work

Let’s project ourselves into the future just a little bit – say around ten years. It’s 2020, and we’ve had iPads for a whole decade. The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law. This law observes that computers double in power roughly every twenty-four months. Ten years is five doublings, or 32 times. That rule extends to the display as well as the computer. The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye. The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper. The device itself will be thinner and lighter than the current model. Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear. You’ll still get at least ten hours of use – that’s considered essential to your experience as a user. And you’ll still be connected to the mobile network.
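The back-of-the-envelope arithmetic behind that ‘32 times’ figure is simple enough to sketch (this is just the projection described above, not a law of nature – Moore’s Law is an observed trend, and the twenty-four-month doubling period is an assumption):

```python
# Moore's Law projection: one doubling of capability every
# 24 months means ten years yields five doublings, or 32x.
years = 10
months_per_doubling = 24

doublings = (years * 12) // months_per_doubling
growth = 2 ** doublings

print(f"{doublings} doublings over {years} years -> {growth}x the power")
```

The same multiplier is why the essay later projects thirty-two times the storage and roughly thirty-two times the network capacity: every component rides the same curve.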

The mobile network of 2020 will look quite different from the mobile network of 2010. Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution. Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits. That’s as good as a wired connection – as fast as anything promised by the National Broadband Network! In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second. That may sound like a lot, but again, it represents roughly 32 times the capacity of the mobile broadband networks of today. Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage, not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds, but if it’s there, someone will find a use for the two terabytes or more included in our iPad. (Perhaps a full copy of Wikipedia? Or all of the books published before 1915?) All of this will still cost just $700. If you want to spend less – and have a correspondingly less powerful device – you’ll have that option. I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.

What sorts of things will the iPad 10 be capable of? How do we put all of that power to work? First off, iPad will be able to see and hear in meaningful ways. Voice recognition and computer vision are two technologies which are on the threshold of becoming ‘twenty year overnight successes’. We can already speak to our computers, and, most of the time, they can understand us. With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it. Your iPad will hear you, understand your voice, and follow your commands. It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time. They may still be employed in very specialized tasks. For almost everything else, we will be using our iPads. They’ll rarely leave our sides. They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task. When everything is so well connected, you don’t need to have personal information stored in a specific iPad. You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible. Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly. People may find voice recognition more of an annoyance than an affordance. The idea of your iPad watching you might seem creepy to some people. But consider this: I have a good friend who has two elderly parents: his dad is in his early 80s, his mom is in her mid-70s. He lives in Boston while they live in Northern California. But he needs to keep in touch, he needs to have a look in. Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad, and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime, and check on them, and they can ring him, anytime. It’s a bit ‘Jetsons’, when you think about it. And that’s just what will happen next year. By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long. It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia. No student, however poor, will be without their own iPad – the Government of the day will see to that. These students of 2020 are at least as well connected as you are, as their parents are, as anyone is. To them, iPads are not new things; they’ve always been around. They grew up in a world where touch is the default interface. A computer mouse, for them, seems as archaic as a manual typewriter does to us. They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband. They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations? This is not the universe of ‘chalk and talk’. This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network. This is a world where education can be provided anywhere, on demand, as called for. This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two. Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away. Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons. Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history. iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator. We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding. The more we virtualize the educational process, the more important and singular our embodied interactions become. Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up. Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students. That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon. We learn best when we learn from others. We humans are experts in mimesis, in learning by imitation. That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings. We are born to work together, we are designed to learn from one another. iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense. It should be an amplifier, not a replacement, something that lets students go further, faster than before. But they should not go alone.

The constant danger of technology is that it can interrupt the human moment. We can be too busy checking our messages to see the real people right before our eyes. This is the dilemma that will face us in the age of the iPad. Governments will see them as cost-saving devices, something that could substitute for the human touch. If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III: The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband. The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together. But what will they be working on? Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc. This is certainly not the intent of the project’s creators. Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy. Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities. That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity. We know that all year nine students in Australia will be covering a particular suite of topics. This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials, whether they be podcasts of lectures, educational chatrooms, lesson plans, and on and on and on. As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it. The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia. All the article headings are there, all the taxonomy, all the cross references, but none of the content. The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.
Well, maybe.

I say all of this as if it were a sure thing. But it isn’t. Everyone secretly suspects the National Curriculum will ruin education. I ask that we see things differently. The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value. More than that, we need to think of every student in Australia as a contributor of value. That’s the vital gap that must be crossed. Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work. Many of them are too modest or too scared to trumpet their own hard yards – but it is something that educators and students across the nation can benefit from. Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this. We need to do this. Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers. This is gold that we’re letting slip through our fingers. We live in an age where we only lose something when we neglect to capture it. We can let ourselves off easy here, because we haven’t had a framework to capture and share this pedagogy. But now we have the means to capture, a platform for sharing – the Ultranet, and a tool which brings access to everyone – the iPad. We’ve never had these stars aligned in such a way before. Only just now – in 2010 – is it possible to dream such big dreams. It won’t even cost much money. Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities. We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value. It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine. Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra. These kinds of things have been possible before, but the National Curriculum gives us the reason to do it. iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as this singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather, a set of skills students begin to acquire as soon as they land in pre-kindy. The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse. In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation. Education is one way that this happens. People also need to have increasing flexibility in their career choices, to suit a much more fluid labor market. This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have. If we can share our learning, we can close this gap. We can bring the best of what we teach to everyone who has the need to know.

And there we are. But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it. The iPad is an excellent toy. Please play with it. I don’t mean use it. I mean explore it. Punch all the buttons. Do things you shouldn’t do. Press the big red button that says, “Don’t press me!” Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration. The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning. That joy is foundational to us. If we didn’t love learning, we wouldn’t be running things around here. We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own. These are my favorites, but I own many others, and enjoy all of them. There are literally tens of thousands to choose from, some of them educational, some, just for fun. That’s the point: all work and no play makes iPad a dull toy.

So please, go and play. As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything. Or can, if we can change ourselves.

Back in 1978 – when I was just fifteen – I begged my parents to let me enroll in a course at the local community college (the equivalent of TAFE) so that I could take ‘Data Processing with RPG II’. I wrote my first computer program in RPG II. I typed that program onto a series of punched cards, one statement per punched card. Once I’d completed typing the deck of cards which comprised my program, I dropped them off at the college’s data processing center, where they went into the batch queue. You returned 24 hours later to collect your deck of punched cards, along with a long string of ‘green-bar’ paper, which printed the results (or errors) of your program. If you’d made a mistake on one of the cards – a spelling error, or a syntactical no-no – you’d be forced to repeat the process, as needed, until you got it right.

Woohoo. Sign me up.

From around 1980 – when I went off to MIT to study computer science – computers have been my constant companions. I’ve owned cheap ones (Commodore’s VIC-20), expensive ones (one of the first Macintosh IIs to roll off the assembly line), tiny ones (iPhone), and big ones (SparcStation 3). I have never owned a computer that I have not written code for. In my mind, the computer and the act of programming are inseparable.

Programming languages are something one acquires, like computers; but you don’t put those languages in the bin – mostly. In preparation for this talk, I made up a list of all the programming languages I’ve learned over the years, beginning with RPG II – which I’ve since forgotten. BASIC came next, and I thought it a wonderful, useful, incredible language, my true starting point.

I spent many years programming in assembly language on a variety of systems – CP/M, MS-DOS, embedded microcontrollers. I bought a cheap C compiler in 1982, a copy of Kernighan & Ritchie, learned pointer arithmetic, and crashed my computer repeatedly in the process. Now that was fun.

I did take up C++ when it was still new, when Stroustrup was still implementing features of the language. (Oh, wait, he’s still doing that, isn’t he?) Buried myself in class designs and object hierarchies and delegation models. I can probably still program in C++. If someone were to threaten me with a taser.

In the 1990s along came the Web and LINUX, the open computing platform. Suddenly a language was more useful for its ability to communicate with other entities than for its raw processing power.

I sat down at the 3rd International World Wide Web conference with a few folks from SUN Microsystems, who were touting this new, portable programming language they’d invented, which they called ‘Oak’. I wonder whatever became of that?

Each new language is supposed to conquer the world. Each new language is meant to subdue all before it. And I have to admit that I had my share of fun with PERL – the bastard child of BASIC and C – and, later PHP. I’ve written a lot of JavaScript, because that’s the programming language of choice that brings VRML to life. Oh, and that’s right: along the way I invented a language, a portable language for interactive 3D computer graphics, a language that now, with WebGL about to become part of HTML5, looks less a damp squib than fifteen years ahead of its time.

Oh well.

Just a few years ago I decided that I needed to learn Python. I don’t remember the reason. I don’t even know that there was a reason. Python was there, and that was enough.

It didn’t take long to learn – Python isn’t a difficult language – but for just that little bit of learning I got so much power, well – I don’t have to explain it to you. You understand. It’s a bit like crack, Python is. Once you’ve had that first hit, you’re never quite the same again.

I put Python on everything: on my Macs, on my servers, on my mobile – everything I owned got a Python install. I didn’t know exactly what I’d do with all this Python, but somehow that seemed unimportant. Just get it everywhere. You’ll figure something out.

In some ways discovering Python was very frustrating. By my early 40s I’d basically stopped programming; not because I hated coding, but because my life had turned in other directions. I teach, I research, I lecture, I write, I do a little TV on the side. None of that has anything to do with coding. I had the best tool for a grand bit of hackery, and no time to do anything with it, nor any real reason to drive me to make time.

My biggest Python project (before last week) was a simple script to create a video used in the opening of my 2008 WebDirections South keynote. I wanted to show the ‘cloud’ of Twitter followers I had started to accumulate – around 1500. Not just a ‘wall’ of different faces, but a film, an animation, where each person I followed on Twitter had their moment in the sun. The script retrieved the list of people I follow, then iterated through this list, getting profile information for each individual, extracting from that the URL for the user’s avatar, which it then retrieved. Using the Python Imaging Library, it then embossed the user’s handle onto the image. After that it was a basic drag-and-drop operation into Adobe Premiere. Presto! – I had a movie. Thank you, Python.
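The embossing step is the heart of that script, and it’s small enough to sketch. This is not the original code: it assumes the Twitter API calls have already produced each avatar as raw bytes, fabricates a blank avatar in place of a real download, and uses an invented handle. Only the Python Imaging Library (PIL, today’s Pillow) calls – `Image.open`, `ImageDraw.Draw`, `draw.text` – reflect what the text describes:

```python
import io
from PIL import Image, ImageDraw

def emboss_handle(avatar_bytes, handle):
    """Stamp a Twitter handle onto the bottom-left corner of an avatar,
    returning a new frame ready to drop into a video timeline."""
    img = Image.open(io.BytesIO(avatar_bytes)).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text((4, img.height - 16), "@" + handle, fill="white")
    return img

# In the real script each avatar came from the profile's image URL;
# here we fabricate a 73x73 placeholder just to show the call.
buf = io.BytesIO()
Image.new("RGB", (73, 73), "steelblue").save(buf, format="PNG")
frame = emboss_handle(buf.getvalue(), "example_user")
frame.save("frame_0001.png")
```

Looping this over 1500 follower records would yield 1500 numbered frames, which is all an editor like Premiere needs to assemble the animation.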

For half a decade I’ve been thinking about social networks. This little film project allowed me to tie my research together with my desire to have a pleasant excuse to hack. When I sat back and watched the film I’d algorithmically pieced together, I began to get a deeper sense of the value of my ‘social graph’. That’s a new phrase, and it means the set of human relationships we each carry with us. Until just a few years ago, these relationships lived wholly between our ears; we might augment our memories with an address book or a Rolodex, but these paper trails were only ever a reflection of our embodied relationships. Ever since Friendster, these relationships have exteriorized, leaped out of our heads (like Athena from Zeus) and crawled into our computers.

This makes them both intimately familiar and eerily pluripotent. We are wired from birth to connect with one another: to share what we know, to listen to what others say. This is what we do, a knowledge so essential, so foundational, it never needs to be taught. When this essential feature of being human gets accelerated by the speed of the computer, then amplified by a global network that now connects about five billion people (counting both mobile and Internet), all sorts of unexpected things begin to happen. The entire landscape of human knowledge – how we come to know something, how we come to share what we know – has been utterly transformed over the last decade. Were we to find a convenient TARDIS and take ourselves back to the world of 1999, it would be almost unrecognizable. The media landscape was as it always had been, though the print component had hesitantly migrated onto the Web. To learn about the world around us, we all looked up – to the ABC, to the New York Times, to the BBC World Service.

Then the world exploded.

We don’t look up anymore. We look around – we look to one another – to learn what’s going on. Sometimes we share what we hear on the ABC or the Times or the World Service. But what’s important is that we share it. There is no up, there is no centre. There is only a vast sea of hyperconnected human nodes.

The most alluring and seductive of all of the hyperconnecting services is unquestionably Facebook. In three years it has grown from just fifteen million to nearly half a billion users. It might be the most visited website in the world, just now surpassing Google. Facebook has become the nexus, the connecting point for one person in every fourteen on Earth. Facebook is the place where the social graph has come to life, where the potency of sharing and listening can be explored in depth. But it is a life lived out in public. Facebook is not really geared toward privacy, toward the intimacies that we expect as a necessary quality of our embodied relationships. Facebook founder Mark Zuckerberg is on the record talking about ‘the end of privacy’, and how he sees it as a side-effect of Facebook’s mission ‘to give people the power to share, and make the world more open and connected’.

A world more open could be a good thing, but only if the openness is wholly multilateral. We don’t want to end up in a world where our secrets as individuals have been revealed, while those who have the concentrations of capital and power, and their supporting organizations and networks, manage to continue to remain obscure and occult. This kind of ‘privacy asymmetry’ will only work against the individuals who have surrendered their privacy.

This is precisely where we seem to be headed. Facebook wants us to connect and share and reveal, but – particularly around privacy, user confidentiality, and the way they put that vast amount of user-generated data to work for themselves and their advertisers – Facebook’s business practices are entirely opaque. Openness must be met with openness, sharing with sharing. Anything else creates a situation where one side is – quite literally – holding all the cards.

I have been pondering the power of social networks for six years, so I am peculiarly conscious of the price you pay for participation in someone else’s network. I’ve come to realize that your social graph is your most important possession. In a very real way, your social graph is who you are. Until a few years ago we never gave this much thought because we carried our graphs with us everywhere, inside our heads. But now that these graphs live elsewhere – under the control of someone else – we’re confronted with a dilemma: we want to turbocharge our social graphs, but we don’t want anyone else having any access to something so fundamental and intimate. If the CIA and NSA use social graphs to find and combat terrorists, if smoking, obesity and divorce spread through social graphs, why would we hand something so personal and so potent to anyone else? What kind of value would we receive for surrendering our crown jewels?

By the end of last month it was clear that Facebook had become dangerous. Something had to be done. People had to be warned. In a Melbourne hotel room, I drafted a manifesto. Here’s how I closed it:

There is only one solution. We must take the thing which is inalienable from us – our presence – and remove it from those who would use that presence for their own gain. We must move, migrate, become digital refugees, fleeing a regime which seeks only its own best interests, to the detriment of our own… We may be the first, but we will not be the last. We must map the harbors, clear the woods, and make virgin lands inviting enough that it will be an easy decision for those who will come to join us in this new country, where freedom goes hand-in-hand with presence, where privacy is not a dirty word, and where the future knows no bounds.

So I quit. But I didn’t do it suddenly or rashly. I’d been using Facebook to share media – links and articles and videos – so I set up a Posterous account, where I could do exactly the same kind of sharing. Over the course of two weeks, I posted a series of Facebook updates, telling everyone in my social graph that I’d be quitting Facebook – beginning by posting that manifesto – and giving them the link to my Posterous account. I did this on five separate occasions in the week leading up to my account deletion.

The responses were interesting. Most of the folks in my social graph who bothered to respond were in various stages of mourning. My own aunt – whom I’ve been corresponding with via email for twenty years – wrote how much she’d miss me. Another individual expressed regret at my leave-taking, given that we’d only just reconnected after many years. “But,” I responded, “I’ve shown you how we can stay in touch. Just follow the link.” “That’s too hard,” he replied, “I like that Facebook gives me everyone in one place. I don’t have to remember to check here for you, or over there for someone else. This is just easy.”

I can’t fault his logic: Facebook is just like the comfy chair. It’s a pleasant place to be – even when surrounded by Inquisitors. Facebook users are simply so grateful that such an amazing service is on offer – seemingly for free – that they haven’t thought through the price of their participation. And unless something else comes along that’s as powerful and easy as Facebook, things will go on just as they are. Unless a disruptive innovation upends all the apple carts.

This is when I had a brainwave.

II: And Now For Something Completely Different

What is the social graph? At its essence, it is a set of connections, connections which define certain flows of information. These connections are both figurative and literal. If I say that I am connected to someone, I mean that we have some sort of relationship. But it also means that we have established protocols for communication, channels that can be used to send messages back and forth. For the last three hundred years this has been embodied in the ‘visiting card’, presented at all occasions when there is an invitation to connect. The ‘visiting card’ evolved into the ‘business card’ we share freely and promiscuously when there’s money to be made, or a connection to be had. The business card of 2010 must provide four significant pieces of information: a) the name of the caller; b) the address of the caller; c) the telephone number(s) of the caller; and d) the email address of the caller. Other information can be provided on the card – and often is – but if a card is missing any of these four essentials, it is incomplete. Each item represents a separate sphere of connectivity: the name is the necessary prerequisite for social connectivity; the address for postal connectivity; the telephone number and email address are self-explanatory. Each entry has a one-to-one correspondence with some form of connectivity. When we exchange business cards, we are providing the information necessary to establish connectivity.

We now have digital versions of the business card; we hand out vCards, or provide QR Codes that can be scanned and translated into a pointer to a vCard. Yet what we do with these digital versions of the business card has not changed: we stuff them into ‘address books’, or into the contact lists on our mobiles. If we have the right tools, we can upload them to Plaxo or LinkedIn. There they sit, static and essentially useless. A database with no applications.

That’s kind of weird, isn’t it? I mean, here we are, each of us walking around with a few hundred contacts on our mobiles, and essentially doing nothing with them unless we need to make a phone call or send an email. It doesn’t make sense. Somehow we’ve lost sight of the fact that the digital item is active in a way the physical object is not. Facebook understands this. Facebook takes your ‘calling card’ – the profile that you loaded up with your personal information – and makes it the foundation of your social graph. Everyone connects to your profile (which is you), and these connections become the cornerstone of fully bilateral sharing relationships. Anyone connected to you can send you a message, or initiate a chat, or look at the photos you uploaded of your holiday in the fleshpots of Bangkok. That one connection becomes the basis for a whole range of opportunities to share media – text, images, video, links, music, events, etc. – and equally an opportunity to listen to what others are sharing. That’s what Facebook is, really, a giant, centralized switchboard which connects its members to one another. That’s all any social network is.

It’s easy – really easy – to connect. We have so many ways to do so, through so many mechanisms, that we’re drowning in choice rather than suffering a poverty of options. Instead of a monolithic solution, the Internet, like nature, tends to favor diversity and heterogeneity. Diversity creates the space for play and exploration; a tolerance for heterogeneity allows that there is no right answer, no one way to play the game. Is it possible to design an architecture for human connectivity which favors diversity and heterogeneity?

For the past few weeks those of you following me on Twitter have seen me tweet about ‘Project Thunderware’, which was the silliest code-name I could think up for a project that is actually entirely serious. The real name is Plexus. Plexus is a design for a second-generation social network. It is personal – everyone runs their own Plexus. It is portable – written entirely in Python, so you can drop it onto a USB key (if you want) and take it with you anywhere you can get Python running. It is private – no one else has access to your Plexus, unless you want them to. It’s completely open and completely modular. Plexus is designed to take the passive social graph we’ve all got tucked away in our various devices and translate it into something active, vital, and essential.

There are three components within Plexus. First and most important is the social graph, a database of connections known as the ‘Plex’. Each of these connections, like a business card, comes with a list of connection points. These connection points can be outgoing – ‘this is how I will speak to you’ – or incoming – ‘this is how I will listen to you’. They can be unilateral or bilateral. They can be based on standard protocols – such as SMTP or XMPP, or the APIs of the rapidly-multiplying set of social services already available in the wilds of the Internet – or they can be something entirely home-grown and home-brewed. They can be wide open, or encrypted with GPG. Everything is negotiable. That’s the point: something’s in the Plex because there’s an active connection and relationship between two parties.
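The essay doesn’t spell out what a Plex entry actually looks like on disk, so here is a minimal, hypothetical sketch in Python – the field names (`protocol`, `direction`, `encrypted`) are my own invention for illustration, not part of the real Plexus:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConnectionPoint:
    """One negotiated channel to or from a contact."""
    protocol: str            # 'smtp', 'xmpp', a service API, or something home-brewed
    address: str             # protocol-specific endpoint: an email address, handle, URL
    direction: str           # 'outgoing' ("how I speak") or 'incoming' ("how I listen")
    encrypted: bool = False  # wide open by default, or wrapped in GPG

@dataclass
class Contact:
    """A single entry in the Plex: a party plus the channels negotiated with them."""
    name: str
    points: List[ConnectionPoint] = field(default_factory=list)

# A toy two-entry Plex: one outgoing SMTP channel, one incoming RSS channel.
plex = [
    Contact("Anthony", [ConnectionPoint("smtp", "anthony@example.org", "outgoing")]),
    Contact("Nick", [ConnectionPoint("rss", "https://example.com/feed", "incoming")]),
]
```

The point of the sketch is only the shape: a Plex is a flat list of contacts, each carrying its own bundle of negotiated channels.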

The Plex is only a database. To bring that database to life, two other components are required. The first of these is the ‘Sharer’. The Sharer, as the name implies, makes sure that something to be shared – be it a string of text, or a link, or a video, or a blog post, or whatever – ends up going out over the negotiated channels. The Sharer is built out of a set of Python modules, with each particular sharing service handled by its own module. This means that there is no limit or artificial constraint on what kinds of services Plexus can share with.

Conversely, the third component, the Listener, monitors all of the negotiated channels for any activity by any of the connections in the Plex. When the Listener hears something, it sends that to the user – to be displayed or saved or ignored according to the needs of the moment. Like the Sharer, the Listener is also a set of Python modules, with each monitored service handled by its own module. The Listener should be able to listen to anything that has a clearly defined interface.
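Since every sharing or monitored service gets its own Python module, a plausible module contract might look like the sketch below – these interfaces are assumptions, not the actual Plexus code:

```python
from abc import ABC, abstractmethod

class Sharer(ABC):
    """One module per outgoing service; a subclass wraps one protocol or API."""
    @abstractmethod
    def share(self, item):
        """Push a piece of shared media out over this channel."""

class Listener(ABC):
    """One module per monitored service."""
    @abstractmethod
    def poll(self):
        """Return any new items heard on this channel (possibly an empty list)."""

class LogSharer(Sharer):
    """Trivial stand-in module: 'shares' by recording items locally."""
    def __init__(self):
        self.sent = []
    def share(self, item):
        self.sent.append(item)

class CannedListener(Listener):
    """Stand-in module: delivers a fixed queue of items, once each."""
    def __init__(self, items):
        self.pending = list(items)
    def poll(self):
        items, self.pending = self.pending, []
        return items
```

Anything that can be wrapped in this two-method shape – a REST API, an SMTP server, a home-brewed socket protocol – becomes a channel Plexus can speak or listen on.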

When Plexus starts up, it reads through the Plex, instancing the appropriate Sharer and Listener objects on a connection-by-connection basis. Everything after initialization is event-driven: the Plexus user shares something, or the Listener hears something and offers that to the Plexus user.
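That initialization pass could be sketched as follows – the registry dictionaries mapping protocol names to module classes, and the plain-dict Plex format, are assumptions made so the sketch runs on its own:

```python
# Trivial stand-in modules, so the sketch is self-contained.
class EchoSharer:
    def __init__(self, address):
        self.address, self.sent = address, []
    def share(self, item):
        self.sent.append(item)

class CannedListener:
    def __init__(self, address):
        self.address, self.pending = address, ["something heard"]
    def poll(self):
        items, self.pending = self.pending, []
        return items

# Protocol name -> module class; each real service would register here.
SHARERS = {"echo": EchoSharer}
LISTENERS = {"canned": CannedListener}

def start(plex):
    """Walk the Plex, instancing one Sharer or Listener per connection point."""
    sharers, listeners = [], []
    for contact in plex:
        for point in contact["points"]:
            cls_map = SHARERS if point["direction"] == "outgoing" else LISTENERS
            instance = cls_map[point["protocol"]](point["address"])
            (sharers if point["direction"] == "outgoing" else listeners).append(instance)
    return sharers, listeners

plex = [
    {"name": "Anthony",
     "points": [{"protocol": "echo", "address": "a@example.org", "direction": "outgoing"}]},
    {"name": "Nick",
     "points": [{"protocol": "canned", "address": "feed", "direction": "incoming"}]},
]
sharers, listeners = start(plex)
```

After `start()` returns, everything is event-driven: sharing fans out across the `sharers`, while each listener is polled (or fires callbacks) on behalf of the user.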

That’s it. That’s the whole of the design. As always, the devil is in the details, but the essential architecture will probably remain unchanged. Plexus creates your own, self-managed social network: entirely self-contained, yet acting as a connected node within a broader network. Because Plexus functions as plumbing – wiring together social services that haven’t been designed to talk to one another – it performs a service that is badly needed, filling a growing void. Plexus is your own plumbing, under your own control.

Let’s talk through a use case. I give a lot of lectures, and I make sure to put my contact details – email, blog and Twitter – on my slides. I meet two people at a lecture – we’ll call one of them Nick, and the other one Anthony. (Those names just came to me.) Nick is an affable person; he just wants to be able to follow all of my output, as I put it out. All he needs is a list of the dozen-or-so public contact points where I present myself. That’d be my name, the six or seven blogs I write, my Twitter feed, my Posterous, my YouTube and Viddler accounts, and so forth. He gets that nugget of data off of markpesce.com/markpesce.plx – it’s basically a nice little bit of JSON (I don’t care for XML, but you can microformat to your heart’s content) that he can drop directly into Plexus, where it will go into the Plex. As the Plex digests it, this nugget instances the necessary Listeners. Now, whenever I say anything – anywhere – Nick knows about it. Which makes Nick happy.
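The essay only says the .plx file is ‘a nice little bit of JSON’, so the payload below is a guess at what such a nugget might contain – every field name here is hypothetical:

```python
import json

# A hypothetical markpesce.plx: a public, listen-only bundle of contact points.
PLX = """{
  "name": "Mark Pesce",
  "points": [
    {"protocol": "twitter", "address": "markpesce", "direction": "incoming"},
    {"protocol": "rss", "address": "https://markpesce.com/feed", "direction": "incoming"}
  ]
}"""

contact = json.loads(PLX)
# As the Plex digests the nugget, each incoming point would instance a Listener.
to_listen = [p for p in contact["points"] if p["direction"] == "incoming"]
```

Because the nugget marks every channel as incoming, dropping it into a Plexus creates Listeners only – Nick hears everything, without any negotiation back to me.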

Anthony is a different story. He’s a l33t user, and doesn’t want to be forced to rub shoulders with the hoi polloi at any of the normal social web services. Instead, Anthony wants to get a personally-addressed email from me every time I have something to share. Apparently he’s developed some excellent email filtering and management tools, so that even if I get quite chatty, it won’t clog up his inbox. So, he negotiates with me – Plexus-to-Plexus – and goes into my Plex as a contact, so that when I instance my Sharers, one is specifically set up to send him anything I share via SMTP. He doesn’t have to do anything to his Plexus, because he’s not using his Plexus to listen to me.
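A Sharer module for Anthony’s negotiated SMTP channel might look something like this – a sketch using Python’s standard library, with placeholder addresses and server details standing in for whatever the negotiation actually produced:

```python
import smtplib
from email.message import EmailMessage

class SMTPSharer:
    """Hypothetical Plexus Sharer module: mails each shared item to one
    negotiated contact, as a personally-addressed email."""
    def __init__(self, recipient, sender, host="localhost", port=25):
        self.recipient, self.sender = recipient, sender
        self.host, self.port = host, port

    def build(self, item):
        """Wrap a shared item in an email message."""
        msg = EmailMessage()
        msg["From"] = self.sender
        msg["To"] = self.recipient
        msg["Subject"] = "Shared via Plexus"
        msg.set_content(str(item))
        return msg

    def share(self, item):
        """Deliver the item over SMTP (requires a reachable mail server)."""
        with smtplib.SMTP(self.host, self.port) as server:
            server.send_message(self.build(item))

sharer = SMTPSharer("anthony@example.org", "mark@example.org")
msg = sharer.build("Here's a link worth your time")
```

From Anthony’s side, his own filtering does the rest; from my side, this module is just one more entry instanced from the Plex alongside Twitter, Posterous and the others.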

Use cases are all the more meaningful when they’re backed up by working code. Hence, I went back to the code mines last weekend – with a spring in my step and a song in my heart – and created a very, very embryonic version of Plexus. In just a little over two days, I created Sharer modules for Twitter, Posterous, Tumblr and SMTP, and Listener modules for Twitter and RSS. I reckoned that would be sufficient for the purposes of a demonstration – though if I’d had more time I could easily have wired in a few hundred other web social services.

There you go. That’s Plexus. The project is open source – after all, why would you trust a social network whose code you can’t inspect?

III: How Not To Be Seen

Plexus is grass-roots, bottom-up, and radically decentralized. That means the big boys will probably try to ignore it. Social media isn’t about the people, after all. It’s about humungous accumulations of capital going hand-in-hand with impossibly large collections of data, and, somewhere in the background, all the spooks, reading the paper trail. Social media is an instrument of control, the latest and the greatest. Sit still, read your feed, and comply.

But what if we refuse to comply? Is that even an option? Is it possible to be disconnected and influential? That’s the Faustian bargain being offered to us: join with the collective and you will be heard. And managed. And herded. Or suit yourself, and weep and gnash your teeth in the outer darkness. But in that Interzone, outside the smooth functioning of power, what happens when we connect there?

Reflect back on March of 2000. Napster, the centralized filesharing network, had recently been shut down by court order. A different crew created a decentralized filesharing tool, known as Gnutella, releasing both the tool and the source code to the world on March 14th. When AOL/TimeWarner – parent company of the folks who wrote Gnutella – found out about it and put a stop to the source code release, it was too late. It couldn’t be recalled. The bomb couldn’t be un-invented. The music industry is more authentic than it was a decade ago, more open to innovation, to outsiders, to diversity and heterogeneity. All because a few hackers decided to change the way people share their music.

History never repeats, but it does rhyme. We share everything now; we worry that we overshare. Now it’s time to take our sharing to the next level. We need a social 2.0, something that reflects what we’ve learned in the past half-dozen years. That’s not just a slew of new services. That’s an attitude change. Consider: the wiki was invented in 1995. It’s Precambrian web tech. But we didn’t start using wikis until after 2001, when Wikipedia began to take off. Why? It took us a while – and a lot of interactions – to understand how to use the tools on offer. Social technology is uniquely potent – so much so that we’ll be learning its strengths and weaknesses for a decade or more. The time has come to step out, seize the means of communication, and make them our own.

I reckon you can now understand why Python was such an obvious choice for Plexus. In no other language, with no other community, is the idea of sharing so much at the core. There is a Python module or code sample to do nearly every task under the sun, precisely because sharing is a core ethic of the Python community. Python is the language of the Web because it lends itself to the same sharing that the Web fosters. Python is the language of Plexus because Plexus needs to inherit all of Python’s best qualities, needs to be straightforward and open and flexible and extensible and easily shared. I need to be able to drop a Plexus module into an email and know, at the other end, that it will just work. ‘Take this,’ I’ll say, ‘and feed it to your Plexus.’ You’ll do that, and suddenly you’ll find that we have a secure, obscure and nearly invisible means of sharing – a darknet, how not to be seen – that can be as private and personal or open and public as we agree it should be. And you can turn around, think up something else, and mail that to me, or to someone else, or to the world.

The social web must be a social project, an opportunity to embody exactly what we’re trying to create as we are creating it. It’s the ultimate dogfooding. Success requires a willing surrender that rejoices in cooperation.

So here it is. This is the best I can do. It may be the best that I will ever do. I place it before you this morning, a humble offering, written in a language that I barely know, but which I’ve used to express my highest aspirations. Plexus is naked, newborn, and needs help. It will only benefit from your input, comments, recommendations, pointers and critiques. It is an idea that can only grow and mature as it is shared. That’s what this is all about. It always has been.

We live today in the age of networks. Having grown from nothing just fifteen years ago, the network has become one of the principal influences in our lives. We trust the network; we depend on the network; we use the network to make ourselves more effective. This state of affairs did not develop gradually; rather, we have passed through a series of unpredicted and non-linear shifts in the fabric of culture.

The first of these shifts was coincident with the birth of the Web itself, back in the mid-1990s. From its earliest days the Web was alluring because it represented all things to all people: it could serve as both resource and repository for anything that might interest us, a platform for whatever we might choose to say. The truth of those earliest days is that we didn’t really know what we wanted to say; the stereotype of the page where one went on long and lovingly about one’s pussy carries an echo of that search for meaning. The lights were on, but nobody was home.

Drawing the curtain on this more-or-less vapid era of the Web, the second shift began with the collapse of the dot-com bubble in the early 2000s. The undergrowth cleared away, people could once again focus on the why of the Web. This was when the Web came into its own as an interactive medium. The Web could have been an interactive medium from day one – the technology hadn’t changed one bit – but it took time for people to map out the evolving relationship between user and experience. The Web, we realized, is not a page to read, but rather, a space for exploration, connection and sharing.

This is when things start to get interesting, when ideas like Wikipedia begin to emerge. Wikipedia is not a technology, at least, it’s not a specific technology. Wikis have been around since 1995, nearly as old as the Web itself. Databases are older than the Web, too. So what is new about Wikipedia? Simply this: the idea of sharing. Wikipedia invites us all to share from our expertise, for the benefit of one another. It is an agreement to share what we know to collectively improve our capability. If you strip away all of the technology, and all of the hype – both positive and negative – from Wikipedia, what you’re left with is this agreement to share. In the decade since Wikipedia’s launch we’ve learned to share across a broad range of domains. This sharing supported by technology is a new thing, and dramatically increases the allure of the network. What was merely very interesting back in 1995 became almost overpowering in the years since the turn of the millennium. It has consistently become harder and harder to imagine a life without the network, because the network provides so much utility.

The final shift occurred in 2007, as Facebook introduced F8, its plug-in architecture which opened its design – and its data – to outside developers. Facebook exploded from a few million users to over four hundred million: the third largest nation in the world. Social networks are significant because they harness and amplify our innate human desire and capability to connect with one another. We constantly look to our social networks – that is, our real-world networks – to remind us who we are, where we are, and what we’re doing. These social networks provide our ontological grounding. When translated into cyberspace, these social networks can become almost impossibly potent – which is why, when they’re used to bully or harass someone, they can lead to such disastrous results. It becomes almost too easy, and we become almost too powerful.

A lot of what we’ll see in this decade is an assessment of what we choose to do with our new-found abilities. We can use these social networks to transmit pornographic pictures of one another back and forth at such frequency and density that we simply numb ourselves into a kind of fleshy hypnosis. That is one possible direction for the future. Or, we could decide that we want something different for ourselves, something altogether more substantial and meaningful. But in order to get that sort of clarity, we need to be very clear on what we want – both direction and outcome. At this point we are simply playing around – with a loaded weapon – hoping that it doesn’t accidentally go off.

Of course it does; someone sets up a Facebook page to memorialize a murdered eight year-old, but leaves the door open to all comers (believing, unrealistically, that others will share their desire to mourn together), only to see the overflowing sewage of the Internet spill bile and hatred and psychopathology onto a Web page. This happens again and again; it happened several times in one week in February. We are not learning the lesson we are meant to learn. We are missing something. Partly this is because it is all so new, but partly it is because we do not know what our own intentions are. Without that, without a stated goal, we cannot winnow the wheat from the chaff. We will forget to close the windows and lock the doors. We will amuse ourselves to death.

I mention this because, as educators, it is up to all of us to act as forces for the positive moral good of the culture as a whole. Cultural values are transmitted by educators; and while parents may be a bigger influence, teachers have their role to play. Parents are simply overwhelmed by all of this novelty – the Web wasn’t around when they were children, and social networks weren’t around even five years ago. So, right at this moment in time, educators get to be the adult cultural vanguard, the vital mentoring center.

If we had to do this ourselves, alone, as individuals – or even as individual institutions – the project would almost certainly fail. After all, how could we hope to balance all of the seductions ‘out there’ against the sense which needs to be taught ‘in here’? We would simply be overwhelmed – our current condition. Fortunately, we are as well connected, at least in potential, as any of our students. We have access to better resources. And we have more experience, which allows us to put those resources to work. In short, we are far better placed to make use of social media than our charges, even if they seem native to the medium while we profess to be immigrants.

One thing that has changed, because of the second shift, the trend toward sharing, is that educational resources are available now as never before. Wikipedia led the way, but it is just a small island in a much larger sea of content, provided by individuals and organizations throughout the world. iTunes University, YouTube University, the numberless podcasts and blogs that have sprung up from experts on every subject from macroeconomics to the history of Mesoamerica – all of it searchable by Google, all of it instantaneously accessible – every one of these points to the fact that we have clearly entered a new era, where we are surrounded by and saturated with an ‘educational field’ of sorts. Whatever you need to know, you’re soaking in it.

This educational field is brand-new. No one has made systematic use of it, no teacher, no institution, no administration. But that doesn’t lessen its impact. We all consult Wikipedia when we have some trivial question to answer; that behavior is the archetype for where education is headed in the 21st century – real-time answers on-demand, drawn from the educational field.

Paired with the educational field is the ability for educators to establish strong social connections – not just with other educators, but laterally, through the student to the parents, through the parents to the community, and so on, so that the educator becomes ineluctably embedded in a web of relationships which define, shape and determine the pedagogical relationship. Educators have barely begun to make use of the social networking tools on offer; just to have a teacher ‘friend’ a student in Facebook is, to some eyes, a cause for concern – what could possibly be served by that relationship, one which subverts the neat hierarchy of the 19th century classroom?

The relationship is the essence of the classroom, that which remains when all the other trivia of pedagogy are stripped away. The relationship between the teacher and the student is at the core of the magical moment when knowledge is transmitted between the generations. We now have the greatest tool ever created by the hand of man to reinforce and strengthen that relationship. And we need to use it, or else we will all sink beneath a rising tide of noise and filth and distraction.

But how?

II: The Unfinished Project

The roots of today’s talk lie in a public conversation I had with Dr. Evan Arthur, who manages the Digital Education Revolution Group within the Department of Education, Employment and Workplace Relations. As part of this conversation, I asked him about educational styles, and, in particular, Constructivism. As conceived by Jean Piaget and his successors across the 20th century, Constructivism states that the child learns through play – or rather, through repeated interactions with the world. Schemata are created by the child and put to the test, where they either succeed or fail. Failed schemata are revised and re-tested, while successful ones are incorporated into ever-more-comprehensive schemata. Through many years of research we know that we learn the physics of the real world through a constant process of experimentation. Every time a toddler dumps a cup of juice all over himself, he’s actually conducting an investigation into the nature of the real.

The basic tenets of Constructivism are not in dispute, although many educators have consistently resisted the underlying idea of Constructivism – that it is the child who determines the direction of learning. This conflicts directly with the top-down teacher-to-student model of education which we are all intimately familiar with, which has determined the nature of pedagogy and even the architecture of our classrooms. This is the grand battle between play and work; between ludic exploration and the hard grind of assimilating the skills that situate us within an ever-more-complex culture.

At the moment, this trench warfare has frozen us in a stalemate located, for the most part, between year two and year three. In the first two years education has a strong ludic component, and students are encouraged to explore. But in year three the process becomes routinized, formalized and very strict. Certainly, eight-year-olds are better able to understand restrictions than six-year-olds. They’re better at following the rules, at colouring within the lines. But it seems as though we’ve taken advantage of the fact that an older child is a more compliant one. It is true that as we advance in years, our ludic nature becomes tempered by an adult’s sensibility. But humans retain the urge to play throughout their lives – to a greater degree than any other species we know of. It could very well be that our ability to learn is intimately tied to our desire to play.

If we are prepared to swallow this bitter pill, and acknowledge that play is an essential part of the learning process, we have no choice but to follow this idea wherever it leads us. Which leads me back to my conversation with Dr. Arthur. I asked him about the necessity of play, and he framed his response by talking about “The Unfinished Constructivist Project”. It is a revolution trapped in mid-stride, a revelation that, somehow, hasn’t penetrated all the way through our culture. We still insist that instruction is the preferred mechanism for education, when we have ample evidence to suggest this simply isn’t true. Let me be clear: instruction is not the same thing as guidance. I am not suggesting that children simply do as they please. The more freedom they have, the more need they have for a strong, stabilizing force to guide them as they explore. This may be the significant (if mostly hidden) objection to the Constructivist project: it is simply too expensive. The human resources required to give each child their own mentor as they work their way through the corpus of human knowledge would simply overwhelm any current educational model, with the exception of homeschooling. I don’t know what the student-teacher ratio would need to be in a fully realized Constructivist educational system, but I doubt that twenty-to-one would be sufficient. That’s the level needed to maintain a semblance of order, more a peacekeeping force than an army of mentors.

There have been occasional attempts to create a fully Constructivist educational system, but these, like the manifold utopian communities which have been founded, flourish briefly, then fade or fracture, and do not survive the test of time. The level of dedication and involvement required from both educator/mentors and parents is simply too big an ask. This is the sort of thing that a hunter-gatherer culture has no trouble with: the entire world is the classroom, the child explores it, and an adult is always there to offer an explanation or story to round out the child’s knowledge. We live in an industrial culture (at least, our classrooms do), where there is strict differentiation between ‘education’ and the other activities in life, where adults are ‘educators’ or they are not, where everything is highly formal, almost ritualized. (Consider the highly regulated timings of the school day – equal parts order from chaos, and ritual.) There could never be enough support within such a framework to sustain a Constructivist model. This is why we have the present stalemate; we know the right thing to do, but, heretofore, we have lacked the resources to actualize this knowledge.

That has now changed.

The educational field must be recognized as the key element which will power the unfinished Constructivist revolution. The educational field does not recognize the boundaries of the classroom, the institution, or even the nation. It is simply pervasive, ubiquitous and available as needed. Within that field, both students and educator/mentors can find all of the resources needed to make the Constructivist project a continuing success. There need be no rupture between years two and three, no transformation of educational style from inward- to outward-directed. Instead, there can and should be a continual deepening of the child’s exploration of the corpus of knowledge, under the guidance of a network of mentors who share the burden. We already have most of the resources in place to assure that the child can have a continuous and continually strengthening relationship with knowledge: Wikipedia, while not perfect, points toward the kinds of knowledge sharing systems which will become both commonplace and easily created throughout the 21st century.

Sharing needs to become a foundational component in a modern educational system. Every time a teacher finds a resource to aid a student in their exploration, that should be noted and shared broadly. As students find things on their own – and they will be far better at it than most educators – these, too, should be shared. We should be creating a great, linked trail behind us as we learn, so that others, when exploring, will have paths to guide them – should they choose to follow. We have systems that can do this, but we have not applied these systems to education – in large part because this is not how we conceive of education. Or rather, this is not how we conceive of education in the classroom. I do a fair bit of corporate consulting, and this sort of ‘knowledge capture’ and ‘knowledge management’ is becoming essential to the operation of a 21st century business. Many businesses are creating their own, ad-hoc systems to share knowledge resources among their staff, as they understand how important this is for professional development.

This is a new battle line opened up in the war between the unfinished Constructivist project and the older, more formal methods of education. The corporate world doesn’t have time for methodologies which have become obsolete. Employees must be constantly up-to-date. Professionals – particularly doctors and lawyers – must remain continuously well-informed about developments in their respective fields. Those in management need real-time knowledge streams in order to recognize and solve problems as they emerge. This is all much more ludic than formal, much more self-directed than guided, much more juvenile than adult – even though these are all among the most adult of all activities. This disjunction, this desynchronization between the needs of the world-at-large and the delivery capabilities of an ever-more-obsolete educational system is the final indictment of things-as-they-are. Things will change; either education will become entirely corporatized, or educators will wholly embrace the unfinished Constructivist project. Either way the outcome will be the same.

Fortunately, the educational field has something else to offer educators beyond the near-infinite supply of educational resources. It is a network of individuals. It is a social network, connected together via bonds of familiarity and affinity. The student is embedded in a network with his mentors; the mentors are connected to other students, and to other mentors; everyone is connected to the parents, and the community. In this sense, the formal space of the ‘classroom’ collapses, undone by the pressure provided by the social network, which has effectively caused the classroom walls to implode. The outside world wants to connect to what happens within the crucible of the classroom, or, more specifically, with the magical moment of knowledge transference within the student’s mind. This is what we should be building our social networks to support. At present, social networks like Facebook and Twitter are dull, unsophisticated tools, capable of connecting people, but completely inadequate when it comes to shaping that connection around a task – such as mentoring, or exploring knowledge. A second generation of social networks is already reaching release. These tools display a more sophisticated edge, and will help to support the kinds of connections we need within the educational field.

None of this, as wonderful as it might sound (and I admit that it may also seem pretty frightening) is happening in a vacuum. There are larger changes afoot within Australia, and no vision for the future of education in Australia could ignore them. We must find a way to harmonize those changes with the larger, more fundamental changes overtaking the entire educational system.

III: The National Curriculum

The underlying fear of a Constructivist educational project is that it would simply give children an excuse to avoid the tough work of education. There is a persistent belief that children will simply load up on educational ‘candy’, without eating their all-too-essential ‘vegetables’, that is, the basic skills which form the foundation for future learning. Were children left entirely to their own devices, there might be some danger of this – though, now that we live in the educational field, even that possibility seems increasingly remote. Children do not live in isolation: they are surrounded by adults who want them to grow into successful adults. In prehistoric times, adults simply had to be adults around children for the transference of life-skills to take place. Children copied, imitated, and aped adults – and still do. This learning-by-mimesis is still a principal factor in the education of the child, though it is not one which is often highlighted by the educational system. Industrial culture has separated the adult from the child, putting one into the office, the other into the school. That separation, and the specialization which is the hallmark of the Industrial Age, broke the natural and persistent mentorship of parenting into discrete units: this much in the home, this much in the school. If we do not trust children to consume a nourishing diet of knowledge, it is because we do not trust ourselves to prepare it for them. The separation by function led to a situation where no one is responsible for the whole thread of a life. Parents look to teachers. Teachers look to parents. Everyone, everywhere, looks to authority for responsible solutions.

There is no authority anywhere. Either we do this ourselves, or it will not happen. We have to look to ourselves, build the networks between ourselves, reach out and connect from ourselves, if we expect to be able to resist a culture which wants to turn the entire human world into candy. This is not going to be easy; if it were, it would have happened by itself. Nor is it instantaneous. Nothing like this happens overnight. Furthermore, it requires great persistence. In the ideal situation, it begins at birth and continues on seamlessly until death. In that sense, this connected educational field mirrors and is a reflection of our human social networks, the ones we form from our first moments of awareness. But unlike that more ad-hoc network, this one has a specific intent: to bring the child into knowledge.

Knowledge, of course, is very big, very vague, mostly undefined. Meanwhile, there are specific skills and bodies of knowledge which we have nominated as important: the ability to read and write; to add and subtract, multiply and divide; a basic understanding of the physical and living worlds; the story of the nation and its peoples. These have very recently been crystallized in a ‘National Curriculum’, which seeks to standardize the pedagogical outcomes across Australia for all students in years 1 through 10. Parents and educators have already begun to argue about the inclusion or exclusion of elements within that curriculum. I was taught phonics over forty years ago, but apparently it’s still a matter of some debate. The teaching of history is always going to be contentious, because the story we tell ourselves about who we are is necessarily political. So the adults will argue it out – year after year, decade after decade – while the educators and students face this monolithic block of text which seems to be the complete antithesis of the Constructivist project. And, looked at one way, the National Curriculum is exactly the type of top-down, teacher-to-student, sit-down-and-shut-up sort of educational mandate which is no longer effective in the business world.

All of which means it’s probably best that we avoid viewing the National Curriculum as a validation, encouraging us to continue on with things as they are. Instead, it should be used as a mandate for change. There are several significant dimensions to this mandate.

First, putting everyone onto the same page, pedagogically, opens up an opportunity for sharing which transcends anything previously possible. Teachers and students from all over Australia can contribute to or borrow from a wealth of resources shared by those who have passed before them through the National Curriculum. Every teacher and every student should think of themselves as part of a broader collective of learners and mentors, all working through the same basic materials. In this sense, the National Curriculum isn’t a document so much as it is the architecture of a network. It is the way all things educational are connected together. It is the wiring underneath all of the pedagogy, providing both a scaffolding and a switchboard for the learning moment.

Is it possible to conceive of a library organized along the lines of the National Curriculum? Certainly a librarian would have no problem configuring a physical library to meet the needs of the curriculum. It’s even easier to organize similar sorts of resources in cyberspace. Not only is it easy, there’s now a mandate to do so. We know what sorts of resources we’ll need, going forward. Nothing should be stopping us from creating collective resources – similar to an Australian Wikipedia, and perhaps drawing from it – which will serve the pedagogical requirements of the National Curriculum. We should be doing this now.

Second, we need to think of the National Curriculum as an opportunity to identify all of the experts in all of the areas covered by the curriculum, and, once they’ve been identified, we must create a strong social network, with them inside, giving them pride of place as ‘nodes of expertise’. Knowledge is not enough; it must be paired with mentors who have been able to put that knowledge into practice with excellence. The National Curriculum is the perfect excuse to bring these experts together, to make them all connected and accessible to everyone throughout the nation who could benefit from their wisdom.

Here, once again, it is best to think of the National Curriculum not as a document but as a network – a way to connect things, and people, together. The great strength of the National Curriculum is, as Dr. Evan Arthur put it, that it is a ‘greenfields’. Literally anything is possible. We can go in any direction we choose. Inertia would have us do things as we’ve always done them, even as the centrifugal forces of culture beyond the classroom point in a different direction. Inertia cannot be a guiding force. It must be resisted, at every turn, not in the pursuit of some educational utopia or false revolution, but rather because we have come to realize that the network is the educational system.

Moving from where we are to where we need to be seems like a momentous transition. But the Web saw repeated momentous transitions in its first fifteen years and we managed all of those successfully. We can absorb huge amounts of change and novelty so long as the frame which supports us is strong and consistent. That’s the essence of the parent-child relationship: so long as the child feels it is being cared for, it can endure almost anything. This means that we shouldn’t run around freaking out. The sky is not falling. The world is not ending. If anything, we are growing closer together, more connected, becoming more important to one another. It may feel a bit too close from time to time, as we learn how to keep a healthy distance in these new relationships, but that closeness supports us all. It can keep children from falling through the net of opportunity. It can see us advance into a culture where every child has the full benefit of an excellent education, without respect to income or circumstance.

That is the promise. We have the network. We live in the educational field. We now have the National Curriculum to wire it all together. But can we marry the demands of the National Curriculum with the ludic call of Constructivism? Can we create a world where we quite literally play our way into learning? This is more than video games that have math drills embedded in them. It’s about capturing the interests of a child and using them as a springboard for the investigation of their world, their nation, their home. That can only happen if mentors are deeply involved and embedded in the child’s life from its earliest years.

I don’t have any easy answers here. There is no magic wand to wave over this whole uncoordinated mess to make it all cohere. No one knows what’s expected of them anymore – educators least of all. Are we parents? Are we ‘friends’? Where do we stand? I know this: we stand most securely when we stand connected.

This is the era of sharing. When the histories of our time are written a hundred years from now, sharing is the salient feature which historians will focus upon. The entirety of culture, from 1999 forward, looks like a gigantic orgy of sharing.

This morning I want to take a look at this phenomenon in some detail, and tie it into some Australian educational ‘megatrends’ – forces which are altering the landscape throughout the nation. Sharing can be used as an engine to power these forces, but that will only happen if we understand how sharing works.

At some level, sharing is totally familiar to us – we’ve been sharing since we were very small. But sharing, at least in the English language, has two slightly different meanings: we can share things, or we can share thoughts. We adults spend a lot of time teaching children the importance of sharing their things; we never need to teach them to share their thoughts. The sharing of things is a cultural behavior, valued by our civilization, whereas the sharing of thoughts is an innate behavior – probably located somewhere deep in our genes.

Fifteen years ago, Nicholas Negroponte characterized this as the divide between bits and atoms. We have to teach children to share their atoms – their toys and games – but they freely share their bits. In fact, they’re so promiscuous with their bits that this has produced its own range of problems.

It was only a decade ago that Shawn Fanning released a program which he’d written for his mates at Boston’s Northeastern University. Napster allowed anyone with a computer and a broadband internet connection to share their MP3 music files freely. Within a few months, millions of broadband-connected college students were freely trading their music collections with one another – without any thought of copyright or ownership. Let me reiterate: thoughts of copyright or piracy simply didn’t enter into their thinking. To them, this was all about sharing.

This act of sharing was a natural consequence of the ‘hyperconnectivity’ these kids had achieved via their broadband connections. When you connect people together, they will begin to share the things they care about. If you build a system that allows them to share the music they care about, they’ll share that. If you build a system that allows them to share the videos they care about, they’ll share that. If you build a system that allows them to share the links they care about, they’ll share them.

Clever web developers and entrepreneurs have built all of these systems, and many, many more. For the first time we can use technology to accelerate and amplify the innate human desire to share bits, and so, in a case of history repeating itself, we have amplified our social and sharing systems the way the steam engine amplified our physical power two hundred years ago.

In the earliest years of this sharing revolution, people shared the objects of culture: music, videos, jokes, links, photos, writing, and so on. Just this alone has had an enormous impact on business and culture: the recording industries, which were flying high a decade ago, have been humbled. Television networks have gotten in front of the Internet distribution of their own shows, to take the sting out of piracy. Newspapers, caught in the crossfire between a controlled system of distribution and a world where everyone distributes everything, have begun to disappear. And this is just the beginning.

In 2001, another experiment in sharing started in earnest: Wikipedia encouraged a small community of contributors to add their own entries to an ever-expanding encyclopedia. In this case contributors were asked to share their knowledge – however specific or particular – to a greater whole. Although it grew slowly in its earliest days, after about 2 years Wikipedia hit an inflection point and began to grow explosively.

Knowledge seems to have a gravitational quality; when enough of it is gathered together in one place, it attracts more knowledge. That’s certainly the story of Wikipedia, which has grown to encompass more than three million articles in English, on nearly every topic under the sun. Wikipedia is only the most successful of many efforts to produce a ‘collective intelligence’ out of the ‘wisdom of crowds’. There are many others – including one I’ll come to shortly.

One of the singular features of Wikipedia – one that we never think about even though it’s the reason we use Wikipedia – is simply this: Wikipedia makes us smarter. We can approach Wikipedia full of ignorance and leave it knowing a lot of facts. Facts need to be put into practice before they can be transformed into knowledge, but at least with Wikipedia we now have the opportunity to load up on the facts. And this is true globally: because of Wikipedia every single one of us now has the opportunity to work with the best possible facts. We can use these facts to make better decisions, decisions which will improve our lives. Wikipedia may seem innocuous, but it’s really quite profound.

How profound? If we peel away all of the technology behind Wikipedia, all of the servers and databases and broadband connections of the world’s sixth most popular website, what are we left with? Only this: an agreement to share what we know. It’s that agreement, and not the servers or databases or bandwidth which makes Wikipedia special, and it’s that agreement historians will be writing about in a hundred years. That agreement will endure – even if, for some bizarre reason, Wikipedia should cease to exist – because that agreement is one of the engines driving our culture forward.

Another example of sharing, just as relevant to educators, comes from a site which launched back in 1999 as TeacherRatings.com. Like Wikipedia, it grew slowly, and went through ownership changes, emerging finally as RateMyProfessors.com, which is owned by MTV, and which now boasts ten million ratings of one million professors, lecturers and instructors. This huge wealth of ratings came about because RateMyProfessors.com attached itself to the innate desire to share. Students want to share their experiences with their instructors, and RateMyProfessors.com gives them a forum to do just that.

Just as is the case with Wikipedia, anyone can become smarter by using RateMyProfessors.com. You can learn which instructors are good teachers, which grade easily, which will bore you to tears, and so forth. You can then put that information to work to make your life better – avoiding the professors (or schools) which have the worst teachers, taking courses from the instructors who get the highest scores.

That shared knowledge, put to work, changes the power balance within the university. For the last six hundred years, universities have been able to saddle students with lousy instructors – who might happen to be fantastic researchers – and there wasn’t much that students could do about it except grumble. Now, with RateMyProfessors.com, students can pass their hard-won knowledge down to subsequent generations of students. The university proposes, the student disposes. Worse still, the instructors receiving the highest ratings on RateMyProfessors.com have been the subjects of bidding wars, as various universities try to woo them, and add them to their faculties. All of this has given students a power they’ve never had, a power they never could have until they began to share their experiences, and translate that shared knowledge into action.

Sharing is wonderful, but sharing has consequences. We can now amplify and accelerate our sharing so that it can cross the world in a matter of moments, copied and replicated all the way. The power of the network has driven us into a new era. Sharing culture, knowledge, and power has destabilized all of our institutions. Businesses totter and collapse; universities change their practices; governments create task forces to get in front of what everyone calls ‘something-2.0’. It could be web2.0, education2.0, or government2.0. It doesn’t matter. What does matter is that something big is happening, and it’s all driven by our ability to share.

OK, so we can share. But why? How does it matter to us?

II: Greenfields

Before we can look at why sharing matters so much in this particular moment, we need to spend some time examining the three big events which will revolutionize education in Australia over the next decade. Each of them is revolutionary in itself; their confluence will result in a compressed wave of change – a concrescence – that will radically transform all educational practice.

The first of these events will affect all Australians equally. At this moment in time, Australia lives with medium-to-low-end broadband speeds, and most families have broadband connections which, because of metering, fundamentally limit their use. This is how it’s been since the widespread adoption of the Internet in the mid-1990s, and it’s nearly impossible to imagine that things could be different. The hidden lesson of the last fifteen years is that the Internet is something that needs to be rationed carefully, because there’s not enough to go around.

The Government wants us to adopt a different point of view. With the National Broadband Network (NBN), they intend to build a fibre-optic infrastructure which will deliver at least 100 megabit-per-second connections to every home, every school, and every business in Australia. Although no one has come out and said it explicitly, it’s clear that the Government wants this connection to be unmetered – the Internet will finally be freely available in Australia, as it is in most other countries.

How this will change our usage of the Internet is anyone’s guess. And this is the important point – we don’t know what will happen. We have critics of the NBN claiming that there’s no good reason for it, that Australians are already adequately served by the broadband we’ve already got, but I regularly hear stories of schools which block YouTube – not because of its potentially distracting qualities, but because they can’t handle the demand for bandwidth.

That, writ large, describes Australia in 2009. Broadband is the oxygen of the 21st century. Australia has been subjected to a slow strangulation. Once we can breathe freely, new horizons will open to us. We know this is true from history: no one really knew what we’d do with broadband once we got it. No one predicted Napster or YouTube or Skype, no one could have predicted any of them – or any of a thousand other innovations – before we had widespread access to broadband. Critics who argue there’s no need for high-speed broadband have simply failed to learn the lessons of history.

Now, before you think that I’m carrying the Government’s water, let me find fault with a few things. I believe that the Government isn’t thinking big enough – by the time the NBN is fully deployed, around 2017, a hundred megabit-per-second connection will simply be mid-range among our OECD peers. The Government should have accepted the technical challenge and gone for a gigabit network. Eventually, they will. Further, I believe the NBN will come with ‘strings attached’, specifically the filtering and regulatory regime currently being proposed by Senator Conroy’s ministry. The Government wants to provide the nation a ‘clean feed’, sanitized according to its interpretation of the law; when everyone in Australia gets their Internet service from the Commonwealth, we may have no choice in the matter.

The next event – and perhaps the most salient, in the context of this conference – is the Government’s commitment to provide a computer to every student in years 9 through 12. During the 2007 election, the Prime Minister talked about using computers for ‘math drills’ and ‘foreign language training’. The line about providing computers in the classroom was a popular one, although it is now clear that the Government’s ministers didn’t think through the profound effect of pervasive computing in the classroom.

First, it radically alters the power balance in the classroom. Most students have more facility with their computers than their teachers do. Some teachers are prepared to work from humility and accept instruction from their students. For other teachers, such an idea is anathema. The power balance could be righted somewhat with extensive professional development for the teachers – and time for that professional development – but schools have neither the budget nor the time to allow for this. Instead, the computers are being dumped into the classroom without any thought as to how they will affect pedagogy.

Second, these computers are being handed to students who may not be wholly aware of the potency of these devices. We’ve seen how a single text message, forwarded endlessly, can spark a riot on a Sydney beach, or how a party invitation, posted to Facebook, can lead to a crowd of five hundred and a battle with the police. Do teenagers really understand how to use the network to their advantage, how to reinforce their own privacy and protect themselves? Do they know how easy it is to ruin their own lives – or someone else’s – if they abuse the power of the network, that amplifier and accelerator of sharing?

Teachers aren’t the only ones who need some professional development. We need to provide a strong curriculum in ‘digital citizenship’; just as teenagers get instruction before they get a driver’s license, so they need instruction before they get to ‘spin the wheels’ of these ubiquitous educational computers.

This isn’t a problem that can be solved by filtering the networks at the schools. Students are surrounded by too many devices – mobiles as well as computers – which connect to the network and which require a degree of caution and education. This isn’t a job that the schools should be handling alone; this is an opportunity for all of the adult voices of culture – parents, caretakers, mentors, educators and administrators – to speak as one about the potentials and pitfalls of network culture.

Finally, what is the goal here? Right now the students and teachers are getting their computers. Next year the deployment will be nearly complete. What, in the end, is the point? Is it simply to give Kevin Rudd a tick on his ‘promises fulfilled’ list when he goes up for re-election? Or is this an opening to something greater? Is this simply more of the same, or something new? I haven’t seen any educator anywhere present anything that looks at all like an integrated vision of what these laptops mean to students, teachers or the classroom. Without such a vision, they’re bling: pretty, but entirely useless accessories. I’m not saying that this is a bad initiative – indeed, I believe the Government should be lauded for its efforts. But everything, thus far, feels only like a beginning, the first meter around a very long course.

Now we come to the most profound of the three events on the educational horizon: the National Curriculum. Although the idea of a national curriculum has been mooted by several successive governments, it looks as though we’ll finally achieve a deliverable curriculum sometime in the early years of the Rudd Government. There’s a long way to go, of course – and a lot of tussling between the states and the various educational stakeholders – but the process is well underway. It’s expected that curricula in ‘English, Mathematics, the Sciences and History’ will be ready for implementation at the start of 2011, not very far away. As these are the core elements in any school curriculum, they will affect every school, every teacher, and every student in Australia.

A few weeks ago I got the opportunity to share the stage with Dr. Evan Arthur, the Group Manager of the Digital Education Group at the Commonwealth Department of Education, Employment and Workplace Relations. During a ‘fireside chat’, when I asked him a series of questions, the topic turned to the National Curriculum. At this point Dr. Arthur became rather thoughtful, and described the National Curriculum as a “greenfields”. He went on to describe the curriculum documents, when completed, as a set of ‘strings’ which could be handled almost as if they were a Christmas tree, ready to have content hung all over them. The National Curriculum means that every educator in Australia is, for the first time, working to the same set of ‘strings’.

That’s when I became aware that Dr. Arthur saw the National Curriculum as an enormous opportunity to redraw the possibilities for education. We are all being given an opportunity to start again – to throw out the old rule book and start over with another one. But in order to do this we’ll have to take everything we’ve covered already – about sharing, the National Broadband Network, the Digital Education Revolution and the National Curriculum, then blend them together. Together they produce a very potent mix, a nexus of possibilities which could fundamentally transform education in Australia.

III: At The Nexus

Our future is a future of sharing; we’ll be improving constantly, finding better and better ways to share with one another. To this I want to add something more subtle; not a change in technology – we have a lot of technology – but rather, a change of direction and intent. We could choose to see the National Curriculum as simply another mandate from the Federal government, something that will make the educational process even more formal, rigorous, and lifeless. That option is open to us – and, to many of us, that’s the only option visible. I want to suggest that there is another, wildly different path open before us, right next to this well-trodden and much more prosaic laneway. Rather than viewing the National Curriculum as a done deal, wouldn’t it be wiser to treat it as an open invitation to participation and sharing?

After all, the National Curriculum mandates what must be taught, but says little to nothing about how it gets taught. Teachers remain free to pursue their own pedagogical ends. That said, teachers across Australia will, for the first time, be pursuing the same ends. This opens up a space and a rationale for sharing that never existed before. Everyone is pulling in the same direction; wouldn’t it make sense for teachers, students, administrators and parents to share the experience?

Let’s be realistic: whether or not we seek to formalize this sharing of experience, it will happen anyway, on BoredOfStudies.org, RateMyTeachers.com, a hundred other websites, a thousand blogs, a hundred thousand Facebook profiles, and a million tweets. But if it all happens out there, informally, we miss an enormous opportunity to let sharing power our transition into the National Curriculum. We’d be letting our greatest and most powerful asset slip through our fingers.

So let me turn this around and project us into a future where we have decided to formalize our shared experience of the National Curriculum. What might that look like? A teacher might normally prepare their curriculum and pedagogical materials at the beginning of the school term; during that preparation process they would check into a shared space, organized around the National Curriculum (this should be done formally, through an organization such as Education.AU, but could – and would – happen informally, via Google) to find out what other educators have created and shared as curriculum materials. Educators would find extensive notes, lesson plans, probably numerous recorded podcasts, links to materials on Wikipedia and other online resources, and so forth – everything that an educator might need to create an effective learning experience. Furthermore, educators would be encouraged to share and connect around any particular ‘string’ in the National Curriculum. The curriculum thus becomes a focal point for organization and coordination rather than a brute mandate of performance.

Students, already well-connected, will continue to use informal channels to communicate about their lessons; the National Curriculum gives the educational sector (and perhaps some enterprising entrepreneur) an opportunity to create a space where those curriculum ‘strings’ translate into points of contact. Students working through a particular point in the curriculum would know where they are, and would know where to gather together for help and advice. The same wealth of materials available to educators would be available to students. None of this constitutes ‘peeking at the answers’, but rather is part of an integrated effort to give students every advantage while working their way through the National Curriculum. A student in Townsville might be able to gain some advantage from a podcast of a teacher in Albany, might want to collaborate on research with students from Ballarat, might ask some questions of an educator in Lismore. The student sits in the middle of a nexus of resources designed to offer them every opportunity to succeed; if the methodology of their own classroom is a poor fit to their learning style, chances are high that they’ll find someone else, somewhere else, who makes a better match.

All of this sounds a lot like an educational utopia, but all of it is within our immediate grasp. This is because we live at the confluence of a broadly sharing culture, a nation rolling out ubiquitous high-speed broadband, students and educators with pervasive access to computers, and a National Curriculum to act as an organizing principle. It is precisely because the stars are aligned so auspiciously that we can dream big dreams. This is the moment when anything is possible.

This transition could simply reinforce the last hundred years of industrial era education, where one-size-fits-all, where the student enters ‘airplane mode’ when they walk into the classroom – all devices disconnected, eyes up and straight ahead for the boredom of a fifty-minute excursion through some meaningless and disconnected body of knowledge. Where the computer simply becomes an electronic textbook for the distribution of media, rather than a portal for the exploration of the knowledge shared by others. Where the educator finds themselves increasingly bound to a curriculum which limits their freedom to find expression and meaning in their work. And all of this will happen, unless we recognize the other path that has opened before us. Unless we change direction, and set our feet on that path. Because if we keep on as we have been, we’ll simply end up with what we have today. And that would be a big mistake.

It needn’t be this way. We can take advantage of our situation, of the concrescence of opportunities opening to us. It will take some work, some time and some money. But more than anything else it requires a change of heart. We must stop thinking of the classroom as a solitary island of peace and quiet in the midst of a stormy sea, and rather think of it as a node within a network, connected and receptive. We must stop thinking of educators as valiant but solitary warriors, and transform them into a connected and receptive army. And we must recognize that this generation of students are so well connected on every front that they outpace us in every advance. They will be teaching us how to make this transition seem effortless.

Can we do this? Can we screw up our courage and take a leap into a great unknown, into an educational future which draws from our past, but is not bound to it? With parents and politicians crying out for metrics and endless assessments, we are losing the space to experiment, to play, to explore. Next year, the National Curriculum will land like a ton of bricks, even as it presents the opportunity for a Great Escape. The next twelve months will be crucial. If we can only change the way we think about what is possible, we will change what is possible. It’s a big ask. It’s the challenge of our times. Will we rise to meet it? Can we make an agreement to share what we know and what we do? That’s all it takes. So simple and so profound.

A spectre is haunting the classroom, the spectre of change. Nearly a century of institutional forms, initiated at the height of the Industrial Era, will change irrevocably over the next decade. The change is already well underway, but this change is not being led by teachers, administrators, parents or politicians. Coming from the ground up, the true agents of change are the students within the educational system. Within just the last five years, both power and control have swung so quickly and so completely in their favor that it’s all any of us can do to keep up. We live in an interregnum, between the shift in power and its full actualization: These wacky kids don’t yet realize how powerful they are.

This power shift does not have a single cause, nor could it be thwarted through any single change, to set the clock back to a more stable time. Instead, we are all participating in a broadly-based cultural transformation. The forces unleashed cannot simply be dammed up; thus far they have swept aside every attempt to contain them. While some may be content to sit on the sidelines and wait until this cultural reorganization plays itself out, as educators you have no such luxury. Everything hits you first, and with full force. You are embedded within this change, as much so as this generation of students.

This paper outlines the basic features of this new world we are hurtling towards, pointing out the obvious rocks and shoals that we must avoid being thrown up against, collisions which could dash us to bits. It is a world where even the illusion of control has been torn away from us. A world wherein the first thing we need to recognize is that what is called for in the classroom is a strategic détente, a détente based on mutual interest and respect. Without those two core qualities we have nothing, and chaos will drown all our hopes for worthwhile outcomes. These outcomes are not hard to achieve; one might say that any classroom which lacks mutual respect and interest is inevitably doomed to failure, no matter what the tenor of the times. But just now, in this time, it happens altogether more quickly.

Hence I come to the title of this talk, “Digital Citizenship”. We have given our children the Bomb, and they can – if they so choose – use it to wipe out life as we know it. Right now we sit uneasily in an era of mutually-assured destruction, all the more dangerous because these kids don’t know how fully empowered they are. They could pull the pin by accident. For this reason we must understand them, study them intently, like anthropologists doing field research with an undiscovered tribe. They are not the same as us. Unwittingly, we have changed the rules of the world for them. When the Superpowers stared each other down during the Cold War, each was comforted by the fact that each knew the other had essentially the same hopes and concerns underneath the patina of Capitalism or Communism. This time around, in this Cold War, we stare into eyes so alien they could be another form of life entirely. And this, I must repeat, is entirely our own doing. We have created the cultural preconditions for this Balance of Terror. It is up to us to create an environment that fosters respect, trust, and a new balance of powers. To do that first we must examine the nature of the tremendous changes which have fundamentally altered the way children think.

I: Primary Influences

I am a constructivist. Constructivism states (in terms that now seem fairly obvious) that children learn the rules of the world from their repeated interactions within it. Children build schema, which are then put to the test through experiment; if these experiments succeed, those schema are incorporated into ever-larger schema, but if they fail, it’s back to the drawing board to create new schema. This all seems straightforward enough – even though Einstein pronounced it, “An idea so simple only a genius could have thought of it.” That genius, Jean Piaget, remains an overarching influence across the entire field of childhood development.

At the end of the last decade I became intensely aware that the rapid technological transformations of the past generation must necessarily impact upon the world views of children. At just the time my ideas were gestating, I was privileged to attend a presentation given by Sherry Turkle, a professor at the Massachusetts Institute of Technology, and perhaps the most subtle thinker in the area of children and technology. Turkle talked about her current research, which involved a recently-released and fantastically popular children’s toy, the Furby.

For those of you who may have missed the craze, the Furby is an animatronic creature which has expressive eyes, touch sensors, and a modest capability with language. When first powered up, the Furby speaks ‘Furbish’, an artificial language which the child can decode by looking up words in a dictionary booklet included in the package. As the child interacts with the toy, the Furby’s language slowly adopts more and more English phrases. All of this is interesting enough, but more interesting, by far, is that the Furby has needs. Furby must be fed and played with. Furby must rest and sleep after a session of play. All of this gives the Furby some attributes normally associated with living things, and this gave Turkle an idea.

Constructivists had already determined that between ages four and six children learn to differentiate between animate objects, such as a pet dog, and inanimate objects, such as a doll. Since Furby showed qualities which placed it into both ontological categories, Turkle wondered whether children would class it as animate or inanimate. What she discovered during her interviews with these children astounded her. When the question was put to them of whether the Furby was animate or inanimate, the children said, “Neither.” The children intuited that the Furby resided in a new ontological class of objects, between the animate and inanimate. It’s exactly this ontological in-between-ness of Furby which causes some adults to find them “creepy”. We don’t have a convenient slot to place them into our own world views, and therefore reject them as alien. But Furby was completely natural to these children. Even the invention of a new ontological class of being-ness didn’t strain their understanding. It was, to them, simply the way the world works.

Writ large, the Furby tells the story of our entire civilization. We make much of the difference between “digital immigrants”, such as ourselves, and “digital natives”, such as these children. These kids are entirely comfortable within the digital world, having never known anything else. We casually assume that this difference is merely a quantitative facility. In fact, the difference is almost entirely qualitative. The schema upon which their world-views are based, the literal ‘rules of their world’, are completely different. Furby has an interiority hitherto only ascribed to living things, and while it may not make the full measure of a living thing, it is nevertheless somewhere on a spectrum that simply did not exist a generation ago. It is a magical object, sprinkled with the pixie dust of interactivity, come partially to life, and closer to a real-world Pinocchio than we adults would care to acknowledge.

If Furby were the only example of this transformation of the material world, we would be able to easily cope with the changes in the way children think. It was, instead, the leading edge of a broad transformation. For example, when I was growing up, LEGO bricks were simple, inanimate objects which could be assembled in an infinite arrangement of forms. Today, LEGO Mindstorms allows children to create programmable forms, using wheels and gears and belts and motors and sensors. LEGO is no longer passive, but active and capable of interacting with the child. It, too, has acquired an interiority which teaches children that at some essential level the entire material world is poised at the threshold of a transformation into the active. A child playing with LEGO Mindstorms will never see the material world as wholly inanimate; they will see it as a playground requiring little more than a few simple mechanical additions, plus a sprinkling of code, to bring it to life. Furby adds interiority to the inanimate world, but LEGO Mindstorms empowers the child with the ability to add this interiority themselves.

The most significant of these transformational innovations is one of the most recent. In 2004, Google purchased Keyhole, Inc., a company that specialized in geospatial data visualization tools. A year later Google released the first version of Google Earth, a tool which provides a desktop environment wherein the entire Earth’s surface can be browsed, at varying levels of resolution, from high Earth orbit, down to the level of storefronts, anywhere throughout the world. This tool, both free and flexible, has fomented a revolution in the teaching of geography, history and political science. No longer constrained to the archaic Mercator Projection atlas on the wall, or the static globe-as-a-ball perched on one corner of the teacher’s desk, Google Earth presents Earth-as-a-snapshot.

We must step back and ask ourselves what the qualitative lesson, the constructivist message, of Google Earth might be. Certainly it removes the problem of scale; the child can see the world from any point of view, even multiple points of view simultaneously. But it also teaches them that ‘to observe is to understand’. A child can view the ever-expanding drying of southern Australia along with data showing the rise in temperature over the past decade, all laid out across the continent. The Earth becomes a chalkboard, a spreadsheet, a presentation medium, where the thorny problems of global civilization and its discontents can be explored in exquisite detail. In this sense, no problem, no matter how vast, no matter how global, will be seen as being beyond the reach of these children. They’ll learn this – not because of what the teacher says, or what homework assignments they complete – but through interaction with the technology itself.

The generation of children raised on Google Earth will graduate from secondary schools in 2017, just at the time the Government plans to complete its rollout of the National Broadband Network. I reckon these two tools will go hand-in-hand: broadband connects the home to the world, while Google Earth brings the world into the home. Australians, particularly beset by the problems of global warming, climate, and environmental management, need the best tools and the best minds to solve the problems which already beset us. Fortunately it looks as though we are training a generation for leadership, using the tools already at hand.

The existence of Google Earth as an interactive object changes the child’s relationship to the planet. A simulation of Earth is a profoundly new thing, and naturally is generating new ontological categories. Yet again, and completely by accident, we have profoundly altered the world view of this generation of children and young adults. We are doing this to ourselves: our industries turn out products and toys and games which apply the latest technological developments in a dazzling variety of ways. We give these objects to our children, more or less blind to how this will affect their development. Then we wonder how these aliens arrived in our midst, these ‘digital natives’ with their curious ways. Ladies and gentlemen, we need to admit that we have done this to ourselves. We and our technological-materialist culture have fostered an environment of such tremendous novelty and variety that we have changed the equations of childhood.

Yet these technologies are only the tip of the iceberg. Each is a technology of childhood, of a world of objects, where the relationship is between child and object. This is not the world of adults, where the relations between objects are thoroughly confused by the relationships between adults. In fact, it can be said that for as much as adults are obsessed with material possessions, we are only obsessed with them because of our relationships to other adults. The corner we turn between childhood and young adulthood is indicative of a change in the way we think, in the objects of attention, and in the technologies which facilitate and amplify that attention. These technologies have also suddenly and profoundly changed, and, again, we are almost completely unaware of what that has done to those wacky kids.

II: Share This Thought!

Australia now has more mobile phone subscribers than people. We have reached 104% subscription levels, simply because some of us own and use more than one handset. This phenomenon has been repeated globally; there are something like four billion mobile phone subscribers throughout the world, representing approximately three point six billion customers. That’s well over half the population of planet Earth. Given that there are only about a billion people in the ‘advanced’ economies in the developed world – almost all of whom now use mobiles – two and a half billion of the relatively ‘poor’ also have mobiles. How could this be? Shouldn’t these people be spending money on food, housing, and education for their children?

As it turns out (and there are numerous examples to support this) a mobile handset is probably the most important tool someone can employ to improve their economic well-being. A farmer can call ahead to markets to find out which is paying the best price for his crop; the same goes for fishermen. Tradesmen can close deals without the hassle and lost time involved in travel; craftswomen can coordinate their creative resources with a few text messages. Each of these examples can be found in any Bangladeshi city or African village. In the developed world, the mobile was nice but non-essential: no one is late anymore, just delayed, because we can always phone ahead. In the parts of the world which never had wired communications, the leap into the network has been explosively potent.

The mobile is a social accelerant; it does for our innate social capabilities what the steam shovel did for our mechanical capabilities two hundred years ago. The mobile extends our social reach, and deepens our social connectivity. Nowhere is this more noticeable than in the lives of those wacky kids. At the beginning of this decade, researcher Mizuko Ito took a look at the mobile phone in the lives of Japanese teenagers. Ito published her research in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life, presenting a surprising result: these teenagers were sending and receiving a hundred text messages a day among a close-knit group of friends (generally four or five others), starting when they first arose in the morning, and going on until they fell asleep at night. This constant, gentle connectivity – which Ito named ‘co-presence’ – often consisted of little of substance, just reminders of connection.

At the time many of Ito’s readers dismissed this phenomenon as something to be found among those ‘wacky Japanese’, with their technophilic bent. A decade later this co-presence is the standard behavior for all teenagers everywhere in the developed world. An Australian teenager thinks nothing of sending and receiving a hundred text messages a day, within their own close group of friends. A parent who might dare to look at the message log on a teenager’s phone would see very little of significance and wonder why these messages needed to be sent at all. But the content doesn’t matter: connection is the significant factor.

We now know that the teenage years are when the brain ‘boots’ into its full social awareness, when children leave childhood behind to become fully participating members within the richness of human society. This process has always been painful and awkward, but just now, with the addition of the social accelerant and amplifier of the mobile, it has become almost impossibly significant. The co-present social network can help cushion the blow of rejection, or it can impel the teenager to greater acts of folly. Both sides of the technology-as-amplifier are ever-present. We have seen bullying by mobile and over YouTube or Facebook; we know how quickly the technology can overrun any of the natural instincts which might prevent us from causing damage far beyond our intention – keep this in mind, because we’ll come back to it when we discuss digital citizenship in detail.

There is another side to sociability, both far removed from this bullying behavior and intimately related to it – the desire to share. The sharing of information is an innate human behavior: since we learned to speak we’ve been talking to each other, warning each other of dangers, informing each other of opportunities, positing possibilities, and just generally reassuring each other with the sound of our voices. We’ve now extended that four-billion-fold, so that half of humanity is directly connected, one to another.

We know we say little to nothing to those we know well, though we may say it continuously. What do we say to those we know not at all? In this case we share not words but the artifacts of culture. We share a song, or a video clip, or a link, or a photograph. Each of these is just as important as words spoken, but each of these places us at a comfortable distance within the intimate act of sharing. 21st-century culture looks like a gigantic act of sharing. We share music, movies and television programmes, driving the creative industries to distraction – particularly with the younger generation, who see no need to pay for any cultural product. We share information and knowledge, creating a wealth of blogs, and resources such as Wikipedia, the universal repository of factual information about the world as it is. We share the minutiae of our lives in micro-blogging services such as Twitter, and find that, being so well connected, we can also harvest the knowledge of our networks to become ever-better informed, and ever more effective individuals. We can translate that effectiveness into action, and become potent forces for change.

Everything we do, both within and outside the classroom, must be seen through this prism of sharing. Teenagers log onto video chat services such as Skype, and do their homework together, at a distance, sharing and comparing their results. Parents offer up their kindergartener’s presentations to other parents through Twitter – and those parents respond to the offer. All of this both amplifies and undermines the classroom. The classroom has not dealt with the phenomenal transformation in the connectivity of the broader culture, and is in danger of becoming obsolesced by it.

Yet if the classroom were wholeheartedly to embrace connectivity, what would become of it? Would it simply dissolve into a chaotic sea, or is it strong enough to chart its own course in this new world? This same question confronts every institution, of every size. It affects the classroom first simply because the networked and co-present polity of hyperconnected teenagers has reached it first. It is the first institution that must transform because the young adults who are its reason for being are the agents of that transformation. There’s no way around it, no way to set the clock back to a simpler time, unless, Amish-like, we were simply to dispose of all the gadgets which we have adopted as essential elements in our lifestyle.

This, then, is why these children hold the future of the classroom-as-institution in their hands, this is why the power-shift has been so sudden and so complete. This is why digital citizenship isn’t simply an academic interest, but a clear and present problem which must be addressed, broadly and immediately, throughout our entire educational system. We already live in a time of disconnect, where the classroom has stopped reflecting the world outside its walls. The classroom is born of an industrial mode of thinking, where hierarchy and reproducibility were the order of the day. The world outside those walls is networked and highly heterogeneous. And where the classroom touches the world outside, sparks fly; the classroom can’t handle the currents generated by the culture of connectivity and sharing. This cannot go on.

When discussing digital citizenship, we must first look to ourselves. This is more than a question of learning the language and tools of the digital era; we must take the life-skills we have already gained outside the classroom and bring them within. But beyond this, we must relentlessly apply network logic to the work of our own lives. If that work is as educators, so be it. We must accept the reality of the 21st century, that, more than anything else, this is the networked era, and that this network has gifted us with new capabilities even as it presents us with new dangers. Both gifts and dangers are issues of potency; the network has made us incredibly powerful. The network is smarter, faster and more agile than the hierarchy; when the two collide – as they’re bound to, with increasing frequency – the network always wins. A text message can unleash revolution, or land a teenager in jail on charges of peddling child pornography, or spark a riot on a Sydney beach; Wikipedia can drive Britannica, a quarter-millennium-old reference work, out of business; an outsider candidate can get himself elected president of the United States because his team masters the logic of the network. In truth, we already live in the age of digital citizenship, but so many of us don’t know the rules, and hence are poor citizens.

Now that we’ve explored the dimensions of the transition in the understanding of the younger generation, and the desynchronization of our own practice within the world as it exists, we can finally tackle the issue of digital citizenship. Children and young adults who have grown up in this brave new world, who have already created new ontological categories to frame it in their understanding, won’t have time or attention for preaching and screeching from the pulpit in the classroom, or the ‘bully pulpits’ of the media. In some ways, their understanding already surpasses ours, but their apprehension of consequential behavior does not. It is entirely up to us to bridge this gap in their understanding, but I do not mean to imply that educators can handle this task alone. All of the adult forces of the culture must be involved: parents, caretakers, educators, administrators, mentors, authority and institutional figures of all kinds. We must all be pulling in the same direction, lest the threads we are trying to weave together unravel.

III: 20/60 Foresight

While I was on a lecture tour last year, a Queensland teacher said something quite profound to me. “Giving a year 7 student a laptop is the equivalent of giving them a loaded gun.” Just as we wouldn’t think of giving this child a gun without extensive safety instruction, we can’t even consider giving this child a computer – and access to the network – without extensive training in digital citizenship. But the laptop is only one device; any networked device has the potential for the same pitfalls.

Long before Sherry Turkle explored Furby’s effect on the world-view of children, she examined how children interact with computers. In her first survey, The Second Self: Computers and the Human Spirit, she applied Lacanian psychoanalysis and constructivism to build a model of how children interacted with computers. In the earliest days of the personal computer revolution, these machines were not connected to any networks, but were instead laboratories where the child could explore themselves, creating a ‘mirror’ of their own understanding.

Now that almost every computer is fully connected to the billion-plus regular users of the Internet, the mirror no longer reflects the self, but the collective yet highly heterogeneous tastes and behaviors of mankind. The opportunity for quiet self-exploration drowns amidst the clamor from a very vital human world. In the space between the singular and the collective, we must provide an opportunity for children to grow into a sense of themselves, their capabilities, and their responsibilities. This liminal moment is the space for an education in digital citizenship. It may be the only space available for such an education, before the lure of the network sets behavioral patterns in place.

Children must be raised to have a healthy respect for the network from their earliest awareness of it. The network access of young children is generally closely supervised, but, as they turn the corner into tweenage and secondary education, we need to provide another level of support, which fully briefs these rapidly maturing children on the dangers, pitfalls, opportunities and strengths of network culture. They already know how to do things, but they do not have the wisdom to decide when it is appropriate to do them, and when it is appropriate to refrain. That wisdom is the core of what must be passed along. But wisdom is hard to transmit in words; it must flow from actions and lessons learned. Is it possible to develop a lesson plan which imparts the lessons of digital citizenship? Can we teach these children to tame their new powers?

Before a child is given their own mobile – something that happens around age 12 here in Australia, though that is slowly dropping – they must learn the right way to use it. Not the perfunctory ‘this is not a toy’ talk they might receive from a parent, but a more subtle and profound exploration of what it means to be directly connected to half of humanity, and how, should that connectivity go awry, it could seriously affect someone’s life – possibly even their own. Yes, the younger generation has different values where the privacy of personal information is concerned, but even they have limits they want to respect, and circles of intimacy they want to defend. Showing them how to reinforce their privacy with technology is a good place to start in any discussion of digital citizenship.

Similarly, before a child is given a computer – either at home or in school – it must be accompanied by instruction in the power of the network. A child may have a natural facility with the network without having any sense of the power of the network as an amplifier of capability. It’s that disconnect which digital citizenship must bridge.

It’s not my role to be prescriptive. I’m not going to tell you to do this or that particular thing, or outline a five-step plan to ensure that the next generation avoid ruining their lives as they come online. This is a collective problem which calls for a collective solution. Fortunately, we live in an era of collective technology. It is possible for all of us to come together and collaborate on solutions to this problem. Digital citizenship is an issue which has global reach; the UK and the US are both confronting similar issues, and both, like Australia, fail to deal with them comprehensively. Perhaps the Australian College of Educators can act as a spearhead on this issue, working in concert with other national bodies to develop a program and curriculum in digital citizenship. It would be a project worthy of your next fifty years.

In closing, let’s cast our eyes forward fifty years, to 2060, when your organization will be celebrating its hundredth anniversary. We can only imagine the technological advances of the next fifty years in the fuzziest of terms. You need only cast yourselves back fifty years to understand why. Back then, a computer as powerful as my laptop wouldn’t have fit in a single building – or even a single city block. It very likely would have filled a small city, requiring its own power plant. If we have come so far in fifty years, judging where we’ll be in fifty years time is beyond the capabilities of even the most able futurist. We can only say that computers will become pervasive and nearly invisibly woven through the fabric of human culture.

Let us instead focus on how we will use technology in fifty years’ time. We can already see the shape of the future in one outstanding example – a website known as RateMyProfessors.com. Here, in a database of nine million reviews of one million teachers, lecturers and professors, students can learn which instructors bore, which grade easily, which excite the mind, and so forth. This simple site – which grew out of the power of sharing – has radically changed the balance of power on university campuses throughout the US and the UK. Students can learn from others’ mistakes and triumphs, avoiding the former and repeating the latter. Universities, which might try to corral students into lectures with instructors who might not be exemplars of their profession, find themselves unable to fill those courses. Worse yet, bidding wars have broken out between universities seeking to fill their ranks with the instructors who receive the highest rankings.

Alongside the rise of RateMyProfessors.com, there has been an exponential increase in the amount of lecture material you can find online, whether on YouTube, or iTunes University, or any number of dedicated websites. Those lectures also have ratings, so it is already possible for a student to get to the best and most popular lectures on any subject, be it calculus or Mandarin or the medieval history of Europe.

Both of these trends are accelerating because both are backed by the power of sharing, the engine driving all of this. As we move further into the future, we’ll see the students gradually take control of the scheduling functions of the university (and probably in a large number of secondary school classes). These students will pair lecturers with courses using software to coordinate both. More and more, the educational institution will be reduced to a layer of software sitting between the student, the mentor-instructor and the courseware. As the university dissolves in the universal solvent of the network, the capacity to use the network for education increases geometrically; education will be available everywhere the network reaches. It already reaches half of humanity; in a few years it will cover three-quarters of the population of the planet. Certainly by 2060 network access will be thought of as a human right, much like food and clean water.

In 2060, the Australian College of Educators may be more of an ‘Invisible College’ than anything based in rude physicality. Educators will continue to collaborate, but without much of the physical infrastructure we currently associate with educational institutions. Classrooms will self-organize and disperse organically, driven by need, proximity, or interest, and the best instructors will find themselves constantly in demand. Life-long learning will no longer be a catch-phrase, but a reality for the billions of individuals all focusing on improving their effectiveness within an ever-more-competitive global market for talent. (The same techniques employed by RateMyProfessors.com will impact all the other professions, eventually.)

There you have it. The human future is both more chaotic and more potent than we can easily imagine, even if we have examples in our present which point the way to where we are going. And if this future sounds far away, keep this in mind: today’s year 10 student will be retiring in 2060. This is their world.

I have to admit that I am in awe of iTunes University. It’s just amazing that so many well-respected universities – Stanford, MIT, Yale, and Uni Melbourne – are willing to put their crown jewels – their lectures – online for everyone to download. It’s outstanding when even one school provides a wealth of material, but as other schools provide their own, we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.

When you have half a dozen – or a hundred – lectures on calculus, which one do you choose? The one featuring the best lecturer, with the best presentation skills, the best examples, and the best math jokes – of course. This is my only complaint with iTunes University: you can’t rate the various lectures on offer. You can see which ones have been downloaded most often, but that’s not precisely the same thing as knowing which calculus seminar or which sociology lecture is the best. So as much as I love iTunes University, I see it as only halfway there. Perhaps Apple didn’t want to turn iTunes U into a popularity contest, but without that vital bit of feedback it’s nearly impossible for us to winnow the wheat from the educational chaff.
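The gap between ‘most downloaded’ and ‘best’ is easy to make concrete. Here’s a minimal sketch in Python, with entirely invented lecture data, showing how the two rankings can diverge:

```python
# Toy illustration with invented data: why download counts are a poor
# proxy for quality. The same set of lectures is ranked two ways.
lectures = [
    # (title, downloads, ratings on a 1-to-5 scale)
    ("Calculus A", 90_000, [3, 3, 2, 3]),  # heavily promoted, mediocre
    ("Calculus B", 12_000, [5, 5, 4, 5]),  # obscure but excellent
    ("Calculus C", 40_000, [4, 3, 4, 4]),
]

def average(ratings):
    return sum(ratings) / len(ratings)

most_downloaded = max(lectures, key=lambda lec: lec[1])
best_rated = max(lectures, key=lambda lec: average(lec[2]))

print(most_downloaded[0])  # -> Calculus A
print(best_rated[0])       # -> Calculus B
```

Download counts reward promotion and incumbency; averaged ratings reward the lecture itself, which is exactly the feedback signal missing from the download charts.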

This is something that has to happen inside the system; it could happen across a thousand educational blogs spread out across the Web, but then it’s too diffuse to be really helpful. The reviews have to be coordinated and collated – just as with RateMyProfessors.com.

Say, that’s an interesting point. Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University? If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings. And as iTunes University gets bigger and bigger, RateMyLectures.com becomes ever more important, the ultimate guide to the ultimate source of educational multimedia on the Internet. One needs the other to be wholly useful; without ratings iTunes U is just an undifferentiated pile of possibilities. But with ratings, iTunes U becomes a highly focused and effective tool for digital education.

Now let’s cast our minds ahead a few semesters: iTunes U is bigger and better than ever, and RateMyLectures.com has benefited from the hundreds of thousands of contributed reviews. Those reviews extend beyond the content in iTunes U, out into YouTube and Google Video and Vimeo and Blip.tv and wherever people are creating lectures and putting them online. Now anyone can come by the site and discover the absolute best lecture on almost any subject they care to research. The net is now cast globally; I can search for the best lecture on Earth, so long as it’s been captured and uploaded somewhere, and someone’s rated it on RateMyLectures.com.

All of a sudden we’ve imploded the boundaries of the classroom. The lecture can come from the US, or the UK, or Canada, or New Zealand, or any other country. Location doesn’t matter – only its rating as ‘best’ matters. This means that every student, every time they sit down at a computer, already has – or soon will have – the absolute best lectures available, globally. That’s just a mind-blowing fact. It grows very naturally out of our desire to share and our desire to share ratings about what we have shared. Nothing extraordinary needed to happen to produce this entirely extraordinary state of affairs.

The network is acting like a universal solvent, dissolving all of the boundaries that have kept things separate. It’s not just dissolving the boundaries of distance – though it is doing that – it’s also dissolving the boundaries of preference. Although there will always be differences in taste and delivery, some instructors are simply better lecturers – in better command of their material – than others. Those instructors will rise to the top. Just as RateMyProfessors.com has created a global market for the lecturers with the highest ratings, RateMyLectures.com will create a global market for the best performances, the best material, the best lessons.

That RateMyLectures.com is only a hypothetical shouldn’t put you off. Part of what’s happening at this inflection point is that we’re all collectively learning how to harness the network for intelligence augmentation – Engelbart’s final triumph. All we need do is identify an area which could benefit from knowledge sharing and, sooner rather than later, someone will come along with a solution. I’d actually be very surprised if a service a lot like RateMyLectures.com doesn’t already exist. It may be small and unimpressive now. But Wikipedia was once small and unimpressive. If it’s useful, it will likely grow large enough to be successful.

Of course, lectures alone do not an education make. Lectures are necessary, but they are only one part of the educational process. Mentoring and problem solving and answering questions: all of these take place in the very real, very physical classroom. The best lectures in the world are only part of the story. The network is also transforming the classroom from the inside out, melting it down and forging it into something that looks quite a bit different from the classroom we’ve grown familiar with over the last 50 years.

II: Fluid Dynamics

If we take the examples of RateMyProfessors.com and RateMyLectures.com and push them out a little bit, we can see the shape of things to come. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, but it discounts the possibility that some individual or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open” school, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In the age of Wikipedia, YouTube and Twitter, this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding even as the students continue to work their way through the ever-growing set of information. The instructor cannot know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructors facilitate and mentor, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved; now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial presence with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode, vanishing online, and explode: the world will become the classroom.

This, then, can already be predicted from current trends; as the network begins to destabilize the institutional hierarchies in education, everything else becomes inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst-case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: Digital Citizenship

Obviously, much of what I’ve described here in the “melting down” of the educational process applies first and foremost to university students. That’s where most of the activity is taking place. But I would argue that it only begins with university students. From there – just like Facebook – it spreads across the gap between tertiary and secondary education, and into the high schools and colleges.

This is significant and interesting because it’s at this point that we, within Australia, run headlong into the Government’s plan to provide laptops for all year 9 through year 12 students. Some schools will start earlier; there’s a general consensus among educators that year 7 is the earliest time a student should be trusted to behave responsibly with their “own” computer. Either way, the students will be fully equipped and capable of using all of the tools at hand to manage their own education.

But will they? Some of this is a simple question of discipline: will the students be disciplined enough to take an ever-more-active role in the co-production of their education? As ever, the question is neither black nor white; some students will demonstrate the qualities of discipline needed to allow them to assume responsibility for their education, while others will not.

But, somewhere along here, there’s the presumption of some magical moment during the secondary school years, when the student suddenly learns how to behave online. And we already know this isn’t happening. We see too many incidents where students make mistakes, behaving badly without fully understanding that the whole world really is watching.

In the early part of this year I did a speaking tour with the Australian Council of Educational Researchers; during the tour I did a lot of listening. One thing I heard loud and clear from the educators is that giving a year 7 student a laptop is the functional equivalent of giving them a loaded gun. And we shouldn’t be surprised, when we do this, when there are a few accidental – or volitional – shootings.

I mentioned this in a talk to TAFE educators last week, and one of the attendees suggested that we needed to teach “Digital Citizenship”. I’d never heard the phrase before, but I’ve taken quite a liking to it. Of course, by the time a student gets to TAFE, the damage is done. We shouldn’t start talking about digital citizenship in TAFE. We should be talking about it from the first days of secondary education. And it’s not something that should be confined to the school: parents are on the hook for this, too. Even when the parents are not digitally literate, they can impart the moral and ethical lessons of good behavior to their children, lessons which will transfer to online behavior.

Make no mistake, without a firm grounding in digital citizenship, a secondary student can’t hope to make sense of the incredibly rich and impossibly distracting world afforded by the network. Unless we turn down the internet connection – which always seems like the first option taken by administrators – students will find themselves overwhelmed. That’s not surprising: we’ve taught them few skills to help them harness the incredible wealth available. In part that’s because we’re only just learning those skills ourselves. But in part it’s because we would have to relinquish control. We’re reluctant to do that. A course in digital citizenship would help both students and teachers feel more at ease with one another when confronted by the noise online.

Make no mistake, this inflection point in education is inevitably going to cross the gap between tertiary and secondary schools and students. Students will be able to do for themselves in ways that were never possible before. None of this means that the teacher or even the administrator has necessarily become obsolete. But the secondary school of the mid-21st century may look a lot more like a website than a campus. The classroom will have a fluid look, driven by the teacher, the students and the subject material.

Have we prepared students for this world? Have we given them the ability to make wise decisions about their own education? Or are we like those university administrators who mutter about how RateMyProfessors.com has ruined all their carefully-laid plans? The world where students were simply the passive consumers of an educational product is coming to an end. There are other products out there, clamoring for attention – you can thank Apple for that. And YouTube.

Once we get through this inflection point in the digital revolution in education, we arrive in a landscape that’s literally mind-blowing. We will each have access to educational resources far beyond anything on offer at any other time in human history. The dream of life-long learning will be simply a few clicks away for most of the billion people on the Internet, and many of the four billion who use mobiles. It will not be an easy transition, nor will it be perfect on the other side. But it will be incredible, a validation of everything Douglas Engelbart demonstrated forty years ago, and an opportunity to create a truly global educational culture, focused on excellence, and dedicated to serving all students, everywhere.

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the oN-Line System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major problem in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

So, more than for any other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about Wikipedia, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually, then rapidly, sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until, as of this writing, there are 2.65 million English-language articles in Wikipedia.
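That critical-mass dynamic can be sketched as a toy feedback model. Every parameter below is invented purely for illustration – this is not a model fitted to Wikipedia’s actual history:

```python
def simulate(articles, months, critical_mass=30_000.0):
    """Toy virtuous-cycle model (all parameters invented for illustration).

    Visitors scale with the article count, and the fraction of visitors
    who contribute rises with the depth of content until a 'critical
    mass' of articles is reached."""
    for _ in range(months):
        visitors = 2.0 * articles
        # Below critical mass, few visitors like what they see enough to contribute.
        contribution_rate = 0.01 * min(1.0, articles / critical_mass)
        articles += visitors * contribution_rate
    return articles

below = simulate(1_000, 24)    # languishes: growth is barely perceptible
above = simulate(30_000, 24)   # past the inflection point: compound growth
```

Below the threshold the feedback loop barely turns over; past it, every new contributor recruits more visitors, and growth compounds month after month.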

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web 2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart had known for forty years that sharing information is the cornerstone of intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I’d have to imagine that this is pleasing Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: how can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best tools we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here, and go back to your own schools none the wiser. Or you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in an old proverb: “Many hands make light work.” The two hundred of you, here today, can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!

Our greatest fear, in bringing computers into the classroom, is that we teachers and instructors and lecturers will lose control of the classroom, lose touch with the students, lose the ability to make a difference. The computer is ultimately disruptive. It offers greater authority than any instructor, greater resources than any lecturer, and greater reach than any teacher. The computer is not perfect, but it is indefatigable. The computer is not omniscient, but it is comprehensive. The computer is not instantaneous, but it is faster than any other tool we’ve ever used.

All of this puts the human being at a disadvantage; in a classroom full of machines, the human factor in education is bound to be overlooked. Even though we know that everyone learns more effectively when there’s a teacher or mentor present, we want to believe that everything can be done with the computer. We want the machines to distract, and we hope that in that distraction some education might happen. But distraction is not enough. There must be a point to the exercise, some reason that makes all the technology worthwhile. That search for a point – a search we are still mostly engaged in – will determine whether these computers are meaningful to the educational process, or if they are an impediment to learning.

It’s all about control.

What’s most interesting about the computer is how it puts paid to all of our cherished fantasies of control. The computer – or, most specifically, the global Internet connected to it – is ultimately disruptive, not just to the classroom learning experience, but to the entire rationale of the classroom, the school, the institution of learning. And if you believe this to be hyperbolic, this story will help to convince you.

In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc. Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

This is not something that anyone expected; it certainly wasn’t what John Swapceinski had in mind when he founded Teacher Ratings. He wasn’t trying to overturn the prerogatives of heads of school around the world. He was simply offering up a place for people to pool their knowledge. That knowledge, once pooled, takes on a life of its own, and finds itself in places where it has uses that its makers never intended.

This rating system serves as an archetype for what is about to happen to education in general. If we are smart enough, we can learn a lesson here and now that we will eventually learn – rather more expensively – if we wait. The lesson is simple: control is over. This is not about control anymore. This is about finding a way to survive and thrive in chaos.

The chaos is not something we should be afraid of. Like King Canute, we can’t roll back the tide of chaos that’s rolling over us. We can’t roll back the clock to an earlier age without computers, without the Internet, without the subtle but profound distraction of text messaging. The school is of its time, not out of it. Which means we must play the hand we’ve been dealt. That’s actually a good thing, because we hold a lot of powerful cards, or can, if we choose to face the chaos head on.

II: Do It Ourselves

If we take the example of RateMyProfessors.com and push it out a little bit, we can see the shape of things to come. But there are some other trends which are also becoming visible. The first and most significant of these is the trend toward sharing lecture material online, so that it reaches a very large audience. Spearheaded by Stanford University and the Massachusetts Institute of Technology, both of which have placed their entire set of lectures online through iTunes University, these educational institutions assert that the lectures themselves aren’t the real reason students spend $50,000 a year to attend these schools; the lectures only have full value in context. This is true, in some sense, but it discounts the possibility that some individuals or group of individuals might create their own context around the lectures. And this is where the future seems to be pointing.

When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students. The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication. The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?

At the moment the educational institution has an advantage over the singular student, in that it exists to coordinate the various functions of education. The student doesn’t have access to the same facilities or coordination tools. But we already see that this is changing; RateMyProfessors.com points the way. Why not create a new kind of “Open University”, a website that offers nothing but the kinds of scheduling and coordination tools students might need to organize their own courses? I’m sure that if this hasn’t been invented already someone is currently working on it – it’s the natural outgrowth of all the efforts toward student empowerment we’ve seen over the last several years.

In this near future world, students are the administrators. All of the administrative functions have been “pushed down” into a substrate of software. Education has evolved into something like a marketplace, where instructors “bid” to work with students. Now, since most education is funded by the government, there will obviously be other forces at play; it may be that “administration”, such as it is, represents the government oversight function which ensures standards are being met. In any case, this does not look much like the educational institution of the 20th century – though it does look quite a bit like the university of the 13th century, where students would find and hire instructors to teach them subjects.

The role of the instructor has changed as well; as recently as a few years ago the lecturer was the font of wisdom and source of all knowledge – perhaps with a companion textbook. In an age of Wikipedia, YouTube and Twitter this is no longer the case. The lecturer now helps the students find the material available online, and helps them to make sense of it, contextualizing and informing their understanding, even as the students continue to work their way through the ever-growing set of information. The instructor can not know everything available online on any subject, but will be aware of the best (or at least, favorite) resources, and will pass along these resources as a key outcome of the educational process. The instructor facilitates and mentors, as they have always done, but they are no longer the gatekeepers, because there are no gatekeepers, anywhere.

The administration has gone, the instructor’s role has evolved, now what happens to the classroom itself? In the context of a larger school facility, it may or may not be relevant. A classroom is clearly relevant if someone is learning engine repair, but perhaps not if learning calculus. The classroom in this fungible future of student administrators and evolved lecturers is any place where learning happens. If it can happen entirely online, that will be the classroom. If it requires substantial darshan with the instructor, it will have a physical locale, which may or may not be a building dedicated to education. (It could, in many cases, simply be a field outdoors, again harkening back to 13th-century university practices.) At one end of the scale, students will be able to work online with each other and with a lecturer to master material; at the other end, students will work closely with a mentor in a specialist classroom. This entire range of possibilities can be accommodated without much of the infrastructure we presently associate with educational institutions. The classroom will both implode – vanishing online – and explode – the world will become the classroom.

This, then, can already be predicted from current trends; once RateMyProfessors.com succeeded in destabilizing the institutional hierarchies in education, everything else became inevitable. Because this transformation lies mostly in the future, it is possible to shape these trends with actions taken in the present. In the worst case scenario, our educational institutions do not adjust to the pressures placed upon them by this new generation of students, and are simply swept aside by these students as they rise into self-empowerment. But the worst case need not be the only case. There are concrete steps which institutions can take to ease the transition from our highly formal present into our wildly informal future. In order to roll with the punches delivered by these newly-empowered students, educational institutions must become more fluid, more open, more atomic, and less interested in the hallowed traditions of education than in outcomes.

III: All and Everything

Flexibility and fluidity are the hallmark qualities of the 21st century educational institution. An analysis of the atomic features of the educational process shows that the course is a series of readings, assignments and lectures that happen in a given room on a given schedule over a specific duration. In our drive to flexibility, how can we reduce the class into its essential, indivisible elements? How can we capture those elements? Once captured, how can we get these elements to the students? And how can the students share elements which they’ve found in their own studies?

Recommendation #1: Capture Everything

I am constantly amazed that we simply do not record almost everything that occurs in public forums as a matter of course. This talk is being recorded for a later podcast – and so it should be. Not because my words are particularly worthy of preservation, but rather because this should now be standard operating procedure for education at all levels, for all subject areas. It simply makes no sense to waste my words – literally, pouring them away – when with very little infrastructure an audio recording can be made, and, with just a bit more infrastructure, a video recording can be made.

This is the basic idea that’s guiding Stanford and MIT: recording is cheap, lecturers are expensive, and students are forgetful. Somewhere in the middle these three trends meet around recorded media. Yes, a student at Stanford who misses a lecture can download and watch it later, and that’s a good thing. But it also means that any student, anywhere, can download the same lecture.

Yes, recording everything means you end up with a wealth of media that must be tracked, stored, archived, referenced and so forth. But that’s all to the good. Every one of these recordings has value, and the more recordings you have, the larger the hoard you’re sitting upon. If you think of it like that – banking your work – the logic of capturing everything becomes immediately clear.

Recommendation #2: Share Everything

While education definitely has value – teachers are paid for the work – that does not mean that resources, once captured, should be tightly restricted to authorized users only. In fact, the opposite is the case: the resources you capture should be shared as broadly as can possibly be managed. More than just posting them onto a website (or YouTube or iTunes), you should trumpet their existence from the highest tower. These resources are your calling card, these resources are your recruiting tool. If someone comes across one of your lectures (or other resources) and is favorably impressed by it, how much more likely will they be to attend a class?

The center of this argument is simple, though subtle: the more something is shared, the more valuable it becomes. You extend your brand with every resource you share. You extend the knowledge of your institution throughout the Internet. Whatever you have – if it’s good enough – will bring people to your front door, first virtually, then physically.

If universities as illustrious (and expensive) as Stanford and MIT could both share their full courseware online, without worrying that it would dilute the value of the education they offer, how can any other institution hope to resist their example? Both voted with their feet, and both show a different way to value education – as experience. You can’t download experience. You can’t bottle it. Experience has to be lived, and that requires a teacher.

Recommendation #3: Open Everything

You will be approached by many vendors promising all sorts of wonderful things that will make the educational processes seamless and nearly magical for both educators and students. Don’t believe a word of it. (If I had a dollar for every gripe I’ve heard about Blackboard and WebCT, I’d be a very wealthy man.) There is no off-the-shelf tool that is perfectly equipped for every situation. Each tool tries to shoehorn an infinity of possibilities into a rather limited palette.

Rather than going for a commercial solution, I would advise you to look at the open-source solutions. Rather than buying a solution, use Moodle, the open-source, Australian answer to digital courseware. Going open means that as your needs change, the software can change to meet those needs. Given the extraordinary pressures education will be under over the next few years, openness is a necessary component of flexibility.

Openness is also about achieving a certain level of device-independence. Education happens everywhere, not just with your nose down in a book, or stuck into a computer screen. There are many screens today, and while the laptop screen may be the most familiar to educators, the mobile handset has a screen which is, in many ways, more vital. Many students will never be very computer literate, but every single one of them has a mobile handset, and every single one of them sends text messages. It’s the bit of computer technology we nearly always overlook – because it is so commonplace. Consider every screen when you capture, and when you share; dealing with them all as equals will help you find audiences you never suspected you’d have.

There is a third aspect of openness: open networks. Educators of every stripe throughout Australia are under enormous pressure to “clean” the network feeds available to students. This is as true for adult students as it is for educators who have a duty-of-care relationship with their students. Age makes no difference, apparently. The Web is big, bad, evil and must be tamed.

Yet net filtering throws the baby out with the bathwater. Services like Twitter get filtered out because they could potentially be disruptive, cutting students off from the amazing learning potential of social messaging. Facebook and MySpace are seen as time-wasters, rather than tools for organizing busy schedules. The list goes on: media sites are blocked because the schools don’t have enough bandwidth to support them; Wikipedia is blocked because teachers don’t want students cheating.

All of this has got to stop. The classroom does not exist in isolation, nor can it continue to exist in opposition to the Internet. Filtering, while providing a stopgap, only leaves students painfully aware of how disconnected the classroom is from the real world. Filtering makes the classroom less flexible and less responsive. Filtering is lazy.

Recommendation #4: Only Connect

Mind the maxim of the 21st century: connection is king. Students must be free to connect with instructors, almost at whim. This becomes difficult for instructors to manage, but it is vital. Mentorship has exploded out of the classroom and, through connectivity, entered everyday life. Students should also be able to freely connect with educational administration; a fruitful relationship will keep students actively engaged in the mechanics of their education.

Finally, students must be free to (and encouraged to) connect with their peers. Part of the reason we worry about lecturers being overburdened by all this connectivity is because we have yet to realize that this is a multi-lateral, multi-way affair. It’s not as though all questions and issues immediately rise to the instructor’s attention. This should happen if and only if another student can’t be found to address the issue. Students can instruct one another, can mentor one another, can teach one another. All of this happens already in every classroom; it’s long past time to provide the tools to accelerate this natural and effective form of education. Again, look to RateMyProfessors.com – it shows the value of “crowdsourced” learning.

Connection is expensive, not in dollars, but in time. But for all its drawbacks, connection enriches us enormously. It allows us to multiply our reach, and learn from the best. The challenge of connectivity is nowhere near as daunting as the capabilities it delivers. Yet we know already that everyone will be looking to maintain control and stability, even as everything everywhere becomes progressively reshaped by all this connectivity. We need to let go, we need to trust ourselves enough to recognize that what we have now, though it worked for a while, is no longer fit for the times. If we can do that, we can make this transition seamless and pleasant. So we must embrace sharing and openness and connectivity; in these there’s the fluidity we need for the future.

If a picture paints a thousand words, you’ve just absorbed a million, the equivalent of one-and-a-half Bibles. That’s the way it is, these days. Nothing is small, nothing discrete, nothing bite-sized. Instead, we get the fire hose, 24 x 7, a world in which connection and community have become so colonized by intensity and amplification that nearly nothing feels average anymore.

Is this what we wanted? It’s become difficult to remember the before-time, how it was prior to an era of hyperconnectivity. We’ve spent the last fifteen years working out the most excellent ways to establish, strengthen and multiply the connections between ourselves. The job is nearly done, but now, as we put down our tools and pause to catch our breath, here comes the question we’ve dreaded all along…

Why. Why this?

I gave this question no thought at all as I blithely added friends to Twitter, shot past the limits of Dunbar’s Number, through the ridiculous, and then outward, approaching the sheer insanity of 1200 so-called-“friends” whose tweets now scroll by so quickly that I can’t focus on any one saying any thing because this motion blur is such that by the time I think to answer in reply, the tweet in question has scrolled off the end of the world.

This is ludicrous, and can not continue. But this is vital and can not be forgotten. And this is the paradox of the first decade of the 21st century: what we want – what we think we need – is making us crazy.

Some of this craziness is biological.

Eleven million years of evolution, back to Proconsul, the ancestor of all the hominids, have crafted us into quintessentially social creatures. We are human to the degree we are in relationship with our peers. We grew big forebrains, to hold banks of the chattering classes inside our own heads, so that we could engage these simulations of relationships in never-ending conversation. We never talk to ourselves, really. We engage these internal others in our thoughts, endlessly rehearsing and reliving all of the social moments which comprise the most memorable parts of life.

It’s crowded in there. It’s meant to be. And this has only made it worse.

No man is an island. Man is only man when he is part of a community. But we have limits. Homo Sapiens Sapiens spent two hundred thousand years exploring the resources afforded by a bit more than a liter of neural tissue. The brain has physical limits (we have to pass through the birth canal without killing our mothers) so our internal communities top out at Dunbar’s magic Number of 150, plus or minus a few.

Dunbar’s Number defines the crucial threshold between a community and a mob. Communities are made up of memorable and internalized individuals; mobs are unique in their lack of distinction. Communities can be held in one’s head, can be tended and soothed and encouraged and cajoled.

Four years ago, when I began my research into sharing and social networks, I asked a basic question: Will we find some way to transcend this biological limit, break free of the tyranny of cranial capacity, grow beyond the limits of Dunbar’s Number?

After all, we have the technology. We can hyperconnect in so many ways, through so many media, across the entire range of sensory modalities, it is as if the material world, which we have fashioned into our own image, wants nothing more than to boost our capacity for relationship.

And now we have two forces in opposition, both originating in the mind. Our old mind hews closely to the community and Dunbar’s Number. Our new mind seeks the power of the mob, and the amplification of numbers beyond imagination. This is the central paradox of the early 21st century, this is the rift which will never close. On one side we are civil, and civilized. On the other we are awesome, terrible, and terrifying. And everything we’ve done in the last fifteen years has simply pushed us closer to the abyss of the awesome.

We can not reasonably put down these new weapons of communication, even as they grind communities beneath them like so many old and brittle bones. We can not turn the dial of history backward. We are what we are, and already we have a good sense of what we are becoming. It may not be pretty – it may not even feel human – but this is things as they are.

When the historians of this age write their stories, a hundred years from now, they will talk about amplification as the defining feature of this entire era, the three hundred year span from industrial revolution to the emergence of the hyperconnected mob. In the beginning, the steam engine amplified the power of human muscle – making both human slavery and animal power redundant. In the end, our technologies of communication amplified our innate social capabilities, which eleven million years of natural selection have consistently selected for. Above and beyond all of our other natural gifts, those humans who communicate most effectively stand the greatest chance of passing their genes along to subsequent generations. It’s as simple as that. We talk our partners into bed, and always have.

The steam engine transformed the natural world into a largely artificial environment; the amplification of our muscles made us masters of the physical world. Now, the technologies of hyperconnectivity are translating the natural world, ruled by Dunbar’s Number, into the dominating influence of the maddening crowd.

We are not prepared for this. We have no biological defense mechanism. We are all going to have to get used to a constant state of being which resembles nothing so much as a stack overflow, a consistent social incontinence, as we struggle to retain some aspects of selfhood amidst the constantly eroding pressure of the hyperconnected mob.

Given this, and given that many of us here today are already in the midst of this, it seems to me that the most useful tool any of us could have, moving forward into this future, is a social contextualizer. This prosthesis – which might live in our mobiles, or our nettops, or our Bluetooth headsets – will fill our limited minds with the details of our social interactions.

This tool will make explicit that long, Jacob Marley-like train of lockboxes that are our interactions in the techno-social sphere. Thus, when I introduce myself to you for the first or the fifteen hundredth time, you can be instantly brought up to date on why I am relevant, why I matter. When all else gets stripped away, each relationship has a core of salience which can be captured (roughly), and served up every time we might meet.

I expect that this prosthesis will come along sooner rather than later, and that it will rival Google in importance. Google took too much data and made it roughly searchable. This prosthesis will take too much connectivity and make it roughly serviceable. Given that we are primarily social beings, I expect it to be a greater innovation, and more broadly disruptive.

And this prosthesis has precedents; at Xerox PARC they have been looking into a ‘human memory prosthesis’ for sufferers from senile dementia, a device which constantly jogs human memories as to task, place, and people. The world that we’re making for ourselves, every time we connect, is a place where we are all (in some relative sense) demented. Without this tool we will be entirely lost. We’re already slipping beneath the waves. We need this soon. We need this now.

I hope you’ll get inventive.

II. THAT.

Now that we have comfortably settled into the central paradox of our current era, with a world that is working through every available means to increase our connectivity, and a brain that is suddenly overloaded and sinking beneath the demands of the sum total of these connections, we need to ask that question: Exactly what is hyperconnectivity good for? What new thing does that bring us?

The easy answer is the obvious one: crowdsourcing. The action of a few million hyperconnected individuals resulted in a massive and massively influential work: Wikipedia. But the examples only begin there. They range much further afield.

Uni students have been sharing their unvarnished assessments of their instructors and lecturers. Ratemyprofessors.com has become the bête noire of the academy, because researchers who can’t teach find they have no one signing up for their courses, while the best lecturers, with the highest ratings, suddenly find themselves swarmed with offers for better teaching positions at more prestigious universities. A simple and easily implemented system of crowdsourced reviews has carefully undone all of the work of the tenure boards of the academy.

It won’t be long until everything else follows. Restaurant reviews – that’s done. What about reviews of doctors? Lawyers? Indian chiefs? Politicians? ISPs? (Oh, wait, we have that with Whirlpool.) Anything you can think of. Anything you might need. All of it will have been so extensively reviewed by such a large mob that you will know nearly everything that can be known before you sign on that dotted line.

All of this means that every time we gather together in our hyperconnected mobs to crowdsource some particular task, we become better informed, we become more powerful. Which means it becomes more likely that the hyperconnected mob will come together again around some other task suited to crowdsourcing, and will become even more powerful. That system of positive feedbacks – which we are already quite in the midst of – is fashioning a new polity, a rewritten social contract, which is making the institutions of the 19th and 20th centuries – that is, the industrial era – seem as antiquated and quaint as the feudal systems which they replaced.

It is not that these institutions are dying, but rather, they now face worthy competitors. Democracy, as an example, works well in communities, but can fail epically when it scales to mobs. Crowdsourced knowledge requires a mob, but that knowledge, once it has been collected, can be shared within a community, to hyperempower that community. This tug-of-war between communities and crowds is setting all of our institutions, old and new, vibrating like taut strings.

We already have a name for this small-pieces-loosely-joined form of social organization: it’s known as anarcho-syndicalism. Anarcho-Syndicalism emerged from the labor movements that grew in numbers and power toward the end of the 19th century. Its basic idea is simply that people will choose to cooperate more often than they choose to compete, and this cooperation can form the basis for a social, political and economic contract wherein the people manage themselves.

A system with no hierarchy, no bosses, no secrets, no politics. (Well, maybe that last one is asking too much.) Anarcho-syndicalism takes as a given that all men are created equal, and therefore each have a say in what they choose to do.

Somewhere back before Australia became a nation, anarcho-syndicalist trade unions like the Industrial Workers of the World (or, more commonly, the ‘Wobblies’) fought armies of mercenaries in the streets of the major industrial cities of the world, trying to get the upper hand in the battle between labor and capital. They failed because capital could outmaneuver labor in the 19th century. Today the situation is precisely reversed. Capital is slow. Knowledge is fast, the quicksilver that enlivens all our activities.

I come before you today wearing my true political colors – literally. I did not pick a red jumper and black pants by some accident or wardrobe malfunction. These are the colors of anarcho-syndicalism. And that is the new System of the World.

You don’t have to believe me. You can dismiss my political posturing as sheer radicalism. But I ask you to cast your mind further than this stage this afternoon, and look out on a world which is permanently and instantaneously hyperconnected, and I ask you – how could things go any other way? Every day one of us invents a new way to tie us together or share what we know; as that invention is used, it is copied by those who see it being used.

When we imitate the successful behaviors of our hyperconnected peers, this ‘hypermimesis’ means that we are all already in a giant collective. It’s not a hive mind, and it’s not an overmind. It’s something weirdly in-between. Connected we are smarter by far than we are as individuals, but this connection conditions and constrains us, even as it liberates us. No gift comes for free.

I assert, on the weight of a growing mountain of evidence, that anarcho-syndicalism is the place where the community meets the crowd; it is the environment where this social prosthesis meets that radical hyperempowerment of capabilities.

Let me give you one example, happening right now. The classroom walls are disintegrating (and thank heaven for that), punctured by hyperconnectivity, as the outside world comes rushing in to meet the student, and the student leaves the classroom behind for the school of the world. The student doesn’t need to be in the classroom anymore, nor does the false rigor of the classroom need to be drilled into the student. There is such a hyperabundance of instruction and information available that students need a mentor more than a teacher, a guide through the wilderness, not a penitentiary to prevent their journey.

Now the students, and their parents – and the teachers and instructors and administrators – need to find a new way to work together, a communion of needs married to a community of gifts. The school is transforming into an anarcho-syndicalist collective, where everyone works together as peers, comes together in a “more perfect union”, to educate. There is no more school-as-a-place-you-go-to-get-your-book-learning. School is a state of being, an act of communion.

If this is happening to education, can medicine, and law, and politics be so very far behind? Of course not. But, unlike the elites of education, these other forces will resist and resist and resist all change, until such time as they have no choice but to surrender to mobs which are smarter, faster and more flexible than they are. In twenty years’ time, all these institutions will be all but unrecognizable.

All of this is light-years away from how our institutions have been designed. Those institutions – all institutions – are feeling the strain of informational overload. More than that, they’re now suffering the death of a thousand cuts, as the various polities serviced by each of these institutions actually outperform them.

You walk into your doctor’s office knowing more about your condition than your doctor. You understand the implications of your contract better than your lawyer. You know more about a subject than your instructor. That’s just the way it is, in the era of hyperconnectivity.

So we must band together. And we already have. We have come together, drawn by our interests, put our shoulders to the wheel, and moved the Earth upon its axis. Most specifically, those of you in this theatre with me this arvo have made the world move, because the Web is the fulcrum for this entire transformation. In less than two decades we’ve gone from a physicist’s plaything to rewriting the rules of civilization.

But try not to think about that too much. It could go to your head.

III. THE OTHER.

Back in July, just after Vodafone had announced its meager data plans for iPhone 3G, I wrote a short essay for Ross Dawson’s Future of Media blog. I griped and bitched and spat the dummy, summing things up with this line:

“It’s time to show the carriers we can do this ourselves.”

I recommended that we start the ‘Future Australian Carrier’, or FAUC, and proceeded to invite all of my readers to get FAUCed. A harmless little incitement to action. What could possibly go wrong?

Within a day’s time a FAUC Facebook group had been started – without my input – and I was invited to join. Over the next two weeks about four hundred people joined that group, individuals who had simply had enough grief from their carriers and were looking for something better. After that, although there was some lively discussion about a possible logo, and some research into how MVNOs actually worked, nothing happened.

About a month later, individuals began to ping me, both on Facebook and via Twitter, asking, “What happened with that carrier you were going to start, Mark? Hmm?” As if somehow, I had signed on the dotted line to be chief executive, cheerleader, nose-wiper and bottle-washer for FAUC.

All of this caught me by surprise, because I certainly hadn’t signed up to create anything. I’d floated an idea, nothing more. Yet everyone was looking to me to somehow bring this new thing into being.

After I’d been hit up a few times, I started to understand where the epic !FAIL! had occurred. And the failure wasn’t really mine. You see, I’ve come to realize a sad and disgusting little fact about all of us: We need and we need and we need.

We need others to gather the news we read. We need others to provide the broadband we so greedily lap up. We need others to govern us. And god forbid we should be asked to shoulder some of the burden. We’ll fire off a thousand excuses about how we’re so time poor even the cat hasn’t been fed in a week.

So, sure, four hundred people might sign up to a Facebook group to indicate their need for a better mobile carrier, but would any of them think of stepping forward to spearhead its organization, its cash-raising, or its leasing agreements? No. That’s all too much hard work. All any of these people needed was cheap mobile broadband.

Well, cheap don’t come cheaply.

Of course, this happens everywhere up and down the commercial chain of being. QANTAS and Telstra outsource work to southern Asia because they can’t be bothered to pay for local help, because their stockholders can’t be bothered to take a small cut in their quarterly dividends.

There’s no difference in the act itself, just in its scale. And this isn’t even raw economics. This is a case of being penny-wise and pound-foolish. Carve some profit today, spend a fortune tomorrow to recover. We see it over and over and over again (most recently and most expensively on Wall Street), but somehow the point never makes it through our thick skulls. It’s probably because we human beings find it much easier to imagine three months into the future than three years. That’s a cognitive feature which helps if you’re on the African savannah, but sucks if you’re sitting in an Australian boardroom.

So this is the other thing. The ugly thing that no one wants to look at, because to look at it involves an admission of laziness. Well folks, let me be the first one here to admit it: I’m lazy. I’m too lazy to administer my damn Qmail server, so I use Gmail. I’m too lazy to set up WebDAV, so I use Google Docs. I’m too lazy to keep my devices synced, so I use MobileMe. And I’m too lazy to start my own carrier, so instead I pay a small fortune each month to Vodafone, for lousy service.

And yes, we’re all so very, very busy. I understand this. Every investment of time is a tradeoff. Yet we seem to defer, every time, to let someone else do it for us.

And is this wise? The more I see of cloud computing, the more I am convinced that it has become a single point of failure for data communications. The decade-and-a-half that I spent as a network engineer tells me that. Don’t trust the cloud. Don’t trust redundancy. Trust no one. Keep your data in the cloud if you must, but for goodness’ sake, keep another copy locally. And another copy on the other side of the world. And another under your mattress.

I’m telling you things I shouldn’t have to tell you. I’m telling you things that you already know. But the other, this laziness, it’s built into our culture. Socially, we have two states of being: community and crowd. A community can collaborate to bring a new mobile carrier into being. A crowd can only gripe about their carrier. And now, as the strict lines between community and crowd get increasingly confused because of the upswing in hyperconnectivity, we behave like crowds when we really ought to be organizing like a community.

And this, at last, is the other thing: the message I really want to leave you with. You people, here in this auditorium today, you are the masters of the world. Not your bosses, not your shareholders, not your users. You. You folks, right here and right now. The keys to the kingdom of hyperconnectivity have been given to you. You can contour, shape and control that chaotic meeting point between community and crowd. That is what you do every time you craft an interface, or write a script. Your work helps people self-organize. Your work can engage us at our laziest, and turn us into happy worker bees. It can be done. Wikipedia has shown the way.

And now, as everything hierarchical and well-ordered dissolves into the grey goo which is the other thing, you have to ask yourself, “Who does this serve?”

At the end of the day, you’re answerable to yourself. No one else is going to do the heavy lifting for you. So when you think up an idea or dream up a design, consider this: Will it help people think for themselves? Will it help people meet their own needs? Or will it simply continue to infantilize us, until we become a planet of dummy-spitting, whinging wankers?

It’s a question I ask myself, too, a question that’s shaping the decisions I make for myself. I want to make things that empower people, so I’ve decided to take some time to work with Andy Coffey, and re-think the book for the 21st century. Yes, that sounds ridiculous and ambitious and quixotic, but it’s also a development whose time is long overdue. If it succeeds at all, we will provide a publishing platform for people to share their long-form ideas. Everything about it will be open source and freely available to use, to copy, and to hack, because I already know that my community is smarter than I am.

And it’s a question I have answered for myself in another way. This is my third annual appearance before you at Web Directions South. It will be the last time for some time. You people are my community; where I knew none of you back in 2006, I consider many of you friends in 2008. Yet, when I talk to you like this, I get the uncomfortable feeling that my community has become a crowd. So, for the next few years, let’s have someone else do the closing keynote. I want to be with my peeps, in the audience, and on the Twitter backchannel, taking the piss and trading ideas.

The future – for all of us – is the battle over the boundary between the community and the crowd. I am choosing to embrace the community. It seems the right thing to do. And as I walk off-stage here, this afternoon, I want you to remember that each of you holds the keys to the kingdom. Our community is yours to shape as you will. Everything that you do is translated into how we operate as a culture, as a society, as a civilization. It can be a coming together, or it can be a breaking apart. And it’s up to you.