At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development. DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time. Or, well, perhaps I overstate the matter. But it could be a big deal.

The respondents to the RFP were organizations that already had working relationships with DEECD, and therefore were both familiar with DEECD processes and had been vetted in those earlier relationships. This meant that the entire RFP-to-submission process could be telescoped down to just a bit less than three weeks. The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process. I said I’d be happy to do so, and asked how many proposals I’d have to review. “I doubt it will be more than thirty or forty,” he replied. Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions. But the RFP didn’t result in thirty or forty proposals. The total came to almost ninety. All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting. Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs. If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit. Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out. That took nearly 24 hours by itself – and cost an ungodly sum. I was left with a huge, heavy box of paper which I could barely lug back to my flat. For the next 36 hours, this box would be my ball and chain. I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad. Then I looked back at the box. Then back at the iPad. Then back at the box. I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop. This, for me, would be a bit of a test. For the last decade I’d never traveled anywhere without my laptop. Could I manage a business trip with just my iPad? I looked back at the iPad. Then at the box. You could practically hear the penny drop.

I immediately began copying all these nine-hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox. Dropbox gives you 2 GB of free Internet storage, with an option to rent more space if you need it. Dropbox also has a free iPad app – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own. I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service. My rationale was that I imagined this iPad would be a ‘cloud-centric’ device. The ‘cloud’ is a term that’s come into use quite recently. It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer. Gmail is a good example of software that’s ‘in the cloud’. Facebook is another. Twitter, another. Much of what we do with our computers – iPad included – involves software accessed over the Internet. Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud. Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work. Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible. In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side. I pored over the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one. My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, two more large boxes were waiting for me. Here again were the proposals, carefully ordered and placed into several large ring binders. I’d be expected to tote these to the evaluation meeting. Fortunately, that was only a few floors above my hotel room. That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room. I put those boxes down – and never looked at them again. As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off. She understood completely. I flew home, lighter than I might otherwise have, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office. Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth. We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper. Computers as we’ve known them simply can’t replace a piece of paper. For a whole host of reasons, it just never worked. To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper. We have it now. After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written. You will soon have access to every single document you might ever need, right here, right now. We’re not 100% there yet – but that’s not the fault of the device. We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment. At that point, your iPad becomes the page which contains all other pages within it. You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text. The world is richer than that. iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever. It is already the radio (via the Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world. And it is every one of a hundred-million-plus websites and maybe a trillion web pages. All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: iPad at Work

Let’s project ourselves into the future just a little bit – say around ten years. It’s 2020, and we’ve had iPads for a whole decade. The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law. This law states that computers double in power every twenty-four months. Ten years is five doublings, or 32 times. That rule extends to the display as well as the computer. The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye. The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper. The device itself will be thinner and lighter than the current model. Battery technology improves at about 10% a year, so half the weight of the battery – which is the heaviest component of the iPad – will disappear. You’ll still get at least ten hours of use – something considered essential to your experience as a user. And you’ll still be connected to the mobile network.
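The arithmetic behind that claim is easy to check. A minimal sketch (the 24-month doubling period is this essay’s working assumption, not a precise law):

```python
# Back-of-the-envelope Moore's Law arithmetic, assuming one doubling
# of computing power every 24 months (the figure used above).
months_per_doubling = 24
years = 10

doublings = (years * 12) // months_per_doubling  # 120 / 24 = 5 doublings
growth_factor = 2 ** doublings                   # 2^5 = 32

print(f"{doublings} doublings over {years} years -> {growth_factor}x the power")
```

The same factor of 32 applies to whichever component you assume tracks the curve – processing, storage, or network capacity.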

The mobile network of 2020 will look quite different from the mobile network of 2010. Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution. Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits. That’s as good as a wired connection – as fast as anything promised by the National Broadband Network! In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second. That may sound like a lot, but again, it represents roughly 32 times the capacity of the mobile broadband networks of today. Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage – not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds – but if it’s there, someone will find a use for the two terabytes or more included in our iPad. (Perhaps a full copy of Wikipedia? Or all of the books published before 1915?) All of this will still cost just $700. If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option. I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.

What sorts of things will the iPad 10 be capable of? How do we put all of that power to work? First off, iPad will be able to see and hear in meaningful ways. Voice recognition and computer vision are two technologies on the threshold of becoming ‘twenty-year overnight successes’. We can already speak to our computers, and, most of the time, they can understand us. With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it. Your iPad will hear you, understand your voice, and follow your commands. It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time. They may still be employed in very specialized tasks. For almost everything else, we will be using our iPads. They’ll rarely leave our sides. They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task. When everything is so well connected, you don’t need to have personal information stored in a specific iPad. You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible. Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly. People may find voice recognition more of an annoyance than an affordance. The idea of your iPad watching you might seem creepy to some people. But consider this: I have a good friend who has two elderly parents: his dad is in his early 80s, his mom is in her mid-70s. He lives in Boston while they live in Northern California. But he needs to keep in touch, he needs to have a look in. Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad, and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime, and check on them, and they can ring him, anytime. It’s a bit ‘Jetsons’, when you think about it. And that’s just what will happen next year. By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long. It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia. No student, however poor, will be without their own iPad – the Government of the day will see to that. These students of 2020 are at least as well connected as you are, as their parents are, as anyone is. To them, iPads are not new things; they’ve always been around. They grew up in a world where touch is the default interface. A computer mouse, for them, seems as archaic as a manual typewriter does to us. They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband. They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations? This is not the universe of ‘chalk and talk’. This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network. This is a world where education can be provided anywhere, on demand, as called for. This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two. Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away. Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons. Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history. iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator. We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding. The more we virtualize the educational process, the more important and singular our embodied interactions become. Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up. Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students. That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon. We learn best when we learn from others. We humans are experts in mimesis, in learning by imitation. That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings. We are born to work together, we are designed to learn from one another. iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense. It should be an amplifier, not a replacement, something that lets students go further, faster than before. But they should not go alone.

The constant danger of technology is that it can interrupt the human moment. We can be too busy checking our messages to see the real people right before our eyes. This is the dilemma that will face us in the age of the iPad. Governments will see them as cost-saving devices, something that could substitute for the human touch. If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III: The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband. The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together. But what will they be working on? Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc. This is certainly not the intent of the project’s creators. Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy. Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities. That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity. We know that all year nine students in Australia will be covering a particular suite of topics. This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials – podcasts of lectures, educational chatrooms, lesson plans, and on and on. As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it. The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia. All the article headings are there, all the taxonomy, all the cross references, but none of the content. The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.
Well, maybe.

I say all of this as if it were a sure thing. But it isn’t. Everyone secretly suspects the National Curriculum will ruin education. I ask that we see things differently. The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value. More than that, we need to think of every student in Australia as a contributor of value. That’s the vital gap that must be crossed. Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work. Many of them are too modest or too scared to trumpet their own hard yards – but it is something that educators and students across the nation can benefit from. Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this. We need to do this. Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers. This is gold that we’re letting slip through our fingers. We live in an age where we only lose something when we neglect to capture it. We can let ourselves off easy here, because we haven’t had a framework to capture and share this pedagogy. But now we have the means to capture, a platform for sharing – the Ultranet, and a tool which brings access to everyone – the iPad. We’ve never had these stars aligned in such a way before. Only just now – in 2010 – is it possible to dream such big dreams. It won’t even cost much money. Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities. We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value. It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine. Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra. These kinds of things have been possible before, but the National Curriculum gives us the reason to do it. iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as this singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather, a set of skills students begin to acquire as soon as they land in pre-kindy. The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation has value far beyond the schoolhouse. In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation. Education is one way that this happens. People also need to have increasing flexibility in their career choices, to suit a much more fluid labor market. This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have. If we can share our learning, we can close this gap. We can bring the best of what we teach to everyone who has the need to know.

And there we are. But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it. The iPad is an excellent toy. Please play with it. I don’t mean use it. I mean explore it. Punch all the buttons. Do things you shouldn’t do. Press the big red button that says, “Don’t press me!” Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration. The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning. That joy is foundational to us. If we didn’t love learning, we wouldn’t be running things around here. We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own. These are my favorites, but I own many others, and enjoy all of them. There are literally tens of thousands to choose from, some of them educational, some just for fun. That’s the point: all work and no play makes iPad a dull toy.

So please, go and play. As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything. Or can, if we can change ourselves.

When I came to Australia six years ago, to seek my fame and fortune, business communications had remained largely unchanged for nearly a century. You could engage in face-to-face conversation – something humans have been doing since we learned to speak, countless thousands of years ago – or, if distance made that impossible, you could drop a letter into the post. Australia Post is an excellent organization, and seems to get all of the mail delivered within a day or two – quite an accomplishment in a country as dispersed and diffuse as ours.

In the twentieth century, the telephone became the dominant form of business communication; the Postmaster-General’s Department wired the nation up and let us talk to one another. Conversation, mediated by the telephone, became the dominant mode of communication. About twenty years ago the facsimile machine dropped in price dramatically, and we could now send images over phone lines.

The facsimile translates images into data and back into images again. That’s when the critical threshold was crossed: from that point on, our communications have always centered on data. The Internet arrived in 1995, and broadband in 2001. In the first years of Internet usage, electronic mail was both the ‘killer app’ and the thing that began to supplant the telephone for business correspondence. Electronic mail is asynchronous – you can always pick it up later. Email is non-local, particularly when used through a service such as Hotmail or Gmail – you can get it anywhere. Until mobiles started to become pervasive for business uses, the telephone was always a hit-or-miss affair. Electronic mail is a hit, every time.

Such was the business landscape when I arrived in Australia. The Web had arrived, and businesses eagerly used it as a publishing medium – a cheap way of getting information to their clients and customers. But the Web was changing. It had taken nearly a decade of working with the Web, day-to-day, before we discovered that the Web could become a fully-fledged two-way medium: the Web could listen as well as talk. That insight changed everything. The Web morphed into a new beast, christened ‘Web 2.0’, and everywhere the Web invited us to interact, to share, to respond, to play, to become involved. This transition has fundamentally changed business communication, and it’s my goal this morning to outline the dimensions of that transformation.

This transformation unfolds in several dimensions. The first of these – and arguably the most noticeable – is how well-connected we are these days. So long as we’re in range of a cellular radio signal, we can be reached. The number of ways we can be reached is growing almost geometrically. Five years ago we might have had a single email address. Now we have several – certainly one for business, and one for personal use – together with an account on Facebook (nearly eight million of the 22 million Australians have Facebook accounts), perhaps another account on MySpace, another on Twitter, another on YouTube, another on Flickr. We can get a message or maintain contact with someone through any of these connections. Some individuals have migrated to Facebook for the majority of their communications – there’s no spam, and they’re assured the message will be delivered. Among under-25s, electronic mail is seen as a technology of the ‘older generation’, something that one might use for work, but has no other practical value. Text messaging and messaging-via-Facebook have replaced electronic mail.

This increased connectivity hasn’t come for free. Each of us is now under a burden to maintain all of the various connections we’ve opened. At the most basic level, we must at least monitor all of these channels for incoming messages. That can easily get overwhelming, as each channel clamors for attention.

But wait. We’ve dropped Facebook and Twitter into the conversation before I even explained what they are and how they work. We just take them as a fact of life these days, but they’re brand new. Facebook was unknown just three years ago, and Twitter didn’t zoom into prominence until eighteen months ago. Let’s step back and take a look at what social networks are. In a very real way, we’ve always known exactly what a social network is: since we were very small we’ve been reaching out to other people and establishing social relationships with them. In the beginning that meant our mothers and fathers, sisters and brothers. As we grew older that list might grow to include some of the kids in the neighborhood, or at pre-kindy, and then our school friends. By the time we make it to university, that list of social relationships is actually quite long. But our brains have limited space to store all those relationships – it’s actually the most difficult thing we do, the most cognitively all-encompassing task. Forget physics – relationships are harder, and take more brainpower.

Nature has set a limit of about one hundred and fifty on the social relationships we can manage in our heads. That’s not a static number – it’s not as though as soon as you reach 150, you’re done, full. Rather, it’s a sign of how many relationships of importance you can manage at any one time. None of us, not even the most socially adept, can go very much beyond that number. We just don’t have the grey matter for it.

Hence, fifty years ago mankind invented the Rolodex – a way of keeping track of all the information we really should remember but can’t possibly begin to absorb. A real, living Rolodex (and there are few of them, these days) is a wonder to behold, with notes scribbled in the margins, business cards stapled to the backs of the Rolodex cards, and a glorious mess of information, all alphabetically organized. The Rolodex was mankind’s first real version of the modern, digital, social network. But a Rolodex doesn’t think for itself; a Rolodex cannot draw out the connections between the different cards. A Rolodex does not make explicit what we implicitly know: we live in a very interconnected world, and many of our friends and associates are also friends and associates of one another.

That is precisely what Facebook gives us. It makes those implicit connections explicit. It allows those connections to become conduits for ever-greater levels of connection. Once those connections are made, once they become a regular feature of our life, we can grow beyond the natural limit of 150. That doesn’t mean you can manage any of these relationships well – far from it. But it does mean that you can keep the channels of communication open. That’s really what all of these social networks are: turbocharged Rolodexes, which allow you to maintain far more relationships than ever before possible.

Once these relationships are established, something begins to happen quite naturally: people begin to share. What they share is often driven by the nature of the relationship – though we’ve all seen examples where individuals ‘over-share’ inappropriately, confusing business and social channels of communication. That sort of thing is very easy to do with social networks such as Facebook, because it doesn’t provide an easy method to send messages out to different groups of friends. We might want a social network where business friends get something very formal, while close friends get that photo of you doing tequila shots at last weekend’s birthday party. It’s a great idea, isn’t it? But it can’t be done. Not on Facebook, not on Twitter. Your friends are all lumped together into one undifferentiated whole. That’s one way that those social networks are very different from the ones inside our heads. And it’s something to be constantly aware of when sharing through social networks.

That said, this social sharing has become an incredibly potent force. More videos are uploaded to YouTube every day than all television networks all over the world produce in a year. It may not be material of the same quality, but that doesn’t matter – most of those videos are only meant to be seen among a small group of family or friends. We send pictures around, we send links around, we send music around (though that’s been cause for a bit of trouble), we share things because we care about them, and because we care about the people we’re sharing with. Every act of sharing, business or personal, brings the sharer and the recipient closer together. It truly is better to give than receive. On the other hand, we’re also drowning in shared material. There’s so much, coming from every corner, through every one of these social networks, there’s no possible way to keep up. So, most of us don’t. We cherry-pick, listening to our closest friends and associates: the things they share with us are the most meaningful. We filter the noise and hope that we’re not missing anything very important. (We usually are.)

In certain very specific situations, sharing can produce something greater than the sum of its parts. A community can get together and decide to pool what it knows about a particular domain of knowledge, can ‘wise up’ by sharing freely. This idea of ‘collective intelligence’ producing a shared storehouse of knowledge is the engine that drives sites like Wikipedia. We all know Wikipedia, we all know how it works – anyone can edit anything in any article within it – but the wonder of Wikipedia is that it works so well. It’s not perfectly accurate – nothing ever is – but it is good enough to be useful nearly all the time. Here’s the thing: you can come to Wikipedia ignorant and leave it knowing something. You can put that knowledge to work to make better decisions than you would have in your state of ignorance. Wikipedia can help you wise up.

Wikipedia isn’t the only example of shared knowledge. A decade ago a site named TeacherRatings.com went online, inviting university students to provide ratings of their professors, lecturers and instructors. Today it’s named RateMyProfessors.com, is owned by MTV Networks, and has over ten million ratings of one million instructors. This font of shared knowledge has become so potent that students regularly consult the site before deciding which classes they’ll take next semester at university. Universities can no longer saddle students with poor teachers (who may also be fantastic researchers). There are bidding wars taking place for the lecturers who get the highest ratings on the site. This sharing of knowledge has reversed the power relationship between a university and its students which stretches back nearly a thousand years.

Substitute the word ‘business’ for university and ‘customers’ for students and you see why this is so significant. In an era where we’re hyperconnected, where people share, and share knowledge, things are going to work a lot differently than they did before. These all-important relationships between businesses and their customers (potential and actual) have been completely rewritten. Let’s talk about that.

II. Linked Out

Of all the challenges you face in your professional practice, the greatest of them comes from a website that, at first glance, seems completely innocuous. LinkedIn is the “professional” social network, where individuals re-create their C.V. online, and, entry by entry, link their profiles to other people they have worked with over the years.

Just that alone is something entirely new and very potent. When a potential employer sees a C.V., they don’t see the network of connections the candidate created at every position – a network which tells the employer much of what they need to know about the suitability of the candidate. Suddenly, all of this implicit information has been revealed explicitly. An employer can ‘walk the chain’ of associations, long before a candidate submits any references. The LinkedIn profile is the reference, quite literally.

This means that a LinkedIn profile is more valuable than any hand-crafted C.V., because it is, on the whole, a more accurate read of the candidate. A candidate’s connections tell you everything about who the candidate is. They certainly tell you more than a list of hand-picked referees ever could. LinkedIn is simply a better way of doing business.

This means that LinkedIn has caught on like a bushfire in the Big End of town. Throughout the nation, employers look for the LinkedIn profile of potential candidates, and these profiles carry more weight than any words from the candidate, or a recruiter, or, really, anyone else. This transformation happened suddenly over the last 12 months, as businesspeople reached a critical mass of involvement with LinkedIn. LinkedIn benefits from the ‘network effect’: the more people who create profiles on LinkedIn, the more valuable the service becomes – because it’s more likely you’ll find someone’s profile there. That, in turn, makes it more likely another individual will create a LinkedIn profile, making it more valuable, etc. It also means that any candidate without a LinkedIn profile is immediately suspect – what’s he or she trying to hide?
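The arithmetic behind the network effect is worth making concrete. A minimal sketch, with illustrative membership figures of my own choosing: if any pair of members can potentially find one another, the number of possible connections grows roughly with the square of the membership.

```python
# Toy illustration of the 'network effect': the value of a network
# grows roughly with the number of possible pairwise connections.
# Membership figures below are invented for illustration.

def possible_connections(members: int) -> int:
    """Number of distinct pairs among `members` profiles."""
    return members * (members - 1) // 2

print(possible_connections(1_000))   # 499500
print(possible_connections(2_000))   # 1999000
```

Doubling the membership roughly quadruples the possible connections – which is why each new profile makes the next one more likely.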

LinkedIn has become the new standard in recruiting. But don’t look too closely, or you’ll get scared. LinkedIn takes one of the things the recruiter brings to the table – an extensive and wide-ranging set of contacts – and reproduces that electronically in such a way that anyone can take advantage of them. In other words, everyone is now on a much more equal footing. The time and energy you have dedicated to building up those networks can now be matched by someone spending a lot less time on it – someone who is employing the latest tools.

The big worry, from here forward, is that recruiters as we have known them will be obsolesced by social networking technologies. As we get further into the social media revolution, and these tools become more refined, many of the functions of the recruiter-as-networker, recruiter-as-matchmaker, and recruiter-as-talent-finder will be subsumed into these social networks. Already I can dial and tune searches on LinkedIn to give me, say, a list of electrical engineers who work in Melbourne. That’s a list I can work from, if I’m doing a personnel search. I can message those folks through LinkedIn, to find out if they’re interested in a conversation about a potential opportunity. The platform provides the basic set of capabilities to amplify my effectiveness – without any substantial investment.

People will begin to ask why they need recruiters. People are already beginning to ask this question, as they see the social network providing the same capabilities – and for free. This is something that should scare you a little bit, because it shows you that recruiting, as we’ve known it, has about as much life expectancy as a buggy-whip maker did in 1915. There are still a few years left in which recruiting will be a profitable business, but after that it will simply be overwhelmed by social networking tools which can amplify the powers of the average person so effectively that recruiting simply becomes another task on offer, like sending a message or posting a photo.

As people are drawn together over social networks, they get a better sense of the talents of those around them. This talent-spotting used to be the sine qua non of the recruiter. Now that each of us can manage connections far beyond the natural limit of 150, we each learn our respective strengths. We use systems like LinkedIn to help us keep tally of those strengths. We use the tools to deploy those strengths. Everything happens because the tools empower us. But will they empower us so much that recruiters become redundant?

You need to have a good think about your business, and about the way you practice your business. You need to have a good look at the tools – particularly LinkedIn, but also Twitter and Facebook. You’ll learn that these tools are good at some things, and lousy at others. Here’s the question: are you good at the things the tools aren’t? Tools are no substitute for relationships. Even though the tools give us some false sense of relationship, it’s not the real thing. Recruiting is the real thing. But, is that enough?

III. Social Media Gods

In times long past – and by this, I mean just five years ago – recruiters were the masters of the Rolodex. You survived and thrived by knowing everybody, everywhere, with talent, and everybody, everywhere, who needed that talent. That in itself is quite a talent. But that talent is no longer enough. It is, however, the springboard to get you to the next level.

Fasten your seatbelts. You’re about to get launched headlong into the future. I want you to imagine a time – let’s say, tomorrow afternoon – when the average person now has quite extraordinary Rolodex capabilities, courtesy of the social networks, and where you, the masters, have gone beyond that into regions undreamed of. Imagine being able to take each of your contacts, and use those as starting points for new contacts within new networks. You’d have an inner ring of close contacts – just as you do today, but multiplied by the capabilities of the tools to support and nurture these contacts. Outside that inner ring, you’d have consecutive rings of contacts-to-contacts, and contacts-to-contacts-to-contacts, and so on, all the way out until the network simply becomes too diffuse and too difficult to maintain.

If this sounds familiar, it’s because it echoes the famed ‘six degrees of separation’ – the idea that we are all at most six acquaintances away from any other person on the planet. Australia is a lot smaller than the world; within any particular domain of expertise, there’s really only one or two degrees of separation, whether that’s in filmmaking, medicine, or software engineering. There just aren’t that many of us. Fortunately, that means that our networks aren’t deep: we can more-or-less know everyone involved in our field, with the help of a good Rolodex.
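The back-of-envelope arithmetic here is striking. A minimal sketch, assuming everyone maintains about 150 contacts (the natural limit discussed earlier) and ignoring the overlap between contact lists, which makes these figures generous upper bounds:

```python
# Upper bound on people reachable within a given number of hops,
# assuming 150 contacts per person and (unrealistically) no overlap
# between anyone's contact lists. Figures are illustrative only.

def reach(contacts_per_person: int, degrees: int) -> int:
    """Sum of contacts, contacts-of-contacts, and so on out to `degrees` hops."""
    total = 0
    for d in range(1, degrees + 1):
        total += contacts_per_person ** d
    return total

print(reach(150, 2))   # 22650 – two hops: a small city
print(reach(150, 5) > 7_000_000_000)   # five hops already exceed the planet
```

Even with heavy overlap in real networks, a handful of hops covers any professional domain – which is why one or two degrees suffice within a single field.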

You have more than a good Rolodex. You have the new tools; you can build a Rolodex of Rolodexes, one Rolodex per discipline, and use that to track everybody, everywhere, who matters. In this future – which is really just tomorrow afternoon – you’ve so leveraged your network resources that each of you sits in the middle of a vast web, and each time there’s a twitch upon a thread, you know about it, because that information is shared throughout your networks, and finds its way toward your receptive ears.

You’re going to need good tools to make this ambitious project a reality, and you’re going to need them for two entirely contradictory reasons: first, to be able to listen to everything going on everywhere, and second, because that chaotic din will deafen you. You need tools to help you find out what’s going on, but, more significantly, you need tools to help you winnow the wheat from the chaff. Being well-connected means bearing the burden of drowning in pointless information. Without the right tools, as you grow your networks you will simply sink under the noise.

What tools? They barely exist today. Google Alerts is one tool that will help keep you abreast of news as it is created on the net. Within the next few months, Google will begin to digest the endless ‘feeds’ created by Facebook and Twitter users, and you’ll be able to search through those as well. But again, there’s just too much there. You likely need a more professional tool, such as Sydney’s own PeopleBrowsr, to sift through the wealth of information that will be generated by your ever-more-encompassing networks of networks.
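What might such a winnowing tool look like under the hood? A minimal sketch, with invented scoring weights and sample data – not how PeopleBrowsr or Google Alerts actually work – showing the basic idea: score each incoming item by who shared it and whether it touches the topics you track, and surface only what clears a threshold.

```python
# A toy 'winnowing' filter: rank incoming shared items by sender
# closeness and topic relevance. Weights and sample feed are invented
# purely for illustration.

def score(item: dict, inner_ring: set, topics: set) -> float:
    s = 0.0
    if item["sender"] in inner_ring:
        s += 2.0                      # closest contacts count most
    words = set(item["text"].lower().split())
    s += len(words & topics)          # one point per tracked topic mentioned
    return s

def winnow(stream, inner_ring, topics, threshold=2.0):
    """Keep only items that clear the relevance threshold."""
    return [i for i in stream if score(i, inner_ring, topics) >= threshold]

feed = [
    {"sender": "alice", "text": "Electrical engineers wanted in Melbourne"},
    {"sender": "stranger", "text": "Buy cheap watches"},
]
kept = winnow(feed, inner_ring={"alice"}, topics={"melbourne", "engineers"})
print(kept)   # only the item from the close contact survives
```

The real tools will be far more sophisticated, but the principle is the same: without some ranking of this kind, a nationwide network of networks is just noise.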

I should point out – for the more entrepreneurial among you – there is now a market for tools that recruiters need to become better recruiters: tools that harness the networks. Such tools will need to be designed by someone who understands the recruiting business and the network. That means it could be one of you. You could partner with a Google or a PeopleBrowsr, or strike out on your own. If you don’t do it, one of your competitors – either in Australia or overseas – certainly will.

The first half of my advice is simply this: build your networks. Build them out to unimaginable reaches. Use the tools to leverage your capabilities. Use the tools as if your livelihood depended upon it. Because it does. Behind you are a new generation, unafraid to use the tools to build their networks up. When you go head-to-head against them, those with the best networks – and the best tools – will tend to win. That’s what the next decade looks like, as we transition from the Rolodex to the social network: more and more business will go to the well-networked. So really, there is no choice: adapt or die.

There’s another face to this, one that turns itself outward. Sure, you’ve created this vast and nationwide network to feed you information. But you’ve got to do more than listen. You must present yourself within the network. You must be present. Many people and most companies think that they can use social media as an advertising medium. Plenty of firms set up Facebook pages and Twitter accounts and post lots of advertising messages to an ever-decreasing number of followers.

People don’t want to get spammed. They don’t want to hear your marketing messages over a communications channel that they consider personal. So please, don’t make this mistake. In fact, I’ll go even further – don’t think of the Web as an advertising medium. Sure, it had a few good years where a business presence online was simply a great way to get your marketing materials out there inexpensively, but those days are over. Today everything is about engagement. Engagement begins with conversation.

Conversation is a tricky thing: on the one hand it’s the most natural of human capabilities; on the other hand, it’s fraught with disaster. Social media amplifies both sides of this equation. There are more places for more conversations than ever before, and more opportunities for these conversations to run off the rails. Here are some simple rules of thumb which should keep you out of trouble:

Only go where you’re invited. No one likes a salesman who sticks their foot in the door.

Participate in a conversation from a place of authenticity. Let people know who you are and why you’re there.

Spend time building relationships. Social media is a lot like friendship – it takes time and investment and a bit of love to make it work.

Be consistent. Invest time every single day, or at least with regularity. If you can’t do that, it’s probably better you do nothing at all.

Where are these conversations happening? All around you: on Twitter and Facebook and LinkedIn and YouTube and Flickr and a thousand blogs. They’re happening all the time, everywhere. You probably want to spend some time investigating these conversations before you participate. That’s known as ‘lurking’, and it’s the foundation of successful net relationships. Having an appreciation and an understanding of a community before you participate within it shows respect. Respect will be reciprocated.

That’s about it for today – and frankly, that’s quite a lot. I’ve asked you to re-invent yourselves for the mid-21st century. I’ve asked you to become the gods of social media, to translate your natural role as connectors and facilitators into a greatly amplified form, just so you can remain competitive. I’m not saying that this transition will happen overnight. You have at least a few years to become adept with the tools, and a few more to build out those nationwide networks. But I can promise this: at the close of the 2nd decade of the 21st century, recruiting will look entirely different.

Every social network has a few individuals who are ‘superconnected’, who have many more connections than their peers within the network. Those individuals are the glue that holds the network together. This is your natural role. The challenge, moving forward, is to remain extraordinary when everyone around you becomes superconnected themselves. It will take some work, and some time, but it can be done. Good luck.

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the oN-Line System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major problem in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

So, more than can be said for any other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about Wikipedia, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually then rapidly sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until – as of this writing, there are 2.65 million articles in the English language in Wikipedia.
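That inflection point can be made concrete with a toy feedback model. Every number below is invented for illustration – only the shape of the result matters: below some critical mass of articles, visitors rarely convert into contributors and growth crawls; above it, the virtuous cycle compounds.

```python
# Toy model of Wikipedia's virtuous cycle: articles draw visitors,
# and visitors convert into contributors only once the site feels
# useful. All rates and thresholds are invented for illustration.

def simulate(articles: float, months: int) -> float:
    CRITICAL_MASS = 25_000   # assumed threshold where visitors 'like what they see'
    for _ in range(months):
        visitors = articles * 2.0                 # assumed visitors per article
        conversion = 0.05 if articles >= CRITICAL_MASS else 0.0005
        articles += visitors * conversion         # new articles from contributors
    return articles

print(simulate(14_000, 24))   # below critical mass: barely moves in two years
print(simulate(30_000, 24))   # above it: explosive compounding growth
```

The model is crude, but it captures why Wikipedia looked like a toy at 14,000 articles and a force of nature not long after crossing 30,000.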

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought as to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart has known for forty years that sharing information is the cornerstone to intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I’d have to imagine that this is pleasing Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: How can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best tools we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here, and go back to your own schools none the wiser. Or, you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in the old proverb: “Many hands make light work.” The two hundred of you, here today, can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!

In mid-1994, sometime shortly after Tony Parisi and I had fused the new technology of the World Wide Web to a 3D visualization engine, to create VRML, we paid a visit to the University of California, Santa Cruz, about 120 kilometers south of San Francisco. Two UCSC students wanted to pitch us on their own web media project. The Internet Underground Music Archive, or IUMA, featured a simple directory of artists, complete with links to MP3 files of these artists’ recordings. (Before I go any further, I should state that they had all the necessary clearances to put musical works up onto the Web – IUMA was not violating anyone’s copyrights.) The idea behind IUMA was simple enough, the technology absolutely straightforward – and yet, for all that, it was utterly revolutionary. Anyone, anywhere could surf over to the IUMA site, pick an artist, then download a track and play it.

This was in the days before broadband, so downloading a multi-megabyte MP3 recording could take upwards of an hour per track – something that seems ridiculous today, but was still so potent back in 1994 that IUMA immediately became one of the most popular sites on the still-quite-tiny Web. The founders of IUMA – Rob Lord and Jon Luini – wanted to create a place where unsigned or non-commercial musicians could share their music with the public in order to reach a larger audience, gain recognition, and perhaps even end up with a recording deal. IUMA was always better as a proof-of-concept than as a business opportunity, but the founders did get venture capital, and tried to make a go of selling music online. However, given the relative obscurity of the musicians on IUMA, and the pre-iPod lack of pervasive MP3 players, IUMA ran through its money by 2001, shuttering during the dot-com implosion of the same year. Despite that, every music site which followed IUMA, legal and otherwise, from Napster to Rhapsody to iTunes, has walked in its footsteps. Now, nearing the end of the first decade of the 21st century, we have a broadband infrastructure capable of delivering MP3s, and several hundred million devices which can play them. IUMA was a good idea, but five years too early.

Just forty-eight hours ago, a new music service, calling itself Qtrax, aborted its international launch – though it promises to be up “real soon now.” Qtrax also promises that anyone, anywhere will be able to download any of its twenty-five million songs perfectly legally, and listen to them practically anywhere they like – along with an inserted advertisement. Using peer-to-peer networking to relieve the burden on its own servers, and Digital Rights Management, or DRM, Qtrax ensures that there are no abuses of these pseudo-free recordings.

Most of the words that I used to describe Qtrax in the preceding paragraph didn’t exist in common usage when IUMA disappeared from the scene in the first year of this millennium. The years between IUMA and Qtrax are a geological age in Internet time, so it’s a good idea to walk back through that era and have a good look at the fossils which speak to how we evolved to where we are today.

In 1999, a curly-haired undergraduate at Boston’s Northeastern University built a piece of software that allowed him to share his MP3 collection with a few of his friends on campus, and allowed him access to their MP3s. The software scanned the MP3s on each hard drive, published the list to a shared database, and allowed anyone running it to download a track directly from someone else’s hard drive. This is simple enough, technically, but Shawn Fanning’s Napster created a dual-headed revolution. First, it was the killer app for broadband: using Napster on a dial-up connection was essentially impossible. Second, it completely ignored the established systems of distribution used for recorded music.
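The architecture just described – a central catalogue, peer-to-peer transfers – can be sketched in a few lines. This is a toy illustration of the centralized-index idea, not Napster’s actual protocol; all the names here are invented for the example.

```python
# Toy sketch of a Napster-style centralized index (illustrative only,
# not the real Napster protocol). Clients register their local tracks
# with a central server; searches hit that server, but the actual file
# transfer happens directly between peers.

class CentralIndex:
    def __init__(self):
        self.tracks = {}  # track name -> set of peers holding it

    def register(self, peer, track_names):
        """A client announces which tracks it is sharing."""
        for name in track_names:
            self.tracks.setdefault(name, set()).add(peer)

    def search(self, name):
        """Return every peer known to hold the named track."""
        return sorted(self.tracks.get(name, set()))

index = CentralIndex()
index.register("alice", ["song_a.mp3", "song_b.mp3"])
index.register("bob", ["song_b.mp3"])

print(index.search("song_b.mp3"))  # → ['alice', 'bob']
print(index.search("song_c.mp3"))  # → []
```

The single `CentralIndex` object is the design’s legal weak point: shut down that one server and every search fails at once – which is precisely what the lawsuit against Napster achieved.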

This second point is the one which has the most relevance to my talk this morning; Napster had an entirely unpredicted effect on the distribution methodologies which had been the bedrock of the recording industry for the past hundred years. The music industry grew up around the licensing, distribution and sale of a physical medium – a piano roll, a wax recording, a vinyl disk, a digital compact disc. However, when the recording industry made the transition to CDs in the 1980s (and reaped windfall profits as the public purchased new copies of older recordings) they also signed their own death warrants. Digital recordings are entirely ephemeral, composed only of mathematics, not of matter. Any system which transmitted the mathematics would suffice for the distribution of music, and the compact disc met this need only until computers were powerful enough to play the more compact MP3 format, and broadband connections were fast enough to allow these smaller files to be transmitted quickly. Napster leveraged both of these factors – the mathematical nature of digitally-encoded music and the prevalence of broadband connections on America’s college campuses – to produce a sensation.

In its earliest days, Napster reflected the tastes of its college-age users, but, as word got out, the collection of tracks available through Napster grew more varied and more interesting. Many individuals took recordings that were only available on vinyl, and digitally recorded them specifically to post them on Napster. Napster quickly had a more complete selection of recordings than all but the most comprehensive music stores. This only attracted more users to Napster, who added more oddities from their own collections, which attracted more users, and so on, until Napster came to be seen as the authoritative source for recorded music.

Given that all of this “file-sharing”, as it was termed, happened outside of the economic systems of distribution established by the recording industry, it was taking money out of their pockets – probably billions of dollars a year, had all of these downloads been converted into sales. (Studies indicate this was unlikely – college students have ever been poor.) The recording industry launched a massive lawsuit against Napster in 2000, forcing the service to shutter in 2001, just as it reached an incredible peak of 14 million simultaneous users, out of a worldwide broadband population of probably only 100 million. This means that one in seven computers connected to the broadband internet was using Napster just as it was being shut down.

Here’s where it gets more interesting: the recording industry thought they’d brought the horse back into the barn. What they hadn’t realized was that the gate had burnt down. The millions of Napster users had their appetites whetted by a world where an incredible variety of music was instantaneously available with a few clicks of the mouse. In the absence of Napster, that pressure remained, and it only took a few weeks for a few enterprising engineers to create a successor to Napster, known as Gnutella, which provided the same service as Napster, but used a profoundly different technology for its filesharing. Where Napster had all of its users register their tracks within a centralized database (which disappeared when Napster was shut down) Gnutella created a vast, amorphous, distributed database, spread out across all of the computers running Gnutella. Gnutella had no center to strike at, and therefore could not be shut down.
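The contrast with Napster can be made concrete. In a Gnutella-style network, a query floods outward from neighbour to neighbour, and any peer holding the file answers; there is no central server to strike at. The sketch below is a heavily simplified model – real Gnutella queries carry message IDs, hop counts and more – and every name in it is invented for the example.

```python
# Heavily simplified model of Gnutella-style flooded search. There is
# no central index: the query spreads breadth-first from neighbour to
# neighbour, limited by a time-to-live, and every peer that holds the
# track reports a hit.

def flood_search(peers, neighbours, start, track, ttl=4):
    """Flood a query from `start`; return every peer holding `track`."""
    hits, visited = [], {start}
    frontier = [(start, ttl)]
    while frontier:
        peer, t = frontier.pop(0)          # breadth-first order
        if track in peers[peer]:
            hits.append(peer)
        if t == 0:
            continue                        # query expires here
        for n in neighbours[peer]:
            if n not in visited:
                visited.add(n)
                frontier.append((n, t - 1))
    return hits

# Four peers, two of which hold the track; "a" issues the query.
peers = {"a": set(), "b": {"song.mp3"}, "c": set(), "d": {"song.mp3"}}
neighbours = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}

print(flood_search(peers, neighbours, "a", "song.mp3"))  # → ['b', 'd']
```

Note what is missing: no single data structure holds the whole catalogue, so there is nothing a court order can switch off – the index exists only as the sum of the peers themselves.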

It is because of the actions of the recording industry that Gnutella was developed. If legal pressure hadn’t driven Napster out of business, Gnutella would not have been necessary. The recording industry turned out to be its own worst enemy, because it turned a potentially profitable relationship with its customers into an ever-escalating arms race of file-sharing tools, lawsuits, and public relations nightmares.

Once Gnutella and its descendants – Kazaa, Limewire, and Acquisition – arrived on the scene, the listening public had wholly taken control of the distribution of recorded music. Every attempt to shut down these ever-more-invisible “darknets” has ended in failure and only spurred the continued growth of these networks. Now, with Qtrax, the recording industry is seeking to make an accommodation with an audience which expects music to be both free and freely available, falling back on advertising revenue to recover some of their production costs.

At first, it seemed that filmic media would be immune from the disruptions that have plagued the recording industry – films and TV shows, even when heavily compressed, are very large files, on the order of hundreds of millions of bytes of data. Systems like Gnutella, which allow you to transfer a file directly from one computer to another, are not particularly well-suited to such large file transfers. In 2002, an unemployed programmer named Bram Cohen solved that problem definitively with the introduction of a new file-sharing system known as BitTorrent.

BitTorrent is a bit mysterious to most everyone not deeply involved in technology, so a brief explanation of its inner workings is in order. Suppose, for a moment, that I have a short film, just 1000 frames in length, digitally encoded on my hard drive. If I wanted to share this film with each of you via Gnutella, you’d have to wait in a queue as I served up the film, time and time again, to each of you. The last person in the queue would wait quite a long time. But if, instead, I gave the first ten frames of the film to the first person in the queue, and the second ten frames to the second person in the queue, and the third ten frames to the third person in the queue, and so on, until I’d handed out all thousand frames, all I need do at that point is tell each of you that each of your “peers” has the missing frames, and that you need to get them from those peers. A flurry of transfers would result, as each peer picked up the pieces it needed to make a complete whole from other peers. From my point of view, I only had to transmit the film once – something I can do relatively quickly. From your point of view, none of you had to queue to get the film – because the pieces were scattered widely around, in little puzzle pieces, that you could gather together on your own.

That’s how BitTorrent works. It is both incredibly efficient and incredibly resilient – peers can come and go as they please, yet the total number of peers guarantees that somewhere out there is an entire copy of the film available at all times. And, even more perversely, the more people who want copies of my film, the easier it is for each successive person to get a copy of the film – because there are more peers to grab pieces from. This group of peers, known as a “swarm”, is the most efficient system yet developed for the distribution of digital media. In fact, a single, underpowered computer, on a single, underpowered broadband link can, via BitTorrent, create a swarm of peers. BitTorrent allows anyone, anywhere, to distribute any large media file at essentially no cost.
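The queue-free exchange described above can be simulated in a few lines. This is only a toy model of the idea – the real protocol adds trackers, piece hashes, choking, rarest-first selection and much more – and all the numbers and names below are invented for the illustration.

```python
# Toy simulation of BitTorrent-style piece swapping (illustrative only;
# the real protocol adds trackers, hashes, choking, rarest-first, etc.).
# The seeder hands out each piece exactly once; the peers then swap
# pieces among themselves until every peer holds the complete file.

import random

def swarm_download(n_pieces, n_peers, seed=1):
    """Return how many exchange rounds the swarm needs to complete."""
    rng = random.Random(seed)
    # Seeder distributes each piece once, round-robin across the peers.
    have = [{p for p in range(n_pieces) if p % n_peers == i}
            for i in range(n_peers)]
    rounds = 0
    while any(len(h) < n_pieces for h in have):
        rounds += 1
        for i in range(n_peers):
            missing = set(range(n_pieces)) - have[i]
            if not missing:
                continue
            # Ask one random peer for a piece we lack and they hold.
            j = rng.randrange(n_peers)
            offer = have[j] & missing
            if offer:
                have[i].add(min(offer))
    return rounds

# A 1000-frame film split into 100 pieces, shared among 10 peers:
print(swarm_download(n_pieces=100, n_peers=10))
```

The point the simulation makes is the one in the paragraph above: the seeder’s upload cost stays constant no matter how many peers join, because after the first pass the peers supply each other.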

It is estimated that upwards of 60% of all traffic on the Internet is composed of BitTorrent transfers. Much of this traffic is perfectly legitimate – software, such as the free Linux operating system, is distributed using BitTorrent. Still, it is well known that movies and television programmes are also distributed using BitTorrent, in violation of copyright. This became absolutely clear on the 14th of October 2004, when Sky Broadcasting in the UK premiered the first episode of Battlestar Galactica, Ron Moore’s dark re-imagining of the famous schlocky 1970s TV series. Because the American distributor, SciFi Channel, had chosen to hold off until January to broadcast the series, fans in the UK recorded the programmes and posted them to BitTorrent for American fans to download. Hundreds of thousands of copies of the episodes circulated in the United States – and conventional thinking would reckon that this would seriously impact the ratings of the show upon its US premiere. In fact, precisely the opposite happened: the show was so well written and produced that the word-of-mouth engendered by all this mass piracy created an enormous broadcast audience for the series, making it the most successful series in SciFi Channel history.

In the age of BitTorrent, piracy is not necessarily a menace. The ability to “hyperdistribute” a programme – using BitTorrent to send a single copy of a programme to millions of people around the world efficiently and instantaneously – creates an environment where the more something is shared, the more valuable it becomes. This seems counterintuitive, but only in the context of systems of distribution which were part-and-parcel of the scarce exhibition outlets of theaters and broadcasters. Once everyone, everywhere had the capability to “tune into” a BitTorrent broadcast, the economics of distribution were turned on their heads. The distribution gatekeepers, stripped of their power, whinge about piracy. But, as was the case with recorded music, the audience has simply asserted its control over distribution. This is not about piracy. This is about the audience getting whatever it wants, by any means necessary. They have the tools, they have the intent, and they have the power of numbers. It is foolishness to insist that the future will be substantially different from the world we see today. We can not change the behavior of the audience. Instead, we must all adapt to things as they are.

But things as they are have changed more than you might know. This is not the story of how piracy destroyed the film industry. This is the story of how the audience became not just the distributors but the producers of their own content, and, in so doing, brought down the high walls which separate professionals from amateurs.

II. The Barbarian Hordes Storm the Walls

Without any doubt the most outstanding success of the second phase of the Web (known colloquially as “Web 2.0”) is the video-sharing site YouTube. Founded in early 2005, as of yesterday YouTube was the third most visited site on the entire Web, behind only Yahoo! and YouTube’s parent, Google. There are a lot of videos on YouTube. I’m not sure if anyone knows quite how many, but they easily number in the tens of millions, quite likely approaching a hundred million. Another hundred thousand videos are uploaded each day; YouTube grows by three million videos a month. That’s a lot of video, difficult even to contemplate. But an understanding of YouTube is essential for anyone in the film and television industries in the 21st century, because, in the most pure, absolute sense, YouTube is your competitor.

Let me unroll that statement a bit, because I don’t wish it to be taken as simply as it sounds. It’s not that YouTube is competing with you for dollars – it isn’t, at least not yet – but rather, it is competing for attention. Attention is the limiting factor for the audience; we are cashed up but time-poor. Yet, even as we’ve become so time-poor, the number of options for how we can spend that time entertaining ourselves has grown so grotesquely large as to be almost unfathomable. This is the real lesson of YouTube, the one I want you to consider in your deliberations today. In just the past three years we have gone from an essential scarcity of filmic media – presented through limited and highly regulated distribution channels – to a hyperabundance of viewing options.

This hyperabundance of choices, it was supposed until recently, would lead to a sort of “decision paralysis,” whereby the viewer would be so overwhelmed by the number of choices on offer that they would simply run back, terrified, to the highly regularized offerings of the old-school distribution channels. This has not happened; in fact, the opposite has occurred: the audience is fragmenting, breaking up into ever-smaller “microaudiences”. It is these microaudiences that YouTube speaks directly to. The language of microaudiences is YouTube’s native tongue.

In order to illustrate the transformation that has completely overtaken us, let’s consider a hypothetical fifteen year-old boy, home after a day at school. He is multi-tasking: texting his friends, posting messages on Bebo, chatting away on IM, surfing the web, doing a bit of homework, and probably taking in some entertainment. That might be coming from a television, somewhere in the background, or it might be coming from the Web browser right in front of him. (Actually, it’s probably both simultaneously.) This teenager has a limited suite of selections available on the telly – even with satellite or cable, there won’t be more than a few hundred choices on offer, and he’s probably settled for something that, while not incredibly satisfying, is good enough to play in the background.

Meanwhile, on his laptop, he’s viewing a whole series of YouTube videos that he’s received from his friends; they’ve found these videos in their own wanderings, and immediately forwarded them along, knowing that he’ll enjoy them. He views them, and laughs, he forwards them along to other friends, who will laugh, and forward them along to other friends, and so on. Sharing is an essential quality of all of the media this fifteen year-old has ever known. In his eyes, if it can’t be shared, a piece of media loses most of its value. If it can’t be forwarded along, it’s broken.

For this fifteen year-old, the concept of a broadcast network no longer exists. Television programmes might be watched as they’re broadcast over the airwaves, but more likely they’re spooled off of a digital video recorder, or downloaded from the torrent and watched where and when he chooses. The broadcast network has been replaced by the social network of his friends, all of whom are constantly sharing the newest, coolest things with one another. The current hot item might be something that was created at great expense for a mass audience, but the relationship between a hot piece of media and its meaningfulness for a microaudience is purely coincidental. All the marketing dollars in the world can foster some brand awareness, but no amount of money will inspire that fifteen year old to forward something along – because his social standing hangs in the balance. If he passes along something lame, he’ll lose social standing with his peers. This factors into every decision he makes, from the brand of runners he wears, to the television series he chooses to watch. Because of the hyperabundance of media – something he takes as a given, not as an incredibly recent development – all of his media decisions are weighed against the values and tastes of his social network, rather than against a scarcity of choices.

This means that the true value of media in the 21st century is entirely personal, and based upon the salience, that is, the importance, of that media to the individual and that individual’s social network. The mass market, with its enforced scarcity, simply does not enter into his calculations. Yes, he might go to the theatre to see Transformers with his mates; but he’s just as likely to download a copy recorded in the movie theatre with an illegally smuggled-in camera that was uploaded to The Pirate Bay a few hours after its release.

That’s today. Now let’s project ourselves five years into the future. YouTube is still around, but now it has more than two hundred million videos (probably much more), all available, all the time, from short-form to full-length features, many of which are now available in high-definition. There’s so much “there” there that it is inconceivable that conventional media distribution mechanisms of exhibition and broadcast could compete. For this twenty year-old, every decision to spend some of his increasingly-valuable attention watching anything is measured against salience: “How important is this for me, right now?” When he weighs the latest episode of a TV series against some newly-made video that is meant only to appeal to a few thousand people – such as himself – that video will win, every time. It more completely satisfies him. As the number of videos on offer through YouTube and its competitors continues to grow, the number of salient choices grows ever larger. His social network, communicating now through Facebook and MySpace and next-generation mobile handsets and iPods and goodness-knows-what-else is constantly delivering an ever-growing and increasingly-relevant suite of media options. He, as a vital node within his social network, is doing his best to give as good as he gets. His reputation depends on being “on the tip.”

When the barriers to media distribution collapsed in the post-Napster era, the exhibitors and broadcasters lost control of distribution. What no one had expected was that the professional producers would lose control of production. The difference between an amateur and a professional – in the media industries – has always centered on the point that the professional sells their work into distribution, while the amateur uses wits and will to self-distribute. Now that self-distribution is more effective than professional distribution, how do we distinguish between the professional and the amateur? This twenty year-old doesn’t know, and doesn’t care.

There is no conceivable way that the current systems of film and television production and distribution can survive in this environment. This is an uncomfortable truth, but it is the only truth on offer this morning. I’ve come to this conclusion slowly, because it seems to spell the death of a hundred year-old industry with many, many creative professionals. In this environment, television is already rediscovering its roots as a live medium, increasingly focusing on news, sport and “event” based programming, such as Pop Idol, where being there live is the essence of the experience. Broadcasting is uniquely designed to support the efficient distribution of live programming. Hollywood will continue to churn out blockbuster after blockbuster, seeking a warmed-over middle ground of thrills and chills which ensures that global receipts will cover the ever-increasing production costs. In this form, both industries will continue for some years to come, and will probably continue to generate nice profits. But the audience’s attentions have turned elsewhere. They’re not returning.

This future almost completely excludes “independent” production, a vague term which basically means any production which takes place outside of the media megacorporations (News Corp, Disney, Sony, Universal and TimeWarner), which increasingly dominate the mass media landscape. Outside of their corporate embrace, finding an audience sufficient to cover production and marketing costs has become increasingly difficult. Film and television have long been losing economic propositions (except for the most lucky), but they’re now becoming financially suicidal. National and regional funding bodies are growing increasingly intolerant of funding productions which can not find an audience; soon enough that pipeline will be cut off, despite the damage to national cultures. Australia funds the Film Finance Corporation and the Australian Film Council to the tune of a hundred million dollars a year, to ensure that Australian stories are told by Australian voices; but Australians don’t go to see them in the theatres, and don’t buy them on DVD.

The center can not hold. Instead, YouTube, which founder Steve Chen insists has “no gold standard” of production values, is rapidly becoming the vehicle for independent productions; productions which cost not millions of euros, but hundreds, and which make up for their low production values in salience and in overwhelming numbers. This tsunami of content can not be stopped or even slowed down; it has nothing to do with piracy (only nine percent of the videos viewed on YouTube are violations of copyright) but reflects the natural accommodation of the audience to an era of media hyperabundance.

What then, is to be done?

III. And The Penny Drops

It isn’t all bad news. But, like a good doctor, I want to give you the bad news right up front: There is no single, long-term solution for film or television production. No panacea. It’s not even entirely clear that the massive Hollywood studios will do business-as-usual for any length of time into the future. Just a decade ago the entire music recording industry seemed impregnable. Now it lies in ruins. To assume that history won’t repeat itself is more than willful ignorance of the facts; it’s bad business.

This means that the one-size-fits-all production-to-distribution model, which all of you have been taught as the orthodoxy of the media industries, is worse than useless; it’s actually blocking your progress because it is effectively keeping you from thinking outside the square. This is a wholly new world, one which is littered with golden opportunities for those able to avail themselves of them. We need to get you from where you are – bound to an obsolete production model – to where you need to be. Let me illustrate this transition with two examples.

In early 2005, producer Rhonda Byrne got a production agreement with Channel NINE, then the number one Australian television network, to make a feature-length television programme about the “law of attraction”, an idea she’d learned of when reading a book published in 1910, The Science of Getting Rich. The interviews and other footage were shot in July and August, and after a few months in the editing suite, she showed the finished production to executives at Channel NINE, who declined to broadcast it, believing it lacked mass appeal. Since Byrne wasn’t going to be getting broadcast fees from Channel NINE to cover her production costs, she negotiated a new deal with NINE, allowing her to sell DVDs of the completed film.

At this point Byrne began spreading news of the film virally, through the communities she thought would be most interested in viewing it; specifically, spiritual and “New Age” communities. People excited by Byrne’s teaser marketing could pay $20 for a DVD copy of the film (with extended features), or pay $5 to watch a streaming version directly on their computer. As the film made its way to its intended audience, word-of-mouth caused business to mushroom overnight. The Secret became a blockbuster, selling millions of copies on DVD. A companion book, also titled The Secret, has sold over two million copies. And that arbiter of American popular taste, Oprah, has featured the film and book on her talk show, praising both to the skies. The film has earned back many, many times its production costs, making Byrne a wealthy woman. She’s already deep into the production of a sequel to The Secret – a film which already has an audience identified and targeted.

Chagrined, the television executives of Channel NINE finally did broadcast The Secret in February 2007. It didn’t do that well. This sums up the paradox of distribution in the age of the microaudience. Clearly The Secret had a massive world-wide audience, but television wasn’t the most effective way to reach them, because this audience was actually a collection of microaudiences, rather than a single, aggregated audience. If The Secret had opened theatrically, it’s unlikely it would have done terribly well; it’s the kind of film that people want to watch more than once, being in equal parts a self-help handbook and a series of inspirational stories. It is well-suited for a direct-to-DVD release – a distribution vehicle that no longer has the stigma of “failure” associated with it. It is also well-suited to cross-media projects, such as books, conferences, streamed delivery, podcasts, and so forth. Having found her audience, Byrne has transformed The Secret into an exceptional money-making franchise, as lucrative, in its own way, and at its own scale, as any Hollywood franchise.

The second example is utterly different from The Secret, yet the fundamentals are strikingly similar. Just last month a production group calling themselves “The League of Peers” released a film titled Steal This Film, Part 2. The first part of this film, released in late 2006, dealt with the rise of file-sharing, and, in specific, with the legal troubles of the world’s largest BitTorrent site, Sweden’s The Pirate Bay. That film, although earnest and coherent, felt as though it was produced by individuals still learning the craft of filmmaking. This latest film looks as professional as any documentary created for BBC’s Horizon or PBS’s Frontline or ABC’s 4Corners. It is slick, well-lit, well-edited, and has a very compelling story to tell about the history of copying – beginning with the invention of the printing press, five hundred years ago. Steal This Film is a political production, a bit of propaganda with a bias. This, in itself, is not uncommon in a documentary. The funding and distribution model for this film is what makes it relatively unique.

Individuals who saw Steal This Film, Part One – which was made freely available for download via BitTorrent – were invited to contribute to the making of the sequel. Nearly five million people downloaded Steal This Film, Part One, so there was a substantial base of contributors to draw from. (I myself donated five dollars after viewing the film. If every viewer had done likewise, that would cover the budget of a major Hollywood production!) The League of Peers also approached arts funding bodies, such as the British Documentary Council, with their completed film in hand, the statistics showing that their work reached a large audience, and a roadmap for the second film – this got them additional funding. Now, having released Steal This Film, Part Two, viewers are again invited to contribute (if they like the film), and are promised a “secret gift” for contributions of $15 or more. While the tip jar – literally, busking – may seem a very weird way to fund a film production, it’s likely that Steal This Film, Part Two will find an even wider audience than Part One, and that the coffers of the League of Peers will provide them with enough funds to embark on their next film, The Oil of the 21st Century, which will focus on the evolution of intellectual property into a traded commodity.

I have asked Screen Training Ireland to include a DVD of Steal This Film, Part Two with the materials you received this morning. You’ve been given the DVD version of the film, but I encourage you to download the other versions of the film: the XVID version, for playback on a PC; the iPod version, for portable devices; and the high-definition version, for your visual enjoyment. It’s proof positive that a viable economic model exists for film, even when it is given away. It will not work for all productions, but there is a global community of individuals who are intensely interested in factual works about copyright and intellectual property in the 21st century, who find these works salient, and who are underserved by the media megacorporations, who would not consider it in their own economic best interest to produce or distribute such works. The League of Peers, as part of the community whom this film is intended for, knew how to get the word out about the film (particularly through Boing Boing, the most popular blog in the world, with two million readers a week), and, within a few weeks, nearly everyone who should have heard of the film had heard about it – through their social networks.

Both The Secret and Steal This Film, Part Two are factual works, and it’s clear that this emerging distribution model – which relies on targeting communities of interest – works best with factual productions. One of the reasons that there has been such an upsurge in the production of factual works over the past few years is that these works have been able to build their own funding models upon a deep knowledge of the communities they are talking to – made by microaudiences, for microaudiences. But microaudiences, scaled to global proportions, can easily number in the millions. Microaudiences are perfectly willing to pay for something or contribute to something they consider of particular value and salience; it is a visible thank you, a form of social reinforcement which is very natural within social networks.

What about drama, comedy and animation? Short-form comedy and animation probably have the easiest go of it, because they can be delivered online with an advertising payload of some sort. Happy Tree Friends is a great example of how this works – but it took producers Mondo Media nearly a decade to stumble into a successful economic model. Feature-length comedy and feature-length drama are more difficult nuts to crack, but they are not impossible. Again, the key is to find the communities which will be most interested in the production; this is not always entirely obvious, but the filmmaker should have some idea of the target audience for their film. While in preproduction, these communities need to be wooed and seduced into believing that this film is meant just for them, that it is salient. Productions can be released through complementary distribution channels: a limited, occasional run in rented exhibition spaces (which can be “events”, created to promote and showcase the film); direct DVD sales (which are highly lucrative if the producer does this directly); online distribution vehicles such as iTunes Movie Store; and through “community” viewing, where a DVD is given to a few key members of the community in the hopes that word-of-mouth will spread in that community, generating further DVD sales.

None of this guarantees success, but it is the way things work for independent productions in the 21st century. All of this is new territory. It isn’t a role that belongs neatly to the producer of the film, nor, in the absence of studio muscle, is it something that a film distributor would be competent at. It may not be the producer’s job, but it is someone’s job. Starting at the earliest stages of pre-production, someone has to sit down with the creatives and the producer and ask the hard questions: “Who is this film intended for?” “What audiences will want to see this film – or see it more than once?” “How do we reach these audiences?” From these first questions, it should be possible to construct a marketing campaign which leverages microaudiences and social networks into ticket receipts and DVD sales and online purchases.

So, as you sit down to do your planning today, and discuss how to move Irish screen industries into the 21st century, ask yourselves who will be fulfilling this role. The producer is already overloaded, time-poor, and may not be particularly good at marketing. The director has a vision, but might be practically autistic when it comes to working with communities. This is a new role, one that is utterly vital to the success of the production, but one which is not yet budgeted for, and one which we do not yet train people to fill. Individuals have succeeded in this new model through their own tireless efforts, but each of these efforts has been scattershot; there is a way to systematize this. While every production and every marketing plan will be unique – drawn from the fundamentals of the story being told – there are commonalities across productions which people will be able to absorb and apply, production after production.

One of my favorite quotes from science fiction writer William Gibson goes, “The future is already here, it’s just not evenly distributed.” This is so obviously true for film and television production that I need only close by noting that there are a lot of success stories out there, individuals who have taken the new laws of hyperdistribution and sharing and turned them to their own advantage. It is a challenge, and there will be failures; but we learn more from our failures than from our successes. Media production has always been a gamble; but the audiences of the 21st century make success easier to achieve than ever before.

“The net interprets censorship as damage and routes around it.”
– John Gilmore

I read a very interesting article last week. It turns out that, despite its best efforts, the Communist government of the People’s Republic of China has failed to insulate its prodigious population from the outrageous truths to be found online. In the article from the Times, Wang Guoqing, a vice-minister in the information office of the Chinese cabinet, was quoted as saying, “It has been repeatedly proved that information blocking is like walking into a dead end.” If China, with all of the resources of a one-party state, and thus able to “lock down” its internet service providers, directing their IP traffic through a “great firewall of China”, cannot block the free flow of information, how can any government, anywhere – or any organization, or institution – hope to try?

Of course, we all chuckle a little bit when we see the Chinese attempt the Sisyphean task of damming the torrent of information which characterizes life in the 21st century. We, in the democratic West, know better, and pat ourselves on the back. But we are in no position to throw stones. Gilmore’s Law is not specifically tuned for political censorship; censorship simply means the willful withholding of information – for any reason. China does it for political reasons; in the West our reasons for censorship are primarily economic. Take, for example, the hullabaloo associated with the online release of Harry Potter and the Deathly Hallows, three days before its simultaneous, world-wide publication. It turns out that someone, somewhere, got a copy of the book, and laboriously photographed every single page of the 784-page text, bound these images together into a single PDF file, and then uploaded it to the global peer-to-peer filesharing networks. Everyone with a vested financial interest in the book – author J.K. Rowling, Bloomsbury and Scholastic publishing houses, film studio Warner Brothers – had been feeding the hype for the impending release, all focused around the 21st of July. An enormous pressure had been built up to “peek at the present” before it was formally unwrapped, and all it took was one single gap in the $20 million security system Bloomsbury had constructed to keep the text safely secure. Then it became a globally distributed media artifact. Curiously, Bloomsbury was reported as saying they thought it would only add to sales – if many people are reading the book now, even illegally, then even more people will want to be reading the book right now. Piracy, in this case, might be a good thing.

These two examples are data points which show the breadth and reach of Gilmore’s Law. Censorship, broadly defined, is anything which restricts the free flow of information. The barriers could be political, or they could be economic, or – as in the case immediately relevant today – they could be a nexus of the two. Broadband in Australia is neither purely an economic nor purely a political issue. In this, broadband reflects the Janus-like nature of Telstra, with one face turned outward, toward the markets, and another turned inward, toward the Federal Government. Even though Telstra is now (more or less) wholly privatized, the institutional memory of all those years as an arm of the Federal Government hasn’t yet faded. Telstra still behaves as though it has a political mandate, and is more than willing to use its near-monopoly economic strength to reinforce that impression.

Although seemingly unavoidable, given the established patterns of the organization, Telstra’s behavior has consequences. Telstra has engendered enormous resentment – both from its competitors and its customers – for its actions and attitude. They’ve recently pushed the Government too far (at least, publicly), and have been told to back off. What may not be as clear – and what I want to warn you of today – is how Telstra has sown the seeds of its own failure. What’s more, this may not be anything that Telstra can now avoid, because this is neither a regulatory nor an economic failure. It cannot be remedied by any mechanism that Telstra has access to. Instead, it may require a top-down rethinking of the entire business.

I: Network Effects

For the past several thousand years, the fishermen of Kerala, on the southern coast of India, have sailed their dhows out into the Indian Ocean, lowered their nets, and hoped for the best. When the fishing is good, they come back to shore fully laden, and ready to sell their catch in the little fish markets that dot the coastline. A fisherman might have a favorite market, docking there only to find that half a dozen other dhows have had the same idea. In that market there are too many fish for sale that day, and the fisherman might not even earn enough from his catch to cover costs. Meanwhile, in a market just a few kilometers away, no fishing boats have docked, and there’s no fish available at any price. This fundamental chaos of the fish trade in Kerala has been a fact of life for a very long time.

Just a few years ago, several of India’s rapidly-growing wireless carriers strung GSM towers along the Kerala coast. This gives those carriers a signal reach of up to about 25km offshore – enough to be very useful for a fisherman. While mobile service in India is almost ridiculously cheap by Australian standards – many carriers charge a penny for an SMS, and a penny or two per minute for voice calls – a handset is still relatively expensive, even one such as the Nokia 1100, which was marketed specifically at emerging mobile markets, designed to be cheap and durable. Such a handset might cost a month’s profits for a fisherman – which makes it a serious investment. But, at some point in the last few years, one fisherman – probably a more prosperous one – bought a handset, and took it to sea. Then, perhaps quite accidentally, he learned, through a call ashore, of a market wanting for fish that day, brought his dhow to dock there, and made a handsome profit. After that, the word got around rapidly, and soon all of Kerala’s fishermen were sporting their own GSM handsets, calling into shore, making deals with fishmongers, acting as their own arbitrageurs, creating a true market where none had existed before. Today in Kerala the markets are almost always stocked with just enough fish; the fishmongers make a good price for their fish, and the fishermen themselves earn enough to fully recoup the cost of their handsets in just two months. Mobile service in Kerala has dramatically altered the economic prospects for these people.

This is not the only example: in Kenya farmers call ahead to the markets to learn which ones will have the best prices for their onions and maize; spice traders, again in Kerala, use SMS to create their own, far-flung bourse. Although we in the West generally associate mobile communications with affluent lifestyles, a significant number of microfinance loans made by Grameen Bank in Bangladesh, and others in Pakistan, India, Africa and South America are used to purchase mobile handsets – precisely because the correlation between access to mobile communications and earning potential has become so visible in the developing world. Grameen Bank has even started its own carrier, GrameenPhone, to service its microfinance clientele.

Although economists are beginning to recognize and document this curious relationship between economics and access to communication, it needs to be noted that this relationship was not predicted – by anyone. It happened all by itself, emerging from the interaction of individuals and the network. People – who are always the intelligent actors in the network – simply recognized the capabilities of the network, and put them to work. As we approach the watershed month of October 2007, when three billion people will be using mobile handsets, when half of humanity will be interconnected, we can expect more of the unexpected.

All of this means that none of us – even the most foresighted futurist – can know in advance what will happen when people are connected together in an electronic network. People themselves are too resourceful, and too intelligent, for their behavior to be modeled in any realistic way. We might be able to model their network usage – though even that has confounded the experts – but we can’t know why they’re using the network, nor what kind of second-order effects that usage will have on culture. Nor can we realistically provision for service offerings; people are more intelligent, and more useful, than any other service the carriers could hope to offer. The only truly successful service offering in mobile communications is SMS – because it provides an asynchronous communications channel between people. The essential feature of the network is simply that it connects people together, not that it connects them to services.

This strikes at the heart of the most avaricious aspects of the carriers’ long-term plans, which center on increasing the levels of services on offer, by the carrier, to the users of the network. Although this strategy has consistently proven to be a complete failure – consider Compuserve, Prodigy and AOL – it nevertheless has become the idée fixe of shareholder reports, corporate plans, and press releases. The network, we are told, will become increasingly more intelligent, more useful, and more valuable. But all of the history of the network argues directly against this. Nearly 40 years after its invention, the most successful service on the Internet is still electronic mail, the Internet’s own version of SMS. Although the Web has become an important service in its own right, it will never be as important as electronic mail, because email connects individuals directly.

Although the network in Kerala was brought into being by the technology of GSM transponders and mobile handsets, the intelligence of the network truly does lie in the individuals who are connected by the network. Let’s run a little thought experiment, and imagine a world where all of India’s telecoms firms suffered a simultaneous catastrophic and long-lasting failure. (Perhaps they all went bankrupt.) Do you suppose that the fishermen would simply shrug their shoulders and go back to their old, chaotic market-making strategies? Hardly. Whether they used smoke signals, or semaphores, or mirrors on the seashore, they’d find some way to maintain those networks of communication – even in the absence of the technology of the network. The benefits of the network so far outweigh the costs of implementing it that, once created, networks cannot be destroyed. The network will be rebuilt from whatever technology comes to hand – because the network is not the technology, but the individuals connected through it.

This is the kind of bold assertion that could get me into a lot of trouble; after all, everyone knows that the network is the towers, the routers, and the handsets which comprise its physical and logical layers. But if that were true, then we could deterministically predict the qualities and uses of networks well in advance of their deployment. The quintessence of the network is not a physical property; it is an emergent property of the interaction of the network’s users. And while people do persistently believe that there is some “magic” in the network, the source of that magic is the endlessly inventive intellects of the network’s users. When someone – anywhere in the network – invents a new use for the network, it propagates widely, and almost instantaneously, transmitted throughout the length and breadth of the network. The network amplifies the reach of its users, but it does not goad them into being inventive. The real service providers are the users of the network.

I hope this gives everyone here some pause; after all, it is widely known that the promise to bring a high-speed broadband network to Australia is paired with the desire to provide services on that network, including – most importantly – IPTV. It’s time to take a look at that promise with our new understanding of the real power of networks. It is under threat from two directions: the emergence of peer-produced content; and the dramatic, disruptive collapse in the price of high-speed wide-area networking, which will empower individuals to create their own network infrastructure.

II: DIYnet

Although nearly all high-speed broadband providers – which are, by and large, monopoly or formerly monopoly telcos – have bet the house on the sale of high-priced services to finance the build-out of high-speed (ADSL2/FTTN/FTTH) network infrastructure, it is not at all clear that these service offerings will be successful. Mobile carriers earn some revenue from ringtone and game sales, but this is a trivial income stream when compared to the fees they earn from carriage. Despite almost a decade of efforts to milk more ARPU from their customers, those same customers have proven stubbornly resistant to a continuous fleecing. The only thing that customers seem obviously willing to pay for is more connectivity – whether that’s more voice calls, more SMS, or more data.

What is most interesting is what these customers have done with this ever-increasing level of interconnectivity. These formerly passive consumers of entertainment have become their own media producers, and – perhaps more ominously, in this context – their own broadcasters. Anyone with a cheap webcam (or mobile handset), a cheap computer, and a broadband link can make and share their own videos. This trend had been growing for several years, but since the launch of YouTube, in 2005, it has rocketed into prominence. YouTube is now the 4th busiest website, world-wide, and perhaps 65% of all video downloads on the web take place through Google-owned properties. Amateur productions regularly garner tens of thousands of viewers – and sometimes millions.

We need to be very careful about how we judge the meaning of the word “amateur” in the context of peer-produced media. An amateur production may be produced with little or no funding, but that does not automatically mean it will appear clumsy to the audience. The rough edges of an amateur production are balanced out by a corresponding increase in salience – that is, the importance which the viewer attaches to the subject of the media. If something is compelling because it is important to us – something which we care passionately about – high production values do not enter into our assessment. Chad Hurley, one of the founders of YouTube, has remarked that the site has no “gold-standard” for production; in fact, YouTube’s gold-standard is salience – if the YouTube audience feels the work is important, audience members will share it within their own communities of interest. Sharing is the proof of salience.

After two years of media sharing, the audience for YouTube (which is now coincident with the global television audience in the developed world) has grown accustomed to being able to share salient media freely. This is another of the unexpected and unpredicted emergent effects of the intelligence of humans using the network. We now have an expectation that when we encounter some media we find highly salient, we should be able to forward it along within our social networks, sharing it within our communities of salience. But this is not the desire of many copyright holders, who collect their revenues by placing barriers to the access of media. This fundamental conflict, between the desire to share, as engendered by our own interactions with the network, and the desire of copyright holders to restrain media consumption to economic channels has, thus far, been consistently resolved in favor of sharing. The copyright holders have tried to use the legal system as a bludgeon to change the behavior of the audience; this has not worked, nor will it ever. But, as the copyright holders resort to ever-more-draconian techniques to maintain control over the distribution of their works, the audience is presented with an ever-growing world of works that are meant to be shared. The danger here is that the audience is beginning to ignore works which they cannot share freely, seeing them as “broken” in some fundamental way. Since sharing has now become an essential quality of media, the audience is simply reacting to a perceived defect in those works. In this sense, the media multinationals have been their own worst enemies; by restricting the ability of the audiences to share the works they control, they have helped to turn audiences toward works which audiences can distribute through their own “do-it-yourself” networks.

These DIYnets are now a permanent fixture of the media landscape, even as their forms evolve through YouTube playlists, RSS feeds, and sharing sites such as Facebook and Pownce. These networks exist entirely outside the regular and licensed channels of distribution; they are not suitable – legally or economically – for distribution via a commercial IPTV network. Telstra cannot provide these DIYnets to their customers through its IPTV service – nor can any other broadband carrier. IPTV, to a carrier, means the distribution of a few hundred highly regularized television channels. While there will doubtless be a continuing market for mass entertainment, that audience is continuously being eroded by an expanding range of peer-produced programming which is growing in salience. In the long term this, like so much in the world, will probably obey an 80/20 rule, with about 80 percent of the audience’s attention absorbed in peer-produced, highly-salient media, while 20 percent will come from mass-market, high-production-value works. It doesn’t make a lot of sense to bet the house on a service offering which will command such a small portion of the audience’s attention. Yes, Telstra will offer it. But it will never be able to compete with the productions created by the audience.

Because of this tension between the desires of the carrier and the interests of the audience, the carrier will seek to manipulate the capabilities of the broadband offering, to weight it in favor of a highly regularized IPTV offering. In the United States this has become known as the “net neutrality” argument, and centers on the question of whether a carrier has the right to shape traffic within its own IP network to advantage its own traffic over that of others. In Australia, the argument has focused on tariff rates: Telstra believes that if they build the network, they should be able to set the tariff. The ACCC argues otherwise. This has been characterized as the central stumbling block which has prevented the deployment of a high-speed broadband network across the nation, and, in some sense that is entirely true – Telstra has chosen not to move forward until it feels assured that both economic and regulatory conditions prove favorable. But this does not mean that the consumer demand for a high-speed network was simply put on pause over the last few years. More significantly, the world beyond Telstra has not stopped advancing. While it now costs roughly USD $750 per household to provide a high-speed fiber-optic connection to the carrier network, other technologies are coming on-line, right now, which promise to reduce those costs by an order of magnitude, and furthermore, which don’t require any infrastructure build-out on the part of the carrier. This disruptive innovation could change the game completely.

III: Check, Mate

All parties to the high-speed broadband dispute – government, Telstra, the Group of Nine, and the public – share the belief that this network must be built by a large organization, able to command the billions of dollars in capital required to dig up the streets, lay the fiber, and run the enormous data centers. This model of a network is a reflection in copper, plastic and silicon, of the hierarchical forms of organization which characterize large institutions – such as governments and carriers. However, if we have learned anything about the emergent qualities of networks, it is that they quickly replace hierarchies with “netocracies”: horizontal meritocracies, which use the connective power of the network to out-compete slower, more rigid hierarchies. It is odd that, while the network has transformed nearly everything it has touched, the purveyors of those networks – the carriers – somehow seem immune from those transformative qualities. Telecommunications firms are – and have ever been – the very definition of hierarchical organizations. During the era of plain-old telephone service, the organizational form of the carrier was isomorphic to the form of the network. However, over the last decade, as the internal network has transitioned from circuit-switched to packet-switched, the institution lost synchronization with the form of the network it provided to consumers. As each day passes, carriers move even further out of sync: this helps to explain the current disconnect between Telstra and Australians.

We are about to see an adjustment. First, the data on the network was broken into packets; now, the hardware of the network has followed. Telephone networks were centralized because they required explicit wiring from point-to-point; cellular networks are decentralized, but use licensed spectrum – which requires enormous capital resources. Both of these conditions created significant barriers to entry. But there is no need to use wires, nor is there any need to use licensed spectrum. The 2.4 GHz radio band is freely available for anyone to use, so long as that use stays below certain power values. We now see a plethora of devices using that spectrum: cordless handsets, Bluetooth devices, and the all-but-ubiquitous 802.11 “WiFi” data networks. The chaos which broadcasters and governments had always claimed would be the by-product of unlicensed spectrum has, instead, become a wonderfully rich marketplace of products and services. The first generation of these products made connection to the centralized network even easier: cordless handsets liberated the telephone from the twisted-pair connection to the central office, while WiFi freed computers from heavy and clumsy RJ-45 jacks and CAT-5 cabling. While these devices had some intelligence, that intelligence centered on making and maintaining a connection to the centralized network.

Recently, advances in software have produced a new class of devices which create their own networks. Devices connected to these ad-hoc “mesh” networks act as peers in a swarm (similar to the participants in peer-to-peer filesharing), rather than clients within a hierarchical distribution system. These network peers share information about their evolving topology, forming a highly-resilient fabric of connections. Devices maintain multiple connections to multiple nodes throughout the network, and a packet travels through the mesh along a non-deterministic path. While this was always the promise of TCP/IP networks, static routes through the network cloud are now the rule, because they provide greater efficiency, make it easier to maintain the routers and diagnose network problems, and keep maintenance costs down. But mesh networks are decentralized; there is no controlling authority, no central router providing an interconnection with a peer network. And – most significantly – mesh networks are now incredibly inexpensive to implement.
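To make that resilience concrete, here is a toy sketch in Python – my own illustration, not Meraki’s actual routing software – of the basic idea: each device knows only its immediate neighbours, yet a packet can still find its way across the mesh, and the loss of any single relay doesn’t break the network so long as an alternate path exists.

```python
# Toy mesh-routing illustration: the mesh is just a graph of peers, and a
# packet finds its destination by breadth-first search over peer links.
from collections import deque

def find_path(links, src, dst):
    """Return one viable path of node names from src to dst, or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour in links.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Five hypothetical access points within radio range of one another.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

print(find_path(mesh, "A", "E"))  # one path, e.g. ['A', 'B', 'D', 'E']

# Knock out relay B entirely: the mesh self-heals, routing via C instead.
degraded = {n: nbrs - {"B"} for n, nbrs in mesh.items() if n != "B"}
print(find_path(degraded, "A", "E"))  # ['A', 'C', 'D', 'E']
```

Real mesh protocols do far more (link-quality metrics, route caching, wireless interference), but the essential property is the one shown here: no node is indispensable, and no central router needs to exist.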

Earlier this year, the US-based firm Meraki launched their long-awaited Meraki Mini wireless mesh router. For about AUD $60, plus the cost of electricity, anyone can become a peer within a wireless mesh network providing speeds of up to 50 megabits per second. The device is deceptively simple; it’s just an 802.11 transceiver paired with a single-chip computer running Linux and Meraki’s mesh routing software – which was developed by Meraki’s founders while Ph.D. students at the Massachusetts Institute of Technology. The 802.11 radio within the Meraki Mini has been highly optimized for long-distance communication. Instead of the normal 50 meter radius associated with WiFi, the Meraki Mini provides coverage over at least 250 meters – and, depending upon topography, can reach 750 meters. Let me put that in context by showing you the coverage I’ll get when I install a Meraki Mini on my sixth-floor balcony in Surry Hills:

From my flat, I will be able to reach all the way from Central Station to Riley Street, from Belvoir Street over to Albion Street. Thousands of people will be within range of my network access point. Of course, if all of them chose to use my single point of access, my Meraki Mini would be swamped with traffic. It simply wouldn’t be able to cope. But – given that the Meraki Mini is cheaper than most WiFi access points available at Harvey Norman – it’s likely that many people within that radius would install their own access points. These access points would detect each other’s presence, forming a self-organizing mesh network. If every WiFi access point visible from my flat (I can sense between 10 and 20 of them at any given time) were replaced with a Meraki Mini, or, perhaps more significantly, if these WiFi access points were given firmware upgrades which allowed them to interoperate with the mesh networks created by the Meraki Mini – my Surry Hills neighborhood would suddenly be blanketed in a highly resilient and wholly pervasive wireless high-speed network, at nearly no cost to the users of that network. In other words, this could all be done in software. The infrastructure is already deployed.
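The jump from a 50-metre to a 250-metre radius matters more than it might first appear, because coverage area grows with the square of the radius. A quick back-of-envelope calculation – my own arithmetic, using the figures above, and idealising the coverage as a perfect circle – shows the scale of the difference:

```python
# Idealised coverage areas: area = pi * r^2, converted to square kilometres.
import math

def coverage_km2(radius_m):
    """Circular coverage area for a given radio radius, in km^2."""
    return math.pi * radius_m ** 2 / 1_000_000

wifi = coverage_km2(50)         # ordinary WiFi access point
meraki = coverage_km2(250)      # Meraki Mini, typical
meraki_max = coverage_km2(750)  # Meraki Mini, favourable topography

print(f"WiFi:        {wifi:.4f} km^2")
print(f"Meraki Mini: {meraki:.4f} km^2 ({meraki / wifi:.0f}x ordinary WiFi)")
print(f"Best case:   {meraki_max:.4f} km^2 ({meraki_max / wifi:.0f}x)")
```

A five-fold increase in radius yields a twenty-five-fold increase in area – which is why a handful of these devices can blanket a neighborhood that would otherwise need hundreds of conventional access points.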

As some of you have no doubt noted, this network is highly local; while there are high-speed connections within the wireless cloud, the mesh doesn’t necessarily have connections to the global Internet. To be sure, Meraki Minis can act as routers to the Internet, routing packets through their Ethernet interfaces to the broader Internet, and Meraki recommends that at least every tenth device in a mesh be so equipped. But it’s not strictly necessary, and – if the mesh is dedicated to a particular task – it’s completely unnecessary. Let us say, for example, that I wanted to provide a low-cost IPTV service to the residents of Surry Hills. I could create a “head-end” in my own flat, and provide my “subscribers” with Meraki Minis and an inexpensive set-top box to interface with their televisions. For a total install cost of perhaps $300, I could give everyone in Surry Hills a full IPTV service (though it’s unlikely I could provide HD quality). No wiring required, no high-speed broadband buildout, no billions of dollars, no regulatory relaxation. I could just do it. And collect both subscriber fees and advertiser revenues. No Telstra. No Group of Nine. No blessing from Senator Coonan. No go-over by the ACCC. The technology is all in place, today.

Here’s a news report – almost a year old – which makes the point quite well:

I bring up this thought experiment to drive home my final point: Telstra isn’t needed. It might not even be wanted. We have so many other avenues open to us to create and deploy high-speed broadband services that it’s likely Telstra has just missed the boat. You’ve waited too long, dilly-dallying while the audience and the technology have made you obsolete. The audience doesn’t want the same few hundred channels they can get on FoxTel: they want the nearly endless stream of salience they can get from YouTube. The technology is no longer about centralized distribution networks: it favors light, flexible, inexpensive mesh networks. Both of these are long-term trends, and both will only grow more pronounced as the years pass. In the years it takes Telstra – or whoever gets the blessing of the regulators – to build out this high-speed broadband network, you will be fighting a rearguard action, as both the audience and the technology of the network race on past you. They have already passed you by, and it’s been my task this morning to point this out. You simply do not matter.

This doesn’t mean it’s game over. I don’t want you to report to Sol Trujillo that it’s time to have a quick fire-sale of Telstra’s assets. But it does mean you need to radically rethink your business – right now. In the age of pervasive peer-production, paired with the advent of cheap wireless mesh networks, your best option is to become a high-quality connection to the global Internet – in short, a commodity. All of this pervasive wireless networking will engender an incredible demand for bandwidth; the more people are connected together, the more they want to be connected together. That’s the one inarguable truth we can glean from 160 years of electric communication. Telstra has the infrastructure to leverage itself into becoming the most reliable data carrier connecting Australians to the global Internet. It isn’t glamorous, but it is a business with high barriers to entry, and it promises a steadily growing (if unexciting) recurring revenue stream. But if you continue to base your plans around selling Australians services we don’t want, you are building your castles on the sand. And the tide is rising.

I.

Nothing is perfect. Everything, this side of heaven, contains a flaw. The master rug makers of Persia go so far as to add a mistaken stitch into their carpets; perfection would be an insult to the greatness of God. For nearly everything else, and for nearly everyone else, we don’t have to worry about adding errors: we work from incomplete knowledge, we work from ignorance, and we work from prejudice. As Mark Twain noted, “It’s not what you don’t know that’ll hurt you, but what you know that ain’t so.” We believe we know so much; in truth we know nearly nothing at all. We have trouble discerning our own motivations – yet we constantly judge the motivations of others. Cognitive scientists have repeatedly demonstrated how we backfill our own memories to create a comfortable and pleasing narrative of our lives; this keeps us from drowning in despair, but it also allows us to be monsters who have no trouble sleeping soundly at night.

We constantly and impudently impugn the motives of others, carrying that attitude into the designs of systems which support community. We protect children from pedophiles; we protect ourselves from unsolicited emails; we protect communities from the excesses of emotion or behavior which – we believe – would rip them apart. Each of these filtering processes – many of them automated – serves to create a “safe space” for conversation and community. Yet community is at least as much about difference as it is about similarity. If every member of a community held to a unity of thought, no conversation would be possible; information – “the difference which makes a difference” – can only emerge from dissent. Any system which diminishes difference therefore necessarily diminishes the vitality of a community. Every act of communication within a community is both a promise of friendship and a cry to civil war. Every community sails between the Scylla and Charybdis of undifferentiated assent and complete fracture.

When people were bound by proximity – in their villages and towns – the pressure for the community to remain cohesive prevented most of the egregious separations from occurring, though periodically – and particularly since the Reformation – communities have split apart, divided on religious or ideological lines. In the post-Enlightenment era, with the opening of the Americas, divided communities could simply move away and establish their own particular Edens, though these too could fracture; schism follows schism in an echo of the Biblical story of the Confusion of Tongues. Rural communities could remain singular and united (at least, until they burst apart under the build-up of pressures), but urban polities had to move in another direction: tolerance. Amsterdam and London flourished in the eighteenth century because of the dissenting voices they tolerated in their streets. It was either that, or, as both had learned – to their horror – endless civil wars. This essential idea of the Enlightenment – that men could keep their own counsel, so long as they respected the beliefs of others – fostered democracy, science and capitalism, and elevated millions from misery and poverty. It is said that democratic nations never wage war against one another; while not entirely true, tolerance acts as a firewall against the most immediate passions of states. The alternative – repeated countless times throughout the 20th century – is a mountain of skulls.

Where people are connected electronically, freed both from the strictures of proximity and from the organic and cultural bounds of propriety that accompany face-to-face interactions (it is much easier to be rude to someone that you’ve never met in the flesh), the natural tendency to schism is amplified. The checks against bad behavior lose their consequential quality. One can be rude, abrasive, even evil, because the mountain of skulls which piles up as the inevitable result of such psychopathology appears to lack the immediacy of a real, bleeding body. It has been argued that we need “to be excellent to each other,” or that we need to grow thicker skins. Both suggestions have some merit, but the truth lies somewhere in between.

II.

While USENET, the thirty-year-old, Internet-wide bulletin board system, remains the archetype for online community – the place where the terms “flame”, “flame war” and “troll” originated in their current, electronic usages – USENET has long since been obsolesced by a million dedicated websites. We can learn a lot about the pathology of online communities by studying USENET, but the most important lesson we can draw involves the original online schism. In 1987, John Gilmore – one of the founding engineers of SUN Microsystems – wanted to start a USENET list to discuss topics related to illegal psychoactive drugs. USENET users must approve all requests for new lists, and this highly polarizing topic, when put to a vote, was repeatedly rejected. Gilmore spent a few hours modifying the USENET code so that it could handle a new top-level hierarchy, “alt.*” This was designed to be the alternative to USENET, where anyone could start a list for any reason, anywhere. While many USENET sites tried to ban the alt. hierarchy from their servers, within a year’s time alt. became ubiquitously available. Everyone on USENET had a passion for some list which couldn’t be satisfied within its strict guidelines. To this day, the tightly moderated USENET and the free-wheeling, often obscene, and frequently illegal alt. hierarchy coexist side-by-side. Each has reinforced the existence of the other.

Qualities of both USENET and the alt. hierarchy have been embodied in the peer-produced encyclopedia-about-everything, Wikipedia. Like the alt. hierarchy, anyone can create an entry on any subject, and anyone can edit any entry on any subject (with a few exceptions, discussed below). However, like USENET, there are Wikipedia moderators, who can choose to delete entries, or roll back the edits on an entry, and who act as “governors” – in the sense that they direct activity, rather than ruling over it (this from the Greek kybernetes, “steersman,” from which we get “cybernetics”). By any objective standard the system has worked remarkably well; Wikipedia now has nearly 1.5 million English-language articles, and continues growing at a nearly exponential rate. The strength of the moderation in Wikipedia is that it is nearly invisible; although articles do get deleted because they do not meet Wikipedia’s evolving standards (e.g., the first version of a biographical page about myself), it remains a triumph of tolerance, carefully maintaining a laissez-faire approach to the creation of content, applying a moderating influence only when the broad guidelines of Wikipedia (summed up in the maxim “don’t be a dick”) have been obviously violated. The community feels that it has complete control over the creation of content within Wikipedia, and this sense of investment – that Wikipedia truly is the product of the community’s own work – has made Wikipedia’s contributors its most earnest evangelists.

There is a price to be paid for this open-door policy: noise. Because Wikipedia is open to all, it can be vandalized, or filled with spurious information. While the moderators do their best to correct instances of vandalism, Wikipedia relies on the community to do this nitpicking work. (I have deleted vandalism on Wikipedia pages several times.) For the most part, it works well, though there are specific instances – such as on 31 July 2006, when Stephen Colbert urged viewers of his television program to modify Wikipedia entries to promote his own “political” views – when it falls down utterly. Wikipedia can withstand the random assaults of individuals, but, in its present form, it cannot hope to stand against thousands of individuals intent on changing its content in specific areas. Thus, in certain circumstances, Wikipedia moderators will “lock” certain entries, allowing them to be modified only by carefully designated individuals. Although Colbert meant his assault as a stunt, with no malicious intent, he pointed to the serious flaw of all open-door systems – they rely on the good faith of the vast majority of their users. If any polity decides to take action against Wikipedia, the system will suffer damage.

With a growing consciousness of the danger of open-door systems – and a sense that perhaps more moderation is better – Wikipedia cofounder Larry Sanger has launched his own competitor to Wikipedia, Citizendium. Starting with a “fork” of Wikipedia (that is, a selection of the entries thought “suitable” for inclusion in the new work), Citizendium will restrict posting in its entries to trusted experts in their fields. The goal is to create a higher-quality version of Wikipedia, with greater involvement from professional researchers and academics.

While a certain argument can be made that Wikipedia entries contain too much noise – many are poorly written, have no references, or even project a certain point-of-view – it remains to be seen if any differentiation between “professional” and “amateur” communities of knowledge production can be maintained in an era of hyperdistribution. If a film producer is now threatened by the rise of the amateur – that is, an enthusiast working outside the established systems of media distribution – won’t an academic (and by extension, any professional) also be under threat? The academy has always existed for two reasons: to expand knowledge, and to restrict it. Academic communities function under the same rules as all communities, the balancing act between uniformity and schism. The “standard bearers” in any community reify the orthodox tenets of a field, blocking the research of any outsiders whose work might threaten the functioning assumptions of the community. Yet, since Thomas Kuhn published The Structure of Scientific Revolutions we know that science progresses (in Max Planck’s apt phrasing) “funeral by funeral.” Experts tend to block progress in a field; by extension any encyclopedia which uses these same experts as the gatekeepers to knowledge acquisition will effectively hamstring itself from first principles. In the age of hyperintelligence, expertise has become a broadly accessible quality; it is not located in any particular community, but rather in individuals who may not be associated with any official institution. Noise is not the enemy; it is a sign of vitality, and something that we must come to accept as part of the price we pay for our newly-expanded capabilities. As Kevin Kelly eloquently expressed in Out of Control, “The perfect is the enemy of the good.” The question is not whether Wikipedia is perfect, but rather, is it good enough?
If it is – and that much must be clear by now – then Citizendium, as an attempt to make perfect what is already good enough, must be doomed to failure, out of tune with the times, fighting the trend toward the era of the amateur.

As Citizendium flowers and fails over the next year, it will be interesting to note how its community practices change in response to an ever-more-dire situation. The pressures of the community will force Citizendium to become more Wikipedia-like in its submissions and review policies. At the same time, additional instances of organized vandalism (we’ve only just started to see these) will drive Wikipedia toward a more restrictive submissions and editing policy. Citizendium overshot the mark from the starting line, and will need to crawl back toward the open-door policy, yet, as it does, it risks alienating the same experts it’s designed to defend. Wikipedia, starting from a position of radical openness, has only restricted access in response to some real threat to its community. Citizendium is proactive and presumes too much; Wikipedia is reactive (and for this reason will occasionally suffer malicious damage) but only modifies its access policies when a clear threat to the stability of the community has been demonstrated. Wikipedia is an anarchic horde, moving by consensus, unlike Citizendium, which is a recapitulation of the top-down hierarchy of the academy. While some will no doubt treasure the heavy moderation of Citizendium, the vast majority will prefer the noise and vitality of Wikipedia. A heavy hand versus an invisible one; this is the central paradox of community.

III.

A well-run online community walks a narrow line between anarchy and authoritarianism. To encourage discussion and debate, a community must be encouraged to sit on a hand grenade that always threatens to explode, but never quite manages to go off. In general, it’s quite enough to put people into the same conversational space, and watch the sparks fly; stirring the pot is rarely necessary. Conversely, when the pot begins to boil over, someone has to be on hand to turn the heat down. Communities frequently manage this process on their own, with cool minds ready to reframe conversation in less inflammatory terms. This wisdom of communities is not innate; it is knowledge embodied within a community’s practices, something that each community must learn for itself. USENET lists, over the course of thirty years, have learned how to avoid the most obvious hot-button topics, and regular contributors to these lists have learned to filter out the outrageous flame-baiting of list trolls. But none of this community intelligence resides in a newly-founded community, so, in an absolute sense, the long-term health of any community depends strongly on the character and capabilities of its earliest members.

The founding members of a new community should not be arbitrarily selected; that would be gambling on the good behavior of individuals who, insofar as the community is concerned, have no track record. Instead, these founders need to be carefully vetted across two axes of significance: their ability to be provocative, and their capability to act like adults. These qualities usually don’t come as a neat package; any individual who has a surfeit of one is more than likely to be lacking in the other. However, once such “balanced” individuals have been identified and recruited, the community can begin its work.

After a time, the best of these individuals – whose qualities will become clear to the rest of the community – should be promoted to moderator status, assuming the Solomonic mantle as protectors and guardians of the community. This role is vital; a community should always know that it is functioning in a moderated environment, but this moderation should be so light-handed as to be nearly invisible. The presumption of observation encourages individuals to behave appropriately; the rare occasions when a moderator is forced to act as a benevolent and trustworthy force for good should encourage imitation.

Hand-in-hand with the sense of confidence which comes from careful and gentle moderation, a community must feel empowered to create something that represents both their individual and collective abilities. The idea of “ownership,” when multiplied by a community-recognized sense of expertise, produces a strongly reinforcing behavior. Individuals who are able to share their expertise with a community – and help the community build its own expertise – will develop a very strong sense of loyalty to the community. Expertise can be demonstrated in the context of a bulletin board system, but these systems do not easily preserve the total history of an expert’s interactions within the community. A posting made today is lost in six months’ time; a Wiki is forever. Thus, in addition to conversation – and growing naturally from it – the community should have the tools at its disposal to translate its conversation into something more permanent. Community members will quickly recognize those within their ranks who have the authority of expertise on any given subject, and they should be gently guided into making a record of that expertise. As that record builds, it develops a value of its own, beyond its immense value as a repository of expertise; it becomes the living embodiment of an individual’s dedication to the community. Over time, community members will come to see themselves as the true “content” of the community, both through their participation in the endless conversation of the community, and as the co-creators of the community’s collective intelligence.

This model has worked successfully for over a decade in some of the more notable electronic communities – particularly in the open-source software movement. The various communities around GNU/Linux, PHP and Python have all demonstrated that any community with room enough to pool the expertise of large numbers of dedicated individuals will build something of lasting value, and bring broad renown to its key contributors, moderators, and enthusiasts.

However, even in the most effective communities, schism remains the inevitable human tendency, and some conflicts – rooted in deep-seated philosophical or temperamental differences – cannot be resolved. Schism should not be embraced arbitrarily, but neither should it be avoided at all costs. Instead – as in the case of the alt. hierarchy – room should be made to accommodate the natural tendency to differentiate. Wikipedia will eventually fork into a hundred major variants, of which Citizendium is but the first. The Linux world has been divided into different distributions since its earliest years. Schism is a sign of life, indicating that there is something important enough to fight over. Schisms either resolve in an ecumenical unity, or persist and continue to divide; neither outcome is inherently preferable.

Every living thing struggles between static order and chaotic dissolution; it isn’t perfect, but then, nothing ever is. Even as we feel ourselves drawn to one extreme or another, wisdom wrought from experience (often painfully gained) checks our progress, and guides us forward, delicately, into something that is, in the best of worlds, utterly unexpected. The potential for novelty in any community is enormous; releasing that potential requires flexibility, balance, and presence. There are no promises of success. Like a newborn child, a new community is all potential – unbounded, unbridled, standing at the cusp of a unique wonder. We can set its feet on the path of wisdom; what comes after is unknowable, and, for this reason, impossibly potent.

Everything is changing. Everything has changed. Everything always changes, but at times that change is particularly pronounced and thus specifically noteworthy. For media – which is the topic du jour – this is so plainly obvious that any attempt to refer to the “before” time has an almost archeological feel, as though we must shovel carefully through layers of dirt to uncover how media worked just a few years ago. These transformations have been seismic, and singular. There is no going back.

But what, exactly, has happened?

The revolution we glimpsed in 1994, when the rough beast of the Web, its hour come at last, made the earth tremble, seducing and subsuming us into its ever-broadening expanse, fell back, for a brief while, into patterns more established and more familiar. We glimpsed a utopia; then a fog rose, and the vision faded. We endured half a decade of stupidity, cupidity and the slow strangulation of dreams. We longed for communion; we got DVD players delivered in under an hour. Fortunately, the network accelerates everything it embraces, and what might have taken a generation in earlier times took just five years to run its course, from Netscape to Razorfish, and the lunar crater of NASDAQ seemed to spell the final doom of all our hopes. The Web, people loudly proclaimed, was so over.

Silly humans.

During those first five years, we learned just how different network economics could be; not just in theory, but in practice. We learned that the essence of the digital artifact is that it exists to be copied. Like a gene in the Cambrian seas of the early Web, information was copied and recopied endlessly. John Perry Barlow’s Declaration of the Independence of Cyberspace was one of the first such objects, spread via email and website until it became nearly impossible to ignore. More recently, Cory Doctorow’s lecture on DRM for Microsoft Research – in text, Pig Latin and video versions – has been passed around like a cheap two-dollar…well, you know. Each of these digital artifacts eventually reached nearly every single individual who might find them interesting, because, as they were copied and read, forwarded and linked to, each of the human nodes in this network made a decision that this information was important enough to share. In the networked era, salience is the only significant quality of information. For that reason, it was only a matter of time until the technologies of the network would reinforce this natural tendency, and accelerate it.

So even as the Web died, it was reborn. The top-down design of a hundred centralized sources of information evolved into seven hundred million peers. From each according to their ability, to each according to their need. Feeds replaced websites, and torrents replaced streams. The revolution we had fleetingly glimpsed had finally – blessedly – arrived.

But one man’s blessing is another’s curse.

The network revolution presented incredible opportunities to anyone working in the media industries. Suddenly, it became possible to reach massive audiences, unbounded by proximity. But instead of reinforcing the previous structures of media ownership and information distribution, the network has consistently undermined them. Mention Craigslist to a newspaperman, and watch as the color drains from their face. Casually drop BitTorrent into a conversation with a studio executive, and observe as they choke back their rage. The network carries within it the seeds of their destruction. And they’re absolutely, utterly, completely powerless to stop it.

This would be a sad story if professional media had not willingly cooperated in their own demise. The technologies of the digital era were simply too tempting to be ignored, too important to the bottom line. But the network has its own economics, and quickly overcomes or blithely ignores any attempt to subvert its innate qualities. Film studios make the majority of their revenues from DVD distribution of their productions, but that same DVD, because of its essentially digital nature, can be copied and recopied endlessly, at no cost. If it is salient, it will be copied widely. That’s not just a horror story: that’s the law.

And if you don’t want your film copied? Well then, you have to resort to antique production techniques. Make sure it’s shot to film stock, physically edited (good luck finding an editor who prefers a Steenbeck to an Avid) and graded – with no digital intermediates – then projected in an exhibition space where every audience member has been subjected to a humiliating physical search. If you did that, you’d kill piracy. Probably. Of course, you’d also kill your exhibition revenues. But the studios (and the record companies, and the broadcasters, and the book publishers) want to have it both ways: they want the benefits of digital distribution, all the while denying the essential quality of the medium – it exists to be copied.

That, at least, is the message from a hundred insta-pundits, on the business pages of newspapers, in blogs, and in countless analysts’ reports. The entire world seemed shocked by the entirely expected purchase of video-sharing site YouTube by Google for 1.65 billion dollars. It’s a bad deal, some say, doomed to fail. It isn’t worth it. It’ll bring Google crashing back to earth with endless litigation from the copyright holders who have just been waiting for someone with deep enough pockets to sue.

Feh.

What most everyone overlooked – as it happened the very same day as the Google purchase – were the licensing agreements YouTube struck with Universal, Sony BMG, and CBS. Together with their earlier deal with Warner, YouTube now has a deal with every major music publisher in the world. YouTube will now figure out how to share the revenues generated by Google’s advertising technology with all of the copyright holders whose materials end up on YouTube.

Some pundits – most notably, Mark Cuban – have indicated that only a moron would buy YouTube, because it’s widely believed that YouTube has built its business entirely upon the violation of copyright. Certainly, YouTube established its reputation with a specific piece of video owned by someone else – a digital short from NBC’s Saturday Night Live, “Lazy Sunday.” That video – viewed millions of times before NBC rattled its legal saber and the content was removed – introduced most users to YouTube. In the year since “Lazy Sunday,” YouTube has become a clearinghouse for the funniest bits of video content produced by other companies, from segments of The Daily Show with Jon Stewart, to South Park, to Family Guy to The Simpsons. Why has YouTube become the redistributor of these clips? Because none of the copyright holders made an effort to distribute these clips themselves. YouTube has been acting as an arbitrageur of media, equalizing an inequity in the marketplace – and getting very rich in the process. It may be copyright violation, but the power of the audience is far, far greater than the power of the copyright holder. YouTube could delete every clip uploaded in violation of copyright – to some degree they do – but if you have a few thousand people uploading the same clip, how do you stay ahead of that? Even YouTube itself is subject to the power of its audience. And if they become draconian in their enforcement of copyright – which is a possible outcome of the Google purchase – they will simply force the audience elsewhere, to other sites. Better by far to strike a deal with the copyright holders, so that they receive recompense for their efforts. NBC has started to distribute Saturday Night Live’s digital shorts on its own website; ABC and FOX offer full streaming versions of their programs; everyone is queuing up to sell their TV shows on iTunes. Is this a willing transition? Probably not.
Minutes spent in front of the computer are minutes lost to television ratings. But if the copyright holders don’t distribute their content as widely as possible, someone else will. YouTube has proven this point beyond all argument.

Cuban believes that YouTube will die without a steady stream of content uploaded in violation of copyright. But if recent history is any guide, the studios are now falling over each other in their eagerness to do a deal, and share some of that money. The simultaneity of the Google purchase and the YouTube deals with the recording industry is not accidental; it is indicative of a great sea change. Big media has swallowed the bitter pill, and realized that they’ve lost control of distribution. Now they’ll try to make money off of it.

But Cuban makes another, and more damning point: he says that no one wants to watch the little hand-made videos which make up the vast majority of uploads to YouTube. This is the Big Lie of Big Media: if it isn’t professionally produced, the audience won’t watch it. No statement could be more mendacious, no assertion could be further from the truth. As a film producer and broadcaster, Cuban certainly hopes that audiences will always prefer professional content to amateur productions, but there’s no evidence to support this position – and rather a lot which counters it. The success of Red vs. Blue, Homestar Runner, Happy Tree Friends, and The Show with Zefrank – each of which commands audiences in the hundreds of thousands to millions – proves that audiences will find the content which interests them, and share that content with their friends, using the hyperdistribution techniques enabled by the network, which ensure these audiences can get what they want – from anyone, anywhere, at any time – with a minimum of difficulty. These productions lie completely outside the bounds of “professional” media; they are “amateur,” not in the sense of raw, or poorly produced, but because they have turned their back on the antique systems of distribution which previously separated the big boys from the wannabes.

A perfect example of this transition can be seen in a video on YouTube by the Australian band Sick Puppies. Shot by the band’s drummer, it features a well-known character, Juan Mann, who inhabits Sydney’s Pitt Street Mall, bearing a sign reading “Free Hugs.” The band befriended this unlikely character, and shot hours of video of him at work, giving free hugs to passers-by. While in Los Angeles, pursuing a recording deal, the drummer cut his footage into a three-minute film, then added one of the band’s songs, “All The Same,” as a temp track. Thinking to share his work around, he uploaded the video to YouTube on the 26th of September, and told his friends. Who told their friends. Who told their friends. YouTube is particularly good at “viral” distribution of media – it’s the one thing they’ve gotten absolutely right – so, within three weeks’ time, that little hand-made video had been viewed well over three million times. Sick Puppies are now on the map; their music video has given them a worldwide fan base. A debut album on a major label – expected early next year – will complete their transformation from amateurs to professionals.

Salience determines whether an audience will gather around and share media, not production values. In the time before hyperdistribution, audiences had a severely limited pool of choices, all of them professionally produced; now the gates have come down, and audiences are free to make their own choices. When placed head-to-head, can a professional production of modest salience stand up against an amateur production of great salience? Absolutely not. The audience will always select the production which speaks to them most directly. Media is a form of language, and we always favor our mother tongue.

The future for YouTube lies with the amateurs, not with the professionals. Cuban misses the point entirely, assuming that the audience will behave as it always has. But this is not that audience; this is an audience which has essentially infinite choice, and has come to understand that the sharing of media is an act of production in itself – that we are all our own broadcasters.

And you’d have to be a moron to miss that.

III. The Epidemiology of Cool

We know why YouTube has had such an incredible string of successes; the site makes it easy to share a video with your friends, and for those friends to share that video with their friends, and so on. The marketers call this “viral distribution,” but we know it by another and rather more prosaic name – friendship. As an inherently social species, we are constantly reinforcing our social connections through communication. It could be an IM, a text message, an email, a phone call, or a video – it’s all the same to the enormous section of our forebrains that we use to process the intricacies of our social relationships. We share these things to tell our friends that we’re thinking of them – and, rather more competitively, to show our friends that we’re on the tip. Each of us is a coolfinder (some of us do it professionally), and we each keep a little internal thermometer which measures our own cool against that of our peers. That innate drive to be recognized for our tastes has been accelerated to the speed of light by the network. Now, even as we coolfind, we are constantly inundated and challenged by the coolfinding of our peers. It’s produced a very healthy, if ultra-Darwinian, ecology of cool. Our peers are the selection pressure as we struggle to pass our memes on to the next generation.
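The epidemiological framing is more than metaphor: person-to-person sharing behaves like a branching process, in which each viewer passes a video along to some average number of friends (call it r, by analogy with the reproduction number in epidemiology). A minimal sketch of the idea, with entirely hypothetical parameters:

```python
# Toy model of "viral" sharing as a branching process.
# r is the average number of new viewers each viewer recruits (hypothetical).

def viral_reach(r: float, generations: int, seed_viewers: int = 1) -> float:
    """Expected total audience after a number of sharing generations."""
    current = float(seed_viewers)
    total = current
    for _ in range(generations):
        current *= r   # each current viewer recruits r new viewers on average
        total += current
    return total

# Above the r = 1 threshold the audience explodes; below it, sharing fizzles.
print(viral_reach(2.0, 10))  # -> 2047.0 (exponential growth)
print(viral_reach(0.5, 10))  # -> ~2.0 (sharing dies out)
```

This is why production values matter less than salience: salience is what pushes each viewer’s sharing rate above that threshold, and once it crosses, three million views in three weeks is simply the geometry of the network at work.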

Thus far, we’ve done this on our own, with very little assistance from the wealth of computing machinery which crowds our lives. We create ad-hoc solutions for media distribution: mailing lists, websites, podcasts – each of these an attempt to spread our ideas more successfully. But they’re held together tenuously, only by our constant activity, busy bees maintaining the cells of our hive. And it’s a lot of work. We’re forced to do it – forced to run the race, lest we be overrun by the memes of others – but we’ve reached the one practical limit: time. No one has enough time in the day to keep up with all of the information we should be absorbing. We can filter ruthlessly – and perhaps miss out on something we’ll regret later – or declare email bankruptcy, like Lawrence Lessig, or just withdraw to an ever-more-specialized domain of coolfinding. And we are doing each of these things, every day, under the pressure of all this information.

There’s got to be a better way.

In the early years of the 19th century, farmers in western Pennsylvania kept their wagon wheels greased with puddles of bubbling muck that studded the countryside. Although useful, the puddles were a toxic nuisance to livestock. If the farmers could have rid their lands of these puddles, they likely would have. Half a century later, western Pennsylvania became a boom region, built on its substantial petroleum reserves. The bubbling muck had immense value – but it had to wait for the demands of the kerosene lamp and the internal combustion engine.

In the early years of the 21st century, we each generate an enormous amount of interaction data – every click on a computer, every email sent or received, every website visited, every text message, every phone call, every swipe of a credit card or loyalty card or debit card, every face-to-face interaction. None of it is recorded – or at least, it’s not recorded by any of us, for any of us (though the NSA has expressed some interest in it) – because it hasn’t been seen as valuable. It’s bubbling up through all of us, and around all of us, as we create data shadows that have grown longer and longer, resembling Jacob Marley’s lockboxes and chains, rattling throughout cyberspace.

All of that information is worth more than oil, more than gold. And all of it is sadly – almost obscenely – dropped on the floor as soon as it is created. If we’re lucky, it is deleted. If we’re unlucky, someone uses it to create a digital simulacrum, and we find our identities hijacked. But in no case is this information ever exposed to us, for our own use. We’re told it has no value to us, and – so far – we’ve been stupid enough to believe it.

But now, just now, economic forces are linking the persistence of our data shadows to our ability to filter the avalanche of information which characterizes life in the 21st century. Turns out this data guck is good for more than greasing the wheels of commerce. These data shadows glow with the evanescent echo of our real social networks – not the baby steps of MySpace and Friendster – but the real ground-truth interactions which reveal ourselves and our relations one to another. It is human metadata. And it is the most valuable thing we’ve got, now that there’s demand for it.

YouTube records every email address you use to forward a video to a friend. It uses these, at present, to do auto-completion of addresses as you type them in. It also presents a friendly list of these addresses, to make forwarding all that much easier. What they’re not doing – at least, not visibly, and very likely not at all – is keeping any record of what you sent to whom, nor when, nor why. Yet every video forwarded through YouTube is forwarded for a reason – salience. YouTube could record those moments of salience, could use them to build a model, a data shadow, which could reinforce your own ability to make decisions about who should see what. It might even, to some degree, automate that process. When you add to this the newly emerging capabilities of analytic folksonomy – comparing a user’s tag clouds against the tag clouds of others within their social network – certain other relationships and affinities emerge. Again, these relationships can be used to improve the capability of the system to help find, filter and forward relevant videos. This is how a social network really works. It’s not about having 500 first-degree friends in MySpace. It’s about listening to your naturally occurring social network to direct, improve, and accelerate information flow. When the brand-new power of the individual as broadcaster is reified by the capabilities of computing machinery to listen to and model our interactions, the result is hypercasting. This is what media distribution in the 21st century is inevitably hurtling toward, driven by the natural selection of steadily increasing informational pressure.
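The “analytic folksonomy” idea is simple enough to sketch. Treat each user’s tag cloud as a set of tags and measure pairwise overlap; the users who tag most like you are the ones most likely to find your forwards salient. The names, tags, and data structure below are all invented for illustration – the essay implies no particular implementation – and Jaccard similarity is just one common choice of overlap measure:

```python
# Hypothetical per-user tag clouds; every name and tag here is
# invented purely for illustration.
tag_clouds = {
    "alice": {"skateboarding", "indie-music", "diy", "animation"},
    "bob":   {"animation", "diy", "cooking"},
    "carol": {"politics", "cooking", "news"},
}

def jaccard(a, b):
    """Overlap of two tag sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def affinities(user, clouds):
    """Rank every other user by tag-cloud similarity to `user`."""
    mine = clouds[user]
    scores = {other: jaccard(mine, tags)
              for other, tags in clouds.items() if other != user}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(affinities("alice", tag_clouds))
# alice and bob share two of five distinct tags; alice and carol share none
```

In a real hypercasting system these affinity scores would be one signal among many – forwarding history, watch time, explicit friendship links – but even this crude overlap is enough to decide, automatically, that a new skateboarding video should surface for bob before carol.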

Hypercasting solves some lingering questions confronting us. The first and most important of these is: How will we figure out what to watch now that we’ve got a near-to-infinite set of choices? We’ll rely on the recommendation of our friends, as we always have, but now these recommendations will be backed up by a hypercasting system which will invisibly and pervasively keep track of our interests, the points of interest we hold in common with our friends, our communities, our families, and our co-workers. It will not be automatic – no one really wants to see some out-of-control hypercasting system deluge us with video spam – but it will be so tightly integrated into our interactive experiences that it will barely register on our perceptions. We’ll simply come to expect that our iPods, our Media Centers, our PSPs and our mobiles are loaded up and ready for us, with things we’re sure to find compelling. Addiction to television will soar to new highs, a new crop of amateurs – millions of them – will find successful and lucrative careers in media production, and advertisers, as always, will find a way to spread their messages. On the surface, things will look much as they do now, but everything will move at a more rapid clip. Videos will fly across the world in seconds, not days, and a global audience of a million will gather in moments. Almost accidentally, this will change news reporting forever, as citizen journalism becomes a real threat to established media companies, and eventually their undoing. Shouldn’t the New York Times be subject to the same pressures as NEWS Corporation?

Is YouTube the harbinger of the transition to hypercasting? The lead is theirs to lose. GooTube delivers over half of all videos seen on the Internet. They have the cash and the brainpower to transform broadcasting into hypercasting. And they have to worry about the next set of 20-somethings, in a garage, working on the Next Big Thing. Those kids, nurtured by YouTube, know just what’s wrong with it, and how to make it better. YouTube faces its own selection pressures, which will only increase as it grows exponentially and cuts content deals and just tries to keep the whole centralized mess up and running.

Yet it doesn’t matter. We have seen birth and death, and thought they were different. But the death of the Web brought a new kind of life, a vitality and surefootedness suppressed during the years of MBAs and crazy business plans and IPOs. Perhaps history is repeating itself, as everyone goes wild with another case of gold fever, and we’ll lose the plot again. In that case, we should be glad of another death.

Hypercasting might need to wait a few years, for a platform very much like a fully mature Democracy DTV – or something we haven’t even dreamt up. It may be that YouTube will disappoint. But that doesn’t mean anything at all. YouTube isn’t driving the evolution toward hypercasting. The audience is. And the audience – in its teeming, active, probing billions – always gets whatever it wants. That’s the first rule of show business.