
Back in 1978 – when I was just fifteen – I begged my parents to let me enroll in a course at the local community college (the equivalent of TAFE) so that I could take ‘Data Processing with RPG II’. I wrote my first computer program in RPG II. I typed that program onto a series of punched cards, one statement per punched card. Once I’d finished typing the deck of cards which comprised my program, I dropped it off at the college’s data processing center, where it went into the batch queue. Twenty-four hours later you returned to collect your deck of punched cards, along with a long stretch of ‘green-bar’ paper on which the results of – or errors in – your program were printed. If you’d made a mistake on one of the cards – a spelling error, or a syntactical no-no – you repeated the process, as many times as needed, until you got it right.

Woohoo. Sign me up.

From around 1980 – when I went off to MIT to study computer science – computers have been my constant companions. I’ve owned cheap ones (Commodore’s VIC-20), expensive ones (one of the first Macintosh IIs to roll off the assembly line), tiny ones (iPhone), and big ones (SparcStation 3). I have never owned a computer that I have not written code for. In my mind, the computer and the act of programming are inseparable.

Programming languages are something one acquires, like computers; but you don’t put those languages in the bin – mostly. In preparation for this talk, I made up a list of all the programming languages I’ve learned over the years, beginning with RPG II – which I’ve since forgotten. BASIC came next, and I thought it a wonderful, useful, incredible language, my true starting point.

I spent many years programming in assembly language on a variety of systems – CP/M, MS-DOS, embedded microcontrollers. I bought a cheap C compiler in 1982, a copy of Kernighan & Ritchie, learned pointer arithmetic, and crashed my computer repeatedly in the process. Now that was fun.

I did take up C++ when it was still new, when Stroustrup was still implementing features of the language. (Oh, wait, he’s still doing that, isn’t he?) Buried myself in class designs and object hierarchies and delegation models. I can probably still program in C++. If someone were to threaten me with a taser.

In the 1990s along came the Web and Linux, the open computing platform. Suddenly a language was more useful for its ability to communicate with other entities than for its raw processing power.

I sat down at the 3rd International World Wide Web conference with a few folks from Sun Microsystems, who were touting this new, portable programming language they’d invented, which they called ‘Oak’. I wonder whatever became of that?

Each new language is supposed to conquer the world. Each new language is meant to subdue all before it. And I have to admit that I had my share of fun with Perl – the bastard child of BASIC and C – and, later, PHP. I’ve written a lot of JavaScript, because that’s the scripting language that brings VRML to life. Oh, and that’s right: along the way I invented a language, a portable language for interactive 3D computer graphics, a language that now, with WebGL about to become part of HTML5, looks less a damp squib than fifteen years ahead of its time.

Oh well.

Just a few years ago I decided that I needed to learn Python. I don’t remember the reason. I don’t even know that there was a reason. Python was there, and that was enough.

It didn’t take long to learn – Python isn’t a difficult language – but for just that little bit of learning I got so much power, well – I don’t have to explain it to you. You understand. It’s a bit like crack, Python is. Once you’ve had that first hit, you’re never quite the same again.

I put Python on everything: on my Macs, on my servers, on my mobile – everything I owned got a Python install. I didn’t know exactly what I’d do with all this Python, but somehow that seemed unimportant. Just get it everywhere. You’ll figure something out.

In some ways discovering Python was very frustrating. By my early 40s I’d basically stopped programming; not because I hated coding, but because my life had turned in other directions. I teach, I research, I lecture, I write, I do a little TV on the side. None of that has anything to do with coding. I had the best tool for a grand bit of hackery, and no time to do anything with it, nor any real reason to drive me to make time.

My biggest Python project (before last week) was a simple script to create a video used in the opening of my 2008 WebDirections South keynote. I wanted to show the ‘cloud’ of Twitter followers I had started to accumulate – around 1500. Not just a ‘wall’ of different faces, but a film, an animation, where each person I followed on Twitter had their moment in the sun. The script retrieved the list of people I follow, then iterated through this list, getting profile information for each individual, extracting from that the URL for the user’s avatar, which it then retrieved. Using the Python Imaging Library, it then embossed the user’s handle onto the image. After that it was a basic drag-and-drop operation into Adobe Premiere. Presto! – I had a movie. Thank you, Python.
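The image-stamping step can be sketched with Pillow, the modern descendant of the Python Imaging Library. The function name, layout and handle below are my own invention, not the original script – and a plain `draw.text` stands in for a true emboss:

```python
from PIL import Image, ImageDraw

def stamp_handle(avatar, handle):
    """Return a copy of the avatar with the user's @handle drawn along the bottom."""
    stamped = avatar.copy()
    draw = ImageDraw.Draw(stamped)
    # Default bitmap font; a real script would load a TrueType font and emboss properly.
    draw.text((4, stamped.height - 14), "@" + handle, fill=(255, 255, 255))
    return stamped

# Stand-in for a downloaded avatar: a solid 48x48 tile.
avatar = Image.new("RGB", (48, 48), (20, 90, 160))
stamped = stamp_handle(avatar, "example")
```

Run once per follower, the output frames drop straight into a video editor.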

For half a decade I’ve been thinking about social networks. This little film project allowed me to tie my research together with my desire to have a pleasant excuse to hack. When I sat back and watched the film I’d algorithmically pieced together, I began to get a deeper sense of the value of my ‘social graph’. That’s a new phrase, and it means the set of human relationships we each carry with us. Until just a few years ago, these relationships lived wholly between our ears; we might augment our memories with an address book or a Rolodex, but these paper trails were only ever a reflection of our embodied relationships. Ever since Friendster, these relationships have exteriorized, leaped out of our heads (like Athena from Zeus) and crawled into our computers.

This makes them both intimately familiar and eerily pluripotent. We are wired from birth to connect with one another: to share what we know, to listen to what others say. This is what we do, a knowledge so essential, so foundational, it never needs to be taught. When this essential feature of being human gets accelerated by the speed of the computer, then amplified by a global network that now connects about five billion people (counting both mobile and Internet), all sorts of unexpected things begin to happen. The entire landscape of human knowledge – how we come to know something, how we come to share what we know – has been utterly transformed over the last decade. Were we to find a convenient TARDIS and take ourselves back to the world of 1999, it would be almost unrecognizable. The media landscape was as it always had been, though the print component had hesitatingly migrated onto the Web. To learn about the world around us, we all looked up – to the ABC, to the New York Times, to the BBC World Service.

Then the world exploded.

We don’t look up anymore. We look around – we look to one another – to learn what’s going on. Sometimes we share what we hear on the ABC or the Times or the World Service. But what’s important is that we share it. There is no up, there is no centre. There is only a vast sea of hyperconnected human nodes.

The most alluring and seductive of all of the hyperconnecting services is unquestionably Facebook. In three years it has grown from just fifteen million to nearly half a billion users. It might be the most visited website in the world, just now surpassing Google. Facebook has become the nexus, the connecting point for one person in every fourteen on Earth. Facebook is the place where the social graph has come to life, where the potency of sharing and listening can be explored in depth. But it is a life lived out in public. Facebook is not really geared toward privacy, toward the intimacies that we expect as a necessary quality of our embodied relationships. Facebook founder Mark Zuckerberg is on the record talking about ‘the end of privacy’, which he sees as a side-effect of Facebook’s mission ‘to give people the power to share, and make the world more open and connected’.

A world more open could be a good thing, but only if the openness is wholly multilateral. We don’t want to end up in a world where our secrets as individuals have been revealed, while those who hold the concentrations of capital and power, and their supporting organizations and networks, manage to remain obscure and occult. This kind of ‘privacy asymmetry’ will only work against the individuals who have surrendered their privacy.

This is precisely where we seem to be headed. Facebook wants us to connect and share and reveal, but – particularly around privacy, user confidentiality, and the way they put that vast amount of user-generated data to work for themselves and their advertisers – Facebook’s business practices are entirely opaque. Openness must be met with openness, sharing with sharing. Anything else creates a situation where one side is – quite literally – holding all the cards.

I have been pondering the power of social networks for six years, so I am peculiarly conscious of the price you pay for participation in someone else’s network. I’ve come to realize that your social graph is your most important possession. In a very real way, your social graph is who you are. Until a few years ago we never gave this much thought, because we carried our graphs with us everywhere, inside our heads. But now that these graphs live elsewhere – under the control of someone else – we’re confronted with a dilemma: we want to turbocharge our social graphs, but we don’t want anyone else having access to something so fundamental and intimate. If the CIA and NSA use social graphs to find and combat terrorists, if smoking, obesity and divorce spread through social graphs, why would we hand something so personal and so potent to anyone else? What kind of value would we receive for surrendering our crown jewels?

By the end of last month it was clear that Facebook had become dangerous. Something had to be done. People had to be warned. In a Melbourne hotel room, I drafted a manifesto. Here’s how I closed it:

There is only one solution. We must take the thing which is inalienable from us – our presence – and remove it from those who would use that presence for their own gain. We must move, migrate, become digital refugees, fleeing a regime which seeks only its own best interests, to the detriment of our own… We may be the first, but we will not be the last. We must map the harbors, clear the woods, and make virgin lands inviting enough that it will be an easy decision for those who will come to join us in this new country, where freedom goes hand-in-hand with presence, where privacy is not a dirty word, and where the future knows no bounds.

So I quit. But I didn’t do it suddenly or rashly. I’d been using Facebook to share media – links and articles and videos – so I set up a Posterous account, where I could do exactly the same kind of sharing. Over the course of two weeks, I posted a series of Facebook updates, telling everyone in my social graph that I’d be quitting Facebook – beginning by posting that manifesto – and giving them the link to my Posterous account. I did this on five separate occasions in the week leading up to my account deletion.

The responses were interesting. Most of the folks in my social graph who bothered to respond were in various stages of mourning. My own aunt – whom I’ve been corresponding with via email for twenty years – wrote how much she’d miss me. Another individual expressed regret at my leave-taking, given that we’d only just reconnected after many years. “But,” I responded, “I’ve shown you how we can stay in touch. Just follow the link.” “That’s too hard,” he replied, “I like that Facebook gives me everyone in one place. I don’t have to remember to check here for you, or over there for someone else. This is just easy.”

I can’t fault his logic: Facebook is just like the comfy chair. It’s a pleasant place to be – even when surrounded by Inquisitors. Facebook users are simply so grateful that such an amazing service is on offer – seemingly for free – that they haven’t thought through the price of their participation. And unless something else comes along that’s as powerful and easy as Facebook, things will go on just as they are. Unless a disruptive innovation upends all the apple carts.

This is when I had a brainwave.

II: And Now For Something Completely Different

What is the social graph? At its essence, it is a set of connections, connections which define certain flows of information. These connections are both figurative and literal. If I say that I am connected to someone, I mean that we have some sort of relationship. But it also means that we have established protocols for communication, channels that can be used to send messages back and forth. For the last three hundred years this has been embodied in the ‘visiting card’, presented at all occasions when there is an invitation to connect. The ‘visiting card’ evolved into the ‘business card’ we share freely and promiscuously when there’s money to be made, or a connection to be had. The business card of 2010 must provide four significant pieces of information: a) the name of the caller; b) the address of the caller; c) the telephone number(s) of the caller; and d) the email address of the caller. Other information can be provided on the card – and often is – but if a card is missing any of these four essentials, it is incomplete. Each item represents a separate sphere of connectivity: the name is the necessary prerequisite for social connectivity; the address for postal connectivity; the telephone number and email address are self-explanatory. Each entry has a one-to-one correspondence with some form of connectivity. When we exchange business cards, we are providing the information necessary to establish connectivity.

We now have digital versions of the business card; we hand out vCards, or provide QR Codes that can be scanned and translated into a pointer to a vCard. Yet what we do with these digital versions of the business card has not changed: we stuff them into ‘address books’, or into the contact lists on our mobiles. If we have the right tools, we can upload them to Plaxo or LinkedIn. There they sit, static and essentially useless. A database with no applications.
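For concreteness, here's what those four essentials look like inside a vCard, with a naive Python extractor. The contact details are made up, and this one-field-per-line treatment is far cruder than a real vCard parser:

```python
# A minimal vCard carrying the four essentials of a business card.
VCARD = """BEGIN:VCARD
VERSION:3.0
FN:Jane Example
ADR:;;1 Example St;Sydney;NSW;2000;Australia
TEL:+61 2 5550 0000
EMAIL:jane@example.com
END:VCARD"""

def essentials(vcard):
    """Pull the name, address, phone and email out of a one-field-per-line vCard."""
    fields = {}
    for line in vcard.splitlines():
        key, _, value = line.partition(":")
        key = key.split(";")[0]  # drop any TYPE=… parameters
        if key in ("FN", "ADR", "TEL", "EMAIL"):
            fields[key] = value
    return fields
```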

That’s kind of weird, isn’t it? I mean, here we are, each of us walking around with a few hundred contacts on our mobiles, and essentially doing nothing with them unless we need to make a phone call or send an email. It doesn’t make sense. Somehow we’ve lost sight of the fact that the digital item is active in a way the physical object is not. Facebook understands this. Facebook takes your ‘calling card’ – the profile that you loaded up with your personal information – and makes it the foundation of your social graph. Everyone connects to your profile (which is you), and these connections become the cornerstone of fully bilateral sharing relationships. Anyone connected to you can send you a message, or initiate a chat, or look at the photos you uploaded of your holiday in the fleshpots of Bangkok. That one connection becomes the cornerstone for a whole range of opportunities to share media – text, images, video, links, music, events, etc. – and equally an opportunity to listen to what others are sharing. That’s what Facebook is, really, a giant, centralized switchboard which connects its members to one another. That’s all any social network is.

It’s easy – really easy – to connect together. We have so many ways to do so, through so many mechanisms, that really we’re drowning in choice, rather than a poverty of options. Instead of a monolithic solution, the Internet, like nature, tends to favor diversity and heterogeneity. Diversity creates the space for play and exploration; a tolerance for heterogeneity allows that there is no right answer, no one way to play the game. Is it possible to design an architecture for human connectivity which favors diversity and heterogeneity?

For the past few weeks those of you following me on Twitter have seen me tweet about ‘Project Thunderware’, which was the silliest code-name I could think up for a project that is actually entirely serious. The real name is Plexus. Plexus is a design for a second-generation social network. It is personal – everyone runs their own Plexus. It is portable – written entirely in Python so you can drop it onto a USB key (if you want), and take it with you anywhere you can get Python running. It is private – no one else has access to your Plexus, unless you want them to. It’s completely open and completely modular. Plexus is designed to take the passive social graph we’ve all got tucked away in our various devices, translating it into something active, vital, and essential.

There are three components within Plexus. First and most important is the social graph, a database of connections known as the ‘Plex’. Each of these connections, like a business card, comes with a list of connection points. These connection points can be outgoing – ‘this is how I will speak to you’, or incoming – ‘this is how I will listen to you’. They can be unilateral or bilateral. They can be based on standard protocols – such as SMTP or XMPP, or the APIs of the rapidly-multiplying set of social services already available in the wilds of the Internet, or they can be something entirely home-grown and home-brewed. They can be wide open, or encrypted with GPG. Everything is negotiable. That’s the point: something’s in the Plex because there’s an active connection and relationship between two parties.
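A Plex entry, as described, might be modeled like this – the class and field names are my guesses at the shape, not actual Plexus code:

```python
from dataclasses import dataclass, field

@dataclass
class ConnectionPoint:
    """One negotiated channel between two parties."""
    protocol: str            # "smtp", "xmpp", a service API, or something home-brewed
    direction: str           # "outgoing" (I speak to you) or "incoming" (I listen to you)
    address: str             # protocol-specific endpoint
    encrypted: bool = False  # e.g. wrapped in GPG

@dataclass
class PlexEntry:
    """One contact in the Plex: a name plus its negotiated connection points."""
    name: str
    points: list = field(default_factory=list)

entry = PlexEntry("Nick")
entry.points.append(ConnectionPoint("twitter", "incoming", "@nick"))
entry.points.append(ConnectionPoint("smtp", "outgoing", "nick@example.com", encrypted=True))
```

The point of the structure is that everything hangs off the relationship: a contact exists in the Plex only because a channel has been negotiated with them.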

The Plex is only a database. To bring that database to life, two other components are required. The first of these is the ‘Sharer’. The Sharer, as the name implies, makes sure that something to be shared – be it a string of text, or a link, or a video, or a blog post, or whatever – ends up going out over the negotiated channels. The Sharer is built out of a set of Python modules, with each particular sharing service handled by its own module. This means that there is no limit or artificial constraint on what kinds of services Plexus can share with.

Conversely, the third component, the Listener, monitors all of the negotiated channels for any activity by any of the connections in the Plex. When the Listener hears something, it sends that to the user – to be displayed or saved or ignored according to the needs of the moment. Like the Sharer, the Listener is also a set of Python modules, with each monitored service handled by its own module. The Listener should be able to listen to anything that has a clearly defined interface.
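The module-per-service pattern behind the Sharer (and, symmetrically, the Listener) might look something like this; the interface is a guess, and the ‘sharing’ here is just string formatting rather than real API calls:

```python
class Sharer:
    """Base class: one subclass per sharing service."""
    def share(self, item):
        raise NotImplementedError

class TwitterSharer(Sharer):
    def share(self, item):
        # A real module would call the Twitter API; here we just truncate and format.
        return "tweet: " + item[:140]

class SMTPSharer(Sharer):
    def __init__(self, recipient):
        self.recipient = recipient
    def share(self, item):
        # A real module would hand this off to smtplib.
        return "mail to %s: %s" % (self.recipient, item)

# Fan one shared item out over every negotiated outgoing channel.
outgoing = [TwitterSharer(), SMTPSharer("anthony@example.com")]
results = [s.share("Plexus is alive") for s in outgoing]
```

Because each service lives in its own module behind a common interface, adding support for a new service never touches the core.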

When Plexus starts up, it reads through the Plex, instancing the appropriate Sharer and Listener objects on a connection-by-connection basis. Everything after initialization is event-driven: the Plexus user shares something, or the Listener hears something and offers that to the Plexus user.
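In sketch form, that start-up pass walks the Plex once and sorts each connection point into a Sharer or a Listener; the dictionary layout below is illustrative only:

```python
def boot(plex):
    """Walk the Plex, instancing per-connection Sharers and Listeners."""
    sharers, listeners = [], []
    for entry in plex:
        for point in entry["points"]:
            if point["direction"] == "outgoing":
                sharers.append((entry["name"], point["protocol"]))
            else:
                listeners.append((entry["name"], point["protocol"]))
    return sharers, listeners

plex = [
    {"name": "Nick", "points": [{"protocol": "twitter", "direction": "incoming"}]},
    {"name": "Anthony", "points": [{"protocol": "smtp", "direction": "outgoing"}]},
]
sharers, listeners = boot(plex)
```

After this pass, nothing happens until an event fires: the user shares, or a Listener hears.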

That’s it. That’s the whole of the design. As always, the devil is in the details, but the essential architecture will probably remain unchanged. Plexus creates your own, self-managed social network – entirely self-contained, yet also acting as a connected node within a broader network. Because Plexus functions as plumbing – wiring together social services that haven’t been designed to talk to one another – it performs a service that is badly needed, filling a growing void. Plexus is your own plumbing, under your own control.

Let’s talk through a use case. I give a lot of lectures, and I make sure to put my contact details – email, blog and Twitter – on my slides. I meet two people at a lecture – we’ll call one of them Nick, and the other one Anthony. (Those names just came to me.) Nick is an affable person; he just wants to be able to follow all of my output, as I put it out. All he needs is a list of the dozen-or-so public contact points where I present myself. That’d be my name, the six or seven blogs I write, my Twitter feed, my Posterous, my YouTube account, my Viddler account, and so forth. He gets that nugget of data off of markpesce.com/markpesce.plx – it’s basically a nice little bit of JSON (I don’t care for XML, but you can microformat to your heart’s content) that he can drop directly into Plexus, where it will go into the Plex. As the Plex digests it, this nugget instances the necessary Listeners. Now, whenever I say anything – anywhere – Nick knows about it. Which makes Nick happy.
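That nugget of JSON might look something like the following – the exact schema and the addresses are my invention, not the real file:

```python
import json

# A guess at the shape of a .plx nugget: public contact points as JSON.
PLX = """{
  "name": "Mark Pesce",
  "points": [
    {"protocol": "twitter", "direction": "incoming", "address": "@mpesce"},
    {"protocol": "rss", "direction": "incoming", "address": "http://example.com/feed"}
  ]
}"""

nugget = json.loads(PLX)
# Each incoming point tells the receiving Plexus which Listener to instance.
protocols = [p["protocol"] for p in nugget["points"]]
```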

Anthony is a different story. He’s a l33t user, and doesn’t want to be forced to rub shoulders with the hoi polloi at any of the normal social web services. Instead, Anthony wants to get a personally-addressed email from me every time I have something to share. Apparently he’s developed some excellent email filtering and management tools, so that even if I get quite chatty, it won’t clog up his inbox. So, he negotiates with me – Plexus-to-Plexus – and goes into my Plex as a contact, so that when I instance my Sharers, one is specifically set up to send him anything I share via SMTP. He doesn’t have to do anything to his Plexus, because he’s not using his Plexus to listen to me.

Use cases are all the more meaningful when they’re backed up by working code. Hence, I went back to the code mines last weekend – with a spring in my step and a song in my heart – and created a very, very embryonic version of Plexus. In just a little over two days, I created Sharer modules for Twitter, Posterous, Tumblr and SMTP, and Listener modules for Twitter and RSS. I reckoned that would be sufficient for the purposes of a demonstration – though if I’d had more time I could easily have wired in a few hundred other web social services.

There you go. That’s Plexus. The project is open source – after all, why would you trust a social network when you can’t inspect the code?

III: How Not To Be Seen

Plexus is grass-roots, bottom-up, and radically decentralized. That means the big boys will probably try to ignore it. Social media isn’t about the people, after all. It’s about humungous accumulations of capital going hand-in-hand with impossibly large collections of data, and, somewhere in the background, all the spooks, reading the paper trail. Social media is an instrument of control, the latest and the greatest. Sit still, read your feed, and comply.

But what if we refuse to comply? Is that even an option? Is it possible to be disconnected and influential? That’s the Faustian bargain being offered to us: join with the collective and you will be heard. And managed. And herded. Or suit yourself, and weep and gnash your teeth in the outer darkness. But in that Interzone, outside the smooth functioning of power, what happens when we connect there?

Reflect back on March of 2000. Napster, the centralized filesharing network, was facing shutdown by court order. A different crew created a decentralized filesharing tool, known as Gnutella, releasing both the tool and the source code to the world on March 14th. When AOL/Time Warner – parent company of the folks who wrote Gnutella – found out about it and put a stop to the source code release, it was too late. It couldn’t be recalled. The bomb couldn’t be un-invented. The music industry is more authentic than it was a decade ago, more open to innovation, to outsiders, to diversity and heterogeneity. All because a few hackers decided to change the way people share their music.

History never repeats, but it does rhyme. We share everything now; we worry that we overshare. Now it’s time to take our sharing to the next level. We need a social 2.0, something that reflects what we’ve learned in the past half-dozen years. That’s not just a slew of new services. That’s an attitude change. Consider: the wiki was invented in 1995. It’s Precambrian web tech. But we didn’t start using wikis until after 2001, when Wikipedia began to take off. Why? It took us a while – and a lot of interactions – to understand how to use the tools on offer. Social technology is uniquely potent – so much so that we’ll be learning its strengths and weaknesses for a decade or more. The time has come to step out, seize the means of communication, and make them our own.

I reckon you can now understand why Python was such an obvious choice for Plexus. In no other language, with no other community, is the idea of sharing so much at the core. There is a Python module or code sample to do nearly every task under the sun, precisely because sharing is a core ethic of the Python community. Python is the language of the Web because it lends itself to the same sharing that the Web fosters. Python is the language of Plexus because Plexus needs to inherit all of Python’s best qualities, needs to be straightforward and open and flexible and extensible and easily shared. I need to be able to drop a Plexus module into an email and know, at the other end, that it will just work. ‘Take this,’ I’ll say, ‘and feed it to your Plexus.’ You’ll do that, and suddenly you’ll find that we have a secure, obscure and nearly invisible means of sharing – a darknet, how not to be seen – that can be as private and personal or open and public as we agree it should be. And you can turn around, think up something else, and mail that to me, or to someone else, or to the world.

The social web must be a social project, an opportunity to embody exactly what we’re trying to create as we are creating it. It’s the ultimate dogfooding. Success requires a willing surrender that rejoices in cooperation.

So here it is. This is the best I can do. It may be the best that I will ever do. I place it before you this morning, a humble offering, written in a language that I barely know, but which I’ve used to express my highest aspirations. Plexus is naked, newborn, and needs help. It will only benefit from your input, comments, recommendations, pointers and critiques. It is an idea that can only grow and mature as it is shared. That’s what this is all about. It always has been.

On the 18th of October in 2004, a UK cable channel, SkyOne, broadcast the premiere episode of Battlestar Galactica, writer-producer Ron Moore’s inspired revisioning of the decidedly campy 70s television series. SkyOne broadcast the episode as soon as it came off the production line, but its US production partner, the SciFi Channel, decided to hold off until January – a slow month for television – before airing the episodes. The audience for Battlestar Galactica, young and technically adept, made digital recordings of the broadcasts as they went to air, cut out the commercial breaks, then posted them to the Internet.

For an hour-long television programme, a lot of data needs to be dragged across the Internet, enough to clog up even the fastest connection. But these young science fiction fans used a new tool, BitTorrent, to speed the bits on their way. BitTorrent allows a large number of computers (in this case, over 10,000 computers were involved) to share the heavy lifting. Each of the computers downloaded pieces of Battlestar Galactica, and as each got a piece, it offered that piece up to any other computer which wanted a copy. Like a forest of hands each trading puzzle pieces, each computer quickly assembled a complete copy of the show.
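A toy simulation of that piece-trading – my own illustration, with none of BitTorrent's real mechanics such as choking, tit-for-tat, or rarest-first selection – shows how every peer ends up with a complete copy:

```python
# Toy swarm: one seeder plus four downloaders trading pieces of an 8-piece file.
PIECES = set(range(8))
peers = [set(PIECES)] + [set() for _ in range(4)]  # peers[0] is the seeder

rounds = 0
while not all(p == PIECES for p in peers):
    for p in peers:
        missing = PIECES - p
        if missing:
            piece = min(missing)
            # Fetch it from any peer that already holds it (the seeder always does).
            if any(piece in q for q in peers if q is not p):
                p.add(piece)
    rounds += 1
```

Each downloader gains one piece per round, so the whole swarm completes in as many rounds as there are pieces – and every completed peer becomes another source for the rest.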

All of this happened within a few hours of Battlestar Galactica going to air. That same evening, on the other side of the Atlantic, American fans watched the very same episode that their fellow fans in the UK had just viewed. They liked what they saw, and told their friends, who also downloaded the episode, using BitTorrent. Within just a few days, perhaps a hundred thousand Americans had watched the show.

US cable networks regularly count their audience in hundreds of thousands. A million would be considered incredibly good. Executives for SciFi Channel ran the numbers and assumed that the audience for this new and very expensive TV series had been seriously undercut by this international trafficking in television. They couldn’t have been more wrong. When Battlestar Galactica finally aired, it garnered the biggest audiences SciFi Channel had ever seen – well over 3 million viewers.

How did this happen? Word of mouth. The people who had the chops to download Battlestar Galactica liked what they saw, and told their friends, most of whom were content to wait for SciFi Channel to broadcast the series. The boost given the series by its core constituency of fans helped it over the threshold from cult classic into a genuine cultural phenomenon. Battlestar Galactica has become one of the most widely-viewed cable TV series in history; critics regularly lavish praise on it, and yes, fans still download it, all over the world.

Although it might seem counterintuitive, the widespread “piracy” of Battlestar Galactica was instrumental to its ratings success. This isn’t the only example. The BBC’s Doctor Who, leaked to BitTorrent by a (quickly fired) Canadian editor, drummed up another huge audience. It seems, in fact, that “piracy” is good. Why? We live in an age of fantastic media oversupply: there are always too many choices of things to watch, or listen to, or play with. But, if one of our friends recommends something, something they loved enough to spend the time and effort downloading, that carries a lot of weight.

All of this sharing of media means that the media titans – the corporations which produce and broadcast most of the television we watch – have lost control over their own content. Anything broadcast anywhere, even just once, becomes available everywhere, almost instantaneously. While that’s a revolutionary development, it’s merely the tip of the iceberg. The audience now has the ability to share anything they like – whether produced by a media behemoth, or made by themselves. YouTube has allowed individuals (some talented, some less so) to reach audiences numbering in the hundreds of millions. The attention of the audience, increasingly focused on what the audience makes for itself, has been draining ratings away from broadcasters, a drain which accelerates every time someone posts something funny, or poignant, or instructive to YouTube.

The mass media hasn’t collapsed, but it has been hollowed out. The audience occasionally tunes in – especially to watch something newsworthy, in real-time – but they’ve moved on. It’s all about what we’re saying directly to one another. The individual – every individual – has become a broadcaster in his or her own right. The mechanics of this person-to-person sharing, and the architecture of these “New Networks”, are driven by the oldest instincts of humankind.

The New Networks

Human beings are social animals. Long before we became human – or even recognizably close – we became social. For at least 11 million years, before our ancestors broke off from the gorillas and chimpanzees, we cultivated social characteristics. In social groups, these distant forbears could share the tasks of survival: finding food, raising young, and self-defense. Human babies, in particular, take many years to mature, requiring constantly attentive parenting – time stolen away from other vital activities. Living in social groups helped ensure that these defenseless members of the group grew to adulthood. The adults who best expressed social qualities bore more and healthier children. The day-to-day pressures of survival on the African savannahs drove us to be ever more adept with our social skills.

We learned to communicate with gestures, then (no one knows just how long ago) we learned to speak. Each step forward in communication reinforced our social relationships; each moment of conversation reaffirms our commitment to one another, every spoken word an unspoken promise to support, defend and extend the group. As we communicate, whether in gestures or in words, we build models of one another’s behavior. (This is why we can judge a friend’s reaction to some bit of news, or a joke, long before it comes out of our mouths.) We have always walked around with our heads full of other people, a tidy little “social network,” the first and original human network. We can hold about 150 other people in our heads (chimpanzees can manage about 30, gorillas about 15, but we’ve got extra brain capacity they lack to help us with that), so, for 90% of human history, we lived in tribes of no more than about 150 individuals, each of us in constant contact, a consistent communication building and reinforcing bonds which would make us the most successful animals on Earth. We learned from one another, and shared whatever we learned; a continuity of knowledge passed down seamlessly, generation upon generation, a chain of transmission that still survives within the world’s indigenous communities. Social networks are the gentle strings which connect us to our origins.

This is the old network. But it’s also the new network. A few years ago, researcher Mizuko Ito studied teenagers in Japan and found that these kids – all of whom owned mobile telephones – sent as many as a few hundred text messages, every single day, to the same small circle of friends. These messages could be intensely meaningful (the trials and tribulations of adolescent relationships), or just pure silliness; the content mattered much less than that constant reminder and reinforcement of the relationship. This “co-presence,” as she named it, represents the modern version of an incredibly ancient human behavior, a behavior unshackled by technology to span vast distances. These teens could send a message next door, or halfway across the country. Distance mattered not: the connection was all.

In 2001, when Ito published her work, many dismissed her findings as a by-product of those “wacky Japanese” and their technophile lust for new toys. But now, teenagers everywhere in the developed world do the same thing, sending tens to hundreds of text messages a day. When they run out of money to send texts (which they do, unless they have very wealthy parents), they simply move online, using instant messaging and MySpace and other techniques to continue the never-ending conversation.

We adults do it too, though we don’t recognize it. Most of us who live some of our lives online receive a daily dose of email: we flush the spam, answer the requests and queries of our co-workers, deal with any family complaints. What’s left over, from our friends, consists more and more of nothing but a link to something – a video, a website, a joke – somewhere on the Internet. This new behavior, actually as old as we are, dates from the time when sharing information ensured our survival. Each time we find something that piques our interest, we immediately think, “hmm, I bet so-and-so would really like this.” That’s the social network in our heads, grinding away, filtering our experience against our sense of our friends’ interests. We then hit the “forward” button, sending the tidbit along, reinforcing that relationship, reminding them that we’re still here – and still care. These “Three Fs” – find, filter and forward – have become the cornerstone of our new networks, information flowing freely from person to person, in weird and unpredictable ways, unbounded by geography or simultaneity (a friend can read an email weeks after you send it), but always according to long-established human behaviors.

One thing is different about the new networks: we are no longer bounded by the number of individuals we can hold in our heads. Although we’ll never know more than 150 people well enough for them to take up some space between our ears (unless we grow huge, Spock-like minds), our new tools allow us to reach out and connect with casual acquaintances, or even people we don’t know. Our connectivity has grown into “hyperconnectivity”, and a single individual, with the right message, at the right time, can reach millions, almost instantaneously.

This simple, sudden, subtle change in culture has changed everything.

I. The Nuclear Option

On the 12th of May in 2008, a severe earthquake shook a vast area of western China, centered in the Chinese province of Sichuan. Once the shaking stopped – in some places, it lasted as long as three minutes – people got up (when they could, as many lay under collapsed buildings), dusted themselves off, and surveyed the damage. Those who still had power turned to their computers to find out what had happened, and share what had happened to them. Some of these people used so-called “social messaging services”, which allowed them to share a short message – similar to a text message – with hundreds or thousands of acquaintances in their hyperconnected social networks.

Within a few minutes, people on every corner of the planet knew about the earthquake – well in advance of any reports from Associated Press, the BBC, or CNN. This network of individuals, sharing information with each other through their densely hyperconnected networks, spread the news faster, more effectively, and more comprehensively than any global broadcaster.

This had happened before. On 7 July 2005, the first pictures of the wreckage caused by bombs detonated within London’s subway system found their way onto Flickr, an Internet photo-sharing service, long before being broadcast by the BBC. A survivor, walking past one of the destroyed subway cars, took snaps with her mobile and sent them directly to Flickr, where everyone on the planet could have a peek. One person can reach everyone else, if what they have to say (or show) merits such attention, because that message, even if seen by only one other person, will be forwarded on and on, through our hyperconnected networks, until it has been received by everyone for whom that message has salience. Just a few years ago, it might have taken hours (or even days) for a message to traverse the Human Network. Now it happens in a few seconds.

Most messages don’t have a global reach, nor do they need one. It is enough that messages reach interested parties, transmitted via the Human Network, because just that alone has rewritten the rules of culture. An intemperate CEO screams at a consultant, who shares the story through his network: suddenly, no one wants to work for the CEO’s firm. A well-connected blogger gripes about problems with his cable TV provider, a story forwarded along until – just a half-hour later – he receives a call from a vice-president of that company, contrite with apologies and promises of an immediate repair. An American college student, arrested in Egypt for snapping some photos in the wrong place at the wrong time, text messages a single word – “ARRESTED” – to his social network, and 24 hours later, finds himself free, escorted from jail by a lawyer and the American consul, because his network forwarded this news along to those who could do something about his imprisonment.

Each of us, thoroughly hyperconnected, brings the eyes and ears of all of humanity with us, wherever we go. Nothing is hidden anymore, no secret safe. We each possess a ‘nuclear option’ – the capability to go wide, instantaneously, bringing the hyperconnected attention of the Human Network to a single point. This dramatically empowers each of us, a situation we are not at all prepared for. A single text message, forwarded perhaps a million times, organized the population of Xiamen, a coastal city in southern China, against a proposed chemical plant – despite the best efforts of the Chinese government to censor the message as it passed through the state-run mobile telephone network. Another message, forwarded around a community of white supremacists in Sydney’s southern suburbs, led directly to the Cronulla Riots, two days of rampage and attacks against Sydney’s Lebanese community, in December 2005.

When we watch or read stories about the technologies of sharing, they almost always center on recording companies and film studios crying poverty, of billions of dollars lost to ‘piracy’. That’s a sideshow, a distraction. The media companies have been hurt by the Human Network, but that’s only a minor side-effect of the huge cultural transformation underway. As we plug into the Human Network, and begin to share that which is important to us with others who will deem it significant, as we learn to “find the others”, reinforcing the bonds to those others every time we forward something to them, we dissolve the monolithic ties of mass media and mass culture. Broadcasters, who spoke to millions, are replaced by the Human Network: each of us, networks in our own right, conversing with a few hundred well-chosen others. The cultural consensus, driven by the mass media, which bound 20th-century nations together in a collective vision, collapses into a Babel-like configuration of social networks which know no cultural or political boundaries.

The bomb has already dropped. The nuclear option has been exercised. The Human Network brought us together, and broke us apart. But in these fragments and shards of culture we find an immense vitality, the protean shape of the civilization rising to replace the world we have always known. It all hinges on the transition from sharing to knowing.

In mid-1994, sometime shortly after Tony Parisi and I had fused the new technology of the World Wide Web to a 3D visualization engine, to create VRML, we paid a visit to the University of California, Santa Cruz, about 120 kilometers south of San Francisco. Two UCSC students wanted to pitch us on their own web media project. The Internet Underground Music Archive, or IUMA, featured a simple directory of artists, complete with links to MP3 files of these artists’ recordings. (Before I go any further, I should state that they had all the necessary clearances to put musical works up onto the Web – IUMA was not violating anyone’s copyrights.) The idea behind IUMA was simple enough, the technology absolutely straightforward – and yet, for all that, it was utterly revolutionary. Anyone, anywhere could surf over to the IUMA site, pick an artist, then download a track and play it.

This was in the days before broadband, so downloading a multi-megabyte MP3 recording could take upwards of an hour per track – something that seems ridiculous today, but was still so potent back in 1994 that IUMA immediately became one of the most popular sites on the still-quite-tiny Web. The founders of IUMA – Rob Lord and Jon Luini – wanted to create a place where unsigned or non-commercial musicians could share their music with the public in order to reach a larger audience, gain recognition, and perhaps even end up with a recording deal. IUMA was always better as a proof-of-concept than as a business opportunity, but the founders did get venture capital, and tried to make a go of selling music online. However, given the relative obscurity of the musicians on IUMA, and the pre-iPod lack of pervasive MP3 players, IUMA ran through its money by 2001, shuttering during the dot-com implosion of the same year. Despite that, every music site which followed IUMA, legal and otherwise, from Napster to Rhapsody to iTunes, has walked in its footsteps. Now, nearing the end of the first decade of the 21st century, we have a broadband infrastructure capable of delivering MP3s, and several hundred million devices which can play them. IUMA was a good idea, but five years too early.

Just forty-eight hours ago, a new music service, calling itself Qtrax, aborted its international launch – though it promises to be up “real soon now.” Qtrax also promises that anyone, anywhere will be able to download any of its twenty-five million songs perfectly legally, and listen to them practically anywhere they like – along with an inserted advertisement. Using peer-to-peer networking to relieve the burden on its own servers, and Digital Rights Management, or DRM, Qtrax ensures that there are no abuses of these pseudo-free recordings.

Most of the words that I used to describe Qtrax in the preceding paragraph didn’t exist in common usage when IUMA disappeared from the scene in the first year of this millennium. The years between IUMA and Qtrax are a geological age in Internet time, so it’s a good idea to walk back through that era and have a good look at the fossils which speak to how we evolved to where we are today.

In 1999, a curly-haired undergraduate at Boston’s Northeastern University built a piece of software that allowed him to share his MP3 collection with a few of his friends on campus, and allowed him access to their MP3s. The software scanned the MP3s on each user’s hard drive, publishing the list to a shared database, allowing each person using the software to download an MP3 from someone else’s hard drive to his own. This is simple enough, technically, but Shawn Fanning’s Napster created a dual-headed revolution. First, it was the killer app for broadband: using Napster on a dial-up connection was essentially impossible. Second, it completely ignored the established systems of distribution used for recorded music.
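The architecture just described – a central catalogue, peer-to-peer transfers – can be sketched in a few lines of Python. Every name here (class, methods, peer addresses) is invented for illustration; the real Napster protocol was far richer:

```python
# Toy Napster-style architecture: clients register their local tracks with a
# central index, then ask it which peers hold a given track. The index never
# stores any music -- the file transfer itself happens peer-to-peer.

class CentralIndex:
    def __init__(self):
        self.tracks = {}  # track name -> set of peer addresses holding it

    def register(self, peer, track_names):
        for name in track_names:
            self.tracks.setdefault(name, set()).add(peer)

    def lookup(self, name):
        # Return the peers holding the track, in a stable order.
        return sorted(self.tracks.get(name, set()))

index = CentralIndex()
index.register("alice:6699", ["song_a.mp3", "song_b.mp3"])
index.register("bob:6699", ["song_b.mp3"])
print(index.lookup("song_b.mp3"))  # both peers hold this track
```

Note the structural weakness this sketch makes obvious: everything flows through one `CentralIndex` object. Shut that down – as the courts eventually did – and the whole network goes dark, even though the music itself sits untouched on millions of hard drives.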

This second point is the one which has the most relevance to my talk this morning; Napster had an entirely unpredicted effect on the distribution methodologies which had been the bedrock of the recording industry for the past hundred years. The music industry grew up around the licensing, distribution and sale of a physical medium – a piano roll, a wax recording, a vinyl disk, a digital compact disc. However, when the recording industry made the transition to CDs in the 1980s (and reaped windfall profits as the public purchased new copies of older recordings) they also signed their own death warrants. Digital recordings are entirely ephemeral, composed only of mathematics, not of matter. Any system which transmitted the mathematics would suffice for the distribution of music, and the compact disc met this need only until computers were powerful enough to play the more compact MP3 format, and broadband connections were fast enough to allow these smaller files to be transmitted quickly. Napster leveraged both of these criteria – the mathematical nature of digitally-encoded music and the prevalence of broadband connections on America’s college campuses – to produce a sensation.

In its earliest days, Napster reflected the tastes of its college-age users, but, as word got out, the collection of tracks available through Napster grew more varied and more interesting. Many individuals took recordings that were only available on vinyl, and digitally recorded them specifically to post them on Napster. Napster quickly had a more complete selection of recordings than all but the most comprehensive music stores. This only attracted more users to Napster, who added more oddities from their own collections, which attracted more users, and so on, until Napster came to be seen as the authoritative source for recorded music.

Given that all of this “file-sharing”, as it was termed, happened outside of the economic systems of distribution established by the recording industry, it was taking money out of their pockets – billions of dollars a year, had all of those downloads been converted into sales. (Studies indicate this was unlikely – college students have ever been poor.) The recording industry launched a massive lawsuit against Napster in 2000, forcing the service to shutter in 2001, just as it reached an incredible peak of 14 million simultaneous users, out of a worldwide broadband population of probably only 100 million. This means that one in seven computers connected to the broadband internet was using Napster just as it was being shut down.

Here’s where it gets more interesting: the recording industry thought they’d brought the horse back into the barn. What they hadn’t realized was that the gate had burnt down. The millions of Napster users had their appetites whetted by a world where an incredible variety of music was instantaneously available with a few clicks of the mouse. In the absence of Napster, that pressure remained, and it only took a few weeks for a few enterprising engineers to create a successor to Napster, known as Gnutella, which provided the same service as Napster, but used a profoundly different technology for its filesharing. Where Napster had all of its users register their tracks within a centralized database (which disappeared when Napster was shut down), Gnutella created a vast, amorphous, distributed database, spread out across all of the computers running Gnutella. Gnutella had no center to strike at, and therefore could not be shut down.
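The contrast can be made concrete with a rough sketch of a Gnutella-style flood search: no node consults any index; a query simply propagates from neighbour to neighbour, decrementing a time-to-live at each hop. The classes and three-node topology below are purely hypothetical simplifications:

```python
# Toy Gnutella-style search: each node knows only its immediate neighbours,
# and a query floods outward hop by hop with a TTL. Because there is no
# central database, there is no single machine whose removal kills the network.

class Node:
    def __init__(self, name, tracks):
        self.name = name
        self.tracks = set(tracks)
        self.neighbours = []

    def search(self, track, ttl=3, seen=None):
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return []                      # already visited, or query expired
        seen.add(self.name)
        hits = [self.name] if track in self.tracks else []
        for n in self.neighbours:          # flood the query onward
            hits += n.search(track, ttl - 1, seen)
        return hits

a, b, c = Node("a", []), Node("b", []), Node("c", ["rare.mp3"])
a.neighbours, b.neighbours = [b], [c]
print(a.search("rare.mp3"))  # found two hops away, without any central index
```

Flooding is far less efficient than a central lookup – real Gnutella traffic was notoriously chatty – but that inefficiency was the price of having no center to strike at.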

It is because of the actions of the recording industry that Gnutella was developed. If legal pressure hadn’t driven Napster out of business, Gnutella would not have been necessary. The recording industry turned out to be its own worst enemy, because it turned a potentially profitable relationship with its customers into an ever-escalating arms race of file-sharing tools, lawsuits, and public relations nightmares.

Once Gnutella and its descendants – Kazaa, Limewire, and Acquisition – arrived on the scene, the listening public had wholly taken control of the distribution of recorded music. Every attempt to shut down these ever-more-invisible “darknets” has ended in failure and only spurred the continued growth of these networks. Now, with Qtrax, the recording industry is seeking to make an accommodation with an audience which expects music to be both free and freely available, falling back on advertising revenue to recover some of their production costs.

At first, it seemed that filmic media would be immune from the disruptions that have plagued the recording industry – films and TV shows, even when heavily compressed, are very large files, on the order of hundreds of millions of bytes of data. Systems like Gnutella, which allow you to transfer a file directly from one computer to another are not particularly well-suited to such large file transfers. In 2002, an unemployed programmer named Bram Cohen solved that problem definitively with the introduction of a new file-sharing system known as BitTorrent.

BitTorrent is a bit mysterious to most everyone not deeply involved in technology, so a brief explanation of its inner workings is in order. Suppose, for a moment, that I have a short film, just 1000 frames in length, digitally encoded on my hard drive. If I wanted to share this film with each of you via Gnutella, you’d have to wait in a queue as I served up the film, time and time again, to each of you. The last person in the queue would wait quite a long time. But if, instead, I gave the first ten frames of the film to the first person in the queue, and the second ten frames to the second person in the queue, and the third ten frames to the third person in the queue, and so on, until I’d handed out all thousand frames, all I need do at that point is tell each of you that each of your “peers” has the missing frames, and that you need to get them from those peers. A flurry of transfers would result, as each peer picked up the pieces it needed to make a complete whole from other peers. From my point of view, I only had to transmit the film once – something I can do relatively quickly. From your point of view, none of you had to queue to get the film – because the pieces were scattered widely around, in little puzzle pieces, that you could gather together on your own.

That’s how BitTorrent works. It is both incredibly efficient and incredibly resilient – peers can come and go as they please, yet the total number of peers guarantees that somewhere out there is an entire copy of the film available at all times. And, even more perversely, the more people who want copies of my film, the easier it is for each successive person to get a copy of the film – because there are more peers to grab pieces from. This group of peers, known as a “swarm”, is the most efficient system yet developed for the distribution of digital media. In fact, a single, underpowered computer, on a single, underpowered broadband link can, via BitTorrent, create a swarm of peers. BitTorrent allows anyone, anywhere, to distribute any large media file at essentially no cost.
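The thousand-frame thought experiment above can be simulated in a few lines of Python. This is a toy model with invented names – real BitTorrent adds trackers, piece hashing, and choking algorithms – but it shows the essential move: the seeder transmits the film exactly once, then the peers complete each other:

```python
import random

# Toy swarm: a seeder hands out disjoint ten-frame chunks of a 1000-frame
# film, one chunk at a time, round-robin across the peers. The peers then
# swap missing frames among themselves until every peer holds all 1000.

FRAMES = set(range(1000))
CHUNK = 10

def simulate(num_peers=10):
    chunks = [set(range(i, i + CHUNK)) for i in range(0, 1000, CHUNK)]
    peers = [set() for _ in range(num_peers)]
    # The seeder sends each chunk to exactly one peer: the film leaves the
    # seeder's machine only once, scattered across the swarm.
    for i, chunk in enumerate(chunks):
        peers[i % num_peers] |= chunk
    rounds = 0
    while any(p != FRAMES for p in peers):
        rounds += 1
        for p in peers:
            donor = random.choice(peers)       # ask a random fellow peer
            missing = donor - p
            if missing:
                # fetch up to one chunk's worth of missing frames
                p |= set(list(missing)[:CHUNK])
    return peers, rounds

peers, rounds = simulate()
print(f"swarm complete after {rounds} rounds of peer exchange")
```

Notice that adding more peers does not slow the simulation down proportionally: each new peer is also a new donor, which is exactly why a bigger swarm makes each successive download easier rather than harder.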

It is estimated that upwards of 60% of all traffic on the Internet is composed of BitTorrent transfers. Much of this traffic is perfectly legitimate – software, such as the free Linux operating system, is distributed using BitTorrent. Still, it is well known that movies and television programmes are also distributed using BitTorrent, in violation of copyright. This became absolutely clear on the 14th of October 2004, when Sky Broadcasting in the UK premiered the first episode of Battlestar Galactica, Ron Moore’s dark re-imagining of the famously schlocky 1970s TV series. Because the American distributor, SciFi Channel, had chosen to hold off until January to broadcast the series, fans in the UK recorded the programmes and posted them to BitTorrent for American fans to download. Hundreds of thousands of copies of the episodes circulated in the United States – and conventional thinking would reckon that this would seriously impact the ratings of the show upon its US premiere. In fact, precisely the opposite happened: the show was so well written and produced that the word-of-mouth engendered by all this mass piracy created an enormous broadcast audience for the series, making it the most successful in SciFi Channel history.

In the age of BitTorrent, piracy is not necessarily a menace. The ability to “hyperdistribute” a programme – using BitTorrent to send a single copy of a programme to millions of people around the world efficiently and instantaneously – creates an environment where the more something is shared, the more valuable it becomes. This seems counterintuitive, but only in the context of systems of distribution which were part-and-parcel of the scarce exhibition outlets of theaters and broadcasters. Once everyone, everywhere had the capability to “tune into” a BitTorrent broadcast, the economics of distribution were turned on their heads. The distribution gatekeepers, stripped of their power, whinge about piracy. But, as was the case with recorded music, the audience has simply asserted its control over distribution. This is not about piracy. This is about the audience getting whatever it wants, by any means necessary. They have the tools, they have the intent, and they have the power of numbers. It is foolishness to insist that the future will be substantially different from the world we see today. We cannot change the behavior of the audience. Instead, we must all adapt to things as they are.

But things as they are have changed more than you might know. This is not the story of how piracy destroyed the film industry. This is the story of how the audience became not just the distributors but the producers of their own content, and, in so doing, brought down the high walls which separate professionals from amateurs.

II. The Barbarian Hordes Storm the Walls

Without any doubt the most outstanding success of the second phase of the Web (known colloquially as “Web 2.0”) is the video-sharing site YouTube. Founded in early 2005, as of yesterday YouTube was the third most visited site on the entire Web, trailing only Yahoo! and YouTube’s parent, Google. There are a lot of videos on YouTube. I’m not sure if anyone knows quite how many, but they easily number in the tens of millions, quite likely approaching a hundred million. Another hundred thousand videos are uploaded each day; YouTube grows by three million videos a month. That’s a lot of video, difficult even to contemplate. But an understanding of YouTube is essential for anyone in the film and television industries in the 21st century, because, in the most pure, absolute sense, YouTube is your competitor.

Let me unroll that statement a bit, because I don’t wish it to be taken as simply as it sounds. It’s not that YouTube is competing with you for dollars – it isn’t, at least not yet – but rather, it is competing for attention. Attention is the limiting factor for the audience; we are cashed up but time-poor. Yet, even as we’ve become so time-poor, the number of options for how we can spend that time entertaining ourselves has grown so grotesquely large as to be almost unfathomable. This is the real lesson of YouTube, the one I want you to consider in your deliberations today. In just the past three years we have gone from an essential scarcity of filmic media – presented through limited and highly regulated distribution channels – to a hyperabundance of viewing options.

This hyperabundance of choices, it was supposed until recently, would lead to a sort of “decision paralysis,” whereby the viewer would be so overwhelmed by the number of choices on offer that they would simply run back, terrified, to the highly regularized offerings of the old-school distribution channels. This has not happened; in fact, the opposite has occurred: the audience is fragmenting, breaking up into ever-smaller “microaudiences”. It is these microaudiences that YouTube speaks directly to. The language of microaudiences is YouTube’s native tongue.

In order to illustrate the transformation that has completely overtaken us, let’s consider a hypothetical fifteen year-old boy, home after a day at school. He is multi-tasking: texting his friends, posting messages on Bebo, chatting away on IM, surfing the web, doing a bit of homework, and probably taking in some entertainment. That might be coming from a television, somewhere in the background, or it might be coming from the Web browser right in front of him. (Actually, it’s probably both simultaneously.) This teenager has a limited suite of selections available on the telly – even with satellite or cable, there won’t be more than a few hundred choices on offer, and he’s probably settled for something that, while not incredibly satisfying, is good enough to play in the background.

Meanwhile, on his laptop, he’s viewing a whole series of YouTube videos that he’s received from his friends; they’ve found these videos in their own wanderings, and immediately forwarded them along, knowing that he’ll enjoy them. He views them, and laughs, he forwards them along to other friends, who will laugh, and forward them along to other friends, and so on. Sharing is an essential quality of all of the media this fifteen year-old has ever known. In his eyes, if it can’t be shared, a piece of media loses most of its value. If it can’t be forwarded along, it’s broken.

For this fifteen year-old, the concept of a broadcast network no longer exists. Television programmes might be watched as they’re broadcast over the airwaves, but more likely they’re spooled off of a digital video recorder, or downloaded from the torrent and watched where and when he chooses. The broadcast network has been replaced by the social network of his friends, all of whom are constantly sharing the newest, coolest things with one another. The current hot item might be something that was created at great expense for a mass audience, but the relationship between a hot piece of media and its meaningfulness for a microaudience is purely coincidental. All the marketing dollars in the world can foster some brand awareness, but no amount of money will inspire that fifteen year old to forward something along – because his social standing hangs in the balance. If he passes along something lame, he’ll lose social standing with his peers. This factors into every decision he makes, from the brand of runners he wears, to the television series he chooses to watch. Because of the hyperabundance of media – something he takes as a given, not as an incredibly recent development – all of his media decisions are weighed against the values and tastes of his social network, rather than against a scarcity of choices.

This means that the true value of media in the 21st century is entirely personal, and based upon the salience, that is, the importance, of that media to the individual and that individual’s social network. The mass market, with its enforced scarcity, simply does not enter into his calculations. Yes, he might go to the theatre to see Transformers with his mates; but he’s just as likely to download a copy recorded in the movie theatre with an illegally smuggled-in camera that was uploaded to The Pirate Bay a few hours after its release.

That’s today. Now let’s project ourselves five years into the future. YouTube is still around, but now it has more than two hundred million videos (probably much more), all available, all the time, from short-form to full-length features, many of which are now available in high-definition. There’s so much “there” there that it is inconceivable that conventional media distribution mechanisms of exhibition and broadcast could compete. For this twenty year-old, every decision to spend some of his increasingly-valuable attention watching anything is measured against salience: “How important is this for me, right now?” When he weighs the latest episode of a TV series against some newly-made video that is meant only to appeal to a few thousand people – such as himself – that video will win, every time. It more completely satisfies him. As the number of videos on offer through YouTube and its competitors continues to grow, the number of salient choices grows ever larger. His social network, communicating now through Facebook and MySpace and next-generation mobile handsets and iPods and goodness-knows-what-else is constantly delivering an ever-growing and increasingly-relevant suite of media options. He, as a vital node within his social network, is doing his best to give as good as he gets. His reputation depends on being “on the tip.”

When the barriers to media distribution collapsed in the post-Napster era, the exhibitors and broadcasters lost control of distribution. What no one had expected was that the professional producers would lose control of production. The difference between an amateur and a professional – in the media industries – has always centered on the point that the professional sells their work into distribution, while the amateur uses wits and will to self-distribute. Now that self-distribution is more effective than professional distribution, how do we distinguish between the professional and the amateur? This twenty year-old doesn’t know, and doesn’t care.

There is no conceivable way that the current systems of film and television production and distribution can survive in this environment. This is an uncomfortable truth, but it is the only truth on offer this morning. I’ve come to this conclusion slowly, because it seems to spell the death of a hundred year-old industry with many, many creative professionals. In this environment, television is already rediscovering its roots as a live medium, increasingly focusing on news, sport and “event” based programming, such as Pop Idol, where being there live is the essence of the experience. Broadcasting is uniquely designed to support the efficient distribution of live programming. Hollywood will continue to churn out blockbuster after blockbuster, seeking a warmed-over middle ground of thrills and chills which ensures that global receipts will cover the ever-increasing production costs. In this form, both industries will continue for some years to come, and will probably continue to generate nice profits. But the audience’s attentions have turned elsewhere. They’re not returning.

This future almost completely excludes “independent” production, a vague term which basically means any production which takes place outside of the media megacorporations (News Corp, Disney, Sony, Universal and TimeWarner), which increasingly dominate the mass media landscape. Outside of their corporate embrace, finding an audience sufficient to cover production and marketing costs has become increasingly difficult. Film and television have long been losing economic propositions (except for the luckiest), but they’re now becoming financially suicidal. National and regional funding bodies are growing increasingly intolerant of funding productions which cannot find an audience; soon enough that pipeline will be cut off, despite the damage to national cultures. Australia funds the Film Finance Corporation and the Australian Film Council to the tune of a hundred million dollars a year, to ensure that Australian stories are told by Australian voices; but Australians don’t go to see them in the theatres, and don’t buy them on DVD.

The center cannot hold. Instead, YouTube, which founder Steve Chen insists has “no gold standard” of production values, is rapidly becoming the vehicle for independent productions; productions which cost not millions of euros, but hundreds, and which make up for their low production values in salience and in overwhelming numbers. This tsunami of content cannot be stopped or even slowed down; it has nothing to do with piracy (only nine percent of the videos viewed on YouTube are violations of copyright) but reflects the natural accommodation of the audience to an era of media hyperabundance.

What then, is to be done?

III. And The Penny Drops

It isn’t all bad news. But, like a good doctor, I want to give you the bad news right up front: There is no single, long-term solution for film or television production. No panacea. It’s not even entirely clear that the massive Hollywood studios will be able to carry on business as usual for any length of time into the future. Just a decade ago the entire music recording industry seemed impregnable. Now it lies in ruins. To assume that history won’t repeat itself is more than willful ignorance of the facts; it’s bad business.

This means that the one-size-fits-all production-to-distribution model, which all of you have been taught as the orthodoxy of the media industries, is worse than useless; it’s actually blocking your progress because it is effectively keeping you from thinking outside the square. This is a wholly new world, one which is littered with golden opportunities for those able to avail themselves of them. We need to get you from where you are – bound to an obsolete production model – to where you need to be. Let me illustrate this transition with two examples.

In early 2005, producer Rhonda Byrne got a production agreement with Channel NINE, then the number one Australian television network, to make a feature-length television programme about the “law of attraction”, an idea she’d learned of when reading a book published in 1910, The Science of Getting Rich. The interviews and other footage were shot in July and August, and after a few months in the editing suite, she showed the finished production to executives at Channel NINE, who declined to broadcast it, believing it lacked mass appeal. Since Byrne wasn’t going to be getting broadcast fees from Channel NINE to cover her production costs, she negotiated a new deal with NINE, allowing her to sell DVDs of the completed film.

At this point Byrne began spreading news of the film virally, through the communities she thought would be most interested in viewing it; specifically, spiritual and “New Age” communities. People excited by Byrne’s teaser marketing could pay $20 for a DVD copy of the film (with extended features), or pay $5 to watch a streaming version directly on their computer. As the film made its way to its intended audience, word-of-mouth caused business to mushroom overnight. The Secret became a blockbuster, selling millions of copies on DVD. A companion book, also titled The Secret, has sold over two million copies. And that arbiter of American popular taste, Oprah, has featured the film and book on her talk show, praising both to the skies. The film has earned back many, many times its production costs, making Byrne a wealthy woman. She’s already deep into the production of a sequel to The Secret – a film which already has an audience identified and targeted.

Chagrined, the television executives of Channel NINE finally did broadcast The Secret in February 2007. It didn’t do that well. This sums up the paradox of distribution in the age of the microaudience. Clearly The Secret had a massive world-wide audience, but television wasn’t the most effective way to reach them, because this audience was actually a collection of microaudiences, rather than a single, aggregated audience. If The Secret had opened theatrically, it’s unlikely it would have done terribly well; it’s the kind of film that people want to watch more than once, being in equal parts a self-help handbook and a series of inspirational stories. It is well-suited for a direct-to-DVD release – a distribution vehicle that no longer has the stigma of “failure” associated with it. It is also well-suited to cross-media projects, such as books, conferences, streamed delivery, podcasts, and so forth. Having found her audience, Byrne has transformed The Secret into an exceptional money-making franchise, as lucrative, in its own way, and at its own scale, as any Hollywood franchise.

The second example is utterly different from The Secret, yet the fundamentals are strikingly similar. Just last month a production group calling themselves “The League of Peers” released a film titled Steal This Film, Part 2. The first part of this film, released in late 2006, dealt with the rise of file-sharing, and, in particular, with the legal troubles of the world’s largest BitTorrent site, Sweden’s The Pirate Bay. That film, although earnest and coherent, felt as though it was produced by individuals still learning the craft of filmmaking. This latest film looks as professional as any documentary created for BBC’s Horizon or PBS’s Frontline or ABC’s 4Corners. It is slick, well-lit, well-edited, and has a very compelling story to tell about the history of copying – beginning with the invention of the printing press, five hundred years ago. Steal This Film is a political production, a bit of propaganda with a clear bias. This, in itself, is not uncommon in a documentary. The funding and distribution model for this film is what makes it unusual.

Individuals who saw Steal This Film, Part One – which was made freely available for download via BitTorrent – were invited to contribute to the making of the sequel. Nearly five million people downloaded Steal This Film, Part One, so there was a substantial base of contributors to draw from. (I myself donated five dollars after viewing the film. If every viewer had done likewise that would cover the budget of a major Hollywood production!) The League of Peers also approached arts funding bodies, such as the British Documentary Council, with their completed film in hand, the statistics showing that their work reached a large audience, and a roadmap for the second film – this got them additional funding. Now that Steal This Film, Part Two has been released, viewers are again invited to contribute (if they like the film), with a “secret gift” promised for contributions of $15 or more. While the tip jar – literally, busking – may seem a very weird way to fund a film production, it’s likely that Steal This Film, Part Two will find an even wider audience than Part One, and that the coffers of the League of Peers will provide them with enough funds to embark on their next film, The Oil of the 21st Century, which will focus on the evolution of intellectual property into a traded commodity.

I have asked Screen Training Ireland to include a DVD of Steal This Film, Part Two with the materials you received this morning. You’ve been given the DVD version of the film, but I encourage you to download the other versions of the film: the XVID version, for playback on a PC; the iPod version, for portable devices; and the high-definition version, for your visual enjoyment. It’s proof positive that a viable economic model exists for film, even when it is given away. It will not work for all productions, but there is a global community of individuals who are intensely interested in factual works about copyright and intellectual property in the 21st century, who find these works salient, and who are underserved by the media megacorporations, which would not consider it in their own economic best interest to produce or distribute such works. The League of Peers, as part of the community for whom this film is intended, knew how to get the word out about the film (particularly through Boing Boing, the most popular blog in the world, with two million readers a week), and, within a few weeks, nearly everyone who should have heard of the film had heard about it – through their social networks.

Both The Secret and Steal This Film, Part Two are factual works, and it’s clear that this emerging distribution model – which relies on targeting communities of interest – works best with factual productions. One of the reasons that there has been such an upsurge in the production of factual works over the past few years is that these works have been able to build their own funding models upon a deep knowledge of the communities they are talking to – made by microaudiences, for microaudiences. But microaudiences, scaled to global proportions, can easily number in the millions. Microaudiences are perfectly willing to pay for something or contribute to something they consider of particular value and salience; it is a visible thank you, a form of social reinforcement which is very natural within social networks.

What about drama, comedy and animation? Short-form comedy and animation probably have the easiest go of it, because they can be delivered online with an advertising payload of some sort. Happy Tree Friends is a great example of how this works – but it took producers Mondo Media nearly a decade to stumble into a successful economic model. Feature-length comedy and feature-length drama are more difficult nuts to crack, but they are not impossible. Again, the key is to find the communities which will be most interested in the production; this is not always entirely obvious, but the filmmaker should have some idea of the target audience for their film. While in preproduction, these communities need to be wooed and seduced into believing that this film is meant just for them, that it is salient. Productions can be released through complementary distribution channels: a limited, occasional run in rented exhibition spaces (which can be “events”, created to promote and showcase the film); direct DVD sales (which are highly lucrative if the producer does this directly); online distribution vehicles such as iTunes Movie Store; and through “community” viewing, where a DVD is given to a few key members of the community in the hopes that word-of-mouth will spread in that community, generating further DVD sales.

None of this guarantees success, but it is the way things work for independent productions in the 21st century. All of this is new territory. It isn’t a role that belongs neatly to the producer of the film, nor, in the absence of studio muscle, is it something that a film distributor would be competent at. This may not be the producer’s job. But it is someone’s job. Someone must do it. Starting at the earliest stages of pre-production, someone has to sit down with the creatives and the producer and ask the hard questions: “Who is this film intended for?” “What audiences will want to see this film – or see it more than once?” “How do we reach these audiences?” From these first questions, it should be possible to construct a marketing campaign which leverages microaudiences and social networks into ticket receipts and DVD sales and online purchases.

So, as you sit down to do your planning today, and discuss how to move Irish screen industries into the 21st century, ask yourselves who will be fulfilling this role. The producer is already overloaded, time-poor, and may not be particularly good at marketing. The director has a vision, but might be practically autistic when it comes to working with communities. This is a new role, one that is utterly vital to the success of the production, but one which is not yet budgeted for, and one which we do not yet train people to fill. Individuals have succeeded in this new model through their own tireless efforts, but each of these efforts has been scattershot; there is a way to systematize this. While every production and every marketing plan will be unique – drawn from the fundamentals of the story being told – there are commonalities across productions which people will be able to absorb and apply, production after production.

One of my favorite quotes from science fiction writer William Gibson goes, “The future is already here, it’s just not evenly distributed.” This is so obviously true for film and television production that I need only close by noting that there are a lot of success stories out there, individuals who have taken the new laws of hyperdistribution and sharing and turned them to their own advantage. It is a challenge, and there will be failures; but we learn more from our failures than from our successes. Media production has always been a gamble; but the audiences of the 21st century make success easier to achieve than ever before.

“The net interprets censorship as damage and routes around it.”
– John Gilmore

I read a very interesting article last week. It turns out that, despite its best efforts, the Communist government of the People’s Republic of China has failed to insulate its prodigious population from the outrageous truths to be found online. In the article from the Times, Wang Guoqing, a vice-minister in the information office of the Chinese cabinet, was quoted as saying, “It has been repeatedly proved that information blocking is like walking into a dead end.” If China, with all of the resources of a one-party state, and thus able to “lock down” its internet service providers, directing their IP traffic through a “great firewall of China”, can not block the free flow of information, how can any government, anywhere – or any organization, or institution – hope to succeed?

Of course, we all chuckle a little bit when we see the Chinese attempt the Sisyphean task of damming the torrent of information which characterizes life in the 21st century. We, in the democratic West, know better, and pat ourselves on the back. But we are in no position to throw stones. Gilmore’s Law is not specifically tuned for political censorship; censorship simply means the willful withholding of information – for any reason. China does it for political reasons; in the West our reasons for censorship are primarily economic. Take, for example, the hullabaloo associated with the online release of Harry Potter and the Deathly Hallows, three days before its simultaneous, world-wide publication. It turns out that someone, somewhere, got a copy of the book, and laboriously photographed every single page of the 784-page text, bound these images together into a single PDF file, and then uploaded it to the global peer-to-peer filesharing networks. Everyone with a vested financial interest in the book – author J.K. Rowling, Bloomsbury and Scholastic publishing houses, film studio Warner Brothers – had been feeding the hype for the impending release, all focused around the 21st of July. An enormous pressure had been built up to “peek at the present” before it was formally unwrapped, and all it took was one single gap in the $20 million security system Bloomsbury had constructed to keep the text safely secure. Then it became a globally distributed media artifact. Curiously, Bloomsbury was reported as saying they thought it would only add to sales – if many people are reading the book now, even illegally, then even more people will want to be reading the book right now. Piracy, in this case, might be a good thing.

These two examples represent two data points which show the breadth and reach of Gilmore’s Law. Censorship, broadly defined, is anything which restricts the free flow of information. The barriers could be political, or they could be economic, or – as in the case immediately relevant today – they could be a nexus of the two. Broadband in Australia is neither purely an economic nor purely a political issue. In this, broadband reflects the Janus-like nature of Telstra, with one face turned outward, toward the markets, and another turned inward, toward the Federal Government. Even though Telstra is now (more or less) wholly privatized, the institutional memory of all those years as an arm of the Federal Government hasn’t yet faded. Telstra still behaves as though it has a political mandate, and is more than willing to use its near-monopoly economic strength to reinforce that impression.

Although seemingly unavoidable, given the established patterns of the organization, Telstra’s behavior has consequences. Telstra has engendered enormous resentment – both from its competitors and its customers – for its actions and attitude. They’ve recently pushed the Government too far (at least, publicly), and have been told to back off. What may not be as clear – and what I want to warn you of today – is how Telstra has sown the seeds of its own failure. What’s more, this may not be anything that Telstra can now avoid, because this is neither a regulatory nor an economic failure. It can not be remedied by any mechanism that Telstra has access to. Instead, it may require a top-down rethinking of the entire business.

I: Network Effects

For the past several thousand years, the fishermen of Kerala, on the southern coast of India, have sailed their dhows out into the Indian Ocean, lowered their nets, and hoped for the best. When the fishing is good, they come back to shore fully laden, and ready to sell their catch in the little fish markets that dot the coastline. A fisherman might have a favorite market, docking there only to find that half a dozen other dhows have had the same idea. In that market there are too many fish for sale that day, and the fisherman might not even earn enough from his catch to cover costs. Meanwhile, in a market just a few kilometers away, no fishing boats have docked, and there’s no fish available at any price. This fundamental chaos of the fish trade in Kerala has been a fact of life for a very long time.

Just a few years ago, several of India’s rapidly-growing wireless carriers strung GSM towers along the Kerala coast. This gives those carriers a signal reach of up to about 25km offshore – enough to be very useful for a fisherman. While mobile service in India is almost ridiculously cheap by Australian standards – many carriers charge a penny for an SMS, and a penny or two per minute for voice calls – a handset is still relatively expensive, even one such as the Nokia 1100, which was marketed specifically at emerging mobile markets, designed to be cheap and durable. Such a handset might cost a month’s profits for a fisherman – which makes it a serious investment. But, at some point in the last few years, one fisherman – probably a more prosperous one – bought a handset, and took it to sea. Then, perhaps quite accidentally, he learned, through a call ashore, of a market wanting for fish that day, brought his dhow to dock there, and made a handsome profit. After that, the word got around rapidly, and soon all of Kerala’s fishermen were sporting their own GSM handsets, calling into shore, making deals with fishmongers, acting as their own arbitrageurs, creating a true market where none had existed before. Today in Kerala the markets are almost always stocked with just enough fish; the fishmongers get a good price for their fish, and the fishermen themselves earn enough to fully recoup the cost of their handsets in just two months. Mobile service in Kerala has dramatically altered the economic prospects for these people.
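The economics here can be captured in a toy simulation. This sketch is entirely my own illustration – the market names and numbers are invented – but it shows the mechanism: boats without phones dock at random and pile up in the same markets, while boats with phones call ashore and head for the market that still wants fish, so supply spreads to match demand.

```python
# Toy model of the Kerala effect (illustrative only, not real data):
# without phones, fishermen pick markets blindly; with phones, each
# boat heads for the market that is still short of fish.
import random

def land_catches(boats, markets, informed, seed=0):
    """Return fish landed per market; each boat carries one catch."""
    rng = random.Random(seed)
    landed = {m: 0 for m in markets}
    for _ in range(boats):
        if informed:
            # a call ashore finds the least-supplied market
            choice = min(markets, key=lambda m: landed[m])
        else:
            choice = rng.choice(markets)  # dock and hope for the best
        landed[choice] += 1
    return landed

markets = ["Kochi", "Alappuzha", "Kollam"]
print(land_catches(9, markets, informed=False))  # lopsided: gluts and shortages
print(land_catches(9, markets, informed=True))   # → 3 fish at every market
```

The point of the sketch is that no central planner appears anywhere in the informed case: each boat acts in its own interest, and the even allocation simply emerges from shared information.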

This is not the only example: in Kenya farmers call ahead to the markets to learn which ones will have the best prices for their onions and maize; spice traders, again in Kerala, use SMS to create their own, far-flung bourse. Although we in the West generally associate mobile communications with affluent lifestyles, a significant number of microfinance loans made by Grameen Bank in Bangladesh, and others in Pakistan, India, Africa and South America are used to purchase mobile handsets – precisely because the correlation between access to mobile communications and earning potential has become so visible in the developing world. Grameen Bank has even started its own carrier, GrameenPhone, to service its microfinance clientele.

Although economists are beginning to recognize and document this curious relationship between economics and access to communication, it needs to be noted that this relationship was not predicted – by anyone. It happened all by itself, emerging from the interaction of individuals and the network. People – who are always the intelligent actors in the network – simply recognized the capabilities of the network, and put them to work. As we approach the watershed month of October 2007, when three billion people will be using mobile handsets, when half of humanity will be interconnected, we can expect more of the unexpected.

All of this means that none of us – even the most foresighted futurist – can know in advance what will happen when people are connected together in an electronic network. People themselves are too resourceful, and too intelligent, for their behavior to be modeled in any realistic way. We might be able to model their network usage – though even that has confounded the experts – but we can’t know why they’re using the network, nor what kind of second-order effects that usage will have on culture. Nor can we realistically provision for service offerings; people are more intelligent, and more useful, than any other service the carriers could hope to offer. The only truly successful service offering in mobile communications is SMS – because it provides an asynchronous communications channel between people. The essential feature of the network is simply that it connects people together, not that it connects them to services.

This strikes at the heart of the most avaricious aspects of the carriers’ long-term plans, which center around increasing the levels of services on offer, by the carrier, to the users of the network. Although this strategy has consistently proven to be a complete failure – consider CompuServe, Prodigy and AOL – it nevertheless has become the idée fixe of shareholder reports, corporate plans, and press releases. The network, we are told, will become increasingly more intelligent, more useful, and more valuable. But all of the history of the network argues directly against this. Nearly 40 years after its invention, the most successful service on the Internet is still electronic mail, the Internet’s own version of SMS. Although the Web has become an important service in its own right, it will never be as important as electronic mail, because electronic mail connects individuals.

Although the network in Kerala was brought into being by the technology of GSM transponders and mobile handsets, the intelligence of the network truly does lie in the individuals who are connected by the network. Let’s run a little thought experiment, and imagine a world where all of India’s telecoms firms suffered a simultaneous catastrophic and long-lasting failure. (Perhaps they all went bankrupt.) Do you suppose that the fishermen would simply shrug their shoulders and go back to their old, chaotic market-making strategies? Hardly. Whether they used smoke signals, or semaphores, or mirrors on the seashore, they’d find some way to maintain those networks of communication – even in the absence of the technology of the network. The benefits of the network so outweigh the costs of implementing it that, once created, networks can not be destroyed. The network will be rebuilt from whatever technology comes to hand – because the network is not the technology, but the individuals connected through it.

This is the kind of bold assertion that could get me into a lot of trouble; after all, everyone knows that the network is the towers, the routers, and the handsets which comprise its physical and logical layers. But if that were true, then we could deterministically predict the qualities and uses of networks well in advance of their deployment. The quintessence of the network is not a physical property; it is an emergent property of the interaction of the network’s users. And while people do persistently believe that there is some “magic” in the network, the source of that magic is the endlessly inventive intellects of the network’s users. When someone – anywhere in the network – invents a new use for the network, it propagates widely, and almost instantaneously, transmitted throughout the length and breadth of the network. The network amplifies the reach of its users, but it does not goad them into being inventive. The real service providers are the users of the network.

I hope this gives everyone here some pause; after all, it is widely known that the promise to bring a high-speed broadband network to Australia is paired with the desire to provide services on that network, including – most importantly – IPTV. It’s time to take a look at that promise with our new understanding of the real power of networks. It is under threat from two directions: the emergence of peer-produced content; and the dramatic, disruptive collapse in the price of high-speed wide-area networking, which will fully empower individuals to create their own network infrastructure.

II: DIYnet

Although nearly all high-speed broadband providers – which are, by and large, monopoly or formerly monopoly telcos – have bet the house on the sale of high-priced services to finance the build-out of high-speed (ADSL2/FTTN/FTTH) network infrastructure, it is not at all clear that these service offerings will be successful. Mobile carriers earn some revenue from ringtone and game sales, but this is a trivial income stream when compared to the fees they earn from carriage. Despite almost a decade of efforts to milk more ARPU from their customers, those same customers have proven stubbornly resistant to a continuous fleecing. The only thing that customers seem obviously willing to pay for is more connectivity – whether that’s more voice calls, more SMS, or more data.

What is most interesting is what these customers have done with this ever-increasing level of interconnectivity. These formerly passive consumers of entertainment have become their own media producers, and – perhaps more ominously, in this context – their own broadcasters. Anyone with a cheap webcam (or mobile handset), a cheap computer, and a broadband link can make and share their own videos. This trend had been growing for several years, but since the launch of YouTube, in 2005, it has rocketed into prominence. YouTube is now the 4th busiest website, world-wide, and perhaps 65% of all video downloads on the web take place through Google-owned properties. Amateur productions regularly garner tens of thousands of viewers – and sometimes millions.

We need to be very careful about how we judge the meaning of the word “amateur” in the context of peer-produced media. An amateur production may be produced with little or no funding, but that does not automatically mean it will appear clumsy to the audience. The rough edges of an amateur production are balanced out by a corresponding increase in salience – that is, the importance which the viewer attaches to the subject of the media. If something is compelling because it is important to us – something which we care passionately about – high production values do not enter into our assessment. Chad Hurley, one of the founders of YouTube, has remarked that the site has no “gold standard” for production; in fact, YouTube’s gold standard is salience – if the YouTube audience feels the work is important, audience members will share it within their own communities of interest. Sharing is the proof of salience.

After two years of media sharing, the audience for YouTube (which is now coincident with the global television audience in the developed world) has grown accustomed to being able to share salient media freely. This is another of the unexpected and unpredicted emergent effects of the intelligence of humans using the network. We now have an expectation that when we encounter some media we find highly salient, we should be able to forward it along within our social networks, sharing it within our communities of salience. But this is not the desire of many copyright holders, who collect their revenues by placing barriers to the access of media. This fundamental conflict, between the desire to share, as engendered by our own interactions with the network, and the desire of copyright holders to restrain media consumption to economic channels, has, thus far, been consistently resolved in favor of sharing.

The copyright holders have tried to use the legal system as a bludgeon to change the behavior of the audience; this has not worked, nor will it ever. But, as the copyright holders resort to ever-more-draconian techniques to maintain control over the distribution of their works, the audience is presented with an ever-growing world of works that are meant to be shared. The danger here is that the audience is beginning to ignore works which they can not share freely, seeing them as “broken” in some fundamental way. Since sharing has now become an essential quality of media, the audience is simply reacting to a perceived defect in those works. In this sense, the media multinationals have been their own worst enemies; by restricting the ability of the audiences to share the works they control, they have helped to turn audiences toward works which audiences can distribute through their own “do-it-yourself” networks.

These DIYnets are now a permanent fixture of the media landscape, even as their forms evolve through YouTube playlists, RSS feeds, and sharing sites such as Facebook and Pownce. These networks exist entirely outside the regular and licensed channels of distribution; they are not suitable – legally or economically – for distribution via a commercial IPTV network. Telstra can not provide these DIYnets to its customers through its IPTV service – nor can any other broadband carrier. IPTV, to a carrier, means the distribution of a few hundred highly regularized television channels. While there will doubtless be a continuing market for mass entertainment, that audience is continuously being eroded by an expanding range of peer-produced programming which is growing in salience. In the long term this, like so much in the world, will probably obey an 80/20 rule, with about 80 percent of the audience’s attention absorbed in peer-produced, highly-salient media, while 20 percent will come from mass-market, high-production-value works. It doesn’t make a lot of sense to bet the house on a service offering which will command such a small portion of the audience’s attention. Yes, Telstra will offer it. But it will never be able to compete with the productions created by the audience.

Because of this tension between the desires of the carrier and the interests of the audience, the carrier will seek to manipulate the capabilities of the broadband offering, to weight it in favor of a highly regularized IPTV offering. In the United States this has become known as the “net neutrality” argument, and centers on the question of whether a carrier has the right to shape traffic within its own IP network to advantage its own traffic over that of others. In Australia, the argument has focused on tariff rates: Telstra believes that if they build the network, they should be able to set the tariff. The ACCC argues otherwise. This has been characterized as the central stumbling block which has prevented the deployment of a high-speed broadband network across the nation, and, in some sense, that is entirely true – Telstra has chosen not to move forward until it feels assured that both economic and regulatory conditions prove favorable. But this does not mean that the consumer demand for a high-speed network was simply put on pause over the last years. More significantly, the world beyond Telstra has not stopped advancing. While it now costs roughly USD $750 per household to provide a high-speed fiber-optic connection to the carrier network, other technologies are coming on-line, right now, which promise to reduce those costs by an order of magnitude, and furthermore, which don’t require any infrastructure build-out on the part of the carrier. This disruptive innovation could change the game completely.

III: Check, Mate

All parties to the high-speed broadband dispute – government, Telstra, the Group of Nine, and the public – share the belief that this network must be built by a large organization, able to command the billions of dollars in capital required to dig up the streets, lay the fiber, and run the enormous data centers. This model of a network is a reflection, in copper, plastic and silicon, of the hierarchical forms of organization which characterize large institutions – such as governments and carriers. However, if we have learned anything about the emergent qualities of networks, it is that they quickly replace hierarchies with “netocracies”: horizontal meritocracies, which use the connective power of the network to out-compete slower and more rigid hierarchies. It is odd that, while the network has transformed nearly everything it has touched, the purveyors of those networks – the carriers – somehow seem immune from those transformative qualities. Telecommunications firms are – and have ever been – the very definition of hierarchical organizations. During the era of plain-old telephone service, the organizational form of the carrier was isomorphic to the form of the network. However, over the last decade, as the internal network has transitioned from circuit-switched to packet-switched, the institution lost synchronization with the form of the network it provided to consumers. As each day passes, carriers move even further out of sync: this helps to explain the current disconnect between Telstra and Australians.

We are about to see an adjustment. First, the data on the network was broken into packets; now, the hardware of the network has followed. Telephone networks were centralized because they required explicit wiring from point-to-point; cellular networks are decentralized, but use licensed spectrum – which requires enormous capital resources. Both of these conditions created significant barriers to entry. But there is no need to use wires, nor is there any need to use licensed spectrum. The 2.4 GHz radio band is freely available for anyone to use, so long as that use stays below certain power values. We now see a plethora of devices using that spectrum: cordless handsets, Bluetooth devices, and the all-but-ubiquitous 802.11 “WiFi” data networks. The chaos which broadcasters and governments had always claimed would be the by-product of unlicensed spectrum has, instead, become a wonderfully rich marketplace of products and services. The first generation of these products made connection to the centralized network even easier: cordless handsets liberated the telephone from the twisted-pair connection to the central office, while WiFi freed computers from heavy and clumsy RJ-45 jacks and CAT-5 cabling. While these devices had some intelligence, that intelligence centered on making and maintaining a connection to the centralized network.

Recently, advances in software have produced a new class of devices which create their own networks. Devices connected to these ad-hoc “mesh” networks act as peers in a swarm (similar to the participants in peer-to-peer filesharing), rather than clients within a hierarchical distribution system. These network peers share information about their evolving topology, forming a highly-resilient fabric of connections. Devices maintain multiple connections to multiple nodes throughout the network, and a packet travels through the mesh along a non-deterministic path. While this was always the promise of TCP/IP networks, static routes through the network cloud are now the rule: they are more efficient, they make it easier to maintain routers and diagnose network problems, and they keep maintenance costs down. But mesh networks are decentralized; there is no controlling authority, no central router providing an interconnection with a peer network. And – most significantly – mesh networks are now incredibly inexpensive to implement.
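For the technically curious, the route-discovery behaviour described above can be sketched in a few lines of Python. This is a toy model – a breadth-first flood over an invented five-node topology – and not any real mesh routing protocol; it simply illustrates how a decentralized fabric finds a path, and routes around a failed node, without any central router:

```python
# Toy model of decentralized mesh routing: each node knows only its
# neighbours, and a route request floods outward until it hits the
# destination. The topology below is invented for illustration.
from collections import deque

def find_route(links, source, dest):
    """Breadth-first flood from source; returns one path to dest, or None."""
    frontier = deque([[source]])
    visited = {source}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == dest:
            return path
        for neighbour in links.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # no connectivity at all

# Each node keeps links to several nearby peers, so the fabric is resilient.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(find_route(mesh, "A", "E"))      # → ['A', 'B', 'D', 'E']
del mesh["B"]; mesh["A"].remove("B")   # knock out node B entirely...
print(find_route(mesh, "A", "E"))      # → ['A', 'C', 'D', 'E']
```

The point of the sketch: no single node is special, and no node holds a map of the whole network – resilience emerges from the redundancy of local connections.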

Earlier this year, the US-based firm Meraki launched their long-awaited Meraki Mini wireless mesh router. For about AUD $60, plus the cost of electricity, anyone can become a peer within a wireless mesh network providing speeds of up to 50 megabits per second. The device is deceptively simple; it’s just an 802.11 transceiver paired with a single-chip computer running LINUX and Meraki’s mesh routing software – which was developed by Meraki’s founders while Ph.D. students at the Massachusetts Institute of Technology. The 802.11 radio within the Meraki Mini has been highly optimized for long-distance communication. Instead of the normal 50 meter radius associated with WiFi, the Meraki Mini provides coverage over at least 250 meters – and, depending upon topography, can reach 750 meters. Let me put that in context, by showing you the coverage I’ll get when I install a Meraki Mini on my sixth-floor balcony in Surry Hills:

From my flat, I will be able to reach all the way from Central Station to Riley Street, from Belvoir Street over to Albion Street. Thousands of people will be within range of my network access point. Of course, if all of them chose to use my single point of access, my Meraki Mini would be swamped with traffic. It simply wouldn’t be able to cope. But – given that the Meraki Mini is cheaper than most WiFi access points available at Harvey Norman – it’s likely that many people within that radius would install their own access points. These access points would detect each other’s presence, forming a self-organizing mesh network. If every WiFi access point visible from my flat (I can sense between 10 and 20 of them at any given time) were replaced with a Meraki Mini – or, perhaps more significantly, if these WiFi access points were given firmware upgrades which allowed them to interoperate with the mesh networks created by the Meraki Mini – my Surry Hills neighborhood would suddenly be blanketed in a highly resilient and wholly pervasive wireless high-speed network, at nearly no cost to the users of that network. In other words, this could all be done in software. The infrastructure is already deployed.
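The jump from a 50-meter to a 250-meter radius understates the difference, because coverage grows with the square of the radius. A quick back-of-envelope sketch (the radii are the figures quoted above; the rest is simple geometry):

```python
# Coverage footprints grow with the square of the radius, so a 5x
# improvement in range yields a 25x improvement in area covered.
import math

def coverage_km2(radius_m):
    """Area of a circular coverage footprint, in square kilometres."""
    return math.pi * radius_m ** 2 / 1e6

wifi       = coverage_km2(50)    # ordinary WiFi access point
meraki     = coverage_km2(250)   # Meraki Mini, typical
meraki_max = coverage_km2(750)   # Meraki Mini, favourable topography

print(f"WiFi:        {wifi:.3f} km^2")
print(f"Meraki Mini: {meraki:.3f} km^2 ({meraki / wifi:.0f}x WiFi)")
print(f"Best case:   {meraki_max:.2f} km^2 ({meraki_max / wifi:.0f}x WiFi)")
```

At typical range a single unit covers roughly 0.2 square kilometres – twenty-five times an ordinary access point – which is why a dense inner-city neighborhood falls inside one footprint.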

As some of you have no doubt noted, this network is highly local; while there are high-speed connections within the wireless cloud, the mesh doesn’t necessarily have connections to the global Internet. In fact, Meraki Minis can act as routers to the Internet, routing packets through their Ethernet interfaces to the broader Internet, and Meraki recommends that at least every tenth device in a mesh be so equipped. But it’s not strictly necessary – and, if the mesh is dedicated to a particular task, it’s completely unnecessary. Let us say, for example, that I wanted to provide a low-cost IPTV service to the residents of Surry Hills. I could create a “head-end” in my own flat, and provide my “subscribers” with Meraki Minis and an inexpensive set-top box to interface with their televisions. For a total install cost of perhaps $300, I could give everyone in Surry Hills a full IPTV service (though it’s unlikely I could provide HD-quality). No wiring required, no high-speed broadband buildout, no billions of dollars, no regulatory relaxation. I could just do it. And collect both subscriber fees and advertiser revenues. No Telstra. No Group of Nine. No blessing from Senator Coonan. No go-over by the ACCC. The technology is all in place, today.
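The caveat about HD is easy to check with rough arithmetic. All of the figures below – the per-stream bitrates, and the assumption that multi-hop wireless relaying cuts usable throughput to a fraction of the quoted 50 megabits – are illustrative guesses for the sake of the thought experiment, not Meraki specifications:

```python
# Back-of-envelope feasibility check for the Surry Hills IPTV thought
# experiment. All figures are assumptions: mesh throughput falls with
# each wireless hop, because nodes relay on a shared radio channel.
NOMINAL_MBPS   = 50                  # the Meraki Mini's quoted link speed
EFFECTIVE_MBPS = NOMINAL_MBPS / 4    # assume a couple of relay hops
SD_STREAM_MBPS = 2.0                 # assumed standard-definition stream
HD_STREAM_MBPS = 8.0                 # assumed high-definition stream

sd_channels = int(EFFECTIVE_MBPS // SD_STREAM_MBPS)
hd_channels = int(EFFECTIVE_MBPS // HD_STREAM_MBPS)
print(f"Simultaneous SD streams through the mesh: {sd_channels}")  # → 6
print(f"Simultaneous HD streams through the mesh: {hd_channels}")  # → 1
```

Under these assumptions a handful of standard-definition channels fit comfortably, while HD saturates the mesh almost immediately – hence the parenthetical caveat above.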

Here’s a news report – almost a year old – which makes the point quite well:

I bring up this thought experiment to drive home my final point: Telstra isn’t needed. It might not even be wanted. We have so many other avenues open to us to create and deploy high-speed broadband services that it’s likely Telstra has just missed the boat. You’ve waited too long, dilly-dallying while the audience and the technology have made you obsolete. The audience doesn’t want the same few hundred channels they can get on FoxTel: they want the nearly endless stream of salience they can get from YouTube. The technology is no longer about centralized distribution networks: it favors light, flexible, inexpensive mesh networks. Both of these are long-term trends, and both will only grow more pronounced as the years pass. In the years it takes Telstra – or whoever gets the blessing of the regulators – to build out this high-speed broadband network, you will be fighting a rearguard action, as both the audience and the technology of the network race on past you. They have already passed you by, and it’s been my task this morning to point this out. You simply do not matter.

This doesn’t mean it’s game over. I don’t want you to report to Sol Trujillo that it’s time to have a quick fire-sale of Telstra’s assets. But it does mean you need to radically rethink your business – right now. In the age of pervasive peer-production, paired with the advent of cheap wireless mesh networks, your best option is to become a high-quality connection to the global Internet – in short, a commodity. All of this pervasive wireless networking will engender an incredible demand for bandwidth; the more people are connected together, the more they want to be connected together. That’s the one inarguable truth we can glean from the 160 years of electric communication. Telstra has the infrastructure to leverage itself into becoming the most reliable data carrier connecting Australians to the global Internet. It isn’t glamorous, but it is a business with high barriers to entry, and it promises a steadily growing (if unexciting) continuing revenue stream. But, if you continue to base your plans around selling Australians services we don’t want, you are building your castles on the sand. And the tide is rising.

Last week, YouTube began the laborious process of removing all clips of The Daily Show with Jon Stewart at the request of VIACOM, parent to Paramount Television, which runs Comedy Central, home to The Daily Show. This is no easy task; there are probably tens of thousands of clips of The Daily Show posted to YouTube. Not all of them are tagged well, so – despite its every effort – YouTube is going to miss some of them, opening themselves up to continuing legal action from VIACOM.

It is as all of YouTube’s users feared: now that billions of dollars are at stake, YouTube is playing by the rules. The free-for-all of video clip sharing which brought YouTube to greatness is now being threatened by that very success. Because YouTube is big enough to sue – part of Google, which has a market capitalization of over 160 billion dollars – it is now subject to the same legal restrictions on distribution as all of the other major players in media distribution. In other words, YouTube’s ability to hyperdistribute content has been entirely handicapped by its new economic vulnerability. Since this hyperdistribution capability is the quintessence of YouTube, one wonders what will happen. Can YouTube survive as its assets are slowly stripped away?

Mark Cuban’s warnings have come back to haunt us; Cuban claimed that only a moron would buy YouTube, built as it is on the purloined copyrights of others. But Cuban’s critique overlooked the enormous value of YouTube’s peer-produced content, something I have noted elsewhere. Thus, this stripping of assets will not diminish the value of YouTube. Instead, it will reveal the true wealth of peer-production.

In the past week I’ve used YouTube at least five times daily – but not to watch The Daily Show. I’ve been watching a growing set of political advertisements, commentary and mashups, all leading up to the US midterm elections. YouTube has become the forum for the sharing of political videos, and, while some of them are brazenly lifted from CNN or FOX NEWS, most are produced by the campaigns, and are intended to be hyperdistributed as widely as possible. Political advertising and YouTube are a match made in heaven. When political activism crosses the line into citizen journalism (such as in the disturbing clips of people being roughed up by partisan thugs) that too is hyperdistributed via YouTube. Anything that’s captured on a video camera, or television tuner, or mobile telephone can (and frequently does) end up on YouTube in a matter of minutes.

Even as VIACOM exercised their draconian copyright claims, the folly of their old-school thinking became ever more apparent. Oprah featured a segment on Juan Mann, Sick Puppies and their now-entirely-overexposed video. It’s been up on YouTube for five weeks, has now topped five million views, and four major record labels are battling for the chance to sign Sick Puppies to a recording contract. It reveals the fundamental paradox of hyperdistribution: the more something is shared, the more valuable it becomes. Take The Daily Show off of YouTube, and fewer people will see it. Fewer people will want to catch the broadcast. Ratings will drop off. And you run the risk of someone else – Ze Frank, perhaps, or another talented upstart – filling the gap.

Yes, Comedy Central is offering The Daily Show on their website, for those who can remember to go there, can navigate through the pages to find the show they want, can hope they have the right video software installed, etc. But Comedy Central isn’t YouTube. It isn’t delivering half of the video seen on the internet. YouTube has become synonymous with video the way Google has become synonymous with search. Comedy Central ignores this fact at its peril, because it’s relying on a change in audience behavior.

II.

Television producers are about to learn the same lessons that film studios and the recording industry learned before them: what the audience wants, it gets. Take your clips off of YouTube, and watch as someone else – quite illegally – creates another hyperdistribution system for them. Attack that system, and watch as it fades into invisibility. Those attacks will force it to evolve into ever-more-undetectable forms. That’s the lesson of music-sharing site Napster, and the lesson of torrent-sharing site Supernova. When you attack the hyperdistribution system, you always make the problem worse.

In its rude, thuggish way, VIACOM is asserting the primacy of broadcasting over hypercasting. VIACOM built an empire from television broadcasting, and makes enormous revenues from it. They’re unlikely to do anything that would encourage the audience toward a new form of distribution. At the same time, they’re powerless to stop that audience from embracing hyperdistribution. So now we get to see the great, unspoken truth of television broadcasting – it’s nothing special. Buy a chunk of radio spectrum, or a satellite transponder, or a cable provider: none of it gives you any inherent advantage in reaching the audience. Ten years ago, they were a lock; today, they’re only an opportunity. There are too many alternate paths to the audience – and the audience has too many paths to one another.

This doesn’t mean that broadcasting will collapse – at least not immediately. It does mean that – finally – there’s real competition. The five media megacorporations in the United States now have several hundred thousand motivated competitors. Only a few of these will reach the “gold standard” of high-quality production technique which characterizes broadcast media. The audience doesn’t care. The audience prizes immediacy, relevancy, accessibility, and above all, salience. There’s no way that five companies, however rich and productive, can satisfy the needs of an audience which has come to expect that it can get exactly what it wants, when it wants, wherever it wants. Furthermore, there’s no way to stop anything that gets broadcast by those companies from being hyperdistributed and added to the millions of available choices. You’d need to lock down every PC, every broadband connection, and every television in the world to maintain a level of control which, just a few years ago, came effortlessly.

VIACOM may sense the truth of this, even as they act against this knowledge. Rumors have been swirling around the net, indicating that YouTube and VIACOM have come to a deal, and that the clips will not be removed – this, while they’re still being deleted. VIACOM, caught in the inflection point between broadcasting and hypercasting, doesn’t fully understand where its future interests lie. In the meantime, it thrashes about as its lizard-brained lawyers revert to the reflexive habits of cease-and-desist.

III.

This week, after two years of frustration and failure, I managed to install and configure MythTV. MythTV is a LINUX-based digital video recorder (DVR) which has been in development for over four years. It has matured enormously in that time, but it still took every last one of my technical skills – plus a whole lot of newly-acquired ones – to get it properly set up. Even now, after some four days of configuration, I’m not quite finished. That puts MythTV miles out of the range of the average viewer, who just wants a box they can drop into their system, turn on, and play with. Those folks purchase a TiVo. But TiVo doesn’t work in Australia – at least, not without the same level of technical gymnastics required to install MythTV. If I had digital cable – spectacularly uncommon in Australia – I could use Foxtel iQ, a very polished DVR with multiple tuners, full program guide, etc. But I have all of that, right now, running on my PC, with MythTV.

I’ve never owned a DVR, though I have written about them extensively. The essential fact of the DVR is that it coaxes you away from television as a live medium. That’s an important point in Australia, where most of us have just five broadcast channels to pick from: frequently, there’s nothing worth watching. But, once you’ve set up the appropriate recording schedule on your DVR, the device is always filled with programming you want to watch. People with DVRs tend to watch 30% more television than those without, and they tend to enjoy it more, because they’re getting just the programmes they find most salient.

Last night – the first night of a relatively complete MythTV configuration – I went to attend a friend’s lecture, but left MythTV to record the evening’s news programmes. I came back in, and played the recorded programmes, but took full advantage of the DVR’s ability to jump through the content. I skipped news stories I’d seen earlier in the day (plus all of the sport reportage), and reviewed the segments I found most interesting. I watched 2 hours of television in about 45 minutes, and felt immensely satisfied at the end, because, for the first time, I could completely command the television broadcast, shaping it to the demands of salience. This is the way TV should be watched, I realized, and I knew there’d be no going back.

My DVR has a lot in common with YouTube. Both systems skirt the law: in my case, the programming schedules which I download from a community-hosted site are arguably illegal under Australian copyright law, and recording a program at all – under Australian law, at least – is also illegal. (You don’t sue your audience, and you don’t waste your money suing a not-for-profit community site.) Both systems give me immediate access to content with enormous salience; I see just what I want, just when I want to. YouTube is home to peer-produced content, while the DVR houses professional productions, works that meet the “gold standard”. I have already begun to conceive of them as two halves of the same video experience.

It won’t be long before some enterprising hacker integrates the two meaningfully: perhaps a YouTube plugin for MythTV? (MythTV is a free and open source application, available for anyone to modify or improve.) Perhaps it will be some deal struck between the broadcasters and YouTube. Or perhaps both will occur. This would represent the kind of “convergence” much talked about in the late 1990s, and all but abandoned. Convergence has come; from my point of view it doesn’t matter whether I use MythTV or YouTube or their hybrid offspring. All I care about is watching the programmes that interest me. How they get delivered is nothing special.

Everything is changing. Everything has changed. Everything always changes, but at times that change is particularly pronounced and thus specifically noteworthy. For media – which is the topic du jour – this is so plainly obvious that any attempt to refer to the “before” time has an almost archeological feel, as though we must shovel carefully through layers of dirt to uncover how media worked just a few years ago. These transformations have been seismic, and singular. There is no going back.

But what, exactly, has happened?

The revolution we glimpsed in 1994, when the rough beast of the Web, its hour come at last, made the earth tremble, seducing and subsuming us into its ever-broadening expanse, fell back, for a brief while, into patterns more established and more familiar. We glimpsed a utopia; then a fog rose, and the vision faded. We endured half a decade of stupidity, cupidity and the slow strangulation of dreams. We longed for communion; we got DVD players delivered in under an hour. Fortunately, the network accelerates everything it embraces, and what might have taken a generation in earlier times took just five years to run its course, from Netscape to Razorfish, and the lunar crater of NASDAQ seemed to spell the final doom of all our hopes. The Web, people loudly proclaimed, was so over.

Silly humans.

During those first five years, we learned just how different network economics could be; not just in theory, but in practice. We learned that the essence of the digital artifact is that it exists to be copied. Like a gene in the Cambrian seas of the early Web, information was copied and recopied endlessly. John Perry Barlow’s Declaration of the Independence of Cyberspace was one of the first such objects, spread via email and website until it became nearly impossible to ignore. More recently, Cory Doctorow’s lecture on DRM for Microsoft Research – in text, Pig Latin and video versions – has been passed around like a cheap two-dollar…well, you know. Each of these digital artifacts eventually reached nearly every single individual who might find them interesting, because, as they were copied and read, forwarded and linked to, each of the human nodes in this network made a decision that this information was important enough to share. In the networked era, salience is the only significant quality of information. For that reason, it was only a matter of time until the technologies of the network would reinforce this natural tendency, and accelerate it.

So even as the Web died, it was reborn. The top-down design of a hundred centralized sources of information evolved into seven hundred million peers. From each according to their ability, to each according to their need. Feeds replaced websites, and torrents replaced streams. The revolution we had fleetingly glimpsed had finally – blessedly – arrived.

But one man’s blessing is another’s curse.

The network revolution presented incredible opportunities to anyone working in the media industries. Suddenly, it became possible to reach massive audiences, unbounded by proximity. But instead of reinforcing the previous structures of media ownership and information distribution, the network has consistently undermined them. Mention Craigslist to a newspaperman, and watch as the color drains from their face. Casually drop BitTorrent into a conversation with a studio executive, and observe as they choke back their rage. The network carries within it the seeds of their destruction. And they’re absolutely, utterly, completely powerless to stop it.

This would be a sad story if professional media had not willingly cooperated in their own demise. The technologies of the digital era were simply too tempting to be ignored, too important to the bottom line. But the network has its own economics, and quickly overcomes or blithely ignores any attempt to subvert its innate qualities. Film studios make the majority of their revenues from DVD distribution of their productions, but that same DVD, because of its essentially digital nature, can be copied and recopied endlessly, at no cost. If it is salient, it will be copied widely. That’s not just a horror story: that’s the law.

And if you don’t want your film copied? Well then, you have to resort to antique production technique. Make sure it’s shot to film stock, physically edited (good luck finding an editor who prefers a Steenbeck to an Avid) and graded – with no digital intermediates – then projected in an exhibition space where every audience member has been subjected to a humiliating physical search of their bodies. If you did that, you’d kill piracy. Probably. Of course, you’d also kill your exhibition revenues. But the studios (and the record companies, and the broadcasters, and the book publishers) want to have it both ways, want the benefits of digital distribution, all the while denying the essential quality of the medium – it exists to be copied.

That, at least, is the message from a hundred insta-pundits, on the business pages of newspapers, in blogs, and in countless analysts’ reports. The entire world seemed shocked by the entirely expected purchase of video-sharing site YouTube by Google for 1.65 billion dollars. It’s a bad deal, some say, doomed to fail. It isn’t worth it. It’ll bring Google crashing back to earth with endless litigation from the copyright holders who have just been waiting for someone with deep enough pockets to sue.

Feh.

What most everyone overlooked – as it happened the very same day as the Google purchase – were the licensing agreements YouTube struck with Universal, Sony BMG, and CBS. Together with their earlier deal with Warner, YouTube now has a deal with every major music publisher in the world. YouTube will now figure out how to share the revenues generated by Google’s advertising technology with all of the copyright holders whose materials end up on YouTube.

Some pundits – most notably, Mark Cuban – have indicated that only a moron would buy YouTube, because it’s widely believed that YouTube has built its business entirely upon the violation of copyright. Certainly, YouTube established its reputation with a specific piece of video owned by someone else – a digital short from NBC’s Saturday Night Live, “Lazy Sunday.” That video – viewed millions of times before NBC rattled its legal saber and the content was removed – introduced most users to YouTube. In the year since “Lazy Sunday,” YouTube has become a clearing house for the funniest bits of video content produced by other companies, from segments of The Daily Show with Jon Stewart, to South Park, to Family Guy, to The Simpsons. Why has YouTube become the redistributor of these clips? Because none of the copyright holders made an effort to distribute these clips themselves. YouTube has been acting as an arbitrageur of media, equalizing an inequity in the marketplace – and getting very rich in the process. It may be copyright violation, but the power of the audience is far, far greater than the power of the copyright holder. YouTube could delete every clip uploaded in violation of copyright – to some degree they do – but if you have a few thousand people uploading the same clip, how do you stay ahead of that? Even YouTube itself is subject to the power of its audience. And if they become draconian in their enforcement of copyright – which is a possible outcome of the Google purchase – they will simply force the audience elsewhere, to other sites. Better by far to strike a deal with the copyright holders, so that they receive recompense for their efforts. NBC has started to distribute Saturday Night Live’s digital shorts on its own website; ABC and FOX offer full streaming versions of their programs; everyone is queuing up to sell their TV shows on iTunes. Is this a willing transition? Probably not.
Minutes spent in front of the computer are minutes lost to television ratings. But if the copyright holders don’t distribute their content as widely as possible, someone else will. YouTube has proven this point beyond all argument.

Cuban believes that YouTube will die without a steady stream of content uploaded in violation of copyright. But if recent history is any guide, the studios are now falling over each other in their eagerness to do a deal, and share some of that money. The simultaneity of the Google purchase and the YouTube deals with the recording industry are not accidental; they’re indicative of a great sea-change. Big media has swallowed the bitter pill, and realized that they’ve lost control of distribution. Now they’ll try to make money off of it.

But Cuban makes another, and more damning point: he says that no one wants to watch the little hand-made videos which make up the vast majority of uploads to YouTube. This is the Big Lie of Big Media: if it isn’t professionally produced, the audience won’t watch it. No statement could be more mendacious, no assertion could be further from the truth. As a film producer and broadcaster, Cuban certainly hopes that audiences will always prefer professional content to amateur productions, but there’s no evidence to support this position – and rather a lot which counters it. The success of Red versus Blue, Homestar Runner, Happy Tree Friends, and The Show with Zefrank – each of which commands audiences in the hundreds of thousands to millions – proves that audiences will find the content which interests them, and share it with their friends, using the hyperdistribution techniques enabled by the network – techniques which ensure these audiences can get what they want, from anyone, anywhere, at any time, with a minimum of difficulty. These productions lie completely outside the bounds of “professional” media; they are “amateur,” not in the sense of raw, or poorly produced, but because they have turned their back on the antique systems of distribution which previously separated the big boys from the wannabes.

A perfect example of this transition can be seen in a video on YouTube by the Australian band Sick Puppies. Shot by the band’s drummer, it features a well-known character, Juan Mann, who inhabits Sydney’s Pitt Street mall, bearing a sign reading “Free Hugs.” The band befriended this unlikely character, and shot hours of video of him at work, giving free hugs to passers-by. While in Los Angeles, pursuing a recording deal, the drummer cut his footage into a three minute film, then added the band’s song “All The Same” as a temp track. Thinking to share his work around, he uploaded the video to YouTube on the 26th of September, and told his friends. Who told their friends. Who told their friends. YouTube is particularly good at “viral” distribution of media – it’s the one thing they’ve gotten absolutely right – so, within three weeks’ time, that little hand-made video had been viewed well over three million times. Sick Puppies are now on the map; their music video has given them a worldwide fan base. A debut album on a major label – expected early next year – will complete their transformation from amateurs to professionals.

Salience determines whether an audience will gather around and share media, not production values. In the time before hyperdistribution, audiences had a severely limited pool of choices, all of them professionally produced; now the gates have come down, and audiences are free to make their own choices. When placed head-to-head, can a professional production of modest salience stand up against an amateur production of great salience? Absolutely not. The audience will always select the production which speaks to them most directly. Media is a form of language, and we always favor our mother tongue.

The future for YouTube lies with the amateurs, not with the professionals. Cuban misses the point entirely, assuming that the audience will behave as it always has. But this is not that audience; this is an audience which has essentially infinite choice, and has come to understand that the sharing of media is an act of production in itself – that we are all our own broadcasters.

And you’d have to be a moron to miss that.

III. The Epidemiology of Cool

We know why YouTube has had such an incredible string of successes; the site makes it easy to share a video with your friends, and for those friends to share that video with their friends, and so on. The marketers call this “viral distribution,” but we know it by another and rather more prosaic name – friendship. As an inherently social species, we are constantly reinforcing our social connections through communication. It could be an IM, a text message, an email, a phone call, or a video – it’s all the same to the enormous section of our forebrains that we use to process the intricacies of our social relationships. We share these things to tell our friends that we’re thinking of them – and, rather more competitively, to show our friends that we’re on the tip. Each of us is a coolfinder (some of us do it professionally), and we each keep a little internal thermometer which measures our own cool against that of our peers. That innate drive to be recognized for our tastes has been accelerated to the speed of light by the network. Now, even as we coolfind, we are constantly inundated and challenged by the coolfinding of our peers. It’s produced a very healthy, if ultra-Darwinian, ecology of cool. Our peers are the selection pressure as we struggle to pass our memes on to the next generation.
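The epidemiological metaphor can be made literal with a toy branching process: if each viewer passes a clip along to r friends on average, anything above one share per viewer compounds into an epidemic. The numbers below are purely illustrative, not drawn from any real sharing data:

```python
# Toy branching process for viral sharing: every viewer in generation n
# shares the clip with r friends, who become generation n+1.
def total_views(r, generations, seed=1):
    """Cumulative views after the given number of sharing rounds."""
    views, current = seed, seed
    for _ in range(generations):
        current *= r      # each current viewer recruits r new viewers
        views += current
    return views

# Below the epidemic threshold (r < 1) a clip dies out; above it, a
# handful of sharing rounds reaches an audience of millions.
print(total_views(3, 10))   # → 88573 (roughly ninety thousand views)
print(total_views(5, 10))   # → 12207031 (over twelve million views)
```

This is why "viral" is the right word: the difference between a clip that vanishes and one that tops five million views is not production budget, but whether the average viewer is moved to share it with more than one friend.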

Thus far, we’ve done this on our own, with very little assistance from the wealth of computing machinery which crowds our lives. We create ad-hoc solutions for media distribution: mailing lists, websites, podcasts – each of these an attempt to spread our ideas more successfully. But they’re held together tenuously, only by our constant activity, busy bees maintaining the cells of our hive. And it’s a lot of work. We’re forced to do it – forced to run the race, lest we be overrun by the memes of others – but we’ve reached the one practical limit: time. No one has enough time in the day to keep up with all of the information we should be absorbing. We can filter ruthlessly – and perhaps miss out on something we’ll regret later – or declare email bankruptcy, like Lawrence Lessig, or just withdraw to an ever-more-specialized domain of coolfinding. And we are doing each of these things, every day, under the pressure of all this information.

There’s got to be a better way.

In the early years of the 19th century, farmers in western Pennsylvania kept their wagon wheels greased with puddles of bubbling muck that studded the countryside. Although useful, the puddles were a toxic nuisance to livestock. If the farmers could have rid their lands of these puddles, they likely would have. Half a century later, western Pennsylvania was booming, its fortunes built on its substantial petroleum reserves. The bubbling muck had immense value – but it had to wait for the demands of the kerosene lamp and the internal combustion engine.

In the early years of the 21st century, we each generate an enormous amount of interaction data – every click on a computer, every email sent or received, every website visited, every text message, every phone call, every swipe of a credit card or loyalty card or debit card, every face-to-face interaction. None of it is recorded – or at least, it’s not recorded by any of us, for any of us (though the NSA has expressed some interest in it) – because it hasn’t been seen as valuable. It’s bubbling up through all of us, and around all of us, as we create data shadows that have grown longer and longer, resembling Jacob Marley’s cash-boxes and chains, rattling throughout cyberspace.

All of that information is worth more than oil, more than gold. And all of it is sadly – almost obscenely – dropped on the floor as soon as it is created. If we’re lucky, it is deleted. If we’re unlucky, someone uses it to create a digital simulacrum, and we find our identities hijacked. But in no case is this information ever exposed to us, for our own use. We’re told it has no value to us, and – so far – we’ve been stupid enough to believe it.

But now, just now, economic forces are linking the persistence of our data shadows to our ability to filter the avalanche of information which characterizes life in the 21st century. Turns out this data guck is good for more than greasing the wheels of commerce. These data shadows glow with the evanescent echo of our real social networks – not the baby steps of MySpace and Friendster – but the real ground-truth interactions which reveal us and our relations one to another. It is human metadata. And it is the most valuable thing we’ve got, now that there’s demand for it.

YouTube records every email address you use to forward a video to a friend. It uses these, at present, to do auto-completion of addresses as you type them in. It also presents a friendly list of these addresses, to make forwarding all that much easier. What they’re not doing – at least, not visibly, and very likely not at all – is keeping any record of what you sent to whom, nor when, nor why. Yet every video forwarded through YouTube is forwarded for a reason – salience. YouTube could record those moments of salience, could use them to build a model, a data shadow, which could reinforce your own ability to make decisions about who should see what. It might even, to some degree, automate that process. When you add to this the newly emerging capabilities of analytic folksonomy – comparing a user’s tag clouds against the tag clouds of others within their social network – certain other relationships and affinities emerge. Again, these relationships can be used to improve the capability of the system to help find, filter and forward relevant videos. This is how a social network really works. It’s not about having 500 first-degree friends in MySpace. It’s about listening to your naturally occurring social network to direct, improve, and accelerate information flow. When the brand-new power of the individual as broadcaster is reified by the capabilities of computing machinery to listen to and model our interactions, the result is hypercasting. This is what media distribution in the 21st century is inevitably hurtling toward, driven by the natural selection of steadily increasing informational pressure.
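To make the tag-cloud comparison concrete, here is a minimal sketch of analytic folksonomy in Python. Everything in it is my own assumption for illustration – the tag clouds, the names, and the choice of cosine similarity as the overlap measure – not YouTube’s actual method, which (as noted above) is not visible to us.

```python
from math import sqrt

def cosine_similarity(cloud_a, cloud_b):
    """Cosine similarity between two tag clouds (dicts of tag -> count)."""
    shared = set(cloud_a) & set(cloud_b)
    dot = sum(cloud_a[t] * cloud_b[t] for t in shared)
    norm_a = sqrt(sum(v * v for v in cloud_a.values()))
    norm_b = sqrt(sum(v * v for v in cloud_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_affinities(my_cloud, friend_clouds):
    """Order friends by how closely their tag clouds echo mine."""
    return sorted(friend_clouds.items(),
                  key=lambda kv: cosine_similarity(my_cloud, kv[1]),
                  reverse=True)

# Hypothetical tag clouds for one user and two friends
me = {"skateboarding": 8, "diy": 5, "music": 2}
friends = {
    "alice": {"skateboarding": 6, "diy": 3},
    "bob": {"politics": 9, "music": 1},
}
print(rank_affinities(me, friends)[0][0])  # alice shares more of my tags
```

A real system would fold these affinity scores back into the forwarding model – weighting which videos surface for which friends – but even this toy version shows how latent relationships emerge from nothing more than overlapping vocabularies of taste.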

Hypercasting solves some lingering questions confronting us. The first and most important of these is: How will we figure out what to watch now that we’ve got a near-to-infinite set of choices? We’ll rely on the recommendations of our friends, as we always have, but now these recommendations will be backed up by a hypercasting system which will invisibly and pervasively keep track of our interests, the points of interest we hold in common with our friends, our communities, our families, and our co-workers. It will not be automatic – no one really wants to see some out-of-control hypercasting system deluge us with video spam – but it will be so tightly integrated into our interactive experiences that it will barely register on our perceptions. We’ll simply come to expect that our iPods, our Media Centers, our PSPs and our mobiles are loaded up and ready for us, with things we’re sure to find compelling. Addiction to television will soar to new highs, a new crop of amateurs – millions of them – will find successful and lucrative careers in media production, and advertisers, as always, will find a way to spread their messages. On the surface, things will look much as they do now, but everything will move at a more rapid clip. Videos will fly across the world in seconds, not days, and a global audience of a million will gather in moments. Almost accidentally, this will change news reporting forever, as citizen journalism becomes first a real threat to established media companies, and then their utter undoing. Shouldn’t the New York Times be subject to the same pressures as NEWS Corporation?

Is YouTube the harbinger of the transition to hypercasting? The lead is theirs to lose. GooTube delivers over half of all videos seen on the Internet. They have the cash and the brainpower to transform broadcasting into hypercasting. And they have to worry about the next set of 20-somethings, in a garage, working on the Next Big Thing. Those kids, nurtured by YouTube, know just what’s wrong with it, and how to make it better. YouTube faces its own selection pressures, which will only increase as it grows exponentially and cuts content deals and just tries to keep the whole centralized mess up and running.

Yet it doesn’t matter. We have seen birth and death, and thought they were different. But the death of the Web brought a new kind of life, a vitality and surefootedness suppressed during the years of MBAs and crazy business plans and IPOs. Perhaps history is repeating itself, as everyone goes wild with another case of gold fever, and we’ll lose the plot again. In that case, we should be glad of another death.

Hypercasting might need to wait a few years, for a platform very much like a fully mature Democracy DTV – or something we haven’t even dreamt up. It may be that YouTube will disappoint. But that doesn’t mean anything at all. YouTube isn’t driving the evolution toward hypercasting. The audience is. And the audience – in its teeming, active, probing billions – always gets whatever it wants. That’s the first rule of show business.