When I came to Australia six years ago, to seek my fame and fortune, business communications had remained largely unchanged for nearly a century. You could engage in face-to-face conversation – something humans have been doing since we learned to speak, countless thousands of years ago – or, if distance made that impossible, you could drop a letter into the post. Australia Post is an excellent organization, and seems to get all of the mail delivered within a day or two – quite an accomplishment in a country as dispersed and diffuse as ours.

In the twentieth century, the telephone became the dominant form of business communication; Australia Post wired the nation up, and let us talk to one another. Conversation, mediated by the telephone, became the dominant mode of communication. About twenty years ago the facsimile machine dropped in price dramatically, and we could now send images over phone lines.

The facsimile translates images into data and back into images again. That’s when the critical threshold was crossed: from that point on, our communications have always centered on data. The Internet arrived in 1995, and broadband in 2001. In the first years of Internet usage, electronic mail was both the ‘killer app’ and the thing that began to supplant the telephone for business correspondence. Electronic mail is asynchronous – you can always pick it up later. Email is non-local, particularly when used through a service such as Hotmail or Gmail – you can get it anywhere. Until mobiles started to become pervasive for business uses, the telephone was always a hit-or-miss affair. Electronic mail is a hit, every time.

Such was the business landscape when I arrived in Australia. The Web had arrived, and businesses eagerly used it as a publishing medium – a cheap way of getting information to their clients and customers. But the Web was changing. It had taken nearly a decade of working with the Web, day-to-day, before we discovered that the Web could become a fully-fledged two-way medium: the Web could listen as well as talk. That insight changed everything. The Web morphed into a new beast, christened ‘Web 2.0’, and everywhere the Web invited us to interact, to share, to respond, to play, to become involved. This transition has fundamentally changed business communication, and it’s my goal this morning to outline the dimensions of that transformation.

This transformation unfolds in several dimensions. The first of these – and arguably the most noticeable – is how well-connected we are these days. So long as we’re in range of a cellular radio signal, we can be reached. The number of ways we can be reached is growing almost geometrically. Five years ago we might have had a single email address. Now we have several – certainly one for business, and one for personal use – together with an account on Facebook (nearly eight million of the 22 million Australians have Facebook accounts), perhaps another account on MySpace, another on Twitter, another on YouTube, another on Flickr. We can get a message or maintain contact with someone through any of these connections. Some individuals have migrated to Facebook for the majority of their communications – there’s no spam, and they’re assured the message will be delivered. Among under-25s, electronic mail is seen as a technology of the ‘older generation’, something that one might use for work, but has no other practical value. Text messaging and messaging-via-Facebook have replaced electronic mail.

This increased connectivity hasn’t come for free. Each of us is now under a burden to maintain all of the various connections we’ve opened. At the most basic level, we must at least monitor all of these channels for incoming messages. That can easily become overwhelming, as each channel clamors for attention.

But wait. We’ve dropped Facebook and Twitter into the conversation before I’ve even explained what they are and how they work. We just take them as a fact of life these days, but they’re brand new. Facebook was unknown just three years ago, and Twitter didn’t zoom into prominence until eighteen months ago. Let’s step back and take a look at what social networks are. In a very real way, we’ve always known exactly what a social network is: since we were very small we’ve been reaching out to other people and establishing social relationships with them. In the beginning that meant our mothers and fathers, sisters and brothers. As we grew older that list might grow to include some of the kids in the neighborhood, or at pre-kindy, and then our school friends. By the time we make it to university, that list of social relationships is actually quite long. But our brains have limited space to store all those relationships – maintaining them is actually the most difficult thing we do, the most cognitively demanding task we perform. Forget physics – relationships are harder, and take more brainpower.

Nature has set a limit of about one hundred and fifty on the social relationships we can manage in our heads. That’s not a static number – it’s not as though as soon as you reach 150, you’re done, full. Rather, it’s a sign of how many relationships of importance you can manage at any one time. None of us, not even the most socially adept, can go very much beyond that number. We just don’t have the grey matter for it.

Hence, fifty years ago mankind invented the Rolodex – a way of keeping track of all the information we really should remember but can’t possibly begin to absorb. A real, living Rolodex (and there are few of them, these days) is a wonder to behold, with notes scribbled in the margins, business cards stapled to the backs of the cards, and a glorious mess of information, all alphabetically organized. The Rolodex was mankind’s first real version of the modern, digital social network. But a Rolodex doesn’t think for itself; a Rolodex cannot draw out the connections between the different cards. A Rolodex does not make explicit what we know – that we live in a very interconnected world, and that many of our friends and associates are also friends and associates of our friends and associates.

That is precisely what Facebook gives us. It makes those implicit connections explicit. It allows those connections to become conduits for ever-greater-levels of connection. Once those connections are made, once they become a regular feature of our life, we can grow beyond the natural limit of 150. That doesn’t mean you can manage any of these relationships well – far from it. But it does mean that you can keep the channels of communication open. That’s really what all of these social networks are: turbocharged Rolodexes, which allow you to maintain far more relationships than ever before possible.
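The ‘turbocharged Rolodex’ idea can be sketched in a few lines of code. This is a toy illustration – the names and the data structure are invented, not how Facebook actually stores its graph – but it shows the key difference: once the network is explicit data, software can surface the friend-of-friend connections a paper Rolodex keeps implicit.

```python
# A toy social graph: each person maps to the set of their direct connections.
# (Invented example data, for illustration only.)
network = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave": {"bob", "carol"},
}

def friends_of_friends(person):
    """People reachable through a mutual friend, excluding direct friends."""
    direct = network[person]
    indirect = set()
    for friend in direct:
        indirect |= network[friend]
    return indirect - direct - {person}

print(friends_of_friends("alice"))  # → {'dave'}: reachable via bob and carol
```

A Rolodex holds only the cards; the graph holds the links between the cards, and those links are what the software can traverse.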

Once these relationships are established, something begins to happen quite naturally: people begin to share. What they share is often driven by the nature of the relationship – though we’ve all seen examples where individuals ‘over-share’ inappropriately, confusing business and social channels of communication. That sort of thing is very easy to do on a social network such as Facebook, because it doesn’t provide an easy way to send messages to different groups of friends. We might want a social network where business contacts get something very formal, while close friends get that photo of you doing tequila shots at last weekend’s birthday party. It’s a great idea, isn’t it? But it can’t be done. Not on Facebook, not on Twitter. Your friends are all lumped together into one undifferentiated whole. That’s one way these social networks are very different from the ones inside our heads. And it’s something to be constantly aware of when sharing through social networks.

That said, this social sharing has become an incredibly potent force. More videos are uploaded to YouTube every day than all television networks all over the world produce in a year. It may not be material of the same quality, but that doesn’t matter – most of those videos are only meant to be seen among a small group of family or friends. We send pictures around, we send links around, we send music around (though that’s been cause for a bit of trouble), we share things because we care about them, and because we care about the people we’re sharing with. Every act of sharing, business or personal, brings the sharer and the recipient closer together. It truly is better to give than receive. On the other hand, we’re also drowning in shared material. There’s so much, coming from every corner, through every one of these social networks, there’s no possible way to keep up. So, most of us don’t. We cherry-pick, listening to our closest friends and associates: the things they share with us are the most meaningful. We filter the noise and hope that we’re not missing anything very important. (We usually are.)

In certain very specific situations, sharing can produce something greater than the sum of its parts. A community can get together and decide to pool what it knows about a particular domain of knowledge, can ‘wise up’ by sharing freely. This idea of ‘collective intelligence’ producing a shared storehouse of knowledge is the engine that drives sites like Wikipedia. We all know Wikipedia, we all know how it works – anyone can edit anything in any article within it – but the wonder of Wikipedia is that it works so well. It’s not perfectly accurate – nothing ever is – but it is good enough to be useful nearly all the time. Here’s the thing: you can come to Wikipedia ignorant and leave it knowing something. You can put that knowledge to work to make better decisions than you would have in your state of ignorance. Wikipedia can help you wise up.

Wikipedia isn’t the only example of shared knowledge. A decade ago a site named TeacherRatings.com went online, inviting university students to provide ratings of their professors, lecturers and instructors. Today it’s named RateMyProfessor.com, is owned by MTV Networks, and has over ten million ratings of one million instructors. This font of shared knowledge has become so potent that students regularly consult the site before deciding which classes they’ll take next semester at university. Universities can no longer saddle students with poor teachers (who may also be fantastic researchers). There are bidding wars taking place for the lecturers who get the highest ratings on the site. This sharing of knowledge has reversed a power relationship between a university and its students which stretches back nearly a thousand years.

Substitute the word ‘business’ for university and ‘customers’ for students and you see why this is so significant. In an era where we’re hyperconnected, where people share, and share knowledge, things are going to work a lot differently than they did before. These all-important relationships between businesses and their customers (potential and actual) have been completely rewritten. Let’s talk about that.

II. Linked Out

Of all the challenges you face in your professional practice, the greatest of them comes from a website that, at first glance, seems completely innocuous. LinkedIn is the “professional” social network, where individuals re-create their C.V. online, and, entry by entry, link their profiles to other people they have worked with over the years.

Just that alone is something entirely new and very potent. When a potential employer reads a traditional C.V., they don’t see the network of connections the candidate created at every position – a network which tells the employer much of what they need to know about the suitability of the candidate. On LinkedIn, all of this implicit information is revealed explicitly. An employer can ‘walk the chain’ of associations long before a candidate submits any references. The LinkedIn profile is the reference, quite literally.

This means that a LinkedIn profile is more valuable than any hand-crafted C.V., because it is, on the whole, a more accurate read of the candidate. A candidate’s connections tell you everything about who the candidate is. They certainly tell you more than a list of hand-picked referees ever could. LinkedIn is simply a better way of doing business.

Little wonder, then, that LinkedIn has caught on like a bushfire at the big end of town. Throughout the nation, employers look up the LinkedIn profiles of potential candidates, and these profiles carry more weight than any words from the candidate, or a recruiter, or, really, anyone else. This transformation happened suddenly, over the last twelve months, as businesspeople reached a critical mass of involvement with LinkedIn. LinkedIn benefits from the ‘network effect’: the more people who create profiles on LinkedIn, the more valuable the service becomes – because it’s more likely you’ll find someone’s profile there. That, in turn, makes it more likely another individual will create a LinkedIn profile, making the service more valuable still. It also means that any candidate without a LinkedIn profile is immediately suspect – what’s he or she trying to hide?
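The network effect described here is often formalized as Metcalfe’s law: among n members there are n(n−1)/2 possible pairwise connections, so the potential value of the service grows roughly with the square of its membership. A quick sketch – the quadratic valuation is a modeling assumption, not a measurement:

```python
def possible_connections(n):
    """Number of distinct pairs among n members: n choose 2."""
    return n * (n - 1) // 2

# Doubling the membership roughly quadruples the possible connections,
# which is why each new profile makes the whole service more valuable.
for members in (10, 20, 40):
    print(members, possible_connections(members))
# → 10 45
# → 20 190
# → 40 780
```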

LinkedIn has become the new standard in recruiting. But don’t look too closely, or you’ll get scared. LinkedIn takes one of the things the recruiter brings to the table – an extensive and wide-ranging set of contacts – and reproduces it electronically, in such a way that anyone can take advantage of it. In other words, everyone is now on a much more equal footing. The time and energy you have dedicated to building up those networks can now be matched by someone spending far less time on it – someone who is employing the latest tools.

The big worry, from here forward, is that recruiters as we have known them will be obsolesced by social networking technologies. As we get further into the social media revolution, and these tools become more refined, many of the functions of the recruiter-as-networker, recruiter-as-matchmaker, and recruiter-as-talent-finder will be subsumed into these social networks. Already I can dial and tune searches on LinkedIn to give me, say, a list of electrical engineers who work in Melbourne. That’s a list I can work from, if I’m doing a personnel search. I can message those folks through LinkedIn, to find out if they’re interested in a conversation about a potential opportunity. The platform provides the basic set of capabilities to amplify my effectiveness – without any substantial investment.

People will begin to ask why they need recruiters. People are already beginning to ask this question, as they see the social network providing the same capabilities – and for free. This is something that should scare you a little bit, because it shows you that recruiting, as we’ve known it, has about as much life expectancy as a buggy-whip maker did in 1915. There are still a few years left in which recruiting will be a profitable business, but after that it will simply be overwhelmed by social networking tools which can amplify the powers of the average person so effectively that recruiting simply becomes another task on offer, like sending a message or posting a photo.

As people are drawn together over social networks, they get a better sense of the talents of those around them. This talent-spotting used to be the sine qua non of the recruiter. Now that each of us can manage connections far beyond the natural limit of 150, we each learn our respective strengths. We use systems like LinkedIn to help us keep tally of those strengths. We use the tools to deploy those strengths. Everything happens because the tools empower us. But will they empower us so much that recruiters become redundant?

You need to have a good think about your business, and about the way you practice your business. You need to have a good look at the tools – particularly LinkedIn, but also Twitter and Facebook. You’ll learn that these tools are good at some things, and lousy at others. Here’s the question: are you good at the things the tools aren’t? Tools are no substitute for relationships. Even though the tools give us some false sense of relationship, it’s not the real thing. Recruiting is the real thing. But, is that enough?

III. Social Media Gods

In times long past – and by this, I mean just five years ago – recruiters were the masters of the Rolodex. You survived and thrived by knowing everybody, everywhere, with talent, and everybody, everywhere, who needed that talent. That in itself is quite a talent. But that talent is no longer enough. It is, however, the springboard to get you to the next level.

Fasten your seatbelts. You’re about to get launched headlong into the future. I want you to imagine a time – let’s say, tomorrow afternoon – when the average person now has quite extraordinary Rolodex capabilities, courtesy of the social networks, and where you, the masters, have gone beyond that into regions undreamed of. Imagine being able to take each of your contacts, and use those as starting points for new contacts within new networks. You’d have an inner ring of close contacts – just as you do today, but multiplied by the capabilities of the tools to support and nurture these contacts. Outside that inner ring, you’d have consecutive rings of contacts-to-contacts, and contacts-to-contacts-to-contacts, and so on, all the way out until the network simply becomes too diffuse and too difficult to maintain.

If this sounds familiar, it’s because it echoes the famed ‘six degrees of separation’ – the hypothesis that we are all just six acquaintances away from any other person on the planet. Australia is a lot smaller than the world; within any particular domain of expertise, there’s really only one or two degrees of separation, whether that’s in filmmaking, medicine, or software engineering. There just aren’t that many of us. Fortunately, that means our networks needn’t be deep: we can more-or-less know everyone involved in our field, with the help of a good Rolodex.
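Computationally, degrees of separation is just a shortest-path search over the contact graph. A minimal breadth-first-search sketch, over an invented toy network:

```python
from collections import deque

def degrees_of_separation(graph, start, target):
    """Breadth-first search: length of the shortest chain of contacts."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for contact in graph.get(person, ()):
            if contact == target:
                return dist + 1
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, dist + 1))
    return None  # no connecting chain exists

# Invented example network.
graph = {
    "you": ["ana"],
    "ana": ["you", "ben"],
    "ben": ["ana", "cho"],
    "cho": ["ben"],
}
print(degrees_of_separation(graph, "you", "cho"))  # → 3
```

On a real professional network the same search runs over millions of profiles, which is why a tightly clustered field collapses to one or two hops.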

You have more than a good Rolodex. You have the new tools; you can build a Rolodex of Rolodexes, one Rolodex per discipline, and use that to track everybody, everywhere, who matters. In this future, that is really tomorrow afternoon, you’ve so leveraged your network resources that each of you sits in the middle of a vast web, and each time there’s a twitch upon a thread, you know about it, because that information is shared throughout your networks, and finds its way toward your receptive ears.

You’re going to need good tools to make this ambitious project a reality, and you’re going to need them for two entirely contradictory reasons: first, to be able to listen to everything going on everywhere, and second, because that chaotic din will deafen you. You need tools to help you find out what’s going on, but, more significantly, you need tools to help you winnow the wheat from the chaff. Being well-connected means bearing the burden of drowning in pointless information. Without the right tools, as you grow your networks you will simply sink under the noise.
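The winnowing such tools perform can be illustrated, in miniature, as a keyword filter over a stream of updates. This is a deliberately naive sketch – the watchlist and messages are invented, and real tools use far more sophisticated relevance ranking:

```python
# Topics this (hypothetical) recruiter is listening for.
watchlist = {"electrical engineer", "melbourne", "hiring"}

def relevant(message, keywords=watchlist):
    """Keep a message only if it mentions any watched keyword."""
    text = message.lower()
    return any(keyword in text for keyword in keywords)

# An invented slice of the incoming feed.
feed = [
    "Hiring: senior electrical engineer, Melbourne CBD",
    "Look at this cat video!",
    "Our Melbourne office is expanding",
]
signal = [message for message in feed if relevant(message)]
print(signal)  # the cat video is filtered out
```

The point isn’t the particular filter; it’s that without some automated winnowing, a network of networks produces more chatter than any human can read.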

What tools? They barely exist today. Google Alerts is one tool that will help keep you abreast of news as it is created on the net. Within the next few months, Google will begin to digest the endless ‘feeds’ created by Facebook and Twitter users, and you’ll be able to search through those as well. But again, there’s just too much there. You likely need a more professional tool, such as Sydney’s own PeopleBrowsr, to sift through the wealth of information that will be generated by your ever-more-encompassing networks of networks.

I should point out – for the more entrepreneurial among you – there is now a market for tools that recruiters need to become better recruiters: tools that harness the networks. Such tools will need to be designed by someone who understands the recruiting business and the network. That means it could be one of you. You could partner with a Google or a PeopleBrowsr, or strike out on your own. If you don’t do it, one of your competitors – either in Australia or overseas – certainly will.

The first half of my advice is simply this: build your networks. Build them out to unimaginable reaches. Use the tools to leverage your capabilities. Use the tools as if your livelihood depended upon it. Because it does. Behind you is a new generation, unafraid to use the tools to build up their networks. When you go head-to-head against them, those with the best networks – and the best tools – will tend to win. That’s what the next decade looks like, as we transition from the Rolodex to the social network: more and more business will go to the well-networked. So really, there is no choice: adapt or die.

There’s another face to this, one that turns itself outward. Sure, you’ve created this vast and nationwide network to feed you information. But you’ve got to do more than listen. You must present yourself within the network. You must be present. Many people and most companies think that they can use social media as an advertising medium. Plenty of firms set up Facebook pages and Twitter accounts and post lots of advertising messages to an ever-decreasing number of followers.

People don’t want to get spammed. They don’t want to hear your marketing messages over a communications channel that they consider personal. So please, don’t make this mistake. In fact, I’ll go even further – don’t think of the Web as an advertising medium. Sure, it had a few good years where a business presence online was simply a great way to get your marketing materials out there inexpensively, but those days are over. Today everything is about engagement. Engagement begins with conversation.

Conversation is a tricky thing: on the one hand it’s the most natural of human capabilities; on the other hand, it’s fraught with disaster. Social media amplifies both sides of this equation. There are more places for more conversations than ever before, and more opportunities for these conversations to run off the rails. Here are some simple rules of thumb which should keep you out of trouble:

Only go where you’re invited. No one likes a salesman who sticks their foot in the door.

Participate in a conversation from a place of authenticity. Let people know who you are and why you’re there.

Spend time building relationships. Social media is a lot like friendship – it takes time and investment and a bit of love to make it work.

Be consistent. Invest time every single day, or at least with regularity. If you can’t do that, it’s probably better you do nothing at all.

Where are these conversations happening? All around you: on Twitter and Facebook and LinkedIn and YouTube and Flickr and a thousand blogs. They’re happening all the time, everywhere. You probably want to spend some time investigating these conversations before you participate. That’s known as ‘lurking’, and it’s the foundation of successful net relationships. Having an appreciation and an understanding of a community before you participate within it shows respect. Respect will be reciprocated.

That’s about it for today – and frankly, that’s quite a lot. I’ve asked you to re-invent yourselves for the mid-21st century. I’ve asked you to become the gods of social media, to translate your natural role as connectors and facilitators into a greatly amplified form, just so you can remain competitive. I’m not saying that this transition will happen overnight. You have at least a few years to become adept with the tools, and a few more to build out those nationwide networks. But I can promise this: at the close of the 2nd decade of the 21st century, recruiting will look entirely different.

Every social network has a few individuals who are ‘superconnected’, who have many more connections than their peers within the network. Those individuals are the glue that holds the network together. This is your natural role. The challenge, moving forward, is to remain extraordinary when everyone around you becomes superconnected themselves. It will take some work, and some time, but it can be done. Good luck.

When I came to Australia six years ago, to seek my fame and fortune, business communications had remained largely unchanged for nearly a century. You could engage in face-to-face conversation – something humans have been doing since we learned to speak, countless thousands of years ago – or, if distance made that impossible, you could drop a letter into the post. Australia Post is an excellent organization, and seems to get all of the mail delivered within a day or two – quite an accomplishment in a country as dispersed and diffuse as ours.

In the twentieth century, the telephone became the dominant form of business communication; Australia Post wired the nation up, and let us talk to one another. Conversation, mediated by the telephone, became the dominant mode of communication. About twenty years ago the facsimile machine dropped in price dramatically, and we could now send images over phone lines.

The facsimile translates images into data and back into images again. That’s when the critical threshold was crossed: from that point on, our communications have always centered on data. The Internet arrived in 1995, and broadband in 2001. In the first years of Internet usage, electronic mail was both the ‘killer app’ and the thing that began to supplant the telephone for business correspondence. Electronic mail is asynchronous – you can always pick it up later. Email is non-local, particularly when used through a service such as Hotmail or Gmail – you can get it anywhere. Until mobiles started to become pervasive for business uses, the telephone was always a hit-or-miss affair. Electronic mail is a hit, every time.

Such was the business landscape when I arrived in Australia. The Web had arrived, and businesses eagerly used it as a publishing medium – a cheap way of getting information to their clients and customers. But the Web was changing. It had taken nearly a decade of working with the Web, day-to-day, before we discovered that the Web could become a fully-fledged two-way medium: the Web could listen as well as talk. That insight changed everything. The Web morphed into a new beast, christened ‘Web 2.0’, and everywhere the Web invited us to interact, to share, to respond, to play, to become involved. This transition has fundamentally changed business communication, and it’s my goal this morning to outline the dimensions of that transformation.

This transformation unfolds in several dimensions. The first of these – and arguably the most noticeable – is how well-connected we are these days. So long as we’re in range of a cellular radio signal, we can be reached. The number of ways we can be reached is growing almost geometrically. Five years ago we might have had a single email address. Now we have several – certainly one for business, and one for personal use – together with an account on Facebook (nearly eight million of the 22 million Australians have Facebook accounts), perhaps another account on MySpace, another on Twitter, another on YouTube, another on Flickr. We can get a message or maintain contact with someone through any of these connections. Some individuals have migrated to Facebook for the majority of their communications – there’s no spam, and they’re assured the message will be delivered. Among under-25s, electronic mail is seen as a technology of the ‘older generation’, something that one might use for work, but has no other practical value. Text messaging and messaging-via-Facebook have replaced electronic mail.

This increased connectivity hasn’t come for free. Each of us are now under a burden to maintain all of the various connections we’ve opened. At the most basic level, we must at least monitor all of these channels for incoming messages. That can easily get overwhelming, as each channel clamors for attention.

But wait. We’ve dropped Facebook and Twitter into the conversation before I even explained what they are and how they work. We just take them as a fact of life these days, but they’re brand new. Facebook was unknown just three years ago, and Twitter didn’t zoom into prominence until eighteen months ago. Let’s step back and take a look at what social networks are. In a very real way, we’ve always known exactly what a social network is: since we were very small we’ve been reaching out to other people and establishing social relationships with them. In the beginning that meant our mothers and fathers, sisters and brothers. As we grew older that list might grow to include some of the kids in the neighborhood, or at pre-kindy, and then our school friends. By the time we make it to university, that list of social relationships is actually quite long. But our brains have limited space to store all those relationships – it’s actually the most difficult thing we do, the most cognitively all-encompassing task. Forget physics – relationship are harder, and take more brainpower.

Nature has set a limit of about one hundred and fifty on the social relationships we can manage in our heads. That’s not a static number – it’s not as though as soon as you reach 150, you’re done, full. Rather, it’s a sign of how many relationships of importance you can manage at any one time. None of us, not even the most socially adept, can go very much beyond that number. We just don’t have the grey matter for it.

Hence, fifty years ago mankind invented the Rolodex – a way of keeping track of all the information we really should remember but can’t possibly begin to absorb. A real, living Rolodex (and there are few of them, these days) are a wonder to behold, with notes scribbled in the margins, business cards stapled to the backs of the Rolodex cards, and a glorious mess of information, all alphabetically organized. The Rolodex was mankind’s first real version of the modern, digital, social network. But a Rolodex doesn’t think for itself; a Rolodex can not draw out the connections between the different cards. A Rolodex does not make explicit what we know – we live in a very interconnected world, and many of our friends and associates are also friends and associates with our friends and associates.

That is precisely what Facebook gives us. It makes those implicit connections explicit. It allows those connections to become conduits for ever-greater-levels of connection. Once those connections are made, once they become a regular feature of our life, we can grow beyond the natural limit of 150. That doesn’t mean you can manage any of these relationships well – far from it. But it does mean that you can keep the channels of communication open. That’s really what all of these social networks are: turbocharged Rolodexes, which allow you to maintain far more relationships than ever before possible.

Once these relationships are established, something beings to happen quite naturally: people begin to share. What they share is often driven by the nature of the relationship – though we’ve all seen examples where individuals ‘over-share’ inappropriately, confusing business and social channels of communication. That sort of thing is very easy to do with social networks such as Facebook, because it doesn’t provide an easy method to send messages out to different groups of friends. We might want a social network where business friends might get something very formal, while close friends might that that photo of you doing tequila shots at last weekend’s birthday party. It’s a great idea, isn’t it? But it can’t be done. Not on Facebook, not on Twitter. Your friends are all lumped together into one undifferentiated whole. That’s one way that those social networks are very different from the ones inside our heads. And it’s something to be constantly aware of when sharing through social networks.

That said, this social sharing has become an incredibly potent force. More videos are uploaded to YouTube every day than all television networks all over the world produce in a year. It may not be material of the same quality, but that doesn’t matter – most of those videos are only meant to be seen among a small group of family or friends. We send pictures around, we send links around, we send music around (though that’s been cause for a bit of trouble), we share things because we care about them, and because we care about the people we’re sharing with. Every act of sharing, business or personal, brings the sharer and the recipient closer together. It truly is better to give than receive. On the other hand, we’re also drowning in shared material. There’s so much, coming from every corner, through every one of these social networks, there’s no possible way to keep up. So, most of us don’t. We cherry-pick, listening to our closest friends and associates: the things they share with us are the most meaningful. We filter the noise and hope that we’re not missing anything very important. (We usually are.)

In certain very specific situations, sharing can produce something greater than the sum of its parts. A community can get together and decide to pool what it knows about a particular domain of knowledge, can ‘wise up’ by sharing freely. This idea of ‘collective intelligence’ producing a shared storehouse of knowledge is the engine that drives sites like Wikipedia. We all know Wikipedia, we all know how it works – anyone can edit anything in any article within it – but the wonder of Wikipedia is that it works so well. It’s not perfectly accurate – nothing ever is – but it is good enough to be useful nearly all the time. Here’s the thing: you can come to Wikipedia ignorant and leave it knowing something. You can put that knowledge to work to make better decisions than you would have in your state of ignorance. Wikipedia can help you wise up.

Wikipedia isn’t the only example of shared knowledge. A decade ago a site named TeacherRatings.com went online, inviting university students to provide ratings of their professors, lecturers and instructors. Today it’s named RateMyProfessor.com, is owned by MTV Networks, and has over ten million ratings of one million instructors. This font of shared knowledge has become so potent that students regularly consult the site before deciding which classes they’ll take next semester at university. Universities can no longer saddle students with poor teachers (who may also be fantastic researchers). There are bidding wars taking place for the lecturers who get the highest ratings on the site. This sharing of knowledge has reversed a power relationship between a university and its students that stretches back nearly a thousand years.

Substitute the word ‘business’ for ‘university’ and ‘customers’ for ‘students’ and you can see why this is so significant. In an era where we’re hyperconnected, where people share, and share knowledge, things are going to work very differently than they did before. The all-important relationships between businesses and their customers (potential and actual) have been completely rewritten. Let’s talk about that.

II. Breaking In

The most important thing you need to know about the new relationship between yourselves and your customers is that your customers are constantly engaging in a conversation about you. At this point, you don’t know where those customers are, or what they’re saying. They could be saying something via a text message, or a Facebook post, or an email, or on Twitter. Any and all of these conversations about you are going on right now. But you don’t know, so there’s no way you can participate in them.

I’ll give you an example I used in my column in NETT magazine. My mate John Allsopp (a big-time Web developer, working on the next generation of Web technologies) travels a lot for business. Back in June, on a trip to the US, he decided to give VAustralia’s Premium Economy class a try. He was so pleased with the service – and the sleep he got – he immediately sent out a tweet: “At LAX waiting for flight to Denver. Best flight ever on VAustralia Premium Economy. Fantastic seat, service, and sleep. Hooked.” That message went out to twelve hundred of John’s Twitter followers – many of whom are Australians. It was quickly answered by a tweet from Cheryl Gledhill: “isn’t VAustralia the bomb!! My favourite airline at the moment… so roomy, and great entertainment, nice hosties, etc.” That message went to Cheryl’s 250 followers. I chimed in, too: “Precisely how I felt after my VA flights last month: hooked. Got 7 hours sleep each way. Worth the price.” That message went out to fifty-two hundred of my followers – who are disproportionately Australian.

Just between the three of us, we might have reached as many as seven thousand people – individuals who are like ourselves – because like connects to like in social networks. That means these are individuals who are likely to take advantage of VAustralia the next time they fly the transpacific route. But here’s the sad thing: VAustralia had no idea this wonderful and loving conversation about their product was going on. No idea at all. You know what they were involved in? ‘4320SYD’, a campaign dreamed up by an ad agency, which flew four mates to Los Angeles for three days, promising them free round-the-world flights on the various Virgin airlines if they sent at least two thousand tweets during their trip. VAustralia – or rather, VAustralia’s ad agency – presumed that people with busy lives would spend some of their precious time and attention following four blokes spewing out line after line of inane chatter. Naturally, the campaign disappeared without a trace.

If VAustralia had asked its agency to monitor Twitter, to keep its finger to the pulse of what was being said online, things could have turned out very differently. Perhaps a VAustralia rep would have contacted John Allsopp directly, thanked him for his kind words, and offered him a $100 coupon for his next flight on V Australia Premium Economy. VAustralia would have made a customer for life – and for a lot less than they spent on the ‘4320SYD’ campaign.

Marketers and agencies are still thinking in terms of mass markets and mass media. While both do still exist, they don’t shape perception as they did a generation ago. Instead, we turn to the hyperconnections we have with one another. I can instantly ask Twitter for a review of a restaurant, a gadget, or a movie, and I do. So do millions of others. This is the new market, and this is the place where marketing – at least as we’ve known it – cannot penetrate.

That’s one problem. There’s another, and larger problem: what happens when you have an angry customer? Let me tell you a story about my friend Kate Carruthers, who will be speaking with you later this morning. On a recent trip to Queensland, she pulled out her American Express credit card to pay for a taxi fare. Her card was declined. Kate paid with another card and thought little of it until the next time she tried to use the card – this time to pay for something rather pricier, and more sensitive – only to find her card declined once again.

As it turned out, AMEX had cut her credit line in half, but hadn’t bothered to inform her of this until perhaps a day or two before, via post. So here’s Kate, far away from home, with a crook credit card. Thank goodness she had another card with her, or it could have been quite a problem. When she contacted AMEX to discuss the credit line change – on a Friday evening – she discovered that this ‘consumer’ company kept banker’s hours in its credit division. That, for Kate, was the last straw. She began to post a series of messages to Twitter:

“I can’t believe how rude Amex have been to me; cut credit limit by 50% without notice; declined my card while in QLD even though acct paid”

“since Amex just treated me like total sh*t I just posted a chq for the balance of my account & will close acct on Monday”

“Amex is hardly accepted anywhere anyhow so I hardly use it now & after their recent treatment I’m outta there”

“luckily for me I have more than enough to just pay the sucker out & never use Amex again”

“have both a gold credit card & gold charge card with amex until monday when I plan to close both after their crap behaviour”

Kate is both a prolific user of Twitter and a very well-connected individual. There are over seven thousand individuals reading her tweets. Seven thousand people who saw Kate ‘go nuclear’ over her bad treatment at the hands of AMEX. Seven thousand people who will now think twice when an AMEX offer comes in the post, or when they pass the tables that are ubiquitous in every airport and mall. Every one of them will remember the ordeal Kate suffered – almost as if Kate were a close friend.

Does AMEX know that Kate went nuclear? Almost certainly not. They didn’t make any attempt to contact her after her outburst, so it’s fairly certain that this flew well underneath their radar. But the damage to AMEX’s reputation is quantifiable: Kate is simply too hyperconnected to be ignored, or mistreated. And that’s the world we’re all heading into. As we all grow more and more connected, as we each individually reach thousands of others, slights against any one of us have a way of amplifying into enormous events, the kinds of mistakes that could, if repeated, bring a business to its knees. AMEX, in its ignorant bliss, has no idea that it has shot itself in the foot.

While Kate expressed her extreme dissatisfaction with AMEX, its own marketing arm was busily cooking up a scheme to harness Twitter. Its Open Forum Pulse website shows you tweets from small businesses around the world. It’s ironic, isn’t it? AMEX builds a website to show us what others are saying on Twitter, all the while ignoring what’s being said about it. Just like VAustralia. Perhaps that’s simply the way Big Business is going to play the social media revolution – like complete idiots. You have an opportunity to learn from their mistakes.

There is a whole world out there engaging in conversation about you. You need to be able to recognize that. There are tools out there – like PeopleBrowsr – which make it easy for you to monitor those conversations. You’ll need to think through a strategy which allows you to recognize and promote those positive conversations, while – perhaps more importantly – keeping an eye on the negative conversations. An upset customer should be serviced before they go nuclear; these kinds of accidents don’t need to happen. But you’ll need to be proactive in your listening. Customers will no longer come to you to talk about you or your business.

III. Breaking Out

The first step in any social media strategy for business is to embrace the medium. Many businesses ban social media from their corporate networks, seeing them as a drain on time and attention. Which is, in essence, saying that you don’t trust your own employees. That you’re willing to infantilize them by blocking their network access. This won’t work. ‘Smartphones’ – that is, mobiles which have big screens, broadband connections, and full web browsers – have become increasingly popular in Australia. Perhaps one third of all mobile handsets now qualify as smartphones. Apple’s iPhone is simply the most visible of these devices, but they’re sold by many manufacturers, and, within a few years, they’ll be entirely pervasive: every mobile will be a smartphone. A smartphone can access a social network just as easily – often more easily – than a desktop web browser. Your employees have access to social networks all day long, unless you ask them to leave their mobiles at the front desk.

Just as we expect that employees won’t spend their days sending text messages to their friends, so an employer can expect that employees are sensible enough to regulate their own net usage. A ‘net nanny’ is not required. Mutual respect is. Yes, the network is a powerful thing – it can be used to spread rumor and innuendo, can be used to promote or undermine – but employees understand this. We all use the network at home. We know what it’s good for. Bringing it into the office requires some common sense, and perhaps a few guidelines. The ABC recently released its own guidelines for social media, and they’re a brilliant example of the parsimony and common sense which need to underwrite all of our business efforts online. Here they are:

• do not mix professional and personal in ways likely to bring the ABC into disrepute,

• do not undermine your effectiveness at work,

• do not imply ABC endorsement of personal views, and,

• do not disclose confidential information obtained at work.

There’s nothing hard about this list – for either employer or employee – yet it tells everyone exactly where they stand and what’s expected of them. Employers are expected to trust their employees. Employees are expected to reciprocate that trust by acting responsibly. All in all, a very adult relationship.

Once that adult relationship has been established around social media, you have a unique opportunity to let your employees become your eyes and ears online. Most small to medium-sized businesses have neither the staff nor the resources to dedicate a specific individual to social media issues. In fact, that’s not actually a good idea. When things ‘hot up’ for your business, any single individual charged with handling all things social media will quickly overload, with too much coming in through too many channels simultaneously. That means something will get overlooked. Something will get dropped. And a potential nuclear event – something that could be defused or forestalled if responded to in a timely manner – will slip through the cracks.

Social media isn’t a one-person job. It’s a job for the entire organization. You need to give your employees permission to be out there on Facebook, on Twitter, on the blogs and in the net’s weirder corners – wherever their searches might lead them. You need to charge them with the responsibility of being proactive, to go out there and hunt down those conversations of importance to you and your business. Of course, they should be polite, and only offer help where it is needed, but, if they can do that, you will increase your reach and your presence immeasurably. And you will have done it without spending a dime.

Those of you with a background in marketing have just broken out in a cold sweat. This is nothing like what they taught you at university, nothing like what you learned on the job. That’s the truth of it. And what you learned on the job is what VAustralia and AMEX are still practicing – that is, complete and utter failure. But, you’re thinking, what about message discipline? How can we have that many people speaking for the organization? Won’t it be chaos?

The answer, in short, is yes. It will be chaos. But not in a bad way. You’ll have your own army out there, working for you. Employees will know enough to know when they can speak for the organization, and when they should be silent. (If they don’t know, they’ll learn quickly.) Will it be messy? Probably. But the world of social media is not neat. It is not based on image and marketing and presentation. It is based on authenticity, on relationships that are established and which develop through time. It is not something that can be bought or sold like an ad campaign. It is, instead, something more akin to friendship – requiring time and tending and more than a little bit of love.

This means that employees will need some time to spend online, probably a few minutes, several times a day, to keep an eye on things. To keep watch. To make sure a simmering pot doesn’t suddenly boil over.

That’s the half of it. The other half is how you use social media to reach out. Many companies set up Twitter and Facebook accounts and use them to send useless spam-like messages to anyone who cares to listen. Please don’t do this. Social media is not about advertising. In fact, it’s anti-advertising. Social media is an opportunity to connect. If you’re a furniture maker, for example, perhaps you’d like to have a public conversation with designers and homeowners about the art and business of making furniture. Social media is precisely where you get to show off the expertise which keeps you in business – whatever that might be. Lawyers can talk about law, accountants about accounting, and printers about printing. Business, especially small business, is all about passion, and social media is a passion amplifier. Let your passions show and people will respond. Some of them will become customers.

So please, when you leave here today, set up those Facebook and Twitter accounts. But when you’ve done that, step back and have a think. Ask yourself, “How can I represent my business in a way that invites conversation?” Once you’ve answered that, you’ve also answered the other important question: how do you translate that conversation into business? Without the conversation you’ve got nothing. But, once that conversation has begun, you have everything you need.

Those are the basics. Everything else you’ll learn as you go along. Social media isn’t difficult, though it takes time to master. Just like any relationship, you’ll get out of it what you put into it. And it isn’t going away. It’s not a fad. It’s the new way of doing business. The efforts you make today will, in short order, reward you a hundred-fold. That’s the promise of the network: it will bring you success.

In November of 1998, I attended a conference on technology and design in Amsterdam, and brought along two mates itching for an excuse to visit Europe. We all stayed at the flat of my good friends, Neil and Kylin. I dutifully attended the conference every day as the rest of them went out carousing through the various less-reputable quarters of Amsterdam, and we all had a great time. As Kylin tells it – given that she was the only woman on this Cook’s Tour – when we departed, we left a lingering residue of testosterone in their flat, and (if they calculated correctly) the very day after we departed for Los Angeles, they conceived their daughter Bey.

In February 1999, Neil and Kylin emailed all their friends, telling us of their plans to move – immediately – from Amsterdam to Florida. No explanation given. Through some weird intuition, I figured it out: Kylin was pregnant. I called her, and put the question to her directly. “How did you know?” she gasped. “We’ve been keeping it top secret.”

I don’t know how I knew. But I was overjoyed: I’m part of a generation who waited a long, long time to have children – my own nephews weren’t born until 2001 and 2002; none of my close friends had children in 1999. Neil and Kylin were the first.

It got me to pondering, as I ran a little thought experiment: what would the world of their daughter, still in utero, look like? What would her experience of that world be?

A month earlier, my friend Terence McKenna had challenged me to write a book. “You mouth off enough,” he suggested, “so maybe you should get it all down?” When he laid that challenge before me, I had no idea what I’d write a book about.

Somehow, as soon as I heard about Kylin’s pregnancy, I knew. I had to write a book about the world that child would grow up into, because that world would look nothing like the world I had been born into back in 1962. That child wouldn’t need this book. Her parents would.

A few months later I attended another conference, at MIT, where I heard psychologist Sherry Turkle talk about her work with young children. Turkle has been exploring how technology changes children’s behaviors, and, in this specific case, she’d taken a long look at a brand new toy: in fact, that season’s “hot” toy, the “Furby”.

Furby is an electromechanical plush toy, capable of responding to various actions by the child, but Furby also presents the child with demands – to be fed, to be played with, to be put to sleep when tired. More than merely interactive, the Furby presents children with some of the qualities we recognize as innate to living things. Would a small child recognize Furby as inanimate, like a doll, or animate, like a pet?

From research in developmental psychology we know that children develop the categories of “inanimate” and “animate” when they’re around four years old. The development of these categories is a “constructivist” process – children do not need to be taught the difference between these two states; rather, they intuit the difference through continued interactions with animate and inanimate objects. Thus, an object, like Furby, which displays characteristics associated with both categories, should pose quite a philosophical conundrum for a small child.

Turkle put the question to these children: is Furby like your puppy? Is it like your doll? These children, little philosophical geniuses, gave her an answer she never expected to receive. They said it’s like neither of them. It is a thing itself, something in-between. They had no name for this third category between animate and inanimate, but they knew it existed, for they had direct experience of it.

This was my penny-drop moment: constructivism states that all children learn how the world works through their interactions within it. And we had suddenly changed the rules. We had infused the material world with the fairy dust of interactivity, creating the Pinocchio-like Furby, and, in so doing, created a new ontological category. It is not a category that adults acknowledge – in fact, many adults find Furby slightly “creepy” precisely because it straddles two very familiar categories – but, in another generation, by the time these children are our age, that category will have a name, and will be accepted as a matter of course.

This is what Neil and Kylin – and, really, parents everywhere – need to know: the world has changed, the world is changing, and the world’s going to change a whole lot more. We may be the first beneficiaries of this great upwelling of technology, but the lasting benefits will be conferred upon our posterity, for it is changing the way they think. Their understanding of the world is, in some ways, utterly different from our own. And, just now, just over the last year or two, we’ve thrown a new element into the mix. We’re gracing ourselves with a new kind of connectivity – I call it “hyperconnectivity” – which turbocharges some of the most essential features of human beings. This newest frontier – which did not exist even a decade ago – is what I want to focus upon this morning.

I: Who Are We?

We human beings are smart. Very smart. So smart we run the joint. But there’s a heavy price to be paid for all those brains. To start with, our heads are so big that we very nearly kill our mothers in the act of giving birth. Human births are so dangerous that we’re the only species we know of which can’t handle the act of birth alone.

We need others around – historically, other women – assisting us in the process. This point is essential to our humanity: we need other people. There is no way that a human, alone, can survive.

Yes, there are a few isolated incidents of “wolf boys” and Robinson Crusoe types, battling against the odds in an indifferent or inimical environment, but, for far longer than we have been human, we have been social.

You can go back through the tree of life, a full eleven million years, to Proconsul, the common ancestor of gorillas, chimpanzees, bonobos and humans, and that animal was a social animal. It’s in our genes. It’s what we are. But why?

The answer is simple enough: eleven million years ago, those of our ancestors with the best social skills could most dependably count on help from others. That help was essential to their survival. That help allowed them to live long enough to pass those social genes and social behaviors along to their children. That help was essential, once our brains grew big enough to create trouble in the birth canal, for the next generation of human beings to come into the world. Cleverly, nature has crafted a species which, from the moment of the first birth pangs, must be social in order to survive. That pressure – a “selection pressure”, as it’s known in biology – is probably the essential, defining feature of humanity.

In an article in the May 17 2008 issue of New Scientist, an author rhapsodized about the end of “human exceptionalism”. Ethology and zoology have taught us that all of the behaviors we consider uniquely human do, in fact, exist broadly among other species. Whales have culture, of a sort. Chimpanzees use gestures to communicate their needs and wants, just like a child does. Dolphins have names. But each of these species, smart as they may be, deliver their young unassisted. They do not need help from their fellows to enter this world.

We are delivered by social means, and live our entire lives in a social order. What was essential at birth becomes even more important as an infant and toddler: because of our huge brains we remain helpless far longer than any other species.

A mother caring for a newborn infant has a full-time task on her hands. She cannot devote her energies to finding food or shelter. Her attention is divided, but mostly focused on her child. Here again, the strong bonds of socialization create an environment where women (again) will altruistically bear some of the burden for mother and newborn. This altruism is reciprocal: as other women bear children, these mothers, with older children, will bear some of the burden for them.

This means that the mothers best able to forge strong social bonds with other women will have the most help at hand when they need it. This means, all things being equal, that their children will be more likely to survive, and the chain of genes and behaviors gets passed along to another generation. This is another selection pressure which has, over millions of years, turned us into thoroughly social animals.

An interesting point to note here is that women have always had stronger selection pressures toward social behavior than men. I will come back to this.

Given that so much of our success is based upon our ability to socialize with others, and given that additional social skills confer additional advantages, natural selection tended, as we evolved into our modern form – Homo sapiens sapiens – to emphasize our social characteristics. Being social has ever been the best way to get ahead.

In the last million years, as our brains grew explosively – as one scientist put it, “perhaps the most improbable event in all of evolution, anywhere” – much of the potential of all that new gray matter was put to work for social benefit. The “new brain” or neocortex, which is the most dramatically enlarged portion of the human brain, seems to be the area dedicated to our social relationships.

We know this because, in 1992, British anthropologist Robin Dunbar compared the average troop size of gorillas and chimpanzees against the average tribe sizes of humans. He found that there was a direct correlation between the volume of the neocortex in these three species and their average troop or tribe size. This value, known as “Dunbar’s Number”, is roughly 20 for gorillas, who have the smallest neocortex, about 35 for chimpanzees, and – for us lucky human beings, who have the greatest selection pressures on our social behavior – just under one hundred and fifty. We may not be entirely exceptional, but we’re doing quite well.

Essentially, inside of each one of our heads, there are a hundred and fifty other people running around. Yes, that sounds a bit crowded (particularly when they’re up partying all night long with their mates), but it’s actually eminently practical. These “little people” inside our heads are models of each person we know well: our family, our friends, our colleagues. For each of these people we build a mental model which helps us to predict their behavior. (It isn’t really them, but rather, our image of them.) This predictive capability smooths our social interactions. We know how to interact with people whom we have in our heads; with others we remain demure, reserved – in a word, predictable. Only with intimacy do we express the quirks of behavior which make us unique; only with intimacy do we take note of them in others.

We all know more than a hundred and fifty people. Some folks on Facebook and MySpace claim thousands of “friends”. But most of these folks aren’t in our heads. There’s a simple rule you can use, to tell whether one of these folks is in your head: I call it the “sharing test”. Let’s suppose you see something – on the Web, in the newspaper, on the telly – that is so meaningful (funny, or poignant, or just so salient to whatever passions drive you) that in the next moment you think, “Wow, I know Dazza would really enjoy that.” And you flip the link along in an email. Or you send Dazza a text message with, “Hey, mate, did you see that thing just now on TEN?” And if he didn’t see it, you ring and fill him in. It’s that moment of unrestrained sharing – it feels almost automatic, and it’s entirely an essential part of what we are – which defines the most visible quality of those people inside our heads.

Every time we share something with those little people in our heads, we reinforce that relationship; we strengthen the social bonds which tie us to one another. Fifty thousand years ago this had enormous practical benefits: sharing where the best fruit grew – or the location of a predator in the tall grass – kept everyone alive and healthy. The selection pressure for sociability made us expert at sharing.

It’s interesting to watch this behavior as expressed by children; in some ways they share automatically – children love to share their experiences. In other situations – such as with a favorite toy – children must be taught to share, to override the natural selfishness of the singular animal, overruling that intrinsic behavior with the altruistic behavior of the social human. Sharing is one of the most important lessons parents teach their children, and if that lesson is poorly taught, it leaves a child at a permanent disadvantage.

While our genes make us sociable, our sharing behaviors are more software than hardware; this is why they must be taught. It takes time for any child to learn that lesson, just as it took quite a while for humans, as a species, to learn it. Geneticists tell us that human beings haven’t changed genetically in at least 60,000 years, but civilization didn’t kick off in a meaningful way until about ten thousand years ago.

This has been a bit of a puzzler for paleoanthropologists, but a new theory – which I also read about in New Scientist – seems to make sense of that gap: while we had the raw capacity for civilized behavior long ago, it took us 50,000 years to write the cultural software for civilization. Over those years, as we learned about ourselves and our world, our behavior changed and we taught these changes to our children, who improved upon them, passing those changes along.

In short, our entire species spent a long time in primary school (and might even have been kept back a few grades) before graduation. The incredible wealth of cultural learning – which we don’t really even reflect on, because it seems so essential and obvious to us – was painstakingly developed across two thousand generations.

Our secondary studies, as a species, included that most unique of human institutions: the city. The earliest cities, such as Jericho and Çatal Höyük, already housed thousands of inhabitants – far beyond the reach of Dunbar’s Number.

That in itself presented a singular challenge for humanity, because, as near as we can tell, pre-civilized humans lived in a perpetual state of war – the “war of all against all” – waged against all those not of their own tribe.

At the end of May 2008, we saw photos of a newly discovered tribe in the far reaches of the Amazon, who reacted to the presence of an aircraft by firing arrows at it. Human beings possess an inherent xenophobia, and the boundaries of the “in group” conform to the limits of Dunbar’s Number.

Given this, how did we all come to live together in ever-greater numbers? Simply this: the cultural software of civilization provided a greater selection advantage than that afforded by the tribal order which preceded it. Civilization is a broader form of sharing, where altruism is replaced by roles: the butcher, the baker, the candlestick maker. In civilization we share the manifold burdens of life by specializing, then we trade these specialized goods and services amongst ourselves. And it works.

Civilized human beings live in greater numbers, with greater population density, than pre-civilized cultures. It does not work perfectly: we have crime and poverty precisely because there are people in our cities who can fall through the “safety net” of civilized society. These eternal blights are the specific diseases of civilization. Yet the upsides of this broader and more diffuse form of sharing so outweighed the downsides that these evils have been tacitly acknowledged as the “price of progress.”

So things continued, merrily, for the last ten thousand years. Cities rose and fell; empires rose and fell; cultures and languages and entire peoples rose up suddenly, only to vanish just as quickly. All along the way, we continued adding to our cultural software. We learned – fairly early on – to record our learning in permanent form. We codified the essential elements of the software of civilization in laws and commandments.

We experimented with every form of human social organization, from the military dictatorship of Sparta, to the centralized bureaucracy of China, to the open democracy of Athens, to the chaotic anarchism of the Paris Commune. At each step along the way, we passed these lessons along, in an unbroken chain, to the generations that followed.

We are the children of nearly five hundred generations of civilization. The lessons learned over that immense span of time have brought us to the threshold of a revolution as comprehensive as that which obsolesced our tribal natures and replaced them with more civilized forms. Once again, the selection pressures of sociability force us into a narrow passage, toward another birth.

II: Where Are We Going?

We know that our amazingly comprehensive social skills are located in the newest part of our brain; we also know that they are among the last capabilities to mature during our cognitive development. Our sociability depends upon so much: a strong command of language, the ability to empathize and sympathize, the ability to consider the wants and needs of others, the ability to give freely of one’s self – altruism. At any point this complex and delicate process can be interrupted, by nature or by nurture.

My own nephew, Alexander, was diagnosed with an Autism Spectrum Disorder at the end of 2005. For leading-edge brain researchers, autism represents a natural failure of the brain’s inherent capability to model the behavior of others. The hundred and fifty people running around inside of the head of someone with an Autism Spectrum Disorder are shaped differently than the ones running about in mine; they still exist, but they are not (in an admittedly subjective assessment) as complete. Now that we know roughly what autism is, we work with these children intensively, because, while they lack certain inherent features we associate with normalcy, these children, if diagnosed early enough, can learn to become much more sensitive to the world-views and feelings of others.

My nephew attended a state-of-the-art pre-school in his San Diego suburb, where autistic children and “normal” children (such as his year-younger brother, Andrew) mix freely, because it is now known that the autistic children can and will learn necessary social skills through this continuous interaction. Alexander has now been mainstreamed, while my younger nephew remains as a “peer” in this school, showing other children how to be a fully socialized human being.

Then there are the children who have suffered neglect or abuse. Not having been nurtured themselves, they have not learned how to nurture others. This deficit manifests as emotional withdrawal, or as anti-social behaviors. Children who have not received love cannot find it within themselves to love others. It is not that love is learned, per se, but rather, that we learn to recognize it as others demonstrate it toward us. The drive to connect with another human being, although entirely inherent, can be so confused, or so atrophied through disuse (these areas of the brain, if under-stimulated, will die away, leaving the child with a permanent deficit), that the child essentially becomes locked into a solitary world, unable to initiate or maintain the social relationships essential to success.

None of us are perfect; all of us feel embarrassment and disappointment and awkwardness in a range of social situations. Yet those sensations, of themselves, are proof of our normalcy: we sense our social shortcomings. We had little awareness of our social nature when we were young. Only as we matured, turning the corner into tweenhood, did we rise into an awareness of the strong social bonds which form the largest part of our experience as human beings. For each and every one of us, this is a painful experience.

The brain, furiously making connections between regions which have been developing from before birth, integrates our comprehensive understanding of human behavior, our own emotional state, and our perceptions of the actions and emotions of others to create a model of how we are viewed by others, our “social standing”. It is this that natural selection has driven us to optimize: individuals with the highest social standing get the lion’s share of attention, affection and resources.

In particular, this burden lies heaviest on young women, who have the additional selection pressure (now more-or-less vestigial) driving them to form the social bonds of altruism with their peers which would, in prehistoric times, lead to greater help with childbearing and child-rearing. Young women emerge into a social consciousness so rich and so complex it makes young men look nearly autistic in comparison.

It is the reason why young women invest themselves so wholly in their looks, in their friends, in their cliques, in the “in group” and the “out group”. Films like Heathers (one of my personal favorites) and Mean Girls tell tales as old as humanity: the rise into social consciousness of that most social of all the animals on the planet – the young woman.

It also provides some explanation for why young women are often emotionally overwrought. It isn’t just hormones. It’s the rising awareness of a vast social game that they don’t know how to play, with rules taught only through trial and error. Every mistake is potentially fatal, every success fleeting. And each of these moments of singular significance is amplified by a genetic imperative, a drive to connect, which leaves them helpless. Resistance is futile, and engagement only brings more learning, and more pain.

Oh, and we just made things a whole lot more complicated.

This generation of young adults, coming of age just now, have access to the best tools for connection and communication created by our species.

A few years ago, these kids, bounded by proximity and temporality, took their cues from their immediate peers. But now these connections can be forged via text messages, or MySpace pages, or YouTube videos, and so on. An average fifteen year-old girl might send and receive a hundred text messages in a single day and think nothing of it. Her inherent drive to connect has been freed from space and time; she can reach out everywhere, at any time; she can be reached anywhere, anytime. We have added a technological dimension – an intense and comprehensive acceleration – to a wholly natural process.

During the two hundred years of the industrial revolution, we amplified our capability for physical work. Steam engines and electric motors replaced muscle. As we moved from physical labor to monitoring and control of our machines, our capacity for work exploded, transforming the world. Still, these changes were entirely external. They did not affect our nature as social beings, but simply extended our physical capabilities. Now – just now – we have moved beyond the physical extension of our capabilities into a comprehensive amplification of our social nature. The mobile and the Internet are already transforming the human world as utterly as the steam engine transformed the landscape; but this transformation is happening in eighth-time.

The transition to industrialization, which took about a hundred years to complete, seems slow when compared to the rise of the Human Network, which will take about fifteen years, end-to-end.

Already, half of humanity owns a mobile phone; within about three years, three-quarters of the planet will own a mobile. That’s everyone except for the most desperately poor among us. No one, anywhere, expected this, because no one reckoned on this most basic of all human drives – the need to connect. The mobile is the steam engine, the electric motor, and the internal combustion engine of the 21st century: every bit of the potential framed by each of these enormous innovations now rests comfortably in the palm of three and a half billion hands.

Getting the tools for the amplification of our social natures is only half the story. That’s just hardware. What really counts is the software. And that’s why we turn, at the end of this tale, to Bey, the child conceived by Neil and Kylin, back in the last days of 1998.

III: Who Will Lead the Way?

Hardware is not enough. We spent fifty thousand years in idle, despite the best cognitive hardware on the planet, before anything truly interesting occurred. We are ensuring that every single person on Earth has a connection to the Human Network, but that doesn’t mean any of us know how to use it. Still, we are learning. And humans excel at learning from one another.

A recent study run with young chimps and toddlers showed that the chimps surpassed the toddlers in their cognitive capabilities, but that the toddlers far surpassed the chimpanzees in their ability to “ape” behavior. Humans learn by mimesis: the observation of our parents, our peers, our mentors and teachers. (Which is why the injunction, “Do as I say, not as I do,” never works.) As such, we closely observe each other to learn what works, and we copy it. This mimetic behavior, which used to be constrained by distance, has itself become a global phenomenon. Whatever works gets copied widely. It could be a good behavior, or a bad behavior: the only metric is the success of the behavior. If it achieves its ends, it will be observed and copied, widely and nearly instantaneously.

It took us two thousand generations to build up the cognitive software for civilization, as individual tribes made the same discoveries, independently, but lacked the means to share them. Even the diffusion of agriculture depended more on the migration of whole peoples than the dissemination of knowledge.

We know how to be social beings, but never before have we been globally and instantaneously social. For this reason, we are learning – and each of us is intensely involved in this education. We are learning from ourselves, applying the lessons of our own socialization, to see if these lessons work in this new world. That’s pure constructivism. We are learning from each other, watching our peers as intently as any young woman would, when desperately trying to defend her position in an ever-more-competitive social circle. That’s pure mimesis. Together they’re a potent combination, and, when multiplied by the accelerator of the Human Network, it means we’re learning very rapidly indeed. Learning is never complete: ignorance is a permanent feature of the human condition. That said, competence can come quickly, when the students are wholly engaged in learning. As we are.

This means that, in another two or three years, when Bey is old enough to get her first mobile phone, at precisely the moment that she begins to awaken to her intense cognitive capabilities as social animal, those abilities will have been so comprehensively rewritten and transformed by the new software of sociability that she will find herself suddenly both intensely empowered and, most likely, entirely overwhelmed.

Bey will be among the first children who become socially aware within a world where the definition, rules and operating principles of the social universe have utterly changed. That transformation will not be complete, by any means, but it will be far enough along that the basic features and outlines of 21st century social civilization will be present.

This is the only social world that she will ever know. For her, social connections will not end with the classroom and the home. Social connectivity is already edging toward a state where everyone is directly connected to everyone else, all six point eight billion of us, a world where each of us can directly forge a relationship with everyone else. Bey will not know any of the boundaries we consider natural and solid, the boundaries of the classroom, the suburb, the family, or the nation: under the pressure of this intense hyperconnectivity, all of those boundaries dissolve, or are blown over. Only connect. Connection is all that matters. The social instinct, hyperempowered and taken to an entirely new level by hyperconnectivity, is rewriting the rules of culture.

This world looks utterly alien to us, yet it is already here. Author William Gibson says, “The future is already here, it’s just not evenly distributed.” We have moments of hyperconnectivity – as in the thirty-six hours after the Sichuan earthquake, when text messaging and other tools for hyperconnectivity spontaneously created a Human Network, sharing news of the tragedy and working to locate missing people. Such moments are becoming more frequent, gradually merging into a continuum.

But what about Bey? What lessons can we offer her? She will learn everything she can from everyone, everywhere. She will span the planet for best practices in sociability, because she can, and because she must. She will outpace us in every way, because the simultaneous emergence of the Human Network and her own social capabilities makes her potent in ways we can’t wholly predict. Her powers will be greater, but that also means that her crash will be more spectacular – apocalyptic, really – when she tries something, and fails.

We do know this: just as Furby created a new ontological class of being, a nether zone between animate and inanimate which children instinctively recognized and embraced, Bey will be living a new ontology of sociability, connection and relationship. These girls, just on the verge of becoming young women, will lead the way into this new world. They will be the first masters of the Human Network.

I want to close this essay with both a warning — and a hope. The warning is simply this: these young women will be vastly more powerful than we are. Harnessing the immense energies of the Human Network will be, quite literally, child’s play to them. If they sense they are being wronged, and can build a network of peers who concur in this assessment, you will need to watch out, because they will have the capacity to destroy you with a word. We already see students threatening educators with damage to their reputations; multiply that a billion-fold and you can sense the potential for catastrophe. I am not saying that this will inevitably happen, only that it can.

At the same time, despite their thermonuclear potential, it would be a mistake to handle these kids too delicately. Children are all passion, but lack wisdom. Adults have plenty of wisdom, but, all too often, we lack passion.

We need to build strong relationships with these children, using the Human Network of hyperconnectivity, so that each of us can infect the other. We need their passion to move forward without fear in a world where the human universe has shifted beneath our feet. They desperately need our wisdom to guide them into healthy and stable relationships throughout the Human Network. To do this, we need to bring these kids inside our heads, and we need to get ourselves into theirs, so that, together, we can make sense of a world so new, and so different, that we all seem but little children in a big world.

In mid-1994, sometime shortly after Tony Parisi and I had fused the new technology of the World Wide Web to a 3D visualization engine, to create VRML, we paid a visit to the University of California, Santa Cruz, about 120 kilometers south of San Francisco. Two UCSC students wanted to pitch us on their own web media project. The Internet Underground Music Archive, or IUMA, featured a simple directory of artists, complete with links to MP3 files of these artists’ recordings. (Before I go any further, I should state that they had all the necessary clearances to put musical works up onto the Web – IUMA was not violating anyone’s copyrights.) The idea behind IUMA was simple enough, the technology absolutely straightforward – and yet, for all that, it was utterly revolutionary. Anyone, anywhere could surf over to the IUMA site, pick an artist, then download a track and play it.

This was in the days before broadband, so downloading a multi-megabyte MP3 recording could take upwards of an hour per track – something that seems ridiculous today, but was still so potent back in 1994 that IUMA immediately became one of the most popular sites on the still-quite-tiny Web. The founders of IUMA – Rob Lord and Jon Luini – wanted to create a place where unsigned or non-commercial musicians could share their music with the public in order to reach a larger audience, gain recognition, and perhaps even end up with a recording deal. IUMA was always better as a proof-of-concept than as a business opportunity, but the founders did get venture capital, and tried to make a go of selling music online. However, given the relative obscurity of the musicians on IUMA, and the pre-iPod lack of pervasive MP3 players, IUMA ran through its money by 2001, shuttering during the dot-com implosion of the same year. Despite that, every music site which followed IUMA, legal and otherwise, from Napster to Rhapsody to iTunes, has walked in its footsteps. Now, nearing the end of the first decade of the 21st century, we have a broadband infrastructure capable of delivering MP3s, and several hundred million devices which can play them. IUMA was a good idea, but five years too early.

Just forty-eight hours ago, a new music service, calling itself Qtrax, aborted its international launch – though it promises to be up “real soon now.” Qtrax also promises that anyone, anywhere will be able to download any of its twenty-five million songs perfectly legally, and listen to them practically anywhere they like – along with an inserted advertisement. Using peer-to-peer networking to relieve the burden on its own servers, and Digital Rights Management, or DRM, Qtrax ensures that there are no abuses of these pseudo-free recordings.

Most of the words that I used to describe Qtrax in the preceding paragraph didn’t exist in common usage when IUMA disappeared from the scene in the first year of this millennium. The years between IUMA and Qtrax are a geological age in Internet time, so it’s a good idea to walk back through that era and have a good look at the fossils which speak to how we evolved to where we are today.

In 1999, a curly-haired undergraduate at Boston’s Northeastern University built a piece of software that allowed him to share his MP3 collection with a few of his friends on campus, and allowed him access to their MP3s. The software scanned the MP3s on each hard drive and published the list to a shared database, allowing anyone running it to download an MP3 from someone else’s hard drive to his own. This is simple enough, technically, but Shawn Fanning’s Napster created a dual-headed revolution. First, it was the killer app for broadband: using Napster on a dial-up connection was essentially impossible. Second, it completely ignored the established systems of distribution used for recorded music.
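The architecture can be sketched in a few lines. This is an illustrative model with hypothetical names, not Napster’s actual code or protocol: each client publishes its track list to a single central index, and a search resolves to the addresses of peers holding the file – the index itself never touches the music.

```python
# An illustrative sketch of the centralized model: one index server,
# many clients. Class and method names here are invented.

class CentralIndex:
    """Stand-in for Napster's single, central track database."""
    def __init__(self):
        self.tracks = {}  # track name -> set of peer addresses

    def publish(self, peer, track_names):
        # A client announces which MP3s sit on its hard drive.
        for name in track_names:
            self.tracks.setdefault(name, set()).add(peer)

    def search(self, name):
        # A search returns peers to download from directly; the actual
        # file transfer happens peer-to-peer, never through the index.
        return self.tracks.get(name, set())

index = CentralIndex()
index.publish("peer-a", ["song1.mp3", "song2.mp3"])
index.publish("peer-b", ["song2.mp3"])
print(index.search("song2.mp3"))  # both peers offer this track
```

That single shared database was Napster’s great strength, and also its fatal weakness: shut down the index, and the whole network goes dark.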

This second point is the one which has the most relevance to my talk this morning; Napster had an entirely unpredicted effect on the distribution methodologies which had been the bedrock of the recording industry for the past hundred years. The music industry grew up around the licensing, distribution and sale of a physical medium – a piano roll, a wax recording, a vinyl disk, a digital compact disc. However, when the recording industry made the transition to CDs in the 1980s (and reaped windfall profits as the public purchased new copies of older recordings) they also signed their own death warrants. Digital recordings are entirely ephemeral, composed only of mathematics, not of matter. Any system which transmitted the mathematics would suffice for the distribution of music, and the compact disc met this need only until computers were powerful enough to play the more compact MP3 format, and broadband connections were fast enough to allow these smaller files to be transmitted quickly. Napster leveraged both of these criteria – the mathematical nature of digitally-encoded music and the prevalence of broadband connections on America’s college campuses – to produce a sensation.

In its earliest days, Napster reflected the tastes of its college-age users, but, as word got out, the collection of tracks available through Napster grew more varied and more interesting. Many individuals took recordings that were only available on vinyl, and digitally recorded them specifically to post them on Napster. Napster quickly had a more complete selection of recordings than all but the most comprehensive music stores. This only attracted more users to Napster, who added more oddities from their own collections, which attracted more users, and so on, until Napster came to be seen as the authoritative source for recorded music.

Given that all of this “file-sharing”, as it was termed, happened outside of the economic systems of distribution established by the recording industry, it was taking money out of their pockets – likely billions of dollars a year, if all of those downloads had been converted into sales. (Studies indicate this was unlikely – college students have ever been poor.) The recording industry launched a massive lawsuit against Napster in 2000, forcing the service to shutter in 2001, just as it reached an incredible peak of 14 million simultaneous users, out of a worldwide broadband population of probably only 100 million. This means that one in seven computers connected to the broadband internet was using Napster just as it was being shut down.

Here’s where it gets more interesting: the recording industry thought they’d brought the horse back into the barn. What they hadn’t realized was that the gate had burnt down. The millions of Napster users had their appetites whetted by a world where an incredible variety of music was instantaneously available with a few clicks of the mouse. In the absence of Napster, that pressure remained, and it only took a few weeks for a few enterprising engineers to create a successor to Napster, known as Gnutella, which provided the same service, but used a profoundly different technology for its file-sharing. Where Napster had all of its users register their tracks within a centralized database (which disappeared when Napster was shut down), Gnutella created a vast, amorphous, distributed database, spread out across all of the computers running Gnutella. Gnutella had no center to strike at, and therefore could not be shut down.
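The difference is easy to see in a toy model. The sketch below (names, structure and time-to-live value are illustrative, not the real Gnutella protocol) shows how a query floods outward from peer to peer, with no index anywhere: any peer holding the file simply answers.

```python
# A toy Gnutella-style network. Queries flood outward with a
# time-to-live (ttl), and every peer holding the file answers.
# There is no central server to shut down.

class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbours = []

    def query(self, filename, ttl=3, seen=None):
        # 'seen' prevents loops; 'ttl' bounds how far the query floods.
        seen = set() if seen is None else seen
        if self.name in seen or ttl < 0:
            return []
        seen.add(self.name)
        hits = [self.name] if filename in self.files else []
        for neighbour in self.neighbours:
            hits += neighbour.query(filename, ttl - 1, seen)
        return hits

a, b, c = Peer("a", []), Peer("b", ["x.mp3"]), Peer("c", ["x.mp3"])
a.neighbours, b.neighbours = [b], [c]
print(a.query("x.mp3"))  # ['b', 'c'] – found with no central server
```

Strike at any one peer and the rest of the network keeps answering – which is precisely why the lawsuits that killed Napster could find no purchase here.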

It is because of the actions of the recording industry that Gnutella was developed. If legal pressure hadn’t driven Napster out of business, Gnutella would not have been necessary. The recording industry turned out to be its own worst enemy, because it turned a potentially profitable relationship with its customers into an ever-escalating arms race of file-sharing tools, lawsuits, and public relations nightmares.

Once Gnutella and its descendants – Kazaa, Limewire, and Acquisition – arrived on the scene, the listening public had wholly taken control of the distribution of recorded music. Every attempt to shut down these ever-more-invisible “darknets” has ended in failure and only spurred the continued growth of these networks. Now, with Qtrax, the recording industry is seeking to make an accommodation with an audience which expects music to be both free and freely available, falling back on advertising revenue to recover some of their production costs.

At first, it seemed that filmic media would be immune from the disruptions that have plagued the recording industry – films and TV shows, even when heavily compressed, are very large files, on the order of hundreds of millions of bytes of data. Systems like Gnutella, which allow you to transfer a file directly from one computer to another, are not particularly well-suited to such large file transfers. In 2002, an unemployed programmer named Bram Cohen solved that problem definitively with the introduction of a new file-sharing system known as BitTorrent.

BitTorrent is a bit mysterious to most everyone not deeply involved in technology, so a brief explanation of its inner workings is in order. Suppose, for a moment, that I have a short film, just 1000 frames in length, digitally encoded on my hard drive. If I wanted to share this film with each of you via Gnutella, you’d have to wait in a queue as I served up the film, time and time again, to each of you. The last person in the queue would wait quite a long time. But if, instead, I gave the first ten frames of the film to the first person in the queue, and the second ten frames to the second person in the queue, and the third ten frames to the third person in the queue, and so on, until I’d handed out all thousand frames, all I need do at that point is tell each of you that your “peers” have the missing frames, and that you need to get them from those peers. A flurry of transfers would result, as each peer picked up the pieces it needed to make a complete whole from other peers. From my point of view, I only had to transmit the film once – something I can do relatively quickly. From your point of view, none of you had to queue to get the film – because the pieces were scattered widely about, in little puzzle pieces, that you could gather together on your own.

That’s how BitTorrent works. It is both incredibly efficient and incredibly resilient – peers can come and go as they please, yet the total number of peers guarantees that somewhere out there an entire copy of the film is available at all times. And, even more perversely, the more people who want copies of my film, the easier it is for each successive person to get a copy – because there are more peers to grab pieces from. This group of peers, known as a “swarm”, is the most efficient system yet developed for the distribution of digital media. In fact, a single, underpowered computer, on a single, underpowered broadband link, can, via BitTorrent, create a swarm of peers. BitTorrent allows anyone, anywhere, to distribute any large media file at essentially no cost.
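For the curious, the swarm dynamic described above can be caricatured in a few lines. This is a toy simulation under loose assumptions – real BitTorrent adds trackers, piece hashing, choking, and rarest-first piece selection – but it captures the core idea: the seeder uploads each piece only once, and the peers complete the file by trading pieces among themselves.

```python
# A toy swarm: the seeder distributes each piece once, then peers
# trade pieces among themselves until everyone holds the whole file.

import random

PIECES = set(range(10))  # stand-in for the "1000 frames", scaled down

def simulate(n_peers=5, seed=42):
    rng = random.Random(seed)
    # The seeder hands one distinct piece to each peer in turn –
    # the equivalent of transmitting the film exactly once.
    peers = [set() for _ in range(n_peers)]
    for i, piece in enumerate(sorted(PIECES)):
        peers[i % n_peers].add(piece)
    rounds = 0
    # Peers now trade among themselves until everyone is complete.
    while not all(p == PIECES for p in peers):
        downloader, uploader = rng.choice(peers), rng.choice(peers)
        missing = uploader - downloader
        if missing:
            downloader.add(rng.choice(sorted(missing)))
        rounds += 1
    return rounds

print(simulate())  # number of exchanges before everyone has the film
```

Note that adding peers doesn’t starve the swarm; every new downloader is also a new source for the pieces it holds, which is exactly the “more demand makes distribution easier” property described above.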

It is estimated that upwards of 60% of all traffic on the Internet is composed of BitTorrent transfers. Much of this traffic is perfectly legitimate – software, such as the free Linux operating system, is distributed using BitTorrent. Still, it is well known that movies and television programmes are also distributed using BitTorrent, in violation of copyright. This became absolutely clear on the 14th of October 2004, when Sky Broadcasting in the UK premiered the first episode of Battlestar Galactica, Ron Moore’s dark re-imagining of the famously schlocky 1970s TV series. Because the American distributor, SciFi Channel, had chosen to hold off until January to broadcast the series, fans in the UK recorded the programmes and posted them to BitTorrent for American fans to download. Hundreds of thousands of copies of the episodes circulated in the United States – and conventional thinking would reckon that this would seriously impact the ratings of the show upon its US premiere. In fact, precisely the opposite happened: the show was so well written and produced that the word-of-mouth engendered by all this mass piracy created an enormous broadcast audience for the series, making it the most successful in SciFi Channel history.

In the age of BitTorrent, piracy is not necessarily a menace. The ability to “hyperdistribute” a programme – using BitTorrent to send a single copy of a programme to millions of people around the world efficiently and instantaneously – creates an environment where the more something is shared, the more valuable it becomes. This seems counterintuitive, but only in the context of systems of distribution which were part-and-parcel of the scarce exhibition outlets of theaters and broadcasters. Once everyone, everywhere had the capability to “tune into” a BitTorrent broadcast, the economics of distribution were turned on their heads. The distribution gatekeepers, stripped of their power, whinge about piracy. But, as was the case with recorded music, the audience has simply asserted its control over distribution. This is not about piracy. This is about the audience getting whatever it wants, by any means necessary. They have the tools, they have the intent, and they have the power of numbers. It is foolishness to insist that the future will be substantially different from the world we see today. We cannot change the behavior of the audience. Instead, we must all adapt to things as they are.

But things as they are have changed more than you might know. This is not the story of how piracy destroyed the film industry. This is the story of how the audience became not just the distributors but the producers of their own content, and, in so doing, brought down the high walls which separate professionals from amateurs.

II. The Barbarian Hordes Storm the Walls

Without any doubt the most outstanding success of the second phase of the Web (known colloquially as “Web 2.0”) is the video-sharing site YouTube. Founded in early 2005, as of yesterday YouTube was the third most visited site on the entire Web, trailing only Yahoo! and YouTube’s parent, Google. There are a lot of videos on YouTube. I’m not sure if anyone knows quite how many, but they easily number in the tens of millions, quite likely approaching a hundred million. Another hundred thousand videos are uploaded each day; YouTube grows by three million videos a month. That’s a lot of video, difficult even to contemplate. But an understanding of YouTube is essential for anyone in the film and television industries in the 21st century, because, in the most pure, absolute sense, YouTube is your competitor.

Let me unroll that statement a bit, because I don’t wish it to be taken as simply as it sounds. It’s not that YouTube is competing with you for dollars – it isn’t, at least not yet – but rather, it is competing for attention. Attention is the limiting factor for the audience; we are cashed up but time-poor. Yet, even as we’ve become so time-poor, the number of options for how we can spend that time entertaining ourselves has grown so grotesquely large as to be almost unfathomable. This is the real lesson of YouTube, the one I want you to consider in your deliberations today. In just the past three years we have gone from an essential scarcity of filmic media – presented through limited and highly regulated distribution channels – to a hyperabundance of viewing options.

This hyperabundance of choices, it was supposed until recently, would lead to a sort of “decision paralysis,” whereby the viewer would be so overwhelmed by the number of choices on offer that they would simply run back, terrified, to the highly regularized offerings of the old-school distribution channels. This has not happened; in fact, the opposite has occurred: the audience is fragmenting, breaking up into ever-smaller “microaudiences”. It is these microaudiences that YouTube speaks directly to. The language of microaudiences is YouTube’s native tongue.

In order to illustrate the transformation that has completely overtaken us, let’s consider a hypothetical fifteen year-old boy, home after a day at school. He is multi-tasking: texting his friends, posting messages on Bebo, chatting away on IM, surfing the web, doing a bit of homework, and probably taking in some entertainment. That might be coming from a television, somewhere in the background, or it might be coming from the Web browser right in front of him. (Actually, it’s probably both simultaneously.) This teenager has a limited suite of selections available on the telly – even with satellite or cable, there won’t be more than a few hundred choices on offer, and he’s probably settled for something that, while not incredibly satisfying, is good enough to play in the background.

Meanwhile, on his laptop, he’s viewing a whole series of YouTube videos that he’s received from his friends; they’ve found these videos in their own wanderings, and immediately forwarded them along, knowing that he’ll enjoy them. He views them, and laughs, he forwards them along to other friends, who will laugh, and forward them along to other friends, and so on. Sharing is an essential quality of all of the media this fifteen year-old has ever known. In his eyes, if it can’t be shared, a piece of media loses most of its value. If it can’t be forwarded along, it’s broken.

For this fifteen-year-old, the concept of a broadcast network no longer exists. Television programmes might be watched as they’re broadcast over the airwaves, but more likely they’re spooled off of a digital video recorder, or downloaded from the torrent and watched where and when he chooses. The broadcast network has been replaced by the social network of his friends, all of whom are constantly sharing the newest, coolest things with one another. The current hot item might be something that was created at great expense for a mass audience, but the relationship between a hot piece of media and its meaningfulness for a microaudience is purely coincidental. All the marketing dollars in the world can foster some brand awareness, but no amount of money will inspire that fifteen-year-old to forward something along – because his social standing hangs in the balance. If he passes along something lame, he’ll lose social standing with his peers. This factors into every decision he makes, from the brand of runners he wears, to the television series he chooses to watch. Because of the hyperabundance of media – something he takes as a given, not as an incredibly recent development – all of his media decisions are weighed against the values and tastes of his social network, rather than against a scarcity of choices.

This means that the true value of media in the 21st century is entirely personal, and based upon the salience, that is, the importance, of that media to the individual and that individual’s social network. The mass market, with its enforced scarcity, simply does not enter into his calculations. Yes, he might go to the theatre to see Transformers with his mates; but he’s just as likely to download a copy recorded in the movie theatre with an illegally smuggled-in camera that was uploaded to The Pirate Bay a few hours after its release.

That’s today. Now let’s project ourselves five years into the future. YouTube is still around, but now it has more than two hundred million videos (probably many more), all available, all the time, from short-form to full-length features, many of which are now available in high-definition. There’s so much “there” there that it is inconceivable that conventional media distribution mechanisms of exhibition and broadcast could compete. For this twenty-year-old, every decision to spend some of his increasingly valuable attention watching anything is measured against salience: “How important is this for me, right now?” When he weighs the latest episode of a TV series against some newly-made video that is meant only to appeal to a few thousand people – such as himself – that video will win, every time. It more completely satisfies him. As the number of videos on offer through YouTube and its competitors continues to grow, the number of salient choices grows ever larger. His social network, communicating now through Facebook and MySpace and next-generation mobile handsets and iPods and goodness-knows-what-else, is constantly delivering an ever-growing and increasingly relevant suite of media options. He, as a vital node within his social network, is doing his best to give as good as he gets. His reputation depends on being “on the tip.”

When the barriers to media distribution collapsed in the post-Napster era, the exhibitors and broadcasters lost control of distribution. What no one had expected was that the professional producers would lose control of production. The difference between an amateur and a professional – in the media industries – has always centered on the point that the professional sells their work into distribution, while the amateur uses wits and will to self-distribute. Now that self-distribution is more effective than professional distribution, how do we distinguish between the professional and the amateur? This twenty-year-old doesn’t know, and doesn’t care.

There is no conceivable way that the current systems of film and television production and distribution can survive in this environment. This is an uncomfortable truth, but it is the only truth on offer this morning. I’ve come to this conclusion slowly, because it seems to spell the death of a hundred year-old industry with many, many creative professionals. In this environment, television is already rediscovering its roots as a live medium, increasingly focusing on news, sport and “event” based programming, such as Pop Idol, where being there live is the essence of the experience. Broadcasting is uniquely designed to support the efficient distribution of live programming. Hollywood will continue to churn out blockbuster after blockbuster, seeking a warmed-over middle ground of thrills and chills which ensures that global receipts will cover the ever-increasing production costs. In this form, both industries will continue for some years to come, and will probably continue to generate nice profits. But the audience’s attentions have turned elsewhere. They’re not returning.

This future almost completely excludes “independent” production, a vague term which basically means any production which takes place outside of the media megacorporations (News Corp, Disney, Sony, Universal and TimeWarner), which increasingly dominate the mass media landscape. Outside of their corporate embrace, finding an audience sufficient to cover production and marketing costs has become increasingly difficult. Film and television have long been losing economic propositions (except for the luckiest), but they’re now becoming financially suicidal. National and regional funding bodies are growing increasingly intolerant of funding productions which cannot find an audience; soon enough that pipeline will be cut off, despite the damage to national cultures. Australia funds the Film Finance Corporation and the Australian Film Council to the tune of a hundred million dollars a year, to ensure that Australian stories are told by Australian voices; but Australians don’t go to see them in the theatres, and don’t buy them on DVD.

The center cannot hold. Instead, YouTube, which founder Steve Chen insists has “no gold standard” of production values, is rapidly becoming the vehicle for independent productions; productions which cost not millions of euros, but hundreds, and which make up for their low production values in salience and in overwhelming numbers. This tsunami of content cannot be stopped or even slowed down; it has nothing to do with piracy (only nine percent of the videos viewed on YouTube are violations of copyright) but reflects the natural accommodation of the audience to an era of media hyperabundance.

What then, is to be done?

III. And The Penny Drops

It isn’t all bad news. But, like a good doctor, I want to give you the bad news right up front: There is no single, long-term solution for film or television production. No panacea. It’s not even entirely clear that the massive Hollywood studios will be able to do business as usual for any length of time into the future. Just a decade ago the entire music recording industry seemed impregnable. Now it lies in ruins. To assume that history won’t repeat itself is more than willful ignorance of the facts; it’s bad business.

This means that the one-size-fits-all production-to-distribution model, which all of you have been taught as the orthodoxy of the media industries, is worse than useless; it’s actually blocking your progress because it is effectively keeping you from thinking outside the square. This is a wholly new world, one which is littered with golden opportunities for those able to avail themselves of them. We need to get you from where you are – bound to an obsolete production model – to where you need to be. Let me illustrate this transition with two examples.

In early 2005, producer Rhonda Byrne got a production agreement with Channel NINE, then the number one Australian television network, to make a feature-length television programme about the “law of attraction”, an idea she’d learned of while reading a book published in 1910, The Science of Getting Rich. The interviews and other footage were shot in July and August, and after a few months in the editing suite, she showed the finished production to executives at Channel NINE, who declined to broadcast it, believing it lacked mass appeal. Since Byrne wasn’t going to be getting broadcast fees from Channel NINE to cover her production costs, she negotiated a new deal with NINE, allowing her to sell DVDs of the completed film.

At this point Byrne began spreading news of the film virally, through the communities she thought would be most interested in viewing it; specifically, spiritual and “New Age” communities. People excited by Byrne’s teaser marketing could pay $20 for a DVD copy of the film (with extended features), or pay $5 to watch a streaming version directly on their computer. As the film made its way to its intended audience, word-of-mouth caused business to mushroom overnight. The Secret became a blockbuster, selling millions of copies on DVD. A companion book, also titled The Secret, has sold over two million copies. And that arbiter of American popular taste, Oprah, has featured the film and book on her talk show, praising both to the skies. The film has earned back many, many times its production costs, making Byrne a wealthy woman. She’s already deep into the production of a sequel to The Secret – a film which already has an audience identified and targeted.

Chagrined, the television executives of Channel NINE finally did broadcast The Secret in February 2007. It didn’t do that well. This sums up the paradox of distribution in the age of the microaudience. Clearly The Secret had a massive world-wide audience, but television wasn’t the most effective way to reach them, because this audience was actually a collection of microaudiences, rather than a single, aggregated audience. If The Secret had opened theatrically, it’s unlikely it would have done terribly well; it’s the kind of film that people want to watch more than once, being in equal parts a self-help handbook and a series of inspirational stories. It is well-suited for a direct-to-DVD release – a distribution vehicle that no longer has the stigma of “failure” associated with it. It is also well-suited to cross-media projects, such as books, conferences, streamed delivery, podcasts, and so forth. Having found her audience, Byrne has transformed The Secret into an exceptional money-making franchise, as lucrative, in its own way, and at its own scale, as any Hollywood franchise.

The second example is utterly different from The Secret, yet the fundamentals are strikingly similar. Just last month a production group calling themselves “The League of Peers” released a film titled Steal This Film, Part 2. The first part of this film, released in late 2006, dealt with the rise of file-sharing and, specifically, with the legal troubles of the world’s largest BitTorrent site, Sweden’s The Pirate Bay. That film, although earnest and coherent, felt as though it was produced by individuals still learning the craft of filmmaking. This latest film looks as professional as any documentary created for BBC’s Horizon or PBS’s Frontline or ABC’s 4Corners. It is slick, well-lit, well-edited, and has a very compelling story to tell about the history of copying – beginning with the invention of the printing press, five hundred years ago. Steal This Film is a political production, a bit of propaganda with a clear bias. This, in itself, is not uncommon in a documentary. The funding and distribution model for this film is what makes it unusual.

Individuals who saw Steal This Film, Part One – which was made freely available for download via BitTorrent – were invited to contribute to the making of the sequel. Nearly five million people downloaded Steal This Film, Part One, so there was a substantial base of contributors to draw from. (I myself donated five dollars after viewing the film. If every viewer had done likewise, that would cover the budget of a major Hollywood production!) The League of Peers also approached arts funding bodies, such as the British Documentary Council, with their completed film in hand, statistics showing that their work had reached a large audience, and a roadmap for the second film – and this won them additional funding. Now, having released Steal This Film, Part Two, viewers are again invited to contribute (if they like the film), with the promise of a “secret gift” for contributions of $15 or more. While the tip jar – literally, busking – may seem a very weird way to fund a film production, it’s likely that Steal This Film, Part Two will find an even wider audience than Part One, and that the coffers of the League of Peers will provide them with enough funds to embark on their next film, The Oil of the 21st Century, which will focus on the evolution of intellectual property into a traded commodity.

I have asked Screen Training Ireland to include a DVD of Steal This Film, Part Two with the materials you received this morning. You’ve been given the DVD version of the film, but I encourage you to download the other versions of the film: the XVID version, for playback on a PC; the iPod version, for portable devices; and the high-definition version, for your visual enjoyment. It’s proof positive that a viable economic model exists for film, even when it is given away. It will not work for all productions, but there is a global community of individuals who are intensely interested in factual works about copyright and intellectual property in the 21st century, who find these works salient, and who are underserved by the media megacorporations, who would not consider it in their own economic best interest to produce or distribute such works. The League of Peers, as part of the community whom this film is intended for, knew how to get the word out about the film (particularly through Boing Boing, the most popular blog in the world, with two million readers a week), and, within a few weeks, nearly everyone who should have heard of the film had heard about it – through their social networks.

Both The Secret and Steal This Film, Part Two are factual works, and it’s clear that this emerging distribution model – which relies on targeting communities of interest – works best with factual productions. One of the reasons that there has been such an upsurge in the production of factual works over the past few years is that these works have been able to build their own funding models upon a deep knowledge of the communities they are talking to – made by microaudiences, for microaudiences. But microaudiences, scaled to global proportions, can easily number in the millions. Microaudiences are perfectly willing to pay for something or contribute to something they consider of particular value and salience; it is a visible thank you, a form of social reinforcement which is very natural within social networks.

What about drama, comedy and animation? Short-form comedy and animation probably have the easiest go of it, because they can be delivered online with an advertising payload of some sort. Happy Tree Friends is a great example of how this works – but it took producers Mondo Media nearly a decade to stumble into a successful economic model. Feature-length comedy and feature-length drama are more difficult nuts to crack, but they are not impossible. Again, the key is to find the communities which will be most interested in the production; this is not always entirely obvious, but the filmmaker should have some idea of the target audience for their film. While in preproduction, these communities need to be wooed and seduced into believing that this film is meant just for them, that it is salient. Productions can be released through complementary distribution channels: a limited, occasional run in rented exhibition spaces (which can be “events”, created to promote and showcase the film); direct DVD sales (which are highly lucrative if the producer does this directly); online distribution vehicles such as iTunes Movie Store; and through “community” viewing, where a DVD is given to a few key members of the community in the hopes that word-of-mouth will spread in that community, generating further DVD sales.

None of this guarantees success, but it is the way things work for independent productions in the 21st century. All of this is new territory. It isn’t a role that belongs neatly to the producer of the film, nor, in the absence of studio muscle, is it something that a film distributor would be competent at. This may not be the producer’s job. But it is someone’s job. Someone must do it. Starting at the earliest stages of pre-production, someone has to sit down with the creatives and the producer and ask the hard questions: “Who is this film intended for?” “What audiences will want to see this film – or see it more than once?” “How do we reach these audiences?” From these first questions, it should be possible to construct a marketing campaign which leverages microaudiences and social networks into ticket receipts and DVD sales and online purchases.

So, as you sit down to do your planning today, and discuss how to move Irish screen industries into the 21st century, ask yourselves who will be fulfilling this role. The producer is already overloaded, time-poor, and may not be particularly good at marketing. The director has a vision, but might be practically autistic when it comes to working with communities. This is a new role, one that is utterly vital to the success of the production, but one which is not yet budgeted for, and one which we do not yet train people to fill. Individuals have succeeded in this new model through their own tireless efforts, but each of these efforts has been scattershot; there is a way to systematize this. While every production and every marketing plan will be unique – drawn from the fundamentals of the story being told – there are commonalities across productions which people will be able to absorb and apply, production after production.

One of my favorite quotes from science fiction writer William Gibson goes, “The future is already here, it’s just not evenly distributed.” This is so obviously true for film and television production that I need only close by noting that there are a lot of success stories out there, individuals who have taken the new laws of hyperdistribution and sharing and turned them to their own advantage. It is a challenge, and there will be failures; but we learn more from our failures than from our successes. Media production has always been a gamble; but the audiences of the 21st century make success easier to achieve than ever before.

Everything is changing. Everything has changed. Everything always changes, but at times that change is particularly pronounced and thus specifically noteworthy. For media – which is the topic du jour – this is so plainly obvious that any attempt to refer to the “before” time has an almost archeological feel, as though we must shovel carefully through layers of dirt to uncover how media worked just a few years ago. These transformations have been seismic, and singular. There is no going back.

But what, exactly, has happened?

The revolution we glimpsed in 1994, when the rough beast of the Web, its hour come at last, made the earth tremble, seducing and subsuming us into its ever-broadening expanse, fell back, for a brief while, into patterns more established and more familiar. We glimpsed a utopia; then a fog rose, and the vision faded. We endured half a decade of stupidity, cupidity and the slow strangulation of dreams. We longed for communion; we got DVD players delivered in under an hour. Fortunately, the network accelerates everything it embraces, and what might have taken a generation in earlier times took just five years to run its course, from Netscape to Razorfish, and the lunar crater of NASDAQ seemed to spell the final doom of all our hopes. The Web, people loudly proclaimed, was so over.

Silly humans.

During those first five years, we learned just how different network economics could be; not just in theory, but in practice. We learned that the essence of the digital artifact is that it exists to be copied. Like a gene in the Cambrian seas of the early Web, information was copied and recopied endlessly. John Perry Barlow’s Declaration of the Independence of Cyberspace was one of the first such objects, spread via email and website until it became nearly impossible to ignore. More recently, Cory Doctorow’s lecture on DRM for Microsoft Research – in text, Pig Latin and video versions – has been passed around like a cheap two-dollar…well, you know. Each of these digital artifacts eventually reached nearly every single individual who might find them interesting, because, as they were copied and read, forwarded and linked to, each of the human nodes in this network made a decision that this information was important enough to share. In the networked era, salience is the only significant quality of information. For that reason, it was only a matter of time until the technologies of the network would reinforce this natural tendency, and accelerate it.

So even as the Web died, it was reborn. The top-down design of a hundred centralized sources of information evolved into seven hundred million peers. From each according to their ability, to each according to their need. Feeds replaced websites, and torrents replaced streams. The revolution we had fleetingly glimpsed had finally – blessedly – arrived.

But one man’s blessing is another’s curse.

The network revolution presented incredible opportunities to anyone working in the media industries. Suddenly, it became possible to reach massive audiences, unbounded by proximity. But instead of reinforcing the previous structures of media ownership and information distribution, the network has consistently undermined them. Mention Craigslist to a newspaperman, and watch as the color drains from their face. Casually drop BitTorrent into a conversation with a studio executive, and observe as they choke back their rage. The network carries within it the seeds of their destruction. And they’re absolutely, utterly, completely powerless to stop it.

This would be a sad story if professional media had not willingly cooperated in their own demise. The technologies of the digital era were simply too tempting to be ignored, too important to the bottom line. But the network has its own economics, and quickly overcomes or blithely ignores any attempt to subvert its innate qualities. Film studios make the majority of their revenues from DVD distribution of their productions, but that same DVD, because of its essentially digital nature, can be copied and recopied endlessly, at no cost. If it is salient, it will be copied widely. That’s not just a horror story: that’s the law.

And if you don’t want your film copied? Well then, you have to resort to antique production techniques. Make sure it’s shot to film stock, physically edited (good luck finding an editor who prefers a Steenbeck to an Avid) and graded – with no digital intermediates – then projected in an exhibition space where every audience member has been subjected to a humiliating physical search of their bodies. If you did that, you’d kill piracy. Probably. Of course, you’d also kill your exhibition revenues. But the studios (and the record companies, and the broadcasters, and the book publishers) want to have it both ways, want the benefits of digital distribution, all the while denying the essential quality of the medium – it exists to be copied.

That, at least, is the message from a hundred insta-pundits, on the business pages of newspapers, in blogs, and in countless analysts’ reports. The entire world seemed shocked by the entirely expected purchase of video-sharing site YouTube by Google for 1.65 billion dollars. It’s a bad deal, some say, doomed to fail. It isn’t worth it. It’ll bring Google crashing back to earth with endless litigation from the copyright holders who have just been waiting for someone with deep enough pockets to sue.

Feh.

What most everyone overlooked – as it happened the very same day as the Google purchase – were the licensing agreements YouTube struck with Universal, Sony BMG, and CBS. Together with its earlier deal with Warner, these agreements mean YouTube now has a deal with every major music publisher in the world. YouTube must now figure out how to share the revenues generated by Google’s advertising technology with all of the copyright holders whose materials end up on YouTube.

Some pundits – most notably, Mark Cuban – have indicated that only a moron would buy YouTube, because it’s widely believed that YouTube has built its business entirely upon the violation of copyright. Certainly, YouTube established its reputation with a specific piece of video owned by someone else – a digital short from NBC’s Saturday Night Live, “Lazy Sunday.” That video – viewed millions of times before NBC rattled its legal saber and the content was removed – introduced most users to YouTube. In the year since “Lazy Sunday,” YouTube has become a clearing house for the funniest bits of video content produced by other companies, from segments of The Daily Show with Jon Stewart, to South Park, to Family Guy, to The Simpsons. Why has YouTube become the redistributor of these clips? Because none of the copyright holders made an effort to distribute these clips themselves. YouTube has been acting as an arbitrageur of media, equalizing an inequity in the marketplace – and getting very rich in the process. It may be copyright violation, but the power of the audience is far, far greater than the power of the copyright holder. YouTube could delete every clip uploaded in violation of copyright – to some degree it does – but if you have a few thousand people uploading the same clip, how do you stay ahead of that? Even YouTube itself is subject to the power of its audience. And if it becomes draconian in its enforcement of copyright – which is a possible outcome of the Google purchase – it will simply force the audience elsewhere, to other sites. Better by far to strike a deal with the copyright holders, so that they receive recompense for their efforts. NBC has started to distribute Saturday Night Live’s digital shorts on its own website; ABC and FOX offer full streaming versions of their programs; everyone is queuing up to sell their TV shows on iTunes. Is this a willing transition? Probably not.
Minutes spent in front of the computer are minutes lost to television ratings. But if the copyright holders don’t distribute their content as widely as possible, someone else will. YouTube has proven this point beyond all argument.

Cuban believes that YouTube will die without a steady stream of content uploaded in violation of copyright. But if recent history is any guide, the studios are now falling over each other in their eagerness to do a deal, and share some of that money. The simultaneity of the Google purchase and the YouTube deals with the recording industry is not accidental; it’s indicative of a great sea-change. Big media has swallowed the bitter pill, and realized that they’ve lost control of distribution. Now they’ll try to make money off of it.

But Cuban makes another, and more damning point: he says that no one wants to watch the little hand-made videos which make up the vast majority of uploads to YouTube. This is the Big Lie of Big Media: if it isn’t professionally produced, the audience won’t watch it. No statement could be more mendacious, no assertion could be further from the truth. As a film producer and broadcaster, Cuban certainly hopes that audiences will always prefer professional content to amateur productions, but there’s no evidence to support this position – and rather a lot which counters it. The success of Red versus Blue, Homestar Runner, Happy Tree Friends, and The Show with Zefrank – each of which commands an audience in the hundreds of thousands to millions – proves that audiences will find the content which interests them, and share that content with their friends, using the hyperdistribution techniques enabled by the network that ensure these audiences can get what they want – from anyone, anywhere, at any time – with a minimum of difficulty. These productions lie completely outside the bounds of “professional” media; they are “amateur,” not in the sense of raw, or poorly produced, but because they have turned their back on the antique systems of distribution which previously separated the big boys from the wannabes.

A perfect example of this transition can be seen in a video on YouTube by the Australian band Sick Puppies. Shot by the band’s drummer, it features a well-known character, Juan Mann, who inhabits Sydney’s Pitt Street mall, bearing a sign reading “Free Hugs.” The band befriended this unlikely character, and shot hours of video of him at work, giving free hugs to passers-by. While in Los Angeles, pursuing a recording deal, the drummer cut his footage into a three minute film, then added the band’s song “All The Same” as a temp track. Thinking to share his work around, he uploaded the video to YouTube on the 26th of September, and told his friends. Who told their friends. Who told their friends. YouTube is particularly good at “viral” distribution of media – it’s the one thing they’ve gotten absolutely right – so, within three weeks’ time, that little hand-made video had been viewed well over three million times. Sick Puppies are now on the map; their music video has given them a worldwide fan base. A debut album on a major label – expected early next year – will complete their transformation from amateurs to professionals.

Salience determines whether an audience will gather around and share media, not production values. In the time before hyperdistribution, audiences had a severely limited pool of choices, all of them professionally produced; now the gates have come down, and audiences are free to make their own choices. When placed head-to-head, can a professional production of modest salience stand up against an amateur production of great salience? Absolutely not. The audience will always select the production which speaks to them most directly. Media is a form of language, and we always favor our mother tongue.

The future for YouTube lies with the amateurs, not with the professionals. Cuban misses the point entirely, assuming that the audience will behave as it always has. But this is not that audience; this is an audience which has essentially infinite choice, and has come to understand that the sharing of media is an act of production in itself – that we are all our own broadcasters.

And you’d have to be a moron to miss that.

III. The Epidemiology of Cool

We know why YouTube has had such an incredible string of successes; the site makes it easy to share a video with your friends, and for those friends to share that video with their friends, and so on. The marketers call this “viral distribution,” but we know it by another and rather more prosaic name – friendship. As an inherently social species, we are constantly reinforcing our social connections through communication. It could be an IM, a text message, an email, a phone call, or a video – it’s all the same to the enormous section of our forebrains that we use to process the intricacies of our social relationships. We share these things to tell our friends that we’re thinking of them – and, rather more competitively, to show our friends that we’re on the tip. Each of us is a coolfinder (some of us do it professionally), and we each keep a little internal thermometer which measures our own cool against that of our peers. That innate drive to be recognized for our tastes has been accelerated to the speed of light by the network. Now, even as we coolfind, we are constantly inundated and challenged by the coolfinding of our peers. It’s produced a very healthy, if ultra-Darwinian, ecology of cool. Our peers are the selection pressure as we struggle to pass our memes on to the next generation.

Thus far, we’ve done this on our own, with very little assistance from the wealth of computing machinery which crowds our lives. We create ad-hoc solutions for media distribution: mailing lists, websites, podcasts – each of these an attempt to spread our ideas more successfully. But they’re held together tenuously, only by our constant activity, busy bees maintaining the cells of our hive. And it’s a lot of work. We’re forced to do it – forced to run the race, lest we be overrun by the memes of others – but we’ve reached the one practical limit: time. No one has enough time in the day to keep up with all of the information we should be absorbing. We can filter ruthlessly – and perhaps miss out on something we’ll regret later – or declare email bankruptcy, like Lawrence Lessig, or just withdraw to an ever-more-specialized domain of coolfinding. And we are doing each of these things, every day, under the pressure of all this information.

There’s got to be a better way.

In the early years of the 19th century, farmers in western Pennsylvania kept their wagon wheels greased with puddles of bubbling muck that studded the countryside. Although useful, the puddles were a toxic nuisance to livestock. If the farmers could have rid their lands of these puddles, they likely would have. Half a century later, western Pennsylvania boomed, built on its substantial petroleum reserves. The bubbling muck had immense value – but it had to wait for the demands of the kerosene lamp and the internal combustion engine.

In the early years of the 21st century, we each generate an enormous amount of interaction data – every click on a computer, every email sent or received, every website visited, every text message, every phone call, every swipe of a credit card or loyalty card or debit card, every face-to-face interaction. None of it is recorded – or at least, it’s not recorded by any of us, for any of us (though the NSA has expressed some interest in it) – because it hasn’t been seen as valuable. It’s bubbling up through all of us, and around all of us, as we create data shadows that have grown longer and longer, resembling Jacob Marley’s lockboxes and chains, rattling throughout cyberspace.

All of that information is worth more than oil, more than gold. And all of it is sadly – almost obscenely – dropped on the floor as soon as it is created. If we’re lucky, it is deleted. If we’re unlucky, someone uses it to create a digital simulacrum, and we find our identities hijacked. But in no case is this information ever exposed to us, for our own use. We’re told it has no value to us, and – so far – we’ve been stupid enough to believe it.

But now, just now, economic forces are linking the persistence of our data shadows to our ability to filter the avalanche of information which characterizes life in the 21st century. Turns out this data guck is good for more than greasing the wheels of commerce. These data shadows glow with the evanescent echo of our real social networks – not the baby steps of MySpace and Friendster – but the real ground-truth interactions which reveal ourselves and our relations one to another. It is human metadata. And it is the most valuable thing we’ve got, now that there’s demand for it.

YouTube records every email address you use to forward a video to a friend. It uses these, at present, to do auto-completion of addresses as you type them in. It also presents a friendly list of these addresses, to make forwarding all that much easier. What they’re not doing – at least, not visibly, and very likely not at all – is keeping any record of what you sent to whom, nor when, nor why. Yet every video forwarded through YouTube is forwarded for a reason – salience. YouTube could record those moments of salience, could use them to build a model, a data shadow, which could reinforce your own ability to make decisions about who should see what. It might even, to some degree, automate that process. When you add to this the newly emerging capabilities of analytic folksonomy – comparing a user’s tag clouds against the tag clouds of others within their social network – certain other relationships and affinities emerge. Again, these relationships can be used to improve the capability of the system to help find, filter and forward relevant videos. This is how a social network really works. It’s not about having 500 first-degree friends in MySpace. It’s about listening to your naturally occurring social network to direct, improve, and accelerate information flow. When the brand-new power of the individual as broadcaster is reified by the capabilities of computing machinery to listen to and model our interactions, the result is hypercasting. This is what media distribution in the 21st century is inevitably hurtling toward, driven by the natural selection of steadily increasing informational pressure.
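The tag-cloud comparison described above is simple enough to sketch in a few lines. What follows is a minimal illustration, not YouTube’s actual algorithm: the tag sets, video names, and the use of Jaccard similarity as the affinity measure are all my own assumptions.

```python
# A toy sketch of "analytic folksonomy": compare a user's tag cloud
# against friends' tag clouds, then rank the videos those friends have
# forwarded by how closely each friend's tastes overlap with ours.
# All names and data below are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Overlap between two tag clouds (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_videos(my_tags, friends):
    """Score each friend's forwarded videos by tag-cloud affinity."""
    scores = {}
    for friend_tags, videos in friends:
        affinity = jaccard(my_tags, friend_tags)
        for video in videos:
            # A video forwarded by several friends keeps its best score.
            scores[video] = max(scores.get(video, 0.0), affinity)
    return sorted(scores, key=scores.get, reverse=True)

my_tags = {"skateboarding", "indie", "sydney"}
friends = [
    ({"skateboarding", "surfing", "sydney"}, ["clip_a", "clip_b"]),
    ({"cooking", "gardening"}, ["clip_c"]),
]
print(rank_videos(my_tags, friends))  # clip_a and clip_b rank ahead of clip_c
```

Jaccard similarity is only one of many possible affinity measures; the point is that even this crude overlap score is enough to let the machine begin filtering on our behalf.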

Hypercasting solves some lingering questions confronting us. The first and most important of these is: How will we figure out what to watch now that we’ve got a near-to-infinite set of choices? We’ll rely on the recommendation of our friends, as we always have, but now these recommendations will be backed up by a hypercasting system which will invisibly and pervasively keep track of our interests, the points of interest we hold in common with our friends, our communities, our families, and our co-workers. It will not be automatic – no one really wants to see some out-of-control hypercasting system deluge us with video spam – but it will be so tightly integrated into our interactive experiences that it will barely register on our perceptions. We’ll simply come to expect that our iPods, our Media Centers, our PSPs and our mobiles are loaded up and ready for us, with things we’re sure to find compelling. Addiction to television will soar to new highs, a new crop of amateurs – millions of them – will find successful and lucrative careers in media production, and advertisers, as always, will find a way to spread their messages. On the surface, things will look much as they do now, but everything will move at a more rapid clip. Videos will fly across the world in seconds, not days, and a global audience of a million will gather in moments. Almost accidentally, this will change news reporting forever, as citizen journalism becomes a real threat to established media companies, and their utter undoing. Shouldn’t the New York Times be subject to the same pressures as NEWS Corporation?

Is YouTube the harbinger of the transition to hypercasting? The lead is theirs to lose. GooTube delivers over half of all videos seen on the Internet. They have the cash and the brainpower to transform broadcasting into hypercasting. And they have to worry about the next set of 20-somethings, in a garage, working on the Next Big Thing. Those kids, nurtured by YouTube, know just what’s wrong with it, and how to make it better. YouTube faces its own selection pressures, which will only increase as it grows exponentially and cuts content deals and just tries to keep the whole centralized mess up and running.

Yet it doesn’t matter. We have seen birth and death, and thought they were different. But the death of the Web brought a new kind of life, a vitality and surefootedness suppressed during the years of MBAs and crazy business plans and IPOs. Perhaps history is repeating itself, as everyone goes wild with another case of gold fever, and we’ll lose the plot again. In that case, we should be glad of another death.

Hypercasting might need to wait a few years, for a platform very much like a fully mature Democracy DTV – or something we haven’t even dreamt up. It may be that YouTube will disappoint. But that doesn’t mean anything at all. YouTube isn’t driving the evolution toward hypercasting. The audience is. And the audience – in its teeming, active, probing billions – always gets whatever it wants. That’s the first rule of show business.

Security never becomes an issue until it is violated. Our boundaries begin open and undefended, sufficient for integrity, if not defense. But nature thrives on conflict, so every boundary becomes a battlefront in the war for continuing integrity – a war which we all eventually lose. People die. Cities fall. Civilizations collapse. Yet, in each of these failures lies the seed of renewal, and eventual victory. The pressure of natural selection forces an evolution of technique; overrun borders are reborn as more resilient walls, and the eternal battle moves up a notch in intensity.

Network culture is something less than 30 years old. With the birth of USENET in 1979, the individuals networked together by the then-fledgling Internet began to engage in a collective and as-yet-uninterrupted conversation about every conceivable topic, from the mundane (a bicycle needing repair) to the sublime (does God exist?). This conversation shattered into a million pieces after the emergence of the World Wide Web in the early 1990s; the singular threaded conversation of USENET became more conversations on more websites than anyone could hope to count.

The websites thus constellated each represent one or perhaps a few of the conversations previously embraced by USENET (which still survives, though as a shadow of its former self). Although the unity of conversation has been irretrievably lost, it’s been more than made up for by a laser-like focus: these websites are very specific, concentrating on one topic, and serving those interested in that topic very well. Furthermore, an ecology of conversations now exists; websites grow and fade based on how well they serve their base of users. If you upset your user base enough, you lay the seeds of your own destruction, for your users can and will compete with you – and perhaps put you out of business.

Furthermore, networked media do not function in a vacuum. Although in its earliest days mainstream print and electronic media regarded networked media as a useful adjunct to their franchises, most neglected to note how the inherent qualities of networked media – and in particular, hyperdistribution – have changed the basic economics of media. Nowhere is this clearer than in the United States, with the curious case of Craigslist.

II.

Founded in 1995 as a list to keep San Franciscans up-to-date with events and parties, Craigslist quickly grew into a one-size-fits-all website which exists to connect people. These connections are, for the most part, bounded by proximity; Craigslist keeps separate websites for all major American cities, as well as a growing number of “international” cities, such as London, Sydney, and Tokyo. A functional cross-pollination of a bulletin board and a website, with an interface that hasn’t changed significantly since 1996, Craigslist serves as the “market maker” between people who have things to offer, and people who want those things. The definition of “things” is very broad on Craigslist. It could be something absolutely material (a bicycle), or far more subtle (a boyfriend). With few exceptions it costs nothing to post to Craigslist; a marketplace with no barrier to entry has produced a powerfully self-reinforcing path dependence which has resulted in Craigslist becoming the 30th most-visited site on the Web. People love Craigslist because it’s helped them out with something they need, or need to be rid of.

Although Craigslist has clearly created markets where none existed previously, it has also effectively removed one source of revenue from print media, which have, for many years, garnered substantial revenues from classified advertising – the sort of “thing trading” that Craigslist excels at. Most major American newspapers have seen at least a 30% drop in classified advertising revenues as Craigslist has grown in significance, and there seems to be no end in sight. Or rather, the future seemed boundless until just a few weeks ago, when a highly publicized incident pointed up the inherent flaw of all open systems, including Craigslist – a fundamental lack of security, predicated on the assumption that all human beings are basically honest.

Craigslist is not the first nor the most significant case of this peculiar form of naïveté. SMTP, the protocol which moves electronic mail across the Internet, was also designed as an open system, predicated on the assumption that people would only send mail to people who wanted to receive it. We now know that is not true, and – unless we actually abandon SMTP (very unlikely) – we will live for quite some time with an arms race of spammers and spam filters. In a networked world, one bad apple does spoil the whole barrel.

While Craigslist has had consistent low-level problems with fraud, no one was quite prepared for “The Craigslist Experiment.” (WARNING: SEXUALLY EXPLICIT CONTENT) In September 2006, Seattle web developer Jason Fortuny posted a personal ad on Craigslist, masquerading as a woman in search of sexual gratification. As responses from interested men piled up in his email, he took these personal details (often including graphic photos) and made a website from the replies, publicly revealing the identities of the responders. While one can question the wisdom of the men who replied to an anonymous posting, one could also argue that they assumed a good-faith relationship with the poster. This assumption – again, drawn from the provably false assumption that all human beings are basically honest – points toward the missing element on Craigslist: trust.

III.

Trust is generated iteratively, emerging from the continuous interactions between communicating entities, whether human beings or computers, or some combination of the two. While trust can never be taken to be absolute, the history of interactions can be used to develop a trust model: if someone has been trustworthy so far, it is likely that they will continue to be trustworthy. Furthermore, trust is to some degree transitive across social networks: if my friend trusts you, it becomes that much easier for me to trust you.
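The two mechanisms in this paragraph – trust built iteratively from a history of interactions, and trust carried across a social network – can be reduced to a toy model. The functions, numbers, and decay factor below are invented for illustration; no deployed system is being described.

```python
def direct_trust(history):
    """Iterative trust: the fraction of past interactions rated honest.

    history is a list of 1 (good outcome) / 0 (bad outcome) ratings;
    with no history at all, we have no grounds for trust."""
    return sum(history) / len(history) if history else 0.0

def relayed_trust(trust_in_friend, friends_trust_in_them, decay=0.5):
    """Transitive trust: trust carried across the network weakens at
    each hop, so a friend-of-a-friend earns less than a friend."""
    return trust_in_friend * friends_trust_in_them * decay

# Someone with a long, mostly clean history earns a high direct score...
print(direct_trust([1, 1, 1, 1, 0]))  # 0.8
# ...while a stranger vouched for by a trusted friend inherits only a
# discounted fraction of that trust.
print(relayed_trust(0.9, 0.8))
```

The decay factor is the crucial design choice: it encodes the intuition that an endorsement loses force with every degree of separation, which is exactly the weakness the Friendster model suffers from when taken at face value.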

eBay – which dealt with trust issues from its earliest days – implements the first of these models. Each buyer and each seller rates the quality of the trust relationship after the transaction, and both the buyer’s and the seller’s trust level is visible to trading parties before they enter into a transaction. Friendster – which began life as a dating service – implements the second of these models: if you are a friend of my friend, it’s probably safe for me to go out on a date with you. (You’re less likely to be a serial killer if you’re in my social network.) Neither of these models, on its own, is entirely foolproof. eBay sellers have been known to spoof the trust model by building layers of circular references, where each partner in a dishonest enterprise fully endorses the other members. The tenuous nature of connections on a digital social network means that a friend-of-a-friend on Friendster may not actually be a friend at all, or even an acquaintance.

Since neither model is entirely perfect, why not combine the two? The eBay trust model serves as a generic thermometer of trust – although someone may be putting a match under the thermometer’s bulb. In that case, you’d need to ask, “Whom do I know who knows this seller?” If there is no connection whatsoever to the other party in the transaction, that must be noted, and presented to both parties as a serious roadblock to establishing trust. This combination of techniques – eBay plus Friendster – adds to the security of both parties, but these relationships cannot be wholly anonymous – and Craigslist is famed for its anonymity.
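The combination proposed here – an eBay-style feedback thermometer cross-checked against a Friendster-style social graph – might look something like the following sketch. The graph, the feedback threshold, and the hop limit are all assumptions of mine, chosen purely for illustration.

```python
# Combine two trust signals: an aggregate feedback score (the eBay-style
# "thermometer") and reachability in a social graph (the Friendster-style
# friend-of-a-friend check). A seller with glowing feedback but no path
# to you in the graph gets flagged for extra scrutiny.

from collections import deque

def connected(graph, me, seller, max_hops=3):
    """Breadth-first search: is the seller within a few hops of me?"""
    seen, queue = {me}, deque([(me, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == seller:
            return True
        if hops < max_hops:
            for friend in graph.get(node, []):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, hops + 1))
    return False

def assess(feedback_score, graph, me, seller):
    """Cross-check the thermometer against the social-graph signal."""
    if feedback_score < 0.8:
        return "low feedback: proceed with caution"
    if not connected(graph, me, seller):
        return "high feedback, but no social connection: verify further"
    return "high feedback and socially connected: trust established"

graph = {"me": ["alice"], "alice": ["bob"], "bob": ["seller_x"]}
print(assess(0.95, graph, "me", "seller_x"))
```

Note how the graph check catches exactly the attack the essay describes: a ring of colluding sellers can inflate each other’s feedback scores, but they cannot so easily forge a path into *your* social network.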

You need present no credentials to post to Craigslist, other than a valid email address. Since these are notoriously easy to acquire – and easy to spoof, or make opaque and anonymous – an email address provides no trust information whatsoever. Yet Craigslist does have a login capability, so it can potentially record each of the interactions users have through the system. It could collect data about the quality of the trust interactions users experience on Craigslist, and use this information to annotate all of the postings on the system. In short, every posting on Craigslist could be accompanied by metadata which allows users to have some basic sense of the trustworthiness of the other participant in a given transaction. With each successive transaction, Craigslist could begin to model an emergent digital social network, developed from observation, and supplemented by a user’s list of first-degree contacts. With over 10 million visitors a month – many of them repeat users – it should be relatively easy to develop a strong trust model, combining elements of both the eBay and Friendster systems, to produce an effective and anonymous solution. (Anonymous, that is, from the user’s perspective: the information can be maintained opaquely within Craigslist itself – though that raises the further question of whether Craigslist can be trusted, something a user can only learn through long-term interaction with the site.)

It is possible that such a proposal would be anathema to Craigslist, whose creators value the noble but antique qualities which make it so susceptible to violations of trust. Craigslist does carry the warning Caveat Emptor. Yet, in the unceasing war to garner attention, how long will it be before someone else – perhaps eBay, or Friendster, or MySpace, or Google – puts the pieces together, and produces a free marketplace based on trust? Craigslist must adapt, or it will be entirely overrun by barbarian hordes, its walls breached, its gates burned. Out of that collapse will come a more trustworthy system – but perhaps Craig Newmark and his crew are smart enough to know that more is required. Perhaps the lessons of the past will motivate them to a more secure future.