Microblogging with Shitter – I Mean Twitter

Excuse me for not joining the bandwagon, but the recent hair-on-fire frenzy about Twitter needs a bucket of cold water. Let’s leave aside, for the moment, Ashton Kutcher’s inane race with CNN to be the first Twitter account with one million followers (trailed closely by Britney Spears). Let’s ignore for now the empty words celebrities and millions of others spew forth each day using Twitter (the crapper analogy, as a result, is not empty). Let’s forget that Twitter is virtually synonymous with meaninglessness and suspend our curiosity about whether such fast-food language heralds the end of days. All of that is true, but irrelevant because it is simply obvious, and therefore already coin of the realm.

No one can take seriously the idea that Twitter will become the vital communications connective tissue for the 21st century. Twitter journalism? Let’s be real. Can Twitter serve as an early source of information about important events such as the downing of the US Airways jet in the Hudson River or the terror attack in Mumbai? Sure. But so can cellphone cameras and text messages. Does Twitter represent, as its founders have argued, some transpersonal communications organism? Not at all, because the microblog, by definition, is incapable of communicating more than fragments of narrative, and random aggregation, viral repetition, or call and response can never supply coherence or meaning to those fragments.

Let’s be clear about two things. First, like Facebook, Twitter cannot succeed because it has no sustainable business model. Second, Twitter is not socially disruptive; it is socially destructive because it justifies the fragmentation of communication with the illusion that lots of small posts together communicate a coherent and immediate knowledge map on any particular event or subject.

Twitter recently received a new $35 million round of venture funding (from, among others, Jeff Bezos), bringing its total funding to $57 million, despite the fact that it has existed for three years and never earned a penny of revenue. Facebook and Google are both reported to be sniffing out the possibility of acquiring Twitter, which has experienced mind-blowingly rapid traffic growth recently, the sure sign of a bubble in the making. While no one is suggesting Twitter can support the $15 billion valuation implied by Microsoft’s investment in Facebook several years ago, the mania surrounding Twitter surely indicates the Web 2.0 frenzy has peaked.

Let’s enumerate some reasons. First, Twitter has, until recently, employed only a handful of people. The company’s technology is relatively simple, with apparently little or no intellectual property to protect (check the USPTO search engine, O wise and prudent VCs, and see what patents Twitter has obtained). For this reason, the barriers to entry for this application (and let’s be clear, Twitter is an application, not a platform) appear to be nonexistent. As a result, Twitter seems to be limited to advertising as a source of revenue, and meaningful advertising revenue will depend on continued, long-term, sustainable growth of its user base. Such growth seems unlikely. In fact, the odds are that traffic and use of Twitter will eventually level off or shrink, precisely because it possesses the classic attributes of a fad.

Because its usefulness is so limited by its messaging format (the idea of microblogs), because the innate incoherence of Twitter is ultimately discombobulating and off-putting to the recipients of the “tweets”, and because the self-referential satisfaction that comes from broadcasting to the world the desiderata of one’s day is ultimately unsustainable (most people will quickly, if they have a brain, bore themselves to death), the popularity of Twitter is more akin to the superficial and ephemeral popularity of a Pet Rock or a Beanie Baby than to the enduring penetration of the culture achieved by truly disruptive technologies such as Windows or the iPhone.

Ultimately, Twitter subsists on the fallacious premise of much social media: that people can and will invest enormous amounts of time and fragment their lives in the pursuit of voice, connection, and community that has no depth, no resonance, and no sustainability. Social media is very good at certain things. User comments on products purchased at Amazon are phenomenally useful. Blogging sites such as The Huffington Post that assume, and to some degree require, a high bar for the quality of their citizen voices perform an incredibly valuable service on behalf of the new journalism, particularly as the old journalism issues forth its death rattle. But if most of us received Twitter posts in our email inboxes, even on an opt-in basis, we would quickly declare them indistinguishable from spam.

Recently, epidemiologists have learned they can rapidly identify and track flu epidemics using Google. And we may end up learning that Twitter, as a communications organism, can offer similar benefits for identifying the “viral” transmission and movement of illness as well as political events, business cycles, cultural phenomena, and even environmental change. But we do not need Twitter to perform these functions, and at best Twitter can provide only very primitive signals of change or disruption, while other technologies and communications methods exist to more fully vet, call out, and confirm the meaning of significant events.

At the end of the day, the problem Twitter faces is that it has neither the ability nor the concern to distinguish between what is trivial and what is important. The fact that its leading proponents and visible voices are celebrities (simulacra of artists who have become famous largely for being famous) is indicative of the contradiction Twitter will not be able to escape: to sustain its growth and popularity, it has to validate a Babel of voices that have not earned an audience.