Category Archives: Plexus

In a few days time, it will be exactly thirty-two years – a bit more than a billion seconds – since I learned to code. I was lucky enough to attend a high school with its own DEC PDP 11/45, and lucky that it chose to offer computer science courses on a few VT-52 video terminals and a DECWriter attached to it. My first OS was RSTS/E, and my first programming language was – of course – BASIC.

A hundred million seconds before this, a friend dragged me over to a data center his dad managed, sat me down at a DECWriter, typed ‘startrek’ at the prompt, and it was all over. The damage had been done. From that day, all I’ve ever wanted to do is play with computers.

I’ve pretty much been able to keep to that.

Oddly, the only time I didn’t play with computers was at MIT. After MIT, when I began work as a software engineer, I got to play and get paid for it. I’ve written code for every major microprocessor family (with the exception of the 6502), all the common microcontrollers, and every OS from CP/M to Android. I’ve even written a batch RPG II program, typed onto punched cards and executed on an IBM 370 mainframe.

(Shudder, shudder.)

At Christmas 1990, I sat down and read a novel published a few years before, by an up-and-coming science fiction writer. That novel – Neuromancer – changed my life. It gave me a vision that I would pursue for an entire decade: a three-dimensional, immersive, visualized Internet. Cyberspace. I dropped everything, moved myself to San Francisco – epicenter of all work in virtual reality – and founded a startup to design and market an inexpensive immersive videogaming console. It was hard work, frequently painful, and I managed to pour my life savings into the company before it went belly up. But I can’t say that any of the other VR companies fared any better. A few of them still exist, shadows of their former selves, selling specialty products into the industrial market.

These companies failed because each of them – my own among them – coveted the whole prize. With the eyes of a megalomaniac, each firm was going to ‘rule the world’. Each did lots of inventing, holding onto every scrap of invention with IP agreements and copyrights and all sorts of patents. I invented a technology very similar to the one in the Wiimote, but fourteen years before the Wiimote was introduced. It’s all patented. I don’t own it. After my company collapsed, the patent went through a series of other owners, until eventually I found myself in a lawyer’s office, being deposed, because my patent – the one I didn’t actually own – was involved in a dispute over priority, theft of intellectual property, and other violations.

Lovely.

With the VR industry in ruins, I set about creating my own networked VR protocol, using a parser donated by my friend Tony Parisi, building upon work from a coder over in Switzerland, a bloke by the name of Tim Berners-Lee, who’d published reams and reams of (gulp) Objective-C code, preprocessed into ANSI C, implementing his new Hypertext Transfer Protocol. I took his code, folded it into my own, and rapidly created a browser for three-dimensional scenes attached to Berners-Lee’s new-fangled World Wide Web.

This happened seventeen years ago this week. Half a billion seconds ago.

When I’d gotten my 3D browser up and running, I was faced with a choice: I could try to hold it tight, screaming ‘Mine! Mine! Mine!’ and struggle for attention, or I could promiscuously share my code with the world. Being the attention-seeking type that I am, the choice was easy. After Dave Raggett – the father of HTML – had christened my work ‘VRML’, I published the source code. A community began to form around the project. With some help from an eighteen-year-old sysadmin at WIRED named Brian Behlendorf, I brought Silicon Graphics to the table, got them to open their own code, and we had a real specification to present at the 2nd International Conference on the World Wide Web. VRML was off and running, precisely because it was open to all, free to all, available to all.

It took about a billion seconds of living before I grokked the value of open source, the penny-drop moment I realized that a resource shared is a resource squared. I owe everything that came afterward – my careers as educator, author, and yes, panelist on The New Inventors – to that one insight. Ever since then, I’ve tried to give away nearly all of my work: ideas, articles, blog posts, audio and video recordings of my talks, slide decks, and, of course, lots of source code. The more I give away, the richer I become – not just or even necessarily financially. There are more metrics to wealth than cash in your bank account, and more ways than one to be rich. Just as there is more than one way to be good, and – oh yeah – more than one way to be evil.

Which brings us to my second penny-drop moment, which came after I’d been programming computers for almost a billion seconds…

I: ZOMFG 574LLm4N W45 r19H7!

Sometimes, the evil we do, we do to ourselves. For about half a billion seconds between the ages of nineteen and thirty-nine, I smoked tobacco, until I realized that anyone who smokes past the age of forty is either a fool or very poorly informed. So I quit. It took five years and many, many, many boxes of nicotine chewing gum, but I’m clean.

A few years ago, Harvard researcher Dr. Nicholas Christakis published some interesting insights on how the behavior of smoking spreads. It’s not the advertising – that’s mostly banned, these days – but because we take cues from our peers. If our friends start smoking, we ourselves are more likely to start smoking. There’s a communicative relationship, almost an epidemiological relationship at work here. This behavior is being transmitted by mimesis – imitation. We’re the imitating primates, so good at imitating one another that we can master language and math and xkcd. When we see our friends smoking, we want to smoke. We want to fit in. We want to be cool. That’s what it feels like inside our minds, but really, we just want to imitate. We see something, and we want to do it. This explains Jackass.

Mimesis is not restricted to smoking. Christakis also studied obesity, and found that it showed the same ‘network’ effects. If you are surrounded by obese people, chances are greater that you will be obese. If your peers start slimming, chances are that you will join them in dieting. The boundaries of mimesis are broad: we can teach soldiers to kill by immersing them in an environment where everyone learns to kill; we can teach children to read by immersing them in an environment where everyone learns to read; we can stuff our faces with Maccas and watch approvingly as our friends do the same. We have learned to use mimesis to our advantage, but equally it makes us its slaves.

Recent research has shown something disturbing: divorce spreads via mimesis. If you divorce, it’s more likely that your friends will also split up. Conversely, if your friends separate, it’s more likely that your marriage will dissolve. Again, this makes sense – you’re observing the behavior of your peers and imitating it, but here it touches the heart, the core of our being.

Booting up into Homo Sapiens Sapiens meant the acquisition of a facility for mimesis as broadly flexible as the one we have for language. These may even be two views into the same cognitive process. We can imitate nearly anything, but what we choose to imitate is determined by our network of peers, that set of relationships which we now know as our ‘social graph’.

This is why one needs to choose one’s friends carefully. They are not just friends, they are epidemiological vectors. When they sneeze, you will catch a cold. They are puppet masters, pulling your strings, even if they are blissfully unaware of the power they have over you – or the power that you have over them.

All of this is interesting, but little of it has the shock of the new. Our mothers told us to exercise caution when selecting our friends. We all know people who got in with the ‘wrong crowd’, to see their lives ruined as a consequence. This is common knowledge, and common sense.

But things are different today. Not because the rules have changed – those seem to be eternal – but because we have extended ourselves so suddenly and so completely. Our very new digital ‘social networks’ recapitulate the ones between our ears, in one essential aspect – they become channels for communication, channels through which the messages of mimesis can spread. Viral videos – and ‘viral’ behavior in general – are good examples of this.

Digital social networks are instantaneous, ubiquitous and can be vastly larger than the hundred-and-fifty-or-so limit imposed on our endogenous social networks, the functional bandwidth of the human neocortex. Just as computers can execute algorithms tens of millions of times faster than we can, digital social networks can inflate to elephantine proportions, connecting us to thousands of others.

Most of us keep our social graphs much smaller; the average number of friends on any given user account on Facebook is around 35. That’s small enough that it resembles your endogenous social network, so the same qualities of mimesis come into play. When your connections start talking about a movie or a song or a television series, you’re more likely to become interested in it.

If this is all happening on Facebook – which it normally is – there is another member of your social graph, there whether you like it or not: Facebook itself. You choose to build your social graph by connecting to others within Facebook, store your social graph on Facebook’s servers, and communicate within Facebook’s environment. All of this has been neatly captured, providing an opening for Facebook to do what they will with your social graph.

You have friended Mark Zuckerberg, telling him everything about yourself that you have ever told to any of your friends. More, actually, because an analysis of your social graph reveals much about you that you might not want to ever reveal to anyone else: your sexual preference and fetishes, your social class, your income level – everything that you might choose to hide is entirely revealed because you need to reveal it in order to make Facebook work. Because you do not own it. Because you do not have access to the source code, or the databases. Because it is closed.

Your social graph is the most important thing you have that can be represented in bits. With it, I can manipulate you. I can change your tastes, your attitudes, even your politics. We now know this is possible – and probably even easy. But to do this, I need your social graph. I need you to surrender it to me before I can use it to fuck you over.

We didn’t understand any of this a quarter billion seconds ago, when Friendster went live. Now we have a very good idea of the potency of the social graph, but we find ourselves almost pathetically addicted to the amplified power of communication provided by Facebook. We want to quit it, but we just don’t know how. Just as with tobacco, going cold turkey won’t be easy.

On 28 May 2010, I killed my Facebook profile and signed off once and for all. There is a cost – I’m missing a lot of the information which exists solely within the walled boundaries of Facebook – but I also breathe a bit easier knowing that I am not quite the puppet I was. When someone asks why I quit – an explanation which has taken me over a thousand words this morning – they normally just close down the conversation with, “My grandmother is on Facebook. I have to be there.”

That may be our epitaph.

We are so fucked. We ended up here because we surrendered our most vital personal details to a closed-source system. We should have known better.

And that’s only the half of it.

So much has happened in the last eight weeks that we’ve almost forgotten that before all of this disaster and tragedy afflicted Queensland, we were obsessed with another sort of disaster, rolling out in slow-motion, like a car smash from inside the car. On 29 November 2010, Wikileaks, in conjunction with several well-respected newspapers, began to release the first few of a quarter million cables, written by US State Department officials throughout the world. The US Government did its best to laugh these off as inconsequential, but one has already led more-or-less directly to a revolution in Tunisia. We also know that Hillary Clinton has requested credit card numbers and DNA samples for all of the UN ambassadors in New York City, presumably so she can raise up a clone army of diplomats intent on identity theft. Not a good look.

In early December, as the first cables came to light, and their contents ricocheted through the mediasphere, the US government recognized that it had to act – and act quickly – to staunch the flow of leaks. The government had some help, because an individual seduced by the United States’ projection of power decided to mount a Distributed Denial of Service attack against the Wikileaks website. In the name of freedom. Or liberty. Or something.

Wikileaks went down, but quickly relocated its servers into Amazon.com’s EC2 cloud. This lasted until US Senator Joseph Lieberman started making noises. Wikileaks was quickly turfed out of EC2, with Amazon claiming newly discovered violations of its Terms of Service. Another ‘discovery’ of a violation followed in fairly short order with Wikileaks’ DNS provider, EveryDNS. For the coup de grâce, PayPal had a look at their own terms of service – and, quelle horreur! – found Wikileaks in violation, freezing Wikileaks accounts, which, at that time, must have been fairly overflowing with contributions.

Deprive them of servers, deprive them of name service, deprive them of funds: checkmate. The Powers That Be must have thought this could dent the forward progress of Wikileaks. In fact, it only caused the number of copies of the website and associated databases to multiply. Today, nearly two thousand webservers host mirrors of Wikileaks. Like striking at a dandelion, the attack only scatters its seeds to the wind.

Although Wikileaks successfully resumed its work releasing the cables, the entire incident proved one ugly, mean, nasty point: the Internet is fundamentally not free. Where we thought we breathed the pure air of free speech and free thought, we instead find ourselves severely caged. If we do something that upsets our masters too much, they bring the bars down upon us, leaving us no breathing room at all. That isn’t liberty. That is slavery.

This isn’t some hypothetical. This isn’t a paranoid fantasy. This is what is happening. It will happen again, and again, and again, whenever the State or forces in collusion with the State find themselves threatened. None of it is secure. None of it belongs to us. None of it is free.

This is why we are so truly and wholly fucked. This is why we must stop and rethink everything we are doing. This is why we must consider ourselves victims of another kind of disaster, another tragedy, and must equally and bravely confront another kind of rebuilding. Because if we do not create something new, if we do not restore what is broken, we surrender to the forces of control.

Like it or not, we find ourselves at war. It’s not a war we asked for. It’s not a war we wanted. But war is upon us, the last great gasp of the forces of control as they realize that when they digitized, in pursuit of greater efficiency, profit, or extensions of their own power, whatever they once held onto became so fluid it now drains away completely.

That’s one enemy, the old enemy, the ones whom history has already ruled irrelevant. But there’s the other enemy, who seeks to exteriorize the interior, to make privacy difficult and therefore irrelevant. Without privacy there is no liberty. Without privacy there is no individuality. Without privacy there is only the mindless, endless buzzing of the hive. That’s the new enemy. Although it announces itself with all of the hyperbole of historical inevitability, this is just PR aimed at extending the monopoly power of these forces.

We need weapons. Lots of weapons. I’m not talking about the Low Orbit Ion Cannon. Rather, I’m recommending a layered defensive strategy, one which allows us to carry on with our business, blithely unmolested by the forces which seek to constrain us.

Here, then, is my ‘Design Guide for Anarchists’:

Design Principle One: Distribute Everything

The recording industry used the courts to shut down Napster because they could. Napster had a single throat they could get their legal arms around, choking the life out of it. In a display of natural selection that would have brought a tear to Alfred Russel Wallace’s eye, the selection pressure applied by the recording industry only led to the creation of Gnutella, which, through its inherently distributed architecture, became essentially impossible to eradicate. The Day of the Darknet had begun.

This is an extension of the essential UNIX idea of simple programs which can be piped together to do useful things. ‘Small pieces, loosely joined.’ But these pieces shouldn’t live within a single process, a single processor, a single computer, or a single subnet. They must live everywhere they can live, in every compatible environment, so that they can survive any of the catastrophes of war.
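The pipe idea can be sketched in a few lines of Python – a toy illustration, not Plexus code – where single-purpose stages compose like a shell pipeline, and any stage could in principle be lifted out into its own process or onto another host:

```python
from functools import reduce

# Three tiny, single-purpose stages. Each could just as easily run
# in its own process or on a distant machine, with messages passed
# between them instead of direct function calls.
def tokenize(text):
    return text.lower().split()

def drop_short(words):
    return [w for w in words if len(w) > 3]

def count(words):
    return len(words)

def pipeline(*stages):
    """Compose stages left-to-right, like a shell pipe."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

wordcount = pipeline(tokenize, drop_short, count)
print(wordcount("Small pieces loosely joined survive the war"))  # 5
```

Each stage knows nothing about the others; the pipeline is just an agreement about what flows between them – which is the property that lets the pieces scatter across machines.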

Design Principle Two: Transport Independence

The inundation of Brisbane and its surrounding suburbs brought a sudden death to all of its networks: mobile, wired, optic. All of these networks are centralized, and for that reason they can all be turned off – either by a natural disaster, or at the whim of The Powers That Be. Just as significantly, they require the intervention of those Powers to reboot them: government and telcos had to work hand-in-hand to bring mobile service back to the worst-affected suburbs. So long as you are in the good graces of the government, this arrangement can be remarkably efficient. But if you find yourself aligned against your government, or your government is afflicted with corruption, as simple a thing as a dial tone can be almost impossible to manifest.

We have created a centralized communications infrastructure. Lines feed into trunks, which feed into central offices, which feed into backbones. This seems the natural order of things, but it is entirely an echo of the commercial requirements of these networks. In order to bill you, your communications must pass through a point where they can be measured, metered and tariffed.

There is another way. Years before the Internet came along, we used UUCP and FidoNet to spread mail and news posts throughout a far-flung, only occasionally connected global network of users. It was slower than we’re used to these days, but no less reliable. Messages would forward from host to host, until they reached their intended destination. It all worked if you had a phone line, or an Internet connection, or, well, pretty much anything else. I presume that a few hardy souls printed out a UUCP transmission on paper tape, physically carried it from one host to another, and fed it through.

A hierarchy is efficient, but the price of that efficiency is vulnerability. A rhizomatic arrangement of nodes within a mesh is slow, but very nearly invulnerable. It will survive flood, fire, earthquake and revolution. To abolish these dangerous hierarchies, we must reconsider everything we believe about ‘the right way’ to get bits from point A to point B. Every transport must be considered – from point-to-point laser beams to wide-area mesh networks using unlicensed spectrum down to semaphore and smoke signals. Nothing is too slow, only too unreliable. If we rely on TCP/IP and HTTP exclusively, we risk everything for the sake of some speed and convenience. But this is life during wartime, and we must shoulder this burden.
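A UUCP-style store-and-forward relay can be sketched in a few dozen lines of Python. This is illustrative only – the class and method names are mine, not from any real mesh stack – but it shows the essential move: a message queues locally and advances one hop whenever any link to a neighbor exists, regardless of what that link physically is.

```python
import collections

class Node:
    """A store-and-forward relay in the spirit of UUCP and FidoNet:
    messages queue locally and move one hop whenever *any* transport
    -- TCP/IP, dial-up, even sneakernet -- connects two nodes."""
    def __init__(self, name):
        self.name = name
        self.queue = collections.deque()   # messages awaiting a link
        self.inbox = []                    # messages delivered here
        self.links = {}                    # neighbor name -> Node

    def connect(self, other):
        self.links[other.name] = other
        other.links[self.name] = self

    def send(self, dest, body):
        self.queue.append({"dest": dest, "body": body, "path": [self.name]})

    def flush(self):
        """Forward each queued message one hop toward its destination.
        A real mesh would route intelligently; this toy just prefers a
        direct link, else any neighbor the message hasn't visited."""
        while self.queue:
            msg = self.queue.popleft()
            if msg["dest"] == self.name:
                self.inbox.append(msg)
                continue
            nxt = self.links.get(msg["dest"]) or next(
                (n for n in self.links.values() if n.name not in msg["path"]),
                None)
            if nxt is not None:
                msg["path"].append(nxt.name)
                nxt.queue.append(msg)

a, b, c = Node("a"), Node("b"), Node("c")
a.connect(b); b.connect(c)          # a -- b -- c, no direct a--c link
a.send("c", "hello from the mesh")
a.flush(); b.flush(); c.flush()     # each flush moves messages one hop
print(c.inbox[0]["path"])           # ['a', 'b', 'c']
```

Note that nothing in the sketch mentions *how* a link is realized; `connect` could equally stand for a socket, a nightly modem call, or a courier with a USB stick – which is the whole point of transport independence.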

Design Principle Three: Secure Everything

Why would any message traverse a public network in plaintext? The bulk of our communication occurs in the wide open – between Web browsers and Web servers, email servers and clients, sensors and their recorders. This is insanity. It is not our job to make things easy to read for ASIO or the National Security Agency or Google or Facebook or anyone else who has some need to know what we’re saying and what we’re thinking.

As a baseline, everything we do, everywhere, must be transmitted with strong encryption. Until someone perfects a quantum computer, that’s our only line of defense.
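As a concrete baseline, today's Python standard library makes the 'nothing in plaintext' posture cheap. A sketch (the helper function name is mine) of a client configured to speak only verified, modern TLS:

```python
import socket
import ssl

# "Everything in ciphertext" as a default posture: a client context
# that verifies the server's certificate and refuses legacy protocols.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no SSLv3 / TLS 1.0 / 1.1

def open_secure(host, port=443):
    """Open a connection that only ever speaks TLS -- plaintext
    never touches the wire."""
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# create_default_context() turns on certificate and hostname checks:
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname
```

The point is that the secure configuration is now the one-liner and the insecure one takes deliberate effort; the baseline should always tilt that way.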

We need a security approach that is more comprehensive than this. The migration to cloud computing – driven by its ubiquity and convenience, and baked into Google’s Chrome OS – deprives us of any ability to secure our own information. When we use Gmail or Flickr or Windows Live or MobileMe or even Dropbox (which is better than most, as it stores everything encrypted), we surrender our security for a little bit of simplicity. This is a false trade-off. These systems are insecure because it benefits those who offer these systems to the public. There is value in all of that data, so everything is exposed, leaving us exposed.

If you do not know where it lives, if you do not hold the keys to lock it or release it, if it affects to be more pretty than useful (because locks are ugly), turn your back on it, and tell the ones you love – who do not know what you know – to do the same. Then, go and build systems which are secure, which present nothing but a lock to any prying eyes.

Design Principle Four: Open Everything

I don’t need to offer any detailed explanation for this last point: it is the reason we are here. If you can’t examine the source code, how can you really trust it? This is an issue beyond maintainability, beyond the right to fork; this is the essential element that will prevent paranoia. ‘Transparency is the new objectivity’, and unless any particular program is completely transparent, it is inherently suspect.

Open source has the additional benefit that it can be reused and repurposed; the parts for one defensive weapon can rapidly be adapted to another one, so open source accelerates the responses to new threats, allowing us to stay one step ahead of the forces who are attempting to close all of this down. There’s a certain irony here: in order to compete effectively with us, those who oppose us will be forced to open their own source, to accelerate their own responses to our responses. On this point we must win, simply because open source improves selection fitness.

When all four of these design principles are embodied in a work, another design principle emerges: resilience. Something that is distributed, transport independent, secure and open is very, very difficult to subvert, shut down, or block. It will survive all sorts of disasters. Including warfare. It will adapt at lightning speed. It makes the most of every possible selection advantage. But nothing is perfect. Systems engineered to these design principles will be slower than those built purely for efficiency. The more immediacy you need, the less resilience you get. Sometimes immediacy will overrule other design principles. Such trade-offs must be carefully thought through.

Is all of this more work? Yes. But then, building an automobile that won’t kill its occupants at speed is a lot more work than slapping four wheels and a gear train on a papier-mâché box. We do that work because we don’t want our loved ones hurtling toward their deaths every time they climb behind the wheel. Freedom ain’t free, and ‘extremism in the defense of liberty is no vice.’

Let me take a few minutes to walk you through the design of my own open-source project, so you can see how these design principles have influenced my own work.

III: Plexus

When I announced I would quit Facebook, many of my contacts held what can only be described as an ‘electronic wake’ for me, in the middle of my Facebook comment stream. As if I were about to pass away, and they’d never see me again. I kept pointing them to my Posterous blog, but they simply ignored the links, telling me how much I’d be missed once I departed. ‘But why can’t you just come visit me on Posterous?’ I asked. One contact answered for the lot when he said, ‘That’s too hard, Mark. With Facebook I can check on everyone at once. I don’t need to go over there for you, and over here for someone else, and so on and so on. Facebook makes it easy.’

That’s another epitaph. Yet it precipitated a penny-drop moment. The reason Facebook has such lock-in with its users is because of a network effect: as more people join Facebook, its utility value as a human switchboard increases. It is this access to the social graph which is Facebook’s ‘flypaper’, the reason it is so sticky, and why it has surpassed Google as the most visited site on the Internet.

That social graph is the key thing; it’s what the address book, the rolodex and the contacts database have morphed into, and it forms the foundation for a project that I have named Plexus. Plexus is a protocol for the social web, ‘plumbing’ that allows all social web components to communicate: from each, according to their ability, to each, according to their need. Some components of the social web – Facebook comes to mind – are very poor communicators. Others, like Twitter, have provided every conceivable service to make them easy to talk to.

Plexus provides a ‘meta-API’, based on RFC2822 messaging, so that each service can feed into or be fed by an individual’s social graph. This social graph, the heart of Plexus, is what we might call the ‘Web2.0 address book’. It’s not simply a static set of names, addresses, telephone numbers and emails, but, rather, an active set of connections between services, which you can choose to listen to, or to share with. This is the switchboard, where the real magic takes place, allowing you to listen or be listened to, allowing you to share, or be shared with.
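The Plexus wire format isn't spelled out here, so the following is only a guess at its shape: an RFC 2822 message built with Python's standard-library email package, where hypothetical X-Plexus-* headers (my invention, not from any spec) carry the routing metadata and the body carries the shared item.

```python
from email.message import EmailMessage
from email import message_from_string

# A hypothetical Plexus 'share' event as an RFC 2822 message.
# The X-Plexus-* header names are illustrative only.
msg = EmailMessage()
msg["From"] = "mark@example.org"
msg["To"] = "plexus-switchboard@example.org"
msg["X-Plexus-Verb"] = "share"
msg["X-Plexus-Service"] = "twitter"
msg.set_content("Just published the Plexus design notes.")

wire = msg.as_string()          # plain text: headers, blank line, body

# Any component, in any language, can parse it back with an
# off-the-shelf RFC 2822 parser:
parsed = message_from_string(wire)
print(parsed["X-Plexus-Verb"])   # share
```

Because the envelope is ordinary Internet mail format, every language already ships a parser for it – which is what makes an RFC 2822 meta-API such a low barrier to entry.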

Plexus is agnostic; it can talk to any service, and any service can talk to it. It is designed to ‘wire everything together’, so that we never have to worry about going hither and yon to manage our social graph, but neither need we be chained in one place. Plexus gives us as much flexibility as we require. That’s the vision.

Just after New Year, I had an insight. I had originally envisioned Plexus as a monolithic set of Python modules. It became clear that message-passing between the components – using an RFC2822 protocol – would allow me to separate the components, creating a distributed Plexus, parts of which could run anywhere: in a separate process, on a separate subnet, or, really, anywhere. Furthermore, these messages could easily be encrypted and signed using RSA encryption, creating a strong layer of security. Finally, these messages could be transmitted by any means necessary: TCP/IP, UUCP, even smoke signals. And of course, all of it is entirely open. Because it’s a protocol, the pieces of Plexus can be coded in any language anyone wants to use: Python, Node.js, PHP, Perl, Haskell, Ruby, Java, even shell. Plexus is an agreement to speak the same language about the things we want to share.

I could go into mind-numbing detail about the internals of Plexus, but I trust those of you who find Plexus intriguing will find me after I leave the stage this morning. I’m most interested in what you know that could help move this project forward: what pieces already exist that I can rework and adapt for Plexus? I need your vast knowledge, your insights and your critiques. Plexus is still coming to life, but a hundred things must go right for it to be a success. With your aid, that can happen.

The Chinese Taoist laughs at civilization and goes elsewhere.
The Babylonian Chaoist sets termites to the foundations.

Plexus is a white ant set to the imposing foundations of Facebook and every other service which chooses to take the easy path, walling its users in, the better to control them. There is another way. When the network outside the walls has a utility value greater than the network within, the forces of natural selection come into play, and those walls quickly tumble. We saw it with AOL. We saw it with MSN. We’ll see it again with Facebook. We will build small, loosely-coupled components that individually do very little but together add up to something far more useful than anything on offer from any monopolist.

We need to see this happen. This is not just a game.

Conclusion: The Next Billion Seconds

A billion seconds ago, Linux did not exist. The personal computer was an expensive toy. The Internet – well, one of my friends is the sysadmin who got HP onto UUCP, back before the Internet became pervasive, and he remembers updating his /etc/hosts file weekly, by hand. Every machine on the Internet could be found within a single file that could be printed out on two sheets of greenbar. A billion seconds later, and we’re a few days away from IPocalypse, the total allocation of the IPv4 number space.

Something is going on.

I’m not as teleological as Kevin Kelly. I do not believe that there is evidence to support a seventh class of life – the technium – which is striving to come into its own. I don’t consider technology as something in any way separate from us. Other animals may use tools, but we have gone further, becoming synonymous with them. Our social instinct for imitation, our language instinct for communication, and our technological instinct for tool using all seem to be reaching new heights. Each instinct reinforces the others, creating a series of rising feedbacks that has only one possible end: the whole system overloads, overflows all its buffers, and – as you might expect – knocks the supervisor out of the box.

Call this a Singularity, if you like. I simply refer to it as the next billion seconds.

The epicenter of this transition, where all three streams collide, sits in the palm of our hands, nearly all the time. The mobile is the most pervasive technology in human history. People who do not have electricity or indoor plumbing or literacy or agriculture have mobiles. Perhaps five and a half billion of the planet’s seven billion souls possess one; that’s everyone who earns more than a dollar a day. Countless studies show that individuals with mobiles improve their economic fitness: they earn more money. Anything that improves selection fitness – and economic fitness is a big part of that – spreads rapidly, as humans imitate, as humans communicate, as humans take the tool and further it, increasing its utility, amplifying its ability to amplify economic fitness. The mobile becomes even more useful, more essential, more indispensable. A billion seconds ago, no one owned a mobile. Today, nearly everyone does.

Hundreds of billions of dollars are being invested to make the mobile more useful, more pervasive, and more effective. The engines of capital are reorganizing themselves around it, just as they did, three billion seconds ago, for the automobile, and a billion seconds ago for the integrated circuit. But unlike the automobile or the IC, the mobile is quintessentially a social technology, a connective fabric for humanity. The next billion seconds will see this fabric become more tangible and more tightly woven, as it becomes increasingly inconceivable to separate ourselves from those we choose to share our lives with.

Call this a Hive Mind, if you like. I simply refer to it as the next billion seconds.

This is starting to push beneath our skins the way it has already colonized our attention. I don’t know that we will literally ‘Borg’ ourselves. But the strict boundaries between ourselves, our machines, and other humans are becoming blurred to the point of meaninglessness. Organisms are defined by their boundaries, by what they admit and what they refuse. In this billion seconds, we are rewriting the definition of homo sapiens sapiens, irrevocably becoming something else.

Do we own that code? Are parts of that new definition closed off from us, fenced in by the ramparts of privilege or power or capital or law? Will we end up with something foreign inside each of us, a potency unnamed, unobserved, and unavoidable? Will we be invaded, infected, and controlled? This is the choice that confronts us in the next billion seconds, a choice made even in its abrogation. Freedom is not just an ideal. Liberty is not some utopian dream. These must form the baseline human experience in our next billion seconds, or all is lost. We ourselves will be lost.

We have reached the decision point. Our actions today – here, in this room – define the future we will inhabit, the transhumanity we are emerging into. We’ve had our playtime, and it’s been good. We’ve learned a lot, but mostly we’ve learned how to discern right from wrong. We know what to do: what to build up, and what to tear down. This transition is painful and bloody and carries with it the danger of complete loss. But we have no choice. We are too far down within it to change our ways now. ‘The way down is the way up.’

Call it a birth, if you like. It awaits us within the next billion seconds.

The slides for this talk (in OpenOffice.org Impress format) are available here. They contain strong images.

In February 1984, seeking a reprieve from the very cold and windy streets of Boston, Massachusetts, I ducked inside a computer store. I spied the normal array of IBM PCs and peripherals, the Apple ][, probably even an Atari system. Prominently displayed at the front of the store was my first Macintosh. It wasn’t known as a Mac 128K or anything like that. It was simply Macintosh. I walked up to it, intrigued – already, the Reality Distortion Field was capable of luring geeks like me to their doom – and spied the unfamiliar graphical desktop and the cute little mouse. Sitting down at the chair before the machine, I grasped the mouse, and moved the cursor across the screen. But how do I get it to do anything? I wondered. Click. Nothing. Click, drag – oh look, some of these things changed color! But now what? Gah. This is too hard.

That’s when I gave up, pushed myself away from that first Macintosh, and pronounced this experiment in ‘intuitive’ computing a failure. Graphical computing isn’t intuitive; that’s a bit of a marketing fib. It’s a metaphor, and you need to grasp the metaphor – need to be taught what it means – to work fluidly within the environment. The metaphor is easy to apprehend if it has become the dominant technique for working with computers – as it has in 2010. Twenty-six years ago, it was a different story. You can’t assume that people will intuit what to do with your abstract representations of data or your arcane interface methods. Intuition isn’t always intuitively obvious.

A few months later I had a job at a firm which designed bar code readers. (That, btw, was the most boring job I’ve ever had, the only one I got fired from for insubordination.) We were designing a bar code reader for Macintosh, so we had one in-house, a unit with a nice carrying case so that I could ‘borrow’ it on weekends. Which I did. Every weekend. The first weekend I got it home, unpacked it, plugged it in, popped in the system disk, booted it, ejected the system disk, popped in the applications disk, and worked my way through MacPaint and MacWrite and on to my favorite application of all – Hendrix.

Hendrix took advantage of the advanced sound synthesis capabilities of Macintosh. Presented with a perfectly white screen, you dragged the mouse along the display. The position, velocity, and acceleration of the pointer determined what kind of heavily altered but unmistakably guitar-like sounds came out of the speaker. For someone who had lived with the bleeps and blurps of the 8-bit world, it was a revelation. It was, in the vernacular of Boston, ‘wicked’. I couldn’t stop playing with Hendrix. I invited friends over, showed them, and they couldn’t stop playing with Hendrix. Hendrix was the first interactive computer program that I gave a damn about, the first one that really showed me what a computer could be used for. Not just pushing paper or pixels around, but an instrument, and an essential tool for human creativity.

Everything that’s followed in all the years since has been interesting to me only when it pushes the boundaries of our creativity. I grew entranced by virtual reality in the early 1990s, because of the possibilities it offered up for an entirely new playing field for creativity. When I first saw the Web, in the middle of 1993, I quickly realized that it, too, would become a cornerstone of creativity. That roughly brings us forward from the ‘olden days’, to today.

This morning I want to explore creativity along the axis of three classes of devices, as represented by the three Apple devices that I own: the desktop (my 17” MacBook Pro Core i7), the mobile (my iPhone 3GS 32Gb), and the tablet (my iPad 16GB 3G). I will draw from my own experience as both a user and developer for these devices, using that experience to illuminate a path before us. So much is in play right now, so much is possible, all we need do is shine a light to see the incredible opportunities all around.

I: The Power of Babel

I love OSX, and have used it more or less exclusively since 2003, when it truly became a usable operating system. I’m running Snow Leopard on my MacBook Pro, and so far have suffered only one Grey Screen Of Death. (And, if I know how to read a stack trace, that was probably caused by Flash. Go figure.) OSX is solid, it’s modestly secure, and it has plenty of eye candy. My favorite bit of that is Spaces, which allows me to segregate my workspace into separate virtual screens.

Upper left hand space has Mail.app, upper right hand has Safari, lower right hand has TweetDeck and Skype, while the lower left hand is reserved for the task at hand – in this case, writing these words. Each of the apps, except Microsoft Word, is inherently Internet-oriented, an application designed to facilitate human communication. This is the logical and inexorable outcome of a process that began back in 1969, when the first nodes began exchanging packets on the ARPANET. Phase one: build the network. Phase two: connect everything to the network. Phase three: PROFIT!

That seems to have worked out pretty much according to plan. Our computers have morphed from document processors – that’s what most computers of any stripe were used for until about 1995 – into communication machines, handling the hard work of managing a world that grows increasingly connected. All of this communication is amazing and wonderful and has provided the fertile ground for innovations like Wikipedia and Twitter and Skype, but it also feels like too much of a good thing. Connection has its own gravitational quality – the more connected we become, the more we feel the demand to remain connected continuously.

We salivate like Pavlov’s dogs every time our email application rewards us with the ‘bing’ of an incoming message, and we keep one eye on Twitter all day long, just in case something interesting – or at least diverting – crosses the transom. Blame our brains. They’re primed to release the pleasure neurotransmitter dopamine at the slightest hint of a reward; connecting with another person is (under most circumstances) a guaranteed hit of pleasure.

That’s turned us into connection junkies. We pile connection upon connection upon connection until we numb ourselves into a zombie-like overconnectivity, then collapse and withdraw, feeling the spiral of depression as we realize we can’t handle the weight of all the connections that we want so desperately to maintain.

Not a pretty picture, is it? Yet the computer is doing an incredible job, acting as a shield between what our brains are prepared to handle and the immensity of information and connectivity out there. Just as consciousness is primarily the filtering of signal from the noise of the universe, our computers are the filters between the roaring insanity of the Internet and the tidy little gardens of our thoughts. They take chaos and organize it. Email clients are excellent illustrations of this; the best of them allow us to sort and order our correspondence based on need, desire, and goals. They prevent us from seeing the deluge of spam which makes up more than 90% of all SMTP traffic, and help us to stay focused on the task at hand.

Electronic mail was just the beginning of the revolution in social messaging; today we have Tweets and instant messages and Foursquare checkins and Flickr photos and YouTube videos and Delicious links and Tumblr blogs and endless, almost countless feeds. All of it recommended by someone, somewhere, and all of it worthy of at least some of our attention. We’re burdened by too many web sites and apps needed to manage all of this opportunity for connectivity. The problem has become most acute on our mobiles, where we need a separate app for every social messaging service.

This is fine in 2010, but what happens in 2012, when there are ten times as many services on offer, all of them delivering interesting and useful things? All these services, all these websites, and all these little apps threaten to drown us with their own popularity.

Does this mean that our computers are destined to become like our television tuners, which may have hundreds of channels on offer, but never see us watch more than a handful of them? Do we have some sort of upper boundary on the amount of connectivity we can handle before we overload? Clay Shirky has rightly pointed out that there is no such thing as information overload, only filter failure. If we find ourselves overwhelmed by our social messaging, we’ve got to build some better filters.
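Shirky’s point lends itself to a sketch in code. What follows is purely hypothetical – the sender affinities, the weights, and the threshold are all invented for illustration – but it shows the shape of a first-pass social filter: score each incoming message by who sent it and what it mentions, and surface only what clears the bar.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

# Hypothetical affinity scores: how much we care about each sender.
AFFINITY = {"close_friend": 1.0, "colleague": 0.6, "stranger": 0.1}

def score(msg: Message, interests: set) -> float:
    """Rank a message by sender affinity plus topical relevance."""
    affinity = AFFINITY.get(msg.sender, 0.1)
    relevance = sum(1 for word in msg.text.lower().split() if word in interests)
    return affinity + 0.5 * relevance

def filter_inbox(messages, interests, threshold=1.0):
    """Return only the messages worth our attention, best first."""
    kept = [m for m in messages if score(m, interests) >= threshold]
    return sorted(kept, key=lambda m: score(m, interests), reverse=True)
```

A real filter would learn the affinities and interests rather than hard-code them, and would be social itself – weighting what your friends found worth their attention – but the skeleton is this simple.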

This is the great growth opportunity for the desktop, the place where the action will be happening – when it isn’t happening in the browser. Since the desktop is the nexus of the full power of the Internet and the full set of your own data (even the data stored in the cloud is accessed primarily from your desktop), it is the logical place to create some insanely great next-generation filtering software.

That’s precisely what I’ve been working on. This past May I got hit by a massive brainwave – one so big I couldn’t ignore it, couldn’t put it down, couldn’t do anything but think about it obsessively.

I wanted to create a tool that could aggregate all of my social messaging – email, Twitter, RSS and Atom feeds, Delicious, Flickr, Foursquare, and on and on and on. I also wanted the tool to be able to distribute my own social messages, in whatever format I wanted to transmit, through whatever social message channel I cared to use.

Then I wouldn’t need to go hither and yon, using Foursquare for this, and Flickr for that and Twitter for something else. I also wouldn’t have to worry about which friends used which services; I’d be able to maintain that list digitally, and this tool would adjust my transmissions appropriately, sending messages to each as they want to receive them, allowing me to receive messages from each as they care to send them.

That’s not a complicated idea. Individuals and companies have been nibbling around the edges of it for a while.

I am going the rest of the way, creating a tool that functions as the last ‘social message manager’ that anyone will need. It’s called Plexus, and it functions as middleware – sitting between the Internet and whatever interface you might want to cook up to view and compose all of your social messaging.
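Plexus’s actual internals may differ, but the middleware shape is easy to sketch: one adapter per service behind a uniform interface, and a manager that fans a single message out to each friend’s preferred channel. Everything here – the class names, the in-memory outbox standing in for real network calls – is illustrative, not a description of the Plexus codebase.

```python
class Channel:
    """One adapter per service (Twitter, email, Flickr, ...)."""
    def __init__(self, name):
        self.name = name
        self.outbox = []  # stands in for a real network call

    def send(self, recipient, text):
        self.outbox.append((recipient, text))

class SocialMessageManager:
    """Middleware: one compose call fans out to each friend's preferred channel."""
    def __init__(self):
        self.channels = {}  # channel name -> Channel
        self.prefs = {}     # friend -> preferred channel name

    def register(self, channel):
        self.channels[channel.name] = channel

    def set_preference(self, friend, channel_name):
        self.prefs[friend] = channel_name

    def broadcast(self, friends, text):
        """Send one message; each friend receives it on their own channel."""
        for friend in friends:
            self.channels[self.prefs[friend]].send(friend, text)
```

The point of the design is that neither you nor your interface ever thinks about which service a friend uses; the manager maintains that mapping digitally and routes accordingly.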

Now were I devious, I’d coyly suggest that a lot of opportunity lies in building front-end tools for Plexus, ways to bring some order to the increasing flow of social messaging. But I’m not coy. I’ll come right out and say it: Plexus is an open-source project, and I need some help here. That’s a reflection of the fact that we all need some help here. We’re being clubbed into submission by our connectivity. I’m trying to develop a tool which will allow us to create better filters, flexible filters, social filters, all sorts of ways of slicing and dicing our digital social selves. That’s got to happen as we invent ever more ways to connect, and as we do all of this inventing, the need for such a tool becomes more and more clear.

We see people throwing their hands up, declaring ‘email bankruptcy’, quitting Twitter, or committing ‘Facebookicide’, because they can’t handle the consequences of connectivity.

We secretly yearn for that moment after the door to the aircraft closes, and we’re forced to turn our devices off for an hour or two or twelve. Finally, some time to think. Some time to be. Science backs this up; the measurable consequence of over-connectivity is that we don’t have the mental room to roam with our thoughts, to ruminate, to explore and play within our own minds. We’re too busy attending to the next message. We need to disconnect periodically, and focus on the real. We desperately need tools which allow us to manage our social connectivity better than we can today.

Once we can do that, we can filter the noise and listen to the music of others. We will be able to move so much more quickly – together – it will be another electronic renaissance: just like 1994, with Web 1.0, and 2004, with Web 2.0.

That’s my hope, that’s my vision, and it’s what I’m directing my energies toward. It’s not the only direction for the desktop, but it does represent the natural evolution of what the desktop has become. The desktop has been shaped not just by technology, but by the social forces stirred up by our technology.

It is not an accident that our desktops act as social filters; they are the right tool at the right time for the most important job before us – how we communicate with one another. We need to bring all of our creativity to bear on this task, or we’ll find ourselves speechless, shouted down, lost at another Tower of Babel.

II: The Axis of Me-ville

Three and a half weeks ago, I received a call from my rental agent. My unit was going on the auction block – would I mind moving out? Immediately? I’ve lived in the same flat since I first moved to Sydney, seven years ago, so this news came as quite a shock.

I spent a week going through the five stages of grief: denial, anger, bargaining, depression, and acceptance. The day I reached acceptance, I took matters in hand, the old-fashioned way: I went online, to domain.com.au, and looked for rental units in my neighborhood.

Within two minutes I learned that there were two units for rent within my own building!

When you stop to think about it, that’s a bit weird. There were no signs posted in my building, no indication that either of the units was for rent. I’d heard nothing from the few neighbors I know well enough to chat with. They didn’t know either. Something happening right underneath our noses – something of immediate relevance to me – and none of us knew about it. Why? Because we don’t know our neighbors.

For city dwellers this is not an unusual state of affairs. One of the pleasures of the city is its anonymity. That’s also one of its great dangers. The two go hand-in-hand. Yet the world of 2010 does not offer up this kind of anonymity easily. Consider: we can re-establish a connection with someone we went to high school with, thirty years ago – and really never thought about in all the years that followed – but still not know the names of the people in the unit next door, names we might utter with bitter anger after they’ve turned up the music again. How can we claim that there’s any social revolution if we can’t be connected to people whom we’re physically close to? Emotional closeness is important, and financial closeness (your coworkers) is also salient, but both should be trumped by the people who breathe the same air as you.

It is almost impossible to bridge the barriers that separate us from one another, even when we’re living on top of each other.

This is where the mobile becomes important, because the mobile is the singular social device. It is the place where our human relationships reside. (Plexus is eventually bound for the mobile, but in a few years’ time, when the devices are nimble enough to support it.) Yet the mobile is more than just the social crossroads. It is the landing point for all of the real-time information you need to manage your life.

On the home page of my iPhone, two apps stand out as the aids to the real-time management of my life: RainRadar AU and TripView. I am a pedestrian in Sydney, so it’s always good to know when it’s about to rain, how hard, and how long. As a pedestrian, I make frequent use of public transport, so I need to know when the next train, bus or ferry is due, wherever I happen to be. The mobile is my networked, location-aware sensor. It gathers up all of the information I need to ease my path through life. This demonstrates one of the unstated truisms of the 21st century: the better my access to data, the more effective I will be, moment to moment. The mobile has become that instantaneous access point, simply because it’s always at hand, or in the pocket or pocketbook or backpack. It’s always with us.

In February I gave a keynote at a small Melbourne science fiction convention. After I finished speaking a young woman approached me and told me she couldn’t wait until she could have some implants, so her mobile would be with her all the time. I asked her, “When is your mobile ever more than a few meters away from you? How much difference would it make? What do you gain by sticking it underneath your skin?” I didn’t even bother to mention the danger from all that subcutaneous microwave radiation. It’s silly, and although our children or grandchildren might have some interesting implants, we need to accept the fact that the mobile is already a part of us.

We’re as Borg-ed up as we need to be. Probably we’re more Borg-ed up than we can handle.

It’s not just that our mobiles have become essential. It’s getting so that we can’t put them down, even in situations when we need to focus on the task at hand – driving, or having dinner with your partner, or trying to push a stroller across an intersection. We’re addicted, and the first step to treating that addiction is to admit we have a problem. But here’s the dilemma: we’re working hard to invent new ways to make our mobiles even more useful, indispensable and alluring.

We are the crack dealers. And I’m encouraging you to make better crack. Truth be told, I don’t see this ‘addiction’ as a bad thing, though goodness knows the tabloid newspapers and cultural moralists will make whatever they can of it. It’s an accommodation we will need to make, a give-and-take. We gain an instantaneous connection to one another, a kind of cultural ‘telepathy’ that would have made Alexander Graham Bell weep for joy.

But there’s more: we also gain a window into the hitherto hidden world of data that is all around us, a shadow and double of the real world.

For example, I can now build an app that allows me to wander the aisles of my local supermarket, bringing all of the intelligence of the network with me as I shop. I hold the mobile out in front of me, its camera capturing everything it sees, which it passes along to the cloud, so that Google Goggles can do some image processing on it, and pick out the identifiable products on the shelves.

This information can then be fed back into a shopping list – created by me, or by my doctor, or by my bank – because I might be trying to optimize for my own palate, my blood pressure, or my budget – and as I come across the items I should purchase, my mobile might give a small vibration. When I look at the screen, I see the shelves, but the items I should purchase are glowing and blinking.
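The image-recognition half of this is a cloud black box, but the decision step – intersecting what the recognizer claims to see on the shelf with a constrained shopping list – is ordinary code. Here’s a hypothetical sketch; the nutrition and price figures, and the constraint functions, are invented for illustration.

```python
def items_to_highlight(recognized, shopping_list, constraints):
    """Given products the recognizer claims to see on the shelf,
    return the ones on the shopping list that also satisfy every
    active constraint (each constraint is a predicate on an item)."""
    seen = set(recognized)
    return [item for item in shopping_list
            if item in seen and all(ok(item) for ok in constraints)]

# Hypothetical nutrition and price data backing the two constraints below.
SODIUM_MG = {"soy sauce": 900, "rolled oats": 0, "instant noodles": 1200}
PRICE = {"soy sauce": 3.50, "rolled oats": 4.00, "instant noodles": 0.80}

low_sodium = lambda item: SODIUM_MG.get(item, 0) <= 400   # the doctor's rule
on_budget = lambda item: PRICE.get(item, 0) <= 5.00       # the bank's rule
```

Whatever survives the constraints is what glows and blinks on the screen; swapping the doctor’s rule for the bank’s is just swapping predicates.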

The technology to realize this – augmented reality with a few extra bells and whistles – is already in place. This is the sort of thing that could be done today, by someone enterprising enough to knit all these separate threads into a seamless whole. There’s clearly a need for it, but that’s just the beginning. This is automated, computational decision making. It gets more interesting when you throw people into the mix.

Consider: in December I was on a road trip to Canberra. When I arrived there, at 6 pm, I wondered where to have dinner. Canberra is not known for its scintillating nightlife – I had no idea where to dine. I threw the question out to my 7000 Twitter followers, and in the space of time that it took to shower, I had enough responses that I could pick and choose among them, and ended up having the best bowl of seafood laksa that I’d had since I moved to Australia!

That’s the kind of power that we have in our hands, but don’t yet know how to use.

We are all well connected, instantaneously and pervasively, but how do we connect without confusing ourselves and one another with constant requests? Can we manage that kind of connectivity as a background task, with our mobiles acting as the arbiters? The mobile is the crossroads, between our social lives, our real-time lives, and our data-driven selves. All of it comes together in our hands. The device is nearly full to exploding with the potentials unleashed as we bring these separate streams together. It becomes hypnotizing and formidable, though it rings less and less. Voice traffic is falling nearly everywhere in the developed world, but mobile usage continues to skyrocket. Our mobiles are too important to use for talking.

Let’s tie all of this together: I get evicted, and immediately tell my mobile, which alerts my neighbors and friends, and everyone sets to work finding me a new place to live. When I check out their recommendations, I get an in-depth view of my new potential neighborhoods, delivered through a marriage of augmented reality and the cloud computing power located throughout the network. Finally, when I’m about to make a decision, I throw it open for the people who care enough about me to ring in with their own opinions, experiences, and observations. I make an informed decision, quickly, and am happier as a result, for all the years I live in my new home.

That’s what’s coming. That’s the potential that we hold in the palms of our hands. That’s the world you can bring to life.

III: Through the Looking Glass

Finally, we turn to the newest and most exciting of Apple’s inventions. There seemed to be nothing new to say about the tablet – after all, Bill Gates declared ‘The Year of the Tablet’ way back in 2001. But it never happened. Tablets were too weird, too constrained by battery life and weight and, most significantly, the user experience. It’s not as though you can take a laptop computer, rip away the keyboard and slap on a touchscreen to create a tablet computer, though this is what many people tried for many years. It never really worked out for them.

Instead, Apple leveraged what they learned from the iPhone’s touch interface. Yet that alone was not enough. I was told by sources well-placed in Apple that the hardware for a tablet was ready a few years ago; designing a user experience appropriate to the form factor took a lot longer than anyone had anticipated. But the proof of the pudding is in the eating: iPad is the most successful new product in Apple’s history, with Apple set to manufacture around thirty million of them over the next twelve months. That success is due to the hard work and extensive testing performed upon the iPad’s particular version of iOS.

It feels wonderfully fluid, well adapted to the device, although quite different from the iOS running on iPhone. iPad is not simply a gargantuan iPod Touch. The devices are used very differently, because the form-factor of the device frames our expectations and experience of the device.

Let me illustrate with an example from my own experience: I had a consulting job drop on me at the start of June, one which required that I go through and assess eighty-eight separate project proposals, all of which ran to 15 pages apiece. I had about 48 hours to do the work. I was a thousand kilometers from these proposals, so they had to be sent to me electronically, so that I could then print them before reading through them. Doing all of that took 24 of the 48 hours I had for review, and left me with a ten-kilo box of papers that I’d have to carry, a thousand kilometers, to the assessment meeting. Ugh.

Immediately before I left for the airport with this paper ball-and-chain, I realized I could simply drag the electronic versions of these files into my Dropbox account. Once uploaded, I could access those files from my iPad – all thousand or so pages. Working on iPad made the process much faster than having to fiddle through all of those papers; I finished my work on the flight to my meeting, and was the envy of all attending – they wrestled with multiple fat paper binders, while I simply swiped my way to the next proposal.

This was when I realized that iPad is becoming the indispensable appliance for the information worker.

You can now hold something in your hand that has every document you’ve written; via the cloud, it can hold every document anyone has ever written. This has been true for desktops since the advent of the Internet, but it hasn’t been as immediate. iPad is the page, reinvented, not just because it has roughly the same dimensions as a page, but because you interact with it as if it were a piece of paper. That’s something no desktop has ever been able to provide.

We don’t really have a sense yet for all the things we can do with this ‘magical’ (to steal a word from Steve Jobs) device.

Paper transformed the world two thousand years ago. Moveable type transformed the world five hundred years ago. The tablet, whatever it is becoming – whatever you make of it – will similarly reshape the world. It’s not just printed materials; the tablet is the lightbox for every photograph ever taken anywhere by anyone. The tablet is the screen for every video created, a theatre for every film produced, a tuner to every radio station that offers up a digital stream, and a player for every sound recording that can be downloaded.

All of this is here, all of this is simultaneously present in a device with so much capability that it very nearly pulses with power.

iPad is like a Formula One Ferrari that we haven’t even gotten out of first gear. So stretch your mind further than the idea of the app. Apps are good and important, but to unlock the potential of iPad it needs lots of interesting data pouring into it and through it. That data might be provided via an application, but it probably doesn’t live within the application – there’s not enough room in there. Any way you look at it, iPad is a creature of the network; it is a surface, a looking glass, which presents you a view from within the network.

What happens when the network looks back at you?

At the moment iPad has no camera, though everyone expects a forward-facing camera to be in next year’s model. That will come so that Apple can enable FaceTime. (With luck, we’ll also see a Retina Display, so that documents can be seen in their natural resolution.) Once the iPad can see you, it can respond to you. It can acknowledge your presence in an authentic manner. We’re starting to see just what this looks like with the recently announced Xbox Kinect.

This is the sort of technology which points all the way back to the infamous ‘Knowledge Navigator’ video that John Sculley used to create his own Reality Distortion Field around the disaster that was the Newton. Decades ahead of its time, the Knowledge Navigator pointed toward Google and Wikipedia and Milo, with just a touch of Facebook thrown in. We’re only just getting there, to the place where this becomes possible.

These are no longer dreams, these are now quantifiable engineering problems.

This sort of thing won’t happen on Xbox, though Microsoft or a partner developer could easily write an app for it. But that’s not where they’re looking; this is not about keeping you entertained. The iPad can entertain you, but that’s not its main design focus. It is designed to engage you, today with your fingers, and soon with your voice and your face and your gestures. At that point it is no longer a mirror; it is an entity on its own. It might not pass the Turing Test, but we’ll anthropomorphize it nonetheless, just as we did with Tamagotchi and Furby. It will become our constant companion, helping us through every situation. And it will move seamlessly between our devices, from iPad to iPhone to desktop. But it will begin on iPad.

Because we are just starting out with tablets, anything is possible. We haven’t established expectations which guide us into a particular way of thinking about the device. We’ve had mobiles for nearly twenty years, and desktops for thirty. We understand both well, and with that understanding comes a narrowing of possibilities. The tablet is the undiscovered country, virgin, green, waiting to be explored. This is the desktop revolution, all over again. This is the mobile revolution, all over again. We’re in the right place at the right time to give birth to the applications that will seem commonplace in ten or fifteen years.

I remember VisiCalc, the first spreadsheet. I remember how revolutionary it seemed, how it changed everyone’s expectations for the personal computer. I also remember that it was written for an Apple ][.

You have the chance to do it all again, to become the ‘mothers of innovation’, and reinvent computing. So think big. This is the time for it. In another few years it will be difficult to aim for the stars. The platform will be carrying too much baggage. Right now we all get to be rocket scientists. Right now we get to play, and dream, and make it all real.