Reports from the connected present

At the end of May I received an email from a senior official at the Victorian Department of Education and Early Childhood Development. DEECD was in the midst of issuing an RFP, looking for new content to populate FUSE (Find, Use, Share, Education), an important component of ULTRANET, the mega-über-supremo educational intranet meant to solve everyone’s educational problems for all time. Or, well, perhaps I overstate the matter. But it could be a big deal.

The respondents to the RFP were organizations that already had working relationships with DEECD, and therefore were both familiar with DEECD processes and had been vetted in those earlier engagements. This meant that the entire process, from RFP to submissions, could be telescoped down to just under three weeks. The official asked me if I’d be interested in being one of the external reviewers for these proposals as they passed through an official evaluation process. I said I’d be happy to do so, and asked how many proposals I’d have to review. “I doubt it will be more than thirty or forty,” he replied. Which seemed quite reasonable.

As is inevitably the case, most of the proposals landed in the DEECD mailbox just a few hours before the deadline for submissions. But the RFP didn’t result in thirty or forty proposals. The total came to almost ninety. All of which I had to review and evaluate in the thirty-six hours between the time they landed in my inbox and the start of the formal evaluation meeting. Oh, and first I needed to print them out, because there was no way I’d be able to do that much reading in front of my computer.

Let’s face it – although we do sit and read our laptop screens all day long, we rarely read anything longer than a few paragraphs. If it passes 300 words, it tips the balance into ‘tl;dr’ (too long; didn’t read) territory, and unless it’s vital for our employment or well-being, we tend to skip it and move along to the next little tidbit. Having to sit and read through well over nine hundred pages of proposals on my laptop was a bridge too far. I set off to the print shop around the corner from my flat, to have the whole mess printed out. That took nearly 24 hours by itself – and cost an ungodly sum. I was left with a huge, heavy box of paper which I could barely lug back to my flat. For the next 36 hours, this box would be my ball and chain. I’d have to take it with me to the meeting in Melbourne, which meant packing it for the flight, checking it as baggage, lugging it to my hotel room, and so forth, all while trying to digest its contents.

How the heck was that going to work?

This is when I looked at my iPad. Then I looked back at the box. Then back at the iPad. Then back at the box. I’d gotten my iPad barely a week before – when they first arrived in Australia – and I was planning on taking it on this trip, but without an accompanying laptop. This, for me, would be a bit of a test. For the last decade I’d never traveled anywhere without my laptop. Could I manage a business trip with just my iPad? I looked back at the iPad. Then at the box. You could practically hear the penny drop.

I immediately began copying all these nine hundred-plus pages of proposals and accompanying documentation from my laptop to the storage utility Dropbox. Dropbox gives you 2 GB of free Internet storage, with an option to rent more space, if you need it. Dropbox also has an iPad app (free) – so as soon as the files were uploaded to Dropbox, I could access them from my iPad.

I should take a moment and talk about the model of the iPad I own. I ordered the 16 GB version – the smallest storage size offered by Apple – but I got the 3G upgrade, paired with Telstra’s most excellent pre-paid NextG service. My rationale was that I imagined this iPad would be a ‘cloud-centric’ device. The ‘cloud’ is a term that’s come into use quite recently. It means software is hosted somewhere out there on the Internet – the ‘cloud’ – rather than residing locally on your computer. Gmail is a good example of software that’s ‘in the cloud’. Facebook is another. Twitter, another. Much of what we do with our computers – iPad included – involves software accessed over the Internet. Many of the apps for sale in Apple’s iTunes App Store are useless or pointless without an Internet connection – these are the sorts of applications which break down the neat boundary between the computer and the cloud. Cloud computing has been growing in importance over the last decade; by the end of this one it will simply be the way things work. Your iPad will be your window onto the cloud, onto everything you have within that cloud: your email, your documents, your calendar, your contacts, etc.

I like to live in the future, so I made sure that my iPad didn’t have too much storage – which forces me to use the cloud as much as possible. In this case, that was precisely the right decision, because I ditched the ten-kilo box of paperwork and boarded my flight to Melbourne with my iPad at my side. I pored through the proposals, one after another, bringing them up in Dropbox, evaluating them, making some notes in my (paper) notebook, then moving along to the next one. My iPad gave me a fluidity and speed that I could never have had with that box of paper.

When I arrived at my hotel, two more large boxes were waiting for me. Here again were the proposals, carefully ordered and placed into several large, ringed binders. I’d be expected to tote these to the evaluation meeting. Fortunately, that was only a few floors above my hotel room. That said, it was a bit of a struggle to get those boxes and my luggage into the elevator and up to the meeting room. I put those boxes down – and never looked at them again. As the rest of the evaluation panel dug through their boxes to pull out the relevant proposals, I did a few motions with my fingertips, and found myself on the same page.

Yes, they got a bit jealous.

We finished the evaluation on time and quite successfully, and at the end of the day I left my boxes with the DEECD coordinator, thanking her for her hard work printing all these materials, but begging off. She understood completely. I flew home lighter than I would otherwise have been, had I stuck to paper.

For at least the past thirty years – which is about the duration of the personal computer revolution – people have been talking about the advent of the paperless office. Truth be told, we use more paper in our offices than ever before, our printers constantly at work with letters, notices, emails, and so forth. We haven’t been able to make the leap to a paperless office – despite our comprehensive ability to manipulate documents digitally – because we lacked something that could actually replace paper. Computers as we’ve known them simply can’t replace a piece of paper. For a whole host of reasons, it just never worked. To move to a paperless office – and a paperless classroom – we had to invent something that could supplant paper. We have it now. After a lot of false starts, tablet computing has finally arrived – and it’s here to stay.

I can sit here, iPad in hand, and have access to every single document that I have ever written. You will soon have access to every single document you might ever need, right here, right now. We’re not 100% there yet – but that’s not the fault of the device. We’re going to need to make some adjustments to our IT strategies, so that we can have a pervasively available document environment. At that point, your iPad becomes the page which contains all other pages within it. You’ll never be without the document you need at the time you need it.

Nor will we confine ourselves to text. The world is richer than that. iPad is the lightbox that contains all photographs within it; it is the television which receives every bit of video produced by anyone – professional or amateur – ever. It is already the radio (Pocket Tunes app) which receives almost every major radio station broadcasting anywhere in the world. And it is every one of a hundred-million-plus websites and maybe a trillion web pages. All of this is here, right here in the palm of your hand.

What matters now is how we put all of this to work.

II: iPad at Work

Let’s project ourselves into the future just a little bit – say around ten years. It’s 2020, and we’ve had iPads for a whole decade. The iPads of 2020 will be vastly more powerful than the ones in use today, because of something known as Moore’s Law – the observation that computing power doubles roughly every twenty-four months. Ten years is five doublings, or 32 times. That rule extends to the display as well as the computer. The ‘Retina Display’ recently released on Apple’s iPhone 4 shows us where that technology is going – displays so fine that you can’t make out the individual pixels with your eye. The screen of your iPad version 11 will be visually indistinguishable from a sheet of paper. The device itself will be thinner and lighter than the current model. Battery technology improves at about 10% a year, so half the weight of the battery – the heaviest component of the iPad – will disappear. You’ll still get at least ten hours of use – that’s considered essential to the user experience. And you’ll still be connected to the mobile network.

The mobile network of 2020 will look quite different from the mobile network of 2010. Right now we’re just on the cusp of moving into 4th generation mobile broadband technology, known colloquially as LTE, or Long-Term Evolution. Where you might get speeds of 7 megabits per second with NextG mobile broadband – under the best conditions – LTE promises speeds of 100 megabits. That’s as good as a wired connection – as fast as anything promised by the National Broadband Network! In a decade’s time we’ll be moving through 5th generation and possibly into 6th generation mobile technologies, with speeds approaching a gigabit, a billion bits per second. That may sound like a lot, but a gigabit is more than a hundred times the capacity of today’s mobile broadband networks. Moore’s Law has a broad reach, and will transform every component of the iPad.

iPad will have thirty-two times the storage – not that we’ll need it, given that we’ll be connected to the cloud at gigabit speeds – but if it’s there, someone will find a use for the two terabytes or more included in our iPad. (Perhaps a full copy of Wikipedia? Or all of the books published before 1915?) All of this will still cost just $700. If you want to spend less – and have a correspondingly less-powerful device – you’ll have that option. I suspect you’ll be able to pick up an entry-level device – the equivalent of iPad 7, perhaps – for $49 at JB HiFi.
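The back-of-the-envelope arithmetic behind these projections is just repeated doubling. Here is a minimal sketch; the 24-month doubling period follows the common statement of Moore’s Law, and the 64 GB baseline (the top-end 2010 iPad) is my assumption for illustration:

```python
def moores_law_multiplier(years, doubling_period_years=2):
    """How many times capability multiplies, given one doubling per period."""
    doublings = years / doubling_period_years
    return 2 ** doublings

# Ten years at one doubling every twenty-four months: five doublings.
multiplier = moores_law_multiplier(10)
print(multiplier)  # 32.0

# Applied to storage: a top-end 64 GB iPad of 2010 projects to 2 TB by 2020.
storage_gb_2020 = 64 * multiplier
print(storage_gb_2020)  # 2048.0, i.e. two terabytes
```

The same multiplier applies to processing power and display resolution; bandwidth, as noted above, has historically grown even faster than this.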

What sorts of things will the iPad 10 be capable of? How do we put all of that power to work? First off, iPad will be able to see and hear in meaningful ways. Voice recognition and computer vision are two technologies which are on the threshold of becoming ‘twenty year overnight successes’. We can already speak to our computers, and, most of the time, they can understand us. With devices like the Xbox Kinect, cameras allow the computer to see the world around it, and recognize bits of it. Your iPad will hear you, understand your voice, and follow your commands. It will also be able to recognize your face, your motions, and your emotions.

It’s not clear that computers as we know them today – that is, desktops and laptops – will be common in a decade’s time. They may still be employed in very specialized tasks. For almost everything else, we will be using our iPads. They’ll rarely leave our sides. They will become so pervasive that in many environments – around the home, in the office, or at school – we will simply have a supply of them sufficient to the task. When everything is so well connected, you don’t need to have personal information stored in a specific iPad. You will be able to pick up any iPad and – almost instantaneously – the custom features which mark that device as uniquely yours will be downloaded into it.

All of this is possible. Whether any of it eventuates depends on a whole host of factors we can’t yet see clearly. People may find voice recognition more of an annoyance than an affordance. The idea of your iPad watching you might seem creepy to some people. But consider this: I have a good friend who has two elderly parents: his dad is in his early 80s, his mom is in her mid-70s. He lives in Boston while they live in Northern California. But he needs to keep in touch, he needs to have a look in. Next year, when iPad acquires a forward-facing camera – so it can be used for video conferencing – he’ll buy them an iPad, and install it on the wall of their kitchen, stuck on there with Velcro, so that he can ring in anytime, and check on them, and they can ring him, anytime. It’s a bit ‘Jetsons’, when you think about it. And that’s just what will happen next year. By 2020 the iPad will be able to track your progress around the house, monitor what prescriptions you’ve taken (or missed), whether you’ve left the house, and for how long. It’ll be a basic accessory, necessary for everyone caring for someone in their final years – or in their first ones.

Now that we’ve established the basic capabilities and expectations for this device, let’s imagine them in the hands of students everywhere throughout Australia. No student, however poor, will be without their own iPad – the Government of the day will see to that. These students of 2020 are at least as well connected as you are, as their parents are, as anyone is. To them, iPads are not new things; they’ve always been around. They grew up in a world where touch is the default interface. A computer mouse, for them, seems as archaic as a manual typewriter does to us. They’re also quite accustomed to being immersed within a field of very-high-speed mobile broadband. They just expect it to be ‘on’, everywhere they go, and expect that they will have access to it as needed.

How do we make education in 2020 meet their expectations? This is not the universe of ‘chalk and talk’. This is a world where the classroom walls have been effectively leveled by the pervasive presence of the network, and a device which can display anything on that network. This is a world where education can be provided anywhere, on demand, as called for. This is a world where the constructivist premise of learning-by-doing can be implemented beyond year two. Where a student working on an engine can stare at a three-dimensional breakout model of the components while engaging in a conversation with an instructor half a continent away. Where a student learning French can actually engage with a French student learning English, and do so without much more than a press of a few buttons. Where a student learning about the Eureka Stockade can survey the ground, iPad in hand, and find within the device hidden depths to the history. iPad is the handheld schoolhouse, and it is, in many ways, the thing that replaces the chalkboard, the classroom, and the library.

But iPad does not replace the educator. We need to be very clear on that, because even as educational resources multiply beyond our wildest hopes – more on that presently – students still need someone to guide them into understanding. The more we virtualize the educational process, the more important and singular our embodied interactions become. Some of this will come from far away – the iPad offers opportunities for distance education undreamt of just a few years ago – but much more of it will be close up. Even if the classroom does not survive (and I doubt it will fade away completely in the next ten years, but it will begin to erode), we will still need a place for an educator/mentor to come into contact with students. That’s been true since the days of Socrates (probably long before that), and it’s unlikely to change anytime soon. We learn best when we learn from others. We humans are experts in mimesis, in learning by imitation. That kind of learning requires us to breathe the same air together.

No matter how much power we gain from the iPad, no matter how much freedom it offers, no device offers us freedom from our essential nature as social beings. We are born to work together, we are designed to learn from one another. iPad is an unbelievably potent addition to the educator’s toolbox, but we must remember not to let it cloud our common sense. It should be an amplifier, not a replacement, something that lets students go further, faster than before. But they should not go alone.

The constant danger of technology is that it can interrupt the human moment. We can be too busy checking our messages to see the real people right before our eyes. This is the dilemma that will face us in the age of the iPad. Governments will see them as cost-saving devices, something that could substitute for the human touch. If we lose touch, if we lose the human moment, we also lose the biggest part of our ability to learn.

III: The Work of Nations

We can reasonably predict that this is the decade of the tablet, and the decade of mobile broadband. The two of them fuse in the iPad, to produce a platform which will transform education, allowing it to happen anywhere a teacher and a student share an agreement to work together. But what will they be working on? Next year we’ll see the rollout of the National Curriculum, which specifies the material to be covered in core subject areas in classrooms throughout the nation.

Many educators view the National Curriculum as a mandate for a bland uniformity, a lowest-common-denominator approach to instruction, which will simply leave the teacher working point-by-point through the curriculum’s arc. This is certainly not the intent of the project’s creators. Dr. Evan Arthur, who heads up the Digital Education Revolution taskforce in the Department of Education, Employment and Workplace Relations, publicly refers to the National Curriculum as a ‘greenfields’, as though all expectations were essentially phantoms of the mind, a box we draw around ourselves, rather than one that objectively exists.

The National Curriculum outlines the subject areas to be covered, but says very little if anything about pedagogy. Instructors and school systems are free to exercise their own best judgment in selecting an approach appropriate to their students, their educators, and their facilities. That’s good news, and means that any blandness that creeps into pedagogy because of the National Curriculum is more a reflection of the educator than the educational mandate.

Precisely because it places educators and students throughout the nation onto the same page, the National Curriculum also offers up an enormous opportunity. We know that all year nine students in Australia will be covering a particular suite of topics. This means that every educator and every student throughout the nation can be drawing from and contributing to a ‘common wealth’ of shared materials, whether they be podcasts of lectures, educational chatrooms, lesson plans, and on and on and on. As the years go by, this wealth of material will grow as more teachers and more students add their own contributions to it. The National Curriculum isn’t a mandate, per se; it’s better to think of it as an empty Wikipedia. All the article headings are there, all the taxonomy, all the cross references, but none of the content. The next decade will see us all build up that base of content, so that by 2020, a decade’s worth of work will have resulted in something truly outstanding to offer both educators and students in their pursuit of curriculum goals.

Well, maybe.

I say all of this as if it were a sure thing. But it isn’t. Everyone secretly suspects the National Curriculum will ruin education. I ask that we see things differently. The National Curriculum could be the savior of education in the 21st century, but in order to travel the short distance in our minds between where we are (and where we will go if we don’t change our minds) and where we need to be, we need to think of every educator in Australia as a contributor of value. More than that, we need to think of every student in Australia as a contributor of value. That’s the vital gap that must be crossed. Educators spend endless hours working on lesson plans and instructional designs – they should be encouraged to share this work. Many of them are too modest or too scared to trumpet their own hard yards – but it is something that educators and students across the nation can benefit from. Students, as they pass through the curriculum, create their own learning materials, which must be preserved, where appropriate, for future years.

We should do this. We need to do this. Right now we’re dropping the best of what we have on the floor as teachers retire or move on in their careers. This is gold that we’re letting slip through our fingers. We live in an age where we only lose something when we neglect to capture it. We can let ourselves off easy here, because we haven’t had a framework to capture and share this pedagogy. But now we have the means to capture, a platform for sharing – the Ultranet, and a tool which brings access to everyone – the iPad. We’ve never had these stars aligned in such a way before. Only just now – in 2010 – is it possible to dream such big dreams. It won’t even cost much money. Yes, the state and federal governments will be investing in iPads and superfast broadband connections for the schools, but everything else comes from a change in our behavior, from a new sense of the full value of our activities. We need to look at ourselves not merely as the dispensers of education to receptive students, but as engaged participant-creators working to build a lasting body of knowledge.

In so doing we tie everything together, from library science to digital citizenship, within an approach that builds shared value. It allows a student in Bairnsdale to collaborate with another in Lorne, both working through a lesson plan developed by an educator in Katherine. Or a teacher in Lakes Entrance to offer her expertise to a classroom in Maffra. These kinds of things have been possible before, but the National Curriculum gives us the reason to do it. iPad gives us the infrastructure to dream wild, and imagine how to practice some ‘creative destruction’ in the classroom – tearing down its walls in order to make the classroom a persistent, ubiquitous feature of the environment, to bring education everywhere it’s needed, to everyone who needs it, whenever they need it.

This means that all of the preceding is really part of a larger transformation, from education as this singular event that happens between ages six and twenty-two, to something that is persistent and ubiquitous; where ‘lifelong learning’ isn’t a catchphrase, but rather, a set of skills students begin to acquire as soon as they land in pre-kindy. The wealth of materials which we will create as we learn how to share the burden of the National Curriculum across the nation have value far beyond the schoolhouse. In a nation of immigrants, it makes sense to have these materials available, because someone is always arriving in the middle of their lives and struggling to catch up to and integrate themselves within the fabric of the nation. Education is one way that this happens. People also need to have increasing flexibility in their career choices, to suit a much more fluid labor market. This means that we continuously need to learn something new, or something, perhaps, that we didn’t pay much attention to when we should have. If we can share our learning, we can close this gap. We can bring the best of what we teach to everyone who has the need to know.

And there we are. But before I conclude, I should bring up the most obvious point – one so obvious that we might forget it. The iPad is an excellent toy. Please play with it. I don’t mean use it. I mean explore it. Punch all the buttons. Do things you shouldn’t do. Press the big red button that says, “Don’t press me!” Just make sure you have a backup first.

We know that children learn by exploration – that’s the foundation of Constructivism – but we forget that we ourselves also learn by exploration. The joy we feel when we play with our new toy is the feeling a child has when he confronts a box of LEGOs, or a new video game – it’s the joy of exploration, the joy of learning. That joy is foundational to us. If we didn’t love learning, we wouldn’t be running things around here. We’d still be in the trees.

My favorite toys on my iPad are Pocket Universe – which creates a 360-degree real-time observatory on your iPad; Pulse News – which brings some beauty to my RSS feeds; Observatory – which turns my iPad into a bit of an orrery; Air Video – which allows me to watch videos streamed from my laptop to my iPad; and GoodReader – the one app you simply must spend $1.19 on, because it is the most useful app you’ll ever own. These are my favorites, but I own many others, and enjoy all of them. There are literally tens of thousands to choose from, some of them educational, some, just for fun. That’s the point: all work and no play makes iPad a dull toy.

So please, go and play. As you do, you’ll come to recognize the hidden depths within your new toy, and you’ll probably feel that penny drop, as you come to realize that this changes everything. Or can, if we can change ourselves.

Today is a very important day in the annals of computer science. It’s the anniversary of the most famous technology demo ever given. Not, as you might expect, the first public demonstration of the Macintosh (which happened in January 1984), but something far older and far more important. Forty years ago today, December 9th, 1968, in San Francisco, a small gathering of computer specialists came together to get their first glimpse of the future of computing. Of course, they didn’t know that the entire future of computing would emanate from this one demo, but the next forty years would prove that point.

The maestro behind the demo – leading a team of developers – was Douglas Engelbart. Engelbart was a wunderkind from SRI, the Stanford Research Institute, a think-tank spun out from Stanford University to collaborate with various moneyed customers – such as the US military – on future technologies. Of all the futurist technologists, Engelbart was the future-i-est.

In the middle of the 1960s, Engelbart had come to an uncomfortable realization: human culture was growing progressively more complex, while human intelligence stayed within the same comfortable range we’d known for thousands of years. In short order, Engelbart assessed, our civilization would start to collapse from its own complexity. The solution, Engelbart believed, would come from tools that could augment human intelligence. Create tools to make men smarter, and you’d be able to avoid the inevitable chaotic crash of an overcomplicated civilization.

To this end – and with healthy funding from both NASA and DARPA – Engelbart began work on the oN-Line System, or NLS. The first problem in intelligence augmentation: how do you make a human being smarter? The answer: pair humans up with other humans. In other words, networking human beings together could increase the intelligence of every human being in the network. The NLS wasn’t just the online system, it was the networked system. Every NLS user could share resources and documents with other users. This meant NLS users would need to manage these resources in the system, so they needed high-quality computer screens, and a windowing system to keep the information separated. They needed an interface device to manage the windows of information, so Engelbart invented something he called a ‘mouse’.

In other words, in just one demo, Engelbart managed to completely encapsulate absolutely everything we’ve been working toward with computers over the last 40 years. The NLS was easily 20 years ahead of its time, but its influence is so pervasive, so profound, so dominating, that it has shaped nearly every major development in human-computer interface design since its introduction. We have all been living in Engelbart’s shadow, basically just filling out the details in his original grand mission.

Of all the technologies rolled into the NLS demo, hypertext has arguably had the most profound impact. Known as the “Journal” on NLS, it allowed all the NLS users to collaboratively edit or view any of the documents in the NLS system. It was the first groupware application, the first collaborative application, the first wiki application. And all of this more than 20 years before the Web came into being. To Engelbart, the idea of networked computers and hypertext went hand-in-hand; they were indivisible, absolutely essential components of an online system.

It’s interesting to note that although the Internet has been around since 1969 – nearly as long as the NLS – it didn’t take off until the advent of a hypertext system – the World Wide Web. A network is mostly useless without a hypermedia system sitting on top of it, and multiplying its effectiveness. By itself a network is nice, but insufficient.

To a degree true of no other single individual in the field of computer science, we find ourselves living in the world that Douglas Engelbart created. We use computers with raster displays and manipulate windows of hypertext information using mice. We use tools like video conferencing to share knowledge. We augment our own intelligence by turning to others.

That’s why the “Mother of All Demos,” as it’s known today, is probably the most important anniversary in all of computer science. It set the stage for the world we live in, more so than we recognized even a few years ago. You see, one part of Engelbart’s revolution took rather longer to play out. This last innovation of Engelbart’s is only just beginning.

II: Share and Share Alike

In January 2002, Oregon State University, the alma mater of Douglas Engelbart, decided to host a celebration of his life and work. I was fortunate enough to be invited to OSU to give a talk about hypertext and knowledge augmentation, an interest of mine and a persistent theme of my research. Not only did I get to meet the man himself (quite an honor), I got to meet some of the other researchers who were picking up where Engelbart had left off. After I walked off stage, following my presentation, one of the other researchers leaned over to me and asked, “Have you heard of Wikipedia?”

I had not. This is hardly surprising; in January 2002 Wikipedia was only about a year old, and had all of 14,000 articles – about the same number as a children’s encyclopedia. Encyclopedia Britannica, though it had put itself behind a “paywall,” had over a hundred thousand quality articles available online. Wikipedia wasn’t about to compete with Britannica. At least, that’s what I thought.

It turns out that I couldn’t have been more wrong. Over the next few months – as Wikipedia approached 30,000 articles in English – an inflection point was reached, and Wikipedia started to grow explosively. In retrospect, what happened was this: people would drop by Wikipedia, and if they liked what they saw, they’d tell others about Wikipedia, and perhaps make a contribution. But they first had to like what they saw, and that wouldn’t happen without a sufficient number of articles, a sort of “critical mass” of information. While Wikipedia stayed beneath that critical mass it remained a toy, a plaything; once it crossed that boundary it became a force of nature, gradually then rapidly sucking up the collected knowledge of the human species, putting it into a vast, transparent and freely accessible collection. Wikipedia thrived inside a virtuous cycle where more visitors meant more contributors, which meant more visitors, which meant more contributors, and so on, endlessly, until – as of this writing – the English-language Wikipedia holds 2.65 million articles.
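The critical-mass dynamic described above can be illustrated with a toy simulation. Every number here – the visitor pool, the 1% conversion rate, the articles-per-contributor figure – is invented for illustration, not drawn from Wikipedia’s actual history; the point is only the shape of the curve, slow at first and then explosive:

```python
def virtuous_cycle(steps=8, articles=1_000, critical_mass=30_000):
    """Toy model: more articles attract more visitors, a fraction of whom
    become contributors, whose articles attract still more visitors."""
    history = [articles]
    for _ in range(steps):
        # Word of mouth: appeal saturates once articles dwarf the critical mass.
        visitors = 1_000_000 * articles / (articles + critical_mass)
        contributors = int(visitors * 0.01)   # assume 1% of visitors contribute
        articles += contributors * 2          # assume 2 new articles each
        history.append(articles)
    return history

growth = virtuous_cycle()
# Each step's increment exceeds the last: the feedback loop compounds.
increments = [b - a for a, b in zip(growth, growth[1:])]
print(growth[0], growth[-1])
```

Below the critical mass the site grows arithmetically; as it approaches and passes it, each cohort of contributors recruits a larger one, which is exactly the inflection Wikipedia hit in 2002.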

Wikipedia’s biggest problem today isn’t attracting contributions, it’s winnowing the wheat from the chaff. Wikipedia has constant internal debates about whether a subject is important enough to deserve an entry in its own right; whether this person has achieved sufficient standards of notability to merit a biographical entry; whether this exploration of a fictional character in a fictional universe belongs in Wikipedia at all, or might be better situated within a dedicated fan wiki. Wikipedia’s success has been proven beyond all doubt; managing that success is the task of the day.

While we all rely upon Wikipedia more and more, we haven’t really given much thought as to what Wikipedia gives us. At its most basic level, Wikipedia gives us high-quality factual information. Within its major subject areas, Wikipedia’s veracity is unimpeachable, and has been put to the test by publications such as Nature. But what do these high-quality facts give us? The ability to make better decisions.

Given that we try to make decisions about our lives based on the best available information, the better that information is, the better our decisions will be. This seems obvious when spelled out like this, but it’s something we never credit Wikipedia with. We think about being able to answer trivia questions or research topics of current fascination, but we never think that every time we use Wikipedia to make a decision, we are improving our decision-making ability. We are improving our own lives.

This is Engelbart’s final victory. When I met him in 2002, he seemed mostly depressed by the advent of the Web. At that time – pre-Wikipedia, pre-Web 2.0 – the Web was mostly thought of as a publishing medium, not as something that would allow the multi-way exchange of ideas. Engelbart had known for forty years that sharing information is the cornerstone of intelligence augmentation. And in 2002 there wasn’t a whole lot of sharing going on.

It’s hard to imagine the Web of 2002 from our current vantage point. Today, when we think about the Web, we think about sharing, first and foremost. The Web is a sharing medium. There’s still quite a bit of publishing going on, but that seems almost an afterthought, the appetizer before the main course. I have to imagine that this pleases Engelbart immensely, as we move ever closer to the models he pioneered forty years ago. It’s taken some time for the world to catch up with his vision, but now we seem to have a tool fit for knowledge augmentation. And Wikipedia is really only one example of the many tools we have available for knowledge augmentation. Every sharing tool – Digg, Flickr, YouTube, del.icio.us, Twitter, and so on – provides an equal opportunity to share and to learn from what others have shared. We can pool our resources more effectively than at any other time in history.

The question isn’t, “Can we do it?” The question is, “What do we want to do?” How do we want to increase our intelligence and effectiveness through sharing?

III: Crowdsource Yourself

Now we come to all of you, here together for three days, to teach and to learn, to practice and to preach. Most of you are the leaders in your particular schools and institutions. Most of you have gone way out on the digital limb, far ahead of your peers. Which means you’re alone. And it’s not easy being alone. Pioneers can always be identified by the arrows in their backs.

So I have a simple proposal to put to you: these three days aren’t simply an opportunity to bring yourselves up to speed on the latest digital wizardry, they’re a chance to increase your intelligence and effectiveness, through sharing.

All of you, here today, know a huge amount about what works and what doesn’t, about curricula and teaching standards, about administration and bureaucracy. This is hard-won knowledge, gained on the battlefields of your respective institutions. Now just imagine how much it could benefit all of us if we shared it, one with another. This is the sort of thing that happens naturally and casually at a forum like this: a group of people will get to talking, and, sooner or later, all of the battle stories come out. Like old Diggers talking about the war.

I’m asking you to think about this casual process a bit more formally: How can you use the tools on offer to capture and share everything you’ve learned? If you don’t capture it, it can’t be shared. If you don’t share it, it won’t add to our intelligence. So, as you’re learning how to podcast or blog or set up a wiki, give a thought to how these tools can be used to multiply our effectiveness.

I ask you to do this because we’re getting close to a critical point in the digital revolution – something I’ll cover in greater detail when I talk to you again on Thursday afternoon. Where we are right now is at an inflection point. Things are very fluid, and could go almost any direction. That’s why it’s so important we learn from each other: in that pooled knowledge is the kind of intelligence which can help us to make better decisions about the digital revolution in education. The kinds of decisions which will lead to better outcomes for kids, fewer headaches for administrators, and a growing confidence within the teaching staff.

Don’t get me wrong: these tools aren’t a panacea. Far from it. They’re simply the best we’ve got, right now, to help us confront the range of thorny issues raised by the transition to digital education. You can spend three days here, and go back to your own schools none the wiser. Or, you can share what you’ve learned and leave here with the best that everyone has to offer.

There’s a word for this process, a word which powers Wikipedia and a hundred thousand other websites: “crowdsourcing”. The basic idea is encapsulated in an old proverb: “Many hands make light work.” The two hundred of you, here today, can all pitch in and make light work for yourselves. Or not.

Let me tell you another story, which may help seal your commitment to share what you know. In May of 1999, Silicon Valley software engineer John Swapceinski started a website called “Teacher Ratings.” Individuals could visit the site and fill in a brief form with details about their school, and their teacher. That done, they could rate the teacher’s capabilities as an instructor. The site started slowly, but, as is always the case with these sorts of “crowdsourced” ventures, as more ratings were added to the site, it became more useful to people, which meant more visitors, which meant more ratings, which meant it became even more useful, which meant more visitors, which meant more ratings, etc.

Somewhere in the middle of this virtuous cycle the site changed its name to “Rate My Professors.com” and changed hands twice. For the last two years, RateMyProfessors.com has been owned by MTV, which knows a thing or two about youth markets, and can see one in a site that has nine million reviews of one million teachers, professors and instructors in the US, Canada and the UK.

Although the individual action of sharing some information about an instructor seems innocuous enough, in aggregate the effect is entirely revolutionary. A student about to attend university in the United States can check out all of her potential instructors before she signs up for a single class. She can choose to take classes only with those instructors who have received the best ratings – or, rather more perversely, only with those instructors known to be easy graders. The student is now wholly in control of her educational opportunities, going in eyes wide open, fully cognizant of what to expect before the first day of class.

Although RateMyProfessors.com has enlightened students, it has made the work of educational administrators exponentially more difficult. Students now talk, up and down the years, via the recorded ratings on the site. It isn’t possible for an institution of higher education to disguise an individual who happens to be a world-class researcher but a rather ordinary lecturer. In earlier times, schools could foist these instructors on students, who’d be stuck for a semester. This no longer happens, because RateMyProfessors.com effectively warns students away from the poor-quality teachers.

This one site has undone all of the neat work of tenure boards and department chairs throughout the entire world of academia. A bad lecturer is no longer a department’s private little secret, but publicly available information. And a great lecturer is no longer a carefully hoarded treasure, but a hot commodity on a very public market. The instructors with the highest ratings on RateMyProfessors.com find themselves in demand, receiving outstanding offers (with tenure) from other universities. All of this plotting, which used to be hidden from view, is now fully revealed. The battle for control over who stands in front of the classroom has now been decisively lost by the administration in favor of the students.

Whether it’s Wikipedia, or RateMyProfessors.com, or the promise of your own work over these next three days, Douglas Engelbart’s original vision of intelligence augmentation holds true: it is possible for us to pool our intellectual resources, and increase our problem-solving capacity. We do it every time we use Wikipedia; students do it every time they use RateMyProfessors.com; and I’m asking you to do it, starting right now. Good luck!

Few terms convey less meaning than “futurist.” What exactly is a futurist? What does he do? The definition, so far as I choose to apply it, is simple: a futurist looks at the present, at human behavior and human tendencies, to imagine how these trends might develop. This is less science than storytelling; the development of any human endeavor is fraught with non-linear events, which yank the arrow of progress this way and that. One can never know the future with any precision, and the farther the future recedes down the light-cone, the less distinct it becomes. We might know with high accuracy what will happen tomorrow. But five years from now, or twenty? That’s more alchemy than anthropology.

Yet, in order to play the game, futurists must make predictions. It’s what we do. So, for those few futurists who are willing to take the big risks of making short-term predictions – ranging from twelve to thirty-six months in the future – the game is particularly dangerous. Any futurist can predict what will come to pass in twenty years’ time, because no one will remember how wrong they were. But to make a prediction for the near term risks being revealed as a charlatan. Such predictions must be considered carefully, revealed hesitantly, and pronounced provisionally. Doing that will give you an out later on. Yet I have never been one to be either hesitant or provisional; I leap in where braver (and, arguably, wiser) souls fear to tread. My particular brand of futurism – the “futurest” – is expansive, encompassing, and uncompromisingly revolutionary. I say this not to tout my strengths, but rather, to reveal the dangers.

In the early 1990s I predicted that VR would become the standard interface metaphor for computers by the 21st century. Did I get that right? It seems not; after all, we still use windows and mice as the standard interaction paradigm, just as we did back in 1990. Yet, if we can draw anything from the recent and somewhat surprisingly successful introduction of the Nintendo Wii, it’s that VR did arrive, is pervasive, and has become a dominant interface metaphor. Just not on the computer desktop. VR isn’t about head-mounted displays, although it might have seemed so, fifteen years ago. VR is about bringing the body into contact with the simulated world. Nintendo, with its clever, cheap, attractive and highly functional Wiimote, has done just that. They’ve done what decades of other researchers and engineers failed to do: they’ve brought us into the game. So predictions might come to pass, but rarely do they come in the form imagined. But every so often, when you step up to the plate, you connect completely, and knock one out of the park.

II

In early December 2005 I was invited to give a plenary presentation to the Australian conference on Interaction and Entertainment Design. This was one of those rare opportunities to talk on any subject I desired. Most of these academics wanted to talk about the latest trends in gaming and online communities; having been through that, and more, a decade ago, I decided to take the conversation in an entirely different direction, by focusing on that most common of our electronic peripherals, the mobile phone.

So common as to be nearly invisible, the mobile phone has become the focal point of our social existence. Yet, despite its constant presence, the mobile seemed ill-suited to the task of being our perpetual servant. It seemed stuck in a liminal position, between the wired world and the pervasive networked environment which is the global reality of the 21st century. The mobile was broken, and needed to be fixed. Hence, working with Angus Fraser, my graduate student – who, on his own, has had years of experience developing interfaces and applications for mobile phones – I wrote “The Telephone Repair Handbook”. I started off by challenging the audience to answer three questions:

Q: Why does a mobile phone have a keypad? We never use it.
A: Because wired phones have keypads. And so we can enter text. Badly.

Q: How many networks are our mobile phones really connected to?
A: The answer is generally at least three: GPRS/GSM, Bluetooth and IrDA.

Q: What are our phones doing all the time they’re idle?
A: Nothing. They’re just waiting for a phone call or a text message to make their day.

These basic failures in the design of the mobile phone, I argued, arose from our fundamental misunderstanding of the function of the device. Mobile phones are not simply passive terminals, waiting to be activated. They are (or rather, should be) active communications processors, managing the minutiae of our social relationships.

Once I’d set up the straw men, and knocked them down, I described a new kind of mobile phone, designed from the outset to be a communications servant, a nexus which tracked, facilitated and recorded all of the social interactions happening through it, or, via Bluetooth, proximal to it. And, because I can code, I demonstrated the very first version of Blue States, a small Java J2ME application which allowed mobile phones to note and record the presence of other Bluetooth devices in their immediate proximity. This information, I insisted, could become the foundation of an emergent social network. The mobile, at all times with you, or nearby, knows your social life better than you do. When exposed, and analyzed, this data becomes a powerful tool. Angus and I worked up a few user scenarios to demonstrate our point: the mobile can be so much more. All it needs is the right software. I finished by encouraging this room of researchers to re-invent the mobile phone, to make it the digital social secretary, the majordomo, the grand vizier.
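The core idea behind Blue States is simple enough to sketch. The real application ran on J2ME and used the JSR-82 Bluetooth API to do the actual device discovery; the plain-Java sketch below skips the radio entirely and just shows the bookkeeping – every time a scan spots a device, log the sighting, and over many scans the tallies become a crude map of who is habitually near you. The class and method names here are illustrative, not Blue States’ actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the Blue States bookkeeping (not the real code):
// each Bluetooth scan reports the device addresses it found nearby, and we
// tally how often each one turns up. Devices that appear again and again
// are the people in your life -- the seed of an emergent social network.
public class ProximityLog {
    private final Map<String, Integer> sightings = new HashMap<>();

    // Called once per discovered device, per scan. In the real J2ME
    // application this would be driven by JSR-82 discovery callbacks.
    public void recordSighting(String deviceAddress) {
        sightings.merge(deviceAddress, 1, Integer::sum);
    }

    // How many scans has this device appeared in?
    public int timesSeen(String deviceAddress) {
        return sightings.getOrDefault(deviceAddress, 0);
    }

    public static void main(String[] args) {
        ProximityLog log = new ProximityLog();
        // Simulated scans: one phone keeps turning up, another is a stranger.
        log.recordSighting("00:11:22:33:44:55");
        log.recordSighting("00:11:22:33:44:55");
        log.recordSighting("AA:BB:CC:DD:EE:FF");
        System.out.println(log.timesSeen("00:11:22:33:44:55")); // prints 2
    }
}
```

From tallies like these, the user scenarios we sketched follow naturally: the phone can notice that a device it has seen dozens of times has just come into range, and surface that fact to you.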

A month after I gave that presentation, I left my teaching position, and began coding, full-time, on Blue States, readying it for its first deployment, at ISEA San Jose. As an art project at an art festival, it might influence the creative minds of electronic artists. Perhaps they would begin to pervert their own mobile phones, transforming them into something entirely more useful.

As it turns out, I didn’t have to wait for the artists to catch up with me. For it seems that even as I was beginning my research work, more than two years ago, and formulating my theories on the future of the mobile telephone, another group of researchers set to the same task, and came to many of the same conclusions.

III

Yesterday, on a stage in San Francisco, Steve Jobs, CEO of the now-renamed Apple, Inc., introduced the iPhone, Apple’s much-rumored and long-awaited convergence device. Three things must be noted as essential to the design:

It has no keyboard.

It is connected to wireless internet, Bluetooth and GPRS/EDGE networks simultaneously, and moves between each seamlessly.

It has a sophisticated operating system, and is constantly executing several tasks at once. It is never truly idle.

The iPhone is a combination of an iPod and a mobile telephone, and these elements have been fused together with a fingertip-based user interface to make the device nearly as tactile and natural as any familiar object. It is a mobile phone, but it has – finally and rationally – lost its vestigial connections to the wired phone. It is not simply a wireless phone; it is a network terminal, with all that implies. That it has a true operating system – instead of the “toy” operating systems of earlier mobile phones, which are cranky, and which crash all too often – means that programmers can harness the capabilities of the device wholly, taking it in directions that its creators at Apple never intended. This is not simply an iPod, or a mobile phone, but a complete redefinition of the device. This, quite simply, is the future, as I predicted it, thirteen months ago.

Will the iPhone succeed? No one yet knows. The device is both new enough and different enough that significant changes in user behavior must follow in its wake. Like the Macintosh with its Graphical User Interface, this transformation might take a decade to become the dominant interaction paradigm. Or – given the level of hype and excitement seen in the media in the last twenty-four hours – it might be the right device, at the right time. It may be that Apple has told the world not only why the telephone must be reinvented, but has shown it how it should be done. If they have, the iPhone will make the iPod look like a weak overture. Copies and clones will proliferate, skirting the edge of every one of Apple’s two hundred iPhone patents. And people will begin to have very different expectations for their mobile phones.

While the iPhone both excites and dazzles me with its ingenuity, design and inventiveness, I am not completely satisfied with it. It is still a phone, an iPod, and an “internet communicator” rolled into one. It is not, in any true sense, wholly integrated. There is no way for my friends in San Francisco, with their iPhones, to know what my favorite songs are, or what I’m listening to at the moment, or what I’m reading on the web, or who I’m texting. It is halfway to the social device which I see as the inevitable end point. But the rest is just software. The hardware platform is there, ready and waiting, and will be disrupted by a dozen innovations that no one can yet predict. But I do predict they will happen, in the next twelve to thirty-six months.