What is contemplative computing?

Contemplative computing may sound like an oxymoron, but it's really quite simple. It's about how to use information technologies and social media so they're not endlessly distracting and demanding, but instead help us be more mindful, focused and creative.

About Alex Soojung-Kim Pang

I write about people, technology, and the worlds they make.

My book on contemplative computing, The Distraction Addiction, was published by Little, Brown and Company in 2013. It has been translated into Dutch (as Verslaafd aan afleiding) and Spanish (as Enamorados de la Distracción); Russian, Chinese, and Korean translations are in the works.

My next book, Rest: Why Working Less Gets More Done, is under contract with Basic Books. Until it's out, you can follow my thinking about deliberate rest, creativity, and productivity on the project Web site.

December 2011

In barely one generation we’ve moved from exulting in the time-saving devices that have so expanded our lives to trying to get away from them — often in order to make more time. The more ways we have to connect, the more many of us seem desperate to unplug. Like teenagers, we appear to have gone from knowing nothing about the world to knowing too much all but overnight….

We have more and more ways to communicate, as Thoreau noted, but less and less to say. Partly because we’re so busy communicating. And — as he might also have said — we’re rushing to meet so many deadlines that we hardly register that what we need most are lifelines.

Iyer talks about friends who've taken up yoga, digital sabbaths, or sports that take them out of cellphone range or into environments that are unsafe for ordinary consumer electronics (sailing, for example). He himself goes on retreats on a regular basis, even though he says "it's vital"

to stay in touch with the world, and to know what’s going on; I took pains this past year to make separate trips to Jerusalem and Hyderabad and Oman and St. Petersburg, to rural Arkansas and Thailand and the stricken nuclear plant in Fukushima and Dubai. But it’s only by having some distance from the world that you can see it whole, and understand what you should be doing with it.

I think this last point is really critical. Connectivity and silence, collaboration and contemplation, sociability and solitude, are not opposites, and we shouldn't think that we should choose between one and the other. Rather, they're like food and water, or parents and children. Each is essential; they're different but not mutually exclusive. The great challenge is to find places for them all, and to know how to use them.

53% of young adults ages 18-29 go online for no particular reason except to have fun or to pass the time. Many of them go online in purposeful ways, as well. But the results of a survey by the Pew Research Center’s Internet & American Life Project show that young adults’ use of the internet can at times be simply for the diversion it presents. Indeed, 81% of all young adults in this age cohort report they have used the internet for this reason at least occasionally.

These results come in the larger context that internet users of all ages are much more likely now than in the past to say they go online for no particular reason other than to pass the time or have fun. Some 58% of all adults (or 74% of all online adults) say they use the internet this way. And a third of all adults (34%) say they used the internet that way “yesterday” – or the day before Pew Internet reached them for the survey. Both figures are higher than in 2009 when we last asked this question and vastly higher than in the middle of the last decade.

Via James Fallows, I just found Bytes of China, written by a Fulbright scholar doing fieldwork in China. It's great work: check out this post on supercomputing in China, and the soft infrastructure, cultural factors, and stories that support innovation. The last caught my eye, because I've been doing a lot of work recently on how computers shape the way we think about ourselves.

Transparency is important, but by itself it isn't enough to achieve the goals that make us want to fight for transparency in the first place. So if you're fighting for transparency, you need to understand these three things:

My anniversary is coming up soon: my wife and I got married on New Year's Eve fifteen years ago. One of the things we did at the wedding was put a bunch of disposable cameras on the tables at the dinner, so people could take pictures of their tables, the dancing, etc.

This is the sort of situation Hipstamatic has in mind for its new disposable camera. Of course, the concept of a "disposable camera" app is a curious piece of semantics, as the app itself is not really "disposable" the way a physical camera is. (Or maybe it is; I haven't used it yet.) In this case, though, disposable is a synonym for "social": you create a "roll" for an event, make it accessible to friends, and then all the pictures the group takes "are sent to the same roll... [and] instantly exchanged once it's finished."

I won't dwell on the deeper implications of making "social" and "disposable" into related terms, nor the fact that the video sets forth the premise that photography is valuable because sometimes you're too drunk to remember the night before. Maybe I'll try out the program at some Christmas parties this weekend.

Occasionally over the last few years I've had clients who had butlers. Not many wealthy people seem to have butlers any more-- rich people have assistants, chiefs of staff, and so on, but few have people they call butlers. The butlers I've met, though, have been fascinating people.

One reason I find them interesting is that the concept of the butler comes up regularly in high tech. The idea of "digital butlers" has been around for a long time, but it seems to me to be based almost completely on an idealized, or highly formal and simplified, version of what butlers actually do.

For example, in a piece on "Interface Agents as Digital Butlers," Nicholas Negroponte argues for "a future where your interface agent can read every newspaper and catch every broadcast on the planet, and then, from this, construct a personalized summary"-- but the piece assumes that a "butler" is just "someone who applies intimate knowledge about you in your service."

Other projects use the term "butler" as a signifier that points not only to service and intimacy, but unobtrusiveness as well. A 2001 digital butler project at Microsoft Research aimed to "give the automated assistants the manners of a classy English butler who knows when to interrupt and when to disappear." Ten years later, design student Jessie Torres proposed a Personal Butler Digital app that would be "an engaging and interactive call and messaging manager" that works "'intuitively behind the scenes' as any dutiful mostly quiet butler will do."

Occasionally, the term "butler" just refers to something high-tech and personal. Motorola's 2009 Digital Butler concept phone consisted of a "circular touch-screen interface, accelerometer technology, PDA phone, full-resolution built-in multimedia LED projector; squeeze buttons on the perimeter and of course a full-time network connection to VIP services." In other words, it's essentially a round iPhone. (As a couple of people have pointed out, the Apple Knowledge Navigator video had something akin to a digital butler.)

But meeting real butlers has taught me that in real life, the work of being a butler is a lot more complex than task scheduling, finding stuff, and so on. My suspicion is that you could design some really terrific software-- or hybrid software / app / hardware / furniture / architecture / whatever-- if you did some in-depth interviews and ethnographic work with real live butlers, and really studied what they do in real life.

One of the toughest aspects of the job is trying to second-guess my employer. Not only what, but when, how and where. Although I have some involvement in every aspect of his life, from dressing him in the morning and serving his breakfast to managing his homes, planning trips and representing him at meetings, it takes a long time to get to know someone well enough to anticipate what they will like….

A good butler should offer a diplomatic solution to any awkward moment, and never be afraid of standing up to a terrifying boss.

I believe the true test of an intelligent modern butler is not how much he knows how to do, but how he behaves when he doesn't know.

Constantly by my master's side, I am privy to things that even his inner circle do not know. This can be infuriating to others and there are always political waters to navigate. But with knowledge comes power, and with power comes responsibility. As the relationship grows, I observe his quirks, his weaknesses and his vices, some of which are not always palatable. Once I have earned his trust, the lines between master and servant begin to blur. I do not expect to be treated equally (nor would I want to be), but the hope is that one day he will come to rely upon having me by his side. This is the unspoken understanding, that one day the balance of power will shift, and the butler will know more about what his master wants than the master himself. Classic Jeeves and Wooster stuff.

As a modern-day butler I'm expected to be well versed in etiquette, and conduct myself with a suitable demeanor; on the other hand I must also adapt to my master's culture and all the contradictions this brings with it. I am always on call, and should never say no unless I can be certain that no is the only answer. The modern day butler must be able to navigate the trials and tribulations of the modern world with efficiency and style and, in whichever way he can, make life a little bit easier for his master. In that respect my job probably hasn't changed much since the 19th century, other than the fact I carry two BlackBerrys instead of tails.

The new report on the crash of Air France flight 447 concludes that pilot error was responsible for the crash-- or more accurately, that the pilot's mental model of how so safe a plane would react to a challenging situation failed. As Popular Mechanics explains,

We now understand that, indeed, AF447 passed into clouds associated with a large system of thunderstorms, its speed sensors became iced over, and the autopilot disengaged. In the ensuing confusion, the pilots lost control of the airplane because they reacted incorrectly to the loss of instrumentation and then seemed unable to comprehend the nature of the problems they had caused. Neither weather nor malfunction doomed AF447, nor a complex chain of error, but a simple but persistent mistake on the part of one of the pilots.

As Jalopnik puts it, the pilot misunderstood how a plane designed to be flown automatically would respond to problems:

[O]ne of the two junior officers in charge of an Air France flight that crashed in June 2009 was of the belief that he couldn't crash the plane and thus made poor decisions because of misunderstanding the complex systems designed to protect the aircraft.

Jalopnik argues that this system, and ones like adaptive cruise control, work 90% of the time, but have an underappreciated ability to fail catastrophically:

[The system] works extremely well when it is fully in control, but when it loses a key piece of information and requires an input from a human — just like in the tragic Air France flight — things can go terribly wrong.

It's a textbook example of Yale sociologist Charles Perrow's idea of "normal accidents," and his argument that safety systems can reduce the rate of accidents but make the ones that do happen catastrophic.

The Popular Mechanics article is a chilling reconstruction of the last minutes of the flight, and how the pilots managed to stall the plane.

Mitsubishi recently showed off a new concept car interior featuring wraparound displays, a yoke rather than a steering wheel, and buttons that appear or disappear depending on use context.

This last is getting some well-deserved heat. As one commenter summarized (accurately),

Variable controls and data are an insanely bad idea for a car. Insanely bad.

When driving, you want 100% consistency in your inputs and outputs. You want the same control to do the same thing every time. You want to look in the same place for the same data every time. This is because you don't have time to analyze in those moments when you MUST do something by reflex. Such moments occur all the time when driving.

As if we don't have enough distractions when driving, we need to design distractions INTO the driving experience.

Work helps prevent one from getting old. My work is my life. I cannot think of one without the other. The man who works and is never bored, is never old. A person is not old until regrets take the place of hopes and plans. Work and interest in worthwhile things are the best remedy for aging.

While I've read most of the technical articles that Jeremy Bailenson's lab has published (though from his perch at Xerox PARC, Nick Yee is giving his alma mater a run for its money), I still need to read his new book, coauthored with Jim Blascovich. The Los Angeles Times, which runs some of the most disreputable reviews you'll ever encounter, liked it:

Humanity may be in the process of being transformed by a virtual revolution, but as Jim Blascovich and Jeremy Bailenson tell it in their exhilarating book "Infinite Reality," virtual worlds are as old as human experience, whether one is thinking of the cave paintings in Lascaux, France, the Ayahuasca trips of the Amazonian Urarina people or Orson Welles' 1938 radio broadcast of "The War of the Worlds." To a large extent, then, the virtual revolution underway is not merely a technological sleight of hand in which digital immersive tools trick the mind into accepting artificial environments as real, but something more profound — the fulfillment of an ancient human impulse. It turns out we have always been, to borrow a phrase, "here to go," from the "grounded" physical world to the virtual.

The Internet allows us to do all kinds of things we never imagined possible. It lets us communicate with people across the world. We can learn whatever we want at the click of a button. We can navigate roads using our iPhones, and translate languages within seconds. It makes us smarter, and more versatile, and faster than ever. But the Web isn’t just a truly extraordinary invention, it is the apex of human evolution — and the ultimate evolutionary adaptation.

It may seem strange to think of the Web as part of the process of natural selection, but Raymond Neubauer, a professor at the University of Texas, doesn’t think so. In his far-reaching new book, “Evolution and the Emergent Self,” he argues that technology should be seen as part of our planet’s grand evolutionary narrative. He claims that two evolutionary strategies — one, emphasizing simplicity and rapid reproduction (as in bacteria), and the other, emphasizing complexity and hyper-intelligence (as in humans) — have been hugely successful in dominating the planet. The book charts the ways those strategies have managed to pop up everywhere from the animal kingdom to cellphones.

Okay, I have no idea what that title means. But I wanted to flag this review:

According to Mayer-Schönberger, we have committed too much information to “external memory,” thus abandoning control over our personal records to “unknown others.” Thanks to this reckless abandonment, these others gain new ways to dictate our behavior. Moreover, as we store more of what we say for posterity, we are likely to become more conservative, to censor ourselves and err on the side of saying nothing….

Delete is more a romanticist rebellion against technology than a how-to manual. The focus of the rebellion is technologically enhanced remembering, and Delete is an impassioned call for less of it. Unfortunately, this interesting argument suffers from three large and arguably fatal flaws: a very loose account of what memory is, an insufficient appreciation of the value of remembering, and—most important for public policy—an unconvincing effort to distinguish the animating concerns about memory from more conventional (and serious) concerns about privacy.

In Delete, Mayer-Schönberger traces the history of... external memories – cave paintings, scrolls, photographic slides, diaries – and their importance to the flourishing of human knowledge. "Since the early days of humankind," he writes, "we have tried to remember, to preserve our knowledge, to hold on to our memories and we have devised numerous devices and mechanisms to aid us. Yet through millennia, forgetting has remained just a bit easier and cheaper than remembering."

No longer. Because of the digital revolution, he argues, it is easier to keep everything – the drunken email you sent your boss, the photo you put on Facebook in which you're doing something non-CV-enhancing to an inflatable cow – rather than go through the palaver of deciding what to consign to oblivion.

That's because so many of our external memories – digital pictures, emails – are now hardly as heavy as Mayer-Schönberger's stepfather's glass slides, but lighter than bees' wings. The overabundance of cheap storage on hard disks means that it is no longer economical to even decide whether to remember or forget. "Forgetting – the three seconds it takes to choose – has become too expensive for people to use," he writes.

It seems to me that much of the debate about remembering vs. forgetting, and human vs. digital memory, is clouded by the participants' conflation of different kinds of memory. The memory for phone numbers is very different from the memory for family events, and we need to recognize that offloading the one sort of memory doesn't necessarily threaten the other.

Delete argues that digital memory has the capacity both to trap us in the past and to damage our trust in our own memories. When I read an old email describing how angry I once was at someone, I am likely to find myself becoming angry again, even if I have since forgiven the person. I may trust digital records over my own memory, even when these records are partial or positively misleading. Forgetting, in contrast, not only serves as a valuable social lubricant, but also as a bulwark of good judgment, allowing us to give appropriate weight to past events that are important, and to discard things that are not. Digital memory - which traps us in the past - may weaken our ability to judge by distorting what we remember.

Reading the great article by Liam Bannon, director of the Interaction Design Centre at the University of Limerick, on forgetting as a feature, not a bug. His central insight is that "human-computer interaction is largely founded on a view that compares the capability of humans and machines" (9), and that in the case of memory this raises some problems.

[T]he dominant perspective in the human sciences over the past quarter-century has been one that views the human mind as an information-processing device, similar to computing machines. This computer model of mind has blinded us to a number of crucial features of human thinking, most importantly, the active and embodied nature of human thinking and acting in the world. In the context of our discussions on memory, I argue that this approach has over-emphasized a passive rather than an active model of human memory, ignoring the fact that remembering and forgetting are active processes....

[New] technologies are currently being viewed as either substitutes for, or possible augmentations of, human faculties. I argue that the proffered scenarios of computerized ‘help’ for human activities evident in the ubiquitous computing world tends to focus on augmentation of human remembering, with sensors and computer networks archiving vast amounts of data, but neglects to consider what augmentation might mean when it comes to that other human activity, namely, forgetting. (5)

Our models of memory are replete with technical terms such as ‘erasure’, ‘content addressing’, ‘retrieval’, which equate human and computer memory. Yet it has been common knowledge within the human sciences for decades that human memory is not akin to the storage model of computer memory. (5)

[In this model] forgetting is seen as one more example of the fragility of the human mind, where it loses out to computers, with their ability to retain information indefinitely. Forgetting is thus seen as a bug in the human makeup, an aspect of the human memory system that has negative connotations. (6)

There's no computational equivalent of human institutions for forgetting, which is problematic because forgetting is bound up with forgiveness: as Bannon notes, there are any number of social and legal practices-- "pardon, amnesty, Catholic absolution in confession" (10)-- that officially serve to close off an event in a person's (or a group's) past. Not only is there no similar function with digital technologies, they work against any such ability: it's harder now to expunge criminal records, for example. (Some of the people who are mining public arrest records are awesomely colorful, or shady, characters, depending on your point of view.) Bannon does a nice job of helping us become more aware of the stakes in not creating means of digital forgetting, and of treating forgetting as a bug rather than a feature.

I'll save my current thinking about this subject for the book-- I'm trying to explain how our interactions with computers, and the language we employ when talking about computers, affect our mental models of human intelligence and the mind-- but this piece I wrote in 2008 on keywords of human-computer difference gives a sense of where my argument will go.

As a blog post, it holds up reasonably well. The fact that it's inspired by Ellen Ullman's lovely article on memory and technology helps.

If you were looking for a subject where most people would think, "I live it, so tell me why I have to dive into the scholarly literature on it?", frustration with computers would rank pretty high, somewhere in the same category as breathing. However, a combination of things-- a problem with my Bluetooth keyboard pairing with my iPad, an all-day OSX updating marathon, and a regular problem my kids have using our HP printer-- got me thinking about the problem of frustration in the user experience. It seems to me that frustration is one of those things that's common, that everyone experiences, but that can still surprise you if you look at it closely.

So I started doing some digging, and came across one of my colleagues at Microsoft Research Cambridge, a brilliant postdoc named Helena Mentis. It turns out she did an M.A. thesis on frustrating user experiences. (Of course, there are any number of jokes you could make that (fairly or not) put the words "Microsoft" and "frustrating user experiences" together in the same sentence, but I won't go there.)

Interestingly, she found that the frustrations most memorable to users occur while the computer is responding to something you've done-- that is, you've clicked on a link and are waiting for it to load in your browser, or hit "Print," etc. (this is what Donald Norman calls the Outcome phase). She also found that the most widely mentioned sources of frustration

are intrusive and interrupt the cognitive flow of the user. When the user decided on what goal they wanted to achieve they had an idea of the steps that were needed to complete that goal. However, when there was an unanticipated interruption, the user had to compensate for that interruption thus breaking the cognitive flow.

This seems to be an important rule for interface design and responsive systems. Responses of a system should not interrupt the user’s cognitive flow and should not take control away from the user. If there is a system response that could possibly be intrusive, allow the user to easily regain control. These interruptions are remembered by the users and color their perception of the experience of using the system.
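To make that rule concrete, here is a minimal, hypothetical sketch (my own, not from Mentis's thesis) of a long-running system response that reports status without a modal interruption and lets the user take back control at any moment; the function and variable names are illustrative placeholders.

```typescript
// Hypothetical sketch of a non-intrusive system response: the slow operation
// runs asynchronously, status is reported without a modal dialog, and the
// user can cancel and regain control at any time. Names are placeholders.

function slowSystemResponse(ms: number, signal: AbortSignal): Promise<void> {
  // Stand-in for something like sending a document to the printer.
  return new Promise((resolve, reject) => {
    const timer = setTimeout(resolve, ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted"));
    });
  });
}

async function main(): Promise<void> {
  const controller = new AbortController();

  // Simulate the user choosing to take control back after one second;
  // in a real interface this would be wired to a visible Cancel button.
  setTimeout(() => controller.abort(), 1000);

  console.log("Printing...");
  try {
    await slowSystemResponse(3000, controller.signal);
    console.log("Done.");
  } catch {
    console.log("Cancelled -- the user stayed in control, no modal interruption.");
  }
}

main();
```

The particular API doesn't matter; the point is that the response never blocks the user or seizes the initiative, which is the property the rule above describes.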

Given that frustrating experiences have got to be one of the principal sources of non-contemplative computing, it makes sense to have a couple of paragraphs about them in the book.