August 16, 2010

Another example of bicycles becoming smarter and more social: Social Bikes.

For those who aren't familiar with how these resource-sharing services typically work, check out our story about the technology behind Zipcar. In a nutshell, there are little car lots (or in the case of B-Cycle, a company that will soon deploy shared bikes in Chicago, bike stations) located all over a city that are locked when not in use. A user can make a reservation online for a car or bike and then pick it up at the designated time.

There is no human interaction required: once the mode of transportation is reserved, the user identifies him or herself to the car or bike either by RFID (Zipcar) or PIN at the cycle station (B-Cycle), which then unlocks the car/bike. When the user is done, he or she returns the vehicle to the same lot so that others can make use of the car. For B-Cycle, users can return bikes to any B-Cycle station, not necessarily the one they rented from.

The SoBi system follows a similar path, but the technology is a bit more advanced than that of services like B-Cycle.... For one, there are no cycle stations: SoBi bikes are parked all over the city (starting in New York City) at regular old bike racks. This means that bikes could, in fact, be anywhere at any given time, and not just at a designated station that could be blocks away. You can pick up any bike that's not already reserved, and drop it off anywhere without having to hunt down a drop-off station....

Like a Zipcar, each SoBi bike is equipped with its own "lockbox" that communicates wirelessly with the SoBi servers via GPS and a cellular receiver (an H-24 module from Motorola, Rzepecki told Ars). When you make a reservation online or via smartphone, you see a map of all the bikes in the area based on their GPS data and are given the option to unlock a specific bike when you click on it....
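That map-then-unlock flow boils down to a simple nearest-neighbor lookup over the bikes' last-reported GPS fixes. Here's a minimal sketch of what such a lookup might look like; all names, coordinates, and data structures are illustrative assumptions, not details of the actual SoBi system.

```python
# Hypothetical sketch: given the rider's location and the last-reported GPS
# fixes of the bikes, list the nearest ones that aren't already reserved.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_available(bikes, here, limit=3):
    """Sort unreserved bikes by distance from `here` = (lat, lon)."""
    free = [b for b in bikes if not b["reserved"]]
    free.sort(key=lambda b: haversine_km(here[0], here[1], b["lat"], b["lon"]))
    return free[:limit]

bikes = [
    {"id": "A17", "lat": 40.7359, "lon": -73.9911, "reserved": False},
    {"id": "B02", "lat": 40.7420, "lon": -74.0048, "reserved": True},
    {"id": "C88", "lat": 40.7290, "lon": -73.9965, "reserved": False},
]
print([b["id"] for b in nearest_available(bikes, (40.7308, -73.9973))])
```

The interesting design consequence is that the "station" disappears from the data model entirely: availability is just a filter plus a sort over live positions.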

Since the lockbox contains a GPS module, a cell chip, and a lock that works with a PIN pad, there has to be some way to power it. The SoBi team is still working out the kinks in power consumption, but plans to power the devices with a hub dynamo on the bike's rear wheel. The lockbox is essentially powered by your pedaling—no charging stations required.
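Back-of-the-envelope, the dynamo approach looks plausible: a hub dynamo is commonly rated around 3 W at cruising speed, which is plenty for low-power GPS and cellular electronics averaged over time. Every figure in this sketch is an illustrative assumption, not a spec of the actual SoBi hardware.

```python
# Rough power budget for a pedal-powered lockbox. All numbers are assumed
# orders of magnitude for illustration only.
DYNAMO_W = 3.0       # assumed hub dynamo output while the bike is ridden
GPS_W = 0.15         # assumed GPS module draw while tracking
CELL_AVG_W = 0.25    # assumed cellular draw, averaged over idle + short bursts
LOCK_PIN_W = 0.05    # assumed PIN pad / lock electronics draw

avg_draw = GPS_W + CELL_AVG_W + LOCK_PIN_W
surplus = DYNAMO_W - avg_draw   # left over to charge a buffer battery

print(f"average draw: {avg_draw:.2f} W; "
      f"charging surplus while riding: {surplus:.2f} W")
```

On these assumed numbers, riding generates several times the average draw, which is presumably why a small buffer battery (charged while pedaling, drained while parked) would be the kink to work out.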

August 14, 2010

For a long time, I've been interested in getting an electric bike, especially after I saw the Optibike at the California Academy of Sciences. Via the Daily Dish, I came across an MIT hybrid bicycle project that looks like just the thing: the Copenhagen Wheel. Check out the video:

It's not completely clear from the video exactly how it works, but I like how elegantly it attaches to a bicycle (some bike motors look like real kludges), and that it's also a smart device:

Smart, responsive and elegant, it transforms existing bicycles quickly into hybrid electric-bikes with regeneration and real-time sensing capabilities. Its sleek red hub not only contains a motor, batteries and an internal gear system – helping cyclists overcome hilly terrains and long distances - but also includes environmental and location sensors that provide data for cycling-related mobile applications. Cyclists can use this data to plan healthier bike routes, to achieve their exercise goals or to create new connections with other cyclists. Through sharing their data with friends or their city, they are also contributing to a larger pool of information from which the whole community can benefit.

It's called the Copenhagen Wheel because the bike-friendly city wants to increase the number of people who cycle, and worked with the team

to investigate how small amounts of technology could improve the cycling experience and how the four main obstacles to getting people on bikes - distance, topography, infrastructure and safety – could be overcome. What has resulted is the Copenhagen Wheel: a new type of electric smart-bike which utilizes a technical solution for overcoming distance and topography (a motor and batteries with regeneration capabilities that can provide riders with a boost when needed) and a real-time data network and series of applications to support infrastructure creation and foster a sense of safety.

Trading intelligence for resources; encouraging mergers of people and devices on human terms rather than device terms; bringing information to users in context-- all great examples of an end of cyberspace device.

March 25, 2010

Back in 2004, when I was a columnist for Red Herring, I wrote a piece about what would happen when reputation systems make their way into the world-- that is, when they stop being things that we only consult in online transactions, and become things we can consult easily in real-world transactions. I talked about how they could jump-start car-sharing systems.

RelayRides is a person-to-person car-sharing service, which will be launching soon in Baltimore. Unlike fleet-based services—Zipcar, City CarShare, I-GO, and others—which maintain their own vehicles, RelayRides relies on individual car owners to supply the vehicles that other members will rent.

There are a couple other services like this, including Divvycar, but there seems to be a sense that these systems are ready to take off. So "why are peer-to-peer car-sharing services emerging now?"

Part of the answer might lie in the way online and offline services like Zipcar, Prosper, Netflix, and Kiva.org are training us to share our stuff—people are simply getting used to the idea. “‘Zip’ has become a verb to the point that we could ‘zip’ anything—they just happened to start it with cars. Close on their heels was Avelle (formerly Bag, Borrow Or Steal) and now SmartBike for bikes on demand. The next step seems to be a crowd-sourced version of Zipcar,” says Freed.

Another part of the answer might be found in our response to the ecological and economic crises Americans are facing. As Clark explains, “You just think of the number of cars on the road, and the resource that we have in our own communities is so massive... what the peer-to-peer model does is it really allows us to leverage that instead of starting from scratch and building our own fleet.”

From an individual’s perspective, peer-to-peer sharing is a means for owners to monetize their assets during times when they don’t require access to them. But peer-to-peer models can also be understood to utilize existing resources more efficiently—ultimately, to reduce the number of cars on the road—through shifted mentalities about ownership, the intelligent organization of information and, increasingly, through real-time technologies.

Since peer-based car-sharing companies don’t bear the overhead costs of owning and maintaining their own fleets, they don’t require the high utilization rates for vehicles that Zipcar and similar programs do—the result is comparatively fewer limitations for the size and scale of peer-to-peer operations.

Always satisfying for a futurist to see the future actually start to arrive.

February 09, 2010

Front Design has developed a unique method of materializing freehand sketches. Strokes made in the air are recognized with motion-capture video technology and then digitized into a three-dimensional computer model. The digital files are then sent to a rapid-manufacturing machine that uses computer-controlled lasers to fabricate the objects in plastic, resulting in furniture that is a clear translation of drawing into object.

Check out the video:

As IdeaFestival observes, "When an action as simple as tracing an object in the air can result in a manufactured piece of furniture, the wall separating virtual and physical reality becomes a little less relevant." It proposes the term "performance manufacturing," though all manufacturing is a kind of performance, and often is more creative and inventive a process than we realize.

I've written for Samsung's DigitAll magazine about 3d printing and its potential for transforming the factory, and it seems to me that rapid prototyping, motion capture or object scanning, and 3d design tools-- which people encounter in games and virtual worlds like Second Life-- are going to make a powerful combination.

February 02, 2009

Freedom is an application that disables networking on an Apple computer for up to eight hours at a time. Freedom will free you from the distractions of the internet, allowing you time to code, write, or create. At the end of your selected offline period, Freedom re-enables your network, restoring everything as normal.

This reminds me a bit of Write Room, and why I like it: it's designed to be distraction-free.

At what point did the absence of distraction become a luxury? Is it just me, or is concentration (not just attention, but the ability to really focus seriously for long periods of time) an ever-scarcer state of being? (I hate to call it a commodity, despite its economic or productive value.)

November 30, 2008

The trouble with tunnel vision is that it leads to tunnel design. We are designing all sorts of information technologies that make things more efficient, but not necessarily more effective. (John Seely Brown)

A few weeks ago a friend of mine announced that she was taking a break from Web 2.0.* She was going to prune her Twitter feeds, reduce her time on Facebook, and cut back on her time on IM. She needed to pay more attention to her real life, and to real relationships. Recollecting friends from high school and college was interesting for a while (Web 2.0 is a time machine for my generation, after all), but a large volume of acquaintances can't provide the same satisfaction and support as a handful of friends you can see-- or who can take the kids out to the park for an hour. Getting Tweets on her cell phone was also a poor combination of intrusiveness and minutiae. And there was laundry to be done.

As one of the digital lemmings who pushed her over the edge, I found the episode got me thinking. Why do I Tweet? After thinking about it for a while, I've come to the conclusion that while it's certainly popular with lots of my friends, I have a couple of serious questions about Twitter, as a writer and a reader.

First, I have to admit that my regular life isn't interesting enough to justify throwing out real-time updates about it. Nobody needs to know that I've just convinced the kids to make their own breakfasts, or have come back from lunch at Zao Noodles, or am trying to decide where to go on this weekend's hike. The exception is when I'm on the road or doing something else unusual: at those times, my life-- or my world-- might get interesting enough to document in detail.

There's also the problem that I'm not sure what I get out of my own tweets. One of the signal features of Web 2.0, I think, is that it's not just broadcasting: it's self-documentation. Some of my friends use Twitter to jot down little notes about what they're reading. But for me, the absence of tags in Twitter makes it hard for me to find things I've looked at long enough to know I should look for them again later, or to keep track of citations; del.icio.us is still the better tool for that. (I suppose you could replicate a little of that functionality with #tags, but that's a workaround, and there's no auto-complete....) And I'm not sure I've gone back and looked at my own Twitter stream, ever. My regular blog is valuable because it's a way to keep track of my own life; this one has been invaluable for recording and trying out ideas for my book; my kids' blog has been a place where I could store huge amounts of detail about my kids' childhoods-- those pictures of them doing cute but ordinary things, or saying wonderful things, or just growing up. Tossing out tweets feels like shooting sparks from a wheel: the sparks may be entertaining, but it's the object you're shaping with the wheel that's really valuable.

Finally, as a reader, I find that seeing the raw feed of even a few people's lives can quickly become overwhelming. In the last 24 hours, a relatively quiet time after Thanksgiving, I got 34 tweets; during a busy time-- when people are traveling or at SXSW-- I can get several times that, easily. There's an argument to be made, as Clive Thompson has done, that the minutiae of tweets resolve into ambient awareness... but as it's currently designed, the system still puts big demands on readers, who have to constantly read their friends' Twitter streams, develop a sense of the rhythm of their posting, and build up a model of their real-world state from their online behavior. In a world in which the challenge is not to broadcast a lot of information, but to generate a lot of meaning, the stream-of-existence quality of tweeting makes it easy to mistake detail for intimacy, quantity of tweets for quality of expression or depth of understanding. As a preview of the world of ubiquitous computing and ambient awareness, Twitter is an interesting experiment (an experiment that's being conducted by hundreds of thousands of people on themselves and their friends).

This is actually not a bad lesson for designers. Creating ambient devices isn't about pushing information; presence isn't just about connection. Connecting people virtually is as much about quality and meaning in the digital world as it is in the real world.

Which is not to say that Twitter is hopeless. Twitter is strongest as a platform for conversation and reportage. It's easy to share a rapid fire of short notes at conferences, for example, and the final result-- assuming people are listening and paying attention-- can be useful. (I wonder if there are examples of Twitter being used by students in lecture classes?) A couple of the people I follow use it as much for pinging friends as for talking about what they're doing: for them, Twitter is a cross between the Facebook wall and a chat room. And I find Twitter useful for getting reactions to news events: I stopped watching the presidential debates this fall, for example, after I realized that most of my friends were tweeting their reactions to them.

So what do I do with my Twitter stream? I'm not going to shut it down, because there are times when I'll want to provide moment-by-moment updates about what I'm doing ("Just cleared customs in Kai Tak! Where's the cab line?" "Have now been in Victoria Stations on four continents...."). But for me, when I do use it, the challenge will be to figure out how to write the Web 2.0 equivalent of Zen koans: to fit meaning into 140 characters, rather than to fight the limitations of the medium by posting a lot.

November 11, 2008

“Hypermiling” was coined in 2004 by Wayne Gerdes, who runs this web site. “Hypermiling” or “to hypermile” is to attempt to maximize gas mileage by making fuel-conserving adjustments to one’s car and one’s driving techniques. Rather than aiming for good mileage or even great mileage, hypermilers seek to push their gas tanks to the limit and achieve hypermileage, exceeding EPA ratings for miles per gallon.

I've been interested in hypermiling and its mainstreaming, because I see its popularization (measured very nicely by its being word of the year, thank you OUP) as a really good example of what happens when digital information leaves cyberspace and becomes available in the world, and available in real time. We can see it in the way drivers react to the Toyota Prius mileage estimator.
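The bookkeeping behind hypermiling is simple enough to sketch: compute actual miles per gallon from a fill-up and compare it against the EPA rating you're trying to beat. The figures below are made up for illustration.

```python
# Hypothetical hypermiler's fill-up arithmetic: actual mpg vs. EPA rating.
def mpg(trip_miles, gallons_used):
    return trip_miles / gallons_used

epa_combined = 46.0          # illustrative EPA combined rating
achieved = mpg(412.0, 7.6)   # odometer miles since last fill-up / gallons pumped

pct_over = (achieved / epa_combined - 1) * 100
print(f"{achieved:.1f} mpg, {pct_over:+.0f}% vs EPA")
```

What the Prius changed is that this calculation, once done on paper at the pump, now runs continuously on the dashboard, turning every trip into instant feedback.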

November 09, 2008

I think 2005 was the year we began living in the world of commonplace ubiquitous computing devices. That year Apple put out the screenless iPod Shuffle, Adidas launched the adidas_1 shoe, and iRobot launched the Discovery—its second-generation vacuum robot.

Sadly, even though we live in that world, the user experience design of most everyday ubiquitous computing devices—things you see in gadget blogs—is typically terrible. That’s because we do not address ubicomp user experience design as a distinct branch of interaction design, much as we did not treat interaction design as separate from visual design in the early days of the Web.

His recommendations for doing good user experience design for ubicomp:

I have this really neat, polished talk that I give on The Design of Future Things and I’m not going to give it to you. What I’m going to do is stumble through what I’m thinking about. That’s what I really like to do. I like to teach and think and talk about stuff I don’t understand. And so I give crappy talks because I’m making it up as I go along. And when I do understand it, I write a book and then I’m bored with it and go on to the next topic. So I’ll tell you what I’m working on.

October 29, 2008

When modern architecture emerged in the first years of the last century, it threw down a gauntlet at the feet of traditional neoclassical and academic architecture. Modernism's style was stripped-down and functional. It celebrated the beauty of machines and the art of engineering, and expressed itself in concrete and steel, rather than brick and wood. Most important, it declared that the future would never again look like the past: from now on, architecture would be about innovation and change, not about working with timeless principles and eternal proportions.

Implicitly at first, and then consciously, architectural exhibits became predictions. Buckminster Fuller's Dymaxion house, first exhibited in 1927, exemplifies how modern architecture backed into the futures business. The Dymaxion house was a hexagonal structure, suspended from a central load- and services-bearing column. Virtually everything in it was made of aircraft-grade metal. The house wouldn't be built on-site, like traditional houses; instead, it would be mass-produced, like cars or cans of peas, and delivered to owners.

Soon "the home of the future" became a stock element of every architectural exhibit, World's Fair, forward-looking corporate display, or popular magazine special issue. (Even World War II couldn't derail them: a 1943 brochure showed a couple admiring a neighborhood of modern houses under the caption, "After total war can come total living.") Sporting automated kitchens, robot butlers, furniture that you washed with a high-pressure hose, and helipads (the long, sad story of why we don't have personal helicopters or jet packs will have to wait for another time), these houses were sleek temples of convenience, promises of a world in which the home would be as frictionless and worry-free as a department store.

Of course, almost none of this has come to pass. Instead, the "home of the future" projects serve as textbook examples of how you can get the future wrong, and why.

September 26, 2008

This BBC News piece is from a couple months ago, but still, as someone who's written about Stanford, IDEO, and the design of the Apple mouse, it caught my eye...

Say goodbye to the computer mouse

It's nearly 40 years old but one leading research company says the days of the computer mouse are numbered.

A Gartner analyst predicts the demise of the computer mouse in the next three to five years.

Taking over will be so called gestural computer mechanisms like touch screens and facial recognition devices.

"The mouse works fine in the desktop environment but for home entertainment or working on a notebook it's over," declared analyst Steve Prentice.

He told BBC News that his prediction is driven by the efforts of consumer electronics firms which are making products with new interactive interfaces inspired by the world of gaming....

So just how ready are people to wave their hands in the air or make faces at devices with embedded video readers?

Gartner's Mr Prentice says millions are already doing it thanks to machines like Nintendo's Wii and smartphones like the iPhone....

[But not everything may change.] "For all its faults, the keyboard will remain the primary text input device," he said. "Nothing is easily going to replace it. But the idea of a keyboard with a mouse as a control interface is the paradigm that I am talking about breaking down."

June 13, 2008

While an object becomes a tool only when it is linked to an agent, to a subjectivity, and is therefore a sign of a person, this is not the whole story. As Ingold points out, the tool must not be thought of in isolation: it is linked to a technique: a technique for tool manufacture and the techniques in which it is employed.... This means that the tool points beyond itself to a world of tools and a world in which that world of tools is embedded. Tool-using involves cooperation, imitation, teaching... and so is inextricably connected with personhood and sociality. Tool-manufacture requires the growth of collective knowledge and collective ownership of the product. (231-232)

The tool's social status and its endurance over time make it an external representation of, and underwriting of, the individual's social existence and his endurance over time. (232)

The unique sociality of human consciousness is captured in the relationship between the tool, the technology of which it is a part, the tool-user and the community to which he belongs. (233)

June 04, 2008

A reader of the "Paper Spaces" article pointed me to Malcolm Gladwell's 2002 New Yorker essay, "The Social Life of Paper." I'd read the piece, which is essentially a long essay built around Sellen and Harper's Myth of the Paperless Office, but I didn't realize it was available online.

Gladwell's description of the key affordances of paper-- its tangibility, flexibility, and writability (is that a word?)-- is particularly nice.

The case for paper is made most eloquently in "The Myth of the Paperless Office" (M.I.T.; $24.95), by two social scientists, Abigail Sellen and Richard Harper. They begin their book with an account of a study they conducted at the International Monetary Fund, in Washington, D.C. Economists at the I.M.F. spend most of their time writing reports on complicated economic questions, work that would seem to be perfectly suited to sitting in front of a computer. Nonetheless, the I.M.F. is awash in paper, and Sellen and Harper wanted to find out why. Their answer is that the business of writing reports -- at least at the I.M.F -- is an intensely collaborative process, involving the professional judgments and contributions of many people. The economists bring drafts of reports to conference rooms, spread out the relevant pages, and negotiate changes with one another. They go back to their offices and jot down comments in the margin, taking advantage of the freedom offered by the informality of the handwritten note. Then they deliver the annotated draft to the author in person, taking him, page by page, through the suggested changes. At the end of the process, the author spreads out all the pages with comments on his desk and starts to enter them on the computer -- moving the pages around as he works, organizing and reorganizing, saving and discarding.

Without paper, this kind of collaborative, iterative work process would be much more difficult. According to Sellen and Harper, paper has a unique set of "affordances" -- that is, qualities that permit specific kinds of uses. Paper is tangible: we can pick up a document, flip through it, read little bits here and there, and quickly get a sense of it. (In another study on reading habits, Sellen and Harper observed that in the workplace, people almost never read a document sequentially, from beginning to end, the way they would read a novel.) Paper is spatially flexible, meaning that we can spread it out and arrange it in the way that suits us best. And it's tailorable: we can easily annotate it, and scribble on it as we read, without altering the original text. Digital documents, of course, have their own affordances. They can be easily searched, shared, stored, accessed remotely, and linked to other relevant material. But they lack the affordances that really matter to a group of people working together on a report. Sellen and Harper write:

Because paper is a physical embodiment of information, actions performed in relation to paper are, to a large extent, made visible to one's colleagues. Reviewers sitting around a desk could tell whether a colleague was turning toward or away from a report; whether she was flicking through it or setting it aside. Contrast this with watching someone across a desk looking at a document on a laptop. What are they looking at? Where in the document are they? Are they really reading their e-mail? Knowing these things is important because they help a group coördinate its discussions and reach a shared understanding of what is being discussed.

April 29, 2008

I'm going to Oxford this summer for the workshop on imagining business. I'll be talking about "paper spaces," the large, often room-sized roadmaps, timelines, and other documents the Institute uses in its workshops.

I've put a PDF of the paper online; I may experiment with putting a copy on Google Docs, and using Zotero to manage the citations (though that seems iffy, given that I often write pretty long footnotes). Whatever environment I use, the piece is likely to undergo substantial revision over the next couple months, as I know there are a couple parts of the argument I want to expand. Here's the introduction:

This article is about paper spaces: room-sized maps, timelines, and charts used to develop, record and share ideas. When used in collaborative work, paper spaces support both focused, creative activity—the creation of a strategy roadmap, the outlines of a software development project, etc.—and informal social goals, like the development of a sense of community or common vision. These are essentially very large pieces of paper, but the term "paper spaces" (borrowed from computer-aided design) highlights several things. First, we're used to thinking of things made of paper as physical objects whose qualities help shape the experience of reading, but it's useful to pay attention to their spatial and architectural qualities as well. Large visuals aren't just things: they're spaces that possess some of the qualities of desks or offices. IFTF workshops exploit their scale and physicality to promote social activity between workshop participants. In this case, the spatiality of paper is fairly self-evident; but many of our interactions with paper, books, and writing have a spatial quality. Scholars could gain much by analyzing print media using conceptual tools from architecture, design, and human-computer interaction, as well as literary theory and book history.

Second, studying paper spaces helps us understand the role that visualizations play in contemporary organizations. Historians have used studies of visual media and visual thinking to expand our understanding of science, technology, and other fields. The business world is supersaturated with visualizations—everything from advertisements, to PowerPoint presentations, to org charts, to brands, to workflows and flow charts—and studying those images could bring similar benefits. At the same time, it warns us against taking too passive or formal a view of visual tools in business, of treating them like paintings on a wall. In the way users interact with them-- they're annotated, extended, argued over, and played with-- they're more like Legos than landscapes. The process of creating maps, and the maps themselves, both reflect a set of attitudes about how to understand and prepare for the future, one that emphasizes user involvement, and the need for actors to develop and possess shared visions of the future. Finally, the term "paper spaces" highlights their hybrid, ephemeral quality. They work because they're simultaneously interactive media and workspace, but their lives are brief and easy to overlook: they are designed to support idea- and image-making, but leave little trace of themselves.

To illustrate how paper spaces work, this article will focus on their use in a specific context: in expert workshops and roadmapping exercises conducted at the Institute for the Future (IFTF), a Silicon Valley-based think-tank. The article begins with an overview of information spaces, and a brief look at IFTF's local culture and research practices. Next, it looks in detail at our expert workshops and facilitated exchanges, and describes how they're organized, what they aim to accomplish, and how they work. It then discusses how paper spaces support the co-creation of knowledge about the future, and a sense of group solidarity. Finally, it argues that paper spaces are ubiquitous: most of our interactions with texts and other media have a spatial dimension that affects the ways we read, think, and create.

The piece is currently a relatively svelte 5000 words long; I figure it'll hit 6000-7000 before I'm done. There are two big things I still have to do.

First, I have to build out the discussion of how working with (or in) paper spaces generates group solidarity, or a sense of common identity and purpose among participants.

Second, I hadn't planned on doing this, but my experience working with ZuiPrezi has made me think I should make explicit something I had planned to leave implicit: that the paper spaces I describe will become extinct in the foreseeable future. When I was in Malaysia, I used ZuiPrezi in one of my workshops, and it was a terrific experience; and it leads me to believe that we're not far off from being able to replicate most, if not all, of the social functionalities of paper spaces in digital, projected tools. Thinking about what has made paper spaces work well has been essential for making them obsolete, and I think I'm going to add a section explicitly laying out what a digital system has to do in order to work as well as paper.

April 03, 2008

One of the things I've come to realize in the course of this project is how rewarding it can be to look closely at humans' interactions with computers, mobile devices, and other technologies. Cyberspace, I'm arguing, made sense in a world in which getting online was hard, and there were clearer behavioral divides between the everyday world that we inhabit naturally, and the online "world" that we visited via computer modem. Today, things like the cellphone, iPhone, and Intel's new Mobile Information Devices, combined with the proliferation of wireless networks and always-on services, are all eroding that sense of the digital world as something separate from regular life.

Today I saw another example of how changes in the ways we engage with technologies can break down conceptual divisions-- this time involving the divide between people and robots. New Scientist reports on a project by Georgia Tech researchers Ja-Young Sung and Rebecca Grinter that examines how people interact with the Roomba, the robotic vacuum cleaner. Apparently a lot of owners give their Roomba a name, dress it up, or even take it on vacations:

"Dressing up Roomba happens in many ways," Sung says. People also often gave their robots a name and gender, according to the survey... which Sung presented at the Human-Robot Interaction conference earlier this month in Amsterdam, the Netherlands.

Kathy Morgan, an engineer based in Atlanta, said that her robot wore a sticker saying "Our Baby", indicating that she viewed it almost as part of the family. "We just love it. It frees up our lives from so much cleaning drudgery," she says.

Sung believes that the notion of humans relating to their robots almost as if they were family members or friends is more than just a curiosity. "People want their Roomba to look unique because it has evolved into something that's much more than a gadget," she says. Understanding these responses could be the key to figuring out the sort of relationships people are willing to have with robots.

Until now, robots have been designed for what the robotics industry dubs "dull, dirty and dangerous" jobs, like welding cars, defusing bombs or mowing lawns. Even the name robot comes from robota, the Czech word for drudgery. But Sung's observations suggest that we have moved on. "I have not seen a single family who treats Roomba like a machine if they clothe it," she says. "With skins or costumes on, people tend to treat Roomba with more respect."

So as robots move from environments that we don't like into places that are more familiar, and from doing work we hate to work we merely dislike, two things happen to our perception of them: their social status goes up, and they become more familiar. But this doesn't just happen with robots that are doing "dull, dirty and dangerous" jobs: the humans doing those jobs can develop bonds with their robots, too.

US soldiers serving in Iraq and interviewed last year by The Washington Post developed strong emotional attachments to Packbots and Talon robots, which dispose of bombs and locate landmines, and admitted feeling deep sadness when their robots were destroyed in explosions. Some ensured the robots were reconstructed from spare parts when they were damaged and even took them fishing, using the robot arm's gripper to hold their rod.

Figuring out just how far humans are willing to go in shifting the boundaries towards accepting robots as partners rather than mere machines will help designers decide what tasks and functions are appropriate for robots. Meanwhile, working out whether it's the robot or the person who determines the boundary shift might mean designers can deliberately create robots that elicit more feeling from humans. "Engineers will need to identify the positive robot design factors that yield good emotions and not bad ones - and try to design robots that promote them," says Sung.

This is not to say that we're starting to think of robots as more like people, but at least we're starting to treat them a little more like, say, pets: they're not us, but they're still part of our emotional lives, and we have some appreciation for what they do for us.

(* A reference to Stephen Colbert's great description of what would make his show different: "Other shows read the news to you. We feel the news at you.")

[To the tune of Mono, "Lost Snow," from the album "Ex Plex, Los Angeles, September 24, 2005".]

February 20, 2008

Recently I've been using a couple of tools a lot, for reasons that are worth noting (worth it to me, anyway). Increasingly, I find my choice of technologies depends on fairly small and specific things, keyed more to the way I'm able to use them than to functional specs.

The first is WriteRoom. I've had it for a while, but I've now made it my default basic text editor. The interesting thing about WriteRoom is that it revives an old interface-- the full-screen, text-only display-- for a new purpose: focusing an author's attention. This is "writing without distraction," the Web site promises:

Walk into WriteRoom, and watch your distractions fade away. Now it's just you and your text. WriteRoom is a place where your mind clears and your work gets done. When your writing is complete, exit WriteRoom and re-enter the busy world with your work in hand.

With so much e-mail and information pouring in, the digital life we lead can sure be a blur. If you've found it getting harder to focus on the words you want to write, if you've forgotten how great it feels to really write distraction-free, then let WriteRoom help you rediscover your muse.

Of course I find the spatial metaphor interesting.

But what I really like about it is that it's particularly well-suited to writing late at night. I have these regular bouts where I'm up until 2 or 3 in the morning writing-- periods when I can really get a lot done, or have those conceptual or organizational breakthroughs that every writer finds so satisfying. Most of the time I'm not writing something that requires elaborate formatting or layout, so I can use a simpler writing tool. And when I'm in bed, the lights are out, and I'm trying to work without keeping my wife up, the amber lettering on a black screen seems especially fitting: it's gentler on the eyes, and it focuses attention on the words at a time when I don't have much energy, but have some of my best ideas.

The other tool I'm using a lot these days is Skype. Of course, I have lots of ways to talk to people-- two cell phones, for starters (one used mainly for text messaging)-- but I'm finding Skype really good for work-related calls, for a couple of reasons.

First, I just bought a headset, which has made it possible to walk around while talking. Before I had it, I had to lean over the computer and yell into the microphone (wherever it is on my computer), which is not a superior communications experience. With the headset, on the other hand, the sound quality is excellent, and I can get up and move about. Much better.

Further, when I'm working, I'm never at my desk-- I don't even have a desk-- but I'm always at my computer. (When I'm not working I'm also often at my computer.) Since I actually lost my office phone a long time ago, it's a lot easier to do calls through Skype.

Finally, the combination of talking and texting makes it possible to share notes with the person you're talking to, pass URLs back and forth, etc. Since I generally have to send a follow-up e-mail after any phone conversation, having the ability to write those notes in real-time is really useful. And since Skype can save text threads, you can use it as an archive of previous conversations. That's really useful for things like weekly conference calls, which I'm now doing with some Oxford students I'm advising on a project.

January 12, 2008

For those of you old enough to have played video games in the late 1970s or 1980s-- the halcyon days of Defender, Xevious, and Tron, not to mention a Pac-Man franchise that rivaled CSI-- the terrific retro arcade photoset on Flickr is not to be missed.

Perhaps I'm just over-generalizing from my own over-excited teenage reactions to these kinds of spaces, but I think these arcades, with their spaceship or Buck Rogers interiors, darkness lit only by the neon and the light of the games, played an underappreciated role in creating a psychological association between computers and space-- or alternate spaces.

The arcade of my own teenage years was called Station Break. It was on the edge of the Virginia Commonwealth University campus, near student eateries, bookstores, and the city's only independent movie theatre. For a teenager, it was a neighborhood that spoke of leisure, freedom, and escape. The arcade itself was like another world.

The appeal of these spaces hasn't disappeared entirely, though most arcades are gone. The memory of the old arcade model was compelling enough to inspire MAME developers to create a virtual arcade, and there's a pretty clear lineage from Station Break to Chuck E Cheese to the Pizza Planet in Toy Story. For those who really want the old experience, a Springfield, MO arcade, 1984, is a nostalgic re-creation of arcades from the era, right down to the 50+ classic games.

December 18, 2007

Atul Gawande has a terrific article in last week's New Yorker on an information technology that, after several years' testing, looks like it could transform intensive care. It's mainly been used in the reduction of line infections, which Gawande explains are

so common that they are considered a routine complication. I.C.U.s put five million lines into patients each year, and national statistics show that, after ten days, four per cent of those lines become infected. Line infections occur in eighty thousand people a year in the United States, and are fatal between five and twenty-eight per cent of the time, depending on how sick one is at the start. Those who survive line infections spend on average a week longer in intensive care.

This new technology was developed a few years ago by Johns Hopkins professor Peter Pronovost. After the first trial using it in a hospital,

The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital... [it] had prevented forty-three infections and eight deaths, and saved two million dollars in costs.

For years we've heard that information technology could solve some of the most intractable problems with our health care system, and this seems to deliver on that promise. So what is this technology?

A checklist.

Not a gigantic database, or RFID tags in unconscious patients, or steerable needles (which boffins at UC Berkeley are now working on); but pieces of paper listing the steps you're supposed to take when doing something. You know what they are.

So why are they good-- good to the point of being able to save lots of lives and millions of dollars in an average hospital? Checklists offer

two main benefits, Pronovost observed. First, they helped with memory recall, especially with mundane matters that are easily overlooked in patients undergoing more drastic events. (When you’re worrying about what treatment to give a woman who won’t stop seizing, it’s hard to remember to make sure that the head of her bed is in the right position.) A second effect was to make explicit the minimum, expected steps in complex processes. Pronovost was surprised to discover how often even experienced personnel failed to grasp the importance of certain precautions. In a survey of I.C.U. staff taken before introducing the ventilator checklists, he found that half hadn’t realized that there was evidence strongly supporting giving ventilated patients antacid medication. Checklists established a higher standard of baseline performance.

Tools like checklists aren't just accidental media containing information; when you look at how they're used, they turn out to be aids to memory, objects that help standardize what can be chaotic practices. Under some circumstances, they're tools for diffusing practices and raising standards.

The power of checklists rests in their simplicity, particularly the simplicity of their use. Documents behave predictably. That predictability, I would argue, is in turn important to how checklists get incorporated into work practices. With a checklist, you can easily see that steps have been followed: it's a bit like how strips of paper in air traffic control centers serve as tools for tracking who has responsibility for a plane.
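The "explicit, inspectable steps" quality that makes a checklist work can be sketched as a tiny data structure: the expected steps are fixed in advance, and completion is visible at a glance. This is purely illustrative-- the step names below paraphrase the line-insertion precautions Gawande describes, and nothing here comes from Pronovost's actual materials:

```python
# A minimal sketch of a checklist as a data structure. The point is
# not automation but predictability: the steps are fixed, and what
# remains undone is explicit and easy to inspect.
class Checklist:
    def __init__(self, steps):
        self.steps = list(steps)   # the expected steps, in order
        self.done = set()          # which ones have been checked off

    def check(self, step):
        if step not in self.steps:
            raise ValueError(f"not an expected step: {step}")
        self.done.add(step)

    def missing(self):
        """Steps not yet completed -- visible at a glance."""
        return [s for s in self.steps if s not in self.done]

    def complete(self):
        return not self.missing()

# Steps paraphrased from Gawande's account of the line-insertion
# checklist; illustrative only.
line_insertion = Checklist([
    "wash hands with soap",
    "clean the patient's skin with chlorhexidine antiseptic",
    "put sterile drapes over the patient",
    "wear a sterile mask, hat, gown, and gloves",
    "put a sterile dressing over the insertion site",
])
line_insertion.check("wash hands with soap")
print(line_insertion.missing())  # the four steps still unchecked
```

The design choice worth noticing: nothing here is clever, and that is the point-- like its paper counterpart, the structure succeeds by being boring and predictable.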

December 17, 2007

[D]espite two decades of lectures from Dr. Norman on the virtue of “user-centered” design and the danger of a disease called “featuritis,” people will still be cursing at their gifts this Christmas.

And the worse news is that the gadgets of Christmas future will be even harder to command, because we and our machines are about to go through a rocky transition as the machines get smarter and take over more tasks. As Dr. Norman says in his new book, “The Design of Future Things,” what we’ll have here is a failure to communicate.

“It would be fine,” he told me, “if we had intelligent devices that would work well without any human intervention. My clothes dryer is a good example: it figures out when the clothes are dry and stops. But we are moving toward intelligent machines that still require human supervision and correction, and that is where the danger lies — machines that fight with us over how to do things.”

You can’t explain to your car’s navigation system why you dislike its short, efficient route because the scenery is ugly. Your refrigerator may soon know exactly what food it contains, what you’ve already eaten today and what your calorie limit is, but it won’t be capable of an intelligent dialogue about your need for that piece of cheesecake.

To get along with machines, Dr. Norman suggests we build them using a lesson from Delft, a town in the Netherlands where cyclists whiz through crowds of pedestrians in the town square. If the pedestrians try to avoid an oncoming cyclist, they’re liable to surprise him and collide, but the cyclist can steer around them just fine if they ignore him and keep walking along at the same pace. “Behaving predictably, that’s the key,” Dr. Norman said. “If our smart devices were understandable and predictable, we wouldn’t dislike them so much.” Instead of trying to anticipate our actions, or debating the best plan, machines should let us know clearly what they’re doing.

December 07, 2007

Yesterday I broke my Nokia N95 super-phone. Today I packed it up and sent it off to Nokia repairs... in Huntsville. Not the first place I'd think to send a phone to be fixed, but hey, if it was good enough for Wernher von Braun, it's good enough for cell phone repairs.

I hope the experience of getting it fixed is better than the last one I had when I needed to have a phone repaired. As I realized then, good repair service may seem like one of those things a company should invest in if it can get around to it, but it actually really matters for today's intimate devices:

A cell phone repair isn't something that requires lots of precision machine work and soldering: you pop open the unit, swap out a circuit board, close it up, and move on. The actual diagnosis/repair/testing cycle probably takes 5-10 minutes (anyone who repairs cell phones and has better numbers, please feel free to comment).

Nonetheless, for whatever reasons it takes months for phones to move through the local store-telco-repair shop ecology. And from talking to other people in the store, it seems that my experience isn't unusual: other, less persistent people have waited for 4-6 months for their phones, and showing up in person to plead for news of their repairs-- like going to the prefecture's office for a visa to leave Casablanca-- now seems to be the norm.

This matters because bad repair service could inhibit the growth of the kind of always-on, pervasive, ubiquitous computing and communications that lots of futurists (and more than a few electronics and cell phone companies) see as just over the horizon. Many people already develop deep, personal relationships with their cell phones; I feel naked if I leave the house without mine. As we invest more in customizing them, and acquire phones that have a larger and larger number of features, bad service is going to feel more and more wrenching, and those loaner phones-- which are always old returns-- will be more and more unsatisfactory.

Having the camera-- I mean phone... or whatever you call an N95-- out of my life for a few days gives me a chance to reflect on how I've used it.

First, it's not quite a device that can replace all my other devices-- I usually leave the house with a cell phone, camera, and iPod-- but it's a lot closer than I expected.

I've long been skeptical of the concept of the single device that replaces lots of specialized devices. One objection has been about performance quality: cell phone cameras generally aren't as good as cameras, and I've not been willing to make the sacrifice.

Another is that my various devices have different, and contradictory, design parameters. I want a measure of solidness in a camera that I don't need in a cellphone. A cellphone ought to be light, but still stand up to abuse. A camera doesn't have to be heavy, but it should still feel dense and rugged; and the materials and detailing that you use to achieve that aren't appropriate for a phone. And neither cellphone nor camera aesthetics are appropriate for an iPod.

But if I didn't already have these other devices, and wasn't already accustomed to being a little fussy about them, I suspect I'd be perfectly happy with just the N95. The camera isn't quite as good as the SD630, but for most everyday purposes it's really everything I need. Likewise, the MP3 player function isn't quite as good as the iPod, but I can put in enough memory to store a few hundred songs (and, by constructing a smart playlist, choose some songs that I rate highly but haven't played in a while, and songs that I listen to a lot).
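The smart-playlist trick mentioned above-- surface songs you rate highly but haven't played in a while-- is really just a filter over two fields. A minimal sketch in Python, with invented song records (the field names and thresholds are my assumptions, not any player's actual schema):

```python
from datetime import datetime, timedelta

# Invented song records; the fields mirror what a smart playlist
# filters on: a user rating and a last-played timestamp.
library = [
    {"title": "Lost Snow", "rating": 5,
     "last_played": datetime(2007, 6, 1)},
    {"title": "Midnight Train to Georgia", "rating": 4,
     "last_played": datetime(2007, 11, 30)},
    {"title": "Filler Track", "rating": 2,
     "last_played": datetime(2007, 5, 1)},
]

def smart_playlist(songs, min_rating=4, not_played_for_days=90, now=None):
    """Songs rated highly but not played recently."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=not_played_for_days)
    return [s for s in songs
            if s["rating"] >= min_rating and s["last_played"] < cutoff]

picks = smart_playlist(library, now=datetime(2007, 12, 7))
print([s["title"] for s in picks])  # → ['Lost Snow']
```

The recently-played track is excluded even though it's highly rated, which is exactly the "songs I love but haven't heard lately" effect.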

Interestingly, performance issues aside, I find there are things I miss in the N95-- but they're different, depending on whether I'm using it as a camera or an MP3 player. When I'm using it as a camera, I really miss having a wrist strap. (The times I've not used the wrist strap on my camera, I've either dropped it in the ocean, or dropped it on Gloucester Road.) When I'm using it as an MP3 player, I miss the scroll wheel.

The thing I really love, on the other hand, is Lifeblog. Being able to take a picture of something and blog it immediately (or put it on Flickr) is great. Some of the posts admittedly have been a tad frivolous, but I've been able to post pictures of the kids opening their advent calendar gifts seconds after they're opened. I don't imagine that I've got relatives hitting the "refresh" button every morning; what's good is that it eliminates procrastination and delay on my part. And I expect that the next time I go somewhere with lots of free wifi spots, I'll be photoblogging in more or less real time. I did something like this during my trip to Budapest, but with a phone that can go online, I could really go to town.

November 17, 2007

The interface is the basic aesthetic form of digital art. Just as literature has predominantly taken place in and around books, and painting has explored the canvas, the interface is now a central aesthetic form conveying digital information of all kinds. This circumstance is simultaneously trivial, provocative, and far-reaching--trivial because the production, reproduction, distribution and reception of digital art increasingly take place at an interface; provocative because it means that we should start seeing the interface as an aesthetic form in itself that offers a new way to understand digital art in its various guises, rather than as a functional tool for making art (and doing other things); and, finally, far-reaching in providing us with the possibility of discussing contemporary reality and culture as an interface culture.

October 19, 2007

At the time I had been trying to imagine the office of the future. I suggested to the film team that we would be surrounded by a single seamless screen in an arc, and that we would stand up and gesture into it. I had observed that when you think on your feet you have different thoughts. I like to think while I walk or pace because I feel my whole body is thinking then. It may turn out to be a short-term anomaly that today we think while we are sitting.

September 17, 2007

I'm not a Mac fanatic, but every computer I've bought with my own money has been a Mac. I got an SE in 1988, and have gone through various Quadras, iMacs, and laptops since then. From the beginning, much of the appeal of the Mac was the graphical interface. First, it was the only personal computer with a GUI. Then, after the appearance of Windows, it was a better version of the GUI: cleaner, faster, more intuitive, or whatever.

I still gravitate to Macs, but I'm beginning to see the outlines of a future in which graphics are really good, but the graphical user interface is obsolete.

Two things are driving the fall of the GUI. One is mobile devices, whose screens are too small to handle the kinds of GUIs we've had on personal computers. The other is the growth of search and tagging tools as an alternative to visual (and often hierarchical) systems for organizing and accessing documents on personal computers. I'll talk about the first here.

Consider the iPod. For all of the attention the neat color screens have gotten-- and they are pretty neat-- what strikes me about the iPod, and the iPod Touch, is how much of the navigation is text- and list-based. Sure, it'll play movies and TV shows, and show you album cover art, and the little screens are surprisingly easy to watch (though I have a much more satisfying time watching things I'm familiar with, probably because my brain is filling in details that the screen doesn't actually show). But you don't use icons to navigate: you navigate through text menus.

I've spent a little time playing with Cover Flow, and my sense is that it really doesn't make the iPod interface less logocentric: it provides an additional piece of information to, for example, help you tell the difference between two different versions of "Midnight Train to Georgia," but it doesn't put you back in a world of folders or desktops.

Likewise, every cell phone has a nice color screen, and some icons that when clicked on will take you to different functions; but again, most of the time, I'm selecting from menus and scrolling through lists. The screen may be pretty, and the color is nice on the eyes, but my cell phone company hasn't tried to create a little information landscape on the phone's screen. Instead, they've gone with menus.

That's probably a smart choice, because menus are easier to work through, particularly when you're only giving partial attention to the interface. When I'm sitting at my desk, I can focus on icons and folders; but when I'm walking down the street or driving (not that I ever do that), I want something much simpler: a few plain words, or better yet, one-touch dialing.

Creating devices that let you interact with information while interacting with the world reduces the appeal of interfaces that are themselves little worlds. And I suspect that shifting from situations where we devote the bulk of our attention to graphical interfaces, to ones where we devote fragments of our attention to text-based interfaces, reduces the relevance of the idea that we're interacting with an alternate dimension of information.

August 28, 2007

At the Institute, a couple of us have been talking about the declining perceived value of anonymity as one of the big impacts of Web 2.0. Social software (however you want to define that slippery term) encourages sociability by giving people stable identities, even if they needn't be identities that track back to a person in the physical world.

I think one of the consequences of the growing centrality of online identity is a growing recognition of how anonymity didn't work online: while there's an argument that it allowed marginal people to be heard in online conversations they never could have joined in real life, it also served as a cover for-- or even promoted-- bad behavior, as a popular t-shirt succinctly put it.

I was thinking about this recently while driving on the freeway, and having to put up with various drivers doing 80, occasionally passing saner drivers by zipping onto the breakdown lane. One of the reasons this kind of behavior happens on the highway is that if you do something bad on the highways, you can essentially drive away from the consequences of your actions. The odds are incredibly small that you'll be chased down, much less have anyone remember you at a time when they can do something to bring you to account. Contrast this to a small town where everyone recognizes your car, sees you in the coffee shop, and damn well is going to have a word with you if you cut them off on the road.

July 06, 2007

In last month's Harvard Business Review, Jonathan Zittrain warns against the seductive appeal of "tethered appliances." I think he's onto something.

The core boon and bane of the combined Internet and PC is its generativity: its accessibility to people all over the world -- people without particular credentials or wealth or connections -- who can use and share the technologies' power for various ends, many of which were unanticipated or, if anticipated, would never have been thought to be valuable.

The openness that has catapulted these systems and their evolving uses to prominence has also made them vulnerable. We face a crisis in PC and network security, and it is not merely technical in nature. It is grounded in something far more fundamental: the double-edged ability for members of the public to choose what code they run, which in turn determines what they can see, do, and contribute online.

Poor choices about what code to run -- and the consequences of running it -- could cause Internet users to ask to be saved from themselves. One model to tempt them is found in today's "tethered appliances." These devices, unlike PCs, cannot be readily changed by their owners, or by anyone the owners might know, yet they can be reprogrammed in an instant by their vendors or service providers (think of TiVo, cell phones, iPods, and PDAs). As Steve Jobs said when introducing the Apple iPhone earlier this year, "We define everything that is on the phone. You don't want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone, and then you go to make a call and it doesn't work anymore. These are more like iPods than they are like computers."

If enough Internet users begin to prefer PCs and other devices designed along the locked-down lines of tethered appliances, that change will tip the balance in a long-standing tug of war from a generative system open to dramatic change to a more stable, less-interesting system that locks in the status quo. Some parties to the debates over control of the Internet will embrace this shift. Those who wish to monitor and block network content, often for legitimate and even noble ends, will see novel chances for control that have so far eluded them.

November 20, 2006

Last week I gave an impromptu talk at the Royal College of Art, outlining the end of cyberspace argument and its implication for interaction design. Chris Hand and Andy Broomfield, two recent graduates of the interaction design program, both blogged about the talk.

The whole thing was kind of hurried and off-the-cuff-- one of the recent grads, now on the faculty, invited me right before I got on the plane, and on the night of the talk I dashed from Paddington Station down to the RCA, on the other side of Hyde Park, managing to wander around for a few minutes before finding the right entrance. But it was a large crowd, basically supportive of the overarching idea but also highly skeptical of the particulars-- in other words, the sort of audience that's at once satisfying without being too much of an ego boost.

I've been ending most of my presentations on the subject with a slide that shows various overlays of digital images atop a normal street scene.

Turns out the students didn't quite hate it, but they thought it didn't work. And upon reflection, I'm inclined to agree with them, for a couple reasons.

First, and most important, instinct says that we're quickly going to find that when it comes to overlaying information on top of our everyday views of the physical world, less will be more. To some degree, we've assumed that users would go for My Own Private Shibuya (hereafter, MOPS):

Part of the pleasure of these streetscapes is precisely that they're collectively experienced, rather than individual visions: for even a brief period, we share with other postmodern, globe-hopping flaneurs and expatriates and temporary natives the light of the ABC-Mart sign and storefront.

If I had a pair of glasses that fed me annotations of the city around me, what would I really want? Would I want dinosaur heads peering around buildings? In England, where I worry constantly about looking the wrong way when I cross the street, absolutely not: I'd be killed instantly. Indeed, in any big city, MOPS would be at worst a hazard to life and private property (how long would it take thieves to learn to target people who are walking down the street watching YouTube?), and at least an intrusion on my experience of the place.

Instead, most of the time I'd want a safety reminder or two, maybe directions if I'm headed somewhere, and then some occasional "look here for more information" icon that popped up whenever, say, I passed a building designed by a particular school of architects. At other times, I'd want other information: when I travel with my kids I want to know where clean, publicly accessible bathrooms are. But would I want MOPS? Almost never.

As is so often the case, the real value won't come in providing a constant stream of semi-processed data, but in useful abstraction and restrained but enlightening presentation.

August 24, 2006

A friend points me to Sven Johnson's blog post/essay "Smiley Face Saavy," which touches on some of the same ideas I write about in my latest Samsung piece-- in particular, highly responsive manufacturing made possible by flexible, fast-moving production and supply chains.

August 22, 2006

I have an essay on rapid prototyping, personal fabrication, and the future of manufacturing in the latest issue of Samsung DigitAll Magazine. Here's the opening:

The transformation of the factory from a vast machine into a creative, knowledge-intensive space is a development few could have seen. Are you ready for the next industrial revolution?

For many people, the word “factory” conjures up images of William Blake’s “dark Satanic mills” or Charlie Chaplin’s Modern Times. They imagine landscapes of machinery, consuming men and raw materials, blackening skies and destroying lives. Whatever they produce, factories are inhuman and unnatural. Certainly such factories still exist; but companies that aren’t trying to win the race to the bottom are taking different paths. The outsourcing movement, and more recent attention to product design, have eclipsed a quiet transformation of the factory from a vast machine into a more knowledge-intensive, even creative, space. In surprising ways, the factory is now following a path blazed by the design studio and modern office: it’s becoming more knowledge-intensive and flexible, even as it grows more tightly connected to markets and suppliers.

June 08, 2006

One of the great unwritten history of technology stories is the biography of the Post-It. (By "biography" I mean its invention; its subsequent use, appropriation, reinvention, etc. by millions of users; and its cultural life-- i.e. use as a visual metaphor in advertising, inspiration in interface design, and probably medium in performance art, who knows.) It's one of those technologies that is incredibly simple, yet shows up everywhere; it's very modest, yet you have the sneaking suspicion that, in ways you can't quite describe, it changes the way you work and think.

No wonder it's been an inspiration for ubiquitous computing. Reverse engineer the magic of the Post-It, and you're a genius. I was reminded of this when I recently ran across this abstract by Maribeth Back and her colleagues at FXPAL on Post-Bits:

A Post-Bit is a prototype of a small ePaper device for handling multimedia content, combining interaction control and display into one package. Post-Bits are modeled after paper Post-Its™; the functions of each Post-Bit combine the affordances of physical tiny sticky memos and digital handling of information. Post-Bits enable us to arrange multimedia contents in our embodied physical spaces. Tangible properties of paper such as flipping, flexing, scattering and rubbing are mapped to controlling aspects of the content. In this paper, we introduce the integrated design and functionality of the Post-Bit system, including four main components: the ePaper sticky memo/player, with integrated sensors and connectors; a small container/binder that a few Post-Bits can fit into, for ordering and multiple connections; the data and power port that allows communication with the host com-puter; and finally the software and GUI interface that reside on the host PC and manage multimedia transfer.

You never know what will serve as the source for some illuminating (or at least entertaining) metaphor. For example, Stephen Colbert's graduation speech has made me think about one aspect of technologies and the future of cyberspace.

When I was starting out in Chicago, doing improvisational theatre with Second City and other places, there was really only one rule I was taught about improv. That was, “yes-and.” In this case, “yes-and” is a verb.... [Y]es-anding means that when you go onstage to improvise a scene with no script, you have no idea what’s going to happen, maybe with someone you’ve never met before. To build a scene, you have to accept. To build anything onstage, you have to accept what the other improviser initiates on stage. They say you’re doctors—you’re doctors. And then, you add to that: We’re doctors and we’re trapped in an ice cave. That’s the “-and.” And then hopefully they “yes-and” you back. You have to keep your eyes open when you do this. You have to be aware of what the other performer is offering you, so that you can agree and add to it. And through these agreements, you can improvise a scene or a one-act play. And because, by following each other’s lead, neither of you are really in control. It’s more of a mutual discovery than a solo adventure....

Well, you are about to start the greatest improvisation of all. With no script. No idea what’s going to happen, often with people and places you have never seen before. And you are not in control. So say “yes.” And if you’re lucky, you’ll find people who will say “yes” back.

Okay, nice enough for a graduation speech. But it strikes me that "yes-and" serves as a good shorthand for thinking about one of the opportunities that ubiquitous computing technologies create.

Cyberspace had an important either/or: most of the time, you could either interact with it, or with the world, but not both at once. The personal computing model of interacting with information was socially disruptive: the keyboard and monitor require a lot of your attention. Under most circumstances, this meant choosing between things seen through a screen on your desk, or the world around your desk. To put it another way, the same technologies that made it easy for you to interact in real time with someone thousands of miles away made it hard to interact with someone a few feet away.

What ubicomp offers is the possibility of creating devices, spaces, and interactions that don't force an either/or choice upon their users, but rather explore the opportunities and exploit the synergies of a yes-and: combining the affordances of physical media, the familiarity of traditional workspaces, and the complexity and richness of social settings, with the speed and flexibility of bits. Some examples:

E-paper that looks and feels like traditional paper, but can be updated much more easily.

Devices like the Ambient Orb that can communicate information while staying at the edges of your attention, not forcing their way into the center.

Tags: VERB Yellowball uses an ID to connect the ball to a digitally-managed story about it; Semapedia is a physical wiki, a framework to "connect the virtual and physical world by bringing the best information from the internet to the relevant place in physical space;" Thinglink is a system for generating ID numbers for craft goods, which also serve as pointers to database records about those objects-- a bit like blogjects, a bit like MARC records.
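All three of these tagging systems share the same basic pattern: a physical object carries a unique ID, and a resolver maps that ID to a digital record about the object. As a minimal sketch of that pattern (all names and fields here are hypothetical illustrations, not any of these services' actual APIs):

```python
# A toy resolver for the physical-tag pattern: an ID printed on or
# attached to an object points to a record (metadata, a story, a URL)
# kept in a digital registry.

class ObjectRegistry:
    """Maps physical-object IDs to digital records about those objects."""

    def __init__(self):
        self._records = {}

    def register(self, object_id, record):
        """Associate a record with a tagged object's ID."""
        self._records[object_id] = record

    def resolve(self, object_id):
        """Return the record for a tagged object, or None if unregistered."""
        return self._records.get(object_id)


registry = ObjectRegistry()
registry.register("TL-00042", {
    "kind": "craft good",
    "maker": "example workshop",
    "story_url": "https://example.org/objects/TL-00042",
})

record = registry.resolve("TL-00042")
print(record["story_url"])  # prints https://example.org/objects/TL-00042
```

The interesting part is everything the sketch leaves out: who runs the registry, who gets to write records, and how the ID physically travels with the object (barcode, RFID, printed number).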

June 07, 2006

Most internet users know hyperlinks as highlighted words on a web page that take them to certain other sites. But hyperlinks today are quite complex forms of instant connection—for example, tags, API mashups, and RSS feeds. Moreover, media convergence has led to increased instant linking among desktop computers, cell phones, PDAs, MP3 players, digital video recorders, and even billboards.

Through these activities and far more, “links” are becoming the basic forces that relate creative works to one another. Links nominate what ideas and actors have the right to be heard and with what priority. Various stakeholders in society recognize the political and economic value of these connections. Governments, corporations, non-profits and individual media users often work to digitally privilege certain ideas over others.

Do links encourage people to see beyond their personal situations and know the broad world in diverse ways? Or, instead, do links encourage people to drill into their own territories and not learn about social concerns that seem irrelevant to their personal interests? What roles do economic and political considerations play in creating links that nudge people in one or the other direction?

The notion of links "becoming the basic forces that relate creative works to one another," and helping to define "what ideas and actors have the right to be heard and with what priority" strikes me as right on (it has strong echoes of actor-network theory, a branch of science studies that I've drawn on in my own work). Personally, I would make the case that you need to pay more attention to things that hyperlink between physical objects and digital information, like Semapedia. (David Weinberger recently declared that "Last year, it was Web 2.0 and tagging. This year, it's going to be unique IDs (UIDs).") The rise of these links is going to have very serious implications for structuring connection, attention, and influence.

It also makes me wonder, to what degree has the character of hyperlinks influenced the way we've thought about cyberspace? Way back when, hyperlinks were Really Cool: I still remember how much time I lavished on them when, as an instructor at UC Davis, I put together my first course Web site.

It also seems to me that there's a growing serious interest in charting the cognitive impacts of new media: Susan Greenfield's recent speech calling for more study of the relationship between new technology and our brains seems to have crystallized something.

For [architect] Jason Payne of gnuform, Los Angeles provided an opportunity, as he says, “to strain through materiality” the more abstract formal experimentation his office had been pursuing in New York....

Los Angeles’s unique culture of fabrication makes it one of the most exciting places to practice in the world today. Drawing on the expertise of fabricators working with Los Angeles-based aerospace, automotive, and entertainment industries, these and other area architects are beginning to materialize designs that until recently were trapped inside their computers. What seems especially appealing is the willingness of Los Angeles fabricators to take on jobs that require extraordinary flexibility in schedule, budget and specifications of final product.

This looseness and embrace of collaboration has fostered a design culture in which fabrication has become an increasingly important engine of design innovation. Architects design by making, by fabricating, which enables them to quickly learn from successes and failures, building the design intelligence required for more refined and robust designs.

I suspect that many of the substantive objections to having computers in the classroom can be boiled down to issues involving bringing a then-disruptive cyberspace into the classroom-- and that we can begin to see how, for some disciplines at least, we could design our way past those problems.

May 02, 2006

I've started reading Everyware: The Dawning Age of Ubiquitous Computing. I'm around thesis 10, with about 75 to go. So far, we're still in fairly basic territory, and I can't quite tell who the book is aimed at: interaction designers and other professionals, or a more general public?

On one hand, the book's strong on the intellectual history of ubicomp-- almost stronger on the history than you'd expect a book for practitioners to be. But the organization into theses makes it a less accessible read, and more prescriptive.

As usual, Gene Becker beat me to the punch, this time getting the Everyware-Martin Luther comparison out before I could.

The book also makes clear just how valuable a biography of Mark Weiser, or at least an article that talks about the origins and development of his concept of ubiquitous computing, would be. Weiser keeps showing up in the story, as the Man With the (Original) Plan, the guy who first imagined what a ubicomp world could be like. Another book or two like this, and he'll be the Buddy Holly of computing.

April 28, 2006

I've been reading up on rapid prototyping technologies, and came across an interesting argument: that the use of 3-D printers, which allow students to make quick physical copies of things they've designed on computers, is making engineering cool, and helping kids develop spatial skills.

Timothy Jump, a teacher at Benilde-St. Margaret's High School, a private college preparatory school in St. Louis Park, Minnesota... [says], "Until 3D printing came along, we were unable to show young people the beauty of the engineering process, taking an initial idea all the way to completion, until late in their educational experience.... 3D printing stimulates a student's mechanical-spatial awareness in ways that textbooks cannot."

Don Jalbert, a CAD/CAM mechanical design instructor at the Lewiston Regional Technical Center in Lewiston, Maine, says 3D printers can help young people realize they have a knack for engineering. "When I taught CAD 10 years ago, the concepts were wholly theoretical because the students could not touch or feel the objects they created. Now with the 3D printer, students can do much more than draw a part. They can evaluate it, refine it, assess how it fits in a larger assembly, and hand it to people. The 3D printer is a great recruiting tool for getting students excited about engineering."

When you think about it, massively multiplayer games are essentially fun-ride versions of CAD and CAAD systems: part of the appeal of Second Life is that you can build all kinds of interesting virtual stuff, from bodies to buildings. It may be that, in the long run, the phenomenon of video games eroding kids' mechanical or spatial skills will be replaced with a pattern in which they translate the design and engineering skills they learn in virtual worlds into the physical world, through the mediation of 3D printing technology. Just a thought.

April 23, 2006

At the end of Shaping Things, Bruce Sterling lays out what the post-spime world might look like.

The step after the Spime Wrangler-- tomorrow's tomorrow-- is neither an object nor a person. It's a Biot, which we can define as an entity which is both object and person.

A Biot would be the logical intermeshing, the blurring of the boundary between Wrangler and spime. This is happening now, but we can't perceive and measure it.

Today, every human being... carries a load of industrial effluent.... A human body can be understood as a sponge of warm salt water within a shell of skin; so everything we emit [or manufacture or consume] ends up partially within ourselves.

Some artificial substances are bioaccumulative; our metabolisms preferentially suck them out of the biosphere and try to make structure out of them. These processes are involuntary and take place beneath our awareness. (134)

A Biot is somebody who knows about this and can deal with the consequences. He's in a position to micromanage and design the processes that shape his own anatomy. (135)

When will we get to the Biot Age? Sterling guesses around 2070. What kinds of technologies will a Biot technosociety create?

In a Biot world, the leading industries are not artifacts, machines, products, gizmos, or spimes, but technologies for shaping human beings.... The driving technologies of a Biot technosociety would be cybernetics, biotechnology, and cognition. (135)

Because some of the most advanced, valuable technologies will be incorporated into the body, or lived with every day (with full awareness of the biological impacts of that contact), and because of the need for more environmentally sustainable design and manufacturing, a Biot technosociety would prefer

technology that can eventually rot and go away all by itself. Its materials and processes are biodegradable, so it's an auto-recycling technology.... It means room-temperature industrial assembly without toxins. (143)

But there will still exist two other kinds of technologies. One will be

artifacts deliberately built to outlast the passage of time. This is very hard to do and much over-estimated. Many objects we consider timeless monuments, such as the Great Pyramid and the Roman Colosseum, are in fact ruins. (143-4)

The other will be

the kind [of technology] I have tried to haltingly describe here. It's a fully documented, trackable, searchable technology. This whirring, ultra-buzzy technology can keep track of all its moving parts and, when its time inevitably comes, it would have the grace and power to turn itself in at the gates of the junkyard and suffer itself to be mindfully pulled apart. It's a toybox for inventive, meddlesome humankind that can put its own toys neatly and safely away. (144-5)

How will spimes help save the world? Bruce Sterling lays out a scenario in Shaping Things. Essentially, it's the first book in which metadata is a superhero.

The fact that objects are divorced from information about them encourages us to focus on and take responsibility for only a tiny part of any object's life, and makes it far harder to perceive the consequences of our encouraging the creation of that object, our consumption of it, or our disposal of it.

Consider a bottle of wine (see chap. 9). Today, our interactions with it are reduced to consulting the price tag, drinking the wine, then throwing away the bottle. But

there must be a mountain of externalities, currently obscured and invisible to me, that involved this object. That growing and fermenting of grapes... topsoil loss, chemical fertilizer, insecticide sprays, the fuels involved in heating and distilling all that liquid.... [Were the workers] suntanned Italian peasantry in the full healthful glow of EU agricultural regulations... [or] illegal African or Albanian immigrants? If that's the case, then I've been inveigled into oppressing these people under a veil of my own ignorance.... Why do I collaborate with someone who forces me, through obscurantism, to do that against my will?...

This bottle sure came a long way. How'd it get here to me? How much carbon dioxide got spewed into the planet's air in order to ship this object into my hands?...

I'm not supposed to worry my head about all of that, but you know something? I know I am paying for it somehow....

What goes around, comes around. If I ignore distant consequences merely because they seem distant, then distant people will similarly inflict their consequences on me. That's a beggar-your-neighbor situation, a race to the bottom.

But suppose I show them how the object came to be, and I link that information to the object. That would be "transparent production."

So a spime is a moral entanglement with a built-in decoder ring. It's no less a savior or destroyer of worlds than any manufactured object that came before; but by laying bare its composition, history, and real costs, you can make better decisions about whether buying and using it will be good for you-- by which I mean, good for you, the world, and the future.

Right now, if these externalities are dealt with at all, they're handled by markets or governments: the price might include a little extra for better labor practices (or it might not), and our taxes cover the costs of disposal and environmental cleanup (or they might not). Our capacity to deal with them independently is pretty limited: knowledge about which companies are socially or environmentally responsible is separated from the point of sale, while detailed information about the composition and history of things is often simply unavailable. Today, how do you know you're making the consumption choice you'd make if you were fully informed? You don't.

This bottle arrived in my possession seemingly stripped of consequences, but those consequences exist.... My relationship to this bottle of wine is a parable of my human relationship to all objects....

My own single-handed effort is entirely unequal to that challenge of discovering all those relationships. I can't simply know enough... but I can't Wrangle all the world's technosocial issues all the time.

It follows that much of this activity should be done for me by other people.

Who would do that? "Designers."

Just as John Markoff argued that the idea of personal computing was invented before the personal computer itself-- that the PC embodied an already-extant notion of how people and computers should relate-- so too does Sterling suggest that fifty years from now, we'll see concepts like the triple bottom line, environmentally aware consumption, and social investing as anticipating the things we'd be able to do, easily and with greater consequence, with spimes.

In laying out his vision of the future in Shaping Things, Bruce Sterling employs two concepts that require a little decoding: metahistory and synchronic society.

Every civilization has a metahistory, a kind of internal cultural logic. One great flaw is that metahistories tend to see themselves as permanent; a contingent metahistory that allowed for the possibility of its own end-- and was more thoughtful about how to avoid that end-- would work better.

Our own current metahistory is damaging in its short-sightedness and has yielded "slow crises cheerfully generated by people rationally pursuing their short-term interests." (41) As Sterling puts it,

The 20th century's industrial infrastructure has run out of time. It can't go on; it's antiquated, dangerous and not sustainable. It's based on a finite amount of ice in our ice caps, of air in our atmosphere, of free room for highways and transmission lines, of room in the dumps, and of combustible filth underground. This is a gathering crisis gloomily manifesting itself in the realm of bad weather and resource warfare. It is the legacy we received from world-shaping industrial titans such as Thomas Edison, and Henry Ford, and John D. Rockefeller-- basically, the three 20th century guys who got us into the Greenhouse Effect. (131)

It's no use starting from the top by ideologically re-educating the consumer to become some bizarre kind of rigid, hairshirt Green.... The only sane way out of a technosociety is through it, into a newer one that knows everything the older one knew.... That means revolutionizing the interplay of human and object. It means bringing more attention and analysis to bear on objects than they have undergone. It also means engaging with the human body and our affordances. (131-132)

The fact that we can insulate ourselves from the histories and consequences of our decisions, and that markets can assist us in that process (by reducing our relationships to things to price, and treating everything from the social consequences of abusive labor practices to the environmental costs of disposal of packaging as an "externality" that neither you nor the manufacturer has to think about), means that we can live in a state of blissful, deadly innocence.

Ironically, in the artifact era, when most humans grew their own food and made their own things-- or were related to those who did-- we knew a lot more about where stuff came from, and the consequences of making things poorly (of using unsustainable farming practices or building a shoddy furnace); but there were also few enough of us so that anything we did was likely to have very little impact on the world.

Our ability to change the world, intentionally or unintentionally, has far outstripped our ability to make sense of those changes. (Will history regard the internal combustion engine, and not nuclear weapons, as the greatest technological terror of the 20th century?)

To deal with this, "[w]e need a designed metahistory," (42) and Sterling thinks it will

combine the computational power of an INFORMATION SOCIETY with the stark interventionist need for a SUSTAINABLE SOCIETY. The one is happening anyway; the other has to happen. (42)

It would be a synchronic society. Such a society

Has a temporalist perspective: it seeks to generate more time and greater opportunity, both at the micro-scale and at the level of civilizations. (To this society, burning fossil fuels is the height of folly.)

Treats objects as expressions of and generators of information, interesting not just for their obvious physical properties.

If we design that metahistory to exploit the power of spimes, which are "information melded with sustainability," (43) we can create a dynamic by which we can preserve and learn from our history, thus giving us the chance to evolve our way out of the current mess. Spimes are especially important because they exist at:

the intersection of two vectors of technosocial development. They have the capacity to change the human relationship to time and material processes, by making those processes blatant and graspable. Every spime is a little metahistorical generator.

History is this technoculture's primary source of wealth. As it transits through time, due to the principles of its organization, it will increase in knowledge, capability, wealth, and power.

The concept of the spime is central to Shaping Things. Most briefly put, a spime is a thing, plus a lot of information about that thing: its design, manufacture, shipping history, provenance, use, and ultimate death. A spime is "a set of relationships first and always, and an object now and then." (77) The information about the spime is more important, and more valuable, than the spime itself.

We don't have spimes today, but we have relationships with things that give a hint of what living with spimes would be like. Think of a book or CD that you love. Your relationship with it began somewhere: maybe you found that book on a rainy day in a musty little bookstore in Cambridge when you were a bright-eyed, naive exchange student, or the CD in a basement music store in the East Village a few shell-shocked months after 9/11. You've developed a relationship with that artifact-- the cover is scratched, the pages are underlined and marked with post-its and coffee stains-- and you've taken it with you on trips to Rome, Kwangju, Curitiba, and Ithaca. Right now, that history is only recorded in the material record of the object itself, or maybe in your blog.

Now imagine every object you own having a history like that. Imagine that that history is recorded in a manner that makes it searchable. And imagine that every experience everyone has with other copies of their spimes is likewise recordable and retrievable.
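To make the idea concrete, here's a toy sketch of what a spime record might look like in code: an object identity plus an append-only, searchable event history. This is purely my own illustration of Sterling's concept; the class, fields, and methods are all invented for this example.

```python
# A toy "spime" record: an object plus a searchable log of its history.
from dataclasses import dataclass, field

@dataclass
class Spime:
    object_id: str
    description: str
    history: list = field(default_factory=list)  # append-only event log

    def log(self, place, event):
        """Record something that happened to the object, and where."""
        self.history.append({"place": place, "event": event})

    def search(self, term):
        """Return every logged event whose place or text mentions the term."""
        term = term.lower()
        return [e for e in self.history
                if term in e["event"].lower() or term in e["place"].lower()]


# The well-travelled book from the paragraph above, as a spime:
book = Spime("book-001", "well-travelled paperback")
book.log("Cambridge", "bought in a musty bookstore on a rainy day")
book.log("Rome", "read on a trip; coffee stain on p. 112")

print(len(book.search("rome")))  # prints 1
```

The point of the exercise is how little of the idea is in the data structure itself: what makes a spime a spime is that this history is captured automatically, shared, and aggregated across every copy of the object, not kept in one person's head or blog.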

So a spime is at once a faint expression of an ideal design, a throwaway expression in atoms of the real object living in the Platonic plane of bits, and a unique object.

I've been distracted the last few days by an unexpected project that's required all my attention, but I'm now in a position to wrap up my reading of Bruce Sterling's Shaping Things. More posts shortly.

Spimes are "manufactured objects whose informational support is so overwhelmingly extensive and rich that they are regarded as material instantiations of an immaterial system." (11) Spimes have Wranglers.

Each kind of thing has a different characteristic technoculture associated with it. (One thing about this book is that there's a weird STS double vision when reading it: whenever Sterling writes about "technosocial" I hear echoes of Bruno Latour.)

Spimes aren't prevalent yet, but they're coming. They'll pose challenges, but we can design around them; "the future can be yours to make." (13)

Shaping Things, Chapter 3:

These aren't hard and fast categories: a bottle of wine, for example, combines elements of an artifact (the wine and vineyard), a machine (the technologies used to ferment the wine), a product (the bottle), and a gizmo (the label, bar code, and related Web site).

Every one of these transitions-- Artifact to Machine to Product to Gizmo-- involves an expansion of information. It enables a deeper, more intimate, more multiplex interaction between humans and artifacts.

Like all gizmos, this wine has a short lifespan; lots of built-in functionality; an interface to a lot of information; and it presents consumers with potential problems of cognitive overload and opportunity costs.
