Category Archives: Interaction

I’ve been cleaning a lot of the cruft out of my domains lately. Subdomains, development domains, MySQL databases originally set up to stage all sorts of nefarious dealings… they’ve all been pulled up by the roots and tossed into heaping piles of gzipped tarballs.

As part of this activity I’ve been cleaning out my Google Analytics account as well, as many of my analytics site profiles refer to domains long gone, testing procedures long concluded, directions I thought my web interests would go but didn’t. Having just made a Great and Terrible Mistake and irreversibly destroyed a trove of information, courtesy of the slop that is the Google Analytics interface, I have penned a cautionary tale to make you aware of two of its most dangerous functions: pagination and deletion.

The pagination tool in Google Analytics defaults to displaying only 10 site profiles per page. Using the dropdown menu you can change this to 5, 10, 20, 35, 50 or 100.

An option to display only five profiles per page? What the hell? In what universe would that be useful? Are we seriously so pressed for bandwidth in 2010 that we cannot afford to peer at the world through more than a pinhole? Further, the cognitive load of needing to choose between six freaking options is ridiculous. It’s a modest burden to bear, but oftentimes interfaces manage to kill their users not through a single fatal flaw, but through an endless series of tiny papercuts such as this.

Seriously, Google Analytics. If you must have pagination, limit the options to 10, 50 and All. And for all that is holy, remember my choice for at least the duration of my session. Needing to reset the number of rows every time I go back to my profile list is maddening, and the fact that I can’t save this option as a personal setting is driving me insane.
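The fix the paragraph above asks for is trivially cheap to build. As a minimal sketch (the key name, default, and `store` object are all hypothetical stand-ins, not anything from the actual Google Analytics codebase), persisting the rows-per-page choice for a session needs only a few lines, with a plain object standing in for the browser’s `sessionStorage`:

```javascript
// Sketch: remember the user's rows-per-page choice for the session.
// `store` stands in for sessionStorage; key name and default are
// illustrative assumptions, not Google Analytics internals.
const DEFAULT_ROWS = 10;

function saveRowsPerPage(store, rows) {
  // Web Storage holds strings, so serialize the number.
  store["profileList.rowsPerPage"] = String(rows);
}

function loadRowsPerPage(store) {
  const saved = parseInt(store["profileList.rowsPerPage"], 10);
  // Fall back to the default when nothing has been saved yet.
  return Number.isNaN(saved) ? DEFAULT_ROWS : saved;
}
```

In a real page you would pass `window.sessionStorage` as `store` and call `loadRowsPerPage` when rendering the profile list, so the choice survives at least until the tab is closed.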

Or would drive me insane, if I hadn’t screwed up in a much bigger way. Pagination in Google Analytics has an additional feature whose destructive tendencies are so finely tuned that they trump even the above critique. To expand on this, we’ll take a quick stroll through the flawed workflow for deleting a site profile.

Deletion: With great power comes insufficient gravity and illustrative consequence surrounding said power.

To delete a site profile, you click the “Delete” link in its corresponding row:

When you click “Delete” a beautiful alert box pops up, a charming implementation of the “Hello World” of any beginner’s JavaScript tutorial:

In the alert box, the profile that will be deleted is not mentioned by name. It is up to you to remember, guess or assume which profile you originally clicked on. The most prominent information on this alert is the domain of the website that initiated the alert. Is that really the most important thing you need to know at this point, in order to make an informed decision? More important than the fact that the profile data cannot be recovered? More important than the name of the profile that’s actually being deleted?

Also note that “OK” is selected by default, so that pressing the return key will delete the profile. With an action as destructive as the irrecoverable deletion of years’ worth of information, it’s insanely poor form to select this choice by default.
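For contrast, here is a minimal sketch of what a more informative confirmation might say. The function name, wording, and `profileName` parameter are all hypothetical, invented for illustration; the point is simply that the message names the thing being destroyed and states the consequence:

```javascript
// Sketch: a confirmation message that names what is actually being
// deleted, rather than the domain that spawned the dialog.
// Function name and wording are illustrative, not the real UI.
function buildDeleteConfirmation(profileName) {
  return (
    `Delete the profile "${profileName}"?\n` +
    "All of its historical data will be permanently lost.\n" +
    "This action cannot be undone."
  );
}
```

And in any real implementation, “Cancel” should be the focused default, so an absent-minded tap of the return key cannot vaporize four years of data.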

Perhaps if creating a sensible “Delete” workflow in Google Analytics were as precious as maximizing clickthrough rates on text ads, we’d see Google employing the same obsessive levels of testing that the color of hyperlinks currently enjoys. As it stands, all I can say is user experience my ass.

One Plus One Equals Gone

The ambiguous delete tool in Google Analytics, combined with its poorly-executed pagination functionality, creates a perfect storm of destruction. No matter what page you are on, when you click “OK” to confirm the deletion of a profile, Google Analytics redirects you to the first page of your profile list.

(The alert box for confirming the delete action appears over your current page. After clicking “OK” from the alert box you are redirected to the first page, losing the context of your delete action.)

Like most humans, I have a finely-tuned spatial memory. I instinctively take note of where things are located in space, I can predict where they will go, and I can remember where they were. If I’m performing a repetitive task, say spooning food into my mouth, I don’t check my plate after every bite to make sure it hasn’t turned into a bowl of snakes. There is an expectation, born from my experience with physical reality, that the plate and food will remain fairly consistent between mouthfuls such that it doesn’t demand constant conscious consideration. In the words of Heidegger, the spoon, plate and food are ready-to-hand, an extension of myself, part of my world of absorbed coping.

In Google Analytics I had identified two profiles that were outdated, and I moved to delete both of them. Spatially, they were located right next to each other, one after the other. I deleted the first one, and instinctively went to the location of the second one, and deleted it as well. The JavaScript alert, boldly declaring https://www.google.com/, was promptly ignored because it offered no useful information to confirm.

So long, dear friends.
Well, numerical representations of friends.

Unbeknownst to me, after deleting the first site profile I had been quietly redirected to the first page of my profiles list. And so, it came to pass that I deleted not the profile I intended to delete, but the profile documenting four years of activity here at Daneomatic. Clearly I’m not the first person to have accidentally (and irrecoverably) deleted a profile from Google Analytics.

Dear friends of Daneomatic, I ask that you enjoy your fresh start. Save your comments, I know nothing of you, of your browsers or activities or search terms.

Please, remake yourselves however you see fit. The gentle fellows at You Look Nice Today may offer some valuable suggestions as to how to best use this opportunity.

I do believe, however, that multitasking and the ready availability of always-on, always-connected technology adversely affects my quality of life in many ways. And I do believe that I personally do not have the faculties necessary to deliberately manage these multiple, constant threads of information on my own.

Thus, my retreats into the woods. Externally-imposed isolation, where connectedness is not an option, is a very different beast than self-imposed isolation, and one I am far more fit to manage.

So, when I look at Campbell’s rig, I do not see it as an ideal to which to aspire, nor do I see it as a symbol of a computer-mediated life gone to horrible extreme. I simply see it as one person’s elaborate setup, their attempt to deal with the deluge of modern information, and I find it valuable and fascinating in its own right. I am here to observe, to sense-make, not to judge.

Really, I believe a focus on the number of screens misses the point, and what I find most interesting is the ecosystem that Campbell has created for himself.

Most poignant for me is the lowly Post-it Note, hanging off his primary monitor, front-and-center. For all the screens, all the software, the physical and spatial world was still implicated to record, display and remind Campbell of a few pressing tasks:

Signup breaks on template

Missing [frigge?] in add input

Trailing slashes on add input

Password reset issues

All recorded with pen and Post-it, and slapped up front on a 27″ monitor.

For all our screens, the physical, embodied world still holds significance and its own, rich meanings.

Mr. Strayer, the trip leader, argues that nature can refresh the brain. “Our senses change. They kind of recalibrate: you notice sounds, like these crickets chirping; you hear the river, the sounds, the smells, you become more connected to the physical environment, the earth, rather than the artificial environment.”

Indoors and Outdoors. Natural and Artificial. Digital and Physical. Isolation and Connectedness.

My final semester of graduate school is now long over, I have spent the last few weeks immersed in the awesome culture that is Adaptive Path, and yet embodied interaction continues to dominate my thoughts.

My results didn’t quite meet my initial expectations. Electronics, it turns out, is still an archaic craft wrapped in cloaks of obtuse language and user-hostile encodings, and is certainly an art unto itself. I realized that to produce the robust interactions I had intended, with all the nuance and detail with which I approach my screen-mediated design work, would take an entire career worth of learning and refinement.

So then, were my efforts with Hans and Umbach all for naught? I don’t believe so. Physical computing exists at the intersection of the physical and the so-called digital worlds, which is why I was originally so interested in studying it. In reflecting extensively on my own process of learning electronics, and simultaneously diving deep on academic research behind notions of embodiment, I came to realize that perhaps stumbling through the craft of linking these two worlds together wasn’t the best use of my strengths.

Because, I realized, the boundary between the physical and digital worlds was a false one.

Indeed, mentally compartmentalizing the physical from the digital makes sense from a computer science perspective, or from a system architecture perspective, but it is a wrong, dead-wrong approach for an interaction design perspective.

Every interaction, whether it is with a coffee cup or a keyboard or pixels on a screen, exists in the physical world, is perceived through our senses, is actively interpreted by us, and is thus rendered meaningful by our interpretation. Whether it is physical or digital, every interaction is embodied, as we only interact with the physical manifestations of digital information.

This was a surprising conclusion to reach, as the whole reason I set out on this inquiry was to prove that the interactions afforded by the devices at the Musee Mecanique were of a different class than those afforded by screens and input devices. What I began to discover, though, is that even our most familiar, most natural, most culturally-embedded interactions are all technologically-mediated.

There is nothing natural about plain paper, dark ink or the printing press; these are all technologies. However, a book differs greatly from an e-book in terms of the richness of its physicality. Screens typically comprise an interaction that is physically impoverished, given the rich range of sensing capabilities we have as human beings. By not engaging our senses for texture, warmth, smell and sound, the e-book is limited in how it engages our sense of embodiment, but it is embodied nevertheless.

Indeed, too much effort has been wasted trying to explain how and why tangible computing is new and different from what came before it… what, intangible computing? I believe that the assertion is irrelevant, that tangible computing is not new, but as an area of inquiry it has given us a new perspective from which to reconsider all interactions, namely that of their embodied qualities. While tangible computing is mostly concerned with the sense of touch and physical manipulability, embodied interaction considers the larger notions of physicality as a whole, the human body as a mediator of experience, the nature of being, and the role of individual interpretation as central to the formation of meaning.

All interactions can benefit from an embodied perspective, not just analog, physical, in-the-world interactions, but so-called digital ones as well. There are all these things in the world, hardly perceptible but nonetheless important, that we use every day to create meaning.

What I continue to outline, through my consideration of embodied interaction, phenomenology and metaphor, is a means by which we can talk about these experiences in such a way that embodiment can better inform our design process.

Okay. It’s been a while. I’ve taken some time off. From this, as well as from embodied interaction.

But let’s get back at it. Embodied interaction, that is.

I’ve had quite some time to decompress about this, take a pause and see what sort of ideas keep bubbling to the surface. And the results are not surprising. Or are surprising. Or whatever.

On a postmodern interpretation of technology, and rejecting a sense of inevitability.

Jaron Lanier. And questioning underlying assumptions, tacit assumptions, the colorless, odorless nature of our technological surroundings. Of our environments. Render explicit to consciousness that our ecologies are not inevitable, that they are not natural, that they are not predestined, that they are constructed and hold no truths. That is not to say they are completely relative, but just that they are subjective. Indeed. Privacy on the internet. That you cannot hear people from your car. Certain responsibilities, the lines we draw between designer and user, producer and consumer, etc.

Who is responsible when my gas pedal sticks? How about when I swerve to miss a deer? Or if I hit the deer? Should the car allow me to swerve so strong that I can flip it? Should it never be allowed to exceed 55 mph? What is the logical limit for the top speed? What is safe? What is safe enough? Given infinite resources, given no technological constraints, where would we find ourselves?

On the future of screen-mediated interactions.

Screens, for instance. There’s nothing inherently wrong with screens. A screen is just a collection of pixels. RGB emissive light, photons in this case. However, they could just as well be CMYK dots, like a newspaper. Like a magazine. Imagine how that would change our relationship with paper. Indeed, in that case you could pinch-zoom the 2010 Rand McNally U.S. Road Atlas. You’ve already tried that. Jake’s already tried to slide-unlock his wallet.

There is a feedback loop here. Digital technology influences how we interact with our analog environments. Not just vice versa. Twenty years ago the analog interaction of operating a Rolodex offered a logical metaphor for the digital interaction of browsing an address book. But no more. How many 15-year-olds today do you think have operated a Rolodex? How many do you think have operated an iPod? Or an iPhone? That metaphor for interaction can no longer be effectively leveraged. Methinks the metaphor of the Rolodex interaction is dead.

So, you take Iron Man 2, with its transparent glass screens, and you think, man, that looks cool at first, but then you realize trying to focus on flat content projected on a transparent screen would be somewhat straining. But. If you can project an image on clear glass like that, who is to say you cannot also project a black background? And now, your office may consist of hundreds of screens, but when they’re off they’re transparent. Open. Airy. They barely exist. And the fact that a screen can be transparent, or can project its own black background for familiar contrast, opens up all sorts of options for augmented reality.

Though, for a truly transparent glass screen (which is merely transitory technology on the way to in-air displays) you come up against problems of auto-stereoscopy and determining the relative perspective of the user’s viewpoint… parallax and the like. I move my head, and the display needs to update accordingly. Or, what if I’m sharing the same augmented reality screen with someone else? They need to see a different display, from a different angle, than I do. Here we need some sort of holography that projects a unique display along all emissive points.

When we think of screens, we need to think beyond the current technological implementation of the screen, and instead think of the screen metaphorically. What are the terms we use to construct our thinking of this display? Can we touch it? Do we touch it? If we touch it does it get greasy? Not necessarily. I predict in a few years that nanotechnology will provide us with materials, perhaps inspired by the leaves of the lotus, that collect no dust, accept no grease (even from the infinitely-greasy human hand). Imagine if all glass were made of such a material.

Is a screen an extension of a book? A viewport into another world? A wormhole? How we align ourselves, socially and culturally, with these artifacts greatly influences how we perceive them, how we conceptualize them, how we imagine ourselves using them. We look back at old science fiction movies and laugh at their cornball conceptualization of the future, but it’s important to recognize that every piece of science fiction is a product of a unique society and culture. Mainstream science fiction especially (or depictions of the future, as seen in Iron Man) needs to consider its sociocultural situatedness.

Ironically, technology in science fiction needs to appear futuristic, but not so much so that it seems unbelievable and unachievable given current understandings of the world. I recently read that plants may use quantum entanglement to maximize the efficiency of photosynthesis, and that quantum entanglement may allow birds to “see” the Earth’s magnetic field, aiding in migration. If these theories turn out to hold weight and thusly become popularized, they will influence our shared, intersubjective world, and become a resource that science fiction can leverage for believably futuristic renderings of the, well, future.

On questioning hegemonies.

I am realizing that one of my roles as a designer is to question, or at least render explicit, the tacit assumptions of the hegemonies in which we conduct our lives. As interaction designers, we have inherited the legacy, a powerful and important legacy at that, of a scientific approach to computation, as well as an initially cognitive-systems approach to interaction. The scientific, non-humanistic origins of our field, I believe, continue to silently influence the way we think about and talk about interaction.

There is a strong, increasingly strong, reaction against these rational histories of human-computer interaction, towards a more experiential model that considers the whole person, their emotions, desires, goals and fears, not only as something to design for, but something to design with. The user as a medium for design. Indeed, the interpretative abilities of the user are an incredible resource that can, nay must, be effectively leveraged by our designs.

The value in a design is not objectively measurable, and is not contained in the designed artifact itself, but in the union between the artifact and the user. The simplest designs are compelling not merely because they are simple, but because they so gracefully leverage the rich intersubjective world of the user (or users) to give them meaning. As phenomenology tells us, these meanings are situated not in the artifact, but in the consciousness of the user herself. Interaction design is concerned not with the objective world, but the messy, subjective world of interpretation. Phenomenology, concerned as it is with reality as it is revealed to and manifest in consciousness, is at the very core of interaction design.

I am proud that interaction design is increasingly concerned with the messy subjective world, that it realizes that an account of the objective qualities of the world is insufficient to design compelling interactions. Nevertheless, I believe there is still significant work to be done in shrugging off the scientific cloak of computation, so that we can truly design future-facing interactions. I believe certain metaphors used for describing our systems have hung on past their prime, and silently and insidiously damage progress in our field. Most notable, as I have described recently, is the conceptualization of a virtual world that exists independently of the physical world.

On dispelling the myth of the virtual world.

While the difference between the physical and digital is certainly important from a technology and computation perspective, I believe it is meaningless from an interactive perspective. Nevertheless, we still speak of making virtual friends, roaming virtual worlds, or downloading digital information. I believe this categorization creates a false boundary between the physical and digital worlds, mischaracterizing the digital and trivializing the real, physical, embodied interactions that happen, that must happen, when a user interacts with the so-called virtual world.

Interacting with a friend in World of Warcraft is greatly different than interacting with them when they’re standing in your living room, but not because one is a “virtual” interaction and the other is a “real” interaction. No, they are both physical interactions, one mediated in co-present physical space (with all the available expressive faculties that come along with such co-presence), and one mediated through keyboard, mouse, screen and audio. To characterize the latter as “virtual” is to casually dismiss the embodied interactions that must happen in order for the conversation to take place, and to neglect possible opportunities to make the interaction more richly embodied.

On disentangling interaction design from its computational roots.

Computer science must necessarily distinguish between hardware and software layers, either of which can branch into any multitude of sub-disciplines. However, users do not necessarily make any such distinction. I have observed college freshmen working with computers, and their conceptual model of computers often does not distinguish between operating system and application, or even local (as in, on their computer) or remote (as in, on the internet). To them, a computer (or even computation as a whole) is one amorphous interactive mass, which, whether we like it or not, is how we have to design it.

Also. We must design in the abstract, but ultimately our designs are interacted with at the ultimate particular level. People never abstractly interact with a product. They only particularly, specifically, interact with something.

Last night I delivered my thesis presentation, effectively completing my master’s degree in human-computer interaction design. Over the last seven months I’ve been conducting a design exploration into the ways we find nature meaningful to us, and uncovering ways to enliven indoor environments with a sense of the outdoors.

Here is the 20-minute presentation:

A big, hearty thanks to everyone who came out to see it live and in person!

There is something different, something tangibly different, about real objects in real space around you. Sound that emanates from two metal pieces clanging together in real space is so much more satisfying than a recording played through a speaker. That they move, that they displace the air around them, the same air that you breathe, is just one of the ways we seem inexplicably tied to the physical realm.

Electronics as a tool for extending computation into the physical world.

This was the goal as we began our inquiry; how to create physical interactions, that exist in the real world and involve the manipulation of real artifacts, that are invisibly-backed by the strengths of modern computation and network technology. In our efforts to reintroduce interaction to the space around us we learned electronics, we experimented with Arduino, and we took our knowledge of programming and extended it into interactive electronic artifacts that existed in the world with us.

As we tinkered with electronics we quickly discovered that all of the subtlety and nuance, as well as challenges, that go into designing “digital” interactions are present in physical computing, only amplified because we were now considering both an electronic and a physical layer. These are two layers that, say, when you build a website or application, you take for granted. With Arduino they become your responsibility, subject to whatever limited grasp you may have of the subject area. We are disappointed by the limited progress we made in learning electronics, but we certainly have a renewed appreciation for people with the wide array of skills necessary to make not only functional, not only good, but great computationally-backed physical interactions.

From a theoretical perspective, our original interests were to understand why these physical in-the-world interactions are so fulfilling and evocative, and why our virtual interactions feel so vapid in comparison. Our goal was to explain why these physical interactions should earn a privileged seat at the table, while virtual interactions should be sent to their room.

Tangible computing: digital information, physical interaction.

As we sifted through the layers of theory on embodiment, we realized we needed a better understanding of what defines a virtual interaction, and how it is different from a physical one. Traditionally, a virtual/digital interaction involves a screen comprised of pixels, with a keyboard and pointing device used for controlling the interface. Tangible computing has worked to categorize interactions, characterizing certain ones as ‘tangible’ and other ones as ‘digital’. Indeed, with ambient computing Ishii and Ullmer have dedicated much of their work to studying how we can render ‘digital’ information, or ‘bits’, in ‘physical’ space. A number of authors have sought to define tangible computing in a manner that differentiates it from ‘regular’ computing. A few common tenets:

Tangible computing unifies input and output surfaces. Instead of a keyboard that adds characters to a screen, or instead of a mouse that moves a cursor, tangible computing offers a new interactive model where the input mechanism and output display are one and the same artifact.

Tangible computing affords direct manipulation. Instead of mapping the physical space my mouse occupies to the virtual space on the screen, the physical object I am working with can be grasped, turned and shaped in order to change its unified output characteristics.

Hornecker outlines three primary views of tangible interaction. The work of Ishii and Ullmer concerns a data-centered viewpoint, where physical artifacts are computationally-augmented with digital information. A second is a perceptual-motor-centered view of tangible interaction, which aims to leverage the affordances and rich sensory experience of physical objects. As championed by Djajadiningrat and Overbeeke among others, this view of tangible interaction emphasizes the expressive nature of human movement. Finally, Hornecker subscribes to a space-centered view of tangible interaction, which involves embedding virtual displays in real-world spaces.

I subscribe to a perceptual-motor-centered view of tangible interaction.

And I believe that the data-centered and space-centered views are absolute nonsense.

Let’s talk about digital and virtual.

Both the data-centered and space-centered views of tangible computing make reference to a ‘digital’ or ‘virtual’ world of information. These concepts are familiar enough, and we all have a gut feeling as to what they mean. Digital information is ones and zeroes. It lives in a computer or on a server somewhere. It is ephemeral, existing without really existing, and can be infinitely accessed, copied, reproduced and distributed without loss. It’s what got the music industry’s panties in a bunch.

The virtual world is the world in which this ‘digital’ information exists, and it lacks many of the familiar characteristics of the physical world. Things don’t actually ‘exist’ in the virtual world. A photo on Flickr is not a photograph in real life. You don’t need to be co-present with the photo in order to see it. Multiple people can look at the same photograph at the same time, without being aware of one another.

The metaphors we use to describe digital information and the virtual world emphasize its distributed, ephemeral nature. Deleted files disappear “into the ether” and we pull things down from “the cloud.” Like an atmosphere that envelops us, we think of it as existing independent of us, independent of the moments we perceive it through our devices.

Nevertheless, digital and virtual are only conceptual metaphors. They do not describe the objective qualities of our networked, computational systems, but rather our subjective framing of them. They are extremely effective metaphors, yes, as characterized by their widespread use and prevalence in thought. But it is nonsense, utter nonsense, to claim that the virtual world exists, and is any different than the physical world.

There is no digital information. There is no virtual world. There is only the physical world, where we encounter mediated instances of so-called ‘digital’ information.

You never see, nor interact with, a virtual world. There is no such thing as a virtual display. The idea of augmenting real, physical objects with digital information is meaningless, as is the idea of augmenting physical spaces with virtual displays.

If a tree falls in the forest and no one hears the sound, it does not make a sound.

My mind’s telling me virtual, but my body says physical.

All of your interactions with the ‘virtual’ world are necessarily mediated by whatever system or tool you are using to access it. This post, for instance, is not virtual. It is a collection of physical pixels emitting physical photons of light, which enter your eye in a pattern that your brain recognizes and interprets as text. This is the case if you are reading this on your laptop, your phone or your iPad. If you want to comment on this post, you will perhaps press physical keys on your laptop to make recognizable characters appear on your physical screen, until they are in an amount and order you deem satisfactory.

Perhaps you will comment by pecking this out on an iPhone or iPad’s ‘virtual’ keyboard. Again, just because the keyboard is rendered on a screen, comprised of pixels, does not mean that it is virtual. Just because there isn’t tactile feedback (technically there is tactile feedback, as your finger doesn’t pass through the device like a ghost, but it may be feedback that doesn’t meet your expectations for a keyboard) doesn’t mean it isn’t physical.

You never have direct, unmediated access to what is metaphorically described as the virtual world. All of your interactions with ‘virtual’ information are necessarily physical, necessarily tangible, and therefore embodied. Thus, anyone who claims that tangibility is a new agenda for computing is sadly mistaken. Tangibility has always been core to our ability to interact with and experience computational devices, from pixels to keyboards to touch screens.

All interactions are not created equal.

The arguments over classification, determining what ‘is’ a tangible interaction versus what ‘is’ a virtual interaction, are completely misguided. All interactions are tangible, all interactions are physical, all interactions are embodied, but all interactions are not necessarily created equal. As humans we have highly-developed capacities to perceive, interpret, and make meaning out of our surroundings. In traditional desktop computing, as well as touch screen computing, devices tend to leverage only our most basic capacities for seeing and touching. These characteristics do not make an interaction virtual, they do not make it intangible, but they do make it physically impoverished.

This was the big surprise the boys and I encountered over the course of this project. We initially set out to explain why physical interactions were more fulfilling than virtual ones, and how the traditional screen, keyboard and mouse ignored all but the most rudimentary human capabilities for interacting with the world. What we realized was that we couldn’t merely categorize some interactions as physical and the other ones as virtual, because all interactions are necessarily situated in and mediated by the physical world. ‘Virtual’ is a convenient conceptual metaphor for describing a certain class of interactions, those that evoke only a limited set of our physical and perceptual capabilities, but the notion of a disembodied virtual world independent of the physical world is absolute nonsense. Moreover, I believe the appeal of the ‘physical’ and ‘virtual’ metaphors, and the territorial battles that have been fought under their banners, have distracted us from far more important agendas.

Traditional desktop interactions are unsatisfying not because they are ‘intangible’ or ‘virtual’, but because they offer an impoverished physical interaction that does little to leverage our unique abilities to perceive, interpret, and make meaning from our surroundings. Tangible computing differentiates itself not because it offers a ‘physical’ representation of ‘digital’ information, but because it uniquely focuses on the tactile qualities of interaction, and the rich sensory experiences that the world can afford.

Ultimately, all interactions are tangible. By acknowledging the metaphorical barrier between the ‘physical’ and ‘virtual’ worlds as a false one, and instead focusing on our ability to deliver richly evocative interactions through these different interactive paradigms, we are empowered to build more compelling interactions.

Through their research, Hans and Umbach have discovered that there is no shortage of brilliant work summarizing the primary concepts of embodied interaction. From Antle to Schiphorst, from Dourish to Hornecker, from Robertson to Sharlin to Lowgren to Fernaeus to Djajadiningrat to Fishkin, everyone seems to be reading the right stuff. Everyone is talking about Heidegger and his hermeneutical phenomenology, a philosophical approach to understanding the way the world is manifest in consciousness, how we interpret our experience with the world, and ultimately how we form meanings with it.

Everyone is channeling Dourish, and his work unifying social computing and tangible computing under the banner of embodied interaction. Many authors are channeling Lakoff and Johnson, and their profound work studying linguistics, metaphors and embodied cognition. Indeed, any text that discusses embodied interaction, without reference to Lakoff and Johnson, is immediately suspect in the boys’ book.

Lakoff and Johnson, and the role of metaphor in human thinking.

Lakoff and Johnson posit that much of our language, and thus much of our thinking, is dependent on our use of metaphors to describe the world. These metaphors are so ingrained in our thinking that we are rarely conscious of their use. For example, we describe time using spatial metaphors, or even material metaphors. Things that are in the future are “ahead” of us, and things in the past are “behind” us. We talk about the speed at which we perceive time passing, and we describe time as though it is water, a continually flowing substance. Time slips through our fingers, we don’t have enough of it, and we frequently run out of it.

Lakoff and Johnson argue that metaphors are not just convenient linguistic tricks we use that allow us to communicate more efficiently with one another, but that our brains are hard-wired to categorize and associate in such a way that we can’t help but think in metaphor. Hans and Umbach have definitely experienced that in the last few months, as they’ve been learning electronics. As they work with circuits and components, trying to build things that work and debug things that don’t, they’re constantly using spatial and material metaphors as a foundation for their thinking. We talk about electricity “flowing” from negative to positive, as though it is water. We talk about resistors resisting (or constricting) the flow of electricity. We talk about capacitors “filling up”, or buttons “closing” a circuit, or transistors “waiting” for a signal.

If we pause just for a moment, none of these thoughts regarding electricity make any sense at all. We can’t see it, so it’s meaningless to “know” or even to “think” that it acts like water, even while this particular mental model sets us up for success when creating a functional circuit. I close things in my environment all the time, such as doors, windows and notebooks, but to say that a pressed button “closes” a circuit is nonsense. Worst of all, how can a transistor “wait”? For something to wait implies that it perceives time, that it can anticipate the future, that it will respond in some manner when the appropriate stimulus presents itself.

Animals wait. Humans wait. Transistors do not wait, and yet this metaphor, that of the transistor as an organism that can anticipate and respond, tells us how to work with them. This, then, is where Lakoff and Johnson’s work gets particularly juicy. Humans are biological creatures with particular sensory capacities. We see light across a particular spectrum, can sense heat across a particular range at varying degrees of sensitivity, and have bodies with arms, fingers and hands that grant us certain abilities for interacting with the world.

Cognition is situated in the body, and the body influences cognition.

J.J. Gibson’s work in ecological psychology argues that action and cognition are radically situated in the environment and inseparable from it, such that you can make no predictions about an organism’s behavior without knowing about the environment in which it is situated. Lakoff and Johnson extend Gibson’s work by channeling the concept of embodied cognition, which similarly claims that cognition is radically situated in the body.

Indeed, according to embodied cognition, the reason we perceive the world the way we do is not necessarily because the world possesses certain perceptible qualities, but rather because our bodies perceive and make sense of the world in a certain way. We perceive time in a certain way because we are hard-wired to experience it in that way. We organize the physical world in time because it is impossible for us to organize it independent of time. The more we learn about quantum mechanics, too, the more we learn that there is little in the world that objectively reflects the common sense human experience of time.

This is not to say that the objective world does not exist, but rather that we need to deliberately consider the way our minds make sense of the world. Since our minds are situated in our bodies, and our bodies have certain capabilities that pre-filter our access to the world, the importance of considering subjective experience as a phenomenon independent of the objective world cannot be overstated.

“I can’t get my body out of my mind.”

The notion of embodied cognition has profound implications, and we can see some of them manifested in the way we talk about, and orient ourselves towards, the physical world. Our bodies are basically symmetrical from left to right, but strongly asymmetrical from front to back. We can see things when they are in front of us, but not when they are behind us. Our limbs are oriented in such a way that we walk in a forward vector, towards our line of sight.

Thus, things that we encounter “in the future” we typically encounter as we walk towards them, and things that we encountered “in the past” are things that are behind us. This asymmetry from front to back gives rise not only to the way we orient ourselves spatially, but also influences how we perceive the world. In this way our bodies’ unique configuration determines our understanding of time, spatially situating our temporal metaphors.

The richer notions of embodiment that Hans and Umbach have discovered over the course of our project consider these notions of metaphor as a fundamental part of how we interpret the world and make meaning of it. These metaphors arise out of the unique qualities and perceptual capabilities of our bodies, such that the way we make sense of and interact with the world is necessarily shaped by our own physical characteristics.

My work with Hans and Umbach on physical computing and embodied interaction took an interesting turn recently, down a path I hadn’t anticipated when I set out to pursue this project. My initial goal with this independent study was to develop the skills necessary to work with electronics and physical computing as a prototyping medium. In recent years, hardware platforms such as Arduino and programming environments such as Wiring have clearly lowered the barrier to entry for getting involved in physical computing, and have allowed even the electronic layman to build some super cool stuff.

Rob Nero presented his TRKBRD prototype at Interaction 10, an infrared touchpad built with Arduino that turns the entire surface of one’s laptop keyboard into a device-free pointing surface. Chris Rojas built an Arduino tank that can be controlled remotely through an iPhone application called TouchOSC. What’s super awesome is that most everyone building this stuff is happy to share their source code, and contribute their discoveries back to the community. The forums on the Arduino website are on fire with helpful tips, and it seems an answer to any technical question is only a Google search away. SparkFun has done tremendous work in making electronics more user-friendly and approachable, offering suggested uses, tutorials and data sheets right alongside the components they sell.

Dourish and Embodied Interaction: Uniting Tangible Computing and Social Computing

In tandem with my continuing education with electronics, I’ve been doing extensive research into embodied interaction, an emerging area of study in HCI that considers how our engagement, perception, and situatedness in the world influences how we interact with computational artifacts. Embodiment is closely related to a philosophical interest of mine, phenomenology, which studies the phenomena of experience and how reality is revealed to, and interpreted by, human consciousness. Phenomenology brackets off the external world and isn’t concerned with establishing a scientifically objective understanding of reality, but rather looks at how reality is experienced through consciousness.

“Embodiment 1. Embodiment means possessing and acting through a physical manifestation in the world.”

Dourish takes issue with this first definition, however, as it places too high a priority on physical presence, and proposes a second iteration:

“Embodiment 2. Embodied phenomena are those that by their very nature occur in real time and real space.”

Indeed, in this definition embodiment is concerned more with participation than physical presence. Dourish uses the example of conversation, which is characterized by minute gestures and movements that hold no objective meaning independent of human interpretation. In “Technology as Experience” McCarthy and Wright use the example of a wink versus a blink. While closing and opening one’s eye is an objective natural phenomenon that exists in the world, the meaning behind a wink is more complicated; there are issues of the intent of the “winker”, whether they intend for the wink to represent flirtation, collusion, or whether they simply had a speck of dirt in their eye. There are also issues of interpretation of the “winkee”, whether they perceive the wink, how they interpret the wink, and whether or not they interpret it as intended by the “winker.”

Thus, Dourish’s second iteration on embodiment deemphasizes physical presence while allowing for these subjective elements that do not exist independent of human consciousness. A wink cannot exist independent of real time and real space, but its meaning involves more than just its physicality. Indeed, Edmund Husserl originally proposed phenomenology in the early 20th century, but it was his student Martin Heidegger who carried it forward into the realm of interpretation. Hermeneutics is an area of study concerned with the theory of interpretation, and thus Heidegger’s hermeneutical phenomenology (or the study of experience and how it is interpreted by consciousness) has become the foundation of much recent phenomenological theory.

Beyond Heidegger, Dourish takes us through Alfred Schutz, who considered intersubjectivity and the social world of phenomenology, and Maurice Merleau-Ponty, who deliberately considered the human body by introducing the embodied nature of perception. In wrapping up, Dourish presents a third definition of embodiment:

“Embodiment 3. Embodied phenomena are those which by their very nature occur in real time and real space. … Embodiment is the property of our engagement with the world that allows us to make it meaningful.”

Thus, Dourish says:

“Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.”

Dourish’s thesis behind “Where The Action Is” is that tangible computing (computer interactions that happen in the world, through the direct manipulation of physical artifacts) and social computing (computer-augmented interaction that involves the continual navigation and reconfiguration of social space) are two sides of the same coin; namely, that of embodied interaction. Just as tangible interactions are necessarily embedded in real space and real time, social interaction is embedded as an active, practical accomplishment between individuals.

According to Dourish, embodied computing is a larger frame that encompasses tangible computing and social computing. This is a significant observation, and “Where The Action Is” is a landmark achievement. But, as Dourish himself admits, there isn’t a whole lot new here. He connects the dots between two seemingly unrelated areas of HCI theory, unifies them under the umbrella term embodied interaction, and leaves it to us to work it out from there.

And I’m not so sure that’s happened. “Where The Action Is” came out nine years ago, and based on the papers I’ve read on embodied interaction, few have attempted to extend the definition beyond Dourish’s work. While I wouldn’t describe his book as inadequate, I would certainly characterize it as a starting point, a significant one at that, for extending our thoughts on computing into the embodied, physical world.

From Physical Computing to Notions of Embodiment

For the last two months I have been researching theories on embodiment, teaching myself physical computing, and reflecting deeply on my experience of learning the arcane language of electronics. Even with all the brilliantly-written books and well-documented tutorials in the world, I find that learning electronics is hard. It frequently violates my common-sense experience with the world, and authors often use familiar metaphors to compensate for this. Indeed, electricity is like water, except when it’s not, and it flows, except when it doesn’t.

In reading my reflections I can trace the evolution of how I’ve been thinking about electronics, how I discover new metaphors that more closely describe my experiences, reject old metaphors, and become increasingly disabused of the notion that this is a domain of expertise I can master in three months. What is interesting is not that I was wrong in my conceptualizations of how electronics work, however, but how I was wrong and how I found myself compensating for it.

While working with a seven-segment display, for instance, I could not figure out which segmented LED mapped to which pin. As I slowly began to figure this out, it did not seem to map to any recognizable pattern, and certainly did not adhere to my expectations. I thought the designers of the display must have had deliberately sinister motives in how their product so effectively violated any sort of common sense interpretation.

To compensate, I drew up my own spatial map, both on paper as well as in my mind, to establish a personal pattern where no external pattern was immediately perceived. “The pin in the upper lefthand corner starts on the middle, center segment,” I told myself, “and spirals out clockwise from there, clockwise for both the segments as well as the pins, skipping the middle-pin common anodes, with the decimal seated awkwardly between the rightmost top and bottom segments.”

It was this personal spatial reasoning, this establishment of my own pattern language to describe how the seven-segment display worked, that made me realize how strongly my own embodied experience determines how I perceive, interact with, and make sense of the world. So long as a micro-controller has been programmed correctly, it doesn’t care which pin maps to which segment. But for me, a bumbling human who is poor at numbers but excels at language, socialization and spatial reasoning, you know, those things that humans are naturally good at, I needed some sort of support mechanism. And that mechanism arose out of my own embodied experience as a real physical being with certain capabilities for navigating and making sense of a real physical world.
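Incidentally, the digit-to-segment encoding itself (as opposed to my cursed pin spiral, which I won't inflict on anyone) is easy to capture in code. Here is a minimal C++ sketch of the conventional a-through-g segment labeling; it is a model for reasoning about the display, not wiring instructions, and the bitmasks assume the standard convention rather than any particular part's datasheet:

```cpp
#include <bitset>
#include <cstdint>

// Segments of a seven-segment display, using the standard labeling:
// a = top, b = upper right, c = lower right, d = bottom,
// e = lower left, f = upper left, g = middle.
// Bit 0 = segment a, bit 1 = segment b, ... bit 6 = segment g.
const uint8_t DIGIT_SEGMENTS[10] = {
    0b0111111, // 0: a b c d e f
    0b0000110, // 1: b c
    0b1011011, // 2: a b d e g
    0b1001111, // 3: a b c d g
    0b1100110, // 4: b c f g
    0b1101101, // 5: a c d f g
    0b1111101, // 6: a c d e f g
    0b0000111, // 7: a b c
    0b1111111, // 8: all seven segments
    0b1101111, // 9: a b c d f g
};

// Count how many segments a digit illuminates. With a shared, fixed
// current budget, more lit segments means each one burns dimmer --
// which is why an "8" looks so much dimmer than a "1".
int litSegments(int digit) {
    return static_cast<int>(std::bitset<8>(DIGIT_SEGMENTS[digit]).count());
}
```

The lookup table does for the code what my spiral mnemonic did for my head: it turns an arbitrary mapping into a pattern that can be consulted rather than re-derived.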

Over time this realization, that I am constantly leveraging my own embodiment as a tool to interpret the world, dwarfed the interest I had in learning electronics. I’m still trying to figure out how to get an 8×8 64-LED matrix to interface with an Arduino through a series of 74HC595N 8-bit shift registers, so I can eventually make it play Pong with a Wii Nunchuk. That said, it’s frustrating that every time I try to do something, the chip I have is not the chip I need, and the chip I need is $10 plus $5 shipping and will arrive in a week, and by the way have I thought about how to send constant current to all the LEDs so they’re all of similar brightness because my segmented number “8” is way dimmer than my segmented number “1” because of all the LEDs that need to light up, and oh yeah, there’s an app for that.

Sigh.

Especially when I’m trying to play Pong on my 8×8 LED matrix, while someone else is already playing Super Mario Bros. on hers.
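For the curious, the piece of the 74HC595 puzzle I do understand can be modeled in a few lines of plain C++. This is a sketch of the bit-shifting logic only, mimicking the MSB-first mode of Arduino's shiftOut function; the struct is my own toy model, not firmware, and the real part also involves a latch pin and output-enable that I'm glossing over:

```cpp
#include <cstdint>

// A toy model of a 74HC595 8-bit shift register: each clock pulse
// shifts the internal state left by one bit and reads a new bit
// into the lowest position.
struct ShiftRegister595 {
    uint8_t state = 0;
    void clockIn(bool bit) {
        state = static_cast<uint8_t>((state << 1) | (bit ? 1 : 0));
    }
};

// Mimics Arduino's shiftOut(dataPin, clockPin, MSBFIRST, value):
// the most significant bit goes out first, so after eight clock
// pulses the register holds the original byte.
void shiftOutMSBFirst(ShiftRegister595 &reg, uint8_t value) {
    for (int i = 7; i >= 0; --i) {
        reg.clockIn((value >> i) & 1);
    }
}
```

After eight clock pulses the register holds the full byte, which is the whole appeal: one 74HC595 can drive an entire row of an 8×8 matrix from just a few microcontroller pins, with more registers daisy-chained as needed.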

Extending Notions of Embodiment into Design Practice

In accordance with Merleau-Ponty and his work introducing the human body to phenomenology, and the work of Lakoff and Johnson in extending our notions of embodied cognition, I believe that the human body itself is central to structuring the way we perceive, interact with, and make sense of the world. Thus, I aim to take up the challenge issued by Dourish, and extend our notions of embodiment as they apply to the design of computational interactions. The goal of my work is to establish a language of embodied interaction that will help design practitioners create more compelling, more engaging, more natural interactions.

Considering physical space and the human body is an enormous topic in interaction design. In a panel at SXSW Interactive last week, Peter Merholz, Michele Perras, David Merrill, Johnny Lee and Nathan Moody discussed computing beyond the desktop as a new interaction paradigm, and Ron Goldin from Lunar discussed touchless invisible interactions in a separate presentation. At Interaction 10, Kendra Shimmell demonstrated her work with environments and movement-based interactions, Matt Cottam presented his considerable work integrating computing technologies with the richly tactile qualities of wood, and Christopher Fahey even gave a shout-out specifically to “Where The Action Is” in his talk on designing the human interface (slide 50 in the deck). The migration of computing off the desktop and into the space of our everyday lives seems only to be accelerating, to the point where Ben Fullerton proposed at Interaction 10 that we as interaction designers need to begin designing not just for connectivity and ubiquity, but for solitude and opportunities to actually disconnect from the world.

Establishing a Language of Embodied Interaction for Design Practitioners

To recap, my goal is to establish a language of embodied interaction that helps designers navigate this increasing delocalization and miniaturization of computing. I don’t know yet what this language will look like, but a few guiding principles seem to be emerging from my work:

All interactions are tangible. There is no such thing as an intangible interaction. I reject the notion that tangible interaction, the direct manipulation of physical representations of digital information, is significantly different from manipulating pixels on a screen, interactions that involve a keyboard or pointing device, or even touch screen interactions.

Tangibility involves all the senses, not just touch. Tangibility considers all the ways that objects make their presence known to us, and involves all senses. A screen is not “intangible” simply because it is composed of pixels. A pixel is merely a colored speck on a screen, which I perceive when its photons reach my eye. Pixels are physical, and exist with us in the real world.

Likewise, a keyboard or mouse is not an intangible interaction simply because it doesn’t afford direct manipulation. I believe the wall that has been erected between historic interactions (such as the keyboard and mouse) and tangible interactions (such as the wonderful Siftables project) is false, and has damaged the agenda of tangible interaction as a whole. These interactions exist on a continuum, not between tangible and intangible, but between richly physical and physically impoverished. A mouse doesn’t allow for a whole lot of nuance of motion or pressure, and a glass touch screen doesn’t richly engage our sense of touch, but they are both necessarily physical interactions. There is an opportunity to improve the tangible nature of all interactions, but it will not happen by categorically rejecting our interactive history on the grounds that it is not tangible.

Everything is physical. There is no such thing as the virtual world, and there is no such thing as a digital interaction. Ishii and Ullmer (PDF link) in the Tangible Media Group at the MIT Media Lab have done extensive work on tangible interactions, characterizing them as physical manifestations of digital information. “Tangible Bits,” the title of their seminal work, largely summarizes this view. Repeatedly in their work, they set up a dichotomy between atoms and bits, physical and digital, real and virtual.

The trouble is, all information that we interact with, no matter if it is in the world or stored as ones and zeroes on a hard drive, shows itself to us in a physical way. I read your text message as a series of Latin characters rendered by physical pixels that emit physical photons from the screen on my mobile device. I perceive your avatar in Second Life in a similar manner. I hear a song on my iPod because the digital information of the file is decoded by the software, which causes the thin membrane in my headphones to vibrate at a particular frequency. Even if I dive deep and study the ones and zeroes that comprise that audio file, I’m still seeing them represented as characters on a screen.

All information, in order to be perceived, must be rendered in some sort of medium. Thus, we can never interact with information directly, and all our interactions are necessarily mediated. As with the supposed wall between tangible interactions and the interactions that preceded them, the wall between physical and digital, or real and virtual, is equally false. We never see nor interact with digital information, only the physical representation of it. We cannot interact with bits, only atoms. We do not and cannot exist in a virtual world, only the real one.

This is not to say that talking with someone in-person is the same as video chatting with them, or talking on the phone, or text messaging back and forth. Each of these interactions is very different based on the type and quality of information you can throw back and forth. It is, however, to illustrate that there isn’t necessarily any difference between a physical interaction and a supposed virtual one.

Thus, what Ishii and Ullmer propose, communicating digital information by embodying it in ambient sounds or water ripples or puffs of air, is no different than communicating it through pixels on a screen. What’s more, these “virtual” experiences we have, the “virtual” friendships we form, the “virtual” worlds we live in, are no different than the physical world, because they are all necessarily revealed to us in the physical world. The limitations of existing computational media may not yet permit the high-bandwidth exchange that face-to-face interaction affords (think of how much we communicate through subtle facial expressions and body language), but the fact that these interactions happen through a screen, rather than at a coffee shop, does not make them virtual. It may, however, make them an impoverished physical interaction, as they do not engage our wide array of senses as a fully in-the-world interaction does.

Again, the dichotomy between real and virtual is false. The dichotomy between physical and digital is false. What we have is a continuum between physically rich and physically impoverished. It is nonsense to speak of digital interactions, or virtual interactions. All interactions are necessarily physical, are mediated by our bodies, and are therefore embodied.

The traditional compartmentalization of senses is a false one. In confining tangible interactions to touch, we ignore how our senses work together to help us interpret the world and make sense of it. The disembodiment of sensory inputs from one another is a byproduct of the compartmentalization of computational output (visual feedback from a screen rendered independently from audio feedback from a speaker, for instance) that contradicts our felt experience with the physical world. “See with your ears” and “hear with your eyes” are not simply convenient metaphors, but describe how our senses work in concert with one another to aid perception and interpretation.

Humans have more than five senses. Our experience with everything is situated in our sense of time. We have a sense of balance, and our sense of proprioception tells us where our limbs are situated in space. We have a sense of temperature and a sense of pain that are related to, but quite independent from, our sense of touch. Indeed, how can a loud sound “hurt” our ears if our sense of pain is tied to touch alone? Further, some animals can see in a wider color spectrum than humans, can sense magnetic or electrical fields, or can detect minute changes in air pressure. If computing somehow made these senses available to humans, how would that change our behavior?

My goal in breaking open these senses is not to arrive at a scientific account of how the brain processes sensory input, but to establish a more complete subjective, phenomenological account that offers a deeper understanding of how the phenomena of experience are revealed to human consciousness. I aim to render explicit the tacit assumptions that we make in our designs as to how they engage the senses, and uncover new design opportunities by mashing them together in unexpected ways.

Embodied Interaction: A Core Principle for Designing the Next Generation of Computing

By transcending the senses and considering the overall experience of our designs in a deeper, more reflective manner, we as interaction designers will be empowered to create more engaging, more fulfilling interactions. By considering the embodied nature of understanding, and how the human body plays a role in mediating interaction, we will be better prepared to design the systems and products for the post-desktop era.

I’ve been working on my capstone project for two semesters now, trying to figure out a way to introduce a slice of the outdoor experience to the inside world. Playing, recreating and simply being outside is something that is extremely important to me, and based on conversations with my research participants, important to them as well.

There’s an apparent dichotomy between the richly engaging, dynamically changing outside world, and the rather static, sterile, sensory-deprivation tank that is the typical indoor workspace. Consider the individual who has established a deep, personal connection to the outdoors, to nature, or to wilderness: how do we improve the quality of life for this person if they have to spend most of their waking hours in an indoor built environment? What sort of experiential qualities are present in an outdoor setting that we can appropriately introduce to an indoor space? How can we do this in a manner that is still aligned with work and business needs?

My interests are not in arriving at a factual, scientifically objective account of outdoor experience, but rather how outdoor spaces are received by our senses, interpreted in our minds, and ultimately made meaningful to us. Mine is a phenomenological approach, where I am concerned with the experience of direct realism. How does nature reveal itself to our consciousness? How does our consciousness interpret the outdoors, and regard it as meaningful? How is the situatedness of the individual, from their perceptual capabilities, to their social and cultural values, to their memories and lived experiences, evoked by a particular experience, and how does it determine how the individual interprets it?

The goal of my capstone project is to establish a series of high-level design principles that help to guide interaction designers who find themselves trying to evoke a sense of the outdoors in an indoor space. I do not precisely know yet what these principles will be, but a few possible threads have bubbled to the surface.

The Biological Thread

Most animals have what is called a circadian rhythm, a biological clock that runs on a 24-hour period and determines when an organism wakes up, does certain activities, and goes to sleep. Animals still heed this internal clock even when deprived of external stimuli, such as the movement of the sun and changes in temperature, and humans are no exception. Despite artificial lighting and built environments, we are still inextricably bound to this rhythm.

The circadian rhythm is clearly an evolutionary response to the 24-hour day of our planet, and in this way our biology is not only situated in, but largely determined by our environment. Our biological nature is born from the nature of the Earth itself, and its subsequent rhythms. Indeed, the natural length of a day is inescapably woven into the biology of our own humanity.

It goes further than that, however. Lakoff and Johnson have done extensive work demonstrating that our use of language, and our thoughts themselves, are tightly coupled to a series of primary metaphors that arise out of our experience with our own bodies. The foundation of human thought is bound up not in some kind of disembodied rationality, argue Lakoff and Johnson, but is rather determined by our own embodied cognition. We talk of purpose as a destination, time in terms of motion, and things that are similar as being close together. These are not just convenient linguistic phrases, but are the very foundation of how we structure and make sense of the world.

Our perceptions and subsequent rationalism are a product of our own embodiment, and our embodiment is a product of our biology. Since our biology evolved in response to the inescapable rhythms of the natural world, it would seem that a connection to the outside world is an undeniably important component of our humanity. To deny the rhythms of the outside world is to deny the very thing that makes us human.

As humans we are unavoidably situated in our biology, which influences how we perceive, categorize and make meaning of the world. A design that aims to communicate a sense of the outdoors must consider the biological connection that makes the natural world intrinsically meaningful to us.

The Cultural Thread

A longstanding claim has been that it is reason, our unique access to a transcendent and objective reality, that distinguishes humans from other animals. The implications of Lakoff and Johnson’s work, that rationality is not disembodied but is rather a product of our own embodiment, stands to elevate other uniquely human activities such as culture and art to a similar level as reason.

This is certainly not to undercut rational thought, which remains an incredibly powerful tool that, in the case of quantum mechanics, continues to unearth a world that is in direct violation of our common-sense notions of direct realism. It is, however, to demonstrate that reason is not the privileged, disembodied force we may think it is, but is rather determined by the unique nature of our own humanity. If reason (that is, human reason) is one important capability that makes us uniquely human, then our other capabilities such as culture and art may be equally important, despite their subjective nature.

Our relationship with the outdoors cannot be described fully in a purely biological, or purely rational, account, as our social and cultural experiences influence our attitudes towards the natural world as well. There is biological precedent for our connection, but the way we ultimately make meaning and form relationships with the outdoors will be highly dependent on the culture we are situated in, and the experiences with the outdoors that we have collected.

As a designer, it is inappropriate to assume that everyone will interpret a palm tree in the same way, or a cactus, or a coniferous tree. For a person in the midwestern United States a palm tree might signify a faraway exotic place to spend spring break, whereas for a person in Florida it may represent just another damn tree. Someone who lives in the mountains may not have the same appreciation for their local topography as someone who grew up in the plains.

The values we associate with the outdoors are heavily influenced by the society and culture we inhabit. A design that aims to communicate a sense of the outdoors must consider the sociocultural relationships its users have with the natural world, and how (or if) it intends to change them.

The Temporal and Perceptual Thread

The natural world changes slowly, often at a rate below immediate human perception. We notice the leaves changing in autumn, but we cannot sit down and literally watch them change. The sun moves across the sky throughout the day, the days grow longer or shorter depending on one’s latitude and the time of year, and the phases of the moon shift. There are, however, changes that we can perceive, such as wind blowing, clouds moving, rain falling, and certainly lightning striking nearby.

The indoor world has limited access to these natural processes, but it does possess some of its own. Co-workers arrive in the morning, fetch their coffee, take bathroom breaks, go to lunch, and eventually filter out for the evening. Human Resources may hang holiday decorations depending on the time of year, and the wear-and-tear of the hallway carpet may become a topic of conversation for bored individuals. Indeed, we are ambiently aware of these processes, often without consciously attending to them or deliberately marking them out.

From an informational standpoint the natural world is always communicating its status, albeit at a level below that of immediate human perception. We notice changes from time to time, but we cannot watch them unfold in real time, because they occur below the threshold of our senses. The sun moves, the phases of the moon change, the trees bud and the flowers bloom, and while all of these channels communicate information about the state of the outdoors, they are far from distracting or overwhelming. Thus, a design for bringing a sense of the outdoors indoors would do well to capture and communicate these slow processes in an elegant manner.

However, part of the intrigue of the outside world is the interplay between these longer, imperceivable processes and the more immediate, perceivable ones. I can’t sit down and watch the sun move across the sky, but on a partly cloudy day I can tell when it comes out from behind a cloud. I can feel and hear the breeze on a windy day, and while I can just barely perceive that thunderhead bearing down on me, I can certainly feel its drenching rain.

This interplay demonstrates how the processes of nature situate themselves in a multi-scalar, almost fractal relationship. Certain changes are perceivable minute-to-minute, hour-to-hour or day-to-day. Others are only noticeable at larger timescales, such as week-to-week, month-to-month or season-to-season. Still other changes are noticeable from year-to-year. The natural world of course works on timescales far beyond this, beyond the limits of human perception and even imagination, and certain creative designs cast a reflective light on even these vast timescales.

A design that aims to communicate a sense of the outdoors must allow for multiple levels of perception and temporal resolution, utilizing different magnitudes of perceivable change to communicate the multi-scalar cyclic relationships of the natural world.
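To make this multi-scalar idea a little more concrete, here is a small Python sketch of one possible approach: sample several natural cycles at different timescales, reduce each to a normalized phase, and blend them into a single slowly drifting value that an ambient display could map onto light or color. This is purely a hypothetical illustration, not anything from my actual studies; the cycle lengths are standard astronomical approximations, the phases are measured from the Unix epoch rather than anchored to any real sunrise or new moon, and the blend weights are arbitrary assumptions.

```python
# Hypothetical sketch: blend natural cycles at several timescales into
# one ambient value. Phases are relative to the Unix epoch, NOT
# astronomically anchored; weights are illustrative assumptions.
import math
from datetime import datetime, timezone

# Approximate cycle lengths in seconds: solar day, synodic month, tropical year.
CYCLES = {
    "day": 86_400.0,
    "lunar_month": 29.530589 * 86_400.0,
    "year": 365.2422 * 86_400.0,
}

def cycle_phases(when: datetime) -> dict:
    """Return each cycle's normalized phase in [0, 1) at the given time."""
    t = when.timestamp()  # seconds since the Unix epoch
    return {name: (t % period) / period for name, period in CYCLES.items()}

def ambient_level(when: datetime) -> float:
    """Blend the cycles into a single value in [0, 1].

    Slower cycles get larger weights, so minute-to-minute change stays
    subtle while seasonal drift dominates -- echoing the multi-scalar
    character of outdoor change.
    """
    phases = cycle_phases(when)
    weights = {"day": 0.2, "lunar_month": 0.3, "year": 0.5}  # assumed blend
    return sum(
        w * (0.5 + 0.5 * math.sin(2 * math.pi * phases[name]))
        for name, w in weights.items()
    )  # weights sum to 1, so the result stays within [0, 1]

phases = cycle_phases(datetime.now(timezone.utc))
assert all(0.0 <= p < 1.0 for p in phases.values())
```

A display driven by `ambient_level` would shift imperceptibly from one glance to the next, yet look noticeably different across a week or a season, which is exactly the layered relationship between perceivable and imperceivable change described above.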

So that largely summarizes my current work. I’m not sure if these are the actual design principles I’m going to roll with, but a few categories definitely seem to be emerging. I’m deeply interested in a phenomenological standpoint that considers sense-making, sensuality and embodied experience as core to my argument. I have found that a key component to my work is the temporal, multi-scalar, cyclic nature of outdoor processes, as well as the differing levels of human perception of those changes. Indeed, these two principles are tightly woven together at this point, but it may make more sense to split them apart.

I’m already realizing that I need a principle that considers space, such as the way sunlight filters through leaves or how crepuscular rays fill outdoor space, and mapping these to surfaces in the office or dust particles in the air. Nature has an interesting way of rendering space visible in subtle ways and using it to communicate information, and I’m fairly certain I need a principle that captures that. I also aim to further explain my design principles by applying them specifically to light as a design medium, based on my lighting studies.

In Summary

As humans we are unavoidably situated in our biology, which influences how we perceive, categorize and make meaning of the world. A design that aims to communicate a sense of the outdoors must consider the biological connection that makes the natural world intrinsically meaningful to us.

The values we associate with the outdoors are heavily influenced by the society and culture we inhabit. A design that aims to communicate a sense of the outdoors must consider the sociocultural relationships its users have with the natural world, and how (or if) it intends to change them.

A design that aims to communicate a sense of the outdoors must allow for multiple levels of perception and temporal resolution, utilizing different magnitudes of perceivable change to communicate the multi-scalar cyclic relationships of the natural world.