On-Object (tentative title) is a physical user interface with sensing and wireless communication capabilities, designed to be dispensed, attached to, and become a part of our everyday objects. This paper first describes the interface currently in conception and explains its scope with reference to taxonomies in product design, online social media and Human-Computer Interaction (HCI). Based on applied linguistics, human-centered design approaches and my personal aspirations, I then articulate the interaction design principles and objectives of On-Object interfaces. According to those criteria, I discuss potential domain examples and value propositions of such an interface. Finally, I explain how On-Object is poised to transform the conventional "device user" of HCI into a "world actor" via its ability to provide emergent interactivity.

Description of On-Object Interface

On-Object is a physical computing interface attachable to physical objects. One subset of On-Object interfaces is form-fitting: it attaches onto the surface of a physical object like a piece of tape or a sticker. Another subset is form-enhancing: it adds a new form factor and affordances, such as a handle, strap, tab or hook, to physical objects.

With its capability to sense hand touch, squeeze or sudden movement, On-Object allows us to detect real-world events that are equivalent to mouse over/rollover, mouse out/rollout, mouse down/press or mouse drag in a graphical user interface (GUI), with our hands acting as an equivalent to a mouse.
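The GUI analogy above can be illustrated with a minimal sketch. All sensor names and thresholds here are hypothetical, not part of the On-Object design itself:

```python
# Classify one frame of (hypothetical) On-Object sensor readings
# into a mouse-like event, per the GUI analogy.

def classify_event(touching, pressure, accel_magnitude, was_touching,
                   press_threshold=0.5, move_threshold=2.0):
    """Return a GUI-analogous event name for one sensor frame."""
    if touching and not was_touching:
        return "rollover"   # hand arrives, like mouse over
    if was_touching and not touching:
        return "rollout"    # hand leaves, like mouse out
    if touching and pressure > press_threshold:
        return "press"      # squeeze, like mouse down
    if touching and accel_magnitude > move_threshold:
        return "drag"       # sudden movement while held, like mouse drag
    return "idle"
```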

figure 1. Basic Usage of On-Object

With its form-fitting and form-enhancing designs, On-Object is intended to become a part of the physical object and environment, unlike most tangible interfaces or electronic gadgets, which are standalone objects or installations themselves. Rather than purchasing a new gadget, learning how to use it and fitting it into the flow of daily life, people who try On-Object pick an object around them which they may already own and apply the interface according to the input and output logic they see as appropriate or desirable, allowing them to create a sensor network based on the semantics they define.

Taxonomy of Objects

Multi-State Objects from Product Design Perspective

Object-oriented thinking and state machines have been instrumental concepts in computer science [1]. While such frameworks have not been widely adopted by the field of product design, design researchers have occasionally used states to discuss designed physical objects.

One prominent example is Per Mollerup's definition of "collapsibles," which are "smart man-made objects with the capacity to adjust in size to meet a practical need" with "two opposite states, one folded and passive, one (or more) unfolded and active." Through principles including rolling, folding, hinging or inflating, objects like umbrellas, eyeglasses, newspapers or rubber boats fit more than one situation. In other words, "collapsibles are man-made accommodations to change." [2]
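Mollerup's definition maps naturally onto a state machine. The following is an illustrative sketch only, with a hypothetical class name:

```python
# A collapsible as a minimal two-state machine, per Mollerup's
# definition: one folded/passive state, one unfolded/active state.

class Collapsible:
    """E.g. an umbrella: folded for carrying, unfolded for rain."""

    def __init__(self):
        self.state = "folded"        # the passive state

    def unfold(self):
        self.state = "unfolded"      # the active state
        return self.state

    def fold(self):
        self.state = "folded"
        return self.state

umbrella = Collapsible()
umbrella.unfold()   # the same object now fits a different situation
```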

Influenced by object-oriented thinking in computer science, interaction designers and user interface design researchers are also appropriating the state model to design the interaction flow of electronics and software products. Design researchers from the University of Tehran reported a case study where object-oriented tools like the Unified Modeling Language (UML) were used in a multidisciplinary product design and development project [3].

As one motivation for such an endeavor, the authors explain that "the smartness and complexity" of many products "make them similar to software systems." Bijan Aryana et al. also add:

Today's advanced products are not only the solid objects, as they have specific behaviors. So, the designers should be able to design their logical entities too.

On-Object reflects the transition in physical product design that the above quote captures -- we are increasingly surrounded by electronics and digitally enhanced artifacts, and are starting to perceive conventional physical objects as possessing not only form and function, but also logic. As an interaction designer I share this perception and, furthermore, desire to add, remove and modify the logic of existing objects while manifesting that logic with audiovisual, tactile and motion output.

By creating and manifesting a logical relationship between user input and system output -- in this case, object and environment output -- the object and our surroundings gain an interactivity that did not previously exist.

Folksonomy and Semantic Appropriation

End-user transformation of everyday physical objects into interactive ones, as described above, is inherently de-centralized and personalized. Instead of professional building designers or retail store owners deciding which artifacts and surfaces are to become interactive, On-Object allows end users to configure the input and output logic of their environment.

De-centralized appropriation of semantic networks has become commonplace in online social media, often referred to as Web 2.0. Users of internet services such as Flickr, Last.fm and Delicious tag digital content, and the aggregated results then influence the decisions users make as they continuously expand and modify the namespace.

According to Kathryn Sarah Macarthur, who studied object classification within an online community, "there are two main ways of classifying objects in a community: initial classification done by field experts (an ontology) and evolutionary tagging done by users of that community (a folksonomy)." Folksonomy in online social media is described by Shirky as "socially created, typically flat name-spaces" [4].

On-Object interfaces encourage individuals to create an interactive "logicspace" in their physical world. You can take a particular object that you already own or are familiar with, then decide which action on which part of the object should be detected, based on your history, habits and usage patterns regarding the object. The semantics you choose to create, such as "turn on a light on my bike helmet when my hands are on the handlebars," are likely to be unique to you as an individual biker, given the hundreds of other input-output mappings you could choose from.
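The helmet example can be sketched as a personal logicspace of bindings from an (object, event) pair to an output action. All names here are illustrative assumptions, not an actual On-Object API:

```python
# A hypothetical "logicspace": user-defined mappings from an input
# event on one object to an output action on another.

rules = {}

def bind(obj, event, action):
    """Record a user-defined mapping from (object, event) to an output."""
    rules[(obj, event)] = action

def dispatch(obj, event):
    """Fire the bound output action, if the user defined one."""
    action = rules.get((obj, event))
    return action() if action else None

# The biker's personal semantics:
bind("handlebars", "touch", lambda: "helmet_light_on")
bind("handlebars", "release", lambda: "helmet_light_off")

dispatch("handlebars", "touch")   # -> "helmet_light_on"
```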

This semantic appropriation of the real world with On-Object resembles an online folksonomy of digital content. In its original definition and scope, however, On-Object focuses on the creation of a personalized logicspace for a given individual, while a folksonomy by definition relies on the aggregation of namespaces created by multiple individuals. Aggregation and negotiation of multiple logicspaces may be relevant for the future development of On-Object.

Interaction Classification

On-Object as a Human-Computer Interaction (HCI) Style

Another helpful and relevant frame of reference for describing the nature of On-Object is the HCI classification in Jun Rekimoto and Katashi Nagao's 1995 publication The World through the Computer [5].

In a desk-top computer (with a GUI as its interaction style), interaction between the user and the computer is isolated from the interaction between the user and the real world. There is a gap between the two interactions. Some researchers are trying to bridge this gap by merging a real desk-top with a desk-top in the computer.

In a virtual reality system, the computer surrounds the user completely and interaction between the user and the real world vanishes.

In the ubiquitous computers environment, the user interacts with the real world but can also interact with computers embodied in the real world.

Rekimoto and Nagao's definition of Augmented Interaction as a "style of human-computer interaction that aims to reduce computer manipulations by using environmental information as implicit input" applies. But the diagram of Augmented Interaction above does not quite explain On-Object, because On-Object is distributed amongst many objects and embedded in the real world, rather than being a wearable overlay display located on the human side. The Ubiquitous Computers diagram also seems applicable, but the distributed computers in On-Object are not nearly as autonomous or discrete as described in the diagram. They are rather a part of the objects that make up the real world, typically lacking any display or textual and pointing input device.

At the moment, I conclude that On-Object is a form of ubiquitous computing through augmentation of the environment, using both environmental information and human actions as implicit input. It is humans' active touch and movement applied to the objects (and to the interface as part of the object) -- the combined effect of the object's materiality, shape and the human action -- that is used as the input. The authors of the paper also predict a combination of ubiquitous computing and Augmented Interaction:

Augmented Interaction tries to achieve its goal by introducing a portable or wearable computer that uses real world situations as implicit commands. Ubiquitous computing realizes the same goal by spreading a large number of computers around the environment ... These two approaches are complementary and can support each other. We believe that in future, human existence will be enhanced by a mixture of the two; ubiquitous computers embodied everywhere, and a portable computer acting as an intimate assistant.

While On-Object does not require a portable computer, it embodies the mixture of embedded computers and the future direction of Augmented Reality that Rekimoto and Nagao envisioned when they suggested that "by incorporating other external factors such as real world IDs, the usefulness of AR should be much more improved."

What makes such a system useful? As one of the limitations of GUIs, Rekimoto's aforementioned paper points out the following:

Objects within a database, which is a computer generated world, can be easily related, but it is hard to make relations among real world objects, or between a real object and a computer based object. ... once a document has been printed out, the system can no longer maintain such an output. It is up to the user to relate these outputs to objects still maintained in the computer. This is at the user’s cost.

With On-Object, you can not only instrument the document you just printed out so it knows when it is being read or destroyed; you can also group two documents simply by applying the same sticker to both and detect group actions as you pin them together. Furthermore, you can associate the group of printouts with the backpack you carry so it reminds you when you forget the documents, or have the stickers discolor when a document is outdated.
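The grouping-by-sticker idea above can be sketched as follows. The representation and names are assumptions for illustration, not a specified On-Object mechanism:

```python
# Stickers sharing an ID group their host objects, so the absence of
# a group member can be detected, as in the backpack reminder above.

from collections import defaultdict

groups = defaultdict(set)

def apply_sticker(sticker_id, obj):
    """Attaching the same sticker ID to several objects groups them."""
    groups[sticker_id].add(obj)

def check_missing(sticker_id, detected_nearby):
    """Report group members not currently detected near the others."""
    return sorted(groups[sticker_id] - set(detected_nearby))

apply_sticker("grant-docs", "printout_1")
apply_sticker("grant-docs", "printout_2")
apply_sticker("grant-docs", "backpack")

# Leaving home with only the backpack: the printouts are flagged.
check_missing("grant-docs", ["backpack"])   # -> ["printout_1", "printout_2"]
```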

On-Object as an Augmented Interaction

The difference between Augmented Interaction and On-Object can be further explained by the classification of Augmented Reality by Emmanuel Dubois and Laurence Nigay [6].

figure 3. Four AR Types by Dubois and Nigay

Dubois and Nigay describe the four types of Augmented Reality as follows. In their own words:

(3) Manipulate real objects for execution: "enable the user to perform new actions in the real world that would not be possible without the computer. For example, using DigitalDesk, the user can perform a cut/paste operation on real drawings."

Depending on the application scenario and the choice of input and output, On-Object could achieve (1), (3), and even (4). The priority, however, is (3), where manipulating real objects with an On-Object interface results in physical output -- such as tactile transformation, visuals and audio -- manifested in the same object or in others that users have virtually linked for a reason.

A potentially meaningful distinction in the augmentation of real-world objects is whether or not the augmentation alters the object physically. While both form-fitting and form-enhancing On-Object interfaces are designed to become a part of the physical object, the two types differ: form-fitting On-Object augments the physical object only by providing it with a programmatically understood set of states, while form-enhancing On-Object augments the object both physically, by adding to its form factor, and digitally, by adding programmability.

Design Principles

With new inventions comes the need to demonstrate their potential value. As a researcher of novel technology, my personal goal is to invent new physical interfaces that can be appreciated by as many people as possible. In order to find novel yet useful applications of On-Object, I have looked at concepts in applied linguistics, user-centered design ethnography, software user interface design, and my previous research ideas.

Charles Goodwin's paper Action and Embodiment within Situated Human Interaction (2000) is an analysis of action from an applied linguistics point of view, aiming to provide an understanding beyond the dichotomy between [verbal] "text" and [non-verbal] "context." [7]

figure 4. Goodwin's Analysis of Girls Playing Hopscotch

From an observation of three girls playing hopscotch, Goodwin explains that "actions are assembled and understood through temporally unfolding juxtaposition of quite different semiotic resources" including speech, body gesture and surroundings, which he calls "semiotic fields." According to Goodwin, action is "built through the visible and public deployment of multiple semiotic fields that mutually elaborate each other" and employ different modalities.

Goodwin describes two concepts that are particularly noteworthy and relevant to the design of On-Object interfaces.

Participation framework: Goodwin explains that semiotic fields constitute "contextual configurations," an array of semiotic fields that participants orient to, and that actions are built through the ever-changing alignment of semiotic fields. Finally, a "participation framework" between people is "built and sustained through the visible embodied actions of the participant." This concept of participation framework can be transposed into an "interaction framework" for the purpose of analyzing a domain scenario and identifying design opportunities for On-Object.

Reflexive adaptation via access to semiotic resources: According to Goodwin, "central to [the actor]'s construction of action is ongoing analysis of how her recipient is positioned to co-participate in the interactive frameworks necessary for the constitution of that action" and "one of the things required for an actor to perform such rapid, reflexive adaptation is access to a set of structurally different semiotic resources." The form factor and material of On-Object interface itself, plus the logic added to an object with the interface can be considered as additional semiotic resources that participants can create and utilize on the fly.

Relational Objects: Enrich Interactive Framework with a Dynamic Output Based on Object Properties

In fact, modifying and expanding semiotic resources has been a personal interest of mine. During my 2008 admissions interview with Professor Hiroshi Ishii of the MIT Media Lab, I presented the following slide:

figure 5. Relational Objects Slide from 2008 Interview

Tentatively labeled "Relational Objects" at that time, one of my research directions was to focus on the way seemingly mundane properties induce peculiar human behaviors, such as the perforation on toilet paper resulting in counting and allocating behavior. I noted that as an interaction designer, I have the choice of enforcing the adherence further by graphically emphasizing the perforation, breaking the adherence by making the properties unreliable, or changing the parameters of adherence by reconfiguring the object.

Another way to change the parameters, I proposed, is to make the object properties dynamic and conditional on environmental and user input. Re-phrasing my hypothesis in Goodwin's terms, I believe designers can induce new behaviors and encourage people to form new relationships with their surroundings by modifying or increasing the semiotic resources people can choose from.

What has changed since then is that On-Object provides end-users with the ability to make such reconfiguration, rather than interaction designers creating objects with built-in logic.

Operatorial Objects: Leverage, Amplify or Create Object Affordances

In addition to objects with visually explicit properties like the perforated toilet paper, there is another class of objects, which I call "Operatorial Objects," that are particularly interesting.

Operatorial Objects are designed to be form-compatible with other objects, and define relationships between those objects. Binder clips, a piece of tape, nails or containers and bags are candidate Operatorial Objects.

As a logical semiotic resource, Operatorial Objects may be compared to operators in programming languages such as addition (+), subtraction (-), or multiplication (*) between two or more variables. If you consider regular objects as variables, applying Operatorial Objects to them may result in a semantic operation.

figure 6. Operatorial Objects

As shown in the figure above, Operatorial Objects rely on their own physical affordances to stick, clamp, enclose, and so on, and on the variable objects' affordances to be stuck on, clamped on, or fit in the enclosure. Based on these affordances, a semiotic relationship is formed between multiple objects.
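The operator analogy above can be made concrete with a short sketch. The compound representation is a hypothetical one chosen for illustration:

```python
# An Operatorial Object such as a binder clip acts like an operator
# over ordinary object "variables", and its result can itself be an
# operand, like a nested expression.

def clip(*operands):
    """A binder-clip 'operator': combine objects into a compound."""
    return {"op": "clip", "operands": list(operands)}

stack = clip("receipt", "invoice", "photo")
bagged = clip(stack, "pen")   # nested, like (a + b) * c in an expression
```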

How do objects' affordances — perceived affordances, in Donald Norman's definition — affect the deployment of semiotic resources? Will the tape form factor, when combined with other objects, present new opportunities? Is one form factor more logical than others?

The idea of the form-enhancing interface stemmed from Operatorial Objects. As you add new affordances to an existing object using form-enhancing On-Object, the new affordance may allow you to create a physical relationship between multiple objects, not only a logical one.

figure 7. Form-Enhancing On-Object as an Operatorial Object

In summary, analyzing our everyday interactions with our surroundings and other people based on Goodwin's taxonomy reveals the following design opportunities for On-Object:

Offer people the ability to create their own semiotic resources when the appropriate ones are missing

Take advantage of existing affordances of the objects when applying On-Object to them (form-fitting)

Create new affordances to existing objects by attaching On-Object (form-enhancing)

Investigate Both Reflexive and Habitual Adaptation

So far I have focused on the reflexive discovery and appropriation of On-Object as a semiotic resource. However, once a logicspace is created by a person and deemed effective, the person may use those semantics repeatedly. In fact, the repeated use of the same semantic may indicate its effectiveness.

In the scope of my research, I intend to stay inclusive and investigate both reflexive and habitual adaptation of On-Object to gain insight about what affordances allow reflexive as opposed to, or in addition to, habitual responses.

Reduce the Action and Referential Cost to a User

An inspiration for the tentative name of this research comes from my previous work experience as a Product Designer in Microsoft's Office division between 2002 and 2005. Since the 2003 version, Windows-compatible Office products have employed "on-object user interfaces," more frequently referred to as "OOUI." When you paste a piece of text into a document in Word 2003, or a word you typed is highlighted by the auto spell correction function, a small icon appears under the last word. When you click the icon you are presented with multiple actions to perform.

Because the majority of the Office user interface consisted of menus and toolbars located at the top of the application window, overlaying a piece of interface on top of the text object was considered novel. Beyond the novelty, it was hailed for its convenience, thanks to its context-sensitivity, and for its efficiency, due to the short distance the mouse needs to move. The next version of Office, released in 2007, utilized OOUI more broadly, so that, for example, highlighting a piece of text brings up the most common commands — usually formatting functions.

figure 8. On-Object UI in Word 2003 (left) and Word 2007 (right)

Jensen Harris, a Program Manager involved in the design decisions, highlighted two virtues of OOUI on his blog [8]. First, OOUIs appear in the context of an existing action, in this case the mouse action of highlighting or pasting, which in turn improves the efficiency of mouse users. Second, OOUIs blend easily into the context thanks to their transparency control. According to Harris, [the team] "wanted to make sure that it's not annoying by designing it to be incredibly shy and easy to dismiss."

Mark Weiser, in his 1992 article Some Computer Science Issues in Ubiquitous Computing, wrote that ubiquitous computers aim to be "invisible in use" [9]. Tolmie et al. of Xerox Europe also investigated "what it means for features of activities to be 'unremarkable'" [10]. As a potential ubiquitous computing concept, On-Object may resemble Microsoft's OOUI in this way — as a physical OOUI, On-Object aims to become a part of the world, blending into the environment and into the user's actions.

In the aforementioned paper, Rekimoto and Nagao note that maintaining relations between objects comes at the user's cost. In addition to adding to the rich array of semiotic resources necessary for our actions and interactions with one another, another goal for On-Object may be to reduce this cost by making the interface context-sensitive and easy to reach, yet easy to dismiss.

Strive for a Better, Delightful Experience and an Enduring Solution

Aside from efficiency and utility for effective communication, our actions also have creative, generative and emotional aspects. Jane Fulton Suri at IDEO -- where I worked as an Interaction Designer between 2005 and 2007 -- is an expert in the appropriation of everyday objects as a design inspiration.

Her book Thoughtless Acts is a photobook of "the subtle and amusing ways... we adapt, exploit, and react to things in our environments; things we do without really thinking." These thoughtless acts "reveal how people behave in a world not always perfectly tailored to their needs." Fulton Suri writes:

Things used in unintended ways ... usually indicate something about people's needs. And needs often translate to design opportunities... We develop exquisite awareness of the possibilities and sensory qualities of different materials, forms, and textures. This awareness is evident from our actions, even when we are not conscious of them—these are our "thoughtless acts." [11]

figure 9. Categories of "Thoughtless Acts"

Rather than providing a definitive explanation, she invites designers and researchers to observe the phenomena of thoughtless acts and ponder "what are the implied human motivations and needs, and how might design respond to these?" Subsequently, Fulton Suri urges designers to not only discover the hidden needs of the people, but "uncover emotional experiences and build a better experience":

Interpretation and speculation inevitably take us a step beyond the purely objective to a subjective level, where we draw on empathy. We have an empathic sense, for example, of what the girl is feeling as she cools her forehead with a chilled soda can [on the book's page 69].

Going beyond the utilitarian and functional level with my research project is something I strive for, as someone with a background in experience design. Considering the sensory and evocative aspects of the user experience is the first dimension. In designing On-Object applications, for example, it may be worthwhile to look for situations where adding tactile or sensory resources can result in a more desirable experience.

Secondly, emphasizing "the realities of everyday behavior, of design in use," she hopes the book will inspire more "flexible and enduring solutions." In the book Human-Machine Reconfigurations, Lucy Suchman makes a similar statement:

... the object of design must shift. Rather than fixed objects that prescribe their use, artifacts -- particularly computationally based devices -- comprise a medium or starting place elaborated in use.

By employing form factors that are compatible with a multitude of existing objects, I hope On-Object will be a flexible, widely applicable and reliable resource for those who try it.

The third dimension of building a better experience is allowing people to utilize their resident knowledge about their own surroundings and belongings. During the first week of conceiving On-Object as a research project, Ivan Poupyrev of Sony CSL asked me why I would use existing objects, instead of designing and giving people the objects to apply my interface to. It is because everyone is an active player in configuring our world. Fulton Suri also says in the book:

We are all active in arranging and adapting things -- everyone is an expert in the design of efficiency and convenience in their own world ... each of us possesses unique knowledge that we use in creative ways to achieve our personal and social goals.

There have, of course, been a number of research projects regarding common objects that are physically tagged for digital augmentation. These projects have applied their interface systems to the following domains:

Military workplace CSCW, for efficient information management and communication [14]

Programming tools for physical interactive environments that are appropriate for use by young children of age 4-6 [15]

Home automation and programming of lights and appliances by assigning input objects and output objects wirelessly linked or wired with each other [16]

Potential Domains for On-Objects, a Work in Progress

Based on the definition so far, On-Object can be particularly useful where:

The products in the market don't necessarily satisfy niche needs, and combinations of existing products are not enough

A sizeable population is heavily customizing their situation and finding workarounds

Conductivity

Two or more objects need to be instantly linked

Immediate and improvised actions need to be tracked

For instance, consider bike messengers, truckers, carpenters, or parents with strollers: the constraints of their work cause them to make all kinds of tweaks to their clothes, gear and communication devices. While they are relatively non-conforming do-it-yourself (DIY) types, their set of gear, devices and tools is also relatively well defined. For example, a bike messenger may use On-Object to detect the slope of the road and change the steering gear, detect squeezing on the brakes to turn on a brake light, or light up a signal light based on how the handlebar is squeezed.

Retail environments: Most stores are equipped with dozens of racks and hundreds of hangers and hooks. Many hangers have clothing sizes marked on them, too. Imagine hangers and wall hooks equipped with On-Object. Attach a price tag in the form of a loop and have it interface with the hangers to track the pairing and how many times the piece of clothing has been tried on. When packing groceries, goods and clothes, associate the goods with the bag or container so that customers don't lose them.

Hospitals, research labs, veterinarians and zoos: Instantly tagging patients' bodies and relating them to documents and medications takes up a good chunk of the work in these environments.

Like the rest of this paper, identifying domain applications for On-Object is still in conception and will continue for the next year.

From Device Users to World Actors

A remarkable difference between On-Object and other tangible user interfaces (TUIs) is that instead of providing a specially designed object or installation like the I/O Brush or Audiopad, On-Object provides an accessory to existing objects. It also sets On-Object apart from standalone electronic gadgets such as mobile phones or digital cameras.

figure 10. I/O Brush (Left), Audiopad (Middle), inTouch (Right)

This departure from offering discrete devices, and instead encouraging people to appropriate the interface in the context of their existing world, has another implication for HCI research. The field of HCI has largely described and prescribed how a "user" of a discrete computational "device" or closed-loop "system" performs. With augmented interactions and ubiquitous embedding of computational capabilities, On-Object does not fit the conventional HCI model. Rather, it concerns how a person acts in their world to continuously assess and configure it to suit their needs and have a more desirable experience.

On-Object is not the only example that signals this transition from "device users" to "world actors" as the subject of HCI. As future user interfaces take the forms of soft materials, wearables or ambient media, the subject of Human-Computer Interaction (HCI) will rather be "actors" who perceive, adapt to, rearrange and punctuate the "world" around them including their clothing, residences, lighting or food.

Actors that operate within a semantic network are the basis of Actor-Network Theory (ANT). As a 'material-semiotic' method, ANT "maps relations that are simultaneously material (between things) and 'semiotic' (between concepts)" [17]. These networks of relations are transient, in need of continuous definition and redefinition through discovery, appropriation and negotiation.

In this frame of thinking, actors including humans and non-human objects equally contribute to the forming of the network ("generalized symmetry"), and agency is understood as a material-semiotic attribute not locatable in either humans or nonhumans [18].

However, in my opinion, a network only exists in the human eye because we define it, perceive it and use it. When humans are the only ones perceiving and conceiving these networks, it is an error to say that humans and objects contribute to them equally. How can objects contribute in the same fashion when they do not know about the network?

The only reason they might is that we humans made the objects to think and feel like we do. Computational and software-driven objects like my On-Object interface, or any TUI for that matter, can be programmed to think and behave in a certain way. But the agency we are able to program into these TUIs is still greatly limited and contrived, a far cry from making them actors equal to humans.

Suchman also notes in the aforementioned book:

Mutualities, moreover, are not necessarily symmetries. My own analysis suggests that persons and artifacts do not constitute each other in the same way ... As Pickering points out with respect to humans and nonhumans, "Semiotically, these things can be made equivalent; in practice they are not."

Although it is based on a perspective that emphasizes the semiotic configuration of situated actions, and emphasizes both intangible logic and tangible materiality, On-Object is not considered an actor with agency itself. Rather, it is a material and semiotic resource for the human actors.