Abstract

Exploratory search is a difficult activity that requires iterative interaction. This iterative process helps the searcher to understand and to refine the information need. It also generates a rich set of data that can be used effectively to reflect on what has been found (and found useful). In this paper, we describe a framework for unifying transitions among various stages of exploratory search, and show how context from one stage can be applied to the next. The framework can be used both to describe existing information-seeking interactions, and as a means of generating novel ones. We illustrate the framework with examples from a session-based exploratory search system prototype that we have built.

Abstract

Virtual, mobile, and mixed reality systems have diverse uses for data visualization and remote collaboration in industrial settings, especially factories. We report our experiences in designing complex mixed-reality collaboration, control, and display systems for a real-world factory that deliver real-time factory information to multiple users. In collaboration with (blank for review), a chocolate maker in San Francisco, our research group is building a virtual “mirror” world of a real-world chocolate factory and its processes. Real-world sensor data (such as temperature and machine state) is imported into the 3D environment from hundreds of sensors on the factory floor. Multi-camera imagery from the factory is also available in the multi-user 3D factory environment. The resulting "virtual factory" is designed for simulation, visualization, and collaboration, using a set of interlinked, real-time 3D and 2D layers of information about the factory and its processes. We are also investigating appropriate industrial uses for mobile devices such as cell phones and tablet computers, and how they intersect with virtual worlds and mixed realities. For example, an experimental iPhone web app provides mobile laboratory monitoring and control: it offers a real-time view into the lab via a steerable camera and remote control of lab machines. The mobile system is integrated with the database underlying the virtual factory world. These systems were deployed at the real-world factory and lab in 2009 and are now in beta development. Through this mashup of mobile, social, mixed, and virtual technologies, we hope to create industrial systems for enhanced collaboration between physically remote people and places – for example, factories in China with managers in Japan or the US.

Abstract

Over the years I have enjoyed Mermin's colorful, idiosyncratic, and insightful papers. His interest in the foundations of quantum mechanics has led him to discover alternative explanations for various quantum mechanical puzzles and protocols. These explanations are often superior to previous explanations in both simplicity and insight, and even when they are not outright better, they provide a valuable alternative point of view. His book is filled with such explanations, and with strong, sometimes controversial, opinions on the right way of seeing something, which make it both valuable and entertaining.

Abstract

Photo libraries are growing in quantity and size, requiring better support for locating desired photographs. MediaGLOW is an interactive visual workspace designed to address this concern. It uses attributes such as visual appearance, GPS locations, user-assigned tags, and dates to filter and group photos. An automatic layout algorithm positions photos with similar attributes near each other to support users in serendipitously finding multiple relevant photos. In addition, the system can explicitly select photos similar to specified photos. We conducted a user evaluation to determine the benefit provided by similarity layout and the relative advantages offered by the different layout similarity criteria and attribute filters. Study participants had to locate photos matching probe statements. In some tasks, participants were restricted to a single layout similarity criterion and filter option. Participants used multiple attributes to filter photos. Layout by similarity without additional filters turned out to be one of the most used strategies and was especially beneficial for geographical similarity. Lastly, the relative appropriateness of the single similarity criterion to the probe significantly affected retrieval performance.
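A similarity-driven layout of the kind described can be sketched as a simple stress-minimization loop in which the target distance between two photos shrinks as their attribute similarity grows. This is an illustrative reconstruction under stated assumptions, not MediaGLOW's actual algorithm; the `sim` function and its Jaccard tag similarity are assumptions for the example.

```python
import math, random

def layout_by_similarity(photos, sim, iters=200, lr=0.05):
    """Place photos in 2D so pairs with high similarity end up close.

    photos: list of ids; sim(a, b): similarity in [0, 1].
    Returns {photo_id: (x, y)}. Each step nudges a pair toward a
    target distance of 1 - similarity (a basic stress-majorization idea).
    """
    random.seed(0)
    pos = {p: [random.random(), random.random()] for p in photos}
    for _ in range(iters):
        for i, a in enumerate(photos):
            for b in photos[i + 1:]:
                ax, ay = pos[a]
                bx, by = pos[b]
                dx, dy = bx - ax, by - ay
                d = math.hypot(dx, dy) or 1e-9
                target = 1.0 - sim(a, b)       # similar -> small target distance
                force = lr * (d - target) / d  # pull together or push apart
                pos[a][0] += force * dx; pos[a][1] += force * dy
                pos[b][0] -= force * dx; pos[b][1] -= force * dy
    return {p: tuple(xy) for p, xy in pos.items()}

# Toy attributes: photos sharing a tag are similar (Jaccard overlap).
tags = {"p1": {"beach"}, "p2": {"beach"}, "p3": {"city"}}
sim = lambda a, b: len(tags[a] & tags[b]) / len(tags[a] | tags[b])
pos = layout_by_similarity(list(tags), sim)
```

After the loop, the two beach photos sit close together while the city photo ends up roughly a unit away, which is the serendipitous-grouping effect the layout aims for.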

Abstract

Creating virtual models of real spaces and objects is cumbersome and time consuming. This paper focuses on the problem of geometric reconstruction from sparse data obtained from certain image-based modeling approaches. A number of elegant and simple-to-state problems arise concerning when the geometry can be reconstructed. We describe results and counterexamples, and list open problems.

Abstract

Twitter provides a search interface to its data, along the lines of traditional search engines. But a single ranked list is a poor way to represent the richly-structured Twitter data. A more structured approach that recognizes original messages, re-tweets, people, and documents as interesting constructs is more appropriate for this kind of data. In this paper, we describe a prototype for exploring search results delivered by Twitter. The design is based on our own experience with Twitter search, as well as on the results of a small online questionnaire.

Abstract

The use of whiteboards is pervasive across a wide range of work domains. But some of the qualities that make them successful—an intuitive interface, physical working space, and easy erasure—inherently make them poor tools for archival and reuse. If whiteboard content could be made available in times and spaces beyond those supported by the whiteboard alone, how might it be appropriated? We explore this question via ReBoard, a system that automatically captures whiteboard images and makes them accessible through a novel set of user-centered access tools. Through the lens of a seven week workplace field study, we found that by enabling new workflows, ReBoard increased the value of whiteboard content for collaboration.

Abstract

The modern workplace is inherently collaborative, and this collaboration relies on effective communication among coworkers. Many communication tools – email, blogs, wikis, Twitter, etc. – have become increasingly available and accepted in workplace communications. In this paper, we report on a study of communications technologies used over a one-year period in a small US corporation. We found that participants used a large number of communication tools for different purposes, and that the introduction of new tools did not significantly impact the use of previously-adopted technologies. Further, we identified distinct classes of users based on patterns of tool use. This work has implications for the design of technology in the evolving ecology of communication tools.

Abstract

PACER is a gesture-based interactive paper system that supports fine-grained paper document content manipulation through the touch screen of a cameraphone. Using the phone's camera, PACER links a paper document to its digital version based on visual features. It adopts camera-based phone motion detection for embodied gestures (e.g. marquees, underlines, and lassos), with which users can flexibly select and interact with document details (e.g. individual words, symbols, and pixels). Touch input is incorporated to facilitate target selection at fine granularity, and to address some limitations of the embodied interaction, such as hand jitter and low input sampling rate. This hybrid interaction is coupled with other techniques such as semi-real-time document tracking and loose physical-digital document registration, offering a gesture-based command system. We demonstrate the use of PACER in various scenarios including work-related reading, maps, and music score playing. A preliminary user study of the design has produced encouraging user feedback and suggested future research for better understanding embodied vs. touch interaction and one- vs. two-handed interaction.

Abstract

In certain applications such as radiology and imagery analysis, it is important to minimize errors. In this paper we evaluate a structured inspection method that uses eye tracking information as a feedback mechanism to the image inspector. Our two-phase method starts with a free viewing phase during which gaze data is collected. During the next phase, we either segment the image, mask previously seen areas of the image, or combine the two techniques, and repeat the search. We compare the different methods proposed for the second search phase by evaluating the inspection method using true positive and false negative rates, and subjective workload. Results show that gaze-blocked configurations reduced the subjective workload, and that gaze-blocking without segmentation showed the largest increase in true positive identifications and the largest decrease in false negative identifications of previously unseen objects.
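The gaze-masking idea can be illustrated with a small sketch: fixations collected in the free-viewing phase mark a neighborhood of image cells as "seen", and the second pass blocks those regions so the inspector concentrates on unseen areas. The grid representation, the `radius` parameter, and the function name are illustrative assumptions, not the study's implementation.

```python
def mask_seen(width, height, fixations, radius=2):
    """Return a boolean grid marking cells already inspected.

    fixations: list of (x, y) gaze points from the free-viewing phase.
    Cells within `radius` of any fixation are flagged as seen and can be
    masked out in the second search phase.
    """
    seen = [[False] * width for _ in range(height)]
    for fx, fy in fixations:
        for y in range(max(0, fy - radius), min(height, fy + radius + 1)):
            for x in range(max(0, fx - radius), min(width, fx + radius + 1)):
                if (x - fx) ** 2 + (y - fy) ** 2 <= radius ** 2:
                    seen[y][x] = True  # inside the fixation neighborhood
    return seen

# Two fixations on a 10x10 grid; everything else stays available to search.
seen = mask_seen(10, 10, [(2, 2), (7, 7)], radius=2)
unseen_cells = sum(row.count(False) for row in seen)
```

Each radius-2 disc covers 13 cells, so the two non-overlapping fixations leave 74 of the 100 cells unmasked for the second pass.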

Abstract

This project investigates practical uses of virtual, mobile, and mixed reality systems in industrial settings, in particular control and collaboration applications for factories. In collaboration with TCHO, a chocolate maker start-up in San Francisco, we have built virtual mirror-world representations of a real-world chocolate factory and are importing its data and modeling its processes. The system integrates mobile devices such as cell phones and tablet computers. The resulting "virtual factory" is a cross-reality environment designed for simulation, visualization, and collaboration, using a set of interlinked, real-time 3D and 2D layers of information about the factory and its processes.

Abstract

Paper is static but it is also light, flexible, robust, and has high resolution for reading documents in various scenarios. Digital devices will likely never match the flexibility of paper, but come with all of the benefits of computation and networking. Tags provide a simple means of bridging the gap between the two media to get the most out of both. In this paper, we explore the tradeoffs between two different types of tagging technologies – marker-based and content-based – through the lens of four systems we have developed and evaluated at our lab. From our experiences, we extrapolate issues for designers to consider when developing systems that transition between paper and digital content in a variety of different scenarios.

Abstract

Browsing and searching for documents in large, online enterprise document repositories are common activities. While internet search produces satisfying results for most user queries, enterprise search has not been as successful because of differences in document types and user requirements. To support users in finding the information they need in their online enterprise repository, we created DocuBrowse, a faceted document browsing and search system.
Search results are presented within the user-created document hierarchy, showing only directories and documents matching selected facets and containing text query terms. In addition to file properties such as date and file size, automatically detected document types, or genres, serve as one of the search facets. Highlighting draws the user’s attention to the most promising directories and documents while thumbnail images and automatically identified keyphrases help select appropriate documents. DocuBrowse utilizes document similarities, browsing histories, and recommender system techniques to suggest additional promising documents for the current facet and content filters.
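The facet-plus-query pruning of a document hierarchy can be sketched as a recursive filter that keeps a directory only when some descendant document matches. The tree shape, facet keys, and field names here are hypothetical illustrations, not DocuBrowse's actual data model.

```python
def filter_tree(node, facets, terms):
    """Prune a directory tree: keep documents matching every selected facet
    and containing every query term; keep a directory only if it still
    holds a matching document somewhere below it.

    node: {"name": str, "dirs": [...], "docs": [...]} (hypothetical shape).
    facets: e.g. {"genre": "paper"}. terms: free-text query string.
    """
    docs = [d for d in node["docs"]
            if all(d.get(k) == v for k, v in facets.items())
            and all(t in d["text"].lower() for t in terms.lower().split())]
    dirs = [sub for sub in (filter_tree(s, facets, terms) for s in node["dirs"])
            if sub is not None]
    # Empty directories disappear, mirroring the filtered hierarchy view.
    return {"name": node["name"], "dirs": dirs, "docs": docs} if docs or dirs else None

tree = {"name": "root", "docs": [], "dirs": [
    {"name": "reports", "dirs": [], "docs": [
        {"name": "a.pdf", "genre": "paper", "text": "enterprise search study"}]},
    {"name": "photos", "dirs": [], "docs": [
        {"name": "b.jpg", "genre": "photo", "text": "team offsite"}]}]}
result = filter_tree(tree, {"genre": "paper"}, "search")
```

Filtering for the "paper" genre and the term "search" keeps only the `reports` branch with `a.pdf`; the `photos` directory vanishes because nothing below it matches.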

Abstract

Embedded Media Markers, or simply EMMs, are nearly transparent iconic marks printed on paper documents that signify the existence of media associated with that part of the document. EMMs also guide users' camera operations for media retrieval. Users take a picture of an EMM-signified document patch using a cell phone, and the media associated with the EMM-signified document location is displayed on the phone. Unlike bar codes, EMMs are nearly transparent and thus do not interfere with the document contents. Retrieval of media associated with an EMM is based on image local features of the captured EMM-signified document patch. This paper describes a technique for semi-automatically placing an EMM at a location in a document, in such a way that it encompasses sufficient identification features with minimal disturbance to the original document.
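One way to sketch the placement criterion is to score candidate patches by the number of local feature points they contain, penalized by overlap with content the mark would disturb. The patch size, step, scoring weight, and text-box representation below are hypothetical choices for illustration, not the paper's technique.

```python
def best_emm_location(keypoints, text_boxes, patch=50, step=25, extent=(200, 200)):
    """Pick a patch with many identification features and little overlap
    with content regions the mark should not disturb.

    keypoints: (x, y) local feature points; text_boxes: (x0, y0, x1, y1)
    regions to avoid. Returns the top-left corner of the best patch.
    """
    def overlap(px, py):
        # Total area of intersection between the patch and avoid-regions.
        a = 0
        for x0, y0, x1, y1 in text_boxes:
            w = max(0, min(px + patch, x1) - max(px, x0))
            h = max(0, min(py + patch, y1) - max(py, y0))
            a += w * h
        return a

    best, best_score = None, float("-inf")
    for py in range(0, extent[1] - patch + 1, step):
        for px in range(0, extent[0] - patch + 1, step):
            feats = sum(px <= x < px + patch and py <= y < py + patch
                        for x, y in keypoints)
            # Hypothetical weighting: features minus normalized disturbance.
            score = feats - overlap(px, py) / (patch * patch)
            if score > best_score:
                best, best_score = (px, py), score
    return best

# Feature points cluster around (60..80, 60..80); a text strip sits on top.
loc = best_emm_location([(60, 60), (70, 65), (80, 70), (65, 80)],
                        [(0, 0, 200, 20)])
```

The only candidate patch covering all four feature points while avoiding the text strip is the one at (50, 50), so the scorer selects it.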

Abstract

The current trend toward high-performance mobile networks and increasingly sophisticated mobile devices has fostered the growth of mobile workers. In mobile environments, an urgent need exists for handling documents using a mobile phone, especially for browsing documents and viewing Rich Contents created on computers. This paper describes Seamless Document Handling, a technology for viewing electronic documents and Rich Contents on the small screen of a mobile phone. To enhance operability and readability, we devised a method of scrolling documents efficiently by applying document image processing technology, and designed a novel user interface with a pan-and-zoom technique. We conducted on-site observations to test the usability of the prototype, and gained insights that would have been difficult to acquire in a lab and that led to improved functions in the prototype.

Abstract

Browsing and searching for documents in large, online enterprise document repositories is an increasingly common problem. While users are familiar with, and usually satisfied by, Internet search results, enterprise search has not been as successful because of differences in data types and user requirements. To support users in finding the information they need from electronic and scanned documents in their online enterprise repository, we created an automatic detector for genres such as papers, slides, tables, and photos. Several of those genres correspond roughly to file name extensions but are identified automatically using features of the document. This genre identifier plays an important role in our faceted document browsing and search system. The system presents documents in a hierarchy as typically found in enterprise document collections. Documents and directories are filtered to show only documents matching selected facets and containing optional query terms, and to highlight promising directories. Thumbnail images and automatically identified keyphrases help select desired documents.

Abstract

Changing the model underlying information and computation from a classical mechanical to a quantum mechanical one yields faster algorithms, novel cryptographic mechanisms, and alternative methods of communication. Quantum algorithms can perform a select set of tasks vastly more efficiently than any classical algorithm, but for many tasks it has been proven that quantum algorithms provide no advantage. The breadth of quantum computing applications is still being explored. Major application areas include security and the many fields that would benefit from efficient quantum simulation. The quantum information processing viewpoint provides insight into classical algorithmic issues as well as a deeper understanding of entanglement and other non-classical aspects of quantum physics.

Abstract

We describe an efficient and scalable system for automatic image categorization. Our approach seeks to marry scalable “model-free” neighborhood-based annotation with accurate boosting-based per-tag modeling. For accelerated neighborhood-based classification, we use a set of spatial data structures as weak classifiers for an arbitrary number of categories. We employ standard edge and color features and an approximation scheme that scales to large training sets. The weak classifier outputs are combined in a tag-dependent fashion via boosting to improve accuracy. The method performs competitively with standard SVM-based per-tag classification with substantially reduced computational requirements. We present multi-label image annotation experiments using data sets of more than two million photos.
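The marriage of neighborhood-based weak classifiers with boosting can be sketched per tag roughly as follows: each weak learner copies the tag of the nearest "anchor" image under a single feature, and AdaBoost-style reweighting combines their votes. The anchor/train split, the feature dictionaries, and the toy data are assumptions for illustration, not the paper's actual features or data structures.

```python
import math

def knn_weak(anchors, feature):
    """Weak classifier: copy the tag (+1/-1) of the nearest anchor image
    under a single feature, e.g. one edge or color statistic."""
    def predict(x):
        nearest = min(anchors, key=lambda a: abs(a[0][feature] - x[feature]))
        return nearest[1]
    return predict

def boost(anchors, train, features):
    """AdaBoost-style combination: reweight training images each round and
    give each single-feature neighbor classifier a vote weighted by its
    accuracy on the current distribution."""
    w = [1.0 / len(train)] * len(train)
    ensemble = []
    for f in features:
        h = knn_weak(anchors, f)
        err = sum(wi for wi, (x, y) in zip(w, train) if h(x) != y)
        err = min(max(err, 1e-9), 1 - 1e-9)   # clamp to keep alpha finite
        alpha = 0.5 * math.log((1 - err) / err)
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, train)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, h))
    return lambda x: 1 if sum(a * g(x) for a, g in ensemble) > 0 else -1

# Toy per-tag data: "edge" is informative for this tag, "color" is noise.
anchors = [({"edge": 0.9, "color": 0.1}, 1), ({"edge": 0.1, "color": 0.9}, -1)]
train = [({"edge": 0.8, "color": 0.8}, 1), ({"edge": 0.2, "color": 0.2}, -1),
         ({"edge": 0.7, "color": 0.3}, 1), ({"edge": 0.3, "color": 0.6}, -1)]
classify = boost(anchors, train, ["edge", "color"])
```

Boosting drives the vote weight of the uninformative "color" classifier to zero, so the combined classifier follows the "edge" feature, which is the tag-dependent combination the abstract describes.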

Abstract

The Pantheia system enables users to create virtual models by 'marking up' the real world with pre-printed markers. The markers have predefined meanings that guide the system as it creates models. Pantheia takes as input user-captured images or video of the marked-up space. This video illustrates the workings of the system and shows it being used to create three models: one of a cabinet, one of a lab, and one of a conference room. As part of the Pantheia system, we also developed a 3D viewer that spatially integrates a model with images of the model.

Abstract

Existing cameraphone-based interactive paper systems fall short of the flexibility of GUIs, partly due to their limited fine-grained interaction, narrow range of interaction styles, and the few document types they target. We present PACER, a platform for applications to interact with document details (e.g. individual words, East Asian characters, math symbols, music notes, and user-specified arbitrary image regions) of generic paper documents through a camera phone. With a see-through phone interface, a user can discover symbol recurrences in a document by pointing the phone's crosshair at a symbol within a printout. The user can also continuously move the phone over a printout, using gestures to copy and email an arbitrary region, or to play music notes on the printout.

Abstract

Reading documents on mobile devices is challenging. Not only are screens small and difficult to read, but also navigating an environment using limited visual attention can be difficult and potentially dangerous. Reading content aloud using text-to-speech (TTS) processing can mitigate these problems, but only for content that does not include rich visual information. In this paper, we introduce a new technique, SeeReader, that combines TTS with automatic content recognition and document presentation control that allows users to listen to documents while also being notified of important visual content. Together, these services allow users to read rich documents on mobile devices while maintaining awareness of their visual environment.

Abstract

The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional next-generation conference rooms. Our first design prototype focuses on creating a "no wizards" room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. Since the system needs to fit both the personal leadership style of the executive and the corporation's meeting culture, we began the design process by exploring the work flow in and around meetings attended by the executive.
Based on our work flow analysis and the scenarios we developed from it, USE developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The first USE room was designed for FXPAL's executive "Ian" and installed in Niji, a small executive conference room at FXPAL.
Niji currently contains two large interactive whiteboards for projection of presentation material, for annotations using a digital whiteboard, or for teleconferencing; a Tandberg teleconferencing system; an RFID authentication plus biometric identification system; network printing; a PDA-based simple controller; and a tabletop touch-screen console. The console runs the USE room control interface, which controls and switches among all of the equipment mentioned above.

Abstract

Most mobile navigation systems focus on answering the question, “I know where I want to go, now can you show me exactly how to get there?” While this approach works well for many tasks, it is not as useful for unconstrained situations in which user goals and spatial landscapes are more fluid, such as festivals or conferences. In this paper we describe the design and iteration of the Kartta system, which we developed to answer a slightly different question: “What are the most interesting areas here and how do I find them?”

Abstract

A personal interface for information mash-up: exploring worlds both physical and virtual

Book chapter in "Understanding the New Generation Office: Collective Intelligence of 100 Specialists" (book project in Japan, by New Era Office Research Center, Tokyo) , August 18, 2009

This is a Big Idea piece for a collective intelligence book project by the New Era Office Research Center, Tokyo. It is written at the invitation of FX colleague Koushi Kawamoto. The project poses the same four questions to each of 100 specialists about an idea for a next-generation workplace:
1. Want: what do I want to be able to do?
2. Should: what should a system to support this "want" be able to do?
3. Create: imagine what an instance of this idea might be.
4. Can: how could this instance be realized in reality?

WANT: In my ideal work environment, the data I need on everything and everyone should be available at my fingertips, all the time, in many configurations that I can mix-and-match to suit the needs of any task. This includes things like:
• documents of all types
• people's status, tasks, and availability
• audio, video, mobile, and virtual world communication channels
• links to the physical world as appropriate, for example sensors delivering factory data, or the state of the machines I use daily in the workplace (printers, my PC, conference room systems), or awareness data about my colleagues.
CAN: How can we approach this problem? Let's consider the creation of a personal interface or instrument for information mashup, capable of interacting with complex data structures, for tuning smart environments, and for exploring worlds both physical and virtual, in business, social and personal realms. Like any interactive system this idea has two parts: human-facing and system-facing. These can be called Interstitia I (extending human interactivity) and Interstitia II (enabling smart environments).