Hierarchy visualization has been a hot topic in the Information
Visualization community for the last decade. A number of hierarchy
visualization techniques have been invented, with each having advantages for
some applications, but limitations or disadvantages for other applications. No
technique has succeeded for a wide variety of applications. We continue to
struggle with basic problems of high cognitive overhead (e.g., loss of
context), poor fit to the data (e.g., problems of scale), and poor fit to the
user's task at hand (e.g., handling multiple points of focus). At the same
time, information access improvements have made available to us much richer
sources of information, including multiple hierarchies and other relationships.
I call this broader problem Polyarchy Visualization. In this talk, I will
review what we know about hierarchy visualization, illustrate the broader
polyarchy visualization problem with some examples, and introduce some novel
polyarchy visualization techniques.

This paper discusses the main issues for creating Interactive Virtual
Environments with Virtual Humans emphasizing the following aspects: creation of
Virtual Humans, gestures, interaction with objects, multimodal communication.

The Cognitive Dimensions framework outlined here is a generalised, broad-brush
approach to usability evaluation for all types of information artifact, from
programming languages through interactive systems to domestic devices. It also
has promise of interfacing successfully with organisational and sociological
analyses.

We present an algorithm for the aesthetic drawing of basic hierarchical blob
structures, of the kind found in higraphs and statecharts and in other diagrams
in which hierarchy is depicted as topological inclusion. Our work could also be
useful in window system dynamics, and possibly also in applications such as
newspaper layout. Several criteria for aesthetics are formulated, and we discuss
their motivation, our methods of implementation and the algorithm's
performance.

Keywords: blob, hierarchy, higraph, layout

A comparison of set-based and graph-based visualisations of overlapping
classification hierarchies

The visualisation of hierarchical information sets has been a staple of
Information Visualisation since the field came into being in the early 1990s.
However, at present, support for visualising the correlations between multiple,
overlapping sets of hierarchical information has been lacking. This is despite
the realisation that for certain tasks this information is as important as the
information that forms the individual hierarchies. In response to this, we have
produced two early visualisation prototypes, one based on a graph
visualisation, and the other on a set-based metaphor, that endeavour to display
such information in a readily perceived form to potential users. The science of
botanical taxonomy is used as an example of a field where such a visualisation
would be useful, and also as a resource for example information sets that the
prototypes can act upon. Technical and perceptual issues involved in the design
and implementation of both prototypes are discussed. Following this, informal
user testing on both prototypes is described, which utilised user observation
techniques to elicit qualitative feedback from the taxonomists. These findings
are then used to emphasise the shortcomings and advantages of each prototype,
and from these, probable issues for future prototyping and development are
drawn.

In previous work, the first author argued for simple lightweight
visualisations. These are surprisingly complex to produce due to the need for
infrastructure to read files, etc. onCue, a desktop 'agent', aids the rapid
production of such visualisations and their integration with desktop and
Internet applications. Two examples are used: dancing histograms for 2D tables
and pieTrees for hierarchical numeric data. A major focus is the importance of
architecture, both that of onCue itself and the underlying component
infrastructure on which it is built -- separation of concerns, mixed initiative
computation and plug-and-play components lead to easily produced and easily
used systems.

Most diagrams, particularly those used in software engineering, are line
drawings consisting of nodes drawn as rectangles or circles, and edges drawn as
lines linking them. In the present paper we review some of the literature on
human perception to develop guidelines for effective diagram drawing.
Particular attention is paid to structural object recognition theory. According
to this theory, as objects are perceived they are decomposed into a set of 3D
primitives called geons, together with the skeleton structure connecting them.
We present a set of guidelines for drawing variations on node-link diagrams
using geon-like primitives, and provide some examples. Results from three
experiments are reported that evaluate 3D geon diagrams in comparison with 2D
UML (Unified Modeling Language) diagrams. The first experiment measures the
time and accuracy for a subject to recognize a sub-structure of a diagram
represented either using geon primitives or UML primitives. The second and
third experiments compare the accuracy of recalling geon vs. UML diagrams. The
results of these experiments show that geon diagrams can be visually analyzed
more rapidly, with fewer errors, and can be remembered better in comparison
with equivalent UML diagrams.

This paper describes the software architecture for our pen-based electronic
whiteboard system, called Flatland. The design goal of Flatland is to support
various activities on personal office whiteboards, while maintaining the
outstanding ease of use and informal appearance of conventional whiteboards.
The GUI framework of existing window systems is too complicated and
heavy-weight to achieve this goal, and so we designed a new architecture that
works as a kind of window system for pen-based applications. Our architecture
is characterized by its use of freeform strokes as the basic primitive for both
input and output, flexible screen space segmentation, pluggable applications
that can operate on each segment, and built-in history management mechanisms.
This architecture is carefully designed to achieve simple, unified coding and
high extensibility, which was essential to the iterative prototyping of the
Flatland interface. While the current implementation is optimized for large
office whiteboards, this architecture is useful for the implementation of a
range of pen-based systems.

Individual characters and text are the main inputs in many computing
devices. Currently there is a growing trend in developing small portable
devices like mobile phones, personal digital assistants, GPS-navigators, and
two-way pagers. Unfortunately these portable computing devices have different
user interfaces and therefore the task of text input takes many forms. The
user, who in the future is likely to have several of these devices, has to
learn several text input methods. We argue that there is a need for a universal
text input method. A method like this would work on a wide range of interface
technologies and allow the user to transfer his or her writing skill without
device-specific training. To show that device independent text input is
possible, we present a candidate for a device independent text entry method
that supports skill transfer between different devices. A limited longitudinal
study was conducted to achieve a proof of concept evaluation of our Minimal
Device Independent Text Input Method (MDITIM). We found MDITIM writing skill
acquired with a touchpad to work almost equally well on mouse, trackball,
joystick and keyboard without any additional training. Our test group reached
on average 41% of their handwriting speed by the end of the tenth 30-minute
training session.

This paper proposes a technique for generating more comprehensible
animations from USENET discussions, which are often hard to follow. This
technique consists of two steps. In the first step, the system
generates a scenario from articles in a news thread using the quote
relationship. In the second step, it generates an animation based on the
scenario, casting 3D avatars as the authors of the articles. We also
implemented a prototype system based on this technique and made several
animations from articles posted to USENET.

While car navigation systems are already widely commercialized today,
pedestrian information systems are still in the early research stage. However,
recent progress in mobile computing has opened perspectives for pedestrian
navigation systems. In this context, graphics is and will still be an important
modality to convey all types of route information. This paper will address the
question of how to generate graphics for navigation systems that help pedestrians,
e.g., airport passengers, city tourists or conference attendees, to find their
way in complex environments. We will discuss how the presentation of graphics
can be tailored to various technical and cognitive constraints, and we will
demonstrate our ideas within a scenario where an airport passenger gets
navigational help from a stationary info booth and afterwards on her way via a
handheld device (PDA). Both the 3D visualization at the info booth and the
sketch-like presentation on the PDA are generated from the same data and by the
same system, yet are adapted to the specific situation, output medium and user
as far as possible.

In today's automotive industry there is an increasing demand for VR
technology, because it provides the possibility to switch from cost- and
time-intensive physical mock-ups (PMUs) to digital mock-ups (DMUs). Unfortunately,
many current VR applications are either limited in the way people can interact
with them, or provide a large set of functions, which are hard to use. In this
paper we present the design of a VR user interface for applications in the area
of digital design review. The basic requirements of such a UI are the ease of
use, and the ability to work simultaneously with a group of people on one
system. Furthermore, we investigate the functional requirements for this kind of
application, including navigation, manipulation, examination and documentation
of flaws in the design of the models. Documentation is stored as HTML and could
therefore be easily transmitted between different parties. The design of the
user interface is based on the basic interaction tasks (BITs) introduced by
Foley et al., which allow complex functionality to be built on top of only a few
interaction metaphors. Finally, we evaluate the concept on a prototype
implementation, done in cooperation with BMW AG.

Keywords: VR, design review, interaction, user interface

Reification, polymorphism and reuse: three principles for designing visual
interfaces

This paper presents three design principles to support the development of
large-scale applications and take advantage of recent research in new
interaction techniques: Reification turns concepts into first class objects,
polymorphism permits commands to be applied to objects of different types, and
reuse makes both user input and system output accessible for later use. We show
that the power of these principles lies in their combination. Reification
creates new objects that can be acted upon by a small set of polymorphic
commands, creating more opportunities for reuse. The result is a simpler yet
more powerful interface.
To validate these principles, we describe their application in the redesign
of a complex interface for editing and simulating Coloured Petri Nets. The
cpn2000 interface integrates floating palettes, toolglasses and marking menus
in a consistent manner with a new metaphor for managing the workspace. It
challenges traditional ideas about user interfaces, getting rid of pull-down
menus, scrollbars, and even selection, while providing the same or greater
functionality. Preliminary tests with users show that they find the new system
both easier to use and more efficient.

A multiple view system uses two or more distinct views to support the
investigation of a single conceptual entity. Many such systems exist, ranging
from computer-aided design (CAD) systems for chip design that display both the
logical structure and the actual geometry of the integrated circuit to
overview-plus-detail systems that show both an overview for context and a
zoomed-in view for detail. Designers of these systems must make a variety of
design decisions, ranging from determining layout to constructing sophisticated
coordination mechanisms. Surprisingly, little work has been done to
characterize these systems or to express guidelines for their design. Based on
a workshop discussion of multiple views, and based on our own design and
implementation experience with these systems, we present eight guidelines for
the design of multiple view systems.

The use of models has entered into current practice when developing various
types of software product. However, there is a lack of methods able to use the
information contained in models relevant to human-computer interaction
to support the design and development of user interfaces. In this paper, we
propose a method for using information contained in formally represented task
models in order to support the design of interactive applications, with
particular attention to those applications where both usability and safety are
the main concern. Examples taken from our experience in a case study from the
domain of Air Traffic Control are introduced and further discussed to explain
how the method can be applied.

Multiple coordinated visualizations enable users to rapidly explore complex
information. However, users often need unforeseen combinations of coordinated
visualizations that are appropriate for their data. Snap-Together Visualization
enables data users to rapidly and dynamically mix and match visualizations and
coordinations to construct custom exploration interfaces without programming.
Snap's conceptual model is based on the relational database model. Users load
relations into visualizations, then coordinate them based on the relational
joins between them. Users can create different types of coordinations such as:
brushing, drill down, overview and detail view, and synchronized scrolling.
Visualization developers can make their independent visualizations snap-able
with a simple API.
Evaluation of Snap revealed benefits, cognitive issues, and usability
concerns. Data-savvy users were very capable and thrilled to rapidly construct
powerful coordinated visualizations. A snapped overview and detail-view
coordination improved user performance by 30-80%, depending on task.
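As a rough illustration of the coordination model described above (this is a hypothetical sketch, not Snap's actual API, whose names are not given in the abstract), two views holding relations can be "snapped" on a join key so that a selection in one propagates to the joining tuples in the other:

```python
# Hypothetical sketch of Snap-style coordination: each view holds a
# relation; a coordination links two views on a relational join, so
# selecting tuples in one view highlights the joining tuples in the other.

class View:
    def __init__(self, name, relation):
        self.name = name
        self.relation = relation      # list of dict rows
        self.selected = []            # currently highlighted rows
        self.coordinations = []       # (other_view, my_key, other_key)

    def snap(self, other, my_key, other_key):
        """Register a bidirectional coordination on a relational join."""
        self.coordinations.append((other, my_key, other_key))
        other.coordinations.append((self, other_key, my_key))

    def select(self, predicate):
        """User selects rows; propagate along the join to snapped views."""
        self.selected = [r for r in self.relation if predicate(r)]
        for other, my_key, other_key in self.coordinations:
            keys = {r[my_key] for r in self.selected}
            other.selected = [r for r in other.relation if r[other_key] in keys]

# Example: an overview of states coordinated with a detail view of cities.
states = View("overview", [{"state": "MD"}, {"state": "VA"}])
cities = View("detail", [
    {"city": "Baltimore", "state": "MD"},
    {"city": "Richmond", "state": "VA"},
    {"city": "Annapolis", "state": "MD"},
])
states.snap(cities, "state", "state")
states.select(lambda r: r["state"] == "MD")
print([r["city"] for r in cities.selected])   # ['Baltimore', 'Annapolis']
```

Brushing, drill-down, and overview-plus-detail then differ only in which view initiates the selection and how the receiving view renders it.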

Designing high quality visual interfaces for hypermedia applications is
difficult; it involves organizing different kinds of interface objects (for
example, those triggering navigation), preventing cognitive overhead for the
user, and so on. Unfortunately, interface design methods do not capture design decisions or
rationale, so it is hard to record and convey interface design expertise.
In this paper, we introduce interface patterns for hypermedia applications
as a concept for reusing interface designs. The structure of this paper is as
follows: first, we introduce the context in which these patterns were
discovered and we give a rationale for their use. Then we present some simple
but effective patterns using a standard template. We finally discuss some
further issues on the use of interface patterns in hypermedia applications.

In the usability inspection of complex hypermedia a great deal is left to
the skills, experience, and ability of the inspectors. The SUE inspection
technique has been proposed to help usability inspectors share and transfer
their evaluation know-how, make the hypermedia inspection process easier for
newcomers, and achieve more effective and efficient evaluations. The SUE
inspection is based on the use of evaluation patterns, called Abstract Tasks,
which precisely describe the activities to be performed by evaluators during
inspection. This paper presents an empirical validation of this inspection
technique: two groups of novice inspectors have been asked to evaluate a
commercial hypermedia CD-ROM applying the SUE inspection or the traditional
heuristic evaluation technique. Results have shown a clear advantage of the SUE
inspection over the heuristic evaluation, demonstrating that Abstract Tasks are
efficient tools for driving evaluators' performance.

This short paper describes the presentation model used by the Teallach
model-based user-interface development environment. Teallach's presentation
model provides both abstract and concrete interactors, which are first-class
objects that may be freely intermixed when building a user interface. An
example is provided showing this approach in use.

Declarative models play an important role in most software design
activities, by allowing designs to be constructed that selectively abstract
over complex implementation details. In the user interface setting, Model-Based
User Interface Development Environments (MB-UIDEs) provide a context within
which declarative models can be constructed and related, as part of the
interface design process. However, such declarative models are not usually
directly executable, and may be difficult to relate to existing software
components. It is therefore important that MB-UIDEs both fit in well with
existing software architectures and standards, and provide an effective route
from declarative interface specification to running user interfaces. This paper
describes how user interface software is generated from declarative
descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include
its open architecture, which connects directly to existing applications and
widget sets, and the generation of executable interface applications in Java.
This paper focuses on how Java programs, organized using the
model-view-controller pattern (MVC), are generated from the task, domain and
presentation models of Teallach.

Focus + context information visualizations have sought to amplify human
cognition by increasing the amount of information immediately available to the
user. We study how the focus + context distortion of the Hyperbolic Tree
browser affects information foraging behavior in a task similar to the CHI '97
Browse Off. In comparison to a more conventional browser, Hyperbolic users
searched more nodes, searched at a faster rate, and showed more learning.
However, the performance of the Hyperbolic browser was found to be highly affected by
"information scent", proximal cues to the value of distal information. Strong
information scent made hyperbolic search faster than with a conventional
browser. Conversely, weak scent put the hyperbolic tree at a disadvantage.
There appear to be two countervailing processes affecting visual attention in
these displays: strong information scent expands the spotlight of attention
whereas crowding of targets in the compressed region of the Hyperbolic narrows
it. The results suggest design improvements.

In this paper we propose new visual interface technology to address
multidimensional data exploration and browsing tasks. MultiNav, a prototype
from GTE Laboratories, is based upon a multidimensional information model that
affords new data exploration and semantically structured browsing interactions.
The primary visual metaphor is based on sliding rods, each of which is
associated with an information dimension from the underlying model. Users can
interactively select value ranges along the rods in order to reveal hidden
relationships as well as query and restrict the set through direct
manipulation. A novel focus+context view is afforded in which detail about
individual items is revealed within the context of the global multidimensional
attribute space. We propose a novel interaction technique to change focus,
which is based on dragging rods from side to side. We relate this work on
multidimensional information visualization to other research in the area,
including Parallel Coordinates, Dynamic Histograms, Dynamic Queries, and
focus+context tables.

This paper proposes a framework for easily integrating and controlling
information visualization (infoVis) components within web pages to create
powerful interactive "live" documents, or LiveDocs. The framework includes a
set of infoVis components which can be placed and linked within a standard HTML
document, initialized to focus on key analysis results, and directly
manipulated by readers to explore and analyze data further. In addition,
authors can script the manipulation of views at the user interaction level
(e.g., to set view options, select items within a view, or animate a view). We
illustrate our approach with a sample analysis of a real-life data set.

Rapid Serial Visual Presentation, or RSVP, is the electronic equivalent of
riffling a book in order to assess its content. RSVP allows space to be traded
for time and has tremendous potential to support electronic information
browsing and search particularly on small displays. However, before this
potential can be realised, it is necessary to investigate the parameters
involved in the successful application of RSVP in the user interface. The rapid
display of images or text is well within the capabilities of current desktop
computers and even of current or near future mobile devices. The limiting
factor in the application of RSVP, therefore, has to be the limited capability
of the user's visual system. Users' reading comprehension with RSVP of text has
been studied extensively. The transfer of information with RSVP of images,
however, has received relatively little attention. This paper examines some of
the problems with applying RSVP for image browsing and search.

We illustrate how a formal model of interaction can be employed to generate
documentation on how to use an application, in the form of an Animated Agent.
The formal model is XDM, an extension of Coloured Petri Nets that enables
representing user-adapted interfaces, simulating their behaviour and making
pre-empirical usability evaluations. XDM-Agent is a personality-rich animated
character that uses this formal model to illustrate the role of interface
objects and to explain how tasks may be performed. Its behaviour is produced
by schema-based planning followed by surface generation, in which verbal and
non-verbal acts are combined appropriately; the agent's 'personality' may be
adapted to the user's characteristics.

Teams of operators are required to monitor and control complex real-time
processes. Process information comes from different sources and is often
displayed by existing User Interfaces using a variety of visual and auditory
forms and compressed into narrow time-windows. Most presentation modalities are
fixed during interface design and are not capable of adaptation during system
operation. The operators alone must provide the flexibility required in order
to deal with difficult and unplanned situations.
This paper presents an innovative Auto-Adaptive Multimedia Interface (AAMI)
architecture, based on Intelligent Agent collaboration, designed to overcome
the above drawbacks. The use of this technology should speed up the design and
the implementation of human-centred multimedia interfaces, and significantly
enhance their usability.
The proposed architecture separates generic knowledge about adaptive user
interface management from application specific knowledge in order to provide a
generic framework suitable to be customised to different application domains.
Benefits of the AAMI approach are evaluated by developing two industrial
field-test applications: an Electrical Network Management system and a Thermal
Plant Supervision system.
The paper reports the architecture and the basic design principles of the
generic framework, as well as details of the two applications.
The work is being carried out within the European ESPRIT project: AMEBICA.

The field of human-computer interaction has been widely investigated in
recent years, resulting in a variety of systems used in different application
fields like virtual reality simulation environments, software user interfaces,
and digital library systems.
A crucial part of all these systems is the input module, which is
devoted to recognizing the human operator in terms of tracking and/or recognition
of the human face, arm positions, hand gestures, and so on.
In this work, a software architecture is presented for the automatic
recognition of human arm poses. Our research has been carried out in the
robotics framework. A mobile robot that has to find its path to the goal in a
partially structured environment can be trained by a human operator to follow
particular routes in order to perform its task quickly. The system is able to
recognize and classify some different poses of the operator's arms as direction
commands like "turn-left", "turn-right", "go-straight", and so on.
A binary image of the operator silhouette is obtained from the gray-level
input. Next, a slice centered on the silhouette itself is processed in order to
compute the eigenvalue vector of the pixel covariance matrix. This kind of
information is closely related to the shape of the contour of the operator
figure, and can be usefully employed to assess the arms' position.
Finally, a support vector machine (SVM) is trained to classify the
different poses, using the eigenvalue array.
A detailed description of the system is presented along with some remarks on
the statistical analysis we used, and on SVM. The experimental results, and an
outline of the usability of the system as a generic shape classification tool
are also reported.
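The feature-extraction step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it computes the eigenvalues of the 2x2 covariance matrix of the silhouette's foreground pixel coordinates, and a simple ratio comparison stands in for the trained SVM:

```python
# Illustrative sketch of the described pipeline (not the authors' code):
# eigenvalues of the covariance of foreground pixel coordinates summarize
# the spread of the silhouette, which distinguishes compact from elongated
# poses. The paper classifies these features with an SVM.

import math

def eigen_features(silhouette):
    """Eigenvalues of the 2x2 covariance matrix of foreground pixels."""
    pts = [(r, c) for r, row in enumerate(silhouette)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    mr = sum(r for r, _ in pts) / n
    mc = sum(c for _, c in pts) / n
    srr = sum((r - mr) ** 2 for r, _ in pts) / n        # row variance
    scc = sum((c - mc) ** 2 for _, c in pts) / n        # column variance
    src = sum((r - mr) * (c - mc) for r, c in pts) / n  # covariance
    # closed-form eigenvalues of the symmetric matrix [[srr, src], [src, scc]]
    tr, det = srr + scc, srr * scc - src * src
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return (tr / 2.0 + d, tr / 2.0 - d)                 # (major, minor)

# A compact silhouette vs an elongated one (e.g., arms extended sideways)
# yield clearly different eigenvalue ratios for a classifier to separate.
square = [[1] * 4 for _ in range(4)]
bar = [[1] * 12 for _ in range(2)]
sq_major, sq_minor = eigen_features(square)
b_major, b_minor = eigen_features(bar)
print(sq_major / sq_minor)   # 1.0: isotropic shape
print(b_major / b_minor)     # much larger: elongated shape
```

In a full implementation the two-element eigenvalue vector (or a vector from several silhouette slices) would be the input to the SVM classifier that maps poses to commands such as "turn-left".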

To support users in querying geographic databases we have developed a system
that lets people sketch what they are looking for. It closes the gap between
user and information system, because the translation of a user's question into
a processable query statement is delegated to the information system so that a
user can focus on the actual query rather than spending time on its
formulation. This system paper highlights a set of interaction methods and
sketch interpretation algorithms that are necessary for pen-based querying of
geographic information systems. They are part of a comprehensive prototype
implementation of Spatial-Query-by-Sketch, which provides feature-based and
relation-based spatial similarity retrieval.

The focus of the presented work is visualization of routes of objects that
change their spatial location in time. The challenge is to facilitate
investigation of important characteristics of the movement: positions of the
objects at any selected moment, directions, speeds and their changes over
time, overall trajectories and those for any specified interval, etc. We propose
a dynamic map display controlled through a set of interactive devices called
time controls to be used as a support to visual exploration of spatial
movement.

We demonstrate a graphic visualisation of a travel itinerary, with special
emphasis on time and time zones. A traditional itinerary is a text document,
detailing locations, and arrival and departure times for travel and
accommodation. It is usually written in diary form, showing the sequence of
events to be followed on a trip. Many questions can be answered easily from
such an itinerary. What time should the traveler check in at the airport? Which
country are they visiting on a particular date? Other questions can be more
difficult to answer. How long is the first flight? What time is it at home when
the traveler reaches their hotel? The written form is also quite poor at
providing a 'picture' of an entire trip. The reader cannot tell at a glance how
many countries are being visited, or whether the stay in England is longer than
the stay in France. The visualisation discussed here attempts to make answering
such questions relatively simple. Usability studies are described which show
the advantages of our visualisation.
Negotiation between the traveler and a travel agent is implicit in the
development of a travel itinerary. The visualisation has been developed as part
of our "Collaborative Information Gathering" project, whose overall goal is to
investigate ways of supporting information search and document creation. The
travel system covers the search aspect by supporting collaborative World Wide
Web browsing, and document creation by supporting multiple complementary views
of the trip and allowing collaborative editing via any of these.
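The time-zone questions posed above reduce to offset arithmetic that a visualisation can encode directly. A minimal sketch, with illustrative fixed offsets and dates (a real system would use a time-zone database and handle daylight saving):

```python
# Illustrative time-zone arithmetic for an itinerary; offsets and times
# are made up for the example, not taken from a real trip.
from datetime import datetime, timedelta, timezone

LONDON = timezone(timedelta(hours=1))    # BST, fixed offset for illustration
SYDNEY = timezone(timedelta(hours=10))   # AEST, fixed offset for illustration

# "How long is the first flight?" -- subtract aware datetimes across zones.
depart = datetime(2024, 6, 1, 10, 0, tzinfo=SYDNEY)
arrive = datetime(2024, 6, 2, 6, 30, tzinfo=LONDON)
print(arrive - depart)                   # 1 day, 5:30:00

# "What time is it at home when the traveler reaches their hotel?"
hotel = datetime(2024, 6, 2, 18, 30, tzinfo=LONDON)
print(hotel.astimezone(SYDNEY).strftime("%H:%M"))  # 03:30 (the next day)
```

A graphical itinerary can precompute and display exactly these quantities, which is what makes such questions easy to answer at a glance.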

Zoomable User Interfaces (ZUIs) are difficult to use on large information
spaces in part because they provide insufficient context. Even after a short
period of navigation, users no longer know where they are in the information
space nor where to find the information they are looking for. We propose a
temporary in-place context aid that helps users position themselves in ZUIs.
This context layer is a transparent view of the context that is drawn over the
users' focus of attention. A second temporary in-place aid is proposed that can
be used to view already visited regions of the information space. This history
layer is an overlapping transparent layer that adds a history mechanism to
ZUIs. We complete these orientation aids with an additional window, a hierarchy
tree, that shows users the structure of the information space and their current
position within it. Context layers show users their position, history layers
show them how they got there, and hierarchy trees show what information is
available and where it is.
ZUIs, especially those that include these new orientation aids, are
difficult to use with standard interaction techniques. They provide a large
number of commands which must be used frequently and on a changing image. The
mouse and its buttons cannot provide rapid access to all these commands
without new interaction techniques. We propose a new type of menu, a control
menu, that facilitates the use of ZUIs and which we feel can also be useful in
other types of applications.

This paper describes hierarchical Flip Zooming, a focus+context
visualization technique for hierarchical information sets. It allows for
independent focus+context views at each node of the hierarchy and enables
parallel exploration of different branches of the hierarchy. Visualization,
navigation and interaction in the Flip Zooming technique are described, as well
as how the technique fits into existing models of information visualization.
Examples of applications using the technique are given.

For data with large dimensionality, placing labels is critical for users'
comprehension of a scatterplot or a map of items. We propose a dynamic label
sampling technique that, combined with graphical fisheye views, selects
appropriate labels out of a large set of items on a map. Labels are sampled to
give focus and contextual information according to users' panning/zooming and
filtering operations. The paper also demonstrates an example of visual
exploration with the image browser based on our technique.

This paper explicates the metaphors used to conceive of asynchronous
text-based communication (ATBC) software, such as email and newsgroups. Design
of such software has been guided by an understanding of ATBC as essentially a
text communication (textual metaphor). However, this mode of discourse has many
similarities with oral communication as well. The interaction of oral and
textual aspects in ATBC gives rise to a phenomenon of multithreaded discourse,
where several discourse threads develop simultaneously, which is a unique
property of this medium.
Our main tenet here is that application of the textual metaphor has narrowed the
scope of possible designs. We propose a design approach that explicitly
promotes the metaphor of oral communication (conversation) and oral traits of
ATBC discourse, while also supporting the multithreaded discourse structure.
The consequent interface design challenge is that of creating a way to
visualise human conversation that would preserve the spontaneity of oral
conversation whilst also utilising the persistent nature of text. This goal has
been accomplished by spatial representation of multi-threaded discourse in a
shared workspace. Based on this proposed way of visualisation, a prototype tool
called 'Conversation Space' (ConverSpace) has been created.

In this paper we present MapViews, Magic Lounge, and Call-Kiosk, three
different but related systems that address the integration of mobile
communication terminals into multi-user applications. MapViews is a test-bed to
investigate how a small group of geographically dispersed users can jointly
solve localization and route planning tasks while being equipped with different
communication terminals. Magic Lounge is a virtual meeting space that provides
a number of communication support services and allows its users to connect via
heterogeneous devices. Finally, we sketch Call-Kiosk, a system that is currently
being designed for setting up a commercial information service for mobile
clients. All three systems highlight the strong demand for automated design
approaches that can generate information presentations that are
tailored to the available presentation capabilities of particular target
devices.

Keywords: collaborative systems, mobile communication, multimedia

KVispatch: a visual language that rewrites kinematic objects in animation

A rule-based visual language that controls kinematics is proposed. It is
designed for visually describing realistic and reactive animations with
real-time kinematic modeling. While the kinematics animates graphical objects
to simulate continuous phenomena such as the expansion/contraction and free
falling of objects, the rewriting system controls discontinuous behaviors such
as changes in connectivity, creation/extinction, sudden changes in velocity, or
user interactions. Kinematic as well as geometric conditions can trigger rules.
Conversely, the rewriting system can change the targets' kinematic states such
as velocity as well as geometrical relationships. Thus the kinematics and the
rewriting systems cooperate. Using these techniques, kinematically modeled
animation can enjoy on-the-fly re-composition controlled by events simply by
adding graphical rules. To demonstrate this advantage, reactive GUIs and action
games were built. Our approach also extends the notion of figure-rewriting from
space
to space-time.

In interorganisational processes, documents are used to record information
created during the processes. Legislative processes involving several
legislative organisations, or manufacturing processes involving complicated
networks of companies and officials are examples of such processes. In
contemporary computerised environments, a great deal of the recorded information
is scattered in different kinds of Web repositories with different kinds of
interfaces. The repositories should serve as valuable knowledge assets but
their use may be difficult, and even awareness of the kinds of repositories
available may be lacking. The paper presents a method for
improving information management in interorganisational processes. In the
method, the interorganisational processes are first analysed and the metadata
related to the production of documents in the processes is collected. Then the
metadata is visualised as graphical models by which documents created in the
processes can be accessed. To support a generic solution, an XML specification
for the metadata is developed. The method has been used to create visual
interfaces for European legal information repositories. The interfaces are
currently under testing in the EULEGIS project, which belongs to the Telematics
Application Programme of the European Commission.

We explore interaction as a basis for human-computer comprehension.
Perceptual experience is organised through categories that establish the kinds
of distinctions that can be imposed on perceived phenomena. An interactive tool
exploiting a
similar categorisation in graphics is integrated into a system for subjective
retrieval of images, so that the user interacts with active regions supporting
the same type of distinction.

This paper proposes an efficient eye-gaze interface technique suitable for
general GUI environments such as Microsoft Windows. Our technique uses
an eye and a hand together: the eye for moving cursors onto the GUI button
(move operation), and the hand for pushing the GUI button (push operation). We
also propose the following two techniques to assist the move operation: (1)
Automatic adjustment and (2) Manual adjustment. In the automatic adjustment,
the cursor automatically moves to the closest GUI button when we push a mouse
button. In the manual adjustment, we can move the cursor roughly by eye and
then nudge it onto the GUI button with the mouse. In an experiment to evaluate
our method, GUI button selection by manual adjustment outperformed selection by
a mouse alone, even when many small GUI buttons were placed very close to each
other on the GUI.
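The automatic adjustment step reduces to a nearest-target search, which can be sketched as follows; the button names and rectangle geometry here are hypothetical, and a real implementation would work on the live widget tree rather than a list of dictionaries.

```python
import math

def automatic_adjustment(gaze_point, buttons):
    """Snap a noisy gaze-driven cursor to the centre of the closest GUI
    button: the eye moves the cursor coarsely, the snap makes it precise."""
    def centre(button):
        x, y, w, h = button["rect"]
        return (x + w / 2.0, y + h / 2.0)
    return min(buttons,
               key=lambda b: math.dist(gaze_point, centre(b)))["name"]

buttons = [
    {"name": "OK",     "rect": (100, 100, 80, 24)},
    {"name": "Cancel", "rect": (200, 100, 80, 24)},
]
print(automatic_adjustment((150, 110), buttons))
```

Because the snap only fires when the user presses the mouse button, gaze jitter never moves the cursor on its own, which matches the move/push division of labour between eye and hand.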

We describe a speech-based interface to an information visualization
(infoVis) system. Users ask natural-language questions about a given data
domain. Our interface then maps the questions into infoVis operations, which
result in the display of data visualizations that address the questions. Users
can interact with these views via speech or direct manipulation. If users give
incomplete information, our interface guides them in clarifying their
questions. The intelligence behind our interface is encapsulated in a service
logic that embodies domain knowledge about both the data being explored and the
infoVis system. This allows users to focus on answering questions, rather than
on the mechanics of accessing data and creating views.

Digital television user interfaces are composed of text, graphics and video.
Usability issues that arise include information visualization, searching and
navigation. This paper introduces two user interface prototypes for digital
television. Both prototypes were tested with real users and the test results
are discussed.

MBE (Mail by Example) is a visual interface that provides advanced
facilities for handling large volumes of electronic messages. It enables users
to define ad hoc queries for retrieving messages, folders, or information
about them. MBE is based on a "by-example" query style (QBE) to suit the
requirements of typical users of email environments. The first evaluation of
MBE revealed general satisfaction with its features.

We present a multi-scale layout algorithm for the aesthetic drawing of
undirected graphs with straight-line edges. The algorithm is extremely fast,
and is capable of drawing graphs of substantially larger size than any other
algorithm we are aware of. For example, the algorithm achieves optimal drawings
of 1000-vertex graphs in less than 3 seconds. The paper contains graphs with
over 6000 nodes. The proposed algorithm embodies a new multi-scale scheme for
drawing graphs, which can significantly improve the speed of essentially any
force-directed method.
Graphs have become an important part of recently proposed user interfaces,
hence the relevance of this paper to work on interface issues.
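The heart of any multi-scale scheme is a coarsening step that shrinks the graph level by level; a minimal sketch, using a greedy edge matching (one common choice, not necessarily the paper's), looks like this:

```python
def coarsen(edges):
    """Collapse a greedy maximal matching of `edges`; return the coarse
    edge list and a map from each original vertex to its coarse vertex.
    The coarse layout can later seed a force-directed refinement of the
    finer level."""
    merged = {}
    for u, v in edges:                    # greedy maximal matching
        if u not in merged and v not in merged:
            merged[u] = u                 # u represents the pair {u, v}
            merged[v] = u
    vertices = {w for e in edges for w in e}
    for w in vertices:
        merged.setdefault(w, w)           # unmatched vertices survive
    coarse_edges = {(merged[u], merged[v]) for u, v in edges
                    if merged[u] != merged[v]}
    return sorted(coarse_edges), merged

# A 6-cycle coarsens to a 3-cycle.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(coarsen(cycle)[0])
```

Repeating the step yields a hierarchy of ever-smaller graphs; laying out the coarsest one and interpolating positions back down is what lets a force-directed method handle thousands of nodes quickly.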

The problem of finding a pleasing layout for a given graph is a key challenge
in the field of information visualization. For graphs that are biased towards a
particular property, such as being tree-like, star-like, or bipartite, a layout
algorithm can produce excellent layouts -- if this property is actually
detected.
Typically, however, a graph is not of such a homogeneous shape but is composed
of different parts, or it provides several levels of abstraction, each of which
is dominated by a different property.
The paper at hand addresses the layout of such graphs. It presents a meta
heuristic for graph drawing, which is based on two ideas: (i) The detection and
exploitation of hierarchical cluster information to unveil a graph's inherent
structure. (ii) The automatic selection of an individual graph drawing method
for each cluster.
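The per-cluster method selection can be sketched as a small dispatcher; the structural tests and method names below are illustrative stand-ins, not the paper's actual detection heuristics.

```python
def choose_layout(n_vertices, edges):
    """Pick a drawing method for one cluster by testing for a structural
    bias: star-like and tree-like shapes get specialised layouts, and
    everything else falls back to a general method. (The edge-count test
    for tree-ness assumes the cluster is connected.)"""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    if len(edges) == n_vertices - 1:           # tree candidate
        if max(degree.values()) == n_vertices - 1:
            return "radial (star-like)"
        return "layered tree drawing (tree-like)"
    return "general force-directed"

star = [(0, 1), (0, 2), (0, 3), (0, 4)]
print(choose_layout(5, star))                            # star detected
print(choose_layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # a cycle
```

Running such a test per cluster of the hierarchical decomposition, and composing the cluster drawings afterwards, is the essence of the two-part meta heuristic.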

The visual interface plays a significant role in a visual programming
system. We therefore developed heuristics that improve the readability of
control- and data-flow diagrams with many hundreds or even thousands of
nodes. In this paper, we study how the body of research in graph drawing (GD)
can be applied to an actual graph-based interface.

Different approaches support the construction of software by representing
certain aspects of a system graphically. Recently, the UML has become common to
provide software designers with tools, in which they can create visual
representations of software interactively. But UML is intended to be drawn on
two-dimensional surfaces. Our approach extends UML into a third and a fourth
dimension so that we can place both static and dynamic aspects in one single
view. In this way, we can show behavior in the context of structural
aspects, instead of drawing different diagrams for each aspect with only loose
relation to each other. We also use the third dimension to emphasize important
things and to place less interesting things in the background. Thereby, we
direct the viewer's attention to the important things in the foreground.
Currently, UML shows dynamic behavior by diagrams which do not change and are
therefore static in nature. In sequence diagrams, for example, time elapses
from the top of the diagram to the bottom. We point out that behavior is better
visualized by animated diagrams where message symbols move from the sender
object to the receiver object. Our approach supports the creation of a system
as well as the communication of its dynamic processes especially to customers.

The introduction of VRML has facilitated the production of virtual worlds.
Apart from being a format for defining 3D geometries, VRML also provides the
foundation for specifying interactive behaviour. However, this mechanism is
rather primitive: there is no direct modelling of composite events or
conditions and no efficient treatment of time. The approach presented in this
paper seeks to address these issues while formalising interactive behaviour in
3D-spatiotemporal worlds. We define the 3D-STECA (3D-SpatioTemporal Event
Condition Action) Rules which apply to the user and the objects of a virtual
environment. The set of rules comprises a scenario, which can be mapped to VRML
or Java3D. With this work we document behaviour by using formal expressions and
provide the basis for guaranteeing consistency in the interaction.

Keywords: 3D, VRML, spatiotemporal data
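The general event-condition-action shape of such rules can be illustrated with a toy dispatcher (this is not the 3D-STECA formalism itself, and the door scenario, field names, and thresholds are hypothetical): an event fires a rule, a spatial condition gates it, and an action mutates the world state.

```python
class Rule:
    """One behaviour rule: an event name, a condition over the world
    state, and an action that rewrites the world state."""
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

def dispatch(rules, world, event):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(world):
            rule.action(world)

# Hypothetical scenario: a door opens when the user gets close enough.
world = {"user_pos": 5.0, "door": "closed"}
rules = [Rule("user_moved",
              lambda w: w["user_pos"] < 2.0,     # spatial condition
              lambda w: w.update(door="open"))]  # action on the scene

dispatch(rules, world, "user_moved")   # too far: nothing happens
world["user_pos"] = 1.0
dispatch(rules, world, "user_moved")   # close enough: door opens
print(world["door"])
```

In the paper's setting, the condition and action vocabularies are spatiotemporal (proximity, timing, containment), and the resulting scenario is compiled to VRML routes or Java3D behaviours rather than interpreted as Python.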

A modular approach for exploring the semantic structure of technical
document collections

The identification and analysis of an enterprise's knowledge available in a
documented form is a key element of knowledge management. Visual methods which
allow easy access to a document collection's contents are an enabling
technology. However, no single information retrieval technique is likely to
adequately deal with such tasks independently of the specific situation. In this
paper, we therefore present a visualization technique based on a modular
approach that allows a variety of techniques from semantic document analysis to
be used in the visualization of the structure of technical document
collections.

If we accept that the computer representation of a conceptual model must be
closer to the idea or the world the user has in mind, conventional
object-oriented methods fail to properly address the specification of the
semantics of presentation features within the conceptual model itself. To solve
this problem, the expressiveness of the conceptual model must be enriched by
adding interface specifications to capture such essential information. This
must be done while preserving the semantics of the conceptual model as a
whole.
In this paper, a set of relevant interface patterns is introduced in the
conceptual modelling phase, preserving the homogeneity of the model. The
enrichment is applied to OO-Method, an object-oriented software production
method developed in the Information Systems and Computation Department at the
Valencia University of Technology. Following the OO-Method approach, a software
product is automatically obtained from the conceptual model, and the interface
is generated in a natural way.

We define a virtual prototype as a functional, photorealistic,
three-dimensional digital model of a future hand-held electronics product.
Besides
visualisation, product concept designers need to know the physical attributes
of the product, such as dimensions, weight and surface texture. The WebShaman
Digiloop system augments digital virtual prototypes with physical objects in
order to support such tangibility. A data glove is used to manipulate the
virtual prototype and a physical mock-up of a concept prototype adds the
physical aspects of the product concept to the virtual prototype. The user of
this system can examine the functionality and features of the product concept
as well as feel the dimensions, weight and texture, and move the prototype
freely in physical space.

In various circumstances, one may need to specify sequences of operations that
a "machine" has to perform to achieve a purpose. This paper presents VISPS, a
visual system originally designed to specify mission plans for the SARA
autonomous submarine robot. Although this is the particular setting for which
the system was initially devised, its flexibility allows it to be easily
configured for different contexts and situations, while always relying on the
same simple basic visual mechanism.