Visualizing Complex Information Spaces

This paper describes an intelligent system to help people share and filter
information communicated by computer-based messaging systems. The system
exploits concepts from artificial intelligence such as frames, production
rules, and inheritance networks, but it avoids the unsolved problems of natural
language understanding by providing users with a rich set of semi-structured
message templates. A consistent set of "direct manipulation" editors
simplifies the use of the system by individuals, and an incremental enhancement
path simplifies the adoption of the system by groups.

Creating and debugging knowledge-based systems, such as expert systems,
requires easy access to rules and facts in a vast, loosely-connected system.
Three graphic representations were devised for a system development tool that
integrates forward chaining, backward chaining, and full truth maintenance. In
one representation, possible interactions among rules, determined by
syntactically parsing the rules, are displayed as a directed graph. In a
second representation, actual interactions among facts and rules are displayed
dynamically. The third representation is a fish-eye view of the knowledge base
that explains why a fact was asserted. In addition, the text of rules and
facts is displayed in editing windows.
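The first, statically derived representation can be sketched in miniature. In the toy below, the rule names and fact syntax are invented, not the tool's own; it only illustrates the idea of finding "possible interactions" by syntactic matching, with an edge from rule A to rule B whenever a fact A asserts matches a condition of B.

```python
# Hypothetical rule format (illustrative only): each rule has "if"
# conditions and "then" assertions, written as simple fact strings.
RULES = {
    "r1": {"if": ["bird(x)"], "then": ["flies(x)"]},
    "r2": {"if": ["flies(x)"], "then": ["migrates(x)"]},
    "r3": {"if": ["fish(x)"], "then": ["swims(x)"]},
}

def interaction_graph(rules):
    """Directed edges (a, b): rule a may trigger rule b, because some
    fact asserted by a matches a condition of b."""
    edges = set()
    for a, ra in rules.items():
        for b, rb in rules.items():
            if set(ra["then"]) & set(rb["if"]):
                edges.add((a, b))
    return edges
```

Here r1 can feed r2 while r3 interacts with nothing, so the displayed graph would contain the single edge r1 -> r2.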

In many contexts, humans often represent their own "neighborhood" in great
detail, yet only major landmarks further away. This suggests that such views
("fisheye views") might be useful for the computer display of large information
structures like programs, data bases, online text, etc. This paper explores
fisheye views, presenting in turn naturalistic studies, a general formalism, a
specific instantiation, a resulting computer program, example displays and an
evaluation.
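The general formalism trades an item's global (a priori) importance against its distance from the current focus; one common formulation assigns each item a degree of interest DOI(x | focus) = API(x) - D(x, focus) and displays only items above a threshold. A toy rendering over a tree, with invented nodes, API taken as negative depth, and D as tree distance:

```python
# Toy fisheye filter over a tree of invented nodes: node -> parent.
PARENT = {
    "root": None,
    "a": "root", "b": "root",
    "a1": "a", "a2": "a", "b1": "b",
}

def path_to_root(node):
    """The node and all its ancestors, nearest first."""
    path = [node]
    while PARENT[node] is not None:
        node = PARENT[node]
        path.append(node)
    return path

def tree_distance(x, y):
    """Number of edges between x and y, via their lowest common ancestor."""
    up_from_y = {n: i for i, n in enumerate(path_to_root(y))}
    for i, node in enumerate(path_to_root(x)):
        if node in up_from_y:
            return i + up_from_y[node]
    raise ValueError("nodes are in different trees")

def doi(x, focus):
    """A priori importance (shallower = more important) minus distance."""
    api = -(len(path_to_root(x)) - 1)
    return api - tree_distance(x, focus)

def fisheye_view(focus, threshold):
    """Keep only nodes interesting enough from the current focus."""
    return sorted(n for n in PARENT if doi(n, focus) >= threshold)
```

With the focus on a1 and a threshold of -2, the view keeps a1, its parent a, and root: full detail near the focus, only landmarks elsewhere.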

Tutors and Consultants

UC is a natural language computer consultant system for the UNIX operating
system. The user model in UC encodes the user's knowledge state and allows UC
to tailor its responses to the user. The model encodes a priori knowledge in a
double stereotype system that is extremely efficient. Models of individual
users are updated dynamically and build on top of the user's stereotype. The
model deals with uncertainty in a priori information and attempts to deduce the
user's level during the course of a session.

TNT: A Talking Tutor 'N' Trainer for Teaching the Use of Interactive
Computer Systems

Tutor 'N' Trainer (TNT) is an automated tutor for vi, the UNIX system screen
editor. TNT fosters learning by doing. The Tutor component guides the
student's practice with spoken instruction and feedback. The Trainer component
assures safety during practice by permitting only previously taught and
appropriate operations. Individualization and effectiveness are achieved in
two ways: special helper keys enable slow learners to get extra help and repeat
troublesome tasks; and practice loops force slow learners to practice
repeatedly until competency is achieved.

Several hours of advisory protocols from a consultant for Personal Computing
were taped and analysed in terms of the role which the advisor played in the
interaction. The advisor's role was determined by the user's initial approach
and the advisor's perception of the needs of the user: informing the user about
available information; defining terms or procedures; indexing into appropriate
solution sources or methods for more complex problems; or structuring a nebulous
or poorly understood problem. A taxonomy of stages of information exchange is
outlined, and the patterns of alternation within each advisor role are described.
We suggest implications of this study for the design of advisory systems.

Visual Programming Environment Designs

It has been argued for a long time that the representation of a problem is
of crucial importance to understanding and solving it. Equally accepted is the
fact that the human visual system is a powerful system to be used in
information processing tasks. However, there exist few systems that try to
take advantage of these insights. We have constructed a variety of system
components which automatically generate graphical representations of complex
structures. We are pursuing the long-range goal of constructing a software
oscilloscope which makes the invisible visible. Our tools are used in a
variety of contexts: in programming environments, in intelligent tutoring
systems, and in human-computer interaction in general by offering aesthetically
pleasing interfaces.

Design Principles for the Enhanced Presentation of Computer Program Source
Text

In order to make computer programs more readable, understandable, appealing,
memorable, and maintainable, the presentation of program source text needs to
be enhanced over its conventional treatment. Towards this end, we present five
basic design principles for enhanced program visualization and a framework for
applying these principles to particular programming languages. The framework
deals comprehensively with ten fundamental areas that are central to the
structure of programming languages. We then use the principles and the
framework to develop a novel design for the effective presentation of source
text in the C programming language.

There has been a great interest recently in systems that use graphics to aid
in the programming, debugging, and understanding of computer programs. The
terms "Visual Programming" and "Program Visualization" have been applied to
these systems. Also, there has been a renewed interest in using examples to
help alleviate the complexity of programming. This technique is called
"Programming by Example." This paper attempts to provide more meaning to these
terms by giving precise definitions, and then uses these definitions to
classify existing systems into a taxonomy. A number of common unsolved
problems with most of these systems are also listed.

Transfer of User Skill Between Systems

A study was conducted to examine knowledge transfer between word processing
systems. The study examined the performance of naive subjects learning to use
a word processing system, as well as performance of individuals with word
processing experience as they learned to use a new system. Subjects initially
familiar with one system carried out a series of tasks on this system and then
were asked to carry out a similar series of tasks on a second system with which
they were initially unfamiliar. The second systems varied in similarity to the
first system along several dimensions. Subject performance was significantly
slower on the second set of tasks for all groups compared to a control group
using a single system. The reduced performance is attributed primarily to
'syntactic' differences in the user interfaces of the systems.

Learning and Transfer for Text and Graphics Editing with a Direct
Manipulation Interface

For a Direct Manipulation interface, transfer of skill between text and
graphics editing tasks has been investigated. A learning experiment has been
carried out with two groups of novice users starting with a series of sessions in
one task domain and then switching over to the other domain. The empirical
results are discussed in the framework of the "cognitive complexity" theory of
Polson and Kieras.

All discussions of interface design criteria emphasize the importance of
consistent operating procedures both within and across applications. This paper
presents a model for positive transfer and thus a theoretical definition of
consistency. An experiment manipulating training orders for utility tasks was
designed to evaluate the transfer model. The experimental manipulations
produced large transfer effects. Quantitative predictions were derived from
the Kieras and Polson (1985) theory of human-computer interaction and the
transfer model and fit using regression techniques. The transfer model
accounted for 88% of the variance of the 31 cell means.

Debate

A basic underlying theme to this conference and to the entire field of
Human-Computer Interaction is that Interface Design makes an important
difference. Does it? What is the evidence? If the point is so obvious, why
do so many expert users scoff? Why are so many of the best users content with
what they have, and why do manufacturers and designers continue to produce more
of the same?
In this debate, Norman and Card provide a serious examination of the
evidence for and against the field of interface design. The goal is to make
the issues stand out more clearly, thereby illuminating them more thoroughly.
The debate is intended to be lively, but to get at the major underlying bases
for the field of human-computer interaction.

Plenary Address

The Office of the Future -- Increasing Effectiveness and Enhancing the
Quality of Working Life

Windowing and Graphical Representation

Medical inference problems that seem too complex for intuitive solution can
be made tractable if the problem information is presented in the form of a
graphic display. The medical cognitive graphics approach to aiding complex
problem solving conceives of a medical professional as a person trying to form
a mental model of the patient's situation. Appropriate computer graphics make
mental models easier to form and easier to explore. This paper develops the
notion of medical cognitive graphics via two examples drawn from medical
diagnosis and monitoring.

How are Windows Used? Some Notes on Creating an Empirically-Based Windowing
Benchmark Task

Users of a windowing system were studied for the purpose of creating an
empirically based windowing benchmark. Each filled out a paper questionnaire
that sampled subjective opinions of windowing commands, and was observed for
approximately 22 minutes while performing typical daily activities on the
computer. Subjects were also asked to demonstrate a typical log-on procedure
and were personally interviewed. Windowing command frequencies, and screen
layout characteristics were collected and analyzed. The data revealed a
relatively high use of a small number of commands that were primarily concerned
with moving between windows. This study enabled the creation of a more
accurate windowing benchmark task.

It is widely believed that overlapping windows are preferable to tiled
(non-overlapping) ones, but there is very little research to support that
belief. An analysis of the basic characteristics of windowing regimes predicts
that there are, in fact, situations where overlapping windows are inferior to
tiled. An experiment to test this prediction verified that there are indeed
tasks and users for which tiled windows yield faster performance. This result
suggests a need for closer study of the principles underlying windowing
regimes, so that designers have a better understanding of the tradeoffs
involved in using them.

Documentation

Two experiments examine the effects of incorporating user knowledge into the
design of training materials for a database querying system. In Experiment I
an informal cognitive model of a query language is derived from the verbal
reports of expert users, and incorporated into existing documentation. Two
groups of subjects were then asked to solve queries using either the revised or
original manual. In Experiment II the cognitive model was formalized to
explicitly describe the conceptual and procedural information that was
incorporated into training materials. Three groups of subjects then received
either a conceptual model, procedural model, or neither in addition to basic
instructions, and then solved four sets of queries. The results show that
whether or not a given type of information facilitates performance depends on
the type of query, and whether the model is consistent with the operation of
the query system.

DOMAIN/DELPHI is the retrieval component of Apollo's in-house, integrated
publishing system. It retrieves and displays documentation in a networked
workstation environment in which each workstation has access to a common
database of user and systems documents. Users can find information by
"browsing" through a table of contents or by an indexed search for all
documents on a subject. DELPHI incorporates a graphical, menu-driven user
interface and displays output with multiple fonts and line art.

The effects of general global documentation, detailed step-by-step
documentation, and combined global and detailed documentation were examined for
high, medium, and low experienced students. The 198 students in this study
used a word-processing program to complete two problems during a two-hour
session. Results from univariate and multivariate analyses indicated that both
general time measures for reading documentation and completing problems as well
as the student users' reactions to the documentation, the program, and the
computer system were affected by either the type of documentation, the level of
experience, or both of these factors.

Drawing and Animation Systems

Algorithm animation has an acknowledged and growing role in computer aided
algorithm design, as well as in documentation and technology transfer, since
the medium of interactive graphics is a broader, richer channel than text by
which to communicate information. Since an animation constitutes an interface
between a user and an algorithm, a kit that facilitates the construction of
such animations has all the basic elements of a User Interface Management System.
Constraint languages are useful in constructing such an interface construction
kit, whereby consistency is maintained among the elements of a structure and
among those of a view of that structure presented to the user. But constraints
specify only static state in current implementations. To specify the evolution
of structures and views by discrete time increments, as in animation, requires
an extension to current constraint languages to allow expression of
specifications of temporal behavior.

A number of constraint-oriented, interactive graphical systems have been
constructed. A typical problem in such systems is that, to define a new kind
of constraint, the user must leave the graphical domain and write code in the
underlying implementation language. This makes it difficult for less
experienced users to add new kinds of constraints. As a step toward solving
this problem, the system described here allows the graphical definition of
constraints. An interface has been built in which a user can conveniently
construct a new kind of object, annotating it with the relations that it must
obey.

A User Interface for Multiple-Process, Turnkey Systems Targeted for the
Novice User

Multi-processing in a turnkey system provides capabilities which are not
available in a single-process system. Metagraphics has developed a menu-driven
user interface for its M-4200 product which allows the operator to control the
multiple process system with just a single-button mouse. Through the use of
stacked menus and soft buttons, the interface is optimized to shorten the
learning time for beginners and people unaccustomed to operating CAD/CAM
equipment. The user interface software completely handles the synchronization
of the concurrent processes for the operator as well as presenting the state of
the system in an attractive and easily understood format.

Case Studies

Learning Modes and Subsequent Use of Computer-Mediated Communication Systems

New users of four computer-mediated communication systems were asked to
indicate which of a variety of learning modes they had used, including reading
written manuals, using online automated help facilities, personal or group
lessons from a human teacher, and trial-and-error learning. Despite often
elaborate documentation and online help, the most frequent mode actually
selected by users is trial-and-error learning. Rather than bemoaning the fact
that users do not make proper use of written documentation, the implication for
system implementation is that it should be designed to effectively encourage
and support trial-and-error learning. An experimental intervention offering a
guided learning activity supports this conclusion.

Voice Messaging Enhancing the User Interface Based on Field Performance

Computer-based voice messaging systems are used to send and receive
confidential messages via touch-tone telephones. Auditory prompts guide users
through a series of menus, listing options as users proceed through their
sessions. This report describes how a voice messaging system was enhanced and
redesigned based on thinking aloud protocols, customer site interviews, and
usage statistics that describe summary patterns of behavior. The goal of the
human factors effort was to optimize system use. The evaluation of the length,
wording, and phrasing of auditory prompts, as well as the ease of access
provided by the menu structure, led to specific enhancements and redesign.
Feedback also helped define an audio HELP/OTHER OPTIONS system that (1)
provided context sensitive assistance and (2) documented infrequently used
options that enabled streamlining of routine transactions.

Integrated Software Usage in the Professional Work Environment: Evidence
from Questionnaires and Interviews

In a field study of use of integrated business software by business
professionals, we found several characteristics of the real-world situation
that lead to the under-utilization of integrated software and that bear on its
human factors. Professionals work in a heterogeneous software environment
filled with practical problems; they follow "satisficing"
strategies of sub-optimal usage, and they have problems migrating to more
advanced uses. Current levels of software integration do not always adequately
or easily support the "task integration" requirements of real tasks such as
handling many small things.

Program Debugging

Two experiments investigated expert-novice differences in debugging computer
programs. Debugging was done on programs provided to the subject and run on a
microcomputer. The programs were in LOGO in Exp. 1 and Pascal in Exp. 2.
Experts debugged more quickly and accurately, largely because they generated
high quality hypotheses on the basis of less study of the code than novices.
Further, novices frequently added bugs to the program during the course of
trying to find the original one. At least for these simple programs, experts'
superior debugging performance seemed to be due primarily to their superior
ability to comprehend the program.

Does Programming Language Affect the Type of Conceptual Bugs in Beginners'
Programs? A Comparison of FPL and Pascal

The effect of the graphical programming language FPL (First Programming
Language) on the occurrence of conceptual bugs in programs written by novices
was studied. The type and location for each bug, and the frequency for each
type were all recorded following procedures developed in an earlier Yale
University study of novice Pascal programming. The findings were compared with
those of the earlier study, and suggest that FPL may help beginning programmers
avoid some common conceptual errors in their programming.

In this paper, we investigate whether or not most novice programming bugs
arise because students have misconceptions about the semantics of particular
language constructs. Three high frequency bugs are examined in detail -- one
that clearly arises from a construct-based misconception, one that does not,
and one that is less cut-and-dried. Based on our empirical study of 101 bug
types from three programming problems, we will argue that most bugs are not due
to misconceptions about the semantics of language constructs.

Voice Enhancement

Designing a Quality Voice: An Analysis of Listeners' Reactions to Synthetic
Voices

Eight subjects listened to a set of synthetic voices reflecting a crossing
of four voice qualities: head size, pitch, richness and smoothness. The
listeners evaluated the voices on sixteen perceptual scales, and judged each
voice's appropriateness for twenty voice-output scenarios. Factor analysis of
the perceptual ratings recovered two factors, fullness and clarity. A similar
analysis of the appropriateness ratings revealed three situational factors,
information, entertainment and feedback. Further analyses indicated that the
voice qualities associated with the three situational factors were quite
different, and suggest ways to optimize voices used for a particular purpose.

Though technology in speech recognition has progressed recently, Automatic
Speech Recognition (ASR) is vulnerable to noise. Lip-information is thought to
be useful for speech recognition in noisy situations, such as in a factory or
in a car.
This paper describes speech recognition enhancement by lip-information. Two
types of usage are dealt with. One is the detection of start and stop of
speech from lip-information. This is the simplest usage of lip-information.
The other is lip-pattern recognition, and it is used for speech recognition
together with sound information. The algorithms for both usages are proposed,
and the experimental system shows that they work well. The algorithms proposed
here consist of simple image-processing operations. Future progress in image
processing will make it possible to realize them in real time.

An experiment was run in which elderly and younger people used a keyboard
editor and a simulated listening typewriter to compose letters. Performance
was measured and participants rated the systems they used.
Our general conclusions were as follows:

- There are no major differences in performance between elderly computer users
and their younger counterparts in carrying out a computer-based composition
task.

- Elders appear to be more enthusiastic users of computer systems than are
younger people. This is shown by preference ratings, behavioral
observations, and post-experimental debriefings.

- Voice input does not improve performance on composition tasks, but it is
greatly preferred over the traditional keyboard input method.

Interface Management and Prototyping

This paper discusses a set of tools supporting the rapid development of
voice and telephony applications. The tools allow interfaces to be rapidly
prototyped, tested, and installed without impacting the underlying system. Used
directly by behavioral specialists, they have played a key role in the building
of two production systems. We review several essential features of this
facility and then outline its role in the rapid development of a voice
messaging system for the athletes and officials at the 1984 Summer Olympics in
Los Angeles.

Trillium is a computer-based environment for simulating and experimenting
with interfaces for simple machines. For the past four years it has been used
by Xerox designers for fast prototyping and testing of interfaces for copiers
and printers. This paper defines the class of "functioning frame" interfaces
which Trillium is used to design, discusses the major concerns that have driven
the design of Trillium, and describes the Trillium mechanisms chosen to satisfy
them.

An Interactive Environment for Dialogue Development: Its Design, Use, and
Evaluation; or, Is AIDE Useful?

The Author's Interactive Dialogue Environment (AIDE) of the Dialogue
Management System is an integrated set of direct manipulation tools used by a
dialogue author to design and implement human-computer interfaces without
writing source code. This paper presents the conceptual dialogue transaction
model upon which AIDE is based, describes AIDE, and illustrates how a dialogue
author develops an interface using AIDE. A preliminary empirical evaluation of
the use of AIDE versus the use of a programming language to implement an
interface shows very encouraging results.

Design Methods I

A technique is described in which a user's knowledge of a software package
is elicited by means of a series of photographs depicting the system in a
variety of states. The resultant verbal protocols were codified and scored in
relation to the way in which the system actually worked. In the illustrative
study described, the probes were administered twice after 5 and 10 hrs of
system experience with an office product (VisiOn). The number of true claims
elicited increased with experience but the number of false claims remained
stable. The potential value of the technique and its outputs are discussed.

A unified approach to improved usability can be identified in the works of
Gilb (1981, 1984), Shackel (1984), Bennett (1984), Carroll and Rosson (1985),
and Butler (1985). We term this approach "usability engineering," and seek to
contribute to it by showing, via a product development case study, how
user-derived estimates of the impact of design activities on engineering goals
may be made.

In a recent paper, Gould and Lewis (1983a) argued for the importance of four
key principles in computer system design. These principles are: early focus on
users, interactive design, empirical measurement, and iterative design. Gould
and Lewis also express their belief that these principles are essential to
successful design and refer to an example of their use (Gould and Lewis,
1983b). It is the purpose of this paper to report another example of how these
principles played a major role and proved their worth in the design of a
successful system.

Panel

Much of the work in the field of computer human interaction consists of
finding out what is wrong with existing interfaces or which of several existing
alternatives is better. Over the next few decades, the possibilities for
computer human interaction will explode. This will be due to: 1) continued
decrease in the costs of processing and memory, 2) new technologies being
invented and existing technologies (e.g., handwriting recognition, speech
synthesis) being extended, 3) new applications and 4) new ideas about how
people can interact with computers.
While changes along these lines are bound to occur, we need not take the
view that investigators in human-computer interaction are to be passive
observers of some uncontrolled and uncontrollable evolution. Indeed, we can
help steer this process by visions of what the future of human computer
interaction could and should be like.

The Semantics of Interaction

The design and implementation of adaptive systems as opposed to nonadaptive
systems creates new demands on user interface designers. This paper discusses
a few of these demands as encountered by the authors while utilising a formal
notation for the design of an adaptive user interface to an electronic mail
system. Recommendations for the extension of this formal notation are proposed
and discussed.

Interactive user interfaces depend critically on underlying computing system
facilities for input and output. However, most computing systems still have
input-output facilities designed for batch processing. These facilities are
not adequate for interfaces that rely on graphical output, interactive input,
or software constructed with modern methodologies. This paper details the
deficiencies of batch-style input-output for modern interactive systems,
presents a new model for input-output that overcomes these deficiencies, and
suggests software organizations to take advantage of the new model.

Design Methods II

NASA Space Station missions will include crewmembers who are highly
experienced in the use of the Space Station computer system, as well as others
who are novices. Previous research into novice-expert differences has strongly
implied that user interface changes that aid novices tend to impair experts and
vice versa. This experiment investigated the impact that reformatting the
alphanumeric information on current Space Shuttle computer displays had on the
speed and accuracy of experts and nonexperts in two different search tasks. Large
improvements in speed and accuracy were found for nonexperts on the reformatted
displays. Experts had fewer errors but no response time difference on
reformatted displays. Differences in expert and nonexpert search strategies
and implications for the design of computer displays are discussed.

Skills developed by software user interface designers to solve problems in
communication, management, implementation, and other areas may influence design
decisions in the absence of sufficient knowledge of user populations. Given
today's rapid changes in both "faces" to the software interface -- user
populations and software functionality -- the first pass at a design may be
made without sufficient understanding of the relevant goals and behaviors of
the eventual users. Without this information, designers are less able to grasp
"user logic", and may rely on more familiar "logics" that are useful in other
problem-solving arenas. Understanding how these approaches can affect a design
may help us recognize them across a wide range of contexts and enable us to
focus the human factors contribution to the design evolution process.

In this paper we propose a formal interface design methodology based on user
knowledge. The general methodology consists of 1) obtaining distance estimates
for pairs of system units (objects, actions, concepts), 2) transforming the
distance estimates using scaling techniques (e.g., Pathfinder network
analysis), and 3) organizing the system interface based on the scaling
solution. Thus, the organization of the system is based on the cognitive
models of users rather than the intuitions of designers. As an example, we
discuss the application of our methodology to the design of a network-based
indexing aid for the UNIX on-line documentation system (MAN).
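Step 2 can be sketched for one common Pathfinder setting (r = 1, q = n - 1), under which a link between two units survives only if no indirect path through other units is shorter. The three units and their distance estimates below are invented for illustration.

```python
import itertools

# Invented symmetric distance estimates between three system units.
DIST = {
    "a": {"b": 1.0, "c": 3.0},
    "b": {"a": 1.0, "c": 1.0},
    "c": {"a": 3.0, "b": 1.0},
}

def pathfinder_edges(dist):
    """Pathfinder network with r=1, q=n-1: keep the link (a, b) only if
    the direct distance is no longer than any indirect path."""
    nodes = list(dist)
    # All-pairs shortest paths (Floyd-Warshall) over the distance data.
    sp = {a: dict(dist[a]) for a in nodes}
    for a in nodes:
        sp[a][a] = 0.0
    for k in nodes:
        for i in nodes:
            for j in nodes:
                sp[i][j] = min(sp[i][j], sp[i][k] + sp[k][j])
    return {(a, b) for a, b in itertools.combinations(nodes, 2)
            if dist[a][b] <= sp[a][b]}
```

The long a-c estimate is pruned because the a-b-c path is shorter, so the resulting interface organization would link a to b and b to c directly.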

Knowledge-Based Interfaces

The benefits of electronic information storage are enormous and largely
unrealized. As its cost continues to decline, the number of files in the
average user's personal database may increase substantially. How is a user to
keep track of several thousand, perhaps several hundred thousand, files? The
Memory Extender (ME) system improves the user interface to a personal database
by actively modeling the user's own memory for files and for the context in
which these files are used. Files are multiply indexed through a network of
variably weighted term links. Context is similarly represented and is used to
minimize the user input necessary to disambiguate a file. Files are retrieved
from the context through a spreading-activation-like process. The system aims
towards an ideal in which the computer provides a natural extension to the
user's own memory.
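The retrieval step can be caricatured in a few lines: files linked to terms with weights, and activation flowing from the current context to files through shared term links. All file names and weights below are invented, and the actual ME network and spreading process are richer than this single step.

```python
# Invented index: each file is linked to terms with weights.
FILE_TERMS = {
    "budget86.txt": {"budget": 0.9, "1986": 0.4},
    "trip-plan.txt": {"travel": 0.8, "budget": 0.3},
}

def retrieve(context, file_terms):
    """Rank files by the activation they receive from the weighted
    context terms through shared term links (one spreading step)."""
    score = {
        f: sum(w * terms.get(t, 0.0) for t, w in context.items())
        for f, terms in file_terms.items()
    }
    return sorted(score, key=score.get, reverse=True)
```

A context dominated by "budget" activates budget86.txt more strongly than trip-plan.txt, so less explicit user input is needed to disambiguate the intended file.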

Learning to control a computer system from limited experience with it seems
to require constructing a mental model adequate to indicate the causal
connections between user actions, system responses, and user goals. While many
kinds of knowledge could be used in building such a model, a small number of
simple, low-level heuristics is adequate to interpret some common computer
interaction patterns. Designing interactions so that they fall within the
scope of these heuristics may lead to easier mastery by learners.

To meet the challenge of constructing interfaces for increasingly complex
multifunctional products, designers will be attracted by the promise offered by
"intelligent" systems. However, the value of such sophisticated systems must
be measured in terms of the quality of their user's models. One such
intelligent interface -- an Expert Help System -- has been designed,
implemented, and evaluated. We argue that the operability problems noted in
the users' interactions with this system are attributable to lack of a strong
user model in the system interface. Such a model plays a critical role in
determining the effectiveness of the system's ability to monitor the user's
planning activities. We discuss the requirements of a strong user model and
provide an example of how such a model might be integrated into a planner-based
intelligent interface.

Haptic Techniques

Two experiments were run to investigate two-handed input. The experimental
tasks were representative of those found in CAD and office information systems.
Experiment one involved the performance of a compound selection/positioning
task. The two sub-tasks were performed by different hands using separate
transducers. Without prompting, novice subjects adopted strategies that
involved performing the two sub-tasks simultaneously. We interpret this as a
demonstration that, in the appropriate context, users are capable of
simultaneously providing continuous data from two hands without significant
overhead. The results also show that the speed of performing the task was
strongly correlated to the degree of parallelism employed.
Experiment two involved the performance of a compound navigation/selection
task. It compared a one-handed versus two-handed method for finding and
selecting words in a document. The two-handed method significantly
outperformed the commonly used one-handed method by a number of measures.
Unlike experiment one, only two subjects adopted strategies that used both
hands simultaneously. The benefits of the two-handed technique, therefore, are
interpreted as being due to efficiency of hand motion. However, the two
subjects who did use parallel strategies had the two fastest times of all
subjects.

A method for interactive validation of transaction data with autocompletion
is introduced and analyzed in a library information system for periodical
publications. The system makes it possible to identify the periodicals by
using the full title, thus making a separate coding phase unnecessary. Only the
characters needed to distinguish a title from the others have to be
typed; in our library this averages 4.3 characters. We have
noticed that it is faster to use the autocompletion system compared with the
use of short codes and a code catalogue. The autocompletion feature causes
more errors, at least for novices, because the work differs from normal
typing. The errors are, however, very easy to correct with the assistance of
the system.
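The typed-character saving rests on the minimal distinguishing prefix of each title. A sketch with an invented catalogue:

```python
# Invented catalogue; the real system works over periodical titles.
TITLES = ["Communications of the ACM", "Computer", "Computing Surveys"]

def minimal_prefix(title, catalogue):
    """Shortest prefix of `title` that no other catalogue entry shares."""
    others = [t for t in catalogue if t != title]
    for n in range(1, len(title) + 1):
        if not any(t.startswith(title[:n]) for t in others):
            return title[:n]
    return title

def complete(prefix, catalogue):
    """Autocomplete a typed prefix to the unique matching title, if any."""
    matches = [t for t in catalogue if t.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None
```

Typing "Compute" (7 characters) already identifies "Computer", since "Computing Surveys" diverges at the seventh character, while "Comp" is still ambiguous.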

Workstations require use of the hands both for text entry and for
cursor-positioning or menu-selection. The physical arrangement does not allow
these two tasks to be done concurrently. To remove this restriction, various
alternative input devices have been investigated. This work focuses on the
class of foot-operated computer input devices, called moles here. Appropriate
topologies for foot movement are identified, and several designs for realising
them are discussed.