IJMMS 1985 Volume 23 Issue 1

Rule-based systems are a development associated with recent research in
artificial intelligence (AI). These systems express their decision-making
criteria as sets of production rules, which are declarative statements relating
various system states to program actions. For computer-assisted instruction
(CAI) programs, system states are defined in terms of a task analysis and
student model, and actions take the form of the different teaching operations
that the program can perform. These components are related by a set of
means-ends guidance rules that determine what the program will do next for any
given state.
The paper presents the design of a CAI course employing a rule-based
tutorial strategy. This design has not undergone the test of full
implementation; the paper presents a conceptual design rather than a
programming blueprint. A distinctive feature of the course design described
here is its domain, computer graphics. The
precise subject of the course is ReGIS, the Remote Graphics Instruction Set on
Digital Equipment Corporation GIGI and VT125 terminals. The paper describes
the course components and their inter-relationships, discusses how program
control might be expressed in the form of production rules, and presents a
program that demonstrates one facet of the intended course: the ability to
parse student input in such a way that rules can be used to update a dynamic
student model.
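
As a minimal illustration of the production-rule control style described above
(a hedged sketch only; the rule set, student-model fields and actions are
hypothetical, not taken from the paper), guidance rules can be represented as
condition-action pairs evaluated against the dynamic student model:

    # Hypothetical sketch of production-rule tutorial control.
    # Each rule pairs a condition over the student model with a teaching action;
    # the first rule whose condition holds determines what the program does next.

    def needs_syntax_review(model):
        # Condition over hypothetical student-model fields.
        return model["syntax_errors"] >= 3 and not model["seen_syntax_lesson"]

    def present_syntax_lesson(model):
        model["seen_syntax_lesson"] = True
        print("Presenting lesson: basic ReGIS command syntax")

    GUIDANCE_RULES = [
        (needs_syntax_review, present_syntax_lesson),
        (lambda m: m["tasks_completed"] >= 5, lambda m: print("Advance to next topic")),
    ]

    def next_action(model):
        """Select the next teaching operation for the current system state."""
        for condition, action in GUIDANCE_RULES:
            if condition(model):
                return action
        return lambda m: print("Continue free practice")

    student = {"syntax_errors": 3, "seen_syntax_lesson": False, "tasks_completed": 2}
    next_action(student)(student)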

A definition is introduced for the logical inference of one proposition from
the assertion of a set of premise propositions in multivalued logics.
Some of the properties of this definition are investigated. It is shown that
it is a generalization of the concept of inference in binary logic.
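
The abstract does not state the definition itself; purely as a hedged
illustration, one standard way to generalize binary entailment to truth values
in [0, 1] is:

    % Illustrative only; not necessarily the authors' definition.
    % Premises P_1,...,P_n infer Q when, under every valuation v into [0,1],
    % the conclusion is at least as true as the weakest premise:
    \[
      P_1,\dots,P_n \models Q
      \iff
      \forall v:\; v(Q) \;\ge\; \min_{1 \le i \le n} v(P_i).
    \]
    % Restricting v to {0,1} recovers classical entailment: whenever every
    % premise has value 1, the conclusion must also have value 1.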

Computer Transcription of Handwritten Shorthand as an Aid for the Deaf -- A
Feasibility Study

An automatic speech transcription aid for the deaf provides a written
version of speech for a deaf person to read. Such a system need not provide a
perfect rendition of the speech; all that is necessary is for the output to be
readable and to be provided at or close to verbatim speeds. One possible
system to achieve this would be to translate handwritten shorthand into
readable orthography. This paper describes a feasibility study in which such a
transcription system was implemented on a microcomputer to ascertain the
practical problems involved.

Speed of Response Using Keyboard and Screen-Based Microcomputer Response
Media

Response latencies were recorded during a continuous performance task using
four standard microcomputer response media: the standard QWERTY keyboard, a
numeric keypad, a light-pen and a touch-screen. There were significant
differences among these media, the fastest responses being made with the
touch-screen, and the slowest with the light-pen. The keypad was superior to
the full keyboard. Within devices, the physical layout of the key-based
devices accounted for the differences among responses with specific keys. A
common pattern of response using the touch-screen and light-pen was found, with
a constant offset of 309 ms for the light-pen. A tentative model of response
execution with these screen-based devices is presented, with implications for
the nature of response parameters in unconstrained response environments, and
for the design of human-computer interfaces.

The paper argues that user models are an essential component of any system
which attempts to be "user friendly", and that expert systems should tailor
explanations to their users, be they super-experts or novices. In particular,
this paper discusses a data-driven user modelling front-end subsystem, UMFE,
which assumes that the user has asked a question of the main system (e.g. an
expert system, intelligent tutoring system etc.), and that the system provides
a response which is passed to UMFE. UMFE determines the user's level of
sophistication by asking as few questions as possible, and then presents a
response in terms of concepts which UMFE believes the user understands.
Investigator-defined inference rules are then used to suggest additional
concepts the user may or may not know, given the concepts the user indicated he or
she knew in earlier questioning. Several techniques are discussed for
detecting and removing inconsistencies in the user model. Additionally, UMFE
modifies its inference rules for individual users when it detects certain types
of inconsistencies. UMFE is a portable domain-independent implementation of a
system which infers overlay models for users. UMFE has been used in
conjunction with NEOMYCIN; and the paper contains several protocols which
demonstrate its principal features. The paper concludes with a critique of
UMFE and suggestions for enhancing the current system.
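
As a hedged sketch of the overlay-model idea (rule format and concept names are
illustrative, not drawn from UMFE or NEOMYCIN), investigator-defined inference
rules can be applied to a fixpoint to expand the set of concepts believed known:

    # Each rule states: if the user knows `premise`, they probably also know `implied`.
    INFERENCE_RULES = [
        ("differential diagnosis", "diagnosis"),
        ("gram-negative organism", "bacterium"),
    ]

    def infer_known(confirmed, rules=INFERENCE_RULES):
        """Expand the concepts believed known by applying rules until nothing changes."""
        known = set(confirmed)
        changed = True
        while changed:
            changed = False
            for premise, implied in rules:
                if premise in known and implied not in known:
                    known.add(implied)
                    changed = True
        return known

    # Concepts the user confirmed during questioning:
    print(infer_known({"differential diagnosis"}))
    # -> {'differential diagnosis', 'diagnosis'}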

The use of microcomputers in clinical assessment in psychology and
psychiatry is briefly reviewed. An explanation is given of the distinction
between nomothetic assessment, based on the statistical comparison of a
person's performance with a normative sample of his peers, and idiographic
assessment allowing the measurement of clinically relevant experiences unique
to the individual and expressed in his own words. The Personal Questionnaire
Rapid Scaling Technique due to Mulhall is described. This paper and pencil
test is an example of an idiographic instrument. Its repeated use generates
data allowing graphs to be drawn showing the covariation of clinically relevant
phenomena over time. The implementation of the test on a microcomputer is
described and an example given of the use of the computer version in a clinical
case. Patients describe the computer version as more pleasant to use, and it
is significantly easier and quicker for the clinician, obviating paperwork and
printing the results in an ergonomically efficient format enabling rapid
assimilation of clinical data.

IJMMS 1985 Volume 23 Issue 2

Co-Operative Structuring of Information: The Representation of Reasoning and
Debate

Interactive computer networks create new opportunities for the co-operative
structuring of information which would be impossible to implement within a
paper-based medium. Methods are described for co-operatively indexing,
evaluating and synthesizing information through well-specified interactions by
many users with a common database. These methods are based on the use of a
structured representation for reasoning and debate, in which conclusions are
explicitly justified or negated by individual items of evidence. Through
debates on the accuracy of information and on aspects of the structures
themselves, a large number of users can co-operatively rank all available items
of information in terms of significance and relevance to each topic.
Individual users can then choose the depth to which they wish to examine these
structures for the purposes at hand. The function of this debate is not to
arrive at specific conclusions, but rather to collect and order the best
available evidence on each topic. By representing the basic structure of each
field of knowledge, the system would function at one level as an information
retrieval system in which documents are indexed, evaluated and ranked in the
context of each topic of inquiry. At a deeper level, the system would encode
knowledge in the argument structures themselves. This use of an interactive
system for structuring information offers further opportunities for improving
the accuracy, integration and accessibility of information.
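
A minimal sketch of such a structured representation (data structures, field
names and the voting scheme are hypothetical, not the paper's design) might
attach ranked items of supporting or negating evidence to each conclusion:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Evidence:
        text: str
        supports: bool      # True = justifies the conclusion, False = negates it
        votes: int = 0      # co-operative ranking contributed by many users

    @dataclass
    class Conclusion:
        statement: str
        evidence: List[Evidence] = field(default_factory=list)

        def ranked_evidence(self):
            """Order evidence by significance so readers can stop at any depth."""
            return sorted(self.evidence, key=lambda e: e.votes, reverse=True)

    c = Conclusion("Topic X is well supported")
    c.evidence.append(Evidence("Study A replicates the effect", supports=True, votes=12))
    c.evidence.append(Evidence("Study B failed to replicate", supports=False, votes=7))
    for e in c.ranked_evidence():
        print(("+" if e.supports else "-"), e.text, e.votes)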

Program comprehension is considered to be a two-stage process. The two
stages broadly consist of formation and use of a mental representation of the
program. Based on this two-stage process, five features of the program are
identified as factors contributing to the psychological complexity of the
program. The identified factors are: meaningfulness, size, control structure,
data structure and execution structure. This article reports consolidated
results of a series of experiments conducted to establish and to study the
integrated effect of these factors on program comprehension. The two
experiments in the series are discussed in detail. The results of the
judgemental experiment indicate that program size, control structure, data
structure and execution structure contribute independently to perceived program
complexity, and that their contributions differ. The results of the
program-comprehension experiments, including the two described in the paper,
confirm that the above four algorithm-dependent factors affect program
understanding and hence contribute to the psychological complexity of the
program.

A Performance Profile Methodology for Implementing Assistance and
Instruction in Computer-Based Tasks

The development of assistance and instructional systems is an alternative
design strategy for human-computer interfaces. An integral component in the
construction of these interfaces concerns the representation used to capture
the user, task or machine. Representational issues and strategies in
assistance and instructional systems are discussed in the paper. These
representational issues concern the use of symbolic and quantitative models,
the development of performance and cognitive models, the distinction between
skilled and unskilled models, the use of static and dynamic models, and the
possible development of multiple representation strategies. From these issues,
a profile methodology was distilled that could be implemented as an assistant
or tutor in a file-search environment based upon novice and expert differences.
This profile method is a quantitative, performance-based approach that samples
expert behaviour to define a skilled model of file search. The development and
formative evaluation of the methodology used 16 novice and 16 expert subjects.
File-search performance and strategies were analysed in a complex
information-retrieval task using the profile methodology. The method proved to
be sensitive to novice and expert differences in file search and also seemed to
be capable of being implemented as a diagnostic device for assistance and
instruction. Limitations and extensions of the profile methodology are
discussed in terms of command sequence information, explanation and reasoning,
strategic and planning resources, and possible learning and adaptive models.
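
A hedged sketch of the profile idea (metric names, sample data and the
threshold are invented for illustration, not taken from the study): expert
sessions are sampled to define a skilled model, and a user's session is scored
against that distribution to flag candidates for assistance:

    from statistics import mean, stdev

    # Commands issued per retrieval task in sampled expert sessions (made-up data).
    EXPERT_PROFILE = {"commands_per_task": [4, 5, 3, 4, 6, 4]}

    def diagnose(user_metrics, profile=EXPERT_PROFILE, threshold=2.0):
        """Flag metrics on which the user deviates markedly from the expert profile."""
        flags = {}
        for metric, samples in profile.items():
            z = (user_metrics[metric] - mean(samples)) / stdev(samples)
            if z > threshold:
                flags[metric] = "candidate for assistance or instruction"
        return flags

    print(diagnose({"commands_per_task": 11}))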

Pictures provide an effective means of communication both between humans and
within human-computer systems. This paper describes how a
microcomputer-controlled high resolution digitizer may be used to implement a
variety of pictorial human-computer interfaces using a number of novel
transaction types.

The information regarding a particular field of knowledge is conceptualized
as a large, specified set of questions (or problems). The knowledge state of
an individual with respect to that domain is formalized as the subset of all
the questions that this individual is capable of solving. A particularly
appealing postulate on the family of all possible knowledge states is that it
is closed under arbitrary unions. A family of sets satisfying this condition
is called a knowledge space. Generalizing a theorem of Birkhoff on partial
orders, we show that knowledge spaces are in a one-to-one correspondence with
AND/OR graphs of a particular kind. Two types of economical representations of
knowledge spaces are analysed: bases, and Hasse systems, a concept generalizing
that of a Hasse diagram of a partial order. The structures analysed here
provide the foundation for later work on algorithmic procedures for the
assessment of knowledge.
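
A small constructed example (not from the paper) makes the closure condition
concrete:

    % Let Q = {a, b, c} be the question set.  The family
    \[
      \mathcal{K} \;=\; \bigl\{\, \varnothing,\ \{a\},\ \{b\},\ \{a,b\},\ \{a,b,c\} \,\bigr\}
    \]
    % is closed under arbitrary unions (e.g. {a} U {b} = {a,b} belongs to K),
    % so it is a knowledge space.  The subfamily {{a}, {b}, {a,b,c}} is a base:
    % every state is a union of base elements, and no base element can be dropped.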

The advent of microcomputers promises to open up a new era in psychometrics.
In order to tap the full potential of the computer, it should do more than
simply administer and score psychological tests, generate interpretations and
graph the results. Psychometricians should tap the decision-making capacity of
the computer, so that the most information can be generated from the fewest
questions necessary to assess, diagnose and treat the patient accurately. This
exploratory study examines the feasibility of converting the full-length
Minnesota Multiphasic Personality Inventory (566 items) into a shorter form
using a computer to individualize the number of items needed to reproduce a
full-scale MMPI. 499 subjects from Kaiser Permanente (Southern California
Region) who had taken the MMPI-566 in the paper and pencil format were used for
the norming group, and 489 subjects from the same population were used for the
comparison group. Multiple regression equations were produced for each step,
and the computer generated a new equation for each item until criterion
(matching two consecutive predictions, accurate to the first decimal place) was
reached. The mean of the correlations comparing the predicted raw scores
against the actual raw scores (scales across subjects) was r = 0.95, with a
range of 0.88 to 0.98. The mean of the correlations comparing the predicted
profiles (subjects across scales) was r = 0.98 for the norming group and r =
0.97 for the comparison group. The mean number of items needed for the
comparison group was 187 (after adjusting for item overlap), and the profile
correlation by subject for number of items needed was r = 0.20. It was
concluded that (1) psychological tests can be developed that will allow for
individualized computer administration and (2) scores on these tests can
accurately predict the scores on tests from which they were derived.
Implications for use of computers in psychometrics are discussed.
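
As a hedged sketch of the stopping rule described above (the prediction
function below is a stand-in, not the study's regression equations), items are
administered one at a time and testing stops when two consecutive full-scale
predictions agree to the first decimal place:

    def administer_adaptively(items, predict):
        """`items` yields responses in order; `predict` maps responses-so-far to a score."""
        responses, previous = [], None
        for item in items:
            responses.append(item)
            current = predict(responses)
            if previous is not None and round(current, 1) == round(previous, 1):
                break                      # criterion reached: stop presenting items
            previous = current
        return round(current, 1), len(responses)

    # Toy prediction: a running mean scaled to a raw-score range (hypothetical).
    toy_predict = lambda r: 30 * sum(r) / len(r)
    score, n_items = administer_adaptively([1, 0, 1, 1, 1, 0, 1], toy_predict)
    print(score, n_items)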

IJMMS 1985 Volume 23 Issue 3

An important issue in the usability of software is the quality of the
manuals that are provided with the product. Some manuals describe the rules of
a programming language to the user, and the syntax notation or metalanguage
used to describe these rules can take various forms. In this study, we tested
a notation which uses brackets and braces (designated "Signs") against a method
which uses syntax diagrams (designated "Maps"). A two-day experimental design
was used in which the subjects were trained and tested on one metalanguage the
first day, and then trained and tested on the second metalanguage the next day
in a Latin square design. Four phases of this experiment were conducted, with
12 subjects in each phase. The procedure remained the same in each phase, but
the subject population (programmers or non-programmers) or training material
(full manual or one-page instruction) was changed from one phase to the next.
No performance differences were found between Maps and Signs when the subjects
were given extensive training on the metalanguage. However, when training was
restricted to one page of instruction, performance was consistently superior
with Maps for both programmers and non-programmers.

Exhibit effectiveness data from Jasper National Park in the Canadian Rockies
was collected using both a paper-and-pencil survey and a Victor
9000 microcomputer. These methods were compared to see if responses to a
manually administered questionnaire are similar to responses to the same
questions asked at a computer terminal. Both groups of respondents got similar
percentages correct on questions about major exhibits that were textually
based, but the computer group scored significantly better on those exhibits
that had sound instead of text (in addition to a major visual component). The
computer and manual groups were similar in their indication of how easy they
felt most exhibits were to understand. However, the computer group indicated
they saw significantly fewer major exhibits than did the manual group. Thus as
a preliminary theory, it can be suggested that the non-random group of people
drawn to the computer differ from typical random respondents in that they seem
to gain more knowledge from or know more about features in exhibits which have
a major sound component rather than a textual one. It was also found that the
use of computerized surveying was well received by both visitors and staff.
Respondents in the computer group particularly preferred it over paper and
pencil questionnaires. Given these results it is concluded that the computer
can be used to measure the effectiveness of many exhibits, and for those that
have visual and auditory components, it may be a means of establishing a
base-line or relative knowledge level for determining changing knowledge as
exhibits are altered.

A Study of the Effect of Different Data Models on Casual Users' Performance
in Writing Database Queries

The motivation for this study was that database query facilities are not
effectively meeting casual users' needs. A solution to this problem is
especially important due to the increasing number of potential users falling
into the classification of "casual user". There is considerable controversy
revolving around the question of which elements and/or which combination of
elements within the casual users' environment are necessary to provide an
effective man-machine interface. This study is intended to extend the basic
knowledge relating to the effect of different data models on casual users'
performance and confidence in writing database queries. The data models used
to present the external view of the database in this study are the relational,
hierarchical and network models. The experiment involves a written test
designed to permit the evaluation of the subject's ability to comprehend and
retrieve information from a database by writing English-like queries. The
subject's performance in writing the queries is based on errors made in (1)
writing the specification portion, (2) writing the condition portion, (3)
writing the navigation portion, and (4) the use of the language. Overall
performance is represented by the sum of the four component scores. The
subject's confidence is a self-reported value on a scale of one to five. The
group using the relational model performed significantly better than the group
using the hierarchical model when writing the specification and navigation
portions of the queries. The absence of a significant difference in overall
performance among the data model groups supports the technique used to evaluate
specific aspects of the queries independently. No statistically significant
differences in confidence level were detected among the data model groups.
This is an indication that the group using the hierarchical model may have been
overly confident, in view of their poorer performance.

Most personal identity mechanisms in use today are artificial. They require
specific actions on the part of the user, many of which are not "friendly".
Ideally, a typist should be able to approach a computer terminal, begin typing,
and be identified from keystroke characteristics. Individuals exhibit
characteristic cognitive properties when interacting with the computer through
a keyboard. By examining the properties of keying patterns, statistics can be
compiled that uniquely describe the user. Initially, a reference profile is
built to serve as a basis of comparison for future typing samples. The profile
consists of the average time interval between keystrokes (mean keystroke
latency) as well as a collection of the average times required to strike any
two successive keys on the keyboard. Typing samples are scored against the
reference profile and a score is calculated assessing the confidence that the
same individual typed both the sample and the reference profile. This
mechanism has the capability of providing identity surveillance throughout the
entire time at the keyboard.
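
A hedged sketch of the mechanism (the profile format, sample data and scoring
rule are illustrative, not the paper's): a reference profile of mean digraph
latencies is built from a typing sample, and later samples are scored by their
deviation from it:

    from statistics import mean

    def build_profile(keystrokes):
        """keystrokes: list of (key, time_ms).  Returns mean latency per key pair."""
        digraphs = {}
        for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
            digraphs.setdefault((k1, k2), []).append(t2 - t1)
        return {pair: mean(times) for pair, times in digraphs.items()}

    def score_sample(profile, sample):
        """Mean absolute deviation (ms) of the sample's digraph latencies from the profile."""
        deviations = [abs((t2 - t1) - profile[(k1, k2)])
                      for (k1, t1), (k2, t2) in zip(sample, sample[1:])
                      if (k1, k2) in profile]
        return mean(deviations) if deviations else float("inf")

    reference = build_profile([("t", 0), ("h", 120), ("e", 250), ("t", 400), ("h", 510)])
    print(score_sample(reference, [("t", 0), ("h", 130), ("e", 270)]))   # -> 12.5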

A model is presented for the architecture of the neural networks which
encode visual information for storage and which reconstruct iconic
representations from storage representations. (Iconic representations are
geometrically similar to projections of the objects they represent.) Each
storage representation consists of a sequence of patterns derived while the
eyes fixate at different positions in the visual field. Each pattern in the
sequence has three components: (1) a control component which describes both
where the eyes fixated and the size of the attended scene fragment; (2) a
surface quality component which describes visual surface characteristics of the
object; and (3) a spatial component which describes the spatial extent, spatial
position (depth), surface orientation and visual flow (movement) of the surface
having the specified surface characteristics. Prior to storage, all spatial
components are transformed using a complex logarithmic mapping. As a
consequence, stored spatial patterns are not iconic representations of the
scene fragments they represent. Also, storage representations can be
recognized and reconstructed at any desired size and orientation: they are size
and orientation invariant. During reconstruction, each pattern in the storage
representation is transformed back into an iconic representation using a
complex exponential mapping. One consequence of the combined complex
logarithmic and exponential mappings and the limited size of the storage
representations is that the fidelity of the recalled information degrades
exponentially from its centre.
A neural network, called spatial memory, not only holds the partially
reconstructed representation during recall, but also shifts it to remain in
registration with the fragment currently being recalled and combined. The
control system uses the control component of each stored pattern and knowledge
of the size and orientation of the reconstruction to determine how to shift the
partially reconstructed representation in spatial memory. Due to the
decreasing fidelity from the centre to the perimeter of each reconstructed
scene fragment, spatial memory only preserves information from overlapping
fragments having the highest fidelity. It does so by maintaining and using
fidelity information for each position in the reconstructed representation.
Spatial memory can maintain a current stable representation of the visual
world. It can also magnify, reduce, shift and rotate representations. The
representations are therefore independent of their position in spatial memory.
It is suggested that the representations held and processed by spatial memory
correspond to the representations we call mental images and for this reason
they are called mental images in the model.
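
The complex logarithmic mapping mentioned above has a standard log-polar form
(the coordinate conventions here are illustrative, but the invariance property
is the usual one):

    % A retinal point z = r e^{i\theta} is stored as
    \[
      w \;=\; \log z \;=\; \log r + i\theta .
    \]
    % Scaling the scene by s and rotating it by \phi sends z to s e^{i\phi} z,
    % hence w to w + \log s + i\phi: a pure translation of the stored pattern,
    % which is why storage representations are size and orientation invariant.
    % The inverse complex exponential mapping, z = e^{w}, reconstructs the icon.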

A spatial image is a representation of a scene which encodes the spatial
location, distance away, surface orientation and movement of each visible
surface in the scene. Mental images are spatial images which are held and
transformed by a neural network called spatial memory. Spatial memory is a
large two-dimensional array of processors called spatial locations which
operate in parallel under the control of a single supervising processor.
Though held in spatial memory, mental images are independent of it, and can be
transformed, shifted and rotated by transforming and moving image parts among
the spatial locations. The architecture and control structure of spatial
memory are presented as are details of its operation in translating, scaling
and rotating mental images of three-dimensional objects. Computer simulations
of spatial memory are summarized, and spatial memory is compared with other
models of mental imagery.

IJMMS 1985 Volume 23 Issue 4

For the last few years we have observed a growing interest among researchers
in how to make computers behave "intelligently". Computer science has developed
substantially in this respect, especially in the area of so-called expert
systems, which has also gained relatively wide acceptance and applicability.
This paper describes an experimental version of a conversational natural
language information retrieval system which is
currently under investigation at the Institute for Informatics of Warsaw
University. This system deals with gastroenterology, a branch of internal
medicine. The system's purpose is to provide physicians and hospital personnel
with information which may be consulted during the diagnostic process. The
system first acquires a base of knowledge in the field, which is presented to
it in the form of a comprehensive natural language text. From this point on,
knowledge can be retrieved and/or updated in a conversational manner. The system
has a modular structure and its most important parts are the natural language
processor and the reasoning module based on procedural deduction. The
deduction process is realized through the mechanisms known as fuzzy logic
incorporated in the FUZZY programming language. The system has been designed
in close co-operation with specialists in medical science, and implemented on
an IBM 370/148 at Warsaw University.

A Research Model for Studying the Gender/Power Aspects of Human-Computer
Communication

A new research model was developed for examining the gender and power
conceptualizations affecting human-computer communications. University
students worked on an Apple II computer on which the linguistic output was
stereotyped male or female. Potency attributions of the computer were rated on
a semantic differential scale. A test of the research model indicated
significant differences in potency ratings. There was an interaction between
gender-stereotyped linguistic output, user's sex, and user's computer
experience (F(1,19) = 5.10, P < .0343). The human-computer communication
research model was demonstrated to be useful. It can be used for examining
human-computer communication from both theoretical and applied perspectives.

Automation is the ability to perform a very well-practised task rapidly,
smoothly and correctly, with little allocation of attention. This paper
reports on experiments which sought evidence of automation in two programming
subtasks, recognition of syntactic errors and understanding of the structure
and function of simple stereotyped code segments. Novice and expert
programmers made a series of timed decisions about short, textbook-type program
segments. It was found that, in spite of the simplicity of the materials,
experts were significantly faster and more accurate than novices. This
supports the idea that experts automate some simple subcomponents of the
programming task. This automation has potential implications for the teaching
of programming, the evaluation of programmers, and programming language design.

GENIE: A Modifiable Computer-Based Task for Experiments in Human-Computer
Interaction

The results of many human-computer interaction studies are often not as
widely applicable as desired because the task environment in which they are run does
not possess characteristics common to other interfaces. This paper describes a
generalized task environment that contains elements appearing in several
systems having human-computer interfaces. The environment is implemented
through a software system called GENIE (Generic ENvironment for Interactive
Experiments), and is based on controlling the motion of vehicles through
three-dimensional space. Aside from providing a task with common
characteristics, GENIE's implementation was designed to allow for adaptation to
a variety of studies.
This paper introduces and motivates the development of the GENIE software
system. The software components are described at a functional level to provide
the background for a discussion of how various instantiations of GENIE's
human-computer interface can be created. To exemplify the generic nature of
GENIE, specific changes to the user's interface are described. We show how
GENIE's software must be modified to implement each of the changes and
demonstrate how the use of a compiler-compiler eases the burden of doing so.
The paper concludes with a discussion of GENIE as constructed for a
voice-output experiment.

This paper presents an interactive fuzzy decision-making method by assuming
that the decision-maker (DM) has fuzzy goals for each of the objective
functions in multi-objective non-linear programming problems. Through the
interaction with the DM, the fuzzy goals of the DM are quantified by eliciting
the corresponding membership functions. After determining the membership
functions for each of the objective functions, in order to generate a candidate
for the satisficing solution which is also Pareto optimal, the DM is asked to
specify his reference intervals for each of the membership functions, called
reference membership intervals. For the DM's reference membership intervals,
the corresponding augmented weighted minimax problem is solved and the DM is
supplied with the Pareto optimal solution which is in a sense close to his
requirement together with the trade-off rates between the membership functions.
Then by considering the current values of the membership functions as well as
the trade-off rates, the DM responds by updating his reference membership
intervals. In this way the satisficing solution for the DM can be derived
efficiently from among a Pareto optimal solution set by updating his reference
membership intervals. On the basis of the proposed method, a time-sharing
computer program is written and an illustrative numerical example is
demonstrated along with the corresponding computer outputs.
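
A common form of the augmented weighted minimax problem used in interactive
fuzzy methods of this kind is shown below for orientation (illustrative; the
paper works with reference membership intervals rather than the point values
written here):

    % For reference membership levels \bar{\mu}_i and elicited membership
    % functions \mu_i of the objectives f_i, solve
    \[
      \min_{x \in X}\;
      \max_{1 \le i \le k} \bigl\{ \bar{\mu}_i - \mu_i(f_i(x)) \bigr\}
      \;+\; \rho \sum_{i=1}^{k} \bigl( \bar{\mu}_i - \mu_i(f_i(x)) \bigr),
    \]
    % where \rho > 0 is a small augmentation parameter that keeps the
    % resulting solution Pareto optimal.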

Pictures provide an effective means of communication both between humans and
within human-computer systems. This paper describes how pictorial interfaces
might facilitate interaction with a microcomputer data base system.

IJMMS 1985 Volume 23 Issue 5

Two interactive models for crew station design are discussed. WORG is
developed for arranging workstations within a work-space. WOLAG aims at
generating the layout of the instrument panel at each station for sit-stand
duty. Both models collect evaluative measures for the designs generated. The
input data, internal structure, and output files of both WORG and WOLAG are
discussed, together with actual sample outputs.

This paper reports the results of an exploratory study that investigated
expert and novice debugging processes with the aim of contributing to a general
theory of programming expertise. The method used was verbal protocol analysis.
Data was collected from 16 programmers employed by the same organization.
First, an expert-novice classification of subjects was derived from information
based on subjects' problem solving processes: the criterion of expertise was
the subjects' ability to chunk effectively the program they were required to
debug. Then, significant differences in subjects' approaches to debugging were
used to characterize programmers' debugging strategies. Comparisons of these
strategies with the expert-novice classification showed programmer expertise
based on chunking ability to be strongly related to debugging strategy. The
following strategic propositions were identified for further testing.
1. (a) Experts use breadth-first approaches to debugging and, at the same time,
adopt a system view of the problem area; (b) experts are proficient at chunking
programs and hence display smooth-flowing approaches to debugging.
2. (a) Novices use breadth-first approaches to debugging but are deficient in
their ability to think in system terms; (b) novices use depth-first approaches
to debugging; (c) novices are less proficient at chunking programs and hence
display erratic approaches to debugging.

A Knowledge Acquisition Program for Expert Systems Based on Personal
Construct Psychology

Retrieving problem-solving information from a human expert is a major
problem when building an expert system. Methods from George Kelly's personal
construct psychology have been incorporated into a computer program, the
Expertise Transfer System, which interviews experts, and helps them construct,
analyse, test and refine knowledge bases. Conflicts in the problem-solving
methods of the expert may be enumerated and explored, and knowledge bases from
several experts may be combined into one consultation system. Fast (one to two
hour) expert system prototyping is possible with the use of the system, and
knowledge bases may be constructed for various expert system tools.
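
A hedged sketch of the triadic elicitation step underlying repertory-grid
interviewing in personal construct psychology (the dialogue, element names and
rating scale are hypothetical, not the Expertise Transfer System itself):

    from itertools import combinations

    def elicit_grid(elements, ask):
        """For each triad of elements, `ask` returns (construct, contrast, ratings):
        a bipolar construct plus a 1-5 rating of every element on that construct."""
        grid = []
        for triad in combinations(elements, 3):
            construct, contrast, ratings = ask(triad)
            grid.append({"construct": construct, "contrast": contrast, "ratings": ratings})
        return grid

    # A canned 'expert' stands in for the interactive interview in this demo.
    def canned_expert(triad):
        ratings = {e: (1 if e == triad[0] else 5) for e in triad}
        return ("intermittent", "constant", ratings)

    for row in elicit_grid(["fault A", "fault B", "fault C"], canned_expert):
        print(row["construct"], "vs", row["contrast"], row["ratings"])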

Knowledge bases for natural language processing systems which can support
the interpretation of figurative language input must reflect certain pragmatic
considerations in their representations. Among these are salience,
prototypicality and epitomization. Confusion of these three terms in the
literature impedes a clear understanding of their effect and the requirements
for their possible implementation. An exploration of their differences and
interrelationships is presented with an eye toward solving the "containment
problem" for figurative language. The notions of a "holistic approach" and
"disjoint clustering" are introduced. Implications for machine translation are
briefly discussed.

The problem considered here concerns the degree to which a set of elements,
called the diagnosis set, explains another set, called the evidence set. A
measure is suggested to
calculate the degree of explanatory power the diagnosis set has for the
evidence set. We also concern ourselves with the problem of minimizing the
diagnosis set.
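
The abstract does not give the measure itself; purely as an illustration of the
kind of quantity involved, one simple candidate is the proportion of the
evidence set covered by the diagnosis set:

    % Illustrative only, not the authors' measure.
    \[
      \mathrm{EP}(D, E) \;=\;
      \frac{\lvert \{\, e \in E : D \ \text{explains}\ e \,\} \rvert}{\lvert E \rvert},
    \]
    % with the minimization problem then seeking the smallest D that attains a
    % required level of EP.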

This article describes knowledge-based systems for genetics called
GENETICS-I, II, and III, and their possible extensions. GENETICS-I works on a
simple genetic model where a phenotype is determined by a gene-pair, each gene
having a value of 0 or 1 (diallelism). In GENETICS-II, which is a
generalization of GENETICS-I, each gene can have a value of 0, 1,..., or gmax
(multi-allelism; e.g. the ABO blood group). GENETICS-III is a further
generalization of GENETICS-II in which a phenotype is determined by more than
one pair of genes (multi-genes; e.g. major histocompatibility complex).
In each system, a knowledge base is established as a collection of
production rules which are repeatedly applied on the database representing
phenotypes and genotypes for a family tree to deduce new information. During
the course of generalization, substantial changes in the database and
knowledge-base structures have been made to deal with new types of problems as
well as to increase the efficiency of computer time and memory space
utilization. Possible extensions of these systems to include some common
characteristics in expert systems are also discussed; included are a heuristic
search of rules, user-system interactions, and reasoning under uncertain
information.
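
As a hedged sketch of one production rule under the simple diallelic model
(data layout and rule form are invented for illustration, not the GENETICS
systems' actual representations), a child's candidate genotypes can be pruned
to those composable from one gene of each parent:

    from itertools import product

    def child_rule(db, child, mother, father):
        """Restrict the child's candidate genotypes to those obtainable by taking
        one gene from each parent; return True if the database changed."""
        possible = {
            tuple(sorted((m_gene, f_gene)))
            for m in db[mother] for f in db[father]
            for m_gene, f_gene in product(m, f)
        }
        narrowed = db[child] & possible
        changed = narrowed != db[child]
        db[child] = narrowed
        return changed

    # Candidate genotypes (unordered 0/1 gene pairs) for a small family tree.
    db = {"mother": {(0, 0)}, "father": {(0, 1)}, "child": {(0, 0), (0, 1), (1, 1)}}
    while child_rule(db, "child", "mother", "father"):
        pass
    print(db["child"])   # (1, 1) is eliminated: impossible given the mother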

Originally developed by Mulhall (1977), the "Personal Relations Index" (PRI)
used the computer to generate a personalized questionnaire which could be used
in mapping an interpersonal relationship. Viewed within the context of the
current trend towards automated psychological tests, the PRI stands out as
being one which attempts to take advantage of computer capabilities beyond mere
automation.
The current study sought to overcome the primary disadvantage of the PRI by
developing an on-line or interactive version capable of running on the popular
Apple II microcomputer. This would eliminate the use of antiquated computer
cards, written questionnaires and scoring templates by allowing the user to
read the questions on a video screen and to respond to them by pressing
particular keys on the keyboard. It would also eliminate the wait for the
personalized questionnaire to be produced and the delay before results were
available. An additional advantage was that users could be advised by the
computer of areas in which results had not reached an acceptable level of
significance and they could be given an opportunity of doing these sections a
second time.
Written in the computer language BASIC, the resulting version of the PRI was
developed in close consultation with subjects who had tried out various
versions of the program. The interactive testing process appealed to users and
test results were found to be internally consistent, as well as to demonstrate
promising signs of validity on pilot trials.

IJMMS 1985 Volume 23 Issue 6

A General-Purpose Man-Machine Environment with Special Reference to Air
Traffic Control

First, we describe briefly some issues concerning decision-making and
planning. We then discuss decision support systems which increase the
effectiveness and efficiency of human judgmental processes by extending their
range, capabilities and speed. A novel man-machine environment is proposed
that is a useful tool in training human decision-makers and planners, and can
also serve as the basis for routine operations. Finally, we describe the first
area of application of this environment in simulated air traffic control. Five
large-scale projects, which are also discussed, are integrated in the
environment.

The allocation of tasks between human and computer and the merits of a
dynamic approach to this allocation are discussed. Dynamic task allocation
requires efficient human-computer communication. This communication may be
accomplished in an implicit, model-based, or explicit, dialogue-based, manner.
A framework for the study of dialogue-based human-computer communication is
introduced and a study exemplifying the use of the framework is presented.
This study investigated the effects of two input media and four task allocation
strategies on the performance of a human-computer system. The task environment
represented a simplified version of an air traffic control scenario wherein
computer aid could be evoked by the human controller to accomplish task sharing
between the human and the computer. Dedicated function keys proved to be a
more effective input medium than the standard Sholes QWERTY keyboard in terms
of both objective performance and subjective preference measures. Of the task
allocation strategies considered, spatial assignment, contingency-based
assignment, and assignment by designation achieved the highest levels of
overall system performance, while temporal assignment achieved a significantly
lower level of performance. Subjective ratings indicated an overall preference
for assignment by designation, followed by spatial assignment and
contingency-based assignment. Spatial assignment was the most powerful, but
the least specific strategy. Assignment by designation was the least powerful
strategy, but the most specific and most flexible strategy.

The advent of the low-cost software-controlled raster printer has
transformed typography, which is an interesting amalgam of human factors and
technology, from an esoteric and specialist discipline into a widely available
medium of expression. This tutorial paper introduces the software and system
organization aspects of computer typography. Taking a practical approach, it
explains the world of fonts and typographic measurements; how fonts are
represented digitally; the technical issues of line breaking, hyphenation, and
justification; the problems of page make-up and the inclusion of tabular and
graphical information in documents.
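
As a hedged illustration of the simplest approach to one of the issues listed
above, a first-fit line breaker packs words greedily up to the measure;
production typesetting systems use more elaborate optimizing algorithms and
hyphenation, which this sketch does not attempt:

    def break_lines(words, measure, width_of):
        """Greedily pack words onto lines no wider than `measure` units."""
        lines, current, current_width = [], [], 0
        space = width_of(" ")
        for word in words:
            w = width_of(word)
            needed = w if not current else current_width + space + w
            if current and needed > measure:
                lines.append(" ".join(current))
                current, current_width = [word], w
            else:
                current, current_width = current + [word], needed
        if current:
            lines.append(" ".join(current))
        return lines

    # Fixed-pitch demonstration: every character one unit wide, 18-unit measure.
    text = "the advent of the low cost raster printer has transformed typography".split()
    for line in break_lines(text, 18, len):
        print(line)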

Sixty students performed simple menu selection with one of ten menus, each
with 64 items arranged in four columns of 16 on a single frame. Target words
consisted of eight items from each of eight categories. In eight categorized
menus, words belonging to the same category were presented together in the
display. Three factors were varied in the categorized menus: alphabetical vs
categorial ordering of words within categories; spacing vs no additional
spacing between category groups; and category organization arranged by column
or by row. In the final two menus the entire array was arranged in
alphabetical order, top-to-bottom by column in one, and left-to-right by row in
the other. Both spacing and columnar organization reduced search time.
Menus with spacings between category groups were searched approximately 1 s
faster than menus without additional spacing and menus with categories
organized by column were searched about 1 s faster than menus organized by row.
Furthermore, the effects of spacing and organization were additive. Given
categorized menus, no difference in search time was observed for categorial vs
alphabetical ordering within categories. Menus in which the entire array was
arranged in alphabetical order were searched with rates similar to those for
categorized menus with spacings and faster than categorized menus without
spacings; these effects were observed with both forms of organization, row and
column. Explanations were offered for the results and their implications for
menu design were discussed.

This report reviews work on defining and measuring conceptual structures of
expert and novice fighter pilots. Individuals with widely varying expertise
were tested. Cognitive structures were derived using multidimensional scaling
(MDS) and link-weighted networks (Pathfinder). Experience differences among
pilots were reflected in the conceptual structures. Detailed analyses of
individual differences point to factors that distinguish experts and novices.
Analysis of individual concepts identified areas of agreement and disagreement
in the knowledge structures of experts and novices. Applications in selection,
training and knowledge engineering are discussed.

A computer coach unobtrusively monitors interaction with a system and offers
individualized advice on its use. Such active on-line assistance complements
conventional documentation and its importance grows as the complexity of
interactive systems increases. Instead of studying manuals, users learn
highly-reactive systems through experiment, imported metaphors and natural
intelligence. However, in so doing they inevitably fail to discover features
which would help them in their work.
This paper describes Anchises, a coach which aims to detect inefficient use
and ignorance of important facilities of an interactive program in a
domain-independent way. Its current knowledge base is the EMACS text editor,
and Anchises provides highly-selective access to pertinent parts of the on-line
documentation with little overhead for the user. In the design of Anchises,
close attention has been paid to the user modelling component which determines
the needs of an individual without entering into any explicit dialogue with
him; in general this is the least well-understood aspect of computer coaches.
An informal experiment was conducted to determine the effectiveness of the user
modelling techniques employed.