Chapter 1: The
Puzzle: Historical overview of the problem of conscious experience and the various solutions that have been proposed, the
latest of which is the subject of this book, the Theory of Narrative Thought.

Chapter 2: Narratives: The nature and properties of narrative thought and how it gives unity and direction to experience
by uniting the past and present to forecast the future.

Chapter 3: Forecasts: The nature
and properties of forecasts and appraisal of the desirability of the future they predict.

Chapter 4: Memory: The role of memory and
cognitive rules in the construction of both narratives and forecasts.

Chapter 5: Values: The role of primary and secondary values in determining the desirability of forecasted futures.

Chapter 6: Plans: The nature
of action sequences designed to intervene in the course of unfolding events in an effort to ensure that the future, when it
arrives, is more desirable than that which has been forecasted.

Chapter 7: Decisions: The mechanism
for determining the desirability of the forecasted future, what to do if it is undesirable, and whether efforts to change
it are succeeding.

Chapter
8: Paradigms: The nature of explanatory and procedural narratives that are designed to provide information to, and overcome
the limits of, everyday narrative thought.

Chapter 9: A Decision Paradigm: The logic of the procedural narrative that helps us decide about the desirability
of forecasted futures and the adequacy of plans to remedy things when they are undesirable.

Chapter 10: Expanding the Paradigm: The logic
and procedures for a decision paradigm for complex, life changing decisions.

Chapter 11: The Paradigm for Organizations: An example of how the expanded decision paradigm is used for organizational
decisions.

Chapter
12: Antecedents of the Theory: Discussion of other theories that have informed and shaped the theory of narrative thought.

Chapter 13: Research: A discussion
of the research needed for the theory and suggestions about how it should be done.

Summary: Brief overview of the theory.

Sources and Further Reading: References for
both cited work and other, related, publications.

Beginning with work on unaided human judgment and decision making and continuing in other areas, primarily behavioral economics,
researchers have demonstrated an impressive array of “cognitive errors.” These are discrepancies between the behavior
of the participants in the experiments and the behavior implied or prescribed by various formal paradigms for solving specific
classes of problems or making specific classes of inferences—probability theory, rational choice theory, formal logic,
various aspects of economic theory, and the like. When this
research was first undertaken, the agenda was to use these discrepancies to generate a general descriptive theory of judgment
and decision making. As it turned out, Prospect Theory (Kahneman & Tversky, 1979) was the only thorough-going attempt
to follow through on this agenda. Instead, a disparate set of concepts has come to be used to both label and “explain”
a multitude of cognitive errors that have been observed in a multiplicity of tasks. This has resulted in a very large literature;
at last count, Wikipedia listed 93 cognitive errors. But, the original goal of a general descriptive theory seems to have
been abandoned.

This is not to say that the work on cognitive errors lacks a theoretical underpinning.
Indeed, the use of formal paradigms as criteria for correctness implicitly assumes that they are prototypes for correct thinking.
This assumption has roots in the psychological theories of Egon Brunswik (1947), Jean Piaget (1952) and others whose views
were influential at the time that the cognitive error research was getting underway. These theorists viewed people as “intuitive
scientists” who learn about the physical world as a result of having to cope with its demands and constraints. From
this it followed that because the physical world is described by the physical sciences, discrepancies between performance
and the prescriptions of scientific paradigms can be used to evaluate how learning progresses; hence the focus on errors.
That this viewpoint shaped the early work in judgment and decision making is clear in Peterson and Beach’s (1967) early article, “Man as an Intuitive Statistician” (the title of which, in fact, quoted Brunswik). The article used the table of contents of a typical statistical textbook to organize a review of the existing research on unaided human judgments about probabilistic events, explicitly citing statistical theory as the prototype for thinking about such events. Although
the article’s conclusion contained all the usual nuances and hedges, many critics interpreted it as an overgenerous
endorsement of statistical theory as a descriptive theory of people’s judgments. Their skepticism prompted a torrent
of research. But, for all its success at refuting the descriptive adequacy of statistical theory, this research produced little
more than a list of loosely related errors, with nothing to take statistical theory’s unifying role.

Sometime near the zenith of cognitive error research, an old idea (see Hacking, 1975, for the history)
was given new life by Kahneman, Slovic, and Tversky (1982; Tversky and Kahneman, 1983). They suggested that cognitive errors
reflected a conflict between two different modes of thinking, modes that became known as aleatory and epistemic.
Aleatory thinking is the logic of gambling and probability theory (an aleator is a dice player). A major feature
of aleatory logic is that all events in a particular set are mutually intersubstitutable so that statements about the characteristics
of any event are based on its class membership rather than on its unique properties. In contrast, epistemic thinking involves
the unique properties of events as well as information about the conceptual systems in which they and their properties are
embedded. Barnes (1984) investigated this aleatory/epistemic distinction and obtained results suggesting that both modes of
thinking generate judgments and predictions, but that they may do so in different ways that frequently yield different results.
She concluded that when an experimenter adopts aleatory logic as the standard of correctness but the participants in the experiment think epistemically, one should expect differences, and that it may not be sufficient to merely call those differences cognitive errors.

Attributing cognitive errors to the difference between aleatory and epistemic thinking
was provocative but ultimately not very productive. Although aleatory thinking was clearly defined by probability theory,
epistemic thinking tended to be defined as anything that was not clearly aleatory. Moreover, it seemed rather extreme to condemn
cognition in general on the basis of errors in judgments that were largely about probabilities. In an attempt to provide a
more useful, yet broad, characterization of epistemic thought, Beach and Mitchell (1990; Beach, 1990) proposed a new theory,
called Image Theory. The theory was successful in that it generated a good deal of research but its central concept, images,
turned out to be opaque. An effort to replace images with something that retains their
essence but is more easily understood ended up requiring revision of other elements of the theory. The revision resulted
in a view of epistemic thought (Beach, 2010) that adopts and significantly extends Walter Fisher’s (1989) ideas about
the role of narratives in communications, rhetoric, and criticism. In the revision, called the Theory of Narrative Thought
(Beach, 2010), images are replaced by narratives and the other elements are revised or replaced by concepts borrowed from
other theorists who have sought to cast judgment and decision making in other than aleatory terms, especially Gary Klein’s
(1989) Recognition Theory of decision making. In addition, by elaborating upon Bruner’s (1986) differentiation between
paradigms and narratives, the Theory of Narrative Thought encompasses both aleatory and epistemic thinking within a single
over-arching framework.

The Theory of Narrative Thought

The Theory of
Narrative Thought begins with the assumption that everyday thought is in the form of narratives, which are causally
motivated, time-oriented chronicles, or stories, that connect the past and present with the future, thereby giving continuity
and meaning to ongoing experience. Narratives are not simply the voice in your head, nor are they simply words, like a novel
or a newspaper article. They are a rich mixture of memories and of current visual, auditory, and other aspects of awareness, all laced together by emotions to form a whole whose ability to capture context and meaning far surpasses that of mere words.

The elements
of narratives are symbols that stand for real or imagined events and actors, where the latter are animate beings or inanimate
forces. The glue that binds the elements is causality and implied purpose. The narrative is a temporal arrangement
of events that are purposefully caused by animate beings or are the result of inanimate forces. The narrative’s story
line is the emergent meaning created by arranging the elements according to time, purpose, and causality. Just as arranging
words into sentences creates emergent meaning that unarranged words do not have, and just as arranging sentences into larger
units creates even more emergent meaning, arranging events, actors, time, purpose, and causality into a narrative creates
the emergent meaning that is its story line or plot.

A “good narrative” is coherent and plausible: coherent when effects can be accounted for by causes, and plausible when the actions of its actors are consistent with their own or similar actors’ actions across contexts (i.e., across different narratives). We tend to believe that good narratives are valid.

We each have many
narratives in play at any time, one for each area of our lives, and we switch back and forth as required by the context. The
narrative that is the focus of attention at the moment is called the current narrative, the story that is being constructed to make sense of what just happened, what is happening right now, and what will happen next. That is, it is partly memory, partly current awareness, and partly expectations for the future. As each second passes, as the present becomes the past, that part of your current narrative that was the present a moment ago becomes the past and is stored in
episodic memory. Consider an analogy: The “crawl” is the writing that
appears at the bottom of the picture when you watch the evening news. It appears on one side of the screen, moves across,
and disappears on the other side. Think of the past as the information that has disappeared, the information on the screen
as current experience, and the information that has yet to appear as the future that will unfold in due course. As you read,
you store the information that is disappearing, you read what is currently visible, and you anticipate what has not yet appeared.
The latter is important because you really do not know what will appear, but based on what you have seen and what you are
seeing, you can make a fairly good guess about the future.

This “good
guess” about the future is called the extrapolated forecast, because it is an extrapolation of the past through
the present and into the future. The extrapolated forecast is what you expect to happen if you (or someone else, or something
else) do not intervene to change the course of events. This extrapolated forecast seldom is very detailed, but its overall
desirability is evaluated by weighing its prominent features against the corresponding features of your desired future.
The desired future is dictated by your enduring values and your more transient preferences (see Beach, 2010 for details).
If the forecasted future is not too deviant from your desired future, you can simply continue doing what you are doing and
let the future unfold as it will. If it is too deviant from your desired future, you must intervene to guide the course of
unfolding events toward a more desirable future. Decision making occurs when the forecasted future is compared to the desired
future and either accepted or rejected. This part of the theory is called narrative-based decision making (N-BDM) and constitutes
a significant part of the theory.

Intervention requires you to have some notion of what you are going to do. This is accomplished by devising a plan, however
rough, and forecasting the results of its implementation. The forecast is called the action forecast because it is
what you think will happen if you do what you propose to do. As with the extrapolated forecast, the action forecast is compared
to your desired future. If its expected results are not too deviant from the results you want, it is implemented—with
continual monitoring to see that it is working to produce the future you desire. If it is not working properly, the plan is
repaired or it is rejected and another is formulated. An action forecast for the repaired or new plan is then compared to
the desired future, and so on until an acceptable plan is obtained, whereupon its implementation begins.

The theory is not as simplistic as this description makes it sound, but this is the essential idea. The fuller version
(Beach, 2010) closely examines the nature of narratives and forecasts, explores the role of memory and values in the process,
and outlines the structure and use of plans—from simple habits to elaborate schemes for achieving desirable ends.
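The cycle just described (extrapolate a forecast, compare it to the desired future, and then devise, test, and repair plans) can be sketched as a small loop. The sketch below is illustrative only: the theory does not specify a deviation metric or an acceptance tolerance, so `deviation`, the 0.25 cutoff, and all function names are stand-in assumptions.

```python
# Illustrative sketch of narrative-based decision making (N-BDM). The
# deviation metric, the tolerance, and all names are assumptions for
# illustration; the theory itself does not formalize them.

def deviation(forecast, desired):
    """Stand-in mismatch score: fraction of desired features the
    forecast fails to match."""
    misses = sum(1 for f in desired if forecast.get(f) != desired[f])
    return misses / len(desired)

def n_bdm(current_narrative, desired, extrapolate, devise_plan,
          act_forecast, tolerance=0.25, max_attempts=10):
    # 1. Extrapolate the current narrative into a forecast.
    forecast = extrapolate(current_narrative)
    # 2. If the extrapolated future is close enough to the desired
    #    future, let events unfold without intervention.
    if deviation(forecast, desired) <= tolerance:
        return None  # no plan needed
    # 3. Otherwise devise a plan, forecast its results (the action
    #    forecast), and repair or replace it until one is acceptable.
    plan = devise_plan(current_narrative, desired)
    for _ in range(max_attempts):
        if deviation(act_forecast(plan), desired) <= tolerance:
            return plan  # implement, with continual monitoring
        plan = devise_plan(current_narrative, desired)  # repair/replace
    return None  # no acceptable plan found
```

The loop returns `None` both when no intervention is needed and when no acceptable plan is found; a fuller treatment would distinguish these cases and model the continual monitoring that follows implementation.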

Paradigms

Narrative thinking, and the actions it prompts, is generally sufficient for everyday life. But narratives, which are
great for the “big picture,” do not do well when precision, detail, or complexity is required. And, just as we
humans have invented tools to extend and improve our physical abilities (levers, pulleys, pencils, hammers, telescopes, computers
and other things that help us do tasks that we otherwise could not do easily), so too have we invented tools, called paradigms,
that have the rigor, precision, and ability to deal with complexity that narratives do not have. The function of paradigms
is to acquire information that we need to improve the plausibility and coherence of our narratives.

Actually, Narrative Thought theory views paradigms as a special case of narratives in general. As a result, it is convenient
to differentiate between the story-like narratives discussed above, called chronicular narratives, and tool-like narratives, called paradigmatic narratives. Moreover, because paradigmatic narratives have two functions, we differentiate between explanatory paradigms and procedural paradigms.

Explanatory paradigms
tell us how events (happenings, persons, objects, or concepts) relate to each other and, therefore, what to expect of them.
For narrative thought, linking an event to other events within a conceptual framework, the paradigm, explains the event. Examples
of explanatory paradigms are taxonomies for classifying plants, animals, minerals, and societies as well as conceptual frameworks
such as scientific theories, political ideologies, religions, and systems of rules such as bodies of law or codes of professional
conduct. Each paradigm allows for both categorization of the event in question and access to information about the nature
of events in the category, and by inference about the specific event in question.

Procedural
paradigms are sets of steps for manipulating both cognitive and physical events in order to achieve desired ends. Examples
are recipes for cooking salmon or mixing a cocktail, instructions for assembling a set of bookshelves or operating a drill
press, manipulative algorithms such as in arithmetic, algebra, geometry and other forms of mathematics. The result of applying
a procedural paradigm, either success or failure, provides information for refining the chronicular narrative that prompted
the paradigm’s use in the first place.

The Structure of Chronicular and Paradigmatic Narratives

Chronicular
narratives are particularistic and are structured around time. The current narrative, the extrapolated forecast, and the action
forecast are all chronicular narratives and all consist of events arrayed along a time line. Purpose and causality give meaning
to the specific events and their ordering, but the underlying structure is the time line.

Explanatory paradigmatic narratives are general and structured by subordination, by how categories of elements relate to one another in a hierarchical or quasi-hierarchical manner. Textbooks, for example, are explanatory paradigms and their subordinative structure is revealed by their hierarchy of topic headings—where the topics are categories. Meaning is provided by a concept’s location in this hierarchical structure and its links to other concepts in the hierarchy.

Procedural paradigmatic narratives also are general, but they are structured by conditional sequentiality.
Instructions, for example, are procedural paradigms consisting of sequences of steps; execution of each step is conditional
upon the results of the step(s) that preceded it. Their generality comes from the applicability of the instructions to any
task in the category for which this paradigm was developed.

Origins of Paradigms

Paradigmatic
narratives derive from individuals’ efforts to construct plausible, coherent chronicular narratives. To the degree that
an ad hoc paradigm achieves this, it is deemed to be valuable and is stored away in the person’s memory for possible
future use; this is called a private paradigm. Success often leads to the paradigm being recommended to others, whereupon it becomes a public paradigm. Public explanatory paradigms are given labels like world history, the periodic table, the theory of the firm, astronomy, political science, and so on. Public procedural paradigms are given labels like probability, geometry, long division, how to start a car, how to iron a shirt, and the like.

Once they become public, paradigmatic narratives are available for others to revise and develop.
Particularly in the hands of scholars, this often leads to explanatory and procedural paradigms that have a subtlety and sophistication
that far outpaces the understanding or day-to-day needs of the majority of people. Probability is a good example. Starting
with an everyday chronicular need to express more precisely one’s uncertainty about events (“It probably will
rain,” “He probably is a thief”), probability theory has become a self-contained mathematical theory in
which the concept of probability has become so esoteric that it is virtually unrecognizable as the subjective uncertainty
that started it all.

This lack of resemblance between elaborated
public paradigms and their less sophisticated private forebears means that they are fairly far removed from the everyday
thought processes that originally gave rise to them. This is the point, of course; paradigms are tools for obtaining needed
information through use of precise, objective, structured systems that are beyond the scope of everyday chronicular narrative
thought. It is not surprising that people’s everyday thinking fails to conform to the dictates of public paradigms.
Paradigms only exist because we cannot normally think that way. If we could, there would have been no need to develop the
paradigms in the first place.

Indeed, the wonder is not that we do not think paradigmatically. It is that, collectively, we have recognized the limitations of our chronicular narrative thought and, over the years, have invented paradigms to help us overcome those limits. In reference to our earlier discussion of cognitive errors: berating ourselves for not thinking paradigmatically is as pointless as berating ourselves for not running as fast as a locomotive, flying like an airplane, or calculating as accurately as a calculator, tools which exist precisely because we cannot normally do what they allow us to do. In this light, cognitive errors serve less as indictments of human thinking and more as signposts that mark the boundaries of our thinking.

A New Mission for Cognitive Error Research

None of this is to say that research on cognitive errors is unimportant; quite the opposite. Although humans, collectively,
have recognized that there are limits to chronicular narratives and that there is a consequent need for paradigmatic narratives,
the research shows that, individually, we routinely fail to recognize our own limits—so the need for paradigms often
goes unappreciated, even when we know about them. As has been stated so many times, research on cognitive errors is important
because the errors can be dangerous. However, merely demonstrating more and more errors does little to mitigate these dangers.

Cognitive error research needs to adopt a new mission. It needs to build upon its collection of demonstrations, each of which explores a small outpost at or beyond the boundary of useful chronicular narrative thought, by undertaking parametric studies that systematically map that boundary and then study how the boundary is, in effect, expanded by the use of paradigms. The existing list of tenuously related errors only provides glimpses of this boundary. Unless we go beyond our list, we will never fully understand epistemic thought nor develop a technology for improving it.

Toward an Understanding of Epistemic
Thought

What might the effort to understand epistemic thought look like? It seems
to me that it would be tripartite. The first part would be a theory of epistemic thought. The second part would be a theory
of contexts and their demands; that is, a theory of tasks. The third part would be a theory of paradigms.

Of course, I nominate chronicular narrative thought as the theory of epistemic thought, the first part of the tripartite theory. The second part, a theory of tasks, should view tasks separately from what it takes to successfully undertake them, in the sense that medicine distinguishes between disease as a malfunction of a bodily system that can be studied in and of itself and treatment protocols, which are paradigms for treating the disease once it is manifest in a patient. In our case, the theory of tasks begins with a taxonomy of the malfunctions that are common to categories of contexts or systems, where both words are used in the broadest sense. These malfunctions set the parameters of tasks, so central features of the taxonomy would be complexity (the multiplicity of factors that define the malfunction) and the time available for correcting the malfunction. The theory of tasks would be the totality of the taxonomy and the rules for locating a malfunction/task within it.

The third part of the tripartite theory would be a theory of paradigms, for which I nominate paradigmatic
narratives. This would consist of a taxonomy of explanatory and procedural paradigms together with the rules for locating
a paradigm within the taxonomy. The paradigms in this taxonomy are the multitude of formal prescriptions for identifying and
correcting the multitude of malfunctions to which systems are subject.

Research would
begin by mapping the paradigm taxonomy onto the taxonomy of system malfunctions, much as diagnostic and treatment protocols
are mapped onto diseases. This would be followed by parametric studies of unaided humans of various degrees of training and
motivation. Tasks of increasing complexity within a category would be presented, crossed with increasing time constraints,
and participants would be asked to perform them. The points at which performance fails would allow us to trace the boundary
of useful epistemic (chronicular narrative) thought—indicating where the use of paradigms (paradigmatic narratives)
should begin. Doing this with different groups of participants would allow us to see how the boundaries are extended by training,
motivation, and the availability of appropriate paradigms—not substantially different from seeing how the boundaries of a person’s ability to dig a hole are extended by training, motivation, and the availability of a shovel.

Summary

The theory of Narrative-Based Decision Making grew out of an effort to refine the concept of epistemic thought. Although
richer than can be presented in the space available here, the theory is basically simple. The key concept is the cognitive
narrative, the story that makes sense of our past and present experience and that allows us to make educated guesses (forecasts)
about the future. Decisions arise when the forecasted future violates our values and preferences, causing us to intervene
in the ongoing flow of events to create a more acceptable future.

Narratives are temporal arrangements of events that are purposefully caused by animate beings or inanimate forces. There are two kinds of narratives,
chronicular and paradigmatic. Chronicular narratives need not be true (they can be
imaginary or conjectural) but we attempt to make our current narrative about what is happening right now as valid as possible
because it is the basis of forecasts and consequent actions—where plausibility and coherence are surrogates for validity.

Paradigmatic narratives grow out of our need to think about things that are not easily handled
by chronicular narratives. They are tools for expanding our narrative ability by providing information to use in the construction
or refinement of other narratives.

Cognitive errors are examples of what happens when we try to use chronicular narrative thought to deal with tasks for which
paradigms are better suited. As such, they suggest a new mission for researchers—the parametric examination of the boundaries
of useful chronicular narrative thinking and how these boundaries are extended by the use of paradigms. In short, the idea
of humans as proto-scientists emerges anew. Just as scientists transcend the limitations of their narratives about the natural
world through the use of scientific paradigms, so too can ordinary people learn to use paradigms to improve and expand their
narratives about their own worlds. Doing so can provide them a deeper and more justifiable understanding of their ongoing
experience as well as mitigating the errors that could endanger their efforts to manage the ongoing course of their lives.

Brunswik, E. (1947). Systematic and
representative design of psychological experiments, with results in physical
and social perception. Berkeley, CA: University of California Press.

Fisher, W. R. (1989). Human communication as narration: Toward a philosophy of reason, value, and action. Columbia, SC: University of South Carolina Press.

Hacking, I. (1975). The emergence of probability. New York: Cambridge University Press.

When a Difference Makes a Difference in the Screening of Decision Options.

Lehman Benson III, Daniel P. Mertens, and Lee Roy Beach

Abstract

Previous Image Theory research has addressed the effects of the number of differences (violations) between desired
and observed features of an option on the decision to eliminate (reject) it from the set of options from which a choice will
be made, a process called pre-choice screening. Depending on the circumstances, rejection generally occurs at about three
or four violations, called the rejection threshold. The present research examines how large the difference between a desired
and observed feature must be before it counts as a violation, called the violation threshold. Two experiments were conducted,
the first using a within-subjects design and the second using a between-subjects design. The results of both revealed a violation
threshold below which a difference between a standard and an option's corresponding feature was not treated as a violation
and above which it was. Moreover, the threshold decreased as a function of how many other violations the option was known
to have. In short, a small flaw that might not matter if the option were not otherwise flawed may matter if other flaws are
known to exist, possibly tipping the decision toward elimination of the option from the choice set.

In previous research (summarized in Beach, 1993, 1998; Beach & Connolly, 2005)
we have examined how differences, called violations, between decision criteria, called standards, and the features of available
options lead decision makers to drop some options and retain others for subsequent choice of the best from among them, a process
known as pre-choice screening. In all of this work, options' features either clearly violated the standards or they clearly
did not, and the number of violations was varied. Depending upon the circumstances, it is found that three or four violations are generally sufficient to screen an option out of the choice set; this has been called the rejection threshold (Beach & Mitchell, 1987, 1990).

The present research addresses the question of what constitutes a violation. That is, how big must the difference be between a decision standard and the corresponding feature of an option before it counts as a violation and therefore weighs against retention of the option in the choice set? In short, is there evidence of a violation threshold?

Two areas of research are particularly relevant to degree of
violation. In discussing lexicographic semiorders, Tversky (1969) proposed that choices between options may be made by comparing
them on a single feature: if the difference between them exceeds some minimal value, called eta (e), the option with the more favorable value on that dimension should be retained and the other rejected. The important concept here is e, which is similar to a difference threshold in psychophysics: the point at which a difference between two stimuli can be detected.
In both cases, e and difference thresholds, the notion is that small, sub-threshold differences make no difference but large,
supra-threshold discrepancies do. Moreover, difference thresholds have been shown to vary according to the circumstances (e.g.,
Swets, 1964).

The second area of relevant
research concerns decision makers' judgments of equivalence (Beach, Beach, Carter & Barclay, 1974; Beach, 1990). Here
it is found that a judgment (How tall am I?) or an answer to a problem to be solved in one's head (what is 86% of 2537?) can
deviate from a standard, usually the correct value, to some degree and still be regarded by participants as essentially equivalent
to the correct value, while larger deviations are regarded as wrong. As with difference thresholds, equivalence thresholds
vary according to the circumstances, and there are individual differences.

For example, if you were to estimate the U.S.
national debt and your answer were half a billion dollars too high, you probably would regard it as essentially correct. But,
if you were to estimate the amount of money in your savings account and your estimate were half a billion dollars too high,
you would regard it as wildly wrong.
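The contrast in this example is naturally captured by a relative, rather than absolute, equivalence threshold: the same absolute error is tolerable or intolerable depending on the magnitude of the correct value. The sketch below, including the 10% cutoff, is an illustration of this idea, not a model fitted to the cited studies.

```python
# Illustrative sketch: an estimate is "essentially correct" when its
# deviation is small relative to the true value. The 10% threshold is
# an assumption for illustration, not a value from the cited research.

def essentially_equivalent(estimate, correct, rel_threshold=0.10):
    return abs(estimate - correct) <= rel_threshold * abs(correct)

# National debt: half a billion too high on a multi-trillion quantity
# falls well inside the relative threshold.
debt_ok = essentially_equivalent(31_500_000_000_000 + 500_000_000,
                                 31_500_000_000_000)
# Savings account: the same absolute error is wildly wrong.
savings_ok = essentially_equivalent(10_000 + 500_000_000, 10_000)
```

A fixed absolute threshold could not reproduce both judgments at once, which is why the relative form is the natural reading of the example.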

Returning
to the question of degree of violation and the acceptance or rejection of decision options, difference thresholds, e, and
equivalence thresholds all suggest that we should expect a difference between a decision standard and a corresponding feature
of a decision option to be tolerated up to a point, a threshold, above which it will be regarded as a violation and, therefore,
evidence that the option should be rejected, i.e., eliminated from further consideration. That is:

Hypothesis 1: There is a threshold below which differences between a decision standard and an option's corresponding feature do not contribute to the option's rejection and above which they do.

In other words, violation thresholds exist.

A second hypothesis is suggested by introspection and observation: If you already know that an option has significant flaws (violations), an additional small flaw that otherwise would be insignificant may become significant, moving the decision option toward rejection. For example, suppose you are looking into the details of a house you are thinking of buying. As you discover some of the house's shortcomings, you become increasingly uneasy about whether you should buy it. The more shortcomings you uncover, the more apt you are to regard the next one, even a small one that otherwise would not trouble you, as telling evidence of the house's unsuitability. Thus:

Hypothesis 2: The threshold at which a difference contributes to an option's rejection decreases when the decision maker knows that the option has supra-threshold differences on other features.

In other words, violation thresholds decrease as the number of known violations increases.

In previous research (Benson & Beach, 1996; Ordonez, Benson & Beach, 1999), college students were asked to assume the role of a newly graduating job seeker who possesses a prescribed set of standards for assessing potential jobs (options). Each of a number of jobs was described by a list of features, each of which clearly violated or did not violate the corresponding prescribed standard (for example, the prescribed standard was a desire to work in a small firm and the job was described as being in a large firm). The participant's task was to read the list of descriptors and decide whether to reject the job or apply for it. Different jobs had different numbers of violations and, on average, participants rejected jobs with four or more violations.

For the two experiments in the present research, the jobs and their features were kept the same as in the previous studies, with one exception. A key feature (required travel) of one of the jobs (called the target job) was expanded to include seven levels of difference (3, 6, 9, 12, 15, 18, or 36 weeks per year) from the standard ("as little travel as possible"). As before, the participants' task was to decide to reject each job or apply for it. These decisions constitute the data for the research. The dependent variable in both experiments was the point, the threshold, at which differences between the standard of as little travel as possible and the amount of travel required by the target job led to the target job being rejected.
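Hypothesis 2 can be made concrete as a screening rule in which the travel threshold shrinks as other known violations accumulate. The threshold schedule below is a hypothetical illustration, not a set of values estimated from the experiments:

```python
# Hypothetical schedule: tolerated weeks of travel as a function of the
# number of other known violations, shrinking as violations accumulate.
TRAVEL_THRESHOLD = {0: 10, 1: 4, 2: 1}

def reject_on_travel(travel_weeks, other_violations):
    """Hypothesis 2 sketch: the same amount of travel is tolerated when
    the option is otherwise clean but counts as a violation once other
    flaws are known."""
    threshold = TRAVEL_THRESHOLD.get(other_violations, 0)
    return travel_weeks > threshold

reject_on_travel(6, other_violations=0)  # tolerated: travel is the only flaw
reject_on_travel(6, other_violations=1)  # rejected: one other known violation
```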

Experiment 1

Hypothesis 1 was tested by presenting participants with descriptions of three jobs, each of which was a target job. Each job description consisted of six features and the description was followed by a list of increasing amounts of the seventh feature, required travel. Participants were asked to read the list of six features and to decide whether they would reject or apply for the job if it required the first amount of travel on the list, then if it required the second, higher, amount on the list, and so forth. The prediction was that there would be a single point on the list of required amounts of travel below which the participant would decide to apply for the job and above which he or she would not.
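The prediction of a single switch point can be expressed as a check on a participant's ordered decisions: a distinct threshold exists only when every decision up to some travel level is "apply" and every decision at and beyond it is "reject." A sketch, coding apply as True (an assumption about representation, not the booklet format):

```python
def distinct_threshold(decisions):
    """decisions: booleans ordered by increasing required travel
    (True = apply, False = reject). Return the index of the first
    rejection if the pattern is a clean apply-then-reject split,
    otherwise None."""
    if False not in decisions:
        return None  # never rejected: no threshold observed in this range
    first_reject = decisions.index(False)
    # A distinct threshold requires every later decision to be a rejection.
    if any(decisions[first_reject:]):
        return None
    return first_reject

distinct_threshold([True, True, True, False, False, False, False])   # index 3
distinct_threshold([True, False, True, False, False, False, False])  # None
```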

Hypothesis 2 was tested by varying the number of violated features the jobs had in addition to required travel.
For one of the three jobs, all features except travel matched their corresponding standards; this will be called the 1 violation
condition even though small amounts of required travel were not expected to count as a violation. A second of the three jobs
had one clear violation in addition to travel; this will be called the 2 violation condition. The third of the three jobs
had two clear violations in addition to travel; this will be called the 3 violation condition. The prediction was that the
rejection threshold for required travel would decrease as the number of additional violations increased.

Method

Participants: One hundred and forty-two undergraduate business students volunteered for class credit.

Procedure: Participants were presented with four-page booklets, the first page of which instructed them to: "Imagine that you are a 22 year old student who will soon graduate with a bachelor's degree in Marketing. You have gone to the Placement Center to look at available jobs. The Center has provided you with descriptions of three jobs in marketing, all of which pay the entry level salary. Each of the jobs is described in terms of 7 features: (1) firm size, (2) location, (3) creative freedom, (4) administrative responsibility, (5) whether there is an initial training period, (6) whether extensive travel is required, and (7) amount of annual vacation granted during the first two years with the firm."

"You have strong requirements in regard to each of these 7 job features: You want to work for a small firm, preferably in Tucson. You want a high degree of creative freedom, but a low degree of administrative responsibility until you have been on the job for a few years. You want an initial training period so you can more easily fit into the firm, you want as little travel as possible, and you want at least 2 weeks of vacation. Of course, you may not get precisely what you want, but these requirements reflect your preferences."

"In light of your requirements, please read each job description and answer the questions at the end of each description."

Each of the next three pages of the booklet contained a description of one of three jobs: a list of 6 features of the job (excluding travel), each of which corresponded to one of the job seeker's requirements. The features were always listed in the same order for each job. Thus, each job was some combination of: Firm Size: Large/Small; Location: Tucson/Out of State; Creative Freedom: High/Low; Administrative Responsibility: High/Low; Initial Training: Yes/No; Required Travel: ?; Vacation: 1 week/2 weeks.

For none of the three jobs was the amount of required travel stated on the list; instead there was merely a question mark. At the bottom of the list the participant was asked seven questions, one for each of the seven levels of required travel:

If the job required 3 weeks of travel, you would _____ reject _____ apply
If the job required 6 weeks of travel, you would _____ reject _____ apply

and so on for 9, 12, 15, 18, and 36 weeks of travel.

For the 1 violation job, travel was the only feature that was different from the standard. For the 2 violation job,
low creative freedom in addition to travel violated their respective standards. For the 3 violation job, both low creative
freedom and large firm size in addition to travel differed from their respective standards.

Results

Hypothesis 1 received moderate support: 98 of the 142 participants (69%) exhibited a distinct threshold for all three jobs; that is, there was a single level of travel for each job below which they accepted the job and above which they rejected it.

See
Fig. 1 at end of manuscript.

Hypothesis 2 also was
supported. As can be seen in Figure 1, the modal threshold for the 98 participants who had three distinct thresholds was lower
when travel was accompanied by another violation than when it was the only violation, and the modal threshold was even lower
when travel was accompanied by two other violations. The modal threshold (38% of the 98 participants) for the 1 violation
condition (travel only) was 12 weeks of required travel, with 78% of the 98 participants having thresholds at 9, 12, or 15
weeks. The modal threshold (36%) for the 2 violation condition (travel plus low creative freedom) was 9 weeks of required
travel, with 85% of the 98 participants having thresholds at 6, 9, or 12 weeks. The modal threshold (37%) for the 3 violation
condition (travel plus low creative freedom plus large firm size) was 3 weeks of required travel, with 83% of the 98 participants
having thresholds at 3, 6, or 9 weeks.

Discussion

The results of experiment 1 provide moderate support for both hypotheses, but there are problems. The major problem is that the demand characteristics of the task seem particularly strong; a participant who is paying close attention (although only 69% of them apparently were doing so) should realize that having once rejected a job when it requires too much travel, it would be inconsistent to accept it were it to require even more travel, thus producing a threshold. Moreover, presenting seven questions at the end of each job description highlights the need for such consistency and further ensures that the experiment will yield thresholds. Finally, the seven questions about the seven levels of travel require seven decisions about a single job, rather than the single decision that would be made in a real job search.

Experiment 2

Experiment 2 addresses the problems in experiment 1 by requiring each participant to make a decision for only one of the seven levels (3 through 36 weeks) of required travel in only one of the three (1, 2, or 3 violation) conditions. This design generates a 3 x 7 matrix that contains 21 cells. Each cell in the matrix required a group of participants. Each participant in each group was asked to decide to reject or apply for a job (called the target job) that was randomly presented in a booklet along with 7 other jobs that served as filler. The pattern of decisions across the cells of the matrix allowed us to make inferences about the existence of thresholds (Hypothesis 1) and whether they decrease as a function of the number of other violations (Hypothesis 2).

Method

Participants: Three hundred and fifty-eight undergraduate business students volunteered to participate for extra course credit, with 16-18 participants assigned to each of the 21 cells in the matrix (Table 1).

Procedure: Participants were presented with nine-page booklets, the first page
of which contained instructions and the following eight pages of which contained lists that described each of eight jobs,
one job to a page. One of the eight jobs had no violations, one had 1 violation, one had 2 violations, and so on up to a job
for which all 7 features were violations. Seven of the eight jobs merely provided filler within which the eighth, the target
job, was embedded. The eight jobs were presented in random order in each booklet.

The instructions were the same as in experiment 1 with the following exceptions: The instructions referred
to eight job descriptions instead of only three. In order to make sure participants understood that minimal
travel was important (a pilot study showed that many had little appreciation of the impact of extended travel) the instructions
stated that because the participant was soon to become a parent, as little travel as possible was important in deciding about
a job. The final instruction was: "In light of your requirements, please screen the eight job descriptions,
rejecting those that are of no further interest and retaining those that you would apply for. Do this by marking one or the
other blank (____ Reject or ____Apply) at the end of each description."

Results

The results of Experiment 2 are shown in Table 1. In what follows, whenever a difference between proportions is termed "significant," it is the result of a standard normal variate test for proportions with p at .05.
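The standard normal variate test for proportions is presumably the familiar two-sample z-test with a pooled proportion; a minimal version, using illustrative counts rather than the study's raw data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-sample z statistic for the difference between two
    independent proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts: 15 of 17 versus 10 of 17 participants applying.
z = two_proportion_z(15, 17, 10, 17)
# A two-tailed test at p = .05 rejects when |z| exceeds 1.96.
significant = abs(z) > 1.96
```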

Table 1. The proportion of participants in each group who decided to apply for the target job at each level of required travel for each violation condition in Experiment 2. The bars indicate that all of the proportions on one side of the bar are significantly different from all the proportions on the other side and the proportions on one side of a bar are not significantly different from one another.

                     Weeks of Required Travel
               3     6     9     12    15    18    36
1 Violation   .90   .95   .85  | .60   .45   .55   .60
2 Violation   .85 | .50   .30    .40   .39   .50   .60
3 Violation   .50   .40   .40    .35   .40   .50 | .10

Each cell in the
table contains the proportion of the 16-18 participants in that group who decided to apply for the target job. The proportions
in each row reflect the effects of increasing the amount of required travel holding the number of other violations constant.
Looking at the row for 1 violation, where travel is the only discrepant feature, there are no significant differences among
the proportions of decisions to apply for the target job for 3, 6 and 9 weeks of required travel nor among the proportions
for 12, 15, 18 or 36 weeks. However, each of the 3-9 week proportions is significantly different from each of the 12-36 week
proportions, indicating a significant drop in decisions to apply between 9 and 12 weeks. This implies that when decision makers
know of no other violations for the job, the threshold at which a difference between required travel and the standard of "as
little travel as possible" becomes a violation lies between 9 and 12 weeks of required travel. Recall that 12 weeks was
the modal threshold in experiment 1.

In the row
for the 2 violation condition, the proportion of decisions to apply for a job requiring 3 weeks of required travel is significantly
different from each of the proportions for 6-36 weeks of required travel, none of which are significantly different from each
other. This implies that when decision makers know of one other violation for the job, the threshold at which a difference
between required travel and the standard becomes a violation is between 3 and 6 weeks, which is lower than the modal threshold
of 9 in experiment 1.

In the row for the 3 violation condition,
even 3 weeks of required travel is sufficient to make half the participants decide to reject the target job. None of the proportions
in the row is significantly different from the others except for 36 weeks (.10), which is significantly lower than all other
proportions in the row, as well as being significantly lower than the other proportions in the column, an anomaly that will
be addressed below. Excepting the 36 week proportion for the moment, the results in this row of the table imply that when
decision makers know of two other violations for the job, the threshold at which a difference between required travel and
the standard becomes a violation is between 0 and 3 weeks. Recall that the modal threshold was 3 in experiment 1.

Each column in the table reflects the effects of increasing the number of other
violations, holding the weeks of required travel constant. Looking at the column for 3 weeks of required travel, the proportions
for travel only (1 violation condition) and travel plus low creative freedom (2 violations) are not significantly different
from one another, but they both are significantly different from the proportion for travel plus low creative freedom plus
large firm size (3 violations). This implies that 3 weeks of required travel is not considered to be so different from the
prescribed standard of "as little travel as possible" that it counts as a violation unless the job has at least
two other violations.

For both 6 and 9 weeks of required
travel, the proportions for travel only (1 violation) are significantly different from the proportions for travel plus low
creative freedom (2 violations) and from the proportions for travel plus low creative freedom plus large firm size (3 violations),
but the latter proportions are not significantly different from each other. This implies that neither 6 nor 9 weeks of required travel is considered to be so different from the standard of "as little travel as possible" that it counts as a violation unless the job has at least one other violation.

For 12 or more weeks of required travel, none of the proportions are significantly different from each other (except
for 36 weeks for 3 violations). This implies that any amount of required travel equaling or exceeding 12 weeks is so different
from "as little travel as possible" that it constitutes a violation, whether or not the job has other violations.
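The row-wise reading of Table 1 amounts to searching for a split point such that every proportion on one side differs significantly from every proportion on the other, while proportions on the same side do not differ. A sketch of that search, using the pooled z-test and hypothetical per-cell counts:

```python
import math

def z_stat(x1, n1, x2, n2):
    """Pooled two-sample z statistic for two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def threshold_split(cells, crit=1.96):
    """cells: (applied, n) pairs ordered by increasing travel. Return the
    index of the first cell beyond the split if every cross-split pair of
    proportions differs significantly and no same-side pair does;
    otherwise None."""
    for split in range(1, len(cells)):
        left, right = cells[:split], cells[split:]
        cross_ok = all(abs(z_stat(*a, *b)) > crit
                       for a in left for b in right)
        within_ok = all(abs(z_stat(*a, *b)) <= crit
                        for side in (left, right)
                        for i, a in enumerate(side) for b in side[i + 1:])
        if cross_ok and within_ok:
            return split
    return None

# Hypothetical counts with a clear drop between the second and third cells:
threshold_split([(16, 17), (16, 17), (5, 17), (5, 17)])  # returns 2
```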

Discussion

With the exception of the cell in the lower right of the table, most of the proportions in Table 1 may seem fairly high; that is, participants are prone to accept the target job rather than reject it. This is not surprising because previous research using these job descriptions found, on average, that it took four or more violations for a job to be rejected, and there were never more than three violations in the present research. We did not include a 4 violation condition because we suspected that the acceptance levels would all be so low that no threshold would be revealed, thus making a 4 violation condition uninformative. The fact that the threshold for the 3 violation condition in Table 1 is at or below the lowest level of required travel, 3 weeks, implies that this would indeed have been the case.

The anomaly in Table 1 is the .10 for 36 weeks in the 3 violation condition. Although .60 of the participants decided to apply for the job when 36 weeks of travel was its only violation, and the same proportion decided to apply for it when it had both 36 weeks of required travel and low creative freedom as violations, adding yet another violation, large firm size, seems to have been the straw that broke the camel's back: almost nobody was interested in the job.

The instructions about impending parenthood may have heightened participants' sensitivity both to the three violations
and to the 36 weeks of required travel. To check on this, two additional groups were presented with 36 weeks and the 3 violation
condition; one group received the parenthood instruction and the other did not. In the group that received the instruction
the proportion of participants deciding to apply for the job was .10, the same as in the table. In contrast, the proportion
for the group without the instruction was .60, which is not significantly different from the other proportions in the row
and is the same as the other proportions in the column. Recall, however, that the parenthood instruction was included specifically
to highlight the importance of "as little travel as possible" lest we observe too few rejections to detect thresholds.
So, a proportion of .60 acceptances without the instruction is perhaps lower than might be expected, suggesting that the parenthood
instruction does not fully account for the anomalous result in that cell of the table. Instructions aside, the constellation
of 36 weeks of required travel and two other violations is bad news for the target job.

Conclusions

Image Theory (Beach & Mitchell, 1987) posits both a violation threshold and a rejection threshold, but all previous research has been on the rejection threshold (see Beach, 1998; Beach & Connolly, 2005). The present research was designed to clarify the nature of the violation threshold. The results of both experiments lend support to the first hypothesis: there is a threshold below which differences between a decision standard and an option's feature are not regarded as a violation and above which they are. The results of both experiments also lend support to the second hypothesis: the threshold for regarding a difference as a violation decreases when other violations are known to exist. In short, a small flaw doesn't matter unless the option is otherwise flawed; then it may be treated as if it were a larger flaw, thereby tipping the decision toward dropping the option from further consideration.

Future research should examine the conditions that influence the size of violation thresholds, such as the importance of the feature and the clarity of the difference, as well as individual differences. The latter are amply evidenced in the data for both of our experiments.