A creative person is usually very intelligent in the
ordinary sense of the term and can meet the problems of life as rationally as
anyone can, but often he refuses to let intellect rule; he relies strongly on
intuition, and he respects the irrational in himself and others. Above a certain
level, intelligence seems to have little correlation with creativity--i.e.,
a highly intelligent person is not necessarily highly creative. A distinction is
sometimes made between convergent thinking, the analytic reasoning measured
by intelligence tests, and divergent thinking, a richness of ideas and originality
of thinking. Both seem necessary to creative performance, although in different
degrees according to the task or occupation (a mathematician may exhibit more
convergent than divergent thinking and an artist the reverse).

Personality.

Many creative people show a strong interest in apparent
disorder, contradiction, and imbalance; they often seem to consider asymmetry
and disorder a challenge. At times creative persons give an impression of psychological
imbalance, but such seemingly unbalanced traits may be an extension of a generalized
receptivity to a wider-than-normal range of experience and behaviour patterns.
Such individuals may possess an exceptionally deep, broad, and flexible awareness
of themselves.

Studies indicate that the creative person is nonetheless
an intellectual leader with a great sensitivity to problems. He exhibits a high
degree of self-assurance and autonomy. He is dominant and is relatively free
of internal restraints and inhibitions. He has a considerable range of intellectual
interests and shows a strong preference for complexity and challenge.

The
unconventionality of thought that is sometimes found in creative persons may
be in part a resistance to acculturation, which is seen as demanding surrender
of one's personal, unique, fundamental nature. This may result in a rejection
of conventional morality, though certainly not in any abatement of the moral
attitude.

Theories of intelligence

Theories of intelligence, as is the case with most scientific
theories, have evolved through a succession of paradigms that have been put
forward to clarify our understanding of the idea. The major paradigms have been
those of psychological measurement (often called psychometrics); cognitive psychology,
which concerns itself with the mental processes by which the mind functions;
the merger of cognitive psychology with contextualism (the interaction of the
environment and processes of the mind); and biological science, which considers
the neural bases of intelligence.

Psychometric theories

Psychometric theories have generally sought to understand the
structure of intelligence: What form does it take, and what are its parts, if
any? Such theories have generally been based on and tested by the use of data
obtained from paper-and-pencil tests of mental abilities that include analogies
(e.g., lawyer : client :: doctor : ?), classifications (e.g., Which word does
not belong with the others? robin, sparrow, chicken, bluejay), and series completions
(e.g., What number comes next in the following series? 3, 6, 10, 15, 21, ?).
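The series item above can be answered mechanically. A minimal sketch in Python, assuming (as this item's pattern suggests) that the gaps between terms grow by a constant amount:

```python
def next_term(series):
    """Predict the next term, assuming the gaps between terms grow linearly."""
    gaps = [b - a for a, b in zip(series, series[1:])]
    gap_growth = gaps[1] - gaps[0]      # here 4 - 3 = 1
    next_gap = gaps[-1] + gap_growth    # here 6 + 1 = 7
    return series[-1] + next_gap        # here 21 + 7 = 28

print(next_term([3, 6, 10, 15, 21]))    # prints 28
```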

Underlying the psychometric theories is a psychological model
according to which intelligence is a composite of abilities measured by mental
tests. This model is often quantified by assuming that each test score is a
weighted linear composite of scores on the underlying abilities. For example,
performance on a number-series test might be a weighted composite of number,
reasoning, and possibly memory abilities for a complex series. Because the mathematical
model is additive, it assumes that less of one ability can be compensated for
by more of another ability in test performance. For instance, two people could
gain equivalent scores on a number-series test if a deficiency in number ability
in the one person relative to the other was compensated for by superiority in
reasoning ability.

The first of the major psychometric theories was that of the British
psychologist Charles E. Spearman, who published his first major article on intelligence
in 1904. Spearman noticed what, at the turn of the century, seemed like a peculiar
fact: People who did well on one mental ability test tended to do well on the
others, and people who did not do well on one of them also tended not to do
well on the others. Spearman devised a technique for statistical analysis, which
he called factor analysis, that examines patterns of individual differences
in test scores and is said to provide an analysis of the underlying sources
of these individual differences. Spearman's factor analyses of test data suggested
to him that just two kinds of factors underlie all individual differences in
test scores. The first and more important factor Spearman labeled the "general
factor," or g, which is said to pervade performance on all tasks requiring
intelligence. In other words, regardless of the task, if it requires intelligence,
it requires g. The second factor is specifically related to each particular
test. But what, exactly, is g? After all, calling something a general factor
is not the same as understanding what it is. Spearman did not know exactly what
the general factor might be, but he proposed in 1927 that it might be something
he labeled "mental energy."
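The intuition behind g can be sketched numerically. The correlation matrix below is hypothetical, and the leading eigenvector serves only as a modern stand-in for Spearman's factor-analytic method, which differed in detail:

```python
import numpy as np

# Hypothetical correlations among four mental tests: all positive, so a
# single dominant factor accounts for most of the shared variance.
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.4],
    [0.4, 0.4, 0.4, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eigh(R)   # ascending eigenvalues
g_loadings = eigenvectors[:, -1]                # vector for the largest one
share = eigenvalues[-1] / eigenvalues.sum()     # variance explained by "g"

# Every test loads on the factor in the same direction, and the single
# factor explains well over half the total variance.
print(share)
```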

The American psychologist L.L. Thurstone disagreed not only with
Spearman's theory but also with his isolation of a single factor of general
intelligence. Thurstone argued that the appearance of just a single factor was
an artifact of the way Spearman did his factor analysis and that if the analysis
were done in a different and more appropriate way, seven factors would appear,
which Thurstone referred to as the "primary mental abilities." The
seven primary mental abilities identified by Thurstone were verbal comprehension
(as involved in the knowledge of vocabulary and in reading); verbal fluency
(as involved in writing and in producing words); number (as involved in solving
fairly simple numerical computation and arithmetical reasoning problems); spatial
visualization (as involved in mentally visualizing and manipulating objects,
as is required to fit a set of suitcases into an automobile trunk); inductive
reasoning (as involved in completing a number series or in predicting the future
based upon past experience); memory (as involved in remembering people's names
or faces); and perceptual speed (as involved in rapidly proofreading to discover
typographical errors in a typed text).

It is possible, of course, that Spearman was right and Thurstone
was wrong, or vice versa. Other psychologists, however, such as the Canadian
Philip E. Vernon and the American Raymond B. Cattell, suggested another possibility--that
both were right in some sense. In the view of Vernon and Cattell, abilities
are hierarchical. At the top of the hierarchy is g, or general ability. But
below g in the hierarchy are successive levels of gradually narrowing abilities,
ending with Spearman's specific abilities. Cattell, for example, suggested in
a 1971 work that general ability can be subdivided into two further kinds of
abilities, fluid and crystallized. Fluid abilities are the reasoning and problem-solving
abilities measured by tests such as the analogies, classifications, and series
completions described above. Crystallized abilities can be said to derive from
fluid abilities and be viewed as their products, which would include vocabulary,
general information, and knowledge about specific fields. John L. Horn, an American
psychologist, suggested that crystallized ability more or less increases over
the life span, whereas fluid ability increases in the earlier years and decreases
in the later ones.

Most psychologists agreed that a broader subdivision of abilities
was needed than Spearman had provided, but not all agreed that the
subdivision should be hierarchical. J.P. Guilford, an American psychologist,
proposed a structure-of-intellect theory, which in its earlier versions postulated
120 abilities. For example, in an influential 1967 work Guilford argued that
abilities can be divided into five kinds of operations, four kinds of contents,
and six kinds of products. These various facets of intelligence combine multiplicatively,
for a total of 5 × 4 × 6, or 120, separate abilities. An example of such an ability
would be cognition (operation) of semantic (content) relations (product), which
would be involved in recognizing the relation between lawyer and client in the
analogy problem, lawyer : client :: doctor : ?. In 1984 Guilford increased the
number of abilities proposed by his theory, raising the total to 150.
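The multiplicative facet structure is easy to verify by enumeration. The facet labels below are placeholders of mine; only the counts (five operations, four contents, six products) come from the 1967 version of the theory:

```python
from itertools import product

operations = ["operation_%d" % i for i in range(5)]   # 5 kinds of operations
contents = ["content_%d" % i for i in range(4)]       # 4 kinds of contents
products = ["product_%d" % i for i in range(6)]       # 6 kinds of products

# Every ability is one (operation, content, product) combination.
abilities = list(product(operations, contents, products))
print(len(abilities))  # prints 120
```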

It had become apparent that there were serious problems with psychometric
theories, not just individually but as a basic approach to the question. For
one thing, the number of abilities seemed to be getting out of hand. A movement
that had started by postulating one important ability had come, in one of its
major manifestations, to postulating 150. Because parsimony is usually regarded
as one of several desirable features of a scientific theory, this number caused
some disturbance. For another thing, the psychometricians, as practitioners
of factor analysis were called, did not seem to have any strong scientific means
of resolving their differences. Any method that could support so many theories
seemed somewhat suspect, at least in the use to which it was being put. Most
significant, however, was the seeming inability of psychometric theories to
say anything substantial about the processes underlying intelligence. It is
one thing to discuss "general ability" or "fluid ability,"
but quite another to describe just what is happening in people's minds when
they are exercising the ability in question. The cognitive psychologists proposed
a solution to these problems, which was to study directly the mental processes
underlying intelligence and, perhaps, relate them to the factors of intelligence
proposed by the psychometricians.

Cognitive theories

During the era of psychometric theories, the study of intelligence
was dominated by those investigating individual differences in people's test
scores. In an address to the American Psychological Association in 1957, the
American psychologist Lee Cronbach, a leader in the testing field, decried the
fact that some psychologists study individual differences and others study commonalities
in human behaviour but never do the two meet. Cronbach's plea
to unite the "two disciplines of scientific psychology" led, in part,
to the development of cognitive theories of intelligence and of the underlying
processes posited by these theories. Without an understanding of the processes
underlying intelligence it is possible to come to misleading, if not wrong,
conclusions when evaluating overall test scores or other assessments of performance.
Suppose, for example, that a student does poorly on the type of verbal analogies
questions commonly found on psychometric tests. A possible conclusion is that
the student does not reason well. An equally plausible interpretation, however,
is that the student does not understand the words or is unable to read them
in the first place. A student seeing the analogy, audacious : pusillanimous
:: mitigate : ?, might be unable to solve it because of a lack of reasoning
ability, but a more likely possibility is that the student does not know the
meanings of the words. A cognitive analysis enables the interpreter of the test
score to determine both the degree to which the poor score is due to low reasoning
ability and the degree to which it is a result of not understanding the words.
It is important to distinguish between the two interpretations of the low score,
because they have different implications for understanding the intelligence
of the student. A student might be an excellent reasoner but have only a modest
vocabulary, or vice versa.

Underlying most cognitive approaches to intelligence is the assumption
that intelligence comprises a set of mental representations (e.g., propositions,
images) of information and a set of processes that can operate on the mental
representations. A more intelligent person is assumed to represent information
better and, in general, to operate more quickly on these representations than
does a less intelligent person. Researchers have sought to measure the speed
of various types of thinking. Through mathematical modeling, they divide the
overall time required to perform a task into the constituent times needed to
execute each mental process. Usually, they assume that these processes are executed
serially--one after another--and, hence, that the processing times are additive.
But some investigators allow for partially or even completely parallel processing,
in which case more than one process is assumed to be executed at the same time.
Regardless of the type of model used, the fundamental unit of analysis is the
same: a mental process acting upon a mental representation.
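The additive modeling just described can be sketched as a linear system: if each task condition requires a known number of executions of each process, per-process times can be recovered from total reaction times by least squares. All numbers below are hypothetical:

```python
import numpy as np

# rows = task conditions; columns = how many times each of three mental
# processes must execute in that condition (hypothetical design matrix)
counts = np.array([
    [1, 0, 1],
    [1, 1, 1],
    [2, 1, 1],
    [2, 2, 2],
])
true_times = np.array([300.0, 150.0, 200.0])  # ms per process (assumed)
observed = counts @ true_times                # noiseless totals for the demo

# Additive (serial) model: total time = counts @ per-process times, so
# the component times fall out of a least-squares fit.
estimated, *_ = np.linalg.lstsq(counts, observed, rcond=None)
print(estimated)  # recovers the assumed per-process times
```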

A number of cognitive theories of intelligence have evolved. Among
them is that of the American psychologists Earl B. Hunt, Nancy Frost, and Clifford
E. Lunneborg, who in 1973 showed one way in which psychometrics and cognitive
modeling could be combined. Instead of starting with conventional psychometric
tests, they began with tasks that experimental psychologists were using in their
laboratories to study the basic phenomena of cognition, such as perception,
learning, and memory. They showed that individual differences in these tasks,
which had never before been taken seriously, were in fact related (although
rather weakly) to patterns of individual differences in psychometric intelligence
test scores. These results, they argued, showed that the basic cognitive processes
might be the building blocks of intelligence.

Following is an example of the kind of task Hunt and his colleagues
studied in their research. The experimental subject is shown a pair of letters,
such as "A A," "A a," or "A b." The subject's
task is to respond as quickly as possible to one of two questions: "Are
the two letters the same physically?" or "Are the two letters the
same only in name?" In the first pair the letters are the same physically,
and in the second pair the letters are the same only in name.

The psychologists hypothesized that a critical ability underlying
intelligence is that of rapidly retrieving lexical information, such as letter
names, from memory. Hence, they were interested in the time needed to react
to the question about letter names. They subtracted the reaction time to the
question about physical match from the reaction time to the question about name
match in order to set aside the time required simply to read the letters
and press the response buttons. The critical finding was
that the score differences seemed to predict psychometric test scores, especially
those on tests of verbal ability, such as verbal analogies and reading comprehension.
The testing group concluded that verbally facile people are those who have the
underlying ability to absorb and then retrieve from memory large amounts of
verbal information in short amounts of time. The emphasis on speed of retrieval
was the significant development here.
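The subtraction at the heart of the task can be sketched directly. The per-subject reaction times below are hypothetical:

```python
import numpy as np

# Hypothetical reaction times (ms) for four subjects on the two questions.
physical_rt = np.array([420.0, 450.0, 430.0, 480.0])  # "same physically?"
name_rt = np.array([500.0, 560.0, 495.0, 610.0])      # "same only in name?"

# Reading and button-pressing time is common to both questions, so the
# difference isolates the time to retrieve letter names from memory.
retrieval_time = name_rt - physical_rt
print(retrieval_time)  # 80, 110, 65, 130 ms per subject
```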

A few years later, the American psychologist Robert J. Sternberg
suggested an alternative approach to studying the cognitive processes underlying
human intelligence. He argued that Hunt and his colleagues had found only a
weak relation between basic cognitive tasks and psychometric test scores because
the tasks they were using were at too low a level. Although low-level cognitive
processes may be involved in intelligence, according to Sternberg they are peripheral
rather than central. He proposed that psychologists should study the tasks found
on the intelligence tests and then determine the mental processes and strategies
that people use to perform those tasks.

Sternberg began his study with the analogies tasks such as lawyer
: client :: doctor : ?. He determined that the solution to such analogies requires
a set of component cognitive processes: namely, encoding of the analogy terms
(e.g., retrieving from memory attributes of the terms lawyer, client, and so
on), inferring the relation between the first two terms of the analogy (e.g.,
figuring out that a lawyer provides professional services to a client), mapping
this relation to the second half of the analogy (e.g., figuring out that both
a lawyer and a doctor provide professional services), applying this relation
to generate a completion (e.g., realizing that the person to whom a doctor provides
professional services is a patient), and then responding. Using techniques of
mathematical modeling applied to reaction-time data, Sternberg proceeded to
isolate the components of information processing. He determined whether or not
each experimental subject did, indeed, use these processes, how the processes
were combined, how long each process took, and how susceptible each process
was to error. Sternberg later showed that the same cognitive processes are involved
in a wide variety of intellectual tasks, and he suggested that these and other
related processes underlie scores on intelligence tests.
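The component sequence can be illustrated with a toy program. The two-entry knowledge base, and the collapsing of encoding and inference into a dictionary lookup, are simplifications of mine, not Sternberg's model:

```python
# Hypothetical knowledge base: who receives professional services from whom.
SERVES = {"lawyer": "client", "doctor": "patient"}

def solve_analogy(a, b, c):
    """Solve a : b :: c : ? via encode, infer, map, and apply steps."""
    relation = SERVES.get(a)    # encode/infer: relation between a and b
    if relation != b:           # check that the inferred relation fits
        return None
    return SERVES.get(c)        # map/apply: extend the relation to c

print(solve_analogy("lawyer", "client", "doctor"))  # prints patient
```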

Other cognitive psychologists have pursued different paths in
the study of human intelligence, including the building of computer models of
human cognition. Two leaders in this field have been the American psychologists
Allen Newell and Herbert A. Simon. In the late 1950s and early 1960s they worked
with a computer expert, Clifford Shaw, to construct a computer model of human
problem solving. Called the General Problem Solver, it could solve a wide range
of fairly structured problems, such as logical proofs and mathematical word
problems. Their program relied heavily on a heuristic procedure called "means-ends
analysis," which, at each step of problem solving, determined how close
the program was to a solution and then tried to find a way to bring the program
closer to where it needed to be. In 1972, Newell and Simon proposed a general
theory of problem solving, much of which was implemented on the computer.
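The flavor of means-ends analysis can be captured in a few lines. The toy state space (a number, with "add 3" and "double" operators) is my illustration, not a domain the General Problem Solver actually worked in:

```python
# Means-ends analysis in miniature: repeatedly measure the difference
# between the current state and the goal, and apply whichever operator
# most reduces that difference.
OPERATORS = [lambda n: n + 3, lambda n: n * 2]

def means_ends(start, goal, max_steps=20):
    state = start
    for _ in range(max_steps):
        if state == goal:
            return state
        state = min((op(state) for op in OPERATORS),
                    key=lambda nxt: abs(goal - nxt))
    return state

print(means_ends(2, 16))  # 2 -> 5 -> 10 -> 13 -> 16
```

Like the real heuristic, this greedy reduction of difference can stall on some goals; GPS combined it with subgoaling to recover from such dead ends.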

Most of the problems studied by Newell and Simon were fairly well
structured, in that it was possible to identify a discrete set of moves that
would lead from the beginning to the end of a problem. For example, in logical-theorem
proving the final result is known, and what is needed is a discrete set of steps
that lead to that solution. Even in chess, another object of study, a discrete
set of moves can be determined that will lead from the beginning of a game to
checkmate. The biggest problem for a computer program (or a human player, for
that matter) is in deciding which of myriad possible moves will most contribute
toward winning a game. Other investigators have been concerned with less well-structured
problems, such as how a text is comprehended, or how people are reminded of
things they already know when reading a text.

All of the cognitive theories described so far have in common
their primary reliance on what psychologists call the serial processing of information.
Fundamentally, this means that cognitive processes are executed in series, one
after another. In solving an algebra problem, for example, first the problem
is studied, then an attempt is made to formulate some equations to define knowns
and unknowns, then the equations may be used to solve for the unknowns, and
so on. The assumption is that people process chunks of information one at a
time, seeking to combine the processes used into an overall strategy for solving
a problem.

For many years, various psychologists have challenged the idea
that cognitive processing is primarily serial. They have suggested that cognitive
processing is primarily parallel, meaning that humans actually process large
amounts of information simultaneously. It has long been known that the brain
works in such a way, and it seems reasonable that cognitive models should reflect
this reality. It proved, however, to be difficult to distinguish between serial
and parallel models of information processing, just as it had been difficult
earlier to distinguish between different factor models of human intelligence.
Subsequently, advanced techniques of mathematical and computer modeling were
brought to bear on this problem, and various researchers, including the American
psychologists David E. Rumelhart and Jay L. McClelland, proposed what they call
"parallel distributed processing" models of the mind. These models
postulated that many types of information processing occur at once, rather than
just one at a time.
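The contrast with serial models can be made concrete: in a parallel distributed scheme, every unit in a layer updates at once from the same input pattern. The tiny network below is a bare sketch of that idea, not Rumelhart and McClelland's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 5))  # 5 input units feeding 3 output units
inputs = rng.normal(size=5)

# One update step: all three output activations are computed
# simultaneously from the whole input pattern, not one after another.
outputs = np.tanh(weights @ inputs)
print(outputs.shape)  # (3,)
```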

Even with computer modeling, some major problems regarding the
nature of intelligence remain. For example, a number of psychologists, such
as the American Michael E. Cole, have argued that cognitive processing does
not take into account that the description of intelligence may differ from one
culture to another and may even differ from one group to another within a culture.
Moreover, even within the mainstream cultures of North America and Europe, it
had become well known that conventional tests, even though they may predict
academic performance, do not reliably predict performance in jobs or other life
situations beyond school. It seemed, therefore, that not only cognition but
also the context in which cognition operates had to be taken into account.