ABSTRACT - This paper summarizes the basic ideas associated with Craik and Lockhart's (1972) levels-of-processing framework for memory research. The major changes in their ideas are described and the remaining problems are identified and briefly discussed. An argument is made that future research and theorizing should explicitly examine the effects on encoding of existing knowledge structures. The paper concludes with two broad suggestions for future research.

[Preparation of this paper was supported by the Science and Education Administration of the U.S. Department of Agriculture under Grant No. 5901-0410-8-0151-0 from the Competitive Grants Office and by a Corporate Associates Grant from the College of Business Administration.]

INTRODUCTION

This paper concerns encoding processes, the cognitive processing operations involved in "assigning" symbolic codes to sensations received from sensory receptors. Stated more simply, encoding is how we represent, in cognitive form, the stimuli in our environment to which we attend. Encoding processes are of central importance in explaining and predicting other cognitive processes such as memory retrieval and information integration during decision making.

Lachman, Lachman, and Butterfield (1979) suggest that one basic, perhaps metatheoretical assumption underlying the information processing approach is that human mental processes involve symbol manipulation. This idea was persuasively introduced to many consumer behaviorists by Newell and Simon (1972). Clearly, the processes by which symbolic representations (a) are acquired (concept formation) and (b) are assigned to perceived stimuli during encoding operations are centrally important in the information processing paradigm. Because all information handling processes are presumed to operate on the cognitive representations of stimuli, not the stimuli themselves, the cognitive processes by which these coded representations are created and assigned--here, broadly termed the encoding process--are of fundamental importance.

In this paper, encoding is considered to be broadly analogous to the familiar notion of comprehension (cf. McGuire 1976). Thus, I am concerned with the cognitive processes by which consumers select information from their environments and comprehend that information--that is, represent it in cognitive, symbolic forms. To clarify the distinction between information and coded information, I refer to the stimuli present in one's environment as information while the cognitive representation of that information is termed knowledge (see Russo 1978). Knowledge, in this sense, may take many different forms, including simple, relatively concrete representations of color or size, for example, and more complex, abstract representations such as style, quality, or serviceability.

This paper has two purposes. The first is to briefly describe the currently popular framework for encoding and memory research provided by levels-of-processing (LOP) theory and to discuss recent developments in the basic LOP ideas. My second objective is to consider, again briefly, the effects of the encoder's pre-existing knowledge on encoding processes and on the coded representations/knowledge produced by those processes. My basic point is that existing knowledge structures must be incorporated into any meaningful model or empirical study of encoding processes. Several suggestions for how to do so are offered. Due to space constraints, the paper addresses conceptual issues only; data are not presented.

THE LEVELS-OF-PROCESSING FRAMEWORK

In 1972, Craik and Lockhart published an influential paper in which they presented the basic LOP ideas as a broad framework for memory research. Here, memory refers to "classic" memory phenomena--people's remembrances of past events in their lives (termed episodic memory by Tulving 1972). The avowed purpose of LOP was to serve as a metatheoretical perspective, a broad way of thinking about memory and memory research, rather than as a tightly specified theory from which one could derive falsifiable propositions.

The original LOP ideas were rather simple. Craik and Lockhart (1972) considered perception (one could say comprehension) as "...a series or hierarchy of processing stages..." which could be ordered in terms of "...'depth of processing' where greater 'depth' implies a greater degree of semantic...analysis." They further postulated that encoding analysis "...proceeds through a series of sensory stages to levels associated with matching or pattern recognition and finally to semantic-associated stages of stimulus enrichment" (all quoted material from p. 675). From Craik and Lockhart's LOP perspective, memory, or more specifically, the coded representation or memory trace, is a direct result of the type of perceptual analysis that occurs during encoding. Thus, the content of the memory trace is a function of the encoding operations that created the trace. But Craik and Lockhart further proposed that the persistence or durability of the coded trace is also affected by the type of processing operations that occur during encoding. Specifically, "...trace persistence is a function of depth of analysis, with deeper levels of analysis associated with more elaborate, longer lasting, and stronger traces" (p. 675). These few ideas, succinctly stated here, constitute the core ideas in Craik and Lockhart's LOP framework for memory research.

According to Craik and Lockhart, encoding processes could be considered in terms of stages or domains of cognitive operations, ranging from sensory to semantic analyses. They suggested "depth" of analysis as a useful metaphor. That is, encoding operations could provide "shallow" analyses of the physical, sensory aspects of a stimulus, as well as progressively "deeper," more semantic analyses of the more abstract aspects of stimulus meaning. This LOP idea implies that memory contains a range of trace types, from the relatively short-lived sensory codes of physical features produced by shallow encoding analyses to the more durable, semantic-associative codes produced by deeper levels of processing (cf. Treisman 1979). Given the LOP perspective, the structural multistore theories of memory (sensory, short-term, and long-term memory) are less useful for explaining the short-term, transient vs. long-term, perhaps permanent retention of coded information. Instead, under the LOP framework, the memorability of knowledge (memory codes, representations, or traces) is a direct function of the "depth" of encoding operations that occurred during early processing of the information.

To illustrate these ideas, consider the types of initial encoding processes that hypothetically occur upon exposure to a print advertisement. Perhaps one consumer attends to and analyzes the physical characteristics of the ad. He or she might thereby encode or represent features such as the dominant colors in the ad, the ad layout, the size and style of type, whether or not human actors are portrayed, etc. This person would be considered as engaging in shallow, sensory encoding operations. And, according to LOP "theory," the resulting symbolic representations that constitute this type of comprehension should not be easily retrieved from memory. In contrast, consider the consumer who, during exposure to the ad, analyzes the more abstract, semantically meaningful aspects of the ad. For instance, that consumer might encode the basic functional attributes of the product, whether or not these functions are useful to him/her, whether others would approve of a purchase, and so on. Because these cognitive operations focus on the semantic meaning of the product (for that consumer), such analyses are termed "deep." And, in the LOP framework, such semantic traces are expected to be more retrievable. Thus, deep processing should produce better memory performance (e.g., higher recall scores).

In summary, I wish to emphasize three major postulates of Craik and Lockhart's original LOP framework. First, information enters memory as a coded representation, memory trace, or knowledge, simply by being encoded. Second, and perhaps just as obviously, the type or level of encoding operation determines the form and content of the stored memory trace (as well as its association with other traces). The third and the major LOP idea is that deeper, more semantic encoding operations produce stronger, more durable, more permanent, more memorable memory traces/knowledge than do nonsemantic, sensory analyses. Operationally, this "depth effect" is frequently measured in terms of the differences in recall or recognition scores for stimuli presumably encoded via sensory or semantic analysis. In their LOP framework, Craik and Lockhart presented an attractive, simpler alternative to the static multistore theories of memory. The LOP ideas focused research attention on the initial processes that occur early in the stages of an information processing model. Introduced at a time when process-oriented models were becoming popular, the LOP framework attracted a great deal of attention.

CHANGES AND MODIFICATIONS IN LOP

In the eight years since its introduction, the LOP ideas have changed somewhat (cf. Cermak and Craik 1979; Lockhart, Craik, and Jacoby 1976). Several of the original propositions have been modified or dropped and new considerations have evolved. The changes in LOP were due to both conceptual criticisms and troublesome empirical results. Although many studies obtained support for the basic LOP "depth effect" on memory performance (Craik and Tulving 1975, Hyde and Jenkins 1969, Lane and Robertson 1979), others did not show semantic processing to enhance memory performance compared to nonsemantic processing (cf. Stein, Morris, and Bransford 1978). And, many memory theorists criticized the LOP framework on conceptual grounds (cf. Baddeley 1978, Nelson, Walling, and McEvoy 1979, T. Nelson 1977, Tulving 1979). Only the most important of these changes and criticisms are described below.

Ordering of Levels

An early casualty among the original LOP ideas was the proposition that the analytic operations that occur during the encoding of a stimulus begin at shallow, sensory levels of analysis and proceed through successively deeper levels to deep, semantic analyses. It has become widely accepted that encoding processing can be and often is maintained at one level of analysis. It is now common to discuss variation in the encoding operations at a particular level or depth in terms of "breadth" or "elaboration" of processing (Craik and Tulving 1975). These and other ideas regarding elaboration are discussed in more detail later.

Manipulations of Depth

A second evolutionary change in the LOP framework concerns the increasing recognition that researchers are limited in the extent to which they can control the type of encoding operations engaged in by a subject (cf. Treisman 1979). In the typical LOP study, the experimenter varies the "orienting instructions" given to the subject. The directions are intended to focus the subject's encoding processes on certain characteristics of the stimuli (e.g., either sensory or semantic). The usual operationalization is to ask, for each stimulus, a question that seems to require either sensory or semantic encoding analyses to produce the correct answer. Typically, the presumed level or depth produced by each orienting question is determined by the experimenter's intuitive judgment. Commonly, subjects are shown a series of words, one at a time, each preceded by a question. For example, questions such as "Is this word in capital letters?" are assumed to create sensory encoding processes, whereas answering the question "Is this a mammal?" presumably requires semantic analyses. Subjects are told to answer each question with a yes or no response and response latencies are often measured as well as accuracy.
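The procedure just described can be sketched as a small program. This is only an illustrative rendering of the paradigm's logic; the words, questions, and level labels are hypothetical examples, not materials from any actual LOP experiment.

```python
# Toy sketch of the incidental-learning LOP procedure: each word is
# preceded by an orienting question whose presumed encoding level is
# assigned by the experimenter's judgment.
trials = [
    {"word": "TABLE", "question": "Is this word in capital letters?", "level": "sensory"},
    {"word": "whale", "question": "Is this a mammal?", "level": "semantic"},
]

def run_trial(trial, answer):
    """Record the subject's yes/no answer to one orienting question."""
    return {**trial, "answer": answer}

responses = [run_trial(t, "yes") for t in trials]

# Later, an unanticipated retrieval task asks which words the subject
# remembers; recall scores are then compared across the presumed levels.
by_level = {}
for r in responses:
    by_level.setdefault(r["level"], []).append(r["word"])
print(by_level)
```

The point of the sketch is that "depth" enters the design only through the experimenter's labeling of the orienting questions, which is exactly the assumption the findings below call into question.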

Recent findings have shown that subjects may engage in some semantic processing even when the orienting question requires sensory analysis, and vice versa (cf. Bransford, Franks, Morris and Stein 1979, Nelson, Reed and McEvoy 1977, among many others). That is, both sensory and semantic encoding analyses may occur in both sensory and semantic tasks. The usual explanation suggests that the encoding involved "well-practiced processes based on substantial past experience." That is, familiar stimuli (by definition, I suppose) have been processed, semantically and/or nonsemantically, many times before. For such stimuli, the encoding processes become highly practiced and may be essentially automatic (cf. Shiffrin and Schneider 1977). Thus, a person may have little conscious control over the encoding operations that occur when he/she is exposed to such stimuli--no matter what the orienting instructions seem to require. In sum, it may be impossible to direct, stop, or block certain types of encoding analyses. I prefer to say that one's existing knowledge structures are activated by the stimulus and are "driving" the encoding processing. This notion will be expanded later in the paper.

Criterial Task

Another evolution of the LOP paradigm involves the current widespread interest in the type (one might say, level) of criterial task used in measuring a subject's memory performance. This issue is often described as the "cue specificity" phenomenon, an idea introduced by Tulving and Thomson (1973) and recently elaborated by Tulving (1979). Tulving argues that the similarity or compatibility relationship between the coded memory trace and the cue(s) provided in the retrieval/criterial task is sufficient to explain memory performance phenomena. As this compatibility between trace and cue increases, so does the subject's ability to retrieve the original trace. In contrast, Craik (1979, Fisher and Craik 1977) argues that the qualitative aspects of the trace itself--e.g., its "depth," its degree of "elaborateness," its distinctiveness, all qualities produced by the original encoding operations--are also necessary to fully explain memory performance.

It seems clear that both the original encoding operations that create the memory trace and the compatibility of the trace with the cues available at retrieval are necessary to explain memory performance. That is, the effectiveness of a particular retrieval cue, because of its compatibility with the memory trace, is partially dependent upon the qualitative nature of the trace itself. These features are, of course, a function of the encoding operations that occurred during initial exposure. Tulving's basic point, however, that the memory performance obtained in a LOP study is not solely a function of the original encoding operations, is important and should be given greater attention in future LOP research. The cues present in the retrieval task play a role too. Researchers must be aware that the degree of compatibility between retrieval cues and the memory trace will have powerful effects on measured memory performance. The criterial task must be carefully constructed so as to be appropriate for (i.e., reasonably compatible with) the presumed memory trace. This would seem to argue for experiments involving multiple retrieval tasks.
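One simple way to think about trace-cue compatibility is as feature overlap. The following toy rendering is my own illustrative assumption, not Tulving's formal account; the feature sets are invented for the example.

```python
# Toy illustration: retrieval succeeds to the extent that the cue's
# features reinstate features stored in the trace at encoding.
def compatibility(trace_features, cue_features):
    """Fraction of trace features reinstated by the retrieval cue."""
    overlap = set(trace_features) & set(cue_features)
    return len(overlap) / len(set(trace_features))

# A semantically encoded trace for the word "whale":
semantic_trace = {"whale", "mammal", "large", "ocean"}

good_cue = {"mammal", "ocean"}      # semantically compatible cue
poor_cue = {"capital letters"}      # sensory cue, incompatible with the trace

print(compatibility(semantic_trace, good_cue))
print(compatibility(semantic_trace, poor_cue))
```

On this rendering, a "deep" trace is not retrievable in the abstract; it is retrievable relative to a particular criterial task, which is why multiple retrieval tasks are advisable.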

Summary

Over the last few years, the LOP framework has evolved and changed. The original simple proposition that greater depth leads to stronger memory is now more complex. The basic predictions derived from the LOP framework now have more limitations, restrictions, and qualifications. Despite these accommodations, however, the LOP ideas have had a useful effect on memory research. These ideas clearly pointed out the direct relationship between perceptual processes (encoding operations) and memory. And, the LOP framework has helped point toward a "processing" view of memory, away from a more static, structural view. However, the LOP framework retains a number of serious problems. Several of these issues become particularly obvious and critical when one attempts to apply the LOP conceptual paradigm to situations in which people are processing complex, multidimensional stimuli such as advertisements. Before discussing the relevance of the LOP paradigm for understanding such effects, let us briefly examine these shortcomings.

REMAINING PROBLEMS WITH LOP

The Depth Concept

Perhaps the most widely recognized and most serious problem with the LOP paradigm involves the concept of "depth" itself. There is no clear definition of depth beyond general references to semanticity (cf. Treisman 1979). Measures of depth independent of its observed effect on memory performance have not been developed. Thus, researchers are left in the uncomfortable position of inferring that encoding processing was deep (or deeper than some other condition) if subsequent memory performance was strong (or stronger). Without an independent measure of depth, such circular reasoning does not provide compelling evidence for the concept of "depth."

Curiously, memory researchers have not mounted a concerted effort to develop alternative measures of "depth." In fact, much of the published empirical research can be seen as mere demonstrations of presumed "depth" effects on memory performance. Some researchers, however, have grappled with this thorny problem. For example, Craik and Tulving (1975) examined "time spent in encoding" as a surrogate measure of depth, reasoning that deeper, more semantic analyses require more time than do sensory analyses. However, time was not always directly related to depth. For certain tasks and informational materials, nonsemantic sensory encoding operations may take longer than more semantic analyses.

Another possible index of depth that has received some attention is expended processing capacity. Here the logic is that encoding operations involving more semantic analyses require more cognitive processing capacity than do less semantic encoding operations. Eysenck and Eysenck (1979) produced data supporting this notion, but indicated that, like processing time, expended capacity will not be a perfect indicator of depth. In fact, it would seem that both the amount of expended cognitive capacity and the time required to carry out a semantic encoding operation depend critically upon the content and organization of the relevant existing knowledge stored in memory. If that knowledge structure is complex and highly organized, and has been used frequently over previous experiences, then deep semantic analyses may occur rather quickly without expending large amounts of cognitive processing capacity. We will return to a discussion of knowledge structures later in the paper.

The Depth Mechanism

A second major problem with the LOP framework concerns the lack of clarity regarding the "mechanism" or the "factors" responsible for the depth effect. Specifically, what is it about "deeper" memory traces that typically produces better memory performance than for more sensory codings? What is it about semantic analyses that tend to produce better remembered memory traces? As noted above, it is not particularly enlightening to be told that deep levels of processing create better remembered traces because the traces are more semantic. Although several other concepts besides semanticity have been proposed to account for the depth effect, many of these provide only different, perhaps more pleasing metaphors, not greatly enhanced explanation or understanding of the causal processes involved.

Craik (1979; Jacoby and Craik 1979), Klein and Saltz (1976), and others have argued that the crucial variable producing the depth effect is the "distinctiveness" of the memory trace. Unfortunately, what makes a memory trace distinctive and the processes by which it becomes distinctive are no more clear than the analogous issues regarding depth. Basically, the concept of distinctiveness refers to the "contrastiveness" of the coded trace, relative to other stored traces. In other words, a memory trace is distinctive if it can be easily differentiated from other traces, presumably at the time of retrieval. Semantic processing is assumed to create more unique, more distinctive traces. Note that distinctiveness is context-specific. That is, the distinctiveness of a memory trace depends upon the content of the knowledge structure within which it is stored. For instance, the encoded representation of the four-wheel-disk-brakes attribute may be a distinctive trace when associated with a passenger car memory structure. But such a memory trace would be less distinctive in the context of knowledge about racing cars.

Another concept that has been proposed to account more precisely for the LOP depth effect is elaboration (cf. Craik and Jacoby 1979, Craik and Tulving 1975). Here, the "elaborateness" of the memory trace is seen as the critical feature causing superior memory performance. In general, however, the distinguishing features of more or less elaborate codes have not been well specified. For instance, terms like "richer" or "more complex" have been used to describe elaborate codes. Recently, Anderson and Reder (1979) defined elaboration more specifically in terms of the number and types of traces produced during encoding. This approach recognizes that different encoding operations will be more or less productive of coded representations or traces. Anderson and Reder argue that semantic analyses naturally produce more traces than do sensory operations, and that semantic traces tend to be more thoroughly interconnected than are sensory codes. They further propose that the better memory performance with elaborative (typically semantic) processing is due to the multiple, redundant traces associated with the target trace. These elaborations are connected in memory with each other and the to-be-remembered trace. The redundancy provided by these multiply-associated traces enhances the probability that the retrieval task will activate one of the elaborations, which in turn could activate the target trace.
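The redundancy argument can be made concrete with a little arithmetic. The following is a toy illustration of why multiple associated traces help, not Anderson and Reder's actual model; the independence assumption and the probability values are mine.

```python
# Toy model: if each of k traces associated with the target has an
# independent probability p of being activated by the retrieval cue,
# the chance that at least one route to the target is opened grows
# quickly with k.
def p_retrieval(p, k):
    """Probability that at least one of k associated traces is activated."""
    return 1 - (1 - p) ** k

shallow = p_retrieval(0.3, 1)  # lone sensory trace, no elaborations
deep = p_retrieval(0.3, 5)     # target plus four interconnected elaborations

print(round(shallow, 3))  # single-route retrieval
print(round(deep, 3))     # redundant, multiply-associated retrieval
```

Even with each individual route held at the same activation probability, the elaborated trace is retrieved far more often, which is the intuition behind attributing the depth effect to elaboration rather than to semanticity per se.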

It seems clear that elaborative processing requires a pre-existing structure of stored knowledge to be used in elaborately encoding the incoming information. In fact, most LOP researchers do recognize, although usually only in a general way, that any encoding operation--whether semantic or sensory or elaborate or simple--requires the use of existing knowledge in memory. (See Naus and Halasz 1979 for a thorough discussion of this issue). However, researchers have not explicated the processes by which existing structures of knowledge are used in encoding operations.

Episodic vs. Semantic Memory

Another major problem with the LOP framework, especially for those wishing to extend its applications from simple words to the encoding of natural, more complex information, is its nearly complete focus on phenomena of episodic memory. Although this limitation has become widely recognized (see Cermak and Craik, 1979), there has been little empirical attention to semantic memory effects of LOP (cf. Smith, 1978).

Tulving (1972) proposed the distinction between episodic and semantic memory. Because certain knowledge stored in memory seems best characterized as the encoded representations of actual events that occurred in the past, such knowledge can be termed episodic. In contrast, one's general abstracted knowledge about facts and principles is also stored in memory. Such knowledge is more semantic or meaningful--thus, the term, semantic memory. Much knowledge in semantic memory seems to be stored independently of representations of the event when that knowledge was acquired. Thus, two types of knowledge seem to exist and it may be useful to consider two types of memory, episodic and semantic. Clearly, however, the two types of memory representations interact; semantic knowledge must be derived from episodic representations. Therefore, it may be better to consider episodic and semantic memory as the endpoints on a continuum of memory representations. In this sense, the episodic-semantic memory distinction may be more useful as an overall conceptual perspective for memory research than as an actual physical or psychological distinction (cf. Naus and Halasz 1979). However, the distinction does point up the fact that most memory research has focused on issues involving episodic memory. In contrast, semantic memory phenomena, which may be more important in the explanation of real-world behaviors, have been relatively neglected (see Olson, 1978b).

The original LOP formulation provides an encoding processing explanation for subjects' ability to retrieve episodic knowledge from memory. In the typical incidental (nonintentional) learning paradigm, subjects are exposed to several stimuli such as words, and, via different orienting questions, are induced to engage in different encoding operations. Then at some later time, subjects are given an unanticipated retrieval task in which they are to recall or recognize as many original stimuli as they can remember having experienced. This task clearly involves retrieval from episodic memory. Are such phenomena, however, of major importance in explaining complex real-world behaviors? For instance, why is it particularly valuable for an advertising manager to know that X% of the audience for a television commercial remembered seeing an ad the night before? It would seem more important to know the content of the general, abstract product knowledge that members of the audience acquired from that exposure and subsequently integrated into their semantic memory. Such aspects of semantic memory, rather than the episodic representation of the ad exposure event, seem more likely to affect future purchase decisions.

Although semantic memory may be of greater ecological interest, very few studies have examined the impact of alternative encoding operations on semantic knowledge (see Arbuckle and Katz 1976 for an exception). As noted above, most LOP researchers recognize that semantic knowledge structures are necessary for a person to engage in deep, semantic encoding operations (cf. Craik and Lockhart 1972), but they have not been concerned with the processes by which existing knowledge structures in either episodic or semantic memory are used during encoding operations.

LOP AND EXISTING KNOWLEDGE

At a general level, Olson (1978a, 1978b) has discussed how existing knowledge might be used in encoding processes. His explanation makes use of memory schemas, highly organized structures of knowledge, as the necessary element from semantic memory. As described in Olson (1978a), exposure to a stimulus/context situation activates the appropriate schema (or schemas) which is thus made available for use in the encoding analyses. The activated schema then "drives" the encoding process until completed or until another schema is activated to further control the processing. Although the activation processes are not yet well understood, progress is being made in their explication (see Norman 1979).

Schemas are considered to contain one's knowledge regarding stimuli and contexts, as well as processing rules and operations for responding to that stimulus (see Norman 1979, Bobrow and Norman 1975, Olson 1978b). Schemas organize one's stored knowledge regarding specific topic/context domains. As such, they provide the cognitive framework within which new information regarding that topic/context domain is interpreted and assigned meaning, i.e., encoded. Over a lifetime of experience, people acquire vast numbers of schemas. It seems reasonable to assume that some people may possess alternative schemas for a single topic that are relevant for various levels of processing. For instance, a creative staff member in an advertising agency may have a well-developed knowledge structure regarding the physical layout of print ads, headlines, typefaces, pictorial images, etc. Such knowledge might be considered as a relatively nonsemantic schema in that it concerns the physical aspects of the stimulus. An account executive in the same agency may possess an entirely different schema for print advertisements. Each person will also have different schemas suited to processing ads in his or her role as a consumer. In contrast to "sensory schemas," semantic schemas organize a person's abstract knowledge regarding the meaning of a stimulus. Yet other schemas may contain and interrelate both physical and semantic aspects of a stimulus. At present, the notion of schema is admittedly metaphoric, a convenient and heuristic way of conceptualizing the knowledge structures that exist in memory, rather than a clearly operationalizable construct. However, it should be possible to measure, at least indirectly, aspects of memory schemas (cf. Markus 1977, Olson and Muderrisoglu 1979).
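The advertising-agency example above can be sketched as a small program. Everything here is a hypothetical illustration of the schema metaphor: the schema names, their contents, and the encoding rule are my assumptions, not a validated model.

```python
# Toy sketch: a schema pairs stored knowledge about a topic/context with
# the level of encoding it "drives". An activated schema determines which
# aspects of the stimulus get encoded into the trace.
schemas = {
    "print_ad/creative": {       # hypothetical creative staffer's schema
        "kind": "sensory",
        "features": ["layout", "typeface", "dominant colors", "imagery"],
    },
    "print_ad/consumer": {       # hypothetical consumer-role schema
        "kind": "semantic",
        "features": ["product function", "personal usefulness", "social approval"],
    },
}

def encode(stimulus, active_schema):
    """Encode only those aspects of the stimulus the active schema knows about."""
    schema = schemas[active_schema]
    trace = [f for f in stimulus if f in schema["features"]]
    return {"level": schema["kind"], "trace": trace}

ad = ["layout", "typeface", "product function", "price"]
print(encode(ad, "print_ad/creative"))
print(encode(ad, "print_ad/consumer"))
```

The same advertisement yields different traces, at different presumed levels, depending entirely on which schema is active, which is the sense in which existing knowledge "drives" encoding.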

I believe that it is necessary to consider a consumer's unique existing schemas or knowledge structures in order to understand the encoding process and the traces produced during encoding (Olson 1978a, 1978b). This need may be made clearer by considering how consumers might encode a relatively complex stimulus such as a print advertisement. Certainly, several consumers would not be expected to generate identical memory traces upon exposure to a single ad. A rather direct explanation for such variation concerns the individual differences in memory schemas which, through their unique effects on the encoding process, produce different memory traces. Interestingly, such individual differences issues were not obvious in the verbal learning paradigm favored in LOP research. Because the English-speaking subjects certainly knew nearly all of the simple words typically used, their basic knowledge schemas regarding each word were probably quite similar. If so, subjects' existing knowledge structures or memory schemas would not have a visible effect in the typical LOP study, not because they were not activated and used by subjects during encoding, but because subjects had very similar structures.

However, for the more complex stimuli of interest to consumer researchers, such as products, brands, and attributes, the variable content of consumers' knowledge structures may have a more critical effect on encoding. For instance, if a person does not possess much semantic knowledge about nutrition and nutrients, and their relationship to breakfast cereal, it is not likely that that person will (or can) engage in deep semantic processing of such information contained in an advertisement. Semantic processing would seem to require preexisting semantic knowledge structures or schemas. As the semantic content of a schema becomes more complex, abstract, interrelated, etc., deeper semantic encoding operations will tend to be more likely, easier, and more efficient. The same relationships should hold for sensory encoding processing. Unless one "knows" the physical characteristics of an automobile, for instance, the degree to which he or she can engage in elaborative encoding at a sensory level will be limited. Stated differently, one encodes the characteristics of a stimulus in terms of known characteristics, whether sensory or semantic. All this seems quite simple and obvious upon stating it. However, few LOP researchers have noted these ideas as explicitly as stated here. Stein, Morris, and Bransford (1978) may have said it best, "Rather than emphasize the superiority of semantic over nonsemantic processing, it may be more useful to ask how people use what they know (whether this knowledge is nonsemantic, semantic, etc.) to more precisely encode and retain information" (p. 708).

In summary, I have tried to make a convincing case for the important role of existing knowledge structures in encoding processes. Although I was unable to deal with the basic issues in greater depth, selected references have been cited to which the interested reader may refer for more elaboration. Before concluding, let me suggest two specific directions that future research might take if these arguments have been persuasive.

FUTURE RESEARCH DIRECTIONS

One avenue for future research would be to carefully observe, across a variety of settings, the effects of different schematic knowledge structures on the encoding process. This approach has been called for by others, and interest in doing so seems to be increasing (cf. Brown 1979, Naus and Halasz 1979). A variety of research issues could be addressed. By what processes does existing knowledge affect encoding analyses? What cues (e.g., stimulus vs. context) activate existing schemas? How does varying schema content (more or fewer concepts, more or less strongly interrelated, finer or broader discriminations, etc.) affect encoding operations at various levels? Essentially, I recommend careful observational experiments (cf. Lachenmeyer 1970), accompanied by a heavy dose of inductive inference (cf. Platt 1964), before attempting to verify rather narrow hypotheses. First, we should attempt to learn more about what is going on during encoding.

The other research direction I advocate has seldom been discussed in the cognitive literature. An exception is Bransford, Franks, Morris, and Stein (1979), who note that nearly all the methodological paradigms in LOP/memory research test only the remembering of the encoded inputs (almost always words) that occurred during the initial exposure(s). Perhaps this emphasis on episodic memory should not be surprising given the semantically impoverished stimuli (almost always single words with no context). Few studies have measured what was "learned" during exposure. What was the semantic nature of the memory trace? Did it contain abstract meaning derived from that initial processing event? For the most part, memory research has measured only effects on episodic memory, not semantic memory (Naus and Halasz 1979). Researchers have measured only memory for the physical stimuli presented in the exposure event. The events, factors, and processes that strongly affect retrieval from episodic memory may not be the same as those influential for retrieval of knowledge from semantic memory.

Moreover, the relevant dependent variables themselves may not be similar for episodic vs. semantic memory phenomena. Spurred on by Tulving's strong arguments regarding cue specificity effects in the retrieval task and the subsequent need to manipulate retrieval cues, aided recall seems to be the currently dominant operationalization of memory performance. Early LOP research, however, often used free recall or recognition measures. One could argue that the paired-associates task involved in cued recall for single-word stimuli has minimal relevance for many real-world behaviors of importance in consumer behavior. Klein and Saltz (1979), among others, have argued that we should move away from the virtually total dominance of the cued recall paradigm and begin to investigate memory performance with free, noncued recall tasks.

Lockhart (1979), among others, recognized that increasing the mundane realism of LOP experiments by using stimuli, encoding tasks, and retrieval tasks with ecological validity is vitally important. Retrieval tasks could use broad, general cues similar to those that seem to trigger recall or recognition in real-world situations. Such an emphasis would return research attention to the critical issue addressed by the LOP framework; namely, how does initial processing (as influenced by the task situation, the information materials, and existing knowledge structures) affect memory?

In conclusion, my second recommendation for future research on encoding/memory processes is to begin to study more than episodic memory effects. We should begin asking how various encoding operations affect (a) the content of existing knowledge, (b) the internal organization of that knowledge, and (c) the relationships between the schema and other schemas.
