
Criteria for unconscious cognition: Three types of dissociation

THOMAS SCHMIDT
Universität Gießen, Gießen, Germany

and

DIRK VORBERG
Technische Universität Braunschweig, Braunschweig, Germany

To demonstrate unconscious cognition, researchers commonly compare a direct measure (D) of awareness for a critical stimulus with an indirect measure (I) showing that the stimulus was cognitively processed at all. We discuss and empirically demonstrate three types of dissociation with distinct appearances in D–I plots, in which direct and indirect effects are plotted against each other in a shared effect size metric. Simple dissociations between D and I occur when I has some nonzero value and D is at chance level; the traditional requirement of zero awareness is necessary for this criterion only.

Author note: Part of this work was supported by Deutsche Forschungsgemeinschaft Grant Schm1671/1-1 to T.S. Correspondence may be sent to T. Schmidt, Abteilung Allgemeine Psychologie, Universität Gießen, Otto-Behaghel-… (….uni-giessen.de or d.vorberg@tu-bs.de).

[what do you see?/
nothing, absolutely nothing]

—Paul Auster, "Hide and Seek" (in Auster, 1997)

The traditional way of establishing unconscious perception has been to demonstrate that awareness of some critical stimulus is absent, even though the same stimulus affects behavior (Reingold & Merikle, 1988). To show this, two types of measurement are needed. First, the degree to which the critical stimulus reaches conscious awareness must be assessed, for example by asking observers whether or not they are aware of it or by testing their ability to identify or detect it. This is called a direct measure (D) of processing, because the task explicitly requires some type of direct report on the perception of the critical stimulus. Second, one must assess the degree to which the critical stimulus, even if not consciously perceived, affects some other behavior. This is called an indirect measure (I), because responses are usually made to some stimulus other than the critical one, with the latter exerting an influence on processing the former. This research paradigm of comparing direct and indirect measures has been called the dissociation procedure (Reingold & Merikle, 1988, 1993).

The traditional criterion for perception without awareness is to establish that the direct measure is at chance level and that the indirect measure has some nonzero value. This so-called zero-awareness criterion may seem like a straightforward research strategy, but historically it has encountered severe difficulties. From the beginning, the field was plagued with methodological criticism concerning how to make sure that a stimulus was completely outside of awareness. This criticism is still at the heart of the most recent debates (e.g., in Erdelyi, 2004) and has overshadowed the most thought-provoking findings in unconscious cognition (e.g., Kunst-Wilson & Zajonc, 1980; Marcel, 1983). It has placed the burden of proof so one-sidedly on the shoulders of the unconscious cognition hypothesis that the zero-awareness criterion seems to have been more effective in hindering scientific progress than in helping it.

The zero-awareness criterion critically depends on showing that the value of the direct measure is at chance level, which introduces the statistical problem of corroborating a null hypothesis. In principle, this is not a substantive theoretical problem: it could be dealt with pragmatically by setting appropriately strict criteria for maximum effect sizes in the direct measure and for minimum statistical power to detect such effects at a fixed level of significance (Murphy & Myors, 1998). But binding standards of this sort have never been established in applied statistics or in the field of unconscious cognition, and many seminal studies have invited attack by employing somewhat lenient criteria or low statistical power for "proving the null" (e.g., Dehaene et al., 1998; Marcel, 1983). It may thus come as little surprise that in 1960, Eriksen concluded in an extensive review of the literature that no unequivocal evidence for unconscious perception existed at all; but it is irritating that a quarter of a century later, a new review by Holender (1986) reached the same conclusion with essentially the same arguments. In a recent article, that author has defended the theory that at least semantic processing is exclusively conscious (Holender & Duscherer, 2004).

These skeptics felt that the hypothesis of unconscious cognition should be tested as rigorously as possible. In their view, proponents of unconscious processing have to refute what we will call the null model of unconscious cognition. Under the null model, there is only one type of information processing, conscious processing, which accounts for purportedly unconscious effects and could be revealed as the sole source of these effects by some sufficiently sensitive measure. In this view, the problem of demonstrating unconscious perception simplifies to disproving the null model.

Our purpose here is to explore several methods for refuting the null model, by distinguishing three types of dissociation between direct and indirect measures. We start by specifying the conditions under which behavioral measures can be compared at all. Then we state a set of minimal measurement-theoretical assumptions that must be met for any reasonable kind of comparison between direct and indirect measures, showing that strong additional requirements must be met for the traditional zero-awareness criterion, but also that alternative criteria exist that rest on much milder assumptions. Because these other criteria do not require the critical stimulus to remain outside awareness, they turn out to be more powerful for detecting unconscious influences. Through a comparison of the pros and cons of the different approaches, we will argue that zero awareness of critical stimuli is neither necessary nor desirable for establishing unconscious perception.
We will then demonstrate each type of dissociation with empirical data from a response priming paradigm (Vorberg, Mattler, Heinecke, Schmidt, & Schwarzbach, 2003, 2004) and show how some previously proposed criteria for unconscious cognition can be interpreted as special cases of our framework. Although we will deal primarily with unconscious visual perception, our results generalize naturally to other areas of unconscious cognition and, in fact, to any field of research that involves dissociations between two or more variables.

Avoiding D–I Mismatch

The most important requirement for any direct measure is that it be a valid measure of those conscious inputs that might explain nonzero performance in the indirect task; in other words, D must be a valid measure not of conscious processing per se, but of those sources of conscious processing that are potentially relevant for I. For that reason, Reingold and Merikle (1988) have argued that the tasks used to measure D and I should be directly comparable. Otherwise, there may be D–I mismatch whenever D is measuring something different from the conscious information actually driving I.

In our view, D–I mismatch is avoided if (1) stimuli are identical in both tasks, (2) the same stimulus features are judged in both tasks, and (3) the assignment of stimulus features to motor responses is the same in both tasks. Ideally, then, direct and indirect measures should be based on tasks that are identical in all respects except which stimulus serves as target. In a priming experiment as typically employed in unconscious perception research, this would mean that participants always perform the same feature discrimination on the same stimuli and with the same set of responses: once with respect to the primed target (indirect task), and once with respect to the prime itself (direct task).

Unfortunately, D–I mismatch has been the rule rather than the exception.
For example, in Dehaene et al.'s (1998) study, the indirect task was to indicate as quickly as possible whether a target number was numerically smaller or larger than 5, in the presence of masked number primes that were also smaller or larger than 5. The indirect measure was the difference in response times when the response evoked by the prime was congruent or incongruent with that evoked by the target (for example, when the prime was smaller but the target was larger than 5, as compared with when both were larger than 5). To match this indirect task, the optimal direct task would have asked for the same feature discrimination, namely, deciding whether the prime was larger or smaller than 5, because this was the information in the prime driving the indirect effect. Instead, the authors employed two direct tasks, detection of the primes against an empty background and discrimination of the primes from random letter strings, neither of which captured the critical distinction of whether the prime was smaller or larger than 5.

As another example, Draine and Greenwald (1998, Experiment 1) used semantic priming effects on the classification of target words (e.g., pleasant vs. unpleasant) as the indirect measure. The direct measure, however, involved discriminating the prime words from letter strings consisting of Xs and Gs. These tasks differed not only in the stimuli employed (the XG strings were never presented in the indirect task), but also in the stimulus dimension judged.
Because the XG task was supposedly easier than the semantic classification task, it was argued that failing to discriminate words from XG strings would give even more convincing evidence for unconscious perception than would failure to classify the words semantically. However, the direct task may have invited a feature search for Xs rather than a pleasant–unpleasant discrimination of the primes, so different aspects of conscious processing might have driven the two tasks. D–I mismatch could have been avoided if both tasks had involved exactly the same prime stimuli (i.e., no XG strings), as well as the same type of semantic classification (i.e., categorizing primes as pleasant or unpleasant). The use of a response window technique in the indirect but not the direct task further complicates interpretation of this study.

A more intricate example of D–I mismatch comes from the seminal study by Neumann and Klotz (1994; see also Jaskowski, van der Lubbe, Schlotterbeck, & Verleger, 2002). In the indirect task, participants performed a speeded discrimination of whether a square was presented to the left or right of a diamond, using a one-to-one mapping of stimulus configurations to responses. This

target pair was preceded either by a pair of congruent or incongruent primes (a square and a diamond in the same configuration as the targets or in the reverse configuration, respectively) or by a neutral prime pair (e.g., two diamonds). In the direct task, however, participants had to classify the prime pairs as neutral or nonneutral, such that the neutral prime pair was mapped onto one response and both remaining prime pairs onto the other response. Thus, even though the direct and indirect tasks employed identical stimuli, the direct task used a more complex, and presumably more difficult, stimulus–response mapping (Macmillan, 1986), which may have underestimated any true direct effect.

Properties of Direct and Indirect Measures

In general, a stimulus may give rise to some type and amount of perceptual information, which we assume can be represented by some quantitative variables. The null model that we want to refute assumes that one such variable is sufficient for representing all relevant perceptual information, in contrast to models assuming that more than one variable is necessary. To distinguish the null model from these alternatives, it suffices to pit it against the predictions of models with exactly two variables. It is unimportant formally what types of information are represented by the variables. In our context, and without loss of generality, we may label them c and u, denoting conscious and unconscious information, respectively.

To keep the model general, we allow direct as well as indirect measures to be affected by either type of input, and define them as functions of two variables, D = D(c, u) and I = I(c, u) (see section 1 of the Appendix for formal definitions). The logic of our approach is to refute the null model's claim that u = 0 by showing that certain observable data patterns of D and I are incompatible with this assumption.

A sensitive measure of cognitive processing should be able to reflect changes in the type of information it is intended to measure. A minimal requirement is that, all other things being equal, an appropriately coded measure should not decrease when one of its arguments increases; rather, it should either increase or (at worst) remain constant. We will refer to this property as weak monotonicity. In contrast, a measure is said to obey strong monotonicity if increases in an argument must lead to increases in the measure.

Our general model of how direct and indirect measures depend on conscious and unconscious information is summarized in Figure 1A. This model rests on minimal assumptions: First, it is assumed that D and I are constructed to avoid D–I mismatch. Second, both D and I are assumed to be weakly monotonic in argument c and in argument u.
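As a toy numerical illustration of these minimal assumptions (the functional forms below are hypothetical, chosen only to make the monotonicity requirement concrete, and are not from the article):

```python
# Toy instance of the general model: D and I are weakly monotonic
# functions of conscious information c and unconscious information u.
# The linear forms are arbitrary illustrations, not the authors' model.

def D(c, u):
    """Hypothetical direct measure, driven mostly by conscious information."""
    return 0.9 * c + 0.1 * u

def I(c, u):
    """Hypothetical indirect measure, driven by both sources."""
    return 0.4 * c + 0.6 * u

def is_weakly_monotonic(f, grid, eps=0.1):
    """Check that f never decreases when either argument increases."""
    return all(
        f(c + eps, u) >= f(c, u) and f(c, u + eps) >= f(c, u)
        for c in grid for u in grid
    )

grid = [i / 10 for i in range(11)]
assert is_weakly_monotonic(D, grid)
assert is_weakly_monotonic(I, grid)

# Under the null model, u = 0 throughout, so any manipulation that
# increases c can only weakly increase both D and I together.
```
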

(Actually, as shown in the Appendix, this assumption can be weakened for some of the proofs.) The null model is identical to the general model, except for assuming that u = 0 throughout, or in other words that unconscious information does not exist (Figure 1B). Refuting the null model is tantamount to showing that u ≠ 0.

Empirical results can be visualized by plotting indirect effects against direct effects in D–I space (Figure 2). We will see shortly that it is convenient to convert both measures into a shared effect size metric. The different types of dissociation considered here all have distinct appearances in D–I space. For simplicity, we restrict discussion to the first quadrant of D–I space, within which both measures are positive.

Some types of dissociation will require properties of direct and indirect measures beyond those contained in the general model (see section 1 of the Appendix for formal definitions). Measures are called exhaustive with respect to some type of information if they are able to detect any change in that information, that is, if they are strongly monotonic with regard to it. In contrast, measures are called exclusive with respect to one type of information if they are unaffected by changes in the other type. For example, a measure is exhaustive for c if it detects any change in the amount of conscious information, whereas a measure exclusive for u will be unaffected by changes in c. Finally, two measures have the same relative sensitivity with respect to some type of information if a change in that information leads to changes of the same magnitude in both measures; obviously, this comparison requires that the two measures share the same metric. A measure is at least as sensitive as another measure if it changes at least as much as the other in response to a change in relevant information.

A Closer Look at the Dissociation Procedure: Simple Dissociations

We can now examine in more detail the formal difficulties encountered by the traditional zero-awareness criterion. Data patterns that conform to this criterion give rise to a simple dissociation, or values of D at chance level in the presence of nonzero I. If scales are such that zero values correspond to absence of sensitivity to the stimuli (e.g., chance performance in an identification task), data points in a D–I plot that conform to a simple-dissociation pattern line up along the D = 0 vertical (Figure 3A).

The simple-dissociation paradigm has been extensively criticized by Reingold and Merikle (1988, 1990, 1993; Reingold, 2004; see also Shanks & St. John, 1994, on implicit learning). These authors have argued that even if the dissociation procedure succeeds at producing an indirect effect without a direct effect, this is inconclusive evidence for unconscious perception unless some additional assumptions hold. In particular, D must be an exhaustive measure of conscious information (Figure 3B; see section 2 of the Appendix for a proof); if D reflects only some aspects of awareness but not others, absence of a direct effect does not imply absence of awareness, because some conscious information might have gone undetected that might account for the above-chance performance in the indirect task. We will refer to this as the exhaustiveness assumption of simple dissociations.

Actually, the exhaustiveness assumption as originally stated by Reingold and Merikle (1988) is more restrictive

than necessary: Because simple dissociations require D to be zero, D must be strongly monotonic with respect to c at the origin of D–I space only. However, strong monotonicity at the origin still requires a deterministic, noise-free direct measure. Even a noisy measure might approximate this ideal when estimated with high precision, but no empirical measure can be assumed to meet the exhaustiveness assumption in a strict sense.

There is yet another way for a simple dissociation to give conclusive evidence for unconscious processing, namely, when the indirect measure I is exclusive for unconscious information (Figure 3C; see section 2 of the Appendix for a proof). Because I is then affected by u only, unconscious processing is implied whenever I is above zero, whatever the value of D. We call this the exclusiveness assumption

of simple dissociations. Of course, researchers believing that they possessed such a measure would have no reason for using the dissociation procedure in the first place, because they could measure unconscious processes directly. In contrast to measures exhaustive for c, however, measures exclusive for u may actually exist and may be revealed by converging evidence.1

It should be noted that Reingold and Merikle (1988, 1990) also state an exclusiveness assumption for the direct measure, demanding that it be sensitive to conscious information only. This assumption is redundant for purposes of interpreting an empirically established simple dissociation, for if D is exhaustive for c and equal to zero, it is immaterial whether or not it is also sensitive to u.

Sensitivity Dissociations

Reingold and Merikle (1988, 1993) have argued for a criterion of unconscious perception that does not require absence of awareness for the critical stimulus. Beyond the minimal assumptions stated in the general model of Figure 1A, this criterion requires that D be at least as sensitive to conscious information as I; we will refer to this as the sensitivity assumption (Figure 4B). As we show in

[Figure 1. (A) Under the general model, direct and indirect measures (D and I) of processing are weakly monotonic functions of both conscious and unconscious information (c and u), which in turn depend only on physical stimulus characteristics. (B) The null model makes the same assumptions, except that unconscious information is disallowed.]

section 3 of the Appendix, the finding that I numerically exceeds D then implies that I is influenced by unconscious information to at least some degree.

Of course, two measures can be numerically compared only if they are expressed in the same metric. One way of establishing this comparability is to express them in effect size units (see the next section). In D–I space, with axes equally scaled, any data point lying above the D = I diagonal is evidence for a sensitivity dissociation (Figure 4A). In section 3 of the Appendix, we sketch a proof that does not require additivity of conscious and unconscious sources of information, in contrast to the original proof by Reingold and Merikle (1988).

How do sensitivity dissociations imply the existence of unconscious information processing? Remember that the null model maintains that indirect effects are driven by conscious information only, so that any observed indirect effect is due to some residual awareness that could be revealed by a sufficiently sensitive direct measure. In order to explain the finding that I > D, the null model would have to claim that both measures reflect conscious information only, but that I is more sensitive to it (which is one traditional objection to simple dissociations). This is exactly what is ruled out by the sensitivity assumption, so the surplus effect in I can then only stem from an additional source of information. Note that the simple dissociation is a special case of the sensitivity dissociation, and that data patterns failing to show that D = 0 often show at least that D < I.

Double Dissociations

A double dissociation can be established if both D and I are measured under at least two experimental conditions (for example, a variation in prime duration or intensity, or some difference in visual masking or attention). Essentially, a double dissociation is an empirical finding that directly contradicts the null model. To establish a double dissociation, one has to show that some experimental manipulation leads to a decrease in the direct measure and at the same time to an increase in the indirect measure, or vice versa (Figure 5A). As we show in section 4 of the Appendix, the only requirement is that both D and I be weakly monotonic in c (Figure 5B). Under this condition, the null model predicts that changing the amount of c can

[Figure 2. A D–I space is obtained by plotting indirect effects against direct ones after converting them to the same effect size metric. We discuss only the first quadrant of the plot, where both D and I are positive.]

[Figure 3. (A) The data pattern required for establishing a simple dissociation. Data points must lie on the D = 0 line. In addition, I must be weakly monotonic in u, as indicated by the curve symbol in panel B. (B) Evidence for a simple dissociation is unequivocal only if the exhaustiveness assumption is met, so that D is an exhaustive measure of all aspects of conscious information. (C) Alternatively, I can be an exclusive measure of unconscious information.]

either change D and I in the same direction or leave them unchanged, but it cannot lead to changes in opposite directions. This is possible only if D and I are driven by at least two different sources of information that respond differently to experimental manipulation. If c and u reflect the only sources of information in the model, the double dissociation implies nonzero u in at least one of the conditions.

If data points are weakly ordered with respect to one axis of D–I space, the null model predicts their weak ordering with respect to the other axis (Di ≤ Dj ⇔ Ii ≤ Ij for experimental conditions i and j; Figure 6A). A double dissociation is suggested whenever two data points show opposite orderings on the two axes, so that they can be connected by a straight line with negative slope [e.g., (Di, …

[Figure 4. (A) … the D = I diagonal. (B) Evidence for a sensitivity dissociation is unequivocal only if D is more sensitive to conscious information than is I. Weak monotonicity is required for all functions.]

[Figure 5. (A) If an experimental manipulation leads to effects of opposite ordering in direct and indirect measures, a double dissociation is demonstrated. (B) Evidence for a double dissociation is unequivocal as long as the minimal assumptions from Figure 1 are met. However, the monotonicity assumption can be abandoned for functions of u.]

that there is some way of affecting D and I in opposite directions (Figure 6C).

There are two important boundary cases. First, data points may change only vertically, that is, in the direction of I but not of D. This is no evidence for a double dissociation, because we only assume that D is weakly monotonic, and the fact that D is constant tells us nothing about the direction of change in c unless D is exhaustive for c. It is thus conceivable that c did actually change in the same direction as I but that the direct measure failed to detect this, so c alone might suffice to explain the data pattern, consistent with the null model. For the same reason, data points changing only horizontally, in the direction of D but not of I, do not suffice to establish a double dissociation under weak monotonicity assumptions.

Double dissociations open up new possibilities for finding dissociable functions of different brain areas (e.g., by considering dissociation patterns among more than two variables in multichannel brain imaging procedures).2
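The three dissociation criteria discussed so far translate directly into checks on (D, I) data points expressed in a shared effect size metric. As a minimal sketch (the function names and the tolerance parameter are our own illustration, not from the article):

```python
# Checks on (D, I) points in the first quadrant of D-I space.
# Names and the `tol` parameter are illustrative choices, not the authors'.

def simple_dissociation(d, i, tol=0.0):
    """D at chance (zero within tolerance) while I is nonzero."""
    return abs(d) <= tol and i > tol

def sensitivity_dissociation(d, i):
    """I numerically exceeds D: the point lies above the D = I diagonal."""
    return i > d

def double_dissociation(points):
    """Some pair of conditions shows opposite orderings of D and I,
    i.e., the two points connect by a straight line with negative slope."""
    return any(
        (di - dj) * (ii - ij) < 0
        for k, (di, ii) in enumerate(points)
        for dj, ij in points[k + 1:]
    )

# Example: condition 1 yields (D, I) = (0.8, 0.3), condition 2 (0.2, 0.9).
# D decreases while I increases, which contradicts the null model.
assert double_dissociation([(0.8, 0.3), (0.2, 0.9)])
assert not double_dissociation([(0.2, 0.3), (0.8, 0.9)])  # same ordering
```

Note that `double_dissociation` only flags a candidate pattern; as the boundary cases above show, purely vertical or purely horizontal changes are deliberately not counted, because the product of differences is then zero, not negative.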

For instance, brain regions of interest can be examined pairwise for double dissociations, and the two-way double dissociations developed here can be readily generalized to dissociations among three or more variables.

A concept of double dissociations very much akin to ours has long been traditional in neuropsychology and medicine. For example, two brain functions are said to be doubly dissociable if one lesion impairs function A but not function B, and another lesion does the opposite (Shallice, 1979; Teuber, 1955). Figure 6D shows a hypothetical clinical data pattern in which long- and short-term retention performance is measured in patients with two types of brain lesions. Assume that Lesion 1 (left panel) impairs long-term retention less than short-term retention, and Lesion 2 (right panel) does the opposite. One can see that this pattern is analogous to our concept of a double dissociation by plotting long-term retention against short-term retention for both types of lesion, yielding two data points, one for each lesion, that can be connected by a straight line with negative slope in an opposition space analogous to our D–I space. Note that all that matters here is the different ordering of short- and long-term retention performance across lesion groups, not the absolute levels

[Figure 6. (A) The null model predicts that all pairs of data points in the D–I plot can be connected by straight lines with positive slope. (B) Any connection with negative slope constitutes evidence for a double dissociation. (C) This holds also for pairs of data points that differ in more than one independent variable. (D) The double-dissociation pattern traditional in neuropsychology is analogous to our approach. In this example, Lesion 1 (left panel) impairs long-term retention less than short-term retention, and Lesion 2 (right panel) does the opposite. (E) Plotting long-term retention against short-term retention for both types of lesion yields two data points that can be connected by a straight line with negative slope.]

of performance. The main difference from our double-dissociation concept is that the neuropsychological dissociation can only be established between different groups of patients, not within single participants.

Dunn and Kirsner (1988) have strongly criticized the neuropsychological double-dissociation logic, arguing that it requires "process-pure" measures of underlying sources of information. As an alternative, they proposed looking for "reversed associations" among experimental tasks (a synonym for the data pattern presented in Figure 6E), not noticing that this was just a novel way of plotting the traditional data pattern. In fact, the underlying assumptions for double dissociations in neuropsychology are similar to those stated in section 4 of the Appendix, even though some provisions must be made for the fact that the dissociation is between groups of participants. In particular, there is no need for process-pure measures: Because double dissociations do not require each task to be completely spared by the lesion affecting the other task, lesions do not have to exclusively influence one cognitive process but not the other.

Shared Metric for Direct and Indirect Effects

Although simple and double dissociations require no assumptions about the scaling of direct and indirect measures, sensitivity dissociations require them to be scaled equally. However, I and D often involve different response metrics, like response time and percent correct measures, which has prompted the claim that tasks should be designed in such a way that D and I are measured on the same type of scale (Reingold & Merikle, 1988). Fortunately, it is not necessary to restrict the tasks in such a way, because it is straightforward to convert differently scaled D and I measures into a shared metric.

The metric we suggest here is the d′ statistic of signal detection theory (SDT; see Macmillan & Creelman, 2005), which is essentially an effect size measure.
SDT assumes that in a stimulus discrimination task, stimulus alternatives are mentally represented as noisy distributions along the stimulus dimension judged. For example, if the task is to decide whether a visually masked stimulus is red or green (Schmidt, 2000), red stimuli are assumed to induce a distribution of values near the "red" end of the subjective red–green continuum, and green stimuli induce a distribution near the "green" end. When a stimulus appears, it generates some value along the continuum; if this value is to the "red" side of a decision criterion, the stimulus is judged to be red, and if the value is to the "green" side, the stimulus is judged to be green. A "green" response to a stimulus that is actually green may be arbitrarily called a hit (H), and a false alarm (F) stands for a "green" response to a red stimulus. Assuming that the stimulus representations are approximately normal with means µred and µgreen and equal standard deviations, sensitivity can be measured as

d′ = z(H) − z(F),  (1)

where z denotes the inverse of the standard normal distribution function and the decision criterion determines where the cutoff between "red" and "green" responses is made (Macmillan & Creelman, 2005). Sensitivity is thus assessed by the difference between the normalized hit and false alarm rates.

Numerous alternative measures for sensitivity and response bias exist, based on different mathematical conceptions of the underlying decision spaces and strategies. The experimental design described here is a two-alternative yes–no design (Macmillan & Creelman, 2005). Equation 1 applies unchanged to detection tasks in which one stimulus is discriminated from noise rather than from another stimulus. Note that different mathematical models underlie other task types (e.g., n-alternative yes–no, forced choice, or matching-to-sample tasks; see Macmillan & Creelman, 2005).

How can response time data be converted into a d′ metric, so that they can be compared with detection, discrimination, or recognition d′s? We consider three different techniques, all of which can be used to inquire whether responding is faster in those conditions that should be speeded by the indirect effect.

In the median-split technique, response times are first classified as "slow" or "fast" in comparison with the overall response time median, and then cross-tabulated with the appropriate experimental conditions (Schmidt, 2002). By arbitrarily defining fast responses to congruently primed stimuli as hits and fast responses to incongruently primed stimuli as false alarms, d′ can be computed from the corresponding frequencies as described above.

In contrast, the ordinal dominance technique makes use of the full cumulative distribution functions (cdfs) of response times on congruent and incongruent prime trials. If congruent primes shorten response times, the relative frequency of response times shorter than some value t for congruent trials will exceed the frequency for incongruent trials, implying that the empirical cdf for congruent trials leads the cdf for incongruent trials.
Plotting cdf_congruent against cdf_incongruent results in an ordinal dominance graph (Bamber, 1975), which is analogous to the receiver operating characteristic curve that results when hit rates are plotted against false alarm rates for different values of response bias. For the equal-variance normal-distribution model, d′ is functionally related to the area under the ordinal dominance graph. Tables for converting area measures into d′ are given by Macmillan and Creelman (2005).

Finally, the effect size technique exploits the fact that d′ as defined in Equation 1 is simply the distance between the means of the two stimulus representations, expressed in standard deviation units of one of the two underlying distributions. In contrast to typical applications of signal detection theory, the probability distributions involved here are directly observable.

Let x̄_congruent and s²_congruent be the observed response time mean and variance for congruent trials, and x̄_incongruent and s²_incongruent be the analogous statistics for incongruent trials. A reasonable estimator of the response time effect (assuming equal sample sizes) is then

d_a = (x̄_incongruent − x̄_congruent) / s_pooled
    = (x̄_incongruent − x̄_congruent) / √[(s²_incongruent + s²_congruent)/2],  (3)

which estimates a generalized effect size measure that expresses the mean difference in units of the pooled standard deviation. Monte Carlo simulations (Vorberg, 2004) indicate that under a wide variety of conditions (normal vs. shifted gamma distributions, equal vs. unequal variances, or contamination by outliers), this measure is generally the most robust of the three, yielding the smallest mean squared error when based on the observed trimmed response time means and their winsorized standard deviations (see Wilcox, 1997).³

CRITERIA FOR UNCONSCIOUS PERCEPTION 497

Three Types of Dissociation: Empirical Examples

Figure 7 shows a reanalysis of data from our own laboratory (see Vorberg et al., 2003, for details) that illustrate how simple, sensitivity, and double dissociations provide converging evidence for unconscious processing in the visual domain. In each trial, participants saw a large arrow stimulus, which also served to visually mask the prime stimulus, a small arrow that had been presented briefly before at the same position (Figure 7A). This arrangement produces a form of strong backward masking of the prime stimulus called metacontrast (Breitmeyer, 1984; Francis, 1997). Because the amount of masking depends on the temporal separation of prime and mask, we varied the stimulus onset asynchrony (SOA) between them. In the indirect task, participants had to indicate as quickly as possible whether the mask pointed to the left or to the right by pressing one of two response keys.
The indirect measure was the response priming effect (Vorberg et al., 2003; see also Dehaene et al., 1998; Eimer & Schlaghecken, 1998; Klotz & Neumann, 1999; Mattler, 2003; Neumann & Klotz, 1994), defined as the difference in response times when the mask was preceded by a congruent prime (pointing in the same direction as the mask) versus an incongruent prime (pointing in the reverse direction). As a direct measure, we asked participants to identify, without speed pressure, the direction of the masked primes. D–I mismatch was avoided by employing identical stimuli and stimulus–response mappings in both tasks and by basing direct and indirect tasks on the same critical stimulus feature, arrow direction. Prime and mask identification tasks were performed in different blocks of trials.

Figure 7. (A) Stimulus timing in Vorberg et al.’s (2003) response priming task (fixation 700 msec, prime 14 msec, SOA 14–84 msec, mask 140 msec). (B) A D–I plot of the data from Vorberg et al.’s (2003) Experiment 1 (abscissa D: prime identification, d′; ordinate I: response priming, d_a). Data points are connected to show their ordering by increasing prime–mask SOA (14–84 msec). Each data point on the D variable is based on 6 participants performing about 3,000 trials each. None of the participants performed above chance in any condition. Error bars indicate ±1 standard error around the mean in all directions, with estimates based on the ipsative procedure recommended by Loftus and Masson (1994).

498 SCHMIDT AND VORBERG

Figure 7B shows a D–I plot of the results.⁴ The abscissa indicates prime identification performance in terms of d′, estimated from the relative frequencies of hits and false alarms (Equation 2); the ordinate indicates the response priming effect in terms of d_a, estimated directly from the response time distributions by the effect size technique (Equation 3). Priming effects strongly increased with SOA, whereas prime identification performance was essentially at chance, with none of the 6 participants exhibiting better-than-chance accuracy in as many as 3,000 trials. Data points line up along the D = 0 line and conform to the simple-dissociation pattern. There is also evidence for a sensitivity dissociation, because all data points but one lie clearly above the D = I diagonal.

Figure 8 shows what happened when the visibility of the primes was altered by varying the durations of primes and masks. This manipulation left priming effects unchanged, so all curves rise with prime–mask SOA in the vertical direction. However, masking (i.e., the degree to which metacontrast affected performance in the direct task) strongly depended on the exposure durations of primes and masks. Masking of 14 msec primes by 42 msec masks was quite efficient but not perfect, so that data points do not line up along the D = 0 line. Instead, D tends to increase with SOA, yielding a positively sloped curve in D–I space. However, since most data points lie above the D = I diagonal, there is evidence for unconscious processing by the sensitivity criterion. When mask duration was reduced to 14 msec, primes became more visible, which is reflected in the shift of the curve to the right. Under this condition, most data points fell on or below the diagonal, so that unconscious processing could not be inferred from this subset of the data.

A strikingly different pattern was obtained when prime duration was increased to 42 msec (Figure 8B). With longer mask duration (42 msec), a phenomenon named type II masking was obtained, in which visibility first decreased with SOA, then increased again (Breitmeyer, 1984). In contrast to this U-shaped time course of masking, priming effects increased with SOA, so that a part of this curve displays a negative slope in D–I space. This constitutes clear evidence for a double dissociation. Note that most of the data points lie above the D = I diagonal, and thus give evidence of a sensitivity dissociation as well. Reducing mask duration to 14 msec again made the curve shift to the right, eliminating the evidence for a sensitivity dissociation, but still leaving some evidence for a double dissociation.

General Discussion

All three criteria for demonstrating unconscious processing—sensitivity, simple, and double dissociations—can be combined within a common framework that assumes that direct and indirect processing measures may each be affected by conscious and unconscious information. All criteria rest on the comparability of direct and indirect tasks, which must employ identical stimuli and stimulus–response mappings, as well as judgments of the same stimulus features, so that the direct task explicitly assesses the stimulus information driving the effect in the indirect task. Empirical examples for each of these data patterns can be obtained in a response priming paradigm, where all three criteria provide converging evidence for unconscious visuomotor processing of masked prime stimuli (Vorberg et al., 2003).

If D and I are expressed in the same effect size metric, D–I plots of indirect versus direct measures can be checked for all three types of dissociation simultaneously. Data points above the D = I diagonal provide evidence for sensitivity dissociations, and data points on the D = 0 line with nonzero indirect effects provide evidence for simple dissociations. Finally, data points that show opposite orderings of the two measures (revealed by pairs of data points that can be connected by a straight line with negative slope) provide evidence for double dissociations.

Clearly, the different types of dissociation are restricted to different areas within D–I space. Evidence for simple dissociations can be obtained only on a single line of this space, which means that experimental conditions must be established in which participants are perfectly unaware of the critical stimuli. In contrast, sensitivity dissociations can arise in the entire upper half-space, which implies that participants may be aware of at least some of the critical stimuli, as long as indirect effects exceed direct ones. Finally, double dissociations can result anywhere in D–I space—the critical stimuli may be partly visible even to the extent that direct effects exceed the indirect ones.

For evaluating the relative merits and problems of the three criteria, it is crucial to examine how restrictive their underlying assumptions are. The exhaustiveness or exclusiveness assumptions that underlie the simple-dissociation criterion are highly problematic, because researchers cannot know beforehand whether a given measure meets these requirements, or whether such a measure exists at all. In particular, exhaustiveness requires that the direct measure be strongly monotonic with respect to the amount of conscious information, even under conditions in which such information is virtually absent. To the degree that a direct measure fails to capture small magnitudes of conscious information, it fails to meet exhaustiveness. Annoyingly, such failure must inevitably result from random noise, either in the measurement process or due to the probabilistic nature of difficult discrimination tasks. The only remedy here is massive statistical power in measuring D. Given that studies measuring D with appropriate precision will also be likely to detect minuscule departures from zero, convincingly demonstrating a simple dissociation is largely a matter of good fortune.

In contrast, the sensitivity-dissociation criterion requires weak monotonicity only, which is a much milder assumption. However, it also requires that direct and indirect measures be expressed in the same metric, which creates new problems. A conversion into effect size units, as proposed here, is a mathematical transformation of the data but does not guarantee equalization of the underlying process metrics. For example, two measures of D (or I) with identical expected values but different variances would have different coordinates in D–I space, so spurious sensitivity dissociations could be produced by employing highly reliable indirect measures together with unreliable direct ones (Reingold & Merikle, 1988).
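The reliability concern can be made concrete with a toy calculation (the numbers are ours, purely for illustration): one and the same 20-unit underlying effect, standardized by each measure’s own noise, lands at different coordinates in D–I space.

```python
def effect_size(mean_diff: float, sd: float) -> float:
    """Standardized effect: mean difference in units of the measure's noise."""
    return mean_diff / sd

# The same 20-unit effect, assessed by two measures of unequal reliability:
i_effect = effect_size(20.0, 40.0)    # reliable indirect measure (sd = 40)
d_effect = effect_size(20.0, 160.0)   # unreliable direct measure (sd = 160)

# I = 0.500 but D = 0.125: the data point lies above the D = I diagonal,
# mimicking a sensitivity dissociation without any unconscious information.
assert i_effect > d_effect
```

The design lesson is simply that the comparison of D and I is only as fair as the two measurement procedures are comparable in reliability.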
Viewed in this way, the sensitivity assumption seems problematic if employed without thorough knowledge of the inherent properties, including the reliabilities, of the measures involved.

Double dissociations go beyond both sensitivity and simple dissociations, since they require neither strong monotonicity, exclusiveness, exhaustiveness, shared metric, nor relative sensitivity assumptions. If D–I mismatch is avoided, the only requirements left are those of the general model introduced in Figure 1, and even these can be weakened (e.g., weak monotonicity of measures with respect to unconscious information is not strictly necessary; see section 4 of the Appendix). As a consequence, double-dissociation patterns allow conscious and unconscious information to interact arbitrarily—for instance, when increased availability of conscious information becomes detrimental to the utilization of unconscious information (Snodgrass, Bernat, & Shevrin, 2004). This is a further advantage over the sensitivity dissociation criterion, which requires monotonicity in both arguments.

There are alternative approaches for demonstrating unconscious perception, some of which can be regarded as special cases of our framework. For instance, the regression method proposed by Draine and Greenwald (1998; Greenwald, Klinger, & Schuh, 1995), though making use of a D–I plot, is an instance of a simple dissociation. The authors suggested that, instead of trying to establish experimental conditions that bring the direct measure to zero, one should allow for a wide range of D values and test whether the (possibly nonlinear) regression function of I against D has a nonzero intercept term. Obviously, this is simply another way of stating that I > 0 at D = 0. Unfortunately, beyond the problematic assumptions needed to establish a simple dissociation, strong additional assumptions have to be met for the regression methodology to be valid.
The procedure has met with severe criticism (e.g., by Dosher, 1998; Merikle & Reingold, 1998; Miller, 2000; see also Klauer, Draine, & Greenwald, 1998), the bottom line being that in the absence of a strong and lawful relationship between D and I (i.e., an R² of the regression model close to 1), the intercept term will primarily reflect error variance, and the approach will be practically useless.

Merikle and Cheesman (1987) suggested abandoning the simple dissociation in favor of a qualitative dissociation approach, which requires showing that a critical stimulus has qualitatively different effects on an indirect measure, which is supposed to switch sign when perceived unconsciously rather than consciously. To demonstrate such a qualitative dissociation, Merikle and Joordens (1997a, 1997b) employed a Stroop task in which incongruent color–prime combinations were presented more frequently than congruent ones, so that participants learned to respond faster to incongruently than to congruently primed targets, provided that the primes were visible (a reverse Stroop effect). In contrast, the regular Stroop effect (with faster responses to congruently primed targets) was observed when primes were made invisible by masking.

As is shown in section 5 of the Appendix, the qualitative dissociation defined by Merikle and Joordens (1997a, 1997b) can be seen as a special case of our double dissociation, and even employs some unnecessary side conditions. In applying our analysis to the Stroop example, we can conceive of response times to targets with congruent and incongruent primes, respectively, as two different indirect measures, one of which has to decrease with experimental conditions (i.e., the level of awareness) while the other increases. If this happens, however, one of them must form a double dissociation with the direct measure.

Several things should be noted here. First, in a qualitative dissociation, only one of the measures, I or J, forms a double dissociation with D, and the remaining measure is completely uninformative. The qualitative dissociation is thus not stronger than the double dissociation. Second, the demand that D = 0 in one condition is redundant; it suffices that the conditions differ in awareness. Third, it is only required that either I or J has an ordering opposite from D’s, not that one of them actually switches sign. At the same time, note that it is not sufficient for I and J to have orderings opposite to each other, which could occur if I and J start out at different values when D is small and then both increase with D, so that the initially smaller measure overtakes the initially larger measure. In this case, the reversed ordering could be explained by higher relative sensitivity in the initially smaller measure.

Some rival approaches should be mentioned that require a more detailed analysis than can be given here. Cheesman and Merikle (1984) have argued that, rather than employing objective measures of D (which are based on a verifiable match or mismatch between stimulus and response), one should use subjective measures, which assess the confidence with which the observer consciously perceives the stimulus. The proposition has not been followed universally, because it is not clear whether data patterns that look like dissociations truly reflect qualitative differences in cognitive processes, rather than uncontrolled changes in participants’ response criteria. What is needed is a thorough analysis of the assumptions on which this approach rests and of the conditions under which valid conclusions can be drawn from particular dissociation patterns.

The same can be said about a novel approach by Snodgrass et al. (2004).
This proposal is based on two ideas, each of which rests on strong psychological assumptions. The first idea is that an ordered set of perceptual tasks can be constructed such that above-chance performance in a lower-level task (e.g., detection) is a precondition for nonzero performance in all higher-level tasks (e.g., identification). The second idea is that conscious and unconscious sources of information can interact in such a way that increased availability of conscious information interferes with the utilizability of unconscious information. The strategy proposed by Snodgrass et al. is to establish an ordered series of perceptual thresholds for the direct measure, such as a subjective identification threshold (Cheesman & Merikle, 1984), an objective identification threshold, and an objective detection threshold, and to assess an indirect measure at each threshold. The authors argue that if indirect effects increase as the threshold becomes stricter, this is indicative of unconscious influences on the indirect measure asserting themselves against receding conscious information.

Their whole approach might be viewed as a double dissociation established at task level rather than parametrically. However, the notion that perceptual tasks can be ordered hierarchically is an assumption we are reluctant to accept, since all these tasks depend on different decision spaces and response criteria (Macmillan, 1986; see also Luce, Bush, & Galanter, 1963), and thus are difficult to compare. Furthermore, their approach could detect only those unconscious processes that work in opposition to conscious ones and would fail to reveal, for instance, early unconscious processing steps later developing into conscious representations. Note that the latter problem also limits application of the double-dissociation approach.

Jacoby’s (1991, 1998) process dissociation approach tries to pit conscious and unconscious sources of memory against each other by requiring the participant to either reproduce as many items as possible from a previously presented list (inclusion task) or try to avoid items from that list when generating new items (exclusion task). It is assumed that the inclusion task measures conscious recollection of the item list, whereas the exclusion task identifies those items that were unconsciously activated but failed to be consciously rejected. The primary merit of process dissociation approaches lies in the quantitative modeling of conscious and unconscious memory retrieval, presupposing that such processes exist. Recently, however, Hirshman (2004) has shown that inferences about the ordering of unconscious processes across different conditions can be drawn from the ordering of memory performance in inclusion and exclusion tasks under assumptions similar to ours, demonstrating that the inclusion/exclusion logic can be employed to refute a null model of only conscious retrieval.⁵

Concluding Comments: Beyond the Zero-Awareness Criterion

For more than four decades, methodological debate on unconscious cognition has revolved around the question of how to make sure that critical stimuli completely remain outside awareness. We believe that it is time to leave this stationary orbit. It is now clear that the traditional zero-awareness criterion has relied on the strong assumptions required for simple dissociations, thus upholding overly restrictive methodological standards. Ironically, more convincing criteria can be founded on much weaker assumptions.

One peculiarity of double dissociations is that critical stimuli are not allowed to be invisible throughout experimental conditions, or else the restrictive exhaustiveness assumption will intrude.
This leads to the somewhat counterintuitive conclusion that the best way to demonstrate unconscious cognition is to use stimuli that are not unconscious. The major drawback of double dissociations is that they may be hard to find: They can only occur if the processes underlying direct and indirect effects work in opposition, which may be the exception rather than the rule. However, examples of successfully established double dissociations do exist. For instance, Mattler (2003) reported a series of experiments in unconscious perception that found response priming not only for overt motor responses, but also for cross-modal attention shifts and task switch sets, with clear double dissociations from visual awareness in each experiment. As we have seen, the findings of Merikle and Joordens (1997a, 1997b) are further examples.

Wilcox, R. R. (1997). Introduction to robust estimation and hypothesis testing. New York: Academic Press.

NOTES

1. Recent evidence suggests that visuomotor activation in response priming paradigms can be explained exclusively by successive waves of feedforward motor activation triggered by primes and targets (Schmidt, Niehaus, & Nagel, 2006). At the same time, recent theories (Di Lollo, Enns, & Rensink, 2000; Lamme, 2002; Lamme & Roelfsema, 2000) stress the importance of intracortical feedback and recurrent processing as necessary conditions for visual awareness. Therefore, evidence may corroborate the idea that motor control in response priming and similar tasks is mandatorily unconscious because it precedes intracortical feedback mechanisms.

2. We thank Hakwan Lau for this suggestion.

3. An outlier criterion of 10% from either end of each response time distribution was used for trimming and winsorizing the distributions. Winsorization is a procedure that replaces the values beyond the outlier criteria with the most extreme values retained.

4. Results from the indirect task are pooled across parts a, b, and e of Vorberg et al. (2003), Experiment 1. Results from the direct task are pooled across parts c and d. See that previous article for details.

5. Most of Hirshman’s (2004) proofs critically depend on the assumption that inclusion and exclusion measures are strongly monotonic for conscious as well as for unconscious memory information. However, his proof of an implicit-memory analog of our double dissociation is similar to the one reported here and earlier in Vorberg et al. (2003, Supplementary Material section), and it can be shown to remain valid under weak monotonicity.
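The effect size estimator of Equation 3, with the 10% trimming and winsorizing described in note 3, can be sketched as follows (a minimal illustration; function names and data are ours):

```python
from statistics import mean, variance

def winsorize(xs, prop=0.10):
    """Replace the lowest and highest `prop` of the values with the most
    extreme values retained (note 3)."""
    xs = sorted(xs)
    k = int(prop * len(xs))
    if k:
        xs[:k] = [xs[k]] * k
        xs[-k:] = [xs[-k - 1]] * k
    return xs

def trimmed_mean(xs, prop=0.10):
    """Mean after discarding the lowest and highest `prop` of the values."""
    xs = sorted(xs)
    k = int(prop * len(xs))
    return mean(xs[k:len(xs) - k] if k else xs)

def d_a(rt_congruent, rt_incongruent, prop=0.10):
    """Equation 3: mean RT difference in units of the pooled standard
    deviation, from trimmed means and winsorized variances."""
    diff = trimmed_mean(rt_incongruent, prop) - trimmed_mean(rt_congruent, prop)
    s2_pooled = 0.5 * (variance(winsorize(rt_congruent, prop))
                       + variance(winsorize(rt_incongruent, prop)))
    return diff / s2_pooled ** 0.5
```

For example, hypothetical congruent RTs of [290, 300, 310] and incongruent RTs of [340, 350, 360] msec (samples too small for the 10% criterion to remove anything) give d_a = 50/10 = 5.0.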

APPENDIX

1. Definitions and Assumptions

Let A and B denote two types of sensory information, with a and b indexing their strength; for simplicity, we assume a, b ≥ 0. Consider measures M and M′, which seek to assess the sensory information available to an observer. The measures are intended to focus on one type of information only, but they may be contaminated by the other type as well. Therefore, we model them as functions of two arguments.

Monotonicity. A measure M is weakly monotonic in a if for all b, M(a, b) ≤ M(a′, b) whenever a ≤ a′. Weak monotonicity in b is defined analogously. A measure is weakly monotonic in both arguments if both properties hold. Strong monotonicity is defined correspondingly, except that the inequalities must be strict.

Note that monotonicity in only one argument allows arbitrary interactive effects of a and b on a measure. In contrast, monotonicity in both arguments permits ordinal interactions only—for example, M(a, b) ≥ max[M(a, 0), M(0, b)] ≥ M(0, 0).

Exhaustiveness. M is exhaustive with respect to type A information if M(a, b) > M(0, b) for a > 0 and all b—that is, if M is strictly monotonic in a. Exhaustiveness with respect to type B information is defined analogously. Exhaustive measures produce nonzero effects whenever the relevant argument is nonzero, no matter how small the effect.

Exclusiveness. M is exclusive with respect to type B information if it is sensitive to this type of information only: M(a, b) = M(0, b) for all a and b. Exclusiveness with respect to type A information is defined analogously.

Relative sensitivity. A measure M is at least as sensitive to type A information as another measure M′ if M(a, b) − M(0, b) ≥ M′(a, b) − M′(0, b) for all a and b.

In the following discussion, let C and U denote the types of sensory information potentially accessible to conscious or unconscious processing, respectively, and c and u denote their strengths. D and I are the direct and indirect indices intended to measure them. D and I are conceptualized as sharing the same arguments, D = D(c, u) and I = I(c, u). Unless stated otherwise, we assume either measure to be weakly monotonic with respect to either argument. We define effects on a measure by the difference from the corresponding baseline—for instance, D* = D(c, u) − D(0, 0).

2. Simple Dissociation

… u > 0. Note that either derivation requires weak monotonicity of the indirect measure in the second argument, u.

3. Sensitivity Dissociation

Proposition. An observed ordering I* > D* implies u > 0 if the direct measure D is at least as sensitive to conscious information as the indirect measure I.

Proof. We work from the definitions of I* and D* by adding and subtracting the terms I(0, u) and D(0, u):

I* > D*
⇔ I(c, u) − I(0, 0) > D(c, u) − D(0, 0)
⇔ I(c, u) − I(0, u) + I(0, u) − I(0, 0) > D(c, u) − D(0, u) + D(0, u) − D(0, 0)
⇔ I(0, u) − I(0, 0) > [D(c, u) − D(0, u)] − [I(c, u) − I(0, u)] + [D(0, u) − D(0, 0)].

The difference between the first two bracketed terms on the right-hand side is nonnegative if the sensitivity assumption holds and by weak monotonicity of both measures with respect to c, whereas the difference in the remaining bracket is nonnegative by monotonicity of D with respect to u. Thus,

I* > D* ⇒ I(0, u) − I(0, 0) > 0 ⇒ u > 0.

Note that the derivation requires weak monotonicity in both arguments for either measure.

4. Double Dissociation

Proposition. Let D*k and I*k denote the direct and the indirect effects observed under experimental conditions k, k ∈ {1, 2}. The joint observation of D*1 < D*2 and I*1 > I*2 implies max(u1, u2) > 0.

Proof. We prove that u1 ≠ u2 by showing that the assumption u1 = u2 = u leads to contradiction:

D*1 < D*2 ⇒ D(c1, u) < D(c2, u) ⇒ c1 < c2;
I*1 > I*2 ⇒ I(c1, u) > I(c2, u) ⇒ c1 > c2.

These inequalities directly refute the null model because they show that direct and indirect effects cannot both be driven by variation in the c argument only. As u1, u2 ≥ 0 by assumption, u1 ≠ u2 implies max(u1, u2) > 0, which means that there is evidence for nonzero unconscious information under at least one of the experimental conditions.

Note that the proof requires strict inequalities because, for instance, D(c1, u) ≤ D(c2, u) does not imply c1 ≤ c2 unless D is exhaustive for c. Mere invariance in one of the measures is thus insufficient to produce a double dissociation. Remarkably, the proof requires weak monotonicity of D and I in the c argument only, in contrast to the requirements for sensitivity dissociations; the measures may depend on u in an arbitrary way. Therefore, we can allow C and U to interact in an arbitrary fashion, as in reciprocal inhibition.

Double dissociations also refute the argument that D and I actually measure the same single source of information, but that D is less sensitive to it than I is. As a definition, we say that A is at least as sensitive as B if for any two experimental conditions i and j, Ai > Aj implies Bi ≥ Bj, and Ai = Aj implies Bi = Bj. The intuition behind this is simple: If A registers an effect when conditions change from i to j, the less sensitive measure may also register an effect or remain unaffected. If A doesn’t register an effect, then the less sensitive measure B must also fail to do so.

Proposition. The joint observation D*1 < D*2 and I*1 > I*2 is incompatible with the assumption that D and I depend on a single source of underlying information and differ in sensitivity only.

Proof.

Case 1. Assume I is the more sensitive measure. Then I*1 > I*2 implies D*1 ≥ D*2, which contradicts the observation D*1 < D*2.

Case 2. Assume that D is the more sensitive measure. Then I*1 > I*2 implies D*1 > D*2, which also contradicts the data.

5. Merikle and Joordens’s (1997a, 1997b) Qualitative Dissociation

Assume that there are two different indirect measures, I = I(c, u) and J = J(c, u). Let I*k and J*k denote the corresponding indirect effects and D*k the direct effects observed under experimental conditions k. A qualitative dissociation is said to exist if either I or J shows an ordering opposite from that of D; that is, there exist conditions, m and n, such that D*m > …