Wednesday, August 28, 2013

Some people say that they generally hear the words of the text in their heads, either in their own voice or in the voices of a narrator or of characters; others say they rarely do this. Some people say they generally form visual images of the scene or ideas depicted; others say they rarely do this. Some people say that when they are deeply enough absorbed in reading, they no longer see the page, instead playing the scene like a movie before their eyes; others say that even when fully absorbed they still always visually experience the words on the page.

Some quotes:

Baars (2003): “Human beings talk to themselves every moment of the waking day. Most readers of this sentence are doing it just now.”

Jaynes (1976): “Right at this moment… as you read, you are not conscious of the letters or even of the words, or even of the syntax or the sentences, or the punctuation, but only of their meaning.”

Titchener (1909): “I instinctively arrange the facts or arguments in some visual pattern [such as] a suggestion of dull red… of angles rather than curves… pretty clearly, the picture of movement along lines, and of neatness or confusion where the moving lines come together.”

Wittgenstein (1946-1948): While reading “I have impressions, see pictures in my mind’s eye, etc. I make the story pass before me like pictures, like a cartoon story.”

Burke (1757): Despite “a very diligent examination of my own mind, and getting others to consider theirs, I do not find that one in twenty times any such picture is formed.”

Hurlburt (2007): Some people “apparently simply read, comprehending the meaning without images or speech. Melanie’s general view… is that she starts a passage in inner speech and then “takes off” into images.”

Alan and I can find no systematic studies of the issue.

We recruited 414 U.S. Mechanical Turk workers to participate in a study on the experience of reading. First we asked them for their general impressions about their own experiences while reading. How often -- on a 1-7 scale from "never" to "half of the time" to "always" -- do they experience visual imagery? Inner speech? The words on the page? (We briefly clarified these terms and gave examples.)

The responses:

[Note: For words on the page, we asked: "How often do you NOT experience the words on the page as you read? Example: your mind is filled with the ideas of the story and not the actual black letters against the white background". We have reversed the scale for presentation here.]

Now, if you're anything like me, you'll be pretty skeptical about the accuracy of these types of self-reports. So Alan and I did several things to try to test for accuracy.

Our general design was to give each person a passage to read, during which they were interrupted with a beep and asked if they were experiencing imagery, inner speech, or the words on the page. Afterwards, we asked comprehension questions, including questions about visual or auditory details of the story or about details of the visual presentation of the material (such as font). Finally, we asked again for participants' general impressions about how regularly they experience imagery, inner speech, and the words on the page when they read.

The comprehension questions were a mixed bag and difficult to interpret -- too much for this blog post (maybe we'll do a follow-up) -- but the other results are striking enough on their own.

Among those who reported "always" experiencing inner speech while they read, only 78% reported inner speech in their one sampled experience. Think a bit about what that means. Despite, presumably, some pressure on participants to conform to their earlier statements about their experience, it took exactly one sampled experience for 22% of those reporting constant inner speech to find an apparent counterexample to their initially expressed opinion. Suppose we had sampled five times, or twenty?
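A quick back-of-envelope computation shows how fast repeated sampling would bite, if we assume (my simplifying assumption, not the study's) that each probe is an independent 78% chance of catching inner speech:

```python
# If each beep independently has a 78% chance of catching inner speech,
# the share of "always" reporters left without a counterexample after
# n probes shrinks geometrically: 0.78 ** n.
p = 0.78  # observed single-probe rate among "always" reporters

for n in (1, 5, 20):
    print(f"{n:2d} probe(s): {p ** n:.1%} would still report inner speech every time")
```

On this (surely too simple) independence model, five probes would leave under 30% of "always" reporters without a counterexample, and twenty probes under 1%.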

For comparison: 9% of those reporting "always" experiencing visual imagery denied experiencing visual imagery in their one sampled experience. And 42% did the same about visually experiencing the words on the page.

Participants' final reports about their reading experience, too, suggest substantial initial ignorance about their reading experience. The correlations between participants' initial and final generalizations about reading experience were .47 for visual imagery, .58 for inner speech, and .37 for experience of words on the page. Such medium-sized correlations are quite modest considering that the questions being correlated are verbatim-identical questions about participants' reading experience in general, with an interval of about 5-10 minutes between. One might have thought that if people's general opinions about their experience are well-founded, the experience of reading a single passage should have only a minimal effect on such generalizations.

Thursday, August 22, 2013

William James's view of emotion is famous. Its most famous feature is his claim that emotional experience is entirely bodily:

If we fancy some strong emotion, and then try to abstract from our consciousness of it all the feelings of its characteristic bodily symptoms, we find we have nothing left behind... and that a cold and neutral state of intellectual perception is all that remains.... What kind of an emotion of fear would be left, if the feelings neither of quickened heart-beats nor of shallow breathing, neither of trembling lips nor of weakened limbs, neither of goose-flesh nor of visceral stirrings, were present, it is quite impossible to think. Can one fancy the state of rage and picture no ebullition of it in the chest, no flushing of the face, no dilatation of the nostrils, no clenching of the teeth, no impulse to vigorous action, but in their stead limp muscles, calm breathing, and a placid face? The present writer, for one, certainly cannot. The rage is as completely evaporated as the sensation of its so-called manifestations, and the only thing that can possibly be supposed to take its place is some cold-blooded and dispassionate judicial sentence, confined entirely to the intellectual realm, to the effect that a certain person or persons merit chastisement for their sins (1890/1950, vol. 2, p. 451-2).

Two other features are less commonly noted. One is that emotional experience is ever-present in rich detail:

Every one of the bodily changes, whatsoever it be, is felt, acutely or obscurely, the moment it occurs. If the reader has never paid attention to this matter, he will be both interested and astonished to learn how many different local bodily feelings he can detect in himself as characteristic of his various emotional moods.... Our whole cubic capacity is sensibly alive; and each morsel of it contributes its pulsations of feeling, dim or sharp, pleasant, painful, or dubious, to that sense of personality that every one of us unfailingly carries with him (p. 451).

Another is that emotional experience is highly variable:

We should, moreover, find that our descriptions had no absolute truth; that they only applied to the average man; that every one of us, almost, has some personal idiosyncrasy of expression, laughing or sobbing differently from his neighbor, or reddening or growing pale where others do not.... The internal shadings of emotional feeling, moreover, merge endlessly into each other. Language has discriminated some of them, as hatred, antipathy, animosity, dislike, aversion, malice, spite, vengefulness, abhorrence, etc., etc.; but in the dictionaries of synonyms we find these feelings distinguished more by their severally appropriate objective stimuli than by their conscious or subjective tone (p. 447-448).

Disagreement continues about all three issues.

Some scholars, such as Walter Cannon and Peter Goldie, have argued that bodily sensations cannot possibly exhaust emotional experience; but others, such as Antonio Damasio and Jesse Prinz, have defended accounts of emotion that are broadly Jamesian in this respect.

Some scholars, such as John Searle (p. 140), have argued that we have ever-present emotional mood experiences even if they are often fairly neutral, while others, such as Russell Hurlburt and Chris Heavey, have argued that such feeling experiences are only present about 25% of the time on average. (This issue is a dimension of the larger question of how sparse or abundant human conscious experience is in general -- a question I have argued is methodologically fraught.)

I have seen less explicit discussion of how much variability there is in emotional experience between people, but some theories seem to imply that similar emotions will tend to have similar experiential cores: Keith Oatley and P.N. Johnson-Laird, for example, seem to think that each type of emotion -- e.g., anxiety, anger, disgust -- has a "distinctive phenomenological tone" (p. 34); and Goldie, while in some places emphasizing the complex variability of emotion, in other places (as in the article linked above), seems to imply that there's a distinctive qualitative character that an emotion like fear has which one cannot know unless one has experienced that emotion type. Hurlburt, in contrast, holds that people's emotional experiences are highly variable with no common core among them (e.g., here, p. 187, Box 8.8).

For all the work on emotion that has been done in the past 120 years, we are still pretty far, I think, from reaching a well-justified consensus opinion on these questions. Such is the enduring infancy of consciousness studies.

Thursday, August 15, 2013

Do university ethics classes actually have any practical effect on students' moral behavior outside of a university classroom or laboratory? Basically, we have no idea. I don't believe there is a single published empirical study on the issue. (If I'm wrong, let me know!)

We can make an empirically-informed best guess, though. Here's how: Look at the literature that examines the influence of ethics classes on students' self-reported moral attitudes. If there's a large effect of ethics instruction on student attitudes, maybe it's reasonable to conclude that there would be a moderate effect on student behavior. If there's a medium-sized effect on student attitudes, maybe conclude that there's a small effect on student behavior. Ethics classes work directly on students' attitudes and only indirectly on student behavior outside the classroom. Whatever effect business ethics courses have, for example, on students' tendency to verbally endorse mottoes like "it's bad to pad expense accounts", presumably the effect on behavior will be substantially smaller.

So what does the existing literature on ethics instruction suggest?

The research literature suggests that university ethics classes have at most a small, short-term effect on students' verbally espoused attitudes. This is so even when the researchers seem almost to be begging students to confirm their hypotheses -- for example, by giving before-and-after attitude questionnaires close to the topics of the classes the researchers are teaching (see, e.g., these two studies, among the most-cited recent empirical studies on ethics instruction).

Given the small and inconsistent short-term effects of ethics instruction on student attitudes, I suggest that in the absence of direct evidence it is reasonable to tentatively conclude that the long-term effects on students' attitudes are tiny to non-existent, that the short-term effects on students' practical behavior outside the university setting are tiny to non-existent, and that the long-term effects on practical behavior, if any, are smaller still.

Nor do I find it entirely clear that whatever tiny long-term effects the typical ethics class has on student behavior would be overall positive. Maybe for every positive change in one student's long-term conduct, there's a negative change in another student -- through associating intellectual ethical discourse with that horrible class in which she got a D, or through learning that one can always concoct some theory to rationalize attractive misconduct, or through reinforcing a pre-existing tendency to be a sophomoric know-it-all. Ethics professors don't seem to behave any better as a result of their familiarity with the university ethics curriculum; so why should students?

Wednesday, August 14, 2013

I tell my students: Spend half your time reading what everyone else is reading, and spend half your time reading what no one else is reading (Icelandic folk stories, in De Cruz's example). For the latter especially, trust your dorky sense of fun, not some crabbed and conventional notion of what is productive. It will broaden and energize you, and unexpected avenues will open.

However, the typical review article or quantitative meta-analysis in these fields does not conclude that there is no effect. Below, I'll discuss why.

The features are:

(1.) A majority of studies show the predicted positive effects, but a substantial minority of studies (maybe a third) show no statistically significant effect.

(2.) The studies showing positive vs. negative effects don't fit into a clearly interpretable pattern -- e.g., it's not like the studies looking for X result almost all show effects while those looking for Y result do not.

(3.) Researchers reporting positive effects often use multiple measures or populations and the effect is found only for some of those measures or populations (e.g., for women but not men, for high-IQ subjects but not for low-IQ subjects, by measure A but not by measure B) -- but again not in a way that appears to replicate across studies or to have been antecedently predicted.

(4.) Little of the research involves random assignment and confounding factors readily suggest themselves (e.g., maybe participants with a certain personality or set of interests are both more likely to have taken a business ethics class and less likely to cheat in a laboratory study of cheating, an association really better explained by those unmeasured differences in personality or interest rather than by the fact that business ethics instruction is highly effective in reducing cheating).

(5.) Much of the research is done in a way that seems almost to beg the participants to confirm the hypothesis (e.g., participants are asked to report their general imagery vividness and then they are given a visual imagery task that is a transparent attempt to confirm their claims of high or low imagery vividness; or a business ethics professor asks her students to rate the wrongness of various moral scenarios, then teaches a semester's worth of classes, then asks those same students to rate the wrongness of those same moral scenarios).

(6.) There is a positive hypothesis that researchers in the area are likely to find attractive, with no equally attractive negative hypothesis (e.g., that subjective reports of imagery correlate with objective measures of imagery; or that business ethics instruction of the sort the researcher favors leads students to adopt more ethical attitudes).

The really striking thing to me about these literatures is that, despite what are likely some pretty strong positive-effect biases (features 4-6), researchers in these areas still struggle to show a consistent pattern of statistical significance.

To my mind, this is the picture of a non-effect.

The typical meta-analysis will report a real effect, I think, for two reasons, one mathematical and one sociological. Mathematically, if you combine one-third null-effect studies with two-thirds positive-effect studies, you'll typically find a statistically significant effect (even with the typical "file-drawer" corrections). And sociologically, these reviews are conducted by researchers in the field, often including their own work and the work of their friends and colleagues. And who wants to devalue the work in their own little academic niche? See, for example, this meta-analysis of the business ethics literature and this one of the imagery literature.
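The mathematical point can be seen in a toy pooling of z-scores by Stouffer's method (my illustration, not drawn from either cited meta-analysis): a batch of modestly positive studies diluted with near-null ones still combines to a z far past the significance threshold, because the nulls merely dilute rather than cancel the positives.

```python
# Toy Stouffer's-method pooling (my numbers, purely illustrative):
# two-thirds modestly positive studies, one-third near-nulls.
import math

zs = [1.7] * 20 + [0.1] * 10          # 30 hypothetical study z-scores
combined_z = sum(zs) / math.sqrt(len(zs))
print(f"combined z = {combined_z:.2f}")  # far beyond the 1.96 cutoff
```

No individual z of 1.7 is significant on its own, yet the pooled result looks decisive -- regardless of whether the underlying positivity comes from a real effect or from shared biases.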

In a way, the mathematical conclusion of such meta-analyses is correct. There is a mathematically discoverable non-chance effect underneath the patterns of findings -- the combined effects of experimenter bias, participants' tendency to confirm hypotheses they suspect the researcher is looking for, and unmeasured confounding variables that often enough align positively for positively-biased researchers to unwittingly take advantage of them. But of course, that's not the sort of positive relationship that researchers in the field are attempting to show.

For fun (my version of fun!) I did a little mock-up Monte Carlo simulation. I ran 10,000 sample experiments predicting a randomly distributed Y from a randomly distributed X, with 60 participants in each control group and 60 in each treatment group, adding two types of distortion: First, two small uncontrolled confounds in random directions (average absolute value of correlation, r = .08), and second a similarly small random positive correlation to model some positive-effect bias. (Both X and Y were normally distributed with a mean of 0 and a standard deviation of 1. The confounding correlation coefficients were chosen by randomly selecting r from a normal distribution centered at 0 with a standard deviation of 0.1; for the one positive-bias correlation, I took the absolute value.)

Even with only these three weak confounds, not all positive, and fairly low statistical power, 23% of experiments had statistically significant results at a two-tailed p value of < .05 (excluding the 5% in which the control group correlation was significant by chance). If we assume that each researcher conducts four independent tests of the hypothesis, of which only the "best", i.e., most positive, correlation is pursued and emphasized as the "money" result in publication, then 65% of researchers will report a statistically significant positive result, the average "money" correlation will be r = .28 (approaching "medium" size), and no researcher will emphasize in publication a statistically significant negative result.
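For readers who want to tinker, here is a rough reconstruction of that simulation (my sketch, not the original code; I model only the treatment-group correlation at n = 60, hard-code the critical |r| for two-tailed p < .05, and omit the control-group exclusion and best-of-four selection steps):

```python
# My reconstruction of the mock-up described above, not the authors'
# code. Each "experiment" draws a treatment group of n = 60 and
# correlates X with Y, where the population correlation is the sum of
# two signed confounds plus one positive-bias confound, each with
# magnitude drawn from N(0, 0.1) (absolute value for the bias term).
import math
import random

random.seed(1)
N_EXPERIMENTS = 10_000
N = 60
R_CRIT = 0.254  # critical |r| for two-tailed p < .05 at df = 58

def sample_r(true_r, n):
    """Observed Pearson r from n pairs with population correlation true_r."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        y = true_r * x + math.sqrt(1 - true_r ** 2) * random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

significant_positive = 0
for _ in range(N_EXPERIMENTS):
    confounds = [random.gauss(0, 0.1),        # signed confound 1
                 random.gauss(0, 0.1),        # signed confound 2
                 abs(random.gauss(0, 0.1))]   # positive-bias confound
    true_r = max(-0.99, min(0.99, sum(confounds)))
    if sample_r(true_r, N) > R_CRIT:
        significant_positive += 1

rate = significant_positive / N_EXPERIMENTS
print(f"{rate:.0%} of experiments significantly positive")
```

Even without the selection-of-the-"money"-result step, the significant-positive rate lands roughly in the neighborhood of the figure reported above -- three weak confounds, only one of them systematically positive, are enough.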

Yeah, that's about the look of it.

(Slightly revised 10:35 AM.)

Update August 8th:

The Monte Carlo analysis finds 4% with a significantly negative effect. My wife asks: Wouldn't researchers publish those effects too, and not just go for the strongest positive effect? I think not, in most cases. If the effect is due to a large negative uncontrolled confound, a positive-biased researcher, prompted by the weird result, might search for negative confounds that explain the result, then rerun the experiment in a different way that avoids those hypothesized confounds, writing off the first as a quirky finding due to bad method. If the negative effect is due to chance, the positive-biased author or journal referee, seeing (say) p = .03 in the unpredicted direction, might want to confirm the "weird result" by running a follow-up -- and, being due to chance, it will likely fail to confirm, leaving the paper unpublished.