A blog on consciousness by Janet Kwasniak


Monthly Archives: June 2011


When I imagine a sort-of-core to my being, knowing this to be a will-o'-the-wisp, what I find is rhythms: breathing, heartbeat, walking, speech cadence and so on, all those cyclic or tick-tocking things my body does. I have never taken that feeling too seriously, but looking at consciousness seems to reinforce the motif of rhythm.

Mehta’s group has just published a paper (see citation) on the effect of movement on hippocampal gamma waves. The results in this study are very clear cut and convincing.

Here is the abstract:

Cortical and hippocampal gamma oscillations have been implicated in many behavioral tasks. The hippocampus is required for spatial navigation where animals run at varying speeds. Hence we tested the hypothesis that the gamma rhythm could encode the running speed of mice. We found that the amplitude of slow (20-45 Hz) and fast (45-120 Hz) gamma rhythms in the hippocampal local field potential (LFP) increased with running speed. The speed-dependence of gamma amplitude was restricted to a narrow range of theta phases where gamma amplitude was maximal, called the preferred theta phase of gamma. The preferred phase of slow gamma precessed to lower values with increasing running speed. While maximal fast and slow gamma occurred at coincident phases of theta at low speeds, they became progressively more theta-phase separated with increasing speed. These results demonstrate a novel influence of speed on the amplitude and timing of the hippocampal gamma rhythm which could contribute to learning of temporal sequences and navigation.

Conscious awareness appears to predict the near future, but I have not encountered a clear mechanism or description of which parts of the content of consciousness are projected forward. This paper shows that the navigation system in the hippocampus changes with the speed of movement. In effect it seems possible that the ‘here’ spot in a mental place map is biased by the speed of movement. So the faster we are traveling, the more we feel projected up the road. This would be a start to understanding how the prediction involved in consciousness is produced.

Another thing I found interesting, although not entirely new, was the idea of control via the phases of a slower wave. The theta wave is in the region of 6 Hz (about 6 peaks per second) and the gamma waves are much faster, say 60 Hz (60 peaks per second). In this region of the brain, gamma waves occur with the highest amplitude as the theta wave nears its trough. This type of mechanism can give very complex cyclical activity, with different activities happening in sequence as the longer wave runs from a peak to a trough and back to a peak. It even seems, on the basis of this study, that the sequence can be affected by some input such as the speed of movement. Consciousness is characterized by cycles of increasing and then decreasing gamma wave synchrony between cortical areas, possibly due to phase locking to a theta-type rhythm. We are still far from understanding, but this is a step closer.
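To make the mechanism concrete, here is a toy simulation (my own sketch, with illustrative numbers rather than values from the paper) of a 6 Hz theta wave whose phase gates the amplitude of a 60 Hz gamma wave, with gamma largest near the theta trough:

```python
import numpy as np

# Toy phase-amplitude coupling: a 6 Hz theta wave whose phase
# gates the amplitude of a 60 Hz gamma wave. All numbers are
# illustrative, not taken from the paper.
fs = 1000                      # samples per second
t = np.arange(0, 2, 1 / fs)    # two seconds of signal

theta_phase = 2 * np.pi * 6 * t
theta = np.cos(theta_phase)

# The gamma envelope is largest near the theta trough (phase = pi),
# matching the observation that gamma peaks as theta nears its trough.
envelope = 0.5 * (1 - np.cos(theta_phase))   # 0 at theta peak, 1 at trough
gamma = envelope * np.cos(2 * np.pi * 60 * t)

lfp = theta + gamma            # toy local field potential

# Check: gamma amplitude at the first theta trough exceeds that at the peak.
trough_idx = np.argmin(theta[:fs // 6])      # first theta trough
assert envelope[trough_idx] > envelope[0]    # envelope is 0 at the theta peak
```

Different inputs (such as running speed, in the paper) could then be imagined as shifting which theta phase carries the largest gamma envelope.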

Is our experience of space embodied? Do we learn that the world is three dimensional, or is this something we cannot escape because of how our bodies are made? I believe it is embodied, on three lines of reasoning: (1) our bodies contain the nature of our space; (2) we know space at too early an age to have learnt it from experience; (3) our use of spatial metaphors implies an automatic use of our understanding of space.

Physical bodies:

Our sense of movement/acceleration of the head comes from the semicircular canals of the inner ear. The three sensors are at right angles to each other, like the x, y, z axes of a three-dimensional graph. They sense any movement as a combination of movements in three directions. That dictates 3-D space.
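The geometry can be sketched in a few lines (a toy calculation of my own; the axes and the acceleration values are invented): three mutually orthogonal sensors each report only the component of movement along their own axis, and those three readings together reconstruct the full movement.

```python
import numpy as np

# Idealize the three semicircular canals as three mutually orthogonal
# sensing axes (an orthonormal frame, tilted away from plain x, y, z).
rng = np.random.default_rng(0)
axes, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # columns are orthonormal

# An arbitrary head acceleration vector (made-up values).
acceleration = np.array([0.3, -1.2, 0.5])

# Each canal reports only the component of movement along its own axis.
readings = axes.T @ acceleration   # three scalar sensor readings

# Because the three axes span 3-D space, the three readings
# suffice to reconstruct the full movement.
reconstructed = axes @ readings
assert np.allclose(reconstructed, acceleration)
```

Three orthogonal sensors are exactly enough for three dimensions: fewer would lose information, and a fourth would be redundant.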

During embryo development, starting at (or shortly after) the single fertilized egg, development proceeds with differences between ventral and dorsal, rostral and caudal, dextral and sinistral. The chemical signals and gradients that steer development determine what will become up and down, front and back, left and right, and where all the tissues will fit in that framework.

All vertebrate brains, not just the human brain, have a spatial centre which contains specialized neurons to represent place and space. These include place neurons, grid neurons, border neurons and heading neurons. Vision, hearing and touch cooperate in our representation of space in this internal mapping system. When we lose our place on this internal map, we feel a very particular emotion, the feeling of being lost. This system is central to episodic memory.

So given our bodies, it would be next to impossible to avoid an embodied spatial cognition.

Innate knowledge:

Now there are methods of questioning very young babies about what they know by following their gaze; they look longer at things and events they find unusual or less predictable than they do at the ordinary. This type of investigation can be done with babies that are only a few months old.

Here is part of the discussion in Spelke & Kinzler, Core knowledge (2007):

The last system (previous pages dealt with the others) of core knowledge captures the geometry of the environment: the distance, angle, and sense relations among extended surfaces in the surrounding layout. This system fails to represent non-geometric properties of the layout such as surface color or odor, and it fails under some conditions to capture geometric properties of movable objects. When young children or non-human animals are disoriented, they reorient themselves in accord with layout geometry. Children fail, in contrast, to orient themselves in accord with the geometry of an array of objects, and they fail to use the geometry of an array to locate an object when they are oriented and the array moves. Under some circumstances, children and animals who are disoriented fail to locate objects in relation to distinctive landmark objects and surfaces, such as a colored wall. When disoriented children and animals do use landmarks, their search appears to depend on two distinct processes: a reorientation process that is sensitive only to geometry and an associative process that links local regions of the layout to specific objects…This research suggests that the human mind is not a single, general-purpose device that adapts itself to whatever structures and challenges the environment affords. Humans learn some things readily, and others with greater difficulty, by exercising more specific cognitive systems with signature properties and limits. The human mind also does not appear to be a ‘massively modular’ collection of hundreds or thousands of special-purpose cognitive devices. Rather, the mind appears to be built on a small number of core systems, including the four systems just described. (object, agent and number preceded geometry/place in this description)

Four-month-old infants can integrate local cues provided by two-dimensional pictures and interpret global inconsistencies in structural information to discriminate between possible and impossible objects. This leaves unanswered the issue of the relative contribution of maturation of biologically predisposed mechanisms and of experience with real objects, to the development of this capability. Here we show that, after exposure to objects in which junctions providing cues to global structure were occluded, day-old chicks selectively approach the two-dimensional image that depicted the possible rather than the impossible version of a three-dimensional object, after restoration of the junctions. Even more impressively, completely naive newly hatched chicks showed spontaneous preferences towards approaching two-dimensional depictions of structurally possible rather than impossible objects. These findings suggest that the vertebrate brain can be biologically predisposed towards approaching a two-dimensional image representing a view of a structurally possible three-dimensional object.

Here is the abstract from Izard, Pica, Spelke and Dehaene (2011), Flexible intuitions of Euclidean geometry in an Amazonian indigene group:

Kant argued that Euclidean geometry is synthesized on the basis of an a priori intuition of space. This proposal inspired much behavioral research probing whether spatial navigation in humans and animals conforms to the predictions of Euclidean geometry. However, Euclidean geometry also includes concepts that transcend the perceptible, such as objects that are infinitely small or infinitely large, or statements of necessity and impossibility. We tested the hypothesis that certain aspects of nonperceptible Euclidean geometry map onto intuitions of space that are present in all humans, even in the absence of formal mathematical education. Our tests probed intuitions of points, lines, and surfaces in participants from an indigene group in the Amazon, the Mundurucu, as well as adults and age-matched children controls from the United States and France and younger US children without education in geometry. The responses of Mundurucu adults and children converged with that of mathematically educated adults and children and revealed an intuitive understanding of essential properties of Euclidean geometry. For instance, on a surface described to them as perfectly planar, the Mundurucu’s estimations of the internal angles of triangles added up to ∼180 degrees, and when asked explicitly, they stated that there exists one single parallel line to any given line through a given point. These intuitions were also partially in place in the group of younger US participants. We conclude that, during childhood, humans develop geometrical intuitions that spontaneously accord with the principles of Euclidean geometry, even in the absence of training in mathematics.

Babies and some animals can also follow another’s gaze. This requires a facility for modeling 3-D space. There does not appear to be time for infants to learn about the nature of space from their own experience. A baby would need some framework in order to start learning about the world as quickly as they do.

Root of metaphor:

Finally, space is at the root of a great many linguistic metaphors. We use our comfortable knowledge of space in order to understand other things by analogy. Time, for instance, is often expressed through spatial metaphor by almost everyone, no matter their language or culture.

We look forward to the future and back to the past. We go straight for the goal or we take a corner in our life. Today I am up for the challenge, tomorrow I may be down. I can rise to the top or get stuck at the bottom of the ladder. She went under the anesthetic but he got high on the drug. Examples of spatial metaphors can go on for pages and pages but I will resist.

What ubiquitous metaphors (such as the spatial group) tell us is that we have an embodied area of cognition that is so firmly grounded that it can be used to visualize, understand, express, and communicate other, less grounded areas.

Even though physics may convince us that there is actually a fourth dimension, or even eleven and a half, we cannot escape our experience of three dimensions. Our representation of space is embodied.

A blog posting has made me angry. As I have said before, I dislike reading angry posts and try to avoid writing them myself. But there is a limit, and so here I am writing an angry post. Below is the start of a post from the Sapient Nature blog on the Psychology Today site.

Human beings are different from other, lower-order, animals in several ways. Humans are the only species with the ability to imagine, which allows us to “time travel” (that is, reminisce about past events and imagine future ones) and to conceive of things (products, ideas) that currently don’t exist. We are also the only species to be aware that we are going to die, which, according to some psychologists, is the primary reason we have traditions and culture.

A third way in which we are different from others species is that we are the only ones to feel the need to be busy. Most lower-order animals would presumably be perfectly satisfied to idle their time away. Give a lower-order animal sufficient quantities of food, love, and shelter, and the animal will likely grow to be fat and happy; the animal would have no issues about lazing around and frittering away the rest of its life.

The writer then carries on with a reasonable post on human reactions to boredom. He even quotes evidence for what he says about human boredom (here). Well done, if you ignore the start of the post.

Why then the first two paragraphs? As they have nothing to do with the main idea of the post, I assume they are a literary flourish. I do not think that it is reasonable to say any-old-dumb-thing just to try to be an interesting writer. Not if you then want to be taken seriously in your third paragraph.

There is a definition of bullshit saying it is not really a lie and its truthfulness is not the point. The speaker of bullshit does not care whether what he says is true; sometimes it is and sometimes it isn’t. The bullshitter does not even care if you believe him. It is the overall impression that counts. I don’t get the feeling that the blogger cares whether I believe what he has to say about animals. He makes no attempt to convince me: no science, no anecdotes, no logic, no folk wisdom. Of course, there are ideas that are so accepted and acceptable in particular contexts that they need no support. But here we have a PhD in psychology who teaches in a university and edits journals, writing on a prominent psychology site. He should (and probably does) know which statements need support and which do not.

What animals think does not have a single answer. After all, ‘animals’ includes everything from sponges to us. But definitely it includes other primates, dogs, elephants, whales and dolphins, crows and parrots. The author is saying, with a straight face, that these particular animals as well as many others (1) cannot imagine, (2) do not foresee death, and (3) do not feel boredom. The death remark is not obviously false, probably even true, but also probably not the primary cause of anything so important as culture. The other two remarks are controversial at best and unacceptable at worst. Some people would accept them but many wouldn’t. I wouldn’t. In previous postings I have dealt with various aspects of animal thought, so I am not going to repeat them here. The point I am making is that the area is controversial, and therefore bold sweeping statements cannot be made without some support.

But for bullshit none of this matters. It was the bullshit that made me mad. If I thought that the writer actually believed his first two paragraphs were about anything except setting a tone for the rest of the post, I would have disagreed but not felt angry. I have a vision of the author thinking about how to make a piece about boredom interesting. He can’t think of anything original, so he uses the ‘only humans can x’ hook. That should make readers feel warm inside. He knows that a great many ‘only humans can x’ claims have been discredited, but does that matter? No need to look this one up: his readers will not care. Well, I do. And if I were a zookeeper who spent hours every day trying to keep the animals from going stir-crazy, I would care even more.

The Blue Brain Project at Lausanne, led by Henry Markram, is, to my mind, the most interesting and promising meeting of computer science and neurobiology. They recently published a paper (see citation) on the connectivity in the newborn visual cortex. They showed that neurons in small groups were interconnected independent of experience, and that the connections between these groups were the level at which experience must operate. This is very definitely not a blank slate but rather a distinct architecture on which perception rests. It seems probable that animals are born with the equipment to perceive the world: we all perceive the world similarly. But how we interpret, learn from and react to that perception is plastic and molded by experience.

Here is the abstract:

Neuronal circuitry is often considered a clean slate that can be dynamically and arbitrarily molded by experience. However, when we investigated synaptic connectivity in groups of pyramidal neurons in the neocortex, we found that both connectivity and synaptic weights were surprisingly predictable. Synaptic weights follow very closely the number of connections in a group of neurons, saturating after only 20% of possible connections are formed between neurons in a group. When we examined the network topology of connectivity between neurons, we found that the neurons cluster into small world networks that are not scale-free, with less than 2 degrees of separation. We found a simple clustering rule where connectivity is directly proportional to the number of common neighbors, which accounts for these small world networks and accurately predicts the connection probability between any two neurons. This pyramidal neuron network clusters into multiple groups of a few dozen neurons each. The neurons composing each group are surprisingly distributed, typically more than 100 µm apart, allowing for multiple groups to be interlaced in the same space. In summary, we discovered a synaptic organizing principle that groups neurons in a manner that is common across animals and hence, independent of individual experiences. We speculate that these elementary neuronal groups are prescribed Lego-like building blocks of perception and that acquired memory relies more on combining these elementary assemblies into higher-order constructs.
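The ‘simple clustering rule’ mentioned in the abstract can be sketched on a toy network (the adjacency matrix and the proportionality constant below are my own inventions for illustration; the paper reports the rule, not these numbers):

```python
import numpy as np

# Toy sketch of the common-neighbor rule: the probability that two
# neurons are connected is taken as proportional to how many
# neighbors they already share. The graph and constant k are invented.
adj = np.array([          # 1 = existing connection between neurons
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
])

# (A @ A)[i, j] counts two-step paths from i to j, which for an
# undirected 0/1 graph is exactly the number of shared neighbors.
common = adj @ adj
np.fill_diagonal(common, 0)

k = 0.2                               # invented proportionality constant
p_connect = np.clip(k * common, 0, 1)

# Neurons 0 and 3 share two neighbors (1 and 2), so under this rule
# their connection probability is 2 * k = 0.4.
assert common[0, 3] == 2
```

A rule like this naturally produces the tight, interlaced clusters the paper describes, since every new shared neighbor makes a further connection within the group more likely.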

Quoting the Blue Brain EPFL site:

The researchers were able to demonstrate that small clusters of pyramidal neurons in the neocortex interconnect according to a set of immutable and relatively simple rules…These clusters contain an estimated fifty neurons, on average. The scientists look at them as essential building blocks, which contain in themselves a kind of fundamental, innate knowledge: for example, representations of certain simple workings of the physical world. Acquired knowledge, such as memory, would involve combining these elementary building blocks at a higher level of the system. This could explain why we all share similar perceptions of physical reality, while our memories reflect our individual experience, explains Markram…If the circuits had only been formed from the experiences lived by the different animals, the values should have diverged considerably from one individual to the next. Thus, the neuronal connectivity must in some way have been programmed in advance. …Current technology is now allowing us to qualify the tabula rasa hypothesis, which argues that our brains are a blank slate at birth, and we only gain knowledge through experience. It’s an idea that has permeated science for centuries. There is no question that knowledge, in the sense that we typically understand it (reading and writing, recognizing our friends, learning a language), is the result of our experiences. But the EPFL team’s work demonstrates that some of our fundamental representations or basic knowledge is inscribed in our genes. This discovery redistributes the balance between innate and acquired, and represents a considerable advance in our understanding of how the brain works.

The paper discusses the differences between the models of Edelman and Hebb:

This study reports a form of synaptic clustering in neocortical microcircuits, where cell assemblies are not arranged randomly or in a lattice but as small world networks without hubs. These assemblies extend beyond the diameter of neocortical minicolumns, probably contain only a few dozen neurons, and are interlaced with other assemblies within the same space. The finding that connection probability between neurons and the mean synaptic weight within groups of neurons are predictable and tightly related to each other indicates that experiences cannot freely mold network topology and synaptic weights. We speculate that this synaptic organizing principle is genetically prescribed and developmentally expressed, because it applies across different animals. The synaptic clustering we found provides experimental evidence for the primary repertoires proposed earlier by the theory of neuronal group selection by Edelman. Unlike Hebb’s proposal, this theory suggests that functional neural circuitry arises by selection among neuronal groups that already emerged during embryonic development independent of experience. In Edelman’s theory, subsequent experience selects neuronal groups to form secondary repertoires that have survival value. In Hebb’s view, experience carves out and reinforces chains of such elementary assemblies to form phase sequences supporting specific trains of thought. A key difference between Edelman’s theory and Hebb’s proposals is reflected by Edelman’s emphasis on selection, as opposed to instruction, of repertoires of neuronal groups iteratively during perception in a process called reentry. The elementary assemblies that we found are interconnected by fewer and weaker strands of connections than within assemblies, which are more amenable to experience-dependent modification.

Long before we communicated with language, we communicated with our bodies, especially our faces. Everyone knows we ‘talk’ with facial expressions, but do we ‘hear’ ourselves with them?

A long time ago when the split brain operation was new and the effects were just starting to come out, I read a report that is still with me although I have not been able to find a reference. The set up was that something was shown to the right hemisphere and the left hemisphere was asked a question about it. The left hemisphere guessed and gave an answer. If the answer was right, say ‘yes’ was right, then nothing further happened. But if the answer was wrong, ‘no’, then there was a period when the person appeared uncomfortable and finally said, ‘I mean yes’. What happened during the uncomfortable period was that the person frowned. The right hemisphere heard the wrong answer, it produced a frown, the left hemisphere felt the frown and decided that the answer had been wrong. Caution: this is what I remember and may differ in many ways from the actual report. (If you have the original I would value a link.) At the time I thought that the person would have had to learn this trick, but now I think it probably comes quite naturally.

What is the evidence for embodied facial expressions? There is some from over 20 years ago; here is the abstract from Strack, Martin and Stepper (1988), Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis.

We investigated the hypothesis that people’s facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1’s results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1’s findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.

We can communicate by this facial pathway. One person has an emotional affect, and this shows in their facial expression, which another person mimics and, by doing so, feels the emotion. An emotional state has been communicated. Here is part of the discussion of Anders, Heinzle, Weiskopf, Ethofer and Haynes, Flow of affective information between communicating brains (2011):

In conclusion, our data support current theories of intersubjectivity by showing that affect-specific information is encoded in a very similar way in the brains of senders and perceivers engaged in facial communication of affect. Information is successively transferred from the sender’s brain to the perceiver’s brain, eventually leading to what has been called a shared space of affect.

How do we recognize the emotions other people are feeling? One source of information may be facial feedback signals generated when we automatically mimic the expressions displayed on others’ faces. Supporting this embodied emotion perception, dampening (Experiment 1) and amplifying (Experiment 2) facial feedback signals, respectively, impaired and improved people’s ability to read others’ facial emotions. In Experiment 1, emotion perception was significantly impaired in people who had received a cosmetic procedure that reduces muscular feedback from the face (Botox) compared to a procedure that does not reduce feedback (a dermal filler). Experiment 2 capitalized on the fact that feedback signals are enhanced when muscle contractions meet resistance. Accordingly, when the skin was made resistant to underlying muscle contractions via a restricting gel, emotion perception improved, and did so only for emotion judgments that theoretically could benefit from facial feedback.

How does language reliably evoke emotion, as it does when people read a favorite novel or listen to a skilled orator? Recent evidence suggests that comprehension involves a mental simulation of sentence content that calls on the same neural systems used in literal action, perception, and emotion. In this study, we demonstrated that involuntary facial expression plays a causal role in the processing of emotional language. Subcutaneous injections of botulinum toxin-A (BTX) were used to temporarily paralyze the facial muscle used in frowning. We found that BTX selectively slowed the reading of sentences that described situations that normally require the paralyzed muscle for expressing the emotions evoked by the sentences. This finding demonstrates that peripheral feedback plays a role in language processing, supports facial-feedback theories of emotional cognition, and raises questions about the effects of BTX on cognition and emotional reactivity. We account for the role of facial feedback in language processing by considering neurophysiological mechanisms and reinforcement-learning theory.

So our emotions are embodied in our facial expressions: emotions cause facial expressions and facial expressions cause emotions. What does this have to do with consciousness? How emotion enters consciousness is a bit of a mystery, but we know that sensory input via the thalamus is used to create the model of reality from which aspects become part of conscious awareness. So the muscles of the face could give us (or reinforce) knowledge of our emotional state or its intensity through a sensory pathway.

This may not come as a surprise, but our brains are part of our bodies: no ‘brain in a vat’. It may seem redundant to have the phrase ‘embodied cognition’, as if there were likely to be any other kind. But it seems there are people who are still coming to terms with the idea. In this posting I will talk about some of the experimental evidence for embodiment involving posture.

Vertebrates, and probably many other animals, are very sensitive to the size, height and width of a potential rival or threatening animal, for good reason I would think. And most use little tricks to make themselves look slightly more looming than they are. On the other hand, there is a fairly widespread signal in the animal world for submission, which is to make yourself smaller and bowed down. So it is not a great wonder that humans stand tall and wide to show (or fake) superiority, and shrink down and inward to show submission. We were probably doing that before we were humans or even primates. Embodiment is shown in the fact that our posture can affect our mood as well as the other way around. Stand akimbo and begin to feel powerful; stoop and begin to feel weakened. It seems to be a two-way street. In a power stance there is an increase in testosterone.

Humans and other animals express power through open, expansive postures, and they express powerlessness through closed, contractive postures. But can these postures actually cause power? The results of this study confirmed our prediction that posing in high-power nonverbal displays (as opposed to low-power nonverbal displays) would cause neuroendocrine and behavioral changes for both male and female participants: High-power posers experienced elevations in testosterone, decreases in cortisol, and increased feelings of power and tolerance for risk; low-power posers exhibited the opposite pattern. In short, posing in displays of power caused advantaged and adaptive psychological, physiological, and behavioral changes, and these findings suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices. That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.

There isn’t a division at the neck, just one person. Here are some other examples that I have taken from PsyBlog:

Hung, Labroo 2011 –

Across five studies, we show that firming one’s muscles can help firm willpower and that firmed willpower mediates one’s ability to withstand immediate pain, overcome food temptation, consume unpleasant medicines, and attend to immediately disturbing but essential information, provided that doing so is seen as providing long-term benefits. We draw on theories of embodied cognition to explain our results, and we add to that literature by showing for the first time that one’s body can help firm willpower and facilitate the self-regulation essential for the attainment of long-term goals.

Friedman, Elliot 2008 –

Two experiments investigated the hypothesis that arm crossing serves as a proprioceptive cue for perseverance within achievement settings. Experiment 1 found that inducing participants to cross their arms led to greater persistence on an unsolvable anagram. Experiment 2 revealed that arm crossing led to better performance on solvable anagrams, and that this effect was mediated by greater persistence. No differences in comfort, instruction adherence, or mood were observed between the arms crossed and control conditions, and participants appeared to be unaware of the effect of arm crossing on their behavior. Implications of the findings are discussed in terms of the interplay between proprioceptive cues and contextual meaning.

Lipnicki, Byrne 2005 –

There is potentially less locus coeruleus-noradrenergic system activity when lying down than when standing, an effect expected to develop via a difference in baroreceptor load. Furthermore, there is evidence implying that locus coeruleus-noradrenergic system activity impairs attempts to solve anagrams. Consistent with these ideas, we found that subjects solved anagrams significantly faster when supine than when standing. With anagrams characterized as insight problems, our finding suggests that insight may be influenced by body posture.

Cook, Mitchell, Goldin-Meadow 2007 –

The gestures children spontaneously produce when explaining a task predict whether they will subsequently learn that task. Why? Gesture might simply reflect a child’s readiness to learn a particular task. Alternatively, gesture might itself play a role in learning the task. To investigate these alternatives, we experimentally manipulated children’s gesture during instruction in a new mathematical concept. We found that requiring children to gesture while learning the new concept helped them retain the knowledge they had gained during instruction. In contrast, requiring children to speak, but not gesture, while learning the concept had no effect on solidifying learning. Gesturing can thus play a causal role in learning, perhaps by giving learners an alternative, embodied way of representing new ideas. We may be able to improve children’s learning just by encouraging them to move their hands.

It is likely that the three aspects of taking a posture – the motor programs that bring about changes in posture, the feelings/emotions/chemicals that trigger those changes, and the proprioceptive sense of how the body is moving or positioned – are closely associated in neural communication. Neural communication is more likely to feed back on itself than not. That we are not consciously aware of these connections does not mean that they are not part of our cognition. Even thinking that seems to us to be done entirely consciously will have large inputs that we are not aware of. Our cognition does not start as a blank slate or a general computer. It must build on the foundation of some very simple basic connections.

This is the first post in a series that I intend to do – next time, the face.

Naber (see citation) and others have carefully studied visual rivalry (those stimuli that seem to flip between two distinct images). Rivalry has been billed as having a sharp change between the two images. Is the sharpness part of the perception or is it a function of our awareness? Their results show that the changes are more gradual than previously thought. The changes in our awareness are sharper than in our unconscious perception. Reporting the conscious change with a key or button makes it appear even sharper.

Here is the abstract:

Rivalry is a common tool to probe visual awareness: a constant physical stimulus evokes multiple, distinct perceptual interpretations (percepts) that alternate over time. Percepts are typically described as mutually exclusive, suggesting that a discrete (all-or-none) process underlies changes in visual awareness. Here we follow two strategies to address whether rivalry is an all-or-none process: first, we introduce two reflexes as objective measures of rivalry, pupil dilation and optokinetic nystagmus (OKN); second, we use a continuous input device (analog joystick) to allow observers a gradual subjective report. We find that the reflexes reflect the percept rather than the physical stimulus. Both reflexes show a gradual dependence on the time relative to perceptual transitions. Similarly, observers’ joystick deflections, which are highly correlated with the reflex measures, indicate gradual transitions. Physically simulating wave-like transitions between percepts suggests piece-meal rivalry (i.e., different regions of space belonging to distinct percepts) as one possible explanation for the gradual transitions. Furthermore, the reflexes show that dominance durations depend on whether or not the percept is actively reported. In addition, reflexes respond to transitions with shorter latencies than the subjective report and show an abundance of short dominance durations. This failure to report fast changes in dominance may result from limited access of introspection to rivalry dynamics. In sum, reflexes reveal that rivalry is a gradual process, rivalry’s dynamics is modulated by the required action (response mode), and that rapid transitions in perceptual dominance can slip away from awareness.

(In case ‘optokinetic nystagmus’ is a term you have not met – it is the type of eye movement we use to track a movement, when we move our eyes with the movement and then flick back to the starting point and track the movement again. The name was new to me. They had a right-moving grating to one eye and a left-moving one to the other in one of their experiments, and could tell which was perceived by the direction of the returning flick of the eyes.)

And here is their final conclusion:

Reflexes reveal that rivalry is a gradual process, its dynamics are affected by the response mode, and fast changes in dominance can slip away unnoticed (or unreported) by observers. Consequently, reflexes allow access to earlier (subconscious) levels of perception, which are unavailable to awareness, and thus stress the limits of relying on introspection alone.

In some models of conscious awareness, there are time slices of measurable duration where events do not rise to consciousness (like the space between frames of a movie). This, to my mind, could be part of the reason for the sharpness of change and the loss of very short events in our awareness as opposed to our unconscious perception.

Unconscious pop-out: Attentional capture by unseen feature singletons only when top-down attention is available, is the title of a paper about to be published. When the paper appears it will likely be unavailable to me, but if I can read it, I will post again with more detail. Here is most of the press release:

Paying attention to something and being aware of it seem like the same thing – they both involve somehow knowing the thing is there. However, a new study, which will be published in an upcoming issue of Psychological Science, a journal of the Association for Psychological Science, finds that these are actually separate; your brain can pay attention to something without you being aware that it’s there.

“We wanted to ask, can things attract your attention even when you don’t see them at all?” says Po-Jang Hsieh, co-author… Usually, when people pay attention to something, they also become aware of it; in fact, many psychologists assume these two concepts are inextricably linked. But more evidence has suggested that’s not the case.

To test this, Hsieh and his colleagues came up with an experiment that used the phenomenon called visual pop-out. They set each participant up with a display that showed a different video to each eye. One eye was shown colorful, shifting patterns; all awareness went to that eye, because that’s the way the brain works. The other eye was shown a pattern of shapes that didn’t move. Most were green, but one was red. Then subjects were tested to see what part of the screen their attention had gone to. The researchers found that people’s attention went to that red shape – even though they had no idea they’d seen it at all.

In another experiment, the researchers found that if people were distracted with a demanding task, the red shape didn’t attract attention unconsciously anymore. “We need to be able to direct attention to objects of potential interest even before we have become aware of those objects,” he says.

What appears to be the gist of the paper is that bottom-up, perception-driven, and top-down, task-driven, attention can be ‘active’ at the same time; the bottom-up, in this case, determining what reaches awareness and the top-down being independent of awareness. It has been a question in my mind for some time – is attention an integral part of consciousness or just part of its preparation (like perception)? This work seems to imply the latter, that attention and consciousness are separate processes which interact.

There is something comical about the frustrated using impossible means to an end – for example, in Fawlty Towers, remember Basil punishing his car by beating it with a tree branch. Does it make the car behave better? No, it makes Basil ridiculous and us laugh; the car is unmoved and unmoving.

Think of the person with very low self-esteem earnestly saying some affirming phrase. What happens? The person feels ridiculous and their self-esteem is further damaged. We cannot fool ourselves; saying something that we do not believe is not going to accomplish anything.

I think there is a way of having productive conversations with ourselves: ask questions. Does this ring a bell? Something surprising occurs and you think ‘What was that?’ and immediately a few possibilities spring to mind. What, when, where, how, why, who, which, whether, how much, etc. – those are some questions we should be asking ourselves if we want to improve a situation.

A somewhat cartoon-like way to see this is as a bunch of workers in rooms. They can phone one another if necessary but they also have an intercom. X has a problem and can get no help from its usual phone contacts, so it goes on the ‘blower’ and yells, ‘Anyone know why I feel low today?’ Others do not know who it is on the blower, but push their buttons and yell back: ‘Maybe we are getting a cold.’ ‘Is it because we have not seen a good friend for days?’ ‘We are out of money.’ Now we can do things to help the situation – crawl into bed, phone a friend, make a budget, etc.

If instead X had gone on the blower and said, ‘Cheer up, everyone!’, no one would have paid any attention. Or if they did, they might feel bullied and therefore uncooperative. Or they might feel even lower because they were not about to just cheer up.

Of course this is not meant to be taken seriously. It is not an accurate metaphor for how the brain works. It remains true that our internal voice is a help in solving problems. And it remains true that we cannot convince ourselves of what we do not believe by just saying it. We cannot diet by telling ourselves to eat less but we can diet by asking ourselves how we are going to arrange our lives so that we eat less.

Here is the abstract from a paper showing the danger of unconvincing affirmations:

Positive self-statements are widely believed to boost mood and self-esteem, yet their effectiveness has not been demonstrated. We examined the contrary prediction that positive self-statements can be ineffective or even harmful. A survey study confirmed that people often use positive self-statements and believe them to be effective. Two experiments showed that among participants with low self-esteem, those who repeated a positive self-statement (“I’m a lovable person”) or who focused on how that statement was true felt worse than those who did not repeat the statement or who focused on how it was both true and not true. Among participants with high self-esteem, those who repeated the statement or focused on how it was true felt better than those who did not, but to a limited degree. Repeating positive self-statements may benefit certain people, but backfire for the very people who need them the most.