Richard Halliburton was a misfit, a rebel, in an America that was coming of age in the world. He just could not see things the way most people saw them. Because he dared, he became the icon of his era, with farmers' wives in Topeka, factory workers in Detroit, and newspaper boys in Cleveland buying his books. In the 1920s and 1930s he was one of the most famous people in America, more famous even than Amelia Earhart, yet today he is forgotten.

Carlos Castaneda & Tin Cups

Ladies’ Home Journal, June 1918

The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger color, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl.

Fugue:
My soul is like a hidden orchestra; I do not know which instruments grind and
play away inside of me, strings and harps, timbales and drums. I can only
recognize myself as a symphony.
—Fernando Pessoa, The Book of Disquiet

Counter Fugue:
What I cannot build, I cannot understand.
—Richard Feynman, physicist, as quoted by Craig Venter and encoded as a watermark in the DNA of the first synthetic organism.

Bats & Echolocation: Ben Underwood Clicks His Tongue To See

Clouds & Clocks

All they have in common are the first three letters.
You can disassemble clocks. You can reduce them to their parts, then put them back together. You can't do that with clouds. Therein lies the difference between reductionism and emergent systems, as well as reductionism & the unnameable. It depends on your point of view.

More Is Different: Emergence

As P.W. Anderson had it, at each new level there is a broken symmetry. A new level of understanding must be created before we can move on to the next. You cannot be explained in terms of the particles which compose you.

You are here in the Milky Way Galaxy, one of about 100 billion galaxies in the visible universe. This is not science fiction.

We are all conceived in close prison, and then all our life is but a going out to the place of execution, to death. . . . But we sleep all the way. From the womb to the grave, we are never thoroughly awake. (John Donne, Sermons)

Foucault Pendulum

In 1851, Jean Bernard Léon Foucault (1819–1868) demonstrated the Earth turning. At the Paris Panthéon, the pendulum revealed reality not as it seems. "Human kind cannot bear very much reality," said T.S. Eliot. People are comfortable in the way things seem. Some guests in 1851 thought the pendulum moved while the Earth stood still. But inertia kept it swinging in the same plane while the Earth, together with the building from which it hung, turned beneath it. They felt none of it, just as we feel none of the following phenomena. Earth rotates at about 1,000 mph (1,670 km/h) at the equator. At 66,000 mph it fully orbits the sun once a year. With Earth and the other planets in tow, the sun orbits the center of our Milky Way galaxy at roughly 483,000 mph, completing the orbit every 230 million years. Somehow the pendulum ignores these "local" motions and holds to its original orientation. How can this be? Nobody fully understands why it swings relative to the universe as a whole, but that seems to be the case.

4/11/10

Raise the index finger on your right hand. There, that was easy, wasn't it? You just told the finger to lift and it did. Now I have something not so easy, a question. How did the finger get raised? You did it, you tell me. Sorry, but that's not good enough. Your finger is a physical object. In terms of cause and effect, a physical effect, your finger, can only be acted upon by a physical cause--you? Are you only physical, a lump of matter? To say your brain is physical and it lifted the finger is an acceptable answer, but what is the difference between you and your brain? Are you, your consciousness, physical?

You can say yes--that, at least, is a perfectly rational viewpoint, and one that has been developed by those who argue for emergent non-reductive physical systems. (Of course, others argue for it as reductionists.) The perspective is rational because it answers the problem of causal closure--a non-physical thing, consciousness, should not be able to act upon a physical thing, your finger. The answer from this vantage is that consciousness is a physical system and can be regarded as an emergent phenomenon, emergent from biology.

Obviously, if you accept this proposition, then you must also accept that you have no soul, no spirit, no ghost in the body machine. Your "you" along with your body is a lump of dust, so to speak.

Maybe, though, you don't accept the answer, or at least not so easily. If so, then you have company. Most people would share your viewpoint, but that is because they are what philosophers call naive realists--they really haven't thought about it.

Whether you accept or not, now that you are thinking about this, I want to take you on a trip down the rabbit hole, the same one Alice fell into. I must warn you, though, that once you start thinking about this kind of thing, Alice's pills won't help you. You will find yourself deep in the rabbit hole and will have to find your own way out if you seriously ponder the evidence of neuroscience and of those who have had feelings of transcendental unity, or experiences of Near Death. If followed relentlessly, the question of consciousness leads you to quantum physics and right back into metaphysics that a physicalist would avoid in order to have a rational, discussable model.

First this. People sometimes experience feelings of transcendence when their brains have been damaged by cancer. This can be construed as a wholly physical phenomenon. Feelings of transcending the physical world--as parts of religious experience, or other forms of spirituality--may find their explanation, then, in scientific evidence.

I quote: "The brain region in question, the posterior parietal cortex, is involved in maintaining a sense of self, for example by helping you keep track of your body parts. It has also been linked to prayer and meditation.

To further probe its role, Cosimo Urgesi, a neuroscientist at the University of Udine in Italy, turned to 88 people who were being treated for brain cancer."

Urgesi suggests that removal of neurons from the posterior parietal cortex--also responsible for personality change--may increase feelings of transcendence. According to this view, the sense of higher consciousness is only a biological phenomenon.

But could their removal simply widen the brain's bandwidth to attune with something it receives much as a TV set receives? I mean that there is another possible interpretation and it is this: Our brains do not produce consciousness--as suggested by non-locality in quantum physics.

Rather, consciousness is in the world. Just as there are photon particles there may be an undiscovered consciousness particle. (Strange things have been indicated by quantum theory, such as the Many Worlds theory.) This view would support an analogy between the brain and a television or radio receiver. The brain is attuned to what is out there and the "external" world complements the "internal," both being necessary for consciousness. *

Although not to the above point, an interesting argument can be made of a kind of interactive cognition with the world. For that, see an article on Extended Mind, a theory posited by Andy Clark and David Chalmers. An interesting perspective is that of Stuart Hameroff. (Find him in the sidebar at the main Mind Shadows site.)

There is also another vantage. Instead of a material explanation for transcendent experience, isn't it also possible that our brains are wired to tap into invisible realities? In his The Doors of Perception, Aldous Huxley wrote of the brain as a dimensional filter that reduced the world to what we can deal with. In this view, sometimes the filter does not work as well and we get glimpses of a greater way of being.

Near Death Experiences (NDE) with Out of Body Experiences (OBE) occur when a patient is flat-lined or brain-dead on brain monitors. Occasional and accurate instances of remote viewing are reported. If consciousness arises from neurons and they are not firing, how can a patient recover to describe accurately what instrument the surgeon was holding, what he said, and what the patient saw on another floor of the hospital, a floor which he or she had never seen before? In a study of over 600 NDEs, the majority regarded theirs as a life-changing experience. They lost their fear of death and became more compassionate toward others.

As Hamlet said, There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.

-------

* Nothing is lost from the rigor of scientific inquiry by accepting this point of view. (Its findings have proven, objective predictive value; on the other hand, self-transcendence experiences are unique and subjective. Moreover, no objective replication and verification is possible for NDE patients, although they report astounding observations of the operating room and hospital while they were brain-dead.)

There are those, however, who are less than objective when they dismiss as superstition any view that allows for other-dimensional reality. Of course, I include Richard Dawkins among them, but must include neuroscientists who share his view. I am reminded of the so-called Expert Bias: the more expert one becomes in a field, the greater the resistance to assimilating information that can undermine one's expertise.

3/1/10

Andy Clark & Extended Mind

As our use of technology increases, we are hard put to say the world stops there and the person begins here.

We are prejudiced as we think what matters most is what goes on inside the head.

Most of the ideas were never ours but evolutionary biology conspires to make us think so.

As technology increases, human brains must dance in greater intricacies between symbols, media, formalisms, texts, speech, instruments, and culture. (The mind dances in extension with the world.)

For these and other reasons, to assume a biologically fixed "human nature" may be a mistake. Our nature is shaped to varying degrees by the brain's cognitive activities with something as simple as the words or numbers we jot on a sheet of paper. The words, the numbers, the paper are themselves technological instruments. They exemplify mind extended into the world.

"My body is an electronic virgin. I incorporate no silicon chips, no retinal or cochlear implants, no pacemaker. I don't even wear glasses (though I do wear clothes). But I am slowly becoming more and more a Cyborg. So are you. Pretty soon, and still without the need for wires, surgery or bodily alterations, we shall be kin to the Terminator, to Eve 8, to Cable...just fill in your favorite fictional Cyborg. Perhaps we already are. For we shall be Cyborgs not in the merely superficial sense of combining flesh and wires, but in the more profound sense of being human-technology symbionts: thinking and reasoning systems whose minds and selves are spread across biological brain and non-biological circuitry.

This may sound like futuristic mumbo-jumbo, and I happily confess that I wrote the preceding paragraph with an eye to catching your attention, even if only by the somewhat dangerous route of courting your immediate disapproval! But I do believe that it is the plain and literal truth. I believe, to be clear, that it is above all a scientific truth, a reflection of some deep and important facts about (a whiff of paradox here?) our special, and distinctively human nature. And certainly, I don't think this tendency towards cognitive hybridization is a modern development. Rather, it is an aspect of our humanity which is as basic and ancient as the use of speech, and which has been extending its territory ever since."

Chaos Theory and Avalanches: Your Brain Is Like A Pile of Sand

So you think your thoughts, you say? Well, if you do, what will you think thirty seconds from now? One minute from now? One hour? Where is this thinker you claim yourself to be?

What about space? You know what that is, right? Show me space. You are wrong if you say it is that which is occupied by the chair, the wall, and your computer monitor. They occur in relationships and are put in something termed space to explain the relationships. In fact, pure space is an illusion. What you call space is instead a sense impression used to filter the relationship of objects.*

When you think about it, you realize that many of our intuitions about the world are only a way for us to make sense out of it so we can get along within it. Making sense out of it is not the same as the way things are within it.

So when I say that the brain--your thoughts, your reason--works in a kind of chaos, don't dismiss the idea out of hand.

Formulated by Edward Lorenz in his study of weather patterns, then applied to population growth by Robert May, and later developed into the fractal geometry of nature by Benoit Mandelbrot, Chaos Theory now has entered the field of neuroscience.
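Robert May's population model is simple enough to try yourself: next year's population is x → r·x·(1−x). Below a critical growth rate r the orbit settles down; push r higher and it splits, then never repeats at all. A minimal sketch (the parameter values are illustrative choices of mine):

```python
def logistic(r, x0=0.5, skip=500, keep=8):
    """Iterate the logistic map x -> r*x*(1-x); discard the transient,
    then return the values the orbit settles into."""
    x = x0
    for _ in range(skip):        # let early behaviour die away
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):        # sample the long-run behaviour
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

print(logistic(2.8))   # settles to a single fixed point
print(logistic(3.2))   # oscillates between two values
print(logistic(3.9))   # chaotic: never settles into a repeating cycle
```

The same one-line rule produces order or chaos depending only on r, which is the sense in which the brain can sit "on the edge" between the two regimes.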

According to those who apply the theory to consciousness, your brain operates on the edge of chaos. Moreover, disorder is essential to the brain's ability to transmit information and solve problems.

"In technical terms, systems on the edge of chaos are said to be in a state of "self-organised criticality". These systems are right on the boundary between stable, orderly behaviour - such as a swinging pendulum - and the unpredictable world of chaos, as exemplified by turbulence."

When sand piles reach a certain height and mass, they unpredictably begin to avalanche. "The brain has much in common with them. Networks of brain cells alternate between periods of calm and periods of instability - "avalanches" of electrical activity that cascade through the neurons. Like real avalanches, exactly how these cascades occur and the resulting state of the brain are unpredictable."

*Einstein folded Newton's classical space and time into curved spacetime.
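The sandpile image comes from the Bak–Tang–Wiesenfeld model of self-organised criticality, and it is easy to simulate: grains land on a grid, and any cell that accumulates four grains topples, shedding one grain to each neighbour and possibly triggering further topplings. A rough sketch (grid size, grain count, and random seed are arbitrary choices of mine):

```python
import random

def sandpile(size=20, grains=5000, threshold=4, seed=1):
    """Drop grains one at a time onto a grid; a cell holding `threshold`
    grains topples, giving one grain to each neighbour (grains falling
    off the edge are lost). Returns the avalanche size (number of
    topplings) triggered by each dropped grain."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        x, y = random.randrange(size), random.randrange(size)
        grid[y][x] += 1
        topples = 0
        unstable = [(x, y)] if grid[y][x] >= threshold else []
        while unstable:
            cx, cy = unstable.pop()
            if grid[cy][cx] < threshold:
                continue
            grid[cy][cx] -= threshold
            topples += 1
            for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                if 0 <= nx < size and 0 <= ny < size:
                    grid[ny][nx] += 1
                    if grid[ny][nx] >= threshold:
                        unstable.append((nx, ny))
        avalanches.append(topples)
    return avalanches

sizes = sandpile()
print(max(sizes))  # occasional large cascades among many quiet drops
```

Most drops cause nothing; a few set off cascades of all sizes. That mix of long calm and unpredictable bursts is the signature of self-organised criticality the quoted passage attributes to neural networks.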

2/5/09

Douglas Hofstadter: What Do We Mean When We Say "I"?

Douglas Hofstadter has a vivid recollection of a pig's head on a table in a market. As a teenager he could see the severed neck that once had lines of communication with the body, that had once connected all the outposts of information with the headquarters in consciousness. He asked, "Who once had been in that head? Who had lived there? Who had looked out through those eyes, heard through those ears? Who had this hunk of flesh really been? Was it a male or female?"

He had a mid-life loss. "In the month of December 1993, when we were just a quarter of the way into my sabbatical year in Trento, Italy, my wife Carol died very suddenly, essentially without warning, of a brain tumor. She was not yet 43, and our children, Danny and Monica, were but five and two. I was shattered in a way I could never have possibly imagined before our marriage. There had been a bright shining soul behind those eyes, and that soul had been suddenly eclipsed. The light had gone out."

So what does all this mean? He tries to understand. "Deep down, your brain is a chaotic seething soup of particles. On a higher level it is a jungle of neurons, and on a yet higher level it is a network of abstractions that we call 'symbols.' The most central and complex symbol is the one you call 'I.' An 'I' is a strange loop where the brain's symbolic and physical levels feed back into each other and flip causality upside down so that symbols seem to have gained the paradoxical ability to push particles around, rather than the reverse.

To each human being, this 'I' is the realest thing in the world. But how can such a mysterious abstraction be real? Is our 'I' merely a convenient fiction? Does an 'I' exert genuine power over the particles in our brains, or is it helplessly pushed around by the all-powerful laws of physics?" From I Am A Strange Loop, by Douglas Hofstadter. Here is A Washington Post book review.

1/16/06

Born in Brno (then Austria-Hungary, later Czechoslovakia), Kurt Gödel demonstrated in 1931 that some propositions can be neither proved nor disproved using the rules and axioms of a given mathematical system. They can be settled from outside the system, but doing so only creates a larger system with its own unprovable statements.

So what?, you ask. Only this. Gödel undermines any belief that complex logical systems are logically air-tight—that every truth within them can be established so long as rules and axioms are observed. Instead, each system holds more true statements than it can possibly prove.
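Stated compactly, in a standard modern formulation rather than Gödel's original wording:

```latex
% First Incompleteness Theorem, standard modern statement
\textbf{Theorem (G\"odel, 1931).}
Let $T$ be a consistent, effectively axiomatizable theory that
interprets elementary arithmetic. Then there is a sentence $G_T$
such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T .
\]
```

The sentence $G_T$ is true of the natural numbers, yet the system itself can neither confirm nor deny it.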

This has far-reaching implications.

His theorem has been used to argue against artificial intelligence ever becoming as smart as people: a computer, the argument goes, can never match us because its knowledge is limited by a fixed set of axioms. (Whether that argument succeeds is itself disputed.)

For my purposes, it also applies to the realm of consciousness. Some neuroscientists and philosophers of consciousness are bravely optimistic that the processes of consciousness can finally be explained. Gödel's Incompleteness Theorem offers a way to see this as unlikely. You can be sure of what consciousness knows only by relying on what it knows about itself. That which it knows is subjective. You must somehow be able to objectify it to explain it. The problem is that there is no molecular structure, no particles. If you say neurons, you say nothing about what it feels like to be you.

In a phrase, neuroscientists and philosophers will always depend on linguistic place holders. By that phrase, I refer to the use of words to stand for what is not understood. They range from the metaphysical (God) to the physical (dark energy). Mainly, though, I see them operative in the study of consciousness. Note the shift in neurophysiological parlance from matter to physical processes. Matter had once held a promise that it could be plumbed, that somewhere at its base, science would shout Eureka! after years of hard research. The discovery did not happen. Instead, matter behaved very strangely indeed and invited strange imaginings to explain its behavior. The Standard Model of quantum physics wouldn't cooperate with gravity in General Relativity. Now we have the wholly unverifiable excitation modes of String Theory. As quantum physicist Richard Feynman said in a different context, if a scientist isn't confused by it all, he doesn't understand it well enough.

Especially for the study of consciousness, the new linguistic place holder is physical processes, with its implication that matter has been discarded as a hopeless dead end. The new term does not insist on a basement order for physical things, which was the expectation of matter. Instead, processes occur, which are called physical. These processes will eventually be determined, so goes the belief.

Of course, that still leaves the question, What does physical mean? Sounds like a glib replacement for matter to me. The shift from matter to physical is intended to support the meta-paradigm of science—the overarching view that the real world is composed of space, time, and the physical. To slightly alter Hamlet, There are more things in heaven and earth, Horatio, than are dreamt of in your meta-paradigm.

The meta-paradigm holds that only the physical—or the material, if you want—is real. Real? Even if, by that, you mean physical reality, problems still occur. The European cuckoo never sees its parents. It is raised by birds of other species. Near summer’s end, its parents migrate southward to southern Africa, without even a goodbye to their offspring. A month later, the young cuckoo locates other youngsters, and together they also migrate. How can this be? Any answer is superficial if it merely asserts that the instinct is encoded in DNA or passed on through genetic chemicals. It still has not explained how the instinct works. It has resorted to physical processes with more specific words.**

The meta-paradigm does not allow for the existence of any non-physical, non-material, causal agency, nor should it, for--scientifically speaking--any other approach is useless. It's just that when research comes down to consciousness, more humility is in order.

The meta-paradigm allows that swimming around in the primordial soup, life began. Okay. Allow that physical reality has a wholly convincing explanation in this regard—it still would not explain the nature of life. Is that nature physical? This is a pointless question according to the meta-paradigm, which holds that this sense of the word nature is like the word God, simply a universal idea created by the human mind. Mechanical causation would be enough. Nature is described by what happens. Would this causation be enough? Enough for what? For scientific explanation? As far as it can go.

Science has serious problems with cause and effect. It operates predictably in the macro-world, but behaves quite oddly in the micro-world. Evidence for this behavior abounds at the quantum level. To use a classic example, how can a wave also be a particle? What about the so-called Observer’s Paradox? How can the wave function collapse as soon as it is recorded? Problems with consciousness occur at the quantum level. Consider Bell’s Theorem, which demonstrates what Einstein called spooky action at a distance. How does an electron here know what is happening to an electron a million miles over there? Quantum computers may be invented out of this entanglement, but that only means we know how to harness it, not that we understand it in a realm where our cause and effect become confused.
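Bell's "spooky action" can be made concrete with a small calculation. For an entangled spin-½ pair, quantum mechanics predicts that measurements at analyzer angles a and b correlate as −cos(a−b); plugging the textbook CHSH angle choices into that formula gives about 2.83, above the bound of 2 that any local common-cause explanation permits. A sketch (the angle choices are the standard ones, not from the essay):

```python
import math

def E(a_deg, b_deg):
    """Quantum correlation for spin measurements on a singlet pair
    at analyzer angles a and b (degrees): E = -cos(a - b)."""
    return -math.cos(math.radians(a_deg - b_deg))

# Standard CHSH angle choices (degrees)
a, a2, b, b2 = 0, 90, 45, 135

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(round(S, 3))  # ~2.828, exceeding the classical bound of 2
```

A theory in which each electron carries a pre-set local answer can never push S past 2; experiment sides with the quantum value, which is exactly the puzzle the paragraph above describes.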

Erwin Schrödinger described the time-dependence of a quantum mechanical system with his equation, which today is fundamental in quantum mechanics. Among scientists, Schrödinger had a nimble mind, with interests spanning numerous fields, a versatility that allowed him to see and think outside the box of science. In My View of the World (Meine Weltansicht), published in 1961, he expressed an outlook rooted in the ancient Sanskrit teachings of the Vedanta. The book reveals the culmination of his search for an understanding of consciousness. In it he said, “the plurality of sensitive beings is mere appearance (maya); in reality they are all only aspects of the one being.” As a scientist he sought the physical processes of that unity; as an individual he had an understanding where science could not go. He saw deeply into the paradox of objectivity. If his consciousness were not part of the real world, and he had to exclude it, then he would also have to exclude the objective manifestations of consciousness—his own body, others' bodies, and their brains. Were he to exclude his own manifestations, he would deny his own existence.
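The equation referred to, in its standard time-dependent form (notation mine, not from the essay):

```latex
% Time-dependent Schr\"odinger equation
i\hbar \,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t)
  = \hat{H}\,\Psi(\mathbf{r},t)
```

Here $\Psi$ is the system's wave function and $\hat{H}$ the Hamiltonian operator; the equation governs how the quantum state evolves between measurements.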

Make no mistake. I carry no brief for obscurantism, for mystery. Perhaps consciousness does have a “physical” basis. Maybe consciousness is a feature of certain elementary particles. (That still leaves the door open for God—or the Gaia hypothesis, if you prefer.) The Observer Paradox and other phenomena suggest that some particles have potential for consciousness, just as particles have potential for an electrical charge. (Indian guru Ramesh Balsekar tells his many disciples that Consciousness is all. Sorry, Ramesh. In this view, Consciousness is not all: all particles carry the potential for it, while only some realize it.)

It's just that we are part of the system trying to find out what we are. That's where Gödel's Incompleteness Theorem comes into play. We are what we are looking for, as St. Francis of Assisi is alleged to have said. Or: the eye cannot see itself, nor can consciousness. In our long evolution we developed the smarts to understand, describe, and predict much of the three-dimensional world. This ability accelerated with the rise of modern science. I find it likely that we just do not have the right kind of intelligence to understand consciousness, somehow entangled as it is with what we call time. (Try to explain time.) To be sure, linguistic analogs for the processes of consciousness will be developed, and they will provide models for thinking about it. They will remain only that—models, with large areas of theory unavailable for verification and no prospect of a Unified Theory of Consciousness. I cannot muster any enthusiasm for the optimism of those who hold that consciousness can be fully explained, although I believe their thinking and research are valuable. They and others have enabled a discussion that was long overdue.

I grant that Gödel's theorem concerns mathematical systems and can be used only as a metaphor for consciousness. It merely served my introduction. My point is that, given my explanations, at the intersection of mind with matter I cannot foresee anything but more linguistic place holders for investigators of consciousness processes. Because the neuroscientific community has not questioned the meta-paradigm of science, that model remains the Holy Mother Church for the belief that the intersection of consciousness with matter can be adequately explained.

Look at it this way. When you see a tree, light is focused onto the retina of your eye, where the tree's image lands inverted. Traveling as electrical impulses through neurons to the back of your brain, the image is interpreted right side up, and the tree is experienced—not necessarily as what is "out there," but as what your hard-wiring transmits. (Not only that, but you are unaware that the image was ever upside down, or that the length of your brain intervened between you and the tree. To pun, matter is immaterial.)

How does the matter of the brain give rise to the experience? The tree is an image in your mind. Touch the tree. Is it now proved as directly real in and of itself? No. It remains a construct of consciousness. Your fingers transmit a sensation to your mind. The physical process of the transmission can be explained, but how does it give rise to the experience? This involves the hard problem of consciousness, according to David Chalmers.***

What about this. Are you the experience or the experiencer? Both? Find yourself in both. You cannot, except as consciousness interacting with the experience. Find yourself as experiencer. When you try, you locate thoughts and images, also experiences. Name the experiencer and you have only another linguistic place holder. You just cannot succeed, although your consciousness is the most obvious thing about you.

This obviousness is what you are, and for a name it has only a linguistic place holder. In other words, the obviousness has no name. That is your most factual feature, and it is beyond the ability of mental concepts or words to describe it. Thomas Nagel put it this way: "We can be compelled to recognize the existence of such facts without being able to state or comprehend them."****

__________________________________

* The concept for this article originally appeared in Inveterate Bystander on 2 July 2004, and was titled Long-Legged Fly, Linguistic Place Holders, & The Tenth Man. That article will be republished after I revise its contents.

** “But if there is a process, there must be something—an object or substance—in which it goes on. If something happens, there must be something to which it happens, something which is not just the happening itself.” This expresses our ordinary understanding of things, but physicists are increasingly content with the view that physical reality is itself a kind of pure process—even if it remains hard to know exactly what this idea amounts to. The view that there is some ultimate stuff to which things happen has increasingly ceded to the idea that the existence of anything worthy of the name ‘ultimate stuff’ consists in the existence of fields of energy—consists, in other words, in the existence of a kind of pure process which is not usefully thought of as something which is happening to a thing distinct from it. Found in Galen Strawson.

*** The question is how does the flux of ions in little bits of jelly in my brain give rise to the redness of red, the flavour of marmite or mattar paneer, or wine. Matter and mind seem so utterly unlike each other. Well, one way out of this dilemma is to think of them really as two different ways of describing the world, each of which is complete in itself. Just as we can describe light as made up of particles or waves—and there's no point in asking which is correct, because they're both correct and yet utterly unlike each other. And the same may be true of mental events and physical events in the brain. (V.S. Ramachandran, 2003 Reith Lectures)

**** In this essay, I do not address the functional self, as I do in Of Cars and Selves, 6 January 2006. In the earlier piece, I simply point out the misguided thinking of those who—in search for the basis of consciousness—fail to see that a functional self does exist for survival purposes.

1/6/06

Of Selves and Cars

In the 1 January 2006 article below, John Allen Paulos asserts that a major shift in society and culture would occur once the public widely understood that the self is non-existent. That is one view of the self. Only one.

For thousands of years Buddhists and Hindus have asserted that the self does not exist. Disciples are taught to look for the self. The simplest Zen koan can be, Who am I? Once the disciple learns that the question cannot be answered in terms of a locus for the "I," he is on the road to enlightenment. Basically, the finding is supposed to be that the "I" cannot be found. Instead, there are various forms of sense impressions--thoughts, feelings, perceptions. The absence of solid evidence for self becomes hitched to a belief--that if nothing can be found, the self is an illusion and does not exist.

Not necessarily so.

Consciousness research has determined that, yes, self cannot be located in a single place. It does appear, but not as an objective unity. Several areas of the brain can produce several events, none of which coalesce into a single objective presence on, say, an fMRI brain scan. Still, we feel only one.

The point: because a central, guiding self cannot be found does not mean that it is merely an illusion. When we look inwardly for the colonies of bacteria that digest our food we cannot find them. That does not mean they are illusory. Their presence is manifested by the fact of digestion. So, too, the self’s presence is manifested by its role in monitoring our activities. Even if we deny free will, we cannot deny the sense that something oversees our conscious affairs; this something can even participate in lucid dreaming.* We call it the self. It is part of the space that we call consciousness inside our head. Yet, if we looked inside the head, we would see only densely compacted grey matter, and no space. It and consciousness cannot be located but we believe both to be there.

Black holes provide an analogy. We cannot see them, but we can infer their presence from their effects on nearby matter and light.

As one example of the Eastern approach, advaita disciples learn that they cannot find the self by introspecting for it. Still, its absence does not imply that it is a creature of illusion. The inability to find it can be explained otherwise. It disappears under the flashlight of conscious focus. Try an experiment in perception. Stare at a blue dot lit against a yellow background. After a few minutes of focus the blue merges with the yellow. In Buddhist or Hindu meditation, the self dissolves into the background of consciousness. In Zen, this dissolution can initiate the experience of Big Mind, that one is a larger consciousness. But after the experience, self returns. Its earlier absence does not mean that it is illusory. In deep sleep, consciousness fades and self disappears but later returns.

Were self a tag-along phenomenon, a chimera, it would not have evolved through natural selection to play its prominent role in our minds. It has survival value, whether as a monitor of activities, their agent, or both. This does not mean it has independent "reality"--only that it has a function as do eyes, fingers, and orgasms.

Think of it this way. A tire is not a car. The steering wheel is not. Pistons are not. Put them all together and you have a car. You do not think of it as an illusion. Now imagine another situation. You sit on the couch. Drowsy, you dream a little. Then you awaken. You feel deep satisfaction at Mozart on the radio. You notice the cat purring on your lap. You think about a shopping list. You recall your boss's memo. With each of these, a different part of the brain "lights up," although if you had deliberately looked for a self you would not have found it. Yet you are aware of each event. Somehow, they harmonize into an "I" that is composed of them all. Although self is subjective, the comparison to a car is instructive. Even though the car is tangible as an object and self is intangible, the analogy holds once we allow that, for example, force at a distance (gravity) has an intangible "source," yet we recognize its effects.

______________________________

*(Yes, the word is "sense," and sensations do not prove reality. Nor do they disprove it. You can look at your chair. Your view, your skin, and the weight of your body against the chair are only perceptions/sensations which may be only that, with nothing "beyond" them, not even the chair, but you assume the chair to have shaped them. You can disclaim the objective reality of the chair if you want, but that is another argument.)

1/1/06

The Self Is A Conceptual Chimera

John Allen Paulos says this about the self: "Doubt that a supernatural being exists is banal, but the more radical doubt that we exist, at least as anything more than nominal, marginally integrated entities having convenient labels like "Myrtle" and "Oscar," is my candidate for Dangerous Idea. This is, of course, Hume's idea — and Buddha's as well — that the self is an ever-changing collection of beliefs, perceptions, and attitudes, that it is not an essential and persistent entity, but rather a conceptual chimera. If this belief ever became widely and viscerally felt throughout a society — whether because of advances in neurobiology, cognitive science, philosophical insights, or whatever — its effects on that society would be incalculable. (Or so this assemblage of beliefs, perceptions, and attitudes sometimes thinks.)"

This is his answer to the annual question posed by Edge. For 2006 it is: What is the most dangerous idea of the year?

Paulos is Professor of Mathematics at Temple University, Philadelphia, and the author of A Mathematician Plays the Stock Market, among other works.

11/9/05

Non-Duality’s No-Self and Antonio Damasio

Non-Duality is the term for a view of the world as not two, but one—not the duality of a person and the world outside him or her, but instead a totality which is wholly subjective and a unity. That is, everything is part of the subjectivity, with nothing outside. The view derives from Eastern belief, principally Hindu advaita, which literally means without duality. It also finds support in Buddhism (Zen, for example).

A central tenet of non-duality is that self—that which we call our self—does not exist. The evidence is offered by a methodology. The disciple is told to look for his self, and to do so relentlessly. Eventually, he concludes that he cannot find it. Only thoughts, feelings, and sensations are there. These are subjective—part of a world which is wholly conscious and without "external" objects. As for the chair in which he sits, all that is also subjective. The pressure of his body against the seat is sensation. His visual perception of it is also sensation. Etc.

The reader would be unwise to dismiss all this as so much balderdash. Quite able intellects, including the philosophers George Berkeley and David Hume, found themselves unable to dismiss the conclusions they reached. Hume, for example, concluded as much about the self—that insufficient evidence can be found for its existence.

Speaking only about self, Antonio Damasio has a different take on the situation. Damasio, M.D., Ph.D., is Van Allen Professor of Neurology, and Head of the Neurology Department at the University of Iowa. In Descartes’ Error* he offers another way to look at the phenomenon of the self. (* Subtitled, Emotion, Reason, and The Human Brain) As an example of his point, he refers to neural signals from the elbow joint. Of these signals he says, they

“will happen in the early somatosensory cortices in the insular regions [of the brain].”

Of them, he also states,

“Note again, that this is a collection of areas, rather than one center.”

With this comment Damasio offers a point of view as to why the self cannot be found when we introspect for it, either through deliberate search or with meditation. We gain a simple inference from his remark. Introspection requires focus, and focus implies search for isolated neural phenomena. The self is not part of isolated phenomena. It is part of a collective. Picture the focused beam of a flashlight. Self cannot be found with such a search.

By again referring to the early sensory cortices in the brain, he elaborates and makes an important observation regarding the self. First, the build-up to his comments on the self, then what he has to say about the self. In the build-up to his points, he explains that the early sensory cortices generate topographical representations. That is, the cortices represent sensory input to other areas of the brain. But if that were the end of it,

“I doubt we would ever be conscious of them as images. How would we know they are our images?”

He states that they would mean nothing to us, these representations. We would not know what to do with them. He says something would be missing, subjectivity—a subject to make meaning out of them. Something else is needed. Here is his first point:

“In essence, those neural representations must be correlated with those which, moment by moment, constitute the neural basis for the self.”

That is, without a sense of self, they offer no utility for the organism, which must use them to survive in the moment or to plan ahead. It must make meaning out of them.

He lays to rest the homunculus, the little man inside, the intermediary, which somehow bridged Descartes’ gap between mind and the world outside. His second point:

“Self is not the infamous homunculus, a little person inside our brain, perceiving and thinking about the images the brain forms. It is, rather, a perpetually re-created neurobiological state. Years of justified attack on the homunculus concept have made many theorists equally fearful of the concept of self. [Emphasis mine—he does not lend support to the no-self camp] But the neural self need not be homuncular at all. What should cause some fear, actually, is the idea of a selfless cognition.”

In short, cognition cannot occur without a self to cognize things. Introspect, meditate, all you want but, according to Damasio, don’t use your findings as evidence of no-self. Given his explanation, the attempt to find a self implies cognition at work, with a self involved in the effort. Even though self cannot be found—because cognition involves focus and self is non-focused—the neural self is involved in the very attempt to find itself.

An interesting article on self, by Carl Zimmer, can be found in the November 2005 Scientific American, and is titled “The Neurobiology of The Self.”

7/22/05

David Chalmers’ Hard Problem of Consciousness

Although called the father of modern philosophy, René Descartes came under challenge in the last century for the split he created between body and mind, the mind-body dualism of subject and object. For him, body became one thing; mind, another. This presents a problem. Why? Hold out your hand. Open the fist. Now close it. How did the gap get bridged between your hand, the object, and your mind, the subject, if the two are split? Yet the one and the other are somehow in relationship. This is one problem for understanding consciousness, and it implicates other problems. In particular, David Chalmers has become widely known and quoted for a key issue he presents to philosophers and neuro-researchers in the field.

Well known for the phrase "the hard problem," as in the article title above, Chalmers presents his ideas in The Conscious Mind: In Search of A Fundamental Theory, among other books. In that work, he puts the hard problem this way: "Why is all this processing accompanied by an experienced inner life?"

He introduces processing this way: “Many books and articles on consciousness have appeared in the past few years, and one might think that we are making progress. But on a closer look, most of this work leaves the hardest problems about consciousness untouched. Often, such work addresses what might be called the ‘easy’ problems of consciousness: How does the brain process environmental stimulation? How does it integrate information? How do we produce reports of internal states? But to answer them is not to solve the hard problem: Why is all this processing accompanied by an experienced internal life? [Inveterate Bystander emphasis]”

The problem is compounded by the term consciousness. It is rather slippery and must be handled with care. He observes that sometimes it refers to

“cognitive capacity, such as to introspect or to report one’s mental states;

“awakeness”;

“our ability to focus attention and to voluntarily control behavior”;

“to know about something.”

He points out that these are “accepted uses of the term, but all pick out phenomena distinct from the subject I am discussing, and phenomena that are significantly less difficult to explain.” When he refers to consciousness he means “the subjective quality of experience; what it is like to be a cognitive agent.”

That is truly a hard problem.

Not satisfied with Daniel Dennett’s explanations (Consciousness Explained, Elbow Room, & other titles), he has this to say: "Dennett spends much of his book [Consciousness Explained] outlining a detailed cognitive model, which he puts forward as an explanation of consciousness. On the face of it, the model is centrally a model of the capacity of a subject to verbally report a mental state. It might thus yield an explanation of reportability, of introspective consciousness, and perhaps of other aspects of awareness, but nothing in the model provides an explanation of phenomenal consciousness. . . . "

Note phenomenal. In the book's chapter "Two Concepts of Mind" he extensively distinguishes between phenomenal and psychological concepts.

In his closing comments of the book, he indicates the viewpoints he favors. Here are two of them, both in the same paragraph:

“I resisted mind-body dualism for a long time, but I have now come to the point where I accept it, not just as the only tenable view but as a satisfying view in its own right. It is always possible that I am confused, or that there is a new and radical possibility that I have overlooked; but I can comfortably say that I think dualism is very likely true.”

“I have also raised the possibility of a kind of panpsychism. Like mind-body dualism, this is initially counterintuitive, but the counter-intuitiveness disappears with time. I am unsure whether the view is true or false but it is at least intellectually appealing, and on reflection is not too crazy to be acceptable.”

Dualism was briefly explained at the opening of this article. One explanation of panpsychism is caught in a phrase used by Ramesh Balsekar, retired Indian banker and spiritual teacher: "consciousness is all there is." In physics, the Einstein-Podolsky-Rosen paradox and the experiments prompted by John Bell's theorem suggest to some that consciousness is not local (not just inside the skull). For further explanations, click on the links above.

7/18/05

Benjamin Libet's Personal View of Free Will

"If the moon, in the act of completing its eternal way around the earth, were gifted with self-consciousness, it would feel thoroughly convinced that it was travelling its way of its own accord on the strength of a resolution taken once and for all. So would a Being, endowed with higher insight and more perfect intelligence, watching man and his doings, smile about man’s illusion that he was acting according to his own free will." (Attributed to Albert Einstein in the promotion site for Libet's The Volitional Brain. Perhaps from Einstein's autobiography, The World As I See It, in which he addresses his deterministic view of the universe.)

Most scientific thought concurs that the universe is deterministic and that the sense of free will is an illusion. Still, the question remains, as posed by T.S. Eliot:

Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow

"The Hollow Men," 1925

Benjamin Libet conducted now-famous experiments that seemed to light up the shadow. Essentially, they showed that a subject believed he had made a decision to act, when in fact the brain activity initiating the action occurred about a half second before the sense of deciding. The decision was illusory. The action involved no agent, no choosing person; it simply happened, with the sense of control occurring afterward. (In this blog various articles can be found on free will in general and several on Libet in particular, especially Benjamin Libet and Free Won't, 15 March 2003.)

Despite the evidence of his own experiments, Benjamin Libet allows some room for free will in an otherwise deterministic world. He provides many of his arguments in The Volitional Brain (Libet, Freeman, & Sutherland, 1999).

Libet has this to say about his methods: "I have taken an experimental approach to this question. Freely voluntary acts are preceded by a specific electrical change in the brain (the 'readiness potential', RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350-400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro-deterministic laws or can they appear without such constraints, non-determined by natural laws and 'truly free'? I shall present an experimentalist view about these fundamental philosophical opposites."
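Libet's reported figures form a self-consistent timeline, and it can help to lay them out explicitly. The sketch below is a minimal illustration using only the numbers quoted above (the variable names are mine), with the motor act placed at t = 0:

```python
# Timeline of a single Libet trial, in milliseconds relative to the motor act (t = 0).
# The figures are those Libet gives in the passage quoted above.
rp_onset = -550              # readiness potential (RP) begins
awareness = rp_onset + 350   # awareness of intention arises 350-400 ms after RP onset
act = 0                      # the motor act itself

veto_window = act - awareness  # time remaining for a conscious veto
print(rp_onset, awareness, veto_window)  # -550 -200 200
```

The roughly 200 ms between awareness and the act is the narrow window in which Libet locates the conscious veto.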

He has several justifications, one of which is that many readiness potentials are produced by the brain although only one is acted upon. Another is that the conscious mind can veto actions before they are performed. Into this he tosses the notion of "consciousness fields," which is more supposition than evidence-based theory.

Critics argue that he has created a homunculus, a little man inside, who overrides unconscious urges to act, but his own experiments verify that such vetoes do occur. In this regard, the burden of explaining away the evidence lies with his critics, for subjects are indeed able to stop actions within a very narrow window of time, as Libet's own experiments established.

6/30/05

Leaps of Faith

Tertullian is credited with saying, "I believe because it is absurd." William James, Charles Sanders Peirce, and Miguel de Unamuno, among many others, held that the abandonment of reason is acceptable under certain circumstances. Such a situation is allowable when an issue is of extreme importance to human existence and when rational or empirical evidence is inconclusive one way or the other. These philosophers held that such a position is also acceptable regarding the free will/determinism issue, although the preponderance of scientific evidence weighs in for determinism despite our sense of free will and decision. Longtime Scientific American columnist Martin Gardner takes this view with regard to his belief in God. (Among his books are The Ambidextrous Universe, Weird Water & Fuzzy Logic, The Annotated Alice, and Are Universes Thicker Than Blackberries?)

4/14/05

Benjamin Libet is most frequently associated with the Readiness Potential and its implications for free will, but W. Grey Walter (1910-1977) did pioneering work that brought early attention to the phenomenon in Britain and America, although similar findings had been made in Germany and later in Finland.

W. Grey Walter: Background

Born in Kansas City, Missouri, he lived in England from the age of five and became interested in neurophysiology at King's College, Cambridge. Failing to obtain a Cambridge research fellowship, he did research at various London hospitals. This interest led him to work elsewhere in Europe as well as in the United States and the Soviet Union. He was not a Communist Party member, but as a fellow traveler (so called in that era) he had clear far-left sympathies. He was the father of Nicholas Walter (1934-2000), a prominent English anarchist.

W. Grey Walter was a pioneer in the field of cybernetics. Between 1948 and 1949, he devised autonomous machines, the robots Elmer and Elsie, that mimicked lifelike behavior and paved the way for the popularization of cybernetic theory. Claiming his self-help teachings were founded on cybernetic principles, Maxwell Maltz, MD, a cosmetic surgeon, sold many books on Psycho-Cybernetics, essentially advising how the mind can be programmed. (A Biblical phrase puts it tersely, although with other intentions: As a man thinketh, so is he.)

By using Elmer and Elsie, W. Grey Walter sought to establish that complexity can arise out of simplicity in the brain--that a small number of brain cells can give rise to very complex behaviors. This, he felt, would show that human consciousness is not immeasurably complex and can be studied by scientific means. In doing so, he wanted to demonstrate that the brain's hard wiring could provide researchers with an understanding of how consciousness operates. Called tortoises because of their slow pace on three wheels, Elmer and Elsie provided models for understanding brain organization. Capable of phototaxis (movement toward a light source), they could maneuver to a stationed battery charger when running low on power.
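Walter's point about complexity from simplicity can be felt in how little machinery phototaxis requires. The following is a hypothetical sketch of that kind of light-seeking controller, not Walter's actual vacuum-tube circuitry; the function names and parameters are invented for illustration:

```python
import math

def phototaxis_step(x, y, heading, light_x, light_y, turn_rate=0.3, speed=1.0):
    """One update of a minimal light-seeker: turn part of the way toward
    the light, then roll forward. (A hypothetical sketch, not Walter's
    actual circuitry.)"""
    bearing = math.atan2(light_y - y, light_x - x)
    # Wrap the heading error into (-pi, pi] so the turn takes the short way round.
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    heading += turn_rate * error
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

# Start at the origin facing "north"; a light (say, on the battery hutch) sits at (20, 0).
x, y, heading = 0.0, 0.0, math.pi / 2
for _ in range(50):
    x, y, heading = phototaxis_step(x, y, heading, 20.0, 0.0)

dist = math.hypot(20.0 - x, 0.0 - y)
print(round(dist, 1))  # the "tortoise" ends up jigging about close to the light
```

A single proportional turn rule is enough to produce the homing, and even the Narcissus-like jigging near the target, with no model of the world at all.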

He conducted a "self-awareness test" on one robot by placing a light on the "nose" of a tortoise while he watched as the machine observed itself in a mirror. "It began flickering, twittering, and jigging like a clumsy Narcissus", he wrote. He held that this "might be accepted as evidence of some degree of self-awareness" if observed in an animal.

The electroencephalograph (EEG) machine, invented by Hans Berger, became a key instrument in his work. (In 1929, Berger, a German, discovered brain waves when he attached electrodes, one to the forehead, the other to the rear of the skull of a human subject.) Because an EEG measures the brain's electrical activity, Walter revised the device to detect a variety of brain waves, from alpha (high speed) to delta (low speed), the latter observed during sleep. By triangulating to the occipital lobe (at the back of the brain) he located the source of alpha waves. Delta waves were used to locate tumors or epilepsy lesions.
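For context, the conventional EEG frequency bands make Walter's alpha/delta distinction concrete. These standard textbook ranges are supplied here for reference; they are not taken from Walter's own account:

```python
# Approximate conventional EEG frequency bands, in Hz. (Standard textbook
# ranges, added for context; not from Walter's account.)
eeg_bands = {
    "delta": (0.5, 4.0),   # slowest; prominent in deep sleep, used to locate lesions
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),  # the occipital rhythm Walter localized
    "beta":  (12.0, 30.0),
}

# Alpha runs faster than delta, matching the high-speed/low-speed contrast above.
print(eeg_bands["alpha"][0] >= eeg_bands["delta"][1])  # True
```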

His work with EEG electronics led him in WW2 to help develop radar technology. In Winston Churchill's history of the Second World War, the former Prime Minister wrote about the debt Britain owed to the developers of radar, a system that was a key tool in scrambling RAF Spitfires aloft to intercept Luftwaffe bombers steady-on for England. Walter was one of those developers.

Free Will: W. Grey Walter, Benjamin Libet, and Other Experimenters

He paved the way for Benjamin Libet when, in the 1960s, Walter discovered the Readiness Potential, which he termed the contingent negative variation (CNV): a negative electrical spike appearing in the brain about a half second before subjects become consciously aware of movements they are about to make. He gave his subjects a dummy button--it would not work--to change slides they viewed. They were told to press the button to advance to the next slide. An electrode was attached so that their brains were wired directly to the slide changer. In fact, the slides were changed via the electrode by the Readiness Potential in their brains, before they could push the dummy button. Unaware that their own brains had been the agent, the subjects complained that the slides were changing before they could push the button. The sense of decision came only after their brains had already effected the change. Thus decision, that to which we attribute deeds, was an illusion. This, of course, has far-reaching ramifications for what is loosely termed free will.

It suggests that free agency is an illusion and that we assume we choose when in fact we don't. Instead, we are creatures of cause and effect, determined by stimuli and forces in the environment.

The basic findings have been repeated by various experimenters, as the concept is straightforward and its protocols are simple to devise. The term Readiness Potential has come into wide use as the translation of a German term with the same meaning, Bereitschaftspotential, coined by the German researchers Hans Kornhuber and Lüder Deecke. Kornhuber and Deecke reported findings in a behavioral context comparable to that which the Finn Risto Näätänen later studied. Because various experiments on the Readiness Potential have produced consistently similar results, we must conclude that the findings are not an anomaly.

Nor can their implications for consciousness and free agency be lightly dismissed, given the consistency of the experiments--the sense of decision follows the initiation of the action. In effect, the observer can predict what the subject will do before the subject knows his own response.

6/24/04

Wilder Penfield, Brain Maps, V.S. Ramachandran and Phantom Orgasms: The Man Who Mistook His Foot For A Penis

Canadian neurosurgeon Wilder Penfield performed pioneering experiments in the 1940s and 1950s. During extensive brain surgeries, he applied electrodes to different regions of the brain and stimulated them. He then asked patients what they felt. He recorded and correlated sensations, images, even memories, as reported by the patients. By this means he mapped the brain and found, for example, that the brain area involved with lips and fingers occupies as much space as the area which handles the entire trunk of the body. Of course, the lips and fingers are highly sensitive, and such a large dedication of neuronal space helps explain why. Interestingly, he found that the areas did not always correspond to places on the body. The genital area of the brain is not next to the area for the thighs. Instead, it is located next to the area for the feet. This is a fact with far-reaching implications for the account that follows.

V.S. Ramachandran, University of California, San Diego, received a call one day. An engineer from Arkansas wanted to talk about something that was puzzling. Here is the narrative:

"Is this Dr. Ramachandran?"

"Yes."

"You know, I read about your work in the newspaper, and it's really exciting. I lost my leg below the knee about two months ago but there's still something I don't understand. I'd like your advice."

"What's that?"

"Well, I feel a little embarrassed to tell you this."

I knew what he was going to say but . . . he didn't know about the Penfield map.

"Doctor, every time I have sexual intercourse, I experience sensations in my phantom foot. How do you explain that? My doctor said it doesn't make sense."

"Look," I said. "One possibility is that the genitals are right next to the foot in the body's brain maps. Don't worry about it."

He laughed nervously. "All that's fine, doctor. But you still don't understand. You see, I actually experience my orgasm in my foot. And therefore it's much bigger than it used to be because it's no longer just confined to my genitals."

Patients don't make up such stories. Ninety-nine percent of the time they're telling the truth, and if it seems incomprehensible, it's usually because we are not smart enough to figure out what's going on in their brains. This gentleman was telling me that he sometimes enjoyed sex more after his amputation. The curious implication is that it's not just the tactile sensation that transferred to his phantom but the erotic sensations of sexual pleasures as well. (A colleague suggested I title this book "The Man Who Mistook His Foot For A Penis.") (From Phantoms In The Brain: Probing The Mysteries of The Human Mind, by V.S. Ramachandran and Sandra Blakeslee. New York: Quill (HarperCollins), 1998.)

6/22/04

A Phantom Arm and A Bizarre Experience

My right leg seems trapped in time and space, at a day and place long ago. Along the leg runs a large scar that always tingles in the background of my consciousness, and which tightens into fear whenever I focus attention there. Yes, it is unmistakable fear, in memory of the event itself, and it is localized at the scar. How can fear be in a scar? I don't know. I can only tell you that's where it is. If anything touches the scar, my leg wants to pull away from the touch. This occurs and over-rules my reason. It simply happens. It is all a phantom of the mind.

Such phantoms are not unique to me. People have experienced them for thousands of years, but they received a name only in the Nineteenth Century. The Civil War was a gruesome conflict. My ancestor, Captain David Stewart, 28th Iowa Volunteer Infantry, spent part of the war as a field surgeon. He reached a point when he could no longer look at another gangrened leg, or another soldier quite literally biting the bullet to keep from screaming in pain as a limb was sawn through. Preferring being shot to cutting off another leg, Captain Stewart asked for a transfer to combat. I understand why. I have seen the saw he used for amputation. Its teeth could as easily cut through a tree limb.

After the Civil War, tens of thousands of soldiers had amputated limbs and told doctors of strange experiences. Silas Weir Mitchell, a Philadelphia physician, coined the phrase "phantom limb" shortly after the conflict, and did so to explain the phantoms that the veterans described. Fearing ridicule from colleagues, he published anonymously in a popular magazine, Lippincott's Journal, wherein he described the phenomenon. In the century and a half since, phantom limb syndrome has become part of medical and psychological literature.

Older medical journals contain hundreds of fascinating case studies. Some of the described phenomena have been confirmed repeatedly and still need an explanation. In one case, a patient experienced a vivid phantom arm soon after amputation, which is normal, and in a few weeks he developed a peculiar, gnawing sensation in his phantom, which is not normal. He was quite puzzled by these new sensations and asked his Army doctor about them, but the physician couldn't help. The veteran finally asked, "Whatever happened to my arm after you removed it?" The doctor told him to ask the surgeon.

The veteran did just that. The surgeon replied, "Oh, we usually send the limbs to the morgue."

The man went to the morgue and asked, "What do you do with amputated arms?" They replied, "We send them either to the incinerator or to pathology. Usually we incinerate them."

"Well, what did you do with my arm?" They looked at their records and said, "You know, it's funny. We didn't incinerate it. We sent it to pathology."

The man asked the pathology lab, "Where is my arm?" They said, "Well, we had too many arms, so we just buried it in the garden, out behind the hospital."

They took him to the garden and showed him where the arm was buried. When he exhumed it, he found it crawling with maggots and exclaimed, "Maybe that's why I'm feeling these bizarre sensations in my arm."

He took the limb and incinerated it. His phantom pain disappeared permanently.

6/18/04

Blindsight: Graham Young Is Blind But Can See

What is consciousness? You may say that it is to be aware. But what does it mean to be consciously aware of something? I can type this paragraph while outside my window a bird chirps, shadows dapple the window ledge, and here, inside, my fingers move on the keyboard, music plays on the radio, and so many other events also happen as I focus only on these words. I stop for a moment, and there they are, all these other things. Then I return my attention to the computer screen. In a sense, I see but I don't see. I am aware but I am not aware. Things are part of consciousness and, so to speak, they are not.

Graham Young of Oxford, England is a case in point. When he was eight years old, he suffered a head injury that damaged the visual cortex at the rear of his brain, leaving him unable to see on his right-hand side while he can still see on his left. This blindness affects both eyes.

He has a condition called blindsight, a term coined by Professor Larry Weiskrantz of Oxford University, and it has made Young a prime candidate for neuroscience experiments, particularly because his damage is quite specific and localized, making his situation one of the purest examples of a very rare condition.

If you put an object on his right side and ask, "What is it?" he cannot say.

If you move it, and ask "What direction?" he can tell you. Up, down, left, right. He can do this even though he cannot see the object.

If a neuroscientist moves an object to his left side, he will see it. As the experimenter moves it to his right he will announce the point at which the object disappears from his view. So long as the object remains motionless, Young can only say nothing is there. When a tiny light is moved, he can announce its direction, although he cannot see it.

The experimenter might say that Graham Young must be able to see it, to which he replies, "It's very easy for me to say to you, 'Oh, I saw that move up. . . ' And as soon as I say that, you're going to say, 'Ah, he can see!' No I can't."

Colin Blakemore, an Oxford scientist, thinks that blindsight is extraordinary in its implications for consciousness. The implications are staggering: to some extent our brains do not depend on consciousness, and if that is the case, then precisely what does consciousness add to our actions? Why do we need it for certain things?

Of his condition, Young says, " I'm aware of individual functions of sight. Sometimes I am aware of a motion, but that motion has no shape, no color, no depth, no form, no contrast. Sometimes I can tell you what orientation it's at, but then we lose everything else."

Blakemore says that Young lacks "the ability to put it all together, and to recognize an object, a thing, something with meaning. . . . so very, very different from what we would normally call vision." Blakemore again: "If there's one thing that this phenomenon of blindsight teaches us, it is that vision is not entirely seeing, that there can be a disconnection from the capacity to respond to visual information and the actual act of being visually aware of something. Those two things can be separated and probably are in our everyday lives. But the problem is that, obviously, we're not aware of the things that we're not aware of. We just don't know the extent to which they play a part."

Of University of California, San Diego, V.S. Ramachandran says, "It's almost as if the patient is using ESP. He can see and yet cannot see. So it's a paradox, it's almost like science fiction. How is this possible? Well, if you look at the anatomy, you can begin to explain this curious syndrome. It turns out from the eyeball to the higher centers of the brain where you interpret the visual image, there's not just one pathway. There are two separate pathways, which subserve different aspects of vision. One of these pathways is the evolutionarily new pathway, the more sophisticated pathway, if you like, that goes from the eyeball through the thalamus to the visual cortex of the brain. Now, you need the visual cortex for consciously seeing something. The other pathway, which is older evolutionarily, and is more prominent in animals like rodents, lower mammals, birds and reptiles, goes to the brain stem, the stalk on which the brain sits. And, from the brain stem, gets relayed eventually to the higher centers of the brain. Specifically, the older pathway going through the brain stem is concerned with reflexive behavior orienting to something important in the visual field, making eye movements, directing your gaze, directing your head toward something important. In these patients, one of these pathways alone is damaged--the visual cortex is damaged. Because that's gone, the patient doesn't see anything consciously. But the other pathway is still intact. And he can use that pathway to guess correctly the direction of movement of an object that he cannot see."

Reptiles depend on unconscious blindsight for their survival. Graham Young: "A lizard, if it wants to catch a fly, for example, it doesn't actually have to see a fly. It doesn't have to recognize a fly. It just has to be aware of something moving. So I suppose me and the lizard are distant cousins."

As I indicated in the opening paragraph, I must screen out certain events in order to write this article. Blindsight seems to enable us to focus on the task at hand. It allows all else to fall into the background. As for this background, Graham Young is completely unaware of anything happening in his right field yet he gets the event correct 90% of the time. While typing I may register events without becoming consciously aware of them.

Somehow we have consciousness, but have little idea how. Graham Young has no consciousness of certain things, and we are just beginning to understand how. Brain scans revealed that when this unconscious process was going on in Graham Young, a primitive vision pathway was being used by the brain to communicate the information. This is unlike normal vision, wherein much more of the brain is used.

Hamlet said that he could be bounded in a nutshell and count himself king of infinite space. Inside the nutshell that is our skull, that may well be what we are. (See Nova Online, "Secrets of the Mind," 23 Oct. 2001, and BBC, "All Seeing Yet Un-Seeing," 22 Aug. 2000, later shown on BBC2 "Brain Story")

6/9/04

Bird Brains and Theory of Mind

Humans are exceptional beings, or so we like to think. The so-called lower animals lack complex syntax for language. They simply are not as conscious. Many philosophers believe that only humans understand that others have their own private thoughts, an understanding philosophers call having a theory of mind, without which we would lack our capacities for empathy and deception. So goes the point of view. Theory of mind has implications that reach far into our notions about consciousness. For one, experiments suggest that the degree of consciousness has no clear correlation to matter, in this case brain size.

Biologists are wary of exclusionary assertions about human beings. We are not, apparently, the only species with a theory of mind. Biologists have found it in various mammals, ranging from gorillas to goats. Two recent studies suggest that theory of mind may extend beyond mammals to birds. Consider a recent article in the Proceedings of the Royal Society, in which Bernd Heinrich and Thomas Bugnyar, of the University of Vermont, Burlington, describe experiments conducted on ravens. Ravens are known to be clever and sociable birds, and for this reason the scientists set out to find how they would respond to human gaze.

Gaze response helps measure the development of theory of mind in human children. At about 18 months, most children can notice another's gaze, follow it, and infer things about the gazer from it. Failure to develop this skill is one marker of autism, as the autistic child also fails to understand that other people have minds.

To test whether ravens could follow gaze, Dr Heinrich and Dr Bugnyar used six six-month-old hand-reared ravens, and one four-year-old. With the room divided by a barrier, the birds were placed, one at a time, on a perch. An experimenter sat about a metre in front of the barrier. He moved his head and eyes in a particular direction and gazed for 30 seconds before looking away. Sometimes he gazed up, sometimes to the part of the room where the bird sat, and sometimes to the part of the room hidden behind the barrier. The experiment was videotaped.

Dr Heinrich and Dr Bugnyar found that all the birds were able to follow the gaze of the experimenter, even beyond the barrier. In the latter case, the curious birds either jumped down from the perch and walked around the barrier to have a look or leapt on top of it and peered over. There was never anything there, but they were determined to see for themselves.

A suggestive result, but not, perhaps, a conclusive one. While working in Austria, however, Dr Bugnyar conducted another study. Its results were published last month, and they suggest that ravens may have mastered the art of deception too.

In an experiment designed to determine what ravens learn from one another while foraging, Dr Bugnyar noticed strange behavior between two male birds, Hugin and Munin, the first subordinate, the second dominant.

The birds had to figure out which color-coded film containers held cheese, then pry off the lids and eat the morsels. The subordinate male excelled at this, while the dominant was rather slow in working things out. However, Hugin could swallow only a few bits of cheese before the dominant raven, Munin, bullied him aside. Although it comes as no surprise, this indicated that ravens are able to learn about food sources from one another. They are also able to bully each other to gain access to that food.

Then something surprising happened. Hugin, the subordinate, tried a new strategy. As soon as Munin bullied him, he headed over to a set of empty containers, pried the lids off them and pretended to eat. Munin followed, whereupon Hugin returned to the loaded containers and ate his fill.

At first Dr Bugnyar could not believe what he was seeing. Hugin, he is convinced, was clearly misleading Munin.

Munin grew wise to the tactic, and would not be led astray. He learned from Hugin and tried to locate food on his own. Hugin became furious. "He got very angry," says Dr Bugnyar, "and started throwing things around."

6/4/04

A Hairbreadth Difference and Heaven and Earth Are Set Apart: Theist, Skeptic

Why is there something rather than nothing? (Lucretius)

The skeptic: The universe is eternal. It simply is. There is no need for a creator. Moreover, if the universe has a creator, then it is impersonal, merely a force.

The theist: The universe couldn't have happened by itself. All results from an uncaused Cause, which is eternal, omnipotent, omniscient, purposeful, and personal. Astronomy magazine's Robert Naeye: "On Earth, a long sequence of improbable events transpired in just the right way to bring forth our existence, as if we had won a million-dollar lottery a million times in a row. Contrary to the prevailing belief, maybe we are special."

Stephen Hawking: There may or may not be a creator, but the Big Bang didn't necessarily depend on it. Hawking to a science writer who asked him about any connection between Hindu myth and Black Hole evidence: "It's fashionable rubbish. People go overboard on Eastern mysticism because it's something that they haven't met before. But as a natural description of reality it fails abysmally to produce results."

My comment: In our daily lives we can believe or disbelieve, but we can take away from quiet moments a different understanding of the matter. In that understanding, the questions no longer seem important. The understanding does not imply atheism or theism as the final truth, the ultimate meaning. Rather, it allows us to see things in a new light* by a method that is empirical and verifiable--not some abstract, debatable notion. Part of our discovery is that thoughts beget thoughts, that they are "mechanical" things requiring no thinker to think them. (Empirical philosopher David Hume said as much in the eighteenth century.) In turn, self is created by mind, neurons if you want, and no ego, no little man or woman, presides over the course of our daily affairs. This understanding derives from a state that precedes all our usual questions and doubts. Contrary to some scientists' claims, the state is not wrapped in mist, but is one of high, generalized awareness. When asked what happened to him, Buddha said that he awakened. Buddha made no claims about whether all was finite or infinite, godless or godly. From the vantage of this state, the questions disappear and are seen as mere creatures of the mind.

That is the meaning of this article's title, taken from an old Zen saying by Seng-Ts'an--"The Perfect Way is only difficult for those who pick and choose. Do not like, do not dislike; all will then be clear. A hairbreadth difference, and Heaven and Earth are set apart." Theist and skeptic impose the difference that sunders Heaven and Earth.

* (In Zen, form is emptiness, emptiness form. Each enables the other and is the other.)

Daniel Dennett, along with various other researchers and thinkers, has arrived at this view of no-self without any "mystical" experience. Theirs, however, seems to remain a largely intellectual understanding. Articles on them can be found throughout this blog. As an example of scientific explanations that don't rely on Eastern thought, see Space Capsules & Eastern I-Told-You-So, 29 January 2004. Another example can be found at Shakey, Beavers, & Cartesian Theater, 12 February 2004.

5/24/04

Some people think that consciousness and computers are a contradiction in terms. That is, they believe that computers can never qualify as conscious, consciousness being a uniquely human quality.

Marvin Minsky believes that conscious artificial intelligence is not at all out of the question, which is in keeping with his writings. His thinking is in no regard religious, and so holds nothing sacred about human ability. Would religious leaders differ with him? Readers expecting solid disagreement from the Tibetan Buddhist community might be surprised to learn that the Dalai Lama does not take exception to a point of view such as Minsky's. Here, then, are two perspectives on the issue, first Minsky, then the Dalai Lama.

Minsky: "Just as we walk without thinking, we think without thinking! We don't know how our muscles make us walk--nor do we know much more about the agencies that do our mental work. When you have a hard problem to solve, you think about it for a time. Then, perhaps, the answer seems to come all at once, and you say, 'Aha, I've got it. I'll do such and such.' But if someone were to ask how you found the solution, you could rarely say more than things like the following:

'I suddenly realized . . .'

'I just got the idea . . .'

'It occurred to me that . . .'

If we could really sense the workings of our minds, we wouldn't act so often in accord with motives we don't suspect. We wouldn't have such varied and conflicting theories for psychology. And when we're asked how people get their good ideas, we wouldn't be reduced to metaphors about 'ruminating' and 'digesting,' 'conceiving' and 'giving birth' to concepts--as though our thoughts were anywhere but in the head. If we could see inside our minds, we'd surely have more useful things to say.

Many people seem absolutely certain that no computer could ever be sentient, conscious, self-willed, or in any other way 'aware' of itself. But . . . More

4/16/04

Now this bell tolling softly for another, says to me, Thou must die. . . .

No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were. Any man's death diminishes me, because I am involved in mankind; and therefore never send to know for whom the bell tolls; it tolls for thee. (John Donne, Meditation XVII, from Devotions Upon Emergent Occasions, 1624)

Three hundred years lie between Donne's No man is an island and Einstein's remark that a human being is part of a whole (discussed below), and each involves a contemplation of death as a type of illusion.

Early in his life, Albert Einstein became aware of the illusions begotten by common sense. As a boy, he imagined himself riding a light beam and speculated on how things would appear as he approached the speed of light. Understanding the new shapes they would assume, he concluded that the universe is a strange place indeed . . . More

3/21/04

A Jesuit priest born the year before Darwin's death, Pierre Teilhard de Chardin sought the Vatican's approval for his manuscripts, but never got it. His superiors continually denied permission for their release, believing that his theories would not accord with Church doctrine. Published posthumously in 1955 as The Phenomenon of Man, the book assembles his ideas and is based on his work as both a philosopher and paleontologist. His ideas matured in the 1940s while he was in China studying the fossil remains of Peking Man.

In short order the book met with praise and detraction. Its detractors accused Teilhard of imposing teleology, some end goal, on biology and evolution; his phrase for it was the Omega Point. They claimed that he had imported his religious views into science. In his idea of noogenesis, the origin of human reflective thought, his supporters found evidence that human history cannot be explained by evolutionary theory.

Teilhard de Chardin premises his theory on discontinuity, which he holds cannot be explained by evolutionary theory. He posits a few key transition points when radical changes occurred in evolution, and likens them to water, first lukewarm, then brought to a boil. Its state undergoes a discrete alteration, from water, a liquid, to steam, a gas with wholly different properties.

Similarly, he finds major transitions on a grand scale, the appearance of matter, the formation of Earth, the origin of cells, and the rise of reflective thought. Central to his argument is that, with each emergence, the old rules became subsumed under the new. The new rules became preeminent in the evolutionary pattern.

De Chardin wrote of "an explosion pulverizing a primitive quasi-atom . . . then a swarming of elementary corpuscles." Matter thus moves to greater complexity. This is somewhat akin to the Big Bang Theory, which was not in vogue during the 1940s, although de Chardin provides a kind of preview.

After slow eons, life emerged as something wholly, cataclysmically, new. Among his scientific peers today, de Chardin would find few dissenters on this point.

De Chardin posits another discontinuity. From the earliest unicellular organisms to mammals, he sees a direction, not random but pointed at the origin of man. With this creature comes noogenesis, reflective thought, a major departure.

If he is right, then sociobiologists are wrong. They maintain that human beings can be studied by finding parallels between people and animals. Animal societies can help explain human societies. Animal ethology and human ethology are not distinctly different for purposes of tracing behavior origins, say most sociobiologists.

But reflective thought is an emergent property and exhibits features unique to people, not chimpanzees. De Chardin would insist that a real discontinuity in programming exists between primates and humans.

The essential question here, then, bears upon Teilhard de Chardin's key concept of noogenesis (occurring in what he calls the noosphere).

Of those who have studied him, many see the central questions as these--Is reflective thought a discontinuity? Or does it evolve out of an earlier order? I see another question as more important, which nobody has addressed.

Rather, one must ask, just what is reflection? Examination of consciousness reveals that any "self-reflected" thought is not generated by a self. It happens, and a self arises to take ownership of the thought after the fact. Nobody reflects. Put it this way, if you want--reflecting reflects. There is only the illusion that somebody does so. Scotsman David Hume was one of the first Westerners to point this out, although a history of such testimony begins in the East before Buddha. Hume and others have observed that no personal identity underlies perceptions that come and go. They are like images on a movie screen, a series of single pictures to which smoothness, a sense of continuity, is imparted. *

Reflective thought is a naturally occurring phenomenon. Just as the eye incessantly moves yet generates a sense of stability, so does the mind, and it fosters a sense of self. The idea of a self that reflects is an epiphenomenon that helps explain understandings which come as a result of thought.

Reflective thought may or may not be a substantial discontinuity, but it does not bring man closer to the angels, as the good priest would like it to have done.

Teilhard de Chardin never considered the self as being without evidence. If he had, he would still have faced another question, one that can be redeemed by mystery, although not of his orthodox kind. The question he could have posed is this--Whence this understanding? How is it, for example, that so many people throughout history can recognize the absence of self? Clearly, understanding understands, if we must phrase the situation as a process with an operator.

However we phrase the issue, we must confront the view that the universe itself is intelligent and not the blind thing of the materialists. When the sense of self is seen through as an arising and falling away, understanding remains.

With electrodes connected to their wrists and scalps, Benjamin Libet's subjects had their brain waves recorded as they watched a clock with a spot revolving faster than a second hand. Like you, they were told to flex their wrists spontaneously. They were also told to note the spot's position at the moment they decided to do so. They stated where they saw it, and Libet correlated their observations with the data recorded by the electrodes at wrist and scalp.

Libet measured three factors: the action's beginning, the moment of decision, and the Readiness Potential, a brain-wave pattern that marks the brain's preparation to carry out an action.

Okay, so what did he find out? This: the brain's preparation to act was recorded as taking place before the decision to act.

Libet was surprised. He expected a different sequence, this one: first, the decision to act, then the planning stage, otherwise known as Readiness Potential, followed by the action. Instead, the Readiness Potential preceded the decision. No decision caused the brain to get ready to act. The brain got ready, then gave the appearance that a decision was made.

The sense of decision was rather like a hood ornament over a truck engine, symbolic rather than instrumental. Libet found: one, Readiness Potential; two, decision; three, action. The Readiness Potential led to the action, with the decision to act as an impression after the fact.

In other words, while his subjects thought they were deciding, they actually saw an internal replay of a decision that had already occurred. They did not initiate an action but thought they had. They thought their decision had caused the action.

Libet found that his subjects apparently didn't have free will, but instead a kind of free won't. That is, he told them that they could veto an action. Instead of flexing a wrist, they could stop the movement. He discovered an action could be vetoed, but the subjects had only one tenth of a second (100 milliseconds) to do so. In short, they could not initiate an action and could only overrule an impulse if they were alert and acted instantly. This is reminiscent of Zen teachings about alertness as the road to freedom. (See the Zen parable, "Attention means attention," in Ramesh Balsekar's Inconsistencies, 10 March 2004.)
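The sequence Libet reported can be sketched in a few lines of code. This is only an illustration, not his data pipeline; the timestamps are representative figures from his published work (Readiness Potential roughly 550 ms before the act, the reported decision roughly 200 ms before), used here solely to show the ordering and the narrow veto window.

```python
# Times in milliseconds relative to the wrist flexion at t = 0.
# These values are illustrative, not measurements from this article.
READINESS_POTENTIAL_MS = -550  # brain begins preparing the movement
DECISION_MS = -200             # subject reports "deciding" to move
ACTION_MS = 0                  # wrist actually flexes
VETO_WINDOW_MS = 100           # final window in which the act can be vetoed

def sequence(rp, decision, action):
    """Return the three events in the order they actually occurred."""
    events = {"readiness potential": rp, "decision": decision, "action": action}
    return [name for name, t in sorted(events.items(), key=lambda kv: kv[1])]

# The preparation comes first; the "decision" is reported only afterward.
print(sequence(READINESS_POTENTIAL_MS, DECISION_MS, ACTION_MS))

def can_veto(now_ms):
    """A veto is possible only in the last ~100 ms before the action."""
    return ACTION_MS - VETO_WINDOW_MS <= now_ms < ACTION_MS

print(can_veto(-50))   # inside the window: the impulse can still be stopped
print(can_veto(-300))  # too early: nothing conscious to veto yet
```

The point the sketch makes is the one Libet made: by the time the subject feels a decision, the brain is already well into its preparation, and conscious control is limited to a brief last-moment veto.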

(Of this experiment and its implications, Tufts University philosopher Daniel Dennett* is reputed to have said, "I want more freedom than that." In short, he does not refuse to accept the facts but believes that they can be interpreted differently. *Freedom Evolves, Elbow Room, Consciousness Explained, and other books.)

Now a question. Where is the self that seems to make all the decisions?

Time is out of joint for us. Another Libet experiment, in 1981, revealed that brain stimulation induces conscious sensory impressions, but only one half second after steady stimulation. In other words, consciousness builds over time. It lags behind events and only later corrects the delay by making us think that awareness occurred before the stimulus. (Our brains are masters of deception. See Gorillas & Inattentional Blindness, 13 March 2004.)

During meditation or other introspection, one looks steadily into his experience and finds nothing that lasts, only ever-changing impressions, thoughts, sensations, without separation between observer and observed.

Scottish philosopher David Hume said that whenever he delved within he found only perceptions--heat, cold, pain, pleasure--and from this concluded that the self was nothing but a bundle of perceptions. True, but beyond them a perceiver remains, distinct from the sense of self, and it cannot be explained by that which it perceives. (See Perception, 8 December 2003.)

So do you make decisions? How do you know? If you believe so, find your beliefs and the self who believes them.