A Life of Meaning (Reason Not Required)

By Robert A. Burton

Sept. 5, 2016


Few would disagree with two age-old truisms: We should strive to shape our lives with reason, and a central prerequisite for the good life is a personal sense of meaning. Ideally, the two should go hand in hand. We study the lessons of history, read philosophy, and seek out wise men with the hope of learning what matters. But this acquired knowledge is not the same as the felt sense that one’s life is meaningful.

Though impossible to describe accurately, meaning is something we readily recognize by its absence. Anyone who has experienced a bout of spontaneous depression knows the despair of feeling that nothing in life is worth pursuing and that no argument, no matter how inspired, can fill the void. Similarly, we are all familiar with the countless narratives of religious figures "losing their way" despite retaining their formal beliefs.

Any philosophical approach to values and purpose must acknowledge this fundamental neurological reality: a visceral sense of meaning in one’s life is an involuntary mental state that, like joy or disgust, is independent from and resistant to the best of arguments. If philosophy is to guide us to a better life, it must somehow bridge this gap between feeling and thought.

As neuroscience pounds away at the idea of pure rationality and underscores the primacy of subliminal mental activity, I am increasingly drawn to the metaphor of idiosyncratic mental taste buds. From genetic factors (a single gene determines whether we find Brussels sprouts bitter or sweet) to cultural ones (fried grasshoppers and grilled monkey brains prized as delicacies), taste isn't a matter of the best set of arguments. Anyone who has tried to get a child to eat something she doesn't like understands the limits of even the most cunning inducements. If thoughts, like foods, come in a dazzling variety of flavors, and personal taste trumps reason, then philosophy, which relies most heavily on reason and aims to foster the acquisition of objective knowledge, is in a bind.

Though we don’t know how thoughts are produced by the brain, it is hard to imagine having a thought unaccompanied by some associated mental state. We experience a thought as pleasing, revolting, correct, incorrect, obvious, stupid, brilliant, etc. Though integral to our thoughts, these qualifiers arise out of different brain mechanisms from those that produce the raw thought. For example, feelings of disgust, empathy and knowing arise from different areas of the brain and can be provoked de novo in volunteer subjects via electrical stimulation, even when the subjects are unaware of having any concomitant thought at all. This chicken-and-egg relationship between feelings and thought can readily be seen in how we make moral judgments.

The psychologist Jonathan Haidt and others have shown that our moral stances strongly correlate with the degree of activation of those brain areas that generate a sense of disgust and revulsion. According to Haidt, reason provides an after-the-fact explanation for moral decisions that are preceded by inherently reflexive positive or negative feelings. Think about your stance on pedophilia or denying a kidney transplant to a serial killer. Long before you have a moral position in place, each scenario will have already generated some degree of disgust or empathy.

Nowhere is this overpowering effect of biology on how we think more evident than in the paradox-plagued field of philosophy of mind. Even those cognitive scientists who have been most instrumental in uncovering our myriad innate biases continue to believe in the primacy of reason. Consider the Yale psychology professor Paul Bloom’s argument that we do not have free will but, because we are capable of conscious rational deliberation, we are nonetheless responsible for our actions.

Though deeply sympathetic to his conclusion, I am puzzled by his argument. The evidence most supportive of Bloom’s contention that we do not have free will is also compelling evidence against the notion of conscious rational deliberation. In the 1980s the neurophysiologist Benjamin Libet of the University of California, San Francisco, showed that the brain generates action-specific electrical activity nearly half a second before the subject consciously “decides” to initiate the action. Though interpretations of the results remain the subject of considerable controversy, a number of subsequent studies have confirmed that the conscious sense of willing an action is preceded by subliminal brain activity likely indicating that the brain is preparing to initiate the action.

An everyday example of this temporal illusion is seen in high-speed sports such as baseball and tennis. Though batters sense that they withhold deciding whether to swing until they see the ball near the plate, their swing actually begins shortly after the ball leaves the pitcher’s hand. The same applies to tennis players returning a serve coming at them at 140 miles an hour. Initiation of the action precedes full conscious perception of seeing the approaching ball.

It is unlikely that there is any fundamental difference in how the brain initiates thought and action. We learn the process of thinking incrementally, acquiring knowledge of language, logic, the external world and cultural norms and expectations just as we learn physical actions like talking, walking or playing the piano. If we conceptualize thought as a mental motor skill subject to the same temporal reorganization as high-speed sports, it’s hard to avoid the conclusion that the experience of free will (agency) and conscious rational deliberation are both biologically generated illusions.

What then are we to do with the concept of rationality? It would be a shame to get rid of a term useful in characterizing the clarity of a line of reasoning. Everyone understands that “being rational” implies trying to strip away biases and innate subjectivity in order to make the best possible decision. But what if the word rational leads us to scientifically unsound conclusions?

We describe the decision to jam on the brakes at the sight of a child running into the road as being rational, even when we understand that it is reflexive. However, few of us would say that a self-driving car performing the same maneuver was acting rationally. It’s pretty obvious that the difference in how we assign rationality isn’t dependent upon how decisions are made, but how we wish to see ourselves in relationship to the rest of the animal kingdom, and indeed even to plants and intelligent machines.

It is hard to imagine what would happen to modern thought if we abandoned the notion of rationality. Scientific method might partly fill the void. With quantum physics, scientists have been able to validate counterintuitive theories. But empirical methods can’t help us with abstract, non-measurable, linguistically ambiguous concepts such as purpose and meaning. It’s no wonder that pre-eminent scientists like Stephen Hawking have gleefully declared, “Philosophy is dead.”

Going forward, the greatest challenge for philosophy will be to remain relevant while conceding that, like the rest of the animal kingdom, we are decision-making organisms rather than rational agents, and that our most logical conclusions about moral and ethical values can be neither scientifically verified nor guaranteed to pass the test of time. (The history of science should serve as a cautionary tale for anyone tempted to believe in the persistent truth of untestable ideas.)

Even so, I would hate to discard such truisms as “know thyself” and “the unexamined life isn’t worth living.” Reason allows us new ways of seeing, just as close listening to a piece of music can reveal previously unheard melodies and rhythms, or observing an anthill can give us an unexpected appreciation of nature’s harmonies. These various forms of inquiry aren’t dependent upon logic and verification; they are modes of perception.

Robert A. Burton, a former chief of neurology at the University of California, San Francisco, Medical Center at Mount Zion, is the author of “On Being Certain: Believing You Are Right Even When You’re Not,” and “A Skeptic’s Guide to the Mind: What Neuroscience Can and Cannot Tell Us About Ourselves.”