August 28, 2003

Today, the human body is a major site of contestation in the law, whether it be reproductive rights, gay rights, drug testing, or even a run-of-the-mill personal injury case. The development of drugs that will reliably produce targeted changes in a person's senses, cognition, or emotions will add an entirely new dimension to ongoing debates about individual rights, public health, and social control. One need only look at the controversy that currently surrounds abortion rights to imagine how our legal system will struggle when questions of choice expand into the realm of thoughts and thought processes.

As I mentioned on Monday, our legal system has a poor track record when it comes to matters of the mind, especially when these matters raise deeper questions about the nature of a "natural" or "normal" human being.

The laws knee-jerk reaction to drugs that change thinking is prohibition. In the early 1900s, fourteen states enacted anti-cigarette laws making it a criminal offense to sell  and in some cases, merely possess  tobacco cigarettes. (Nicotine is powerful psychostimulant.) Alcohol prohibition, which began at the state level, culminated in a thirteen year federal prohibition that required nothing less than modifying the United States Constitution both to begin and to end it. And, presently we are in the midst of an ever-growing war on drugs that has filled our prisons, skewed science and medicine, and shredded many of our most fundamental rights.

When new drugs begin to make it possible to safely and reliably boost intelligence, increase empathy, or decrease anti-social ideations, will the courts mandate their use as a condition of granting probation or parole to some offenders? Already, courts require some probationers to take disulfiram, a drug that makes the ingestion of alcohol in any form extremely unpleasant. From the 1920s through 1970, pursuant to the laws of at least 32 states, more than 60,000 people were deemed "eugenically unfit." Many of these people were involuntarily sterilized, in part because of low scores on intelligence tests. When one of these laws was challenged and the case reached the United States Supreme Court, it was upheld, with Justice Oliver Wendell Holmes smugly proclaiming, "Three generations of imbeciles are enough."

Today, implantable contraceptives, like Norplant, have replaced court-ordered sterilization, and the National Institute on Drug Abuse is funding the development of drugs that police the blood-brain barrier, making it impossible for a person to feel the full effects of certain illegal (and legal) drugs. One such "neurocop," SR141716, is already being positioned as a possible probation condition for those convicted of a marijuana offense.

Until 1973, "homosexuality" was listed as a psychiatric disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM). People who admitted that they were homosexual, or who were "accused" of being gay or lesbian, were subject to involuntary confinement under mental health laws and subjected to "reparative therapy" designed to convert them into heterosexuals. "Treatment," in addition to counseling, included penile plethysmography (electric shock triggered by penile erection), drugging, and hypnosis. Some state laws even permitted the forcible sterilization of homosexuals. Only in June of this year did the United States Supreme Court rule that laws criminalizing consensual gay sex were unconstitutional.

Helping to guide upcoming law and policy decisions so that they respect the fundamental right to freedom of thought is the focus of the work we're doing at the Center for Cognitive Liberty & Ethics. If you're interested in supporting this work, or collaborating in any way, please e-mail me.

That concludes my guest blogging on Brainwaves. Thanks Zack! If you'd like to continue receiving occasional posts on the topic of cognitive liberty, please subscribe to the CCLE's low-volume e-mail news list.

August 27, 2003

As scientists work to develop drugs that improve memory, an article in the current issue of Science indicates that neurobiologists are coming closer to developing drugs that may make it possible to selectively erase memories. If perfected, these drugs could enable a person to eliminate, or significantly dim, specific memories.

Previous studies have shown that if taken before a traumatic event, or within six hours of the event, a drug such as propranolol (a common beta blocker) can significantly reduce recall of that event. Propranolol could soon be used in emergency rooms in an effort to lessen post-traumatic stress disorder that might later haunt victims of serious accidents or violent crimes.

Whats different about the findings published this week in Science, is the possibility of erasing memories that may have been encoded into memory much earlier in life. Whereas propranolol, when used as a memory remover, must be taken in very close temporal proximity to the event that one seeks to forget, these new findings point to the possibility of selectively dumping or dimming memories of incidents that may have occurred as far back as early childhood.

If the science of memory management works as described, and if such drugs make their way into the mainstream, we will be confronted with an array of neuroethical issues. Today, a person's memories are a private and personal matter. But tomorrow, as memory becomes more modifiable, will we have a public obligation to maintain the integrity of our natural memories?

In a future world awash in memory-deleting drugs, a person who commits a crime might be able to silence disconcerting memories of his actions. Some criminals might even force victims to take a memory-erasing drug after, say, a rape or robbery. Soldiers on the battlefield might be able to chemically quell memories of killing enemy soldiers, making it easier to kill again. Executives accused of fraud, embezzlement, or other white-collar crimes could chemically delete their own memories of the offenses, and thus truthfully testify that they have no recollection of particular events.

If memory-attenuating drugs take hold in society, how will our legal system deal with them? Will they be regulated like prescription drugs or over-the-counter drugs? Will unauthorized use of a memory drug be made a criminal offense? Will we see new legislation that makes it a crime to destroy one's own memories (akin to intentionally destroying evidence) when those memories hold information about a potential civil or criminal matter?

The right to cognitive liberty posits that the power to enhance, erase, or otherwise modify one's own memory ought to be an individual decision, something that is neither compelled nor prohibited by law. While some people will undoubtedly make poor decisions with regard to modifying their own memories, it should not be a crime to modify your own thinking processes. Government may rightfully police our actions, but it does not, and should not, have the power to police our minds.

This does not mean that evidence of a person's use of a memory-reducing drug should be inadmissible in court; it should be admissible. Presently, a witness in a civil or criminal case can be cross-examined on just about every aspect of his or her memory, including whether or not he or she was taking any drugs (legal or illegal) that might have affected recollection of an incident. Thus, a jury would be permitted to draw its own conclusions if an Enron executive admits on cross-examination that he or she took a memory-eliminating drug.

Further, a person who commits a violent crime and concomitantly forces the victim to ingest a memory-attenuating drug should be charged with two serious crimes: the violent crime, and a separate assault premised on forcing the victim to take the drug.

The creation of memory-changing drugs will raise difficult and complex ethical and social issues. But in our rush to create new laws and regulations, it is important to protect the basic right of law-abiding citizens to manage their own memories.

August 25, 2003

The law advances by looking through a rearview mirror. Judges cite pre-existing legal opinions and reason by analogy to reach decisions in current controversies. This makes the legal system an inherently conservative institution, one that will be caught stiff-legged by the complexities about to be unleashed upon it by developments in neurotechnology.

As Wrye Sententia outlined last week, widely acknowledged legal principles of privacy, autonomy, and choice must evolve to adapt to a swiftly changing technological environment. Cognitive liberty is an effort to accelerate legal thinking about, well, thinking.

Currently, "freedom of thought" remains a legal notion little developed beyond Enlightenment-era understandings of the brain and mind. For the past few centuries, freedom of thought has been largely about buttressing reason and logic, and the major mind-changing technology has been the printing press. Today, however, we are unlocking secrets of the brain and simultaneously developing drugs and other technologies that make it possible to produce specific changes in how a person thinks.

These changes pose a major jurisprudential challenge. And, unfortunately, when we look back at how our legal system has dealt with previous technological changes that bump up against deeply entrenched notions of the normal or natural, there is little reason for optimism.

Ill discuss some of this legal precedent more specifically in my next post.

Today, new drugs and other technologies developed for augmenting, monitoring, and manipulating cognition require social policies that will promote, rather than restrict, free thinking. Applications of these technologies can benefit from clear principles that ensure cognitive liberty.

Here are three core considerations:

Privacy: What and how you think should be private unless you choose to share it. The use of technologies such as brain imaging and scanning must remain consensual and any information so revealed should remain confidential. The right to privacy must be found to encompass the inner domain of thought.

Autonomy: Self-determination over one's own cognition is central to free will. School boards, for example, should not be permitted to condition a child's right to public education on taking a psychoactive drug such as Ritalin. Decisions concerning whether or how to change a person's thought processes must remain the province of the individual, not of government or industry.

Choice: The capabilities of the human mind should not be limited. So long as people do not directly harm others, governments should not criminally prohibit cognitive enhancement or the occasioning of any mental state.

August 20, 2003

When I consider our species' agile curiosity and its increasing ability to directly tinker with thinking, a cascade of philosophical, scientific, and societal subtleties leaves me swimming in an ever-broadening stream of potential benefits and harms.

Thomas Jefferson said, "No people can be both ignorant and free." Education can compensate for a lack of knowledge, but freedom of thought today involves a lot more than access to books and reliable information. Recent experiments show the promise of developing neurotechnologies to stimulate creative thought, or maybe even to help grow neurons.

As we find out more about how the brain functions, and how our own brains cope with lived complexity, the question of what we will do, individually and collectively, with our growing understanding requires serious focus.

Neuroethics, the topic I will be focusing on this week, calls for a consideration of the percolating social and ethical implications arising from these technological and scientific advances. My particular focus is that of cognitive liberty. Cognitive liberty is a term that updates notions of "freedom of thought" for the 21st century by taking into account the power we now have, and increasingly will have, to monitor and manipulate cognitive function.

As Richard Glen Boire and I guest-blog from the Center for Cognitive Liberty & Ethics over the coming two weeks, we will examine the ways in which social mechanisms designed to protect individual and collective freedom will be challenged by burgeoning developments in neurotechnologies.

August 15, 2003

After almost a century of research into the nature of electricity, the 1870s were the decade when the cluster of innovations that made up the new electricity infrastructure emerged -- alternators, dynamos, generators, transformers, switch gear, and power distribution systems.

As broad implementation plans took shape in the 1870s, smaller-scale electrification projects began to slowly revolutionize industry after industry. Low-cost, high-quality steel was one of the first products cheap electricity made possible. Radical process innovations such as the Bessemer and Siemens steel processes used inexpensive electricity to manufacture low-cost steel on a mass scale.

Steel and electricity changed society, reshaping how humans lived in close urban quarters. Until the 1880s few buildings were built more than five stories tall, but with the emergence of abundant, strong steel, skyscrapers were born. In 1885, the Home Insurance Building in Chicago became the first building to employ steel-skeleton construction, reaching ten stories. The subsequent erection in Chicago of a number of similar buildings made it the center of early skyscraper architecture. By 1913, New York began to edge out Chicago in the race for dominance with the construction of the Woolworth Building, which reached an incredible 57 stories.

It wasnt just steel frame construction that made skyscrapers possible. The electric lift, invented in 1886, was also needed to replace hydraulic lifts that could not go higher than five stories. At the same time, the telephone supported the skyscraper economy by making it possible for people to communicate among the new high rises. From 1890 to 1900, the number of telephones in use surged in the United States from 200,000 to over 1.5 million, most of which were deployed in newly constructed skyscrapers.

As cities built upward, they also extended downward. Taking advantage of the growing electrical network, urban electric railroads and underground railways emerged. From 1887 to 1900, London built a massive underground electric railway system whose highly engineered cars were built from inexpensive steel and moved through deep-bored tunnels. Across the Atlantic, the United States also leveraged the developing electricity infrastructure: over the fifteen years from 1890 to 1905, the share of city transit lines powered by electricity grew from 15 percent to over 90 percent.

With the invention of the electric light bulb in 1878 and further refinements, including the carbon-filament lamp, electric power stations found entirely new markets in public and domestic household illumination, replacing the toxic and inefficient gas lanterns that had to be constantly refilled. The diffusion of electric lighting across cities and towns -- in stadiums, factories, offices, and along walkways -- forever changed public and private lives.

August 14, 2003

If you are in the Bay Area tomorrow night, please come hear my talk on The Neurotechnology Wave (2010-2060) at the Bay Area Futurist Salon. I'll be sharing many of the thoughts from my forthcoming book -- Brain Wave: Our Emerging Neurosociety. If you are interested in having me speak at one of your events, please email me.

Also: A big THANKS to Pat Kane, Steven Johnson, and Paul Zak for their guest blogs. They have made it possible for me to make significant progress on the book, as well as setting the bar pretty high for Wrye and Richard, who will be focusing on neuroethics in the coming weeks -- not to mention the two other amazing guest bloggers who will follow them.

August 13, 2003

Elisabeth Hill and David Sally of University College London have recently completed a very interesting paper using the neuroeconomic method (real social interactions with payoffs) to examine cooperation and fairness in adults and children with autistic spectrum disorder or the less devastating Asperger syndrome. The authors specifically examine the role of "mentalizing," or "theory of mind" (the ability to interpret another's intentions), in strategic interactions. This is a nicely designed study using normal controls and several unrelated control tasks to determine subjects' baseline theory-of-mind and cognitive abilities.

Three games were used: the prisoner's dilemma (PD), the ultimatum game (UG), and the dictator game (D). [I'll presume readers know these strategic social interactions, but if not: PD and UG admit equilibria with either cooperation or defection, and D measures altruism through gift giving.] Normal children had trouble finding the best strategy in all games, while normal adults quickly found good strategies (and experimented with their parameters to optimize further). Autistic adults were about as cooperative as normal adults, and autistic children were similar to normal children. All children were more altruistic than adults.

One critique of this study is the lack of monetary rewards to motivate attention to the task (a typical shortcoming among psychologists). This study "paid" for performance with stickers for children and chocolate for adults. While these things are presumed desirable, their value varies across subjects (e.g., some adults don't care for chocolate). Cash is king here and has much more clearly interpretable effects. A second critique is the authors' very discursive writing, which makes the paper difficult to read (it is downloadable at www.ssrn.com).

Otherwise, this is a very nicely designed and executed study that tells us about social interactions in which the economically rational behavior requires little mentalizing ability. More sophisticated strategic interactions clearly do require this (e.g. see McCabe et al, "A functional imaging study of cooperation in two-person reciprocal exchange" Proc. Nat. Acad. of Sci., 2001).

Bottom line: many social and economic interactions do not require deep cognitive abilities, but are fairly quickly intuited using, e.g., market signals. This is good; it is why economies chug along with little intervention needed, as market participants can easily figure out what is best for them (and need not have some person or group tell them what they "should" do). This study gives us an insight into why this occurs.

August 11, 2003

Trust pervades nearly every aspect of our daily lives, yet the neurobiological mechanisms that permit human beings to trust each other are not well understood. In this research we find that when someone observes that another person trusts them, a hormone called oxytocin, which circulates in the brain and the body, rises. The stronger the signal of trust, the more oxytocin increases. In addition, the more oxytocin increases, the more trustworthy (reciprocating trust) people are. Interestingly, participants in this experiment were unable to articulate why they behaved the way they did, but nonetheless their brains guided them to behave in socially desirable ways, that is, to be trustworthy. This tells us that human beings are exquisitely attuned to interpreting and responding to social signals.

Our findings are even more surprising because monetary transfers were used to gauge trust and trustworthiness, and the entire interaction took place by computer without any face-to-face communication. Trust is signaled by sending money that participants have earned to another person in the laboratory, without knowing who that person is or what they will do. That is, there is a real cost to signaling that you trust someone. This research is part of a new transdisciplinary field called neuroeconomics, which measures the neurologic processes involved in decisions about money.

This is how the experiment works: people were recruited and paid $10 for showing up. They then took seats in a large computer lab and were matched in pairs, but this was done completely anonymously, so that no one knew (or would ever know) the other person in his or her pair. Half of the participants (decision-maker 1s) then had the opportunity to send none, some, or all of their $10 show-up fee to the other person in their pair. Whatever is sent is tripled. So, if $4 is sent, the other person would have $22 ($4 tripled, plus the $10 show-up fee the second person receives). The second decision-maker could then send some amount of this money back to decision-maker 1, but need not. This is how we produce a social signal of trust: decision-maker 1's only reason to transfer money to the other person is that he or she trusts that that person will understand why the money is being sent, and in turn will return some of it (be trustworthy). All subjects are told that the initial monetary transfer is tripled, and there is no deception of any kind.
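The transfer arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the payoff structure only; the constant and function names are mine, not from the study.

```python
# Payoff arithmetic for the trust game described above.
# Names (trust_game, SHOW_UP_FEE, MULTIPLIER) are illustrative, not from the study.

SHOW_UP_FEE = 10   # dollars each participant receives for showing up
MULTIPLIER = 3     # every dollar sent by decision-maker 1 is tripled

def trust_game(sent, returned):
    """Return (dm1_payoff, dm2_payoff) for one play of the game.

    sent     -- amount decision-maker 1 transfers out of the $10 fee
    returned -- amount decision-maker 2 sends back from his or her holdings
    """
    assert 0 <= sent <= SHOW_UP_FEE
    pot = sent * MULTIPLIER                 # DM2 receives the tripled transfer
    assert 0 <= returned <= SHOW_UP_FEE + pot
    dm1 = SHOW_UP_FEE - sent + returned     # DM1 keeps the remainder plus any return
    dm2 = SHOW_UP_FEE + pot - returned      # DM2 keeps fee and pot, minus the return
    return dm1, dm2

# The $4 example from the text: before anything is returned,
# DM2 holds $10 + $12 = $22, while DM1 holds $6.
print(trust_game(sent=4, returned=0))  # (6, 22)
```

Note that total money in the pair grows with the amount sent ($20 + 2 x sent), which is exactly why mutual trust pays.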

After each person made his or her decision, they were taken to another room, where four tablespoons of blood were drawn from an arm vein. Animal studies have shown that oxytocin, a hormone little studied in humans, facilitates social recognition and social bonding -- for example, the bonding of mothers to their offspring and, in some monogamous species, the bonding of males and females into a family unit. Based on the animal studies, we hypothesized that in the trust experiment people are forming temporary social bonds with the other person in their pair. This is just what we found. The stronger the signal of trust, the more oxytocin increases, and the more trustworthy people are. This is surprising given the sterile laboratory environment of the interaction, suggesting that the effect of oxytocin on face-to-face interactions must be quite strong.

We also found that women in the experiment who were ovulating were significantly less likely to be trustworthy (for the same signal of trust). This effect is caused by the physiologic interaction between progesterone and oxytocin, and it makes sense behaviorally: women who are, or are about to be, pregnant need to be much more selective in their interpretation of social signals, and also need more resources than at other times.

Standard economic theory (the Nash equilibrium) predicts that rational, self-interested people should never trust another person, and that if someone trusts you, you should not be trustworthy. Why? The Nash equilibrium says that if you are decision-maker 2, you should prefer more money to less, so you should not be trustworthy (that is, you should return no money to decision-maker 1). Decision-maker 1 should realize this and therefore never send anything to the second person. Yet we see abundant trust in the lab and in daily life.
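The backward-induction logic behind that prediction can itself be written out as a short sketch, using the same $10 show-up fee and tripling rule from the experiment; the function names here are hypothetical:

```python
# Backward induction for the Nash prediction in the trust game:
# a purely self-interested DM2 returns nothing, so DM1 sends nothing.

SHOW_UP_FEE = 10
MULTIPLIER = 3

def dm2_best_return(pot):
    # DM2 prefers more money to less, so the best response is to keep it all.
    return 0

def dm1_best_send():
    # DM1 anticipates DM2's best response and maximizes his or her own payoff.
    def dm1_payoff(sent):
        returned = dm2_best_return(sent * MULTIPLIER)
        return SHOW_UP_FEE - sent + returned
    return max(range(SHOW_UP_FEE + 1), key=dm1_payoff)

print(dm1_best_send())  # 0 -- the Nash prediction: send nothing
```

The behavioral data, of course, contradict this prediction, which is the point of the experiment.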

What the Nash equilibrium ignores is that humans, while certainly self-interested, are also highly social creatures with brains designed to interpret social signals; in other words, we care what others think about us, and our brains motivate us to take others into account. This could be called empathy. There is little evidence that creatures besides humans are empathetic, and indeed humans are empathetic even toward strangers. This reveals an important role for the emotions in decision-making. Further, such empathy enables unrelated humans to live together with generally little violence in large cities, and makes modern industrial economies possible.

My lab is now studying brain activation patterns when people receive signals of trust, as well as the physiologic responses to trust signals in patients who have neurologic damage. Trust is an essential part of our daily lives, from walking down the street to driving to countless other activities, so discovering the neurobiology of trust tells us something important about human nature: we are so highly social that we pick up social signals of trust and act on them even when we are not consciously aware of those signals. Our brain acts as an internal compass that guides us toward the right thing to do.

August 8, 2003

One of the other personal insights that comes out of thinking about yourself through the lens of neuroscience is an increased awareness of the different mental states you cycle through in a given day. The general categories -- sleepy, alert, energetic, thoughtful -- break into more precise sub-categories, like the folkloric tales of Eskimos and their rich vocabulary for snow. Subtle shifts of attention and awareness seem more vivid, because you know something about the neurological changes behind them.

One way to appreciate these different states is to take any number of attention tests, designed to pinpoint your particular skills at the various subsystems that make up the macro category of attention. You have systems that specialize in "sustain": remaining focused on a single item for extended stretches of time; and systems dedicated to "encoding": transferring that incoming data to your working memory. Each of the sensory inputs has its own channels as well: so you can be very skilled at visual encoding, but weak at auditory sustain. You can take tests to evaluate your skills at these and other attention tasks. Learning your strengths and weaknesses can help you compensate in the appropriate situations, at least until the neuroceuticals arrive to improve our weak links directly.

August 6, 2003

This interview with Paul Ekman, the UCSF psychologist who has become something of a celebrity in the past few years, is a good place to start answering the question I posed in the previous entry: how does knowing something about the mind's inner workings change the way we think about ourselves as individuals?

Ekman is famous for proving the universality of the basic language of human facial expressions (a premise notoriously rejected by Margaret Mead many years ago). But he's also brought to light the subtleties of our expression-reading skills, our knack for detecting the micro-expressions that go beyond the basic palette of seven primary emotions. These skills are part of what some neuroscientists refer to as our "mindreading" system: the part of our brain that's constantly trying to guess what other people are thinking, using all the potential clues available to us, many of which take the form of subtle changes in the musculature of the face.

Mindreading is one of those topics where the "hominid" approach and the more personal, introspective approach nicely overlap. We're all innately talented mindreaders -- unless we're autistic, or suffer from some other comparable disorder. We don't go to school to learn to read facial expressions, even though they utilize an amazingly sophisticated symbolic system. But some of us are better mindreaders than others -- we're better at reading those split-second changes in facial expression or vocal intonation, and thus better at assessing the true meaning of another person's inner thoughts and feelings.

The more you learn about the science of mindreading, the more you find yourself dividing up your friends into two camps: the mindreaders and the mind-dyslexic. It's not a psychological filter that you carried around consciously before, the way you might have thought of certain friends as being extroverts, and others being repressed. But once you apply the filter, it has a revealing quality: you find yourself saying -- "That's why I always had such great conversations with her!" Or: "No wonder his jokes always fell flat."

August 4, 2003

Greetings, all. As Zack alluded to below, I've just started to talk publicly a little about my next book, Mind Wide Open, due to be published in early 2004. So the timing for his guest-blog couldn't be more appropriate. Over the next few days I'll talk about some of the book's themes, hopefully tied to whatever is floating around the blogosphere this week.

I thought I'd start with a quick response to something that Pat wrote in one of his typically brilliant posts from last week. He talked about the opposition between "the old hominid responses: that repertoire of savannah inheritances, tragic and comic, that have become a consoling pop-science myth for so many people" and "the scary but exciting area of neurosocial innovation," with its "carefully-calibrated drugs open[ing] new doors of perception."

When I read this, it occurred to me that what has interested me about brain science the most over the past few years -- and what forms the cornerstone of this new book -- lies precisely in the middle ground between these two approaches. The evolutionary psychologists explain how our brains are wired, and the neurotechnologists speculate about re-wiring them. My interest is more prosaic, I suppose, but also I think more immediate, and maybe more useful, at least in the short term.

What I'm interested in is how simply understanding your brain's inner life -- both seeing it in action via imaging technologies, and simply learning about brain science in general -- can change the way that you think about yourself as an individual: your own quirks and passions and habits and fears.

So while I'm fascinated by (and have learned an immense amount from) both the "hominid" and the "neurosocial" approach, I'm right now interested in the plasticity that comes from neurologically-informed introspection. What happens to our layperson brains now that we're able to talk about our mental events in a much more direct, non-metaphoric language? Even without those carefully-calibrated drugs, understanding how thinking works will surely change the way we think. The question is: what kind of change is likely?

August 1, 2003

I'm always keen to expand what we mean by play, moving beyond the usual Puritan cliches of triviality and frivolity. And the capacity of science to give us the power to play with matter and biology itself is one of the most urgent "ethical" issues of all.

There's a great play quote from novelist Margaret Atwood in the New York Review of Books (unfortunately, not free-to-web), reviewing Bill McKibben's intelligent-Luddite analysis, Enough: Staying Human in an Engineered Age (extract here). McKibben's argument is that our new disruptive technologies (bio-, info- and nano-tech) allow us too much power, over nature and ourselves. Do we really want to head towards immortality, for example, if these technologies allow it?

Atwood asks us to question whether all this intelligence is being properly applied:

There are some very clever people at work on the parts that will go into making up our immortality, and what they're doing is on some levels fascinating - like playing with the biggest toy box you've ever seen - but they are not the people who should be deciding our future. Asking these kinds of scientists what improved human nature is like is like asking ants what you should have in your backyard. Of course they would say, "more ants".

Of course, a novelist whose job it is to construct virtual human selves all day, and ultimately has the power to create or destroy her characters, is going to admit to a mild "fascination" with the "toy box" of bio-science. And though she is in agreement with McKibben - that "we can accept or reject technologies according to our social and spiritual criteria" (huge point) - she's "not too sure we'll do it".

Two other thinkers on "playing God with science" perhaps worth exploring, and from very different perspectives, are Robert Kegan and Bruno Latour.

Kegan, a Harvard psychologist, says that our techno-scientific evolution far outstrips our mental evolution: thus, per the title of his book, we are 'in over our heads', subjectively drowning in the demands of life. How we close that gap and become, as Kegan puts it, "reconstructive post-modernists" - i.e., big enough to 'play across' the various levels of our personal development - is his agenda. How do we attain the wisdom to deal with all this?

Bruno Latour, a French sociologist of science, has a more direct answer about who gets to "play God" with science: we all do. In the edition of Wired magazine edited by Rem Koolhaas, Latour proposes an update of the old US anti-colonialist slogan, 'Taxation without Representation is Tyranny'. His version is 'Experimentation without Representation is Tyranny':

The sharp divide between a scientific inside, where experts are formulating theories, and a political outside, where nonexperts are getting by with human values, is evaporating. And the more it does, the more the fate of humans is linked to that of things, the more a scientific statement ("The Earth is warming") resembles a political one ("The Earth is warming!"). The matters of fact of science become matters of concern of politics...

Latour's aim is to extend the realm of players in science's future - "an imbroglio of spokespersons in a room". Will more voices improve our choices?