When I was 13, I once dreamt that a beautiful woman was sensuously stroking the palm of my hand, as a family of fridges hummed in the background. In reality, a huge, buzzing wasp had landed on my right hand. It idly walked around for a bit, and then stung me. After the shock had worn off, I was puzzled why my dreaming brain had stopped me from waking up to this potential danger. Contrast this with 6 years ago, when even my deepest sleep would be broken by the first sounds of my newborn baby daughter’s cries. How do our brains decide whether or not to wake us up, based on what’s going on in the world? And why does this policy change, depending on whether we’re dreaming or in some other sleep state?

In a recent paper in the Journal of Neuroscience, Thomas Andrillon and his colleagues have discovered intriguing clues that start to answer these questions. They used electroencephalography (EEG) to record their participants’ electrical brain activity while they completed a simple task: listening to a series of words and pressing a button with one hand if a given word referred to an animal, or with the other hand if it referred to an object. Each appropriate hand response was preceded by a spike of brain activity – a well-known EEG marker called the Lateralized Readiness Potential (LRP), reflecting extra activity over the motor cortex on the opposite side of the brain to the responding hand.
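For readers curious what this marker looks like computationally, here is a minimal sketch of how an LRP-style measure is conventionally derived from two EEG channels over left and right motor cortex. The channel names, data shapes and averaging details below are illustrative assumptions, not details taken from Andrillon’s paper.

```python
import numpy as np

def lrp(c3, c4, response_hand):
    """Toy lateralized readiness potential via the standard double subtraction.

    c3, c4        : (n_trials, n_samples) EEG over left (C3) and right (C4) motor cortex
    response_hand : array of 'left'/'right' labels, one per trial
    """
    c3, c4, hand = np.asarray(c3), np.asarray(c4), np.asarray(response_hand)
    # Contralateral minus ipsilateral activity, averaged across the two hands
    right_trials = (c3[hand == 'right'] - c4[hand == 'right']).mean(axis=0)
    left_trials = (c4[hand == 'left'] - c3[hand == 'left']).mean(axis=0)
    return 0.5 * (right_trials + left_trials)
```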

Crucially, the researchers had the participants complete this test both while awake and during all sleep stages. The experimenters were interested in how the participants’ brains responded differently in these states. To make sure the participants didn’t “cheat” by memorizing word–response mappings when awake (for instance, that the word “table” always means “press right”), there were separate word lists for wakefulness and for the different stages of sleep.

Intriguingly, as the participants drifted off to a light sleep, their LRP brain signal persisted, even after they’d stopped physically responding. This suggests that their brains were still working out the meaning of the words and how to respond. However, no such result was found for deep non-REM sleep or for REM (rapid eye movement) sleep – the stage where most dreaming occurs.

But the brain wasn’t completely oblivious to the outside world in these other sleep states. There were some trials where the participants had fallen asleep without the researchers realising (they only noticed when examining the EEG signals more closely later on). On these trials, the participants were still being presented with words for which they’d already worked out the meaning and appropriate response while they were awake. These familiar words did elicit the appropriate LRP response in the REM sleep stage, suggesting a shallow type of processing was still occurring.

One minor concern I have with these results is that whereas the LRP occurred about a second after the word was heard when the participants were awake, it did not show up until about 3.5 seconds after the word in all the sleep states – in the context of EEG, this is a huge difference. Was this really the same marker as the LRP in the wake state, perhaps delayed because of inefficient sleep processing, or something rather different?

One way to explore this question further is to look at another measure – the “compressibility” of the brain signal as recorded via EEG. If the signal is highly random, and so incompressible, then the level of consciousness is high and the person is probably awake, whereas if the signal is orderly and easily compressed, then the person may be in a deep sleep. Using this measure, Andrillon and his colleagues found that consciousness was highest when the volunteers were awake, lower for light non-REM sleep, and lowest in deep non-REM sleep. Meanwhile, in REM sleep, consciousness was almost as high as in the waking state. Given that when we’re dreaming we can feel quite conscious, but trapped inside the stories in our heads, these results all fit neatly with experience.
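As a toy illustration of the general idea (and only that – the authors use their own, more principled complexity metric), one could binarize a stretch of EEG and see how well a standard compressor shrinks it. The function below is a sketch under that assumption.

```python
import zlib
import numpy as np

def compressibility_index(eeg):
    """Crude 'compressibility' of a 1-D EEG segment.

    Binarize the signal around its median, pack the bits, and compress with zlib.
    A ratio near 1 means the signal is close to incompressible (more random,
    higher presumed conscious level); a small ratio means it is highly ordered.
    """
    eeg = np.asarray(eeg, dtype=float)
    bits = np.packbits((eeg > np.median(eeg)).astype(np.uint8)).tobytes()
    return len(zlib.compress(bits, level=9)) / len(bits)
```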

Next, the researchers linked this consciousness measure with the LRP. When the participants were awake or in a light sleep, the stronger the LRP signal, the higher the consciousness measure. One interpretation of this is that the more conscious you are, even in light sleep, the more you hear and understand, and the more the relevant semantic and motor response processes kick in (as revealed by a stronger LRP in this study). There’s a complicating twist, though, because in the REM sleep stage the researchers found the opposite relationship – the higher the consciousness level, the weaker the LRP – perhaps suggesting that in dreams, consciousness is at its height when people are better isolated from the outside world. Remember, though, that these REM LRPs (seen when the participants were asleep but the researchers originally thought they were awake) had more to do with a basic recall of previously consciously learnt word–response pairings than with a deeper judgement on the meaning of unfamiliar words.

The authors looked at another EEG marker (known as the N550) that further supported this interpretation. Present someone with a word and, if they are asleep, they will typically show a larger negative spike of brain activity around 550 ms after the word than if they were awake – a sign of the outside world being ignored. In the study, in both light and deep sleep, the larger the LRP, the smaller the N550, suggesting that effective processing of the words was competing with, and suppressing, the N550 stimulus-blocking mechanism. In contrast, during REM sleep, larger LRPs were actually related to larger N550s. This may be due to dream mechanisms boosting the shutting down of the outside world whenever it begins to encroach on their territory. It’s almost as if in non-REM sleep, salient features of the outside world can open a gate to deeper processing, whereas in REM sleep, these same features cause the gate to be held shut more tightly – unless they are life-threatening.

The emerging picture from this and related research is far removed from the naïve view of sleep as a brain simply half shutting down. Instead, sleep is a dynamic, complex process, dependent on what stage you’re in. Your brain has to walk a tightrope: it needs to remain asleep and isolated from the external environment, so that important neural regeneration and memory consolidation processes can occur (especially in REM sleep); but it also needs to keep you safe from dangers in the outside world, by being prepared to wake you up if urgent action is required. This new research shows that the weighting given to external signals can change depending on how important the current sleep stage is to your welfare, with REM sleep being especially well protected – even when wasp feet are masquerading as a beautiful woman’s fingers.

Last month I gave a public talk at the Salon Club in London about computer consciousness, as part of a lively evening of talks exploring the ways that artificial intelligence (AI) is increasingly encroaching on our lives. One key question that served as a foundation for the evening’s discussion was the main “transhumanist” goal: when will we be able to upload our consciousness into a computer?

Indeed, there is a prominent organization, the 2045 Initiative, whose stated intent is to hasten the moment when human consciousness can live on indefinitely in a non-biological substrate, with the realistic (they believe) aim of achieving this milestone by 2045.

At around the same time, AlphaGo was busy thrashing the world champion at Go, seemingly bringing the moment of truly aware, clever AI ever closer. As the flip side of the transhumanist question, does this mean that the time when we are all doomed to be destroyed by our AI overlords is just around the corner?

It is easy, especially if you fear your own mortality, to be caught up in the hype and optimism of such endeavours. It is also easy to resort to paranoia, especially when prominent scientists (in other fields), such as Stephen Hawking, proclaim doom-laden statements about AI spelling the end of humanity.

But I want to devote this blogpost to explaining why, from a neuroscientific perspective, 2045, and by extension the rise of our homicidal AI overlords, is fantasy. I’m a hopeful guy, and I do believe in the ingenuity of the human mind – borne out by so many achievements in the history of science – to overcome almost all obstacles. But even given this, I still see this key transhumanist milestone, if it is ever possible, as being centuries away.

In terms of AlphaGo, if Lee Sedol, the Go world champion it roundly beat, were to say a few meaningful words, or do a dance, or recognise a dog in the street, or learn a new game that afternoon, these would all be trivial achievements for the generalist human, but impossible for the specialist program. AlphaGo is a large leap forward in machine learning, but it is based on terribly crude algorithms compared to the human brain, and there are thousands of kilometres to go yet.

Let me start by summarising the most intricate feat of evolution in its four billion year history, and the most complex object in the known universe: your brain.

The Daunting Details of the Task

First off, anyone who doesn’t tell you that the task of digitally capturing a human brain is utterly mind-boggling is laughably wrong. The human brain has around 85 billion neurons, each connected to about 7,000 others, leading to about 600 trillion connections in all. Inside each of your skulls, this cabling connecting neurons together is 165,000 km long, enough to wrap around the world four times over! But that’s just the tip of the iceberg. The number of different types of neurons is counted in the hundreds, but may be as high as 1,000 – each kind performing a subtly different computational role. There are at least as many different types of synapses – the connections between neurons that allow information to pass between them.
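Just to show how the headline numbers above hang together, here is the back-of-envelope arithmetic (the inputs are the rough figures quoted in the text, not precise measurements):

```python
neurons = 85e9                    # ~85 billion neurons
connections_per_neuron = 7_000    # ~7,000 connections each
print(f"total connections ≈ {neurons * connections_per_neuron:.1e}")   # ~6e14, i.e. ~600 trillion

wiring_km = 165_000               # quoted length of neural cabling
earth_circumference_km = 40_075
print(f"times around the Earth ≈ {wiring_km / earth_circumference_km:.1f}")  # ~4.1
```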

Then there are the glial cells, roughly equal in number to the neurons, at about 85 billion. It used to be assumed that these play a supporting role only – providing structure and nutrients, carrying out repairs and so on. Increasing evidence is emerging, however, that glial cells play an information processing role as well, although exactly what computational purpose they serve is still an open question.

If this weren’t overwhelmingly daunting enough, there is an entire further level of computational complexity often overlooked. All of the above assumes that computations are limited to those between cells. But this assumption is utterly false. Inside almost every single biological cell, an incredibly complex set of computations is occurring at the interaction between DNA, RNA and proteins – with cascades of such switches being turned on or off to capture some key information, or to enable or disable certain cellular machinery. So inside every single neuronal or glial cell, key information processing steps are undoubtedly going on as well. Some of these computations might not be so relevant to the representation of our minds; they might instead just be looking after the general upkeep of the cell. But some low-level computational processes inside these so-called support cells might be critical to who we are, and how we think and feel. And this extra layer ups the complexity stakes – and the difficulty of the transhumanist task – by orders of magnitude.

From all this, we do have some ideas about the computational processes involved, especially for small, discrete problems. But our knowledge of how the brain computes information is very, very far from complete.

Current Progress

So with all this in mind, what is the current state of play in capturing the human brain in all its incredible detail?

First, the anatomy. The current state of the art in trying to capture a brain comes from a large group led by Jeff Lichtman at Harvard University. With work spanning six years, published in Cell last year (with a great video and summary in Nature), they have taken incredibly thin slices of the mouse brain, and then used an electron microscope to capture them in cellular detail, before digitally reconstructing them into a 3D whole. The result is a dust-sized speck of cortex, not even comprising an entire single neuron.

Reconstructed piece of mouse cortex, 1500 cubic microns in size

But nevertheless, within this dense web of neural cables, there are 1,700 synapses and 100,000 other important microscopic structures. It is monumental work, but to do the same job for the entire human brain would involve around a zettabyte of information – roughly the current total digital content on the planet. And that’s not the only problem. This is capturing a dead brain, slice by slice, in cellular detail. To really grab a person’s consciousness, we’d need to upgrade this process on multiple levels. We’d need it to be molecular, not cellular, to grab the computations inside cells; we’d need it to be carried out on a live human; and we’d need it to be done fast. A scan that takes years would mean you effectively have a different person at the end compared to the beginning. There is no technology available, or in the remotest pipeline, that can scan like this. And, perhaps, there never will be.
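To get a feel for why the whole-brain estimate lands in zettabyte territory, here is a rough scaling sketch; the data-per-volume figure is an assumed ballpark for electron-microscope resolution, not a number from the paper:

```python
bytes_per_mm3 = 1e15       # assumption: ~1 petabyte of raw EM data per cubic mm
human_brain_mm3 = 1.2e6    # ~1.2 million cubic mm of brain tissue (rough figure)
total_bytes = bytes_per_mm3 * human_brain_mm3
print(f"≈ {total_bytes:.1e} bytes, i.e. about {total_bytes / 1e21:.1f} zettabyte(s)")
```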

So what about making computational models of the human brain, working from the other end of the problem? Obviously this will include a lot of guesswork if we haven’t yet sorted out a basic understanding of what’s actually occurring in the brain. But leaving this aside for the moment, what’s the most ambitious project out there?

The Blue Brain Project is a billion-euro consortium with the aim of creating a digital brain. But after over a decade’s work, the latest achievement, published in Cell last year, is a computer model of a cortical column of just 31,000 rat neurons and 40 million connections.

Computer simulation of a cortical column of 31,000 rat neurons and 40 million connections.

This is about a 7,000th of a rat’s brain (which has 200 million neurons), and a human’s brain is about 400 times larger than a rat’s, so this model is about 3 million times smaller than a human brain. On top of that, this project has many simplifications. It includes only 55 different neuron types, doesn’t model glial cells or blood vessels, and doesn’t model any of the cellular machinery inside each cell.
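The scale comparisons in that last paragraph are easy to verify with the round numbers given (all figures below are the ones quoted in the text):

```python
model_neurons = 31_000     # Blue Brain cortical column model
rat_neurons = 200e6        # ~200 million neurons in a rat brain
human_neurons = 85e9       # ~85 billion neurons in a human brain

print(f"model ≈ 1/{rat_neurons / model_neurons:,.0f} of a rat brain")      # ~1/6,500
print(f"human brain ≈ {human_neurons / rat_neurons:.0f}x a rat brain")     # ~425x
print(f"human brain ≈ {human_neurons / model_neurons:.1e}x the model")     # ~2.7e6, i.e. ~3 million
```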

The Blue Brain Project is, therefore, a vast distance from making a digital human.

Future Projects

Perhaps the most exciting imminent project, which will attempt to connect the anatomical and computational wings together, is one funded by the US Intelligence Advanced Research Projects Activity (IARPA), the intelligence community’s equivalent of DARPA. It has provided $100 million to the Machine Intelligence from Cortical Networks (MICrONS) program, which has just launched. From the anatomical perspective, in collaboration with the Allen Institute, whose chief scientific officer is the prominent neuroscientist Christof Koch, as well as Lichtman’s group at Harvard, MICrONS aims to capture an entire cubic mm of mouse brain to the same detail as the speck of cortex shown above (though 600,000 times larger in volume). This will include some 100,000 neurons and around 10 million synapses, equating to about 4 km of neural wiring. All in a lump of brain about the size of a poppy seed. This is just one of a set of highly ambitious neuroanatomical projects attached to MICrONS.
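And here is the volume arithmetic behind the “600,000 times larger” claim, plus a reminder of how small even that target is next to a whole human brain (the human brain volume is my own rough figure, not from the MICrONS material):

```python
speck_um3 = 1_500          # the reconstructed speck above, ~1,500 cubic microns
target_um3 = 1e9           # 1 cubic mm = 10^9 cubic microns
human_brain_mm3 = 1.2e6    # rough human brain volume (assumption)

print(f"target ≈ {target_um3 / speck_um3:,.0f}x the speck")       # ~670,000x
print(f"target ≈ 1/{human_brain_mm3:,.0f} of a human brain")      # ~1/1,200,000
```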

But that is only half the MICrONS story. The other half is to infer the information processing that occurs from this mountain of brain data, and to come up with the kinds of neural algorithms that allow mammals to learn and perceive so effortlessly. In this way, it is hoped, computer simulations of biological data crunching that are far more sophisticated, general purpose and powerful than the AlphaGo program will emerge.

This is an enormously high-risk and incredibly ambitious project, with a huge budget. And yet, involving just a cubic mm of the mouse brain, which is less than a millionth of the volume of our brains, it is still utterly minuscule compared to the vast scales and complexities of the organ that generates our personality, memories and mental identity.

Computer Consciousness? Don’t hold your breath.

I love life, always have. And I don’t want it to end. We’ve already collected more books on our shelves than I can read in a standard lifetime. There is so much I want to learn, to explore, to experience. And the idea that I could live on, by uploading my brain into a robot in the next few decades, is immensely appealing. Sadly, though, such forecasts completely fail to take into account the unimaginable complexity of the human brain, and our infancy in attempting to capture and understand it. 2045 sounds to me like a completely naïve guess. 2345, maybe. But even then, I wouldn’t hold my breath.

I work from home a lot and about 20 months ago I converted my home office to one where I could stand or walk as I worked at the computer. For both health and productivity, it’s the best ergonomic decision I’ve made by quite a margin. And in this post I’d like to share with you my own experiences with this different way of working: the benefits it’s given me, the tactics I’ve adopted that work well, and the others I’ve shed that sometimes lead to problems.

Starting out

The evidence has been mounting for years that sitting at an office desk for most of the day, five days a week, is seriously bad for your health. And it’s surprising and disappointing that regular, but discrete, gym exercise does little to mitigate this. In September 2013 I took a close look at all the evidence and became largely convinced that I had to make changes. I don’t intend to explore the evidence here (for those interested, here’s a great short summary video and a decent lengthier article in Time). I will just briefly say that the consensus suggests that excessive daily sitting – which basically means a standard desk job – considerably increases your chances of early death from cardiovascular disease, stroke, and a range of cancers. It also places a greater strain on your back and neck. At the same time, being more physically active, even in moderate ways, helps protect you from almost every serious physical or mental condition going. The phrase of the moment, based on all this, is that “sitting is the new smoking.”

This was all a strong motivation for me, but I’m not sure it was even the main motivation to try to avoid the office chair as much as possible. Maybe it’s a little to do with my age (I’ll turn 40 this year), but my general alertness levels sitting in the office aren’t what they used to be, especially in the early afternoon dip. Remaining productive for as many hours as possible in the day is hugely important to me. If I feel at the 80% alertness level, say, then this can mean the difference between a useful day and a wasted one, since trying to decipher a difficult academic paper, or write one, or work on one of my other writing projects, tends to require all the concentration I can muster. So I wanted to do all I could to maximise that. Again, avoiding that office chair seemed to make sense to keep me alert.

For the first month or so, to try things out, I simply took over a tall set of drawers in a bedroom, and spent the day standing while working. I deliberately tried not to stand still, instead fidgeting as much as I could. My alertness levels increased dramatically, to the extent that I resolved never to spend whole days sitting and working again, if I could possibly avoid it. I also noticed that the general lower back niggles I would normally get as I spent the whole day sitting had largely disappeared. Having said all that, it wasn’t the most comfortable way to work. My ankles and calves would get sore after some hours of standing solidly, and other parts of my back occasionally also became sore.

But this was encouraging enough for me to take the next step.

The Kit

So in November 2013 I bought a cheap second-hand standing desk (with a motorised mechanism to turn it into a normal sitting desk if I needed that) for about £280.

The desk is perfectly fine, although I actually almost never change the height (the main time the height changes is when my four-year-old daughter invades the office, as she loves pressing the buttons and seeing the desk go up and down). So if I were buying again, I would probably just get a desk with a manual height adjustment and save a bit of money. I would say, though, that you shouldn’t compromise on some features of the desk. You want it to be solid, so that resting your hands on it won’t cause it to wobble. And unless you are very short, you should get a desk with a decent top height, because the treadmill will add around 10 cm. Many budget desks don’t go that high.

And I bought a very cheap treadmill to go under the desk, the Confidence Power Plus model, for £150 (actually, it’s now only £130 on Amazon), and I followed these instructions to remove the vertical handlebars so it would fit under the desk (but I kept the removed parts so that I could in theory return the machine under warranty).

The treadmill has done a stellar job over these last 20 months or so and I’ve been extremely happy with it. It was very easy to convert as per the instructions, taking only about 20 minutes. I did get a bit of a shock when I turned it on at the mains and nothing happened – until I eventually realised that the front of the treadmill has a separate power button – an enormous dunce moment on my part (though I don’t actually think this button is mentioned in the crappy manual). Once I finally got it working, though, I haven’t looked back, and to date have walked over 4,200 km on it (obviously, as a scientist, there is a spreadsheet with daily data involved!), and it still works like new. I’m sure it helps that I’m on the small side (1.70 m, about 60 kg). But still, I definitely think I’ve got my money’s worth from this machine.

I am extremely conscientious about looking after it, though. Given how much I use it, at the start of every week I apply a liberal amount of silicone lubricant under the belt. And every few months, usually when I hear some unwanted extra noise in the machine, I oil all the moving parts (including taking the front cover off and oiling the motor), and that solves the problem.

A small word of warning: when I bought the treadmill, I had powerline (mains wires) based internet in my home office, and turning on this treadmill completely wrecked that. I tried a bunch of things, but in the end the only solution was to use wifi instead, which works fine with the treadmill.

It is by default set slightly uphill, so I’ve put a blanket under the back feet, as you can see, and use a spirit level to make sure it is completely flat. You might think this is silly, and I am avoiding a more intense workout. But I’ve found that spending whole days walking uphill puts too much of a strain on my ankles.

The machine starts at 1 km/h and increments up in 0.1 km/h steps, up to 10 km/h. Occasionally I’ve taken it up to 7.5 km/h and it’s been fine with this.

The treadmill stops after 30 minutes. Initially I thought this was a pain, but now I appreciate this feature, as it gives me a little break every so often. And it probably helps prevent the machine from overheating.

In hindsight I would probably buy from the same company but the next machine up, which has a few incline levels, a more powerful motor, and a slightly wider belt.

I have in the past suffered from RSI, so I like my system to be as ergonomic as possible. As you can see, I have a tapered keyboard and a vertical mouse. And I have nice thick gel pads for both, which, if you’re walking, are really useful for damping down the movement and keeping your hands relatively still.

Initial regime and effects

I started slow and easy, at just 1.6 km/h in 30-minute sections, with long standing breaks, walking just a few hours a day for the first few days. Then I built it up to the whole day, although always with a few minutes’ standing break between each 30-minute walk session. From the second week I went up to 2.4 km/h. The next month was 3.2 km/h, and finally I reached my current standard speed of 4 km/h for most of the day, with occasional standing breaks in between.

I quickly found that the aches from standing all day disappeared. My back felt stronger and pain-free, and my calves were fine. The only issue was that in the first week or two it was plain physically tiring to spend most of the day walking, and my thighs ached as if I’d been on a good run.

My alertness levels were even better than when I spent the day standing, and only seemed to flag occasionally right at the end of the working day if I started getting a little tired.

With all this extra daily exercise, I slept better than I’ve done in many years, probably since my early 20s.

Initially my appetite almost doubled, I assume as I was building up new stamina-based muscles. At the same time, I was losing weight. I wasn’t exactly fat to begin with, maybe with a BMI of about 22.5. But within the first month or two this gradually settled to a BMI of 20. And after the first month, even though I was ramping up my activity steadily, my body must have acclimatised to the new system, as my appetite returned to normal.

I realised various things from these initial excursions into office walking:

Standing all day at a desk is less natural and more of a strain than walking. And if I possibly could, I would always try to spend much of the day walking while working.

I wasn’t a fan of the really low speeds below 2 km/h. Although it was a tad easier to type, it didn’t feel natural and taxed my muscles a lot more than the slow speed indicated. I was actually happier walking faster. Only if I was doing fine-grained mouse work did the walking interfere with anything, and I’d just stand for those parts.

Walking doesn’t interfere with my concentration at all – after a short time, I hardly notice I’m doing it and am totally focused on the work (see comments at end).

With a little pushing, your body can get used to almost anything. During the first month, it was a real struggle to motivate myself to also do my usual 3 to 5 days a week of gym exercise after work. But after a month or two I found that a whole day of walking was something I could, if you’ll excuse the pun, easily take in my stride.

I came to realise that all this activity is what my body is built for and that sitting all day is just very unnatural. I suppose the body associates sitting and lying with time for rest and sleep.

Current regime and my tweaks

Now I rarely walk at less than 4 km/h, in 30-minute chunks, with a minute’s standing break on average in between. I try to mix up the speeds a little, because even marginal changes in speed seem to hit your muscles differently, and constant repetition of exactly the same movement can cause strains. Most of the day I’m walking. Otherwise I’m standing. The office chair is completely superfluous, and I think it’s been over a year since I sat on it to work.

My default work mode is usually to type on the computer, and for that I stick to around 4 km/h, and can type about as well as when standing. I sometimes also use voice recognition software, and the treadmill sounds don’t seem to interfere much with this (my microphone is noise cancelling, though).

If I’m watching an online lecture, or reading an e-book or academic paper, then I might bump up the speed to 6.5 to 7.5 km/h for that hour, as a moderate workout. I usually do this around the end of the working day. Recently, just to spice things up, I’ve also bought a weighted vest, so that I sometimes add about 5 kg to my bodyweight as I walk.

I have flat feet and a dodgy right knee (cartilage and ACL damage following a football injury), so try to be careful to avoid strains. If I am on the treadmill, I always wear decent running shoes (with orthotic insoles for the flat feet), and replace them regularly.

Nowadays, if I’m working from home, my usual daily walk is between 15 and 20 km, though if there is a lot of reading or online lecture watching, that might get closer to 30 on occasion. I only feel slightly tired by the end, and rarely feel any muscle aches any more.

General effects

Walking helps keep me awake and focused on the work. I definitely feel more productive and creative while walking. Occasionally, especially when out and about, I have to sit by my laptop for much of the day, and I’m struck by how lethargic I can feel in comparison.

When meals come, it’s hard to describe, but I feel healthily hungry, rather than eating for its own sake. And although I do eat sensibly, I can probably eat whatever I like and not gain weight, as I’m burning so many calories during the day.

As to sleep, before I started this regime I was resigned to the fact that maybe, with my kind of brain and age, poor quality sleep was the norm. I never felt I had a deep, nourishing sleep anymore. Part of the problem probably was caffeine intake, and for the last half a year I’ve completely cut out caffeine for the first time in my adult life, and that’s also been transformative (after the initial headaches and exhaustion had faded – I may write another blog post about this). But being physically active for much of the day definitely made an enormous difference to my quality of sleep. Perhaps I sleep a little less now too (6–7 hours, instead of 8–9 before), because when I am asleep I’m pretty dead to the world.

In my daily life, I have so much more stamina. A long mountain hike on holiday feels like a small stroll. The issue now is more that if I have a relatively sedentary day (I have never bothered to sort out a standing desk setup at the university campus), I can feel restless, as if my body actually needs all that activity now, and I can end up not sleeping so well.

Another major change I’ve noticed is in my illness levels. I don’t think in the last 20 months I’ve taken a single day off work, or had a day of a raised temperature. Given that the house has a rampant infection-spreading machine (i.e. my young daughter), this has been even more surprising to me. It’s not the case that I’ve never fallen ill. I’ve definitely been infected, but less than normal, or at least less than my wife, who beforehand used to tease me that she would get ill far less frequently than me. It’s just that when I have been infected since walking most days, I usually haven’t actually felt ill. And I’ve always found that staying active, getting on the treadmill and working anyway, has been the best way to shake off any early symptoms of an illness. Although obviously this is all anecdotal, I think there is very good evidence that being physically active does boost the immune system.

A Wider Change?

I wish I’d implemented a treadmill desk 20 years ago. And I definitely plan to carry on with this habit for the rest of my life, not just for the health reasons, but because I can work so much more effectively while being physically active. Spending my office time walking has protected me from illness, and improved my weight and stamina. It has radically aided my quality of sleep. And my alertness and focus at work have significantly increased.

If you have an office job, and if there is any way for you to install a similar system, then I would strongly recommend that you do so. Standing is definitely better than sitting, but it can bring its own problems from being stuck in the same position for hours on end. From my experience at least, walking is actually easier on your body, better at building core muscles, and obviously vastly superior at burning calories; I’ve also found it to be the best way to focus your mind on a complex problem.

I also think it’s time we evaluate optimal systems for schools, universities, seminars, conferences, office-based businesses and so on. For concentration, productivity and health, perhaps the default during the day should be to stand. I will be trying to press this for my daughter’s school as a local action, but perhaps there should be a discussion at the wider level, to make standing desks at the very least the standard in any environment that otherwise would require long periods of sitting.

This is just a short post to flag up a really fascinating, and I’m sure highly entertaining, event happening in the UK at the end of January and beginning of February. The London Science Museum is hosting a small festival about those intriguing, terrifying, yet strangely popular cultural icons, zombies. But it being the Science Museum, they are boldly and inventively using zombies as a metaphor to explore consciousness science. For instance, are zombies conscious, and if so, by how much? How could we tell? And what ethics should you apply to zombies, based on this knowledge?

Then the museum is hosting a “ZombieLab” for the whole weekend of the 2nd/3rd Feb, where there will be loads of events, including about half a dozen zombie games. I’m giving a talk on each day, and generally helping out. It’s free to attend and looks like a fantastic festival. If you spot me there, please come and say hi, and feel free to ask me anything you like about the science behind the event.

I’ve also written a couple of articles on issues related to the book, in various places.

For instance, I wrote an article in Wired UK magazine about the perils and limits of unconscious decisions and learning (NB online version of article might take a few days to turn up, but it’s out in the old fashioned paper version right now).

And there’s an article called When Do We Become Truly Conscious in Slate magazine, which right now is the most read story on the site, beating down into second place a feature article about a New York dominatrix! So maybe there’s hope for science yet!

The book publicist has been doing a wonderful job on my behalf and I’ve also been or will be interviewed for various radio stations about the book, consciousness or the brain, and you can find out details about those here.

So please grab a copy of The Ravenous Brain – and if you want to tell me what you think of it, I’d love to hear from you.

Jonah Lehrer is one of the hottest science writers around. But this week, in a dramatic fall from grace, he resigned from his staff position at the New Yorker, and his publisher has removed his latest book, Imagine, from sale. The catalyst was the revelation, uncovered by the online Tablet magazine, that he had fabricated quotes from Bob Dylan.

I had a few small interactions with Jonah Lehrer in late 2009, and looking back, they perfectly reflected both the reasons for his fame, and his impending troubles. At the time he was in charge of the Scientific American Mind Matters blog, and I was writing a piece for this. In a field where some editors are rather brusque, he in contrast was extremely friendly, complimentary, charming, helpful and supportive. It was the easiest thing in the world to like him, and I dearly hoped to have more dealings with him in the future.

At the same time, though, he wrote a News Feature article for Nature with a glaring factual error in it, in a field I know intimately (NB Nature have just corrected the error, but if you want to view the original article, with error intact, you can do so here). He was writing about the celebrated mnemonist, Shereshevsky, and stated, “After a single read of Dante’s Divine Comedy, he was able to recite the complete poem by heart.” This poem is 700 pages long, so that is quite a feat, especially given that Shereshevsky didn’t speak a word of Italian, and the poem was presented to him in its original language. The truth, instead, as Luria writes in his wonderful little book, The Mind of a Mnemonist, is that only a few stanzas of the poem were presented to Shereshevsky.

This doesn’t detract from Shereshevsky’s exceptional skills, though, since he was tested on this foreign set of lines 15 years after this single reading, and was able perfectly to recall not only every word, but every aspect of stress and pronunciation. Unfortunately, Lehrer didn’t recount any of these details, which to me are in some ways more staggering.

When I emailed Lehrer to point out this mistake, his reply was that “it was the one fact my editor added in the final draft…”

At the time, I simply assumed this was true. But now I don’t. This morning I contacted his editor at Nature, Brendan Maher, to ask about this, and Maher told me that this mistake was present in the first draft of the article that Lehrer sent to him, so was most definitely not an inaccurate last minute addition by the editor. To add insult to injury, after I’d pointed out this mistake to Lehrer, he nevertheless repeated it verbatim 7 months later in this Wired blog article, and then 6 months after that in another Wired blog article.

While I enjoy Bob Dylan songs, and admire the man, to me the furore shouldn’t have exploded following this fabrication revelation (though perhaps that was the last straw?), but months before, when some of the early reviews for his latest book, Imagine, came out.

In Lehrer’s previous book, How We Decide (also known as The Decisive Moment in the UK), I experienced yet again the same twin traits of charisma and lack of care over factual accuracy. The writing was utterly engaging, charming, oozing with talent, but at the same time peppered with basic errors. For instance, on page 100 he writes, “This kind of thinking takes place in the prefrontal cortex, the outermost layer of the frontal lobes.” This is anatomical rubbish – the prefrontal cortex instead, as the name implies, is simply the front-most section of the frontal lobes. Layers have nothing to do with it. I expect such mistakes from less able undergraduate students, who are too lazy to read the first line of the relevant Wikipedia article, but never ever in a respected science book. Then on pages 112–113, he writes “the first parts of the brain to evolve – the motor cortex and brain stem.” Where did this come from? The brain stem very probably evolved hundreds of millions of years before the much more recent cortex, which the motor cortex is obviously a part of. So this is completely wrong as well. One last example (of many more) on page 100 again: “Neanderthals were missing one of the most important talents of the human brain: rational thought.” To me, rational thought is what keeps most species of animals alive, but at the very least can you make advanced tools and use fire, as Neanderthals did, without “rational thought”?

I was rather surprised to find that How We Decide received almost universal critical acclaim, when the science within it, although beautifully and stylishly explained, was error-strewn and somewhat superficial. Most reviewers know little science in detail, I suppose, so don’t notice these errors that scream off the page to a jobbing research scientist. But at what point should these errors be caught?

I am a little ashamed to admit that I felt relief and a little pleasure that Lehrer’s latest book, Imagine, received what I would call a more accurate set of reviews. My favourite is in The New York Times by the highly respected Harvard scientist, Christopher Chabris, which in part lists a similar set of simple neuroscientific facts that Lehrer got wrong (and here’s another great critical review, by Tim Requarth and Meehan Crist).

Imagine is sold as a science book, and so the explanation of science doesn’t really suffer if the odd Bob Dylan quote is made up. So although such an act is utterly sloppy, potentially fraudulent and very embarrassing, I don’t feel that this is the aspect of the book we should be pouring the majority of our scorn over.

However, the main purpose of Imagine, to impart science, does suffer significantly when many of these elementary yet important scientific facts are just as wrong as the Dylan quote, and perhaps also if the topic is dealt with in too superficial a way. Chabris’ review came out on May 11th, and it should have been at this stage that the publisher stepped in, and pulled all copies of Imagine off the shelves for a few months, until a factually accurate replacement was available (preferably checked by an actual scientist). And the newspapers and magazines that Lehrer contributed to should have paused and thought about fact and source checking at this point, in early May. Instead, Lehrer was hired as a staff writer for the New Yorker a month later.

I want to emphasise that I think Jonah Lehrer is incredibly talented. He has an enviable writing style, and can talk with amazing eloquence. So it infuriates me even more that he currently lacks rigour in his work, and seems habitually to adopt a deceptive schoolboy attitude to mistakes when they are revealed to him, rather than maturely owning up to his errors. And I do really hope he bounces back from this. If he immersed himself more deeply in a topic, stopped cutting corners, and fact-checked religiously, then he could easily reclaim his position towards the top of scientific journalism.

But to me there are wider issues that Lehrer’s case highlights. I’ve written before about the problem of fact-checking and trust within the neuroscientific community (specifically surrounding problems in neuroimaging reporting). But the issue in scientific journalism and book publishing is so much worse.

Cognitive neuroscience might have its set of problems, but at least when we publish an academic paper, it has undergone peer review, where a few other scientists have carefully read through it, and have had an opportunity to highlight problems. For my current general audience science book, The Ravenous Brain (incidentally due out one month today), I took it upon myself to ensure that the book went through an informal peer review process, with academic colleagues reading the entire manuscript, to check for errors. I believe for any general audience science book, but especially those written by non-scientists like Jonah Lehrer, the publisher and author should always include academic scientific review as part of the process, to catch the kind of errors that Lehrer repeatedly makes before they turn up in print. Although my main experience is in the book field, the same applies to newspaper and magazine articles about science.

Another issue brought into focus by Jonah Lehrer is that he clearly has a winning formula for writing popular science books, and might well be able to retire on the earnings from his handful of years of communicating science. This is a very rare position in publishing. Science writing for the general audience should definitely be engaging, fascinating, even inspiring, and there’s no doubt that Lehrer has solved this part of the equation. But ideally it should also be substantive, even challenging at times, not hiding the complexities inherent in almost all science, but guiding the reader carefully through them. Can a non-scientist succeed in this second aim? Possibly in rare cases, but undoubtedly it is far easier for a research scientist within the field to capture this aspect of the work. I wish more scientists wrote for a general audience, and definitely wish that more newspapers and magazines engaged scientists on articles of scientific content. In the magazine sphere it isn’t uncommon for scientists to pair with journalists on articles, and I think it would be great if more articles were written with such partnerships. And perhaps this should be a more common model in the science book realm as well.

In this increasingly competitive academic culture, career respect is almost entirely related to the quantity and quality of academic publications. I would love to see at least a gentle cultural shift, where public engagement with science is given more priority for scientists, not just in the odd talk, or an afternoon of public experiments once a year, but in actively providing the time and space for scientists to produce general audience science articles or even books.

Finally, I think the bottom line here is trust. Lehrer betrayed his readers’ trust, not just by making up Bob Dylan quotes, but perhaps more importantly by pretending that his scientific descriptions were carefully, rigorously checked and sourced, when they weren’t. And just as vitally, he didn’t think to update his work when mistakes were apparent. But part of the blame also lies with the industry, in failing to create a pressure for accuracy, such as with a pre-publication professional critique. We trust our magazines and publishers to oversee their writers, but that oversight doesn’t always happen. With the explosion of blogging, tweeting and so on, I hope that scientists can increasingly keep tabs on such issues, and make them public, as Chabris so ably did with his review of Imagine.

But perhaps you also have a role to play, in always keeping a hint of doubt in your mind – perhaps a little more for journalists than for scientists. And with the internet an increasingly interactive place, you often have the power to check facts yourself, badger authors for sources, or put questions and requests for clarification to other scientist bloggers. This way we can all do our bit to raise the quality of scientific writing.

Go to the nearest window and stare for a moment at an object outside – maybe a car if one is nearby. Now that you’re back, it’s an easy task to recall the color of the car, isn’t it? But how many of the surrounding objects were you aware of? We all feel that we’re conscious of the entire scene, kind of, but would you have even noticed if a tree in your periphery had started magically spinning on its axis?

(Freeman and Simoncelli (2011) have shown that, when you are at the right distance and staring at the lefthand dog, both versions of the image look the same to you, even though one version has scrambled much of the periphery)

In analogous fashion, I personally can drive for miles locked in a daydream, but I don’t crash – is that because some of my consciousness was still managing the driving, or was I purely on autopilot, with my unconscious mind watching the road and controlling the car for me? This general question of the boundary of our awareness during wakefulness is what I will address today.

It is a question that dominated the recent consciousness science conference, ASSC, of which this is my third and final report (after I talked about the science of hypnosis and magic in the first installment, and the neural symphony of consciousness fading in the second). Many talks throughout the conference aggressively took their theoretical stand on one side or the other of this debate, and there was an entire symposium devoted to discussing the question of whether we can be conscious of far more than we can accurately report.

Ned Block chaired this symposium. He is the progenitor of the modern version of the idea that there is a clear distinction between the mere feelings and sensations of consciousness, known as phenomenal consciousness, and the kind of consciousness that we can report on, can behaviorally react to, and so on, which is known as access consciousness. In other words, it’s almost as if we have a far wider conscious border than we generally think we do, with a much narrower, poorly erected fence well within it, marking where consciousness becomes functional for us.

This position has rapidly become very popular, and not just in philosophical circles. For instance, the president of ASSC during the conference, Victor Lamme, entirely subscribes to the theory, and incorporates it into his own prominent neuroscientific theory of consciousness, the Recurrent Processing Model.

Two key experiments are used to defend the distinction between phenomenal and access consciousness. The first is known as change blindness and is illustrated by the figure below:

Here, the left and right images are alternately shown on the screen, and it takes subjects a surprisingly long time to notice the change (if you haven’t spotted it, look carefully at the wall behind the figure). Ned Block argues that we must be conscious of the whole scene, but only in a phenomenal way, just as you had a sense of the whole scene outside your window, even if you only focused on the car. Only when the key difference is noticed within our more limited access consciousness can we actually do anything about it, such as proclaim that the wall is different between the two images.

But was detailed information of the whole scene even processed in our brains? A related experiment suggests that it might be.

Imagine that for a fraction of a second you see a grid of letters like the following:

Even though it’s present for the briefest of moments, you may still have a sense that you definitely saw all twelve letters, but when asked a second later most people can only accurately report 3–4 letters, at random places in the grid – so only a single letter from each row on average. Strikingly, though, if as soon as the grid disappears a cue tells you to focus on just one of the rows, you can still report 3–4 items from that row, as if somewhere in your head there was information about all 12 items. Ned Block thinks this test, known as the Sperling Task, demonstrates that our phenomenal consciousness has a superior capacity to our access consciousness – our phenomenal consciousness, he claims, holds information about all 12 items. But our access consciousness only has space for 4 items, so there is an overflow situation, with most of what’s in our phenomenal consciousness not being available to our access consciousness.

The upshot of this is that the form of consciousness that we can report on, can attend to, and so on is tiny – really very limited compared to the much vaster expanse of our phenomenal consciousness. But is this really the case?

I certainly don’t think that these conclusions are the only ones that could be drawn from these experiments. Let’s for the moment assume that phenomenal consciousness doesn’t exist, and that consciousness is indeed made up of feelings and sensations, but that these are all ones that we can talk about, are attending to, can take actions in response to, and so on. How would we then explain the change blindness experiment, where people take ages to spot the wall difference between the two pictures, for example? It’s quite possible that we are aware of the entire scene, but only in the vaguest way, and for a brief moment, with everything that we don’t attend to rapidly fading from consciousness. We point our attention at what’s important about the scene, what we expect from memory to be key features, such as the face, and so we don’t attend to, and are not conscious of, the change for many seconds. There is no extra information below what we can report, because what we can’t report is barely processed, rapidly dying data. Even if there were extra information, though – so what? We know from many other experiments that you can have unconscious knowledge, so why not here as well?

As for the experiment with the grid of letters, the Sperling Task, there’s absolutely no evidence that we do indeed have some hidden knowledge superiority for all twelve letters. Instead, all letters are very briefly represented in our visual system and we use attention to boost what we can and place it in consciousness, which happens to be only about 3 or 4 items. This either happens at random locations, without a cue, or on a single row with a cue. But again, even if there was some evidence that all 12 letters were truly still accurately recorded in our brains a few seconds later, so what? If we can only consciously access and recall 4, that’s what we are conscious of a few seconds later. We may at the instant that the letters flash on screen have the weakest conscious sense of all 12 letters, but experiments have shown we aren’t conscious of them as letters, but just as vague objects. It’s as if we have the capacity to store up to 4 items in consciousness, as fully processed items, but we can also smear this conscious (or attentional?) resource more widely, to take in more objects in a more superficial way.

In general, some of the confusion over this conceptual issue rests on the fact that consciousness doesn’t have to equal every snippet of data in our brains – no one really thinks it does. Instead, consciousness seems mainly concerned with the endpoints of analysis, those large structures of data, which means we just can’t help consciously perceiving a chair when we see it, instead of all the constituent textures, shapes and colors that comprise it.

One of the most impressive talks of the conference, by Michael Cohen, highlighted just how ludicrous this phenomenal/access consciousness distinction really is. He asked us to imagine some future time when we could surgically alter a man’s brain so that he could still phenomenally perceive color, but this was barred from his access consciousness. What happens when you present the man with an apple? What color does he report it as being? He’ll quickly and confidently say that this is a very strange apple indeed, being entirely grey in color. If the scientist insists that his phenomenal consciousness still perceives it as red, even if he is convinced and tells us it is grey, he’ll laugh at the idiocy of this suggestion – he experiences it as grey, can tell us this, remember the greyness and so on, and so it just makes no sense to claim that there is some other kind of consciousness hidden from him that he has no access to, which differs from his own reportable experiences.

A related important issue for the theory, raised by Sid Kouider in the symposium, is that for it to be a real scientific theory, it needs to be potentially falsifiable. But could we ever devise an experiment that showed that this phenomenal consciousness that we have no clear awareness of, can’t speak about, act on, etc. is actually unconscious after all? What possible benchmark or index could we come up with to potentially prove the non-consciousness of phenomenal consciousness? If we can’t do this, then doesn’t that make the theory meaningless?

The upshot of this is that there is only one kind of consciousness and it really is severely limited in some sense. But there are two reasons for why this isn’t crippling: first, once we’ve consciously learnt some complex skill, our unconscious minds are supreme at running those automated programs, for instance by driving a car while we daydream. Second, we may only be able to be fully conscious of a handful of items, but we can draw on a huge knowledge-base for each of those items, understand them profoundly, and examine how they relate to each other. It is this last conscious process, I believe, that makes humanity such a powerful species, capable of such sparkling insight.

Fascinating piecemeal new details about consciousness were revealed at regular intervals throughout the conference. For instance, have a look at this image below:

Believe it or not, the inner circles on the left and right are the same size (this is known as the Ebbinghaus illusion). Intriguingly, some people are more susceptible to this illusion than others (young children are hardly susceptible to it at all). Geraint Rees’s lab has been looking into why this might be, and has found that those with a larger primary visual cortex have a less intense illusion. Exactly why this is remains unclear, though Rees speculates that it’s as if there is a greater resolving power of more neurons in a larger visual cortex, so less opportunity to get confused in such ways.

Another exciting new feature of consciousness revealed at the conference relates to the question of whether consciousness is an all-or-nothing affair, or a continuum. Say there is a faint object in front of you – are there only two options: fully consciously perceiving it, or being completely unconscious of the object? Or, instead, are there many different levels, where you can at times be partially aware of the object? The answer, as Bert Windey and colleagues showed, is that it actually depends on what you are trying to perceive. If it’s a simple feature, such as a colour, then our consciousness works in a more graded fashion, where we can catch weak glimpses of the object at times. But if the feature is something more conceptual and high level, like a number, then the situation is much more that we either are conscious of it or we aren’t – there is no in between.

But the highlight of the early part of the conference for me was a fascinating symposium by Gernot Supp, Melanie Boly and Emery Brown, on what happens in the brain when consciousness fades. A common way of studying this is to examine what happens when a general anesthetic drug is administered. The current answer is that the prefrontal cortex at the front of the brain and the thalamus (a central hub in the brain’s network, situated in the middle of the brain) start to sing the same tune: their rhythms become tightly harmonized in the alpha band (about 10 Hz). Given that alpha waves are linked with relaxation, this is all unsurprising so far. What is more surprising, though, is the mechanism by which this shuts down consciousness. The prefrontal cortex and thalamus are closely linked with consciousness, probably by acting as a general purpose staging area for any specific conscious contents arriving from elsewhere in the brain. In anesthesia, these two key brain areas for consciousness are generating such an intense, harmonious (alpha wave) duet that the rest of the brain is barred from the song, and other cortical areas become isolated. And so those other parts of cortex that give detail to consciousness – managing our senses or memories, for instance – can’t access this consciousness network, and without any specific content to consciousness, there is no consciousness. This cutting-edge research highlights the emerging picture of the neuroscience of consciousness, where local islands of activity are simply not good enough. Instead, much of the cortex has to collaborate in a global wave of activity for consciousness to occur. If you are interested in learning more about how the brain generates consciousness, you might like to take a look at my upcoming book, The Ravenous Brain.
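For anyone wanting a concrete sense of what “singing the same tune in the alpha band” means in signal-processing terms, here is a small sketch – purely illustrative, and not the analysis used in the talks – that measures 8–12 Hz coherence between two simulated signals sharing a common 10 Hz rhythm:

```python
import numpy as np
from scipy.signal import coherence

fs = 250                                        # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)                    # 30 seconds of simulated data
shared_alpha = np.sin(2 * np.pi * 10 * t)       # common 10 Hz oscillation
pfc = shared_alpha + 0.5 * np.random.randn(t.size)        # "prefrontal" channel
thalamus = shared_alpha + 0.5 * np.random.randn(t.size)   # "thalamic" channel

f, cxy = coherence(pfc, thalamus, fs=fs, nperseg=2 * fs)
alpha = (f >= 8) & (f <= 12)
print(f"mean alpha-band coherence ≈ {cxy[alpha].mean():.2f}")  # close to 1 for this toy example
```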

In the next and final part of my report on the recent ASSC conference, I’ll be discussing its main theoretical debate: whether our awareness is far broader than the limited part of our world we can consciously describe.

Over the past week I attended the annual conference of the Association for the Scientific Study of Consciousness (ASSC), which last year was based at the exotic location of Kyoto in Japan, but this time was hosted by my own department in Brighton, in the UK. An academic conference is an intense week that I always look forward to. The social aspect for me is the highlight: to make up for long absences, we usually take advantage of every lunch, dinner, and post-dinner pub session, in large groups. Here I can catch up with old friends from around the world, or easily make new friends. But the social bustle can sometimes be a little unwieldy: I felt somewhat sorry for one Italian restaurant manager on the first night, when about 50 of us, in three connected groups, descended on his establishment at 9:30, all hoping to be fed before closing time.

The other aspect of conferences that I can’t wait for is the feeling that I’m riding the crest of the research wave, with many talks and academic posters presenting data that hasn’t yet been published – mine amongst them, as I share my own ideas and experiments. I love discovering these new pieces of the puzzle, and also love the discussions – sometimes rather heated – that are a product of this sharing of information. It’s a rather frenetic time, with around 20 talks to hear during a long day, along with perhaps 100 relevant posters to visit, but as long as the coffee keeps flowing freely, it’s manageable.

At this conference there seemed to be more than the usual crop of new clues about the science of consciousness, and I thought you might like to hear about a few of the highlights.

The first surprise came during a pair of workshops at the start of the conference. I opted for the hypnosis workshop to begin the conference on the Monday morning. Hypnosis appears to have a rather tangential relationship to consciousness research, but increasingly scientists are discovering that these apparent cul-de-sacs of psychology can offer up important central ideas about what consciousness is and what it’s for.

Hypnosis is primarily viewed as a form of therapy or a rather dubious type of public entertainment, but this altered state of mind can also be both a tool of science and an intriguing field of study in its own right. The speaker, Devin Terhune, began by dispelling the myths surrounding hypnosis – for instance, you can’t be hypnotized against your will, or made to do something that you would normally find repugnant, and you can still lie during hypnosis.

However, hypnosis is still a powerful technique. For instance, people can be given hypnotic suggestions to simulate psychiatric or neurological conditions (for the hour following hypnosis, at least). You can hypnotize people to develop many of the symptoms of obsessive-compulsive disorder, so that they become temporarily obsessed with cleanliness and repeated hygiene behaviors. You can induce certain delusions, such as a person no longer recognizing themselves in the mirror. You can even, possibly, induce an out-of-body experience using hypnosis. One important neurological condition closely connected with consciousness research is hemispatial neglect, where, following brain damage (especially when caused by a stroke), the patient fails to attend to the entire left half of space, and may only shave on the right, eat from the right half of his plate, and so on. This, too, can be induced for a time by hypnosis. All these temporary syndromes can be studied using neuroimaging, revealing insights about the brain networks responsible for these symptoms.

Hypnosis can also induce variants of synesthesia, where a person sees black letters or numbers as if they had color attached.

It can also act as a useful tool to manipulate performance on standard psychological tests. For instance, it can make us better at random number generation. And there is a famous psychology experiment called the Stroop test, where you typically have to name the ink colour of a printed word. If the word is “red”, say, but it is printed in blue ink, you are slower to name the colour than if the word were printed in red. This robust interference effect is, surprisingly, highly curtailed following hypnotic suggestion.
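(To make that comparison concrete, here is a toy sketch of the logic of a Stroop experiment – entirely made-up numbers, purely to illustrate the structure of the congruent/incongruent contrast, not data from any study. The hypnosis finding amounts to the interference figure below shrinking dramatically after suggestion.)

```python
# Toy sketch of the Stroop contrast with invented reaction times:
# colour naming is assumed to be ~100 ms slower when the word and its
# ink colour conflict (e.g. "red" printed in blue).
import random

random.seed(1)

def simulated_naming_time(congruent: bool) -> float:
    """Made-up colour-naming time (ms) for a single trial."""
    base = 600 if congruent else 700
    return random.gauss(base, 50)

congruent_trials = [simulated_naming_time(True) for _ in range(100)]
incongruent_trials = [simulated_naming_time(False) for _ in range(100)]

effect = (sum(incongruent_trials) / len(incongruent_trials)
          - sum(congruent_trials) / len(congruent_trials))
print(f"Simulated Stroop interference: {effect:.0f} ms")
```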

Although it’s still far from clear exactly how hypnosis works, there is a clear consensus that it reduces activity in the prefrontal cortex and the parietal cortex, and cuts down on the chatter between these two regions as well. Given that consciousness is most closely linked with this network of regions, one could speculate that hypnosis involves a certain kind of lowering of consciousness, so that we’re more amenable to external suggestion.

During the afternoon I attended an equally tangential workshop, this time on the science of magic, by Gustav Kuhn and Ronald Rensink. One would think that magic, with its serious need for secrecy, and science, with its sharing of knowledge, would not make a good mix. But again this is increasingly proving not to be the case, and the tricks that magicians use every day to produce such striking effects on their audiences can teach psychologists a lot about the mind. For instance, the power of expectation – of what we assume is about to happen – is a key factor in most magic tricks, but it also dictates the style of the tricks. Because we expect balls not just to disappear, but to reappear eventually, magicians always round off their tricks by relieving our suspense in this way.

And studying how magic is so effective at diverting our attention away from the main part of the trick clearly shows what grabs our attention most of all. Social cues win here – the direction of someone’s gaze, and questions directed at us, are features of our world we almost can’t help but fixate on. Humor, too, is perfect at disarming us and causing our attention to centre on the joke-teller, and away from the trick. The science of magic is even revealing details about psychiatric disorders. For instance, you might suspect that autistic people, who tend not to focus so much on social cues, would be somewhat protected from being taken in by such illusions. Actually, the opposite is the case: autistic people tend to fixate even more on faces than others during such illusions, and are more easily fooled. Exactly why this is the case is still a mystery.

So somewhat esoteric scientific subjects like hypnosis and magic tricks are actually becoming powerful tools in helping us explain psychology, neuroscience and consciousness. And this was just the first day! Over the coming days I’ll add further articles to explain what else I learnt at the conference. The next article will describe the cutting-edge research showing how the fading of consciousness, for instance under anesthesia, involves a complex interplay of different brain waves.

This is my first blog post on the study of consciousness, the first in a planned series.

I feel very lucky, both to work as a research scientist at the Sackler Centre for Consciousness Science, and to have been given the chance to publish a book on consciousness for a general audience, where I describe the current scientific and medical aspects of the field, and outline my own views on the purpose of consciousness, and how our brains generate our experiences.

But hopefully this series of posts will give me a separate, less formal, more interactive (lots of comments, please!) outlet for my passion to communicate the science of consciousness.

It is easy to view consciousness as a kind of magic, either in the name of religion and souls, or because of how alien it at first appears to science. But many fields, such as the study of life many years ago, have had their popular magical status eroded by careful scientific study. I will be robustly arguing here that consciousness is in the midst of a similar revolution.

The investigation of our own awareness is a blossoming scientific field, where experiments are already illuminating many exciting details about this most intimate of scientific subjects.

Consciousness is a vitally important scientific field

Consciousness is in many ways the most important question remaining for science.

On a personal level, consciousness is where the meaning to life resides. All the moments that matter to us, that punctuate our lives, from falling in love to seeing our child’s first smile, to that perfect holiday surrounded by snow-capped mountains, are obviously conscious events. If none of these events were conscious, if we weren’t conscious to experience any of them, we’d hardly consider ourselves alive – at least not in any way that matters.

Whether I’m revelling in a glowing pleasure or even if I’m enduring a sharp sadness, I always sense that behind everything there is the privilege and passion of experience. Our consciousness is the essence of who we perceive ourselves to be. It is the citadel for our senses, the melting pot of thoughts, the welcoming home for every emotion that pricks or placates us. For us, consciousness simply is the currency of life.

Although some philosophers and scientists suspect that consciousness is a pointless side effect of thought, I believe the opposite: that our consciousness might indeed be responsible for our greatest intellectual achievements, both in the arts and the sciences. Whether or not our creativity and insight originate in our unconscious mind (I believe the role of the unconscious has been over-estimated here), at the very least our consciousness is the conduit to inspect these gems of inspiration, and the driving force for turning them into reality.

The significance of laughter?

It is not surprising, therefore, that questions about consciousness lie at the heart of many of our most fundamental ethical debates, one of which is abortion and the right to life. This is an appropriate point for me to play my proud father card, and slip in a few gratuitously oversized pictures of my daughter, Lalana.

The image above was taken halfway through pregnancy, at 20 weeks. Soon after this scan, we could clearly feel her kicking, and towards the end of pregnancy she seemed to have periods of greater activity in the evenings, and would kick more if we gave her mother’s stomach a little prod; we, as excited parents-to-be, even speculated that all this excitability would lead to a lively child.

And here my daughter is at 6 months old, kicking her feet almost as much as she did inside the womb, but now her limbs pump upwards almost frantically as she experiences her first tastes of such gourmet “solid” foods as mashed sweet potato or strawberry, almost overwhelmed with excitement at the fantastic sensations on offer once boring milk is set aside. She smiles and laughs readily, and her laughing voice, particularly, brings with it a parent’s firm intuition that their child is aware.

And finally, this is a recent photo of her. As a 20-month-old toddler, Lalana now runs around everywhere and speaks 100 words or so, albeit very unclearly. She can convey events to us that happened days or weeks before, usually because she is still so excited about them! She can also store wishes for the future. For instance, we might offhandedly tell her that when we get home, we’ll play with making bubbles. Hours later, as soon as she enters the house, she’ll run straight to the shelf with the bubble bottle, screaming “Bubbu!!! Bubbu!!!” As this illustrates, she has a strong set of loves and hates, and her emotionally sensitive, passionate, cheeky, disturbingly stubborn personality is already very visible.

Having prided myself on my objectivity throughout my adult life, I’ve embarrassingly found that my daughter is the main exception to this aim: I’ve not only been taken aback by how fiercely I love her, but also by how proud I am of her, and how quickly I distort the truth to make sure she seems exceptional in every way to my clouded eyes. But when I can step back from these views, I regularly ask myself at what point she became conscious. Was it when she was still in the womb, kicking away? Was it when she first opened her eyes to the outside world on the day of her birth? Was it during her first fits of hysterics a few months later? Or was it only with her first words, when she was well over a year old?

These aren’t just personal wonderings, though. In the USA, people have been murdered for carrying out abortions. In many other countries, abortion is illegal, even if the woman has been raped. Although such positions are usually driven by religion, the related mindset is usually that these foetuses are already conscious, and even capable of feeling pain. But, in fact, the evidence to support this is extremely slim.

I’m not saying I agree with this view, but if it were the case that language is required for consciousness, as some neuroscientists and philosophers claim, then we wouldn’t need to worry about consciousness – or pain – in humans until children are one and a half years old. It is therefore imperative that science provides its influential input to discover what consciousness is and when it emerges as we develop, in order to help resolve this ethical debate.

The Twilight of Awareness

Another ethical area calling out for the science of consciousness to provide answers concerns patients who are so ill that it is unclear whether they even have the capacity for consciousness anymore. Many people enter a coma when particularly ill, as if in a deep sleep, never even opening their eyes. Family members may still speak to the coma patient or hold his hand, as if this husband or father were still there, somewhere deep inside, secretly very conscious. But is this really the case?

A trickier and more controversial group of patients are those in a vegetative state. Such patients have sleep-wake cycles, so spend a good portion of each day with their eyes open. Some vegetative state patients reflexively smile or move their limbs, giving superficial hints that they are indeed aware of things. But is this just a cruel trick played by their more primitive brain areas, with no real consciousness occurring? It’s notoriously difficult to tell at times.

The picture on the left is of Terri Schiavo, who at this point was in her mid twenties. She was happily married, living in Florida, and was hoping soon to start a family.

Tragically this was not to be. Soon after this photo was taken, Terri suffered a massive cardiac arrest, and although the ambulance crew were eventually able to revive her, massive brain damage had already occurred.

Although she did emerge out of coma, this was only to enter a vegetative state, which she never recovered from.

Years later, her husband tried to get the courts to withdraw her feeding tube and let her die. He even had the backing of her doctors, but her parents were convinced she was still conscious, and opposed the move.

There followed one of the most famous, bitterly fought legal battles in history, which ended up going right to the top, with the then president, George W. Bush, signing emergency legislation – in his pyjamas in the middle of the night – partly to attempt to keep Terri alive.

But even this couldn’t stop the legal juggernaut, and in March 2005 Terri’s feeding tube was indeed removed; she died 13 days later. As these images show, not only was Terri very much a shadow of her former self, but her brain was very severely damaged. In principle, though, this doesn’t mean she wasn’t conscious. How can science help ascertain whether people like Terri are indeed still conscious, and by how much? And can it also help heal such severe damage and return such people to full consciousness, if indeed awareness has been robbed from them?

Animal Torments?

And what about consciousness in other animals? Every person on the planet, on average, consumes twice their own weight in animal-derived food each year. Much of this food is intensively farmed, with minimal consideration for animal suffering.

Animal experimentation for research, and to test commercial products, could be causing the suffering of many millions of animals yearly.

If no animals except humans have consciousness, then there’s no problem – you can’t have suffering without consciousness. But if even apparently mentally far simpler animals, such as poultry and fish, have a substantive awareness and a significant capacity for suffering, then are we justified in inflicting all this pain and discomfort on them?

If science could come up with some means of testing consciousness in other animals, and perhaps also a way of gauging the extent of consciousness when it’s found, then this would have a huge ethical impact on all spheres of the animal rights debate.

iPhones have rights too, you know!

When the first computers were invented in the 1940s, they were room-sized monoliths, dependent on long snakes of punched paper for input, and could do little more than basic calculations. The pervasive, profound way that computers now impact on our lives would appear quite miraculous to any operator from those early days somehow transported to the present.

Although I don’t think any computers at present have any form of consciousness, many are excellent at giving the impression of conscious thought. If you have one of the new iPhones, you might have played with Siri, a pleasing female voice that you can speak to. You can ask her questions about the weather, or whether there are good Chinese restaurants nearby, or you can get her to add an item to your calendar. Siri does such a good job of “understanding” its owner’s wishes and requests that it is easy to mistake it for a real person (there is an episode of the sitcom The Big Bang Theory where Raj basically falls in love with Siri!). In the interests of balance, I should also mention Google Translate, which started life as a laughably poor product, even when translating between widely spoken languages such as Spanish and English. However, if you’ve used Google Translate lately, you will likely have been very impressed. It can translate between over 50 languages, can guess the source language if you aren’t sure, and usually does an extremely believable job of translating a section of text for you.

Computers are inching ever closer to human-like thought and, intriguingly, are also converging on human neural architecture. The “cloud” of multiple digital locations, capable of both parallel storage and parallel processing, equally describes how companies like Google manage most of their online products and how human brains operate.

Indeed, scientists are exploiting the immense processing power of current computer chips (especially graphics processing units adapted for scientific computing, which are a better match for brains than conventional chips) to make crude – but not that crude – copies of simpler animal brains. For instance, a colleague in my department is trying, for his PhD, to represent inside a PC a similar number of neurons, with the activity and connections between them, as in a monkey brain.
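(To give a flavour of what such a simulation involves, here is my own illustrative sketch at toy scale – it is emphatically not my colleague’s actual model. It steps a small network of leaky integrate-and-fire neurons with random connections forward in time; real projects run vastly larger networks, typically on graphics hardware.)

```python
# Toy-scale sketch of a spiking network simulation (not my colleague's model):
# leaky integrate-and-fire neurons with random synapses, stepped forward in time.
import numpy as np

rng = np.random.default_rng(42)

n_neurons = 1000      # toy scale; a real monkey-brain model would be far larger
dt = 1.0              # time step in milliseconds
tau = 20.0            # membrane time constant in milliseconds
threshold = 1.0       # firing threshold (arbitrary units)

weights = rng.normal(0.0, 0.01, size=(n_neurons, n_neurons))  # random synapses
voltage = np.zeros(n_neurons)
total_spikes = 0

for step in range(500):                                   # simulate 500 ms
    drive = rng.normal(1.05, 0.5, size=n_neurons)         # noisy background input
    spikes = voltage >= threshold                         # which neurons fire now?
    total_spikes += int(spikes.sum())
    voltage[spikes] = 0.0                                 # reset neurons that fired
    recurrent = weights @ spikes.astype(float)            # spikes drive their neighbours
    voltage += (dt / tau) * (-voltage + drive + recurrent)

print(f"Total spikes over 500 ms in a {n_neurons}-neuron toy network: {total_spikes}")
```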

I happen to believe that it is only a matter of time before we generate real consciousness in computer form; if we assume that a mouse, say, is conscious, then I think computers will be too within 10 or so years. Human consciousness may take far longer to manufacture artificially, but this is merely an engineering issue, rather than something that is in principle impossible for any being that isn’t a human with a biological brain. Most of us, I think, share this intuition at times.

Characters like Data in Star Trek, or the replicants in the film Blade Runner, are utterly believable as robots with human-like consciousness. And both help us explore the difficult future ethical decisions we may face surrounding beings we manufacture, who may match us in awareness, intelligence and possibly also the capacity to suffer. One sensible position would be to match such robots’ rights or status to the level of consciousness they can achieve – but how could we assess this? Will science be able to come up with some consciousness meter that works not only on other animals, but on robots as well?

Is consciousness even a physical thing?

Strangely, although many of us have no problem believing that Data is conscious, we carry conflicting beliefs that our own awareness is quite different to the biological computer in our own heads, even though many neuroscientists (including me) cold-heartedly claim that consciousness is entirely supported by our brains, and will disappear when we die.

I’ve had a surprising number of chats in the pub about consciousness, and many people simply think you cannot reduce this staggering array of different, vibrant potential human experiences to a lump of spongy organic matter weighing around 1.4 kilograms. It’s in some ways not surprising that this position is so pervasive – after all, almost all religions assume that our consciousness isn’t even a physical thing, and that once our physical bodies have perished, the mental part of us can live on.

Historically, much of the study of consciousness was shouldered by philosophers, and many arguments have been given to support the religious view and proclaim the unique nature of consciousness. René Descartes, in the 17th century, proposed a series of “proofs” for why the mind was independent of the body and, by extension, the entire physical world. To Descartes, there was nothing in the physical world like consciousness, which is singularly subjective – no one else can ever truly know what I experience. Modern philosophers have extended this point, and have continued to make trouble for scientists like me, who assume that there is nothing in the universe except physical stuff, so that consciousness must also be physical in some way.

Other modern philosophers have argued that characters like Data can only ever be a fiction, and that it is impossible for mere computers to be conscious and to grasp meaning.

Getting a foothold

So, with so much at stake ethically, and with some of our everyday intuitions and many philosophical arguments dating back centuries suggesting that consciousness is quite different from the normal physical features that science explores, is consciousness even the kind of topic that science should be studying? Many psychologists and neuroscientists, for these reasons, have shrunk from the topic, declaring it off limits for science.

This is all quite understandable in some ways, particularly because of the intimate, inevitably subjective nature of consciousness. This does lend the field a seemingly impenetrable, almost magical atmosphere, and reflects the apparent impossibility of capturing consciousness scientifically.

But Francis Crick, one of the giants of 20th-century science, with an untamed curiosity and a first-rate intellect to accompany it, dissented from this meek view. After a long, sparkling career in genetics, which included co-discovering the structure of DNA, he decided to devote the last period of his life to cracking the science of consciousness. Although he sadly didn’t live to see a clear solution to the problem, he made some critical progress. More importantly, though, he helped make consciousness an acceptable field for science to study. Now, two decades down the line, we have a very robust, active research community. My department is hosting the next meeting of the Association for the Scientific Study of Consciousness in the sunny English seaside town of Brighton, and over 400 papers have been submitted for it. I would imagine about 600 scientists and philosophers, all with a major research focus on consciousness, will attend (though everyone, including the general public, is welcome to register too, if anyone reading this fancies coming!).

So how does science get a foothold on such a difficult topic as consciousness? Actually, it’s not really as difficult as all that. Most of science breaks down to exploring some process by manipulating it as much as possible and observing the effects. Consciousness is no different.

Although we might not be absolutely clear about what consciousness is, we are all (pretty much) agreed that we have as much of it as possible when fully awake, and little or none when in a coma, under general anaesthesia or at the deepest parts of sleep. So we can examine how the brain changes when we move from full wakeful consciousness to minimal consciousness. On a psychological level, we can also investigate what forms of thought and learning leave us when consciousness erodes, and what forms remain.

By the same token, we don’t need to know the precise definition of consciousness to know that some stimuli are too faint for us to consciously detect, whereas others are clearly seen. Similarly, we know that the experience of seeing a house is very different from that of viewing a face. So we can also examine how brain function changes when we flip between these experiential states. Again, we can also investigate how learning, attention and other kinds of thought change as consciousness changes, and build up a psychological picture of consciousness as well as a brain-based one.

In future blog posts I’ll be going into the science of consciousness in far greater detail, but the next post will delve a little into the philosophy of consciousness, and especially into how views like Descartes’, which try to place consciousness outside the physical world – and outside scientific investigation – are fundamentally flawed.