SHEILA NIRENBERG: I study how the brain processes information. That is, how it takes information in from the outside world, and converts it into patterns of electrical activity. And then how it uses those patterns to allow you to do things: to see, to hear, to reach for an object.

So I'm really a basic scientist, not a clinician, but in the last year and a half I've started to switch over, to use what we've been learning about those patterns of activity to develop prosthetic devices. And what I wanted to do today is show you an example of this. It's really our first foray into this. It's the development of a prosthetic device for treating blindness.

Okay, so let me start with the problem. There are 10 million people in the US, and many more people worldwide, who are blind or facing blindness due to diseases of the retina, diseases like macular degeneration. And there's little that can be done for them. There are some drug treatments, but they're only effective on a small fraction of the population. And so for the vast majority of patients, their best hope for regaining sight is through prosthetic devices.

The problem is that current prosthetics don't work very well. They're still very limited in the vision they can provide. And so, for example, with these devices patients can see simple things, like bright lights and high contrast edges, not very much more. Nothing close to normal vision has been possible.

What I'm going to tell you about today is the device we've been working on that I think has the potential to make a difference, and be very much more effective. And what I want to do is show you how it works.

Okay, so let me back up a little bit and show you how a normal retina works first, so you can see the problem we're trying to solve. Here we have a retina, so you have an image, a retina, and a brain. When you look at something, like this image of this baby's face, it goes into your eye and it lands on your retina, on the front-end cells here, the photoreceptors. Then what happens is the retinal circuitry, the middle part, goes to work on it. And what it does is it performs operations on it, it extracts information from it, and it converts that information into a code. And the code is in the form of these patterns of electrical impulses that get sent to the brain. And so the key thing is that the image ultimately gets converted into a code.

And when I say code, I do literally mean code. This pattern of pulses here actually means "baby's face". So when the brain gets this pattern of pulses, it knows what was out there was a baby's face. And if it got a different pattern, it would know that there was, say, a dog, or another pattern would be a house. Anyway, you get the idea.

And of course, in real life, it's all dynamic, meaning that it's changing all the time. So the patterns of pulses are changing all the time because the world you're looking at is changing all the time too.

So, you know, it's kind of a complicated thing. You have these patterns of pulses coming out of your eye every millisecond telling your brain what it is that you're seeing.

Okay, so what happens when a person gets a retinal degenerative disease, like macular degeneration? What happens is that the front-end cells die, the photoreceptors die. And over time, all the cells and the circuits that are connected to them, they die too. Until the only things you have left are these cells here, the output cells. The ones that send the signals to the brain. But because of all that degeneration, they aren't sending any signals any more. They aren't getting any more input. So the person's brain no longer gets any visual information. That is, he or she is blind.

So a solution to the problem, then, would be to build a device that could mimic the actions of that front-end circuitry and send signals to the retina's output cells, and then they could go back to doing their normal job of sending the signals to the brain. So this is what we've been working on, and this is what our prosthetic does.

So it consists of two parts: what we call an encoder, and a transducer. And the encoder does just what I was saying: it mimics the actions of the front-end circuitry, so it takes images in and converts them into the retina's code, and the transducer then makes the output cells send the code on up to the brain. And the result is a retinal prosthetic that can produce normal retinal output.

So a completely blind retina, even one with no front-end circuitry at all, no photoreceptors, can now send out normal signals. Signals that the brain can understand. No other device has been able to do this.

Ok, so I just want to take a sentence or two, to say something about the encoder and what it's doing, because it's really the key part, and it's sort of interesting and kind of cool. Not sure if cool is really the right word, but you know what I mean. So what it's doing is it's replacing the retinal circuitry, really the guts of the retina, with a set of equations. Equations we can implement on a chip. So it's just math. In other words, we're not literally replacing the components of the retina, it's not like we're making a little mini-device for each of the cell types. We've just abstracted what the retina's doing with a set of equations. And so in a way, the equations are serving as a sort of code book, an image comes in, goes through the set of equations, and out come streams of electrical impulses, just like a normal retina would produce.
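The idea of abstracting the retinal circuitry into a set of equations can be sketched in code. What follows is a minimal, hypothetical linear-nonlinear spiking model, not the actual equations from the talk: each output cell applies a linear filter (its receptive field) to the image, passes the result through a nonlinearity to get a firing rate, and emits a stream of pulses.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def encode(image, filters, dt=0.001, n_steps=200):
    """Convert an image into spike trains, one row per output cell.

    A toy linear-nonlinear spiking model: linear filtering, a
    nonlinearity producing a firing rate (Hz), then random pulses.
    """
    drive = filters @ image.ravel()              # linear stage: receptive fields
    rates = 50.0 * np.log1p(np.exp(drive))       # nonlinear stage: drive -> firing rate
    p_spike = np.clip(rates * dt, 0.0, 1.0)      # spike probability per time bin
    spikes = rng.random((filters.shape[0], n_steps)) < p_spike[:, None]
    return spikes.astype(int)

# a hypothetical 4x4 "image" and 3 cells with random receptive fields
image = rng.random((4, 4))
filters = rng.normal(size=(3, 16))
code = encode(image, filters)
print(code.shape)  # (3, 200): 3 cells, 200 one-millisecond bins of 0/1 pulses
```

An image goes in, streams of electrical impulses come out; the filters and the nonlinearity here are stand-ins for the fitted equations the encoder would actually carry on its chip.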

Ok, so now let me put my money where my mouth is and show you that we can actually produce normal output, and what the implications of this are.

Here are three sets of firing patterns. The top one's from a normal animal, the middle one's from a blind animal that's been treated with this encoder-transducer device, and the bottom one's from a blind animal treated with a standard prosthetic. The bottom one is the state-of-the-art device that's out there right now, which is basically made up of light detectors but no encoder.

So what we did was we presented movies of everyday things, of people, babies, park benches, regular things happening. And we recorded the responses from the retinas of these three groups of animals. Just to orient you, each box is showing the firing patterns of several cells. And just as in the previous slides, each row is a different cell. I just made the pulses a little smaller and thinner so I could show you a long stretch of data.

As you can see, the firing patterns of the blind animal treated with the encoder-transducer really do closely match the normal firing patterns. And it's not perfect, but it's pretty good. And the blind animal treated with the standard prosthetic, the responses really don't. So with the standard method, the cells do fire, they just don't fire in the normal firing patterns, because they don't have the right code.
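One simple way to quantify "closely match" is to bin the spike trains and correlate the counts between the two rasters, cell by cell. The sketch below is a hypothetical similarity measure for illustration, not the analysis used in this work:

```python
import numpy as np

def pattern_similarity(spikes_a, spikes_b, bin_size=10):
    """Score how closely two firing patterns match, cell by cell.

    Bins each cell's pulse train into coarse time bins and averages
    the Pearson correlation of the binned spike counts across cells.
    """
    n_cells, n_steps = spikes_a.shape
    n_bins = n_steps // bin_size
    a = spikes_a[:, :n_bins * bin_size].reshape(n_cells, n_bins, bin_size).sum(axis=2)
    b = spikes_b[:, :n_bins * bin_size].reshape(n_cells, n_bins, bin_size).sum(axis=2)
    corrs = [np.corrcoef(a[i], b[i])[0, 1] for i in range(n_cells)]
    return float(np.mean(corrs))

# deterministic toy raster: 2 cells, 100 one-ms bins, a pulse every 3 ms
x = np.zeros((2, 100), dtype=int)
x[:, ::3] = 1
print(pattern_similarity(x, x))      # identical rasters score 1.0
print(pattern_similarity(x, 1 - x))  # inverted rasters score -1.0
```

A treated-versus-normal score near 1 would correspond to the close match seen in the middle raster; the standard prosthetic's raster would score much lower, since its cells fire but not in the right code.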

How important is this? What's the potential impact on a patient's ability to see? I'm going to show you one bottom-line experiment that answers this, and of course I have a lot of other data, so if you're interested, I'm happy to show more. The experiment is called a reconstruction experiment. What we did is we took a moment in time from these recordings and asked, what was the retina seeing at that moment? Can we reconstruct what the retina was seeing from the responses, from the firing patterns? We did this for responses from the standard method and from our encoder-transducer. Let me show you, and I'm going to start with the standard method first. You can see that it's pretty limited, and because the firing patterns aren't in the right code, they're very limited in what they can tell you about what's out there. You can see that there's something there, but it's not so clear what that something is. And this circles back to what I was saying in the beginning: in the standard method, patients can see high contrast edges and light, but it doesn't easily go further than that. So what was the image? It was a baby's face.

What about with our approach, adding the code? You can see it's much better. Not only can you see that it's a baby's face, but you can tell that it's this baby's face. It's a really challenging task. On the left is the reconstruction from the encoder alone, and on the right is the reconstruction from an actual blind retina, using the encoder and the transducer. But the key one is the encoder alone, because we can team up the encoder with a different transducer; this is just the first one that we tried.

I want to say something about the standard method. When this first came out, it was just a really exciting thing, the idea that you could even make a blind retina respond at all. But there was this limiting factor: the issue of the code, of how to make the cells not just respond but produce normal responses. And so this was our contribution.

Now I just want to wrap up, and as I was mentioning earlier, I have a lot of other data if you're interested, but I want to convey the basic idea of being able to communicate with the brain in its language, and the potential power of being able to do that. It's different from the motor prosthetics, where you're communicating from the brain to a device. Here we have to communicate from the outside world into the brain, and be understood by the brain.

The last thing I want to say is to emphasise that the idea generalises. The same strategy that we used to find the code for the retina we can also use to find the code for other areas, for instance the auditory system and the motor system, for treating deafness and motor disorders. Just the same way that we were able to jump over the damaged circuitry in the retina to get to the retina's output cells, we can jump over the damaged circuitry in the cochlea to get to the auditory nerve, or jump over the damaged circuitry in the cortex, in the motor cortex, to bridge the gap produced by a stroke.

I want to end with a simple message. Understanding the code is really really important. If we can understand the code, the language of the brain, things become possible that didn't seem obviously possible before. Thank you.