The concept of retinal implants evokes fantastical visions such as Deus Ex-style heads-up displays. Before we get there, though, we'd like to be able to give sight to those who are blind or losing their vision. Existing technology provides only limited improvements to vision, such as allowing people to see spots of light or edges with high contrast. To put that in numbers, we can improve someone's vision to about 20/1,400; the legal threshold for blindness is 20/200.
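To put those two Snellen fractions on a single scale, it helps to convert them to decimal acuity, where 20/20 vision equals 1.0. A quick back-of-the-envelope comparison (the numbers come straight from the article; the conversion is just the standard Snellen convention):

```python
def decimal_acuity(distance: int, denominator: int) -> float:
    """Convert a Snellen fraction (e.g. 20/200) to decimal acuity."""
    return distance / denominator

implant_acuity = decimal_acuity(20, 1400)   # ~0.014, current implants
legal_blindness = decimal_acuity(20, 200)   # 0.1, legal blindness threshold
normal_vision = decimal_acuity(20, 20)      # 1.0, "normal" vision

# Current implants restore roughly one-seventh of the acuity needed
# to clear the legal-blindness threshold:
ratio = implant_acuity / legal_blindness    # ~0.14
```

In other words, today's devices get a patient about 14 percent of the way to no longer being legally blind, and about 1.4 percent of the way to normal vision.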

Retinal implants work by directly stimulating cells in the back of the eye with electrical signals, which get transmitted as visual information to the brain. This works because in many cases of blindness, diseases damage the photoreceptor cells but leave the ganglion cells, the neurons that transmit signals to the brain, intact.

Recent research efforts have focused on improving the resolution of the devices stimulating the retinal cells. However, higher-resolution images can only help so much if the signal doesn't match the patterns that the retinal neurons expect.

In other words, it's a bit like we've been connecting component video cables to an HDMI port, and wondering why the Xbox signal won’t show up properly on our TV.

To solve this problem, Sheila Nirenberg and Chethan Pandarinath of Cornell University developed a new optical prosthetic system with two parts that mimic the functionality of an actual retina. An encoder takes visual information and transforms it into the form needed by the ganglion cells. This functions much like a compiler, converting human-readable source code into the machine code needed by the computer. In a healthy retina, this transformation ("visual phototransduction") is performed by the photoreceptor cells—the rods and cones. The second component is the transducer, which takes the signal from the encoder—now in the proper pattern—and stimulates the ganglion cells, in effect executing the code.
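To make the encoder stage concrete: retinal codes are often described with linear-nonlinear models, in which a spatial filter (the cell's receptive field) is followed by a nonlinearity that sets a firing rate, from which spikes are generated. The sketch below is in that spirit only; the kernel, the sigmoid, and the rates are placeholders, not the parameters actually fit in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_frame(image, kernel, dt=0.005):
    """Toy linear-nonlinear encoder for a single ganglion cell.

    image  : 2-D array of pixel intensities
    kernel : 2-D spatial filter (a stand-in for the fitted
             spatiotemporal filter in a real encoder)
    Returns 1 if the cell spikes in this 5 ms time bin, else 0.
    """
    # Linear stage: project the image onto the cell's receptive field.
    drive = np.sum(image * kernel)
    # Nonlinear stage: map the drive to a firing rate in spikes/s
    # (placeholder sigmoid, max 50 spikes/s).
    rate = 50.0 / (1.0 + np.exp(-drive))
    # Spike generation: Poisson-like draw for this time bin.
    return int(rng.random() < rate * dt)

# Hypothetical 8x8 image patch and filter:
frame = rng.random((8, 8))
kernel = rng.standard_normal((8, 8)) * 0.1
spike = encode_frame(frame, kernel)   # 0 or 1
```

Running this over a movie, frame by frame and cell by cell, yields the coded pulse trains that the second stage of the prosthetic then has to deliver to the ganglion cells.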

To actually create this optical implant, the authors built a system with three components. An encoder took images as input and converted them into coded electrical pulses, which were fed into a mini digital light projector (mini-DLP). This, in turn, sent light pulses to the back of the retina. To make sure there was something there to detect the light, they expressed a light-sensitive protein, channelrhodopsin-2 (ChR2), which activated the ganglion cells in response to light. In effect, they created a new pseudoretina inside the eye to replace the damaged one.
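The chain from encoder to cell can be pictured as a simple pipeline: each spike the encoder emits becomes a brief light pulse from the mini-DLP, which opens ChR2 channels in the targeted ganglion cell. A hedged sketch of that mapping, with illustrative bin and pulse widths rather than the paper's actual timing parameters:

```python
def spikes_to_light_pulses(spike_train, pulse_ms=1.0, bin_ms=5.0):
    """Map a binary spike train to (start_ms, duration_ms) light pulses.

    Each time bin in which the encoder says "spike" becomes one brief
    DLP light pulse; ChR2-expressing ganglion cells then fire in
    response to the light. Pulse and bin widths are hypothetical.
    """
    pulses = []
    for i, spiked in enumerate(spike_train):
        if spiked:
            pulses.append((i * bin_ms, pulse_ms))
    return pulses

# Encoder says the cell should fire in bins 0, 2, and 3:
schedule = spikes_to_light_pulses([1, 0, 1, 1])
# schedule == [(0.0, 1.0), (10.0, 1.0), (15.0, 1.0)]
```

The key point the analogy captures is that the projector is a dumb transducer: all of the "intelligence" about what the retina expects lives in the encoder upstream.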

How did they know that their system was producing the correct retinal code? By hooking up electrodes to the ganglion cells in a mouse retina and playing movies with scenes such as landscapes, faces, and people walking. (Sounds pleasant, no?)

The authors used this setup to compare the activity of a normal retina, a blind retina with the new optical system, and a blind retina with a standard implant (meaning no encoder). While the new device doesn't perfectly replicate the activity of a normal retina, the improvement over a traditional implant is noticeable.
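One simple way to quantify "how close is the implant's output to a normal retina" is to correlate binned spike counts recorded from the two preparations in response to the same movie. The paper uses its own, more careful performance measures; the snippet below, with made-up counts, is only a rough illustration of the kind of comparison involved.

```python
import numpy as np

def spike_train_similarity(counts_a, counts_b):
    """Pearson correlation between two binned spike-count vectors."""
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical binned spike counts for the same movie clip:
normal   = [4, 0, 7, 2, 5, 1]   # healthy retina
encoded  = [3, 1, 6, 2, 5, 0]   # blind retina + encoder + ChR2
standard = [2, 2, 2, 3, 2, 2]   # blind retina, no encoder

sim_encoded = spike_train_similarity(normal, encoded)    # high
sim_standard = spike_train_similarity(normal, standard)  # low
```

With an encoder, the firing pattern tracks the healthy retina's ups and downs; without one, the stimulation produces a roughly uniform response that carries little of the movie's structure.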

In another test, they tracked the eye movement of blind mice by sending signals straight to the transducer. Without any visual stimulus, the eyes of blind mice (like humans) drift around. When stimulated using uncoded signals (effectively how standard implants work), the same drifting occurred. When the visual signals were coded, the eyes tracked the stimulus.

This development has the potential to transform optical implants into useful, sight-restoring technology. Obviously, further research is needed, particularly in delivering the ChR2 protein to ganglion cells in the human retina, but groups (including the current team) have already done this safely using viral vectors. The new components of this system, the encoder and mini-DLP device, could be placed on a pair of glasses and connected to existing stimulators.

Promoted Comments


Kyle Niemeyer
Kyle is a science writer for Ars Technica. He is a postdoctoral scholar at Oregon State University and has a Ph.D. in mechanical engineering from Case Western Reserve University. Kyle's research focuses on combustion modeling. Email: kyleniemeyer.ars@gmail.com // Twitter: @kyle_niemeyer

Now this is absolutely fascinating! Back in grade school, I had an idea for something like Geordi La Forge's visor. However, this is even better!

It seems obvious, really. If you want to create a prosthetic, it should interface with the nervous system using the language of the nervous system. (Yes, I know the difficult part is actually being able to do that interface, but it's great to see that progress is being made!)

I remember seeing something almost exactly like this over eight years ago while I was in college. Is this just a slow-moving research area, or has the technology substantially improved since then? If I remember correctly, the retinal sensor in testing at that time was only capable of something like 40x40 pixels of resolution.

I wonder how long it will take before we improve this to the point where it's better than normal vision?

Since they're sending signals along existing nerves, the best they can do is 1:1 on each existing nerve.

However other improvements could come from the camera. Zooming, higher dynamic range, better low light sensitivity are obvious enough enhancements. IR/UV sensitivity too, though the new colour range would have to be compressed into the existing colour range, unless they add them as switchable colour ranges. Perhaps they could even figure out how to extend the range of signals sent to the brain. New colours FTW!
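One naive reading of the "compressed colour range" idea above is a linear remap: take an extended spectral range spanning UV through near-IR and squeeze it into the visible band. The ranges below are hypothetical, and real colour mapping would be far subtler, but the arithmetic of the idea is just this:

```python
def compress_wavelength(nm, src=(300.0, 1000.0), dst=(380.0, 750.0)):
    """Linearly remap a wavelength from an extended UV-to-IR range
    (src) into the visible band (dst). Both ranges are illustrative
    assumptions, not anything from the article."""
    s0, s1 = src
    d0, d1 = dst
    t = (nm - s0) / (s1 - s0)    # relative position in the source range
    return d0 + t * (d1 - d0)    # same relative position in the visible band

compress_wavelength(300.0)    # deep UV lands at the violet edge, 380.0
compress_wavelength(1000.0)   # near-IR lands at the red edge, 750.0
```

The cost, as the commenter notes, is that colours the eye could already distinguish get squeezed closer together to make room for the new ones, which is why switchable ranges might be the more usable design.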

I wonder how long it will take before we improve this to the point where it's better than normal vision?

In visible wavelengths, this will never be better than normal vision. However outside of that range, almost anything would be better, eh.

sidran, 'it seems obvious', but part of the goal of the early research was to in fact decipher the ganglion encoding, to get to where we are today. This step as well has been brewing for many years. Before it can be tested in humans, the gene delivery mechanism (typically viral) must be approved. There are a lot of amazing biomedical technologies that hinge on a safe gene delivery mechanism.

However other improvements could come from the camera. Zooming, higher dynamic range, better low light sensitivity are obvious enough enhancements. IR/UV sensitivity too, though the new colour range would have to be compressed into the existing colour range, unless they add them as switchable colour ranges. Perhaps they could even figure out how to extend the range of signals sent to the brain. New colours FTW!

Switchable like Predator? That would be friggin sweet. If this could correct my myopia too I'd be all about it.

It will be interesting to see whether technical solutions pan out before stem cell regeneration of the photoreceptors does. Implanting cadaver photoreceptors and then regenerating the ganglia links might ultimately be cheaper.

Depends on what you mean by 'hack'. The cybernetics here would require physical access to be 'hacked' as such, unless of course it reaches the point where it *is* linked to some external source such as the Google Glass idea.

Granted from a certain perspective, once they reach normal vision capabilities they'll be just as hackable as the Mark 1 organic version already is, via subliminals, optical illusions, and other similar visual patterns that stimulate the brain in odd ways via the visual cortex.

I wonder how long it will take before we improve this to the point where it's better than normal vision?

In visible wavelengths, this will never be better than normal vision. However outside of that range, almost anything would be better, eh.

sidran, 'it seems obvious', but part of the goal of the early research was to in fact decipher the ganglion encoding, to get to where we are today. This step as well has been brewing for many years. Before it can be tested in humans, the gene delivery mechanism (typically viral) must be approved. There are a lot of amazing biomedical technologies that hinge on a safe gene delivery mechanism.

Indeed--the hard part of any cybernetics project is to create the interface, both in hardware and in 'language', that would allow the electronics to be accessed and used as naturally as normal organic parts are. After that, it's mainly a matter of streamlining and miniaturizing the hardware to fit properly for the body part it's to be installed in.

I wonder how long it will take before we improve this to the point where it's better than normal vision?

In visible wavelengths, this will never be better than normal vision. However outside of that range, almost anything would be better, eh.

Depends what you mean by normal. If the system is eventually able to correct for defects in eyes, it could provide people with the 20/10 acuity that is the eye's theoretical performance limit vs the 20/20 average level. It might be possible to go higher than that if the eye's limit in daytime is the aperture of the pupil, not the performance of the retina.

Why bother with the eyes? Hook up directly to the optic nerves with wires and put the eyes on top of the head for 360° panoramas.

Well yes. Those nerve ganglions behind the photoreceptors surely are the million-pin socket that you need a plug for. If one could make a reliable electronics-to-nerve-bundle connection, then surely the rest is, if not easy, at least just software.

I wonder... if making a computer to nervous system interface might be easier to do for the spinal cord first?

playing movies with scenes such as landscapes, faces, and people walking

Yes.

hooking up electrodes to the ganglion cells

No.

This all sounds good, but the pessimist in me can't help but wonder what kind of nightmare it would be if the interface malfunctioned and, for instance, started sending unfiltered white (noise? vision?) directly to the brain, with the only remedy being surgery.

The cable that carries signals from the glasses is coming out the front of the glasses and running through a person's field of view. Shouldn't the cable run out the back of the glasses so that it doesn't obstruct what a person can see?

Will Stevie Wonder get to see before he dies? That's all I really care about.

WIN

Yep. In a way, I can imagine the great joy of a person, long blind, being given the gift of sight. On the other hand, I wonder, what they'd think, friends nearby saying, "Yeah, this is what I've been telling you about!" For sight would seem psychologically or maybe philosophically alien to one previously blind, no?

This is awesome. It does feel a little odd that previously they just threw the output at the optic nerve and expected it to all work. Having said that, how much time did experiments give the brain to 'relearn' the inputs from the new CCD eye?

I also recall seeing, years ago, video of them 'reading' fairly coherent images from a monkey's eyes via electrodes on the skull (that is, you could see the checkering when it looked at a checkerboard pattern, which is not a shock given how much of the brain deals with this stuff). Why not just hook that up to a known set of inputs and then have a computer try to generate a good mapping from one to the other?

[Can't find the vid on any tube, it was more than a decade ago IIRC probably in the UK]

Will Stevie Wonder get to see before he dies? That's all I really care about.

WIN

Yep. In a way, I can imagine the great joy of a person, long blind, being given the gift of sight. On the other hand, I wonder, what they'd think, friends nearby saying, "Yeah, this is what I've been telling you about!" For sight would seem psychologically or maybe philosophically alien to one previously blind, no?

Also, would he find vision to be a distraction from his music? "What the hell is...oh, it's a piano. That's what an audience looks like? Holy shit...! Uh...stage fright..."

Will Stevie Wonder get to see before he dies? That's all I really care about.

WIN

Yep. In a way, I can imagine the great joy of a person, long blind, being given the gift of sight. On the other hand, I wonder, what they'd think, friends nearby saying, "Yeah, this is what I've been telling you about!" For sight would seem psychologically or maybe philosophically alien to one previously blind, no?

Also, would he find vision to be a distraction from his music? "What the hell is...oh, it's a piano. That's what an audience looks like? Holy shit...! Uh...stage fright..."

I would bet that people who have been blind from birth incorporate much of the visual centers of the brain into processing other information. If Mr. Wonder were to suddenly be given sight, he might very well see music.

Intraocular implants were pioneered in Germany to help patients with retinitis pigmentosa and age-related macular degeneration. Human trials were conducted using prototype devices in 2004, nearly A DECADE ago.

Drs. Nirenberg and Pandarinath are hardly breaking new ground here. Let me see if I can scrounge up those references...