System Bits: Aug. 29

Could video goggles and a tiny implant cure blindness?
Incredibly, medical research may be on the verge of curing blindness. Much as cochlear implants restore hearing to deaf people, Stanford University scientists and engineers are developing devices to restore sight, including a bionic vision system based on photovoltaic implants, which is awaiting approval for human clinical trials in Europe. A second system, based on in vitro studies of the retina, could be ready for animal testing within four or five years.

Both inventions have the same goal: to give back some measure of sight to people who have progressive diseases of the retina — especially retinitis pigmentosa and macular degeneration.

The team explained that normal retinal tissue consists of photoreceptors, the light-sensitive rods and cones at the back of the eye, topped by interconnected layers of neurons. The signal travels from the rods and cones, through bipolar cells to ganglion cells, then via the optic nerve to several brain areas, including the visual cortex. Scientists still aren't sure exactly why the rods and cones break down in patients with retinal diseases, nor have they found ways to prevent, slow or reverse the process. The silver lining is that retinitis pigmentosa and macular degeneration tend to spare some of the bipolar and ganglion cells, which means that the neurons in these patients' retinas can be stimulated artificially with microelectrodes, bypassing the damaged rods and cones altogether.

Daniel Palanker, PhD, a professor of ophthalmology at Stanford, directs Stanford's Hansen Experimental Physics Laboratory and has developed and patented numerous devices over the years to diagnose and treat eye diseases.

Palanker’s new prosthetic device, called PRIMA, is being commercialized in partnership with Pixium Vision of France. Like an earlier retinal system, the ARGUS II, it features a tiny video camera mounted atop futuristic-looking augmented-reality goggles, connected to a video processor about the size of a cell phone. Yet it requires neither the implantation of a bulky electronics case and antenna nor, as in a German system, a cable coming out of the eye. Instead it relies on multiple arrays of photodiodes, each about a millimeter in diameter and containing hundreds of pixels, which work like the solar panels on a rooftop. Surgeons can lay down these tiny chips, like tiles, replacing the missing light-sensitive rods and cones in the central retina.

A camera mounted on the PRIMA “bionic” goggles captures an image, say a flower. The attached video processor and microdisplay convert that picture into pulses of near-infrared light, which are projected from the goggles into the eye. Photodiode arrays, implanted under the retina, pick up these signals and convert them into electrical pulses that stimulate the bipolar cells directly above them. The brain perceives these pulses as patterns of light. (Source: Stanford University)

When PRIMA’s camera captures an image of, say, a flower, the video processor transmits that picture to a microdisplay mounted inside the goggles. Powerful pulses of near-infrared light illuminate this display and are projected from the goggles into the eye, like the invisible rays of a TV remote control. The implanted photodiodes pick up these signals and convert them into tiny pulses of electrical current, which stimulate the bipolar cells directly above them. The signals propagate to the ganglion cells and then to the brain, which perceives them as patterns of light: a flower!
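The signal chain described above can be caricatured as a series of simple conversions. The sketch below is purely illustrative: every function name, threshold and scale factor is invented for this article, and the real device works through analog optics and photodiode physics, not code.

```python
# Loose, illustrative sketch of the PRIMA signal chain described above.
# All numbers and names are invented; they are not part of the real device.

def camera_to_nir(pixel_brightness):
    """Map a camera pixel (0-255 grayscale) to a normalized near-infrared
    pulse intensity projected by the goggles' microdisplay."""
    return pixel_brightness / 255.0

def photodiode_current(nir_intensity, responsivity=0.5):
    """Convert received NIR light into a photodiode current (arbitrary
    units). A subretinal pixel converts light to charge directly, like a
    tiny solar panel."""
    return nir_intensity * responsivity

def stimulates_bipolar_cell(current, threshold=0.1):
    """A bipolar cell above the implant fires only if the electrical pulse
    exceeds a threshold; the brain perceives the result as light."""
    return current > threshold

def pipeline(image_row):
    """Run one row of camera pixels through the whole chain."""
    return [stimulates_bipolar_cell(photodiode_current(camera_to_nir(p)))
            for p in image_row]

# A bright petal pixel (200) registers; a dark background pixel (20) does not.
print(pipeline([200, 20]))  # [True, False]
```

The point of the toy model is the architecture: each stage is a local, stateless conversion, which is what lets the implant work without a cable or external power source.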

In the next generation of the device, Palanker says, “We should be able to put more than 12,000 pixels within 15 degrees of the visual field,” taking the system to 20/150 or even better. And while PRIMA can’t reproduce color vision yet — only various shades of gray — “We are working on single-cell selectivity in retinal stimulation, which might enable color perception,” he says. With more experience, surgeons also might be able to expand the visual field to about 20 degrees.

Of course, the ultimate dream is to build a visual prosthesis so small and powerful that it can stimulate specific neurons inside the retina, rather than sundry patches of them. That’s the goal of E.J. Chichilnisky, PhD, a Stanford professor of neurosurgery and of ophthalmology.

“Think of the retina as an orchestra,” Chichilnisky explains. “When you try to make music, you need the violins to play one score, the oboes to play a different score and so on.” Likewise, the retina’s 1 million or so ganglion cells fall into about 20 distinct types. Each plays a slightly different role in transmitting the perception of shape, color, depth, motion and other visual features to the brain.

Chichilnisky joined the Stanford faculty in 2013, after 15 years at the Salk Institute for Biological Studies. Since his days as a Stanford doctoral student in the mid-1990s, he has worked with a variety of physicists and engineers, notably Alan Litke, PhD, of the UC-Santa Cruz Institute for Particle Physics, to develop small but powerful electrode arrays capable of measuring neural activity at the cellular level.

To better understand the patterns of electrical activity in the retina, Chichilnisky and his colleagues use eye tissue taken from primates that have been euthanized for other medical studies. By placing small pieces of retinal tissue atop the microchip arrays, then exposing those samples to various patterns of light, they’ve been able to record and study the distinctive electrical responses of five different types of retinal ganglion cells, which together account for 75 percent of the visual signal sent to the brain. They’ve also developed techniques to replicate those electrical patterns, artificially stimulating the ganglion cells with high precision, comparable to the natural signals elicited by the rods and cones.

By learning how to replicate these complex signals, Chichilnisky and his team are one step closer to their ultimate goal: a high-acuity visual prosthesis that behaves like an orchestra conductor, signaling the retina’s myriad neurons to fire in precisely the right ways, at precisely the right times. “I’m not saying we’ve got it nailed,” he says, “but we certainly now have proof of concept for how to make a better device in the future.”

Chichilnisky says the next challenge will be to fit his lab’s formidable computing power onto an implantable electrode array that can do its job safely and autonomously inside the eye, without overheating the surrounding tissue. If all goes well, a prototype of the implant could be ready for testing in lab animals in four to five years.

Programming language plus simple circuit design for self-reporting routers
A team of researchers from MIT, Cisco Systems, and Barefoot Networks noted that in today's data networks, traffic analysis, such as determining which links are getting congested and why, is usually done by computers at the network's edge, which try to infer the state of the network from the times at which different data packets reach their destinations.

They suggest that if the routers inside the network could instead report on their own circumstances, network analysis would be much more precise and efficient, enabling network operators to more rapidly address problems. To that end, router manufacturers have begun equipping their routers with counters that can report on the number of data packets a router has processed in a given time interval.

A team of researchers from MIT, Cisco Systems, and Barefoot Networks has come up with a new approach to network monitoring that provides great flexibility in data collection while keeping both the circuit complexity of the router and the number of external analytic servers low. (Source: MIT)

The team also noted that raw packet counts are only so useful, and giving routers a special-purpose monitoring circuit for every new measurement an operator might want to make isn't practical. The alternative is for routers to ship data packets to outside servers for more complex analysis, but that technique doesn't scale well. A data center with 100,000 servers, for instance, might need another 40,000 to 50,000 servers just to keep up with the flood of router data.

As a result, the team has come up with ‘Marple,’ a new approach to network monitoring that provides great flexibility in data collection while keeping both the circuit complexity of the router and the number of external analytic servers low.

Marple consists of a programming language that enables network operators to specify a wide range of network-monitoring tasks and a small set of simple circuit elements that can execute any task specified in the language. Simulations using actual data center traffic statistics suggest that, in the data center setting, Marple should require only one traffic analysis server for every 40 or 50 application servers.
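To make the idea concrete, here is a hypothetical sketch, in Python, of the kind of task such a monitoring language might express: grouping packets by flow and folding a running statistic (here, an exponentially weighted moving average of latency) per key. Marple's actual language is its own DSL running on router hardware; the names and data below are invented for illustration.

```python
# Hypothetical, Python-flavored sketch of a Marple-style monitoring query.
# Real Marple queries compile to simple circuit elements in the router;
# this only mimics the per-key "group and fold" pattern in software.

def ewma_fold(state, latency, alpha=0.25):
    """Fold function: exponentially weighted moving average of latency.
    `state` is None the first time a flow key is seen."""
    if state is None:
        return latency
    return (1 - alpha) * state + alpha * latency

def run_query(packets, fold):
    """Group packets by flow key (src, dst) and fold a running statistic
    per key, the way a router could update a small key-value store."""
    stats = {}
    for pkt in packets:
        key = (pkt["src"], pkt["dst"])  # group-by key
        stats[key] = fold(stats.get(key), pkt["latency_us"])
    return stats

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "latency_us": 100.0},
    {"src": "10.0.0.1", "dst": "10.0.0.9", "latency_us": 200.0},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "latency_us": 50.0},
]
stats = run_query(packets, ewma_fold)
print(stats[("10.0.0.1", "10.0.0.9")])  # 0.75*100 + 0.25*200 = 125.0
```

Because the fold runs inside the router and only compact per-key results leave the device, the analysis servers see aggregates rather than raw packets; that is the mechanism behind the roughly one-analysis-server-per-40-to-50-application-servers figure, versus the 40,000 to 50,000 extra servers a 100,000-server data center might otherwise need.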

Superconducting sulfur, molybdenum and selenium sandwich
Rice University researchers have devised a new twist on 2D materials, one capable of hosting an intrinsic electric field that also shows promise for catalytic production of hydrogen.

The Rice laboratory of materials scientist Jun Lou has made a semiconducting transition-metal dichalcogenide (TMD) that starts as a monolayer of molybdenum diselenide. The researchers then strip the top layer of the lattice and replace precisely half the selenium atoms with sulfur. The new material, which they call Janus sulfur molybdenum selenium (SMoSe), has a crystalline construction, the researchers said.

Rice materials scientists replace all the atoms on top of a three-layer, two-dimensional crystal to make a transition-metal dichalcogenide with sulfur, molybdenum and selenium. (Source: Rice University)

This image shows top (left) and side views of Janus sulfur molybdenum selenium created at Rice University. Careful control of heating allows sulfur to replace just the top plane of selenium atoms in the new two-dimensional material. (Source: Rice University)

This type of two-faced structure has long been predicted theoretically but very rarely realized in the 2D research community, the researchers said. It could lead to applications such as basal-plane-active 2D catalysts, as well as robust piezoelectricity-enabled sensors and actuators at the 2D limit.

In addition to Rice University, the team included researchers from the University of Texas at Austin and the University of Pennsylvania.