Joshua Schultz, a Ph.D. candidate, says that this system has been made possible in part by piezoelectric cellular actuator technology. Thanks to the actuators developed in their laboratory, it is now possible to capture many of the characteristics associated with the muscles of the human eye and its cellular structure.

The expectation is that the piezoelectric system could be used for future MRI-based surgery, furthering our ability to research and rehabilitate the human eye.

“Its simIlarity to the muscular system”. Do you ever proofread the articles on this website?

Back to the topic, I don’t get the usefulness of this. Human eyes move a lot because only the center of vision is precise enough to extract useful data for the brain, so they must scan the entire scene to really “see the whole picture”.
Artificial sensors, on the other hand, can manage a high resolution on a wide field of view, without requiring any specific movement.

It’s nice to mimic nature, but what’s the point when you have already exceeded the original?
It’s a real question.

Trying to replicate things that exist in nature can grant insight into how the natural thing works. Even if you don’t believe that pursuit of knowledge is worthwhile for its own sake, this can in turn power innovations in technology based on the new discovery about nature.

As for your comment on cameras with wide fields of view, why do you think telescopes have different zoom levels? Why do so many animals have too few kinds of color-sensing cells to see “in color”? Many capabilities are mutually exclusive, like how increasing magnification decreases field of view.

And just for the record, there is a potential practical application explicitly stated in the write-up. ;)

The dynamics (in both the non-linear controls world and the medical world) of how the eye actuates in response to a given input are still being researched. If you can design a system that replicates the kinematics of the original system and can predictably produce an accurate output signal, then it becomes easier to reverse engineer the system you are trying to characterize. Pretty useful stuff for diagnosing problems.

With the current state of research methods, it’s mostly a matter of defining some input, usually sinusoidal rotations at a known frequency, and observing the output. Enough research has been done on horizontal sinusoids to determine when the output is in a normal range, but not yet to fully characterize what researchers have labeled “random” behavior in the non-linear portions of the response. There has also been little work on analyzing these responses in 3D or with combined rotational and linear motion.
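To make the sinusoid-in, response-out protocol concrete, here’s a toy Python sketch — the “plant”, its gain, and its lag are invented numbers, not real ocular data — that drives a simulated response at a known frequency and recovers gain and phase with a single-frequency Fourier (lock-in) fit:

```python
import numpy as np

fs = 1000.0                       # sample rate, Hz
t = np.arange(0, 5, 1 / fs)       # 5 s of data
f0 = 0.5                          # stimulus frequency, Hz

stimulus = np.sin(2 * np.pi * f0 * t)   # input rotation (unit amplitude)

# Stand-in "eye": attenuates and lags the input, plus measurement noise.
rng = np.random.default_rng(0)
response = 0.8 * np.sin(2 * np.pi * f0 * t - 0.3) + 0.02 * rng.standard_normal(t.size)

# Project the response onto sin/cos at f0 to get gain and phase lag.
in_phase = 2 * np.mean(response * np.sin(2 * np.pi * f0 * t))
quad = 2 * np.mean(response * np.cos(2 * np.pi * f0 * t))
gain = np.hypot(in_phase, quad)
phase = np.arctan2(quad, in_phase)      # negative = response lags stimulus

print(f"gain ~ {gain:.2f}, phase ~ {phase:.2f} rad")
```

In a real experiment the simulated `response` line would be replaced by measured eye rotations; the projection trick stays the same, and judging “normal range” is then a comparison of the fitted gain and phase against population data.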

This is all useful work when you consider that falls (normally an issue with balance, which is highly interconnected with these eye motions) are the number one reason older people either get admitted to a hospital or nursing home or suffer an accidental death (the stats on this are pretty staggering: http://www.cdc.gov/HomeandRecreationalSafety/falls/adultfalls.html).

So, in terms of wide-angle vision, yes, cameras have exceeded the original thing. There is still plenty of work that needs to be done in terms of understanding how the body works, and with any subject, better tools make for better outcomes. In terms of robotics, this would be a good way to potentially reduce the necessary size of a high-precision machine by eliminating the gearboxes used to make the motions so precise. It’s not just limited to vision.

It could be useful for steadier cameras. The reason we don’t notice the image bobbing when we walk is that the eye muscles compensate. So if a camera were set up with these artificial eye muscles and programmed to fixate on a certain point, it would produce a much steadier video stream without the huge stabilization equipment currently used.
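For what it’s worth, that compensation idea sketches out in a few lines. In this toy Python example (bob amplitude, gain, and latency are all invented), the camera mount counter-rotates against measured body motion, the way eye muscles cancel head bob:

```python
import numpy as np

fs = 200.0
t = np.arange(0, 2, 1 / fs)
head = 2.0 * np.sin(2 * np.pi * 2.0 * t)    # degrees of head bob while walking

gain = 0.95                                  # imperfect compensation gain
latency = int(0.005 * fs)                    # one-sample sensor/actuator delay
counter = np.zeros_like(head)
counter[latency:] = -gain * head[:head.size - latency]

gaze = head + counter                        # motion the image actually sees
print(f"residual: {np.max(np.abs(gaze)):.2f} deg "
      f"vs {np.max(np.abs(head)):.2f} deg uncompensated")
```

Even with the imperfect gain and a one-sample lag, the residual image motion comes out roughly an order of magnitude smaller than the raw bob, which is the whole appeal of motion compensation over after-the-fact image manipulation.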

“Researchers at Georgia Tech have developed a biologically inspired system to control cameras on board robots that simulate the Saccadic optokinetic system of the human eye. Its similarity to the muscular system of the human eye is uncanny.

Joshua Schultz, a Ph.D. candidate, says that this system has been made possible in part by piezoelectric cellular actuator technology. Thanks to the actuators developed in their laboratory, it is now possible to capture many of the characteristics associated with the muscles of the human eye and its cellular structure.

The expectation is that the piezoelectric system could be used for future MRI-based surgery, furthering our ability to research and rehabilitate the human eye.”

Not much here, really, but they figured out how to move the camera using piezo cell chunks tacked together – each chunk can only move a little way, but if you stack them you get greater range and resolution of movement. So that means they can move the sensor around, dither, and all that to get effectively higher resolutions.
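A crude sketch of the stacking idea, with a made-up cell count and stroke (not Georgia Tech’s actual figures): each cell is roughly on/off with a tiny stroke, and a serial stack sums the strokes, so total range grows with cell count while the step size stays small.

```python
cell_stroke_um = 2.0      # stroke of one cell, micrometres (hypothetical)
n_cells = 16              # cells stacked in series (hypothetical)

def stack_position(on_cells: int) -> float:
    """Displacement when `on_cells` of the stacked cells are energized."""
    if not 0 <= on_cells <= n_cells:
        raise ValueError("on_cells out of range")
    return on_cells * cell_stroke_um

print(f"range: {stack_position(n_cells)} um in steps of {cell_stroke_um} um")
```

The dithering mentioned above would then just be stepping `on_cells` up and down by one between exposures.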

I suppose the money shot is that you can dispense with multi-axis camera actuators, stuff some piezo elements on the back of the chip, patent it, and write a little software to get marginally better video stability by replacing image manipulation with motion compensation.

I’m guessing DARPA/NSF wants this and Sony has some patents and all the usual disclaimers apply.
We’re probably not far off from having cameras embedded into all the military kit of most soldiers, which will let our soldiers benefit from the advice of REMF/armchair quarterbacks while they’re out defending whatever it is we defend these days.