[Vitor] et al. came up with two versions of hardware for this project. The first is a dual stack of high-resolution LCD displays, while the second revision is a single LCD with a lenticular overlay. With this hardware, the team can change the focal plane of an entire image, or just subsets of it, allowing customized vision correction for anyone with nearsightedness, farsightedness, astigmatism, presbyopia, or even cataracts.

With plenty of head-mounted augmented reality platforms coming down the pipe such as Google’s Project Glass and a few retina displays, we could see this type of software-defined vision correction being very useful for the 75% of adults who use some form of vision correction. It may just be a small step towards the creation of a real-life VISOR, but we glasses-wearing folk will take what we can get.

You can check out the .PDF of the paper here, or watch the video after the break.

16 thoughts on “Improve your vision with computer generated glasses”

Well, once they figure out how to close the loop without manual intervention, this might be something impressive.

There now exist variable-focus, manually adjusted glasses (better than nothing, but they only correct near/far and suck at a number of things) being pushed on poor Africans as a test market by a UK firm. I think they’re doing it that way to avoid lawsuits should it turn out that the glasses make people go cross-eyed or something.

Anyone inventing a cure/fix for presbyopia will make mad bank, at least until the trial lawyers show up! I was hoping stem cell research would help this, but it turns out that stem-cell effectiveness is rather hit-and-miss.

If you’re not trolling, I’m not surprised. I’ve tried to obtain samples of the optics, to no avail.

On-site CNC grinding is unavailable science fiction in most of Africa, but truthfully, there is little in standard optometry that requires much more than an Arduino’s worth of control or a laptop to make these things at home. Optical finishing is treated as big science, but it’s simply procedure + materials.

Warning: I am not an optometrist, but I’ve watched them grind and polish lenses.

For these things to work in the real world, they need to scan the Z depth of everything and process it in real time to correct for people’s eyesight.

They could go the Kinect route and put an LED bulb behind a mask with a load of grid-point holes on one side, and a cheap IR camera on the other that calculates depth by how offset each of the grid points is. You wouldn’t even need as much resolution as the Kinect.
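Hand-waving the optics, the depth-from-offset idea above boils down to classic triangulation: each projected dot’s sideways shift (disparity) between the projector and the offset camera is inversely proportional to its depth. A minimal sketch — the baseline and focal-length numbers below are illustrative assumptions, not actual Kinect specs:

```python
def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Classic stereo/structured-light triangulation: depth = f * b / d.

    disparity_px -- how far (in pixels) a grid dot appears shifted
    baseline_m   -- projector-to-camera separation (assumed value)
    focal_px     -- camera focal length in pixel units (assumed value)
    """
    if disparity_px <= 0:
        raise ValueError("dot not matched, or effectively at infinity")
    return focal_px * baseline_m / disparity_px
```

With these made-up numbers, a dot shifted 10 px would sit at 580 × 0.075 / 10 = 4.35 m; bigger shifts mean closer objects, which is why coarse grids still resolve useful depth at glasses-wearing distances.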

This would make them heavier than regular glasses, which is a problem with any HMD tech these days. And only rich folk would be able to afford them for a while, just like old-fashioned glasses, lol.

You know why? Because any experiment where you can carefully control the generated test data works better than those you don’t.

Saying “we optically deformed an image, and then heuristically figured out how to deform the deformation to obtain the original image again” doesn’t quite have the same ring as “we can solve vision problem X”.
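The “deform the deformation” trick being described is, in signal-processing terms, prefiltering: if the eye (or test lens) blurs the display by convolving it with some point-spread function, you can pre-apply an approximate inverse of that blur so the two cancel. A minimal sketch, assuming a Gaussian blur kernel and a Wiener-regularized inverse filter — this illustrates the general idea only, not the paper’s actual algorithm:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Centered n-by-n Gaussian point-spread function, normalized to sum 1."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_prefilter(image, psf, k=1e-3):
    """Divide out the blur in the frequency domain.

    psf must be the same shape as image here. The small constant k
    regularizes frequencies where the blur destroys information, so
    the inverse doesn't blow up amplifying noise.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

Blurring the prefiltered image with the same PSF then lands close to the original — but only as close as the regularization allows, which is exactly the commenter’s point: with an arbitrary real eye instead of a known test lens, you don’t get to pick the deformation you’re inverting.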

Although I wonder if this isn’t MIT-style hype, it’s still progress. Is it more valuable than just making the image larger? I don’t know, but I’m happy they’re at least working on image transforms under the guise of vision improvement…

You are right on: “any experiment where you can carefully control the generated test data works better than those you don’t”

We used simulated conditions to assess whether the method works, as in any scientific research. The fact that people used a lens to simulate a condition changes nothing relative to the real condition. Notice that we also have a real keratoconus test, which is much harder to correct than any low-order condition.

I’m certainly all for your research, and simulating the conditions you’ll correct for is valid science. No doubt about it, and I commend your research.

But closing the loop under real conditions is going to be far harder than closing it under arbitrary optical deformation X. I’m not hating on your work – I want this stuff to be real!

However, to use an analogy, this paper is to the implied solution as “nanoparticle X stops cancer cells in vitro” is to “nanoparticle X works in vivo to eliminate tumors”.

I remain aghast at how crude the basic science of sensing behind ophthalmic diagnosis is – let alone optometric tools. It’s a unique field, with millions of patients plowed through it each year, and yet we stumble around retinas and lenses as if they were cargo-cult dioramas. Commercial diagnostics are crude, and weighed down by a patent and lawsuit morass driven by little more than pharma ownership of every major vendor.

Yeah, just disregard me – this type of research is a sore point with me, because the solutions will not be found on the currently proposed optical paths but will instead have to wait on slow-as-molasses funding for retinal micro-structure repair and crystalline-materials physics.

Basically, no one is building the tools required to prototype the tools we need, and all the money is spent on LASIK/lens-augmentation technology because that’s where the cash flow is. But until the loop is closed, we have to rely on these proximal indicators of visual acuity, and there is no chance of organic manipulation to correct visual-field deformations.

This is because we keep using 19th-century optometric methods of diagnosing visual defects – and that leads to solutions like the one your team is perfecting – or at least making cool demos of – which are essentially no more than fixing dents with Bondo when you could be restoring the car.

In the meantime, there are billions to be made on presbyopia, and people keep trying to fix it with LASIK techniques that just won’t work.

I hope you can get funding because sooner or later you’ll have a chance to close the visual loop to make this practical, but that work is much more important than your implementation.

TL;DR: In other words, I’m a crank who has no idea what he’s talking about.