If they're using light field tech, it should support variable focus (i.e. whatever you look at comes into focus). If they can do the computation fast enough (again, a big if), your eyes would never notice (again, assuming appropriate displays, which don't exist yet). Even if they get variable focus (and parallax and other imaging tricks), I think "indistinguishable" augmented reality will still be limited by two major factors which I don't see any tech ready to touch in the near future: full visible spectrum color reproduction and contrast.

Your post started off well, noting the issue of focus, which is often ignored; people commonly address only stereopsis and motion parallax. (The actual depth cue, though, is not focus itself but accommodation: the response of the eye to defocus.) However, it then took a couple of wrong turns.

First is the issue of color reproduction. Humans are trichromats. We have three types of cone cells in our retinas, and a variant of RGB (read: a color space based on only three primaries) can reproduce the full CIE perceptual color gamut. Prototype display systems that do just that exist, and I've seen more than one demoed at SIGGRAPH over the past decade. Perhaps your confusion stems from the very different situation with lighting, where the full spectral response of the luminaire matters: the perceived color of a surface is the product of the light spectrum and the surface's spectral reflectance, integrated against the retinal cone cell response curves. Two lights with the same whitepoint but different spectra can therefore make two surfaces appear the same color under one light but different colors under the other. This does not apply to emissive displays, where three primaries are all you need, as long as the choice of primaries is right and the maximum achievable saturation is sufficient.

The second wrong turn concerns contrast. High dynamic range displays exceeding the dynamic range of the human eye have been around for quite a few years; just search for BrightSide Technologies. I still remember the initial prototype at the graphics lab at UBC (where I did my masters). So all the issues have been individually resolved; it's just a matter of putting it all together.
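The lighting effect described above (metamerism) is easy to see with numbers. Below is a toy sketch in Python: all values are made up for illustration (four coarse wavelength bands and three hypothetical cone sensitivity curves, not real CIE data). Two surfaces produce identical cone responses under one light but different responses under another.

```python
# Toy metamerism demo. Perceived color is the illuminant spectrum times the
# surface reflectance, integrated against each cone's sensitivity curve.
# All numbers here are invented for illustration; real cone fundamentals
# span hundreds of wavelength samples, not four bands.

CONES = [  # rows: three hypothetical cone sensitivity curves, 4 bands each
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]

def cone_response(illuminant, reflectance):
    """Integrate illuminant * reflectance against each cone curve."""
    stimulus = [i * r for i, r in zip(illuminant, reflectance)]
    return [sum(c * s for c, s in zip(cone, stimulus)) for cone in CONES]

flat_light  = [1.0, 1.0, 1.0, 1.0]   # smooth spectrum
spiky_light = [1.0, 0.5, 1.0, 0.5]   # same rough whitepoint, spikier spectrum

surface_a = [0.5,  0.5,  0.5,  0.5]
surface_b = [0.75, 0.25, 0.75, 0.25]  # a metamer of surface_a under flat_light

# Under the flat light the two surfaces are indistinguishable...
print(cone_response(flat_light, surface_a))   # [1.0, 1.0, 1.0]
print(cone_response(flat_light, surface_b))   # [1.0, 1.0, 1.0]
# ...but the spiky light reveals them as different colors.
print(cone_response(spiky_light, surface_a))  # [0.75, 0.75, 0.75]
print(cone_response(spiky_light, surface_b))  # [0.875, 0.875, 0.875]
```

This is exactly why luminaire spectra matter for lighting but not for emissive displays: a display drives the cones directly with its primaries, so there is no reflectance term for the spectrum to interact with.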

Humans are trichromats. We have three types of cone cells in our retinas...

It's not quite that simple. Putting aside the rare few tetrachromats with four kinds of cone cells, there are also the rods, which can sense a broad spectrum of light overlapping the ranges of the cone cells—some more than others. The color isn't going to look quite right if the overall brightness reported by the rods doesn't match the per-component brightnesses reported by the cones.

That said, three well-chosen primary colors can get us most of the way there, perhaps enough that these minor differences won't be noticeable.

The idea that there might be some human tetrachromats has been entirely discredited.

I stand corrected. It appears [wikipedia.org] that while there are plenty of humans with four cone types, this has only been shown (in 2012) to lead to enhanced color discrimination in one subject, after 20 years of research. The vast majority are "non-functional tetrachromats". So perhaps not entirely discredited, but close enough as makes little difference.

This is separate from the ability of trichromats to distinguish more colors by taking into account both the cones and the rods, which is well-established, though genetic variation means the effect differs between individuals.

This seems so much more interesting than a watch.
I have not worn a watch for years. I can't wear one at work because I work in a hospital. Outside work, I don't bother because I can get the time off my phone or a clock, etc.
Augmented reality could be useful in all sorts of jobs and leisure activities.

So far, the only wearable technology I use is a stereo in-ear bluetooth headset.

The limits in augmented reality wearable technology are processing power, weight, comfort, and variable focus. Another factor is as simple as whether you can actually see reality directly through the display, or whether reality needs to be captured first, processed, and redisplayed with the augmentation added to it. If you can see through the display, it greatly simplifies processing requirements: a portion of the wearable display just has to block light at the appropriate focal point and display an alternate image.

The variable focal depth issue was solved long, long ago by microlens arrays. Current ultra-high resolution displays make this approach very practical, as you can have 8x8 patches behind each microlens and still have decent overall 2D resolution. As for the variable opacity, just use DLP tech, as was (is?) fashionable in some projectors.
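The resolution trade-off mentioned above is simple arithmetic. Here is a quick sketch (the panel dimensions are hypothetical, not any specific product): each microlens covering an 8x8 patch of pixels converts 64 pixels of spatial resolution into 64 angular samples per lenslet.

```python
# Rough budget for a microlens-array light-field display.
# Panel resolution is a hypothetical "8K" microdisplay, not a real device.
panel_w, panel_h = 7680, 4320
patch = 8                      # pixels per microlens, per axis

spatial = (panel_w // patch, panel_h // patch)  # effective 2D resolution
angular_views = patch * patch                   # directions per microlens

print(spatial, angular_views)  # (960, 540) 64
```

So an 8K panel yields roughly a 960x540 light-field image with 64 view directions, which illustrates why "decent overall 2D resolution" only became plausible once ultra-high-resolution panels arrived.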

A bit of careful thought would make clear why neither concept, as mentioned, works all that well in the application you implied; their inherent designs fail that particular application. Quick knee-jerk thinking makes it seem like they might work, but careful consideration of their inherent problems should make it clear why those solutions fail.

Why don't you explain? If it is lol funny, then you should be able to say why.

Sergey Brin, director of X projects at Google and co-founder of the company, has a strong anti-authoritarian and anti-military streak. The idea that he'd invest himself so deeply into a project focused on military applications is laugh-out-loud funny.

So where was his "strong anti-authoritarian and anti-military streak" when he was rolling over for the NSA **for years**?

That never happened. The NSA tapped Google's fiber without Google's knowledge, but there's no evidence that Google ever willingly participated. As soon as Google found out about the taps, it accelerated a program to get the data on all those fibers encrypted, to lock the NSA out.

Google invades privacy for profit and for decades gave the NSA (and god knows who else) an unaccountable back door to all our data

Google trades the right to target ads to you in exchange for services, and enables you to opt out of the trade if you want, even providing the necessary tools for you to do it. Google has never given the NSA an "unaccountable back door."

After I checked some of the job requirements posted on their website, I was initially tempted to think that they are working on a VRD (virtual retinal display) variant. Then I found this - http://www.google.com/patents/... [google.com]

And there are other patents filed by Magic Leap Inc - https://www.google.com/search?... [google.com]
My "rough" guess is that they are probably looking at a system which combines a freeform optical waveguide prism/compensation lens with an image generated by a virtual retinal display.

I think it's hilarious that Facebook paid 2 billion for Oculus, while Magic Leap has far superior tech and seems to value itself around 1.6 billion.

Here are two possible explanations:
1. Zuckerberg is an idiot CEO who overpays for things (he did pay $19 billion for WhatsApp, after all).
2. Zuckerberg knows his stock is way overpriced, so he is actually getting a better deal than it appears. Most of the Oculus acquisition was paid for with FB stock.