Posted by Soulskill on Friday February 21, 2014 @11:45AM
from the your-bathroom-will-be-in-google-maps-soon dept.

Nerval's Lobster writes "Google's Advanced Technology and Projects Group is working on a new initiative, Project Tango, which could allow developers to quickly map objects and interiors in 3D. At the heart of Project Tango is a prototype smartphone with a 5-inch screen, packed with hardware and software optimized to take 3D measurements of the surrounding environment. The associated development APIs can feed tons of positioning and orientation data to Android applications written in Java, C/C++, and the Unity Game Engine. In addition to a 'standard' 4-megapixel camera, the device features a motion-tracking camera and an aperture for integrated depth sensing; integrated into the circuitry are two computer-vision processors. Google claims it only has 200 developer units in stock, and it's willing to give them to independent developers who can submit a detailed idea for a project involving 3D mapping of some sort. The deadline for unit distribution is March 14, 2014. In theory, developers could use ultra-portable 3D mapping to create better maps, visualizations, and games. ('What if you could search for a product and see where the exact shelf is located in a super-store?' Google's Website asks at one point.) The bigger question is what Google intends to do with the technology if it proves effective. Google Maps with super-detailed interiors, anyone?"
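The summary mentions APIs that feed positioning and orientation data to applications. As a rough illustration (in Python rather than Tango's actual Java/C/Unity bindings, and with entirely hypothetical names), consuming such a 6-DoF pose stream might look like this:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical 6-DoF pose sample: translation in meters, orientation as a quaternion."""
    timestamp: float     # seconds since the motion-tracking service started
    translation: tuple   # (x, y, z) position of the device
    orientation: tuple   # (x, y, z, w) unit quaternion

def on_pose_available(pose: Pose, path: list) -> None:
    """Hypothetical callback: accumulate device positions to build a trajectory."""
    path.append(pose.translation)

# Simulated pose stream; a real device would deliver samples many times per second.
path = []
for i, x in enumerate([0.0, 0.1, 0.25]):
    on_pose_available(Pose(timestamp=i / 30.0,
                           translation=(x, 0.0, 0.0),
                           orientation=(0.0, 0.0, 0.0, 1.0)), path)
```

The point is only the shape of the data: a timestamped position plus orientation, delivered continuously, from which an app can reconstruct where the device has been.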

Everything Google is doing now is for their upcoming robotics division. This is how their robots will see and map the environment. Man, I wish my Roomba had this ability, rather than just randomly moving and adjusting direction based on what it bumped into.

Yup, what AI needs most right now is a way to digitize its environment into a 3D space like you see in video games. Hey, even if you built the technology and had no robots, you could map out the world and have video games that you play across the whole Earth :) Cannonball Run USA, anyone? I have an AI blog [botcraft.org] that says the same thing: we need 3D digitization.

Some of the competitors' products do. I forget which one(s). There was a neat YouTube video of some company's robot: it would decide when it was done with a room and not go back into it, just by mapping out where the doors were as it went along.

Think bigger. Pair this with some Google Glass/Oculus Rift hybrid (i.e., what that tech could become in the next decade or two) and some advanced augmented reality software. If you can map your world accurately, then you can project what you want on top of it accurately.

Anything you use but don't usually touch no longer has to actually exist -- it can be projected into reality. No TV. No screens of any sort, really. Might be nice to have books, but you can just have one full of blank pages. Don't need artwork on the walls, either.

Yes, but this appears to just use a pair of cameras on the back of a regular phone. Most accurate 3D mapping for robotics is done with expensive and more energy-intensive (battery-draining) LIDARs. Cameras are cheap and don't use a lot of power, so using cameras could drive down the cost of mapping 3D environments, or of mapping an object for 3D printing, if it is something that gets included in new phones. Again, sure, it has been done before, but not in something that could just be added as a feature to everyone's phone.
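For what it's worth, the reason a pair of cameras can substitute for a LIDAR at all is the classic pinhole stereo relation Z = f * B / d: depth falls out of the focal length, the baseline between the cameras, and the pixel disparity of a matched point. A minimal sketch (the numbers below are illustrative, not Tango's actual specs):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d.

    focal_px: focal length expressed in pixels
    baseline_m: separation between the two cameras, in meters
    disparity_px: horizontal pixel shift of the same point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point must be visible in both cameras)")
    return focal_px * baseline_m / disparity_px

# Illustrative phone-scale numbers: 700 px focal length, 8 cm baseline, 20 px disparity.
z = depth_from_disparity(700.0, 0.08, 20.0)  # -> 2.8 meters
```

Note the trade-off this formula makes explicit: a short phone-scale baseline means disparity (and therefore depth resolution) drops off quickly with distance, which is one reason LIDARs still win for long-range mapping.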

No, not wrong. What has been done before is 3D mapping using cameras, and even putting those cameras on a quad-rotor. And yes, the first post about building the functionality into a quad-rotor was also a bit ignorant of recent work that has put cameras onto quad-rotors to do 3D mapping, but the underlying enthusiasm about the new capability being provided is still valid if you consider it a bit less narrowly.

If you could just strap a smartphone with a 3D camera onto a quad-rotor or ground robot, you'd have a cheap mapping platform.

Why isn't that useful to consumers? I can't tell you the number of times I've spent 10 minutes looking for a product: going up and down aisles, asking an employee which aisle it's in, still not finding it, and then finally realizing I've walked right by it a half dozen times. If I could just pull out my phone and have it lead me right to it, I'd love it. That's not them forcing something on me... it's helping me more easily find something I already know I want.

Pretty sure most of the big stores (at least those with a perpetual inventory system) include location markers like "aisle 4, shelf 5, 30 units" -- you can see the RGIS count tags with that information left behind on shelves after a midnight inventory count.

If a product moves to aisle 1, shelf 3, they'd just update that location in the database. So the store's indoor navigation shouldn't route to static product locations, but just query for the current location. The underlying 3D map of the store wouldn't need to change.

Wal-Mart already tracks the location of everything on their shelves using RFID. They just don't give that data to consumers. So basically they agree with you: right now it is more useful to store owners than to consumers. However, this is just a more detailed version of GPS. Many people have already let go of their ability to find their way around town without the help of their phone's maps. Given a strong enough push, many users would probably give up their ability to find their way around a supermarket as well.

Lowe's already has this in their mobile app. They recently added bin numbers to all their shelving. Select a store, search for something, and it will give you the aisle and shelf number in addition to a map of the store with the location pointed out.
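The split the comments above describe -- a static indoor map for routing, plus a live inventory lookup for the destination -- can be sketched in a few lines. Everything here (the products, bin-number format, and `locate` helper) is hypothetical, not Lowe's or anyone's actual schema:

```python
# Hypothetical in-memory stand-in for a store's perpetual inventory system.
inventory = {
    "deck screws": {"aisle": 12, "shelf": 3, "bin": "12-3-07"},
    "wool sweater": {"aisle": 4, "shelf": 5, "bin": "04-5-30"},
}

def locate(product: str) -> str:
    """Look up the *current* shelf location, so the indoor map always routes to live data."""
    loc = inventory.get(product.lower())
    if loc is None:
        return f"'{product}' not stocked at this store"
    return f"{product}: aisle {loc['aisle']}, shelf {loc['shelf']} (bin {loc['bin']})"

# If the product moves, only the database row changes; the 3D map itself is untouched.
inventory["deck screws"]["aisle"] = 1
```

The design point is the one made a few comments up: the expensive artifact (the 3D scan of the building) stays static, while the cheap, frequently-changing data (what sits on which shelf) lives in an ordinary database query.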

Warning: this video is not what it appears. To all of you saying "WHY THE HELL WOULD I WANT A 3D ENVIRONMENT ON MY PHONE?? TO PLAY GAMES??", you are not the target consumer.

To you, the average consumer, it may seem like a neat new project with some cool implications, like indoor navigation ("I'm inside the mall, how do I get to Macy's?" or "I'm at a football game, where is the nearest hot dog stand?").

Now think about what Google is: it is evolving far beyond an advertising company into a "big data" provider.

Take it a step further. Combine this tech with Google Glass (or some other wearable peripheral we may not know about), and the possibilities expand ("Where does the user usually look?" "What is the optimal location for this new billboard? Based on Google Glass 3D mapping data, drivers usually look to the northwest when traveling down Route 33.").

Take it even further to Google's long-known project to catalogue everything on the planet. This will expand into their goal to be able to "Google" real-life stuff. Like a real life Control + F. You lost your keys? You don't remember where you put them, but your phone combined with your Google Glass remembers exactly where they are. Even if it doesn't remember, perhaps you can swivel your head around the room, scanning it with a camera until it alerts you that you are looking directly at an object that looks just like a pair of keys.
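The "real-life Control + F" idea above boils down to comparing a feature signature of whatever the camera currently sees against a stored signature of the object you're hunting for. A toy sketch, with made-up three-number descriptors and an invented threshold (real systems would use learned descriptors, not raw vectors like these):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def looks_like(stored, seen, threshold=0.5):
    """Toy 'Ctrl+F for reality': flag a match when the descriptors are close enough."""
    return euclidean(stored, seen) < threshold

keys_signature = (0.9, 0.1, 0.4)  # descriptor captured when the keys were last scanned
frame_objects = [(0.2, 0.8, 0.1), (0.85, 0.15, 0.38)]  # descriptors of objects in view
matches = [looks_like(keys_signature, d) for d in frame_objects]  # [False, True]
```

Swiveling your head around the room, in this picture, just means running that comparison against every candidate object in each new camera frame until one crosses the threshold.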

Now expand that further: big data. Have you been looking at sweaters a lot lately? Tango knows you've been shopping in department stores when you go to the mall. They can feed this data to advertisers, learn your color preferences, learn everything about you, and be able to direct you to products.

Google can learn the shopping habits - of EVERY PERSON IN THE WORLD - and relay that information to marketers, telling them the best places to arrange their products in brick-and-mortar stores, and what items to put on sale and when.

Google will be able to provide sales data BEFORE the sale is even MADE. Perhaps in early September people in New York City start shopping for winter clothes, but back in August they were googling a new jacket, maybe walked around a leather store and looked at some jackets, etc.

This has huge market potential when combined with all of Google's other products. Remember, this isn't just a side project; it is the result of huge acquisitions and a scientific approach to recruiting and retaining top talent (we are talking salaries in excess of $1 million).

Google is moving to be the top "information manufacturer" in this new information age. Amazing! Wish I was a part of it.

I've been waiting for tech like this, to combine with Google Glass as well -- but to do the exact reverse of what that quote suggests.

Instead of using Glass to scan reality into a digital model, *use it to project a digital model into reality*. This will allow MUCH better augmented reality than we currently have -- perhaps to the point where you can change the color of your bed sheets with the press of a button. If you could achieve that, there's a whole lot of manual labor that suddenly becomes pure information.

There are two potentially huge markets. I, for one, would like to be able to take a few (360-degree) photos of my house and have SketchUp [sketchup.com] (formerly owned by Google) deliver a 3D version that prospective buyers could "walk around in" via their browsers.
Similarly, construction workers spend a lot of effort making site measurements to create estimates, order materials, etc. If that could be automatically produced from 3D renderings, all the better.
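Even a crude version of that construction use case falls out of having scanned room dimensions. A minimal sketch, with illustrative numbers (a real estimator would measure the mesh directly rather than assume a rectangular room):

```python
def wall_paint_area(length_m: float, width_m: float, height_m: float,
                    openings_m2: float = 0.0) -> float:
    """Estimate paintable wall area for a rectangular room from scanned dimensions:
    perimeter * ceiling height, minus the area of doors and windows."""
    perimeter = 2.0 * (length_m + width_m)
    return perimeter * height_m - openings_m2

# A 5 m x 4 m room with 2.5 m ceilings, one door (1.6 m^2) and one window (1.2 m^2).
area = wall_paint_area(5.0, 4.0, 2.5, openings_m2=1.6 + 1.2)  # -> 42.2 m^2
```

The value of a phone-based 3D scan here isn't the arithmetic, which is trivial; it's that the length, width, height, and opening measurements would come for free instead of from a tape measure.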

Jack Bauer and his pals already have 3D maps and schematics of every power plant, office building, warehouse, outhouse and chicken shack. Not to mention full control of the power, network and hot and cold water taps in each of them. And all in the time it takes Chloe to recalibrate the beam forming firewall protocols against the binary-coded output logs. Or something.