
This image, taken from a YouTube video, shows a pilot project at Walgreens to provide 3D views of goods sought by a shopper in a store. The technology comes from Google’s Project Tango and Aisle411, a shopping location application maker.
Credit: YouTube

Google wants to extend its Project Tango computer vision platform beyond phones and tablets to robots, drones and other devices.

"There's huge impact potential there," said Eitan Marder-Eppstein, who leads the developer, relations and engineering teams for Project Tango at Google, during a speech at the Ubiquity Developer Summit in San Francisco this week.

Devices can acquire a wealth of information on location and objects in view with Project Tango, which is a hardware and software platform. Smartphones and tablets can measure distances, recognize items, create models of 3D objects, and map locations. Relevant information is shown in real-time on screen.
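Distance measurement of this kind reduces to simple geometry once a device has depth data. As a minimal sketch (the coordinates here are hypothetical stand-ins for points a depth camera might return, not actual Project Tango API output), the distance between two points in a 3D point cloud is just their Euclidean separation:

```python
import math

def point_distance(p1, p2):
    """Euclidean distance between two 3D points (x, y, z), in meters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Two hypothetical points, in meters, sampled from a depth point cloud,
# e.g. the two ends of an object the user taps on screen.
corner_a = (0.0, 0.0, 2.0)
corner_b = (1.2, 0.0, 2.0)

print(round(point_distance(corner_a, corner_b), 2))  # 1.2
```

In practice the platform does the hard part, producing well-calibrated 3D points from its cameras and sensors; the measurement shown on screen is derived from geometry like this.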

There is a lot of interest in Project Tango from people making quadrotors, which are drones with four rotors, Marder-Eppstein said. Project Tango could also be used in devices focused on visualization and localization.

"We're first focusing on the smartphone space to drive down the cost of sensors and to make the technology more ubiquitous," Marder-Epstein said.

Project Tango got a boost last week at CES. Lenovo and Google announced a Project Tango smartphone, which will ship worldwide for under US$500 by the middle of this year. Intel announced a 6.5-inch smartphone with a 3D RealSense camera that supports both the RealSense and Project Tango software development kits (SDKs). Both devices rely on special cameras and sensors.

Google provided examples of how Project Tango works. A mobile device can measure a couch in a store so a user can tell whether the furniture will fit in their living room. Project Tango devices can also map out a store and recognize signboards and items on sale. A device can guide users to the aisle where a specific product is sold, or to the cashiers by recognizing "check out" signs. A trajectory log helps users track back to previous locations.
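The couch example comes down to comparing measured dimensions. A minimal sketch of that check, with hypothetical dimensions standing in for values a Tango device would measure (this is illustration, not Tango SDK code):

```python
def fits(item_dims, space_dims):
    """True if an item (width, depth, height) fits in a space
    (width, depth, height), allowing 90-degree rotation on the floor."""
    iw, idp, ih = item_dims
    sw, sdp, sh = space_dims
    if ih > sh:
        return False
    return (iw <= sw and idp <= sdp) or (iw <= sdp and idp <= sw)

# Hypothetical couch measured in-store: 2.2 m wide, 0.9 m deep, 0.8 m tall,
# checked against a 3.0 x 2.5 m living-room corner with a 2.4 m ceiling.
print(fits((2.2, 0.9, 0.8), (3.0, 2.5, 2.4)))  # True
```

The device supplies the measurements; the "will it fit" answer is then an ordinary comparison like this one.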

Project Tango can also be used for augmented and virtual reality. The 3D models captured in the real world can be transferred to virtual reality games and environments. Users can also play AR games with graphics superimposed on real-world backgrounds.

Project Tango trials are ongoing in malls where users can get turn-by-turn directions to stores. It'll also be possible to locate friends in stores as Project Tango tracks coordinates.
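Locating a friend from tracked coordinates is, at its simplest, a nearest-point lookup. A rough sketch, assuming each device shares a 2D position in a common map frame (the names, positions, and sharing mechanism here are all hypothetical):

```python
import math

def nearest_friend(me, friends):
    """Given my (x, y) position and a dict of friend name -> (x, y)
    positions in the same map frame, return the closest friend and the
    straight-line distance to them in meters."""
    name, pos = min(friends.items(), key=lambda kv: math.dist(me, kv[1]))
    return name, math.dist(me, pos)

# Hypothetical positions, in meters, within a mall's shared map.
friends = {"Ana": (12.0, 4.0), "Ben": (3.0, 1.0)}
name, dist_m = nearest_friend((2.0, 2.0), friends)
print(name, round(dist_m, 2))  # Ben 1.41
```

Real indoor routing would follow walkable paths rather than straight lines, but the tracked coordinates are the starting point either way.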

"We do think location-based experiences and navigation experiences are one of the pillars that we'll explore for this technology," Marder-Epstein said.

Google is focusing Project Tango on indoor use, and is working to resolve issues so the technology can be useful outdoors as well. Robots and drones are getting sophisticated enough to avoid obstacles, and Project Tango's measurement, tracking and location features could help these mobile devices navigate, analyze and map their surroundings.

Google has solved some problems so Project Tango can recognize and locate obstacles such as trees, leaves and swinging objects. But measurement and building accurate 3D models may be more of a challenge outdoors. The sun is a source of infrared light and can degrade the accuracy of the infrared sensors in cameras, which are used to gauge depth and the geometry of 3D models.

More advanced cameras will be released over time, which should resolve those issues. Additional software tools will help enhance visual analysis and tracking information, Marder-Eppstein said.