Google hasn’t just kept Motorola's patents in its deal with Lenovo; it's also keeping the mobile manufacturer's skunkworks-style Advanced Technology and Projects (ATAP) group.
And that team has just unveiled a new smartphone dubbed Project Tango, which is aimed at developers.
Project Tango ... watching you …

COMMENTS

Another harebrained scheme...

...with no idea how it should be used. If there were a problem it solved, Google would focus on solving the problem rather than asking developers to invent problems that fit the solution. Starting to sound like a cliché, I know...

Anyway, I don't give it any more cred than Google Glass, which is their last solution still looking for a problem.

Re: Another harebrained scheme...

From 'Micro Men', 2009 : "Can't they see we exist to push barriers? We won't be constrained like this. What's that line from Browning...? 'A man's reach must exceed his grasp, or what's a heaven for?'"

They are pushing those barriers. Cut them some slack. Not all of it will come off, but hey, that's what being inventive is all about.

The development of a device with novel sensors and accompanying processors - bundled with cellular radios (on the off chance that the user might wish to transmit the captured data elsewhere, shock horror) - does not preclude the phone you want. There are plenty of applications for this technology. For example, some of us might want a device that allows us to scan a room, store it, and then consult that data when we get down to the hardware store - or perhaps just use it to send an order for X square metres of floor tiles.
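As a back-of-envelope sketch of that tile-ordering idea (all figures here are hypothetical - the room dimensions simply stand in for whatever a phone's scan would report):

```python
import math

# Hypothetical scan result: room dimensions in metres, as a phone scan might report.
room_w, room_l = 4.2, 3.6
tile_side = 0.30       # 30 cm square tiles (assumed)
waste_factor = 1.10    # order 10% extra for cuts and breakage (rule of thumb)

area = room_w * room_l
tiles = math.ceil(area * waste_factor / tile_side**2)
print(f"{area:.2f} m^2 -> order {tiles} tiles")
# -> 15.12 m^2 -> order 185 tiles
```

Trivial arithmetic, of course - the point is that the hard part (getting trustworthy dimensions out of a handheld scan) is exactly what the sensors are for.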

Who knows what software developers might come up with - but they are more likely to come up with something useful if they are actually in possession of the hardware.

If you want call quality, security and decent battery life, get an old Nokia.

The "depth sensor" is written like it's some commodity thing, but as far as I'm aware there isn't a way to quickly retrieve depth information from a small sensor in high - or even useful - resolution. From my perspective that claimed ability is a massive breakthrough, but all we get from El Reg is:

"The sensors take up to a quarter of a million measurements every second"

Really? Seriously? Here, I'll give you some equally useful statistics for other things: "The Ferrari engine generates a spark up to 30,000 times per second"; "The camera's resolution is 40 megapixels per second"... Those are actually more sensible than your line, which makes no distinction about how these 'measurements' are split up between three sensors, an undetermined number of axes of data per sensor, the resolution of any of them, or the rate at which they're refreshed. You might as well excitedly tell someone that you got a house with a lot of space, and when asked how big, tell them, "a million cubic feet per month". I don't care if that's what the Google people gave you - even if it is, repeating it like it's meaningful betrays either terrible laziness or an atrocious misunderstanding of basic technology.

How is that "depth sensor" any different from Kinect? There are multiple known ways to map a 3D environment, such as IR and sonar. It isn't as if this will magically allow you to see what is behind a closed door. If it had enough processing power it might be able to see around corners in some cases (if the IR/sonar reflects back and the device can calculate the multiple reflections required to get around the corner and back).

This is simply packaging Kinect-like tech in a smaller form factor. If Apple were rumored to be doing this in a future iPhone, we'd be treated to a horde of people claiming they're copying Kinect, that Apple can't innovate, etc. If Google does it, we have deluded people like you touting this as a "massive breakthrough". If you think it is, the credit goes to whoever designed the sensors. I doubt Google did, any more than Apple designed the MEMS accelerometers in the first iPhone.

Where did I credit Google with the sensor design? The *functionality* is a breakthrough. Kinect's goals are relatively limited, and IR / sonar are pretty rough around the edges; unless their implementation has been significantly improved - perhaps by dint of combining the data with other sources, which would hand some of the credit to Google, incidentally - I don't think they're capable of much that's very interesting.

I did a significant amount of work in the 3D-from-sensors world (with sensors of various types; everything from structure-from-motion to LIDAR) a few years ago, so I'm not unknowledgeable. But it's been rather a while since I was in the thick of things, so I may have missed recent developments (hence the 'as far as I'm aware').

I honestly don't care who's responsible for a given technology, or who gets the credit, or which damn fanboys get to wave their unsavory extremities in celebration thereof. I'm interested in what I can do with the technology, in what its capabilities are, and in how it was achieved. Unfortunately, it seems that despite my careful wording, which was chosen specifically to indicate that I was interested in technology rather than assigning blame / credit to a favored organization, you, at least, have seen fit to assume that I just *must* be on *someone's* side.

Perhaps you should consider what other people are actually saying before you jump to conclusions and write them off as delusional.

For the love of all that is holy, can we have a discussion about some interesting stuff with some interesting potential uses without getting in a useless fucking pissing contest about which clique of halfwit cockchugger commentards gets to claim a glorious victory?!

"For the love of all that is holy, can we have a discussion about some interesting stuff with some interesting potential uses without getting in a useless fucking pissing contest about which clique of halfwit cockchugger commentards gets to claim a glorious victory?!"

Upvoted even though you criticized my post, not only because it was an eloquent critique, but because you managed to be eloquent and still work in the phrase "clique of halfwit cockchugger commentards"...

Re: Re:

> I don't care if that's what the Google people gave you - even if it is, repeating it like it's meaningful betrays either terrible laziness or an atrocious misunderstanding of basic technology.

Kinect-like hardware has been around for a few years now - heck, even Intel are pushing their 'RealSense' reference designs to laptop OEMs - so it would appear that the current bottleneck is actually the processing of the raw data. Accordingly, the article expanded upon the custom chips used to power this, making reference to the company's past announcement of a GPU/DSP mashup and speculating on the process size now used. Kinect has been around for a while, but when people have demonstrated it capturing data while it is being carried around, it has been tethered to a laptop with a fairly powerful GPU.
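For a sense of scale of that raw-data problem, here's the arithmetic using first-generation Kinect-class figures (from memory, so treat them as illustrative: a 320x240 depth image at 30 fps, each sample padded to 16 bits):

```python
# Rough data rate of a Kinect-class depth stream (illustrative figures).
width, height, fps = 320, 240, 30
bytes_per_sample = 2   # 11-bit depth padded to 16 bits

samples_per_sec = width * height * fps                    # raw depth samples/s
mb_per_sec = samples_per_sec * bytes_per_sample / 1e6     # raw bandwidth
print(f"{samples_per_sec:,} samples/s  ~{mb_per_sec:.1f} MB/s raw")
# -> 2,304,000 samples/s  ~4.6 MB/s raw
```

That's over two million depth samples per second before you do any fusion or tracking with them - which is why the custom silicon, not the sensor itself, is the interesting part.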

the next step to monetizing (sic) MY life.

Google, not content with capturing the entire public world via street cars and wifi slurping, now want to get access to *my* private space too.

What happens when they replace the camera's intrusive 3D mapping with quietly efficient sonar mapping? What happens when my friend, who has it turned on and in his pocket, comes into my house and accidentally maps all of *my* rooms? Where's my consent for keeping *my* space private?

What's the next step? Google endoscopes for slurping data about my body too (and we all know the orifice they'll use to gain access)?

Is this the Orwellian future we are sleep-walking into? Not one in which the state knows everything about us, but one in which private companies know everything about us, from our medical history, to our soft furnishings, to whom we communicate with and the brand of washing up liquid we use.

Furniture Fun

I want this on a drone

All this power on something as small and portable as a handset is incredible. Imagine being able to 3D-map a building inside and out and superimpose services and cable runs. Or repaint it in a different colour, or try different lighting plans. Fantastic!