
n01 writes "A recently published app for the iOS platform uses the propagation of sound waves to measure distances of up to 25 meters in a dual device mode. The technique works through repeatedly sending a chirp signal from the master device to which the other (reflector) device synchronizes itself and then replies in a similar fashion. A novel combination of techniques has been engineered to enhance the robustness in noisy environments, such as using an optimum-autocorrelation-signal and semi-automatic frequency calibration together with an averaging over multiple cycles."

That's not very impressive. Anyone who has two devices that are synchronized to a common timing source (which most cell phones are) can accomplish this. You just say "I started transmitting at x and you received it at y. (y - x) × speed of sound at sea level = your result." Now if it could be done with one device, and use the Doppler effect, etc., to map out the room and roughly what's inside it (like in Batman), then we'd be getting somewhere.

You can buy devices that do it standalone, with only one device. Home Depot sells them. They shine a laser for aiming and generate repeated clicks. They then listen for the reflection and give you a distance measurement using that. Works reasonably well. Have to use it in a somewhat quiet indoor space, and if you aim it at something that absorbs sound it doesn't work, but for all that it lets you get a quick and pretty accurate distance measurement for cheap.

I had one of the standalone ones; it was basically useless for anything other than measuring your distance to a perpendicular, flat, hard surface that wasn't too far away. If you use one in a long narrow room, the echoes interfere with the reflected signal; if you point it at a bookshelf or a framed picture without glass, the signal gets attenuated too much; and forget about trying to measure the distance between a couple of curved columns. I bought one of the laser ones to replace it and yeah, it was like 4

I seriously doubt they are using their clock times for this. Yes, they are synchronized to a common timing source, but to measure distance, you require sub-millisecond accuracy. Clock drift means the cell phone clock probably isn't that accurate, if it was even that accurate at the moment it was synchronized, which is also unlikely.

Yes, but for example the iPod touch (for which this software is supposed to work) is not a GPS-enabled device. But there is no need for clock synchronization anyway, the way they use their two devices. Since the second device replies after a certain delay, the first device just has to take into account that the time difference between signal and reply is twice the travel time plus the delay (and then correct for the offset introduced by the microphones and speakers not being at the same place... :) ).
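In code, the reply-delay scheme described above might look something like this. It is only a sketch of the idea: the delay, timestamps, and speed of sound are illustrative assumptions, not the app's actual values.

```python
# Sketch of two-device acoustic ranging with a fixed reply delay.
# All numbers here are assumed for illustration.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def distance_from_round_trip(t_reply_received, t_chirp_sent, reply_delay):
    """Master sends a chirp at t_chirp_sent; the reflector waits a fixed
    reply_delay after hearing it and chirps back; the master hears the
    reply at t_reply_received. All times are in seconds on the master's
    clock only, so no clock synchronization between devices is needed."""
    round_trip = t_reply_received - t_chirp_sent - reply_delay
    one_way = round_trip / 2.0
    return one_way * SPEED_OF_SOUND

# e.g. a 60 ms gap with a 50 ms fixed reply delay -> 5 ms one-way travel
print(distance_from_round_trip(0.060, 0.0, 0.050))  # 1.715 (meters)
```

Note the correction for speaker/mic placement the parent mentions would be an additional constant subtracted from the round trip.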

Fun story. While I was at MIT/Sea Grant working on robot submarines, we'd lay an array of underwater beacons for navigation. To conserve power, they'd listen for a certain sequence of sounds from the sub, then reply back with their unique ping. The sub could measure the time it took to receive each unique ping, and thus determine its position by using the ping times and knowledge of where the beacons were. Kind of an underwater GPS. The beacons could last a year or more when used like this, which was a big deal because it was really annoying to locate and retrieve one just to load it with a fresh battery.
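The ping-time positioning described above amounts to trilateration: convert each ping time to a range, then solve for the position that best fits the known beacon locations. A toy 2-D version (hypothetical beacon positions and a rough seawater sound speed, not the actual Sea Grant system):

```python
import numpy as np

SPEED_OF_SOUND_WATER = 1500.0  # m/s, rough value for seawater

def locate(beacons, ping_times):
    """Solve for the 2-D position whose distances to the known beacons
    best match the measured ranges. Linearize by subtracting the first
    beacon's range equation from the others, then least-squares solve."""
    ranges = np.asarray(ping_times) * SPEED_OF_SOUND_WATER
    b = np.asarray(beacons, dtype=float)
    # |p - b_i|^2 = r_i^2; subtracting equation 0 gives A p = c (linear)
    A = 2.0 * (b[1:] - b[0])
    c = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, c, rcond=None)
    return pos

beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = np.array([30.0, 40.0])
times = [np.linalg.norm(true_pos - np.array(bc)) / SPEED_OF_SOUND_WATER
         for bc in beacons]
print(locate(beacons, times))  # recovers ~[30. 40.]
```

A real system would also have to handle the beacons' fixed reply delays and measurement noise, but the geometry is the same.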

On one particular deployment, we left the beacons because we were planning to return a few months later. When we got back, the beacons weren't working. We retrieved them and all the batteries were dead. So we recharged the batteries and redeployed them. After our tests were over, we left the beacons again. When we returned a couple months later, they were all dead again.

Eventually we figured it out. The dolphins in the area had figured out the sound sequence used to make the beacons respond (probably just listened in on our sub). They thought it was pretty cool to get an acoustic response every time they used that code, so they'd been merrily chirping away during those months, draining our batteries.

That sounds pretty awesome. Do you know if anyone from the biology department at MIT went back there to study that behavior? Since dolphins already use echolocation to navigate, I just wonder if they were doing more than amusing themselves, and actually managed to adapt to use the beacon system for their navigation. I'm not a biologist, and don't know much about dolphins, so I don't know if that's feasible or not, but it would be pretty amazing.

Burden of proof. If this were real, we would have links by now. In the absence of evidence, we side with "don't believe the dubious claim." That's why I'm not asking you how you know there's not a monster under my bed: if the monster claim is BS, "I'd like to know" doesn't make it true.

Thanks. Here is just a request from one of the teeming masses - I'd have preferred if you had said "this story is probably BS." Saying it's BS makes it sound like you have some fact or evidence that falsifies the claim, like a snopes report or personal knowledge. I wouldn't compare his story to monsters under the bed, but I do agree it has some traits in common with an urban legend.

That sounds pretty awesome. Do you know if anyone from the biology department at MIT went back there to study that behavior? Since dolphins already use echolocation to navigate, I just wonder if they were doing more than amusing themselves, and actually managed to adapt to use the beacon system for their navigation. I'm not a biologist, and don't know much about dolphins, so I don't know if that's feasible or not, but it would be pretty amazing.

Unfortunately, when they went back the dolphins had all been executed by Navy SEALS for reasons of national security.

You just say "I started transmitting at x and you received it at y. (y - x) × speed of sound at sea level = your result."

And then "your result" has at minimum a wavelength or two of precision, which sucks mightily at audio frequencies. This is why they use a nonperiodic (in this case chirped) waveform and correlation instead of "I started transmitting". You could have read this [wikimedia.org], at least, before making an ass of yourself.

Not that it's so novel as they try to make it sound, either -- SONAR and RADAR guys did all that long ago, and you'd get the basics needed to implement it in your first semester of DSP in any EE program. In fact, if they're even doing "semiautomatic frequency calibration", they're obviously using linear chirps -- exponential chirps are relatively immune to Doppler or other frequency shifts, and since there's no analog design, are no harder to implement -- suggesting they haven't had (or slept through) any formal education in the field.
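The chirp-plus-correlation point above is easy to demonstrate: the cross-correlation peak of a wideband chirp localizes the delay to about one sample, far better than a wavelength of a plain tone. All the parameters below (sample rate, sweep range, delay, noise level) are made-up illustration values, not the app's signal design:

```python
import numpy as np

np.random.seed(0)

fs = 44100                          # sample rate, Hz
t = np.arange(0, 0.05, 1.0 / fs)    # 50 ms chirp
f0, f1 = 2000.0, 8000.0             # linear sweep, 2 -> 8 kHz
chirp = np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * t[-1])) * t)

# Simulated received signal: the chirp delayed by 300 samples, in noise.
delay = 300
rx = np.zeros(len(chirp) + 1000)
rx[delay:delay + len(chirp)] = chirp
rx += 0.3 * np.random.randn(len(rx))

# Peak of the cross-correlation recovers the delay to ~1 sample,
# i.e. roughly 343/44100 = 8 mm of one-way path per sample.
corr = np.correlate(rx, chirp, mode="valid")
est = int(np.argmax(corr))
print(est)  # ~300
```

Sub-sample refinement (e.g. interpolating around the peak) would take this below one sample, which is presumably part of how a 1 mm display resolution is reached.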

It just bugs me when people who know even less run down every decent, if not outstanding, project like this with their own mix of even lamer approaches ("just as good!") and pie-in-the-sky fantasy ("then I'll get excited").

Please watch the second video, it shows how the app can be used with just one iOS device and headphones.

I agree that by having the clocks exactly synchronized this could be a lot easier. (But even 1 ms of deviation means an uncertainty of around 34cm.) The challenge was to do it without having the devices synced by an external source (it works on iPod touch devices and iPad as well) and without using a communication channel other than sound.

Seems impressive enough to me, maybe a bit useless though. Thinking of alternatives, it might be possible to do it with image recognition as well. Judge the distance by the size of the target iPhone or whatever.

This mode works on the iPhone 3GS (and newer), iPod touch 3rd Generation (and newer) and on all iPads. You either need one of those devices and headphones with a mic, or headphones without a mic and an iOS device with a built-in mic (excluding the iPod touch 3rd Generation). The resolution in this mode is 1mm or 1/10 of an inch, depending on the unit system you have selected.

Nobody in their right mind would buy two iOS devices just to use this app. But somebody who's got two of them already might consider buying this app for under a dollar. (Just one purchase required if you have both devices on the same iTunes account.)

One suspects that the primary use case for this application is not, "Hey, we need to measure this, let's go get two iPhones!" It's "Hey we need to measure this and happen to have two iPhones, but no tape measure." Most people carry their phones around with them all the time, but unless they're contractors don't carry tape measures. The point of near ubiquitous mobile computers is that you can use them for lots of things. This is a cute and clever thing that you can now use them for.

You can get enough accuracy for buying paint and fence length by counting your steps.

You have to walk from one end of the measurement to the other whether you're counting steps or just putting the phones in place, but you don't have to walk back to the starting point to pick up a phone, so the entire process is easier and faster if you just count steps.

Fuck, it's even easier just to use a damn tape measure. You don't have to synchronize them or any of this bullshit. Not only is a tape measure easier, but it's a fuck of a lot cheaper, too. Now you don't have to drop at least $1000 on some Apple devices.

That would be a valid comment if anyone actually bought an iPhone for use exclusively as a tape measure.

"Please note that while the resolution of the measurements may be as low as 1mm, the precision usually is not. While I have taken great care to make the app as reliable as possible, there are simply too many factors affecting the measurement process and the precision. That is why I want to be clear about one thing: there is absolutely no warranty that the measurements taken with Acoustic Ruler Pro are correct"

Another good example of incorrect usage of the word precision. In this case, the method is actually quite precise, as in measurements are very repeatable. What the author meant is that the accuracy is not very good.
I tried out the app just now: at a range of 22 inches (width of my monitor), it underestimates the distance by 1 inch; and for something half an inch apart, it overestimates by over an inch. It is possible to measure the non-linearity using a control setup, but the result would be largely useless.

The autofocus in {smartphone} doesn't measure distance. It does it heuristically using sharpness of image. When you touch an off-centre subject on the screen of an iPhone you are telling it "make this bit of the image sharper than any other part" and it figures out the "best" way to do that. It has no clue whether that's 1 foot away or whether it's the moon.

How is this news? What I don't get: there have been acoustic tape measure apps on the App Store for a couple of years now (just search the App Store for "tape measure"), and none of them require more than one phone. I expect to see a Slashdot summary soon announcing the new development of the combustion engine.

You can also use the device in a single device mode (with headphones), as shown in the second video. I just thought that the dual device mode would be more interesting and therefore emphasized it in my submission.

This could be really useful for navigation inside a building. You could position transponders like this in, say, a mall or warehouse store and integrate it into an app. Need to find the nearest restroom? Need to find Bed Bath and Beyond? Where can I buy spray paint? It would have to use sounds outside of human hearing to not drive people nuts, but it could be very cool.

Weapons Officer: "Captain, I can't get a fix on the enemy's position."
Science Officer: "We could try using an optimum-autocorrelation-signal and semi-automatic frequency calibration together with an averaging over multiple cycles."
Captain: "Good idea."

Weapons Officer: "Captain, I can't get a fix on the enemy's position."
Science Officer: "We could try using an optimum-autocorrelation-signal and semi-automatic frequency calibration together with an averaging over multiple cycles."
Weapons Officer: "You mean, use the auto-shootey?"
Science Officer: "Use the auto-shootey."

If only we could have an app that sends clicks and chirps, processes the echoes, and creates a picture or 3D model.

I am not a sound technician, but such an app won't see the difference between open space and a sound-absorbing surface. Picasso might draw a better 3D model than an echo app, given that different materials have different sound-absorption characteristics.

The iPhone does have 3 speakers and 2 microphones. Most aren't particularly good ones, mind you, and individual addressability is a problem. It's really not practical. Add in GPS and a camera, and it's not quite as bad as you make it out to be.

Of course, using a camera properly would be cheating. But I'm a bit surprised nobody has done it on the iPhone.

As an iPhone owner, I'm curious how you claim it has 3 speakers and two microphones. Did you mean 2 speakers and 1 microphone? I see an earpiece speaker and a bottom speaker with a microphone on the other side of the dock connector.

It's got an earpiece speaker, a bottom speaker (maybe the OP thought it was stereo, meaning two bottom speakers), the regular speaking mic and a noise cancellation mic. So two speakers and two microphones.

What I want to know is why do I need a 3rd party app to turn on the flash emitter? This is Doom 3 levels of stupidity regarding the utility of a light source.

You know, that's a good question. The answer, however, has to be a bit threefold.

People have been using cellphones as makeshift flashlights pretty much since the first cellphones with a reasonably bright white screen came onto the market... and why not? The screen was bright enough to navigate indoors, bright enough for the whole "finding the lock of y

Wouldn't it be smarter to, for the same $0.99, buy a combo laser pointer/led light keychain thingee, complete with batteries? You can also use it to tease the dogs and cats and enrage skunks (yes, skunks get REALLY mad when you try to tease them with a laser pointer, and will charge if you're not careful... been there done that, left the vicinity asap while there was still a fence between us).

Is it a better light source and comes with additional perks (such as the laser pointer, maybe a UV-B emitter, too)? Yes.

On the other hand, it's an additional thing to carry. In combination with your keys, that might not seem so bad, but the keys are prone to scratch it up. There are models, though, that have the emitters recessed within the body so that, even if the casing ends up aesthetically shredded, at least your beam would be fine.

Please re-read what I wrote. You don't need a battery charger - $0.99 includes the batteries - 3 coin cells.

The emitters are recessed 1mm.

Nobody's going to worry about keys scratching the aluminium surface of a 99-cent pointer.

Now please consider a real-life scenario - you have the light on your key-chain, so you don't have to fumble around looking for both your keys AND your light. And if you drop it in the dark, you won't break anything.

Please re-read what I wrote. You don't need a battery charger - $0.99 includes the batteries - 3 coin cells.

Which tend not to last very long. Yes, you can just buy new ones (or at $0.99 just buy a new keychain light). Not very environmentally awesome, but I realize some people find that rather shrugworthy anyway.

Anyway, this is getting further and further from the discussion of why one needs flashlight apps on phones and the use of a phone as a flashlight in the first place.

Is there an Android app (or, preferably, library) that can use the sonar to sense the size and rough shape of a whole room, making a 3D model? Maybe by correlating distance pings with the accelerometer (and GPS for added position context) while waving the device around.

That being said, there is nothing that says this won't work - as it worked extremely well 20 years ago on dedicated systems with far less processing power. (Those systems, however, used multiple arrayed transducers and tailored beam patterns to significantly reduce the effective noise floor.)

Keep in mind that all that 'specialised equipment' evolved out of a need to improve the simpler predecessor systems.

Sonar and sonic range finding systems use all that 'extra equipment' to achieve ranges far in excess of 25m and in media much more variable than air. The impulse responses of miniature consumer-grade condenser microphones and speakers are more than adequate for air use within an octave of the audible spectrum. The speakers in the iPhone are primarily limited by their output power, and the fairly omnidirectional nature of the microphones may lack overall sensitivity, but both are simple parameters that really only end up reducing total available range and accuracy (as compared to specialised custom hardware using the same algorithmic solutions).

Applying the same design principles that would normally go into a specialised system to an iPhone implementation would be very unlikely to produce anything unknown to someone in the industry. This is very similar to early-stage engineering "proofs of concept" that are used to test various parameters within a system design, without the interactive complexity of implementing the entire system.

There is nothing within this extremely simple setup that hasn't been done as part of a larger system design. A single (consumer grade) speaker + microphone used in transmissive, active echo, or for passive echolocation is not unusual. Considering the iPhone has excessive processing capability to implement all the standard approaches (correlation, convolution, deconvolution, filtering, impulse response measurement, etc), there is no real need to be 'clever' as such.

'Back in the day', when trying to do this with a 10MIPS DSP in real time with moving objects, it was much more important to come up with better algorithms and shortcuts. Of course, this could otherwise have easily been done with standard theoretical methods and a modern processor a hundred times more powerful.

I see patents pop up all the time that describe things that are far from novel. Most of those patents are usually 'invented' by people with no real experience in the given fields. ie. Ideas that seem like earth shattering discoveries to the uninitiated, but are really just standard techniques used by properly skilled engineers.

I'm not saying that this iPhone app is bad/good, just that it is VERY unlikely to contain any actual improvements to the current state of the art (or the state of the art 20 years ago for that matter). I say this, because there is no real need to do anything new to achieve the results that they are claiming.

BTW, in the past I've worked on sonar/radar systems for air, ocean and rock. The biggest problem in 'noisy' environments is a lack of output level. Multipath isn't a major problem for a point to point (ie. line of sight, shortest path) ranging device - unless you're talking about wave guide shapes/sizes over long distances.

In my experience, the response of acoustic speakers is not that great for anything other than speech and music. If you have a smartphone, try generating a white noise signal and watch the spectrum of the audio input (there are audio spectrum analyzer apps around); it won't be anywhere near flat.

On the other hand, my acoustic signal processing experience mainly has been with signals that were somewhat spread out (data communications), which is where the non-linear frequency response of audio I/O messed things up. The chirp signals these guys use probably occupy a much more linear part of the spectrum.

Unless the transducer has a significantly non-linear or narrow band response, it is extremely easy to compensate for this with only a minor signal to noise / distortion degradation. An impulse response and deconvolution is a good starting point when trying to subtract out the effects of imperfect devices (obviously within reason - you can't reverse the effects of a null or total signal cancellation, but a +/-10dB variation across the required spectrum wouldn't be too much to ask).
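The impulse-response-plus-deconvolution idea mentioned above can be sketched as a regularized inverse filter in the frequency domain. The impulse response below is a made-up toy, not a measured iPhone response; the point is only that a modest response ripple can be divided back out without blowing up the noise:

```python
import numpy as np

np.random.seed(1)

h = np.array([1.0, 0.5, 0.2, -0.1])  # toy impulse response of speaker+mic
x = np.random.randn(256)             # signal we want to recover
y = np.convolve(x, h)                # what the imperfect chain delivers

n = len(y)
H = np.fft.rfft(h, n)
Y = np.fft.rfft(y)
eps = 1e-3  # regularization: avoid dividing by near-zero response bins
X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
x_hat = np.fft.irfft(X_hat, n)[: len(x)]

print(np.max(np.abs(x_hat - x)))  # small residual
```

As the parent says, this only works within reason: where |H| has a true null, no amount of regularization recovers that band, which is exactly the iPad-above-20-kHz problem discussed elsewhere in the thread.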

Here's the part that I've found really problematic on iOS devices: audio processing latency. Most signal processing of the type you describe typically works on real-time systems, which makes timing straightforward. But this is not so on iOS devices, even with their low-latency API... Worse than the latency, there is a fair bit of jitter. Sure, you can do real-time stuff at the driver level, but an app in the App Store does not get that level of control. I have some idea about how to account for it, but I'm not sure how these guys have done it (I'd guess the "multi-cycle" approach has something to do with it).
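If the jitter is roughly zero-mean, the "averaging over multiple cycles" from the summary would beat it down by sqrt(N). A back-of-envelope illustration with made-up numbers (a constant latency offset would still need separate calibration):

```python
import numpy as np

np.random.seed(2)

true_delay_ms = 60.0   # hypothetical true round-trip time
sigma_ms = 2.0         # hypothetical per-cycle timing jitter (std dev)
n_cycles = 100

# One noisy delay estimate per chirp cycle; averaging shrinks the
# error toward sigma / sqrt(N) = 0.2 ms here.
measurements = true_delay_ms + sigma_ms * np.random.randn(n_cycles)
avg_error = abs(measurements.mean() - true_delay_ms)
print(avg_error)  # well under the 2 ms per-cycle jitter
```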

Is there an Android app (or, preferably, a library) that can use the sonar to sense the size and rough shape of a whole room, making a 3D model? Maybe by correlating distance pings with the accelerometer (and GPS for added position context) while waving the device around.

About 20 years ago, I had a hand-held device roughly the size of a smart phone but twice as thick that did distance measuring all by itself. It was infrared and as I recall, it was something like $25.00 from Rat Shack or Home Depot or some place like that. A 30 foot tape measure is about $8.00 and works a lot better.

Polaroid was selling ultrasonic sensors and an experimenters kit that could be used for this purpose thirty years ago. While there are definitely applications where this type of technology is useful, I agree that a tape measure works great for most purposes.

Is there any benefit to moving to ultrasonic frequencies? Other than making it inaudible (so you don't bother people but maybe dogs!), would this improve the resolution? Does the range decrease? Do consumer level devices cover such a broad spectrum?

By the way, has anyone made an iOS or Android App that can record in the ultrasound (or infrasonic) ranges and change it so that we can listen in audible ranges? Might be neat to see/hear what the bats are doing!

Also, how DO bats build up a good 3D map of their surroundings using just one "speaker" and two "microphones"? Do they send out beams or are their ears swiveling? And, with the limited amount of computing power on a smartphone, would it be able to duplicate it? A bat's brain doesn't seem particularly large and they are doing this FAST (on the fly, ha ha).

Is there any benefit to moving to ultrasonic frequencies? Other than making it inaudible (so you don't bother people but maybe dogs!), would this improve the resolution? Does the range decrease? Do consumer level devices cover such a broad spectrum?

As I mentioned in another comment, I've been experimenting with a similar application on iOS devices. Yes, consumer devices do cover ultrasonic frequencies, but barely. For average humans, ultrasound begins above 18-19 kHz, and devices with a 48 kHz sampling rate can produce frequencies up to 24 kHz... in theory. The problem is that the commodity speakers/microphones in smartphones are optimized for the human perceptual range, and since ultrasound is beyond that, the transducer dynamic range and/or the built-in signal processing conspire to significantly attenuate and distort ultrasonic signals. Using an iPad, in preliminary experiments, I could only get a range of ~5m using ultrasound, whereas these guys say they can go up to 25m.

Moving to ultrasound can also affect resolution negatively. Since you're effectively using a much smaller-bandwidth signal, your positioning accuracy drops, on top of which multipath problems get much worse. (Smaller bandwidth because by limiting the signal to ultrasound, you only get a band between ~18 kHz and 24 kHz at a 48 kHz sampling frequency, and the iPad microphone strongly attenuates signals above 20 kHz.)
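The bandwidth-versus-resolution trade-off can be made concrete with the usual pulse-compression rule of thumb, resolution ~ c / (2 * bandwidth). The band edges below are illustrative assumptions:

```python
# Back-of-envelope range resolution for correlation-based ranging.
c = 343.0  # speed of sound in air, m/s

def range_resolution(bandwidth_hz):
    """Classic pulse-compression resolution estimate: c / (2 * B)."""
    return c / (2.0 * bandwidth_hz)

audible = range_resolution(8000.0 - 2000.0)       # e.g. a 2-8 kHz chirp
ultrasonic = range_resolution(20000.0 - 18000.0)  # 18-20 kHz usable band

print(audible)     # ~0.029 m
print(ultrasonic)  # ~0.086 m -> worse, despite the higher frequencies
```

So counterintuitively, the inaudible band gives coarser raw resolution, which is consistent with the parent's point; sub-sample peak interpolation and averaging can claw some of that back.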

Also, how DO bats build up a good 3D map of their surroundings using just one "speaker" and two "microphones"? Do they send out beams or are their ears swiveling? And, with the limited amount of computing power on a smartphone, would it be able to duplicate it? A bat's brain doesn't seem particularly large and they are doing this FAST (on the fly, ha ha).

Bat ears are highly specialized. This link gives a brief overview of how bats do echolocation: http://science.howstuffworks.com/environmental/life/zoology/mammals/bat2.htm

I believe smartphones have, or will soon have, enough processing power to do the necessary signal processing if we can design the right algorithms. The problem is it would also need highly specialized audio transducers to get any useful signals, which may not necessarily fit into a smartphone form factor.
