Posted
by
Roblimo
on Wednesday March 27, 2013 @01:12PM
from the touching-nothing-but-a-picture-magically-appears-on-the-screen dept.

What the Leap Motion product (they only have one right now) does is allow you to control your computer with gestures. We're not talking about just jumping around, but "painting" on the screen with your fingers (or even chopsticks) with fine enough control that Autodesk and other drawing-oriented software vendors are working to make applications compatible with the Leap Motion Controller. And game developers? You bet! Lots of them -- and this is for a device that's not even supposed to start shipping until May 13. But, says CEO Michael Buckwald, they already have "hundreds of thousands of pre-orders," so it looks like they are developing a large market for developers (over 12,000 are in the Leap Motion developer program -- out of 50,000 who applied), so it's possible that Leap Motion could become a pretty big deal. (You can see the Leap Motion Controller in action at the end of the video.)

Tim: Let’s talk about Leap Motion for a little bit. Leap Motion is a company that came out of what sort of background?

Michael: So the company is based around technology developed by my cofounder and our CTO, David Holz, over four or five years while he was getting his math PhD. It is essentially born out of a deep frustration with the fact that even though computers today are radically different in every way from computers of 30 years ago, the ways that we interact with them haven’t really changed.

Tim: Now on that front there has been a lot of motion control at the consumer level, just in the last decade. What distinguishes what Leap Motion is introducing from things that already exist, like the Wii and the Kinect controller?

Michael: Yeah. So we are different on two levels. One is the philosophical level. With most other players in the space, people tend to think about gestures when they hear about this sort of technology, and what they mean by gestures are these sort of binary, sign language-esque inputs, where a system looks for x and then y happens. And that doesn't actually solve the problem. It doesn't actually give the user more bandwidth; it is no different than pushing a key on the keyboard. What we are trying to do is bring the same sort of complexity and intuitiveness that people have when they interact with the real world by reaching out and grabbing things, pushing -- making it as direct and interactive and dynamic as possible.

So basically we are trying to take advantage of this intuition that people have developed over thousands of years that lets us perform this incredibly complex action that is reaching out and grabbing something in a totally thoughtless way, and the way that we do that is with much more accuracy and much lower latency than other devices. So we are one of the only devices that can track multiple fingers; we can track up to ten fingers, and we do that at a sub-millimeter level. So hundreds of times the accuracy of something like a Kinect; and more importantly, our latency is extremely low. We want it to be virtually undetectable to the user; we want them to feel like their hand is physically inside the device. And that is as important for us as the accuracy.

Tim: Now compared to some of the other motion controllers out there, it seems like the tradeoff that you’ve chosen is high accuracy versus large area.

Michael: It is less that the core innovation and the core algorithms can't work over a large area, and more that we think that the types of applications where people most need this sort of accuracy and this sort of multi-finger tracking happen on computers right now. So we don't think that people want to control their desktop from six feet away, but we do think that when they are sitting in front of their desktop they would like to be able to mold a piece of clay, grab an object and move it, interact with it in a natural way; so the device we built today is intended to be the smallest, most accessible, most powerful device that can sit on someone's desk and perform that function. But if in the future we wanted to build a different device and we didn't have to worry about USB power, it could cover a much larger area.

Tim: You know, the other thing is that when you look at something like the Kinect controller, it is obvious just from the name and what it is tied to that it is really part of Microsoft's gaming lineup. What kind of connections do you have? It seems like there is a much more open possibility for connecting your device to various types of computers and systems.

Michael: Yeah. So one of the most important things about Leap is our developer ecosystem. There are over 50,000 developers from 150 countries that have applied to be part of the Leap developer ecosystem. And today we've sent units to 12,000 of them, and they are working on applications as diverse as ZeptoLab with Cut the Rope and Autodesk with Maya, and in between there is an incredible diversity of applications: everything from really intuitive ways for people to create audio, create video, virtual pottery wheels, ways for people to scroll up and down, and more mundane things like ways for people to move through presentations.

So we want this to be a platform. And we also announced Airspace, which is our apps store. And basically the goal is that the user should be able, within 30 seconds of connecting their Leap, to have a place to go and know that everything there works great with Leap. So we are not requiring developers to use Airspace but to make discovery for the end user as easy as possible, it is going to serve as a curated, centralized place for great Leap apps.

Tim: The bigger names, the ones that already have so many users in the world, they obviously have a lot of connections to gaming studios; can you talk about what sort of games are actually going to be available for the Leap Motion?

Michael: There will be a pretty diverse range. So there will be lots of casual games and a lot of the mobile gaming studios are very excited about Leap because obviously mobile gaming isn’t that compelling or fun as an experience with a mouse or keyboard. But it can be a very fun experience with Leap. But also more hardcore games. So both through mods and original games.

So we have developers working on mods or plugins for virtually every popular game that you could imagine. And in Airspace there might be hundreds of mods or plugins for a particular game, and the market or user preference will decide which are the best control schemes. But then we also have developers working on things like original first person shooters where each hand controls a gun that is fully independent, and then when you get close to something, maybe it switches to melee mode where you are actually punching and fighting. Basically we want to make sure that everything that developers build and everything that users use is a fundamentally better way of doing that thing. So we are very cautious about doing things that seem cool but aren't fundamentally better.

Tim: It seems like one thing that makes the Leap Motion controller quite different from the others is the nongaming uses -- they are not an afterthought; they are actually, it seems, what you talk about in a lot of your material, as opposed to just shooting up your enemy.

Michael: I think nongaming uses will be the majority. Right now it is about 50:50. Obviously it takes a lot less time for a developer to build, especially for mobile games, than something like a CAD program, so I expect that it will probably be at least 70 percent nongaming at launch.

Tim: For the idea of mobile games being used this way, does that mean there will be a smaller controller in the future, or could it be software, or embedded technology from Leap, that uses your phone's hardware?

Michael: For May, when we ship the Leap peripheral, what it means is basically just that many of the top mobile studios are taking their content for iOS or Android and porting it to Windows or OS X, because they realize that Leap can turn it from something that doesn't make sense on those platforms into an experience that is better than it is on a tablet or phone. So the Leap peripheral in May won't work with mobile devices; it is very focused on PCs. But we definitely want to embed this technology anywhere there is a computer. So in the near future, we would like to embed this not just in laptops and desktops but also in everything from tablets, smart phones, cars, surgical robots, fighter jets -- the sky's really the limit, I think.

Tim: Now, those 12,000 developers to whom you've actually sent hardware, and all the others who have access now and are working on things -- what sort of access do they have? Is that an SDK, and is that going to be available to a wider pool at some point?

Michael: Yeah. In May, once anyone can get their hands on a Leap device, we will totally open the developer program; anyone will be able to go to the developer portal, and anyone will be able to download the SDK. Right now, the reason that we have selected those 12,000 is because this is both a beta test as well as an opportunity for them to develop early things, so we have sent them physical units; but we want this platform to be as open as possible.

Tim: How many applicants did you have that you had to select from?

Michael: Just over 50,000.

Tim: That’s quite a winnowing process.

Michael: Yeah, it is a testament to people's -- particularly developers' -- passion for what we are trying to do. We are very committed to using this technology to make the world a better place. We really believe that this is holding back computing in a fundamental way, and we want to make computing as accessible as possible. I think that is one of the reasons we've gotten such a great response from developers: developers, at their core, became developers because they like creating things, they like building things, and I think they are as passionate as we are about making it easier for other people to create things.

Tim: Finally, it is just a few weeks or I guess a month and change before you start sending units. So where are they now? What is the process? Are things in boxes on ships?

Michael: Yes, we've received hundreds of thousands of preorders, and we are building units to satisfy that demand as well as to get units to our retail partners like Best Buy and our bundling partners like ASUS. We have units being mass produced, some of which are in the US already being put in boxes; we are taking preorders globally, so we have three global distribution centers that are currently preparing the devices. Our goal is to ship simultaneously to all of the many hundreds of thousands of preorder customers on the same day; it is a big logistical undertaking, but it will be worth it.

Tim: Any last words that people should know about?

Michael: Well, South by Southwest has been a great show for us. It has been really great to see people that have preordered the device come for the first time and get to use it, and see them be happy and have it meet their expectations. It has also been great to talk to people who have no idea who we are or what the device does, and get their feedback and thoughts as well.

How do you have something like the Kinect and not have patents all over something related that basically would prevent this, or at least cause it to have to license numerous patents? Missed opportunity indeed.

Also, am I the only one that thought this was from the kids' toy mfg Leapfrog?

"How do you have something like the Kinect and not have patents all over something related that basically would prevent this, or at least cause it to have to license numerous patents? Missed opportunity indeed."

I am in the Leap developer program and I have one. But I am not an expert on the Kinect. From what I understand, the Kinect uses cameras and visible light to do passive motion detection. The Leap works very differently. It uses active infrared signals and a pair of infrared detectors to do its magic. Unlike the Kinect, its active area is limited to just above the desktop. But also unlike Kinect, they claim precision down to a few microns. I haven't tried to measure the accuracy of mine, but it's pretty darned accurate.

Also, using the SDK, you can (A) detect all 10 fingers, (B) the position of each finger, (C) the direction each finger is pointing, (D) the position and orientation of the palm, and (E) the relative curvature of the palm (e.g., the diameter of an imaginary ball in your hand).
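To make that concrete, here is a minimal, hypothetical Python sketch of the kind of per-frame data listed above, plus a toy "grab" check built on the curvature sphere. The field names loosely mirror the Leap SDK's, but the types, the millimeter units, and the 50 mm threshold are my own assumptions for illustration, not the actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]  # (x, y, z), assumed to be millimeters

@dataclass
class Finger:
    tip_position: Vec3   # (A)/(B): per-finger tip position
    direction: Vec3      # (C): unit vector the finger points along

@dataclass
class Hand:
    fingers: List[Finger]    # up to ten fingers across both hands
    palm_position: Vec3      # (D): position of the palm
    palm_normal: Vec3        # (D): orientation of the palm
    sphere_radius: float     # (E): radius of the imaginary ball in the hand

def is_grabbing(hand: Hand, threshold_mm: float = 50.0) -> bool:
    """Treat a small curvature sphere as a closed (grabbing) hand."""
    return hand.sphere_radius < threshold_mm

open_hand = Hand(fingers=[], palm_position=(0, 150, 0),
                 palm_normal=(0, -1, 0), sphere_radius=120.0)
fist = Hand(fingers=[], palm_position=(0, 150, 0),
            palm_normal=(0, -1, 0), sphere_radius=35.0)
print(is_grabbing(open_hand), is_grabbing(fist))  # False True
```

An application polling frames would run a check like this every frame and, say, start dragging an on-screen object when the hand closes around the imaginary ball.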

It's pretty impressive. The question is how well it will be integrated into software. Like any "alternative" controller, implementation in an individual application might be sad or might be great. There is no way to tell in advance, and I am sure we will see some of each.

The Kinect is attacking the problem from a slightly different direction, which does mean less accuracy in the beginning, but it'll pay off in the end. What the story fails to mention (this is a 6-month-old story) is that the Leap only works in a 2-foot cube above the device, and the tech used means it's quite difficult to expand that bubble (although you can link multiple Leaps together, it still needs to be beneath you and it only works 2 feet high). What the Kinect did was develop a system to compute the whole room.

Kinect was a huge success sales-wise, and the next version "looks" like it could be ready to continue that. Yes, the games were crap for it on the 360, but most of that should improve on the next Xbox. Really, I think the Leap is very much like Kinect V1: way too many limitations in the product for it to be more than a gimmick on this first iteration. Whereas Kinect was limited in accuracy, Leap is extremely limited in range, making these really two different focus markets.

I'm going with the DUO for a couple of reasons. LEAP has been at this for how long now, and they still don't really have a product widely available; the DUO guys seem to be ready to go *today*. And while LEAP doesn't have any plans to produce an open source driver so that I can use this device and do my own processing of the data, the DUO guys say their driver will be open source. Indeed, their hardware will be licensed under Creative Commons. I voted with my wallet - DUO [kickstarter.com]

"I'm going with the DUO for a couple of reasons. LEAP has been at this for how long now, and they still don't really have a product widely available; the DUO guys seem to be ready to go *today*. And while LEAP doesn't have any plans to produce an open source driver so that I can use this device and do my own processing of the data, the DUO guys say their driver will be open source. Indeed, their hardware will be licensed under Creative Commons."

The LEAP and the DUO are two completely different kinds of devices. You are comparing apples and oranges.

I've checked it out some on their website. What you say is far from true.

The DUO guys aren't "ready to go *today*" at all! They're still in early development! They are WAY behind the LEAP.

This is what I learned from their website:

(A) They haven't decided (or announced) the degree to which it will be open source. They said so in a forum on their own website. So that part is still very much up in the air.

(B) DUO just finished building their development prototype. LEAP has had actual development units out for months. They have gone through 4 physical revisions, and many firmware revisions. Many developers have been actively developing for the LEAP for that same period. I happen to know because (disclaimer) I happen to be one of them. Nevertheless, I am not terribly biased toward the LEAP. I am simply reporting the facts as I see them.

(C) The LEAP looks pretty nice and it is small. The DUO looks about 3 or 4 times as big, has cameras awkwardly sticking out of it, and looks cheap. Yet for some reason getting on the bandwagon even during the development stage costs almost 2 times the retail price of a LEAP? Say what?

(D) While they make claims of high accuracy and low latency, I don't see any numbers anywhere. I'll believe that when I see it.

All in all, at this stage of the game, the LEAP has a hell of a lot going for it that the DUO lacks. Maybe it will be better in the long run. Maybe it will be open source. But neither of those things is anywhere near certain yet.

All they need is the funding to get the first devices built - devices they plan on shipping as soon as they can assemble them. Look at the Kickstarter. Something tells me that the DUO will be more widely available before the LEAP, despite the "headstart" that LEAP has - they've been at this for how long now, and all they have to show is a pretty webpage and a press onslaught in response to a Kickstarter project.

You can argue that the price of the DUO is high, but the DUO guys aren't funded by venture capital.

"Something tells me that the DUO will be more widely available before the LEAP, despite the "headstart" that LEAP has - they've been at this for how long now and all they have to show is a pretty webpage and a press onslaught in response to a Kickstarter project."

But this was my point. The DUO does not even have devices in developer hands yet. LEAP has had them out for I think close to a year now. And they are not just "prototypes". They are beta versions of the commercial product.

Sorry, but it's just wrong. LEAP is WAY ahead of the DUO. The DUO folks still have to go through all the things that LEAP has already done, before the DUO will be a production product. I see absolutely nothing indicating otherwise.

While I'm all for new and exciting technology, I'm not sure I like having cameras around that can be hacked, and visual interfaces that may record motions I make that I do not intend to go into a computer.

The simplest example would be idly picking my nose, and then coming back later to find those exciting strokes recorded. For those of you who are pornography enthusiasts, a similar problem exists.

Although keyboards are arguably pretty bad, they don't interpret my actions for me. I have to deliberately seek out the keyboard and type on it. It can't watch me or misinterpret me.

Now my only enemy is my own tendency toward tyops and speeling errors.

OH NO, the guys at Leap Motion can now figure out what hand size the owner has. Don't worry that Google reads all your emails and tracks web page visits, your phone with GPS and camera is taken with you everywhere, and Facebook knows more about you than your mother.

Fair enough, I can't really deny that. But the default behavior is not to record, so it still requires external intervention to spy, which is exactly what is requisite for current technology. The only discernible difference is a little green LED.

Yeah, it would be terrible if videos of you picking your nose went up on the internet. As opposed to your bank account passwords, credit card details, actual pornography habits, and home address and real name - because if someone can hack the camera it's far easier to just log keystrokes and take a screen grab of your desktop every minute.

Right. But it's far quicker and easier to address the camera issue with a small piece of tape that will solve the problem (or lack thereof, granted) permanently in less than a minute than it is to satisfactorily address the other issues you mention.

hey now, if everyone were able to identify the actual risks involved with their everyday processes, I'd be out of a consulting job. Don't go trying to encourage people to make sane risk assessments! Not for a couple more years yet, at least - my goal of retiring on a beach in the Caribbean is too close!

Yes, that's actually somewhat consistent with paranoid personality disorder. But there's more than one symptom to that, and it would be insanity to diagnose a stranger on the internet from 2 sentences.

"While I'm all for new and exciting technology, I'm not sure I like having cameras around that can be hacked, and visual interfaces that may record motions I make that I do not intend to go into a computer."

The LEAP does not take pictures. It does not even contain a camera.

While it works kind of like a Kinect (in that they both use light), the similarity pretty much stops there.

"and visual interfaces that may record motions I make that I do not intend to go into a computer."

I then go on to talk about visual gestures and how I don't want those recorded.

You seem to have mis-read the original message. I would apologize, but after having re-read it, I don't think it's unclear. I often read things hastily as well, and that can lead to this kind of misunderstanding.

Would be quite nice for a tablet, phone, or Android stick attached to a giant TV.

>>The Leap Motion Controller will change the way you work without changing what already works for you. So it doesn’t replace your keyboard, mouse, stylus, or trackpad. It works with them, and without special adapters. Just plug it into the USB on your Mac or PC, and you’re off.

Even if they do have Linux support, will it be open, or will it still be more proprietary stuff? And haven't we heard this tune before anyway? An upstart (in this case, the DUO [duo3d.com]) appears, and all of a sudden these guys that have been sitting pretty with their thumb out of view somewhere make a lot of noise about "hundreds of thousands of preorders"... yeah, well... preorders are fine and dandy, but when do we see the product? Those other guys seem to have everything ready to go and I'm going to vote with my wallet.

From what I have understood from reading Leap employees' posts in the Linux subforum, a large amount of input processing is done in drivers on the host, which will therefore have to stay proprietary.

Almost right, up until "...processing is done in drivers", which is imprecise. It's done in software. There is no reason the drivers couldn't be open. And then it's also wrong at "have to stay proprietary". It should read: "...is done in software on the host, which they are likely to keep proprietary."

It may seem like a moot point, but there's a significant difference there, especially since many of the algorithms can already be commonly found in OpenCV and similar areas. They *could* just make good hardware
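As a toy example of the kind of building block that lives in OpenCV (where the production version would use cv2.threshold plus cv2.moments), here is a pure-Python sketch that finds the centroid of a bright spot -- say, an IR reflection off a fingertip -- in a grayscale frame. This illustrates the general technique only; it is not Leap's actual pipeline.

```python
def bright_centroid(frame, threshold=200):
    """Return the (row, col) centroid of pixels above `threshold`, or None.

    `frame` is a 2D list of grayscale intensities in 0..255.
    """
    count = row_sum = col_sum = 0
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value > threshold:
                count += 1
                row_sum += r
                col_sum += c
    if count == 0:
        return None  # no blob detected in this frame
    return (row_sum / count, col_sum / count)

# Two bright pixels at (3, 4) and (3, 5) give a centroid between them.
frame = [[0] * 8 for _ in range(8)]
frame[3][4] = 255
frame[3][5] = 255
print(bright_centroid(frame))  # (3.0, 4.5)
```

With two such centroids from a stereo pair of detectors, triangulation (again, routines available in OpenCV) recovers the 3D position -- which is why "they could just make good hardware" and leave this layer open.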

Right... so why not support the DUO instead, which is a solution that promises to be open, so you can take the hardware and the driver, modify them, and then play with OpenCV?
And by the way, since you mentioned OpenCV: from the looks of it, the guys behind the DUO seem to have a long history of contributing to that project too - have the guys behind LEAP done the same?

While I know quite a bit about the LEAP, information seems pretty scarce about the DUO. Also, I would not be too sure about the Open Source bit. This is from the OS X forum on their website:

"Thanks for your API feedback. The vision pipeline is not Objective-C, and in answer to your question about source library or precompiled/headers, the degree to which the software will be open source is unannounced yet. However, your thoughts about the SDK are extremely valuable and the team is listening, keep the comments coming!"

So... they haven't even decided how much, if any, of it will be open source yet. Further, they are nowhere near as far along in the development cycle as the LEAP is. Yes, the fact that LEAP was in development was announced quite a while ago. But they did not announce a projected shipping date until recently. What's wrong with

FWIW, I haven't supported either yet (neither financially nor by investing time in them, other than talking about them). That said, the little I can glean about the DUO makes it sound much less polished, and I haven't seen any guarantee of the drivers being open source (though it appears the hardware is, or will be, open hardware, and they certainly seem more involved in open source projects).

One thing that seems quite different about the two, and please correct me if I'm wrong, is the use of strobing IR's on

"Just to clear up a tiny bit of mystery. The device does not output any form of depth-map or point cloud over USB. There is no processor on the device." and "The developer units do not have firmware with plug-and-pl

"Here's two comments from leapmotion forums from the Co-Founder & CTO:"

But those two comments do not say what you appear to think they're saying.

"Just to clear up a tiny bit of mystery. The device does not output any form of depth-map or point cloud over USB. There is no processor on the device."

It does not have to have a processor onboard in order to do a lot of the heavy lifting. There are all kinds of ways to process data without a "processor" per se. It could be a custom IC, or a PLA, or any number of other kinds of electronics going on in there.

But more to the point: the comment was about whether it outputs a point cloud. So what? It does output a hell of a lot of data, it just isn't in the form of a point cloud. (Whi

"I just wanted to drop in and give you a little update. Earlier [yesterday], we released our first early build of the Linux SDK to developers on the developer portal. We will be working closely with devs to help make the Linux SDK as robust as possible, and we're looking forward to the feedback"

Exactly my thoughts. I use my 46" TV as my primary monitor for most gaming and run Windows 8 (Windows 8 is not bad; there are just a lot of complainers out there, or maybe I'm just not old enough to be that negative towards new things yet). I pre-ordered this just for the reason you pointed out - can't wait for my 46" living room touchscreen that I don't actually have to "touch."

It is not exactly like a touch screen. There is no way to detect touch -- you can only wave your hands in front of the screen. Also, because there is no eye-hand coordination in the system, there will have to be proxy objects (like mouse pointers) on the screen for your finger tips.
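A proxy pointer like that boils down to mapping a 3D fingertip position in the sensor's interaction volume onto 2D screen pixels. Here is a minimal sketch; the x/y calibration ranges and the clamping behavior are assumptions for illustration, not Leap's published numbers.

```python
def finger_to_cursor(tip, screen_w=1920, screen_h=1080,
                     x_range=(-200.0, 200.0), y_range=(50.0, 450.0)):
    """Map a fingertip (x, y, z) in sensor millimeters to pixel coordinates.

    x runs left/right across the device; y is height above it. The
    ranges are made-up calibration bounds, not real device specs.
    """
    x, y, _z = tip
    # Normalize into [0, 1], clamping at the edges of the volume.
    nx = min(max((x - x_range[0]) / (x_range[1] - x_range[0]), 0.0), 1.0)
    ny = min(max((y - y_range[0]) / (y_range[1] - y_range[0]), 0.0), 1.0)
    # Screen y grows downward while sensor y grows upward, so flip it.
    return (round(nx * (screen_w - 1)), round((1.0 - ny) * (screen_h - 1)))

# A fingertip centered over the device, mid-height, lands mid-screen.
print(finger_to_cursor((0.0, 250.0, 0.0)))  # (960, 540)
```

Because the hand floats in midair with no surface feedback, the on-screen proxy is what closes the feedback loop for the user.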

I was highly unimpressed with the demo at SXSW, and I've heard mixed reviews from people that got into the dev program. It seems to be sunlight-sensitive, and it likes to randomly drop elements that it is tracking. Maybe it's just a driver issue, and I'm sure it could improve, but at the moment I'm not impressed.

I tried it at SXSW and it seemed fickle. To me it seems to be one of those devices that work when the planets align just right. And since their architecture is proprietary, I won't be able to modify things myself to try and improve the detection (or use it in other, cool ways). That's why personally, the DUO is a lot more appealing and I contributed to the DUO kickstarter [kickstarter.com].

I'm pulling my money out of foolish things like municipal bonds and buying stock in companies which make rotator cuff treatments and therapies because that is apparently going to be a huge growth area soon.

"I'm pulling my money out of foolish things like municipal bonds and buying stock in companies which make rotator cuff treatments and therapies because that is apparently going to be a huge growth area soon."

Yeah, because doing things with your arms and hands that require more motion than moving a mouse around on a desk just isn't natural... or what?

"I'm pulling my money out of foolish things like municipal bonds and buying stock in companies which make rotator cuff treatments and therapies because that is apparently going to be a huge growth area soon."

"Yeah, because doing things with your arms and hands that require more motion than moving a mouse around on a desk just isn't natural... or what?"

It's not about the motion, it's about not resting any part of your arm for extended periods of time. I mouse with my palm and/or elbow resting on a surface.

Am I the only one that couldn't handle listening to the audio in that clip? One guy is speaking into my left ear and the other guy into my right. It's like I'm standing between them staring off orthogonally, but the video has me looking right at the interviewee. Makes my head spin.

I would have been happy to buy a device and possibly develop for it, but they have really horrible developer support. They don't even let you download the SDK! Apparently 40,000 people applied to the developer program, but they are only giving access to people to whom they have decided to give free devices. Their actual end-user device is shipping soon, but we can't even download the SDK? WTF??? It is cheap and I would be happy to buy one. But without an SDK it is pretty worthless to me...

Amen to that. That's why I'm excited about the DUO. Even at $110 it's a great deal if you look at the specs of the hardware, the capabilities it has and what their software does according to the video highlights (their tracking of a hand through a 180 degree turn is very impressive). I don't get why LEAP is restricting their SDK and limiting who can be a developer and what people can use the device for.

I've said it before and I'll say it again: if we as members of a community that values open-source project