Aquifi is coming out of stealth to announce Fluid Experience, a software platform for a new generation of gesture controls that it says will outperform the Kinect technology in Microsoft’s Xbox One video game console.

Above: Fluid Experience

Image Credit: Aquifi

The main mission is to create adaptive gesture controls that make machines adapt to humans rather than vice versa, so that the technology finally delivers on years of motion-sensing hype. Aquifi says its more precise gesture technology will work over wider areas, interpret more than just hand gestures or body positions, and adapt through machine learning.

The Fluid Experience will work on commodity gesture-control hardware, making innovations in human interface controls much more affordable than in the past, said Nazim Kareemi, the chief executive of Aquifi, in an interview with VentureBeat.

Sponsored by VB

Join us at GrowthBeat where thought leaders from the biggest brands will share winning growth strategies on August 17-18 in San Francisco. Sign up now!

“If Kinect was the first generation, we’re building the second generation,” he said. “In the past, you had to adapt to the machine. We want it to adapt to you.”

The Palo Alto, Calif.-based Aquifi has raised $9 million from Benchmark Capital (the backer of eBay and Instagram), and private investors including Blake Krikorian, founder of Sling Media, and Mike Farmwald, cofounder of Rambus.

“The vision that Aquifi’s founders saw a decade ago for 3D tracking and its consequences is becoming a reality today,” said Bruce Dunlevie, Benchmark Capital partner. “The team has learned a lot from the collective experience of its members, and understands what is needed to make a fluid experience available to everyone, on all their devices.”

Kareemi has assembled a team of veterans who worked at Canesta, the gesture-control chip company that Microsoft bought and used for the second generation of its Kinect motion-sensing controls in the Xbox One. Kareemi was a cofounder of Canesta, and he believes the Fluid Experience technology will improve on it in several ways. The team is developing technology based on computer vision, machine learning, and cloud services.

Costly custom sensors held back earlier versions of the technology in the Wii and Xbox 360 game consoles. The Xbox One uses the Canesta hardware and comes with Kinect, but that system is still pricey at $500. Kareemi wants something that costs much less and is based on software that can run on many different platforms. People will be able to upgrade the software in the field.

“That will make it adaptable and easier to use,” he said.

It will also be more precise. Back in 2010, the first Kinect technology could detect 3D points in a 320-by-240 grid. The newer Kinect can detect on a 640-by-480 field, while Aquifi can detect points in a 1,280-by-720 grid.

Above: Why Fluid is better

Image Credit: Aquifi
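Those grid sizes translate directly into the number of depth points captured per frame. A quick back-of-the-envelope check (using only the resolutions cited in the article) shows the scale of the jump:

```python
# Depth-grid resolutions cited in the article (width x height).
grids = {
    "Kinect (2010)": (320, 240),
    "Kinect (Xbox One)": (640, 480),
    "Aquifi": (1280, 720),
}

# Total 3D points each grid can resolve per frame.
points = {name: w * h for name, (w, h) in grids.items()}

for name, n in points.items():
    print(f"{name}: {n:,} points")

# Aquifi's grid holds 12x the points of the first Kinect, 3x the newer one.
assert points["Aquifi"] == 12 * points["Kinect (2010)"]
assert points["Aquifi"] == 3 * points["Kinect (Xbox One)"]
```

In raw depth points, that is 76,800 for the original Kinect, 307,200 for the Xbox One version, and 921,600 for Aquifi.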

He wants the machine to be able to interpret someone’s movements and gestures, removing barriers and anticipating actions. The technology will work on commodity image sensors and be available for use with smartphones, tablets, PCs, wearable devices and other machines.

The wearable apps could use augmented reality, or the combination of virtual imagery with the real word, for things like scanning an object or mapping a room with a smartphone. The technology could also work in safe car applications that combine voice recognition and motion detection.

Devices could go into “autolock” when an unrecognized face looks at them, enhancing security. They could also power down when they detect that no one is looking at them, saving power, and power up when they detect a face, reducing start times.
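That behavior amounts to a small state machine driven by face detection. Here is a minimal sketch of the idea; the state names and transition rules are illustrative guesses, not Aquifi’s actual design:

```python
def next_state(state, face_detected, face_recognized):
    """Toy power/security policy driven by face detection.

    States: "off" (powered down), "locked" (awake but autolocked),
    "active" (unlocked). Illustrative only.
    """
    if not face_detected:
        return "off"       # no one looking: power down to save energy
    if not face_recognized:
        return "locked"    # unknown face: autolock for security
    return "active"        # recognized user: wake and unlock

# A recognized user wakes the device; a stranger triggers autolock.
assert next_state("off", face_detected=True, face_recognized=True) == "active"
assert next_state("active", face_detected=True, face_recognized=False) == "locked"
assert next_state("active", face_detected=False, face_recognized=False) == "off"
```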

“Within the next decade, machines will respond to us and our needs through intuitive interpretation of our actions, movements, and gestures,” said Kareemi. “Our fluid experience platform represents the next generation in natural interfaces, and will enable adaptive interfaces to become ubiquitous, thanks to our technology’s breakthrough economics.”

Aquifi will introduce the technology to developers over the next six months, and it expects the first Fluid Experience devices will debut in the first half of 2015.

Kareemi cofounded Aquifi in 2011, and it has 29 employees. The founders set out to answer questions like how to make machines adapt to people, how to create an enduring platform, and how to enable a ubiquitous solution. The company holds four patents and has applied for more than 30 more.

Sixense’s MakeVR will let you design a 3D object with your hands — and then print it out
VentureBeat, Jan. 29, 2014
http://venturebeat.com/2014/01/29/sixense-to-launch-kickstarter-campaign-for-its-makevr-software-for-designing-3d-printable-objects/

Soon, anyone will be able to design an object in 3D and then print it out, provided Sixense’s upcoming Kickstarter campaign for the MakeVR technology is successful.

You may soon no longer need to master complicated design software to dive into 3D printing.

Sixense Entertainment plans to launch a Kickstarter crowdfunding campaign next week to raise money for MakeVR, virtual reality software that enables anyone to create 3D-printable objects using wireless gesture-control devices and natural hand motions.

The Los Gatos, Calif.-based company hopes to ride on the popularity of 3D printing and make it far easier for mainstream consumers — not just hardware geeks and artists — to design things in a virtual world and print them out. With MakeVR and Sixense’s STEM-based motion-sensing controls, you can design something in a 3D work space with your hands.

The crowdfunding campaign will help Sixense complete MakeVR for launch later this year and, more important, draw attention to it as one of the easiest ways to create 3D-printed objects, said chief executive Amir Rubin in an interview with VentureBeat.


“You can press the button and it goes straight to the 3D printer,” Rubin said. “It’s like taking something in [the sandbox building video game] Minecraft, building it, and printing it out.”

Within a couple of hours, the printer spits out your 3D design as a plastic object. Besides creating your own item, you can work collaboratively on a design in the same virtual space with someone else in real time.

That Sixense technology, which is based on motion-sensing magnets that give you 360 degrees of freedom to control objects in a 3D space, has been in the market for some time via licenses with Razer, which makes the Hydra PC gaming controllers, and Valve, for its Portal 2 In Motion game.

But the 3D-sensing controller has gotten a new life with the popularity of the Oculus VR virtual reality goggles. After all, if you’re going to explore virtual reality, you need more than a headset. You need a new user interface to help you naturally interact in a three-dimensional space. And that’s giving Sixense a chance to redefine itself and launch a new strategy that includes selling its own branded merchandise.

Rubin now says that, beyond gaming, the Sixense controllers can be a tool for artists and others to create objects in real time, with no delays and no need to learn painfully obtuse engineering programs.

Rubin introduced me to an intern named Tom, a 2D artist who had never created a 3D animated object. He began using the Sixense controller and MakeVR, and within two days, he created a 3D-animated mech (a heavily armored fighting robot). Professional 3D model builders would normally take much longer.

“He’s more talented than most people, but he learned it from scratch,” Rubin said.

Right now, Sixense is creating its own branded product, the Stem System, for the wireless 3D controls. The Stem System consists of a base that contains a magnet that sends out pulses into a 3D space. The magnetic waves hit sensors and then bounce back to the base.

The system calculates how long the interaction took and then figures out exactly where the object is in 3D space. It works with or without a line of sight, which sets it apart from Microsoft’s camera-based Kinect motion-sensing system. The Stem System is also modular; you can put the motion sensors in a variety of places and positions.
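The article doesn’t spell out the position math, but one classic way to turn per-sensor range measurements into a 3D position is trilateration from three known reference points. The sketch below illustrates that general idea; the anchor positions and closed-form solution are textbook assumptions, not Sixense’s actual algorithm:

```python
import math

# Small 3-vector helpers (kept explicit for readability).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a): return math.sqrt(dot(a, a))
def unit(a): return scale(a, 1.0 / norm(a))

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Position from three reference points and measured ranges.

    Returns the solution on the +z side of the plane through the three
    reference points (a mirror solution also exists in general).
    """
    ex = unit(sub(p2, p1))
    i = dot(ex, sub(p3, p1))
    ey = unit(sub(sub(p3, p1), scale(ex, i)))
    ez = cross(ex, ey)
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))
    x = (r1*r1 - r2*r2 + d*d) / (2*d)
    y = (r1*r1 - r3*r3 + i*i + j*j - 2*i*x) / (2*j)
    z = math.sqrt(max(0.0, r1*r1 - x*x - y*y))
    return add(p1, add(scale(ex, x), add(scale(ey, y), scale(ez, z))))

# Anchors at known positions, a target at (1, 2, 3); ranges are exact here.
target = (1.0, 2.0, 3.0)
anchors = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
ranges = [norm(sub(target, a)) for a in anchors]
found = trilaterate(*anchors, *ranges)
assert all(abs(f - t) < 1e-9 for f, t in zip(found, target))
```

Real systems add noise handling and more than three measurements, but the core geometry is the same.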

Intel is working on gesture-based PC technologies using 3D sensors inside webcams. Over time, that could replace Stem controllers, so Sixense wouldn’t have to create them anymore and could concentrate on software such as MakeVR.

Hardware makers will still be able to license designs from Sixense so they can embed them in their own products, as Sixense will support an open hardware and software development environment. Rubin encourages the community to create a wide variety of motion-sensing devices, such as swords, baseball bats, or head-mounted displays. The system works with Windows, Mac, and Linux.

Above: MakeVR will let you collaborate on the same design in real time. Sixense’s Steve Hansted shows it off.

Image Credit: Dean Takahashi

One of Rubin’s employees, Scott Szyjewicz, demoed the system to me in real time. He created a twisted fire hydrant over the course of a few minutes. With his hands on the controllers, he reached into the virtual space on the monitor, grabbed objects, twisted them around, and manipulated things as if he were doing it with his hands. He built the fire hydrant and was ready to print it out in a very short time.

He also started creating objects along with product marketer Steve Hansted. They were able to work within the same MakeVR work space and make changes in real time, with no perceptible delay.

Paul Mlyniec, head of development for MakeVR, has been working on the virtual reality design technology for a long time.

“The hard part is taking computer-aided design software and putting it together with a two-handed user interface,” he said.

Rubin said, “The biggest challenge has been to find the problem this solves. It’s only really when the 3D printing community came to us that we saw what it could do.”

Rubin founded the company in 2007, but he has been working on motion-sensing technology for about two decades. Sixense has 18 employees and has raised $3 million to date. Sixense has created its own game studio in San Francisco to make demos that make use of its technology. But the MakeVR technology is more like a creativity application.

The company hasn’t determined exactly how much it will raise in the Kickstarter campaign. And it will sell the MakeVR software, but won’t reveal the price until later.

Above: Szyjewicz and Hansted show collaborative, real-time design with MakeVR.


TabletKiosk embeds a 3D gesture camera in an enterprise tablet
VentureBeat, June 5, 2013
http://venturebeat.com/2013/06/05/tabletkiosk-embeds-a-3d-gesture-camera-in-an-enterprise-tablet/

The tablet shows that gesture controls are spreading to lots of gadgets.

SoftKinetic and TabletKiosk have created an enterprise tablet that has an embedded 3D gesture control camera in it.

The tablet is a prototype that uses the Sahara Slate PC i500 platform and SoftKinetic’s DepthSense 3D camera. It is on display at the Computex trade show in Taiwan.

“We are incredibly excited to work with TabletKiosk and to be at the forefront of a major shift in 3D capabilities for the all-in-one market,” said Mitch Reifel, vice president of sales at SoftKinetic. “TabletKiosk’s superior platform allows our cutting-edge camera and 3D technology to integrate seamlessly. The enterprise market is leading the change for portable 3D devices, and we are anxious to play a major role in reshaping the ways in which users interact with their devices.”

The prototype can recognize faces and gestures from short ranges, as close as 15 centimeters away.

“With our Sahara Slate PC i500 platform we have created the ability to integrate different types of hardware that have, up to now, only been available as peripherals,” said Martin Smekal, president and CEO of TabletKiosk. “Incorporating the SoftKinetic DS525 module has allowed us to create the 3D functionality we wanted, and to offer the enterprise market a completely new and unparalleled mobile computing experience.”

Leap Motion’s tiny gesture controller lands exclusively at Best Buy
VentureBeat, Jan. 16, 2013
http://venturebeat.com/2013/01/16/leap-motion-gesture-best-buy/

There’s still no official release date, but Leap Motion has already scored a major deal for its long-awaited gesture control gadget with Best Buy.

The Leap Motion Controller will be available at Best Buy’s stores and website for $70 when it launches this spring, Leap announced this morning. Pre-orders will begin on Best Buy’s website in February, though you can also preorder via Leap’s website right now.

Leap’s device, which can track up to 10 fingers for Kinect-like motion controls, made a big splash when it debuted last year. The company says it can track movements smaller than the tip of a pin, following your hands and fingers at up to 290 frames per second. Pretty impressive for a device about the size of a stick of gum.


“Finding a major retail partner for our North American launch was a critical component to our strategy, and Best Buy was the obvious choice,” Leap Motion President and chief operating officer Andy Miller said in a statement today.

As a fan of Microsoft’s Kinect motion controls, I’m hoping the Leap controller lives up to the hype. The company is aiming to replace the mouse with gesture controls, which could lead to some interesting developments on touch-focused operating systems like Windows 8.

San Francisco, Calif.-based Leap has raised around $44 million in total from Highland Capital, Founders Fund, Andreessen Horowitz, and others.

Intel at CES: Ultrabooks, Comcast partnership, and gesture controls
VentureBeat, Jan. 7, 2013
http://venturebeat.com/2013/01/07/intel-ces-2013/

Intel hit CES with a number of big announcements, including a new partnership with Comcast that wasn’t everything people expected.

Intel made a few big announcements at the Consumer Electronics Show today concerning its ultrabook laptops and a new vision for cable.

Though people expected to hear about a new set-top box that would revolutionize the way people purchase cable plans, Intel surprised them with a partnership instead. And, as it has said for a year now, it remains interested in putting Kinect-like gesture controls in all kinds of devices.

Ultrabooks at $599

Intel revealed today that its next line of ultrabooks will be released by the end of 2013. You’ll be able to pick one up for $599, and, as The Next Web notes, Intel says the battery will last all day. The ultrabooks at today’s press briefing had detachable keyboards and touchscreens, making them competitive in the tablet market despite being PCs. They also run the new Windows 8.

Comcast partnership

Intel also announced that it is partnering with cable giant Comcast to bring all of its Xfinity channels to Intel devices, as Forbes notes — without a set-top box. The channels would be processed and delivered inside the device, with no extra piece of machinery needed to serve the content.

This announcement was actually a bit of a bummer for Intel fans looking to see it do the opposite — release a set-top box. The box was rumored to change cable as we know it, with some saying it would enable people to purchase only the channels they wanted to watch. Instead, the partnership goes in the opposite direction.

Gesture controls

As VentureBeat’s Dean Takahashi writes, Intel is digging in deeper with gesture controls. Kirk Skaugen, vice president of the PC client group, explained that Intel believes gesture controls can do a lot for the gaming and security industries. On the gaming side, being able to control a game simply by waving your hands in front of the screen is, well, awesome.

As far as security goes, you could also use gesture controls to evolve passwords. Right now, passwords are very weak, and the technology behind iris scans and fingerprinting isn’t cheap or easily distributed. Gesture control could change that by letting users define a custom gesture, or pair one with face recognition, to grant access to accounts and systems.

Orange taps Movea to create a gesture-based set-top box
VentureBeat, Jan. 3, 2013
http://venturebeat.com/2013/01/03/orange-taps-movea-to-create-a-gesture-based-set-top-box/

European telecom carrier Orange has teamed up with Movea to create a set-top box for the living room that operates on gesture controls.

The set-top will use Grenoble, France-based Movea’s motion-sensing and processing technologies to enable TV watchers to control their television choices with hand gestures. They’ll also be able to play motion-sensing games and navigate through a user interface with the twist of a wrist.

The new Orange Livebox set-top uses Movea’s SmartMotion Server, a motion-processing engine. The box will ship with a special remote control that works much the way that a Wii game console remote does.

“With the rapid evolution of smart TVs and smart-home systems, we see the demand for more intelligent home entertainment devices and natural user interfaces,” said Jean-Bernard Willem, head of content marketing for Orange France.


For Orange, the set-top is a way to control lots of things in the home beyond TV shows. You can use hand gestures to control volume, web browsing, and gamepads or joysticks as well as to navigate through music, video, and photos.

The SmartMotion Server works with 10 different gestures. You can make a “check” gesture to select an item and an “X” gesture to close an app.

“Movea and Orange are leading the charge in responding to consumers’ increasing appetite for a more powerful and intuitive home theater experience,” said Sam Guilaumé, chief executive of Movea.

Movea’s SmartMotion Server is platform agnostic, while Orange’s Livebox Play TV service delivers new sources of revenue, such as video on demand and TV apps. Orange’s Livebox set-top box with remote control is now available for preorder and will be ready for purchase in February.

In:play is the perfect iOS music player for joggers & irresponsible drivers (exclusive)
VentureBeat, Oct. 19, 2012
http://venturebeat.com/2012/10/19/inplay-music-app/

The majority of mobile music player apps weren’t built with the touch screen in mind. In:play was.

In:play is an iOS app that uses gesture-based controls to navigate through your music library. In theory, this should make it easier for you to find the perfect song while jogging, driving, or doing anything that requires most of your undivided attention.

Playing a song on the app pulls up three sets of large text: The artist’s name, followed by the album or playlist title and the name of the song. Pressing any one of these elements will allow you to sort through your music by flicking your fingers horizontally. For example, if I only wanted to hear certain songs by Wilco, I could tap the name of the artist (making it light up as blue) and flick my fingers until I’m satisfied. You can also scroll through an alphabetical list if you need a more precise search tool.
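Mechanically, that flick interaction is just filtering the library by the selected field and stepping through the matches. A rough sketch of the logic, with a toy data model invented for illustration (the real app reads the device’s music collection):

```python
# Toy library; the real app reads the device's music collection.
LIBRARY = [
    {"artist": "Wilco", "album": "Sky Blue Sky", "song": "Impossible Germany"},
    {"artist": "Wilco", "album": "Summerteeth", "song": "Via Chicago"},
    {"artist": "Beck", "album": "Odelay", "song": "Devils Haircut"},
]

def flick(selected_artist, position, direction):
    """Step through the selected artist's songs with a horizontal flick.

    direction is +1 (flick right) or -1 (flick left); the list wraps around.
    """
    songs = [t["song"] for t in LIBRARY if t["artist"] == selected_artist]
    return songs[(position + direction) % len(songs)]

# Tapping "Wilco" then flicking steps only through Wilco's songs.
assert flick("Wilco", 0, +1) == "Via Chicago"
assert flick("Wilco", 1, +1) == "Impossible Germany"   # wraps around
assert flick("Beck", 0, +1) == "Devils Haircut"
```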

“The app rarely requires pushing a button (and) acts on swipes and touch,” In:play’s developers told me, adding that no one has quite figured out a truly intuitive interface for touch screen music players.

While I don’t doubt that In:play’s interface offers a smoother experience than Apple’s native player, it’s unfortunately a bit bland. It doesn’t use album artwork, and the text for each track isn’t easily distinguishable (everything is the same font, and the size of the text is based on the length of the song/artist/album name). For people who intimately know their music collection, this shouldn’t be an issue. For someone like me, who rarely pays attention to album or song titles anymore, it could serve as a giant pain in the ass.


The app costs $1.99 and will be available in Apple’s App Store in the next few days. [Update: now available in the App Store.] In:play faces competition from Apple’s native iOS music player, EZ MP3 Player, Groove 2, Panamp, and other music player apps. The team plans to release an Android version at some point in the future.

Founded in January 2012, the Middletown, Calif.-based firm has four employees and is part of BigBlueCouch, the app development arm of Bespin Holdings. The development team has raised a total of $420,000 in funding to date from Geoff Malais, Lance Goldsmith, and former Motorola sales and marketing executive Robert Goldsmith.

Check out the demo video below for a better look at the app in action.

Magic fingers: SoftKinetic is at the core of Intel’s ‘perceptual computing’ technology (video demo)
VentureBeat

SoftKinetic grabbed some of the limelight at the Intel Developer Forum as the world’s biggest chip maker showed off “perceptual computing,” or how you can control a computer through hand movements, face recognition, voice commands, touchscreen swipes, or mouse-and-keyboard controls.

The Brussels, Belgium-based SoftKinetic makes gesture-control cameras and software much like the elements used in Microsoft’s Kinect motion-sensing system for the Xbox 360 game console. But SoftKinetic makes technology that can recognize gestures that are anywhere from 6 inches to 3 feet away from a DepthSense camera atop a laptop.

Intel believes that the close-range gesture-recognition technology is ideal for controlling thin and light laptops — dubbed “ultrabooks” — which resemble Apple’s MacBook Airs. SoftKinetic’s technology will be included in the software development kit (SDK) coming in 2013. Michel Tombroff, the chief executive of SoftKinetic, and his team showed us a hands-on demo of the technology. Here’s our video of it on display at the Intel Developer Forum at Moscone Center West in San Francisco.

Are you ready for Kinect-like gesture-controlled phones?
VentureBeat, Oct. 31, 2011
http://venturebeat.com/2011/10/31/are-you-ready-for-kinect-like-gesture-controlled-phones/

South Korea’s Pantech plans to add gesture-recognition functions to its mobile phones.

The maker of Android and messaging phones plans to use Kinect-like gesture technology from Israel’s eyeSight Mobile Technologies. Pantech plans to include the technology in its Vega LTE series of phones due to hit the market in November.

The folks at eyeSight say that touch input is impractical at times, like when driving or wearing gloves. The gestures can answer calls or play music. Of course, voice activation can handle the same kinds of tasks that gesture controls can handle.

Microsoft developed its Kinect motion-sensing system for the Xbox 360 using in-house development and technology from PrimeSense and 3DV Systems. It also acquired Canesta and licensed patents from GestureTek.

The video is nice in that it demonstrates why you might want to answer a phone with a non-touch gesture, by swiping the air over the phone, if your hands are messy. But the odds of that happening are so low that it makes no sense to add extra cost to a phone just to cover that kind of possibility. File this one under technology for technology’s sake.