Microsoft Kinect Refuses to Die

Kinect was never for you. Yeah, you, with the Xbox One that came bundled with a Kinect. That big honking spatial camera was an impressive piece of tech, but it never did you much good as a console add-on, did it?

Microsoft announced the return of the Kinect on Monday at Build, its annual developer conference in Seattle. And its new iteration, a tool for developers powered by a more sophisticated camera module and Microsoft's cloud-based AI technology, might actually push Kinect over the hump and make it useful to a whole new generation of hackers and tinkerers.

“I think one of the things that is beginning to be understood is that Kinect was never really just the gaming peripheral,” Microsoft Director of Communications Greg Sullivan told Gizmodo at Build. “It was always more.”

When the company launched the original Kinect as an Xbox 360 accessory in 2010, it was purely intended for gamers. The spatial camera was seen as something of a response to the Nintendo Wii Remote—a way to incorporate motion into gaming, except now without the need for controllers you'd have to charge or could accidentally fling into your TV. Like the Wii Remote, Kinect's potential outside the gaming realm was immediately evident. "This is showing us the future," said Johnny Chung Lee about six months after launch when detailing Kinect's appeal for hackers. Lee was one of the creators of Kinect, and would later leave Microsoft to head up Project Tango at Google. In Lee's mind, Kinect wasn't just for slashing apples with your hands in Fruit Ninja Kinect or being a giant Rancor in Kinect Star Wars. Kinect was about interacting with computers in an entirely new way.

And he was kind of right! While compelling studio-developed games were slow to emerge, Kinect found new and cool uses in the hands of developers. People built rigs that used the Kinect to scan a person and insert them into a video game, or to guess the weight of astronauts. Another hack let you control a Mac OS X device. The hacks were kludged together with no official support from Microsoft. They weren't always good, but they were exciting, because this was before we had a whole slew of new technologies like HoloLens or Oculus VR or even Siri and Alexa. For a brief moment, Kinect was the coolest glimpse of future user interfaces out there.

Despite its promise, Kinect never caught on as a mainstream product for Xbox 360 or Windows. And it was downright poisonous when bundled with the Xbox One. By October last year, all consumer Kinect products were discontinued. All told, Microsoft sold some 35 million units, but most seemed to rest, inert, on top of televisions or shoved in closets.

Yet the Kinect dream never really died. Its technology has always been central to the HoloLens, Microsoft’s elusive, not-quite-ready-for-prime-time mixed reality headset. And now it’s back, stripped of the black plastic casing and stand, as a module for developers. (Microsoft’s Sullivan told Gizmodo the new module is also a central component to the next generation of HoloLens—mind you, that product has not yet officially been announced.)

In a blog post, Alex Kipman, who heads up Microsoft's division focused on AI, perception, and mixed reality, described the company's fourth-generation Kinect depth-sensing technology (officially called Project Kinect for Azure). On a hardware level, the module has a higher-resolution 1024 x 1024 RGB camera and a new time-of-flight sensor that behaves sort of like the laser projector in the iPhone X's True Depth camera system. The ToF sensor shoots out light, which is then reflected back onto the sensor. It then uses the time it takes for the light to shoot out and reflect back to determine the precise distance of the object the sensor is focused on.
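The round-trip timing idea is simple enough to sketch in a few lines. This is just an illustration of the general time-of-flight principle, not Kinect's actual pipeline, and the pulse timings below are made-up example values:

```python
# Time-of-flight in a nutshell: distance = (speed of light x round-trip time) / 2.
# We halve the product because the light travels to the object and back.
# Values here are illustrative, not Kinect specifications.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Convert a measured light round-trip time into a distance in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~13.34 nanoseconds puts the object about 2 meters away.
print(round(tof_distance(13.34e-9), 2))  # → 2.0
```

The hard engineering problem, of course, is measuring those nanosecond-scale intervals accurately for every pixel in the array, which is why the sensor itself is the interesting part.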

Of course, for Microsoft, the new module really just serves as another way to get people on board with Azure. Unless you're a developer, you probably don't think much about Azure; it sits in the same category as Xcode, Android Things, or Amazon Web Services: nebulous, dev-focused technologies that serve as a backbone of our modern computing experience, but that ordinary people don't use every day.

Azure is a set of cloud-based services that let software developers use a wide variety of Microsoft-built tools and libraries, as well as their own code, to perform a wide range of functions, from crunching data to running websites and powering servers. Microsoft Build this year was less about cool devices for Windows and more about how developers can use Azure to access slick tools that would normally require having AI experts on staff.

The new Kinect module will make use of Azure's AI assets, making it easier for developers to build for the depth-sensing platform without needing sophisticated AI knowledge. As for what, besides the enterprise-focused HoloLens, the new Project Kinect might actually be good for in the hands of developers, Kipman is maddeningly vague. He talks about how the cutting-edge sensor combined with Azure's AI powers will enable developers to ascertain visual information more quickly and with lower power consumption. Surely this would be a boon to whoever decides to build for the technology, and I'm sure many of those companies will be boring IoT businesses that want to deploy the technology in warehouses or storage facilities, or third-party headset makers trying to create the killer AR tool that makes the technology indispensable.

In the new module, the technology gets a new future that’s mercifully open and undefined. As Microsoft’s Kinect tech moves into its post-consumer gadget existence, maybe it’s best that Microsoft’s not trying to tell people what to do with Kinect. It’s always been better off letting the tech have a life of its own.