The company decided to design (but not manufacture) its own chip, a successor to the current Holographic Processing Unit, because it realised that off-the-shelf components are simply not powerful enough, and cannot be optimised tightly enough, for the kind of tasks a device like HoloLens is built for.

The idea is that the volume of data AI algorithms need to churn through cannot be shipped to the cloud for processing and sent back to the device; such a round trip would simply be too slow. The processing needs to happen on the device itself, and if these chips are to run off a lithium battery in a mobile device, they must be as optimised as possible for the task they are doing.
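As a rough illustration of the latency argument, the back-of-envelope sketch below compares a hypothetical cloud round trip against on-device inference for a headset rendering at 60 frames per second. Every figure here is an assumption chosen for illustration, not a measured number from Microsoft or anyone else:

```python
# Back-of-envelope latency comparison: cloud round trip vs. on-device inference.
# All figures are illustrative assumptions, not measurements.

CLOUD_RTT_MS = 60.0          # assumed network round trip to a nearby data centre
CLOUD_INFERENCE_MS = 10.0    # assumed server-side model inference time
LOCAL_INFERENCE_MS = 15.0    # assumed inference on a dedicated on-device coprocessor

FRAME_BUDGET_MS = 1000.0 / 60.0  # a 60 fps device has ~16.7 ms per frame

cloud_total = CLOUD_RTT_MS + CLOUD_INFERENCE_MS
local_total = LOCAL_INFERENCE_MS

print(f"cloud path:   {cloud_total:.1f} ms (meets budget: {cloud_total <= FRAME_BUDGET_MS})")
print(f"local path:   {local_total:.1f} ms (meets budget: {local_total <= FRAME_BUDGET_MS})")
print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms")
```

Under these assumed numbers the cloud path blows past the per-frame budget before any model even runs, while the local path fits inside it, which is the crux of the on-device argument.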

“The consumer is going to expect to have almost no lag and to do real-time processing,” Jim McGregor, an analyst at Tirias Research, said in an interview with Bloomberg. “For an autonomous car, you can’t afford the time to send it back to the cloud to make the decisions to avoid the crash, to avoid hitting a person. The amount of data coming out of autonomous vehicles is tremendous; you can’t send all of that to the cloud.”

Cars are one example, as autonomous and semi-autonomous vehicles are already on the road, but the thinking extends well beyond them to almost every consumer electronics product. By 2025, McGregor says, “every device people interact with will have AI built in.”

The Redmond-based company is certainly not the only one working towards making “AI everywhere” a reality. Amazon recently partnered with Nvidia, whose new, highly powerful and efficient chip design, named “Volta,” is a GPU architecture built specifically for AI-intensive tasks.

Microsoft says that “the AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery.” However, there are no details on when the second-generation headset will actually hit the market, so we may have to wait a while to see how it performs.