The camera on the new Pixel 2 is packed full of great hardware, software, and machine learning (ML), so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.

HDR+ produces beautiful images, and over the past year we have evolved the algorithm to use the Pixel 2’s application processor efficiently, letting you take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel with that engineering effort, we have also been working on creating capabilities that enable significantly greater computing power, beyond existing hardware, to bring HDR+ to third-party photography applications. To expand the reach of HDR+, to handle the most challenging imaging and machine learning applications, and to deliver lower-latency and even more power-efficient HDR+ processing, we have created Pixel Visual Core.
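At its core, HDR+ captures a burst of short-exposure frames and merges them, which reduces noise while preserving highlights. The sketch below is a deliberately simplified Python illustration of that merge idea, not the production algorithm; all names, frame counts, and noise values here are made up for the example.

```python
import random

def capture_burst(num_frames, true_scene, noise_sigma):
    """Simulate a burst of noisy short-exposure frames of the same scene."""
    rng = random.Random(0)  # fixed seed so the example is reproducible
    return [
        [pixel + rng.gauss(0, noise_sigma) for pixel in true_scene]
        for _ in range(num_frames)
    ]

def merge_burst(frames):
    """Average already-aligned frames: noise falls roughly as 1/sqrt(N),
    which is why merging a burst beats a single long exposure in low light."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

scene = [10.0, 50.0, 200.0]   # a tiny 3-pixel "scene", dark to bright
burst = capture_burst(8, scene, noise_sigma=5.0)
merged = merge_burst(burst)
```

The real pipeline also aligns frames before merging and applies tone mapping afterward; this sketch only shows why a burst of frames carries more usable signal than any single one.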

Pixel Visual Core is Google’s first custom-designed System on Chip (SoC) for consumer products. It is built into every Pixel 2, and in the coming months, we will turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.

Let’s delve into some of the details. The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of over 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster and at less than one-tenth the energy of running on the application processor (AP). A key ingredient in the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To address this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
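The workloads those ALUs excel at are local stencil operations: each output pixel is a small arithmetic combination of its neighbors, so thousands of pixels can be computed in parallel. Halide itself is a C++-embedded language, but the kind of computation it describes can be sketched in plain Python (an illustrative toy, not IPU or Halide code):

```python
def blur_1d(signal):
    """A 3-tap box blur over a 1-D signal: the kind of local stencil an
    image-processing core evaluates across many ALUs at once, since each
    output element depends only on a small neighborhood of inputs.
    Edges are handled by clamping to the nearest valid sample."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        out.append((left + signal[i] + right) / 3.0)
    return out
```

In Halide proper, this "what to compute" definition is written separately from a schedule describing how to tile, vectorize, and parallelize it on the target hardware—that separation is what lets the compiler map the same algorithm efficiently onto the IPU's ALU grid.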

In the coming weeks, we will enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we will enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait to see the beautiful HDR+ photography that you already get through your Pixel 2 camera become available in your favorite photography apps.

HDR+ is the first application to run on Pixel Visual Core. As noted above, Pixel Visual Core is programmable, and we are already preparing the next set of applications. The great thing is that as we roll out new applications on Pixel Visual Core, Pixel 2 will continue to improve. We’ll keep delivering other imaging and ML innovations over time—keep an eye out!