For those who have followed the state of camera software in AOSP and Google Camera in general, it’s been clear that this part of the experience has been a major stumbling block for Android. Third-party camera applications have almost always offered fewer options and a worse camera experience than first-party ones, because the underlying camera API simply didn’t support more. Until recently, the official Android camera API supported only three distinct modes: preview, still image capture, and video recording. Within these modes, capabilities were similarly limited. It wasn’t possible to do burst capture in photo mode or take photos while recording video, and manual controls were effectively nonexistent; even something as simple as tap to focus wasn’t supported through Android’s camera API until Ice Cream Sandwich (4.0). In response, Android OEMs and silicon vendors filled the gap with custom, undocumented camera APIs. While this made much better camera experiences possible, those APIs were only usable in the OEM’s own camera applications. If the OEM’s application lacked manual controls, users had no way to get one that offered them.

With Android L, this changes. The key to understanding the new API is that there are no longer distinct modes to work with: photos, videos, and previews are all processed in exactly the same way. This opens up a great deal of possibility, but it also means more work for the developer to do things correctly. Instead of sending capture requests in a given mode with global settings, applications now submit individual capture requests to a request queue, and each request is processed with its own settings.
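To make the request-queue model concrete, here is a minimal toy sketch in plain Java. It is deliberately not the real `android.hardware.camera2` classes (where requests are built via `CameraDevice.createCaptureRequest` and submitted to a `CameraCaptureSession`); the class names and fields below are illustrative. The point it demonstrates is that every request carries its own settings, and a frame is always processed with the settings it was submitted with.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model (not the real android.hardware.camera2 classes): every capture
// request carries its own settings instead of mutating global camera state.
class CaptureRequest {
    final long exposureNs; // shutter speed in nanoseconds
    final int iso;         // sensor sensitivity

    CaptureRequest(long exposureNs, int iso) {
        this.exposureNs = exposureNs;
        this.iso = iso;
    }
}

class CaptureSession {
    private final Queue<CaptureRequest> queue = new ArrayDeque<>();

    // Submitting a new request never affects earlier submissions.
    void capture(CaptureRequest request) {
        queue.add(request);
    }

    // A frame is always processed with exactly the settings it was
    // submitted with; returns null when the queue is empty.
    CaptureRequest processNext() {
        return queue.poll();
    }
}

class RequestQueueDemo {
    public static void main(String[] args) {
        CaptureSession session = new CaptureSession();
        session.capture(new CaptureRequest(16_000_000L, 100)); // ~1/60 s, ISO 100
        session.capture(new CaptureRequest(8_000_000L, 800));  // 1/125 s, ISO 800
        System.out.println("frame 1 ISO: " + session.processNext().iso); // prints 100
        System.out.println("frame 2 ISO: " + session.processNext().iso); // prints 800
    }
}
```

Changing the ISO for the second request has no effect on the first, which is exactly the property the old mode-based API lacked.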

This sounds simple enough, but the implications are enormous. First, image capture is much faster. Under the old API, settings were global: any change meant the entire imaging pipeline had to be flushed, because frames already in flight would otherwise be processed with inconsistent, incorrect settings. That wait after every settings change slowed capture considerably. Under the new API, settings travel with each request (within device-dependent limits), so nothing in the pipeline ever has to be discarded and there is no wait. This dramatically increases the maximum capture rate regardless of the format used.
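The speed difference can be put into rough numbers. The sketch below is a back-of-the-envelope model, not a benchmark: the pipeline depth of 4 frames and the assumption that settings change before every shot are illustrative figures chosen to show the shape of the penalty, not measured values.

```java
// Toy throughput model: frame-times needed to capture `shots` images when
// the settings change before every shot. Pipeline depth and the flush
// penalty are illustrative assumptions, not measured figures.
class PipelineModel {
    // Old API: global settings. Each change flushes the `depth` frames
    // already in flight, costing `depth` extra frame-times per shot.
    static int oldApiFrameTimes(int shots, int depth) {
        return shots * (1 + depth);
    }

    // New API: settings ride along with each request, so no flush occurs.
    static int newApiFrameTimes(int shots, int depth) {
        return shots;
    }

    public static void main(String[] args) {
        int shots = 10, depth = 4;
        System.out.println("old API: " + oldApiFrameTimes(shots, depth) + " frame-times"); // prints 50
        System.out.println("new API: " + newApiFrameTimes(shots, depth) + " frame-times"); // prints 10
    }
}
```

In this model the old API spends five frame-times per shot once the flush penalty is included, while the new API spends one; the deeper the pipeline, the bigger the win.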

The second implication is that the end user gets far more control over capture settings. These have been discussed before in the context of iOS 8’s manual camera controls; in effect, it’s now possible to manually control shutter speed, ISO, focus, flash, and white balance, along with exposure compensation and the exposure metering algorithm, and to select the capture format. Images can be output as JPEG, YUV, RAW/DNG, or any other format the device supports.
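In the real API these controls are per-request keys set on a `CaptureRequest.Builder` (for example `SENSOR_EXPOSURE_TIME`, `SENSOR_SENSITIVITY`, and `LENS_FOCUS_DISTANCE`). The toy builder below mimics that key/value style in plain Java so it can run anywhere; the string keys and example values are illustrative stand-ins, not the real typed constants.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for Camera2's CaptureRequest.Builder: manual controls and
// the output format are just per-request key/value settings.
class RequestBuilder {
    private final Map<String, Object> settings = new HashMap<>();

    RequestBuilder set(String key, Object value) {
        settings.put(key, value);
        return this; // chainable, like the real builder
    }

    Object get(String key) {
        return settings.get(key);
    }
}

class ManualControlsDemo {
    public static void main(String[] args) {
        RequestBuilder request = new RequestBuilder()
                .set("SENSOR_EXPOSURE_TIME", 8_000_000L) // 1/125 s shutter, in ns
                .set("SENSOR_SENSITIVITY", 800)          // ISO 800
                .set("LENS_FOCUS_DISTANCE", 0.0f)        // 0.0 = focus at infinity
                .set("CONTROL_AWB_MODE", "OFF")          // manual white balance
                .set("OUTPUT_FORMAT", "RAW_SENSOR");     // vs JPEG or YUV
        System.out.println("ISO: " + request.get("SENSOR_SENSITIVITY")); // prints ISO: 800
    }
}
```

Because every one of these values is attached to a single request, two consecutive shots can use completely different manual settings without any mode switch in between.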

Though more a design decision than an implication, the elimination of the distinction between photo and video is crucial. Because that distinction is removed, it’s now possible to do burst shots, capture full-resolution photos while recording lower-resolution video, and record HDR video. In addition, because the pipeline reports the full camera state for each image, Lytro-style image refocusing is doable, as are depth maps for post-processing effects. Google specifically cited HDR+ on the Nexus 5 as an example of what’s possible with the new Android camera APIs.
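Since requests rather than modes drive the pipeline, mixing capture types is just a matter of what gets enqueued. The sketch below is a hypothetical illustration (the target names and exposure values are made up): it interleaves a full-resolution still into a run of video-resolution frames, then appends an exposure-bracketed burst of the kind HDR processing consumes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy request stream mixing video frames, a mid-video still, and an
// HDR-style bracketed burst. Targets and exposures are illustrative.
class Request {
    final String target;   // which output surface: "video" or "still"
    final long exposureNs; // per-request exposure time

    Request(String target, long exposureNs) {
        this.target = target;
        this.exposureNs = exposureNs;
    }
}

class MixedStreamDemo {
    static List<Request> buildQueue() {
        List<Request> queue = new ArrayList<>();
        // A few video-resolution frames...
        for (int i = 0; i < 3; i++) queue.add(new Request("video", 16_000_000L));
        // ...a full-resolution still in the middle of the video stream...
        queue.add(new Request("still", 16_000_000L));
        // ...and a bracketed burst (under-, normal, over-exposed) for HDR.
        for (long e : new long[]{4_000_000L, 16_000_000L, 64_000_000L})
            queue.add(new Request("still", e));
        return queue;
    }

    public static void main(String[] args) {
        System.out.println(buildQueue().size() + " requests queued"); // prints 7 requests queued
    }
}
```

Under the old mode-based API, each of these transitions would have required a mode switch and a pipeline flush; here they are simply adjacent entries in one queue.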

The new camera API officially arrives with Android L, and it’s already usable in the Android L preview on the Nexus 5. While no third-party applications take advantage of it yet, there is real potential for camera applications that improve significantly on the OEM defaults. Most importantly, the new API opens the door to applications no one has thought of yet. There are still issues with the Android camera ecosystem, but with the release of Android L, software won’t be one of them.

32 Comments

Like with everything else, Google arrives late and indecent. Hopefully this will be an improvement; I am sick and tired of the Android camera continuously refocusing and ruining footage even when the phone is on a stand and shooting a stationary object. It is plainly broken.

BTW, still waiting for the low-latency audio Google has been promising for like 5 years now... Pathetic, considering Android is just Linux and the entire low-latency audio stack for Linux has been around for quite a while.

It "was coming" with JB and KK as well. My expectations have been tuned down to save myself the disappointment; they will probably shave a few ms off the latency, but not deliver truly low-latency audio that would be usable for music-making applications.

I've read about "rewriting" the audio system. Hopefully they did not only rewrite it, but rewrote it right and abstracted it away from the VM and Java runtime, which was the reason it was so laggy to begin with. It was far from low-level, even when using the ALSA C API. Usable low-latency audio will probably require a real-time kernel, which unfortunately I don't see happening with Android. Meaning expected latency won't drop below 20 ms, which is still pretty high; 10 and below would be nice. I've been able to get 2 ms input and 3 ms output for a total of 5 ms latency in Linux with JACK and an RT kernel on a machine about as powerful as current high-end Android devices.

They still aren't low enough. They admitted as much, though it is around 20 ms as it stands in L. They are now using OpenSL ES (OpenGL ES's audio counterpart) to do audio. You can watch it for yourself if you search for the I/O schedule and watch the segment on audio.

I'm really excited about RAW support for upcoming phones. Here's a 100% crop comparison I made between a RAW file out of the OPPO 7a that I ran through Lightroom and an out-of-phone JPEG. You can clearly notice the additional sharpness in the bricks, flora, and asphalt.

i2.minus.com/imZpFpS3b7PnH.png

This is the 13MP Sony IMX214 sensor, which is probably in half of the Android devices that came out this year.

Colors and gamma are slightly off in the RAW because Lightroom didn't have full support for the Oppo when I processed it.