Pelican Imaging doesn't make smartphone cameras itself, but its algorithms and software could already be shaping the snapper in your next mobile.

Unlike traditional digital cameras, which have a single lens, array cameras have a number of lenses placed next to each other. Each operates independently and, in Pelican's case, is monochromatic — each camera devoted to a single colour: red, blue, green and so on.

When a photo is taken, each lens takes its own individual snap. Then Pelican's algorithms go to work, building up the final photo from the results of each lens.

"What our software essentially does is take all of these inputs and fuses all of the images together to create a high-resolution photo. And because you're fusing images that have some parallax between the cameras — like the way your eyes give you two slightly different inputs — when we fuse those images, the result is a high-resolution depth map of the scene. Every pixel has depth," Chris Pickett, CEO of Pelican Imaging, told ZDNet.
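The parallax Pickett describes is what makes depth recovery possible: a scene point shifts slightly between neighbouring sub-images, and the size of that shift (the disparity) encodes distance. A minimal sketch of the standard stereo triangulation step — the focal length and lens spacing here are made-up numbers, not Pelican's actual optics:

```python
import numpy as np

# Hypothetical parameters for a small camera array (illustrative only).
focal_px = 1500.0    # focal length expressed in pixels
baseline_m = 0.004   # 4mm spacing between adjacent lenses

# Disparity: how far (in pixels) a scene point shifts between two
# neighbouring sub-images. Closer objects shift more.
disparity_px = np.array([30.0, 15.0, 3.0])

# Classic stereo triangulation: depth = focal length x baseline / disparity
depth_m = focal_px * baseline_m / disparity_px
print(depth_m)  # → [0.2 0.4 2. ] — larger disparity means a nearer object
```

Pelican's real pipeline fuses sixteen views at once rather than a single pair, but the same inverse relationship between disparity and depth underlies it.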

And, thanks to the various lenses, the final image can be altered — refocused — after it's been taken. Even if the original image is a blurred mess, it can be sharpened up after the fact, and the subject of the photo changed — background elements brought to the fore, and vice versa.
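Once every pixel carries a depth value, after-the-fact refocusing reduces to a per-pixel choice: keep pixels near the chosen depth sharp, and blur the rest. A toy version of the idea (the function and its crude box blur are mine, not Pelican's pipeline):

```python
import numpy as np

def refocus(image, depth, focus_depth, tolerance=0.5):
    """Toy post-capture refocus: pixels whose depth is within `tolerance`
    of `focus_depth` stay sharp; everything else gets a 3x3 box blur."""
    # 3x3 box blur built from shifted copies (wrap-around edges are fine
    # for a sketch; a real pipeline would handle borders properly).
    blurred = sum(np.roll(np.roll(image, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    in_focus = np.abs(depth - focus_depth) < tolerance
    return np.where(in_focus, image, blurred)

# Example: left half of the scene is 0.5m away, right half is 2m away.
image = np.arange(36.0).reshape(6, 6)
depth = np.full((6, 6), 2.0)
depth[:, :3] = 0.5
near_focused = refocus(image, depth, focus_depth=0.5)
```

Choosing a different `focus_depth` on the same capture re-sharpens a different plane, which is why the subject can be changed after the shot.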

Array cameras' capabilities have been popularised by Lytro, whose first cameras hit the shops late in 2011. Unlike Lytro, which makes standalone cameras, Pelican offers reference designs meant to work with its software. The design uses four rows of four lenses and is aimed squarely at the smartphone market.

According to Pickett, Pelican's array cameras are half the thickness of standard smartphone cameras and, at volume, cost the same as the module that comes in the iPhone 5.

Qualcomm and Nokia want in

It's enough to make Nokia, and others, sit up and pay attention: last month the VC arms of both the Finnish company and mobile chip maker Qualcomm — along with Pelican's existing investors — put $20m into the company.

It's a handy pair of investors to have. While Pelican was set up in 2008, its first array cameras won't hit the market until next year — in part because mobile chipsets weren't powerful enough to cope with the processing demands the camera software puts on them.

"There's no question that what we do is computationally intensive — we need the power, or the grunt, of an 800 series Qualcomm chip. The stills aren't so intensive but the video, like 1080p 30 frames per second where you're combining and fusing images as you create the video, is very computationally intensive," Pickett said.

"It's easier to do a lot of what we're doing when we know how to optimise for particular chips. The more information we have for [software] optimisation, the closer we are to the people designing those chips, the easier it has become."

And having Nokia onboard means a likely outlet for Pelican's wares. While neither company has confirmed, or will confirm, that Nokia's devices will carry a Pelican module in the future, it's not a huge stretch to see Nokia's involvement in the company, and its interest in imaging (including a number of photo-focused flagship devices), as a signal that a high-end Lumia with an array camera is not far away. It's an idea lent further credence by recent rumours that a Lumia phablet with what's being referred to as a "Lytro-style camera" is on its way in early 2014.

"We've had strategic partners come in. That's brought us closer to those partners, and allowed us to work much more collaboratively with them. When their businesspeople and engineers know they have a vested interest in success, it tends to open up lines of communication, and that's been very helpful in speeding things up."

While Pickett won't name any customers, he will confirm the 2014 timeline.

"We're looking to have OEM handsets and tablets on the market in 2014, latter half," he said, and the company is currently "working with a number of OEMs".

Working with the supply chain

And while Pelican has its own idea of what a Pelican camera should look like, it's open to OEMs having other ideas. Instead of using the reference design, an OEM could pick a 4x5 array for near-IR, use a 1x9 array to make it fit as the front-facing camera, or choose a faster lens for a better effective resolution (for all its smarts, current Pelican arrays will turn out something with a resolution equivalent to a standard four- to five-megapixel smartphone camera).

"Our software's modular enough to handle all of those types of arrays, but it is a big job to work with those ecosystems to make sure they're available."

Thanks to having those OEMs on board, the supply chain — which now finds itself having to put together potentially millions of non-standard cameras and parts — is starting to respond. That has allowed Pelican to partner with the manufacturers, providing specs, training and technology that they can use with the products of OEMs that license Pelican software.

A number of those OEMs already have Pelican modules in hand and are testing the kit with the company's alpha software, "so they need to start scaling up, and we need to finish the optimisation on our software," Pickett said.

It's that optimisation that will take up the bulk of Pelican's efforts between now and the launch of the first devices bearing its modules next year. The 40-person company is now hiring extra staff — paid for courtesy of the recent funding round — to help with the hardware optimisation effort, along with extra bodies to build relationships with the hardware companies and OEMs.

But it's not just hardware companies that Pelican is talking to. Discussions with social networks are also ongoing, with the idea of creating software for them that can be used once the smartphones that carry its modules hit the market.

Pelican's 4x4 reference design. Image: Pelican Imaging

Those phones will come with Pelican's photo editing software, which Pickett pitches as "Photoshop-level editing, right on the mobile device right with your finger. You can literally tap on something, select an object, scale that object, extract it, change the colours of it, the background of the scene, you can post focus, or select multiple post-focus elements."

Photos and videos taken on a phone don't stay there, he adds — they end up on social networks and other online services, and so will Pelican's editing software.

And that's not the only way photos and videos will make it off phones, if Pelican has its way.

Phones and 3D printers

Through its rows of separate lenses, Pelican's cameras can build up a depth map of a scene. In Pelican's dreams of the future, you'll be able to take a short film or series of pictures with your device, then output the result to a 3D printer (or, prosaically, to your desktop). You'd be able to walk around an object with your smartphone — a toy, a sculpture, a new vase — then print out a version of it at home. Or you could model a room, or have a friend walk around you with their phone to make a digital avatar of yourself.
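The step from a depth map to printable geometry starts with back-projecting each pixel into a 3D point. A minimal sketch using a standard pinhole-camera model — the calibration numbers are illustrative, not Pelican's:

```python
import numpy as np

def depth_to_points(depth, focal_px, cx, cy):
    """Back-project a depth map into a cloud of 3D points (pinhole model).
    (u, v) are pixel coordinates; (cx, cy) is the assumed optical centre."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a flat surface half a metre from the camera.
depth = np.full((4, 4), 0.5)
points = depth_to_points(depth, focal_px=1000.0, cx=2.0, cy=2.0)
```

Walking around an object captures many such point clouds from different angles; stitching them together into a single mesh is the registration step a 3D-printing workflow would add on top.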

"There's some very interesting implications of having the ability to map and 3D model with something that's in your pocket all the time. We're thinking of a lot of ways to exploit that, and we're working with third-party developers who are as creative as we are and very interested in using that data to put forward a new solution," Pickett said.

An SDK and a set of APIs are also on the way, which will let developers make use of the camera's metadata and depth-mapping, for example, and should be available in the fourth quarter of this year.

While any commercial possibilities of such tech are way off in the distance, Pelican's been conducting its own experiments, and recently used one of its array cameras to map, model and 3D print objects in its offices. The first, according to the company, was a Kermit the Frog Pez dispenser.

But before array cameras wind up letting people create 3D selfies, there are more sober uses in the technology's future. The automotive industry is one area Pelican is hoping will adopt array cameras. For cars, the array cameras could be turned into sensors that can gather information about what's ahead of or behind the car, or map the driver's face to alert them if they're getting drowsy.

The array cameras perhaps don't have the accuracy for such uses yet: the current 4x4 Pelican camera's depth resolution is accurate to within 2mm at 20cm, while a pair of 4x4 cameras placed 4cm apart is accurate to within 7mm at 5m. In the nearer term, using the cameras and their depth-mapping capabilities for gesture control of games consoles and smart TVs is also a possibility. The company should have an announcement in the field in mid-summer.
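A standard stereo rule of thumb explains why spacing two modules 4cm apart helps at long range: depth error grows roughly with the square of distance and shrinks linearly with the baseline between lenses. A sketch with illustrative numbers (the focal length and matching error are assumptions, not Pelican's calibration):

```python
# Rule of thumb for stereo depth: dz ≈ z² × disparity_error / (focal × baseline)
focal_px = 1500.0    # illustrative focal length in pixels
disp_err_px = 0.25   # illustrative sub-pixel matching error

def depth_error_m(range_m, baseline_m):
    """Approximate depth uncertainty at a given range and lens spacing."""
    return range_m ** 2 * disp_err_px / (focal_px * baseline_m)

# At 5m, widening the baseline from 4mm to 4cm cuts the error tenfold.
narrow = depth_error_m(5.0, 0.004)
wide = depth_error_m(5.0, 0.04)
print(f"4mm baseline: ~{narrow:.2f}m error; 4cm baseline: ~{wide:.2f}m error")
```

The quadratic growth with range is also why accuracy that is millimetre-level at 20cm degrades so quickly at car-sensing distances.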

So with all that in mind, Nokia should be poised to get its chequebook out. Why hasn't it already?

"I don't think the company's for sale," Pickett says. "The investors understand what they've invested in, and they're very excited about the future of this company and the field in general. There's no question that, much like digital took over from film photography, computational imaging will take over from standard digital imaging. Digital imaging has run out of gas."