I recently came across "Camera Depth" under render passes in Maya. I thought to myself, "Why would anyone need to render camera depth as a separate pass? Even if you want to create depth of field, it can be done with Maya cameras, as I've previously tested [CLICK HERE for my Depth of Field test video]. In a practical situation, is there really any significant difference between generating depth of field in Maya and in a compositing application?"

So I started running tests. I created a scene in Maya with primitives but applied a number of different shaders, set up image-based lighting, attached an exposure-correction lens shader to my renderCam, and rendered using mental ray. This is what I got:

RGBA Render

NOTE: I purposely disabled primary visibility on the HDR image. I only wanted to use it for lighting, not for display in my final render.

I proceeded to render a Camera Depth pass:

Camera Depth Render Pass
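Under the hood, a depth pass simply stores each pixel's distance from the camera. Compositing tools generally expect that remapped to a 0–1 gradient between the near and far clip planes. A minimal sketch of the remap (the function name and NumPy usage are my own illustration, not anything from Maya's API):

```python
import numpy as np

def normalize_depth(z, near, far):
    """Remap raw camera-space depth to a 0-1 gradient
    (0 = near clip plane, 1 = far clip plane)."""
    return np.clip((z - near) / (far - near), 0.0, 1.0)
```

Anything beyond the far clip plane simply clamps to 1.0, which is why distant backgrounds read as flat white in a typical depth pass.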

Before moving on to compositing, I wanted to create the depth of field effect in Maya itself. So I enabled the Depth of Field option on my renderCam and made a test render to establish a starting point. After two and a half long minutes, I ended up with this:

Depth of Field enabled in Maya's camera

OKAY!! So that's why they have a camera depth pass! Though it's possible to create DOF in Maya, it'll take excruciatingly long to render out an animation, and even longer to fine-tune!
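Part of why the in-camera version is so slow to fine-tune: the renderer derives blur size per sample from the thin-lens circle-of-confusion model, driven by the camera's focal length, f-stop, and focus distance, so every tweak means a full re-render. A hedged sketch of that standard formula (a helper of my own, not Maya's API):

```python
def coc_diameter(focal_len, f_stop, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter.
    All distances in the same units (e.g. mm); result in those units."""
    aperture = focal_len / f_stop  # physical aperture diameter
    return (aperture * focal_len * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_len)))
```

At the focus distance the diameter is zero (perfectly sharp), and it grows as a subject moves away from the focal plane, which is exactly the falloff the depth-pass trick approximates in post.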

So I moved on, opening my first two images in an After Effects comp and adding Camera Lens Blur to the RGBA render, with the Camera Depth pass set as the blur map layer. After a few minutes of work, I was able to create this:
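The trick Camera Lens Blur performs can be approximated in a few lines: pre-blur the image at several radii, then pick per pixel based on the normalized depth value. A toy NumPy sketch (a box blur stands in for AE's aperture-shaped kernel; names and parameters are purely illustrative):

```python
import numpy as np

def box_blur(img, r):
    """Naive box blur of a 2D image with radius r (r = 0 returns a copy)."""
    if r == 0:
        return img.astype(float).copy()
    k = 2 * r + 1
    padded = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def depth_blur(img, depth, max_radius=4, levels=5):
    """Per-pixel blur driven by a normalized depth map (0 = in focus, 1 = far)."""
    bins = np.clip((depth * levels).astype(int), 0, levels - 1)
    out = np.zeros(img.shape, dtype=float)
    for i in range(levels):
        r = round(max_radius * i / (levels - 1))
        mask = bins == i
        if mask.any():
            out[mask] = box_blur(img, r)[mask]
    return out
```

Because the blur is just a 2D image operation on an already-rendered frame, changing the focus falloff takes seconds instead of a multi-minute re-render per frame, which is the whole payoff of rendering the depth pass.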