I am an OpenGL newbie. I am developing a Maya plugin and I cannot get glEnable(GL_DEPTH_TEST) to have any effect. Here is a project summary:

Extract RGBA frame data dynamically from an MPEG-4 movie file. Display the image in a Maya viewport based on the position of a custom locator that is a child of that viewport's camera. If the locator does not belong to that camera/view, the pixel draw is skipped in that viewport. The image size is a user-input Maya attribute and is scaled relative to the viewport dimensions. The image data is updated dynamically as the user scrubs the timeline, and also on viewport resizing.

Initially I used the Maya API method M3dView::writeColorBuffer(..). I believe that writeColorBuffer is a wrapper around glDrawPixels(). Everything worked, but I was unable to implement z-depth occlusion, and I was advised to switch to raw OpenGL. I successfully refactored my code to use OpenGL calls to create a texture from the extracted pixel data and assign it to quad vertices. The refactored code works, except that GL_DEPTH_TEST doesn't seem to do anything.

Because of how this project evolved, I am doing everything in screen coordinates.

I have a quad that is partially behind another object. If depth testing were working, I would see only the unoccluded part of the quad with its textured image.

My code explicitly calls: glEnable(GL_DEPTH_TEST);

The quad renders entirely with this statement:
glDepthFunc(GL_LESS);

The quad does not render at all with this statement:
glDepthFunc(GL_GREATER);

Since the quad passes GL_LESS everywhere and fails GL_GREATER everywhere, every fragment of the quad must have a depth strictly less than whatever is already in the depth buffer. I am wondering if this is a clue that my zNear/zFar units are wrong in my glOrtho call, and/or that the hudZ_depth value I pass to glVertex3f during quad creation is wrong.
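To sanity-check that suspicion, here is a small helper I wrote that reproduces the depth mapping glOrtho performs on paper, without needing a GL context. It assumes an identity modelview matrix and the default glDepthRange(0, 1); zEye stands for the z value passed to glVertex3f (my hudZ_depth):

```cpp
#include <cassert>
#include <cmath>

// Window-space depth that glOrtho(l, r, b, t, zNear, zFar) produces for a
// vertex at eye-space z, assuming an identity modelview matrix and the
// default glDepthRange(0, 1).
// glOrtho maps eye z = -zNear to NDC z = -1 and eye z = -zFar to NDC z = +1.
double orthoWindowDepth(double zEye, double zNear, double zFar)
{
    // Third row of the glOrtho projection matrix applied to (x, y, zEye, 1):
    double zNdc = (-2.0 * zEye - (zFar + zNear)) / (zFar - zNear);
    // Default glDepthRange(0, 1) maps NDC [-1, 1] to window depth [0, 1]:
    return (zNdc + 1.0) * 0.5;
}
```

For example, with a typical screen-space setup like glOrtho(0, width, 0, height, -1, 1), a vertex at z = 0 lands at window depth 0.5, and a vertex at z = 1 lands at depth 0.0, i.e. in front of everything, so it would pass GL_LESS against any occluder; a z value outside [-1, 1] falls outside the depth range and gets clipped. If my hudZ_depth is expressed in scene units while zNear/zFar are something like -1/1, that would produce exactly the symptom I am seeing.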