I have a mesh file of a 3D model with n points and m triangles. I rendered this model using OpenGL and saved it as an image.

How can I make the correspondence between each pixel and point? I know OpenGL has the gluUnProject function that converts window coords to object coords. But if I have more points than pixels, how do I determine the correspondence?

Given what? I don't exactly understand the problem. If you have a true 2D vector that is the result of a projection, you can't go back to 3D without additional information. I'm not very familiar with OpenGL, but if I'm not mistaken, gluUnProject requires a 3D vector (x, y, z), so that would be your image plus a z-buffer.

So, if for a point A with object coords (objX, objY, objZ), I get window coords winX=197.456, winY=207.89, winZ=0.75, can I say that point A corresponds to pixel (197, 207)? Does winZ have any function here?

Yes, you can. Understand that 3D space has virtually infinite precision, but the pixel grid making up your device window is finite. That's why you have to "snap" the X and Y coords obtained from the 3D scene to integer values (your 197 and 207) - the pixels.
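To make the snapping concrete, here's a minimal sketch. One detail worth noting: OpenGL's window origin is at the bottom-left, while most image formats put row 0 at the top, so if you saved the render as an image file you probably need to flip Y. The function name and the image-was-written-top-to-bottom assumption are mine, not from any particular library.

```python
import math

def window_to_pixel(win_x, win_y, image_height):
    """Snap floating-point window coords to integer pixel indices.

    Assumes the saved image stores row 0 at the top, while OpenGL's
    window origin is at the bottom-left, so the Y axis is flipped.
    """
    px = int(math.floor(win_x))
    py_gl = int(math.floor(win_y))       # pixel row in GL convention
    py_img = image_height - 1 - py_gl    # row in image convention
    return px, py_img

# The values from the post: winX=197.456, winY=207.89 in a
# 480-pixel-tall window snap to GL pixel (197, 207), which is
# image row 480 - 1 - 207 = 272.
```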

As for the Z, you can discard it for this purpose. That info only tells you how "far away" along the depth axis the 3D point that got projected onto your 2D window's plane is. It's not an actual 3D Z coordinate, but a scalar (usually) ranging from 0.0 to 1.0, where 0.0 means the point is on the frustum's near clipping plane and 1.0 means the point is on the far clipping plane. One caveat: with a perspective projection the mapping in between is nonlinear, so a winZ of 0.5 does not mean the point is halfway between the near and far planes; most of the depth precision is concentrated close to the near plane.
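To see how nonlinear that depth value is, here's a small sketch that converts a [0,1] depth-buffer value back to an eye-space distance, assuming a standard OpenGL perspective projection (glFrustum/gluPerspective) with the given near/far planes. The function name is mine.

```python
def depth_to_eye_distance(win_z, near, far):
    """Convert a [0,1] depth-buffer value to eye-space distance,
    assuming a standard OpenGL perspective projection with the
    given near/far clipping planes."""
    ndc_z = 2.0 * win_z - 1.0   # remap [0,1] window depth to [-1,1] NDC
    return 2.0 * near * far / (far + near - ndc_z * (far - near))

# With near=1 and far=100, a winZ of 0.5 maps to about 1.98 units
# from the eye, nowhere near halfway to the far plane.
```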

I'm curious: why do you need this? The best way of doing this probably depends on what you are going to use it for. Is it for object selection in a 3D game? If so, I would do it the way Reedbeta said and just do a ray trace.

No, it's not for object selection. Given a 3D model, I am making a synthetic image by combining some reflectance models. Then from this image, I am trying to retrieve the information (i.e. normals, albedo, IOR, etc.) of the points in the 3D model.

I already implemented the idea for establishing the pixel-to-point correspondence, and it works fine for my case.

Well, based on your posts, it's really not very clear either what your approach is or what problem you are trying to solve in the first place.

If you want to determine what point(s) on a 3D model lie underneath a specific screen point, tracing a ray from the camera through the screen point and finding the intersection(s) with the model is one way to do it. (gluUnProject is insufficient, as it just does a matrix operation that is the inverse of projection - it will take a 2D screen position and a depth, and give you the corresponding 3D position, but if you don't know the depth ahead of time it's useless.) Another way would be to rasterize the model into a buffer - either a floating-point buffer that stores the 3D position directly at each pixel, or a depth buffer which can be used to reconstruct the 3D position of a pixel on demand, via gluUnProject or equivalent.
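The raytracing approach needs a ray/triangle intersection test; a commonly used one is the Moeller-Trumbore algorithm. Here's a minimal sketch in plain Python (function names are mine; a real implementation would use vectorized math and an acceleration structure):

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    def sub(a, b):   return [a[i] - b[i] for i in range(3)]
    def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(dirn, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(dirn, qvec) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv            # distance along the ray
    return t if t > eps else None
```

To find all surfaces under a screen point, you would run this test against every candidate triangle and keep every positive t, not just the smallest one.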

There are upsides and downsides to both, so which is better will depend on how you want to use it. The raytracing approach is relatively costly in time, but cheap in memory (discounting acceleration structures you might need, such as BSP trees), is able to give sub-pixel precision, and can return all surfaces under the 2D screen point (not just the nearest one). The rasterization approach is less costly on balance if you want to query a large number of points, because you can rasterize once up-front and re-use the buffer for as many queries as you like. But it needs memory to store the buffer, the precision is limited to the resolution at which it was rasterized, and it can give you only the nearest surface at each pixel.
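For the rasterization approach, here's a toy sketch of the position-buffer idea: depth-test each fragment as it is written, so each pixel ends up holding the 3D position of the nearest surface, then answer queries in constant time. In practice you would render to a floating-point texture on the GPU rather than fill a Python list; the class and method names here are mine.

```python
class PositionBuffer:
    """Toy CPU-side position buffer: per pixel, store the 3D position
    of the nearest surface seen so far (None = background)."""

    def __init__(self, width, height):
        self.w, self.h = width, height
        self.pos = [None] * (width * height)
        self.depth = [float('inf')] * (width * height)

    def write(self, px, py, depth, position):
        """Depth-test a fragment; keep it only if it is nearer than
        what the pixel already holds."""
        i = py * self.w + px
        if depth < self.depth[i]:
            self.depth[i] = depth
            self.pos[i] = position

    def query(self, px, py):
        """Return the 3D position rasterized at this pixel, or None."""
        return self.pos[py * self.w + px]
```

This illustrates the trade-off from the paragraph above: the buffer costs memory up front, and only the nearest surface per pixel survives, but every subsequent query is a single lookup.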