"Backface culling" GL_LINES

I have a dynamically generated "grid" (more of a lattice, if anything) with variable distances between lines that I'm drawing via VBO and GL_LINES. All the lines are in the same plane, and they are offset somewhat from the modelview's center of rotation (which appears in the center of the screen).

I'm allowing the user to rotate the modelview freely, and I have a perspective projection matrix in place.

Is there any way I can piggyback on OpenGL's backface culling and/or matrix system to hide one side of this grid? I'm currently using a huge kluge: unprojecting a screen-perpendicular vector with gluUnProject and taking the dot product of the resulting vector with the grid's normal. It doesn't work as well as I'd hoped, because of the perspective projection.

I've considered a large quad with a texture or drawing individual lines with polys, but these approaches don't preserve the thickness of the lines as they are rotated. I think it might be possible with a custom shader and some linear interpolation, but I'm not sure.

I'm not sure I really understand your problem...
The plane (which contains all the lines) is either visible or not visible (culled or not culled), even with a perspective projection.
So when you want to draw your grid, just compute the visibility of the plane and decide (client side) whether or not to draw the grid (the lattice, the set of lines in that plane).
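A minimal client-side sketch of that test, in plain Python with hypothetical names. It assumes you know the camera's world-space position, one point on the grid plane, and the plane's normal:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_front_facing(eye, point_on_plane, normal):
    """True if the eye sits on the side the normal points toward,
    i.e. the grid's front face is toward the camera."""
    to_eye = tuple(e - p for e, p in zip(eye, point_on_plane))
    return dot(to_eye, normal) > 0

# Camera at the origin, plane 5 units ahead facing the camera (+z normal):
print(plane_front_facing((0, 0, 0), (0, 0, -5), (0, 0, 1)))  # True
```

If this returns False, skip the draw call for the grid entirely.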

If you want to do this on the device side (GPU),
you can simply add normal information to every vertex that defines your lines (the plane normal)
and compute the culling in a vertex shader (just transform the normal with the normal matrix, do a dot product, and compare the sign).
It should work (I think).
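Here is the same per-vertex math written out in plain Python standing in for the GLSL, just to show the eye-space version of the test. The sketch assumes `mv` is a hypothetical row-major 4x4 modelview made of rotation plus translation only, so its upper-left 3x3 doubles as the normal matrix (no inverse-transpose needed):

```python
def eye_space_front_facing(mv, vertex, normal):
    # Transform the vertex position into eye space (rotation + translation).
    p = [sum(mv[i][j] * vertex[j] for j in range(3)) + mv[i][3] for i in range(3)]
    # Transform the normal with the rotation part only.
    n = [sum(mv[i][j] * normal[j] for j in range(3)) for i in range(3)]
    # In eye space the camera sits at the origin, so the eye-space position
    # itself is the view vector; front-facing if it points against the normal.
    return sum(p[i] * n[i] for i in range(3)) < 0

# Modelview that just pushes the scene 5 units down -z:
mv = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]
print(eye_space_front_facing(mv, (0, 0, 0), (0, 0, 1)))  # True
```

In a vertex shader you would do the same dot product and pass the sign to the fragment stage (or collapse back-facing vertices) to discard one side.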

The problem is that in a perspective projection, computing visibility isn't as simple as I'd like. In an orthographic projection, all the "vision vectors" are parallel, so you need only pick one and take its dot product with the polygon normal. In perspective, however, the "vision vectors" are not parallel (when considered in world coordinates), so that method doesn't work as well.

Your post got me thinking: if you apply backface culling before applying the perspective matrix, will it work? I'm trying to figure out how OpenGL does it on the GPU, and that seems to be the only logical way. If so, I'll probably try to break it down with the remarkably handy matrix functions gluProject and gluUnProject.

Originally Posted by yoyonel

If you want to do this on the device side (GPU),
you can simply add normal information to every vertex that defines your lines (the plane normal)
and compute the culling in a vertex shader (just transform the normal with the normal matrix, do a dot product, and compare the sign).
It should work (I think).

Good Luck,
YoYo

This might work, actually, but I'm not sure if it will give anti-aliased lines. I will give it a try if my client-side method proves too slow.

Thanks for your help. I'll post how well this turns out later.

EDIT: As of this edit, I haven't made much progress, so for the time being I'm going to change my approach so I won't have to do this in the first place. I still prefer it the old way, though, so I'll still gladly take any suggestions anybody may have.

Following your advice, I went back to the math and found that a little reworking, with the "vision vector" pointing at one of the vertices of the plane, and then taking the dot product of that vector with the plane's normal, gave me the result I wanted.
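For anyone who finds this later, here's a tiny numeric illustration of the difference, in plain Python with made-up values (eye-space convention: camera at the origin looking down -z):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

vertex = (10.0, 0.0, -5.0)    # a grid vertex, well off-center
normal = (-1.0, 0.0, -0.3)    # the grid plane's normal, tilted

# Orthographic-style test: one fixed "vision vector" straight into the screen.
fixed_dir = (0.0, 0.0, -1.0)
ortho_says_front = dot(fixed_dir, normal) < 0   # False here

# Perspective-correct test: the vector from the eye to an actual grid vertex.
to_vertex = vertex  # eye is at the origin, so the position is the view vector
persp_says_front = dot(to_vertex, normal) < 0   # True here
```

With an off-center, tilted plane the two tests disagree, and it's the per-vertex vector that matches what you actually see on screen.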

Turns out I was just too focused on getting my old (orthographic-projection) Windows GDI+ code to work with OpenGL.