I have to write my own software 3D rasterizer, and so far I am able to project my 3D model made of triangles into 2D space.

I rotate, translate and project my points to get a 2D representation of each triangle.
Then I take the triangle's 3 projected points and run the scanline algorithm (using linear interpolation) to find all the points along the left and right edges of the triangle, so that I can scan it horizontally, row by row, and fill it with pixels.
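For reference, the edge-walking fill described above can be sketched like this (an illustrative Python sketch; the function names and the `set_pixel` callback are made up here, not taken from the actual project):

```python
def interp(y, y0, y1, v0, v1):
    """Linearly interpolate a value between v0 (at y0) and v1 (at y1)."""
    return v0 + (y - y0) / (y1 - y0) * (v1 - v0)

def fill_triangle(p0, p1, p2, set_pixel):
    """Scanline-fill a 2D triangle: for each row, find the x of the
    left and right edges by linear interpolation, then fill between them."""
    # Sort vertices top to bottom.
    p0, p1, p2 = sorted((p0, p1, p2), key=lambda p: p[1])
    if p2[1] == p0[1]:
        return  # degenerate: zero-height triangle
    for y in range(int(p0[1]), int(p2[1]) + 1):
        # One side is always the long edge p0-p2 ...
        xa = interp(y, p0[1], p2[1], p0[0], p2[0])
        # ... the other side is p0-p1 above the middle vertex, p1-p2 below.
        if y < p1[1] and p1[1] != p0[1]:
            xb = interp(y, p0[1], p1[1], p0[0], p1[0])
        elif p2[1] != p1[1]:
            xb = interp(y, p1[1], p2[1], p1[0], p2[0])
        else:
            xb = p1[0]
        for x in range(int(min(xa, xb)), int(max(xa, xb)) + 1):
            set_pixel(x, y)
```

The same `interp` helper is what later carries any per-vertex attribute (like z, or rather 1/z) down the edges alongside x.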

This works. Except I also have to implement z-buffering. That means that, knowing the rotated & translated z coordinates of the triangle's 3 vertices, I must interpolate the z coordinate for every other point my scanline algorithm finds.

The concept seems clear enough: I first find Za and Zb with these calculations:

Then, if the current z is closer to the viewer than the previous z at that index, I write the color to the color buffer AND the new z to the z-buffer. (My coordinate system is x: left -> right; y: top -> bottom; z: your face -> computer screen.)
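In code, the depth test described above amounts to something like this (a minimal sketch; the buffer sizes and the `plot` helper are made up for illustration):

```python
import math

WIDTH, HEIGHT = 320, 240
z_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]   # +inf = nothing drawn yet
color_buffer = [[None] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Draw a pixel only if it is closer to the viewer than whatever
    was drawn there before (smaller z = closer, since z increases
    away from the viewer toward the screen)."""
    if z < z_buffer[y][x]:
        z_buffer[y][x] = z
        color_buffer[y][x] = color
```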

The problem is, it goes haywire.
The project is here; if you select the "Z-Buffered" radio button, you'll see the results... (Note that in "Z-Buffered" mode I use the painter's algorithm only to draw the wireframe, for debugging purposes.)

PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears to make no change.
What am I missing? (Could someone clarify precisely where you must turn z into 1/z, and where, if at all, to turn it back?)

[EDIT] Here's some data on what maximum and minimum z values I get:

max z: 1; min z: -1; // <-- obvious: the original z of the triangle vertices (I think it's called w)
max z: 7.197753398761272; min z: 3.791703256899924; // <-- z of the points drawn to screen (after rotation and translation) by the scanline with z-buffer, obtained by interpolating z, not 1/z
max z: 0.2649908532179404; min z: 0.13849507306889008; // <-- same as above, except I interpolated 1/z instead of z
// Yes, I am aware that changing z to 1/z means flipping the comparison in the zBuffer check; otherwise nothing gets drawn.

Before I go into painstaking debugging, can someone confirm that my concept so far is correct?

[EDIT2]

I have solved the z-buffering. As it turns out, the drawing order wasn't messed up at all. The z coordinates were being calculated correctly.

The problem was that, in an attempt to increase my frame rate, I was drawing 4px × 4px boxes at every 4th pixel instead of individual pixels on screen. So I was drawing 16 pixels at a time, but checking the z-buffer for only one of them. I'm such a boob.

TL;DR: The question still stands: how/why/when do you have to use the reciprocal of z (as in 1/z) instead of z? Because right now everything works either way; there's no noticeable difference.

Re: "And of course I add to the zBuffer, if current z is closer to the viewer than the previous value at that index." It's not clear to me that you worded that how you meant to, but I wanted to make sure you meant: "if the current z is closer to the viewer than the previous z at that index, THEN write the color to the color buffer AND write the new z to the z-buffer." The z-buffer's purpose is to block color writes when a color at that pixel was already written closer to the camera eye.
– Alturis, Oct 4 '12 at 2:09

That is correct. Sorry, it was late when I worded my question. I will revise.
– Twodordan, Oct 4 '12 at 10:54

1 Answer

Quick answer: Z is not a linear function of (X', Y'), but 1/Z is. Since you interpolate linearly, you get correct results for 1/Z, but not for Z.

You don't notice because, as long as the comparison between Z1 and Z2 comes out right, the z-buffer will do the right thing even if both values are wrong. You will definitely notice when you add texture mapping (and, to answer the question you'll have then: interpolate 1/Z, U/Z and V/Z, and reconstruct U and V from those values: U = (U/Z)/(1/Z), V = (V/Z)/(1/Z). You'll thank me later.)
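The U/Z, V/Z recipe from that parenthesis can be sketched as follows (a Python illustration with assumed endpoint data, not code from the answer):

```python
def perspective_correct_uv(t, a, b):
    """Perspective-correct texture lookup across a screen-space span.
    a and b are (z, u, v) at the span's endpoints; t in [0, 1] is the
    fraction along the span in *screen* space.  1/z, u/z and v/z are
    all linear in screen space, so interpolate those and divide back."""
    za, ua, va = a
    zb, ub, vb = b
    inv_z    = (1 - t) / za      + t / zb
    u_over_z = (1 - t) * ua / za + t * ub / zb
    v_over_z = (1 - t) * va / za + t * vb / zb
    return u_over_z / inv_z, v_over_z / inv_z
```

At the endpoints this returns the endpoint (u, v) exactly; in between, the texture coordinates crowd toward the nearer endpoint, which is precisely the foreshortening that naive linear interpolation of U and V misses.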

An example. Get a piece of paper. Top-down view, so forget the Y coordinate. X is the horizontal axis, Z is the vertical axis, the camera is at (0, 0), the projection plane is z=1.

Consider the points A(-2, 2) and B(2, 4). The mid point M of the segment AB is (0, 3). So far so good.

You project A into A': X' = X/Z = -1, so A' is (-1, 1). Likewise, B' is (0.5, 1). But note that the projection of M is (0, 1), which is NOT the midpoint of A'B'. Why? Because the right half of the segment is farther away from the camera than the left half, so it looks smaller.

Why? Because for every step in the X' direction, you don't move the same amount in the Z direction (or, in other words, Z is not a linear function of X'). Why? Because the more you go right, the farther away the segment is from the camera, so one pixel represents a longer distance in space.

Finally, what happens if you interpolate 1/Z instead? First you compute 1/Z at A and B: 0.5 and 0.25 respectively. Then you interpolate: dx = (0.5 - -1) = 1.5, dz = (0.25 - 0.5) = -0.25, so at X' = 0 you compute 1/Z = 0.5 + (-0.25/1.5)*(0 - -1) = 0.3333. But that's 1/Z, so the value of Z is... exactly, 3. As it should be.
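The arithmetic of that example can be replayed in a few lines of Python (same numbers as above, nothing new assumed):

```python
def project(x, z):
    # Camera at the origin, projection plane at z = 1.
    return x / z

ax, az = -2.0, 2.0   # A
bx, bz =  2.0, 4.0   # B

axp, bxp = project(ax, az), project(bx, bz)   # A' = -1.0, B' = 0.5

# Fraction along A'B' at which X' = 0 (the projection of M).
t = (0.0 - axp) / (bxp - axp)

inv_z = (1 - t) / az + t / bz     # interpolate 1/Z linearly
print(1 / inv_z)                  # ≈ 3.0, the true Z of M(0, 3)

z_wrong = (1 - t) * az + t * bz   # interpolating Z directly
print(z_wrong)                    # ≈ 3.33, which is wrong
```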

Oh, and regarding "when": compute the 1/Z values before starting to rasterize the triangle (e.g. just before the vertical loop), so you get interpolated 1/Z at the left and right of the scanline. Interpolate these linearly (do NOT do 1/Z again - the interpolated values are already 1/Z!), and undo the transform just before checking the zbuffer.
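Put together, one scanline of the inner loop might look like this (an illustrative sketch; `plot` stands for whatever depth-tested pixel write the renderer uses):

```python
def draw_span(y, x_left, x_right, inv_z_left, inv_z_right, plot):
    """Rasterize one scanline.  inv_z_left/right are the 1/z values
    already interpolated down the triangle edges; interpolate them
    linearly across the span, and only flip back to z at the end,
    right before the z-buffer comparison."""
    if x_right == x_left:
        plot(x_left, y, 1.0 / inv_z_left)
        return
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / (x_right - x_left)
        inv_z = inv_z_left + t * (inv_z_right - inv_z_left)
        plot(x, y, 1.0 / inv_z)   # undo the reciprocal just before the test
```

(Alternatively, one can store 1/z in the buffer itself and skip the division, flipping the comparison instead, as the question's EDIT notes.)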
– ggambett, Oct 4 '12 at 20:50

And finally, why. A plane (the one the triangle is embedded in) is Ax + By + Cz + D = 0; z is clearly a linear function of (x, y). You project, so x' = x/z and y' = y/z. From there, x = x'z and y = y'z. If you substitute these into the original equation you get Ax'z + By'z + Cz + D = 0. Now z = -D / (Ax' + By' + C), where it's clear that z is not a linear function of (x', y'). But 1/z is then (Ax' + By' + C) / -D, which IS a linear function of (x', y').
– ggambett, Oct 4 '12 at 20:57
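The derivation in that comment is easy to check numerically (the plane coefficients below are chosen arbitrarily for the check):

```python
# A plane Ax + By + Cz + D = 0; coefficients picked arbitrarily.
A, B, C, D = 1.0, 2.0, 3.0, -12.0

def z_from_screen(xp, yp):
    # z = -D / (A x' + B y' + C), as derived in the comment.
    return -D / (A * xp + B * yp + C)

p0, p1 = (0.0, 0.0), (1.0, 0.5)
mid = (0.5, 0.25)   # screen-space midpoint of p0 and p1

z0, z1, zm = z_from_screen(*p0), z_from_screen(*p1), z_from_screen(*mid)

print(abs(zm - (z0 + z1) / 2) < 1e-9)            # False: z is NOT linear in (x', y')
print(abs(1/zm - (1/z0 + 1/z1) / 2) < 1e-9)      # True: 1/z IS linear in (x', y')
```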

You know, I read quite a few articles and courses, and none of them were quite as clear as your answer. For posterity, I will also note that "The letters "U" and "V" denote the axes of the 2D texture because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space. UV texturing permits polygons that make up a 3D object to be painted with color from an image." - Wikipedia - UV Mapping
– Twodordan, Oct 4 '12 at 21:02

Glad to hear that. I did, in fact, teach Computer Graphics in a previous life :)
– ggambett, Oct 4 '12 at 21:04

Thanks so much - I've always been curious about this - and I don't know if I would have ever found a better answer! +1
– AUTO, Feb 18 '13 at 3:53