The stored model is drawn by Panel3D's paintComponent() method. After drawing the x, y, and z axes, paintComponent() enumerates the Vector's Edge objects. For each Edge, the start and end Vertex objects are obtained, and their x, y, and z world coordinates are used to draw the world:
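The enumeration can be sketched as follows. The Vertex and Edge shapes and the drawCount() helper are assumptions based on the description, not Panel3D's actual code; the moveto()/lineto() drawing calls are indicated only as comments.

```java
import java.util.Vector;

public class ModelSketch {
    static class Vertex {
        final double x, y, z;                          // world coordinates
        Vertex(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    }

    static class Edge {
        final Vertex start, end;                       // endpoints resolved by load()
        Edge(Vertex start, Vertex end) { this.start = start; this.end = end; }
    }

    // Enumerate the Vector's Edges the way paintComponent() does,
    // returning how many lines would be drawn.
    static int drawCount(Vector<Edge> edges) {
        int lines = 0;
        for (Edge e : edges) {
            Vertex s = e.start, t = e.end;             // moveto(s.x, s.y, s.z);
            lines++;                                   // lineto(g, t.x, t.y, t.z);
        }
        return lines;
    }

    public static void main(String[] args) {
        Vector<Edge> edges = new Vector<>();
        edges.add(new Edge(new Vertex(0, 0, 0), new Vertex(1, 1, 1)));
        System.out.println(drawCount(edges));          // 1
    }
}
```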

Although the load() method provides error checking in a parsing context, it does nothing to ensure that an edge's start and end vertices were previously defined. If an edge references a vertex that hasn't been defined, paintComponent() throws a NullPointerException.
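One way to close this hole is to validate each edge as it is parsed, assuming vertices are looked up by name. The map, vertex names, and error handling below are illustrative, not the article's code:

```java
import java.util.HashMap;
import java.util.Map;

public class EdgeCheck {
    static class Vertex { double x, y, z; }
    static class Edge {
        final Vertex start, end;
        Edge(Vertex s, Vertex e) { start = s; end = e; }
    }

    // Reject an edge whose start or end vertex was never defined, instead
    // of letting paintComponent() fail later with a NullPointerException.
    static Edge makeEdge(Map<String, Vertex> vertices, String from, String to) {
        Vertex s = vertices.get(from), t = vertices.get(to);
        if (s == null || t == null)
            throw new IllegalArgumentException("edge references undefined vertex: "
                                               + (s == null ? from : to));
        return new Edge(s, t);
    }

    public static void main(String[] args) {
        Map<String, Vertex> vertices = new HashMap<>();
        vertices.put("A", new Vertex());
        vertices.put("B", new Vertex());
        makeEdge(vertices, "A", "B");             // fine: both vertices defined
        try {
            makeEdge(vertices, "A", "C");         // "C" was never defined
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```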

World modeling could be simplified by modifying the model file format and the 3D panel's load() method to read the modeling transformation—a sequence of translations, rotations, and scalings that map object coordinates to world coordinates—from the model file and apply this transformation to the object coordinates of each object's vertices. You might find this an interesting advanced exercise.

World viewing

After you finish modeling a world, you'll want to view this world on the 2D display screen. The first step in viewing this world is to locate the world's objects in the eye coordinate system via the viewing transformation, a sequence of translations, rotations, and scalings that map world coordinates to eye coordinates.

An example is the best way to understand the viewing transformation. The example I've chosen places the eye coordinate system's viewpoint at world point (6, 8, 7) and points the eye coordinate system's positive z axis at the (0, 0, 0) origin of the world coordinate system. Achieving this objective requires five steps that progressively establish this eye coordinate system. The first step is illustrated in Figure 3.

Figure 3. Step 1: Translate the world coordinate system to (6, 8, 7)

The first step translates the world coordinate system's (0, 0, 0) origin to point (6, 8, 7). Because the coordinate system (not a point) is being moved, the translation requires negative x, y, and z values; in source code, this operation is translate (-6, -8, -7);. The second step is shown in Figure 4.

Figure 4. Step 2: Rotate the world coordinate system around the x' axis by -90 degrees

The second step rotates the world coordinate system -90 degrees about the x' axis, so that the z' axis parallels the original y axis and points towards the xz plane. Because we are rotating a coordinate system (not a point), the rotation requires a positive angle; in source code, this operation is represented as rotateX (90.0);. The third step appears in Figure 5.

Figure 5. Step 3: Rotate the world coordinate system around the y' axis by 216.8 degrees, so that (0, 0, 7) lies on the z' axis

The third step rotates the world coordinate system 216.8 degrees about the y' axis so that world point (0, 0, 7) lies on the z' axis. The negative version of this angle is used because we are rotating a coordinate system; this operation is represented as rotateY (-216.8); at the source code level. The fourth step is revealed in Figure 6.

Figure 6. Step 4: Rotate the world coordinate system around the x' axis by 35 degrees, so that its (0, 0, 0) origin lies on the z' axis

The fourth step rotates the world coordinate system (approximately) 35 degrees about the x' axis so that the world's (0, 0, 0) origin lies on the z' axis. The negative version of this angle is passed to rotateX() at the source code level. This step sets the stage for the final step, which changes the direction of the z' axis so that positive z' points towards the world's origin. Check out Figure 7.

Figure 7. Step 5: Reverse the z' axis to create a left-handed coordinate system that conforms to the eye coordinate system's conventions

Figure 7 labels the axes Xe, Ye, and Ze. This labeling reminds you that the previous five steps established the eye coordinate system. Although the step descriptions spoke of moving and rotating the world coordinate system, the steps didn't actually change it: all they did was establish a transformation pipeline. This pipeline's input is a world point; its output is the equivalent eye point.

The lookFrom(double x, double y, double z) method sets up the viewing transformation. After initializing the current transformation matrix to the identity matrix (so that the first matrix multiplication is the same as multiplying a number by 1), the method performs a translation, three rotations, and a scaling:
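The five steps can be sketched end to end. Because the article's Matrix machinery isn't reproduced here, this version applies each step directly to a point; the worldToEye() name and the rotation sign conventions are assumptions chosen to match the sign choices described in the steps above.

```java
public class ViewingSketch {

    // Rotate point p about the x axis by deg degrees (assumed sign convention).
    static double[] rotateX(double[] p, double deg) {
        double r = Math.toRadians(deg), c = Math.cos(r), s = Math.sin(r);
        return new double[] { p[0], p[1]*c + p[2]*s, -p[1]*s + p[2]*c };
    }

    // Rotate point p about the y axis by deg degrees (assumed sign convention).
    static double[] rotateY(double[] p, double deg) {
        double r = Math.toRadians(deg), c = Math.cos(r), s = Math.sin(r);
        return new double[] { p[0]*c - p[2]*s, p[1], p[0]*s + p[2]*c };
    }

    // Map a world point to an eye point for a viewpoint at (x, y, z)
    // looking at the world origin -- the five steps of Figures 3 through 7.
    static double[] worldToEye(double[] p, double x, double y, double z) {
        double[] q = { p[0] - x, p[1] - y, p[2] - z };         // Step 1: translate
        q = rotateX(q, 90.0);                                  // Step 2
        double a1 = Math.toDegrees(Math.atan(x / y)) + 180.0;  // +x/+y quadrant only
        q = rotateY(q, -a1);                                   // Step 3
        double a2 = Math.toDegrees(Math.atan(z / Math.sqrt(x*x + y*y)));
        q = rotateX(q, -a2);                                   // Step 4
        q[2] = -q[2];                                          // Step 5: reverse z'
        return q;
    }

    public static void main(String[] args) {
        double[] eye = worldToEye(new double[] { 0, 0, 0 }, 6, 8, 7);
        System.out.printf("(%.3f, %.3f, %.3f)%n", eye[0], eye[1], eye[2]);
    }
}
```

Feeding the world origin through the pipeline should land it on the positive z' axis at distance sqrt(6*6 + 8*8 + 7*7) = sqrt(149), roughly 12.2, confirming Step 5's direction reversal.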

The values passed as x, y, and z can be positive or negative, and their signs affect the third step in the formation of the viewing transformation. If x and y are both positive, angle = Math.toDegrees (Math.atan (x/y)) + 180.0; executes. Using (6, 8, 7) as the viewpoint, angle = Math.toDegrees (Math.atan (6.0/8.0)) + 180.0; yields approximately 216.8, as expected.

For each of the -x/+y, +x/-y, and -x/-y quadrants, the first step orients the x', y', and z' axes in the same way as shown in Figure 3. Similarly, the second step rotates the x'/y'/z' coordinate system in the same direction as shown in Figure 4. In Step 3, the size of the rotation angle, its direction, or both change. Figure 8 reveals these steps for viewpoint (-6, 8, 7), which lies in the -x/+y quadrant.
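As an aside, the quadrant cases can be collapsed with the standard library's Math.atan2, which accounts for the signs of x and y automatically. This is an alternative to explicit quadrant tests, not the article's code, and the method name is mine:

```java
public class QuadrantAngle {
    // Step 3 rotation angle (in degrees) about the y' axis; Math.atan2
    // handles all four quadrants, reducing to atan(x/y) + 180 when both
    // x and y are positive.
    static double step3Angle(double x, double y) {
        return Math.toDegrees(Math.atan2(x, y)) + 180.0;
    }

    public static void main(String[] args) {
        // The article's +x/+y example: approximately 216.87 degrees.
        System.out.printf("%.2f%n", step3Angle(6.0, 8.0));
    }
}
```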

Perspective

The previous step in viewing the world established an eye coordinate system with an origin located at the viewpoint and with a direction of view pointed at the world's origin. This step doesn't take into account how much of the world is to be seen—the field of view. Because field of view relates to perspective, let's first examine this concept. Take a look at Figure 9.

Figure 9. Perspective is based on the distance from the viewpoint to the screen (D) and half the screen size (S)

Perspective is a way to picture objects on a flat surface so as to give the appearance of distance. This is accomplished, in the real world, via a camera's lens, which determines the amount of perspective shown in the picture. In the computer world, this is accomplished by dividing the distance from the viewpoint to the screen (D) by half the screen size (S), which results in a zoom ratio.

The perspective(double ds) method concatenates the perspective transformation to the viewing transformation so that perspective will be taken into account. After creating a 4-row-by-4-column perspective Matrix object and initializing it with the D/S zoom ratio, this method multiplies the current transformation matrix by the perspective matrix and repaints the 3D panel:
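A plausible sketch of the matrix this method builds: x and y are scaled by the D/S ratio while z passes through unchanged, which is consistent with the clipping conditions given later. The exact layout of the article's Matrix class is an assumption.

```java
public class PerspectiveSketch {
    // Assumed 4x4 layout: the D/S zoom ratio on the x and y diagonal entries.
    static double[][] perspectiveMatrix(double ds) {
        return new double[][] {
            { ds, 0,  0, 0 },
            { 0,  ds, 0, 0 },
            { 0,  0,  1, 0 },
            { 0,  0,  0, 1 }
        };
    }

    // Multiply the homogeneous point (x, y, z, 1) by the matrix.
    static double[] apply(double[][] m, double x, double y, double z) {
        double[] p = { x, y, z, 1 }, out = new double[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += m[r][c] * p[c];
        return out;
    }

    public static void main(String[] args) {
        double[] p = apply(perspectiveMatrix(2.0), 3.0, 4.0, 5.0);
        System.out.printf("(%.1f, %.1f, %.1f)%n", p[0], p[1], p[2]); // (6.0, 8.0, 5.0)
    }
}
```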

It's more convenient to think in terms of field of view, an angle that determines how much of the world is viewable, than in terms of the D/S zoom ratio. This angle ranges from 0 degrees to 180 degrees. As the angle increases, you observe more of the world, which corresponds to a smaller D/S zoom ratio. Figure 10 shows how the field-of-view angle relates to D/S.

Figure 10. Relating field-of-view angle to D/S zoom ratio
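From Figure 9's geometry, half the field-of-view angle satisfies tan(fieldOfView / 2) = S / D, so the D/S zoom ratio is its reciprocal. A small sketch (the method name is mine):

```java
public class FieldOfView {
    // Convert a field-of-view angle (degrees) to the D/S zoom ratio.
    static double zoomRatio(double fovDegrees) {
        return 1.0 / Math.tan(Math.toRadians(fovDegrees) / 2.0);
    }

    public static void main(String[] args) {
        System.out.printf("%.3f%n", zoomRatio(90.0));   // 1.000 -- D equals S
        System.out.printf("%.3f%n", zoomRatio(120.0));  // wider view, smaller D/S
    }
}
```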

Clipping

After applying perspective to a world (in terms of the eye coordinate system), the portion of the world that should not be seen when projected onto the 2D display screen must be clipped. This is accomplished in Panel3D's private void lineto(Graphics g, double x, double y, double z) method, after it and the private void moveto(double x, double y, double z) method have transformed the line's endpoints:

Clipping is best understood in terms of points. If point (xe, ye, ze) lies within the viewing pyramid (that portion of the eye coordinate system in which objects can be seen by the viewer)—see Figure 11—the point is displayed; otherwise, the point is rejected. To be displayed, the point must satisfy conditions -ze <= (D/S)xe <= +ze and -ze <= (D/S)ye <= +ze.

Figure 11. The visible region of the eye coordinate system is located within the viewing pyramid
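The point-visibility conditions can be written out directly (the method name is mine, and ds is the D/S zoom ratio from the perspective step):

```java
public class PointTest {
    // A point (xe, ye, ze) is visible when -ze <= (D/S)xe <= +ze and
    // -ze <= (D/S)ye <= +ze, i.e., it lies within the viewing pyramid.
    static boolean visible(double xe, double ye, double ze, double ds) {
        return -ze <= ds * xe && ds * xe <= ze
            && -ze <= ds * ye && ds * ye <= ze;
    }

    public static void main(String[] args) {
        System.out.println(visible(1, 1, 5, 2.0));   // true: inside the pyramid
        System.out.println(visible(4, 1, 5, 2.0));   // false: (D/S)x exceeds z
    }
}
```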

Although testing individual points is trivial, it's also time-consuming. It's much faster to test each line's endpoints than all of a line's points. Through a repeated search that involves endpoint testing, a line is clipped against the viewing pyramid's limits and the visible portion's endpoints are found. To facilitate this task, a clipping coordinate system is introduced in terms of the eye coordinate system, via the matrix operation below:
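The matrix operation isn't reproduced here. A plausible form, consistent with the perspective matrix and with the clipping conditions that follow, simply scales x and y by D/S, as this hypothetical helper shows:

```java
public class ClipCoords {
    // Map an eye point to assumed clipping coordinates: xc = (D/S)xe,
    // yc = (D/S)ye, zc = ze, so the tests become -zc <= xc <= +zc and
    // -zc <= yc <= +zc.
    static double[] toClip(double xe, double ye, double ze, double ds) {
        return new double[] { ds * xe, ds * ye, ze };   // (xc, yc, zc)
    }

    public static void main(String[] args) {
        double[] c = toClip(1.0, 2.0, 5.0, 2.0);
        System.out.printf("(%.1f, %.1f, %.1f)%n", c[0], c[1], c[2]); // (2.0, 4.0, 5.0)
    }
}
```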

My textbook's 3D-clipping algorithm refers to a line's endpoints in terms of the clipping coordinate system. Essentially, it locates the visible portion of a line such that conditions -zc <= xc <= +zc and -zc <= yc <= +zc are satisfied for each of the line's two endpoints. To perform this task, the algorithm first classifies each endpoint according to a 4-bit code:

First bit: xc is to the pyramid's left: xc < -zc

Second bit: xc is to the pyramid's right: xc > zc

Third bit: yc is below the pyramid: yc < -zc

Fourth bit: yc is above the pyramid: yc > zc

The line lies entirely within the viewing pyramid (and is displayed) if both codes are zero. If the bitwise AND (logical intersection) of the codes is nonzero, the line lies entirely outside the pyramid and is rejected. Otherwise, the line crosses one or more pyramid planes; its intersection with each plane is calculated, and the clipping algorithm repeats.
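The endpoint codes and the two trivial tests can be sketched as follows; the bit values and names are assumptions, and the intersection-and-repeat loop of the full algorithm is omitted:

```java
public class Outcode {
    static final int LEFT = 1, RIGHT = 2, BELOW = 4, ABOVE = 8;

    // Classify an endpoint (xc, yc, zc) against the viewing pyramid.
    static int code(double xc, double yc, double zc) {
        int c = 0;
        if (xc < -zc) c |= LEFT;    // first bit: left of the pyramid
        if (xc >  zc) c |= RIGHT;   // second bit: right of the pyramid
        if (yc < -zc) c |= BELOW;   // third bit: below the pyramid
        if (yc >  zc) c |= ABOVE;   // fourth bit: above the pyramid
        return c;
    }

    public static void main(String[] args) {
        int c1 = code(1, 1, 5), c2 = code(2, -1, 5);
        if ((c1 | c2) == 0)
            System.out.println("trivially accepted");    // both codes zero
        else if ((c1 & c2) != 0)
            System.out.println("trivially rejected");    // shared outside bit
        else
            System.out.println("must clip against planes");
    }
}
```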