The first low-resolution rendering of the heightfield in Terragen, with Pico de Orizaba in the background. Foreground elevation is about 4,000 meters above sea level.

We’ll use Terragen to generate our landscape. Terragen is a fractal-based modeling application used to render photorealistic environments. With it, you can create imaginary landscapes or landscapes based on the real world.

The low-resolution rendering here shows what we see after importing the EarthExplorer DEM (digital elevation model) file and locating the HAWC Observatory site. The camera is facing northeast toward Pico de Orizaba, a dormant volcano with an altitude of 5,636 meters (18,491 feet) above sea level.

Now, a technical challenge: I need a level area on which to install the HAWC water tanks. Flat places do not occur by themselves in Terragen, and as a fairly new user of the application I’m not sure how to go about making one.

Terragen’s interface is daunting at first. The 3D display renders, at best, a low-resolution preview of the scene:

A look at our scene via the 3D view in Terragen’s interface.

But that’s OK because the real work is done under the hood in the application’s node editor.

How part of our scene looks when viewed through Terragen’s node editor.

Each box in the display represents a “node,” a package of functions and data. Each node is connected to other nodes in series. You start at the top with the raw landscape, in our case the heightfield generated by the DEM. Each node in turn adds more information – rough and smooth areas, grass, rocks, trees, and finally sunlight and clouds – to generate the finished scene. Simply put, you create the scene by plugging in nodes and editing their functions and outputs.
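Terragen’s internals aren’t public, so the following is only a toy Python sketch (all names hypothetical, not Terragen’s actual API) illustrating the idea described above: each node wraps a function that adds information to the scene, and the nodes are evaluated in series from the raw heightfield down to the finished scene.

```python
# Toy illustration of a node chain -- NOT Terragen's real API.
# Each node wraps a function that takes scene data and returns an
# augmented copy; connected nodes are evaluated in series.

class Node:
    def __init__(self, name, func):
        self.name = name
        self.func = func    # transforms the scene dictionary
        self.child = None   # next node downstream

    def then(self, node):
        """Connect a downstream node; return it so calls can chain."""
        self.child = node
        return node

    def evaluate(self, scene):
        scene = self.func(dict(scene))  # each node adds information
        return self.child.evaluate(scene) if self.child else scene

# Start with the raw landscape, then layer on surface detail and light.
heightfield = Node("heightfield", lambda s: {**s, "terrain": "DEM data"})
surface = Node("surface", lambda s: {**s, "rocks": True, "grass": True})
lighting = Node("lighting", lambda s: {**s, "sunlight": True, "clouds": True})

heightfield.then(surface).then(lighting)
final_scene = heightfield.evaluate({})
```

The point of the sketch is the shape of the workflow: editing a scene means plugging nodes into the chain and adjusting the functions they carry, rather than painting pixels directly.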

Once you get used to it, this is a quick and intuitive way to work. But when the nodes don’t cooperate, it can stop you dead in your tracks.

I’m stuck and end up posting a query on the Terragen user forum. Within a day or two I get responses from an expert user and a Planetside employee. The solution (of course) is deceptively simple, and after much additional trial and error I finally achieve flatness. Brilliant! And during the back-and-forth I’ve discovered another, nearly as simple solution. There’s more than one way to skin this cat.

New low-resolution rendering with the observatory site leveled and ready for construction.

The site location is based on Google Maps and Landsat imagery (also available from USGS EarthExplorer), and I’m confident that it’s placed within a few meters of where it should be. The water tank array, modeled in Maya, is imported into the scene and set in place.

Landscape with water tank array (modeled in Maya) inserted into the scene.

This rendering provides the basis of our comprehensive layout, or “comp,” a rough design of the entire graphic. Headlines and callouts are represented by greeked text, and a simple arrow indicates the position of the gamma-ray air shower. The comp is a proof-of-concept document that can be shown to the magazine editor and the writer who is working on the story. Once the concept is approved we can move ahead and finish the graphic.

The “comp” is a very rough draft that allows us to check the alignment and position of the background rendering and to consider the text needed and flow of information in the final illustration.