PovRay for scientific illustration: diagrams to photorealism

The following is a short illustration of how PovRay (a free and powerful rendering
engine) can be used to create images suitable for all stages of scientific
research: investigation, diagrams for papers, and high quality images for
promotional purposes. The three versions below are described identically
with regard to their geometry; the differences lie only in how that geometry
is rendered. Note that the PovRay files provided here are only intended to illustrate
the rendering modes; there are external (#included) files and textures not supplied.

This is the diagram as it appeared in the
journal.
Almost all features
of PovRay are turned off: it is flat shaded, with a single light source at the camera
position, no shadows, and a camera employing an orthographic projection.
This is the style typical of scientific or mathematical illustration.

Self-organization of synapses to form local maps.
Left: polar plane representation of activity in an efferent field.
Angles 0-2pi are represented by the colours of the spectrum, repeated twice.
Middle: input map; saturation of synapses projected to the afferent
field forms a Möbius projection from the efferent field.
Distance relations are preserved, but angular relations are doubled to 0-4pi.
Right: local map; synapses within the afferent field become
saturated so as to form an intertwined mesh of bi-directional
connections, closed over 0-4pi.
[Figure legend from J.J. Wright et al. / Vision Research 46 (2006) 2703-2720]

This is similar to the renderings of the geometry that were created during
the scientific visualisation process. It is exactly the same geometry as
in figure 1 above; the only differences are the rendering style, surface
properties, lighting, and the placement of the components. Images of this
form were used to solidify the geometric concepts
and for discussion between researchers.

The following was created as a presentation form of the same geometry. It is rendered
using radiosity (MegaPov),
and the lighting is defined by an HDR lighting environment of the staff
tearoom (there are no other light sources).
Additionally, the result of the rendering was saved as an HDR image file
(Radiance format), allowing exposure changes
and filtering to be applied in post production without the usual artefacts
encountered when using a limited 8 bits per r,g,b colour space.
The page in the scene is how the diagram in figure 1 appeared in the draft version of the
published paper. The final image was rendered at
6400 x 4800 pixels for a high resolution photographic print.

Using POVRay as a volume renderer

A complete example using the MRI data
illustrated on the right: mri.tar.gz
Render with: povray +Imri.pov mri.ini

POVRay, from version 3.1, supports volumetric density media. This can
be used to represent arbitrary media distributions; the volumetric
values can be mapped to variations in density (using a density_map) or
colour (using a colour_map). While this was primarily introduced as a
powerful way to specify user defined media effects, it can also be used for
certain volume rendering applications.

As a simple example, the following contains a number of Gaussian smeared
points of different standard deviations. The POVRay file is here:
example.pov. Note that the density
in this example just controls the
emission in the media. While the density_map in this case is quite boring,
different mappings are possible depending on the application.
For more information on the media settings see the POVRay documentation.

A couple of simple examples adding a colour map are given below.

color_map {
[0 rgb 0]
[1 rgb <1,0,0>]
}

color_map {
[0.0 rgb 0]
[0.3 rgb <0,0,1>]
[1.0 rgb 2]
}

By adding an absorption "absorption <1,1,1>" and turning on media
interaction for the light sources, the media will cast shadows.
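As a sketch of how this might look (the df3 file name, container object, and light position here are placeholders, not part of the original example):

```pov
box {
   <0,0,0>, <1,1,1>
   pigment { colour rgbt 1 }       // invisible container surface
   hollow
   interior {
      media {
         absorption <1,1,1>        // the media now removes light passing through it
         density {
            density_file df3 "example.df3"
            interpolate 1
         }
      }
   }
}

light_source {
   <10, 20, 10>
   colour rgb 1
   // allow the media to attenuate this light so the media casts shadows
   media_attenuation on
}
```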

The format of a DF3 file is straightforward: it consists of a 6 byte
header containing the 3 dimensions of the volume (as 2 byte big-endian
integers), followed by the voxel cells. Each voxel can be represented as a
single byte (unsigned char), a 2 byte integer (unsigned short), or a 4 byte
integer (unsigned int). The 2 and 4 byte versions are written big-endian
(Intel chips are little-endian; Mac PowerPC G4/G5 machines are big-endian).
POVRay works out which data type is used from the size of the file.
The voxels are ordered with x varying the fastest, then y, and finally z
varying the slowest.

The following gives the outline of how one might create a 3D volume
and save it as a DF3 file where each voxel is a single unsigned byte.
Using floats for the volume simply makes the maths easier when creating
arbitrary functions; for many applications it is fine to use short ints
or even chars directly.
Note the endianness of the short ints in the header and the order in
which the volume voxels are written.

Frustum Clipping for PovRay

In normal rendering applications the potential exists for any piece
of the geometry making up a scene to be visible or to have some
effect on the visible part of the scene. For example, objects
that can't be directly seen may be visible in reflective
surfaces or they may cast shadows into the visible area.

It is however quite common in scientific visualisation
projects that there is simply too much data for PovRay to render directly.
One needs to get up to "tricks", for example, creating geometry at variable
resolutions depending on the distance from the camera. Another trick, the
one discussed here, is to prune away any geometry not directly visible.
The particular rendering here involved a topography model of Mars containing
over 130 million polygons. The polygonal approximation was adjusted
depending on the distance of the camera to the surface to give about
one polygon per pixel. In order to further reduce the polygons given to
PovRay, the polygons outside the view frustum were removed.

In order to compute which polygons are outside the frustum
one needs to be able to define the 4 planes making up the view frustum
against which each vertex of the polygons will be tested.
If one defines the horizontal aperture as thetah then the vertical
aperture thetav is given by the following.

thetav = 2 * atan(HEIGHT * tan(thetah/2) / WIDTH)

Where WIDTH and HEIGHT are the image dimensions.

The points p0, p1, p2, p3 making up the corners of frustum can be computed
using the C source below.

Where vp is the view position vector, vd is the unit view direction
vector, vu is the unit up vector, and right is the unit vector to
the right (the cross product between vd and vu). The 4 frustum planes
are (vp,p0,p1), (vp,p1,p2), (vp,p2,p3), (vp,p3,p0) from which the
normal of each plane can be computed.

A simple C function that determines which side of a plane a vertex
lies on might be as follows.

/*
   Determine which side of a plane the point p lies.
   The plane is defined by a normal n and a point on the plane vp.
   Return 1 if the point is on the same side as the normal,
   otherwise -1.
*/
int WhichSide(XYZ p,XYZ n,XYZ vp)
{
   double D,s;

   D = -(n.x*vp.x + n.y*vp.y + n.z*vp.z);
   s = n.x*p.x + n.y*p.y + n.z*p.z + D;
   return(s <= 0 ? -1 : 1);
}

The example below shows a portion of the landscape rendered on the
left. Moving the camera back a bit shows that the geometry (of the
whole planet) has been clipped to remove any polygons not within the view
frustum.
Note that as well as frustum clipping, back facing polygons have also
been removed.

POVRAY CSG modelling

The following image and accompanying geometry file
csg.pov illustrate the basics of the CSG
(Constructive Solid Geometry) operations supported by POVRAY.
The operations are performed on an intersecting circle and cylinder.
Note that while the union and merge appear to give the same result,
in the later the interior structure does not exist. This can be
demonstrated by doing the rendering using a transparent material.

Bump Maps in PovRay

Bump maps are a way of creating the appearance of surface detail
without needing to create additional geometric detail. This is achieved by
perturbing the normals at each point on the surface. Since the
normal is used to determine how light interacts with the surface
the appearance of the surface is affected. The effect is very
powerful, in some of the examples below it is hard to imagine
that the surface hasn't been geometrically modified.
The following illustrates some of the procedural bump maps provided
by PovRay. Not all the bump maps are shown, but the more interesting
ones are, giving a quick visual index when looking for a particular
effect. Note that some bump maps create artefacts on the curved surface
used below; this is because they are designed for planar surfaces.
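As a minimal sketch, a bump map is applied through the normal statement inside a texture (the pattern and values here are arbitrary):

```pov
sphere {
   <0, 0, 0>, 1
   texture {
      pigment { colour rgb <1, 0.9, 0.6> }
      normal { dents 0.6 scale 0.1 }   // perturbs normals only, no new geometry
      finish { specular 0.5 }
   }
}
```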

Media (PovRay 3.5)

The use of media within PovRay to achieve a particular effect can
be challenging. The process is generally one of repeated trial and
error where the result of a parameter change mostly
hard to predict. The following attempts to illustrate
a range of effects, hopefully as a starting point for the readers
own exploration.

The test scene consists of a unit sphere resting on a ground surface;
this sphere will contain the media.
There are two white lights, one directly above the sphere and the other
closer to the centre of the sphere.
In order to observe the effect of the media on objects,
cylinders are placed behind the media and spheres are placed through the
centre of the media sphere.

Media sphere

The media in these examples is bound within a sphere.
The density is varied radially using the "spherical" pattern modifier,
which returns 1 at the centre and 0 at a radius of 1 or greater.
The sphere below has all the media attributes (scattering, emission,
and absorption) set to zero, so the media has no effect.
The resulting rendering is on the right.
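A minimal sketch of such a media container (the sphere position and density map are assumptions, not the exact scene file):

```pov
sphere {
   <0, 1, 0>, 1
   pigment { colour rgbt 1 }      // invisible container surface
   hollow
   interior {
      media {
         scattering { 1, rgb 0 }  // all three attributes zero:
         emission rgb 0           // the media has no visible effect
         absorption rgb 0
         density {
            spherical             // 1 at the centre, 0 at radius 1
            density_map {
               [0 rgb 0]
               [1 rgb 1]
            }
         }
      }
   }
}
```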

Three renderings on the right illustrate setting non-zero absorption,
emission, and scattering. In each case there is no colour variation,
so white light is scattered, white light is emitted, or white light is
absorbed. Note that in the absorbing case the light passing through the
media sphere from the light above ends up being reddish. A word of
warning, scattering media can be very CPU expensive.

Multiple density

Multiple densities result in the densities being multiplied together
(compared to multiple media where the densities are added together).
In this example the goal was to have a slow transition from high
to 0 density but to have turbulent variation within the media, this
can be achieved with the following density description.
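One way such a density description might look (the pattern choices and values are illustrative):

```pov
media {
   emission 1
   density {
      spherical            // slow radial transition from 1 to 0
      density_map {
         [0 rgb 0]
         [1 rgb 1]
      }
   }
   density {
      bozo                 // multiplied with the above to add
      turbulence 1         // turbulent variation within the media
      scale 0.2
      density_map {
         [0 rgb 0.1]
         [1 rgb 1]
      }
   }
}
```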

Colour variation can be added throughout the media by adding
a colour map. Because there are two density sections the effects are
multiplied together, with the result modulated by the amount of
scattering, emission, and absorption. The following colour map
creates a green central region (spherical = 1) and a red rim (spherical
approaching 0).
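For example, something along these lines (the exact colours are assumed):

```pov
density {
   spherical
   colour_map {
      [0 rgb <1,0,0>]   // rim: spherical approaching 0, red
      [1 rgb <0,1,0>]   // centre: spherical = 1, green
   }
}
```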

POVRAY quality settings

The following illustrates renderings from POVRAY using the different
quality settings. The scene being rendered for this example is
quality.pov. The quality setting in POVRAY
is set by using +Qn on the command line or
Quality=n in a ".ini" file.
All the renderings below were done with antialiasing on (otherwise default
antialiasing settings) and rendered at 400x400; the relative times
are multiples of the quality=1 raytracing time.

Quality=1: quick colours and ambient light (time unit = 1)
Quality=3: calculate specific diffuse and ambient light (time unit = 1.6)
Quality=5: include shadows and extended lights (time unit = 2.4)
Quality=7: include texture patterns (time unit = 3.0)
Quality=9: calculate reflected, refracted, and transmitted light (time unit = 7.2)

Texture billboarding in PovRay

Billboarding is a well established technique that maps a texture
onto a planar surface that stays perpendicular to the camera. It is
commonly used in interactive OpenGL style applications where textures
are a much more efficient means of representing detail than creating
actual geometry. The classic example is to represent pine trees (relatively
radially symmetric), the texture image of the tree always faces the
camera giving the impression of a 3D form.

The example that will be used here is the creation of a galaxy
that rotates slowly and stays facing a camera that moves along a
flight path. For each section and image below a PovRay file is
provided which illustrates the step and can be used to create the
image.

The texture will be mapped onto a disc, used because the galaxy images
were mostly circular; a polygon could just as easily have been used.
The default PovRay texture lies on the x-y plane at z=0 as shown below.

The first step is to consider how to transform the texture so that it
faces the camera. The camera model used here is as follows, where normally
the camera position (VP), camera view direction (VD), and up vector (VU)
are set by the flight path description.

The view direction, up vector, and right vector need to be kept
mutually perpendicular, to be more precise, orthonormal.
The PovRay transform statement is used to orientate the texture
coordinate system so it is perpendicular to the camera view
direction.
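One way to write that transform, assuming VD and VU have been declared by the flight path ("galaxy.png" and the re-declared vectors are placeholders):

```pov
#declare VR = vnormalize(vcross(VD, VU));   // right vector
#declare VU = vnormalize(vcross(VR, VD));   // up vector, made perpendicular

disc {
   <0, 0, 0>, z, 1                          // default orientation: x-y plane
   pigment {
      image_map { png "galaxy.png" }
      translate <-0.5, -0.5, 0>             // centre the image on the origin
      scale 2                               // cover the unit-radius disc
   }
   transform {
      matrix < VR.x, VR.y, VR.z,            // x axis maps to the right vector
               VU.x, VU.y, VU.z,            // y axis maps to the up vector
               VD.x, VD.y, VD.z,            // z axis maps to the view direction
               0, 0, 0 >                    // last row: no translation here
   }
}
```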

In the above the disc and texture are still centered on the origin
so they now need to be translated to the correct position. This could
be done in the transform above (see last row of zeros) but a separate
translate has been used here.

Fog (PovRay 3.5)

The scene on the right was created in order to test the effect of
various fog types and variable.

Constant fog

The simplest type of fog (type 1)
is uniform in all directions, the rate at which the fog
colour alters the colour of objects is proportional to the exp(-d/d0) function
where d0 is the argument in the "distance" variable. So, if the object is
d0 away the colour contribution for that pixel will be 0.63 of the
fog colour plus 0.37 of the object colour. An object twice d0 away
the colour will be 0.86 of the fog colour and 0.14 of the object colour.

The transmittance and filter components of the fog colour set the
minimum translucency and the degree of filtering of light passing through
the fog. So a transmittance of 0 and filter of 0
(eg: colour rgbft <1,0.6,0,0,0>)
would give the same
result as "colour rgb <1,0.6,0>". Increasing the filtering means the
light sources become increasingly coloured by the fog; this in turn
affects the colour of objects illuminated by those lights. Increasing the
transmittance sets an upper limit on the degree to which the fog blocks
distant objects. The example on the right shows the result with 50%
filtering.

fog {
distance 6
color rgbft <1,0.6,0,0.5,0>
fog_type 1
}

and a 50% transmittance.

fog {
distance 6
color rgbft <1,0.6,0,0,0.5>
fog_type 1
}

Ground Fog

PovRay has a second type of fog (type 2) called "ground fog", this
has a vertical density dependence. Control of the vertical dependence
is made with two variables, the first (fog_offset) sets a height below which the
fog has a constant density, the second variable (fog_alt)
controls the rate of density falloff above that height. A small value of
"fog_alt" compared to "fog_offset" results in a sharp transition. The
exact equation for heights above the fog_offset is

(1 + (height - fog_offset) / fog_alt)^-2

So at a height of fog_alt above the fog_offset the fog density is 1/4
of what it is at (or below) fog_offset. At twice fog_alt above fog_offset
the density is 1/9th.
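As a check, the falloff can be written as a small C helper (the function name is illustrative):

```c
/* Relative ground fog density as a function of height, following
   (1 + (height - fog_offset) / fog_alt)^-2 above fog_offset. */
double GroundFogDensity(double height, double fog_offset, double fog_alt)
{
   double t;

   if (height <= fog_offset)
      return 1;                 /* constant density below the offset */
   t = 1 + (height - fog_offset) / fog_alt;
   return 1 / (t * t);
}
```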

The turbulence variable is used in the same way as when applied to patterns;
the argument dictates the degree of turbulence. An additional variable,
"turb_depth", determines where along the ray the turbulence is calculated:
0 indicates at the camera, 1 at the first object the traced ray
strikes. The other parameters for turbulence, such as octaves,
lambda, and omega, can be specified to control the turbulence function.

The Beneventum Stadium

The lens types available in version 3 of PovRay are illustrated here
along with the camera specifications for each one. The way (and order)
in which PovRay treats the various camera attributes isn't always
obvious; the section of the manual dealing with the camera settings
should be read carefully.

A model of the Roman Beneventum stadium has been used to illustrate
the various lens types.
The geometry for the model was imported into MicroStation and converted
into a PovRay scene through the WRL export; this creates the cleanest
and most convenient geometry output of all the formats provided by
AutoCAD and MicroStation.

The plans and elevations of this model are given on the right.
A rendering using a standard perspective projection (aperture=60)
is shown below.

The camera apertures for the following are 90, 120, and 150.
The model used by PovRay is that of a straightforward
pinhole camera, that is, there are no lens effects. The "angle" argument
is the preferred method of specifying the camera aperture; in earlier
versions of PovRay the relative lengths of the "direction" and "right"
vectors determined the camera aperture.

The same camera position and view direction (VP and VD) will be used
for the remainder of the renderings unless otherwise specified. Note the
negative value for the right vector; this changes the coordinate system
from a left handed to a right handed one (matching the modelling software used).
The 4/3 scale factor for the right vector creates the correct aspect
ratio for an image size of 800x600 as specified in the ini file.

Width=800
Height=600
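Putting those pieces together, the camera definition might look like the following sketch (VP and VD assumed to be declared elsewhere in the scene):

```pov
camera {
   location VP
   up y
   right -4/3*x       // negative for a right handed coordinate system,
                      // 4/3 for the 800x600 aspect ratio
   angle 60
   look_at VP + VD
}
```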

Fisheye

A fisheye lens implements a standard spherical projection; the "angle"
may range from 0 to 360 degrees.
At 180 degrees half the visual space is captured; at 360 degrees the
whole visual space is captured (note that in this case a point behind
the viewer is stretched around the perimeter of the projection circle).
The fisheye lens examples here correspond to camera apertures of 180, 270,
and 360.

Omnimax

This is a special purpose projection for Omnimax theatres. The angle
is fixed at 180 degrees. Normally Omnimax images/movies are filmed
with a lens system matching the intended projection system; the
projection system "undoes" the fisheye-like distortion introduced
by the matching camera.

Cylindrical

For this projection the scene is mapped onto a cylindrical band. PovRay
allows the band to run vertically as well as horizontally, and also
supports two modes: one where the view point remains in the same place
and another where the view point moves around the cylinder.

Making QuickTime Navigable objects using PovRay

QuickTime VR navigable objects are one of the original features
(along with panoramas) that Apple built into
QuickTime beyond simple movie playing.
They allow exploration of an object by moving a virtual
camera around the object, generally on the surface of a sphere. Internally
they are just a linear QuickTime movie, but with some extra information to
indicate which frame sets form the lines of longitude and latitude.

The three steps in creating a QT navigable object are as follows:

1.
Create the frames in the correct order.

2.
Build a QuickTime movie from these frames. This is normally done
using QuickTime Pro but there are other software solutions as well.
Any QuickTime codec can be used to compress this movie.

3.
Add the extra information that instructs a QuickTime player that
this is a navigable object and how the lines of latitude and longitude
are sampled. There are some tools from Apple (and others) that will
do this, I used something called "Make QTVR Object" from the
"QuickTime VR Dev Kit".

The rest of this document describes how to perform the first step
above using PovRay's animation support based upon the clock variable,
namely, creating all the required images
in the correct order. In polar coordinates (R,Theta,Phi), sometimes
called spherical coordinates, given a fixed radius (R) all the points
lie on a sphere. The two angles (Theta,Phi) determine the lines of
longitude (0 to 360) and latitude (90 to -90) respectively.

These lines of latitude and longitude can be "unwrapped" from the
sphere and represented as a grid.

QuickTime VR object movies allow any rectangular part of this grid
to be used. In most cases the entire grid is used, in which case panning
right past theta=360 will wrap to theta=0 for any line of latitude.
Note that the lines along the top and bottom edges of the grid map to
single points at the north and south poles.
Whichever part of the grid is used, one will need to give the longitude
and latitude bounds to the software that performs step 3 above.

In the following example using PovRay, the object to be explored is assumed
to be located at the origin; if this isn't the case it can readily be
translated there, or the PovRay code below modified to create the views
about some other position. In the first section given below, the parameters
that determine the grid resolution and range are specified. In the example here
the resulting navigation will be in 5 degree steps left/right (longitude) and
10 degree steps up/down (latitude).
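A sketch of how those parameters and the clock driven camera variables might be declared (the identifiers here are hypothetical, not the original file):

```pov
// Grid parameters: 5 degree steps in longitude, 10 in latitude
#declare DTHETA = 5;
#declare DPHI   = 10;
#declare NTHETA = 360 / DTHETA;           // 72 frames per line of latitude
#declare NPHI   = 180 / DPHI + 1;         // 19 lines, +90 down to -90

// clock runs from 0 to NTHETA*NPHI - 1, one frame per unit
#declare THETA = DTHETA * mod(clock, NTHETA);
#declare PHI   = 90 - DPHI * int(clock / NTHETA);

// Tweak PHI at the poles so the up vector remains well defined
#if (PHI > 89.9)  #declare PHI =  89.9; #end
#if (PHI < -89.9) #declare PHI = -89.9; #end

#declare R  = 10;                          // camera distance from the origin
#declare VP = R * < cos(radians(PHI)) * cos(radians(THETA)),
                    sin(radians(PHI)),
                    cos(radians(PHI)) * sin(radians(THETA)) >;
#declare VD = vnormalize(-VP);             // look back at the origin
#declare VR = vnormalize(vcross(VD, <0,1,0>));
#declare VU = vnormalize(vcross(VR, VD));
```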

In the above, the tweaking of PHI at 90 and -90 (the poles)
is a nasty solution to the problem of
creating the correct up vector (VU) at the poles; there are more elegant ways,
but this seems to work OK. The method works in combination with the
cross product to calculate a right vector, and from that the correct
up vector at right angles to the view direction.

These camera variables (position, view direction and up vector)
are finally combined into a camera definition which may look something
like the following.

camera {
location VP
up y
right x
angle 60
sky VU
look_at <0,0,0>
}

An unfortunate reality of these navigable object movies is their size. For
D1 degree steps in longitude and
D2 in latitude, the total number of frames for a full sphere is given by
((180+D2)/D2)*(360/D1). So for the example above, with D1 = 5 and D2 = 10 degrees,
there are 1368 frames. For a smoother sequence with 2 degree steps in
both directions there would be 16380 frames! For this reason I haven't included
an actual QuickTime VR object example movie.
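The frame count formula can be expressed (and checked against the figures above) as:

```c
/* Total frames for a full-sphere QuickTime VR object with d1 degree
   steps in longitude and d2 degree steps in latitude. The 180+d2 term
   accounts for the rows running from +90 to -90 inclusive. */
int ObjectFrames(int d1, int d2)
{
   return ((180 + d2) / d2) * (360 / d1);
}
```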

Screen shot examples of various QT VR tools

Representing WaveFront OBJ files in POVRay

With the introduction of the mesh2 primitive in POVRay there is now a nice one to one
mapping between textured mesh files in WaveFront OBJ format and POVRay.

A textured mesh in OBJ format may be represented as follows; note that the long lists
of vertices, normals, etc have been left out in the interests of clarity.
There are essentially 4 common sections: "v" lines are vertices, "vn" lines are normals
(usually one per vertex), "vt" lines are texture uv coordinates in the range 0 to 1, and
finally "f" lines define faces by indices into the vertex, texture, and normal lists.
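As a hand-made illustration (a single triangle; real files have long lists), an OBJ fragment and its mesh2 equivalent might look like the following. Note that OBJ indices are 1-based while mesh2 indices are 0-based, and OBJ "f" entries are ordered vertex/texture/normal:

```
# OBJ: one textured triangle
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vn 0.0 0.0 1.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1/1 2/2/1 3/3/1
```

```pov
mesh2 {
   vertex_vectors { 3, <0,0,0>, <1,0,0>, <0,1,0> }
   normal_vectors { 1, <0,0,1> }
   uv_vectors     { 3, <0,0>, <1,0>, <0,1> }
   face_indices   { 1, <0,1,2> }
   normal_indices { 1, <0,0,0> }
   uv_indices     { 1, <0,1,2> }
}
```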