MegaVoxel Technology

What is voxel processing?

Voxel processing is a means of visualizing 3-dimensional shapes and structures implied by a series of cross-sectional images.

MRI, CT, PET, confocal microscopy, and volumetric ultrasound are some of the more common non-invasive volumetric sampling techniques. Brains, hearts, microscopic tissue sections, and even whole human bodies are routinely sampled via these methods.

Note that there are many ways of obtaining the volumetric data required for voxel processing; the sampling techniques and types of objects listed in the previous paragraph should by no means be considered a comprehensive list.

How does voxel processing work?

Voxel processing presupposes that a series of cross-sectional images, representing some volume which was regularly sampled at some constant interval, exists in digital form. A series of cross-sectional digital images of this type is referred to as a volumetric dataset or simply as a dataset.

Each image or slice in a given dataset is made up of a number of picture elements or pixels. The distance between any two consecutive pixel centers in any slice within a dataset represents a real world distance referred to as the interpixel distance. Similarly, the distance between any two consecutive slices represents some constant real world depth with which the volume was sampled. This constant depth is referred to as the interslice distance.
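As a sketch, the relationship between these spacing parameters and the real-world dimensions of the sampled volume can be worked out directly. The dataset shape and spacing values below are hypothetical, chosen only for illustration:

```python
# Hypothetical dataset: 4 slices, each a 256x256 image.
n_slices, height, width = 4, 256, 256

# Assumed sampling parameters (illustrative, not from a real scanner):
interpixel = 0.5   # mm between adjacent pixel centers within a slice
interslice = 2.0   # mm between adjacent slices

# Real-world extent of the sampled volume, measured center-to-center.
extent_x = (width - 1) * interpixel     # 127.5 mm across a slice
extent_y = (height - 1) * interpixel    # 127.5 mm down a slice
extent_z = (n_slices - 1) * interslice  # 6.0 mm through the stack
```

Note that the through-plane extent is governed by the interslice distance, which is typically much coarser than the interpixel distance.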

Processing a volumetric dataset begins by stacking the slices of a given dataset in computer memory according to the interpixel and interslice distances so that the data exists in a "virtual" coordinate space which accurately reflects the real world dimensions of the originally sampled volume.

The next step is to create additional slices to be inserted between the dataset's actual slices so that the entire volume, as it exists in computer memory, is represented as one solid block of data. The number of slices needed to fill in the blanks is based on the dataset's interpixel and interslice spacing and the slices needed are created through interpolation.
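A minimal sketch of this interpolation step, assuming the number of inserted slices is chosen so that the through-plane spacing ends up matching the in-plane interpixel spacing (the function name and the tiny 2x2 slices are illustrative):

```python
import numpy as np

def fill_slices(slices, interpixel, interslice):
    """Insert linearly interpolated slices so the through-plane
    spacing matches the in-plane (interpixel) spacing."""
    factor = int(round(interslice / interpixel))  # output slices per original gap
    filled = []
    for a, b in zip(slices[:-1], slices[1:]):
        for k in range(factor):
            t = k / factor
            filled.append((1 - t) * a + t * b)  # linear blend of the two neighbors
    filled.append(slices[-1])
    return np.stack(filled)

# Two 2x2 slices, with an interslice distance 4x the interpixel distance,
# so 3 interpolated slices are inserted between each original pair.
slices = [np.zeros((2, 2)), np.full((2, 2), 8.0)]
block = fill_slices(slices, interpixel=1.0, interslice=4.0)
# block is a solid 5x2x2 volume; the inserted slices hold 2.0, 4.0, 6.0
```

Real systems may use higher-order interpolation, but the linear blend above captures the idea: the inserted slices are weighted averages of their two original neighbors.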

Once a dataset exists in computer memory as a solid block of data, the pixels in each slice take on an additional dimension. In effect, the pixels become volume pixels or voxels.

Once loaded into memory, a volume can be translated and rotated and a rendering of the dataset can be obtained.

Seems like a lot of trouble for such a crappy image!

One of the more important concepts that must be understood in order to consistently obtain satisfactory renderings is voxel opacity.

Because voxels, by definition, exist in 3 dimensions, any voxel has the capability of obscuring the view of any other voxel, depending on the orientation of the dataset. To get around this, voxels are given an opacity value through an opacity transformation function. By default, this function gives a direct, linear relationship between voxel intensity and opacity. In other words, by default, the higher a voxel's intensity value, the more opaque (less transparent) that voxel is when rendered.
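The default behavior described above can be sketched as a simple linear mapping, assuming 8-bit intensities (the function name and range are illustrative):

```python
import numpy as np

def default_opacity(intensity, max_intensity=255):
    """Default opacity transformation: opacity rises linearly with
    intensity, so only intensity 0 is fully transparent."""
    return intensity / max_intensity

voxels = np.array([0, 64, 128, 255])
alpha = default_opacity(voxels)
# intensity 0 -> opacity 0.0 (transparent), 255 -> 1.0 (opaque)
```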

The reason that the initial rendering of the dataset was a bit of a disappointment is that the default opacity transformation function treats only voxels with an intensity value of zero as completely transparent. Apparently, there are lots of dark voxels in this particular dataset with intensity values greater than zero.

Luckily, modifying a dataset's opacity transformation function can be accomplished quite easily. In this particular dataset, only the voxels with intensity values above 127 are of any real interest. This means that the opacity transformation function must be altered so that voxels with intensity values less than or equal to 127 are completely transparent when rendered.
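One way to sketch such a modified function, again assuming 8-bit intensities: everything at or below the threshold maps to zero opacity, and opacity rises linearly above it (the function name is illustrative):

```python
import numpy as np

def thresholded_opacity(intensity, threshold=127, max_intensity=255):
    """Modified opacity transformation: voxels at or below the threshold
    render completely transparent; above it, opacity rises linearly."""
    alpha = (intensity - threshold) / (max_intensity - threshold)
    return np.clip(alpha, 0.0, 1.0)  # clamp the sub-threshold values to 0

voxels = np.array([0, 127, 128, 255])
alpha = thresholded_opacity(voxels)
# intensities 0 and 127 -> opacity 0.0; 255 -> 1.0
```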

Note the difference a slight alteration to a dataset's opacity transformation function can have in this final rendering.

One more thing before you go...

Static 3-dimensional renderings, as interesting as they can sometimes be, aren't always enough. Additional cues are often needed in order to fully understand what is being visualized, and movement tends to be one of the more important visual cues.

Generating a series of renderings, each showing the volume rotated by some constant increment about some arbitrary axis, allows for the creation of a movie and provides a very important visual cue which can greatly enhance the interpretation of the dataset.
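The frame-generation loop can be sketched as follows. This is a simplified stand-in for a real renderer: it rotates a small volume about its z axis in constant angular steps using nearest-neighbor resampling, and "renders" each orientation with a maximum-intensity projection. All names and the toy volume are illustrative:

```python
import numpy as np

def rotate_z(volume, angle):
    """Nearest-neighbor rotation of a (slices, n, n) volume about its z axis."""
    n = volume.shape[1]
    c = (n - 1) / 2.0                      # center of rotation within a slice
    ys, xs = np.mgrid[0:n, 0:n]
    # Inverse-map each output pixel back into the source slice.
    cos, sin = np.cos(angle), np.sin(angle)
    src_x = cos * (xs - c) + sin * (ys - c) + c
    src_y = -sin * (xs - c) + cos * (ys - c) + c
    sx = np.clip(np.round(src_x).astype(int), 0, n - 1)
    sy = np.clip(np.round(src_y).astype(int), 0, n - 1)
    return volume[:, sy, sx]

def make_frames(volume, n_frames):
    """One rendering per constant-angle step: here, a max-intensity projection."""
    step = 2 * np.pi / n_frames
    return [rotate_z(volume, i * step).max(axis=0) for i in range(n_frames)]

# Toy volume: a single bright column, placed off-center so rotation is visible.
volume = np.zeros((4, 8, 8))
volume[:, 4, 6] = 1.0
frames = make_frames(volume, n_frames=4)  # 4 frames, a quarter turn apart
```

Stitching such frames together at a fixed playback rate yields the rotation movie described above; a production renderer would of course use the full opacity-weighted compositing rather than a simple maximum projection.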