Open Source Software Practice, Lecture 8

Slide 3: Interpreters

- VTK provides automatic wrapping for the following interpreted languages: Tcl, Java, Python
- Interpreters provide faster turnaround (no compilation) but suffer from slower execution

Notes: VTK can be accessed from C++, but sometimes it is easier to test a particular part of the code from an interpreted language instead, especially because you can make modifications and see the effects without recompiling your program. The interpreted languages VTK chose to support are Tcl, Java, and Python. As noted, programs written in interpreted languages execute more slowly than their compiled counterparts, so we tend to use C++ for application development rather than one of the interpreted languages.

Slide 4: Tcl Interpreter

- To use VTK from Tcl, add the following line to the beginning of your script: package require vtk
- Create an actor in Tcl: vtkActor actor
- Invoke a method: actor SetPosition

Notes: We'll walk through how to use VTK from within the Tcl language. Note the similarities between creating a vtkActor and invoking a method on it from C++ and from Tcl. It is fairly straightforward to convert a C++ example to Tcl and vice versa.

Slide 5: Tcl Interpreter

A special package provides a Tcl interpreter when the 'u' key is pressed in the render window:

  package require vtkinteraction
  iren AddObserver UserEvent {wm deiconify .vtkInteract}

Notes: These two lines of Tcl code allow you to bring up a Tcl interpreter from a running VTK program. From the interpreter you can modify the parameters of your program and see the results immediately.

Slide 6: Tcl Interpreter

- vtkActor ListInstances: list all vtkActor objects
- vtkActor ListMethods: list all vtkActor methods
- anActor Print: print the internal state of anActor
- vtkCommand DeleteAllObjects: delete all VTK objects
- vtkTkRenderWidget: embed a render window in Tk
- vtkTkImageViewerWidget: embed an image window in Tk

Notes: There are a few special VTK commands that are useful from within Tcl.

Slide 7: Exercise 2

- Run an example: exercise2.tcl
- Use the 'u' (user-defined) key to bring up the interactor interface
- Try some commands:
  - vtkLODActor ListInstances
  - sphereActor Print
  - sphereActor ListMethods

Notes: Double-click on the exercise2.tcl file. After you bring up the interactor, type each of these commands and press Enter to see the results in the interactor interface.

Slide 8: The Visualization Pipeline

A sequence of algorithms that operate on data objects to generate geometry that can be rendered by the graphics engine or written to a file.

  Source -> Data -> Filter -> Data -> Mapper -> Actor -> to graphics system

Notes: In VTK, this is the definition of the visualization pipeline. In this diagram, the items labeled "Source", "Filter", and "Mapper" are process objects. The items labeled "Data" are data objects. (We'll discuss algorithms and data objects next.) "Actor" represents the geometry to be rendered. Data could also be passed to a writer to be saved to a file.

Slide 9: Visualization Model

- Data objects
  - represent data
  - provide access to data
  - compute information particular to the data (e.g., bounding box, derivatives)
- Algorithms
  - ingest, transform, and output data objects

Notes: In the previous slide, you were briefly introduced to data objects and algorithms. Data objects represent the data in a VTK program. They are the classes you use to access data and retrieve particular information about the data. Algorithms operate on those data objects. They are the sources (e.g., data readers), filters (e.g., visualization algorithms), and mappers from the previous slide. We will discuss these different types of algorithms in more detail later.

Slide 10: vtkDataObject / vtkDataSet

- vtkDataObject represents a "blob" of data
  - contains an instance of vtkFieldData (an array of arrays)
  - no geometric/topological structure
  - superclass of all VTK data objects
- vtkDataSet has geometric/topological structure
  - consists of geometry (points) and topology (cells)
  - has associated point- and cell-centered data arrays
  - convert a data object to a data set with vtkDataObjectToDataSetFilter

Notes: The most general way to represent data in VTK is with a vtkDataObject. vtkDataObject does not have any associated geometry/topology; there are no associated points/cells. A data object has field data (an array of arrays), but these arrays are not associated with points/cells, since a data object does not define points/cells. vtkDataSet is a subclass of vtkDataObject. Geometry (points) and topology (cells, i.e., how the points are connected to each other) are defined at this level, along with point-centered and cell-centered attribute data. The next slide illustrates the distinction between data objects and data sets.

Slide 12: Dataset Model

- A dataset is a data object with structure
- Dataset structure consists of
  - points (x-y-z coordinates)
  - cells (e.g., polygons, lines, voxels), defined by a connectivity list referring to point IDs
- Access is via integer ID
- Representations may be implicit or explicit

Notes: As mentioned earlier, datasets contain both points (locations in 3D) and cells (connectivity of groups of points), so a cell's x-y-z position is determined by the points it is composed of. Both points and cells are accessed by integer IDs. The points and cells making up a dataset can be specified either implicitly (e.g., vtkImageData or data defined by an implicit function) or explicitly (e.g., vtkUnstructuredGrid).

Slide 14: Data Set Attributes

vtkDataSet also has point and cell attribute data:

- Scalars
- Vectors (3-vector)
- Tensors (3x3 symmetric matrix)
- Normals (unit vector)
- Texture coordinates (1-3D)
- Array of arrays (i.e., field data)

Notes: In addition to geometry and topology, data sets also have attribute data for points and for cells. vtkPointData and vtkCellData are both subclasses of vtkFieldData, an array of arrays. The specific attribute data arrays (scalars, vectors, etc.) are just pointers to field data arrays. Additional point or cell attribute data arrays can also be stored. Historical note: in previous versions of VTK, scalars, vectors, etc. were not contained in field data but were specially treated arrays. vtkPointData and vtkCellData each contained an instance of vtkFieldData where additional arrays could be stored, and special methods had to be used to move an array from field data to a specific type of attribute data. Now, all that is required to do this is resetting a pointer.
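The "attributes are just pointers into field data" idea above can be sketched in a few lines of Python. This is a hedged illustration, not the VTK API; the class and method names are invented for the example.

```python
class FieldData:
    """An 'array of arrays': named data arrays (a sketch, not the VTK API)."""
    def __init__(self):
        self.arrays = {}

    def add_array(self, name, values):
        self.arrays[name] = values


class AttributeData(FieldData):
    """Point or cell data: the scalar slot is only a pointer into the arrays."""
    def __init__(self):
        super().__init__()
        self.scalars = None  # active scalar array, if any

    def set_active_scalars(self, name):
        # Promoting a field array to scalars is just a pointer reset:
        # no data is copied or moved.
        self.scalars = self.arrays[name]
```

As the historical note says, no special copy step is needed any more: making an array the active scalars is a single reference assignment.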

Slide 15: Data Set Attributes (cont.)

Slide 16: Scalars (An Aside)

- Scalars are represented by a vtkDataArray
- Scalars are typically single-valued
- Scalars can also represent color:
  - I (intensity)
  - IA (intensity-alpha; alpha is opacity)
  - RGB (red-green-blue)
  - RGBA (RGB + alpha)
- Scalars can be used to generate colors:
  - mapped through a lookup table
  - if unsigned char, direct color specification

Notes: Scalars are one of the most common types of attribute data. They (and all other attribute data) are represented as vtkDataArrays. Scalars are usually used to determine the colors used in displaying the data set. They can either represent the colors directly (1-4 component scalars) or they can be used to generate colors (single-component scalars mapped through a lookup table). When representing colors directly, I and IA are grayscale, and RGB and RGBA are color. When generating colors, the color lookup table is associated with a range of scalar values; the scalars in the data correspond to particular colors in this range.
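The "single-component scalar mapped through a lookup table" path can be sketched as follows. This is a minimal Python illustration, not the vtkLookupTable API; the function names and the grayscale ramp are assumptions made for the example.

```python
def make_grayscale_lut(n=256):
    """Hypothetical lookup table: n RGB triples ramping black to white."""
    return [(i / (n - 1),) * 3 for i in range(n)]


def map_scalar_through_lut(value, scalar_range, lut):
    """Clamp a single-component scalar into scalar_range, then index the LUT."""
    lo, hi = scalar_range
    t = (min(max(value, lo), hi) - lo) / (hi - lo)
    return lut[round(t * (len(lut) - 1))]
```

Scalars outside the associated range clamp to the first or last table entry, which matches the slide's point that the lookup table is tied to a range of scalar values.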

Slide 17: Algorithms

Algorithms operate on data objects:

- Source: 1 or more outputs
- Filter: 1 or more inputs, 1 or more outputs
- Mapper: 1 or more inputs

Notes: We mentioned algorithms briefly earlier. They operate on the data objects we have just been discussing. There are three types of algorithms in VTK: sources, filters, and mappers. Sources produce data objects but have no inputs (e.g., file readers, cone source, etc.). Filters make changes to the data (the visualization algorithms in VTK); they take a data object as input and produce another data object as output. Mappers take data objects as input but do not produce output; they are the connection between the data objects and the rendering parts of VTK, converting the data into rendering primitives.

Slide 18: Pipeline Execution Model (conceptual depiction)

  Source -> Data -> Filter -> Data -> Mapper -> Render()
  direction of data flow: downstream (via RequestData())
  direction of update: upstream (via Update())

Notes: The data objects and algorithms we've discussed can be linked together in VTK to form a visualization pipeline, resulting in a particular visualization of some data. When the pipeline is executed, the data is produced from a source and flows down the pipeline through any filters to a mapper and into the rendering subsystem. In VTK, pipeline execution is demand-driven: the pipeline is not executed until Update is called, either by the rendering subsystem, by a writer, or directly. Update traces back up the pipeline to the point where the data is up to date; execution then happens from that point down the pipeline to produce the new visualization.
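The demand-driven behavior described in the notes can be sketched with modification timestamps: Update walks upstream, and only stages whose parameters or inputs changed since their last execution re-run. This is a toy Python model, not the VTK executive; all names here are invented for the illustration.

```python
import itertools

_clock = itertools.count(1)  # global, monotonically increasing timestamp source


class Algorithm:
    """Minimal demand-driven pipeline stage (hypothetical, not the VTK API)."""
    def __init__(self, fn, upstream=None):
        self.fn = fn                # produces output from upstream data
        self.upstream = upstream
        self.mtime = next(_clock)   # when parameters last changed
        self.exec_time = 0          # when this stage last executed
        self.output = None

    def modified(self):
        """Mark this stage's parameters as changed."""
        self.mtime = next(_clock)

    def update(self):
        """Demand-driven: ask upstream first, then re-execute only if stale."""
        data = self.upstream.update() if self.upstream else None
        stale = self.mtime > self.exec_time or (
            self.upstream is not None
            and self.upstream.exec_time > self.exec_time)
        if stale:
            self.output = self.fn(data)
            self.exec_time = next(_clock)
        return self.output
```

Calling update() twice in a row executes nothing the second time; calling modified() on the source makes the whole downstream chain re-execute on the next update, mirroring the slide's two arrows of data flow and update.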

Slide 19: Creating Pipeline Topology

  bFilter->SetInputConnection(aFilter->GetOutputPort());
  bFilter->SetInputConnection(1, aFilter->GetOutputPort(2));

(Figure: connections from aFilter's output ports to bFilter's input ports.)

- Several connections on an input port: AddInputConnection(), if allowed by the filter (e.g., vtkAppendFilter)
- Reusing an output port is OK

Notes: In order to produce a visualization, we must set up a pipeline. The SetInputConnection method (in filters and mappers) and the GetOutputPort method (in sources and filters) are used to connect the pieces of the pipeline. Inputs and outputs are both vtkDataSets (or a subclass of vtkDataSet).

Slide 20: Role of Type-Checking

- FillInputPortInformation() specifies the input dataset type
- FillOutputPortInformation() specifies the output dataset type
- Type-checking is performed at run time

  int vtkPolyDataAlgorithm::FillInputPortInformation(
    int vtkNotUsed(port), vtkInformation *info)
  {
    info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkPolyData");
    return 1;
  }

Notes: Different algorithms operate on different dataset types, so in order to connect two algorithms, the output dataset type of one must match the input dataset type of the next one in the pipeline.

Slide 21: Example Pipeline

Decimation, smoothing, and normal generation, implemented in C++:

  vtkBYUReader -> vtkDecimatePro -> vtkSmoothPolyDataFilter -> vtkPolyDataNormals -> vtkPolyDataMapper

Note: data objects are not shown; they are implied by the output type of each filter.

Slide 26: Exercise 2b

- Compile and run exercise2b.cxx
- Understand what the example does:
  - How many source objects are there?
  - How many filter objects?
  - How many mapper objects?
  - How many data objects are there?

Slide 29: Volume Rendering

Volume rendering is the process of generating a 2D image from 3D data.

Notes: The line between volume rendering and geometric rendering is not always clear. Volume rendering may produce an image of an isosurface, or may employ geometric hardware for rendering.

Slide 32: Volume Rendering Strategies

Image-order approach (ray casting): traverse the image pixel by pixel and sample the volume along rays.

Notes: In an image-order volume rendering method, the image is traversed pixel by pixel, and rays are cast from each pixel through the volume to determine the final pixel color value. Image-order volume rendering is typically referred to as ray casting. Image-order ray casting and object-order texture mapping are actually equivalent, except for possible precision problems in the texture mapping method due to the limited hardware frame buffer size. The speed of an image-order volume rendering method is driven mostly by the size of the image, while object-order methods depend mostly on the size of the volume. Some volume rendering methods combine both object-order and image-order techniques and are known as hybrid methods.

Slide 33: Volume Rendering Strategies

Object-order approach: traverse the volume and project it onto the image plane.

- Splatting: cell by cell
- Texture mapping: plane by plane

Notes: Most volume rendering methods fall into two categories: object-order or image-order. In object-order volume rendering, the rectilinear grid is traversed and the data is projected onto the image plane. One example is splatting, where the volume is traversed sample by sample, a spherical kernel is placed around the sample, and the sample is "splatted" onto the image plane. Alternatively, the volume may be processed plane by plane by texture mapping the scalar values onto axis-aligned rectangular polygons, then rendering these polygons with conventional graphics hardware. The polygons are perpendicular to the axis most parallel to the viewing direction. During rotation this axis changes, leading to some image artifacts. If 3D texture mapping hardware is available, the technique can be performed with view-plane-aligned polygons, eliminating these artifacts.

Slide 34: Scalar Value Interpolation

Trilinear interpolation over the unit cell from (0,0,0) to (1,1,1):

  v = (1-x)(1-y)(1-z) S(0,0,0) +
      (x)(1-y)(1-z)   S(1,0,0) +
      (1-x)(y)(1-z)   S(0,1,0) +
      (x)(y)(1-z)     S(1,1,0) +
      (1-x)(1-y)(z)   S(0,0,1) +
      (x)(1-y)(z)     S(1,0,1) +
      (1-x)(y)(z)     S(0,1,1) +
      (x)(y)(z)       S(1,1,1)

Nearest neighbor interpolation:

  v = S(rnd(x), rnd(y), rnd(z))

Notes: In many rendering methods, such as ray casting and texture mapping, it is often necessary to know the scalar value at any location in three-dimensional space. Since scalar values exist only at the grid points of the dataset, we must employ an interpolation function that defines the scalar value between sample points. The simplest form is nearest neighbor interpolation, where the value of the nearest sample point is returned for any location within the dataset. Smoother results are obtained, at the cost of higher computational complexity, with trilinear interpolation, which linearly interpolates the scalar value along each of the three main axes.
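The two formulas above translate directly into code. In this Python sketch, S is assumed to hold the eight corner scalars of a unit cell, indexed as S[z][y][x]; that layout is a choice made for the example, not something the slide specifies.

```python
def nearest_neighbor(S, x, y, z):
    """v = S(rnd(x), rnd(y), rnd(z)): return the closest corner sample."""
    return S[round(z)][round(y)][round(x)]


def trilinear(S, x, y, z):
    """Weighted sum of the 8 corner scalars of the unit cell (0 <= x,y,z <= 1).

    Each corner's weight is the product of (x or 1-x), (y or 1-y), (z or 1-z),
    exactly the eight terms of the slide's formula.
    """
    v = 0.0
    for k in (0, 1):
        for j in (0, 1):
            for i in (0, 1):
                w = ((x if i else 1 - x) *
                     (y if j else 1 - y) *
                     (z if k else 1 - z))
                v += w * S[k][j][i]
    return v
```

For a scalar field that is itself linear in x, y, and z, trilinear interpolation reproduces it exactly, which makes the implementation easy to sanity-check.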

Slide 35: Scalar Value Interpolation

(Images: a 50x50x50 sphere rendered with nearest neighbor interpolation and with trilinear interpolation.)

Notes: The importance of interpolation is seen in these images generated from a 50x50x50 sphere. On the left, nearest neighbor interpolation was used, and the underlying voxel structure of the dataset is clearly visible. On the right, trilinear interpolation was used, and the sphere appears smooth even in the close-up image. Trilinear interpolation requires more computing time than nearest neighbor interpolation, which can be significant on workstations with poor floating-point performance unless the calculations are converted to fixed-point arithmetic. Trilinear interpolation is a continuous function, but it is not continuous in the first derivative. It would be beneficial to use tricubic interpolation to obtain first-derivative continuity, but this is generally too computationally expensive for practical use in interactive environments.

Slide 37: Material Classification

- Scalar value can be classified into color and opacity (RGBA)
- Gradient magnitude can be classified into opacity
- Final opacity is obtained by multiplying the scalar-value opacity by the gradient-magnitude opacity

Notes: One of the hardest tasks in volume rendering is material segmentation. For example, if you have a CT dataset that contains bone and skin, you must define transfer functions that map color and opacity to these features based on properties of the dataset. Generally, three transfer functions define the appearance of the volume. Two are functions of scalar value, mapping the scalar value into color (RGB) and opacity (alpha). Another useful transfer function maps the magnitude of the gradient to an opacity. To determine the opacity at a sample location, the scalar-value and gradient-magnitude opacity values are multiplied together. Scalar-value opacity transfer functions are useful for narrowing the range of scalar values that are rendered; gradient-magnitude opacity transfer functions can highlight areas of sharp change.
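The "multiply the two opacities" rule is simple enough to sketch directly. The ramp shapes and breakpoints below (20-80 for scalar value, 0-50 for gradient magnitude) are arbitrary assumptions for the example; real transfer functions are chosen per dataset.

```python
def scalar_opacity(s):
    """Hypothetical scalar-value transfer function: linear ramp over [20, 80]."""
    return min(max((s - 20) / 60.0, 0.0), 1.0)


def gradient_opacity(g):
    """Hypothetical gradient-magnitude transfer function: ramp over [0, 50]."""
    return min(max(g / 50.0, 0.0), 1.0)


def sample_opacity(scalar, gradient_magnitude):
    """Final opacity at a sample: product of the two transfer-function values."""
    return scalar_opacity(scalar) * gradient_opacity(gradient_magnitude)
```

Because the opacities multiply, a sample is visible only when both its scalar value and its gradient magnitude fall in the interesting ranges, which is how flat homogeneous regions can be suppressed while edges are highlighted.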

Slide 38: Material Classification

Notes: The choice of transfer functions greatly affects the final image produced from a dataset. Consider the three top images. These were all generated using a compositing technique on a simulated dataset of a high-potential iron protein molecule. The transfer functions mapping scalar value to RGBA were varied, producing images that convey different information about the dataset. In the bottom row we see two images generated from CT data. Color transfer functions were used to map scalar values that most likely belong to regions of bone to white, while samples most likely to indicate skin were mapped to a light brown. Opacity transfer functions were used to map skin values to a low opacity and bone values to a higher opacity. In addition, homogeneous areas were assigned a low opacity.

Slide 39: Implementation

  Renderer -> Prop Collection -> ... Volume
  Volume -> Property (VolumeProperty)
  Volume -> Mapper (VolumeMapper) -> Input (ImageData)

Notes: The vtkRenderer contains a vtkPropCollection. This collection can hold all types of props, including 2D props, 3D actors, and 3D volumes. Each vtkVolume has a vtkVolumeProperty (called the Property) and a vtkVolumeMapper (called the Mapper), which is similar to the structure of an actor. The vtkVolumeMapper has an Input which points to some vtkImageData representing the 3D dataset. This data may be obtained by connecting the output of a vtkSLCReader to the Input of the vtkVolumeMapper. As opposed to the other objects colored orange in the original diagram, the volume mapper is colored blue to indicate that it is an abstract superclass. A helper class, vtkRayCaster, was created to encapsulate the ray casting methods; it is an automatically generated class that is not visible to the user and therefore does not appear in this architecture chart.

Slide 42: Volume Rendering Issues

- Quality: is it accurate?
- Speed: is it fast?
- Intermixing: what can't I do?
- Features / flexibility: what features does it have, and can I extend it?

Notes: There are four major issues to discuss in relation to each of the volume rendering approaches covered in this talk. The first is quality: how accurate is the generated image? This depends on many factors, including data interpolation, the accumulation method, the precision of the calculations, and other approximations used by the rendering technique to improve speed. Another important issue is speed: how fast can an image be generated? Interactivity is important in many application areas, and often a less accurate method is preferred if that is the only way to ensure interactivity. In a complex application it is often desirable to intermix multiple volumes with various geometry, so the limitations each volume rendering technique imposes on intermixing matter. Finally, does the method have all the features I need, and if not, is it flexible? Can I get into the code and add additional features?

Slide 43: Standard Features

- Transfer functions: define color and opacity per scalar value; modulate opacity based on the magnitude of the scalar gradient for edge detection
- Shading: specular/diffuse shading from multiple light sources
- Cropping: six axis-aligned cropping planes form 27 regions, with independent control of regions (limited with VolumePro)
- Cut planes: arbitrary cut planes, limited by hardware (6 with OpenGL, 2 parallel with VolumePro)

Notes: Some of the standard features supported (at least in part) across all of the volume rendering strategies are transfer functions, shading, cropping, and cut planes. The transfer functions map scalar value into color and opacity, and the magnitude of the gradient into an opacity modulation value. Shading from multiple light sources is supported, and in ray casting and texture mapping these light sources can be colored. Six cropping planes define 27 regions in the volume that can be turned on or off independently (the VolumePro offers only a few predefined options, but these are the most useful) to, for example, view a subvolume or chop a corner out of a volume. Arbitrary cut planes can also be applied to the volume, although this is limited to 6 for texture mapping, and only 2, which must be parallel to each other, for the VolumePro hardware. With two parallel cut planes, thick reformatting can be performed.

Slide 44: Intermixed Geometry

(Images: high-potential iron protein; CT scan of the visible woman's knee.)

Notes: In the image on the left, the positive wave function values in the high-potential iron protein dataset are volume rendered using a ray casting compositing scheme, while the negative wave function values have been extracted as a geometric isosurface using marching cubes. The image on the right shows CT data from the visible woman dataset. The skin surface in the knee has been extracted using marching cubes and rendered with graphics hardware. Ray casting is used to display the bone as a "fuzzy surface" using a composite volume rendering technique.

Slide 45: Volume Ray Casting

Supporting classes: VolumeRayCastFunction, VolumeRayCastMapper, GradientShader, GradientEstimator, GradientEncoder.

Notes: Some additional classes are needed to support the volume ray casting architecture. The vtkVolumeRayCastFunction performs most of the work in ray casting, since it is responsible for traversing the ray and computing a final pixel value. The vtkGradientEstimator is used to create the 3D array of normals, encoded into two-byte direction and one-byte magnitude values according to the scheme implemented in the vtkGradientEncoder. The ray cast mapper contains an automatically generated class, the vtkEncodedGradientShader, which is used to calculate illumination for each encoded normal.

Slide 46: Ray Cast Functions

A ray function examines the scalar values encountered along a ray and produces a final pixel value according to the volume properties and the specific function.

(Plot: scalar value vs. ray distance.)

Notes: A ray cast function takes a ray and transforms it into "volume space", where the axes of the coordinate system are aligned with the axes of the dataset and a scaling factor is applied so that the volume appears to have a spacing of 1 unit by 1 unit by 1 unit. This one transformation simplifies many later calculations, such as interpolation. The ray cast function examines the scalar values encountered along the ray and, according to the color and opacity transfer functions, the shading information, the interpolation type, and the gradients, produces a final pixel value. Each specific implementation of a ray cast function defines how these values are combined (and in some cases, which values are ignored) to obtain that final value. We will cover three common ray cast functions in this section.

Slide 47: Maximum Intensity Function

(Plots: scalar value vs. ray distance with the maximum value marked; opacity transfer functions over scalar value and gradient magnitude.)

Notes: A maximum intensity function examines the values along the ray and selects the maximum. One way to do this is to pick the maximum scalar value encountered; this value is then turned into an RGBA pixel value by applying the color and opacity transfer functions. This ray function is an example of one that ignores certain properties that do not make sense for the function: shading is ignored by a maximum intensity function, since the set of maximum intensities in the final image does not generally represent a surface structure in the volume. As an alternative to finding the maximum scalar value, this function could locate the maximum intensity encountered along the ray. The name of this class is vtkVolumeRayCastMIPFunction, where MIP stands for Maximum Intensity Projection.
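The "maximum scalar first, classify afterwards" variant described in the notes can be sketched as below. This is a hedged Python illustration, not vtkVolumeRayCastMIPFunction itself; volume_fn stands in for interpolated sampling of a real dataset.

```python
def sample_ray(volume_fn, origin, direction, step, n):
    """Take n equally spaced scalar samples along a ray through the volume."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    return [volume_fn(ox + i * step * dx,
                      oy + i * step * dy,
                      oz + i * step * dz) for i in range(n)]


def mip_scalar(samples):
    """Maximum Intensity Projection on scalar value: keep the largest sample.

    The winning scalar would then be pushed through the color/opacity
    transfer functions to get the final RGBA pixel; shading is skipped.
    """
    return max(samples)
```

Note that classification happens once, after the maximum is found; the alternative the notes mention (maximizing the classified intensity instead of the raw scalar) would apply the transfer functions inside the max instead.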

Slide 48: Composite Function

Use alpha blending along the ray to produce the final RGBA value for each pixel:

  I_i = c_i a_i + I_{i+1} (1 - a_i)

Notes: Compositing is generally performed using the recursive equation above, where I_i is the intensity at location i along the ray, c_i is the color, and a_i is the opacity at that location. The ray is traversed until it exits the volume or until the accumulated (1 - a_i) product drops below some tolerance, indicating that full opacity has been reached. The opacity value is actually an opacity per unit length, and represents the amount of light reflected by the material at that location. The (1 - a_i) value represents the amount of light transmitted unaltered by the material at that location.
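Unrolling the recursion front to back gives I = sum over i of c_i a_i multiplied by the transparency accumulated before sample i, which allows the early-termination test the notes describe. This Python sketch assumes per-sample color and opacity transfer functions passed in by the caller; the tolerance value is an arbitrary choice for the example.

```python
def composite(samples, color_tf, opacity_tf, tolerance=0.002):
    """Front-to-back alpha blending along one ray.

    Equivalent to the back-to-front recursion I_i = c_i a_i + I_{i+1}(1 - a_i):
    each sample contributes its classified color weighted by its opacity and
    by the transparency of everything in front of it.
    """
    intensity = 0.0
    transparency = 1.0  # running product of (1 - a_j) for samples already seen
    for s in samples:
        a = opacity_tf(s)
        intensity += color_tf(s) * a * transparency
        transparency *= (1.0 - a)
        if transparency < tolerance:
            break  # effectively fully opaque; stop traversing the ray
    return intensity
```

An opaque sample (a = 1) hides everything behind it, which is exactly the early-termination condition.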

Slide 49: Isosurface Function

Stop ray traversal at the isosurface value. Use a cubic equation solver if the interpolation is trilinear.

Notes: An isosurface image can be generated from a dataset using a compositing ray function with a transfer function mapping scalar value to opacity as a step at the isosurface value. Since compositing is performed using a sampling method, this leads to samples taken not quite on the isosurface, which in turn leads to artifacts. To improve this, a special ray cast function can be used which exactly locates the surface. With trilinear interpolation, this results in a cubic equation to solve for the intersection of the ray with the surface.
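The idea of locating the surface precisely, rather than settling for the nearest sample, can be sketched with bracketing plus bisection. Note the assumption: the slide's actual method solves the cubic arising from trilinear interpolation in closed form; the numeric refinement below is a simplification standing in for that solve.

```python
def find_isosurface(scalar_at, t0, t1, iso, step=0.1, iters=40):
    """March along the ray from t0 to t1; when consecutive samples bracket
    the isovalue, refine the crossing point by bisection.

    scalar_at(t) returns the (interpolated) scalar at ray parameter t.
    Returns the refined t of the first crossing, or None if no crossing.
    """
    t = t0
    prev = scalar_at(t)
    while t < t1:
        t_next = min(t + step, t1)
        cur = scalar_at(t_next)
        if (prev - iso) * (cur - iso) <= 0:      # sign change: surface crossed
            a, b = t, t_next
            for _ in range(iters):               # bisection refinement
                m = 0.5 * (a + b)
                if (scalar_at(a) - iso) * (scalar_at(m) - iso) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        prev, t = cur, t_next
    return None
```

Compared with simply stopping at the first sample past the isovalue, the refinement removes the staircase artifacts the notes mention, at the cost of extra scalar evaluations near the surface.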

Slide 50: Sampling Distance

(Images: a vase rendered with step sizes of 0.1 unit, 1.0 unit, and 2.0 units.)

Notes: The sampling distance must be selected with care when using a ray function that performs sampling. Consider the example above, in which a vase is rendered using compositing with three different step sizes. The transfer functions are defined such that the color of a sample changes from black to white within a short distance (approximately one unit). If the sampling distance is set too high, as in the image on the right, artifacts appear in the resulting image. On the other hand, if the sampling distance is set too low, the image will take a long time to render. In some cases it is possible to adaptively adjust the step size during ray casting by examining the gradient magnitude along the ray: in areas of rapid change the sampling distance should be small, while large step sizes can be used in homogeneous regions.

Slide 51: Speed / Accuracy Trade-Off

- Multi-resolution ray casting: 1x1 sampling, 2x2 sampling, 4x4 sampling
- Combined approach: vtkLODProp3D can hold mappers of various types. A volume can be represented at multiple levels of detail using a geometric isosurface, texture mapping, and ray casting; the level of detail is chosen based on the allocated render time.

Notes: Even on a multi-processor machine with a fairly good graphics card, the render rate of a large volumetric dataset may not be high enough for interactivity. Since the time to render with ray casting is proportional to the number of pixels in the image, reducing the number of pixels from which rays are cast decreases the render time. For texture mapping, additional versions of the dataset can be generated at lower resolution; rendering lower-resolution data requires smaller textures and therefore improves performance. Several techniques can be combined using a vtkLODProp3D: a low-resolution dataset with texture mapping, a full-resolution dataset with texture mapping, and a full-resolution dataset with ray casting can all be added to the vtkLODProp3D. At render time, the vtkLODProp3D chooses between the different techniques based on the allocated render time for the volume.