Abstract

In this paper, we introduce a novel framework for the compositing
of interactively rendered 3D layers tailored to the needs of scientific
illustration. Currently, traditional scientific illustrations are
produced in a series of composition stages, combining different pictorial
elements using 2D digital layering. Our approach extends the layer
metaphor into 3D without giving up the advantages of 2D methods.
The new compositing approach allows for effects such as selective
transparency, occlusion overrides, and soft depth buffering. Furthermore,
we show how common manipulation techniques such as masking can be
integrated into this concept. These tools behave just like in 2D,
but their influence extends beyond a single viewpoint. Since the
presented approach makes no assumptions about the underlying rendering
algorithms, layers can be generated based on polygonal geometry,
volumetric data, point-based representations, or others. Our implementation
exploits current graphics hardware and permits real-time interaction
and rendering.
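The soft depth buffering mentioned above can be illustrated with a small sketch: instead of a binary z-test, two layer samples whose depths fall within a transition zone are blended in proportion to their depth difference. This is only a minimal illustration of the general idea, not the paper's actual formulation; the function name, the parameter `epsilon`, and the linear blending ramp are assumptions chosen for clarity.

```python
def soft_depth_composite(c_front, d_front, c_back, d_back, epsilon=0.05):
    """Blend two per-pixel layer samples with a soft depth test.

    Hypothetical sketch: samples whose depths differ by more than
    `epsilon` resolve to an ordinary hard z-test; samples closer than
    that are blended linearly, avoiding hard seams where layers
    intersect. Colors are RGB tuples, depths are scalars (smaller =
    closer to the viewer).
    """
    # Signed depth difference, normalized to the transition zone width.
    t = (d_back - d_front) / epsilon
    # Map to a blend weight in [0, 1]: w = 1 means the front sample
    # fully wins, w = 0 means the back sample fully wins, and equal
    # depths give an even 50/50 mix.
    w = max(0.0, min(1.0, 0.5 + 0.5 * t))
    return tuple(w * cf + (1.0 - w) * cb for cf, cb in zip(c_front, c_back))
```

With clearly separated depths this reduces to a hard depth test (the nearer sample wins outright), while coincident depths yield an even mix of the two layer colors.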

BibTeX

@ARTICLE{Bruckner-2010-HVC,
author = {Stefan Bruckner and Peter Rautek and Ivan Viola and Mike Roberts
and Mario Costa Sousa and M. Eduard Gr{\"o}ller},
title = {Hybrid Visibility Compositing and Masking for Illustrative Rendering},
journal = {Computers \& Graphics},
year = {2010},
volume = {34},
pages = {361--369},
number = {4},
month = aug,
abstract = {In this paper, we introduce a novel framework for the compositing
of interactively rendered 3D layers tailored to the needs of scientific
illustration. Currently, traditional scientific illustrations are
produced in a series of composition stages, combining different pictorial
elements using 2D digital layering. Our approach extends the layer
metaphor into 3D without giving up the advantages of 2D methods.
The new compositing approach allows for effects such as selective
transparency, occlusion overrides, and soft depth buffering. Furthermore,
we show how common manipulation techniques such as masking can be
integrated into this concept. These tools behave just like in 2D,
but their influence extends beyond a single viewpoint. Since the
presented approach makes no assumptions about the underlying rendering
algorithms, layers can be generated based on polygonal geometry,
volumetric data, point-based representations, or others. Our implementation
exploits current graphics hardware and permits real-time interaction
and rendering.},
doi = {10.1016/j.cag.2010.04.003},
keywords = {compositing, masking, illustration},
url = {http://www.cg.tuwien.ac.at/research/publications/2010/bruckner-2010-HVC/}
}