This IEEE Transactions on Visualization and Computer Graphics (TVCG) special section brings you extended versions of four excellent articles originally presented at the Symposium on Interactive 3D Graphics and Games (I3D) in 2010. Since 1986, I3D has been a venue for cutting-edge computer science research in interactive graphics and human interaction. The field has grown significantly since the founding of I3D, but the symposium has maintained its premier status. Today, I3D is known for its close integration of academia and industry, as is clear from the mixture of attendees and of authors of the articles in this section. The symposium itself is also very popular, as it provides a great venue for meeting fellow researchers, forming new collaborations, and brainstorming new solutions.

I3D 2010 received 71 submissions from 19 countries on four continents. Each paper received at least three independent reviews from a group of 86 experts in the field of computer graphics and interactive techniques. After a discussion phase, 23 papers were selected for publication and presentation at the symposium (an acceptance rate of 32 percent). No limit was imposed on the number of accepted papers; the decisions were based purely on the submissions' merits. Following a tradition started with I3D 2009, we invited the authors of four of the accepted papers to expand their work with new insights and results for publication in this special section of TVCG on I3D. The choice of the invited papers was based on the evaluations and comments of our expert reviewers. These papers went through a complete journal review process, which included multiple iterations of editing and review.

“Stochastic Transparency” by Eric Enderton, Erik Sintorn, Peter Shirley, and David Luebke addresses the problem of achieving order-independent transparency for interactive rendering. The solution provided by the authors presents several desirable features: the required memory does not depend on the scene's depth complexity; it uses a fixed number of rendering passes, with its running time growing linearly with the number of fragments; it is general in the sense that it can replace multiple transparency algorithms for different scenarios; and it is easy to implement and a good fit for modern, massively parallel GPUs. The technique provides a practical solution for interactive applications, being capable of rendering all kinds of transparent geometry and providing a unifying approach for order-independent transparency, anti-aliasing, and shadowing.

“Efficient Sparse Voxel Octrees” by Samuli Laine and Tero Karras explores the use of voxel representations for rendering complex detailed geometry on current and future GPUs. In order to efficiently ray cast the resulting models, the authors propose a compact data structure for storing voxels. In this representation, voxels are augmented with contour information to increase geometry resolution. The paper also describes a new normal compression format for storing high-precision object-space normals. Using the proposed infrastructure, the authors challenge the current graphics trend of using a base mesh augmented with fine details represented by textures and displacement maps: as the amount of color and geometric detail information grows beyond a certain limit, wouldn't it make more sense to use a voxel representation to store both geometry and its associated attributes?

“Frankenrigs: Building Character Rigs from Multiple Sources” by Christian Miller, Okan Arikan, and Don Fussell addresses an important practical problem in 3D character animation. Rigging and skinning a character is a fundamental animation task, but a time-consuming and tedious one even for experts. The authors present an automatic solution for generating good-quality rigging and skinning that can greatly simplify the work of animators. Given a character's 3D mesh with annotated joints, the technique finds good matches for individual parts of the model by scanning a database of partial rigs. By transferring information from the partial rigs to the target mesh, the approach produces a skeleton and skinning weights whose quality is similar to that produced by expert animators.

“Improving Shape Depiction under Arbitrary Rendering” by Romain Vergne, Romain Pacanowski, Pascal Barla, Xavier Granier, and Christophe Schlick introduces a technique for enhancing shape depiction by scaling the reflected light as a function of curvature and material properties. Their solution can be used with any kind of material and works with both direct and global illumination. It can also be used with different lighting environments, supports inter-reflections, and works in real time. By providing intuitive control over the resulting shading, this technique can be an invaluable tool for allowing artists to achieve higher levels of expressiveness.

The scope and quality of the papers presented every year at I3D are well represented by these four articles, whose original and insightful ideas have been extended with more detailed analysis and discussion. For this, we would like to thank the authors and reviewers. Special thanks go to Thomas Ertl, TVCG EIC at the time of this process, for his continued support of I3D, and to Amitabh Varshney and Chris Wyman for organizing I3D 2010.

For information on obtaining reprints of this article, please send e-mail to: tvcg@computer.org.

Manuel M. Oliveira received the PhD degree from the University of North Carolina at Chapel Hill, in 2000. He is an associate professor of computer science at the Federal University of Rio Grande do Sul (UFRGS) in Brazil. Before joining UFRGS in 2002, he was an assistant professor of computer science at the State University of New York at Stony Brook (2000 to 2002). In the 2009-2010 academic year, he was a visiting associate professor at the Camera Culture Group at the Massachusetts Institute of Technology (MIT) Media Lab. His research interests cover most aspects of computer graphics, but especially the frontiers among graphics, image processing, and vision (both human and machine).

Daniel G. Aliaga received the PhD degree from the University of North Carolina (UNC). He is an associate professor of computer science at Purdue University. He is a researcher in computer graphics and computer vision, in particular in acquiring, modeling, and rendering 3D objects and scenes. He has served on numerous program committees, on several US National Science Foundation (NSF) panels, as a journal editor, and as a conference and paper chair, and has authored more than 60 papers.