OK, we made it on time. Yesterday Unity 5.0 was released, and so was redLights 2.0. If you want high-quality area lighting in your Unity 5.0 projects, check out our plugin in the Asset Store.

To celebrate all this, we prepared a new web demo that shows off our latest feature: support for Enlighten realtime GI. As usual, you can find it in our redLab.

Yesterday the redPlant team launched a new area on their website: the redLab, where we will showcase the latest experiments from our office in Germany.

The first entry is about redLights (SSAL), a new technique for illuminating scenes built with Unity3D. The corresponding Unity package will hit the Asset Store soon, so stay tuned!

The next article will follow shortly. It will contain an in-depth explanation of the deferred rendering pipeline that we developed and contributed to the three.js engine. It has been part of the stable branch since release r54.

I’m happy to announce that our SSAL package will be released on the Unity Asset Store soon.

Unity users will now be able to use realtime area lights in their applications via our screen-space implementation.

It supports both deferred and forward rendering and works correctly with the built-in Unity materials, lights, and shadows. And since it is rendered as an image effect, it is decoupled from the geometric complexity of the scene.

There is a new demo in our redPlant demo section showing a WebGL-based implementation of a deferred spotlight with deferred shadow mapping. It again utilizes the three.js library. However, if you are planning on doing deferred rendering with three.js, I highly recommend waiting for alteredqualia’s WebGLDeferredRenderer, which will include a more sophisticated integration with the three.js library. You can follow the discussion on GitHub.

It took a while, but I finally found the time to play around with WebGL, and three.js in particular. I was thoroughly impressed by Florian Bösch’s WebGL demo showing an implementation of deferred irradiance volumes, so I decided to try to write my own deferred renderer for three.js. You can check out the live demo in our demo section at redPlant.

It currently supports point light sources as well as deferred shadow maps. Spot lights, point-light shadows, and other features are not yet supported; there’s still a lot to do.

The pipeline works roughly as follows:

In the first pass, store depth as post-projection z/w in a floating-point render target.

The second pass stores view-space normals in another render target.

In the third pass, render a shadow map for a simple directional light source. This is just plain old shadow mapping.

Next, render a proxy sphere geometry for each point light in the scene. Inside the fragment shader for this sphere, sample the depth buffer and reconstruct the pixel’s view-space position by unprojecting z/w, i.e. multiplying by the inverse projection matrix.
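That reconstruction step can be sketched in plain Python (the matrix helpers and parameter names are my own illustration, not the demo's actual GLSL):

```python
import math

def perspective(fov_y, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix (row-major).
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def inverse_perspective(fov_y, aspect, near, far):
    # Analytic inverse of the matrix above.
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[aspect / f, 0.0, 0.0, 0.0],
            [0.0, 1.0 / f, 0.0, 0.0],
            [0.0, 0.0, 0.0, -1.0],
            [0.0, 0.0, (near - far) / (2.0 * far * near), (far + near) / (2.0 * far * near)]]

def transform(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def unproject(ndc_xy, depth_zw, inv_proj):
    # Build the NDC point from the fragment's screen position and the z/w
    # value sampled from the depth target, unproject, then divide by w.
    v = transform(inv_proj, [ndc_xy[0], ndc_xy[1], depth_zw, 1.0])
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3]]
```

Projecting a known view-space point and feeding its z/w back through `unproject` recovers the original position, which is exactly what the fragment shader relies on.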

Compute the light’s position in view space and calculate its attenuation with respect to the pixel’s view-space position. I’m using the attenuation formula from this guy.
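For illustration, one common falloff looks like this; note this is a generic stand-in, not necessarily the formula linked above:

```python
import math

def attenuation(light_pos, pixel_pos, radius):
    # Smooth falloff that reaches exactly zero at the light's radius,
    # which lets the proxy-sphere geometry bound the light's influence.
    d = math.dist(light_pos, pixel_pos)
    t = max(0.0, 1.0 - d / radius)
    return t * t
```

The quadratic shape avoids a hard edge at the sphere boundary while still clamping the light's reach to the proxy geometry.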

Write the result to the framebuffer. Repeat this step for each light source, accumulating every light’s contribution.

In the last pass, the light buffer is sampled using the UVs of a fullscreen quad.

The pixel’s view-space position is again reconstructed as described above.

The occlusion from the shadow map is determined by projecting the reconstructed view-space position into light space. This is done by multiplying the view-space position by the inverse view matrix, which yields the world-space position. Multiplying that by the light’s viewProjectionMatrix yields the light-clip-space position.

The depth of the projected position is compared with the corresponding pixel from the shadow map to determine the pixel’s occlusion. Standard deferred shadow mapping, so to speak.
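The whole view-to-light-space round trip can be condensed into a small sketch (matrix and sampler names are illustrative assumptions, not the demo's code):

```python
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def shadow_factor(view_pos, inv_view, light_view_proj, sample_shadow_map, bias=0.005):
    # Transform the reconstructed view-space position to world space,
    # then into the light's clip space, and compare depths.
    world = mat_vec(inv_view, list(view_pos) + [1.0])      # view -> world
    clip = mat_vec(light_view_proj, world)                 # world -> light clip space
    ndc = [c / clip[3] for c in clip[:3]]                  # perspective divide
    u, v = ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5          # [-1, 1] -> [0, 1]
    stored = sample_shadow_map(u, v)                       # depth the light saw
    # 0.0 = fully shadowed, 1.0 = lit; the bias fights shadow acne.
    return 0.0 if ndc[2] * 0.5 + 0.5 > stored + bias else 1.0
```

`sample_shadow_map` stands in for the shadow-map texture fetch; on the GPU the same comparison happens per fragment.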

In the last step, compute the directional light’s contribution based on its direction and the view-space normal from the second render target.
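In its simplest form (assuming a plain Lambert term, which the post does not spell out), that contribution is just the clamped dot product, masked by the shadow result:

```python
import math

def directional_contribution(normal, light_dir, shadow):
    # N·L diffuse term for the directional light, using the view-space
    # normal from the G-buffer; `shadow` is the 0/1 shadow-map factor.
    nlen = math.sqrt(sum(c * c for c in normal))
    llen = math.sqrt(sum(c * c for c in light_dir))
    n_dot_l = sum(a * b for a, b in zip(normal, light_dir)) / (nlen * llen)
    return max(0.0, n_dot_l) * shadow
```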

The performance is not necessarily jaw-dropping, but it can handle 1,500 point lights on a GTX 560. The demo renders around 440 point lights plus one directional light.

I’m sure I will release the source someday, but before that I need to clean up a couple of things. There is still a lot of room for improvement.

In this context, I would also like to point out another demo that demonstrates how WebGL can be used for realtime product configuration and visualization. The example shows a realtime configurator for furniture and allows the user to freely switch between different materials for pillows, seats, and other components.

That’s it for now.

Edit: I reduced the number of lights in the demo, as 440 seemed a bit too heavy for some mobile cards.

My latest attempt at SSAO. As a next step, I’m going to implement SSDO, and this demo is supposed to serve as a basis for that.

It uses a linear depth buffer that is packed into an A8R8G8B8 render target; check this article for details. Position reconstruction is done via the frustum-corner method as described here. The occlusion contribution of the taken samples is based on this tutorial; however, I replaced the sampling pattern with a basic spherical filter kernel. No blurring is done on the occlusion term. The final image is antialiased with Nvidia’s FXAA filter.
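The RGBA8 depth-packing trick can be sketched in Python (a stand-in for the shader code, assuming the usual fract-with-carry encoding; the linked article's exact variant may differ):

```python
def pack_depth(depth):
    # Spread a [0, 1) depth value across four 8-bit channels:
    # fract(depth * (1, 255, 255^2, 255^3)), then subtract the next
    # channel's contribution so each channel stays in [0, 1].
    enc = [(depth * s) % 1.0 for s in (1.0, 255.0, 255.0 ** 2, 255.0 ** 3)]
    return [enc[i] - enc[i + 1] / 255.0 for i in range(3)] + [enc[3]]

def unpack_depth(rgba):
    # Dot product with the reciprocal scale factors recovers the depth.
    return sum(c / s for c, s in zip(rgba, (1.0, 255.0, 255.0 ** 2, 255.0 ** 3)))
```

On the GPU each channel is additionally quantized to 8 bits, so the reconstruction is only accurate to roughly 32 bits of combined precision rather than exact as in this float sketch.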

And it is… a website! After nine weeks of intensive work, we have finally released the fresh redPlant website in a new design. We integrated a detailed project section that gives information about our capabilities and services. On the news page you can always check the current status of the little redPlant and “virtually” watch it grow. We hope you like the new site.
So check it out: www.redplant.de

Local illumination shading based on the paper “Measuring and Modeling Anisotropic Reflection” by Gregory J. Ward. We implemented this because we recently needed a way to simulate the anisotropic highlights seen on many brushed-metal surfaces. I think it turned out quite nicely.

Casting a ray through an octree and keeping track of all intersected nodes. Additionally, the exact intersection points between adjacent cells are registered.
This will be integrated into a volume renderer to accelerate raycasting by skipping empty cells within the volume that do not contain any meaningful data.
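The same empty-space-skipping idea can be sketched with a uniform grid instead of an octree (an Amanatides–Woo-style traversal; the actual implementation walks the octree hierarchy, which this simplification does not capture):

```python
import math

def traverse(origin, direction, occupied, grid_dim, cell=1.0):
    # Step the ray cell by cell, recording the ray parameter t at each
    # boundary crossing. `occupied` is the set of (i, j, k) cells that
    # contain meaningful data; everything else is skipped outright.
    ijk = [int(origin[a] // cell) for a in range(3)]
    step, t_max, t_delta = [], [], []
    for a in range(3):
        if direction[a] > 0:
            step.append(1)
            t_max.append(((ijk[a] + 1) * cell - origin[a]) / direction[a])
            t_delta.append(cell / direction[a])
        elif direction[a] < 0:
            step.append(-1)
            t_max.append((ijk[a] * cell - origin[a]) / direction[a])
            t_delta.append(cell / -direction[a])
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    t, hits = 0.0, []
    while all(0 <= ijk[a] < grid_dim[a] for a in range(3)):
        if tuple(ijk) in occupied:
            hits.append((tuple(ijk), t))   # t = entry point into this cell
        axis = min(range(3), key=lambda a: t_max[a])
        t = t_max[axis]
        ijk[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return hits
```

In the volume renderer, only the returned non-empty cells would be sampled, which is where the speedup comes from.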

The Beetle dataset is taken from TU Wien and has a resolution of 832x832x494 at 8 bit.

This is a Unity3D surface shader implementing the Cook-Torrance lighting model. It uses a Beckmann distribution to evaluate surface roughness. The series below shows the effect of varying values for the surface roughness (0.1–0.9) and different reflectance indices (top row 1.0, bottom row 2.0).

Last week we went up to Paderborn to present our yacht configurator at the Paderborner Workshop Augmented & Virtual Reality in Product Development, which took place as part of the Wissenschaftsforum Intelligente Technische Systeme 2011 at the Heinz Nixdorf Institute in Paderborn. Annika and Thomas gave the talk on our paper “3D-Produktkonfiguration mit natürlichen Interaktionstechniken – eine Fallstudie im Kontext Messepräsentation” (“3D product configuration with natural interaction techniques: a case study in the context of trade-fair presentation”), which was published in the conference proceedings.

The last post was relatively short, showing only a bit of skeleton code that calls a simple kernel function from the host side via a wrapper function. This time I would like to post a snippet that actually does something: an example showing how to implement a basic blur filter using the CUDA programming environment. In addition to the former snippet, this example also contains the previously missing parts, namely allocating/deallocating device memory and transferring data between host and device.

I figured that good old Lena would be a suitable test subject.
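The filter itself, independent of the CUDA plumbing, can be sketched in plain Python (a box blur on a flat, row-major grayscale buffer mirroring the device-memory layout; the post's actual CUDA kernel may differ in details):

```python
def box_blur(img, w, h, radius=1):
    # Each output pixel is the average of its (2*radius + 1)^2
    # neighborhood; sample coordinates are clamped at the borders,
    # replicating edge pixels.
    out = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sx = min(max(x + dx, 0), w - 1)
                    sy = min(max(y + dy, 0), h - 1)
                    acc += img[sy * w + sx]
            out[y * w + x] = acc / (2 * radius + 1) ** 2
    return out
```

In the CUDA version, the two outer loops disappear: each thread computes exactly one output pixel.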

Since I just installed the WordPress plugin for the Google Syntax Highlighter, I thought it would be a good opportunity to actually post a little bit of code in order to check it out.

For certain reasons I recently needed to wrap my head around integrating CUDA into an existing application written in C++. As usual, this turned out to be fairly easy once you have discovered the ten or so ways of doing it wrong…

This technique is not exactly brand new, but today was the first time I heard about it. Using interior mapping, one can add a lot of detail to buildings built from simple geometric primitives without creating additional geometry. All the work is done entirely within the pixel shader.

This technique is especially useful for scenes set inside a large city, for example. The buildings can be created entirely from simple primitives. Besides the obvious relief for artists, this also reduces the amount of geometry that has to be processed.
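The core idea can be sketched as follows (my own simplified illustration, not the technique's reference shader): for each pixel on the facade, intersect the view ray with an imaginary grid of room planes behind it and shade the nearest one.

```python
import math

def interior_hit(origin, direction, room=1.0):
    # origin: point on the facade in object space; direction: view ray
    # continuing into the building. Returns which axis-aligned room plane
    # the ray hits first, the ray parameter t, and the hit point, which
    # the shader would use to pick and sample an interior texture.
    best_t, best_axis = math.inf, None
    for a in range(3):
        if direction[a] == 0.0:
            continue
        # Next room boundary along this axis in the ray's direction.
        if direction[a] > 0.0:
            boundary = math.floor(origin[a] / room + 1.0) * room
        else:
            boundary = math.ceil(origin[a] / room - 1.0) * room
        t = (boundary - origin[a]) / direction[a]
        if t < best_t:
            best_t, best_axis = t, a
    point = [origin[i] + best_t * direction[i] for i in range(3)]
    return best_axis, best_t, point
```

Because only the ray-plane intersection is computed, the rooms cost nothing in geometry: walls, floors, and ceilings exist purely in the pixel shader's math.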

I just stumbled upon this article while having my first cup of coffee this morning. From time to time I urgently need something like this to replace the default Windows Task Manager: Doom as a tool for system administration.