
I’ve moved to working on the rendering engine, which will be my main focus for the next 3 months. There’s still some fixing needed on the sculpt tools, but I won’t make any more big changes there. The planned rendering development consists of four parts.

Shading System

The shading code will be refactored to make a clean separation between materials and lamps, and some corrections will be made to the current lighting calculations. But mostly the intention is a more modern system for designing materials, one that is not geared only to direct lighting from lamps but also works well for indirect light. Nodes will also be central to the way materials work, rather than something glued on top of them. It’s basically a merger between physically based rendering materials, which are designed for advanced lighting algorithms, and production rendering materials, which can do things like output passes or use tricks for speed.
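As a very rough sketch of that idea (illustrative only; none of these names are actual Blender code), a material node graph could output a “closure” describing how the surface scatters light, which the integrator then evaluates the same way for direct lamp light and for indirect bounces:

```c
/* Hypothetical sketch of node-based materials built on closures.
   The names (Closure, mix_closure, eval_reflectance_r) are made up
   for illustration and are not Blender code. */

typedef struct {
    float r, g, b;   /* reflectance color */
    float weight;    /* contribution of this closure */
} Closure;

/* A "mix" node blends two closures by a factor, the way a node
   material might combine a diffuse and a glossy component. */
static Closure mix_closure(Closure a, Closure b, float fac)
{
    Closure out;
    out.r = a.r * (1.0f - fac) + b.r * fac;
    out.g = a.g * (1.0f - fac) + b.g * fac;
    out.b = a.b * (1.0f - fac) + b.b * fac;
    out.weight = a.weight * (1.0f - fac) + b.weight * fac;
    return out;
}

/* Because the material outputs a closure rather than a final color,
   the integrator can evaluate it uniformly, whether the incoming
   light arrives directly from a lamp or from an indirect bounce.
   (Red channel only, to keep the sketch short.) */
static float eval_reflectance_r(Closure c, float incoming)
{
    return c.r * c.weight * incoming;
}
```

The point of the sketch is only the separation of concerns: nodes describe *what* the surface does, and the lighting algorithm decides *how* to sample it.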

The current design is on the wiki. We most likely won’t implement the full thing for Durian, but the intention is to implement the foundation and the parts that we use ourselves. Improved raytracing can then be implemented by others later.

Further, people have been asking about OpenCL for GPU acceleration. That is something we’re not planning; it wouldn’t be even remotely possible given the time constraints. The recently released Open Shading Language by Sony Imageworks would also be good to have, but a shading language is not something we can spend time on now. I do think what they are doing is in the same spirit, bringing together physically based and production rendering. I’m looking at their design to see how compatible we can be, so that someone can implement support for it later. It looks quite similar, the big difference of course being that we are building a node system while they’re making a shading language.

Indirect Diffuse Light

There’s already an incomplete implementation in trunk based on the approximate AO algorithm. We’ll try to extend this method to do proper shadowing. This could be done using either the recent micro-rendering algorithm (a bit simpler and more flexible) or the Pixar technique (proven to work). The main challenge here is keeping performance high enough: it is expected to be quite a bit slower, but hopefully still faster than raytracing, and it should work on scenes that don’t fit in memory.
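As a sketch of the point-based idea (my own illustrative names, not the trunk implementation): the scene is approximated by a cloud of oriented disks (“surfels”), and occlusion at a shading point is gathered by accumulating an approximate disk-to-point form factor from each disk:

```c
/* Hypothetical sketch of surfel gathering for point-based occlusion.
   Struct and function names are made up for illustration. */

typedef struct {
    float px, py, pz;   /* disk center */
    float nx, ny, nz;   /* disk normal (unit length) */
    float area;         /* disk area */
} Surfel;

/* Approximate form factor of a disk as seen from a receiver point,
   a common surfel-gathering approximation.  Back-facing disks and
   disks below the receiver's horizon contribute nothing. */
static float disk_form_factor(const Surfel *s,
                              float rx, float ry, float rz,  /* receiver pos */
                              float nx, float ny, float nz)  /* receiver normal */
{
    const float PIF = 3.14159265f;
    float dx = s->px - rx, dy = s->py - ry, dz = s->pz - rz;
    float d2 = dx*dx + dy*dy + dz*dz;
    if (d2 == 0.0f)
        return 0.0f;

    /* unnormalized cosines: dot products with the direction vector */
    float dot_r = dx*nx + dy*ny + dz*nz;                /* at receiver */
    float dot_e = -(dx*s->nx + dy*s->ny + dz*s->nz);    /* at emitter disk */
    if (dot_r <= 0.0f || dot_e <= 0.0f)
        return 0.0f;

    /* cos_r * cos_e = dot_r * dot_e / d2, so no sqrt is needed */
    return (s->area * dot_r * dot_e) / (d2 * (PIF * d2 + s->area));
}
```

Summing this over the surfels near a point gives an occlusion estimate; the hard part the post alludes to is doing that hierarchically and fast enough to beat raytracing.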

Disk Caching

Memory usage is a big problem, especially when rendering at 4K. The plan is that many data structures in the rendering engine will be cached to disk and loaded only when needed. The main implementation issues here are how to do this efficiently with threads, and how to avoid latency killing render performance. There are many things that could be cached to disk; hopefully we can implement it for most of these:

Image textures

Shadow maps

Multires Displacements

Smoke/Voxel data

SSS tree

Point Based Occlusion/GI tree
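To make the caching idea concrete, here is a minimal sketch (illustrative only, not Blender code) of a tile cache with a fixed memory budget and least-recently-used eviction; a real implementation would write evicted tile data out and read it back from disk on a miss:

```c
#include <string.h>

/* Hypothetical sketch of an LRU cache for image texture tiles.
   All names and the tiny fixed budget are made up for illustration. */

#define MAX_TILES 4   /* memory budget, in tiles, for the sketch */

typedef struct {
    int tile_id;      /* which tile of the texture this is */
    long last_used;   /* logical timestamp for LRU */
    int resident;
} CacheSlot;

typedef struct {
    CacheSlot slots[MAX_TILES];
    long clock;
    int evictions;
} TileCache;

static void cache_init(TileCache *c) { memset(c, 0, sizeof(*c)); }

/* Touch a tile: returns 1 on a hit; on a miss, "loads" the tile,
   evicting the least recently used slot when the cache is full. */
static int cache_touch(TileCache *c, int tile_id)
{
    int i, lru = 0;
    c->clock++;
    for (i = 0; i < MAX_TILES; i++) {
        if (c->slots[i].resident && c->slots[i].tile_id == tile_id) {
            c->slots[i].last_used = c->clock;
            return 1;  /* cache hit */
        }
    }
    /* miss: pick a free slot, or else the least recently used one */
    for (i = 0; i < MAX_TILES; i++) {
        if (!c->slots[i].resident) { lru = i; break; }
        if (c->slots[i].last_used < c->slots[lru].last_used) lru = i;
    }
    if (c->slots[lru].resident)
        c->evictions++;   /* real code would drop or flush tile data here */
    c->slots[lru].tile_id = tile_id;
    c->slots[lru].last_used = c->clock;
    c->slots[lru].resident = 1;
    return 0;  /* cache miss */
}
```

The threading and latency problems mentioned above come in once several render threads touch the cache at once and misses block on disk IO.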

Per Tile Subdivision

This is probably the most complex one: we want to subdivide meshes per tile to render very finely displaced meshes. One challenge is that this requires a patch-based subdivision surface algorithm that does not need the full mesh in memory. The existing subdivision library could be modified to do this, but it would not be very efficient. Another possibility is to integrate the QDune Catmull-Clark code.
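A small sketch of how a dice rate might be chosen per patch (illustrative only; a real scheme needs proper screen-space estimation and the crack handling discussed below): subdivide until the patch’s projected edge length drops below a target shading rate in pixels.

```c
/* Hypothetical helper: given a patch's projected edge length in
   pixels, return how many times it must be subdivided (each level
   halves the edge) so the result is at most target_px pixels across.
   Made up for illustration, not Blender code. */
static int dice_level(float projected_px, float target_px)
{
    int level = 0;
    while (projected_px > target_px && level < 16) {  /* clamp for safety */
        projected_px *= 0.5f;
        level++;
    }
    return level;
}
```

Because each tile computes this independently, two tiles can pick different levels for adjacent patches, which is exactly where the grid cracks mentioned below come from.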

Other problems are grid cracks, though perhaps these are not too difficult to solve if we take them into account from the start. Another concern is the filtering of multires displacements. This is quite a complicated problem; if it doesn’t get solved, we’ll need a simple workflow for baking multires to displacement maps. The existing displacement code also needs to be improved to do filtering properly.

Other issues are how to deal with threading, sharing diced patches between threads, and distributing objects/patches across tiles efficiently. Will be a fun challenge :).

For Interested Developers

Of course these are just plans, and we’ll see how far we get, though I hope we can do all of them. If you’re a developer interested in helping out, the disk caching would be a good project to pick up, as it doesn’t involve that much knowledge of the render engine. Approximate indirect diffuse lighting could also be a good thing to help on: most of the data structures are already there and the point cloud is already built, so it is mostly a matter of implementing the rasterization.

Brecht.

This entry was posted
on Wednesday, January 13th, 2010 at 9:48 pm and is filed under Development.

Too bad about OpenCL though :'( :'(. Does that mean it will never happen, or just that you won’t have time in those 3 months?

About shaders and materials: do you plan to make it easier to load hand-written GLSL in a material, instead of going through Python bindings? Then we could write GLSL, or re-use it even for rendering. If it could work like HLSL does in 3ds Max that would be very great, and easier for using external GLSL. The Python binding is a bit of a pain and limited to the BGE 🙁

This is the greatest news I could hope for!
Go Brecht go! I blindly trust you, and the thought of your talent at work on a single area for 3 months makes me really happy!
(it’s just too bad that February has only 28 days…)

Brecht, one small request which I have been wanting for a long time. I have had problems when creating a forest-like scene: the huge number of polygons crashes Blender. This problem could be solved by a proxy system, i.e. we place low-poly models which act as placeholders for high-poly objects that might reside in a different blend file (so we can also make libraries :)). The renderer replaces the low-poly ones with the high-poly ones only while rendering. This can help a lot in huge scenes, and might also help a lot in Durian for rendering large scenes with many mesh objects.

Really an interesting post. I have one small question/request: will these updates allow for rendering SSS as a separate pass? I have seen this done with other programs, and it allows for some interesting work with compositing.

Long, long ago, during the O.T. rendering days, I had a persistent problem when there were small faces in the scene (so small that after subdivision they get even smaller): flashy, flickering tiny white dots (way bigger than the faces that originated them) appearing in the animation.
A year and something later I rendered another animation and the same issue was still present.
I suppose the problem only shows up in low-resolution renderings, because I don’t spot it in BBB or ED… but does anybody know if this is a known issue?

“This could be done using either the recent micro-rendering algorithm (a bit simpler and more flexible) or the pixar technique (proven to work).”

Brecht, or Matt, whoever knows… 🙂
If that can be fully implemented, it will answer more than 90% of the Blender community’s wishes and hopes of many years now.
This is a core feature Blender can no longer do without.

All my encouragement for you and your hard work so far, and your promising plans!

Disk caching: please don’t forget the lessons of the Varnish proxy. http://varnish.projects.linpro.no/ . Don’t fight with the OS. Most OSes are good at managing virtual memory. If anything, use more memory mapping of files, reducing the data that gets sent to the swapfile. Most OSes will automatically drop mapped file pages when needed.

Squid is a classic case of disk caching fighting with OS caching. Swapfiles are not fast. Having items recovered from swap just to be transferred to the disk cache completely undermines the object of disk caching.

Memory mapping prevents items in the disk cache from entering the swapfile, removing the risk of a double slow-drive event.

Basically we need a management system for memory-mapped files. Clearing a memory map will not touch swap; adding a memory map will not touch swap. So the double-handling event that causes lag, i.e. a transfer from swap to disk or disk to swap, is impossible. The lowest possible IO is important, and mapping can achieve this.
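A minimal POSIX sketch of this suggestion, assuming a Unix-like system (the function and file name are made up for the example): map a cache file with mmap and let the kernel page data in and out on its own.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical example: read every byte of a cache file through a
   memory mapping instead of explicit reads into heap buffers. */
static long sum_mapped_bytes(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return -1; }

    /* Pages are loaded lazily on first access, and the kernel can drop
       them under memory pressure without ever touching the swapfile. */
    unsigned char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                               MAP_PRIVATE, fd, 0);
    close(fd);  /* the mapping stays valid after close */
    if (data == MAP_FAILED) return -1;

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += data[i];

    munmap(data, (size_t)st.st_size);
    return sum;
}
```

The renderer-side question would then be which of the cacheable structures listed in the post can be laid out in files flat enough to be used directly through a mapping like this.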

I think this is the right time to make the render passes customisable and more like a professional renderer’s, so my compositor will stop hatin’ on me 😀
If you can make it happen it would be awesome, and everyone could use Blender in big pipeline projects.