
Sunday, 3 July 2016

The Rendering System

Previously we covered how geometry is created. This time we will look at how it is managed and rendered.

Models

It was mentioned in my last post that geometry is built into fixed-size buffers. This limits the amount of geometry that can be created by a single procedure. For a small object, or one of low detail (because it is farther away), this may not be a problem, but if we are to build huge, detailed worlds then it most certainly is. To overcome this, a number of systems and techniques are used.
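To make the constraint concrete, here is a minimal sketch of what a fixed-size geometry buffer could look like. The type and member names are my own assumptions, not Apparance's actual code; the point is simply that a procedure writing vertices can run out of room and must report it, so the system knows the model has to be split into smaller parts.

```cpp
#include <array>
#include <cstddef>

// Hypothetical vertex type for illustration only.
struct Vertex { float x, y, z; };

// A buffer with a hard capacity: once full, further geometry is refused.
template <std::size_t Capacity>
class FixedGeometryBuffer {
public:
    // Returns false when the buffer is out of space.
    bool TryAdd(const Vertex& v) {
        if (count_ == Capacity) return false;
        vertices_[count_++] = v;
        return true;
    }
    std::size_t Count() const { return count_; }
    bool Full() const { return count_ == Capacity; }
private:
    std::array<Vertex, Capacity> vertices_{};
    std::size_t count_ = 0;
};
```

A procedure that fills such a buffer mid-build is the cue for the refinement machinery described below: the model is too big for one pass at this detail level.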

Refinement

Models are managed within a spatial octree, each node being responsible for any models that fit reasonably within its own bounds. Smaller models are managed by the smaller nodes deeper in the octree.
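One way to pick the node for a model is from its size alone: each octree level halves the node size, so the appropriate depth follows from the ratio of world size to model size. This is a sketch under my own assumptions (the function name and the "node at least as large as the model" rule are illustrative), not Apparance's actual placement logic.

```cpp
#include <algorithm>
#include <cmath>

// Pick the deepest octree level whose nodes are still at least as large as
// the model, so the model fits "reasonably" within one node's bounds.
int DepthForModel(float worldSize, float modelSize, int maxDepth) {
    // Each level halves the node size: size(depth) = worldSize / 2^depth.
    int depth = static_cast<int>(std::floor(std::log2(worldSize / modelSize)));
    return std::clamp(depth, 0, maxDepth);
}
```

For example, in a 1024-unit world a 128-unit model lands three levels down, where nodes are exactly 128 units across.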

During the synthesis process, the sub-procedures used (and any bounding information that can be obtained from them) are analysed, and in certain cases stored. The aim here is to capture a set of sub-procedures that fully represent the model, but as smaller component parts. These parts can then be used to build more detailed versions of parts of the whole model, and can be managed by the smaller, more suitably sized octree nodes. This effectively provides a way of re-synthesising successively smaller parts of any model as we need extra detail deeper in the octree. This 'refinement' process is driven by proximity to the viewpoint, using the deeper, more detailed model parts in areas nearer the camera.
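The proximity test that drives refinement can be as simple as comparing camera distance against a multiple of the node's size, so that a node is refined when it subtends a large enough angle on screen. The names and the tunable factor below are assumptions for illustration, not the engine's actual heuristic.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float Distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Refine (re-synthesise this node's children from the captured
// sub-procedures) when the camera is close relative to the node's size.
// detailFactor is a tunable quality knob: higher means more refinement.
bool ShouldRefine(const Vec3& camera, const Vec3& nodeCentre,
                  float nodeSize, float detailFactor = 2.0f) {
    return Distance(camera, nodeCentre) < nodeSize * detailFactor;
}
```

Because each child node is half the size, the same test applied recursively naturally concentrates the deepest, most detailed parts near the camera.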

Successive octree levels, and the geometry managed by each

Authoring

Procedures do need to be built somewhat with this process in mind. There are certainly ways to help or hinder the process and prevent the system from operating at its best, but the tools provide feedback and diagnostics to help you optimise them. This is another area I will dig into in more detail in a later post.

Rendering

The rendering engine for Apparance has always been fairly basic, as most of the work has gone into proving out the procedure synthesis and detail refinement techniques. All the renderer needed to do was draw coloured triangles with a couple of fixed light sources. This was implemented in DirectX 9, based on a fairly simple cube rendering sample. Even with no materials, no texturing, and only simple primitives, I have been able to make quite a wide range of examples.

Small sample of results achieved with basic renderer

The renderer itself has been written to be fairly robust and flexible: it supports multiple viewports, cameras, and scenes; runs on its own thread; and handles window resizing and device loss properly.

Shaders

Current focus

Driven mainly by the need to start blending between meshes of different detail levels, I decided I needed to add shader support, and this is my current focus.

With the flexibility and power that shader-based rendering brings, I will be able to implement an elegant blending system and better lighting, and start experimenting with more realistic surface properties.
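As a rough idea of what a shader-driven blend between detail levels involves, a cross-fade factor can be derived from camera distance across a transition band and fed to both meshes (one fading in, one fading out). This is my own sketch of one plausible approach, with assumed names and band edges; the post doesn't describe the actual blending scheme.

```cpp
#include <algorithm>

// Blend factor for cross-fading detail levels: 0 shows only the coarse
// mesh, 1 only the refined one, ramping linearly across the band between
// refineStart (far edge) and refineEnd (near edge, refineEnd < refineStart).
float DetailBlend(float distance, float refineStart, float refineEnd) {
    float t = (refineStart - distance) / (refineStart - refineEnd);
    return std::clamp(t, 0.0f, 1.0f);
}
```

In practice this value would be passed to the shaders as a per-mesh constant, which is exactly the kind of thing that needs shader support rather than fixed-function rendering.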

I decided I should certainly allow run-time authoring of shaders, as this is an important premise of the Apparance tool philosophy. To do this, I also decided that the shader code should be procedurally constructed by the same systems that build the models. Not only does this mean I can easily re-use shader functions, constructs, and pieces of code, it even allows parameterisation of the shader code itself. This should have all sorts of interesting effect-creation potential.
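At its simplest, procedurally constructing parameterised shader code means substituting named values into reusable source fragments before compilation. The placeholder syntax and function below are assumptions for illustration; Apparance builds shaders through its procedure system, which this only loosely mimics.

```cpp
#include <map>
#include <string>

// Substitute named parameters (written as $name$) into a shader source
// fragment, producing final source text ready to hand to the compiler.
std::string BuildShaderSource(
    const std::string& fragment,
    const std::map<std::string, std::string>& params) {
    std::string out = fragment;
    for (const auto& [name, value] : params) {
        const std::string placeholder = "$" + name + "$";
        std::size_t pos;
        while ((pos = out.find(placeholder)) != std::string::npos) {
            out.replace(pos, placeholder.size(), value);
        }
    }
    return out;
}
```

For example, `BuildShaderSource("color = $tint$;", {{"tint", "float4(1,0,0,1)"}})` yields `color = float4(1,0,0,1);`, and the same fragment can be reused with different tints, which is the re-use and parameterisation benefit described above.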

Trouble

During testing of DirectX 9 shaders I hit some nasty snags to do with background compilation of shaders during rendering, shader lifetime management, and finally a crash when ending and releasing shader resources that I couldn't resolve. Even using my simple training app I couldn't solve the issue, and it turns out that under Windows 10, debugging and diagnostics for DirectX 9 aren't supported, so no help there. My solution was to bite the bullet and upgrade the engine to DirectX 11, which represents a significant improvement in features and support, as well as being fully integrated into the OS and having significant debugging support. Unfortunately this did mean learning about all the differences and writing another learning app, but it seems like a good move in the long run: I was probably going to need it at some point anyway, and DirectX 11 has some nice improvements in the way you handle shaders that it will be good to get used to.

New rendering and shader test app for DirectX 11

Graphics Fu

Eventually I am going to need some fairly fancy rendering features to show off the models properly, such as multi-texturing, advanced light sources, high quality shadows, ambient occlusion, and maybe even global illumination. I am treating these as 'solved' problems and prioritising many other, more unique, features over them. I am also likely to need help with the harder graphics tech and should start to involve others in the project more closely, but that will depend on how much interest I can raise in the project and whether I can find funds to build a team around it in the future. We shall see...

Next

I was going to describe my development setup a little here, but I think I'll leave it until a later post. Next time I'll talk about the editor and how it is used to develop procedures.
