Then our connector can either use this small interface, or we can just set a lambda in case we can't be bothered adding one file + namespace + new class; basically, get rid of the object oriented verbosity ;)
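As a minimal sketch of that idea (the names `IDisplayValueGetter` and `DisplayConnector` are mine for illustration, not from the actual code base), the connector can accept either the interface or a plain `Func`:

```csharp
using System;

// Hypothetical small interface; one file, one namespace, one class...
public interface IDisplayValueGetter
{
    object GetValue(object instance);
}

public class DisplayConnector
{
    private readonly Func<object, object> getter;

    // ...or skip all that and just hand over a lambda.
    public DisplayConnector(Func<object, object> getter)
    {
        this.getter = getter;
    }

    // The interface path simply forwards to the delegate path.
    public DisplayConnector(IDisplayValueGetter g) : this(g.GetValue) { }

    public object Resolve(object instance) => getter(instance);
}
```

Usage is then a one-liner: `var c = new DisplayConnector(o => ((string)o).Length);`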

Now we can simply reflect the type once, create the IO based on rules, but operate directly on the properties.

Direct access costs us a few virtual calls, but since there is no extra layer on top, even that cost is minimized.
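One common way to get "reflect once, then direct access" in .NET is to reflect the property a single time and compile a delegate from an expression tree (a sketch; the helper name is mine):

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

public static class PropertyAccess
{
    // Reflect the property once, then compile a delegate: per-call cost drops
    // to a delegate invocation instead of PropertyInfo.GetValue every time.
    public static Func<object, object> BuildGetter(Type type, string propertyName)
    {
        PropertyInfo pi = type.GetProperty(propertyName);
        ParameterExpression instance = Expression.Parameter(typeof(object), "o");
        // (T)o . Property, boxed back to object
        Expression body = Expression.Convert(
            Expression.Property(Expression.Convert(instance, type), pi),
            typeof(object));
        return Expression.Lambda<Func<object, object>>(body, instance).Compile();
    }
}
```

You pay the reflection and compilation cost once per type/property, then reuse the delegate for every access.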

Once we have this, let's take it further and show how to display our element, with a custom formatted string per type. In this example I just log to the console, but this is easily extendable to a user interface, writing, serialization...

Our final code, which takes an instance and displays any supported data:

Code Snippet

DisplayPropertyRegistry registry = new DisplayPropertyRegistry(
    new DoubleDisplayFactory(),
    new IntDisplayFactory());

DisplayPropertyFilter filter = new DisplayPropertyFilter(registry);
DisplayObjectFactory builder = new DisplayObjectFactory(filter);

Prop p = new Prop()
{
    Hello = 20.0,
    Integer = 20
};

var display = builder.GetDisplay(p);
display.Display();

You could easily think that's a lot of classes and a lot of code just to display the properties of a class (we could just iterate through the property list and do some "ToString").
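For comparison, the quick-and-dirty version would look something like this (a sketch of the naive approach, not code from the post):

```csharp
using System;
using System.Reflection;
using System.Text;

public static class NaiveDisplay
{
    // Pure reflection: one PropertyInfo.GetValue call per property, on every
    // single display. Fine done once, costly in a hot loop.
    public static string Dump(object o)
    {
        var sb = new StringBuilder();
        foreach (PropertyInfo pi in o.GetType().GetProperties())
            sb.AppendLine($"{pi.Name} = {pi.GetValue(o)}");
        return sb.ToString();
    }
}
```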

But...

This is more extensible, and if you need to access your properties a lot, you get a pretty big gain over reflection.

You can register any type, replace as you wish, so you avoid the dreaded massive switch.
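A registry like that is typically just a dictionary keyed by type (a sketch; the class name mirrors the snippet above but the exact shape, a `Register` method and a one-argument `Format`, is my assumption):

```csharp
using System;
using System.Collections.Generic;

public interface IDisplayFactory
{
    string Format(object value);
}

public class DisplayPropertyRegistry
{
    private readonly Dictionary<Type, IDisplayFactory> factories = new();

    // Register any type, or replace an existing factory: no massive switch.
    public void Register(Type t, IDisplayFactory f) => factories[t] = f;

    public bool CanDisplay(Type t) => factories.ContainsKey(t);

    public IDisplayFactory GetFactory(Type t) =>
        factories.TryGetValue(t, out var f)
            ? f
            : throw new NotSupportedException($"No display factory for {t}");
}
```

Dictionary lookup replaces the switch, and swapping behaviour for a type is just another `Register` call.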

By having more classes with less work per class, you get many more invariants: pretty much every class is ready to go once constructed, with no temporal coupling (the only edge case is CanDisplay/GetFactory, which could throw a related exception).

You are not limited to just a "ToString": create a controller, a network serializer, add a proxy for automatic property change dispatch... the possibilities are endless ;)

Here we are for now; I promise I'll get back to the Scene Graph next post (or maybe not ;)

Converters : You will likely want to convert your data into some internal representation. For example, prepare a 3d model in a way that can be uploaded directly to the GPU, or take a batch of textures and convert them into a texture array.

Generators : Most of the time we also need more procedural assets, so a simple generator function is also eligible as an asset (random buffers are something I use widely).

Child asset : For example, from a 3d model asset you might want a distance field representation.

Metadata : A 2D or 3D texture doesn't make any sense on its own, so we also need to indicate what data is contained in the texture (distance field/potential field/velocity/normal map/height map...)

Drag drop handler : so when you drop an asset on the patch, it creates the appropriate node.

Hierarchy : so if you have a texture folder (namespace), you can prefetch all resources in that folder for fast switching.
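To make the generator idea concrete, a generator function can be wrapped so it behaves like any other asset (all type names here are my own illustration, not FlareTic's):

```csharp
using System;

// Hypothetical minimal asset contract.
public interface IAsset<T>
{
    T Load();
}

// A generator function is just another asset: calling Load runs the function.
public class GeneratedAsset<T> : IAsset<T>
{
    private readonly Func<T> generator;

    public GeneratedAsset(Func<T> generator)
    {
        this.generator = generator;
    }

    public T Load() => generator();
}
```

So a random buffer asset is simply `new GeneratedAsset<float[]>(() => MakeRandomBuffer(1024))`, and the rest of the pipeline never needs to know it was procedural.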

2/Render Elements

At some point in our scene, we need some resources which are not part of our asset system (some shaders, some buffers....)

Here are all the render element categories in FlareTic:

Fully immutable : These resources are loaded once per node and will never change (shaders are a very good example).

Low update frequency : These are the ones that don't change very often (mostly at design time, in fact). So they are built as immutable on the GPU, and dumped/recreated when required by the user. A very common example is standard primitives. Since the resource is immutable on the GPU, it can be uploaded async without a problem.

Dynamic Resources (CPU) : These are resources which change pretty often, like camera textures. In FlareTic they are built as dynamic and uploaded to the GPU using MapSubResource. They require a render context, so note that they do not fit the async loading model very well. As a side note: please, Microsoft and beloved DirectX team, can we one day async upload content into an existing resource without needing a render context? It seems (I hope) DirectX 12 is heading that way.

Dynamic Resources (GPU) : This is increasingly important (fundamental, even) in FlareTic. A lot of resources are updated once per frame, via either Stream Output or Compute Shaders (for example, geometry via modifiers, particle systems, splines...). They also require a render context, but no upload, so they are quite a bit easier to schedule.

Renderables : Once we have our particle system ready, we might want to render it on screen as well ;) Renderables can be rendered several times per frame (shadow map/depth pre-pass...)

Post processors : Who can survive without them ;) Of course you also want to provide both a forward path and a deferred path.
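One way to make those categories actionable is to encode them so a scheduler can tell what is async-friendly and what needs the render context (a sketch; these names are mine, not FlareTic's):

```csharp
// Update policy per render element, matching the categories above.
public enum ResourceUpdatePolicy
{
    Immutable,     // loaded once, never changes, async-friendly (shaders)
    LowFrequency,  // rebuilt on user edits, immutable on GPU, async-friendly
    DynamicCpu,    // per-frame CPU upload, needs the render context
    DynamicGpu     // per-frame GPU-side update, needs context but no upload
}

public interface IRenderElement
{
    ResourceUpdatePolicy Policy { get; }

    // Only the dynamic categories need serial access to the render context.
    bool NeedsRenderContext =>
        Policy == ResourceUpdatePolicy.DynamicCpu ||
        Policy == ResourceUpdatePolicy.DynamicGpu;
}
```

A loader can then batch everything where `NeedsRenderContext` is false onto background threads and serialize the rest.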

3/Stages

From the time we start our application to the time our scene is rendered on the screen, we of course have several stages.

A/Load/Unload

This is when we load a scene graph patch, and it is trickier than many people think.

We of course want to be able to load/dump a scene graph patch in an async way.

So let's look at the load process:

Create all nodes : We first create all nodes; there is no mixing of node/link loading, and the JSON file format reflects this pretty well (no messy XML). One place for all nodes, one place for all links (life can be so simple sometimes ;)

Assign parameters : We set parameter values for the nodes. Please note that in (most) cases this is immediate, but in some cases they can be retained (see below).

Connect nodes : Since our nodes already told us at load time what they expect, we can safely connect them together. Since some connections can also create a compile task, it is important to keep this in the background too.

Apply retained parameters : As mentioned, a compile task can be created at link time (feels a bit reversed here ;) So some shader parameters are created at this stage. Once this happens, the retained parameters are finally flushed to the node. On a side note, since dx11.2 this is now a deprecated feature (since I can directly reflect a function instead).
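The four phases above can be sketched as a single load routine (all names and the `CanApplyNow` placeholder are my own; the ordering is the point):

```csharp
using System.Collections.Generic;

public class Node
{
    public Dictionary<string, object> Parameters { get; } = new();
}

public class PatchLoader
{
    public List<Node> Load(
        IEnumerable<string> nodeIds,
        IEnumerable<(string node, string param, object value)> parameters,
        IEnumerable<(string from, string to)> links)
    {
        // 1. Create all nodes first: no interleaving of nodes and links.
        var nodes = new Dictionary<string, Node>();
        foreach (var id in nodeIds) nodes[id] = new Node();

        // 2. Assign parameters; values we can't apply yet are retained.
        var retained = new List<(string node, string param, object value)>();
        foreach (var p in parameters)
        {
            if (CanApplyNow(p)) nodes[p.node].Parameters[p.param] = p.value;
            else retained.Add(p);
        }

        // 3. Connect nodes (this may schedule compile tasks in the background).
        foreach (var l in links) Connect(nodes[l.from], nodes[l.to]);

        // 4. Flush retained parameters, now that link-time compilation
        //    has created their targets.
        foreach (var p in retained) nodes[p.node].Parameters[p.param] = p.value;

        return new List<Node>(nodes.Values);
    }

    // Placeholder: would return false for parameters whose target only
    // exists after a link-time compile (e.g. some shader parameters).
    private bool CanApplyNow((string node, string param, object value) p) => true;

    private void Connect(Node a, Node b) { /* create link, maybe a compile task */ }
}
```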

So now what we really want is to modify this to make it more flexible:

Load behaviour : We want to load a patch without any resources. This can easily be done in a thread and is pretty fast. Loading only the object graph doesn't consume much memory either, so it's rather simple to have a lot of scene graphs ready to show off their coolness.

Load resources : We indicate to the scene that we want it ready to show off, so it should get dressed. Again, we want this async (and ideally with a little progress bar as eye candy ;)
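The two-phase behaviour maps naturally onto Tasks with an `IProgress` callback for the progress bar (a sketch under assumed names, not FlareTic's API):

```csharp
using System;
using System.Threading.Tasks;

public class ScenePatch
{
    public bool GraphLoaded { get; private set; }
    public bool ResourcesLoaded { get; private set; }

    // Phase 1: object graph only. Cheap, low memory, safe on a thread,
    // so many patches can sit "ready" at once.
    public Task LoadGraphAsync() =>
        Task.Run(() => GraphLoaded = true);

    // Phase 2: dress the scene with its resources, reporting progress
    // along the way for the eye candy.
    public async Task LoadResourcesAsync(IProgress<double> progress, int resourceCount)
    {
        for (int i = 0; i < resourceCount; i++)
        {
            await Task.Yield(); // stand-in for the real async resource load
            progress?.Report((i + 1.0) / resourceCount);
        }
        ResourcesLoaded = true;
    }
}
```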

B/Render

I'll keep rendering as one stage, but obviously it is divided into several sub-stages:

Update low frequency resources : These are trivial to do async, though some concurrency control is still useful.

Update per frame resources : These do not generally depend on the graph, so they can also be pushed, but in this case serially (render context).

Update GPU resources : Particles, geometry modifiers... all the bits that need to be made to look good afterwards.

Render : Make look good

Apply post processing : Make look better
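The five sub-stages can be driven by a simple ordered pipeline (a sketch; the stage contents are placeholders, the ordering is what matters):

```csharp
using System;

public class FramePipeline
{
    private readonly Action[] stages;

    public FramePipeline(Action updateLowFrequency, Action updatePerFrame,
                         Action updateGpu, Action render, Action postProcess)
    {
        // Order matters: uploads before GPU-side updates, GPU updates
        // before rendering, post processing last.
        stages = new[] { updateLowFrequency, updatePerFrame, updateGpu,
                         render, postProcess };
    }

    public void RenderFrame()
    {
        foreach (var stage in stages) stage();
    }
}
```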

C/Fine tuned priority

Nodes in FlareTic scene graphs have no centralized priority system (unlike a pull-based dataflow, for example).

Instead, a node has full decision power over how to process its own children.

This gives some pretty nice advantages:

Since a node can decide, we have full flexibility and can make complex decisions trivially.

Easy to cut off a part of the graph: just don't call the child ;)

Easy for the user to just connect things and have them work, instead of ending up with a mess because something was connected in the wrong order.
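In sketch form, node-driven traversal looks like this (illustrative names; FlareTic's actual node class will differ):

```csharp
using System.Collections.Generic;

public class SceneNode
{
    public List<SceneNode> Children { get; } = new();
    public bool Enabled { get; set; } = true;

    public virtual void Process()
    {
        if (!Enabled) return;  // cut this whole branch: just don't call children

        ProcessSelf();

        // A subclass can override Process to reorder, filter, or repeat
        // children: the node has full decision power over its traversal.
        foreach (var child in Children)
            child.Process();
    }

    protected virtual void ProcessSelf() { }
}
```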

Of course there are also some drawbacks:

A lot of work is handled by the node. This can of course be offset using abstract classes, but it's still quite some work.

It's harder to have more global control (logging/error handling is very easy when you have a centralized traversal).

No global control also makes it harder to process parts in parallel, which is something that increasingly needs to be taken into account.

So we need the best of both worlds in some way: dataflow doesn't give the flexibility we need, but we still want that global control.

Luckily, as we will see, this can be solved rather elegantly.

Of course there are a lot of other features (already in place, like shader caching, dynamic compilation, handlers, resource pools...) that are part of a scene graph, but they are not so much "core features", so they might be discussed in separate posts at some point.