True, but I think the design idea still stands. If you have a parent reference in every class, you can always navigate up the hierarchy if your platform requires something to carry out its functionality.

E.g., your ResourceLoader class might need to obtain a pointer to the X11 Display struct in order to create or load an image (e.g., via XCreatePixmap()).

If your ResourceLoader can't fully encapsulate the process of loading resources, then you need to rethink your design. Of course it may need a reference to the underlying device, but it should never have to go searching for it - you should pass a reference to the device into the constructor.
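To make that concrete, here's a minimal sketch of constructor injection in C# (hypothetical type names, not your engine's actual classes):

```csharp
using System;

// Hypothetical types for illustration only; the real engine's names will differ.
public class GraphicsDevice
{
    // Wraps the platform-specific device/display handle (D3D device, X11 Display, ...).
}

public class ResourceLoader
{
    private readonly GraphicsDevice _device;

    // The device is handed in once at construction, so the loader
    // never has to walk a parent chain to find it.
    public ResourceLoader(GraphicsDevice device)
    {
        _device = device ?? throw new ArgumentNullException(nameof(device));
    }

    public object LoadImage(string path)
    {
        // Would use _device internally; callers never see the platform details.
        Console.WriteLine($"Loading {path} using {_device}");
        return new object();
    }
}
```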

As for the discussion "should a mesh draw itself, i.e. should I have a function Mesh::Draw()?" the answer is strongly no. In a typical 3D scene, you never do anything just once. You never just draw a single mesh, you never just update the physics of a single rigid body, and so on. The most important design decision for a performant 3D engine is to allow batch, batch, batch! This means that you will need a centralized renderer that can control the render process, query the renderable objects, sort by state changes, and submit render calls as optimally as possible. The information required to do that is scene-wide, and inside a single Mesh::Draw() function you can't (or shouldn't) make these state optimization decisions. It's the renderer's role to know how to draw a mesh, and it's also the renderer's role to know how to do that as efficiently as possible with respect to all other content in the scene.
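As a rough, hypothetical sketch of the shape of such a centralized renderer (names invented purely for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical types; the point is the shape, not the exact API.
public struct RenderOp
{
    public int MaterialId;   // cheap sort key used to group state changes
    public int MeshId;
}

public class Renderer
{
    private readonly List<RenderOp> _queue = new List<RenderOp>();

    // Renderables submit ops; nothing is drawn yet.
    public void Submit(RenderOp op) => _queue.Add(op);

    // Once per frame: sort the whole scene's ops to minimise state changes,
    // then issue the actual draw calls.
    public void Flush()
    {
        foreach (var op in _queue.OrderBy(o => o.MaterialId))
        {
            Draw(op);
        }
        _queue.Clear();
    }

    private void Draw(RenderOp op)
    {
        // Platform-specific draw submission (D3D/OpenGL) would live here.
    }
}
```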

I think the Unreal Engine approach is decent.
The user can select at runtime, in a config file, which specific *.DLL (Win32/Win64) / *.SO (Linux/Unix/Mac) renderer driver should be used.
You can interface with an abstract RenderDriver proxy class and let the render driver do its thing. Simple but effective and elegant (just my 2 cents).
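A hedged sketch of how that runtime selection could look in C#, assuming a hypothetical IRenderDriver interface and driver assemblies named in the config file:

```csharp
using System;
using System.Reflection;

// Hypothetical abstraction; each driver DLL/SO would export one implementation.
public interface IRenderDriver
{
    void Initialize();
    void DrawFrame();
}

public static class RenderDriverLoader
{
    // assemblyPath and typeName would come from the user's config file,
    // e.g. "RendererD3D11.dll" / "RendererGL.so" and "Engine.D3D11Driver".
    public static IRenderDriver Load(string assemblyPath, string typeName)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type driverType = assembly.GetType(typeName, throwOnError: true);
        return (IRenderDriver)Activator.CreateInstance(driverType);
    }
}
```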

That's not exactly how I've implemented it; it's just a pseudo-code expression of the idea. For instance, I don't really use Queue<T>; I use a custom collection type that allows me to choose LIFO, FIFO or custom sorting of batches, and all sorts of other things. Any comments/criticisms/suggestions concerning this concept?
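Roughly the shape of that collection, as a simplified sketch (not the real implementation):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for the custom batch collection described above:
// callers pick FIFO, LIFO or a custom ordering when the batches are drained.
public class BatchQueue<T>
{
    private readonly List<T> _items = new List<T>();

    public void Add(T item) => _items.Add(item);

    public IEnumerable<T> DrainFifo() => _items;                       // submission order
    public IEnumerable<T> DrainLifo() => Enumerable.Reverse(_items);   // reverse order
    public IEnumerable<T> DrainSorted(IComparer<T> comparer)           // caller-defined order
        => _items.OrderBy(x => x, comparer);

    public void Clear() => _items.Clear();
}
```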

Ok... here's another thing I'm trying to work out: vertex types and input layouts (can't remember what the OpenGL counterpart of an input layout is called... usage hint, maybe?)...

I need to design a sub-system through which new vertex structures can be implemented beyond the common ones the engine will already offer, and the ones I do offer need to adhere to a clean, consistent format. It needs to be written so that the same data can be used to create a D3D "input layout" or its OpenGL counterpart on the fly as the vertex data is pushed to the renderer. It's hard for me to decide on things sometimes because I'm not sure which parts/features of D3D and OpenGL are so seldom used that they can just be cut out, and which ones people are going to be pissed about if I don't let them have... Anyway, some of the ideas I have are:

The first thing to consider is how we designate what fields of a vertex are for (and how big they are). We could, like DirectX, just use a string (e.g., "POSITION"). Or we could use some type of enumeration, like this:
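Something along these lines, as a rough sketch:

```csharp
// Hypothetical semantic enumeration; names mirror the usual D3D semantics.
public enum VertexElementUsage
{
    Position,
    Normal,
    Tangent,
    Binormal,
    Color,
    TextureCoordinate,
    BlendIndices,
    BlendWeights
}
```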

What might the pros/cons of each method be? And what would be a good way to represent the size of vertex fields without using a platform-specific enumeration like SlimDX's "Format" enum? Or is there yet another unthought-of way of doing this that would be superior to both?

Next, what would be the best way to implement a cohesive vertex typing system that can be broken down and understood by virtually any type of renderer? I have some thoughts already, and I'll show you what ideas I'm toying with:

1) A common interface all vertex structures inherit from. For example:
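A bare-bones sketch of what I mean (hypothetical names; VertexDescription is described under point 2 below):

```csharp
// Hypothetical: each vertex struct exposes a description of its own layout.
public interface IVertex
{
    // Must return shared, statically stored data - not per-instance state - so the
    // struct's in-memory layout remains nothing but the raw vertex data.
    // VertexDescription is sketched under point 2 below.
    VertexDescription GetDescription();
}
```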

All vertex types would implement that interface if such an approach were used, and they would have to return statically stored data that is not part of the memory of an actual vertex instance on the stack (as extra per-instance data would throw the vertex layout off).

2) Create a new struct/class (e.g., "VertexDescription") that houses a nice description of a vertex-type and tells you what's in its guts. The essence of it might look like this (incomplete example):
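Something roughly like this (still hypothetical, building on the enumeration sketched above):

```csharp
// Hypothetical sketch of point 2; ElementFormat is a platform-neutral stand-in
// for things like SlimDX's Format enum.
public enum ElementFormat
{
    Float1, Float2, Float3, Float4,
    UByte4, Short2, Short4
}

public struct VertexElement
{
    public VertexElementUsage Usage;   // what the field is for (Position, Normal, ...)
    public ElementFormat Format;       // how it is stored
    public int Offset;                 // byte offset within the vertex
}

public struct VertexDescription
{
    public int Stride;                 // total size of one vertex in bytes
    public VertexElement[] Elements;   // one entry per field, in declaration order
}
```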

In addition to this structure, perhaps it would be an idea to implement a new enumeration type which replaces platform-specific enumerations like SlimDX's "Format" but offers the same data in a new way; potentially even giving the size in bytes of an element as its own numerical value!?

Anyway, I hope the wisdom of the community can once again offer me some excellent ideas!

EDIT: The idea of assigning the enum values of "ElementFormat" the size of the element in bytes actually won't work, because C# enums are just numeric values underneath, so two formats with the same byte size would end up indistinguishable. My bad, didn't think about that. Please disregard that erroneous idea.

I think this is pretty straightforward, but I think you could replace your render queue with task-based scheduling (I recommend the free open-source version of Intel Threading Building Blocks). Then your render queue can spread over multiple cores. In TBB you have namespaces and OOP classes instead of dealing with native POSIX or Win32 threads, so it is much easier - and it is cross-platform and works with the Intel, Visual C/C++ and GNU GCC compilers.

Peter

To be implemented in the D3D11-specific renderer implementation and its OpenGL counterparts. :-)

The engine already contains a robust and "battle-proven" sub-library I call the "HAI" (Hardware Abstraction Interface). It can pull all of the important information about a machine's graphics hardware from DXGI or OpenGL, and it also finds (among other things) the number of CPU cores, total physical memory (RAM), HDD space, logical drives, etc. My D3D11 renderer implementation will of course use that to allocate rendering tasks to dynamically-generated threads, the number of which is chosen to suit the CPU speed and core count.
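As a rough illustration of that last step (not the HAI's actual API), the worker-thread count selection might boil down to something like:

```csharp
using System;

// Illustrative only: the real HAI exposes much richer hardware queries than this.
public static class RenderThreadPlanner
{
    // Leave one core for the main/game thread, but always use at least one worker.
    public static int ChooseWorkerThreadCount()
    {
        int logicalCores = Environment.ProcessorCount;
        return Math.Max(1, logicalCores - 1);
    }
}
```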

Wow, I think this was a lot of work and many ifdefs. (-;

I think the right solution is always whatever fits your needs, and if you have self-made code that you understand, you end up even more productive instead of learning lots of different tools and version changes.

Peter

It might surprise you, then, to hear there are actually very few ifdefs in the entire code-base. Furthermore, the engine can switch between DirectX versions, OpenGL versions and entire rendering APIs (e.g., DirectX to OpenGL) while it runs. :-)

Most of the #ifs and #elses have only to do with Debug vs Release builds and handling exceptions; often in resource disposal code (no sense in throwing an exception in a release build if the application can continue or is shutting down, for example).
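For example, the disposal pattern looks roughly like this (purely illustrative, not the exact engine code):

```csharp
using System;

public class GpuBuffer : IDisposable
{
    private IntPtr _nativeHandle;

    public void Dispose()
    {
        try
        {
            ReleaseNativeHandle();
        }
        catch (Exception)
        {
#if DEBUG
            // In debug builds we want to know immediately that cleanup failed.
            throw;
#else
            // In release builds the app can keep running (or is shutting down),
            // so swallowing the failure is the lesser evil.
#endif
        }
    }

    private void ReleaseNativeHandle()
    {
        // Platform-specific release of _nativeHandle would go here.
        _nativeHandle = IntPtr.Zero;
    }
}
```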

Thanks again to everyone for the incredibly helpful insights, advice, examples and information. Things have progressed very far very fast, and I hope to be rendering this engine's first scenes very soon!

I'm most likely going to call for some of the community's brilliant guidance on a very robust shader/material and lighting system. But I'm going to spend some time working out the design the best I can and come prepared with good questions and examples.

Alright, I could use some inspiration about the best way to design my shading/lighting/material system in a clean, platform agnostic way.

What sort of model/hierarchy might be the best approach for the engine, keeping in mind extensibility and allowing users of the engine to create custom shaders or modify existing ones? What key differences in OpenGL and D3D shading might cause problems, and how might be the best way to get around them?

One thing I'm finding a bit tricky is how to adhere to the data-driven design concept while still allowing user code to specify how a shader, say for DirectX, chooses its technique and desired passes, applies render states, blends passes, sets variables and transformations, works dynamically with varying numbers and types of lights, and so on. And how might we design the system to keep all the platform-specific stuff separate from the base implementations?

This is an area of graphics programming I'm not terribly skilled with, and I'm sure someone with more skill and experience can help me get the right ideas and write some nice code befitting the level of quality this project requires.

I really need some sort of platform-independent, elegantly-designed system which allows for "shader setups"... differing materials, lighting, etc., and setting variables on a shader. So far I've tried a few things but have hit dead-end designs. For instance, I haven't really figured out a good way that a "RenderOp", when running on the D3D10 or D3D11 renderer, can specify one or more techniques to use and which passes within them to use or skip, and furthermore how to set up render states and properties on the device as required...

...and then in the actual "Renderer", we specify a shader "globally" for a render batch and use the Material to set variables (textures, lighting parameters, etc.) on it? It would seem to follow that there could be some sort of "SetGlobalTransforms" method that operates on the active shader instance and, for example, sets the "World" matrix of an entire model, eliminating the need to set it over and over on the individual meshes that make up the model?
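Roughly what I'm picturing, sketched with hypothetical names just to make the idea concrete:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the idea above, not a finished design.
public class Material
{
    public Dictionary<string, object> Parameters = new Dictionary<string, object>();
}

public class Shader
{
    public void SetParameter(string name, object value)
    {
        // Forwards to the platform-specific constant/uniform under the hood.
    }
}

public class Renderer
{
    private Shader _activeShader;

    public void BeginBatch(Shader shader, Material material)
    {
        _activeShader = shader;
        foreach (var kv in material.Parameters)
            _activeShader.SetParameter(kv.Key, kv.Value);
    }

    // Set once per model instead of once per mesh.
    public void SetGlobalTransforms(float[] world, float[] view, float[] projection)
    {
        _activeShader.SetParameter("World", world);
        _activeShader.SetParameter("View", view);
        _activeShader.SetParameter("Projection", projection);
    }
}
```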

For instance, I haven't really figured out a good way that a "RenderOp", when running on the D3D10 or D3D11 renderer, can specify one or more techniques to use and which passes within them to use or skip, and furthermore how to set up render states and properties on the device as required...

I tend to view those as much earlier concerns. By the time you are specifying a RenderOp, you should already have a renderer-specific shader that you know will execute in this environment.

Also keep in mind that a 'RenderOp' is conceptually a single render command - it shouldn't include multiple passes or techniques.

For this reason, I am not a big fan of the 'effects framework' often used alongside D3D. You are better off rolling your own technique/pass system in the front end, which will pass the minimal set of shaders/data to the renderer for each RenderOp and will generate a separate RenderOp for each pass/technique.
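A minimal sketch of that front-end expansion, with hypothetical Technique/Pass/RenderOp types (not taken from any real engine):

```csharp
using System.Collections.Generic;

// Hypothetical front-end types; a slightly richer RenderOp than the earlier sketch.
public struct RenderOp
{
    public int ShaderId;        // resolved, renderer-specific shader for this single pass
    public int RenderStateId;   // blend/depth/raster state block
    public int MeshId;
}

public class Pass
{
    public int ShaderId;
    public int RenderStateId;
}

public class Technique
{
    public List<Pass> Passes = new List<Pass>();
}

public static class RenderOpBuilder
{
    // One RenderOp per pass: each op stays a single, self-contained render command.
    public static IEnumerable<RenderOp> Expand(Technique technique, int meshId)
    {
        foreach (var pass in technique.Passes)
        {
            yield return new RenderOp
            {
                ShaderId = pass.ShaderId,
                RenderStateId = pass.RenderStateId,
                MeshId = meshId
            };
        }
    }
}
```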

I see. I feared I had screwed up a bit by not implementing this earlier. Definitely going to require some going back and refactoring/rewriting a few things. Thankfully, I have a pretty decent codebase that won't be terribly hard to edit and fix up.

Is there any more you can tell me about this, by any chance? A bare-bones, pseudo-code example of how a shading engine's key components and hierarchy might fit together, and how to use the "RenderOp" type with it properly? This is the first data-driven rendering system I've implemented, so it's taking some getting used to and changing the way I think about things.

Also keep in mind that a 'RenderOp' is conceptually a single render command - it shouldn't include multiple passes or techniques.

^^^ There is a lot of wisdom and insight in that remark, and it's just now hitting me. Treating it that way could potentially make the design a lot simpler and more efficient. But I'm wondering if it might be a bit "wasteful" to make numerous draw calls to render out techniques with multiple passes when I could just iterate through and apply them in one batch... If you could elaborate on this in particular it would be quite helpful!

What also troubles me is how different DirectX vs OpenGL shaders are... I have no real experience with shaders in OpenGL, so I'm scared to let too much DirectX influence rub off on the design and screw myself when I have to implement the OpenGL-specific side of things...

Treating it that way could potentially make the design a lot simpler and more efficient. But I'm wondering if it might be a bit "wasteful" to make numerous draw calls to render out techniques with multiple passes when I could just iterate through and apply them in one batch... If you could elaborate on this in particular it would be quite helpful!

The effects framework just does the same loop for you, under the hood, rendering one pass/technique at a time. So there isn't any performance cost to doing the loop yourself.
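In other words, the loop it runs for you is roughly this (hypothetical wrapper names, not the exact SlimDX/D3DX API):

```csharp
using System;
using System.Collections.Generic;

// Hedged sketch: this is essentially what the effects framework does under the hood.
public class EffectPass
{
    public string Name;
    public void Apply() => Console.WriteLine($"Binding shaders/states for pass {Name}");
}

public static class ManualEffectLoop
{
    public static void Draw(IReadOnlyList<EffectPass> passes, Action submitDrawCall)
    {
        foreach (var pass in passes)
        {
            pass.Apply();        // bind shaders + render states for this pass
            submitDrawCall();    // one draw call per pass, exactly like the framework does
        }
    }
}
```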

And once you start to sort your RenderOps to minimise state changes, you may be able to increase performance significantly. The effects framework doesn't have enough information available to interleave rendering operations from different effects.

What also troubles me is how different DirectX vs OpenGL shaders are... I have no real experience with shaders in OpenGL, so I'm scared to let too much DirectX influence rub off on the design and screw myself when I have to implement the OpenGL-specific side of things...

If you ignore the effects framework (which OpenGL doesn't have), then there really aren't that many differences.

OpenCL vs Compute shaders is a different topic, but that may or may not affect you currently.