Something like a month ago I wrote a post introducing the reflection engine I’m writing as part of a C++ course for people in the game programming master’s degree at my university. Why me, an undergrad with the end of his CS degree faaaaaaar away on the horizon, is giving such a course is a matter for another post (a post on why I hate the university sooooooo much…).

Writing a reflection engine has two primary goals:

Game programmers these days are used to a wide pipeline of tools that ease game development, to the point that most of them don’t even write their own game engine but only the game itself. That is, game programming has become a game itself. Regardless of whether you think this is good or not (I think it’s not; game programming is no longer the challenging/interesting task it was in the old days. But others may think that now everybody can write their own game, which is good.), the point is that most game programming courses focus on tools instead of game engines/design, and people leave the course (the master’s, in this case) without knowing how these tools work. This is true even in courses focused on engine programming, since most people don’t have the programming skills required to dive into the techniques involved in the implementation of AAA engines or frameworks such as Unity.

It’s fun. Nothing more, nothing less.

So the idea is to show people how systems like Unreal Engine’s blueprint integration could work, and have fun in the process.

“Could” is an important word here. As I usually tell people in my classes, what I show is just the way I did the thing, not THE WAY to do it. I took decisions based on my own context (it should be “teachable”, I’m lazy, etc.), but others might follow different approaches. That’s what being an engineer means, right?

As the roadmap in the previous post shows, one of the first tasks we have to tackle is how we store C++ type information for our engine, in a way that can be used at runtime to instantiate objects, check function signatures, etc. Today, I will show you what I did to store C++ type information at compile time and use that info for tagging C++ types.

RTTI

Since what we are doing with runtime reflection is a kind of dynamic type system for C++, the first thing we need is a way to store type information so it can be checked at runtime, compared for equality, etc.

It can be queried at runtime only: Please read that again. I find it so funny that a language that uses its “static type system” almost as an advertising motto cannot query the name of a type at compile time. Amazing.

No practical guarantees: The name that .name() gives is mangled, so it’s almost useless in most scenarios. Of course you can demangle the name with your vendor’s API, but having to maintain a portable demangling API is something I would like to avoid. I did it once, and it was… well, let’s move on. Also, the supposed “unique id” that .hash_code() returns is only guaranteed to be the same for equal types, but it is not guaranteed that different types yield different hashes. Kind of useful.

Size bloat: AAA game programmers often dislike RTTI since it bloats their executable by adding an entry in the symbol table for each type. RTTI data is usually baked into classes as a “negative” entry in the vtable of the type, like:

But, really, I don’t care. I’m sure deploying to constrained systems like game consoles is hard. But, come on… I think this is a common case of bias, like the “exceptions are slow” topic. Of course YMMV, but in my experience exceptions and RTTI have worked pretty damn well on embedded devices.

CTTI

To solve the two issues above (as I said, I don’t consider the third an issue), my friend Jonathan “foonathan” Müller and I wrote the ctti library. CTTI (from “Compile Time Type Information”) aims to provide both demangled type names and unique hashes at compile time, thanks to C++11 constexpr.

How it works

This morning, as part of the pre-work for this post, I found myself checking the code and documentation of Don Williamson’s clReflect engine, a clang-based reflection engine very similar to this one.

Williamson was the lead engine programmer of games such as Fable and Splinter Cell Conviction.

Here’s a great Gamasutra article where he shares the reflection API they wrote for Conviction.

This trick is exactly what CTTI does. Using a constexpr function, CTTI “parses” __FUNCSIG__ and similar expressions at compile time to get the name of the type out of the string. “Parses” is a very generous word, I think. What we did is wrap the vendor-specific __FUNCSIG__-like expressions and check the format of their output, so we can get a specific substring (the type name) out of the string.

This is a bit tricky: did you notice I never said “string literals” or “macros”? That’s because such constructs (__PRETTY_FUNCTION__, __FUNCSIG__, etc.) are neither macros nor string literals, but implicitly declared identifiers similar to the standard __func__ (from C99, added to C++ with C++11), which is probably one of the worst-specified points of the standard…

The weird part of CTTI was writing a constexpr string class able to build substrings at compile time, with C++11 constexpr only, supporting Visual Studio. I cannot bold that last item enough. That was such a pain in the ass. See THE ISSUE.

But what’s a constexpr class? What’s constexpr?

constexpr

Feel free to jump over this if you already know about constexpr .

constexpr is a feature available since C++11 which gives you the option of writing C++ code to be evaluated (actually interpreted by the compiler) at compile time. Here’s an example:

This has lots of benefits: until C++11, the only way to do complex computations at compile time was through template meta-programming techniques that are hard to read, hard to maintain, and easy to screw up. And those only covered computations on integral types. I once wrote a floating point template for TMP, since I wanted to do 3D transformations at compile time. Trust me, you wouldn’t want to put that kind of code in production…

In the example above, add() is a constexpr function and FLOAT_CONSTANT a constexpr constant. A constexpr function is guaranteed to be evaluated at compile time as long as its arguments can be, like in this case. Otherwise, the function is “downgraded” into a normal C++ function, evaluated at runtime. The constexpr constant there is just a way to force compile-time evaluation of add(): such constants have to be initialized at compile time, else compilation fails.

constexpr not only applies to plain C-like functions, but also to member functions, even constructors. So you end up with the ability to write classes and instantiate objects that are completely evaluated at compile time. That’s so cool.

Here’s an example of a useful constexpr member function, std::array::size():

```cpp
std::array<char, 1024> buffer;
read(file, &buffer[0], buffer.size());
```

No more #define LENGTH(x) (sizeof(x)/sizeof(x[0])) tricks. No more C arrays, please. It’s 2016 and there are still people introducing bugs in the Linux kernel because of this… Hey, would you like to hear a TCP joke?

A constexpr class is just a class that has at least one constexpr-declared constructor, so the compiler can create instances at compile time:

What’s next?

Today we learnt how to write a compile-time type info class with all the information we will need to continue with the reflection engine. In the next posts we will implement MetaType, the class that manages runtime types and knows how to instantiate and destroy objects dynamically.