Introduction

LibReflection is a little library (well, a header, to be specific) that gives reflection capabilities to C++ classes. When we talk about reflection, we don't mean just RTTI, but a rich palette of capabilities useful in everyday programming:

specify and examine class inheritance

declare and examine normal and static fields

declare and examine normal, virtual, and static methods

declare and use properties and events

set and get field values

call methods and get results

create instances without having the headers at hand, by using a class registry

And almost all of the above happens automatically, with very few macros that the programmer has to put in the application's classes...and you also get the added benefit of class properties and events, something that C++ does not provide by default.

Demo

Using LibReflection is very easy. The following piece of code shows a class with fields, properties and methods, all reflected in the class' Class object:

Documentation

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

Comments and Discussions

I plan to use this great library in my project, but a problem I have with it is that if I have a property in one object and want to set it into a property in a second object, I have to know the type of the property at compile time. Is it possible to somehow get the value of the property without knowing the type at compile time?

I'm trying to compile this library on Windows CE. Almost everything works, except the constructor stuff. I know that the problem is in the ARM compiler from EVC, because the library compiles fine in Visual Studio. The part that doesn't work is the following:

(1) Constructor information is now captured. You can actually invoke a constructor -- it will automatically create an object using the new operator. And the newInstance() method on the class will automatically call the default constructor if one is defined.

(3) Class methods, static methods, and constructors can be overloaded, but the macros to define them aren't as nice.

(4) Ported to gcc. Well, I only use gcc, so that was the only choice.

(5) Added a class registry for looking up or listing all known classes.

(6) When invoking a method object with mismatched types, the exception thrown will include detailed information about which parameter/return/object reference is mismatched. This should greatly improve the ability to fix it.

(7) The full class name (including namespace) is now captured and returned.

(8) NEW! NEW! Automatic casting of object classes now works. If your method takes a Base * but you pass in a Derived *, it will automatically be cast to Base *. You can also cast any class pointer to void *. This allows a lot of code to work in a generic way. The requirement is that both Base and Derived must be defined in the reflection. This works for references as well (i.e., Derived & to Base &). For example, if your method is defined in class Derived as

Derived * createThis(Base * b);

you can invoke it using:

Base *result_ptr;
Derived d;
d.getClass().getMethod("createThis").invoke(result_ptr, &d, &d);

(9) NEW! NEW! NEW! Dynamic pointer casting now works. A known object pointer will automatically be tested for dynamic castability to its derived type. This allows you to invoke a method using a Base class pointer. For the sample above, we can invoke it as:

(1) The changes are based on axilmar's work. I don't want to look as if I am trying to claim credit for his work. But on the other hand, I don't think he is reading this site anymore, as someone else is also looking for his help.

(2) The code works in gcc, but I have not tested it in VC. I notice this is an MSVC/.NET site, so it would probably make sense for someone to compile the code in MSVC first.

Some of the things you say make sense, but if you were to write an article it would be YOUR article and you wouldn't be infringing on anybody else's work especially if in your article you make it clear that your work is to be considered an extension of the other author's previous submission. Your article will essentially be containing NEW work. So where is the infringement?

Besides, in your article (if it'd make you feel more comfortable), you can clearly give the other author credit for planting the seeds of ideas in your head for what you have submitted.

I personally don't see any conflict of anything.

With regards to this being a ".net" site, you must be fairly new, because there are (literally) thousands upon thousands of non-".net" materials here.

Very interesting library. We have been working on a reflection library for a few months (including more than one rewrite). We've given your library a try and added quite a few things. We'd like to contact you in order to talk with you about possible uses, modifications, add-ons, etc., if you're interested. We sent you e-mails at at least two different e-mail addresses, but they probably didn't reach you.

Unfortunately, there are lots of reasons for NOT using the CLR, be it with Managed C++ or C# (and also for not using Java) - but in those cases Reflection can still be very useful.

Some of the reasons why I haven't been able to use Managed C++ or C# (despite incentive to try) include:

1) Immature libraries. Because the languages haven't been around very long, the libraries have a huge number of weaknesses - the last time I tried to use C# I found that I couldn't create a directory browser from the file dialog since the file dialog didn't include that feature and it was a final class. In C++ I would simply inherit off CFileDialog.

2) The syntax is impenetrable. Managed C++ has improved slightly with the newer work Microsoft has done, but it is still nowhere near as clear as well written C++.

3) The syntax is non-standard. Except in the rare cases where the CLR exists for other platforms and compilers, you are stuck with VC++ and Windows.

4) The CLR is incredibly slow compared to raw C++. Because you are working through an intermediate language, and you have the overhead of garbage collection and smart pointers, for time critical applications Managed C++ just doesn't hold up. I write CAD applications for a living and Managed C++ just wouldn't cut it.

5) It is very difficult to use managed and unmanaged code side by side. Again, last time I checked, ALL your code had to be recompiled with the CLR compiler switch included. That scares me, because calls from unmanaged code into managed code are going to have loads of non-deterministic wrappers around them.

I'll get off my soapbox now - but suffice it to say that I haven't been impressed with the CLR - and it certainly doesn't remove the need to provide decent reflection libraries under certain circumstances in C++.

Dave Handley wrote:1) Immature libraries. Because the languages haven't been around very long, the libraries have a huge number of weaknesses - the last time I tried to use C# I found that I couldn't create a directory browser from the file dialog since the file dialog didn't include that feature and it was a final class. In C++ I would simply inherit off CFileDialog.

CFileDialog isn't really standard C++; it is an MFC class, and thus will only work on Microsoft Windows. But shouldn't you use the Win32 function SHBrowseForFolder, and not derive from CFileDialog anyway?

Dave Handley wrote:2) The syntax is impenetrable. Managed C++ has improved slightly with the newer work Microsoft has done, but it is still nowhere near as clear as well written C++.

True, Managed C++ wasn't very neat to use, and that is why C++/CLI[^] was developed. C++/CLI is as penetrable as standard C++, at least when you are doing the same thing. I wouldn't state that standard C++ is clear, either.

Dave Handley wrote:3) The syntax is non-standard. Except in the rare cases where the CLR exists for other platforms and compilers, you are stuck with VC++ and Windows.

The CLR, CLI, C# and C++/CLI are standards, but you're stuck in .NET, that is true. But .NET is ported to other systems than Windows. See DotGnu[^] and Mono[^].

Dave Handley wrote:4) The CLR is incredibly slow compared to raw C++. Because you are working through an intermediate language, and you have the overhead of garbage collection and smart pointers, for time critical applications Managed C++ just doesn't hold up. I write CAD applications for a living and Managed C++ just wouldn't cut it.

You are not working through an intermediate language; the JIT compiler will create machine code. In C++/CLI you don't have to use garbage collection, you can use deterministic destruction instead. Vertigo Software ported Quake 2[^] to .NET with a performance decrease of only 15%. And that was done in 5 days: 4 days for porting from C to C++ and one day to port to Managed C++.

"CFileDialog isn't really standard C++, it is a MFC class, and thus will only work in Microsoft Windows. But should't you use the Win32 function SHBrowseForFolder, and not derive from CFileDialog anyway?"

Unfortunately, even using SHBrowseForFolder isn't trivial in the .Net Framework - see http://www.netomatix.com/FolderBrowser.aspx

"I wouldn't state that standard C++ is clear, either."

ANSI Standard C++ is very clear in my opinion - much more so than many other languages. Especially given the amount of explicit control you have over data management.

"The CLR, CLI, C# and C++/CLI are standards, but you're stuck in .NET, that is true. But .NET is ported to other systems than Windows. See DotGnu and Mono."

C++/CLI is a standard but it still is NOT ANSI Standard C++ which makes it much less portable. .Net may be ported to a few other platforms, but it still isn't properly cross-platform.

"You are not working through an intermediate language, the JIT-compiler will create machine code. In C++/CLI you don't have to use garbage collection, you can use deterministic destruction instead."

As far as I know the only way to do deterministic destruction is with the IDisposable interface, and even then the Garbage Collector does some of the work so it isn't strictly fully deterministic. Also any form of Just in Time Compiler directly implies the existence of intermediate code - of course the IL is converted to machine code before it is used, but it is still converted and that takes time.

"Vertigo Software ported Quake 2 to .NET with a performance decrease of only 15%. And that was done in 5 days: 4 days for porting from C to C++ and one day to port to Managed C++."

Unfortunately, something like Quake 2 isn't really a good example of a performance intensive application. From what I remember Quake 2 was written to run on a Pentium 133, and as with many games most of the really performance intensive stuff is done inside the OpenGL library down on the Graphics Card. That will still be the same in a managed port. When someone gets a ray-tracer working in managed code with similar performance to native C++ I'll be impressed, but I can't see it happening. Given that most of the really intensive stuff in Quake 2 is happening down on the graphics card, a 15% performance drop is huge.

I agree that that may be true now - but don't forget we were talking about Quake 2 which runs on a P200 quite happily. That means that on a modern 2-3GHz processor it needs to use less than 10% of the processor power available.

Having said that, on my cutting edge PC, I find the graphics performance on PC games to be the first thing to roll-off. For example, I can't run Half-Life 2 or Doom 3 at maximum resolution (1600x1200) - but can run them alongside other tasks at lower resolutions!

Yes, if we talk about today's games, where AI and maybe physics simulation play a big part. Quake 2 is an OpenGL application and is graphics-limited. It really seems to me that you don't have any experience in C++/CLR programming.

And the Quake series happens to be part of the other 10%. If you take a look at what happens with most of Carmack's code when it is run on an SLI platform (2 graphics cards instead of one), performance increases by +80% on games like Doom 3, showing how little the processor matters in Carmack's highly optimized games. Both Carmack's skills as an optimizer and the fact that the type of games he makes can be fun with relatively little CPU computation are the root causes of this.

Reflection is generally the same as RTTI in the sense that it is information retrieved dynamically at runtime as opposed to compile time. Basically, it allows the program to query a class instance dynamically at runtime for information about itself, such as its class name (we already have this from typeid().name()), its superclass, or even its methods, fields, and properties. The most obvious usage is in development IDEs for UI form builders, as the form builder can be made quite generic, and you can manipulate the form and its child controls dynamically at runtime. RTTI/reflection helps do this. Languages like Smalltalk, Objective-C, Object Pascal (or at least Borland's variant of it), and Java have all had this kind of support from the get-go. C++ hasn't. For toolkits that want any sort of extensible, sophisticated UI, RTTI is a must. To my knowledge, only the VCF[^] and Qt[^] really go the whole nine yards to provide support comparable to what you get in .NET, Java, or Delphi (Object Pascal).

The devil is in my pants! Look, look!

Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!