I was recently trying to reduce the size of my ROOT files. Since the TLorentzVector class uses Double_t fields, it eats up a lot of memory without actually needing it. To get smaller classes, I looked at the GenVector package, which allows setting the type of the fields via a template parameter. However, I noticed something I don't really understand when I wrote these classes into trees:

As you can see, the internal fields fX, fY, fZ and fT, which should be Float16_t as declared in the template, are streamed just like regular Float_t. I would like to understand why this happens and whether there is a way to stream them correctly.

Unfortunately, none of these methods work as expected. I generated the required dictionaries in a shared library that is always loaded at ROOT startup. Furthermore, I tried to generate the dictionary via gInterpreter like this:

However, even with your approach the data members are still streamed as regular Float_t.

There is another thing I noticed: when I run the code on the ROOT command line, I get the desired output. The problem I described only appears when I compile the macro.
I thought it might be due to our outdated ROOT version, so I tried it on my private machine, which runs the most recent release 6.18.04, but the problem appears there as well.

When I run the code on the command line of ROOT, I get the desired output.

That is intriguing. Do you mean that you get the desired output with or without specifying the name of the class explicitly in the call to TTree::Branch?

However, even with your approach the data members are still streamed as regular Float_t.

This might indicate that the dictionaries for the Float version and for the Float16 version have been generated in the same dictionary library.

So it seems that the next steps are to understand the difference between your compiled case and the interpreted case, and to figure out which dictionary is being used for the Float16 version in each case.

which is exactly what I had expected in the first place. It also does not matter whether I open ROOT and copy-paste the code lines there or whether I run the macro uncompiled with root test.cc. I only get the wrong behavior when I run root test.cc++.

This might indicate that the dictionaries for the Float version and for the Float16 version have been generated in the same dictionary library.

It also does not matter whether I open ROOT and copy-paste the code lines there or whether I run the macro uncompiled with root test.cc. I only get the wrong behavior when I run root test.cc++.

For the original code to work, we must be in a situation where only the Float16 dictionary is loaded. This is because the TTree::Branch overloads where the type is not specified use the C++ keyword typeid to detect the type. typeid returns a type_info, and because both Float_t and Float16_t are typedefs for float, they have the exact same type_info … as far as the real C++ compiler is concerned.

However, I don’t know what these statements do exactly.

They register with ROOT I/O that the two classes are 'equivalent', so the I/O will allow reading the on-file version of one into the in-memory version of the other.

Actually, that would surprise me: the dictionary for the regular Float_t version is already created by ROOT itself, as it is listed in LinkDef_GenVector.h of the Math package.

Yes, but it does not contain the Float16_t version (as far as I can tell), and thus:

This indicates that something else in your environment does, and it may or may not have the conflict.

Yes, as I already said, I have a shared library that provides the dictionary for the Float16_t version. I needed this because the shared library also contains a class inheriting from the LorentzVector. That class, by the way, is always streamed correctly.

I just tested what happens when I don't load the shared library, and indeed: without the shared library loaded, the problem also appears in the interpreter. The question why it does not work in the compiled case with the shared library loaded remains, though.

Oh, I was’nt aware of this since I the rootlogon macro has some console output which I also get when I compile the macro. How can I test / make sure the macro is taken into account in the compiled case?

Note: from everything we have discussed so far, I believe that the result of
the original code would depend on the order in which the libraries are loaded (the one with Float_t and the one with Float16) … and I can't think of any reason why the case with the class name explicitly given would not work (unless I got the order of the parameters wrong).