Would it make any sense to wrap the numerical primitive types in classes, so that information like the number of bits, signedness, max and min values, and possibly mathematical functions would be easily accessible?

For example, consider a template class for a 3D grid where you pass in the edge length. If the type were wrapped, you could do something like pow(T.GetMaxValue(), 3) to figure out how large a data type you need to index the cells (this could be combined with a log and some other fancy stuff to get the required byte count, and a template could then pick the best type for the situation).

Pros and cons i came up with:

Pros:
-Easily accessible information about the capabilities of the type
-Easily accessible methods for operating on it (the IDE will list all the available methods, I assume...)
-Possible to add debug code or such? (not that great a benefit, though?)
-Makes primitive types more OOP

Cons:
-There would be a lot of methods and code to add (to convert between types etc.)
-Possibly not optimized as well? (I would expect the compiler to generate the same code as without the wrapper, though...)
-Possible issues with existing code?

I thought max/min values were already globally defined... You would also have a seriously large number of operators to overload.

I don't think it's an efficient way to go about it, mainly because it would make your classes too bulky. I mean, you would need another int to store MAX_INT, and another double to store MAX_DOUBLE. Much easier to just find the global definition and go from there.

The max and min values would of course be static, so there is zero overhead.

I know you can already find this data, but it would be nicer to have it directly in the class, like with everything else...

It would be nice if it were added to some later C++ standard (as the current ints etc. are kind of a legacy from C?) on top of the legacy stuff.

Maybe even make literals act like classes, so you could do something like

0x1234.BitCount()

That would make C++ more consistent, in my opinion. (If I go full perfectionist, I'd say the language shouldn't have hard-coded arithmetic at all, because that clearly should be compiler intrinsics acting on byte arrays xP)

It is consistent with everything in the standard library that has to do with template programming. Look at, for example, the std::numeric_limits class I mentioned earlier, as well as the type traits classes.

I know this is existing functionality, it would just be nice to use a system where i can create lets say an integer by doing integer<32> and all the information i might need is accessible directly through it, no need to put it ib some other data structure.

It's just like making a fixed-size array class: you don't create a class for each array size and then separate template magic to retrieve data about it...

Having separate specialized classes is a good thing, not a bad thing. Your solution requires a huge class responsible for everything you want to know, and you cannot easily extend the class. By using small separate functions and classes, you can extend your type information as much as you like with whatever information you like, anywhere you like.

You realize that actual computer hardware works with set numbers of bits at a time, right? The reason why your average PC C++ compiler has 8-bit, 16-bit, 32-bit and (usually) 64-bit integers is because those are the number of bits that your CPU works with most efficiently at a time. There's no 13-bit integer type because your standard x86-family processor doesn't have any assembly operations that work with 13-bit operands. If you created your hypothetical integer template, then implementing that template would require some hardcore special template magic in order to work anywhere near efficiently.

You can get arbitrary-precision arithmetic libraries. One may some day be adopted into the standard library (propose it!). This sort of thing is not a part of the language proper because (1) C++ is a systems programming language, and arbitrary precision is not generally a systems programming concept (although the fast_* and least_* integral types are systems programming concepts) and (2) one of the fundamental design principles of C++ is that you pay only for what you use; adding such support would impose a cost on everyone, on very common operations, to support a rare and little-used feature.

Making primitive types more OO is not a goal of C++ either. C++ is not an "OO language," it's a multi-paradigm language that has OO support.

There's no reason why you can't write a library to do what you propose. It's unlikely to have enough widespread applicability to end up in the standard, but I would have thought the same thing of special numeric functions yet there they are. The usual place for such specialized things is an external library, though.

The reason is that in #1, the private details such as m_Baz have been unnecessarily exposed to the utility/helper function. This means that if you're investigating a bug involving m_Baz, then you have to treat UtilityHelper as a suspect, increasing the amount of code to read/maintain. In the 2nd style, the utility is known to only have access to the public interface, not the private details.

The number of bits passed down into the example integer<bits> would of course find the smallest integer type that has at least that many bits and is native to the processor.

Maybe utility functions weren't a good idea to put there, but are there any major reasons not to make a class like

integer<minimumBits> (or bytes?)

where it internally has a typedef for the primitive data type to use, plus specialized operator templates etc. for each type? (If needed; probably a single template is fine for char, short, and int...)

So basically, just a class that unifies the different integer types for the sole purpose of being able to pass in the number of bits needed and get the best type for it.

C++ only guarantees each of the primitive types some minimum number of bits, which was probably fine in C when the programmer had to copy the code anyway if, let's say, a dimension or size changed; but now with templates the computer should do it, which in my opinion is easiest to achieve by making the integers template classes. This also makes them all carry their bit count with them, and doesn't require external template classes to, let's say, pick the best integer type for a number of bits or to get the number of bits...

Boost has a class for integer type selection based on size: boost::int_t<N>. It has three nested typedefs, fast, least and exact, representing the fastest type with at least N bits, the smallest type that can accommodate at least N bits, and a type with exactly N bits. The exact typedef is only available if there actually is a type with that many bits, though.

I think that as Brother Bob said std::numeric_limits should serve your purpose. However, I don't think wrapping numeric types in templates would be a bad idea otherwise. I'm doing it for atomic variables.
