Because casting can be very expensive, especially when casting to unsigned int. Better to risk the loss of data than have to match the cast result against 4,294,967,295 possible values of unsigned int.

As a side note, a co-worker pointed out to me that the group that now maintains that code "doesn't like typecasting." Even though the compiler will have to do it anyway since the function returns an unsigned int.

As a side note, a co-worker pointed out to me that the group that now maintains that code "doesn't like typecasting." Even though the compiler will have to do it anyway since the function returns an unsigned int.

Sounds like they picked up some bad habits when they were playing with VB.

Because casting can be very expensive, especially when casting to unsigned int. Better to risk the loss of data than have to match the cast result against 4,294,967,295 possible values of unsigned int.

I guess the system runs out of memory long before there are more than 4,294,967,295 entries in m_propList.

Hmmm... what the heck is a size_t, anyway? I'm writing a few test programs in Visual Studio 2005 here, and although sizeof(size_t) == 4, it behaves like neither a typedef of an unsigned int, nor an enum. Implicitly casting it to any integer type smaller than long long results in a warning, never an error. I guess there must be specific compiler support for it.

Hmmm... what the heck is a size_t, anyway? I'm writing a few test programs in Visual Studio 2005 here, and although sizeof(size_t) == 4, it behaves like neither a typedef of an unsigned int, nor an enum. Implicitly casting it to any integer type smaller than long long results in a warning, never an error. I guess there must be specific compiler support for it.

It's always a typedef, usually of an unsigned int or an unsigned long. It is supplied by the compiler, because it depends on exactly what the target platform is.

Hmmm... what the heck is a size_t, anyway? I'm writing a few test programs in Visual Studio 2005 here, and although sizeof(size_t) == 4, it behaves like neither a typedef of an unsigned int, nor an enum. Implicitly casting it to any integer type smaller than long long results in a warning, never an error. I guess there must be specific compiler support for it.

size_t "is the unsigned integer type of the result of the sizeof operator" (C99). It's also frequently used in the standard library for various indexes and counts.

Hmmm... what the heck is a size_t, anyway? I'm writing a few test programs in Visual Studio 2005 here, and although sizeof(size_t) == 4, it behaves like neither a typedef of an unsigned int, nor an enum. Implicitly casting it to any integer type smaller than long long results in a warning, never an error. I guess there must be specific compiler support for it.

size_t "is the unsigned integer type of the result of the sizeof operator" (C99). It's also frequently used in the standard library for various indexes and counts.

And in the more practical context of the full POSIX spec, rather than just the C one, it's the maximum size of any single object allocated in memory. Pretty much all of the library uses are in this context.

The other notable types are ptrdiff_t, which is the difference between two pointers, and hence also the maximum size of addressable memory (and the integer which is the same width as a pointer), and off_t, which is the index into a random-addressable file, and hence also the maximum size of any single file.

Hmmm... what the heck is a size_t, anyway? I'm writing a few test programs in Visual Studio 2005 here, and although sizeof(size_t) == 4, it behaves like neither a typedef of an unsigned int, nor an enum. Implicitly casting it to any integer type smaller than long long results in a warning, never an error. I guess there must be specific compiler support for it.

size_t "is the unsigned integer type of the result of the sizeof operator" (C99). It's also frequently used in the standard library for various indexes and counts.

And in the more practical context of the full POSIX spec, rather than just the C one, it's the maximum size of any single object allocated in memory. Pretty much all of the library uses are in this context.

The other notable types are ptrdiff_t, which is the difference between two pointers, and hence also the maximum size of addressable memory

Well, not really. You're not guaranteed to be able to meaningfully subtract two pointers that are not from the same object, so ptrdiff_t only needs to be large enough to cover the maximum size of any single object as well.

(and the integer which is the same width as a pointer),

You mean intptr_t, right? (incidentally, an optional type - there doesn't have to be ANY such type, even in POSIX)

As for the original question, do you get an error (rather than a warning) for "implicitly casting" between any other integer types?

As for the original question, do you get an error (rather than a warning) for "implicitly casting" between any other integer types?

Well, if we consider an enum an "integer type" (which may not be pedantically correct but is how they're treated in practice), attempting implicit type conversion between an enum and a proper integer always gives an error.

As for regular integer types, they never give errors for converting between them, but their warning behavior is subtly different from a size_t's. They give warnings only when casting to a smaller kind of integer or between signed and unsigned. Meanwhile, on a 32-bit system, even though a size_t is the same size as an unsigned long, Visual Studio 2005 still gives you a warning when you implicitly convert (as seen in the thread parent). Only implicit conversions to signed long longs or unsigned long longs yield no warning. I presume the _W64 seen in the size_t typedef causes this behavior, as an intentional measure to help prevent 32-bit programs from breaking after being ported to a 64-bit system.

Well, if we consider an enum an "integer type" (which may not be pedantically correct but is how they're treated in practice), attempting implicit type conversion between an enum and a proper integer always gives an error.

Actually that's not true at all. I pass enums as int arguments to functions all the time and the compiler doesn't even show a whisper of a warning, let alone an error. Trying to implicitly cast an int to an enum, however, is another story. From the compiler's point of view, an enum is just a range-limited more-or-less-sequential list of ints (they can be negative), or maybe longs.

Hmm, okay, that's sensible behavior I suppose. Until this point I hadn't really given the matter a lot of thought, since I just do what I like with enums and then add explicit casts after the fact when the compiler complains. I vaguely remembered that I got errors with enums all the time when I would expect them to work. E.g. a for loop through the values of an enum with an enum-typed iterator is always a pain in the neck and requires explicit casts everywhere.

The other notable types are ptrdiff_t, which is the difference between two pointers, and hence also the maximum size of addressable memory

Well, not really. You're not guaranteed to be able to meaningfully subtract two pointers that are not from the same object, so ptrdiff_t only needs to be large enough to cover the maximum size of any single object as well.

It's not really possible to create a sane conforming implementation with ptrdiff_t implemented in that fashion, hence it's always the same as the maximum size of addressable memory. It's difficult to explain, but there's basically only one sane way to implement pointer arithmetic.

(and the integer which is the same width as a pointer),

You mean intptr_t, right? (incidentally, an optional type - there doesn't have to be ANY such type, even in POSIX)

That one's a historical blunder - it's impossible for intptr_t and ptrdiff_t to be different types, so there's no particularly good reason for intptr_t to exist, except as an observation that "ptrdiff_t" is not a very good name.

That one's a historical blunder - it's impossible for intptr_t and ptrdiff_t to be different types, so there's no particularly good reason for intptr_t to exist, except as an observation that "ptrdiff_t" is not a very good name.

Is there a guarantee that any two pointers a, b can be subtracted? I imagine a non-flat memory model with discrete memory segments where two pointers can point to locations in different memory segments, so subtracting them is not possible in any useful manner. So let's for a moment assume that trying to do so results in an exception (SIGSEGV or whatever) rather than a meaningless result.

In such a memory model, sizeof(ptrdiff_t) could be smaller than sizeof(intptr_t).

Which may of course trigger more C4267 warnings at the function's call sites (and if it was a virtual function, it will require changing the base/sibling classes too, but the compiler will give helpful warnings to guide you through that as well). While this sounds like a lot of work, it will result in improved code quality and is not just a variation of covering up the problem.

Chipping away at these warnings will put you in a good position when the time comes to throw the lever over to 64 bit and is grunt work that can be handed off to junior developers. Plastering over these problems with pragmas and forms of casting (both old and new) just leaves something in the code that will be much harder to deal with later and will therefore require more expensive people to mop up.

Which may of course trigger more C4267 warnings at the function's call sites (and if it was a virtual function, it will require changing the base/sibling classes too, but the compiler will give helpful warnings to guide you through that as well). While this sounds like a lot of work, it will result in improved code quality and is not just a variation of covering up the problem.

Chipping away at these warnings will put you in a good position when the time comes to throw the lever over to 64 bit and is grunt work that can be handed off to junior developers. Plastering over these problems with pragmas and forms of casting (both old and new) just leaves something in the code that will be much harder to deal with later and will therefore require more expensive people to mop up.

You confidently give this advice but you don't even know anything about the context. For all we know the class in question can be expected to have on the order of a dozen "properties" and this number can never blow out of control. There is a limit to perfectionism in code cleanliness, especially when it leads to painful cascades of changes.

There is a limit to perfectionism in code cleanliness, especially when it leads to painful cascades of changes.

Granted... and my wife is always complaining that I spend far too much time at the office, and I have to admit that I do spend a fair amount of time grooming code. OTOH, I do feel that my penchant for this (and the habit of using different compilers to help identify other potential problems) leads to code that works... the hours I spend up front making clean code are paid back by reduced problems later on and code that I'm happy to hand off to co-workers.

As for being painful... I did suggest that 'chipping' at the problem is a good approach... and once you are on top of it keeping it clean is not hard.

I fail to see anything particularly clever in shutting the compiler up with ugly casts. The first version at least documents that the compiler found something warnable here but for the time being we are going to ignore it.

(OK, may-be the maintainers were indeed complete noobies but the proposed solution doesn't seem any better...)

Which may of course trigger more C4267 warnings at the function's call sites (and if it was a virtual function, it will require changing the base/sibling classes too, but the compiler will give helpful warnings to guide you through that as well). While this sounds like a lot of work, it will result in improved code quality and is not just a variation of covering up the problem.

Chipping away at these warnings will put you in a good position when the time comes to throw the lever over to 64 bit and is grunt work that can be handed off to junior developers. Plastering over these problems with pragmas and forms of casting (both old and new) just leaves something in the code that will be much harder to deal with later and will therefore require more expensive people to mop up.

You confidently give this advice but you don't even know anything about the context. For all we know the class in question can be expected to have on the order of a dozen "properties" and this number can never blow out of control. There is a limit to perfectionism in code cleanliness, especially when it leads to painful cascades of changes.

That is precisely my opinion. This std::list is never expected to have more than a dozen or so "properties" in it, so there will never be overflow issues. On 32-bit processors (the only platform our software is supported on right now) a size_t is, quite literally, a typedef of an unsigned int, so there's no danger there. Having the function return size_t is probably the "right" solution, and in all fairness, I use size_t inside functions as local variables when I'm dealing with stl positions and sizes.

The issue is this. The team that maintains that code now "doesn't like size_t" and "doesn't like casting", so they'd rather simply use pragmas to disable the warning entirely - even though the compiler has to implicitly cast it to an unsigned int anyway. Note that the original function returns an unsigned int.

If you're going to have a function return an unsigned int, and you return something that isn't an unsigned int, (imho) it's better to cast it explicitly so people know you meant to do it - you shouldn't shut up the compiler.

As for static_cast... I'm not really familiar with the *_cast<>s, sorry. That would probably be better than a raw cast, assuming it does essentially the same thing.

Having the function return size_t is probably the "right" solution, and in all fairness, I use size_t inside functions as local variables when I'm dealing with stl positions and sizes.

Not for STL classes. The correct type for this function is std::list<T>::size_type. All STL classes provide all the necessary typedefs as members; all STL containers provide the common types value_type, size_type, difference_type, pointer, and iterator, which correspond to the meanings of T, size_t, ptrdiff_t, T*, and int in the C library. (There's also a member called 'reference' corresponding to T&, although obviously that has no C equivalent, and const versions of reference and iterator.)

You should be familiar with the importance of using the iterator type, rather than whatever it happens to typedef on your system. This applies to all the others as well, for the same reasons.

As for static_cast... I'm not really familiar with the *_cast<>s, sorry. That would probably be better than a raw cast, assuming it does essentially the same thing.

A C-style cast can do everything a reinterpret_cast<> does: it blindly casts the variable regardless of how meaningful this may be; it is the type-system equivalent of getting an oversized package through a letter box by hitting it with a hammer until it goes through. A static_cast<> is an explicit type conversion and no more; it is still type-checked, and the compiler will reject any conversion that neither makes sense nor has an explicit C++-style conversion function defined. They are not necessarily type-safe but they're pretty damn close. There is never a good excuse for using reinterpret_cast<> (or the equivalent C-style cast) in C++ code: it exists only as a method for working around the flaws of legacy C libraries. It is not a coincidence that it has an offensively long name.

The issue is this. The team that maintains that code now "doesn't like size_t" and "doesn't like casting", so they'd rather simply use pragmas to disable the warning entirely - even though the compiler has to implicitly cast it to an unsigned int anyway.

Which part of casting (in this particular context) isn't shutting the compiler up? (AKA disabling the warning.)

(Note that the warning was re-enabled after this particular piece of code, so it wasn't being disabled totally.)

PJH, you are correct, when you get down to it, casting is just another way to shut the compiler up. But which looks better - a pair of #pragmas that make it look like you don't know what you're doing, or a cast that makes it explicit that you know it's changing something to an unsigned int and you're doing it intentionally?

Not for STL classes. The correct type for this function is std::list<T>::size_type

Fair point; I frequently use the value_type and iterator typedefs when writing templates that take STL container types as template parameters... totally forgot that size_type was in there too.

I'll admit to being a bit lazy on the size_type side of things though as all the STL implementations that I use (>1) have size_type typedef'd to size_t... if that laziness threatens to bite me though the compiler will tell me (I hope) and I'll switch to the technically correct but wordier form or use apropos typedefs to keep it brief/clear.