Noob <root@127.0.0.1> wrote:
> Typically when I want to initialize an auto struct to
> "all-0" values, I write
> struct foo bar = { 0 };
> and let the compiler correctly set arithmetic values
> to the appropriate 0 or 0.0 and pointers to NULL.
> But when I have a malloced struct, I am often tempted
> to write
> struct foo *bar = malloc(sizeof *bar);
> memset(bar, 0, sizeof *bar);
> even though I know this is not guaranteed to set
> floating-point fields to 0.0 and pointers to NULL.

While it is true that floating point zero doesn't have to
be all zero bits, hardware designers would have to have a very
good reason for not doing it. (Assuming that they want people
to buy and use the hardware.)

Even more, consider the fate of a hardware designer for a system
where all zero bits were not an integer zero?

glen herrmannsfeldt <> writes:
> Noob <root@127.0.0.1> wrote:
>
>> Typically when I want to initialize an auto struct to
>> "all-0" values, I write
>
>> struct foo bar = { 0 };
>
>> and let the compiler correctly set arithmetic values
>> to the appropriate 0 or 0.0 and pointers to NULL.
>
>> But when I have a malloced struct, I am often tempted
>> to write
>
>> struct foo *bar = malloc(sizeof *bar);
>> memset(bar, 0, sizeof *bar);
>
>> even though I know this is not guaranteed to set
>> floating-point fields to 0.0 and pointers to NULL.
>
> While it is true that floating point zero doesn't have to
> be all zero bits, hardware designers would have to have a very
> good reason for not doing it. (Assuming that they want people
> to buy and use the hardware.)
>
> Even more, consider the fate of a hardware designer for a system
> where all zero bits was not an integer zero?

Such an implementation would violate the C standard, which says:

For any integer type, the object representation where all the bits
are zero shall be a representation of the value zero in that type.

That was added by one of the Technical Corrigenda to C99.

(There's no such requirement for pointer or floating-point types.)

At least for pointers, there could be a very good reason to make null
something other than all-bits-zero. I once worked on a (non-C) system
that used 0x00000001 as its null pointer representation because the
hardware would trap on an attempt to dereference an odd address (at
least for types bigger than one byte).

--
Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

On 25.04.2013 21:19, Keith Thompson wrote:
> glen herrmannsfeldt <> writes:
>> Noob <root@127.0.0.1> wrote:
>>
>>> Typically when I want to initialize an auto struct to
>>> "all-0" values, I write
>>
>>> struct foo bar = { 0 };
>>
>>> and let the compiler correctly set arithmetic values
>>> to the appropriate 0 or 0.0 and pointers to NULL.
>>
>>> But when I have a malloced struct, I am often tempted
>>> to write
>>
>>> struct foo *bar = malloc(sizeof *bar);
>>> memset(bar, 0, sizeof *bar);
>>
>>> even though I know this is not guaranteed to set
>>> floating-point fields to 0.0 and pointers to NULL.
>>
>> While it is true that floating point zero doesn't have to
>> be all zero bits, hardware designers would have to have a very
>> good reason for not doing it. (Assuming that they want people
>> to buy and use the hardware.)
>>
>> Even more, consider the fate of a hardware designer for a system
>> where all zero bits was not an integer zero?
>
> Such an implementation would violate the C standard, which says:
>
> For any integer type, the object representation where all the bits
> are zero shall be a representation of the value zero in that type.
>
> That was added by one of the Technical Corrigenda to C99.
>
> (There's no such requirement for pointer or floating-point types.)
>
> At least for pointers, there could be a very good reason to make null
> something other than all-bits-zero. I once worked on a (non-C) system
> that used 0x00000001 as its null pointer representation because the
> hardware would trap on an attempt to dereference an odd address (at
> least for types bigger than one byte).
>
Does this still hold?

The C11 standard (actually N1570 Committee Draft April 12, 2011) says in
paragraph 6.3.2.3 Pointers, #3: "An integer constant expression with the
value 0, or such an expression cast to type void *, is called a null
pointer constant."

And in footnote #66: "The macro NULL is defined in <stddef.h> (and other
headers) as a null pointer constant; see 7.19."

On 04/25/2013 03:19 PM, Keith Thompson wrote:
> glen herrmannsfeldt <> writes:
>> Noob <root@127.0.0.1> wrote:
>>
>>> Typically when I want to initialize an auto struct to
>>> "all-0" values, I write
>>
>>> struct foo bar = { 0 };
>>
>>> and let the compiler correctly set arithmetic values
>>> to the appropriate 0 or 0.0 and pointers to NULL.
>>
>>> But when I have a malloced struct, I am often tempted
>>> to write
>>
>>> struct foo *bar = malloc(sizeof *bar);
>>> memset(bar, 0, sizeof *bar);
>>
>>> even though I know this is not guaranteed to set
>>> floating-point fields to 0.0 and pointers to NULL.
>>
>> While it is true that floating point zero doesn't have to
>> be all zero bits, hardware designers would have to have a very
>> good reason for not doing it. (Assuming that they want people
>> to buy and use the hardware.)
>>
>> Even more, consider the fate of a hardware designer for a system
>> where all zero bits was not an integer zero?
>
> Such an implementation would violate the C standard, which says:
>
> For any integer type, the object representation where all the bits
> are zero shall be a representation of the value zero in that type.
>
> That was added by one of the Technical Corrigenda to C99.
>
> (There's no such requirement for pointer or floating-point types.)
>
> At least for pointers, there could be a very good reason to make null
> something other than all-bits-zero. I once worked on a (non-C) system
> that used 0x00000001 as its null pointer representation because the
> hardware would trap on an attempt to dereference an odd address (at
> least for types bigger than one byte).
>
I last worked on a system where all-bits-zero was not a floating point
zero back in 1985. It was a non-normalized floating point zero, one of
the consequences being that adding FLT_EPSILON/2 to it would produce
underflow. I think IEEE-754 was set up intentionally to encourage use of
all-bits-zero as a normal zero. So, although C covers such cases, their
undesirability has eased them into extinction.
Unless you count the systems of the 80's where int was stored in 32 bits
of a 64-bit word and didn't affect the other 32 bits, which were
invisible to integer arithmetic (but were affected by | ^ & operators).

Werner Wenzel <> writes:
> On 25.04.2013 21:19, Keith Thompson wrote:
>> glen herrmannsfeldt <> writes:
[...]
>>> Even more, consider the fate of a hardware designer for a system
>>> where all zero bits was not an integer zero?
>>
>> Such an implementation would violate the C standard, which says:
>>
>> For any integer type, the object representation where all the bits
>> are zero shall be a representation of the value zero in that type.
>>
>> That was added by one of the Technical Corrigenda to C99.
>>
>> (There's no such requirement for pointer or floating-point types.)
>>
>> At least for pointers, there could be a very good reason to make null
>> something other than all-bits-zero. I once worked on a (non-C) system
>> that used 0x00000001 as its null pointer representation because the
>> hardware would trap on an attempt to dereference an odd address (at
>> least for types bigger than one byte).
>>
> Does this still hold?

This says nothing about the *representation* of a null pointer, which is
commonly all-bits-zero but needn't be.

If you write:

void *ptr = 0;

there's an implicit conversion of the null pointer constant 0 to type
void*. That conversion may not be trivial; for example, it could result
in a stored pointer value with a representation of, say, 0xffffffff.
> And in footnote #66: "The macro NULL is defined in <stddef.h> (and
> other headers) as a null pointer constant; see 7.19."
>
> (Although paragraph 7.19 Common definitions <stddef.h>, #3, describes
> NULL as an "implementation-defined" null pointer constant.)
>
> If so, why not just use calloc instead of malloc?

Because calloc() doesn't necessarily set pointers to null or
floating-point objects to 0.0.

And because, in many (but not all) cases, zeroing allocated memory is a
waste of time if your program doesn't read objects until after it's
explicitly written to them. Furthermore, a compiler can perform data
flow analysis that can tell you, in some cases, when you've read an
uninitialized object; initializing it to zero makes such analysis
impossible.

--
Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

On Thu, 25 Apr 2013 17:10:55 +0200, Noob wrote:
> But when I have a malloced struct, I am often tempted
> to write
>
> struct foo *bar = malloc(sizeof *bar);
> memset(bar, 0, sizeof *bar);
>
> even though I know this is not guaranteed to set
> floating-point fields to 0.0 and pointers to NULL.
>
> What are my options then?

Write a configure test to check that the platform uses all-zeros for 0.0
and NULL, and generate a "this program will not work on your system" error
if it doesn't.

There isn't actually a law requiring C code to be portable to every system
with a conforming C implementation.

Or do you allow for the case where sizeof(void*)>sizeof(size_t) (i.e.
8086 "compact" and "large" memory models)?

On 04/25/2013 05:18 PM, Werner Wenzel wrote:
> On 25.04.2013 21:19, Keith Thompson wrote:
....
>> For any integer type, the object representation where all the bits
>> are zero shall be a representation of the value zero in that type.
>>
>> That was added by one of the Technical Corrigenda to C99.
>>
>> (There's no such requirement for pointer or floating-point types.)
>>
>> At least for pointers, there could be a very good reason to make null
>> something other than all-bits-zero. I once worked on a (non-C) system
>> that used 0x00000001 as its null pointer representation because the
>> hardware would trap on an attempt to dereference an odd address (at
>> least for types bigger than one byte).
>>
> Does this still hold?
>
> The C11 standard (actually N1570 Committee Draft April 12, 2011) says in
> paragraph 6.3.2.3 Pointers, #3: "An integer constant expression with the
> value 0, or such an expression cast to type void *, is called a null
> pointer constant."
>
> And in footnote #66: "The macro NULL is defined in <stddef.h> (and other
> headers) as a null pointer constant; see 7.19."

That's not a change from C99, and there's no contradiction here. A null
pointer constant, if used in various contexts, gets implicitly converted
to a null pointer. That null pointer, if saved in a pointer object, will
have a representation. Just because the null pointer constant
necessarily involves an integer expression with a value of 0 doesn't
mean that the representation of a null pointer must have all bits zero.

Note that i is an integer expression with a value of zero, but it is not
an integer constant expression, because 'i' doesn't qualify as any of
the things allowed by 6.6p6 in integer constant expressions. Therefore,
no null pointer constant is involved in the initialization of q. The
value of (void*)i is therefore governed only by 6.3.2.3p5, which allows,
among other things, the possibility that q might hold a trap
representation. Even if it isn't a trap representation, it need not be a
null pointer.
If q has a trap representation, the first assert() has undefined
behavior before it even gets a chance to be triggered. Even if that's
not the case, the C standard says nothing that prevents any of those
asserts from triggering.
> (Although paragraph 7.19 Common definitions <stddef.h>, #3, describes
> NULL as an "implementation-defined" null pointer constant.)

That's true. NULL can expand into 0, '\0', 0U, 0L, L'\0', (5-5), or
((short)3.14F - (long)3.41), (void*)0, among infinitely many other
possibilities. I don't know of any good reason for defining it as
anything other than 0 or (void*)0, but all of those other definitions
are allowed.

James Kuyper <> writes:
[...]
> That's true. NULL can expand into 0, '\0', 0U, 0L, L'\0', (5-5), or
> ((short)3.14F - (long)3.41), (void*)0, among infinitely many other
> possibilities. I don't know of any good reason for defining it as
> anything other than 0 or (void*)0, but all of those other definitions
> are allowed.

NULL can't expand to (void*)0, because of N1570 7.1.2p5 (essentially
unchanged from earlier versions of the standard):

Any definition of an object-like macro described in this clause
shall expand to code that is fully protected by parentheses where
necessary, so that it groups in an arbitrary expression as if it
were a single identifier.

Without the parentheses, sizeof NULL would be a syntax error.

(An overly literal reading of the standard suggests that ((void*)0)
is also disallowed, because 6.5.1p5 doesn't actually say that a
parenthesized null pointer constant is a null pointer constant,
but I think the intent is clearly to allow it.)

One interesting way to define NULL would be:

enum { __NULL__ };
#define NULL __NULL__

It could make it easier for the compiler to diagnose misuses of NULL
(e.g., as a null character constant), since macros often are not
visible to the phase of the compiler that generates the diagnostics.

--
Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

(snip)
>>>> But when I have a malloced struct, I am often tempted
>>>> to write
>>>> struct foo *bar = malloc(sizeof *bar);
>>>> memset(bar, 0, sizeof *bar);
>>>> even though I know this is not guaranteed to set
>>>> floating-point fields to 0.0 and pointers to NULL.

(snip, then I wrote)
>>> While it is true that floating point zero doesn't have to
>>> be all zero bits, hardware designers would have to have a very
>>> good reason for not doing it. (Assuming that they want people
>>> to buy and use the hardware.)
>>> Even more, consider the fate of a hardware designer for a system
>>> where all zero bits was not an integer zero?
>> Such an implementation would violate the C standard, which says:
>> For any integer type, the object representation where all the bits
>> are zero shall be a representation of the value zero in that type.

But that doesn't stop people from trying to sell them. Demand will
be low from those hoping to run C programs on them.
>> That was added by one of the Technical Corrigenda to C99.
>> (There's no such requirement for pointer or floating-point types.)
>> At least for pointers, there could be a very good reason to make null
>> something other than all-bits-zero. I once worked on a (non-C) system
>> that used 0x00000001 as its null pointer representation because the
>> hardware would trap on an attempt to dereference an odd address (at
>> least for types bigger than one byte).

Note that segment selector zero is special in Intel hardware starting
with the 80286 on through current systems, as the null selector.
> I last worked on a system where all-bits-zero was not a floating point
> zero back in 1985. It was a non-normalized floating point zero, one of
> the consequences being that adding FLT_EPSILON/2 to it would produce
> underflow. I think IEEE-754 was set up intentionally to encourage use of
> all-bits-zero as a normal zero. So, although C covers such cases, their
> undesirability has eased them into extinction.

In systems that don't have a hidden one, and allow for the possibility
of an unnormalized value, you can get surprising results adding values
with a zero significand and other than the smallest exponent.

Prenormalization for addition is based on the difference in the
exponents, usually before checking to see that the significand is zero.

For IBM hexadecimal floating point, for example, the Fortran AINT
(truncate to an integer value, but keep in floating point form) is
implemented by adding 0 with an exponent of 7 (biased exponent X'47').
The exponent should have the smallest value, which is usually the
biased value of zero.
> Unless you count the systems of the 80's where int was stored in 32 bits
> of a 64-bit word and didn't affect the other 32 bits, which were
> invisible to integer arithmetic (but were affected by | ^ & operators).

The IBM 36 bit systems, such as the 704 and 7090, store floating point
values in 36 bits, and fixed point in 16 bits of a 36 bit word.
(Sign magnitude for those machines.)

glen herrmannsfeldt <> writes:
> Tim Prince <> wrote:
>> On 04/25/2013 03:19 PM, Keith Thompson wrote:
>>> glen herrmannsfeldt <> writes:
[...]
>>>> Even more, consider the fate of a hardware designer for a system
>>>> where all zero bits was not an integer zero?
>
>>> Such an implementation would violate the C standard, which says:
>
>>> For any integer type, the object representation where all the bits
>>> are zero shall be a representation of the value zero in that type.
>
> But that doesn't stop people from trying to sell them. Demand will
> be low from those hoping to run C programs on them.

No, what stops people from trying to sell such systems is the fact that
they don't exist.

Or are you saying that there are systems where all-bits-zero is not a
representation of the integer 0?

[...]

--
Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

(snip, I wrote)
>>>>> Even more, consider the fate of a hardware designer for a system
>>>>> where all zero bits was not an integer zero?
>>>> Such an implementation would violate the C standard, which says:
>>>> For any integer type, the object representation where all the bits
>>>> are zero shall be a representation of the value zero in that type.
>> But that doesn't stop people from trying to sell them. Demand will
>> be low from those hoping to run C programs on them.
> No, what stops people from trying to sell such systems is the fact
> that they don't exist.

Well, people figure out before building them that no-one would
buy them.
> Or are you saying that there are systems where all-bits-zero is not a
> representation of the integer 0?

Biased arithmetic is usual for the exponent of floating point, and
it would seem possible to build hardware with a biased integer form.
(That is, take twos complement and invert the sign bit.) The
advantage over twos complement is pretty small, though.

As far as I know, other languages don't have the requirement on the
representation of zero, but even so it would probably be enough of
a discouragement not to build one.

One possibility is to define a single static const object of struct
foo and reference that whenever you want to initialize a new struct.

static const struct foo xX_invalid_foo_Xx = { 0 };

If you want the symbol to stay in an implementation '.c' file, one
could create a function that returns a pointer to xX_invalid_foo_Xx
and wrap a macro on it, kind of like how 'errno' might be implemented.

Just out of curiosity, does this mean y'all are obligated (either
contractually or practically) to deliver "warning-free" code, but
which circumstances produce warnings is not well-defined? Or is
it well-defined but just not under your control?

On Mon, 06 May 2013 19:42:28 -0700, Tim Rentsch wrote:
> Just out of curiosity, does this mean y'all are obligated (either
> contractually or practically) to deliver "warning-free" code, but which
> circumstances produce warnings is not well-defined? Or is it
> well-defined but just not under your control?

IIRC, it has been a customer requirement to deliver "warning-free" code
in a few projects (these were projects for the automotive industry).
But practically speaking, I have also seen the effects when no effort was
made to keep the code warning free. There were literally hundreds of
warnings generated by a single build, varying from spurious and innocent
to problems that should have been looked at seriously. But by the time
you have that many it has become infeasible to weed out the really
serious ones.

For a project that starts from scratch, you don't know for sure which
circumstances you will encounter that produce spurious warnings from the
compiler. In that case, I would rather start out with a warning level
that is likely to be too strict, than run into problems later on because
you forgot to turn on an important warning.
When you run into a spurious warning that you can't elegantly avoid and
all (senior) team members agree that the warning is harmless, then it
is time to adjust the warning level to prevent that warning from being
generated.
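
As a concrete starting point, a strict gcc/clang warning set along those
lines might look like this (the exact flags are illustrative, and any
-Wno-* relaxation comes only after team review):

```shell
# Start stricter than you think you need; -Werror keeps the build clean.
CFLAGS="-std=c11 -Wall -Wextra -Wpedantic -Werror"

# Later, if the team agrees a specific warning is spurious and cannot be
# elegantly avoided, relax exactly that warning rather than lowering the
# whole level:
CFLAGS="$CFLAGS -Wno-unused-parameter"

echo "$CFLAGS"
```
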

Bart van Ingen Schenau <> writes:
> On Mon, 06 May 2013 19:42:28 -0700, Tim Rentsch wrote:
>
>> Just out of curiosity, does this mean y'all are obligated (either
>> contractually or practically) to deliver "warning-free" code, but which
>> circumstances produce warnings is not well-defined? Or is it
>> well-defined but just not under your control?
>
> IIRC, it has been a customer requirement to deliver "warning-free"
> code in a few projects (these were projects for the automotive
> industry). But practically speaking, I have also seen the effects
> when no effort was made to keep the code warning free. There were
> literally hundreds of warnings generated by a single build,
> varying from spurious and innocent to problems that should have
> been looked at seriously. But by the time you have that many it
> has become infeasible to weed out the really serious ones.

I am all in favor of writing warning-free code. I routinely use
the -Werror flag when compiling.

However, the case I'm talking about is where what constitutes a
warning condition is not well defined, or may change over time
(unpredictably), or both. No sensible development organization
should ever agree to such requirements.
> For a project that starts from scratch, you don't know for sure
> which circumstances you will encounter that produce spurious
> warnings from the compiler. In that case, I would rather start
> out with a warning level that is likely to be too strict, than
> run into problems later on because you forgot to turn on an
> important warning.

IMO any developer who takes this attitude is not doing his job.
If part of the requirements are to produce warning-free code,
then it damn well better be specified just which cases will
produce warnings, and if it isn't then whoever is responsible
for supplying such specifications should be politely asked to
supply them, and told that work will proceed only after they
have been.
> When you run into a spurious warning that you can't elegantly
> avoid and all (senior) team members agree that the warning is
> harmless, then it is time to adjust the warning level to
> prevent that warning from being generated.

If you look again at my message I think you will see that I'm
asking about cases where the choice of what constitutes a warning
condition is not under control of the development team, senior or
otherwise.
