"Sean Kelly" <sean@f4.ca> wrote in message news:d3u7os$240m$1@digitaldaemon.com...
> Related question. Would you suggest relying on the default initializers
> when the default value is acceptable, or is always initializing variables
> the preferred method? I ask this because of comments you've made in the
> past that you'd prefer if variables were all default initialized to trap
> values rather than "usable" values. Basically, I'm finding myself
> beginning to rely on integral variables default initializing to 0 and not
> bothering to explicitly initialize them, and I don't want to fall into
> this habit if there's any chance of the default values changing.
That's a very good question. If there were a 'trap' value for, say, ints, D would use it as the default initializer. If one does appear sometime in the future, would D be changed to use it? No, as that would break about every D program in existence, including yours and mine <g>.
Stylistically, however, I think it is better to put in an explicit initializer when that value will actually be used. That way, the maintenance programmer knows it's intentional.
Also:
void foo(out int x) { ... }
...
int i = 0;
foo(i);
I would argue is bad style, as i is intentionally set to a value that is
never used. (Let's dub these things 'dead initializers'.) This kind of thing
is common in C/C++ programs to get the compiler to quit squawking about
"possible uninitialized use of 'i'". Dead initializers and dead code are
sources of confusion for maintenance programming, and a language should not
force programmers to put them in.

In article <d3s81s$isa$1@digitaldaemon.com>, Ben Hinkle says...
>I'm not sure how to differentiate principle and practice. For example:
>
>  int return1() { return 0; }  // written by an intern
>  void user_code() {
>      int x = return1();
>      assert(x == 1);
>      printf("I got a 1.\n");
>  }
>
>Now the code above will assert every time. Obviously no real code would
>look exactly like that, but it's not uncommon to see asserts check
>something that, if it fails, can easily be recovered from. Is returning 0
>from a function that says it returns 1 a contract violation? Yes. Is it a
>big deal? It depends.
This is somewhat of a slippery slope. To me, preconditions guarantee the criteria under which a function should succeed, and postconditions guarantee that nothing unpredictable has happened in generating that result. One of the terrific things about contracts is that I don't need to wrap every function call in an error recovery framework to have a reasonable guarantee that it has done what it's supposed to. Can many contract violations be recovered from in practice? Certainly. But to build a language on that assumption, IMO, violates the core principles of DBC. At the very least, I would like to have the option that contract violations are unrecoverable (this would be trivial with my suggestion of throwing auto classes, as it's largely a library solution anyway). Then it would be up to the client to specify recoverability based on project requirements. Though this has me leaning towards specifying recovery on a per-thread basis... perhaps a property of the Thread class that defaults based on a global flag?
Sean

Matthew wrote:
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d3i5dn$oda$1@digitaldaemon.com...
>> ...
>> In some sense for the user it's like when Windows tells you at some
>> point you have to reboot after an install (well... not all installs,
>> but you know what I mean). Windows doesn't pop up a dialog that says
>> "Reboot?" that just has an OK button. Error recovery should inform the
>> user but let them decide what to do -- or worst case let the
>> application developer decide.
>
> Actually, some things do. I think Norton Anti Virus just puts an OK
> button and nothing else. Needless to say, I have an app running in the
> background that detects this and kills the window without rebooting.
> Damn! Did I just shoot down my own thesis? :-)
Actually, I've seen something like this, too (it must have been back when I used Win95 -- yuck!). Since I don't like to follow directions, I'd move the OK box under the taskbar if I wasn't ready to reboot yet.
--
jcc7
http://jcc_7.tripod.com/d/

Walter wrote:
> "Sean Kelly" <sean@f4.ca> wrote in message news:d3u7os$240m$1@digitaldaemon.com...
>> Related question. Would you suggest relying on the default initializers
>> when the default value is acceptable or is always initializing variables
>> the preferred method? I ask this because of comments you've made in the
>> past that you'd prefer if variables were all default initialized to trap
>> values rather than "usable" values. Basically, I'm finding myself
>> beginning to rely on integral variables default initializing to 0 and
>> not bothering to explicitly initialize them, and I don't want to fall
>> into this habit if there's any chance of the default values changing.
>
> That's a very good question. If there was a 'trap' value for, say, ints,
> D would use it as the default initializer. If one does appear in the
> future sometime, would D be changed to use it? No, as that would break
> about every D program in existence, including yours and mine <g>.
They wouldn't break. These trap values will never come to 32-bit ints. But 64-bit is different.
> Stylistically, however, I think it is better style to put in an explicit
> initializer when that value will be used. That way, the maintenance
> programmer knows it's intentional.
>
> Also:
>
>     void foo(out int x) { ... }
>     ...
>     int i = 0;
>     foo(i);
>
> I would argue is bad style, as i is intentionally set to a value that is
> never used. (Let's dub these things 'dead initializers'.) This kind of
> thing is common in C/C++ programs to get the compiler to quit squawking
> about "possible uninitialized use of 'i'". Dead initializers and dead
> code are sources of confusion for maintenance programming, and a
> language should not force programmers to put them in.
In the "bad old days" of ints, we only had 16 bits to work with. That meant we regularly needed to use the entire gamut of values of the data type.
The day we get 64-bit ints as the default (which is 12 months from now), we'll have ints with such a latitude of values that we can spare some for exceptional conditions. (Just as is done today with floats.)
In other words: that day, we could stipulate, that one particular value of "int" is illegal. For any purpose. (!!!)
The same could be stipulated for signed "int64s".
------
Say we decide that (int64_max_value - 1) is an official NAN-INT. And for the signed, (int64_max_value/2 -1) would be the value.
Questions:
1) Can anybody give a thoroughly solid reason why we _cannot_ skip using these particular values as "NAN"s? (In other words, can someone pretend that we really (even in theory) will actually need _all_ 64bit integer values in production code?)
2) Could it be possible to persuade the "C99 kings" to adopt this practice?
3) If so, could we/they persuade the chip makers to implement hardware traps for these values? (As null checks and overflow/underflow are done today?)
4) Is it possible at all to have the Old Farts, who hold the keys to Programming World, to become susceptible to our demands?
-----
What's so different today?
In the old days the amount of memory was more than the width of the processor. (8 bits, and 64k of memory.)
Today, they're about equal. (32 bits and 4gigs.)
Tomorrow, we'll have 64bit processors, and it'll take a while before we get to 18 Terabytes of mainboard ram. (Likely we'll have 128bit machines before 18TB of memory.)
Plus, 64bit integers can cover stuff like the World Budget counted in cents.
So the time has come when we can actually spare a single int value for "frivolous purposes", i.e. NAN functionality.
----
Actually, I could make a deal with Walter: the day you're going to talk with The White Bearded Men, I'll tag along, with a baseball bat on my shoulder, wearing dark Wayfarers.

"J C Calvarese" <jcc7@cox.net> wrote in message news:d3uirs$2ecv$1@digitaldaemon.com...
> Matthew wrote:
>> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d3i5dn$oda$1@digitaldaemon.com...
>>> ...
>>> In some sense for the user it's like when Windows tells you at some
>>> point you have to reboot after an install (well... not all installs,
>>> but you know what I mean). Windows doesn't pop up a dialog that says
>>> "Reboot?" that just has an OK button. Error recovery should inform the
>>> user but let them decide what to do -- or worst case let the
>>> application developer decide.
>>
>> Actually, some things do. I think Norton Anti Virus just puts an OK
>> button and nothing else. Needless to say, I have an app running in the
>> background that detects this and kills the window without rebooting.
>> Damn! Did I just shoot down my own thesis? :-)
>
> Actually, I've seen something like this, too (it must have been back
> when I used Win95 -- yuck!). Since I don't like to follow directions,
> I'd move the OK box under the taskbar if I wasn't ready to reboot yet.
I do that for arbitrary things, but it leaves you open to inadvertently pressing it, since it's still in the top-window Z-order, and if you're a mad Alt-TAB-ber like me you can find that you've pressed the OK button by switching too quickly. That's _very_ annoying. :-)

In article <d3u9t4$25qd$1@digitaldaemon.com>, Walter says...
><snip>
>That's a very good question. If there was a 'trap' value for, say, ints,
>D would use it as the default initializer. If one does appear in the
>future sometime, would D be changed to use it? No, as that would break
>about every D program in existence, including yours and mine <g>.
>
>Stylistically, however, I think it is better style to put in an explicit
>initializer when that value will be used. That way, the maintenance
>programmer knows it's intentional.
I like this approach somewhat, but let me offer a counter argument:
I've heard a rule that if you have a lot of code constantly guarding against empty loops like this:

    if (x.size != 0) {
        for ( /* loop over x */ ) {
            ...
        }
    }

... then you are probably coding your end conditions or containers wrong.
Similarly, I've found that the value of zero seems to be the correct starting value for integer types in the overwhelming majority of cases. For example, the "straightforward" implementation of almost any class in the STL can initialize all its fields to 0 or null.
// (fake) vector class that tracks statistics on its contents
// (note: 'default' is a keyword, so 'default_value' is used instead)
class statistical_vector(T) {
    ...
private:
    // zeroes look okay here...
    size_t size, capacity, max_index, min_index, mean_index;

    // (.init) looks okay here...
    T default_value, total, max, min, mean;

    // explicit values; not terrible, but not pulitzer material...
    // size_t size = 0, capacity = 0, max_index = 0,
    //        min_index = 0, mean_index = 0;
    // T default_value = T.init, total = T.init, max = T.init,
    //   min = T.init, mean = T.init;
}
>Also:
> void foo(out int x) { ... }
> ...
> int i = 0;
> foo(i);
>I would argue is bad style, as i is intentionally set to a value that is never used. (Let's dub these things 'dead initializers'.) This kind of thing is common in C/C++ programs to get the compiler to quit squawking about "possible uninitialized use of 'i'". Dead initializers and dead code are sources of confusion for maintenance programming, and a language should not force programmers to put them in.
Particularly when it comes to warnings and default-generated methods, C++ tends to have many cases where you need to overspecify or disable things manually. In some cases, the design decision is understandable, but it does produce clutter.
If warnings do become a standard part of D, let's have a default set of them defined by the language and not up to the implementor.
I've spent a lot of time (at work) compiling my C++ code and then fixing (for correct code) separate sets of warnings on ia32, ia64, Solaris, Windows and OS/X...
Kevin

"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:4262E94B.400@nospam.org...
> Say we decide that (int64_max_value - 1) is an official NAN-INT. And for
> the signed, (int64_max_value/2 -1) would be the value.
>
> Questions:
>
> 1) Can anybody give a thoroughly solid reason why we _cannot_ skip using
> these particular values as "NAN"s? (In other words, can someone pretend
> that we really (even in theory) will actually need _all_ 64bit integer
> values in production code?)
We can't. 64 bit ints are often used, for example, to implement fixed point arithmetic. Furthermore, NANs work best when the hardware is set up to throw an exception if they are used.
> 2) Could it be possible to persuade the "C99 kings" to adopt this
> practice?
I just don't think there's any way that would ever happen.
> 3) If so, could we/they persuade to have chip makers implement hardware
> traps for these values? (As today null checks and of/uf are done?)
The hardware people could do it by adding an "uninitialized" bit for each byte, much like the parity bit. Then, a hardware fault could be generated when an "uninitialized" memory location is read. This won't break existing practice.
> 4) Is it possible at all to have the Old Farts, who hold the keys to Programming World, to become susceptible to our demands?
LOL.

"Kevin Bealer" <Kevin_member@pathlink.com> wrote in message news:d3v3hp$2rge$1@digitaldaemon.com...
> I've spent a lot of time (at work) compiling my C++ code and then fixing
> (for correct code) separate sets of warnings on ia32, ia64, Solaris,
> Windows and OS/X...
Yes, that's a very common problem. It gets really annoying when two compilers each issue warnings for the complement of the way the other one insists the code should be written.

"Walter" <newshound@digitalmars.com> wrote in message news:d3v6kc$2ucq$2@digitaldaemon.com...
> "Kevin Bealer" <Kevin_member@pathlink.com> wrote in message news:d3v3hp$2rge$1@digitaldaemon.com...
>> I've spent a lot of time (at work) compiling my C++ code and then
>> fixing (for correct code) separate sets of warnings on ia32, ia64,
>> Solaris, Windows and OS/X...
>
> Yes, that's a very common problem. It gets really annoying when two
> compilers each issue warnings for the complement of the way the other
> one insists the code should be written.
Hmph! Tell me about it!

Georg Wrede wrote:
> xs0 wrote:
>>> Don't use an assert then. Use a different mechanism.
>>
>> But I want to use assert, even if
>
> Assert exists for the very purpose of halting the entire program when
> the assertion fails.
>
> One might even say that "assert" is a concept. And that concept is then
> implemented in quite a few languages.
>
> D should not be the one language which destroys the entire concept.
Java throws a catchable/quenchable exception (or error or whatever); you're advised not to catch it, but you still can. There is a choice.
PHP issues a warning by default, you can define a callback and do whatever you want, or you can choose to abort on errors. There is a choice.
Those are #2 and #5 in the TIOBE index, not some niche stuff!
D throws an exception that can currently be quenched. If you don't want to do that, you have all the power you need not to. There is a choice.
What some of you seem to want would take that choice away, and I (still) fail to see the benefit; really, you can abort if you want to, what's the big deal??
xs0