I am starting a new programming project, and one of the things I need to do is
set the time. I started thinking about who should be responsible for verifying
that the parameters to setTime() are correct. For instance, a minute has the
range 0..59. There are two choices: the caller can ensure the proper range, or
the callee can test for it. The setTime() function could use an assert to test
its parameters. This results in extra code to write and to execute. Thinking
about the general case for all functions and whether or not to trust the input,
asserts could result in significantly more code. Further, if this code is to be
placed into a library, where it could someday end up in a critical tight loop,
the extra run-time overhead might not be desired.
For the first time, I came to understand the value of Pascal's subrange types.
If both the parameter and the argument have the same subrange type, then both
reliability and execution speed are ensured.
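
To make the two options concrete, here is a minimal D sketch of the
callee-checking approach; setTime() is from the description above, but the
parameter names and bounds are just illustrative:

    // Callee validates its own input: extra code to write,
    // and (in debug builds) extra code to execute.
    void setTime(int hour, int minute, int second)
    {
        assert(hour >= 0 && hour <= 23);
        assert(minute >= 0 && minute <= 59);
        assert(second >= 0 && second <= 59);
        // ... store the time ...
    }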

I accidentally cut off part of my message.
I wanted to suggest Range Types as a feature for the D-Language.
I am not a language designer though. I don't know if Pascal's implementation is
the best or if there are problems with this idea in general.

I wanted to suggest Range Types as a feature for the D-Language.
I am not a language designer though. I don't know if Pascal's implementation is
the best or if there are problems with this idea in general.

Seems a good idea (Ada has this feature too).
I question, however, how checking of the subrange
should be achieved, i.e. should there be an assert
for each write to the subranged type variable, or
should the value wrap, i.e.
assigned = newValue;
becomes
assigned = ( ( newValue - minimum ) % ( maximum - minimum + 1 ) ) + minimum;
This would be more semantically consistent with the
current integer types (of course, a more efficient
implementation would need to be found). The former
version, however, would seem closer to the D way of
doing things, i.e. to aim for maximum efficiency in
the release version.
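
Both semantics could even be emulated today as a library struct; the following
is only a sketch with made-up names (Bounded, set, setWrapped). Note that the
modulo formula above needs a sign adjustment in D, where % may return a
negative remainder:

    struct Bounded(int min, int max)
    {
        private int value = min;

        // checked write: assert in debug builds, free in release
        void set(int v)
        {
            assert(v >= min && v <= max);
            value = v;
        }

        // wrapping write, per the formula above, corrected for
        // negative remainders
        void setWrapped(int v)
        {
            int span = max - min + 1;
            int r = (v - min) % span;
            if (r < 0)
                r += span;
            value = r + min;
        }

        int get() { return value; }
    }

    // usage: Bounded!(0, 59) minute; minute.setWrapped(75); // -> 15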
Also, should numerical values outside the range normally
stored within the basic type be allowed? This may
be useful for storing a set of large values which
only occur over a small interval. So a byte
type could, for instance, store numbers in
the range 100000 .. 100020.
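
The byte case would presumably compile down to an offset encoding, along these
lines (store/load are illustrative names, not a proposal):

    // the range 100000 .. 100020 fits in a byte as an offset
    ubyte store(int v)
    {
        assert(v >= 100000 && v <= 100020);
        return cast(ubyte)(v - 100000);
    }

    int load(ubyte b)
    {
        return b + 100000;
    }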
Secondly, what syntax should be used?
typedef <integerType> ( <minima> .. <maxima> ) <newType>;
seems a good idea, e.g.
typedef uint ( 0 .. 59 ) Minutes;
typedef byte ( 0 .. 9 ) Digit;
C 2003/6/7

I question, however, how checking of the subrange
should be achieved, i.e. should there be an assert
for each write to the subranged type variable, or
should the value wrap, i.e.

The assert should be in a cast. One also has to be warned when an integer is
converted to a subrange type implicitly. So, whenever you need to work with a
subrange type, you would take care to convert as early as possible; from then
on, static type checking should be enough.
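
In other words, the run-time check happens once, at the conversion boundary,
and everything downstream is statically typed. A sketch, reusing the
hypothetical Bounded struct from earlier in the thread:

    alias Bounded!(0, 59) Minutes;

    // checked conversion at the boundary: one assert, once
    Minutes toMinutes(int raw)
    {
        Minutes m;
        m.set(raw); // asserts 0 <= raw <= 59 in debug builds
        return m;
    }

    // from here on the type system guarantees the range;
    // no further run-time checks are needed
    void setMinute(Minutes m)
    {
        // ... use m.get() freely ...
    }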

callee can test for it. The setTime() function could use an assert to test its
parameters. This results in extra code to write and to execute. Thinking about
the general case for all functions and whether or not to trust the input,
asserts could result in significantly more code. Further, if this code is to be
placed into a library, where it could someday end up in

Asserts are for debug *only*, and they are never executed
in release code.
Asserts are, OTOH, *mandatory* for debug-mode library
code, whenever the caller is responsible for feeding proper
input to a library function. (This is the very idea of
design-by-contract.)
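
D's design-by-contract support expresses exactly this: an in-contract is
checked in debug builds and compiled out with -release. A sketch of setTime()
written that way:

    void setTime(int hour, int minute, int second)
    in
    {
        assert(hour >= 0 && hour <= 23);
        assert(minute >= 0 && minute <= 59);
        assert(second >= 0 && second <= 59);
    }
    body
    {
        // ... store the time; release builds skip the checks ...
    }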
Cheers,
Sz.

LabVIEW can configure not only min/max/increment for any numeric type, but
also the behavior for out-of-range situations:
min: [ignore | coerce]
max: [ignore | coerce]
inc: [ignore | coerce to nearest | coerce up | coerce down]
(Where min/max can have values up to and including +/-inf.) Whether it's
desirable to translate this scheme into D I don't know, but I thought it worth a
mention.
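
If something like this were carried over to D, the out-of-range policies might
be modeled as enums, roughly as follows (illustrative names; the increment
policy is omitted):

    enum Policy { ignore, coerce }

    // apply LabVIEW-style min/max policies to a value
    int apply(int v, int min, int max, Policy pmin, Policy pmax)
    {
        if (v < min && pmin == Policy.coerce)
            return min; // clamp up to the minimum
        if (v > max && pmax == Policy.coerce)
            return max; // clamp down to the maximum
        return v; // ignore: the value passes through unchanged
    }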
Note that LabVIEW is a compiled, garbage-collected language and I have seen it
perform as fast as C even for low-level bit shuffling tasks one would normally
classify as C-centric (e.g. encryption and hash functions).
Mark