>Replies are scarce. How shall I interpret this silence? That the technique is
>perfect, and requires no further improvement? That the technique is useless, and
>not worth discussing? That no one really understands what the hell I'm on about?
You are talking about what is generally known as predicate methods.

In article <d6mkfr$2cb9$1@digitaldaemon.com>, Nod says...
>
>Replies are scarce. How shall I interpret this silence?
I can only speak for myself: I was skeptical at first, but now I see some merit
in this feature. I also dismissed things as being more applicable to the in{}
block with asserts, until I realized that you're attempting to override based on
constant values.
I'm also a big supporter of this approach, so please, continue refining your
work here in the newsgroup. :)
If I may, I have a few critiques and suggestions.
- How does this specification affect the D calling convention and linkages? I
suspect you'd rely on the compiler to find the predicates in the D source code,
and then use the symbol for that particular method. In that case, you'd need to
find a way to use D's name mangling to somehow tell a set of similar methods
apart. I suspect this would need a revision of the name-mangling spec,
including how to reliably map specific predicates (read: not dependent on the
order they were parsed/compiled) to D-style symbols.
- From your specification: ">:void foo(int i in a & b) { ... } // joins ranges"
Wouldn't the predicate expression be better off defined as a typical D
statement, provided that it operates strictly on constants? If a and b are both
arrays, wouldn't '~' be more appropriate? If that causes trouble, then why not
use the typical boolean AND operator '&&'?
- One of the things on the table for D is to have template matches deduced, so
the template call syntax (via '!') would only be needed to solve ambiguities
(like in C++). Should such a feature become a part of D, wouldn't these
predicate expressions be better suited for templates instead?
> template foo(A : int in [3,4,5]){
>     foo(A a){ /* specialized foo */ }
> }
> template foo(A : int){
>     foo(A a){ /* generalized foo */ }
> }
>
> void main(){
> foo(4); // matches specialized template for foo
> foo(1); // matches generalized template for foo
> foo(new Object()); // error: no match for foo(Object);
> }
Not to turn your spec completely on its head, but it would mesh well with the
current separation of responsibility between templates and functions. Templates
already do some crude constant matching. It also makes sure that the compiler
can supply the appropriate symbols for the linker as this already works with
explicit template calls.
Then again, if predicates were extended to templates (like above), then your
syntax would simply be a shortcut for function templates.
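For readers without a D compiler handy, the specialized-vs-generalized matching sketched above can be approximated at run time. This Python rendering is purely my own illustration (the register/dispatch helpers are invented, not proposed syntax):

```python
# My own illustration of value-predicate dispatch: more specific
# overloads are registered first and tried before the catch-all.
overloads = []

def register(pred):
    """Register an implementation guarded by a value predicate."""
    def wrap(fn):
        overloads.append((pred, fn))
        return fn
    return wrap

@register(lambda a: isinstance(a, int) and a in (3, 4, 5))
def foo_special(a):
    return "specialized"

@register(lambda a: isinstance(a, int))
def foo_general(a):
    return "generalized"

def foo(a):
    # The first registered (most specific) predicate that matches wins.
    for pred, fn in overloads:
        if pred(a):
            return fn(a)
    raise TypeError("no match for foo(%r)" % (a,))

print(foo(4))   # specialized
print(foo(1))   # generalized
```

Passing an unhandled value, such as an arbitrary object, falls through every predicate and raises an error, mirroring the "error: no match" case above.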
- EricAnderton at yahoo

Nod wrote:
> Replies are scarce. How shall I interpret this silence? That the technique is
> perfect, and requires no further improvement? That the technique is useless, and
> not worth discussing? That no one really understands what the hell I'm on about?
All right, you asked for it.
At the core, writing one function this way:
void foo (int a)
{
    if (a < 5)
        ...
    else if (a < 10)
        ...
}
And another function this way:
void foo (int a in int.min .. 4) ...
void foo (int a in 5 .. 9) ...
Doesn't matter, they're conceptually equivalent. However, there are
four extra factors in the second example. There's the repeating
declaration (names, return type, any additional parameters), the need to
understand exactly how the first example works, the ability to separate
the declarations so that they're no longer related, and the fact that
the range doesn't behave like a regular D range. So it has higher
complexity, not lower: real control flow is more concise, more flexible,
and easier to deal with.
This leads me to believe that this would simply cause confusion because
it makes the feature hard to use properly. How would I document these
functions? How do I share common processing between them? How do I
override these methods - without looking at the specification to be
certain I'm doing this right?
This also screws up function overloading, making it a lot more complex
and hazardous - particularly inner functions and overrides and whatever
else you had going on in there. That entirely misses the spirit of D's
function overloading.
Now, templates COULD have benefited from this, but that was before
static if. Now that there is control flow in the templates, there's no
reason to have it in specialization. I won't speak to your mixins and
overrides and the other dozen things you have going on here because then
we'll be here for weeks.
Where's the dire need - or ANY need? This is all just syntax sugar, and
in this case syntax sugar that increases complexity! I don't see any
reason at all why I would spend a few weeks implementing this feature; I
would be in the exact same place as before, only with a much more
complex language.

Nod wrote:
> Replies are scarce. How shall I interpret this silence? That the technique is
> perfect, and requires no further improvement? That the technique is useless, and
> not worth discussing? That no one really understands what the hell I'm on about?
>
I was enthusiastic about the idea, but I got lost in the second take.
Too complicated for me to follow. (I'm just lazy).

In article <d6nhtr$51a$1@digitaldaemon.com>, pragma says...
>
>In article <d6mkfr$2cb9$1@digitaldaemon.com>, Nod says...
>>
>>Replies are scarce. How shall I interpret this silence?
>
>I can only speak for myself: I was skeptical at first, but now I see some merit
>in this feature. I also dismissed things as being more applicable to the in{}
>block with asserts, until I realized that you're attempting to override based on
>constant values.
>
>I'm also a big supporter of this approach, so please, continue refining your
>work here in the newsgroup. :)
>
Music to my ears! Ehm.. eyes. :)
A small clarification though: The technique is not limited to constant values.
This is a side-effect of the fact that any overload may be rewritten as an
if-else or switch-case construct executed at run-time (incurring a small
run-time overhead on each function call). For example:
:int a = 0, b = 10; // globals
:void foo(int i in a..b) { ... }
:void foo(int i) { ... }
:int main()
:{
: foo(2); // calls first
: a = 5;
: foo(2); // calls second
:}
>If I may, I have a few critiques and suggestions.
>
Critiques/suggestions are highly welcomed.
>- How does this specification affect the D calling convention and linkages? I
>suspect you'd rely on the compiler to find the predicates in the D source code,
>and then use the symbol for that particular method. In that case, you'd need to
>find a way to use D's name mangling to somehow tell a set of similar methods
>apart. I suspect this would need a revision of the name-mangling spec,
>including how to reliably map specific predicates (read: not dependent on the
>order they were parsed/compiled) to D-style symbols.
>
I admit I have not yet given this much thought, but on first glance, what you're
saying seems to be correct. Constant ranges would have to be uniquely mangled.
Though, the order in which they were parsed/compiled doesn't matter currently
either, as required for forward referencing, so that shouldn't be a problem.
Functions with dynamic ranges should only need one symbol, as the by-value
overload logic gets folded into the function. Hmm, this may not always be the
best way to do it though, as the compiler could inline the overload logic for
performance. I'm not yet sure how to solve this case. Tricky stuff, mangling.
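A hedged sketch of what an order-independent mangling might look like: everything below, the prefix and the encoding scheme, is invented for illustration and is NOT the real D ABI.

```python
# Hypothetical mangling sketch: encode a constant range into the symbol
# so two overloads of one function differ only by their range bounds.
def mangle(func, param_type, lo, hi):
    def enc(n):
        # '-' is not a valid symbol character, so mark negatives with 'N'.
        return "N%d" % -n if n < 0 else str(n)
    return "_P%d%s_%s_R%s_%s" % (len(func), func, param_type, enc(lo), enc(hi))

# The symbol depends only on the bounds, never on parse/compile order.
print(mangle("foo", "i", 5, 9))    # _P3foo_i_R5_9
print(mangle("foo", "i", -5, 5))   # _P3foo_i_RN5_5
```

Because the symbol is a pure function of the declaration, two compilation orders produce identical symbols, which is the property required above.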
>- From your specification: ">:void foo(int i in a & b) { ... } // joins ranges"
>Wouldn't the predicate expression be better off defined as a typical D
>statement, provided that it operates strictly on constants? If a and b are both
>arrays, wouldn't '~' be more appropriate? If that causes trouble, then why not
>use the typical boolean AND operator '&&'?
>
The a & b example assumes array operations have been implemented; they
currently are not, but the docs say they are planned. I assume they would work
on arrays in the same way bitwise operations work on bits, allowing not only
merging but also subtraction |, difference ^, and negation !.
Using ~ gives an odd condition when using overlapping ranges. One could of
course special-case this to be allowed in this context.
:0..4 ~ 2..6 // 0123423456
:0..4 & 2..6 // 0123456
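The distinction can be checked with ordinary lists and sets (Python here only to make the example executable; D ranges are not sets, so treat this as an analogy):

```python
# Inclusive ranges 0..4 and 2..6, modelled as integer sequences.
r1 = list(range(0, 5))             # 0 1 2 3 4
r2 = list(range(2, 7))             # 2 3 4 5 6

concat = r1 + r2                   # '~' analogue: the overlap appears twice
union = sorted(set(r1) | set(r2))  # '&' (join) analogue: each value once

print("".join(map(str, concat)))   # 0123423456
print("".join(map(str, union)))    # 0123456
```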
>- One of the things on the table for D is to have template matches deduced, so
>the template call syntax (via '!') would only be needed to solve ambiguities
>(like in C++). Should such a feature become a part of D, wouldn't these
>predicate expressions be better suited for templates instead?
>
>> template foo(A : int in [3,4,5]){
>>     foo(A a){ /* specialized foo */ }
>> }
>> template foo(A : int){
>>     foo(A a){ /* generalized foo */ }
>> }
>>
>> void main(){
>> foo(4); // matches specialized template for foo
>> foo(1); // matches generalized template for foo
>> foo(new Object()); // error: no match for foo(Object);
>> }
>
>Not to turn your spec completely on its head, but it would mesh well with the
>current separation of responsibility between templates and functions. Templates
>already do some crude constant matching.
Yes, this should be possible, but as an addition to the "shortcut syntax" (see
below), not a replacement for it.
>It also makes sure that the compiler
>can supply the appropriate symbols for the linker as this already works with
>explicit template calls.
I don't fully understand this. Could you elaborate, or give an example?
>
>Then again, if predicates were extended to templates (like above), then your
>syntax would simply be a shortcut for function templates.
>
>- EricAnderton at yahoo
The shortcut syntax is, in itself, valuable. It allows easy extension/splitting
of functions without having to touch the syntax of the original function. That
means less risk of error, and less work for your fingers :)
-Nod-

In article <d6n9bf$2v7v$1@digitaldaemon.com>, Matthias Becker says...
>
>>Replies are scarce. How shall I interpret this silence? That the technique is
>>perfect, and requires no further improvement? That the technique is useless, and
>>not worth discussing? That no one really understands what the hell I'm on about?
>You are talking about what is generally known as predicate methods.
>
Nice, now I don't have to invent new names for it all the time :)
Have you got any resources about this, or examples of languages in which it is
implemented?
-Nod-

In article <d6nncg$8ps$1@digitaldaemon.com>, TechnoZeus says...
>
>> <snip>
>
>Nothing positive to add... so simply observing.
>
>Observations so far indicate that it's a viable idea that represents a different way of writing something like...
>
>void main()
>{
> int v;
> /* assign a value to v ... */
> printf("%d",a(v));
>}
>
>
>int a(int v)
>{
> int a_lm5(int v){ /*...*/ }
> int a_m5t5(int v){ /*...*/ }
> int a_g5(int v){ /*...*/ }
> if (v < -5) return a_lm5(v);
> if (v >= -5 && v <= 5) return a_m5t5(v);
> if (v > 5) return a_g5(v);
>
>} //.....
>and whether or not that is of value to a person would depend, I would think, on that person's point of view...
>so if I have misunderstood, or missed something... please let me know.
You have understood things very well, and your example is a nice one too. The
idea is largely about making things clearer and more maintainable. Wouldn't you
agree that the following is easier to read and understand?
void main()
{
    int v;
    /* assign a value to v ... */
    printf("%d",a(v));
}
int a(int v in int.min..-5) { /*...*/ }
int a(int v in -5..5) { /*...*/ }
int a(int v in 5..int.max) { /*...*/ }
>In the mean time, this reason and the lack of sufficient time to keep up as well as I would like to are why I have been silent on this subject.
>
Sorry, I'm too impatient I guess :)
>I would like to say though, that I have noticed your use of structures like 0 .. 9 such as...
> if (v in -5 .. 5) return a_m5t5(v);
>...and I have yet to find documentation on this type of structure in the D language, although it would be nice to see.
The addition of ranges like that is something of a "side-proposal". It's not
strictly necessary for value-based overloading, but it makes things more clear.
By using it in the drafts, I'm doing a bit of advertisement for that :)
>In other languages, I have seen the equivalent as [0 .. 9], which in D represents (to my knowledge) an array slice,
>although I see no reason why it couldn't also be used in something like...
> if (v in [-5 .. 5]) return a_m5t5(v);
>...except that in D array slices, [-5 .. 5] means from -5 up to but not including 5, while the range [-5 .. 5] as I have seen elsewhere
>would mean from and including -5, to and including 5.
>
>TZ
>
>
In my proposal, ranges are always inclusive at both ends. I feel that adding
support for exclusive ranges would be more trouble than it's worth in terms of
syntactical clutter vs. usefulness.
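To make TZ's distinction concrete: Python slices, like D's array slices, exclude the upper bound, so an inclusive range needs a +1 at the top. This is just an illustration, not proposed semantics:

```python
xs = list(range(10))     # 0 1 2 3 4 5 6 7 8 9

# Slice semantics (D-style): upper end excluded.
print(xs[2:6])           # [2, 3, 4, 5]

# Proposed range semantics: 2..6 inclusive at both ends.
print(xs[2:6 + 1])       # [2, 3, 4, 5, 6]
```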
-Nod-

In article <d6o7sg$kdh$1@digitaldaemon.com>, Burton Radons says...
>
>Nod wrote:
>
>> Replies are scarce. How shall I interpret this silence? That the technique is
>> perfect, and requires no further improvement? That the technique is useless, and
>> not worth discussing? That no one really understands what the hell I'm on about?
>
>All right, you asked for it.
>
Sure did. *provocative grin*
>At the core, writing one function this way:
>
> void foo (int a)
> {
> if (a < 5)
> ...
> else if (a < 10)
> ...
> }
>
>And another function this way:
>
> void foo (int a in int.min .. 4) ...
> void foo (int a in 5 .. 9) ...
>
>Doesn't matter, they're conceptually equivalent. However, there are
>four extra factors in the second example. There's the repeating
>declaration (names, return type, any additional parameters), the need to
>understand exactly how the first example works, the ability to separate
>the declarations so that they're no longer related, and the fact that
>the range doesn't behave like a regular D range. So it has higher
>complexity, not lower: real control flow is more concise, more flexible,
>and easier to deal with.
>
>This leads me to believe that this would simply cause confusion because
>it makes the feature hard to use properly.
You are missing the extra factors in the upper block of code. There's the extra
level of indentation, the dependency on block order, the inability to override
blocks in specific scopes, and the denseness of code that results from cramming
all the cases into one function.
So it turns out about equal. But if you also take into account that the
repeating declarations - and more - are needed in your example as well once
you branch out into separate functions, that you equally need to understand
all of the code in both examples, that separating the declarations can be
equally beneficial, and that the range issue is just a pet peeve of mine which
can be resolved... then my version wins <g>.
But scorn as I might, I agree with you. It does not really matter which one you
choose. What matters is that you have a choice. Because sometimes one of them is
the better choice.
>How would I document these functions?
You would document them as you would any other overload. Explain that there are
overloads, and what the differences are. As you will probably be writing for an
audience that knows D, they'll understand what you mean.
>How do I share common processing between them?
Try mixins. Or functions. Or just don't use them when there's a lot of common
processing.
>How do I
>override these methods - without looking at the specification to be
>certain I'm doing this right?
>
By actually *learning* the specification?!
>This also screws up function overloading, making it a lot more complex
>and hazardous - particularly inner functions and overrides and whatever
>else you had going on in there. That entirely misses the spirit of D's
>function overloading.
>
If you actually *read* the drafts, you'll see that the feature integrates
perfectly with inner functions, and screws up no old code whatsoever. It does
make the compiler more complex, I agree with that, but not as much as you'd
think. Most features are simple extensions of already existing features. As for
making things more hazardous, that's only true if you misuse it; like anything
else.
And what exactly *is* the spirit of D function overloading, that I'm so totally
missing? I thought D was a practical language for practical programmers, one
that aims to make programming less error-prone. By modularizing code, this
feature fits that spirit.
>Now, templates COULD have benefited from this, but that was before
>static if. Now that there is control flow in the templates, there's no
>reason to have it in specialization. I won't speak to your mixins and
>overrides and the other dozen things you have going on here because then
>we'll be here for weeks.
>
You have been here for years already. What's the rush?
>Where's the dire need - or ANY need? This is all just syntax sugar, and
>in this case syntax sugar that increases complexity! I don't see any
>reason at all why I would spend a few weeks implementing this feature; I
>would be in the exact same place as before, only with a much more
>complex language.
It is syntactic sugar. Like being able to overload functions based on type is
syntactic sugar. It is not required, but it makes life easier.
You are wrong when you make the generalized statement that it makes code more
complex. There are cases in which using this feature leads to clearer code and
increased maintainability. A few of those cases are described in the drafts, but
there are certainly more. In my opinion, being able to choose the best method
for the job in these cases gives this feature all the mojo it needs.
-Nod-