I have a question about this and some pieces of code in the standard
library, notably std.algorithm: some of the templated functions use
template constraints even when no template overloading is taking place.
Wouldn't some static asserts help print more accurate messages when
these functions are misused?
Nicolas

The intent is to allow other code to define functions such as "map"
or "sort". Generally, any generic function should exclude via a
constraint the inputs it can't work on. That way no generic function
chews off more than it can bite.
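[Not part of the original thread: a small D sketch of the trade-off Nicolas asks about, contrasting a constraint with a static assert; the function names here are made up.]

```d
import std.range.primitives : isInputRange;

// Constraint version: misuse produces a terse "does not match" error,
// but the name stays open for other overloads and other modules.
auto firstElem(R)(R r) if (isInputRange!R) { return r.front; }

// static assert version: a clearer, custom message, but the template
// always "matches", so it would clash with any other overload of the name.
auto firstElemAsserting(R)(R r)
{
    static assert(isInputRange!R,
        R.stringof ~ " is not an input range");
    return r.front;
}
```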

Redirected from another thread.
Having written a few of these functions with template constraints, I
wondered if there was ever any discussion/agreement on reducing verbosity
when specializing template constraints?
For instance, if you want two overloads of a template, one which accepts
types A and one which accepts types B where B implicitly converts to A
(i.e. a specialization), you need to explicitly reject B's when defining
the overload for A's. For example:
void foo(R)(R r) if(isRandomAccessRange!R) {...}
void foo(R)(R r) if(isInputRange!R && !isRandomAccessRange!R) {...}
It seems redundant to specify !isRandomAccessRange!R in the second
overload, but the compiler will complain otherwise. What sucks about this
is that the 'definition' of the first overload is partially in the second.
That is, you don't really need that clause in the second overload unless
you define the first one. Not only that, but it makes the template
constraints grow in complexity quite quickly. Just look at a sample
function in std.array that handles 'the default' case:
void popFront(A)(ref A a) if(!isNarrowString!A && isDynamicArray!A &&
isMutable!A && !is(A == void[]))
Any idea how this can be 'solved' or do we need to continue doing things
like this? My naive instinct is to use the declaration order to determine
a match (first one to match wins), but that kind of goes against other
overloads in D.
-Steve


I thought of a number of possibilities, neither was good enough. I
decided this is a small annoyance I'll need to live with.
Andrei


Should we think about making constrained templates more specialized than
unconstrained ones? For example, the use case where I need one or more
constrained templates and an unconstrained catch-the-rest template pops
up quite often. Consider the current behavior:
void foo(T : int)(T x)
{
}
void foo(T)(T x) // catches types not implicitly castable to int
{
}
foo(1); // ok
But:
void foo(T)(T x) if (is(T : int))
{
}
void foo(T)(T x)
{
}
foo(1);
Error: template test.foo(T) if (is(T : int)) matches more than one
template declaration, test.d(7): foo(T) if (is(T : int)) and
test.d(11): foo(T)
Also note that the compiler doesn't detect the ambiguity if there are
constraints in the template parameter list:
void foo(T : int)(T x)
{
}
void foo(T)(T x) if (is(T == int))
{
}
foo(1); // first template is instantiated
Essentially, interactions between template parameter constraints and
if-constraints are not specified. What do you think?


Another issue: it is rather bug prone: if you introduce another
specialised case, then it must be added to the series of negative
constraints for the default case. And, in case this new specialisation is
(partially) orthogonal, it must also be added as a negative constraint to
some or all other specialisations...
It seems to me D already has all the syntactic & semantic elements to
cleanly express template constraints. I do not understand why we do not
reuse interfaces. Interfaces, I guess, allow clearly defining what is now
implemented by ad hoc template check functions like
isRandomAccessRange(T), and/or with esoteric uses of is().
Let us say we also reuse ':' as the constraint-check operator.
void f(T) () if (T : X)
would mean:
* if X is a type
~ T 'is' X
~ or T inherits X
* if X is an interface
~ T explicitly implements X (its definition starts with "struct/type T : X")
~ T factually implements X
interface Writable { string toString (); }
...
string listText (T) (T[] elements, string sep, string lDelim, string rDelim)
if (T : Writable)
{ // returns eg "(e1 e2 e3...)" }
The notion of "factual implementation" is analogous to duck typing,
except it is a static, compile-time fact. This does not solve the
above-mentioned issue, but allows expressing it more clearly:
void foo(R)(R r) if (R : InputRange && R !: RandomAccessRange) {...}
To fully solve the issue, we should have a way to express
"non-implementation" in interfaces, for instance by reusing constraints(!):
interface StrictInputRange (T) : InputRange
if (T !: RandomAccessRange && T !: ForwardRange ...) {}
Anyway, the result is:
void foo(R)(R r) if (R : StrictInputRange) {...}
Denis
--
vit esse estrany ☣
spir.wikidot.com

Any idea how this can be 'solved' or do we need to continue doing things
like this? My naive instinct is to use the declaration order to
determine a match (first one to match wins), but that kind of goes
against other overloads in D.

One big plus of the current solution is that everything you need for that
specialization lies in the signature.
I can't see another approach that scales better at this, if scaling for
constraints is something important.
In:
void foo(R)(R r) if(isRandomAccessRange!R) {...}
void foo(R)(R r) if(isInputRange!R && !isRandomAccessRange!R) {...}
we can deduce that the pair together is equivalent to:
void foo(R)(R r) if(isInputRange!R) {...}
For single/two constraints it isn't hard; when things get ugly,
determining what means what is not quite easy as far as I can see.
Just consider if your first specialization had two constraints.
--
Using Opera's revolutionary email client: http://www.opera.com/mail/


On the contrary, I think it scales very poorly from a function-writer's
point of view.
Let's say I wanted to add a version that implements a specialization for
forward ranges: I now have to modify the constraint on the one that
handles input ranges. This is the opposite of how derived classes or
specialized overloads work, where I just define the specialization and,
if it doesn't match, it falls back to the default.
Imagine now if I wanted to define a foo that worked only on my specific
range; I'd have to go back and modify the constraints of all the other
functions. What if I don't have control over that module?
-Steve
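[Not part of the original thread: a concrete D sketch of the maintenance burden Steve describes, extending the thread's running `foo` example with a hypothetical forward-range specialization.]

```d
import std.range.primitives;

void foo(R)(R r) if (isRandomAccessRange!R) { /* fast path */ }

// New specialization:
void foo(R)(R r) if (isForwardRange!R && !isRandomAccessRange!R) { /* new */ }

// The pre-existing fallback had to change from
//   if (isInputRange!R && !isRandomAccessRange!R)
// to also exclude forward ranges: every forward range is an input
// range, so without the edit both overloads would match and
// instantiation would be ambiguous.
void foo(R)(R r) if (isInputRange!R && !isForwardRange!R) { /* fallback */ }
```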


It scales poorly on artificial examples. It scales well on real-world
examples, because those usually have disjunctive constraints.
Andrei

Imagine now if I wanted to define a foo that worked only on my specific
range, I now have to go back and modify the constraints of all the other
functions. What if I don't have control over that module?

It scales poorly on artificial examples. It scales well on real-world
examples, because those usually have disjunctive constraints.

Not sure of this. Look at the thread about "write, toString, formatValue
& range interface". It seems (but I may be wrong) that input range cases
were added to the (big) set of formatValue templates. If my reasoning is
correct, this addition was not properly done: precisely, the negative
constraints for mutual exclusion are missing, which leads to 2-3 bugs.
Denis

I'll cautiously say "looks okay", but in terms of allowing us to do
great things, it is way below many other things.
Consider for example the annoying limitation with the eponymous trick:
you can't define any other symbols. That is unnecessary and causes a lot
of code and name bloat. I'd much prefer that issue were fixed instead of
the above.
Andrei
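[Not part of the original thread: a minimal sketch of the eponymous-trick limitation Andrei mentions, as it stood at the time; `isSmall` is a made-up name.]

```d
// The eponymous trick: when a template has a member named like the
// template itself, callers can write isSmall!T instead of
// isSmall!T.isSmall.
template isSmall(T)
{
    enum isSmall = T.sizeof <= 4;
    // enum limit = 4;  // at the time, no other symbols were allowed
    //                  // alongside the eponymous member, forcing
    //                  // helper templates and name bloat
}

static assert(isSmall!int);
static assert(!isSmall!long);
```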

It is exactly your proposal (first one to match wins), with uglier syntax
:D
Would it even fulfill your requirements? For example "What if I don't have
control over that module?"
This one would make it impossible.

It is exactly your proposal (first one to match wins), with uglier
syntax :D

Not exactly. It fits within the syntax of D (if-else), and order of
evaluation is explicit, whereas one might expect with my original proposal
that order does not matter. There is no disputing which template should
be instantiated. But yes, it is the same premise.
And I don't agree the syntax is uglier. Maintenance would be easier (only
one signature need be modified), and you'd only need to document in one spot.

Would it even fulfill your requirements? For example "What if I don't
have control over that module?"
This one would make it impossible.

It doesn't fulfill that requirement, no. But it gets us a less verbose
definition where you do control the module. Solving some of the
requirements without solving them all is allowed.
Actually, now that I think about it, that kind of fits D as well.
Overload resolution is not done across modules.
-Steve
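[Not part of the original thread: a sketch of the if-else style Steve alludes to, expressed with D's existing static if inside a single constrained template rather than any new syntax.]

```d
import std.range.primitives : isInputRange, isRandomAccessRange;

void foo(R)(R r) if (isInputRange!R)  // one signature, one doc comment
{
    static if (isRandomAccessRange!R)
    {
        // specialized path; checked first, so no negative clause needed
    }
    else
    {
        // generic input-range fallback
    }
}
```

The evaluation order is explicit in the code, which matches Steve's point about it fitting D's existing if-else syntax, and only one signature needs maintaining.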

I'll cautiously say "looks okay", but in terms of allowing us to do
great things, it is way below many other things.
Consider for example the annoying limitation with the eponymous trick:
you can't define any other symbols. That is unnecessary and causes a lot
of code and name bloat. I'd much prefer that issue were fixed instead of
the above.

I agree. My original question was just "have we thought about this", not
"this must be solved immediately!"
If this were implemented, it would be backwards-compatible anyway, so
it's not pressing.
-Steve

Any idea how this can be 'solved' or do we need to continue doing things
like this? My naive instinct is to use the declaration order to
determine a match (first one to match wins), but that kind of goes
against other overloads in D.

Yeah, I'd be extremely reluctant to change from a best match to a first match.
One reason is that first match cannot deal well with partial ordering, as it by
definition requires a total ordering.
Another reason is D's attempt to get away from C/C++'s declaration ordering
dependencies.
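[Not part of the original thread: an example of the partial-ordering problem, with hypothetical constraints. With first-match, declaration order would silently pick a winner for a type satisfying both constraints, even though neither constraint subsumes the other.]

```d
void bar(T)(T x) if (is(T : long))   { /* A */ }
void bar(T)(T x) if (T.sizeof == 4)  { /* B */ }

// int satisfies both constraints: it converts to long and is 4 bytes.
// Best-match rejects bar(1) as matching more than one template
// declaration, surfacing the conflict; first-match would hide it.
```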