Haskell is full of function calls, so the Haskell designers have used/invented
several different ways to avoid some parentheses in the code.
From what I've seen, if you remove parentheses well, in the right places,
the resulting code is less noisy, more readable, and has fewer chances to
contain a bug (because syntax noise is a good place for bugs to hide).
One of the ways used to remove some parentheses is a standard syntax that's
optionally usable on any dyadic function (a function with two arguments):
sum a b = a + b
sum 1 5 == 1 `sum` 5
The `name` syntax is just a different way to call a regular function with two
arguments.
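As an aside, the same infix reading can be faked even in languages without the feature. Here is a hedged Python sketch (not Haskell or D; the `Infix` wrapper and `add` function are made up for illustration) that abuses `|` overloading to get something close to Haskell's backtick form:

```python
# Simulating `name`-style infix calls in Python by overloading | on a
# small wrapper class: 1 |s| 5 reads roughly like Haskell's 1 `sum` 5.
class Infix:
    def __init__(self, fn):
        self.fn = fn

    def __ror__(self, left):
        # left |op  -> partially apply the function to the left operand
        return Infix(lambda right: self.fn(left, right))

    def __or__(self, right):
        # |op| right -> perform the actual call
        return self.fn(right)

def add(a, b):
    return a + b

s = Infix(add)
print(1 |s| 5)   # 6
```

This is only a curiosity, but it shows the infix form is pure surface syntax over an ordinary two-argument function call.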
In Haskell there is also a way to assign an arbitrary precedence and
associativity to such infix operators, but some Haskell programmers argue that
too much syntax sugar gives troubles (
http://www.haskell.org/haskellwiki/Use_of_infix_operators ).
In D the backtick has a different meaning, and even if D used a
different syntax, like just a $ prefix, I don't know how much good this syntax
would be for D:
int sum(int x, int y) { return x + y; }
int s = sum(1, sum(5, sum(6, sum(10, 30))));
This is equal to (the associativity of $ is fixed like this):
int s = 1 $sum 5 $sum 6 $sum 10 $sum 30;
So I think it's not worth adding to D.
Bye,
bearophile

I agree.
It would be nice in some situations (like cross and dot products for
vectors), but otherwise it's unnecessary and just adds confusion in
exchange for a tiny bit of convenience in a handful of scenarios.

On Sun, Mar 6, 2011 at 12:24 PM, Peter Alexander wrote:
On 6/03/11 4:22 PM, bearophile wrote:
So I think it's not worth adding to D.
But if you don't agree... talk.
Bye,
bearophile
I agree.
It would be nice in some situations (like cross and dot products for
vectors), but otherwise it's unnecessary and just adds confusion in
exchange for a tiny bit of convenience in a handful of scenarios.
With C++, for example, Eigen uses expression templates. How does one do
expression templates in D? Could someone rewrite
http://en.wikipedia.org/wiki/Expression_templates in D?

How does one do expression templates in D?

Same basic idea, but it should be a lot saner because D metaprogramming
isn't a Turing tar pit like C++'s. The return type of your function
is a template encoding the types of the inputs. So you could define a +
operator that has a return type of
opBinary(RHS,"+",LHS)
or similar. The types of RHS and LHS could be other opBinary
instantiations or a leaf type. Sooner or later you've got something that
looks like a parse tree, and you can process that at compile time using
CTFE and then mixin the result. Heck, you might even be able to do
something crazy like flatten the tree and re-parse it, and thus screw with
the operator precedence, or run your own specialized compiler backend and
output inline asm.
Trouble is, each time I feel motivated to give this a go I run into
compiler bugs. Hopefully things will get better soon.
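The tree-building idea described above can be sketched in Python, where operator overloading happens at run time rather than compile time. This is only an analogy for the D/C++ technique (all class names here are invented for the example); in D the equivalent work would be done with opBinary templates and CTFE:

```python
# Expression-tree sketch: overloading + builds a parse tree instead of
# computing immediately, so the whole expression can be processed later.
class Expr:
    def __add__(self, other):
        return BinOp("+", self, wrap(other))

class Leaf(Expr):
    def __init__(self, value):
        self.value = value
    def eval(self):
        return self.value

class BinOp(Expr):
    def __init__(self, op, lhs, rhs):
        self.op, self.lhs, self.rhs = op, lhs, rhs
    def eval(self):
        # Walk the tree; only + is supported in this toy version.
        if self.op == "+":
            return self.lhs.eval() + self.rhs.eval()
        raise ValueError(self.op)

def wrap(x):
    return x if isinstance(x, Expr) else Leaf(x)

tree = Leaf(1) + 5 + 6   # BinOp("+", BinOp("+", Leaf(1), Leaf(5)), Leaf(6))
print(tree.eval())       # 12
```

Because the tree is a plain data structure, a library could inspect, flatten, or rewrite it before evaluating, which is exactly the leverage expression templates give in C++ and D.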

int s = sum(1, sum(5, sum(6, sum(10, 30))));
int s = 1 $sum 5 $sum 6 $sum 10 $sum 30;

If we had UFCS this could be written as,
int s = 1.sum(5.sum(6.sum(10.sum(30))));
or, knowing sum is associative,
int s = 1.sum(5).sum(6).sum(10).sum(30);
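Python has no UFCS, but a tiny wrapper class (invented here purely for illustration) shows the chained reading the second D form would give, and confirms the arithmetic:

```python
# Chained method calls mimicking UFCS-style 1.sum(5).sum(6).sum(10).sum(30).
class Val:
    def __init__(self, n):
        self.n = n
    def sum(self, other):
        # Return a new wrapper so calls can be chained left to right.
        return Val(self.n + other)

s = Val(1).sum(5).sum(6).sum(10).sum(30)
print(s.n)   # 52
```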

How is it a hack? I can understand there being implementation problems
that can make it undesirable to add, but calling it a hack?
It's one of the most elegant syntax proposals I've ever seen! It
unifies objects and other functions in syntax. It improves
encapsulation by giving full support to non-member functions. It
improves modularity for the same reason.
With UFCS, there'd be no desire to add useless members due to
object syntax. Everything is equal - easy extensibility, better
protection, cleaner interfaces.
It's the opposite of a hack.


It is _not_ a hack. Whether it's desirable or not is another matter, but it is
_not_ a hack. And really, the term hack is very imprecise and often subjective.
It's the sort of accusation that pretty much kills any legitimate debate. It's
generally unsupportable and subjective, so it adds nothing to the debate, but
it
has such a stink about it that it tends to make people avoid whatever was
declared to be a hack.

I set out to write a post with pretty much the same message. During our
long discussions about D2 at the Kahili coffee shop, one of us would
occasionally affix that label to one idea or another (often in an
attempt to make "I don't like it" seem stronger). It was very jarring.
Andrei

Sure, you still have lots of parens with UFCS, but you _do_ get the argument
order that Bearophile was looking for. And while I've generally found the idea
of using UFCS with primitives to be pointless, this is actually an example
where
it's _useful_ with primitives.
No, UFCS is not a hack. Its implementation has enough problems due to
ambiguities and the like that it may never make it into the language even if
pretty much everyone would _like_ it in the language, but it's not a hack.
- Jonathan M Davis
P.S. Entertainingly enough, www.merriam-webster.com's definition for hack
doesn't
make it look bad at all:
"a usually creative solution to a computer hardware or programming problem or
limitation"
It makes me wonder if the usage of the word (and thus its common meaning) has
shifted over time or if the poor non-techy, dictionary folk just plain got it
wrong. The hacker's dictionary definition makes it look more like the typical
usage, but even it is a bit of a mixed bag in that respect:
1. /n./ Originally, a quick job that produces what is needed, but not well.
2. /n./ An incredibly good, and perhaps very time-consuming, piece of work that
produces exactly what is needed.


LOL. And _what_ benefit would banishing classic operator overloading have? A
function named add could be abused in _exactly_ the same ways that + can be.

You could implement operator overloading without any special
cases/support in the language, like Scala does. In Scala
3 + 4
is syntax sugar for:
3.+(4)
It's possible because of the following three reasons:
* Everything is an object
* Method names can contain characters other than A-Za-z_
* The infix syntax discussed in this thread
Implementing operator overloading like this also allows you to add new
operators, not just overload existing ones.
--
/Jacob Carlborg
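Python makes a similar rewrite for its built-in operators: `a + b` dispatches to a method, so user types overload + by defining it. The difference from Scala is that Python's operator set is fixed; you cannot invent new symbols. The `Frac` class below is made up for the example:

```python
# Python's analogue of Scala's rewrite: 3 + 4 dispatches to a method.
print((3).__add__(4))   # 7, same result as 3 + 4

class Frac:
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __add__(self, other):
        # a/b + c/d = (a*d + c*b) / (b*d), without reducing the fraction
        return Frac(self.num * other.den + other.num * self.den,
                    self.den * other.den)

r = Frac(1, 2) + Frac(1, 3)
print(r.num, r.den)     # 5 6
```

So the "operators are just method calls" design exists outside Scala too; Scala's extra step is allowing arbitrary symbolic method names plus general infix call syntax.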


We could give a standard name to each character in an allowed class, so
that
x !%# y
maps to
x.opBangPercentHash(y);
;-)
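The joked-about mapping is mechanical enough to write down. A hedged sketch (the per-character names below are my own choices, not from any D proposal or spec):

```python
# Toy name mangler: derive a method name like opBangPercentHash
# from an operator spelling such as "!%#".
NAMES = {"!": "Bang", "%": "Percent", "#": "Hash", "+": "Plus",
         "-": "Minus", "*": "Star", "/": "Slash", "<": "Less",
         ">": "Greater", "=": "Eq", "&": "Amp", "|": "Pipe", "^": "Caret"}

def mangle(op):
    # Concatenate the name of each character after an "op" prefix.
    return "op" + "".join(NAMES[c] for c in op)

print(mangle("!%#"))   # opBangPercentHash
```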

Another solution is to specify operators in method defs:
X opBangPercentHash as "!%#" (X y) {...}
Or even use them directly there:
X !%# (X y) {...}
possibly with an annotation to warn the parser:
operator X !%# (X y) {...}
In any case, /this/ is not a big deal to manage in symbol tables, since
an operator is just a string like (any other) name. The big deal is to
map such features to builtin types, I guess (which are not object types).

The big deal is it makes parsing more difficult (precedence and
associativity need to be determined) with no significant benefit.


How about precedence?
Andrei

It's basically determined by the first character in the name of the
method. Associativity is determined by the last character in the name:
if it ends with a colon it's right associative, otherwise left.
Have a look at http://www.scala-lang.org/docu/files/ScalaReference.pdf,
section 6.12.3, Infix Operations.
--
/Jacob Carlborg
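That rule can be sketched as a simple lookup. The class ordering below is my reading of the Scala reference (lowest to highest precedence) and may not be exact, so treat it as an approximation rather than the spec:

```python
# Scala-style rule: precedence from the operator's first character,
# associativity from its last (trailing ':' means right-associative).
# Ordering is an approximation of the Scala reference, lowest first.
CLASSES = ["letter", "|", "^", "&", "=!", "<>", ":", "+-", "*/%", "other"]

def precedence(op):
    c = op[0]
    if c.isalpha() or c == "_":
        return 0                      # identifier-like names bind loosest
    for i, chars in enumerate(CLASSES[1:-1], start=1):
        if c in chars:
            return i
    return len(CLASSES) - 1           # any other special char: tightest

def right_associative(op):
    return op.endswith(":")

print(precedence("*") > precedence("+"))   # True: * binds tighter than +
print(right_associative("::"))             # True: list cons is right-assoc
```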



LOL. And _what_ benefit would banishing classic operator overloading have? A
function named add could be abused in _exactly_ the same ways that + can be.
The main benefit that infix syntax would provide would be if you had a variety of
mathematical functions beyond what the built-in operators give you, and you want
to be able to treat them the same way. Whether classic operator overloading
exists or not is irrelevant.
Regardless, I don't think that adding infix syntax to the language is worth it. D
is already pretty complicated and _definitely_ more complicated than most
languages out there. One of the major complaints about C++ is how complicated it
is. We don't want to be adding extra complexity to the language without the
benefit outweighing that complexity, and I don't think that it's at all clear
that it does in this case. And as KennyTM~ pointed out, if UFCS is ever
implemented, it gives you most of the benefit of this anyway, and there are
already a lot of people around here interested in UFCS. So, I find it _far_ more
likely that UFCS gets implemented than an infix function call syntax.
- Jonathan M Davis

I've worked on a financial system written in Java which used BigDecimal
extensively. And, of course, I LOLed at that. But after having spent time with
the code, a few benefits surfaced. It was clear which function was
user-implemented. Displaying the docs by mousing over was nice too (outside the
IDE, grepping 'add' is easier than '+'). And above all, no abuse whatsoever. It
all didn't outweigh the loss in terseness of syntax but did make up for some of it.
I'm bringing up this case because it's extremely in favour of operator
overloading. Java is not big on number crunching and BigDecimal is one of the few
spots on the vast programming landscape where overloaded operators make sense.
And yet, the final verdict was: it doesn't suck.

A function named add could be abused in _exactly_ the same ways that + can be.

There's far less incentive for abuse as there's no illusory mathematical
elegance to pursue.

The main benefit that infix syntax would provide would be if you had a variety of
mathematical functions beyond what the built-in operators give you, and you want
to be able to treat them the same way. Whether classic operator overloading
exists or not is irrelevant.

That's mixing vect1 + vect2 with vect1 `dot` vect2. I'd rather see them treated
the same way.

Regardless, I don't think that adding infix syntax to the language is worth it. D
is already pretty complicated and _definitely_ more complicated than most
languages out there. One of the major complaints about C++ is how complicated it
is. We don't want to be adding extra complexity to the language without the
benefit outweighing that complexity, and I don't think that it's at all clear
that it does in this case.

I agree. Hence the idea of trading operator overloading for infixing. The added
complexity is zero, if not less.

And as KennyTM~ pointed out, if UFCS is ever implemented, it gives you most of
the benefit of this anyway, and there are already a lot of people around here
interested in UFCS. So, I find it _far_ more likely that UFCS gets implemented
than an infix function call syntax.