Arrow notation, etc.

So when I read the "Syntactic Sugar for Arrows" proposal, my initial
reaction was "Wow, that's a little complicated. It doesn't look like
syntactic sugar to me." (Err, no offense, I hope.) This contrasts
with the do-notation, which does look like syntactic sugar: you can
rewrite any do expression in terms of the basic combinators with a
bounded amount of pain.[1] Somehow with Arrows the point-free syntax
you are forced into is extraordinarily unwieldy, and the arrows
"syntactic sugar" is much handier. I presume people have tried and
failed to come up with a more efficient set of combinators? Any
thoughts as to why the arrow combinators need to be so unwieldy?
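To make the contrast concrete, here is a small sketch (using the
Control.Arrow names; the helper addBoth is my own, just for
illustration):

```haskell
import Control.Arrow

-- do-notation desugars mechanically into (>>=) and return:
sugared, desugared :: Maybe Int
sugared = do
  x <- Just 1
  y <- Just 2
  return (x + y)
desugared = Just 1 >>= \x -> Just 2 >>= \y -> return (x + y)

-- The point-free arrow version of "feed the input to both f and g,
-- then add the results" already needs three combinators, and the
-- intermediate values can no longer be named:
addBoth :: Arrow a => a b Int -> a b Int -> a b Int
addBoth f g = (f &&& g) >>> arr (uncurry (+))

main :: IO ()
main = print (sugared, desugared, addBoth (arr (+ 1)) (arr (* 2)) 3)
-- prints (Just 3,Just 3,10)
```

The do version rewrites locally, binder by binder; the arrow version
has to thread the whole environment through pairs by hand, which is
exactly what makes larger examples explode.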
A possibly related question: are there any general results on the
verbosity of the lambda calculus compared with combinators?
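I don't know of a definitive answer either, but the classic
bracket-abstraction translation from lambda terms to S/K/I shows the
kind of blowup involved: each abstracted variable can roughly triple
the term, so nested lambdas grow quickly (schemes with extra
combinators, e.g. Turner's, do better). A toy translator, where the
datatype and function names are mine:

```haskell
-- Lambda terms plus the S, K, I combinators.
data Term = Var String | App Term Term | Lam String Term | S | K | I
  deriving Show

-- Bracket abstraction: abstract x m builds a combinator term that,
-- applied to an argument, behaves like \x -> m.
abstract :: String -> Term -> Term
abstract x (Var y)
  | x == y    = I
  | otherwise = App K (Var y)
abstract x (App m n) = App (App S (abstract x m)) (abstract x n)
abstract _ m = App K m  -- S, K, I contain no free variables

-- Compile away all lambdas, innermost first.
compile :: Term -> Term
compile (Lam x m) = abstract x (compile m)
compile (App m n) = App (compile m) (compile n)
compile t         = t

-- Node count, to measure the blowup.
size :: Term -> Int
size (App m n) = size m + size n
size (Lam _ m) = 1 + size m
size _         = 1

main :: IO ()
main = do
  let t = Lam "x" (Lam "y" (App (Var "y") (Var "x")))  -- \x y -> y x
  print (size t, size (compile t))
-- prints (4,10)
```

Even this two-binder term more than doubles in size; the growth
compounds with each additional enclosing lambda.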
Incidentally, it seems to me that this is one case where a Lisp-like
macro facility might be useful. In Haskell it is impossible to
manipulate variable bindings syntactically, whereas presumably you
can do this with a good Lisp macro system.
Best,
Dylan Thurston
Footnotes:
[1] A quick look at the Haskell report reveals that named fields,
pattern matching, and deriving declarations are not "syntactic sugar"
in this sense. Of these, pattern matching is fundamental, named
fields have clear semantics, and deriving declarations are more iffy
[though very handy]. Did I miss any?