Howdy again D folks.
This is my second message on what I think could be improved about C++, and what
I'm hoping D either already supports, or could in the future (although for this
one I'm pretty sure D doesn't have this currently).
This comment is about template metaprogramming and metaprogramming in general.
It's a little more pie-in-the-sky than my last comment.
Projects like Blitz++ and Boost in the C++ world have proven that
metaprogramming techniques can be very useful. The general idea is to be able
to do some general purpose computation on types or constants but to have it run
at compile time rather than run time. The C++ template mechanism, it turns out,
can be made to do this. Popular parlor tricks include things like making your
templates compute integer factorials at compile time, but there are plenty of
more useful things to do with it too. :-) Anyway, I presume many folks are
familiar with the idea. If not, google for "template metaprogramming". Gobs of
hits.
The problem is that the C++ template mechanism just wasn't designed as a general
purpose compile-time computing language. It's the same old C vs C++ argument
all over again. Yes you _can_ write programs that are object oriented in C, but
C just doesn't do much to help you. It doesn't facilitate the paradigm.
Similarly you _can_ do metaprogramming with C++ templates, but the results are
probably the least readable, least maintainable gobbledy-gook I've ever had the
misfortune to see.
What's needed is actual language support for the paradigm. I'm not sure what
that would look like, but a fresh C++-like language such as D seems to have great
potential for offering such a solution. Basically the idea is you'd be able to
write code to execute at compile time that looks more or less like regular code,
but in which types are first class values and can be passed around and returned,
etc.
Here's one really simple example of one common use of metaprogramming to give a
feel for what I'm talking about. The idea is you want to be able to say
something like integer<24> to get the smallest native integral type that can
hold 24 bits. A C++ template metaprogramming solution can be seen here:
http://www.eptacom.net/pubblicazioni/pub_eng/paramint.html
(Scroll down to the bottom to see two different versions of the code).
Yikes! All that is, really, is just an if-else, but dressed up in C++ template
metaprogramming it takes about a page of code! My thought is that if the
language actually had first-class support for metaprogramming, then you could
just do something like write a short metafunction that returns a type:
metafun type integer(int numbits)
{
    // sizeof gives bytes, so compare against bits
    if (numbits <= 8*sizeof(char))  return char;
    if (numbits <= 8*sizeof(short)) return short;
    if (numbits <= 8*sizeof(int))   return int;
    if (numbits <= 8*sizeof(long))  return long;
    if (numbits <= 8*sizeof(cent))  return cent;
    metathrow "no native integral type holds that many bits";
}
Or something like that. The logic and intent of the above code is infinitely
more clear to the reader than the C++ metaprogramming code linked to above.
Obviously there are tons of issues to be worked out with making something like
this work (syntax, what meta types are needed, compile-time error handling,
etc), but it just seems to me that the past 7 or 8 years or so of work on C++
template metaprogramming has proven that this stuff is useful, but it has also
proven that to do anything useful with it you have to write pages of
"write-only" code. This stuff is so ugly that it sends people running to perl!
:-P Yes, it's that bad, folks. :-)
Thoughts? Any chances something like this could ever come to D?
Bill Baxter


This has been discussed, a little, on here before. I think Walter said he's had
email discussions and sees some promise in this direction or thereabouts, but
I'm not positive.
But meta-programming is something that all modern optimizing compilers do to a
*degree*. They do it whenever you write code that can be unrolled and
precomputed.
:int foo() {
:    int x = 5;
:    for (int j = 0; j < 10; j++) {
:        x = x + j;
:    }
:    return x;
:}
This could be a compiled loop.... Or, the compiler can unroll the loop and do
ALL of it at compile time.
What if there was a "version-like" statement that asked the compiler to do this?
You could do something like:
: compileTime {
:     char[] x = "abcd";
:     int abcd_hash = x[0] + x[1] + x[2] + x[3];
: }
This would be something like the "inline" statement in C++. Can the compiler
really fold that string into an integer? Will it?
The compileTime{} pragma would essentially say "do this at compile time, or
issue a warning if you can't".
This solves the factorial problem:
int factorial(int i) { /* whatever */ }
compileTime {
    int x = factorial(12);
}
compileTime{} would mean the compiler is expected to unroll loops, inline every
function, and basically run the code at compile time. If the above "string"
example did not work (i.e. it used a user-provided string), then the compileTime
statement fails with an error or warning.
If reflection is added to the D language, then you could provide the ability to
iterate over methods and fields, or build classes. Combine reflection and
compileTime, and you can do template meta tasks.
Kevin

If reflection is added to the D language, then you could provide the ability to
iterate over methods and fields, or build classes. Combine reflection and
compileTime, and you can do template meta tasks.

To clarify... What I meant was, you could solve problems in the template
meta-programming domain, but use natural "D" syntax to code it up.
It would be a hint, like inline, but more powerful, pointing the compiler to
places where it could "optimize completely".
Computing calculated constants and unrolling loops is a small step on the road
to the full "programming at compile time" goal.
A more powerful step down the road is the ability to edit classes at compile
time. Mark a bunch of classes as inheriting from "serializable" and write some
generic pickling code for primitives. Then the "serializable" base class needs
the ability to iterate over the fields of its subclasses. Instant XML from D
structs.
(I want the power to write programs that write the rest of themselves. A little
more juice, just a little more and I'll be happy...)
Kevin

(I want the power to write programs that write the rest of themselves. A
little
more juice, just a little more and I'll be happy...)

Ummm...
I'd rather have a debugger with the power to debug
programs that write the rest of themselves :)
Btw: what's stopping anyone from writing such programs now?
Derek did it; it's called Build.exe.
Huh?

Right on, Kevin. One thing to keep in mind, though, when you talk about giving
these "compileTime{}" blocks "all the power of D", is that D is designed to be
an efficient, statically-typed, *compiled* language, and to achieve that it has
to sacrifice some of the niceties of more dynamic typing for the desired
performance. I personally think a compile-time language should be as dynamic as
possible. It effectively is going to be an interpreted rather than compiled
thing anyway, so why not make it as dynamic as possible? Like you said, the
dream is basically to write metacode that can use introspection on classes at
compile time to algorithmically generate types, classes, and the code that
eventually gets compiled. If I were going to write an external code generator,
I'd definitely use a scripting language, so why not give the same degree of
flexibility to the built-in meta-language? And give it good text manipulation
operators too, search, replace, munge, regex etc. Just like you'd expect in a
scripting language.
One issue, however, is that that doesn't really mesh well with the concept you
have of "falling back" to run-time computation if the compiler determines it
can't do it at compile time. Though, I don't know if that's such a great idea,
really. I don't know why for sure, but my gut tells me that compile-time
execution and run-time execution aren't quite as similar as inline and
non-inline are to each other and that's going to cause problems. For one thing,
whereas it's always possible to just not inline something, it won't always be
possible to not run some code at compile-time, say if the return value of the
code is in fact a type rather than a number. If you're working with types as
data, then you absolutely can't run the code at runtime.
From an aesthetics point of view, though, I do agree that it would be nice if the
meta-language were as similar to the D language as possible. It would
definitely be nice if it were easy to convert "compile-time" code to run-time
code where the conversion made sense. But still I don't want to give up the
flexibility of a more dynamically-typed language.
I think something like this would also really once and for all make macros
obsolete. I've heard various gurus saying that the combination of inline
functions, typedefs, templates, and const variables makes macros obsolete, but
there's one more case where people have historically used macros that the gurus
seem to have forgotten about: language extensions. Anyone ever seen how
you make a usable object system that works in C? Answer: lots of macros. Anyone
taken a look at the "Aspect Oriented Programming" extensions for C++? Any guess
as to how they implement it? Yep. Bunch o' Macros (among other things). Ever
heard about the trick for making the broken for loop scope in MSVC6 work
properly? Macros again. Unfortunately macros are pretty indiscriminate in how
they go and muck things up, and the lack of scoping etc. causes real messes. Maybe
it's possible to make all those sorts of language-extension types of things
possible but in a cleaner way.
I'm just thinking out loud here, but it's definitely all related. Templates,
metaprogramming, code generation, macros. Wouldn't it be cool to have it all
unified and done right?
Just a small matter of figuring out what "done right" means. :-) Unfortunately
I'm just an armchair compiler writer. :-)
Oh, one final thing to throw into the mix -- special tag keywords. Things like
'synchronized' that change some aspect of how a method runs. Or Qt's "signal"
and "slot" keywords. Qt runs a lex/yacc parser ('moc') on header files to
handle those keywords and generate boilerplate code to implement them. Wouldn't
it be nice if the language supported the ability for users to add such tags?
Another one that would be nice to have is "script", i.e. generate wrapper code
for binding this method with a scripting language. Boost.python for instance
lets you put code _elsewhere_ that says generate wrapper code for this method,
but sometimes it would be nice if you could just flag the method itself right
there where you declare it.
--bb

Hi, Bill,
(not so) Wild idea : to put your sources under httpd and config
it for use lets say PHP processing on D files.
So

compileTime {
int x = factorial();
}

will be just a
int x = <% factorial(N) %>;
This is reliable and can be used now.
You even can assemble such preprocessor
in D using DMDScript for example.
Huh?
I like my own idea :) The ultimate preprocessor.
The way better as it clearly separates compile/runtime namespaces.
And you can debug your meta and programs separately.
And finally close this metaprogramming theme for now and beyond.
Andrew.
"Bill Baxter" <Bill_member pathlink.com> wrote in message
news:d5cf7e$hh1$1 digitaldaemon.com...

Right on, Kevin. One thing to keep in mind, though, when you talk about
giving
these "compileTime{}" blocks "all the power of D", is that D is designed
to be
an efficient, statically-typed, *compiled* language, and to achieve that
it has
to sacrifice some of the niceties of more dynamic typing to achieve the
desired
performance. I personally think a compile-time language should be as
dynamic as
possible. It effectivly is going to be an interpreted rather than
compiled
thing anyway, so why not make it as dynamic as possible? Like you said,
the
dream is basically to write metacode that can use introspection on classes
at
compile time to algorithmically generate types, classes, and the code that
eventually gets compiled. If I were going to write an external code
generator,
I'd definitely use a scripting language, so why not give the same degree
of
flexibility to the built-in meta-language? And give it good text
manipulation
operators too, search, replace, munge, regex etc. Just like you'd expect
in a
scripting language.
One issue, however, is that that doesn't really mesh well with the concept
you
have of "falling back" to run-time computation if the compiler determines
it
can't do it at compile time. Though, I don't know if that's such a great
idea,
really. I don't know why for sure, but my gut tells me that compile-time
execution and run-time execution aren't quite as similar as inline and
non-inline are to each other and that's going to cause problems. For one
thing,
whereas it's always possible to just not inline something, it won't always
be
possible to not run some code at compile-time, say if the return value of
the
code is in fact a type rather than a number. If you're working with types
as
data, then you absolutely can't run the code at runtime.
From a aethetics point of view, though, I do agree that it would be nice
if the
meta-language were as similar to the D language as possible. It would
definitely be nice if it were easy to convert "compile-time" code to
run-time
code where the conversion made sense. But still I don't want to give up
the
flexibility of a more dynamically-typed language.
I think something like this would also really once and for all make macros
obsolete. I've heard various gurus saying that the combination of inline
functions, typedefs, templates, and const variables makes macros obsolete,
but
there's one more case where people have historically used macros that the
gurus
seem to have been forgotten about: language extensions. Anyone ever seen
how
you make a usable object system that works in C? Answer: lots of macros.
Anyone
taken a look at the "Aspect Oriented Programming" extensions for C++? Any
guess
as to how they implement it? Yep. Bunch o' Macros (among other things).
Ever
heard about the trick for making the broken for loop scope in MSVC6 work
properly? Macros again. Unfortunately macros are pretty indiscriminate in
how
the go and muck things up and the lack of scoping etc causes real messes.
Maybe
it's possible make all those sorts of language extension types of things
possible but in a cleaner way.
I'm just thinking out loud here, but it's definitely all related.
Templates,
metaprogramming, code generation, macros. Wouldn't it be cool to have it
all
unified and done right?
Just a small matter of figuring out what "done right" means. :-)
Unfortunately
I'm just an armchair compiler writer. :-)
Oh, one final thing to throw into the mix -- special tag keywords. Things
like
'synchronized' that change some aspect of how a method runs. Or Qt's
"signal"
and "slot" keywords. Qt runs a lex/yacc parser ('moc') on header files to
handle those keywords and generate boilerplate code to implement them.
Wouldn't
it be nice if the language supported the ability for users to add such
tags?
Another one that would be nice to have is "script", i.e. generate wrapper
code
for binding this method with a scripting language. Boost.python for
instance
lets you put code _elsewhere_ that says generate wrapper code for this
method,
but sometimes it would be nice if you could just flag the method itself
right
there where you declare it.
--bb
In article <d5c8s4$d82$1 digitaldaemon.com>, Kevin Bealer says...

In article <d5c1mu$640$1 digitaldaemon.com>, Bill Baxter says...

Howdy again D folks.
This is my second message on what I think could be improved about C++,
and what
I'm hoping D either already supports, or could in the future (although
for this
one I'm pretty sure D doesn't have this currently).
This comment is about template metaprogramming and metaprogramming in
general.
It's a little more pie-in-the-sky than my last comment.
Projects like Blitz++ and Boost in the C++ world have proven that
metaprogramming techniques can be very useful. The general idea is to be
able
to do some general purpose computation on types or constants but to have
it run
at compile time rather than run time. The C++ template mechanism, it
turns out,
can be made to do this. Popular parlor tricks include things like making
your
templates compute integer factorials at compile time, but there are
plenty of
more useful things to do with it too. :-) Anyway, I presume many folks
are
familiar with the idea. If not, google for "template metaprogramming".
Gobs of
hits.
The problem is that the C++ template mechanism just wasn't designed as a
general
purpose compile-time computing language. It's the same old C vs C++
argument
all over again. Yes you _can_ write programs that are object oriented in
C, but
C just doesn't do much to help you. It doesn't facilitate the paradigm.
Similarly you _can_ do metaprogramming with C++ templates, but the
results are
probably the least readable, least maintainable gobbledy-gook I've ever
had the
misfortune to see.
What's needed is actual language support for the paradigm. I'm not sure
what
that would look like, but a fresh C++-like language such as D seems have
great
potential for offering such a solution. Basically the idea is you'd be
able to
write code to execute at compile time that looks more or less like
regular code,
but in which types are first class values and can be passed around and
returned,
etc.
Here's one really simple example of one common use of metaprogramming to
give a
feel for what I'm talking about. The idea is you want to be able to say
something like integer<24> to get the smallest native integral type that
can
hold 24 bits. A C++ template metaprogramming solution can be seen here:
http://www.eptacom.net/pubblicazioni/pub_eng/paramint.html
(Scroll down to the bottom to see two different versions of the code).
Yikes! All that is, really, is just an if-else, but dressed up in C++
template
metaprogramming it takes about a page of code! My thought is that if the
language actually had first-class support for metaprogramming, then you
could
just do something like write a short metafunction that returns a type:
metafun type integer(int numbits)
{
if (numbits<=sizeof(char)) return char;
if (numbits<=sizeof(short)) return short;
if (numbits<=sizeof(int)) return int;
if (numbits<=sizeof(long)) return long;
if (numbits<=sizeof(cent)) return cent;
metathrow "Compiler error";
}
Or something like that. The logic and intent of the above code is
infinitely
more clear to the reader than the C++ metaprogramming code linked to
above.
Obviously there are tons of issues to be worked out with making something
like
this work (syntax, what meta types are needed, compile-time error
handling,
etc), but it just seems to me that the past 7 or 8 years or so of work on
C++
template metaprogramming has proven that this stuff is useful, but it has
also
proven that to do anything useful with it you have to write pages of
"write-only" code. This stuff is so ugly that it sends people running to
perl!
:-P Yes, it's that bad, folks. :-)
Thoughts? Any chances something like this could ever come to D?
Bill Baxter

This has been discussed, a little, on here before. I think Walter said
he's had
email discussions and sees some promise in this direction or thereabouts,
but
I'm not positive.
But meta-programming is something that all modern optimizing compilers do
to a
*degree*. They do it whenever you write code that can be unrolled and
precomputed.
:int foo() {
: int x = 5;
: for(int j = 0; j<10; j++) {
: x = x + j;
: }
: return x;
:}
This could be a compiled loop.... Or, the compiler can unroll the loop and
do
ALL of it at compile time.
What if there was a "version-like" statement that asked the compiler to do
this?
You could do something like:
: compileTime {
: char[] x = "abcd";
: int abcd_hash = x[0] + x[1] + x[2] + x[3];
: }
This would be something like the "inline" statement in C++. Can the
compiler
really fold that string into an integer? Will it?
The compileTime{} pragma would essentially say "do this at compile time,
or
issue a warning if you can't".
This solves the factorial problem:
int factorial(int i) { /whatever/ };
compileTime {
int x = factorial();
}
compileTime{} would mean the compiler is expected to unroll loops, inline
every
function, and basically run the code at compile time. If the above
"string"
example did not work (ie it used a user-provided string), then the
compileTime
statement fails with an error or warning.
If reflection is added to the D language, then you could provide the
ability to
iterate over methods and fields, or build classes. Combine reflection and
compileTime, and you can do template meta tasks.
Kevin

Heh heh. Nice. Now tell me how you propose to handle this one?
------------------------------
compileTime {
    type integer(int numbits)
    {
        // sizeof gives bytes, so compare against bits
        if (numbits <= 8*sizeof(char))  return char;
        if (numbits <= 8*sizeof(short)) return short;
        if (numbits <= 8*sizeof(int))   return int;
        if (numbits <= 8*sizeof(long))  return long;
        if (numbits <= 8*sizeof(cent))  return cent;
        throw "no native integral type holds that many bits";
    }
}
integer(24) my24bitVar;
integer(40) my40bitVar;
------------------------------
And if you get that one, let us know how to tackle the binding generator
problem. I.e. Boost.python type functionality
(http://www.boost.org/libs/python/doc/tutorial/doc/html/python/exposing.html)
where to generate python bindings for a class (World) and a couple of its
methods (greet and set) all you have to write is this little bit of C++:
class_<World>("World")
def("greet", &World::greet)
def("set", &World::set)
;
Since the compiler knows the number and types of arguments for World::greet it's
possible for the above C++ code to generate complete typesafe wrappers to call
these methods from Python. To get this kind of functionality, I think you're
going to need to write a full D parser in PHP. Should be fun!
--bb
In article <d5ch4n$jul$1 digitaldaemon.com>, Andrew Fedoniouk says...

int x = <% factorial(N) %>;
This is reliable and can be used now.
I like my own idea :) The ultimate preprocessor.

This is how standard C/C++ preprocessor works (BTW, you can easily use it
with D too, as well as with assembler and any other language). PHP has a
lot more features, but the idea is the same.
http://www.digitalmars.com/d/pretod.html explains why Walter does not like it.
--
Vladimir

whereas it's always possible to just not inline something, it won't always be
possible to not run some code at compile-time, say if the return value of the
code is in fact a type rather than a number. If you're working with types as
data, then you absolutely can't run the code at runtime.

Depends how far reflection progresses, I think. If types were first-class
objects, and
foo = new Foo();
foo.doStuff();
became (optimizable) syntactic sugar for
foo = ClassFoo.createNewObject();
ClassFoo.invokeMethod(foo, "doStuff");
then I don't see why this couldn't run at runtime.
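Without language-level reflection, the invokeMethod half of that sugar can already be approximated with a hand-rolled dispatch table; a minimal C++ sketch (all names here are illustrative, not from any real API):

```cpp
#include <functional>
#include <map>
#include <string>

// A stand-in for ClassFoo.invokeMethod(foo, "doStuff"): a per-class
// table mapping method names to callables, filled in by hand.
struct Foo {
    bool stuffDone = false;
    void doStuff() { stuffDone = true; }
};

std::map<std::string, std::function<void(Foo&)>> fooMethods = {
    {"doStuff", [](Foo& f) { f.doStuff(); }},
};

void invokeMethod(Foo& obj, const std::string& name) {
    fooMethods.at(name)(obj);   // throws std::out_of_range if unknown
}
```

The registration is manual here; reflection would let the compiler build the table for you.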

Oh, one final thing to throw into the mix -- special tag keywords. Things like
'synchronized' that change some aspect of how a method runs.

Agreed, but I'd prefer something like dotnet's attribute syntax:
[MyAttribute] someDeclaration
rather than keywords alone, just because it's so much more extensible - you
don't need to be as conservative about introducing new ones.
- Mike

whereas it's always possible to just not inline something, it won't always be
possible to not run some code at compile-time, say if the return value of the
code is in fact a type rather than a number. If you're working with types as
data, then you absolutely can't run the code at runtime.

Depends how far reflection progresses, I think. If types were first-class
objects, and
foo = new Foo();
foo.doStuff();
became (optimizable) syntactic sugar for
foo = ClassFoo.createNewObject();
ClassFoo.invokeMethod(foo, "doStuff");
then I don't see why this couldn't run at runtime.

Ok. Yeh, I'm not completely aware of what all is possible or in the works with
D. If D can support adding members to classes and things like that at run time,
then there's probably not much that couldn't be shunted to run-time if need be.
I was thinking D probably didn't have that sort of thing, being a speed-oriented
compiled language.

Oh, one final thing to throw into the mix -- special tag keywords. Things like
'synchronized' that change some aspect of how a method runs.

Agreed, but I'd prefer something like dotnet's attribute syntax:
[MyAttribute] someDeclaration
rather than keywords alone, just because it's so much more extensible - you
don't need to be as conservative about introducing new ones.

Certainly. That does sound better. What kind of attributes can you define in
.NET? Because after I wrote that I started thinking that maybe there aren't
more than like 5 things you'd ever want to use that type of thing for.
synchronized, scriptable... if D is going to have introspection, then you
probably don't need signal or slot. So hmm... I'm at two. Oh "timed"?
Automatically time the execution of a method? Maybe some contract programming
or aspect oriented programming things?
--bb

Ok. Yeh, I'm not completely aware of what all is possible or in the works with
D. If D can support adding members to classes and things like that at run time

(I'm just speculating wildly here too; please don't take any of it too
seriously.)
*Adding* members at runtime - probably not feasible, unless the compiler is part
of the standard runtime library. Setting or calling named members at runtime,
OTOH, is perfectly possible in principle - I do it quite a bit in C#. I think
it's universal in languages supporting reflection.

What kind of attributes can you define in .NET?

See http://tinyurl.com/ddjap (links to an MSDN doc page) for the system ones;
users can and do define their own. Note that there's substantial crossover
between attributes and "marker" interfaces a la Java; dotnet itself is rather
schizophrenic on the subject. (It has both a Serializable attribute and an
ISerializable interface, and they do different things.) Attributes tend to be
used more for metadata, particularly interop- and tool-oriented metadata,
whereas interfaces are used more for the mainline semantics. Attributes can also
be parameterized, e.g.
[MyAttribute("foo", 92)] someDeclaration;
whereas interfaces obviously can't (though you might be able to do something
template-y in D); attributes can also be applied to just about anything, whereas
marker interfaces can only be applied to classes.
- Mike

Right on, Kevin. One thing to keep in mind, though, when you talk about giving
these "compileTime{}" blocks "all the power of D", is that D is designed to be
an efficient, statically-typed, *compiled* language, and it has to sacrifice
some of the niceties of more dynamic typing to achieve the desired
performance. I personally think a compile-time language should be as dynamic as
possible. It effectively is going to be an interpreted rather than compiled
thing anyway, so why not make it as dynamic as possible? Like you said, the
dream is basically to write metacode that can use introspection on classes at
compile time to algorithmically generate types, classes, and the code that
eventually gets compiled. If I were going to write an external code generator,
I'd definitely use a scripting language, so why not give the same degree of
flexibility to the built-in meta-language? And give it good text manipulation
operators too, search, replace, munge, regex etc. Just like you'd expect in a
scripting language.

I think for the most part this is true --- some scripting-like features could be
included. But, before deciding on this, I think it would be good to examine
what we mean by the distinction between scripting and compiled languages. The
result is a list of tradeoffs and features.
We could divide the list three ways. Some of those features could also be
available in compiled D (even as a library); some only in meta-programming; some
not included at all. A lot of features like foreach() were common in scripting
languages but now appear in D and Java 1.5. They were a good idea, they just
*look* inefficient. ;)

One issue, however, is that that doesn't really mesh well with the concept you
have of "falling back" to run-time computation if the compiler determines it
can't do it at compile time. Though, I don't know if that's such a great idea,
really. I don't know why for sure, but my gut tells me that compile-time
execution and run-time execution aren't quite as similar as inline and
non-inline are to each other and that's going to cause problems. For one thing,
whereas it's always possible to just not inline something, it won't always be
possible to not run some code at compile-time, say if the return value of the
code is in fact a type rather than a number. If you're working with types as
data, then you absolutely can't run the code at runtime.

Currently, the "varargs" concept *can* do some type processing at runtime.
Typeinfos, such as is used by writef(), for example.
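A sketch of what that runtime type processing looks like in a variadic call, in the spirit of D's TypeInfo-driven writef(). This uses C++17 fold expressions, which came along well after this thread, and all names here are mine:

```cpp
#include <sstream>
#include <string>
#include <typeinfo>

// Dispatch on the runtime type of each argument while formatting it.
template <typename T>
void formatOne(std::ostringstream& out, const T& v) {
    if (typeid(T) == typeid(int))         out << "int(" << v << ") ";
    else if (typeid(T) == typeid(double)) out << "double(" << v << ") ";
    else                                  out << v << ' ';
}

template <typename... Ts>
std::string writef(const Ts&... args) {
    std::ostringstream out;
    (formatOne(out, args), ...);   // expand arguments left to right
    return out.str();
}
```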

From an aesthetics point of view, though, I do agree that it would be nice if the
meta-language were as similar to the D language as possible. It would
definitely be nice if it were easy to convert "compile-time" code to run-time
code where the conversion made sense. But still I don't want to give up the
flexibility of a more dynamically-typed language.

I feel the same way, but see below, where I marked [Impossible].

I think something like this would also really once and for all make macros
obsolete. I've heard various gurus saying that the combination of inline
functions, typedefs, templates, and const variables makes macros obsolete, but
there's one more case where people have historically used macros that the gurus
seem to have forgotten about: language extensions. Anyone ever seen how
you make a usable object system that works in C? Answer: lots of macros. Anyone
taken a look at the "Aspect Oriented Programming" extensions for C++? Any guess
as to how they implement it? Yep. Bunch o' Macros (among other things). Ever
heard about the trick for making the broken for loop scope in MSVC6 work
properly? Macros again. Unfortunately macros are pretty indiscriminate in how
they go and muck things up, and the lack of scoping etc. causes real messes. Maybe
it's possible to make all those sorts of language-extension things
possible, but in a cleaner way.

Yep. I think we're definitely on the same page.

I'm just thinking out loud here, but it's definitely all related. Templates,
metaprogramming, code generation, macros. Wouldn't it be cool to have it all
unified and done right?
Just a small matter of figuring out what "done right" means. :-) Unfortunately
I'm just an armchair compiler writer. :-)
Oh, one final thing to throw into the mix -- special tag keywords. Things like
'synchronized' that change some aspect of how a method runs. Or Qt's "signal"
and "slot" keywords. Qt runs a lex/yacc parser ('moc') on header files to
handle those keywords and generate boilerplate code to implement them.

Yes. I can imagine the D concept of versioning, ie debug { ... } code, being
extended to support user-designed code modification. The details, though...

Wouldn't
it be nice if the language supported the ability for users to add such tags?
Another one that would be nice to have is "script", i.e. generate wrapper code
for binding this method with a scripting language. Boost.python for instance
lets you put code _elsewhere_ that says generate wrapper code for this method,
but sometimes it would be nice if you could just flag the method itself right
there where you declare it.
--bb

Here is what I am thinking, syntax is flexible of course.
: struct xy_data {
: userTag(mutable, pickled) {
: int x;
: }
: int refcount;
:
: userTag(pickled) {
: double y;
: }
: }
Once you provide this, the following new kind of "foreach" would be legal at
compileTime only. Its job is to add a reset method that resets all the
"mutable" tagged members, and a "pickling" field that sends all "pickled"
members to a pickler class. Note that the "refcount" field is neither pickled
nor reset.
: compileTime {
: char[] resetCode;
: char[] pickling;
:
: foreach(type t; variable y; xy_data.TypeInfo.members) {
: if (y.has_tag(mutable)) {
: resetCode = resetCode ~ y.name ~ " = " ~ toString(t.init) ~ ";";
: }
: if (y.has_tag(pickled)) {
: pickling = pickling ~ "S.WriteData(" ~ y.name ~ ");";
: }
: }
:
: // If there are mutable members, add a reset method.
: // It resets all fields marked "mutable" to T.init.
:
: if (resetCode.length != 0) {
: addMethod(xy_data, "void Reset()", resetCode);
: }
:
: // If there are pickled members, add a pickler.
: // It sends all fields marked "pickled" to a PickleBuffer object.
:
: if (pickling.length != 0) {
: addMethod(xy_data, "void Pickle(PickleStream S)", pickling);
: }
: }
The foreach() over types changes my initial idea -- it could not be D code at
runtime.
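For comparison, the closest long-established C/C++ approximation to this tagged-member generation is the "X-macro" trick: list the members once, with their tags, then expand the list differently for the struct body, Reset(), and Pickle(). A hedged sketch (all names illustrative):

```cpp
#include <sstream>
#include <string>

enum { MUTABLE = 1, PICKLED = 2 };

// The member list, written exactly once, tags and all.
#define XY_MEMBERS(X)                      \
    X(int,    x,        MUTABLE | PICKLED) \
    X(int,    refcount, 0)                 \
    X(double, y,        PICKLED)

struct xy_data {
#define DECLARE(type, name, tags) type name;
    XY_MEMBERS(DECLARE)
#undef DECLARE

    // reset every member tagged MUTABLE to its default value
    void Reset() {
#define RESET(type, name, tags) if ((tags) & MUTABLE) name = type();
        XY_MEMBERS(RESET)
#undef RESET
    }

    // send every member tagged PICKLED to the "stream"
    std::string Pickle() const {
        std::ostringstream s;
#define PICKLE(type, name, tags) if ((tags) & PICKLED) s << name << ' ';
        XY_MEMBERS(PICKLE)
#undef PICKLE
        return s.str();
    }
};
```

It works, but it is exactly the kind of indiscriminate macro machinery the thread wants a cleaner replacement for.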
BTW, for those worried about compile complexity, I have no problem if fabs() or
cos() or "memmove" is not usable at compile time. There would be restrictions on
compile-time-able code and run-time-able code.
I do think that separating the language into Perl + D is unnecessary and
requires too many conflicting idioms.
The [Impossible] dilemma:
I confess, I have a problem with the "this is impossible in a non-scripting
language".
If something is considered impossible because it is too inefficient, then I
would say "too inefficient compared to not having the ability?". If it provides
important, irreplaceable functionality, AND can be done in a "pay as you go"
manner, then to me it is at least a candidate for inclusion. 'Slow is better than
No', IMHO, as long as it doesn't make other code slower or more dangerous.
On the other hand, if we are delaying for 6 months because we are fixing bugs or
hashing out the "right way" to do something, let me applaud that diligence. I'm
not asking for the Perl "slap it into the language, no one will mind" approach.
(Not that I dislike Perl per se.)
Kevin

And of course, all that could also be put in a "compile time" function with the
struct as an argument, so that you could easily perform the same augmentation on
every struct that needs it with one line of code, like
'decorateStruct(xy_data)'.
I like it.

The [Impossible] dilemma:
I confess, I have a problem with the "this is impossible in a non-scripting
language".

Yeh, I agree. But there are some things you just can't do if you don't have an
interpreter. Like, it might be nice to be able to create simple executable
statements out of strings at runtime and run them, but without going the big
step of including an interpreter in the run-time library, that can't happen.
But I guess that's not really a counter-argument. Having a run-time interpreter
around is just another "price you have to pay" if you want the functionality.
Yeh, what you say really makes a lot of sense. I just spent a few weeks back in
February figuring out how to interface a scripting language with a C++ program.
Yuck. I hear a number of people these days saying you've got to have C++ for
the speed but scriptability is also a must (true in games and for office-type
apps), and so the solution is these cobbled together solutions that try to marry
C++ with python or what have you. But what's the point? If it's generally
agreed that scripting is better for some things, why not just have "one language
to rule them all"? Why can't you have a language like D provide a run-time
interpreter too? If you don't need it, you don't pay for it. Basically it
would be like having compiler-level support for generating all the gobbledy-gook
glue you need to interface with a script language, with the added bonus of
having a unified syntax.
So putting it all together we've now got 3 phases of code execution. :-)
Interpreted compile-time execution (metaprogramming), compiled run-time
execution (the norm), and interpreted run-time execution (scripting). Are we
missing anything? Compiled compile-time execution? Sure, why not! I guess
this would be what you'd do if all the metaprogramming tricks got to be so
time consuming to compile that they needed to be executed natively. Let's hope
that doesn't happen. Or you could maybe think of 'compiled compile-time
execution' as being a code generation phase in which metaprogramming constructs
are actually physically transformed into new code. So it's compiling metacode
into regular program code. Hmm. Maybe that's not so useful.
But I like the idea of adding run-time interpreting capabilities right into the
language. That would be neat. That would be even cooler than having first
class support for meta-programming, I think.

Maybe, after compiling a program, the compiler can execute any function call
that has constant (compile-time evaluable) arguments and replace the call
with the return value.
It would simply execute the function after compiling, passing the constant
arguments. This will result in another constant that will perhaps result in
another function being evaluated compile-time, etc..
I think the nicest thing about meta-programming will be the possibility to
introduce new operators. If we can add types, functions in code, why not
also operators? You want "**" to mean "to the power of"? Just define it.
"<=>" should call opCmp? Want to be able to do "&" on floats? Or maybe "^"
should be "to the power of" if done on reals?
I can hardly wait!
L.
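For what it's worth, the scheme described above -- evaluate any call whose arguments are compile-time constants and substitute the result -- is essentially what C++ eventually standardized as constexpr, years after this thread; a minimal sketch:

```cpp
// A constexpr function can be folded at compile time when its arguments
// are constants, yet the very same function still works at runtime.
constexpr unsigned long factorial(unsigned n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

constexpr unsigned long atCompileTime = factorial(5);       // folded to 120
static_assert(atCompileTime == 120, "evaluated by the compiler");

// same function, non-constant argument, evaluated at runtime:
unsigned long atRunTime(unsigned n) { return factorial(n); }
```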

Maybe, after compiling a program, the compiler can execute any function call
that has constant (compile-time evaluable) arguments and replace the call
with the return value.
It would simply execute the function after compiling, passing the constant
arguments. This will result in another constant that will perhaps result in
another function being evaluated compile-time, etc..

The most interesting aspects of meta-programming are not computing the values of
constants. If that was all there was to it, it would be pointless. Just
compute the value of the constant once at start up time, and quit yer whining
type of thing. Look again at the simple "integer big enough to hold N bits"
example I mentioned. The compiler can't "simply execute the function after
compiling" because that function is computing the type of some variables in code
that then need to be compiled. So at the very least the process has to be
"execute the function *before* compiling". But even that's a little simplistic.
In the example above it really needs to be executed anytime integer<N> is
encountered in the code _during_ compiling.

I think the nicest thing about meta-programming will be the possibility to
introduce new operators. If we can add types, functions in code, why not
also operators? You want "**" to mean "to the power of"? Just define it.
"<=>" should call opCmp? Want to be able to do "&" on floats? Or maybe "^"
should be "to the power of" if done on reals?

I'm not sure this is really feasible or desirable (or that it is even really
meta-programming). What on earth does "&" on floats mean, for example? And how
do you deal with precedence? You want ** to mean power of, that's fine, but the
compiler has to know somehow what the precedence of that operator is or all your
math is going to come out funny. And you better be sure that you don't define
any operators that are ambiguous. Did you mean to exponentiate "a**b" or to
multiply "a * (*b)"?

I can hardly wait!

Or maybe this is faux-enthusiasm? Are you being sarcastic? Sorry if I didn't
catch it. Most oo folks strongly advise against going crazy with operator
overloading. It may seem nifty when you're doing it, but it can make the code
totally impossible to read. Is that "*" really a "*" or is it some funky
overloaded operator? If so then where the heck is it defined? And where else
is this overloaded operator being used in this code? Oh snap! I can't grep for
it because it looks exactly like every other "*" in my code! Etc.
"Thinking in C++" puts it well:
http://www.camtp.uni-mb.si/books/Thinking-in-C++/TIC2Vone-distribution/html/Chapter12.html#Heading349
--bb

The most interesting aspects of meta-programming are not computing the values of
constants. If that was all there was to it, it would be pointless. Just
compute the value of the constant once at start up time, and quit yer whining
type of thing. Look again at the simple "integer big enough to hold N bits"
example I mentioned. The compiler can't "simply execute the function after
compiling" because that function is computing the type of some variables in code
that then need to be compiled. So at the very least the process has to be
"execute the function *before* compiling". But even that's a little simplistic.
In the example above it really needs to be executed anytime integer<N> is
encountered in the code _during_ compiling.

I guess you're right. Still, the syntax for these meta-functions should not
really have to change. The compiler would need the ability to execute the
code before compiling, like you said. It can run its interpreter over the
source code and use the results for the actual compile phase. A difference
might be that "meta-functions" will be able to return types and not only
values.

I think the nicest thing about meta-programming will be the possibility to
introduce new operators. If we can add types, functions in code, why not
also operators? You want "**" to mean "to the power of"? Just define it.
"<=>" should call opCmp? Want to be able to do "&" on floats? Or maybe "^"
should be "to the power of" if done on reals?

I'm not sure this is really feasible or desirable (or that it is even really
meta-programming). What on earth does "&" on floats mean, for example?

It could be the bit-wise and on the floats. I don't know, it was just an
example :-/

And how do you deal with precedence?

That should be handled in the syntax for these custom operators.

You want ** to mean power of, that's fine, but the
compiler has to know somehow what the precedence of that operator is or all your
math is going to come out funny. And you better be sure that you don't define
any operators that are ambiguous. Did you mean to exponentiate "a**b" or to
multiply "a * (*b)"?

True. The (meta-?) compiler will complain if it ends up with an unparsable
language.

I can hardly wait!

Or maybe this is faux-enthusiasm? Are you being sarcastic? Sorry if I didn't
catch it.

No no, not at all. I really think it would be nice to be able to define
operators the way you define classes/types/functions. Just imagine how many
discussions in this list would be solved by this. You want "isnot"? Define
it.
It's the old discussion all over again: should a feature be built into the
compiler or into the library?
Why not define "isnot" in a "library" instead? (It won't be a .lib or
anything like it, since it will just get compiled in, like templates)

Most oo folks strongly advise against going crazy with operator
overloading. It may seem nifty when you're doing it, but it can make the code
totally impossible to read. Is that "*" really a "*" or is it some funky
overloaded operator? If so then where the heck is it defined? And where else
is this overloaded operator being used in this code? Oh snap! I can't grep for
it because it looks exactly like every other "*" in my code! Etc.
"Thinking in C++" puts it well:
http://www.camtp.uni-mb.si/books/Thinking-in-C++/TIC2Vone-distribution/html/Chapter12.html#Heading349

That's a valid argument, though also applicable to function names: I can use
one-character variables and functions, which won't be readable to any other
coder but myself. I can already define a function "myprintf" that deletes
all files in the current directory. Also, modern IDEs will be able to give
custom operators some fancy color. I agree that some kind of prefix for
custom operators would be helpful, though.
L.

No no, not at all. I really think it would be nice to be able to define
operators the way you define classes/types/functions. Just imagine how many
discussions in this list would be solved by this. You want "isnot"? Define
it.

Ever had a look at Forth? http://www.forth.org/
One of my very favorite languages. You can define *everything*, including
operators. You can really get yourself messed up with this one ;-)
Here is how you redefine the '+' operator to mean '-'
: + - ;
Here is a 5 'plus' 4 program ...
==> 5 4 + .
==> 1 ok
--
Derek
Melbourne, Australia
6/05/2005 6:32:50 PM

Damn :-) That's really messy, especially that stack notation.
But the idea is clear: why should symbols as + - / be treated any different
from function names?
Seems to me that operator overloading is much worse than defining new
operators.
New operators have no meaning so you know that custom functionality is used.
L.

Seems to me that operator overloading is much worse than defining new
operators.
New operators have no meaning so you know that custom functionality is used.

But in a math-related class, usually the standard operators do have meaning and
so it makes much more sense to just be able to overload them. Say you create a
matrix class, you really want to be able to overload the + operator for matrix
addition, because that's the symbol everyone uses in real life.
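A minimal C++ sketch of that kind of conventional overload (the class and names here are mine, and a real matrix class would add sizes, bounds checks, expression templates, etc.):

```cpp
#include <array>

// '+' where the symbol already has an agreed meaning:
// elementwise addition on a tiny fixed-size matrix.
struct Mat2 {
    std::array<double, 4> m{};   // row-major 2x2
};

Mat2 operator+(const Mat2& a, const Mat2& b) {
    Mat2 r;
    for (int i = 0; i < 4; ++i) r.m[i] = a.m[i] + b.m[i];
    return r;
}
```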
One thing that makes crazy operator overloading manageable in FORTH and
Lisp-like languages is that there are no infix operators, and so you don't need
precedence. Every operator is pre-fix (Lisp) or post-fix (FORTH) so effectively
the burden of making order of operations correct is on the user. The result is
that instead of being able to write 23+34*5 in a natural way, you have to write
(+ 23 (* 34 5)) or in forth I presume something like 34 5 * 23 +. Which is
unfortunate because the point of doing operator overloading in the first place
was to make the syntax more natural.
One thing about operators, though, is that usually they are trouble for IDEs.
With MSVC, even with Whole Tomato's Visual Assist plug-in, I can't jump to the
definition of an overloaded operator.
When you think about it, yes, overloaded operators are just like any other
functions, but it is seldom that you have an ordinary function like foo() that
is overloaded a billion different ways, whereas operators always have lots of
overloads. I guess my point is just that if you overload anything extensively
it becomes harder to track whats going on in the code, and operator overloading
is no different.

I guess you're right. Still, the syntax for these meta-functions should not
really have to change. The compiler would need the ability to execute the
code before compiling, like you said. It can run its interpreter over the
source code and use the results for the actual compile phase. A difference
might be that "meta-functions" will be able to return types and not only
values.

Needs the ability to return types, yes, but I think it also needs operators to
manipulate types as data. Like adding or removing members from a class or
struct, querying what members exist in a class, comparing equality of types etc.
Hmm maybe that's going too far though. It means you could have classes in your
program that aren't clearly declared anywhere:
: type generateClass(method[] methodlist) {
: class foo {
: int baseMember;
: };
: foreach (method m; methodlist) {
: if (checksomething(m)) {
: foo += m; // adds new method to foo
: }
: }
: return foo;
: }
: generateClass(myMethods) A;
This is a hypothetical 'meta-function' that returns a class type built from a
list of methods passed in. So you've now got some class A in your code but
looking at the above source you will have no idea what it contains. Is that
useful? Hmm, well maybe, actually. This isn't a totally off the wall example.
I have seen almost that exact thing done with the preprocessor in C. The WINTAB
tablet interface library lets you define your own custom packet struct by
#defining a bunch of symbols before #including the header. This way you can get
exactly the packet type you want with no extraneous members taking up memory.
If you only want pressure and x-y position, then you make a struct with just
that. If you want the stylus id and event number etc, then you can add those.
The difference is that in the preprocessor version, the way it works is that
every possible member is listed in the header, but surrounded by a #if block.
So all the members _are_ listed in one place. But the 'myMethods' list does
have to be initialized somewhere.
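The WINTAB-style pattern reduced to its essence (the macro and member names here are illustrative, not WINTAB's actual ones): the consumer defines which optional members it wants before the struct is declared, and anything not requested simply never exists.

```cpp
#define WANT_PRESSURE          // we want pressure...
// #define WANT_STYLUS_ID      // ...but no stylus id: that member never exists

struct Packet {
    int x, y;                  // always present
#ifdef WANT_PRESSURE
    int pressure;
#endif
#ifdef WANT_STYLUS_ID
    int stylusId;
#endif
};
```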
(Incidentally, the fact that that kind of trick exists is one of the main
reasons I'm skeptical whenever someone tells me "Language X makes macros
completely unnecessary!". It was said by Stroustrup with C++, and now by Walter
for D. Basically Walter's "why we don't need macros page" says "C++ wasn't
supposed to need macros but they forgot about all these important cases where
macros are used, but now with D I have _really_ foreseen all possible cases, so
now we _really_ don't need macros." Riiiight. Enumerating all the uses of
macros is like trying to list every kind of program you can write in C. At
_best_ you can come up with a list of everything that's been done to date, but
who knows what someone will think up to do with it next year?)

Want to be able to do "&" on floats?

I'm not sure this is really feasible or desirable (or that it is even really
meta-programming). What on earth does "&" on floats mean, for example?

It could be the bit-wise and on the floats. I don't know, it was just an
example :-/

Ok, but it's pretty much a perfect example of how *not* to use operator
overloading.

And how do you deal with precedence?

That should be handled in the syntax for these custom operators.

Is that technically feasible when you have 5 different libraries all of which
could define their own operators, and some of them possibly define the same
operators with different precedence? And how do I specify the precendence of my
super-cool '<*>==' operator relative to all the other operators that could be
defined by some other library in the future? Seems like that would make it
difficult for a compiler writer, because the syntax tree becomes somewhat
malleable. But I can't seem to think of a good example where the clash would
really be unresolvable. You can always put the operators in namespaces to
prevent clashes, for instance.
Still it seems like a thorny nest. And also I think a completely separate issue
from metaprogramming. Just out of curiosity, has any programming language ever
offered the ability to define new infix operators with changeable precedence? I
can't think of any.
--bb


This comment is about template metaprogramming and metaprogramming in general.
It's a little more pie-in-the-sky than my last comment.

I might be in the minority here, but I have a strong dislike for both
meta-programming and code generation.
From time to time, I'll encounter some smug programmer bragging about
the fact that he's not just "writing code" anymore. Instead, now he's
using some sort of code generator, and now he's "writing code that
writes code".
A cold shiver goes through my spine. Like someone just walked over my
grave.
Invariably, the code-generation code is far more difficult to read,
understand, and debug (for anyone except the original author) than the
non-code-generated code would have been.
Metaprogramming (in my opinion) is the same sort of animal. Programmers
like metaprogramming because it's tricky and fun. It's the same reason
why lisp programmers like to write self-modifying code. It's the same
reason why perl programmers have one-liner competitions (ever heard of
perl golf?).
Programmers enjoy making their code perform funky tricks.
But in the long run, I think the benefits of code-generation and
meta-programming (experienced during code authorship) are cancelled out
by the problems they cause later in the software lifecycle (during
debugging and maintenance).
--BenjiSmith

Yes, I agree that metaprogramming type things can definitely be abused. But a
couple of responses:
First, like any technique, the costs must be weighed against the benefits. Just
like operator overloading, you don't just go blindly applying it anywhere and
everywhere just because you can. You apply it only if it will make the
resulting code significantly easier to read and maintain.
Second, what I'm advocating is that by supporting metaprogramming more directly
in the language, such code will become less obfuscated. The majority of
template metaprogramming techniques only *look* complicated because the syntax
for doing it sucks so bad. (See the example I posted earlier -- more than a
page of code for a simple if-then-else). Most metaprogramming code boils down
to simple comparisons, if-then-else, and simple loops. The horrible syntax just
obscures that fact. Of course, that's what the "smug programmers" who are
abusing it like about it, so they can say to their buddies "I bet you can't
figure out what my code does -- see, I'm smarter than you". But if the syntax
could be made to look like something readable, these people would hopefully be
bored away, and move on to entering perl obfuscation contests or something.
Finally I agree that anything like this must be used judiciously, and if not it
will cause you major headaches down the line with support, maintenance, etc.
I'm no super expert on metaprogramming, mainly because I got disgusted with what
a pain it was to use and maintain in C++ after experimenting with it for a
while. But I don't think that's an intrinsic fault of the technique, I think
it's an implementation issue, namely that C++ templates suck for that sort of
thing.
To be honest I haven't thought about this that deeply, and maybe I'm wrong and
the usage case really can't be made. But that's why I posted the message here
-- to see if a bunch of smart folks interested in improving upon C++ could help
flesh out the idea more.
I think to move the discussion forward it would be helpful for me (or anyone in
the know) to throw out a few more real-world examples of places where people use
metaprogramming today. I really haven't touched it for a few years (ever since
I ran away from the C++ implementation screaming :-), so I've forgotten some
of the techniques that I thought seemed really cool and useful back then. With
some more concrete examples of "look what metaprogramming can do!" then the D
experts can come back with how you'd approach it in D. I'll see if I can dig
some more up. But for now the best real world examples I can think of are what
scripting interface generators are doing. Like Boost.Python
(http://www.boost.org/libs/python/doc/), or luaBind
(http://luabind.sourceforge.net). So how would people go about doing that type
of thing in D?
Fundamental issue: just thinking this out now, but I think one fundamental
source of the obfuscation in most template metaprogramming techniques is that
templates support conditionals in a round-about way via the pattern matching
engine for template specialization. This is both a benefit and a curse.
Benefit in that it makes matching rules very flexible and is easy to extend by
plopping down new specializations whenever and wherever you like, but a curse in
that it basically becomes a giant 'if-then-else' that happens to have all its
cases scattered all over the place out of order. The whole pattern-matching
aspect is a lot like some functional programming languages, e.g. ML (I'm not the
first to notice that -- another similarity is that recursion is the only way to
implement loops in template metaprograms).
So to try and bite off a manageable morsel, here's a simple question: is it
possible and feasible to provide an alternative, more procedural, if-else
construct for the purpose of deciding on template specializations? I.e. is it
possible to provide a specialization syntax that instead of this:
template TFoo(T) { ... } // #1
template TFoo(T : T[]) { ... } // #2
template TFoo(T : char) { ... } // #3
template TFoo(T,U,V) { ... } // #4
Looks like this:
template TFoo(T) {
    if (T : T[]) { ... }       // #2
    else if (T : char) { ... } // #3
    else if (T,U,V) { ... }    // #4
    else { ... }               // #1
}
The main difference/advantage would be that this gives the developer control
over matching order, so there would never be compiler errors from ambiguity. I
envision there are probably other things you could do in those if's too like
test if a type has a particular method, if it's an object or a POD type (plain
old data), specifying ranges of value parameters if(T: 0<T<=8), etc. Similarly,
it would be cool if you could use the same matching expressions for localizing
the specialization to just the code that really needs to be specialized.
Sometimes the only differences between three versions of a templated function
come down to one or two lines, so then you either have to factor that code out
into a different little mini-template or just go and duplicate the whole
function 3 times and just change the one line that's different.
For example, say if the type parameter is an object with lock/unlock methods
then I want to call those, but otherwise I'll just go ahead with my business.
Eg something like:
template Frobulator(T)
{
    void frobulate(out T to, T from)
    {
        if (hasmethod(T, lock)) { from.lock(); }
        // ... here's where the frobulation happens, about 50 lines of code ...
        if (hasmethod(T, unlock)) { from.unlock(); }
    }
}
This should be something that could be done at compile time when instantiating
the template. Is that possible in D? Or is it more of what might be allowed
with these introspection features that are supposedly in the works? A related
example would be if I'm deriving from the template parameter and I want the
derived class to implement lock/unlock methods if they don't already exist in
the base class being derived from. (Looks like the answer to adding a method
is no... http://www.digitalmars.com/d/template.html -- the "limitations" part
says you can't add nonstatic methods with templates.)
Dang, every time I think about this stuff it makes me wonder if I shoulda gone
into systems programming rather than graphics. :-) As it is, I just don't have
the time or training really to work effectively on this stuff (one and a half
compiler courses is about all I got). So my ulterior motive here is to
hopefully light a fire under someone's butt so they'll go off and create it
someday and become my hero. I don't really expect it to happen overnight, but
if I just keep bringing this up in different places where smart people hang out,
surely someone will pick it up and run with it eventually... :-)
--bb
P.S. Kudos if you managed to read all the way down to this sentence, oh brave
reader. ;-)
In article <d5dlc7$1jat$1 digitaldaemon.com>, Benji Smith says...

Bill Baxter wrote:

This comment is about template metaprogramming and metaprogramming in general.
It's a little more pie-in-the-sky than my last comment.

I might be in the minority here, but I have a strong dislike for both
meta-programming and code generation.
From time to time, I'll encounter some smug programmer bragging about
the fact that he's not just "writing code" anymore. Instead, now he's
using some sort of code generator, and now he's "writing code that
writes code".
A cold shiver goes through my spine. Like someone just walked over my
grave.
Invariably, the code-generation code is far more difficult to read,
understand, and debug (for anyone except the original author) than the
non-code-generated code would have been.
Metaprogramming (in my opinion) is the same sort of animal. Programmers
like metaprogramming because it's tricky and fun. It's the same reason
why lisp programmers like to write self-modifying code. It's the same
reason why perl programmers have one-liner competitions (ever heard of
perl golf?)
Programmers enjoy making their code perform funky tricks.
But in the long run, I think the benefits of code-generation and
meta-programming (experienced during code authorship) are cancelled out
by the problems they cause later in the software lifecycle (during
debugging and maintenance).
--BenjiSmith

So my ulterior motive here is to
hopefully light a fire under someone's butt so they'll go off and create it
someday and become my hero. I don't really expect it to happen overnight, but
if I just keep bringing this up in different places where smart people hang
out,
surely someone will pick it up and run with it eventually... :-)

Ok, so is that what Andrew was talking about when he said "Derek already did
that [compile time code generation]; it's called build.exe"? I searched for
'build.exe' but wasn't able to find anything that looked remotely like it had
anything to do with code generation or template metaprogramming. (Maybe a less
generic name might help when it comes to people finding your project via web
searches. Like dbuild or buildd or something like that).
Anyway where can I find a description of your macro processor?
--bb
In article <10s431exril0t.13imwd7sdj6zh$.dlg 40tude.net>, Derek Parnell says...

On Fri, 6 May 2005 02:46:53 +0000 (UTC), Bill Baxter wrote:
[snip]

So my ulterior motive here is to
hopefully light a fire under someone's butt so they'll go off and create it
someday and become my hero. I don't really expect it to happen overnight, but
if I just keep bringing this up in different places where smart people hang
out,
surely someone will pick it up and run with it eventually... :-)

Ok, so is that what Andrew was talking about when he said "Derek already did
that [compile time code generation]; it's called build.exe"? I searched for
'build.exe' but wasn't able to find anything that looked remotely like it had
anything to do with code generation or template metaprogramming.

(Maybe a less
generic name might help when it comes to people finding your project via web
searches. Like dbuild or buildd or something like that).

Well I figure if Unix can do 'make', then I can do 'build'. And have you
ever tried locating 'D' using google? ;-) [Oops, I just tried and it came
up as the tenth site! Out of 762,000,000 sites.] However, I did google
"derek parnell build" and got it on the second site.

I remembered another really cool example of where metaprogramming is being used
extensively to good effect. The Sh shading language by Michael McCool.
http://libsh.org/
http://www.cgl.uwaterloo.ca/Projects/rendering/Talks/metashader/aw.ppt
These days graphics hardware like the GeForce 6800 can do some really neat
tricks, but programming it to do those tricks is a major pain in the butt. The
simplest "hello graphics hardware world" example is a few pages of code to set
up graphics contexts, compile and load a simple shader, and then render a
quad to the screen to get something to happen. The neat thing is that shader
compilation is very quick and can be done at runtime. So Sh is a C++ library
that lets you write pretty normal looking code that just happens to get executed
on the graphics hardware.
I guess all in all, it's pretty similar to the binding generation problem that
Luabind or Boost.Python solve. Just the interface is graphics hardware rather
than a scripting language API. Anyway it's pretty cool.
If D made it significantly easier to develop something like Sh I bet graphics
folks would really eat it up. I mean they've been happy to learn a new shader
language like every other year to keep up with the cutting edge of graphics
hardware, so why not D if it can make their lives easier? Similarly I'm sure
other folks in CS academia would use it if it meant they could get prototype
systems built faster, so that they could publish results faster.
The great thing about targeting academics is that academic projects often don't
have much legacy code. So there's not a high price to pay for abandoning C++,
unlike in the industry at large. Academics were pretty quick to adopt Java
also, for instance, simply because it made their lives easier, and it sped up
prototype development. And I think this was before there were even decent IDEs
and whatnot. But it's not like they don't still want speed, it's just that
they're willing to give up a little to get their results out faster. So if D
can give them speed of development with speed of execution, I think they'll jump
on it. And of course eventually a student who writes in D at the Uni will go
out into the real world and bring his D experience with him.
I'd like to mention one more potential area in which D could excel. Someone
mentioned it before on this newsgroup, but it is numerical/scientific computing.
These people are still using Fortran in part because C++ just isn't very good
for numerical computing. A C++ that was as good as Fortran (i.e. potentially D)
could be a big hit.
I've mentioned Blitz++, but there are also a few linear algebra packages that
use template metaprogramming techniques: MTL and TNT. I haven't tried TNT, but
I did use MTL. It doesn't look like it's been worked on since 2001, though.
This is conjecture, but I think MTL basically collapsed under its own weight.
It was just too complicated to maintain all that template mess in C++, and when
the original developer left it behind there was really no one else who could
understand it. But _despite_ that, somehow it managed to attract a pretty
active user community. So my point of mentioning all this is that the
scientific computing world is also one in which D could definitely take hold.
But it has to make the right choices. Scientific computing is still mostly
Fortran, and it'll probably stay that way, because nobody wants to rewrite all
the millions of lines of code out there that works. But that's fine because D
can call legacy Fortran routines just fine. Today, some folks in scientific
computing are starting to use C++ for new work, but I think it's still at a
point where they have no major attachments to C++. It definitely presents an
opportunity for D.
--bb

This comment is about template metaprogramming and metaprogramming in general.
It's a little more pie-in-the-sky than my last comment.

I might be in the minority here, but I have a strong dislike for both
meta-programming and code generation.

..

Invariably, the code-generation code is far more difficult to read,
understand, and debug (for anyone except the original author) than the
non-code-generated code would have been.

..

But in the long run, I think the benefits of code-generation and
meta-programming (experienced during code authorship) are cancelled out
by the problems they cause later in the software lifecycle (during
debugging and maintenance).
--BenjiSmith

There is a kind of code generation where you tell some program (usually a big,
badly written script) that you want to write a computer game with "sprite" type
animation, or a business application with such and such db tables. The code
then goes off, generates a butt-ugly gui and some windows code, and you open an
editor and start digging through 200K of code with variables named like
CheckBoxWidget_X1128.
In this case, yeah, I'm shivering right with ya.
But in some things, like generating ASN.1 or XML loaders from a 100 page ASN
spec, the code looks at a common, standard "interface language" and writes up
some stream extraction "loader" code that you never, ever, will want to edit.
In these cases, I believe that the code generation can be a benefit to
maintainability and reliability.
Python can pickle, Java can do serialization, but for fast C or
C++ with ASN.1, you probably need a code generator. Otherwise you are coding
hundreds of classes that could change whenever the DTD or ASN.1 spec changes.
(Good luck if the guy that understands those classes splits for Ohio.)
Kevin

I agree with you on all points - especially on the one that if
metaprogramming was easier to do, it would be a lot more practical. I'd like
to bring this to D.

Whee! Really looking forward to see how it turns out.
Are you in contact with folks like Alexandrescu who know these tricks for C++ inside
out? Seems like that would be really valuable to pick those folks' brains to
find out what they would add to or change about C++ if they could.
--bb

I agree with you on all points - especially on the one that if
metaprogramming was easier to do, it would be a lot more practical. I'd like
to bring this to D.

Whee! Really looking forward to see how it turns out.
Are you in contact with folks like Alexandrescu who know these tricks for C++ inside
out? Seems like that would be really valuable to pick those folks' brains to
find out what they would add to or change about C++ if they could.

Static if? This isn't possible in DMD .123, is it? And what about static
while loops and other such logic?

.123 doesn't support this but I see no reason why "static if" would be
hard to implement. Just enhance the version/debug condition
implementation a bit and you get the same features as "static if".
Thomas

.123 doesn't support this but I see no reason why "static if" would be
hard to implement. Just enhance the version/debug condition
implementation a bit and you get the same features as "static if".

Yup, that's nearly all there is to it. What's interesting, though, is how it
transforms template metaprogramming. C++ template metaprogramming was
'discovered' rather than designed, and as Bill Baxter's reference shows, it
is very hard to use.

.123 doesn't support this but I see no reason why "static if" would be
hard to implement. Just enhance the version/debug condition
implementation a bit and you get the same features as "static if".

Yup, that's nearly all there is to it. What's interesting, though, is how it
transforms template metaprogramming. C++ template metaprogramming was
'discovered' rather than designed, and as Bill Baxter's reference shows, it
is very hard to use.

Does this paragraph mean you can do metaprogramming with static if, or am I
reading too much into this?
Kevin

.123 doesn't support this but I see no reason why "static if" would be
hard to implement. Just enhance the version/debug condition
implementation a bit and you get the same features as "static if".

Yup, that's nearly all there is to it. What's interesting, though, is how it
transforms template metaprogramming. C++ template metaprogramming was
'discovered' rather than designed, and as Bill Baxter's reference shows, it
is very hard to use.

Does this paragraph mean you can do metaprogramming with static if, or am I
reading too much into this?
Kevin

Sorry - I was vague; I meant: can looping, like the "factorial" case, be done?
But I think this was asked and answered in another post.
Kevin

True enough, and as is common with such template solutions, it works on the
easy cases but gets pretty unwieldy on more complex ones, like needing to do
more than just a simple typedef for the Then or Else.

I thought they might be :) Only reason I ask is because it's currently possible
to implement all the standard control structures (if, while, switch) in template
code as it exists now, so if the language were to support 'if' I'd expect it to
support others as well. Though I suppose recursion seems more of an obvious
method for looping than it does for switching on a boolean expression.
Sean

Would static if support blocks such as...
static if (condition){...}
and if so, will everything in that block be considered static as well?
Also, are you considering a static switch?
I would think that should be almost as straightforward to implement as static
if.
Any other static logic support planned, or contemplated?
TZ

YUCK;
I hate uppercase, personally. I was glad to see D got rid of the
preprocessor and that silly convention :P.
My distaste of uppercase probably comes from the fact that my father
used nothing but it; there was not a thing I remember him writing or
typing that was not in uppercase. When I was young, I remember him
hitting the "CAPS LOCK" key every time the computer started up.
It also reminds me of QuickBASIC, which isn't a good thing :P.
-[Unknown]

It looks really good. In C++ there are two *different* styles for normal
programming (imperative style) and metaprogramming (functional style, in the
manner of Haskell). Just compare a function that evaluates factorial with a
template that evaluates factorial. It's cool to have ONE approach for
programming and metaprogramming in D.
The only thing is that 'static' keyword seems to be overused. It has at
least three different meanings in D, and even more in other languages. What
about using something like 'meta' or 'compile_time', or UPPERCASE letters?
--
Vladimir

It looks really good. In C++ there are two *different* styles for normal
programming (imperative style) and metaprogramming (functional style, in the
manner of Haskell). Just compare a function that evaluates factorial with a
template that evaluates factorial. It's cool to have ONE approach for
programming and metaprogramming in D.

I don't get you. Have you ever seen any functional language without an if
statement? If is neither functional nor imperative.
And as long as there is nothing like a compile-time variable that can be
re-assigned, the style will stay functional and not imperative.

It looks really good. In C++ there are two *different* styles for normal
programming (imperative style) and metaprogramming (functional style, in the
manner of Haskell). Just compare a function that evaluates factorial with a
template that evaluates factorial. It's cool to have ONE approach for
programming and metaprogramming in D.
The only thing is that 'static' keyword seems to be overused. It has at
least three different meanings in D, and even more in other languages. What
about using something like 'meta' or 'compile_time', or UPPERCASE letters?
--
Vladimir

I agree that it would be nice to have some clear way of distinguishing this
functionality from other functionality, such as a "compiletime" keyword, but
there are limitations to that also. For example, if anyone ever wants to make
a D interpreter for testing programs as you write them, the concept of compile
time wouldn't apply to the interpreter but would still have to be dealt with
to properly emulate running compiled code.
The "static" concept in D does in fact encompass the concept of resolving
something once at compile time and having it be constant at run time. While
this concept is also used in the formation of constants, it is what is passed
to the constant that supports the concept of resolving at compile time, rather
than the constant declaration itself.
The concept being applied is roughly equal to the pseudocode concept of
"evaluate once", except that it is allowed access only to information that is
available before the point in program execution where the system sets the
processor to jump to the entry point of the program as designated in the
source code. This could be done at compile time, or in run-time code at the
actual entry point that runs once and then passes control to the virtual entry
point.
This versatility of implementation is actually quite consistent with the
overall concept of "static" implemented in D so far, as I understand it... so
as such, I would think it should make sense to stick with the "static"
keyword, which covers the concept well, rather than arbitrarily adding yet
another keyword with no benefit to its existence other than this minor
distinction. (my opinion)
TZ

I think this is showing up a previous design error. This use of static
isn't logically consistent:
if (nbits <= 8)
static if (nbits <= 8)
Versus:
int c = 4;
static int c = 4;
So this should be "const" instead. Rather than overloading the meaning
of the keyword as is done with static, it would just be extending it to
act a little more uniform.

I think this is showing up a previous design error. This use of static
isn't logically consistent:
if (nbits <= 8)
static if (nbits <= 8)
Versus:
int c = 4;
static int c = 4;
So this should be "const" instead. Rather than overloading the meaning
of the keyword as is done with static, it would just be extending it to
act a little more uniform.

I thought it would be consistent with the use of assert(exp) versus static
assert(exp).

I think this is showing up a previous design error. This use of static
isn't logically consistent:
if (nbits <= 8)
static if (nbits <= 8)
Versus:
int c = 4;
static int c = 4;
So this should be "const" instead. Rather than overloading the meaning
of the keyword as is done with static, it would just be extending it to
act a little more uniform.

I thought it would be consistent with the use of assert(exp) versus static
assert(exp).

No, no, I mean that "static assert" was the wrong design because it
shows an inconsistency when extended to apply to more statements. So it
should be "const assert" as well as "const if" instead, while "static
assert" should be an error.

No, no, I mean that "static assert" was the wrong design because it
shows an inconsistency when extended to apply to more statements. So it
should be "const assert" as well as "const if" instead, while "static
assert" should be an error.

"static assert" comes from C++ terminology for the same thing.

C++ sucks.
What about if you decide to add an attribute to functions which declare
that they must be constant-folded if their parameters are constant,
which allows the standard to declare exactly how much constant-folding
is necessary for implementations. What attribute do you use? You can't
use static, because it's already used with an entirely different
meaning; however, static would be the consistent attribute. This
inconsistency is just setting up a problem which will take decades to
sort out.

No, no, I mean that "static assert" was the wrong design because it
shows an inconsistency when extended to apply to more statements. So it
should be "const assert" as well as "const if" instead, while "static
assert" should be an error.

"static assert" comes from C++ terminology for the same thing.

C++ sucks.

I agree and very good point.
The use of static for both assert and if in this case is really non-intuitive
and inconsistent with how it is used in other contexts (runtime evaluation).
'const' would seem to be better in both cases because they are evaluated
strictly at compile time.
- Dave

What about if you decide to add an attribute to functions which declare
that they must be constant-folded if their parameters are constant,
which allows the standard to declare exactly how much constant-folding
is necessary for implementations. What attribute do you use? You can't
use static, because it's already used with an entirely different
meaning; however, static would be the consistent attribute. This
inconsistency is just setting up a problem which will take decades to
sort out.

When you extend the idea to more complex compile-time structures, const runs
into a similar problem. Consider, for example, the case of a compile-time
recursive loop. It may evaluate to a structure that is constant, but what it
"does" is a different story.
I think it's mainly a matter of perspective, and of which directions it may go
from here. Since Walter controls that aspect of the D language's foreseeable
future, I would think his vision of it is what matters most... although it's
fair for us to share our vision with him, which may in the long run alter
his... as Bill Baxter has obviously done in this case. (Nice work Bill.)
Hopefully Walter will give (or has given) your comments about when "const"
would be better than "static" careful consideration, and as such, think ahead
to avoid the potential problems... but I also hope he will consider (or has
considered) the other side of that coin.
Personally, I think "static" was a good choice for the implementations being
discussed in the present context, and that future extensions of that concept
should prove to be quite useful. Nothing ruling out the possibility of
implementing both ideas in the long run, though, I would think... as they have
divergent potentials, conceptually.
TZ

The use of static for both assert and if in this case is really non-intuitive
and inconsistent with how it is used in other contexts (runtime evaluation).
'const' would seem to be better in both cases because they are evaluated
strictly at compile time.

Personally, I like the idea of "static" to mean "compile-time" while
"const" would mean that "you can't change this". The compiler may
evaluate all const stuff at compile time, but it doesn't have to.
Granted, using "static" in this manner differs from using "static"
within a class, but that's not all that bad.
Brian
( bcwhite precidia.com )
-------------------------------------------------------------------------------
Leave it to the computer industry to shorten "Year 2000" to "Y2K".
It's that sort of thinking that led to the problem in the first place.

The use of static for both assert and if in this case is really non-intuitive
and inconsistent with how it is used in other contexts (runtime evaluation).
'const' would seem to be better in both cases because they are evaluated
strictly at compile time.

Personally, I like the idea of "static" to mean "compile-time" while
"const" would mean that "you can't change this". The compiler may
evaluate all const stuff at compile time, but it doesn't have to.
Granted, using "static" in this manner differs from using "static"
within a class, but that's not all that bad.
Brian
( bcwhite precidia.com )
-------------------------------------------------------------------------------
Leave it to the computer industry to shorten "Year 2000" to "Y2K".
It's that sort of thinking that led to the problem in the first place.

Exactly what I was thinking.
By the way, I like your signature line. Is that a quote, and if so... of whom?
(if you happen to know)
TZ

No, no, I mean that "static assert" was the wrong design because it
shows an inconsistency when extended to apply to more statements. So it
should be "const assert" as well as "const if" instead, while "static
assert" should be an error.

"static assert" comes from C++ terminology for the same thing.

C++ sucks.
What about if you decide to add an attribute to functions which declare
that they must be constant-folded if their parameters are constant,
which allows the standard to declare exactly how much constant-folding
is necessary for implementations. What attribute do you use? You can't
use static, because it's already used with an entirely different
meaning; however, static would be the consistent attribute. This
inconsistency is just setting up a problem which will take decades to
sort out.

That is a good idea, but there's another angle to it. There is a
characteristic of functions which could be called "atomic", meaning the
function depends only on its parameters and not on any globals. Atomic would
also mean that the function has no side effects beyond its return value.
Hence, an optimizing compiler could take an atomic function, see that its
arguments have known values, and so compute the function's return value at
compile time. The const attribute for a function would be a subset of this
behavior, and not necessary.
(An example of an atomic function would be std.math.sin(x).)
PS: The "static if" is not parsed as a static attribute followed by an if
statement; it is parsed as the two keywords juxtaposed having a special
meaning, like "static this" and "static assert". In other words, it's
treated as if it were a keyword "staticif". The following would not work:
static
{
    if (...) ... // this is not a static if
}

I have one more related idea about this. I think it would be nice to have
some virtual methods that do not depend on the instance of the object. For
example:
: class A {
:     this() {
:         int v = getSomeValue(); // virtual method call
:         ...
:     }
:     // some kind of attribute required here to tell the compiler that
:     // overrides of this method can't use fields and/or other methods
:     // without the same attribute
:     int getSomeValue() {
:         return 0;
:     }
: }
:
: class B : A {
:     this() { f_C = new C(); }
:     int getSomeValue() {
:         return f_C.getValue(); // runtime access violation here
:     }
:     C f_C;
: }
: class C {
:     int getValue() { return f_Value; }
:     int f_Value = 5;
: }
I wrote a simple test to verify virtual method calls from constructors and
destructors, and found that the current behaviour differs from C++. D
allows this virtual call, whereas C++ calls the non-overridden version of
the method. For me that's nice (personally I don't like the C++ behaviour),
but I understand the reason why it was done, and the example above shows the
problem case. I think such cases can be detected at compile time. Moreover,
an extension to abstract methods would sometimes be useful, e.g. to say that
some method is required to be overridden in the final class.

So here's a thought... why not make a way to specify that a particular function
(or for that matter anything else that could possibly be atomic) is "meant to
be atomic" such as...
atomic int f( /*...*/ ) { /*...*/ }
...and as such, the compiler could know that it would be
alright to replace it with anything that would return the same results,
at any time... including at compile time.
If a function or expression labeled as atomic has side effects, those side
effects should be suppressed.
If any side effects of a function or expression cannot be suppressed
(or the compiler doesn't know how), the result would be a compile-time error.
What do you think, Walter? Sound good to you?
TZ

"atomic" -- sounds like a transaction; "pure" is another way of
describing it AFAIK.
Reminds me of a project where we used javadoc tags to document an API.
Certain methods were marked up as pure, others as primitive -- meaning
they didn't depend on methods of the same class that could be
overridden. Such details can be very useful for clients, but a pain to
maintain when it's only in the docs.

I think this is showing up a previous design error. This use of static
isn't logically consistent:
if (nbits <= 8)
static if (nbits <= 8)
Versus:
int c = 4;
static int c = 4;
So this should be "const" instead. Rather than overloading the meaning
of the keyword as is done with static, it would just be extending it to
act a little more uniformly.

I thought it would be consistent with the use of assert(exp) versus static
assert(exp).

No, no, I mean that "static assert" was the wrong design because it
shows an inconsistency when extended to apply to more statements. So it
should be "const assert" as well as "const if" instead, while "static
assert" should be an error.

Different people, different opinions. To me, "static if" fits the usage better
than "const if" does... but that's just my opinion.
TZ

Hey, that's a nifty start.
I see some folks are unhappy with the use of 'static'. Is it too hard for the
compiler to just figure out what is statically executable and avoid the use of
the extra keyword altogether? Or is the thought that it's better for the
programmer to have to declare their intent, so that e.g. the compiler can tell
them it's impossible to do what they want at compile time. At least it would be
nice to be able to put it all in a block so you only have to type 'static' once.

Hmm and why isn't it 'static else'?
Also, speaking of keywords sounding strange, it also seems kind of strange to
call something like that a 'template', since it has little to do with the
original meaning of the word. Not that I really care, but it's sort of
reminiscent of how C++ started off using the 'class' keyword for template type
arguments, and eventually came around to using 'typename' instead. I presume
that was because it sounds nonsensical to pass in an 'int' as the 'class'
parameter.
--bb
In article <d62m56$2h67$1 digitaldaemon.com>, Walter says...

Also, speaking of keywords sounding strange, it also seems kind of strange to
call something like that a 'template', since it has little to do with the
original meaning of the word. Not that I really care, but it's sort of
reminiscent of how C++ started off using the 'class' keyword for template
type arguments, and eventually came around to using 'typename' instead. I
presume that was because it sounds nonsensical to pass in an 'int' as the
'class' parameter.

Given the potential ambiguity of the meaning of the word 'static' in this
context, and the rather inelegant need for its repetition, how about
extending what I think is one of D's most powerful new features, attributes:
template Integer(int nbits){
    compile_time {
        if (nbits <= 8)
            alias byte Integer;
        else if (nbits <= 16)
            alias short Integer;
        else if (nbits <= 32)
            alias int Integer;
        else if (nbits <= 64)
            alias long Integer;
        else
            static assert(0);
    }
}
That seems cleaner, simpler and more intuitive to me?

It does, but there's a problem: you'll get an error "Integer is undefined"
because Integer will be local to the if block. Fundamentally, static if has
to parse like debug and version declarations, which do not introduce a new
scope with the dependent declarations.

Ah but :)
In the docs for the debug predicate it says:
debug(Identifier) statements are compiled in when the debug identifier
matches Identifier.
If Statement is a block statement, it does not introduce a new scope.
That's a very similar compile-time directive. Couldn't a similar non-scoping
block predicate be used for meta-programming?
How about calling it 'meta'? I don't see that conflicting with any existing
term in any other language.
: template Integer(int nbits){
:     meta {
:         if (nbits <= 8)
:             alias byte Integer;
:         else if (nbits <= 16)
:             alias short Integer;
:         else if (nbits <= 32)
:             alias int Integer;
:         else if (nbits <= 64)
:             alias long Integer;
:         else
:             static assert(0);
:     }
: }
anon.

Ah but :)
In the docs for the debug predicate it says:
debug(Identifier) statements are compiled in when the debug identifier
matches Identifier.
If Statement is a block statement, it does not introduce a new scope.
That's a very similar compile-time directive. Couldn't a similar non-scoping
block predicate be used for meta-programming?

They are similar, but to avoid confusion, they are not the same. If
statements introduce a new scope, static if's do not.

How about calling it 'meta'? I don't see that conflicting with any existing
term in any other language.

You're right, it doesn't. But I don't see the advantage over 'static if',
and I think it will sow confusion to have the regular if not introduce a new
scope.

This comment is about template metaprogramming and metaprogramming in
general. It's a little more pie-in-the-sky than my last comment.

I've just finished reading this entire thread.
Wow, I'm impressed at how many good points and examples came up! Seems
the interest in D metaprogramming is much wider than I thought just this
winter. We are in a good position to actually get somewhere with it! We
have enough people wanting it, I see some convergence on how to do it,
and folks have largely similar thoughts on it.
I bet this will become one of the hottest topics in this NG after the
summer, when D 1.0 has been out long enough for the immediate brouhaha
to wane, and everybody gets back to Important Long Term Issues.
And I'm now more convinced than ever that D meta programming will first
of all become a reality, and secondly that it will _kick_ass_big_time_!
georg
PS, right now I don't even lurk regularly here, let alone do regular
posts. But I ain't gone, take my word for it.