The C# solution works well if you will *only* develop from the IDE, but it is
a total pain as soon as you need to work with non-language-aware tools.

I think Microsoft considers an IDE part of a modern language, so they have
tried to design a language that almost needs one. The Fortress language
looks to need an IDE even more. There are languages (most Smalltalks, and
some Forths and Logos) that are merged with their development environment.
Bye,
bearophile

Hmm. Come to think of it, that's not totally unreasonable. One might
even admit it's modern.
In the Good Old Days, when it was usual for an average programmer to
write parts of the code in ASM (before the late eighties -- be it Basic,
Pascal, or even C, some parts had to be done in ASM for a bearable user
experience, back when the mainframes had less power than today's MP3
players), ASM programming was very different on, say, Zilog, MOS, or
Motorola processors. The rumor was that the 6502 was made for hand-coded
ASM, whereas the 8088 was geared more towards automatic code generation
(as in C compilers, etc.). My experience of both certainly seemed to
support this.
Precisely the same thinking can be applied to programming languages and
whether one should use them with an IDE or with independent tools.
(At the risk of flame wars, opinion storms, etc.) I'd venture to say
that the D programming language was created for the Hand Coder: somebody
with an independent text editor (Notepad, vi, Emacs, or whatever) and a
command-line compiler invocation.
The opposite might be C# (if I understand the rumors here correctly; I'm
not familiar with the language itself), or Java, as an even better example.
Java, as a language, is astonishingly trivial to learn. IMHO, it should
take at most half the time that D1 does. The book "The Java Programming
Language" (by Arnold and Gosling, 3p 1996) is a mere 300 pages, printed
in a huge font, with plenty of space before and after subheadings, on
thick paper (as opposed to the 4 other books published at the same time,
which Sun presumed (quite rightly) folks would order together, so it
wouldn't look inferior on the bookshelf).
But to use Java at any productive rate, you simply have to have an IDE
that helps with class and method completion, class-tree inspection, and
preferably two-way UML tools.
So, in a way, Microsoft may be right in assuming (especially since their
thinking is that everybody sits at a computer that's totally dedicated
to the user's current activity anyhow) that preposterous horsepower is,
or should be, available at the code editor.
It's not unthinkable that this actually is The Way of The Future.
----
If we were smart with D, we'd find a way of leapfrogging this
thinking. We have a language that's more powerful than any of C#, Java,
or C++, more practical than Haskell, Scheme, Ruby & co., and more
maintainable than C or Perl, but which *still* is Human Writable. All we
need is some outside-of-the-box thinking, and we might reap some
overwhelming advantages when we combine *this* language with the IDEs
and the horsepower that the modern drone takes for granted.
Easier parsing, CTFE, actually usable templates, practical mixins, pure
functions, safe code, you name it! We have all the bits and pieces to
really make hand-writing plus IDE-assisted program authoring a superior
reality.
"Ain't nobody gonna catch us never!"

So, in a way, Microsoft may be right in assuming that (especially when
their thinking anyway is that everybody sits at a computer that's
totally dedicated to the user's current activity anyhow) preposterous
horse power is (or, should be) available at the code editor.

I think that any real programming project nowadays (regardless of language)
needs tools to help the programmer. The difference between D and C# is that
with D you /can/ get away without an IDE, while with C# you won't get much
done at all without one.

It's not unthinkable that this actually is The Way of The Future.
----
If we were smart with D, we'd find out a way of leapfrogging this
thinking. We have a language that's more powerful than any of C#, Java
or C++, more practical than Haskell, Scheme, Ruby, &co, and more
maintainable than C or Perl, but which *still* is Human Writable. All
we need is some outside-of-the-box thinking, and we might reap some
overwhelming advantages when we combine *this* language with the IDEs
and the horsepower that the modern drone takes for granted.

I think we /already/ have a language that will get there sooner or later.
D is committed to a path where that is its only /logical/ conclusion.

"Ain't nobody gonna catch us never!"

Well, not if we play our hand right. Nothing man ever made is invulnerable
to man.

So, in a way, Microsoft may be right in assuming that (especially when
their thinking anyway is that everybody sits at a computer that's
totally dedicated to the user's current activity anyhow) preposterous
horse power is (or, should be) available at the code editor.

I think that any real programing project now days (regardless of
language) needs tools to help the programmer. The difference between D
and C# is that with D you /can/ get away without an IDE and with C# you
won't get much at all done without one.

I can't agree with this. Most of the time I use an IDE for the
autocompletion, not so much for the build-and-jump-to-error stuff. And I
don't see D being any easier with regard to remembering the name of
that function, which members a class has, or which module all of
these are in.
Why do you say that with D you can get away without an IDE and with C#
you can't? I think you can do the same as in C#: don't use an IDE and
get away with pretty much everything, except you'll be slower at it
(the same goes for D without an IDE).
Again, this also applies to Java. When I started using Java I used the
command line and an editor with just syntax highlighting, and made
programs of several classes without problems. Refactoring was a PITA, and
I'm guessing it's like that in D nowadays. :-P

An IDE provides for a different, and IMO much better, work flow than
using a text editor.
A programmer using a text editor plus a batch/command-line compiler
implements a single-threaded work flow:

    while (not finished) {
        1. write code
        2. run compiler
        3. run debugger (optional)
    }

An IDE allows a concurrent implementation: you have two threads that run
simultaneously, a "Programmer" and an "IDE". The programmer "thread"
writes code, and at the same time the IDE "thread" parses it and
provides feedback: marks syntax errors, provides suggestions, reminds
you of missing imports, etc.
The second approach is clearly superior.
BTW, this has nothing to do with the language. For instance, there are
Eclipse/NetBeans plugins for C++, and once Clang is finished and
integrated with those, C++ will have the full power of a modern IDE,
just like Java or Smalltalk have.
Of course the language can be designed to make this easier, but it is
just as possible for non-cooperating languages like C++.
IMO, designing the language to support this better work flow is a good
decision made by MS, and D should follow it instead of trying to get
away without an IDE.

Do you have some more focused suggestions, then?
Bye,
bearophile

First and foremost, the attitude of people about IDEs needs to be changed.
The 1st rule of commerce is "the customer is always right", and the fact is
that the industry is relying on tools such as IDEs. If we ignore this
fact, D will become another niche academic language that no one uses.
Second, D needs to update its stone-age compilation model, copied from
C++. I'm not saying we need to copy the C# or Java models exactly, but
we need to throw away the current legacy model.
Java has convenient JAR files: you can package everything into nice
modular packages with optional source code and documentation. Similar
stuff is done in .NET.

NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight
that tooth and nail!
One of the best things about D, IMNSHO, is that a D program is "just a
collection of text files". I can, without any special tools, dive in and
view or edit any file I want. I can build with nothing but dmd and a
command line. I can use the source control system of my choice. And, very
importantly, the normal build model produces a stand-alone OS-native
executable.
(Note: the above reasons apply to a pure D app; as for non-pure-D apps,
you're toast anyway, as D or the other language will have to fit in the
opposite language's model and something will always leak. The best bet in
that situation is the simplest system possible, and that too is "just
text files".)

NO ABSOLUTELY NOT! (and I will /not/ apologies for yelling) I will fight
that tooth and nail!
One of the best thing about D IMNSHO is that a D program is "just a
collection of text files". I can, without any special tools, dive in an view
or edit any file I want. I can build with nothing but dmd and a command
line. I can use the the source control system of my choice. And very
importantly, the normal build model produces a stand alone OS native
executable.

Nothing about a different compilation model will make you sacrifice any
of that. He's proposing something else, like a custom object format. It
has nothing to do with the way source is stored, or with how you invoke
the compiler. Java hasn't destroyed any of that by using .class files,
has it?
We already have a proof of concept of this sort of thing for D: LDC.
The LLVM intermediate form is far more amenable to cross-module and
link-time optimization.

And how about certain metaprogramming things that are otherwise
infeasible? To me, the inability to use templates to add virtual
functions to classes seems like a pretty severe leak of D's compilation
model into higher levels of abstraction. The same can be said for the
inability to get information about classes that inherit from a given
class via compile-time reflection. Change the compilation model to
something that is modern and designed with these things in mind, and the
problem goes away.
As an example use case, a few months back I wrote a deep-copy template
that would generate functions to deep copy anything you threw at it,
using only compile-time reflection and a little bit of RTTI. The only
problem is that, because I could not get information about derived
classes at compile time, I couldn't make it work with classes whose
runtime type was a subtype of the compile-time type of the reference.
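
(A minimal sketch of the kind of deep-copy template described above; the
names and the field-by-field copy are illustrative, not the original
code, and it assumes T has a default constructor. It copies the fields
of the static type only, which is exactly why it breaks when the runtime
type is a subclass:)

    // Hypothetical sketch, not the original template: deep-copies an
    // object by walking its fields with compile-time reflection.
    T deepCopy(T)(T src) if (is(T == class))
    {
        auto dst = new T;
        foreach (i, _; src.tupleof)
            dst.tupleof[i] = src.tupleof[i]; // a real version would recurse
                                             // into arrays and references
        return dst;
    }
    // If src's runtime type is a subclass of T, the subclass's fields
    // are silently ignored -- the limitation described above.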

So what would you suggest to make the things you mentioned work? That was:
1. templated virtual functions
2. finding all derived classes (from other source files)
The problem is that D wants to support dynamic linking at the module
level, more or less.
I still wonder how serialization is supposed to work. Yes, we can get
all the information at compile time using __traits. But right now, we have
to register all classes manually using a template function, like "void
registerForSerialization(ClassType)();". What you'd actually need is to
iterate over all classes in your project. So this _needs_ a solution.
My best bet would be to allow some kind of "module preprocessor": some
templated piece of code that is called each time a module is compiled.
Something like "dmd a.d b.d c.d -preprocessor serialize.d"; serialize.d
would be able to iterate over the members of each module (a, b, c) at
compile time.
(We could also use some C#-like attributes to mark transient class
members and the like.)
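
(As a hedged sketch of the manual-registration workaround: the helper
names below are made up, and `myapp.model` is a hypothetical module.
Only iterating over a single, statically known module is expressible,
which is precisely the limitation being complained about:)

    // Illustrative sketch: register every class of ONE known module for
    // serialization. Iterating over all modules of the project is what
    // the language cannot express.
    void registerForSerialization(ClassType)() { /* record typeid, etc. */ }

    void registerModule(alias mod)()
    {
        foreach (name; __traits(allMembers, mod))
            static if (is(__traits(getMember, mod, name) == class))
                registerForSerialization!(__traits(getMember, mod, name))();
    }
    // usage: still one explicit call per module
    //   registerModule!(myapp.model)();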

So what would you suggest to make the things you mentioned work? That was:
1. templated virtual functions
2. finding all derived classes (from other source files)
The problem is that D wants to support dynamic linking on the module
level, more or less.

Well, if you use dynamic linking then all bets are off. As long as you
compile the project into a single binary using static linking, though,
you're good. This would solve a large portion of the cases.

So what would you suggest to make the things you mentioned work? That
was:
1. templated virtual functions
2. finding all derived classes (from other source files)
The problem is that D wants to support dynamic linking on the module
level, more or less.
I still wonder how serialization is supposed to work. Yeah, we can get
all information at compile time using __traits. But right now, we had
to register all classes manually using a template function, like "void
registerForSerialization(ClassType)();". What you'd actually need is
to iterate over all classes in your project. So this _needs_ a
solution.

I'm very interested in any ideas you have for this, as I'm planning on
writing just such a library. Currently I'm at the "list the problems I
see" stage, and I'll have a post on it some time soon.

NO ABSOLUTELY NOT! (and I will /not/ apologies for yelling) I will fight
that tooth and nail!
One of the best thing about D IMNSHO is that a D program is "just a
collection of text files". I can, without any special tools, dive in an
view or edit any file I want. I can build with nothing but dmd and a
command line. I can use the the source control system of my choice. And
very importantly, the normal build model produces a stand alone OS
native executable.
(Note: the above reasons applies to a pure D app, as for non pure D
apps, your toast anyway as D or the other language will have to fit in
the opposite language's model and something will always leak. The best
bet in that system is the simplest system possible and that to is "just
text files".

Hmm. That is *not* what I was suggesting.
I was discussing the compilation model and the object file problems.
D promises link-time compatibility with C, but that's bullshit: on
Windows you can't link C obj files and D object files _unless_ you use
the same compiler vendor (DMD & DMC) or you use some conversion tool,
and that doesn't always work.
Obj files are arcane. Each platform has its own format, and some
platforms (Windows) have more than one.
Compare to Java, where your class files will run on any machine.
I'm not suggesting copying Java's model letter for letter or using a VM
either, but rather using a better representation.
One other thing: this thread also discusses the VS project files. This
is completely irrelevant. Those XML files are VS-specific and their
complexity is MS's problem. Nothing prevents a developer from using
different build tools like make, rake, or SCons with their C# sources,
since VS comes with a command-line compiler. The issue is not the build
tool but rather the compilation model itself.

hmm. that is *not* what I was suggesting.
I was discussing the compilation model and the object file problems.
D promises link-time compatibility with C but that's bullshit - you
can't link on windows obj files for C and object files for D _unless_
you use the same compiler vendor (DMD & DMC) or you use some conversion
tool and that doesn't always work.

Just because it doesn't work on your shitty (SCNR) platform doesn't
mean it's wrong. On Unix, there's a single ABI for C, and linking Just
Works (TM).
But I kind of agree. The most useful thing about compiling each module
to an object file is enabling separate compilation. But this is
useless: it doesn't work because of bugs, and it doesn't "scale"
(because a single module is likely to have way too many transitive
dependencies).

I'm not suggesting coping Java's model letter for letter or using a VM
either, but rather using a better representation.

Ew, that's even worse. Java's model is outright retarded.
I'd just compile a D project to a single (classic) object file. That
would preserve C compatibility. Because the compiler would know _all_ D
modules at compilation, we could enable some spiffy stuff, like virtual
template functions or inter-procedural optimization.

Just because it doesn't work on your shitty (SCNR) platform, it doesn't
mean it's wrong. On Unix, there's a single ABI for C, and linking Just
Works (TM).

do YOU want D to succeed?
that shitty platform is 90% of the market.

But I kind of agree. The most useful thing about compiling each module
to an object file is to enable separate compilation. But this is
useless: it doesn't work because of bugs, it doesn't "scale" (because a
single module is likely to have way too many transitive dependencies).

I'm not suggesting coping Java's model letter for letter or using a VM
either, but rather using a better representation.

Ew, that's even worse. Java's model is right out retarded.
I'd just compile a D project to a single (classic) object file. That
would preserve C compatibility. Because the compiler knows _all_ D
modules at compilation, we could enable some spiffy stuff, like virtual
template functions or inter-procedural optimization.

Instead of compiling per module, it should be more coarse-grained, at
the package/project level. In C# you can compile a single file and
get a "module" file (IIRC), but that's a rare thing; usually you work
with assemblies.

For C link-time compatibility you need to be able to _read_ C object
files and link them into your executable; you gain little from the
ability to _write_ object files.
If you want to do a reverse integration (use D code in your C project),
you can, and IMO should, have created a library anyway instead of using
object files, and the compiler should allow this as a separate option
via a flag, e.g. --make-so or whatever.

oh, I forgot my last point:
for C link-time compatibility you need to be able to _read_ C object
files and link them to your executable. you gain little from the ability
to _write_ object files.

You gain transitivity. Two compilers for different languages that both
produce C object files can link to each other; two compilers that can
only read C object files cannot.

if you want to do a reverse integration (use D code in your C project)
you can and IMO should have created a library anyway instead of using
object files and the compiler should allow this as a separate option via
a flag, e.g. --make-so or whatever

If you can read and write compatible library files, you don't need to
read or write compatible object files, since library files can take the
place of object files.
--
Rainer Deyke - rainerd eldwood.com

oh, I forgot my last point:
for C link-time compatibility you need to be able to _read_ C object
files and link them to your executable. you gain little from the ability
to _write_ object files.

You can transitivity. Two compilers for different languages that both
produce C object files can link to each other; two compiler that can
only read C object files cannot.

good point.

if you want to do a reverse integration (use D code in your C project)
you can and IMO should have created a library anyway instead of using
object files and the compiler should allow this as a separate option via
a flag, e.g. --make-so or whatever

If you can read and write compatible library files, you don't need to
read or write compatible object files, since library files can take the
place of object files.

That's even better: just allow two-way usage of C libs and that's it. No
need to support the C object file formats directly.

hmm. that is *not* what I was suggesting.
I was discussing the compilation model and the object file problems.
D promises link-time compatibility with C but that's bullshit - you
can't link on windows obj files for C and object files for D _unless_
you use the same compiler vendor (DMD & DMC) or you use some
conversion tool and that doesn't always work.

Aside from the GCC stack, I'm not sure anyone can in general. But this is
getting to be a minor point, as VS/GCC are the only compilers I've ever
seen used on Windows.

one other thing, this thread discusses also the VS project files. This
is completely irrelevant. those XML files are VS specific and their
complexity is MS' problem. Nothing prevents a developer from using
different build tools like make, rake or scons with their C# sources
since VS comes with a command line compiler. the issue is not the
build tool but rather the compilation model itself.

I think you are in error here, as the C# files don't contain enough
information for the compiler to know where to resolve symbols. You might
be able to get away with throwing every single .cs/.dll/whatever file in
the project at the compiler all at once. (Now if you want to talk about
archaic!) Aside from that, how can it find the metadata for your types?

one other thing, this thread discusses also the VS project files. This
is completely irrelevant. those XML files are VS specific and their
complexity is MS' problem. Nothing prevents a developer from using
different build tools like make, rake or scons with their C# sources
since VS comes with a command line compiler. the issue is not the
build tool but rather the compilation model itself.

I think you are in error here as the c# files don't contain enough
information for the compiler to know where to resolve symbols. You
might be able to get away with throwing every single .cs/.dll/whatever
file in the project at the compiler all at once. (Now if you want to
talk about archaic!) Aside from that, how can it find meta-data for
your types?

I think I saw this in Scons last time I looked.

Maybe you should back up your statements instead of just guessing.
http://www.scons.org/wiki/CsharpBuilder
Oh look, you have to list all the source files, because C# source files
*do not contain enough information*.
A C# source file containing "using Foo.Bar;" tells you exactly ZERO
about what other files it depends on.
-- Daniel
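
(By contrast, as an illustrative D aside not from the thread: a D import
names a module whose file location follows from the package structure,
so a build tool can discover dependencies from the source alone. The
`foo.bar` module here is hypothetical:)

    // In D, module names map onto the file system, so this line alone
    // tells a tool where to look for the dependency:
    import foo.bar;   // resolved as foo/bar.d on the import path

This is what lets tools like rebuild, mentioned later in the thread,
derive a build from the sources with no project file.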

Exactly. The only practical way to deal with C# is an IDE or a build
system of some kind that is aware of C#. You /can/ deal with it by hand,
but IMHO that would be about half way from D to using C without even a
makefile or build script.

First, thanks Daniel for the evidence I missed.
BCS wrote that a programmer needs to compile all the source files at
once to make it work without an IDE. As I already said, he's wrong, and
Daniel provided the proof above.
Sure, you don't get the full power of an IDE that can track all the
source files in the project for you. That just means that it's worth the
money you pay for it.
You can write makefiles or whatever (SCons, rake, ant, ...) in the same
way you'd do for C and C++. In other words: if you prefer command-line
tools you get the same experience, and if you do use an IDE you get a
*much* better experience.
The same goes for D: either write your own makefile, or use rebuild,
which uses the compiler front-end to parse the source files, just like
you suggested above for C#.
Where in all of that do you see any contradiction to what I said?
Again, I said the D compilation model is ancient legacy and should be
replaced, and that has nothing to do with the format you prefer for your
build scripts.

one other thing, this thread discusses also the VS project files.
This is completely irrelevant. those XML files are VS specific
and their complexity is MS' problem. Nothing prevents a developer
from using different build tools like make, rake or scons with
their C# sources since VS comes with a command line compiler. the
issue is not the build tool but rather the compilation model
itself.

information for the compiler to know where to resolve symbols. You
might be able to get away with throwing every single
.cs/.dll/whatever file in the project at the compiler all at once.
(Now if you want to talk about archaic!) Aside from that, how can
it find meta-data for your types?

I think I saw this in Scons last time I looked.

http://www.scons.org/wiki/CsharpBuilder
Oh look, you have to list all the source files because C# source
files *do not contain enough information*.
A C# source file containing "using Foo.Bar;" tells you exactly ZERO
about what other files it depends on.
-- Daniel

system of some kind that is aware of C#. You /can/ deal with it by
hand but IMHO that would be about half way from D to using C without
even a make file or build script.

first, thanks Daniel for the evidence I missed.
BCS wrote that a programmer needs to compile all the source files at
once to make it work without an IDE. as I already said, he's wrong,
and Daniel provided the proof above.

minor point; I said you have to give the compiler all the source files. You
might not actually need to compile them all, but without some external meta
data, it still needs to be handed the full list because it can't find them on
its own. And at that point you might as well compile them anyway.

sure, you don't get the full power of an IDE that can track all the
source files in the project for you. That just means that it's worth
the money you pay for it.
you can write makefiles or whatever (scons, rake, ant, ...) in the
same way you'd do for C and C++. In other words:
if you prefer command line tools you get the same experience and if
you do use an IDE you get a *much* better experience.
same goes for D - either write your own makefile or use rebuild which

uses the compiler front-end to parse the source files just like you
suggested above for C#.

where did I suggest that?

where in all of that, do you see any contradiction to what I said?
again, I said the D compilation model is ancient legacy and should be
replaced and that has nothing to do with the format you prefer for
your build scripts.

I think that you think I'm saying something other than what I'm trying to
say. I'm struggling to make my argument clear but can't seem to put it in
words. My thesis is that, in effect, C# is married to VS and that D is married
only to the compiler.
My argument is that a D project can be done as nothing but a collection of
.d files with no extra project files of any kind. In c# this is theoretically
possible, but from any practical standpoint it's not going to be done. There
are going to be some extra files that list, in some form, the extra information
the compiler needs to resolve symbols and figure out where to look for stuff.
In any practical environment this extra bit that c# more or less forces you
to have (and D doesn't) will be maintained by some sort of IDE.
To put it quantitatively:
productivity on a scale of 0 to whatever
c# w/o IDE -> ~1
D w/o IDE -> 10
c# w/ IDE -> 100+
D w/ IDE -> 100+
Either C# or D will be lots more productive with an IDE but D without an
IDE will be lots more productive than C# without an IDE. D is designed to
be used however you want, IDE or not. C# is *designed* to be used from within
VS. I rather suspect that the usability of C# without VS is very low on MS's
"things we care about" list.

BCS wrote:
> minor point; I said you have to give the compiler all the source files.

You might not actually need to compile them all, but without some
external meta data, it still needs to be handed the full list because it
can't find them on its own. And at that point you might as well compile
them anyway.

you are only considering small hobby projects. that's not true for big
projects where you do not want to build all at once. Think of DWT for
instance.
besides, you do NOT need to provide all sources, not even just for
partially processing them to find the symbols.
there is no difference between C#'s /r <someAssembly> and GCC's -l<lib>
I don't think you fully understand the C# compilation model -
in C# you almost never compile each source file separately, rather you
compile a bunch of sources into an assembly all at once and you provide
the list of other assemblies your code depends on. so the dependency is
on the package level rather than on the file level. this makes so much
more sense since each assembly is a self contained unit of functionality.

sure, you don't get the full power of an IDE that can track all the
source files in the project for you. That just means that it's worth
the money you pay for it.
you can write makefiles or whatever (scons, rake, ant, ...) in the
same way you'd do for C and C++. In other words:
if you prefer command line tools you get the same experience and if
you do use an IDE you get a *much* better experience.
same goes for D - either write your own makefile or use rebuild which

uses the compiler front-end to parse the source files just like you
suggested above for C#.

where did I suggest that?

I replied to both you and Daniel. I think I was referring to what Daniel
said here.

where in all of that, do you see any contradiction to what I said?
again, I said the D compilation model is ancient legacy and should be
replaced and that has nothing to do with the format you prefer for
your build scripts.

I think that you think I'm saying something other than what I'm trying
to say. I'm struggling to make my argument clear but can't seem to put
it in words. My thesis is that, in effect, C# is married to VS and that
D is married only to the compiler.

I understand your thesis and disagree with it. what i'm saying is that
not only is C# NOT married to VS, but VS isn't even the best
IDE for C#. VS is just a fancy text-editor with lots of bells and
whistles. if you want a real IDE for C# you'd probably use Re-Sharper or
a similar offering.

My argument is that a D project can be done as nothing but a collection
of .d files with no extra project files of any kind. In c# this is
theoretically possible, but from any practical standpoint it's not going
to be done. There are going to be some extra files that list, in some
form, the extra information the compiler needs to resolve symbols and
figure out where to look for stuff. In any practical environment this
extra bit that c# more or less forces you to have (and D doesn't) will be
maintained by some sort of IDE.

this is wrong. you cannot have a big project based solely on .d files.
look at DWT as an example. no matter what tool you use, let's say DSSS,
it still has a config file of some sort which contains that additional
meta-data. a DSSS config file might be shorter than what's required for
a C# project file but don't forget that this comes from DSSS relying on
rebuild which embeds the entire DMDFE.
in practice, both languages need more than just the compiler.

To put it quantitatively:
productivity on a scale of 0 to whatever
c# w/o IDE -> ~1
D w/o IDE -> 10
c# w/ IDE -> 100+
D w/ IDE -> 100+
Either C# or D will be lots more productive with an IDE but D without an
IDE will be lots more productive than C# without an IDE. D is designed
to be used however you want, IDE or not. C# is *designed* to be used
from within VS. I rather suspect that the usability of C# without VS is
very low on MS's "things we care about" list.

minor point; I said you have to give the compiler all the source
files. You might not actually need to compile them all, but without
some external meta data, it still needs to be handed the full list
because it can't find them on its own. And at that point you might
as well compile them anyway.

(BTW: that is only referring to c#)

you are only considering small hobby projects. that's not true for big
projects where you do not want to build all at once. Think of DWT for
instance. besides, you do NOT need to provide all sources, not even
just for partially processing them to find the symbols.
there is no difference between C#'s /r <someAssembly> and GCC's -l<lib>
I don't think you fully understand the C# compilation model -
in C# you almost never compile each source file separately, rather you
compile a bunch of sources into an assembly all at once and you provide
the list of other assemblies your code depends on. so the dependency is
on the package level rather than on the file level. this makes so much
more sense since each assembly is a self contained unit of
functionality.

That is more or less what I thought it was. Also, that indicates that the
design of c# assumes a build model that I think is a bad idea: the "big dumb
all or nothing build", where a sub-part of a program is either up to date,
or rebuilt by recompiling everything in it.

where in all of that, do you see any contradiction to what I said?

be replaced and that has nothing to do with the format you prefer
for your build scripts.

trying to say. I'm struggling to make my argument clear but can't
seem to put it in words. My thesis is that, in effect, C# is married
to VS and that D is married only to the compiler.

not only C# is NOT married to VS but also that VS isn't even the best
IDE for C#.

Maybe I should have said it's married to having *an IDE*; it's just VS by
default and design.

VS is just a fancy text-editor with lots of bells and
whistles. if you want a real IDE for C# you'd probably use Re-Sharper
or a similar offering.

Last I heard Re-Sharper is a VS plugin, not an IDE in its own right, and
even if that has changed, it's still an IDE. Even so, my point is Any IDE
vs. No IDE, so it doesn't address my point.

My argument is that a D project can be done as nothing but a
collection of .d files with no extra project files of any kind. In c#
this is theoretically possible, but from any practical standpoint
it's not going to be done. There are going to be some extra files that
list, in some form, the extra information the compiler needs to resolve
symbols and figure out where to look for stuff. In any practical
environment this extra bit that c# more or less forces you to have
(and D doesn't) will be maintained by some sort of IDE.

look at DWT as an example. no matter what tool you use, let's say DSSS,
it still has a config file of some sort which contains that additional
meta-data.

So DWT depends on DSSS's meta data. That's a design choice of DWT, not D.
What I'm asserting is that C# projects depending on meta data is a design
choice of C#, not the project. D projects can (even if some don't) be practically
designed so that they don't need that meta data whereas, I will assert,
C# projects, for practical purposes, can't do away with it.
--------------
I'm fine with any build system you want to have implemented as long as a
tool stack can still be built that works like the current one. That is that
it can practically:
- support projects that need no external meta data
- produce monolithic OS native binary executables
- work with the only language aware tool being the compiler
I don't expect it to require that projects be done that way and I wouldn't
take any issue if a tool stack were built that didn't fit that list. What
I /would/ take issue with is if the language (okay, or DMD in particular)
were altered to the point that one or more of those *couldn't* be done.

in C# you almost never compile each source file separately, rather you
compile a bunch of sources into an assembly all at once and you provide
the list of other assemblies your code depends on. so the dependency is
on the package level rather than on the file level. this makes so much
more sense since each assembly is a self contained unit of
functionality.

That is more or less what I thought it was. Also, that indicates that
the design of c# assumes a build model that I think is a bad idea: the
"big dumb all or nothing build", where a sub-part of a program is either
up to date, or rebuilt by recompiling everything in it.

C# has a different compilation model which is what I was saying all
along. However I disagree with your assertion that this model is bad.
It makes much more sense than the C++/D model. the idea here is that
each self contained sub-component is compiled by itself. this self
contained component might as well be a single file, nothing in the above
prevents this.
consider a project with 100 files where you have one specific feature
implemented by 4 tightly coupled classes which you put in separate
files. each of the files depends on the rest. what's the best compiling
strategy here?
if you compile each file separately then you parse all 4 files for each
object file which is completely redundant and makes little sense since
you'll need to recompile all of them anyway because of their dependencies.
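The tradeoff described above can be sketched: once files are mutually dependent, editing any one of them dirties the whole cluster, so compiling them one at a time re-parses the same sources over and over. A toy model of the recompile set (file names and the dependency graph are hypothetical):

```python
# Sketch of the point above: with 4 tightly coupled files, the set of files
# to recompile after any edit is the transitive closure of reverse deps.
# `deps` maps each file to the files it imports.
deps = {
    "a.d": {"b.d", "c.d", "d.d"},
    "b.d": {"a.d", "c.d", "d.d"},
    "c.d": {"a.d", "b.d", "d.d"},
    "d.d": {"a.d", "b.d", "c.d"},
    "main.d": {"a.d"},  # main depends only on the feature's entry point
}

def needs_rebuild(changed, deps):
    """Return every file that must be recompiled when `changed` is edited."""
    # Invert the graph: who depends on whom?
    rdeps = {f: set() for f in deps}
    for f, imports in deps.items():
        for imp in imports:
            rdeps.setdefault(imp, set()).add(f)
    # Walk reverse dependencies transitively.
    dirty, stack = {changed}, [changed]
    while stack:
        for dependent in rdeps.get(stack.pop(), ()):
            if dependent not in dirty:
                dirty.add(dependent)
                stack.append(dependent)
    return dirty

# Editing any of the 4 coupled files dirties all 4 (and main.d via a.d),
# so per-file compilation parses the same 4 sources for every object file.
print(sorted(needs_rebuild("c.d", deps)))
```

This is the scenario in which compiling the coupled group as one unit wastes nothing, which is the C# assembly argument in miniature.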

Last I heard Re-Sharper is a VS plugin, not an IDE in its own right,
and even if that has changed, it's still an IDE. Even so, my point is
Any IDE vs. No IDE, so it doesn't address my point.

My use of the term IDE here is a loose one. let me rephrase:
yes, Re-sharper is a plugin for VS. without it VS provides just
text-editing features and I don't consider it an IDE like eclipse is.
Re-sharper provides all the features of a real IDE for VS. so, while
it's "just" a plugin, it's more important than VS itself.

So DWT depends on DSSS's meta data. That's a design choice of DWT, not D.
What I'm asserting is that C# projects depending on meta data is a
design choice of C#, not the project. D projects can (even if some don't)
be practically designed so that they don't need that meta data whereas,
I will assert, C# projects, for practical purposes, can't do away with it.
--------------

What I was saying was not specific for DWT but rather that _any_
reasonably big project will use such a system and it's simply not
practical to do otherwise. how would you handle a project with a hundred
files that takes 30 min. to compile without any tool whatsoever except
the compiler itself?

I'm fine with any build system you want to have implemented as long as a
tool stack can still be built that works like the current one. That is
that it can practically:
- support projects that need no external meta data
- produce monolithic OS native binary executables
- work with the only language aware tool being the compiler
I don't expect it to require that projects be done that way and I
wouldn't take any issue if a tool stack were built that didn't fit that
list. What I /would/ take issue with is if the language (okay, or DMD
in particular) were altered to the point that one or more of those
*couldn't* be done.

- support projects that need no external meta data

both languages.

- produce monolithic OS native binary executables

executables. I never said I want this aspect to be brought to D.

- work with the only language aware tool being the compiler

just to clarify: you _can_ compile C# files one at a time just like you
would with C or D, and there is an output format for that which is not an
assembly.

in C# you almost never compile each source file separately, rather
you compile a bunch of sources into an assembly all at once and you
provide the list of other assemblies your code depends on. so the
dependency is on the package level rather than on the file level.
this makes so much more sense since each assembly is a self contained
unit of functionality.

the design of c# assumes a build model that I think is a bad idea;
the "big dumb all or nothing build" where a sub part of a program is
either up to date, or rebuilt by recompiling everything in it.

object file which is completely redundant and makes little sense since
you'll need to recompile all of them anyway because of their
dependencies.

All of the above is (as far as D goes) an implementation detail[*]. What
I'm railing on is that in c# 1) you have no option BUT to do it that way
and 2) the only practical way to build is from a config file
[*] I am working very slowly on building a compiler and am thinking of building
it so that along with object files, it generates "public export" (.pe) files
that have a binary version of the public interface for the module. I'd set
it up so that the compiler never parses more than one file per process. If
you pass it more, it forks and when it runs into imports, it loads the .pe
files after, if needed, forking off a process to generate it.
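The footnoted compiler doesn't exist, but the key property the .pe scheme would buy can be sketched: if dependents consume only a module's public interface, then a digest of that interface decides whether they need recompiling, so body-only edits stop cascading. A toy model of that check (the declarations and the .pe format are of course hypothetical):

```python
# Sketch of the "public export" (.pe) idea above - a hypothetical design,
# not an existing DMD feature: keep a digest of each module's *public
# interface* separate from its full source, and recompile dependents only
# when that digest changes.
import hashlib

def interface_digest(public_decls):
    """Digest of the public declarations only (the imagined .pe content)."""
    return hashlib.sha256("\n".join(sorted(public_decls)).encode()).hexdigest()

# Version 1 of a module: one public function (private helpers don't appear).
v1_public = ["int frob(int x)"]
# Version 2: only a function *body* changed; the public interface is the same.
v2_public = ["int frob(int x)"]
# Version 3: the signature changed, so dependents really do need recompiling.
v3_public = ["long frob(long x)"]

unchanged = interface_digest(v1_public) == interface_digest(v2_public)
changed = interface_digest(v1_public) != interface_digest(v3_public)
print(unchanged, changed)  # body-only edits don't dirty dependents; API edits do
```

The same property is what makes the forked per-file compiles in the footnote parallelizable: each process needs only the small .pe files of its imports, never the full sources.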

without it VS provides just
text-editing features and I don't consider it an IDE like eclipse is.

The IDE features I don't want the language to depend on are in VS, so this
whole sideline is unimportant.

So DWT depends on DSSS's meta data. That's a design choice of DWT, not
D. What I'm asserting is that C# projects depending on meta data is
a design choice of C#, not the project. D projects can (even if some
don't) be practically designed so that they don't need that meta data
whereas, I will assert, C# projects, for practical purposes, can't
do away with it.
--------------

reasonably big project will use such a system and it's simply not
practical to do otherwise.

I assert that the above is false because...

how would you handle a project with a hundred
files that takes 30 min. to compile without any tool whatsoever
except the compiler itself?

I didn't say that the only tool you can use is the compiler. I'm fine with
bud/DSSS/rebuild being used. What I don't want is a language that effectively
_requires_ that some config file be maintained along with the code files.
I suspect that the bulk of pure D projects (including large ones) /could/
have been written so that they didn't need a dsss.conf file, and many of those
that do have a dsss.conf, I'd almost bet, could be handled without it. IIRC,
all that DSSS really needs is what file to start with (whereas c# needs
to be handed the full file list at some point).

I'm fine with any build system you want to have implemented as long
as a tool stack can still be built that works like the current one.
That is that it can practically:
- support projects that need no external meta data
- produce monolithic OS native binary executables
- work with the only language aware tool being the compiler
I don't expect it to require that projects be done that way and I
wouldn't take any issue if a tool stack were built that didn't fit
that list. What I /would/ take issue with is if the language (okay,
or DMD in particular) were altered to the point that one or more of
those *couldn't* be done.

- support projects that need no external meta data

in both languages.

As I said, I think this is false.

- produce monolithic OS native binary executables

executables. I never said I want this aspect to be brought to D.

Mostly I'm interested in the monolithic bit (no DLL hell!) but I was just
pulling out my laundry list.

- work with the only language aware tool being the compiler

ditto the point on 1

just to clarify: you _can_ compile C# files one at a time just like
you would with C or D, and there is an output format for that which
is not an assembly.

I think we won't converge on this.
I think I'm seeing a tools dependency issue that I don't like in the design
of C# that I _know_ I'm not seeing in D. You think that D is already just
as dependent on the tools and don't see that as an issue.
One of the major attractions for me to DMD is its build model so I tend to
be very conservative and resistant to change on this point.

in C# you almost never compile each source file separately, rather
you compile a bunch of sources into an assembly all at once and you
provide the list of other assemblies your code depends on. so the
dependency is on the package level rather than on the file level.
this makes so much more sense since each assembly is a self contained
unit of functionality.

the design of c# assumes a build model that I think is a bad idea;
the "big dumb all or nothing build" where a sub part of a program is
either up to date, or rebuilt by recompiling everything in it.

object file which is completely redundant and makes little sense since
you'll need to recompile all of them anyway because of their
dependencies.

All of the above is (as far as D goes) an implementation detail[*]. What
I'm railing on is that in c# 1) you have no option BUT to do it that way
and 2) the only practical way to build is from a config file

prevents you to create your own compiler for C# as well.

[*] I am working very slowly on building a compiler and am thinking of
building it so that along with object files, it generates "public
export" (.pe) files that have a binary version of the public interface
for the module. I'd set it up so that the compiler never parses more
than one file per process. If you pass it more, it forks and when it
runs into imports, it loads the .pe files after, if needed, forking off
a process to generate it.

sounds like an interesting idea - basically your compiler will generate
the meta data just as an IDE does for C#.

without it VS provides just
text-editing features and I don't consider it an IDE like eclipse is.

The IDE features I don't want the language to depend on are in VS, so
this whole sideline is unimportant.

So DWT depends on DSSS's meta data. That's a design choice of DWT, not
D. What I'm asserting is that C# projects depending on meta data is
a design choice of C#, not the project. D projects can (even if some
don't) be practically designed so that they don't need that meta data
whereas, I will assert, C# projects, for practical purposes, can't
do away with it.
--------------

reasonably big project will use such a system and it's simply not
practical to do otherwise.

I assert that the above is false because...

how would you handle a project with a hundred
files that takes 30 min. to compile without any tool whatsoever
except the compiler itself?

I didn't say that the only tool you can use is the compiler. I'm fine
with bud/DSSS/rebuild being used. What I don't want is a language that
effectively _requires_ that some config file be maintained along with
the code files. I suspect that the bulk of pure D projects (including
large ones) /could/ have been written so that they didn't need a
dsss.conf file, and many of those that do have a dsss.conf, I'd almost
bet, could be handled without it. IIRC, all that DSSS really needs is what
file to start with (whereas c# needs to be handed the full file list at
some point).

you miss a critical issue here: DSSS/rebuild/etc can mostly be used
without a config file _because_ they embed the DMDFE which generates
that information (dependencies) for them. There is no conceptual
difference between that and using an IDE. you just moved some
functionality from the IDE to the build tool.
both need to parse the code to get the dependencies.

I'm fine with any build system you want to have implemented as long
as a tool stack can still be built that works like the current one.
That is that it can practically:
- support projects that need no external meta data
- produce monolithic OS native binary executables
- work with the only language aware tool being the compiler
I don't expect it to require that projects be done that way and I
wouldn't take any issue if a tool stack were built that didn't fit
that list. What I /would/ take issue with is if the language (okay,
or DMD in particular) were altered to the point that one or more of
those *couldn't* be done.

- support projects that need no external meta data

in both languages.

As I said, I think this is false.

- produce monolithic OS native binary executables

executables. I never said I want this aspect to be brought to D.

Mostly I'm interested in the monolithic bit (no DLL hell!) but I was
just pulling out my laundry list.

- work with the only language aware tool being the compiler

ditto the point on 1

just to clarify: you _can_ compile C# files one at a time just like
you would with C or D, and there is an output format for that which
is not an assembly.

I think we won't converge on this.
I think I'm seeing a tools dependency issue that I don't like in the
design of C# that I _know_ I'm not seeing in D. You think that D is
already just as dependent on the tools and don't see that as an issue.
One of the major attractions for me to DMD is its build model so I tend
to be very conservative and resistant to change on this point.

the monolithic executable case and ignore the fact that in real life
projects the common case is to have sub-components, be it Java jars, C#
assemblies, C/C++ dll/so/a or D DDLs.
in any of those cases you still need to manage the sub-components and
their dependencies.
one of the reasons for "dll hell" is because c/c++ do not handle this
properly and that's what Java and .net and DDL try to solve. the
dependency is already there for external tools to manage this complexity.

if you compile each file separately then you parse all 4 files for
each object file which is completely redundant and makes little
sense since you'll need to recompile all of them anyway because of
their dependencies.

What I'm railing on is that in c# 1) you have no option BUT to do it
that way and 2) the only practical way to build is from a config file

prevents you to create your own compiler for C# as well.

I disagree, see below:

[*] I am working very slowly on building a compiler and am thinking
of building it so that along with object files, it generates "public
export" (.pe) files that have a binary version of the public
interface for the module. I'd set it up so that the compiler never
parses more than one file per process. If you pass it more, it forks
and when it runs into imports, it loads the .pe files after, if
needed, forking off a process to generate it.

generate the meta data just as an IDE does for C#.

Maybe that's the confusion: No it won't!
That's not the meta data I've been talking about. The meta data that c# needs
that I'm referring to is the list of files that the compiler needs to look
at. In D this information can be derived from the text of the import statements
in the .d files (well, it also needs the import directory list). In c#
this can't be done even within a single assembly. Without me explicitly telling
the compiler what files to look in, it can't find anything! It can't even
just search the local dir for files that have what it's looking for, because
I could have old copies of the files lying around that shouldn't be used.

I didn't say that the only tool you can use is the compiler. I'm fine
with bud/DSSS/rebuild being used. What I don't want, is a language
that effectively _requiters_ that some config file be maintained
along with the code files. I suspect that the bulk of pure D projects
(including large ones) /could/ have been written so that they didn't
need a dsss.conf file and many of those that do have a dsss.conf, I'd
almost bet can be handed without it. IIRC, all that DSSS really needs
is what file to start with (where as c# needs to be handed the full
file list at some point).

without a config file _because_ they embed the DMDFE which generates
that information (dependencies) for them. There is no conceptual
difference between that and using an IDE. you just moved some
functionality from the IDE to the build tool.
both need to parse the code to get the dependencies.

Again, in c# you /can't get that information/ by parsing the code. And that
is my point exactly.

I think we won't converge on this.
I think I'm seeing a tools dependency issue that I don't like in the
design of C# that I _know_ I'm not seeing in D. You think that D is
already just as dependent on the tools and don't see that as an
issue.
One of the major attractions for me to DMD is its build model so I
tend to be very conservative and resistant to change on this point.

you're right that we will not converge on this. you only concentrate on
the monolithic executable case and ignore the fact that in real life
projects the common case is to have sub-components, be it Java jars, C#
assemblies, C/C++ dll/so/a or D DDLs.

Yes, the common case, but that doesn't make it the right case. See below.

in any of those cases you still need to manage the sub-components and
their dependencies.
one of the reasons for "dll hell" is because c/c++ do not handle this
properly and that's what Java and .net and DDL try to solve. the
dependency is already there for external tools to manage this
complexity.

I assert that it is very rare that a program NEEDS to use a DLL/so/DDL type of
system. The only unavoidable reasons to use them that I see are:
1) you are forced to use code that can't be had at compile time (rare outside
of plugins and they don't count because they are not your code)
2) you have lots of code that is mostly never run and you can't load it all
(and that sounds like you have bigger problems)
3) you are running into file size limits (outside of something like a kernel
image, this is unlikely)
4) booting takes too long (and that says you're doing something else wrong)
It is my strongly held opinion that the primary argument for dlls and friends,
code sharing, is attempting to solve a completely intractable problem. As
soon as you bring in versioning, installers and uninstallers, the problem
becomes flat out impossible to solve. (the one exception is for low level
system things like KERNEL32.DLL and stdc*.so)
In this day and age where HDD's are ready to be measured in TB and people
ask how many Gigs of RAM you have, *who cares* about code sharing?

I assert that it is very rare that a program NEEDS to use a DLL/so/DDL type
of system. The only unavoidable reasons to use them that I see are:
1) you are forced to use code that can't be had at compile time (rare
outside of plugins and they don't count because they are not your code)
2) you have lots of code that is mostly never run and you can't load it
all (and that sounds like you have bigger problems)
3) you are running into file size limits (outside of something like a
kernel image, this is unlikely)
4) booting takes too long (and that says you're doing something else wrong)
It is my strongly held opinion that the primary argument for dlls and
friends, code sharing, is attempting to solve a completely intractable
problem. As soon as you bring in versioning, installers and
uninstallers, the problem becomes flat out impossible to solve. (the one
exception is for low level system things like KERNEL32.DLL and stdc*.so)
In this day and age where HDD's are ready to be measured in TB and
people ask how many Gigs of RAM you have, *who cares* about code sharing?

so, in your opinion Office, photoshop, adobe acrobat, can all be
provided as monolithic executables? that's just ridiculous.
My work uses this monolithic model approach for some programs and this
brings so much pain that you wouldn't believe. Now we're trying to
slowly move away from this retarded model. I'm talking from experience
here - the monolithic approach does NOT work.
just so you'd understand the scale I'm talking about - our largest
executable is 1.5 Gigs in size.
you're wrong on both accounts, DLL type systems are not only the common
case, they are the correct solution.
the "DLL HELL" you're so afraid of is mostly solved by using
jars/assemblies (smart dlls) that contain meta-data such as versions.
this problem is also solved on Linux systems that use package managers,
like Debian's APT.
monolithic design like you suggest is in fact bad design that leads to
things like - Windows Vista running slower on my 8-core machine than
Windows XP on my extremely weak laptop.

just so you'd understand the scale I'm talking about - our largest
executable is 1.5 Gigs in size.

How is 1.5 GB of dlls better than a 1.5 GB executable? (And don't
forget, removing dead code across dll boundaries is a lot more difficult
than removing it within a single executable, so you're more likely to
have 3 GB of dlls.)

you're wrong on both accounts, DLL type systems are not only the common
case, they are the correct solution.
the "DLL HELL" you're so afraid of is mostly solved by using
jars/assemblies (smart dlls) that contain meta-data such as versions.
this problem is also solved on Linux systems that use package managers,
like Debian's APT.

You have a curious definition of "solved". Package managers work
(sometimes, sort of) so long as you get all of your software from a
single source and you never need a newer version of your software that
is not yet available in package form. I've got programs that I've
almost given up on deploying at all because of assembly hell. Plain old
DLLs weren't anywhere near as bad as that.
My favorite deployment system is the application bundle under OS X.
It's a directory that looks like a file. Beneath the covers it has
frameworks and configuration files and multiple executables and all that
crap, but to the user, it looks like a single file. You can copy it,
rename it, move it (on a single computer or between computers), even
delete it, and it just works. Too bad the system doesn't work under any
other OS.
--
Rainer Deyke - rainerd eldwood.com

My favorite deployment system is the application bundle under OS X.
It's a directory that looks like a file. Beneath the covers it has
frameworks and configuration files and multiple executables and all that
crap, but to the user, it looks like a single file. You can copy it,
rename it, move it (on a single computer or between computers), even
delete it, and it just works. Too bad the system doesn't work under any
other OS.

Oh man would I love to have that :) I've daydreamed of a system a lot
like that. One thing I'd mandate if I ever designed my ideal system is
that installation /in total/ is plunking in the dir, and removal is
simply deleting it. Once it's gone, it's gone. Nothing may remain that
can in ANY WAY affect other apps. That implies that after each boot
nothing is installed and everything is installed on launch.
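For readers who haven't seen one, an OS X bundle looks roughly like this (the names are illustrative, not taken from the thread):

```
MyApp.app/                <- shown to the user as a single "file"
  Contents/
    Info.plist            <- bundle metadata (identifier, version)
    MacOS/
      MyApp               <- the actual executable
    Frameworks/           <- the app's private copies of shared libraries
    Resources/            <- icons, localized strings, etc.
```

Because the shared libraries live inside the bundle, copying or deleting the directory really is the whole install/uninstall story.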

It is my strongly held opinion that the primary argument for dlls and
friends, code sharing, is attempting to solve a completely
intractable problem. As soon as you bring in versioning, installers
and uninstallers, the problem becomes flat out impossible to solve.
(the one exception is for low level system things like KERNEL32.DLL
and stdc*.so)

so, in your opinion Office, photoshop, adobe acrobat, can all be
provided as monolithic executables? that's just ridiculous.

Office, I wouldn't mind. Photoshop, it's got lots of plugins (#1) right?
adobe acrobat, it might as well BE a plugin (#1 again).

My work uses this monolithic approach for some programs and it brings
more pain than you would believe.

How exactly?

just so you'd understand the scale I'm talking about - our largest
executable is 1.5 Gigs in size.

That's point #3, and I'd love to know how you got that big. (I guess I
should add a #5: Resource-only DLLs.)

you're wrong on both accounts, DLL type systems are not only the
common case, they are the correct solution.

I didn't say they aren't common. I said it's a bad idea IMO.

the "DLL HELL" you're so afraid of is mostly solved by using
jars/assemblies (smart dlls) that contain meta-data such as versions.
this problem is also solved on Linux systems that use package
managers, like Debian's APT.

If you ignore system libraries like .NET itself, I'd almost bet that if
you look at them long enough, those systems, from a practical
standpoint, are almost the same as installing dll/so files to be used
only by one program. That is, the average number of
programs/applications that depend on any given file is 1. And as I
already pointed out, I'll burn disk space to get the reliability that
static linkage gets me.
I seem to recall running into this issue with .NET assemblies and .so
files within the last year.

monolithic design like you suggest is in fact bad design that leads to
things like - Windows Vista running slower on my 8-core machine than
Window XP on my extremely weak laptop.

If the same design runs slower with static linkage than with dynamic
linkage, then there is something wrong with the OS. I can say that with
confidence because everything that a static version needs to do, the
dynamic version will also do, and then a pile more.

if you compile each file separately then you parse all 4 files for
each object file, which is completely redundant and makes little
sense since you'll need to recompile all of them anyway because of
their dependencies.

What I'm railing on is that in c# 1) you have no option BUT to do it
that way, and 2) the only practical way to build is from a config file.

prevents you from creating your own compiler for C# as well.

I disagree, see below:

[*] I am working very slowly on building a compiler and am thinking
of building it so that along with object files, it generates "public
export" (.pe) files that have a binary version of the public
interface for the module. I'd set it up so that the compiler never
parses more than one file per process. If you pass it more, it forks,
and when it runs into imports, it loads the .pe files, forking off a
process to generate them first if needed.

generate the meta data just as an IDE does for C#.

Maybe that's the confusion: No it won't!
That's not the meta data I've been talking about. The meta data that c#
needs that I'm referring to is the list of files that the compiler needs
to look at. In D this information can be derived from the text of the
import statements in the .d files (well, it also needs the import
directory list). In c# this can't be done even within a single assembly.
Without me explicitly telling the compiler what files to look in, it
can't find anything! It can't even just search the local dir for files
that have what it's looking for, because I could have old copies of the
files lying around that shouldn't be used.
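To make that point concrete, here is a rough sketch (not a real tool; it is naive about comments, string literals, and selective imports) of how a build tool could recover a D module's dependencies from nothing but the source text:

```python
# Naive sketch: recover a D module's compile-time dependencies purely
# from its source text.  It ignores comments, strings, and selective
# imports (import std.stdio : writeln;), so a real tool would want a
# real parser -- but the point stands: no external project file needed.
import re

IMPORT_RE = re.compile(r'\bimport\s+([\w.]+(?:\s*,\s*[\w.]+)*)\s*;')

def d_imports(source):
    """Return the module names imported by a chunk of D source text."""
    mods = []
    for m in IMPORT_RE.finditer(source):
        mods.extend(part.strip() for part in m.group(1).split(','))
    return mods

example = """
module app.main;
import std.stdio;
import util.log, util.conf;
"""
print(d_imports(example))  # ['std.stdio', 'util.log', 'util.conf']
```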

I didn't say that the only tool you can use is the compiler. I'm fine
with bud/DSSS/rebuild being used. What I don't want is a language
that effectively _requires_ that some config file be maintained
along with the code files. I suspect that the bulk of pure D projects
(including large ones) /could/ have been written so that they didn't
need a dsss.conf file, and many of those that do have a dsss.conf, I'd
almost bet could be handled without it. IIRC, all that DSSS really needs
is what file to start with (whereas c# needs to be handed the full
file list at some point).

without a config file _because_ they embed the DMDFE which generates
that information (dependencies) for them. There is no conceptual
difference between that and using an IDE. you just moved some
functionality from the IDE to the build tool.
both need to parse the code to get the dependencies.

Again, in c# you /can't get that information/ by parsing the code. And
that is my point exactly.

I think we won't converge on this.
I think I'm seeing a tools dependency issue that I don't like in the
design of C# that I _know_ I'm not seeing in D. You think that D is
already just as dependent on the tools and don't see that as an
issue.
One of the major attractions for me to DMD is its build model so I
tend to be very conservative and resistant to change on this point.

you're right that we will not converge on this. you only concentrate on
the monolithic executable case and ignore the fact that in real life
projects the common case is to have sub-components, be it Java jars, C#
assemblies, C/C++ dll/so/a or D DDLs.

Yes, the common case, but that doesn't make it the right case. See below.

in any of those cases you still need to manage the sub-components and
their dependencies.
one of the reasons for "dll hell" is because c/c++ do not handle this
properly, and that's what Java and .net and DDL try to solve. the
dependency is already there for external tools to manage this
complexity.

I assert that it is very rare that a program NEEDS to use a DLL/so/DDL
type of system. The only unavoidable reasons to use them that I see are:
1) you are forced to use code that can't be had at compile time (rare
outside of plugins, and they don't count because they are not your code)
2) you have lots of code that is mostly never run and you can't load it
all (and that sounds like you have bigger problems)
3) you are running into file size limits (outside of something like a
kernel image, this is unlikely)
4) booting takes too long (and that says you're doing something else
wrong)

5) The most common case - your program relies on some third-party middleware
that doesn't provide any source code.

They /should/ ship static libs as well IMNSHO. Also, the same aside as
for #1 fits here.

What I was saying was not specific for DWT but rather that _any_
reasonably big project will use such a system and it's simply not
practical to do otherwise. how would you handle a project with a hundred
files that takes 30 min. to compile without any tool whatsoever except
the compiler itself?

Make?
And if you're smart, a version control system. (Whether you use an IDE
or not.)
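For what it's worth, the make answer to the hypothetical hundred-file project is short. This is an illustrative sketch (tool and path names assumed, not from the thread): make recompiles only the objects whose sources changed, so after a one-file edit the 30-minute build shrinks to one compile plus a link.

```make
# Illustrative Makefile for a D project (names assumed): only objects
# whose .d sources changed are rebuilt, then the result is relinked.
# (Module-to-module import dependencies would still need generating;
# recipe lines must be indented with tabs.)
DMD  = dmd
SRCS = $(wildcard src/*.d)
OBJS = $(SRCS:.d=.o)

app: $(OBJS)
	$(DMD) -of$@ $(OBJS)

%.o: %.d
	$(DMD) -c -of$@ $<

clean:
	rm -f app $(OBJS)
```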

Make _is_ a build tool

Yes. But since it's been on every Unix for almost 40 years, it doesn't
count here. :-)
Besides, it has tons of other uses, too. One might as well say that a
text editor is a build tool. You construct (or erect) software with it. ;-)

OK, and so can bash, because it can run scripts.
But that's not the point. Neither make nor VS's equivalent is what this
thread was about. At least not where I was involved. My point is that
the design of c# *requires* the maintenance (almost certainly by a
c#-specific IDE) of some kind of external metadata file that contains
information that can't be derived from the source code itself, whereas
with D, no such metadata is *needed*. If you wanted, you could build a
tool to take D source code and generate a makefile or a bash build
script from it.

doesn't count here. :-)
Besides, it has tons of other uses, too. One might as well say that a
text editor is a build tool. You construct (or erect) software with
it. ;-)

OK and so can bash because it can run scripts.

No, the main purpose of make is to build software. You probably wouldn't
think to use a makefile to automate converting flac files to ogg files,
for instance. Or look at bashburn -- it has a user interface (albeit
using text menus rather than graphics). You might be able to do that
with a makefile, but it would be seriously awkward, and you'd mainly be
using shell scripting.
And bash does not have any special features to assist in building software.

But that's not the point. Neither make nor VS's equivalent is what this
thread was about. At least not where I was involved. My point is that
the design of c# *requires* the maintenance (almost certainly by a
c#-specific IDE) of some kind of external metadata file that contains
information that can't be derived from the source code itself, whereas
with D, no such metadata is *needed*. If you wanted, you could build
a tool to take D source code and generate a makefile or a bash build
script from it.

If you wanted, you could create a tool to do the same with C# source
code, assuming there exists a directory containing all and only those
source files that should end up in the resulting assembly. If you follow
C# best practices, this is what you will do -- and your directory
structure will match your namespaces besides. But this is not enforced.

code, assuming there exists a directory containing all and only those
source files that should end up in the resulting assembly.

I'm /not/ willing to assume that (because all too often it's not true),
and you also need the list of other assemblies that should be included.

you can't create a standalone executable in D just by parsing the D
source files (for all the imports) if you need to link in external libs.
you need to at least specify the lib name if it's on the linker's search
path or provide the full path otherwise.
Same thing with assemblies.
you have to provide that meta-data (lib names) anyway both in C# and D.
the only difference is that C# (correctly) recognizes that this is the
better default.

C# assemblies are analogous to C/C++/D libs.
you can't create a standalone executable in D just by parsing the D
source files (for all the imports) if you need to link in external libs.
you need to at least specify the lib name if it's on the linker's
search path or provide the full path otherwise.

pragma(lib, ...); //?

Same thing with assemblies.
you have to provide that meta-data (lib names) anyway both in C# and
D. the only difference is that C# (correctly) recognizes that this is
the better default.

IMHO the c# way is the /worse/ default. Based on that being my opinion, I
think we have found where we will have to disagree. Part of my reasoning
is that in the normal case, for practical reasons, that file will have to
be maintained by an IDE, thus /requiring/ development to be done in an
IDE of some kind. In D, that data can normally be part of the source
code, and only in unusual cases does it need to be formally codified.

C# assemblies are analogous to C/C++/D libs.
you can't create a standalone executable in D just by parsing the D
source files (for all the imports) if you need to link in external libs.
you need to at least specify the lib name if it's on the linker's
search path or provide the full path otherwise.

pragma(lib, ...); //?

that's a compiler directive. nothing prevents a C# compiler from
implementing this. In general though this is a bad idea. why would you
want to embed such outside data inside your code? info needed for
building your task should not be part of the code. what if I want to
rename the lib, do I have to recompile everything? what if I don't have
the source? what if I want to change the version? what if I want to
switch a vendor for this lib?

C# assemblies are analogous to C/C++/D libs.
you can't create a standalone executable in D just by parsing the D
source files (for all the imports) if you need to link in external
libs.
you need to at least specify the lib name if it's on the linker's
search path or provide the full path otherwise.

implement this. In general though this is a bad idea. why would you
want to embed such outside data inside your code?

Because it's needed to build the code

info needed for building your task should not be part of the code.

IMO it should. Ideally it should be available in the code in a form tools
can read. At a minimum, it should be in the comment header. The only other
choice is placing it outside your code and we have already covered why I
think that is a bad idea.

what if I want to rename the lib,

So you rename the lib, and whatever references to it (inside the code
or outside) end up needing to be updated. Regardless, you will need to
update something by hand or have a tool do it for you. I see nothing
harder about updating it in the code than outside the code.

do I have to recompile everything?

Nope. pragma(lib, ...) just passes a static lib to the linker and doesn't
have any effect at runtime. (if you are dealing with .dll/.so libraries
then you link in an export .lib with a pragma, or load them manually and
don't even worry about it at all)
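This ties back to deriving build information from the source text: because the pragma lives in the code, a tool can scan for it, just as it scans for imports. A rough sketch (hypothetical helper, naive about comments and strings):

```python
# Naive sketch: because D's pragma(lib, "name") puts the link dependency
# in the source text, a build tool can recover linker inputs the same
# way it recovers imports -- by scanning the code.  (Ignores comments
# and strings; a real tool would use a parser.)
import re

def d_libs(source):
    """Return library names given by pragma(lib, "...") directives."""
    return re.findall(r'pragma\s*\(\s*lib\s*,\s*"([^"]+)"\s*\)', source)

src = 'pragma(lib, "curl");\nimport std.net.curl;'
print(d_libs(src))  # ['curl']
```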

what if I don't have the source?

It's pointing at a static library, so source doesn't matter. If you are
working from source then you don't need the pragma, and what matters is
DMD's import path (if it is an unrelated code tree, in which case the
path can be developer-specific and needs to be set up per system).

what if I want to change the version?

In that case you change the pragma. (again assuming static libs and the same
side note for dynamic libs)

what if I want to switch a vendor for this lib?

I have never heard of this being possible without major changes in the
calling code, so it doesn't matter.

What I was trying to say is that you're hardcoding the lib name and
version inside the code. I see two problems with this: if the pragma is
in my code then I need to re-compile my code if I want to edit the
pragma (rename lib, change version, change vendor, etc...)
if the pragma is in some 3rd-party component which I don't have the
source for, then I can't change the pragma.
either way, it conflicts with my work-flow and goals.
I do not wish to recompile a 1.5GB standalone executable if I just
changed a minor version of a lib.
IIRC, the math lib in C/C++ comes in three flavors so you can choose
your trade-off (speed or accuracy), and the only thing you need to do is
just link the flavor you want into your executable.
you seem keen on combining the build process with compilation, which is
in my experience a very bad thing. it may simplify your life for your
small projects, but as I was telling you before it's a pain in the neck
for the scale of projects I work on. I don't get why you refuse to see
that.
what you suggest is _not_ a good solution for me.

I see your point but I think it is invalid.
For starters, I could be wrong, but I think that the use of pragma(lib,)
can't be detected in object code; I think it just instructs DMD to pass
the lib on to the linker when it gets called by DMD. Second, if I am
wrong about that, I still think it doesn't matter, because (as far as
static libraries go) I think it would be a very BAD idea to try and
switch them out from under a closed-source lib. Third, if you really
want to go mucking around with those internals, you can always copy the
new lib over the old one.

IIRC, the math lib in C/C++ comes in three flavors so you can choose
your trade-off (speed or accuracy) and the only thing you need to do
is just link the flavor you want in your executable.

Everything needs a math lib, so there will be a default. I'm not willing
to second-guess the original programmer if they choose to switch to
another lib. The same goes for other libs as well. If you start
switching to libs that the lib's programmer doesn't explicitly support,
you're already on your own and you have bigger problems than what I'm
talking about.

you seem keen on combining the build process with compilation which is
in my experience a very bad thing. it may simplify your life for your
small projects but as I was telling you before it's a pain in the neck
for the scale of projects I work on. I don't get why you refuse to see
that. what you suggest is _not_ a good solution for me.

What I want is a language where most of the time you build a project
from only the information in the source code. What I don't want is a
language where the only way to keep track of the information you need
to build a project is with an external data file. I don't want that
because the only practical way to do that is to _force_ the programmer
to use an IDE and have it maintain that file.

What I don't want is a language where the only way to keep track
of the information you need to build a project is with an external
data file.

People have been developing projects using an "external data file"
for decades. It's called the make file.

I don't want that because the only practical way to do that is to
_force_ the programmer to use an IDE and have it maintain that file.

What exactly is it about C# that makes you think you are FORCED
to use an IDE to write the code?
MSBuild.exe is nothing more than Microsoft's replacement for make.exe:
a version of make that takes XML make files as its input.

What I want is a language where most of the time you build a project
from only the information in the source code.

You can build this Simple.cs file:

to create a Simple.exe using nothing but this command line:
csc.exe /r:System.dll; D:\temp\simple.cs

Most any language has what I want for single-file programs. But when you
start getting dozens of files in a project (including some files mixed
into the working directory that shouldn't be included) it breaks down.

What I don't want is a language where the only way to keep track of
the information you need to build a project is with an external data
file.

decades. It's called the make file.

C doesn't have the property I want, although it's not as bad as c#,
because makefiles are intended to be edited by hand. I'd rather not need
make at all until I start having extra-language build steps (yacc,
rpm/deb generation, regression tests, etc.).

I don't want that because the only practical way to do that is to
_force_ the programmer to use an IDE and have it maintain that file.

an IDE to write the code?

The only practical way to keep track of which files do and do not get
compiled is a .csproj file, and the only reasonable way to maintain them
is VS or the equivalent.
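For the record, this is roughly what that external metadata looks like in an old-style (pre-SDK) .csproj; the file names here are illustrative. The compiler sees only what is explicitly listed, which is exactly the list an IDE maintains for you:

```xml
<!-- Illustrative fragment of an old-style .csproj: the build includes
     only the files explicitly listed; nothing is inferred from the
     source code itself. -->
<ItemGroup>
  <Compile Include="Program.cs" />
  <Compile Include="Util\Log.cs" />
</ItemGroup>
<ItemGroup>
  <Reference Include="System" />
  <Reference Include="System.Xml" />
</ItemGroup>
```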

second, D needs to update its stone age compilation model copied from
C++. I'm not saying we need to copy the C# or Java models exactly, but
we need to throw away the current legacy model.
Java has convenient Jar files: you can package everything into nice
modular packages with optional source code and documentation.
similar stuff is done in .net.

NO ABSOLUTELY NOT! (and I will /not/ apologize for yelling) I will fight
that tooth and nail!
One of the best things about D IMNSHO is that a D program is "just a
collection of text files". I can, without any special tools, dive in and
view or edit any file I want. I can build with nothing but dmd and a
command line. I can use the source control system of my choice. And very
importantly, the normal build model produces a standalone OS-native
executable.

I don't think changing from a decades-old 'one object file per source
file' compilation model will make you sacrifice any of that. He's
proposing something else, like a custom object format. It has nothing
to do with the way source is stored, or with how you invoke the
compiler. Java hasn't destroyed any of that by using .class files,
has it?
We already have a proof-of-concept of this sort of thing for D: LDC.
The LLVM intermediate form is far more amenable to cross-module and
link-time optimization.

I disagree on all your points.
read inside for comments.
Brad Roberts wrote:

Yigal Chripun wrote:

IMO, designing the language to support this better work-flow is a good
decision made by MS, and D should follow it instead of trying to get
away without an IDE.

Support or enable.. sure. Require, absolutely not.
I've become convinced that the over-reliance on auto-complete and other
IDE features has led to a generation of developers that really don't
know their language / environment. The number of propagated typos due to
first-time mis-typing of a name (I see lenght way too often at work) is
such that I want to ban the use of auto-complete, but I'd get lynched.

first, typos - eclipse has a built-in spell checker, so all those
"lenght"s will be underlined with an orange squiggly line.
regarding the more general comment of bad developers - you see a
connection where there is none. A friend of mine showed me a graph
online that clearly shows the inverse correlation between the number of
pirates in the world and global warming. (thanks to the Somali
pirates, that means the global effort to reduce emissions somewhat helps)
A better analogy would be automotive: if you're Michael Schumacher then
an automatic transmission will just slow you down, but for the rest of
the population it helps improve driving. the transmission doesn't make
the driver good or bad, but it does help the majority of drivers to
improve their driving skills.
there are bad programmers that use a text editor as much as the ones
that use an IDE. there are also good programmers on both sides.
An IDE doesn't create bad programmers, rather the IDE helps bad
programmers to write less buggy code.

If the application's library space is so vast or random that you can't keep
track of where things are, a tool that helps you type in code is papering over
a more serious problem.

false again, using a tool that helps writing code does not mean there's
a design problem in the code. auto-complete prevents typos, for instance,
and that has nothing to do with anything you said.
For me, many times I remember that there's a method that does something
I need but I can't remember if it's called fooBar(int, char) or
barFoo(char, int) or any other permutation. you'd need to go check the
documentation; I save time by using the auto-complete.
Another use case is when I need to use some API: I can get the list of
methods with the documentation by using the auto-complete feature.

My other problem with IDE's, such as eclipse, is that it's such an all or
nothing investment. You can't really just use part of it. You must buy in to
its editor, its interface with your SCM, its strictures of indentation style,
etc. Trying to deviate from any of it is such a large pain that it's just not
worth it -- more so as the team working on a project gets larger.

completely wrong. You forget - Eclipse is just a plug-in engine with
default plug-ins that implement a Java IDE.
editor: prefer vim/emacs? there are eclipse plugins that implement both.
SCM: there are _tons_ of SCM plug-ins! just use whatever you prefer. I
use git and there's a neat UI for that. *But*, sometimes I prefer git's
command line. what to do? no problem, I can open a terminal window
inside eclipse and run any command I want!
I work on unix and my local eclipse (on windows) can open remote files
on the unix machine. eclipse does everything for me, including giving me
a shell to run remote commands.
indentation style: there's nothing easier. go to eclipse preferences. for
each language you have installed you can configure "styles" and eclipse
will indent, color, and format your code in whatever way you want.
you don't have to like or use eclipse, or any other IDE, but if you are
not familiar with the tool, don't spread misinformation.

As I said.. "I have become convinced..." it might not be actually true, and it
might not hold for everyone, but I've seen it frequently enough that I've
started to doubt statements to the contrary. I could well be wrong, but I'm not
going to accept your word any more than you accept mine.
You are correct that for every generalization there are good exceptions.
To address a few of your points, I've tried several of the various plugins, both
at the editor and scm layers. I've talked with a whole bunch of other people I
consider experts who have done the same. The answer that's come back every
single time... the plugins suck. The only ones that actually are long-term
usable are the defaults. Maybe one year that'll change, but forgive me for not
holding my breath. That doesn't mean it's not possible, just means that the
effort of doing a _good_ job hasn't been worth the community's time, and that's
fine.
Anyway.. since I'm fairly confident that D isn't ever going to abandon the
pieces I care about, and might well enable the pieces you care about, it's kinda
pointless to argue about it.
Later,
Brad

IMO, designing the language to support this better work-flow is a good
decision made by MS, and D should follow it instead of trying to get
away without an IDE.

I'm not sure about this. D is designed to be easier to parse than C++
(but that's saying nothing) to allow better tools made for it. I think this
should be enough.
C# & friends not only better support working inside an IDE, but make it a
pain to do without one. Autocomplete dictates that related functions should
be named with the exact same prefix - even when this isn't logical. It also
encourages names to be as descriptive as possible, in practice leading to
part of the api docs being encoded in the function name. Extremely bloated
names are the consequence of this. It doesn't always make code more
readable imho.

already answered.
an IDE does _not_ create bad programmers, and does _not_ encourage bad
code. it does encourage descriptive names, which is a _good_ thing.
writing "strcpy" a la C style is cryptic and *wrong*. code is read a
hundred times more than it's written, and a better name would be, for
instance, "stringCopy".
it's common nowadays to have terabyte-sized HDDs, so why do people try to
save a few bytes of their source while sacrificing readability?
the only issue I have with too-long names is when dealing with C/C++
code that prefixes all symbols with their file-names/namespaces. At
least in C++ this is solved by using namespaces. but this is a problem
with the languages themselves and has nothing to do with the IDE.

The documentation comments are in xml: pure insanity. I tried to generate
documentation for my stuff at work once, expecting to be done in max 5 min.
like ddoc. Turns out nobody at work uses documentation generation, for a
reason: it isn't really fleshed out and one-click from the IDE; in fact it
is a pain in the arse compared to using ddoc.
I should stop now before this turns into a rant.

I agree fully with this. XML doc comments are a mistake made by MS. javadoc
is a much better format, and even that can be improved.
This however has nothing to do with the IDE. the important part is that
the IDE parses whatever format is used and can show you the
documentation via simple means. no need for you to spend time finding
the documentation yourself.
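For contrast, here's roughly what the same one-line summary looks like in each format (the stringCopy function and its parameter are made up, purely for illustration):

```
// C# XML doc comment:
/// <summary>Copies the source string.</summary>
/// <param name="src">String to copy from.</param>

// javadoc:
/**
 * Copies the source string.
 * @param src string to copy from
 */

// ddoc:
/**
 * Copies the source string.
 * Params:
 *     src = string to copy from
 */
```

The XML version carries the most markup noise for the same information.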

already answered.
an IDE does _not_ create bad programmers, and does _not_ encourage bad
code. it does encourage descriptive names, which is a _good_ thing.
writing "strcpy" a la C style is cryptic and *wrong*. code is read a
hundred times more than it's written, and a better name would be, for
instance, "stringCopy".
it's common nowadays to have terabyte-sized HDDs, so why do people try to
save a few bytes of their source while sacrificing readability?

This is not what I was saying.
I'm not talking about strcpy vs stringCopy. stringCopy is short. I'm talking
about things like SetCompatibleTextRenderingDefault.
And this example isn't even so bad. Fact is, it is easier to come up with long
identifiers, and there is no penalty in the form of typing cost for doing so.
It's not about bad programmers (or saving bytes, that's just ridiculous), but
an IDE does encourage certain kinds of constructs because they are easier in
that environment. Good programmers come up with good, descriptive names,
whether they program in an IDE or not.
At work I must program in VB.NET. This language is pretty verbose in describing
even the most common things. It's easier to parse when you're new to the
language, but after a while I find all the verbosity gets in the way of
readability.

I think that any real programming project nowadays (regardless of
language) needs tools to help the programmer. The difference between
D and C# is that with D you /can/ get away without an IDE and with C#
you won't get much at all done without one.

I can't agree with this. Most of the time I use an IDE for the
autocompletion, not much for the build-and-jump-to-error stuff. And I
don't see D being easier with regards to remembering what's the name
of that function, which members does a class have, in which module are
all these.

I'm not referring to editing. For that, you can get away without an IDE in
most any language if you are willing, but it will cost you something.

Why do you say that with D you can get away without an IDE

Because as often as not, I do. For some reason I have never gotten Descent
working correctly. Most of the time, the code outlining works, but I've never
gotten auto-complete or integrated building to work.

and with C#
you can't? I think you can do the same as in C#, don't use an IDE and
get away with pretty much everything, except you'll be slower at it
(same goes for D without an IDE).

As above, I'm not talking about editing, but rather about the rest of the
tools, like the compiler and debugger. Has anyone ever tried building a c#
project without an IDE? I don't even know if it can be done. Yes you can
trigger a build from the command line, but setting up a project without it
would require hand editing of XML (yuck), and the build tool IS visual studio.

Again, this also applies to Java. When I started using Java I used the
command line and an editor with just syntax highlighting, and made
programs of several classes without problem. Refactoring was a PITA,
and I'm thinking it's like that in D nowadays. :-P

I think it's like that in every language. The programs people work on
nowadays are, no matter their representation, too complex for a person to
navigate without tools.

Yes you can trigger a build from the command line, but setting
up a project without it would require hand editing of XML (yuck)
and the build tool IS visual studio.

It is true that Visual Studio creates XML project/solution files
and the contents of these files is overly complex.
But these XML files are Visual Studio specific, and a lot of their
complexity comes from the fact that they contain extra information
that is only needed by the IDE.
If you use the MSBuild approach, the amount of XML that is needed to
create a project/solution is much smaller, and in general the XML is
fairly trivial.
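For what it's worth, a hand-written MSBuild project can be quite small. A minimal sketch (the project and output names are made up; the Csc task and the 2003 schema namespace are part of standard MSBuild, but check against your toolset version):

```xml
<!-- hello.csproj: a hypothetical minimal hand-written MSBuild project -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <!-- compile every .cs file in this directory -->
    <Compile Include="*.cs" />
  </ItemGroup>
  <Target Name="Build">
    <Csc Sources="@(Compile)" OutputAssembly="hello.exe" />
  </Target>
</Project>
```

Run it with `msbuild hello.csproj` from a plain command prompt - no IDE involved.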

I will give you this, though: D's toolchain could use improvement in
more complex builds. But it's still a hell of a lot simpler than
anything C#'s toolchain can do.

Have you tried DSSS? It's surprisingly feature-rich, and its syntax is a
lot simpler than MSBuild's.
IMO, any build more complex than setting a few options should be handled
by a scripting language, though. Knocking together a Perl script to call
your builder is often a lot easier than mucking around with huge
configuration files (anyone who's used Ant can attest).
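As a sketch of that idea: the whole "build configuration" can be an ordinary script that assembles and runs the compiler command line. The project layout, the flags, and the dmd invocation below are hypothetical, purely illustrative:

```python
#!/usr/bin/env python
# build.py: a tiny build driver -- project layout and flags are made up.
import glob
import subprocess
import sys

def build_command(sources, debug=False):
    """Assemble the compiler command line for a list of source files."""
    cmd = ["dmd", "-ofapp"]
    if debug:
        cmd.append("-g")  # per-configuration logic is just an ordinary 'if'
    # sort for a deterministic command line
    return cmd + sorted(sources)

if __name__ == "__main__":
    sources = glob.glob("src/*.d")
    cmd = build_command(sources, debug="--debug" in sys.argv)
    sys.exit(subprocess.call(cmd))
```

Anything a big configuration file can express (conditions, platform checks, extra steps) is just normal code here, which is exactly the appeal.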

I will give you this, though: D's toolchain could use improvement in
more complex builds. But it's still a hell of a lot simpler than
anything C#'s toolchain can do.

Have you tried DSSS? It's surprisingly feature-rich, and its syntax is a
lot simpler than MSBuild's.

-_-
You realise that in order to be using rebuild, I HAVE to also have DSSS,
right? I'm pretty sure Gregor stopped releasing rebuild-only packages
quite some time ago.
DSSS itself is OK, but I can't let you get away with saying its syntax
is simpler than MSBuild's. Oh sure, it's not XML, but it's just...
inscrutable.
For example, I once had to ditch DSSS for a project because I needed to
add a flag for Windows builds. Adding that flag killed all the other
flags for all the other builds for some reason, even using the += thing.
The lack of any sort of stable ordering for build steps is a pain, too.
And it annoys me that I can't specify what a default build should do.
DSSS is weird; even after Gregor wrote all the documentation for it, it
still just didn't make sense. Maybe it's cursed or something. :P

IMO, any build more complex than setting a few options should be handled
by a scripting language, though. Knocking together a Perl script to call
your builder is often a lot easier than mucking around with huge
configuration files (anyone who's used Ant can attest).

I do have a build script for one of my projects. It's fairly large.
The problem is, it's doing what this makefile would accomplish:
%.d: %.dw
	dtangle $ # I forget the exact syntax
It's always annoyed the crap out of me that we've lost such a basic
transformative tool.
-- Daniel
P.S. No, I can't just use make; I'm on Windows. I really, REALLY don't
want to have to deal with that bullshit again.

You realise that in order to be using rebuild, I HAVE to also have DSSS,
right? I'm pretty sure Gregor stopped releasing rebuild-only packages
quite some time ago.

Not to trumpet my own horn, but have you considered my build tool called
'Bud'? And if you have then what is missing from it that you need?
http://www.dsource.org/projects/build
Gregor derived DSSS from my project.
--
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell

You realise that in order to be using rebuild, I HAVE to also have DSSS,
right? I'm pretty sure Gregor stopped releasing rebuild-only packages
quite some time ago.

Not to trumpet my own horn, but have you considered my build tool called
'Bud'? And if you have then what is missing from it that you need?
http://www.dsource.org/projects/build
Gregor derived DSSS from my project.

Actually, the project that has the big build script is using bud. I
think bud is supposed to have some sort of transformative feature, but I
just couldn't make it work. Aside from that, it works just fine, so I
never felt the need to replace it. [1]
The two main reasons I switched to rebuild over bud were -oq and -dc.
Developing with various combinations of stable compilers, unstable
compilers, custom compilers, phobos, tango stable and tango trunk makes
-dc a godsend.
And I'm just a neat freak when it comes to folders, hence my love of -oq :D.
-- Daniel
[1] This project actually has a private copy of the entire D toolchain
because I'm completely paranoid about breaking it.

P.S. No, I can't just use make; I'm on Windows. I really, REALLY don't
want to have to deal with that bullshit again.

http://gnuwin32.sourceforge.net/packages.html
My current build script is cobbled together from Perl, Make, and DSSS.
It sounds ugly, but when I tried it out on Linux (I usually use
Windows), the entire thing built without a single change.

Mine is Python, bud and a modified version of Knuth's CWEB. I don't
trust win32 "ports" of GNU tools [1] because there's usually some
horrible incompatibility lurking in the shadows waiting to bite you on
the arse.
-- Daniel
[1] I exclude Cygwin from this because it's running inside proper bash
with largely proper UNIX semantics. It's also so fiddly and annoying to
get to that I don't bother any more. :P

Um, Step 1. OK. Step 2. OK. Step 3. Yup, no longer practical. If that's what
it takes, you have just proven my point to my satisfaction. D can be built
with no non-code files; C# can't. I'll tolerate something like make, but
nothing like that .proj file.

Yes you can trigger a build from the command line, but setting up a
project without it would require hand editing of XML (yuck) and the
build tool IS visual studio.

the contents of these files is overly complex.
But these XML files are Visual Studio specific and a lot of their
complexity comes from the fact that they contain extra information
that is only needed by the IDE.
If you use the MsBuild approach the amount of XML that is needed to
create a project/solution is much smaller and in general the XML is
fairly trivial.

Smaller than huge and fairly trivial in comparison to what? The only people
who are going to do this (c# w/o someone's IDE) are 1) people forced both
to use c# and to not use an IDE or 2) the same kind of people who run OSX
on an Xbox ( :-O Oh my, someone actually did that, I didn't know:
http://www.forevergeek.com/2005/06/os_x_for_the_xbox/
) because they like thumbing their nose at the big guys.

Um, Step 1. OK. Step 2. OK. Step 3. Yup, no longer practical. If that's
what it takes, you have just proven my point to my satisfaction. D can be
built with no non-code files; C# can't. I'll tolerate something like make,
but nothing like that .proj file.

I can at least say that building a GTK# program with Mono isn't much
different from D or the C family. Link in your libraries and give it your
*.cs files. My projects haven't been very large, but I haven't yet had to
provide non-source files for a build.

My other problem with IDE's, such as eclipse, is that it's such an all
or nothing investment. You can't really just use part of it. You
must buy in to its editor, its interface with your SCM, its
strictures of indentation style, etc. Trying to deviate from any of
it is such a large pain that it's just not worth it -- more so as the
team working on a project gets larger.

For VS, you might have a point. However for D, I use Descent and I haven't
found any of those to be a problem. Getting people to agree on how to set
it up I expect would be a bigger problem.

Actually, Descent isn't perfect, either. For example, it mandates that
cases in a switch MUST be aligned with the braces. What's more fun is
that you can't override it until AFTER it's corrected YOU.
Oh, and how it indents multiline function calls is completely retarded.
And every time I try to autocomplete a templated function call, it
insists on inserting ALL of the template arguments, even when they're
supposed to be derived.
Don't get me wrong, I quite like Descent. But as soon as you try to
make a program "smart", you're going to start getting it wrong.
</rant>
-- Daniel

Oh, and how it indents multiline function calls is completely retarded.
And every time I try to autocomplete a templated function call, it
insists on inserting ALL of the template arguments, even when they're
supposed to be derived.

This is why I don't like IDEs. Plus, every time you type something,
stuff BLINKS around, grabbing your attention, saying I'M SO ANNOYING
PLEASE DISABLE ME AS A FEATURE. Like documentation tooltips, auto
completion hints, or "intelligent" indentation. It's ridiculous. When I
hit a key, I want the text editor to insert that key. Not do.... random....
stuff.
How do Eclipse users deal with it? Not look at the screen when typing?

I do look at the screen because I WANT to use those features. I don't
try to work around them; I try to use them instead. If there's a feature
I don't like I disable it; you just have to configure the application in
the way you like it. An application can't be configured from the
beginning to satisfy all people's needs.

Oh, and how it indents multiline function calls is completely retarded.
And every time I try to autocomplete a templated function call, it
insists on inserting ALL of the template arguments, even when they're
supposed to be derived.

Hmm... At least for Descent there are tickets #168 and #169, which are
now fixed in trunk and will be in the next release.
If something is not available in the IDE or it annoys you, just disable
it or make a feature request. :)

Oh, and how it indents multiline function calls is completely retarded.
And every time I try to autocomplete a templated function call, it
insists on inserting ALL of the template arguments, even when they're
supposed to be derived.

This is why I don't like IDEs. Plus, every time you type something,
stuff BLINKS around, grabbing your attention, saying I'M SO ANNOYING
PLEASE DISABLE ME AS A FEATURE. Like documentation tooltips, auto
completion hints, or "intelligent" indentation. It's ridiculous. When I
hit a key, I want the text editor to insert that key. Not do.... random....
stuff.
How do Eclipse users deal with it? Not look at the screen when typing?

that sounds like an old man complaining that modern television has sound
and colors.
you can disable all those features or just use a primitive text editor, since
that's what you're used to, but those things are *not* problems.
It is extremely useful to have the documentation tooltips instead of
spending time searching manually in some book or whatever.
the smart indentation is a godsend: if I paste a snippet it is adjusted
to my code, so I can see how many braces I need to have at the end.
I certainly do *NOT* want to go back to writing shell scripts or emacs
LISP functions just to copy some snippet from one file to another!

For VS, you might have a point. However for D, I use Descent and I
haven't found any of those to be a problem. Getting people to agree on
how to set it up I expect would be a bigger problem.

Actually, Descent isn't perfect, either. For example, it mandates that
cases in a switch MUST be aligned with the braces. What's more fun is
that you can't override it until AFTER it's corrected YOU.

Just file a ticket.

Oh, and how it indents multiline function calls is completely retarded.
And every time I try to autocomplete a templated function call, it
insists on inserting ALL of the template arguments, even when they're
supposed to be derived.

Well, I didn't know it was *that* important for using it. If you
consider it really important, post something in the forums, reply to
that ticket, or something like that.
Well... posting in the newsgroup works too. ;-)
http://dsource.org/projects/descent/changeset/1347

So, in a way, Microsoft may be right in assuming that (especially when
their thinking anyway is that everybody sits at a computer that's
totally dedicated to the user's current activity anyhow) preposterous
horse power is (or, should be) available at the code editor.

I think that any real programming project nowadays (regardless of
language) needs tools to help the programmer. The difference between D
and C# is that with D you /can/ get away without an IDE and with C#
you won't get much at all done without one.

I can't agree with this. Most of the time I use an IDE for the
autocompletion, not much for the build-and-jump-to-error stuff. And I
don't see D being easier with regards to remembering what's the name of
that function, which members does a class have, in which module are all
these.
Why do you say that with D you can get away without an IDE and with C#
you can't? I think you can do the same as in C#, don't use an IDE and
get away with pretty much everything, except you'll be slower at it
(same goes for D without an IDE).

The more boilerplate code a language requires, the more important it is
to have an IDE. Features that a language provides that allow you to
write less code make an IDE less important.
I really like IDEs. They let me think less when creating code.
Of course, the other feature is notifying the user about errors sooner
than their next compile. This saves a lot of time, regardless of whether
your language requires significant cruft or not.

It wouldn't be hard to do a competent IDE for D. After all, D is
designed to make that job easy.

Like, for example, if you have this:
---
char[] someFunction(char[] name) {
    return "int " ~ name ~ ";";
}

class Foo {
    mixin(someFunction("variable"));
}

void main() {
    Foo foo = new Foo();
    foo. // <-- I'd really like the IDE to suggest "variable"
}
---
Do you really think implementing a *good* IDE for D is easy now? :-P
(of course Descent works in this case, but just because it has the full
dmdfe in it... so basically a good IDE will need to be able to do CTFE,
instantiate templates, etc., and all of those things are kind of
unclear in the specification of the D language, so if you don't use
dmdfe... well... I hope you get my point)

The dmdfe is available, so one doesn't have to recreate it. That makes
it easy :-)

In the Good Old Days, when it was usual for an average programmer to
write parts of the code in ASM (that was the time before the late
eighties; be it Basic, Pascal, or even C, some parts had to be done in
ASM to give a bearable user experience when the mainframes had less
power than today's MP3 players), ASM programming was very different
on, say, Zilog, MOS, or Motorola processors. The rumor was that the 6502
was made for hand-coded ASM, whereas the 8088 was more geared towards
automatic code generation (as in C compilers, etc.). My experiences of
both certainly seemed to support this.

The 6502 is an 8 bit processor, the 8088 is 16 bits. All 8 bit
processors were a terrible fit for C, which was designed for 16 bit
CPUs. Everyone who coded professional apps for the 6502, 6800, 8080 and
Z80 (all 8 bit CPUs) wrote in assembler. (Including myself.)

If we were smart with D, we'd find a way of leapfrogging this
thinking. We have a language that's more powerful than any of C#, Java
or C++, more practical than Haskell, Scheme, Ruby, & co., and more
maintainable than C or Perl, but which *still* is Human Writable. All we
need is some outside-of-the-box thinking, and we might reap some
overwhelming advantages when we combine *this* language with the IDEs
and the horsepower that the modern drone takes for granted.
Easier parsing, CTFE, actually usable templates, practical mixins, pure
functions, safe code, you name it! We have all the bits and pieces to
really make hand-writing plus IDE-assisted program authoring a superior
reality.

Right, but I can't think of any IDE feature that would be a bad fit for
using the filesystem to store the D source modules.

Forth interpreters can be very small, it's a very flexible language, you
can metaprogram it almost like Lisp, and if implemented well it can be
efficient (surely more than interpreted Basic, though less than
handwritten asm; you can probably have an optimizing Forth in less than
4-5 KB).
But people were waiting/asking for Basic: most people didn't know Forth,
Basic was common in schools, so Basic was the language shipped inside
the machine instead of Forth:
http://www.npsnet.com/danf/cbm/languages.html#FORTH
A Commodore 64 with built-in Forth instead of Basic may have driven
computer science in a quite different direction.
Do you agree?
Bye,
bearophile

I remember lots of talk about Forth, and nobody using it.

It can quickly degenerate into a write-only language because it
encourages one to extend the syntax, and even the semantics, of the
language. It takes extreme discipline to make a Forth program
maintainable by anyone other than the original author.
The other difficulty is that most people don't use Reverse Polish
Notation often enough for it to become second nature, which makes it
hard to read a Forth program and 'see' what it's trying to do.
However, it has its own elegance and simplicity that can be very alluring.
I see it as the Circe of programming languages.
--
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell

Hehe...
And of course the Ruby book has the obligatory distasteful sexual
reference.
Only today I was reading another book on Rails, and by the third page
I got the notion that good website development is like good porn: you
know it when you see it. Yeah, you've apparently seen too much of it.
Get a date. :o/
I'm all for sexual jokes, but give me a break with "the lucky stiff".
The subtler the better. I made one such joke in a talk at ACCU, and it
took people 30 seconds to even suspect it. (Walter of course got it in a
femtosecond.)
Andrei

Forth interpreters can be very small, it's a very flexible language,
you can metaprogram it almost like Lisp, and if implemented well it
can be efficient (surely more than interpreted Basic, though less than
handwritten asm; you can probably have an optimizing Forth in less
than 4-5 KB).
But people were waiting/asking for Basic: most people didn't know
Forth, Basic was common in schools, so Basic was the language shipped
inside the machine instead of Forth:
http://www.npsnet.com/danf/cbm/languages.html#FORTH
A Commodore 64 with built-in Forth instead of Basic may have driven
computer science in a quite different direction.
Do you agree?

Forth isn't exactly user friendly, whereas any housewife could at least
pretend to almost understand some of Daddy's Basic code. :-)
C-64 with Forth, no market.
An example is HP. They made a super cool calculator (almost a computer)
that was Forth programmable. Few bought it, and even fewer bothered to
program it in Forth. Even the 28S that I've got has a Forth dialect, but
most people only did algebra stuff on it.
Bootstrapping a new system, that's where Forth really shines. But
today's programmers are so set in the collective mindset of the popular
programming languages that the change of perspective Forth requires
will feel too tedious. People become lazy.
But had the C-64 come with Simon's Basic (which was an expensive add-on
that very few bought, mostly because they didn't understand its vast
difference from "regular" Basic), fewer people would have quit
programming at the 1000-line mark. They just thought it was hard, or
that they weren't smart enough.
---------
Heck, I just remembered, I've got a Forth cartridge for the VIC-20
somewhere! Oh, maybe some rainy day I'll do some time travel!! :-)

In the Good Old Days, when it was usual for an average programmer to
write parts of the code in ASM (that was the time before the late
eighties; be it Basic, Pascal, or even C, some parts had to be done
in ASM to give a bearable user experience when the mainframes had less
power than today's MP3 players), ASM programming was very different
on, say, Zilog, MOS, or Motorola processors. The rumor was that the
6502 was made for hand-coded ASM, whereas the 8088 was more geared
towards automatic code generation (as in C compilers, etc.). My
experiences of both certainly seemed to support this.

The 6502 is an 8 bit processor, the 8088 is 16 bits. All 8 bit
processors were a terrible fit for C, which was designed for 16 bit
CPUs. Everyone who coded professional apps for the 6502, 6800, 8080 and
Z80 (all 8 bit CPUs) wrote in assembler. (Including myself.)

Sloppy me, 8080 was what I meant, instead of the 8088. My bad.
And you're right about ASM coding. But over here, with smaller software
companies, stuff was done with S-Basic (does anyone even know that one
anymore???), C-Basic, and Turbo Pascal. Ron Cain's SmallC wasn't really
up to anything serious, and C wasn't all that well known around here
then. But Turbo Pascal was already at 3.0 in 1985, and a good
investment, because using it was the same on the pre-PC computers and
the then-new IBM-PC.

If we were smart with D, we'd find a way of leapfrogging this
thinking. We have a language that's more powerful than any of C#, Java
or C++, more practical than Haskell, Scheme, Ruby, & co., and more
maintainable than C or Perl, but which *still* is Human Writable. All
we need is some outside-of-the-box thinking, and we might reap some
overwhelming advantages when we combine *this* language with the IDEs
and the horsepower that the modern drone takes for granted.
Easier parsing, CTFE, actually usable templates, practical mixins,
pure functions, safe code, you name it! We have all the bits and
pieces to really make hand-writing plus IDE-assisted program authoring
a superior reality.

Right, but I can't think of any IDE feature that would be a bad fit for
using the filesystem to store the D source modules.

I remember writing something about it here, like 7 years ago. But today
there are others who have newer opinions about it. I haven't thought
about it since then.
I wonder how a seasoned template author would describe what the most
welcome help would be when writing serious templates?

"Breakpoint debugging" of template expansion. Pick a template, feed it
values, and see (as in syntax highlighting and foreach unrolling) what
happens. Pick an invoked template and dive in. Real breakpoint debugging
of CTFE, where it will stop on the line that is not CTFE-able.
Oh, and autocomplete that works with metaprogramming but doesn't fall
over on its side twitching with larger systems.
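Until such a debugger exists, the usual stand-in is compile-time printing: `pragma(msg, ...)` runs during instantiation, so you can at least watch which arguments a template actually receives. A small sketch (the `Pair` template is made up for illustration):

```d
// Prints a message at compile time, once per distinct
// instantiation of the template:
template Pair(T) {
    pragma(msg, "instantiating Pair!(" ~ T.stringof ~ ")");
    alias T[2] Pair;
}

Pair!(int) a;   // compiler prints: instantiating Pair!(int)
Pair!(char) b;  // compiler prints: instantiating Pair!(char)
```

It's crude next to real breakpoint debugging, but it doesn't fall over on larger systems either.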

Do you really think implementing a *good* IDE for D is easy now? :-P
(Of course Descent works in this case, but only because it has the
full dmdfe in it... so basically a good IDE will need to be able to do
CTFE, instantiate templates, etc., and all of those things are kind of
unclear in the specification of the D language, so if you don't use
the dmdfe... well... I hope you get my point.)

The dmdfe is available, so one doesn't have to recreate it. That makes
it easy :-)

The source to gcc is available, so that makes porting gcc to another
platform easy.
-Steve