Community

On Thu, Jul 23, 2009 at 12:13 PM, Walter
Bright<newshound1@digitalmars.com> wrote:
> Jarrett Billingsley wrote:
>>
>> Will you rename the DMD2 compiler to 'dmd2' as well?
>
> No. If they're in different directory trees, there's no reason to. After
> all, that's the whole point of having directories!
That seems to be the way they do things in Windows, but usually they
set up various extra symlinks on unix systems so that you can call a
particular version of a program when there are multiple installed.
Like gcc-2.95, etc.
--bb

On Thu, Jul 23, 2009 at 3:13 PM, Steven
Schveighoffer<schveiguy@yahoo.com> wrote:
>
> Can you do that with classes? I don't know. I thought it was impossible to
> allocate several classes in one block. It was pretty easy to do the custom
> allocator with structs...
Ah, that's a good point. It is in fact possible; you can use
typeid(Class).classinfo.init.length to get the size of an instance at
runtime, or with D2, __traits(classInstanceSize, Class) will get you
the same thing, at compile time. Though your other reasons are
certainly more than enough justification to move to structs (which,
like you said, they probably should have been in the first place) :)
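A minimal sketch of both approaches (assuming D2 syntax; `ClassInfo.init` was the spelling of that era, later renamed `initializer`):

```d
// Sketch: two ways to get a class's instance size in D.
class Foo
{
    int x;
    long y;
}

void main()
{
    // D2: known at compile time
    enum ctSize = __traits(classInstanceSize, Foo);

    // Known at run time, via the class's ClassInfo
    size_t rtSize = Foo.classinfo.init.length;

    assert(ctSize == rtSize);
}
```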

On Thu, Jul 23, 2009 at 3:22 PM, Walter
Bright<newshound1@digitalmars.com> wrote:
> Jarrett Billingsley wrote:
>>
>> Yeah, let me know when that happens. Until then, I'd like to continue
>> to be able to use my build tools that were designed for D1 without
>> having to modify all their config files.
>
> I don't know what build tools you're using, but consider make:
Walter, I'm pretty shocked by this response. How long have Bud and
DSSS existed? D users who *don't* use them are the exception. You
even patched DMD recently to make it easier for xfBuild to do its job.
This "D" programming language is great because it obviates make. You
should try it sometime! I think you might like it ;)

On Thu, 23 Jul 2009 15:11:07 -0400, Walter Bright
<newshound1@digitalmars.com> wrote:
> Michiel Helvensteijn wrote:
>> Properties. Your syntactic sugar:
>> int i = c.p; // int i = c.p()
>> p = i // c.p(i)
>> They can't do these things:
>> * No control over their use by class designer: ANY member function
>> with one
>> or zero parameters may be called using 'property syntax'. This is not a
>> good thing.
>
> Why not? Seriously, what is the semantic difference?
It leads to making things look like properties when they are not.
For example, writefln = "hi";
The worst is assignment syntax, since there are plenty of legitimate
uses for a single-argument function that doesn't actually assign anything.
It can also lead to ambiguities: for example, a property that returns a
delegate must be called with double parens:
c.member()();
It has nothing to do with how the compiler looks at it semantically, but
it has everything to do with how the user reads it. I want the compiler
to tell the user "no you can't use that member that way." so I don't get
complaints that my "properties" are badly implemented (and yes, I've had
this happen, see http://www.dsource.org/projects/tango/ticket/1184)
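For illustration, a sketch of that delegate ambiguity (class and member names are hypothetical):

```d
// Sketch: a "property" that returns a delegate.
class C
{
    int delegate() member()
    {
        return delegate int() { return 42; };
    }
}

void test()
{
    auto c = new C;
    auto dg = c.member;   // property syntax: retrieves the delegate
    int v = c.member()(); // first () calls member, second () calls the delegate
}
```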
It also forces better coding habits. For example, in C#, when I declare a
property it's:
int x
{
    get { return _x; }
    set { _x = value; } // value is a context keyword
}
With D properties, x() and x(int value) could be scattered anywhere in the
class.
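The D equivalent is just a pair of ordinary overloads, which the language does not require to sit together (a hypothetical sketch):

```d
class C
{
    private int _x;

    int x() { return _x; }  // getter: readable as c.x

    // ...any number of unrelated members may sit in between...

    void x(int value) { _x = value; }  // setter: writable as c.x = 5
}
```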
Notice also that I can document x as a property, not as its individual
functions.
>> * No parameterized properties: c.f(5) = 6; // c.f(5, 6)
>
> Ok.
Ugh, don't do this. You can implement this behavior easily enough. We
already have opIndexAssign.
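A minimal sketch of getting `c.f(5) = 6`-style behavior with the existing operator overloads (all names here are hypothetical):

```d
// Sketch: a parameterized "property" via opIndex/opIndexAssign.
class C
{
    private int[int] data;

    int opIndex(int key) { return data[key]; }

    void opIndexAssign(int value, int key) { data[key] = value; }
}

void test()
{
    auto c = new C;
    c[5] = 6;           // calls c.opIndexAssign(6, 5)
    assert(c[5] == 6);  // calls c.opIndex(5)
}
```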
-Steve

Walter Bright wrote:
> Michiel Helvensteijn wrote:
>> Properties. Your syntactic sugar:
>>
>> int i = c.p; // int i = c.p()
>> p = i // c.p(i)
>>
>> They can't do these things:
>>
>> * No control over their use by class designer: ANY member function
>> with one
>> or zero parameters may be called using 'property syntax'. This is not a
>> good thing.
>
> Why not? Seriously, what is the semantic difference?
Semantic difference: a property doesn't have *visible* side effects. If
you invoke it one hundred times, it should always return the same thing.
And nothing else in your program should change. So it's kind of like
pure functions.
I say "visible" because you might want to implement a property lazily.
But that logic remains inside your class and isn't visible to the
outside world.

On Thu, Jul 23, 2009 at 3:54 PM, Walter
Bright<newshound1@digitalmars.com> wrote:
> Jarrett Billingsley wrote:
>>
>> This "D" programming language is great because it obviates make.
>
>
> Ok, now about make's ability to switch compilers without having to edit
> config files?
So editing every make file you have is better? :P

Walter Bright wrote:
> Knud Soerensen wrote:
...
>>
>> I think one of D's strongest points for getting people to make the switch
>> is built-in unit testing. (At least this is the strongest point for me.)
>> But the very simple implementation of unit testing in D nearly ruins the
>> advantage it gives. (see suggestion from the wishlist below)
>
> Even at its very simple support, it's a huge win. It raises the bar on
> what is minimally acceptable, and has been responsible for a big
> improvement in the quality of Phobos.
It's interesting to consider why unittest (and assert) are such a big
success. My idea is that it's not in spite of, but because of, their utter
simplicity. I speculate that if it had been different, for example if you
had to create a new file for a unittest, it would not have been used so much.
...
>
>> ** Unit test isolation
>> I would like to be able to isolate the unit tests,
>> so that if one fails the next still runs.
>
> You can do this by simply not using "assert" for the actual test. Use:
> if (!x) writeln("test for x failed");
> instead of:
> assert(x);
> You can, of course, make a template like Andrei's enforce() to make the
> if test and message a bit more convenient.
But within sight there is something much better. druntime defines a
function, setAssertHandler, to configure a user-defined function for
handling assertions! Combine this with version(unittest) and the
Runtime.moduleUnitTester callback, et voilà: profit*
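A hedged sketch of the combination described above (the API names come from druntime's core.exception and core.runtime; exact signatures and the ModuleInfo iteration have varied across versions):

```d
version (unittest)
{
    import core.exception : setAssertHandler;
    import core.runtime : Runtime;
    import std.stdio;

    // Report a failed assert instead of aborting the whole run.
    void reportFailure(string file, size_t line, string msg) nothrow
    {
        try writefln("%s(%s): test failed: %s", file, line, msg);
        catch (Exception) {}
    }

    shared static this()
    {
        setAssertHandler(&reportFailure);

        // Replace the default runner so every module's tests run,
        // even when an earlier module had failures.
        Runtime.moduleUnitTester = function bool()
        {
            foreach (m; ModuleInfo)
            {
                if (m is null) continue;
                auto fp = m.unitTest;  // per-module test entry point, or null
                if (fp) fp();
            }
            return true; // report success so main() still runs afterwards
        };
    }
}
```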
I still see two major points of improvement:
- unittests are anonymous things. It would be a big improvement to be able
to say unittest("Test foobar") { } and retrieve the named test via the
runtime provided hooks
- all unittests from one module are lumped together in its ModuleInfo
object. I would rather see an array of named unittests instead.
The rationale for these improvements is that the language and standard
library define only very minimal, low-impact ways of writing tests. At the
same time, the building blocks are provided to create more advanced tools.
This way you can also start out writing simple tests, and then not have to
rewrite those again when you want to use some fancy continuous integration
suite for D. At the moment, it's simply not possible to progress to more
elaborate testing without breaking everything and starting from scratch.
* Well I think so, I haven't been able to make use of it (segfaults) but it
would be sweet.

Walter Bright wrote:
>> I know the real focus for D is system programming and the C++ people.
>>
>> I think one of D's strongest points for getting people to make the switch
>> is built-in unit testing. (At least this is the strongest point for me.)
>> But the very simple implementation of unit testing in D nearly ruins
>> the advantage it gives. (see suggestion from the wishlist below)
>
> Even at its very simple support, it's a huge win. It raises the bar on
> what is minimally acceptable, and has been responsible for a big
> improvement in the quality of Phobos.
Yes, but the choice is not between unit tests and no unit tests.
It is between using D with its very simple unit test framework, or
C++/Java/etc. with a very good unit testing framework.
I think that D should provide a framework on the same level,
or maybe just make the best unit testing framework on the planet.
>
>> A simple way to ensure that could be if the compiler issued an
>> error/warning if a function had no unit tests or contracts.
>
> I worry that such would be crying wolf. But dmd does have a coverage
> analyzer built in - which I think is more useful. It'll show which lines
> were executed by the unit tests, and which were not.
Yes, the coverage analyzer is very good, but how do you ensure that the
library developers actually use it?
The feature should be introduced slowly, by first printing a warning when
a testing binary is built.
Then step it up to an error for testing binaries,
then a warning for production binaries,
and in a distant future also errors for production binaries.
Already at the first step, we would see whether the D community finds it
useful. In its last step it would ensure that everybody using
some unknown D code would know that it had some level of test coverage
and quality.
It is not as fine-grained as a 100% coverage test, but it would ensure that
every function had some test code, and that without running a coverage
analysis.
How hard do you think it would be to make?
>
>
>>
>> What follows is some unit test suggestions from
>> http://all-technology.com/eigenpolls/dwishlist
>> Because I would like to hear your opinion about them.
>>
>> ** unit test & code separation
>> I think it would be more useful if the code
>> and the unit tests were two separate binaries.
>>
>> Sometimes you need to move the production binary to another
>> machine/environment.
>> It would be nice if one could move the test binary with it and test
>> that everything works.
>>
>> It would also allow for arguments to the test binary,
>> so that you would be able to run a specific unit test.
>
> I made the decision at one point that unit tests weren't there to test
> that the compiler generated code correctly, they were to test the logic
> of the user code. Hence, one does a separate build for unit tests than
> production release.
>
Running unit tests in another environment is good for testing
whether the assumptions your code makes about the environment are correct.
> The release version should get the black box tests, not unit tests.
>
>
>
>> ** black box unit testing
>> The D compiler should enforce black box unit tests.
>>
>> That is, unit tests that only use the class's exposed
>> interface (public, protected).
>>
>> Together with 100% unit test coverage it helps ensure that
>> the code is modular, decoupled and that you can change
>> the private parts without changing the unit tests.
>>
>> For those who are not ready for this high coding standard,
>> there might be an --allow-white-box switch.
>
> The compiler doesn't need to help here. There's nothing preventing one
> from using unittests in this manner. Just import the .di file, which is
> the exposed interface, then write a unit test block.
Yes, you can do that.
But why should you have to jump through hoops to write good test code?
I think the default behavior should support writing good modular tests.
Imagine we have a team with 20 programmers working on a big project.
The project standard for unit testing is to import the .di file and write
unit tests against the public interface.
Now imagine one bad programmer broke the standard and wrote a unit test
on a private function.
Now, how would you discover that, without going through every D file?
If the default behavior was not to allow unit testing of private functions,
then the bad programmer would have to use the --allow-white-box switch,
and you could just compile without it to discover the problem.
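For reference, the black-box pattern Walter describes might look like this (file and symbol names are hypothetical):

```d
// mylib.di -- the exposed interface only (e.g. generated with dmd -H):
//   module mylib;
//   int publicApi(int x);

// test_mylib.d -- black-box tests: they see only what the .di exposes.
module test_mylib;

import mylib;  // imports the interface file, not the implementation

unittest
{
    // Only publicApi is visible here; private helpers inside mylib.d
    // cannot be touched, so the test stays black-box.
    // (The expected value is illustrative, not a real API contract.)
    assert(publicApi(2) == 4);
}
```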