Community

Stewart Gordon wrote:
> Pragma wrote:
> <snip>
>> But I'd like to echo the other comments in this thread regarding
>> structs. IMO, we're not there yet. I think folks are looking for a
>> solution that does this:
>>
>> - A ctor like syntax for creating a new struct
>> - No more forced copy of the entire struct on creation
>
> What do you mean by this?
I'm glad you asked. :)
Static opCall() is not a ctor. It never was. People have been
clamoring to be able to use this() inside of a struct, much like they
can with classes and modules. But the desire here goes beyond mere
symmetry between type definitions.
The forced copy issue is something that is an artifact of emulating a
constructor for a struct. Take the standard approach for example:
struct Foo {
    int a, b, c;
}

Foo f = {a:1, b:2, c:3};
Foo f = {1, 2, 3}; // more succinct version
So here we create a struct in place, and break encapsulation in the
process. What we really want is an opaque type that has a little more
smarts on creation. Taking advantage of in/body/out contracts would be nice too.
No problem, we'll just use opCall():
struct Foo {
    int a, b, c;

    static Foo opCall(int a, int b, int c) {
        Foo _this;
        _this.a = a;
        _this.b = b;
        _this.c = c;
        return _this;
    }
}

Foo f = Foo(1,2,3);
That's better, but look at what's really happening here. Inlining and
compiler optimization aside, the 'constructor' here creates a Foo on the
stack which is then returned and *copied* to the destination 'f'.
To most, that won't ever seem like a problem. But for folks who are
working with Vector types or Matrix implementations, that's something to
scream about. In a nutshell, any struct wider than a register that is
populated in the 100's to 1000's is wasting cycles needlessly.
So that brings us to something like this:
struct Foo {
    int a, b, c;

    this(int a, int b, int c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }
}

Foo f = Foo(1,2,3);
Ambiguity aside, this fixes encapsulation, gives a familiar syntax, and
almost fixes the allocation issues. (see below)
>
>> - Something that is disambiguated from static opCall
> <snip>
>
> Do you mean that constructors for structs should have a notation
> distinct from S(...)?
>
Well, I think it's one of the reasons why we don't have ctors for
structs right now. The preferred syntax for a "struct ctor" would
probably be this:
S foo = S(a,b,c);
Which is indistinguishable from "static opCall". Throwing 'new' in there
wouldn't work either, since that would be a dynamic allocation:
S* foo = new S(a,b,c);
So that leaves us with "something else" that provides both a way to
invoke a ctor, yet allocates the data on the stack and doesn't force you
to create an additional copy:
S foo(a,b,c);                 // C++ style
S foo = stackalloc S(a,b,c);  // alloca() style (in place of new)
S foo = new(stack) S(a,b,c);  // another idea
--
- EricAnderton at yahoo

Ivan Senji wrote:
> Boris Kolar wrote:
>> PS: I really appreciate all the work that has been put in D. D is going
>> to be a much needed successor of C++. I don't see D as competitor to
>> Java or C# (I think that when using .NET or JVM is an option, D would
>> be a terrible choice), but it sure can rock the non-VM world!
>
> Why not? I think that D could rock the .NET world too. I always wished
> (and still do) for a d.net implementation.
.NET is not the solution to every problem. There will be other library
approaches that may even perform better, using features of D that
make it like .NET on steroids. I'd bet that some time after an official,
fixed D 1.0 specification, libraries will pop up to serve the same needs
as .NET or J2EE and the like.
Apart from that... hey, you're posting this in the official D newsgroup ;) .
Kind regards,
Alex

"Pragma" <ericanderton@yahoo.removeme.com> wrote in message
news:elmhtj$1qv7$1@digitaldaemon.com...
> Well, I think it's one of the reasons why we don't have ctors for structs
> right now. The preferred syntax for a "struct ctor" would probably be
> this:
>
> S foo = S(a,b,c);
>
> Which is indistinguishable from "static opCall". Throwing 'new' in there
> wouldn't work either, since that would be a dynamic allocation:
>
> S* foo = new S(a,b,c);
>
> So that leaves us with "something else" that provides both a way to invoke
> a ctor, yet allocates the data on the stack and doesn't force you to
> create an additional copy:
>
> S foo(a,b,c); // c++ style
> S foo = stackalloc S(a,b,c); // alloca() style (in place of new)
> S foo = new(stack) S(a,b,c); // another idea
Personally for in-place stack allocation, I like the C++ style the best.
It's the simplest to deal with for the compiler (it's 100% obvious that it's
being created on the stack and there's no need for optimization to take care
of anything). And it makes sense -- it says "make this a local variable,
but call the constructor on it as well." There's no assignment so it
doesn't look like any copying is going on.

Pragma wrote:
> Foo f = Foo(1,2,3);
>
> That's better, but look at what's really happening here. Inlining and
> compiler optimization aside, the 'constructor' here creates a Foo on the
> stack which is then returned and *copied* to the destination 'f'.
About this optimization business, is this an issue? Since Walter stated
that such copies are optimized away (trivially?), my assumption was that
the syntax as it is now relies on this optimization being present. Or to
put it in other words, static opCall would not be supported if there was
no such optimization possible.
Perhaps it is similar to how the use of functors with templates in C++
relies on inlining; the STL would be very slow without such optimizations.
My question is whether it is reasonable to make this assumption, or can
you put compiler optimization aside?

"Lutger" <lutger.blijdestijn@gmail.com> wrote in message
news:elmjl8$1tdg$1@digitaldaemon.com...
> About this optimization business, is this an issue? Since Walter stated
> that such copies are optimized away (trivially?), my assumption was that
> the syntax as it is now relies on this optimization being present. Or to
> put it in other words, static opCall would not be supported if there was
> no such optimization possible.
> Perhaps it is similar to how the use of functors with templates in C++
> rely on inlining, STL would be so slow without such optimizations.
>
> My question is if it is reasonable to make this assumption or can you put
> compiler optimization aside?
The impression I get from Walter is that _eeeevery_ compiler has
optimization, so it's a nonissue. :P
Optimization should be an entirely optional pass. Making language features
rely on it seems hackish at best.

Jarrett Billingsley wrote:
> "Lutger" <lutger.blijdestijn@gmail.com> wrote in message
> news:elmjl8$1tdg$1@digitaldaemon.com...
>
>> About this optimization business, is this an issue? Since Walter stated
>> that such copies are optimized away (trivially?), my assumption was that
>> the syntax as it is now relies on this optimization being present. Or to
>> put it in other words, static opCall would not be supported if there was
>> no such optimization possible.
>> Perhaps it is similar to how the use of functors with templates in C++
>> rely on inlining, STL would be so slow without such optimizations.
>>
>> My question is if it is reasonable to make this assumption or can you put
>> compiler optimization aside?
>
> The impression I get from Walter is that _eeeevery_ compiler has
> optimization, so it's a nonissue. :P
>
> Optimization should be an entirely optional pass. Making language features
> rely on it seems hackish at best.
Exactly.
Moreover, it's not always possible to inline or optimize even by a
compiler that can do it well, so it *must* be optional by definition.
Also there are some rather significant "edge cases" involved here. What
about libraries, or reflection?
--
- EricAnderton at yahoo

Jarrett Billingsley wrote:
> "Lutger" <lutger.blijdestijn@gmail.com> wrote in message
> news:elmjl8$1tdg$1@digitaldaemon.com...
>
>> About this optimization business, is this an issue? Since Walter stated
>> that such copies are optimized away (trivially?), my assumption was that
>> the syntax as it is now relies on this optimization being present. Or to
>> put it in other words, static opCall would not be supported if there was
>> no such optimization possible.
>> Perhaps it is similar to how the use of functors with templates in C++
>> rely on inlining, STL would be so slow without such optimizations.
>>
>> My question is if it is reasonable to make this assumption or can you put
>> compiler optimization aside?
>
> The impression I get from Walter is that _eeeevery_ compiler has
> optimization, so it's a nonissue. :P
>
> Optimization should be an entirely optional pass. Making language features
> rely on it seems hackish at best.
>
>
Can you explain why? 'Rely' in this context doesn't mean the language is
broken, right? It just means it is slower, but isn't that expected from a
non-optimizing compiler anyway?

Walter Bright wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>
>> Classes are different from structs in two essential ways:
>>
>> 1. Polymorphism
>> 2. Referential semantics
>>
>> The two are actually interdependent, as you can't have polymorphism
>> comfortably unless you have reference semantics.
>
>
> That's one of the things I felt in my bones, but was unable to put my
> finger on it.
I feel it in my bones too. It hurts when it rains :o).
Andrei

"Lutger" <lutger.blijdestijn@gmail.com> wrote in message
news:elmkes$1ult$1@digitaldaemon.com...
> Can you explain why? 'Rely' in this context doesn't mean the language is
> broken right? It just means it is slower, but isn't that expected from a
> non-optimizing compiler anyway?
Yes, I guess that's true. But if a simple addition, i.e.
x = a + b;
compiled to
mov _TEMP1, a
mov _TEMP2, b
add _TEMP1, _TEMP2
mov x, _TEMP1
instead of
mov x, a
add x, b
it'd still be semantically correct, but would it make sense?
In the same way, I don't see why the compiler should introduce a needless
bit-copy of a (possibly large) structure which can *always* be optimized out
when it would be much simpler to skip the bit-copy in the first place.
It's an optimization which can always be performed, and so should not be an
optimization. It should be the default behavior.

Boris Kolar wrote:
> == Quote from Walter Bright (newshound@digitalmars.com)'s article
>
>>Andrei Alexandrescu (See Website For Email) wrote:
>>
>>>Classes are different from structs in two essential ways:
>>>
>>>1. Polymorphism
>>>2. Referential semantics
>>>
>>>The two are actually interdependent, as you can't have polymorphism
>>>comfortably unless you have reference semantics.
>>
>>That's one of the things I felt in my bones, but was unable to put my
>>finger on it.
>
>
> Actually, polymorphism does not imply referential semantics (nor does
> referential semantics imply polymorphism, of course).
>
> 1) Value semantics can be achieved with references as well, by using
> copy-on-write strategy (or by making classes immutable). All that
> without sacrificing polymorphism.
Of course. It's a "true but uninteresting" fact. Even in current D you
can think that int is a reference and that ++i rebinds i to another
value. But when talking about reference semantics, we mean that the object
is unique and the references to it are many. Doing copy-on-write takes the
reference-ness out of references, so of course they then start behaving
like values.
> 2) Polymorphism can be achieved without references as well, but then
> the size of struct could no longer be determined at compile-time. In
> other words, function 'sizeof' could no longer be parameterless. Or,
> alternatively, every such 'polymorphic struct' would have to contain
> two pointers: VMT and additional 'data' pointer (which is actually
> pretty close to using references anyway).
Yup, that's reference all right. Some Java implementations do this.
Andrei