I'm referring to...
digitalmars.D/26148
"On the other hand, look at C++ "const". Nothing about "const" says that
some
other thread or reference cannot change the value out from under you at any
moment. Furthermore, it can be cast away and modified anyway. The semantic
value of C++ "const" is essentially zip. This is why I am often flummoxed
why it is given such weight in C++."
In contrast, Walter's idea is true "deep" immutability:
digitalmars.D/26096
"Deep immutable parameters would have unchanging values for the scope of
that
parameter, and also every sub-object reachable by that parameter would also
be unchangeable. The values wouldn't change even by another reference or
another thread. (This is quite unlike C++ "const".)"
In other words, Walter's idea is "deep"; C++'s is not.
Regan
On Tue, 5 Jul 2005 19:12:19 -0700, Andrew Fedoniouk
<news terrainformatica.com> wrote:


"Walter" <newshound digitalmars.com> wrote in message
news:daf95o$15e6$1 digitaldaemon.com...
Thanks for the references, those are good reads.
Yep, and the best thing is that they are brief - 30 slides.
"Brevity is the sister of talent," as they say.
Andrew.

Case #5 (Ben, Walter) is not clear for such a Shallow/Deep classification.
Note in/out/final wouldn't be type modifiers. Actually that could make the
'out' suggestion I had impractical - but since that was a brainstorming
request I didn't bother thinking if the idea was implementable or not ;-)
Also Walter's suggestion would apply to any reference-based semantics - not
just arrays and pointers.

Attached is a classification table of results we've got on round III.
Clearly there are just two types of constness proposed:
Deep (C++) - I assume it is proposal #1;
and Shallow (C#, Delphi, etc.) - proposals #2, #3, #4.
Case #5 (Ben, Walter) is not clear for such a Shallow/Deep classification.
A simple description of the constness types (immutability types) is in the
attached document.
Javari language - Java extended with readonly and mutable keywords:
"A Practical Type System and Language for Reference Immutability"
http://pag.csail.mit.edu/~mernst/pubs/ref-immutability-oopsla2004-slides.pdf

This document pretty much describes exactly the 'readonly' idea I have
been posting for the past few weeks.
Minor differences include:
- They've detailed how to flag methods as 'readonly safe'
- They've detailed a downcast mechanism 'mutable'.
- I used 'in' rather than 'readonly' to identify readonly function
arguments.
- They restricted the idea to references.
The "Dynamic checking for downcasts" section describes my runtime readonly
flag concept (restricted to references).
Regan


Regan,
general idea: if something can be done at compile time,
then it should be done there. As you can see, there is a penalty
for runtime readonly checks ( <= 10% ) - not good.
As stated in the first article, it is possible to prove immutability
in "99.something%" of cases. I think that is enough.
I even think that even 50% of cases (shallow) at compile time would be
extremely nice.
IMHO: a shallow implementation is a) essential, b) easily implementable,
c) effective and d) honest. Yes, it is a compromise, but as an exit condition
for v1.0 it is sufficient. It also allows building a full immutability
system in v2.0.
IMHO, IMHO and one more time IMHO.
Andrew.


Regan,
general idea: if something can be done at compile time,
then it should be done there.

Yep.

As you may see there is a penalty
on runtime readonly checks ( <= 10% )

Yep.

- not good.

You cannot say that without first weighing the cost against the benefit.
The cost is ____?
The benefit is ____?
Therefore, as cost > benefit, not good.

it is possible to prove immutability
in "99.something %". I think it is enough.

Where? I must have missed that bit.

I even think that even 50% of cases (shallow) at compile time would be
extremely nice.

Define "shallow", because obviously we have different definitions of what
exactly it means.
"Deep" as Walter used it earlier appeared to me to mean "completely and
utterly": in the case where he used it, it guaranteed immutability,
essentially for the lifetime of the variable/data, from all references.

IMHO: a shallow implementation is a) essential, b) easily implementable,
c) effective and d) honest. Yes, it is a compromise, but as an exit
condition for v1.0 it is sufficient. It also allows building a full
immutability system in v2.0.
IMHO, IMHO and one more time IMHO.

Once you define "shallow" I'm sure I'll know what you mean, and I'll
probably agree.
Regan


Regan,
general idea: if something can be done at compile time,
then it should be done there.

Yep.

As you may see there is a penalty
on runtime readonly checks ( <= 10% )

Yep.

- not good.

You cannot say that without first weighing the cost against the benefit.
The cost is ____?
The benefit is ____?
Therefore, as cost > benefit, not good.

Obviously it is hard to provide numbers here.
But the motivation for doing constness solely at
compile time (CT) is the same as the motivation for having natively
compilable code versus bytecode.
Having readonly violations at runtime is not desirable:
char[] s = "hello";
... getting an AV in code far, far away (and perhaps not always)...
s[0] = 0;
... is not acceptable at all if you can prevent it by declaring
readonly char[] s = "hello";
Again, this is not about 100% readonly protection.
A reasonable compromise, but without delayed
runtime checks, please.

it is possible to prove immutability
in "99.something %". I think it is enough.

Where? I must have missed that bit.

Mea culpa, this number is about completely different stuff.

I even think that even 50% of cases (shallow) at compile time would be
extremely nice.

Define "shallow", because obviously we have different definitions of what
exactly it means.
"Deep" as Walter used it earlier appeared to me to mean "completely and
utterly": in the case where he used it, it guaranteed immutability,
essentially for the lifetime of the variable/data, from all references.

Immutability is a well-known term, introduced I believe in
Donald E. Knuth, The Art of Computer Programming: Fundamental Algorithms
(Vol. 1), decades ago. I mean the problem is well known and has been
discussed many times: "utter immutability" can be verified only to some
extent in a reasonable amount of compilation time,
and only for pure (abstract) languages.
Again, deep/shallow immutability is a term and has nothing in common with
the probability of verification. E.g. a language can have shallow
immutability which is statically verifiable in 100% of cases.
Or not. It depends on the implementation and on language features like
const_cast<> in C++.

IMHO: a shallow implementation is a) essential, b) easily implementable,
c) effective and d) honest. Yes, it is a compromise, but as an exit
condition for v1.0 it is sufficient. It also allows building a full
immutability system in v2.0.
IMHO, IMHO and one more time IMHO.

Once you define "shallow" I'm sure I'll know what you mean, and I'll
probably agree.

Page 16 in:
http://www-sop.inria.fr/everest/events/cassis05/Transp/poll.pdf
There it is about classes. I propose to use the same scheme,
but only for arrays and pointers - do not propagate constness in depth.
Such immutability can easily be verified 100%, and IMHO it is a reasonable
compromise.
In fact, the D compiler already has everything in place to implement it.
Andrew.

Again, deep/shallow immutability is a term and has nothing in common with
the probability of verification. E.g. a language can have shallow
immutability which is statically verifiable in 100% of cases.
Or not. It depends on the implementation and on language features like
const_cast<> in C++.

Not to beat a fairly-dead horse, but const_cast<> is the least of the
worries:
char x;
const char* y = &x;
x = 'a';
With Walter's 'in' proposal such code wouldn't be allowed, but I would be
surprised if the compiler could catch it. I think it would help to
distinguish between a readonly view of a shared resource and an immutable
resource.


Ben, I understand the intention. Theoretically such verification is
possible, but practically it is too complex a task to consider.
In the example you've provided, the variable is being aliased, which
immediately demands verification not only of all write points but also of
all read points for each resource.

I think it would help to distinguish between a readonly view of a shared
resource and an immutable resource.

Yep. Excellent point.
Walter is saying that control of immutable resources in C++
is impossible (and he is right here), so D does not need any immutability
at all - not for immutable
resources and not for immutable observers (const in C++).
And the latter corollary is what is causing the dispute here.
Immutable observers make real sense, especially in the context of D's arrays.
D has the same notation for an array (resource) and a slice (array observer),
and this situation highly demands a notion of immutable observers - at least
for arrays and pointers.
Andrew.


This is an important distinction I think many people might miss at first
glance/read.
You can have immutable data i.e. the original copy/source of the data is
itself immutable. In this case it's impossible to have a mutable reference
to that data as the data simply cannot be modified.
You can have mutable data, and in that case mutable references to that
data. But you can also have an immutable reference to mutable data: while
the data may be mutable, it cannot be modified *via* the immutable reference.
Something which causes confusion... The phrase "immutable reference" can
be used to mean a reference which cannot change, or a reference to data
which cannot change (I have used the latter above).
The terms Andrew has used above: resource, observer etc are then useful to
highlight the differences.
An "observer" is an immutable reference, or to be clear "a reference which
cannot be used to modify the data", it simply observes the data.
A "resource" is the data itself.
A "reference" is what we have currently, a reference which can be used to
modify the data.
Regan



I can still hear the horse whinnying, so I had to beat it some more (strong
horse)..
Maybe we are expecting too much functionality to start out with? Maybe by
tightening the rules on what 'readonly' (ro) vars. can be and how they are used,
we could at least get some of what we want and have the compiler (with not
unreasonable effort) enforce this immutability so it meets Walter's criterion of
having semantic value?
For example, using Ben and Walter's earlier ideas and syntax: in, out and final
(except the compiler would have to throw errors, not just warnings):
- Assigning something to an 'ro' var. would demote the rvalue to readonly for
the rest of that scope. That along with the rules below should cover the example
for both pointers and implicit references.
- After initialization, ro vars. can't be assigned to or dereferenced as an
lvalue, i.e.:
// Verboten because ro p, 'object' and 'arr' are dereferenced
*p = 10;
object.member = x;
arr[0][10] = y;
- declaration and initialization of final locals would have to be in one
statement so the compiler wouldn't have to keep track of whether they have
been initialized.
- 'out' return values could only be assigned to a final var.
- ro vars. can only be passed via an ro param.
- The reference or cast operators couldn't be used on an ro rvalue.
- Methods couldn't be called on ro objects - even including byval structs
passed with 'in'. Member vars. could be used as rvalues under the rules above.
- Only ro arrays of value types or arrays of arrays could be used as the
aggregate of foreach statements (other ro objects couldn't be the aggregate),
and the value expression would have to be ro as well:
void foo(in X[] arr)
{
foreach(in X x; arr){...}
}
- Value types could also be specified as ro.
Maybe (probably) I'm forgetting something(s) and this already might seem pretty
restrictive, but I'm guessing that we would find a lot of uses for even very
tightly controlled ro vars. and the right set of rules would keep it from
becoming a nightmare for compiler implementors.
- Dave

Regan,
general idea: if something can be done at compile time,
then it should be done there.

Yep.

As you may see there is a penalty
on runtime readonly checks ( <= 10% )

Yep.

- not good.

You cannot say that without first weighing the cost against the benefit.
The cost is ____?
The benefit is ____?
Therefore, as cost > benefit, not good.

Obviously it is hard to provide numbers here.
But the motivation for doing constness solely at
compile time (CT) is the same as the motivation for having natively
compilable code versus bytecode.

Of course, I agree that CT const is better than RT const. However, if
you're saying RT is useless, well..
If it's not possible to catch "a reasonable percentage" (I have no exact
figures either) of cases with CT in D, then an RT solution that can catch
the remaining cases may be of benefit.
As I tried to suggest above, it's a cost-benefit equation.

Having readonly violations at runtime is not desirable:
char[] s = "hello";
... getting an AV in code far, far away (and perhaps not always)...
s[0] = 0;
... is not acceptable at all if you can prevent it by declaring
readonly char[] s = "hello";

I agree.
Using my idea, the "readonly" above wouldn't be required; the original code
would produce a readonly violation. The reason is that the
readonly flag of the constant "hello" would be passed to "s" upon
assignment (assuming we decide constant string literals are readonly).

Again, this is not about 100% readonly protection.
A reasonable compromise, but without delayed
runtime checks, please.

That's your personal choice. I'm all for choice, which is why I suggested
the runtime solution be optional, enabled/disabled by compile-time flags
etc. It's exactly comparable to array bounds checking and other DBC
features disabled for release mode.
In other words, if you don't want it, you won't use it. If I want it, I will
use it. Everyone is happy.

I even think that even 50% of cases (shallow) at compile time would be
extremely nice.

Define "shallow", because obviously we have different definitions of what
exactly it means.
"Deep" as Walter used it earlier appeared to me to mean "completely and
utterly": in the case where he used it, it guaranteed immutability,
essentially for the lifetime of the variable/data, from all references.

Immutability is a well-known term, introduced I believe in
Donald E. Knuth, The Art of Computer Programming: Fundamental Algorithms
(Vol. 1), decades ago. I mean the problem is well known and has been
discussed many times:

I am not confused about the definition of "immutable", just "shallow" and
"deep" as I've seen them used in this thread (and others) several times by
different people and they (apparently) meant different things in each case.
e.g. Walter's use of "deep" compared to your use here for C++ (which we
agree is different - yet both are called "deep"??)

IMHO: a shallow implementation is a) essential, b) easily implementable,
c) effective and d) honest. Yes, it is a compromise, but as an exit
condition for v1.0 it is sufficient. It also allows building a full
immutability system in v2.0.
IMHO, IMHO and one more time IMHO.

Once you define "shallow" I'm sure I'll know what you mean, and I'll
probably agree.

Page 16 in:
http://www-sop.inria.fr/everest/events/cassis05/Transp/poll.pdf
There it is about classes. I propose to use the same scheme,
but only for arrays and pointers - do not propagate constness in depth.
Such immutability can easily be verified 100%, and IMHO it is a reasonable
compromise.
In fact, the D compiler already has everything in place to implement it.

Ok, so the definition of "shallow" is essentially: an object that cannot
be modified directly, but may be modified by obtaining (or having previously
obtained) a reference to one of the object's sub-components (typically via
a member function returning them).
And "deep" is in turn what Walter described here:
digitalmars.D/26096
"Deep immutable parameters would have unchanging values for the scope of
that
parameter, and also every sub-object reachable by that parameter would also
be unchangeable. The values wouldn't change even by another reference or
another thread. (This is quite unlike C++ "const".)"
In which case my idea is "shallow" immutability with additional runtime
DBC-style "deep" immutability checks. A compromise, if you will.
Regan

Yes, there are cases when you need deep immutability.
Here is an example of how a COW string type might be implemented
if D had opAssign/ctor/dtor for structs.
It is a sort of deep immutability - DI with runtime checks.
struct string
{
    private struct data
    {
        int refcnt;
        int length;
        // the chars are stored immediately after this header
        char* chars() { return cast(char*)(this + 1); }
    }
    private data* d;

    void opIndexAssign(char c, int idx)
    {
        if (d.refcnt > 1)      // buffer is shared, so before writing
            d = copyData(d);   // make a unique copy (copy-on-write);
                               // copyData is assumed to drop our
                               // claim on the old buffer
        d.chars[idx] = c;
    }

    void opAssign(string s)    // assuming opAssign is there
    {
        if (--d.refcnt == 0)   // we were the sole owner of the data
            delete d;
        d = s.d;
        ++d.refcnt;            // do not dup, just increase refcnt
    }

    void opAssign(char#[] s)
    {
        if (--d.refcnt == 0)   // release our old buffer first
            delete d;
        d = allocData(s);      // the famous dup
    }

    char#[] opSlice()          // for temporary use
    {
        return d.chars[0 .. d.length]; // no dup
    }

    ~this()
    {
        if (--d.refcnt == 0)
            delete d;          // deterministic memory management
    }
}
Interesting observation:
it is possible to implement allocData as an allocation
outside of the GC memory space, using malloc/free.
In that case the GC will not scan (?) the string data, so
string-intensive applications may benefit from this even
further.
Comment: if there is any sort of char#[], then the pressure
for having such strings is not so high. It depends
on the particular use cases, to be short.
IMHO #2387-bis: if D gets immutable
arrays/pointers and opAssign/ctor/dtor, then
anyone can consider D a full superset of
C++ and Java together - a feature-complete state.
And keeping in mind that D already has
better templates, a clean grammar
and other nice features, D v1.0 will
really be the Thing.
Andrew.