Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html


Wow! I know of one other language that (almost) implements this.
Does 'scope' mean any kind of scope, and not just function scope? For
example, can it include module scope? ... if/while/for/foreach scope? ...
block scope?
And I assume 'statement' can be a block of statements ...
on_exit_scope { stmt1; stmt2; ... stmtn;}
--
Derek Parnell
Melbourne, Australia

Progress. It's a database 4GL. It handles the on_exit and on_failure situations,
but not to the extent that D now does with the 'statement' flexibility. A
very neat solution.
--
Derek Parnell
Melbourne, Australia

Does 'scope' mean any type scope, and not just function scope? For
example, can it include module scope? ... if/while/for/foreach scope? ...
block scope?

Yes.

On module scope? No, it doesn't actually work on module scope, right?
Wouldn't make sense, as a module scope is a declaration scope, and not an
instruction scope/block.
--
Bruno Medeiros - CS/E student
"Certain aspects of D are a pathway to many abilities some consider to
be... unnatural."

Ruby's rescue thing looks like try-catch. on_scope is much more than
try-catch, or I wouldn't have implemented it <g>.

on_scope is quite cute, and certainly cleaner than the C++ implementation of
ScopeGuard. Compared to ctor/dtor RAII, though, it still has the same problem as
try-catch{-finally} - a naive use of a resource is wrong by default, and the
programmer has to do something extra to make it right.
When I first saw Andrei's paper it struck me that what made things painful in
C++ was the lack of usable closures. Since D is pretty good in this regard,
could the same effect have been achieved with a less sugary but more regular
approach reusing RAII-auto syntax, e.g.
# import std.scopeguard;
# auto Guard onOk = new OnScopeSuccessGuard(&mySuccessFunc);
# auto Guard onFail = new OnScopeFailureGuard(function { /* ad hoc code */ });
with something like C++'s std::uncaught_exception in the Guard subclass
destructor implementations to discriminate exit conditions?
cheers
Mike
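Mike's Guard idea can be sketched in modern C++. This is a hypothetical illustration, not an existing library: it assumes C++17's std::uncaught_exceptions() (the successor to the std::uncaught_exception test he mentions) to discriminate exit conditions in the destructor, and the ScopeGuard/run names are invented for the demo.

```cpp
#include <cassert>
#include <exception>
#include <functional>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// RAII guard: the destructor runs a stored closure, and compares the number
// of in-flight exceptions now vs. at construction to tell whether the scope
// is being left normally or via stack unwinding.
class ScopeGuard {
public:
    enum class When { Exit, Success, Failure };

    ScopeGuard(When when, std::function<void()> fn)
        : when_(when), fn_(std::move(fn)),
          exceptions_on_entry_(std::uncaught_exceptions()) {}

    ~ScopeGuard() {
        bool unwinding = std::uncaught_exceptions() > exceptions_on_entry_;
        if (when_ == When::Exit ||
            (when_ == When::Failure && unwinding) ||
            (when_ == When::Success && !unwinding))
            fn_();  // destructors must not throw, so fn_ should be noexcept in practice
    }

private:
    When when_;
    std::function<void()> fn_;
    int exceptions_on_entry_;
};

// Demo: record which guards fire when the scope succeeds or fails.
std::vector<std::string> run(bool fail) {
    std::vector<std::string> log;
    try {
        ScopeGuard onOk(ScopeGuard::When::Success, [&] { log.push_back("success"); });
        ScopeGuard onFail(ScopeGuard::When::Failure, [&] { log.push_back("failure"); });
        if (fail) throw std::runtime_error("oops");
    } catch (const std::exception&) { /* swallowed for the demo */ }
    return log;
}
```

This is essentially what Mike's OnScopeSuccessGuard/OnScopeFailureGuard pair would do; D's built-in statements avoid the boilerplate of naming and constructing the guard objects.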


Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Again the release is very nice!
A question: I seemed to miss the scope guard discussion (whatever). Can you
enlighten me with an example (or guide me with a link) to illustrate the
usefulness of this feature?
Thanks,
P.S.: In the new docs for the scope guard stuff there is a TYPO at "[snip] close
of the scope, they also are interleaved with the OnScopeStatements in the
reverse *lexeical* order in which they appear."
Tom;

A question: I seemed to miss the scope guard discussion (whatever). Can you
enlighten me with an example (or guide me with a link) to illustrate the
usefulness of this feature?

This article has some good examples:
www.digitalmars.com/d/exception-safe.html
Also, do a google groups search on "Alexandrescu on_scope_exit" for a good
thread on it.

Thanks,
P.S.: In the new docs for the scope guard stuff there is a TYPO at "[snip] close
of the scope, they also are interleaved with the OnScopeStatements in the
reverse *lexeical* order in which they appear."

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Nice feature, but rather ugly to look at, although I'm not sure how
something like that could be made to look pretty. Nonetheless the scope
guard looks like something I should read up on.
Thanks for another good release.
-JJR

finally does work for on_scope_exit, but it leaves on_scope_success and
on_scope_failure hanging.
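To make that concrete, here is a hedged sketch of how the three guards map onto plain try/catch (in C++, since D's syntax is the subject under discussion; a real lowering would rethrow after running the failure and exit code, whereas this demo swallows the exception so the result can be inspected):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <vector>

// on_scope_exit    -> runs on both paths (like finally)
// on_scope_success -> runs only when the scope is left normally
// on_scope_failure -> runs only when an exception passes through
std::vector<std::string> with_guards(bool fail) {
    std::vector<std::string> log;
    try {
        if (fail) throw std::runtime_error("oops");
        log.push_back("success");   // on_scope_success
        log.push_back("exit");      // on_scope_exit (normal path)
    } catch (...) {
        log.push_back("failure");   // on_scope_failure
        log.push_back("exit");      // on_scope_exit (error path)
        // a real lowering would rethrow here
    }
    return log;
}
```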

Similar to allowing multiple "finally" statements, even without "try", you
could allow multiple "catch", also without "try":
finally writefln("done");
dofoo();
catch dofoo_undo();
dobar();
where the catch would also rethrow the exception being caught.
Now we just need something sensible for on_scope_success and we're done ; )
L.

Now we just need something sensible for on_scope_success and we're done ; )

"leave" seems like a good enough word, but since on_scope_failure and
on_scope_success are mutually exclusive, we could use ~catch, or !catch.
Pretty ugly too, but no new keywords.
on_scope_exit => finally
on_scope_failure => catch
on_scope_success => ~catch

Honestly, as ugly as the syntax is, I must admit I like the grouping of
the start and end conditions together.
That's one thing I've always thought strange about programming; people
talk about functions and classes and all the different aesthetic and
linear/non-linear relationships between them, but in the end it is all
completely and totally linear, when for many cases that doesn't make sense.
-[Unknown]

Honestly, as ugly as the syntax is, I must admit I like the grouping of
the start and end conditions together.
That's one thing I've always thought strange about programming; people
talk about functions and classes and all the different aesthetic and
linear/non-linear relationships between them, but in the end it is all
completely and totally linear, when for many cases that doesn't make sense.

Consider the for loop:
for (expr; expr; expr)
The 3rd expression is executed at the *end* of the loop, yet it is placed at
the beginning. So there is precedent for the utility of putting code where
it conceptually belongs rather than where it is executed.

I'm not at all disagreeing; but aside from language constructs like
looping, which seem to be (in my experience) the easiest thing to get a
good grasp on for newcomers, there aren't many ways in which (general)
programming is non-linear.
I am saying this is strange; tasks, in many cases, are not linear.
Obviously it must map well to something the computer understands, but
that can often be handled by the compiler (as in this case.)
For example, academic programming typically teaches that multiple
returns in a function are evil. This is because it mixes non-linear
programming (not always returning at the very end) with linear programming.
Scope exit and such cases are a good example of a clean way to resolve
this problem without saying that "returns and continues are evil." What's
more, they are logical.
-[Unknown]

Consider the for loop:
for (expr; expr; expr)
The 3rd expression is executed at the *end* of the loop, yet it is placed at
the beginning. So there is precedent for the utility of putting code where
it conceptually belongs rather than where it is executed.

Scope guards are a novel feature no other language has. They're based
on Andrei Alexandrescu's scope guard macros, which have led to
considerable interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Nice feature, but rather ugly to look at; although, I'm not sure how
something like that could be made to look pretty. Nonetheless the scope
guard looks like something I should read up on.
Thanks for another good release.
-JJR

I thought the same. Perhaps the "scope" portion is understood, so that the
keywords become: onsuccess, onfailure, ...?

I thought the same, perhaps the "scope" portion is understood? So that the
keywords become: onsuccess, onfailure, ...?

Personally I think it's better not to overuse simple words and phrases as
keywords. onsuccess/onSuccess etc. are a bit too likely to be chosen as a
function name, especially since these are not common keywords in other
languages.
Nick

I thought the same, perhaps the "scope" portion is understood? So that the
keywords become: onsuccess, onfailure, ...?

Personally I think it's better not to overuse simple words and phrases as
keywords. onsuccess/onSuccess etc. are a bit too likely to be chosen as a
function name, especially since these are not common keywords in other
languages.
Nick

I have to agree with this. The "scope" prefix, however unsightly,
serves its purpose here. Yet I wonder if there is a way to improve upon
that syntax.
-JJR

I agree with Kyle, but so as not to overuse simple keywords, what about
something like "onSuccess:", "onFailure:", etc.?
To me, all the "_" look ugly. Perhaps the author could ask the community for
suggestions when naming new features?
Or perhaps just "onScopeSuccess", "onScopeFailure"...
Kyle Furlong wrote:

John Reimer wrote:

Walter Bright wrote:

Scope guards are a novel feature no other language has. They're based
on Andrei Alexandrescu's scope guard macros, which have led to
considerable interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Nice feature, but rather ugly to look at; although, I'm not sure how
something like that could be made to look pretty. Nonetheless the
scope guard looks like something I should read up on.
Thanks for another good release.
-JJR

I thought the same, perhaps the "scope" portion is understood? So that
the keywords become: onsuccess, onfailure, ...?

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Thanks Walter, very cool.
I like Kyle's suggestion of using onExit, onFailure, onSuccess, so that it
looks a little like JavaScript.
Walter, have you considered publishing new releases on
http://freshmeat.net, like GDC and DStress do?
Freshmeat is the Slashdot equivalent for software.
Knud

That's a nice, circular definition. =P
What's a square?
Well, it's like a rectangle but with even sides.
What's a rectangle?
Well, it's like a square but only opposite side pairs need to be the
same length.
--
Regards,
James Dunne

Still, what is it? I've been there before .. didn't quite get it. Why is
it so popular?!

It's technology and scientific news and covers a lot of 'nerd' interest stuff;
it's highly configurable if you have an account.
Every article has free-form user comments and discussion, which means you not
only see the news article, but can pick up dozens of links to related info, get
opinions on both sides, etc. (Many of the participants are highly uninformed,
but that is hard to avoid on the web...)
It's news, discussion, conversation, and a sort of "commonwealth" community, and
each discussion is anchored off of a "real" news article, book review, or
writeup, usually somewhere else. In this way it's like a mixture of newsgroup
discussions, journalism, and wiki-like community authoring.
Like almost all human communities, the appeal is determined mostly by the types
of people that exist there (i.e. prisons versus country clubs versus rock
concerts). So it's probably very grating if you don't like techie/trekkie/IT
types.
[But it's starting to look commonplace now because everyone copied them. Every
revolution is unthinkable beforehand and obvious afterwards...]
Kevin

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Just came to my mind:
This version is especially neat because it may be simplified to:
void LongFunction()
{
    State save = UIElement.GetState();
    onscope {
        case success: UIElement.SetState(save); break;
        case failure: UIElement.SetState(Failed(save));
    }
    ...lots of code...
}
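For readers wondering what the proposed onscope block would actually do, here is a C++ analogue that spells out the success and failure cases with plain try/catch (UIElement, Failed, and the state strings are hypothetical stand-ins invented for this sketch):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical stand-ins for the UIElement API used in the example.
namespace UIElement {
    std::string state = "normal";
    std::string GetState() { return state; }
    void SetState(const std::string& s) { state = s; }
}
std::string Failed(const std::string& s) { return "failed:" + s; }

// The proposed onscope { case success/failure } block, desugared:
// on success the saved state is restored; on failure the failed state is set.
void LongFunction(bool fail) {
    std::string save = UIElement::GetState();
    try {
        UIElement::SetState("busy");          // ...lots of code...
        if (fail) throw std::runtime_error("oops");
        UIElement::SetState(save);            // case success
    } catch (...) {
        UIElement::SetState(Failed(save));    // case failure
    }
}
```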

Just came to my mind:
This version is especially neat because it may be simplified to:
void LongFunction()
{
State save = UIElement.GetState();
onscope {
case success: UIElement.SetState(save); break;
case failure: UIElement.SetState(Failed(save));
}
...lots of code...
}

I really like this idea, cons anyone?

I don't like the words 'success' and 'failure' used all over the place.
The meaning of the function's success isn't based only on whether an
exception was thrown. Similarly, the function doesn't have to 'fail'
if an exception was thrown.
I think they should be renamed to represent what they actually do. That
is, 'no exception thrown'/'pass', 'exception thrown'/'catch', 'scope
exited'/'exit'. Also, I think it would be more pleasing to the eye (and
the maintainer) if the effect of the scope guard were more obvious in
the code. Something like:
void LongFunction()
{
    scope (State save = UIElement.GetState())
    catch {
        UIElement.SetState(Failed(save));
    }
    pass {
        UIElement.SetState(save);
    }
    body {
        ... lots of code ...
    }
}
This way, it looks like a function contract, except it applies to any
scope. The catch block is reused but only for purposes of noting that
there was an exception caught. The pass block is quite simply the
'else' to the catch block, and should happen only if no exceptions were
caught. Furthermore, there could be a general 'exit' block.
One could add as many of these blocks as necessary, and the compiler
will guarantee to call them in order of definition, much like the
original solution but not requiring one to think backwards.
bool LongFunction()
{
    bool passed = false;
    scope (State save = UIElement.GetState())
    catch { UIElement.SetState(Failed(save)); writef('0'); }
    pass { UIElement.SetState(save); writef('1'); }
    pass { passed = true; writef('2'); }
    body {
        ... lots of code ...
    }
    exit { writef('3'); }
    return passed;
}
So, if an exception were thrown in '... lots of code ...', you'd see
'03' and the function would return false. However, if no exception were
thrown you'd see '123' and the function would return true. This
demonstrates the order of execution across multiple blocks. In this
example, the order of the 'exit' block relative to the 'catch' and
'pass' blocks is important.
Consequently, I don't think the order of the 'body' block should matter
at all, relative to the other blocks; and it should be fixed to only
allow one instance of it. Then, it is a matter of personal/project
style where the 'body' block would fall.
Thoughts on this?
One last thought I had was to remove the parenthesized expression form
of scope (expr) ... and use another block name to represent the
initialization stage, just to be consistent.
--
Regards,
James Dunne

This is the best I've seen, but:
- Why two "pass" blocks?
- The "exit" block would be better before the "body" block, and maybe
replace it with "finally" to save one more keyword.
I'm also assuming that we could do one-liners without the braces:
bool LongFunction()
{
    bool passed = false;
    scope State save = UIElement.GetState();
    catch UIElement.SetState(Failed(save));
    pass UIElement.SetState(save);
    finally writef('3');
    body {
        ... lots of code ...
    }
    return passed;
}

Just to demonstrate that the ordering of the blocks matters, and that
multiple blocks can be defined.

- About "exit" would be better before the "body" block and maybe replace
it with "finally" to save one more keyword.

I noted this was a matter of personal style where the body block goes,
as it does not depend on the order of the other blocks. Personally, I
like seeing the exit block after the body, as it should flow naturally.

I don't like the words 'success' and 'failure' used all over the place.

Yes, I have to agree they are a little misused. Changing them to "pass" for
success, "fail"/"catch" for "failure", and "default" for exit is a good idea.

void LongFunction()
{
scope (State save = UIElement.GetState())
catch {
UIElement.SetState(Failed(save));
}
pass {
UIElement.SetState(save);
}
body {
... lots of code ...
}
}
This way, it looks like a function contract, except it applies to any
scope. The catch block is reused but only for purposes of noting that
there was an exception caught. The pass block is quite simply the
'else' to the catch block, and should happen only if no exceptions were
caught. Furthermore, there could be a general 'exit' block.
One could add as many of these blocks as necessary, and the compiler
will guarantee to call them in order of definition, much like the
original solution but not requiring one to think backwards.
bool LongFunction()
{
bool passed = false;
scope (State save = UIElement.GetState())
catch { UIElement.SetState(Failed(save)); writef('0'); }
pass { UIElement.SetState(save); writef('1'); }
pass { passed = true; writef('2'); }
body {
... lots of code ...
}
exit { writef('3'); }
return passed;
}

IMHO now it looks overcomplicated. You are explicitly declaring a new scope
using body {}. That is exactly what the whole idea was invented to prevent.
Using the "catch" keyword is misleading. The keyword scope (expr.) now works as
try { expr. }. And I have to say that I needed to read this code a few times to
understand it. It looks to me like reinventing the try { } catch { } finally { }
way of handling exceptions. Sorry, but I don't like it.

I don't like the words 'success' and 'failure' used all over the place.

Yes. I have to agree they are little missused. Changing them to "pass" as
success, "fail"/"catch" as "failure" and "default" as exit is good idea.

Yeah, after posting I realized this was the best idea out of my post.
Really, there's no *nice* way of fixing the syntax problem. You have a
fixed statement keyword - nobody will like what you name it; or you have
the curly-brace blocks nesting - which looks busy.

void LongFunction()
{
scope (State save = UIElement.GetState())
catch {
UIElement.SetState(Failed(save));
}
pass {
UIElement.SetState(save);
}
body {
... lots of code ...
}
}
This way, it looks like a function contract, except it applies to any
scope. The catch block is reused but only for purposes of noting that
there was an exception caught. The pass block is quite simply the
'else' to the catch block, and should happen only if no exceptions were
caught. Furthermore, there could be a general 'exit' block.
One could add as many of these blocks as necessary, and the compiler
will guarantee to call them in order of definition, much like the
original solution but not requiring one to think backwards.
bool LongFunction()
{
bool passed = false;
scope (State save = UIElement.GetState())
catch { UIElement.SetState(Failed(save)); writef('0'); }
pass { UIElement.SetState(save); writef('1'); }
pass { passed = true; writef('2'); }
body {
... lots of code ...
}
exit { writef('3'); }
return passed;
}

IMHO now it looks overcomplicated. You are explicitly declaring new scope
using body {} . This is for what whole idea was invented - to prevent it.
Using "catch" keyword is missleading. Keyword scope (expr.) works now as try
{ expr. } . And I have to say that I needed to read this code few times to
understand it. It looks for me as reinventing try { } catch { } finally {}
way of handling exceptions. Sorry, but I don't like it.

Reusing the catch keyword was not intended to be misleading, but instead
to keep others from whining about introducing new keywords. Though,
technically there's no reason for any keyword to actually be a reserved
word in the first place... ;)
Yes, it is somewhat analogous to try/catch/finally, but it is more
explicit in its meaning.
Thanks for your opinion.
--
Regards,
James Dunne

Just came to my mind:
This version is especially neat because it may be simplified to:
void LongFunction()
{
State save = UIElement.GetState();
onscope {
case success: UIElement.SetState(save); break;
case failure: UIElement.SetState(Failed(save));
}
...lots of code...
}

I really like this idea, cons anyone?

I don't like the words 'success' and 'failure' used all over the place.
The meaning of the function's success isn't based on only if no
exceptions were thrown. Similarly, the function doesn't have to 'fail'
if an exception were thrown.
I think they should be renamed to represent what they actually do. That
is, 'no exception thrown'/'pass', 'exception thrown'/'catch', 'scope
exited'/'exit'. Also, I think it would be more pleasing to the eye (and
the maintainer) if the effect of the scope guard were more obvious in
the code. Something like:
void LongFunction()
{
scope (State save = UIElement.GetState())
catch {
UIElement.SetState(Failed(save));
}
pass {
UIElement.SetState(save);
}
body {
... lots of code ...
}
}
This way, it looks like a function contract, except it applies to any
scope. The catch block is reused but only for purposes of noting that
there was an exception caught. The pass block is quite simply the
'else' to the catch block, and should happen only if no exceptions were
caught. Furthermore, there could be a general 'exit' block.
One could add as many of these blocks as necessary, and the compiler
will guarantee to call them in order of definition, much like the
original solution but not requiring one to think backwards.
bool LongFunction()
{
bool passed = false;
scope (State save = UIElement.GetState())
catch { UIElement.SetState(Failed(save)); writef('0'); }
pass { UIElement.SetState(save); writef('1'); }
pass { passed = true; writef('2'); }
body {
... lots of code ...
}
exit { writef('3'); }
return passed;
}
So, if an exception were thrown in '... lots of code ...', you'd see
'03' and the function would return false. However, if no exception were
thrown you'd see '123' and the function would return true. This
demonstrates the order of execution across multiple blocks. In this
example, the order of the 'exit' block relative to the 'catch' and
'pass' blocks is important.
Consequently, I don't think the order of the 'body' block should matter
at all, relative to the other blocks; and it should be fixed to only
allow one instance of it. Then, it is a matter of personal/project
style where the 'body' block would fall.
Thoughts on this?
One last thought I had was to remove the parenthesized expression form
of scope (expr) ... and use another block name to represent the
initialization stage, just to be consistent.

for any scope:
in
{
}
out
{
}
failure
{
}
success
{
}
body
{
}
Pros: fits into and expands current language structures
Cons: still a bit verbose as compared to current syntax

for any scope:
in
{
}
out
{
}
failure
{
}
success
{
}
body
{
}
Pros: fits into and expands current language structures
Cons: still a bit verbose as compared to current syntax

I'm sorry to say, but IMO this misses the whole point.
The current keywords in, out, body are place-insensitive: they are just blocks
that may be moved relative to one another without changing anything.
The whole point of "scope guarding" is that the expression you type
_registers_ a piece of code (to be called under some conditions) at the place
where it appears.
You gain nothing with such syntax. If you want to deal with the whole scope,
then you can use finally and catch blocks.
Examples:
Current syntax:
void LongFunction()
{
    foo1();
    on_scope_failure clean_what_foo1_did();
    foo2();
    on_scope_failure clean_what_foo2_did();
    foo3();
    on_scope_failure clean_what_foo3_did();
    foo4();
}
What I understand from your proposal:
void LongFunction()
{
    foo1();
    failure {
        on_scope_failure clean_what_foo1_did();
    }
    body {
        foo2();
        failure {
            on_scope_failure clean_what_foo2_did();
        }
        body {
            foo3();
            failure {
                on_scope_failure clean_what_foo3_did();
            }
            body {
                foo4();
            }
        }
    }
}
If you think about:
void LongFunction()
{
    foo1();
    failure {
        clean_what_foo1_did();
    }
    foo2();
    failure {
        clean_what_foo2_did();
    }
    foo3();
    failure {
        clean_what_foo3_did();
    }
    foo4();
}
Then the concept is OK, but you can no longer say this "will fit and expand
the current syntax", because it is used totally differently from the body {}
in {} out {} blocks.
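Dawid's foo1()..foo4() example can be emulated in C++ with an explicit undo stack, which makes the registration semantics concrete: only the cleanups of steps that completed before the failure run, in reverse registration order (the step and clean_fooN names are illustrative, not from any real API):

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// Each completed step pushes its failure cleanup onto an undo stack;
// on failure, the registered cleanups run last-in-first-out.
std::vector<std::string> long_function(int fail_at) {
    std::vector<std::string> log;
    std::vector<std::function<void()>> undo;  // registered failure handlers
    auto step = [&](int n) {
        if (n == fail_at) throw std::runtime_error("step failed");
        log.push_back("foo" + std::to_string(n));
        undo.push_back([&log, n] { log.push_back("clean_foo" + std::to_string(n)); });
    };
    try {
        step(1); step(2); step(3); step(4);
    } catch (...) {
        // run cleanups of the completed steps only, in reverse order
        for (auto it = undo.rbegin(); it != undo.rend(); ++it) (*it)();
    }
    return log;
}
```

So with a failure at step 3, only the cleanups for steps 1 and 2 fire, which is precisely the behavior a block-structured failure {} syntax cannot express without deep nesting.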

for any scope:
in
{
}
out
{
}
failure
{
}
success
{
}
body
{
}
Pros: fits into and expands current language structures
Cons: still a bit verbose as compared to current syntax

I'm sorry to say but IMO this miss the whole point.
Current keywords: in, out, body are place-insensitive . They are just blocks
that may be moved relatively for themselves and this will not change
anything. The whole point in "scope guarding" is that expression you type
_registers_ some piece of code (to be called in some conditions) in place
where it appears.
You gain nothing with such syntax. If you want to deal with whole scope then
you can use finally and catch blocks.
Examples:
Current syntax:
void LongFunction()
{
foo1();
on_scope_failure clean_what_foo1_did();
foo2();
on_scope_failure clean_what_foo2_did();
foo3();
on_scope_failure clean_what_foo3_did();
foo4();
}
What I understand from your proposal:
void LongFunction()
{
foo1();
failure {
on_scope_failure clean_what_foo1_did();
}
body {
foo2();
failure {
on_scope_failure clean_what_foo2_did();
}
body {
foo3();
failure {
on_scope_failure clean_what_foo3_did();
}
body {
foo4();
}
}
}
}
If you think about:
void LongFunction()
{
foo1();
failure{
clean_what_foo1_did();
}
foo2();
failure {
clean_what_foo2_did();
}
foo3();
failure {
on_scope_failure clean_what_foo3_did();
}
foo4();
}
Then concept is OK, but you can no longer say this "will fit and expand
current syntax", because it's used totally diffrent from body {} in {} out
{} blocks.

That is definitely *NOT* what I was saying *AT ALL*. Obviously that is a
terrible syntax.
Here's what it would look like:
Current syntax:
void LongFunction()
{
    foo1();
    on_scope_failure clean_what_foo1_did();
    foo2();
    on_scope_failure clean_what_foo2_did();
    foo3();
    on_scope_failure clean_what_foo3_did();
    foo4();
}
What I understand from your proposal: (how it should have been)
void LongFunction()
failure
{
    clean_what_foo1_did();
    clean_what_foo2_did();
    clean_what_foo3_did();
    clean_what_foo4_did();
}
body
{
    foo1();
    foo2();
    foo3();
    foo4();
}
I think you missed the point that each function call does not introduce a new
scope. Now, it's true that if you wanted the operations foox() to each be in
its own scope, then it would look like this:
void LongFunction()
{
    failure
    {
        clean_what_foo1_did();
    }
    body
    {
        foo1();
    }
    failure
    {
        clean_what_foo2_did();
    }
    body
    {
        foo2();
    }
    failure
    {
        clean_what_foo3_did();
    }
    body
    {
        foo3();
    }
    failure
    {
        clean_what_foo4_did();
    }
    body
    {
        foo4();
    }
}

for any scope:
in
{
}
out
{
}
failure
{
}
success
{
}
body
{
}
Pros: fits into and expands current language structures
Cons: still a bit verbose as compared to current syntax

I'm sorry to say but IMO this miss the whole point.
Current keywords: in, out, body are place-insensitive. They are just blocks
that may be moved relatively for themselves and this will not change
anything. The whole point in "scope guarding" is that expression you type
_registers_ some piece of code (to be called in some conditions) in place
where it appears.
You gain nothing with such syntax. If you want to deal with whole scope then
you can use finally and catch blocks.
Examples:
Current syntax:
void LongFunction()
{
foo1();
on_scope_failure clean_what_foo1_did();
foo2();
on_scope_failure clean_what_foo2_did();
foo3();
on_scope_failure clean_what_foo3_did();
foo4();
}
What I understand from your proposal:
void LongFunction()
{
foo1();
failure {
on_scope_failure clean_what_foo1_did();
}
body {
foo2();
failure {
on_scope_failure clean_what_foo2_did();
}
body {
foo3();
failure {
on_scope_failure clean_what_foo3_did();
}
body {
foo4();
}
}
}
}
If you think about:
void LongFunction()
{
foo1();
failure{
clean_what_foo1_did();
}
foo2();
failure {
clean_what_foo2_did();
}
foo3();
failure {
on_scope_failure clean_what_foo3_did();
}
foo4();
}
Then the concept is OK, but you can no longer say this "will fit and expand
current syntax", because it's used totally differently from the body {} in {}
out {} blocks.

That is definitely *NOT* what I was saying *AT ALL*. Obviously that is a
terrible syntax.
Here's what it would look like:
Current syntax:
void LongFunction()
{
foo1();
on_scope_failure clean_what_foo1_did();
foo2();
on_scope_failure clean_what_foo2_did();
foo3();
on_scope_failure clean_what_foo3_did();
foo4();
}
What I understand from your proposal: (how it should have been)
void LongFunction()
failure
{
clean_what_foo1_did();
clean_what_foo2_did();
clean_what_foo3_did();
clean_what_foo4_did();
}
body
{
foo1();
foo2();
foo3();
foo4();
}
I think you missed the point that each function call does not introduce
a new scope. Now, its true that if you wanted the operations foox() to
each be in its own scope, then it would look like this:
void LongFunction()
{
failure
{
clean_what_foo1_did();
}
body
{
foo1();
}
failure
{
clean_what_foo2_did();
}
body
{
foo2();
}
failure
{
clean_what_foo3_did();
}
body
{
foo3();
}
failure
{
clean_what_foo4_did();
}
body
{
foo4();
}
}

David has brought some things to my attention on IRC that invalidate this
syntax, sorry for the clutter.

In the above:
- is clean_what_foo1_did only called if foo1() has returned successfully
and foo2, foo3, or foo4 failed?
- is clean_what_foo2_did only called if foo2() has returned successfully
and foo3 or foo4 failed?
- is clean_what_foo3_did only called if foo3() has returned successfully
and foo4 failed?
(there is actually no point to clean_what_foo4_did)
Because this is an important part of the scope guard feature: each
on_scope_failure statement registers something which is then called on
failure of the scope in which it is registered. The order in which they
are registered WRT the other code is important.
Regan

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Good point. The intention was to make it similar to a switch (smth.) block.
void LongFunction()
{
    State save = UIElement.GetState();
    register (scope) {
        pass: UIElement.SetState(save); break;
        fail: UIElement.SetState(Failed(save));
    }
    ...lots of code...
}
This looks better. I'm thinking about the "(scope)" part. It's redundant _but_
very informative, and it leaves the door open for expansion.
Currently I like this syntax best. IMO it fits naturally; "register" is an old
(but rarely used) C keyword, and it really means: "register this piece of code
to be called when something happens to the scope".

VERSION I: (yeah, I know ...)
h3r3tic on the #D channel said that VERSION H is too long and too switch-like.
This is a hybrid of the two concepts - the verbose and informative "register"
keyword, and short usage without the switch-like syntax.
void LongFunction()
{
    State save = UIElement.GetState();
    register (scopepass) UIElement.SetState(save);
    register (scopefail) UIElement.SetState(Failed(save));
    ...lots of code...
}

Now you just have to persuade Walter Bright, or write your own
compiler/language :)
Dawid Ciężarkiewicz wrote:

Dawid Ciężarkiewicz wrote:

What do you think? Maybe later we'll come with better ideas.

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

I like the idea, but I don't like the names. Can something else be used instead?
--
Carlos Santander Bernal

I don't see how this is marginally better than try-catch-finally.
Thanks to garbage collection, exception safety is dramatically less
complicated in D than in C++ (in fact, other than the mutex lock, can
anyone think of examples for exception safety in D?). I think the
problem here is exceptions, not exception safety, and I think that
point is illustrated by all the many workarounds for 'exception safety'.
I love that D is "thinking outside the box", but I have to vote against
this one. Instead of improving exception safety, we should be improving
exceptions.
Charlie
Walter Bright wrote:

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Awesome :-) I've been wondering when this would make it into D. It
certainly beats the heck out of the RAII approach.
Sean

This looks like a great new D feature. Curse you. It makes it all that much
worse that I still have to do my day-job programming in C++. Why did you
ever have to invent D anyway? Why couldn't you just let me go on thinking
C++ was a good thing? I thought I was happy then. :-)
Seriously, I read Alexandrescu's article and downloaded the C++
ScopeGuard code. I may end up using it. It's not as nice as D scope guards,
though, because you need to remember to call Dismiss at the end of functions.
Plus, you probably want to avoid returning successfully in the middle of
functions.
Jim

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

cleanup only corresponds with on_scope_exit, not the other two. It also only
runs a function with a local variable as a parameter - i.e. it is the same
thing as RAII. It doesn't allow arbitrary code to be executed, nor manage
things like state of a class member, etc.

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

There is probably room to improve the syntax a bit more (although I too
don't yet see how), but it is already a fine feature.
--
Bruno Medeiros - CS/E student
"Certain aspects of D are a pathway to many abilities some consider to
be... unnatural."

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

This format looks good to me:
scope(exit) foo();
scope(success) bar();
scope(failure) baz();
similar to extern(name), pragma(name), etc, requires one `scope` keyword,
name in () doesn't need to be a keyword but is still treated special, and
doesn't look bad.

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

This format looks good to me:
scope(exit) foo();
scope(success) bar();
scope(failure) baz();
similar to extern(name), pragma(name), etc, requires one `scope`
keyword, name in () doesn't need to be a keyword but is still treated
special, and doesn't look bad.

This gets my vote too.
And then it would look natural, if you ever need more than one function run:
scope(failure) foo();
scope(success) { bar(); haveAParty(); writeHome(); }
scope(exit) yawn();

Yeap. It's nice. Maybe s/success/pass/ s/failure/fail/ would even
improve it
a little.

Good one.

That's kind of neat, too. Having all three alternatives with the same number
of characters makes it tidy:
scope(exit) foo();
scope(pass) bar();
scope(fail) baz();
Except of course, that "pass" might look like "skip" to somebody new to D.

While I don't really like the _'s in on_scope_exit, etc., I don't like
this proposed syntax any better; in fact I think I like it less.
With this proposed syntax it looks like an expression such as an
if/while/for/etc.; however, in all of those the condition is
user-defined. So this proposed syntax would appear inconsistent.
<rant>
And if there's one thing I dislike in languages and libraries, it's
inconsistencies; right up with it are non-descriptive or inconsistent
function names. Don't even get me started on those, I'm looking at you Phobos :)
writefln ... what? write file name? write first line? write f language?
isxdigit ... is x a digit? of course not! why do I need this function?
pardir ... parrot directory? not everyone has the parrot vm.
altsep ... Alt key separator? does this get the Alt key?
ifind ... integer find? iFind? iWork? is this for Mac only?
First, I don't see why all of the above aren't readable. And second, why
don't they follow the D guidelines of camelCase? (I prefer the
PascalCase myself, but I could live with camelCase if it was
consistent). For example:
writeLine(string), writeLine(string, format)
isHexDigit
parentDirectory
windowsAlternateSeparator
FindFirst
The only reason I can see is "it will make it easier for C/C++
programmers." or, it's shorter. The first has little merit, as
templates, operator overloading, streams, reals etc. are all different
(for good reason of course), and C/C++ programmers will have to learn D
no matter what. As for shorter, you may gain a few seconds (after a
couple hundred lines), but I'd much rather be able to tell what I was
doing at a glance...
(notice: Yes I know, the functions come from C... it's good we hold on
to our history just like C++ did :) )
</rant>
I'd much prefer something like:
scopeexit
scopesuccess
scopefailure
Or some derivative:
scopeabort, scopefault, scopedone, scopefinish, scope?
ps. I like the looks of this new feature and find it very elegant. I
prefer this to try/catch/finally.

Sorry, but I find the proposed statements very consistent with D's existing
version, debug, etc statements.

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

This format looks good to me:
scope(exit) foo();
scope(success) bar();
scope(failure) baz();
similar to extern(name), pragma(name), etc, requires one `scope`
keyword, name in () doesn't need to be a keyword but is still treated
special, and doesn't look bad.

Here's some more experimental ideas:
scope.onExit fooexpr;
scope.onSuccess barexpr;
scope.onFailure bazexpr;
Now a more clean/pure version, but that doesn't allow block statements:
scope.onExit(fooexpr);
scope.onSuccess(barexpr);
scope.onFailure(bazexpr);
Don't really 100% like them, but I'll present them anyway; maybe it will
inspire someone to a better idea. (The problem with this one is that it
makes scope seem like a workable proper object, while it's not.)
--
Bruno Medeiros - CS/E student
"Certain aspects of D are a pathway to many abilities some consider to
be... unnatural."

scope(exit) foo();
scope(success) bar();
scope(failure) baz();
similar to extern(name), pragma(name), etc, requires one `scope`
keyword, name in () doesn't need to be a keyword but is still treated
special, and doesn't look bad.

I am with Charles here...
I don't understand why on_scope_failure & co. are significantly better
than try-catch-finally. What is wrong with the latter?
Semantically try-catch-finally are well known and
widely recognizable constructions.
BTW: Am I right in my assumption that
proposed on_scope_exit / on_scope_success / on_scope_failure
is a direct equivalent of the following:
try
{
[scope code]
my_on_scope_success();
}
catch(...)
{
my_on_scope_failure();
}
finally {
my_on_scope_exit();
}
If yes, then again, what is wrong with them in principle?
Andrew.
http://terrainformatica.com

Semantically try-catch-finally are well known and
widely recognizable constructions.

agreed

BTW: Am I right in my assumption that
proposed on_scope_exit / on_scope_success / on_scope_failure
is a direct equivalent of the following:
try
{
[scope code]
my_on_scope_success();
}
catch(...)
{
my_on_scope_failure();
}
finally {
my_on_scope_exit();
}
If yes, then again, what is wrong with them in principle?

This is not rhetorical, but honest: in the cases where you said "state managed
outside", how do you do that? i.e., how do you know for sure that the files will
be closed, provided that destructors are not guaranteed to be run and you can't
return auto references?
--
Carlos Santander Bernal

This is not rhetorical, but honest: in the cases where you said "state
managed
outside", how do you do that? i.e., how do you know for sure that the
files will
be closed, provided that destructors are not guaranteed to be run and you
can't return auto references?

"get" in getOutputFile implies that file exists somewhere and opened by
someone else
so code here cannot delete it.
In contrary "create" in createTempFile says that it creates new instance
so we are owning it - must delete.
If getOutputFile creates new instance then
finally {
delete tmpf;
delete outf;
delete inf;
}
will be enough.
In any case it should be one or the other: either try-catch-finally or
on_scope_xxx(); two similar and probably conflicting mechanisms are a
bad design.
(This is why I like Java: with some minor exceptions its grammar is just
perfect for the domain it serves. Ascetic, but clean and simple.)
Andrew Fedoniouk.
http://terrainformatica.com

managed
outside", how do you do that? i.e., how do you know for sure that the
files will
be closed, provided that destructors are not guaranteed to be run and you
can't return auto references?

"get" in getOutputFile implies that file exists somewhere and opened by
someone else
so code here cannot delete it.

If I understand correctly, then that someone else who opened the file would
have
to close it (or delete it, as you put it). How does the opener know when to
close it?

In contrary "create" in createTempFile says that it creates new instance
so we are owning it - must delete.
If getOutputFile creates new instance then
finally {
delete tmpf;
delete outf;
delete inf;
}
will be enough.

See Walter's reply.

In any case it should be one or the other: either try-catch-finally or
on_scope_xxx(); two similar and probably conflicting mechanisms are a
bad design.

Not necessarily. RAII and finally could be seen as "similar mechanisms", but I
don't think having both is a bad design. How about for and foreach? do and
while? function and delegate?

(This is why I like Java: with some minor exceptions its grammar is just
perfect for the domain it serves. Ascetic, but clean and simple.)
Andrew Fedoniouk.
http://terrainformatica.com

I don't see how it could be real life, since tmpf is not in scope in the
finally statement, so it won't compile. Furthermore, even if tmpf was in
scope, if getOutputFile() throws, then tmpf wouldn't have even been created
when the finally statement attempts to delete it. The log.close() also does
not reliably happen, as the throw e; statement would cause it to be skipped.
These kinds of issues are what on_scope_xxx are for. Let's rewrite it:
Logger log = getLogger();
on_scope_exit log.close();
on_scope_failure log.logFailure("...");
File outf = getOutputFile();
File inf = getInputFile();
File tmpf = createTempFile();
on_scope_exit delete tmpf;
...lots of code...
We don't have any more resource leaks.

I am with Charles here...
I don't understand why on_scope_failure & co. are significantly better
than try-catch-finally. What is wrong with the latter?
Semantically try-catch-finally are well known and
widely recognizable constructions.
BTW: Am I right in my assumption that
proposed on_scope_exit / on_scope_success / on_scope_failure
is a direct equivalent of the following:
try
{
[scope code]
my_on_scope_success();
}
catch(...)
{
my_on_scope_failure();
}
finally {
my_on_scope_exit();
}
If yes, then again, what is wrong with them in principle?

For a simple example it is very similar; for a more complex one it's not.
Have you read this:
http://www.digitalmars.com/d/exception-safe.html
Here is my attempt at "explanation by example": a more complex example
and its equivalent using try/catch/finally.
Transaction abc()
{
Foo f;
Bar b;
Def d;
f = dofoo();
on_scope_failure dofoo_unwind(f);
b = dobar();
on_scope_failure dobar_unwind(b);
d = dodef();
return Transaction(f, b, d);
}
as try/catch/finally:
Transaction abc()
{
Foo f;
Bar b;
Def d;
f = dofoo();
try {
b = dobar();
try {
d = dodef();
return Transaction(f, b, d);
}
catch(Object o) {
dobar_unwind(b);
throw o;
}
}
catch(Object o) {
dofoo_unwind(f);
throw o;
}
}
Note, the order of the unwind calls is important:
http://www.digitalmars.com/d/statement.html#scope
"If there are multiple OnScopeStatements in a scope, they are executed
in the reverse lexical order in which they appear."
There are many benefits of on_scope over try/catch:
1. It's less verbose; with less clutter it's easier to see the purpose of
the code. on_scope scales well to handle many 'transactions' which
require cleanup, like the example above; try/catch/finally does not, it
gets horribly nested and confusing.
2. It groups the cleanup code with the code that requires it, so there is
less separation between the thing that is done and the thing that cleans
up after it. try/catch/finally has the cleanup code in a separate 'catch'
or 'finally' scope, often a long way from the code that creates the need
for that cleanup.
Regan

Seems like I really don't understand something here...
Why do you think that your example is aesthetically
better (technically they are the same) than this?

Wait, your example below is not technically the same as mine because:
1. My example (actually taken from the docs on digitalmars.com) used
on_scope_failure, not on_scope_exit. on_scope_exit has the same effect as
"finally" except...
2. The important feature of all of these statements is that they allow you
to add items to the list of things to do _as you go_ and to then execute
them in reverse lexical order at the appropriate time i.e. on_scope_exit
or on_scope_failure etc.
The same cannot be achieved with catch or finally without nesting several
of them. That is what the example I gave was designed to show I believe.

For the fun of it, let's rewrite your example to use on_scope_exit:
Transaction abc()
{
Foo f; Bar b; Def d;
f = dofoo();
on_scope_exit delete f;
b = dobar();
on_scope_exit delete b;
d = dodef();
on_scope_exit delete d;
return Transaction(f, b, d);
}
I believe this is better than your try/finally example because:
1- delete is not called on an uninitialized class reference as in your
example. (thankfully D initialises them to null)
2- the init and delete of each object are in the same place (the larger
the block of code this is used in, the better it gets).
3- the resulting code is linear, which is easier to follow than a
branching, indented, or nested try/catch/finally block.
Regan

Seems like I really don't understand something here...
Why do you think that your example is aesthetically
better (technically they are the same) than this?

Wait, your example below is not technically the same as mine because:
1. My example (actually taken from the docs on digitalmars.com) used
on_scope_failure, not on_scope_exit. on_scope_exit has the same effect as
"finally" except...
2. The important feature of all of these statements is that they allow you
to add items to the list of things to do _as you go_ and to then execute
them in reverse lexical order at the appropriate time i.e. on_scope_exit
or on_scope_failure etc.

BTW:
Walter is using "reverse lexical order" incorrectly in the doc, I believe.
"Lexical order" is something from a different opera.
Andrew

Technically, they are not the same. The above version *always* deletes f, b
and d, but it's not supposed to if the function succeeds. Only if dofoo(),
dobar(), or dodef() throws are f, b and d supposed to be deleted.
This is the whole problem with try-finally. It works great with trivial
problems, and all the tutorials and example code unsurprisingly only show
trivial problems. Try scaling it up for things like multi-step transactions,
and it gets very complicated very fast, and it gets very hard to get right.
Here's the on_scope version:
Transaction abc()
{
Foo f = dofoo();
on_scope_failure delete f;
Bar b = dobar();
on_scope_failure delete b;
Def d = dodef();
on_scope_failure delete d;
return Transaction(f, b, d);
}
Scaling it up is as easy as linearly adding more on_scope statements, and
easy to get it right.

Technically, they are not the same. The above version *always* deletes f,
b and d, but it's not supposed to if the function succeeds. Only if
dofoo(), dobar(), or dodef() throws are f, b and d supposed to be deleted.

Ok, mea culpa.

This is the whole problem with try-finally. It works great with trivial
problems, and all the tutorials and example code unsurprisingly only show
trivial problems. Try scaling it up for things like multi-step
transactions, and it gets very complicated very fast, and it gets very
hard to get right.

Well, I spent a year programming on PocketPC in eVC, where there are no
such things as exceptions. In principle: not implemented in the C++
compiler. Still alive :)

It is worse because it won't even compile. f, b, and d are not in scope in
the catch statement. Even if they were, there's still a serious bug - if
dofoo() throws an exception, then the catch statement will attempt to delete
b and d, which are not even initialized yet.

It's worth mentioning that an allocation failure will cause an exception
to be thrown, in addition to divide by zero and other errors on Windows.
So code doesn't need to explicitly throw to generate exceptions.
The PocketPC really doesn't have exceptions? And they still call it
C++? :-)

It is worse because it won't even compile. f, b, and d are not in scope in
the catch statement. Even if they were, there's still a serious bug - if
dofoo() throws an exception, then the catch statement will attempt to delete
b and d, which are not even initialized yet.

Is it really a bug to call delete on a null reference? This is
well-defined behavior in C++.
Sean

It is worse because it won't even compile. f, b, and d are not in scope
in the catch statement. Even if they were, there's still a serious bug -
if dofoo() throws an exception, then the catch statement will attempt to
delete b and d, which are not even initialized yet.

Is it really a bug to call delete on a null reference? This is
well-defined behavior in C++.

It's not even a null reference.

Not in the above example, but then the above example doesn't even
compile. I assumed you were talking about something like this:
Foo f;
Bar b;
try {
f = dofoo();
b = dobar();
} catch( Object o ) {
delete f; delete b;
}
It's quite possible for f and b to both be uninitialized, yet I would
expect the delete calls to be well-defined anyway.
Sean

Not in the above example, but then the above example doesn't even compile.
I assumed you were talking about something like this:
Foo f;
Bar b;
try {
f = dofoo();
b = dobar();
} catch( Object o ) {
delete f; delete b;
}
It's quite possible for f and b to both be uninitialized, yet I would
expect the delete calls to be well-defined anyway.

delete is well defined, and works properly with null. For more complex
unwinding, you'll need to add state variables to keep track of what has been
set and what hasn't.
The new bug in the above code is that the catch block fails to rethrow o.
I hope that these buggy examples show just how hard it is to get
try-catch-finally to be correct, and how easy it is to get the on_scope
correct. This leads me to believe that try-catch-finally is just
conceptually wrong, as it does not match up with how we think despite being
in common use for over a decade.

Not in the above example, but then the above example doesn't even compile.
I assumed you were talking about something like this:
Foo f;
Bar b;
try {
f = dofoo();
b = dobar();
} catch( Object o ) {
delete f; delete b;
}
It's quite possible for f and b to both be uninitialized, yet I would
expect the delete calls to be well-defined anyway.

delete is well defined, and works properly with null. For more complex
unwinding, you'll need to add state variables to keep track of what has been
set and what hasn't.

That's all I was asking. I wondered at the implications of:
"there's still a serious bug - if dofoo() throws an exception, then the
catch statement will attempt to delete b and d, which are not even
initialized yet"

The new bug in the above code is that the catch block fails to rethrow o.

Arguably not a bug, as this could be intended behavior. However, I
would think it's clear that the above code was merely intended to
clarify a previous statement. I wouldn't suggest that it is correct or
well-written :-)

I hope that these buggy examples show just how hard it is to get
try-catch-finally to be correct, and how easy it is to get the on_scope
correct. This leads me to believe that try-catch-finally is just
conceptually wrong, as it does not match up with how we think despite being
in common use for over a decade.

I've been sold on Andrei's method since he first introduced it, so no
issue there. It's certainly far simpler and more meaningful than the
backflips often required by RAII. And while the proposed C++ shared_ptr
syntax makes some progress for this purpose, it's still a far cry from
on_scope_*, particularly when paired with inner functions.
Sean

I hope that these buggy examples show just how hard it is to get
try-catch-finally to be correct, and how easy it is to get the on_scope
correct. This leads me to believe that try-catch-finally is just
conceptually wrong, as it does not match up with how we think despite
being in common use for over a decade.

Excellent point, Walter. The concept of scope guards in D is very
exciting and a real winner in my opinion. I'm looking forward to the
syntax improving and stabilizing so I can use it in real applications.
And may I be so bold as to paraphrase you ...
: I hope that these buggy examples show just how hard it is to get
: C-style booleans to be correct, and how easy it is to get the semantically
: boolean correct. This leads me to believe that the C-style boolean is just
: conceptually wrong, as it does not match up with how we think despite
: being in common use for many decades.
<G><G><G>
--
Derek Parnell
Melbourne, Australia

Not in the above example, but then the above example doesn't even
compile. I assumed you were talking about something like this:
Foo f;
Bar b;
try {
f = dofoo();
b = dobar();
} catch( Object o ) {
delete f; delete b;
}
It's quite possible for f and b to both be uninitialized, yet I would
expect the delete calls to be well-defined anyway.

delete is well defined, and works properly with null. For more complex
unwinding, you'll need to add state variables to keep track of what has
been set and what hasn't.
The new bug in the above code is that the catch block fails to rethrow o.

Let's say it was intentional. Pretty common, by the way.

I hope that these buggy examples show just how hard it is to get
try-catch-finally to be correct, and how easy it is to get the on_scope
correct. This leads me to believe that try-catch-finally is just
conceptually wrong, as it does not match up with how we think despite
being in common use for over a decade.

Well, try first to explain what will happen on
on_scope_success { delete baz; throw foo; }
on_scope_exit { delete baz; throw foo; }
What "lexical order" will be used here? Which scope guards will be
invoked, and so on.
Having on_scope_*** spread all over the "... lots of code ..." will
create a code maintenance nightmare; tracing visually what will happen
and when will no longer be a task for a human.
And as far as I understand, the main idea of on_scope_exit is a sort of
poor man's struct destructor...
I suspect that you are thinking about how to remove 'auto'/RAII, right?
Andrew.
http://terrainformatica.com

The new bug in the above code is that the catch block fails to rethrow o.

Perhaps, but the stated point of the example was to show equivalent
try-catch code to the on_scope example. It isn't equivalent.

Well, try first to explain what will happen on
on_scope_success { delete baz; throw foo; }
on_scope_exit { delete baz; throw foo; }
What "lexical order" will be used here? Which scope guards will be
invoked, and so on.

Throwing from inside an on_scope statement is as bad an idea as throwing
inside a destructor or inside a finally block. Doing it in an on_scope_exit
or on_scope_failure will result in a double-fault exception. Doing it in an
on_scope_success will be like throwing at the } of a scope.

Scattering on_scope_*** statements all over the "... lots of code ..." will
create a code-maintenance nightmare: visually tracing what will happen, and
when, will no longer be a task a human can manage.

I submit that based on what's been posted in this thread, it's easier to get
on_scope *correct* than try-finally, because most of the try-finally
examples posted here do not work as intended by the author. Not only that,
the try-finally examples fail in ways that are difficult to write test cases
for, so the errors are likely to go unnoticed.

And as far as I understand, the main idea of on_scope_exit is a sort of poor
man's struct destructor...
I suspect that you are thinking about how to remove 'auto'/RAII, right?

No. RAII is for managing resources, which is different from managing state
or transactions. try-catch is still needed, as on_scope doesn't catch
exceptions. It's try-finally that becomes redundant, though it is useful to
keep it because so many people are used to it.

The new bug in the above code is that the catch block fails to rethrow o.

Perhaps, but the stated point of the example was to show equivalent
try-catch code to the on_scope example. It isn't equivalent.

That may have been the point of the original example, but it wasn't the
point of mine. But it's water under the bridge, as you answered my
question either way :-)

Scattering on_scope_*** statements all over the "... lots of code ..." will
create a code-maintenance nightmare: visually tracing what will happen, and
when, will no longer be a task a human can manage.

I submit that based on what's been posted in this thread, it's easier to get
on_scope *correct* than try-finally, because most of the try-finally
examples posted here do not work as intended by the author. Not only that,
the try-finally examples fail in ways that are difficult to write test cases
for, so the errors are likely to go unnoticed.

The only complaint I've heard about on_scope that I consider valid is
that program flow may be a tad confusing in excessively long functions.
But traditional RAII is no better, and try-finally introduces
maintenance and readability problems by breaking the logical connection
between unwinding code and the code block it's associated with. It's
also helpful that throwing an exception while another is in-flight in D
does not result in program termination, even if doing so is not advisable.

And as far as I understand, the main idea of on_scope_exit is a sort of poor
man's struct destructor...
I suspect that you are thinking about how to remove 'auto'/RAII, right?

No. RAII is for managing resources, which is different from managing state
or transactions. try-catch is still needed, as on_scope doesn't catch
exceptions. It's try-finally that becomes redundant, though it is useful to
keep it because so many people are used to it.

That may have been the original intent, but RAII has since become almost
indispensable for writing exception-safe code, be it with resources,
transactions, or something else. Personally, I've never liked
try-finally, but I attribute that to my C++ background. If I were a
Java person it may be a different story.
Sean

RAII is for managing resources, which is different from managing state or
transactions. try-catch is still needed, as on_scope doesn't catch
exceptions. It's try-finally that becomes redundant, though it is useful
to keep it because so many people are used to it.

That may have been the original intent, but RAII has since become almost
indispensable for writing exception-safe code, be it with resources,
transactions, or something else.

RAII is used for those other things in C++ because there is *no other
choice*. Nevertheless, the reality of using destructors to manage
transaction processing is:
1) it's so hard to do that most programmers simply ignore the problem,
trusting to luck that exceptions won't happen
2) those that do try it, most of the time get it wrong
3) it's pretty hard to visually inspect the code and determine that it is
exception safe
This suggests to me that RAII and try-finally are the wrong paradigms for
doing transaction programming. I've attended Scott Meyers' insightful
lecture on doing transaction programming in C++. There is no hope for it to
be reliably done by anyone but experts.

Personally, I've never liked try-finally, but I attribute that to my C++
background. If I were a Java person it may be a different story.

Try-finally and RAII are like goto's. They work, but aren't using whiles,
fors, switches, etc., much more natural?

This suggests to me that RAII and try-finally are the wrong paradigms for
doing transaction programming. I've attended Scott Meyers' insightful
lecture on doing transaction programming in C++. There is no hope for it to
be reliably done by anyone but experts.

This is actually my concern about C++ in general. The language is
becoming so complicated--including the recommended solutions to common
problems--that I wonder if any but experts will be able to reliably
write correct code. And in my personal experience, C++ experts are few
and far between.
Sean

This suggests to me that RAII and try-finally are the wrong paradigms for
doing transaction programming. I've attended Scott Meyers' insightful
lecture on doing transaction programming in C++. There is no hope for it
to be reliably done by anyone but experts.

This is actually my concern about C++ in general. The language is
becoming so complicated--including the recommended solutions to common
problems--that I wonder if any but experts will be able to reliably write
correct code. And in my personal experience, C++ experts are few and far
between.

True. I run into a lot of C++ programmers, but few who really enjoy it. They
usually use C++ because they need the performance, not because they like it.
The leading edge of C++ thought is to use increasingly complex templates for
everything. The new C++0x proposed features are aimed at more complex
template support.
By doing this, C++ has opened the door wide for Java, Ruby, C#, etc. C++ is
never going to go away, but it will increasingly marginalize itself as the
C++ experts leave the average programmer further and further behind.

This suggests to me that RAII and try-finally are the wrong paradigms for
doing transaction programming. I've attended Scott Meyers' insightful
lecture on doing transaction programming in C++. There is no hope for it
to be reliably done by anyone but experts.

This is actually my concern about C++ in general. The language is
becoming so complicated--including the recommended solutions to common
problems--that I wonder if any but experts will be able to reliably write
correct code. And in my personal experience, C++ experts are few and far
between.

True. I run into a lot of C++ programmers, but few who really enjoy it. They
usually use C++ because they need the performance, not because they like it.
The leading edge of C++ thought is to use increasingly complex templates for
everything. The new C++0x proposed features are aimed at more complex
template support.

I think this is C language history repeating itself. A book called Principia
Mathematica was once written, an attempt to build a minimal set of mathematical
axioms from which all others could be proven. It kept getting bigger and
bigger, but never had enough axioms. They wanted all necessary axioms, but none
that were provable from the others. Gödel later proved that it's impossible to
build such a list. But it's a very magnetic idea for mathematicians.
C and C++ both strive to provide a sort of Principia for programming, a set of
language features from which all possible other features could be derived via
syntax, using macros, functions, and in C++, templates. They never want to add
anything that can already be defined with syntax.
The C language was kept simple, so if you wanted magic (as in, anything outside
the 'physics' of the language), you used the macro system. Like all magic, this
was 50% style and 50% fakery... If you poke at it, the illusion breaks down.
There were many "better" languages but nothing "better and committed in a big
way to high-performance code", until C++.
In C++, the language designers still feel pressured to keep things simple - e.g.
they try hard to avoid introducing keywords, and things like foreach are always
done in the STL not in the language. But they have a tendency to introduce
features and concepts that multiply the complexity of using the language. These
can't be done with macros, so they belong in the set of axioms. As they add
these multipliers, the knowledge needed to use it gets worse geometrically.
So the C++ team adds hard-to-implement/use stuff to the language, and rarely
adds easy-to-implement/use stuff, because the easy-to-implement stuff is doable
with macros/template magic. Macro/template language is much more powerful than
macro-only magic. But the illusion still breaks down ... everyone has an
ITERATE macro that doesn't quite work seamlessly.

By doing this, C++ has opened the door wide for Java, Ruby, C#, etc. C++ is
never going to go away, but it will increasingly marginalize itself as the
C++ experts leave the average programmer further and further behind.

The plethora of similar languages creates division, which causes tension. The
performance and low level abilities of C and C++ is not available in Ruby, C#,
or Java. I know the gap you mention creates tension on the programming teams
I've been on. The C++ wonks want the others to just do the right thing (use all
the advanced methods all the time). The others just want the compiler to stop
bugging them and get out of their way.
I think the crown might pass to any language that could fix C++ performance and
syntax... once it declared for the throne. D does very well syntactically and
performance wise, but I think it still has the "not ready yet" label stuck on
the front of the box, because of the sub-1.0 version and lack of ... (insert
wishlist items). (Question: What was the "trigger" that made people start
adopting Java for projects?)
Kevin

About 1996 the web and serverside programming took off. The options were:
- c++ - complicated and not fast enough turnaround time for average users.
- visual basic - big problems with vb runtime on the server and windows only.
- scripting languages
- perl - complex for prog in the large
- php - too simple for prog in the large
- others - not mature enough
- delphi - excellent but owned by borland
- java - originally for applets at which it proved to be fairly useless - found
a niche on the server by default. The prog model was simple and the package and
jar features enabled prog in the large.
The things I like about D are:
- makes the same trade-offs as delphi
- fast compiled language without pointers being so "in your face".
- structs on the stack, objects on the heap
- parameter passing to functions does not require pointers
- has a standards based scripting language - DMDScript/ECMAScript - which is
written in D.
- generic/templates
Pity it was not available a few years ago.

A lot of excellent comments. Some more from me embedded:
"Kevin Bealer" <Kevin_member pathlink.com> wrote in message
news:du5hbu$2bag$1 digitaldaemon.com...

So the C++ team adds hard-to-implement/use stuff to the language, and rarely
adds easy-to-implement/use stuff, because the easy-to-implement stuff is
doable with macros/template magic. Macro/template language is much more
powerful than macro-only magic. But the illusion still breaks down ...
everyone has an ITERATE macro that doesn't quite work seamlessly.

Yes, my thoughts exactly.

I think the crown might pass to any language that could fix C++ performance
and syntax... once it declared for the throne. D does very well syntactically
and performance wise, but I think it still has the "not ready yet" label
stuck on the front of the box, because of the sub-1.0 version and lack of ...
(insert wishlist items).

The wishlist thing is an ever-growing thing. It's a problem, mostly my
fault.

(Question: What was the "trigger" that made people start
adopting Java for projects?)

The initial trigger was the idea you could embed Java applets into a web
page, coupled with the idea that it was a C++-like language that was, in
contrast, easily mastered.

So the C++ team adds hard-to-implement/use stuff to the language, and rarely
adds easy-to-implement/use stuff, because the easy-to-implement stuff is
doable with macros/template magic. Macro/template language is much more
powerful than macro-only magic. But the illusion still breaks down ...
everyone has an ITERATE macro that doesn't quite work seamlessly.

Yes, my thoughts exactly.

I think the crown might pass to any language that could fix C++ performance
and syntax... once it declared for the throne. D does very well syntactically
and performance wise, but I think it still has the "not ready yet" label
stuck on the front of the box, because of the sub-1.0 version and lack of ...
(insert wishlist items).

The wishlist thing is an ever-growing thing. It's a problem, mostly my
fault.

For some reason I always want to write "the only thing on the wishlist I
really care about is...", but then I think of 10 things right away. When I
write here I tend to write a lot of stuff off the cuff, in a sort of "no
one's ever done X, I wonder if it would be useful..." style.
But:

(Question: What was the "trigger" that made people start
adopting Java for projects?)

The initial trigger was the idea you could embed Java applets into a web
page, coupled with the idea that it was a C++-like language that was, in
contrast, easily mastered.

You and Ianc wrote about this side of it, and it makes sense now - I have been
thinking of Java strictly in terms of language design.
Now I realize something..
Business people, and for that matter, most other people, tend to explain their
own logic (and history itself) via "narratives". Everyone used X, but then
problem Y happened. Language Z came along and solved this. Political movement
X was replaced by movement Y when World War Z occurred. Cause->Effect.
But the original rationale rarely stays put. C was for systems programming (but
took over everything). Java was designed for the web, but now is used for
desktop applications, etc. Computers originally were marketed for the home on
the rationale that they could manage recipes for dishes.
So... from this kind of marketing point of view, what problems are just over the
horizon that D can get ahead of the curve on? It's very good at what C, C++,
and Java already can do, better than those languages, IMHO.
But I'm coming around to the idea that to "win the election", D needs to look
like a solution to a crisis, or at least some phenomena that Bill Manager can
claim was a new problem that needed a new solution. Little problems won't cut
it, even if your project is dying of thousands of little papercuts.
(As an illustration, I changed the title of this post - snappier, right? The
following may seem a bit cynical, but I'm just trying to be practical.)
People who pick (or at least sign off on) programming languages will drag their
heels to adopt unless there is a motivation they can "feel". Reduces pointer
errors by 35% and code length by 20% is not good enough, even if it would save
the software industry. I once described D to a colleague as a language that
"improves hundreds of little things that taken together are revolutionary". But
I couldn't think of a "killer app" story for D.
This is not a criticism of business; it's just that to take a big risk, managers
need to know (1) what problem they are solving, and (2) how to measure whether
the solution is working. Otherwise they feel they could easily get conned by
silver bullet salesmen -- it's happened before, right? If you ask a big-name
CEO what problem he is solving with a new initiative, it's a "moment of truth"
question for him. He won't argue incremental improvements; he'll strike at the
core of something he thinks is close to your heart and explain how his company
can help.
Let me propose some general hype; these will look a little like hyperbole, but
most of them are more or less true (or could be without *too* much pain):
1. D unites scripting style and C/C++ performance.
2. D is like C++ after the lessons learned from Java, Python, and Perl.
3. D is a high performance Java.
4. D can do Perl or Python-style rapid development, but is safe enough and fast
enough for "BIG" problems.
5. D is like Java, but for systems programming tasks.
6. D has all the power of C++ with the safety of Java.
[ No one knows exactly what the previous means, but it leverages Java's claims.
Since much of Java's safety claims were on the basis of pointer elimination, D
can legitimately claim much of the same benefit. ]
7. D is compile anywhere, run anywhere.
This is not true yet; would be immense to accomplish unless the D compiler was
distributed with GCC. This is unfortunate, since the DMD compiler is so fast
and focused, but I think lack of platform portability is a killer for a compiler
(but not so much for an interpreter for some reason.)
I see two possible futures in which D is ubiquitous, i.e. as much as Java now.
FUTURE 1: The DMD compiler is used for fast development and to run D "scripts".
The high compile speed makes it perfect for scripting and rapid development on
the platforms it does support, and GCC's D compiler makes sure the rest of the
platforms don't have to go begging. Everyone gets fast development and every
platform can build binaries. But GCC would need to support D out of the box.
FUTURE 2: The D compiler occupies a niche like Perl or Python, Tcl, etc, where
there is one "blessed" compiler, but it still gets installed by everyone. All
of these seem to be scripting languages (why is that??). The only way this can
happen (I think) is if D supports every platform (very very much work, I
think...) or can at least produce and run some kind of bytecode.
[ D in bytecode sounds tricky, but I don't know enough to know how many genuine
obstacles there are. I would think the "Java subset" of D could be done this
way, but... ]
Compiled languages generally don't seem to take off until there is a compiler
available for every big platform. (Am I wrong about this?)
And to top it off, a "customer anecdote":
I work at a government job, writing public services and C++ libraries for
biotech apps. I imagine walking into a meeting and saying that there is this
cool new language, D, "but it only works on 32 bit amd/intel". This would be
enough to reject D (all our servers are 64 bit amd/intel already, and we also
provide Solaris, IRIX, OSX, and Windows binaries).
Which is too bad, because we are a high-performance project (we worry about
cache lines and register pressure all the time), that suffers from C language
pointer-related mishaps and C++ syntax related burn out. In many ways, D is
perfect for the niche we are in.
But on the other hand, our project has spent the last 3-5 years shifting from C
to a rewritten-in-C++ version. So it's probably a no-fly zone for new language
adoption in any case, for the next few years.
Kevin

7. D is compile anywhere, run anywhere.
This is not true yet; would be immense to accomplish unless the D compiler was
distributed with GCC. This is unfortunate, since the DMD compiler is so fast
and focused, but I think lack of platform portability is a killer for a
compiler (but not so much for an interpreter for some reason.)
I see two possible futures in which D is ubiquitous, i.e. as much as Java now.
FUTURE 1: The DMD compiler is used for fast development and to run D "scripts".
The high compile speed makes it perfect for scripting and rapid development on
the platforms it does support, and GCC's D compiler makes sure the rest of the
platforms don't have to go begging. Everyone gets fast development and every
platform can build binaries. But GCC would need to support D out of the box.

I just don't see why GDC would have to be bundled with the rest of
the GNU Compiler Collection in order for it to be used everywhere ?
Yes, I know that the main GCC distribution also contains frontends for
Objective C, Fortran, Java, and Ada. But those are more "different"... ?
As it is now, most GCC-using systems support C and C++ "out of the box",
and if you want an "odd" language like Pascal or D - you go download it:
http://www.gnu-pascal.de/ (GNU Pascal Compiler, gpc)
http://home.earthlink.net/~dvdfrdmn/d/ (GNU D Compiler, gdc)
We *should* probably get GDC listed at http://gcc.gnu.org/frontends.html
(putting it on the TODO list for next GDC release, along with new site)
But then again most people don't want GDC, but want DMD on all platforms
and only want a Digital-Mars-supported graphical interface such as DWT ?
And there isn't much anyone but Walter himself can do about those, IMHO.
(I don't have time for both, so I'll worry about the GNU / wx versions)

FUTURE 2: The D compiler occupies a niche like Perl or Python, Tcl, etc, where
there is one "blessed" compiler, but it still gets installed by everyone. All
of these seem to be scripting languages (why is that??). The only way this can
happen (I think) is if D supports every platform (very very much work, I
think...) or can at least produce and run some kind of bytecode.
[ D in bytecode sounds tricky, but I don't know enough to know how many genuine
obstacles there are. I would think the "Java subset" of D could be done this
way, but... ]

Color me sceptic, but I'm not too sure about "D the scripting language".
It just doesn't seem to be in the nature of the language, to do that...
I'd love to be wrong, by someone providing a .NET or Parrot backend :-)
But DMDScript is probably more likely to fill the DM scripting needs ?
--anders

Anders, despite my initial thoughts, D probably does not need to be part of the
gcc install. The desktop has settled on x86 and dmd is strong there. Any idea
what Digital Mars' biz model is for D when version 1 is released?

I just don't see why GDC would have to be bundled with the rest of
the GNU Compiler Collection in order for it to be used everywhere ?

install. The desktop has settled on x86 and dmd is strong there.

We probably do agree that packaging GDC up neatly is a Good Thing ?
And *that* is something I've been pretty much involved in, already.
A GDC-only package is in the 3-4 MB range, while a full GCC/G++/GDC
distribution is more in the 30-40 MB range. Still very downloadable ?
This could be discussed a lot more, but should go in the D.gnu group.
(or contact me privately - by email, if not for public NG discussion)

Any idea what
Digital Mars biz model is for D when version 1 is released?

No idea, but it would be interesting if they (he?) could expand on it...
I *guess* it would be similar to their C/C++ business model, though ?
http://www.digitalmars.com/shop.html
--anders

7. D is compile anywhere, run anywhere.
This is not true yet; would be immense to accomplish unless the D compiler was
distributed with GCC. This is unfortunate, since the DMD compiler is so fast
and focused, but I think lack of platform portability is a killer for a
compiler (but not so much for an interpreter for some reason.)
I see two possible futures in which D is ubiquitous, i.e. as much as Java now.
FUTURE 1: The DMD compiler is used for fast development and to run D "scripts".
The high compile speed makes it perfect for scripting and rapid development on
the platforms it does support, and GCC's D compiler makes sure the rest of the
platforms don't have to go begging. Everyone gets fast development and every
platform can build binaries. But GCC would need to support D out of the box.

I just don't see why GDC would have to be bundled with the rest of
the GNU Compiler Collection in order for it to be used everywhere ?
Yes, I know that the main GCC distribution also contains frontends for
Objective C, Fortran, Java, and Ada. But those are more "different"... ?
As it is now, most GCC-using systems support C and C++ "out of the box",
and if you want an "odd" language like Pascal or D - you go download it:
http://www.gnu-pascal.de/ (GNU Pascal Compiler, gpc)
http://home.earthlink.net/~dvdfrdmn/d/ (GNU D Compiler, gdc)
We *should* probably get GDC listed at http://gcc.gnu.org/frontends.html
(putting it on the TODO list for next GDC release, along with new site)

There is no technical reason. But go to your boss and tell him you want to use
a new language. He'll be sceptical... Now tell him you need to build your own
patched version of GCC to get a compiler for it...

But then again most people don't want GDC, but want DMD on all platforms
and only want a Digital-Mars-supported graphical interface such as DWT ?
And there isn't much anyone but Walter himself can do about those, IMHO.
(I don't have time for both, so I'll worry about the GNU / wx versions)

DMD is much better, but since DMD produces machine code directly, it seems
likely that IA64, and even more so every non-Intel version, is going to require
a big chunk of Walter's time. This seems non-scalable right now... maybe the
code for DMD is flexible enough to make this happen quickly, but I don't know
enough... correct me if I'm wrong, please.

FUTURE 2: The D compiler occupies a niche like Perl or Python, Tcl, etc, where
there is one "blessed" compiler, but it still gets installed by everyone. All
of these seem to be scripting languages (why is that??). The only way this can
happen (I think) is if D supports every platform (very very much work, I
think...) or can at least produce and run some kind of bytecode.
[ D in bytecode sounds tricky, but I don't know enough to know how many genuine
obstacles there are. I would think the "Java subset" of D could be done this
way, but... ]

Color me sceptic, but I'm not too sure about "D the scripting language".
It just doesn't seem to be in the nature of the language, to do that...

I think the main feature scripting languages have that make them so great in
their niche is that container classes, regexes, and a fair amount of system call
oriented syntax (i.e. backticks to run commands) are built right in.
D doesn't have all of this yet, but it's far ahead of C and C++ here. I think
it can handle the "scripty" end of the application spectrum better than most
other compiled languages.
It may never have bytecode, but users don't actually care about that if
compile/link is fast -- which seems to be true for dmd but not necessarily gdc.
The dmd compile feels faster than just starting the Java interpreter. I haven't
benchmarked it though.

I'd love to be wrong, by someone providing a .NET or Parrot backend :-)
But DMDScript is probably more likely to fill the DM scripting needs ?
--anders

I just don't see why GDC would have to be bundled with the rest of
the GNU Compiler Collection in order for it to be used everywhere ?

There is no technical reason. But go to your boss and tell him you want to use
a new language. He'll be sceptical... Now tell him you need to build your own
patched version of GCC to get a compiler for it...

But one can still build binary packages for it (GDC), without it first
being a FSF project ? The only caveat being that if your system compiler
is too far away from what GDC works with, you'll need *another* GCC too.
(which could still be offered as an alternative installation, though...)
I don't necessarily think it's a bad idea. Just wonder why it has to be.
I'd worry more about the language spec not being open, or even finished.
[...]

It may never have bytecode, but users don't actually care about that if
compile/link is fast -- which seems to be true for dmd but not necessarily
gdc. The dmd compile feels faster than just starting the Java interpreter.
I haven't benchmarked it though.

Part of this speed comes from optlink, I think ? (when it doesn't crash)
At least I've found the GCC linking to be slower, doubly so for C++...
--anders

I just don't see why GDC would have to be bundled with the rest of
the GNU Compiler Collection in order for it to be used everywhere ?

There is no technical reason. But go to your boss and tell him you want to use
a new language. He'll be sceptical... Now tell him you need to build your own
patched version of GCC to get a compiler for it...

But one can still build binary packages for it (GDC), without it first
being a FSF project ? The only caveat being that if your system compiler
is too far away from what GDC works with, you'll need *another* GCC too.
(which could still be offered as an alternative installation, though...)

Like I said .. no *technical* reason, per se -- you could do this. But asking
your boss to approve a toolchain that you patch together yourself is going to
be like telling him that he doesn't need to *buy* a car, he can just go to the
junkyard and get a nearly complete one running. He could, but he won't.
If your boss does programming or you have a lot of autonomy, you may have a
little flexibility here. But it will never make it in a "Java shop" programming
environment if you have to spend several hours fixing rejected patches to GCC
every time the next revision of GDC or GCC is desired.
It doesn't have to be part of the GCC found on GNU's FTP site, but you need to
at least be able to install the compiler and libphobos from "RPMs" or build them
from one or two tar files.

I don't necessarily think it's a bad idea. Just wonder why it has to be.
I'd worry more about the language spec not being open, or even finished.
[...]

It may never have bytecode, but users don't actually care about that if
compile/link is fast -- which seems to be true for dmd but not necessarily gdc.
The dmd compile feels faster than just starting the Java interpreter. I haven't
benchmarked it though.

Part of this speed comes from optlink, I think ? (when it doesn't crash)
At least I've found the GCC linking to be slower, doubly so for C++...
--anders

I don't know - I usually work in Linux, and there it looks like dmd uses gcc to
do the link. There is an option that lets you see the commands it's running.
Kevin

Kevin, I think your analysis is totally correct.
Futures 1 and 2 are both possibilities.
But for anything to happen (as has, I think, been discussed previously):
- D v1.0 with decent standard libs needs to be reached.
- The D frontend needs to become standard in the gcc install, with dmd as the optimised commercial option.
A while back on Slashdot there was a discussion about what language the gnome
framework, desktop and apps should be written in. C was felt to be too low, C++
too tricky, python was a scripting language, mono and java were vm based and
tied to ms/sun.
Walter - or someone using his name - suggested D. D would be perfect for this.
Imagine a desktop stack built in D... ahhh. But I can't see it happening.
(Actually, C# on gcc would be good, i.e. C# as a compiled language. C# syntax and
semantics are pretty good. I think someone may have had a go at this but it
didn't progress.)

Now I realize something..
Business people, and for that matter, most other people, tend to explain their
own logic (and history itself) via "narratives". Everyone used X, but then
problem Y happened. Language Z came along and solved this. Political movement
X was replaced by movement Y when World War Z occurred. Cause->Effect.

I think this is true of people in general. In fact, Bruce Schneier has
made the same observation with respect to security issues in that most
security initiatives have attempted to address "movie scenarios" rather
than actually improve security in a general sense. There's something
very compelling about a situation we can envision.

Compiled languages generally don't seem to take off until there is a compiler
available for every big platform. (Am I wrong about this?)

If you have Microsoft's marketing strength I think it's quite possible
to push a Windows-only solution. And I suspect people are willing to
take what they can get with niche platforms. But in general I suspect
this is true. Just look at ObjectiveC.

And to top it off, a "customer anecdote":
I work at a government job, writing public services and C++ libraries for
biotech apps. I imagine walking into a meeting and saying that there is this
cool new language, D, "but it only works on 32 bit amd/intel". This would be
enough to reject D (all our servers are 64 bit amd/intel already, and we also
provide Solaris, IRIX, OSX, and Windows binaries).

We're actually heading the opposite direction. Our software was
designed for 64-bit Sparc machines and we've recently ported it to
64-bit AMD. But 32-bit will never happen--memory requirements are
simply too high.

Which is too bad, because we are a high-performance project (we worry about
cache lines and register pressure all the time), that suffers from C language
pointer-related mishaps and C++ syntax related burn out. In many ways, D is
perfect for the niche we are in.

Same here. Though we'd have to make limited use of the GC for memory
size and performance reasons. Still, I think that would fit in nicely
with the way the system is designed.
Sean

[snip]
Let me propose some general hype; these will look a little like hyperbole, but
most of them are more or less true (or could be without *too* much pain):
[snip]

I'd add:
7. D has all (well, most of) the power of C++ but with a grammar that is sane
enough to allow decent tools. The state of C++ tooling is abysmal, because it's
so pathologically awful to parse. Going back to C++ development after working in
a refactoring environment (e.g. VS+ReSharper for C#, or Eclipse/IDEA for Java)
feels like being dragged back to the Stone Age; productivity drops through the
floor.
cheers
Mike

Kevin Bealer wrote:
Let me propose some general hype; these will look a little like hyperbole, but
most of them are more or less true (or could be without *too* much pain):
1. D unites scripting style and C/C++ performance.
2. D is like C++ after the lessons learned from Java, Python, and Perl.
3. D is a high performance Java.
4. D can do Perl or Python-style rapid development, but is safe enough and fast
enough for "BIG" problems.
5. D is like Java, but for systems programming tasks.
6. D has all the power of C++ with the safety of Java.

#3 and #6 is exactly what brought me to D, and I think it can bring many
others!!

I think the crown might pass to any language that could fix C++ performance and
syntax... once it declared for the throne. D does very well syntactically and
performance wise, but I think it still has the "not ready yet" label stuck on
the front of the box, because of the sub-1.0 version and lack of ... (insert
wishlist items). (Question: What was the "trigger" that made people start
adopting Java for projects?)
Kevin

Java launched for 3 reasons (outside of Sun's marketing), IMO:
1. Server side web programming was just taking off. This was Java's
killer app. Case in point: Flash does most interactive embedded stuff
today in web pages.
2. No pointers and no memory leaks eliminate large classes of errors. Even
though it existed in other languages, coupled with #1 this made Java a
much better C++.
3. A comprehensive set of tools and a company willing to support them.
In getting business to adopt your tech they need support and Sun
delivers (at least on Solaris and Win32 initially).
-DavidM

Java launched for 3 reasons (outside of Sun's marketing), IMO:
1. Server side web programming was just taking off. This was Java's killer
app. Case in point: Flash does most interactive embedded stuff today in
web pages.
2. No pointers and no memory leaks eliminate large classes of errors. Even
though it existed in other languages, coupled with #1 this made Java a
much better C++.
3. A comprehensive set of tools and a company willing to support them. In
getting business to adopt your tech they need support and Sun delivers (at
least on Solaris and Win32 initially).

4. Execution model: VM + real GC. Safe sandbox. Pure byte code cannot GPF
in principle.
5. Extremely simple Java Native Interface mechanism - a bridge to native code
for mission critical pieces.
6. Extremely simple bytecode system. I know of around 12 working
implementations of the Java VM.
7. ClassLoader as an entity. Killer thing, IMO.
8. Reflection as part of runtime.
------------------
Having said that, Java has problems of course. It is good for some set of
tasks. It is not a universal language.
But it is damned stable. For the last 7 years or so it has worked as is.

Having said that, Java has problems of course. It is good for some set of
tasks. It is not a universal language.
But it is damned stable. For the last 7 years or so it has worked as is.

the literal sense and the programming sense).
The generics add nothing useful besides homogeneous collections and
removal of casts, at the expense of loads of complexity.
At the same time Sun *rejected* Design by Contract proposals for the
language... disappointing.
I have a lot more fun with Groovy honestly.
-DavidM

Java launched for 3 reasons (outside of Sun's marketing), IMO:
1. Server side web programming was just taking off. This was Java's killer
app. Case in point: Flash does most interactive embedded stuff today in
web pages.
2. No pointers and no memory leaks eliminate large classes of errors. Even
though it existed in other languages, coupled with #1 this made Java a
much better C++.
3. A comprehensive set of tools and a company willing to support them. In
getting business to adopt your tech they need support and Sun delivers (at
least on Solaris and Win32 initially).

I beg to differ here. I've worked on several projects on Linux that use JNI
extensively and it's anything but easy to use. It may work for simple
cases, but it has all kinds of quirks and it's unnecessarily verbose. Every
JVM has its JNI functions in different libraries, so there's no hope of
simple portability between them. To top it all, just try to embed the JVM
inside a process that uses threads and Unix signals to see what "all hell
breaks loose" really means.
I think the guy who designed JNI really didn't want it to be used.

Java launched for 3 reasons (outside of Sun's marketing), IMO:
1. Server side web programming was just taking off. This was Java's killer
app. Case in point: Flash does most interactive embedded stuff today in
web pages.
2. No pointers and no memory leaks eliminate large classes of errors. Even
though it existed in other languages, coupled with #1 this made Java a
much better C++.
3. A comprehensive set of tools and a company willing to support them. In
getting business to adopt your tech they need support and Sun delivers (at
least on Solaris and Win32 initially).

I beg to differ here. I've worked on several projects on Linux that use JNI
extensively and it's anything but easy to use. It may work for simple
cases, but it has all kinds of quirks and it's unnecessarily verbose. Every
JVM has its JNI functions in different libraries, so there's no hope of
simple portability between them. To top it all, just try to embed the JVM
inside a process that uses threads and Unix signals to see what "all hell
breaks loose" really means.
I think the guy who designed JNI really didn't want it to be used.

Yeah, even id wanted to use Java for gameplay in Quake 2 but found the JNI a
jumbled mess - and so different on each platform that it was not possible to use a
single codebase. Luckily they saw the light of day, threw Java away, and wrote
QuakeC.
Coming from a gamedev background, I can't really stand Java...

Mobility of state and behavior -- this was one of the original
requirements for the Oak language and runtime when it was first
implemented. It is interesting to note that the platform was initially
designed for PDAs and other mobile gadgets with wireless connectivity.
The internet, browsers and applets came much later; and "accidentally"
turned out to be the most effective vehicle for marketing the system.
Leave out the byte code and we might instead be programming in Dylan or
some other "exotic" language.

7. ClassLoader as an entity. Killer thing, IMO.

across units that are loaded at runtime? For server-side programming,
even D will be challenged on this front.

8. Reflection as part of runtime.
------------------
Having said that, Java has problems of course. It is good for some set of
tasks. It is not a universal language.
But it is damned stable. For the last 7 years or so it has worked as is.

Agreed. D is a great step forward for C and C++ programmers, as these
languages compete in the same application domains. Java and C# will
continue to dominate in managed environments. IMHO, Scala is the kind of
language that might challenge Java/C# in the long run.
Matthias

RAII is for managing resources, which is different from managing state
or transactions.

This is a fantastic summary statement. I'd like to see it in the docs.
(In fact, error.html should have a link to exception-safe.html. Right
now, error.html is (for a C++ programmer at least) a "motherhood"
statement that doesn't have any D-specific substance).
try-catch is still needed, as on_scope doesn't catch

I hope that these buggy examples show just how hard it is to get
try-catch-finally to be correct, and how easy it is to get the on_scope
correct. This leads me to believe that try-catch-finally is just
conceptually wrong, as it does not match up with how we think despite being
in common use for over a decade.

This is very interesting. Seems to me that these on_scope_xxx are kind
of "naked destructors". If you consider them to be the low-level
construct, then putting a destructor into a class means: when the
constructor is called, insert "on_scope_exit ~this()" at the same time.
Is the convoluted nature of the RAII solution simply because
the same destructor is executed regardless of whether the function was
exited normally, or whether an exception occurred? And because there's
no easy way of determining if you are inside an exception handler. So
you only have "finally", without "catch".
I wonder if destructors could be jazzed up, so that they can insert an
"on_scope_failure" as well as "on_scope_exit"?
Maybe called ~catch()?
class Foo
{
   ~catch() {
      unwind_foo();
   }
}
class Bar
{
   ~this() { // destroy bar
   }
   ~catch() {
      unwind_bar();
      // now we go to ~this(), which behaves like finally.
   }
}
Transaction abc()
{
   Foo f;
   Bar b;
   Def d;
   f = dofoo();
   b = dobar();
   d = dodef();
   return Transaction(f, b, d);
}
Doesn't have the flexibility of on_scope, where you have access to all
variables -- but on the other hand, it has the RAII benefit that users
of the class don't need to remember to do it.
Just an idea.

Is the convoluted nature of the RAII solution simply because
the same destructor is executed regardless of whether the function was
exited normally, or whether an exception occurred? And because there's
no easy way of determining if you are inside an exception handler. So
you only have "finally", without "catch".
I wonder if destructors could be jazzed up, so that they can insert an
"on_scope_failure" as well as "on_scope_exit"?
Maybe called ~catch()?

I hope that these buggy examples show just how hard it is to get
try-catch-finally to be correct, and how easy it is to get the
on_scope correct. This leads me to believe that try-catch-finally is
just conceptually wrong, as it does not match up with how we think
despite being in common use for over a decade.

This is very interesting. Seems to me that these on_scope_xxx are kind
of "naked destructors". If you consider them to be the low-level
construct, then putting a destructor into a class means: when the
constructor is called, insert "on_scope_exit ~this()" at the same time.

on_scope does practically eliminate the need for an 'auto' keyword,
assuming auto classes would be allocated on the heap either way.

Is the convoluted nature of the RAII solution simply because
the same destructor is executed regardless of whether the function was
exited normally, or whether an exception occurred? And because there's
no easy way of determining if you are inside an exception handler. So
you only have "finally", without "catch".
I wonder if destructors could be jazzed up, so that they can insert an
"on_scope_failure" as well as "on_scope_exit"?

I've considered adding functionality to Ares to allow a user to
determine if an exception is currently in flight--C++ offers this but
it's little used as it's not terribly reliable. But with on_scope there
seems little need for this.

True. But I would argue that the burden of writing exception-safe code
is on the function writer more so than the class writer, largely because
the class writer can't predict or address every way that his class may
be used. on_scope also allows a bit more flexibility:
{
   Foo f = acquireFoo();
   on_scope_exit f.commitAll();
   on_scope_failure f.unwindAll();
   f.setFirst();
   {
      f.setSecond();
      on_scope_failure f.unwindLast();
   }
}
Sean

the former does still (I assume) have the advantage of stopping you from
inadvertently reassigning to yuiop.

Yes, whereas the latter allows state unwinding for things that aren't class
objects. Without on_scope_exit, extra dummy classes would have to be defined
for each, turning a one liner into a dozen lines that appear out of context.

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

Scope guards are a novel feature no other language has. They're based
on Andrei Alexandrescu's scope guard macros, which have led to
considerable interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

The only thing I see that's amazingly useful about this is the
on_scope_success. Having a block of code that is only executed when an
exception is NOT thrown would be nice. However, the rest of this stuff
seems like rocks under the water. Your example of the new programmer
coming in reads like this to me: "The new programmer may not take the
time to actually read the code he's modifying, so let's stick hidden
stuff in there to take care of things he might have missed." Which
doesn't seem very logical to me, as it may be just as important to
modify those on success/on failure blocks and miss them.
I'd say add another option to try..catch..finally paradigm.
-S.

Scope guards are a novel feature no other language has. They're based
on Andrei Alexandrescu's scope guard macros, which have led to
considerable interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

The only thing I see that's amazingly useful about this is the
on_scope_success. Having a block of code that is only executed when an
exception is NOT thrown would be nice. However, the rest of this stuff
seems like rocks under the water. Your example of the new programmer
coming in reads like this to me: "The new programmer may not take the
time to actually read the code he's modifying, so let's stick hidden
stuff in there to take care of things he might have missed." Which
doesn't seem very logical to me, as it may be just as important to
modify those on success/on failure blocks and miss them.
I'd say add another option to try..catch..finally paradigm.

Seconded. For more fun, next we can debate whether the syntax should be:
1. try..pass..catch..finally
2. try..catch..pass...finally
3. try..catch..finally..pass

No, on_scope gives us more than try/catch/finally. Let me try this another
way.
"catch" from try/catch/finally allows:
- you to execute a static/pre-defined set of code in the event that there
is a failure in the current scope.
"finally" from try/catch/finally allows:
- you to execute a static/pre-defined set of code at the exit of the
current scope in all cases.
Compare that to:
on_scope_failure allows:
- you to add one or more sets of code, at the points at which they become
required, to the list of things to execute in the event of a failure.
on_scope_exit allows:
- you to add one or more sets of code, at the points at which they become
required, to the list of things to execute at the exit of the scope in all
cases.
To achieve the same thing that on_scope gives with try/finally requires
you to store state somewhere to indicate which parts of the finally block
to execute, or it requires that you define several finally blocks and
nest them. Both of those options are nowhere near as neat as on_scope.
I'm honestly baffled that people can't see the difference.
Regan

Regan,
Your last post finally made me get it.
on_scope_* is a different kind of construct. It's kind of like (imperfect
analogy coming) descending a dynamic ladder. If you have to go up again,
on_scope_failure tells you how to undo each "code rung" so you can go back
up. on_scope_success adds a last rung from the floor, and on_scope_exit adds
code at both ends of the ladder. The analogy wears a bit thin because you can
use on_scope_exit, failure, and success multiple times.
I have read TFA and all the previous messages, and it still wasn't apparent that all
this was possible.
Thanks,
Tom
In article <ops5n86slk23k2f5 nrage.netwin.co.nz>, Regan Heath says...

Seconded. For more fun, next we can debate whether the syntax should be:
1. try..pass..catch..finally
2. try..catch..pass...finally
3. try..catch..finally..pass

No, on_scope gives us more than try/catch/finally. Let me try this another
way.
"catch" from try/catch/finally allows:
- you to execute a static/pre-defined set of code in the event that there
is a failure in the current scope.
"finally" from try/catch/finally allows:
- you to execute a static/pre-defined set of code at the exit of the
current scope in all cases.
Compare that to:
on_scope_failure allows:
- you to add one or more sets of code, at the points at which they become
required, to the list of things to execute in the event of a failure.
on_scope_exit allows:
- you to add one or more sets of code, at the points at which they become
required, to the list of things to execute at the exit of the scope in all
cases.
To achieve the same thing that on_scope gives with try/finally requires
you to store state somewhere to indicate which parts of the finally block
to execute, or it requires that you define several finally blocks and
nest them. Both of those options are nowhere near as neat as on_scope.
I'm honestly baffled that people can't see the difference.
Regan

To achieve the same thing that on_scope gives with try/finally requires
you to store state somewhere to indicate which parts of the finally
block to execute, or, it requires that you define several finally blocks
and nest them. Both of those options are nowhere near as neat as on_scope.
I'm honestly baffled that people can't see the difference.

Seconded. For more fun, next we can debate whether the syntax should be:
1. try..pass..catch..finally
2. try..catch..pass...finally
3. try..catch..finally..pass

No, on_scope gives us more than try/catch/finally. Let me try this
another way.
"catch" from try/catch/finally allows:
- you to execute a static/pre-defined set of code in the event that
there is a failure in the current scope.
"finally" from try/catch/finally allows:
- you to execute a static/pre-defined set of code at the exit of the
current scope in all cases.
Compare that to:
on_scope_failure allows:
- you to add one or more sets of code, at the points at which they
become required, to the list of things to execute in the event of a
failure.
on_scope_exit allows:
- you to add one or more sets of code, at the points at which they
become required, to the list of things to execute at the exit of the
scope in all cases.
To achieve the same thing that on_scope gives with try/finally requires
you to store state somewhere to indicate which parts of the finally
block to execute, or it requires that you define several finally
blocks and nest them. Both of those options are nowhere near as neat
as on_scope.
I'm honestly baffled that people can't see the difference.
Regan

Ah. I see what you want. I was a bit confused about this before.
This is just syntactic sugar for a relatively easy-to-write class, though.
Maybe it should be part of the standard library?
It essentially acts like a multi-homed delegate which is implicitly
called at the end of the scope.
Something like this would be equivalent?
Transaction abc()
{
   Foo f;
   Bar b;
   Def d;
   Auto scoped = new Scoper(); // scoped.exit() can be called by the destructor. We won't add a finally block.
   try {
      scoped.failures ~= void delegate () { dofoo_unwind(f); }
      f = dofoo();
      scoped.failures ~= void delegate () { dobar_unwind(b); }
      b = dobar();
      scoped.success()
   } catch (Exception o) {
      scoped.failed()
      throw o;
   }
   return Transaction(f, b, d);
}
It seems to me Walter's class example doesn't exactly do this justice.
I find this fully acceptable for the few places I would use this. The
implicit instantiation of an object like this might be nice though, if
it only added the extra code when it was referenced. Which should be
trivial to do. I'd much rather have an "implicit" scope object than
this, or the current syntax.
-S.

Should be:
f = dofoo();
scoped.failures ~= void delegate () { dofoo_unwind(f); }
etc. Other than that, it looks like it'll work, but it's a lot more code
than on_scope. You also need to be sure that the delegates don't refer to
any variables declared inside the try block, as those variables no longer
exist in the catch block - and the compiler can't catch that error. This
isn't a problem with on_scope.

Should be:
f = dofoo();
scoped.failures ~= void delegate () { dofoo_unwind(f); }
etc. Other than that, it looks like it'll work, but it's a lot more
code than on_scope. You also need to be sure that the delegates don't
refer to any variables declared inside the try block, as those
variables no longer exist in the catch block - and the compiler can't
catch that error. This isn't a problem with on_scope.

Why would that be the case? If dofoo() throws an error, that delegate
would not yet be appended to the failures delegate array. Thus it
wouldn't be able to do its job in the catch block. I know your and
Derek's order was like this originally, but it seemed like a typo so I
fixed it.
-S.

f = dofoo();
scoped.failures ~= void delegate () { dofoo_unwind(f); }
etc. Other than that, it looks like it'll work, but it's a lot more
code than on_scope. You also need to be sure that the delegates don't
refer to any variables declared inside the try block, as those
variables no longer exist in the catch block - and the compiler can't
catch that error. This isn't a problem with on_scope.

Why would that be the case? If dofoo() throws an error, that delegate
would not yet be appended to the failures delegate array. Thus it
wouldn't be able to do its job in the catch block. I know your and
Derek's order was like this originally, but it seemed like a typo so I
fixed it.

The idea of dofoo_unwind is to reverse the changes made by dofoo when
dofoo succeeds and a later step fails. If/when dofoo throws, it should undo
its own changes before returning.
Regan

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

The only thing I see that's amazingly useful about this is the
on_scope_success. Having a block of code that is only executed when an
exception is NOT thrown would be nice. However, the rest of this stuff
seems like rocks under the water. Your example of the new programmer
coming in reads like this to me: "The new programmer may not take the
time to actually read the code he's modifying, so let's stick hidden stuff
in there to take care of things he might have missed." Which doesn't seem
very logical to me, as it may be just as important to modify those on
success/on failure blocks and miss them.
I'd say add another option to try..catch..finally paradigm.
-S.

Scope guards are a novel feature no other language has. They're based
on Andrei Alexandrescu's scope guard macros, which have led to
considerable interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

The only thing I see that's amazingly useful about this is the
on_scope_success. Having a block of code that is only executed when an
exception is NOT thrown would be nice. However, the rest of this stuff
seems like rocks under the water. Your example of the new programmer
coming in reads like this to me: "The new programmer may not take the
time to actually read the code he's modifying, so let's stick hidden
stuff in there to take care of things he might have missed." Which
doesn't seem very logical to me, as it may be just as important to
modify those on success/on failure blocks and miss them.
I'd say add another option to try..catch..finally paradigm.
-S.

Scope guards are a novel feature no other language has. They're based on
Andrei Alexandrescu's scope guard macros, which have led to considerable
interest in the idea. Check out the article
www.digitalmars.com/d/exception-safe.html

This is excellent!
Is it the first really new thing in D?
Thinking hard (for 30 seconds), it seems to be the first one.
Excellent in any case.
Ant