>which warns about uncommitted DBI transactions unless the destruction
>is due to an exception (in which case the transaction will be rolled
>back anyway) [2].

You already have the correct logic for deciding to roll back: you roll
back unless there was an explicit commit. So the only thing depending
on exceptionness is the warning. I'm afraid I don't have any good
suggestion for what to do about the warning.
-zefram

>
> To achieve what you want from this was already impossible. You could
> see the exception being thrown when one is thrown, but during
> unwinding from a local exit you were not guaranteed any particular
> value in $@. In that case $@ would just have whatever value it had
> before the local exit.

When you say "unwinding" - do you mean out-of-order destruction during
exit()? If so, this is already handled by enforcing proper destruction
order via a clever closure in an END{} block.

> >which warns about uncommitted DBI transactions unless the destruction
> >is due to an exception (in which case the transaction will be rolled
> >back anyway) [2].

>
> You already have the correct logic for deciding to roll back: you roll
> back unless there was an explicit commit. So the only thing depending
> on exceptionness is the warning. I'm afraid I don't have any good
> suggestion for what to do about the warning.

The whole point of the guard *is* the warning. The rollback will happen
on $dbh destruction anyway; the guard merely attempts a rollback itself
so it can warn of potential issues earlier.
In essence, we will have to deprecate this module as soon as possible if
it is certain that a DESTROY block will have no knowledge of an existing
exception in future versions of perl.
Note: we care about *whether* the DESTROY is the result of an exception,
not *what* the exception was. If there is some way to determine this
(a sibling of $^S), or some sort of minimal XS module, all my problems
go away.
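For reference, the pattern under discussion looks roughly like the
following. This is a minimal sketch, not the actual DBIx::Class code;
MockDBH-style handles and the exact messages are illustrative stand-ins
for a real DBI handle, and the $@ check in DESTROY is precisely the
heuristic that commit 96d9b9c made unreliable:

```perl
package TxnScopeGuard;
use strict;
use warnings;

sub new {
    my ($class, $dbh) = @_;
    $dbh->begin_work;    # start a transaction, as with DBI
    return bless { dbh => $dbh, committed => 0 }, $class;
}

sub commit {
    my $self = shift;
    $self->{dbh}->commit;
    $self->{committed} = 1;
}

sub DESTROY {
    my $self = shift;
    return if $self->{committed};

    # The heuristic this thread is about: a true $@ suggests we are
    # being destroyed because an exception is unwinding the stack.
    # After commit 96d9b9c this is unreliable, since $@ may not be
    # set yet while destructors run during unwinding.
    my $in_exception = defined $@ && length $@;

    local $@;                       # avoid clobbering the caller's $@
    eval { $self->{dbh}->rollback };

    warn "TxnScopeGuard destroyed without an explicit commit!\n"
        unless $in_exception;
}

1;
```

The rollback itself is wrapped in eval with a localized $@, so the
destructor never clobbers the caller's error state - only the decision
about *warning* depends on knowing whether an exception is in flight.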

Greetings everyone,
I would like to rehash the consequences of commit 96d9b9c, made back on
2010-04-20. While the problem originally manifested itself as failing
tests within DBIx::Class, it later became apparent that the change
deprives perl of an entire class of object-based guards.
I will try to briefly summarize the "what's so special" part below;
for more information I recommend reading the full write-up from the
point of view of DBIx::Class at:
http://git.shadowcat.co.uk/gitweb/gitweb.cgi?p=dbsrgits/DBIx-Class.git;a=blob_plain;f=useful_guard_objects.html;hb=refs/heads/txn_guard_breakage
So the TLDR version is: if you have a guard object whose DESTROY has
important consequences (e.g. eats your data), it is extremely useful to
be able to tell whether the object is being destroyed because it simply
passed out of scope or because of an exception-induced jump. If the
destruction is not caused by an exception, the chance is very high that
the data is being eaten due to a programming error, and not notifying
the user of this fact can and will introduce nearly impossible-to-track
bugs. For an explanation of why someone would want to use an object for
something so sensitive to control flow, read the link above.
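To make the scope-exit vs. exception-jump distinction concrete, here is
a minimal self-contained sketch; the Guard class here is illustrative
only, not the real DBIx::Class guard, and its DESTROY currently cannot
tell the three exits apart - which is exactly the problem:

```perl
package Guard;
use strict;
use warnings;

sub new    { bless { done => 0 }, shift }
sub commit { $_[0]{done} = 1 }

sub DESTROY {
    my $self = shift;
    return if $self->{done};
    # Pending work is about to be discarded. Was this a plain scope
    # exit (very likely a programming error worth warning about), or
    # an exception-induced jump (where a warning is just noise)?
    # Before commit 96d9b9c, inspecting $@ here could tell them apart.
    warn "guard discarded pending work without a commit\n";
}

package main;

{
    my $g = Guard->new;
    $g->commit;          # happy path: no warning on destruction
}

{
    my $g = Guard->new;  # commit forgotten: DESTROY warns, and
}                        # rightly so - this is likely a bug

eval {
    my $g = Guard->new;
    die "boom\n";        # exception path: DESTROY still warns,
};                       # but here the warning is unwanted noise
```

The second and third blocks reach DESTROY through identical-looking
code paths, yet only one of them deserves a warning.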
I am all for the actual change (it's now impossible to clobber $@
accidentally \o/); the problem is in the way the fix was implemented.
There undoubtedly is a way to make intermediate frames aware that an
exception is taking place. While I don't know much about the perl
internals, something as simple as setting $@ *twice* could be sufficient
to resolve this (while unfortunately keeping the caveats Zefram listed
above). If a new mechanism is implemented signaling to DESTROY why it is
being called - that's even better! Then the corner cases are also
solved, since any residual value of $@ from previous eval{}s no longer
matters.
Cheers


So about a year has passed, and I'd like to rehash this RT once again. I
figured this recent write-up is a good thing to share with perl5-porters.
Comments welcome!
http://www.perlmonks.org/?node_id=924524
Cheers