Denis Koroskin wrote:
> On Thu, 23 Oct 2008 17:47:52 +0400, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> wrote:
>
>> Denis Koroskin wrote:
>>> On Thu, 23 Oct 2008 04:02:22 +0400, Sean Kelly
>>> <sean@invisibleduck.org> wrote:
>>>
>>>> Jarrett Billingsley wrote:
>>>>> On Wed, Oct 22, 2008 at 7:13 PM, Sean Kelly
>>>>> <sean@invisibleduck.org> wrote:
>>>>>> Errors represent situations which are typically
>>>>>> non-recoverable--program
>>>>>> logic errors, for example, or situations where data corruption may
>>>>>> have
>>>>>> occurred--while Exceptions represent the bulk of normal execution
>>>>>> errors,
>>>>>> including OutOfMemory conditions.
>>>>> How, pray tell, is an app supposed to recover from an
>>>>> out-of-memory condition?
>>>>
>>>> By releasing dynamically allocated memory. I'd expect some to be
>>>> released automatically as the stack is unrolled to the catch point
>>>> anyway. For example:
>>>>
>>>> void main()
>>>> {
>>>>     try { fn(); }
>>>>     catch( Exception e ) {}
>>>>     int[] x = new int[16384];
>>>> }
>>>>
>>>> void fn()
>>>> {
>>>>     int[] x = new int[16384];
>>>>     fn();
>>>> }
>>>>
>>>> Eventually this app will run out of memory (hopefully before it runs
>>>> out of stack space) and an OutOfMemoryException will be thrown. As
>>>> the stack is unwound, all valid references to this memory will be
>>>> released. So the allocation in main() should trigger a collection
>>>> which frees up all the now-unreferenced memory, thus allowing the
>>>> allocation in main() to succeed.
>>>>
>>>> For manual recovery, consider an app that does a great deal of
>>>> internal caching. On an OutOfMemory condition the app could clear
>>>> its caches and then retry the operation. This is probably a bad
>>>> example, but I think the general idea of trapping and recovering
>>>> from such a state is potentially valid.
>>>>
>>>>
>>>> Sean
>>> I think that OutOfMemoryException should *not* be recoverable.
>>> Instead, the language should provide some hookable callback (like
>>> onOutOfMemoryError()) which is called when the memory limit is
>>> reached, so that the program may free some unused memory (held by
>>> the user and thus not garbage-collected) and the allocation is
>>> retried without failure (return true from the callback). The user
>>> might instead decide to throw some other kind of *exception*, like
>>> NotEnoughMemoryException(), to catch and recover from, or pass
>>> (return false from the callback), which will finally throw
>>> OutOfMemoryError().
>>
>> But one of the best things possible to do is unwind the stack and fall
>> back to a higher position with less state. A function cannot do that;
>> an exception can.
>
> You can do that, of course, just don't handle the onOutOfMemory custom
> callback (per my proposal).
>
>> Why do you guys want to avoid exceptions in one of the few cases when
>> they are exactly, but exactly what the doctor prescribed?
>>
>> Andrei
>
> My concern is to avoid program flow interrupt and fall-back to some
> recovery code if possible. One of such cases is OutOfMemoryException.
>
> For example, I receive some network message. An object is constructed
> and about to be inserted into the message list. Imagine that an
> OutOfMemoryException is thrown during the insertion. It is quite hard
> (if possible at all) and not generally desirable to revert the network
> state: I would have to pretend that the message has not been received
> yet so that I get it later (on a second attempt, after the OutOfMemory
> exception is processed and some memory freed), etc. Besides, where
> would you put the catch(OutOfMemoryError) code? Far from the exception
> source point, most likely, which is even worse for recovery.
>
> The solution I would prefer is to avoid that situation at all! No
> memory? Fine, I'll clean this memory pool and that one, too. Try again,
> please! Still no memory? Then re-throw the exception so that upper
> forces handle the situation.
>
> I don't say that exception recovery is not needed - of course it is! -
> but I prefer to avoid it if possible. It is just safer.
>
> Ideally, the user would remain unaware that an out-of-memory error has
> occurred and been recovered from.
I understand your motivation. But you can easily implement all of that
within client code without changing anything anywhere. Notice that the
problem is more general; e.g., if creating a socket fails you might ask
the user to plug in a cable, start a wireless card, etc. Just catch the
exception at the appropriate level, take the appropriate measure, and
goto RETRY. :o)
Andrei

Robert Fraser wrote:
>
> Didn't see this discussion before I went off my tirade. I agree it's
> recoverable and in a perfect world this would be so, but look through
> any large codebase for how many catch(Exception) blocks there are. I'll
> bet you that NONE of the general catch(Exception) blocks (except the
> ones that print an error and exit the program) expect to see, or are
> prepared for, an out of memory exception.
I'd argue that anyone who catches Exception, prints a message, and
continues blindly is just asking for trouble. But this seems to have
become an ingrained practice anyway, so it's a fair point. However, I'm
not sure this is sufficient reason to relabel an out of memory condition
as ostensibly unrecoverable.
> Asking programmers to think about out of memory errors is too much.
> We're trained to assume computers have infinite memory and when they run
> out, the system/runtime is supposed to do drastic things like crashing
> our programs - not start introducing strange logic errors all over the
> place because programmers didn't realize their catch(Exception) blocks
> had to deal with more than "file doesn't exist".
Perhaps I'm simply getting old... when did memory use become irrelevant?
I grant that making an application exception safe is more difficult if
out of memory conditions are considered recoverable, but I don't think
it's tremendously more difficult.
Sean

Denis Koroskin wrote:
>
> I think that OutOfMemoryException should *not* be recoverable.
> Instead, the language should provide some hookable callback (like
> onOutOfMemoryError()) which is called when the memory limit is reached,
> so that the program may free some unused memory (held by the user and
> thus not garbage-collected) and the allocation is retried without
> failure (return true from the callback). The user might instead decide
> to throw some other kind of *exception*, like NotEnoughMemoryException(),
> to catch and recover from, or pass (return false from the callback),
> which will finally throw OutOfMemoryError().
This is a fair point, but how does one implement this safely? Let's say
for the sake of argument that onOutOfMemoryError() can be hooked by the
user and may try to recover. Now:
auto x = new int[BIG_NUMBER];
Let's say that gc_malloc() is implemented like so:
void* gc_malloc( size_t sz )
{
    void* ptr;
    do
    {
        ptr = doMalloc(sz);
        if( ptr is null )
            onOutOfMemoryError();
    } while( ptr is null );
    return ptr;
}
Now let's assume that the first attempted allocation fails and
onOutOfMemoryError() frees some memory rather than throwing so the
allocation is attempted again. But BIG_NUMBER is so darn big that the
allocation fails again. And once again onOutOfMemoryError() tries to
free some memory and returns instead of throwing. Is there any way to
structure the recovery mechanism to avoid this situation given that
onOutOfMemoryError() has no idea that it's being called in such a loop?
gc_malloc() could certainly give up and throw after a certain number
of failed attempts, but putting the onus on the caller of
onOutOfMemoryError() to deal with this situation is not reasonable in my
opinion.
Sean

Sean Kelly wrote:
> Denis Koroskin wrote:
>>
>> I think that OutOfMemoryException should *not* be recoverable.
>> Instead, the language should provide some hookable callback (like
>> onOutOfMemoryError()) which is called when the memory limit is reached,
>> so that the program may free some unused memory (held by the user and
>> thus not garbage-collected) and the allocation is retried without
>> failure (return true from the callback). The user might instead decide
>> to throw some other kind of *exception*, like NotEnoughMemoryException(),
>> to catch and recover from, or pass (return false from the callback),
>> which will finally throw OutOfMemoryError().
>
> This is a fair point, but how does one implement this safely? Let's say
> for the sake of argument that onOutOfMemoryError() can be hooked by the
> user and may try to recover. Now:
>
> auto x = new int[BIG_NUMBER];
>
> Let's say that gc_malloc() is implemented like so:
>
> void* gc_malloc( size_t sz )
> {
>     void* ptr;
>     do
>     {
>         ptr = doMalloc(sz);
>         if( ptr is null )
>             onOutOfMemoryError();
>     } while( ptr is null );
>     return ptr;
> }
>
> Now let's assume that the first attempted allocation fails and
> onOutOfMemoryError() frees some memory rather than throwing so the
> allocation is attempted again. But BIG_NUMBER is so darn big that the
> allocation fails again. And once again onOutOfMemoryError() tries to
> free some memory and returns instead of throwing. Is there any way to
> structure the recovery mechanism to avoid this situation given that
> onOutOfMemoryError() has no idea that it's being called in such a loop?
> gc_malloc() could certainly give up and throw after a certain number of
> failed attempts, but putting the onus on the caller of
> onOutOfMemoryError() to deal with this situation is not reasonable in my
> opinion.
Perfectly put. Again: why prevent use of exceptions for the one case
that they fit the best?
Andrei

"Andrei Alexandrescu" wrote
> Sean Kelly wrote:
>> Andrei Alexandrescu wrote:
>>> Robert Fraser wrote:
>>>>
>>>> Option B:
>>>> ---------
>>>> try
>>>> {
>>>>     new Socket(30587);
>>>> }
>>>> catch(Exception e)
>>>> {
>>>>     if(e.type == ExceptionType.Socket)
>>>>         printf("Could not open socket\n");
>>>>     else
>>>>         throw e;
>>>> }
>>>
>>> I think you'd be hard-pressed to justify the "if" inside the second
>>> example. You couldn't create a Socket, period. It doesn't matter where
>>> exactly the exception was generated from.
>>>
>>> That's one thing about large exception hierarchies: everybody can come
>>> with cute examples on how they could be useful. As soon as the rubber
>>> hits the road, however, differentiating exceptions by type becomes
>>> useless.
>>
>> It may be different in a user application, but in services it's fairly
>> common to have specialized code for handling different exception types.
>> And more importantly, it's common to want different exception types to
>> propagate to different levels for handling. Sure, one could use a
>> generic exception handler at each level that rethrows if the detected
>> type isn't one that handler cares about but why do this when filtering on
>> type is a language feature?
>>
>> For example, let's say I have a network service that's backed by a SQL
>> database. My main program loop may look something like this:
>>
>> bool connected = false;
>> while( true )
>> {
>>     try
>>     {
>>         while( true )
>>         {
>>             auto r = acceptRequest();
>>             scope(failure) r.tellFailed();
>>             if( !connected )
>>                 connectToDB();
>>             handleRequest( r );
>>         }
>>     }
>>     catch( SqlException e )
>>     {
>>         connected = false;
>>         log( e );
>>     }
>>     catch( Exception e )
>>     {
>>         log( e );
>>     }
>> }
>>
>> ...
>>
>> void handleRequest( Request r )
>> {
>>     scope(failure) r.tellFailed( "General error" );
>>
>>     try
>>     {
>>         // process r
>>     }
>>     catch( AuthException e )
>>     {
>>         r.tellFailed( "Authentication failure" );
>>     }
>>     catch( ValidationException e )
>>     {
>>         r.tellFailed( "Invalid request format" );
>>     }
>> }
>>
>> Being able to trap specific types of exceptions makes this code cleaner
>> and more succinct than it would be otherwise. If this weren't possible
>> I'd have to trap, check, and rethrow certain exceptions at different
>> levels to ensure that the proper handler saw them.
>
> Thanks for fueling my argument. There's duplication in the code
> examples, as in many other examples I've seen in favor of by-type
> handling.
>
> First example:
>
> catch( Exception e )
> {
>     if (e.origin == "sql") connected = false;
>     log( e );
> }
>
> Less code and no duplication. Second example is even starker:
>
> catch( AuthException e )
> {
>     r.tellFailed( e.toString );
> }
>
> Clearly the message to print needs to be factored into the exception, at
> least in this case and many like it.
For the second example, you are assuming that ValidationException is a
subclass of AuthException.
I think in this case, ValidationException and AuthException might share a
common parent (SqlException) which you do NOT want to catch. It is
advantageous in this case to have both a hierarchy through inheritance and a
hierarchy through parameters.
Here's another idea to avoid rethrows: what about some pre-handler code
that tests whether an exception is what you want? Perhaps the same
mechanism that is used by template constraints:

catch(SqlException e) if (e.reason is Auth || e.reason is Validation)
{
    r.tellFailed(e.toString());
}

That gives you the same mechanism used with typing, without the rethrows,
without creating N subclasses of SqlException, and without multiple
repetitive blocks that do the same thing. It looks more readable to me,
too. I'm not sure how the exception machinery works, so I don't know how
plausible this is.
-Steve

Sean Kelly wrote:
> Really, all I was getting at is that I don't think it's a good idea for
> there to be no type distinctions at all. The tricky thing, as you've
> clearly pointed out, is to distill those distinctions to only the
> pertinent ones rather than simply designating a new type for every error.
Well put. But I think it's far more effective to start conservatively
(with few exception types) and fan out, instead of the current state of
affairs, where the first thing found in every std/ file is the emperor's
hairy belly pretending it's wearing a great cardigan.
In fact, by this I'm requesting the community's permission to throw
(sic!) a large fraction of the exception classes out of Phobos.
Andrei