In response to this question of how to debug optimised code I can
contribute the following.

The problems of debugging optimised code due to code motion,
inlining, etc., are very difficult to compensate for in source-level
debugging of imperative languages. For declarative languages, however,
the problem is much simpler.

In a purely declarative language, the outputs of a function (or predicate)
are solely dependent on the inputs. This property gives rise to the
idea of declarative debugging (a variant of which is called rational
debugging). The idea is that if evaluating a function f(x) requires
evaluating g(y) and h(z), the debugger can ask the programmer a series
of questions such as "Is g(y) = <value> correct?" and "Is h(z) =
<value> correct?", and reason as follows:

If g and h are the only functions f calls, and they both
evaluate correctly, then the bug must lie in the way f
composes the results of g and h (or possibly in the arguments
passed to g and h).
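
The questioning process above can be sketched in a few lines. This is
a minimal illustration, not any real debugger: the names CallNode,
find_bug, and the oracle are all hypothetical, and the "oracle" is just
a table of intended results standing in for the programmer's answers.

```python
from dataclasses import dataclass, field

@dataclass
class CallNode:
    name: str          # function name, e.g. "f"
    args: tuple        # arguments it was called with
    result: object     # value it actually returned
    children: list = field(default_factory=list)  # sub-calls it made

def find_bug(node, oracle):
    """Ask the oracle about each sub-call. If some sub-call returned a
    wrong result, recurse into it; if every sub-call is correct but this
    node's result is wrong, this node's definition is the buggy one."""
    for child in node.children:
        if not oracle(child):
            return find_bug(child, oracle)
    return node

# Example: f(x) = g(x) + h(x). Suppose the buggy h returned 5 where the
# intended result was 4, so f(3) came out as 11 instead of 10.
tree = CallNode("f", (3,), 11, [
    CallNode("g", (3,), 6),    # correct: g(3) should be 6
    CallNode("h", (3,), 5),    # wrong:   h(3) should be 4
])

intended = {("g", (3,)): 6, ("h", (3,)): 4, ("f", (3,)): 10}
oracle = lambda n: intended[(n.name, n.args)] == n.result

print(find_bug(tree, oracle).name)   # the debugger blames "h"
```

Note that nothing here depends on the order in which g and h were
actually evaluated, only on what values they produced.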

With smarter use of data-dependency information, you could extend the
technique further: the programmer indicates which part of the result of
f(x) is incorrect, and the debugger takes the programmer straight to
the place in the code where that particular value was constructed.
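
To make the dependency-guided idea concrete, here is a toy sketch.
The provenance map is hypothetical bookkeeping that such a debugger
would record during evaluation; f, g, and h are made-up examples, with
h deliberately miswritten (x + 10 where x + 1 was intended).

```python
def g(x):
    return x * 2

def h(x):
    return x + 10   # buggy: the intended definition was x + 1

def f(x):
    # Build the result, recording which call constructed each field.
    result = {"left": g(x), "right": h(x)}
    provenance = {"left": "g", "right": "h"}
    return result, provenance

result, provenance = f(3)
# The programmer reports that only the "right" field is wrong; the
# debugger can jump straight to the definition that constructed it,
# without asking any questions about g at all.
print(provenance["right"])   # blames "h"
```

The point is that the dependency information prunes the question tree:
only the calls that contributed to the wrong part of the result need
to be examined.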

I expect that as declarative languages become more important and better
implemented, we will see techniques such as declarative debugging become
more widely understood. The point of declarative debugging is that it
is independent of how the underlying implementation *does* the
computation, and depends only on what the program is specified to
compute. That is exactly why it copes with optimising implementations:
code motion and inlining change nothing the debugger relies on.

I'm not sure that I've done full justice to declarative debugging and how
it copes with optimising implementations. If you have questions or think
I've left something out, I'd be pleased to engage in further communication
on the subject.