I can't see this being "automatic" in any normal sense of the word.
It requires an effort to collect and analyse profile data and a good
deal of insight to exploit it wisely.

> and does it particularly well because
> the data set used for the profiling is the live run. The standard
> "compile ; run collecting data; recompile with PDF" cycle can suffer
> from artefacts in the data set used to "train" the PDF.

So can dynamic run-time profiling: A program does not have a constant
load during its runtime, so information you collect during the first
half of the execution may be completely wrong in the second half. The
problem is that predicting future behaviour from past behaviour is not
always easy (and certainly not perfect). In a sense, information
collected during the same run is late: It only talks about the past,
and you want to compile for the future. Profile information from
complete executions, on the other hand, shows how usage changes over
the execution time of the program; in theory a compiler could use this
to make several variants of the code for different phases of execution
and know when to switch to new versions.

But all this is about how well you can do in the limit, and that isn't
really interesting. What you actually want to know is how to get the
most optimisation for a given effort. And I doubt dynamic profile
gathering is the best approach for this.