The site runs on a slightly under-powered server and occasionally gets hit by lots of robots crawling simultaneously. Whilst one solution is to throw more processing power at the web server, the situation has highlighted that some of the scripts are less than wonderfully optimised.

What I'd love to be able to do is calculate the CPU time on a per-request basis, and focus on fixing the worst offenders first. Is there anything built into Apache or PHP that could help achieve this? I guess MySQL would have its own metrics for identifying the most server-intensive queries as well?

Profiling with Xdebug should quickly point out the bottlenecks. It will also help with the MySQL part: even though MySQL has its own reporting, you may be in a situation where a script runs the same (quick) query 1,000 times in a row. MySQL would not flag this, but you'd notice from Xdebug's profile that the function was called that many times for just one page.
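Enabling the profiler is just a couple of php.ini settings; Xdebug then writes cachegrind-compatible files you can open in KCachegrind or Webgrind. A minimal sketch using the Xdebug 2 setting names (the output directory is an assumption; it must exist and be writable by the web server):

```ini
; Sketch: enable the Xdebug 2 profiler for every request (dev server only)
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/xdebug-profiles
```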

You may do an initial survey on a dev server; any optimisation problems will show up quickly. From your production logs, find out which pages are the most visited and analyse those first on the dev server.

If you still need to do profiling on the production server, consider enabling it randomly for a subset of requests to minimise the load. From the docs:

You can also selectively enable the profiler with the xdebug.profiler_enable_trigger setting set to 1. If it is set to 1, then you can enable the profiler by using a GET/POST or COOKIE variable of the name XDEBUG_PROFILE

Apache mod_rewrite can add the GET variable transparently, so it never has to be passed to or from the user.
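A sketch of such a rewrite rule, limiting profiling to requests from one test IP (the IP address is an assumption; the second condition stops the variable being appended twice):

```apache
RewriteEngine On
# Only profile requests from the developer's IP (address is an assumption)
RewriteCond %{REMOTE_ADDR} ^203\.0\.113\.7$
# Don't re-append if the variable is already present
RewriteCond %{QUERY_STRING} !XDEBUG_PROFILE
RewriteRule ^(.*)$ $1?XDEBUG_PROFILE=1 [QSA,L]
```

This pairs with `xdebug.profiler_enable_trigger = 1`, so only the matched requests pay the profiling cost.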

There are a few possible ways of going about this; which is best will ultimately depend on how much time and effort you're willing to dedicate to a solution.

PHP Benchmarking: Generally a high time/effort option if your code does not already include benchmarking. Add a snippet that records time and microtime at the start of each script, then logs the elapsed totals at the end along with the URI that was called. You can add further timers around specific functions you may want to refactor later (just remember to write the benchmark data only after all regular operations have completed, otherwise you'll skew the results).
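A minimal sketch of such a snippet (the log path is an assumption; using `register_shutdown_function` keeps the write until after regular output has been sent):

```php
<?php
// Record wall-clock time at the very top of the script.
$benchStart = microtime(true);

// At shutdown, after the page has been generated, log URI + elapsed seconds.
register_shutdown_function(function () use ($benchStart) {
    $elapsed = microtime(true) - $benchStart;
    $uri = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '(cli)';
    // message_type 3 appends to the given file instead of the default error log.
    error_log(sprintf("%s %.4f\n", $uri, $elapsed), 3, '/tmp/request-times.log');
});
```

Sorting the resulting log by the second column quickly surfaces the slowest URIs.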

wget Benchmarking: Possibly the laziest way to benchmark. Grab a list of requested URIs from your web server logs and feed them into wget as local requests against the web server (ideally you would do this several times while traffic is very low, to get representative results).
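As a sketch (the log path and hostname are assumptions, and the awk field number assumes Apache's combined log format, where the request path is field 7):

```shell
LOG=/var/log/apache2/access.log
awk '{print $7}' "$LOG" |       # field 7 = request path in the combined format
  sort | uniq -c | sort -rn |   # count and rank paths by popularity
  head -20 | awk '{print $2}' | # keep just the 20 most-requested paths
  while read -r path; do
    echo "== $path"
    time wget -q -O /dev/null "http://localhost$path"
  done
```

Running the loop a few times and averaging smooths out caching and scheduling noise.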