
In AWR analysis, what appears to be the root cause of an issue can easily turn out to be just a symptom. Last week, Rajat sent me an AWR report which is a perfect illustration of this (thanks Rajat). I posted the key sections from this report below (sorry for the less-than-perfect formatting: I had to manually re-format the HTML version of the report into text).

The first thing that leaps to the eye is the I/O problem, which manifests itself via free buffer waits and abnormally high average times for I/O-related events: 55 ms per db file sequential read (!) and 85 ms per db file scattered read (!!) is 10-20 times more than normal. That alone should be enough to bring any application to its knees.

So the next question is what could have caused all this. It’s possible that this abnormality stems from a hardware failure or a misconfiguration in the I/O subsystem, so it would make sense to contact the SA and ask him to check the I/O stats from his angle. However, if we look a bit further in the AWR report, we will find yet another problem:

BUSY_TIME 16,749,988
IDLE_TIME 510,692

i.e. the CPU was 97% busy (!!) during a 6-hour interval (!!!). Now that we have established that both the I/O and the CPUs were bottlenecks, it's quite unlikely that the two problems are unrelated to each other. Most probably there is a causal link between them, but which is the cause and which is the consequence? An I/O problem is unlikely to affect the CPU, but the reverse is possible: if all CPUs on the system are busy, it can take forever for the OS to complete an I/O request, and eventually the entire system virtually comes to a standstill.
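The 97% figure follows directly from the two OSSTAT counters: 16,749,988 / (16,749,988 + 510,692) ≈ 0.97. If you want to reproduce this for an arbitrary interval rather than read it off the report, a minimal sketch against DBA_HIST_OSSTAT (the view AWR's "Operating System Statistics" section is built from) could look like this — the snapshot ids 100 and 101 are placeholders for your interval's begin/end snapshots:

```sql
-- CPU busy % over an AWR interval; OSSTAT values are cumulative,
-- so we difference the end snapshot against the begin snapshot.
SELECT ROUND(100 * busy / (busy + idle), 1) AS cpu_busy_pct
FROM  (SELECT SUM(CASE WHEN e.stat_name = 'BUSY_TIME'
                       THEN e.value - b.value END) AS busy,
              SUM(CASE WHEN e.stat_name = 'IDLE_TIME'
                       THEN e.value - b.value END) AS idle
       FROM   dba_hist_osstat b
       JOIN   dba_hist_osstat e
         ON   e.dbid            = b.dbid
         AND  e.instance_number = b.instance_number
         AND  e.stat_name       = b.stat_name
       WHERE  b.snap_id = 100   -- interval begin snapshot (placeholder)
         AND  e.snap_id = 101   -- interval end snapshot (placeholder)
         AND  e.stat_name IN ('BUSY_TIME', 'IDLE_TIME'));
```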

But that’s not yet the end of the story; it would also be nice to know why the CPU usage went up so dramatically, so let’s take a look at the “SQL ordered by CPU time” section of the report (column names and SQL ids have been obfuscated at the sender’s request):

We can see that one statement is responsible for about 50% of the CPU utilization during the analyzed period, and a bunch of very similar statements (which probably differ only in the values of literals) are responsible for at least another 15%.

So everything is pointing to the top statement as the sole culprit of the entire issue — and indeed, after I was provided additional information about that statement, I saw that its plan flipped right around the time of the incident. If I had access to the database, I would’ve pursued the matter further — e.g. by using dbms_xplan.display_awr to see the different plans and dbms_stats.diff_table_stats_in_history to see what changes in statistics may have provoked this change.
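The two follow-up checks mentioned above would look roughly like this — a sketch only, since the real sql_id was obfuscated ('0abc123def456' stands in for it) and the owner/table names are placeholders:

```sql
-- List every execution plan AWR has captured for the statement,
-- which would show the plan flip and when it happened:
SELECT * FROM TABLE(dbms_xplan.display_awr('0abc123def456'));

-- Compare the table's optimizer statistics between two points in time
-- (here: a week ago vs. now) to see what change in stats could have
-- provoked the new plan; differences above pctthreshold are reported:
SELECT report, maxdiffpct
FROM   TABLE(dbms_stats.diff_table_stats_in_history(
               ownname      => 'APP_OWNER',    -- placeholder
               tabname      => 'SOME_TABLE',   -- placeholder
               time1        => SYSTIMESTAMP - 7,
               time2        => SYSTIMESTAMP,
               pctthreshold => 10));
```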

Conclusion

This post hopefully illustrates the following points:

1) CPU starvation has many different appearances; among other things, it can look like an I/O issue

2) Don’t jump to conclusions. After finding a symptom, resist the temptation to pronounce it the root cause of the problem. Note that in this specific case, if we had decided that the I/O problem was the root cause, then we most likely would have proceeded to the “SQL ordered by reads” section of the report rather than “SQL ordered by CPU time” — and that section doesn’t point in the right direction (the culprit query isn’t anywhere near the top of that list)

3) Oracle databases contain tons of diagnostic information, which allows you to identify the root cause of almost any performance problem in great detail. You don’t have to settle for a vague answer like “something’s wrong with the I/O subsystem” or “the workload was probably higher”.

Update: I spotted an error in my CPU time calculation: 6 hours is not 216,000 s, but 21,600 s. With this taken into account, busy_time is almost exactly equal to NUM_CPUS x elapsed_time (167,500 s of busy time over a 21,600 s interval works out to nearly 8 fully busy CPUs), and we don’t even need to refer to os_cpu_wait_time to know that there was CPU starvation — so I’ve removed that part from my post.

Too many folks forget that physical I/O finishes with the kernel marking the blocked process as ‘runnable’. The time spent waiting for the scheduler to put the process back on a core is added to the I/O time. Also, an 11g AWR report would point out the waits for CPU more clearly.

Years ago I wrote a post that shows this exact phenomenon adding to log file sync waits. If you’ll allow it, I’d like to leave a link to that post:

It’s really great to see you on my blog. I have been following yours for a while now — great stuff. And I remember this article on log file sync waits you referenced; it’s probably the best study on the subject, ever!

I have one question:
“We can see that one statement is responsible for about 50% of the CPU utilization during the analyzed period, and a bunch of very similar statements (which probably only differ by values of literals) is responsible for at least yet another 15%.”

It’s actually quite simple: the report shows total CPU usage to be 160,000 CPU seconds (see e.g. “CPU time” in “top 5 timed events”), while the top statement took 80,000 CPU seconds out of it. 80,000/160,000 = 1/2 = 50%. And similarly for the other statements.

It depends on the database version. In 10g, you can find “DB CPU” (which is essentially the same thing) in the “Time model statistics” section.
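Outside of an AWR report, the same time-model figures can be pulled directly from the instance — a minimal sketch (note these values are cumulative microseconds since instance startup, whereas AWR shows the per-interval delta):

```sql
-- "DB CPU" vs. total "DB time" from the system-level time model;
-- values are in microseconds, converted here to seconds.
SELECT stat_name,
       ROUND(value / 1e6) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB CPU', 'DB time');
```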

In 11g there is an “Instance CPU” section which shows both the % of available CPU used by the instance and the % of total CPU usage that the instance is responsible for (i.e. the former shows CPU usage in terms of total CPU capacity, while the latter shows this database instance’s share among all CPU consumers on the host).