The CPU Costing Model: A Few Thoughts Part IV (Map of the Problematique) January 7, 2010

It’s called the CPU Costing model because among other things, it includes the time associated with performing CPU operations.

The CPU Costing model formula once again:

Cost =
( sum of all the single block I/Os x average wait time for a single block I/O
+ sum of all the multiblock I/Os x average wait time for a multiblock I/O
+ sum of all the required CPU cycles / CPU cycles per second )
/ average wait time for a single block I/O
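To make the arithmetic concrete, here's a minimal sketch of the formula in Python. The figures are hypothetical assumptions chosen to reproduce the example discussed below (an I/O cost of 67, a CPU component of roughly 3, CPUSPEED of 1745); the SREADTIM/MREADTIM values, read counts and CPU cycle count are not taken from an actual plan:

```python
# Hypothetical inputs (assumptions, not actual plan values).
sreadtim = 5.0          # average single block read time (ms)
mreadtim = 10.0         # average multiblock read time (ms)
cpuspeed = 1745         # CPUSPEED system statistic (millions of cycles/sec)
single_block_reads = 1
multiblock_reads = 33
cpu_cycles = 22_700_000 # hypothetical CPU_COST from the plan (cycles)

# I/O time and CPU time, both expressed in units of a single block read.
io_component = (single_block_reads * sreadtim +
                multiblock_reads * mreadtim) / sreadtim
cpu_ms = cpu_cycles / (cpuspeed * 1000)   # cycles -> milliseconds
cpu_component = cpu_ms / sreadtim         # milliseconds -> SREADTIM units
total_cost = round(io_component + cpu_component)

print(round(io_component))   # 67  (I/O component)
print(round(cpu_component))  # 3   (CPU component)
print(total_cost)            # 70
```

Note the final division by SREADTIM: the whole cost, CPU work included, is expressed in units of single block read time.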

So the portion detailing the sum of all required CPU cycles divided by the CPU cycles per second can potentially contribute a significant proportion of the overall costs.

However, these CPU specific costs may vary from database to database, even though the FTS itself and the required CPU cycles might be identical. The variable in all this is the CPU cycles per second system statistic (CPUSPEED) associated with a particular database environment.

Obviously, the faster the CPUs, the quicker they can perform the necessary CPU operations associated with the FTS (or any operation for that matter). Conversely, the slower the CPUs, the longer it will take to complete the necessary CPU related operations. The CPU costing model formula automatically takes all this into consideration.

In the previous example, the CPUSPEED system statistic was 1745.

Let’s now run the identical FTS but this time with a faster and a slower CPU and see how this might adjust the overall related costs when using the CPU costing model.

One can simulate a “faster” CPU by simply adjusting the CPUSPEED system statistic. Let’s make the CPUs appear 10 times faster:

We notice that the overall cost has reduced from 70 down to 67. The cost of 3 that was previously attributed to the CPU related costs has disappeared entirely, leaving just the 67 in relation to the I/O component.

The CPU is now so fast that it can effectively perform all the necessary operations in a negligible amount of time. An even faster CPU will not further improve the costs associated with this FTS as the costs now only include the I/O related components.
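A quick back-of-the-envelope sketch shows why the CPU component vanishes. The SREADTIM of 5ms and the CPU cycle count below are illustrative assumptions chosen to match the example's costs, not actual plan values:

```python
# Hypothetical inputs (assumptions, not actual plan values).
sreadtim = 5.0            # average single block read time (ms)
io_cost = 67              # I/O component of the plan cost
cpu_cycles = 22_700_000   # hypothetical CPU_COST (cycles)

for cpuspeed in (1745, 17450):   # original CPUSPEED vs 10x faster
    cpu_component = cpu_cycles / (cpuspeed * 1000) / sreadtim
    print(cpuspeed, round(io_cost + cpu_component))
# 1745  -> 70  (CPU contributes roughly 3 units)
# 17450 -> 67  (CPU contribution rounds away entirely)
```

With the 10x faster CPU, the CPU time shrinks to a fraction of one single block read and rounds to nothing, leaving only the I/O cost.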

If you didn't follow how I derived the I/O cost of 67 in my previous post, the (%CPU) value of (0) gives us this information directly.

If we go the other way and now make the CPU about 1/10 the speed of the original example:

We now notice the overall cost has jumped considerably, up from 70 to 93.

The costs associated directly with CPU activities have now increased from 3 to 26. The CPU component is in the ballpark of 10 times as expensive/significant once you take rounding into account (the original value of 3 was itself rounded). Remember also that these figures are times expressed in units of the time it takes to perform a single block I/O.

The CPUs are now so slow that it takes a considerable amount of time to complete all the required CPU operations.

Note that the (%CPU) value is now a significant 28% of the overall cost, as derived from the following formula:

round(cpu related cost / total cost x 100) = round(26/93 x 100) = 28.
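The slow CPU arithmetic can be sketched end to end. As before, the SREADTIM of 5ms and the CPU cycle count are illustrative assumptions chosen to reproduce the example's figures:

```python
# Hypothetical inputs (assumptions, not actual plan values).
sreadtim = 5.0           # average single block read time (ms)
io_cost = 67             # I/O component of the plan cost
cpu_cycles = 22_700_000  # hypothetical CPU_COST (cycles)
cpuspeed_slow = 174.5    # roughly 1/10th of the original CPUSPEED of 1745

cpu_component = cpu_cycles / (cpuspeed_slow * 1000) / sreadtim
total_cost = round(io_cost + cpu_component)
pct_cpu = round(cpu_component / total_cost * 100)

print(round(cpu_component))  # 26  (CPU component, up from ~3)
print(total_cost)            # 93  (overall cost, up from 70)
print(pct_cpu)               # 28  (the %CPU figure in the plan)
```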

So having a faster (or possibly slower) CPU when performing a hardware upgrade/change can result in potentially different execution plan costings (and as such different execution plans) when using the CPU CBO costing model.

It’s called the CPU costing model for a reason and as one would indeed hope, the speed of said CPU(s) can directly impact the associated costs and decisions made by the CBO.



I don't know which version you tested these on (maybe it is better you put the version you tested on your posts to avoid confusion, a regular reader's feedback), but I assume it is Windows and 10gR2.

I think 11.2.0.1 changed the way CPU cost affects the overall cost.

After your advice I tested it on 11.2.0.1 (Linux) with workload settings and nothing changed, but when I tested it on 10.2.0.4 (Solaris) I got what you explain. The same test on Windows 11.1.0.6 gives a similar but slightly different result.

In answer to the first part of your question, I've basically "tested" this with various 9i, 10g and 11g versions on AIX, Linux and Windows platforms (although obviously not yet 11g R2 on Windows). I often don't specify which versions I've used to test a specific demo as the list can be quite large and many points I make are somewhat version independent. I list differences when I come across them and hope others may comment as they find them too (or appear to). Certainly things can change with each new release, so it's important to test on one's specific environment; however, you're incorrect in your assumption that this has changed somehow in 11.2.0.1.

It’s all in the numbers …

The obvious clue that things are not as they appear in your 11.2.0.1 example is that the CPU% is 0 for both select statements, suggesting that the CPU component in both runs is trivial and doesn't contribute to the overall costs. Even with a much lower CPUSPEED in the second run, the CPU time component is so trivial as not to make any difference to the final costs.

Why ?

The answer actually has nothing to do with the CPUSPEED as such, but with the values of the MREADTIM and (especially) the SREADTIM stats. The values are huge, massive in fact, with an average multiblock read taking nearly 17 seconds and single block reads nearly 29 seconds. If you look at the Time column in the execution plan, the FTS is estimated to take a very slooooooow 43 seconds.

Of course these times are just not realistic, or else you have some seriously slow disks. However, remember that CBO costs are expressed in units of SREADTIM. Each unit of cost is effectively 29 seconds, which means you must perform close to 29 seconds of CPU work for it to register at all in the overall cost figures. That's a hell of a lot of CPU for such a tiny FTS. The actual CPU time, even with a slow CPUSPEED, when divided by 29 seconds is rounded down to 0 and has no effect on the final costs.

If you look at your other example, you have much faster I/O times (36ms for SREADTIM), which means your CPU overheads, when divided by 36ms, do contribute somewhat to the overall costs in the slow CPU example. If you look at my demo, I have even faster I/Os at 5ms for SREADTIM, which means changes in CPU speeds have a more dramatic effect, because it only requires an additional 5ms of CPU time for the CBO costs to be impacted.
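The rounding effect can be sketched numerically. The cycle count and slow CPUSPEED below are illustrative assumptions, not values from either test; only the three SREADTIM values come from the discussion:

```python
def cpu_cost_units(cpu_cycles, cpuspeed_mhz, sreadtim_ms):
    """CPU component of the cost, expressed in units of SREADTIM."""
    cpu_ms = cpu_cycles / (cpuspeed_mhz * 1000)  # cycles -> milliseconds
    return round(cpu_ms / sreadtim_ms)

cycles = 22_700_000   # hypothetical CPU_COST for the small FTS
slow_cpu = 174.5      # hypothetical slow CPUSPEED (MHz)

# SREADTIM of ~29 seconds: even the slow CPU's time rounds to nothing.
print(cpu_cost_units(cycles, slow_cpu, 29000))  # 0
# SREADTIM of 36ms: the CPU work starts to register.
print(cpu_cost_units(cycles, slow_cpu, 36))     # 4
# SREADTIM of 5ms: the CPU component becomes very visible.
print(cpu_cost_units(cycles, slow_cpu, 5))      # 26
```

The same CPU work costs nothing, a little, or a lot purely depending on how large the unit of measure (SREADTIM) is.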

In order to see this demo “working” in 11.2, you must significantly reduce the I/O related system statistic times. Do this and you’ll see how changing the CPUSPEED can also change the final CBO costs.

This is a perfect example of why understanding the CPU costing formula is so useful in determining why the costs are what they are and why they change (or don’t change) when the various dependencies change as well.

You were once again right :) As you said, once I changed it, it worked fine. Silly me focused on the results, not the numbers.

Thank you for staying with me on this one.

Now it is my turn to understand why my system statistics gathering finds my disks that slow even when they are not (tried many times, but still the same). I will run Orion to find out what is wrong. Interestingly, the database on the Ubuntu physical box says the disks are slow, but a VirtualBox VM on the same physical box says the disks are fast.

I tried to find the real CPU costs by running a virtualized database and progressively limiting the CPU.
I thought the cost variation would be linear, because CPU cost varies linearly. However, if I give my database less than 20% of the available CPU, the actual query execution time (using the same query plan) explodes. I have no idea what happened.