"CPU Costing" specifically estimates the CPU time=20
needed for all operations, so there is a 'CPUCycle
count' for a logical I/O, a CPUCycle count for comparing
locating a row in a block, a CPUCycle count for
skipping over each column in a row to find the right
one and so on.

Loosely, when CPU costing is enabled it's part of
enabling the CPU cost model, normally controlled by
the parameter _optimizer_cost_model, which can be set
to IO, CPU, or CHOOSE.

In fact, the I/O costing changes at the same time,
so that (in particular):

cost of tablescan = blocks to scan / recorded MBRC *
                    mreadtim / sreadtim +
                    CPU cost
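As a rough sketch of that formula, here is a small calculation using entirely made-up statistics (the block count, MBRC, read times, and CPU figure below are hypothetical illustrations, not values from any real system):

```python
# Sketch of the tablescan cost formula quoted above.
# All figures are hypothetical, purely for illustration:
#   blocks_to_scan: blocks below the high-water mark
#   mbrc:           recorded multiblock read count
#   mreadtim:       average multiblock read time (ms)
#   sreadtim:       average single-block read time (ms)
#   cpu_cost:       CPU component, already scaled to
#                   single-block-read units

def tablescan_cost(blocks_to_scan, mbrc, mreadtim, sreadtim, cpu_cost):
    # Each of the (blocks / MBRC) multiblock reads is costed
    # relative to the time for a single-block read.
    io_cost = blocks_to_scan / mbrc * mreadtim / sreadtim
    return io_cost + cpu_cost

print(tablescan_cost(10000, 8, 26.0, 10.0, 120))  # 3250 I/O + 120 CPU = 3370.0
```

With these numbers the scan needs 10000/8 = 1250 multiblock reads, each costed at 26/10 = 2.6 single-block reads, giving an I/O component of 3250 before the CPU component is added.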

I've got a paper coming out on OTN in a couple of
weeks that talks about it a bit.