Indexes And Small Tables Part V (It’s No Game) May 13, 2009

So far in our little example, we’ve looked at how accessing a row of a one-block table via a Full Table Scan (FTS) required 4 consistent gets, while accessing the same table via a Non-Unique index reduced the consistent gets down to 3.

Time to take the next step and further improve the efficiency of accessing this small one-block table.

We’re now going to replace the Non-unique index with a Unique Index instead. We can obviously do this because the values on the indexed ID column are indeed unique.

Now it’s always a good idea of course to document these table business rules (such as a column being unique) inside the database, however it’s somewhat alarming just how many applications simply don’t. I’ve also previously discussed how a PK or Unique constraint can actually be policed via a Non-Unique index, so there are many reasons why a small table might not have an associated Unique index.

Not least of course the incorrect perception that an index is not going to be much use on a small table anyways …
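As a quick aside, policing a constraint via a pre-existing Non-Unique index looks something like this (a sketch only; the table and index names are made up for illustration):

```sql
-- Hypothetical small table with a Non-Unique index on ID
CREATE TABLE small_tab (id NUMBER, name VARCHAR2(30));

CREATE INDEX small_tab_i ON small_tab (id);

-- Oracle will happily use the existing Non-Unique index to police the PK
ALTER TABLE small_tab ADD CONSTRAINT small_tab_pk
  PRIMARY KEY (id) USING INDEX small_tab_i;
```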

So let’s now replace the Non-Unique index with a Unique index instead:
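The demo goes something along these lines (a sketch; the object names are illustrative and the statistic shown is the one discussed below):

```sql
DROP INDEX small_tab_i;

CREATE UNIQUE INDEX small_tab_ui ON small_tab (id);

SET AUTOTRACE TRACEONLY STATISTICS

SELECT * FROM small_tab WHERE id = 42;

--  Statistics
--  ----------
--          2  consistent gets
```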

We notice our first significant change. The number of consistent gets has reduced further down to just 2.

Why ?

Because with a Unique index, there can only be a maximum of 1 row returned. It’s simply not possible to return 2 or more rows.

Therefore, when selecting this one row, Oracle doesn’t have to perform the second fetch operation to confirm there are indeed no more rows to return. The first fetch will either return the one row of interest or none at all when resolving an equality predicate. That’s it, there are no other possibilities. We return the row of interest by simply accessing the index block (1 consistent get) followed by the table block (the second consistent get).

As we have done previously, we can look at the actual consistent gets statistics of interest by running the following query in another session before/after the select statement:
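The monitoring query is along these lines (a sketch; the SID value is hypothetical and the exact statistic names can vary between Oracle versions):

```sql
-- Run from a separate session, before and after the SELECT,
-- substituting the SID of the session running the SELECT
SELECT n.name, s.value
FROM   v$sesstat s, v$statname n
WHERE  s.statistic# = n.statistic#
AND    s.sid = 123
AND    n.name IN ('consistent gets', 'consistent gets - examination');
```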

We notice we have indeed only performed the 2 consistent gets. But we also notice another significant difference, that being both consistent gets are now the “cheaper” consistent gets – examinations.

This means the number of latch gets now required to perform this select statement via the Unique index is just 2, down from 6 for the Non-Unique index and 8 for the FTS.

Generally during a consistent get, Oracle needs to grab the cache buffers chains latch so it can pin the specific block in memory, and then grab the latch again so it can subsequently unpin the block once it’s finished processing it. Each of these accesses to the latch and the subsequent pinning/unpinning of the block requires CPU and is a possible source of contention within the database.

For some operations that only require a very very quick “read and get out of there” type operation and/or on blocks that are unlikely to change at a given point in time, Oracle uses a cheaper consistent get operation which doesn’t actually require the block to be pinned. There’s no point in pinning the block as it’s only going to be read and accessed for a short time (shorter than might otherwise be required when processing a block in memory) and the block is unlikely to change anyways.

So for these operations, Oracle uses a cheaper consistent get called a consistent gets – examination. These consistent gets examinations need only grab the cache buffers chains latch before quickly reading the block and releasing the latch once the read operation is complete. Therefore Oracle only needs to grab and release the cache buffers chains latch the once, without having to pin/unpin the block, which means less CPU and less latch contention overall.

Now this isn’t particularly well documented. Often discussions mention reads of undo blocks as being candidates for consistent gets examinations as these reads are likely to be both relatively quick and a specific undo block is unlikely to change as only one transaction can actually update an undo block at a given time.

Getting back to indexes, reads of index root blocks are another candidate often mentioned, as again a read of an index root block is going to be very quick and an index root block is unlikely to change at any given point in time.

However, what is not well documented at all is the fact that any block accessed during an index Unique Scan is accessed via a consistent get – examination, including the consistent get associated with reading the table block as well. This is because again, any such read operation is going to be relatively quick as the most that ever needs to be read is the one index related entry and the one table row.

The net result is that now accessing a row from a small table via a Unique index requires only 2 latch accesses vs. the initial FTS example which required 8 latch gets as none of the FTS consistent gets are examinations.
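A rough way to observe the latching for yourself (these are instance-wide figures, so this assumes an otherwise quiet test system):

```sql
-- Note cache buffers chains latch gets before and after the SELECT;
-- the delta should reflect the reduced latching of the Unique Scan
SELECT name, gets
FROM   v$latch
WHERE  name = 'cache buffers chains';
```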

Now you might say that these are all very small numbers, that 4 consistent gets isn’t that much, that 8 latches isn’t really that huge a number and reducing 8 latches down to 2 latches isn’t really going to be that noticeable. Yes, it is effectively a 75% reduction but it’s a 75% reduction of not very much.

And if you’re discussing a single read of a single small lookup table you would likely be right.

But what if the small table is accessed frequently by the application, perhaps many 1,000s of times per minute? What if you have many such small tables, often used in small join operations by your OLTP applications? What if you have large numbers of users in a large application with many many such small table accesses? This effective 75% saving can potentially become very significant, both in terms of the reduction in CPU load and also in the reduction of latch contention, which in turn can further reduce CPU loads.

A small improvement multiplied by a large amount can indeed make a difference …

However, I have one more step to go yet in further improving the efficiency of these small table lookups via an index.

Excellent stuff, and yes, sometimes that last block-get does make a difference. And I do have at least two suspicions on how the next saving could be achieved…

Just a few thoughts though:
1) Should “small” in the context of this topic always mean that the index fits in 1 block? I recall the number of hops in an index depends on the blevel. That seems like something to monitor now.
2) As you mentioned the “overloaded” index again: are higher versions of Oracle capable of defining an overloaded-pk index as “unique”? I used this a few times, and I recall I couldn’t define the fattened index as “unique” anymore: Oracle would refuse to use it for key-enforcement of a PK/UK constraint on fewer fields than the total index.

1) No, “small” could possibly mean an index with a blevel of 1 (or whatever I guess). However, if an index needs at least 2 leaf blocks to store the index entries, then the table is likely to need even more to store all the non-index data as well. So the index is still likely to come out ahead if only a small number of rows are required.

2) I haven’t actually mentioned an overloaded index (an index in which we add additional columns in order to potentially avoid a table visit for those listening in). I don’t think I have anyways.

Yes, there are still issues if you want to overload a unique index (say use a unique index on a, b to police a unique constraint on just a).
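For those playing along, the restriction looks something like this (a sketch; the exact error text may vary by Oracle version):

```sql
CREATE TABLE t (a NUMBER, b NUMBER);

CREATE UNIQUE INDEX t_ab_ui ON t (a, b);

-- Attempting to police a constraint on just A with the wider
-- unique index on (A, B) is rejected:
ALTER TABLE t ADD CONSTRAINT t_a_uk UNIQUE (a) USING INDEX t_ab_ui;
-- ORA-14196: Specified index cannot be used to enforce the constraint.
```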

What I’m proposing to cut the overheads by a further 50% is not quite overloading an index as such, it’s a little more left field than that🙂

re-1) Understood.
re-2) I should have defined overloaded when I carelessly used it in my reply. What I meant was an index used for constraint-policing but with extra fields on top of the constrained-fields.

You are correct in that “Overloaded” seems more often used to describe an index that is inflated beyond the where-clause-need, supposedly to ensure that the whole statement is satisfied from the index (a three star index?).
And before anyone pops the remark: overloading will have its downside in DML-effort and in redo-volume. YMMV, but storing less data is _generally_ better.

Splitting hairs here (as opposed to splitting blocks):
Is there some textbook or some general consensus on how to describe an inflated index for (PK/UK) constraint-policing of a smaller key? (fattened, overloaded, inflated… think of policing: CC for Constraint-Cop?).
And to distinguish from an inflated-index that ensures some (often used) query gets an index-only plan.
The term “three star index” seems to only apply to the combination of index and query…? Mind you, I don’t have my “Tapio” quite fresh in mind, nor do I have it on my desk here.

Anyway, after splitting that hair:
I’m still curious what your left field will be (is that to my right as you are an antipodean?)

I’m not sure if there is such a term. I understood what you meant by overloading an index so it’s as good a general term to use as any other. Not sure how to distinguish between your two examples, although I’ve heard the term 3 star index in some indexing theory book or two.

In the end, we add more columns into the index than otherwise because we want the index to do more work, be more flexible or perhaps reduce overall work. I understand what you mean by the various terms you’ve used so they’ll do for me🙂

However, as you mention in your first post, there are issues and problems with trying to overload a unique index that’s meant to police a specific constraint. It’s easy enough to overload a non-unique index, but not a unique index.

I was actually going to finish up this little series but based on your comments, it might be worth extending it with another post just to explain why I can’t get the improvements I’m after with overloading my unique index.

It’s a combination of making it quite clear that running the queries doesn’t in turn change the results (not strictly necessary in this case but when your father is a scientist, some scientific principles are hard to shake🙂 ) and also wanting to clearly distinguish between different types of consistent gets, as the consistent gets examinations here are key to the overall discussion.