In my previous post, I discussed how Oracle from 11g R2 onwards will automatically drop the segment and associated storage from unusable index objects. Mohamed Houri asked in the comments section the excellent question of just how useful this feature will be in real life cases, when typically indexes are not left in an unusable state for long, perhaps only during large data loads when such indexes would ordinarily be rebuilt anyway.

I thought the question was worth a separate blog entry to provide it with a worthy answer.

The first point I would make is that we need to think a little outside the box and consider how such a change in behaviour opens up new possibilities and flexibility in how we index our tables.

For example, a Local Partitioned Index must have the same number of index partitions as the parent table. But what if an index is only useful for the “current” partition, where accessing newish data via an index makes sense, while historical data in “older” partitions is only ever accessed in batch processes via full partition scans? Why have local index partitions for older table partitions when they’re never used? Previously, we had no choice: if one or some of the partitions needed an index, then all the partitions had to be indexed, and even if we made the unnecessary index partitions unusable, we still had to allocate storage for their segments. Now, we can make any unnecessary index partition unusable and no storage at all is allocated to such index partitions.
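A minimal sketch of such a scheme (the table, index and partition names here are all invented for illustration):

```sql
-- Hypothetical range-partitioned table with quarterly partitions.
-- Create the local index UNUSABLE: from 11g R2 no partition
-- initially allocates any segment or storage at all.
CREATE INDEX sales_date_i ON sales (sale_date) LOCAL UNUSABLE;

-- Rebuild only the index partition covering the "current" data;
-- the index partitions on the older, full-scanned table partitions
-- remain unusable and continue to occupy no storage whatsoever.
ALTER INDEX sales_date_i REBUILD PARTITION sales_2010_q4;
```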

Taking this a step further, we now have a really nice method of potentially indexing only those portions of a table that need indexing. Values that gain no benefit from being indexed (perhaps because they’re too numerous to ever be accessed efficiently via an index) no longer need to be indexed at all.

Here’s a classic example. Consider a table with a status flag in which the vast majority of rows have been “processed”. However, we have a few rows, those current rows which are of interest to us, which have not yet been processed (they have a status of another value). We need an index in order to find the few rows that have not yet been processed, but a conventional index must also include all the processed values that are of no interest.
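The original demo isn’t reproduced here, but assuming a table called BOWIE with a PROCESSED flag column (hypothetical names), the setup might look something like:

```sql
CREATE TABLE bowie (id NUMBER, processed VARCHAR2(3), name VARCHAR2(30));

-- The overwhelming majority of rows have already been processed ...
INSERT INTO bowie
  SELECT rownum, 'YES', 'DAVID BOWIE'
  FROM dual CONNECT BY level <= 1000000;

-- ... while only a handful are still awaiting processing
INSERT INTO bowie
  SELECT 1000000 + rownum, 'NO', 'ZIGGY'
  FROM dual CONNECT BY level <= 10;
COMMIT;

-- A conventional index must store an entry for every row,
-- processed or not
CREATE INDEX bowie_processed_i ON bowie (processed);
```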

Note the CBO uses the index, but the query requires a total of 6 consistent gets.
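The query would be along these lines (a hedged reconstruction, assuming the hypothetical PROCESSED flag column described above):

```sql
SELECT * FROM bowie WHERE processed = 'NO';

-- Given accurate statistics (a histogram on the skewed flag column),
-- the CBO knows 'NO' is rare and chooses an index range scan
-- rather than a full table scan.
```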

Not bad, but we could do somewhat better and perform less I/O, significantly reduce storage overheads and significantly reduce index maintenance operations, if only we didn’t store the unnecessary values within the index.

One method could be to create a function-based index based on the decode function and only store non-null values that are of interest. However, this requires the application to likewise use the decode function in order to make use of the index.
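A sketch of the function-based approach (illustrative names again):

```sql
-- DECODE maps the common 'YES' value to NULL; entirely-NULL keys are
-- not stored in a B-tree index, so only unprocessed rows are indexed.
CREATE INDEX bowie_fbi_i ON bowie (DECODE(processed, 'NO', 'NO', NULL));

-- The downside: the application must use the identical expression
-- for the optimizer to consider this index.
SELECT * FROM bowie WHERE DECODE(processed, 'NO', 'NO', NULL) = 'NO';
```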

Another method is to use a partitioned index: with this new Oracle 11g feature of zero sized unusable indexes, we don’t need any storage at all for the unwanted indexed values.

Let’s now re-create the index as a globally partitioned index, with one partition defined to contain all the values of interest and another partition defined to contain the vast number of processed values. Initially, the index is created in an unusable state, so no segments and hence no storage are allocated to any of the partitions:
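Such an index definition might look like this (a sketch with invented names; note that 'NO' sorts before 'YES', so a simple range split separates the two sets of values):

```sql
CREATE INDEX bowie_processed_i ON bowie (processed)
  GLOBAL PARTITION BY RANGE (processed)
    (PARTITION unprocessed_part VALUES LESS THAN ('YES'),
     PARTITION processed_part   VALUES LESS THAN (MAXVALUE))
  UNUSABLE;
```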

Next, we’re only going to rebuild the partition containing just the relatively few rows of interest. The partition containing the values that are not of interest is left in an unusable state and so continues to occupy no storage at all:
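The selective rebuild is a one-liner (same hypothetical names as above):

```sql
-- Only this partition gets a segment; the partition holding the
-- processed values stays unusable and storage-free.
ALTER INDEX bowie_processed_i REBUILD PARTITION unprocessed_part;
```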

Note how the index is now tiny (reduced from 1,877 leaf blocks to just 1) as it is only now just storing the index entries that are of interest. We have just saved ourselves heaps of storage as the other partition remains unusable and uses no storage at all.

We notice that the execution plan now uses just the tiny index partition and as a result our consistent gets are reduced from 6 to just 4. Additionally, we have not had to change our application at all to use the improved index; it is the exact same query as executed previously.

This method can of course be used in Oracle prior to 11g R2 but now with zero sized unusable indexes, we do not have to allocate any storage at all to those indexes that we may wish to remain in an unusable state for extended or indefinite periods of time. So yes, zero sized unusable indexes can be extremely useful in many real life scenarios :)

PS :
“One method could be to create a function-based index based on the decode function and only store non-null values that are of interest. ”
Shouldn’t it instead be
“One method could be to create a function-based index based on the decode function and only store non-processed values that are of interest. ”

Cool!
Updates (setting processed from ‘NO’ to ‘YES’) seem to benefit as well: I got 9 db block gets (non-partitioned) vs. 5 db block gets (partitioned with an unusable partition). We are avoiding the “insert” in the index due to it being unusable – correct?

Richard,
again a very instructive posting! Although I was aware of the extent-less unusable index feature of 11gR2, the combination of it with local indexes was new to me. Cool idea, thanks for sharing!

Richard,
I know this is not AskTom (or AskRichard) and the question is not really closely related to the topic – but is there a logical reason why I get an ORA-01502 (in 11.1.0.7) when inserting into a table with an unusable unique index but no error if the unusable index is nonunique?

I think the “logic” is that if a unique index were to be unusable, then ignoring it could result in the violation of a business rule, with duplicate data suddenly allowed where there should be none.

You would get the same result even with a Non-Unique index, if the index was being used to police a PK or UK constraint.
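The difference in behaviour can be seen with a sketch such as the following (hypothetical index names; error commentary abbreviated):

```sql
-- A nonunique unusable index is simply skipped by DML
-- (with SKIP_UNUSABLE_INDEXES=TRUE, the default in 10g onwards):
ALTER INDEX bowie_nonunique_i UNUSABLE;
INSERT INTO bowie VALUES (42, 'NO', 'MAJOR TOM');   -- succeeds

-- A unique unusable index cannot be skipped, as uniqueness could
-- no longer be guaranteed:
ALTER INDEX bowie_unique_i UNUSABLE;
INSERT INTO bowie VALUES (43, 'NO', 'MAJOR TOM');   -- ORA-01502
```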

Richard,
thank you for the plausible explanation. But I see business rules mainly in the scope of constraints, so I don’t understand why it’s possible to disable a constraint, add violating data, and only get an error when you try to enable the constraint again – but it’s not possible to do the same with a unique index. The index seems to be stricter than the constraint.

For me it would be more convenient if the index allowed the same deferment: I could then deactivate the PK and unique constraints and all indexes before a mass data load, and activate or rebuild them after the import. The ORA-01502 instead forces me to first extract the DDL for the unique indexes, drop them, load the data, and recreate the unique indexes after the bulk load (in addition to the disable/enable for the constraints and the unusable/rebuild for the nonunique indexes).