“I have a question regarding bitmapped indexes versus index compression. In your previous blog titled ‘So What Is A Good Cardinality Estimate For A Bitmap Index Column ? (Song 2)’ you came to the conclusion that ‘500,000 distinct values in a 1 million row table’ would still be a viable scenario for deploying bitmapped indexes over non-compressed b-tree indexes.

Now that b-tree index compression is common, especially with the release of Advanced Index Compression, how does this affect your conclusion? Are there still any rules of thumb which can be used to determine when to deploy bitmapped indexes instead of compressed b-tree indexes, or has index compression made bitmapped indexes largely redundant?”

If you’re not familiar with Bitmap Indexes, it might be worth having a read of my previous posts on the subject.

Now Advanced Index Compression, introduced in 12.1.0.2, has certainly made compressing indexes a lot easier and, in many scenarios, more efficient than was previously possible. Does that indeed mean Bitmap Indexes, which are relatively small and automatically compressed, are now largely redundant ?

The answer is no, Bitmap Indexes are still highly relevant in Data Warehouse environments as they have a number of key advantages in the manner they get compressed over B-Tree Indexes.

Compression of a B-Tree index is performed within a leaf block where Oracle effectively de-duplicates the index entries (or parts thereof). This means that a highly repeated index value might need to be stored repeatedly in each leaf block. Bitmap index entries on the other hand can potentially span the entire table and only need to be split if the overall size of the index entries exceeds 1/2 a block. Therefore, the number of indexed values stored in a Bitmap Index can be far fewer than with a B-Tree.

However, it’s in the area of storing the associated rowids where Bitmap Indexes can have the main advantage. With a B-tree index, even when highly compressed, each and every index entry must have an associated rowid stored in the index. If you have say 1 million index entries, that’s 1 million rowids that need to be stored, regardless of the compression ratio. With a Bitmap Index, an index entry has 2 rowids to specify the range of rows covered by the index entry, but this might be sufficient to cover the entire table. So depending on the number of distinct values being indexed in say a million row table, there may be dramatically fewer than 1 million rowids stored in the Bitmap Index.

To show how Bitmap Indexes are generally much smaller than corresponding compressed B-Tree indexes, here are a few simple examples.

In example 1, I’m going to create a B-Tree Index that is a perfect candidate for compression. This index has very large indexed values that are all duplicates and so will compress very effectively:
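The original demo listing isn’t reproduced here, but a sketch of the kind of setup being described might look like the following (the table name, the specific padded value and the exact population technique are my own illustrative assumptions, not the original demo):

```sql
-- 1M rows, a single large (60-odd byte) repeated value: ideal for compression
CREATE TABLE bowie AS
SELECT rownum AS id,
       'THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS' AS album
FROM dual CONNECT BY level <= 1000000;

-- B-Tree index, then rebuilt compressed: the duplicated value is
-- stored just once in the prefix table of each leaf block
CREATE INDEX bowie_album_i ON bowie(album);
ALTER INDEX bowie_album_i REBUILD COMPRESS;

-- drop it and create the equivalent Bitmap Index for comparison
DROP INDEX bowie_album_i;
CREATE BITMAP INDEX bowie_album_bm_i ON bowie(album);
```

Comparing `leaf_blocks` in `user_indexes` after each step gives the kind of figures quoted in these examples.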

In example 2, I’m going to create an index that is still almost a perfect case for compressing a B-Tree Index, but far less so for a Bitmap Index. I’m going to create enough duplicate entries to just about fill a specific leaf block, so that each leaf block only has 1 or 2 distinct index values. However, as we’ll have many more distinct indexed values overall, this means we’ll need more index entries in the corresponding Bitmap Index.

So we have a relatively large indexed column that has some 1385 distinct values, but each value just about fills out a compressed leaf block. If we look at the compression of the index, we have reduced the index down from 9568 leaf blocks to just 1401 leaf blocks. Again, a very impressive compression ratio.

Unlike the previous example where we had just the one value, we now have some 1385 index entries that need to be created as a minimum for our Bitmap Index. So how does it compare now ?

Although the Bitmap Index is much larger than it was in the previous example, at just 464 leaf blocks it’s still significantly smaller than the corresponding compressed 1401 leaf block B-Tree index.

OK, example 3, we’re going to go into territory where no Bitmap Index should tread (or so many myths would suggest). We’re going to index a column in which each value has just the one duplicate. So for our 1 million row table, the column will have some 500,000 distinct values.
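A hedged sketch of how such data might be fabricated (the table name and the padding string are illustrative assumptions):

```sql
-- 1M rows, 500,000 distinct values, each value appearing exactly twice
CREATE TABLE bowie3 AS
SELECT rownum AS id,
       'A LARGISH REPEATED PADDING STRING FOR THE INDEXED COLUMN ' || mod(rownum, 500000) AS code
FROM dual CONNECT BY level <= 1000000;

CREATE INDEX bowie3_code_i ON bowie3(code);
ALTER INDEX bowie3_code_i REBUILD COMPRESS;

-- and the corresponding Bitmap Index for comparison
DROP INDEX bowie3_code_i;
CREATE BITMAP INDEX bowie3_code_bm_i ON bowie3(code);
```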

With relatively few duplicate column values, the compression of our B-Tree Indexes is not going to be as impressive. However, because the indexed values are still relatively large, any reduction here would likely have some overall impact:

So even in this extreme example, the Bitmap Index at 5740 leaf blocks is still smaller than the corresponding compressed B-Tree Index at 6017 leaf blocks.

In this last example 4, the scenario is similar to the last one, except the index entries themselves are going to be much smaller (a few-byte NUMBER column vs. the 60-odd byte VARCHAR2). Therefore, the rowids will make up a much larger proportion of the overall index entry size. Reducing the storage of index values via compression will be far less effective, considering the prefix table in a compressed index comes with some overhead.

We notice the index has actually increased in size, up to 2065 leaf blocks from 1998. The overheads of the prefix table override the small efficiencies of reducing the duplicated number values.

The Bitmap Index, at just 1817 leaf blocks, is still smaller than the best the B-Tree index has to offer.

So the answer is no, Bitmap Indexes are not redundant now that we have Index Advanced Compression. In Data Warehouse environments, as long as they don’t reference column values that are approaching uniqueness, Bitmap Indexes are likely to be smaller than corresponding compressed B-Tree indexes.

In Part I, I discussed how Zone Maps are new index-like structures, similar to Exadata Storage Indexes, that enable the “pruning” of disk blocks during accesses of the table by storing the min and max values of selected columns for each “zone” of a table, a zone being a range of contiguous (8M) blocks.

I showed how a Zone Map was relatively tiny but very effective in reducing the number of consistent gets for a well clustered column (ALBUM_ID).

In this post, we’re going to continue with the demo and look at what happens when we update data in the table with a Zone Map in place.

So lets update the ALBUM_ID column (which currently has a Zone Map defined) for a few rows. The value of ALBUM_ID was previously 1 for all these rows (the full range of values is currently between 1 and 100) but we’re going to update them to 142:
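The update itself would be along these lines (a sketch; the exact predicate used to pick out the few rows is an assumption):

```sql
-- these rows previously had ALBUM_ID = 1; 142 is outside the existing 1..100 range
UPDATE big_bowie
SET    album_id = 142
WHERE  album_id = 1
AND    rownum <= 100;

COMMIT;
```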

We notice the maximum is still defined as being 100. So the update on the table has not actually updated the contents of the Zone Map. This is a big difference between Zone Maps and conventional indexes: indexes are automatically updated during DML operations, Zone Maps are not (unless the REFRESH ON COMMIT option is specified).

If we look at the state of Zone Map entries that have a minimum of 1 (the previous values of ALBUM_ID before the update):

We see the Zone Map was still used by the CBO. The number of consistent gets has increased (up from 2364 to 3238) as we now have to additionally access all the blocks associated with this stale zone, but it’s still more efficient than reading all the blocks from the entire table.

If we want to remove the stale zone entries, we can refresh the Zone Map or rebuild it (for ON DEMAND refresh):
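The rebuild syntax is along these lines (the Zone Map name is an illustrative assumption):

```sql
-- rebuild the Zone Map so the stale zone min/max entries are recalculated
ALTER MATERIALIZED ZONEMAP big_bowie_album_id_zm REBUILD;
```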

We notice nothing has appreciably changed, the Zone Map is still being used but the number of consistent gets remains the same as before. Why haven’t we returned back to our previous 2364 consistent gets ?

Well, as the range of possible values within the updated zone is now between 1 and 142, the required value of 42 could potentially be found within this zone and so still needs to be accessed just in case. We know that the value of 42 doesn’t exist within this zone, but Oracle has no way of knowing this based on the possible 1 to 142 range.

Hence Zone Maps work best when the data is well clustered and the Min/Max ranges of each zone can be used to limit which zones need to be accessed. If the data was not well clustered and each zone’s min/max range covered most of the possible column values, then Oracle wouldn’t be able to effectively prune many (or any) zones and the Zone Map would be useless.

Sometimes, a few pictures (or in this case index block dumps) are better than a whole bunch of words 🙂

In my previous post, I introduced the new Advanced Index Compression feature, whereby Oracle automatically determines how to best compress an index. I showed a simple example of an indexed column that had sections of index entries that were basically unique (and so don’t benefit from compression) and other sections with index entries that had many duplicates (that do compress well). Advanced Index Compression enables Oracle to automatically just compress those index leaf blocks where compression is beneficial.

If we look at a couple of partial block dumps from this index, first a dump from a leaf block that did have duplicate index entries:

The red section is a portion of the index header that determines the number of rows in the prefix table of the index (kdxlepnro 1). The prefix table basically lists all the distinct column values in the leaf block that are to be compressed. The value 1 denotes there is actually just the 1 distinct column value in this specific leaf block (i.e. all index entries have the same indexed value). This section also denotes how many of the indexed columns are to be compressed (kdxlepnco 1). As this index only has the one column, it also has a value of 1. Note this value can potentially be anything between 0 (no columns compressed) up to the number of columns in the index. The (Adaptive) reference tells us that Index Advanced Compression has been used and that the values here can change from leaf block to leaf block depending on the data characteristics of the index entries within each leaf block (a dump of a basic compressed index will not have the “Adaptive” reference).

The green section is the compression prefix table and details all the unique combinations of index entries to be compressed within the leaf block. As all indexed values are the same in this index (value 42, internally represented as c1 2b hex), the prefix table only has the one row. prc 651 denotes that all 651 index entries in this leaf block have this specific indexed value.

Next follows all the actual index entries, which now only consist of the rowid (the 6 byte col 0 column) as they all reference psno 0, which is the unique row id of the only row within the prefix table (row#0).

So rather than storing the indexed value 651 times, we can just store the index value (42) just the once within the prefix table and simply reference it from within the actual index entries. This is why index compression can save us storage, storing something once within a leaf block rather than multiple times.

If we now look at a partial block dump of another index leaf block within the index, that consists of many differing (basically unique) index entries:

We notice that in the red section, both kdxlepnro 0 and kdxlepnco 0 (Adaptive) have a value of 0, meaning we have no rows and no columns within the prefix table. As such, we have no prefix table at all here and this leaf block has simply not been compressed.

If we look at the actual index entries, they all have an additional column now in blue, that being the actual indexed value, as all the index values in this leaf block are different from each other. Without some form of index entry duplication, there would be no benefit from compression, and Index Advanced Compression has automatically determined this and not bothered to compress this leaf block. An attempt to compress this block would have actually increased the necessary overall storage for these index entries, due to the additional overheads associated with the prefix table (note there are an additional 2 bytes of overhead per row within the prefix table).

I’ll next look at an example of a multi-column index and how Index Advanced Compression handles which columns in the index to compress.

I’ve finally managed to find some free time in the evening to write a new blog piece 🙂

This will have to be the record for the longest time between parts in a series, having written Part IV of this Index Compression series way way back in February 2008 !! Here are the links to the previous articles in the series:

As I’ve previously discussed, compressing an index can be an excellent way to permanently reduce the size of an index in a very cost effective manner. Index entries with many duplicate values (or duplicate leading columns within the index) can be “compressed” by Oracle to reduce both storage overheads and potentially access overheads for large index scans. Oracle basically de-duplicates repeated indexed column values within each individual leaf block by storing each unique occurrence in a prefix section within the block, as I explain in the above links.

But it’s important to compress the right indexes in the right manner. If indexes do not have enough repeated data, it’s quite possible to make certain indexes larger rather than smaller when using compression (as the overheads of having the prefix section in the index block outweighs the benefits of limited reduction of repeated values). So one needs to be very selective on which indexes to compress and take care to compress the correct number of columns within the index. Oracle will only protect you from yourself if you attempt to compress all columns in a unique index, as in this scenario there can be no duplicate values to compress. This is all discussed in Part II and Part III of the series.

So, wouldn’t it be nice if Oracle made it all a lot easier for us and automatically decided which indexes to compress, which columns within the index to compress and which indexes to simply not bother compressing at all ? Additionally, rather than an all or nothing approach in which all index leaf blocks are compressed in the same manner, wouldn’t it be nice if Oracle decided for each and every individual leaf block within the index how to best compress it ? For those index leaf blocks that have no duplicate entries, do nothing; for those with some repeated columns, just compress them; and for those leaf blocks with lots of repeated columns and values, compress all of them as efficiently as possible.

Well, wish no more 🙂

With the recent release of Oracle Database 12.1.0.2, one of the really cool new features that got introduced was Advanced Index Compression. Now a warning from the get-go. The use of Advanced Index Compression requires the Advanced Compression Option, a separately licensed option that is only available with the Enterprise Edition. So only use this feature if you are licensed to do so 🙂

The best way as always to see this new feature in action is via a simple little demo.

To begin, I’ll create a table with a CODE column that is populated with unique values:
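A sketch of how such data could be fabricated (table and column names are illustrative assumptions):

```sql
CREATE TABLE bowie (id NUMBER, code NUMBER, name VARCHAR2(30));

-- populate with unique CODE values...
INSERT INTO bowie
SELECT rownum, rownum, 'DAVID BOWIE'
FROM dual CONNECT BY level <= 1000000;

-- ...then make the last 25% of CODE values heavily repeated
UPDATE bowie SET code = 42 WHERE id > 750000;
COMMIT;

EXEC dbms_stats.gather_table_stats(user, 'BOWIE');
```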

So I’ve fabricated the data such that the values in the CODE column are effectively unique within 75% of the table but the other 25% consists of repeated values.

From an index compression perspective, this index really isn’t a good candidate for normal compression as most of the CODE data is unique and simply doesn’t compress. However, it’s a shame that we can’t easily just compress the 25% of the index that would benefit from compression (without using partitioning or some such).

If we create a normal B-Tree index on the CODE column without compression:
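Something along these lines, first uncompressed and then with standard (whole index) compression (the index name is an assumption):

```sql
CREATE INDEX bowie_code_i ON bowie(code);

-- now attempt standard compression across the entire index
ALTER INDEX bowie_code_i REBUILD COMPRESS;
```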

We notice that the compressed index, rather than decreasing in size, has actually increased in size to 2684 leaf blocks. So the index has grown by some 25% due to the fact the index predominantly contains unique values which don’t compress at all, and the resultant prefix section in the leaf blocks becomes nothing more than additional overhead. The 25% section of the index containing all the repeated values has indeed compressed effectively, but these savings are more than offset by the increase in size associated with the other 75% of the index where the index entries had no duplication.

However, if we use the new advanced index compression capability via the COMPRESS ADVANCED LOW clause:
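A sketch of the rebuild (the clause can equally be specified at CREATE INDEX time):

```sql
ALTER INDEX bowie_code_i REBUILD COMPRESS ADVANCED LOW;
```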

We notice the index has now indeed decreased in size from the original 2157 leaf blocks down to 2054. Oracle has effectively ignored all those leaf blocks where compression wasn’t viable and compressed just the 25% of the index where compression was effective. Obviously, the larger the key values (remembering the rowids associated with the index entries can’t be compressed) and the larger the percentage of repeated data, the larger the overall compression returns.

With Advanced Index Compression, it’s viable to simply set it on for all your B-Tree indexes, and Oracle will automatically compress each individual leaf block of each and every index as effectively as it can for the life of the index.

Zone Maps are new index-like structures that enable the “pruning” of disk blocks during accesses of the table by storing the min and max values of selected columns for each “zone” of a table. A zone is simply a range of contiguous blocks within a table. Zone Maps are similar in concept to Exadata storage indexes but are physical objects that can be explicitly controlled and maintained. Additionally, in a similar manner to Bitmap-Join indexes, they can also be defined on a table with an outer join to other tables, with the Zone Map attributes based on columns from these joined tables. They can also be created as part of the new Attribute Clustering capabilities I introduced in my last post.

Before getting too excited though, Zone Maps unfortunately require the Partitioning Option AND either Exadata or SuperCluster. If you’re still interested, read on 🙂

I’ll start by creating and populating my BIG_BOWIE table. For this specific post, the key thing to note here is that the ALBUM_ID column is very well clustered.
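A sketch of the sort of table being described (row counts and names are assumptions; the key point is that ALBUM_ID increases with the physical row order, so each value sits in one contiguous chunk of the table):

```sql
-- ALBUM_ID takes values 1..100, each stored in one contiguous range of blocks
CREATE TABLE big_bowie AS
SELECT rownum AS id,
       ceil(rownum / 100000) AS album_id,
       mod(rownum, 1000) AS total_sales,
       'DAVID BOWIE' AS artist
FROM dual CONNECT BY level <= 10000000;
```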

We notice execution times are almost identical despite the Full Table Scan (FTS), as Exadata Storage Indexes (SI) have kicked in. With the data being so well clustered, the SI has been able to very effectively “prune” the physical blocks that need to be accessed. This is clearly highlighted by looking at the sessions statistics of a session that performed one of these queries:

Zone Maps are implemented in a similar manner to Materialized Views and are separate table structures that can be refreshed in a similar manner to how Materialized Views are refreshed. This is an important point and a key difference between Zone Maps and conventional indexes: depending on the refresh property of the Zone Map, it may not be updated during DML operations. The default refresh property is REFRESH ON LOAD AND DATA MOVEMENT, which means the Zone Map in part becomes “stale” after DMLs.

Zone Maps basically store the min and max of the corresponding column(s) within each “zone” of the table. If we have a look at the contents of the Zone Map (below listing in part):
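A sketch of creating a basic Zone Map and peeking at its backing table (the min/max column names follow the pattern Oracle generates, but are indicative assumptions rather than verified output):

```sql
CREATE MATERIALIZED ZONEMAP big_bowie_album_id_zm ON big_bowie (album_id);

-- the backing table has one row per zone, with the min/max of ALBUM_ID per zone
SELECT zone_id$, min_1_album_id, max_1_album_id, zone_rows$
FROM   big_bowie_album_id_zm
ORDER  BY zone_id$;
```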

A few things to note here. Firstly, the amount of data stored for the Zone Map is tiny. Each zone corresponds to approx. 8M of storage or in this case roughly 75,500 rows. This is another key difference between Zone Maps and conventional B-Tree indexes. There’s an index entry per referenced row of the table whereas a Zone Map only has one entry per zone of the table. Note also, because the ALBUM_ID data is so well clustered, each zone has a very narrow min/max range of ALBUM_ID values. This means the Zone Map can be extremely effective in “pruning” storage accesses as Oracle can very easily eliminate the majority zones from containing data of interest.

We notice the CBO has used the Zone Map during the FTS (see the TABLE ACCESS STORAGE FULL WITH ZONEMAP execution step and filter predicate information). It’s performing some additional consistent gets over the database index (2364 vs. 1590) but it’s not too bad, nowhere near the 139,000 odd blocks in the table, with performance comparable to before. The extra consistent gets are necessary as any possible zones need to be read in their entirety, whereas a B-Tree index can point to just the specific table blocks that contain data of interest.

So the Zone Map has provided a “good enough” outcome here. Meanwhile, it has required very minimal storage, we can control how the Zone Map is maintained (to be discussed later) and we can guarantee that the Zone Map object will always exist and be present, even after bouncing either the database or Exadata storage servers.

In Part II, I’ll look at what happens when I perform some DML on the base table.

One of the really cool new features introduced in 12.1.0.2 is Attribute Clustering. This new table based attribute allows you to very easily cluster data in close physical proximity based on the content of specific columns.

As I’ve discussed many times, indexes love table data that is physically clustered in a similar manner to the index as it can significantly improve the efficiency of such indexes. A low Clustering Factor (CF) makes an index more viable and is one of the more important considerations in CBO calculations.

But it’s not only database indexes that benefit from well clustered data. Other index structures, such as Exadata Storage Indexes and the new Zone Maps (to be discussed in future articles), all benefit from well clustered data. Additionally, compression is likely to be much more effective with data that is well clustered, and this in turn also impacts the efficiency of In-Memory data (again, to be discussed in future articles).

So having the capability to now easily cluster data in regular heap tables has potentially many benefits.

To illustrate, I’m first going to create a table with data that is not well clustered at all. The CODE column has data that is basically evenly distributed throughout the whole table structure:
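A sketch of such an unclustered table (names are illustrative assumptions):

```sql
-- CODE cycles 1..100 as rows are inserted, so any given CODE value
-- is scattered evenly throughout the table (a terrible Clustering Factor)
CREATE TABLE ziggy (id NUMBER, code NUMBER, name VARCHAR2(30));

INSERT INTO ziggy
SELECT rownum, mod(rownum, 100) + 1, 'ZIGGY STARDUST'
FROM dual CONNECT BY level <= 1000000;
COMMIT;
```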

The CLUSTERING BY LINEAR ORDER clause orders data in the table based on the specified columns, in this case the CODE column. Up to 10 columns can be included using this particular technique (there are other attribute clustering options which I’ll again cover in later articles; yes, I’ll be writing quite a few new articles 🙂). WITHOUT MATERIALIZED ZONEMAP means I don’t want to create these new Zone Map index structures at this stage, which could potentially reduce the amount of table storage needing to be accessed (again, I’ll discuss these at another time).

You must use a direct path insert to make use of attribute clustering (or reorganize the table as we’ll see).

So lets insert the exact same data into this new ZIGGY2 table via a straight direct path sub-select:
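A sketch of the clustered table and the direct path insert (names assumed):

```sql
CREATE TABLE ziggy2 (id NUMBER, code NUMBER, name VARCHAR2(30))
CLUSTERING BY LINEAR ORDER (code)
WITHOUT MATERIALIZED ZONEMAP;

-- direct path insert so the clustering attribute actually takes effect
INSERT /*+ APPEND */ INTO ziggy2 SELECT * FROM ziggy;
COMMIT;
```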

We notice the CBO has now decided to use the index. This is due to the cost of the index based execution plan being just 114, significantly lower than the previous index cost of 7081 or the FTS at a cost of 383. Just as importantly, the resultant number of consistent gets has also reduced to just 121, far fewer than the previous 7081 consistent gets when using the index. So the index is indeed much more efficient to use and the CBO costs are just about spot on. The end result is that performance has improved.

So how to now likewise improve the performance of the first table? Simply add the attribute clustering and reorganize the table:
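Something along these lines (the index name is an assumption):

```sql
ALTER TABLE ziggy
ADD CLUSTERING BY LINEAR ORDER (code)
WITHOUT MATERIALIZED ZONEMAP;

-- reorganize so existing rows get physically re-sorted on CODE
ALTER TABLE ziggy MOVE;

-- the move invalidates existing indexes, so rebuild them
ALTER INDEX ziggy_code_i REBUILD;
```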

The query likewise uses the index and with far less consistent gets and performance is significantly better.

So attribute clustering provides a nice mechanism by which data in a heap table (or importantly within a partition or sub-partition) can be physically clustered in a manner that can be potentially beneficial in various scenarios. Of course, the decision on how to actually cluster the data and on which columns is somewhat crucial 🙂

Oracle Database 12.1.0.2 has finally been released and it has a number of really exciting goodies from an indexing perspective which include:

Database In-Memory Option, which enables specific portions of the database to be held in dual format, in both the existing row-based format and an additional, efficient memory-only columnar format. This in turn enables analytical processing to access the real-time data in the In-Memory Store extremely fast, potentially faster and more effectively than via standard analytical database indexes.

Advanced Index Compression, which allows Oracle to automatically choose the appropriate compression method for each individual leaf block, rather than having to manually select a single compression method across the whole index. This makes compressing an index a breeze and much more effective than previously possible.

Zone Maps, which enable Storage Index-like capabilities to be manually configured and physically implemented inside the database, to eliminate unnecessary accesses of table storage via much smaller objects than conventional database indexes.

Attribute Clustering, a new table attribute which enables much better clustering of table data and we all know how both compression and index structures love table data to be well clustered.

These are all topics I’ll be covering in the coming weeks so stay tuned 🙂

I’ve been lucky enough to present at various conferences, seminars and user group events over the years in some 20 odd countries. But somewhere I’ve never quite managed to present at before is the place of my birth, the UK. Well, this year I’ve decided to end my drought and submitted a number of papers for the UKOUG Tech14 Conference and the parallel OakTable World UK14 Conference, which were all thankfully accepted. So the land of David Bowie, Coronation Street, Bruce Forsyth, televised darts and a touch of drizzle, here I come.

The conference this year is being held in balmy Liverpool on 8-10 December 2014.

We notice they haven’t changed since the index was initially created. So by populating an empty table with an append insert, table statistics are now collected on the fly, but not the statistics on any existing indexes.

If we want to now collect just the index statistics and potentially any useful histograms on the table (as histograms are not collected during the bulk load operation), we can use the GATHER AUTO option with the dbms_stats.gather_table_stats procedure without having to re-gather base table statistics:
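A sketch of the call (the OPTIONS value is the string 'GATHER AUTO'; the table name is an assumption):

```sql
EXEC dbms_stats.gather_table_stats(ownname => user, tabname => 'BOWIE', options => 'GATHER AUTO');
```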

And so my association with Oracle indexes started. It would be an interesting statistic to know how many people have subsequently read the presentation 🙂 It would no doubt result in a few 50-50 block splits !!

I discussed recently an updated MOS note that details the needs vs. the implications of rebuilding indexes.

Following is a neat little trick if you want to very quickly and cheaply estimate the size of an index if it were to be rebuilt, or of a new index before you actually create the thing. I meant to blog about this some time ago but was reminded of it again when I recently came across this entry in Connor McDonald’s excellent blog.

I’m thinking of creating an index on the OBJECT_NAME column, but I’m unsure if I’ll have enough free space in my tablespace. So let’s quickly get an estimate of the index size by simply generating the explain plan of the CREATE INDEX statement:
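The trick looks like this (table and index names are assumptions):

```sql
EXPLAIN PLAN FOR
CREATE INDEX bowie_object_name_i ON bowie(object_name);

-- the Note section of the plan output includes the estimated index size
SELECT * FROM TABLE(dbms_xplan.display);
```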

Notice the little note below the execution plan. Oracle has estimated an index size of approximately 159M bytes and it’s only taken it 0.09 seconds to do so. A trace of the session highlights how Oracle simply uses the table statistics in its determination of the estimated index size.

Well, that’s OK I have sufficient space for an index of that size. Let’s create the physical index and check out its actual size:

The SUBOBJECT_NAME column only has a relatively few (33,669) values that are not null, but the explain plan is still estimating the index to have the full 3.9 million rows (remembering that fully null indexed values are not indexed in a B-Tree index). The estimated index size of 100M is therefore not going to be particularly accurate.

So in this example, the estimated index size is indeed way off. This method doesn’t seem to cater for null index values and assumes the index to be fully populated.

However, if we simply take the known ratio of not null values (in this example, 33669 not null rows / 3939187 total rows = 0.00855) and then apply it to the calculated estimate (100M x 0.00855 = 0.855M), we are now back in accurate ballpark territory again.

Of course, such estimates are based on the accuracy of the table statistics. If we have stale statistics, we’ll have stale index size estimates.

Let’s insert more rows and double the size of the table and associated index:

Both the estimated index entries and index size are now much more accurate.

The number of expected index entries is therefore a useful guide as to the potential accuracy of the size estimate.

So the next time you’re wondering whether an index is significantly larger than it should be or whether you have sufficient space for a new index, this is a useful, simple technique to get a quick estimate.

In my previous post I discussed how, when deleting rows from parent tables, Bitmap Indexes based on the FK constraint can prevent the expensive Full Table Scans (FTS) on the child tables but not the associated exclusive table locks.

With only a visible Bitmap Index in place, does that mean we’ll have table locking issues if we delete a parent row with current transactions in place ? Let’s check it out.

In one session, we have a current transaction on the child table:

SQL> insert into bowie_kid values (1000001, 'LOW', 4);
1 row created.

In another session, we attempt to delete a parent row (with an ID = 1, which doesn’t currently exist within the child table):

SQL> delete bowie_dad where id = 1;
1 row deleted.

We note the DML was successful and didn’t hang. This means the B-Tree index is clearly being used to police this constraint, even though it’s currently invisible.

In a third session, we now attempt to insert a child row using a FK value that’s in the process of being deleted:

SQL> insert into bowie_kid values (1000003, 'HEROES', 1);

As expected, it hangs, as it’s effectively waiting on the row level lock made possible by the index entry in the B-Tree index (invisible indexes are still maintained behind the scenes). If we look at a fresh block dump of both indexes, beginning with the Bitmap Index:

It has been updated and lists a new index entry C1 02 as the minimum value now in the index.

So the B-Tree index can be used to successfully police the FK constraint and prevent the possible table level locking issues associated with deleting parent rows, even though it’s invisible and there is an equivalent visible Bitmap Index in place. Invisible indexes are simply not considered as viable execution paths by the Cost Based Optimizer, but may still be “visible” in a variety of scenarios, such as quietly policing constraints behind the scenes.

Do I recommend creating two such indexes in Oracle Database 12c ? Well, no, as the costs of maintaining both indexes need to be considered. But I certainly do caution against simply making indexes invisible and expecting the database to behave in exactly the same manner as if the index were to be subsequently dropped.

So let’s clean up by rolling back all the above and then dropping the invisible index:

Franck Pachot made a very valid comment in my previous entry on Indexing Foreign Keys (FK) that the use of a Bitmap Index on the FK columns does not avoid the table locks associated with deleting rows from the parent table. Thought I might discuss why this is the case and why only a B-Tree index does the trick.

OK, so we have a small parent table (BOWIE_DAD) and a much larger child table (BOWIE_KID) with all the necessary constraints in place. Note we don’t actually have a child row with a FK DAD_ID = 1. So we can potentially delete this row from the BOWIE_DAD table (where ID = 1).

Let’s begin by creating a B-Tree index on the FK column (DAD_ID) and have a look a partial block dump of the first leaf block in the index:
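A sketch of the index creation and a block dump (the file# and block# are purely illustrative; the real values need to be derived from the index’s actual extents):

```sql
CREATE INDEX bowie_kid_dad_id_i ON bowie_kid(dad_id);

-- dump the first leaf block of the index (illustrative file#/block#)
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 500;
```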

We’ll compare future block dumps with this one but for now just note that the first index entry has a value of (hex) C1 03, which corresponds to the minimum value for DAD_ID = 2 we currently have in this table/index.
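The index and the block dump can be obtained along these lines (a sketch; the index name and the file/block numbers are placeholders, not from the original demo):

```sql
-- Sketch only: index name and file/block numbers are placeholders.
create index bowie_kid_dad_id_i on bowie_kid(dad_id);

-- Tag the resulting trace file, then dump the first leaf block:
alter session set tracefile_identifier = 'fk_index_dump';
alter system dump datafile 4 block 170;
```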

If we insert a new child record in one session (but not yet commit):

SQL> insert into bowie_kid values (1000001, 'LOW', 4);
1 row created.

In a second session, we can delete (but not yet commit) the unwanted parent row without any locking implications thanks to this index on the FK column:

SQL> delete bowie_dad where id = 1;
1 row deleted.

In a third session, we can insert another child record again with no locking implications, providing we don’t attempt to use the parent value the second session is in the process of deleting:

SQL> insert into bowie_kid values (1000002, 'LOW', 3);
1 row created.

But if we do try to insert a new child row with a FK value for which the parent is in the process of being deleted:

SQL> insert into bowie_kid values (1000003, 'HEROES', 1);

The statement hangs and it will do so until the transaction deleting the parent record commits (in which case it will receive an ORA-02291 integrity constraint error) or the transaction rolls back (in which case the insert will succeed).

If we take a fresh dump of the first leaf block (which must contain the associated index entry as it’s the minimum value now in the table):

We notice we do indeed have a new index entry (highlighted above), with the associated locking information in ITL slot 2 for the new row on which the session holds its lock. So the key point here is that the index is indeed updated and Oracle can proceed or not depending on what happens with the transaction on the parent table. The overhead of this new index entry is minimal and locking can be easily policed, restricted to just the index entries with this specific value (hex) C1 02, which corresponds to DAD_ID = 1.
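The mapping between the hex bytes and the indexed number can be checked directly, since the DUMP function exposes Oracle’s internal number representation:

```sql
-- DUMP shows Oracle's internal representation of a value.
-- DAD_ID = 1 is stored as the two bytes c1,2 (i.e. hex C1 02);
-- DAD_ID = 2 as c1,3 (hex C1 03), matching the block dumps above.
select dump(1, 16) as dad_id_1, dump(2, 16) as dad_id_2 from dual;
```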

If we do indeed proceed with the delete on the parent table:

SQL> commit;
Commit complete.

The session attempting to insert the now deleted parent FK value indeed fails:

We notice the first key difference here in that these Bitmap Index entries are potentially HUGE, with just the 2 index entries in this block. The other thing to note is that the combination of Bitmap Indexes and DML can result in locking hell, because if an index entry needs to be modified (resulting in a change in the compressed bitmap string), all rows between the rowid ranges specified within the Bitmap Index entry are effectively locked. So Bitmap Indexes can introduce severe locking issues, regardless of the Parent/Child update issue highlighted above.
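The demos that follow assume the B-Tree index has been swapped for an equivalent Bitmap Index, along these lines (a sketch; the index names are hypothetical):

```sql
-- Sketch only: index names are hypothetical.
drop index bowie_kid_dad_id_i;
create bitmap index bowie_kid_dad_id_bi on bowie_kid(dad_id);
```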

If we insert a child row in one session:

SQL> insert into bowie_kid values (1000001, 'LOW', 4);
1 row created.

And in another session insert another row with the same FK value:

SQL> insert into bowie_kid values (1000002, 'HEROES', 4);

The session hangs until the transaction in the first session completes because of the locking implications introduced with the Bitmap Index.

Therefore, with a Bitmap Index in place, the last of our worries will be locking issues associated with deleting a parent row. After rolling back the above, we attempt the following. In one session, we insert a child record:

SQL> insert into bowie_kid values (1000001, 'LOW', 4);
1 row created.

In a second session, we delete the unwanted parent row:

SQL> delete bowie_dad where id = 1;

and it hangs. The Bitmap Index is not effective in preventing this lock as the B-Tree Index was.

In a third session, we attempt to insert a child row with the soon to be deleted parent key:

SQL> insert into bowie_kid values (1000002, 'HEROES', 1);

and it hangs as well. So the Bitmap Index on the FK does not prevent the locking hell such parent deletes can introduce into our environments.

If we roll all this back and simply have one session delete a parent row:

SQL> delete bowie_dad where id = 1;
1 row deleted.

And in another session insert a child row with the FK about to be deleted, the insert hangs as expected with an exclusive transaction lock:

SQL> insert into bowie_kid values (1000001, 'BOWIE', 1);

However, if we look at a fresh partial block dump of the first Bitmap Index leaf block:

Unlike the B-Tree index which was updated, the Bitmap Index has remained unchanged. No attempt was made by Oracle at this stage to insert the index entry, as such a new Bitmap Index entry would likely generate too much overhead and not appreciably reduce the locking implications of these DML statements with these Bitmap Indexes in place anyway. The actual index update is delayed until such a change is possible, with the rollback of the parent deletion.

However, in a third session, an insert into the child table with a FK that’s not to be deleted is successful:

Bitmap Indexes are simply not designed with concurrency in mind and have efficiencies that make it easier for single sessions to load data in Data Warehouse environments, where they are indeed suitable.

One advantage of the Bitmap index is that at least Oracle doesn’t have to perform a FTS on the (potentially huge) child table when checking for the existence of any associated child FK values. Oracle can quickly use the index to determine whether the parent delete can proceed or not. If we roll everything back and just attempt to delete a parent row:
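The consistent gets figures discussed below can be captured with SQL*Plus autotrace (a sketch; the exact numbers depend on the data):

```sql
-- Capture execution statistics for the parent delete.
set autotrace on statistics
delete bowie_dad where id = 1;
rollback;
set autotrace off
```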

We notice at just 3 consistent gets, the potentially expensive FTS on the child table has been avoided. If we drop the Bitmap Index, the FTS must be performed to ensure no current FK values would violate the constraint when the parent row is deleted:

We notice without the Bitmap Index in place, we are now performing many more (3629) consistent gets due to the necessary FTS.

So using a Bitmap Index to police a FK constraint doesn’t reduce the locking implications associated with deleting parent rows (with Bitmap Indexes, we have locking hell regardless if there’s much DML), but it does at least reduce the overheads of checking the associated child table.