
Resolving HW enqueue contention

Recently, I had a few email exchanges about HW enqueue contention on the oracle-l list and offline. A few interesting observations emerged from the test cases created during that discussion.

HW enqueue

When a session needs access to a resource, it requests a lock on that resource in a specific mode. Internally, lock and resource structures are used to control access to a resource. Enqueues, as the name suggests, use a First In First Out queueing mechanism. You can find more information about the internals of locks in my paper [printed in 2001], Internals of locks.

Segments have a High Water Mark (HWM) indicating that blocks below the HWM have been formatted. New tables, or tables truncated without the reuse storage clause, have the HWM set to the segment header block, meaning there are zero blocks below the HWM. As new rows are inserted or existing rows are updated (increasing row length), more blocks are added to the free lists and the HWM is bumped up to reflect these new blocks. HW enqueues are acquired in exclusive mode before updating the HWM, so HW enqueues essentially act as a serializing mechanism for HWM updates.

In non-ASSM tablespaces, the HWM is bumped up 5 blocks at a time (actually, the undocumented parameter _bump_highwater_mark_count controls this behavior and defaults to 5). Heavy inserts into a table can result in increased HWM activity, leading to HW enqueue contention. This issue is prevalent if the table has LOB columns or if the row length is big.

Measuring HW enqueue contention

We will use a few test cases to see how the underlying extent size and table structure affect HW enqueue contention. But before that, we need a way to measure the total number of gets on the HW enqueue. If the total number of gets on the HW enqueue is reduced, enqueue contention is relieved.

The fixed table x$ksqst stores statistics about the total number of enqueue gets, and the successes and failures of those gets, at the instance level. For example, the following query can be used to see the total number of gets on the HW enqueue. Column ksqstreq indicates the total number of gets and ksqstwat shows the total number of waits.
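The query is not reproduced in the post; a minimal sketch, using the two columns just described (I am assuming the resource type column is named ksqsttyp), would be:

```
select ksqstreq total_gets, ksqstwat total_waits
  from x$ksqst
 where ksqsttyp = 'HW';
```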

From Oracle 10g onward, x$ksqst is externalized as v$enqueue_statistics.
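So in 10g and above, a sketch of the equivalent query against the view (column names as documented for v$enqueue_statistics) would be:

```
select eq_type, total_req#, total_wait#, succ_req#, failed_req#
  from v$enqueue_statistics
 where eq_type = 'HW';
```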

However, these statistics show activity at the instance level. While we can use them to measure HW enqueue activity, we need to make sure that no other session is acquiring HW enqueues. Event 10704 can be used to trace enqueues, and every call to get an enqueue prints a few lines in the trace file. The event is described as:

10704, 00000, "Print out information about what enqueues are being obtained"
// *Cause: When enabled, prints out arguments to calls to ksqcmi and
// ksqlrl and the return values.
// *Action: Level indicates details:
// Level: 1-4: print out basic info for ksqlrl, ksqcmi
// 5-9: also print out stuff in callbacks: ksqlac, ksqlop
// 10+: also print out time for each line
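To dump this information to the trace file at level 15 (10 and above also prints the time for each line, which we rely on below), a statement along these lines can be used in the test session:

```
alter session set events '10704 trace name context forever, level 15';

-- and to turn the trace off again:
alter session set events '10704 trace name context off';
```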

A few lines from the trace file are printed below. ksq is the internal Oracle module name for enqueues, and ksqgtl is the function that gets a lock on a resource. From the lines below, we can see that the HW enqueue is acquired in mode 6, exclusive mode. The timestamp is also printed, since we enabled this event at level 15. If we need to count the number of HW enqueue gets, we count the occurrences of the HW- string in the trace file.
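Counting those occurrences is a simple grep. The sample below writes a couple of made-up lines in the approximate 10704 ksqgtl format purely for illustration (a real trace file lives in the session's trace directory and its resource IDs will differ):

```shell
# Made-up excerpt approximating the 10704 trace format, for illustration only.
cat > /tmp/hw_trace_sample.trc <<'EOF'
ksqgtl *** HW-00000007-01800009 mode=6 flags=0x11 timeout=21474836 ***
ksqrcl: HW-00000007-01800009
ksqgtl *** HW-00000007-0180000e mode=6 flags=0x11 timeout=21474836 ***
EOF

# Each ksqgtl line for an HW resource is one enqueue get.
grep -c 'ksqgtl \*\*\* HW-' /tmp/hw_trace_sample.trc
```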

In my test database, my session is the only session, so we can simply query v$enqueue_statistics for our test cases.

Test case

The following test case will be used to see the impact of extent size and segment space management on HW enqueue gets. If the total number of HW enqueue gets can be reduced, contention can be relieved.

The following script creates a tablespace, creates a table with a LOB column in that tablespace, inserts 9999 rows into that table, and prints the difference in total enqueue gets before and after the insert statement.
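The script itself is not reproduced here; a minimal sketch of what it does follows. All object names, sizes, and paths are placeholders, and the storage clause is the part varied between test cases:

```
create tablespace test_ts datafile '/tmp/test_ts01.dbf' size 100m
  extent management local uniform size 5m
  segment space management manual;

create table test_tab (n1 number, v1 varchar2(2000), c1 clob)
  tablespace test_ts;

-- snapshot HW enqueue gets before the insert
select total_req# from v$enqueue_statistics where eq_type = 'HW';

insert into test_tab
  select level, rpad('x', 2000, 'x'), rpad('y', 3000, 'y')
    from dual connect by level <= 9999;
commit;

-- snapshot again; the difference is the gets caused by the insert
select total_req# from v$enqueue_statistics where eq_type = 'HW';
```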

I tested the above script against tablespaces in versions 10.2.0.4 and 11.1.0.6. Here are the test results:

test  Extent management  Segment space management  10.2.0.4  11.1.0.6
#1    uniform size 5M    manual                        2049        64
#2    uniform size 5M    auto                            50        48
#3    uniform size 40K   manual                        6604      6045
#4    uniform size 40K   auto                          7552      7554
#5    autoallocate       manual                        2231       279
#6    autoallocate       auto                           269       291

There are a few key points here.
1. In test cases #1 and #5, there is a dramatic decrease in enqueue gets between 10g and 11g for tablespaces with segment space management set to manual. It looks like a new 11g feature, faster LOBs, is kicking in. I also tested the above script for a table without a LOB column: there is virtually no difference between 10g and 11g if the table has no LOB column. Row length was kept at around 3000 bytes for this test.

2. In all test cases, automatic segment space management tablespaces performed fewer enqueue gets. In ASSM tablespaces, the HWM is bumped up by a much higher number of blocks. I am not suggesting that you use ASSM, since space wastage *can* increase in ASSM tablespaces. However, if resolving HW contention is the primary issue, then consider ASSM.

3. Allocating an additional extent with the instance keyword seems to help in non-ASSM tablespaces [behavior is different for ASSM tablespaces and needs more research to understand]. In test case #1 above, the HWM for the table was set at file_id 6, block_id 80, with that extent starting at file 6, block_id 9. That is, 70 blocks were below the HWM initially. After allocating an extent with the instance 1 keyword, the HWM was bumped up by 1209 blocks.
[Yong has a test case for this using the dbms_space package: http://yong321.freeshell.org/oranotes/Dbms_Space.txt]
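For reference, the statement used for that point looks like the following (the table name is a placeholder; the instance clause is what pre-formats blocks and bumps the HWM here):

```
alter table test_tab allocate extent (instance 1);
```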

4. High HW enqueue contention is also prevalent during Oracle Applications upgrades. During one of our recent upgrades from 11.5.8 to 11.5.10, there was heavy HW enqueue contention on the sys.source$ table due to the many packages compiled and added during the upgrade. The only option was to increase _bump_highwater_mark_count during the upgrade to relieve the excessive waits on HW enqueue contention. Of course, Oracle Support must be contacted before adding any underscore parameter in a production environment. A side effect of setting this parameter is that, for smaller tables, more blocks might be added below the HWM, leading to full table scans reading more blocks than necessary.
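If Oracle Support does approve the change, the parameter is set like any other underscore parameter; the value 20 below is purely illustrative:

```
alter system set "_bump_highwater_mark_count" = 20 scope = spfile;
-- takes effect after an instance restart
```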

12 Responses to “Resolving HW enqueue contention”

Besides growing segments, there is a much more problematic ASSM reclaim problem with BLOBs and the HW enqueue. This is supposed to be fixed in 10.2.0.4, but only(?) if you set event 44951.

This will batch-reclaim inactive (undo) LOB chunks and reduce the number of enqueue gets needed for that operation. Do you have experience with that as well? This typically happens on LOB segments with concurrent insert/delete/update activity.

Hi Bernd,
I guess you are referring to bug 6376915. No, I have not personally encountered it. It looks very intrusive, though. Posting a summary here for others:

The bug states that, in effect, space reclamation can cause HW enqueues. What we are seeing is that if we truncate the table, we don't see the HW enqueues until we fill up the pre-allocated extents; then the HW enqueues start en masse.

Fix:

This fix causes ASSM LOB space reclamation to batch up reclaims instead of reclaiming only the requested/required number of LOB chunks. To enable this fix, set event 44951 to the maximum number of chunks that you would like to have reclaimed per attempt. The maximum allowed is 1024; anything larger becomes 1024. However, if the requested amount of space is larger than the event's value, the reclaim will be for the requested amount of space.
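Based on that description, enabling the event instance-wide with the maximum batch size of 1024 chunks would look something like the following (standard event syntax; confirm the exact setting with Oracle Support before using it):

```
alter system set events '44951 trace name context forever, level 1024';
```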

I have a similar issue in my Oracle 10.2.0.5 database on Windows. In my top waits in AWR, I see enq: HW - contention waits, but when I trace my session I see I/O slaves (something related to note 1300021.1, which I think I can ignore). But in the trace file I see many HW entries.
Does the above-mentioned bug apply to 10.2.0.5? Should I set this event for 10.2.0.5?

Kostas Hairopoulos said

I noticed in our 10.2.0.3 database with ASSM, on an AIX 5.3 box, that some staging tables (with insert and delete operations) are using a huge number of extents and are continuously growing. It seems that for some reason, even when there are deletions, Oracle cannot reclaim the space and lower the HHWM. Is there any way, or any event we can set at the instance level, to trace the time at which an extent is added to the staging table?
I noticed in some notes from a Steve Adams email that this can be shown by dumping the redo log with layer 14.4, but the problem is that we don't know when it happens, so we cannot check every log.

Hello Khair,
I don't think there is an event to print when an extent is added; at least, none that I am aware of. My recommendation is to write a small piece of code that records the timestamp and the number of extents every hour or so for that tablespace/object. With that output, you can pin down the hour at which an extent is added and dump the log file covering that time period.

The correct approach here is to understand the root cause by dumping the log files. This might also mean that you have to dump the ASSM segments with dbms_space_admin.segment_dump to see the status of the various blocks. With these two trace files, you may be able to understand why extents are allocated even when not needed.

There are many bugs in 10.2.0.3 in ASSM extent management. You may be hitting one of them. So, you might want to consider upgrading to version 10.2.0.4 and seeing if this problem reproduces.

Md. Zafar Ahsan said

It is very pleasing to see your practical approach and demonstration of various issues.
I may be deviating somewhat from the article here, but it is all related. I tried the test and the result was very satisfactory and in accordance with your observations. We have to deal with different types of clients, and in many cases they do not even agree with these statistics, I mean the HW request counts. So I tried to aggravate the contention by using a 40K extent size and inserting data through 3 different sessions in a test environment, hoping it would produce the HW enqueue wait event, but unfortunately it did not show any HW wait event. Could you please provide a test scenario in which we can actually create this wait event, so we can then check whether the wait event goes away after increasing the extent size? I have used manual allocation to add enough space to avoid this wait event (using the alter table ... modify lob ... allocate extent command).
My other observation is:
I am finding lots of "direct path write" waits on the datafile "TS_LMT_AUTO_1M_01.DBF" for each session that is doing the insert.
So could you please provide some help to reduce these "direct path write" waits on the data file?
I have disk_asynch_io = TRUE. As this wait is not on a temp file, I hope it has nothing to do with the PGA, sort area, or workarea_size_policy; even so, in case it is relevant, I have the following settings:

pga_aggregate_target = 200000000

workarea_size_policy = AUTO

Increasing the PGA did not help, as the waits are on a datafile.

Your early suggestions and solutions for both points are very much appreciated.
Regards,
Md. Zafar Ahsan, Oracle DBA

For the HW enqueue test, what version of the software and what type of tablespace management (ASSM or non-ASSM) are you using? You might need to increase concurrency, but keep each transaction small, typically a few blocks. That might trigger HW enqueue contention.

If your SQL inserts are suffering from direct path write wait events, are you using conventional inserts or direct mode inserts? Do you have LOB columns in that table? What software version, please? The better approach is to understand why the writes are slow. What kind of file system? Instead of setting the disk_asynch_io parameter, you should set filesystemio_options to setall; Oracle RDBMS code will then use both asynchronous I/O and direct I/O if the file system supports them.

Md. Zafar Ahsan said

Sorry for missing some important parameters. It is Windows XP 32-bit, the file system is NTFS, and the database version is 10.1.0.2.
The insert operation is done through 3 simultaneous sessions. Yes, it has a CLOB, as in your script. In fact, I am using the same scripts, but even with 3 sessions running the insert in parallel, I am still not able to produce the HW enqueue wait. The statistics are fine, but I need to actually produce the HW enqueue wait and then, after increasing the extent size to 5M, check how the waits are reduced with the larger extent size.
Test cases used:
@enq_hw1 "uniform size 40K segment space management manual"
@enq_hw1 "uniform size 40K segment space management auto"

So, if possible, please let me know how to go about it and what changes to make to reach the above goal. We cannot reduce the extent size further, as at least 5 blocks (8K block size) are required when creating the tablespace. I have provided other details in my previous post; if any other information is required, please let me know.
Regarding filesystemio_options set to setall: I set this, but still did not find any change in the "direct path write" wait event on the TS_LMT_AUTO_1M_01.DBF file.

saurabh mishra said

I have a typical post-migration issue: we had I/O problems visible in production after migrating to 11gR2. Oracle recommended setting the parameter _partition_large_extents to false, and after doing that we are observing lots of HW contention and buffer busy waits in the database; mostly, truncates are getting slowed down.