
A stroll through shared pool heaps

Last week, on a conference call, we were discussing increasing shared_pool_reserved_size to combat a performance issue (bug). I thought it was common knowledge that the shared_pool reserved area is part of the shared_pool, but surprisingly it is not-so-common.

In this blog, we will discuss shared_pool and shared_pool reserved area internals. First, we will cover details specific to release 9i, and then discuss the changes in later releases (10g/11g).

oradebug command

We will use the oradebug command to dump the heap at level 2. Level 2 dumps the shared_pool heap into a trace file.

oradebug setmypid
oradebug dump heapdump 2

The above commands generate a trace file; we will walk through that trace file and review various areas closely.
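For completeness, a minimal SQL*Plus session sketch to generate and locate the dump could look like the following (oradebug requires a SYSDBA connection, and oradebug tracefile_name prints the path of the trace file for your session):

connect / as sysdba
oradebug setmypid
oradebug dump heapdump 2
oradebug tracefile_name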

Parameters

In this test instance, we have a fairly big SGA. The shared_pool (6GB) and shared_pool_reserved_size values are printed below.
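If you want to check the corresponding values in your own instance, a simple show parameter in SQL*Plus will print both (the 6GB figure here is from this test instance; your values will differ):

show parameter shared_pool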

In version 9i, the shared_pool is itself a sub-heap of the SGA, and it is further split into multiple sub-heaps. For this SGA, the shared pool heap is split into 6 sub-heaps; this sub-heap count is derived from shared_pool_size and can be directly controlled by the undocumented parameter _kghdsidx_count. The following picture illustrates these 6 shared pool sub-heaps.
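The sub-heap count can be read from the underlying parameter tables. This is a sketch of the usual x$ksppi/x$ksppcv query for the undocumented parameter (it must run as SYS, and as always with underscore parameters, look but don't change without Oracle Support's blessing):

select a.ksppinm parameter, b.ksppstvl value
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm = '_kghdsidx_count';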

Each of these sub-heaps is split into extents. Notice the field xsz=0x1000000 above; it is the size of an extent. Converting xsz=0x1000000 to decimal gives an extent size of 16MB. There are 68 such extents in this sub-heap.

Each extent is 16MB (I think this is the same as the granule size, but I need more test cases to confirm it). So, there are 68 extents of 16MB in each sub-heap, for a total of 1088MB per sub-heap. Six such sub-heaps add up to 6528MB, which is close to our shared_pool size.

If you haven't noticed already, there is a difference of 96MB between two consecutive extent boundary addresses above. That's because these extents are striped in memory across the sub-heaps: the first 16MB memory chunk is EXTENT 0 of sub-heap 1, the next 16MB chunk is EXTENT 0 of sub-heap 2, and so on, so consecutive extents of the same sub-heap are 6 x 16MB = 96MB apart. (This is not true in 10g+ versions though.)

Let's review the anatomy of each extent. An extent is made up of many memory chunks. Each chunk has a comment: when a chunk allocation is requested from the shared_pool, the calling code passes a comment as a parameter, and that comment is populated in the ksmchcom column (of x$ksmsp).

Notice the memory chunk of 1515376 bytes between two 'reserved stopper' chunks; that is the piece allocated to the shared pool reserved area. Each extent has one such area set aside for the reserved pool. The remaining memory in each extent belongs to the shared pool general area.
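Instead of reading the heapdump, the same chunks can be summarized from x$ksmsp. A quick sketch (be careful: scanning x$ksmsp walks the shared pool under the shared pool latches, so avoid running it frequently on a busy production system); chunks belonging to the reserved area show up with an 'R-' prefix in ksmchcls:

select ksmchcls chunk_class, count(*) chunks, sum(ksmchsiz) bytes
  from x$ksmsp
 group by ksmchcls
 order by ksmchcls;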

Of course, the reserved area is considered only if there is not enough free memory in the general area and the requested chunk size exceeds the _shared_pool_reserved_min_alloc parameter.

There are 1515376 reserved bytes in each extent, plus a 40-byte stopper on each side of that chunk. With 68 extents in each sub-heap and 6 such sub-heaps, that gives a total of (1515456 x 68 x 6) 618,306,048 bytes, which matches the total reported by x$ksmspr.
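To cross-check that figure, the reserved-area chunks can be totalled from x$ksmspr itself. A sketch, assuming x$ksmspr exposes the same ksmchsiz column as x$ksmsp:

select count(*) chunks, sum(ksmchsiz) reserved_bytes
  from x$ksmspr;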

In release 9i, each shared pool sub-heap has one free list for the general area and one free list for the reserved area.

The free list in the general area is organized by chunk size, and a range of sizes is associated with each bucket. For example, bucket 252 below covers chunk sizes ranging from 16408 to 32791 bytes. Of course, this improves performance, as just one bucket needs to be searched to find a free chunk close to the requested size. To satisfy an allocation, bigger chunks can be broken up, recreatable chunks can be flushed, and so on.

In 9i, the general area is searched first for a chunk big enough to satisfy the request. If a chunk cannot be obtained, even by breaking up a bigger chunk, then the reserved pool is checked, provided the request is bigger than the _shared_pool_reserved_min_alloc parameter. Only then are the LRU lists scanned to flush chunks. So, this can result in holding shared pool latches longer.
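A convenient way to watch this behaviour without dumping heaps is v$shared_pool_reserved: REQUEST_MISSES counts requests that could not be satisfied from the reserved free list, and REQUEST_FAILURES counts requests that could not be satisfied at all; these climbing over time is usually what precedes an ORA-4031.

select free_space, requests, request_misses,
       request_failures, last_failure_size
  from v$shared_pool_reserved;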

10g and above

We have shown how the reserved area is part of the shared pool. In 10g and above, a few of the things discussed in this blog entry have changed (improved). We will explore some of those changes:

1. In 10g+, these shared pool sub-heaps are further divided into sub-heaps. Also, not all memory is allocated to these sub-heap areas at startup: sub-heap 0 holds all unallocated memory and releases it to the other sub-heaps as memory pressure in the shared_pool increases. This is a very good idea, since in 9i an ORA-4031 is thrown as soon as free memory is depleted in one sub-heap, even if the other sub-heaps still have ample free memory. That issue is reduced by holding a big chunk of free memory in heap 0 and redistributing it to the other sub-heaps as memory pressure increases. Notice that shared pool sub-heap 1 is divided into further sub-heaps (1,0), (1,1) and (1,2) below. Sub-heap 0 is not visible in the trace file, but x$ksmss gives it away (see the query after this list).

2. In 9i, we saw that all shared pool sub-heaps were of the same size. But in 10g, due to the dynamic redistribution of memory from sub-heap 0 to the other heaps, sub-heap sizes can vary wildly.

3. In 9i, we saw that extents are striped across the sub-heaps. But in 10g that is no longer true, due to the dynamic redistribution of memory.

4. If the sga_target or memory_target parameters are in use, this gets more complicated. Part of the shared_pool itself can be reallocated and tagged with the comment 'KGH: NO ACCESS', which means that part of the shared pool has been given to the buffer cache (ASMM or AMM). In this case, shared pool objects can be flushed, and extents deallocated from the sub-heaps and tagged as KGH: NO ACCESS (see the v$sgastat query after this list). Of course, excessive redistribution of memory between the buffer cache and the shared_pool can have a dramatic effect on performance. This can be mitigated by setting minimum values for the various pools (for example, an explicit shared_pool_size and db_cache_size alongside sga_target act as floors).
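To see the per-sub-heap sizes, including the otherwise invisible sub-heap 0 mentioned in item 1, x$ksmss can be grouped by its sub-heap index column. A sketch, assuming the ksmdsidx column carries the sub-heap number in your version:

select ksmdsidx subheap, sum(ksmsslen) bytes
  from x$ksmss
 group by ksmdsidx
 order by ksmdsidx;

Similarly, memory handed over to the buffer cache under ASMM/AMM shows up in v$sgastat under the 'KGH: NO ACCESS' comment:

select pool, name, bytes
  from v$sgastat
 where name = 'KGH: NO ACCESS';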

In summary, the reserved area is just a part of the shared_pool, and there have been many improvements in this area. Knowledge of shared pool internals is very useful, especially for understanding and resolving performance issues. This can also be read in document format in Investigations: A stroll through shared pool.
