Sometimes you wonder if you could pin or keep
a table or index in memory. Many applications
would benefit if some key tables could be accessed faster
than average. Fast access means that the table has to be cached in
memory, to avoid wait time on disk reads and other waits associated with
buffer pool management.

PL/SQL objects can be pinned in the SGA, but technically there
is no alter table t_name pin.
The trick can be accomplished by clever use of the new buffer pools: default, keep and
recycle (version 8 and up). The default pool is not actually new; it has
been around and in use for a while.
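For comparison, the shared pool pinning of PL/SQL objects mentioned above is done with the dbms_shared_pool package (created by the dbmspool.sql script). A minimal sketch, assuming a hypothetical package SCOTT.MY_PKG:

```sql
-- Install the package once as SYS (script location per Oracle docs):
-- @?/rdbms/admin/dbmspool.sql

-- Pin the (hypothetical) package MY_PKG in the shared pool;
-- 'P' flags the object as a package/procedure/function.
exec dbms_shared_pool.keep('SCOTT.MY_PKG', 'P');

-- Unpin it again when no longer needed.
exec dbms_shared_pool.unkeep('SCOTT.MY_PKG', 'P');
```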

Consider a situation where you want to pin a table in memory, i.e.
keep the table in the buffer pool. That would ensure that the table is
not aged out of the buffer cache, i.e. replaced (memory space reused) by
other objects through the Least Recently Used (LRU) algorithm,
so table access would always be fast.

Prior to version 8, this could not be accomplished. The only alternative was alter table t_name cache. The effect was that blocks read by full table
scans would not go to the least recently used end of
the LRU list, but to the most recently used end,
thereby surviving longer, but seldom forever, in the buffer cache.
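For reference, the pre-8 approach looks like this (t_name is a placeholder table name):

```sql
-- Blocks from full scans of t_name go to the MRU end of the LRU list.
alter table t_name cache;

-- Revert to normal LRU treatment.
alter table t_name nocache;
```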

After version 8, you can control whether an object is loaded into the default,
keep or recycle pool. And that changes things.

Using buffer_pool_keep for pinning a table

First you have to create the keep (and/or recycle)
buffer pool by editing the initSID.ora
file. (NOTE: there is no difference in the behavior of the three buffer pools;
the naming of the pools is merely mnemonic and reflects intended
use.

Just moving a table to the keep pool does not guarantee that
the table is always kept; it can still be aged out of the keep pool by the LRU algorithm.)

Part of the initSID.ora file using
arbitrary numbers (the buffer_pool_* lines are the new ones):

db_block_buffers = 65536
db_block_lru_latches = 4
buffer_pool_keep = 16384                           -- version 8.0.x
buffer_pool_keep = (buffers:16384, lru_latches:1)  -- version 8.1.x
buffer_pool_recycle = .........

After restarting the instance, you can isolate the table(s) in the keep
pool by changing the table storage:

alter table t_name storage (buffer_pool keep);
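Indexes take the same storage clause, and the setting can be verified in the dictionary. A sketch, assuming a table t_name with an index t_name_pk (both names hypothetical):

```sql
-- Assign an index to the keep pool as well.
alter index t_name_pk storage (buffer_pool keep);

-- Check which pool the objects are now assigned to.
select table_name, buffer_pool from user_tables  where table_name = 'T_NAME';
select index_name, buffer_pool from user_indexes where index_name = 'T_NAME_PK';
```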

Now you can pre-page the table and load all table rows into memory by a full
table scan such as select * from t_name, provided
the number of blocks the table occupies is less than the number of blocks in
the keep pool (tested on 8.1.7.1, Win2k). Or you can
let the application populate the buffer pool, accepting slower access the first time
a data block is accessed. The advantage is that data blocks never used won't
be loaded, and won't take up valuable memory space.
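Whether the table fits can be checked before the full scan. A sketch, comparing the segment size from dba_segments with the configured keep pool size:

```sql
-- Blocks the table currently occupies.
select segment_name, blocks
from   dba_segments
where  segment_name = 'T_NAME';

-- Compare with the number of buffers in the keep pool.
select name, buffers
from   v$buffer_pool
where  name = 'KEEP';
```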

You might have several indexes associated with the table. You can choose to
drop the indexes and always do a full table scan, if
the table is small and a full scan is fast and cheap. Even memory scans have CPU cost and can give rise to
latch contention and memory latency.

The alternative is to cache the index(es) and reserve space for the index(es)
in the pool. Remember that the optimizer doesn't know that the table is fully
cached, and will try to use an index lookup if not told otherwise by hints like /*+
full(t_name) */. Hints can be
overruled by the optimizer, so test your statements to be sure.
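A sketch of forcing the full scan and checking the result; note that the alias in the hint must match the one in the from clause:

```sql
-- Force a full scan of the cached table instead of an index lookup.
explain plan for
select /*+ full(t) */ count(*)
from   t_name t;

-- Verify that a full scan was chosen.
select operation, options, object_name from plan_table;
```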

If the table (and index?) is frequently updated, reserve extra space for
block copies until the database writer (DBWR) wakes up and cleans
out the dirty blocks.
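The mix of current, consistent-read and dirty buffers can be sampled from v$bh; a quick sketch:

```sql
-- status: xcur = current, cr = consistent read copy, free = unused;
-- dirty = 'Y' marks block copies awaiting DBWR.
select status, dirty, count(*)
from   v$bh
group  by status, dirty;
```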

Oracle document A76992-01 (EE documentation for
8.1.6/7) has the following guidelines.

Identifying Segments to Put into the keep and recycle Buffer Pools

A good candidate for a segment to put into the recycle buffer
pool is a segment that is at least twice the size of the default buffer
pool and has incurred at least a few percent of the total I/Os in the system.

A good candidate for a segment to put into the keep pool is a
segment that is smaller than 10% of the size of the default
buffer pool and has incurred at least 1% of the total I/Os in the system.

The trouble with these rules is that it can sometimes be difficult to
determine the number of I/Os per segment if a
tablespace has more than one segment. One way to solve this problem is to
sample the I/Os that occur over a period of time by
selecting from v$session_wait to
determine a statistical distribution of I/Os per segment.

select file#, count(block#), count(distinct file# || block#)
from v$bh
group by file#;
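The same idea can be taken a step further by mapping buffers to object names through dba_objects; a sketch (objd in v$bh matches data_object_id in recent versions):

```sql
-- Count buffers per object currently in the cache.
select o.owner, o.object_name, count(*) buffers
from   v$bh b, dba_objects o
where  b.objd = o.data_object_id
group  by o.owner, o.object_name
order  by buffers desc;
```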

For monitoring buffer pool usage, I have created three views in the sys
schema, to help me do a quick check now and then. I created the views
due to the complexity and runtime problems (slow running) of a single select, and
this works nicely.

A view for getting an object's buffer_pool defaults:

create or replace view oci_buffer_pools
as
select table_name object, buffer_pool from dba_tables
where buffer_pool is not null
union
select table_name object, buffer_pool from dba_tab_partitions
where buffer_pool is not null
union
select table_name object, buffer_pool from dba_tab_subpartitions
where buffer_pool is not null
union
select index_name object, buffer_pool from dba_indexes
where buffer_pool is not null
union
select index_name object, buffer_pool from dba_ind_partitions
where buffer_pool is not null
union
select index_name object, buffer_pool from dba_ind_subpartitions
where buffer_pool is not null
/
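A quick check with the view then looks like this:

```sql
-- List everything assigned to the keep pool.
select object, buffer_pool
from   oci_buffer_pools
where  buffer_pool = 'KEEP';
```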

A view to select the objects and types in the buffer cache:

create or replace view oci_block_header as

-- For performance, queries against this view should use cost based optimizer

-- having count(a.object_name) > 100 -- if you don't want small objects

/

As a curiosity, I will mention that I once tried to cache a very large table
(25+ million rows) on a Sun 6500, running Solaris 7 and 8.1.6 EE.

A full table scan would not populate the keep pool. I tried alter
table t_name cache and that
didn't help; I never figured out why. I had to create a small procedure, selecting
rows by primary key lookup in a loop. As all rows were very similar, with no
empty columns, calculations showed that I could advance the primary key number by
400, thereby moving two thirds of a block forward, select that row and repeat, giving a fair chance of hitting every data block at least
once. The table and index were partitioned in 5 hash partitions. By
running 5 procedures in parallel, the table loading finished in approx. 16
minutes.
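A minimal sketch of such a pre-paging loop, assuming a numeric primary key column id on t_name and a stride of 400 (all names hypothetical):

```sql
declare
  v_row  t_name%rowtype;
  v_id   number;
  v_max  number;
begin
  select min(id), max(id) into v_id, v_max from t_name;
  while v_id <= v_max loop
    begin
      -- Primary key lookup touches (and caches) the block holding this row.
      select * into v_row from t_name where id = v_id;
    exception
      when no_data_found then null;  -- gaps in the key are harmless
    end;
    v_id := v_id + 400;  -- advance roughly two thirds of a block
  end loop;
end;
/
```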

Converting the table to an IOT with 5 hash partitions, the same loading
procedure took only 9 minutes and saved ½ GB of
memory space.

Update Feb 2006 – Phillipe Ebersohl

I mainly use Oracle 9iR2 versions. I had a set of tables
defined in a couple of schemas, namely V2 and DALIM_UPDATE, and I use a KEEP
buffer as suggested in the main article.

But I had the unpleasant surprise of seeing some tables and indexes of
my DALIM_UPDATE schema appearing as being in the KEEP buffer. Investigation
showed that the object_name was not associated with the object_owner in the OCI_BUFFER_POOLS view. I corrected it as follows:

CREATE OR REPLACE VIEW OCI_BUFFER_POOLS
AS
select t.owner object_owner, table_name object, buffer_pool
from dba_tables t
where buffer_pool is not null
union
select tabp.table_owner, table_name object, buffer_pool
from dba_tab_partitions tabp
where buffer_pool is not null
union
select tabsp.table_owner, table_name object, buffer_pool
from dba_tab_subpartitions tabsp
where buffer_pool is not null
union
select i.owner, index_name object, buffer_pool
from dba_indexes i
where buffer_pool is not null
union
select ip.index_owner, index_name object, buffer_pool
from dba_ind_partitions ip
where buffer_pool is not null
union
SELECT ips.index_owner, index_name object, buffer_pool
from dba_ind_subpartitions ips
where buffer_pool is not null
;