Since very few Oracle databases can afford the cost of full RAM caching,
many rules have been developed for the segregation and isolation of
cached objects. Some of these rules of thumb will yield clues about the
best way to utilize Solid State Disk (SSD) in a solid-state Oracle
environment:

§Segregate large-table
full-table scans: Tables that experience large-table full-table
scans benefit from the largest supported block size and should be
placed in a tablespace of that block size.
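As a sketch, assuming a server whose largest supported block size is 32K, the segregation might look like this (the tablespace name, datafile path, and table name are illustrative, and db_32k_cache_size must already be set):

```sql
-- Hypothetical example: create a 32K-block tablespace and move a
-- frequently full-scanned table into it.
CREATE TABLESPACE ts_32k
  DATAFILE '/u01/oradata/prod/ts_32k_01.dbf' SIZE 1024M
  BLOCKSIZE 32K;

ALTER TABLE sales_history MOVE TABLESPACE ts_32k;
```

Note that after a table move, any indexes on the table must be rebuilt, since MOVE changes the rowids.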

§Use the RECYCLE Pool:
If the db_cache_size is not set to the largest
supported block size for the server, the
db_recycle_cache_size parameter should not be used.
Instead, define a db_32k_cache_size (or whatever the maximum block size
for the system is) and assign all tables that experience frequent
large-table full-table scans to the largest buffer cache in the
database.
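A minimal init.ora sketch of this arrangement, with purely illustrative sizes:

```
# Illustrative parameter settings: a standard 8K cache plus a
# separate 32K cache for tables subject to large full-table scans.
db_block_size     = 8192
db_cache_size     = 2048M    # standard (8K) block buffer cache
db_32k_cache_size = 4096M    # largest supported block size on this server
```

Tables are then directed to the 32K cache simply by residing in a 32K-block tablespace.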

§Segregate Indexes:
In many cases, Oracle SQL statements retrieve index information via
an index range scan, scanning the b-tree or
bitmap index for ranges of values that match the SQL search criteria.
Hence, it is beneficial to have as much of an index residing in RAM as
possible. One of the first things the Oracle9i DBA should do is
migrate all of the Oracle indexes into a large blocksize tablespace.
Indexes will always favor the largest supported blocksize.
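The index migration can be sketched with a rebuild into a large-block tablespace (the index and tablespace names are hypothetical, and the tablespace's block size must have a matching db_32k_cache_size):

```sql
-- Hypothetical example: rebuild an index into a 32K-block tablespace.
ALTER INDEX cust_last_name_idx REBUILD TABLESPACE ts_32k;
```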

§Segregate random access
reads: For those databases that fetch small rows randomly from
the disk, the Oracle DBA can segregate these tables into 2K
tablespaces. While disk is becoming cheaper every day, no one wants to
waste available RAM by reading more information into RAM than is
actually going to be used by the query. Hence, many Oracle DBAs use
small block sizes for tiny, random-access record retrieval.
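A sketch of the small-block segregation, assuming db_2k_cache_size is set (the names and path are illustrative):

```sql
-- Illustrative: a 2K-block tablespace for small, randomly fetched rows.
CREATE TABLESPACE ts_2k
  DATAFILE '/u01/oradata/prod/ts_2k_01.dbf' SIZE 512M
  BLOCKSIZE 2K;

ALTER TABLE order_lookup MOVE TABLESPACE ts_2k;
```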

§Segregate Large Object
(LOB) column tables: For those Oracle tables that contain raw,
long raw, or in-line LOBs, moving the table rows to a large block size
will have an extremely beneficial effect on disk I/O. Experienced DBAs
will check dba_tables.avg_row_len to make sure that the
blocksize is larger than the average row length. Row chaining will be reduced
while at the same time the entire LOB can be read within a single disk
I/O, thereby avoiding the additional overhead of having Oracle
read multiple blocks.

§Segregate large-table
full-table scan rows: When the RECYCLE pool was first introduced in Oracle8, the idea was that full-table scan
data blocks, which are not likely to be reread by other transactions,
could be quickly flushed through the Oracle SGA, thereby reserving
critical RAM for those data blocks that are likely to be reread by
another transaction. In Oracle9i, the RECYCLE pool can be configured to
use a smaller block size.
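Assigning a table to the RECYCLE pool is a one-line change (the table name is illustrative):

```sql
-- Direct a table whose full-scan blocks should not linger in the
-- buffer cache to the RECYCLE pool.
ALTER TABLE audit_trail STORAGE (BUFFER_POOL RECYCLE);
```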

§Check the average row
length: The block size for a table's tablespace should always be
greater than the average row length for the table (dba_tables.avg_row_len). If the block size is
smaller than the average row length, row chaining occurs and excessive
disk I/O is incurred.
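The check described above can be sketched as a query against the dba_tables view (the schema name and 8K threshold are illustrative):

```sql
-- Find tables whose average row length exceeds an 8K block, making
-- them candidates for relocation to a larger-block tablespace.
SELECT table_name, avg_row_len
  FROM dba_tables
 WHERE owner = 'APP'           -- hypothetical schema
   AND avg_row_len > 8192
 ORDER BY avg_row_len DESC;
```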

§Use large blocks for
data sorting: The TEMP tablespace will benefit from the largest
supported blocksize. This allows disk sorting to happen in large blocks
with a minimum of disk I/O.

Recent TPC-C benchmarks make it clear that very large RAM regions are a
central component in high-performance Oracle databases. The 2004 UNISYS
Oracle Windows benchmark exceeded 300,000 transactions per minute using
a Windows-based 16-CPU server with 115 gigabytes of Oracle data buffer
cache. The actual Oracle parameters used in the benchmark follow, and
they make the benefit of large-scale RAM caching evident:

db_16k_cache_size = 15010M
db_8k_cache_size = 1024M
db_cache_size = 8096M
db_keep_cache_size = 78000M

At this point, it should be clear that RAM resources are an important
factor in maintaining the performance of I/O intensive Oracle systems.