Faulty Quotes 5 – Block Sizes

The topic of deviating from the default 8KB block size in Oracle Database, or of using multiple block sizes in a single database, seems to surface every couple of months in the OTN forums, Oracle-L, the comp.databases.oracle.server Usenet group, and similar discussion forums. I think that I understand why. A lot of information has been written that advocates using multiple block sizes in a single Oracle database, or using the largest possible block size to improve “full scan” or “range scan” performance. Such information is found in blogs, news articles, discussion forums, expert sites, books, and even on Oracle’s download.oracle.com website. So, why do people ask questions in discussion forums about using larger than default block sizes, or multiple block sizes, if there are so many sources of information that say “just do it”? Well, chances are that the Google (or other search engine) search that found all of the sources recommending the non-standard settings also found several pages where people basically stated “stop, think, understand before making any changes.” See the Faulty Quotes 3 blog article.

So, you might be curious what my Google search found. Is it a best practice to implement multiple block sizes in a single database, and is it a best practice to move all of your indexes to a tablespace using the largest supported block size? (See chapter 1 of Expert Oracle Practices for a discussion on the topic of implementing “best practices”.) In the following quotes, I have attempted to quote the bare minimum of each article so that the quote is not taken too far out of context (I am attempting to avoid changing the meaning of what is being quoted).

“Oracle9i introduced a new feature that allowed a single instance of the database to have data structures with multiple block sizes. This feature is useful for databases that need the flexibility of using a small block size for transaction processing applications (OLTP); and a larger block size to support batch processing applications, decision support systems (DSS), or data warehousing. It can also be used to support more efficient access to larger data types like LOBs.”

“In Oracle databases 9i, 10g, and 11g, it is a best practice to use multiple block sizes; this allows you to tailor the block size to a specific type of access. Place tables and indexes in tablespaces sized (block size) according to access…”

“Larger oracle block sizes typically give fewer index levels and hence improved index access times to data. A single I/O will fetch many related rows and subsequent requests for the next rows will already be in the data buffer. This is one of the major benefits of a larger block size. Another benefit is that it will decrease the number of splits.”

dba-oracle.com/art_so_blocksize.htm

“Because the blocksize affects the number of keys within each index block, it follows that the blocksize will have an effect on the structure of the index tree. All else being equal, large 32k blocksizes will have more keys per block, resulting in a flatter index than the same index created in a 2k tablespace.”
“As you can see, the amount of logical reads has been reduced in half simply by using the new 16K tablespace and accompanying 16K data cache. Clearly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and above are worth your investigation and trials in your own database.”

rampant-books.com/t_oracle_blocksize_disk_i_o.htm
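Claims about flatter index trees and halved logical reads are straightforward to verify in a specific database rather than accept on faith: Oracle reports each B-tree index’s branch level (BLEVEL, where height = BLEVEL + 1), so a before/after comparison around a rebuild shows whether a larger block size actually flattens a given index. A minimal sketch, assuming a 16K tablespace (with its required db_16k_cache_size buffer cache) already exists; all names below are hypothetical:

```sql
-- Check the index height before the rebuild; BLEVEL is the number of
-- branch levels, so the index height is BLEVEL + 1.
SELECT index_name, blevel, leaf_blocks
FROM   user_indexes
WHERE  index_name = 'MY_TEST_INDEX';

-- Rebuild the index into the (hypothetical) 16K block size tablespace:
ALTER INDEX my_test_index REBUILD TABLESPACE test_16k_ts;

-- Re-check: did BLEVEL actually decrease for this index?
SELECT index_name, blevel, leaf_blocks
FROM   user_indexes
WHERE  index_name = 'MY_TEST_INDEX';
```

For many real-world indexes the BLEVEL does not change at all when the block size is doubled, which is one reason to test rather than assume.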

“B-tree indexes with frequent index range scans perform best in the largest supported block size. This facilitates retrieval of as many index nodes as possible with a single I/O, especially for SQL during index range scans. Some indexes do not perform range scans, so the DBA should make sure to identify the right indexes”

praetoriate.com/t_oracle_tuning_data_buffer_pools.htm

“This is an important concept for Oracle indexes because indexes perform better when stored in large block size tablespaces. The indexes perform better because the b-trees may have a lower height and mode entries per index node, resulting in less overall disk overhead with sequential index node access.”

remote-dba.cc/s56.htm

“Indexes want large block sizes – B-tree indexes perform best in the largest supported block size and some experts recommend that all indexes should reside in 32K block size tablespaces. This facilitates retrieval of as many index nodes as possible with a single I/O, especially for SQL performing index range scans.”
“Many DBAs make their default db_block_size 32k and place indexes, the TEMP tablespace and tables with large-table full-table scans in it, using other block sizes for objects that require a smaller fetch size.”

remote-dba.net/unix_linux/multiple_block_sizes.htm

“Large blocks – Indexes, row-ordered tables, single-table clusters, and table with frequent full-table scans should reside in tablespaces with large block sizes.”

oracle-training.cc/s54.htm

“Larger block sizes are suitable for indexes, row-ordered tables, single-table clusters, and tables with frequent full-table scans. In this way, a single I/O will retrieve many related rows, and future requests for related rows will already be available in the data buffer.”

oracle-training.cc/oracle_tips_block_sizes.htm

“Indexes want large block sizes – Indexes will always favor the largest supported blocksize. You want to be able to retrieve as many index nodes as possible in a single I/O, especially for SQL that performs index range scans. Hence, all indexes should reside in tablespaces with a 32k block size.”

oracle-training.cc/t_oracle_multiple_buffers.htm

“One of the first things the Oracle9i DBA should do is to migrate all of their Oracle indexes into a large blocksize tablespace. Indexes will always favor the largest supported blocksize.”

“It’s pretty well established that RAC performs less pinging with 2k blocksizes”
“Large blocks gives more data transfer per I/O call.”
“Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.”

dbapool.com/articles/040902.html

“Index Branches: Larger oracle block sizes typically give fewer index levels and hence improved index access times to data .This is one of the major benefits of a larger block size.”

“In Oracle9i and Oracle10g it is a good practice to use multiple block sizes, this allows you to tailor the block size to a specific type of access. Place tables and indexes in tablespaces according to access. For single block read type OLTP access, use 8k block sizes. For full table scan access such as with data warehouses use 16-32K block sizes. For index lookups use 8-16K block sizes. For indexes that are scanned or bitmap indexes, use 16-32K block sizes.”

“Because the blocksize affects the number of keys within each index block, it follows that the blocksize will have an effect on the structure of the index tree. All else being equal, large 32k blocksizes will have more keys per block, resulting in a flatter index than the same index created in a 2k tablespace… You can use the large (16-32K) blocksize data caches to contain data from indexes or tables that are the object of repeated large scans.”

dbazine.com/oracle/or-articles/burleson2

“Hence, one of the first things the Oracle9i database administrator will do is to create a 32K tablespace, a corresponding 32K data buffer, and then migrate all of the indexes in their system from their existing blocks into the 32K tablespace… Indexes will always favor the largest supported blocksize.”

statspackanalyzer.com/sample.asp

“You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.”

“If you have large indexes in your database, you will need a large block size for their tablespaces.”
“Oracle provides separate pools for the various block sizes, and this leads to better use of Oracle memory.”

“… and using multiple block caches act as an intelligent cache differentiator that automatically leverage cache performance optimization. I have successfully tested, like many other DBAs and developers, that beyond any possible SGA tuning that using multiple-block-size database can certainly improve performance through this performance approach.”

“Simply by using the new 16K tablespace and accompanying 16K data cache, the amount of logical reads has been reduced by half. Most assuredly, the benefits of properly using the new data caches and multi-block tablespace feature of Oracle9i and later, are worth examination and trials in the DBA’s own database.”

“Objects that experience full scans and indexes with frequent range scans might benefit from being placed in a larger block size, with db_file_multiblock_read_count set to the block size for that tablespace.”

“At first, beginners denounced multiple block sizes because they were invented to support transportable tablespaces. Fortunately, Oracle has codified the benefits of multiple blocksizes, and the Oracle 11g Performance Tuning Guide notes that multiple blocksizes are indeed beneficial in large databases to eliminate superfluous I/O and isolate critical objects into a separate data buffer cache:

‘With segments that have atypical access patterns, store blocks from those segments in two different buffer pools: the KEEP pool and the RECYCLE pool…'”

————————-

Before deciding whether or not to implement a large block size (or a very small block size), or to add a tablespace with a larger or smaller than default block size, I suggest reviewing the following:

“But in most cases the administration overhead is much bigger than the performance benefit. You can easily end up with over- or undersized db_XXk_cache_size and the database can’t do anything about it. Then the performance will be better in some parts of the day and worse later on.”

“I would not recommend going into a system planning on using multiple blocksizes – they were invented for one thing, to transport data from a transactional system to a warehouse (where you might be going from 8k OLTP to 16/32k warehouse) and to be used only to extract/transform/load the OLTP data.”

“My block size is 4096 and my db_32k_cache_size=67108864
I want to create a tablespace with 32K and rebuild all indexes into this tablespace. These are
frequently used indexes. Do you think is there any benefit for using 32K block size in this scenerio”

“before you do something, you should have an identified goal in mind
so, tell us all – WHY would you do this? Don’t say “cause I read on some website it makes things super fast” (it doesn’t), tell us WHY you think YOU would derive benefit from this?
I do not think there is in general benefits to be gained from using multiple block size tablespaces – short of TRANSPORTING data from one block size to another for an ‘extract transform and load’ process.”

“BUT – do not use multiple block sizes for anything other than transporting data from database A to database B where the block size in A is different from B. No silver bullets with this ‘trick’, nothing you want to do in real life. The cases whereby multiple blocksizes are useful are typically limited to benchmarks, old wives tales, and very exceptionally rare conditions.”

“Use 8k. This is right in the middle, and won’t put you in an edge condition. Call it the Goldilocks block, not to small, not to big, just right.
For both OLTP and DSS, 8k is an optimal size. I use 8k, always.
There is minimal gains to be had in messing with block sizes. Having good db design and good execution plans is a better place to worry about performance.”

Summary of an OTN forums thread – what was likely the longest thread ever on the topic of block sizes (and very likely multiple block sizes in the same database) from June 2008. The message thread was too large to be carried over to the new OTN forum software due to performance reasons. Fortunately, Jonathan Lewis obtained a copy of the thread content in a PDF file: http://jonathanlewis.wordpress.com/2008/07/19/block-sizes/

I posted a number of test cases in the above-mentioned OTN thread in which I simulated some of the activity in a data warehouse, and activity in an OLTP type database. To a large extent, performance was very close to identical in the databases with the default 8KB block size and with the 16KB block size, with just a few exceptions. As I recall, the 16KB database encountered performance problems when a column with a NULL value was updated, and when a rollback was performed.

Below you will find the scripts to reproduce my test cases that appeared in the above-mentioned OTN thread, and the performance results that I obtained. The OLTP test required roughly 10-12 hours to complete: Block Size Comparison (save with a .XLS extension and open with Microsoft Excel).
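For anyone who wants to run similar comparisons, here is a minimal sketch of preparing a non-default block size tablespace for testing (sizes, paths, and names are examples only; the matching buffer cache must be configured before the tablespace can be created):

```sql
-- A tablespace with a non-default block size requires a configured
-- buffer cache of that block size first:
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Then the tablespace itself may be created with the BLOCKSIZE clause
-- (datafile path and tablespace name are hypothetical):
CREATE TABLESPACE test_16k_ts
  DATAFILE '/u01/oradata/test16k01.dbf' SIZE 1G
  BLOCKSIZE 16K
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
```

Note that the db_16k_cache_size memory comes out of the same total SGA budget as the default cache, which is part of the administration overhead mentioned in the quotes above.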

I guess the message is that you should verify that the swimming pool contains water before diving in head first.

Odd, I have NEVER encountered an error with larger than 8k block sizes before, and I worked in a very large Oracle shop. Care to back that up? I have run DBs with multiple block sizes before, and I agree that it is a pain to manage and not worth the small gains, but I do believe in sizing the blocks based on the type of DB you have.

I have not checked Metalink/MOS lately, but I believe that Noons might be referring to Bug No 4260477 (reported in 9.2.0.5, fixed in 10.2), which indicates that there are problems with inserting/deleting (and possibly updating) a large number of rows in a single block within a single transaction with a 32KB block size. There is also Bug No 6918210, which I believe was discovered by Jonathan Lewis and later confirmed by Greg Rahn, as well as by a test case that I posted in the very large OTN thread from June 2008.

I am assuming that with jumbo TCP/IP frames enabled on the switch, an 8KB block will fit nicely inside a jumbo frame (see http://en.wikipedia.org/wiki/Jumbo_frame ), while larger block sizes might not fit so nicely – this might be a factor in a RAC configuration.
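The arithmetic behind that assumption can be sketched with a quick query (the 20 byte IP and 20 byte TCP header sizes are typical values I am assuming here, not measured):

```sql
-- Rough arithmetic only: a 9000 byte jumbo frame minus typical 20 byte
-- IP and 20 byte TCP headers leaves roughly 8960 bytes of payload -
-- room for one 8192 byte (8KB) block, while a 16384 byte (16KB) block
-- would have to span two frames.
SELECT 9000 - 20 - 20 AS payload_bytes,
       CASE WHEN 8192  <= 9000 - 40 THEN 'fits' ELSE 'splits' END AS block_8k,
       CASE WHEN 16384 <= 9000 - 40 THEN 'fits' ELSE 'splits' END AS block_16k
FROM   dual;
```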

You might take a look at Greg Rahn’s site ( http://structureddata.org/2008/09/08/understanding-performance/ ). He was involved with the development of Exadata at Oracle Corp., and I believe that he mentioned in one of the referenced OTN threads that Exadata and most/all internal databases at Oracle Corp. use 8KB block sizes. If I recall correctly, he also stated in one of his posts that very large block sizes typically allow for higher compression ratios, so that is a positive for using the largest available block size.

I am not sure if Noons frequents this blog often, but I understand that he has experience working on large Oracle databases – at least that is what I gather from his Usenet postings.

Not all problems with larger blocks (greater than the default 8KB) will generate an error message – in some cases poor performance might result, and other potential problems might be even more difficult to detect. A quick search of MOS found these problems:
* Bug 2680659 – CTAS with COMPRESS fails with OERI[kd9alloc] with 32k block size [ID 2680659.8]; fixed in 9.2.0.5 and 10.1.0.2

* Bug 5374820 – Client side dump on import of table with more than one LOB column into 32k block size [ID 5374820.8]; fixed in 10.2.0.4 and 11.1.0.6

* Bug 3078613 – Buffer cache corruption can occur when using different DB block sizes in the same database [ID 3078613.8]; fixed in 9.2.0.5 and 10.1.0.2

* RMAN Backup of Datafiles Copy Skip Datafiles of Different Block Size [ID 844497.1] – RMAN backup of datafile copies skips datafiles having a different block size. It won’t give any error or warning message, but it won’t take a backup of the datafiles having a different block size. Affects 10.2.0.4, but fixed in 11.1.0.?

* Cannot Open All The RAC Instances When Using 32k Block Size. [ID 271200.1]; fixed in 9.2.0.4 with an additional patch

* Bug 4940513 – OERI[kturrur11] can occur with multi block undo [ID 4940513.8] – possible if the block size of the data tablespaces is larger than the block size of the undo tablespace; fixed in 11.1.0.6 and 10.2.0.3

* Bug 5925224 – RDBMS: Compression does not correctly handle rows that are larger than the block size, leading to ORA-600 [kdblcflush:#row] errors; fixed in 10.2.0.3.0 Patch 21 (10.2.0.3.21P)

* Bug 4553025 – Large inserts into hash clustered tables can spin [ID 4553025.8] – an infinite loop could result when inserting into or updating a hash cluster table with a 16K or 32K block size, when the ITL entries on the block reach 255; fixed in 11.1.0.?

* ORA-17507 with ODM and large db_block_size [ID 414622.1] – when the Veritas ODM library is in place and the database block size is 16k or 32k, a fast full index scan can result in ORA-17507; fixed in 10.2.0.4 and 11.1.0.?

* FLASHBACK VERSIONS QUERY GIVES ORA-600 [15201] ERROR [ID 268611.1] – a flashback query on a table fails with the error ORA-600 [15201] when the table resides in a tablespace with a DB_BLOCK_SIZE larger than 8K; fixed in 10.2.0.?

* ORA-19504 ORA-27054 DURING THE BACKUP [ID 833094.1] – the NFS server is on RH3 (kernel 3), which supports only 8K rsize and wsize, while the block size of the database is 16K; affects 10.2.0.4

* ORA-8007 ‘Further changes to this block by this transaction not allowed’ Error [ID 465226.1] – the error may be signaled if using a large DB_BLOCK_SIZE value for the tablespace, particularly 32K; affects 10.2.0.4, fixed in 11.1.0.?
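To check whether a particular database is exposed to bugs of this type, one can list any tablespaces that use a non-default block size. A simple sketch using the standard data dictionary views (DBA privileges, or at least SELECT access on these views, are required):

```sql
-- List tablespaces whose block size differs from the database default
-- block size as reported by the db_block_size parameter:
SELECT t.tablespace_name,
       t.block_size
FROM   dba_tablespaces t
WHERE  t.block_size <> (SELECT TO_NUMBER(value)
                        FROM   v$parameter
                        WHERE  name = 'db_block_size');
```

If the query returns no rows, the database uses a single block size throughout and the multiple-block-size bugs above do not apply.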

Mike, if you found something that works (larger than 8KB block sizes), and you progressed beyond the “stop, think, understand before making any changes” stage, by all means use what works for you. The message of this article is not that larger block sizes should never be used, but that it is important to understand why larger block sizes should, or should not, be used.

Noons, good point regarding the bugs. A couple of those bugs are referenced in the linked pages. For instance, in the very long OTN thread I mentioned Metalink Bug No 4260477 (reported in 9.2.0.5, fixed in 10.2) that indicates that there are problems with inserting/deleting (and possibly updating) a large number of rows in a single block within a single transaction with 32KB block size. Greg Rahn mentioned that the problem found by Jonathan Lewis in the OTN thread is likely Bug No 6918210.

Are there other bugs that you have heard about regarding block size?

You will probably notice that none of the supporting articles considered the effects of multiple sessions concurrently modifying data in the database – I believe that this potential problem is mentioned in Richard Foote’s blog articles.

I try to provide at least one blog article per day. Unfortunately, some (maybe most) of the articles take a long time to put together, so it is difficult to publish articles at that pace. The blog stats currently show that there are 126 articles, with at least 1 more blog article on the way – most of the older articles may be found on the Archived Articles page https://hoopercharles.wordpress.com/archived-blog-articles/


I saw a large OLTP system suffering due to a silly 16K block size. 90% of the CPU effort (32 CPUs) went just to ensuring read consistency, at 99% CPU utilization. The whole service became unavailable even though the server was running and the database was fully functioning.

I have had good experience with different block sizes (though you cannot create the entire database with just a 2K block size): for heavy-duty work on a table used by a suboptimally written application, especially in RAC, move the hot spot out into a 2K block size tablespace – the contention drops dramatically and throughput increases measurably.
(Well, you may also play with PCTFREE 99.)
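A hedged sketch of the approach the commenter describes (object and tablespace names are hypothetical, and the 2K tablespace with its db_2k_cache_size buffer cache must already exist):

```sql
-- Relocate a contended ("hot") table into a small block size tablespace
-- so that fewer rows share each block:
ALTER TABLE hot_table MOVE TABLESPACE small_2k_ts;

-- Alternative mentioned in the comment: keep the tablespace, but leave
-- most of each block empty by setting a very high PCTFREE:
-- ALTER TABLE hot_table MOVE PCTFREE 99;

-- A table move leaves its indexes in an UNUSABLE state, so each index
-- must be rebuilt afterward:
ALTER INDEX hot_table_pk REBUILD;
```

Either variant trades wasted space (and buffer cache) for reduced block-level contention, so it is worth measuring the contention before and after the change.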

Hints for Posting Code Sections in Comments

********************
When the spacing of text in a comment section is important for readability (execution plans, PL/SQL blocks, SQL, SQL*Plus output, etc.) please use a <pre> tag before the code section and a </pre> tag after the code section:

<pre>

SQL> SELECT
  2    SYSDATE TODAY
  3  FROM
  4    DUAL;

TODAY
---------
01-MAR-12

</pre>
********************
When posting test case samples, it is much easier for people to reproduce the test case when the SQL*Plus line prefixes are not included - if possible, please remove those line prefixes. This:

SELECT
  SYSDATE TODAY
FROM
  DUAL;

Is easier to execute in a test case script than this:

SQL> SELECT
  2    SYSDATE TODAY
  3  FROM
  4    DUAL;

********************
Greater than and less than signs in code sections are often interpreted as HTML formatting commands. Please replace these characters in the code sections with the HTML equivalents for these characters: &lt; for the less than sign (<) and &gt; for the greater than sign (>).