Oracle RDBMS: The Dump Block SQL Command

Before continuing with this article, you should read the Oracle Database Concepts 11g Release 2 (11.2) manual, the book every DBA should start with.

Our examination of data blocks starts in Section 12-6 of the Concepts Manual.

Data block format: "Every Oracle data block has a format or internal structure that enables the database to track the data and free space in the block. This format is similar whether the data block contains table, index, or table cluster data." A block is the smallest unit of logical storage that the Relational Database Management System (RDBMS) can manipulate. Block size is determined by the database parameter DB_BLOCK_SIZE. The logical storage units of data blocks, extents, segments, and tablespaces (from smallest to largest) map to the data files, which are stored in operating system blocks.
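As a quick illustration of how block size relates to physical storage, the following Python sketch computes how many Oracle blocks fit in a data file. It assumes the common 8 KB DB_BLOCK_SIZE; your database may use a different value, which you can check with show parameter db_block_size.

```python
# Toy illustration: how many Oracle data blocks fit in a data file.
# Assumes DB_BLOCK_SIZE = 8192 bytes (8 KB), a common default; this
# value is an assumption for the sketch, not read from a real database.

DB_BLOCK_SIZE = 8192  # bytes

def blocks_in_datafile(datafile_bytes, block_size=DB_BLOCK_SIZE):
    """Return the number of whole Oracle blocks a data file can hold."""
    return datafile_bytes // block_size

# A 100 MB data file holds 12,800 blocks of 8 KB each.
print(blocks_in_datafile(100 * 1024 * 1024))  # 12800
```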

An undo block stores the undo information: in effect, the SQL needed to reverse the original transaction statement. This undo provides read consistency for all read-only queries until you commit or roll back that transaction.

Read consistency within a changed block (transaction) is maintained for any of the following commands: insert, update, delete, merge, select for update, or lock table. The changes are tracked until a command is issued to either commit or rollback that particular transaction. This consistency keeps each user's view of the data the same, whether they are just running queries or actually changing other data.

Each transaction is identified by a point in time known as the System Change Number (SCN), and transaction flags show the state of the transaction. Until a change is committed, the only end user who can see the changed data is the one making the change, no matter which application is used.

The SCN advances for every change to the database as a sequential counter, which identifies a certain point in time. The SCN tracks more than just single transactions by end users. These transactions will be in Data Definition Language (DDL) or Data Manipulation Language (DML). DDL statements are associated with creating objects (create table), or what is also called metadata. DML statements are the other commands mentioned earlier (insert, update, delete, among others) that manipulate the data in some way. The RDBMS also advances the SCN when a person logs in, reconnects, or alters their session, and when Oracle background processes (which constantly check the state of activity inside the database) do their work.

It is undo that gives everyone a point-in-time consistent view of the data, which is called Read Consistency. Triggers and integrity constraints, controls created from business rules within the application, validate the data entered by the user. Database locks control access to data during changes, granting exclusive access to the end user changing it.

During a delete or update statement:

The data block is read, loading it into a memory structure called the buffer cache

The redo log buffer will contain the corresponding delete or update statement

An entry in the undo segment header block is created for this transaction

The deleted or updated row is copied into an undo block

For a delete, the row is removed from the data block and that block is marked as dirty

Locks keep exclusive use of that block until a commit or rollback occurs
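The steps above can be sketched as a toy model in Python. This is an illustration only, not Oracle's actual implementation; all structure and function names here are hypothetical.

```python
# Toy model of the delete path described above; illustrative only,
# not Oracle's real implementation. All names are hypothetical.

buffer_cache = {}     # block address -> {'rows': [...], 'dirty': bool}
redo_log_buffer = []  # change records
undo_blocks = []      # copies of pre-change data
locks = set()         # block addresses locked until commit/rollback

def delete_row(block_addr, row, disk_block):
    # 1. Read the data block into the buffer cache
    block = buffer_cache.setdefault(
        block_addr, {'rows': list(disk_block), 'dirty': False})
    # 2. Record the corresponding statement in the redo log buffer
    redo_log_buffer.append(('DELETE', block_addr, row))
    # 3/4. Create an undo entry and copy the deleted row into an undo block
    undo_blocks.append(('undo for DELETE', block_addr, row))
    # 5. Remove the row from the data block and mark the block dirty
    block['rows'].remove(row)
    block['dirty'] = True
    # 6. Hold a lock on the block until commit or rollback
    locks.add(block_addr)

delete_row((8, 135), 'ATLANTA', ['ATLANTA'])
print(buffer_cache[(8, 135)]['dirty'])  # True
```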

Dirty is an internal designation where the block is identified as having changed data that has not been written to disk. The RDBMS needs to track this information for transactional integrity and consistency. The underlying dynamic performance view v$bh indicates when a particular block is dirty, as seen by the following query:

SYS@ORCL11>select file#, block# from v$bh where dirty='Y';

When a transaction is committed by the end user:

The transaction SCN is updated in the data block and the undo segment header marks that statement as committed in the header section of the undo block.

The logwriter process (LGWR) will flush the log buffer to the appropriate online redo log file.

The SCN is updated on the data block if it is still in the buffer cache (fast commit).

Delayed block cleanout can happen when not all of the changed blocks have the updated SCN indicating the commit has occurred. This can cause problems for a transaction that is updating large numbers of rows if a rollback needs to occur: the transaction holds its exclusive lock until that rollback is finished, causing end users to wait.

The delayed block cleanout process does occasionally cause problems that would require opening an Oracle Support Request. Delayed block cleanout was implemented to save time by reducing the number of disk reads to update the SCN until the RDBMS needs to access data from that same block again. If the changed block has already been written to the physical disk and the Oracle background process encounters this same block (for any other query, DML, or DDL), it will also record the committed change at the same time. It does this by checking the transaction entry by SCN in the undo header, which indicates the changes that have been committed. That transaction entry is located in the transaction table, which keeps track of all active transactions for that undo segment.

Each transaction is uniquely identified by the assignment of a transaction ID (XID), which is found in the v$transaction view. The XID is written in the undo header block along with the Undo Byte Address (Uba), which consists of the undo data file and block numbers (the UBAFIL and UBABLK columns of v$transaction, respectively).

Please take the time to go through the following demonstration; it will solidify the complex concepts in this article.

Demonstration of data travel path

Dumping a block is one of the methods to show how data is stored. It will show the actual contents of the block, whether it is a table or index block, and an actual address that includes the data file number and block number. Remember from the concepts manual that several blocks together make up an extent, and extents then make up segments. A single segment maps to a particular table or index. It is easy to see from the following simplified diagram how different extents can be stored in different physical locations in different data files but the same logical tablespace:

The data in the test case comes from creating a small table (segment) with minimum data in a tablespace with a single data file created just for this demonstration. Automatic Segment Space Management (ASSM) is the default in 11g. If you create a tablespace in 11g with none of the optional storage parameters, the RDBMS by default creates an ASSM segment with locally managed autoallocated extents.

The size of the extents can be defined at tablespace creation time, depending on the type of data to be stored. If all of the data is uniform and you need to maintain strict control over the amount of space used, then uniform extents are desirable.

Allowing the RDBMS to autoallocate extents is typical in situations where the data is not the same size for each extent, reducing the amount of time spent in allocating and maintaining space for database segments. Discussing the details, options, and differences for all of the ways to manage segment space in Oracle Database 11g is beyond the scope of this article.

For this example, we will be using race car track information as the sample data. For this demonstration, you will create a specific user with the minimum amount of privileges needed to complete this exercise; SQL is provided for that step in the script.

There are several key files in the zipped code for this article that you will need for this exercise.

Location of trace files

The dump block statement will create a trace file in the user dump (udump) directory on any version prior to 11gR1, which can be viewed by a text editor. Using 11gR1 and above, you will find it in the diag directory location. This example will demonstrate how to use the adrci command-line utility to view trace files. First we set the home path where the utility will find the files, then search with the most recent listed first—in this case, it is the NEWDB_ora_9980.trc file.

Now that you know the location for the trace files, how do you determine which trace file was produced? The naming convention for trace files includes the actual process number associated with that session. Use the following command to produce trace files with a specific name, making it easier to identify a separate task:

SYS@NEWDB>ALTER SESSION SET TRACEFILE_IDENTIFIER = SYSDUMP_SESSION;

Running dump block SQL statements

Let's run through the test case demonstrating how transaction data moves from one component to the next. The first script, dumpblock_sys.sql, creates the tablespace and a user called TTRACKER with the minimum database privileges needed to perform this exercise. Please edit dumpblock_sys.sql to change the location of your data file in the test database. You could also leave it blank if the startup initialization parameter db_create_file_dest is populated with a valid operating system location that Oracle has read and write privileges to. In that case, the database would create the data files in db_create_file_dest and assign a system-generated name to them.

In this dump block exercise, you are going to need two different SQL*PLUS windows or sessions. You start by logging into the database as someone with sysdba privileges, which is usually the user sys on a small test database. In a production database, more often sysdba privileges are assigned to a DBA account, which can be more accurately tracked for auditing purposes. The SYS account is only used in this scenario to create the TTRACKER user and dump some of the blocks.

Run the dumpblock_sys.sql script first, logged in as SYS; it will create the tablespace and table and then pause. You'll see the following screenshots for the output:

SYS@NEWDB>@dumpblock_sys.sql

After the pause, log on as ttracker in the second window, as shown by the output below, and run the second script:

TTRACKER@NEWDB>@dumpblock_ttracker.sql

The second script will pause at a point and request that you run the first script again, but the first script needs some information from the ttracker session: the data file number and block number, in this example 8 and 135. Your data file and block numbers will be different from any provided in the sample code. Return to the first window (sys), put in the correct numbers, and press the Enter key until it says CONTINUE DUMPBLOCK_TTRACKER.SQL IN PREVIOUS SESSION.

When it pauses, return to the second window (ttracker) and hit the Enter key. There will be another pause requesting the undo file number and undo block number, which are found by the select statement just above the prompt on the SQL*Plus command line. Go back and forth between the two windows at each pause until both scripts are done.

What is happening is that you are dumping blocks with the sys login and manipulating data with the ttracker login (that is, insert, update, commit). At the same time, you are looking at how the SCN advances through the different transaction steps as well as matching the transaction with the corresponding undo SQL statement in the undo segment. We could have done this entire demonstration with a single logon, but I wanted you to visualize the read consistency view of a transaction.

What you should have when the script is finished is something similar to the following list in the trace directory. What will be different for your environment is the session number and ORACLE_SID. Notice that there are rtf versions of the following text files in the code section:

NEWDB_ora_8582_SYSDUMP1.rtf

NEWDB_ora_8621_SYSDUMP2.rtf

NEWDB_ora_8628_SYSDUMP3.rtf

NEWDB_ora_8635_SYSDUMP4.rtf

You should also have a couple of lst files in the directory you executed SQL*PLUS from, to review in case of an error. I have provided the same lst files from my session for you to review as well:

dumpblocksys.lst

dumpblockttracker.lst

If you choose to rerun these scripts on the same database, you will get an error on the create sections if you do not drop the user ttracker, tablespace, and the table named tracks. The easiest way to drop both the tablespace and table is:

sys@NEWDB> drop tablespace track including contents and datafiles;

and then follow this by dropping the user ttracker:

sys@NEWDB> drop user ttracker;

Identifying files and blocks

There is more than one way to identify the file number and block number of a particular segment. One way is with the dbms_rowid package. The Oracle-supplied package dbms_rowid will show the absolute block number and relative file number of a table extent. That information comes from both the data file header and the block header.

The next query will search for the data file where those segments or extents are stored. A tablespace is only a logical structure that actually maps to the data file. It is important to understand that only certain objects (extents) are mapped to a physical data file.

And finally, the following query will verify the file number for the tablespace TRACK data file. We are interested in manipulating the data in the TRACKS table only, which in this test database is file number 8 and block number 135. 135 is actually the starting block number for this table segment.

sys@NEWDB>select name, file# from v$datafile;

Each time you use the dump block command, it will create a trace file in order to track a single transaction across the database structures. This trace contains the following information that we are interested in for this demonstration:

Contents of the block for a certain file number and block number

List of the actual rows

SCN of a particular block

Transaction ID for a data block and the corresponding undo block

Any locks being held for that transaction

Flags indicating the state of the transaction

Read Consistency, Fast Commit, and Delayed Block Cleanout behavior

The dump block command can be used in several different ways to extract information. See the following for a few examples as they apply in this test case:

There is much more information in these trace files than what we can cover in a book. For more information look for reference materials that refer to Oracle Internals. Refer to the following legend for the trace files.

Legend for Trace Files:

Xid—Transaction ID

Uba—Undo Byte Address

Flag—Transaction flag

C--- Committed

---- Uncommitted

--U- Fast commit, delayed block cleanout has not happened

Lck—Number of Rows locked

Scn—System Change Number

Not all of the transaction flags have been listed here and further research may be required.
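The legend above can be encoded as a small lookup table, which is handy when scanning many Itl entries in a trace file. This sketch covers only the flag values listed in this article.

```python
# Interpret the Itl Flag column using the legend above; only the flag
# values listed in this article are covered.

FLAG_MEANINGS = {
    'C---': 'Committed',
    '----': 'Uncommitted',
    '--U-': 'Fast commit, delayed block cleanout has not happened',
}

def describe_flag(flag):
    """Return the meaning of an Itl Flag value from the legend."""
    return FLAG_MEANINGS.get(flag, 'Unknown flag; further research required')

print(describe_flag('--U-'))
```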

Tracking the SCN through trace files

We will be going through four different trace files showing the transaction state at different points. It will be easier to switch from one to the other if you go ahead and open each one in its own window. Start with the output file called dumpblocksys.lst as that contains the starting database SCN number. That comes from the following query:

SYS@NEWDB>SELECT CURRENT_SCN FROM V$DATABASE;

The database SCN will always be ahead of a transaction SCN because the transaction event has already occurred. The transaction SCN is recorded in the accompanying trace file, while the database SCN is recorded in the lst output files.

Farther down in the file dumpblocksys.lst, there are no active transactions as shown by no rows returned when selecting from v$transaction the first time. At the first pause is the point where ttracker starts modifying the table, allowing us to actively track that transaction by dumping the block.

In the next section, different transaction types will be covered, actively demonstrating what is known as Oracle Internals, the undocumented internal structure of Oracle data.

Single Row Insert

Open up the first trace file called NEWDB_ora_8582_SYSDUMP1_SESSION.rtf, as this contains additional comments. The text version of this file (trc) is provided and contains the original trace data with any identifying or sensitive information removed. Note this transaction SCN number, which is then linked to the data from the insert statement as follows:

The hexadecimal number is converted to decimal format by using the calculator mentioned earlier.

SCN of block insert statement = hex scn: 0x0000.001827be = 1,583,038
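Rather than an online calculator, the hex-to-decimal conversion is a one-liner in Python, using the SCN value from this trace:

```python
# Convert the hexadecimal SCN from the trace file to decimal.
# The trace shows scn: 0x0000.001827be; drop the 0x0000. prefix.

hex_scn = '001827be'
decimal_scn = int(hex_scn, 16)
print(f'{decimal_scn:,}')  # 1,583,038
```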

Switch to the dumpblockttracker.lst output file. Note that the database scn has advanced past the original starting database value and past the first transaction scn (to 1583039) due to the changes made; the tracks table was created and a single row has been inserted.

Switching back to the file NEWDB_ora_8582_SYSDUMP1_SESSION.rtf, farther down, the Flag column shows dash marks with a 1 in the Lck column. Referring back to the legend earlier in this article, this indicates that the transaction is uncommitted and holds a single lock. This single lock is important, as the ttracker user has exclusive use of this data block until a commit or rollback statement is issued.

Notice the Uba (Undo Byte Address) that ties this transaction to the matching SQL undo statement in an undo segment should the end user need to rollback this transaction. Also notice the Itl heading in the above list that stands for Interested Transaction List, which consists of all transactions currently active for that block. In this case, there is only the current single transaction.

The Xid found in NEWDB_ora_8582_SYSDUMP1_SESSION.rtf should match the query from the v$transaction table, as seen in the dumpblockttracker.lst file. The start_scn also matches the SCN found in the trace file NEWDB_ora_8582_SYSDUMP1_SESSION.rtf. Labeling the columns clearly specifies what each one represents. If the column labeled NOUNDO equals YES, then there would be no undo available. See the query as follows:

SYS@NEWDB>SELECT UBAFIL AS UNDOFILENUMBER,UBABLK AS UNDOBLOCKNUMBER,STATUS,NOUNDO,XID AS TRANSACTIONID,START_SCN FROM V$TRANSACTION;

Farther down in the trace file is the section that contains the actual data. Look for the key words block row dump. The data found in the trackname column is in hexadecimal format, which is converted by referring to http://www.asciitable.com. Look up the Hex (hexadecimal) to find the Char (character) equivalent.
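The same hex-to-character lookup that asciitable.com provides can be done with Python's bytes.fromhex. The hex string here spells ATLANTA, the row inserted in this demonstration:

```python
# Convert a hex dump of the trackname column to readable characters.
# 41 54 4c 41 4e 54 41 are the ASCII codes for the row value ATLANTA.

hex_dump = '41 54 4c 41 4e 54 41'
text = bytes.fromhex(hex_dump.replace(' ', '')).decode('ascii')
print(text)  # ATLANTA
```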

Commit of a single row insert

What happens when we finally issue a commit statement? Looking at the NEWDB_ora_8621_SYSDUMP2_SESSION.rtf file, there are several items of interest. Notice the advancement of the SCN after the commit statement was issued from the previous insert statement:

scn: 0x0000.001827c3 --001827c3 = 1,583,043

The flag shown below now reads as committed by the fast commit process, which delays block cleanout until this block is visited by another query, DDL, or DML statement.

Also see that the Uba is the same address as before the commit statement in the previous section (above), as excerpted from NEWDB_ora_8582_SYSDUMP1_SESSION.rtf:

The fast commit process is fast because it makes the minimal changes needed to indicate that a commit has occurred. This leaves the lock (see the Lck Flag of 1) even after the commit has finished! Delayed block cleanout happens when the next operation visits this block and refers back to the transaction table to find the details for finishing the cleanup process.

Single row update and corresponding undo block

Continuing with the demonstration, we are modifying the existing Atlanta entry in the tracks table to Daytona by issuing an update statement. In this case, we are dumping both the data block and the undo block in order to compare information across those two database structures. Taken from dumpblocksys.lst, the following information tells us several things.

The sys user sees only the previous data, Atlanta, demonstrating read consistency for everyone else; that information comes from the undo segments. Ttracker sees the entry as Daytona and holds a lock (exclusive view) on the changed data. The select statement below returns the row ATLANTA:

SYS@NEWDB>SELECT * FROM TTRACKER.TRACKS;

There is an active transaction that shows how the undo file number and undo block number can be determined. Just as we needed the file number and block number of a table segment to explore the internals, we will need the file number and block number of the undo segment.

SYS@NEWDB>SELECT UBAFIL AS UNDOFILENUMBER,UBABLK AS UNDOBLOCKNUMBER,STATUS,NOUNDO,XID AS TRANSACTIONID,START_SCN FROM V$TRANSACTION;

The SCN for this update statement is located in the file NEWDB_ora_8628_SYSDUMP3.rtf.

scn: 0x0000.001827cb -- 001827cb = 1,583,051

Now that we have accessed the same block by issuing an update, the flag shows as committed in the corresponding trace file. Also notice that the lock (Lck) has been removed:

Comments in the file NEWDB_ora_8628_SYSDUMP3.rtf indicate where the data block dump stops and the undo block dump starts. Farther down in the same file, find the data section labeled as follows. This undo block dump contains the entry for Atlanta. Now, if the end user (ttracker in this case) rolled back the Daytona update, then the ATLANTA row would be restored as shown in the undo block dump:

Let's continue with the demonstration to show the internal changes when the commit of an updated row in the tracks table occurs.

Commit of a single row update

The associated SCN for this transaction is listed in the trace file and converted as follows:

scn: 0x0000.001827d1 -- 001827d1 = 1,583,057

Note that both transactions in the Itl are listed for this data block: first the commit and then the following update statement. Each statement is uniquely identified by its respective Uba and SCN. Just as before in Commit of a single row insert, the flag U indicates delayed block cleanout until this block is accessed again, and the lock is still at least partially present.

There is usually more than one way to accomplish a task, such as converting hexadecimal information to a readable character string. Earlier, this conversion task was outlined with reference to an online hex-to-ASCII calculator. There is an Oracle-supplied package called utl_raw that can do the same conversion, returning the word DAYTONA, as follows:
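The same conversion that utl_raw performs inside the database can also be reproduced outside it, for example in Python. The hex string below is the ASCII encoding of DAYTONA:

```python
# Equivalent of the utl_raw hex-to-character conversion for this value:
# 44 41 59 54 4F 4E 41 is the ASCII encoding of DAYTONA.

raw_value = '444159544F4E41'
print(bytes.fromhex(raw_value).decode('ascii'))  # DAYTONA
```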

The last database SCN recorded in this exercise from the dumpblocksys.lst file is 1583062.

We have followed the SCN from its initial value of 1583007 to the end value mentioned above, 1583062. By altering a small amount of data and looking at the associated changes within the trace files, we clearly demonstrated database read consistency, locking mechanisms, and delayed block cleanout. While the concepts manual is a good place to start, a DBA needs to be able to prove what they have read (or heard) with an actual test case.

After accomplishing this task successfully, you should be more comfortable with the internal structures of the database. More advanced scenarios would be to work with more data, add indexes (index segments can be dumped), or multiple schemas. If this type of work really intrigues you, start asking around at conferences. There are lots of people that are used to working with Oracle at the block level. Truly geeky!

Summary

This article covered how data moves from one database component to the next; the mechanics or essential architectural infrastructure at the block level; what the data in a datafile, redo, undo, or archivelog actually looks like; and how the database keeps up with SCN changes. Dumping database blocks provides the raw hexadecimal values that convert to the characters that the end user sees at the application level.
