I have not been blogging for quite some time now and thought I would end the year with one last post. I picked a very simple yet powerful Data Pump topic. Read on …

As is the norm after exhausting all tuning options, we kind of hope that redo is the bottleneck for all performance problems :). So if your Data Pump performance issues still exist after multiple tuning iterations, as DBAs we explore ways to run the import in NOLOGGING mode to improve performance. There aren't too many easy options prior to Oracle 12c. Let us look at some of the options available before 12c.

A few of the ways to implement NOLOGGING are:

1) Set the database to NOARCHIVELOG mode. Hmm, this requires a bounce, and I don't have the luxury of a scheduled outage in production. Next option …

2) Set all tables to NOLOGGING mode (after pre-creating them if needed).

3) If both of the above options are not possible, then, with our backs against the wall, we either quit or move forward. Since quitting is not an option, we tend to try undocumented stuff like "_disable_logging". Yippee, I finally found the silver bullet to better performance, and of course it is OK if the database gets corrupted :)
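The second option above can be sketched as follows; the table and index names are purely illustrative:

```sql
-- Pre-create the objects and switch off logging before the import
ALTER TABLE emp NOLOGGING;
ALTER INDEX emp_pk NOLOGGING;

-- ... run the import ...

-- Switch logging back on once the import completes
ALTER TABLE emp LOGGING;
ALTER INDEX emp_pk LOGGING;
```

Remember that NOLOGGING only helps direct-path operations; conventional-path inserts still generate full redo.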

It looks like Oracle Support had enough of dealing with database corruption :). Starting with Oracle 12c, you have an option to import data with archival disabled only for the duration of the import operation. You can also disable archival only for tables or only for indexes. Note that some amount of redo is always generated even with NOLOGGING; that is the norm rather than the exception.
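This is driven by the TRANSFORM parameter of impdp; a minimal sketch, where the directory, dump file and schema names are illustrative:

```
# Disable archive logging for the whole import
impdp system DIRECTORY=dp_dir DUMPFILE=hr.dmp SCHEMAS=hr \
  TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

# Disable archive logging for tables only (indexes still log)
impdp system DIRECTORY=dp_dir DUMPFILE=hr.dmp SCHEMAS=hr \
  TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE
```

The optional third field of TRANSFORM scopes the setting to a particular object type such as TABLE or INDEX.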

Also, this option only works if FORCE LOGGING is turned off, which could be a pain point if you have Data Guard.

It is highly recommended to back up the database after running an import in NOLOGGING mode so that media recovery is possible. Otherwise you may have to rerun the import operation, which is easier said than done because of the dependencies.


Most of you are aware that, prior to Oracle 12c, Oracle does simple compression when you have an ordered result set, by sending the data once along with a count for the duplicate data. Starting with Oracle 12c, Oracle supports Advanced Network Compression. Too bad it is available only as part of the Advanced Compression option.

Let's forget Oracle for some time and look at the main constraints for networking: generally, network bandwidth and data volume. Think of network bandwidth as a pipe and data volume as water. If you need to send more water across the pipe, you either need a bigger pipe or you convert the water to a different form that occupies less space, so that more water can be sent across the same pipe. The best example of data volume is YouTube and Netflix, which gobble up more than 60% of internet traffic.

Now back to Oracle. With Advanced Network Compression, Oracle does something similar to what I described above: it compresses the data to be transmitted over the network on the sending side and converts it back to its original form on the receiving side, reducing network traffic; the end result is transparent to the user. As with other Advanced Compression features, Oracle supports GZIP or LZO for compressing the data.

You can notice a significant improvement in performance if the network is the bottleneck. If you have a CPU-bound system, you are most likely going to make things worse.

How to implement?

1. Using SQL*Net parameters

SQLNET.COMPRESSION: This parameter enables or disables data compression. Compression is used only if this parameter is set to ON on both the server and the client side. Also note that SQLNET.COMPRESSION does not apply to Oracle Data Guard streaming redo or to SecureFiles LOBs.

SQLNET.COMPRESSION_LEVELS: This parameter specifies the compression level. A value of LOW consumes less CPU and yields a lower compression ratio, whereas HIGH consumes more CPU and yields a higher compression ratio.

SQLNET.COMPRESSION_THRESHOLD: This parameter specifies, in bytes, the minimum data size for which compression will be done. Compression is not done for any payload below this size.
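A minimal sqlnet.ora sketch putting the three parameters together; the threshold value here is illustrative:

```
# sqlnet.ora -- compression kicks in only if enabled on BOTH client and server
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)
SQLNET.COMPRESSION_THRESHOLD = 1024
```

Listing both levels lets the two ends negotiate; the effective level is the highest one common to client and server.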

2. Using TNSNAMES parameters

Compression can also be enabled in the connect descriptor for an individual connection using COMPRESSION and COMPRESSION_LEVELS. These parameters have the same meaning as the corresponding SQL*Net parameters.
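A sketch of such a connect descriptor; the alias, host and service name are illustrative:

```
SALESDB =
  (DESCRIPTION =
    (COMPRESSION = on)
    (COMPRESSION_LEVELS = (LEVEL = high))
    (ADDRESS = (PROTOCOL = TCP)(HOST = salesdb-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = salesdb))
  )
```

This is handy when you want compression only for a specific bandwidth-hungry connection rather than instance-wide.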


Prior to Oracle 12c, truncating tables with dependent children was a pain. We had to disable the constraints before truncating the parent table and re-enable them afterwards. The performance gain of TRUNCATE over DELETE almost disappeared if the constraints were enabled with the VALIDATE option. If enabling the constraints with NOVALIDATE was not acceptable, then DELETE seemed to be the only way to go.

Starting with Oracle 12c, Oracle supports truncating tables with the CASCADE option, similar to other RDBMSs like MySQL and PostgreSQL. It took Oracle some time to introduce this option, but I am glad we finally have it.

As with cascading deletes, the foreign-key constraints have to be defined with ON DELETE CASCADE; otherwise TRUNCATE with the CASCADE option will fail. Also, the cascading effect impacts all children, grandchildren and so on.
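A minimal sketch of the syntax, with illustrative table names, assuming the child's foreign key was created with ON DELETE CASCADE:

```sql
CREATE TABLE dept (deptno NUMBER PRIMARY KEY, dname VARCHAR2(30));
CREATE TABLE emp  (empno  NUMBER PRIMARY KEY,
                   deptno NUMBER REFERENCES dept ON DELETE CASCADE);

-- Truncates dept and, via the ON DELETE CASCADE constraint, emp as well
TRUNCATE TABLE dept CASCADE;
```

Without the ON DELETE CASCADE clause on the foreign key, the same statement raises an error instead of cascading.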

Let us look at an example. See the scripts to create the tables towards the end of this blog.

In this example we have three tables: DEPT (parent), EMP (child) and EMPPROJ (grandchild). The child and grandchild tables are created without ON DELETE CASCADE, so obviously the TRUNCATE command with the CASCADE option will fail. Please note the difference in the error messages with and without the CASCADE option.

Before we truncate the table, let's take a count of the emp and empproj tables.

SQL> select count(*) from emp;

  COUNT(*)
----------
         5

SQL> select count(*) from empproj;

  COUNT(*)
----------
         5

Now let us truncate the table with and without the CASCADE option and then take a count of the emp and empproj tables. The key thing to remember is that the CASCADE option will also truncate all data from empproj when the emp table is truncated, because of the child and grandchild relationship.


In Oracle 12c there are a lot of great new RMAN features focused on reducing recovery time, and some of them provide a better DBA experience too :). Let us glance at some of them in this blog.

Support for point-in-time recovery of tables and partitions. I would rate this as one of the best options in Oracle 12c.

Option to duplicate a database with the NOOPEN option so that the duplicated/cloned database remains in MOUNT state. Prior to 12c, the cloned database was automatically opened in RESETLOGS mode. The NOOPEN option is very useful when you want to clone a database as part of an upgrade.

In Oracle 12c, duplicate database supports pull-based restores from backup sets; prior to Oracle 12c, only push-based restores from image copies were supported.

Support for recovering the database from snapshots taken with third-party storage from vendors like EMC, Hitachi etc. The RECOVER command now supports the SNAPSHOT TIME clause to implement this feature.

Transporting a database across platforms using backup sets. Prior to 12c, only image copies were supported.

Transporting a database across platforms using inconsistent backups, i.e. backups taken without putting the database in read-only mode. The new ALLOW INCONSISTENT clause implements this.

Support for executing SQL commands from the RMAN prompt without the SQL prefix.

Support for SQL*Plus command DESC from RMAN prompt.
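A few of the features above can be sketched from the RMAN prompt as follows; the schema, SCN, destination path and database names are illustrative:

```
RMAN> RECOVER TABLE hr.emp UNTIL SCN 1234567
2>      AUXILIARY DESTINATION '/u01/aux';

RMAN> DUPLICATE TARGET DATABASE TO clonedb NOOPEN;

RMAN> SELECT sysdate FROM dual;   -- SQL without the SQL prefix
RMAN> DESC hr.emp                 -- SQL*Plus style DESC
```

For table point-in-time recovery, RMAN builds a throwaway auxiliary instance under the AUXILIARY DESTINATION, recovers the table there, and imports it back.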


Let us assume you have a table with ten columns and four B*Tree indexes. If you analyze the space usage, chances are very high that the size of all the indexes together is nearly equal to, or more than, the size of the table. If you end up creating more indexes to support complex business requirements, the size of all the indexes together could be 2-4 times (or more) the size of the table. So what is the big deal, space is cheap. Really? That is what you get to read, but when your database is in the tens or hundreds of terabytes, every byte matters. So how would you feel if you were told that there is an option in Oracle 12c that lets you create partial indexes on partitioned tables to save storage and improve performance?

Let us explore more on this topic.

We know that the main purpose of an index is to provide fast access to selective data, and that this purpose is defeated when you retrieve large amounts of data (relative to the size of the table). Moreover, some applications need indexes only for the current or recent months, whereas other applications do not need an index for the current or recent months at all. For reporting or data warehouse databases, parallel full table scans are generally many times faster than sequential index range scans. But when you create a global or local index on a partitioned table, the index covers all of the data in all partitions. Prior to Oracle 12c you could not create indexes on selected partitions; an index always covered all of the data. With Oracle 12c, however, you can create partial indexes that contain index data from selected partitions only.

So how would you implement partial indexes?

Enable or disable the indexing property at the table level or the partition/subpartition level using the INDEXING clause.

Create indexes as FULL or PARTIAL. When you specify PARTIAL, only the partitions that have the indexing property turned on become part of the index.

Example

In the example below, I disable the indexing property at the table level and enable it at the partition level. When I then create a partial index, only those partitions with the indexing property turned on become part of the index.
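A sketch of the two steps, with illustrative table, partition and index names:

```sql
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE )
  INDEXING OFF                      -- table-level default: no indexing
  PARTITION BY RANGE (order_date)
  ( PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01'),
    PARTITION p_2013 VALUES LESS THAN (DATE '2014-01-01') INDEXING ON );

-- Only p_2013 (INDEXING ON) contributes entries to this index
CREATE INDEX orders_ix ON orders (order_date) LOCAL INDEXING PARTIAL;
```

A local partial index simply omits the index partitions for INDEXING OFF table partitions; a global partial index excludes their rows from the index segment.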


Extended Datatypes

Prior to Oracle 12c, the VARCHAR2/NVARCHAR2 datatypes allowed storing variable-length character data of up to 4000 bytes, whereas the RAW datatype allowed storing variable-length binary data of up to 2000 bytes. In Oracle 12c, the maximum size for the VARCHAR2, NVARCHAR2 and RAW datatypes is increased to 32767 bytes. These datatypes are now referred to as extended datatypes. Extended datatypes are implicitly implemented using LOBs; just like LOBs, any data stored in excess of 4000 bytes for VARCHAR2/NVARCHAR2/RAW is stored out-of-line. If the data is within the pre-12c limits, it is stored in-line. So be aware of the restrictions on LOBs, since almost all of them apply to extended datatypes (at least for now).

The first thing that came to mind about extended datatypes is what purpose they actually serve, given that they are implemented as LOBs; why would you introduce LOB restrictions to your columns by increasing the limits of VARCHAR2, NVARCHAR2 and RAW? As most of you are aware, no database version, including Oracle 12c, supports changing VARCHAR2/NUMBER/RAW to LOBs directly with the ALTER … MODIFY command (it results in ORA-22858: invalid alteration of datatype). I thought that instead of extended datatypes, Oracle could have removed the restriction on converting VARCHAR2/NUMBER/RAW to LOBs directly and provided some restricted form of index support.

Being a new feature, I did not find much useful information about it, so I decided why not be one of the earliest bloggers to explore this topic.

So let us make sure that the database is enabled for extended datatypes.

Now let's create a table with two columns, one a VARCHAR2 and the other a LOB.

SQL> CONN SHAN/SHAN

Connected.

SQL> CREATE TABLE EXT_DATATYPE (

2 EMPLOYEE_NO VARCHAR2(32767) ,

3 EMPLOYEE_COMMENTS BLOB);

Table created.

OK, now let's try adding a primary key to this table. Oops, it failed because of the restriction on index key length.

SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO);

ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO)

*

ERROR at line 1:

ORA-01450: maximum key length (6398) exceeded

Now let's modify the column size to 6398 bytes and add a primary key to the table. The command still fails; so what is the secret column length that supports indexes? It is actually 6389 bytes.

SQL> ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(6398));

Table altered.

SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO);

ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO)

ERROR at line 1:

ORA-01450: maximum key length (6398) exceeded

For VARCHAR2/NVARCHAR2/RAW columns of more than 6389 bytes, you can create indexes using function-based indexes or virtual columns. With both approaches you shorten the indexed value, using either SUBSTR, the new STANDARD_HASH function, or something creative you come up with.
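Both approaches can be sketched against the table above; the index and virtual column names are illustrative:

```sql
-- Function-based index on a prefix of the long column
CREATE INDEX ext_sub_ix ON EXT_DATATYPE (SUBSTR(EMPLOYEE_NO, 1, 100));

-- Or index a fixed-length hash of the full value via a virtual column
ALTER TABLE EXT_DATATYPE
  ADD (emp_no_hash AS (STANDARD_HASH(EMPLOYEE_NO)));
CREATE INDEX ext_hash_ix ON EXT_DATATYPE (emp_no_hash);
```

The hash approach supports equality lookups on the full value (with a recheck of the column), while the SUBSTR approach also supports prefix range scans.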

Let us revert the column length to the time-trusted 4000 bytes, add the index and then try to increase the column length. You get a different error, but the underlying cause is the same.

SQL> ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(4000));

Table altered.

SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_NO);

Table altered.

SQL> ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(6390));

ALTER TABLE EXT_DATATYPE MODIFY (EMPLOYEE_NO VARCHAR2(6390))

*

ERROR at line 1:

ORA-01404: ALTER COLUMN will make an index too large

Now let us drop our primary key constraint and try adding the LOB column as the primary key; it will fail, as LOB columns cannot be part of a primary or unique key.

SQL> alter table EXT_DATATYPE drop primary key;

Table altered.

SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_COMMENTS));

ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_COMMENTS))

ERROR at line 1:

ORA-01735: invalid ALTER TABLE option

SQL> ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_COMMENTS);

ALTER TABLE EXT_DATATYPE ADD PRIMARY KEY (EMPLOYEE_COMMENTS)

ERROR at line 1:

ORA-02329: column of datatype LOB cannot be unique or a primary key

Summary: In essence, it would have been better if Oracle had increased the datatype sizes to 6000 bytes instead of 32K, so that indexing and primary key constraints were all supported. There is always a learning curve with new features, and it gets a little more complicated with enhancements. Initially, the 32K sizes are going to introduce new risks and create confusion if implemented without understanding the actual consequences, even though there are workarounds for creating indexes using virtual columns or function-based indexes. I just picked one restriction of LOBs; more testing is needed to verify the behavior of AFTER UPDATE DML triggers, INSERT AS SELECT and many more restrictions.

In a nutshell, the following steps are required to enable extended datatypes. Most of the steps are familiar except for steps 3 and 4. In step 3, we modify the initialization parameter MAX_STRING_SIZE from STANDARD to EXTENDED. Once changed, this is an irreversible action. Step 4 increases the sizes of VARCHAR2, NVARCHAR2 and RAW in the required views.
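For a non-CDB database, the sequence looks like this when run as SYSDBA; a sketch of the documented procedure:

```sql
-- Once MAX_STRING_SIZE is EXTENDED it cannot be changed back to STANDARD
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET MAX_STRING_SIZE=EXTENDED;
@?/rdbms/admin/utl32k.sql    -- step 4: widens VARCHAR2/NVARCHAR2/RAW in views
SHUTDOWN IMMEDIATE;
STARTUP;
```

Take a full backup before starting, since the change is irreversible and touches the data dictionary.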