Archive for the ‘Oracle internals’ Category

“A filmmaker should never assume he’s superior to his subject. I often find that even the simplest topic remains an enigma. The best film portraits not only evoke that enigma but ingest it in a process that renders what’s invisible visible.” – Damian Pettigrew, a documentary SME

The same holds for database SMEs using the new fast ingest process, because at some point their data might be neither visible nor invisible. From a DBA's point of view, I mean…

Memoptimized Rowstore fast ingest is a new feature of Oracle 19c and is all about high-performance data streaming. Here is an example from Oracle which is ultra fast: reportedly 25 million inserts per second on a two-socket server:

Memoptimized Rowstore was introduced in Oracle 18c and here are a few good blog posts on the subject:

The fast ingest functionality of Memoptimized Rowstore enables fast data inserts into an Oracle Database from applications that ingest small, high-volume transactions with almost no transactional overhead. The inserted data is temporarily buffered in the large pool before being written to disk in bulk, in a deferred, asynchronous manner.

Memoptimized Rowstore provides the following two functionalities:

Fast ingest:
– Fast ingest optimizes the processing of high-frequency, single-row data inserts into a database
– Fast ingest uses the large pool for buffering the inserts before writing them to disk, so as to improve data insert performance

Fast lookup:
– Fast lookup enables fast retrieval of data for high-frequency queries
– Fast lookup uses a separate memory area in the SGA called the memoptimize pool for buffering the data queried from tables
– To use fast lookup, you must allocate appropriate memory to the memoptimize pool using the MEMOPTIMIZE_POOL_SIZE initialization parameter
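
As a minimal sketch (the table name and columns are made up for illustration), enabling both features could look like the following. Note that fast lookup requires a primary key and an immediately created segment, and that MEMOPTIMIZE_POOL_SIZE is a static parameter, so changing it requires an instance restart:

-- allocate memory for fast lookup (static parameter, restart needed)
alter system set memoptimize_pool_size=2G scope=spfile;

create table iot_readings (
  sensor_id   number primary key,
  reading     number,
  captured_at timestamp )
segment creation immediate
memoptimize for write   -- enable fast ingest
memoptimize for read;   -- enable fast lookup

-- in 19c, fast ingest inserts are requested with the MEMOPTIMIZE_WRITE hint:
insert /*+ MEMOPTIMIZE_WRITE */ into iot_readings values (1, 36.6, systimestamp);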

The memoptimize pool contains two parts: (1) the memoptimize buffer area, which occupies 75% of the memoptimize pool, and (2) a hash index, a non-persistent segment data structure consisting of several units, each containing a number of hash buckets; the hash index occupies the remaining 25%.

Note that a table can be configured for using both fast ingest and fast lookup, but the following four points should always be considered:

– Because fast ingest buffers data in the large pool, there is a possibility of data loss in the event of a system failure. To avoid data loss, a client must keep a local copy of the data after performing inserts, so that it can replay the inserts in the event of a system failure before the data is written to disk. A client can use the DBMS_MEMOPTIMIZE package subprograms to track the durability of the inserts. After inserts are written to disk, a client can destroy its local copy of the inserted data.

Use the GET_APPLY_HWM_SEQID function to find the low high-water mark of sequence numbers of the records that have been successfully written to disk across all sessions:

select DBMS_MEMOPTIMIZE.GET_APPLY_HWM_SEQID from dual;

Similarly, use the GET_WRITE_HWM_SEQID function to return the high-water mark sequence number of the record that has been written to the large pool for the current session.

select DBMS_MEMOPTIMIZE.GET_WRITE_HWM_SEQID from dual;

– Queries do not read data from the large pool, hence data inserted using fast ingest cannot be queried until it is written to disk.

If you need to flush all the fast ingest data from the large pool to disk for the current session, just run:

exec DBMS_MEMOPTIMIZE.WRITE_END;

– Parent-child transactions must be synchronized to avoid errors. For example, foreign key inserts and updates of rows inserted into the large pool can return errors if the parent data has not yet been written to disk.

– Index operations are supported by fast ingest similarly to regular inserts. However, for fast ingest, the database performs index operations while writing data to disk, not while writing it into the large pool.

You can disable a table for fast ingest by specifying the NO MEMOPTIMIZE FOR WRITE clause in the ALTER TABLE statement. You can view the fast ingest data in the large pool using the V$MEMOPTIMIZE_WRITE_AREA view. You can also view and control the fast ingest data in the large pool using the subprograms of the packages DBMS_MEMOPTIMIZE and DBMS_MEMOPTIMIZE_ADMIN.
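
For example, using the hypothetical table from the sketch above:

alter table iot_readings no memoptimize for write;

select * from v$memoptimize_write_area;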

The DBMS_MEMOPTIMIZE_ADMIN package has only one procedure, WRITES_FLUSH, which accepts no input or output parameters and flushes the fast ingest data of all sessions from the large pool to disk:

exec DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;

Here is one more example from Oracle:

As you can see, you can lose data using this new technique. Use it with caution, unless you can afford losing the data or can re-insert it.

If you need to remove or populate fast lookup data for a certain table from the memoptimize pool, use DBMS_MEMOPTIMIZE.DROP_OBJECT and DBMS_MEMOPTIMIZE.POPULATE respectively.
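
A hypothetical example (the schema and table names are made up; as far as I can tell, DROP_OBJECT takes the schema number and object number, which can be looked up in DBA_USERS.USER_ID and DBA_OBJECTS.OBJECT_ID):

-- load the fast lookup data of a table into the memoptimize pool
exec DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'APP', table_name => 'IOT_READINGS');

-- remove it again (117 and 74251 are made-up schema and object numbers)
exec DBMS_MEMOPTIMIZE.DROP_OBJECT(schema_number => 117, object_number => 74251);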


First things first: the word autonomous comes from the Greek word autónomos, which means “with laws of one’s own, independent”.

After starting to use the Autonomous Data Warehouse Cloud, I must say I am pleasantly surprised to see something totally new, simple, uncomplicated and effortless, with no additional tuning or re-architecting of the Oracle databases needed – the underlying Oracle Cloud Infrastructure is super fast and highly reliable.

1. You may connect to ADWC either by using the web interface, as you can see above, or as a client (I use SQL Developer 17.4), but for the client connection type choose Cloud PDB and not TNS. Your configuration file is a zip file, not the plain-text file DBAs are used to.

2. You cannot create indexes on columns, you cannot partition tables, you cannot create materialized views, etc. Not even database links. You will get an error message: “ORA-00439: feature not enabled: Partitioning” or “ORA-01031: insufficient privileges”.

ADWC lets you create primary key, unique key and foreign key constraints in RELY DISABLE NOVALIDATE mode, which means that they are not enforced. These constraints can also be created in enforced mode, so technically you can create constraints as in a non-autonomous Oracle database.
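
For example, a hypothetical primary key created in the default non-enforced mode:

alter table sales add constraint sales_pk
  primary key (sale_id) rely disable novalidate;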

Note that in execution plans, primary keys and unique keys will be used by the optimizer only for single-table lookups; they will not be used for joins.

But … you can run alter system kill session!
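
For example (the SID and serial# are made up):

alter system kill session '237,53176' immediate;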

3. The Oracle Autonomous Data Warehouse interface contains all necessary capabilities for a non-professional database user to create their own data marts and run analytical reports on the data. You can even run AWR reports.

4. You do not have full DBA control, as Oracle (in my opinion) uses lockdown profiles in order to make the database autonomous. As the ADMIN user, you have 25 roles, including the new DWROLE, which you would normally grant to all ADWC users you create. Among those 25 roles you have GATHER_SYSTEM_STATISTICS, SELECT_CATALOG_ROLE, CONSOLE_ADMIN, etc. You have access to most DBA_ and GV_$ views. Not to mention the 211 system privileges.

5. ADWC configures the database initialization parameters based on the compute and storage capacity you provision. ADWC runs on dozens of non-default init.ora parameters. For example:

6. One of the few ALTER SESSION commands you can run is “alter session disable parallel dml;”

7. Monitoring SQL is easy:

But there is no Oracle Tuning Pack: you did not expect to have that in an autonomous database, did you? There is no RAT, Data Masking and Subsetting Pack, Cloud Management Pack, Text, Java in DB, Oracle XML DB, APEX, Multimedia, etc.

8. Note that this is (for now) a data warehousing platform. However, DML is surprisingly fast too. I managed to insert more than half a billion records in just about 3 minutes:

Do not try to create nested tables, media or spatial types, or use the LONG datatype: they are not supported. Compression is enabled by default: ADWC uses HCC for all tables, and changing the compression method is not allowed.

9. The new Machine Learning interface is easy and simple:

You can create Notebooks where you have place for data discovery and analytics. Commands are run in a SQL Query Scratchpad.

10. Users of the Oracle Autonomous database are allowed to analyze the tables and thus influence the Cost Based Optimizer, and hence performance – I think end users should not be able to influence the laws (“νόμος, nomos”) of the database.

Conclusion: The Autonomous Database is one of the best things Oracle has ever made. And they have quite a portfolio of products…

Finally, here is a live demo of the Oracle Autonomous Data Warehouse Cloud:


With more than 500 new features, Oracle Database 12c is designed to give us exactly what we need for easy database administration, consolidation, security and availability. And I cannot deny the complexity it brings into our pluggable world…

But straight to the point. Here are 12 new commands for Oracle DBAs:

1. impdp … transform=disable_archive_logging:Y

The new TRANSFORM option DISABLE_ARCHIVE_LOGGING appended to the impdp command line will cause Oracle Data Pump to disable (almost completely) redo logging when loading data into tables and when creating indexes. Check this example.
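
A hypothetical import (the directory, dump file and table names are made up); note that the transform is ignored if the database is in FORCE LOGGING mode:

impdp scott directory=dp_dir dumpfile=sales.dmp logfile=sales.log tables=sales transform=disable_archive_logging:y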

With the new DBMS_QOPATCH package we can view the installed database patches:

GET_OPATCH_BUGS: Provides a bugs list for a patch
GET_OPATCH_COUNT: Provides the total number of installed patches
GET_OPATCH_LSINVENTORY: Returns whole opatch inventory
GET_OPATCH_PREQS: Provides prerequisite patches for a given patch
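
For example, the whole inventory can be returned as XML and made human-readable with the package’s own stylesheet:

select dbms_qopatch.get_opatch_lsinventory from dual;

select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;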

If we want to create a common user in the CDB, we must prefix it with c##, because the default common user prefix in 12.1.0 is c##. The prefix is controlled by the _common_user_prefix parameter; one option is to set it to NULL.
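
For example (the user name and password are made up):

-- without the c## prefix this fails with ORA-65096: invalid common user or role name
create user c##jmd identified by "SomePassword#1" container=all;
grant create session to c##jmd container=all;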

SYSBACKUP, SYSDG and SYSKM are also database users. SYSDG is used for Oracle Data Guard operations (startup, shutdown and flashback database), while SYSKM is used for encryption key management operations and can connect to the database even if the database is not open.

This “trick” does not work with temporary tables, external tables and cluster tables. Check this example.

5. alter system set max_string_size=extended;

The new init.ora parameter MAX_STRING_SIZE (the default is STANDARD) controls the maximum size of the VARCHAR2, NVARCHAR2, and RAW data types. It can be increased from 4,000 to 32,767 bytes. Check this example.
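
The change is one-way and must be done in UPGRADE mode, followed by running utl32k.sql; a sketch:

shutdown immediate
startup upgrade
alter system set max_string_size=extended scope=spfile;
@?/rdbms/admin/utl32k.sql
shutdown immediate
startup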

6. alter database move datafile ‘old_location’ to ‘new_location’;

In 12.1.0, a data file can be moved online while it is open and being accessed; all of the following are now possible:
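
For example (the file names and the ASM disk group are made up):

alter database move datafile '/u01/oradata/users01.dbf' to '/u02/oradata/users01.dbf';

-- move a data file into ASM and keep the original copy
alter database move datafile '/u02/oradata/users01.dbf' to '+DATA' keep;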

RMAN enables us to recover one or more tables or table partitions to a specified point in time without affecting the remaining database objects. We can use previously-created RMAN backups to recover tables and table partitions to a specified point in time. Check this example.
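
A sketch of such a recovery (the table, time and auxiliary destination are made up):

RMAN> recover table hr.employees
        until time "to_date('2024-01-15 09:00:00','yyyy-mm-dd hh24:mi:ss')"
        auxiliary destination '/u01/aux'
        remap table hr.employees:employees_recovered;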

In-Database Archiving enables us to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. To manage In-Database Archiving for a table, we must enable ROW ARCHIVAL for the table and manipulate the ORA_ARCHIVE_STATE hidden column of the table. Check my old blog post.
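
A minimal sketch (the table and predicate are made up):

alter table orders row archival;

-- archive old rows: they stay in the table but become invisible to the application
update orders set ora_archive_state = '1' where order_date < date '2010-01-01';

-- make archived rows visible again for the current session
alter session set row archival visibility = all;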

Oracle Database 12c introduces the Parallel Upgrade Utility (catctl.pl), which enables components that do not have to be upgraded in a specific order to be upgraded at the same time, taking full advantage of CPU capacity. The Parallel Upgrade Utility reduces the total amount of time it takes to perform an upgrade by loading the database dictionary in parallel using multiple SQL processes to upgrade the database.

-n specifies the number of processes to use for parallel operations (default = 4). The maximum is 8. Set this parameter to 0 to run catctl.pl in serial mode.
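
For example, an upgrade run with eight parallel processes:

cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql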

What is really interesting is John Watson’s comment and example at the very end of the orafaq article. He shows the need for filter predicates that cannot be satisfied as access predicates. John suggests that “all indexes need to include the ora_archive_state column, perhaps appended to primary and unique key indexes and prepended to other indexes. If you don’t do this, you will find that many operations that could previously be completed without touching the table will require table scans”.

The Oracle documentation says the following: “In-Database Archiving enables you to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. The data in these rows is available for compliance purposes if needed by setting a session parameter.

With In-Database Archiving you can store more data for a longer period of time within a single database, without compromising application performance. Archived data can be compressed to help improve backup performance, and updates to archived data can be deferred during application upgrades to improve the performance of upgrades.”

The AMIS Technology Blog says: note the emphasis on ‘without compromising performance’. It would seem that such in-line archived records are skipped in a smart way during full table scans and other record retrieval operations.

What I would like to show is, in a way, also the opposite: that performance can be even better with Oracle In-Database Archiving. I have a table called clients, and visibility is set to ACTIVE. Almost all rows of the table are “invisible”. Check how bytes and cost increase once I set the visibility to ALL. And of course, the full table scan!


More than 100 years ago, Mark Twain said that “the elastic heart of youth cannot be compressed into one constrained shape long at a time”.

In modern technology, “the elastic cloud can be compressed with the constraints of long datatypes”. Let me explain the meaning of my statement in terms of Siebel applications running on RAC and Exadata.

Chained and migrated rows are often a big problem in a Siebel database. The issue of chained rows can be resolved by using a bigger block size, while migrated rows require segment reorganization with a higher PCTFREE (20, 30 or even 50). Commonly chained tables in Siebel are S_ORG_EXT, S_ASSET, S_ORDER_ITEM and S_ORDER_ITEM_OM; at least these will require a bigger block size.
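
A sketch of the bigger block size approach (the cache size, tablespace and table names are made up); a matching buffer cache must exist before a tablespace with a non-default block size can be created, and the indexes invalidated by the move must be rebuilt afterwards:

alter system set db_16k_cache_size=512M scope=both;

create tablespace siebel_16k datafile '+DATA' size 10G blocksize 16K;

-- reorganize a chained table into the new tablespace, with a higher PCTFREE
alter table siebel.s_org_ext move tablespace siebel_16k pctfree 30;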

When a row in a Hybrid Columnar Compressed (HCC) table is updated, it is migrated to another block in the segment that is managed using “OLTP compression”. Any HCC Compression Unit (CU) containing at least one migrated row will also cause the block containing that row to be accessed by the server using a “cell single block physical read”.

Look at the “Top 5 Timed Foreground Events” in a Siebel on Exadata OLTP DB using HCC:

The situation with HCC in OLTP is tricky for the following 3 reasons:

1. Every update of a record stored in HCC format results in a migrated row
2. The new row is stored in a new block that is marked for OLTP compression
3. Non-direct path inserts will be loaded into OLTP compressed blocks as opposed to HCC format

For the above reasons, mixing HCC with DML is not recommended. Partitioning can provide a mechanism for avoiding these issues since each partition can have its own storage format.

Only after decompressing the OLTP-compressed tables did the “cell single block physical read” event disappear, and performance improved significantly.

Another good tip for Siebel on RAC is using a high number of hash partitions for hot indexes. High means 256 or more (and it should be a power of 2).

Look at the situation with gc buffer busy waits before the high hash partitioning:

As you can see the “gc buffer busy acquire” was the top event:

Looking at issues with these events in MOS might incline you to believe that this is a bug. However, this was not the case: after increasing the number of hash partitions for some of the indexes to 256, database performance was back to normal:

Note that non-partitioned tables can also have hash-partitioned indexes!
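
For example, a global hash-partitioned index (the index and column names are made up):

create index siebel.s_order_item_h1 on siebel.s_order_item (order_id)
  global partition by hash (order_id) partitions 256;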

Another tip: SecureFiles is a feature introduced in Oracle Database 11g that is *specifically* engineered to deliver *high performance* for unstructured data such as LOBs. I have seen several queries get even 10 times faster after migrating LOBs to SecureFiles.
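
The migration can be as simple as moving the LOB (the table and column names are made up; for big tables, online redefinition is the gentler route):

alter table siebel.s_note_att move lob (note_text) store as securefile;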

About sequences: in RAC/Exadata, using the CACHE and NOORDER options together results in the best performance for a sequence. For example, in a Siebel database the S_DOCK_TXN_LOG_S sequence is used to generate the transaction IDs for the S_DOCK_TXN_LOG table. The default cache size for sequences in Oracle is 20. If you have thousands of concurrent users, Siebel/Oracle suggest increasing the cache size to at least 10,000.
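
For example:

alter sequence siebel.s_dock_txn_log_s cache 10000 noorder;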

Last 2 things:

– For gathering Siebel database statistics, always use the latest version of coe_siebel_stats.sql. As of now, the latest version is 11.4.4.5
– The script coe_siebel_profile.sql provides a list of columns that are not indexed but are potentially good candidates for indexing, according to their usage by the Optimizer.

The truth is, however, slightly different. These are Oracle internal, so-called pseudo cursors, which are just structures providing direct access to database objects by avoiding the overhead of writing an explicit SELECT statement. Most of the time these structures directly access the data dictionary tables, CLOBs and BLOBs; sometimes even NCHAR and NVARCHAR2 columns.

This is the reason why these internal cursors show in OEM and AWR with the legend “SQL Text Not Available”. There is just no SQL.

They are used internally within the database and cannot be described. At least not completely. However, here is what you can do as suggested by MOS 1298471.1:

The second query shows us the pseudo PL/SQL used by Oracle internally. Notice the missing value for SQL ID!

Now, here is a question: how to tune pseudo cursors? If you run STA (SQL Tuning Advisor), the error message you get will be “The SQL has aged out of the cursor cache”. OEM will notify you with “The SQL statement is not available”:

So the bottom line is: if you see messages about SQL Text not being available, it is most likely due to Oracle trying to access internally LOBs or its own tables.


Mutexes, or mutual exclusion algorithms, are a lighter and more granular concurrency mechanism than latches; they are used in concurrent programming to avoid the simultaneous use of a common resource.

Oracle started using mutexes in 10g for certain operations in the library cache, but from 11g all library cache latches were replaced by mutexes, which led to the “library cache: mutex X” concurrency event familiar to all DBAs:

The irony here is that the reason for obtaining a mutex is to ensure that certain operations are properly managed for concurrency, but the management itself leads to contention. These are the top 3 differences between mutexes and latches:

– A mutex can protect a single structure, latches often protect many structures
– A mutex get is about 30-35 instructions in the algorithm, compared to 150-200 instructions for a latch get
– A mutex is around 16 bytes in size, compared to 112-200 bytes for a latch

Last month, I found a very interesting and detailed article by Anju Garg on latches and mutexes. It is worth reading.

The aim of this post is to suggest how to proceed if you hit the “library cache: mutex X” concurrency event, which simply means that a library cache operation is being performed and Oracle is trying to get the library cache mutex in eXclusive mode.

Do the following:

1. Run a query against the x$kglob table to find the names of the objects being pinned most of the time. x$kglob is the resource structure table for library cache locks and pins. Jagjeet’s note gives a very good example.
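
A sketch of such a query (x$ tables are undocumented and readable only as SYS); P1 of the “library cache: mutex X” event holds the hash value of the object:

select distinct kglnaown owner, kglnaobj object_name
from x$kglob
where kglnahsh = :p1;  -- :p1 = the P1 value from v$session or v$active_session_history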

2. If you have issues with PL/SQL packages and procedures, use dbms_shared_pool.markhot() to mark them as hot. As suggested by Andrey Nikolaev, Oracle creates multiple copies of marked objects in the library cache, but frequently uses only one of them. That is why, in order to achieve uniform distribution, mark the PL/SQL objects as hot immediately after the instance restart.

The documentation for the dbms_shared_pool.markhot procedure can be found here.
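
A hypothetical call (the schema and package names are made up):

exec dbms_shared_pool.markhot(schema => 'APP', objname => 'PKG_ORDERS');

-- and to undo it later
exec dbms_shared_pool.unmarkhot(schema => 'APP', objname => 'PKG_ORDERS');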

3. If the issue is sequence related, then either verify with Oracle that it is not a bug, get rid of the sequence if possible or at least modify its parameters (cache, order, etc.).

As you can see from Dom Brook’s article, issues are often package related. Have a look at my screenshot from ASH. The data source is V$ACTIVE_SESSION_HISTORY.

You may review on Metalink “Waitevent: library cache: mutex X” (Document ID 727400.1) for the list of known Oracle bugs, but from what I have seen, sometimes patches do not help, so finding a workaround is all the DBA is left with. An ASH report will be very helpful:

If you see “library cache: mutex X” as a top event, note that ADDM will first find that there is contention for latches related to the shared pool that were consuming significant database time (no surprise). For the top SQL statements, ADDM will claim something like “Waiting for event library cache: mutex X in wait class ‘Concurrency’ accounted for 97% of the database time spent in processing the SQL statement with SQL_ID d2svmdafangyq.” Look then at the “Rationale” sections and hunt for the names of the PL/SQL packages 🙂


Douglas Adams, the author of “The Hitchhiker’s Guide to the Galaxy” said: “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools”.

I am talking here about Oracle’s Cost Based Optimizer.

The optimizer needs to make decisions about execution plans in a very short time: not all possible execution plans can be examined. Starting with Oracle 10g, Oracle allows the optimizer to run in tuning mode, where it can gather additional information and make recommendations about how specific statements can be tuned further.

Funny thing here is that we rely on the same component (CBO) to improve the SQL statement that could not generate an optimal plan in the first place.

In this post, I would like to show something about Oracle 11gR2 that cannot be found either on Google or on Metalink. It is about tracing the SQL Tuning Advisor.

Christian Antognini’s article on SQL Profiles and how to trace them is probably the best written on that subject. Using the undocumented parameter _STN_TRACE, it is possible to trace what happens during the SQL Tuning Advisor performance analysis. I find the choice of the domain name used for Oracle FAQ slightly amusing, but I guess this is not intentional.

