When building a materialized view on a prebuilt table, you are likely to encounter "ORA-12060: shape of prebuilt table does not match definition query" at some point. Two common causes for this error are:

Forgetting the primary key on the prebuilt table for a materialized view using the “WITH PRIMARY KEY” clause

A difference in data type or data length for a column, for example a VARCHAR2( 100 ) in the parent table mapped to a VARCHAR2( 50 ) column in the prebuilt table. This can also happen with NUMBERs that differ in precision and scale.

A very interesting case of this error was encountered when working with some of the more advanced functionality of materialized views. A materialized view can be created with columns that match its parent table but are NULLed at refresh. This is handy when you want to maintain the same structure as the parent table without exposing any of the data; for example, an updatable materialized view that pushes inserted values for a column to its parent, but clears out all data in that column upon refresh. The CAST() function can be used to accomplish this, but it commonly leads to the ORA-12060: shape of prebuilt table does not match definition query error. Here is an example of a materialized view on a prebuilt table consisting of two columns, one of which should be preserved in the parent table, but cleared in the materialized view:
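The listing below is a minimal sketch of such a setup; the table and column names (parent_tab, parent_tab_mv, amount) are hypothetical:

```sql
-- Parent table: both columns are unconstrained NUMBERs
CREATE TABLE parent_tab (
    id       NUMBER PRIMARY KEY,
    amount   NUMBER
);

-- Prebuilt table with the same shape as the parent
CREATE TABLE parent_tab_mv (
    id       NUMBER PRIMARY KEY,
    amount   NUMBER
);

-- The CAST( NULL AS ... ) column is cleared at every refresh
CREATE MATERIALIZED VIEW parent_tab_mv
ON PREBUILT TABLE
AS
SELECT id,
       CAST( NULL AS NUMBER ) AS amount
  FROM parent_tab;
-- ORA-12060: shape of prebuilt table does not match definition query
```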

This error originates from the NUMBER column being CAST, even though the parent table and the materialized view declare the same NUMBER datatype. When no precision and scale are specified for a NUMBER, they are actually stored as NULL, not as a default precision and scale as one might expect. When the CAST() function is introduced into the materialized view, the lack of a consistent, specified precision and scale triggers this error. This can be solved by either of the following:

Use the “WITH REDUCED PRECISION” clause

Specify a consistent precision and scale in the parent table, prebuilt table, and for the CAST() function in the materialized view

The “WITH REDUCED PRECISION” clause is a good place to start since it should be sufficient in most cases:
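A sketch of this option, using hypothetical names for the parent table (parent_tab) and prebuilt table (parent_tab_mv):

```sql
-- WITH REDUCED PRECISION allows the precision of the prebuilt table's
-- columns to differ from what the definition query would return
CREATE MATERIALIZED VIEW parent_tab_mv
ON PREBUILT TABLE
WITH REDUCED PRECISION
AS
SELECT id,
       CAST( NULL AS NUMBER ) AS amount
  FROM parent_tab;
```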

The second option, although more complicated, helps to ensure no data is lost between the parent table and materialized view. The following example requires rebuilding the parent table, specifying the precision and scale:
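A sketch of this approach, with a hypothetical NUMBER( 10, 2 ) precision and scale applied consistently across all three definitions:

```sql
-- Rebuild the parent table with an explicit precision and scale
DROP TABLE parent_tab;
CREATE TABLE parent_tab (
    id       NUMBER( 10 ) PRIMARY KEY,
    amount   NUMBER( 10, 2 )
);

-- The prebuilt table must match
DROP TABLE parent_tab_mv;
CREATE TABLE parent_tab_mv (
    id       NUMBER( 10 ) PRIMARY KEY,
    amount   NUMBER( 10, 2 )
);

-- The CAST() must use the same precision and scale
CREATE MATERIALIZED VIEW parent_tab_mv
ON PREBUILT TABLE
AS
SELECT id,
       CAST( NULL AS NUMBER( 10, 2 ) ) AS amount
  FROM parent_tab;
```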

Oracle offers an analytic function, RATIO_TO_REPORT(), that can be used to easily compute the percentage of each record to various totals returned by a query. As an example, consider an inventory count for a department store. The INVENTORY_COUNT table captures the product name, its department, and the total count for that product. Here is the table:
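A sketch of the table with some sample data (the column names, products, and counts are made up for illustration):

```sql
CREATE TABLE inventory_count (
    product      VARCHAR2( 50 ),
    department   VARCHAR2( 50 ),
    item_count   NUMBER( 10 )
);

INSERT INTO inventory_count VALUES ( 'Shirts',  'Menswear',   120 );
INSERT INTO inventory_count VALUES ( 'Slacks',  'Menswear',    80 );
INSERT INTO inventory_count VALUES ( 'Dresses', 'Womenswear', 150 );
INSERT INTO inventory_count VALUES ( 'Blouses', 'Womenswear',  50 );
COMMIT;
```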

A requirement is given stating that the query should produce the count for each product as well as two additional computations:
1) Ratio of each product count in relation to the total department product count
2) Ratio of each product count to the total inventory product count
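Both ratios fall out of RATIO_TO_REPORT() by varying the analytic window: partitioning by department produces the first, and an empty OVER () clause produces the second. A sketch, assuming INVENTORY_COUNT has columns product, department, and item_count:

```sql
SELECT department,
       product,
       item_count,
       -- Share of this product within its department
       ROUND( RATIO_TO_REPORT( item_count )
                OVER ( PARTITION BY department ) * 100, 2 ) AS pct_of_department,
       -- Share of this product within the entire inventory
       ROUND( RATIO_TO_REPORT( item_count )
                OVER () * 100, 2 ) AS pct_of_total
  FROM inventory_count
 ORDER BY department, product;
```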

There are two primary cursor types that can be used in Oracle PL/SQL: implicit cursors and explicit cursors. Implicit cursors take the form of a FOR loop that individually processes each row returned by a SELECT statement; the cursor is implicitly opened and closed. Explicit cursors require the declaration of a cursor object, which is then opened, processed, and finally closed. I’ve read several sources stating that implicit cursors, although easier to code in many cases, are less efficient. I thought I would try some less-than-scientific benchmarking of these two cursor types to see just what the difference is. To test, I’m using a simple two-column table. ID serves as a sort column and VALUE is the arbitrary value we will retrieve for processing in the loop. The benchmark does nothing with the returned values; it simply places them in a variable with each iteration. Here is the test table I’m using:
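A sketch of that table (the column sizes are assumptions):

```sql
CREATE TABLE test_tab (
    id      NUMBER( 10 ),
    value   VARCHAR2( 50 )
);

-- Index used to sort the values returned by the benchmark query
CREATE INDEX test_tab_id_idx ON test_tab ( id );
```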

Next, I’ll be using three simple procedures to set up and run the benchmark. BUILD_TEST_TAB simply populates TEST_TAB with 2 million random values, incrementing the ID field for each record. An index resides on this column to sort the list of values returned from the query. TEST_IMPLICIT_CURSOR creates a FOR loop that traverses the results of a query against TEST_TAB. TEST_EXPLICIT_CURSOR creates an explicit cursor for the same query.
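The three procedures might be sketched as follows (the random-string length and exact loop bodies are assumptions):

```sql
CREATE OR REPLACE PROCEDURE build_test_tab
IS
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE test_tab';

   -- 2 million rows with an incrementing ID and a random value
   INSERT INTO test_tab ( id, value )
   SELECT LEVEL,
          DBMS_RANDOM.STRING( 'a', 20 )
     FROM dual
   CONNECT BY LEVEL <= 2000000;

   COMMIT;
END build_test_tab;
/

CREATE OR REPLACE PROCEDURE test_implicit_cursor
IS
   l_value   test_tab.value%TYPE;
BEGIN
   FOR rec IN ( SELECT value FROM test_tab ORDER BY id )
   LOOP
      l_value := rec.value;  -- do nothing with the value
   END LOOP;
END test_implicit_cursor;
/

CREATE OR REPLACE PROCEDURE test_explicit_cursor
IS
   CURSOR c_test IS
      SELECT value FROM test_tab ORDER BY id;

   l_value   test_tab.value%TYPE;
BEGIN
   OPEN c_test;
   LOOP
      FETCH c_test INTO l_value;
      EXIT WHEN c_test%NOTFOUND;
   END LOOP;
   CLOSE c_test;
END test_explicit_cursor;
/
```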

Below is the call to the two test procedures. I rebuild TEST_TAB twice and run benchmarks 10 times against each table, alternating between implicit and explicit calls. This is an attempt to reduce the influence of other processes running, stats being generated on the table, and other factors that might skew results.
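A sketch of the driver, timing each run with DBMS_UTILITY.GET_TIME (which returns hundredths of a second):

```sql
DECLARE
   l_start   PLS_INTEGER;
BEGIN
   FOR build IN 1 .. 2
   LOOP
      build_test_tab;

      FOR i IN 1 .. 10
      LOOP
         l_start := DBMS_UTILITY.GET_TIME;
         test_implicit_cursor;
         DBMS_OUTPUT.PUT_LINE( 'Implicit: '
            || ( DBMS_UTILITY.GET_TIME - l_start ) || ' hsec' );

         l_start := DBMS_UTILITY.GET_TIME;
         test_explicit_cursor;
         DBMS_OUTPUT.PUT_LINE( 'Explicit: '
            || ( DBMS_UTILITY.GET_TIME - l_start ) || ' hsec' );
      END LOOP;
   END LOOP;
END;
/
```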

Interestingly enough, this simple series of tests is pointing to significantly better performance of implicit over explicit cursors. This is the opposite of what I have read in several sources. Granted, this is an oversimplified benchmark test and is only taking timing into account. I have not attempted to measure or compare resource usage or any other metric.

In spite of the many GUI-based tools now available to developers working with Oracle, it is still sometimes necessary to use the SQL*Plus prompt. A properly configured session helps a lot to make your interaction more pleasant and readable. SQL*Plus will automatically execute any script named login.sql that resides in the directory from which it is launched. However, I frequently find myself moving around directories to more easily reference scripts on disk. That’s why I keep a file with a short name in my root directory. Then, after launching SQL*Plus on Windows, I simply need to type @C:ses (notice a backslash is not needed) to set up my session. Below is an example of a session configuration I like to use:

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SET SERVEROUTPUT ON -- Show output from PL/SQL
SET TRIMSPOOL ON -- When spooling to file, trim trailing whitespace on all lines
SET PAGESIZE 40000 -- Large pagesize so column headers rarely repeat; setting to zero suppresses headers entirely
SET LINESIZE 200 -- Line length of 200
SET LONG 40000 -- Visible long length matches pagesize
SET LONGCHUNKSIZE 200 -- Visible long chunk matches line length
SET SQLPROMPT "_user'@'_connect_identifier> "
SET TIMING ON -- Show runtimes of all commands

Oracle, like most databases, supports atomic transactions. This prevents DML changes from being seen by other users until you commit your transaction. It also provides the ability to rollback a transaction should an exception occur and you do not want the changes committed to the database. However, Oracle offers several packages that support more “immediate” operations that cannot, by their very nature, be made part of a transaction. One example of such an operation is a network exchange, such as sending an email or connecting to a web server. Let’s say, as part of a transaction, you would like to make a post to a web site using UTL_HTTP. You’d like to only post to this webpage if you succeed and would like to “rollback” this post with the rest of the transaction should an exception occur. An oversimplified example of a posting function might look like this:
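A sketch of such a procedure; the URL and message parameter are placeholders:

```sql
CREATE OR REPLACE PROCEDURE web_post( p_message IN VARCHAR2 )
IS
   l_req    UTL_HTTP.REQ;
   l_resp   UTL_HTTP.RESP;
BEGIN
   -- The post goes out the moment this procedure runs;
   -- it cannot be rolled back with the transaction
   l_req := UTL_HTTP.BEGIN_REQUEST( 'http://www.example.com/post', 'POST' );
   UTL_HTTP.SET_HEADER( l_req, 'Content-Type',
                        'application/x-www-form-urlencoded' );
   UTL_HTTP.SET_HEADER( l_req, 'Content-Length', LENGTH( p_message ) );
   UTL_HTTP.WRITE_TEXT( l_req, p_message );

   l_resp := UTL_HTTP.GET_RESPONSE( l_req );
   UTL_HTTP.END_RESPONSE( l_resp );
END web_post;
/
```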

However, if you call this procedure mid-transaction, the post will go out immediately and cannot be reversed. To avoid the need to rollback the post, all the information can be saved until the very end of the transaction. However, this can be cumbersome to code. It also does not tie directly to the transaction and relies on you to perform the post operation at the very end of the transaction, once you are certain a rollback is not needed. A trick to couple this type of operation to a database transaction is to use a job. When a job is submitted in Oracle, it is not actually scheduled for execution until the transaction is committed. If a rollback occurs, the job will never be scheduled in the queue. The following code shows such an example:
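A sketch of the trick, using a hypothetical web_post procedure (revised here with an exception block) and submitting the call through DBMS_JOB:

```sql
-- web_post revised to swallow errors, so a failed post is not retried
CREATE OR REPLACE PROCEDURE web_post( p_message IN VARCHAR2 )
IS
   l_req    UTL_HTTP.REQ;
   l_resp   UTL_HTTP.RESP;
BEGIN
   l_req := UTL_HTTP.BEGIN_REQUEST( 'http://www.example.com/post', 'POST' );
   UTL_HTTP.SET_HEADER( l_req, 'Content-Type',
                        'application/x-www-form-urlencoded' );
   UTL_HTTP.SET_HEADER( l_req, 'Content-Length', LENGTH( p_message ) );
   UTL_HTTP.WRITE_TEXT( l_req, p_message );
   l_resp := UTL_HTTP.GET_RESPONSE( l_req );
   UTL_HTTP.END_RESPONSE( l_resp );
EXCEPTION
   WHEN OTHERS THEN
      NULL;  -- hide all errors so the job handler never sees a failure
END web_post;
/

DECLARE
   l_job   BINARY_INTEGER;
BEGIN
   -- ... transactional DML goes here ...

   -- Schedule the post one second from now; the job is not queued
   -- for execution until COMMIT, and is never scheduled on ROLLBACK
   DBMS_JOB.SUBMIT(
      job         => l_job,
      what        => 'web_post( ''Transaction committed'' );',
      next_date   => SYSDATE + 1 / 86400 );

   COMMIT;
END;
/
```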

Notice how the job is scheduled one second after the time of submission. This means the job will execute almost immediately after committing the transaction. Before the commit, the scheduled job is available in the user_jobs view. After the commit, the job is no longer visible in user_jobs, indicating it has executed and completed.
Notice that I added an exception block to the web_post procedure that hides all errors. If a job fails and the error is thrown out to the job handler, Oracle will retry the job repeatedly. This example assumes the post to the web page is not really important and a failure does not need to be logged. Most likely, this will not be the case in a real application, but you need to be aware that jobs will retry if an exception is thrown to the job handler.

If you need to set and lock the system time in Oracle, you can use the ALTER SYSTEM SET FIXED_DATE command. The string format for the date is either YYYY-MM-DD-HH24:MI:SS or the current NLS_DATE_FORMAT for the session; see the FIXED_DATE entry in the Oracle Database Reference.

Interestingly, though, fixing the date applies to SYSDATE(), but not to SYSTIMESTAMP(). The code below contains an example of fixing and releasing the date as well as several queries to these functions.
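A sketch of the sequence (the date value is arbitrary):

```sql
-- Fix the date
ALTER SYSTEM SET FIXED_DATE = '2015-06-01-12:00:00';

SELECT SYSDATE FROM dual;        -- returns the fixed date, frozen
SELECT SYSTIMESTAMP FROM dual;   -- still returns the real system time

-- Release the fixed date
ALTER SYSTEM SET FIXED_DATE = NONE;

SELECT SYSDATE FROM dual;        -- back to the real system time
```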

The LISTAGG() function in Oracle is very useful for aggregating column values into single delimited strings. However, it is limited by the maximum length of a VARCHAR2 within the database. In Oracle 11g, this maximum is 4000 characters. Thankfully, this limit can be extended to 32,767 characters in 12c. Here is an example of what you may experience when exceeding this internal limit:
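A sketch that reproduces the failure with a hypothetical table of 100 fifty-character values (100 rows of 51 characters including delimiters comfortably exceeds 4000):

```sql
CREATE TABLE listagg_tab (
    id      NUMBER( 10 ),
    value   VARCHAR2( 50 )
);

INSERT INTO listagg_tab ( id, value )
SELECT LEVEL, RPAD( 'x', 50, 'x' )
  FROM dual
CONNECT BY LEVEL <= 100;

SELECT LISTAGG( value, ',' ) WITHIN GROUP ( ORDER BY id ) AS value_list
  FROM listagg_tab;
-- ORA-01489: result of string concatenation is too long
```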

In this example, we are simply trying to convert all values into a comma delimited list. However, the total length of the aggregated values exceeds the 11g limit of 4000 characters. This causes an internal overrun when trying to aggregate these values. Let’s say we can loosen the requirements for our return value a little. Maybe we can return as much of the list as possible, but not exceed the system-imposed limit of 4000 characters. Although not very straightforward, additional analytic functions can be used to determine when this maximum is reached and prevent the internal overrun from occurring. In this case, we’ll use a running total in the form SUM( LENGTH( value || ',' ) ) OVER ( ORDER BY id ASC ) to determine at which point we exceed the total character limit. From this row forward, we will begin taking substrings of the values to stay at the limit. The first row to exceed the character limit will be truncated to meet a total of 4000 characters. All strings thereafter will effectively be reduced to NULL. Notice the comma delimiter has been moved out of the LISTAGG() function and into the inner query so we can count the delimiters as part of the 4000 character limit.
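A sketch of this approach against the hypothetical LISTAGG_TAB table:

```sql
SELECT LISTAGG( value ) WITHIN GROUP ( ORDER BY id ) AS value_list
  FROM ( SELECT id,
                -- Truncate each delimited value to whatever room is left
                -- under the 4000-character limit; once the limit is hit,
                -- the SUBSTR length drops to zero or below and yields NULL
                SUBSTR( value || ',',
                        1,
                        4000 - ( SUM( LENGTH( value || ',' ) )
                                   OVER ( ORDER BY id ASC )
                                 - LENGTH( value || ',' ) ) ) AS value
           FROM listagg_tab );
```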