Oracle Data Warehouse Design and Architecture

In the BI system that I’m working on I have the perfect application for the use of Expression Filters. It is almost as if someone specifically devised this problem in order to tempt me into using them, and I really am very tempted indeed.

Let me explain.

In the application which feeds data to the BI system users can define “display filters” that determine whether network alerts are displayed in particular windows or not. For example they might create a filter called “Failed Core Devices” for which the definition reads: “IS_CORE=’Y’ and FAILED>=1”. Network events which meet those criteria are displayed in that window. They might also have a filter called “Failed Cisco Devices” with a definition: “DEV_TYPE LIKE ‘CISCO’ AND FAILED >= 1”. So, a failed Cisco core device will actually appear in both windows.

The requirement is to tag which windows an event was displayed in at a particular time, and because there are no data elements on each event to tell you the display window(s), the only way to work that out is to evaluate each filter expression against each event.

This is exactly what an Oracle Expression Filter does, and although I would have to convert the definitions from Netcool ObjectSQL to Oracle SQL (LIKE in Netcool ObjectSQL is a regular expression, so “DEV_TYPE LIKE ‘CISCO'” has to be converted to “REGEXP_LIKE(DEV_TYPE,’CISCO’)”), they are an obvious fit for the problem.

My query to determine the filter names for the events would be reduced to something like:
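The code sample is missing from the post, but with an Expression Filter in place the query would have been a sketch along these lines (the events and display_filters table names and their columns are illustrative; EVALUATE is the Expression Filter operator that applies each stored predicate to a set of attribute values):

```sql
-- Illustrative only: display_filters holds one row per user-defined filter,
-- with the predicate text stored in the expression column filter_expr.
select e.event_id,
       f.filter_name
from   events e,
       display_filters f
where  evaluate(f.filter_expr,
                'is_core=>'''  || e.is_core  || ''',' ||
                'failed=>'     || e.failed   || ','   ||
                'dev_type=>''' || e.dev_type || '''') = 1;
```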

Which would obviously be pretty neat and tidy, considering the alternatives (think of how you might implement an expression such as “(COL1 NOT IN (1,3,4,5,78,900) AND COL2 IS NOT NULL) OR NOT REGEXP_LIKE(COL3,’^WHATEVER’)” by other means as simply as you would with an expression filter). It could be implemented as part of the ELT or as a filter or join in Business Objects (or your current query tool of choice).

However, what happens if I move on to another client and this method fails because of a new filter definition that references a new data element not currently included in the Expression Filter attribute set? Or an upgrade goes wrong? Or a patch breaks it? Or we discover a bug? Can I sleep at night if I’m leaving code behind that will have most reasonably competent Oracle practitioners reaching for Google to find out what EVALUATE is, and then has them reading documents for the next couple of hours? Even if I sit the support staff down and show them all how it works, that knowledge will have evaporated in a couple of months.

Documentation? Well, I can certainly document how expression filters work, and how to detect problems and perform apparently simple tasks like modifying the attribute set (I already did something similar for DBMS_Scheduler chains), but I’m not going to think of everything, and even then do I want people following written instructions without understanding the architecture of expression filters? It’s pretty easy to get into a mess over this object-oriented stuff.

The alternative? Currently unknown. The few options identified are prone to inaccuracy or imprecision (eg. they’ll only identify a single filter instead of the full set), and they tend to require continual maintenance of the ELT or the query tool or both whenever a filter is created, deleted or modified.

So to summarise: do I implement a solution that is an exact technical match with potentially high future maintenance problems, or a poor technical solution with definite low-level ongoing maintenance problems?

A quick comment of mine on the Oracle Forums prompts me to add a further note of explanation here.

Many of us are pretty much wedded to the use of sequences to generate primary key values, either by using a trigger to populate the column or by referencing the sequence next value in the insert statement itself. We do this because they provide concurrent inserts with unique values, and we don’t care about missing the occasional value here and there due to caching, rollback etc.
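The “Old School” trigger-and-sequence pattern, for reference (object names are illustrative, borrowing dim_node and node# from later in the post):

```sql
create sequence dim_node_seq;

-- classic surrogate key population: one sequence fetch per inserted row
create or replace trigger dim_node_bi_trg
before insert on dim_node
for each row
begin
  select dim_node_seq.nextval
  into   :new.node#
  from   dual;
end;
/
```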

Well, sequences are fine things for solving concurrency issues in primary key generation, but I’m struggling to recall a time when I have had multiple processes inserting new values into a data warehouse table concurrently. I can see that it might happen on a fact table, though it would rule out the presence of enabled bitmap indexes, but for a dimension we are generally inserting values from a data source that we have found do not already exist as dimension values.

Without the need to support concurrent inserts the need to use a sequence largely disappears, and we are left instead with the need to use an object whose definition on creation often has to be inconveniently derived from the current maximum value in some column or other.

So, I’ve largely stopped using sequences to generate primary key values. Instead I have code of the following form:
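The original listing is missing here, but from the description that follows it would have been of roughly this form (dim_node, node# and node_txt are named in the text; the stg_node source table and the log_dml procedure are placeholder names of my own):

```sql
insert into dim_node (node#, node_txt)
with current_max as
     (  -- fast read of the primary key index for the current maximum,
        -- with Coalesce() returning zero for an empty dimension table
      select coalesce(max(node#), 0) max_node#
      from   dim_node),
     new_values as
     (  -- source values not yet present in the dimension, ordered to
        -- slightly reduce index maintenance on the node_txt constraint
      select distinct node_txt
      from   stg_node s
      where  not exists (select null
                         from   dim_node d
                         where  d.node_txt = s.node_txt)
      order by node_txt)
select cm.max_node# + row_number() over (order by nv.node_txt),
       nv.node_txt
from   current_max cm,
       new_values  nv;

-- in the enclosing PL/SQL, something like:
--   log_dml($$plsql_unit, $$plsql_line, sql%rowcount);
```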

The “current_max” subquery factoring clause (SQFC) gives an extremely fast read of the index on the primary key column (node#) of the dimension table to find the current maximum value, using Coalesce() to ensure that an empty dimension table in a newly deployed schema returns zero instead of NULL.

The “new_values” SQFC provides a list of the new values, ordered so as to reduce (very slightly) the overhead of index maintenance on the unique constraint that protects the dim_node.node_txt column.

Witness also the lovely use of $$PLSQL_UNIT and $$PLSQL_LINE to pass to the DML logging procedure the approximate code location for the activity.

Sweet.

One side effect of this is that the primary key values are gap-free, unless one were to delete rows other than those with the highest primary key values, of course. I wonder if the optimiser has any special way of treating columns in tables of X rows for which the maximum value is equal to the minimum value plus (X-1) and the number of distinct values is X? Probably not, but I like to think that it would put the cat among the pigeons if it did.

Anyway, the only exception I have to the use of this method in the current system I’m working on is for tables populated by an Oracle Gateway process. Primary key values for those are still populated by trigger and sequence — what I have been describing as “Old School” to the imaginary critic in my head.

If you are merging into a data warehouse fact table (not so likely) or a large aggregation of a fact table (more likely) then you can improve efficiency by adding a transitive predicate into the USING clause to promote partition pruning on the target table.

Unless you are using Partition Change Tracking I think it is very unlikely that the MERGE inherent to a materialized view refresh is going to partition prune on the target table.
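As a sketch of the idea (table and column names assumed): if the source rows are known to fall within a range of the target’s partition key, repeating that range explicitly against the target in the ON clause gives the optimizer the chance to prune target partitions:

```sql
merge into fact_agg t
using (select date_key, cust_key, sum(amount) amount
       from   fact_stage
       where  date_key between :start_key and :end_key
       group  by date_key, cust_key) s
on (    t.date_key = s.date_key
    and t.cust_key = s.cust_key
        -- transitive predicate: already implied by the join and the
        -- source filter, but stated explicitly to promote pruning
    and t.date_key between :start_key and :end_key)
when matched then
  update set t.amount = s.amount
when not matched then
  insert (date_key, cust_key, amount)
  values (s.date_key, s.cust_key, s.amount);
```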

I’m told that it’s very important to keep one’s brain in shape as one grows older. Or “drifts gently towards the twilight” as I prefer to think of it. Trying to look on the optimistic side, obviously. Fortunately I end each day with my brain wrung out like a well-used sponge courtesy of some ETL development work on a largeish data warehouse development.

Although the regular ETL development is outsourced, we are responsible for the coding that processes historical data, so on deployment we will run our historical load code and then the “other team’s” regular load will take over. Yes, it is a strange arrangement. And by “we” I really mean my colleagues somewhere in the pestilent jungles of Brazil, surrounded by poisonous frogs and who-knows-what other kind of fearsome thing, and living on a diet of rice and beans in their houses on stilts.

My own role is not so bad. I don’t have to get too involved in the detailed business logic that the code has to implement, and thank goodness for that because frankly I’ve not seen anything so complex for … well, forever. So I sit here in a metaphorically lofty perch, making myself popular by criticising the hard work of others and encouraging them to greater efforts by calling them lazy if they stop work before midnight (the traditional management role). Yes, it’s a good life alright. I constantly remind them that the real aim of the project is to produce a process for loading five years of historical data that is faster than the “B-Team’s” load of a single day.

I think our methodology works quite well though. When you are down in the weeds of the logic (did I mention that it is particularly complex logic?) it is always difficult to see the slightly larger picture of where you’re taking the data next, how to optimise specific ETL stages, and how to adjust the overall process to make everything more efficient. It’s a little like pair programming, I guess.

And we do get to do the fun bit — 10 million records joining 0-M:0-N through eight OR’d join conditions to 10 million other records, with a couple of billion joins resulting that we then artfully winnow down to a few million. The optimization process is a little mind bending, with all sorts of issues that we wouldn’t normally worry about too much becoming vitally important in an effort to avoid blowing out the host server’s memory with a degree of parallelism of 40. I have to say that there was initially some FUD over the historical process, with estimates of the load duration being bandied about that were in the order of months — we never thought it would take more than five days, and it’s now looking like it’ll run in under one day, which means that we’d be able to start the daily load process directly afterwards without altering it. And I think we still have a few more tricks up our sleeves as well in case we need to buy an extra hour somehow.

In fact it’s turning into something of a stroll through the SQL Reference book. The list of features we’re touching on includes analytic functions (which have become a second language for everyone now, along with the “PCTFREE 0 NOLOGGING COMPRESS” which I nearly appended to my sandwich order the other day), connect by clauses, and even some rather exotic stuff with finding the lowest set bit in a something-or-other that I don’t understand yet but which Heitor thought of while lying in bed one night and which he used to help reduce an eight hour process to fifteen minutes. They say that they don’t have siestas in Brazil, but maybe they should start.

I just posted this question to the Oracle list, but thought I might post it here in case it reaches a wider audience. It also may be a situation that others in the DW arena have faced before.

I have a large and busy OLTP table, 100GB or so, against which there is a need to capture changes. Until an asynchronous CDC solution is in place we have to rely on two columns: create_date and update_date (null until the first update), both being of DATE type of course.

These are currently unindexed, but there is a desire to index them to improve change capture performance for queries such as:

select …
from …
where (create_dt >= trunc(sysdate)-1 and create_dt < trunc(sysdate))
   or (update_dt >= trunc(sysdate)-1 and update_dt < trunc(sysdate))

The desire is obviously to provide the maximum performance benefit while reducing the impact on the OLTP system.

I thought of four different indexing options:

i) Indexing the columns separately, leading in the best case to an unusual execution plan where the indexes are range scanned and the results merged before accessing the table.

ii) A single composite index (create_date, update_date), leading to a fast full index scan.

iii) A single composite index (update_date,create_date), rewriting the query predicate as …

iv) A single-column function based index on (Nvl(update_dt,create_dt)) and rewriting the predicate appropriately.
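Option (iv) would look something like the following sketch (the table name is assumed). Note that the rewritten predicate finds rows by their most recent change, which is generally what change capture wants — a row created yesterday but updated today is picked up by today’s run rather than yesterday’s:

```sql
-- function-based index on the "most recent change" date
create index big_table_chg_i
  on big_table (nvl(update_dt, create_dt));

-- rewritten capture predicate, matching the index expression exactly
select …
from   big_table
where  nvl(update_dt, create_dt) >= trunc(sysdate)-1
and    nvl(update_dt, create_dt) <  trunc(sysdate);
```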

Whichever of these is chosen the growth pattern for the data naturally tends towards index contention as all the new values are equal to sysdate.

So the nub of my problem comes down to these questions:

Is it possible to mitigate the index contention problem with a suitably high initrans value? Does it seem likely that implementing a reverse key index would be worthwhile in reducing contention? (I could probably take the pain of the fast full scan if it were, although it would lead to a larger index due to the 50/50 block splits.)
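For reference, the reverse-key variant — spreading the hot right-hand edge of the index across many blocks at the cost of range-scan capability — would be declared like this (index and table names assumed):

```sql
create index big_table_chg_i
  on big_table (nvl(update_dt, create_dt))
  reverse
  initrans 8;
```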

Would you regard implementing a function-based index as risky in any significant way on major tables of a busy OLTP system? How about a reverse function based index?

Basically, “what would you do”?

Maybe I’m overthinking it, but I’d like to go to the system developers with a fully thought-out set of options (even if they do tell me to take a running jump anyway).

When you’re designing ETL processes the mantra of “do it in SQL” really gets ingrained in your way of thinking. PL/SQL-based processing as a method of handling business logic is anathema, as the principle of “row-by-row equals slow-by-slow” is very well highlighted by the processing of millions of rows.

Very recently though I found a case where this principle had to be violated, and with performance results that were not at all unpleasant. I was stumped by a SQL problem that I thought would be amenable to analytic functions, but I couldn’t think of a way of doing it. Here’s the situation.

I have two sets of data representing, let us say, sources and targets. Sources and targets only have one attribute each, which is a date in each case, and sources and targets can be joined in a many-to-many relationship. However only one join is valid per source and only one is valid per target, and we have to form connections between sources and targets on a one-to-one basis. We work out which one of the possibly-many joins is the valid connection according to the following logic.

We start with the most recent source. We work out the earliest target to which it is joined, and discard all other targets to which it is joined (if any). The target to which it connected is now ineligible for any other source to connect to it. Then we take the next most recent source and we work through the targets in ascending target-date order and we connect to the first (ie. earliest) target available, discarding all others. The only reason why a target might not be available is that we already connected a source to it.

One way of looking at this is to imagine that the set of joined sources and targets are ordered by source-date descending and then target-date ascending. Examining each pair in turn we ask two questions:

“Has this source already connected to a target?”

“Has this target already connected to a source?”

If the answer to both of those is “No” then we take that join as the valid connection and we move on to the next pair.

It’s very easy to construct a sample data set for this problem as well, by simply joining a list of integers to itself to form pairs such as (1,1), (1,2), (1,3), … , (1000,1000) and ordering the output with a query such as this:
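The query itself is lost from the post, but from the description it would have been something like this sketch (a 1,000-integer generator joined to itself, ordered source-date descending then target-date ascending):

```sql
with integers as
     (select rownum n
      from   dual
      connect by level <= 1000)
select s.n source_id,
       t.n target_id
from   integers s,
       integers t
order by s.n desc,
         t.n asc;
```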

Given that data set the first pair is (1000,1), which becomes the first connection. The next connection will be (999,2), the next will be (998,3) etc.. It’s important to note that the real data set is not like that though, where every source connects to every target, so special conditions in this data set do not apply to the real one.

By the way, there are some special conditions that are easy even in the real data — if a row represents the first occurrence for both the source and the target then that is always going to be the valid connection. Um … that’s actually the only special condition I could think of. There may be some very clever mathematical way of working out others, but I don’t know what it is.

So, the first solution I came up with was to use PL/SQL to read each row in order and attempt to insert into a table having unique constraints on the source_id and target_id columns, handling any uniqueness error. The handling of the error is the only virtue that PL/SQL brings to this case. It was functional, but it was slow because of course it requires a lot of logical reads and writes in the index maintenance. I considered an insert with an error-logging clause by the way, but that doesn’t handle unique or primary key constraint errors and if it did it would still have the same problem of high logical reads and writes.

The next solution was to maintain a pair of PL/SQL associative arrays to handle the detection of uniqueness, and that worked well except that I was using conventional path inserts for the result set, and I didn’t fancy coding up bulk inserts. I know enough PL/SQL to be dangerous but it’s not my native language.

My (currently) final solution was to embed the PL/SQL into a package so that a function to detect uniqueness could be embedded in a SQL statement. I was pretty concerned about the context switch to PL/SQL, and the logic means that this process can’t be parallelised, but the result surprised me. When I streamed 100,000,000 rows through the process to detect 10,000 valid connections it completed in 770 seconds, which works out to around 130,000 uniqueness tests per second. I can’t imagine that being achieved based on detecting violations of a unique constraint.
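The package listing is missing from the post, but from the description it would have been a sketch along these lines (the names ss, is_new and reset are referenced in the text; everything else, including the joined_pairs view in the calling query, is assumed):

```sql
create or replace package ss as
  function  is_new (p_source_id integer, p_target_id integer) return number;
  procedure reset;
end ss;
/
create or replace package body ss as
  type id_set is table of boolean index by pls_integer;
  g_sources id_set;
  g_targets id_set;

  -- returns 1 on the first sighting of both ids, 0 otherwise,
  -- recording each id so that later sightings are rejected
  function is_new (p_source_id integer, p_target_id integer) return number is
  begin
    if g_sources.exists(p_source_id) or g_targets.exists(p_target_id) then
      return 0;
    end if;
    g_sources(p_source_id) := true;
    g_targets(p_target_id) := true;
    return 1;
  end is_new;

  -- purge the arrays between runs
  procedure reset is
  begin
    g_sources.delete;
    g_targets.delete;
  end reset;
end ss;
/

-- the streaming query: rows must arrive at is_new in the critical order
select source_id, target_id
from  (select source_id, target_id
       from   joined_pairs
       order  by source_date desc, target_date asc)
where  rownum > 0
and    ss.is_new(source_id, target_id) = 1;
```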

The call to ss.reset purges the arrays, and must be executed between runs. The rownum > 0 predicate is just the poor man’s way of ensuring that the call to ss.is_new is not pushed into the inline view — Oracle obviously doesn’t know that the row ordering is a critical part of the logic.

Obviously an INSERT (with append hint to invoke direct path) can be prepended to that to perform the actual ETL task.
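The query under discussion is missing from the post; from the commentary that follows it was evidently a four-branch UNION ALL against a denormalized dim table, something like this reconstruction (the hier and dim names come from the multitable insert below; the reading of each branch is a guess from the described plan):

```sql
insert into hier
select city, state, 'City' from dim            -- no distinct: city is unique
union all
select distinct state,   region,  'State'   from dim
union all
select distinct region,  country, 'Region'  from dim
union all
select distinct country, null,    'Country' from dim;
```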

No distinct on the first select clause because we assume that City is unique (it would be if it were the PK on a real dimension, of course). In the absence of indexes to support fast full index scans we have four full table scans and three operations to support the distinct operator …

A pretty clean looking piece of SQL, but multiple scans of tables make my hackles rise. If this were a very large table and indexes couldn’t be leveraged then it could be a performance problem.

An alternative occurred to me:

insert all
when 1 = 1 then into hier values (city ,state ,'City' )
when state_row_ind = 1 then into hier values (state ,region ,'State' )
when region_row_ind = 1 then into hier values (region ,country,'Region' )
when country_row_ind = 1 then into hier values (country,null ,'Country')
select city,
state,
region,
country,
row_number() over (partition by region order by 1) region_row_ind,
row_number() over (partition by state order by 1) state_row_ind,
row_number() over (partition by country order by 1) country_row_ind
from dim
/

The cost of the two methods is exactly the same, but the cardinalities are both out — the correct answer is “19”. Of the two methods I’d expect the first to be more accurate and it did do pretty well. With such a small amount of data it’s difficult to make an accurate comparison, and the proper approach would be to benchmark each method on the production data.

One interesting difference is that with the single table insert method you can control the order of insert into the hierarchy table, which might be important if you wanted to physically cluster particular rows together. I don’t think that there is a deterministic way of controlling the insert order in the multitable insert.