One problem that companies frequently have with Essbase is the metadata and fact data modification process. It’s basically binary. That is, a cube reflects either the “before” or “after” state. Between these two states there is a time gap, when the users are without access to either the “before” or “after” data. Furthermore, if there are any problems with the “after” data, users cannot access the “before” data while the problems are being fixed – because it was destroyed in the process.

Oracle’s EPM development team has worked to reduce the gap between “states,” particularly in ASO, where techniques are available to greatly reduce the impact seen by users. Oracle’s product roadmaps indicate continued work in this area, and we look forward to seeing these improvements in future releases.

Unfortunately, Oracle can address only the gap between the “before” and “after” states. They cannot address logical or functional issues arising from external processes. Limitations of the data source or of the ETL process may prevent a developer from taking advantage of all of the techniques and opportunities provided by Oracle.

Even when all of the metadata and fact data exist to support on-the-fly cube updates, and new capabilities have been developed by Oracle, there still will be times when a transparently transformed cube should not be released, or should even be retroactively recalled. People are apt to make mistakes when modifying metadata, and no matter how many steps are added to the data governance process, be they manual or automated, occasionally the old cube will be better than the new.

Do you want every financial analyst in the company screaming at you? One way to avoid this is never leaving the users in “data limbo.” A four-cube rotation can come to the rescue. Let’s assume the users are working with a cube that has an Essbase application name of Dog and an Essbase database name of Cat (written as Dog.Cat):

Dog.Cat: This is the current “published” cube to which users are connected

To create our four-cube rotation, we would add the following cubes:

Dog_1.Cat: This is the most recent “published” cube, kept as a fallback in case an audit failure is found too late

Dog_2.Cat: This is the second-most recent “published” cube (a “belt and suspenders” strategy)

Dog_W.Cat: W indicates that this is the Work-in-Process cube; this is where restructures and reloads will be performed. Until the metadata modification cycle begins, this is identical to Dog.Cat

Anytime modifications could cause a failure, we must leave the users’ world alone. At all times, the users should be able to cheerfully continue running their queries and reports against Dog.Cat. A four-cube rotation enables this.

Instead of altering Dog.Cat, modifications should be performed against Dog_W.Cat, whatever those changes might be (metadata, fact data calculations, etc.). Upon completion of modifications, the cube should be audited. Hopefully, this audit is automated, but in some companies, it may be manual. Actually, during the monthly-close cycle, some clients use this audit as an opportunity to not only check for process failures, but also to allow the team responsible for closing the books to run reports and verify that there isn’t “just one more JV” required before data is released to the rest of the company.
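As a sketch of what an automated audit step might look like, the snippet below compares grand totals from two level-0 export files, one from the published cube and one from the work-in-process cube. The file names, column layout, and tolerance are all assumptions for illustration; a real audit would check far more than a single total.

```python
def grand_total(export_path: str) -> float:
    """Sum the last (data) column of a whitespace-delimited level-0 export."""
    total = 0.0
    with open(export_path) as f:
        for line in f:
            fields = line.split()
            if fields:
                try:
                    total += float(fields[-1])
                except ValueError:
                    pass  # skip headers or member-name-only lines
    return total

def audit_passes(published_export: str, wip_export: str,
                 tolerance: float = 0.01) -> bool:
    """Flag the publish for review when totals drift beyond the tolerance."""
    return abs(grand_total(published_export) - grand_total(wip_export)) <= tolerance
```

A check like this slots in naturally just before the rotation step: if it fails, nothing has happened to the published cube, and the users never know.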

However it is accomplished, at some point the cube is ready to be “published.” At this point, all access to the four cubes is cut off, the cubes are unloaded from Essbase, and the cubes are rotated: Dog_W becomes Dog; Dog becomes Dog_1; Dog_1 becomes Dog_2; and the old Dog_2 is discarded.

With this slightly atypical process, the users experience only a second or two of interruption. They may have to sign in again, but beyond that, few users even notice. The first secret is that all four cubes have identically named databases, and the applications themselves are never rotated; no changes are made at the application level. As most Essbase developers know, an application appears on the disk drive as a subdirectory under “…/app/” (in this case, “…/app/Dog,” “…/app/Dog_W,” etc.), and the database is a subdirectory below that (“…/app/Dog/Cat,” “…/app/Dog_W/Cat,” etc.).

The second secret to making this work is that we don’t tell Essbase about this rotation. We do it through the operating system, only moving the subdirectories. It takes five steps:

Del …/app/Dog_2/Cat

Move …/app/Dog_1/Cat …/app/Dog_2/Cat

Move …/app/Dog/Cat …/app/Dog_1/Cat

Move …/app/Dog_W/Cat …/app/Dog/Cat

Copy …/app/Dog/Cat …/app/Dog_W/Cat

Because all of the security is based on the Application name, the rotation causes no access problems. Now simply start (“load”) the application and let the users back in.

There are some additional considerations, which I will address in my next blog:

The copy step is the only one that takes any perceptible time at all. I will present some alternatives that can be used if your cube is large.

There are special cases to be considered, depending on whether there are multiple tablespaces (ASO) or multiple volumes (BSO).

If the cube is part of a partition, one extra special-file rotation will be needed.

Finally, your ears may have perked up at the statement, “Anytime modifications could cause a failure….” My next blogpost will expand on this topic, and you will see that ASO and BSO cubes pose differing challenges when trying to decide exactly which changes pose a risk of irrecoverable failure.

What Is Dimensional Irrelevance?

Quite simply, it is combinations of dimension members that never occur within your cube. This can happen for several reasons:

Business reasons: snowshoes are not stocked in your Florida stores

Chart of Accounts issues: departments were defined as valid only in a single Business Unit

Cube structure: Working has no meaning when combined with Actual

Dimensional Irrelevance can be caused by increasing the number of dimensions as well as by increasing dimensional diversity (for example, by changing the typical “Local Currency” member in the “Currency” dimension to have children enumerating each of the different currencies).

In other words, OLAP data tends to be very sparse.

Dimensional Irrelevance and BSO

Okay, we all started with BSO, where we learned that Dimensional Irrelevance and sparse data in general can be your enemy. Sure, BSO’s use of blocks and your choice of sparse and dense settings could reduce the problem, but in the end you still had to worry about sparsity:

The more sparse blocks, the more blocks to aggregate

The more sparse blocks, the larger the index file

Splitting a sparse dimension into multiple dimensions compounds the issue:

It multiplies the number of potential blocks

It multiplies the number of upper level blocks

As a result, both your block count and your calc time increased

With some designs, you would even hit the BSO design limit of 2^104 potential blocks

In fact, one of the most common ways to improve BSO performance is to combine dimensions. Think of combining the Business Unit and Department dimensions in situations where departments exist within a single Business Unit only (the reverse of the third bullet above).
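A back-of-the-envelope calculation, using made-up member counts, shows why combining Business Unit and Department shrinks the potential block count:

```python
# Illustrative arithmetic only; the member counts are assumptions.
business_units = 5
depts_per_unit = 20                                   # each Dept valid in one BU only
all_departments = business_units * depts_per_unit     # Dept dimension holds 100 members

# Two separate sparse dimensions: every BU x Dept pairing is a potential block,
# even though each department is valid in only one Business Unit.
potential_separate = business_units * all_departments

# One combined BusinessUnit/Department dimension: only the valid pairs exist.
potential_combined = all_departments

print(potential_separate, potential_combined)  # 500 vs. 100 potential blocks
```

Four out of five potential blocks in the separate-dimension design can never hold data; in BSO they still inflate the index and the upper-level aggregation work.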

Dimensional Irrelevance and ASO

With ASO, we do not have these problems. There is no calc time to increase because all queries are dynamically calculated. There are no blocks to be multiplied.

Well, the above sentence is not strictly true… but the explanation of why it is not strictly true contains detail that is not directly relevant to this blog (it had to be included, or there would have been complaints that I had not read my own chapter). Feel free to skip down to “End of Detail” and come back to this detail later:

Some would consider the time to Aggregate a cube as a calc time; however, aggregation time is not affected by dimensional irrelevance. It is a function of only two things:

The size of the level 0 dat file, which is proportional to the number of level 0 data cells (see my Rule #3 – All queries against the same aggregation level take the same time†)

The number of aggregations performed (see my Rule #4 – Do not depend on aggregation or other “Maintenance” to make up for bad design)

“Blocks” do exist within ASO (kinda, sorta). They exist in the form of the compression dimension: every group of 16 compression-dimension members in the outline becomes a “block,” identified by a key made up from the remaining (non-compression) dimensions. Again, there is no relation between dimensional irrelevance and performance:

There is no effect when irrelevance is increased/decreased by splitting/combining non-compression dimensions (the combinations of non-compression dimensions, i.e., the number of “potential blocks,” do not matter – only the number of existing combinations, aka “blocks”)

There is an effect when splitting or eliminating the compression dimension, but it is not related to dimensional irrelevance. Instead, it is related to the number of “Blocks” in the cube. Note: Remember that each “Block” of data has overhead: the bitmap or key that identifies the sparse combination to which it belongs. Therefore, splitting or eliminating the compression dimension changes the amount of overhead per data cell, as seen in the ratio of the dat file size to the number of level 0 data cells. Increases in that ratio increase the size of the dat file, and dat file size is directly related to performance.

There can be performance advantages with no compression dimension, and these can greatly outweigh the cost of a larger dat file

See my Rule #9 – The use of a Compression dimension is not a given; consider and test alternatives, including not having a Compression dimension
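To make the overhead point concrete, here is a toy model of the dat-file-size-to-cells ratio with and without a compression dimension. Every number here (cell size, key size, cell count) is an assumption for illustration, not a statement about Essbase internals:

```python
# Toy model only -- sizes and overheads are assumptions, not Essbase internals.
level0_cells = 1_600_000
group_size = 16        # one "block" per group of 16 compression-dimension members
key_bytes = 8          # assumed key/bitmap overhead per block
cell_bytes = 8         # assumed bytes per stored cell value

# With a compression dimension: one key amortized over up to 16 cells.
blocks = level0_cells // group_size
size_with_compression = level0_cells * cell_bytes + blocks * key_bytes

# Without one: every stored cell effectively carries its own key.
size_without_compression = level0_cells * (cell_bytes + key_bytes)

base = level0_cells * cell_bytes
print(size_with_compression / base, size_without_compression / base)
```

The model only shows the direction of the trade-off: fewer keys per cell means a smaller dat file, which is the cost side of the Rule #9 decision about whether to use a compression dimension at all.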

End of Detail: The summary of all of the above is, “With ASO, we do not have these problems.”

Freedom – Or, Life for the Reformed BSO Practitioner

What does the freedom from dimensional irrelevance allow us to do? Many things in typical BSO cubes were designed with this issue in mind, three in particular:

The [Local Currencies] dimension. Wouldn’t it be nice to be able to see each currency by itself?

Compound Entity dimensions – BusinessUnit/Department

Allocations – Usually, there are large scripts with many rules and levels or layers. How about an After-Allocation dimension with each of the layers and rules separately visible, with the original as loaded data at the bottom?

My friend Cameron points out that people think lots of extra dimensions make a cube harder to use. But with Smart View/Smart Slices and/or transparent partitions, these extra dimensions can be hidden from those users who have no interest in them.

Simply splitting up the [Local Currencies] dimension can make currency conversion much simpler. The typical BSO design commingles currencies so that they have to be converted at level 0 and then aggregated. But I learned in 4th grade math that A*rate + B*rate = (A + B) * rate. So, why not aggregate them up and then multiply ((A + B) * rate)?

Why do I suggest that you “aggregate them up and then multiply”? ASO, as you may know, is great at doing the latter ((A + B) * rate); but ASO requires a calculation script when multiplying then aggregating (A*rate + B*rate). This means writing a script, and running it in batch mode, and worrying about concurrency, and…. Let me know if you think this topic should be expanded upon in a future blog posting.
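The arithmetic identity is easy to verify. The sketch below uses made-up amounts and a single made-up rate; in a real cube the identity holds within each currency, which is exactly why splitting the [Local Currencies] dimension pays off:

```python
# Tiny numeric check of the identity behind "aggregate first, then convert".
# Amounts and rate are made up; assume all entities share ONE local currency.
entities_local = [100.0, 250.0, 75.0]
rate = 1.08

convert_then_sum = sum(a * rate for a in entities_local)   # A*rate + B*rate + ...
sum_then_convert = sum(entities_local) * rate              # (A + B + ...) * rate

assert abs(convert_then_sum - sum_then_convert) < 1e-9
```

One multiplication at the top of each currency replaces one multiplication per level-0 cell, and that single top-level multiply is exactly the kind of dynamic calculation ASO is good at.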

There is an additional practice you can be freed from when designing an ASO solution. Dimensional irrelevance and calculation times usually lead people either to split up cubes or to add partitions and use XREFs. This is not necessary with ASO. Dimensional irrelevance is not a problem, and, as pointed out above, Smart Slices can be used to make the larger cube look like the old, small cube to those of your users who cannot cope with the change.

Summary: Relax your requirements and do not let large cubes and many dimensions rule your designs. ASO can handle it.

Next time on the Essbase Mechanic: ASO Attributes – What Can We Do If We DECREASE Dimensional Irrelevance, and then Increase It Using Attributes – A New Look at Time Dimensions.

† The 12+1 Rules of ASO Designing for Performance are introduced in my chapter: How ASO Works and How to Design for Performance in the book Designing Essbase Applications.

It has taken me a long time to get this blog set up and ready to go, so please excuse me if I take a moment to thank the two main movers in that effort: My wife Kathy Walsh and my friend Cameron Lackpour. The order in which I have thanked them is well defined. The ranking as to which one of these two can claim the “Primary Nagger” award I leave to them to debate.

As I mentioned above, this blog will focus on the mechanics and inner workings of Essbase, with a trouble-light shining on ASO, to the extent that I can either divine them or discover them by asking questions. To those of you familiar with my other writings, this should come as no surprise.

Of course, I work on the entire gamut of applications within EPM and am also frequently called on for complex (often hierarchical) SQL development and for automation scripts in both Windows and UNIX. Initially, I will include a number of these topics in my blogs. I have a number of self-directed emails from the past few years reminding me of potential topics to explore. Now I have the vehicle, and I hope you will enjoy the ride with me.
