Why I created a blog

It's been many years since I first created this blog. It has stayed true to Essbase and related topics in that time, and hopefully it has answered questions and given you insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Wednesday, March 10, 2010

Time for another quiz

The last quiz I did was very easy and was answered by a number of people very quickly. It also wasn't technical at all. This quiz is a little harder and makes you think a little.

The problem.

I have expanded Sample.Basic, and Product now has 10,000 members across 5 levels of hierarchy. The alternate hierarchies have been placed under a parent called Alternate_Product_Hierarchies, and the primary hierarchy sits under a member called "All Products". Market has also been expanded down to city level, so the hierarchy now includes county and city levels. There are 3,141 counties, and for fun let's say there are 19,355 cities.

For the Actual scenario, for each month, I need to get Sales at generation 1 of Product and Market so I can use it for an allocation later. Because of the size of the dimensions, I can't just make the upper levels dynamic. And while I could simply agg up the dimensions, that is slower than I need it to be. What is the quickest way I can get my Sales at the top of the dimensions? The code also has to be re-runnable. I'm sure there are multiple ways to do this; I'm interested to see what you come up with.
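For reference, the straightforward (but too slow) approach described above would look something like this calc script sketch. This is the baseline the quiz asks you to beat; the member names are assumed from Sample.Basic and the script is untested:

```
/* Baseline: a full aggregation of both expanded dimensions.
   Re-runnable, but slow at this size. Member names assumed. */
SET UPDATECALC OFF;   /* ignore intelligent calc so reruns recalculate */
FIX ("Actual", "Sales")
   AGG ("Product", "Market");
ENDFIX
```

The cost here is writing every intermediate upper-level block for 10,000 products and 19,355 cities, when all the allocation actually needs is the two top-of-dimension values.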

7 comments:

Anonymous
said...

OK, here's one way, although I don't think it would necessarily be faster, and I don't think anyone would be likely to take this approach.

Make a load rule that ignores the Product and Market columns, loading directly to the top level of each dimension. This can be done either by creating "Products" and "Markets" columns from text, or by setting "Products,Markets" as the header name under Data Load Settings. In the same dialog box, select the option "Add to existing values", and then put "Products,Markets" in the Clear Data Combinations option.

Jared, I like your answer, but it would not be applicable in all cases. You assume Sales was loaded from a data file, but what if it was calculated by multiplying units by price? Unless you exported the data after that calculation, your approach would not work. I'm guessing the export could take time and would mean multiple steps.

While going to ASO would be nice, again it would not work here. The scenario needs the result stored so it can be used in a later calculation, and ASO does not yet support procedural calculations, although you can do a little with solve order.

Create an ASO cube (a temporary cube) to load and agg all the data. Use partitioning to push the data to BSO, loading it at the higher level so it can be used later for the allocation.

An ASO cube is faster for aggregations, and every time you want to load new data you can just clear the existing data and repeat the same process.
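The clear-and-reload cycle described above could be scripted in MaxL along these lines. This is only a sketch of the idea, not a tested script, and the application, database, and file names are made up for illustration:

```
/* Hypothetical clear/load/agg cycle for a temporary ASO cube. */
alter database TempASO.Sales reset data;
import database TempASO.Sales data
   from data_file '/data/sales_month.txt'
   using server rules_file 'ldsales'
   on error write to '/logs/sales.err';
execute aggregate process on database TempASO.Sales;
```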

If Sales is calculated as Units times Price, you could use a little MDX to calculate the data, and it shouldn't be much of an overhead. I don't have much knowledge of ASO, though, and I am not sure that type of calculation can be done there.

Amarnath, that is what I call thinking outside of the box. I think it might take longer to export the data, load it, and calculate it. The biggest problem is that calculations across a partition are slow. The last time I checked, you could not have a replicated partition between an ASO and a BSO cube, so it would have to be a transparent partition. I think the subsequent calculations would be even slower than the time it took to agg the dimensions.

My idea was to have ASO as the source and BSO as the target. I would do all the loads and aggs, and with a transparent partition the data would be reflected in BSO (a transparent partition makes heavy use of the network and might be slow... correct me if I am wrong). So the data is there in BSO, and I can use any type of calculation in BSO. I haven't tried partitioning, though, but I will take some time out of my schedule to check it out.

Well, just for giggles: what if you make the dimensions you need a dynamic total for dense (if they aren't already) to keep the block you need all in memory, then do something like an @SUMRANGE on the level-0 descendants of the hierarchy you want? That way you're not wasting time agg'ing to disk, writing all those intermediate levels, or pounding the index trying to get all the blocks you want. There are several caveats, of course (hurting performance in other areas, agg issues if not using (+), etc.)...
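A rough calc script sketch of that idea follows. It computes only the two top members from level-0 data, skipping every intermediate upper-level combination. The member names, and the assumption that the primary product hierarchy sits under "All Products" while the top of Market is the dimension member itself, come from the quiz setup; the script itself is an untested illustration:

```
/* Sketch only: compute just the top of each dimension.
   Summing @RELATIVE("All Products", 0) rather than all of
   Product level 0 avoids double-counting shared members
   under Alternate_Product_Hierarchies. */
SET UPDATECALC OFF;
FIX ("Actual", "Sales")
   /* Roll Product straight to its primary top for every city */
   FIX (@LEVMBRS ("Market", 0))
      "All Products" = @SUMRANGE ("Sales", @RELATIVE ("All Products", 0));
   ENDFIX
   /* Then roll those city-level results up to the top of Market */
   FIX ("All Products")
      "Market" = @SUMRANGE ("Sales", @LEVMBRS ("Market", 0));
   ENDFIX
ENDFIX
```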