A delve into the disturbing world of EPM, uncovering the mysteries and facts beneath the shiny surface, yawn your way through yet another blog, let's hope this one is interesting...

Sunday, 28 September 2008

ODI Series 4 - Enter the Designer

Another blog and yes more ODI; today the designer comes into play. The designer is where you perform the main integration work, e.g. loading hierarchies from flat files or database tables into planning, or transforming data before it is loaded into essbase. The designer stores all this information in the work repository.

The main elements to the designer are models and projects; a model is a set of datastores corresponding to data structures contained in a physical schema. In a model you perform the reverse engineering, for instance reverse-engineering the dimensions and properties from a planning application into a readable column format. The RKM (Reverse-engineering Knowledge Module) for planning should have already been downloaded (see the first part of the ODI series to find out what you need).

A project is a group of objects developed using ODI, the components to a project are:-

Packages :- A package is a sequence of steps organized into a flow diagram, so say you wanted to wait for a file to arrive, load it into a planning dimension, push it to essbase and then send a success or failure email, then the package is the place for this.

Interface :- An interface is really a set of rules that define the loading of a target Datastore from one or more source Datastores; an example of an interface is loading members into a planning dimension.

Procedure :- A procedure is a reusable component that groups operations that do not fit into the interface category, for example sending a batch of files through FTP.

Variable :- This is a value which is stored in ODI, this value can be set or evaluated to create conditions in packages, it is pretty much what you would expect a variable to be.

Sequence :- A sequence is a variable automatically incremented when used.

Knowledge Modules – I described these in Part 1 of this series so if you need a refresher just have a quick look back.

You will get a much clearer understanding of the different components as we start to use them.
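The package example above (wait for a file, load it into a planning dimension, push to essbase, then send a success or failure email) can be sketched as a simple sequence of steps. This is just an illustration in Python, not ODI itself; the step names and the run_package helper are mine:

```python
import os

def run_package(file_path, steps_log):
    """Toy sketch of an ODI-style package: a sequence of steps in a flow,
    with a success/failure branch at the end (step names are illustrative)."""
    try:
        if not os.path.exists(file_path):            # step 1: check the file has arrived
            raise FileNotFoundError(file_path)
        steps_log.append("load planning dimension")  # step 2: run the load interface
        steps_log.append("refresh essbase")          # step 3: push to essbase
        steps_log.append("email: success")           # step 4a: OK branch
    except FileNotFoundError:
        steps_log.append("email: failure")           # step 4b: KO branch

log = []
run_package(os.devnull, log)   # os.devnull always exists, so the OK branch runs
```

In a real package the failure branch would hang off each step's KO link rather than a single try/except, but the flow-of-steps idea is the same.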

The first time you start the designer you will have to set up the connection to the master repository and work repository; you should be pretty used to this by now and I promise it is the last time you will have to do it. It is the same as when you first set up the Topology manager.

The first objective I have set myself in the designer is to load dimension information from a flat file into the sample planning application; if you have been following then you will have already set up connection information to planning and to the flat file directory in the topology manager.

Before we start make sure you have extracted the KM files (the files starting KM_) from the impexp directory in the downloaded installation files.

This is just so all the knowledge modules are stored in one place and simpler for importing.

Once you are in the designer, the first thing to do is create a new project.

You will notice most of the components I described earlier have been created.

At the moment all we are going to do is import the required KMs, right click the project name (ODI_DEMO_PROJECT) and select Import Knowledge Modules. Select the folder %ODI_HOME%\oracledi\impexp and a full list of the KMs available should be displayed.

As the first objective is just to use a flat file and planning we will require the following KMs:

RKM Hyperion Planning (reverse engineers the planning application)

LKM File to SQL (loads data from a file to a temp area using SQL; the temp area is controlled by the Sunopsis memory engine)

IKM SQL to Hyperion Planning (moves the data from the temp area and loads it into planning using SQL)

You may have been wondering what the format of your flat file needs to be, and this is where the RKM for planning comes in; setting up the RKM is done in the models area.

First you need to create a model folder; it is nothing more than a folder and lets you organise your models. Click the Models tab, then Insert Model Folder (first button) and finally give it a name.

The next step is to insert a model. The first model to set up is a datastore to hold all the reverse engineering of the planning app dimensions; when you insert the model you have to name it and choose the technology, in this case Hyperion Planning. On selection, the logical schema will be populated with the schema name you applied in the topology manager.

Now on to the Reverse tab; this is where the clever KM will be used to reverse engineer the planning dimensions. Select Customized, choose the context and Logical agent (all of these were set up in the previous ODI blog), then select RKM Hyperion Planning from the KM dropdown; it will have the project name appended to it.

Apply the changes, click Reverse and let the KM do its magic.

How do you know it has completed? Well, a couple of ways: just watch task manager and wait for the CPU usage to drop, or officially look in the Operator. Now you shout, what the hell is the Operator? The Operator is basically a GUI that displays the status of executions from ODI, which are carried out through the use of the Agent.

I am not really going to go into any more detail about the Operator until later in the series when most of the integrations are complete, but for now you can access it from the start menu or directly from each ODI application, in the designer – Windows > Open Module > Operator.

So it looks like our reverse has completed successfully.

If you expand the model you will see each of the available dimensions has been reversed and placed in a datastore with the same name as the dimension; there is also an extra store for the UDAs. At present you cannot reverse engineer or load to Year/Period/Version/Scenario, just like HAL.

Just to point out, I have had a couple of occasions where the reverse says it has completed but not all the dimensions in the datastore have been populated; if this is the case just run the reverse again.

Expanding a datastore such as Account and then Columns displays the fields that can be populated.

This is where it should start to click in your mind what you are trying to achieve: from a flat file you want to populate some of the above fields, which will then be loaded into your planning app. Which fields you populate is up to you, but obviously Account (member) and Parent are required; if you don't populate the other fields then defaults will be used.

Most of the fields are self-explanatory and relate straight to planning, if you have ever used HAL they will be very familiar. If you have never used HAL then the one that stands out the most is the Operation field.

Operation can have the following values.

Update – This is the default and is used if the field is not populated; it adds, updates or moves the member being loaded.

Delete Level0 - Deletes the member being loaded if it has no children

Delete Idescendants - Deletes the member being loaded and all of its descendants.

Delete Descendants - Deletes the descendants of the member being loaded, but does not delete the member itself.
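To make these operations concrete, here is a hedged sketch in Python of load-file rows mixing Update and Delete Level0 (the column layout and member names are illustrative, not from the original post):

```python
import csv, io

# Illustrative rows only -- your header must match your reversed dimension columns.
rows = [
    {"Parent": "Account", "Account": "NewMember", "Operation": "Update"},
    {"Parent": "Account", "Account": "OldMember", "Operation": "Delete Level0"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Parent", "Account", "Operation"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Leaving the Operation column blank has the same effect as Update, so you only need it when mixing loads and deletes in one file.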

Just a slight deviation, but I am going to demonstrate what happens when you have not set up the connection to the planning app correctly in the topology manager; I just renamed the planning app from plansamp to plansamp1 in the topology manager and ran the reverse again.

I have found with ODI that sometimes the error messages are pretty obscure; this one is one of the better ones, "The application plansamp1 is invalid" gives you a rough idea. This is also the first opportunity you have had to check that you set up the connection correctly.

So now we have reversed our dimensions, a csv hierarchy load file can be created. If you like you can just create a file with headings that relate to all the reversed fields.

For example, say I was going to add the following member as a child of Account

The csv file would look like this :-

From this you should be able to build a file to match your planning application pretty easily. Once you have completed the file, make sure you save it in the file store directory you set up in the topology manager, in my case "E:\FileStore\AccLoad.csv"
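The screenshot of the example file is not reproduced here, but a minimal sketch of building such a load file with Python's csv module follows; the headings and member are hypothetical and should be matched to the columns of your own reversed Account datastore (the path is a temp directory rather than the author's file store):

```python
import csv, os, tempfile

# Hypothetical headings and member -- match these to your reversed columns.
header = ["Account", "Parent", "Default_Alias", "Data Storage", "Operation"]
row    = ["Net_Income", "Account", "Net Income", "Store", "Update"]

path = os.path.join(tempfile.gettempdir(), "AccLoad.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)   # one heading row, matching the file definition later on
    writer.writerow(row)

print(open(path).read())
```

Any property you leave out of the header simply falls back to its planning default, as noted above.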

Now we have to create a model and datastore that will pick up the file.

Choose File as the technology and choose the logical schema you created originally in the topology manager.

In the reverse tab select your context.

Insert a new DataStore and give it a name; if you click the browse for file button it should open in the correct directory and you can select the csv file.

In the Files tab, as we are using a CSV file, choose a File Format of Delimited; the file has a header row so the heading was set to 1, and the field separator was set to Other with a comma (,)

If you have been following this guide then you may have to go back into the topology manager, as I forgot to set the JDBC options for the File set up; I have corrected the previous blog but this is what you need to make sure you have set.

Back to the designer, click the Columns tab then the Reverse button and all being well the file structure should be retrieved.

One thing to watch out for if you are using formulas: the physical and logical length is set to 50 and formulas can easily surpass that, so you will need to increase the length, and likewise in the reversed dimension column set up, where it defaults to 80.
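Since formulas can silently exceed these defaults, here is a hedged sketch of a quick pre-load check in Python (the too_long helper and the sample rows are mine; 50 is the default length mentioned above):

```python
def too_long(rows, column, limit):
    """Return values in `column` longer than `limit` characters -- a quick
    way to spot formulas that would overflow the reversed column length."""
    return [r[column] for r in rows if len(r.get(column, "")) > limit]

rows = [
    {"Account": "A1", "Formula": "X" * 60},   # 60 chars: exceeds a 50-char column
    {"Account": "A2", "Formula": "[Y]"},      # fine
]
print(too_long(rows, "Formula", 50))          # flags only the first formula
```

Running something like this over your load file before executing the interface saves chasing truncation issues afterwards.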

Ok, we have created the source and target elements, so now we have to bring them together, and this is done by creating an interface; back to the Projects tab and insert a new interface.

Give it a name and make sure the context is selected, next check “Staging Area Different From Target” and choose “SUNOPSIS_MEMORY_ENGINE”.

This allows any transformations between the source and target to be handled by the memory engine; an example being if you have a column header named TB in the source file and in the planning target it goes to Time Balance, then the memory engine will handle this mapping.
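As a rough illustration of the kind of column mapping the staging area performs, here is a Python sketch (the apply_mappings helper is mine, not anything from ODI; the TB to Time Balance mapping is the example from the text):

```python
def apply_mappings(source_row, mappings):
    """Rename/derive target columns from a source row, much as the staging
    area does between the source file and the planning target (illustrative)."""
    return {target: source_row[src] for target, src in mappings.items()}

row = {"Account": "Net_Income", "TB": "Flow"}
mapped = apply_mappings(row, {"Account": "Account", "Time Balance": "TB"})
print(mapped)   # {'Account': 'Net_Income', 'Time Balance': 'Flow'}
```

In ODI the expressions can of course be more than plain renames (string functions, constants and so on), but the shape of the operation is the same.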

On to the Diagram tab; this is where you drag and drop your source and target datastores. It's certainly not the best graphically designed section and I remember thinking it was hideous the first time I saw it.

First drag your Account flat file datastore from the models window into the source window, then drag the reversed Account dimension datastore into the target window and select Yes for the automatic mapping. What this will do is automatically map the column headings of your file to the target planning dimension; if you create the file with exactly the same headings as the dimension properties then you will have less work to do.

Anyway you should end up with this lovely looking organised window.

Just stretch the windows to make it more appealing and understandable.

Now we have to map the remaining source and targets; the ones that have been mapped will show as red in the source. By the way, the "S" in the source/target windows stands for String, as all the fields we are using are string based. In the target window, the members that have been automatically mapped will be displayed with an icon that stands for Source, basically meaning the source is the same as the target.

To map, click on a name in the target window; this should display the mapping window below. Tick the "Active Mapping" box and in the implementation window enter the three letter alias of the source and the column name, so for the target of "Alias: Default" the implementation window will have ACC."Default_Alias". Finally click the "Staging Area" button, as we need to use the Sunopsis memory engine to perform the mapping.

If you like you don’t have to manually enter the name, you can click the “Launch Expression Editor” and select from the window.

The expression editor has a whole raft of functions that can be used, but you probably won't need them for basic mappings.

The process needs to be repeated for the remaining source properties.

You can also force the value in the target mapping to be a fixed value; so say everything was going to be put against a "Source Plan Type" of Consol, you wouldn't need that in your source file and can just enter the value into the mapping like so :-

If you are going to do this make sure the “Active Mapping” box is ticked and execute on the staging area is selected.
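Tying the mapping examples together, the target-to-expression pairs can be thought of as a simple lookup table. The dict below is purely illustrative (the ACC alias, Default_Alias column and 'Consol' literal follow the examples in the text; it is not an ODI artefact):

```python
# Target planning property -> ODI implementation expression (illustrative).
mappings = {
    "Account":          'ACC."Account"',        # straight column mapping
    "Alias: Default":   'ACC."Default_Alias"',  # manually entered mapping
    "Source Plan Type": "'Consol'",             # fixed literal value, no source column needed
}
for target, expression in mappings.items():
    print(f"{target} <- {expression}")
```

The quoted literal is the only difference between a fixed value and a column mapping; everything else (Active Mapping ticked, executed on the staging area) stays the same.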

Once you have completed all the mappings, click on the Flow tab and a diagram should be displayed outlining the process. If you click on the target there are a number of options available, like setting up error logging and the ability to refresh essbase after the load has completed.

After applying changes you can execute the interface and the data in the file should be loaded into planning, you can check in the Operator to see how the process went.

You should also check the log files that were generated; you may see this in the log:

2008-09-28 13:15:04,109 DEBUG [DwgCmdExecutionThread]: Number of columns in the source result set does not match the number of planning target columns.

I wouldn’t worry about the message, as it is not really an error.

In planning your member(s) should have been loaded.

So there we go we have completed the objective of loading information from a flat file into a planning dimension, from here you should be able to do the same for other dimensions.

I am going to wrap it up there for today, hopefully it all makes sense and you can go off and create your own load interfaces.

Next time I will look into loading data from a SQL table to planning and look at what can be achieved with ODI and essbase.

21 comments:

I am afraid I came to a wall during the ODI Series 4 setup. I just saw that, for some reason, RMI has not been installed. And in ODI Series 3, we set the Planning server to the RMI port 11333 (so, technically, I set the connection to a component which is not installed). I am maintaining 2 environments, 32bit and 64bit. Installation steps for these 2 environments were exactly the same, down to every component. On the 32bit environment, RMI is installed correctly, while on 64bit it is not. As I followed up, RMI is part of planning and therefore it should have been installed, as I selected all Planning components during installation. I don't have an RMI folder under %HYPERION_HOME%/common, and of course I don't have the Hyperion RMI windows service running. Do you, by any chance, know why RMI is missing? Could the reason be that we used the Planning installation files valid for a 32bit environment during setup of the 64bit environment (there are no 64bit Planning installation files)?

Hi, I am not sure why the RMI component is missing but it is vital it exists, if you don't have a \Hyperion\common\RMI\1.0.0 directory then it looks like something has gone wrong with the installation.

This issue (RMI) has been solved by the following workaround: I was advised to make a copy of the RMI folder from the 32bit environment and paste it on the same path on the 64bit environment, then run InstallHyperionNTServiceRMI.bat from the %HYPERION_HOME%\common\RMI\9.5.0.0\ location. With this action the HyperionRMI service will be installed and started and – that's it.

Obviously, this is not the way it should be done, but for the 64bit environment the Planning 32bit components have to be used as well, and only RMI has to be configured manually. Maybe it can cause some incompatibilities, but in our case, so far it is working well…

So, maybe you will need this info some time. This was also confirmed by Oracle support (refer to SR #3-698669641).

Hi John, first of all: nice job! I'm experiencing some issues when I do reverse engineering from a csv file. Sometimes it works perfectly and sometimes it doesn't retrieve any columns, so I have to do it manually. Any ideas?

Your blog is really helpful. We've been installing ODI and following your instructions from the 1st series, but this 4th one is giving us some issues.

We're using the Fusion edition of EPMA and Essbase.

We've installed ODI 10.1.3.0 and also installed the Hyperion Planning and Essbase patches that you advised.

On the reverse engineering procedure, we created the planning model folder and inserted a new model under that folder. When we clicked the Reverse button though, ours did not complete successfully.

Here's the error we received:

org.apache.bsf.BSFException: exception from Jython:
Traceback: File "string", line 3, in ?
ImportError: No module named odi
at org.apache.bsf.engines.jython.JythonEngine.exec(Unknown Source)
at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSes

I completed all the steps provided by you successfully. The problem I'm facing now is that records from the flat file were getting rejected in planning. Please let me know the workaround. The following is the information I found in the agent window.

Initalizing the planning wrapper and connecting
log4j:WARN No appenders could be found for logger (C:\DOCUME~1\Praveen\LOCALS~1\Temp\/Entity.log).
log4j:WARN Please initialize the log4j system properly.
Traceback (innermost last): File "string", line 2, in ?
Planning Writer Load Summary:
Number of rows successfully processed: 0
Number of rows rejected: 2

John, I need help. I'd like to set up the connection to the master repository and work repository (see 1st screenshot). All tests passed successfully. But when I try to login under these settings I get the error "Designer can't connect to Execution repository." Where can I see a log file to resolve the problem?

Hi John, I noticed something which I think is an ODI glitch / possibly a bug. When setting up the File Datastore on the Model tab, you enter the Definition information. If you hit Apply before moving to the other tabs, the Reverse does not return the columns, even if you close ODI and reopen it. If you complete the Files and Columns tabs before clicking the Apply or OK buttons, then the Reverse works. I thought it might be related to column headers, but I receive the same response on a file that had _ in the column headers and one that had no spaces or _ in the column headers. Have you experienced this issue?

Hi! I'm having an issue when loading an attribute dimension structure. The thing is that it doesn't allow member moves, which is quite a particular situation, I know, but the only workaround I've found is to delete the member I want to move and then load it again under the new parent. Another thing with this is that the attribute association is lost when deleting it, so I have to load my base dimension again, which is a big one, so the "attribute dimension refresh process" performance goes down, without considering the manual work of deleting a member. If someone knows a better solution, I would be very pleased to hear it.

First up, thanks for walking me through ODI. I was getting sick of manually entering the field mappings when I noticed the following: once you select a target field you can drag a source field and drop it into the mapping window. Doing this automatically populates the mapping, checks the "Active Mapping" box and selects "Staging Area". This saved me a lot of time and potential typos without the hassle of firing up the expression editor.

In the part where you can force the value in the target mapping to be a fixed value (in my case it is "Data Storage" with a value of Store, because I don't need that in my source file), in Hyperion Planning I can see that the members loaded with the fixed value of Store are shown with "Never Share" instead. Please, I need more details about this action. Thank you