Friday, November 24, 2017

Do you need to execute a stored procedure from your script? Maybe to populate the Open Interface Table? You can make use of the FDMEE API method executePLSQL, but only if the stored procedure does not return OUT parameters. If you need to return any value, then you can use the Java classes for SQL.

The following snippet shows how to execute the procedure remotely through a dblink. Executing the procedure in the database you connect to follows the same approach.
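A hedged Jython sketch of that approach using the java.sql classes (the JDBC URL, credentials, package, procedure and dblink names are all placeholders for illustration):

```python
from java.sql import DriverManager, Types

conn = None
try:
    # Connection details are placeholders for your FDMEE database
    conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@dbserver:1521/ORCL", "FDMEE_USER", "password")
    # The {call ...} syntax works both locally and through a dblink
    stmt = conn.prepareCall("{call my_pkg.my_proc@my_dblink(?, ?)}")
    stmt.setString(1, "SOME_INPUT")              # IN parameter
    stmt.registerOutParameter(2, Types.VARCHAR)  # OUT parameter
    stmt.execute()
    result = stmt.getString(2)
    fdmAPI.logInfo("Procedure returned: %s" % result)
    stmt.close()
finally:
    if conn is not None:
        conn.close()
```

Registering the OUT parameter with registerOutParameter is the piece that executePLSQL cannot do for you.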

Thursday, October 5, 2017

It's been a long time since my last post, but several events have happened lately. Anyway, sorry about that.

Before I dive into today's topic, I'd like to summarize my life in the last few weeks:

Kscope17 was a great event, as always. It's a very good opportunity for those of us living on the other side of the pond. Meeting lots of people, partners, and customers is always great. San Antonio was impressive. I spent one day visiting the city with my colleague Henri (The Finnish Hyperion Guy). We had sun and rain. And what do you do when it's raining? Shopping! In addition, I got the "EPM Data Integration Top Speaker" award! That was awesome. I didn't expect it, so I can only say thanks to the whole community.

Heading to a larger family. This is an easy one: if all goes fine, next year we will be 2 more in the family :-)

New apartment. I've been very busy assembling IKEA furniture. For those who would like to visit Malaga (Spain), we bought a new apartment for rental. Feel free to visit it!

OK, now that you know what I have been doing...time for some FDMEE content!

Universal Data Adapter (UDA)

If you are not familiar with the UDA yet, it may be a good idea to visit my previous entries about it:

One of the drawbacks of the UDA is its configuration and maintenance. That's something we cannot change. It has been designed like that.

Why configuration?

UDA requires configuration in FDMEE and Oracle Data Integrator (ODI).

In FDMEE, UDA requires the typical artifacts plus some specific ones:

Source System with an ODI context assigned

Source Adapter

Source Period Mappings

Import Formats

Location and Data Load Rule

In ODI, UDA requires new artifacts to be created in addition to the ones imported during the initial configuration (Projects and Model Folders):

Manually in ODI

Data Server

Physical Schema

Context

Generated in ODI from FDMEE

Datastore for Table/View definition

Package and Interface

Scenario for the Import Format

Why maintenance? There are many events that require re-generating the ODI objects created from FDMEE. I'm not going to list all of them, but I will explain the main ones.

Migrating across environments. LCM doesn't migrate the ODI objects. Besides, you can't perform a native ODI export/import. FDMEE has been designed to have the same ODI repository ID across environments, so exporting/importing objects will fail.

Applying patches. Some patches may require re-importing some default ODI objects. This will probably delete the objects you generated from FDMEE.

Changes in Tables/Views. Think about adding a new column to a Table/View. You have to re-import the table definition, regenerate the package and interface, adjust the Import Format, and regenerate the ODI scenario.

What about multiple databases? All the configuration mentioned above multiplies as well. Why? Everything is chained. Each Source System has an ODI Context assigned. If you have multiple sources of the same database type, you can't use the Global context, as it can only be assigned once to the Logical Schema UDA_MSSQL. Then, as you need multiple Source Systems and Source Adapters, you will need multiple Import Formats as they are assigned to adapters, and so on...

I know what you are thinking...a lot of manual work to do!

Today, I will show you the solution we implemented for an integration with Microsoft Navision (SQL Server). The customer had 30 different source databases on the same server (including different collations).

The Solution - Moving complexity to the DB makes UDA simpler!

As part of the Analysis & Design workshop, we explained the drawbacks of having 30 source databases. They understood that we had to find another solution immediately.

For 2 databases, the solution architecture would look like this:

Then, I told them that the design could be significantly simplified. However, that required some additional work on the SQL side. They wanted to keep things simple in FDMEE, so they were happy to go for it. Actually, they did :-)

Basically, my advice was to move the complexity to SQL. That would make the UDA configuration and maintenance simpler.

Create a view in the FDMEE database. This view has an additional column for the source database. It queries the source views using remote queries (e.g., OPENQUERY in SQL Server), which perform quite well as they leverage the source DB engine.
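A simplified sketch of such a view, assuming two source databases reached through a linked server (the view, linked server, database and column names below are hypothetical):

```sql
CREATE VIEW dbo.V_UDA_NAV_GL AS
SELECT 'NAV_DB1' AS SOURCE_DB, Q.*
FROM OPENQUERY(LNK_NAV,
     'SELECT ENTITY, ACCOUNT, AMOUNT FROM NAV_DB1.dbo.V_GL_BALANCES') AS Q
UNION ALL
SELECT 'NAV_DB2' AS SOURCE_DB, Q.*
FROM OPENQUERY(LNK_NAV,
     'SELECT ENTITY, ACCOUNT, AMOUNT FROM NAV_DB2.dbo.V_GL_BALANCES') AS Q;
```

With this pattern, adding a 31st database is one more UNION ALL branch in SQL rather than a new Source System, Source Adapter and Import Format in FDMEE.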

Configure UDA in ODI

Configure the UDA in FDMEE

In addition to the required parameters, define an additional one for the column holding the source database

Note that if the source databases are in different servers, the solution would be slightly different but still doable.

Also, a similar approach could be taken if you have multiple views with different layouts. You could merge them into one with a common layout.

I know you may have concerns about performance, but if the views are correctly designed and network delay is not a bottleneck, everything should be fine. Indeed, ODI usually executes the SQL extract queries in the source database. Don't forget to play a bit with the Data Server tuning settings to get the best performance.

As usual, I'm not telling you this is your solution, but it is definitely a good practice. I hope you found it interesting, as it may help you to simplify your design.

Some years ago, FDMEE was introduced into our lives with a lot of nice new functionality. Jython is probably one of the most important additions. Why? That's an easy one. With Jython, FDMEE opened itself to the Java world.

Java integration is the heart of Jython application development... The fact is that most Jython developers use it so that they can take advantage of the vast libraries available to the Java world, and in order to do so, there needs to be a certain amount of Java integration in the application.

Most of the key Oracle EPM and non-EPM products have their own Java API (JAPI). During this blog series, I'm going to focus on the EPM ones. In a nutshell, the integration of FDMEE with the Java APIs for products like Essbase or HFM, gives us freedom to implement additional functionality to enhance our EPM integration solutions.

Using Java from within Jython

One of the goals of Jython is to make using existing Java libraries straightforward. It's as simple as using external Jython modules (PY files) within a Jython script:

Import the required Java classes and use them directly in your code

Call the Java methods or functions you need

What about types? Well, here comes the good part. Usually, you don't need to worry about them at all. There is some automatic type coercion (conversion of one type of object to a new object of a different type with similar content) both for the parameters passed and for the value returned by the Java method.

Basically, when it gets a Java numeric type, or a Java string, Jython automatically converts it into one of its primitive types.

Let's have a look at the following example:
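A minimal Jython snippet of my own illustrates the coercion:

```
from java.util import ArrayList

# Create a Java ArrayList and use its own methods
myList = ArrayList()
myList.add("FDMEE")
myList.add("ODI")

# The ArrayList is coerced, so we can iterate it like a Jython list
for item in myList:
    print item
```

No explicit conversion is needed anywhere: Jython handles the Java-to-Jython mapping for us.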

As you can see, the ArrayList object (which is part of the Java Collections Framework) has been coerced into a Jython list. We can use methods from the ArrayList class (like add) and iterate the object as if it were a proper Jython list.

We will see more examples for coercion when using the Essbase and HFM JAPIs.

Keep in mind that FDMEE runs on Java 1.6. In other words, you are restricted to classes available in Java 1.6. Also, if you use 3rd party Java libraries, they must be compatible with 1.6.

Regarding the different approaches to implement the Jython script:

Build the custom functionality in a Java library that you can later import into your scripts

Write the Java code as Jython directly within your script

Option 1 requires deeper knowledge of Java programming. I'd recommend it only if you know Java programming and your customization is a good candidate for being reused in other implementations. On the other hand, option 2 is quicker and probably a better option for one-off customizations.

Essbase Java API

FDMEE comes with functionality that is commonly used:

Extract data

Run calculation scripts before/after loading data

Pass parameters to the scripts

Create drill regions

Among others...

But, what about?

Run calculation scripts before extracting data

Validate target data before it is loaded

Load new metadata before loading data

Execute MaxL scripts

Use substitution variables in FDMEE artifacts like period mappings

Among others...

I wish the product provided this functionality, but unfortunately it doesn't. However, it provides a powerful scripting engine which enables us to extend it.

Going back to the list above, you have probably met some of these requirements in one of your projects. What did you do? Create a MaxL script and run it from a script using the subprocess module? Or did you leverage the Essbase JAPI?

That probably depends on many other factors... Do we have time for implementation? Do we know how to do it? Do they have existing batches doing the work?

To me, using the Essbase JAPI is not only about having seamless integration but also about capturing errors in an elegant way. That's something you can hardly get by running batches from external scripts.

Spoiler!!! See how simple it would be to execute a MaxL script or statement:
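A hedged Jython sketch along those lines (server URL, credentials and the MaxL statement are placeholders; the Essbase JAPI jars must be reachable from the FDMEE script):

```
from com.essbase.api.session import IEssbase

ess = None
try:
    # Create a JAPI instance and sign on through the provider services URL
    ess = IEssbase.Home.create(IEssbase.JAPI_VERSION)
    ess.signOn("admin", "password", False, None,
               "http://essbaseserver:13080/aps/JAPI")
    # Open a MaxL session and execute a statement
    maxl = ess.openMaxlSession("FDMEE MaxL session")
    maxl.execute("alter database Sample.Basic begin archive to file 'sample.arc';")
except Exception, err:
    fdmAPI.logDebug("Error executing MaxL: %s" % err)
finally:
    if ess is not None and ess.isSignedOn():
        ess.signOff()
```

Any MaxL error surfaces as a catchable exception, which is exactly the elegant error handling an external batch can't give you.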

I will cover more details about using the Essbase JAPI and some examples in upcoming posts.

HFM Java API

What about HFM?

How can we extract Cell Texts?

Extract and Load Metadata?

Translate data before extracting it?

Run custom consolidation/calculation/translation logic?

Among others

HFM also has a JAPI! Actually, just as with the Essbase integration, FDMEE uses these APIs behind the scenes.

Spoiler again!!! extracting cell texts:

Other Java APIs

Besides the HFM and Essbase JAPIs, there are other products and components that have their own API. Some of them, such as LCM's, are documented; some others, like the OLU (Outline Load Utility) API, are not.

In the next posts, I will show some examples of customization implemented with the Essbase and HFM APIs. If you can't wait, my colleague John has already published a very cool one.

I haven't forgotten about Planning. It does not have a published Java API, but you should have a look at its REST API.

Now that everything is heading into the Cloud, why not play around with SAP HANA Cloud?

SAP HANA Cloud
When I first tried to get an SAP ECC training environment, I noticed that SAP was offering nothing for free. Nowadays, things have changed a little bit. Luckily, they noticed that you need to offer some trial/training sandbox if you want people to get closer to you.
For those who want to be part of the game, you can visit their Cloud site.

Why the Universal Data Adapter?
SAP HANA Cloud brings something called the SAP Cloud Connector. Too complicated for me :-)
Luckily for me, I googled an easier way of extracting data from the Cloud. There is something called a database tunnel, which allows on-premise systems to connect to the HANA DB in the cloud through a secure connection. It doesn't sound quite straightforward, but it didn't take too long to configure.

There are different ways of opening the tunnel. I have used the SAP Cloud Console Client which you can download from SAP for free.

Once the database tunnel is opened from the FDMEE server(s) to the SAP HANA Cloud DB, the Universal Data Adapter can be used in the same way we used it with the on-premise HANA DB.

Please note that, as I'm not using a productive cloud environment, I had to open the tunnel via the command line. That is fair enough to complete my POC.

My data in SAP HANA Cloud
I'm keeping this simple so I have a table in HANA Cloud with some dummy data:

Let's go through the configuration steps to bring that data into my application.

As usual, I create a dedicated context for this new source system. That gives me more flexibility:

FDMEE

In FDMEE, nothing different.

We first create the source system with the context we created in ODI:

Then, add the source adapter for the table we want to extract data from:

Time now to import the table definition, classify columns and generate the template package in ODI:

As you can see above, FDMEE could reverse-engineer the HANA Cloud table, so I can now assign the columns to my dimensions and regenerate the ODI scenario:

I'm not going to show how to create a location and data load rule as I assume you are familiar with that process.

The final step is to run our data load rule and see how data is pulled from the SAP cloud and loaded into the on-premise HFM app through FDMEE :-)

I'm going to leave it here for today. As you can see, the Universal Data Adapter provides a simple and transparent way of connecting our on-premise systems with heterogeneous sources, including SAP HANA Cloud!

Wednesday, April 26, 2017

Do you know Jason Jones? I guess you do, but in case you don't, I'm sure you have played around with some of his developments.

Personally, I've been following Jason for years. I remember what I thought when I went to one of his presentations at Kscope: "This guy really knows what he's talking about and has put a lot of effort into helping the EPM community. Definitely an EPM rock star."

One day, I found something quite interesting on his blog: PBJ. I thought it could be very useful to improve and simplify something that I had already built using a different solution. Why not use something he was offering to the community as open source? It was good for me and also good for him. I guess seeing that something you've built is useful to others must make you proud.

When I told him that I was going to integrate FDMEE on-prem with PBCS using PBJ, he was very enthusiastic. The library was not fully tested, so I made sure I provided continuous feedback. Some days ago he published a post about our "joint venture". Now it's my turn.

FDMEE Hybrid Integrations
We have already covered Hybrid integrations in some posts.
In a few words, FDMEE on-prem (PSU200+) can be used to extract/load data from Oracle EPM Cloud Services (E/PBCS and FCCS so far).

I suggest you also visit John's blog to learn more about hybrid integrations in FDMEE:

PBJ - The Java Library for PBCS
REST Web Services, what's that? I'll let you google and read about it. For us, REST is how the EPM Cloud Services open up to the external world. Oracle provides different REST APIs for the different EPM cloud services.

Luckily, Jason has gone one step further. He built a Java Library to use the REST API for PBCS:

PBJ is a Java library for working with the Planning and Budgeting Cloud Service (PBCS) REST API. It is open source software.

Why would we need PBJ in our solutions? Currently, hybrid integrations are missing some functionality, like working with metadata, among others. For example, we recently built a solution in FDMEE to load exchange rates from HFM into PBCS.

FDMEE offered seamless extracts from HFM. Rates are data in HFM but not in PBCS, where they are treated as metadata. We used the PBCS REST APIs from FDMEE scripts, which worked perfectly. However, we built the code using modules available in Jython 2.5.1. That gave rise to much head-scratching... Working with HTTP requests and JSON was not an easy task.
We noticed everything was much easier from Python 2.7 (Jython 2.7), but there was nothing we could do here as we were stuck with what FDMEE can use :-(

TBH, we had a further ace up our sleeve: building our own Java library. But we delayed that development for different reasons. It was then that PBJ appeared :-)

Why reinvent the wheel? PBJ is open source and makes coding easier. We can collaborate with Jason on GitHub, and he is quite receptive to feedback.

Using PBJ from FDMEE Scripts
When I first started testing it, I noticed that there were multiple JAR dependencies which had to be added to the sys.path in my FDMEE script. That was causing some conflicts with other JARs used by FDMEE, so Jason came up with an uber-JAR:

An uber-JAR—also known as a fat JAR or JAR with dependencies—is a JAR file that contains not only a Java program, but also embeds its dependencies. This means that the JAR functions as an "all-in-one" distribution of the software, without needing any other Java code. (You still need a Java run-time, and an underlying operating system, of course.)

One of my concerns was the fact that FDMEE uses Java 1.6. That's usually a problem when using external JARs from FDMEE scripts. Luckily, PBJ is also built with Java 1.6, so the current versions of FDMEE and PBJ are good friends.

Before using any PBJ class, we have to add the JAR to the sys.path, which contains a list of strings that specifies the search path for modules:
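A minimal sketch of that step (the JAR location is a hypothetical path on the FDMEE server):

```python
import sys

# Path to the PBJ uber-JAR (hypothetical location on the FDMEE server)
pbjJar = r"C:\FDMEE\data\scripts\jars\pbj-pbcs-client.jar"

# Add the JAR to the module search path before importing any PBJ class
if pbjJar not in sys.path:
    sys.path.append(pbjJar)
```

After this, the PBJ classes can be imported like any other Jython module.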

We are now ready to connect to our PBCS instance.

Example: loading new Cost Centers into PBCS
I have created a custom script in FDMEE to keep it simple. The script basically performs the following actions:

Import PBJ Jar file

Connect to PBCS

Upload a CSV file with new Cost Centers

Execute a Job to add new Cost Centers

Our CSV file with new metadata is simple, just three new members:

PBJ has a class, PbcsClientException, to capture and handle exceptions. You can use this class in addition to Python's built-in exceptions:
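The exception-handling structure looks roughly like this. Note that the package path below is my assumption based on PBJ's naming, and the upload/job calls are only sketched in a comment; check Jason's documentation for the exact API:

```
from com.jasonwjones.pbcs.client.exceptions import PbcsClientException

try:
    # Client creation, CSV file upload and metadata job execution go here
    # (the exact PBJ classes/methods are described in the PBJ documentation)
    pass
except PbcsClientException, err:
    # PBCS-specific errors (bad credentials, failed job, etc.)
    fdmAPI.logDebug("PBCS error: %s" % err)
except Exception, err:
    fdmAPI.logDebug("Unexpected error: %s" % err)
```

Catching the PBJ exception separately lets you log PBCS failures with more context than a generic error.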

Once the job is completed we can see the results in the PBCS Job console:

The job was executed with no errors. By navigating to the Cost Center dimension, we can see the new hierarchy added:

I have also added some code to write debug entries in the FDMEE process log. This is always useful and can help you to find and fix issues easily:

Conclusion and Feedback
In this post, my main goal has been to show you how to use the PBJ library in FDMEE. I'm sure it can be very useful for implementing different requirements in hybrid integrations.

Jason did a great job, and the ball is now in our court. The best way of contributing is to keep testing PBJ and providing feedback.
Let me highlight that PBJ is not his only project. There are a few others that you can check on his site.

Friday, April 7, 2017

The following post came up after seeing that a lot of people in the FDMEE community were asking "how can we manipulate the source file and replace it with a new one on the fly?"
In other words, how can we replace the source file selected by the end user with a new file we create during the import process... or, let's be clear... how can we cheat FDMEE? :-)

I thought it was a good idea to share a real case study we had from a customer. Their ERP system had a built-in process which extracted data in XML format. Hold on, can FDMEE import XML files? Not out of the box; yes, with some imagination and scripting.

The Requirement
As stated above, FDMEE does not support all kinds of formats out of the box. We usually have to ask our ERP admin (or IT) to create a file in a format that FDMEE can easily read, mainly delimited files such as CSV.

But what about Web Services like SOAP or REST? They mainly return XML or JSON responses. We need to be prepared for that in case we want our FDMEE integration to consume a WS. This is quite relevant for FDMEE on-premise, as I guess Data Management for the Cloud will include "some kind of JSON adapter" sooner or later in order to integrate with non-Oracle web services.

And what about any other integration where the source files are in a format other than fixed/delimited?

Luckily, I had one of those... we get an XML file from IT and we need to convert it to CSV so we can import it through FDMEE.

High Level Solution
The main idea is to convert the selected XML file into a CSV file that FDMEE can understand. Now the questions are: where and how?

Where? It makes sense to do the conversion before the actual import happens. A BefImport event script?

How? FDMEE will be expecting a CSV file, so how do we convert the XML to CSV? There are multiple methods: Python modules, Java libraries for XPath... I will show one of them.

The XML File

The image below doesn't show the real XML (confidentiality), but a basic one:

As you can see, data is enclosed in a <data> tag and lines are enclosed in <dataRow> tags. Besides, each dimension has a different tag.
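A basic file following that structure could look like this (the member names are made up; the tags match the ones parsed later in the script):

```xml
<data>
  <dataRow>
    <entity>EastSales</entity>
    <account>Sales</account>
    <icp>[ICP None]</icp>
    <custom1>GolfBalls</custom1>
    <custom2>Customer2</custom2>
    <custom3>[None]</custom3>
    <custom4>[None]</custom4>
    <amount>1000</amount>
  </dataRow>
</data>
```

Each dataRow holds one intersection, with amount as the last tag of the row.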

As an extra for this post, I usually use the Notepad++ plugin XML Tools, which allows me to perform multiple operations, including XPath queries:

Before we move into more details: what do you think would happen if we tried to import the XML file with no customization?

FDMEE rejects all records in the file. What were you expecting then? That's the reason I'm blogging about this (lol)

Import Format, Location and DLR for the CSV File
In this case, our source type is File. However, I usually define instances of File when I want FDMEE admins/users to see the real source system (this is optional):

The Import Format (IF) has been defined to import a semicolon delimited file having numeric data only (you can use any delimiter):

The Data Load Rule is using the IF we just defined. As you may know, we can have one location with multiple DLRs using different IFs when source is File.

BefImport Event Script

The conversion will be done in the BefImport event script, which is triggered before FDMEE imports the file the end user selected when running the DLR.

We can split this script into two main steps:

Create the new CSV file in the location's inbox folder

Update database tables to replace the original file selected with the new one created in step 1

The final solution could be more sophisticated (create the CSV based on IF definition, parse null values, etc.). Today we will go for the simple one.

Let's dive into details.

Converting XML to CSV

There are multiple ways of converting an XML to CSV. To simplify, we could group them as:

Method A: parse the entire XML and convert it to CSV

Method B: convert nodes into CSV lines as we iterate them

Method A would be good for small files. It's also quite useful if our XML structure is complex. However, if we have big files, we may want to avoid loading the whole file into memory before converting, which is more efficient. Therefore, I have decided to implement Method B. Among all the different options we have, I will show the event-style method using the Python xml module (iterparse).

Then we need to build the different paths for the XML and CSV files. We will also create a file object for the CSV file. This object will be used to create the csv writer.
The XML file is automatically uploaded to the location's inbox folder when import begins. The CSV file will be created in the same folder.

# Get Context details
inboxDir = fdmContext["INBOXDIR"]
locName = fdmContext["LOCNAME"]
fileName = fdmContext["FILENAME"]
loadId = fdmContext["LOADID"]

# XML File
xmlFile = os.path.join(inboxDir, locName, fileName)
fdmAPI.logInfo("Source XML file: %s" % xmlFile)

# CSV file will be created in the inbox folder
csvFilename = fileName.replace(".xml", ".csv")
csvFilepath = os.path.join(inboxDir, locName, csvFilename)

# To avoid blank lines in between lines: csv file
# must be opened with the "b" flag on platforms where
# that makes a difference (like windows)
csvFile = open(csvFilepath, "wb")
fdmAPI.logInfo("New CSV file: %s" % csvFilepath)

The writer object for the CSV file must use a semicolon as the delimiter so it matches our IF definition. We also enclose non-numeric values in quotes to avoid issues in case you define your import format as comma delimited:

Once the writer is ready, it's time to iterate the nodes and build our CSV. Before seeing the code, I'd like to highlight some points:

We just want to capture start tags, so we only capture the start event in iterparse

We can include event in the for statement for debugging purposes (we can print how the XML file is read)

The tag property returns the XML node name (<entity>...)

The text property returns the XML node text (<entity>EastSales</entity>)

We know amount is the last XML tag so we will write the CSV line when it's found

The CSV writer generates the delimited line from list of node texts (row)

try:
    # Iterate the XML file to build lines for CSV file
    for (event, node) in iterparse(xmlFile, events=['start']):
        # Ignore anything not being dimension tags
        if node.tag in ["data", "dataRow"]:
            continue
        # For other nodes, get node value based on tag
        if node.tag == "entity":
            entity = node.text
        elif node.tag == "account":
            account = node.text
        elif node.tag == "icp":
            icp = node.text
        elif node.tag == "custom1":
            c1 = node.text
        elif node.tag == "custom2":
            c2 = node.text
        elif node.tag == "custom3":
            c3 = node.text
        elif node.tag == "custom4":
            c4 = node.text
        elif node.tag == "amount":
            amount = node.text
        # Build CSV row as a list (only when amount is reached)
        if node.tag == "amount":
            row = [entity, account, icp, c1, c2, c3, c4, amount]
            fdmAPI.logInfo("Row parsed: %s" % ";".join(row))
            # Output a data row
            writer.writerow(row)
except Exception, err:
    fdmAPI.logDebug("Error parsing the XML file: %s" % err)

The result of this step is the CSV file created in the same folder as the XML one:

If we open the file, we can see the 3 lines generated from the 3 XML dataRows:

Cool, first challenge completed. Now we need to make FDMEE import the new file. Let's move forward.

Replacing the Source File on the fly
FDMEE stores the file name to be imported in several tables. It took me some time and several tests to figure out which tables I had to update. Finally, I got them:

AIF_PROCESS_DETAILS: to show the new file name in Process Details page

AIF_BAL_RULE_LOADS: to set the new file name for the current process

AIF_PROCESS_PERIODS: the file name is also used in the table where FDMEE stores periods processed by the current process

To update the tables, we need 2 parameters: the CSV file name and the current Load Id (Process Id).
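A hedged sketch of those updates using fdmAPI.executeDML. The table names come from my tests above, but the column names below are my assumptions; verify them against the AIF tables in your own FDMEE release before using anything like this:

```
# csvFilename and loadId come from the conversion step above
params = [csvFilename, loadId]

# Column names are assumptions from my tests; check your FDMEE release
fdmAPI.executeDML(
    "UPDATE AIF_PROCESS_DETAILS SET ENTITY_NAME = ? WHERE PROCESS_ID = ?",
    params, False)
fdmAPI.executeDML(
    "UPDATE AIF_BAL_RULE_LOADS SET FILE_NAME_STATIC = ? WHERE LOADID = ?",
    params, False)
fdmAPI.executeDML(
    "UPDATE AIF_PROCESS_PERIODS SET IMP_DATA_FILE_NAME = ? WHERE PROCESS_ID = ?",
    params, False)
```

Running the updates with bind parameters inside the BefImport script means FDMEE picks up the CSV file instead of the XML one when the import step starts.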

Process Details shows the new file name (although it's not mandatory to change it if you don't want to):

I'm going to leave it here for today. Processing XML files can be very useful, not only when we have to import data but also in other scenarios. For example, I'm sure some of you have had solutions in mind where the Intersection Check Report had to be processed (FDMEE generates an XML file which is converted to PDF)...

I hope you enjoyed this post and find it useful for your current or future requirements.