Even with a zero-based budgeting blueprint, many companies are still hesitant to go “all in,” thinking that a zero-based budgeting program requires too much time and too many resources to implement. The introduction of Cloud services such as Oracle PBCS/EPBCS makes the implementation of a centralized financial system easier than ever, greatly reducing the barrier to entry.

This final post in the series shows how to harness the power of a PBCS/EPBCS environment to achieve the greatest success with a newly implemented zero-based budgeting program.

How Can PBCS/EPBCS Environments Enhance the ZBB Experience?

There are four key ways to gain the most from a PBCS or EPBCS environment, including the setup of targets and accountability metrics that offer more meaningful data and greater transparency when making budgeting decisions.

Clients are often given target-setting goals in management meetings or over the phone, but we demonstrate how to integrate this step into their budgeting systems. On numerous occasions, Alithya has been contracted to implement target setting whereby leadership sets growth targets and the system flows down the revenue by service, product line, etc. In turn, analysts match the underlying details.

Not surprisingly, this is a common request because target setting has long been a tradition during the budget process. By building the target-setting process into PBCS/EPBCS, a formerly offline process moves online and is molded into the overall budgeting system process. Combining that with the zero-based budgeting mantra allows targets to be set while providing analysts with their needed baseline. Moreover, analysis can be done on departments that take the typical “reduce expenses by 10%” approach to achieve the target number instead of the more insightful zero-based budget journey. Yes, target setting in a centralized system is easier, but the real benefit of a centralized system is the ability to see how teams react to the new target. Did they take the traditional “reduce budget percentages to fit the numbers” route, or did they look at their budget as a whole, analyze each line item, and question the numbers organically?

After targets are set and the budget is approved, we look at whether the promised cost savings come to fruition. A centralized system allows capital projects or initiatives to be tracked, helping to systematically measure the cost-savings activities identified during the zero-based budget discovery. This provides a clear picture of what each department is doing and holds it more accountable for project decisions. It is an achievement to complete a zero-based budget “diet,” but holding teams accountable brings them to the next level: the zero-based budget “lifestyle.”

In essence, this new budgeting environment provides better insight into data – insight that ultimately allows savings to be found more effectively. For example, if you want to see the cost of direct materials, the centralized system can be set up to capture those costs so that you can analyze and track the different KPIs that reduce or increase overall costs.

Another example of how this works is segmenting employee costs such as travel. Instead of accepting a run rate of 10% of direct labor for travel costs, determine which jobs or tasks required that travel and use this KPI to negotiate travel expenses and further drive down costs. Essentially, use PBCS/EPBCS as a tool to capture KPIs (e.g. travel costs by job), determine the best use of travel dollars, and – more importantly – negotiate with vendors on key travel.

Lastly, a budgeting environment provides clarity that helps teams make better-informed decisions about future initiatives. With the ability to see all of the underlying data points in a single location, it is possible to identify past sales and marketing campaigns and expenditures that led to profitable customers. Therefore, zero-based budgeting teams that took the initiative to build their sales and marketing cost-to-benefit analysis from the ground up are able to dedicate more resources (e.g. dollars, people, etc.) to winning strategies. This is in contrast to the traditional budgeting approach of a “10% marketing spend year-over-year” that often masks the winning and – more importantly – losing marketing initiatives. Moreover, such planning and the availability of different data points help draw key inferences that allow sales and marketing teams to be more successful.

Summary

Utilizing a Cloud service such as Oracle PBCS/EPBCS makes it easier for companies to implement a centralized system and achieve success with a zero-based budgeting program. PBCS/EPBCS environments can and should be set up in a way that enhances the zero-based budgeting experience. This is achieved by integrating target setting goals and establishing accountability metrics that allow a deeper dive into budget data while providing greater transparency to make better informed decisions.

To learn more about zero-based budgeting best practices and to get professional help with your Oracle PBCS/EPBCS environments, feel free to contact our team of experts.

In a previous blog post, the history of Hyperion Profitability and Cost Management (HPCM) was discussed along with which modules made it to the Cloud. If you are after a clear-cut comparison between Cloud and on-premise, the table below should fit the bill. Tables generally cannot provide all the needed context, yet they are, at times, the best starting point for understanding the benefits and capabilities of one solution compared to another. The PCMCS vs. HPCM table below is not exhaustive, and if you have questions on any of the items covered, email us and we will provide further details.

Choosing between on-premise and Cloud depends on which factors are the most significant, setting aside overall licensing cost.

Allocations and data assignments cannot have “If” statements attached to them in the on-premise version of the software – a feature fundamental to supporting Tax transfer pricing capabilities.

Cross-dimension mapping is functionality that is not available in HPCM. This mapping ensures the assignment of data sets to the same ID/name across multiple dimensions by using the “Same as Source but Different Dimension” option within PCMCS to support intercompany activities. This feature alone – or the lack of it – may significantly impact the design of an application and the overall complexity of allocation flows.

Features available in the Cloud but not yet released in on-premise solutions could tip the scale in favor of the Cloud option when all other aspects surrounding a Cloud implementation no longer appear to be as pressing. Out-of-the-box content such as overnight backups and full application and data restores at the business users’ fingertips – not to mention the reporting and dashboarding included in the Cloud version – are all differentiators of a product that enables business users to control their allocation process and methodology from its inception.

While there may be exceptions to the trend where on-premise solutions can have advantages (modules not available in the Cloud, for example) and, therefore, represent the best option at a given moment in time, the reality is that the future is being developed in the Cloud and for the Cloud, and at some point the shift will most likely no longer be an option, but a necessity.

If you need help making a decision with an existing implementation or you would like more details about HPCM vs. PCMCS to make a better informed decision, email us at infosolutions@alithya.com. Our PCM Center of Excellence team is ready to share leading practices and industry-specific solutions that accelerate the ROI and expand the capabilities of your chosen software.

Let me start with this: redesign IS possible. ARCS does not permanently punish any design decisions made on “Day 1”…but not all changes are equal in complexity, nor can all changes be made without consequence. A successful implementation ensures that the application design is sound for today and that a well-laid roadmap is in place for tomorrow. Many “one-off” changes can be made either directly to a deployed reconciliation (i.e. within a single period only) or permanently going forward (i.e. to the profile). The “catch” is the key property set on a profile or reconciliation: the Account ID. The Account ID represents the granularity at which the reconciliation is performed, such as [Business Unit]-[Account] or [Entity]-[Natural Account]-[Subaccount].

[Screenshot 6: The Account ID is a unique identifier for the reconciliation.]

The Account ID is fundamental to the reconciliation, as indicated by the asterisks (i.e. “*”) in Screenshot 6. Changing it in any way will break the Prior Reconciliation “link” with previously completed instances of the reconciliation.

But let’s push that idea one step further: what if I want to change the key properties themselves – that is to say, the actual Profile Segments? The Profile Segments determine the name (ex. from “Company” to “Business Unit”), the number (ex. from 2 to 3 segments), and even the type of values (ex. setting up the Business Unit segment to always be an integer) that are viable for use when setting up an Account ID. Therefore, if this was set up incorrectly, or if the granularity at which reconciliations are performed has changed since the initial implementation, then redesigning the Profile Segments may become a requirement.

ARCS even makes this type of redesign possible, but at a cost. An administrator needs to first delete all Profiles; only then will the application allow a modification to the Profile Segments in the Configuration card.

[Screenshot 7a: Unable to modify the Name of Profile Segment 1 which is currently named “Company.” The field appears grayed out. This is because Profiles are currently using these Profile Segments.]

[Screenshot 7b: After removing the Profiles, Profile Segment 1 is now able to be modified. In the example, Profile Segment 1 is renamed to “Business Unit.”]

While Screenshots 7a & 7b show that this is possible, there are repercussions. Similar to changing the Account IDs, this change will break any links to previously completed reconciliations. Additionally, any existing mappings within external integration solutions such as Cloud Data Management or FDMEE, or references to Profile Segments in customized attributes or rules, may be affected. This type of redesign should only be done after carefully considering all options.

Other common questions relate to redesigning an attribute, typically one of the system attributes such as Process or Account Type. This is a straightforward change as it relates to updating the property on the Profiles; however, it is important to note that any reference to an existing artifact (an artifact can be a format, a custom attribute, an attribute member, etc.) within ARCS will prevent the deletion of said artifact. As an example, if the Account Type structure requires redesigning, but there is a reference to any of its members (such as in a historical period), then those members cannot be deleted without first removing the references. This can be tedious when there are multiple years of reconciliations to consider.

[Screenshot 8: When trying to remove the Custom Attribute named “PLACE CUSTOM ATTRIBUTE HERE,” ARCS prevents this deletion and cites which artifact is using the Custom Attribute. In this example, the Bank Reconciliation format is using this Custom Attribute – thus, it cannot be deleted.]

Unlike many system messages, ARCS actually provides useful troubleshooting information as seen in Screenshot 8. However, it still may not be worth it to you to retroactively make this change. A recommendation is to “archive” artifacts that will not be used going forward by renaming them with “Old” or “Hist,” then create a separate artifact to use going forward.

[Screenshot 9: A work-around to deleting previously used artifacts is to rename them and then use a new artifact going forward. In this example, the suffix “- Old” is added to this Custom Attribute to indicate that it is no longer in use.]

Previous uses of the artifact such as in completed reconciliations will update to reflect the name change. In the example provided in Screenshot 9, this custom attribute for historical periods will be updated with the “– Old” suffix to indicate to ARCS administrators that it is no longer in use but was used historically.

ARCS is a flexible application solution that allows for nearly any change to be made, though the effort and complexity will vary. While sound design can prevent many issues, it should be a comfort to know that there is “wiggle room” if the requirements change in the future.

Out-of-the-box, ARCS makes it easy to say “oh, and this!” when adding new scope. The obvious example is monthly maintenance. Reconciliation Administrators and Power Users can build new Profiles to deploy for future months (or even the current month) with relative ease. With the “Copy” feature, previously created Profiles can serve as ready-to-use templates and reduce the manual effort involved in building a Profile from scratch.

[Screenshot 1: The Copy function from the Actions drop-down list can be used to duplicate existing Profiles*]

Copying existing Profiles, as seen in Screenshot 1, is intuitive, built-in functionality. This makes ARCS “Quick Starts” a popular project option when tight on a budget – the Partner will be contracted to create a limited subset of Profiles and the Client can then use these as a starting point to build out the rest, saving on the Build Phase effort.

Another common post-project addition is Custom Attributes. As companies become more familiar with how their end users utilize the tool, new Custom Attributes can be included for reporting purposes (such as filtering or sorting in dashboards), providing information, or collecting feedback. Beyond the three system attributes of Process, Account Type, and Risk Rating, some typical Custom Attributes include source system names, supplemental detail such as cost center or department, or even more dynamic fields such as auto-populating metadata descriptions. Furthermore, where these are placed within a reconciliation changes the nature of what detail is being provided or collected. Custom Attributes can be placed at a reconciliation’s summary level, on each individual transaction, and even on the specific Action Plans within each transaction. Additionally, these can be inherited from a Format or set for individual Profiles. What information is useful or relevant to end users will change depending on the granularity.

[Screenshot 2: Custom Attribute on the Summary tab*]

[Screenshot 3: Custom Attribute on a Transaction*]

[Screenshot 4: Custom Attribute on an Action Plan*]

The variety of locations within the reconciliation to place these Custom Attributes, as seen in Screenshots 2 – 4, and the ease at which these can be added provides your company with the flexibility to determine ‘what’ and ‘where’ information should be presented.

ARCS provides a plethora of tools to grow the application with your company and add on to your “Day 1” implementation. But what if you like what you have built and just want to tweak it? Perhaps you want to move from “fat fingering” to fully integrating with your ERP source systems? The next post, Modifications in Account Reconciliation Cloud Service (ARCS): Tweaking and Tuning, discusses how ARCS can be modularly modified, keeping what you have…but better.

Modularity. My initial experience with this concept was during the build of my first computer. There is a great, omnipresent dread that consumes people who share this hobby – imagine this scenario (or nightmare rather!): you have just invested significant time, energy, and finances to create the perfect machine – only to have it rendered obsolete the next month by changing technology that is incompatible with your swanky new rig! The warring decision of function today versus future proofing for tomorrow is a constant struggle for all tech lovers (or tech survivors, as the case may be). So when a product is able to overcome this dilemma, it’s got my attention.

In my post A Safe Step into the Cloud: The Argument for Account Reconciliation Cloud Service (ARCS), I discussed the modular nature of ARCS as one of the key pillars that made the product an easy recommendation as a first step into the Cloud. For new projects, this is a comforting “safety cushion.” For existing applications, it means you are not stuck with what you have. Push your product to evolve with your needs and ensure that you are eking out every drop of value from your investment.

With ever-changing requirements, it is critical to know what tools are at your disposal. Some changes are straightforward; others…not so much. In this upcoming series of blog posts, we will discuss what it means for ARCS to be a modular solution and explore the four main ways in which this manifests:

New scope

Modifications

Redesign

Automation

Over the next few weeks, we will be tweaking, tuning, tearing down, and putting the application back together to see how there can be no mistakes with modularity.

Recently, my team and I had the opportunity to implement Oracle’s newest offering – Enterprise Data Management Cloud Service (EDMCS). EDMCS, for those of you who are not familiar, provides a cloud-based solution for managing master data (also referred to as metadata) across the organization. Some like to refer to EDMCS as Data Relationship Management (DRM) in the Cloud, but the truth is, EDMCS is not DRM in the Cloud.

EDMCS is a completely new vision of what master data management can and should be. The architect of this new cloud offering is the same person who founded Razza Solutions, the company that developed the product now known as DRM. That is important to know because it ensures that the best of what DRM has to offer is brought forward. More importantly, it ensures that the learnings and the wish list of capabilities that DRM should have are at the forefront of the developers’ minds.

Ok, now let’s get back to the fun stuff. In the 18.05 patch for EDMCS, the REST API (v1) was exposed for public usage. The documentation for the REST API can be found here:

As I highlighted in the previous post Troubleshooting Cloud Data Management Metadata Load Errors, I had developed an automation routine to upload EDMCS extracts to both PBCS and FCCS using FDMEE and Cloud Data Management. We had been eagerly awaiting the REST API for EDMCS to finalize this automation routine and provide a true end-to-end process that can be scheduled or initialized via a single action.

Let’s take a quick look back at the automation routine developed for this customer. After the metadata has been exported to a flat file from EDMCS, the automation would upload a copy to the PBCS and FCCS pods, launch Cloud Data Management data load rules which would process the EDMCS metadata extracts, run a restructure of the database after all dimensions had been loaded, and then send a status email alerting the administrator of the result. While elegant, I considered this to be incomplete.
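
For illustration, here is a minimal sketch of that style of routine driven by the EPM Automate utility from Python. The EPM Automate verbs shown (login, uploadfile, rundatarule, refreshcube, logout) are standard commands, but the URL, credentials, file name, rule name, periods, and load modes are placeholders, and the restructure step is represented here as a cube refresh:

```python
# Illustrative automation wrapper; all names, paths, and credentials are placeholders.
import subprocess
import smtplib
from email.message import EmailMessage

def epm(*args):
    """Run an EPM Automate command and fail loudly if it errors."""
    result = subprocess.run(["epmautomate", *args], capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{args[0]} failed: {result.stdout} {result.stderr}")

# Log in to the PBCS pod (repeat the same pattern for the FCCS pod).
epm("login", "svc_user", "password_file.epw", "https://pbcs-pod.oraclecloud.com")

# Upload the EDMCS metadata extract for Cloud Data Management to consume.
epm("uploadfile", "EDMCS_Account_Extract.csv", "inbox")

# Launch the Cloud Data Management data load rule that processes the extract.
epm("rundatarule", "DLR_Account_Metadata", "Jan-18", "Jan-18",
    "REPLACE", "STORE_DATA", "EDMCS_Account_Extract.csv")

# Refresh/restructure the database once all dimensions have been loaded.
epm("refreshcube")
epm("logout")

# Status email alerting the administrator of the result (SMTP host is a placeholder).
msg = EmailMessage()
msg["Subject"] = "EDMCS metadata load complete"
msg["From"] = "automation@example.com"
msg["To"] = "admin@example.com"
msg.set_content("Metadata load and cube refresh completed successfully.")
with smtplib.SMTP("smtp.example.com") as smtp:
    smtp.send_message(msg)
```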

Automation, in my view, is a process that can be executed without user interaction. While an automation routine certainly has parameters that must be generally maintained, once those parameters are set/updated, the automation cycle should not be dependent on user input or action. In the aforementioned solution, we were beholden to the fact that EDMCS exports had to be run interactively; however, with the introduction of the publicly exposed REST API in the 18.05 EDMCS patch, we are now able to automate the extract of metadata from EDMCS. That means we can finally complete our fully automated, end-to-end solution for loading metadata. Let’s review the EDMCS REST API and how we did it.

The REST API for EDMCS is structured similarly to other Oracle EPM REST APIs; that is, multiple REST commands may need to be executed to achieve a functional result. For example, when executing a Cloud Data Management data load rule via the Data Management REST API, the actual execution of the data load rule is handled by a POST call to the jobs function with the required payload (e.g. DLR name, start period, etc.). This call is just one portion of a functional requirement. To achieve an actual data load, a file may need to be uploaded to the cloud, the data load rule initialized, and then the status of the data load rule retrieved. To achieve this functional result, three unique REST API executions would need to occur.
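
As a minimal sketch of that multi-call pattern in Python (using the requests library), the snippet below kicks off a data load rule and polls the job until it finishes. The /aif/rest/V1/jobs resource follows the documented Data Management REST API, but treat the exact payload and response field names as assumptions; the URL, credentials, rule name, and periods are placeholders:

```python
# Sketch of the Data Management REST pattern: one POST to start the job,
# then repeated GETs to poll its status.
import time
import requests

BASE = "https://pbcs-pod.oraclecloud.com/aif/rest/V1"  # placeholder pod URL
AUTH = ("svc_user", "password")                        # placeholder credentials

# Call 1: initialize the data load rule with the required payload.
payload = {
    "jobType": "DATARULE",
    "jobName": "DLR_Actuals",  # the DLR name
    "startPeriod": "Jan-18",
    "endPeriod": "Jan-18",
    "importMode": "REPLACE",
    "exportMode": "STORE_DATA",
}
job = requests.post(f"{BASE}/jobs", json=payload, auth=AUTH).json()

# Calls 2..n: poll the job resource until the rule completes.
while True:
    status = requests.get(f"{BASE}/jobs/{job['jobId']}", auth=AUTH).json()
    if status["jobStatus"] != "RUNNING":
        break
    time.sleep(15)

print(status["jobStatus"])  # e.g. SUCCESS or FAILED
```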

To export metadata from EDMCS to a flat file using the REST API, the following needs to be executed:

Get the dimension information for the EDMCS application from which metadata will be exported

Execute an export of the dimension(s)

Determine the status of the export

Download the export to a flat file

Let’s explore each of these in a little more detail. First, we need to get the dimension IDs for the application from which we will be downloading metadata. This is accessed from the applications function.

When executing this function, the returned JSON object includes all applications that exist in EDMCS (including those that are archived), so the JSON needs to be iterated to find the record that relates to the application from which metadata needs to be exported. In this case, the name of the application is unique and can be used to locate the appropriate record. Next, we query the JSON object to get the actual dimension ID, which is used in subsequent calls to actually export the dimension.
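
A hypothetical sketch of that lookup is shown below. The /applications endpoint reflects the v1 documentation referenced earlier, but the response structure (the items collection and field names) and the application and dimension names are assumptions for illustration:

```python
# Sketch: iterate the applications JSON to locate the target dimension ID.
import requests

BASE = "https://edmcs-pod.oraclecloud.com/epm/rest/v1"  # placeholder pod URL
AUTH = ("svc_user", "password")                         # placeholder credentials

apps = requests.get(f"{BASE}/applications", auth=AUTH).json()

# The response includes every application (even archived ones), so filter on
# the unique application name, then pull the ID of the dimension to export.
app = next(a for a in apps["items"] if a["name"] == "Corporate Master Data")
dim = next(d for d in app["dimensions"] if d["name"] == "Account")
dimension_id = dim["id"]
```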

Great, now we have the dimension ID. Next, we need to execute the REST API call to export the dimension.
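
Continuing the sketch, the remaining steps – execute the export, determine its status, and download the flat file – might look like the following. The endpoint paths and response fields here are hypothetical stand-ins patterned on the four steps listed above, not confirmed EDMCS resource names:

```python
# Sketch of steps 2-4: export the dimension, poll the job, download the file.
import time
import requests

BASE = "https://edmcs-pod.oraclecloud.com/epm/rest/v1"  # placeholder pod URL
AUTH = ("svc_user", "password")                         # placeholder credentials
dimension_id = "100000012345"  # from the previous lookup sketch

# Step 2: kick off the dimension export to a named flat file.
job = requests.post(f"{BASE}/dimensions/{dimension_id}/export",
                    json={"fileName": "Account.csv"}, auth=AUTH).json()

# Step 3: poll until the export job is no longer running.
while True:
    status = requests.get(f"{BASE}/jobRuns/{job['id']}", auth=AUTH).json()
    if status["status"] != "RUNNING":
        break
    time.sleep(10)

# Step 4: stream the completed export down to a local flat file.
with requests.get(f"{BASE}/files/Account.csv", auth=AUTH, stream=True) as resp:
    with open("Account.csv", "wb") as out:
        for chunk in resp.iter_content(chunk_size=8192):
            out.write(chunk)
```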

And there you have it, a fully automated solution to download metadata to flat files from EDMCS. Those files are then provided to the existing automation routine and our end-to-end process is truly complete.

And for my next trick…let’s explore some of the different REST API tools that are available to help you in your journey with the EPM REST APIs.

In my last post, I highlighted a solution that was recently deployed for a customer that leveraged Enterprise Data Management Cloud Service (EDMCS), Financial Data Quality Management Enterprise Edition (FDMEE), and Cloud Data Management (CDM) to create an automated metadata integration process for both Planning and Budgeting Cloud Service (PBCS) and Financial Close and Consolidation Cloud Service (FCCS). In this post, I want to take a bit of a deeper dive into the technical build and share some important learnings.

Cloud Data Management introduced the ability to load metadata from a flat file to the Oracle EPM Cloud Services in the 17.11 patch. This functionality provides customers the ability to leverage a common platform for loading both data and metadata within the Cloud. Equally important, CDM allows metadata to be transformed using its familiar mapping functionality.

As noted, this customer deployed both PBCS and FCCS. Within the PBCS application, four plan types are active while FCCS has the default two plan types. A design decision was made for EDMCS to create a single custom application type that would store the metadata for both cloud applications. This decision was not reached without significant thought as well as counsel with Oracle development. While the pros and cons of the decision are outside the scope of this post, the choice to use a custom application registration in EDMCS ensured that metadata was input a single time but still fed to both cloud applications. As a result of the EDMCS design decision, a single metadata file (per dimension) was supplied with properties necessary to support each plan type.

CDM leverages its 23 “dimensions” to store metadata information for processing. Exactly like data, metadata is imported into the CDM relational repository using an import format. Each field from a metadata file is aligned to a CDM dimension field. The CDM Account dimension always represents the target application member name, and the CDM Entity dimension represents the parent of the member. All other fields can be aligned to any of the remaining 21 dimensions. CDM Attribute dimensions can be utilized in the import and mapping process but are not available for export to the cloud application. This becomes an important constraint, especially in a multi-plan-type deployment. These 21 fields can be used to store any of the properties required to successfully load metadata to the target plan type.

Let’s consider this case study for a moment. The PBCS application has four plan types. If a process were built to load all plan types from a single CDM data load rule, then we could not have more than five plan-type-specific attributes or properties because we would not have enough CDM fields/dimensions to store the relevant information (five properties for each of four plan types would already consume 20 of the 21 available fields). This leads to an important design approach: instead of a single CDM data load rule loading all plan types, a data load rule was created for each plan type. This greatly increased the number of metadata properties and attributes that could be loaded by CDM and ensured that future growth could be accommodated without a redesign of the integration process.

It is important to understand that CDM utilizes the Planning Outline Load Utility (OLU) to actually perform the metadata load to the cloud application. The OLU loads metadata using merge (yes, Planning experts, I realize that I am not discovering fire), which is important to understand, especially when processing multiple metadata loads for a single application. When loading metadata, certain properties are application level; I like to think of these as global. Additionally, there are plan-type-specific attributes that can align (or not align) with the application-level value/setting; I like to think of these as local.

Why is this important? When loading a metadata file, if certain global properties are excluded from the load file, the local properties (if specified) are used to default the global properties. Since metadata is loaded using merge, this only becomes problematic when a new member is being added to the outline.

In this particular example, an alternate hierarchy with shared members was specified in one of the plan types. The storage property of the alternates was obviously set as Shared; however, when attempting the metadata load, the following error was encountered:

A Base Member cannot be changed to a Shared Member.

After much investigation (details to follow), I discovered that the global property should also be included in the metadata load.

While CDM utilizes the OLU to load metadata, it does not provide as much verbosity in the error information as the PBCS web interface (which also uses the OLU) when loading metadata. Below is an example of the error in the CDM process log. As a tangent, I’d love to be able to check the logs without needing to open a Service Request. Maybe Oracle will build an enhancement that allows that in the future (hint, hint, wink, wink to my friends at Oracle).

So where do I go from here? Well, what do we know about CDM loading metadata to the cloud application? We know that CDM uses the OLU to load a flat file generated by CDM. Bingo! The metadata file output by CDM is a good starting point. That file is in the Outbox of the CDM application and can be downloaded in several different ways – CDM Import process (get creative folks), CDM process details, or EPM Automate. Now we have the metadata file and can test to determine if the error is caused by CDM or the metadata itself. It’s all about ruling out variables. So, we take the metadata file and import it manually within the PBCS web interface and are able to replicate the error. But now we have an important new data point – the line number from the metadata file that is causing the error.
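
For the EPM Automate route, the retrieval can be scripted in a couple of lines; a quick sketch follows. The outbox file name is illustrative – the actual name comes from the CDM process details:

```python
# Sketch: download the CDM-generated metadata file from the outbox via EPM Automate.
import subprocess

subprocess.run(["epmautomate", "login", "svc_user", "password_file.epw",
                "https://pbcs-pod.oraclecloud.com"], check=True)
subprocess.run(["epmautomate", "downloadfile",
                "outbox/DLR_Account_Metadata_123.dat"], check=True)  # placeholder name
subprocess.run(["epmautomate", "logout"], check=True)
```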

Now that we have actionable information, we can review each property and start isolating and eliminating different variables. We determined that this error was only occurring for new alternate hierarchy parents being added to the outline. As a test, we added the global storage property and voila, the metadata load completed successfully. Face palm!
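
To make the fix concrete, here is a simplified sketch of what the corrected metadata file might look like for the new alternate hierarchy members. The header and property names loosely follow Outline Load Utility conventions and are illustrative only; the point is that the global Data Storage column is now supplied alongside the plan-type-specific one:

```
Account, Parent, Data Storage, Data Storage (Plan1)
ALT_REGION_TOTAL, TOTAL_ACCOUNTS, never share, never share
AC10100, ALT_REGION_TOTAL, shared, shared
AC10200, ALT_REGION_TOTAL, shared, shared
```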

Maybe this would have been obvious to folks with a lot of Planning experience, but there are plenty of folks learning the intricacies of Planning and Essbase (including our friends converting from HFM to FCCS), so I wanted to share a lesson learned in my journey of metadata integration using CDM.

CDM functionality for metadata represents two of the three primary operations of ETL. In my next post, we’ll dive deeper into how the extract component of ETL was accomplished to provide a seamless end-to-end ETL solution for metadata.

“But we’re going to the Oracle EPM Cloud soon!” you say. You should maintain your patches anyway. With the recurring maintenance, updates, and patches available to the EPM Cloud products, expect the on-premise patches to contain similar updates. An upcoming conversion to Oracle EPM Cloud products may benefit from running the latest on-premise codelines.

If you have an existing on-premise installation of Oracle EPM System, be sure to maintain the latest EPM System Patch Set Updates every 3 to 6 months. Here are a few great reasons why:

New Features

Patches often contain reactive bug resolutions to known issues; however, we have also seen new functionality released in patches for 11.1.2.4.

You Own It

You already pay for it! As long as your Oracle Maintenance contract is current (very likely if you are reading this article), you’re already paying for access to patches. Why leave them unapplied? You are running legacy code when the latest version costs you nothing additional. Windows XP was a great OS, but we’ve got to keep up with the times.

Supportability

Maximize your success by reducing time to resolution on your issues. Should you submit a support request to the vendor, the first line of response to a ticket is often a question about current patch levels. Once provided, the subsequent reply frequently contains a recommendation to apply the latest Patch Set Updates (PSUs) to see if that fixes the issue. Annoying? Perhaps you’re a pessimist – or have just been remiss with your patching. I’ve certainly changed my mind on the matter and can now better side with the vendor. The reason? Supporting the latest codeline is more efficient and effective for them: your problem may have already been addressed in a code fix, and they can support you better and more quickly if they are troubleshooting the current release instead of legacy code.

Stability

In older versions, patches seemed to come out on a haphazard schedule. Over the last few years, Oracle has streamlined EPM System patch releases, typically releasing Patch Set Updates (PSUs) quarterly; these are different from Patch Set Exceptions (PSEs). A PSU is a grouping of PSEs – or a smaller number of more significant PSEs – that is regression tested collectively by the vendor and released as a single patch. We’ve gained a much higher degree of confidence with this bulk model of PSUs. The release schedule and organization of bug fixes are more dependable and greatly appreciated. The PSU model leaves less ambiguity about which patches to apply and brings greater stability to all customers.

Upgrade

Maybe it’s bigger than patching. Are you not on version 11.1.2.4 of your EPM System? Compliance with enterprise IT requirements around browser versions and operating systems is often the impetus for an upgrade, but there are also plenty of compelling new software features, functions, conventions, and improvements in 11.1.2.4.

Operating System (OS) support for current platforms maximizes your investment and supportability. When 11.1.2.4 came out, many customers were forced to upgrade their older systems for compliance with the latest enterprise standards for server operating systems and/or client browser versions. Instead of being faced with an IT-mandated technology upgrade, an upgrade on the business’ schedule is preferable.

What Kind of Effort is Involved?

The comprehensive effort to bring a simple deployment (3-4 servers, no High Availability) up to the latest PSUs is typically less than a day per environment. That includes an analysis of existing patches, the patching itself as well as any prerequisites, and a post-check verification to confirm all patches applied are properly indicated in the corresponding inventories.

An initial patch application may take a little bit longer because there are often common prerequisites to address that don’t have to be handled with subsequent patching. There are also considerations like bringing WebLogic up to the latest patch level, as well as one-offs like the fixes for the Equifax-discovered vulnerabilities, that don’t happen frequently. Once you’ve got a solid base of primary critical patching, additional patching events are typically shorter.

Patching can be tricky. Documentation can often be ambiguous, whether through an unintended omission or knowledge assumed based on implied experience with the product. Sometimes post-install instructions get skipped, or SQL statements do not get executed properly as part of the patch. Less experienced resources typically patch only the EPMSystem11R1 Oracle Home; however, did you know that Oracle’s ADF framework also has an OPatch directory under oracle_common? Those patches are often prerequisites. And what about Oracle Data Integrator (ODI) and Oracle HTTP Server (OHS)? They may also have applicable OPatches. Who knows what you’re missing? We do! Let’s button it up.
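
To make the post-check concrete, here is a hedged sketch that lists the patches recorded against each Oracle Home using OPatch’s lsinventory command. The home paths are typical defaults for a Windows 11.1.2.4 install and are assumptions – adjust them to your environment:

```python
# Sketch: verify patch inventories across every Oracle Home that may carry OPatches.
import os
import subprocess

ORACLE_HOMES = [
    r"E:\Oracle\Middleware\EPMSystem11R1",  # core EPM System home
    r"E:\Oracle\Middleware\oracle_common",  # ADF framework home
    r"E:\Oracle\Middleware\ohs",            # Oracle HTTP Server home, if present
    r"E:\Oracle\Middleware\odi",            # Oracle Data Integrator home, if present
]

for home in ORACLE_HOMES:
    opatch = os.path.join(home, "OPatch", "opatch.bat")  # plain "opatch" on Unix
    print(f"=== {home} ===")
    # lsinventory lists the patches recorded in this home's inventory.
    subprocess.run([opatch, "lsinventory", "-oh", home])
```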

A friendly game of laser tag between out-of-shape technology consultants became a small gold mine of analytics simply by combining the power of Essbase and the built-in data visualization features of Oracle Analytics Cloud (OAC)! As a “team building activity,” a group of Edgewater Ranzal consultants recently decided to play a thrilling children’s game of laser tag one evening. At the finale of the four-game match, we were each handed a score card with individual match results and other details such as who we hit, who hit us, where we got hit, and hit percentage based on shots taken. Winners gained immediate bragging rights, but for the losers, it served as proof that age really isn’t just a number (my lungs, my poor collapsing lungs). BUT…we quickly decided that it would be fun to import this data into OAC to gain further insight about what just happened.

Analyzing Results in Essbase

Using Smart View, a comprehensive tool for accessing and integrating EPM and BI content from Microsoft Office products, we sent the data straight from Excel to Essbase (included in the OAC platform), where we could then apply the power of Essbase to slice the data by dimensions and add calculated metrics. The dimensions selected were:

Metrics (e.g. score, hit %)

Game (e.g. Game 1, Game 2, Total)

Player

Player Hit

Target (e.g. front, back, shoulder)

Bonus (e.g. double points, rapid fire)

With Essbase’s rollup capability, dimensions can be sliced by any one item or at a “Total” level. For example, the Player dimension’s structure looks like this:

Players
    Red Team
        Red Team Player 1
        Red Team Player 2
    Blue Team
        Blue Team Player 1
        Blue Team Player 2

This provides instant score results by player, by “Total” team, or by everybody. Combined with another dimension like Player Hit, it’s easy to examine details like the number of times an individual player hit another player or another team in total. You can drill into “Red Team Player 1 shot Blue Team” or “Red Team Player 1 shot Blue Team Player 1” to see how many times a player shot a team or an individual player. A simple Smart View retrieval along the Player dimension shows scores by player and team, but the data is a little raw. On a simple data set such as this, it’s easy to pick out details, but with OAC, there is another way!

Even More Insight with Oracle Analytics Cloud (OAC)

Using the data visualization features of OAC, it’s easy to build queries against the OAC Essbase cube to gain interesting insight into this friendly folly and, more importantly, answer the questions everybody had: what was the rate of friendly fire, and who shot whom? We built an initial pivot chart by simply dragging and dropping Essbase dimensions onto the canvas – game number, player, and score – and coloring by our Essbase metric “Bad Hits” (a calculated metric built in Essbase to show when a player hit a teammate). We quickly discovered who had poor aim…

Dan from the Blue team immediately stands out, as do Kevin and Wayne from the Red team! This points us in the right direction, but we can easily toggle to another visualization that might offer even more insight into what went on. Using a couple of sunburst-type data visualizations, we can quickly tie together who was shooting and who was getting hit – filtered to the same team, weighted by score, and color coded by team color.

It appears that Wayne and Kevin from the Red Team are pretty good at hitting teammates, but it is also now easy to conclude that Wayne really has it out for Kevin, while Kevin is an equal-opportunity shoot-you-in-the-back kind of teammate!

Reimagining the data as a scatter plot gives us a better look at the value of a player in relation to friendly fire. By dragging the “Score” Essbase metric into the size field of the chart, correlations are discovered between friendly fire and hits on the other team. While Wayne might have had the highest number of friendly-fire incidents, he also had the second-highest score on the Red team. The data shows visually that Kevin had quite a few friendly-fire incidents but didn’t score as much (it also shows results that allow one to infer that Seema was probably hiding in a corner throughout the entire game, but that’s a different blog post).

What Can You Imagine with the Data Driving Your Business?

By combining the power of Essbase with the drag-and-drop analytic capabilities of Oracle Analytics Cloud, discovering trends and gaining insight is very easy and intuitive. Even in a simple and fun game of laser tag, results and trends are found that aren’t immediately obvious in Excel alone. Imagine what it can do with the data that is driving your business!

Account reconciliations – the means by which a company proves the integrity of its account balances – are a fundamental part of the financial close process. Imagine you are trying to build a sandcastle. Now imagine your “sand” is harvested from a cow pasture. You *could* continue to build this “sandcastle,” but you will likely finish with a pile of…bull-sand. In the same way, if your account balances and transactions have an integrity equivalent to “bull-sand,” this will inevitably lead to problems down the line.

The shift to the Cloud has complicated the decision-making process when considering new enterprise-wide application tools. The choice of whether to go with a known “on-premise” solution or take a bold step into Cloud solutions is a daunting one, particularly when considering moving high-visibility cycles such as forecasting or financial consolidations into this brave new world.

A Justified Recommendation

Take the measured move instead. If you feel hesitant to go “all-in” on Cloud offerings, here are four reasons why you should consider entering the Cloud through the arch of ARCS…the ARCSway (Get it?…archway…ARCSway…never mind – just keep reading…)

Safe Bet on a Strong Foundation

Back in 2016, Oracle introduced Account Reconciliation Cloud Service (ARCS) as the “one-stop shop” solution for managing and streamlining the reconciliation cycle in the Cloud. While it’s not uncommon for some EPM products to lose functionality during their initial transition into the Cloud space, ARCS retains the “good bones” of its on-premise counterpart, Account Reconciliation Manager (ARM). ARCS builds upon the clever functionality and customizability of ARM, released in 2012, yet with the slick look and feel of the Oracle Cloud experience.

Since its release, ARCS has become the “golden child” of the reconciliation product family, not only receiving “first dibs” on refinement of existing capabilities, but also benefiting from the newest components such as Transaction Matching (note: this is licensed separately from the Reconciliation Compliance component of ARCS). As the product continues to gain steam, this trend is expected to continue. Between the tried-and-true foundation of the ARM tool and Oracle’s watchful eye, ARCS is a safe bet.

No Mistakes with Modularity

Unlike some applications, ARCS is easy to implement in pieces. While good design will certainly prevent future heartache, there are no decisions made on Day 1 of a project that cannot be modified or enhanced in the future:

Want to manually enter data for reconciliations today, but automatically load them from a source system tomorrow? We can do this.

Missing fields for additional detail you would like users to include? They can be ready for next period (or even the current one!)

Only want to roll out in one country to start? No problem – go ahead and make the other entities jealous!

While some changes are “cleaner” than others (I am looking at you, Profile Segments!), ARCS welcomes you to “test the waters” and see what works in your company without needing to go “all-in.” For example, a current client has a live ARM application that provides a viable solution for its reconciliation process needs given the initial project timeline and budget. Although the client wasn’t able to fully utilize the available functionality at the time, the modularity of the reconciliation tools (both ARM and ARCS) allows the opportunity for enhancements without punishing this design decision – we are now revamping the client’s auto-reconciliation setup to further streamline the process. For Partners, this means additional project phases; for clients, this means not biting off more than you can chew (win-win!).

Relative to other EPM project lifecycles, ARCS is typically a quick implementation. As with all projects, there are certainly exceptions, but with Ranzal’s “Quick Start” methodology, we have stood up applications in just six weeks! A strong inventory of project “accelerators” – custom tools and scripts that Ranzal has developed based on common requests across multiple clients – allows sophisticated deployments in a timely manner. Couple this with the inherent time saving benefits of Cloud technology (i.e. lack of infrastructure setup, etc.), and ARCS shines as the first step in a Roadmap, producing tangible metrics for evaluation (ex. completion percentages per period, timeliness per Preparer/Reviewer, reconciliation accuracy, etc.) and giving users a taste of the Oracle Cloud experience in a short period of time.

You Don’t Have Anything Today and It’s Costing You

I know that may read like a presumptuous fear tactic, but hear me out:

Account reconciliations ARE being completed in your company – one way or another. Whether that means your CPAs are *click*click* clicking away on their keyboards to manually update Excel spreadsheets or – heaven forbid – actually printing out recons to hand sign, if you cannot name the system that is comprehensively handling your reconciliation cycle, it’s because there isn’t one.

And this is normal. But there are costs associated with this normalcy.

Reconciliation cycles aren’t sexy (well…personal taste…) and often have low visibility to upper management. And yet (!) the reconciliation process is often widespread across the company, spanning business entities, departments, and corporate ladders (I see you, Mr/s. Director signing off on recons). ARCS is an attractive enterprise-wide Cloud solution to “test run” because everyone can try it. A successful ARCS implementation paves the way for easier adoption of future projects – it gets everybody onboard.

Step Through the “ARCSway” and Ditch the Bulls*#!

The shift to the Cloud is disrupting the traditional market of on-premise EPM solutions. As you look at the new strategic options available to your company’s roadmap, consider ARCS as a “first step.” Of note, it is important to have an accurate understanding of the tool – ARCS is first and foremost a management tool, and although it can provide helpful information in troubleshooting account variances, it does not replace actually performing a reconciliation in an ERP system. Additionally, customizing reports can be difficult (unless you are familiar with BI Publisher), although the out-of-the-box reports and strong dashboarding capabilities largely make up for this limitation. All-in-all, I strongly recommend this product as an introduction to the new Oracle offerings. ARCS’ “low risk, high reward” nature provides real company value quickly while presenting you with a good picture of life in the Cloud. Now is the perfect time to ditch your “bull-sand” reconciliation process and update to a more solid foundation in the Cloud through the ARCSway.

Contact us today for details about a custom Cloud solution for your business needs.