In PerformancePoint Server 2007 Planning, you build centralised business rules in a new language called PerformancePoint Expression Language (PEL). PEL is very MDX-like and relatively straightforward to pick up if you are even a little familiar with MDX.

The beauty of PEL is that it can generate either MDX or T-SQL scripts from the same PEL expression. This allows the developer/analyst to target either the cube itself, using an MDX script, or the fact table, using T-SQL, depending on which implementation approach is more suitable for the type of calculation and for performance.
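The internals of the PEL compiler are not public, but the one-expression/two-targets idea can be sketched with a toy example. Everything below is invented for illustration (the class, the column names, the generated scripts); it is not real PEL machinery, just a picture of how a single rule definition can be rendered against either the cube or the fact table:

```python
# Purely illustrative sketch: NOT the actual PEL compiler. It only shows the
# idea of one rule definition rendering to two different target languages.

class Assignment:
    """A simplified 'rule': assign an expression to a scope of cells."""

    def __init__(self, measure, scenario, expression):
        self.measure = measure        # e.g. "Value"
        self.scenario = scenario      # e.g. "Budget"
        self.expression = expression  # e.g. "[Value] * 1.1"

    def to_mdx(self):
        # MdxQuery implementation: target the cube with a SCOPE statement
        return (f"SCOPE ([Scenario].[All Members].[{self.scenario}]);\n"
                f"    THIS = {self.expression};\n"
                f"END SCOPE;")

    def to_sql(self):
        # SQL implementation: target the fact table directly
        return (f"UPDATE FactTable\n"
                f"SET [{self.measure}] = {self.expression}\n"
                f"WHERE ScenarioKey = '{self.scenario}';")

# One rule definition, two generated scripts
rule = Assignment("Value", "Budget", "[Value] * 1.1")
print(rule.to_mdx())
print(rule.to_sql())
```

The point is simply that the rule is authored once and the implementation choice (MdxQuery or SQL) decides which renderer runs.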

Within the Business Rules Editor workspace you can easily select the implementation (SQL or MdxQuery) from the Rules Detail property list.

With the rule saved (it does not have to be deployed), you use the rule's context menu to debug it.

If you come from a development background like me, you may expect rather more to happen than actually does!

Depending on which implementation you have selected, and whether or not your PEL is valid, either the MDX or the T-SQL is generated from the PEL and displayed in a window. It's important to realise that the PEL expression has not been run; the output is for information purposes only.

From this point you can eyeball the resultant query to help determine your issue or, as I tend to do, cut and paste it into a New Query window inside SQL Server Management Studio. The resultant T-SQL tends to be extremely verbose, so actually debugging the problem using the T-SQL will be rare!

If there is a problem with the PEL itself, the reason is displayed in the window instead of the resultant MDX/T-SQL, in the form of a (normally) reasonably helpful error message.

I was initially a little disappointed with the built-in debugging facilities but, considering the target audience of the Planning Business Modeler, full integration with SSMS or Visual Studio was never going to be a consideration. To be fair, it is probably just enough. The debugger is primarily aimed at ensuring your PEL is correct; it stops somewhat short of helping you debug the actual business logic, but that can be handled by cutting, pasting and hacking the resultant MDX or T-SQL into another tool, and even that becomes less and less necessary as your PEL writing skills develop.

Despite all the hype, the recent launch, several PPS-related conferences, numerous articles and the pull-no-punches OLAP Report preview, there is still some doubt about what the Planning element actually offers.

For a specific example, imagine, without PerformancePoint Server, trying to build an enterprise-wide solution to capture quarterly financial budgets and/or forecasts from every budget-holding manager across the organisation, for consolidation and approval by the CFO. The issues you would face would revolve around data capture, workflow, security, consolidation, business rules, performance, tracking and so on. Even if you were successful, it's likely that you would eventually end up in Excel hell, or be left with no confidence in the results and a huge team on full-time support.

Step forward PerformancePoint Server Planning. It removes much of the pain associated with all of the above, and it is not limited to financial scenarios. In addition, when it is integrated with PerformancePoint Monitoring, you can track actuals against the plans/targets captured in Planning to ensure the business stays on track. When the inevitable deviations occur, the Analytics element of PerformancePoint Server (currently ProClarity) will help analyse why, so that corrective action can be taken.

I posted about the Planning Server topology recently and, to complement this, the excellent deployment guide has been released. It covers all three elements of PerformancePoint Server 2007, not just Planning.

At the time of writing the Planning forum was empty and the M&A forum had just a single topic. It will be interesting to see how activity picks up through the launch period and into the New Year.

I've been setting up some PerformancePoint Planning demonstrations for both clients and internal knowledge transfer. As part of these demonstrations I've been loading the Account Dimension from CSV files.

There are several other ways of loading data into PerformancePoint planning dimensions and models and I'll no doubt post about the alternatives in the future.

There is a small gotcha that I thought I'd share. The pre-defined Account dimension contains a member property called Account Type. This member property utilises a lookup table for the various built-in account types, such as Unit, Expense etc.

The PerformancePoint CSV format requires that the first row contains the field (or rather member property) names, and optionally data types, with the remaining rows holding the actual data. This is slightly different for the Account Type member property: as this is a lookup field, you need to specify the key field name instead, in this case AccountTypeMemberId.

With that known, you would be forgiven for thinking that, in order to load data against that field, you need to specify the actual AccountTypeMemberId. However, that would result in a new member property being created, called 'AccountTypeMemberId', containing the value rather than the description. The intended destination field, Account Type, would be left unpopulated.

Instead, to correctly load the member property, rather than use the Id you need to specify the actual description from the lookup table. (Not the only unintuitive feature of PerformancePoint Planning!)
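To make the gotcha concrete, here is a minimal sketch of what a working load file might look like. The Label/Name/Description columns and member values are illustrative assumptions, not an exact product schema; only the AccountTypeMemberId header and the use of lookup descriptions (e.g. "Expense", "Unit") in the data rows reflect the behaviour described above:

```python
import csv
import io

# Build a minimal Account dimension load file in the PerformancePoint CSV
# style: the first row holds the member property names. Note that the
# Account Type column is headed with the lookup KEY field name,
# AccountTypeMemberId, yet the data rows must carry the lookup
# *description* (e.g. "Expense"), not the numeric Id. Supplying the Id
# would just create a new member property called 'AccountTypeMemberId'
# and leave Account Type unpopulated.
rows = [
    ["Label", "Name", "Description", "AccountTypeMemberId"],
    ["AC100", "Salaries", "Staff salaries", "Expense"],
    ["AC200", "Headcount", "Permanent headcount", "Unit"],
]

buf = io.StringIO()
csv.writer(buf, lineterminator="\n").writerows(rows)
print(buf.getvalue())
```

The counter-intuitive part is entirely in the last column: key field name in the header, human-readable description in the data.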

I have it on good authority that the UK launch for PerformancePoint Server 2007 is 16th October 2007, a day under four weeks after the 19th September 2007 US launch.

I'm not sure whether either of these dates coincides with the RTM date, or whether RTM will cover just the 32-bit version or the 64-bit version too. The 64-bit CTP releases have followed shortly after the 32-bit versions, so I guess it's fair to assume the RTM will be no different.

You've been badgering away at a PerformancePoint Server Planning model all day: updating objects, writing complex MDX scope statements to satisfy obscure business requirements. You've only got to add a new view to the time dimension before you can deploy and start testing, when you find that your colleague (the one who had to leave early to pick up the kids) has it checked out!

What's the problem? Surely you can undo the checkout from the admin console, like you can in VSS Admin? Okay, he could lose his isolated changes, but you've been at the whole model all day. Well, there is some good news and some rather irritating bad news!

There is indeed an admin-level undo checkout feature that unlocks the objects so you can check them out yourself, and it is easily found in the Planning Admin Console.

Its one drawback is that it forces you to undo the checkout for ALL checked-out objects in an application. There is no way to select an individual planning object!

In my scenario it was not that big a deal, as I could check all my objects in and then force the undo checkout on the only remaining checked-out object, but I'm not always going to be that lucky.

On the face of it, it would be little effort for the Redmond gang to add that functionality to the Admin Console. You can obviously check individual objects in and out from within the Business Modeler, so the stored procs already exist, right? Hmm, actually, that gives me an idea...