In enterprise business applications, complexity stems from complex enterprise needs. To give you a flavor, here are just a few examples of requirements that lead to the creation of the complex platform infrastructure that makes up complex enterprise software.

User Interface

Translation for multiple languages

Localization for different regions (e.g. currency, dates)

Complex Query Needs (custom fields, saved queries etc)

Customizability/Extensibility

Adding custom attributes, objects

Modifying out-of-the-box functionality (e.g. business processes)

Modifying the look and feel for personalization by end users (e.g. MyYahoo)

Verticalization needs (e.g. industry specific flavors)

Operational

High Availability (for planned downtimes)

Selective feature uptake (to avoid re-training thousands of users)

Performance requirements for high volume, latency, throughput etc.

Functional

Complex security needs (users, roles etc)

Organizational setup (e.g. business units, divisions etc)

Error handling and compensation for rolled back business processes

There are also a few additional factors to consider for large enterprise software products:

Development Scalability
For multi-product application suites, economies of scale are achieved by consolidating common functions into a common platform, which then faces the same complexities the applications face when trying to serve a wide range of customers. This adds a layer of “knobs” to tune over and above the functional knobs that the application software provides.

Integration
No enterprise software lives on an island, and integration with other systems is usually one of the big costs in application deployments. Integrations are inherently complex due to the nature of trying to tie together heterogeneous applications with different data models, granularity, cardinality, semantics and protocols.

Standards
To allow for a plug-and-play model, most platform-level APIs are exposed through facade-style interfaces that can plug into various implementations (e.g. JAZN), which adds a layer of complexity. The use of standards-based technologies (BPEL, ESB, Web Services) also contributes some complexity, as those standards aim to satisfy requirements from all the participating members who created them.
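To make the facade-style plug-in idea concrete, here is a minimal sketch in Python. The names (`AuthProvider`, `LdapProvider`, `DatabaseProvider`) are illustrative inventions, not APIs from any actual product; the point is only the shape of the pattern, a stable interface the platform codes against with swappable implementations behind it.

```python
# Minimal sketch of a facade-style pluggable API.
# All names here are hypothetical; real platforms (e.g. JAZN) differ.
from abc import ABC, abstractmethod

class AuthProvider(ABC):
    """The stable facade the platform codes against."""
    @abstractmethod
    def authenticate(self, user: str, secret: str) -> bool: ...

class LdapProvider(AuthProvider):
    """One pluggable implementation (stand-in for an LDAP bind)."""
    def authenticate(self, user, secret):
        return bool(user) and bool(secret)

class DatabaseProvider(AuthProvider):
    """Another implementation (stand-in for a table lookup)."""
    def authenticate(self, user, secret):
        return secret == "s3cret"

def login(provider: AuthProvider, user: str, secret: str) -> bool:
    # Platform code only sees the facade, never the concrete class.
    return provider.authenticate(user, secret)

print(login(LdapProvider(), "alice", "pw"))      # → True
print(login(DatabaseProvider(), "alice", "pw"))  # → False
```

The complexity the text refers to lives in keeping that facade general enough for every implementation behind it.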

Perception
I do not pretend that all enterprise software is absolutely user-friendly, but do realize that large chunks of enterprise software are targeted at specific roles, and may seem very unfriendly to users who are not in those roles. For example, a snazzy, graphical UI is not necessarily as “simple” for a data-entry clerk as it is for a knowledge worker.

When you think about it, a large percentage of the world economy depends on enterprise business software (ERP, CRM, HCM etc.), and requirements such as those above tend to add complexity.

Note that while the software can be complex, there is no excuse for not making the end-user experience as smooth as it can be. In my experience, though, some complexity does tend to bleed into the user experience, especially in on-premise deployments. The SaaS model shields end-users from most of the complexity to some extent, but once you get into requirements such as integration, complexity finds its way through during implementations.

UPDATE: To clarify, the tips above specifically address the challenges put forth in the earlier post around enterprise application integration projects. For other enterprise projects, a vanilla Scrum approach may work fine.

Enterprise integrations are complex both functionally, due to the implementation of a business process, and technically, due to the introduction of one or more runtime layers between applications. Since these integrations typically represent end-to-end business flows, developers need to ensure that performance meets the business need.

Here are some considerations when planning for performance testing of service oriented architecture (SOA) projects that integrate enterprise applications, such as Oracle’s Application Integration Architecture (AIA).

Update April 21, 2011: AIA specific tuning details can be found in Chapter 28 of the Developer’s Guide for AIA 11gR1 (E17364-02).

1. Define the End Goal. Clearly.

It may sound obvious, but the main cause of performance testing efforts going awry is the lack of a clear end goal.

Note: “make it run faster” does not count as a clear goal!

Quantify desired metrics in an objective manner by setting Key Performance Indicators (KPIs). Here are some KPIs you may want to track:

Throughput of the end-to-end business flow by users, payload size, volume

Response Time for the end-to-end business flow by users, payload size, volume

System performance KPIs should be derived from business metrics, so that both business and IT are involved. This results in more realistic goals than arbitrary benchmarks set by developers or vendors. For example, a throughput KPI could be derived from a formula that uses software cost and peak order volume to yield a “minimum orders per CPU core per minute” indicator that satisfies the business need.
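As a sketch of that kind of derivation (all figures here are hypothetical, not from any real deployment), a throughput KPI can be computed directly from the peak business volume and the available compute:

```python
# Hypothetical derivation of a throughput KPI from business metrics.
# The numbers and the headroom factor are illustrative assumptions.

def derive_throughput_kpi(peak_orders_per_hour: int,
                          cpu_cores: int,
                          headroom: float = 1.25) -> float:
    """Minimum orders/CPU core/minute needed to absorb peak load,
    with a safety headroom above the observed peak."""
    orders_per_minute = peak_orders_per_hour / 60
    return orders_per_minute * headroom / cpu_cores

# E.g. 24,000 orders/hour at peak, 16 cores, 25% headroom:
kpi = derive_throughput_kpi(24_000, 16)
print(round(kpi, 2))  # → 31.25 orders/core/minute
```

The exact formula will differ per business; what matters is that the KPI traces back to a business number rather than a developer's guess.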

When looking at transactions, always consider “peak” spikes versus the average. For example, incoming orders usually have peak periods (e.g. holiday season sales), during which the system is subject to a transaction load an order of magnitude higher than at non-peak times. Defining KPIs based on peak transaction volumes not only helps set realistic goals, but also ensures the project truly succeeds by handling the load when the business needs it most.

Finally, don’t try to boil the ocean – identify a subset of the integration use cases that are prone to performance bottlenecks and meet all the KPIs for those before attempting other ones.

3. Do you REALLY Need Production Grade Hardware for Testing?

Using dedicated hardware is always better than sharing existing development or QA environments. However, every business has different needs for its enterprise applications, and these needs vary by business process. For example, an order-to-cash process may need consistently high performance under medium-high load, whereas a financial close process may need it only once a quarter, under high load.

Instead of buying or configuring hardware that exactly matches every possible target scenario, consider using commodity hardware with matching “normalized” KPIs that are downsized from the target business scenario. For example, say the production hardware provides a given compute unit (CPU/memory/cache specification), and the commodity hardware is determined to be one-fourth of that compute unit. If the business KPI target is 40 orders/CPU core/minute on production-grade hardware, then the internal, normalized KPI would be one-fourth of that, i.e. performance testing would need to achieve 10 orders/CPU core/minute on the commodity hardware to be considered successful.

Of course, the benchmark may not scale linearly, but this can easily be factored into the equation, providing a good, educated estimate of integration performance. Compared to the alternative of not testing at all due to hardware unavailability and discovering issues in production, using commodity hardware with normalized KPIs can be a very viable performance testing approach.
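The normalized-KPI arithmetic from the example above can be written down in a few lines. The optional scaling-efficiency factor is my own hedge for the non-linear scaling the text mentions; the 0.9 value below is purely illustrative:

```python
# Normalized-KPI arithmetic from the commodity-hardware example.
# The scaling_efficiency factor is an illustrative assumption to
# account for throughput not scaling perfectly linearly with compute.

def normalized_kpi(production_kpi: float,
                   compute_ratio: float,
                   scaling_efficiency: float = 1.0) -> float:
    """KPI to hit on commodity hardware that has `compute_ratio`
    times the production compute (e.g. 0.25 for one-fourth)."""
    return production_kpi * compute_ratio * scaling_efficiency

print(normalized_kpi(40, 0.25))        # → 10.0 (linear case from the text)
print(normalized_kpi(40, 0.25, 0.9))   # → 9.0 (assumed 10% scaling loss)
```

Whatever correction factor is used, the key is agreeing on it up front so a passing commodity-hardware run is accepted as evidence for the production target.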

4. Choose a Consistent Testing Strategy

For integration scenarios, a bottom-up testing strategy may be worth considering, i.e. optimize a single use case fully (to reach the desired KPIs) before introducing additional artifacts or flows.

Plan the sequencing of the use cases appropriately; this can save some cycles. For example, between a Query and an Insert use case, the Query may look simpler, but it needs data that the Insert use case can seed anyway, so it may make sense to proceed with Insert first. Also, identify the “data profiles” for the use cases and create representative sample data, e.g. B2B orders may have 50-100 lines per order whereas B2C orders may have only 4-5 lines per order.
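A data-profile-driven payload generator might look like the following sketch. The line-count ranges come from the B2B/B2C example above; everything else (field names, quantities) is made up for illustration:

```python
# Hypothetical generator for representative test payloads, driven by
# "data profiles". Line-count ranges follow the B2B/B2C example in
# the text; all other fields are illustrative.
import random

PROFILES = {
    "B2B": (50, 100),  # lines per order
    "B2C": (4, 5),
}

def sample_order(profile: str, rng: random.Random) -> dict:
    lo, hi = PROFILES[profile]
    n_lines = rng.randint(lo, hi)
    return {
        "profile": profile,
        "lines": [{"line_no": i + 1, "qty": rng.randint(1, 10)}
                  for i in range(n_lines)],
    }

rng = random.Random(42)  # seeded, so test data is reproducible
order = sample_order("B2B", rng)
print(len(order["lines"]))  # somewhere between 50 and 100
```

Seeding the generator matters: reproducible payloads make runs comparable, which supports the consistency-between-tests point below.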

For each use case, once KPIs are met for a particular number of users, payload size, etc., run longevity tests for at least 24 hours to ensure that the flow has no memory leaks or other issues. Check the desired metrics (e.g. JVM garbage collection, database AWR reports) and purge data after each run to ensure consistency between tests.

When the above passes, gradually increase the number of users and the payload size on the same use case to identify system limitations under load. Once the specific use case is optimized to its KPIs for concurrent users and payload, add new flows to the mix and tune.
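The gradual ramp-up can be sketched as a stepped search for the last load level that still meets the KPI. Here `run_load_test` is a placeholder for whatever harness actually drives the flow; the lambda below is a toy stand-in, not a real measurement:

```python
# Sketch of a stepped ramp: increase concurrent users until the
# measured KPI falls below target, recording the last passing level.
# `run_load_test` is a placeholder for a real load-test harness.

def find_capacity(run_load_test, user_steps, kpi_target):
    """Return (users, measured_kpi) for the highest load level that
    still met the KPI target, or None if even the first step failed."""
    last_passing = None
    for users in user_steps:
        measured = run_load_test(users)  # e.g. orders/core/minute
        if measured < kpi_target:
            break  # system limitation found; stop ramping
        last_passing = (users, measured)
    return last_passing

# Toy stand-in: throughput degrades linearly as users grow.
result = find_capacity(lambda u: 60 - 0.5 * u, [20, 40, 60, 80, 100], 25)
print(result)  # → (60, 30.0)
```

The same loop applies to payload size: hold users fixed and step the payload instead.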

While the above may again seem obvious, the temptation to “switch gears” when one use case is not fully working causes significant overhead: project teams must switch context, set up data for the new use case, and so on. It is better to complete one use case fully and successfully before targeting others.

5. What about Standalone Testing for Integrations?

Standalone testing – stubbing out the enterprise applications – is a useful strategy for identifying integration hotspots and removing the unknowns of enterprise application performance from the integration scenario. However, be aware that it will not identify all performance issues. Developing stubs requires substantial investment to emulate the edge applications, and may be non-trivial for enterprise applications that typically have complex setups. Furthermore, some integration settings on the SOA server will typically change when the real applications are introduced, so avoid over-tuning the solution during standalone integration testing.

Performance testing and tuning is still somewhat of an art: it requires a good understanding of the technologies, their limitations, and the available tuning “knobs” in each technology to achieve the KPI requirements of the integration flow. At the same time, the non-technical, project-related aspects of the testing exercise are also essential to the success of the initiative as a whole.

AgileScout invited me to write a guest post on the use of Agile in the enterprise. Having worked on and been involved in multiple projects of varying complexity, I found adopting Agile (specifically Scrum) challenging in many ways for all but the simplest projects. Most challenges could be overcome by modifying the methodology or adopting alternatives such as Kanban or “Scrum-ban”, but this is a practice that usually raises eyebrows in the Scrum community.

There are three areas that are challenging for Agile in the enterprise:

1. Complex inter-dependencies between projects – a reality in any enterprise

2. Handling of Specialized and Global Project Resources, such as expert architects on geographically distributed teams

3. Sprint Overhead caused by complex project tasks, such as initial architecture design, that would typically not fit in any reasonable sprint duration.