Portal Performance Planning

The performance of your portal has a direct impact on usability, and usability can make or break your portal ROI. Learn design and development approaches that will help prevent and pinpoint pesky portal performance problems.

Almost every time I think "This doesn't need to be brought up; it is obvious to everyone" during the planning stages of a portal project, I end up bringing it up during the QA or production stage, shortly after mentally kicking myself for doing it again. I bring this up now because there are (hopefully) going to be many points in this article where you will be saying to yourself (or out loud, if you have my silly habit), "I already knew that". When this happens, you may shake your head in amazement at the obvious, but please read on. If there is only one new performance tip you learn from this article, it will be worth it to not have to work some weekend trying to figure out what you missed. This is no guarantee that you won't be fixing some pitfall not covered here, but at least then you can be mad at me instead of yourself. What else are consultants for?

The ideas here are geared mainly to J2EE, though many cross over well to C# and PHP. And, while the majority of these approaches were developed while working with WebLogic Portal, almost all are applicable to any web application. So, if you are using (or evaluating) something other than WLP for your project, the WLP-specific tips have been grouped into a single section for you to blatantly skip without hurting my feelings or your application's performance.

Three Common Performance Mistakes

Performance issues are almost inevitable in new portal applications. This is because portals are usually developed to aggregate access to systems that don't already co-exist, which means those systems probably weren't built with the intention of working with a portal. The difference between a well-planned portal project and one that people try to forget as soon as it is over isn't determined by how many performance issues they have. Portal performance planning is rated on how quickly problems are solved when they present themselves. Every portal project where the mere mention of the application elicits a groan and a shudder from those involved suffered from at least one of the following traits.

Over-Planning for Performance

There are multiple reasons that over-planning for performance is the first mistake brought up here. One is that if it were last, you might never get to it (something that happens on projects, too). Another is that it is totally counter-intuitive, so it is often missed as a mistake. Further (and there are others that I won't bore you with), it is one of those mistakes I have lived through because I didn't think it was worth mentioning.

Over-planning has three serious drawbacks to it. The first is that, when too much focus is put into performance in the planning and design phases, a great deal of effort will be spent on tasks that will have an infinitesimal impact on performance. Although the idea that every millisecond saved is an improvement is valid, it should also be prioritized accordingly. That is, if you have some time after all the development is complete, go back and do those little tweaks.

The second drawback is that over-planning frequently leads to over-confidence. If you are positive you have killed every performance hole in your design phase, you are going to have a hard time figuring out where to start when a performance issue occurs.

The last drawback that you will examine here (there are others, but they are not as common and the parameters to specify when they apply could take more exposition than most are willing to read) is a direct result of the second, which is that no matter how well you plan, something will be missed. It is often a case of missing the forest while focusing on the trees.

A perfect example of over-planning and over-confidence working together to undermine a project was an extremely large and complicated portal effort managed with a big-bang waterfall approach. Detailed designs were integrated with an excellent model-driven-architecture tool and were considered bullet-proof by everyone from the architects and project managers to the developers and QA teams, all of whom reviewed them before development began. The first integration release, with minimal functionality deployed, ran at a crawl. Debuggers didn't pinpoint anything obvious, and the logs simply showed a slowdown on every call to the back end. Days were spent looking for a network issue because the slowdown always occurred during a very simple call for user information that couldn't possibly be (in the opinion of the designers and developers) the problem.

To make a long story short (something usually said just a little too late), the problem was that the more efficient StringBuffer had been used rather than String to concatenate the necessary parameters for the request. Because the design was meticulously detailed prior to development, and development began with the fully documented stubs generated by the MDA tool, the incredibly inefficient process of reallocating the StringBuffer 20 times for each call (because it had been created with only the default constructor) was the bottleneck that no one found, until a junior developer who had just read about how StringBuffer behaves in such situations pointed it out to the much more senior team.
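The pitfall in that story is easy to reproduce. Here is a minimal sketch (the parameter string and buffer sizes are invented for illustration): a StringBuffer created with the default constructor starts with a 16-character buffer and must reallocate and copy every time an append overflows it, while one sized up front does it all in a single allocation.

```java
public class BufferSizing {
    public static void main(String[] args) {
        StringBuffer slow = new StringBuffer();      // default capacity: 16 chars
        StringBuffer fast = new StringBuffer(1024);  // sized once for the payload

        String param = "name=value&";                // hypothetical request parameter
        for (int i = 0; i < 50; i++) {
            slow.append(param);                      // overflows and reallocates repeatedly
            fast.append(param);                      // never outgrows its buffer
        }

        // slow has been reallocated several times to reach its final capacity;
        // fast still holds its original allocation.
        System.out.println(slow.capacity() > 16);    // true
        System.out.println(fast.capacity() == 1024); // true
    }
}
```

Fifty appends of an 11-character parameter is 550 characters, so the default buffer is grown (and its contents copied) five times before the call completes; done on every request, that churn adds up.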

Under-Planning for Performance

Okay, so now that you all know not to put too much into performance planning up front, it is time for the first round of "well, everyone knows that" as you look at under-planning. I will hopefully make this worthwhile for those who already know that under-planning is a bad idea by throwing in a few specific approaches that are often not included in performance planning.

The first piece (and probably the most obvious) is to use a logging API to help pinpoint performance issues while making sure that it doesn't become one. Most logging APIs include a check for logging levels. This allows the developer to include logging statements generously during development, and preface those that will not often be needed with a check for the debug level. This leads to a so-obvious-it-doesn't-need-to-be-mentioned (a term we will use often, abbreviated as SOIDNTBM) mention that logging should be set at error level in production.

If your infrastructure doesn't include a pre-production environment identical to production where debugging can be turned on for troubleshooting, you can still take the short-term performance hit of putting production into debug mode while you track down performance issues. This also hints at when to log an event: any call to an external system should have a debug log statement, and any internal algorithm that can take longer than a few milliseconds should log at the beginning, at the end, and around any heavy calls (all guarded by the debug check).

Most logging APIs include a configuration where the log entries include the class and a timestamp. This means that writing your own timers to measure the length of a call adds nothing but overhead: the timestamps can be compared with simple math to determine how long the call took.
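As a sketch of the guarded-logging idea using java.util.logging (any logging API with a level check works the same way; the class and method names here are invented for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ServiceClient {
    private static final Logger LOG = Logger.getLogger(ServiceClient.class.getName());

    public String fetchUserInfo(String userId) {
        // The guard ensures the log message string is never even built
        // unless debug-level (FINE) logging is actually enabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("fetchUserInfo start, userId=" + userId);
        }
        String result = callBackEnd(userId);
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("fetchUserInfo end, userId=" + userId);
        }
        return result;
    }

    // Stand-in for the real external-system call.
    private String callBackEnd(String userId) {
        return "user:" + userId;
    }

    public static void main(String[] args) {
        System.out.println(new ServiceClient().fetchUserInfo("jdoe"));
    }
}
```

With the log handler configured to prepend timestamps, the elapsed time of the external call is just the difference between the "start" and "end" entries, with no timer code in the application.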

All exceptions should be logged, and they should be logged all the time, without the debug flag check. This leads to another SOIDNTBM: Only use exceptions for exceptions. Every time I see a catch block whose logic makes it obvious that the developer expects the exception to be thrown by their own code and is using it as a return value for a condition they could anticipate, I know I will be spending a lot of time tracking down performance issues.
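To make the anti-pattern concrete, here is a small (invented) example of parsing user input both ways; filling in a stack trace and unwinding it is far more expensive than a simple check, and it buries the expected path in a catch block:

```java
public class QuantityParser {
    // Anti-pattern: the exception is the anticipated "not a number" result,
    // so the normal control flow runs through the exception machinery.
    static int parseWithException(String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            return 0;
        }
    }

    // Preferred: anticipate the bad input up front and keep exceptions
    // reserved for genuinely exceptional conditions.
    static int parseWithCheck(String raw) {
        if (raw == null || !raw.matches("-?\\d+")) {
            return 0;
        }
        return Integer.parseInt(raw);
    }

    public static void main(String[] args) {
        System.out.println(parseWithCheck("42"));   // 42
        System.out.println(parseWithCheck("oops")); // 0
    }
}
```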

The fact that calling new on an object is a performance hit is definitely SOIDNTBM. Yet, time and time again, I see the same string literals repeated throughout applications. Even more dismaying is when the first attempt at reducing this overhead is to declare them as static finals at the class level while the same string is used by multiple classes. I once audited an application where the first JSP I picked at random contained several static final String declarations. It occurred to me that these strings were probably used elsewhere; I picked one (again at random) and found the same static final declaration in 78 objects. I realize that many virtual machines are optimized to catch this sort of thing, but that optimization generally occurs only between objects in the same process. A trick I was taught on my very first Java project was to have a singleton (or interface) that contains all such strings. Another performance benefit of this approach is that in most code blocks you can use the faster "==" identity comparison rather than the higher-overhead .equals(). Granted, this may be hard to maintain for very large applications, at which point the constants can be split across applications or packages, though as much care as practical should be taken to avoid repeating the same declarations.
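A minimal sketch of the shared-constants approach (the class and constant names are invented): one final holder class owns the strings, and every caller references the same object, which is what makes the identity comparison safe.

```java
// Single home for shared string constants, so the same literal is not
// redeclared in dozens of classes.
public final class PortalConstants {
    public static final String STATUS_ACTIVE = "ACTIVE";
    public static final String STATUS_LOCKED = "LOCKED";

    private PortalConstants() { }  // no instances; constants only
}

class AccountPortlet {
    static String describe(String status) {
        // Identity comparison is safe here because every caller passes
        // the same constant object from PortalConstants.
        if (status == PortalConstants.STATUS_ACTIVE) {
            return "Account is active";
        }
        return "Account is locked";
    }
}
```

An interface full of constants works the same way; the final class with a private constructor is simply the variant that can't be accidentally implemented.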

Pointing Fingers Before Pinpointing Problems

This is an entire section that I consider SOIDNTBM, yet the only projects where I have not seen this occur were projects where the entire team had done at least three previous projects together. A development manager I once worked with described the phenomenon as "developer's ego." Many developers believe that everything they write is so well done that the problem must be somewhere else. Anyone with a great deal of experience in debugging other people's code knows that it is the minority of developers who don't subscribe to this belief who generally produce the fewest bugs.

The finger-pointing problem is even worse between project sub-teams. Large projects generally have one team working on the web application and other teams developing service-layer APIs. All too often, whichever team discovers the performance issue will immediately contact the other team and demand they fix the problem without specifying even an approximation of the cause because they didn't even try to find out. Nothing could be more counter-productive. If a developer finds a bug (I will save the importance and general lack of thorough unit testing for another article), they should trace it down as far as they can. If it is a simple fix, it takes far less time to fix it than to pass it off to someone else to fix. It is productive, however, to gently let the creator of the bug know that it was found and how it was fixed as a learning experience.

Plan a Proper Portal Performance Platform

One technical team lead created a list of the Ten Most Common Developer Mistakes. Even though I don't recall them all, one that has always stuck with me went something like "The environment you deploy to will never be the same as the one you develop on." In the context of performance, this means that production is going to contain much larger data sets and a great many more concurrent sessions than you will have when building the code to support production. A good rule of thumb is to figure out what the edge performance parameters will be in production and double them for your performance testing. If you don't have a performance testing environment (or have one that is smaller than production), it is very important to make it clear to all stakeholders that you do not know how the application will perform in production. Yes, this is definitely SOIDNTBM, and yet I guarantee that there will be at least four major releases somewhere this year that fail in production due to performance issues that weren't tested for.

Avoid Aggregation Aggravation

Portals often bring disparate applications together to save time, money, or both. If these applications have never been used together before, it shouldn't be a huge surprise if they don't perform well together. Portlets that combine calls to multiple source systems that run beautifully individually can become a huge performance hit when called sequentially and then the results are evaluated for presentation. These portlets should be built early and tested under load to determine whether alternative designs may be necessary.

One alternative is to store data that doesn't change often in a local database. You may need to change synchronous calls to asynchronous calls running in individual threads. In some cases, asynchronous rendering will be the best you can do, especially when you have multiple portlets making individual calls that add up to a long page load time. When possible, work with the UI designers and Information Architects to spread these portlets across separate pages.
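The threaded-calls idea can be sketched with java.util.concurrent (the service names and timings are invented; a real portlet would also need timeouts and error handling):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PortletAggregator {
    // Fires both (hypothetical) back-end calls concurrently, so the total
    // wait is roughly the slowest single call rather than the sum of both.
    static String renderPage() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<String> news = pool.submit(() -> slowCall("news"));
            Future<String> account = pool.submit(() -> slowCall("account"));
            return news.get() + "|" + account.get();
        } finally {
            pool.shutdown();
        }
    }

    // Stand-in for a remote service call that takes a while.
    private static String slowCall(String system) throws InterruptedException {
        Thread.sleep(100);
        return system + ":ok";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(renderPage());
    }
}
```

Run sequentially, those two calls would cost about 200ms; submitted together, the page waits roughly 100ms, and the gap only widens as more portlet calls are added.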

WebLogic Portal-Specific Approaches

As promised (or warned about), here are some approaches specific to WLP, though the first one has a corollary in any J2EE web application.

Pre-Compile JSPs

JSP compilation is an overhead. Sure, it only occurs once, but it is once per page, and if there are half a dozen portlets on a page, the first user there is going to be very disappointed.

I've seen some interesting solutions to the JSP compilation issue, including setting the ANT task to pre-compile (which the WebLogic Server sometimes ignores, rebuilding the pages anyway) and a servlet that runs once after deployment to load every JSP (it was named touchUrl).

The most effective solution is to set pre-compilation in weblogic.xml as follows:
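A minimal jsp-descriptor stanza for this looks roughly like the following (element names follow the WebLogic Server 9.x/10.x weblogic.xml schema; check the deployment descriptor reference for your version, as older releases used jsp-param name/value pairs instead):

```xml
<jsp-descriptor>
    <precompile>true</precompile>
    <working-dir>/tmp</working-dir>
</jsp-descriptor>
```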

The working-dir node is optional and useful for debugging as the line numbers reported in the logs will refer to the compiled class rather than the raw JSP (note that /tmp is an arbitrary path and you can pick your own).

Design Re-Usable Controls

One little-known fact about recent versions of WLP is that controls go in session. This was one of the driving factors behind the solution that led to the Developer.com article "Reusable Syndicated Media Portlets: An Example of Simplified Content Presentation." Large user sessions lead to poor performance (or increased profits for hardware vendors). The solution is to make your controls more re-usable. This doesn't mean to build your portal so that a single control runs everything (well, it might not be a bad idea for a really small portal used by lots of people, but I've never seen one in the wild). However, it is a good idea to write abstract controls and to combine obviously related forward methods into a single control. Remember, it is the portlet configuration that determines the first action called.

Another thing to remember about the control lifecycle living in the user session is that any objects you declare at the class level in the controller will live for the entire session. While this may occasionally be desirable, it is more efficient and manageable to create a session object with a POJO to manage it, and then reference the object through the POJO at the method level. The reasoning is that if you really need something in the session, you should need it in more than one control.
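The session-POJO pattern can be sketched as follows (the class name and attribute are invented, and a plain Map stands in for HttpSession so the sketch is self-contained; in a portlet you would call getAttribute/setAttribute on the real session the same way):

```java
import java.util.HashMap;
import java.util.Map;

// One typed holder object placed in the session on first use and retrieved
// through an accessor at the method level, instead of class-level fields on
// a long-lived controller.
public class UserPrefs {
    private String theme = "default";

    public String getTheme() { return theme; }
    public void setTheme(String theme) { this.theme = theme; }

    // Single well-known session key, so every control finds the same object.
    private static final String KEY = UserPrefs.class.getName();

    public static UserPrefs from(Map<String, Object> session) {
        UserPrefs prefs = (UserPrefs) session.get(KEY);
        if (prefs == null) {             // lazily created on first access
            prefs = new UserPrefs();
            session.put(KEY, prefs);
        }
        return prefs;
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>();
        UserPrefs.from(session).setTheme("dark");
        System.out.println(UserPrefs.from(session).getTheme()); // dark
    }
}
```

Because every control reaches the object through the same accessor, the session holds exactly one copy, and nothing lingers there that only a single control ever needed.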

Use a Single Level Menu Where Possible

Another unexpected side effect in WLP is that a multi-level menu causes all of the controls referenced by the pages it contains to be loaded into the session the first time the menu is called. Obviously, there are many situations where a multi-level menu is the best choice for navigation. The caveat here is to make sure it really is the best choice, to avoid the overhead involved in loading that first page.

RTFM

SOIDNTBM. Read The [Fancy] Manuals. Capacity planning and performance tuning guides are available with each installation, and can (currently) be found at http://edocs.bea.com/.

Conclusion

As always, this is not an exhaustive discussion of all of the performance issues and preventive measures that can be taken for portal applications. One theme that ran through this article is that no matter how hard you try, you won't be able to cover every possible problem before QA, and sometimes not even before production. The best you can do is to plan for the ability to deal with performance issues and to avoid the approaches that you know, from either experience or this article, will most likely result in poor performance. Although much of this article is SOIDNTBM, I have spent an average of 10% of the debugging stage dealing with performance issues, and the idea for this article came from my editor because, well, it was SOIDNTBM.

About the Author

Scott Nelson is a Senior Principal Consultant with well over 10 years of experience designing, developing, and maintaining web-based applications for manufacturing, pharmaceutical, financial services, non-profit organizations, and real estate agencies for use by employees, customers, vendors, franchisees, executive management, and others who use a browser. He also blogs all of the funny emails forwarded to him at Frequently Unasked Questions.
