(1) JVM memory settings: Whenever the task of performance tuning comes up, the first idea that comes to mind is JVM memory settings. These settings are widely covered in Java and general WAS tips (the developerWorks article listed in the Resources section explains WAS performance very well; however, it approaches JVM memory settings mainly from the WAS point of view, not messaging specifically).

Before deciding on JVM memory settings, it is handy to know the memory requirements of the service integration bus. Each message residing inside a messaging engine (ME) needs around 400 bytes of metadata (header) in heap memory. In addition, in a pub-sub scenario, if a durable subscriber is not active, the messaging engine has to hold a message reference apart from the actual message, and each message reference consumes around 100 bytes of metadata (header) in heap memory.

So when sizing the JVM heap, the message load (that is, the number of messages present in the messaging engine) has to be considered, including failure cases.
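The back-of-the-envelope sizing above can be sketched in a few lines of Java. The class and method names here are hypothetical; only the per-message (~400 bytes) and per-reference (~100 bytes) figures come from the text.

```java
// Rough heap-sizing sketch for SIBus message metadata, based on the
// figures above: ~400 bytes of header metadata per message held in the
// ME, plus ~100 bytes per message reference held for inactive durable
// subscribers. Hypothetical helper, not a WAS API.
public class SibHeapEstimator {

    static final long BYTES_PER_MESSAGE_HEADER = 400;
    static final long BYTES_PER_MESSAGE_REFERENCE = 100;

    // Heap needed just for message metadata, given an expected backlog.
    public static long estimateHeaderBytes(long messages, long references) {
        return messages * BYTES_PER_MESSAGE_HEADER
             + references * BYTES_PER_MESSAGE_REFERENCE;
    }

    public static void main(String[] args) {
        // Example failure case: 1,000,000 messages backed up, plus
        // 200,000 references for inactive durable subscribers.
        long bytes = estimateHeaderBytes(1_000_000, 200_000);
        System.out.printf("Metadata alone needs ~%d MB of heap%n",
                          bytes / (1024 * 1024));
    }
}
```

Note that this covers metadata only; message bodies cached in memory come on top of it, which is why the failure-case backlog dominates heap planning.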

In general it is better to set the minimum and maximum heap sizes to the same value for maximum performance. This setting avoids costly JVM Garbage collector (GC) compaction operations.

A messaging engine persists messages (depending on the quality of service) to either the file system or a database, depending on the message store settings. When a large number of messages are present in the messaging engine, even though everything looks fine and there is no 'Out of Memory' (OOM) error condition, message retrieval can sometimes take longer than expected. This is because messages have to be read back from the message store to run search/selector algorithms on them. The best way to reduce message retrieval time is to increase the JVM heap size so that more messages can be cached in memory.

(2) Avoiding remote GET and PUT: A remote GET means consuming a message that is not resident in the messaging engine the application is connected to. SIBus uses a proprietary protocol to get messages from neighboring MEs. When applications connect to one ME and get messages from other MEs, performance is likely to degrade because messages have to be routed over the network, so this point has to be considered while designing the application. In the case of message-driven beans (MDBs), the best approach, if the design permits it, is to run the same consuming application in all the MEs. This suits scalable environments in which a queue is partitioned across MEs; in that scenario, having instances of the same consuming MDB run against every ME is usually practical.

A remote PUT means a message is sent to an ME other than the one the application is connected to. Depending on the quality of service, a remote PUT involves store-and-forward: the message has to be persisted in the local ME and then forwarded towards the target ME, and the local copy is deleted only after confirmation is received from the other ME. If a messaging system involves a large number of remote PUTs, there can be a performance impact; this has to be considered when designing the application.
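The extra cost of a remote PUT is easiest to see as a sequence of steps. The sketch below is a toy simulation with hypothetical names (a real ME does all of this internally); the point is only the persist/forward/ack/delete steps that a remote PUT adds compared with a local PUT.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal simulation of the store-and-forward sequence described above
// for an assured-delivery remote PUT. All names are hypothetical.
public class RemotePutSketch {

    final Queue<String> localStore = new ArrayDeque<>(); // local ME's persistent store

    // Step 1: persist locally before anything crosses the network.
    void put(String message) {
        localStore.add(message);
    }

    // Step 2: forward to the target ME. Step 3: delete the local copy
    // only once the target ME confirms it has the message.
    boolean forward(String message, TargetMe target) {
        boolean acked = target.deliver(message);
        if (acked) {
            localStore.remove(message); // safe to discard our copy now
        }
        return acked;                   // un-acked messages stay stored
    }

    interface TargetMe {
        boolean deliver(String message); // returns true when confirmed
    }
}
```

Each remote PUT therefore pays for a local persist, a network hop, and a confirmed delete, which is why a design dominated by remote PUTs is slower than one where producers connect to the ME that owns the destination.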

(3) Data store performance settings: The workload that the messaging engine imposes on the relational database management system (RDBMS) is slightly different from usual database workloads, because the messaging engine performs mainly SQL INSERT and DELETE operations. This has to be considered while tuning the database.

Each messaging engine can request a large number of concurrent connections to the database. By design, a messaging engine uses many threads to perform database updates concurrently. Hence, making a sufficient number of database connections available to the messaging engine is key to better performance under peak loads.
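Why the connection count matters can be illustrated with plain java.util.concurrent primitives. This is a toy model, not WAS code: N updater threads sharing a pool of size P can have at most P database updates in flight at once; the rest block waiting for a connection.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Toy illustration of DB connection contention: the Semaphore stands in
// for the JDBC connection pool, the sleep stands in for an INSERT/DELETE.
public class PoolContentionSketch {

    public static int maxInFlight(int threads, int poolSize) throws InterruptedException {
        Semaphore pool = new Semaphore(poolSize);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                try {
                    pool.acquire();                       // "get a connection"
                    int now = inFlight.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max);
                    Thread.sleep(20);                     // "run INSERT/DELETE"
                    inFlight.decrementAndGet();
                    pool.release();                       // "return the connection"
                } catch (InterruptedException ignored) { }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        return peak.get(); // never exceeds poolSize
    }
}
```

With 16 updater threads but only 4 connections, 12 threads are idle at any instant; sizing the pool to match the ME's concurrency removes that wait.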

(4) Thread pool settings: WebSphere Application Server uses different thread pools for different sets of tasks. To balance your system for the type of workload you require, you should vary the settings of the different thread pools. The following thread pools are generally important for SIBus:

- Default is generally shared by all container applications. If, for example, you are running multiple MDBs, make sure you scale up this pool.
- SIBFapThreadPool is the service integration bus FAP outbound channel thread pool, and it is used by all JMS applications sending or consuming messages to/from the server. The optimum value for its maximum size is around 50.
- WebContainer has no consequences for usual JMS applications, unless of course you have, for example, servlets using JMS or driving your EJB JMS producers.
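The effect of a pool's maximum size can be seen with a plain java.util.concurrent analogy (this is not WAS admin code): with a maximum of 50 threads, at most 50 requests run at once and the rest queue, just as work queues when a WAS thread pool reaches its maximum.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Analogy for WAS thread pool sizing using a fixed-size executor:
// maxThreads bounds concurrency, excess tasks wait in the queue.
public class ThreadPoolSketch {

    public static int run(int maxThreads, int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads,       // fixed size, core == max
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>()); // excess work waits here
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> { /* handle one JMS request */ });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // The pool never grows past maxThreads, however many tasks arrive.
        return pool.getLargestPoolSize();
    }
}
```

If the Default pool is left small while many MDBs compete for it, requests queue exactly like the excess tasks above, which shows up as latency rather than errors.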

(5) Exception destination and message order: SIBus cannot guarantee the order of messages sent to an exception destination. Because of this, if message order is important, you can configure a bus destination so that it does not use an exception destination. In that situation, the maximum failed deliveries per message limit specified for the destination is ignored, and the message remains available to consumers. Synchronous consumers repeatedly attempt to get the message; message-driven beans and other asynchronous consumers repeatedly attempt to consume it.

This situation turns into a loop and can be a performance hit until either the message is removed from the destination (for example, by an administrator using the administrative console) or the consumer subsequently processes the message without rolling back.
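The redelivery loop can be modeled in a few lines. This is a toy model with hypothetical names, not SIBus code: with no exception destination, a rolled-back message simply stays on the destination and keeps being offered to consumers until one attempt commits.

```java
// Toy model of the redelivery behavior described above: the
// max-failed-deliveries limit is ignored, so the message is redelivered
// indefinitely until a consumer processes it without rolling back.
public class RedeliverySketch {

    // Returns the number of delivery attempts made before the consumer
    // finally processed the message without rolling back.
    public static int attemptsUntilSuccess(int failuresBeforeSuccess) {
        int attempts = 0;
        boolean consumed = false;
        while (!consumed) {                               // message stays available...
            attempts++;
            consumed = attempts > failuresBeforeSuccess;  // ...until one attempt commits
        }
        return attempts;
    }
}
```

If the failure is permanent (a "poison message"), failuresBeforeSuccess is effectively infinite and the loop burns CPU and delivery threads until an administrator intervenes, which is the performance hit the text warns about.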

(6) Message reliability levels: Message reliability is an important aspect of any messaging system, and SIBus provides five levels of reliability: best effort non-persistent, express non-persistent, reliable non-persistent, reliable persistent, and assured persistent. Persistent messages are always stored in some form of persistent data store, while non-persistent messages are generally stored in volatile memory. There is a trade-off here between reliability of message delivery and the speed with which messages are delivered: the lower the reliability level, the faster messages can be processed.
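The five levels from the text can be written down as a Java enum (the enum itself is illustrative, not a WAS API). Only the two persistent levels are written to the message store; the rest live in volatile memory and are correspondingly cheaper per message.

```java
// The five SIBus reliability levels described above, modeled as an enum.
// The persistent flag captures the store-vs-memory trade-off; this is an
// illustration, not a WebSphere class.
public enum ReliabilityLevel {
    BEST_EFFORT_NONPERSISTENT(false),
    EXPRESS_NONPERSISTENT(false),
    RELIABLE_NONPERSISTENT(false),
    RELIABLE_PERSISTENT(true),
    ASSURED_PERSISTENT(true);

    private final boolean persistent;

    ReliabilityLevel(boolean persistent) {
        this.persistent = persistent;
    }

    // Persistent levels are written to the message store; the others are
    // held in volatile memory and are lost if the ME fails.
    public boolean isPersistent() {
        return persistent;
    }
}
```

Picking the lowest level your business case can tolerate (for example, express non-persistent for fire-and-forget telemetry) is one of the cheapest tuning wins available.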

Resources
=========
developerWorks article on WAS performance: http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html

As a J2EE developer/administrator, you might have heard about global transactions or UserTransaction (UT), the mechanism J2EE prescribes for building transactional enterprise applications. In this blog, I will only discuss its relationship with J2C connections/database connections.

When a connection is requested under a global transaction, by default the connection has affinity towards that transaction. This means that a connection reserved under a certain transaction is visible only to that transaction, even if it is closed explicitly by the application. This state remains until the enclosing transaction ends (either commits or rolls back).

You might be wondering what benefit there is in keeping a connection reserved even though it has been explicitly closed. That is a valid question. However, there is another school of applications that access the database multiple times to complete their business logic within a single transaction. In those scenarios, this behavior acts as a connection cache keyed by transaction context, and it is relatively faster to fetch a connection from the transaction cache than to get one from the pool. It is exactly for this reason that WebSphere takes this approach to boost application performance.
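The "connection cache keyed by transaction context" can be sketched as a map from transaction to connection. This is a simplified model with hypothetical names and no real JDBC; it only illustrates why a second getConnection() inside the same transaction is cheap.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of shareable connection behavior: a "closed"
// connection stays parked in a cache keyed by the transaction, and a
// later getConnection() inside the same transaction gets the cached one
// back instead of a fresh one from the pool. Not WebSphere code.
public class TxConnectionCacheSketch {

    private final Map<String, Object> cacheByTx = new HashMap<>();
    private int poolCheckouts = 0;

    public Object getConnection(String txId) {
        // Reuse the connection already reserved for this transaction, if any.
        return cacheByTx.computeIfAbsent(txId, id -> {
            poolCheckouts++;      // only now do we hit the real pool
            return new Object();  // stand-in for a physical connection
        });
    }

    // App calls close(): the connection stays reserved for the transaction.
    public void close(String txId) { /* no-op until the transaction ends */ }

    // Commit/rollback: now the connection really goes back to the pool.
    public void transactionEnded(String txId) {
        cacheByTx.remove(txId);
    }

    public int poolCheckouts() {
        return poolCheckouts;
    }
}
```

The flip side of this caching is exactly the LTC leak described next: until transactionEnded() runs, the connection is unavailable to everyone else.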

Another interesting question you might ask is: "In my application I don't use UserTransaction; should I really care about this?". The answer is "YES". In the absence of a UT, WebSphere transparently provides what is known as "Local Transaction Containment" (LTC). This LTC is created by the container before executing application components such as the service method of a servlet or a business method of an EJB. So, if you acquire a connection in a servlet service method, the connection is by default associated with the LTC. Even if you close the connection explicitly using con.close(), the connection is not freed until the service method completes. Sometimes this can leak connections from the pool and end up making other requests wait for a connection.

How can you avoid such an anomaly?

It is simple: change the connection sharing mode to Unshareable (by default it is Shareable). You can find this setting under the <resource-ref> element of the module-level deployment descriptor in your application (web.xml, ejb-jar.xml), and you can configure <res-sharing-scope> to suit your needs:

<resource-ref>
    <description></description>
    <res-ref-name>jdbc/ERWWDataSourceV5</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope> <!-- your change must go here -->
</resource-ref>

How do you decide? First, analyze whether your application asks for a connection multiple times within a UT or LTC. If yes, go with the default behavior; that is probably the optimal setting for your application. Otherwise, switch to Unshareable as explained above.

The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.