I've noticed that dequeuing messages when connecting to a JBoss messaging server on localhost is heaps faster than when connecting to a remote server. Naturally I'd expect the local server to be faster, but against a remote server I'm only getting 2-3 messages per second using a MessageListener that does nothing. I've noticed the same behaviour when browsing queues with Hermes - locally you get hundreds of messages/second and remotely it's only 2-3 messages per second.

A rough test to see this is shown below. Am I doing anything wrong in the way I am using MessageListeners? I can get more throughput by increasing the number of sessions and message listeners I create, but I just wanted to know if there is anything else I can do to make it faster. The browsing in Hermes is a bit of a problem, as it's slow enough to be unusable with anything more than a few messages.
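The rough test is just a do-nothing listener with a counter. This is an illustrative sketch - the queue name, JNDI factory and provider URL are the ones from my setup, swap in your own:

```java
import java.util.Properties;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ThroughputTest {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                  "org.jnp.interfaces.NamingContextFactory");
        // jnp://localhost:1099 locally, or jnp://<remotehost>:1099 remotely
        props.put(Context.PROVIDER_URL, "jnp://localhost:1099");
        InitialContext ctx = new InitialContext(props);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/testQueue"); // illustrative queue name

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);

        // The listener does nothing with the message body, so any slowness
        // is in delivery, not in my processing.
        final long start = System.currentTimeMillis();
        consumer.setMessageListener(new MessageListener() {
            private int count;
            public void onMessage(Message m) {
                count++;
                if (count % 100 == 0) {
                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println(count + " messages in " + elapsed + " ms");
                }
            }
        });

        conn.start();
        Thread.sleep(60000); // consume for a minute, then report and exit
        conn.close();
    }
}
```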

The two machines are connected to each other through a 100 Mbit hub that's supposed to have intelligent switching. The hub is connected to our corporate network, which is also 100 Mbit.

The messages are text messages containing XML that is < 2 KB.

There is another application producing the messages onto the queue, but it doesn't seem to make much of a difference whether this is running or not. I can stop the consuming application and let the producing one run to back up some messages. If I then stop the producer and start the consumer things are still about the same.

The messages are persistent and we are using an Oracle 9i database. When running on the local server or a remote server, I am connecting to the same Oracle server and using a different schema for each.

We've observed the speed difference browsing the remote queues through Hermes in both Windows and Solaris environments. Is Hermes a reliable gauge?

By local I mean the client application is running in a different VM on the same box as the JBoss messaging server. I'm connecting with the URL jnp://localhost:1099. Can the persistence behave differently if the client is local compared to remote? JBoss is still using Oracle when I'm connecting locally.

I've noticed that when browsing with Hermes, you can only see a maximum of FullSize messages for the queue. I'm assuming this is because the QueueBrowser implementation only works off in-memory messages, to avoid the overhead of paging all the messages back into memory for browsing (fair enough). Following this assumption, I'm also assuming that browsing the messages wouldn't be going to Oracle at all, yet it's still slow.
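For what it's worth, nothing on the client side limits a browse, which supports the idea that both the FullSize cap and the slowness are server-side. A plain QueueBrowser loop is all a client does (method sketch, assuming an already-open QueueConnection; names are illustrative):

```java
import java.util.Enumeration;
import javax.jms.*;

public class BrowseCount {
    // Standard JMS browse loop - no client-side limit on the count,
    // so capping at FullSize must come from the server's in-memory page.
    static int countMessages(QueueConnection conn, Queue queue) throws JMSException {
        QueueSession qs = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueBrowser browser = qs.createBrowser(queue);
        int seen = 0;
        for (Enumeration e = browser.getEnumeration(); e.hasMoreElements(); ) {
            e.nextElement(); // not even reading the body
            seen++;
        }
        browser.close();
        qs.close();
        return seen;
    }
}
```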

Thanks Tim. Following your results, I have re-tested with a completely fresh 4.0.4.GA and messaging 1.0.1.CR2 installed in an out of the box configuration (i.e. no Oracle) using ant -f release-admin.xml. I'm still getting the same result - I'm only consuming 300 messages/minute. I've checked the message size and it's actually < 1 KB.

Interestingly, putting the 300 messages onto the queue is almost instantaneous - it's so quick I can't even time it - and I'm putting them onto the queue from the remote machine as well.

I've also tried disconnecting the hub from the rest of our network and connecting via IP address, and it's still the same.

Browsing the JMX console from one machine to the other is also extremely fast.

I think I'm onto it. If you place plain text messages into the queue (i.e. text messages that don't contain XML) then everything is quick. I'm placing the JBoss copyright.txt which is 6 KB into the queue and it browses and dequeues like lightning.

If the text message contains any sort of XML, then it bogs down. I've been placing the docs/dtd/jaws.dtd into the queue (as it's a similar size to my messages) and it also bogs down.

If you mix non XML text messages and XML text messages, then it consumes the non XML ones quickly and the XML ones slowly. I know that Hermes can display messages in formatted XML, so it is parsing it, but my test case code doesn't touch the message at all, so there can be no parsing involved there.
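The producer side of my repro is equally trivial - it just reads a file (e.g. copyright.txt or docs/dtd/jaws.dtd) into a TextMessage, so nothing client-side treats XML differently from plain text. A sketch of it, with the same illustrative JNDI names as before:

```java
import java.io.*;
import java.util.Properties;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class FilePublisher {
    public static void main(String[] args) throws Exception {
        // Read the whole file given on the command line as plain text.
        StringBuilder text = new StringBuilder();
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        for (String line; (line = in.readLine()) != null; ) {
            text.append(line).append('\n');
        }
        in.close();

        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                  "org.jnp.interfaces.NamingContextFactory");
        props.put(Context.PROVIDER_URL, "jnp://remotehost:1099"); // illustrative host
        InitialContext ctx = new InitialContext(props);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/testQueue");

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        // 300 identical TextMessages - the body is an opaque String to JMS,
        // whether it happens to contain XML or not.
        for (int i = 0; i < 300; i++) {
            producer.send(session.createTextMessage(text.toString()));
        }
        conn.close();
    }
}
```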

Tim, can you give this a try with your test rig and see if it's the same?

Thanks Tim. There must be something on our corporate install of Windows XP that is mucking this up - maybe virus scanner or local firewall. These are locked down, so I can't turn them off. We are deploying to Solaris on Monday, so I might just wait and see if the problem occurs in that environment.