

Concurrency in Spring DMLC apparently getting capped at around 340 (against the max value of 1000)

In our perf tests, we are hitting a situation where the Spring DMLC (DefaultMessageListenerContainer) thread pool size is not increasing beyond about 340, even though maxConcurrentConsumers is 1000.

The DMLC is initialized mostly with the default params, except maxConcurrentConsumers, which is 1000, and the ackMode, which is Session.AUTO_ACKNOWLEDGE.

The prefetch limit is set to 1 on the shared connection.
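For reference, a configuration along these lines might look like the following Spring XML sketch. The bean names, broker address, queue name, and listener bean are assumptions for illustration; only maxConcurrentConsumers=1000, the AUTO_ACKNOWLEDGE mode, and the prefetch of 1 come from the description above.

```xml
<!-- Prefetch of 1 set on the shared ActiveMQ connection via the broker URL
     (broker host and queue name are placeholders) -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL"
              value="tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=1"/>
</bean>

<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="perf.test.queue"/>
    <property name="messageListener" ref="myListener"/>
    <!-- concurrentConsumers left at its default of 1; the container
         scales up toward maxConcurrentConsumers under load -->
    <property name="maxConcurrentConsumers" value="1000"/>
    <!-- 1 = javax.jms.Session.AUTO_ACKNOWLEDGE -->
    <property name="sessionAcknowledgeMode" value="1"/>
</bean>
```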

The following is the scalability we are trying to achieve.
On receiving a message M from the broker, the corresponding consumer thread in the DMLC clones it into, say, 4000 messages (M_1, M_2, ..., M_4000) and posts them back to the broker. We have three consuming nodes (meaning three DMLCs, one in each box, configured with the above specs), and we want to get to a point where 1000 consumers in each of the three nodes are executing the messages concurrently. The maximum concurrency we have seen so far, by grepping the consumer box logs for the number of DMLC asyncInvoker threads, is around 340 per consumer node. (The initial concurrency level, i.e. concurrentConsumers, is just 1.)

Can you suggest some pointers on how to achieve this by tweaking the appropriate params?

The most relevant ones I see are increasing idleTaskExecutionLimit, idleConsumerLimit, etc.
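As a sketch, those knobs would be set on the container like so. The values here are illustrative guesses, not recommendations: idleTaskExecutionLimit lets an idle invoker task survive more receive iterations before being scaled down, and idleConsumerLimit raises the number of consumers allowed to sit idle.

```xml
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <!-- connectionFactory, destinationName, messageListener as before -->
    <property name="maxConcurrentConsumers" value="1000"/>
    <!-- let idle invoker tasks linger longer before being torn down -->
    <property name="idleTaskExecutionLimit" value="10"/>
    <!-- allow more consumers to remain idle instead of being scaled down -->
    <property name="idleConsumerLimit" value="10"/>
</bean>
```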

Additional details :
AMQ broker is configured in a shared FS (Filer) based master-slave mode, and the messages are being produced in asyncSend mode.

I am not aware of any cap on the number of consumers, but is it possible that the reason you don't see any increase in consumers is that you have consumers available in the cache? (The ones that are done processing a previous request are returned to the cache.)


What you say is possible, but that would mean that even before a new consumer is created, an existing one has already finished processing one of these (second-level) 4000 messages. Though I haven't tried to determine whether that's the case through debugging, I thought it unlikely, since processing a single message is a high-latency operation. So by the time any of the consumers is done processing its message, all 4000 messages should have been allocated to 4000 different consumers (assuming we have 4 boxes with maxConcurrentConsumers set to 1000).


Will have to check this part out. (What would be the easiest way to verify that?)
On the other hand, please note that the 4000 messages are not being delivered one by one, as we have asyncSend enabled on the producer path. So I'm assuming some batched delivery to the broker is happening when this federated set of messages is posted.

By the way, I recently kicked off our perf tests with the consumer-side prefetch set to 0 AND the default consumer pool size (i.e. concurrentConsumers) set to 500. Now I see all 500 consumers processing events. However, it remains to be seen whether the 500th consumer received an event while the 1st consumer was still busy. Only that would validate the latency logic I described in my last post.
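The setup just described would amount to something like this (again a hedged sketch; the broker address and queue name are placeholders):

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <!-- prefetch 0: the broker pushes nothing ahead of time,
         so consumers pull messages one at a time -->
    <property name="brokerURL"
              value="tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=0"/>
</bean>

<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="perf.test.queue"/>
    <property name="messageListener" ref="myListener"/>
    <!-- start with 500 consumers instead of scaling up from 1 -->
    <property name="concurrentConsumers" value="500"/>
    <property name="maxConcurrentConsumers" value="1000"/>
</bean>
```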

I would refer to the vendor documentation on how to check what's being queued up. For example, with ActiveMQ you can see it in JConsole.
The fact that the 4000 messages are not delivered all at once is not what I am concerned about. All I am trying to figure out is the limitation you are claiming. In other words, the only way to verify it properly is to queue up the number of messages you want to test, then start the DMLC with that many consumers, and make sure those consumers (listeners) are slow enough to give every consumer a chance to be created.