We noticed that the UIL2 architecture (SocketManager) uses many threads:
- 2 (read/write) per connection on the server side
- 2 (read/write) per connection on the client side

So if you have, say, 20 (EJB3) MDBs with a pool size of 10 each, UIL2 will create 20 * 10 * 4 = 800 threads that mostly sleep, performing only a ping/pong every minute or so.

Threads cost memory and burden the scheduler. The current design unnecessarily limits the number of MDBs in one JVM (or, in extreme cases, on one machine).

Has the JBoss team ever thought about reducing this massive waste of resources? With NIO selectors and NIO buffers, threads could be shared among connections. Likewise, the MQ destinations could provide selectors instead of blocking calls to pop a message off the stack. I think it makes sense to have one thread per MDB on each of the client and server sides, handling all of the connections. It should be possible to do this without giving up the non-blocking behaviour.
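To illustrate the idea: a minimal sketch of how a single thread could multiplex many connections with `java.nio`, instead of dedicating two blocking read/write threads per socket. This is plain JDK NIO, not the actual SocketManager code; the class name and the loopback handshake are purely illustrative.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioMultiplexSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));        // any free port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Connect one client so the selector has an event to report;
        // a real transport would register many such channels.
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", port));

        selector.select();                            // ONE thread blocks here
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isAcceptable()) {
                SocketChannel ch = server.accept();   // ready, won't block
                ch.configureBlocking(false);
                ch.register(selector, SelectionKey.OP_READ);
                System.out.println("accepted one connection");
            }
        }
        client.close();
        server.close();
        selector.close();
    }
}
```

A real read loop would keep calling `select()` and also handle `OP_READ`/`OP_WRITE` keys, but the point stands: the number of threads no longer grows with the number of connections.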

I was actually thinking of tweaking UIL2 to use NIO, but I thought I'd better ask here first whether any work has already been done along these lines. I can imagine why the adventure with OIL2 failed: it's not an easy task. If I dare make a guess: does it deadlock? Does a slow connection impact others?

Meanwhile we have been folding MDB code into fewer MDBs, as tweaking UIL2 is error-prone and difficult. We are stuck with Red Hat EL3 (kernel 2.4) at the moment, so many threads can really be a pain for the scheduler.

This causes a thread death every minute per connection and side. On our system we see around 5000 new threads every hour just because of that! And we cannot change it, because it's hardcoded. This pool configuration makes the use of a thread pool absurd in terms of resource consumption.
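The churn can be reproduced with the standard `java.util.concurrent` pool (a stand-in here, not the actual JBoss pool class): when the keep-alive is shorter than the gap between periodic tasks, every "ping" needs a freshly created thread, whereas a keep-alive longer than the idle gap lets one thread survive. All timings below are illustrative.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolChurnSketch {
    public static void main(String[] args) throws Exception {
        AtomicInteger created = new AtomicInteger();
        ThreadFactory counting = r -> {
            created.incrementAndGet();                // count every thread birth
            return new Thread(r);
        };

        // core=0, keep-alive 50 ms: the idle thread dies between "pings"
        ThreadPoolExecutor churny = new ThreadPoolExecutor(
                0, 1, 50, TimeUnit.MILLISECONDS, new SynchronousQueue<>(), counting);
        for (int i = 0; i < 3; i++) {
            churny.execute(() -> { });                // simulated ping/pong task
            Thread.sleep(300);                        // idle longer than keep-alive
        }
        churny.shutdown();
        churny.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("short keep-alive created " + created.get() + " threads");

        created.set(0);
        // keep-alive 10 s, longer than the idle gap: one thread serves all pings
        ThreadPoolExecutor steady = new ThreadPoolExecutor(
                0, 1, 10, TimeUnit.SECONDS, new SynchronousQueue<>(), counting);
        for (int i = 0; i < 3; i++) {
            steady.execute(() -> { });
            Thread.sleep(300);
        }
        steady.shutdown();
        steady.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("long keep-alive created " + created.get() + " thread(s)");
    }
}
```

Scaled up to a one-minute ping interval across dozens of connections, the first configuration is exactly the thousands-of-threads-per-hour pattern described above.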

Actually, having this extra pool is a bit of overkill in the first place, I think. It should be enough to have the Read task execute the message synchronously in general. On the server side this is an injection into a queue, I guess, which is fast. On the client side this is passing the message to an MDB thread (or is it just passed to an instance? I haven't checked), which is fast as well.

Ok, thanks. As the bug was fixed in 4.0.4RC1, we should be safe with 4.0.4GA. So the pool was introduced as a fix against deadlocks. Okay, but this way the delivery order of messages is no longer deterministic (it depends on the scheduler). Couldn't that cause more trouble during bursts?

"oglueck" wrote: Ok, thanks. As the bug was fixed in 4.0.4RC1, we should be safe with 4.0.4GA. So the pool was introduced as a fix against deadlocks. Okay, but this way the delivery order of messages is no longer deterministic (it depends on the scheduler). Couldn't that cause more trouble during bursts?

No, the thread pool doesn't affect the JMS semantics. It is simply there for when the protocol is doing more than one request/response; the protocol is bi-directional, with requests being issued from either side "at any time".

More importantly, the handling of a request by one side may lead to a further request to the other side, e.g.

"You just gave me a message for a receiver/session that I'm in the process of closing, please NACK that message"
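A sketch of why this bi-directionality forces the pool: if the read loop itself ran a handler that issues a blocking counter-request (like the NACK case above), the reply would arrive on the very connection the reader has stopped reading, and both sides would hang. Dispatching the handler to a worker thread keeps the reader free to deliver the reply. The queue-based "connection" and all class names below are illustrative, not the real JBoss classes.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BidirectionalSketch {
    // one simulated connection: frames arriving at "our" side
    static final BlockingQueue<String> inbound = new LinkedBlockingQueue<>();
    static final ExecutorService pool = Executors.newCachedThreadPool();
    static final CompletableFuture<String> replyFuture = new CompletableFuture<>();
    static final CountDownLatch done = new CountDownLatch(1);

    public static void main(String[] args) throws Exception {
        inbound.put("REQUEST");                        // peer asks us something
        Thread reader = new Thread(BidirectionalSketch::readLoop);
        reader.start();
        if (!done.await(2, TimeUnit.SECONDS))
            System.out.println("deadlock: reply never read");
        reader.interrupt();
        pool.shutdownNow();
    }

    // The read loop never blocks inside a handler; it only dispatches.
    static void readLoop() {
        try {
            while (true) {
                String frame = inbound.take();
                if (frame.equals("REQUEST"))
                    pool.execute(BidirectionalSketch::handleRequest);
                else if (frame.equals("REPLY"))
                    replyFuture.complete(frame);       // hand reply to the waiter
            }
        } catch (InterruptedException ignored) { }
    }

    // Handling the request issues a blocking counter-request
    // ("please NACK that message") and waits for its answer.
    static void handleRequest() {
        try {
            inbound.put("REPLY");                      // peer's answer arrives
            System.out.println("counter-request answered: " + replyFuture.get());
            done.countDown();
        } catch (Exception ignored) { }
    }
}
```

If `handleRequest()` were called inline from `readLoop()`, the loop would block in `replyFuture.get()` and never `take()` the REPLY frame: that is the deadlock the pool was introduced to prevent.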

When you say you removed the pool from the SocketManager, did you change the actual SocketManager class code?

I'm sure that this is a stupid question, and that the answer is yes, but I was hoping for a simple configuration answer. We're also seeing thousands of new threads being created, which can't be a good thing.