Edouard De Oliveira
added a comment - 17/Nov/08 23:38 I would bet that this issue is related to https://issues.apache.org/jira/browse/DIRMINA-628 (quote = "Windows Firewall dialog complaining about our software trying to perform an operation that needs to be blocked / unblocked before continuing").
I recall an old thread discussion saying that this was the reliable way to test whether some socket options were available, but is it really?
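For context, the probing technique being discussed amounts to opening a throwaway socket just to query the platform's default socket options. A minimal sketch (illustrative only, not MINA's exact code; the class name here is hypothetical) of why such probing can trigger the Windows Firewall prompt and fails inside an Applet:

```java
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch: probe default socket options by opening a throwaway
// connected socket pair on the loopback interface. Opening a listening
// socket like this is what can trigger the Windows Firewall dialog, and an
// unsigned Applet is not allowed to do it at all.
public class SocketOptionProbe {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0); // ephemeral port
             Socket probe = new Socket("localhost", server.getLocalPort())) {
            // Query the platform defaults from the freshly created socket.
            System.out.println("SO_RCVBUF default: " + probe.getReceiveBufferSize());
            System.out.println("SO_SNDBUF default: " + probe.getSendBufferSize());
            System.out.println("TCP_NODELAY default: " + probe.getTcpNoDelay());
        }
    }
}
```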

Emmanuel Lecharny
added a comment - 17/Nov/08 23:49 Yeah, I just created a link to it.
It may be reliable, but the problem is that when used in an Applet, it simply will not work. We are 'happy' that Alexander not only had the problem, but was clever enough to dig into it and find the cause! I was totally unaware that this class was making such a socket connection...

Emmanuel Lecharny
added a comment - 18/Nov/08 13:06 Funnily enough, we had the very same problem back in 1.1.2, and it was 'solved' by catching all the exceptions. Except that I don't think that solves anything...

Matthew McMahon
added a comment - 13/Jan/09 06:19 I am new to MINA and the whole environment. Anyway, I am working on a project that began with 2.0.0-M3 and is now using 2.0.0-M4. I am hitting this issue, where my server and client are creating a lot of loopback threads, which I believe must be related to this issue. Any word on whether it will be fixed?

John Costello
added a comment - 27/Jan/09 15:35 Removing the socket testing code from DefaultSocketSessionConfig has had an unintended side effect for stand-alone applications that use the NioSocketAcceptor. Before binding to the port, the receive buffer size on the ServerSocket is always set to the value from the DefaultSocketSessionConfig, which previously would have been the OS default but is now 1024 bytes. Upgrading from 2.0.0-M2 to 2.0.0-M4 caused a huge drop in read performance in several applications that previously had no trouble keeping up with the rate of incoming messages. I was able to observe the receive buffer size on the server app staying pegged at close to 1400 bytes, and confirmed with a debugger that the socket receive buffer size was being set to 1024 bytes, which is a rather small default.
As a fix, I propose changing NioSocketAcceptor.open to check DefaultSocketSessionConfig.isReceiveBufferChanged before setting the receive buffer size on the ServerSocket. This lets the OS default be used in most cases, which I think should be the expected behavior. I've attached a tar file containing patches with the proposed changes for NioSocketAcceptor, DefaultSocketSessionConfig, and AbstractSocketSessionConfig.
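The proposed guard can be sketched in a self-contained way as follows (a sketch only, not the attached patch; the `receiveBufferSizeChanged` flag stands in for `DefaultSocketSessionConfig.isReceiveBufferChanged`, and the names here are illustrative): only push a receive buffer size onto the ServerSocket when the application has explicitly changed it, otherwise keep the OS default.

```java
import java.net.ServerSocket;

// Sketch of the proposed fix: skip setReceiveBufferSize() unless the user
// explicitly configured a value, so the OS default SO_RCVBUF survives.
public class ReceiveBufferGuard {
    // Stands in for DefaultSocketSessionConfig.isReceiveBufferChanged():
    // false until the application explicitly sets a receive buffer size.
    private static boolean receiveBufferSizeChanged = false;
    private static int receiveBufferSize = 1024; // the small hard-coded default

    public static void main(String[] args) throws Exception {
        try (ServerSocket socket = new ServerSocket()) { // unbound, as in open()
            int osDefault = socket.getReceiveBufferSize();
            // Proposed behavior: only override when explicitly configured.
            if (receiveBufferSizeChanged) {
                socket.setReceiveBufferSize(receiveBufferSize);
            }
            System.out.println("OS default SO_RCVBUF: " + osDefault);
            System.out.println("Effective SO_RCVBUF: " + socket.getReceiveBufferSize());
        }
    }
}
```

With the flag left at false, the effective receive buffer is whatever the OS chose, rather than 1024 bytes.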