I have been benchmarking Java NIO with various JDKs on Linux. The server is running on a machine with 2 CPUs at 1.7 GHz, 1 GB RAM, and an Ultra160 SCSI 36 GB disk.

With Linux kernel 2.6.5 (Gentoo) I had NPTL turned on and support for epoll compiled in. The server application was designed to support multiple dispatch models:

1. Reactor with iterative dispatch and multiple selector threads. Essentially, the accepted connections were load-balanced between a varying number of selector threads. The benchmark then applied a step function to experimentally determine the optimal number of threads and the connections-per-selector ratio.

2. A simple concurrent blocking dispatch model was also supported. This is essentially a reader-thread-per-connection model.

To work around the not-so-performant/scalable poll() implementation on Linux, we tried using epoll with the Blackwidow JVM on a 2.6.5 kernel. While epoll improved the overall scalability, performance still remained 25% below the vanilla thread-per-connection model. With epoll we needed a lot fewer threads to reach the best performance mark that we could get out of NIO.
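As a rough sketch of dispatch model 1, here is one way the load-balancing across selector threads could look. This is not the benchmark's code; the class and method names are my own, and a real server would typically queue registrations and perform them on the selector thread itself rather than registering directly as shown here.

```java
import java.io.IOException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of model 1: accepted connections are
// round-robined across a fixed pool of selectors, each of which
// would be driven by its own selector thread.
public class SelectorPool {
    private final Selector[] selectors;
    private final AtomicInteger next = new AtomicInteger();

    public SelectorPool(int n) throws IOException {
        selectors = new Selector[n];
        for (int i = 0; i < n; i++) {
            selectors[i] = Selector.open();
        }
    }

    // Round-robin index: this is the knob the benchmark's step
    // function varied to find the best connections-per-selector ratio.
    public int pick() {
        return Math.floorMod(next.getAndIncrement(), selectors.length);
    }

    // The acceptor thread calls this for each accepted channel.
    public void register(SelectableChannel ch) throws IOException {
        ch.configureBlocking(false);
        Selector sel = selectors[pick()];
        sel.wakeup(); // nudge the selector out of a blocking select()
        ch.register(sel, SelectionKey.OP_READ);
    }
}
```

Varying `n` and the number of connections handed to each selector reproduces the step-function experiment described above.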

Without NPTL, of course, it's a different story. The blocking server just melts at 400 concurrent connections! We have run the test up to 10K connections, and the blocking server outperformed the NIO selector-based server by the same margin. Moral of the story: NIO arrives at the scene a little too late. With adequate RAM and better threading models (NPTL), the performance gains of NIO don't show up.
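For comparison, the thread-per-connection blocking model (model 2) can be sketched as follows. Again this is illustrative, not the benchmark's code; the per-connection work here is a simple echo loop.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the concurrent blocking dispatch model: the acceptor
// spawns one reader thread per accepted socket. With NPTL, thousands
// of these mostly-blocked threads are cheap for the kernel to schedule.
public class BlockingServer {

    // Per-connection work: copy bytes back to the client until EOF.
    public static long echo(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    public static void serve(int port) throws IOException {
        ServerSocket ss = new ServerSocket(port);
        while (true) {
            Socket s = ss.accept();   // blocks until a client connects
            new Thread(() -> {        // one reader thread per connection
                try (Socket c = s) {
                    echo(c.getInputStream(), c.getOutputStream());
                } catch (IOException ignored) {
                    // connection dropped; thread exits
                }
            }).start();
        }
    }
}
```

The entire dispatch logic is the `accept` loop; there is no selector, which is what makes this model collapse without NPTL once thread counts climb into the hundreds.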

Sun's JVM doesn't support epoll(), so we couldn't use epoll with it. The normal poll()-based selector from Sun didn't perform as well. We needed to reduce the number of connections per thread to a small number (~6-10) to get numbers comparable to the epoll-based selector. That meant running a lot more selector threads, which kind of defeats the purpose of multiplexed IO. The benchmarks also dispel the myth created by Matt Welsh et al. (SEDA) that a single-threaded reactor can keep up with the network. On 100 Mbps Ethernet that was true: the network got saturated before the server CPUs did. But with a > 1 Gbps network, we needed multiple selectors to saturate the network. A single selector's performance was abysmal (5-6x slower than concurrent blocking connections).

For applications that want a smaller number of threads for debuggability etc., NIO may be the way to go. The 25-35% performance hit may be acceptable to many apps. Fewer threads also means easier debugging; it's a pain to attach a profiler or a debugger to a server hosting 1000+ threads :-) . Bottom line: with better MT support in kernels (Linux already has it with NPTL), one needs to reconsider the thread-per-connection model.

Before the 2.6 version of the Linux kernel, processes were the schedulable entities, and there was no real support for threads. However, it did support a system call — clone — which creates a copy of the calling process where the copy shares the address space of the caller. The LinuxThreads project used this system call to provide kernel-level thread support (most of the previous pthread implementations in Linux worked entirely in userland). Unfortunately, it had a number of issues with true POSIX compliance, particularly in the areas of signal handling, scheduling, and inter-process synchronization primitives.

To improve upon LinuxThreads, it was clear that some kernel support and a re-written threads library would be required. Two competing projects were started to address the requirement: NGPT (Next Generation POSIX Threads) worked on by a team which included developers from IBM, and NPTL by developers at Red Hat. NGPT was abandoned in mid-2003, at about the same time when NPTL was released.

NPTL was first released in Red Hat Linux 9. Old-style Linux POSIX threading is known for having trouble with threads that refuse to yield to the system occasionally, because it does not take the opportunity to preempt them when it arises, something that Windows was known to do better at the time. Red Hat claimed that NPTL fixed this problem in an article on the Java website about Java on Red Hat Linux 9.[3]

EJB servers are required to support the UserTransaction interface for use by EJB beans with the BEAN value in the @TransactionManagement annotation (this is called bean-managed transactions or BMT). The UserTransaction interface is exposed to EJB components either through the EJBContext interface, via the getUserTransaction method, or directly via injection using the general @Resource annotation. Thus, an EJB application does not interface with the Transaction Manager directly for transaction demarcation; instead, the EJB bean relies on the EJB server to provide support for all of its transaction work as defined in the Enterprise JavaBeans Specification. (The underlying interaction between the EJB Server and the TM is transparent to the application; the burden of implementing transaction management is on the EJB container and server provider.)
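As a sketch, a BMT session bean obtaining the UserTransaction via @Resource injection might look like this. The bean and method names are illustrative, and the code assumes an EJB container that performs the injection; it is not runnable standalone.

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

// Illustrative BMT bean: the container injects the UserTransaction,
// and the bean demarcates its own transaction boundaries.
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class OrderBean {

    @Resource
    private UserTransaction utx;

    public void placeOrder() throws Exception {
        utx.begin();
        try {
            // ... transactional work (e.g. JPA/JDBC updates) ...
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        }
    }
}
```

With container-managed transactions (the CMT default) the begin/commit/rollback calls disappear and the container demarcates transactions itself; BMT as shown trades that convenience for explicit control.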
