Monday, 26 December 2011

Many applications record a series of events to file-based storage for later use. This can be anything from logging and auditing through to keeping a transaction redo log in an event-sourced design, or its close relative CQRS.

Java has a number of means by which a file can be sequentially written to, or read back again. This article explores some of these mechanisms to understand their performance characteristics. For the scope of this article I will be using pre-allocated files, because I want to focus on raw IO performance; constantly extending a file imposes a significant overhead and adds jitter to an application, resulting in highly variable latency.

"Why does a pre-allocated file give better performance?", I hear you ask. Well, on disk a file is made up of a series of blocks/pages containing the data. Firstly, it is important that these blocks are contiguous to provide fast sequential access. Secondly, meta-data must be allocated to describe the file and saved within the file-system; as part of this meta-data, a typical large file will have a number of "indirect" blocks allocated to describe the chain of data blocks containing the file contents. I'll leave it as an exercise for the reader, or maybe a later article, to explore the performance impact of not pre-allocating the data files. If you have used a database you may have noticed that it pre-allocates the files it will require.
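As a concrete illustration, a file can be pre-allocated by writing it full of blank pages before any timed runs begin. The sketch below is my own minimal version, not code from the original tests; note that RandomAccessFile.setLength() alone may produce a sparse file on some file-systems, so actually writing the bytes is the surer way to force the blocks to be allocated.

import java.io.RandomAccessFile;

public final class PreAllocator
{
    private static final int PAGE_SIZE = 4096;

    public static void preAllocate(final String fileName, final long fileSize) throws Exception
    {
        final byte[] blankPage = new byte[PAGE_SIZE]; // zero-filled page
        final RandomAccessFile file = new RandomAccessFile(fileName, "rw");
        try
        {
            for (long i = 0; i < fileSize; i += PAGE_SIZE)
            {
                file.write(blankPage); // forces each block to be allocated on disk
            }
            file.getFD().sync(); // make the allocation durable before testing begins
        }
        finally
        {
            file.close();
        }
    }
}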

The Test

I want to experiment with 2 file sizes: one that is sufficiently large to test sequential access but can easily fit in the file-system cache, and another that is much larger, forcing the cache sub-system to retire pages so that new ones can be loaded. For these two cases I'll use 400MB and 8GB respectively. I'll also loop over the files a number of times to show the pre- and post-warm-up characteristics.
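To keep the passes comparable, each mechanism can be driven through the same timing loop. The outline below is a hypothetical sketch of such a harness; the PerfTestCase name and the MB/s arithmetic are my own, not the original test code.

public abstract class PerfTestCase
{
    public abstract String getName();

    public abstract void testWrite(String fileName, long fileSize) throws Exception;

    public abstract void testRead(String fileName, long fileSize) throws Exception;

    public void run(final String fileName, final long fileSize, final int passes) throws Exception
    {
        for (int i = 0; i < passes; i++)
        {
            final long writeStart = System.nanoTime();
            testWrite(fileName, fileSize);
            final long writeNanos = System.nanoTime() - writeStart;

            final long readStart = System.nanoTime();
            testRead(fileName, fileSize);
            final long readNanos = System.nanoTime() - readStart;

            // bytes * 1000 / nanoseconds == megabytes per second
            System.out.printf("%s pass %d: write %.0f MB/s, read %.0f MB/s%n",
                              getName(), i,
                              (fileSize * 1000.0) / writeNanos,
                              (fileSize * 1000.0) / readNanos);
        }
    }
}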

For years I was a big fan of using RandomAccessFile directly, because of the control it gives and its predictable execution. I never found buffered streams useful from a performance perspective, and this still seems to be the case.
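For reference, a sequential pass with RandomAccessFile looks something like the following sketch; the 4KB page size is an assumption worth experimenting with on your own system.

import java.io.RandomAccessFile;

public final class RandomAccessFileTest
{
    private static final int PAGE_SIZE = 4096;

    public static void testWrite(final String fileName, final long fileSize) throws Exception
    {
        final byte[] page = new byte[PAGE_SIZE]; // zeroed payload stands in for real data
        final RandomAccessFile file = new RandomAccessFile(fileName, "rw");
        try
        {
            for (long i = 0; i < fileSize; i += PAGE_SIZE)
            {
                file.write(page);
            }
        }
        finally
        {
            file.close();
        }
    }

    public static void testRead(final String fileName) throws Exception
    {
        final byte[] page = new byte[PAGE_SIZE];
        final RandomAccessFile file = new RandomAccessFile(fileName, "r");
        try
        {
            while (-1 != file.read(page))
            {
                // consume the page contents here
            }
        }
        finally
        {
            file.close();
        }
    }
}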

In more recent testing I've found that NIO FileChannel and ByteBuffer perform much better. With Java 7, the flexibility of this programming approach has been improved for random access with SeekableByteChannel.
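The equivalent pass via a channel might look like the sketch below, assuming the file size is a multiple of the buffer size; the 64KB direct ByteBuffer is my own choice for illustration, not a measured recommendation.

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public final class ChannelTest
{
    private static final int BUFFER_SIZE = 64 * 1024;

    public static void testWrite(final String fileName, final long fileSize) throws Exception
    {
        final ByteBuffer buffer = ByteBuffer.allocateDirect(BUFFER_SIZE);
        final FileChannel channel = new RandomAccessFile(fileName, "rw").getChannel();
        try
        {
            long written = 0;
            while (written < fileSize)
            {
                buffer.clear(); // the zero-filled buffer stands in for real payload
                while (buffer.hasRemaining())
                {
                    written += channel.write(buffer);
                }
            }
        }
        finally
        {
            channel.close(); // also closes the underlying file
        }
    }

    public static void testRead(final String fileName) throws Exception
    {
        final ByteBuffer buffer = ByteBuffer.allocateDirect(BUFFER_SIZE);
        final FileChannel channel = new RandomAccessFile(fileName, "r").getChannel();
        try
        {
            while (-1 != channel.read(buffer))
            {
                buffer.flip();
                // consume the buffer contents here
                buffer.clear();
            }
        }
        finally
        {
            channel.close();
        }
    }
}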

It seems that for reading, RandomAccessFile and NIO do very well, with memory-mapped files winning for writes in some cases.
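A write pass via a memory mapping, again as a sketch: a single MappedByteBuffer is limited to 2GB because it is indexed by an int, so the 8GB case would have to be covered by a series of smaller mappings.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public final class MemoryMappedTest
{
    public static void testWrite(final String fileName, final int fileSize) throws Exception
    {
        final FileChannel channel = new RandomAccessFile(fileName, "rw").getChannel();
        try
        {
            final MappedByteBuffer buffer =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, fileSize);

            for (int i = 0; i < fileSize; i += 8)
            {
                buffer.putLong(i, i); // dummy payload written straight into the mapping
            }

            buffer.force(); // flush the dirty pages out to the storage device
        }
        finally
        {
            channel.close();
        }
    }
}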

I've seen these results vary greatly depending on platform. File system, OS, storage devices, and available memory all have a significant impact. In a few cases I've seen memory-mapped files perform significantly better than the others but this needs to be tested on your platform because your mileage may vary...

A special note should be made about the use of memory-mapped large files when pushing for maximum throughput. I've often found the OS can become unresponsive due to the pressure put on the virtual memory sub-system.

Conclusion

There is a significant difference in performance between the different means of doing sequential file IO from Java. Not all methods are even remotely equal. For most IO I've found the use of ByteBuffers and Channels to be the best optimised parts of the IO libraries. If buffered streams are your IO libraries of choice, then it is worth branching out and getting familiar with the implementations of Channel and Buffer, or even falling back to the good old RandomAccessFile.