OpenJDK and HashMap … Safely Teaching an Old Dog New (Off-Heap!) Tricks

The OpenJDK off-heap JDK Enhancement Proposal (JEP) seeks to standardise a facility that has been available as an internal-use-only API in HotSpot and OpenJDK since Java 6. This facility can manipulate off-heap memory as efficiently as on-heap memory, but without some of the limitations that on-heap memory usage brings. On-heap memory works very well for millions of short-lived objects/values; however, once your requirements grow to billions of objects/values, you have to be more creative if you want to avoid ever-increasing GC pauses. In some cases you want to avoid pause times altogether. What off-heap offers is the capability to build “arenas” of memory storage that follow their own rules and don’t impact GC pause times. Two collections that easily lend themselves to arenas are Queue and HashMap, as these have simple object life cycles, so having to write your own garbage collection is not too onerous. The benefit is collections that can grow much larger than traditional heap sizes, and even larger than main memory, with trivial impact on pause times. By comparison, if your heap exceeds main memory size, your machine will become unusable, possibly requiring power cycling.

This article will survey the impact this JEP will have in empowering the familiar Java HashMap with new off-heap capabilities. Simply put, this JEP may be just the magic that can “teach” HashMap (that lovable old dog) some new tricks. The JEP asks future OpenJDK releases to make several radical departures from traditional Java platform priorities:

Safely re-factor the useful parts of sun.misc.Unsafe into a new API package

Remove the FUD (and, frankly, technical bigotry) associated with using off-heap programming tactics to achieve Java performance gains. Ultimately, it is somewhat clear that this JEP is asking the OpenJDK platform to now openly embrace as mainstream this once dark craft, secret society of off-heap practitioners.

This article will strive (in a general and gentle way) to accommodate all interested Java developers. The authors ask that even newbies stay with this article for the whole ride, despite any unfamiliar “bumps in the road”; so don’t be discouraged, and please remain seated until the article comes to a complete stop. This article will strive to provide a historical context offering insight into the following questions:

what are the historical successes/failures relating to solution efforts?

what are some of the unresolved issues still facing on-heap HashMap use cases?

how do the capabilities delivered via the new JEP help provide remedy (i.e. by taking the HashMap off-heap)?

what can we expect from future JEPs with regard to solving problems still not solved by the off-heap JEP?

So let’s get started on this ride. It is worth remembering that before Java, hash tables were implemented in the native memory heap, e.g. in C and C++. In a way, reintroducing off-heap memory for storage is a reintroduction of old tricks that most contemporary developers never knew. In many ways, this is a ride “back to the future”, so enjoy the trip.

OpenJDK Off-Heap JEP

There have been a couple of submissions for an off-heap JEP. The following example outlines the minimum requirements to support off-heap memory. Other submissions have attempted to offer a replacement for sun.misc.Unsafe, which off-heap currently uses; those also include many other useful and interesting pieces of functionality.

JEP Summary: Create replacements for portions of sun.misc.Unsafe so there is no reason to use that library directly.

Success Metrics: There is a supported way to implement the same key functionality as Unsafe and FileDispatcherImpl with the same performance.

Motivation: Unsafe is currently the only means of building large, thread-safe off-heap data structures. This is useful for minimising GC overhead, sharing memory between processes, and implementing embedded databases without having to use C and JNI, which is likely to be slower and less portable. FileDispatcherImpl is currently required to implement memory mapping of any size. (The standard APIs are limited to less than 2 GB.)
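To make that 2 GB limitation concrete, here is a minimal sketch using only the standard NIO API (file name and values are invented for the example): FileChannel.map() accepts a long size parameter, but rejects any size above Integer.MAX_VALUE, which is the limit the motivation above refers to.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapLimitDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("mmap", ".dat");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            // The size parameter is a long, but any size > Integer.MAX_VALUE
            // (roughly 2 GB) throws IllegalArgumentException.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(0, 123);                 // the write lands in the mapped file
            System.out.println(buf.getInt(0));  // prints 123
        }
    }
}
```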

Description: Provide a wrapper class for off-heap memory (similar to ByteBuffer) but with the following enhancements:

the ability to reuse a slice of buffer for different records within a buffer.

the ability to map an off-heap data structure to such a buffer in such a way that bounds checking is optimised away.
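As a rough illustration of the first enhancement, a single direct ByteBuffer can already act as a small arena whose fixed-size record slots are reused via absolute-index reads and writes; the record size and slot layout here are invented for the sketch.

```java
import java.nio.ByteBuffer;

public class ArenaSliceDemo {
    public static void main(String[] args) {
        final int RECORD_SIZE = 16;  // hypothetical fixed record layout
        // one off-heap arena holding room for 4 records
        ByteBuffer arena = ByteBuffer.allocateDirect(RECORD_SIZE * 4);

        // write a long field of record #2 with absolute addressing,
        // so no per-record buffer object is allocated
        arena.putLong(2 * RECORD_SIZE, 42L);

        // later, any code holding the arena can address the same slot directly
        System.out.println(arena.getLong(2 * RECORD_SIZE)); // prints 42
    }
}
```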

Key functionality to be retained:

support for memory mapped files

support for NIO

support for writes committed to disk.

Alternatives: Use sun.misc.Unsafe directly.

Testing: This would have the same testing requirements as sun.misc.Unsafe and memory mapped files do now. Additional testing would be required to show that thread-safe operations work in the same way as the AtomicXxxx classes. The AtomicXxxx classes could be rewritten to use this public API alone.

Risks: While a number of developers use Unsafe, they might not agree on what is a suitable replacement. This may mean the scope of this JEP is broadened or new JEPs created to cover other functionality in Unsafe.

Other JDK: NIO

Compatibility: A backward compatibility library would be desirable. This could be implemented for Java 7, and possibly Java 6, if there is enough interest. (Java 7 is the current version at the time of writing.)

Security: Ideally the risk to security should be no more than the current ByteBuffer.

Performance and scalability: Optimising bounds checking may be difficult. More functionality may need to be added to the new buffer for common operations to reduce the overhead e.g. writeUTF, readUTF.

Brief History of HashMap

The term “Hash Code” first appeared in Computing literature in January 1953, when H. P. Luhn (1896-1964) wrote an internal IBM memorandum using that term. Luhn was trying to solve the problem “Given a text-book formatted stream of words, what is the best algorithm and data structure to render a 100% complete (Word, PageSet) index?”

H.P. Luhn (1896-1964)

Luhn writes “hashcode” is my essential operator.

Luhn writes “Associative Array” is my essential operand.

The term ‘HashMap’ (aka HashTable) evolves.

NOTE: The HashMap derives from the work of a computer scientist born in 1896. HashMap is an old dog!

Moving the HashMap story from its origins to early real usage, let’s jump from the mid-1950s to the mid-1970s.

Since that time, advances in data structures (HashMap, Heap, etc.) have moved slowly. We did see Tarjan’s very significant F-Heap breakthrough in 1987, but outside of that, not much else for the operand. Remember, the HashMap first appeared in 1953, sixty-plus years ago!

Now in 2014, however, it may once again be the data structures’ turn to make some significant progress. From the OpenJDK platform view, off-heap HashMap is a data structure on the move.

So much for the history of the HashMap. Let’s now begin to explore today’s HashMap. Specifically, let’s begin to look at the three current Java breeds of this old dog.

N. Wirth 1934-

java.util.HashMap (not Thread Safe)

Fails quickly, each and every time, for any and all true Multi-Threaded (MT)-concurrency use-case(s). All code everywhere must use a Java Memory Model (JMM) memory barrier tactic (e.g. synchronized or volatile) to guarantee order.

A simple hypothetical example that FAILS:

- synchronized writer

- non-synchronized reader.

- true concurrency (2 x CPU/L1)

Let's see why this fails...

Suppose Thread 1 writes to the HashMap, the effects of which are stored in CPU 1's level 1 cache only. Then Thread 2 becomes eligible to run a few seconds later and is resumed on CPU 2; it reads the HashMap from CPU 2's level 1 cache and does not see the writes that Thread 1 made, because there was no memory barrier operation between the write and the read in both the writing and the reading thread, as required by the Java Memory Model for shared state. Even if Thread 1 synchronizes the writes, so that their effect is flushed to main memory, Thread 2 will still not see them, because its read came from CPU 2’s level 1 cache. So synchronizing only on writes prevents collisions only on writes. To effect the necessary memory barrier operation on all threads' views, you must also synchronize the reader.
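A minimal sketch of the fix (map, key, and value are invented for the example): both the writer and the reader synchronize on the same lock, which gives the Java Memory Model happens-before edge the paragraph above calls for.

```java
import java.util.HashMap;
import java.util.Map;

public class VisibilityDemo {
    static final Map<String, Integer> map = new HashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            synchronized (map) {    // writer's memory barrier
                map.put("price", 42);
            }
        });
        writer.start();
        writer.join();

        Integer v;
        synchronized (map) {        // reader's memory barrier: required too!
            v = map.get("price");
        }
        System.out.println(v);      // prints 42
    }
}
```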

Achieving high performance when using "synchronized" requires low contention rates. This is very common, so in many cases it is not as bad as it sounds. However, once you introduce any contention (multiple threads trying to operate on the same collection at the same time), performance will suffer. In the worst case, with high lock contention, multiple threads may exhibit poorer performance than a single thread operating with no locking or contention of any kind.

Collections.synchronizedMap() does return an MT-Safe HashMap.

This is achieved via a coarsely grained lock blocking all mutate() and access() operators on all keys, effectively blocking the entire Map operand to all but one Thread operator. This results in zero MT-concurrency, meaning only one thread at a time may have access. Another consequence of this coarsely grained locking approach is the extremely undesirable condition known as High Lock Contention (see picture at left, in which N threads contend for one lock but are forced to block waiting, as the lock is already owned by one running thread).
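In code, the wrapper looks like this (keys and values invented for the sketch); note that even with the wrapper, compound operations such as iteration still require you to lock the wrapper manually.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SyncMapDemo {
    public static void main(String[] args) {
        // every put()/get() takes the same single lock: MT-safe, zero concurrency
        Map<String, Integer> m = Collections.synchronizedMap(new HashMap<>());
        m.put("a", 1);
        m.put("b", 2);

        // iteration is a compound operation; the caller must hold the lock
        int sum = 0;
        synchronized (m) {
            for (int v : m.values()) sum += v;
        }
        System.out.println(sum); // prints 3
    }
}
```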

Luckily for this fully synchronized, never truly concurrent, isolation=SERIALIZABLE (and overall disappointing) trap of a HashMap, our upcoming OpenJDK off-heap JEP has a recommendation for remedy: Hardware Transactional Memory (HTM). With HTM, writing coarse-grained synchronized blocks in Java will become cool again! Just let HTM assist by taking code with zero concurrency and, in hardware, turn it into something that is truly concurrent and 100% MT-safe. That’s got to be cool, right?

Upon the arrival of JDK 1.5 we Java programmers finally found in the core API the long-coveted java.util.concurrent.ConcurrentHashMap. Though CHM cannot be used as a universal drop-in replacement for HashMap (CHM uses more resources and may be inappropriate in low-contention cases) it does solve a problem other HashMap(s) could not: Delivering both true MT-safety and true MT-Concurrency. Let’s draw a picture of exactly how CHM is helpful.
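Before drawing that picture, here is a quick sketch of CHM delivering both properties at once (key name and counts invented): several threads hammer the same key concurrently, and the atomic merge() keeps the result correct without any external locking.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Long> hits = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++)
            pool.execute(() -> {
                for (int i = 0; i < 1000; i++)
                    hits.merge("key", 1L, Long::sum); // atomic per-key update
            });
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(hits.get("key")); // prints 4000
    }
}
```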

Lock Striping.

Have a lock set for independent subsets of the java.util.HashMap: N hash buckets share N/Segments locks. (Picture at right, Segments = 3.)

Lock striping is useful when there is a design ambition to re-factor a high-contention lock into multiple locks without compromising data integrity.

Issue: how do you protect the whole collection all at once? Acquire ALL of the locks (recursively)?
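A hypothetical, minimal sketch of the idea (Segments = 3, class and method names invented): each key hashes to one stripe lock, so threads touching different stripes never contend, while a whole-collection operation such as size() must acquire every stripe lock, illustrating the issue raised above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StripedMap {
    static final int SEGMENTS = 3;
    private final List<Map<String, String>> segments = new ArrayList<>();
    private final Object[] locks = new Object[SEGMENTS];

    public StripedMap() {
        for (int i = 0; i < SEGMENTS; i++) {
            segments.add(new HashMap<>());  // one independent bucket set per stripe
            locks[i] = new Object();
        }
    }

    private int stripe(String key) {
        return (key.hashCode() & 0x7fffffff) % SEGMENTS;
    }

    public void put(String k, String v) {
        int s = stripe(k);
        synchronized (locks[s]) { segments.get(s).put(k, v); }
    }

    public String get(String k) {
        int s = stripe(k);
        synchronized (locks[s]) { return segments.get(s).get(k); }
    }

    // protecting the whole collection at once means acquiring ALL stripe locks
    public int size() {
        synchronized (locks[0]) {
            synchronized (locks[1]) {
                synchronized (locks[2]) {
                    int n = 0;
                    for (Map<String, String> m : segments) n += m.size();
                    return n;
                }
            }
        }
    }

    public static void main(String[] args) {
        StripedMap m = new StripedMap();
        m.put("IBM", "bond");
        m.put("GE", "bond");
        System.out.println(m.get("IBM") + " " + m.size()); // prints "bond 2"
    }
}
```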

So now you may be asking: with the arrival of ConcurrentHashMap and the java.util.concurrent package, is Java finally a programming platform on which the High Performance Computing community can build solutions that solve their problems?

Unfortunately, the most realistic answer is still “not quite yet.” Really, so what problems still remain?

CHM has a problem relating to scale and holding medium-lived objects. If you have a small number of critical collections that use CHM, it is likely that some will be very large. In some cases, the bulk of your medium-lived objects live inside some of these collections. The problem with medium-lived objects is that they contribute the most to GC pause times and can potentially be 20 times more expensive than short-lived objects. Long-lived objects tend to stay in tenured space, and short-lived objects die young, but medium-lived objects are copied through all the survivor spaces and then die in tenured space, making them expensive to copy around and finally clean up. Ideally you want a collection that can store data with zero GC impact.

ConcurrentHashMap elements live at runtime on the Java VM Heap. Because the CHM is on-heap it can be a significant contributor to Stop-the-World (STW) pauses, if not the most significant. When an STW GC event occurs, all application processing endures the infamous “embarrassing pause” delay. This delay, caused by CHM (and all of its elements) being on-heap, is a miserable experience. It is an experience and a problem that the High Performance Computing community will not tolerate.

Before the High Performance Computing community fully embraces Java, there must be a solution that tames the on-heap GC monster.

The solution is spiritually very simple: take CHM off-heap.

And, of course it is exactly this solution that this OpenJDK off-heap JEP is designed to support.

Before we deep-dive to show what off-heap life will be like for HashMap, let’s fully show on-heap’s inhospitable details.

Brief History of the Heap

Java heap memory is allocated to the JVM by the operating system. All Java Objects are referenced via their on-heap JVM location/identity. Your run-time Object must reside in one of two distinct on-heap regions. These regions are more formally referred to as generations, specifically: (1) the Young Generation (consisting of EDEN and two SURVIVOR subspaces) and (2) the Tenured Generation. (NOTE: Oracle has announced that the Perm Generation will begin phasing out in JDK 7 and will be eliminated in JDK 8.) All generations are subject to the dreaded “Stop-the-World” full garbage collection event, unless you use a “pauseless” collector such as Azul’s Zing.

In the world of Garbage Collection, the operations are performed by the “Collectors” and the operands of these Collectors are the Heap’s “Generation” (and sub-Space) targets. The Collectors operate on the Heap Gen/Space targets. The inner details of how the full Garbage Collection works is its own (very large) subject, covered in its own dedicated article.

For now know this: If any Collector (of any kind) operating on any generation’s Heap space causes a full “Stop the World” event -- that is a serious problem.

It is a problem that must have a solution.

It is a problem that the off-heap JEP will resolve.

Let’s take a closer look.

Java Heap Layout: Through the Generations View

Garbage collection makes writing programs much easier; however, in the world of SLA targets, whether written or implied (my Java Applet stopping for 30 seconds is not an option), the Stop-The-World pause time is a big headache. It is so large that, for many Java developers, it is the only performance issue in front of them. By the way, there are many other performance issues that need to be addressed once STW is no longer a problem.

The benefit of using off-heap storage is that the number of medium-lived objects can drop dramatically. It can reduce the number of short-lived objects as well. For High Frequency Trading systems, it is possible to create less garbage in a day than your Eden size, which means you can run for a whole day without a single minor collection. Once you have very low memory pressure, and few objects reaching tenured space, tuning your GC becomes trivial. Often you don’t even need to set any GC parameters (except perhaps to increase the Eden size).

By moving objects off-heap, Java apps are often able to reclaim custody of controlling their own destiny, meeting performance SLA expectations and obligations.

Wait. What did that last sentence just say?

ATTENTION: All passengers, please fold up your trays and place your seats upright. This is very much worth repeating, as it is one of the central tenets of this OpenJDK off-heap JEP.

By moving collections (like HashMap) off-heap, Java apps are often able to reclaim their custody (no longer at the mercy of a STW GC “embarrassing pause” event) to control their own destiny, meeting performance SLA expectations and obligations.

This is a practical option, one that is already used in High Frequency Trading systems in Java.

HashMap’s Present … what new problems can this “old dog” now solve (by going off-heap)?

The following diagram illustrates two JavaVM processes (PID1 and PID2) that have the ambition to use a SharedHashMap (SHM) as an inter-process communication (IPC) facility. The diagram’s bottom horizontal axis represents the full SHM OS locality domain. When being operated upon, OpenHFT objects must be somewhere in the OS physical memory's user address space or kernel address space views. Drilling down, we know that they must start out as "On-Process" locality. From the Linux OS view, the JVM is an a.out (rendered via a gcc invocation). When that a.out is running, from the Linux process internals view, that running a.out has a PID. A PID's a.out (at run time) has a known anatomy containing three segments:

That's the OS view of the PID. That PID is an executing JVM, and that JVM has its own view of its operands' potential locality.

From the JVM view, the operands exist as On-PID-on-heap (normal Java) or On-PID-off-heap (via Unsafe or NIO's bridge to Linux mmap(2)). Whether On-PID-on-heap or On-PID-off-heap all operands are still executing in USER address space. In C/C++ there are APIs (OS system calls) available that allow C++ operands to have locality Off-PID-off-heap. These operands live in KERNEL address space.
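The /dev/shm-style sharing the article builds toward can be sketched with only standard NIO: two independent mappings of the same file (standing in for two PIDs; file name and value are invented for the example) see each other's writes, because both mappings are backed by the same OS page cache pages.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMmapDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("sharedSHM", ".dat");
        f.deleteOnExit();

        try (RandomAccessFile ra1 = new RandomAccessFile(f, "rw");
             RandomAccessFile ra2 = new RandomAccessFile(f, "rw")) {
            // two separate mappings of the same file, as two processes would have
            MappedByteBuffer pid1View =
                ra1.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            MappedByteBuffer pid2View =
                ra2.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            pid1View.putInt(0, 777);                 // "PID 1" writes
            System.out.println(pid2View.getInt(0));  // "PID 2" reads: prints 777
        }
    }
}
```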


The next 6 bullets refer to the above diagram.

#1. To best exercise this diagram’s flow, let’s say PID 1 defines a BondVOInterface that is JavaBean compliant. We want to demonstrate (following the numbered flows pictured above) how to operate on a Map<String, BondVOInterface> in a manner that highlights the off-heap advantages.
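The article never lists the interface itself, so here is a hypothetical sketch of what such a JavaBean-compliant BondVOInterface might look like; the method names simply mirror the getters/setters used later in the walkthrough, and MarketPx stands in for the nested intra-day price type returned by the indexed array accessors.

```java
// Hypothetical sketch only: field names mirror the later usage;
// OpenHFT generates the off-heap implementation of such an interface at run time.
public interface BondVOInterface {
    long getMaturityDate();          // e.g. epoch millis
    void setMaturityDate(long d);

    double getCoupon();
    void setCoupon(double c);

    String getSymbol();
    void setSymbol(String s);

    // indexed getter/setter pair simulating an off-heap array (note the 'At' suffix)
    MarketPx getMarketPxIntraDayHistoryAt(int index);
    void setMarketPxIntraDayHistoryAt(int index, MarketPx px);

    interface MarketPx {
        double getBidPx();  void setBidPx(double px);
        double getAskPx();  void setAskPx(double px);
        double getCallPx(); void setCallPx(double px);
        double getParPx();  void setParPx(double px);
    }
}
```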

Now, some OpenHFT on-heap →off-heap magic takes place. Watch carefully … for all of this article’s travels, the “magic” that is about to be shared now is this trip’s very best “sight seeing” opportunity:

#2. The above OpenHFT factory invocation, at run time within each process, renders and compiles a BondVOInterface£native inner implementation that takes full custody of doing the byte addressing arithmetic necessary to manifest a transitively sound/complete off-heap abstractAccess() / abstractMutate() operator set (over the interface's getXX()/setXX() JavaBean compliant method signatures). The effect of all this is that the OpenHFT run-time has taken your interface and compiled it into an implementation class that will act as your bridge to explicit off-heap capability. Arrays are simulated using an indexed getter and setter. The interface for the array is also generated in the same way as the outer interface. The setter and getter signatures for arrays are setXxxxAt(int index, Type t); and getXxxxAt(int index); (note the ‘At’ suffix for both of the array getter/setter signatures).

All this is rendered for you, at run-time, via an in-process OpenHFT JIT compiler. All you do is supply the interface. Pretty cool, right?

#3. PID 1 then calls the OpenHFT API for shm.put(K, V); to write by Key (V = BondVOInterface) data into the off-heap SHM. We have crossed the OpenHFT bridge that was built for us in [2].

We are now off-heap! What a sight, right? :-)

Let’s take a look at how we get here from PID 2’s viewpoint.

#4. Once PID 1 has finished putting its data into our off-heap SHM, PID 2 can now call the exact same OpenHFT factory, something like

to start its journey across an OpenHFT constructed bridge to get its reference to the exact same off-heap OpenHFT SHM. Of course, it is assumed that PID 1 and PID 2, being co-located on the same localhost OS, share a common view of /dev/shm (and have a priori access to the exact same /dev/shm/myBondPortfolioSHM file view identity).
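The elided factory call would look roughly like the following sketch; this is an assumption based on the OpenHFT SharedHashMap builder API of the time, so the builder method names, entry size, and file path shown here are illustrative rather than authoritative.

```java
// hypothetical sketch -- consult the OpenHFT javadoc for the exact builder API
SharedHashMap<String, BondVOInterface> shm = new SharedHashMapBuilder()
        .generatedValueType(true)   // ask OpenHFT to generate the off-heap impl
        .entrySize(512)             // illustrative fixed entry size
        .create(new File("/dev/shm/myBondPortfolioSHM"),
                String.class, BondVOInterface.class);
```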

#5. PID 2 can then call V = shm.get(K); (which creates a new off-heap reference each time), or PID 2 can call V2 = shm.getUsing(K, V); which reuses an off-heap reference of your choice (or returns NULL in the case that K is not an Entry). There is yet a third get signature available to PID 2 in the OpenHFT API: V2 = acquireUsing(K, V); which has the distinction that, should K not be an Entry, you will not be returned NULL; instead you will be returned a reference to a newly provided non-NULL V2 placeholder. This reference allows PID 2 to operate on the SHM’s off-heap V2 Entry in place.

NOTE: whenever PID 2 calls V = shm.get(K); it is returned a new off-heap reference. This creates some garbage, but you have a reference to this data until you discard it. However, when PID 2 calls either V2 = shm.getUsing(K, V); or V2 = shm.acquireUsing(K, V); the off-heap reference is moved to the location of the new key, and this operation is GC-less, as you have recycled everything yourself.

NOTE: no copy occurs at this point, only the location of the data in off heap space has been set or changed.

#6. An off-heap record is a reference that wraps a Bytes for off-heap manipulation and an offset. By changing these two, any area of memory can be accessed as if it were an interface of your choice. When PID 2 operates on the 'shm' reference, it sets the correct Bytes and offset, calculated by reading the hash map stored in the /dev/shm file view. After getUsing() returns, the calculations for the offsets are trivial and are inlined, i.e. once the code is JITed, the get() and set() methods are turned into simple machine code instructions to access those fields. Only the fields you access are read or written: true ZERO-COPY! Beautiful.

//ZERO-COPY
// our reusable, mutable off-heap reference, generated from the interface.
BondVOInterface bondZC = DataValueClasses.newDirectReference(BondVOInterface.class);

// look up the key and get a reference to the data, if it exists.
if (shm.getUsing("369604103", bondZC) != null) {
    // found the key, and bondZC has been set.
    // get directly, without touching the rest of the record.
    long _matDate = bondZC.getMaturityDate();
    // write just this field; again, we need to assume we are the only writer.
    bondZC.setMaturityDate(parseYYYYMMDD("20440315"));

    // demo of how to do OpenHFT off-heap array[] processing
    int tradingHour = 2; // current trading hour intra-day
    BondVOInterface.MarketPx mktPx = bondZC.getMarketPxIntraDayHistoryAt(tradingHour);
    if (mktPx.getCallPx() < 103.50) {
        mktPx.setParPx(100.50);
        mktPx.setAskPx(102.00);
        mktPx.setBidPx(99.00);
        // setMarketPxIntraDayHistoryAt is not needed as we are using zero copy;
        // the original has been changed.
    }
}
// bondZC will be full of default values and zero-length strings the first time.
// from this point, all operations are completely record/entry local;
// no other resource is involved.

// now perform thread-safe operations on my reference
bondZC.addAtomicMaturityDate(16 * 24 * 3600 * 1000L); // 20440331
bondZC.addAtomicCoupon(-1 * bondZC.getCoupon()); // MT-safe! now a Zero Coupon Bond.

// say I need to do something more complicated:
// set the Thread's getId() to match the process id of the thread.
AffinitySupport.setThreadId();
bondZC.busyLockEntry();
try {
    String str = bondZC.getSymbol();
    if (str.equals("IBM_HY_2044"))
        bondZC.setSymbol("OPENHFT_IG_2044");
} finally {
    bondZC.unlockEntry();
}

It is important to realize the full scope of OpenHFT on-heap ←→ off-heap magic that is taking place in the above diagram.

The fact is that the OpenHFT SHM implementation is at step #6 intercepting at run-time the arg-2 overloaded invoke of V2 = shm.getUsing(K, V);. At its essence, the SHM implementation is querying

(
( arg2 instanceof Byteable ) ?
ZERO_COPY :
COPY
)

to determine its eligibility to execute as ZERO-COPY (via reference update) instead of a full COPY (via Externalizable).

The key interface for how the off heap references function is Byteable. This allows the references to be (re)assigned.

public interface Byteable {
    void bytes(Bytes bytes, long offset);
}

If you implement this interface in a class of your own, you can hand-write or generate your own Byteable classes.

For now, and as we have alluded, you may be tempted to continue to just think “all this happens magically”. There is a whole lot taking place here to effect this magic, and it all happens exclusively within the executing application process! Using a run-time compiler that takes the BondVOInterface interface as input, the OpenHFT internals generate the implementation's source and compile it (again, in process) into an OpenHFT-savvy implementation class. If you don’t want to have the class generated at runtime, you can pre-generate and compile it at build time. The OpenHFT internals then load that freshly rendered implementation class into a runnable context. It is at that point that the run-time physically executes the rendered BondVOInterface£native inner class's generated methods to effect a ZERO-COPY operator capability onto the off-heap Bytes[] record. This capability is so ZERO-COPY that as soon as you perform a thread-safe operation in one thread, it is visible to another, even if that thread is in another process.

And there you have the essence of the OpenHFT SHM magic: Java now has true ZERO-COPY IPC.

Abra Cadabra!

PERFORMANCE RESULTS: CHM vs. SHM

On Linux 13.10, i7-3970X CPU @ 3.50GHz, hex core, 32 GB of memory.

SharedHashMap -verbose:gc -Xmx64m

ConcurrentHashMap -verbose:gc -Xmx30g

Of course, the primary accounting for why CHM is 438% slower than SHM is because CHM endured a 21.8 second STW GC event. But from the SLA viewpoint, an accounting for the cause (without a remedy for the cause) is irrelevant. From the SLA viewpoint, the fact is that CHM is just plain 438% slower. From the SLA viewpoint, CHM performance in this test is intolerably slower.

In Q2 2014 the Java Community Process will announce JSR-107 EG’s release of JCACHE - the Java Caching standard API/SPI. JCACHE will do for the Java Caching community exactly what JDBC did for the Java RDBMS community. At the heart and soul of JCACHE is its primordial Caching operand interface javax.cache.Cache<K,V>. If one looks closely at this Cache API, it should become clear that a Cache is a near perfect superset of Map (with a few pedantic differences). One of the primary ambitions of JCACHE is to help deliver a scalable (scale up and scale out) solution to the general Java data locality, latency, and caching problem. So if JCACHE’s central operand is a Map, and one of JCACHE’s primary missions is to solve data locality/latency problems, how perfect is it to adapt OpenHFT’s off-heap SHM as the implementation of JCACHE’s primary operand interface? For some Java caching use cases, OpenHFT’s off-heap SHM ambitions are perfectly perfect.

In just a moment (remain seated, please) this article will share exactly how to adapt the OpenHFT SHM as a fully JSR-107 interoperable off-heap JCACHE operand. Before that, we want to first digest the fact that the javax.cache.Cache interface is a capability superset of the java.util.Map interface. We need to know exactly how big a superset, as this will impact exactly how much work we have to do to 100% soundly and 100% completely adapt SHM as an implementation.

The following picture depicts the small extent of development work it would take for a community driven OpenHFT programmer to adapt/contribute an OpenHFT off-heap SHM as a fully JSR-107 interoperable JCACHE operand (community-driven open source JCACHE provider=RedHat Infinispan).

As this ride nears its “last stop”, we would like to offer a parting parable for your consideration.

The business relationship between community-driven open source off-heap HashMap providers and JCACHE provider vendors (both proprietary and open-source) can be harmonious and synergistic. Each has an essential role in making the end-users’ off-heap experiences more enjoyable. The off-heap HashMap providers can deliver the core off-heap HashMap (as JCACHE) operand. The JCACHE vendors (both proprietary and open-source) can adapt that operand into their products and then provide the core JCACHE operators (and infrastructure).

It is kind of like the relationship that Cows (dairy farmers, if you will, producers of the core operand = Milk) have with Dairy companies (producers of the Milk operator set = {pasteurize, skim, 1%, 2%, half-and-half, etc.}). Together, the (Cows, Dairy Cos.) pair produces a product that the end-users enjoy much more than if the (Cows, Dairy Cos.) pair did not work together. End users need them both.

Cows make milk openly, 24/7, and with perfectly unbothered focus. The dairy companies can make milk more enjoyable (half-half, 2%, 1%, skim) … so they do have an opportunity to play an important role … but they do not make the milk. Right now the OpenSource “cows” are making the off-heap HashMap “milk”. If proprietary solution-vendors think they can make that milk more enjoyable, go for it, such an effort is welcomed by all. But these vendors are encouraged not to try to make any claim that their proprietary milk is in any way a better “milk”. Cows make the best milk.

In closing, it is so exciting to consider how far Java has come with regard to accommodating the High Performance Computing community. Things truly have changed a lot, and all for the better.

Just look at what that lovable old dog HashMap can now do! With OpenJDK, OpenHFT, and Linux, off-heap HashMap now has friends in “low places” (i.e. the native OS).

Protected now from any disturbances caused by STW GC, HashMap is now born again as an essential HPC data structure operand. Stay forever young, HashMap … forever young!

Thanks for traveling with us, we hope you enjoyed the ride. Till next time, bye-bye.

About the Authors

Peter K. Lawrey is the Principal Consultant for Higher Frequency Trading Ltd., and the software engineering lead of the OpenHFT project. He is a Java Community Process member presently sitting on the active JCP expert group defining the Java standard API for Distributed Data Grids (JSR-347). He is the founder of the 500 member Performance Java Users’ Group and author of the blog “Vanilla Java” (230 articles, 3 million site hits). Peter graduated from Melbourne University with two degrees: Computer Science and Electrical Engineering. Peter is a top 3 ranked StackOverflow answer respondent. He has spent the last 5 years developing, supporting, and providing consulting and training for high frequency trading systems in Europe and Eastern USA.

Ben D. Cotton III is an IT consultant at J.P.Morgan Chase & Co., currently using Java data grid technology on an UHPC Linux supercomputer to render and aggregate real-time liquidity risk. Ben graduated Rutgers University with a degree in Computer Science. He spent his first 11 years at AT&T Bell Laboratories writing BellMac32-ASM/C/C++ code supporting numerous proprietary telecommunications, network analytics, and provisioning protocols; and has spent the last 14 years writing Java code supporting low-latency, high-throughput, transactional, fixed income, derivatives, electronic trading, clearing, pricing and risk systems. Like Peter, Ben also sits on JSR-347 EG.

The idea of safely going off-heap opens many possibilities. Question: with NIO going off-heap via DirectByteBuffer mapping, there is a 2 GB limit. This article says "billions of objects/values" in an off-heap SharedHashMap. Does its entry capacity exceed 2 GB? If so, that is truly exceptional, since that many entries on heap would eventually suffer GC. This is an exceptional solution. Also, very nicely written.

No STW GC and reliable, real-time throughput is superb
by
Oleg Fetisov

Even more impressive than SHM [no GC] being 438% faster than CHM [heavy STW GC] is that SHM throughput remains extremely reliable under all load distributions. This is true real-time performance. Reliable performance. Very impressive.

I fully understand the advantages of going off-heap, but can you give a few practical use cases for sharing memory between processes on the same machine? In most cases I see, you want to share memory between different machines for high availability and scalability; sharing memory between processes on the same machine doesn't provide these advantages. Do you have plans to share memory between machines? That would be really interesting. I see that you mention Infinispan, but I don't see how it can be combined with OpenHFT to create a distributed heap.

And what happens when a JVM crashes during the modification of something off-heap? Could it be that the other JVM will see corrupted state, and that that JVM could become unstable?

It is wasteful to distribute locality=127.0.0.1 "problems" using UDP/TCP transports when nothing ever crosses a physical NIC. E.g., we use Infinispan MapReduce APIs to effect the distribution of a quantitative risk algorithm onto a 90-JVM-node data grid. But that 90-node grid is served by a 240-CPU/3 TB-RAM Linux supercomputer on which all resources are 100% localhost. Why would we want our risk algebra's operators/operands transported across nodes over an OSI loopback when in fact we could achieve "just a few" ns latencies via put()/get() to an off-heap /dev/shm IPC mechanism? Both Infinispan and OpenHFT are very open to community contributions to effect this capability. It is entirely possible (but admittedly not common) to scale out without necessarily implying the need for locality=remote resources.
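The /dev/shm mechanism referred to above can be sketched with plain NIO: a file under the RAM-backed /dev/shm filesystem is memory-mapped by two JVMs, so a put() by one is visible to a get() by the other through the page cache, with no serialization and nothing crossing the network stack. This is only an illustration of the underlying OS mechanism, not OpenHFT's actual format; the file name and fixed-slot layout are invented for the example, and the two "processes" are simulated by two independent mappings:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class ShmIpcSketch {
    public static void main(String[] args) throws Exception {
        // /dev/shm is RAM-backed on Linux; fall back to the temp dir elsewhere
        File dir = new File("/dev/shm");
        if (!dir.isDirectory()) dir = new File(System.getProperty("java.io.tmpdir"));
        File f = new File(dir, "risk-operands.map");   // hypothetical file name
        f.deleteOnExit();

        // "Process A": map the shared file and put a value at a fixed slot
        try (RandomAccessFile a = new RandomAccessFile(f, "rw");
             FileChannel chA = a.getChannel()) {
            a.setLength(4096);
            MappedByteBuffer bufA = chA.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            byte[] val = "42.5".getBytes(StandardCharsets.UTF_8);
            bufA.putInt(0, val.length);                // length prefix, then bytes
            for (int i = 0; i < val.length; i++) bufA.put(4 + i, val[i]);
        }

        // "Process B": an independent mapping of the same file sees the value
        try (RandomAccessFile b = new RandomAccessFile(f, "r");
             FileChannel chB = b.getChannel()) {
            MappedByteBuffer bufB = chB.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
            byte[] out = new byte[bufB.getInt(0)];
            for (int i = 0; i < out.length; i++) out[i] = bufB.get(4 + i);
            System.out.println(new String(out, StandardCharsets.UTF_8));  // prints 42.5
        }
    }
}
```

Since both mappings resolve to the same physical pages, the exchange costs a memory write and a memory read rather than a trip through the loopback stack.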

I know it is wasteful to do IPC for local processes over the network stack. But why do you want to create different processes sharing the same memory? Why not one JVM running the logic of both processes?

I'm sure I'm missing something obvious, but I would like to understand what that is.

In your setup, how do you deal with failover of a machine in your cluster? Or is there no important data on any one machine, so it therefore doesn't matter if a machine dies?

We want the option to rapidly re-factor our deployed algorithms' M-R distribution to be anywhere on our (100% local, ..., 100% remote) compute-resource locality curve. If we are 100% local, we benefit greatly from the OpenHFT solution's participation. If we are 100% remote, we benefit from not having to unwind and re-factor the tightly coupled nature of our algorithm's logic off of a single JVM's full custody. In that case, a data grid provider is helpful.

In summary: Our problem naturally distributes ... it simply must. But, we want custody of where it distributes.

I know I have not answered your "what if failure" question. I'm not the best person to answer w/ specifics. Please stand by.

>> And what happens when a JVM crashes during the modification of something off-heap?

> There is a race condition where either (1) the key/value is incomplete or corrupt (how likely this is depends on how the map is used), or (2) the SHM itself is corrupted. SHM has been designed so that corruption results in lost data or a memory leak, i.e. an entry which uses space but is unreachable. This shouldn't result in a critical failure, though an entry could be lost in the worst case.
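One common way to get the behaviour described in that answer, detecting an incomplete entry rather than consuming garbage, is a version word in the entry header: the writer bumps it to odd before mutating and back to even afterwards, so a writer that dies mid-update leaves an odd version behind. This is only an illustrative seqlock-style sketch (not SHM's actual entry format), with the crash simulated by an early return:

```java
import java.nio.ByteBuffer;

public class TornWriteSketch {
    // entry layout: [int version][long value]; odd version = write in progress
    static final ByteBuffer entry = ByteBuffer.allocateDirect(12);

    static void write(long value, boolean crashMidWrite) {
        entry.putInt(0, entry.getInt(0) + 1);   // mark write started (version now odd)
        entry.putLong(4, value);
        if (crashMidWrite) return;              // simulate the JVM dying right here
        entry.putInt(0, entry.getInt(0) + 1);   // mark write complete (version even)
    }

    static Long read() {
        int v = entry.getInt(0);
        if ((v & 1) != 0) return null;          // incomplete entry detected: skip it
        return entry.getLong(4);
    }

    public static void main(String[] args) {
        write(7L, false);
        System.out.println(read());             // prints 7
        write(9L, true);                        // writer "crashes" mid-update
        System.out.println(read());             // prints null: entry lost, not misread
    }
}
```

The entry is lost (the space it occupies is unreachable until reclaimed), but readers never mistake the half-written value for a valid one, which matches the "lost data or memory leak, not critical failure" design described above. A real multi-process version would need volatile/CAS semantics on the version word rather than plain buffer accesses.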

As I was going through the article, it became quite obvious that this solution could be applied to the low-latency, sub-microsecond problem (everyone has their own definition of low latency, hence I wanted to be specific) where GC is a killer.

For example, HFT order-management systems need to hold all the orders, but do not want to clutter the heap and at the same time need to provide quick access. There is a lot of emphasis on quick access. It surprised me that even with a small map size (10M) and low GC activity, the throughput is impressive.