Stuff Gil Says<br>Captured blurbs, musings, and rants. Often recycled.
<br><br>
Pause Less, Play More<br><br>Zing hits the trifecta (2017-05-02)<h3>Three winners combine to make Java go fast, start fast, and stay fast.</h3><br /><a href="https://3.bp.blogspot.com/-86wvvCXY7vc/WQfm2oyvrQI/AAAAAAAAAQg/J4KDcgFum_Qu4hKvgp6sxrSI5mmDq_KCACLcB/s1600/falcon-08.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="142" src="https://3.bp.blogspot.com/-86wvvCXY7vc/WQfm2oyvrQI/AAAAAAAAAQg/J4KDcgFum_Qu4hKvgp6sxrSI5mmDq_KCACLcB/s200/falcon-08.jpg" width="200" /></a>By now you should be able to find lots of material online about Azul's new Falcon compiler technology, which was just released as the default JIT optimizer in Zing, our already-very-cool JVM. Falcons and JIT compilers are obviously all about speed. But with Falcon, Zing doesn't just bring more speed to Java. It brings that speed sooner. It brings that speed all the time.<br /><br /><br />Falcon produces faster code. That's the point. And by bringing an LLVM backend optimizer to the JVM, Falcon lets Zing leverage the optimization work of literally hundreds of other developers who have been (and will continue to be) busy adding more and more optimizations and making sure the valuable features of each new processor generation actually get used by optimized code. A great example of this benefit in play is vectorization. 
LLVM's (and hence Falcon's) vectorization engine will now match normal Java loops to modern vector instructions, making code like this:<br /><br /><pre style="font-family: SFMono-Regular, Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 20px;"><code>int sumIfEven(int[] a) {
    int sum = 0;
    for (int i = 0; i &lt; a.length; i++) {
        if ((a[i] &amp; 0x1) == 0) {
            sum += a[i];
        }
    }
    return sum;
}</code></pre><br />run faster (as in up to 8x faster) on modern servers than the current HotSpot JVM will (see details of the <a href="https://github.com/giltene/GilExamples/tree/master/VectorizationExample-benchmarks">jmh microbenchmark here</a>, and try for yourself with the <a href="http://docs.azul.com/zing/zing-quick-start.htm">trial version of Zing</a>). This loop has a predicated operation (add only when the number is even), which makes it hard to match with the vector instructions (SSE, AVX, etc.) that have been around for quite a while. But when the same, unmodified classes are executed on a newer server that has AVX2 instructions (which include some cool new vector masking capabilities), code like this will get fully vectorized, with the speed benefits of the more sophisticated instructions exposed. The cool part is not just the fact that such code gets to be vectorized, or that it is fast. It's that Falcon gets to do this sort of thing without Azul engineers putting in tons of engineering effort to optimize for and keep up with new processors. Others (e.g. Intel) have spent the last few years contributing optimizations to LLVM, and we (and your Java code) now get to benefit from that work.<br /><br />Could other JIT compilers do this? Of course they could. Eventually. With enough work. But they haven't yet. 
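For a rough, self-contained feel of what is being measured (the linked jmh benchmark is the right way to get trustworthy numbers; this sketch, with its made-up class name, only illustrates the shape of the experiment):

```java
public class SumIfEvenBench {
    // The loop from the example above: a predicated sum that a
    // vectorizing JIT can turn into masked vector instructions.
    static int sumIfEven(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            if ((a[i] & 0x1) == 0) {
                sum += a[i];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[1_000_000];
        for (int i = 0; i < a.length; i++) {
            a[i] = i;
        }
        // Warm up so the JIT gets past interpreted and Tier 1 code and
        // applies its full optimizations (including any vectorization).
        long sink = 0;
        for (int i = 0; i < 1_000; i++) {
            sink += sumIfEven(a);
        }
        long start = System.nanoTime();
        int sum = sumIfEven(a);
        long nanos = System.nanoTime() - start;
        System.out.println("sum=" + sum + " in " + nanos + " ns (sink=" + sink + ")");
    }
}
```

Running the same class on a HotSpot JVM and on Zing on an AVX2-capable server is the comparison the jmh benchmark makes properly, with pitfalls like dead-code elimination and on-stack replacement accounted for.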
Falcon is ahead in optimization adoption because it gets to leverage an ongoing stream of new optimization contributions made by others. And we expect it to stay ahead that way. With Falcon and LLVM, Zing gets to be fast sooner, and on an ongoing basis.<br /><br /><a href="https://3.bp.blogspot.com/-l3oXRUuULJ4/WQjJiV5twAI/AAAAAAAAASk/gw3UMRjD_y82oU4Ikgn3bDSIhfhoGILOgCLcB/s1600/LLVM-Logo-Derivative-3.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="190" src="https://3.bp.blogspot.com/-l3oXRUuULJ4/WQjJiV5twAI/AAAAAAAAASk/gw3UMRjD_y82oU4Ikgn3bDSIhfhoGILOgCLcB/s200/LLVM-Logo-Derivative-3.png" width="200" /></a>Of course Falcon is not just about leveraging other people's work and contributions. We like that, but we had to sink in a bunch of our own work and contributions to make it all possible. The Falcon project at Azul has made significant improvements to LLVM's ability to optimize code in managed runtime environments that include things like JITs, speculative optimization, deoptimization, and Garbage Collection. When we started, LLVM was able to deal with these concepts to various degrees, but having any of them around in actual code ended up disabling or defeating most optimizations. Over the past three years, Azul's LLVM team has improved that situation dramatically, and successfully and repeatedly landed those improvements upstream such that they can benefit others in the LLVM community (in other runtimes, not just in Java).<br /><br />With Falcon, we also had to build a host of runtime and Java-specific optimizations that are typical of optimizing JIT compilers, but not typical in static compilation environments: implicit null checks, speculative devirtualization, and guarded inlining are just a few examples. 
Happily, we found that the tooling available with LLVM and its maturity as an optimization-creation platform have made our new optimization development velocity dwarf the speed at which we were able to create new optimizations in the past.<br /><br />Great. But how does all this "now we have faster code" stuff make a trifecta? Falcon completes a picture that we've been working towards at Azul for a while now. It has to do with speed, with how speed behaves, and the ways by which speed can be made to behave better.<br /><br /><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-6wcutjxB1sk/WQgEk6dmz-I/AAAAAAAAASI/HPS1C-9LeqQCO-UZZGtoO--v6N4YRWFwgCLcB/s1600/wandering_albatross3_weimer3hr.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="133" src="https://4.bp.blogspot.com/-6wcutjxB1sk/WQgEk6dmz-I/AAAAAAAAASI/HPS1C-9LeqQCO-UZZGtoO--v6N4YRWFwgCLcB/s200/wandering_albatross3_weimer3hr.jpg" width="200" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Albatross takeoff run</td></tr></tbody></table>Since the late 1990s when JIT compilers were added to the JVM, Java has had a reputation for being "eventually pretty fast". But that speed has been flaky compared to traditional static environments. Like an albatross, we have come to expect the JVM to take time (and many temporary slowdowns) to get up to speed and airborne. And due to the common pauses associated with typical runtime behaviors (like Stop-The-World GC pauses), Java's eventual speed couldn't even be called "eventually consistent". It has been predictably unpredictable. 
Consistently inconsistent.<br /><br /><br />For a quick overview of the various aspects of "speed" that JVM designs have been challenged with, let's start by looking at what JIT compilers typically do. Most JIT environments will typically load code and start executing it in a slow, interpreted form. As "hotter" parts of the code are identified, they are compiled using a relatively cheap Tier 1 compiler that focuses on producing "better than interpreted" code, which is used to record the detailed profiling needed for eventual optimization. Finally, after a piece of code has labored through enough slow interpreter and Tier 1 (profiling) execution, higher-tier optimizations are applied. This typically happens only after 10,000+&nbsp;slower executions have passed.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-wR5TdSpKjdM/WQf1upLkKeI/AAAAAAAAARY/N4G0td-v6YonS4gKU6jy4bSuKrUYn3THgCLcB/s1600/CodeDist.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="168" src="https://3.bp.blogspot.com/-wR5TdSpKjdM/WQf1upLkKeI/AAAAAAAAARY/N4G0td-v6YonS4gKU6jy4bSuKrUYn3THgCLcB/s320/CodeDist.png" width="320" /></a></div>The chart to the right depicts this evolution of code execution over time, showing the relative portions of interpreted, Tier 1 (profiling), and optimized code that are dynamically executed. Since many valuable JIT optimizations rely on speculative assumptions, reversions to interpreted code can (and will) happen as the code learns which speculations actually "stick". 
Those spikes are referred to as "de-optimizations".<br /><br /><br /><br /><a href="https://1.bp.blogspot.com/-v03KAJI1qwQ/WQf1veSrEnI/AAAAAAAAARg/uWsLkjge8YcRUlaL1zJQLCPiPs2pUM8bQCLcB/s1600/RespTime.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="167" src="https://1.bp.blogspot.com/-v03KAJI1qwQ/WQf1veSrEnI/AAAAAAAAARg/uWsLkjge8YcRUlaL1zJQLCPiPs2pUM8bQCLcB/s320/RespTime.png" width="320" /></a>Interpreted and Tier 1 compiled code tend to be significantly slower than optimized code. The amount of time it takes an operation to complete (e.g. the response time to a service request) will obviously be dramatically affected by this mix. We can see how response time evolves over time, as the portion of code that is actually optimized grows and eventually stabilizes.<br /><br />(In this same depiction, we can see the impact of Stop-The-World GC pauses as well).<br /><br />We can translate this operation completion time behavior to a depiction of speed over time, showing the speed (as opposed to time) contribution that different optimization levels have towards overall speed:<br /><br /><a href="https://2.bp.blogspot.com/-pacW1gljqDM/WQf1vKYF0ZI/AAAAAAAAARc/XcG32Tx3fn4hNe56UF4RtCctUMA-iFQKgCLcB/s1600/SpeedDist.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="339" src="https://2.bp.blogspot.com/-pacW1gljqDM/WQf1vKYF0ZI/AAAAAAAAARc/XcG32Tx3fn4hNe56UF4RtCctUMA-iFQKgCLcB/s640/SpeedDist.png" width="640" /></a>This is how Java "speed" has behaved until now. You can see the albatross takeoff run as it takes time to work up speed. You can see the eventual speed it gains as it stabilizes on running optimized code. 
And you can also see the all-too-frequent dips to (literally) zero speed that occur when the entire runtime stalls and performs a Stop-The-World Garbage Collection pause.<br /><br />Zing, with Falcon and friends, now improves speed in three different ways:<br /><br /><ul><li><b><i>Falcon</i></b> literally raises the bar by applying new optimizations and better leveraging the latest hardware features available in servers, improving the overall speed of optimized code.</li></ul><br /><ul><li><b><i>ReadyNow</i></b> and its profile-playback capabilities remove the albatross takeoff run, replacing it with an immediate rise to full optimized speed at the very beginning of execution.&nbsp;</li></ul><br /><ul><li><b><i>The C4 Garbage Collector</i></b> eliminates the persistent down-spikes in speed associated with Stop-The-World GC pauses.&nbsp;</li></ul><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-M-iqLUS2IEY/WQf1vp1NmrI/AAAAAAAAARk/j9h5nZWu8TARwcJzawF4CaZa9-zmMcZYwCLcB/s1600/SpeedDistZing.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: left;"><img border="0" height="338" src="https://4.bp.blogspot.com/-M-iqLUS2IEY/WQf1vp1NmrI/AAAAAAAAARk/j9h5nZWu8TARwcJzawF4CaZa9-zmMcZYwCLcB/s640/SpeedDistZing.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><br />These three key features play off each other to improve the overall speed picture, and finally provide Java on servers with a speed profile and speed-behavior-over-time similar to the speed people have always expected from C/C++ applications. With Zing, Java is now not only faster, it is consistently faster. 
From the start.<br /><br /><br /><a href="https://2.bp.blogspot.com/-bWJLWUMDl3E/WQfmTxT3O6I/AAAAAAAAAQc/JE_WAW6IIvAvCgnvjI59HE1p8EKXAG3cwCLcB/s1600/427712-peregrine-falcon-1280x720.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="180" src="https://2.bp.blogspot.com/-bWJLWUMDl3E/WQfmTxT3O6I/AAAAAAAAAQc/JE_WAW6IIvAvCgnvjI59HE1p8EKXAG3cwCLcB/s320/427712-peregrine-falcon-1280x720.jpg" width="320" /></a><br /><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-IRBF1HwVFKM/WQfx3G7HKeI/AAAAAAAAARA/wsiZJdWlJ4kunN2_X3MGhuSeDHam5io5QCLcB/s1600/sammiwa.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="216" src="https://3.bp.blogspot.com/-IRBF1HwVFKM/WQfx3G7HKeI/AAAAAAAAARA/wsiZJdWlJ4kunN2_X3MGhuSeDHam5io5QCLcB/s320/sammiwa.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Albatrosses arguing about the future</td></tr></tbody></table>Falcon is real. Fast-from-the-start Java is here with ReadyNow. And Stop-The-World GC stalls are a thing of the past with C4. We are kicking ass and taking names. Sign up, or just take Zing for a spin <a href="http://docs.azul.com/zing/zing-quick-start.htm">with a free trial</a>.<br /><br /><br /><br /><br /><br /><br /><br />Or, you could just keep taking those albatross takeoff runs every time, ignore the regular dips to zero speed and altitude, and just keep listening to people who explain how that's all ok, and how some day someone will eventually find (and ship?) 
some other holy graal...<br /><br /><br />How Java Got The Hiccups (2015-11-10)<br /><br />[This is a recycled post from an older blog location. Originally posted in late 2011 when I put up the first version of jHiccup. I was recently reminded of its existence, and figured I'd revive it here...]<br /><br /><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">When we decided to put up an open source tool that helps measure runtime platform [un]responsiveness, we found that the hardest thing to explain about such a tool is just why it is actually needed.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">Most people think they already know how the underlying platform behaves, and expect the results to be uninteresting. The basic assumption we seem to have about the platforms we run on is that the platform itself is fairly consistent in its key responsiveness behavior.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">Servers have lots of CPU, and unless you completely saturate and thrash a server, people expect to have their software up and running on a CPU within milliseconds of “wanting to”. Sure, we know that the CPU we run on is being time-sliced, and some other load may be using it some of the time, but at 20% CPU utilization, how big of an issue can that really be? We don’t expect the rare few milliseconds of delay every once in a while to really show up in application responsiveness stats. 
Some people also know that other factors (like competing loads, hardware power saving, and things like internal runtime bookkeeping work) can add to the noise levels, but they think of it as just that – “noise”.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-9ach8Tjdn-0/VkI15DK_KCI/AAAAAAAAAM0/B3N2a3wZ2qo/s1600/Screen%2BShot%2B2015-11-10%2Bat%2B10.22.03%2BAM.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-9ach8Tjdn-0/VkI15DK_KCI/AAAAAAAAAM0/B3N2a3wZ2qo/s400/Screen%2BShot%2B2015-11-10%2Bat%2B10.22.03%2BAM.png" width="336" /></a></div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">But what happens when that “noise” grows to levels that are larger than the processing you actually want to do? When the platform’s waits, stalls, pauses, execution interruptions, or whatever other name they might go by, come in chunks big enough to dominate the application response time? What happens most of the time is that we ignore the issue, chalk it up as an “outlier”, and continue to think of the server we run on as a smooth, continually operating machine.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">A very effective way to ignore the issue seems to be to collect and present results in terms of means and standard deviations. The reality of platform noise is anything but “normal” in distribution – it tends to be multi-modal – mostly good, and then very, very bad, with very little in between. 
Like drowning in a lake with an average depth of 2 inches, a 30 second stall in a system with an average response time of 0.2 seconds and a standard deviation of 0.3 seconds can make for a really bad day.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">So what can we do to make people more aware of the need to actually look at their runtime platform behavior, and see if it really is as smooth as they thought it was?</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">We can name the problem.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">We chose to use a name that would get some attention, even if it sounds a bit silly at first. A name that would make you think of anything but a normal, smooth distribution. We decided to name that thing where you see your system stalling every once in a while a “Hiccup”.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">I then built a tool to measure and chart your runtime hiccups, and we called it…&nbsp;<a href="https://www.azul.com/jhiccup" style="box-sizing: border-box; color: #216dae; text-decoration: none;">jHiccup</a>.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">Your system probably gets the hiccups all the time. Especially when it’s running your application under load. How big each hiccup is, and how often they happen varies. A lot. 
But almost all systems that do anything other than sitting idle will exhibit some level of hiccups, and looking at the hiccups of even an idle application turns out to be educational.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">jHiccup is so simple that most people’s reaction to seeing what it actually does is “duh!”. The reaction to the plotted results is another thing though. Those usually evoke more of a “hmmm…. that’s interesting.”</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">jHiccup uses a trivial mechanism to measure runtime hiccups while your application is actually running: It measures how long it takes a separate application thread to do absolutely nothing. Doing nothing should be pretty quick, usually, and if doing nothing took an otherwise idle application thread a long time, then it experienced a runtime hiccup. What caused the observed hiccup doesn’t really matter. 
It’s a pretty safe bet that other application threads – the ones that actually do something – would experience the same hiccup levels, with the hiccup time adding to their overall time to perform whatever work it is they were trying to complete.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-bq0ZxLqJZH0/VkI1SiN_ueI/AAAAAAAAAMs/THBPqsgCrPQ/s1600/Screen%2BShot%2B2015-11-10%2Bat%2B10.18.14%2BAM.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-bq0ZxLqJZH0/VkI1SiN_ueI/AAAAAAAAAMs/THBPqsgCrPQ/s400/Screen%2BShot%2B2015-11-10%2Bat%2B10.18.14%2BAM.png" width="337" /></a></div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">Simple measurements with jHiccup, showing what happens to an idle application running on a dedicated, idle system, are unsurprisingly boring. However, looking at what jHiccup observes as “the time to do nothing” when an actual Java application load is running on the same runtime can teach you a lot about what runtime hiccups look like for your specific application.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">The most striking thing about “Hiccup Charts” (the way we plot jHiccup results) is that for Java runtimes carrying actual workloads, they tend to show regular patterns of pretty big hiccups, into the 100s of msec, and into the seconds sometimes. Those patterns are clearly not “noise”, and as the Hiccup Chart percentile distributions show, they often have a significant effect on your application’s behavior in the higher percentiles. Most importantly, they are not caused by your application’s code. They are caused by the runtime platform (the JVM and everything under it, including the OS, the hardware, etc.) 
stopping to do something, and stalling all work while that thing is done.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">What the various causes of the hiccups are, and what we can do about them is something for another post. For now it’s enough that we know they are there, that we have a name to call them by, and that we now have ways to make pretty(?) pictures that show them.</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;">So that’s how Java got the Hiccups. Now, if only someone could figure out a way to cure them…..</div><div style="box-sizing: border-box; color: #777777; font-family: Roboto; font-size: 16px; line-height: 20px; margin-bottom: 22px;"><span style="color: #777777;">jHiccup can be found on github at </span><a href="https://github.com/giltene/jHiccup"><span style="color: #0b5394;">https://github.com/giltene/jHiccup</span></a><span style="color: #777777;">. For more details on jHiccup, how it works, how to use it, and for some “pretty” pictures see&nbsp;</span><a href="https://www.azul.com/jhiccup" style="box-sizing: border-box; color: #216dae; text-decoration: none;">http://www.azul.com/jhiccup</a></div><br /><br />WriterReaderPhaser: A story about a new (?) synchronization primitive (2014-11-16)<br /><br />I recently added a synchronization primitive mechanism in my <a href="http://hdrhistogram.org/">HdrHistogram</a> and <a href="http://latencyutils.org/">LatencyUtils</a> code, which I think has generic use for some very common operations. Specifically, when wait-free writers are updating stuff that background analyzers or loggers need to look at. I've isolated it in what I now call a WriterReaderPhaser. 
The name is very intentional, and we'll get to that in a moment. And to the code (all 66 actual lines of it, 200 with elaborate comments). But first, I'll stray into some "how did this come about" storytelling.<br /><br />WriterReaderPhaser is a new (I think) synchronization primitive: It provides a straightforward interface and API to coordinate wait-free writing to a shared data structure with blocking reading operations of the same data. Readers view a stable (i.e. non-changing, coherent) data set while writers continue to modify data without waiting. And readers are guaranteed forward progress, and will only block for other readers and for writers that may have been "in flight" at the time the reader establishes a stable view of the data.<br /><br /><h3>How did this come about?</h3><br /><a href="https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcS7DD5KF0sEk4OQX63owTs2GmL6LB7FPjnzobsMamncNeRvW37n" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcS7DD5KF0sEk4OQX63owTs2GmL6LB7FPjnzobsMamncNeRvW37n" style="height: 166px; margin-top: 0px; width: 261px;" /></a>This sometimes happens when I build stuff: I find myself in need of some behavior that I thought would be common, but for which I can't find an existing implementation, or a name, or a description. This can obviously be ascribed to my weak Google-fu skills, but after a while I give up and just build the thing, because "it's not that complicated". So I build a one-off implementation into whatever I am doing at the time, and move on with life. At some later point, I find myself needing the same thing again. 
And since I had already solved that problem once, I go back to my old code and (let's be honest) copy-and-paste my first implementation into whatever new thing I'm working on. Sometimes the little guy on my right shoulder wins over the other guy, and I come back and refactor the behavior into a separate class and build an API for more generic use, at which point the "does this deserve its own library? Its own repo?" thinking starts, coupled with much Yak Shaving [1]. Sometimes the guy on the left shoulder wins, and I actually get on with the real work I was supposed to be doing. I'll leave it to you to decide which little guy is red and which is white.<br /><br />Sometimes (usually much later) I realize that what I built was actually new. That even though I thought it was a common use case, and built my version simply out of impatience or frustration at not finding something I could use as-is, I may actually be the first person to solve it. Most of those times, this realization is quickly followed by someone showing me a paper or a piece of code that is 30 years old that makes me go "oh... right.". But sometimes that doesn't happen. Sometimes it really is new.<br /><br /><a href="http://hdrhistogram.org/">HdrHistogram</a> itself started this way. It was nothing more than about 100 lines of code in a one-off "JitterMeter" tool I was playing with, which needed to record latencies very quickly and report accurate percentiles with many nines in them. Then I found myself building all sorts of variations on jitter meters and sharing them (<a href="https://github.com/giltene/jHiccup">jHiccup</a> is an evolved version with a better name). And then I found that people (myself included) were taking the code and ripping out just the histogram trick inside, because they needed a histogram that was actually useful for talking about latencies. 
Recognizing that a fast histogram with good precision and accurate, fine-grained quantile reporting capability is actually a very common use case, I decided to build a Yak shaving co-op on github and called it HdrHistogram. The first Yak hair I produced was Java-colored but others have recently added other colors and breeds.<br /><br /><a href="http://hdrhistogram.org/">HdrHistogram</a> is a [presumably] successful example of this process going the distance. More often than not, it doesn't. That's probably what my stale repos on github with 2 stars and no forks represent.<br /><br />WriterReaderPhaser is currently about halfway through this precarious process, but at this point I'm pretty sure it's not going to die. It's a class on its own, but not yet its own library. Certainly not its own repo yet. It will need to find a home, but org.giltene.stuff is probably not where it needs to end up. Since it's so short, this blog entry is as good a home as any for now.<br /><br />Most importantly, it looks like it may actually be a new and generically useful synchronization primitive. More accurately: nobody has shown me that "oh... right." link or paper yet, and I'm done holding my breath for now.<br /><br /><h3>So what is WriterReaderPhaser about?&nbsp;</h3><br />Have you ever had a need for logging or analyzing data that is actively being updated? Have you ever wanted to do that without stalling the writers (recorders) in any way? 
If so, then WriterReaderPhaser is for you.
9qUKr45U+UvFeZh02haVqNPGIVIxSeJcE014+Js6fpPcznNzdRzxu+f/Nxtxx8aYd9VsK/pupKnxfPHmuXT+omznNSuY0r7jzlcPzY33xjH4I2rGVepLvJPgh0j4oyuiz3Fps7eJGpMpabKU//AEj6vmYasyX0e04qsIc1T+aXg3/crhj2rZXkdZ2dseCEVjfm/V7nZabS2InS6PI6SzpbHoT6cdvtu0Ymcsgi8IAAAAAAAAAABRnD9oa0qdxLvE8S+y/FY5HcEdrGmU68cTXLk1s0Z7MPPHi2OXK4qncRlyaL2zW1bQK1JtwfGue20iEoalVjtPxez3OHLVcXRMpXQuRZI0IapB81h+6NqNVPk8mS6rkYaki+bNeciBZORr1GXzkzXqMLLJswORdNmGTLTKynEffaNb1G3KCy+q+Vv1wa2pW9bu1C3xFJKPPDS5bEnNmCczabe2d/EXGOW1nSo0beLXzT4lxS8f6EzTrcUI/wr8jYq8Mk1JKSfRrKNKaUVhbJbJGuzZ5YziuOHKyWq+Zt8oLifr0/E7HsrZtQUpLefzfTocxY23E6cOtR8UvRcj0rSbdLC8Fgtox/We7LvpN6bR2J23jsaFjSxglKUTpc7IgAAAAAAAAAAAAAtlyLijQEdeW+U8nNajo8J54op+eMM7OpDJqVbZMiyUleY3nZ6Ucuk/o/5kVU7yns1KHTfl7nqtewXmiMu9MUs5WfZmWeiVpjss+3n+n6vOba4JZXNYbe3U34VeJZx6+K9V0Jeei93Pjovu5Ll1i/VEPrUblyU+BZSw3DlJ+LRyZ6cpW82SscpmvUkasdQlJ4qR4X4pY90ZI1c8nkyuPF1ZmGoy+ozBNkLLJs1qrMspGCoy0QwTbMFKHHNR+r8kuZkqyMlpFqOVzqPgj446v8i8nS3kdH2YtuOcqrXVRj5I7/AE2jyIDQLJQhGK6Je/U7HT6GyPRxnMXDlfaRtaeyNuJiorCMqJQqAAAAAAAAAAAAAAAChRxLgBinTRr1LZG6UaAh6tkvA0K9ivA6VwyYZ0EwOGv9Apz3cVnxWzObvOzM4tOnNvG+MY/E9Uq2iNGvYlMteOS8zseV1oSg8Ti4+q2MTpxfLb8j0i501PZrJBXvZyD3jmD8uRz5fG/jSbf64u4pNdNvE0akjqbnR60OS4l5cyLuLOMs8S4X6YZjdeU+2szlQDTbUVzk8I6HRLRTrLb5aWEvXq/fPsaELLu5ObfFwp8Pq9vyOt7L2HBTWecvmf1NdOPapuy9Om0yh5HSWdPCIzTqGxO0Y4wdjmZYIuKIqAAAAAAAAAAAAAAAAAAAAAACmCoAtcTFOl5GcAaU7U1K1lklmi1wQHPVtPRG3WkxlnMUzr50cmCdqRyfp3jz6t2Zg31SznGduZNWVljkidnZ+RfStRJIW2q2dLBJUzFSp4M8USKgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACmCuAAAAAAAAAAAAAAAAAAP/9k=" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;">&nbsp;</a>I'm not talking about logging messages or text lines here. I'm talking about data. Data larger than one word of memory. Data that holds actual interesting state. Data that keeps being updated, but needs to be viewed in a stable and coherent way for analysis or logging. Data like frame buffers. Data like histograms. Data like usage counts. 
Data that changes.<br /><br /><h3>Existing solutions</h3><br /><a href="http://1.bp.blogspot.com/-2ESV2uAomqQ/VGkVzxBTK2I/AAAAAAAAAKo/ecMKCL39ZPg/s1600/images-2.jpeg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-2ESV2uAomqQ/VGkVzxBTK2I/AAAAAAAAAKo/ecMKCL39ZPg/s1600/images-2.jpeg" /></a>Sure, you can use channels, queues or magic rings to move data updates and safely process them in background copies of the data. You can use persistent data structures and all sorts of immutable trickery. But those are&nbsp;<i>expensive</i>. As in orders of magnitude more expensive than updating in-cache state in place. When this data thing you want to look at could be updated millions of times per second, you invariably end up with some sort of double-buffered (or multi buffered) scheme: Updates are done to an active copy, and analysis is done "in the background" on stable, inactive copies.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-njQflE6yZO8/VGkXNrkJA9I/AAAAAAAAALE/bIa72scVebU/s1600/double.buffering.gif" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="141" src="http://2.bp.blogspot.com/-njQflE6yZO8/VGkXNrkJA9I/AAAAAAAAALE/bIa72scVebU/s1600/double.buffering.gif" width="200" /></a></div><br />Double buffered schemes usually involve some sort of "phase flipping". At some point the notion of which copy is active changes. Writers update the "new" active copy, and readers access a stable and coherent copy that used to be active, but now isn't. 
It's this phase flipping that usually gets in the way of keeping writers from blocking.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-lVuSvY5gHrs/VGkWFEE9CiI/AAAAAAAAAKw/nhbYyxYft_I/s1600/MutexLock.jpeg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-lVuSvY5gHrs/VGkWFEE9CiI/AAAAAAAAAKw/nhbYyxYft_I/s1600/MutexLock.jpeg" style="cursor: move;" /></a></div>There are all sorts of variations on how to do this flipping. We can obviously use some form of mutual exclusion lock to protect the writes and the flip. But then writers will block each other, and be blocked by the flipping operation. We can use ReaderWriter locks backwards: the state protected by the ReaderWriter lock is the notion of which data set is the "active" one (the one writers write to). In this scheme writers take the read lock for the duration of their active state modification operations, while readers take the write lock to flip the roles of active and inactive data sets. This can be [much] better than complete mutual exclusion when multiple writers are involved, since writers no longer block other writers, but readers still block writers during a flip. Also, when you start asking yourself "what does 'read' mean again in this context?", that is a good sign you have a problem. Most people write buggier code when standing on their head and juggling. I'm sure there are a whole bunch of other schemes people use, but in my looking around thus far, I didn't find any examples that were non-blocking for the writers.<br /><br /><h3>Why did I care?</h3><br />The thing I actually wanted to double-buffer was a histogram. And not just any histogram. 
A fixed-footprint histogram that supports lossless recording of experienced latencies, such that later computation of precise percentiles will be possible, all the way to the as-many-9s-as-there-are-in-the-data level. The very purpose of such a histogram is often to capture and analyze latency outlier behavior. The recording operation cannot be allowed to be a cause of the very outliers it is trying to measure. For the latency recording mechanism to have any susceptibility to blocking or locking would be unacceptable.<br /><br /><a href="http://2.bp.blogspot.com/-ZYWowJadenY/VGkYH_s7q_I/AAAAAAAAALU/N7w8PyHilvc/s1600/wrk2_CleanVsCO.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="170" src="http://2.bp.blogspot.com/-ZYWowJadenY/VGkYH_s7q_I/AAAAAAAAALU/N7w8PyHilvc/s1600/wrk2_CleanVsCO.png" width="400" /></a>These latency histograms are basically non-blocking data structures with tens (or hundreds) of kilobytes of state that is rapidly being mutated by critical path "writer" code. But I wanted to log their contents over intervals that are short enough to be interesting for monitoring purposes, and for later time based analysis. In order to log the latency information being captured, I needed a logging "reader" to somehow gain access to a stable, coherent "snapshot" of the latency data that was recorded during some prior interval. To do this, I needed a way for the reader to flip the roles of the active and inactive histograms, but I needed to do that without ever blocking the writers. This is a classic case of an asymmetric synchronization need. I'm fine blocking, delaying and pausing the reader. I just can't afford for the writers to ever block or otherwise delay the execution of the thread they are recording in.<br /><br />In comes WriterReaderPhaser. 
And the best starting point for understanding what it does is to dissect the name:<br /><br />The&nbsp;<b><i>Phaser</i></b>&nbsp;part is there because its main function is to coordinate phase shifts between the writers and the readers. Besides, I couldn't bring myself to call this thing a lock. It's not a lock. Not in its most important function, which is phase shift coordination. Writers remain lock-free in all cases (they actually remain wait-free on architectures that support atomic increment operations). They never block or lock. Calling WriterReaderPhaser a lock would be like calling an AtomicLong an "add lock" because someone could also construct a spin-lock around it....<br /><br />The&nbsp;<i><b>WriterReader</b></i>&nbsp;part is a reversal of the commonly used ReaderWriter (or ReadWrite) term. ReaderWriter locks&nbsp;are asymmetric, but in the reverse direction of what I needed: they enable [relatively] smooth reader operation while causing the writers to block. The <i>really</i> cool wait-free&nbsp;<a href="http://concurrencyfreaks.blogspot.com/2013/12/left-right-concurrency-control.html">Left-Right</a>&nbsp;mechanism, which Martin Thompson pointed me to, achieves perfectly smooth reader operation, but that's still not what I needed. WriterReaderPhaser addresses the exactly reversed need: writers remain non-blocking and perfectly smooth, while only readers suffer.<br /><br />The desired behaviors I was looking for in a WriterReaderPhaser were:<br /><br />1. Writers remaining lock-free at all times. Ideally they will remain wait-free at all times.<br /><br />2. 
A Reader can coordinate a phase flip and access to the inactive data such that:<br /><br />2.1 Other readers will not flip a phase while this reader is still interested in the inactive data.<br /><div><br /></div>2.2 No writer modification will be made to the inactive data after the phase flip operation is complete, and for as long as the reader is interested in the inactive data.<br /><br />2.3 Readers are guaranteed forward progress (even in the presence of heavy and continuous writer activity, and even when there is no writer activity at all).<br /><br /><h3>Defining WriterReaderPhaser:</h3><br />With these high-level desired behaviors stated, let's clearly define the qualities and guarantees that a well-implemented WriterReaderPhaser primitive would provide to users, and the relevant rules that users must adhere to in order to maintain those qualities and guarantees:<br /><br />A WriterReaderPhaser instance provides the following 5 operations:<br /><ul><span style="font-family: 'Courier New', Courier, monospace;"><li>writerCriticalSectionEnter</li><li>writerCriticalSectionExit</li><li>readerLock</li><li>readerUnlock</li><li><span style="font-family: Courier New, Courier, monospace;">flipPhase</span></li></span></ul><div>When a WriterReaderPhaser instance is used to protect an actively updated data structure [or set of data structures] involving [potentially multiple] writers and [potentially multiple] readers, the assumptions on how readers and writers act are:<br /><ul><li>There are two sets of data structures (an "active" set and an "inactive" set)</li><li>Writing is done to the perceived active version (as perceived by the writer), and only within critical sections delineated by <span style="font-family: Courier New, Courier, monospace;">writerCriticalSectionEnter</span>&nbsp;and <span style="font-family: Courier New, Courier, monospace;">writerCriticalSectionExit</span>&nbsp;operations.</li><li>Only readers switch the perceived roles of the active and 
inactive data structures. They do so only while holding the <span style="font-family: Courier New, Courier, monospace;">readerLock</span>, and the switch is only done before executing a&nbsp;<span style="font-family: Courier New, Courier, monospace;">flipPhase</span>.</li><li>Readers do not hold onto&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">readerLock</span>&nbsp;indefinitely.&nbsp;</li><li>Only readers perform&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">readerLock</span>&nbsp;and&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">readerUnlock</span>.</li><li>Writers do not remain in their critical sections indefinitely.&nbsp;</li><li>Only writers perform&nbsp;<span style="font-family: Courier New, Courier, monospace;">writerCriticalSectionEnter</span>&nbsp;and&nbsp;<span style="font-family: Courier New, Courier, monospace;">writerCriticalSectionExit</span>.</li><li>Only readers perform&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">flipPhase</span>&nbsp;operations, and only while holding the&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">readerLock</span>.</li></ul><br />When the above assumptions are met, WriterReaderPhaser <u><i><b>guarantees</b></i></u> that the inactive data structures are not being modified by any writers while being read under <span style="font-family: Courier New, Courier, monospace;">readerLock</span>&nbsp;protection <u><i>after</i></u> a <span style="font-family: Courier New, Courier, monospace;">flipPhase</span>&nbsp;operation.<br /><br />The following <i>progress guarantees</i> are provided to writers and readers that adhere to the above-stated assumptions: <br /><ul><li>Writer operations (<span style="font-family: 'Courier New', Courier, monospace;">writerCriticalSectionEnter&nbsp;</span>and&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">writerCriticalSectionExit</span>) are wait-free (on architectures 
that support wait-free atomic increment operations).</li><li><span style="font-family: 'Courier New', Courier, monospace;">flipPhase</span>&nbsp;operations are guaranteed to make forward progress, and will only be blocked by writers whose critical sections were entered prior to the start of the reader's&nbsp;<span style="font-family: 'Courier New', Courier, monospace;">flipPhase</span>&nbsp;operation, and have not yet exited their critical sections.</li><li><span style="font-family: 'Courier New', Courier, monospace;">readerLock</span>&nbsp;only blocks for other readers that are holding the&nbsp;<span style="font-family: Courier New, Courier, monospace;">readerLock</span>.</li></ul><h3></h3><h3>Example use</h3><div><div><br /></div><div>Imagine a simple use case where a large set of rapidly updated counters is being modified by writers, and a reader needs to gain access to stable interval samples of those counters for reporting and other analysis purposes.&nbsp;</div><div><br /></div><div>The counters are represented in a volatile array of values (it is the array reference that is volatile, not the value cells within it):</div><div><br /></div><div><span style="font-family: 'Courier New', Courier, monospace;">volatile long counts[];</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">...</span></div><div><br /></div><div>A writer updates a specific count (n) in the set of counters:</div><div><br /></div><div><span style="font-family: 'Courier New', Courier, monospace;"><i>writerCriticalSectionEnter</i></span></div><div><span style="font-family: 'Courier New', Courier, monospace;">&nbsp; &nbsp;counts[n]++; //&nbsp;should&nbsp;use atomic increment if multi-writer</span></div><div><span style="font-family: 'Courier New', Courier, monospace;"><i>writerCriticalSectionExit</i></span></div><div><br /></div><div><div>A reader gains access to a stable set of counts collected during an interval, reports on it, and accumulates 
it:</div></div><div><br /></div><div><div><div><span style="font-family: Courier New, Courier, monospace;">long interval_counts[];</span></div></div></div><div><div><span style="font-family: Courier New, Courier, monospace;">long&nbsp;</span><span style="font-family: 'Courier New', Courier, monospace;">accumulated_counts</span><span style="font-family: Courier New, Courier, monospace;">[];</span></div></div><div><span style="font-family: Courier New, Courier, monospace;"><br /></span></div><div><span style="font-family: Courier New, Courier, monospace;">...</span></div><div><span style="font-family: 'Courier New', Courier, monospace;"><i>readerLock</i></span></div><div><span style="font-family: 'Courier New', Courier, monospace;">&nbsp; &nbsp;reset(</span><span style="font-family: 'Courier New', Courier, monospace;">interval_counts</span><span style="font-family: 'Courier New', Courier, monospace;">);</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">&nbsp; &nbsp;long tmp[] = counts;</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">&nbsp; &nbsp;counts = interval_counts;</span></div><div><span style="font-family: 'Courier New', Courier, monospace;">&nbsp; &nbsp;interval_counts = tmp;</span></div><div><span style="font-family: Courier New, Courier, monospace;"><i>flipPhase</i></span><br /><i style="font-family: 'Courier New', Courier, monospace;">&nbsp; &nbsp;// At this point, interval_counts content is stable&nbsp;</i><i style="font-family: 'Courier New', Courier, monospace;">&nbsp;</i></div><div><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp;report_interval_counts(</span><span style="font-family: 'Courier New', Courier, monospace;">interval_counts</span><span style="font-family: Courier New, Courier, monospace;">);</span></div><div><span style="font-family: Courier New, Courier, monospace;">&nbsp; &nbsp;accumulated_counts.add(</span><span style="font-family: 'Courier New', Courier, 
monospace;">interval_counts</span><span style="font-family: Courier New, Courier, monospace;">);</span></div><div><span style="font-family: Courier New, Courier, monospace;"><i>readerUnlock</i></span></div><div><br /></div><h3>A working implementation</h3></div><div><br /></div>Under the hood, my WriterReaderPhaser implementation achieves these qualities in a fairly straightforward way, by using a dual set of epoch counters (an "odd" set and an "even" set) to coordinate the phase flip operations, coupled with a read lock that is used purely to protect readers from each other in multi-reader situations: i.e. to prevent one reader from flipping a phase or changing the notion of active or inactive data while another reader is still operating on it. Many other implementation mechanisms are possible, but this one is certainly sufficient for the job at hand.<br /><br />Rather than describe the logic in text, it is easiest to list it as code at this point. Below is the entire WriterReaderPhaser class as implemented in my current HdrHistogram repository, spelled out in Java code (most of which is detailed comments). The mechanism can obviously be ported to any language and environment that supports atomic increment and atomic swap operations. It's the API and documentation (in this case, the details in the JavaDoc comments) that is more important. 
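For orientation before diving into the full class, here is a condensed sketch of the dual-epoch-counter idea. The class and field names here are my own shorthand, and this is a bare-bones illustration rather than the real HdrHistogram implementation (which adds back-off behavior in flipPhase and extensive JavaDoc). The sign of the start epoch encodes the current phase: even-phase values are non-negative, odd-phase values are negative.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Condensed sketch of a WriterReaderPhaser-style dual-epoch scheme.
// Writers only ever perform single atomic increments, so they are
// wait-free on architectures that support atomic increment.
class WriterReaderPhaserSketch {
    private final AtomicLong startEpoch = new AtomicLong(0);
    private final AtomicLong evenEndEpoch = new AtomicLong(0);
    private final AtomicLong oddEndEpoch = new AtomicLong(Long.MIN_VALUE);
    private final ReentrantLock lock = new ReentrantLock();

    public long writerCriticalSectionEnter() {
        // The returned value's sign records which phase the writer entered in.
        return startEpoch.getAndIncrement();
    }

    public void writerCriticalSectionExit(long criticalValueAtEnter) {
        // Bump the end epoch that matches the phase the writer entered in.
        (criticalValueAtEnter < 0 ? oddEndEpoch : evenEndEpoch).getAndIncrement();
    }

    public void readerLock() { lock.lock(); }

    public void readerUnlock() { lock.unlock(); }

    // Must be called while holding readerLock.
    public void flipPhase() {
        boolean nextPhaseIsEven = (startEpoch.get() < 0); // current phase is odd
        long initialStartValue = nextPhaseIsEven ? 0 : Long.MIN_VALUE;
        // Clear the end epoch of the phase we are about to start:
        (nextPhaseIsEven ? evenEndEpoch : oddEndEpoch).set(initialStartValue);
        // Flip the phase by atomically swapping in the new start epoch value:
        long startValueAtFlip = startEpoch.getAndSet(initialStartValue);
        // Spin until every writer that entered during the prior phase has exited:
        AtomicLong priorPhaseEndEpoch = nextPhaseIsEven ? oddEndEpoch : evenEndEpoch;
        while (priorPhaseEndEpoch.get() != startValueAtFlip) {
            Thread.yield();
        }
    }
}
```

Note how the writers never loop: enter and exit are each a single atomic increment, while flipPhase spins only until the writers that entered during the prior phase have exited their critical sections, which is exactly the progress guarantee stated above.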
A simple example of how this is used in practice can be found in HdrHistogram's various interval histogram recorders, like the original (and probably simplest example) in&nbsp;<a href="https://github.com/HdrHistogram/HdrHistogram/blob/HdrHistogram-2.0.3/src/main/java/org/HdrHistogram/IntervalHistogramRecorder.java">IntervalHistogramRecorder.java</a>, or its more recent replacements in&nbsp;<a href="https://github.com/HdrHistogram/HdrHistogram/blob/master/src/main/java/org/HdrHistogram/DoubleRecorder.java">DoubleRecorder.java</a>&nbsp;and <a href="https://github.com/HdrHistogram/HdrHistogram/blob/master/src/main/java/org/HdrHistogram/Recorder.java">Recorder.java</a>&nbsp;which add some unrelated and more complicated logic that deals with safely avoiding some copy costs on getIntervalHistogram() variants.<br /><br />And yes, it is now all in the public domain.<br /><br />Enjoy.<br /><br /><script src="https://gist.github.com/giltene/b3e5490c2d7edb232644.js"></script> <br />[1] For an apparent etymology of the term "Yak Shaving", read the example story attributed <a href="http://rationalwiki.org/wiki/Fun:Yak_shaving">here</a>.<br /><br /></div>Gil Tenehttp://www.blogger.com/profile/10732691137498021997noreply@blogger.com24tag:blogger.com,1999:blog-5222542250352397862.post-31856835379733736532014-10-31T10:19:00.000-07:002015-09-04T20:00:12.413-07:00What sort of allocation rates can servers handle?<div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-gmbga5_7W3E/VFO98X2DKmI/AAAAAAAAAJk/Aq51g07fwv4/s1600/PauseThenGoLikeHell.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img 
border="0" src="http://3.bp.blogspot.com/-gmbga5_7W3E/VFO98X2DKmI/AAAAAAAAAJk/Aq51g07fwv4/s1600/PauseThenGoLikeHell.png" /></a></div>First, a side note: This blog entry is a [nearly verbatim] copy of <a href="https://groups.google.com/d/msg/mechanical-sympathy/jdIhW0TaZQ4/UyXPDGQVVngJ">a posting I made on the Mechanical Sympathy Google Group</a>. I'm lazy. But I recycle. So I think of that as a net positive.<br /><br />The discussion in question actually started from a question about what a good choice of hardware for running a low-latency application might look like these days, but then evolved into other subjects (as many of the best discussions on the group do), one of which was allocation rates.<br /><br />Several smart and experienced people on the group chimed in and shared their hard-earned wisdom, a lot of which came down to recommendations like "keep your allocation rates low", and "reducing allocation rates is one of the best tools to improve application behavior/performance". Specific numbers were cited (e.g. "My current threshold ... is ~300-400MB/sec").<br /><br />This made that big "Java applications work hard to use less than 5% of today's toy server capabilities" chip I carry on my shoulder itch. I decided to scratch the itch by pointing out that one thing (and <i>one thing only</i>) is making people work hard to keep their apps within those silly limits: <i>It's all about GC Pauses.</i><br /><br />To support my claim, I went on a trivial math spree to show that even today's "toy" commodity servers can easily accommodate a rate of allocation 50x higher than the levels people try to contain their applications within, and that the only bad thing about a higher allocation rate is higher pause artifacts. 
In the poor systems that have those&nbsp;pause artifacts, of course....<br /><br />The rest of this post is the actual posting:<br /><br />...</div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">These "keep allocation rates down to 640KB/sec" (oh, right, you said 300MB/sec) guidelines are purely driven by GC pausing behavior. Nothing else.</div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">Kirk (and others) are absolutely right to look for such limits when pausing GCs are used. But the *only* thing that makes allocation rates a challenge in today's Java/.NET (and other GC-based) systems is GC pauses. All else (various resources spent or wasted) falls away with simple mechanical sympathy math. Moore's law is alive and well (for now). And hardware-related sustainable allocation rate follows it nicely. 20+GB/sec is a very practical level on current systems when pauses are not an issue. And yes, that's 50x the level that people seem to "tune" for by crippling their code or their engineers...</div></div><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">Here is some basic mechanical-sympathy-driven math about sustainable allocation rates (based mostly on Xeons):</div><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><span style="background-color: white; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px;">1. 
From a&nbsp;</span><b style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px;"><u>speed and system resources spent</u></b><span style="background-color: white; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px;">&nbsp;perspective, sustainable allocation&nbsp;rate&nbsp;has roughly followed Moore's law over the past 5 Xeon CPU generations.</span><br /><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp; 1.1 From a&nbsp;<u><b>CPU speed</b></u>&nbsp;perspective:</div><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /><ul><li>The rate of sustainable allocation of a single core (at a given frequency) is growing very slowly over time (not @&nbsp;Moore's&nbsp;law rates, but still creeping up with better speed at similar frequency, e.g. Haswell vs. Nehalem).</li></ul></div><div style="border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The number of cores per socket is growing nicely, and with it the overall CPU power per socket (@ roughly&nbsp;Moore's&nbsp;law). (e.g. 
from 4 cores per socket in late 2009 to 18 cores per socket in late 2014).</li></ul></div><div style="border: 0px; color: #222222; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="font-family: Arial, Helvetica, sans-serif;"></div><ul style="font-family: Arial, Helvetica, sans-serif;"><li>The overall CPU power available to sustain allocation rate per socket (and per 2 socket system, for example) is therefore growing at roughly&nbsp;Moore's&nbsp;law rates.</li></ul><div style="font-family: Arial, Helvetica, sans-serif;"></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp; 1.2 From a&nbsp;<b><u>cache</u></b>&nbsp;perspective:</div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>L1 and L2 cache sizes per core have been fixed for the past 6 years in the Xeon world.</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The L3 cache size per core is growing fairly slowly (not at Moore's law rates), but the L3 cache per socket has been growing slightly faster than the number of cores per socket (e.g. 
from 8MB/4_core_socket in 2009 to 45MB/18_core_socket in late 2014).</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The cache size per socket has been growing steadily at Moore's law rates.</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>With the cache space per core growing slightly over time, the cache available for allocation work per core remains fixed or better.</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp; 1.3 From a&nbsp;<b><u>memory bandwidth</u></b>&nbsp;point of view:</div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The memory bandwidth per socket has been steadily growing, but at a rate slower than Moore's law. E.g. A late 2014 E5-2690 V3 has a max bandwidth of 68GB/sec. per socket. A late 2009 E5590 had 32GB/sec of max memory bandwidth per socket. 
That's a 2x increase over a period of time during which CPU capacity grew by more than 4x.</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>However, the memory bandwidth available (assume sustainable memory bandwidth is 1/3 or 1/2 of max) is still WAY up there, at 1.5GB-3GB/sec/core (that's out of a max of about 4-8GB/sec per core, depending on cores/socket chosen).</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>So while there is a looming bandwidth cap that may hit us in the future (bandwidth growing slower than CPU power), it's not until we reach allocation levels of ~1GB/sec/core that we'll start challenging memory bandwidth in current commodity server architectures.&nbsp;</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>From a memory bandwidth point of view, this translates to &gt;20GB/sec of comfortably sustainable allocation rate on current commodity systems.</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp; 1.4 From a&nbsp;<b><u>GC *work*</u></b>&nbsp;perspective:</div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>From a GC perspective, work per allocation unit is a constant that the user controls (via the ratio of empty to live memory).</li></ul></div><div style="border: 0px; font-family: Arial, Helvetica, sans-serif; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>On Copying or Mark/Compact collectors, the work spent to collect a heap is linear in the live set size (NOT the heap size).</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><div 
style="font-family: Arial, Helvetica, sans-serif;"></div><ul style="font-family: Arial, Helvetica, sans-serif;"><li>The frequency at which a collector has to do this work roughly follows:&nbsp;</li></ul><span style="font-family: Arial, Helvetica, sans-serif;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;</span><span style="font-family: Courier New, Courier, monospace;">allocation_rate / (heap_size - live_set_size)</span></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="border: 0px; color: black; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="font-family: Helvetica;"></div><ul><li><span style="font-family: Helvetica;">The overall work per time unit therefore follows allocation rate (for a given </span><span style="font-family: Courier New, Courier, monospace;">live_set_size</span><span style="font-family: Helvetica;"> and </span><span style="font-family: Courier New, Courier, monospace;">heap_size</span><span style="font-family: Helvetica;">).</span></li></ul></div><div style="border: 0px; color: black; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="font-family: Helvetica;"></div><ul><li><span style="font-family: Helvetica;">And the overall work per allocation unit is therefore a constant (for a given </span><span style="font-family: Courier New, Courier, monospace;">live_set_size</span><span style="font-family: Helvetica;"> and </span><span style="font-family: Courier New, Courier, monospace;">heap_size</span><span style="font-family: Helvetica;">).</span></li></ul></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The constant is under the user's control. E.g. the user can arbitrarily grow the heap size to decrease work per unit, and arbitrarily shrink memory to go the other way (e.g. 
if they want to spend CPU power to save memory).</li></ul></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>This math holds for all current newgen collectors, which tend to dominate the amount of work spent in GC (so not just in Zing, where it holds for both newgen and oldgen).</li></ul></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li><b><i><u>But</u></i></b> using this math does require a willingness to grow the heap size with Moore's law, which people have refused to do for over a decade. [driven by the unwillingness to deal with the <i><u>pausing effects</u></i> that would grow with it]</li></ul></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>[BTW, we find it to be common practice, on current applications and on current systems, to deal with 1-5GB/sec of allocation rate, and to comfortably do so while spending no more than 2-5% of overall system CPU cycles on GC work. This level seems to be the point where most people stop caring enough to spend more memory on reducing CPU consumption.]</li></ul></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;">2. From a&nbsp;<b><u>GC pause</u></b>&nbsp;perspective:</div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>This is the big bugaboo. The one that keeps people from applying all the nice math above. The one that keeps Java heaps and allocation rates today at the same levels they were 10 years ago. 
The one that seems to keep people doing "creative things" in order to keep leveraging Moore's law and having programs that are aware of more than 640MB of state.</li></ul></div><div style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>GC pauses don't have to grow with Moore's law. They don't even have to exist. But as long as they do, and as long as their&nbsp;<b><u>magnitude</u></b>&nbsp;grows with the attempt to linearly grow state and allocation rates, pauses will continue to dominate people's tuning and coding decisions and motivations. [and by magnitude, we're not talking about averages. We're talking about the worst thing people will accept during a day.]</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>GC pauses seem to be limiting both allocation rates and live set sizes.</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The live set size part is semi-obvious: If your [eventual, inevitable, large] GC pause grows with the size of your live set or heap size, you'll cap your heap size at whatever size causes the largest pause you are willing to bear. Period.</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>The allocation rate part requires some more math, and this differs for different collector parts:</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp; 2.2 For the newgen parts of the collector:&nbsp;</div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>By definition, a higher allocation rate requires a linearly larger newgen size to maintain the same "objects die in newgen" properties. [e.g. 
if you put 4x as many cores to work doing the same transactions, with the same object lifetime profiles, you need 4x as much newgen to avoid promoting more things with larger pauses].</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>While "typical" newgen pauses may stay just as small, a larger newgen linearly grows the worst-case amount of stuff that a newgen *might* promote in a single GC pause, and with it grows the actual newgen pause experienced when promotion spikes occur.&nbsp;</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>Unfortunately, real applications have those spikes every time you read in a bunch of long-lasting data in one shot (like updating a cache or a directory, or reading in a new index, or replicating state on a failover).&nbsp;</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>Latency-sensitive apps tend to cap their newgen size to cap their newgen pause times, in turn capping their allocation rate.</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp; 2.3 For oldgen collectors:</div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>Oldgen collectors that pause for *everything* (like ParallelGC) actually don't get worse with allocation rate. 
They are just so terrible to begin with (pausing for ~1 second per live GB) that outside of batch processing, nobody would consider using them for live sets larger than a couple of GB (unless they find regularly pausing for more than a couple of seconds acceptable).</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>Most oldgen collectors that *try* to not pause "most" of the time (like CMS) are highly susceptible to allocation rate and mutation rate (and mutation rate tends to track allocation rate linearly in most apps). E.g. the mostly-concurrent-marking algorithms used in CMS and G1 must revisit (CMS) or otherwise process (G1's SATB) all references mutated in the heap before marking finishes. The rate of mutation increases the GC cycle time, while at the same time the rate of allocation reduces the time the GC has in order to complete its work. At a high enough allocation rate + mutation rate level, the collector can't finish its work fast enough and a promotion failure or a concurrent mode failure occurs. And when that occurs, you get that terrible pause you were trying to avoid.&nbsp;</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><ul><li>As a result, even for apps that don't try to maintain "low latency" and only go for "keep the humans happy" levels, most current mostly-concurrent collectors only remain mostly-concurrent within a limited allocation rate. Which is why I suspect these 640KB/sec (oh, right, 300MB/sec) guidelines exist.</li></ul></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">Bottom line:</div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><br />When pauses are not there to worry about, sustaining many GB/sec of allocation is a walk in the park on today's cheap, commodity servers. 
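To put rough numbers behind that, the frequency relation from section 1.4 (allocation_rate / (heap_size - live_set_size)) can be worked through directly. The figures below are purely hypothetical, chosen to illustrate the shape of the trade-off rather than measured from any system:

```java
// Illustrative GC-frequency arithmetic; all figures are hypothetical.
public class GcFrequencyMath {
    public static void main(String[] args) {
        double allocationRateGBperSec = 2.0; // a busy multi-core server
        double heapSizeGB = 40.0;            // total heap
        double liveSetGB = 8.0;              // data that survives collections

        // A collector must run roughly every time the empty portion fills:
        double collectionsPerSec =
                allocationRateGBperSec / (heapSizeGB - liveSetGB);
        System.out.printf("~%.4f collections/sec (one every %.1f sec)%n",
                collectionsPerSec, 1.0 / collectionsPerSec);
        // 2 / (40 - 8) = 0.0625/sec, i.e. one collection every 16 seconds

        // Growing the heap buys back frequency (and GC work per time unit)
        // linearly -- the user-controlled constant discussed above:
        double withDoubledHeap =
                allocationRateGBperSec / (2 * heapSizeGB - liveSetGB);
        System.out.printf("~%.4f collections/sec with a 2x heap%n",
                withDoubledHeap);
    }
}
```

Note that the allocation rate only sets the frequency; the heap-minus-live-set headroom is the knob that buys it back.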
It's pauses, <u>and only pauses</u>, that make people work so hard to fit their applications in a budget that is 50x smaller than what the hardware can accommodate. People that do this do it for good reason. But it's too bad they have to shoulder the responsibility for badly behaving garbage collectors. When they can choose (and there <i><u>is</u></i> a choice) to use collectors that don't pause, the pressure to keep allocation rates down changes, moving the "this is too much" lines up by more than an order of magnitude.<br /><br /></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">With less pauses comes less responsibility.</div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;">[ I need to go do real work now... ]</div></div></div></div></div>Gil Tenehttp://www.blogger.com/profile/10732691137498021997noreply@blogger.com2tag:blogger.com,1999:blog-5222542250352397862.post-90352443364511237202014-03-29T23:07:00.002-07:002014-10-31T19:38:56.742-07:00A Pauseless HashMap<div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">HashMaps are great. And fast. Well, fast most of the time. If you keep growing them, you'll get elevator music every once in a while. Then they go fast again. 
For a while.<br /><br />Wouldn't it be nice if HashMaps didn't stall your code even when they were resizing?</div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">Some background: As those of you who have read my various past rants may have noticed, I spend a lot of my time thinking about the behavior of {latency, response-time, reaction-time}. In addition to trying to better understand or teach about the behavior (with monitoring and measurement tools like HdrHistogram, LatencyUtils, and jHiccup), I actually work on things that try to improve bad behavior. For some definitions of "improve" and "bad". Eliminating pausing behavior in GC was the lowest hanging fruit, but more recent work has focused on eliminating pauses due to other things that stand out once those pesky GC blips are gone. Things like at-market-open deoptimizations. Things like lock deflation, lock de-biasing, class unloading, weak reference processing, and all sorts of TTSP (time to safepoint) issues. I've also learned a lot about how to bring down Linux's contribution to latency spikes.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-p5UORt6k3_I/Uzey8aSOUyI/AAAAAAAAAHU/n0_0fUGuTBM/s3200/images.jpeg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://1.bp.blogspot.com/-p5UORt6k3_I/Uzey8aSOUyI/AAAAAAAAAHU/n0_0fUGuTBM/s3200/images.jpeg" /></a></div><br />But the JVM and the OS are not the only things that cause latency spikes. Sometimes it's <b><i>your</i></b> code, and the code is doing something "spiky". 
In my day job, I keep running into actual, real-world low-latency system code that is typically super-fast, but occasionally spikes in actual work latency due to some rare but huge piece of work that needs to be done. This is most often associated with some state accumulation. Once we eliminate GC pauses (which tend to dominate latency spikes, but also tend to simply disappear when Zing is applied), we get to see the things that were hiding in the GC noise. We often run into "nice" patterns of growing latency spikes at growing intervals, with a near-perfect doubling in both magnitude and interval between the spikes. This happens so often that we've studied the common causes, and (by far) the most common culprits seem to be HashMaps. The kind used to accumulate something during the day, and which resize in powers-of-two steps as a result.</div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">I've had "build a Pauseless HashMap" on my weekend project list for over a year now, but finally got around to actually building it (at the request of a friend on the mechanical sympathy group). 
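The doubling pattern described above is easy to reproduce for yourself. Here is a minimal, hypothetical demo (class name and loop size are mine, and absolute timings will vary wildly with machine, JVM, and GC noise, so treat the output as a shape rather than a benchmark):

```java
import java.util.HashMap;

// Toy demo of HashMap resize spikes: time each put and report new worst
// cases. On most runs the reported spikes cluster near the power-of-two
// resize points, roughly doubling in both cost and spacing.
public class ResizeSpikeDemo {
    public static void main(String[] args) {
        HashMap<Integer, Integer> map = new HashMap<>();
        long worstNanos = 0;
        for (int i = 1; i <= 4_000_000; i++) {
            long start = System.nanoTime();
            map.put(i, i);
            long elapsedNanos = System.nanoTime() - start;
            if (elapsedNanos > worstNanos) { // a new worst-case put
                worstNanos = elapsedNanos;
                System.out.printf("put #%,d took %,d ns%n", i, elapsedNanos);
            }
        }
    }
}
```

(GC pauses will add spikes of their own; running with a large fixed heap, e.g. -Xms4g -Xmx4g, gives a cleaner signal.)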
There are probably at least 17 ways to skin a HashMap so it won't stall puts and gets when it resizes, but this is my simple take on it:&nbsp;</div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><a href="https://github.com/giltene/PauselessHashMap" style="border: 0px; color: #6611cc; cursor: pointer; margin: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">https://github.com/giltene/<wbr></wbr>PauselessHashMap</a></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">Keep in mind that (so far) this is a "probably-working draft" that's gone through some bench testing, but is not yet battle hardened (scrutiny is welcome).</div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></span></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;">I intentionally<span style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;">&nbsp;based this 
version on Apache Harmony's version of HashMap, and not on OpenJDK's, in order to make it available without GPLv2 license restrictions (for those who may want to include it in non-GPL products). The background resizing concept itself is simple, and can be applied just as&nbsp;</span><span style="border: 0px; color: black; font-family: Helvetica; margin: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;">easily to the OpenJDK version (e.g. if some future Java SE version wants to use it). Y</span></span><span style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;">ou can use (</span><a href="https://svn.apache.org/repos/asf/harmony/enhanced/java/trunk/classlib/modules/luni/src/main/java/java/util/HashMap.java">https://svn.apache.org/repos/<wbr></wbr>asf/harmony/enhanced/java/<wbr></wbr>trunk/classlib/modules/luni/<wbr></wbr>src/main/java/java/util/<wbr></wbr>HashMap.java</a><span style="color: black; font-family: Helvetica; font-size: 12px;">) as a baseline comparison for the code I started with.</span></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; color: black; font-family: Helvetica; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></span></div><div style="background-color: white; border: 0px; color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: 13px; margin: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; color: black; font-family: Helvetica; margin: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;">This is also a classic example of how GC makes this sort of concurrent programming 
thing both fast and simple. It's a case of an&nbsp;asymmetric speed need between two concurrent actors that share mutable state. I worked hard to make the fast path get() and put() cheap, and managed (I think) to not even use volatiles in the fast path. In doing this, I shifted all the cost I could think of to the background work, where latency&nbsp;doesn't matter nearly as much. This sort of trick would be much harder (or slower) to do if GC wasn't there to safely pick up the junk behind us, as it would (invariably, I think) add a need for additional synchronizing work in the fast path.</span></span></div><div><span style="border: 0px; color: black; font-family: Helvetica; margin: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; font-size: 12px; margin: 0px; padding: 0px; vertical-align: baseline;"><br /></span></span></div>Gil Tenehttp://www.blogger.com/profile/10732691137498021997noreply@blogger.com7
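[Postscript for illustration: this is not the actual PauselessHashMap linked above, which does the rehashing on a background thread and keeps volatiles out of the fast path. The toy below is just a single-threaded sketch of the underlying idea: spread the resize work across operations instead of taking one monolithic stall. The class name, thresholds, and migration step size are all made up, and it assumes non-null values.]

```java
import java.util.ArrayDeque;
import java.util.HashMap;

// Toy incremental-resize map (single-threaded; NOT the real PauselessHashMap).
// While a resize is in flight, reads check the new table first and fall back
// to the retiring one; each operation also migrates a few keys, so no single
// put or get ever pays for a full rehash. Assumes non-null values.
public class IncrementalMap<K, V> {
    private static final int KEYS_PER_OP = 8;   // migration work per call

    private HashMap<K, V> current = new HashMap<>();
    private HashMap<K, V> retiring = null;      // non-null while migrating
    private ArrayDeque<K> pending = null;       // keys left to migrate
    private int threshold = 1024;               // hypothetical resize trigger

    public V put(K key, V value) {
        migrateStep();
        V prior = current.put(key, value);
        if (prior == null && retiring != null) {
            prior = retiring.remove(key);       // newest value lives in current now
        }
        if (retiring == null && current.size() > threshold) {
            // Begin a resize: snapshot the keys, then drain them gradually.
            retiring = current;
            current = new HashMap<>(4 * threshold);
            pending = new ArrayDeque<>(retiring.keySet());
            threshold *= 2;
        }
        return prior;
    }

    public V get(Object key) {
        migrateStep();
        V v = current.get(key);
        return (v != null || retiring == null) ? v : retiring.get(key);
    }

    public int size() {
        return current.size() + (retiring == null ? 0 : retiring.size());
    }

    private void migrateStep() {
        if (retiring == null) {
            return;
        }
        for (int i = 0; i < KEYS_PER_OP && !pending.isEmpty(); i++) {
            K key = pending.poll();
            V v = retiring.remove(key);
            if (v != null) {                    // skip keys overwritten meanwhile
                current.put(key, v);
            }
        }
        if (pending.isEmpty()) {                // migration complete
            retiring = null;
            pending = null;
        }
    }
}
```

The real thing moves this migration onto a background thread (and handles the resulting concurrency), which is exactly where the GC-makes-it-simple point above comes in.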