Friday, July 28, 2006

It's 2AM. The western world is asleep. Bats flit and chatter outside my urban Minnesota home, chasing mosquitos. My available television channels (only broadcast; I don't have cable) have been reduced to dating shows, infomercials, and evangelical fundraisers.

I'm up coding. Why?

Perhaps it's because I eat, sleep and breathe code. I'm usually up until 2 or 3AM hacking or studying code. I wake up around 7, head to work, and crack open my laptop on the bus to do more coding. I code all day on Java EE apps with occasional JRuby breaks. I come home, sit down in my home office and code on JRuby. I go to bed at 2 or 3AM and the process repeats.

I blog about coding.

I go to user groups focused on coding.

I feel uncomfortable at parties unless I can talk about coding (although I do have other hobbies: mathematical/logical puzzles, pool, go, movies, console video games, and beer among them).

When I get drunk, I go on long-winded rants about coding and code-related topics. When I sober up, my first worry is whether I've damaged part of my code-brain.

The nonfiction books in my bookshelf are all books on coding or remnants of my CS undergrad studies.

I am a coder.

Passion

I want to know where the other passionate coders are. I know they're out there, juggling bits and optimizing algorithms at all hours of the night. I know they share many of my characteristics. I know they love doing what they do, and perhaps they--like me--have always wanted to spend their lives coding.

How do we find them?

Google seems to know how. Give the coders what they want: let them work when the sun is asleep, let them eat when they want to eat, dress like they want to dress, play like they want to play; let them follow their creativity to whatever end, and reward that creativity both monetarily and politically; let them be.

Is this approach feasible? Google's bottom line seems to say so, boom-inflated numbers notwithstanding. And Google's approach is really just the latest in a long line of attempts to appease the coder ethos. The dot-commers tried to figure it out, but rewarded breathing and loud talking as much as true inspiration and hard work. Other companies are now learning from those mistakes; Google is just the most prominent.

What is it that we want? What makes me say "this is the job for me"?

Reread that list of characteristics above. What theme shines through?

Perhaps coders just want the freedom to think, to learn, to create in their own ways. Perhaps it's not about timelines and budgets and marketability. Perhaps coding--really hardcore, 4AM, 24-hours-awake coding--is the passionate, compelling, empowering art form of our time.

Artists are mocked. Artists are ridiculed. Artists are persecuted. Artists are sought out. Artists are revered.

Artists create their best work when left to their own devices, isolated from the terrible triviums of modern living.

So do coders.

Artists go on long-winded, oft-maligned midnight rants about what it means to be an artist, man, and what it means to create art.

So do coders.

Perhaps what we've always hoped is true. Perhaps we're not misfits or malcontents. Perhaps we're the latest result of that indescribable human spark that moves mountains and shoots the moon. Perhaps it's no longer presumptuous to say it:

Sunday, July 23, 2006

I love stirring up trouble. In working on the compiler for JRuby, it's become apparent that a few targeted areas could yield tremendous performance benefit even in pure-interpreted mode. I describe a bit of this evening's study here.

As an experiment, I cut out a bunch of the stuff in our ThreadContext that caused a lot of overhead for Java-implemented methods. This isn't a safe thing to do for general cases, since most of these items are important, but I wanted to see what bare invocation might do for speed by reducing some of these areas to theoretical "zero cost".

So with the unnecessary overhead removed (simulating zero cost for those bits of the method call process), we're down to the mid-6-second range for fib(30). The iterative version calculates fib(500000), but I'll come back to that.

Now consider Ruby's own numbers for both of these same tests:

Time for bi-recursive, interpreted: 2.001974
Time for iterative, interpreted: 9.015137

Hmm, now we're not looking so bad anymore, are we? For the recursive version, we're only about 3.5x slower with the reduced overhead. For iterative, only about 2.5x slower. So there are a few other things to consider:

- Our benchmark still creates and pushes/pops a frame per invocation
- Our benchmark still has fairly costly overhead for method arguments, both on the caller and callee sides (think multiple arrays allocated and copied on both ends)
- Our benchmark is still using reflected invocation

Yes, the bits I removed simulate zero cost, which we'll never achieve. However, if we assume we'll get close (or at least minimize overhead for cases like this where those removed bits are obviously not needed), these numbers are not unreasonable. If we further assume we can trim more time off each call by simplifying and speeding up argument/param processing, we're even better. If we eliminate framing or reduce its cost in any way, we're better still. However, the third item above is perhaps the most compelling.

You should all have seen my microbenchmarks for reflected versus direct invocation. Even in the worst comparison, direct invocation (via INVOKEINTERFACE) took at most 1/8 as much time as reflected. The above fib invocation and all the methods it calls are currently using reflected invocation, just like most stuff in JRuby.

So what does performance hypothetically look like for 6.5s times 1/8? How does around 0.8s sound? A full 50-60% faster than C Ruby! What about for iterative...24s / 8 = 3s, a solid 66% boost over C Ruby again. Add in the fact that we're missing a bunch of optimizations, and things are looking pretty damn good. Granted, the actual cost of invoking all those reflected methods is somewhat less than the total, but it's very promising. Even if we assume that the cost of the unoptimized bits is 50% of the total time, leaving 50% cost for reflection, we'd basically be on par with C Ruby.

It's also interesting to note that the interpreted and compiled times for the iterative version are almost identical. Interpretation is expensive for many things, but not for a simple while loop. The iterative version's code is below:
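The original listing was lost in this archive; a reconstruction of the iterative, while-loop version in Ruby might look like this (the exact benchmark code may have differed):

```ruby
# Iterative Fibonacci: a plain while loop whose cost is almost
# entirely local variable assignment and method calls ('+', '<=').
def fib_iter(n)
  i = 0
  j = 1
  cur = 1
  while cur <= n
    k = i
    i = j
    j = k + j
    cur = cur + 1
  end
  i
end

fib_iter(20)  # => 6765
```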

This is a good example of code that's very light on interpretation. While loops in Ruby and JRuby boil down in both cases to little more than while loops in the underlying language, interpreted or not. The expense of this method is almost entirely in two areas: variable assignment and method calls, neither of which is sped up by compilation. The similarity of the compiled and interpreted numbers for this iterative algorithm shows one thing extremely clearly: our method call overhead really, really stinks. It is here we should focus all our efforts in the short term.

Given these new numbers and the fact that we have many optimizations left to do, I think it's now very reasonable to say we could beat C Ruby performance by the end of the year.

Side Note: The compiler work has gone very well, and now supports several types of variables and while loops. This compiler is mostly educational, since it is heavily dependent on the current ThreadContext semantics and primitives. As we attack the call pipeline, the current compiler will break and be left behind, but it has certainly paved the way, showing us what we need to do to make JRuby the fastest Ruby interpreter available.

Wednesday, July 19, 2006

I have received confirmation from my employer, Ventera Corporation, that they will fund my trip to Antwerp. Hooray! I'm also planning on bringing my wife and spending the holiday season in Europe. It ought to be a great trip, right on the heels of presenting JRuby.

Any Europeans in Amsterdam, Antwerp, Paris, Venice, Prague, or points nearby that might like to chat some time, let me know. We're planning on visiting at least those five cities.

And for the record, I speak only one European language: Spanish (and very poorly, I might add). My Mandarin Chinese is better, but I don't expect that will help much.

I will be attending RubyConf 2006, but I still will not have my own presentation. The selection process is complete. I still have standing offers from two other potential presenters to share time, but I have not yet heard whether they were accepted.

Oddly enough, I did receive the following email:

We have finished the presentation selection process, and regret to have to inform you that your paper was not among those chosen for inclusion.

Since I missed the submission deadline by a day, it's rather unremarkable that I was not selected to present. So I missed the deadline AND was declined? Ouch!

Tuesday, July 18, 2006

July 17 was a particularly big news day for JRuby. Three articles were posted, one in two places, on JRuby and its two core developers, Tom and me. I present here a collection of articles and blogs with particular significance to the Ruby+Java world.

Andy released this article shortly after our JavaOne press conference appearance. He provides a good executive summary of Ruby, IronRuby, and JRuby.

Ugo Cei on Ruby/Java Integration

In addition, I was today pointed to Ugo Cei's series of blog postings on Ruby/Java integration. They're short, but give a good quick overview of what works and what doesn't. Covered topics include the Ruby/Java bridges RJB (worked, but looks cumbersome) and YAJB (did not work), JRuby (worked like gangbusters, naturally ;), and remoted Java code over XML-RPC (a fairly popular recommendation from Rubyists).

Part I - RJB part 1
Part II - RJB for a more complicated case
Part III - YAJB...OutOfMemory on a simple script
Part IV - JRuby, a great comparison to the second RJB posting
Part V - XML-RPC part 1
Part VI - XML-RPC part 2

Friday, July 14, 2006

I keep wondering if things can continue moving along so well indefinitely. JRuby's got the momentum of a steamroller shot out of a cannon these days.

Compilers

I have been back at work on the compiler, which is starting to take real shape now. I'm doing this initial straightforward version in Java, so I can work on evolving my code-generation library independently. This version also isn't quite so complicated that it warrants using Ruby.

Early returns have been very good. I've been testing it so far with the standard "bi-recursive" fib algorithm, with a few other node types thrown in as I implement them:
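The code listing didn't survive this archive; the standard bi-recursive fib in Ruby looks like the following (a reconstruction, not the exact benchmark script):

```ruby
# Bi-recursive Fibonacci: each invocation makes two recursive calls
# to fib, plus dynamic dispatches to '<', '-' (twice), and '+'.
def fib(n)
  if n < 2
    n
  else
    fib(n - 2) + fib(n - 1)
  end
end

fib(20)  # => 6765
```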

The performance boost from the early compiler is roughly 50% when running fib(20):

fib(20)
Time for interpreted: 1.524
Time for compiled: 0.729

The fib algorithm is obviously extremely call-heavy. There are two recursive calls to fib plus four other calls, for '<', '-', and '+'. In JRuby, this means that the overhead of making dyn-dispatched method calls is almost as high as directly interpreting code. For larger runs of fib, performance tails off a bit because of this overhead. We're taking a two-pronged approach to performance right now: the compiler is obviously one prong, but overall runtime optimization is the other. The compiler gives us an easy 50% boost, but we may see another large boost just by cleaning up and redesigning the runtime to optimize call paths. I would not be surprised if we're able to exceed C Ruby's performance in the near term.

Oh, and I love the following stack trace from a bug in the compiler. Note the file and line number where the error occurred:

Exception in thread "main" java.lang.IncompatibleClassChangeError
        at MyCompiledScript.fib_java(samples/test_fib_compiler.rb:2)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

How cool is that?

Conferences

Close on the heels of my disappointing mixup with RubyConf 2006, we have received an invitation to present at JavaPolis 2006 in Antwerp, Belgium. We had actually been hoping to go, and had planned to propose a BOF or Quickie session, but after we sent a proposal, one of the conference organizers said JRuby was already on the wish list. We're guaranteed "at least" a full session. JavaPolis will be held December 13-15, and we'll be getting cozy with some of the top Java folks in the world. It should be a fun and exciting event, especially considering what we'll get done in the five months before then.

I've also received other offers from potential RubyConf 2006 presenters to share their time slots with me. We should be able to get word out and demos presented after all. I'd still like to be able to present without stepping on others' time, so if you really would like to see JRuby have a full presentation at RubyConf this year, email the organizers.

Codehaus

We have made the move to Codehaus, and we've mostly got JRuby operations migrated there from SourceForge. JIRA is up, Confluence is up with a minimal set of pages, and we're totally running off Codehaus SVN now. So far we're very happy with the move; JIRA is beautiful and the SVN repo works great. We have not yet migrated off SF.net email lists and we haven't moved over all bugs from SF's tracker, but otherwise we're basically a Codehaus project now. Huzzah!

Sunday, July 9, 2006

This was originally posted to the jruby-devel mailing list, but I am desperate to be proven wrong here. We use reflection extensively to bind Ruby methods to Java impls in JRuby, and the rumors of how fast reflection is have always bothered me. What is the truth? Certainly there are optimizations that make reflection very fast, but as fast as INVOKEINTERFACE and friends? Show me the numbers! Prove me wrong!!

--

It has long been assumed that reflection is fast, and that much is true. The JVM has done some amazing things to make reflected calls really f'n fast these days, and for most scenarios they're as fast as you'd ever want them to be. I certainly don't know the details, but the rumors are that there's code generation going on, reflection calls are actually doing direct calls, the devil and souls are involved, and so on. Many stories, but not a lot of concrete evidence.

A while back, I started playing around with a "direct invocation method" in JRuby. Basically, it's an interface that provides an "invoke" method. The idea is that for every Ruby method we provide in Java code you would create an implementation of this interface; then when the time comes to invoke those methods, we are doing an INVOKEINTERFACE bytecode rather than a call through reflection code.
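The shape of that idea, sketched in Java (all names here are illustrative, not JRuby's actual types):

```java
// One small class per Java-implemented Ruby method; callers go
// through the interface, which the JVM dispatches via INVOKEINTERFACE
// rather than through the reflection machinery.
interface DirectMethod {
    Object invoke(Object receiver, Object[] args);
}

// Hypothetical implementation class for a single method (String#hash).
class StringHashMethod implements DirectMethod {
    public Object invoke(Object receiver, Object[] args) {
        return ((String) receiver).hashCode();
    }
}

class Dispatcher {
    // The call site sees only the interface type.
    static Object call(DirectMethod method, Object receiver, Object[] args) {
        return method.invoke(receiver, args);
    }
}
```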

The down side is that this would create a class for every Ruby method, which amounts to probably several hundred classes. That's certainly not ideal, but perhaps manageable considering you'd have JRuby loaded once in a whole JVM for all uses of it. It could also be mitigated by only doing this for heavily-hit methods. Still, requiring lots of punky little classes is a big deal. [OT: Oh what I would give for delegates right about now...]

The up side, or so I hoped, would be that a straight INVOKEINTERFACE would be faster than a reflected call, regardless of any optimization going on, and we wouldn't have to do any wacked-out code generation.

Initial results seemed to agree with the upside, but in the long term nothing seemed to speed up all that much. There's actually a number of these "direct invocation methods" still in the codebase, specifically for a few heavily-hit String methods like hash, [], and so on.

So I figured I'd resolve this question once and for all in my mind. Is a reflected call as fast as this "direct invocation"?

A test case is attached. I ran the loops for ten million invocations...then ran them again timed, so that hotspot could do its thing. The results are below for both pure interpreter and hotspotted runs (times are in ms).

I would really love for someone to prove me wrong, but according to this simple benchmark, direct invocation is faster--way, way faster--in all cases. It's obviously way faster when we're purely interpreting or before hotspot kicks in, but it's even faster after hotspot. I made both invocations increment a static variable, which I'm hoping prevented hotspot from optimizing code into oblivion. However even if hotspot IS optimizing something away, it's apparent that it does a better job on direct invocations. I know hotspot does some inlining of code when it's appropriate to do so...perhaps reflected code is impossible to inline?

Anyone care to comment? I wouldn't mind speeding up Java-native method invocations by a factor of ten, even if it did mean a bunch of extra classes. We could even selectively "directify" methods, like do everything in Kernel and Object and specific methods elsewhere.

--

The test case was attached to my email...I include the test case contents here for your consumption.
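The attached listing was lost in this archive; a minimal stand-in comparing the two dispatch styles might look like the following (names and structure are illustrative, and this version already reuses a single Object[] and calls through the interface type):

```java
import java.lang.reflect.Method;

// Times ten million reflected calls against ten million calls
// made through an interface. The original benchmark was fuller.
public class InvocationBench {
    interface Invoker { void invoke(); }

    static long counter = 0; // incremented by both paths so HotSpot
                             // cannot optimize the calls away

    public static void tick() { counter++; }

    static class DirectTick implements Invoker {
        public void invoke() { tick(); }
    }

    public static void main(String[] args) throws Exception {
        Method reflected = InvocationBench.class.getMethod("tick");
        Invoker direct = new DirectTick();  // held as the interface type
        Object[] noArgs = new Object[0];    // allocated once, not per call

        long start = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) reflected.invoke(null, noArgs);
        System.out.println("reflected: " + (System.currentTimeMillis() - start) + "ms");

        start = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) direct.invoke();
        System.out.println("direct:    " + (System.currentTimeMillis() - start) + "ms");
    }
}
```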

Update: A commenter noticed that the original code was allocating a new Object[0] for every call to the reflected method; that was a rather dumb mistake on my part. The commenter also noted that I was doing a direct call to the impl rather than a call to the interface, which was also true. I updated the above code and re-ran the numbers, and reflection does much better as a result...but still not as fast as the direct call:

Wednesday, July 5, 2006

It's not really Ruby or Java-related, but with all the noise about net neutrality the past few weeks I figured I'd put my opinions out there. I doubt any of this is particularly original, but it's where I stand.

I believe, as do most well-informed Netizens, that Verizon et al are nuts to demand a new "pay-for-performance" model. They claim that since traffic to and from Google, Amazon, Yahoo, and friends takes up a larger percentage of their bandwidth than that from other sites, those heavy-hitters should pay intermediate providers a little something extra to make sure traffic arrives quickly. They liken it to a high-speed highway toll lane, allowing those who pay the price a higher quality of service. They are also completely wrong.

Plenty of metaphors have been tossed around by both sides. The toll lane metaphor in particular fails badly because of one simple fact: Google et al do not decide who visits their sites, and therefore they do not decide where their traffic goes. Google, for example, is a simple supplier of a service; the fact that the service is provided over the internet is entirely irrelevant since the internet is not funded based on who requests what from whom; it's based on the privilege of making, serving, and receiving requests. I'll talk in terms of Google from here on.

Traffic from Google (and to a lesser extent, traffic to Google) certainly does take up its share of overall internet bandwidth. This much is true. However, Google already pays for some of that bandwidth on their end of the net. They pay for the privilege of receiving and serving requests, and of sending requests and receiving responses from sites they index or integrate with.

The other end of the internet is also already being paid for. Users pay for their internet connections, be they dialup or broadband, and in return are given the privilege of sending and receiving requests over that connection. Service pricing scales up with speed and reliability; corporate users pay the highest rates to guarantee quality of service, while home users pay much lower consumer rates. Traffic that crosses the internet from point A to point B is already paid for twice...once on the service end and once on the consumer end. This is how the internet has operated for better than two decades; it is today essentially a utility like power or water.

Adding a requirement that Google pay for traffic crossing various networks adds a third payment into this system. Where providers previously could only profit from what happened at the edges of the internet, they now want to choke money from the middle as well. Not only is this absurd given the structure and organization of the internet, it is also quite infeasible. If Google were expected to pay every primary and secondary backbone provider for all traffic crossing their wires, the costs could be astronomical...a death of a thousand cuts. It would, however, represent an explosive boom in revenues for the largest providers, since they would profit from every byte they delivered from point A to point B...while simultaneously charging A and B for the right to such extortion.

Let us assume that the providers are truthful in saying they need such a system to handle the explosive popularity of sites like Google's. What brought us to this situation? Again, the fault lies with the providers themselves.

Today, you can get a 1 Mbps DSL connection from any provider for about $25-$30 per month. Such a speed was unheard of only a couple of years ago, and yet it is now standard in many regions. If you are willing to accept a few restrictions on how you use it, you can get 4-8 Mbps cable internet for about the same price. Again, a remarkably low price for the service you get.

Supply and demand drives the market, wherein a fixed supply combined with increased demand causes prices to go up. While the supply of bandwidth has increased over the past few years, the demand for faster connections and the amount of data traversing them has skyrocketed. At the same time, prices have dropped.

Say a given provider serves 1,000 users, all with 56 Kbps modems. At peak theoretical usage this would require 56 Mbps of upstream bandwidth to serve (considering only downloads). Now consider a three-year period in which our friendly provider increases the size of their pipe by ten times (a very large increase), for a resulting total upstream bandwidth of 560 Mbps. Suppose over the same period half of their users switch to 4 Mbps broadband connections (also a large number, but perhaps not unreasonable in an internet-savvy population), and proceed to utilize them no more than 50% (reasonable given modern internet media services). This brings the total theoretical demand up to just over 1 Gbps, nearly double the ten-times buildout. Even dropping the number of broadband subscribers to 30% exceeds the buildout by a healthy margin.
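Working the scenario's arithmetic through (a sketch; figures in Mbps, and the exact total depends on what the remaining modem users are assumed to contribute):

```ruby
modem = 0.056                       # 56 Kbps expressed in Mbps
initial_peak = 1000 * modem         # 56 Mbps for 1,000 modem users
buildout     = initial_peak * 10    # 560 Mbps after a ten-times buildout

# Half the users move to 4 Mbps broadband used at 50%;
# the other half keep their modems at full tilt.
demand = 500 * (4 * 0.5) + 500 * modem      # ~1,028 Mbps, nearly
                                            # double the buildout

# Even at 30% broadband adoption the buildout is exceeded:
demand_30 = 300 * (4 * 0.5) + 700 * modem   # ~639 Mbps
```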

I believe it is for this reason the providers have taken their current position. Rather than any one of them unilaterally increasing broadband prices to the consumer or service prices to corporate endpoints, they have decided to force content providers to foot the bill for their lack of foresight. They have not built out their networks fast enough to match the speed at which they have rolled out low-cost, high-speed internet. Now they're paying the price for over-marketing and under-planning with maxed-out networks and consumers ever-hungrier for more speed at lower cost.

Did they think they could decrease prices and increase service indefinitely without appropriate infrastructure to back it up?

So here we are stuck in this current situation. The large providers have had their "oh, shit" moment. They unfortunately hold most of the cards and have nearly convinced enough congresspeople to follow their lead. Again the needs of America's corporations are being put before the needs of her people. What other costs might there be?

For one, any other country that continues to observe network neutrality will continue to see the internet growing explosively. The US will fall behind out of a desire to increase provider profits.

Also, since consumers have little choice which networks their data flows through, providers exercise local monopolies. Without network neutrality, their networks will only be open to the highest bidders, eventually decreasing the level of choice and quality of service for end-users. When neutrality falls, how long before content providers bid each other completely off networks? Will I have to choose my next internet provider (if I have a choice) based on whether it lets me more easily access Yahoo or Google?

So what are we to do? Obviously we should make every effort to ensure our congresspeople are voting in our best interests; that much is obvious. But I would propose another possible solution for Google and friends: if neutrality falls...don't pay. In fact, don't service those providers at all.

Stay with me here. Google and its family of services are likely the most popular destinations on the internet. Popularity is demand; users demand that Google services be available over their internet connections. They may not be screaming now about network neutrality, but when their ISP or the upstream provider tries to extort money from Google and Google responds by serving notice of such extortion back to the users in place of their desired services, you'll have thousands upon thousands of angry customers.

They will be angry not at Google--oh no. Anger is most often directed at the nearest target. If the reception on your TV is bad, the first thing blamed is the TV itself or the hardware or wires connecting it. If your gut grows beyond control, you may likely blame genetics or stress rather than the sodas and chips you consume daily. If you can't reach Google or GMail, guess where the blame and anger fall first: that's right, your ISP. Your ISP will quickly turn around and direct their own anger at the upstream providers that brought this new nightmare down upon them. "My customers can't reach Google because Google refuses to pay your fees. Resolve it or I and my 50,000 subscribers will no longer do business with you."

The beauty of this plan is that an ad-based company like Google won't even stand to lose much money. Google ads are delivered on countless pages across the net, all of which would still operate just as before. Google could bide its time while the furious public demanded net neutrality be reinstated. Supply and demand, and we're back where we started.

There's one other fact I need to call out, unfortunately. Those of you who have demanded absurdly fast connections at absurdly low prices will very likely have to accept that prices may increase to preserve neutrality. The internet as it exists today can't accommodate 100% of users pushing multiple megabits per second...the infrastructure just isn't there. If there really is a crisis brewing on the internet, such that aggregate ISP demand will greatly exceed what backbone providers can handle, either endpoint speeds will need to decrease or prices will have to increase throughout the pipeline to fund network buildout. I would consider it a small price to pay to ensure the future of the internet as we know it. I hope you will too.

For more information on network neutrality, please visit the following links:

And it's out the door! We are currently sending out the announcement: JRuby 0.9.0 is released!

Any frequent readers out there will know there's a ton of improvements in this release. I'll be blogging a bit later on the details of each, but here's a short list:

All non-native Ruby libraries (lib/ruby/1.8 from the C version) are included in this release, so you can download, unpack, and hit the ground running. No installing, no copying files, no fuss.

RubyGems 0.9.0 (coincidence?) is included pre-installed with appropriate scripts to run with JRuby. Gems install beautifully, but I'd recommend you use --no-rdoc and --no-ri until we get a few performance issues worked out (doc generation is still pretty slow).

WEBrick finally runs and can serve requests. This also allows the Rails 'server' script to serve up your Rails apps. Speaking of Rails...

Ruby on Rails now runs most scripts and many simple applications out-of-the-box. The 1.1.4 release of Rails included our patch for block-argument assignment, eliminating an unusual syntax, unsupported by JRuby, that had prevented scripts from running. We've also improved performance a bit and, as mentioned before, gotten the WEBrick-based 'server' script running well.

YAML parsing, StringIO, and Zlib are now Java-based instead of the original pure-Ruby implementations. This has greatly improved the performance of RubyGems and other libraries/apps that make use of these libraries. There are a few libraries remaining to be "nativified" and we expect to get future performance benefits from doing so.

As always, there are a large number of interpreter, core, and library fixes as part of this release.

This release represents another huge leap forward beyond our JavaOne demo. Where Rails was a bit touchy and unpredictable with that release, it's now becoming very usable. Where RubyGems only half-worked before, it now appears to work well every time. We feel we're on track for full Rails support by the end of this summer, with a JRuby 1.0 release to follow. Thanks all for your support and assistance!

Special thanks to JRuby contributors Ola Bini and Evan Buswell for their work on YAML/Zlib reimplementation and server sockets, respectively. We wouldn't have RubyGems or WEBrick working without their contributions.

Sunday, July 2, 2006

So...I had the proposal all ready and in the RubyConf system. I got some feedback from Tom and could have pushed the "final submit" button at any time.

Then we got busy wrapping up the very-exciting 0.9.0 release, I got my dates mixed up and...well, enough excuses. I missed the deadline by *one day*, and the organizers stated that they ethically couldn't make an exception for me.

I'm pretty devastated. By RubyConf we were planning to have Rails working and performance vastly improved. It was to be a true "coming of age" for Ruby on the JVM, and I wanted to share that with the Ruby community at their definitive conference. Sadly, it is not to be.

If any readers are particularly interested in seeing JRuby present at RubyConf 2006, you might drop an email to the organizers and perhaps they'll change their minds.

Now if you'll excuse me, I'm going to spend the rest of the day sulking.

Updates:

John Lam (of the RubyCLR project) would like to see us at RubyConf 2006, and has offered to share his time (assuming his talk is accepted) if we can't sway the organizers. I'm still hoping there's a way to get our own presentation, but it's great to have backers out there pulling for us. Thanks John!

Saturday, July 1, 2006

Rails 1.1.4 was released on Friday to very little fanfare. It was basically a fixit release for routing logic that broke in the 1.1.3 release (which broke while fixing a security issue in 1.1.2, etc). However, there's a less visible but vastly more interesting fix (for JRubyists) also included in 1.1.4.

The former is certainly terse, but it takes a little head-twisting to follow what's going on. Infinitely more serious: it doesn't work in JRuby.

Anguish

I don't know when this syntax became valid or if it's always been valid, but it is seldom used. We had not seen it anywhere except in Rails, and there it's only used in a few top-level scripts for gathering command-line options. The level of effort to implement this missing functionality in JRuby is nontrivial:

- there are parser changes necessary to treat the block arg as an assignment to a hash/array, which is a Call instead of a DVar (dynamic var)
- there are interpreter changes to allow Call nodes to appear in block arg lists
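A contrived illustration of the difference (not Rails' actual code): the troublesome syntax binds a block parameter directly into a hash element, while the conventional form binds a local variable first. Only the second runs on JRuby (and on later versions of C Ruby, where the old syntax was eventually removed):

```ruby
opts = {}

# Old syntax (Ruby 1.8 only): the block parameter is a hash
# element -- a Call node rather than a plain dynamic variable.
#   %w(a b).each { |opts[:last]| }

# Conventional form, valid everywhere: bind a local, then assign.
%w(a b).each { |val| opts[:last] = val }

opts[:last]  # => "b"
```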

Salvation

Naturally, we wanted to avoid adding this functionality if possible, so I emailed the ruby-core mailing list to ask the man himself, matz, whether this was an accepted standard. His response was music to my ears:

FYI, in the future, block parameters will be restricted to local variables only, so that I don't recommend using something other than local variables for the parameters.

Redemption

Being the rabble-rouser that I am, and armed with matz's Word, I contacted the Rails team about "fixing" this unusual block syntax. Naturally, we wanted it fixed because it was going away, and not just because we're lazy. A bug was created, patches were assembled, and finally, in 1.1.4, this syntax has been fixed.

Enlightenment

So what does this mean for JRuby? Until 1.1.4 was released, we had planned to include our own patch for this syntax, to allow Rails to work correctly. You would have had to patch Rails after installing, or none of the standard Rails scripts would run. Now, with the release of 1.1.4, the following will all work out-of-the-box with JRuby 0.9.0 (currently in RC status):

And suddenly, you have http://localhost:3000/mytest/hello running JRuby on Rails.

We'll have a much better HOWTO packed into the 0.9.0 release, and I'll certainly be blogging on various aspects of JRuby on Rails over the next week or two. Keep in mind that we don't officially "support" Rails just yet; there's more work to do. We are, however, another leap beyond the JavaOne demo, and at this rate things are looking very good for an end-of-summer JRuby 1.0.