Why move? Since my blog is hosted at IBM, I've always felt a bit funny about posting things that weren't strictly business related. Constrained. Just a feeling. Never had my wrist slapped, for something I wrote on my blog anyway. ;-) But I have kept most of my posts pretty technical.

I could keep my developerWorks blog and start another one, but that doesn't feel right either. And I don't need to feel double the guilt for why I haven't posted to my blogs recently. My conclusion is that it's time to move my blog off IBM property.

Finally, I did the smart thing and set up a virtual feed at feedburner (listed above), so that when I move next time, it'll be completely transparent, for feed aggregators anyway.

BTW, it was mostly a snap to copy my content from developerWorks back into Blogger, since Blogger supports programmatic posting (more or less, it's AtomPub). And the Roller implementation at developerWorks has a nice little function to dump all my entries out as a feed. After that, it was a simple matter of programming. The only real problem was that Blogger only allows 50 programmatic posts a day to a blog, so it took a couple of days to perform the move.
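For the curious, the "simple matter of programming" boils down to building an Atom entry per post and POSTing it to the blog's collection URL. Here's a minimal sketch of the entry-building half; the endpoint, authentication, and feed-parsing details are omitted, and all the names here are my own, not Blogger's:

```java
// Sketch of AtomPub-style programmatic posting: build the Atom entry
// document for one blog post. Actually posting it is then an HTTP POST
// of this body, with Content-Type: application/atom+xml, to the blog's
// collection URL (auth details vary by server, and are omitted here).
public class AtomEntry {

    // Very naive XML escaping; just enough for a sketch.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static String build(String title, String htmlBody) {
        return "<entry xmlns=\"http://www.w3.org/2005/Atom\">\n"
             + "  <title type=\"text\">" + escape(title) + "</title>\n"
             + "  <content type=\"html\">" + escape(htmlBody) + "</content>\n"
             + "</entry>\n";
    }

    public static void main(String[] args) {
        System.out.println(build("moving day", "<p>new digs</p>"));
    }
}
```

Given the 50-posts-a-day limit, the driver loop around this just has to stop and pick up again the next day.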

(The sessions at RubyConf 2007 were recorded by Confreaks, and will hopefully be available soon on the web. I'll update this post with a link when they are.)

Ropes: an alternative to Ruby strings, by Eric Ivancich

Interesting. Strings have always been a problem area in various ways; SGI's ropes provide an interesting solution for some use cases. In particular, Strings always show up as pigs in Java memory usage, what with the many, long-ish class names and method signatures, which have lots of duplication internally; I wonder if something like ropes might help.

Ruby Town Hall, by Matz

Didn't take any notes, but the one thing that stuck out for me was the guy from Texas Instruments who said they were thinking of putting Ruby in one of their new calculators. He asked Matz if future releases would be using the non-GPL'd regex library, as the TI lawyers were uncomfortable (or something like that; I can't remember the exact words) with its licensing. See also the notes under Rubinius, below, regarding licensing.

But the big news for me, from this question, was the calculator with Ruby. Awesome, if it comes to be. I talked to the guy later and he indicated it was, as I guessed/hoped, the TI-Nspire. Sadly, he also indicated the calculator probably wouldn't be available till mid-2008.

IronRuby, by John Lam

Didn't go into enough technical depth, sadly. And the technical presentation included info on the DLR and then XAML, which I don't think were really required. John had a devil's tail attached to his jacket or pants, which appeared about halfway through the presentation. Really, though, everyone seemed to be quite open to IronRuby; no one seems to be suggesting it's evil or anything. Are they?

JRuby, by Charles Nutter and Thomas Enebo

Lots of good technical info. Tim Bray did a very brief announcement at the beginning about some kind of research Sun is doing with a university on multi-VM work; it sounded like it didn't involve Java, and given the venue, I assume it had something to do with Ruby. Sounds like we'll hear more about it in the coming weeks.

The JRuby crew seems to be making great progress, including very fresh 1.0.2 and 1.1 beta 1 releases.

One thing that jumped right out at me when the 'write a Ruby function in Java' capability was discussed was how similar it seemed to what I've seen in terms of the capabilities of defining extensions in the PHP implementation provided in Project Zero. That deserves some closer investigation. It would be great if we could find some common ground here - perhaps a path to a nirvana of defining extension libraries for use with multiple Java-based scripting languages?

Rubinius

I happened to hit the rubini.us site a few times this weekend, and at one point noticed the following at the bottom of the page: Distributed under the BSD license. It's been a while since I looked at the Ruby implementations in terms of licensing, but I like the sound of this, because I know some of the other implementations' licenses were not completely permissive (see Ruby Town Hall above). Ruby has still not quite caught on in the BigCo environments yet, and I suspect business-friendly licensing may be needed to make that happen. It certainly won't hurt.

Mac OS X Loves Ruby, by Laurent Sansonetti

Oh boy, does it ever. Laurent covered some of the new stuff for Ruby in Leopard, and had people audibly oohing and ahhing. The most interesting was the Cocoa bridge, which allows you to build Cocoa apps in Ruby, using XCode, which (now?) supports Ruby (syntax highlighting, code completion?). Most of the oohing had to do with the capability of injecting the Ruby interpreter into running applications, and then controlling the application from the injector. Laurent's example was to inject Ruby into TextEdit, to create a little REPL environment, right in the editor. Lots of oohing for the scripting of Quartz Composer as well.

Apple also now has something called BridgeSupport, which is a framework whereby existing C programming APIs (and Objective-C?) are fully described in XML files, for use by frameworks like the Cocoa bridge, as well as code completion in XCode. That's fantastic. I've had to do this kind of thing several times over the years, and, assuming the ideas are 'open', it would be great to see more people step up to this, so we can stop hacking C header file parsers (for instance). And I think I could live with never having to write a Java .class file reader again, thankyouverymuch.

I suspect all this stuff is available for Python as well.

Laurent also showed some of the DTrace support. No excuse not to look at DTrace now. Well, once I upgrade to Leopard anyway.

Someone asked "Will Ruby Cocoa run on the iPhone?" Laurent's reply: "Next question".Much laughter from the crowd. Funny, in a sad way, I guess.

Matz Keynote

Matz covered some overview material, and mentioned Ruby will get enterprisey: "The suit people are surrounding us". He then dove into some of the stuff coming in 1.9. Most of it sounds great, except for the threading model moving from green threads to native threads, and a mysterious new loop-and-increment beast, which frankly looked a bit too magical to me. The green vs. native threads thing is a personal preference of mine; I'd prefer that languages not be encumbered with the threading foibles provided by the platform they're running on. Green threads also give you much finer control over your threads. On the other hand, given our multi-core future, I think there's probably no way to avoid interacting with OS-level threads, at some level.

Behaviour Driven Development with RSpec, by David Chelimsky and Dave Astels

I really need to catch up on this stuff; I'm way behind the times here. They showed some new work they were doing that better captured design aspects like stories, including executable stories, with a web interface that can be used to build the stories. That's going to be some fun stuff. Presentation available as a PDF.

Controversy: Werewolf considered harmful?

Charles Nutter wonders if the ever-popular game is detracting from collaborative hack-fests. The game certainly is quite popular. I played one game, my first, and it was a bit nerve-wracking for me. But then, I was a werewolf, and the last one killed (the villagers won); the game came down to the final play, and I'm a lousy liar.

I kept notes again on a Moleskine Cahier pocket notebook, which works out great for me. Filled up about 3/4 of the 64 pages, writing primarily on the 'odd' pages, leaving the 'even' pages for other doodling, meta-notes, drawing lines, etc. I can get a couple of days in one notebook. The only downsides are that you need something to write on, for support, with the Cahiers, and that the last half of the notebook pages are perforated. I don't really need the perforation, but it wasn't a big problem. They end up costing about $2 and change for each notebook.

As usual, I was primarily surfing during the conference on my Nintendo DS with the Opera browser; good enough to twitter, check email, and check Google Reader. It's a good conversation starter, too. At one point, a small-ish, slightly scruffy Asian gentleman leaned over my shoulder to see what in the world I was doing, so I gave him my little spiel on how it was usable for the simple stuff, yada yada. He seemed amused.

In "Is It Atompub?", James Snell discusses how an arbitrary binary file could be posted to a feed via AtomPub, along with its metadata, at the same time, in a single HTTP transaction.

I'm assuming here that by "metadata", James means "stuff in the media link entry".

All James' options look like reasonable things a server CAN do. But it raises the question: how does the client know which of these methods is actually supported? I guess one of the answers is via exposing capabilities via features, which is something James is currently working on.

I think the answer is, if you're writing generic-ish client code, that you can't, today.

Doesn't mean you can't get close. The sequence described in Section 9.6.1 of the brand spankin' new RFC 5023 seems to describe it pretty well. You can shortcut through that example, I assume, by issuing a PUT against the entry returned by the original POST, eliminating the secondary GET. It's just not one transaction anymore; it's two.
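Loosely following the RFC 5023 Section 9.6 example, that shortcut sequence might look like this on the wire; the host names, paths, and entry contents here are made up, and the response to the PUT is elided:

```http
POST /collection HTTP/1.1
Host: media.example.org
Content-Type: image/png
Slug: The Beach

...binary image data...

HTTP/1.1 201 Created
Location: http://media.example.org/edit/the_beach.atom
Content-Type: application/atom+xml;type=entry

...the created media link entry...

PUT /edit/the_beach.atom HTTP/1.1
Host: media.example.org
Content-Type: application/atom+xml;type=entry

<entry xmlns="http://www.w3.org/2005/Atom">
  <title>A nice sunset at the beach</title>
  ...the rest of the entry the POST returned, with your edits...
</entry>
```

The POST response body already carrying the created media link entry is what makes skipping the intermediate GET plausible in the first place.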

I'll also add that James' first option, having the server retrieve the metadata from the binary blob's intrinsic metadata, seems wrong. It would require the server to know about metadata formats for all manner of binary blobs users might want to store. I think it's fine for a server to support that, if they want, but it doesn't seem right to assume that every server DOES support it. But don't get me wrong: I would love to have the EXIF data extracted out of my photos upon upload.

So the next interesting question is: if the server DOES populate the metadata for the resource on my behalf, based on the resource, how do I as a generic client know that? Check for a non-empty atom:summary element?

BTW, I hadn't yet gotten a chance to extend thanks to the folks working on the Atom and AtomPub specs, for all the work they've done over the years. Standards work is hard and thankless. Kudos all around.

---

Edited at 9:03pm: oops - I wanted to title this "only the server knows", not "only the shadow knows".

First, as a meta-comment: I truly love to see folks expressing such polarizing and radical opinions. Great conversation starters; they get you thinking from someone else's viewpoint, playing devil's advocate, etc. At the very least, it's a change of pace from the REST vs. everything else debate. :-)

Back to the issue at hand, debugging Ruby. I certainly understand where Giles is coming from, as I've questioned a few Rubyists about debugging, only to have them claim "I don't need a debugger" and "the command-line debugger is good enough". There is clearly a notion in the Ruby community, unlike one I've seen almost anywhere else, that debuggers aren't important.

I twittered this morning, in reference to Giles' post, the following: "Seems like Stockholm syndrome to me". As in: if you don't have a decent debugger to use in the first place, it seems natural to rationalize reasons why you don't need one. I have the exact same feelings about folks who spend almost all of their time programming Java claiming "the only way to do real programming is in an IDE"; because it's pretty clear to me, at this point, that the only way to do Java programming is with an IDE. I've personally not needed an IDE to do programming in any other language, except of course Smalltalk, where it was unavoidable. Of course, extremely programmable text editors like emacs, TextMate, and even olde XEDIT kinda blur the line between a text editor and an IDE.

My personal opinion: I love debuggers. If I'm stuck with a line-mode debugger, then fine, I'll make do, or write a source-level debugger myself (aside: you just haven't lived life if you haven't written a debugger). But I'll usually make use of a source-level debugger, if it's easy enough to use. Sometimes they aren't.

Honestly, I love all manner of program-understanding tools. Doc-generators, profilers, tracers, test frameworks, code-generators, etc. They're tools for your programming kit-bag. I use 'em if I need 'em and I happen to have 'em. They're all 'crutches', because in a pinch, there's always printf() & friends. But why hop on one leg if I can actually make better headway, with a crutch, to the finish line of working code? It seems like a puritanical view to me to say "you shouldn't use crutches", and it's especially ironic for Ruby itself, which is a language full of crutches!

Perhaps there is something about Ruby itself that makes debugging a bad fit. Or, as Giles seems to be indicating, perhaps testing frameworks can completely take the place of debugging. Some unique aspect of Ruby that sets it apart from other languages, where debuggers are deemed acceptable and desirable. As a Ruby n00b, I'm not yet persuaded.

As for Ruby debuggers themselves, NetBeans 6.0 (beta) has a fairly nice source-level debugger that pretty much works out of the box. Eclipse fanatics can get Aptana or the DLTK (Dynamic Languages Toolkit). I think it would be nice to have a stand-alone source-level Ruby debugger, outside the IDE, because honestly I think TextMate is a good enough IDE for Ruby anyway.

When people think about links, with regard to tying together information on the web, the usual thoughts are of URLs. Either absolute URLs, or a URL relative to some base (either implicitly the URL of the resource that contains the link, or explicitly via some kind of xml:base-like annotation).

But I wrestle with this.

Here's one issue. Let's say I have multiple representations of my resources available; today you see this typically as services exposing data as either JSON or XML. If that representation includes a link to other data that can be exposed as either JSON or XML, do you express that link as some kind of "platonic URL"? Or, if you are doing content negotiation via 'file name extension' sort of processing, does your JSON link point to a .json URL, but your XML link point to a .xml URL?

The godfather had something interesting to say in a recent presentation. In "The Rest of REST", on slide 22, Roy Fielding writes:

Hypertext does not need to be HTML on a browser - machines can follow links when they understand the data format and relationship types

Where's the URL? Perhaps tying links to URLs is a constraint we can relax. Consider, as a complementary alternative, that just a piece of data could be considered a link.

Here's an example: let's say I have a banking system with a resource representing a person, which has a set of accounts associated with it. I might typically represent the location of the account resources as a URL. But if I happen to know, a priori, the layout of the URLs, I could just provide an account number (assuming that's the key). With the account number, and knowledge of how to construct a URL to an account given that information (and perhaps MORE information), the URL to the account can easily be constructed.
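A sketch of what that client-side construction might look like; the URL layout used here is purely an assumption for illustration:

```java
// Turn a 'link as data' (an account number) into a URL, given a priori
// knowledge of the server's URL layout. The layout assumed here -
// {base}/accounts/{accountNumber} - is hypothetical.
public class AccountLinks {

    public static String accountUrl(String base, String accountNumber) {
        return base + "/accounts/" + accountNumber;
    }

    public static void main(String[] args) {
        // The representation only carried "12345"; the client builds the link.
        System.out.println(accountUrl("http://bank.example.com", "12345"));
        // -> http://bank.example.com/accounts/12345
    }
}
```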

The up-side is that the server doesn't have to calculate the URL, if all it has is the account number. It just provides the account number. The notion of content-type-specific URLs goes away; there is only the account number. The resources on the server can also be a bit more independent of each other; they don't have to know where a resource actually resides, just to generate a URL to it.

Code-wise, on the server, this is nice. There's always some kind of translation step on the server that's pulling your URLs apart, figuring out what kind of resource you're going after, and then invoking some code to process the request. "Routing". For that code to also know how to generate URLs going back to other resources means the code needs the reverse information.

The down-side, of course, is that you can't use a dumb client anymore; your client now needs to know things like how to get to an account given just an account number.

And just generally, why put more work on the client, when you can do it on the server? Well, server performance is something we're always trying to optimize - why NOT foist the work back to the client?

But let's also keep in mind that the Web 2.0 apps we know and love today aren't dumb clients. There's user-land code running there in your browser, typically provided by the same server that's providing your resources in the first place. I.e., JavaScript.

I realize that's a bad example for me to use, me being the guy who thinks browsers are a relatively terrible application client, but what the heck; that's the way things are today.

For folks who just want the data, and not the client code, because they have their own client code - well, they'll need some kind of description of how everything's laid out; the data within a resource representation, and the locations of the resources themselves. But the server already knows all that information, and could easily provide it in one of several formats (human- and machine-readable).

As a proof point of all of this, consider Google Maps. Think about how the map tiles are being downloaded, and how they might be referenced as "links". Do you think that when Google Maps first displays a page, all the map tiles for that first map view are sent down as URLs? Think about what happens when you scroll the map area, and new tiles need to be downloaded. Does the client send a request to the server asking for the URLs of the new tiles? Or maybe those URLs were even sent down as part of the original request.

All rhetorical questions, for me anyway. I took a quick look at the JavaScript for Google Maps in FireBug, and realized I've already debugged enough obfuscated code for a few lifetimes. Probably a TOS violation to do that anyway. Sigh. I'll leave that exercise to younger pups. But ... what would you do?

For Google Maps, it's easy to imagine programmatically generating the list of tiles based on location, map zoom level, and map window size, assuming the tiles are all accessible via URLs that include location and zoom level somewhere in the URL. In that case, the client code for calculating the URLs of the tiles needed is just a math problem. Why make it more complex than that?
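To make "just a math problem" concrete, here's a sketch using the standard power-of-two spherical-mercator tiling scheme that many tiled map services use; the URL template is made up, and I'm not claiming this is Google's actual layout:

```java
// Compute which tile covers a given lat/lon at a given zoom level, and
// build a (hypothetical) URL for it. At zoom z the world is a 2^z x 2^z
// grid of tiles; this is the common "slippy map" math, an assumption
// about how a tiled map service might lay things out.
public class TileMath {

    public static int tileX(double lonDeg, int zoom) {
        return (int) Math.floor((lonDeg + 180.0) / 360.0 * (1 << zoom));
    }

    public static int tileY(double latDeg, int zoom) {
        double latRad = Math.toRadians(latDeg);
        double n = 1 << zoom;
        // asinh(tan(lat)) written out as ln(tan(lat) + sec(lat))
        return (int) Math.floor(
            (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
    }

    public static String tileUrl(double lat, double lon, int zoom) {
        // Hypothetical URL template: zoom/x/y baked into the path.
        return "http://tiles.example.com/" + zoom + "/"
             + tileX(lon, zoom) + "/" + tileY(lat, zoom) + ".png";
    }

    public static void main(String[] args) {
        // The tile just south-east of (0, 0) at zoom 1:
        System.out.println(tileUrl(0.0, 0.0, 1));
        // -> http://tiles.example.com/1/1/1.png
    }
}
```

Scrolling the map then means computing the new (x, y) range for the viewport and fetching only the tiles you don't already have; no round-trip to ask the server for URLs.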

I think there are problem domains where dealing with 'links' as just data, instead of explicit URLs, makes sense, as outlined with Google Maps. Remember what Roy wrote in his presentation: "machines can follow links when they understand the data format and relationship types". Of course, there's plenty of good reason to continue to use URLs for links as well, especially with dumb-ish clients.

I think it's fair to say there's a tension here. On the one hand, there's no need to wrap your data in Atom, or your protocols in APP, if you don't need to - that's just more stuff you have to deal with. On the other hand, if Atom and APP support becomes ubiquitous, why not take advantage of some of that infrastructure; otherwise, you may find yourself reinventing wheels.

I can certainly feel Yaron's point: "I'm running into numerous people who think that if you just sprinkle some magic ATOM pixie dust on something then it suddenly becomes a standard and is interoperable." I'm seeing this a lot now too. Worrisome. Even more so now that more and more people are familiar with the concept of feeds, but don't understand the actual technology. I've seen people describe things as if feeds were being pushed to the client, for instance, instead of being pulled from the server.

One thing that bugs me is the potentially extraneous bits in feeds / entries that are actually required: atom:name, atom:title, and atom:summary. James Snell is right; it's simple enough to boilerplate these when needed. But to me, there's enough beauty in Atom and APP, and enough potential for reuse across a wide variety of problem domains, that these seem like warts.

Another thorn in my side is the notion of two ways of providing the content of an item: atom:content with embedded content, or content linked to with the src attribute. The linked-to style is clearly needed, for binary resources like image files. But it's a complication; it would clearly be easier to have just one way to do it, and that would of course have to be the linked-to style.
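For reference, the two styles look like this inside an entry (the image URL here is made up):

```xml
<!-- embedded style: the content travels inside the feed itself -->
<content type="xhtml">
  <div xmlns="http://www.w3.org/1999/xhtml"><p>Hello, world.</p></div>
</content>

<!-- linked-to style: the content lives elsewhere, pointed at via src -->
<content type="image/png" src="http://example.org/photos/sunset.png"/>
```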

The picture in my mind is that Atom becomes something like ls: just get me the list of the 'things' in the collection, and some of their metadata. I'll get the 'things' separately, if I ever even need them. Works for operating systems.

Of course, the tradeoff there is that there are big performance gains to be had by including your content IN the feed itself, with the embedded style; primarily in reducing the number of round-trips between client and server from 1 + # of resources, to 1. It doesn't help that our current web client of choice, the web browser, doesn't provide programmatic control over its own cache of HTTP requests. Maybe if it did, the linked-to style would be less of a performance issue.

I suspect I'm splitting hairs to some extent; one of my many vices. I'm keeping an open mind; I'm glad people are actually playing with using Atom / APP as an application-level envelope / protocol. It's certainly worth the effort to try. There's plenty to learn here, and we're starting to have some nice infrastructure to help us along, and the more people play, the more infrastructure we'll get, and ...

Given that I can say that this is where I live and this is where I work, I'd claim it's document-oriented enough for me :-)

Stefan is referring to a comment discussion in his blog post on Lively. How we got from Lively to whether Google Maps is "document-oriented" ... well, read the original post and the comments.

w/r/t "document oriented", the term is used on the Lively page, but I think Stefan is misinterpreting it. I can only infer from Stefan's post that he believes a web application is "document oriented" if you can generate a URL to somewhere in the middle of it. Like he did with the two map links he included.

I'd refer to this capability as having addressable pages. Addressable in terms of URLs that I can paste into a browser window, email to my mom, or post in a blog entry. An important quality. It's especially nice if you can just scrape the URL from the address bar of your browser, but that's not critical, as Google Maps proves; a button to generate a link is an acceptable, but less friendly, substitute. Being addressable is especially hard for "web 2.0" apps, which is why I mention it at all.

My read of the Lively page's usage of "document-oriented" is more a thought on application flow. In particular, pseudo-conversational processing, which is both the way CICS "green screen" applications are written, and the way Web 1.0 applications are written. It turns out to be an incredibly performant and scalable style of building applications that can run on fairly dumb clients. The reason I infer this as "document-oriented" is that, in the web case, you literally are going from one document (one URL) to another document (another URL) as you progress through the application towards your final goal. Compare that to the style of imperative and event-driven programming you apparently do with Lively.

So, with that thought in mind, Google Maps is clearly not "document oriented". The means by which you navigate through the map is 'live'; you aren't traversing pages like you did in old-school MapQuest (which, btw, is also now "live").

But even still, given my interpretation of Stefan's definition, I'd say there's no reason why a Lively app can't be just as "document-oriented" as Google Maps; that is, exposing links to application state inside itself as URLs. You may need to work at it, like you would with any Web 2.0 app, but I don't see any technical reason why it can't be done. Hint: client code running in your browser can inspect your URLs.

Back to Stefan's original note about Lively: "it might be a good idea to work with the Web as opposed to fight against it". I think I missed Stefan's complaints about Google Maps fighting against the web. Because if Lively is fighting against the web, then so is Google Maps.

Lastly, a note that Lively is built out of two technologies, JavaScript and SVG, and it runs in a web browser. I'm finding it really difficult to figure out how Lively is fighting the web.

Another problem with the SOA name is the "service" bit. At least for me, the term "service" connotes a collection of non-uniform operations. I don't even like the phrase "REST Web services." Certainly, SOAP/WS-*, CORBA, DCOM, etc. fit this definition. But REST? Not so much. In REST the key abstraction is the resource, not the service interface. Therefore SOA (and I know this is not anyone's strict definition) encompasses the above mind set, but includes SOAP and similar technologies and excludes REST.

If you change Pete's definition of "service" to be "a collection of operations", independent of whether they are uniform or not, then REST fits the definition of service. Next, you can simply say the resource (URL) is the service interface, for REST. Just a bunch of constraints / simplifications / specializations of the more generic 'service' story.

Sure, there are plenty of other details that separate REST from those other ... things. But you can say that about all of them; they're ALL different from each other, in the details. And at 10K feet, they're all doing generally the same thing.

As a colleague mentioned to me the other day, REST is just another form of RPC.

I feel like we might be throwing out the baby with the bath water here. It's true that I never want to go back to the CORBA or SOAP/WS-* worlds (I have the scars to prove I was there), but that doesn't mean there's nothing to learn from them. For instance, the client-facing story for REST seems a bit ugly to me. I know this isn't real REST, but if this is the kind of 'programming interface' that we have to look forward to, in terms of documentation and artifacts to help me as a programmer ... we got some problems.

I look forward to seeing what Steve Vinoski brings to the table, as a fellow scar-bearer.

Project Zero's ebullient leader, Jerry Cuomo, just published an article at his blog talking about the PHP support in Project Zero. If you didn't already know, PHP is supported in Project Zero using a PHP interpreter written in Java. Pretty decent 10K-ft summary of why, what, etc. With links to more details.

Wanted to point out two interesting things, to me, from Jerry's post ...

"The idea with XAPI-C is to take the existing php.net extensions, apply some macro magic, potentially some light code rewrite, and make those extension shared libraries available to Java through JNI."

Interesting programming technique; it's trying to reuse all the great, existing PHP extensions out there. One of PHP's great strengths is the breadth of the extension libraries. It would be great to be able to reuse this code, if possible. It's a really interesting idea to me in general, to be able to reuse extension libraries, not just across different implementations of the same language, but across multiple languages. It just seems like a terrible waste to have lots of interesting programming language libraries available, but not be able to use them in anything but a single language.

"We have debated the idea of moving our PHP interpreter into a project unto it's own -where it can explore better compatibility with php.net as well as pursue supporting a full complement of PHP applications. Thoughts?"

This seems like it makes a lot of sense, especially if it meant being able to open source the interpreter to allow a wider community to develop it. I think the main question is: is there a wider community out there interested in continuing to develop the interpreter and libraries?

Lastly, I want to point out how impressed I've been by the teams in Hursley, UK and RTP, NC for getting the interpreter as functional as it is so quickly. The team maintains a wiki page at the Project Zero web site describing the current state of functionality, if you're interested in seeing how far along they are.

A question: if you had to provide a client library to wrapper your RESTful web services, would you rather expose it as a set of resources (URLs) with the valid methods (request verbs) associated with them, or provide a flat 'function library' that exposed the resources and methods in a more human-friendly fashion?

Example. Say you want to create a to-do application, which exposes two resources: a list of to-do items, and a to-do item itself. A spin on the "Gregorio table" might look like this:

    resource      URL template       HTTP verb   description
    ------------  -----------------  ---------   ----------------------------
    to-do items   /todo-items        GET         return a list of to-do items
    to-do items   /todo-items        POST        create a new to-do item
    to-do item    /todo-items/{id}   GET         return a single to-do item
    to-do item    /todo-items/{id}   PUT         update a to-do item
    to-do item    /todo-items/{id}   DELETE      delete a to-do item

(Please note: the examples below are painfully simple, exposing just the functional parameters (presumably URI template variables and HTTP request content) and return values (presumably HTTP response content), and not contextual information like HTTP headers, status codes, caches, socket pools, etc. A great simplification. Also note that while I'm describing this in Java code, the ideas are applicable to other languages.)

If you were going to convert this, mechanically, to a Java interface, it might look something like this:
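The original code listing didn't survive the trip here, so the following is my reconstruction of the idea: one interface per resource, with the HTTP verbs as method names. The ToDoItem type and all the names are placeholders:

```java
import java.util.List;

// 'Pure' flavor: a mechanical transcription of the table. One interface
// per resource; the method names are the HTTP verbs. ToDoItem stands in
// for whatever the item representation actually is.
public class PureFlavor {

    public static class ToDoItem {
        public final String id, text;
        public ToDoItem(String id, String text) { this.id = id; this.text = text; }
    }

    // resource: to-do items, at /todo-items
    public interface ToDoItemsResource {
        List<ToDoItem> get();              // GET  - return a list of to-do items
        ToDoItem post(ToDoItem newItem);   // POST - create a new to-do item
    }

    // resource: to-do item, at /todo-items/{id}
    public interface ToDoItemResource {
        ToDoItem get(String id);                 // GET    - return a single to-do item
        ToDoItem put(String id, ToDoItem item);  // PUT    - update a to-do item
        void delete(String id);                  // DELETE - delete a to-do item
    }

    public static void main(String[] args) {
        System.out.println("pure interfaces defined: "
            + ToDoItemsResource.class.isInterface() + ", "
            + ToDoItemResource.class.isInterface());
    }
}
```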

A different way of thinking about this is to think about the table as a flat list of functions. In that flavor, add another column to the table, named "function", where the value in the table will be unique across all rows. Presumably the function names are arbitrary, but sensible, like ToDoList() for the /todo-items - GET operation.
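A sketch of that flat flavor, with function names invented along the lines of the hypothetical "function" column:

```java
import java.util.List;

// 'Applied' flavor: the same five operations as a flat function library,
// one method per row of the table. The names are my own inventions.
public class AppliedFlavor {

    public static class ToDoItem {
        public final String id, text;
        public ToDoItem(String id, String text) { this.id = id; this.text = text; }
    }

    public interface ToDoService {
        List<ToDoItem> toDoList();                      // GET    /todo-items
        ToDoItem toDoCreate(ToDoItem newItem);          // POST   /todo-items
        ToDoItem toDoGet(String id);                    // GET    /todo-items/{id}
        ToDoItem toDoUpdate(String id, ToDoItem item);  // PUT    /todo-items/{id}
        void toDoDelete(String id);                     // DELETE /todo-items/{id}
    }

    public static void main(String[] args) {
        System.out.println("applied operations: "
            + ToDoService.class.getDeclaredMethods().length);
    }
}
```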

Now, if you look at the combination of the two 'pure' interfaces, compared with the 'applied' interface, there's really no difference in function. In fact, the code to implement all these methods across both flavors would be exactly the same.

The only difference is how they're organized.

Now, the question is, which one is better?

Now, you might say I'm crazy - who would ever choose the 'pure' story over the 'applied' story? And my gut tells me you're right. The 'applied' story seems to be a better fit for humans, who are largely going to be the clients of these interfaces, writing programs to use them.

But this flies in the face of transparency, where we don't want to hide stuff so much from the user. HTTP is in your face, and all that. At what point do we hide HTTP-ness? If you don't want to hide stuff from your users, you might choose 'pure'.

And I wonder, are there other advantages to the 'pure' interface? You might imagine some higher-level programming capabilities (mashup builders, or even meta-programming facilities, if your programming language can deal with functions/methods as first-class objects) that would like to take advantage of the benefits of the uniform interface (as in the 'pure' interface).

And of course, there's always the option of supporting both interfaces, as in something like this:
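Again, the original listing is missing, so here's a hedged sketch of what "both" might look like: all ten methods on one interface, with the applied flavor delegating to the pure one (via default methods, a convenience the 2007 original wouldn't have had):

```java
import java.util.List;

// Both flavors in one interface: ten methods, but only five real
// operations; each 'applied' method is just another name for a 'pure' one.
public class BothFlavors {

    public static class ToDoItem {
        public final String id, text;
        public ToDoItem(String id, String text) { this.id = id; this.text = text; }
    }

    public interface ToDoService {
        // 'pure' flavor - these five are what an implementor writes
        List<ToDoItem> get();                  // GET    /todo-items
        ToDoItem post(ToDoItem newItem);       // POST   /todo-items
        ToDoItem get(String id);               // GET    /todo-items/{id}
        ToDoItem put(String id, ToDoItem it);  // PUT    /todo-items/{id}
        void delete(String id);                // DELETE /todo-items/{id}

        // 'applied' flavor - delegates to the pure methods, so the
        // duplication costs no extra implementation code
        default List<ToDoItem> toDoList()                     { return get(); }
        default ToDoItem toDoCreate(ToDoItem newItem)         { return post(newItem); }
        default ToDoItem toDoGet(String id)                   { return get(id); }
        default ToDoItem toDoUpdate(String id, ToDoItem item) { return put(id, item); }
        default void toDoDelete(String id)                    { delete(id); }
    }

    public static void main(String[] args) {
        System.out.println("total methods: "
            + ToDoService.class.getDeclaredMethods().length);
    }
}
```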

Even though you have 10 methods to implement, each real 'operation' is duplicated in both the 'pure' and 'applied' flavors, so you really only have 5 methods to implement. No additional code for you to write (relatively), but double the number of ways people can interact with your service.

Shiver me timbers, yes, I'm blogging a bit about NetBeans. After hearing such gushing reviews of it, I figured I'd take a look. It's been at least a year, and probably more, since I last looked at it. And I should note, I'm just looking at the Ruby bits.

Thought I'd provide some quick notes on NetBeans 6.0 Beta 1, as a confirmed Eclipse user. I'd give the history of my usage of Eclipse, but then Smalltalk enters the picture around the pre-Eclipse VisualAge Java time-frame, and you don't want me to go there, do ya matey? That would just make me sad anyway, remembering the good old days.

I've also used the RDT functionality available in Aptana, and will make comparisons as appropriate.

cons:

On the mac, uses a package installer, instead of just a droppable .app file.

Mysterious 150% CPU usage after setting my Ruby VM to be my MacPorts-installed Ruby. I didn't see any mention in the IDE of what it was doing, but I figured it was probably indexing the Ruby files in my newly-pointed-to runtime. It only lasted a minute or so. If it had lasted much longer, I might have killed it, and then uninstalled it.

Can only have a single Ruby VM installed; Eclipse language support usually allows multiple runtimes to be configured, one as default, but overrideable on a per-project or per-launch basis. What do JRuby folks do who want to alternate between running on JRuby and MRI?

Plenty of "uncanny valley" effects going on, because Swing is in use. Of course, Eclipse also has a lot of custom UI elements; I'm becoming immune to the uncanny valley, and Firefox on the Mac doesn't help there either.

pros:

I see the Mac .app icon changed from the sharp-cornered version to a kinder, gentler version (see the image at the top), but I think I can still validly compare Eclipse and NetBeans to two of my favorite sci-fi franchises, given their logo choices. But it's certainly less Borg-ish than older versions.

The install now ships as a .dmg (disk image file) for the Mac, instead of an embedded zip file in a shell script.

Debugging works great. Same as Eclipse with the RDT.

I can set Eclipse key-bindings.

F3 ("find the definition") works most of the time, like in Eclipse. In fact, this is cool: F3 on a 'builtin' like Time, and NetBeans generates a 'dummy' source file showing you what the source would look like, sans method bodies, but with comments, and the method navigator populated. Nice!

A Mercurial plugin is available for easy installation through the menus, and CVS and SVN support is built in. I played a bit with the Mercurial plugin in a previous milestone build, and it was easy enough to use, but I never could figure out how to 'push' back to my repo. Why Eclipse doesn't ship SVN support, built-in, in this day and age, is a mystery to me.

Don't need to add the "Ruby project nature" to a project just to edit a Ruby file in the project. How I despise Eclipse's project natures.

Provides great competition for Eclipse.

Quite usable, overall. Hats off to the NetBeans folks! I'll probably start using it for the one-off-ish Ruby work I do.