I just knocked out a generic implementation of a sweeper that leverages Java’s concurrency structures to provide a very pluggable background replacement for finalize(), which we all know is evil. It also works well for caching. You can find it over on GitHub under the name RobertFischer/CleanSweep; instructions on its use are in the README there.
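CleanSweep’s exact API is in its README, so I won’t misquote it here. But the underlying technique such a sweeper builds on (a daemon thread draining a ReferenceQueue of PhantomReferences and running registered cleanup actions) can be sketched like this; all names below are mine, not CleanSweep’s:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Not CleanSweep's actual API -- a sketch of the pattern it builds on:
// register a cleanup action against an object, and a daemon thread runs
// that action once the object becomes unreachable.
class SweeperSketch {
  private static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();
  // Keep the phantom references reachable, mapped to their cleanup actions.
  private static final Map<PhantomReference<Object>, Runnable> ACTIONS =
      new ConcurrentHashMap<>();

  static {
    Thread sweeper = new Thread(() -> {
      while (true) {
        try {
          // remove() blocks until some referent has been collected
          Runnable action = ACTIONS.remove(QUEUE.remove());
          if (action != null) action.run();
        } catch (InterruptedException e) {
          return;
        }
      }
    }, "sweeper");
    sweeper.setDaemon(true);
    sweeper.start();
  }

  /** Run the given action after {@code target} becomes unreachable. */
  public static void onSweep(Object target, Runnable action) {
    ACTIONS.put(new PhantomReference<>(target, QUEUE), action);
  }
}
```

Unlike finalize(), nothing here can resurrect the object, and the cleanup work runs on a thread you control.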

This is an announcement of a bunch of open source code that I’ve just released.

Using GitHub as a Maven Repository for Gradle

If you’re using Gradle for your JVM builds (and you should be) and GitHub for your open source project infrastructure (and you should be), then you might be pleasantly surprised to know that you can use GitHub as a Maven repository, which means that your library can be deployed to and served from GitHub’s own cloud infrastructure.

Your clients add this line to their Gradle build scripts:

apply from:"http://smokejumperit.com/github-libs.gradle"

And at that point, they can draw from anything deployed on GitHub. Good times!
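The apply-from script does the repository wiring for you. If you would rather wire it up by hand, the shape is just a Maven repository whose URL points at GitHub-hosted content; the user, repo, and path below are hypothetical placeholders:

```groovy
repositories {
  mavenCentral()
  // Hypothetical coordinates: substitute the GitHub user, repo, and branch
  // that actually hold the deployed Maven artifacts.
  maven { url "http://github.com/SomeUser/some-repo/raw/master/repo" }
}
```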

JSON Parser Library

A long while back, I was pissed off at the available JSON parsing libraries, so I wrote my own JavaCC-based JSON parser library. It supports a fair bit of the broken JSON that is out in the wild, and it is screaming fast. For more information, see the README for the RobertFischer/json-parser project.

JSON Client for the JVM: Resty

I’ve been playing around with a bunch of different JSON clients for interacting with a REST service (GitHub’s API v3) on the JVM, and I didn’t really like any of them. All of them felt WAAAAAAAAAAAY too Java-y, and I was looking for something much simpler. RESTClient for Groovy was nice, but I had trouble debugging the errors going to and from the server. After much searching, the best I found was beders’s Resty library, but it had one problem: it couldn’t parse JSON arrays at the top level. So I forked it.

You can check out my version of Resty, which uses my more fault-tolerant and faster JavaCC JSON parser (see above) and can parse top-level arrays. There’s also substantially improved error handling, and it’s now built using Gradle instead of Maven because XML’s pointy brackets make my eyes bleed.

I’m getting back into the game a little bit, and I decided to take a look at Lift for web development. After an initially promising experience, I became totally unhappy with Eclipse (it began forgetting I had Google App Engine libraries on the classpath after every clean). So I moved back to the command line. The recommendation to use Simple Build Tool or Maven for Lift put me off: SBT is pretty weak for a build tool, and Maven is…well…Maven, off to download the Internet. So I went back to Gradle.

For the record, I’m using Objectify for my Google App Engine development, because JPA drags you into that whole ugly ORM conceptual space for no good reason: there’s no “R” in GAE, so you might as well deal with your data in a more natural way.

So, given that situation, I came up with a build.gradle script for the job. (Forgive the ugly organization: it’s the relevant points extracted from a multi-project build.)
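A rough sketch of the moving parts of such a build, with plugin coordinates and library versions as illustrative assumptions rather than the actual script:

```groovy
// A sketch, not the actual script: coordinates and versions are illustrative.
buildscript {
  repositories { mavenCentral() }
  dependencies {
    // the GAE plugin supplies the gaeRun task mentioned below
    classpath 'org.gradle.api.plugins:gradle-gae-plugin:0.1' // illustrative version
  }
}

apply plugin: 'scala'
apply plugin: 'gae'

dependencies {
  scalaTools 'org.scala-lang:scala-compiler:2.8.1'   // illustrative versions throughout
  compile 'org.scala-lang:scala-library:2.8.1'
  compile 'net.liftweb:lift-webkit_2.8.1:2.3'
  compile 'com.googlecode.objectify:objectify:2.2'
}
```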

If you start Lift up at this point (using gradle gaeRun), you get errors about Lift not having permission to mess with the thread pool.

Now, here’s the amazing magic trick which I found documented nowhere but discovered while digging through the Lift source code. It’s an astounding and miraculous trick which is necessary in order to get Lift to work on Google App Engine.

In the file ./src/main/resources/props/default.props, make sure to have this set:

inGAE=true

That’s it. You do that, Google App Engine works. You don’t do that, it won’t. Magic!

I’ve been working in .NET for a while now, but the time finally came when I no longer had to be working on a Microsoft workstation via VPN. This meant that I was free to start developing .NET on my Mac, and that meant starting with installing F#.

I presumed I was going to have to do some kind of hardcore magic dance to get this to work, so imagine my surprise when I fired up MacPorts for shits and giggles but actually discovered an fsharp port! It even seemed reasonably up-to-date! What a world!

So I began with the three steps that begin basically every serious endeavor in MacPorts: a selfupdate, an upgrade outdated, and a clean installed. Then I dove in: install fsharp.

Unfortunately, the libgdiplus port (a dependency of fsharp) failed to build for me. Some googling around turned up a ticket saying that the build fails unless its dependencies are built with +universal, so I should do that. According to the ticket, the problem was probably cairo, so I fired off upgrade --enforce-variants cairo +universal. That got me a bit further, but ultimately it failed again: libsasl (from cyrus-sasl) was apparently also the wrong architecture. About this time I was having Gentoo flashbacks, so I fired off upgrade --enforce-variants installed +universal, because sometimes the only way to be sure is to take off and nuke the site from orbit. That completed just fine after an hour or so.

Another install fsharp later and I got past libgdiplus. The mono and fsharp ports installed like a breeze. (It was a very slow breeze — more like stagnant air being pushed around by a decrepit fan — but it worked all the same.)

So far, things look pretty good. I have to run fsi as fsi --no-gui --readline for some reason, but that’s pretty minor.

I used to have a hackish JavaCC plugin under my Gradle-Plugins project, but I have ported it over to the Compiler plugins infrastructure that I announced before, and I’ve also added JJTree into the mix. This new approach solves a myriad of annoying edge cases in the old JavaCC plugin. The biggest benefit of the new version is that the process now integrates seamlessly into the normal Java compile, so you don’t have to split up your Java code into multiple different places. The second biggest benefit is that you can have more than one JavaCC grammar per project (you now get one per source set, like in all the other languages).
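I won’t reproduce the plugin’s exact DSL here (the README has it), but the shape of the per-source-set convention, with a hypothetical plugin id and layout, is:

```groovy
// Hypothetical plugin id and layout; see the gradle-plugins README for the
// real conventions. The point: grammars live per source set, and their
// generated Java feeds straight into that source set's normal compile.
apply plugin: 'javacc'

// src/main/javacc/*.jj  -> generated sources compiled with src/main/java
// src/test/javacc/*.jj  -> a second grammar set, scoped to the test source set
```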

This is probably it for compilers for the moment. Left to my own devices, I’ll be releasing compilers for Ashlar (my own language) and BiteScript next, but they’re going to be a while into the future. An update to the Mirah one will be forthcoming when Mirah hits a slightly more stable point.

I just released the DepNames plugin for Gradle. It’s part of my gradle-plugins collection, appearing in version 0.6.6. You can read the description with an example from the README, but the basic idea is that you can create “keywords” for your common external dependencies.

And then you can upgrade your definition of felix in one place, and all your projects get the same upgrade. The dependency keywords can be defined in either the root project or globally (with the root project definitions trumping the global ones).
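The README has the real DSL; the idea, with hypothetical syntax and coordinates, is a single shared table of names:

```groovy
// Hypothetical syntax and coordinates; see the DepNames README for the real DSL.
// In the root project, define the keyword once:
dependencyNames {
  felix = 'org.apache.felix:org.apache.felix.framework:3.2.2'
}

// In any subproject, use the keyword instead of the full coordinates:
dependencies {
  compile felix
}
```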

Ashlar’s infrastructure is now live. Basically, we have a compiler and a runtime (ashlarc and ashlar, respectively). Ashlar compiles code down to a component (JAR + properly configured metadata). When Ashlar executes, it loads the component (OSGi install + processing), checks the metadata for any additional components required, fetches those additional components via Ivy, and loads them. Only after all that is done does it invoke the component (OSGi start + processing), which fires off any module in the component with code to execute.

So, in short, you don’t have to muck about with the classpath in Ashlar: we resolve that automatically. If you need to do fancy Ivy stunts, you can muck about with ~/.ashlar/ivy.settings. If you don’t want to think about Ivy or if you mess up your ivy.settings, Ashlar will automatically generate an appropriate one.
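Stripped of OSGi and Ivy, the resolve-then-start cycle described above reduces to a transitive walk over component metadata. A toy model, with an in-memory map standing in for the Ivy repository and all names hypothetical:

```java
import java.util.*;

// A toy model of the load/resolve/start cycle described above, with OSGi
// and Ivy replaced by an in-memory "repository". All names are hypothetical.
class ResolveSketch {
  // component name -> names of the components its metadata requires
  static final Map<String, List<String>> REPO = Map.of(
      "app",  List.of("json", "http"),
      "http", List.of("json"),
      "json", List.of()
  );

  /**
   * Load a component and, transitively, everything its metadata requires.
   * Returns the components in an order where dependencies precede dependents,
   * which is the order they can safely be started in.
   */
  static List<String> resolve(String root) {
    List<String> startOrder = new ArrayList<>();
    resolveInto(root, startOrder, new HashSet<>());
    return startOrder;
  }

  private static void resolveInto(String name, List<String> out, Set<String> seen) {
    if (!seen.add(name)) return;        // already fetched/loaded
    for (String dep : REPO.get(name))   // "fetch via Ivy" stands in here
      resolveInto(dep, out, seen);
    out.add(name);                      // ready to start once its deps are loaded
  }
}
```

The real runtime does this over OSGi bundles, but the ordering guarantee is the same: a component is only started after everything its metadata names is already in place.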

We’ve got ashlar and ashlarc. I’d like two other programs eventually: ashlard to deploy components to a local Ivy repository that Ashlar uses, and ashlarx to be a REPL and execute scripts.

For testing, I’m using Cuke4Duke by way of my gradle-plugins. The functional tests act on the actual distribution: the very same code that will be zipped up to make a release. So I can’t get into a situation where the tests pass inside a unit test harness but the compiled distribution bombs out.

As for the language itself: at this point, all the language can do is print out integers and rational numbers. Not terribly exciting. But the language itself is where the attention is going next: first, some rudimentary type-checking; then, mathematical operations.

Things are about to get a bit slower, because now that I’ve got to this point, I’m shifting some of my free-time focus back to other pending projects. But it’s a pretty exciting place to be.

A few years back, I started exploring programming language implementations. Generally, I wanted to understand the kinds of decisions and trade-offs that programming language designers make: specifically, I was curious why Scala made some of the decisions it did, and so I went down the road of trying to build a language that “fixed” what I perceived to be issues in Scala. That language was called Cornerstone.

After a while, I discovered that there are good reasons why Scala does things the way it does. In those “fixes”, what I was asking for was basically having my cake and eating it, too. Cornerstone was born of naiveté, and so while it was a wonderful educational opportunity, it was a stillborn language.

With lessons learned, I reconsidered my approach and started in on a new language, Ashlar. In my free time this summer, as a counter-balance to the pastoral/ministerial work I was doing, I cranked on Ashlar. It’s still just getting started, but a big hurdle has been crossed: the runtime is up and running, and the compiler infrastructure is in place. The functional test of printing “Hello, World” has gone from red to green. By request, I’ve added a bit of a description up on the Ashlar wiki, so go check it out.

In particular, there is an extensive conversation about the assumptions of Ashlar. Let me know what you think about that: I’m very curious.