Today I wanted to put together a QUnit-CLI example that leveraged Java 6’s included scripting features. Seeing as the Java 6 JDK includes a recent version of Rhino as its primary JavaScript engine, I thought this would be a piece of cake. Wrong.

To the javax.script package’s credit, creating a new scripting engine and evaluating some script code is dead simple. Example below from Oracle’s own pages…

The trouble came into play when I ran QUnit-CLI’s Rhino-based suite.js file. Ka-blew-ey!

Exception in thread "main" javax.script.ScriptException:
sun.org.mozilla.javascript.internal.EcmaError:
ReferenceError: "load" is not defined. (#1) in at line number 1
at com.sun.script.javascript.RhinoScriptEngine.eval(RhinoScriptEngine.java:110)
at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:232)
at Java6RhinoRunner.load(Java6RhinoRunner.java:29)
at Java6RhinoRunner.main(Java6RhinoRunner.java:14)

It turns out that scripts running in a Rhino Shell environment have access to extra functions that are not provided when executing embedded from Java. The top-level “load” function, which is used to load additional JavaScript files from the Rhino Shell, is not available when running from the scripting engine.

It turns out I am not the only one with this problem. My solution was to bind an additional object with a Java-based “load” function and add a new top-level “load” function to the JavaScript scope that invokes the Java code.
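A minimal sketch of the JavaScript side of that idea (the `javaLoader` name and its `load` method are illustrative assumptions, not QUnit-CLI’s actual code):

```javascript
// Assumes the Java runner has bound an object into the global scope under
// the name 'javaLoader' (e.g. via engine.put on the Java side), exposing a
// load(filename) method that reads and evaluates the file.
function load(filename) {
    javaLoader.load(filename);
}
```

With this shim in the scope, scripts written for the Rhino Shell can call load() as usual and the work is delegated back to Java.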

To make matters worse, from the Rhino Shell, the “print” function outputs the given text to standard output with a newline character. As far as I can tell there isn’t a non-newline print command available in the Shell. Annoyingly, when running from Java both print and println are available and tied to their usual Java behaviors. This means that my suite.js code which uses “print” needs to use “println” when running from Java. My first thought was to override print to execute println from my Java runner, but it looks like these basic top-level functions can’t be redefined from JavaScript.
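One portable workaround (a sketch, not QUnit-CLI’s actual code) is to resolve whichever line-printing function the environment offers once, up front, instead of hardcoding print or println:

```javascript
// Pick a line printer that works whether we're in embedded Rhino (println),
// the Rhino shell (print), or a console environment. The println/print
// behavior is as described above; the console.log fallback is an assumption.
function getPrinter() {
    if (typeof println === 'function') return println;   // embedded Rhino
    if (typeof print === 'function') return print;       // Rhino shell
    return function (s) { console.log(s); };             // everything else
}
var printLine = getPrinter();
```

Test code then calls printLine() everywhere and never cares which environment it landed in.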

Now we have suite.js running from Rhino embedded within the Java 6 JDK … but things aren’t perfect. The current code throws “Inappropriate array length” errors for a few QUnit tests. I will be looking into these next.

After a good deal of hacking and a push from jzaefferer, I’ve gotten the example code in QUnit-CLI to run using Rhino with no browser in sight. This isn’t a complete substitute for in-browser testing, but it makes integration with build servers and faster feedback possible.

The first hurdle was adding guards around all of QUnit.js’s references to setTimeout, setInterval, and other browser/document-specific objects. In addition I extended the test.js browser checks to include all of the asynchronous tests and fixture tests. Finally I cleaned up a bit of the jsDump code to work better with varying object call chains. My alterations can be found on my fork here.

The second hurdle was getting QUnit-CLI to use my modified version of QUnit.js and adjusting how Rhino errors are handled. Adding a QUnit submodule to the QUnit-CLI git repository easily fixed the first (I previously posted my notes on git submodules and fixed branches). QUnit.js’s borrowed jsDump code is used to “pretty-print” objects in test messages. jzaefferer ran into an issue when running QUnit’s own tests through QUnit-CLI, resulting in the cryptic error:

js: "../qunit/qunit.js", line 1021: Java class "[B" has no public instance field or method named "setInterval".
at ../qunit/qunit.js:1021
at ../qunit/qunit.js:1002
at ../qunit/qunit.js:1085
at ../qunit/qunit.js:1085
at ../qunit/qunit.js:1085
at ../qunit/qunit.js:110
at ../qunit/qunit.js:712 (process)
at ../qunit/qunit.js:304
at suite.js:84

It turns out that error objects (e.g. ReferenceError) thrown in Rhino include an additional property, rhinoException, which points to the underlying Java exception that was actually thrown. The error we saw is generated when the jsDump code walks the error object tree down to a byte array off of the exception. Property requests executed against this byte array throw the Java error above, even if they are done as part of a typeof check, e.g.

var property_exists = (typeof obj.property !== 'undefined');

Once I figured this out, I wrapped the object parser inside QUnit.jsDump to properly pretty-print error objects and delegate to the original code for any other type of object.
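The shape of that wrapper looks roughly like this (an illustrative sketch of the idea; the real code lives in QUnit.jsDump’s parser table and the names here are assumptions):

```javascript
// Wrap an existing object parser so Error instances get a safe, shallow
// pretty-print instead of a full property walk (which would touch the
// Java-backed rhinoException property and blow up as described above).
function wrapObjectParser(originalParser) {
    return function (obj) {
        if (obj instanceof Error) {
            return obj.name + ': "' + obj.message + '"';
        }
        return originalParser(obj);
    };
}
```

Any non-Error object still flows through the original parser untouched, so normal pretty-printing is unaffected.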

With these changes we have a decent command line executable test suite runner for QUnit. With a bit more work QUnit-CLI will hopefully be able to print Ant/JUnit style XML output and/or include stack traces when errors bubble out of test code.

In a previous post I described how I created a simple version of John Conway’s Game of Life using HTML 5 Web Workers for multi-threaded matrix calculations: HTML5-GoL. Once each new matrix is calculated, I needed to display it somewhere. In order to keep rendering (and therefore UI thread overhead) at a minimum I decided to only pass back the delta from the previous matrix to the current one (those squares with new life and recently deceased). The end result was a pretty fast implementation that could render fast enough for smooth transitions.
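The delta idea can be sketched as a pure function (illustrative names, not the actual HTML5-GoL code): compare two boolean matrices and keep only the cells that changed.

```javascript
// Given the previous and next generation as 2D boolean arrays, return only
// the cells whose state changed: new life (alive: true) or deaths (alive: false).
function computeDelta(prev, next) {
    var delta = [];
    for (var y = 0; y < next.length; y++) {
        for (var x = 0; x < next[y].length; x++) {
            if (prev[y][x] !== next[y][x]) {
                delta.push({ x: x, y: y, alive: next[y][x] });
            }
        }
    }
    return delta;
}
```

Passing this small array across the worker boundary is far cheaper than serializing the whole matrix every generation.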

Canvas Tag – Simple 2D bitmap graphics

The Canvas tag was first introduced by Apple within WebKit and allows for a simple block area to render simple 2D bitmap graphics. This differs from SVG implementations, which render individual graphic primitive objects that can be re-rendered and modified as DOM elements. Canvas tag graphics are limited to simple drawing, composition, translation, and image manipulation.

In order to draw each Game of Life matrix, we first need the canvas area
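Something along these lines (the element id and function name are assumptions for illustration; the 2D-context lookup is the standard Canvas API). Taking the document as a parameter keeps the lookup testable outside a browser:

```javascript
// Look up the canvas element and ask it for its 2D drawing context.
function getDrawingContext(doc, canvasId) {
    var canvas = doc.getElementById(canvasId);
    return canvas.getContext('2d');
}

// In the page this would be called as:
//   var ctx = getDrawingContext(document, 'gol-canvas');
```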

Once we have the 2D context we can start drawing simple shapes (lines, blocks, circles, etc.) or do any number of other 2D actions. For this simple Game of Life example I created a new GoLCanvas object to hold all of the GoL-specific actions (e.g. create life at x,y, clear the board, etc.). Overall the drawing API is simple, if a bit archaic (it reminds me of my CS course in computer graphics where we did very basic OpenGL commands in C++).

Back in March I went crazy. I spent WAY too many nights hacking in David Morgan-Mar’s Piet programming language. Piet programs are bitmaps which are executed as a “pointer” traverses the image. To make matters even worse, the underlying native commands only allow the program to use a single stack for storage. Nothing else. The original goal was to learn enough to put together an interesting and funny presentation for the Esoteric and Useless Languages April Fool’s Day meeting of the Lambda Lounge user group here in St. Louis. In the end I think I succeeded, but you can be the judge for yourself. My slide deck can be found on Prezi.com and a video of my talk can be found on blip.tv (thanks to Alex Miller for recording it). You can also find some other posts on Piet, and on QuickPiet, the subset language I created, here.

The most surprising outcome of all that image editing and eye-crossing stack tracing was a completely inappropriate desire to build more and more complex programs with the very basic tools [Quick]Piet offers. I became consumed with thoughts of how to replicate higher level ideas and abstractions built on top of a single stack. I am sure most of my work could easily be explained and improved upon in any Finite Automata or Language Theory textbook … but I wanted to do it myself.

My Piet Presentation @ Lambda Lounge (blip.tv)

When you only have a stack to work with, you realize you don’t have much. Our ability as software developers to add layers of abstraction is such a powerful tool. Without abstractions the entire complexity of an application has to fit into a developer’s head. With only a stack, even small tasks are difficult because the entire state of the application has to be maintained as each command is executed. I quickly realized that I needed a way to have stationary registers or variables that could hold data throughout the life-cycle of a program without my needing to constantly know where they were on the stack.

My solution was to keep track of the size of the stack. Simple no?

Imagine a stack full of values. The only constants of a stack are the top and bottom. If we want to hide some registers inside our stack, it makes sense to keep them at one of the ends. Only one problem: the top is highly volatile and we have no idea where the bottom is. My idea was to keep the current size of the stack as the top most element on the stack at all times. If you always knew the top value was the “size” of the stack, then when you first start your application you could stash a few extra values down in the “negative” and use these values as registers/variables throughout your application. This leads to one big hurdle: how can we keep track of the stack size as we execute our program?

After a lot of soul searching and too many crumpled pieces of notebook paper to count, I conjectured that you could recreate all of the native stack commands that Piet offers as macros that would maintain and update the stack size as you went. A Piet developer (are there any other than me?) would simply need to work with these macros instead of the native commands, and could build out new macros which would leverage that stack size to retrieve and store values from the registers. Abstraction for the win.

Several hours later I had built macros which mimicked all of the basic commands while preserving the stack size value (e.g. PUSH, POP, ADD, SUB, etc). Each macro was 5-10 native commands depending on the number of stack values changing at a time. The hard part came when I wanted to build a new ROLL command. This command differed from all of the others since the area of the stack affected by executing the command depended on the input values. All of the other commands affected a fixed number of values (e.g. PUSH always nets a single new value on the stack). So while most macro-commands could be built with useful assumptions about how many values were changing, ROLL needed to affect a variable amount of the stack to varying depths. When it was all said and done, my new ROLL command was more than 30 commands!
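The bookkeeping is easier to see in JavaScript than in Piet codels. Modeling the stack as an array whose top element is always the size counter, sketches of the PUSH and POP macros (the idea only, not actual Piet command sequences) look like:

```javascript
// Invariant: the top element of the array is always the count of real
// values beneath it. Each macro must preserve that invariant.

// Macro-PUSH: slip the new value under the counter, then bump the counter.
function macroPush(stack, value) {
    var size = stack.pop();   // lift the size counter off the top
    stack.push(value);        // native PUSH of the real value
    stack.push(size + 1);     // restore the counter, incremented
}

// Macro-POP: lift the counter, take the real top value, restore the counter.
function macroPop(stack) {
    var size = stack.pop();
    var value = stack.pop();  // native POP of the real value
    stack.push(size - 1);     // restore the counter, decremented
    return value;
}
```

In actual Piet each of these is a handful of native commands juggling the counter with ROLL and DUP, which is where the 5-10 commands per macro come from.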

Since a ROLL command has a variable depth, the basic solution is to calculate how deep things are going to end up (iterations mod depth), hide the stack size value in the middle of the roll’s depth, and roll to a depth one greater than originally requested. This leaves the stack size value at the top of the stack once again and everything else where it should be.

You can find my implementations of all the macros in this gist, hacked together in a small HTML file with JavaScript. I might be crazy, but damn it I did it. With these macros registers are possible (assuming you know how many registers you need at “compile time”, though an infinite amount is theoretically possible). It seems like this is the first big step to a proof of Piet’s Turing completeness … but that is a task for another day.

Previously I talked about getting QUnit JavaScript tests running on the command line using a simple Rhino setup. Hacking a few lines together meant that we could run our tests outside the constraints of a browser and potentially as part of an automated build (e.g. Hudson). We had just one big problem: QUnit can run outside of a browser, but it still makes LOTS of assumptions based on being in a browser.

Foremost was the fact that the QUnit.log callback passed back HTML in the message parameter. I wasn’t the only one to catch onto this issue (GitHub issue 32). While I was still formulating a plan of attack to refactor out all of the browser-oriented code, Jörn Zaefferer was putting a much simpler fix into place. His commit added an additional details object to all QUnit.log calls which contains the original assertion message w/o HTML and possibly the raw expected and actual values (not yet documented on the main page). Problem solved!

Or so it seemed.

As I tried to hack my example CLI test runner to use the quick fix I ran into several issues.

Core QUnit logic still includes browser assumptions

Even with changes to wrap browser calls and guard against assuming certain browser objects exist, qunit.js is full of browser code. It would be great if the core unit testing code could be separated from the code necessary to execute properly within a browser page, and from the code deciding how test results should be displayed on an HTML page. If these three responsibilities lived in three different objects, it would be simple to replace one or more with versions that fit a given scenario much more closely, without needing to resort to hacks or break backward compatibility.

Single Responsibility Principle

Lifecycle callbacks break down outside of a browser

QUnit has some really nice lifecycle callbacks which anyone needing to integrate a testing tool can use. They include callbacks for starting and finishing each test, individual assertions, and the whole test suite. The first thing I wanted to add was reporting of the total number of passing and failing tests along with execution time when the tests were all done. This looked like a simple job for QUnit.begin and QUnit.done.

It turns out that QUnit.begin won’t get called unless there is a window “load” event … which doesn’t happen in Rhino … so that is out. To make matters worse, QUnit.done is getting called twice! For each test! This means that my “final” stats are spammed with each test run. With the help of Rhino’s debugger app, I saw that the culprit was the successive “done” calls near the end of the “test” function. Not sure how to fix that yet.

Most of the time it is great not having to worry about whether a value is really “False” or if it is undefined or some other “falsey” value. Only one problem: zero (0) is falsey too. Going back to my C days this isn’t too big of a deal (and can be used in clever ways). However, if you are checking for the existence of properties of an object by doing a boolean expression … don’t. Sure, undefined is falsey, and if an object doesn’t have a property it will return undefined … but what if the value of that property really IS undefined, or in my case zero? No good.

** The double bang (!!boolean) convention is a convenient way to convert a falsey or truthy value to the explicit values TRUE and FALSE
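A quick sketch of the trap and the safer checks:

```javascript
var obj = { count: 0, maybe: undefined };

// Truthiness check: wrong conclusion -- the property exists, but 0 is falsey.
var existsByTruthiness = !!obj.count;            // false, misleading!

// Safer: ask about the property itself, not its value.
var existsByIn  = 'count' in obj;                // true
var existsByOwn = obj.hasOwnProperty('maybe');   // true, even though the value is undefined

// Double bang converts any truthy/falsey value to an explicit boolean.
var asBoolean = !!'non-empty string';            // true
```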

Unit test frameworks should standardize the order of arguments!

This is a personal pet peeve. I wish all unit testing frameworks would standardize whether the EXPECTED value or the ACTUAL value should go first in equals assertions. JUnit puts expected first. TestNG puts it second. QUnit puts it second. This is damn confusing if you have to switch between these tools frequently.

I moved my current code to a new GitHub repo -> QUnit-CLI. As I learn more and find a better solution, I will keep this repo updated. Currently the code outputs one line for each passing test and a more detailed set of lines for each failing test (including individual assertion results). Because of the QUnit.done problem above, the “final” test suite results, which show the total test results and execution time, are printed twice for each test (making them not very “final”). [Edited to have correct link]

~~~~~~~~~~

Side Note: As an end goal, I would like to build a CLI for QUnit that will output Ant+JUnit style XML output which would make integrating these test results with other tools a piece of cake. I CAN’T FIND THE XML DOCUMENTED ANYWHERE! Lots of people have anecdotal evidence of how the XML will be produced but no one seems to have a DTD or XSD that is “official”. If anyone knows of good documentation of the XML reports Ant generates for JUnit runs please let me know. Thanks.

I like JavaScript. I admit it. I used to hate it, but then things changed. What changed? A few months back I decided to take a crack at learning some of the new HTML 5 APIs by implementing John Conway’s Game of Life in JavaScript using web workers and a 2D canvas tag for display. The end result was a lot of fun and a pretty cool pet project: HTML5-GoL.

Game Of Life: Devs-In-A-Cabin & STL Code Retreat & HTML 5

Screenshot of HTML5-GoL animation

John Conway’s Game of Life is a simple cellular automaton where each new generation can be computed by counting the number of neighbors each cell had in the previous generation. A living cell will stay alive if it has 2 or 3 neighbors (diagonals count). A dead cell will spawn new life IFF it has exactly 3 neighbors. Simple.
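Those rules fit in a tiny pure function (a sketch; the names are illustrative, not the HTML5-GoL code):

```javascript
// alive: the cell's current state; neighbors: count of live neighbors (0-8).
// Live cells survive with 2 or 3 neighbors; dead cells spawn with exactly 3.
function nextState(alive, neighbors) {
    if (alive) {
        return neighbors === 2 || neighbors === 3;
    }
    return neighbors === 3;
}
```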

A few months back Amos King got a number of developers together for a weekend of hacking and talking shop at a cabin near St. James, MO (#devsinacabin). The idea was for every developer to come prepared to give a quick talk on something … I hacked together the basics of this HTML5 implementation the day of and presented that night.

Not long after, Mario Aquino and James Carr put together a Code Retreat here in St. Louis where pairs of developers repeatedly tried to implement the Game of Life in many different languages and styles in 40 minute chunks. I cleaned up my code a bit beforehand to show one implementation that actually had graphics (I cheated, I know). FYI, both Mario and James have put together JavaScript implementations of GoL as well.

The main goal of the project was to teach me some of the new HTML 5 based JavaScript API changes that everyone has been talking about (not quite the multi-media extravaganza of The Wilderness Downtown, but decent). The HTML 5 spec is a HUGE collection of new APIs and markup changes that are aimed at solving a lot of common web problems in a standardized way. One of the best examples of the need for standardization can be seen in the Apache Shindig project. While hacking my way through their internal pub/sub code I found huge blocks of code handling communication between the container and a given gadget, depending on the browser hacks needed to talk back and forth. For HTML 5 compliant browsers this was a single postMessage function call.

Without too much trouble I was able to put something together that leveraged web workers for calculating each new “world” state outside of the main UI event loop and a simple 2D canvas tag to display all of the live and dead cells. There are a ton of other HTML 5 features I would love to add (mouse interaction, local storage and loading of custom patterns, etc), but those are for another day. For now the code is up on GitHub with a nice home page including an example implementation (JSpec tests for all of the actual game logic).

Click the above picture to go there and see it in action
and keep reading below to find out how I used web workers to offload my hard computations…

Web Workers: Multi-Threaded JavaScript

The event oriented nature of JavaScript can be a nice feature and a huge pain in the a$$. Simple button click events and AJAX calls are easy, but try to do any significant calculations and you quickly run into UI problems. Tying up your one and only JavaScript thread in an animation loop or complex parsing task means your UI becomes unresponsive and sluggish. Take the time to break all of your work into small chunks spread across hundreds of setTimeout and setInterval calls and you might go cross-eyed. Enter Web Workers.

Web Workers are separate JavaScript threads that can be spawned by the main UI thread and will download and execute a specified .js file from the same domain. The UI thread and the new worker can communicate by asynchronous message passing of String values (and vanilla JSON objects, although this isn’t technically in the spec). In addition, web workers can’t manipulate the DOM or anything on the page directly. So what can they do? They can run in parallel of your UI code and leverage any other normal native API that your UI code can (web sockets, XMLHttpRequest, local storage, etc). This means that you can offload large computations and server communication to the second thread and leave your main thread for dealing with the user and the DOM.

*** From several sources and the spec itself, it sounds like Web Workers are not designed to be cheap to create and start. This doesn’t mean that implementations have to make these operations expensive, but the idea is that you won’t create hundreds of these. Instead create one or two and reuse them. I noticed that besides the extra network hits to grab the actual code to run in the worker, there was occasionally a noticeable pause as the worker thread was spawned. Since the UI thread will have continued on during this time, it is probably a best practice to have the worker and UI threads coordinate their work with some “I am alive and ready” type messages. ***

In the Game of Life code I wanted to use a Web Worker to do the generation-to-generation calculations since I figured this would be the bulk of the processing time and my naive 2D array implementation required a fair amount of calculations for large “worlds”. The gol-client.js file is the jumping off point. The main chunk of code is executed after the DOM is loaded and creates the necessary objects used to draw on the canvas and creates a new Web Worker loaded with gol-worker.js.

First things first, the worker loads up a secondary library file containing the actual Game of Life logic using the importScripts() command. The worker is the only one that actually knows about the Game of Life algorithm and matrix (“world”). After creating a new matrix it randomly seeds the world and kicks off a timer to calculate a new generation every X ms and post the changed positions back to the main UI thread.
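The worker-side loop can be sketched like this (the names and the `world.step()` shape are illustrative assumptions, not the actual gol-worker.js code; `postMessage` and `setInterval` are the real worker APIs). Injecting the dependencies keeps the logic testable outside a worker:

```javascript
// Each tick: advance the world one generation and post only the delta of
// changed cells back to the UI thread.
function startTicking(world, post, schedule, intervalMs) {
    return schedule(function () {
        post(world.step());   // step() computes a generation, returns the delta
    }, intervalMs);
}

// Inside the real worker this would be wired up roughly as:
//   startTicking(matrix, postMessage, setInterval, 50);
```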

*** Initially I assumed that the calculations for each new board would be time consuming and that I would be able to run them continuously, posting changes back to the UI thread after each one without any waits. Turns out the calculation is fast, and spamming messages at the UI thread can cause more hiccups in performance than just waiting. I haven’t had a chance to examine what really happened, but in Chrome on Windows I was seeing some really strange behavior when I flooded the UI thread with messages too fast. I am not sure if there is a message buffer I was overflowing or if it was just choking on the throughput. ***

All in all, workers are a piece of cake and really helpful. I noticed some performance differences between Firefox and Chrome (hint: Chrome was faster) but I can’t blame one particular thing or another. It might just be my crap code.

Along with offloading large processing tasks, web workers can be used for background server communication, as a proxy to help multiple client windows talk to each other, as persistent data stores, and as common processors with life cycles beyond the scope of a single page, spanning multiple requests for a whole web-app (shared web workers).

>> Another post discusses some of my findings with using the Canvas tag to display the world …

Lately I have been hacking around a lot with JavaScript (HTML5, NodeJS, etc…) and was asked by another team if I had any suggestions on testing JavaScript and how they could integrate it into their build. Most of my experience testing JavaScript has been done by executing user acceptance tests written with tools like Selenium and Cucumber (load a real browser and click/act like a real user). Unfortunately these tests are slow and brittle compared to our unit tests. More and more of today’s dynamic web apps have large amounts of business logic client side that doesn’t have a direct dependency on the browser. What I want are FAST tests that I can run with every single change, before every single commit, and headless on our build server. They might not catch everything, but they will catch a lot long before the UATs are finished.

Goal : Run JavaScript unit tests as part of our automated build

The first hurdle was finding a way to run the code headless outside of a browser. Several of today’s embedded JavaScript interpreters are available as separate projects, but for simplicity I like Rhino. Rhino is an interpreter written entirely in Java and maintained by the Mozilla Foundation. Running out of a single .jar file and with a nice GUI debugger, Rhino is a good place to start running JavaScript from the command line. While not fully CommonJS compliant yet, Rhino offers a lot of nice native functions for loading other files, interacting with standard io, etc. Also, you can easily embed Rhino in Java apps (it is now included in Java 6) and extend it with Java code.

My JS testing framework of choice lately has been JSpec, which I really like: nice GUI output, tons of assertions/matchers, async support, Rhino integration, and a really nice RSpec-esque grammar. Unfortunately JSpec works best on a Mac, and the team in question mainly needs to support Windows/*nix.

Enter QUnit: simple, fast, and used to test jQuery. Only one problem: QUnit is designed to run in a browser and gives all of its output in the form of DOM manipulations. Hope was almost lost until I found a tweet by John Resig himself suggesting that QUnit could be made to work with a command line JavaScript interpreter. Sadly, despite many claims of this functionality, I couldn’t find a single good tutorial showing how, nor a developer leveraging this approach. A bit of hacking, a lucky catch while reading the Env.js tutorial, and twada’s qunit-tap project came together in this simple solution:

Step 2 : Create a test suite (suite.js)

This code could be included in the test file, but I like to keep them separate so that myLibTest.js can be included in an HTML page for running QUnit in normal browser mode without making any changes.

This file contains all of the Rhino-specific commands as well as some formatting changes for QUnit. After loading QUnit we need to do a bit of housework to set up QUnit to run on the command line and override the log callback to print our test results to standard out. QUnit offers a number of callbacks which can be overridden to integrate with other testing tools (find them on the QUnit home page under “Integration into Browser Automation Tools”). Finally, load our library and our tests.
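The core of that override looks roughly like this (a sketch; the details object is the one from the commit discussed in a later post, and the other names are illustrative, not suite.js verbatim):

```javascript
// Build a QUnit.log-style callback that writes plain-text results through a
// given output function (print on Rhino). The details object, when present,
// carries the HTML-free assertion message.
function makeLogCallback(out) {
    return function (result, message, details) {
        var text = (details && details.message) ? details.message : message;
        out((result ? "PASS" : "FAIL") + " - " + text);
    };
}

// In suite.js this would be wired up as:
//   QUnit.log = makeLogCallback(print);
```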

Step 5 : Profit!

There you go: command line output of your JavaScript unit tests. Now we can test our “pure” JavaScript which doesn’t rely on the DOM. Testing DOM-dependent code is also possible using a tool like Env.js, and will be discussed in a future post.

Step 5+ : Notice the problem -> HTML output on the command line

You may have noticed that my output messages include HTML markup. Sadly QUnit still has assumptions that it is running in a browser and/or reporting to an HTML file. Over the next couple weeks I am going to work on refactoring out the output formatting code from the core unit testing logic and hopefully build out separate HTML, human readable command line, and Ant/JUnit XML output formatters so that integrating QUnit into your build process and Ant tasks is a piece of cake.

Track my progress on my GitHub fork and on the main QUnit project issue report.

Today, I was going to write about Google Maps until I stumbled onto this little gem. My post still involves Google Maps code, but my focus is on Javascript and how virtually anything you can think of that might make the language more of a joy to program in, you can implement yourself.

Our post begins with a plain Google Maps application, virtually copied right from Google’s API docs example. I made a few changes though, highlighted below. I hate mixing event handlers into HTML tags, and I have, as of yet, still not installed a decent mixed mode for editing Javascript inside of an HTML source file. So I pulled the embedded script into its own file and added listeners for the load and unload events on the window.

The default map is centered on Google’s headquarters in Palo Alto. Of course, Palo Alto isn’t very pleasing to look at for anyone who isn’t a smug, self-centered Palo Altoan, but I suspect that most people don’t change this, at least initially, because it’s daunting to try and figure out what your hometown’s latitude and longitude is. Google Maps itself, though, actually provides a decent way to turn a string location into a GLatLng, using a GeoCoderThingy. The getLatLng() method takes a callback. I live in St. Louis, and my original post had to do with our lovely Forest Park, so the code below demonstrates how to center your map on America’s largest urban park.

Notice the extract-parameter and rename-method refactorings here. It’s good and our code is clean, but Forest Park doesn’t appear prominent enough on the map for my liking. It should be bigger, more zoomed in. Pretty easy: just extract parameter around the hardcoded zoom level, and adjust the callback a bit since displayMapAt will now take two parameters.

This is ugly. Anonymous functions are powerful, but hell, I hate reading them, all curly and indented funny and pretending to be simple variables when there’s a bunch of characters there that don’t ever belong in any variable name ever.

Much better, but we had to rearrange displayMapAt‘s parameters to get it to work, and they were in a good order before, matching the order in which they’re used. Not to mention what would happen if displayMapAt wasn’t your code and you couldn’t just mix it all up willy-nilly. Haskell, which I borrowed most (all) of the ideas for fn.js from, has the notion of “reversing” a function: basically creating a new function that takes the original function’s arguments in reverse order. That would work here, but Javascript’s and Haskell’s syntactic beauty lean in different directions, so I looked to a different place for inspiration to solve this tiny, tiny problem: Arc.

Paul Graham’s hundred year language, Arc, has this placeholder syntax for anonymous functions that is ever so useful.

So I broke into fn.js and added an identity check for the underscore variable. Once your function has all of its parameters, if one of them is the underscore variable (not merely equal to, but the reference itself) then we once again return an intermediate curried-sorta function that will, on final invocation, replace its placeholder value with a newly passed one. Right now the code only supports a single placeholder parameter, but there’s no reason you couldn’t alter the check-and-replace code to handle multiple placeholders.
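A minimal sketch of the mechanism (not fn.js’s actual code; the names here are illustrative): the placeholder is just a unique object reference, and spotting it by identity triggers one more round of currying.

```javascript
// _ is only a unique reference; the check is by identity, not value.
var _ = {};

// Partially apply fn; if any supplied argument is the placeholder, return a
// function that fills the hole with its (single) argument on the final call.
function partial(fn) {
    var args = Array.prototype.slice.call(arguments, 1);
    if (args.indexOf(_) === -1) {
        return fn.apply(null, args);          // no hole: invoke immediately
    }
    return function (value) {
        var filled = args.map(function (a) { return a === _ ? value : a; });
        return fn.apply(null, filled);
    };
}
```

So `partial(displayMapAt, someLocation, _)` would hand back a callback waiting only for the missing argument.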

Updated: see this Hacker News thread and Tom Robinson’s comments below for the whole story, but suffice to say === doesn’t work the way I thought it did, and _ needs a value and a simple == comparison. Also, these placeholder changes have been pushed to fn.js on github.

In the end, our displayMapAt function keeps its original parameter ordering, our callback isn’t an ugly inline anonymous function, and the underscore very succinctly points out where a variable is missing and that a future call will replace it with a real value.

I found John Resig’s excellent env.js a few weeks ago, and immediately decided that I could do a better job, and so tried writing my own version. The promise of the tool for me, is that with it I can use very natural javascript dom methods to scrape web pages instead of complicated and fragile regular expressions.

Part of my decision to write my own version stemmed from the fact that when I first tried out env.js, now being maintained by thatcher on github, it choked all over the very first web page I gave it. It’s a wonder I noticed the errors it spit out at all, though, as env.js logs an awful lot of useless information to the console. Maybe it’s a Unix aesthetic, but I feel solid working programs are quiet programs.

Those two minor quibbles aside, though, there are a lot of neat ideas going on in env.js. Besides, with git, I could very easily fork thatcher’s work and go whichever way I wanted to. Long story short, I sat back down with env.js last night and fed it some web pages. It did choke all over them, but this time, I stuck around and figured out what the problem was.

env.js uses an internal SAX parser, written by David Joham and Scott Severtson, that looks for any ampersands in its input and tries to interpret them as HTML escape sequences. That in itself isn’t all that illogical. If you type &amp; into a web page, env.js should be able to handle it properly. However, it was doing so inside of <script> tags and dying when it encountered the JavaScript && operator. Furthermore, the unescaping logic searched from the first ampersand to the first semicolon after that ampersand to decide what escape sequence it was looking at. Bad news when it is looking at a language descended from C, which uses operators that start with & and statements that end with ;
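The failure mode can be sketched like this (illustrative only, not the actual parser code): an unescaper that treats everything between an ampersand and the next semicolon as an entity name.

```javascript
// Naive entity handling: treat every '&...;' span as an escape sequence.
// On HTML text that is mostly fine; on embedded JavaScript, '&&' operators
// and ';'-terminated statements produce nonsense "entities".
function naiveEntitySpans(text) {
    var spans = [], re = /&([^;]*);/g, m;
    while ((m = re.exec(text)) !== null) {
        spans.push(m[1]);
    }
    return spans;
}
```

On ordinary markup the spans really are entities; on script source the parser ends up trying to decode half a statement.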