Let's say you are a startup focused on upload technology and you want the maximum level of control over your file uploads. In our case that means being able to interact directly with the multipart data stream as it comes in, so we can abort the upload the moment something isn't right. That beats making the user wait an hour only to be told the upload failed after it finished.

Here is the core of an example that accomplishes this in node.js (you'll need the bleeding edge git version):

function upload_file(req, res) {
  req.setBodyEncoding('binary');

  var stream = new multipart.Stream(req);
  stream.addListener('part', function(part) {
    part.addListener('body', function(chunk) {
      // render the command line progress indicator here
      // chunk could be appended to a file if the uploaded file needs to be saved
    });
  });
  stream.addListener('complete', function() {
    res.sendHeader(200, {'Content-Type': 'text/plain'});
    res.sendBody('Thanks for playing!');
    res.finish();
    sys.puts("\n=> Done");
  });
}

The code is rather straightforward. First of all we include the multipart.js parser, a library that has just been added to node.js.

Next we create a server listening on port 8000 that dispatches incoming requests to one of our three functions: display_form, upload_file or show_404. Again, very straightforward.
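The dispatch step can be pictured like this. This is a hypothetical sketch, not the actual uploader.js code: stub handlers stand in for the real display_form, upload_file and show_404, and the route map is a guess.

```javascript
// Stub handlers standing in for the real ones defined elsewhere in uploader.js.
function display_form(req, res) { res.body = 'form'; }
function upload_file(req, res) { res.body = 'upload'; }
function show_404(req, res)    { res.body = '404'; }

// Hypothetical url-to-handler map; anything unknown falls through to show_404.
var routes = {'/': display_form, '/upload': upload_file};

function dispatch(req, res) {
  (routes[req.url] || show_404)(req, res);
}
```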

display_form serves a very short (and invalid) piece of HTML that will render a file upload form with a submit button. You can get to it by running the example (via: node uploader.js) and pointing your browser to http://localhost:8000/.
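The markup itself isn't reproduced here, but display_form presumably looks something like this. The field names and form body are guesses; the response calls match the upload handler above.

```javascript
// A guess at display_form: serve a bare-bones (and intentionally invalid,
// since it omits html/head/body tags) upload form.
function display_form(req, res) {
  res.sendHeader(200, {'Content-Type': 'text/html'});
  res.sendBody(
    '<form action="/upload" enctype="multipart/form-data" method="post">' +
    '<input type="file" name="upload-file">' +
    '<input type="submit" value="Upload">' +
    '</form>'
  );
  res.finish();
}
```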

upload_file kicks in as soon as you select a file and hit the submit button. It tells the request object to expect binary data and then passes the work on to the multipart Stream parser. The result is a new stream object that emits two kinds of events: 'part' and 'complete'. 'part' is called whenever a new element is found within the multipart stream, you can find all the information about it by looking at the first argument's headers property. In order to get the actual contents of this part we attach a 'body' listener to it, which gets called for each chunk of bytes getting uploaded. In our example we just use this event to render a progress indicator in our command line, but we could also append this chunk to a file which would eventually become the entire file as uploaded from the browser. Finally the 'complete' event sends a response to the browser indicating the file has been uploaded.

show_404 handles all unknown URLs by returning an error response.
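A guess at what show_404 looks like, using the same response calls as the other handlers; the exact status text and body are made up.

```javascript
// Hypothetical show_404: same response calls as the upload handler,
// but with a 404 status and a plain error body.
function show_404(req, res) {
  res.sendHeader(404, {'Content-Type': 'text/plain'});
  res.sendBody('Not found');
  res.finish();
}
```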

As you can see, the entire process is pretty simple, yet gives you a ton of control. You could also easily use this technique to show an AJAX progress bar for the upload to your users. The multipart parser also works with non-HTTP requests: just pass a {boundary: '...'} option to the constructor and use stream.write() to feed it data to parse. Check out the source of the parser if you're curious how it works internally.
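To get a feel for why the boundary matters, here is a toy stand-alone illustration. This is not multipart.js itself (which parses incrementally, chunk by chunk); it just shows the basic idea of splitting a raw body on the boundary delimiter and separating each part's headers from its content.

```javascript
// Toy multipart splitter: split the raw body on '--boundary', drop the
// empty preamble and the closing '--' marker, and cut each part at the
// blank line that separates headers from content.
function splitMultipart(body, boundary) {
  var delimiter = '--' + boundary;
  return body
    .split(delimiter)
    .filter(function(piece) {
      return piece.trim() !== '' && piece.indexOf('--') !== 0;
    })
    .map(function(piece) {
      var idx = piece.indexOf('\r\n\r\n');
      return {
        headers: piece.slice(0, idx).trim(),
        content: piece.slice(idx + 4).replace(/\r\n$/, '')
      };
    });
}
```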

What happens if you take an insanely fast JavaScript engine and build an asynchronous I/O library around it? You get node.js, an up-and-coming project that brings you bloat-free server-side JavaScript.

This enables you to write any kind of "backend" service with just a few lines of JavaScript. But don't take my word for it, here is how a "Hello World" example looks in node.js:

This code creates a new http server and tells it to listen on port 8000. Whenever a request comes in, the closure function passed to createServer is fired. In this example we are waiting 2 seconds before sending a "Hello World" response back to the client.

However, during those 2 seconds the server will continue to accept new connections. That gives you a very scalable hello world server. Talking about performance: Ryan (the awesome guy behind node.js) has done some initial benchmarks which show how fast of a beast we are talking about here. To me the most notable aspect is the extremely low response times you can achieve with node.js (pretty much 0-1ms).

Here is another chart from a req/sec benchmark among various server-side JS libs:

Now you may think this is all very nice and stuff, but what do you actually need it for? Well, it depends. There are a few people writing web frameworks for node.js, but at this point it would be rather adventurous to build a full-blown web application on top of node. However, node has incredible potential if you want to develop chat applications; check out this one, for example. We are using node.js to run the backend for transload.it, which we'll share more details about in the future. You could also easily write your own key-value store in node.js - say you'd like something like Memcached but with a REST interface.

Trying out node.js requires a Linux/Unix machine, but it's pretty much as simple as downloading the code and running:

./configure
make
make install

The documentation is pretty excellent and there are lots of friendly people on the mailing list to help if you have any problems.

To me the most exciting thing about node.js is its incredible potential to reunite developers. You can have long debates about using Python, Ruby, PHP, Java, ASP or Perl for a web project, but nobody will debate that JavaScript is *the* language of the web. Now imagine the smartest people from all those communities creating code in a single language that works in your browser as well as on your server ...

I'd love to hear your thoughts on node.js and any questions you might have. Just leave a comment!

The second 'revert' creates a new commit that basically redoes the bad commit. Why? Because now you have a "clone" of the bad commit sitting cleanly on top of your HEAD, waiting to be messed with. This is much better than using git reset to go back to it, because you are not deleting history (rewriting history that has already been shared is very likely to cause merge conflicts for your team).

git reset HEAD^

By using git reset you can now go back to the moment before the "clone" commit was created, but still have all of its changes in your tree (= local file system). This essentially takes you back in time to the moment before your evil teammate hit the commit button and gives you the ability to yell "Nooooooo! Let me tell you about atomic commits, buddy".

Now you can use 'git add' or 'git add -p' to stage all your changes for the first atomic change and commit it. Repeat the process until you've split the bad commit into nice atomic pieces.
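The whole dance can be replayed in a throwaway repository. The file names and commit messages below are made up for illustration:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

echo base > base.txt
git add . && git commit -qm "init"

# The teammate's bad commit: two unrelated changes squashed together.
echo one > feature.txt
echo two > bugfix.txt
git add . && git commit -qm "bad: feature AND bugfix"

git revert --no-edit HEAD   # first revert: undo the bad commit
git revert --no-edit HEAD   # second revert: the "clone" of the bad commit
git reset HEAD^             # step back; the clone's changes stay on disk

git status --short          # feature.txt and bugfix.txt are back, unstaged
# Now: git add feature.txt && git commit -m "feature", and so on.
```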

All that is left to do now is to explain to your teammate that the next time he accidentally makes a bad commit, he should run 'git reset HEAD^' right away. This way the problem can be fixed before git push distributes the bad commit to your team.

Let me know if this makes sense or if you have any questions. I also accept "Git Challenges": if you have a problem you'd like to see solved in Git, just let me know in the comments and I'll blog about it : ).