<p>joseoncode.com: José on Code! (<a href="http://joseoncode.com/">http://joseoncode.com/</a>)</p>
<h1>Reloading node with no downtime</h1>
<p>Sun, 18 Jan 2015, by José F. Romaniello (<a href="http://joseoncode.com/2015/01/18/reloading-node-with-no-downtime">permalink</a>)</p>
<p>I wrote a blog post about <a href="http://joseoncode.com/2014/07/21/graceful-shutdown-in-node-dot-js/">Unix signals and graceful shutdown in node.js applications</a> five months ago. In this article I will explain how to reload a node.js application with no downtime.</p>
<p>One of the things that I like about nginx is how it handles configuration changes (see <a href="http://nginx.org/en/docs/control.html">Controlling nginx</a>). The master process &quot;reloads the configuration&quot; by creating new worker processes when it receives the SIGHUP signal.</p>
<p>Node.js comes with a <a href="http://nodejs.org/api/cluster.html">cluster module</a> that allows us to do very powerful things.</p>
<p>For this example I will use one worker but it can be extended to use as many workers as you want.</p>
<p>master.js:</p>
<div class="highlight"><pre><code class="text">var cluster = require(&#39;cluster&#39;);

console.log(&#39;started master with &#39; + process.pid);

//fork the first process
cluster.fork();

process.on(&#39;SIGHUP&#39;, function () {
  console.log(&#39;Reloading...&#39;);
  var new_worker = cluster.fork();
  new_worker.once(&#39;listening&#39;, function () {
    //stop all other workers
    for (var id in cluster.workers) {
      if (id === new_worker.id.toString()) continue;
      cluster.workers[id].kill(&#39;SIGTERM&#39;);
    }
  });
});
</code></pre></div>
<p>The master process starts the first worker and then listens for the SIGHUP signal. When it receives SIGHUP, it forks a new worker and waits until the new worker is <a href="http://nodejs.org/api/cluster.html#cluster_event_listening">listening</a>; once the new worker is listening, it kills the other workers.</p>
<p>This works out of the box because the cluster module allows several worker processes to listen on the same address.</p>
<p>server.js:</p>
<div class="highlight"><pre><code class="text">var cluster = require(&#39;cluster&#39;);

if (cluster.isMaster) {
  require(&#39;./master&#39;);
  return;
}

var express = require(&#39;express&#39;);
var http = require(&#39;http&#39;);

var app = express();

app.get(&#39;/&#39;, function (req, res) {
  res.send(&#39;hello world!&#39;);
});

http.createServer(app).listen(8080, function () {
  console.log(&#39;http://localhost:8080&#39;);
});
</code></pre></div>
<p>This is the entry point of the application: a plain express application, except for the first few lines, which hand control to <code>master.js</code> when the process is the cluster master.</p>
<p>You can test this as follows:</p>
<p><img src="https://s3.amazonaws.com/joseoncode.com/img/node_reload.png" alt=""></p>
<p>I&#39;ve uploaded a more <a href="https://github.com/jfromaniello/zero-downtime-node">complete example</a> to GitHub.</p>
<h1>Graceful shutdown in node.js</h1>
<p>Mon, 21 Jul 2014, by José F. Romaniello (<a href="http://joseoncode.com/2014/07/21/graceful-shutdown-in-node-dot-js">permalink</a>)</p>
<p>According to <a href="http://en.wikipedia.org/wiki/Unix_signal">Wikipedia - Unix signal</a>:</p>
<blockquote>
<p>Signals are a limited form of inter-process communication used in Unix, Unix-like, and other POSIX-compliant operating systems. A signal is an asynchronous notification sent to a process or to a specific thread within the same process in order to notify it of an event that occurred.</p>
</blockquote>
<p>There are a bunch of generic signals, but I will focus on two:</p>
<ul>
<li> <code>SIGTERM</code> is used to request program termination. It is a way to <strong>politely</strong> ask a program to terminate. The program can either handle this signal, clean up resources and then exit, or it can ignore the signal.</li>
<li> <code>SIGKILL</code> is used to cause immediate termination. Unlike SIGTERM, it can&#39;t be handled or ignored by the process.</li>
</ul>
<p>Wherever and however you deploy your node.js application, it is very likely that the system in charge of running your app uses these two signals:</p>
<ul>
<li> <a href="http://upstart.ubuntu.com/cookbook/#stopping-a-job">Upstart</a>: when stopping a service, by default it sends SIGTERM and waits 5 seconds; if the process is still running, it sends SIGKILL.</li>
<li> <a href="http://supervisord.org/configuration.html">supervisord</a>: when stopping a service, by default it sends SIGTERM and waits 10 seconds; if the process is still running, it sends SIGKILL.</li>
<li> <a href="http://smarden.org/runit/">runit</a>: when stopping a service, by default it sends SIGTERM and waits 10 seconds; if the process is still running, it sends SIGKILL.</li>
<li> <a href="https://devcenter.heroku.com/articles/dynos#graceful-shutdown-with-sigterm">Heroku dynos shutdown</a>: as described in this link, Heroku sends SIGTERM, waits up to 10 seconds for the process to exit and, if the process is still running, sends SIGKILL.</li>
<li> <a href="https://docs.docker.com/reference/commandline/cli/#stop">Docker</a>: if you run your node app in a Docker container, when you run the <code>docker stop</code> command the main process inside the container receives SIGTERM and, after a grace period (10 seconds by default), SIGKILL.</li>
</ul>
<p>So, let&#39;s try it with this simple node application:</p>
<div class="highlight"><pre><code class="text">var http = require(&#39;http&#39;);

var server = http.createServer(function (req, res) {
  setTimeout(function () { //simulate a long request
    res.writeHead(200, {&#39;Content-Type&#39;: &#39;text/plain&#39;});
    res.end(&#39;Hello World\n&#39;);
  }, 4000);
}).listen(9090, function (err) {
  console.log(&#39;listening http://localhost:9090/&#39;);
  console.log(&#39;pid is &#39; + process.pid);
});
</code></pre></div>
<p>As you can see, responses are delayed 4 seconds. The node documentation <a href="http://nodejs.org/api/process.html#process_signal_events">here</a> says:</p>
<blockquote>
<p>SIGTERM and SIGINT have default handlers on non-Windows platforms that resets the terminal mode before exiting with code 128 + signal number. If one of these signals has a listener installed, its default behaviour will be removed (node will no longer exit).</p>
</blockquote>
<p>It is not clear from that what the default behavior is; if I send SIGTERM in the middle of a request, the request fails, as you can see here:</p>
<div class="highlight"><pre><code class="text">» curl http://localhost:9090 &amp;
» kill 23703
[2] 23832
curl: (52) Empty reply from server
</code></pre></div>
<p>Fortunately, the HTTP server has a <a href="http://nodejs.org/api/http.html#http_server_close_callback"><code>close</code> method</a> that stops the server from accepting new connections and calls the callback once it has finished handling all pending requests. This method comes from the net module, so it is handy for any type of TCP server.</p>
<p>Now, if I modify the example to something like this:</p>
<div class="highlight"><pre><code class="text">var http = require(&#39;http&#39;);

var server = http.createServer(function (req, res) {
  setTimeout(function () { //simulate a long request
    res.writeHead(200, {&#39;Content-Type&#39;: &#39;text/plain&#39;});
    res.end(&#39;Hello World\n&#39;);
  }, 4000);
}).listen(9090, function (err) {
  console.log(&#39;listening http://localhost:9090/&#39;);
  console.log(&#39;pid is &#39; + process.pid);
});

process.on(&#39;SIGTERM&#39;, function () {
  server.close(function () {
    process.exit(0);
  });
});
</code></pre></div>
<p>And then I use the same commands as above:</p>
<div class="highlight"><pre><code class="text">» curl http://localhost:9090 &amp;
» kill 23703
Hello World
[1] + 24730 done curl http://localhost:9090
</code></pre></div>
<p>You will notice that the program doesn&#39;t exit until it has finished processing and serving the last request. More interesting is the fact that after the SIGTERM signal it doesn&#39;t accept any more requests:</p>
<div class="highlight"><pre><code class="text">» curl http://localhost:9090 &amp;
[1] 25072
» kill 25070
» curl http://localhost:9090 &amp;
[2] 25097
curl: (7) Failed connect to localhost:9090; Connection refused
[2] + 25097 exit 7 curl http://localhost:9090
» Hello World
[1] + 25072 done curl http://localhost:9090
</code></pre></div>
<p>Some examples in blogs and on Stack Overflow use a timeout on SIGTERM in case <code>server.close</code> takes longer than expected. As mentioned above, this is unnecessary, because every process manager will send a SIGKILL if the process takes too long to exit after SIGTERM.</p>
<h1>A common case of double callbacks in node.js</h1>
<p>Fri, 27 Dec 2013, by José F. Romaniello (<a href="http://joseoncode.com/2013/12/27/case-of-double-callbacks">permalink</a>)</p>
<p>A double callback is, in JavaScript jargon, a callback that we expect to be called once but that for some reason is called twice or more times.</p>
<p>Sometimes it is easy to discover, as in this example:</p>
<div class="highlight"><pre><code class="text">function doSomething(callback) {
  doAnotherThing(function (err, result) {
    if (err) callback(err);
    callback(null, result);
  });
}
</code></pre></div>
<p>The obvious error here is that when <code>doAnotherThing</code> fails the callback is called twice: once with the error and once with the result.</p>
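<p>The fix is to <code>return</code> after calling the callback with the error; a minimal corrected version (<code>doAnotherThing</code> stands for any function following the node callback convention):</p>

```javascript
function doSomething(callback) {
  doAnotherThing(function (err, result) {
    // the return guarantees the callback is called exactly once
    if (err) return callback(err);
    callback(null, result);
  });
}
```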
<p>However, there is one special case that is very hard to reproduce and discover; moreover, it has happened to me several times.</p>
<p>Yesterday, my friend and co-worker <a href="https://twitter.com/thepose">Alberto</a> asked me this:</p>
<blockquote>
<p>&quot;Why does this test hang on the <strong>assertion</strong> line?&quot;</p>
<p><code>expect(foo).to.be.equal(&#39;123&#39;);</code></p>
</blockquote>
<p>The test looks like this:</p>
<div class="highlight"><pre><code class="text">it(&#39;test something&#39;, function (done) {
  function_under_test(function (err, output) {
    expect(foo).to.be.equal(&#39;123&#39;);
  });
});
</code></pre></div>
<p>After some debugging I found out that it didn&#39;t hang only on <code>expect</code>: it hung whenever we threw any error inside the callback.</p>
<p>A few calls up the stack, there was a little function with a bug like this:</p>
<div class="highlight"><pre><code class="text">function (callback) {
  another_function(function (err, some_data) {
    if (err) return callback(err);
    try {
      callback(null, JSON.parse(some_data));
    } catch(err) {
      callback(new Error(some_data + &#39; is not a valid JSON&#39;));
    }
  });
}
</code></pre></div>
<p>The intention of the developer with this try&#47;catch block is clear: to catch <code>JSON.parse</code> errors. The problem is that it also catches errors thrown <strong>inside the callback</strong> and then executes the callback a second time, with the wrong error.</p>
<p>The solution is trivial: call the callback outside of the try block, as follows:</p>
<div class="highlight"><pre><code class="text">function (callback) {
  another_function(function (err, some_data) {
    if (err) return callback(err);
    var parsed;
    try {
      parsed = JSON.parse(some_data);
    } catch(err) {
      return callback(new Error(some_data + &#39; is not a valid JSON&#39;));
    }
    callback(null, parsed);
  });
}
</code></pre></div>
<p>Introducing these errors is very easy (I&#39;ve done it several times) and troubleshooting them is very hard, so be careful and do not wrap callback calls in try&#47;catch blocks.</p>
<h1>The Architecture we use to Deploy to Public and Private Clouds</h1>
<p>Fri, 13 Sep 2013, by José F. Romaniello (<a href="http://joseoncode.com/2013/09/13/shiping-auth0">permalink</a>)</p>
<blockquote>
<p>Originally posted on <a href="http://inside.auth0.com/2013/09/13/shiping-auth0/" rel="canonical">inside.auth0.com</a>.</p>
<p>How we use Puppet, GitHub, TeamCity, Windows Azure, Amazon EC2 and Route53 to ship Auth0</p>
</blockquote>
<h2>Introduction</h2>
<p>There are two ways companies deploy web applications to the cloud: PaaS and IaaS. With <a href="http://en.wikipedia.org/wiki/Platform_as_a_service">Platform as a Service</a> you deploy your applications to the vendor&#39;s platform. <a href="http://en.wikipedia.org/wiki/Infrastructure_as_a_service#Infrastructure_as_a_service_.28IaaS.29">Infrastructure as a Service</a> is the basic cloud-service model: you get servers, usually VMs.</p>
<p>We started Auth0 using <a href="http://heroku.com">Heroku</a>&#39;s Platform as a Service, but soon we decided to provide a self-hosted option of Auth0 for some customers. In addition, we wanted the same deployment mechanism for the cloud&#47;public version and the appliance version. So, we decided to move to IaaS.</p>
<p>What I am going to show here is, at a very high level, the result of several iterations of work. It is not a silver bullet and you probably don&#39;t need this (yet?), but it is another option to consider. Even if this is exactly what you are looking for, the best advice I can give you is &quot;don&#39;t architect everything from the start&quot;. This came from several weeks of work and several iterations, but <strong>we never ceased to ship</strong>; everything evolved and keeps evolving, and we keep tuning the process on the run.</p>
<h2>The big picture</h2>
<p><img src="https://s3.amazonaws.com/joseoncode.com/img/2013-08-25_1458.png" alt=""></p>
<h2>What&#39;s Puppet and why is it so important when using IaaS?</h2>
<p>Picture yourself deploying your application to a brand new VM today. What is the first thing you would do? Well, if you have a node.js application, installing node would be a good start. But you will probably need to install 10 other things as well, plus change the famous <a href="https://www.google.com/search?q=nodejs+ulimit&amp;oq=nodejs+ulimit&amp;aqs=chrome..69i57j0l2.2293j0&amp;sourceid=chrome&amp;ie=UTF-8">ulimit</a>, configure logrotate, ntp and so on. Then you will copy your application somewhere on disk, configure it as a service and so on. Where do you keep this recipe?</p>
<p><a href="http://docs.puppetlabs.com/">Puppet</a> is a tool for <a href="http://en.wikipedia.org/wiki/Configuration_management">configuration management</a>. Rather than writing an install script, you describe at a high level the desired state of the resources on the server. When you run Puppet it checks everything and then does whatever it takes to put the server in that specific state, from removing a file to installing software.</p>
<p>There is another tool similar to puppet called <a href="http://www.opscode.com/chef/">chef</a>. One of the things regarding Chef that I would like to test in the future is <a href="http://aws.amazon.com/en/opsworks">Amazon OpsWorks</a>. </p>
<p>Once you have your configuration in a language like this, deploying to a new server is very easy. Sometimes I modify the configuration via ssh to test something and then I update my puppet scripts.</p>
<p>There is another emerging concept called <a href="http://martinfowler.com/bliki/ImmutableServer.html">immutable servers</a>; it is a very interesting approach and some companies seem to be using it.</p>
<h2>Sources and repositories</h2>
<p>Auth0 is very modular: it is not a single web application but a network of fewer than ten. Every web application is a node application. Our <strong>core</strong> is a web app without a UI which handles the authentication flows and provides a REST interface; <strong>dashboard</strong> is another web application which is an interface to our <strong>core</strong> where you can configure and test most of the settings; <strong>docs</strong> is another app full of markdown tutorials, to name a few.</p>
<p>We use github private repositories because we already had a lot of things opensourced there.</p>
<p>We use branches to develop new features and when a feature is ready we merge it to master. Master is always deployable. We took some of the concepts from a talk we saw: &quot;<a href="http://zachholman.com/talk/how-github-uses-github-to-build-github/">How GitHub uses GitHub to build GitHub</a>&quot;. When something is ready is a tricky question, but we are a very responsible and self-organized team, and we open pull requests <em>from branch to master</em> when we want the approval of our peers. TeamCity automatically runs all tests and marks the pull request as OK, which is a very useful feature. But the most important thing we do at this stage is code review.</p>
<p>Very often we send a branch to one of our 4 tests environments with our <a href="https://github.com/github/hubot">hubot</a> (a personal bot on the chat room):</p>
<p><img src="https://s3.amazonaws.com/joseoncode.com/img/2013-08-25_1622.png" alt=""></p>
<ul>
<li> <strong>ui</strong> is our dashboard application</li>
<li> <strong>template-scripts</strong> was a branch</li>
<li> <strong>proto</strong> is the name of the environment</li>
</ul>
<p>With that in place we can review a live instance of the branch in an environment similar to production.</p>
<p>Then we iterate until we finally merge.</p>
<p>This is what works for us now: anyone on the team can merge or push directly to master, and we consciously decide when we should do the pull-request ceremony.</p>
<h2>Continuous integration</h2>
<p>We used <a href="http://jenkins-ci.org/">Jenkins</a> for six months, but it failed a lot and I had to rebuild a few of the plugins we were using. Then I had a short fantasy of building our own CI server, but we chose <a href="http://www.jetbrains.com/teamcity/">TeamCity</a> since I had used it before, I knew how to set it up and it is a good product.</p>
<p>Every application is a project in TeamCity; when we push to master, TeamCity does the following:</p>
<ol>
<li> pull the repository</li>
<li> install the dependencies (in some repos node_modules is versioned)</li>
<li> run all the tests</li>
<li> bundle node dependencies with <a href="http://github.com/carlos8f/bundle-deps">carlos8f&#47;bundle-deps</a></li>
<li> bump package version</li>
<li> npm pack</li>
</ol>
<p>Steps 1, 2 and 3 are very common even in non-node.js projects. In the 4th step we move all &quot;dependencies&quot; to &quot;bundleDependencies&quot; in the package.json; by doing this, the <code>npm pack</code> tarball contains all the modules already preinstalled. The result of the task is the tgz generated by <code>npm pack</code>.</p>
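<p>The effect of the bundling step on a package.json looks roughly like this (the module names and versions are made up for illustration):</p>

```json
{
  "name": "some-app",
  "version": "1.0.7",
  "dependencies": { "express": "3.x", "q": "~0.9" },
  "bundleDependencies": ["express", "q"]
}
```

With this in place, <code>npm pack</code> includes the listed modules under node_modules inside the tarball, so the target server doesn&#39;t need to run <code>npm install</code>.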
<p>This automatically triggers the next task, called &quot;<strong>configuration</strong>&quot;. This task pulls our configuration repository, written in Puppet, together with all the Puppet submodules; it then takes the latest version of every node project and builds one &quot;.tgz&quot; for every &quot;site&quot; we have in Puppet. We have several &quot;site&quot; implementations, for example:</p>
<ul>
<li> <strong>myauth0</strong> used to create a quick appliance</li>
<li> <strong>auth0</strong> the cloud version you see at app.auth0.com. It is very different from the previous one; for instance, it does not install mongodb since we use MongoLab in the cloud deployment.</li>
<li> <strong>some-customer</strong> some customers need specific settings or features, so we have configurations named after those customers.</li>
</ul>
<p>The artifact of the <em>configuration task</em> is a tgz with all the Puppet modules, including auth0 and the site.pp. All the packages are uploaded to Azure Blob storage at this stage.</p>
<p>The next task in the CI pipeline, called &quot;<strong>cloud deploy</strong>&quot;, triggers immediately after the configuration task. It updates the puppetmaster (currently on the same CI server) and then runs the puppet agent on every node of our load-balanced stack via ssh. After it deploys to the first node it runs a quick test of that node; if something is wrong it stops and does not deploy to the rest of the nodes. The Azure load balancer then takes the failing node out of rotation until we fix the problem in the next push.</p>
<p>We have a backup environment where we continuously deploy; it is on Amazon and in a different region. It has a clone of our database (at most 1h stale). This node is used in case Azure US East has an outage or something like that; when that happens, <a href="http://aws.amazon.com/en/route53/">Route53</a> redirects the traffic to the backup environment. We take high availability seriously; read more <a href="http://www.auth0.com/trust">here</a>. When running in backup mode all the settings become read-only. This means that you can&#39;t change the properties of an identity provider, but your users will still be able to log in to your application, which is Auth0&#39;s critical mission.</p>
<p>Adding a new server to our infrastructure takes very few steps:</p>
<ul>
<li> create the node</li>
<li> run a small script that installs and configures the puppet agent</li>
<li> approve the node in the puppetmaster</li>
</ul>
<p>Assembling an appliance for a customer is very easy as well: we run a script that installs puppetmaster on the VM, downloads the latest config from blob storage and runs it. We use <a href="http://es.wikipedia.org/wiki/JeOS">Ubuntu JeOS</a> in this case.</p>
<h2>Final thoughts</h2>
<p>I had to skip a lot of details to keep this article concise. I hope you find it useful; if there is something you would like to know about any of this, don&#39;t hesitate to ask in a comment.</p>
<h1>Promises A+ and Q</h1>
<p>Thu, 23 May 2013, by José F. Romaniello (<a href="http://joseoncode.com/2013/05/23/promises-a-plus">permalink</a>)</p>
<p>Two years ago I published a blog post about <a href="http://joseoncode.com/2011/09/26/a-walkthrough-jquery-deferred-and-promise/">jQuery promises</a>. I have gotten a lot of feedback since then, and even though that post is still valid for jQuery, I want to draw attention to the great job some of the JavaScript gurus are doing.</p>
<p>The important specification here is <a href="http://promises-aplus.github.io/promises-spec/"><strong>Promises A+</strong></a></p>
<p><img src="https://rawgithub.com/promises-aplus/promises-spec/master/logo.svg" alt="promisesalogo" style="width: 216px;"></p>
<p>The specification is very short, readable and useful. Go read it. It specifies the interface of a promise regardless of how it was created.</p>
<p>There are several frameworks and libraries that follow this specification, and this is a <strong>GOOD</strong> thing, because it means that you can pass a promise from one library to another and everyone speaks the same interface.</p>
<h3>So, what&#39;s a promise again?</h3>
<blockquote>
<p>A promise represents a value that may not be available yet. </p>
</blockquote>
<p>There is another definition I heard that I like a lot:</p>
<blockquote>
<p>A promise is an asynchronous value.</p>
</blockquote>
<p>If you have done any JavaScript you know that when you need to call an asynchronous function you have to pass a <code>callback</code>, a function that will be called after the asynchronous work finishes. The function doesn&#39;t return anything, and this sometimes makes asynchronous code harder to compose.</p>
<h3>The Q library</h3>
<p><a href="https://github.com/kriskowal/q"><strong>Q</strong></a> is a library that implements the standard and adds some extra helpers. Q works in the browser and in node.js.</p>
<p>From now on I will use Q to show some examples, but keep in mind that the very basic things are part of Promises&#47;A+ and Q adds some friendly helpers on top of that.</p>
<h3>Basic usage</h3>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/xFFVn/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>In this first example I called <code>Q.delay(2000)</code>; this method returns a promise that will be <em>fulfilled</em> after two seconds. <em>You can think of this method as a <code>setTimeout</code> that, instead of taking a callback parameter, returns a promise.</em></p>
<p>Every promise has a <code>then</code> method that receives two arguments (two callbacks) used to access the fulfilled value and the rejection reason. Either callback can be null or undefined.</p>
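<p>In case the embedded fiddle doesn&#39;t render, here is the same idea in plain code. The built-in <code>Promise</code> (standardized after this post) follows the same Promises&#47;A+ <code>then</code> contract as Q, so a <code>delay</code> in the spirit of <code>Q.delay</code> can be sketched as:</p>

```javascript
// a minimal delay() in the spirit of Q.delay
function delay(ms) {
  return new Promise(function (resolve) {
    setTimeout(resolve, ms);
  });
}

delay(2000).then(
  function () { console.log('fulfilled after two seconds'); }, // onfulfilled
  function (err) { console.error('rejected:', err); }          // onrejected
);
```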
<h3>Chaining</h3>
<p><code>then</code> returns a new promise; this allows <strong>Promises A+</strong> to be <em>chained</em>:</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/qdmgy/1/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>In this example I&#39;m returning a value in the first then&#47;onfulfilled function; this makes the returned promise be fulfilled with that value (section 3.2.6.1 in the spec).</p>
<p>Because this is something you do a lot, Q promises have a helper <code>thenResolve</code>:</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/cyqU7/2/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>The most interesting thing about chaining promises is that you can <em>serialize</em> work:</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/mnNae/2/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>In this example we first get the user with <code>getUser</code> and then we get his tweets with <code>getTweets</code>. The result of <code>then(getTweets)</code> is a new promise that will be fulfilled, with the tweets, once both operations have completed.</p>
<p>Can you read that as <strong>&quot;getUser then getTweets then forEach tweet alert tweet message&quot;</strong>? This is important: we are working with asynchronous code in JavaScript, yet the code is still very readable and easy to compose.</p>
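<p>Since the example lives in an embedded fiddle, here is the shape of that chain with hypothetical <code>getUser</code>&#47;<code>getTweets</code> stubs (written against the built-in <code>Promise</code>, which implements the same interface):</p>

```javascript
// hypothetical stubs standing in for real asynchronous calls
function getUser(name) {
  return Promise.resolve({ name: name, id: 42 });
}

function getTweets(user) {
  return Promise.resolve(['first tweet', 'second tweet']);
}

// getUser then getTweets then forEach tweet log the message
getUser('jose')
  .then(getTweets)
  .then(function (tweets) {
    tweets.forEach(function (tweet) { console.log(tweet); });
  });
```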
<h3>Deferred</h3>
<p>So far we have only used the promise returned by the delay method. Another way to create promises is <code>Q.defer</code>. A deferred has two important methods, <code>resolve</code> and <code>reject</code>, and a <code>promise</code> property. It goes without saying that this part is not in the specification, and different frameworks might have different ways to create deferreds.</p>
<p>The <code>delay</code> method in Q could be implemented with a deferred as follows:</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/enU7D/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>At the time I&#39;m writing this, jQuery promises are not compatible with Promises&#47;A and Promises&#47;A+, so an easy way to fix that is as follows:</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/xSU2G/2/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>Although jQuery promises don&#39;t follow the specification, the Q implementation handles them in a straightforward way:</p>
<div class="highlight"><pre><code class="text">Q($.get(&#39;/something&#39;))
</code></pre></div>
<p>You can wrap a jQuery promise with Q to convert it to a Promises&#47;A+ promise.</p>
<h3>Parallelism</h3>
<p>What if you need to run several asynchronous tasks that don&#39;t depend on each other? Use <code>Q.all</code>.</p>
<p><code>Q.all</code> converts an array of promises into a single promise that is fulfilled with an array of all the values once every promise is fulfilled, or rejected with the first rejection reason.</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/FRRxM/3/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>In this example I&#39;m calling getUser three times, once for each of the three ids in the array. Then I wait for the three promises to be fulfilled (this happens after approximately one second) and then I show a message.</p>
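<p>The same shape in plain code (the <code>getUser</code> stub below is hypothetical; it fakes a network call with a timer, and the built-in <code>Promise.all</code> behaves like <code>Q.all</code>):</p>

```javascript
// hypothetical stub: resolves with a profile after ~100ms
function getUser(id) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve({ id: id }); }, 100);
  });
}

Promise.all([1, 2, 3].map(getUser)).then(function (users) {
  // runs once, after all three promises are fulfilled
  console.log('fetched ' + users.length + ' users');
});
```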
<p>A more complex example here:</p>
<iframe width="100%" height="300" src="http://jsfiddle.net/jfromaniello/FRRxM/4/embedded/" allowfullscreen="allowfullscreen" frameborder="0"></iframe>
<p>In this case the <code>spread</code> method (from Q, not the standard) works like <code>then</code> but &quot;spreads&quot; all the values over the arguments, so we can pass the <code>mergeProfiles</code> function directly.</p>
<h3>Error throwing and handling in asynchronous code</h3>
<p>Another interesting thing about promises is error handling. In node.js land you often end up with code like this:</p>
<div class="highlight"><pre><code class="text">doFoo(function (err, r1) {
  if (err) return handleError(err);
  doBar(r1, function (err, r2) {
    if (err) return handleError(err);
    doBaz(r2, function (err, r3) {
      if (err) return handleError(err);
      callback(r3);
    });
  });
});
</code></pre></div>
<p>I want you to notice this line, repeated three times:</p>
<div class="highlight"><pre><code class="text">if (err) return handleError(err)
</code></pre></div>
<p>With promises you can write this same code as follows:</p>
<div class="highlight"><pre><code class="text">doFoo()
  .then(doBar)
  .then(doBaz)
  .then(null, handleError);
</code></pre></div>
<p>Because the first two <code>then</code> calls don&#39;t have an onrejected handler, they pass the rejection reason on to the next promise until someone handles the error. More interesting: once a promise is rejected, none of the fulfillment handlers here will be called.</p>
<p>The other interesting thing is that if you throw an exception inside a then handler, the returned promise is rejected.</p>
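<p>Both behaviors can be seen in a few lines (again with the built-in <code>Promise</code>, which follows the same rules):</p>

```javascript
Promise.resolve(1)
  .then(function () {
    throw new Error('boom'); // throwing inside a handler rejects the returned promise
  })
  .then(function () {
    console.log('never called'); // skipped: the chain is already rejected
  })
  .then(null, function (err) {
    console.log('handled: ' + err.message); // prints "handled: boom"
  });
```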
<h3>node.js</h3>
<p>The node.js API and most modules follow a convention for asynchronous code: functions take a callback as the very last parameter, and this callback is called with an error and a value.</p>
<p>Q makes it easy to convert this style to promises as follows:</p>
<div class="highlight"><pre><code class="text">var Q = require(&#39;q&#39;);
var readdir = Q.nfbind(require(&#39;fs&#39;).readdir);

//usage
readdir(&#39;./path&#39;)
  .then(function (files) {
    //do something with files
  }, function (err) {
    //handle the error
  });
</code></pre></div>
<p>This <code>nfbind</code> method has an alias, <code>denodeify</code>.</p>
<p>There are a lot more helpers, but another interesting one is <code>nodeify</code>. Do you want to use promises internally yet expose a standard node.js callback API to the world? Use nodeify:</p>
<div class="highlight"><pre><code class="text">module.exports = function (callback) {
  mysuperpromise()
    .then(blabla)
    .nodeify(callback);
};
</code></pre></div>
<h3>Tests</h3>
<p>This is not that important, but it is something I found and like a lot. When writing unit tests against asynchronous code, you typically do something like this:</p>
<div class="highlight"><pre><code class="text">function test (done) {
  getSomething(function (err, result) {
    if (err) return done(err);
    Assert.areEqual(result, 123);
    done();
  });
}
</code></pre></div>
<p>As I said before, &quot;promises are asynchronous values&quot;. What if the assert and test frameworks could handle promises as well? You could easily write something like this:</p>
<div class="highlight"><pre><code class="text">function test () {
  return getSomething().should.eventually.equal(123);
}
</code></pre></div>
<p>This is already done and you can use it today; have a look at <a href="https://github.com/domenic/chai-as-promised">chai-as-promised</a>.</p>
<h3>More material</h3>
<p>Watch this video:</p>
<iframe width="560" height="315" src="http://www.youtube.com/embed/hf1T_AONQJU" frameborder="0" allowfullscreen></iframe>
<p>Follow <a href="https://twitter.com/domenic">@domenic</a>.</p>
<p>Read <a href="http://domenic.me/2012/10/14/youre-missing-the-point-of-promises/">his blogpost</a>.</p>
<p><a href="https://github.com/kriskowal/q/wiki/API-Reference">Q Api Reference</a> is very helpful.</p>
<h3>Conclusion</h3>
<p>Promises are the future (of JavaScript asynchronous code). I put JavaScript there because I am sure some people are working on <em>better</em> languages with better syntax for asynchronous flows, but that does not feel like it is going to change for JavaScript in the short term.</p><img src="http://feeds.feedburner.com/~r/JoseFRomaniello/~4/wqld93AIlno" height="1" width="1" alt=""/>http://joseoncode.com/2013/05/23/promises-a-plusActivation links with Hawkhttp://feedproxy.google.com/~r/JoseFRomaniello/~3/Z5rOQ0INgN0/activation-links-with-hawk
Wed, 22 May 2013 08:44:00 +0000jfromaniello@gmail.com (José F. Romaniello)http://joseoncode.com/2013/05/22/activation-links-with-hawk<p>If you have ever written a sign-up form, I am sure you have faced the use case where you have to generate an activation link to confirm the email account of a new user. One of the most common solutions is to generate a random, unique identifier and save it to the database.</p>
<p>In this post I will show how to generate a secure link using <a href="https://github.com/hueniverse/hawk">Hawk</a>.</p>
<blockquote>
<p>Hawk is an HTTP authentication scheme using a <a href="http://en.wikipedia.org/wiki/Message_authentication_code">message authentication code (MAC)</a> algorithm to provide partial HTTP request cryptographic verification.</p>
</blockquote>
<p>Hawk allows the MAC to be sent in an HTTP header or in a query string parameter. We will use this last feature, known as &quot;bewit&quot;.</p>
<p>The basic idea is that Hawk can generate a MAC for a URL, covering every part of it (protocol, host, port, path and query), and it can then validate whether a given MAC is valid for that particular URL. MACs are generated with a private key.</p>
<p>Imagine we have already saved the user profile with an <code>active: false</code> flag and now we want to send the activation link. We can call this little module to generate the link:</p>
<div class="highlight"><pre><code class="text">var hawk = require(&#39;hawk&#39;);
var urljoin = require(&#39;urljoin&#39;);

var credentials = {
  id: &#39;l&#39;,
  key: &#39;my super secret key&#39;,
  algorithm: &#39;sha256&#39;
};

function getActivationLink (user) {
  var url = urljoin(process.env.BASE_URL, &#39;/activate?user=&#39; + user.email);
  var bewit = hawk.uri.getBewit(url, {
    credentials: credentials,
    ttlSec: 60 * 5
  });
  return url + &#39;&amp;bewit=&#39; + bewit;
}
</code></pre></div>
<p>I&#39;ve used <a href="https://github.com/jfromaniello/url-join">url-join</a> to join a BASE_URL environment variable with the path of the activation endpoint. Another thing to notice is that this MAC will be valid for just 5 minutes after the link is generated.</p>
<p>The resulting link will look like this <code>http:&#47;&#47;mysuperapp.com&#47;activate?user=foo@bar.com&amp;bewit=H3424HFSDKJ4FDS</code>.</p>
<p>The next step is to handle the activation endpoint. If we are using Express we can have a middleware like this:</p>
<div class="highlight"><pre><code class="text">var hawk = require(&#39;hawk&#39;);

var credentials = {
  id: &#39;l&#39;,
  key: &#39;my super secret key&#39;,
  algorithm: &#39;sha256&#39;
};

function credentialsFunc (id, callback) {
  return callback(null, credentials);
}

function validateMac (req, res, next) {
  hawk.uri.authenticate(req, credentialsFunc, {}, function (err, credentials, attributes) {
    if (err) return res.send(401);
    next();
  });
}

module.exports = validateMac;
</code></pre></div>
<p>And then the activation endpoint will look like this:</p>
<div class="highlight"><pre><code class="text">app.get(&#39;/activate&#39;, validateMac, function (req, res) {
  //this gets called only if the mac is valid
  //save in the database that req.query.user is an active user.
});
</code></pre></div>
<p>That is all. The benefits of this technique:</p>
<ul>
<li> no need to query the database when activating the user.</li>
<li> no need to store another secret in the database.</li>
</ul>
<p>Do not trust randomness, cryptography is your friend.</p><img src="http://feeds.feedburner.com/~r/JoseFRomaniello/~4/Z5rOQ0INgN0" height="1" width="1" alt=""/>http://joseoncode.com/2013/05/22/activation-links-with-hawknode.js require helper for sublimehttp://feedproxy.google.com/~r/JoseFRomaniello/~3/9dN1kaQZ0cs/node-dot-js-require-helper-for-sublime
Thu, 16 May 2013 10:20:00 +0000jfromaniello@gmail.com (José F. Romaniello)http://joseoncode.com/2013/05/16/node-dot-js-require-helper-for-sublime<p>Some time ago I published a plugin for Sublime that makes my life easier when working in node.js. It allows me to introduce <code>require</code> calls by searching for files in the current folder.</p>
<p>I press <code>⌘⇧m</code>, then I search for the file&#47;module I want to require and it automatically calculates the relative path. I can also use it to introduce requires for native modules, or for the modules I&#39;ve installed in my node_modules folder.</p>
<p>Here is a short video: </p>
<p><img src="https://s3.amazonaws.com/joseoncode.com/img/require-helper.gif" alt=""></p>
<p>You can install it with the Sublime Package Control, source code is <a href="https://github.com/jfromaniello/sublime-node-require">here</a>.</p><img src="http://feeds.feedburner.com/~r/JoseFRomaniello/~4/9dN1kaQZ0cs" height="1" width="1" alt=""/>http://joseoncode.com/2013/05/16/node-dot-js-require-helper-for-sublimeIntroducing mdocs.iohttp://feedproxy.google.com/~r/JoseFRomaniello/~3/73CSJp5Q91Q/introducing-mdocs-dot-io
Thu, 28 Feb 2013 20:04:00 +0000jfromaniello@gmail.com (José F. Romaniello)http://joseoncode.com/2013/02/28/introducing-mdocs-dot-io<p>I would like to introduce a tool we have built to demo <a href="http://auth0.com">Auth0</a>.</p>
<p><a href="http://mdocs.io">mdocs.io</a> is a free and <a href="https://github.com/auth0/mdocs">opensource</a> tool for writing documents collaboratively. You can think of it like google docs with markdown. It uses a technique called operational transformation to allow users to edit a document simultaneously.</p>
<p>Every document is private at the beginning, and you can easily share it with your peers or make it publicly available.</p>
<h2>How auth0 powers mdocs.io</h2>
<p>Login with google through OAuth, selecting contacts, etc. is basic stuff, and there are lots of tools for this kind of thing on every platform. You can use mdocs in <em>solo</em> mode with your @gmail.com account.</p>
<p>But we wanted to use mdocs just as we use google docs; this means being able to share documents across a company or group of people.</p>
<p>So, when you go to <a href="http://mdocs.io">mdocs.io</a> you will see an option to connect mdocs to your company using <strong>google apps</strong>, <strong>office365</strong> or <strong>adfs</strong>. In order to complete this process you will need to involve an admin of the domain. If you are not the admin, you can follow the procedure, which takes about two clicks, and send the link at the end of the process to your admin.</p>
<p>After you have finished this process you can login with your account on <a href="http://mdocs.io">mdocs.io</a>, or you can bookmark a link like <strong>http:&#47;&#47;mdocs.io&#47;e&#47;yourcompany.com</strong>, in which case you will see a google prompt like this:</p>
<p><img src="https://s3.amazonaws.com/joseoncode.com/img/dump/2013-02-28_1941.png" alt=""></p>
<p>if you are not currently logged in (or the equivalent adfs login, etc).</p>
<p>Then, you will be able to share documents to your company peers:</p>
<p><img src="https://s3.amazonaws.com/joseoncode.com/img/dump/2013-02-28_1944.png" alt=""></p>
<h2>More details</h2>
<p>mdocs.io is running on heroku and it uses mongodb through <a href="http://mongolab.com">mongolab</a> and elastic search with <a href="http://bonsai.io">bonsai.io</a> (yes! searching documents works like a charm).</p>
<p>It also uses a JavaScript framework called <a href="https://github.com/Operational-Transformation">ot.js</a> for the collaborative part. It is pretty interesting how that concept works; maybe I will expand on it in another post.</p>
<p>It has some powerful key shortcuts when you are editing a document.</p>
<p>It is built on node.js of course :).</p>
<h2>Final thoughts</h2>
<p>We think that Auth0 opens up a lot of new possibilities, and we really love mdocs.io; we use it to brainstorm ideas, to write articles, documentation and a lot of other things.</p>
<p>Since we are building this in the open, any pull request that we merge in the github repository will be immediately available on mdocs.io. So, if you feel like it could be better, go ahead and help us :)</p>
<blockquote>
<p>If you want to learn more about Auth0 follow my upcoming articles on <a href="http://blog.auth0.com">blog.auth0.com</a></p>
</blockquote><img src="http://feeds.feedburner.com/~r/JoseFRomaniello/~4/73CSJp5Q91Q" height="1" width="1" alt=""/>http://joseoncode.com/2013/02/28/introducing-mdocs-dot-ioStatus reporthttp://feedproxy.google.com/~r/JoseFRomaniello/~3/ItdZfmLEb9Y/status-report
Fri, 01 Feb 2013 07:36:00 +0000jfromaniello@gmail.com (José F. Romaniello)http://joseoncode.com/2013/02/01/status-report<p>I have good news: I am not dead. </p>
<p>I have been really quiet on this blog for a long time because I was not inspired to write and&#47;or I didn&#39;t have anything important to share. Now I have lots of things to share, but before that I just want to write about the last few months.</p>
<p>I left Tellago 5 months ago. I was working with a really smart team on a very interesting project called <a href="http://kidozen.com">KidoZen</a>. I started it from day zero and I gave a lot of my time and effort to make it happen. The project is still going on, and I prefer not to talk here about why I left Tellago.</p>
<p>The most important thing about my last project at Tellago is that it gave me a lot of experience in node.js, something I had never used before. I contributed to various opensource projects in node, and I also wrote my own stuff, mostly because we needed to run KidoZen on Windows (<a href="https://github.com/jfromaniello/winser">winser</a> and <a href="https://github.com/jfromaniello/windowseventlogjs">windows-eventlog</a>).</p>
<p>After Tellago I took some time to try new ideas on my own while I was looking for a more stable job. I had some very good job offers, but I have to admit that I am very picky about jobs now, which is bad because at the same time I need money to live and maintain a family :).</p>
<p>After two months or so in this situation I found an amazing and talented group of people to work with. <a href="http://woloski.com">Matías Woloski</a> was leaving SouthWorks and planning something with <a href="https://twitter.com/eugenio_pace">Eugenio Pace</a>. They told me about their plans, and they felt like great people to work with. I enjoy every day I work with them and I am very thankful for this opportunity.</p>
<p>So what are we up to? The company name is <a href="http://qraftlabs.com" title="Quality Crafted Software">Qraftlabs</a>, a word that mixes Quality, Craft and Lab and describes us very well. In the last few months we have been doing a little consultancy and, at the same time, building a new product called <a href="http://auth0.com">Auth0</a>, which I think is looking great and is going to be a success.</p>
<p>Auth0 makes it easy for small startups to sell their services to companies. Let&#39;s say you have a product, an <em>issue tracker</em>, which authenticates users with usernames and passwords. Now suppose that bigcompany.com wants to use your product, but they want&#47;need their employees to use their @bigcompany.com accounts, and they want to be able to share issues with their employees. Your product will have to support some weird combinations of identity providers like office365, adfs, google (apps), etc., and also be able to query these directories. Auth0 does this for you, and we have prepared a lot of docs and samples about it. If you are interested in Auth0 follow <a href="http://blog.auth0.com">our blog</a> and <a href="http://twitter.com/auth0">twitter</a>.</p><img src="http://feeds.feedburner.com/~r/JoseFRomaniello/~4/ItdZfmLEb9Y" height="1" width="1" alt=""/>http://joseoncode.com/2013/02/01/status-reportContinuous testing in node with supervisorhttp://feedproxy.google.com/~r/JoseFRomaniello/~3/lfswcUFQM2A/continuous-testing-in-node
Wed, 21 Nov 2012 12:12:00 +0000jfromaniello@gmail.com (José F. Romaniello)http://joseoncode.com/2012/11/21/continuous-testing-in-node<p>I have been using a little module from Isaac Schlueter named <a href="https://github.com/isaacs/node-supervisor">Supervisor</a> for continuous testing.</p>
<p>Suppose you have a Makefile like this:</p>
<pre><code class='bash'>
REPORTER ?= spec

test:
	@clear && reset
	./node_modules/.bin/mocha --reporter $(REPORTER)

.PHONY: all test clean
</code></pre>
<p>You can add another target as follows:</p>
<pre><code class='bash'>
watch:
	./node_modules/.bin/supervisor -q -n exit -e &#39;node|js|json|config&#39; -x make test
</code></pre>
<p>The parameters mean:</p>
<ul>
<li> q: quiet (suppress debug messages)</li>
<li> n: no restart on exit</li>
<li> e: watch for changes in these extensions</li>
<li> x: the executable for this will be <strong>make</strong></li>
<li> test: the name of the thing we want to execute with <strong>make</strong></li>
</ul>
<p>This works pretty well for me. <a href="http://visionmedia.github.com/mocha/">mocha</a> has an option for continuous testing, <strong>-w</strong>, but it is quite broken because it runs everything in the same node process.</p><img src="http://feeds.feedburner.com/~r/JoseFRomaniello/~4/lfswcUFQM2A" height="1" width="1" alt=""/>http://joseoncode.com/2012/11/21/continuous-testing-in-node