When would a leak actually occur?

A leak occurs when you continuously add event handlers without removing them.
This typically happens when you use a single emitter instance many times.
Let's write a function that returns the next value in a stream:

If all the functions above add handlers to the data event,
you're going to get the same leak error message,
but you know there isn't an actual leak.
At this point, you should set the maximum number of listeners accordingly:

What if the route doesn't exist?

What if a user GETs /thing/:id/lakjsdflkajsdlkfjasdf?
What would actually happen is:

Match /thing/:id/*

Get req.user

Look for the next match

Return 404 (because there are no matches)

Your app has executed an extra database call when it didn't have to. Ideally, it would instead:

Match all routes

Return 404 because none match

Except this doesn't actually work with wildcard routing.

What if it's more than just a database call?

If you add a multipart parser that downloads files to disk like this:

app.use(bodyParser.multipart())

If an attacker simply posts a bunch of files to ANY route,
your server would quickly be flooded with files and run out of disk.
This is one of the main reasons the multipart parser
was removed from Connect and Express.
Because implementing multipart is controversial,
it's better not to provide the parser at all.

The Koa way

It becomes very complex and ugly to route properly in Express because
of the extensive use of callbacks.
It's simply not an option to have every route have 10 nested callbacks.
But with Koa or any future async/await framework,
things will be much easier when your code looks like this:
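A sketch of what that could look like in Koa (koa-route, getUser, and getThing are my own assumptions here, not a prescribed API):

```js
var app = require('koa')()
var route = require('koa-route')

app.use(route.get('/thing/:id', function* (id) {
  // each asynchronous step reads top to bottom, no nesting
  var user = yield getUser(this)
  var thing = yield getThing(id)
  if (!thing) this.throw(404)
  this.body = thing
}))
```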

When ditching callbacks for promises or generators, life suddenly becomes
much easier. You don't have to worry as much about error handling.
There are no pyramids of doom. But at first, you'll be confused
with how promises and generators work, and finally, you'll be able
to use them with expertise!

.map(function* () {})

Some people, including me until recently, see function* () {} as a magical function.
They don't correlate function* () {} with regular functions.
However, function* () {} is in fact a regular function! The only real difference
between function* () {} and function () {} is that the former returns a generator.
That means you can pass function* () {} basically anywhere a regular function
can go; just make sure you realize that a generator is returned.

For example, to execute in parallel, you might do something like this:
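A sketch under co (readFile here stands for a hypothetical thunkified fs.readFile):

```js
var co = require('co')

co(function* () {
  var files = ['a.txt', 'b.txt', 'c.txt']
  // calling a generator function returns a generator;
  // co executes a yielded array of generators in parallel
  var contents = yield files.map(function* (file) {
    return yield readFile(file)
  })
})()
```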

As a maintainer of many popular packages such as express,
I, along with the other maintainers, have come to realize that
semantic versioning simply does not work in practice.
It has caused a lot of bikeshedding, creating non-constructive
discussions in the repository.
It generally makes development actually more difficult.

Recently, some people have created some ideas about how to improve versioning.
I first proposed getting rid of versions < 1 altogether, as
0.x.x modules are inherently unstable due to their lack of specification.
Damon Oehlman proposed slimver,
a stricter variant of semantic versioning.

I'm proposing ferver,
which changes the semantics of semver to be more practical with breaking changes.
You can read more about it on the GitHub page,
but first, let's talk about the failures of semver.

Patches break

One of the major issues with semver is that patches can break.
A feature could be "fixed", but a consumer could have relied on that very
buggy feature, and patching it would break their app.
A common case is when a library simply does not behave
according to specification and is then fixed to conform to it.

An example is with Express 3.4.3.
A bug with redirects was fixed, but some users were relying on that bug.
Thus, even though it was a patch (3.4.2 -> 3.4.3),
it broke some people's apps.
A user asked to at least bump a minor version,
but if we strictly adhere to semver,
we can't because it's a patch, not a new feature.

I greatly sympathize with this user, and this particular case
is essentially the first time I realized, "semver doesn't work".

0.x.x is anarchy

Versions < 1.0.0 carry no semantics according to semver.
These libraries are considered "in development", and developers
bump the minor or patch numbers however they see fit. It's anarchy.

The problem isn't that versions < 1.0.0 are allowed.
Nope, it makes sense for libraries to be able to make breaking changes before
declaring a stable 1.0.0. The problem is that there are no semantics
to 0.y.z versions. Consumers simply don't know how to depend
on these modules using version ranges
without introducing a significant amount of risk into their app.
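For example, a tilde range on a 0.x dependency offers no real guarantees (some-module is a made-up name):

```json
{
  "dependencies": {
    "some-module": "~0.6.0"
  }
}
```

npm will happily install any 0.6.x release here, but since 0.y.z versions carry no semantics, that "patch" is just as likely to break your app as to fix it. The only safe option is pinning to exactly 0.6.0, with all the downsides that follow.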

Pinned dependencies

The current solution to the above problems is to pin all dependencies.
However, this is absolutely stupid and annoying to me.
If you're pinning versions, you're reducing the package manager
to a glorified curl.

It makes maintaining very difficult and annoying.
Duplicate dependencies are bad, especially in frontend development
where file size matters, which is one reason frontend developers prefer Bower's
flatter dependency directory to npm's nested one.
When every library pins, you're going to have a lot of duplicate dependencies,
even if they're the same version!
Not everyone has the time to update every patch and make a new release.

Even if you control the dependencies, some people like pinning them anyway.
To me, this is absolutely silly, but it is necessary because patches can break.
For example, if you look at the 2.x branch of Connect,
you'll see that all the commits are just dependency updates.
However, none of them break backwards compatibility because
Connect adheres to semver.
These updates should not be necessary and should be available
just from typing npm update.

Slower development

If you look at Express' current issues,
you'll see that half of them are planned for the next major version, 5.0.0:

The problem with this is that these minor issues may be backwards incompatible,
but they are nevertheless issues that consumers would have to deal with
until v5.0.0.

For example, Update path regexp functionality would break routes
for a very few people who write really weird routes,
but it introduces many new features and provides better semantics.
This introduces a lot of benefits for most developers
while introducing risk to a very few developers.

Ideally, these changes should happen as fast as possible,
but in a way that tells consumers, "Hey, this is new and improved, but it might
break your app. Proceed with caution." There's no way to say that with semver
except with major version releases.

Prereleases

Semver does not have a good scheme for prereleases.
People append all sorts of weird strings to their versions.
1.0.0-beta1. 1.0.0-3.2.3.1. Who knows what these mean.
It only makes libraries more difficult to consume and confuses consumers.

Since semver is liberal with these suffixes,
package managers like npm have trouble dealing with them.
For example, if you use 1.5.0-beta1 of a library and the latest
version is 1.4.0, npm outdated will mark 1.5.0-beta1 as outdated.
Yeah, I don't think that's outdated.

A good versioning system would allow for prereleases and beta builds
while still adhering to x.y.z versioning.
It would also be able to allow consumers to distinguish between
prereleases and releases semantically.

The fear of x.0.0

Many developers never release v1.0.0 of their projects.
With semver, this is really annoying because you have to pin
to reduce the risk of breaking changes.

But others absolutely hate when libraries update the major version.
They see it as a sign of "instability",
but according to semver, these backwards-incompatible changes
could be something as insignificant as returning null instead of undefined,
which wouldn't break most people's apps.

The problem here is that people don't associate major versions with "breaking changes".
They associate it with the character, purpose, and philosophy of a library.
0.x.x means "We don't know what we're doing".
1.0.0 means "We think we know the direction of this library".
2.0.0 means "We're changing directions a little bit".
3.0.0 means "We're changing directions a little bit, again".

Semver simply has the wrong semantics.
Not every breaking change is a fundamental difference in a library's character.
People are okay if you break things here and there,
but it must be easy for them to know when you break something.
With semver, this can only be done with a major version bump.

Solving semver

There are two ways to solve semver: bump major versions often,
or use a different versioning scheme.

Currently, I release most new modules I write as 1.0.0 and liberally
bump major versions. For example, koa-session
is already at 2.0.0, but koa hasn't even
reached 1.0.0.
Hell, co has already reached
3.0.6 and ES6 isn't even finalized.

This is why I proposed having semver drop versions < 1.
Who cares if you're at version 36, like Chrome?
I just want to know if something would break!
But this is not suitable for most people since, due to semver's semantics,
they correlate a lot of major version bumps with the library being "unstable".

The other solution is to just use a different versioning scheme.
This is what ferver is: versioning
based on whether a change is breaking.
Please, don't use it though. It's only a thought.

There's a new node.js framework in town, and its name is Koa.
It's the spiritual successor to Connect and Express,
written by the same author, TJ Holowaychuk.
It has a very similar middleware system,
but is completely incompatible with any other node.js framework.

Koa is bleeding edge and has not yet reached version 1.0,
but many people including TJ and myself have already ditched Express for Koa.
TJ himself has stepped back from maintaining Connect and Express and has instead delegated maintenance to a team, myself included.
Don't worry about using Connect or Express, they will still be maintained!

So why should you, and why shouldn't you, ditch Express for Koa like TJ and I have?

Why you should

Superior, callback-less control flow

Thanks to Koa's underlying generator engine co,
there's no more callback hell.
Of course, this is assuming you write your libraries using generators, promises, or thunks.

But co's control flow handling isn't about eliminating callbacks.
You can also execute multiple asynchronous tasks in parallel and in series without calling a function.

app.use(function* () {
  yield [fn1, fn2, fn3]
})

Bam! You've just executed three asynchronous functions in parallel.
You've eliminated the need for any other control flow library such as async,
and you don't have to require() anything.

Superior middleware error handling

Thanks to co, you can simply use try/catch instead of node's if (err) callback(err) type error handling.
You can see this in the error handling examples in Koa:
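A typical upstream error handler, in the spirit of the examples in the Koa repository (a sketch, not their exact code):

```js
app.use(function* (next) {
  try {
    // errors thrown anywhere downstream propagate up
    // the middleware stack as ordinary exceptions
    yield next
  } catch (err) {
    this.status = err.status || 500
    this.body = err.message
    this.app.emit('error', err, this)
  }
})
```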

Instead of adding an error handling middleware via app.use(function (err, req, res, next) {}) which barely works correctly, you can finally simply use try/catch.
All errors will be caught, unless you throw errors on different ticks like so:
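For example (a sketch):

```js
app.use(function* () {
  setImmediate(function () {
    // thrown on a later tick, outside the generator's call stack,
    // so no upstream try/catch will ever see it
    throw new Error('boom')
  })
  this.body = 'hello'
})
```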

But if you used this method, you'll still get memory leaks when clients abort the request.
This is because close events on the final destination stream are not propagated through the pipe()s back to the original stream.
You need to use something like finished,
otherwise you'll leak file descriptors.
Thus, your code should look more like:
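A sketch using the finished module (the file path is made up):

```js
var fs = require('fs')
var onFinished = require('finished')

app.use(function (req, res) {
  var stream = fs.createReadStream('/path/to/some.file')
  stream.on('error', function (err) {
    res.statusCode = 500
    res.end()
  })
  stream.pipe(res)
  // destroy the stream when the response finishes or the
  // client aborts, otherwise the file descriptor leaks
  onFinished(res, function () {
    stream.destroy()
  })
})
```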

Since you've handled all your errors, you wouldn't need to use domains.
But look at it. It's so much code just to send a file.
Express also does not handle the close event,
so you'll always need to use finished as well.

Since you simply pass the stream to Koa instead of directly piping,
Koa is able to handle all these cases for you.
You won't need to use domains as no uncaught exceptions will ever be thrown.
Don't worry about any leaks as Koa handles that for you as well.
You may treat streams essentially the same as strings and buffers,
which is one of the main philosophies behind Koa's abstractions.
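In Koa, the same file-serving code collapses to a sketch like this (path again made up):

```js
var fs = require('fs')

app.use(function* () {
  this.type = 'text/plain'
  // just assign the stream; Koa pipes it to the response,
  // handles its 'error' events, and cleans up on client abort
  this.body = fs.createReadStream('/path/to/some.file')
})
```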

In other words, Koa tries to fix all of node's broken shit.
For example, this case is not handled:
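One plausible sketch of such an unhandled case: piping a stream without an error listener. If the file is missing, the stream emits 'error' with no handler attached, and the whole process crashes.

```js
var fs = require('fs')
var http = require('http')

http.createServer(function (req, res) {
  // no 'error' listener on the stream: a missing file
  // becomes an uncaught exception and kills the process
  fs.createReadStream('/does/not/exist').pipe(res)
}).listen(3000)
```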

Concise code

Writing apps and middleware for Koa is generally much more concise than any other framework.
There are many reasons for this.

The first and obvious reason is the use of generators to remove callbacks.
You're no longer creating functions everywhere, just yielding.
There's no more nested code to deal with.

Many of the small HTTP utilities in the expressjs organization are included with Koa,
so when writing applications and middleware,
you don't need to install many third party dependencies.

The last, and I think the most important, reason is that Koa abstracts node's req and res objects,
avoiding any "hacks" required to make things work.

Better written middleware

Part of what makes Connect and Express great is their middleware ecosystem.
But what I greatly disliked about this ecosystem was that the middleware are generally terrible.
There are many reasons for this aside from the inverse of the points above.

Express is similar to Koa in that many utilities are included.
This should make writing middleware for Express almost as easy as Koa,
but if you're writing middleware for Express,
you might as well make it compatible with node.js and any other app.use(function (req, res, next) {}) framework.
Supporting only Express at that point is silly.
However, you'll end up with a lot of tiny dependencies, which is annoying.
Koa middleware on the other hand is completely incompatible with node.js.

Express uses node's original req and res objects.
Properties have to be overwritten for middleware to work properly.
For example, if you look at the compression middleware,
you'll see that res.write() and res.end() are being overwritten.
In fact, a lot of middleware are written like this.
And it's ugly.

Thanks to Koa's abstraction of node's req and res objects, this is not a problem.
Look at koa-compress source code and tell me which one is more concise and readable.
Unlike Express, the compression stream's errors are actually handled as well and pipe() is actually used internally.

Then there's the fact that asynchronous functions' errors are simply logged instead of handled.
Developers are not even given a choice.
This is not a problem with Koa!
You can handle all the errors!

Although we're going to have to recreate the middleware ecosystem for Koa from the ground up,
I believe that all Koa middleware are fundamentally better than any other frameworks'.

Why you shouldn't

Generators are confusing

There are two programming concepts you have to learn to get started with Koa.
First is generators, obviously.
But generators are actually quite complicated.
In fact, any control flow mechanism, including promises, is going to be confusing for beginners.
Unlike promises, co is not based on a specification,
so you have to learn both how generators work as well as co.

You also need to understand how this works.
It becomes much more important when Koa uses this to pass data instead of node's req and res objects.
You may want to read yield next vs yield* next.

Generators are not supported out of the box

There are currently two ways to use generators in node.js.

The first is to use v0.11, an unstable version of node, with the --harmony-generators flag.
For many people and companies, running an unstable version of node.js is unacceptable,
especially since many C/C++ addons don't work with v0.11 yet.
Since you need to explicitly set the --harmony-generators flag,
creating and using executables is also more difficult.

The second way to use generators is by using gnode.
The problem with this is that it's really slow.
It basically transpiles all files with generators when require()ing.
I tried this before, and it took about 15 seconds for my app to even start.
This is unacceptable during development.

We're going to have to wait until node v0.14 or v1 to be able to use generators without any flags.
Until then, you're going to be inconvenienced one way or another.

Documentation is sparse

Koa is pretty new, and TJ and I just don't have the time to write thorough documents.
Some things are still subject to change,
so we don't want to be too thorough or else we'd confuse people down the road.
It's also radically different from other frameworks,
so we'd have to explain both the philosophy as well as the technical details,
otherwise developers are going to get lost.

There have been a few blog posts, but in my opinion they don't explain Koa well enough.
The goal of this blog post is to explain more of the benefits instead of the philosophy or the technical details.
If you want to know more about the philosophical side,
watch as I write my Koa talk.

One question a couple of people have asked is, "What is the difference between yield next and yield* next? Why yield* next?" We intentionally do not use yield* next in the examples to keep new users from asking this question, but it will inevitably be asked. Unfortunately, there aren't any very good explanations of these "delegating yields", as generators are relatively new. Although Koa uses them internally for "free" performance, we don't advocate them, to avoid confusion.
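For instance, a non-delegating version, sketched from the delegating example that follows (assuming inner() yields setImmediate and returns 1):

```js
function* inner() {
  yield setImmediate
  return 1
}

function* outer() {
  // a plain yield makes co spawn a nested co call to run inner()
  this.body = yield inner()
}
```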

With a plain yield, there's an extra co call involved. But if we use delegation, we can skip the extra co call:

function* outer() {
  this.body = yield* inner()
}

Essentially becomes:

function* outer() {
  yield setImmediate
  this.body = 1
}

Each co call creates a few closures, so there's a tiny bit of overhead. It isn't much to worry about, but with one *, you can avoid this overhead and use a native language feature instead of the third party co library.

How much faster is this?

Here's a link to a discussion we had a while ago about this topic: https://github.com/koajs/compose/issues/2. You won't see much performance difference (at least in our opinion), especially since your actual application code will slow down these benchmarks significantly. Thus, it isn't worth advocating, but it's worth using internally.

What's interesting is that with yield* next, Koa is currently faster than Express in these "silly benchmarks": https://gist.github.com/jonathanong/8065724. Koa doesn't use a dispatcher, unlike Express, which uses multiple (one from Connect, one from the router).

This is ideally how a web application should look if we weren't so lazy. The only overhead is the initiation of a single co instance and our own Context constructor, which wraps node's req and res objects for convenience.

Using it for type checking

If you yield* something that isn't a generator, you'll get an error like the following:

TypeError: Object function noop(done) {
  done();
} has no method 'next'

This is because essentially anything with a next method is considered a generator!

I like this because, by default, I assume I'm writing yield* gen(). I've rewritten a lot of my code to use generators. If I see something that isn't written as a generator, I think to myself, "Can I make this simpler by converting it to a generator?"

Of course, this may not be applicable to everyone. You may find other reasons you would want to type check.

Contexts

co calls all continuables or yieldables with the same context. This becomes particularly annoying when you yield a function that needs a different context. For example, constructors!

Recently, I had a Facebook argument with a stranger on a mutual friend's wall.
He, as well as many Christians, believes that to attain salvation,
you must believe in Jesus Christ,
you must have faith in Him,
and you must believe He is the Son of God.
But Jesus Himself said otherwise.

“He that hath my commandments, and keepeth them, he it is that loveth me: and he that loveth me shall be loved by my Father, and I will love him, and will manifest Myself to him.” - John 14:21 (King James Version)

"Jesus answered, "If anyone loves Me, he will keep My word. My Father will love him, and We will come to him and make Our home with him." - John 14:23 (King James Version)

These two Bible verses, which quote Jesus Himself and not His disciples, explicitly define the following truths (assuming Jesus is the truth):

If you keep/obey His commandments, you love Jesus.

If you keep/obey His commandments, the Father and Jesus will love you and show Themselves to you.

If you love Him, you will keep His word/commandments/teachings.

If you love Him, the Father will love you.

If you love Him, you will share a home with the Father and Jesus.

What can we conclude? If you obey His commandments, you will attain salvation.
He never implies that any kind of faith or belief in Him is a requirement to attain salvation,
and, if you are a Christian, only the words Jesus says are relevant.

So the big question is, "What are His Commandments?"

"Jesus said unto him, Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind. This is the first and great commandment. And the second is like unto it, Thou shalt love thy neighbour as thyself. On these two commandments hang all the law and the prophets." - Matthew 22:37-40 (King James Version)

The first and greatest commandment is to love God,
but by the first two verses I quoted,
loving God is just following His commandments.
So to love God is to follow His commandments,
and to follow His commandments is to love God,
which is circular.

Thus, the only law you have to follow is the Golden Rule.
If you obey the Golden Rule with all your heart, all your soul, and all your mind,
then you are obeying God's commandments with all your heart, all your soul, and all your mind,
then you are loving God with all your heart, all your soul, and all your mind,
and God will return His love to you, and you will attain salvation.

Therefore, by Christian law, many non-Christians will attain salvation because they live by the Golden Rule,
the only law Jesus commanded us to obey.
Inversely, by their own law, many Christians will not attain salvation because they do not obey the Golden Rule - they love discriminately.
Similarly, any law that does not follow the Golden Rule is unjust,
and any "prophet" who does not follow the Golden Rule is a false prophet.