Node.js Blog
http://blog.nodejs.org/

V8 Memory Corruption and Stack Overflow (fixed in Node v0.8.28 and v0.10.30)
http://blog.nodejs.org/2014/07/31/v8-memory-corruption-stack-overflow/
Thu, 31 Jul 2014

A memory corruption vulnerability, which results in a denial of service, was
identified in the versions of V8 that ship with Node.js 0.8 and 0.10. In
certain circumstances, a particularly deep recursive workload that triggers a
GC and receives an interrupt may overflow the stack and result in a
segmentation fault. For instance, if your workload involves successive
JSON.parse calls and the parsed objects are significantly deep, you may
see the process abort while parsing.

This issue was identified by Tom Steele of ^Lift
Security, and Fedor Indutny of the Node.js core team
worked closely with the V8 team to find a resolution.

Remediation

Upgrade to Node.js v0.8.28 or v0.10.30, which ship patched versions of V8.

Mitigation

To mitigate, limit the size of the strings you pass to JSON.parse, or ban
clients that trigger a RangeError while parsing JSON.

There is no specific maximum size for a JSON string, but capping it at the
size of your largest expected message body is suggested. If your message
bodies cannot exceed 20K, there's no reason to accept 1MB bodies.

For web frameworks that do automatic JSON parsing, you may need to configure
the routes that accept JSON payloads to have a maximum body size.
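
The size cap might be sketched like this (the 20K limit and the helper name
are illustrative, not a Node.js API):

```javascript
// Cap the size of untrusted JSON before handing it to JSON.parse.
// MAX_JSON_BYTES is an application-specific limit (illustrative value).
const MAX_JSON_BYTES = 20 * 1024;

function safeJsonParse(str) {
  if (Buffer.byteLength(str, 'utf8') > MAX_JSON_BYTES) {
    // Mirror the RangeError the mitigation text suggests banning clients on.
    throw new RangeError('JSON payload exceeds ' + MAX_JSON_BYTES + ' bytes');
  }
  return JSON.parse(str);
}
```

A framework that parses JSON bodies automatically can usually be given an
equivalent per-route body-size limit in its configuration.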


OpenSSL Update and UTF-8 Encoding Change (Node v0.8 and v0.10)

First and foremost, these releases address the current OpenSSL vulnerability,
CVE-2014-0224.
For both 0.8 and 0.10 we've upgraded the bundled OpenSSL to the
fixed versions, v1.0.0m and v1.0.1h respectively.

Additionally, these releases address the fact that V8's UTF-8 encoder would
emit unmatched surrogates. That is to say, previously you could construct a
valid JavaScript string (strings are stored internally as UCS-2), pass it to a
Buffer as UTF-8, and send that string to another process, which would then
fail to interpret it because the UTF-8 was invalid.

Note that the results encoded by V8 in this case are exactly what was passed
into the encoding routine. There is no overflow, underflow, or inclusion of
other arbitrary memory, merely an unmatched surrogate resulting in invalid
UTF-8.

As of these releases, if you pass a string with an unmatched surrogate, Node
will replace that character with the Unicode replacement character (U+FFFD).
To preserve the old behavior, set the environment variable NODE_INVALID_UTF8
to anything (even an empty string). If the environment variable is present at
all, Node will revert to the old behavior.

This breaks backward compatibility for a specific reason: unsanitized strings
sent as a text payload through an RFC-compliant WebSocket implementation
should result in the disconnection of the client. If the client attempts to
reconnect and receives another invalid payload, it must disconnect again. If
there is no logic to handle these reconnection attempts, this may amount to a
denial-of-service attack. For instance, socket.io attempts to reconnect by
default.

// Prior to these releases:
new Buffer('ab\ud800cd', 'utf8');
// <Buffer 61 62 ed a0 80 63 64>
// After these releases:
new Buffer('ab\ud800cd', 'utf8');
// <Buffer 61 62 ef bf bd 63 64>
// This is an explicit conversion to a Buffer, but the implicit
// .write('ab\ud800cd') also results in the same pattern
websocket.write(new Buffer('ab\ud800cd', 'utf8'));
// This would result in the client disconnecting.

Node's default encoding for strings is UTF-8, so even if you're not
explicitly creating Buffers from strings, Node may be doing so under the
hood. If what you're passing is not actually UTF-8, you can be explicit when
you call .write(str) and say .write(str, 'binary'), which tells Node to pass
the string through without interpreting it.
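
For instance, a minimal sketch of the two encodings side by side, using the
modern Buffer.from API ('binary' is an alias for latin1):

```javascript
// A lone surrogate encoded two ways.
const lone = 'ab\ud800cd';

const asUtf8 = Buffer.from(lone, 'utf8');
// <Buffer 61 62 ef bf bd 63 64> -- the lone surrogate became U+FFFD

const asBinary = Buffer.from(lone, 'binary');
// <Buffer 61 62 00 63 64> -- each code unit's low byte, uninterpreted
```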

You can also mitigate this in pure JavaScript by sanitizing your strings; as
an example see
node-unicode-sanitize,
which will similarly replace unmatched surrogates with the replacement
character.
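
A minimal pure-JavaScript version of that sanitization might look like this
(a sketch, not the node-unicode-sanitize API):

```javascript
// Replace any lone (unmatched) surrogate with U+FFFD before encoding.
function sanitizeSurrogates(str) {
  return str.replace(
    // A high surrogate not followed by a low one, or
    // a low surrogate not preceded by a high one.
    /[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?<![\uD800-\uDBFF])[\uDC00-\uDFFF]/g,
    '\uFFFD'
  );
}
```

Valid surrogate pairs pass through untouched; only lone halves are replaced.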

Thanks to Node.js alum Felix Geisendörfer for finding the issue, getting the
fixes upstreamed, and helping with the testing and mitigation, as well as for
helping to inform and improve the process for Node.js security issues.

To float these fixes in your own builds, you can apply the upstream patch with
git am.

DoS Vulnerability (fixed in Node v0.8.26 and v0.10.21)
http://blog.nodejs.org/2013/10/22/cve-2013-4450-http-server-pipeline-flood-dos/
Tue, 22 Oct 2013

Node.js is vulnerable to a denial of service attack when a client
sends many pipelined HTTP requests on a single connection, and the
client does not read the responses from the connection.

We recommend that anyone using Node.js v0.8 or v0.10 to run HTTP
servers in production update as soon as possible.

This is fixed in Node.js by pausing both the socket and the HTTP
parser whenever the downstream writable side of the socket is awaiting
a drain event. In the attack scenario, the socket will eventually
time out and be destroyed by the server. If the "attacker" is not
malicious, but merely sends a lot of requests and reacts to them
slowly, then the throughput on that connection will be reduced to what
the client can handle.

There is no change to program semantics, and except in the
pathological cases described, no changes to behavior.

If upgrading is not possible, then putting an HTTP proxy in front of
the Node.js server can mitigate the vulnerability, but only if the
proxy parses HTTP and is not itself vulnerable to a pipeline flood
DoS.

For example, nginx will prevent the attack (since it closes
connections after 100 pipelined requests by default), but HAProxy in
raw TCP mode will not (since it proxies the TCP connection without
regard for HTTP semantics).

This addresses CVE-2013-4450.


HTTP Parser Information Disclosure (fixed in Node v0.6.17 and v0.7.8)

tl;dr

A carefully crafted attack request can cause the contents of the HTTP parser's buffer to be appended to the attacking request's header, making it appear to come from the attacker. Since it is generally safe to echo back contents of a request, this can allow an attacker to get an otherwise correctly designed server to divulge information about other requests. It is theoretically possible that it could enable header-spoofing attacks, though such an attack has not been demonstrated.

Versions affected: All versions of the 0.5/0.6 branch prior to 0.6.17, and all versions of the 0.7 branch prior to 0.7.8. Versions in the 0.4 branch are not affected.

Details

A few weeks ago, Matthew Daley found a security vulnerability in Node's HTTP implementation, and thankfully did the responsible thing and reported it to us via email. He explained it quite well, so I'll quote him here:

There is a vulnerability in node's http_parser binding which allows information disclosure to a remote attacker:

In node::StringPtr::Update, an attempt is made at an optimization on certain inputs (node_http_parser.cc, line 151). The intent is that if the current string pointer plus the current string size is equal to the incoming string pointer, the current string size is just increased to match, as the incoming string lies just beyond the current string pointer. However, the check to see whether or not this can be done is incorrect; "size" is used whereas "size_" should be used. Therefore, an attacker can call Update with a string of certain length and cause the current string to have other data appended to it. In the case of HTTP being parsed out of incoming socket data, this can be incoming data from other sockets.

Normally node::StringPtr::Save, which is called after each execution of http_parser, would stop this from being exploitable as it converts strings to non-optimizable heap-based strings. However, this is not done to 0-length strings. An attacker can therefore exploit the mistake by making Update set a 0-length string, and then Update past its boundary, so long as it is done in one http_parser execution. This can be done with an HTTP header with empty value, followed by a continuation with a value of certain length.
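
The trigger described in the quote, an empty header value followed by a
continuation line of a chosen length, has this general shape (illustrative
only; it does nothing against patched versions):

```javascript
// Shape of the trigger: an empty header value, then an obsolete
// line-folding continuation of a chosen length.
const probe = [
  'GET / HTTP/1.1',
  'Host: example.com',
  'X-Probe:',           // header with an empty (0-length) value
  ' ' + 'A'.repeat(32), // continuation line of a chosen length
  '',
  '',
].join('\r\n');
```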

The fix landed in commits 7b3fb22 and c9a231d, for master and v0.6 respectively. The innocuous commit message does not give away the security implications, precisely because we wanted to get a fix out before making a big deal about it.

The first releases with the fix are v0.7.8 and v0.6.17. So now is a good time to make a big deal about it.

If you are using node version 0.6 in production, please upgrade to at least v0.6.17, or apply the fix in c9a231d to your system. (Version 0.6.17 also fixes some other important bugs, and is without doubt the most stable release of Node 0.6 to date, so it's a good idea to upgrade anyway.)

I'm extremely grateful that Matthew took the time to report the problem to us with such an elegant explanation, and in such a way that we had a reasonable amount of time to fix the issue before making it public.
