A similar issue seems to exist in the handling of the request header as well.

When I use this configuration file for h2o (note: this one does not send a huge response header) and access the server with curl -k -v -H foo:$(perl -e 'print "0"x16384') https://127.0.0.1:8081, the command enters an infinite loop.
It is likely an issue within the HTTP/2 implementation of curl, since the nghttp command is capable of issuing a similar request (by running nghttp -v -H foo:$(perl -e 'print "0"x16384') https://127.0.0.1:8081/).

The end result is that it sends the leftover headers thinking they are the body. send() keeps returning 0 and it keeps looping (expecting it will eventually return the amount sent). This is my output with a debug build of libcurl:

The original issue @kazuho reported: First I have to mention that the maximum size of one header field (the sum of name and value) when receiving header fields in libnghttp2 is limited to 65536 bytes.
The value is hard-coded for now (we can make it configurable if that is desirable). h2o sends "x-foo" + "1"*65530, which totals 65535 bytes. libnghttp2 makes name and value NULL-terminated strings, so it appends '\0' to each name and value. This exceeds the maximum buffer size, and libnghttp2 fails the HTTP/2 session. When libnghttp2 decides the underlying session can now be closed (a GOAWAY frame was sent), nghttp2_session_want_read(...) == 0 && nghttp2_session_want_write(...) == 0 becomes true. The problem is that http2_recv does not check this condition and keeps checking the readability of the socket, which enters a loop. So I think one way to fix this issue is to check the above condition in http2_recv like so:

The second issue is that curl hangs when it is instructed to send a 16384-byte header value.
This is because I assumed that the whole HTTP/1 header block is passed to the http2_send function.
But it turns out that this is not true when TLS is enabled. The relevant portion of the code is

This is required when we are dealing with the TLS layer directly. But in HTTP/2 we don't require this 16KiB limit here, since it is the nghttp2 callback that deals with the TLS layer. So a simple fix for this issue is to handle the HTTP/2 case specially, like so:

```diff
diff --git a/lib/http.c b/lib/http.c
index 62952a8..1efdb75 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -1091,7 +1091,7 @@ CURLcode Curl_add_buffer_send(Curl_send_buffer *in,
   }
-  if(conn->handler->flags & PROTOPT_SSL) {
+  if(conn->httpversion != 20 && conn->handler->flags & PROTOPT_SSL) {
     /* We never send more than CURL_MAX_WRITE_SIZE bytes in one single chunk
        when we speak HTTPS, as if only a fraction of it is sent now, this data
        needs to fit into the normal read-callback buffer later on and that
```

While #659 (comment) fixes the original issue, I'm not sure why returning an error from http2_recv triggers HTTP/2 connection closure. This is a stream error, and only the relevant stream is affected. It looks like the current implementation treats a stream error as a connection error.

@jay Yes, you are correct regarding those APIs. Sorry for the confusion; my point is the case where at least one of the APIs returns false and the stream was closed with an error other than NGHTTP2_NO_ERROR (e.g., NGHTTP2_PROTOCOL_ERROR). This is a stream-level error, and dropping the connection is not required. The current implementation returns CURLE_HTTP2 in this case (code). In the multi interface, we end up here, and the underlying TCP connection is dropped. This is not desirable if we imagine that several streams are flowing and only one stream is closed with a stream error. There is no reason to close the other streams, which are completely fine.
One possible solution is to introduce a new error code to tell the upper layer that this is a stream error and there is no need to close the TCP connection.

At this point, I've only checked the code path when reading from the network. But stream closure can also happen on the sending side. I'll check what happens on the sending side with a stream error.

PR #663 is a better version of the fix for the original issue. It also handles the case where one RST_STREAM kills the entire session, which I mentioned in the previous post. You might not like the additional error code (including its naming). We can discuss that here.

I have not fully understood whether we need a similar check (nghttp2_session_want_read() and nghttp2_session_want_write()) in http2_send. Since http2_recv is called after http2_send, apparently we don't need it there. But I'm not sure. Anyway, the PR improves the situation a bit on the receiving side.

I notice you moved the check for nghttp2_session_want_read() and nghttp2_session_want_write() being zero farther down in http2_recv. Wouldn't it be better back where it was? If we have a problem like that, wouldn't we want to catch it as early in that function as possible? Also, as far as http2_send, wouldn't it be good there as well as a sanity check? Even if you walk the paths and see recv called (and therefore think it's covered by recv), those paths may change.

cc @bagder who may have some more experienced opinions about these things.

> I notice you moved the check for nghttp2_session_want_read() and nghttp2_session_want_write() being zero farther down in http2_recv. Wouldn't it be better back where it was? If we have a problem like that, wouldn't we want to catch it as early in that function as possible?

The reason I moved the check to its current location is to ensure that pending data is processed before closing the session. We copy the received response body here. If it fits into the per-stream buffer, we don't pause the processing of libnghttp2. If we then get DATA+END_STREAM followed by GOAWAY, nghttp2_session_want_read() and nghttp2_session_want_write() will return 0. If we check that at the previous position, the copied data may not be delivered to the upper layer (see). But even the new position may not work if a stream other than the one which has the pending data is called first with http2_recv. If we can arrange the code so that the stream with pending recv data gets called first, then my current code works better.

> Also, as far as http2_send, wouldn't it be good there as well as a sanity check? Even if you walk the paths and see recv called (and therefore think it's covered by recv), those paths may change.

As a precaution, yes, it is a good idea. I just worry that it may have the same problem I stated above: we might have a stream with pending recv data while nghttp2_session_want_read() and nghttp2_session_want_write() both return zero.

Previously, when a stream was closed with an error other than
NGHTTP2_NO_ERROR by RST_STREAM, the underlying TCP connection was
dropped. This is undesirable, since there may be other streams
multiplexed on it that are perfectly fine. This change introduces a
new error code, CURLE_HTTP2_STREAM, which indicates a stream error
that only affects the relevant stream; the connection should be kept
open. The existing CURLE_HTTP2 means a connection error in general.
Ref: curl#659
Ref: curl#663

> But even the new position may not work if a stream other than the one which has the pending data is called first with http2_recv. If we can arrange the code so that the stream with pending recv data gets called first, then my current code works better.

This sounds like an issue that the PR doesn't address, is that correct? In other words, are you saying that in its current position it could cause a bug?

@bagder Yes, I know :) This will likely add a unique error code, CURLE_HTTP2_STREAM, to signal a bad HTTP/2 stream when the HTTP/2 connection is otherwise still good, so since there is a new define I figured it should be in before the close.

@tatsuhiro-t @jay I tried using curl from #663 at 675a203 and got the following results. The former looks strange. The latter seems fine to me. Can you please confirm? Thank you in advance.

1. Receiving Huge Header

When running H2O with the configuration found in this comment and running curl -vv -k https://127.0.0.1:8081/, the command seems to wait until the connection is closed by the peer and then emits curl: (56) Unexpected EOF. The full output of curl -vv can be found here.

2. Sending Huge Header

When running H2O with the configuration found in this comment, sending a request header of up to 65349 bytes (name + value, without counting \0) succeeds. When the sum exceeds that value, I see curl: (92) HTTP/2 stream 1 was not closed cleanly: error_code = 7 reported.

As Tatsuhiro said, libnghttp2 doesn't handle large headers, but I don't know if this is correct behavior.

I think we should check for large header lengths instead of casting down like this, because if the header is particularly large, like 66000 bytes, it's going to overflow and become 464. I'd rather we just error there; what if I change those places to do something like if((end - hdbuf) > (uint16_t)-1) err? That still wouldn't address the case where the name + value + NUL length exceeds 65535, so that would have to be handled as well. But basically the idea is that I add more checking in http2_send.

Another thing I think we should do is put the nghttp2_error_code enums in presentation format for the user. I don't see a strerror-type function in libnghttp2, but I see something similar in app-helper. Either I could add something like that to libcurl or libnghttp2 so we can give more information to the user.

This commit ensures that data from the network is processed before the
HTTP/2 session is terminated. This is achieved by pausing nghttp2
whenever a stream other than the one tied to the current easy handle
receives data.
This commit also fixes a bug where processing sometimes hangs when
multiple HTTP/2 streams are multiplexed.
Ref: curl#659
Ref: curl#663

This commit ensures that streams which were closed in the
on_stream_close callback get passed to http2_handle_stream_close.
Previously, this might not happen. To achieve this, we increment the
drain property to forcibly call the recv function for that stream.
To more accurately check that we have no pending events before
shutting down the HTTP/2 session, we sum the drain properties into
http_conn.drain_total and only shut down the session if that value
is 0.
With this commit, when a stream is closed before the response header
fields are read, the error code CURLE_HTTP2_STREAM is returned even
if the HTTP/2-level error is NO_ERROR. This signals to the upper
layer that the stream was closed by an error, just like a TCP
connection close in HTTP/1.
Ref: curl#659
Ref: curl#663

- Error if a header line is larger than supported.
- Warn if the cumulative header line length may be larger than supported.
- Allow spaces when parsing the path component.
- Make sure each header line ends in \r\n. This fixes an out-of-bounds access.
- Disallow header continuation lines until we decide what to do.
Ref: #659
Ref: #663

@tatsuhiro-t thanks for all your hard work on this. Landed in b2a0376...3f57880. Though we may know there is a frame size error, we don't know why, so in addition to the individual length checks I added a warning that appears in verbose mode when the cumulative header length is over 60000 bytes (less than 64KB, to account for some overhead) to provide some clue to the user.

However, as mentioned, if the cumulative length to be sent may be too large, we don't know for sure until after the fact, so the error is more generic, although it is prefaced with a warning before the attempt:

```
* http2_send: Warning: The cumulative length of all headers exceeds 60000 bytes and that could cause the stream to be rejected.
```

The headers are sent to nghttp2, which rejects them, and then this error is reported:

```
Error: libcurl: (92) HTTP/2 stream 1 was not closed cleanly: REFUSED_STREAM (err 7)
Stream error in the HTTP/2 framing layer
```