Local error pages from proxy.i2p sometimes take a very long time to load

Description

This may also be the case for other "local" error pages.

It does not make much sense to me - theoretically, once the router has determined that it is going to serve an error instead of the requested page, the error page should be rendered and served entirely locally, and therefore very quickly. And yet the page continues to show a "loading" icon in my browser (Tor Browser 4.0.8) for twenty seconds or more after it begins to display. So where is this delay coming from? It should not be waiting for anything over the network at this point.

It may also be related to my use of Privoxy, which is set to forward all .i2p URLs to the I2P HTTP proxy (including the local proxy.i2p) - but when I make an exception in my Privoxy config for proxy.i2p, the problem persists (and other problems begin...).
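
For reference, the relevant forwarding rules in my Privoxy config look roughly like this (a sketch from memory - the exact patterns, and especially the exception line, are illustrative):

# Send all .i2p requests to the I2P HTTP proxy on localhost
forward .i2p 127.0.0.1:4444
# The exception I tried: contact proxy.i2p directly, with no parent proxy
# (Privoxy treats a parent of "." as a direct connection; later rules win)
forward proxy.i2p .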

Sorry, it's hard to reproduce well, and I suppose those were not great examples. Here are two better ones, each taking >10 seconds just to load a supposedly "local" page. Particularly interesting is the one labeled "4", which had a long gap between requests for reasons I can't understand.

Interesting. Clearly it's an issue with the local resources (CSS and images) fetched from the fake host proxy.i2p. This could be a problem with us not properly flushing or closing the output stream after we write the data. We do set cache directives, so the browser shouldn't always fetch them - maybe that's why it's intermittent; clearing the browser cache could make it easier to reproduce.
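
As a sketch of what I mean (illustrative, not the actual I2PTunnel code - the method name and header values here are made up), the local-resource path should look something like this, with the flush/close at the end being the part that may be missing:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

class LocalResourceSketch {
    static void serveLocalResource(OutputStream out, byte[] data, String mime)
            throws IOException {
        // The Cache-Control directive is why the browser shouldn't
        // re-fetch these resources on every page load.
        String hdr = "HTTP/1.1 200 OK\r\n"
                   + "Content-Type: " + mime + "\r\n"
                   + "Content-Length: " + data.length + "\r\n"
                   + "Cache-Control: max-age=86400\r\n"
                   + "\r\n";
        out.write(hdr.getBytes(StandardCharsets.UTF_8));
        out.write(data);
        out.flush();   // without this flush/close, the client can be left
        out.close();   // waiting on the open connection until a timeout fires
    }
}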

The 10-second gap is also odd, but I'm more focused on the 5-second duration per resource.

itoopie, snowcamo, and header come after the CSS is complete because they are referenced in the CSS.

wget or eepget could be another interesting way to test.
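
For instance, something along these lines takes the browser (and Privoxy) out of the picture entirely and times a single fetch through the proxy on 127.0.0.1:4444 (the resource path here is just a guess for illustration):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class TimeLocalFetch {
    public static void main(String[] args) throws Exception {
        Proxy i2pProxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("127.0.0.1", 4444));
        // One of the "local" resources served from the fake host proxy.i2p
        URL url = new URL("http://proxy.i2p/themes/console/images/itoopie_sm.png");
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(i2pProxy);
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[4096];
            while (in.read(buf) >= 0) { /* drain the body */ }
        }
        System.out.println("Status " + conn.getResponseCode() + " in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}

If the 5-second stall shows up here too, the router side is implicated; if it never does, that points at the browser or Privoxy side.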

Have you had a chance to try with 0.9.19 yet? As I said in comment 5 above, there were some possibly-related changes in that release.

Got it again. That was still through Privoxy; I will try to reproduce again on 0.9.19 proxied directly to :4444.

Many thanks to you and all the devs who create this amazing freedom technology!

It seems very clear that some 5000 ms timeout is being triggered here: all of the excessive delays seem to be just barely over 5000 ms. And yet, in the image labeled "2", all the local items take 1 ms to load, and that was not in an environment significantly different from most of the other instances where the long delays happened. Could it be a swap issue? Though the wrapper currently has >200 MB of breathing room under its memory limit, which is very typical for me (I give it a lot)...

Another instance, shown above, with a different error page, still on 0.9.19 (which I will continue to use for reproducing this).

I've still been trying to reproduce it with 0.9.19 without Privoxy, with no success thus far (about 30 attempts).

I looked through Privoxy's configuration files for anything mentioning a 5 s / 5000 ms timeout or delay, and the only thing I could find was 'keep-alive-timeout' in /etc/privoxy/config, which is set to 5 by default on my Debian installation. Here's the information Privoxy supplies about that option, with the configured value below it:

# 6.4. keep-alive-timeout
# ========================
#
# Specifies:
#
# Number of seconds after which an open connection will no longer
# be reused.
#
# Type of value:
#
# Time in seconds.
#
# Default value:
#
# None
#
# Effect if unset:
#
# Connections are not kept alive.
#
# Notes:
#
# This option allows clients to keep the connection to Privoxy
# alive. If the server supports it, Privoxy will keep the
# connection to the server alive as well. Under certain
# circumstances this may result in speed-ups.
#
# By default, Privoxy will close the connection to the server if
# the client connection gets closed, or if the specified timeout
# has been reached without a new request coming in. This behaviour
# can be changed with the connection-sharing option.
#
# This option has no effect if Privoxy has been compiled without
# keep-alive support.
#
# Note that a timeout of five seconds as used in the default
# configuration file significantly decreases the number of
# connections that will be reused. The value is used because some
# browsers limit the number of connections they open to a single
# host and apply the same limit to proxies. This can result in a
# single website "grabbing" all the connections the browser allows,
# which means connections to other websites can't be opened until
# the connections currently in use time out.
#
# Several users have reported this as a Privoxy bug, so the default
# value has been reduced. Consider increasing it to 300 seconds
# or even more if you think your browser can handle it. If your
# browser appears to be hanging it can't.
keep-alive-timeout 5

Now, I don't quite understand everything about this parameter, but I changed it to '12' in the hope of starting to see times like 12003 ms in the Tor Browser network tool instead of 5003 ms. I have not been able to reproduce the issue since changing the setting (again about 30 attempts), so it seems that perhaps the changes mentioned for 0.9.19 have at least greatly reduced the frequency of this issue, although it's too soon to know for sure.

I've also been playing around with induced swappiness now, just in case that's the source of the delays (and particularly of why they're so intermittent), although swapping wouldn't really explain delays that consistently land at about 5000 ms.

Still no instance of this issue on 0.9.19 with Tor Browser proxied directly to :4444. Investigation continues; the next step is to reproduce with that Privoxy parameter at a value _lower_ than 5, looking for corresponding shifts in the Network tool, in case raising it to 12 actually resolved the issue somehow.

Thanks again, and apologies for spamming this bug with so many comments and screenshots.

Very exciting news: that Privoxy setting _is_ to blame. Mentioning the cache is what allowed me to reproduce it reliably - thank you, zzz!

I tried various values of that Privoxy setting other than 5, and the delay "chunks" matched the setting precisely each time. Then I tried reproducing without Privoxy, and I can't get the error at all.

SO:

1. Whatever was changed between 0.9.18 and 0.9.19 seems to have resolved the issue when proxied directly to I2P.

2. The remaining delays are introduced by Privoxy when it is used with I2P, and they are tied to its 'keep-alive-timeout' setting.

Now the question I have is: is there anything I2P could do better to avoid this issue when used with Privoxy (or any other application that involves keep-alive timeouts)? Could the router be changed to not use "keep alive" functionality at all, and would that break anything? I simply don't know enough about what this "keep alive" thing is, or how it's being used, to answer these questions myself.

I _do_ think it would make sense to try to fix this, if it can be done on I2P's end, as using Privoxy is a fairly common configuration for I2P users, and I would imagine it's not the only application that might be affected. Of course, it's possible that there isn't any way to fix this on I2P's end, which would be okay.

zzz, is there any way the keep-alive behavior could be altered to avoid issues like this, or should the bug just be closed?

Still can't reproduce here with Privoxy for some reason (and yes, keep-alive-timeout is set to 5), but I added the close headers in fed6c2c70fb348309fe88120da252f009f3ee85b (0.9.19-15), and I'm optimistic it will fix it.
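
For anyone following along, the change amounts to marking the locally served responses as non-reusable, so nothing downstream sits out a keep-alive timeout on them. A rough sketch of the idea (illustrative - the status line and method are made up, not the literal diff in that commit):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

class CloseHeaderSketch {
    static void writeErrorPage(OutputStream out, byte[] page) throws IOException {
        // "Connection: close" (plus the older "Proxy-Connection: close")
        // tells the client and any intermediary such as Privoxy that this
        // connection will not be reused, so it should be torn down as soon
        // as the body is complete rather than held open for reuse.
        String hdr = "HTTP/1.1 500 Internal Server Error\r\n"
                   + "Content-Type: text/html; charset=UTF-8\r\n"
                   + "Content-Length: " + page.length + "\r\n"
                   + "Connection: close\r\n"
                   + "Proxy-Connection: close\r\n"
                   + "\r\n";
        out.write(hdr.getBytes(StandardCharsets.UTF_8));
        out.write(page);
        out.flush();
        out.close();
    }
}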