Noted no encryption and probably no authentication, but IPSec compression could be useful. I'm guessing that it's probably not supported either, but I had to ask... Is compression supported by the tunnel servers? If not, would it be considered as an option (defaulting off - enabled via the TB account on a per-tunnel basis)?

In other words, when it is turned on, allow it to be optional. We won't need it enabled for IPv6, as we should be compressing the encapsulating packet in its entirety, not just the IPv6 data part.

One could define "_USER_ADDR_/32" as "0.0.0.0/0" and let everyone try, with only two policy rules needed at the server (and a note to reject UDP port 500 if not implementing). Simpler configuration, but then the server tries IPsec with every endpoint rather than only the tunnels that enabled it. Some who don't know IPSec may think their tunnel server is hacking them!
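
For illustration, a hypothetical wide-open policy pair in setkey(8) syntax, assuming an ipsec-tools style server; the selectors and the optional "use" level are my guess at what "let everyone try" would look like, not a tested configuration:

```
# hypothetical: offer IPComp to any peer that also negotiates it
# ("use" = apply the SA if one exists, pass traffic in the clear otherwise)
spdadd 0.0.0.0/0 0.0.0.0/0 any -P out ipsec ipcomp/transport//use;
spdadd 0.0.0.0/0 0.0.0.0/0 any -P in  ipsec ipcomp/transport//use;
```

A per-tunnel opt-in would instead put the user's "_USER_ADDR_/32" in the selector, generated from the TB account setting.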

What is your end goal? Are you looking for IPsec encryption for security, the compression feature, or playing/testing/learning because it's there? (All are good reasons.)

I don't see security as buying much benefit, as the traffic would still cross the Internet and multiple ISPs unencrypted. For compression, there are other options, such as ROHC (Robust Header Compression), or possibly PPP compression inside a PPTP tunnel, which I think HE now supports (PPTP, that is).

Both of these features are heavier to implement, especially when you're working with hundreds to thousands of tunnels.

An IPv6-in-IPv4 tunnel is lightweight, both in the hardware/software implementation and in operational support: take your IPv6 packet, wrap an IPv4 header around it, fragment if necessary, and go. HE automates the tunnel turn-up, and there's just about nothing to troubleshoot: if HE can ping you, they set up the tunnel, and if there's a problem, it's most likely not on their side. (Mine has broken twice in two years... fixed by changing the endpoint, and changing it back.)
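
That wrap-and-go step really is minimal. A rough Python sketch of 6in4 encapsulation (RFC 4213, IPv4 protocol 41); the TTL, flags, and addresses here are arbitrary illustrations, not what HE's routers actually set:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 ones-complement checksum over the header bytes."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate_6in4(ipv6_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Wrap an IPv6 packet in a 20-byte IPv4 header with protocol 41."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45,    # version 4, IHL 5 (20-byte header)
                         0,       # DSCP/ECN
                         total_len,
                         0,       # identification
                         0x4000,  # flags: don't fragment
                         64,      # TTL (arbitrary choice here)
                         41,      # protocol 41 = IPv6-in-IPv4
                         0,       # checksum placeholder
                         src, dst)
    checksum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:] + ipv6_packet
```

Decapsulation on the other side is the mirror image: strip 20 bytes, hand the rest to the IPv6 stack.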

IPsec is *very* resource intensive: tunnel setup, configuration, maintenance, encryption, compression, troubleshooting, troubleshooting, operational support, and did I mention troubleshooting? I doubt HE has the staff to pay anyone to troubleshoot hundreds of IPsec tunnels to end-users, and as I think this service is mostly volunteer, I doubt anyone would volunteer to do it. I'll be honest: I wouldn't.

PPP compression over PPTP would be easiest to implement (configuration-wise), as it would be a knob enabled once the tunnel is already set up. PPP compression is usually implemented in software rather than hardware, so in the best case you're gaining bandwidth at the cost of increased latency and CPU usage. It was a feature used more commonly on lower-speed serial links, where you had spare CPU on the router and needed more bandwidth. With broadband/DSL to everyone's home now... I don't see the benefit, and the bulk of the CPU work (compressing the download direction) would land on HE's tunnel routers.

The point of my response was only for the compression feature of IPsec. My three largest services are SMTP, HTTP, and NNTP (text only feed), and all of those have traffic which is compressible about 30% when taken in short chunks (i.e. ~1400 bytes). Although I'm not in danger of hitting my monthly bandwidth limit, it would still be nice to squeeze more out of what I do have. I consider HTTP and NNTP as public information which needs no encryption.
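
For what it's worth, that kind of per-chunk figure is easy to measure. A rough Python sketch that compresses data in independent ~1400-byte chunks, the way a stateless per-packet compressor (like IPComp) would see it; the sample text is made up, not real NNTP traffic:

```python
import zlib

def chunked_ratio(data: bytes, chunk: int = 1400) -> float:
    """Compressed/original size when each chunk is compressed independently,
    i.e. no shared history between packets."""
    total = sum(len(zlib.compress(data[i:i + chunk]))
                for i in range(0, len(data), chunk))
    return total / len(data)

# made-up stand-in for a text-heavy NNTP/SMTP payload
sample = (b"Path: news.example.org!not-for-mail\r\n"
          b"Subject: header compression\r\n\r\n"
          + b"lorem ipsum dolor sit amet " * 200)
print(f"compressed to {chunked_ratio(sample):.0%} of original size")
```

Running this over a real article spool or mail queue would give the honest number for a given feed; highly repetitive test data like the above compresses much better than typical traffic.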

For me, encryption and/or authentication would be nice too (for certain services), but I would implement that on the encapsulated packet (IPv6), not the encapsulating packet (IPv4); since it doesn't involve the outer IPv4 packet, it makes no difference to the tunnel service. Furthermore, even if the tunnel service did implement authentication and/or encryption, packets would only be protected between the tunnel endpoint and the tunnel server itself, not beyond. As such, those two features are useless in that position.

As such, I would expect outer-level (at the tunnel server) compression (1) to occur only when encryption or payload-level compression (2) is not present. The current compression algorithms in the RFCs would in most cases only compress once. Since encryption randomizes the payload (i.e. generally makes it incompressible), the benefit (a few bytes) rarely justifies the time spent attempting to compress an encrypted payload. A smart server may check for an inner-level encryption or compression header and not even attempt outer-level compression in that case.
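
That check is cheap. A rough Python sketch, with protocol numbers taken from the IANA registry; a real implementation would also walk IPv6 extension headers, which this skips:

```python
# next-header values from the IANA protocol numbers registry
ESP = 50      # Encapsulating Security Payload
IPCOMP = 108  # IP Payload Compression Protocol

def worth_compressing(ipv6_packet: bytes) -> bool:
    """Skip outer-level compression when the inner IPv6 packet already
    carries an ESP or IPComp header: that payload won't compress again.
    Only inspects the first next-header, not the full extension chain."""
    next_header = ipv6_packet[6]  # byte 6 of the fixed IPv6 header
    return next_header not in (ESP, IPCOMP)
```

A server doing this per packet pays one byte comparison to avoid a wasted compression pass.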

In the absence of outer-level compression at the tunnel server, I would turn it on at the inner-level. The only reason I haven't yet done so is due to a possible kernel bug I'm exploring plus some configuration tuning (and the fact that many of my NNTP peers aren't currently supporting it, but someone has to be first).

What flavor of compression are you looking at? Is this something specific to NNTP? A generic on-the-fly zlib compression, like what is (theoretically) available for HTTP? Or are you looking at ROHC (Robust Header Compression)?

If you mostly have large packets, ROHC won't help much. I think it can be enabled on a per-peer basis, though I can't think of how that would be signalled between endpoints.

I would never deploy IPsec for the benefit of compression. Waaaaaayyyyyyyyyyy too many moving parts, and variety of implementations/debugging options available on the endpoints.

I think header compression happens at a layer below the IP layer. As far as I know it is a hop-by-hop thing, and a router has to decompress the header and compress it again if the next link also uses header compression.

In the case of a tunnel, the compression could be deployed as a layer between the outer and inner IP headers. It is relevant on slow links to single devices. Some links may be so slow that using packets larger than 1 KB adds too much latency for other packets, which have to wait for the 1 KB packet to clear the link. On such slow links you may want to reduce the packet size; the header overhead then grows as a percentage of link capacity, and header compression becomes relevant.
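
The slow-link latency point is simple arithmetic:

```python
def serialization_delay_ms(packet_bytes: int, link_bits_per_sec: int) -> float:
    """Time the packet occupies the link, ignoring propagation delay."""
    return packet_bytes * 8 * 1000 / link_bits_per_sec

# a 1 KB packet on a 64 kbit/s link blocks the wire for:
print(serialization_delay_ms(1024, 64_000))       # 128.0 (ms)
# the same packet on a 100 Mbit/s link:
print(serialization_delay_ms(1024, 100_000_000))  # a small fraction of a ms
```

At 128 ms per 1 KB packet, a small voice packet queued behind one bulk packet already blows a typical latency budget, which is why small packets (and hence header compression) matter on such links.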

On backbone links header compression is not as interesting. First of all it is most likely too resource consuming for high speed links. A lot of effort has been put into requiring zero CPU time to route a packet. You don't want to mess up that with header compression. And the compression context becomes problematic. On links servicing a single device the number of contexts needed for header compression to be useful is fairly small. On the backbone links you simply cannot have contexts for all the communication going over that link.

To improve utilization of high-speed backbone links, you'd be better off with larger packets than with header compression. That conflicts a bit with some single devices wanting small packets due to latency. Hopefully those devices won't be responsible for any significant fraction of the packets on the backbone.

Instead of reducing packet sizes you'd be better off doing link-by-link fragmentation and reassembly at a layer beneath the IP layer in those cases where large packets are not desirable on a specific physical link. In fact IPv6 mandates this if the physical link cannot provide a 1280-byte MTU. I don't see any reason you couldn't also use this to interleave packets in different QoS bands, so that a 1280-byte packet in a low-priority band doesn't delay a small packet in a high-priority band.

As long as the bulk traffic uses large packets, header compression isn't all that significant. A typical 1280-byte packet with a 40-byte IP header and a 32-byte TCP header uses about 6% of its bytes for headers.
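
The arithmetic behind that figure:

```python
ip_hdr, tcp_hdr, packet = 40, 32, 1280  # bytes
overhead = (ip_hdr + tcp_hdr) / packet
print(f"{overhead:.2%} of each packet is headers")  # roughly the 6% figure above
```

So even perfect header compression recovers at most a few percent on large packets, compared with 30%+ on small ones.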

My opinion is (and here I may be repeating what others have said):

Header compression is only useful in rare cases. You are better off increasing packet sizes and finding other solutions to whatever obstacles prevent you from increasing packet size.

If the gain from compression is never going to be more than 5% of the space, then it probably isn't worth the complexity to deploy it.

Compression on a higher level is better. TCP or above is where I think compression belongs.

IPSec is way too complicated to deploy just for the benefit from compression.

Opportunistic encryption is a good idea for the overall security of the net. Unfortunately current methods for opportunistic encryption are too complicated.

Opportunistic encryption with IPSec relies on DNS to distribute keys. Why can't you set up the encryption without DNS and then optionally use DNS to validate the certificates? A better approach to opportunistic encryption is what you find in tcpcrypt. It takes the right approach by enabling encryption at the TCP layer whenever both endpoints support it, and then leaving validation of the connection's integrity to a higher level. Unfortunately it sits slightly too high in the stack for the encryption: it only applies to TCP, and as far as I can tell it doesn't protect port numbers, because the port numbers are communicated before the use of encryption has been negotiated.

With tcpcrypt, I think the network could still apply different service levels depending on port numbers; that is, some port numbers get higher priority, and some may not be allowed connectivity at all. And what is even worse, man-in-the-middle attacks can be aimed at specific port numbers to lower the chance of detection.