I’ve been doing some experiments with HTTP textures, and yes, you can proxy/cache them external to your machine even though the viewer doesn’t support it.

There are benefits to this and some downsides. It’s particularly handy if you’re behind a corporate or educational firewall and your network admin would prefer to proxy rather than open up a new port. Caching improvements, alas, are fairly dubious.

This is moderately advanced material.

What you’ll need:

A Linux or Unix box through which all of your network traffic flows. Really, if you’re running Windows, you should already have one acting as a firewall for you, because Windows just isn’t safe to connect directly to the Internet on its own.

First, get squid3 installed on your gateway system. It might already be, which is even better.

Configuring Squid

Now in squid.conf you’ll find that the port defaults to 3128. We’ll add a new port for handling transparent proxying. For the purposes of this guide, we’ll use 3178.

So, add the following line to the configuration:

http_port 3178 transparent

Next, we want to make sure that squid allows requests to port 12046, which is the port Second Life’s HTTP textures are served on:

acl Safe_ports port 12046 # Second Life HTTP textures

If you just want to get HTTP textures across your corporate firewall efficiently, you can stop there as far as the squid configuration part goes, and just skip down to ‘Capturing the requests’.

Now, because of discard levels in the textures, most of the fetching that the viewer does is by Range-requests. This allows the viewer to fetch portions of the texture rather than the whole thing.

For the purposes of caching, we actually want the whole texture:

range_offset_limit -1

This forces squid to fetch the entire object even when only a piece of it is requested: the whole texture (or anything else that comes through this instance of squid) gets downloaded, and squid returns the requested portion as it becomes available. Bad news: a cache miss uses a lot more bandwidth. Good news: on a cache hit, squid can respond with any portion of the texture that the viewer wants, and you get much more mileage out of the viewer’s Keep-Alive connections, which can make texture loading very, very fast.

Still, not much content actually ends up cached. Well, we can fix that too:
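The simulators serve textures with headers that discourage caching, so we override them with a refresh_pattern. Something like the following is a sketch, not gospel; the times (in minutes) are illustrative, and some of the options, such as ignore-no-cache, only exist in older squid3 releases:

refresh_pattern -i agni\.lindenlab\.com 1440 20% 10080 override-expire override-lastmod ignore-reload ignore-no-cache ignore-private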

Here, ‘agni.lindenlab.com’ matches the name of simulator hosts for the main grid and teen grid (they’re the same grid, actually). We throw in a few HTTP-protocol violations here to do our best to make sure that the texture is cached rather than discarded.

Capturing the requests

Lastly, we add in a rule to capture HTTP-texture requests from any viewers, and send them to our proxy.
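On a Linux gateway, a netfilter redirect rule along these lines does the capture. The interface name eth1 is an assumption — substitute whatever faces your LAN:

iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 12046 -j REDIRECT --to-ports 3178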

The URI for this request follows the format for a Second Life grid CAPS (capabilities) request. Unfortunately, the way CAPS requests are made is… as a techie friend of mine would say, “inefficient bollocks” from an HTTP protocol perspective.

The first portion of the request URI is your CAPS key, identifying you and the service you want to communicate with. The bit after the question-mark is the information about the specific thing we want.
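Concretely, a texture request URI is shaped roughly like this — the hostname and path here are illustrative, and the angle-bracket bits are placeholder UUIDs:

http://sim1234.agni.lindenlab.com:12046/cap/&lt;caps-uuid&gt;/?texture_id=&lt;texture-uuid&gt;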

The latter part will remain unique and distinct for each texture. Texture IDs never change.

But CAPS identifiers do.

The proxy can’t tell that you’re after the same texture, because it doesn’t know what any part of the URI actually means. The scheme Second Life uses here can generate vast numbers of URIs for a single unique item, which leads to your proxy caching the same texture many times under different URIs.

That’s wasteful and inefficient and almost (but not quite) completely defeats the purpose of what we’re doing.

To avoid it, you’d need to direct the request to another proxy first, transform the request to a unique key (the texture ID) preserving the rest of the request data in headers, then run that through your caching-proxy, then forward it through another proxy that transforms the request back to its original form. Basically it’s double-rewriting the request to present squid with a unique-per-texture identifier.

That’s a ton of work. Or you could write your own caching proxy to take care of this all in one go, or customize a single-purpose instance of squid to extract the right information to generate storage keys. By all means, pass it to me when you’re done, alright?
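If your squid is new enough (3.4 or later), its store_id_program interface can do that key extraction without any extra proxies: a small helper maps each request URI to a canonical cache key. Here’s a minimal sketch, assuming the texture ID arrives in a texture_id query parameter; the sl-texture.invalid key prefix is an arbitrary choice of mine, and you may need to adjust the regex if your grid’s CAPS URLs differ:

```python
#!/usr/bin/env python3
# Hypothetical Squid store_id helper: collapse Second Life texture CAPS
# URLs onto a single cache key per texture_id, so the same texture isn't
# cached once per capability URL.
import re
import sys

# Match a texture_id=<uuid> query parameter (8-4-4-4-12 hex groups).
TEXTURE = re.compile(
    r'[?&]texture_id=([0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})')

def store_id(url):
    """Return a Squid store_id helper response line for one URL."""
    m = TEXTURE.search(url)
    if m:
        # Any stable, unique-per-texture pseudo-URL works as the key.
        return 'OK store-id=http://sl-texture.invalid/' + m.group(1).lower()
    return 'ERR'  # not a texture request; leave its cache key alone

def main():
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        if fields[0].isdigit() and len(fields) > 1:
            # With concurrency enabled, the first token is a channel ID
            # that must be echoed back in front of the reply.
            print(fields[0], store_id(fields[1]))
        else:
            print(store_id(fields[0]))
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```

Wire it up in squid.conf with something along the lines of store_id_program /usr/local/bin/sl-store-id.py, and requests for the same texture through different CAPS URLs collapse onto one cached object.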