To test whether Twitter is doing any of the typical sorts of HTTP caching it could,
via ETag or Last-Modified processing,
I wrote a small program that issues HTTP requests with the relevant conditional headers
and reports whether the server is taking advantage of them.
The program, http-validator-test.py, is below.
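The program itself is linked at the end of the post rather than reproduced inline, but here's a minimal sketch of what such a validator tester might look like. The function names and the use of Python's urllib.request are my own assumptions; the original was also Python, but its contents aren't shown here.

```python
# Sketch of a conditional-GET tester in the spirit of http-validator-test.py.
# Function names and urllib.request usage are assumptions, not the original code.
import urllib.request
import urllib.error

def build_headers(etag=None, last_modified=None):
    """Turn cached validator values into conditional-request headers."""
    headers = {}
    if etag is not None:
        headers["If-None-Match"] = etag
    if last_modified is not None:
        headers["If-Modified-Since"] = last_modified
    return headers

def conditional_get(url, etag=None, last_modified=None):
    """GET url, conditionally if validators are given.

    Returns (status, etag, last_modified) from the response, so the
    validators can be fed straight back into the next call.
    """
    req = urllib.request.Request(url, headers=build_headers(etag, last_modified))
    try:
        resp = urllib.request.urlopen(req)
        status, info = resp.status, resp.headers
    except urllib.error.HTTPError as err:
        # urllib raises on 304 Not Modified; that's the answer we want.
        status, info = err.code, err.headers
    return status, info.get("ETag"), info.get("Last-Modified")

# Usage (network access required):
#   status, etag, modified = conditional_get("http://twitter.com/pmuellr")
#   status, _, _ = conditional_get("http://twitter.com/pmuellr", etag, modified)
#   # a cache-friendly server answers the second call with a 304
```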

The first two lines show that no conditional headers were sent with the request, and that a 200 OK response was
returned with Last-Modified and ETag headers.

The next two lines show an If-None-Match
header was sent with the request, telling the server to send the content only if
its ETag doesn't match the value passed. It does match,
so a 304 Not Modified
is returned instead, indicating no content will be sent down (it hasn't changed since you last asked for it).

The last two lines show an If-Modified-Since
header was sent with the request, telling the server to send the content only if
its last-modified date is later than the value specified. It's not later,
so a 304 Not Modified
is returned instead; again, no content comes down the wire.

For content that doesn't change between requests, this is exactly the sort of behaviour
you want to see from the server.

Rut-roh. The full content is sent down with every request. Probably worse,
it's generated with every request. In Ruby. Also note that no
Last-Modified header is returned at all, and a different ETag
header was returned for each request.

So there's some low-hanging fruit to be picked, perhaps. Semantically, the data shown on the page did not
change between the three calls, so really, the ETag header should not
have changed, just as it didn't change in the test of the Python site above. Did anything really
change on the page? Let's take a look. Browse to my Twitter page,
http://twitter.com/pmuellr, and View Source. The only thing
that really looks mutable on this page, given no new tweets have arrived, is the 'time since
this tweet arrived' listed for every tweet. That's icky.

But poke around some more, peruse the gorgeous markup. Make sure you scroll right, to
take in some of the long, duplicated, inline scripts. Breathtaking!

There's a lot of cleanup that could happen here. But let me get right to the point. There's
absolutely no reason that Twitter shouldn't be using
their own API
in an AJAXy style application.
Eating their own dog food.
As the default. Make the old 1990s-era, web 1.0 page available for
those people who turn JavaScript off in their browsers. Oh yeah, a quick test of the APIs via curl indicates
HTTP requests for API calls do respect If-None-Match processing for the ETag.
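For the record, that curl check amounts to something like the following sketch: request once, then replay the request with the ETag you got back. The public-timeline URL is from that era's API and may no longer answer; the helper names are mine.

```python
# Sketch of the curl-style API check: fetch once, replay with the ETag,
# and look for a 304. The endpoint URL is an assumption from the old API.
import urllib.request
import urllib.error

def conditional_headers(etag=None):
    """Headers for a conditional GET keyed on the ETag, as curl -H would send."""
    return {"If-None-Match": etag} if etag else {}

def get_status_and_etag(url, etag=None):
    """GET url (conditionally, if an ETag is given); return (status, ETag)."""
    req = urllib.request.Request(url, headers=conditional_headers(etag))
    try:
        resp = urllib.request.urlopen(req)
        return resp.status, resp.headers.get("ETag")
    except urllib.error.HTTPError as err:
        return err.code, err.headers.get("ETag")

# Usage (network access required):
#   url = "http://twitter.com/statuses/public_timeline.xml"
#   status, etag = get_status_and_etag(url)      # expect 200 and an ETag
#   status, _ = get_status_and_etag(url, etag)   # a 304 means the ETag is honored
```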

The page could go from the gobs of duplicated, mostly static HTML to just some code that renders
the data, obtained via an XHR request to their very own APIs, into the page. As always, less is more.

We did a little chatting on this stuff this afternoon; I have more thoughts on how Twitter
should fix itself. To be posted later. If you want part of the surprise ruined,
Josh twittered after reading my mind.

Here's the program I used to test the HTTP cache validator headers: http-validator-test.py

Duncan Cragg pointed out that I had been
testing the Date header instead of the Last-Modified header. Whoops, that was dumb.
Thanks, Duncan. Luckily, it didn't change the results of the tests (the status codes,
anyway). The program above, and the output of the program, have been updated.

Duncan, btw, has a great series of articles on REST on his blog, titled
"The REST Dialog".

In addition, I didn't reference the HTTP 1.1 spec, RFC 2616, for folks wanting to learn
more about the mysteries of our essential protocol. It's available in multiple formats,
here:
http://www.faqs.org/rfcs/rfc2616.html.