On Tue, 29 Jun 1999, Scott Orshan wrote:
:->Beyond Forms, though, the display of a Web page is not necessarily
:->idempotent - it is not the same thing to retrieve it once as it is to
:->retrieve it many times. There are simple statistics and counters that would
:->become incorrect. Ads would be regenerated. You might repeat the
:->shopping transaction that you just made. The act of retrieving
:->a Web page might cause some physical action to happen - zooming
:->a camera or shifting a production line forward - not something you
:->would want to repeat accidentally.
There is a bit of confusion here about "display" vs. "retrieve". The
stats & counters count retrievals - or, in the terminology of the
HTTP/1.1 specification, requests. Any request that uses the "GET",
"HEAD", "PUT" or "DELETE" methods is idempotent (HTTP/1.1, section
9.1.2, "Idempotent Methods").
Well, they are supposed to be. There are lots of sites for which that
isn't true, because following standards on the web is considered
optional. But anyone who writes an application that uses one of those
methods for a request that isn't idempotent has only themselves to
blame.
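To make the point concrete, here's a minimal sketch (the handlers and
URLs are hypothetical, not from any real site) of the difference: an
idempotent GET returns a representation and changes nothing, while a
GET with side effects makes "retrieve twice" different from "retrieve
once" -- exactly the thing caches and history mechanisms will trip over.

```python
# Server state shared by the hypothetical handlers below.
state = {"orders": []}

def idempotent_get(path):
    """A well-behaved GET: returns a representation, mutates nothing."""
    return f"page at {path}"

def broken_get(path):
    """A GET that mutates server state -- issuing it N times differs from once."""
    state["orders"].append(path)          # side effect: places an order!
    return f"ordered via {path}"

# Reissuing the idempotent request is harmless...
a = idempotent_get("/catalog")
b = idempotent_get("/catalog")
assert a == b and state["orders"] == []

# ...but a cache or history mechanism re-fetching the broken one
# repeats the physical action:
broken_get("/buy?item=42")
broken_get("/buy?item=42")   # accidental repeat: two orders placed
assert len(state["orders"]) == 2
```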
Personally, I'm annoyed that HTML won't let me use an A tag to issue
anything but a GET request. I'd love to be able to write something
like <A HREF="/cgi-bin/change-the-world.py" METHOD=POST> to inform the
client that the request is *not* idempotent, and shouldn't be reissued
without checking with the user.
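Since HTML gives the A tag no METHOD attribute, the conventional
workaround is a one-button form, which does let the author declare
METHOD=POST. A sketch that generates such a form (the URL and label
are just placeholders):

```python
def post_link(url, label):
    """Render a form that looks like a single button but issues a POST."""
    return (f'<form action="{url}" method="POST">'
            f'<input type="submit" value="{label}"></form>')

html = post_link("/cgi-bin/change-the-world.py", "Change the world")
print(html)
```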
:->On the other hand, a Web page author should be able to specify
:->that their pages are to be refreshed on Back and Forward. I haven't
:->looked at the standard recently, but there's a Meta refresh tag
:->I believe, and instead of just being able to give a time, it might be
:->good to give an action that would refresh the page, i.e., Refresh: BACK.
I'm not convinced of that. Authors can already mark pages as "do not
cache." That should be enough. Some browsers (arguably broken)
integrate the cache & history mechanisms, so that uncached pages
aren't viewable in the history list, but require a manual reload.
Better browsers have a real history mechanism, and keep the tree of
pages visited, not just a list. Finally, some browsers have a "cache
browsing" capability, to let you pull arbitrary objects from the
cache.
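For reference, marking a page "do not cache" is just a matter of
response headers. A sketch of the usual set (the helper function is
hypothetical; the header values themselves are standard HTTP):

```python
def uncacheable_headers():
    """Response headers that mark a page as not cacheable."""
    return {
        "Cache-Control": "no-cache",   # HTTP/1.1: caches must revalidate
        "Pragma": "no-cache",          # HTTP/1.0 backward compatibility
        "Expires": "Thu, 01 Jan 1970 00:00:00 GMT",  # already stale
    }

hdrs = uncacheable_headers()
assert hdrs["Cache-Control"] == "no-cache"
```

HTTP/1.1 also defines "no-store" for responses that must not be
written to nonvolatile storage at all.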
:->Since this is part of HTML, not HTTP, you couldn't do this with
:->individual image URLs, or other non-HTML content, but the overall page
:->could be marked as such.
Which is a good reason for doing this with HTTP, not HTML. What you
want is a history-control mechanism, similar to the cache-control
mechanism. Being a big fan of users having control of their own
clients, I find that a bit disturbing. But expecting authors to have
that control is even more prevalent than ignoring standards.
Oddly, the "Meta refresh tag" started life as part of HTTP, and the
HTTP-EQUIV attribute of the meta tag was used to let the client do the
work.
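That is, the same Refresh directive can be sent either as a real
(non-standard but widely implemented) HTTP header, or embedded in the
page via HTTP-EQUIV so the client applies it itself. A sketch of both
forms (the helper and URL are hypothetical):

```python
def refresh_header(seconds, url=None):
    """Build the value of a Refresh directive, optionally redirecting."""
    return f"{seconds}; url={url}" if url else str(seconds)

# As an HTTP header line:
header_form = f"Refresh: {refresh_header(5, '/next.html')}"

# As the equivalent HTTP-EQUIV meta tag in the document:
meta_form = ('<meta http-equiv="Refresh" '
             f'content="{refresh_header(5, "/next.html")}">')

assert refresh_header(5, "/next.html") == "5; url=/next.html"
```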
<mike