This issue has nothing to do with HTML, but assuming you mean HTTP: how so? If you're referring to the quote in the top answer, from the HTTP spec...

Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.

Said answer was replied to with this comment, which is accurate...

The part of the HTTP spec you quoted does not justify iOS 6's caching behavior. The default behavior should be to not cache POST responses (i.e. when "Cache-Control" header is not defined). The behavior violates the spec and should be considered a bug. Anyone building xml/json api web services should decorate their POST responses with "Cache-control: no-cache" to work around this issue. – David H
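
In practice that workaround is just a response header. A minimal sketch of it, assuming a Node/Express backend (the route name here is made up):

    // Hypothetical endpoint; the point is only the Cache-Control header.
    const express = require('express');
    const app = express();

    app.post('/api/orders', (req, res) => {
      // Tell iOS 6 Safari (and any intermediate cache) not to reuse this response.
      res.set('Cache-Control', 'no-cache');
      res.json({ status: 'created' });
    });

    app.listen(3000);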

Perhaps it is because the movement is being interpreted as the scroll rather than a "click" (or touch in this case). Understandable if that is the case as you need to make an initial "click" (or touch) in order to scroll on touch screens.

Caching GET is great. A GET is just a question: what's the resource this URL points to? That shouldn't change too often.

Caching POST is bad. A POST is a command: Do this thing with this data. The response you get back should be information about the new state of the world since you sent your data. But if POST responses get cached, you never find out about what's changed. Webapps that expect to get updated information from the server after a POST won't get updated information, they'll get the old cached information.

Webapps that expect to get updated information from the server after a POST won't get updated information, they'll get the old cached information.

The problem is worse than apps receiving outdated information. An app could tell the server to do something (e.g. to log out of a secure system, or to create or cancel a purchase order, or to take an action in a browser-based game), and get back a cached "successful" response, even though the server never received the command at all.

The thing is, you can specify with Cache-Control and Expires headers whether or not something should be cached (both from the server, and the client).

What we're talking about is the default behaviour when there are no cache headers specifying caching behaviour. And one fundamental part of the web that has gone unbroken since the beginning is that, by default, requests using the POST method are never cached, while requests using the GET method may be.

The POST method was always intended to be used to change state on the server - submit some information, etc., in such a way that your request has an effect on the server. It was always designed NOT to be idempotent: if you make the same request twice, it should be sent to the server twice; it should not be assumed that you don't want the second request to reach the server.

Essentially this bug means that if you make the same POST request to a server twice, the second one won't actually be sent to the server, but from the client side it will act as if it had been. Which is quite broken, both by the letter of the official HTTP specification and by the way browsers have worked since the beginning of time.

You're thinking of the responses that have a "Cache-Control: max-age=0" field, which some have argued means the response may be cached for some very short amount of time. I disagree even with that, given that it's really hard to have a Unix time that isn't greater than 0 at this point. However, Safari also caching POST responses without any mention of cache-related fields at all does break HTTP.

So what happens next? Does Apple release a quick one time fix to Safari or do webdevs around the world have to waste countless man hours "fixing" their sites for Safari? I'm asking because I have this really bad feeling...

What happens next is that Apple refuses to acknowledge that there's a problem and iOS 6 Safari becomes a "problem browser" for web developers like IE6 was. And that's the way it's going to be because Apple doesn't give a shit about the trauma you experience, as a web developer, working with their products.

Yeah but this one would be like Toyota releasing a car that doesn't allow going in reverse while your headlights are on. Maybe not something you test the first day you get it (let's assume it isn't in your test suite), but something that most people are going to encounter at some stage so should be realised fairly quickly.

Was there no "beta" testing period? This bug would be easily discovered the first time you visit any AJAX based site and try to take the same action twice. It's not like your normal browser bug.

Oh bullshit. Bugs happen. Tests help, but they don't make you immune. Should this have been covered by a test? Yes. But we don't know if this was a deliberate, but poorly thought out, optimization. No amount of testing will protect you from a bad decision. You also will never cover all possible cases in your tests. Programmers are human and humans fuck up.

I am not saying that with tests and proper architecture one becomes immune to bugs. Not at all.

I am saying, that, with a proper architecture AND with full test coverage, "complexity" and "size" are no longer excuses for bugs.

Without tests, you can hide behind "it is so big, no one can oversee the entire possible scope of changes a simple feature introduces". Without a proper architecture you can hide behind "I thought FooBar class-layer-object-thing would convert it. Turns out that the strings in there are not always MB-safe after all" or so.

That still doesn't hold. Your tests are only as good as you make them. As a project grows in complexity, so do your test cases. It's easy to write a bug-free tiny project. I'm willing to bet /bin/true has had zero bugs in its existence. There's really no other excuse for bugs than complexity.

Complexity can be overcome by proper architecture: layering, isolating, extracting, and such.

As a web-application developer I have seen many monolithic, spaghettified monstrosities. Complex++. Yet I have also developed on beautifully abstracted systems, the latter often far more advanced in features than the former. Even when those giant piles of spaghetti offered far fewer features and options to the users, they still contained a /lot/ more bugs. We all know, and have learned (if not: read Code Complete), how proper architecture helps you reduce the complexity of a piece of software.

Again: such things are no guarantee of bug-free code; but having a pile of spaghetti is a sure way to many bugs, while properly designed software will help you avoid most such bugs, by its very nature.

Again, and I cannot stress this enough: you will have bugs, but you cannot blame size or complexity, if you have taken proper measures to reduce the effect of size (tests) and complexity (architecture).

Were my caveats not enough to convince you that I was providing nothing more than anecdotal evidence? Did what I wrote really sound like I was trying to prove anything? Or are you just being an ass on the Internet?

I could have sworn that there used to be an "ExplainsRProgramming" novelty account that would come through and give an everyman explanation of various posts.

I mean, don't get me wrong as a relatively senior sys-adminesque support engineer dude, I have a general understanding of why this is dumb, but I'd love to really understand what is being said here, if only at a high level without having to spend several hours learning about a field I know nothing about (webdev(?)).

Caching is when you record everything that has been returned to you before, and if the same request is made again, you don't bother making a new HTTP request and just return the old result. For a static homepage, this increases speed. But when you're dealing with logging in (via POST), and you cache it, it means that all the data will be stale. It's like seeing the same reddit frontpage forever.
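
If it helps to see it as code, the broken behaviour amounts to roughly this toy sketch (not Safari's actual implementation; the /login URL is just an example):

    // Toy model of a cache that (wrongly) treats POST like GET.
    const cache = new Map();

    async function brokenFetch(method, url, body) {
      const key = [method, url, body].join('|'); // "the same request" as before
      if (cache.has(key)) {
        // The request never reaches the server; you get the stale answer forever.
        return cache.get(key);
      }
      const result = await (await fetch(url, { method, body })).text();
      cache.set(key, result); // a real browser would only do this for GET,
                              // or when headers explicitly allow it
      return result;
    }

    // Logging in twice with identical credentials: the second response comes
    // from the cache, not from the server, even if your password has changed.
    // await brokenFetch('POST', '/login', 'user=me&pass=secret');
    // await brokenFetch('POST', '/login', 'user=me&pass=secret');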

PUT and DELETE are very rarely used, and are never issued by normal web browsers unless a JavaScript application has been programmed specifically to do so, or by other specialised protocols/APIs such as WebDAV or web services.

Ha, I really should have saved this for a more complicated issue. I guess what I'm saying (was trying to say) is, I get the difference between POST and GET, if only on a very basic level. I just don't get how/why this could happen. It's (generally) been my experience that when something is fucked, it's because someassholeone thought it would be a good idea. I don't understand how something like this could ever be considered a good idea, so how did it happen?

Ah, well that's somewhat disappointing. I thought this would be a premature optimization or disagreement over a spec or something. If this is just someone hitting i one too many times in vim or something, that's kind of lame :/

Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.

Caching when max-age=0 may violate common sense, but it's debatable whether it violates the spec. If your intent was that a resource shouldn't be cached, the proper Cache-Control header has always been "no-cache".

I see what you are asking now. The answer is no, it wouldn't be. max-age=0 does not indicate cacheability. You'd need something that positively indicates cacheability, like Cache-Control: private, or Cache-Control: public (or an Expires: header, in the absence of a contradicting Cache-Control header). The max-age directive, even when non-zero, only specifies the length of time something may be held in cache before revalidation if it is cacheable, and does not grant additional cacheability to requests that are otherwise not cacheable in the current context. For something not cacheable, it shouldn't have any effect.

When they say "UNLESS the response includes appropriate Cache-Control or Expires header fields" they mean one that indicates cacheability, and the part of the Cache-Control header that indicates cacheability is the "public", "private", or "no-cache" directive (or "no-store", which implies "no-cache").

If you did want a POST request to be cacheable, then you should not just set Cache-Control: max-age=3600 or something; you should also specify that it's cacheable, as in Cache-Control: public, max-age=3600. Or substitute public with private if you want something only locally cacheable (i.e., only by the user's own browser).
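
To make that concrete, here's a rough sketch using Node's built-in http module (the /report route and the values are invented, just to show the shape of the headers):

    const http = require('http');

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.url === '/report') {
        // Explicit opt-in: "public" grants cacheability; "max-age" only bounds
        // how long a cached copy may be reused before revalidation.
        res.setHeader('Cache-Control', 'public, max-age=3600');
      } else if (req.method === 'POST') {
        // The safe default for POST endpoints: don't let anything cache this.
        res.setHeader('Cache-Control', 'no-cache');
      }
      res.end('ok');
    }).listen(8080);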

It appears to be caching all POSTs that are not marked non-cacheable. By putting in a timestamp, he ensures each of his requests is unique and thus will never be satisfied from the cache, and so will go through as if there were no cache.
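
The timestamp trick looks roughly like this on the client (a sketch; the endpoint and field names are made up):

    // Append a unique query parameter so no two POSTs ever share a cache key.
    function postWithoutCache(url, data) {
      const separator = url.includes('?') ? '&' : '?';
      return fetch(url + separator + '_=' + Date.now(), {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data),
      });
    }

    // Example: postWithoutCache('/api/update_status', { status: 'hello' });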

The other workaround of marking POSTs non-cacheable is a better workaround.

Yes I understand - my point is that it seems weird that the OP phrases the answer in terms of how he changed his JavaScript function, not that he tweaked the URL. It's the kind of troubleshooting you do when you don't understand what goes on under the covers when you call the jQuery function.

If what the first SO answer says is true, then what happens is wrong, at least for the "Cache-Control: max-age=0" part. I see no basis for the same answer's claim that the user agent can, in theory, cache POSTs on the basis of the use of HTTP 303.

Edit: spec also has this part: "When an intermediate cache is forced, by means of a max-age=0 directive, to revalidate its own cache entry, and the client has supplied its own validator in the request, the supplied validator might differ from the validator currently stored with the cache entry. In this case, the cache MAY use either validator in making its own request without affecting semantic transparency. However, the choice of validator might affect performance. The best approach is for the intermediate cache to use its own validator when making its request. If the server replies with 304 (Not Modified), then the cache can return its now validated copy to the client with a 200 (OK) response. If the server replies with a new entity and cache validator, however, the intermediate cache can compare the returned validator with the one provided in the client's request, using the strong comparison function. If the client's validator is equal to the origin server's, then the intermediate cache simply returns 304 (Not Modified). Otherwise, it returns the new entity with a 200 (OK) response." I don't see how this is relevant, but I might be wrong.

So we basically just treat iOS6 like we would any browser that's behind a misbehaving proxy?

Sometimes it's not just the browser; sometimes there's an upstream transparent proxy server. I've run into this before, and usually use a "cache busting" URL (via a unique querystring) and a unique POST URL as well, out of habit.

Transparent proxy servers are everywhere and they cache all sorts of crap. It wouldn't surprise me that if you are hooked into someone else's data network (if you roam), you are behind a transparent proxy. I hit them a lot on 3G in the midwest, where the mobile service is subcontracted out to something akin to a mom-and-pop ISP.

This is actually pretty fair. If you want to be in charge, you accept responsibility for the successes -- and failures -- of the people you're in charge of. You don't get to enjoy just the positive shit. That's why you get paid eight figures.

Slightly more specific analogy: you're doing a series of exams. The first time you hand one in, the school sends it off to an exam board, who marks it, sends feedback back, and the school gives you back your mark. The second (and future) times you hand one in, the school decides not to pay the expense of sending the paper off to the exam board, and instead just gives you a copy of the first set of feedback on the basis that it comes to the same thing but it's faster; after all, it's what the exam board sent in response to a submission, why should it be any different the second time?

POST requests are used to push data to a webserver (for example your login details).
Caching data will store the previous results in memory and will not check the server for new results.

For example, with caching of POSTs enabled: say you logged in (and the server returned the message "login successful"), changed your password, then logged out. If you afterwards tried to log in again with the old password, the request would still be sent to the server, but the server's response ("invalid password") would be ignored and the cached value ("login successful") would be returned to you.

Even though your login failed, your browser would think you were correctly logged in, whereas the webserver would see you as an unauthenticated user. This would cause things to break when your browser tries to do things that need a logged-in user.

I've had an issue relating to the browser saving session state on my iOS 5.1.1 iPad: I was filling out a form and realised I had made a mistake. It was a multi-step form, so I decided it would just be easier to start again. I was in private mode. I closed all the tabs and closed the app, opened it back up, only to find my session preserved!

Um, sure you do. If I send two POSTs to /status_updates with the same message, the server should know that I sent two status updates. If it wants to de-dupe on its end, that's fine, but that's not something that should be happening in the browser.

What Apple is doing goes against the HTTP standard. The server responding to the request determines whether a response should be cached, not the client. This means that caching POST responses by default is a bug.