For the first-time visit case, where the browser/device has no resources cached from the domain, the browser could simply indicate that with an empty fingerprint/ID header and the server can start pushing those resources. For frequently visited sites it probably doesn't matter (since everything is cached), but for the long tail of content it may be important to optimize this use case.
Overall - I was (sort of) thinking of similar mechanisms. Inlining all the way is very bandwidth-intensive (as you cite), so your approach seems like a fairly reasonable trade-off to me.
In scenarios where there are intermediate caches much closer to the browser, there is still a gap, because servers will push content that intermediate caches may already have cached. Maybe the intermediaries can add to the ID header of the content when they forward requests upstream, so the server(s) can avoid resending content that is cached downstream (unless requested, of course).
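As a sketch of that intermediary idea, under assumptions (the "Cache-Map" header name is borrowed from Ido's suggestion below; the comma-separated ID syntax is purely illustrative):

```python
# Hypothetical sketch: an intermediate cache folds the IDs of objects it
# already holds into the client's cache-map header before forwarding the
# request upstream, so the origin can skip pushing anything cached
# downstream. Header name and ID syntax are assumptions, not specified.

def merge_cache_map(request_headers: dict, proxy_cached_ids: set) -> dict:
    """Return a copy of the headers with the proxy's IDs merged in."""
    client_ids = {
        v for v in request_headers.get("Cache-Map", "").split(",") if v
    }
    merged = sorted(client_ids | set(proxy_cached_ids))
    headers = dict(request_headers)
    headers["Cache-Map"] = ",".join(merged)
    return headers
```

The origin then sees the union of everything cached along the path, at the cost of a slightly larger header.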
But wait a second - we are already thinking of cutting them out of the equation!
I think this approach has a lot of merit and we should explore it further.
Thanks,
Rajeev
From: "Safruti, Ido" <ido@akamai.com<mailto:ido@akamai.com>>
Date: Fri, 27 Jul 2012 11:26:09 -0700
To: HTTP Working Group <ietf-http-wg@w3.org<mailto:ietf-http-wg@w3.org>>
Subject: Thoughts on server push and http/2.0
I want to propose a somewhat new approach to server push, which has been a charged and debated topic here.
Server push has various limitations, but the reality is that server push is here, whether we like it or not, in the form of inlining. The different implementations out there could be a good resource for analysis and help us determine what the limitations are and which tools we can provide within the framework of HTTP/2.0.
The challenges/reservations that have been raised so far with regard to server push are:
1. It is not controlled by the client.
2. It can potentially increase bandwidth consumption, drain battery life, etc.
3. The cache management mechanisms in place for request-based resources aren't there for push. This may result in the server sending content already stored in the client's cache.
4. There are suggestions calling for client poll rather than server push - that way the client can inspect its cache and decide which resources to request, and what information to report when requesting them.
One of the strongest arguments supporting server push is that not only is the demand there, but server push is in fact already out there, as mentioned above - in inlining resources, and in consolidating resources for subsequent requests.
As this is already being done, it would be helpful if we could suggest a model that provides better tools and more control to do exactly that.
The demand for server push is about better utilizing the network. Any client-driven method will have the initial (top-level) request, and only after receiving the response with the embedded/hinted resources can the following requests be issued. This means there is a full round-trip time in which the server could send data but isn't. This is practically "free" bandwidth. High-latency connections, especially high-latency high-bandwidth ones (mobile, for instance), could particularly benefit from this.
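As a rough illustration of that idle round trip (the numbers below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope sketch of the "free bandwidth" available in the
# round trip between the top-level response and the follow-up requests.

def idle_rtt_bytes(rtt_seconds: float, bandwidth_bytes_per_s: float) -> float:
    """Bytes the server could have pushed while waiting for requests."""
    return rtt_seconds * bandwidth_bytes_per_s

# e.g. a 200 ms mobile RTT on a 2 MB/s link leaves room for ~400 KB
# of pushed content that would otherwise go unused.
```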
* The alternatives used today lack the additional control that support from the protocol can provide: for instance, a client can request to stop a pushed object, but it can't do so if the push is done via inlining (as everything is consolidated into one request).
* We can learn from FEO and the optimization techniques used today (especially around mobile) about how to hint at what is cached on the client, and how to use this data. Today, when inlining and storing objects in local storage, the server sets a cookie (or cookies) to indicate the presence of the object in cache. This way, the client reports it on following requests, and the server can decide not to inline the resource.
* There are some obvious deficiencies to the cookie solution: nothing ensures the cookies stay in sync with the cache/local storage, and we are overloading the cookie mechanism, which is already used for many other application-specific needs.
* Inlining prevents efficient cache management. In some cases local storage is used as a "hack" to enable caching, but I believe we can do better.
* Adding client control (as mentioned above), as well as prioritization of streams, would help: pushed content could be set at a low priority, ensuring that when other requests arrive they will not be blocked.
Based on the above, I believe we should look again at enabling server push in the protocol, with the goal of addressing the concerns and requirements needed for successful server push. My expectation from HTTP is to provide the tools and mechanisms that enable a successful implementation of server push, but to keep them flexible enough that the layers on top of it, or the application on top of HTTP, can use them for a useful implementation. This is similar to how HTTP defines cookies, ETags and Vary headers, but the other layers control them and use them efficiently (even though you can easily go very wrong with these and end up with a highly inefficient implementation).
I'd like to propose a solution to mitigate the server-push limitations. The generic flow is explained below, but I'll start with a simple common scenario:
1. A user requests a page, and the server delivers the page as usual, without using server push.
2. When delivering the JS & CSS files of the page, the server marks the most common resources by adding an ID to their cache instructions, such as an 8-10 byte signature of the URL, ~12-15 characters when base64-encoded.
3. Future requests to pages (not resources, just pages) on this domain will include a new header that holds the IDs of all JS/CSS files still in cache that were served from this domain (or domains associated with this connection). Assuming an average case includes 10 such resources, the header value will be ~150 bytes, added only to requests for pages.
4. The server will use this information to decide more intelligently whether and what to push.
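As an illustration of step 2, one possible way to derive such an ID - under assumptions: a SHA-256 hash over the URL plus a version token (e.g. its ETag), truncated to 9 bytes and base64url-encoded to 12 characters; the exact hash and lengths are not specified above:

```python
# Hypothetical sketch of the per-resource ID from step 2.
# 9 raw bytes encode to exactly 12 base64 characters, within the
# 8-10 byte / 12-15 character range suggested in the scenario.
import base64
import hashlib

def resource_id(url: str, version: str, nbytes: int = 9) -> str:
    """Short ID covering the resource URL and its version."""
    digest = hashlib.sha256(f"{url}#{version}".encode()).digest()
    return base64.urlsafe_b64encode(digest[:nbytes]).decode()
```

A new version of the same URL yields a different ID, so a stale cached copy is simply not reported and gets pushed again.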
With that simple scenario in mind, I would like to continue with the more generic details (I deliberately didn't go into syntax details, as I think those can easily be resolved once we agree on the concept):
1. When serving content (through push or regular delivery), a server *can* add the following data to the caching instructions associated with the object:
* Generate a specific ID for the resource within the domain. This could be a hash, similar to ETag generation. I believe 8-10 bytes would be sufficient. The ID should cover the object reference (e.g. URL) and its version.
* Define a scope for the object. This is similar to the scope of a cookie (domain, directory), but should also have the ability to specify content type. Specifically, a common use case will be to include only top-level pages (e.g. the main HTML or a frame HTML).
2. A client receiving a pushed object will store the ID and scope associated with it as part of the cached entry.
3. When requesting a resource, the client (user agent) should report the IDs of objects in its cache for which the request is in scope:
* This will be reported in a dedicated header ("stored-objects:", "cache-map:" or any other name).
* Assuming the server will not push unless it has received client-cache-status info, the client can send this as a delayed header (currently supported in SPDY), and thus avoid delaying the processing of the request itself.
* It may make sense to enable reporting the client cache out of context, to provide more info to the server. I'm not sure about this yet; it would definitely require maintaining state on the server side about the client's cache along the entire session, which adds complexity.
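The client side of points 2 and 3 could look roughly like this sketch - under assumptions: the cache keeps the server-assigned ID and scope next to each entry, and scope matching uses simple domain-suffix, path-prefix and content-type checks (none of this matching logic is specified above):

```python
# Hypothetical sketch: cache entries carry the pushed ID and scope,
# and a request collects the IDs of all in-scope entries into one
# header value. Field names and matching rules are assumptions.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    resource_id: str
    scope_domain: str        # like a cookie domain scope
    scope_path: str          # like a cookie path scope
    scope_content_type: str  # e.g. "text/html" to match only page requests

def cache_map_header(cache: list, req_domain: str, req_path: str,
                     req_content_type: str) -> str:
    """Build the cache-map header value for one outgoing request."""
    ids = [
        e.resource_id for e in cache
        if req_domain.endswith(e.scope_domain)
        and req_path.startswith(e.scope_path)
        and e.scope_content_type == req_content_type
    ]
    return ",".join(ids)
```

Because the scope restricts reporting to (say) top-level page requests, the header stays small instead of dumping the whole cache map on every request.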
Given the above tools, a server can now determine which resources to push, based on tagging the common resources or on analyzing logs for the most common resources. The scope is important in ensuring that only required resources are reported, as we don't want to transmit the entire cache map of a client.
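The server-side decision then reduces to a set difference - a minimal sketch, where the mapping from URLs to assigned IDs and the reported-ID set are illustrative names, not anything specified:

```python
# Hypothetical sketch: push the page's common resources, minus
# whatever the client reported as already cached and current.

def resources_to_push(common_resources: dict, reported_ids: set) -> list:
    """common_resources maps resource URL -> assigned ID.
    reported_ids is the set of IDs parsed from the cache-map header.
    Returns the URLs worth pushing."""
    return [
        url for url, rid in common_resources.items()
        if rid not in reported_ids
    ]
```

An outdated cached copy carries an old ID, so its URL naturally reappears in the push list.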
I believe this approach provides the right tools to implement server push wisely, and leaves them flexible enough that applications and specific implementations still have the capability to innovate and use them in smarter ways than we can currently plan for.
All comments and ideas on how to make it better (or why should we kill it altogether) are appreciated.
- Ido