---

This is tricky business. We'll probably bump into implementations where we should use one or the other, and that can't really be feature-tested, which means we'll have to fall back to browser sniffing in those cases. :(
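For the cases that *can* be feature-tested, a write probe is usually enough. A minimal sketch, with a hypothetical `storageAvailable` helper (not a basket.js API): some browsers expose `localStorage` but throw on write (e.g. private browsing modes), so checking for the object's existence alone isn't reliable.

```javascript
// Hypothetical helper: feature-test a Web Storage object instead of
// sniffing the user agent. Some environments expose localStorage but
// throw on setItem, so only an actual write proves it is usable.
function storageAvailable(storage) {
  try {
    var key = '__storage_probe__';
    storage.setItem(key, key);
    storage.removeItem(key);
    return true;
  } catch (e) {
    return false;
  }
}
```

In a browser this would be called as `storageAvailable(window.localStorage)`; the try/catch also covers quota-exceeded errors on a full store.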

---

I agree that this project really needs some performance testing to validate whether there are benefits to using localStorage for caching. There are a few different facets to benchmarking we need to consider, though: localStorage vs. alternative 'modern' storage formats, and localStorage vs. the browser cache (which is the major thing we need to cover).

More random thoughts..

From what research I've done, IDB is slower than localStorage at the moment (as partially confirmed by the above), but I wonder if part of the problem is that there's much confusion about which storage options should be used for which use cases.

For offline, there are three problems, and three solutions:

- Application => cache manifest
- Configuration => localStorage
- Data => IndexedDB, file system (File API)

For our particular needs, maybe we should be using IDB, but the problem there is that it's dreadfully slow. Does that mean we should be looking at the File API rather than localStorage? Libraries like https://github.com/ebidel/filer.js could help with that, but the issue is that even if we did use the most optimal solution, we'd have to implement it in a tiered way.
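The "tiered" idea above could be sketched as an ordered list of backends where the first usable one wins. This is an illustrative sketch, not basket.js code; the `isAvailable` check on each backend is a hypothetical hook (it could wrap a write probe or an API-presence test).

```javascript
// Hypothetical sketch of tiered storage selection: try the preferred
// backends in order and settle on the first one that is actually usable.
function pickBackend(backends) {
  for (var i = 0; i < backends.length; i++) {
    if (backends[i].isAvailable()) {
      return backends[i];
    }
  }
  return null; // no client-side storage available; fall back to plain network loads
}

// Usage: pickBackend([fileSystemBackend, localStorageBackend])
```

The ordering encodes the preference (File API first where supported, localStorage otherwise), while callers never need to know which tier they ended up on.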

---

The problem with IndexedDB is that it requires user approval, which for me is a no-go. Who would want to annoy their users with a prompt just to save a few milliseconds?

A JS-mutable cache manifest is what we really need: one with APIs to add/remove files from the manifest. Maybe we should suggest something like that to the right people? There is clearly a need for better control over caching.

Using other storage methods to cache scripts is a temporary hack that doesn't scale in the long run. At the moment localStorage is our best bet, even though I have my doubts.

---

I think that if we just abstract the storage implementation we have, we can easily add others by plugging them in. 2.5MB should be sufficient for almost all JavaScript applications, or you might have to reconsider your codebase. :)
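Abstracting the storage layer could look like this: every backend exposes the same small surface, so swapping one for another is a one-line change at setup time. A minimal sketch with a hypothetical in-memory adapter (useful as a last-resort fallback and for tests); the `get`/`set`/`remove` names are assumptions, not basket.js's actual API.

```javascript
// Hypothetical pluggable-adapter sketch: any backend with this shape
// (get/set/remove) can be dropped in without touching the caller.
function createMemoryAdapter() {
  var data = {};
  return {
    get: function (key) { return key in data ? data[key] : null; },
    set: function (key, value) { data[key] = String(value); },
    remove: function (key) { delete data[key]; }
  };
}
```

A localStorage-backed adapter would have the identical shape, delegating to `localStorage.getItem`, `localStorage.setItem` and `localStorage.removeItem`. Values are coerced to strings here to mirror Web Storage semantics, so both adapters behave the same.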

---

I think 2.5MB is absolutely fine for most applications. I have yet to see a (properly built) web app that requires more than that for its JS payload, and I think it should be okay to use.

With respect to the File/FileSystem API, I mentioned this further up in the thread. The problem is that support for it is currently limited to Chrome: http://caniuse.com/filesystem. We could say: use it if supported, otherwise fall back to localStorage. But is that too much effort? What do you guys think?
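The "use it if supported, otherwise fall back" check is small. A sketch, assuming the Chrome-prefixed name `webkitRequestFileSystem` alongside the unprefixed one; `win` stands in for `window` so the detection stays testable outside a browser, and the returned strings are illustrative labels.

```javascript
// Sketch of backend detection with fallback. The FileSystem API shipped
// prefixed in Chrome, so the webkit-prefixed name must be checked too.
function preferredStore(win) {
  if (win.requestFileSystem || win.webkitRequestFileSystem) {
    return 'filesystem';
  }
  if (win.localStorage) {
    return 'localStorage';
  }
  return 'network'; // no client-side cache; load scripts normally
}
```

In a page this would be called once at startup as `preferredStore(window)` and the result used to pick the adapter.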

---

I don't disagree, just an observation: it might be an edge case, but Twitter is over 2 MB. :P

I think we should at least include the Filesystem API in our perf tests first. It's worth the effort if there's any gain from using that API. Remember that even with only Chrome supporting it, over 30% of users would benefit.

---

His perf test also shows that localStorage in Safari is way faster than in any other browser, and that Chrome is way slower. I wonder why that is... If every browser were as fast as Safari, no one would actually complain about localStorage. It looks like a lot of the implementations could be drastically improved without changing the spec.

---

It might be worth bringing @beverloo from Chrome into this discussion, as he might be able to shed some light on this. I also have to say: I wonder just how many (or how large) the scripts people are trying to load must be for the performance hit to be visible. In my own experiments I've never seen reads/writes taking so long that a user would actually notice, but I'm generally dealing with payloads of under 500KB. Do we think others are assuming the maximum that can be stored for their benchmarks?

---

Great article. I found the comments (and the links from there) useful too. It's fascinating that developers working at vendors (Heilmann and Kinlan) argue that we shouldn't be using LS because of perf issues, size, etc., while developers who have been using it in the wild feel the impact is completely unnoticeable.

---

Yes, I also found the comments interesting; they usually are. localStorage being way faster in Safari than in any other browser clearly shows that it's mostly an implementation issue, which can be fixed without N years of committee meetings to change the spec.

Though it wouldn't hurt to have an async localStorage and the ability to ask for more storage.

---

It's probably the function call, which does add slight overhead compared to directly accessing the array index. But again, a perf test doesn't show real-world usage; otherwise Chrome's localStorage would clearly be useless.

---

That means a large amount of data in localStorage could actually increase page load time because JavaScript needs to wait before executing.

I too agree that a real-world storage benchmark across all browsers is needed. Until it's clear (and validated by others) that loading JS from LS can be done without blocking page load (somehow), I guess this project will remain in an experimental state. That's okay, though. One day doing this will be both safe and relatively efficient. :)

---

From my understanding it only blocks when it encounters a localStorage access in the code. So if that code is at the bottom of the page, or loaded with async/defer, it should not block page loading. The important question is still: is loading scripts from disk into memory faster than loading them from the network? It should be, but as I said earlier, we need tests to prove it.
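Keeping the storage access off the critical path can be made explicit. A sketch, assuming a hypothetical `loadFromCache` callback that would do the actual synchronous read and script injection; `win` stands in for `window` so the scheduling logic stays testable.

```javascript
// Sketch: defer the synchronous localStorage read until after the load
// event, so it cannot delay initial rendering. If the page has already
// finished loading, read immediately.
function readStorageAfterLoad(win, loadFromCache) {
  if (win.document.readyState === 'complete') {
    loadFromCache(win.localStorage);
  } else {
    win.addEventListener('load', function () {
      loadFromCache(win.localStorage);
    });
  }
}
```

This trades a later script start for an unblocked first paint, which matches the "bottom of the page or async/defer" observation above.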

---

The resource is being loaded from a server that I own, so that I can confirm the resource is actually being read from the cache, and also because finding a file that both allowed CORS requests from * and didn't send no-cache headers didn't go very well.

Chrome 23 and Canary 25 both show that localStorage is much faster (60-80%), but please feel free to point out any flaws in my test. I'll create a new test soon that actually uses basket.js.

Given that raw localStorage appeared to be faster, I checked (by adding a breakpoint in Chrome developer tools) that the getUrl function is never called. It seems likely that the bottleneck is creating the script tag and adding it to the DOM, though I haven't proven that.
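The injection step under suspicion is small enough to time in isolation. A sketch of that step only (illustrative, not basket.js's internals): take source text that came out of the cache and execute it by appending an inline script element; `doc` stands in for `document` so the function stays testable.

```javascript
// Sketch of the suspected bottleneck: executing cached source by
// creating a script element and attaching it to the DOM.
function injectScript(doc, source) {
  var script = doc.createElement('script');
  script.text = source; // inline source; executes synchronously on append
  doc.head.appendChild(script);
  return script;
}
```

Wrapping this call in its own timing measurement would separate DOM/parse cost from localStorage read cost in the benchmark.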

This was basket.js 0.2 from the website rather than the latest version in git, but I can see no reason why the newer version would be faster.

---

Let's run the above on a few mobile devices to see how well that fares; I'll try some this afternoon. :) We've never been able to accurately benchmark basket against the browser cache (and may not be doing so with the above either), but I'm interested in any efforts that get us closer to that point.

---

In that test, my iPad on WiFi seems almost perfectly equal, and the same goes for an iPhone 5 with a good 3G connection. I'll also try on a crappy connection to see if that makes a big difference. :)

---

I think this and some previous articles clearly show that localStorage is usually not a problem. We could still use some perf tests that store/fetch larger amounts of data, which we do. But I'm pretty confident IndexedDB and the Filesystem API are not the solution for basket.js. We should update the "Why localStorage?" summary to reflect the article.

---

You're losing a lot of performance in your promise implementation. Also, simply optimising for the case where there is only one script would mean one less promise being created, which would probably make up the difference by itself in that particular test. Probably not worth optimising for that case, though.
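The single-script shortcut mentioned above amounts to skipping the aggregating promise when there is nothing to aggregate. A sketch using native Promises for brevity (the project uses RSVP, but the shape is the same); `requireScripts` and `fetchOne` are hypothetical names, with `fetchOne` standing in for the per-script loader.

```javascript
// Sketch of the single-script fast path: with one URL there is nothing
// to combine, so return its promise directly instead of wrapping it,
// saving one promise allocation and one extra resolution step.
function requireScripts(urls, fetchOne) {
  if (urls.length === 1) {
    return fetchOne(urls[0]);
  }
  return Promise.all(urls.map(fetchOne));
}
```

Note the observable difference: the single-URL call resolves with the value itself, while the multi-URL call resolves with an array, so callers would need to account for that (which may be exactly why it isn't worth optimising).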

---

I think it's worth performance-profiling the newer RSVP-based implementation to see how much worse/different the performance is compared with the last version. If we discover the performance has taken a large hit, we should consider offering builds with a promise-less API as well as those with RSVP support.