Now, it might be best if, when right-clicking an image that uses the <picture> element and selecting "Save As...", the browser saved the highest available resolution of that image by default (rather than whatever resolution is currently displayed). Alternatively, perhaps the "Save As..." menu option could become a cascaded context menu for images that use <picture>, letting the user specifically save any of the available versions. It would probably be similarly wise to make the right-click "View Image" option link to the highest-resolution image by default.

Another thing: it may be useful to have the ability to flat-out always load a specific size on web pages by default, whether that's the highest resolution, the lowest, or something in between. Smaller resolutions would be useful on slow and/or bandwidth-metered connections, while the largest would be good for big high-resolution displays on a fast, unmetered connection. Heck, maybe there could even be a setting to load the lowest-resolution images first and then, once the page is fully loaded, start loading one of the higher resolutions (which would have a similar effect to interlaced images).

One of the main things I'm concerned about is fallback images (for browsers that don't support <picture>) being lower resolution than the versions made available via <picture>. This is demonstrated by the following demo: http://responsiveimages.org/demos/on-a-grid

I'm also concerned about <picture> images that have larger resolutions available than whatever is displayed: you save the image, get a medium- or low-resolution copy, and aren't even aware that a higher-resolution version exists.

Your choice of words is funny, considering I made this thread specifically because I was paranoid about saving lower-res images when higher-res versions were available without my even being aware of it.

When a server sends a page to your browser, the browser first downloads all the HTML on the page and then parses it. Or at least that's what used to happen. Modern Web browsers attempt to speed up page load times by downloading images before parsing the page's body. The browser starts downloading the image long before it knows where that image will be in the page layout or how big it will need to be.

This is simultaneously a very good thing—it means images load faster—and a very tricky thing. It means using JavaScript to manipulate images can actually slow down your page even when your JavaScript is trying to load smaller images (because you end up fighting the prefetcher and downloading two images).
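To make that concrete, here's a rough sketch of the kind of script-based image swap the article means (the id, file names, and breakpoint are made up for illustration):

  <img id="hero" src="hero-large.jpg" alt="Hero image">
  <script>
    // By the time this runs, the preload scanner has typically already started
    // (or finished) fetching hero-large.jpg, so swapping the src on small
    // screens triggers a second download instead of saving one.
    if (window.innerWidth < 600) {
      document.getElementById("hero").src = "hero-small.jpg";
    }
  </script>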

Large images incur unnecessary download and processing time, slowing the experience for users. To work around this problem, web developers specify multiple sources of the same image at different resolutions and then pick the image of the correct size based on the viewport size. As web developers lack the markup to achieve what they need, they end up relying on semantically neutral elements, CSS background images, and JavaScript libraries. In other words, developers are being forced to willfully violate the authoring requirements of HTML.
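For reference, this is roughly the markup <picture> provides to express that directly (file names and media queries are placeholders, not anything from the demos above):

  <picture>
    <source media="(min-width: 1024px)" srcset="photo-large.jpg">
    <source media="(min-width: 600px)" srcset="photo-medium.jpg">
    <!-- plain <img> fallback for browsers without <picture> support -->
    <img src="photo-small.jpg" alt="Photo">
  </picture>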

Bypass preload scanner

The reliance on semantically neutral elements (e.g., the div and span elements), instead of semantically meaningful elements such as img, prevents browsers from loading the image resources until after the DOM has (at least partially) loaded and scripts have run. This directly hinders the performance work browser engineers have done over the years to optimize resource loading (e.g., WebKit's HTMLPreloadScanner). Unnecessarily bypassing things like the preload scanner can have measurable performance impact when loading documents. See, for example, The WebKit PreloadScanner by Tony Gentilcore for a small study that demonstrates an up to 20% impact in load time when WebKit's PreloadScanner is disabled. More recent performance tests yield similar results. For more information, see How the Browser Pre-loader Makes Pages Load Faster by Andy Davies.
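As an illustration of the kind of workaround being described (class name and file names invented for the example), an image delivered as a CSS background is invisible to the preload scanner:

  <style>
    .hero { background-image: url("photo-small.jpg"); }
    @media (min-width: 600px) {
      .hero { background-image: url("photo-large.jpg"); }
    }
  </style>
  <!-- No <img> for the scanner to find: the fetch can't start until the CSS
       has been downloaded, parsed, and matched against the element. -->
  <div class="hero"></div>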

Reliance on scripts and server-side processing:

The techniques rely on either JavaScript or a server-side solution (or both), which adds complexity and redundant HTTP requests to the development process. Furthermore, the script-based solutions are unavailable to users who have turned off JavaScript.
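A typical script-based pattern looks something like this (no particular library, just the general shape, with made-up attribute names):

  <img data-src="photo-large.jpg" data-src-small="photo-small.jpg" alt="Photo">
  <noscript><img src="photo-small.jpg" alt="Photo"></noscript>
  <script>
    // The real src is only assigned once this script runs, so the preload
    // scanner never gets a chance to fetch the image early, and users with
    // JavaScript disabled only get the <noscript> copy.
    document.querySelectorAll("img[data-src]").forEach(function (img) {
      img.src = window.innerWidth < 600 ? img.dataset.srcSmall : img.dataset.src;
    });
  </script>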

Modern Web browsers attempt to speed up page load times by downloading images before parsing the page's body. The browser starts downloading the image long before it knows where that image will be in the page layout or how big it will need to be.

No, they don't. That's silly and not true. How would a browser know what images are in a page before parsing the HTML? Clairvoyance?

The second article is also very opinionated and debatable. A properly designed website (especially a modern one) will decide in real time what to download using dynamic HTML (either client-side or server-side). Thinking that reliance on JavaScript is bad is strange; rendering modern web pages through the DOM is the main focus of most web browsers out there. Web developers do not lack the markup to achieve what they need for "responsive design", and saying otherwise is just not true either.


Modern Web browsers attempt to speed up page load times by downloading images before parsing the page's body. The browser starts downloading the image long before it knows where that image will be in the page layout or how big it will need to be.

The Ars Technica article does a terrible job of describing what actually happens. It was never true that browsers first downloaded everything and then started parsing. It is also not really true that images are downloaded before the markup is parsed.

What used to happen is that images were downloaded as soon as the HTML parser created the <img> element (subject to download priorities, of course). That could well be before the browser had completely fetched the CSS, and hence before it knew the layout.

What happens now is pretty much the same thing, except that if there is a <script src> blocking the HTML parser from continuing (it has to wait in case the script uses document.write), browsers continue parsing the rest of the page with a lightweight speculative tokenizer in order to find further URLs to start fetching while the blocking script is downloaded. This includes images, and it can happen before the browser knows the layout.
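In other words, with markup along these lines (file names made up), the image fetch can start even while the blocking script is still downloading:

  <script src="analytics.js"></script>  <!-- parser must stop here in case the script calls document.write() -->
  <img src="photo.jpg" alt="Photo">     <!-- but the speculative scanner still finds this URL and starts the fetch -->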

It was a requirement that responsive images do not break this optimization. Hence <picture>. If it was not important, you could use JS instead.

If web designers would actually follow the few simple design rules there are, this wouldn't be a problem to begin with: put CSS in the <head>, don't use blocking JavaScript above the fold, load JS asynchronously when you can, and use a lazy-loading system on sites with a lot of content and lots of images. Simple rules, easy to follow, and no need for arbitrary new elements. It's like there's an ongoing war between "camp scripting" and "camp markup", and of course the W3C is going to be solidly in "camp markup". We have advanced scripting and modern web browsers that are exceedingly good at it, so why jump through hoops to avoid it?
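For what it's worth, the kind of thing I mean looks roughly like this (file names and the lazy-loading hook are placeholders; the data-src image would be picked up by whatever lazy loader you use):

  <head>
    <link rel="stylesheet" href="site.css">   <!-- CSS in the <head> -->
    <script src="app.js" async></script>      <!-- script loaded without blocking the parser -->
  </head>
  <body>
    <img src="above-the-fold.jpg" alt="Header photo">
    <!-- below-the-fold images deferred to a lazy loader so they don't compete for bandwidth -->
    <img data-src="gallery-01.jpg" class="lazy" alt="Gallery photo">
  </body>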
