Il y a du thé renversé au bord de la table
Adventure! Excitement! Wonders! Random thoughts by David Rajchenbach-Teller!
https://dutherenverseauborddelatable.wordpress.com

This blog has moved
https://dutherenverseauborddelatable.wordpress.com/2016/09/19/this-blog-has-moved/
Mon, 19 Sep 2016

You can find my new blog on github. Still rough around the edges, but I’m planning to improve this as I go.

The Gecko monoculture
https://dutherenverseauborddelatable.wordpress.com/2016/03/07/the-gecko-monoculture/
Mon, 07 Mar 2016

I remember a time, not so very long ago, when Gecko powered 4 or 5 non-Mozilla browsers, some of them on exotic platforms, as well as GPS devices, WYSIWYG editors, geographic platforms, email clients, image editors, eBook readers, documentation browsers, the UX of virus scanners, etc., as well as a host of innovative and exotic add-ons. In those days, Gecko was considered, among other things, one of the best cross-platform development toolkits available.

The year is now 2016 and, if you look around, you’ll be hard-pressed to find Gecko used outside of Firefoxen (alright, and Thunderbird and Bluegriffon). Did Google or Apple or Microsoft do that? Not at all. I don’t know how many in the Mozilla community remember this, but this was part of a Mozilla strategy. In this post, I’d like to discuss this strategy, its rationale, and the lessons that we may draw from it.

Building a Gecko monoculture

For the first few years of Firefox, enthusiasm for the technology behind the browser was enormous. After years of implementing Gecko from scratch, Mozilla had a kick-ass cross-platform toolkit that covered almost everything from system interaction to network, cryptography, user interface, internationalization, even an add-on mechanism, a scripting language and a rendering engine. For simplicity, let’s call this toolkit XUL. Certainly, XUL had a number of drawbacks, but in many ways, this toolkit was years ahead of everything that other toolkits had to offer at the time. And many started to use XUL for things that had never been planned. Dozens of public projects and certainly countless more behind corporate doors. Attempts were made to extend XUL towards Python, .Net, Java and possibly more. These were the days of the “XUL Planet”. All of this was great – for one, that is how I joined the Mozilla community, embedding Gecko in exotic places and getting it to work nicely with exotic network protocols.

But this success was also hurting Mozilla’s mission in two ways. The first way was the obvious cost. The XUL platform had a huge API, in JavaScript, in C, in C++, in IDL, in declarative UI (XUL and XBL), not to mention its configuration files and exotic query language (hello, RDF, I admit that I don’t really miss you that much), and I’m certain I’m missing a few. Oh, that’s not including the already huge web-facing API that can never abandon backwards compatibility with any feature, of course. Since third-party developers could hit any point of this not-really-internal API, any change made to the code of Gecko had the potential of breaking applications in subtle and unexpected ways – applications that we often couldn’t test ourselves. This meant that any change needed to be weighed carefully, as it could put third-party developers out of business. That’s hardly ideal when you attempt to move quickly. To make things worse, this API was never designed for such a scenario: many bits were extremely fragile and often put together in haste with the idea of taking them down once a better API was available. Unfortunately, in many cases, fixing or replacing components proved impossible, for the sake of compatibility. And to make things even worse, the XUL platform was targeting an insane number of operating systems, including Solaris, RISC OS, OS/2, even the Amiga Workbench if I recall correctly. Any change had to be kept synchronized between all these platforms, or, once again, we could end up putting third-party developers out of business by accident.

So this couldn’t last forever.

Another way this success was hurting Mozilla is that XUL was not the web. Recall that Mozilla’s objectives were not to create yet another cross-platform toolkit, no matter how good, but to Take Back the Web from proprietary and secret protocols. When the WHATWG and HTML5 started rising, it became clear that the web was not only taken back, but that we were witnessing the dawn of a new era of applications, which could run on all operating systems, which were based on open protocols and, at least at some level, on open source. Web applications were the future – an ideal future, by some criteria – and the future was there. In this context, non-standard, native cross-platform toolkits were a thing of the past, something that Mozilla was fighting, not something that Mozilla should be promoting. It made complete sense to stop putting resources into XUL and concentrate more tightly on the web.

So XUL as a cross-platform toolkit couldn’t last forever.

I’m not sure exactly who took the decision, but at some point around 2009, Mozilla’s strategy changed. We started deprecating the use cases of Gecko that were not the Web Platform. This wasn’t a single decision, nor did it happen in one fell swoop, and it didn’t go in one single unambiguous direction, but it happened. We got rid of the impedimenta.

We reinvented Gecko as a Firefox monoculture.

Living in a monoculture

We have now been living in a Gecko monoculture long enough to be able to draw lessons from our choices. So let’s look at the costs and benefits.

API and platform cost

Now that third-party developers using Gecko and hitting every single internal API are gone, it is much easier to refactor. Some APIs are clearly internal and I can change them without consulting anyone. Some are still accessible by add-ons, and I need to look for add-ons that use them and get in touch with their developers, but this is still infinitely simpler than it used to be. Already, dozens of refactorings that were critically needed but that had been blocked at some point in the past by backwards internal compatibility have been made possible. Soon, Jetpack and WebExtensions will become the sole entry points for writing most add-ons, and Gecko developers will finally be free to refactor their code at will as long as it doesn’t break public APIs, much like developers of every single other platform on the planet.

Similarly, dropping support for exotic platforms made it possible to drop plenty of legacy code that was hurting refactoring, and in many cases, made it possible to write richer APIs without being bound by the absolute need to implement everything on every single platform.

In other words, by the criteria of reducing costs and increasing agility, yes, the Gecko monoculture has been a clear success.

Web Applications

Our second objective was to promote web applications. And if we look around, these days, web applications are everywhere – except on mobile. Actually, that’s not entirely true. On mobile, a considerable number of applications are built using PhoneGap/Cordova. In other words, these are web applications, wrapped in native applications, with most of the benefits of both worlds. Indeed, one could argue that PhoneGap/Cordova applications are more or less applications which could have been developed with XUL, and are instead developed with a closer-to-standards approach. As a side note, it is a little-known fact that one of the many (discarded) designs of FirefoxOS was as a runtime somewhat comparable to PhoneGap/Cordova, which would have replaced the XUL platform.

Despite the huge success of web applications and even the success of hybrid web/native applications, the brand new world in which everything would be a web application hasn’t arrived yet, and it is not certain that it ever will. The main reason is that mobile has taken over the world. Mobile applications need to integrate with a rather different ecosystem, with push notifications, working disconnected, transactions and microtransactions, etc., not to mention a host of new device-specific features that were not initially web-friendly. Despite the efforts of most browser vendors, browsers still haven’t caught up with this moving target. New mobile devices have gained voice recognition and, in the meantime, the WHATWG is still struggling to design a secure, cross-platform API for accessing local files.

In other words, by the criteria of pushing web applications, I would say that the Gecko monoculture has had a positive influence, but not quite enough to be called a success.

The Hackerbase

Now that we have seen the benefits of this Gecko monoculture, perhaps it is time to look at the costs.

By turning Gecko into a Firefox monoculture, we have lost dozens of products. We have almost entirely lost the innovations that were not on the roadmap of the WHATWG, as well as the innovators themselves. Some of them have turned to web applications, which is what we wanted, or hybrid applications, which is close enough to what we wanted. In the meantime, somewhere else in the world, the ease of embedding first WebKit and now Chromium (including Atom/Electron) has made it much easier to experiment and innovate with these platforms, and to do everything that has ever been possible with XUL, and more. Speaking only for myself, if I were to enter the field today with the same kind of technological needs I had 15 years ago, I would head towards Chromium without a second thought. I find it a bit sad that my present self is somehow working against my past self, while they could be working together.

By turning our back on our Hackerbase, we have lost many things. In the first place, we have lost smart people, who may have contributed ideas or code or just dynamism. In the second place, we have lost plenty of opportunities for our code and our APIs to be tested for safety, security, or just good design. That’s already pretty bad.

Also, somewhere along the way, we have largely lost any good reason to provide clean and robust APIs, to separate concerns between our libraries. I would argue that the effects of this can be witnessed in our current codebase. Perhaps not in the web-facing APIs, that are still challenged by their (mis)usage in terms of convenience, safety and robustness, but in all our internal+addons APIs, many of which are sadly under-documented, under-tested, and designed to break in new and exciting ways whenever they are confronted with unexpected inputs. One could argue that the picture I am painting is too bleak, and that some of our fragile APIs are, in fact, due to backwards compatibility with add-ons or, at some point, third-party applications.

Regardless, by the criteria of our Hackerbase, I would count the Gecko monoculture as a bloody waste.

Bottom line

So the monoculture has succeeded at making us faster, has somewhat helped propagate Web Applications, and has hurt us by cutting us off from our hackerbase.

Before starting to write this blog post, I felt that turning Gecko into a Firefox monoculture was a mistake. Now, I realize that this was probably a necessary phase. The Gecko from 2006 was impossible to fix, impossible to refactor, impossible to improve. The Firefox from 2006 would have needed a nearly-full reimplementation to support e10s or Rust-based code (ok, I’m excluding Rust-over-XPConnect, which would be a complete waste). Today’s Gecko is much fitter to fight against WebKit and Chromium. I believe that tomorrow’s Gecko – not Firefox, just Gecko – with full support for WebExtensions and progressive addition of new, experimental WebExtensions APIs, would be a much better technological base for implementing, say, a cross-platform e-mail client, or an e-Book reader, or even a novel browser.

Like all phases, though, this monoculture needs to end sooner or later, and I certainly hope that it ends soon, because we keep paying the cost of this transformation in our community.

Surviving the monoculture

An exit strategy from the Gecko monoculture

It is my belief that we now need to consider an exit strategy from the Gecko monoculture. No matter which strategy is picked, it will have a cost. But I believe that the potential benefits in terms of community and innovation will outweigh these costs.

First, we need to avoid repeating past mistakes. While WebExtensions may not cover all the use cases for which we need an extension API for Gecko, they promise a set of clean and high-level APIs, and this is a good base. We need to make sure that whatever we offer as part of WebExtensions, or in addition to them, remains a set of high-level, well-insulated APIs, rather than the panic-inducing entanglement that is our set of internal APIs.

Second, we need to be able to extend our set of extension APIs in directions not planned by any single governing body, including Mozilla. When WebExtensions were first announced, the developers in charge of the project introduced a UserVoice survey to determine the features that the community expected. This was a good start, but this will not be sufficient in the long run. Around that time, Giorgio Maone drafted an API for developing and testing experimental WebExtensions features. This was also a good start, because experimenting is critical for innovation. Now, we need a bridge to progressively turn experimental extension APIs into core APIs. For this purpose, I believe that the best mechanism is an RFC forum and an RFC process for WebExtensions, inspired by the success of RFCs in the Rust (or Python) community.

Finally, we need a technological building block to get applications other than Firefox to run Gecko. We have experience doing this, from XULRunner to Prism. A few years ago, Mike De Boer introduced “Chromeless 2”, which was roughly in the Gecko world what Electron is nowadays in the Chromium world. Clearly, this project was misunderstood by the Mozilla community – I know that it was misunderstood by me, and that it took Electron to make me realize that Mike was on the right track. This project was stopped, but it could be resumed or rebooted. To make it easier for the community, using the same API as Electron would be a possibility.

Keeping projects multicultural

Similarly, I believe that we need to consider strategies that will let us avoid similar monocultures in our other projects. This includes (and is not limited to) B2G OS (formerly known as Firefox OS), Rust, Servo and Connected Devices.

So far, Rust has proved very open to innovation. For one thing, Rust has its RFC process and it works very well. Additionally, while Rust was originally designed for Servo, it has already escaped this orbit and the temptation of a Servo monoculture. Rust is now used for cryptocurrencies, operating systems, web servers, connected devices… So far, so good.

Similarly, Servo has proved quite open, albeit in very different directions. For one thing, Servo is developed separately from any web browser that may embed it, whether Servo Shell or Browser.html. Also, Servo is itself based on dozens of libraries developed, tested and released individually by community members. Similarly, many of the developments undertaken for Servo are themselves released as independent libraries that can be independently maintained or integrated into yet other projects… I have hopes that Servo, or at least large subsets of it, will eventually find its way into projects unrelated to Mozilla, possibly unrelated to web browsers. My only reservation is that I have not checked how much effort the Servo team has put into ensuring that the private APIs of Servo remain private. If this is the case, so far, so good.

The case of Firefox OS/B2G OS is quite different. B2G OS was designed from scratch as a Gecko application and was entirely dependent on Gecko and some non-standard extensions. Since the announcement that Firefox OS would be retired – and hopefully continue to live as B2G OS – it has been clear that B2G-specific Gecko support would be progressively phased out. The B2G OS community is currently actively reworking the OS to make sure that it can live in a much more standard environment. Similarly, the Marketplace, which was introduced largely to appease carriers, will disappear, leaving B2G OS to live as a web OS, as it was initially designed. While the existence of the project is at risk, I believe that these two changes, together, have the potential to also set it free from a Gecko + Marketplace + Telephone monoculture. If B2G is still alive in one or two years, it may have become a cross-platform, cross-rendering engine operating system designed for a set of devices that may be entirely different from the Firefox Phones. So, I’m somewhat optimistic.

As for Connected Devices, well, these projects are too young to judge. It is our responsibility to make sure that we do not paint ourselves into monocultural corners.

Edit: Added a link to Chris Lord’s post on the topic of missed opportunities.

Dreaming the Internet of Things
https://dutherenverseauborddelatable.wordpress.com/2016/02/17/dreaming-the-internet-of-things/
Wed, 17 Feb 2016

One of these days, using the Cloud of OpaqueCompany™, I will be able to set the colour of my lightbulbs by talking to my TV. Somewhere along the way, my house will become a little bit more energy-hungry and a little bit more dependent on the Cloud of OpaqueCompany™. That’s the promise of the Internet of Things. Isn’t that neat? Isn’t that exciting?

Not really. At least, not for me. But, for some reason, whenever I read about that Internet of Things, it is about expensive gadgets that, to me, sound like Christmas commercials: marginally useful, designed by marketers for spoilt westerners to be consumed and then forgotten before the next Christmas shopping spree.

But this doesn’t have to be.

I have spent a little time scratching the surface and trying to determine whether there was something more to this Internet of Things, besides the shopping list. I came back convinced that, once you forget the marketing, this Internet of Things can become a revolution as big as the Personal Computer or the World Wide Web – at least if we let it fall into the right hands.

Say you are the owner or manager of a small business, say a restaurant. Chances are that you need a burglar alarm, either because you fear that you are going to be burglarised, or because your insurance requires one. You have two solutions. Either you go to a store and buy some off-the-shelf product, or you contract a company, draw up a list of requirements and pay for a custom setup. In either case, you are a consumer, and you are stuck with what you paid for. But needs change. Perhaps your insurance policy now requires you to have an alarm that can call the police automatically. Perhaps neighbours complained about the noise of the alarm and you need to turn it into a silent alarm that rings your cellphone. Perhaps the insurance has changed their policy and now requires you to take pictures of the burglary. Perhaps you have had work done and the small window in the bathroom is now large enough that it could be used to break in. Or water damage has destroyed one of your sensors and you need to replace it, but the model doesn’t exist anymore. Or you are tired of triggering the alarm when you take out the garbage and need to refine the policy. Or your product was linked to a subscription, to call the police on your behalf, but the provider has stopped this service. In any of these cases, you are probably stuck. Because your needs have made you a consumer, and you are served only as long as there is a market for your specific need.

Now, consider an alternate universe, in which you just need to walk or drive to the nearest store, buy a few off-the-shelf motion detectors for the price of a few dollars, and simply attach them in your restaurant, where you see fit. They use open standards, so you can install an app to get them to work together, or even better, use your cellphone to script them visually into doing what you need. Do you need to add one or ten, or replace them with different models, or add door-lock sensors? It’s just as easy. Do you need to add a camera? Well, place it and use your cellphone to add that camera to your script. Use your cellphone again and customise the effect, to call the police, or ring your cellphone, or deactivate a single alarm between 11pm and 11.30pm, because that’s when you take out the trash. And if your product is linked to a subscription, because it uses open standards, you can switch provider as needed. In this universe, the Internet of Things has put you in control – not a Cloud, not a silo – and drastically cut your costs and your dependencies.

A few months ago, Mozilla started pivoting from SmartPhones to the Web of Things – that’s the name we give to an Internet of Things done right, with open standards and you in charge, rather than silos and the Opaque Cloud™. I can make no promise that we are going to succeed, but I believe in the huge potential of this Web of Things.

By the way, it doesn’t stop at restaurants. The exact same open standards can help you guard against fires in your house or humidity in your server room. Or crowdsource flood detection in cities exposed to flash floods, or automate experiments in a physics lab. Or watch your heartbeat, or listen to your snores. Or determine which part of the village farm needs to be irrigated first, or which part of the sewers needs the most attention.

Some of these problems already have commercial solutions. But what about your next problem, the one that hasn’t attracted the attention of any company large enough to produce devices specifically for you?

Here is to the Web of Things. Let’s make sure that it falls into the right hands.

Designing the Firefox Performance Monitor (2): Monitoring Add-ons and Webpages
https://dutherenverseauborddelatable.wordpress.com/2015/11/06/designing-the-firefox-performance-monitor-2-monitoring-add-ons-and-webpages/
Fri, 06 Nov 2015

In part 1, we discussed the design of time measurement within the Firefox Performance Monitor. Despite the intuition, the Performance Monitor had neither the same set of objectives as the Gecko Profiler, nor the same set of constraints, and we ended up picking a design that was not a sampling profiler. In particular, instead of capturing performance data on stacks, the Monitor captures performance data on Groups, a notion that we have not discussed yet. In this part, we will focus on bridging the gap between our low-level instrumentation and the actual add-ons and webpages, as seen by the user.

I. JavaScript compartments

The main objective of the Performance Monitor is to let users and developers quickly find out which add-ons or webpages are slowing down Firefox. The main tool of the Performance Monitor is an instrumentation of SpiderMonkey, the JavaScript VM used by Firefox, to detect slowdowns caused by code taking too long to execute.

SpiderMonkey is a general-purpose VM, used in Firefox and Thunderbird, but also in GNOME, as a command-line scripting tool, as a test-suite runner, and more. Out of the box, SpiderMonkey knows nothing about webpages or add-ons.

However, SpiderMonkey defines a notion of JavaScript Compartment. Compartments were designed to provide safe and manageable isolation of code and memory between webpages, as well as between webpages and the parts of Firefox written in JavaScript. In terms of JavaScript, each compartment represents a global object (typically, in a webpage, the window object), all the code parsed as part of this object, and all the memory owned by either. In particular, if a compartment A defines an event listener and attaches it through some API offered by another compartment B, the listener is still considered part of A.

Compartments do not offer a one-to-one mapping to add-ons or webpages, but they are close. We just need to remember a few things:

some compartments belong neither to an add-on, nor to a webpage (e.g. the parts of Firefox written in JavaScript);

each add-on can define any number of modules and worker threads, each of which has its own compartment;

each webpage can define any number of frames and worker threads, each of which has its own compartment;

there are a number of ways to create compartments dynamically.

In addition, while Firefox is executing JS code, it is possible to find out whether this code belongs to a window, using xpc::CurrentWindowOrNull(JSContext*). This information is not available to SpiderMonkey, but it is available to the embedding of SpiderMonkey, i.e. Firefox itself. Using a different path, one can find out whether an object belongs to an add-on – and, in particular, whether the global object of a compartment belongs to an add-on – using JS::AddonIdOfObject(JSObject*).

Putting all of this together, in terms of JavaScript, both add-ons and web pages are essentially groups of compartments. We call these groups Performance Groups.

II. Maintaining Performance Groups

We extend SpiderMonkey with a few callbacks to let it grab Performance Groups from its embedding. Whenever SpiderMonkey creates a new Compartment, whether during the load of a page, during that of an add-on, or in more sophisticated dynamic cases, it requests the list of Performance Groups to which it belongs.

Attaching performance groups to a compartment during creation lets us ensure that we can update the performance cost of a compartment in constant-time, without complex indirections.

In the current implementation, a compartment typically belongs to a subset of the following groups:

its own group, which may be used to track performance of the single compartment;

a group shared by all compartments in the add-on on the current thread (typically, several modules);

a group shared by all compartments in the webpage on the current thread (typically, several iframes);

the “top group”, shared by all compartments in the VM, which may be used to track the performance of the entire JavaScript VM – while this has not always been the case, this currently maps to a single JavaScript thread.

Note that a compartment can theoretically belong to both a webpage and an add-on, although I haven’t encountered this situation yet.
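
To make this concrete, here is a minimal, self-contained C++ sketch of the data structure – a toy model with made-up names (PerformanceGroup, Compartment, Charge), not the actual Gecko classes:

#include <cstdint>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Toy stand-in for a per-group accumulator.
struct PerformanceGroup {
  std::string name;
  uint64_t totalCycles = 0;  // cost charged to this group so far
  explicit PerformanceGroup(std::string n) : name(std::move(n)) {}
};

// Toy stand-in for a JS compartment: it simply remembers the groups it
// belongs to, as provided by the embedding when the compartment is created.
struct Compartment {
  std::vector<std::shared_ptr<PerformanceGroup>> groups;

  // Charging a cost is constant-time: the vector holds at most a handful of
  // entries (own group, add-on group, webpage group, top group).
  void Charge(uint64_t cycles) {
    for (auto& group : groups) {
      group->totalCycles += cycles;
    }
  }
};

int main() {
  auto top = std::make_shared<PerformanceGroup>("top");
  auto addon = std::make_shared<PerformanceGroup>("add-on foo");
  auto page = std::make_shared<PerformanceGroup>("webpage example.com");

  // Two compartments of the same add-on (e.g. two modules) and one webpage frame.
  Compartment module1{{std::make_shared<PerformanceGroup>("own:module1"), addon, top}};
  Compartment module2{{std::make_shared<PerformanceGroup>("own:module2"), addon, top}};
  Compartment frame{{std::make_shared<PerformanceGroup>("own:frame"), page, top}};

  module1.Charge(1000);
  module2.Charge(500);
  frame.Charge(200);

  std::cout << addon->name << ": " << addon->totalCycles << "\n";  // 1500
  std::cout << page->name << ": " << page->totalCycles << "\n";    // 200
  std::cout << top->name << ": " << top->totalCycles << "\n";      // 1700
}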

As we saw in part 1 of this series, we start and stop a stopwatch to measure the duration of code execution whenever we enter/leave a Performance Group that does not have a stopwatch yet. Consequently, each JavaScript stack has a single “top” stopwatch, which serves both to measure the performance of the “top group” and the performance of whichever JS code lies on top of the stack.

For performance reasons, groups can be marked as active or inactive, where inactive groups do not need a stopwatch. In a general run of Firefox, all the “own groups”, each specific to a single compartment, are inactive, to avoid having to start/stop too many stopwatches at once and to avoid committing too many results at the end of the event, while all the other groups are active. Own groups can be activated individually when investigating a performance issue, or to help track the effect of a module.
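
Schematically (again a toy model, not the real code), entering a compartment boils down to starting a stopwatch for every group that is active and that is not already being measured during the current event:

#include <cstdint>
#include <string>
#include <vector>

struct PerformanceGroup {
  std::string name;
  bool isActive = true;             // "own groups" are typically inactive
  bool hasStopwatch = false;        // is the group being measured...
  uint64_t stopwatchIteration = 0;  // ...and for which event?

  bool HasStopwatch(uint64_t iteration) const {
    return hasStopwatch && stopwatchIteration == iteration;
  }
  void AttachStopwatch(uint64_t iteration) {
    hasStopwatch = true;
    stopwatchIteration = iteration;
  }
};

// Returns the groups for which a stopwatch was actually started, so that the
// caller can stop exactly those when execution leaves the compartment.
std::vector<PerformanceGroup*>
EnterCompartment(const std::vector<PerformanceGroup*>& groups, uint64_t iteration) {
  std::vector<PerformanceGroup*> started;
  for (PerformanceGroup* group : groups) {
    if (!group->isActive) continue;                // inactive: nothing to measure
    if (group->HasStopwatch(iteration)) continue;  // already measured higher up the stack
    group->AttachStopwatch(iteration);
    started.push_back(group);
  }
  return started;
}

int main() {
  PerformanceGroup own{"own:frame", /* isActive = */ false};
  PerformanceGroup page{"webpage"};
  PerformanceGroup top{"top"};
  std::vector<PerformanceGroup*> groups{&own, &page, &top};

  const uint64_t iteration = 42;  // increased whenever a new event is processed
  auto first = EnterCompartment(groups, iteration);   // starts "webpage" and "top"
  auto nested = EnterCompartment(groups, iteration);  // starts nothing
  return (first.size() == 2 && nested.empty()) ? 0 : 1;
}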

Note that we do not have to limit ourselves to the above kinds of groups. Indeed, we have plans to provide additional kinds of groups in the future.

In a different embedding, for instance an operating system, one could envision a completely different partition of performance groups, such as a group shared by all services acting on behalf of a single user.

III. Threads and processes

Nowadays, Firefox Nightly is a multi-threaded, multi-process application. Firefox Release has not reached that point yet, but should within a few versions. As defined above, performance groups cross neither threads nor processes.

As of this writing, we have not implemented collection of data from various threads, as the information is not as interesting as one might think. Indeed, in SpiderMonkey, a single non-main thread can only contain a single compartment, and it is difficult to impact the framerate with a background thread. Other tools dedicated to monitoring threads would therefore be better suited than the mechanism of Performance Groups.

On the other hand, activity across processes can cause user-visible jank, so we need to be able to track it. In particular, a single add-on can have a performance impact on several processes at once. For this reason, the Performance Monitor is executed in each process. Higher-level APIs provide two ways of accessing application-wide information: polling and events.

1/ Polling

The underlying implementation of this API is relatively straightforward:

in each process, the Performance Stats Service collects all the data at the end of each event, updating `durations` accordingly;

when `promiseSnapshot` is called, we broadcast to all processes, requesting the latest data collected by the Performance Stats Service;

if an add-on appears in several processes, we sum the resource impact and collapse the add-on data into a single item.
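
The collapse step in the last item above is plain map-merging; here is a minimal sketch, assuming for simplicity that a per-process snapshot is just a map from add-on id to total duration (the real data structures carry much more):

#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Toy per-process snapshot: total execution duration (in µs) per add-on.
using ProcessSnapshot = std::map<std::string, uint64_t>;

// If an add-on appears in several processes, sum the impact and collapse the
// data into a single item.
ProcessSnapshot CollapseSnapshots(const std::vector<ProcessSnapshot>& processes) {
  ProcessSnapshot collapsed;
  for (const auto& process : processes) {
    for (const auto& entry : process) {
      collapsed[entry.first] += entry.second;
    }
  }
  return collapsed;
}

int main() {
  std::vector<ProcessSnapshot> processes = {
      {{"addon-a", 1200}, {"addon-b", 300}},  // parent process
      {{"addon-a", 800}},                     // content process 1
      {{"addon-b", 50}, {"addon-c", 10}},     // content process 2
  };
  for (const auto& entry : CollapseSnapshots(processes)) {
    std::cout << entry.first << ": " << entry.second << " µs\n";
  }
  // addon-a: 2000 µs
  // addon-b: 350 µs
  // addon-c: 10 µs
}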

Polling is useful to get an overview of the resource usage between two instants for the entire system. At the time of this writing, however, it is somewhat oversized if the objective is simply to follow one add-on/webpage (as it always collects and processes data from all add-ons and webpages), or one process (as it always collects data from all processes). In addition, polling is not appropriate for generating performance alerts, as it needs to communicate with all processes, even if these processes are idle. This prevents the processes from sleeping, which is bad both for battery and for virtual memory usage.

2/ Events

For these reasons, we have developed a second, event-based API, which is expected to land on Firefox Nightly within a few days.

This same API can be used to watch individual add-ons or tabs, or to watch all add-ons or all tabs at once.

The implementation of this API is slightly more sophisticated, as we wish to avoid saturating API clients with alerts, in particular if some of these clients may themselves be causing jank:

in each process, the Performance Stats Service collects all the data at the end of each event;

if the execution duration of at least one group has exceeded some threshold (typically 64ms), we add it to the list of “performance alerts”, unless it is already in that list;

performance alerts are collected after ~100ms – the timer is active only if at least one collection is needed;

each performance alert for an add-on is then dispatched to any observer for this add-on and to the universal add-on observers (if any);

each performance alert for a window is then dispatched to any observer for this window and to the universal window observers (if any);

each child process buffers alerts, to minimise IPC cost, then propagates them to the parent process;

the parent process collects all alerts and dispatches them to observers.

There are a few subtleties, as we may wish to register observers for add-ons that have not started yet (or even that have not been installed or have been uninstalled), and similarly for windows that are not open yet, or that have already been closed. Other subtleties ensure that, once again, most operations are constant-time, with the exception of dispatching to observers, which is linear in the number of alerts (deduplicated) + observers.
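
The thresholding, deduplication and batching described above fit in a few lines; here is a simplified, self-contained sketch (hypothetical names, and the real implementation keys alerts on groups and observers rather than plain strings):

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

const uint64_t kJankThresholdMs = 64;  // the typical threshold mentioned above

// Toy alert buffer for one process: a group enters the list at most once per
// collection cycle, and the whole batch is flushed at once (e.g. every ~100ms).
class AlertBuffer {
 public:
  // Called at the end of each event with the duration measured for a group.
  void NoteGroupDuration(const std::string& group, uint64_t durationMs) {
    if (durationMs < kJankThresholdMs) {
      return;  // not janky enough to report
    }
    if (std::find(pending_.begin(), pending_.end(), group) != pending_.end()) {
      return;  // already in the list of pending alerts: deduplicate
    }
    pending_.push_back(group);
  }

  // Called by the collection timer: flush the buffered alerts in one batch,
  // e.g. to be propagated to the parent process in a single IPC message.
  std::vector<std::string> Flush() {
    std::vector<std::string> batch;
    batch.swap(pending_);
    return batch;
  }

 private:
  std::vector<std::string> pending_;
};

int main() {
  AlertBuffer buffer;
  buffer.NoteGroupDuration("addon-a", 120);  // janky: buffered
  buffer.NoteGroupDuration("addon-a", 90);   // janky again: deduplicated
  buffer.NoteGroupDuration("addon-b", 10);   // below threshold: ignored
  for (const auto& group : buffer.Flush()) {
    std::cout << "alert: " << group << "\n";  // prints only addon-a
  }
}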

Future versions may extend this to watching specific Firefox features, or watching specific processes, or the activity of the VM itself, and possibly more. We also plan to extend the API to improve the ability to detect whether the jank may actually be noticed by the user, or is somehow invisible, e.g. because the janky process was not visible at the time, or was neither interactive nor animated.

To be continued

At this stage, I have presented most of the important design of the Performance Monitor. In a followup post, I intend to explain some of the work we have done to weed out false positives and to present the user with actionable results.

Designing the Firefox Performance Stats Monitor, part 1: Measuring time without killing battery or performance
https://dutherenverseauborddelatable.wordpress.com/2015/10/27/designing-firefoxs-performance-stats-monitor-1/
Tue, 27 Oct 2015

For a few versions, Firefox Nightly has been monitoring the performance of add-ons, thanks to the Performance Stats API. While we are waiting for the green light to let it graduate to Firefox Aurora, investigating a few lingering false positives, and while v2 is approaching steadily, it is time for a brain dump on this toolbox and its design.

The initial objective of this monitor is to be able to flag both add-ons and webpages that cause noticeable slowdowns, so as to let users disable/close whatever is making their use of Firefox miserable. We also envision more advanced uses that could let us find out if features of webpages cause slowdowns on specific OS/hardware combinations.

I. Performance Stats vs. Gecko Profiler

Firefox has long had a built-in profiler, which can be used to obtain detailed performance information about a specific webpage (through the DevTools), or about the entirety of Firefox (through a dedicated add-on). If you have not tried the Gecko Profiler, you really should. However, the Performance Stats API does not build upon this profiler. Let’s see why.

The Gecko Profiler is a Sampling Profiler. This means that it spawns a thread, which wakes up regularly (every ~1 millisecond by default) to capture the stack of the main thread and store it to memory. Once profiling is over, it examines the symbols in the captured stacks, and extrapolates that if a symbol appears in n% of the samples, it must also take n% of the CPU time. Also, it consolidates the stacks into a tree for easier visualisation.

This technique has several big advantages:

it has a small and bounded impact on performance (1 stack capture per wakeup);

if samples are sufficiently numerous, it generally provides very precise information;

it provides nicely detailed output.

Also, the stack contains all the information needed to determine if the code being executed belongs to an add-on or a webpage, which means that the Gecko Profiler could theoretically be used as a back-end for the Performance Stats API.

Unfortunately, it also has a few major drawbacks:

waking up one thread every ~1ms to communicate with another thread is rather battery-hostile, so this is not something that the browser should do for long periods, especially in a multi-process browser;

recording the stacks can also become pretty expensive quickly;

the mechanism is designed to allow extracting data after the fact, not for real-time monitoring;

interaction between the threads can quickly become non-trivial.

It might be possible to partially mitigate point 1/ by ensuring that the sampling thread is stopped whenever the execution thread is not executing “interesting” code. The battery gains are not clear – especially since we do not have a good way to measure such costs – and, more importantly, this would make point 4/ much more complex and error-prone.

It is also likely that points 2/ and 3/ could be addressed by making sure that, for the sake of Performance Stats, we only extract simplified stacks containing solely information about the owner of the code, whether add-on, webpage or platform. To do this, we would need to hack relatively deep in both the profiler (to be able to extract and treat various kinds of data, and to be able to do it both after-the-fact and in real time), and the JavaScript VM (to annotate the stack with ownership information), as well as to introduce support code (to provide a mapping between stack ownership data and add-on identification/webpage identification). Interestingly, introducing these VM changes and the support code essentially amount to writing a form of Event Profiler, which would use the existing Sampling Profiler essentially as a form of high-resolution clock. Sadly, these changes would probably make the interaction between threads quite more complicated.

Finally, the only way to get rid of point 4/ would be to move entirely away from a Sampling Profiler to an Event Profiler. So let’s see how we could design Performance Stats as a form of Event-based Profiler.

II. Low-level API

One of the main interests of Sampling Profilers is that they interfere with the code considerably less than most other kinds of profilers. However, we have an advantage that most profilers do not have: we need much, much less detail. Indeed, knowing whether a statement or even a function takes time will not help us find out whether an add-on is slow. Rather, we are interested in knowing whether a Performance Group takes time – we’ll return later to a more precise definition of Group, but for the moment, let’s just assume that a Group is anything we want to monitor: an add-on, or a webpage including its frames, or a webpage without its frames, etc.

In other words, we are interested in two events:

the VM is starting to execute code that belongs to an interesting Group;

the VM has finished executing code that belongs to an interesting Group.

If we can just implement a quick and reasonably accurate Stopwatch to measure the duration between these events, we have all the information we need.

Note that the structure is purely stack-based. So we can already start designing our Stopwatch API along the following lines:

the time between “start” and “finish” may be extremely short (possibly just a few µs), so we need an extremely precise clock – recall that the usual Windows XP CPU usage clock has a measurement precision of 16ms, and sometimes even stops updating for much longer;

since we perform all measurements on the main thread, and since we may enter/exit thousands of Groups per frame, we need the clock to be extremely fast;

we need to decide what happens when Group A calls Group B, e.g. when an add-on calls the platform, or when the platform calls an add-on, or when something in the webpage causes an add-on to fire, etc.;

a few functions in Firefox need to have a blocking API but a non-blocking implementation (e.g. `alert()`, synchronous XMLHttpRequest, but also Firefox Sync), a situation that is implemented through nested event loops – we need to make sure that we do not display wildly inaccurate results in such cases.

II.1. Dealing with calls

Let’s start with issue 3/.

If Group A calls Group B, and if Group B performs a slow operation, who should be charged for this operation? We answer this question as a Sampling Profiler would: both are charged. The most elegant implementation is simply to keep the Stopwatch of Group A running while Group B is executing with its own Stopwatch.

If Group A calls Group B, and if Group B calls back Group A (possibly indirectly), who should be charged for this operation? Well, since Group A is already charged because it’s calling into Group B, we should only charge Group A once. In other words, since Group A already has a Stopwatch, we should not start a second Stopwatch for a nested call to Group A. As a nice side-effect, this improves our speed and memory usage by ensuring that we don’t need to record as many things.

If Group A starts a nested event loop, and Group B starts working in the nested loop, who should be charged? While spawning nested event loops is an evil practice, which may cause all sorts of issues, Group A is not causing any visible slowdown, no matter how long Group B may be working, and in practice has no idea that Group B is working. Therefore, we cannot charge anything to Group A. In other words, we cancel all ongoing measurements and remove all executing Stopwatches.

To avoid having to maintain long lists of current stopwatches in case we may need to invalidate them, and to avoid the linear cost of such invalidation, we alter the API to take into account an iteration number, which increases whenever we start processing an event:

class Stopwatch { // RAII
public:
  // Start the measure. Record the current iteration.
  explicit Stopwatch(JSRuntime*, Group*);
  // Stop the measure. If the current iteration is not the one that was
  // originally recorded, this Stopwatch is stale: don't bother with a final
  // measure, just drop everything.
  ~Stopwatch();
};

class Group {
public:
  // Does the group have a stopwatch for the current event?
  bool HasStopwatch(uint64_t iteration);
  // Attach a stopwatch for the current event.
  void AttachStopwatch(uint64_t iteration);
  // Detach the stopwatch. Does nothing if `iteration` is stale.
  void DetachStopwatch(uint64_t iteration);
};
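
To show how the iteration number defuses nested event loops, here is a runnable toy version of the same idea – simplified types, a fake clock, and none of the actual SpiderMonkey machinery:

#include <cstdint>
#include <iostream>
#include <string>

// Toy runtime state: a clock and an iteration counter, the latter increased
// whenever we start processing a new event (including nested event loops).
struct Runtime {
  uint64_t clock = 0;  // arbitrary units
  uint64_t iteration = 0;
};

struct Group {
  std::string name;
  uint64_t totalUnits = 0;  // accumulated cost
};

class Stopwatch {  // RAII: starts on construction, commits on destruction
 public:
  Stopwatch(Runtime& rt, Group& group)
      : rt_(rt), group_(group), iteration_(rt.iteration), start_(rt.clock) {}

  ~Stopwatch() {
    if (rt_.iteration != iteration_) {
      return;  // stale: a nested event loop ran in the meantime, drop everything
    }
    group_.totalUnits += rt_.clock - start_;
  }

 private:
  Runtime& rt_;
  Group& group_;
  uint64_t iteration_;
  uint64_t start_;
};

int main() {
  Runtime rt;
  Group addon{"add-on"};

  rt.iteration = 1;
  {
    Stopwatch sw(rt, addon);
    rt.clock = 500;
    rt.iteration = 2;  // the add-on spun a nested event loop (e.g. alert())
    rt.clock = 1000;
  }                    // sw is stale: nothing is charged to the add-on

  rt.iteration = 3;
  {
    Stopwatch sw(rt, addon);
    rt.clock += 10;
  }                    // committed: 10 units charged

  std::cout << addon.name << ": " << addon.totalUnits << " units\n";  // 10
}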

Note that, had we chosen to base our profiler on sampling, we would still have needed to find solutions to issues 3/ and 4/, and the implementation would certainly have been similar.

II.2. Dealing with clocks

Sampling profilers rely on a fixed-rate system timer, which elegantly solves both the problem of clock imprecision (issue 1/) and that of the speed of actually reading the clock (issue 2/). Unfortunately, due to the issues mentioned in I., this is a solution that we could not adopt. Hence the need to find another mechanism to implement the Stopwatch. A mechanism that is both fast and reasonably accurate.

We experimented with various clocks and were disappointed in various ways. After these experiments, we decided to go for a statistical design, in which each `Stopwatch` does not measure an actual time (which proved either slow or unreliable) but a number of CPU clock cycles, through the CPU’s built-in Time Stamp Counter and the RDTSC instruction. In terms of performance, this instruction is a dream: on modern architectures, reading the TimeStamp Counter takes a few dozen CPU cycles, i.e. less than 10ns.
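
For reference, reading the counter on x86-64 with GCC or Clang looks roughly like this (a sketch; production code also needs fallbacks for compilers and architectures where these intrinsics do not exist):

#include <cstdint>
#include <iostream>
#include <x86intrin.h>  // __rdtsc, __rdtscp

int main() {
  // Plain read of the TimeStamp Counter: very fast, but it tells us nothing
  // about which CPU/core we are running on.
  uint64_t t = __rdtsc();
  std::cout << "tsc: " << t << "\n";

  // RDTSCP additionally returns an identifier of the core we are running on
  // (on platforms that fill it in), which lets us detect that the thread
  // migrated between the start and the end of a measure and discard it.
  unsigned int coreStart = 0, coreEnd = 0;
  uint64_t start = __rdtscp(&coreStart);
  // ... code whose cost we want to measure ...
  uint64_t end = __rdtscp(&coreEnd);

  if (coreStart == coreEnd && end >= start) {
    std::cout << "elapsed cycles: " << (end - start) << "\n";
  } else {
    std::cout << "measure discarded (migration or counter reset)\n";
  }
}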

However, the value provided by RDTSC is neither a wall clock time, nor a CPU time. Indeed, RDTSC increases whenever code is executed on the CPU (or, on older architectures, on the Core), regardless of whether the code belongs to the current application, the kernel, or any other application.

In other words, we have the following issues:

we need to map between clock cycles and wall time;

if the computer has gone to sleep, the counter may be reset to 0;

if the process is moved between CPUs/cores, it may end up on a CPU core with an unsynchronized counter;

the mapping between clock cycles and walltime varies with the current frequency of the CPU;

other threads/processes using the same CPU will also increment the counter.

1/ Mapping clock cycles to wall time

If you recall, we have decided to use RDTSC instead of the usual OS-provided clocks because clocks were generally either not fine-grained enough to measure the time spent in a Group or too slow whenever we needed to switch between groups thousands of times per second. However, we can easily perform two (and exactly two) clock reads per event to measure the total CPU Time spent executing the event. This measure does not require a precision as high as we would have needed to measure the time spent in a group, nor does it need to be as fast. Once we have this measure, we can extrapolate: if n% of the cycles were spent executing Group A, then n% of the time spent in the event was spent executing Group A.

If n<0 or n>100 due to issues 2/ or 3/, we simply discard the measure.
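
Spelled out, the extrapolation and the discarding rule are just a few lines of arithmetic (a simplified sketch, with n expressed as a fraction rather than a percentage):

#include <cstdint>
#include <iostream>

// Estimate the wall-clock time (in µs) spent executing a group during one
// event, given the cycles charged to the group, the total cycles of the
// event, and the wall-clock duration of the event (measured with exactly two
// ordinary clock reads). Returns false if the measure is inconsistent.
bool EstimateGroupWallTime(int64_t groupCycles, int64_t eventCycles,
                           uint64_t eventWallTimeUs, uint64_t* resultUs) {
  if (eventCycles <= 0 || groupCycles < 0 || groupCycles > eventCycles) {
    return false;  // n < 0 or n > 100: discard the measure
  }
  double n = static_cast<double>(groupCycles) / static_cast<double>(eventCycles);
  *resultUs = static_cast<uint64_t>(n * eventWallTimeUs);
  return true;
}

int main() {
  // The event took 8000 µs of wall time and 20,000,000 cycles in total;
  // 5,000,000 of those cycles were charged to Group A.
  uint64_t estimate = 0;
  if (EstimateGroupWallTime(5000000, 20000000, 8000, &estimate)) {
    std::cout << "Group A: ~" << estimate << " µs\n";  // ~2000 µs
  }
  // A measure perturbed by a counter reset or a CPU migration may come out
  // negative (or larger than the total): it is simply discarded.
  if (!EstimateGroupWallTime(-1200, 20000000, 8000, &estimate)) {
    std::cout << "inconsistent measure discarded\n";
  }
}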

2/ Handling the case in which the counter is reset to 0

Now that we have mapped clock cycles and wall time, we can revisit the issue of a computer going to sleep. Since the counter is reset to 0, we have n<0, hence the measure is discarded. Theoretically, if the computer goes to sleep many times in a row within the execution of the event loop, we could end up in an exotic case of n≥0 and n≤100, but the error would remain bounded to the total CPU cost of this event loop. In addition, I am unconvinced that a computer can physically go to sleep quite that fast.

3/ Handling the case in which threads move between CPUs/cores

In theory, this is the scariest issue. Fortunately, modern Operating Systems try to avoid this whenever possible, as this decreases overall cache performance, so we can hope that this will happen seldom.

Also, on platforms that support it (i.e. not MacOS/BSD), we only measure the number of cycles if we start and end execution of a group on the same CPU/core. While there is a small window (a few cycles) during which the thread can be migrated without us noticing, we expect that this will happen rarely enough that this won’t affect the statistics meaningfully.

On other platforms, but on modern architectures, if a thread jumps between cores during the execution of a Group, this has no effect, as cores of a single CPU have synchronized counters.

Even in the worst case scenario (MacOS/BSD, jump to a different CPU), statistics are with us. Assuming that the probability of jumping between CPUs/cores during a given (real) cycle is constant, and that the distribution of differences between clocks is even, the error introduced into the number of cycles reported by a measure should follow a Gaussian distribution, with groups with longer execution having a larger amplitude than groups with shorter execution. Since we discard measures that result in a negative number of cycles, this distribution is actually skewed towards over-estimating the number of cycles of groups that already have many cycles and under-estimating the number of cycles of groups that already have fewer cycles.

Recall that this error is bounded by the total CPU time spent in the event. So if the total time spent in the event is low, the information will not contribute to us detecting a slow add-on or webpage, because the add-on or webpage is not slow in the first place. If the total time spent in the event is high, i.e. if something is slow in the first place, this will tend to make the slowest offenders appear even slower, something which we accept without difficulty.

4/ Variable mapping between clock cycles and walltime

Once again, this is an issue that cannot be solved in theory but that works nicely in practice. For one thing, recent architectures actually make sure that the mapping remains constant.

Moreover, even on older architectures, we suspect that the mapping between clock cycles and wall time will change rarely on a given thread during the execution of a single event. This may, of course, be less true in case of I/O on the thread.

Finally, even in the worst case, if we assume that the mapping between clock cycles and wall time is evenly distributed, we expect that this will eventually balance out between Groups.

5/ Cycles increasing with any system activity

Assuming that, within an iteration of the event loop, this happens uniformly over time, this will skew towards over-estimating the number of cycles of groups that already have many cycles and under-estimating the number of cycles that already have fewer cycles.

Again, for the same reason as issue 3/, this effect is bounded, and this is a bias that we accept without difficulty.

Once we have decided on the algorithm above, the design of the Stopwatch becomes mostly natural.

To be continued

In followup blog entries, I plan to discuss Performance Groups, measuring the duration of cross-process blocking calls, collating data from processes, determining when an add-on is slow, and more. Stay tuned.

What have I done since last July?
https://dutherenverseauborddelatable.wordpress.com/2015/07/16/what-have-i-done-since-last-july/
Thu, 16 Jul 2015

School year 2014–2015 is ending. It’s time for a brief report.

Session Restore

As I announced last year, I am mostly inactive on Session Restore these days. However, I am quite happy to have landed « Bug 883609 – Make Backups Useful ». This has considerably improved the resilience of Session Restore against a variety of accidents.

Besides this, I have mostly been reviewing and mentoring a few contributors.

For Q3, I will try to help Allasso (one of our contributors) land bug 906076, which we hope can considerably improve startup speed for users with many tabs or tab groups.

Performance Monitoring

My biggest code contribution for the past 6 months is Performance Monitoring. Firefox Nightly now has a module (PerformanceStats.jsm/nsIPerformanceStats) dedicated to monitoring the performance of Firefox, webpages and add-ons in real-time. While there are a number of improvements I wish to land, this is already powerful enough to implement about:performance (a top-like utility for Firefox), the slow add-on watcher (which has progressively grown into something actually useful), and slow add-on Telemetry.

I am currently working on making measurements faster and decreasing the number of false alerts even further, as well as ensuring that everything works nicely with e10s. Oh, and I have a new UX design for about:performance which I hope you will like.

Also, I have passed on the data to the AMO team so that they can start publishing the performance of add-ons to their authors.

Async Tooling

I have been less active on Async Tooling recently, in large part because most of the tooling we needed has landed, and the rest is now in the hands of the DevTools team. My main contribution has been DOM Promise Uncaught Error monitoring, which was both one of the blockers to port our code from Promise.jsm to DOM Promise, and a primitive necessary for the DevTools team.

My second contribution was modifying our (then) reference implementation of Promise, all our test suites and quite a number of individual tests to handle the case of uncaught asynchronous errors. I have had relatively little feedback on this, but it helps me with almost every single patch I write.

Other than that, I have landed the PromiseWorker, which is designed to make using ChromeWorkers simpler, and I have both landed and mentored a number of improvements to Sqlite.jsm, in particular error-reporting and clean async shutdown, as well as maintenance fixes to OS.File.

I do not have specific plans for the near future of Async Tooling.

Places

One year ago, I joined the effort to overhaul Places, our implementation of bookmarks, history, keywords, etc. Unfortunately, this effort is far from over, as all the participants (starting with my reviewer) keep being preempted for higher-priority work. However, we did manage to land a number of improvements. I contributed History.jsm, a (not yet complete) reimplementation of the History API, with a nicer API and off-main-thread database access.

I also refactored Places to handle its asynchronous shutdown sequence.

In the near future, I plan to finish and land a non-blocking reimplementation of the Forget button (and Sanitize dialog). I do not have other short-term plans for Places.

Shutdown

Shutdown has kept me quite busy these past 12 months. On the one hand, AsyncShutdown has been improved, made easier to use and debug post-mortem, and extended to support async shutdown of C++-based features. On the other hand, I have implemented a Dashboard to track AsyncShutdown timeouts and trends, which has saved my life a few times – in particular, when I fixed the crashes caused by Avast, and when I helped pinpoint and fix the topcrashers for Firefox 33.

Also, I landed the Shutdown Terminator, which turns shutdown hangs into actionable crashes, and also lets us track the duration of successful shutdowns (hint: if it takes more than 10 seconds, you’re as good as crashed).

I do not have short-term plans for Shutdown, although if I find time, I might try and make it crash a bit faster.

Community

Community also ate plenty of my time. My main involvement was mentoring a number of bugs (I lost track of the number, probably 10–20) and welcoming new potential contributors both on #introduction and through the contribute form. As a note, while mentoring is almost always a pleasure, welcoming new contributors is a huge time sink with very little return on effort. More on this in another blog post.

I also dedicated time and effort to teaching open source to university students, with mixed results. Students who had signed up by themselves gave me great feedback, while students who had been assigned to the course without having any choice did not prove as pleasant to work with. While I hope to repeat the experience eventually, it will not be with the same university.

The French Firefox OS launch, going to present (and actually sell) the first Firefox OS phones to entirely non-technical crowds, was also an interesting experience. I don’t know if it was useful in the end, but it was certainly fun.

Finally, while I haven’t found a good place to mention it, this year will be remembered also for Je Suis Charlie, both the initial terrorist attacks and the entirely predictable law on mass surveillance that just passed in France.

In the near future, I plan to continue mentoring bugs, but I will be less active on the contribute form – rather, I am lending a hand to the Participation team’s effort to replace the contribute form with something much better.

Living in a Go Faster, post-XUL world
https://dutherenverseauborddelatable.wordpress.com/2015/07/13/living-in-a-go-faster-post-xul-world/
Mon, 13 Jul 2015

A long time ago, XUL was an extraordinary component of Firefox. It meant that front-end and add-on developers could deliver user interfaces in a single, mostly declarative language, and see them adapt automatically to the look and feel of each OS. Ten years later, XUL has become a burden: most of its features have been ported to HTML5, often with slightly different semantics – which makes Gecko needlessly complex – and nobody understands XUL – which makes contributions harder than they should be. So we have reached a stage at which we basically agree that, in a not-too-distant future, Firefox should not rely upon XUL anymore.

But wait, it’s not the only thing that needs to change. We also want to support piecewise updates for Firefox. We want Firefox to start fast. We want the UI to remain responsive. We want to keep supporting add-ons. Oh, and we want contributors, too. And we don’t want to lose internationalization.

Mmmh… and perhaps we don’t want to rebuild Firefox from bare Gecko.

All of the above are worthy objectives, but getting them all will require some careful thought.

So I’d like to put together a list of all our requirements, against which we could evaluate potential solutions, re-architectures, etc. for the front-end:

[1] I have heard this claim contested. Some apparently suggest that we should actually break Firefox and base all our XUL-less, Go Faster initiatives on a clean slate from e.g. Browser.html or Servo. If you wish to defend this, please step forward

Does this sound like a correct list for all of you?

Re-dreaming Firefox (3): Identities
https://dutherenverseauborddelatable.wordpress.com/2015/06/05/re-dreaming-firefox-3-identities/
Fri, 05 Jun 2015

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Identities. Let me emphasise that the features described in this blog post do not exist.

Sacha has a Facebook account, plus two Gmail accounts and one Microsoft Live identity. Sacha is also present on Twitter, both with a personal account, and as the current owner of his company’s account. Sacha also has an account on his bank, another one on Paypal, and one on Amazon. With any browser other than Firefox, Sacha’s online life would be a bit complicated.

For one thing, Sacha is logged in to several of these accounts most of the time. Sacha has been told that this makes him easy to track, not just when he’s on Facebook, but also when he visits blogs, or news sites, or even shopping sites – but really, who has time to log off from every account? With any other browser, or with an older version of Firefox, Sacha would have no online privacy. Fortunately, Sacha is using Firefox, which has grown pretty good at handling identities.

Indeed, Firefox knows the difference between Facebook’s (and Google’s, etc.) main sites, on which Sacha may need to be logged in, and the tracking devices installed on other sites through ads, or through the Like button (and Google +1, etc.), which are pure nuisances. So, even when Sacha is logged in to Facebook, his identity remains hidden from the tracking devices. To put it differently, Sacha is logged in to Facebook only in Facebook tabs, and only while he’s using Facebook in these tabs. And since Sacha has two GMail accounts, logging in to one account doesn’t interact with the other. This feature is good not only for privacy, but also for security, as it considerably mitigates the danger of Cross-Site Scripting attacks. Conversely, if a third-party website uses Facebook as an identity provider, Firefox can detect this automatically and handle the login.
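
To make this a bit more concrete, here is a minimal sketch (in TypeScript, purely for illustration – this is not how Gecko actually stores cookies, and every name below is invented) of the kind of isolation this implies: cookies partitioned by the top-level site the user is actually visiting, so a session established on Facebook is invisible to Facebook widgets embedded elsewhere.

```typescript
// Hypothetical sketch: cookies keyed by (top-level site, cookie host), so a
// facebook.com session cookie set while browsing Facebook is not sent to a
// facebook.com "Like" button embedded in a news site.
class PartitionedCookieJar {
  private jars = new Map<string, Map<string, string>>();

  private key(topLevelSite: string, cookieHost: string): string {
    return `${topLevelSite}|${cookieHost}`;
  }

  setCookie(topLevelSite: string, cookieHost: string, name: string, value: string): void {
    const k = this.key(topLevelSite, cookieHost);
    if (!this.jars.has(k)) {
      this.jars.set(k, new Map());
    }
    this.jars.get(k)!.set(name, value);
  }

  // Only the partition belonging to the tab's top-level site is consulted.
  getCookies(topLevelSite: string, cookieHost: string): Map<string, string> {
    return this.jars.get(this.key(topLevelSite, cookieHost)) ?? new Map();
  }
}

const jar = new PartitionedCookieJar();
// A session cookie set in a Facebook tab…
jar.setCookie("facebook.com", "facebook.com", "session", "abc123");
// …is invisible to the Facebook widget embedded in a news site.
console.log(jar.getCookies("news.example", "facebook.com").size); // 0
```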

Privacy doesn’t stop there. Firefox has a database of Terms of Service for most websites. Whenever Firefox detects that Sacha is entering his e-mail address, or his phone number, or his physical address, Firefox can tell Sacha if he’s signing up for spam or telemarketing – and take measures to avoid it. If Sacha is signing up for spam, Firefox can automatically create an e-mail alias specific to this website, valid either for a few days, or forever. If Sacha has a provider of phone aliases, Firefox can similarly create a phone alias specific to the website, valid either for a few days, or forever. Similarly, if Sacha’s bank offers temporary credit card numbers, Firefox can automatically create a single-transaction credit card number.

Firefox offers an Identity Panel (if we release this feature, it will, of course, be called Persona) that lets Sacha find out exactly which site is linked to which identity, and grant or revoke authorizations to log in automatically when visiting such sites, as well as log in or out from a single place. In effect, this behaves as an Internet-wide Single Sign-On across identities. With a little help, Firefox can even be taught about lesser-known identity providers, such as Sacha’s company’s Single Sign-On, and handle them from the same panel. That Identity Panel also keeps track of e-mail aliases, and can be used to revoke spam- and telemarketing-inducing aliases in just two clicks.

Also, security has improved a lot. Firefox can automatically generate strong passwords – it even has a database of sites which accept passphrases, or are restricted to 8 characters, etc. Firefox can also detect when Sacha uses the same password on two unrelated sites, and explain to him why this is a bad idea. Since Firefox can safely and securely share passwords with other devices and back them up into the cloud, or to encrypted QR Codes that Sacha can safely keep in his wallet, Sacha doesn’t even need to see passwords. And since Firefox handles the passwords, it can download, every day, a list of websites that are known to have been hacked, and use it to change passwords semi-automatically if necessary.
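
For the curious, here is a hedged sketch of what the password side could look like: a small database of per-site password rules, a generator that respects them, and a reuse detector. The rule format, the site names and the numbers are all invented for this post, and a real implementation would draw its randomness from crypto.getRandomValues rather than Math.random.

```typescript
// Hypothetical per-site password rules, as a tiny illustrative database.
interface PasswordRules {
  maxLength?: number;        // some sites are restricted to, say, 8 characters
  allowPassphrases: boolean; // some sites are known to accept long passphrases
}

const siteRules = new Map<string, PasswordRules>([
  ["bank.example", { maxLength: 8, allowPassphrases: false }],
  ["blog.example", { allowPassphrases: true }],
]);

const ALPHABET =
  "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#%&*";

// Generate a password as strong as the site's (known) rules permit.
// Note: Math.random is NOT cryptographically secure; this is a sketch only.
function generatePassword(site: string): string {
  const rules = siteRules.get(site) ?? { allowPassphrases: true };
  const target = rules.allowPassphrases ? 32 : 16;
  const length = Math.min(rules.maxLength ?? target, target);
  let password = "";
  for (let i = 0; i < length; i++) {
    password += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return password;
}

// Detect the same password reused on two unrelated sites.
function findReusedPasswords(vault: Map<string, string>): string[][] {
  const sitesByPassword = new Map<string, string[]>();
  for (const [site, password] of vault) {
    const sites = sitesByPassword.get(password) ?? [];
    sites.push(site);
    sitesByPassword.set(password, sites);
  }
  return [...sitesByPassword.values()].filter((sites) => sites.length > 1);
}
```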

Security doesn’t stop there. The Identity Panel knows not only about passwords and identity providers, but also about the kind of information that Sacha has provided to each website. This includes Sacha’s e-mail address and physical address, Sacha’s phone number, and also Sacha’s credit card number. So when Firefox finds out that a website to which Sacha subscribes has been hacked, Sacha is informed immediately of the risks. This extends to less material information, such as Sacha’s personal blog of vacation pictures, which Sacha needs to check immediately to find out whether it has been defaced.

What now?

I would like to browse with this Firefox. Would you?

]]>https://dutherenverseauborddelatable.wordpress.com/2015/06/05/re-dreaming-firefox-3-identities/feed/8yoricRe-dreaming Firefox (2): Beyond Bookmarkshttps://dutherenverseauborddelatable.wordpress.com/2015/06/03/re-dreaming-firefox/
https://dutherenverseauborddelatable.wordpress.com/2015/06/03/re-dreaming-firefox/#commentsWed, 03 Jun 2015 23:48:26 +0000http://dutherenverseauborddelatable.wordpress.com/?p=1504]]>Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Beyond Bookmarks. Let me emphasize that the features described in this blog post do not exist.

« Look, here is an interesting website. I want to read that content (or watch that video, or play that game), just not immediately. » So, what am I going to do to remember that I wish to read it later:

Bookmark it?

Save it to disk?

Pocket it?

Remember that I saw it and find it in my history later?

Remember that I saw it and find it in my Awesome Bar later?

Hope that it shows up in the New Tab page?

Open a tab?

Install the Open Web App for that website?

Open a tab and put that tab in a tab group?

Wow, that’s 9 ways of fulfilling the same task. Having so many ways of doing the same thing is not a very good sign, so let’s see if we can find a way to unify a few of these abstractions into something more generic and powerful.

Bookmarking is saving is reading later

What are the differences between Bookmarking and Saving?

Bookmarking keeps a URL, while Saving keeps a snapshot.

Bookmarks can be used only from within the browser, while Saved files can be used only from without.

Merging these two features is actually quite easy. Let’s introduce a new button, the Awesome Bookmark, which will serve as a replacement for both the Bookmark button and Save As.

Clicking on the Awesome Bookmarks icon saves both the URL to the internal database and a snapshot to the Downloads directory (also accessible through the Downloads menu).

Opening an Awesome Bookmark, whether from the browser or from the OS, leads the user (by default) to the live version of the page, or (if the computer is not connected) to the snapshot.

Whenever the user visits a page that has an Awesome Bookmark, the Awesome Bookmark icon changes color to offer the user the ability to switch between the live version and the snapshot.

The same page can be Awesome Bookmarked several times, offering the ability to switch between several snapshots.

By switching to Awesome Bookmarks, we have merged Saving, Bookmarking and the Read it Later list of Pocket. Actually, since Firefox already offers Sync and Social Sharing, we have just merged all the features of Pocket.
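
If it helps to picture the merge, here is a rough sketch of what a single Awesome Bookmark record could look like, and how opening one could pick between the live page and the snapshot. The field names, the paths and the online check are assumptions made for this post, not a description of an existing Firefox feature.

```typescript
// One record carries both what a bookmark keeps today (the URL) and what
// Save As keeps today (a snapshot in the Downloads directory).
interface AwesomeBookmark {
  url: string;
  snapshotPath: string;
  savedAt: Date;
}

// By default go to the live page; fall back to the snapshot when offline.
function openAwesomeBookmark(bookmark: AwesomeBookmark, online: boolean): string {
  return online ? bookmark.url : `file://${bookmark.snapshotPath}`;
}

const article: AwesomeBookmark = {
  url: "https://example.com/long-read",
  snapshotPath: "/home/sacha/Downloads/long-read.html",
  savedAt: new Date(),
};

console.log(openAwesomeBookmark(article, false));
// file:///home/sacha/Downloads/long-read.html
```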

So we have collapsed three items from our list into one.

Bookmarks are history are tiles

What are the differences between Bookmarks and History?

History is recorded automatically, while Bookmarks need to be recorded manually.

History is eventually forgotten, while Bookmarks are not.

Bookmarks can be put in folders, History cannot.

Let’s keep most of that behaviour, but without segregating the views. Let us introduce a new view, the Awesome Pages, which will serve as a replacement for both the Bookmarks Menu and the History Menu.

This view shows a grid of thumbnails of visited pages, iOS/Android/Firefox OS style, ordered as follows:

first the pages visited most often during the past few hours (with the option of scrolling for all the pages visited during the past few hours);

then the Awesome Bookmarks and Awesome Bookmark folders (because, after all, the user has decided to mark these pages), with the option of scrolling for more favourites;

then, if the user has opted in for suggestions, a set of Awesome Suggested Tiles (with the option of scrolling for more suggestions);

then the pages visited the most often today (with the option of scrolling for the other pages visited today);

then the pages visited most often this week (with the option of scrolling for the other pages visited this week);

…

By default, clicking on an Awesome Bookmark (or history entry, or suggested page, etc.) for a page that is already opened switches to that page. Non-bookmarked pages can be turned into Awesome Bookmarks trivially, by starring them or putting them into folders.

An Awesome Bar at the top of this Awesome Pages lets users quickly search for pages and folders. This is the same Awesome Bar that is already at the top of tabs in today’s Firefox, just with the full-screen Awesome Pages replacing the current drop-down menu.

Oh, and by the way, this Awesome Pages is actually our new New Tab page.
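
For illustration only, here is a hand-wavy sketch of how that grid could be assembled from history and Awesome Bookmarks, in the order described above. The types, the six-hour window for “the past few hours” and the deduplication rule are all invented.

```typescript
interface PageEntry {
  url: string;
  visits: number;
  lastVisit: Date;
  isAwesomeBookmark: boolean;
}

const byVisits = (a: PageEntry, b: PageEntry) => b.visits - a.visits;

function buildAwesomePages(
  history: PageEntry[],
  suggestions: PageEntry[],
  optedInToSuggestions: boolean,
): PageEntry[] {
  const now = Date.now();
  const hoursAgo = (e: PageEntry) => (now - e.lastVisit.getTime()) / 3_600_000;

  // "The past few hours" is arbitrarily taken to mean six hours here.
  const lastFewHours = history.filter((e) => hoursAgo(e) <= 6).sort(byVisits);
  const awesomeBookmarks = history.filter((e) => e.isAwesomeBookmark);
  const today = history.filter((e) => hoursAgo(e) <= 24).sort(byVisits);
  const thisWeek = history.filter((e) => hoursAgo(e) <= 24 * 7).sort(byVisits);

  const ordered = [
    ...lastFewHours,
    ...awesomeBookmarks,
    ...(optedInToSuggestions ? suggestions : []),
    ...today,
    ...thisWeek,
  ];

  // Show each page once, in the highest-ranked section where it appears.
  const seen = new Set<string>();
  return ordered.filter((e) => {
    if (seen.has(e.url)) return false;
    seen.add(e.url);
    return true;
  });
}
```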

By switching to the Awesome Pages, we have merged:

the history menu;

the bookmarks menu;

the new tab page;

the awesome bar.

Bookmarks are tabs are apps

What are the differences between Bookmarks and Tabs?

Clicking on a bookmark opens the page by loading it, while clicking on a tab opens the page by switching to it.

That’s not much of a difference, is it?

So let’s make a few more changes to our UX:

Awesome Bookmarks record the state of the page, in the style of Session Restore, so clicking on an Awesome Bookmark actually restores that page, whenever possible, instead of reloading it;

The ribbon on top of the browser, which traditionally contains tabs, is actually a simplified display of the Awesome Pages, which shows, by default, the pages most often visited during the past few hours;

Whether clicking on a ribbon item switches to a page or restores it is an implementation detail, which depends on whether the browser has decided that unloading a page was a good idea for memory/CPU/battery usage;

Replace Panorama with the Awesome Page, without further change.

So, with a little imagination (and, I’ll admit, a little hand-waving), we have merged tabs and bookmarks. Interestingly, we have done that by moving to an Apps-like model, in which whether an application is loaded or not is for the OS to decide, rather than the user.
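
To make the hand-waving slightly more explicit, here is a sketch of the click behaviour as a tiny state machine: switch if the page is currently loaded, restore its saved state if the browser unloaded it, load it fresh otherwise. As everywhere in this post, the names and states are illustrative, not an actual Firefox API.

```typescript
type PageState =
  | { kind: "loaded"; tabId: number }
  | { kind: "unloaded"; sessionState: string } // serialized, Session Restore style
  | { kind: "never-visited" };

type ClickAction =
  | { action: "switch"; tabId: number }
  | { action: "restore"; sessionState: string }
  | { action: "load"; url: string };

// Whether a click switches, restores or loads is the browser's decision,
// based on whether it chose to keep the page in memory.
function onRibbonClick(url: string, state: PageState): ClickAction {
  switch (state.kind) {
    case "loaded":
      return { action: "switch", tabId: state.tabId };
    case "unloaded":
      // The page was unloaded to save memory/CPU/battery; bring it back as it was.
      return { action: "restore", sessionState: state.sessionState };
    case "never-visited":
      return { action: "load", url };
  }
}
```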

By the way, what are the differences between Tabs and Open Web Apps?

Apps can be killed by the OS, while Tabs cannot.

Apps are visible to the OS, while Tabs appear in the browser only.

Well, if we decide that Apps are just Bookmarks – since Bookmarks were made visible to the OS in the first section, and since Bookmarks have just been merged with Tabs, which have just been made killable by the browser – we have our Apps model.

We have just removed three more items from our list.

What’s left?

We are down to one higher-level abstraction (the Awesome Bookmark) and one view of it (the Awesome Page). Of course, if this is eventually released, we are certainly going to call both of them Persona.

This new Firefox is quite different from today’s Firefox. Actually, it looks much more like Firefox OS, which may be a good thing. While I realize that many of the details are handwavy (e.g. how do you open the same page twice simultaneously?), I believe that someone smarter than me can do great things with this preliminary exploration.