Mozilla Hacks – the Web developer blog (https://hacks.mozilla.org)

LPCNet: DSP-Boosted Neural Speech Synthesis

LPCNet is a new project out of Mozilla’s Emerging Technologies group — an efficient neural speech synthesiser with reduced complexity over some of its predecessors. Neural speech synthesis models like WaveNet have already demonstrated impressive speech synthesis quality, but their computational complexity has made them hard to use in real-time, especially on phones. In a similar fashion to the RNNoise project, our solution with LPCNet is to use a combination of deep learning and digital signal processing (DSP) techniques.

Figure 1: Screenshot of a demo player that demonstrates the quality of LPCNet-synthesized speech.

LPCNet can help improve the quality of text-to-speech (TTS), low bitrate speech coding, time stretching, and more. You can hear the difference for yourself in our LPCNet demo page, where LPCNet and WaveNet speech are generated with the same complexity. The demo also explains the motivations for LPCNet, shows what it can achieve, and explores its possible applications.

You can find an in-depth explanation of the algorithm used in LPCNet in this paper.

Decentralizing Social Interactions with ActivityPub

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren't affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla's mission to keep the web open and accessible for all.

Social websites first got us talking and sharing with our friends online, then turned into echo-chamber content silos, and finally emerged in their mature state as surveillance capitalist juggernauts, powered by the effluent of our daily lives online. The tail isn't just wagging the dog, it's strangling it. However, there just might be a way forward that puts users back in the driver's seat: a new set of specifications for decentralizing social activity on the web. Today you'll get a helping hand into that world from Darius Kazemi, renowned bot-smith and Mozilla Fellow.

– Dietrich Ayala

Introducing ActivityPub

Hi, I’m Darius Kazemi. I’m a Mozilla Fellow and decentralized web enthusiast. In the last year I’ve become really excited about ActivityPub, a W3C standard protocol that describes ways for different social network sites (loosely defined) to talk to and interact with one another. You might remember the heyday of RSS, when a user could subscribe to almost any content feed in the world from any number of independently developed feed readers. ActivityPub aims to do for social network interactions what RSS did for content.

Architecture

ActivityPub enables a decentralized social web, where a network of servers interact with each other on behalf of individual users/clients, very much like email operates at a macro level. On an ActivityPub compliant server, individual user accounts have an inbox and an outbox that accept HTTP GET and POST requests via API endpoints. They usually live somewhere like https://social.example/users/dariusk/inbox and https://social.example/users/dariusk/outbox, but they can really be anywhere as long as they are at a valid URI. Individual users are represented by an Actor object, which is just a JSON-LD file that gives information like username and where the inbox and outbox are located so you can talk to the Actor. Every message sent on behalf of an Actor has the link to the Actor’s JSON-LD file so anyone receiving the message can look up all the relevant information and start interacting with them.
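For illustration, a stripped-down Actor document for the user above might look something like this (these property names come from the Activity Streams vocabulary; a real Actor also includes things like a public key, which are omitted here):

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Person",
  "id": "https://social.example/users/dariusk",
  "preferredUsername": "dariusk",
  "inbox": "https://social.example/users/dariusk/inbox",
  "outbox": "https://social.example/users/dariusk/outbox"
}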

A simple server to send ActivityPub messages

One of the most popular social network sites that uses ActivityPub is Mastodon, an open source community-owned and ad-free alternative to social network services like Twitter. But Mastodon is a huge, complex project and not the best introduction to the ActivityPub spec as a developer. So I started with a tutorial written by Eugen Rochko (the principal developer of Mastodon) and created a partial reference implementation written in Node.js and Express.js called the Express ActivityPub server.

The purpose of the software is to serve as the simplest possible starting point for developers who want to build their own ActivityPub applications. I picked what seemed to me like the smallest useful subset of ActivityPub features: the ability to publish an ActivityPub-compliant feed of posts that any ActivityPub user can subscribe to. Specifically, this is useful for non-interactive bots that publish feeds of information.

To get started with the Express ActivityPub server in a local development environment, install Node.js, clone the repository, and run npm install inside it to pull in the dependencies.
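In practice that usually boils down to a few commands like the following (the repository URL here is an assumption based on the project name above):

git clone https://github.com/dariusk/express-activitypub.git
cd express-activitypub
npm install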

In order to truly test the server it needs to be associated with a valid, https-enabled domain or subdomain. For local testing I like to use ngrok to test things out on one of the temporary domains that they provide. First, install ngrok using their instructions (you have to sign in but there is a free tier that is sufficient for local debugging). Next run:

ngrok http 3000

This will show a screen on your console that includes a domain like abcdef.ngrok.io. Make sure to note that down, as it will serve as your temporary test domain as long as ngrok is running. Keep this running in its own terminal session while you do everything else.

Then go to config.json in the express-activitypub directory and update the DOMAIN field to the abcdef.ngrok.io domain that ngrok gave you (don't include the http://), and update USER to some username and PASS to some password. These are the administrative credentials required for creating new users on the server. When testing locally via ngrok you don't need to specify the PRIVKEY_PATH or CERT_PATH.
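Assuming config.json uses exactly the fields named above, it ends up looking roughly like this (all values are placeholders):

{
  "DOMAIN": "abcdef.ngrok.io",
  "USER": "pickAUsername",
  "PASS": "pickAStrongPassword",
  "PRIVKEY_PATH": "",
  "CERT_PATH": ""
}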

Next run your server:

node index.js

Go to https://abcdef.ngrok.io/admin (again, replace the subdomain) and you should see an admin page. You can create an account here by giving it a name and then entering the admin user/pass when prompted. Try making an account called “test” — it will give you a long API key that you should save somewhere. Then open an ActivityPub client like Mastodon’s web interface and try following @test@abcdef.ngrok.io. It should find the account and let you follow!

Back on the admin page, you’ll notice another section called “Send message to followers” — fill this in with “test” as the username, the hex key you just noted down as the password, and then enter a message. It should look like this:

Screenshot of form

Hit “Send Message” and then check your ActivityPub client. In the home timeline you should see your message from your account, like so:

Post in Mastodon mobile web view

And that’s it! It’s not incredibly useful on its own but you can fork the repository and use it as a starting point to build your own services. For example, I used it as the foundation of an RSS-to-ActivityPub conversion service that I wrote (source code here). There are of course other services that could be built using this. For example, imagine a replacement for something like MailChimp where you can subscribe to updates for your favorite band, but instead of getting an email, everyone who follows an ActivityPub account will get a direct message with album release info. Also it’s worth browsing the predefined Activity Streams Vocabulary to see what kind of events the spec supports by default.

Learn More

There is a whole lot more to ActivityPub than what I’ve laid out here, and unfortunately there aren’t a lot of learning resources beyond the specs themselves and conversations on various issue trackers.

If you’d like to know more about ActivityPub, you can of course read the ActivityPub spec. It’s important to know that while the ActivityPub spec lays out how messages are sent and received, the different types of messages are specified in the Activity Streams 2.0 spec, and the actual formatting of the messages that are sent is specified in the Activity Streams Vocabulary spec. It’s important to familiarize yourself with all three.

You can join the Social Web Incubator Community Group, a W3C Community Group, to participate in discussions around ActivityPub and other social web tech standards. They have monthly meetings that you can dial into that are listed on the wiki page.

And of course if you’re on an ActivityPub social network service like Mastodon or Pleroma, the #ActivityPub hashtag there is always active.

The Power of Web Components

Web Components comprises a set of standards that enable user-defined HTML elements. These elements can go in all the same places as traditional HTML. Despite the long standardization process, the emerging promise of Web Components puts more power in the hands of developers and creators.

Background

Ever since the first animated DHTML cursor trails and “Site of the Week” badges graced the web, re-usable code has been a temptation for web developers. And ever since those heady days, integrating third-party UI into your site has been, well, a semi-brittle headache.

Using other people’s clever code has required buckets of boilerplate JavaScript or CSS conflicts involving the dreaded !important. Things are a bit better in the world of React and other modern frameworks, but it’s a bit of a tall order to require the overhead of a full framework just to re-use a widget. HTML5 introduced a few new elements like <video> and <input type="date">, which added some much-needed common UI widgets to the web platform. But adding new standard elements for every sufficiently common web UI pattern isn’t a sustainable option.

In response, a handful of web standards were drafted. Each standard has some independent utility, but when used together, they enable something that was previously impossible to do natively, and tremendously difficult to fake: the capability to create user-defined HTML elements that can go in all the same places as traditional HTML. These elements can even hide their inner complexity from the site where they are used, much like a rich form control or video player.

The standards evolve

As a group, the standards are known as Web Components. In the year 2018 it’s easy to think of Web Components as old news. Indeed, early versions of the standards have been around in one form or another in Chrome since 2014, and polyfills have been clumsily filling the gaps in other browsers.

After some quality time in the standards committees, the Web Components standards were refined from their early form, now called version 0, to a more mature version 1 that is seeing implementation across all the major browsers. Firefox 63 added support for two of the tent pole standards, Custom Elements and Shadow DOM, so I figured it’s time to take a closer look at how you can play HTML inventor!

Given that Web Components have been around for a while, there are lots of other resources available. This article is meant as a primer, introducing a range of new capabilities and resources. If you’d like to go deeper (and you definitely should), you’d do well to read more about Web Components on MDN Web Docs and the Google Developers site.

Defining your own working HTML elements requires new powers the browser didn’t previously give developers. I’ll be calling out these previously-impossible bits in each section, as well as what other newer web technologies they draw upon.

The <template> element: a refresher

This first element isn’t quite as new as the others, as the need it addresses predates the Web Components effort. Sometimes you just need to store some HTML. Maybe it’s some markup you’ll need to duplicate multiple times, maybe it’s some UI you don’t need to create quite yet. The <template> element takes HTML and parses it without adding the parsed DOM to the current document.

Where does that parsed HTML go, if not to the document? It’s added to a “document fragment”, which is best understood as a thin wrapper that contains a portion of an HTML document. Document fragments dissolve when appended to other DOM, so they’re useful for holding a bunch of elements you want later, in a container you don’t need to keep.

“Well okay, now I have some DOM in a dissolving container, how do I use it when I need it?”

You could simply insert the template’s document fragment into the current document:
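// A minimal sketch: move the fragment's children straight into the document
document.body.appendChild(template.content);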

This works just fine, except you just dissolved the document fragment! If you run the above code twice you’ll get an error, as the second time template.content is gone. Instead, we want to make a copy of the fragment prior to inserting it:

document.body.appendChild(template.content.cloneNode(true));

The cloneNode method does what it sounds like, and it takes an argument specifying whether to copy just the node itself or include all its children.

The template tag is ideal for any situation where you need to repeat an HTML structure. It particularly comes in handy when defining the inner structure of a component, and thus <template> is inducted into the Web Components club.

New Powers:

An element that holds HTML but doesn’t add it to the current document.

Review Topics:

Custom Elements

Custom Elements is the poster child for the Web Components standards. It does what it says on the tin – allowing developers to define their own custom HTML elements. Making this possible and pleasant builds fairly heavily on top of ES6’s class syntax, where the v0 syntax was much more cumbersome. If you’re familiar with classes in JavaScript or other languages, you can define classes that inherit from or “extend” other classes:

class MyClass extends BaseClass {
// class definition goes here
}

Well, what if we were to try this?

class MyElement extends HTMLElement {}

Until recently that would have been an error. Browsers didn’t allow the built-in HTMLElement class or its subclasses to be extended. Custom Elements unlocks this restriction.

The browser knows that a <p> tag maps to the HTMLParagraphElement class, but how does it know what tag to map to a custom element class? In addition to extending built-in classes, there’s now a “Custom Element Registry” for declaring this mapping:

customElements.define('my-element', MyElement);

Now every <my-element> on the page is associated with a new instance of MyElement. The constructor for MyElement will be run whenever the browser parses a <my-element> tag.

What’s with that dash in the tag name? Well, the standards bodies want the freedom to create new HTML tags in the future, and that means that developers can’t just go creating an <h7> or <vr> tag. To avoid future conflicts, all custom elements must contain a dash, and standards bodies promise to never make a new HTML tag containing a dash. Collision avoided!

In addition to having your constructor called whenever your custom element is created, there are a number of additional “lifecycle” methods that are called on a custom element at various moments:

connectedCallback is called when an element is appended to a document. This can happen more than once, e.g. if the element is moved or removed and re-added.

disconnectedCallback is the counterpart to connectedCallback.

attributeChangedCallback fires when one of the attributes listed in the element's static observedAttributes property is modified.
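Custom elements can also extend built-in elements such as <button> rather than the generic HTMLElement. As a rough sketch (the class and tag names here are made up for illustration), we subclass HTMLButtonElement and pass an extends option when registering:

class HeyThereButton extends HTMLButtonElement {
  connectedCallback() {
    // Greet using the button's name attribute once it's in the document
    this.textContent = `Hey there, ${this.getAttribute('name')}!`;
  }
}
// The third argument tells the registry which built-in tag is being extended
customElements.define('hey-there', HeyThereButton, { extends: 'button' });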

Because we’re extending an existing tag, we actually use the existing tag instead of our custom tag name. We use the new special is attribute to tell the browser what kind of button we’re using:

<button is="hey-there" name="World">Howdy</button>

It may seem a bit clunky at first, but assistive technologies and other scripts wouldn’t know our custom element is a kind of button without this special markup.

From here, all the classic web widget techniques apply. We can set up a bunch of event handlers, add custom styling, and even stamp out an inner structure using <template>. People can use your custom element alongside their own code, via HTML templating, DOM calls, or even new-fangled frameworks, several of which support custom tag names in their virtual DOM implementations. Because the interface is the standard DOM interface, Custom Elements allows for truly portable widgets.

New Powers

The ability to extend the built-in ‘HTMLElement’ class and its subclasses

A custom element registry, available via customElements.define()

Special lifecycle callbacks for detecting element creation, insertion to the DOM, attribute changes, and more.

Review Topics

Shadow DOM

We’ve made our friendly custom element, we’ve even thrown on some snazzy styling. We want to use it on all our sites, and share the code with others so they can use it on theirs. How do we prevent the nightmare of conflicts when our customized <button> element runs face-first into the CSS of other sites? Shadow DOM provides a solution.

The Shadow DOM standard introduces the concept of a shadow root. Superficially, a shadow root has standard DOM methods, and can be appended to as if it was any other DOM node. Shadow roots shine in that their contents don’t appear to the document that contains their parent node:
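// Illustrative sketch: attach a shadow root to a <div> and tuck a <b> inside it
const div = document.createElement('div');
const shadowRoot = div.attachShadow({ mode: 'open' });

const b = document.createElement('b');
b.textContent = 'I live in the shadows';
shadowRoot.appendChild(b);
document.body.appendChild(div);

// The <b> is rendered, but ordinary DOM traversal can't see it
console.log(div.children.length);         // 0
console.log(document.querySelector('b')); // null (assuming no other <b> on the page)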

In the above example, the <div> “contains” the <b> and the <b> is rendered to the page, but the traditional DOM methods can’t see it. Not only that, but the styles of the containing page can’t see it either. This means that styles outside of a shadow root can’t get in, and styles inside the shadow root don’t leak out. This boundary is not meant to be a security feature, as another script on the page could detect the shadow root’s creation, and if you have a reference to a shadow root, you can query it directly for its contents.

The contents of a shadow root are styled by adding a <style> (or <link>) to the root:
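// Continuing the sketch above: styles added inside the root stay inside the root
shadowRoot.innerHTML = `
  <style>
    b { color: rebeccapurple; font-size: 2em; }
  </style>
  <b>I live in the shadows</b>
`;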

Whew, we could really use a <template> right about now! Either way, the <b> will be affected by the stylesheet in the root, but any outer styles matching a <b> tag will not.

What if a custom element has non-shadow content? We can make them play nicely together using a new special element called <slot>:

<template>
Hello, <slot></slot>!
</template>

If that template is attached to a shadow root, then the following markup:

<hey-there>World</hey-there>

Will render as:

Hello, World!

This ability to composite shadow roots with non-shadow content allows you to make rich custom elements with complex inner structures that look simple to the outer environment. Slots are more powerful than I’ve shown here, with multiple slots and named slots and special CSS pseudo-classes to target slotted content. You’ll have to read more!

New Powers:

A quasi-obscured DOM structure called a “shadow root”

DOM APIs for creating and accessing shadow roots

Scoped styles within shadow roots

New CSS pseudo-classes for working with shadow roots and scoped styles

Putting it all together

Let’s make a fancy button! We’ll be creative and call the element <fancy-button>. What makes it fancy? It will have a custom style, and it will also allow us to supply an icon and make that look snazzy as well. We’d like our button’s styles to stay fancy no matter what site you use them on, so we’re going to encapsulate the styles in a shadow root.

You can see the completed custom element in the interactive example below. Be sure to take a look at both the JS definition of the custom element and the HTML <template> for the style and structure of the element.
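A condensed sketch of that element could look something like the following (the markup, styles, and slot names below are illustrative, not the exact code from the interactive example):

<template id="fancy-button-template">
  <style>
    button { border: 2px solid rebeccapurple; border-radius: 1em; padding: 0.5em 1em; }
    .icon { margin-right: 0.5em; }
  </style>
  <button>
    <span class="icon"><slot name="icon"></slot></span>
    <slot></slot>
  </button>
</template>

<script>
  class FancyButton extends HTMLElement {
    constructor() {
      super();
      // Stamp the template into a shadow root so outside CSS can't interfere
      const template = document.getElementById('fancy-button-template');
      this.attachShadow({ mode: 'open' })
          .appendChild(template.content.cloneNode(true));
    }
  }
  customElements.define('fancy-button', FancyButton);
</script>

<!-- Light-DOM content is distributed into the slots -->
<fancy-button><span slot="icon">★</span>Click me</fancy-button>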

Conclusion

The standards that make up Web Components are built on the philosophy that by providing multiple low-level capabilities, people will combine them in ways that nobody expected at the time the specs were written. Custom Elements have already been used to make it easier to build VR content on the web, spawned multiple UI toolkits, and much more. Despite the long standardization process, the emerging promise of Web Components puts more power in the hands of creators. Now that the technology is available in browsers, the future of Web Components is in your hands. What will you build?

New & Experimental Web Design Tools: Feedback Requested

Our goal: To build empowering new tools that integrate smartly with your modern web design workflow.

We’re currently hard at work on a comprehensive Flexbox Inspector as well as CSS change-tracking. Early versions of each of these can be tried out in Firefox Nightly. (The Changes panel is hidden behind a flag in about:config: devtools.inspector.changes.enabled)

Please share your input

We're just getting started, and now we want to learn more about you. Tell us about your biggest CSS and web design issues in the first-ever Design Tools survey! We want to hear from both web developers and designers, and not just Firefox users—Chrome, Safari, Edge, and IE users are all greatly encouraged to submit their thoughts!

In early 2019, we’ll post an update with the results in order to share our data with the greater community and continue our experiment in open design.

Private by Design: How we built Firefox Sync

That shopping rabbit hole you started on your laptop this morning? Pick up where you left off on your phone tonight. That dinner recipe you discovered at lunchtime? Open it on your kitchen tablet, instantly. Connect your personal devices, securely. – Firefox Sync

Firefox Sync lets you share your bookmarks, browsing history, passwords and other browser data between different devices, and send tabs from one device to another. It’s a feature that millions of our users take advantage of to streamline their lives and how they interact with the web.

But on an Internet where sharing your data with a provider is the norm, we think it’s important to highlight the privacy aspects of Firefox Sync.

Firefox Sync by default protects all your synced data so Mozilla can’t read it. We built Sync this way because we put user privacy first. In this post, we take a closer look at some of the technical design choices we made and why.

When building a browser and implementing a sync service, we think it’s important to look at what one might call ‘Total Cost of Ownership’. Not just what users get from a feature, but what they give up in exchange for ease of use.

We believe that by making the right choices to protect your privacy, we’ve also lowered the barrier to trying out Sync. When you sign up and choose a strong passphrase, your data is protected from both attackers and from Mozilla, so you can try out Sync without worry. Give it a shot, it’s right up there in the menu bar!

Why Firefox Sync is safe

Encryption allows one to protect data so that it is entirely unreadable without the key used to encrypt it. The math behind encryption is strong, has been tested for decades, and every government in the world uses it to protect its most valuable secrets.

The hard part of encryption is that key. What key do you encrypt with, where does it come from, where is it stored, and how does it move between places? Lots of cloud providers claim they encrypt your data, and they do. But they also have the key! While the encryption is not meaningless, it is a small measure, and does not protect the data against the most concerning threats.

The encryption key is the essential element. The service provider must never receive it – even temporarily – and must never know it. When you sign into your Firefox Account, you enter a username and passphrase, which are sent to the server. How is it that we can claim to never know your encryption key if that’s all you ever provide us? The difference is in how we handle your passphrase.

A typical login flow for an internet service is to send your username and passphrase up to the server, where the server hashes the passphrase, compares it to a stored hash, and, if they match, sends you your data. (Hashing refers to converting a password into an unreadable string of characters that cannot feasibly be reversed.)

The crux of the difference in how we designed Firefox Accounts, and Firefox Sync (our underlying syncing service), is that you never send us your passphrase. We transform your passphrase on your computer into two different, unrelated values. With one value, you cannot derive the other0. We send an authentication token, derived from your passphrase, to the server as the password-equivalent. And the encryption key derived from your passphrase never leaves your computer.

Interested in the technical details? We use 1000 rounds of PBKDF2 to derive your passphrase into the authentication token1. On the server, we additionally hash this token with scrypt (parameters N=65536, r=8, p=1)2 to make sure our database of authentication tokens is even more difficult to crack.

We derive your passphrase into an encryption key using the same 1000 rounds of PBKDF2. It is domain-separated from your authentication token by using HKDF with separate info values. We use this key to unwrap an encryption key (which you generated during setup and which we never see unwrapped), and that encryption key is used to protect your data. We use the key to encrypt your data using AES-256 in CBC mode, protected with an HMAC3.
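As a rough sketch of that client-side split, the derivation could look something like this using Node's built-in crypto primitives (the salt and info strings are placeholders, not the values Firefox Accounts actually uses; see the full protocol specification for the real details):

const crypto = require('crypto');

function deriveCredentials(email, passphrase) {
  // Stretch the passphrase with 1000 rounds of PBKDF2, as described above
  const stretched = crypto.pbkdf2Sync(passphrase, `placeholder-salt:${email}`, 1000, 32, 'sha256');

  // HKDF with different info values domain-separates two unrelated keys:
  // knowing one tells you nothing useful about the other.
  const authToken = Buffer.from(crypto.hkdfSync('sha256', stretched, '', 'authToken', 32));
  const unwrapKey = Buffer.from(crypto.hkdfSync('sha256', stretched, '', 'unwrapKey', 32));

  return {
    authToken, // sent to the server as the password-equivalent (and scrypt-hashed there)
    unwrapKey, // never leaves this machine; used to unwrap the actual encryption key
  };
}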

This cryptographic design is solid – but the constants need to be updated. One thousand rounds of PBKDF can be improved, and we intend to do so in the future (Bug 1320222). This token is only ever sent over a HTTPS connection (with preloaded HPKP pins) and is not stored, so when we initially developed this and needed to support low-power, low-resources devices, a trade-off was made. AES-CBC + HMAC is acceptable – it would be nice to upgrade this to an authenticated mode sometime in the future.

Other approaches

This isn’t the only approach to building a browser sync feature. There are at least three other options:

Option 1: Share your data with the browser maker

In this approach, the browser maker is able to read your data, and use it to provide services to you. For example, when you sync your browser history in Chrome it will automatically go into your Web & App Activity unless you’ve changed the default settings. As Google Chrome Help explains, “Your activity may be used to personalize your experience on other Google products, like Search or ads. For example, you may see a news story recommended in your feed based on your Chrome history.”4

Option 2: Use a separate password for sign-in and encryption

We developed Firefox Sync to be as easy to use as possible, so we designed it from the ground up to derive an authentication token and an encryption key – and we never see the passphrase or the encryption key. One cannot safely derive an encryption key from a passphrase if the passphrase is sent to the server.

One could, however, add a second passphrase that is never sent to the server, and encrypt the data using that. Chrome provides this as a non-default option5. You can sign in to sync with your Google Account credentials; but you choose a separate passphrase to encrypt your data. It’s imperative you choose a separate passphrase though.

All-in-all, we don’t care for the design that requires a second passphrase. This approach is confusing to users. It’s very easy to choose the same (or similar) passphrase and negate the security of the design. It’s hard to determine which is more confusing: to require a second passphrase or to make it optional! Making it optional means it will be used very rarely. We don’t believe users should have to opt-in to privacy.

Option 3: Manual key synchronization

The key (pun intended) to auditing a cryptographic design is to ask about the key: “Where does it come from? Where does it go?” With the Firefox Sync design, you enter a passphrase of your choosing and it is used to derive an encryption key that never leaves your computer.

Another option for Sync is to remove user choice, and provide a passphrase for you (that never leaves your computer). This passphrase would be secure and unguessable – which is an advantage, but it would be near-impossible to remember – which is a disadvantage.

When you want to add a new device to sync to, you’d need your existing device nearby in order to manually read and type the passphrase into the new device. (You could also scan a QR code if your new device has a camera).

Other Browsers

Overall, Sync works the way it does because we feel it’s the best design choice. Options 1 and 2 don’t provide thorough user privacy protections by default. Option 3 results in lower user adoption and thus reduces the number of people we can help (more on this below).

As noted above, Chrome implements Option 1 by default, which means unless you change the settings before you enable sync, Google will see all of your browsing history and other data, and use it to market services to you. Chrome also implements Option 2 as an opt-in feature.

Opera and Vivaldi follow Chrome’s lead, implementing Option 1 by default and Option 2 as an opt-in feature. Update: Vivaldi actually prompts you for a separate password by default (Option 2), and allows you to opt-out and use your login password (Option 1).

Brave, also a privacy-focused browser, has implemented Option 3. And, in fact, Firefox also implemented a form of Option 3 in its original Sync Protocol, but we changed our design in April 2014 (Firefox 29) in response to user feedback6. For example, our original design (and Brave’s current design) makes it much harder to regain access to your data if you lose your device or it gets stolen. Passwords or passphrases make that experience substantially easier for the average user, and significantly increased Sync adoption by users.

Brave's sync protocol has some interesting wrinkles7. One distinct minus is that you can't change your passphrase if it is ever stolen by malware. Another interesting wrinkle is that Brave does not keep track of how many or what types of devices you have. This is a nuanced security trade-off: having less information about the user is always desirable… The downside is that Brave can't allow you to detect when a new device begins receiving your sync data or allow you to deauthorize it. We respect Brave's decision. In Firefox, however, we have chosen to provide this additional security feature for users (at the cost of knowing more about their devices).

Conclusion

We designed Firefox Sync to protect your data – by default – so Mozilla can’t read it. We built it this way – despite trade-offs that make development and offering features more difficult – because we put user privacy first. At Mozilla, this priority is a core part of our mission to “ensure the Internet is a global public resource… where individuals can shape their own experience and are empowered, safe and independent.”

0 It is possible to use one to guess the other, but only if you choose a weak password.

1 You can find more details in the full protocol specification or by following the code starting at this point. There are a few details we have omitted to simplify this blog post, including the difference between kA and kB keys, and application-specific subkeys.

6 One of the original engineers of Sync has written two blog posts about the transition to the new sync protocol, and why we did it. If you're interested in the usability aspects of cryptography, we highly recommend you read them to see what we learned.

Performance and Hosting Moves: MDN Changelog for October 2018

We shipped some changes designed to improve MDN's page load time. The effects were not as significant as we'd hoped.

Shipped performance improvements

Our sidebars, like the Related Topics sidebar on <summary>, use a “mozToggler” JavaScript method to implement open and collapsed sections. This uses jQueryUI’s toggle effect, and is applied dynamically at load time. Tim Kadlec replaced it with the <details> element (KumaScript PR 789 and Kuma PR 4957), which semantically models open and collapsed sections. The <details> element is supported by most current browsers, with the notable exception of Microsoft Edge, which is supported with a polyfill.
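As a simplified sketch (not MDN's exact markup), a collapsible sidebar section built with <details> looks something like this:

<details open>
  <summary>Related Topics</summary>
  <ol>
    <li><a href="/en-US/docs/Web/HTML/Element/details">&lt;details&gt;</a></li>
    <li><a href="/en-US/docs/Web/HTML/Element/summary">&lt;summary&gt;</a></li>
  </ol>
</details>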

We expected an improvement of at least 150 ms, based on bench tests.

The <details> update shipped October 4th, and the 31,000 pages with sidebars were regenerated to apply the change.

A second change was intended to reduce the use of Web Fonts, which must be downloaded and can cause the page to be repainted. Some browsers, such as Firefox Focus, block web fonts by default for performance and to save bandwidth.

One strategy is to eliminate the web font entirely. We replaced OpenSans with the built-in Verdana as the body font in September (PR 4967), and then again with Arial on October 22 (PR 5023). We’re also replacing Font Awesome, implemented with a web font, with inline SVG (PR 4969 and PR 5053). We expect to complete the SVG work in November.

A second strategy is to reduce the size of the web font. The custom Zilla font, introduced with the June 2017 redesign, was reduced to standard English characters, cutting the file sizes in half on October 10 (PR 5024).

These changes have had an impact on total download size and rendering time, and we’re seeing improvements in our synthetic metrics. However, there has been no significant change in page load as measured for MDN users. In November, we’ll try some more radical experiments to learn more about the components of page load time.

SpeedCurve Synthetic measurements show steady improvement, but not yet on target.

Moved MDN to MozIT

Ryan Johnson, Ed Lim, and Dave Parfitt switched production traffic from the Marketing to the IT servers on October 29th. The site was placed in read-only mode, so all the content was available during the transition. There were some small hiccups, mostly around running out of API budget for Amazon’s Elastic File System (EFS), but we handled the issues within the maintenance window.

In the weeks leading up to the cut over, the team tested deployments, updated documentation, and checked data transfer processes. They created a list of tasks and assignments, detailed the process for the migration, and planned the cleanup work after the cut over. The team’s attention to detail and continuous communication made this a smooth transition for MDN’s users, with no downtime or bugs.

The MozIT cluster is very similar to the previous MozMEAO cluster. The technical overview from the October 10, 2017 launch is still a decent guide to how MDN is deployed.

There are a handful of changes, most of which MDN users shouldn’t notice. We’re now hosting images in Docker Hub rather than quay.io. The MozMEAO cluster ran Kubernetes 1.7, and the new MozIT cluster runs 1.9. This may be responsible for more reliable DNS lookups, avoiding occasional issues when connecting to the database or other AWS services.

In November, we’ll continue monitoring the new servers, and shut down the redundant services in the MozMEAO account. We’ll then re-evaluate our plans from the beginning of the year, and prioritize the next infrastructure updates. The top of the list is reliable acceptance tests and deploys across multiple AWS zones.

Planned for November

We’ll continue on performance experiments in November, such as removing Font Awesome and looking for new ways to lower page load time. We’ll continue ongoing projects, such as migrating and updating browser compatibility data and shipping more HTML examples like the one on <input>.

Into the Depths: The Technical Details Behind AV1

AV1, the next-generation royalty-free video codec from the Alliance for Open Media, leapfrogs the performance of VP9 and HEVC. The AV1 format is and will always be royalty-free with a permissive FOSS license. In this video presentation, Mozilla's Nathan Egge dives deep into the technical details of the codec and its evolution.

Since AOMedia officially cemented the AV1 v1.0.0 specification earlier this year, we've seen increasing interest from the broadcasting industry. Starting with the NAB Show (National Association of Broadcasters) in Las Vegas earlier this year, and gaining momentum through IBC (International Broadcasting Convention) in Amsterdam, and more recently the NAB East Show in New York, AV1 keeps picking up steam. Each of these industry events attracts over 100,000 media professionals. Mozilla attended these shows to demonstrate AV1 playback in Firefox, and showed that AV1 is well on its way to being broadly adopted in web browsers.

Continuing to advocate for AV1 in the broadcast space, Nathan Egge from Mozilla dives into the depths of AV1 at the Mile High Video Workshop in Denver, sponsored by Comcast.

AV1 leapfrogs the performance of VP9 and HEVC, making it a next-generation codec. The AV1 format is and will always be royalty-free with a permissive FOSS license.

Cross-language Performance Profile Exploration with speedscope

speedscope is a fast, interactive, web-based viewer for large performance profiles, inspired by the performance panel of Chrome developer tools and by Brendan Gregg's FlameGraphs. Jamie Wong built speedscope to explore and interact with large performance profiles from a variety of profilers for a variety of programming languages. speedscope runs totally in-browser, and does not send any profiling data to any servers.

The goal of speedscope is to provide a 60fps way of interactively exploring large performance profiles from a variety of profilers for a variety of programming languages. It runs totally in-browser, and does not send any profiling data to any servers. Because it runs totally in-browser, it should work in Firefox and Chrome on Mac, Windows, and Linux. It can be downloaded to run offline, either from npm, or just as a totally standalone zip file.

In doing performance work across many language environments at Figma, I noticed that every community tends to create its own tools for visualizing performance issues. With speedscope, I hoped to de-duplicate those efforts. To meet this goal, speedscope supports import of profiles from a growing list of profilers:

speedscope also has a stable documented file format, making it appropriate as a tool to target for visualization of totally custom profiles. This allows new profilers to support import into speedscope without needing to modify speedscope’s code at all (though contributions are welcome!). This is how I added support for visualizing rbspy profiles: rbspy#161. Firefox & Chrome both have capable profile visualizers, but the file formats they use change frequently.

Also unlike other similar tools, speedscope is designed to make it easy to host inside your own infrastructure. This allows you to integrate speedscope to view backend performance profiles with a single click. At Figma, we have a ruby backend, so I made an opinionated fork of rack-mini-profiler to do exactly this. If you support access to performance profiles across domains, you can even load them directly into https://www.speedscope.app via a #profileUrl=… hash parameter.

What can it do?

speedscope is broken down into three primary views: Time Order, Left Heavy, and Sandwich.

Time Order

In the “Time Order” view (the default), call stacks are ordered left-to-right in the same order as they occurred in the input file, which is usually the chronological order they were recorded in. This view is most helpful for understanding the behavior of an application over time, e.g. “first the data is fetched from the database, then the data is prepared for serialization, then the data is serialized to JSON”.

The horizontal axis represents the “weight” of each stack (most commonly CPU time), and the vertical axis shows you the stack active at the time of the sample. If you click on one of the frames, you’ll be able to see summary statistics about it.

Left Heavy

In the “Left Heavy” view, identical stacks are grouped together, regardless of whether they were recorded sequentially. Then, the stacks are sorted so that the heaviest stack for each parent is on the left — hence “left heavy”. This view is useful for understanding where all the time is going in situations where there are hundreds or thousands of function calls interleaved between other call stacks.

Sandwich

The “Sandwich” view is a table view in which you can find a list of all functions and their associated times. You can sort by self time or total time.

It’s called the “Sandwich” view because if you select one of the rows in the table, you can see flamegraphs for all the callers and callees of the selected row.

Testing Privacy-Preserving Telemetry with Prio

Building a browser is hard; building a good browser inevitably requires gathering a lot of data to make sure that things that work in the lab work in the field. But as soon as you gather data, you have to make sure you protect user privacy. We're always looking at ways to improve the security of our data collection, and lately we've been experimenting with a really cool technique called Prio.

Currently, all the major browsers do more or less the same thing for data reporting: the browser collects a bunch of statistics and sends it back to the browser maker for analysis; in Firefox, we call this system Telemetry. The challenge with building a Telemetry system is that data is sensitive. In order to ensure that we are safeguarding our users’ privacy, Mozilla has built a set of transparent data practices which determine what we can collect and under what conditions. For particularly sensitive categories of data, we ask users to opt-in to the collection and ensure that the data is handled securely.

We understand that this requires users to trust Mozilla — that we won’t misuse their data, that the data won’t be exposed in a breach, and that Mozilla won’t be compelled to provide access to the data by another party. In the future, we would prefer users to not have to just trust Mozilla, especially when we’re collecting data that is sufficiently sensitive to require an opt-in. This is why we’re exploring new ways to preserve your data privacy and security without compromising access to the information we need to build the best products and services.

Obviously, not collecting any data at all is best for privacy, but it also blinds us to real issues in the field, which makes it hard for us to build features — including privacy features — which we know our users want. This is a common problem and there has been quite a bit of work on what’s called “privacy-preserving data collection”, including systems developed by Google (RAPPOR, PROCHLO) and Apple. Each of these systems has advantages and disadvantages that are beyond the scope of this post, but suffice to say that this is an area of very active work.

In recent months, we’ve been experimenting with one such system: Prio, developed by Professor Dan Boneh and PhD student Henry Corrigan-Gibbs of Stanford University’s Computer Science department. The basic insight behind Prio is that for most purposes we don’t need to collect individual data, but rather only aggregates. Prio, which is in the public domain, lets Mozilla collect aggregate data without collecting anyone’s individual data. It does this by having the browser break the data up into two “shares”, each of which is sent to a different server. Individually the shares don’t tell you anything about the data being reported, but together they do. Each server collects the shares from all the clients and adds them up. If the servers then take their sum values and put them together, the result is the sum of all the users’ values. As long as one server is honest, then there’s no way to recover the individual values.
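To make that concrete, here is a toy sketch of additive secret sharing, the intuition behind those shares (this is a simplification for illustration, not Prio's actual construction, which also includes proofs that each submission is well-formed):

// All arithmetic is modulo a public prime; a single share is just random noise
const PRIME = 2147483647n; // an illustrative modulus, not the real parameter

function makeShares(value) {
  // NOTE: a real implementation uses a cryptographically secure RNG
  const shareA = BigInt(Math.floor(Math.random() * Number(PRIME)));
  const shareB = ((BigInt(value) - shareA) % PRIME + PRIME) % PRIME;
  return { shareA, shareB }; // shareA goes to server A, shareB to server B
}

// Each server independently sums the shares it received from every client...
const sumShares = (shares) => shares.reduce((acc, s) => (acc + s) % PRIME, 0n);

// ...and combining the two totals recovers the sum of everyone's values
// (assuming the true sum is smaller than the modulus), without either server
// ever seeing an individual value.
const combine = (totalA, totalB) => (totalA + totalB) % PRIME;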

We’ve been working with the Stanford team to test Prio in Firefox. In the first stage of the experiment we want to make sure that it works efficiently at scale and produces the expected results. This is something that should just work, but as we mentioned before, building systems is a lot harder in practice than theory. In order to test our integration, we’re doing a simple deployment where we take nonsensitive data that we already collect using Telemetry and collect it via Prio as well. This lets us prove out the technology without interfering with our existing, careful handling of sensitive data. This part is in Nightly now and reporting back already. In order to process the data, we’ve integrated support for Prio into our Spark-based telemetry analysis system, so it automatically talks to the Prio servers to compute the aggregates.

Our initial results are promising: we’ve been running Prio in Nightly for 6 weeks, gathered over 3 million data values, and after fixing a small glitch where we were getting bogus results, our Prio results match our Telemetry results perfectly. Processing time and bandwidth also look good. Over the next few months we’ll be doing further testing to verify that Prio continues to produce the right answers and works well with our existing data pipeline.

Most importantly, in a production deployment we need to make sure that user privacy doesn’t depend on trusting a single party. This means distributing trust by selecting a third party (or parties) that users can have confidence in. This third party would never see any individual user data, but they would be responsible for keeping us honest by ensuring that we never see any individual user data either. To that end, it’s important to select a third party that users can trust; we’ll have more to say about this as we firm up our plans.

We don’t yet have concrete plans for what data we’ll protect with Prio and when. Once we’ve validated that it’s working as expected and provides the privacy guarantees we require, we can move forward in applying it where it is needed most. Expect to hear more from us in future, but for now it’s exciting to be able to take the first step towards privacy preserving data collection.

Dweb: Identity for the Decentralized Web with IndieAuth

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren't affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla's mission to keep the web open and accessible for all.

We’ve covered a number of projects so far in this series that require foundation-level changes to the network architecture of the web. But sometimes big things can come from just changing how we use the web we have today.

Imagine if you never had to remember a password to log into a website or app ever again. IndieAuth is a simple but powerful way to manage and verify identity using the decentralization already built into the web itself. We’re happy to introduce Aaron Parecki, co-founder of the IndieWeb movement, who will show you how to set up your own independent identity on the web with IndieAuth.

– Dietrich Ayala

Introducing IndieAuth

IndieAuth is a decentralized login protocol that enables users of your software to log in to other apps.

From the user perspective, it lets you use an existing account to log in to various apps without having to create a new password everywhere.

IndieAuth builds on existing web technologies, using URLs as identifiers. This makes it broadly applicable to the web today, and it can be quickly integrated into existing websites and web platforms.

IndieAuth has been developed over several years in the IndieWeb community, a loosely connected group of people working to enable individuals to own their online presence, and was published as a W3C Note in 2018.

IndieAuth Architecture

IndieAuth is an extension to OAuth 2.0 that enables any website to become its own identity provider. It builds on OAuth 2.0, taking advantage of all the existing security considerations and best practices in the industry around authorization and authentication.

IndieAuth starts with the assumption that every identifier is a URL. Users as well as applications are identified and represented by a URL.

When a user logs in to an application, they start by entering their personal home page URL. The application fetches that URL and finds where to send the user to authenticate, then sends the user there, and can later verify that the authentication was successful. The flow diagram below walks through each step of the exchange:
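In outline, the exchange works roughly like this (the URLs are placeholders and the query parameters follow standard OAuth 2.0 / IndieAuth naming; see the spec for the authoritative flow):

1. The user enters their home page URL, e.g. https://alice.example
2. The app fetches that page and discovers the authorization endpoint:
   <link rel="authorization_endpoint" href="https://auth.example/auth">
3. The app redirects the user there to authenticate:
   https://auth.example/auth?me=https://alice.example
       &client_id=https://app.example/
       &redirect_uri=https://app.example/callback
       &state=1234567890&response_type=id
4. After authenticating, the user is sent back to redirect_uri with a code.
5. The app verifies that code with the authorization endpoint, which confirms
   which URL (identity) just logged in.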

Get Started with IndieAuth

The quickest way to use your existing website as your IndieAuth identity is to let an existing service handle the protocol bits and tell apps where to find the service you’re using.

If your website is using WordPress, you can easily get started by installing the IndieAuth plugin! After you install and activate the plugin, your website will be a full-featured IndieAuth provider and you can log in to websites like https://indieweb.org right away!

To set up your website manually, you’ll need to choose an IndieAuth server such as https://indieauth.com and add a few links to your home page. Add a link to the indieauth.com authorization endpoint in an HTML <link> tag so that apps will know where to send you to log in.

<link rel="authorization_endpoint" href="https://indieauth.com/auth">

Then tell indieauth.com how to authenticate you by linking to either a GitHub account or email address.
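With indieauth.com this is typically done by adding rel="me" links to your home page, along these lines (the URLs are placeholders):

<a href="https://github.com/yourusername" rel="me">GitHub</a>
<a href="mailto:you@example.com" rel="me">Email</a>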

Note: This last step is unique to indieauth.com and isn’t part of the IndieAuth spec. This is how indieauth.com can authenticate you without you creating a password there. It lets you switch out the mechanism you use to authenticate, for example in case you decide to stop using GitHub, without changing your identity at the site you’re logging in to.

If you don’t want to rely on any third party services at all, then you can host your own IndieAuth authorization endpoint using an existing open source solution or build your own. In any case, it’s fine to start using a service for this today, because you can always swap it out later without your identity changing.

Now you’re ready! When logging in to a website like https://indieweb.org, you’ll be asked to enter your URL, then you’ll be sent to your chosen IndieAuth server to authenticate!

Learn More

If you’d like to learn more, OAuth for the Open Web talks about more of the technical details and motivations behind the IndieAuth spec.

You can learn how to build your own IndieAuth server at the links below:

Firefox 63: Tricks and Treats

It's that time of the year again, when we put on costumes and pass out goodies to all. It's Firefox release week! Join me for a spook-tacular look at the latest goodies shipping this release.

Web Components, Oh My!

After a rather long gestation, I'm pleased to announce that support for modern Web Components APIs has shipped in Firefox! Expect a more thorough write-up, but let's cover what these new APIs make possible.

Custom Elements

To put it simply, Custom Elements makes it possible to define new HTML tags outside the standard set included in the web platform. It does this by letting JS classes extend the built-in HTMLElement object, adding an API for registering new elements, and by adding special “lifecycle” methods to detect when a custom element is appended, removed, or attributes are updated:
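// A rough sketch of those pieces (the element and class names are made up):
class HelloGoodbye extends HTMLElement {
  // Watch the "name" attribute for changes
  static get observedAttributes() { return ['name']; }

  connectedCallback() {
    // Runs whenever the element is appended to the document
    this.textContent = `Hello, ${this.getAttribute('name') || 'there'}!`;
  }

  disconnectedCallback() {
    // Runs whenever the element is removed
    console.log('Goodbye!');
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // Runs whenever a watched attribute is added, changed, or removed
    this.textContent = `Hello, ${newValue}!`;
  }
}
customElements.define('hello-goodbye', HelloGoodbye);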

Shadow DOM

The web has long had reusable widgets people can use when building a site. One of the most common challenges when using third-party widgets on a page is making sure that the styles of the page don’t mess up the appearance of the widget and vice-versa. This can be frustrating (to put it mildly), and leads to lots of long, overly specific CSS selectors, or the use of complex third-party tools to re-write all the styles on the page to not conflict.

Cue frustrated developer:

There has to be a better way…

Now, there is!

The Shadow DOM is not a secretive underground society of web developers, but instead a foundational web technology that lets developers create encapsulated HTML trees that aren’t affected by outside styles, can have their own styles that don’t leak out, and in fact can be made unreachable from normal DOM traversal methods (querySelector, .childNodes, etc.).

Custom elements and shadow roots can be used independently of one another, but they really shine when used together. For instance, imagine you have a <media-player> element with playback controls. You can put the controls in a shadow root and keep the page’s DOM clean! In fact, both Firefox and Chrome now use Shadow DOM for the implementation of the <video> element.
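Here’s a rough sketch of that idea (the element name and markup are just for illustration):

class MediaPlayer extends HTMLElement {
    constructor() {
        super();
        // everything inside this shadow root is isolated from the page’s styles and DOM traversal
        var shadow = this.attachShadow({ mode: "open" });
        shadow.innerHTML =
            "<style>button { font-size: 2em; } /* scoped to this shadow tree */</style>" +
            "<button>Play</button>";
    }
}
customElements.define("media-player", MediaPlayer);
// document.querySelector("media-player button") returns null: the button lives inside the shadow root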

Fonts Editor

The Inspector’s Fonts panel is a handy way to see what local and web fonts are being used on a page. Already useful for debugging webfonts, in Firefox 63 the Fonts panel gains new powers! You can adjust the parameters of the font on the currently selected element, and if the current font supports Font Variations, you can view and fine-tune those parameters as well. The syntax for adjusting variable fonts can be a little unfamiliar and it’s not otherwise possible to discover all the variations built into a font, so this tool can be a life saver.

Reduced motion preferences for CSS

Slick animations can give a polished and unique feel to a digital experience. However, for some people, animated effects like parallax and sliding/zooming transitions can cause vertigo and headaches. In addition, some older/less powerful devices can struggle to render animations smoothly. To respond to this, some devices and operating systems offer a “reduce motion” option. In Firefox 63, you can now detect this preference using CSS media queries and adjust/reduce your use of transitions and animations to ensure more people have a pleasant experience using your site. CSS Tricks has a great overview of both how to detect reduced motion and why you should care.
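If you want a feel for it, here’s a tiny sketch; the same preference the CSS media query @media (prefers-reduced-motion: reduce) exposes can also be checked from JavaScript (startParallax is a stand-in for whatever animation-heavy code you’d normally run):

var prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
if (!prefersReducedMotion) {
    startParallax();   // hypothetical animation routine; skipped for users who asked for less motion
}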

Conclusion

There is, as always, a bunch more in this release of Firefox. MDN Web Docs has the full run-down of developer-facing changes, and more highlights can be found in the official release notes. Happy Browsing!

]]>https://hacks.mozilla.org/2018/10/firefox-63-tricks-and-treats/feed/0WebAssembly’s post-MVP future: A cartoon skill treehttps://hacks.mozilla.org/2018/10/webassemblys-post-mvp-future/
https://hacks.mozilla.org/2018/10/webassemblys-post-mvp-future/#commentsMon, 22 Oct 2018 15:32:56 +0000https://hacks.mozilla.org/?p=32793People have a misconception—they think that the WebAssembly that landed in browsers back in 2017—is the final version. In fact, we still have many use cases to unlock, from heavy-weight desktop applications, to small modules, to JS frameworks, to all the things outside the browser… Node.js, and serverless, and the blockchain, and portable CLI tools, and the internet of things.

The WebAssembly that we have today is not the end of this story—it’s just the beginning.

People have a misconception about WebAssembly. They think that the WebAssembly that landed in browsers back in 2017—which we called the minimum viable product (or MVP) of WebAssembly—is the final version of WebAssembly.

I can understand where that misconception comes from. The WebAssembly community group is really committed to backwards compatibility. This means that the WebAssembly that you create today will continue working on browsers into the future.

But that doesn’t mean that WebAssembly is feature complete. In fact, that’s far from the case. There are many features that are coming to WebAssembly which will fundamentally alter what you can do with WebAssembly.

I think of these future features kind of like the skill tree in a videogame. We’ve fully filled in the top few of these skills, but there is still this whole skill tree below that we need to fill in to unlock all of the applications.

So let’s look at what’s been filled in already, and then we can see what’s yet to come.

Minimum Viable Product (MVP)

The very beginning of WebAssembly’s story starts with Emscripten, which made it possible to run C++ code on the web by transpiling it to JavaScript. This made it possible to bring large existing C++ code bases, for things like games and desktop applications, to the web.

The JS it automatically generated was still significantly slower than the comparable native code, though. But Mozilla engineers found a type system hiding inside the generated JavaScript, and figured out how to make this JavaScript run really fast. This subset of JavaScript was named asm.js.

But that wasn’t the end of the story. It was just the beginning. There were still things that engines could do to make this faster.

But they couldn’t do it in JavaScript itself. Instead, they needed a new language—one that was designed specifically to be compiled to. And that was WebAssembly.

So what skills were needed for the first version of WebAssembly? What did we need to get to a minimum viable product that could actually run C and C++ efficiently on the web?

Skill: Compile target

The folks working on WebAssembly knew they didn’t want to just support C and C++. They wanted many different languages to be able to compile to WebAssembly. So they needed a language-agnostic compile target.

They needed something like the assembly language that things like desktop applications are compiled to—like x86. But this assembly language wouldn’t be for an actual, physical machine. It would be for a conceptual machine.

Skill: Fast execution

That compiler target had to be designed so that it could run very fast. Otherwise, WebAssembly applications running on the web wouldn’t keep up with users’ expectations for smooth interactions and game play.

Skill: Compact

In addition to execution time, load time needed to be fast, too. Users have certain expectations about how quickly something will load. For desktop applications, that expectation is that they will load quickly because the application is already installed on your computer. For web apps, the expectation is also that load times will be fast, because web apps usually don’t have to load nearly as much code as desktop apps.

When you combine these two things, though, it gets tricky. Desktop applications are usually pretty large code bases. So if they are on the web, there’s a lot to download and compile when the user first goes to the URL.

To meet these expectations, we needed our compiler target to be compact. That way, it could go over the web quickly.

Skill: Linear memory

These languages also needed to be able to use memory differently from how JavaScript uses memory. They needed to be able to directly manage their memory—to say which bytes go together.

This is because languages like C and C++ have a low-level feature called pointers. You can have a variable that doesn’t have a value in it, but instead has the memory address of the value. So if you’re going to support pointers, the program needs to be able to write and read from particular addresses.

But you can’t have a program you downloaded from the web just accessing bytes in memory willy-nilly, using whatever addresses they want. So in order to create a secure way of giving access to memory, like a native program is used to, we had to create something that could give access to a very specific part of memory and nothing else.

To do this, WebAssembly uses a linear memory model. This is implemented using TypedArrays. It’s basically just like a JavaScript array, except this array only contains bytes of memory. When you access data in it, you just use array indexes, which you can treat as though they were memory addresses. This means you can pretend this array is C++ memory.
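Here’s a rough illustration of that from the JavaScript side:

var memory = new WebAssembly.Memory({ initial: 1 });   // one page = 64 KiB of linear memory
var bytes = new Uint8Array(memory.buffer);             // a byte-level view over that memory
bytes[0] = 42;                                          // "write 42 at address 0"
console.log(bytes[0]);                                  // "read the byte at address 0"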

Achievement unlocked

So with all of these skills in place, people could run desktop applications and games in your browser as if they were running natively on their computer.

And that was pretty much the skill set that WebAssembly had when it was released as an MVP. It was truly an MVP—a minimum viable product.

This allowed certain kinds of applications to work, but there were still a whole host of others to unlock.

Heavy-weight Desktop Applications

The next achievement to unlock is heavier weight desktop applications.

Can you imagine if something like Photoshop were running in your browser? If you could instantaneously load it on any device like you do with Gmail?

We’ve already started seeing things like this. For example, Autodesk’s AutoCAD team has made their CAD software available in the browser. And Adobe has made Lightroom available through the browser using WebAssembly.

But there are still a few features that we need to put in place to make sure that all of these applications—even the heaviest of heavy weight—can run well in the browser.

Skill: Threading

First, we need support for multithreading. Modern-day computers have multiple cores. These are basically multiple brains that can all be working at the same time on your problem. That can make things go much faster, but to make use of these cores, you need support for threading.

Skill: SIMD

Alongside threading, there’s another technique that utilizes modern hardware, and which enables you to process things in parallel.

That is SIMD: single instruction multiple data. With SIMD, it’s possible to take a chunk of memory and split it up across different execution units, which are kind of like cores. Then you have the same bit of code—the same instruction—run across all of those execution units, but they each apply that instruction to their own bit of the data.

Skill: 64-bit addressing

Another hardware capability that WebAssembly needs to take full advantage of is 64-bit addressing.

Memory addresses are just numbers, so if your memory addresses are only 32 bits long, you can only have so many memory addresses—enough for 4 gigabytes of linear memory.

But with 64-bit addressing, you have 16 exabytes. Of course, you don’t have 16 exabytes of actual memory in your computer. So the maximum is subject to however much memory the system can actually give you. But this will take the artificial limitation on address space out of WebAssembly.

Skill: Streaming compilation

For these applications, we don’t just need them to run fast. We needed load times to be even faster than they already were. There are a few skills that we need specifically to improve load times.

One big step is to do streaming compilation—to compile a WebAssembly file while it’s still being downloaded. WebAssembly was designed specifically to enable easy streaming compilation. In Firefox, we actually compile it so fast—faster than it is coming in over the network—that it’s pretty much done compiling by the time you download the file. And other browsers are adding streaming, too.
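Streaming compilation is exposed to developers directly, too. Here’s a minimal sketch (the file name, import object, and exported main function are placeholders):

var importObject = {};   // whatever imports the module needs
WebAssembly.instantiateStreaming(fetch("module.wasm"), importObject)
    .then(function (result) {
        result.instance.exports.main();   // compilation overlapped with the download
    });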

Another thing that helps is having a tiered compiler.

For us in Firefox, that means having two compilers. The first one—the baseline compiler—kicks in as soon as the file starts downloading. It compiles the code really quickly so that it starts up quickly.

The code it generates is fast, but not 100% as fast as it could be. To get that extra bit of performance, we run another compiler—the optimizing compiler—on several threads in the background. This one takes longer to compile, but generates extremely fast code. Once it’s done, we swap out the baseline version with the fully optimized version.

This way, we get quick start up times with the baseline compiler, and fast execution times with the optimizing compiler.

In addition, we’re working on a new optimizing compiler called Cranelift. Cranelift is designed to compile code quickly, in parallel at a function by function level. At the same time, the code it generates gets even better performance than our current optimizing compiler.

Cranelift is in the development version of Firefox right now, but disabled by default. Once we enable it, we’ll get to the fully optimized code even quicker, and that code will run even faster.

But there’s an even better trick we can use to make it so we don’t have to compile at all most of the time…

Skill: Implicit HTTP caching

With WebAssembly, if you load the same code on two page loads, it will compile to the same machine code. It doesn’t need to change based on what data is flowing through it, like the JS JIT compiler needs to.

This means that we can store the compiled code in the HTTP cache. Then when the page is loading and goes to fetch the .wasm file, it will instead just pull out the precompiled machine code from the cache. This skips compiling completely for any page that you’ve already visited that’s in cache.

Skill: Other improvements

Many discussions are currently percolating around other ways to improve this, skipping even more work, so stay tuned for other load-time improvements.

Where are we with this?

Where are we with supporting these heavyweight applications right now?

Threading

For the threading, we have a proposal that’s pretty much done, but a key piece of that—SharedArrayBuffers—had to be turned off in browsers earlier this year.
They will be turned on again. Turning them off was just a temporary measure to reduce the impact of the Spectre security issue that was discovered in CPUs and disclosed earlier this year, but progress is being made, so stay tuned.

We added our baseline compiler in late 2017 as well, and other browsers have been adding the same kind of architecture over the past year.

Implicit HTTP caching

In Firefox, we’re getting close to landing support for implicit HTTP caching.

Other improvements

Other improvements are currently in discussion.

Even though this is all still in progress, you already see some of these heavyweight applications coming out today, because WebAssembly already gives these apps the performance that they need.

But once these features are all in place, that’s going to be another achievement unlocked, and more of these heavyweight applications will be able to come to the browser.

Small modules interoperating with JavaScript

But WebAssembly isn’t just for games and for heavyweight applications. It’s also meant for regular web development… for the kind of web development folks are used to: the small modules kind of web development.

Sometimes you have little corners of your app that do a lot of heavy processing, and in some cases, this processing can be faster with WebAssembly. We want to make it easy to port these bits to WebAssembly.

Again, this is a case where some of it’s already happening. Developers are already incorporating WebAssembly modules in places where there are tiny modules doing lots of heavy lifting.

One example is the parser in the source map library that’s used in Firefox’s DevTools and webpack. It was rewritten in Rust, compiled to WebAssembly, which made it 11x faster. And WordPress’s Gutenberg parser is on average 86x faster after doing the same kind of rewrite.

But for this kind of use to really be widespread—for people to be really comfortable doing it—we need to have a few more things in place.

Skill: Fast calls between JS and WebAssembly

First, we need fast calls between JS and WebAssembly, because if you’re integrating a small module into an existing JS system, there’s a good chance you’ll need to call between the two a lot. So you’ll need those calls to be fast.

But when WebAssembly first came out, these calls weren’t fast. This is where we get back to that whole MVP thing—the engines had the minimum support for calls between the two. They just made the calls work, they didn’t make them fast. So engines need to optimize these.

Skill: Fast and easy data exchange

That brings us to another thing, though. When you’re calling between JavaScript and WebAssembly, you often need to pass data between them.

You need to pass values into the WebAssembly function or return a value from it. This can also be slow, and it can be difficult too.

There are a couple of reasons it’s hard. One is because, at the moment, WebAssembly only understands numbers. This means that you can’t pass more complex values, like objects, in as parameters. You need to convert that object into numbers and put it in the linear memory. Then you pass WebAssembly the location in the linear memory.

That’s kind of complicated. And it takes some time to convert the data into linear memory. So we need this to be easier and faster.
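To make that concrete, here’s a sketch of the dance as it works today. It assumes you already have an instantiated module, instance, and that it exports a memory plus hypothetical alloc and sum_bytes functions:

var bytes = new TextEncoder().encode("some data");
var ptr = instance.exports.alloc(bytes.length);                   // ask the module for space
new Uint8Array(instance.exports.memory.buffer).set(bytes, ptr);   // copy the data into linear memory
var result = instance.exports.sum_bytes(ptr, bytes.length);       // only numbers cross the boundary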

Skill: ES module integration

Another thing we need is integration with the browser’s built in ES module support. Right now, you instantiate a WebAssembly module using an imperative API. You call a function and it gives you back a module.

But that means that the WebAssembly module isn’t really part of the JS module graph. In order to use import and export like you do with JS modules, you need to have ES module integration.
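Here’s the difference in a nutshell (the file name and export are placeholders, and the import line is the proposal’s goal rather than something that works today):

// today: imperative instantiation
WebAssembly.instantiateStreaming(fetch("lib.wasm"), {})
    .then(function (result) {
        result.instance.exports.doWork();
    });

// with ES module integration, the aim is for this to work instead:
// import { doWork } from "./lib.wasm";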

Skill: Toolchain integration

Just being able to import and export doesn’t get us all the way there, though. We need a place to distribute these modules, and download them from, and tools to bundle them up.

What’s the npm for WebAssembly? Well, what about npm?

And what’s the webpack or Parcel for WebAssembly? Well, what about webpack and Parcel?

These modules shouldn’t look any different to the people who are using them, so no reason to create a separate ecosystem. We just need tools to integrate with them.

Skill: Backwards compatibility

There’s one more thing that we need to really do well in existing JS applications—support older versions of browsers, even those browsers that don’t know what WebAssembly is. We need to make sure that you don’t have to write a whole second implementation of your module in JavaScript just so that you can support IE11.

Where are we on this?

For easy and fast data exchange, there are a few proposals that will help with this.

As I mentioned before, one reason you have to use linear memory for more complex kinds of data is because WebAssembly only understands numbers. The only types it has are ints and floats.

With the reference types proposal, this will change. This proposal adds a new type that WebAssembly functions can take as arguments and return. And this type is a reference to an object from outside WebAssembly—for example, a JavaScript object.

But WebAssembly can’t operate directly on this object. To actually do things like call a method on it, it will still need to use some JavaScript glue. This means it works, but it’s slower than it needs to be.

To speed things up, there’s a proposal that we’ve been calling the host bindings proposal. It lets a wasm module declare what glue must be applied to its imports and exports, so that the glue doesn’t need to be written in JS. By pulling glue from JS into wasm, the glue can be optimized away completely when calling builtin Web APIs.

There’s one more part of the interaction that we can make easier. And that has to do with keeping track of how long data needs to stay in memory. If you have some data in linear memory that JS needs access to, then you have to leave it there until the JS reads the data. But if you leave it in there forever, you have what’s called a memory leak. How do you know when you can delete the data? How do you know when JS is done with it? Currently, you have to manage this yourself.

Once the JS is done with the data, the JS code has to call something like a free function to free the memory. But this is tedious and error prone. To make this process easier, we’re adding WeakRefs to JavaScript. With this, you will be able to observe objects on the JS side. Then you can do cleanup on the WebAssembly side when that object is garbage collected.
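Here’s a sketch of the idea. The exact API was still being designed when this was written; the FinalizationRegistry shown below is the shape that work eventually produced, and free is a hypothetical export that releases an allocation in the module’s linear memory:

var registry = new FinalizationRegistry(function (ptr) {
    instance.exports.free(ptr);        // runs after the JS object has been garbage collected
});
registry.register(jsWrapper, ptr);     // jsWrapper is the JS object that refers to the data at ptr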

So these proposals are all in flight. In the meantime, the Rust ecosystem has created tools that automate this all for you, and that polyfill the proposals that are in flight.

One tool in particular is worth mentioning, because other languages can use it too. It’s called wasm-bindgen. When it sees that your Rust code should do something like receive or return certain kinds of JS values or DOM objects, it will automatically create JavaScript glue code that does this for you, so you don’t need to think about it. And because it’s written in a language independent way, other language toolchains can adopt it.

ES module integration

For ES module integration, the proposal is pretty far along. We are starting work with the browser vendors to implement it.

Toolchain support

For toolchain support, there are tools like wasm-pack in the Rust ecosystem which automatically runs everything you need to package your code for npm. And the bundlers are also actively working on support.

Backwards compatibility

Finally, for backwards compatibility, there’s the wasm2js tool. That takes a wasm file and spits out the equivalent JS. That JS isn’t going to be fast, but at least that means it will work in older versions of browsers that don’t understand WebAssembly.

So we’re getting close to unlocking this achievement. And once we unlock it, we open the path to another two.

JS frameworks and compile-to-JS languages

One is rewriting large parts of things like JavaScript frameworks in WebAssembly.

The other is making it possible for statically-typed compile-to-js languages to compile to WebAssembly instead—for example, having languages like Scala.js, or Reason, or Elm compile to WebAssembly.

For both of these use cases, WebAssembly needs to support high-level language features.

Skill: GC

We need integration with the browser’s garbage collector for a couple of reasons.

First, let’s look at rewriting parts of JS frameworks. This could be good for a couple of reasons. For example, in React, one thing you could do is rewrite the DOM diffing algorithm in Rust, which has very ergonomic multithreading support, and parallelize that algorithm.

You could also speed things up by allocating memory differently. In the virtual DOM, instead of creating a bunch of objects that need to be garbage collected, you could use a special memory allocation scheme. For example, you could use a bump allocator scheme which has extremely cheap allocation and all-at-once deallocation. That could potentially help speed things up and reduce memory usage.

But you’d still need to interact with JS objects—things like components—from that code. You can’t just continually copy everything in and out of linear memory, because that would be difficult and inefficient.

So you need to be able to integrate with the browser’s GC so you can work with components that are managed by the JavaScript VM. Some of these JS objects need to point to data in linear memory, and sometimes the data in linear memory will need to point out to JS objects.

If this ends up creating cycles, it can mean trouble for the garbage collector. It means the garbage collector won’t be able to tell if the objects are used anymore, so they will never be collected. WebAssembly needs integration with the GC to make sure these kinds of cross-language data dependencies work.

This will also help statically-typed languages that compile to JS, like Scala.js, Reason, Kotlin or Elm. These languages use JavaScript’s garbage collector when they compile to JS. Because WebAssembly can use that same GC—the one that’s built into the engine—these languages will be able to compile to WebAssembly instead and use that same garbage collector. They won’t need to change how GC works for them.

Skill: Exception handling

We also need better support for handling exceptions.

Some languages, like Rust, do without exceptions. But in other languages, like C++, JS or C#, exception handling is sometimes used extensively.

You can polyfill exception handling currently, but the polyfill makes the code run really slowly. So the default when compiling to WebAssembly is currently to compile without exception handling.

However, since JavaScript has exceptions, even if you’ve compiled your code to not use them, JS may throw one into the works. If your WebAssembly function calls a JS function that throws, then the WebAssembly module won’t be able to correctly handle the exception. So languages like Rust choose to abort in this case. We need to make this work better.

Skill: Debugging

Another thing that people working with JS and compile-to-JS languages are used to having is good debugging support. Devtools in all of the major browsers make it easy to step through JS. We need this same level of support for debugging WebAssembly in browsers.

Skill: Tail calls

And finally, for many functional languages, you need to have support for something called tail calls. I’m not going to get too into the details on this, but basically it lets you call a new function without adding a new stack frame to the stack. So for functional languages that support this, we want WebAssembly to support it too.

Where are we on this?

So where are we on this?

Garbage collection

For garbage collection, there are two proposals currently underway:

The Typed Objects proposal for JS, and the GC proposal for WebAssembly. Typed Objects will make it possible to describe an object’s fixed structure. There is an explainer for this, and the proposal will be discussed at an upcoming TC39 meeting.

The WebAssembly GC proposal will make it possible to directly access that structure. This proposal is under active development.

With both of these in place, both JS and WebAssembly know what an object looks like and can share that object and efficiently access the data stored on it. Our team actually already has a prototype of this working. However, it still will take some time for these to go through standardization so we’re probably looking at sometime next year.

Exception handling

Exception handling is still in the research and development phase, and there’s work now to see if it can take advantage of other proposals like the reference types proposal I mentioned before.

Debugging

For debugging, there is currently some support in browser devtools. For example, you can step through the text format of WebAssembly in the Firefox debugger. But it’s still not ideal. We want to be able to show you where you are in your actual source code, not in the assembly. The thing that we need to do for that is figure out how source maps—or a source maps type thing—work for WebAssembly. So there’s a subgroup of the WebAssembly CG working on specifying that.

Once those are all in place, we’ll have unlocked JS frameworks and many compile-to-JS languages.

So, those are all achievements we can unlock inside the browser. But what about outside the browser?

Outside the Browser

Now, you may be confused when I talk about “outside the browser”. Because isn’t the browser what you use to view the web? And isn’t that right in the name—WebAssembly?

But the truth is the things you see in the browser—the HTML and CSS and JavaScript—are only part of what makes the web. They are the visible part—they are what you use to create a user interface—so they are the most obvious.

But there’s another really important part of the web which has properties that aren’t as visible.

That is the link. And it is a very special kind of link.

The innovation of this link is that I can link to your page without having to put it in a central registry, and without having to ask you or even know who you are. I can just put that link there.

It’s this ease of linking, without any oversight or approval bottlenecks, that enabled our web. That’s what enabled us to form these global communities with people we didn’t know.

But if all we have is the link, there are two problems here that we haven’t addressed.

The first one is… you go visit this site and it delivers some code to you. How does it know what kind of code it should deliver to you? Because if you’re running on a Mac, then you need different machine code than you do on Windows. That’s why you have different versions of programs for different operating systems.

Then should a web site have a different version of the code for every possible device? No.

Instead, the site has one version of the code—the source code. This is what’s delivered to the user. Then it gets translated to machine code on the user’s device.

The name for this concept is portability.

So that’s great, you can load code from people who don’t know you and don’t know what kind of device you’re running.

But that brings us to a second problem. If you don’t know these people whose web pages you’re loading, how do you know what kind of code they’re giving you? It could be malicious code. It could be trying to take over your system.

Doesn’t this vision of the web—running code from anybody whose link you follow—mean that you have to blindly trust anyone who’s on the web?

This is where the other key concept from the web comes in.

That’s the security model. I’m going to call it the sandbox.

Basically, the browser takes the page—that other person’s code—and instead of letting it run around willy-nilly in your system, it puts it in a sandbox. It puts a couple of toys that aren’t dangerous into that sandbox so that the code can do some things, but it leaves the dangerous things outside of the sandbox.

So the utility of the link is based on these two things:

Portability—the ability to deliver code to users and have it run on any type of device that can run a browser.

And the sandbox—the security model that lets you run that code without risking the integrity of your machine.

So why does this distinction matter? Why does it make a difference if we think of the web as something that the browser shows us using HTML, CSS, and JS, or if we think of the web in terms of portability and the sandbox?

Because it changes how you think about WebAssembly.

You can think about WebAssembly as just another tool in the browser’s toolbox… which it is.

It is another tool in the browser’s toolbox. But it’s not just that. It also gives us a way to take these other two capabilities of the web—the portability and the security model—and take them to other use cases that need them too.

We can expand the web past the boundaries of the browser. Now let’s look at where these attributes of the web would be useful.

Node.js

How could WebAssembly help Node? It could bring full portability to Node.

Node gives you most of the portability of JavaScript on the web. But there are lots of cases where Node’s JS modules aren’t quite enough—where you need to improve performance or reuse existing code that’s not written in JS.

In these cases, you need Node’s native modules. These modules are written in languages like C, and they need to be compiled for the specific kind of machine that the user is running on.

Native modules are either compiled when the user installs, or precompiled into binaries for a wide matrix of different systems. One of these approaches is a pain for the user, the other is a pain for the package maintainer.

Now, if these native modules were written in WebAssembly instead, then they wouldn’t need to be compiled specifically for the target architecture. Instead, they’d just run like the JavaScript in Node runs. But they’d do it at nearly native performance.

So we get to full portability for the code running in Node. You could take the exact same Node app and run it across all different kinds of devices without having to compile anything.

But WebAssembly doesn’t have direct access to the system’s resources. Native modules in Node aren’t sandboxed—they have full access to all of the dangerous toys that the browser keeps out of the sandbox. In Node, JS modules also have access to these dangerous toys because Node makes them available. For example, Node provides methods for reading from and writing files to the system.

For Node’s use case, it makes a certain amount of sense for modules to have this kind of access to dangerous system APIs. So if WebAssembly modules don’t have that kind of access by default (like Node’s current modules do), how could we give WebAssembly modules the access they need? We’d need to pass in functions so that the WebAssembly module can work with the operating system, just as Node does with JS.
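As a sketch, that might look something like this; the import names and the body of readFile are illustrative, not a proposed standard:

var fs = require("fs");
var wasmBytes = fs.readFileSync("module.wasm");   // placeholder module
var imports = {
    env: {
        readFile: function (pathPtr, pathLen) {
            // read the path string out of linear memory, do the actual fs work, copy the result back in...
        }
    }
};
WebAssembly.instantiate(wasmBytes, imports).then(function (result) {
    result.instance.exports.main();   // the module can only reach the system through the imports above
});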

For Node, this will probably include a lot of the functionality that’s in things like the C standard library. It would also likely include things that are part of POSIX—the Portable Operating System Interface—which is an older standard that helps with compatibility. It provides one API for interacting with the system across a bunch of different Unix-like OSs. Modules would definitely need a bunch of POSIX-like functions.

Skill: Portable interface

What the Node core folks would need to do is figure out the set of functions to expose and the API to use.

But wouldn’t it be nice if that were actually something standard? Not something that was specific to just Node, but could be used across other runtimes and use cases too?

A POSIX for WebAssembly if you will. A PWSIX? A portable WebAssembly system interface.

And if that were done in the right way, you could even implement the same API for the web. These standard APIs could be polyfilled onto existing Web APIs.

These functions wouldn’t be part of the WebAssembly spec. And there would be WebAssembly hosts that wouldn’t have them available. But for those platforms that could make use of them, there would be a unified API for calling these functions, no matter which platform the code was running on. And this would make universal modules—ones that run across both the web and Node—so much easier.

Where are we with this?

So, is this something that could actually happen?

A few things are working in this idea’s favor. There’s a proposal called package name maps that will provide a mechanism for mapping a module name to a path to load the module from. And that will likely be supported by both browsers and Node, which can use it to provide different paths, and thus load entirely different modules, but with the same API. This way, the .wasm module itself can specify a single (module-name, function-name) import pair that Just Works on different environments, even the web.

With that mechanism in place, what’s left to do is actually figure out what functions make sense and what their interface should be.

There’s no active work on this at the moment. But a lot of discussions are heading in this direction right now. And it looks likely to happen, in one form or another.

Which is good, because unlocking this gets us halfway to unlocking some other use cases outside the browser. And with this in place, we can accelerate the pace.

So, what are some examples of these other use cases?

CDNs, Serverless, and Edge Computing

One example is things like CDNs, and Serverless, and Edge Computing. These are cases where you’re putting your code on someone else’s server, and they make sure that the server is maintained and that the code is close to all of your users.

Why would you want to use WebAssembly in these cases? There was a great talk explaining exactly this at a conference recently.

Fastly is a company that provides CDNs and edge computing. And their CTO, Tyler McMullen, explained it this way (and I’m paraphrasing here):

If you look at how a process works, code in that process doesn’t have boundaries. Functions have access to whatever memory in that process they want, and they can call whatever other functions they want.

When you’re running a bunch of different people’s services in the same process, this is an issue. Sandboxing could be a way to get around this. But then you get to a scale problem.

For example, if you use a JavaScript VM like Firefox’s SpiderMonkey or Chrome’s V8, you get a sandbox and you can put hundreds of instances into a process. But with the numbers of requests that Fastly is servicing, you don’t just need hundreds per process—you need tens of thousands.

Tyler does a better job of explaining all of it in his talk, so you should go watch that. But the point is that WebAssembly gives Fastly the safety, speed, and scale needed for this use case.

So what did they need to make this work?

Skill: Runtime

They needed to create their own runtime. That means taking a WebAssembly compiler—something that can compile WebAssembly down to machine code—and combining it with the functions for interacting with the system that I mentioned before.

For the WebAssembly compiler, Fastly used Cranelift, the compiler that we’re also building into Firefox. It’s designed to be very fast and doesn’t use much memory.

Now, for the functions that interact with the rest of the system, they had to create their own, because we don’t have that portable interface available yet.

So it’s possible to create your own runtime today, but it takes some effort. And it’s effort that will have to be duplicated across different companies.

What if we didn’t just have the portable interface, but we also had a common runtime that could be used across all of these companies and other use cases? That would definitely speed up development.

Then other companies could just use that runtime—like they do Node today—instead of creating their own from scratch.

Where are we on this?

So what’s the status of this?

Even though there’s no standard runtime yet, there are a few runtime projects in flight right now. These include WAVM, which is built on top of LLVM, and wasmjit.

In addition, we’re planning a runtime that’s built on top of Cranelift, called wasmtime.

And once we have a common runtime, that speeds up development for a bunch of different use cases. For example…

Portable CLI tools

WebAssembly can also be used in more traditional operating systems. Now to be clear, I’m not talking about in the kernel (although brave souls are trying that, too) but WebAssembly running in Ring 3—in user mode.

Then you could do things like have portable CLI tools that could be used across all different kinds of operating systems.

And this is pretty close to another use case…

Internet of Things

The internet of things includes devices like wearable technology, and smart home appliances.

These devices are usually resource constrained—they don’t pack much computing power and they don’t have much memory. And this is exactly the kind of situation where a compiler like Cranelift and a runtime like wasmtime would shine, because they would be efficient and low-memory. And in the extremely-resource-constrained case, WebAssembly makes it possible to fully compile to machine code before loading the application on the device.

There’s also the fact that there are so many of these different devices, and they are all slightly different. WebAssembly’s portability would really help with that.

So that’s one more place where WebAssembly has a future.

Conclusion

Now let’s zoom back out and look at this skill tree.

I said at the beginning of this post that people have a misconception about WebAssembly—this idea that the WebAssembly that landed in the MVP was the final version of WebAssembly.

I think you can see now why this is a misconception.

Yes, the MVP opened up a lot of opportunities. It made it possible to bring a lot of desktop applications to the web. But we still have many use cases to unlock, from heavy-weight desktop applications, to small modules, to JS frameworks, to all the things outside the browser… Node.js, and serverless, and the blockchain, and portable CLI tools, and the internet of things.

So the WebAssembly that we have today is not the end of this story—it’s just the beginning.

]]>https://hacks.mozilla.org/2018/10/webassemblys-post-mvp-future/feed/29Introducing Opus 1.3https://hacks.mozilla.org/2018/10/introducing-opus-1-3/
https://hacks.mozilla.org/2018/10/introducing-opus-1-3/#commentsThu, 18 Oct 2018 16:30:42 +0000https://hacks.mozilla.org/?p=32785Opus is a totally open, royalty-free, audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. This 1.3 release brings quality improvements to both speech and music compression, ambisonics support, and more.

Opus is a totally open, royalty-free audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is now included in all major browsers and mobile operating systems. It has been adopted for a wide range of applications, and is the default WebRTC codec.

This release brings quality improvements to both speech and music compression, while remaining fully compatible with RFC 6716. Here are a few of the upgrades that users and implementers will care about the most.

Opus 1.3 includes a brand new speech/music detector. It is based on a recurrent neural network and is both simpler and more reliable than the detector that has been used since version 1.1. The new detector should improve the Opus performance on mixed content encoding, especially at bitrates below 48 kb/s.

There are also many improvements for speech encoding at lower bitrates, both for mono and stereo. The demo contains many more details, as well as some audio samples. This new release also includes a cool new feature: ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

]]>https://hacks.mozilla.org/2018/10/introducing-opus-1-3/feed/1Dweb: Decentralised, Real-Time, Interoperable Communication with Matrixhttps://hacks.mozilla.org/2018/10/dweb-decentralised-real-time-interoperable-communication-with-matrix/
https://hacks.mozilla.org/2018/10/dweb-decentralised-real-time-interoperable-communication-with-matrix/#commentsWed, 17 Oct 2018 15:12:41 +0000https://hacks.mozilla.org/?p=32759Matrix is an open standard for interoperable, decentralised, real-time communication over the Internet. It provides a standard HTTP API for publishing and subscribing to real-time data in specified channels, so it can be used to power Instant Messaging, VoIP/WebRTC signalling, Internet of Things communication--the most common use of Matrix today is as an Instant Messaging platform.

]]>In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

While Scuttlebutt is person-centric and IPFS is document-centric, today you’ll learn about Matrix, which is all about messages. Instead of inventing a whole new stack, they’ve leaned on some familiar parts of the web today – HTTP as a transport, and JSON for the message format. How those messages get around is what distinguishes it – a system of decentralized servers, designed with interoperability in mind from the beginning, and an extensibility model for adapting to different use-cases. Please enjoy this introduction from Ben Parsons, developer advocate for Matrix.org.

– Dietrich Ayala

What is Matrix?

Matrix is an open standard for interoperable, decentralised, real-time communication over the Internet. It provides a standard HTTP API for publishing and subscribing to real-time data in specified channels, which means it can be used to power Instant Messaging, VoIP/WebRTC signalling, Internet of Things communication, and anything else that can be expressed as JSON and needs to be transmitted in real-time over HTTP. The most common use of Matrix today is as an Instant Messaging platform.

Matrix is interoperable in that it follows an open standard and can freely communicate with other platforms. Matrix messages are JSON, and easy to parse. Bridges are provided to enable communication with other platforms.

Matrix is decentralised – there is no central server. To communicate on Matrix, you connect your client to a single “homeserver” – this server then communicates with other homeservers. For every room you are in, your homeserver will maintain a copy of the history of that room. This means that no one homeserver is the host or owner of a room if there is more than one homeserver connected to it. Anyone is free to host their own homeserver, just as they would host their own website or email server.

Why create another messaging platform?

The initial goal is to fix the problem of fragmented IP communications: letting users message and call each other without having to care what app the other user is on – making it as easy as sending an email.

In future, we want to see Matrix used as a generic HTTP messaging and data synchronization system for the whole web, enabling IoT and other applications through a single unified, understandable interface.

What does Matrix provide?

Matrix is an Open Standard, with a specification that describes the interaction of homeservers, clients and Application Services that can extend Matrix.

There are reference implementations of clients, servers and SDKs for various programming languages.

Architecture

You connect to Matrix via a client. Your client connects to a single server – this is your homeserver. Your homeserver stores and provides history and account information for the connected user, and room history for rooms that user is a member of. To sign up, you can find a list of public homeservers at hello-matrix.net, or if using Riot as your client, the client will suggest a default location.

Homeservers synchronize message history with other homeservers. In this way, your homeserver is responsible for storing the state of rooms and providing message history.

Let’s take a look at an example of how this works. Homeservers and clients are connected as in the diagram in figure 1.

Figure 1. Homeservers with clients

Figure 2.

If we join a homeserver (Figure 3), that means we are connecting our client to an account on that homeserver.

Figure 3.

Now we send a message. This message is sent into a room specified by our client, and given an event id by the homeserver.

Figure 4.

Our homeserver sends the message event to every homeserver that has one of its users in the room. It also sends the event to every local client in the room. (Figure 5.)

Figure 5.

Finally, the remote homeservers send the message event to their clients, which are in the appropriate room.

Figure 6.

Usage Example – simple chatbot

Let’s use the matrix-js-sdk to create a small chatbot, which listens in a room and responds back with an echo.

Make a new directory, install matrix-js-sdk and let’s get started:

mkdir my-bot
cd my-bot
npm install matrix-js-sdk
touch index.js

Now open index.js in your editor. We first create a client instance; this connects our client to our homeserver (the values below are placeholders, so swap in your own homeserver URL, access token and user ID):
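var sdk = require("matrix-js-sdk");

// placeholder credentials: substitute your own homeserver, access token and user ID
var client = sdk.createClient({
    baseUrl: "https://matrix.org",
    accessToken: "YOUR_ACCESS_TOKEN",
    userId: "@your-username:matrix.org"
});

// hand every timeline event to handleEvent (defined below), then start syncing
client.on("Room.timeline", function (event, room, toStartOfTimeline) {
    handleEvent(event);
});
client.startClient();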

Finally, we respond to the events by echoing back messages that start with "!":

function handleEvent(event) {
    // we know we only want to respond to messages
    if (event.getType() !== "m.room.message") {
        return;
    }

    // we are only interested in messages which start with "!"
    if (event.getContent().body[0] === '!') {
        // create an object with everything after the "!"
        var content = {
            "body": event.getContent().body.substring(1),
            "msgtype": "m.notice"
        };
        // send the message back to the room it came from
        client.sendEvent(event.getRoomId(), "m.room.message", content, "", (err, res) => {
            console.log(err);
        });
    }
}

Learn More

The best place to come and find out more about Matrix is on Matrix itself! The absolute quickest way to participate in Matrix is to use Riot, a popular web-based client. Head to <https://riot.im/app>, sign up for an account and join the #matrix:matrix.org room to introduce yourself.

]]>https://hacks.mozilla.org/2018/10/dweb-decentralised-real-time-interoperable-communication-with-matrix/feed/9Show your support for Firefox with new badgeshttps://hacks.mozilla.org/2018/10/show-your-support-for-firefox-with-new-badges/
Tue, 16 Oct 2018 14:57:45 +0000https://hacks.mozilla.org/?p=32787If you use Firefox and want to show your support, we've made a collection of badges you can add to your website. Whether you're passionate about Mozilla's mission, or just think Firefox is a kick-ass product, we'd love your help in spreading the word.

]]>Firefox is only as strong as its passionate users. Because we’re independent, people need to make a conscious choice to use a non-default browser on their system. We’re most successful when happy users tell others about an alternative worth trying.

If you’re a Firefox user and want to show your support, we’ve made a collection of badges you can add to your website to tell users, “I use Firefox, and you should too!”

You can browse the badges and grab the code to display them on a dedicated microsite we’ve built, so there’s no need to download them (though you’re welcome to if you want). Images are hosted on a Mozilla CDN for convenience and performance only. We do no tracking of traffic to the CDN. We’ll be adding more badges as time goes on as well.

So whether you’re excited to use a browser from a non-profit with a mission to build a better Internet, or just think Firefox is a kick-ass product, we’d love for you to spread the word.

]]>Payments, accessibility, and dead macros: MDN Changelog for September 2018https://hacks.mozilla.org/2018/10/payments-accessibility-and-dead-macros-mdn-changelog-for-september-2018/
Fri, 12 Oct 2018 14:50:58 +0000https://hacks.mozilla.org/?p=32775Changes and updates to the code, data, and tools that support MDN Web Docs. In September, the team launched MDN payments, improved MDN’s accessibility resources, and removed 15% of KumaScript macros. The team also shipped tweaks and fixes by merging 379 pull requests, including 66 pull requests from 38 new contributors.

Launched MDN payments

We’ve been thinking about the direction and growth of MDN. We’d like a more direct connection with developers, and to provide them with valuable features and benefits they need to be successful in their web projects. We’ve researched several promising ideas, and decided that direct payments would be the first experiment. Logged-in users and 1% of anonymous visitors see the banner that asks them to directly support MDN. See Ali Spivak’s and Kadir Topal’s post, A New Way to Support MDN, for more information.

Payment page on MDN

The implementation phase started in August, when Potato London was hired to design and implement payments. Potato did an amazing job executing on a 5-week schedule, including several design meetings, daily standups, and a trip from Bristol to London to meet face-to-face during the MDN work week. Thanks to the hard work from the Potato team, including Charlie Harding, Josh Jarvis, Matt Hall, Michał Macioszczyk, Philip Lackmaker, and Rachel Lee.

In honour of Potato, Tate Modern is exhibiting Magdalena Abakanowicz’s “Embryology”

Improved MDN’s accessibility resources

After the work week, we met with accessibility experts for the Hack on MDN event. Volunteers and staff improved MDN’s coverage of accessibility. This included discussions of accessibility topics, improving and expanding MDN’s documentation, and writing related blog posts. It also included code changes, improving MDN’s color contrast and adding markup for screen readers. See Janet Swisher’s Hack on MDN: Better accessibility for MDN Web Docs for the details.

Removed 15% of KumaScript macros

The MDN team got together for a week at the London office to reflect on the quarter and plan the coming year.

We discussed KumaScript, our macro language and rendering service that implements standardized sidebars, banners, and internal links. It’s been easier to analyze macros since we moved them to GitHub in November 2016. We’re happy with the performance gains, but code reviews take forever, translations are hard, and we’re slow to write tests. These issues contributed to an incident in August where a sidebar macro was broken, and all the API reference pages showed an error for a day (bug 1487640).

Staff is getting impatient with KumaScript, and wants to replace it with something better. Florian wrote up the notes from the meeting on Discourse as Next steps for KumaScript.

The team removed 72 macros in about 2 weeks, and will continue removing them for the rest of the year. This will leave a smaller number of important macros, and we can analyze them for the next steps in the project.

Planned for October

October is the start of the fourth quarter. We have a few yearly goals to complete, including the Python 3 transition, the next round of the payments experiment, and performance experiments. This quarter also contains major holidays and the Mozilla All Hands, which mean it has about half the working days of other quarters. Time to get to work!

Move to Mozilla IT infrastructure

In October, Ryan Johnson, Ed Lim, Dave Parfitt, and Josh Mize will complete the setup of MDN services in the Mozilla IT infrastructure, and switch production traffic to the new systems. This will complete the migration of MDN from Mozilla Marketing to Emerging Technologies, started in February 2018. The team is organizing the switch-over checklist, and experimenting with the parallel staging environments.

The production switch is planned for October 29th, and will include a few hours when the site is in read-only mode.

]]>Home Monitoring with Things Gateway 0.6https://hacks.mozilla.org/2018/10/home-monitoring-with-things-gateway-0-6/
https://hacks.mozilla.org/2018/10/home-monitoring-with-things-gateway-0-6/#commentsThu, 11 Oct 2018 15:19:10 +0000https://hacks.mozilla.org/?p=32758The latest version of the Things Gateway rolling out today comes with new home monitoring features that let you directly monitor your home over the web, without a middleman. That means no monthly fees, your private data stays in your home by default, and you can choose from a variety of sensors made by different manufacturers.

]]>When it comes to smart home devices, protecting the safety and security of your home when you aren’t there is a popular area of adoption. Traditional home security systems are either completely offline (an alarm sounds in the house, but nobody is notified) or professionally monitored (with costly subscription services). Self monitoring of your connected home therefore makes sense, but many current smart home solutions still require ongoing service fees and send your private data to a centralised cloud service.

The latest version of the Things Gateway rolls out today with new home monitoring features that let you directly monitor your home over the web, without a middleman. That means no monthly fees, your private data stays in your home by default, and you can choose from a variety of sensors from different brands.

Version 0.6 adds support for door sensors, motion sensors and customisable push notifications. Other enhancements include support for push buttons and a wider range of Apple HomeKit devices, as well as general robustness improvements and better error reporting.

Sensors

The latest update comes with support for door/window sensors and motion sensors, including the SmartThings Motion Sensor and SmartThings Multipurpose Sensor. These sensors make great triggers for a home monitoring system and also report temperature, battery level and tamper detection.

Push Notifications

You can now create rules which trigger a push notification to your desktop, laptop, tablet or smartphone. An example use case for this is to notify you when a door has been opened or motion is detected in your home, but you can use notifications for whatever you like!

To create a rule which triggers a push notification, simply drag and drop the notification output and customize it with your own message.

Thanks to the power of Progressive Web Apps, if you’ve installed the gateway’s web app on your smartphone or tablet you’ll receive notifications even if the web app is closed.

Push Buttons

We’ve also added support for push buttons, like the SmartThings Button, which you can program to trigger any action you like using the rules engine. Use a button to simply turn a light on, or set a whole scene with multiple outputs.

Error Reporting

0.6 also comes with a range of robustness improvements including connection detection and error reporting. That means it will be easier to tell whether you have lost connectivity to the gateway, or one of your devices has dropped offline, and if something goes wrong with an add-on, you’ll be informed about it inside the gateway UI.

If a device has dropped offline, its icon is displayed as translucent until it comes back online. If your web app loses connectivity with the gateway, you’ll see a message appear at the bottom of the screen.

Smart plugs

Bridges

Light bulbs

Sensors

These devices use the built-in Bluetooth or WiFi support of your Raspberry Pi-based gateway, so you don’t even need a USB dongle.

Download

You can download version 0.6 today from the website. If you’ve already built your own Things Gateway with a Raspberry Pi and have it connected to the Internet, it should automatically update itself soon.

We can’t wait to see what creative things you do with all these new features. Be sure to let us know on Discourse and Twitter!

]]>https://hacks.mozilla.org/2018/10/home-monitoring-with-things-gateway-0-6/feed/16Calls between JavaScript and WebAssembly are finally fast 🎉https://hacks.mozilla.org/2018/10/calls-between-javascript-and-webassembly-are-finally-fast-%f0%9f%8e%89/
https://hacks.mozilla.org/2018/10/calls-between-javascript-and-webassembly-are-finally-fast-%f0%9f%8e%89/#commentsMon, 08 Oct 2018 15:35:06 +0000https://hacks.mozilla.org/?p=32717At Mozilla, we want WebAssembly to be as fast as it can be. This started with its design, which gives it great throughput. Then we improved load times with a streaming baseline compiler. With this, we compile code faster than it comes over the network. Now, in the latest version of Firefox Beta, calls between JS and WebAssembly are faster than many JS to JS function calls. Here's how we made them fast - illustrated in code cartoons.

One of our big priorities is making it easy to combine JS and WebAssembly. But function calls between the two languages haven’t always been fast. In fact, they’ve had a reputation for being slow, as I talked about in my first series on WebAssembly.

That’s changing. In the latest version of Firefox Beta, calls between JS and WebAssembly are faster than non-inlined JS to JS function calls. Hooray!

So these calls are fast in Firefox now. But, as always, I don’t just want to tell you that these calls are fast. I want to explain how we made them fast. So let’s look at how we improved each of the different kinds of calls in Firefox (and by how much).

But first, let’s look at how engines do these calls in the first place. (And if you already know how the engine handles function calls, you can skip to the optimizations.)

How do function calls work?

Functions are a big part of JavaScript code. A function can do lots of things, such as:

assign variables which are scoped to the function (called local variables)

use functions that are built-in to the browser, like Math.random

call other functions you’ve defined in your code

return a value
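Here’s a small example function (a hypothetical stand-in) that does all four of those things:

function roll(sides) {
  // a local variable, scoped to the function
  const max = sides;
  // a built-in provided by the browser
  const fraction = Math.random();
  // a call to another function defined in your code
  const value = scale(fraction, max);
  // a return value
  return value;
}

function scale(fraction, max) {
  return Math.floor(fraction * max) + 1;
}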

But how does this actually work? How does writing this function make the machine do what you actually want?

As I explained in my first WebAssembly article series, the languages that programmers use — like JavaScript — are very different than the language the computer understands. To run the code, the JavaScript we download in the .js file needs to be translated to the machine language that the machine understands.

Each browser has a built-in translator. This translator is sometimes called the JavaScript engine or JS runtime. However, these engines now handle WebAssembly too, so that terminology can be confusing. In this article, I’ll just call it the engine.

Each browser has its own engine:

Chrome has V8

Safari has JavaScriptCore (JSC)

Edge has Chakra

and in Firefox, we have SpiderMonkey

Even though each engine is different, many of the general ideas apply to all of them.

When the browser comes across some JavaScript code, it will fire up the engine to run that code. The engine needs to work its way through the code, going to all of the functions that need to be called until it gets to the end.

I think of this like a character going on a quest in a videogame.

Let’s say we want to play Conway’s Game of Life. The engine’s quest is to render the Game of Life board for us. But it turns out that it’s not so simple…

So the engine goes over to the next function. But the next function will send the engine on more quests by calling more functions.

The engine keeps having to go on these nested quests until it gets to a function that just gives it a result.

Then it can come back to each of the functions that it spoke to, in reverse order.

If the engine is going to do this correctly — if it’s going to give the right parameters to the right function and be able to make its way all the way back to the starting function — it needs to keep track of some information.

It does this using something called a stack frame (or a call frame). It’s basically like a sheet of paper that has the arguments to go into the function, says where the return value should go, and also keeps track of any of the local variables that the function creates.

The way it keeps track of all of these slips of paper is by putting them in a stack. The slip of paper for the function that it is currently working with is on top. When it finishes that quest, it throws out the slip of paper. Because it’s a stack, there’s a slip of paper underneath (which has now been revealed by throwing away the old one). That’s where we need to return to.

This stack of frames is called the call stack.

The engine builds up this call stack as it goes. As functions are called, frames are added to the stack. As functions return, frames are popped off of the stack. This keeps happening until we get all the way back down and have popped everything out of the stack.
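Here’s a rough sketch of that in action, with hypothetical code and the call stack shown in comments:

function main()  { return outer(); }
function outer() { return inner() + 1; }
function inner() { return 42; }

main();
// The stack grows as calls are made:
//   [ main ]
//   [ main, outer ]
//   [ main, outer, inner ]
// ...and shrinks as each function returns:
//   [ main, outer ]  inner returned 42
//   [ main ]         outer returned 43
//   [ ]              main returned 43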

So that’s the basics of how function calls work. Now, let’s look at what made function calls between JavaScript and WebAssembly slow, and talk about how we’ve made this faster in Firefox.

How we made WebAssembly function calls fast

With recent work in Firefox Nightly, we’ve optimized calls in both directions — both JavaScript to WebAssembly and WebAssembly to JavaScript. We’ve also made calls from WebAssembly to built-ins faster.

All of the optimizations that we’ve done are about making the engine’s work easier. The improvements fall into two groups:

Reducing bookkeeping — which means eliminating unnecessary work to organize stack frames

Cutting out intermediaries — which means taking the most direct path between functions

Let’s look at where each of these came into play.

Optimizing WebAssembly » JavaScript calls

When the engine is going through your code, it has to deal with functions that are speaking two different kinds of language—even if your code is all written in JavaScript.

Some of them—the ones that are running in the interpreter—have been turned into something called byte code. This is closer to machine code than JavaScript source code, but it isn’t quite machine code (and the interpreter does the work). This is pretty fast to run, but not as fast as it can possibly be.

Other functions — those which are being called a lot — are turned into machine code directly by the just-in-time compiler (JIT). When this happens, the code doesn’t run through the interpreter anymore.

So we have functions speaking two languages: byte code and machine code.

I think of these different functions which speak these different languages as being on different continents in our videogame.

The engine needs to be able to go back and forth between these continents. But when it does this jump between the different continents, it needs to have some information, like the place it left from on the other continent (which it will need to go back to). The engine also wants to keep the stack frames it creates on each continent separate.

To organize its work, the engine gets a folder and puts the information it needs for its trip in one pocket — for example, where it entered the continent from.

It will use the other pocket to store the stack frames. That pocket will expand as the engine accrues more and more stack frames on this continent.

Sidenote: if you’re looking through the code in SpiderMonkey, these “folders” are called activations.

Each time it switches to a different continent, the engine will start a new folder. The only problem is that to start a folder, it has to go through C++. And going through C++ adds significant cost.

This is the trampolining that I talked about in my first series on WebAssembly.

Every time you have to use one of these trampolines, you lose time.

In our continent metaphor, it would be like having to do a mandatory layover on Trampoline Point for every single trip between two continents.

So how did this make things slower when working with WebAssembly?

When we first added WebAssembly support, we had a different type of folder for it. So even though JIT-ed JavaScript code and WebAssembly code were both compiled and speaking machine language, we treated them as if they were speaking different languages. We were treating them as if they were on separate continents.

This was unnecessarily costly in two ways:

it creates an unnecessary folder, with the setup and teardown costs that come from that

it requires that trampolining through C++ (to create the folder and do other setup)

We fixed this by generalizing the code to use the same folder for both JIT-ed JavaScript and WebAssembly. It’s kind of like we pushed the two continents together, making it so you don’t need to leave the continent at all.

With this, calls from WebAssembly to JS were almost as fast as JS to JS calls.

We still had a little work to do to speed up calls going the other way, though.

Optimizing JavaScript » WebAssembly calls

Even in the case of JIT-ed JavaScript code, where JavaScript and WebAssembly are speaking the same language, they still use different customs.

Because JavaScript doesn’t have explicit types, types need to be figured out at runtime. The engine keeps track of the types of values by attaching a tag to the value.
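For the discussion that follows, picture a simple function like this (a hypothetical example, but it stands in for any JS call site):

function add(a, b) {
  return a + b;
}

add(1, 2); // both arguments arrive as boxed JS values, tagged as integers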

It’s as if the JS engine put a box around this value. The box contains that tag indicating what type this value is. For example, the zero at the end would mean integer.

In order to compute the sum of these two integers, the system needs to remove that box. It removes the box for a and then removes the box for b.

Then it adds the unboxed values together.

Then it needs to add that box back around the results so that the system knows the result’s type.

This turns what you expect to be 1 operation into 4 operations (unbox a, unbox b, add, re-box the result)… so in cases where you don’t need to box (as in statically typed languages) you don’t want to add this overhead.

Sidenote: JavaScript JITs can avoid these extra boxing/unboxing operations in many cases, but in the general case, like function calls, JS needs to fall back to boxing.

This is why WebAssembly expects parameters to be unboxed, and why it doesn’t box its return values. WebAssembly is statically typed, so it doesn’t need to add this overhead. WebAssembly also expects values to be passed in at a certain place — in registers rather than the stack that JavaScript usually uses.

If the engine took a parameter that it got from JavaScript, still wrapped inside its box, and handed it to a WebAssembly function, the WebAssembly function wouldn’t know how to use it.

So, before it gives the parameters to the WebAssembly function, the engine needs to unbox the values and put them in registers.

To do this, it would go through C++ again. So even though we didn’t need to trampoline through C++ to set up the activation, we still needed to do it to prepare the values (when going from JS to WebAssembly).

Going to this intermediary is a huge cost, especially for something that’s not that complicated. So it would be better if we could cut the middleman out altogether.

That’s what we did. We took the code that C++ was running — the entry stub — and made it directly callable from JIT code. When the engine goes from JavaScript to WebAssembly, the entry stub un-boxes the values and places them in the right place. With this, we got rid of the C++ trampolining.

I think of this as a cheat sheet. The engine uses it so that it doesn’t have to go to the C++. Instead, it can unbox the values when it’s right there, going between the calling JavaScript function and the WebAssembly callee.

So that makes calls from JavaScript to WebAssembly fast.

But in some cases, we can make it even faster. In fact, we can make these calls even faster than JavaScript » JavaScript calls in many cases.

Even faster JavaScript » WebAssembly: Monomorphic calls

When a JavaScript function calls another function, it doesn’t know what the other function expects. So it defaults to putting things in boxes.

But what about when the JS function knows that it is calling a particular function with the same types of arguments every single time? Then that calling function can know in advance how to package up the arguments in the way that the callee wants them.

This is an instance of the general JS JIT optimization known as “type specialization”. When a function is specialized, it knows exactly what the function it is calling expects. This means it can prepare the arguments exactly how that other function wants them… which means that the engine doesn’t need that cheat sheet or the extra unboxing work.

This kind of call — where you call the same function every time — is called a monomorphic call. In JavaScript, for a call to be monomorphic, you need to call the function with the exact same types of arguments each time. But because WebAssembly functions have explicit types, calling code doesn’t need to worry about whether the types are exactly the same — they will be coerced on the way in.

If you can write your code so that JavaScript is always passing the same types to the same WebAssembly exported function, then your calls are going to be very fast. In fact, these calls are faster than many JavaScript to JavaScript calls.
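In practice, that means keeping your call sites consistent. Here’s a hedged sketch, assuming a hypothetical WebAssembly instance whose module exports an add function taking two integers:

// Assume `instance` is a WebAssembly.Instance whose module exports add(i32, i32) -> i32.
const add = instance.exports.add;

// Monomorphic: every call passes the same types (two numbers), so the JIT can
// specialize this call site and skip the boxing/unboxing dance entirely.
let total = 0;
for (let i = 0; i < 1000; i++) {
  total = add(total, i);
}

// By contrast, mixing argument types at the same call site (sometimes numbers,
// sometimes strings) defeats that specialization, even though the values will
// still be coerced on the way in.
// add(total, '1');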

Future work

There’s only one case where an optimized call from JavaScript » WebAssembly is not faster than JavaScript » JavaScript. That is when JavaScript has in-lined a function.

The basic idea behind in-lining is that when you have a function that calls the same function over and over again, you can take an even bigger shortcut. Instead of having the engine go off to talk to that other function, the compiler can just copy that function into the calling function. This means that the engine doesn’t have to go anywhere — it can just stay in place and keep computing.

I think of this as the callee function teaching its skills to the calling function.

This is an optimization that JavaScript engines make when a function is being run a lot — when it’s “hot” — and when the function it’s calling is relatively small.
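As a simplified picture of what inlining does, the compiler effectively transforms something like the first function below into the second (hypothetical code; the real transformation happens on the engine’s internal representation, not your source):

// Before inlining: a hot loop calling a small function on every iteration.
function square(x) {
  return x * x;
}

function sumOfSquares(values) {
  let sum = 0;
  for (const v of values) {
    sum += square(v); // a function call each time around the loop
  }
  return sum;
}

// After inlining, roughly: the callee's body is copied into the caller,
// so the engine never has to "go anywhere".
function sumOfSquaresInlined(values) {
  let sum = 0;
  for (const v of values) {
    sum += v * v;
  }
  return sum;
}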

We can definitely add support for in-lining WebAssembly into JavaScript at some point in the future, and this is a reason why it’s nice to have both of these languages working in the same engine. This means that they can use the same JIT backend and the same compiler intermediate representation, so it’s possible for them to interoperate in a way that wouldn’t be possible if they were split across different engines.

Optimizing WebAssembly » Built-in function calls

There was one more kind of call that was slower than it needed to be: when WebAssembly functions were calling built-ins.

Built-ins are functions that the browser gives you, like Math.random. It’s easy to forget that these are just functions that are called like any other function.

Sometimes the built-ins are implemented in JavaScript itself, in which case they are called self-hosted. This can make them faster because it means that you don’t have to go through C++: everything is just running in JavaScript. But some functions are just faster when they’re implemented in C++.

Different engines have made different decisions about which built-ins should be written in self-hosted JavaScript and which should be written in C++. And engines often use a mix of both for a single built-in.

In the case where a built-in is written in JavaScript, it will benefit from all of the optimizations that we have talked about above. But when that function is written in C++, we are back to having to trampoline.

These functions are called a lot, so you do want calls to them to be optimized. To make it faster, we’ve added a fast path specific to built-ins. When you pass a built-in into WebAssembly, the engine sees that what you’ve passed it is one of the built-ins, at which point it knows how to take the fast-path. This means you don’t have to go through that trampoline that you would otherwise.

It’s kind of like we built a bridge over to the built-in continent. You can use that bridge if you’re going from WebAssembly to the built-in. (Sidenote: The JIT already did have optimizations for this case, even though it’s not shown in the drawing.)

With this, calls to these built-ins are much faster than they used to be.
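“Passing a built-in into WebAssembly” just means supplying it as an import when you instantiate the module. A minimal sketch (the module bytes, import names, and exported function are all assumptions):

// Hypothetical module that imports "math" "random" and calls it from WebAssembly.
const imports = {
  math: {
    random: Math.random // the built-in is passed straight through, with no JS wrapper
  }
};

WebAssembly.instantiate(wasmBytes, imports).then(({ instance }) => {
  instance.exports.doSomethingRandom();
});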

Future work

Currently, the built-ins we support this for are mostly limited to the math built-ins. That’s because WebAssembly currently only has support for integers and floats as value types.

That works well for the math functions because they work with numbers, but it doesn’t work out so well for other things like the DOM built-ins. So currently when you want to call one of those functions, you have to go through JavaScript. That’s what wasm-bindgen does for you.

But WebAssembly is getting more flexible types very soon. Experimental support for the current proposal has already landed in Firefox Nightly behind the pref javascript.options.wasm_gc. Once these types are in place, you will be able to call these other built-ins directly from WebAssembly without having to go through JS.

The infrastructure we’ve put in place to optimize the Math built-ins can be extended to work for these other built-ins, too. This will ensure many built-ins are as fast as they can be.

But there are still a couple of built-ins where you will need to go through JavaScript: for example, built-ins that are called as constructors (with new), or that use a getter or setter. These remaining built-ins will be addressed with the host-bindings proposal.

Conclusion

So that’s how we’ve made calls between JavaScript and WebAssembly fast in Firefox, and you can expect other browsers to do the same soon.

Thank you

Thank you to Benjamin Bouvier, Luke Wagner, and Till Schneidereit for their input and feedback.

]]>https://hacks.mozilla.org/2018/10/calls-between-javascript-and-webassembly-are-finally-fast-%f0%9f%8e%89/feed/26A New Way to Support MDNhttps://hacks.mozilla.org/2018/10/a-new-way-to-support-mdn/
https://hacks.mozilla.org/2018/10/a-new-way-to-support-mdn/#commentsWed, 03 Oct 2018 16:05:19 +0000https://hacks.mozilla.org/?p=32711MDN’s user base has grown exponentially in the last few years, so we are seeking support from our users to help accelerate content and platform development.

]]>Starting this week, some visitors may notice something new on the MDN Web Docs site, the comprehensive resource for information about developing on the open web.

We are launching an experiment on MDN Web Docs, seeking direct support from our users in order to accelerate growth of our content and platform. Not only has our user base grown exponentially in the last few years (with corresponding platform maintenance costs), but we also have a large list of cool new content, features, and programs we’d like to create that our current funding doesn’t fully cover.

In 2015, on our tenth anniversary (read about MDN’s evolution in the 10-year anniversary post), MDN had four million active monthly users. Now, just three years later, we have 12 million. Our last big platform update was in 2013. By asking for, and hopefully receiving, financial assistance from our users – which will be reinvested directly into MDN – we aim to speed up the modernization of MDN’s platform and offer more of what you love: content, features, and integration with the tools you use every day (like VS Code, Dev Tools, and others), plus better support for the 1,000+ volunteers contributing content, edits, tooling, and coding to MDN each month.

Currently, MDN is wholly funded by Mozilla Corporation, and has been since its inception in 2005. The MDN Product Advisory Board, formed in 2017, provides guidance and advice but not funding. The MDN board will never be pay-to-play, and although member companies may choose to sponsor events or other activities, sponsorship will never be a requirement for participation. This payment experiment was discussed at the last MDN board meeting and received approval from members.

Starting this week, approximately 1% of MDN users, chosen at random, will see a promotional box in the footer of MDN asking them to support MDN through a one-time payment.

Banner placement on MDN

Clicking on the “Support MDN” button will open the banner and allow you to enter payment information.

Payment page on MDN

If you don’t see the promotional banner on MDN but want to express your support, or read the FAQs, you can go directly to the payment page.

Because we want to keep things fully transparent, we’ll report how we spend the money on a monthly basis on MDN, so you can see what your support is paying for. We hope that, through this program, we will create a tighter, healthier loop between our audience (you), our content (written for and by you), and our supporters (also, you, again).

Throughout the next couple months, and into 2019, we plan to roll out additional ways for you to engage with and support MDN. We will never put the existing MDN Web Docs site behind a paywall. We recognize the importance of this resource for the web and the people who work on it.

]]>https://hacks.mozilla.org/2018/10/a-new-way-to-support-mdn/feed/5Hack on MDN: Better accessibility for MDN Web Docshttps://hacks.mozilla.org/2018/10/hack-on-mdn-better-accessibility-for-mdn-web-docs/
https://hacks.mozilla.org/2018/10/hack-on-mdn-better-accessibility-for-mdn-web-docs/#commentsTue, 02 Oct 2018 14:49:33 +0000https://hacks.mozilla.org/?p=32706Making websites accessible to a wide range of users is a vital topic for creators on the web. Over a long weekend in late September, more than twenty people met in London to work on accessibility on the MDN Web Docs website — both the content about accessibility and the accessibility of the site itself. The result was a considerable refresh and new opportunities to continue the projects begun.

]]>From Saturday, September 22 to Monday, September 24, more than twenty people met in London to work on improving accessibility on MDN Web Docs — both the content about accessibility and the accessibility of the site itself. While much remains to be done, the result was a considerable refresh in both respects.

Attendees at Hack on MDN listen to a lightning talk by Eva Ferreira. Photo by Adrian Roselli.

Hack on MDN events

Hack on MDN events evolved from the documentation sprints for MDN that were held from 2010 to 2013, which brought together staff members and volunteers to write and localize content on MDN over a weekend. As implied by the name, “Hack on MDN” events expand the range of participants to include those with programming and design skills. In its current incarnation, each Hack on MDN event has a thematic focus. One in March of this year focused on browser compatibility data.

The Hack on MDN format is a combination of hackathon and unconference; participants pitch projects and commit to working on concrete tasks (rather than meetings or long discussions) that can be completed in three days or less. People self-organize to work on projects in which a group can make significant progress over a long weekend. Lightning talks provide an unconference break from projects.

Accessibility on MDN Web Docs

Making websites accessible to a wide range of users, including those with physical or cognitive limitations, is a vital topic for creators on the web. Yet information about accessibility on MDN Web Docs was sparse and often outdated. Similarly, the accessibility of the site had eroded over time. Therefore, accessibility was chosen as the theme for the September 2018 Hack on MDN.

Hack on MDN Accessibility in London

The people who gathered at Campus London (thanks to Google for the space) included writers, developers, and accessibility experts, from within and outside of Mozilla. After a round of introductions, there was a “pitch” session presenting ideas for projects to work on. Participants rearranged themselves into project groups, and the hacking began. Adrian Roselli gave a brief crash course on accessibility for non-experts in the room, which he fortunately had up his sleeve and was able to present while jet-lagged.

At the end of each morning and afternoon, we did a status check-in to see how work was progressing. On Sunday and Monday, there were also lightning talks, where anyone could present anything that they wanted to share. Late Sunday afternoon, some of us took some time out to explore some of the offerings of the Shoreditch Design Triangle, including playing with a “font” comprised of (more or less sit-able) chairs.

Stephanie Hobson submitted several pull requests to improve the usability of the MDN Web Docs site for users of screen readers, such as moving the link to each section heading after the heading text, and moving the close button for menus to the top of the menu.

Also, a fun time was had and the group enjoyed working together. Check the #HackOnMDN tag on Twitter for photos, “overheard” quotes, nail art by @ninjanails and more. Also see blog posts by Adrian Roselli and Hidde de Vries for their perspectives and more details.

What’s next?

There is plenty of work left to make MDN’s accessibility content up-to-date and useful. The list of ARIA roles, states, and properties is far from complete. More reference pages need “accessibility concerns” information added. The accessibility of the MDN Web Docs site still can be improved. As a result of the enthusiasm from this event, discussions are starting about doing a mini-hack in connection with an upcoming accessibility conference.

If you find issues that need to be addressed, please file a bug against the site or the content. Better yet, get involved in improving MDN Web Docs. If you’re not sure where to begin, visit the MDN community forum to ask any questions you might have about how to make MDN more awesome (and accessible). We’d love to have your help!