Over the past few months, we've continued to leverage the features of ARKit on iOS to enhance the WebXR Viewer app and explore ideas and issues with WebXR. One big question with WebXR on modern AR and VR platforms is how best to leverage the platform to provide a frictionless experience while also supporting the advanced capabilities users will expect, in a safe and platform-independent way.

We recently released an update to the WebXR Viewer that fixes some small bugs and updates the app to iOS 12 and ARKit 2.0 (we haven't exposed all of ARKit 2.0 yet, but expect to over the coming months). Beyond just bug fixes, two features of the new app highlight interesting questions for WebXR related to privacy, friction and platform independence.

First, Web browsers can decrease friction for users moving from one AR experience to another by managing the underlying platform efficiently and not shutting it down completely between sessions, but care needs to be taken not to expose data to applications that might surprise users.

Second, some advanced features imagined for WebXR are not (yet) available in a cross platform way, such as shareable world maps or persistent anchors. These capabilities are core to experiences users will expect, such as persistent content in the world or shared experiences between multiple co-located people.

In both cases, it is unclear what the right answer is.

Frictionless Experience and User Privacy

Hypothesis: regardless of how the underlying platform is used, when a new WebXR web page is loaded, it should only get information about the world that would be available if it were loaded for the first time, and not see existing maps or anchors from previous pages.

Consider the image (and video) below. The image shows the results of running the "World Knowledge" sample, and spending a few minutes walking from the second floor of a house, down the stairs to the main floor, around and down the stairs to the basement, and then back up and out the front door into the yard. Looking back at the house, you can see small planes for each stair, the floor and some parts of the walls (they are the translucent green polygons). Even after just a few minutes of running ARKit, a surprising amount of information can be exposed about the interior of a space.

If the same user visits another web page, the browser could choose to restart ARKit or not. Restarting results in a high-friction user experience: all knowledge of the world is lost, requiring the user to scan their environment to reinitialize the underlying platform. Not restarting, however, might expose information to the new web page that is surprising to the user. Since the page is visited while outside the house, a user might not expect it to have access to details of the interior.

In the WebXR Viewer, we do not reinitialize ARKit for each page. We made the decision that if a page is reloaded without visiting a different XR page, we leave ARKit running and all world knowledge is retained. This allows pages to be reloaded without completely restarting the experience. When a new WebXR page is visited, we keep ARKit running, but destroy all ARKit anchors and world knowledge (i.e., ARKit ARAnchors, such as ARPlaneAnchors) that are further than some threshold distance from the user (3 meters, by default, in our current implementation).
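To make the policy concrete, here is an illustrative JavaScript sketch of the pruning step (the app actually does this natively against ARKit's anchor list; the names here are hypothetical):

const ANCHOR_RETENTION_RADIUS = 3.0; // meters; the app's default threshold

// Hypothetical helper: Euclidean distance between two world-space positions.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// When a new WebXR page is visited, keep ARKit running but drop any
// anchor (including plane anchors) farther than the threshold from the user.
function pruneWorldKnowledge(anchors, userPosition) {
  return anchors.filter(
    anchor => distance(anchor.position, userPosition) <= ANCHOR_RETENTION_RADIUS
  );
}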

In the video below, we demonstrate this behavior. When the user changes from the "World Knowledge" sample to the "Hit Test" sample, internally we destroy most of the anchors. When the user changes back to the "World Knowledge" sample, we again destroy most of the anchors. You can see at the end of the video that only the nearby planes still exist (the plane under the user and some of the planes on the front porch). Further planes (inside the house, in this case) are gone. (Visiting non-XR pages does not count as visiting another page, although we also shut down ARKit after a short time, to save battery, if the browser is not on an XR page, which destroys all world knowledge as well.)

While this is a relatively simplistic approach to this tradeoff between friction and privacy, issues like these need to be considered when implementing WebXR inside a browser. Modern AR and VR platforms (such as Microsoft's Hololens or Magic Leap's ML1) are capable of synthesizing and exposing highly detailed maps of the environment, and retaining significant information over time. In these platforms, the world space model is retained over time and exposed to apps, so even if the browser restarts the underlying API for each visited page, the full model of the space is available unless the browser makes an explicit choice to not expose it to the web page.

Consider, for example, a user walking a similar path for a similarly short time in the above house while wearing a Microsoft Hololens. In this case, a map of the same environment is shown below.

This image (captured with Microsoft's debugging tools, while the user is sitting at a computer in the basement of the house, shown as the sphere and green view volume) is significantly more detailed than the ARKit planes. And it would be retained, improved and shared with all apps in this space as the user continues to wear and use the Hololens.

In both cases, the ARKit planes and Hololens maps were captured based on just a few minutes of walking in this house. Imagine the level of detail that might be available after extended use.

Platform-specific Capabilities

Hypothesis: advanced capabilities such as World Mapping, which are needed for user experiences that require persistent and shared content, will need cross-platform analogs to the platform silos currently available if the platform-independent character of the web is to extend to AR and VR.

ARKit 2.0 introduces the possibility of retrieving the current model of the world (the so-called ARWorldMap) that ARKit uses for tracking planes and anchors in the world. The map can then be saved and/or shared with others, enabling both persistent and multi-user AR experiences.

In this version of the WebXR Viewer, we want to explore some ideas for persistent and shared experiences, so we added session.getWorldMap() and session.setWorldMap(map) commands to an active AR session (these can be seen in the "Persistence" sample, a small change to the "World Knowledge" sample above).
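Here is a sketch of how a page might use these commands, assuming promise-based results, hypothetical /maps endpoints, and an active session that has been granted the necessary permission:

// Retrieve the current ARKit world map (requires "world knowledge" access).
const worldMap = await session.getWorldMap();

// Persist it somewhere (a server, in this sketch) so a later visit,
// or another device, can restore it. The map is treated as an opaque blob.
await fetch('/maps', { method: 'POST', body: worldMap });

// Later: hand a saved map back to ARKit to attempt relocalization.
const saved = await (await fetch('/maps/latest')).arrayBuffer();
await session.setWorldMap(saved);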

These capabilities raise questions of user privacy. ARKit's ARWorldMap is an opaque binary ARKit data structure, and may contain a surprising amount of data about the space that could be extracted by determined application developers (the format is undocumented). Because of this, we leverage the existing privacy settings in the WebXR Viewer, and allow apps to retrieve the world map if (and only if) the user has given the page access to "world knowledge".

On the other hand, the WebXR Viewer allows a page to provide an ARWorldMap to ARKit and try to use it for relocalization with no heightened permissions. In theory, such an action could allow a malicious web app to "probe" the world by having the browser test if the user is in a certain location. In practice, such an attack seems infeasible: loading a map resets ARKit (a highly disruptive and visible action) and relocalizing the phone against a map takes an indeterminate amount of time regardless of whether the relocalization eventually succeeds or not.

While implementing these commands was trivial, exposing this capability raises a fundamental question for the design of WebXR (beyond questions of permissions and possible threats). Specifically, how might such capabilities eventually work in a cross-platform way, given that each XR platform is implementing these capabilities differently?

We have no answer for this question. For example, some devices, such as Hololens, allow spaces to be saved and shared, much like ARKit. But other platforms opt to only share anchors, or do not (yet) allow sharing at all. Over time, we hope some common ground might emerge. Google has implemented their ARCore Cloud Anchors on both ARKit and ARCore; perhaps a similar approach could be taken that is more open, independent of one company's infrastructure, and could thus be standardized across many platforms.

Looking Forward

These issues are two of many issues that are being discussed and considered by the Immersive Web Community Group as we work on the initial WebXR Device API specification. If you want to see the full power of the various XR platforms exposed and available on the Web, done in a way that preserves the open, accessible and safe character of the Web, please join the discussion and help us ensure the success of the XR Web.

WebRender newsletter #31

Greetings! I’ll introduce WebRender’s 31st newsletter with a few words about batching.

Efficiently submitting work to GPUs isn’t as straightforward as one might think. It is not unusual for a CPU renderer to go through each graphic primitive (a blue filled circle, a purple stroked path, an image, etc.) in z-order to produce the final rendered image. While this isn’t the most efficient approach, for a CPU renderer it matters more to optimize the inner loop of the algorithm that renders each individual object than to reduce the overhead of alternating between various types of primitives. GPUs, however, work quite differently, and the cost of submitting small workloads is often higher than the time spent executing them.

I won’t go into the details of why GPUs work this way here, but the big takeaway is that it is best to not think of a GPU API draw call as a way to draw one thing, but rather as a way to submit as many items of the same type as possible. If we implement a shader to draw images, we get much better performance out of drawing many images in a single draw call than submitting a draw call for each image. I’ll call a “batch” any group of items that is rendered with a single drawing command.

So the solution is simply to render all images in a draw call, and then all of the text, then all gradients, right? Well, it’s a tad more complicated because the submission order affects the result. We don’t want a gradient to overwrite text that is supposed to be rendered on top of it, so we have to maintain some guarantees about the order of the submissions for overlapping items.

In the 29th newsletter intro I talked about culling and the way we used to split the screen into tiles to accelerate discarding hidden primitives. This tiling system was also good at simplifying the problem of batching. In order to batch two primitives together we need to make sure that there is no primitive of a different type in between. Comparing all primitives on screen against every other primitive would be too expensive but the tiling scheme reduced this complexity a lot (we then only needed to compare primitives assigned to the same tile).

In the culling episode I also wrote that we removed the screen space tiling in favor of using the depth buffer for culling. This might sound like a regression for the performance of the batching code, but the depth buffer also introduced a very nice property: opaque elements can be drawn in any order without affecting correctness! This is because we store the z-index of each pixel in the depth buffer, so if some text is hidden by an opaque image we can still render the image before the text and the GPU will be configured to automatically discard the pixels of the text that are covered by the image.
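The same property shows up in plain WebGL; a minimal sketch of enabling the depth test, after which the GPU discards occluded opaque fragments no matter the submission order (the canvas variable is assumed to exist):

const gl = canvas.getContext('webgl');
gl.enable(gl.DEPTH_TEST); // store a depth (z) value per pixel
gl.depthFunc(gl.LESS);    // keep a fragment only if it is closer than what is already there
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// Opaque items can now be drawn in any order: fragments that land behind
// an already-drawn, closer surface fail the depth test and are discarded.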

In WebRender this means we were able to separate primitives into two groups: the opaque ones, and the ones that need to perform some blending. Batching opaque items is trivial since we are free to just put all opaque items of the same type in their own batch regardless of their painting order. For blended primitives we still need to check for overlaps, but we have fewer primitives to consider. Currently WebRender simply iterates over the last 10 blended primitives to see if there is a suitable batch with no other type of primitive overlapping in between, and defaults to starting a new batch otherwise. We could go for a more elaborate strategy, but this has turned out to work well so far since we put a lot more effort into moving as many primitives as possible into the opaque passes.
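In sketch form (illustrative JavaScript, not WebRender's actual Rust code), the blended-batch search looks roughly like this:

const MAX_LOOKBACK = 10;

// Axis-aligned bounding-box intersection, as a stand-in overlap test.
function overlaps(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// prims: blended primitives in back-to-front paint order.
function buildBlendedBatches(prims) {
  const batches = [];
  for (const prim of prims) {
    let target = null;
    const stop = Math.max(0, batches.length - MAX_LOOKBACK);
    for (let i = batches.length - 1; i >= stop; i--) {
      if (batches[i].kind === prim.kind) { target = batches[i]; break; }
      // An overlapping batch of another kind blocks the merge:
      // drawing past it would break the paint order.
      if (batches[i].items.some(item => overlaps(item, prim))) break;
    }
    if (target) target.items.push(prim);
    else batches.push({ kind: prim.kind, items: [prim] });
  }
  return batches;
}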

In another episode I’ll describe how we pushed this one step further and made it possible to segment primitives into the opaque and non-opaque parts and further reduce the amount of blended pixels.

In September the European Commission proposed a new regulation that seeks to tackle the spread of ‘terrorist’ content on the internet. As we’ve noted already, the Commission’s proposal would seriously undermine internet health in Europe, by forcing companies to aggressively suppress user speech with limited due process and user rights safeguards. Here we unpack the proposal’s shortfalls, and explain how we’ll be engaging on it to protect our users and the internet ecosystem.

At the same time lawmakers in Europe have made online safety a major political priority, and the Terrorist Content regulation is the latest legislative initiative designed to tackle illegal and harmful content on the internet. Yet, while terrorist acts and terrorist content are serious issues, the response that the European Commission is putting forward with this legislative proposal is unfortunately ill-conceived, and will have many unintended consequences. Rather than creating a safer internet for European citizens and combating the serious threat of terrorism in all its guises, this proposal would undermine due process online; compel the use of ineffective content filters; strengthen the position of a few dominant platforms while hampering European competitors; and, ultimately, violate the EU’s commitment to protecting fundamental rights.

Many elements from the proposal are worrying, including:

The definition of ‘terrorist’ content is extremely broad, opening the door for a huge amount of over-removal (including the potential for discriminatory effect) and the resulting risk that much lawful and public interest speech will be indiscriminately taken down;

Government-appointed bodies, rather than independent courts, hold the ultimate authority to determine illegality, with few safeguards in place to ensure these authorities act in a rights-protective manner;

The aggressive one hour timetable for removal of content upon notification is barely feasible for the largest platforms, let alone the many thousands of micro, small and medium-sized online services whom the proposal threatens;

Companies could be forced to implement ‘proactive measures’ including upload filters, which, as we’ve argued before, are neither effective nor appropriate for the task at hand; and finally,

The proposal risks making content removal an end in itself, simply pushing terrorist content off the open internet rather than tackling the underlying serious crimes.

As the European Commission acknowledges in its impact assessment, the severity of the measures that it proposes could only ever be justified by the serious nature of terrorism and terrorist content. On its face, this is a plausible assertion. However, the evidence base underlying the proposal does not support the Commission’s approach. For as the Commission’s own impact assessment concedes, the volume of ‘terrorist’ content on the internet is on a downward trend, and only 6% of Europeans have reported seeing terrorist content online, realities which heighten the need for proportionality to be at the core of the proposal. Linked to this, the impact assessment predicts that an estimated 10,000 European companies are likely to fall within this aggressive new regime, even though data from the EU’s police cooperation agency suggests terrorist content is confined to circa 150 online services.

Moreover, the proposal conflates online speech with offline acts, despite the reality that the causal link between terrorist content online, radicalisation, and terrorist acts is far more nuanced. Within the academic research around terrorism and radicalisation, no clear and direct causal link between terrorist speech and terrorist acts has been established (see in particular, research from UNESCO and RAND). With respect to radicalisation in particular, the broad research suggests exposure to radical political leaders and socio-economic factors are key components of the radicalisation process, and online speech is not a determinant. On this basis, the high evidential bar that is required to justify such a serious interference with fundamental rights and the health of the internet ecosystem is not met by the Commission. And in addition, the shaky evidence base demands that the proposal be subject to far greater scrutiny than it has been afforded thus far.

Besides these concerns, it is saddening that this new legislation is likely to create a legal environment that will entrench the position of the largest commercial services that have the resources to comply, undermining the openness on which a healthy internet thrives. By setting a scope that covers virtually every service that hosts user content, and a compliance bar that only a handful of companies are capable of reaching, the new rules are likely to engender a ‘retreat from the edge’, as smaller, agile services are unable to bear the cost of competing with the established players. In addition, the imposition of aggressive take-down timeframes and automated filtering obligations is likely to further diminish Europe’s standing as a bastion for free expression and due process.

Ultimately, the challenge of building sustainable and rights-protective frameworks for tackling terrorism is a formidable one, and one that is exacerbated when the internet ecosystem is implicated. With that in mind, we’ll continue to highlight how the nuanced interplay between hosting services, terrorist content, and terrorist acts means this proposal requires far more scrutiny, deliberation, and clarification. At the very least, any legislation in this space must include far greater rights protection, measures to ensure that suppression of online content doesn’t become an end in itself, and a compliance framework that doesn’t make the whole internet march to the beat of a handful of large companies.

LPCNet is a new project out of Mozilla’s Emerging Technologies group — an efficient neural speech synthesiser with reduced complexity over some of its predecessors. Neural speech synthesis models like WaveNet have already demonstrated impressive speech synthesis quality, but their computational complexity has made them hard to use in real-time, especially on phones. In a similar fashion to the RNNoise project, our solution with LPCNet is to use a combination of deep learning and digital signal processing (DSP) techniques.

Figure 1: Screenshot of a demo player that demonstrates the quality of LPCNet-synthesized speech.

LPCNet can help improve the quality of text-to-speech (TTS), low bitrate speech coding, time stretching, and more. You can hear the difference for yourself in our LPCNet demo page, where LPCNet and WaveNet speech are generated with the same complexity. The demo also explains the motivations for LPCNet, shows what it can achieve, and explores its possible applications.

You can find an in-depth explanation of the algorithm used in LPCNet in this paper.

Countries around the world are considering how to protect their citizens’ data – but there continues to be a lack of comprehensive privacy protections for American internet users. That could change. The National Telecommunications and Information Administration (NTIA) recently proposed an outcome-based framework for consumer data privacy, reflecting internationally accepted principles for privacy and data protection. Mozilla believes that the NTIA framework represents a good start to addressing many of these challenges, and we offered our thoughts to help Americans realize the same protections enjoyed by users in other countries around the world (you can see all the comments that were received at the NTIA’s website).

Mozilla has always been committed to strong privacy protections, user controls, and security tools in our policies and in the open source code of our products. We are pleased that the NTIA has embraced similar considerations in its framework, including the need for user control over the collection and use of information; minimization in the collection, storage, and the use of data; and security safeguards for personal information. While we generally support these principles, we also encourage the NTIA to pursue a more granular set of outcomes to provide more guidance for covered entities.

To supplement the proposed framework, our submission encourages the NTIA to adopt the following additional recommendations to protect user privacy:

Include the explicit right to object to the processing of personal data as a core component of reasonable user control.

Mandate the use of security and role-based access controls and protections against unlawful disclosures.

Close the current gap in FTC oversight to cover telecommunications carriers and major non-profits that handle significant amounts of personal information.

Expand FTC authority to provide the agency with the ability to make rules and impose civil penalties to deter future violations of consumer privacy.

Provide the FTC with more resources and staff to better address threats in a rapidly evolving field.

In its request for comment, the NTIA stated that it believes that the United States should lead on privacy. The framework outlined by the agency represents a promising start to those efforts, and we are encouraged that the NTIA has sought the input of a broad variety of stakeholders at this pivotal juncture. But if the U.S. plans to lead on privacy, it must invest accordingly and provide the FTC with the legal tools and resources to demonstrate that commitment. Ultimately, this will lead to long-term benefits for users and internet-based businesses, providing greater certainty for data-driven entities and flexibility to address future threats.

Decentralizing Social Interactions with ActivityPub

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

Social websites first got us talking and sharing with our friends online, then turned into echo-chamber content silos, and finally emerged in their mature state as surveillance capitalist juggernauts, powered by the effluent of our daily lives online. The tail isn’t just wagging the dog, it’s strangling it. However, there just might be a way forward that puts users back in the driver’s seat: a new set of specifications for decentralizing social activity on the web. Today you’ll get a helping hand into that world from Darius Kazemi, renowned bot-smith and Mozilla Fellow.

– Dietrich Ayala

Introducing ActivityPub

Hi, I’m Darius Kazemi. I’m a Mozilla Fellow and decentralized web enthusiast. In the last year I’ve become really excited about ActivityPub, a W3C standard protocol that describes ways for different social network sites (loosely defined) to talk to and interact with one another. You might remember the heyday of RSS, when a user could subscribe to almost any content feed in the world from any number of independently developed feed readers. ActivityPub aims to do for social network interactions what RSS did for content.

Architecture

ActivityPub enables a decentralized social web, where a network of servers interact with each other on behalf of individual users/clients, very much like email operates at a macro level. On an ActivityPub compliant server, individual user accounts have an inbox and an outbox that accept HTTP GET and POST requests via API endpoints. They usually live somewhere like https://social.example/users/dariusk/inbox and https://social.example/users/dariusk/outbox, but they can really be anywhere as long as they are at a valid URI. Individual users are represented by an Actor object, which is just a JSON-LD file that gives information like username and where the inbox and outbox are located so you can talk to the Actor. Every message sent on behalf of an Actor has the link to the Actor’s JSON-LD file so anyone receiving the message can look up all the relevant information and start interacting with them.
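For instance, a minimal Actor object might look something like this (trimmed for illustration; real servers also include public keys and other metadata):

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://social.example/users/dariusk",
  "type": "Person",
  "preferredUsername": "dariusk",
  "inbox": "https://social.example/users/dariusk/inbox",
  "outbox": "https://social.example/users/dariusk/outbox"
}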

A simple server to send ActivityPub messages

One of the most popular social network sites that uses ActivityPub is Mastodon, an open source community-owned and ad-free alternative to social network services like Twitter. But Mastodon is a huge, complex project and not the best introduction to the ActivityPub spec as a developer. So I started with a tutorial written by Eugen Rochko (the principal developer of Mastodon) and created a partial reference implementation written in Node.js and Express.js called the Express ActivityPub server.

The purpose of the software is to serve as the simplest possible starting point for developers who want to build their own ActivityPub applications. I picked what seemed to me like the smallest useful subset of ActivityPub features: the ability to publish an ActivityPub-compliant feed of posts that any ActivityPub user can subscribe to. Specifically, this is useful for non-interactive bots that publish feeds of information.

To get started with the Express ActivityPub server in a local development environment, clone the repository and install its dependencies; assuming the repository location below, the steps are:
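git clone https://github.com/dariusk/express-activitypub.git
cd express-activitypub
npm install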

In order to truly test the server it needs to be associated with a valid, https-enabled domain or subdomain. For local testing I like to use ngrok to test things out on one of the temporary domains that they provide. First, install ngrok using their instructions (you have to sign in but there is a free tier that is sufficient for local debugging). Next run:

ngrok http 3000

This will show a screen on your console that includes a domain like abcdef.ngrok.io. Make sure to note that down, as it will serve as your temporary test domain as long as ngrok is running. Keep this running in its own terminal session while you do everything else.

Then go to your config.json in the express-activitypub directory and update the DOMAIN field to whatever abcdef.ngrok.io domain that ngrok gave you (don’t include the http://), and update USER to some username and PASS to some password. These are the administrative credentials required for creating new users on the server. When testing locally via ngrok you don’t need to specify the PRIVKEY_PATH or CERT_PATH.

Next run your server:

node index.js

Go to https://abcdef.ngrok.io/admin (again, replace the subdomain) and you should see an admin page. You can create an account here by giving it a name and then entering the admin user/pass when prompted. Try making an account called “test” — it will give you a long API key that you should save somewhere. Then open an ActivityPub client like Mastodon’s web interface and try following @test@abcdef.ngrok.io. It should find the account and let you follow!

Back on the admin page, you’ll notice another section called “Send message to followers” — fill this in with “test” as the username, the hex key you just noted down as the password, and then enter a message. It should look like this:

Screenshot of form

Hit “Send Message” and then check your ActivityPub client. In the home timeline you should see your message from your account, like so:

Post in Mastodon mobile web view

And that’s it! It’s not incredibly useful on its own but you can fork the repository and use it as a starting point to build your own services. For example, I used it as the foundation of an RSS-to-ActivityPub conversion service that I wrote (source code here). There are of course other services that could be built using this. For example, imagine a replacement for something like MailChimp where you can subscribe to updates for your favorite band, but instead of getting an email, everyone who follows an ActivityPub account will get a direct message with album release info. Also it’s worth browsing the predefined Activity Streams Vocabulary to see what kind of events the spec supports by default.

Learn More

There is a whole lot more to ActivityPub than what I’ve laid out here, and unfortunately there aren’t a lot of learning resources beyond the specs themselves and conversations on various issue trackers.

If you’d like to know more about ActivityPub, you can of course read the ActivityPub spec. It’s important to know that while the ActivityPub spec lays out how messages are sent and received, the different types of messages are specified in the Activity Streams 2.0 spec, and the actual formatting of the messages that are sent is specified in the Activity Streams Vocabulary spec. It’s important to familiarize yourself with all three.

You can join the Social Web Incubator Community Group, a W3C Community Group, to participate in discussions around ActivityPub and other social web tech standards. They have monthly meetings that you can dial into that are listed on the wiki page.

And of course if you’re on an ActivityPub social network service like Mastodon or Pleroma, the #ActivityPub hashtag there is always active.

Mozilla took the next step today in the fight to defend the web and consumers from the FCC’s attack on an open internet. Together with other petitioners, Mozilla filed our reply brief in our case challenging the FCC’s elimination of critical net neutrality protections that require internet providers to treat all online traffic equally.

The fight for net neutrality, while not a new one, is an important one. We filed this case because we believe that the internet works best when people control for themselves what they see and do online.

The FCC’s removal of net neutrality rules is not only bad for consumers, it is also unlawful. The protections in place were the product of years of deliberation and careful fact-finding that proved the need to protect consumers, who often have little or no choice of internet provider. The FCC is simply not permitted to arbitrarily change its mind about those protections based on little or no evidence. It is also not permitted to ignore its duty to promote competition and protect the public interest. And yet, the FCC’s dismantling of the net neutrality rules unlawfully removes long-standing rules that have ensured the internet provides a voice for everyone.

Meanwhile, the FCC’s defenses of its actions and the supporting arguments of large cable and telco company ISPs, who have come to the FCC’s aid, are misguided at best. They mischaracterize the internet’s technical structure as well as the FCC’s mandate to advance internet access, and they ignore clear evidence that there is little competition among ISPs. They repeatedly contradict themselves and have even introduced new justifications not outlined in the FCC’s original decision to repeal net neutrality protections.

Nothing we have seen from the FCC since this case began has changed our mind. Our belief in this action remains as strong as it was when the FCC’s plan to undo net neutrality protections was first met last year with outcry from consumers, small businesses and advocates across the country.

We will continue to do all that we can to support an open and vibrant internet that is a resource accessible to all. We look forward to making our arguments directly before the D.C. Court of Appeals and the public. FCC, we’ll see you in court on February 1.

The Power of Web Components

Web Components comprises a set of standards that enable user-defined HTML elements. These elements can go in all the same places as traditional HTML. Despite the long standardization process, the emerging promise of Web Components puts more power in the hands of developers and creators.

Background

Ever since the first animated DHTML cursor trails and “Site of the Week” badges graced the web, re-usable code has been a temptation for web developers. And ever since those heady days, integrating third-party UI into your site has been, well, a semi-brittle headache.

Using other people’s clever code has required buckets of boilerplate JavaScript or CSS conflicts involving the dreaded !important. Things are a bit better in the world of React and other modern frameworks, but it’s a bit of a tall order to require the overhead of a full framework just to re-use a widget. HTML5 introduced a few new elements like <video> and <input type="date">, which added some much-needed common UI widgets to the web platform. But adding new standard elements for every sufficiently common web UI pattern isn’t a sustainable option.

In response, a handful of web standards were drafted. Each standard has some independent utility, but when used together, they enable something that was previously impossible to do natively, and tremendously difficult to fake: the capability to create user-defined HTML elements that can go in all the same places as traditional HTML. These elements can even hide their inner complexity from the site where they are used, much like a rich form control or video player.

The standards evolve

As a group, the standards are known as Web Components. In the year 2018 it’s easy to think of Web Components as old news. Indeed, early versions of the standards have been around in one form or another in Chrome since 2014, and polyfills have been clumsily filling the gaps in other browsers.

After some quality time in the standards committees, the Web Components standards were refined from their early form, now called version 0, to a more mature version 1 that is seeing implementation across all the major browsers. Firefox 63 added support for two of the tent pole standards, Custom Elements and Shadow DOM, so I figured it’s time to take a closer look at how you can play HTML inventor!

Given that Web Components have been around for a while, there are lots of other resources available. This article is meant as a primer, introducing a range of new capabilities and resources. If you’d like to go deeper (and you definitely should), you’d do well to read more about Web Components on MDN Web Docs and the Google Developers site.

Defining your own working HTML elements requires new powers the browser didn’t previously give developers. I’ll be calling out these previously-impossible bits in each section, as well as what other newer web technologies they draw upon.

The <template> element: a refresher

This first element isn’t quite as new as the others, as the need it addresses predates the Web Components effort. Sometimes you just need to store some HTML. Maybe it’s some markup you’ll need to duplicate multiple times, maybe it’s some UI you don’t need to create quite yet. The <template> element takes HTML and parses it without adding the parsed DOM to the current document.

Where does that parsed HTML go, if not to the document? It’s added to a “document fragment”, which is best understood as a thin wrapper that contains a portion of an HTML document. Document fragments dissolve when appended to other DOM, so they’re useful for holding a bunch of elements you want later, in a container you don’t need to keep.

“Well okay, now I have some DOM in a dissolving container, how do I use it when I need it?”

You could simply insert the template’s document fragment into the current document:
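const template = document.querySelector('template'); // assumes a <template> in the page
document.body.appendChild(template.content);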

This works just fine, except you just dissolved the document fragment! If you run the above code twice you’ll get an error, as the second time template.content is gone. Instead, we want to make a copy of the fragment prior to inserting it:

document.body.appendChild(template.content.cloneNode(true));

The cloneNode method does what it sounds like, and it takes an argument specifying whether to copy just the node itself or include all its children.

The template tag is ideal for any situation where you need to repeat an HTML structure. It particularly comes in handy when defining the inner structure of a component, and thus <template> is inducted into the Web Components club.

New Powers:

An element that holds HTML but doesn’t add it to the current document.

Custom Elements

Custom Elements is the poster child for the Web Components standards. It does what it says on the tin – allowing developers to define their own custom HTML elements. Making this possible and pleasant builds fairly heavily on top of ES6’s class syntax, where the v0 syntax was much more cumbersome. If you’re familiar with classes in JavaScript or other languages, you can define classes that inherit from or “extend” other classes:

class MyClass extends BaseClass {
// class definition goes here
}

Well, what if we were to try this?

class MyElement extends HTMLElement {}

Until recently that would have been an error. Browsers didn’t allow the built-in HTMLElement class or its subclasses to be extended. Custom Elements unlocks this restriction.

The browser knows that a <p> tag maps to the HTMLParagraphElement class, but how does it know what tag to map to a custom element class? In addition to extending built-in classes, there’s now a “Custom Element Registry” for declaring this mapping:

customElements.define('my-element', MyElement);

Now every <my-element> on the page is associated with a new instance of MyElement. The constructor for MyElement will be run whenever the browser parses a <my-element> tag.

What’s with that dash in the tag name? Well, the standards bodies want the freedom to create new HTML tags in the future, and that means that developers can’t just go creating an <h7> or <vr> tag. To avoid future conflicts, all custom elements must contain a dash, and standards bodies promise to never make a new HTML tag containing a dash. Collision avoided!

In addition to having your constructor called whenever your custom element is created, there are a number of additional “lifecycle” methods that are called on a custom element at various moments:

connectedCallback is called when an element is appended to a document. This can happen more than once, e.g. if the element is moved or removed and re-added.

disconnectedCallback is the counterpart to connectedCallback.

attributeChangedCallback fires when attributes listed in the class’s static observedAttributes array are modified on the element.
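Custom Elements can also extend specific built-in elements such as <button>. A minimal sketch of such a “customized built-in” (the hey-there name matches the markup below; the greeting behavior is assumed for illustration):

class HeyThere extends HTMLButtonElement {
  connectedCallback() {
    this.addEventListener('click', () => {
      alert(`Hey there, ${this.getAttribute('name')}!`);
    });
  }
}
// The third argument declares which built-in tag this class customizes.
customElements.define('hey-there', HeyThere, { extends: 'button' });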

Because we’re extending an existing tag, we actually use the existing tag instead of our custom tag name. We use the new special is attribute to tell the browser what kind of button we’re using:

<button is="hey-there" name="World">Howdy</button>

It may seem a bit clunky at first, but assistive technologies and other scripts wouldn’t know our custom element is a kind of button without this special markup.

From here, all the classic web widget techniques apply. We can set up a bunch of event handlers, add custom styling, and even stamp out an inner structure using <template>. People can use your custom element alongside their own code, via HTML templating, DOM calls, or even new-fangled frameworks, several of which support custom tag names in their virtual DOM implementations. Because the interface is the standard DOM interface, Custom Elements allows for truly portable widgets.

New Powers

The ability to extend the built-in ‘HTMLElement’ class and its subclasses

A custom element registry, available via customElements.define()

Special lifecycle callbacks for detecting element creation, insertion to the DOM, attribute changes, and more.

Shadow DOM

We’ve made our friendly custom element, we’ve even thrown on some snazzy styling. We want to use it on all our sites, and share the code with others so they can use it on theirs. How do we prevent the nightmare of conflicts when our customized <button> element runs face-first into the CSS of other sites? Shadow DOM provides a solution.

The Shadow DOM standard introduces the concept of a shadow root. Superficially, a shadow root has standard DOM methods, and can be appended to as if it was any other DOM node. Shadow roots shine in that their contents don’t appear to the document that contains their parent node:
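const div = document.createElement('div');
const shadowRoot = div.attachShadow({ mode: 'open' });
shadowRoot.innerHTML = '<b>I am bold</b>';
document.body.appendChild(div);

div.querySelector('b');      // null — the <b> lives in the shadow root
document.querySelector('b'); // also null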

In the above example, the <div> “contains” the <b> and the <b> is rendered to the page, but the traditional DOM methods can’t see it. Not only that, but the styles of the containing page can’t see it either. This means that styles outside of a shadow root can’t get in, and styles inside the shadow root don’t leak out. This boundary is not meant to be a security feature, as another script on the page could detect the shadow root’s creation, and if you have a reference to a shadow root, you can query it directly for its contents.

The contents of a shadow root are styled by adding a <style> (or <link>) to the root:
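// Styles added inside the root apply only to the root's contents.
const style = document.createElement('style');
style.textContent = 'b { color: rebeccapurple; }';
shadowRoot.appendChild(style);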

Whew, we could really use a <template> right about now! Either way, the <b> will be affected by the stylesheet in the root, but any outer styles matching a <b> tag will not.

What if a custom element has non-shadow content? We can make them play nicely together using a new special element called <slot>:

<template>
Hello, <slot></slot>!
</template>

If that template is attached to a shadow root, then the following markup:

<hey-there>World</hey-there>

Will render as:

Hello, World!

This ability to composite shadow roots with non-shadow content allows you to make rich custom elements with complex inner structures that look simple to the outer environment. Slots are more powerful than I’ve shown here, with multiple slots and named slots and special CSS pseudo-classes to target slotted content. You’ll have to read more!

New Powers:

A quasi-obscured DOM structure called a “shadow root”

DOM APIs for creating and accessing shadow roots

Scoped styles within shadow roots

New CSS pseudo-classes for working with shadow roots and scoped styles

Putting it all together

Let’s make a fancy button! We’ll be creative and call the element <fancy-button>. What makes it fancy? It will have a custom style, and it will also allow us to supply an icon and make that look snazzy as well. We’d like our button’s styles to stay fancy no matter what site you use them on, so we’re going to encapsulate the styles in a shadow root.

You can see the completed custom element in the interactive example below. Be sure to take a look at both the JS definition of the custom element and the HTML <template> for the style and structure of the element.
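The interactive example doesn’t carry over here, but a minimal sketch of such an element (structure and styling assumed for illustration) shows how the three standards combine:

class FancyButton extends HTMLElement {
  constructor() {
    super();
    // Encapsulate structure and style in a shadow root.
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>
        button {
          border: none;
          border-radius: 1em;
          padding: 0.5em 1.25em;
          background: linear-gradient(#fff, #ddd);
          box-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);
          font: inherit;
        }
        ::slotted(img) { height: 1em; vertical-align: middle; }
      </style>
      <button><slot name="icon"></slot><slot></slot></button>
    `;
  }
}
customElements.define('fancy-button', FancyButton);

Used as <fancy-button><img slot="icon" src="star.svg">Click me</fancy-button>, the icon lands in the named slot and the label in the default slot, while the styles stay safely inside the shadow root.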

Conclusion

The standards that make up Web Components are built on the philosophy that by providing multiple low-level capabilities, people will combine them in ways that nobody expected at the time the specs were written. Custom Elements have already been used to make it easier to build VR content on the web, have spawned multiple UI toolkits, and much more. Despite the long standardization process, the emerging promise of Web Components puts more power in the hands of creators. Now that the technology is available in browsers, the future of Web Components is in your hands. What will you build?

WebRender newsletter #30

Hi! This is the 30th issue of WebRender’s most famous newsletter. At the top of each newsletter I try to dedicate a few paragraphs to some historical/technical details of the project. Today I’ll write about blob images.

WebRender currently doesn’t support the full set of graphics primitives required to render all web pages. The focus so far has been on doing a good job of rendering the most common elements and providing a fall-back for the rest. We call this fall-back mechanism “blob images”.

The general idea is that when we encounter unsupported primitives during display list building we create an image object and, instead of backing it with pixel data or a texture handle, we assign it a serialized list of drawing commands (the blob). For WebRender, blobs are just opaque buffers of bytes, and a handler object is provided by the embedder (Gecko in our case) to turn this opaque buffer into actual pixels that can be used as regular images by the rest of the rendering pipeline.

This opaque blob representation and an external handler let us implement these missing features using Skia without adding large and messy dependencies to WebRender itself. While a big part of this mechanism operates as a black box, WebRender remains responsible for scheduling the blob rasterization work at the appropriate time and synchronizing it with other updates. Our long term goal is to incrementally implement missing primitives directly in WebRender.

Since the launch of Firefox Monitor, a free service that notifies you when your email has been part of a breach, hundreds of thousands of people have signed up.

In response to the excitement from our global audience, Firefox Monitor is now being made available in more than 26 languages. We’re excited to bring Firefox Monitor to users in their native languages and make it easier for people to learn about data breaches and take action to protect themselves.

When your personal information is possibly at risk in a data breach, reading news and information in the language you understand best helps you to feel more in control. Now, Firefox Monitor will be available in Albanian, Traditional and Simplified Chinese, Czech, Dutch, English (Canadian), French, Frisian, German, Hungarian, Indonesian, Italian, Japanese, Malay, Portuguese (Brazil), Portuguese (Portugal), Russian, Slovak, Slovenian, Spanish (Argentina, Mexico, and Spain), Swedish, Turkish, Ukrainian and Welsh.

We couldn’t have accomplished this feat without our awesome Mozilla community of volunteers who worked together to make this happen. We’re so grateful for their support in making Firefox Monitor available to more than 2.5 billion non-English speakers.

Introducing Firefox Monitor Notifications

Along with making Monitor available in multiple languages, today we’re also releasing a new feature exclusively for Firefox users. Specifically, we are adding a notification to our Firefox Quantum browser that alerts desktop users when they visit a site that has had a recently reported data breach. We’re bringing this functionality to Firefox users in recognition of the growing interest in these types of privacy- and security-centric features. This new functionality will gradually roll out to Firefox users over the coming weeks.

While using the Firefox Quantum browser, when you land on a site that’s been breached, you’ll get a notification. You can click on the alert to visit Firefox Monitor and scan your email to see whether or not you were involved in that data breach. This alert will appear at most once per site and only for data breaches reported in the previous twelve months. Website owners can learn about our data breach disclosure policy here. If you do not wish to see these alerts on any site, you can simply choose to “never show Firefox Monitor alerts” by clicking the dropdown arrow on the notification.

You’ll be notified of a data breach when you visit a site in Firefox

For those new to Firefox Monitor, here’s a brief step-by-step guide on how Firefox Monitor works:

Step 1 – See if you’ve been part of a data breach

Simply type in your email address, and it will be scanned against a database that serves as a library of known data breaches.

Step 2 – Learn about future data breaches

Sign up for Firefox Monitor using your email address and we will notify you about data breaches when we learn about them.

Step 3 – Use Firefox to learn about the sites you visit that have been breached

While using the Firefox browser, when you land on a site that’s been breached, you’ll get a notification prompting you to scan your email with Firefox Monitor to see whether or not you’ve been involved in that data breach.

Being part of a data breach is not fun, and we have tips and remedies in our project, Data Leeks. Through recipes and personal stories of those who’ve been affected by a data breach, we’re raising awareness about online privacy.

We invite you to take a look at Firefox Monitor to see if you’ve been part of a data breach, and sign up to be prepared for the next data breach that happens.

Mozilla’s Position on Data Breaches

Data breaches are common for online services. Humans make mistakes, and humans make the Internet. Some online services discover, mitigate, and disclose breaches quickly. Others go undetected for years. Recent breaches include “fresh” data, which means victims have less time to change their credentials before they are in the hands of attackers, while old breaches have had more time to make their way into scripted credential stuffing attacks. All breaches are dangerous to users.

As stated in the Mozilla Manifesto: “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.” Most people simply don’t know that a data breach has affected them, which makes it difficult to take the first step to secure their online accounts because they don’t know they’re insecure in the first place. This is why we launched Firefox Monitor.

Informing Firefox Users

Today we are continuing to improve our Firefox Monitor service. To help users who might have otherwise missed breach news or email alerts, we are integrating alerts into Firefox that will notify users when they visit a site that has been breached in the past. This feature integrates notifications into the user’s browsing experience.

To power this feature, we use a list of breached sites provided by our partner, Have I Been Pwned (HIBP). Neither HIBP nor Mozilla can confirm that a user has changed their password after a breach, or whether they have reused a breached password elsewhere. So we do not know whether an individual user is still at risk, and cannot trigger user-specific alerts.

For our initial launch we’ve developed a simple, straightforward methodology:

If the user has never seen a breach alert before, Firefox shows an alert when they visit any breached site added to HIBP within the last 12 months.

After the user has seen their first alert, Firefox only shows an alert when they visit a breached site added to HIBP within the last 2 months.
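A sketch of that logic (illustrative JavaScript only; Firefox's actual implementation differs, and the names here are hypothetical):

const MS_PER_MONTH = 30 * 24 * 60 * 60 * 1000; // rough month, for illustration

function shouldShowBreachAlert(user, site, now = Date.now()) {
  // First-ever alert: any breach added to HIBP in the last 12 months qualifies.
  // After that: only breaches added to HIBP in the last 2 months.
  const windowMonths = user.hasSeenBreachAlert ? 2 : 12;
  const cutoff = now - windowMonths * MS_PER_MONTH;
  return site.breachAddedToHIBP >= cutoff;
}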

We believe these 12-month and 2-month windows are reasonable timeframes to alert users to both the password-reuse and unchanged-password risks. A longer alert timeframe would help us ensure we make even more users aware of the password-reuse risk. However, we don’t want to alarm users or to create noise by triggering alerts for sites that have long since taken significant steps to protect their users. That noise could decrease the value and usability of an important security feature.

Towards a more Sophisticated Approach

This is an interim approach to bring attention, awareness, and information to our users now, and to start getting their feedback. When we launched our Monitor service, we received tremendous feedback from our early users that we’re using to improve our efforts to directly address users’ top concerns for their online service accounts. For service operators, our partner, Troy Hunt, already has some great articles on how to prevent data breaches from happening, and how to quickly and effectively disclose and recover from them. Over the longer term, we want to work with our users, partners, and all service operators to develop a more sophisticated alert policy. We will base such a policy on stronger signals of individual user risk, and website mitigations.

We’re Hiring Again!

You read that right, we are hiring “Software Engineers”, plural. We have some big plans for the next year and you can be a part of it!

About Thunderbird

Thunderbird is an email client depended on daily by 25 million people on three platforms: Windows, Mac and Linux (and other *nix). It was developed under the Mozilla Corporation until 2014 when the project was handed over to the community.

The Thunderbird project is led by the Thunderbird Council, a group of volunteers from the community who have a strong interest in moving Thunderbird forward. With the help of the Mozilla Foundation, Thunderbird employs a handful of staff, and is now hiring additional developers to support the volunteer community in making Thunderbird shine.

You will join the team that is leading Thunderbird into a bright future. We are working on increasing the use of web technologies and decreasing dependencies on the internals of the Mozilla platform, to ensure independence and easier maintenance.

The Thunderbird team works openly using public bug trackers and repositories, providing you with a premier chance to show your work to the world.

About the Contract

We need your help to improve and maintain Thunderbird. Moving Thunderbird forward includes replacing/rewriting components to be based primarily on web technologies, reducing the reliance on Mozilla-internal interfaces. It also includes boosting the user experience of the product.

Maintenance involves fixing bugs and regressions, as well as addressing technical debt and enhancing performance. Most tasks have a component of both maintenance and improvement, and any new component needs careful integration with the existing system.

We have compiled a high level list of tasks here; the work assigned to you will include a subset of these items. Let us know in your cover letter where you believe you can make the most impact and how.

You will work with community volunteers and other employees around the globe to advance the Thunderbird product and mission of open and secure communications.

This is a remote, hourly 6-month contract with a possibility to extend. Hours will be up to 40 per week.

Your Professional Profile

Since we are looking to fill a few positions, we are interested to hear from both junior and senior candidates who can offer the following:

Familiarity with open source development.

Solid knowledge and experience developing a large software system.

Strong knowledge of JavaScript, HTML and CSS, as well as at least some basic C++ skills.

Good debugging skills.

Ideally, exposure to the Mozilla platform as a volunteer contributor or add-on author, with knowledge of XPCOM, XUL, etc.

Experience using distributed version control systems (preferably Mercurial).

Experience developing cross-platform applications is a plus.

Ability to work with a geographically distributed team and community.

A degree in Computer Science would be lovely; real-world experience is essential.

You should be a self-starter. In a large code base it’s inevitable that you will conduct your own research, investigation, and debugging, although others in the project will of course share their knowledge.

We expect you to have excellent communication skills and coordinate your work over email, IRC, and Bugzilla as well as video conferencing.

Next Steps

A cover letter is essential to your application, as we want to know how you’d envision your contributions to the team. Tell us why you’re passionate about Thunderbird and this position. Also include samples of your work as a programmer, either directly or as a link. If you contribute to any open source software or maintain a blog, we’d love to hear about it.

You will be hired as an independent contractor through the Upwork service, with the Mozilla Foundation as the client. The Thunderbird Project is separate from the Mozilla Foundation, but the Foundation acts as the project’s fiscal and legal home.

By applying for this job, you are agreeing to have your applications reviewed by Thunderbird contractors and volunteers who are a part of the hiring committee as well as by staff members of the Mozilla Foundation.

Mozilla is an equal opportunity employer. Mozilla and the Thunderbird Project value diversity and do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Our goal: To build empowering new tools that integrate smartly with your modern web design workflow.

We’re currently hard at work on a comprehensive Flexbox Inspector as well as CSS change-tracking. Early versions of each of these can be tried out in Firefox Nightly. (The Changes panel is hidden behind a flag in about:config: devtools.inspector.changes.enabled)

Please share your input

We’re just getting started, and now we want to learn more about you. Tell us about your biggest CSS and web design issues in the first-ever Design Tools survey! We want to hear from both web developers and designers, and not just Firefox users—Chrome, Safari, Edge, and IE users are greatly encouraged to submit their thoughts!

In early 2019, we’ll post an update with the results in order to share our data with the greater community and continue our experiment in open design.

For the second year, Mozilla is releasing *Privacy Not Included. We’ll help you identify which connected devices provide robust privacy and security features — and which ones don’t.

🎶 He sees you when you’re sleeping 🎶

🎶 He knows when you’re awake 🎶

🎶 He knows if you’ve been bad or good… 🎶

The lyrics to “Santa Claus Is Comin’ to Town” detail an omniscient Saint Nicholas. But in 2018 — in an era of always-listening products and apps — the lyrics might as well be detailing the latest connected device.

This holiday season, Mozilla is helping consumers identify which connected products are secure and trustworthy — and which aren’t. The goal: help consumers shop for gifts based on how well they protect the privacy and security of their friends and family, in addition to traditional factors like a product’s price and performance.

For the second year, we’re releasing *Privacy Not Included, a shopping guide that lists connected devices’ privacy and security traits. Mozilla researchers spent the last several months exploring whether or not products encrypt personal information, offer automatic security updates, have clear privacy policies, and more.

Our researchers focused on the season’s most popular connected devices in the United States, from Nintendo Switch and the latest Roku to Fitbits and assorted drones, smart watches, and even a smart dinosaur. This year’s guide features:

For over half of the products, Mozilla researchers could not make a conclusive determination as to whether they met Minimum Security Standards. This was based on factors including companies not responding to inquiries, or responses that conflicted with recent independent security audits or penetration test reports.

Answers to important questions like, “Can this product spy on me?” “Is it tracking my location?” and “Can I control the data it collects about me?”.

The debut of the Creep-O-Meter, an interactive tool allowing readers to rate how creepy they think a product is, on a sliding scale from “Super Creepy” to “Not Creepy,” as well as to share how likely or unlikely they are to buy it. The home page of the *Privacy Not Included guide lists products ranked from Not Creepy to Super Creepy. (Nearly 2,500 ratings were submitted by users during the guide’s beta testing period, which began in late October.)

An assessment of how easy — or hard — it is to read a product’s privacy policy, using Carnegie Mellon’s Explore Usable Privacy project, which created an algorithm to determine reading levels. The most common reading level required is college level (grade 14). Tile Mate’s privacy policy is identified as the most difficult, requiring a college-graduate reading level (grade 18), while the Tractive GPS 3G Pet Tracker’s is the easiest, requiring a middle-school reading level (grade 8).

We soft-launched this year’s guide at MozFest in October. And already, readers are weighing in. Nintendo Switch — which features encryption and automatic security updates — has emerged as one of the more trusted devices among users in the guide, with 72% of readers saying “not creepy.” Alternatively, the FREDI Baby Monitor — which lacks encryption and has the default password “123” — has 73% of readers saying “super creepy.”

There’s no shortage of holiday shopping guides. But most focus on price and performance, not privacy. We believe that’s a major oversight. Each day, more and more headlines emerge about flawed connected devices. These devices can track our locations without us knowing; they can sell our data to a galaxy of advertisers; and they often can be hacked or manipulated. In recent years, even stuffed animals and a children’s doll have been compromised.

*Privacy Not Included is part of Mozilla’s work to spark mainstream conversations about online privacy and security — and to put individual internet users in control of their own data. This guide complements other Mozilla initiatives, like our consumer privacy campaigns; our annual Internet Health Report; and our roster of Fellows who develop research, policies, and products around privacy and security.

Private by Design: How we built Firefox Sync

Firefox Sync lets you share your bookmarks, browsing history, passwords and other browser data between different devices, and send tabs from one device to another. We think it’s important to highlight the privacy aspects of Sync, which protects all your synced data by default so Mozilla can’t read it, ever. In this post, we take a closer look at some of the technical design choices we made in order to put user privacy first.

What is Firefox Sync and why would you use it?

That shopping rabbit hole you started on your laptop this morning? Pick up where you left off on your phone tonight. That dinner recipe you discovered at lunchtime? Open it on your kitchen tablet, instantly. Connect your personal devices, securely. – Firefox Sync

Firefox Sync lets you share your bookmarks, browsing history, passwords and other browser data between different devices, and send tabs from one device to another. It’s a feature that millions of our users take advantage of to streamline their lives and how they interact with the web.

But on an Internet where sharing your data with a provider is the norm, we think it’s important to highlight the privacy aspects of Firefox Sync.

Firefox Sync by default protects all your synced data so Mozilla can’t read it. We built Sync this way because we put user privacy first. In this post, we take a closer look at some of the technical design choices we made and why.

When building a browser and implementing a sync service, we think it’s important to look at what one might call ‘Total Cost of Ownership’. Not just what users get from a feature, but what they give up in exchange for ease of use.

We believe that by making the right choices to protect your privacy, we’ve also lowered the barrier to trying out Sync. When you sign up and choose a strong passphrase, your data is protected from both attackers and from Mozilla, so you can try out Sync without worry. Give it a shot, it’s right up there in the menu bar!

Why Firefox Sync is safe

Encryption allows one to protect data so that it is entirely unreadable without the key used to encrypt it. The math behind encryption is strong, has been tested for decades, and every government in the world uses it to protect its most valuable secrets.

The hard part of encryption is that key. What key do you encrypt with, where does it come from, where is it stored, and how does it move between places? Lots of cloud providers claim they encrypt your data, and they do. But they also have the key! While the encryption is not meaningless, it is a small measure, and does not protect the data against the most concerning threats.

The encryption key is the essential element. The service provider must never receive it – even temporarily – and must never know it. When you sign into your Firefox Account, you enter a username and passphrase, which are sent to the server. How is it that we can claim to never know your encryption key if that’s all you ever provide us? The difference is in how we handle your passphrase.

A typical login flow for an internet service is to send your username and passphrase up to the server, where it hashes the passphrase, compares it to a stored hash, and, if they match, sends you your data. (Hashing refers to converting a password into an unreadable string of characters that is impossible to reverse.)

The crux of the difference in how we designed Firefox Accounts and Firefox Sync (our underlying syncing service) is that you never send us your passphrase. We transform your passphrase on your computer into two different, unrelated values; with one value, you cannot derive the other [0]. We send an authentication token, derived from your passphrase, to the server as the password-equivalent, and the encryption key derived from your passphrase never leaves your computer.

Interested in the technical details? We use 1000 rounds of PBKDF2 to derive your passphrase into the authentication token [1]. On the server, we additionally hash this token with scrypt (parameters N=65536, r=8, p=1) [2] to make sure our database of authentication tokens is even more difficult to crack.

We derive your passphrase into an encryption key using the same 1000 rounds of PBKDF2. It is domain-separated from your authentication token by using HKDF with separate info values. We use this key to unwrap an encryption key (which you generated during setup and which we never see unwrapped), and that encryption key is used to protect your data. We use the key to encrypt your data using AES-256 in CBC mode, protected with an HMAC [3].
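To make that derivation split concrete, here is a minimal sketch in TypeScript using Node’s crypto module. It is illustrative only, not Mozilla’s implementation: the real protocol uses different salts, info strings, and key-wrapping steps, as described above.

```ts
// Minimal sketch, not Mozilla's implementation: one passphrase yields an
// authentication token and an encryption key that cannot be derived from
// each other. Salts and info strings here are assumptions for the sketch.
import { pbkdf2Sync, hkdfSync, scryptSync } from "node:crypto";

const PBKDF2_ROUNDS = 1000; // low by modern standards; see Bug 1320222

function deriveClientKeys(email: string, passphrase: string) {
  // Stretch the passphrase; using the email as salt is illustrative.
  const stretched = pbkdf2Sync(passphrase, email, PBKDF2_ROUNDS, 32, "sha256");

  // Domain separation via HKDF: the same stretched secret with different
  // "info" values yields two unrelated outputs.
  const authToken = Buffer.from(hkdfSync("sha256", stretched, "", "auth-token", 32));
  const encryptionKey = Buffer.from(hkdfSync("sha256", stretched, "", "encryption-key", 32));

  // Only authToken is ever sent to the server; encryptionKey stays local.
  return { authToken, encryptionKey };
}

// Server side: hash the received token again with scrypt before storing it,
// so a stolen database remains expensive to crack.
function hashTokenForStorage(authToken: Buffer, serverSalt: Buffer): Buffer {
  return scryptSync(authToken, serverSalt, 32, {
    N: 65536, r: 8, p: 1,
    maxmem: 128 * 1024 * 1024, // N=65536, r=8 needs ~64 MiB of scratch space
  });
}
```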

This cryptographic design is solid – but the constants need to be updated. One thousand rounds of PBKDF2 can be improved, and we intend to do so in the future (Bug 1320222); the low iteration count was a trade-off made when we initially developed this and needed to support low-power, low-resource devices. The token is only ever sent over an HTTPS connection (with preloaded HPKP pins) and is not stored. AES-CBC + HMAC is acceptable – it would be nice to upgrade this to an authenticated mode sometime in the future.

Other approaches

This isn’t the only approach to building a browser sync feature. There are at least three other options:

Option 1: Share your data with the browser maker

In this approach, the browser maker is able to read your data, and use it to provide services to you. For example, when you sync your browser history in Chrome it will automatically go into your Web & App Activity unless you’ve changed the default settings. As Google Chrome Help explains, “Your activity may be used to personalize your experience on other Google products, like Search or ads. For example, you may see a news story recommended in your feed based on your Chrome history.” [4]

Option 2: Use a separate password for sign-in and encryption

We developed Firefox Sync to be as easy to use as possible, so we designed it from the ground up to derive an authentication token and an encryption key – and we never see the passphrase or the encryption key. One cannot safely derive an encryption key from a passphrase if the passphrase is sent to the server.

One could, however, add a second passphrase that is never sent to the server, and encrypt the data using that. Chrome provides this as a non-default option [5]. You sign in to sync with your Google Account credentials, but choose a separate passphrase to encrypt your data. It’s imperative that you choose a separate passphrase, though.

All in all, we don’t care for the design that requires a second passphrase. This approach is confusing to users: it’s very easy to choose the same (or a similar) passphrase and negate the security of the design. It’s hard to determine which is more confusing: to require a second passphrase or to make it optional! Making it optional means it will be used very rarely. We don’t believe users should have to opt in to privacy.

Option 3: Manual key synchronization

The key (pun intended) to auditing a cryptographic design is to ask about the key: “Where does it come from? Where does it go?” With the Firefox Sync design, you enter a passphrase of your choosing and it is used to derive an encryption key that never leaves your computer.

Another option for Sync is to remove user choice, and provide a passphrase for you (that never leaves your computer). This passphrase would be secure and unguessable – which is an advantage, but it would be near-impossible to remember – which is a disadvantage.

When you want to add a new device to sync to, you’d need your existing device nearby in order to manually read and type the passphrase into the new device. (You could also scan a QR code if your new device has a camera).

Other Browsers

Overall, Sync works the way it does because we feel it’s the best design choice. Options 1 and 2 don’t provide thorough user privacy protections by default. Option 3 results in lower user adoption and thus reduces the number of people we can help (more on this below).

As noted above, Chrome implements Option 1 by default, which means unless you change the settings before you enable sync, Google will see all of your browsing history and other data, and use it to market services to you. Chrome also implements Option 2 as an opt-in feature.

Opera and Vivaldi follow Chrome’s lead, implementing Option 1 by default and Option 2 as an opt-in feature. Update: Vivaldi actually prompts you for a separate password by default (Option 2), and allows you to opt-out and use your login password (Option 1).

Brave, also a privacy-focused browser, has implemented Option 3. And, in fact, Firefox also implemented a form of Option 3 in its original Sync protocol, but we changed our design in April 2014 (Firefox 29) in response to user feedback [6]. For example, our original design (and Brave’s current design) makes it much harder to regain access to your data if you lose your device or it gets stolen. Passwords and passphrases make that experience substantially easier for the average user, and the change significantly increased Sync adoption.

Brave’s sync protocol has some interesting wrinkles [7]. One distinct minus is that you can’t change your passphrase if it is stolen by malware. Another interesting wrinkle is that Brave does not keep track of how many or what types of devices you have. This is a nuanced security trade-off: having less information about the user is always desirable, but the downside is that Brave can’t let you detect when a new device begins receiving your sync data, or let you deauthorize it. We respect Brave’s decision. In Firefox, however, we have chosen to provide this additional security feature for users (at the cost of knowing more about their devices).

Conclusion

We designed Firefox Sync to protect your data – by default – so Mozilla can’t read it. We built it this way – despite trade-offs that make development and offering features more difficult – because we put user privacy first. At Mozilla, this priority is a core part of our mission to “ensure the Internet is a global public resource… where individuals can shape their own experience and are empowered, safe and independent.”

[0] It is possible to use one to guess the other, but only if you choose a weak password.

[1] You can find more details in the full protocol specification or by following the code starting at this point. There are a few details we have omitted to simplify this blog post, including the difference between kA and kB keys, and application-specific subkeys.

[6] One of the original engineers of Sync has written two blog posts about the transition to the new sync protocol, and why we did it. If you’re interested in the usability aspects of cryptography, we highly recommend you read them to see what we learned.

Today, just in time for the holiday shopping season, the Firefox Test Pilot team is introducing Price Wise and Email Tabs — the latest experimental features designed to give users more choice and transparency when shopping online. These game-changing desktop tools are sure to make shopping a breeze with more options to save, share, track and shop. We’ve also made a few updates to the Test Pilot program itself to make it even easier to become part of the growing community of Firefox users testing new features.

Price Wise – Track prices across major retailers and get notified when the price drops

Online comparison shopping is more popular than ever, but it’s often hard to know when to buy to get the best deal. With Firefox Price Wise, you can add products to your Price Watcher list and get a desktop notification automatically every time the price drops. Users can even click through directly from their list to purchase as soon as the price changes, making online shopping more affordable and efficient. The feature is currently only available in the U.S., and works with products from five major retailers: Best Buy, eBay, Amazon, Walmart, and The Home Depot. These retailers were among the top 10 visited by Firefox users, and we’re working to expand to more retailers in the future.

Email Tabs – Save and share content seamlessly as you browse the web

While there are many tools to help users share and save links when browsing, research shows that most of us still rely on email to get the job done – a manual process that requires multiple steps and services. We think there’s a better way. With Email Tabs, you can select and send links to one or many open tabs all within Firefox in a few short steps, making it easier than ever to share your holiday gift list, Thanksgiving recipes or just about anything else.

To start, click the Email Tabs icon at the top of the browser, select the tabs you want, and decide how much of the content you want to send – just the links, a screenshot preview, or the full text – then hit send and it’ll automatically be sent to your Gmail inbox.

Decide how you want to send – whether it’s links, a screenshot preview, or full text

How about saving the links for future reference? Email Tabs also lets you copy multiple tabs to the clipboard for outside sharing. The feature only works with Gmail right now, but we’re working on adding more clients in the near future. It will be seamless if you’re already logged in to Gmail; if not, you can log in once you’re prompted.

Copy one or multiple tabs to Clipboard

And of course, the best part of Price Wise and Email Tabs? With Firefox private browsing and content blocking features, you can shop online with extra protection against tracking this holiday season.

Improved Test Pilot for Users to Shape Firefox

We appreciate the thousands of Firefox users who have participated in the Test Pilot program since we started this journey. It’s their voice and impact that have motivated and inspired us to continue to develop features and services. Thanks to their support, we’re happy to share that several of our experiments are ready for graduation.

Send, which lets you upload and encrypt large files (up to 1GB) to share online, will be updated and unveiled later this year. Our Summer experiments, Firefox Color, which allows you to customize several different elements of your browser, including background texture, text, icons, the toolbar and highlights, and Side View, which allows you to view two different browser tabs in the same tab, within the same browser window, will graduate as standalone extensions.

We’re always working to improve our Test Pilot program to encourage Firefox users to participate and provide feedback on the latest Firefox features. With this version of Test Pilot, we’ve simplified the steps to make it easier than before for users to participate. To learn more about our revamped Test Pilot program and to help us test and evaluate a variety of potential Firefox tools, visit testpilot.firefox.com.

Hello Mozillians!

As you may already know, last Friday, November 9th, we held a new Testday event for Firefox 64 Beta 8.

Thank you all for helping us make Mozilla a better place: Gabriela, gaby2300; and from the Bangladesh team: Maruf Rahman, Tanvir …

– several test cases executed for the Multi-select tabs and Removal of Live Bookmarks and Feed features;
– bugs verified: 1495614, 1499504;

Thanks for another successful testday!

Performance Updates and Hosting Moves: MDN Changelog for October 2018

This month's changelog, from the hard-working engineering team that builds and maintains the MDN Web Docs site, covers performance improvements and experiments, infrastructure updates, as well as countless tweaks and fixes to make your MDN experience better and better.

We shipped some changes designed to improve MDN’s page load time. The effects were not as significant as we’d hoped.

Shipped performance improvements

Our sidebars, like the Related Topics sidebar on <summary>, used a “mozToggler” JavaScript method to implement open and collapsed sections. This used jQueryUI’s toggle effect, applied dynamically at load time. Tim Kadlec replaced it with the <details> element (KumaScript PR 789 and Kuma PR 4957), which semantically models open and collapsed sections. The <details> element is supported by most current browsers, with the notable exception of Microsoft Edge, where it is supported via a polyfill.
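As a rough sketch of the idea (the names and content below are hypothetical, not MDN’s actual markup), a sidebar section becomes a native <details> element instead of having a toggle wired up in JavaScript:

```ts
// Hypothetical sketch: build a sidebar section as a native <details>
// element so the browser handles open/collapse without any JavaScript.
function buildSidebarSection(title: string, items: string[]): HTMLDetailsElement {
  const details = document.createElement("details");
  details.open = true; // start expanded, matching the old behavior

  const summary = document.createElement("summary");
  summary.textContent = title; // e.g. "Related Topics"
  details.appendChild(summary);

  const list = document.createElement("ul");
  for (const item of items) {
    const li = document.createElement("li");
    li.textContent = item;
    list.appendChild(li);
  }
  details.appendChild(list);
  return details;
}
```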

We expected at least a 150ms improvement, based on bench tests.

The <details> update shipped October 4th, and the 31,000 pages with sidebars were regenerated to apply the change.

A second change was intended to reduce the use of Web Fonts, which must be downloaded and can cause the page to be repainted. Some browsers, such as Firefox Focus, block web fonts by default for performance and to save bandwidth.

One strategy is to eliminate the web font entirely. We replaced OpenSans with the built-in Verdana as the body font in September (PR 4967), and then again with Arial on October 22 (PR 5023). We’re also replacing Font Awesome, implemented with a web font, with inline SVG (PR 4969 and PR 5053). We expect to complete the SVG work in November.

A second strategy is to reduce the size of the web font. The custom Zilla font, introduced with the June 2017 redesign, was reduced to standard English characters, cutting the file sizes in half on October 10 (PR 5024).

These changes have had an impact on total download size and rendering time, and we’re seeing improvements in our synthetic metrics. However, there has been no significant change in page load as measured for MDN users. In November, we’ll try some more radical experiments to learn more about the components of page load time.

SpeedCurve Synthetic measurements show steady improvement, but not yet on target.

Moved MDN to MozIT

Ryan Johnson, Ed Lim, and Dave Parfitt switched production traffic from the Marketing to the IT servers on October 29th. The site was placed in read-only mode, so all the content was available during the transition. There were some small hiccups, mostly around running out of API budget for Amazon’s Elastic File System (EFS), but we handled the issues within the maintenance window.

In the weeks leading up to the cut over, the team tested deployments, updated documentation, and checked data transfer processes. They created a list of tasks and assignments, detailed the process for the migration, and planned the cleanup work after the cut over. The team’s attention to detail and continuous communication made this a smooth transition for MDN’s users, with no downtime or bugs.

The MozIT cluster is very similar to the previous MozMEAO cluster. The technical overview from the October 10, 2017 launch is still a decent guide to how MDN is deployed.

There are a handful of changes, most of which MDN users shouldn’t notice. We’re now hosting images in Docker Hub rather than quay.io. The MozMEAO cluster ran Kubernetes 1.7, and the new MozIT cluster runs 1.9. This may be responsible for more reliable DNS lookups, avoiding occasional issues when connecting to the database or other AWS services.

In November, we’ll continue monitoring the new servers, and shut down the redundant services in the MozMEAO account. We’ll then re-evaluate our plans from the beginning of the year, and prioritize the next infrastructure updates. The top of the list is reliable acceptance tests and deploys across multiple AWS zones.

Planned for November

We’ll continue performance experiments in November, such as removing Font Awesome and looking for new ways to lower page load time. We’ll also continue ongoing projects, such as migrating and updating browser compatibility data and shipping more HTML examples like the one on <input>.

How do people decide whether or not to get a browser extension?

The Firefox Add-ons Team works to make sure people have all of the information they need to decide which browser extensions are right for them. Past research conducted by Bill Selman and the Add-ons Team taught us a lot about how people discover extensions, but there was more to learn. Our primary research question was: “How do people decide whether or not to get a specific browser extension?”

We recently conducted two complementary research studies to help answer that big question:

The survey ran from July 19, 2018 to July 26, 2018 on addons.mozilla.org (AMO). The survey prompt was displayed when visitors went to the site and was localized into ten languages. The survey asked questions about why people were visiting the site, if they were looking to get a specific extension (and/or theme), and if so what information they used to decide to get it.

Screenshot of the survey message bar on addons.mozilla.org.

The think-aloud study took place at our Mozilla office in Vancouver, BC from July 30, 2018 to August 1, 2018. The study consisted of 45-minute individual sessions with nine participants, in which they answered questions about the browsers they use, and completed tasks on a Windows laptop related to acquiring a theme and an extension. To get a variety of perspectives, participants included three Firefox users and six Chrome users. Five of them were extension users, and four were not.

Mozilla office conference room in Vancouver, where the think-aloud study took place.

What we learned about decision-making

Now we share some key results from both the survey and the think-aloud study.

People use social proof on the extension’s product page

Ratings, reviews, and number of users proved important for making a decision to get the extension in both the survey and think-aloud study. Think-aloud participants used these metrics as a signal that an extension was good and safe. All except one think-aloud participant used this “social proof” before installing an extension. The importance of social proof was backed up by the survey responses where ratings, number of users, and reviews were among the top pieces of information used.

Screenshot of Facebook Container’s page on addons.mozilla.org with the “social proof” outlined: number of users, number of reviews, and rating.
AMO survey responses to “Think about the extension(s) you were considering getting. What information did you use to decide whether or not to get the extension?”

People use social proof outside of AMO

Think-aloud participants mentioned using outside sources to help them decide whether or not to get an extension. Outside sources included forums, advice from “high authority websites,” and recommendations from friends. The same result is seen among the survey respondents, where 40.6% of respondents used an article from the web and 16.2% relied on a recommendation from a friend or colleague. This is consistent with our previous user research, where participants used outside sources to build trust in an extension.

Screenshot of an example outside source: TechCrunch article about the Facebook Container extension.
AMO survey responses to “What other information did you use to decide whether or not to get an extension?”

People use the description and extension name

Almost half of the survey respondents use the description to make a decision about the extension. While the description was the top piece of content used, we also see that over one-third of survey respondents evaluate the screenshots and the extension summary (the description text beneath the extension name), which shows their importance as well.

Think-aloud participants also used the extension’s description (both the summary and the longer description) to help them decide whether or not to get it.

While we did not ask about the extension name in the survey, it came up during our think-aloud studies. The name of the extension was cited as important to think-aloud participants. However, they mentioned how some names were vague and therefore didn’t assist them in their decision to get an extension.

Themes are all about the picture

In addition to extensions, AMO offers themes for Firefox. From the survey responses, the most important part of a theme’s product page is the preview image. It’s clear that the imagery far surpasses any social proof or description based on this survey result.

Screenshot of a theme on addons.mozilla.org with the preview image highlighted.
AMO survey responses to “Think about the theme(s) you were considering getting. What information did you use to decide whether or not to get the theme?”

All in all, we see that while social proof is essential, great content on the extension’s product page and in external sources (such as forums and articles) are also key to people’s decisions about whether or not to get an extension. When we’re designing anything that requires people to make an adoption decision, we need to remember the importance of social proof and great content, within and outside of our products.

Following the explosion of extension features in Firefox 63, Firefox 64 moved into Beta with a quieter set of capabilities spread across many different areas.

Extension Management

The most visible change to extensions comes on the user-facing side of Firefox where the add-ons management page (about:addons) received an upgrade.

Changes on this page include:

Each extension is shown as a card that can be clicked.

Each card shows the description for the extension along with buttons for Options, Disable and Remove.

The search area at the top is cleaned up.

The page links to the Firefox Preferences page (about:preferences) and that page links back to about:addons, making navigation between the two very easy. These links appear in the bottom left corner of each page.

These changes are part of an ongoing redesign of about:addons that will make managing extensions and themes within Firefox simpler and more intuitive. You can expect to see additional changes in 2019.

As part of our continuing effort to make sure users are aware of when an extension is controlling some aspect of Firefox, the Notification Permissions window now shows when an extension is controlling the browser’s ability to accept or reject web notification requests.

When an extension is installed, the notification popup is now persistently shown off of the main (hamburger) menu. This ensures that the notification is always acknowledged by the user and can’t be accidentally dismissed by switching tabs.

Finally, extensions can now be removed by right-clicking on an extension’s browser action icon and selecting “Remove Extension” from the resulting context menu.

Even More Context Menu Improvements

Firefox 63 saw a large number of improvements for extension context menus and, as promised, there are even more improvements in Firefox 64.

The biggest change is a new API that can be called from the contextmenu DOM event to set a custom context menu in extension pages. This API, browser.menus.overrideContext(), allows extensions to hide all default Firefox menu items in favor of providing a custom context menu UI. This context menu can consist of multiple top-level menu items from the extension, and may optionally include tab or bookmark context menu items from other extensions.

To use the new API, you must declare the menus and menus.overrideContext permissions. Additionally, to include context menus from other extensions in the tab or bookmarks contexts, you must also declare the tabs or bookmarks permissions, respectively.

The API is still being documented on MDN at the time of this writing, but the API takes a contextOptions object as a parameter, which includes the following values:

showDefaults: boolean that indicates whether to include default Firefox menu items in the context menu (defaults to false)

context: optional parameter that indicates the ContextType to override to allow menu items from other extensions in this context menu. Currently, only bookmark and tab are supported. showDefaults cannot be used with this option.

While waiting for the MDN documentation to go live, I would highly encourage you to check out the terrific blog post by Yuki “Piro” Hiroshi that covers usage of the new API in great detail.
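In the meantime, here is a hedged sketch of what usage might look like; getTabIdFor() is a hypothetical helper that maps a row in a custom tab list back to its tab id:

```ts
// Hedged sketch: override the native context menu inside an extension page.
// Assumes the "menus" and "menus.overrideContext" permissions (plus "tabs",
// since we include tab context items) are declared in manifest.json.
declare const browser: any; // the WebExtension global
declare function getTabIdFor(el: Element | null): number | undefined; // hypothetical

document.addEventListener("contextmenu", (event) => {
  const tabId = getTabIdFor(event.target as Element);
  if (tabId === undefined) return; // fall back to the default menu

  // Must be called synchronously from the contextmenu event: hide the
  // default Firefox items and show tab context menu items instead,
  // including those contributed by other extensions.
  browser.menus.overrideContext({ context: "tab", tabId });
});
```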

Other improvements to extension context menus include:

browser.menus.update() now allows extensions to update an icon without having to delete and recreate the menu item.

menus.create() and menus.update() now support a viewTypes property. This is a list of view types that specifies where the menu item will be shown and can include tab, popup (pageAction/browserAction) or sidebar. It defaults to any view, including those without a viewType.

The menus.onShown and menus.onClicked events now include the viewType described above as part of their info object so extensions can determine the type of view where the menu was shown or clicked.

The menus.onClicked event also added a button property to indicate which mouse button initiated the click (left, middle, right).
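Here is a hedged sketch of these properties in use; the id and title are illustrative:

```ts
// Hedged sketch: viewTypes limits where the item appears, and the click
// info reports the view type and mouse button. Names are illustrative.
declare const browser: any; // the WebExtension global

browser.menus.create({
  id: "copy-note",
  title: "Copy note",
  viewTypes: ["sidebar"], // only offered in this extension's sidebar views
});

browser.menus.onClicked.addListener((info: any) => {
  // info.button: 0 = left, 1 = middle, 2 = right
  if (info.menuItemId === "copy-note" && info.button === 0) {
    console.log(`left-clicked from a ${info.viewType} view`);
  }
});
```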

Minor Improvements in Many Areas

In addition to the extension management in Firefox and the context menu work, many smaller improvements were made throughout the WebExtension API.

Page Actions

A new, optional manifest property for page actions called ‘pinned’ has been added. It specifies whether or not the page action should appear in the location bar by default when the user installs the extension (default is true).

Keyboard Shortcuts

Dev Tools

Extensions can now create devtools panel sidebars and use the new setPage() API to embed an extension page inside the devtools inspector sidebar.

Misc / Bug Fixes

The browser.search.search() API no longer requires user input in order to be called. This makes the API much more useful, especially in asynchronous event listeners. This feature was also uplifted to Firefox 63.
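For example, a background script can now plausibly start a search from an asynchronous message listener (a hedged sketch; the message shape is hypothetical and the "search" permission is assumed):

```ts
// Hedged sketch: trigger a search without direct user input, e.g. from a
// message sent by a content script. Assumes the "search" permission.
declare const browser: any; // the WebExtension global

browser.runtime.onMessage.addListener(async (message: { query?: string }) => {
  if (!message.query) return;
  const tab = await browser.tabs.create({ active: true }); // open a tab for results
  await browser.search.search({ query: message.query, tabId: tab.id });
});
```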

AV1, the next-generation royalty-free video codec from the Alliance for Open Media, leapfrogs the performance of VP9 and HEVC. The AV1 format is and will always be royalty-free, with a permissive FOSS license. In this video presentation, Mozilla’s Nathan Egge dives deep into the technical details of the codec and its evolution.

Since AOMedia officially cemented the AV1 v1.0.0 specification earlier this year, we’ve seen increasing interest from the broadcasting industry. Starting with the NAB Show (National Association of Broadcasters) in Las Vegas earlier this year, and gaining momentum through IBC (International Broadcasting Convention) in Amsterdam, and more recently the NAB East Show in New York, AV1 keeps picking up steam. Each of these industry events attracts over 100,000 media professionals. Mozilla attended these shows to demonstrate AV1 playback in Firefox, and showed that AV1 is well on its way to being broadly adopted in web browsers.

Continuing to advocate for AV1 in the broadcast space, Nathan Egge from Mozilla dives into the depths of AV1 at the Mile High Video Workshop in Denver, sponsored by Comcast.

WebRender newsletter #29

To introduce this week’s newsletter I’ll write about culling. Culling refers to discarding invisible content and is performed at several stages of the rendering pipeline. During frame building on the CPU we go through all primitives and discard the ones that are off-screen by computing simple rectangle intersections. As a result we avoid transferring a lot of data to the GPU and we can skip processing them as well.

Unfortunately this isn’t enough. Web pages are typically built upon layers and layers of elements stacked on top of one another. The traditional way to render web pages is to draw each element in back-to-front order, which means that for a given pixel on the screen we may have rendered many primitives. This is frustrating because opaque primitives often completely cover the work we did on that pixel for the elements beneath them, so a lot of shading work and memory bandwidth goes to waste – and memory bandwidth is a very common bottleneck, even on high-end hardware.

Drawing on the same pixels multiple times is called overdraw, and overdraw is not our friend, so a lot of effort goes into reducing it.
In its early days, to mitigate overdraw, WebRender divided the screen into tiles and assigned every primitive to the tiles it covered (primitives that overlapped several tiles were split into one primitive per tile); when an opaque primitive covered an entire tile, we could simply discard everything below it. This tiling approach was good at reducing overdraw with large occluders and also made batching blended primitives easier (I’ll talk about batching in another episode). It worked quite well for the axis-aligned rectangles that make up the vast majority of web pages, but it was hard to split transformed primitives.

Eventually we decided to try a different approach, inspired by how video games tackle the same problem. GPUs have a special feature called the z-buffer (or depth buffer), in which the depth of each pixel is stored during rendering. This allows rendering opaque objects in any order while still correctly showing the ones closest to the camera.

A common way to render 3D games is to sort objects front-to-back, to maximize the chance that the front-most pixels are written first and that the depth test discards as much shading work and as many memory writes as possible. Transparent objects are then rendered back-to-front in a second pass, since they can’t count as occluders.
This is exactly what WebRender does now. Moving from the tiling scheme to using the depth buffer to reduce overdraw brought great performance improvements (certainly more than I expected), and also made a number of other things simpler (I’ll come back to these another day).
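To give a feel for the shape of this, here is a hedged sketch of the two-pass scheme in WebGL terms. WebRender itself is written in Rust on top of OpenGL; Batch and drawBatch below are hypothetical stand-ins for real draw state:

```ts
// Hedged sketch of depth-buffer-based overdraw reduction: opaque pass
// front-to-back with depth writes, blended pass back-to-front without.
interface Batch { depth: number /* plus geometry, shader state, ... */ }
declare function drawBatch(gl: WebGLRenderingContext, batch: Batch): void; // hypothetical

function renderFrame(gl: WebGLRenderingContext, opaque: Batch[], blended: Batch[]) {
  gl.enable(gl.DEPTH_TEST);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // Pass 1: opaque, front-to-back, depth writes on. The depth test rejects
  // fragments hidden behind closer primitives, skipping their shading and
  // memory writes entirely.
  gl.depthMask(true);
  gl.disable(gl.BLEND);
  for (const batch of [...opaque].sort((a, b) => a.depth - b.depth)) drawBatch(gl, batch);

  // Pass 2: transparent, back-to-front, depth writes off. These still test
  // against the opaque depth buffer but cannot occlude anything themselves.
  gl.depthMask(false);
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
  for (const batch of [...blended].sort((a, b) => b.depth - a.depth)) drawBatch(gl, batch);
}
```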

This concludes today’s little piece of WebRender history. It is very unusual for 2D rendering engines to use the z-buffer this way, so I think this implementation detail is worth highlighting.

Notable WebRender and Gecko changes

Bobby implemented dynamically growing the shared texture cache, cutting by half the remaining regression compared to Firefox without WebRender on the AWSY test.

Dan did some profiling of the talos test, and identified large data structures being copied a lot on the stack, which led to some of Glenn’s optimizations for this week.

As an employer, Mozilla has a long-standing commitment to diversity, inclusion and fostering a supportive work environment. In keeping with that commitment, today we join the growing list of companies publicly opposed to efforts to erase transgender protections through reinterpretation of existing laws and regulations, as well as any policy or regulation that violates the privacy rights of those who identify as transgender, gender non-binary, or intersex.

The rights, identities and humanity of transgender, gender non-binary and intersex people deserve affirmation.

A workplace that is inclusive of a diversity of backgrounds and experiences is also good for business. We’re glad to see companies across the industry joining the Human Rights Campaign’s statement and sharing this perspective.

At Mozilla, we have clear community guidelines dictating our expectations of employees and volunteers and recently rolled out workplace guidelines for transitioning individuals and their managers. These actions are a part of our commitment to ensuring a welcoming, respectful and inclusive culture.

We urge the federal government to end its exploration of policies that would erase transgender protections and erode decades of hard work to create parity, respect, opportunity and understanding for transgender professionals.

In this Q&A, independent UX designer and creative catalyst Nadja Haldimann talks about how she approached working with Mozilla on the new Firefox Reality browser for virtual reality (VR). Before launch, Nadja and Mozilla’s Mixed Reality team worked with Seattle-based BlinkUX to do user testing. Here’s what they learned, and the solutions they found, to create a web browser that people can use strapped to their faces.

How difficult is it to design for an immersive, 3D environment, compared to 2D software?
It’s not necessarily more difficult – all the same design principles still apply – but it is quite different. One of the things that you have to account for is how the user perceives space in a headset – it seems huge. So instead of designing for a rectangular window inside a rectangular display, you’re suspending a window in what looks to be a very large room. The difficulty there is that people want to fill that room with a dozen browser windows, and maybe have a YouTube video, baseball game or stock ticker running in the background. But in reality, we only have these 2-inch screens to work with, one for each eye, and the pixels of just half a cell phone screen. But the perception is it’s 1,000 times bigger than a desktop. They think they’re in a movie theater.

OK, so here you have this massive 3D space. You can put anything in there you want. What did you create?
That was a really big question for us: what is the first thing people see when they open the browser? We built two things for the Firefox Reality home page. First, we worked with digital artists to create scenes users could choose as the background, because, just like on a 2D desktop browser, we found people want to customize their browser window with themes and images that mean something to them. The goal was to create environments that were grounding and inviting, especially for people who might be experiencing an immersive environment for the first time.

Second, we created a content feed to help people find great new 3D experiences on the web. Immersive media is just getting off the ground, so content is somewhat limited today but growing quickly. The content feed showcases quality, family-friendly content that supports the WebVR API, so it’s easy to view on multiple devices.

What kinds of limitations or challenges did you run into while designing the browser’s UI?
In VR, the most important thing is to make the user comfortable. In the past, a significant number of people have had trouble with nausea and motion sickness — and women are more susceptible, according to research. You can avoid that by delivering a smooth, responsive experience, where the display can render the content very, very quickly. The best experience is one where the user actually forgets they’re in a VR environment. They’re happy spending time there and they want to keep exploring.

The first problem we ran into was that people felt like they were floating above the floor. Part of that was because we had the camera height set to 5’ 6”, which is roughly the height of an adult standing up. But in user testing, people were sitting down. So there was a disconnect between what people were seeing in the headset and where they knew their physical bodies to be. The other part was that we were using colors to indicate floor, without enough texture. It’s textures that let our brains identify distance in VR. We created low poly environments with limited textures, so people could perceive the floor, and that helped people feel more comfortable in the environment.

Another surprise was how people perceive an app window size in the immersive environment. In 2D, people talk about making a window “smaller” or “bigger”, and everyone knows how to change that. In 3D, users were more likely to say they wanted to put a window “farther away” or “bring it closer”. It’s the same fix, design-wise: you just give people a way to resize the window. But it’s interesting how differently people relate to objects in 3D. It’s a more tactile, interactive mindset.

Who were you designing this browser for?
That’s a good question because, in the beginning, we didn’t know exactly. The Firefox Reality browser is one of the first standalone VR browsers that lets people surf the 3D web, and it is built to work with newer standalone headsets that are super-affordable and wireless, devices like the Oculus Go, HTC VIVE Focus, and Lenovo Mirage Solo (Google Daydream). So it’s a pretty new market.

Based on business and personal use cases, we came up with personas, most of which were familiar with VR and 3D already: Gamers, architects, students, business travelers, and grandparents. But really the market for this product is extremely wide. We expect that VR will create a new genre of media that I believe will become a new standard. And our testing bore that out: People were interested in watching video in VR, with friends, in a theater-like setting, so it’s interactive. One person was excited to watch in bed, because it was easier to stare straight up to the ceiling with his headset on than it was to mess around with a laptop.

What was the biggest design surprise?
We ran into a lot of issues with having a virtual keyboard in the interface. People complained that the keyboard was too wide and it was awkward to select the letters. It was too difficult to find special characters like umlauts.

We made a bunch of tweaks so the virtual keyboard was easier to use. We also accelerated our timeline for voice input. In the initial release, we added a microphone icon to the URL bar so the user can click on that and talk to the browser, instead of typing in a search query.

What else did you learn from user testing?
People brought up privacy. Could we add profiles, like Netflix has? Can they save programs for later viewing? Could they have a guest account? Also there’s a need to have parental controls, because adult content is a big interest in VR. VR content is still quite limited, but people are already thinking about how to manage access to it in their homes.

What design tools did you use to create a 3D UI?
We’re designers, not programmers, and short of learning Unity, which has a steep learning curve, we needed to find some in-VR design tools that allowed us to import 2D and 3D objects and place them in space. The design tools for 2D, like Adobe Illustrator, Photoshop, Sketch, and InVision, don’t work for 3D, and there are only a few immersive 3D design tools out there today. We tried Google Blocks, Gravity Sketch, and Tvori before landing on Sketchbox. It’s an early-stage in-VR design tool with just enough functionality to help us get a feel for size, distance, and spacing. It also helped us communicate those coordinates to our software engineers.

What’s next?
We’re now working on adding multi-window support, so people can multitask in a VR browser the same way they do in desktop browsers today. We’re also planning to create a Theater Video setting, giving people the option to watch movies in a theater mode: a bigger screen in a large, dark room. So it’ll be a lot like a physical movie theater, but in a VR headset. In the next 1.1 release, we’re planning to add support for 360-degree movies, bookmarks, and repositioning the browser window, and to explore additional voice input options as well as early design work for augmented reality devices. It’s a work in progress!

Friend of Add-ons: Jyotsna Gupta


Our newest Friend of Add-ons is Jyotsna Gupta! Jyotsna first became involved with Mozilla in 2015 when she became a Firefox Student Ambassador and started a Firefox club at her college. She has contributed to several projects at Mozilla, including localization, SuMo, and WebMaker, and began exploring Firefox OS app development after attending a WoMoz community meetup in her area.

In 2017, a friend introduced Jyotsna to browser extension development. Always curious and interested in trying new things, she created PrivateX, an extension that protects user privacy by opening websites that ask for critical user information in a private browsing window and removing Google Analytics tracking tokens. With her newfound experience developing extensions, Jyotsna began mentoring new extension developers in her local community, and joined the Featured Extensions Advisory Board.

After wrapping up two consecutive terms on the board, she served on the judging panel for the Firefox Quantum Extensions Challenge, evaluating more than 100 extensions to help select finalists for each award category. Currently, she is an add-on content reviewer on addons.mozilla.org and a Mozilla Rep. She frequently speaks about cross-browser extension development at regional events.

When asked about her experience contributing to Mozilla, Jyotsna says, “It has been a wonderful learning experience for me as a Mozillian. When I was a student, Mozilla was something that I could add to my profile to enhance my resume. There was a time when I refrained myself from speaking up, but today, I’m always ready to speak in front of a huge number of people. Getting involved with Mozilla helped me in meeting like-minded people around the globe, working with diverse teams, learned different cultures, gained global exposure and a ton of other things. And I’m fortunate enough to have wonderful mentors around me, boosting me up to see a brighter side in every situation.”

Jyotsna also has advice for newcomers to open source projects. “To the contributors who are facing imposter syndrome, trust me, you aren’t alone. We were all there once. We are here for you. May the force be with you.”

Thank you so much for your many wonderful contributions, Jyotsna!

To learn more about how to get involved in the add-ons community, please take a look at our wiki to see current contribution opportunities.

speedscope is a fast, interactive, web-based viewer for large performance profiles, inspired by the performance panel of Chrome developer tools and by Brendan Gregg’s FlameGraphs. Jamie Wong built speedscope to explore and interact with large performance profiles from a variety of profilers for a variety of programming languages.

The goal of speedscope is to provide a 60fps way of interactively exploring large performance profiles from a variety of profilers for a variety of programming languages. It runs totally in-browser, and does not send any profiling data to any servers. Because it runs totally in-browser, it should work in Firefox and Chrome on Mac, Windows, and Linux. It can be downloaded to run offline, either from npm, or just as a totally standalone zip file.

In doing performance work across many language environments at Figma, I noticed that every community tends to create its own tools for visualizing performance issues. With speedscope, I hoped to de-duplicate those efforts. To meet this goal, speedscope supports importing profiles from a growing list of profilers.

speedscope also has a stable documented file format, making it appropriate as a tool to target for visualization of totally custom profiles. This allows new profilers to support import into speedscope without needing to modify speedscope’s code at all (though contributions are welcome!). This is how I added support for visualizing rbspy profiles: rbspy#161. Firefox & Chrome both have capable profile visualizers, but the file formats they use change frequently.
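To make targeting that format concrete, here is a minimal sketch of an “evented” profile written as a TypeScript object. The field names follow the published file-format schema; the frame names and timestamps are invented for illustration.

```typescript
// Minimal sketch of an "evented" speedscope profile. Field names follow the
// published file-format schema; frames, names, and times here are invented.
import { writeFileSync } from "fs";

const profile = {
  $schema: "https://www.speedscope.app/file-format-schema.json",
  shared: {
    // Frames are listed once and referenced by index in the events below.
    frames: [{ name: "main" }, { name: "parseInput" }],
  },
  profiles: [
    {
      type: "evented",
      name: "example trace",
      unit: "milliseconds",
      startValue: 0,
      endValue: 14,
      events: [
        { type: "O", frame: 0, at: 0 },  // "O" opens (pushes) a frame
        { type: "O", frame: 1, at: 3 },
        { type: "C", frame: 1, at: 10 }, // "C" closes (pops) a frame
        { type: "C", frame: 0, at: 14 },
      ],
    },
  ],
};

// Serialized to JSON, this is a file speedscope can open directly.
writeFileSync("example.speedscope.json", JSON.stringify(profile, null, 2));
```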

Also, unlike other similar tools, speedscope is designed to be easy to host inside your own infrastructure. This allows you to integrate speedscope so that backend performance profiles can be viewed with a single click. At Figma, we have a Ruby backend, so I made an opinionated fork of rack-mini-profiler to do exactly this. If you allow cross-domain access to your performance profiles, you can even load them directly into https://www.speedscope.app via a #profileUrl=… hash parameter.
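For example, with a hypothetical profile hosted at https://example.com/trace.json, visiting https://www.speedscope.app/#profileUrl=https://example.com/trace.json would fetch and display that profile (assuming the hosting server sends the appropriate CORS headers).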

What can it do?

speedscope is broken down into three primary views: Time Order, Left Heavy, and Sandwich.

Time Order

In the “Time Order” view (the default), call stacks are ordered left-to-right in the same order as they occurred in the input file, which is usually the chronological order they were recorded in. This view is most helpful for understanding the behavior of an application over time, e.g. “first the data is fetched from the database, then the data is prepared for serialization, then the data is serialized to JSON”.

The horizontal axis represents the “weight” of each stack (most commonly CPU time), and the vertical axis shows you the stack active at the time of the sample. If you click on one of the frames, you’ll be able to see summary statistics about it.

Left Heavy

In the “Left Heavy” view, identical stacks are grouped together, regardless of whether they were recorded sequentially. Then, the stacks are sorted so that the heaviest stack for each parent is on the left — hence “left heavy”. This view is useful for understanding where all the time is going in situations where there are hundreds or thousands of function calls interleaved between other call stacks.

Sandwich

The “Sandwich” view is a table view in which you can find a list of all functions and their associated times. You can sort by self time or total time.

It’s called the “Sandwich” view because if you select one of the rows in the table, you can see flamegraphs for all the callers and callees of the selected row.


Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Hello everyone,

We would like to thank you for reading the l10n reports and keeping updated on what’s happening in the Mozilla l10n community. And we would also like to make it even better! If you have a few minutes before or after reading this month’s report, could you please take this short, anonymous survey?

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

Firefox 63 is currently available in release, while 64 is in beta, and 65 is in Nightly. The last day to ship localization updates for beta will be November 27.

Most of the strings landed towards the end of the Nightly cycle are still related to Privacy (Content Blocking, Trackers, etc.) and Security (certificate error pages), but there’s also a new about:performance page, using Fluent for localization.

Speaking of Fluent, as anticipated in the September edition of this report, expect a lot of strings to move to FTL files in the coming weeks. You can also visit arewefluentyet.com to visualize the work done in this area and get an idea of how many strings are left.

A new project, strictly connected to Firefox, is also available in Pontoon for localization: Firefox Monitor. There are 2 distinct parts:

An add-on, which displays a notification (doorhanger) when you visit a website with known data breaches.

The actual website, monitor.firefox.com, and the text used to send emails to users.

Both projects are scheduled for launch in the first half of November.

What’s new or coming up in mobile

With version 63 slowly rolling out to users over the coming week, you will now be able to use Firefox for Android in English for Canada (en-CA) and Ligurian (lij) on the release version! Congratulations to those teams for completing and shipping this work.

We also shipped Firefox for iOS v14 a few days ago; the main new features are:

Siri shortcuts

Dark Theme improvements

Performance improvements

A more granular Clear Private Data settings panel

A new experiment just came out in Indonesia, and you can try it out too: ScreenshotGO. This project is shipping on Google Play in Indonesia only for now, but you can give it a try by installing it from its GitHub project page. The main idea is to provide better screenshot management on Android devices. Users can add a “Go” button to their device to speed up taking screenshots.

What’s new or coming up in Foundation projects

Fundraising

The October fundraiser went out over the last few days and is doing pretty great so far, thanks a lot to everyone who contributed! It’s definitely helping the Foundation raise money to fund next year’s programs.

The fundraising email schedule for November and December is shaping up, starting on Giving Tuesday:

Email #1: Giving Tuesday (11/27)

Email #2: 12/03

Email #3: 12/17

Email #4: 12/27

Email #5 (Mitchell message): 12/31

It’s likely we will get help from a vendor for some of those emails, especially if the copy gets approved right before the holidays. Of course we will keep you updated.

Advocacy

The Advocacy team is working on test campaigns around misinformation and plans to launch one or two of them early this month. The campaigns can potentially be launched globally, but will always keep a focus on Europe. Markets and locales will depend heavily on the campaign targets.

The campaigns will have 3 main goals:

To grow public support and demand for policymakers to tackle misinformation with healthy internet policies.

To provide policymakers with a network of accessible experts and reliable information to help shape healthy internet policies.

To engage corporations and build industry allies to tackle this issue.

What’s new or coming up in Support

– New and updated Firefox 63 content awaits, giving your talent another chance to shine. Click the links below, see if your locale is waiting for your involvement and go for it!

– Warm thanks to the Bengali community for organizing a fun and exciting event that included localizing Support content.

What’s new or coming up in Pontoon

– Errors & Warnings. We launched support for Errors & Warnings in Pontoon, which allow you to easily identify and fix translations breaking Firefox builds or exceeding the length limit on Mozilla.org. Note that Warnings, unlike Errors, mean that we’re not completely sure that the string contains critical issues, so it might be OK to just leave them unchanged. For more details, check out the documentation or read more about Errors & Warnings and a handful of other novelties in Pontoon.

– Changing Machine Translation provider. We switched to the Google Cloud Translation API as our machine translation provider. Previously, we used a free plan of the Microsoft Translator Text API, which only worked with the old version of the API. That version is now being deprecated and has lately started behaving unreliably, often not returning any results. One of the benefits of using Google Translate is that the number of Mozilla locales with machine translation support has more than doubled (from 48 to 103).

Events

A workshop organized by the Mozilla Nativo community is about to kick off in a few weeks in Oaxaca, from Nov 10 to 11.

Another community workshop is right around the corner: the South East Asia event will take place in Hanoi from Nov 17 to 18.

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).

Friends of the Lion

Image by Elio Qoshi

Shout-out to Stoyan (one of our Bulgarian localizers) for working on a patch that adds several Firefox add-ons to help localizers test all permission strings. The update includes all recent WebExtensions permissions. Check it out here!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Principles of Mixed Reality Permissions


Virtual and Augmented Reality (VR and AR) — known together as Mixed Reality (MR) — introduce a new dimension of physicality to current web security and privacy concerns. Problems that are already difficult on the 2D web, like permissions and clickjacking, become even more complex when users are immersed in a 3D experience. This is especially true on head-worn displays, where there is no analogous concept to the 2D “window,” and everything a user sees might be rendered by the web application. Compounding the difficulty of obtaining permission is the more intimate nature of the data collected by the additional sensors required to enable AR and VR experiences.

To enable immersive MR experiences, devices have sensors that not only capture information about the physical world around the user (far beyond the sensors common on mobile phones), but also capture personal details about the user and (possibly) bystanders. For example, these sensors could create detailed 3D maps of the physical world (either by using underlying platform capabilities, like the ability to intersect 3D rays with a model of the world around the user, or by direct camera access), infer biometric data like height and gait, and potentially find and recognize nearby faces using the numerous cameras typically present on these devices. The infrared sensor that detects when a head-mounted device is worn could eventually disclose more detailed biometrics like perspiration and pulse rate, and some devices already incorporate eye-tracking.

For each sensor, there are straightforward uses in MR applications. A model of the world allows devices to place content on surfaces, hide content under tables, or warn users if they’re about to walk into a wall in VR. A user’s height and gait are revealed by the precise 3D motion of their head and hands in the world, information that is essential for rendering content from the correct position in an immersive experience. Eye-tracking can support natural interaction and allow disabled people to navigate using just their eyes. Access to camera data allows applications to detect and track objects in the world, like equipment being repaired or props being used in a game.

Unfortunately, there are concerns associated with each sensor—a data leak involving users’ home data could violate their right against unreasonable search; height and gait can be used as unique personal identifiers; a malicious application could use biometric data like pupil tracking and perspiration to infer users’ political or sexual preferences or track the location of bystanders who have not given consent and may not even be aware they are being seen. This is particularly worrying when governments may have access to this data.

At Mozilla, our mission is empowering people on the internet. The web is an integral part of modern life, and individual security and privacy are fundamental rights. When there are potential negative consequences, browsers typically request consent. However, as we collect and pass more data over the internet, we’ve fallen behind on ensuring users give informed consent. This trend could have far-reaching impact on users as more and more of their interactions move onto MR devices.

Informed Consent

The idea of informed consent originates in medical ethics and the idea that individuals have the right to exercise control over critical aspects of their lives. The internet is now a fundamental piece of people’s lives and society in general, and at Mozilla we strongly believe that informed consent is a right on the internet as well. Unfortunately, providing informed consent for internet users suffers from similar issues as informed consent in medicine, where users may not understand what they are being told and may not be motivated to consider their choices in the moment. Most importantly, the immersive web must have a foundation of trust to start from.

Obtaining informed consent requires disclosure, comprehension, and voluntariness. In order to be informed, people must have all necessary information, presented in a way they understand; in this context, that includes the data being collected or transmitted and the risks of unauthorized disclosure. To be able to consent, a person must not only be able to understand the disclosed information, but also be able to make a decision free of coercion or manipulation.

Completely and accurately presenting the information required for informed consent is challenging. Permissions have already become too complex to easily communicate to users what data is gathered and the potential consequences of its use or misuse. For example, PokémonGo uses access to the accelerometer and gyroscope in the phone to align the Pokémon with the player’s orientation in the world and to determine if they might be driving (in which case they shouldn’t be playing the game). However, the same sensor data can also be used by a bad actor to recover your password. These more subtle risks may be linked to more severe consequences.

Interactions between multiple sensors present an additional permissions challenge—what happens when we combine accelerometer data with biometric data and microphone access? What happens if we add camera access? Individually, these sensors have complex threats; taken together, it is difficult to convey the full breadth of possible risks without sounding hyperbolic.

Given the new challenges of the immersive web, we have an opportunity to rework how we approach permissions and consent to better empower people. While we don’t yet understand what to tell users, we propose four principles as the basis for approaching this problem: permissions should be progressive, accountable, comfortable, expressive (PACE).

Principles

Progressive

The idea of progressive web applications is well understood in the web community, referring to the design of websites that work on a variety of devices and take advantage of the capabilities of each, creating progressively more capable sites as the capabilities of the device better match their needs. In Mixed Reality, the capabilities of devices are much more varied, requiring more dramatic changes to sites that want to support as many people as possible. Beyond just device capabilities, the intimate (even invasive) nature of AR sensing means that users may not want to grant the full capabilities of their device to all websites.

To both support a diversity of devices and respect user privacy, browsers need to embrace the idea of progressive permissions—giving people better controls over permission granting—by providing context for sensor access and enhancing the capabilities granted to websites gradually. This principle is closely related to the concept of informed consent; by requesting dangerous permissions out of context, sites risk providing incomplete disclosure and impacting comprehensibility. For example, most applications and sites request all necessary permissions at install or startup, then persist those permissions indefinitely.

The idea of providing context for permissions is not new; some mobile apps and websites already present people with an explanation of the permissions they will request at startup, along with a description of why access is required. Users can then approve or deny each permission at that point. If the user later accesses a feature that requires a denied permission, the application can re-present the request.

Part of “progressivity” is responsibly collecting data only when needed and not persisting sensor use when it isn’t necessary. A person who has accepted microphone access to allow verbal input has not accepted unfettered microphone access for eavesdropping.

Therefore, progressive permissions should also be bidirectional, allowing users to turn permissions on and off repeatedly throughout the lifetime of a web app. In this example, a user might reasonably expect a site to use the microphone during input, and then stop using the sensor when input is complete—even if it still has permission to use it.
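As a rough sketch of what this bidirectional pattern could look like in page code today, using the standard getUserMedia API (the function names here are hypothetical, and this is one possible shape, not a prescribed one):

```typescript
// Hypothetical sketch: hold the microphone only while voice input is active,
// and release the device as soon as input completes, even if the underlying
// permission grant persists.
let activeStream: MediaStream | null = null;

async function startVoiceInput(): Promise<void> {
  // Request the stream in direct response to a user action, so any
  // permission prompt appears with meaningful context.
  activeStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  // ... feed activeStream into speech recognition here (not shown) ...
}

function stopVoiceInput(): void {
  // Stopping every track releases the microphone; the browser can then
  // show that the site is no longer listening.
  activeStream?.getTracks().forEach((track) => track.stop());
  activeStream = null;
}
```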

Also consider an application that requests camera access. At home, I grant it. At work, I open the application and it immediately uses the camera, compromising confidential information. We don’t want to keep prompting, but want the user to be aware of, and have control over, when sensor data is available to the application, changing permissions as they desire, depending on their preferences, context and needs (in contrast to current permissions, such as the camera permissions in the figure below). This principle is mutually reinforced by accountability.

Accountable

Accountability pertains to what happens after a permission is granted. All active or granted permissions should be easy to inspect and easy to change. We envision an easy-to-access user interface that lists:

current permissions

when each permission was approved/denied

data currently collected/monitored by the page

a toggle that allows easy switching between approval/denial of each permission (without requiring page reload)

Revocation should be straightforward, and only impact related features (revoking camera access should only affect features that require the camera, not prevent use of the entire site).
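Parts of this are already expressible with the web’s Permissions API, which lets a page observe grants and revocations without a reload. A sketch follows; note that which permission names can be queried varies by browser, so treat this as illustrative rather than universally supported.

```typescript
// Sketch: observe camera permission changes without reloading the page.
// Browser support for querying "camera" varies; this is illustrative only.
async function watchCameraPermission(): Promise<void> {
  const status = await navigator.permissions.query({
    name: "camera" as PermissionName,
  });
  console.log(`camera permission is currently: ${status.state}`);
  status.onchange = () => {
    // On revocation, disable only the camera-dependent features
    // rather than breaking the whole site.
    console.log(`camera permission changed to: ${status.state}`);
  };
}
```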

Additionally, when a website uses device resources, such as accessing files, there should be a method to hold the site accountable for resources accessed and/or modified. As browsers adopt new architectures to improve security through techniques like site isolation, identifying which pages are using which resources becomes easier, allowing browsers to report more accurate and granular usage data to users.

Examples of browsers continuing to execute JavaScript even after the browser is closed or the screen is turned off are troubling and violate accountability expectations. Some sensors, including motion and light sensors, aren’t protected by permissions and are exposed to JavaScript. These sensors also represent potential side channels for retrieving sensitive data and should be considered when designing accountability measures.

Comfortable

Users already report fatigue with excessive permissions requests. Embracing progressivity and accountability without taking this fatigue into account runs the risk of disrespecting users’ attention and increasing that fatigue. Therefore, permissions must also be comfortable. When we talk about permissions being comfortable, we’re explicitly referring to this need to balance user control with reduced friction. Interrupting users’ tasks, asking for permission at the wrong times, and excessive permissions requests can lead people to “give up” and automatically accept permissions to “get on with it.”

As we increase the amount and variety of information being sensed, we should consider alternatives to simple permission dialogs. For example, in some cases, browsers could use implicit permissions based on user action (e.g., pressing a button to take a picture might implicitly give camera access). In 3D immersive MR, where the user is using a head-worn display, permission requests that are presented in the immersive environment should provide a comfortable UX that is easily identified as being presented by the browser (as opposed to the page). If requests are jarring or visually uncomfortable, users may not take their time and consider them, but quickly accept (or dismiss) them to return to the immersive experience. Over time, we hope the web community will develop a consistent design language for various permissions across multiple browsers and environments.

Approaches to comfort can build on the previous principles: implicitly granting one kind of permission can be balanced by maintaining accountability and visibility of what data the site has access to, and by providing a simple and obvious way to examine and modify permissions.

Expressive

Expressiveness relates to the browser handling different permissions for different sensors differently, instead of assuming one size fits all (i.e., presenting a similar sort of prompt for any capability that needs user permission). The current permissions approach divides sensors into two categories: dangerous (requiring a prompt) and not (generally accessible without additional user input). Unfortunately, interactions between “not-dangerous” sensors, like the accelerometer and the touch screen used for input, can leak data like passwords (by watching the motion of the device when the user types)[1]. In an immersive context, devices have considerably more powerful sensors, resulting in more complex and difficult to predict interactions.

A possible solution to more expressive permissions is permission bundling, grouping related permissions together. However, this risks violating user expectations and could result in a less progressive approach.

Entering immersive mode will automatically require activating certain sensors; for example, a basic VR application will use motion sensors for rendering and be given an estimate of where the floor is so it can avoid placing virtual objects below the floor; from these, an application will be able to infer your height. These sorts of secondary inferences are not always so obvious. Even in a small study of five users, three participants believed that the only data collected by their VR device was either data they provided when creating an account or basic usage data (such as how frequently they use an application). Only two participants were aware that the device sensors collected and transmitted much more data. The richer the application, the more likely one or more of the sensors involved will be transmitting data that can be used to uniquely identify individuals.

One of these three participants explicitly stated that their VR system, an Oculus Rift, could not collect audio data.

Looking Forward

Accurately and completely explaining the data that’s being collected and potential consequences is central to acquiring informed consent, but there’s a danger that permissions prompts will become opaque legal waivers. As we add more sensors to devices and collect more personal and environmental data, it’s tempting to simply add more permission prompts. However, permission fatigue is already a serious issue.

When possible, we should identify opportunities for implicit consent. For example, you don’t have to give permission every time you move or click a mouse on the 2D web. When we do require explicit consent, platforms should provide a comfortable and consistent user experience.

The goal of permissions should be to obtain informed consent. In addition to designing technical solutions, we need to educate the public about the types of data collected by devices and the potential consequences. While this is necessary for making informed choices about permissions, it’s not sufficient. We need to combine the three aspects of informed consent (disclosure, comprehension, voluntariness) with the four PACE principles (progressive, accountable, comfortable, expressive) to provide an immersive web experience that empowers people to take control of their privacy on the internet.

The strength of the web is the ability for people to casually and ephemerally browse pages and follow links while knowing that their browser makes this activity safe—this is the foundation of trust in the web. This foundation becomes even more important in the immersive web due to the potential new pathways for abuse of the rich, intimate data available from these devices.

Current events demonstrate the dangers of rampant data collection and misuse of personal data on the web; mixed reality devices, and the new kinds of data they generate, present an opportunity to change the conversation about permissions and consent on the web.

We propose the PACE principles to encourage MR enthusiasts and privacy researchers to consider new approaches to data collection that will inform and empower users while respecting their time and energy. These solutions will not all be technical, but will likely include education, advocacy, and design leadership. As VR and AR devices enter the mainstream tech environment, we should proactively explore the viability of new directions, rather than waiting and reacting to the greater damage that might come from future data breaches and abuse.

[1] In this specific case, and for this reason, the devicemotion API has been deprecated in favor of a new sensor API. ↩︎

Firefox 64 Beta 8 Testday, November 9th


Hello Mozillians,

We are happy to let you know that Friday, November 09th, we are organizing Firefox 64 Beta 8 Testday. We’ll be focusing our testing on: Multi-Select Tabs and Removal of Live Bookmarks and Feed.


WebRender’s 28th newsletter is here, and as requested, today’s little story is about picture caching. It’s a complex topic so I had to simplify a lot here.

WebRender’s original strategy for rendering web pages was “we re-render everything each frame”. Quite a bold approach, when all modern browsers’ graphics engines are built around the idea of first rendering web content into layers and then compositing these layers onto the screen. This compositing approach can be seen as a form of mandatory caching. It is driven by the observation that most websites are very static, and it optimizes for that case, often at the expense of making dynamic content harder to handle.

WebRender, on the other hand, almost took it as a mission statement to do the opposite and go back to a simpler rendering model that just renders everything directly, without a painting/compositing separation.

I do have to add a bit of nuance: as you scroll with WebRender today, we re-draw a rectangle for each glyph on screen each frame, but the rasterization of the glyphs has always been cached in a traditional text rendering fashion. Most other things were initially redrawn each frame.

Working this way has a few advantages. In Gecko, a lot of code goes into figuring out what should go into which layer, and within these layers, figuring out the minimal amount of pixels that need to be re-drawn. More often than we’d like, figuring these things out takes longer than it would have taken to just re-draw everything (but of course you don’t know that until it’s too late!). Creating and destroying layers is very costly, which means that when heuristics fail to guess what will change, stuttering can happen, and it looks bad.
So layerization is expensive, its heuristics are hard to maintain, and overall it is hard to make it work well with very dynamic content where a lot of elements are animated and transition from static to animated (motion-design types of things).

On the other hand, WebRender spends no time figuring out layerization, and the cost of changing a single thing is about the same as the cost of changing everything. Instead of spending time optimizing for static content on the assumption that rendering is expensive, the overall strategy is to make rendering cheap and optimize for dynamic and static content alike.

This approach performs well and scales to very complex web pages, but there has to be a limit to how much work can be done each frame. And web developers have no limit, as shown by this absolutely insane reproduction of an oil painting in pure CSS. Most browsers will spend a ton of time painting this incredibly complex page into a layer and will let you scroll through it smoothly by just moving the layer around. WebRender, on the other hand, re-draws everything each frame as you scroll, and on most GPUs this is too much. Ouch.
In addition, even if WebRender is fast enough to render complex content every frame at 60fps, there are valuable power optimizations we could get from not redrawing some things continuously.

In short, a “picture” in WebRender is a collection of drawing commands, and the scene is represented as a tree of pictures. The idea behind picture caching is to have a very simple cost model for rendering a picture (much, much simpler than Gecko’s layerization heuristics) and opt into caching the rendered content of a picture when we think it is profitable (as opposed to always having to cache the content into a layer). Because we don’t have a separation between painting and compositing in WebRender, switching between cached and non-cached isn’t very expensive, unlike creating and destroying layers in Gecko today, and we don’t need double or triple buffering as we do for layers, which also means a lot less memory overhead.
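To make the cost-model idea concrete, here is an illustrative sketch in TypeScript pseudocode. WebRender itself is written in Rust, and its real cost model and surface management are more involved; every name below is invented for illustration.

```typescript
// Illustrative sketch only, not WebRender's actual implementation.
type Surface = object;

interface Picture {
  estimatedDrawCost: number;      // rough cost of re-rendering this picture
  changedSinceLastFrame: boolean;
  cachedSurface: Surface | null;  // previously rendered pixels, if any
}

declare function rasterize(p: Picture): Surface; // draw the picture to a surface
declare function blit(s: Surface): void;         // cheap copy into the frame
declare function drawDirect(p: Picture): void;   // issue draw commands directly

const BLIT_COST = 1; // cost of re-using a cached surface, in the same units

function renderPicture(p: Picture): void {
  if (!p.changedSinceLastFrame && p.estimatedDrawCost > BLIT_COST) {
    // Static and expensive: render once, then re-use the pixels each frame.
    if (p.cachedSurface === null) {
      p.cachedSurface = rasterize(p);
    }
    blit(p.cachedSurface);
  } else {
    // Dynamic or cheap: draw directly. With no painting/compositing split,
    // flipping between the cached and direct paths is inexpensive.
    p.cachedSurface = null;
    drawDirect(p);
  }
}
```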

I’m excited about this hybrid approach between traditional compositing and WebRender’s initial “re-draw everything” strategy. I think that it has the potential to offer the best of both worlds. Glenn is making good progress on this front and we are hoping to have the core of the picture caching infrastructure in place before WebRender hits the release population.

Enabling WebRender in Firefox Nightly

In about:config, set “gfx.webrender.all” to true, then restart Firefox.

November’s Featured Extensions

Pick of the Month: Undo Close Tab by Manuel Reimer. Access recently closed tabs by right-clicking the icon in your toolbar. “The extension does exactly what is stated: it restores tabs, and not just the last one closed, but up …


This article is part four of the series that reviews the user testing conducted on Hubs by Mozilla, a social XR platform. Previous posts in this series have covered insights related to accessibility, user experience, and environmental design. The objective of this final post is to give an overview of how the Extended Mind and Mozilla collaborated to execute this study and make recommendations for best practices in user research on cross platform (2D and XR) devices.

PARTICIPANTS WILL MAKE OR BREAK THE STUDY

Research outcomes are driven by participant quality, so plan to spend a lot of time up front on recruiting. If you don’t already have defined target users, pick a user profile and recruit against that. In this study, Jessica Outlaw and Tyesha Snow of The Extended Mind sought people who were tech savvy enough to use social media and communicate on smartphones daily, but did not require that they owned head-mounted displays (HMDs) at home.

The researchers’ approach was to recruit for the future user of Hubs by Mozilla, not the current user, who might be an early adopter. Across the ten participants in the study, a broad range of professions were represented (3D artist, engineer, realtor, psychologist, and more), which in this case was ideal because Hubs exists as a standalone product. However, if Hubs were at an earlier stage, where only concepts or wireframes could be shown to users, it would have been better to include people with VR expertise, because they could more easily imagine its potential.

In qualitative research, substantial insights can be generated from between six and twelve users. Beyond twelve users, there tends to be redundancy in the feedback, which doesn’t justify the extra costs of recruiting and interviewing those folks. In general, there is more value in running two smaller studies of six people at different iterations of product development, rather than just one study with a larger sample size. In this study, there were ten participants, who provided both diversity of viewpoints and enough consistency that strong themes emerged.

The researchers wanted to test Hubs’ multi-user function by recruiting people to come in pairs. Having friends and romantic partners participate in the study allowed The Extended Mind to observe authentic interactions between people. While many of them were new to XR, and some were really impressed by the immersive nature of the VR headset, they were grounded in a real experience of talking with a close companion.

For testing a social XR product, consider having people come in with someone they already know. Beyond increasing user comfort, this approach has another advantage: it was more efficient for the researchers, who completed sessions with ten people in a single day, which is a lot in user testing.

Summary of recruiting recommendations

Recruit participants who represent the future target user of your product (identifying user profiles is often a separate research project in user-centered design)

The farther along the product is in development, the less technologically sophisticated users need to be

You can achieve important insights with as few as six participants.

To test social products, consider bringing in people in pairs. This can also be efficient for the researchers.

COLLECTING DATA

It’s important to make users feel welcome when they arrive. Offer them water or snacks. Pay them an honorarium for their time. Give them payment before the interviews begin so that they know their payment is not conditional on them saying nice things about your product. In fact, give them explicit permission to say negative things about the product. Participants tend to want to please researchers so let them know you want their honest feedback. Let them know up front that they can end the study, especially if they become uncomfortable or motion sick.

The Extended Mind asked people to sign a consent form for audio, video, and screen recording. All forms should give people the choice to opt out from recordings.

In the Hubs by Mozilla study, the format of each interview session was:

Welcome and pre-Hubs interview on how participants use technology (20 min)

Use Hubs on 3 different devices (40 min)

Closing interview on their impressions of Hubs (30 min)

Pairs were together for the opening and closing interviews, but separated into different conference rooms for actual product usage. Jessica and Tyesha each stayed with a participant at all times to observe their behaviors in Hubs and then aggregated their notes afterward.

One point that was essential was giving people some experience with the Oculus Go before actually showing them Hubs; this was part of the welcome and pre-Hubs interview in this study. Due to the nascent stage of VR, participants need extra time to learn how to navigate the menus and controllers. Before arriving in any XR experience, people are going to need some familiarity with the device. As the prevalence of HMDs increases, taking time to give people an orientation will become less and less necessary. In the meantime, establishing a baseline understanding of where your experience exists in the context of the device’s ecosystem is important for users.

Summary of data collection recommendations

Prioritize participant comfort

Signal that you are interested in their genuine feedback

Ask participants for consent to record them

Conduct pre-test and post-test interviews with participants to get the most insights

Allow time for people to get used to navigating menus and using the controller on new HMDs before testing your experience.

GENERATING INSIGHTS

Once all the interviews have been completed, it’s time to start analyzing the data. It is important to come up with a narrative to describe the user experience. In this example, Hubs was found to be accessible, fun, and good for close conversations, and participants’ experiences were influenced by the environmental design. Those themes emerged early on and were supported by multiple data points across participants.

Using people’s actual words is more impactful than paraphrasing them or just reporting your own observations, because of the emotional impact of a first-person experience.

There are instances where people make similar statements but each used their own words, which helps bolster the overall point. For example, three different participants said they thought Hubs improved communication with their companion, but each had a different way of conveying it:

[Hubs is] “better than a phone call.”
“Texting doesn’t capture our full [expression]”
“This makes it easier to talk because there are visual cues.”

Attempt to weave together multiple quotes to support each of the themes from the research.

User testing will uncover new uses of your product and people will likely spontaneously brainstorm new features they want and more. Expect that users will surprise you with their feedback. You may have planned to test and iterate on the UI of a particular page, but learn in the research that the page isn’t desirable and should be removed entirely.

Summary of generating insights recommendations

Direct quotes that convey the emotion of the user in the moment are an important tool of qualitative research

Pictures, videos, and screen captures can help tell the story of the users’ experiences

Be prepared to be surprised by user feedback

Mozilla & The Extended Mind Collaboration

In this study, Mozilla partnered with The Extended Mind to conduct the research and deliver recommendations on how to improve the Hubs product. For the day of testing, two Hubs developers observed all research sessions and had the opportunity to ask the participants questions. Having Mozilla team members onsite during testing let everyone sync up between sessions and led to important revisions in how to phrase questions, which devices to test on, and more.

Because Jessica and Tyesha were outside the core Hubs team, they were closer to the user perspective and could take a more naturalistic approach to learning about the product. Their goals were to represent the user perspective across the entire project and provide strategic insights that the development team could apply.

This post has provided some background on the Hubs by Mozilla user research study and given recommendations on best practices for people who are interested in conducting their own XR research. Get in touch with contact@extendedmind.io with research questions and, also, try Hubs with a friend. You can access it via https://hubs.mozilla.com/.


This is the final article in a series that reviews user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product. Links to the previous posts are below.

The User Journey for Firefox Extensions Discovery

The ability to customize and extend Firefox is an important part of Firefox’s value to many users. Extensions are small tools that allow developers, and the users who install the extensions, to modify, customize, and extend the functionality of Firefox. For example, during our workflows research in 2016, we interviewed a participant who was a graduate student in Milwaukee, Wisconsin. While she used Safari as her primary browser for common browsing, she used Firefox specifically for her academic work because the extension Zotero was the best choice for keeping track of her academic work and citations within the browser. The features offered by Zotero aren’t built into Firefox; they are added to Firefox by an independent developer.

Popular categories of extensions include ad blockers, password managers, and video downloaders. Given the variety of extensions and the benefits to customization they offer, why is it that only 40% of Firefox users have installed at least one extension? Certainly, some portion of Firefox users may be aware of extensions but have no need or desire to install one. However, some users could find value in some extensions but simply may not be aware of the existence of extensions in the first place.

Why not? How can Mozilla facilitate the extension discovery process?

A fundamental assumption about the extension discovery process is that users will learn about extensions through the browser, through word of mouth, or through searching to solve a specific problem. We were interested in setting aside this assumption and observing the steps participants take and the decisions they make on their journey toward possibly discovering extensions. To this end, the Firefox user research team ran two small qualitative studies to better understand how participants solved a particular problem in the browser that could be solved by installing an extension. Our study helped us understand how participants do — or do not — discover a specific category of extension.

Our Study

Because ad blockers are easily the most popular type of extension by installation volume on Firefox (more generally, some kind of ad blocking was in use on 615 million devices worldwide in 2017), we chose them as an obvious candidate for this study. Their popularity, combined with many users’ perception of some advertising as invasive and distracting, posed a common, solvable problem for participants to engage with. (Please do not take our choice of this category of extensions for this study as an official or unofficial endorsement or statement from Mozilla about ad blocking as a user tool.)

Some ad blocker extensions currently available on AMO.

In order to better understand how users might discover extensions (and why they chose a particular path), we conducted two small qualitative studies. The first study was conducted in person in our Portland, Oregon offices with five participants. To gather more data, we then conducted a remote, unmoderated study using usertesting.com with nine participants in the US, UK, and Canada. In total, we worked with fourteen participants. These participants used Firefox as their primary web browser and were screened to make certain that they had no previous experience with extensions in Firefox.

In both iterations of the study, we asked participants to complete the following task in Firefox:

Let’s imagine for a moment that you are tired of seeing advertising in Firefox while you are reading a news story online. You feel like ads have become too distracting and you are tired of ads following you around while you visit different sites. You want to figure out a way to make advertisements go away while you browse in Firefox. How would you go about doing that? Show us. Take as much time as you need to reach a solution that you are satisfied with.

Participants fell into roughly two categories: those who installed an ad blocking extension — such as Ad Block Plus or uBlock Origin — and those who did not. The participants who did not install an extension came up with a more diverse set of solutions.

First, among the fourteen participants across the two studies, only six completed the task by discovering an ad blocking extension (two of these did not install the extension for other reasons). The participants who completed the task in this manner all followed a similar path: they searched via a search engine to seek an answer. Most of these participants were not satisfied with accepting only the extensions that were surfaced from search results. Instead, these participants exhibited a high degree of information literacy and used independent reviews to assess and triangulate the quality, reputation, and reliability of the ad blocking extension they selected. More generally, they used the task as an opportunity to build knowledge about online advertising and ad blocking; they looked outside of their familiar tools and information sources.

User journey for a participant who discovered an ad blocker to complete the task.

One participant (who we’ll call “Andrew”) in our first study is a technically savvy user but does not frequently customize the software he uses. When prompted by our task, he said, “I’m going to do what I always do in situations where I don’t know the answer…I’m going to search around for information.” In this case, Andrew recalled that he had once heard someone mention blocking ads online and searched for “firefox ad block” in DuckDuckGo. Looking through the search results, he found a review on CNET.com that described various ad blockers, including uBlock Origin. While Andrew said that he “trusts CNET,” he wanted to find a second review source that would provide him with more information. After reading an additional review, he followed a link to the uBlock Firefox extension page, read the reviews and ratings there (specifically looking for a large number of positive reviews as a proxy for reliability and trustworthiness), and installed the extension. As a final step, Andrew visited two sites (wired.com and huffingtonpost.com) that he believed used a large number of display ads and confirmed that the ad blocker was suppressing their display.

The remaining participants did not install an ad blocking extension to complete the task. Unlike the participants who ultimately installed an ad blocking extension, these participants did not use a search engine or an outside information source to seek a solution to the task. Instead, they fell back on tools and resources with which they were already familiar. Also, unlike the participants who successfully installed an ad blocking extension, these participants typically said they were dissatisfied with their solutions.

These participants generally followed two main routes to complete the task: they either looked within the browser in Preferences or other menus or they looked into the ads themselves. For the participants who looked in Preferences, many found privacy and security related options, but none related to blocking advertising. These participants did not seek out outside information sources via a search engine and eventually they gave up on the task.

A participant (“Marion”) in our first study recalled that she had seen a link once in an advertisement offering her “Ad Choices.” Ad Choices is a self-regulatory program in North America and the EU that is “aimed to give consumers enhanced transparency and control.” Marion navigated to a site she remembered had display advertising that offered a link to information on the program. She followed the link which took her to a long list of options for opting out of specific advertising themes and networks. Marion selected and deselected the choices she believed were relevant to her. Unfortunately, the confirmations from the various ad networks for her actions either did not work or did not provide much certainty into her actions. She navigated back to the site that she visited previously and could not discern any real difference in display advertising before and after enrolling in the Ad Choices program.

Marion attempting to opt out of advertising online using Ad Choices.

There were some limitations to this research. The task provided to participants represents only a single situation that might cue them to seek out an extension to solve a problem. In reality, we imagine a cumulative sense of frustration might prompt users to seek out a solution, and we would anticipate that they may hear about ad blocking through word of mouth or a news story (as participants in a similar study, run at the same time with participants who were familiar with extensions, demonstrated). Also, ad blocking is a specific domain of extension that has its own players and ecosystem. If we had asked participants, for example, to find a solution for managing their passwords, we would expect a different set of solutions.

At a high level, those participants who displayed elements of traditionally defined digital information literacy were successful in discovering extensions to complete the task. This observation emerged through the analysis process in our second study. In future research, it would be useful to include some measurement of participants’ digital literacy or include related tasks to determine the strength of this relationship.

Nevertheless, this study provided us with insight into how users seek out information to discover solutions to problems in their browser. We gathered useful data on users’ mental model of the software discovery process, the information sources they consulted to discover new functionality, and what kinds of information were important, reliable, and trustworthy. Those participants who did not seek out independent answers fell back on the tools and information with which they were already familiar; these participants were also less satisfied with their solutions. These results demonstrate a need to provide cues about extensions within tools users are already familiar with, such as Firefox’s Preferences/Options menu. Further, we need to surface cues in other channels (such as search engine results pages) that could lead Firefox users to relevant extensions.

Thanks to Philip Walmsley, Markus Jaritz, and Emanuela Damiani for their participation in this research project. Thanks to Jennifer Davidson and Heather McGaw for proofreading and suggestions.

The following post is based on a talk I presented at AmuseConf in Budapest about interviewing users.

I recently had a conversation with a former colleague who now works for a major social network. In the course of our conversation this former colleague said to me, “You know, we have all the data in the world. We know what our users are doing and have analytics to track and measure it, but we don’t know why they do it. We don’t have any frameworks for understanding behavior outside of what we speculate about inside the building.”

In many technology organizations, the default assumption is that user research will be primarily quantitative: telemetry analyses, surveys, and A/B testing. Technology and business organizations often default to a positivist worldview and consequently believe that quantitative results providing numeric measures have the most value. The hype surrounding big data methods (and the billions spent on marketing by vendors making certain you know about their enterprise big data tools) goes hand-in-hand with the perceived correctness of this set of assumptions. Given this ecosystem of belief, it’s not surprising that many in our industry perceive quantitative user research as the only user research an organization needs to conduct.

I work as a Lead User Researcher on Firefox. While I do conduct some quantitative user research, the focus of most of my work is qualitative research. In the technology environment described above, the qualitative research we conduct is sometimes met with skepticism. Some audiences believe our work is too “subjective” or “not reproducible.” Others may believe we simply run antiquated, market research-style focus groups (for the record, the Mozilla UR team doesn’t employ focus groups as a methodology).

I want to explain why qualitative research methods are essential for technology user research because of one well-documented and consistently observed facet of human social life: the concept of homophily.

This is a map of New York City based on the ethnicity of residents. Red is White, Blue is Black, Green is Asian, Orange is Hispanic, Yellow is Other, and each dot is 25 residents. Of course, there are historical and cultural reasons for the clustering, but these factors are part of the overall social dynamic. Source: https://www.flickr.com/photos/walkingsf/

Homophily is the tendency of individuals to associate and bond with similar others (the now classic study of homophily in social networks). In other words, individuals are more likely to associate with others based on similarities rather than differences. Social scientists have studied social actors associating along multiple types of ascribed characteristics (status homophily) including gender, ethnicity, economic and social status, education, and occupation. Further, homophily exists among groups of individuals based on internal characteristics (value homophily) including values, beliefs, and attitudes. Studies have demonstrated both status and value homophilic clustering in smaller ethnographic studies and larger scale analyses of social network associations such as political beliefs on Twitter.

Photos on Flickr taken in NY by tourists and locals. Blue pictures are by locals. Red pictures are by tourists. Yellow pictures might be by either. Source: https://www.flickr.com/photos/walkingsf

I bring up this concept to emphasize how those of us who work in technology form our own homophilic bubble. We share similar experiences, information, beliefs, and processes about not just how to design and build products and services, but also how many of us use those products and services. These beliefs and behaviors become reinforced through the conversations we have with colleagues, the news we read in our press daily, and the conferences we attend to learn from others within our industry. The most insidious part of this homophilic bubble is how natural and self-evident the beliefs, knowledge, and behaviors generated within it appear to be.

Here’s another fact: other attitudes, beliefs, and motivations exist outside of our technology industry bubble, and many of the people who hold them use our products and services. Some groups share values and statuses similar to those of the technology world, but others do not. Further, some values and statuses are so radically different from ours that they are not even part of the common vocabulary of our own technology industry homophilic bubble. To borrow from former US Secretary of Defense Donald Rumsfeld, “there are also unknown unknowns, things we don’t know we don’t know.”

This is all to say that insights, answers, and explanations are limited by the breadth of a researcher’s understanding of users’ behaviors. The only way to increase the breadth of that understanding is by actually interacting with and investigating behaviors, beliefs, and assumptions outside of our own behaviors, beliefs, and assumptions. Qualitative research provides multiple methodologies for getting outside of our homophilic bubble. We conduct in situ interviews, diary studies, and user tests (among other qualitative methods) in order to uncover these insights and unknown unknowns. The most exciting part of my own work is feeling surprised with a new insight or observation of what our users do, say, and believe. In research on various topics, we’ve seen and heard so many surprising answers.

There is no one research method that can answer all of our questions. If the questions we are asking about user behavior, attitudes, and beliefs are based solely on assumptions formed in our homophilic bubble, we will not generate accurate insights about our users no matter how large the dataset. In other words, we only know what we know and can only ask questions framed by what we know. If we are measuring, we can only measure what we know to ask. Quantitative user research needs qualitative user research methods in order to know what we should be measuring and to provide examples, theories, and explanations. Likewise, qualitative research needs quantitative research to measure and validate our work as well as to uncover larger patterns we cannot see.

An example of quantitative and qualitative research working iteratively.

It is a disservice to users and ourselves to ask only how much or how often and to avoid understanding why or how. User research methods work best as an accumulation of triangulation points of data in a mutually supportive, on-going inquiry. More data points from multiple methods mean deeper insights and a deeper understanding. A deeper connection with our users means more human-centered and usable technology products and services. We can only get at that deeper connection by leaving the technology bubble and engaging with the complex, messy world outside of it. Have the courage to feel surprised and your assumptions challenged.

(Thanks to my colleague Gemma Petrie for her thoughts and suggestions.)

Building a browser is hard; building a good browser inevitably requires gathering a lot of data to make sure that things that work in the lab work in the field. But as soon as you gather data, you have to make sure you protect user privacy. We’re always looking at ways to improve the security of our data collection, and lately we’ve been experimenting with a really cool technique called Prio.

Currently, all the major browsers do more or less the same thing for data reporting: the browser collects a bunch of statistics and sends them back to the browser maker for analysis; in Firefox, we call this system Telemetry. The challenge with building a Telemetry system is that the data is sensitive. In order to ensure that we are safeguarding our users’ privacy, Mozilla has built a set of transparent data practices which determine what we can collect and under what conditions. For particularly sensitive categories of data, we ask users to opt in to the collection and ensure that the data is handled securely.

We understand that this requires users to trust Mozilla — that we won’t misuse their data, that the data won’t be exposed in a breach, and that Mozilla won’t be compelled to provide access to the data by another party. In the future, we would prefer users to not have to just trust Mozilla, especially when we’re collecting data that is sufficiently sensitive to require an opt-in. This is why we’re exploring new ways to preserve your data privacy and security without compromising access to the information we need to build the best products and services.

Obviously, not collecting any data at all is best for privacy, but it also blinds us to real issues in the field, which makes it hard for us to build features — including privacy features — which we know our users want. This is a common problem and there has been quite a bit of work on what’s called “privacy-preserving data collection”, including systems developed by Google (RAPPOR, PROCHLO) and Apple. Each of these systems has advantages and disadvantages that are beyond the scope of this post, but suffice to say that this is an area of very active work.

In recent months, we’ve been experimenting with one such system: Prio, developed by Professor Dan Boneh and PhD student Henry Corrigan-Gibbs of Stanford University’s Computer Science department. The basic insight behind Prio is that for most purposes we don’t need to collect individual data, but rather only aggregates. Prio, which is in the public domain, lets Mozilla collect aggregate data without collecting anyone’s individual data. It does this by having the browser break the data up into two “shares”, each of which is sent to a different server. Individually the shares don’t tell you anything about the data being reported, but together they do. Each server collects the shares from all the clients and adds them up. If the servers then take their sum values and put them together, the result is the sum of all the users’ values. As long as one server is honest, then there’s no way to recover the individual values.
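To make the idea concrete, here is a minimal JavaScript sketch of the additive secret sharing behind those shares. This is only an illustration of the splitting-and-summing idea; real Prio additionally has clients attach zero-knowledge proofs that each submission is well-formed, which this sketch omits.

// Split a value into two shares that individually look random,
// but sum to the original value mod PRIME (values must stay below PRIME).
const PRIME = 2147483647n;

function split(value) {
  // A real implementation would use a cryptographically secure RNG here.
  const share1 = BigInt(Math.floor(Math.random() * Number(PRIME)));
  const share2 = ((value - share1) % PRIME + PRIME) % PRIME;
  return [share1, share2]; // each share goes to a different server
}

// Each server sums the shares it received from all clients:
const sumShares = (shares) => shares.reduce((a, b) => (a + b) % PRIME, 0n);

// Adding the two servers' sums (mod PRIME) yields the total of all users'
// values, without either server ever seeing an individual value.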

We’ve been working with the Stanford team to test Prio in Firefox. In the first stage of the experiment we want to make sure that it works efficiently at scale and produces the expected results. This is something that should just work, but as we mentioned before, building systems is a lot harder in practice than theory. In order to test our integration, we’re doing a simple deployment where we take nonsensitive data that we already collect using Telemetry and collect it via Prio as well. This lets us prove out the technology without interfering with our existing, careful handling of sensitive data. This part is in Nightly now and reporting back already. In order to process the data, we’ve integrated support for Prio into our Spark-based telemetry analysis system, so it automatically talks to the Prio servers to compute the aggregates.

Our initial results are promising: we’ve been running Prio in Nightly for 6 weeks, gathered over 3 million data values, and after fixing a small glitch where we were getting bogus results, our Prio results match our Telemetry results perfectly. Processing time and bandwidth also look good. Over the next few months we’ll be doing further testing to verify that Prio continues to produce the right answers and works well with our existing data pipeline.

Most importantly, in a production deployment we need to make sure that user privacy doesn’t depend on trusting a single party. This means distributing trust by selecting a third party (or parties) that users can have confidence in. This third party would never see any individual user data, but they would be responsible for keeping us honest by ensuring that we never see any individual user data either. To that end, it’s important to select a third party that users can trust; we’ll have more to say about this as we firm up our plans.

We don’t yet have concrete plans for what data we’ll protect with Prio and when. Once we’ve validated that it’s working as expected and provides the privacy guarantees we require, we can move forward in applying it where it is needed most. Expect to hear more from us in future, but for now it’s exciting to be able to take the first step towards privacy preserving data collection.

Browser extensions are wonderful. Nearly every day I come across a new Firefox extension that customizes my browser in some creative way I’d never even considered. Some provide amusement for a short time, while others have become indispensable to my …

At Mozilla, we continually strive to honor both principles. It’s why Firefox extensions written to the WebExtensions API are limited in their abilities and have good oversight, including automatic and manual review. It’s also why we make sure users can understand exactly what permissions they’ve granted to those extensions and what parts of their browser they can access.

In short, Mozilla makes every effort to ensure that the extensions we offer are trustworthy.

So it was with great interest that I read Google’s recent Chromium Blog post entitled “Trustworthy Chrome Extensions, by default.” It outlines upcoming changes to Chrome’s extension architecture designed to make “extensions trustworthy by default.” I thought it would be interesting to explore each of the announced changes and compare them to what Mozilla has built into Firefox.

User Controls for Host Permissions

“Beginning in Chrome 70, users will have the choice to restrict extension host access to a custom list of sites, or to configure extensions to require a click to gain access to the current page.”

Being able to review and modify the sites that an extension has access to, especially for extensions that ask to “access your data for all websites,” is a worthy goal. Mozilla has discussed similar ideas, but the problem always comes down to presenting this in a clear, uncomplicated way to a majority of users.

Having played a bit with this feature in Chrome, the implementation definitely seems targeted at power users. Extensions that request access to all websites still get installed with that access, so the default behavior has not changed.

The click-to-script option is intriguing, although the UX is a bit awkward. It’s workable if you have a single extension, but it becomes unwieldy to click and reload every site you visit for every installed extension.

Admittedly, getting this interface right in an intuitive and easy-to-use manner is not straightforward and I applaud Google for taking a shot at it. Meanwhile Mozilla will continue to look for ways Firefox can provide more permission control to a majority of extension users.

Extension Review Process

The post is vague about exactly what this entails, but it likely means these extensions will be flagged for manual review. This brings Chrome up to the standard that Firefox set last year, which is great news for the web. More manual review means fewer malicious extensions.

“We’re also looking very closely at extensions that use remotely hosted code, with ongoing monitoring.”

Firefox expressly forbids remotely hosted code. Our feeling is that no amount of review can eliminate the risks introduced when developers can easily and undetectably change what code is loaded by extensions. Mozilla’s policy ensures that no unreviewed code is ever loaded into the browser, and enforced signatures prevent reviewed code from being altered after release.

Code Readability Requirements

“Starting today, Chrome Web Store will no longer allow extensions with obfuscated code…minification will still be allowed.”

In reality, minified and obfuscated code are not very useful in extensions. In both Chrome and Firefox, extensions load locally (not over the network) so there is almost no performance advantage to minification, and obfuscation can be overcome by a dedicated person with readily available tools and sufficient effort.

Nevertheless, Mozilla permits both obfuscated and minified extensions in our store. Critically, though, Mozilla requires all developers to submit original, non-obfuscated, non-minified code for review, along with instructions on how to reproduce (including any obfuscation or minification) the store version. This ensures that reviewers are able to review and understand every extension, and that the store version is unaltered from the reviewed version.

As you might expect, this takes a significant investment of time and energy for both Mozilla and developers. We believe it is worth it, though, to allow developers to secure their code, if desired, while simultaneously providing thoroughly reviewed extensions that maintain user security and privacy.

Required 2-Step Verification

As a whole, the web is moving in this direction and requiring it for developer accounts is a strong step towards protecting users. Mozilla recently added two-step authentication for Firefox Sync accounts, and two-step authentication for Firefox extension developers is on the roadmap for the fourth quarter of 2018. Like Google, we expect to have this feature enabled by 2019.

Manifest v3

“In 2019 we will introduce the next extensions manifest version…We intend to make the transition to manifest v3 as smooth as possible and we’re thinking carefully about the rollout plan.”

In 2015, Mozilla announced we were deprecating our extremely popular extension system in favor of WebExtensions, an API compatible with Chrome, as well as Edge and Opera. There were several reasons for this, but a large part of the motivation was standards — a fundamental belief that adopting the API of the market leader, in effect creating a de facto standard, was in the best interests of all users.

It was a controversial decision, but it was right for the web and it represents who Mozilla is and our core mission. Three years later, while there still isn’t an official standard for browser extensions, the web is a place where developers can quickly and easily create cross-browser extensions that run nearly unchanged on every major platform.

So I would like to publicly invite Google to collaborate with Mozilla and other browser vendors on manifest v3. It is an incredible opportunity to show that Chrome embodies Google’s philosophy to “focus on the user,” would reaffirm the Chrome team’s commitment to open standards and an interoperable web, and be a powerful statement that working together on the future of browser extensions is in the best interests of a healthy internet.

Conclusion

While all of the changes Google outlined are interesting, some of them could go a step further in protecting users online. Nevertheless, I’d like to say: bravo! The motivation behind these changes is definitely in the spirit of Mozilla’s mission and a gain for the open web. With Chrome’s market share, these initiatives will have a positive impact on the security and privacy of millions of users around the world, and the web will be a better place for it.

A lot of work remains, though. Expect Mozilla to keep fighting for users on the web, launching new initiatives like Firefox Monitor to keep people safe, and advancing Firefox to be the best user agent you can have on your online journeys.

Europe was at the center of a milestone for women in tech today as nonprofit Women Who Tech and tech giant Mozilla announced the winners of the Women Startup Challenge Europe. Women-led startup finalists from across Europe pitched their ventures before a prestigious panel of tech industry executives and investors on 25 October at Paris’s City Hall, co-hosted by the office of Mayor Anne Hidalgo.

“While it’s alarming to see the share of funding going to women-led startups, compared to European companies as a whole, drop from 14% to 11% between 2016 and 2018, the Women Startup Challenge is on a mission to close the funding gap once and for all. If the tech world wants to innovate, solve the world’s toughest problems, and generate record returns, it will invest in diverse startups,” said Allyson Kapin, founder of Women Who Tech. “If investors don’t know where to look, our Women Startup Challenge program has a pipeline of over 2,300 women-led ventures who are ready to scale.”

Sampson Solutions from the UK won the grand prize, receiving $35,000 in funding via Women Who Tech to help scale their startup. The Audience Choice Award went to Inorevia from Paris, France. Mozilla awarded an additional $25,000 cash grant to Vitrue from the UK, selected by jury member Mitchell Baker, Chairwoman of Mozilla.

“Paris is determined to provide girls and women with the resources to occupy their rightful place in society and in the tech industry. We were thrilled to co-host the Women Startup Challenge Europe and showcase 10 talented women-led startups who are making an impact in this world,” said Deputy Mayor Jean-Louis Missika.

The Audience Choice Award went to Inorevia, whose work developing instruments for bioassays has minimized the cost, time, and manipulation needed for next-generation bioassays and precision medicine.

Mozilla Prize winner VitrueHealth is developing computer-vision-based tools that sit in the background of clinical assessments, autonomously measuring motor function metrics. This frees clinicians to focus on more complex patient interactions and allows them to detect and treat degradations in functional health, improving quality of life and saving millions in healthcare costs.

“I’m honored to award the Mozilla prize for privacy, transparency and accountability to Vitrue Health,” said Mitchell. “Vitrue creates data about mobility capabilities, makes that data accessible and useful, and provides it to patients. By providing patients with access to their data in a useful way, Vitrue offers us an example of how creating new data — even personal data — can be quite positive when it is handled well.”

🎉 WebRender is in beta! 🎉 There are still a number of blocking bugs, so WebRender will stay on beta for a few trains until it has received enough polish to hit the release population. This is an important milestone for everyone working on the project and the main piece of news outside of the bullet points below.

I’m increasingly running out of ideas for writing intros without repeating the same thing each week. So instead I’ll start the next few newsletters with a piece of WebRender history. Here is one:

Towards the beginning, WebRender’s overall architecture was really centered on an attempt to answer the question “Can we implement CSS rendering logic directly on the GPU?”. By that I mean that WebRender had a collection of shaders that very closely matched CSS properties. For example, a single image shader was able to handle all of the image and background-image properties, and a single border shader was able to do all of the different border styles, with parameters provided in layout space instead of device space.
This maybe doesn’t sound like much, but for someone who’s been used to seeing layers upon layers of abstractions between the output of layout and the final pieces of graphics code that writes into the window, this idea of implementing the CSS specification directly into shaders in a fairly straightforward way was quite remarkable and novel.
In today’s WebRender the shader system isn’t as close to a verbatim implementation of CSS specifications as it used to be. A lot of this “low level CSS” vibe remains but we also split and combine shaders in ways that better take advantage of the characteristics of modern GPUs.
To me, this ability to solve specific web rendering challenges in the high and low level layers alike without having to conform to old rendering models is one of WebRender’s greatest strengths.

Ongoing work

Nical is getting a subset of SVG filters to run on the GPU instead of the CPU fallback.

Bobby never stops improving memory usage.

Matt and Gankro are improving the interaction between blob images and scrolling.

Kats is standing up WebRender in Firefox for Android.

Enabling WebRender in Firefox Nightly

In about:config, set “gfx.webrender.all” to true, then restart Firefox.

Dweb: Identity for the Decentralized Web with IndieAuth

IndieAuth is a decentralized login protocol that enables users of your software to log in to other apps. It’s an extension to OAuth 2.0 that lets any website become its own identity provider, leveraging all the existing security considerations and best practices in the industry around authorization and authentication.

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

We’ve covered a number of projects so far in this series that require foundation-level changes to the network architecture of the web. But sometimes big things can come from just changing how we use the web we have today.

Imagine if you never had to remember a password to log into a website or app ever again. IndieAuth is a simple but powerful way to manage and verify identity using the decentralization already built into the web itself. We’re happy to introduce Aaron Parecki, co-founder of the IndieWeb movement, who will show you how to set up your own independent identity on the web with IndieAuth.

– Dietrich Ayala

Introducing IndieAuth

IndieAuth is a decentralized login protocol that enables users of your software to log in to other apps.

From the user perspective, it lets you use an existing account to log in to various apps without having to create a new password everywhere.

IndieAuth builds on existing web technologies, using URLs as identifiers. This makes it broadly applicable to the web today, and it can be quickly integrated into existing websites and web platforms.

IndieAuth has been developed over several years in the IndieWeb community, a loosely connected group of people working to enable individuals to own their online presence, and was published as a W3C Note in 2018.

IndieAuth Architecture

IndieAuth is an extension to OAuth 2.0 that enables any website to become its own identity provider. It builds on OAuth 2.0, taking advantage of all the existing security considerations and best practices in the industry around authorization and authentication.

IndieAuth starts with the assumption that every identifier is a URL. Users as well as applications are identified and represented by a URL.

When a user logs in to an application, they start by entering their personal home page URL. The application fetches that URL and finds where to send the user to authenticate, then sends the user there, and can later verify that the authentication was successful. Roughly, the exchange works like this:
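1. The user enters their personal URL into the app’s login form.
2. The app fetches that URL and discovers the user’s authorization endpoint (via a link like the one shown below).
3. The app redirects the user to that endpoint, identifying itself by its own URL and supplying a redirect URI and a random state value.
4. The user authenticates and approves the login.
5. The endpoint redirects the user back to the app with an authorization code.
6. The app verifies the code with the endpoint and receives the user’s confirmed URL in return.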

Get Started with IndieAuth

The quickest way to use your existing website as your IndieAuth identity is to let an existing service handle the protocol bits and tell apps where to find the service you’re using.

If your website is using WordPress, you can easily get started by installing the IndieAuth plugin! After you install and activate the plugin, your website will be a full-featured IndieAuth provider and you can log in to websites like https://indieweb.org right away!

To set up your website manually, you’ll need to choose an IndieAuth server such as https://indieauth.com and add a few links to your home page. Add a link to the indieauth.com authorization endpoint in an HTML <link> tag so that apps will know where to send you to log in.

<link rel="authorization_endpoint" href="https://indieauth.com/auth">

Then tell indieauth.com how to authenticate you by linking to either a GitHub account or email address.
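For instance, hypothetical rel="me" links for a GitHub account and an email address might look like this (the account URLs here are placeholders):

<link rel="me" href="https://github.com/yourusername">
<link rel="me" href="mailto:you@example.com">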

Note: This last step is unique to indieauth.com and isn’t part of the IndieAuth spec. This is how indieauth.com can authenticate you without you creating a password there. It lets you switch out the mechanism you use to authenticate, for example in case you decide to stop using GitHub, without changing your identity at the site you’re logging in to.

If you don’t want to rely on any third party services at all, then you can host your own IndieAuth authorization endpoint using an existing open source solution or build your own. In any case, it’s fine to start using a service for this today, because you can always swap it out later without your identity changing.

Now you’re ready! When logging in to a website like https://indieweb.org, you’ll be asked to enter your URL, then you’ll be sent to your chosen IndieAuth server to authenticate!

Learn More

If you’d like to learn more, OAuth for the Open Web talks about more of the technical details and motivations behind the IndieAuth spec.

You can learn how to build your own IndieAuth server at the links below:

Today, Mozilla is announcing funding for seven art and advocacy projects that shine a light on the AI at work in our everyday lives.

These seven projects are winners of Mozilla’s latest $225,000 Creative Media Awards. They hail from five countries. And they make AI’s impact on society understandable using science fiction, short documentaries, games, and more. These projects will launch to the public by June 2019.

Mozilla’s Creative Media Awards are part of our mission to support a healthy internet. They fuel the people and projects on the front lines of the internet health movement — from digital artists in the Netherlands to computer scientists in the United Arab Emirates to science fiction writers in the U.S.

Stealing Ur Feelings will be an interactive film that reveals how social networks and apps use your face to secretly collect data about your emotions. The documentary will explore how emotion recognition AI determines if you’re happy or sad — and how companies use that information to influence your behavior.

An early version of Stealing Ur Feelings

[2] Do Not Draw a Penis | by Moniker in the Netherlands | $50,000 prize

Do Not Draw a Penis will address automated censorship and algorithmic content moderation. Users will visit a web page and will be met with a blank canvas. Users can draw whatever they like, and an AI voice will comment on their drawings (e.g. “nice landscape!”). But if the drawing resembles a penis or other “forbidden” content, the AI will scold the user, take control, and destroy the image.

A Week With Wanda will be a web-based simulation of the risks and rewards of artificial intelligence. Wanda — an AI assistant — will interact with users over the course of one week in an attempt to “improve” their lives. But she quickly goes off the rails. Along the way, Wanda might send uncouth messages to Facebook friends, order you anti-depressants, or freeze your bank account. (Wanda’s actions are simulated, not real.)

A potential conversation from A Week With Wanda

[4] Survival of the Best Fit | by Alia ElKattan in the United Arab Emirates, and Gabor Csapo, Jihyun Kim, and Miha Klasinc | $25,000 prize

Survival of the Best Fit is a web simulation of how blind usage of AI in hiring can reinforce workforce inequality. Users will operate an algorithm and see first-hand how white-sounding names are often prioritized, among other biases.

[5] The Training Commission | by Ingrid Burrington and Brendan Byrne in the U.S. | $25,000 prize

The Training Commission is a work of web-based speculative fiction that tells the stories of AI’s unintended consequences and harms to public life. It unfolds from the perspective of a journalist who is reckoning with how deeply AI has scarred society.

What Do You See? highlights how differently humans and algorithms “see” the same image, and how easily bias can take root. Humans will visit a website and describe an image in their own words, without prompts. Then, humans will see how an image captioning algorithm describes that same image.

Mate Me or Eat Me is a dating simulator that examines how exclusionary real dating apps can be. Users create a monster and mingle with others, swiping right and left to either mate with or eat others. Along the way, users have insight into how their choices affect who they see next — and who is excluded from their pool of potential paramours.

A mock-up from Mate Me or Eat Me

~

These seven awardees were selected based on quantitative scoring of their applications by a review committee and a qualitative discussion at a review committee meeting. Committee members include Mozilla staff, current and alumni Mozilla Fellows, and outside experts. The selection criteria are designed to evaluate the merits of the proposed approach; diversity in applicant background, past work, and medium were also considered.

These awards are part of the NetGain Partnership, a collaboration between Mozilla, Ford Foundation, Knight Foundation, MacArthur Foundation, and the Open Society Foundation. The goal of this philanthropic collaboration is to advance the public interest in the digital age.

University of Dundee and Mozilla Announce Doctoral Program for ‘Healthier IoT’

With €1.5m in EU funding, this paid PhD program will explore how to build a more open, secure, and trustworthy Internet of Things

This week, the University of Dundee and Mozilla are announcing a new, innovative PhD program: OpenDoTT (Open Design of Trusted Things). This program will train technologists, designers, and researchers to create and advocate for connected products that are more open, secure, and trustworthy. The project is made possible through €1.5m in funding from the EU’s Horizon 2020 program.

Connected technologies need to be built responsibly, and doing so requires the cultivation of design research and advocacy. OpenDoTT addresses this need at a systems level: by training the very people who will develop and influence IoT technology, we can create positive change that starts at the drawing board.

The challenges of the Internet of Things (IoT) require interdisciplinary thinking. And so the program will be hosted across several locations with training by leading organizations in different fields. The doctoral researchers will begin at the University of Dundee to learn about design research, and then move to Mozilla’s office in Berlin to focus on internet health. Throughout their studies, they will receive training on open hardware from Officine Innesto; field research from Quicksand and STBY; internet policy from the Humboldt Institute for Internet and Society; responsible IoT from Thingscon; and usable security from SimplySecure.

University of Dundee will lead training in design research, building on their world-class work on the Internet of Things, co-creation, and craft technology. The university’s past projects have explored the future of voice assistants in the home and IoT for independent retailers.

Mozilla will lead training around open technology and healthy internet practices. Mozilla focuses on fueling the movement for a healthy internet by connecting open internet leaders with each other and by mobilizing grassroots activists around the world.

Professor Jon Rogers, the project coordinator and a Mozilla Fellow, says: “This program is a game changer for the future of IoT because it’s about developing leadership. Change happens through people, and this project will bring future leaders together for a radical training programme that is located between university research and industry advocacy.”

Dr. Nick Taylor of University of Dundee adds: “This project builds on our long-term collaboration with Mozilla and provides an amazing platform to make a real difference in the IoT landscape. These doctoral researchers represent a huge boost to Dundee’s growing capacity for design-led IoT research.”

Michelle Thorne, the program coordinator at Mozilla, states: “With training at the intersection of design, technology and policy, OpenDoTT will produce a cohort of leaders in the internet health movement who are uniquely qualified to steer the field not only toward what is possible, but what is also responsible.”

The program will begin recruiting doctoral trainees in late 2018, and the first trainees will begin in July 2019. There are five available slots in the program. Further details can be found on the project website (OpenDoTT.org), where potential applicants can register their interest.

The project is a Marie Skłodowska-Curie Innovative Training Network (ITN); these networks are designed to support the mobility of young researchers across borders while providing the training needed to support European industries. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No 813508.

Firefox 63 new contributors

With the release of Firefox 63, we are pleased to welcome the 53 developers who contributed their first code change to Firefox in this release, 44 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Firefox 63 – Tricks and Treats!

Firefox 63 comes with some long-awaited treats: an implementation of web components, including custom elements and the shadow DOM. Potch also covers the Fonts Editor, the associated font panel in the Firefox DevTools Inspector, and reduced motion preferences in CSS.

It’s that time of the year again: when we put on costumes and pass out goodies to all. It’s Firefox release week! Join me for a spook-tacular look at the latest goodies shipping this release.

Web Components, Oh My!

After a rather long gestation, I’m pleased to announce that support for modern Web Components APIs has shipped in Firefox! Expect a more thorough write-up, but let’s cover what these new APIs make possible.

Custom Elements

To put it simply, Custom Elements makes it possible to define new HTML tags outside the standard set included in the web platform. It does this by letting JS classes extend the built-in HTMLElement object, adding an API for registering new elements, and by adding special “lifecycle” methods to detect when a custom element is appended, removed, or attributes are updated:
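(What follows is a minimal sketch; the tag name and class are invented for illustration.)

class FancyButton extends HTMLElement {
  static get observedAttributes() { return ['label']; } // attributes to watch
  connectedCallback() { /* runs when the element is appended to the document */ }
  disconnectedCallback() { /* runs when the element is removed */ }
  attributeChangedCallback(name, oldValue, newValue) {
    /* runs when an observed attribute changes */
  }
}
// Register the new tag so <fancy-button> can be used in markup:
customElements.define('fancy-button', FancyButton);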

Shadow DOM

The web has long had reusable widgets people can use when building a site. One of the most common challenges when using third-party widgets on a page is making sure that the styles of the page don’t mess up the appearance of the widget and vice-versa. This can be frustrating (to put it mildly), and leads to lots of long, overly specific CSS selectors, or the use of complex third-party tools to re-write all the styles on the page to not conflict.

Cue frustrated developer:

There has to be a better way…

Now, there is!

The Shadow DOM is not a secretive underground society of web developers, but instead a foundational web technology that lets developers create encapsulated HTML trees that aren’t affected by outside styles, can have their own styles that don’t leak out, and in fact can be made unreachable from normal DOM traversal methods (querySelector, .childNodes, etc.).

Custom elements and shadow roots can be used independently of one another, but they really shine when used together. For instance, imagine you have a <media-player> element with playback controls. You can put the controls in a shadow root and keep the page’s DOM clean! In fact, both Firefox and Chrome now use Shadow DOM for the implementation of the <video> element.
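Here’s a minimal sketch of that idea (the element’s contents are invented for illustration):

class MediaPlayer extends HTMLElement {
  constructor() {
    super();
    // Styles and markup inside this shadow root are isolated from the page:
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>button { border-radius: 4px; }</style>
      <button id="play">Play</button>
    `;
  }
}
customElements.define('media-player', MediaPlayer);
// document.querySelector('#play') on the page finds nothing -- the
// controls live in the shadow tree, and page CSS can't restyle them.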

Fonts Editor

The Inspector’s Fonts panel is a handy way to see what local and web fonts are being used on a page. Already useful for debugging webfonts, the Fonts panel gains new powers in Firefox 63! You can adjust the parameters of the font on the currently selected element, and if the current font supports Font Variations, you can view and fine-tune those parameters as well. The syntax for adjusting variable fonts can be a little unfamiliar, and it’s not otherwise possible to discover all the variations built into a font, so this tool can be a life saver.

Reduced motion preferences for CSS

Slick animations can give a polished and unique feel to a digital experience. However, for some people, animated effects like parallax and sliding/zooming transitions can cause vertigo and headaches. In addition, some older/less powerful devices can struggle to render animations smoothly. To respond to this, some devices and operating systems offer a “reduce motion” option. In Firefox 63, you can now detect this preference using CSS media queries and adjust/reduce your use of transitions and animations to ensure more people have a pleasant experience using your site. CSS Tricks has a great overview of both how to detect reduced motion and why you should care.
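For instance, a site might switch off its heavier effects like this (the class names are placeholders):

@media (prefers-reduced-motion: reduce) {
  .parallax, .slide-in {
    animation: none;   /* drop decorative animations */
    transition: none;  /* and sliding/zooming transitions */
  }
}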

Conclusion

There is, as always, a bunch more in this release of Firefox. MDN Web Docs has the full run-down of developer-facing changes, and more highlights can be found in the official release notes. Happy Browsing!

At Firefox, we’re always looking to build features that are true to the Mozilla mission of giving people control over their data and privacy whenever they go online. We recently announced our approach to anti-tracking, where we discussed three key feature areas we’re focusing on to help people feel safe while they’re on the web. With today’s release, we’re making progress on “removing cross-site tracking” with what we’re calling Enhanced Tracking Protection. To ensure we balance these new protections with the experiences our users want and expect, we’re rolling things out off by default and starting with third-party cookies. You can learn more details about our approach here.

What’s a tracking cookie and why do I need to block them?

Cookies have been around since almost the beginning of the web. They were created so that browsers could store small bits of information, like remembering that you’ve already logged into a site. Like any technology, cookies have many uses, including some that aren’t so easy to understand. These include using cookies to track your behavior across the internet, mostly without your knowledge, a technique known as cross-site tracking. We go more in-depth about this in our Firefox Frontier blog post.

We’ve all had the experience of seeing ads change based on browsing, even across multiple websites. These ads are often for things that you have no interest in purchasing, but the economics of the internet make it easy to cast a wide net cheaply. Maybe this seems like no big deal, but we think that you should have a say in how this data is used. After all, it’s more than just an annoying pair of shoes following you around, it’s data that can be used to subtly shape the content you consume or even influence your opinions.

At Firefox, we believe in giving control to people, and hence we’re giving users the choice to block third-party tracking cookies and the information collected in them.

Introducing Firefox’s Enhanced Tracking Protection

With today’s Firefox release, users will have the option to block cookies and storage access from third-party trackers. This is designed to effectively block the most common form of cross-site tracking.

To find this new option, go to your Firefox Options/Preferences. On the left-hand menu, click on Privacy & Security. Under Content Blocking click the checkbox next to “Third-Party Cookies” and select “Trackers (recommended)”:

Block cookies and storage access from third-party trackers

You might see some odd behavior on websites, so if something doesn’t look or work right, you can always disable the protection on a per site basis by clicking on the Shield Icon in the address bar, and then clicking “Disable Blocking For This Site”.

Disable the protection on a per site basis

We’ll continue to test this feature and hope to enable it by default in early 2019. Developers and site owners can read about the specifics of the functionality here.

Search Shortcuts – First, we know people primarily use the web to search for information. Second, who doesn’t love saving time getting to the places they want to go, like taking local city streets instead of sitting in back-to-back freeway traffic? We combined these two observations to bring you Search Shortcuts, pinning the top two sites people use to search, Amazon and Google, to the New Tab page. Currently, this will only be available in the US. To learn more about this feature, visit our Firefox Frontier blog post.

Adapting to your Windows Dark/Light Color Settings – Firefox will now match the dark or light theme you’ve chosen in your Windows settings to provide the perfect harmony in making you feel right at home.

Siri Shortcuts for Firefox for iOS – Starting with today’s release, people can now open a new tab in Firefox using a voice command. This is the first of several shortcuts that will be added in the coming months.

For developers, we’ve got tools to help you in the latest release. Visit our Hacks blog post for more detailed information.

We’re continuing to work hard in delivering the features to give people greater control while on the web. Watch this Mozilla blog for more updates in the coming months.

In the meantime, check out and download the latest version of Firefox Quantum available here. For the latest version of Firefox for iOS, visit the App Store.

As announced in August, Firefox is changing its approach to addressing tracking on the web. As part of that plan, we signaled our intent to prevent cross-site tracking for all Firefox users and made our initial prototype available for testing.

Starting with Firefox 63, all desktop versions of Firefox include an experimental cookie policy that blocks cookies and other site data from third-party tracking resources. This new policy provides protection against cross-site tracking while minimizing site breakage associated with traditional cookie blocking.

This policy is part of Enhanced Tracking Protection, a new feature aimed at protecting users from cross-site tracking. More specifically, it prevents trackers from following users around from site to site and collecting information about their browsing habits.

We aim to bring these protections to all users by default in Firefox 65. Until then, you can opt-in to the policy by following the steps detailed at the end of this post.

What does this policy block?

The newly developed policy blocks storage access for domains that have been classified as trackers. For classification, Firefox relies on the Tracking Protection list maintained by Disconnect. Domains classified as trackers are not able to access or set cookies, local storage, and other site data when loaded in a third-party context. Additionally, trackers are blocked from accessing other APIs that allow them to communicate cross-site, such as the Broadcast Channel API. These measures prevent trackers from being able to use cross-site identifiers stored in Firefox to link browsing activity across different sites.

Does this policy break websites?

Third-party cookie blocking does have the potential to break websites, particularly those which integrate third-party content. For this reason, we’ve added heuristics to Firefox to automatically grant time-limited storage access under certain conditions. We are also working to support a more structured way for embedded cross-origin content to request storage access. In both cases, Firefox grants access on a site-by-site basis, and only provides access to embedded content that receives user interaction.

More structured access will be available through the Storage Access API, of which an initial implementation is available in Firefox Nightly (and soon Beta and Developer Edition) for testing. This API allows domains classified as trackers to explicitly request storage access when loaded in a third-party context. The Storage Access API is also implemented in Safari and is a proposed addition to the HTML specification. We welcome developer feedback, particularly around use cases that can not be addressed with this API.
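Here’s a minimal sketch of how an embedded widget might use the API (error handling kept deliberately simple):

// Inside a third-party iframe, triggered by a user gesture such as a click:
async function enableStorage() {
  if (!(await document.hasStorageAccess())) {
    try {
      await document.requestStorageAccess(); // may prompt; granted per site
    } catch (e) {
      return; // the request was denied
    }
  }
  // Cookies and other storage for this frame's origin are now accessible.
}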

How can I test my website?

We welcome testing by both users and site owners as we continue to develop new storage access restrictions. Take the following steps to enable this storage access policy in Firefox:

Open Preferences

On the left-hand menu, click on Privacy & Security

Under Content Blocking, click the checkbox next to “Third-Party Cookies”

Does this mean Firefox will no longer support the Tracking Protection feature?

Tracking Protection is still available to users who want to opt-in to block all tracking loads; with our updated UI, this feature can be enabled by setting “All Detected Trackers” to “Always”. All tracking loads will continue to be blocked by default in Private Browsing windows.

Expect to hear more from us in the coming months as we continue to strengthen Firefox’s default-on tracking protection.

People have a misconception: they think that the WebAssembly that landed in browsers back in 2017 is the final version. In fact, we still have many use cases to unlock, from heavy-weight desktop applications, to small modules, to JS frameworks, to all the things outside the browser… Node.js, and serverless, and the blockchain, and portable CLI tools, and the internet of things.

The WebAssembly that we have today is not the end of this story—it’s just the beginning.

People have a misconception about WebAssembly. They think that the WebAssembly that landed in browsers back in 2017—which we called the minimum viable product (or MVP) of WebAssembly—is the final version of WebAssembly.

I can understand where that misconception comes from. The WebAssembly community group is really committed to backwards compatibility. This means that the WebAssembly that you create today will continue working on browsers into the future.

But that doesn’t mean that WebAssembly is feature complete. In fact, that’s far from the case. There are many features that are coming to WebAssembly which will fundamentally alter what you can do with WebAssembly.

I think of these future features kind of like the skill tree in a videogame. We’ve fully filled in the top few of these skills, but there is still this whole skill tree below that we need to fill in to unlock all of the applications.

So let’s look at what’s been filled in already, and then we can see what’s yet to come.

Minimum Viable Product (MVP)

The very beginning of WebAssembly’s story starts with Emscripten, which made it possible to run C++ code on the web by transpiling it to JavaScript. This made it possible to bring large existing C++ code bases, for things like games and desktop applications, to the web.

The JS it automatically generated was still significantly slower than the comparable native code, though. But Mozilla engineers found a type system hiding inside the generated JavaScript, and figured out how to make this JavaScript run really fast. This subset of JavaScript was named asm.js.
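The pattern looked roughly like this: ordinary JavaScript, but with coercions that pin down the types for the engine.

function add(x, y) {
  x = x | 0; // the |0 coercion marks x as a 32-bit integer
  y = y | 0;
  // Because the types are known, an engine can compile this
  // to machine code without runtime type checks.
  return (x + y) | 0;
}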

But that wasn’t the end of the story. It was just the beginning. There were still things that engines could do to make this faster.

But they couldn’t do it in JavaScript itself. Instead, they needed a new language—one that was designed specifically to be compiled to. And that was WebAssembly.

So what skills were needed for the first version of WebAssembly? What did we need to get to a minimum viable product that could actually run C and C++ efficiently on the web?

Skill: Compile target

The folks working on WebAssembly knew they didn’t want to just support C and C++. They wanted many different languages to be able to compile to WebAssembly. So they needed a language-agnostic compile target.

They needed something like the assembly language that things like desktop applications are compiled to—like x86. But this assembly language wouldn’t be for an actual, physical machine. It would be for a conceptual machine.

Skill: Fast execution

That compile target had to be designed so that it could run very fast. Otherwise, WebAssembly applications running on the web wouldn’t keep up with users’ expectations for smooth interactions and game play.

Skill: Compact

In addition to execution time, load time needed to be fast, too. Users have certain expectations about how quickly something will load. For desktop applications, that expectation is that they will load quickly because the application is already installed on your computer. For web apps, the expectation is also that load times will be fast, because web apps usually don’t have to load nearly as much code as desktop apps.

When you combine these two things, though, it gets tricky. Desktop applications are usually pretty large code bases. So if they are on the web, there’s a lot to download and compile when the user first goes to the URL.

To meet these expectations, we needed our compiler target to be compact. That way, it could go over the web quickly.

Skill: Linear memory

These languages also needed to be able to use memory differently from how JavaScript uses memory. They needed to be able to directly manage their memory—to say which bytes go together.

This is because languages like C and C++ have a low-level feature called pointers. You can have a variable that doesn’t have a value in it, but instead has the memory address of the value. So if you’re going to support pointers, the program needs to be able to write and read from particular addresses.

But you can’t have a program you downloaded from the web just accessing bytes in memory willy-nilly, using whatever addresses they want. So in order to create a secure way of giving access to memory, like a native program is used to, we had to create something that could give access to a very specific part of memory and nothing else.

To do this, WebAssembly uses a linear memory model. This is implemented using TypedArrays. It’s basically just like a JavaScript array, except this array only contains bytes of memory. When you access data in it, you just use array indexes, which you can treat as though they were memory addresses. This means you can pretend this array is C++ memory.
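You can see the same idea from the JavaScript side, where a WebAssembly memory is exposed as an ArrayBuffer (a small sketch):

const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const bytes = new Uint8Array(memory.buffer);

bytes[0] = 42;         // effectively a "write to address 0"
console.log(bytes[0]); // a module sharing this memory sees the same byte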

Achievement unlocked

So with all of these skills in place, people could run desktop applications and games in the browser as if they were running natively on their computers.

And that was pretty much the skill set that WebAssembly had when it was released as an MVP. It was truly an MVP—a minimum viable product.

This allowed certain kinds of applications to work, but there were still a whole host of others to unlock.

Heavy-weight Desktop Applications

The next achievement to unlock is heavier-weight desktop applications.

Can you imagine if something like Photoshop were running in your browser? If you could instantaneously load it on any device like you do with Gmail?

We’ve already started seeing things like this. For example, Autodesk’s AutoCAD team has made their CAD software available in the browser. And Adobe has made Lightroom available through the browser using WebAssembly.

But there are still a few features that we need to put in place to make sure that all of these applications—even the heaviest of the heavyweights—can run well in the browser.

Skill: Threading

First, we need support for multithreading. Modern-day computers have multiple cores. These are basically multiple brains that can all be working at the same time on your problem. That can make things go much faster, but to make use of these cores, you need support for threading.

Skill: SIMD

Alongside threading, there’s another technique that uses modern hardware to process things in parallel.

That is SIMD: single instruction, multiple data. With SIMD, it’s possible to take a chunk of memory and split it up across different execution units, which are kind of like cores. Then you have the same bit of code—the same instruction—run across all of those execution units, but each one applies that instruction to its own bit of the data.

Skill: 64-bit addressing

Another hardware capability that WebAssembly needs to take full advantage of is 64-bit addressing.

Memory addresses are just numbers, so if your memory addresses are only 32 bits long, you can only have so many memory addresses—enough for 4 gigabytes of linear memory.

But with 64-bit addressing, you have 16 exabytes. Of course, you don’t have 16 exabytes of actual memory in your computer. So the maximum is subject to however much memory the system can actually give you. But this will take the artificial limitation on address space out of WebAssembly.
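As a quick sanity check on those numbers, here is the arithmetic as a JavaScript sketch (BigInt is used so the 64-bit value doesn’t lose precision):

// 32-bit addresses: 2^32 bytes is 4 GiB
console.log((2 ** 32) / (2 ** 30));     // 4
// 64-bit addresses: 2^64 bytes is 16 exbibytes
console.log((2n ** 64n) / (2n ** 60n)); // 16n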

Skill: Streaming compilation

For these applications, we don’t just need them to run fast. We need load times to be even faster than they already are. There are a few skills that we need specifically to improve load times.

One big step is streaming compilation—compiling a WebAssembly file while it’s still being downloaded. WebAssembly was designed specifically to enable easy streaming compilation. In Firefox, we actually compile it so fast—faster than it comes in over the network—that it’s pretty much done compiling by the time you download the file. And other browsers are adding streaming, too.
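From JavaScript, streaming compilation is what the instantiateStreaming API hooks into; the module URL below is just a placeholder:

// Compilation starts while the bytes are still arriving over the network.
WebAssembly.instantiateStreaming(fetch("module.wasm"), {})
    .then(({ instance }) => {
        // instance.exports is ready to call here
    });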

Another thing that helps is having a tiered compiler.

For us in Firefox, that means having two compilers. The first one—the baseline compiler—kicks in as soon as the file starts downloading. It compiles the code really quickly so that it starts up quickly.

The code it generates is fast, but not 100% as fast as it could be. To get that extra bit of performance, we run another compiler—the optimizing compiler—on several threads in the background. This one takes longer to compile, but generates extremely fast code. Once it’s done, we swap out the baseline version with the fully optimized version.

This way, we get quick start up times with the baseline compiler, and fast execution times with the optimizing compiler.

In addition, we’re working on a new optimizing compiler called Cranelift. Cranelift is designed to compile code quickly, in parallel at a function-by-function level. At the same time, the code it generates gets even better performance than our current optimizing compiler.

Cranelift is in the development version of Firefox right now, but disabled by default. Once we enable it, we’ll get to the fully optimized code even quicker, and that code will run even faster.

But there’s an even better trick we can use to make it so we don’t have to compile at all most of the time…

Skill: Implicit HTTP caching

With WebAssembly, if you load the same code on two page loads, it will compile to the same machine code. It doesn’t need to change based on what data is flowing through it, like the JS JIT compiler needs to.

This means that we can store the compiled code in the HTTP cache. Then when the page is loading and goes to fetch the .wasm file, it will instead just pull out the precompiled machine code from the cache. This skips compiling completely for any page that you’ve already visited that’s in cache.

Skill: Other improvements

Many discussions are currently percolating around other ways to improve this, skipping even more work, so stay tuned for other load-time improvements.

Where are we with this?

Where are we with supporting these heavyweight applications right now?

Threading

For threading, we have a proposal that’s pretty much done, but a key piece of it—SharedArrayBuffers—had to be turned off in browsers earlier this year. Turning them off was a temporary measure to reduce the impact of the Spectre security issue that was discovered in CPUs. They will be turned on again; progress is being made, so stay tuned.

Streaming compilation and tiering

We added our baseline compiler in late 2017, and other browsers have been adding the same kind of architecture over the past year.

Implicit HTTP caching

In Firefox, we’re getting close to landing support for implicit HTTP caching.

Other improvements

Other improvements are currently in discussion.

Even though this is all still in progress, you already see some of these heavyweight applications coming out today, because WebAssembly already gives these apps the performance that they need.

But once these features are all in place, that’s going to be another achievement unlocked, and more of these heavyweight applications will be able to come to the browser.

Small modules interoperating with JavaScript

But WebAssembly isn’t just for games and for heavyweight applications. It’s also meant for regular web development… for the kind of web development folks are used to: the small modules kind of web development.

Sometimes you have little corners of your app that do a lot of heavy processing, and in some cases, this processing can be faster with WebAssembly. We want to make it easy to port these bits to WebAssembly.

Again, this is a case where some of it’s already happening. Developers are already incorporating WebAssembly modules in places where there are tiny modules doing lots of heavy lifting.

One example is the parser in the source map library that’s used in Firefox’s DevTools and webpack. It was rewritten in Rust and compiled to WebAssembly, which made it 11x faster. And WordPress’s Gutenberg parser is on average 86x faster after the same kind of rewrite.

But for this kind of use to really be widespread—for people to be really comfortable doing it—we need to have a few more things in place.

Skill: Fast calls between JS and WebAssembly

First, we need fast calls between JS and WebAssembly, because if you’re integrating a small module into an existing JS system, there’s a good chance you’ll need to call between the two a lot. So you’ll need those calls to be fast.

But when WebAssembly first came out, these calls weren’t fast. This is where we get back to that whole MVP thing—the engines had the minimum support for calls between the two. They just made the calls work; they didn’t make them fast. So engines need to optimize these calls.
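To see why this matters, here is a self-contained sketch of the pattern that stresses those calls. The bytes encode the canonical "add two i32s" module, and the hot loop crosses the JS-to-WebAssembly boundary on every iteration:

const bytes = new Uint8Array([
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
    0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
    0x03, 0x02, 0x01, 0x00,                               // one function of that type
    0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
    0x0a, 0x09, 0x01, 0x07, 0x00,
    0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add
]);
WebAssembly.instantiate(bytes).then(({ instance }) => {
    const add = instance.exports.add;
    let total = 0;
    for (let i = 0; i < 10000; i++) {
        total = add(total, i); // every iteration pays the call-boundary cost
    }
    console.log(total); // 49995000
});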

Skill: Fast and easy data exchange

That brings us to another thing, though. When you’re calling between JavaScript and WebAssembly, you often need to pass data between them.

You need to pass values into the WebAssembly function or return a value from it. This can also be slow, and it can be difficult too.

There are a couple of reasons it’s hard. One is because, at the moment, WebAssembly only understands numbers. This means that you can’t pass more complex values, like objects, in as parameters. You need to convert that object into numbers and put it in the linear memory. Then you pass WebAssembly the location in the linear memory.

That’s kind of complicated. And it takes some time to convert the data into linear memory. So we need this to be easier and faster.
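Here is a sketch of what that dance looks like today. The alloc and takeString exports are hypothetical stand-ins for what a real toolchain would generate:

// Encode the string, copy it into linear memory, then pass wasm the
// (offset, length) pair instead of the string itself.
const utf8 = new TextEncoder().encode("hello");
const ptr = instance.exports.alloc(utf8.length);       // hypothetical allocator export
new Uint8Array(instance.exports.memory.buffer).set(utf8, ptr);
instance.exports.takeString(ptr, utf8.length);         // hypothetical function export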

Skill: ES module integration

Another thing we need is integration with the browser’s built-in ES module support. Right now, you instantiate a WebAssembly module using an imperative API. You call a function and it gives you back a module.

But that means that the WebAssembly module isn’t really part of the JS module graph. In order to use import and export like you do with JS modules, you need to have ES module integration.
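Under the proposal, a .wasm module would participate in the module graph directly. The syntax would look something like this (not shipped yet; the module name is a placeholder):

// heavy_lifting.wasm would be fetched, compiled, and linked like a JS module.
import { compute } from "./heavy_lifting.wasm";
console.log(compute(21));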

Skill: Toolchain integration

Just being able to import and export doesn’t get us all the way there, though. We need a place to distribute these modules, and download them from, and tools to bundle them up.

What’s the npm for WebAssembly? Well, what about npm?

And what’s the webpack or Parcel for WebAssembly? Well, what about webpack and Parcel?

These modules shouldn’t look any different to the people who are using them, so there’s no reason to create a separate ecosystem. We just need tools to integrate with them.

Skill: Backwards compatibility

There’s one more thing that we need to really do well in existing JS applications—support older versions of browsers, even those browsers that don’t know what WebAssembly is. We need to make sure that you don’t have to write a whole second implementation of your module in JavaScript just so that you can support IE11.

Where are we on this?

For easy and fast data exchange, there are a few proposals that will help with this.

As I mentioned before, one reason you have to use linear memory for more complex kinds of data is because WebAssembly only understands numbers. The only types it has are ints and floats.

With the reference types proposal, this will change. This proposal adds a new type that WebAssembly functions can take as arguments and return. And this type is a reference to an object from outside WebAssembly—for example, a JavaScript object.

But WebAssembly can’t operate directly on this object. To actually do things like call a method on it, it will still need to use some JavaScript glue. This means it works, but it’s slower than it needs to be.

To speed things up, there’s a proposal that we’ve been calling the host bindings proposal. It lets a wasm module declare what glue must be applied to its imports and exports, so that the glue doesn’t need to be written in JS. By pulling glue from JS into wasm, that glue can be optimized away completely when calling built-in Web APIs.

There’s one more part of the interaction that we can make easier. And that has to do with keeping track of how long data needs to stay in memory. If you have some data in linear memory that JS needs access to, then you have to leave it there until the JS reads the data. But if you leave it in there forever, you have what’s called a memory leak. How do you know when you can delete the data? How do you know when JS is done with it? Currently, you have to manage this yourself.

Once the JS is done with the data, the JS code has to call something like a free function to free the memory. But this is tedious and error prone. To make this process easier, we’re adding WeakRefs to JavaScript. With this, you will be able to observe objects on the JS side. Then you can do cleanup on the WebAssembly side when that object is garbage collected.
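As a sketch of that cleanup pattern, here is roughly how it looks with the finalization API that later shipped in JavaScript (the proposal’s exact surface was still settling at the time; free is a hypothetical export, and wrapper and ptr stand for a JS object and the linear-memory block it views):

// When the JS wrapper is garbage collected, release the wasm-side memory.
const registry = new FinalizationRegistry((ptr) => {
    instance.exports.free(ptr); // hypothetical export that frees linear memory
});
registry.register(wrapper, ptr);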

So these proposals are all in flight. In the meantime, the Rust ecosystem has created tools that automate this all for you, and that polyfill the proposals that are in flight.

One tool in particular is worth mentioning, because other languages can use it too. It’s called wasm-bindgen. When it sees that your Rust code should do something like receive or return certain kinds of JS values or DOM objects, it will automatically create JavaScript glue code that does this for you, so you don’t need to think about it. And because it’s written in a language independent way, other language toolchains can adopt it.

ES module integration

For ES module integration, the proposal is pretty far along. We are starting work with the browser vendors to implement it.

Toolchain support

For toolchain support, there are tools like wasm-pack in the Rust ecosystem, which automatically run everything you need to package your code for npm. And the bundlers are also actively working on support.

Backwards compatibility

Finally, for backwards compatibility, there’s the wasm2js tool. That takes a wasm file and spits out the equivalent JS. That JS isn’t going to be fast, but at least that means it will work in older versions of browsers that don’t understand WebAssembly.

So we’re getting close to unlocking this achievement. And once we unlock it, we open the path to another two.

JS frameworks and compile-to-JS languages

One is rewriting large parts of things like JavaScript frameworks in WebAssembly.

The other is making it possible for statically-typed compile-to-JS languages to compile to WebAssembly instead—for example, having languages like Scala.js, or Reason, or Elm compile to WebAssembly.

For both of these use cases, WebAssembly needs to support high-level language features.

Skill: GC

We need integration with the browser’s garbage collector for a couple of reasons.

First, let’s look at rewriting parts of JS frameworks. This could be good for a couple of reasons. For example, in React, one thing you could do is rewrite the DOM diffing algorithm in Rust, which has very ergonomic multithreading support, and parallelize that algorithm.

You could also speed things up by allocating memory differently. In the virtual DOM, instead of creating a bunch of objects that need to be garbage collected, you could use a special memory allocation scheme. For example, you could use a bump allocator, which has extremely cheap allocation and all-at-once deallocation. That could potentially help speed things up and reduce memory usage.
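Here is a toy bump allocator, written in JavaScript for clarity, just to show why allocation gets so cheap:

const heap = new ArrayBuffer(64 * 1024); // the slab that returned offsets point into
let next = 0;
function bumpAlloc(size) {
    const ptr = next;
    next += size;   // allocating is just moving a cursor forward
    return ptr;     // no per-object bookkeeping, nothing for a GC to trace
}
function freeAll() {
    next = 0;       // deallocation happens all at once
}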

But you’d still need to interact with JS objects—things like components—from that code. You can’t just continually copy everything in and out of linear memory, because that would be difficult and inefficient.

So you need to be able to integrate with the browser’s GC so you can work with components that are managed by the JavaScript VM. Some of these JS objects need to point to data in linear memory, and sometimes the data in linear memory will need to point out to JS objects.

If this ends up creating cycles, it can mean trouble for the garbage collector. It means the garbage collector won’t be able to tell if the objects are used anymore, so they will never be collected. WebAssembly needs integration with the GC to make sure these kinds of cross-language data dependencies work.

This will also help statically-typed languages that compile to JS, like Scala.js, Reason, Kotlin, or Elm. These languages use JavaScript’s garbage collector when they compile to JS. Because WebAssembly can use that same GC—the one that’s built into the engine—these languages will be able to compile to WebAssembly instead and use that same garbage collector. They won’t need to change how GC works for them.

Skill: Exception handling

We also need better support for handling exceptions.

Some languages, like Rust, do without exceptions. But in other languages, like C++, JS or C#, exception handling is sometimes used extensively.

You can polyfill exception handling currently, but the polyfill makes the code run really slowly. So the default when compiling to WebAssembly is currently to compile without exception handling.

However, since JavaScript has exceptions, even if you’ve compiled your code to not use them, JS may throw one into the works. If your WebAssembly function calls a JS function that throws, then the WebAssembly module won’t be able to correctly handle the exception. So languages like Rust choose to abort in this case. We need to make this work better.

Skill: Debugging

Another thing that people working with JS and compile-to-JS languages are used to having is good debugging support. Devtools in all of the major browsers make it easy to step through JS. We need this same level of support for debugging WebAssembly in browsers.

Skill: Tail calls

And finally, for many functional languages, you need to have support for something called tail calls. I’m not going to get too into the details on this, but basically it lets you call a new function without adding a new stack frame to the stack. So for functional languages that support this, we want WebAssembly to support it too.

Where are we on this?

So where are we on this?

Garbage collection

For garbage collection, there are two proposals currently underway:

The Typed Objects proposal for JS, and the GC proposal for WebAssembly. Typed Objects will make it possible to describe an object’s fixed structure. There is an explainer for this, and the proposal will be discussed at an upcoming TC39 meeting.

The WebAssembly GC proposal will make it possible to directly access that structure. This proposal is under active development.

With both of these in place, both JS and WebAssembly know what an object looks like and can share that object and efficiently access the data stored on it. Our team actually already has a prototype of this working. However, it will still take some time for these to go through standardization, so we’re probably looking at sometime next year.

Exception handling

Exception handling is still in the research and development phase, and there’s work now to see if it can take advantage of other proposals like the reference types proposal I mentioned before.

Debugging

For debugging, there is currently some support in browser devtools. For example, you can step through the text format of WebAssembly in the Firefox debugger. But it’s still not ideal. We want to be able to show you where you are in your actual source code, not in the assembly. The thing that we need to do for that is figure out how source maps—or a source-map-like mechanism—work for WebAssembly. So there’s a subgroup of the WebAssembly CG working on specifying that.

Once those are all in place, we’ll have unlocked JS frameworks and many compile-to-JS languages.

So, those are all achievements we can unlock inside the browser. But what about outside the browser?

Outside the Browser

Now, you may be confused when I talk about “outside the browser”. Because isn’t the browser what you use to view the web? And isn’t that right in the name—WebAssembly?

But the truth is the things you see in the browser—the HTML and CSS and JavaScript—are only part of what makes the web. They are the visible part—they are what you use to create a user interface—so they are the most obvious.

But there’s another really important part of the web which has properties that aren’t as visible.

That is the link. And it is a very special kind of link.

The innovation of this link is that I can link to your page without having to put it in a central registry, and without having to ask you or even know who you are. I can just put that link there.

It’s this ease of linking, without any oversight or approval bottlenecks, that enabled our web. That’s what enabled us to form these global communities with people we didn’t know.

But if all we have is the link, there are two problems here that we haven’t addressed.

The first one is… you go visit this site and it delivers some code to you. How does it know what kind of code it should deliver to you? Because if you’re running on a Mac, then you need different machine code than you do on Windows. That’s why you have different versions of programs for different operating systems.

Then should a web site have a different version of the code for every possible device? No.

Instead, the site has one version of the code—the source code. This is what’s delivered to the user. Then it gets translated to machine code on the user’s device.

The name for this concept is portability.

So that’s great, you can load code from people who don’t know you and don’t know what kind of device you’re running.

But that brings us to a second problem. If you don’t know the people whose web pages you’re loading, how do you know what kind of code they’re giving you? It could be malicious code. It could be trying to take over your system.

Doesn’t this vision of the web—running code from anybody whose link you follow—mean that you have to blindly trust anyone who’s on the web?

This is where the other key concept from the web comes in.

That’s the security model. I’m going to call it the sandbox.

Basically, the browser takes the page—that other person’s code—and instead of letting it run around willy-nilly in your system, it puts it in a sandbox. It puts a couple of toys that aren’t dangerous into that sandbox so that the code can do some things, but it leaves the dangerous things outside of the sandbox.

So the utility of the link is based on these two things:

Portability—the ability to deliver code to users and have it run on any type of device that can run a browser.

And the sandbox—the security model that lets you run that code without risking the integrity of your machine.

So why does this distinction matter? Why does it make a difference if we think of the web as something that the browser shows us using HTML, CSS, and JS, or if we think of the web in terms of portability and the sandbox?

Because it changes how you think about WebAssembly.

You can think about WebAssembly as just another tool in the browser’s toolbox… which it is.

It is another tool in the browser’s toolbox. But it’s not just that. It also gives us a way to take these other two capabilities of the web—the portability and the security model—and take them to other use cases that need them too.

We can expand the web past the boundaries of the browser. Now let’s look at where these attributes of the web would be useful.

Node.js

How could WebAssembly help Node? It could bring full portability to Node.

Node gives you most of the portability of JavaScript on the web. But there are lots of cases where Node’s JS modules aren’t quite enough—where you need to improve performance or reuse existing code that’s not written in JS.

In these cases, you need Node’s native modules. These modules are written in languages like C, and they need to be compiled for the specific kind of machine that the user is running on.

Native modules are either compiled when the user installs, or precompiled into binaries for a wide matrix of different systems. One of these approaches is a pain for the user, the other is a pain for the package maintainer.

Now, if these native modules were written in WebAssembly instead, then they wouldn’t need to be compiled specifically for the target architecture. Instead, they’d just run like the JavaScript in Node runs. But they’d do it at nearly native performance.

So we get to full portability for the code running in Node. You could take the exact same Node app and run it across all different kinds of devices without having to compile anything.

But WebAssembly doesn’t have direct access to the system’s resources. Native modules in Node aren’t sandboxed—they have full access to all of the dangerous toys that the browser keeps out of the sandbox. In Node, JS modules also have access to these dangerous toys because Node makes them available. For example, Node provides methods for reading from and writing files to the system.

For Node’s use case, it makes a certain amount of sense for modules to have this kind of access to dangerous system APIs. So if WebAssembly modules don’t have that kind of access by default (like Node’s current modules do), how could we give WebAssembly modules the access they need? We’d need to pass in functions so that the WebAssembly module can work with the operating system, just as Node does with JS.

For Node, this will probably include a lot of the functionality that’s in things like the C standard library. It would also likely include things that are part of POSIX—the Portable Operating System Interface—which is an older standard that helps with compatibility. It provides one API for interacting with the system across a bunch of different Unix-like OSs. Modules would definitely need a bunch of POSIX-like functions.
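A sketch of what that could look like in Node follows; every name here is hypothetical, and the point is only that the host hands in each capability explicitly:

const fs = require("fs");

const imports = {
    env: {
        // A deliberately narrow capability: the module can read files,
        // but only through this one function the host chose to expose.
        read_file: (pathPtr, pathLen) => {
            // decode the path from the module's linear memory,
            // then delegate to fs.readFileSync(...)
        },
    },
};

// wasmBytes is a placeholder for the module's bytes, however they were loaded.
WebAssembly.instantiate(wasmBytes, imports).then(({ instance }) => {
    // the module can now touch the filesystem only via env.read_file
});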

Skill: Portable interface

What the Node core folks would need to do is figure out the set of functions to expose and the API to use.

But wouldn’t it be nice if that were actually something standard? Not something that was specific to just Node, but could be used across other runtimes and use cases too?

A POSIX for WebAssembly if you will. A PWSIX? A portable WebAssembly system interface.

And if that were done in the right way, you could even implement the same API for the web. These standard APIs could be polyfilled onto existing Web APIs.

These functions wouldn’t be part of the WebAssembly spec. And there would be WebAssembly hosts that wouldn’t have them available. But for those platforms that could make use of them, there would be a unified API for calling these functions, no matter which platform the code was running on. And this would make universal modules—ones that run across both the web and Node—so much easier.

Where are we with this?

So, is this something that could actually happen?

A few things are working in this idea’s favor. There’s a proposal called package name maps that will provide a mechanism for mapping a module name to a path to load the module from. And that will likely be supported by both browsers and Node, which can use it to provide different paths, and thus load entirely different modules, but with the same API. This way, the .wasm module itself can specify a single (module-name, function-name) import pair that Just Works on different environments, even the web.

With that mechanism in place, what’s left to do is actually figure out what functions make sense and what their interface should be.

There’s no active work on this at the moment. But a lot of discussions are heading in this direction right now. And it looks likely to happen, in one form or another.

Which is good, because unlocking this gets us halfway to unlocking some other use cases outside the browser. And with this in place, we can accelerate the pace.

So, what are some examples of these other use cases?

CDNs, Serverless, and Edge Computing

One example is things like CDNs, and Serverless, and Edge Computing. These are cases where you’re putting your code on someone else’s server, and they make sure that the server is maintained and that the code is close to all of your users.

Why would you want to use WebAssembly in these cases? There was a great talk explaining exactly this at a conference recently.

Fastly is a company that provides CDNs and edge computing. And their CTO, Tyler McMullen, explained it this way (and I’m paraphrasing here):

If you look at how a process works, code in that process doesn’t have boundaries. Functions have access to whatever memory in that process they want, and they can call whatever other functions they want.

When you’re running a bunch of different people’s services in the same process, this is an issue. Sandboxing could be a way to get around this. But then you get to a scale problem.

For example, if you use a JavaScript VM like Firefox’s SpiderMonkey or Chrome’s V8, you get a sandbox and you can put hundreds of instances into a process. But with the numbers of requests that Fastly is servicing, you don’t just need hundreds per process—you need tens of thousands.

Tyler does a better job of explaining all of it in his talk, so you should go watch that. But the point is that WebAssembly gives Fastly the safety, speed, and scale needed for this use case.

So what did they need to make this work?

Skill: Runtime

They needed to create their own runtime. That means taking a WebAssembly compiler—something that can compile WebAssembly down to machine code—and combining it with the functions for interacting with the system that I mentioned before.

For the WebAssembly compiler, Fastly used Cranelift, the compiler that we’re also building into Firefox. It’s designed to be very fast and doesn’t use much memory.

Now, for the functions that interact with the rest of the system, they had to create their own, because we don’t have that portable interface available yet.

So it’s possible to create your own runtime today, but it takes some effort. And it’s effort that will have to be duplicated across different companies.

What if we didn’t just have the portable interface, but we also had a common runtime that could be used across all of these companies and other use cases? That would definitely speed up development.

Then other companies could just use that runtime—like they use Node today—instead of creating their own from scratch.

Where are we on this?

So what’s the status of this?

Even though there’s no standard runtime yet, there are a few runtime projects in flight right now. These include WAVM, which is built on top of LLVM, and wasmjit.

In addition, we’re planning a runtime that’s built on top of Cranelift, called wasmtime.

And once we have a common runtime, that speeds up development for a bunch of different use cases. For example…

Portable CLI tools

WebAssembly can also be used in more traditional operating systems. Now to be clear, I’m not talking about in the kernel (although brave souls are trying that, too) but WebAssembly running in Ring 3—in user mode.

Then you could do things like have portable CLI tools that could be used across all different kinds of operating systems.

And this is pretty close to another use case…

Internet of Things

The internet of things includes devices like wearable technology, and smart home appliances.

These devices are usually resource constrained—they don’t pack much computing power and they don’t have much memory. And this is exactly the kind of situation where a compiler like Cranelift and a runtime like wasmtime would shine, because they would be efficient and low-memory. And in the extremely-resource-constrained case, WebAssembly makes it possible to fully compile to machine code before loading the application on the device.

There’s also the fact that there are so many of these different devices, and they are all slightly different. WebAssembly’s portability would really help with that.

So that’s one more place where WebAssembly has a future.

Conclusion

Now let’s zoom back out and look at this skill tree.

I said at the beginning of this post that people have a misconception about WebAssembly—this idea that the WebAssembly that landed in the MVP was the final version of WebAssembly.

I think you can see now why this is a misconception.

Yes, the MVP opened up a lot of opportunities. It made it possible to bring a lot of desktop applications to the web. But we still have many use cases to unlock, from heavy-weight desktop applications, to small modules, to JS frameworks, to all the things outside the browser… Node.js, and serverless, and the blockchain, and portable CLI tools, and the internet of things.

So the WebAssembly that we have today is not the end of this story—it’s just the beginning.

WebRender newsletter #26

Here comes the 26th issue of WebRender’s newsletter. Let’s see what we have this week:

Notable WebRender and Gecko changes

Bobby reduced GPU memory usage on Windows by making it so ANGLE doesn’t allocate mipmaps for all textures.

Bobby further reduced GPU memory usage by sharing the depth buffer for all intermediate targets.

Ongoing work

Doug is making progress on document splitting. This will allow us to render the UI and the web content independently.

Kats and Markus are looking into standing up WebRender in GeckoView (Android). It’s not quite usable yet but early performance profiles are very encouraging.

Nical is auditing WebRender’s resistance to timing attacks.

Matt is investigating SVG performance.

Bobby is looking into further reducing GPU memory usage by improving the texture cache heuristics.

Gankro is making progress on blob image re-coordination.

Enabling WebRender in Firefox Nightly

In about:config, set “gfx.webrender.all” to true, then restart Firefox.

Getting serious about political ad transparency with Ad Analysis for Facebook


Do you know who is trying to influence your vote online? The votes of your friends and neighbors? Would you even know how to find out? Despite all the talk of election security, the tech industry still falls short on political ad transparency. With the U.S. midterm elections mere weeks away, this is a big problem.

We can’t solve this problem alone, but we can help by making it more visible and easier to understand. Today we are announcing the release of our experimental extension, Ad Analysis for Facebook, to give you greater transparency into the online advertisements, including political ads, you see on Facebook.

Big tech companies have acknowledged this problem but haven’t done enough to address it. In May, Facebook released the Ad Archive, a database of political ads that have run on the platform. In August, Facebook announced a private beta release of its Ad Archive API. But these are baby steps at a time when we need more. The Ad Archive doesn’t provide the integrated, transparent experience that users really need, nor provide the kind of data journalists and researchers require for honest oversight. The Ad Archive API is only available to select organizations. Facebook’s tools aren’t very useful today, which means they won’t provide meaningful transparency before the midterm elections.

This is why we’re launching Ad Analysis for Facebook. It shows you why you were targeted, and how your targeting might differ from other users. You may be surprised! Facebook doesn’t just target you based on the information you’ve provided in your profile and posts. Facebook also infers your interests based on your activities, the news you read, and your relationships with others on Facebook.

Beyond giving you insight into how you were targeted, Ad Analysis for Facebook provides a view of the overall landscape to help you see outside your filter bubble. The extension also displays a high-level overview of the top political advertisers based on targeting by state, gender, and age. You can view ads for each of these targeting criteria — the kinds of ads you would never normally see.

Political ad transparency is just one of the many areas we need to improve to strengthen our electoral processes for the digital age. Transparency alone won’t solve misinformation problems or election hacking. But at Mozilla, we believe transparency is the most critical piece. Citizens need accurate information and powerful tools to make informed decisions. We encourage you to use our new Ad Analysis for Facebook experiment, as well as our other tools and resources to help you navigate the US midterm elections. It’s all part of learning more about who is trying to influence your vote.

Introducing Spoke: Make your own custom 3D social scenes

Today we’re thrilled to announce the beta release of Spoke: the easiest way to create your own custom social 3D scenes you can use with Hubs.

Over the last year, our Social Mixed Reality team has been developing Hubs, a WebVR-based social experience that runs right in your browser. In Hubs, you can communicate naturally in VR or on your phone or PC by simply sharing a link.

Along the way, we’ve added features that enable social presence, self-expression, and content sharing. We’ve also offered a variety of scenes to choose from, like a castle space, an atrium, and even a wide open space high in the sky.

However, as we hinted at earlier in the year, we think creating virtual scenes should be easy for anyone, as easy as creating your first webpage.

Spoke lets you quickly take all the amazing 3D content from across the web from sites like Sketchfab and Google Poly and compose it into a custom scene with your own personal touch. You can also use your own 3D models, exported as glTF. The scenes you create can be published, shared, and used in Hubs in just a few clicks. It takes as little as 5 minutes to create a scene and meet up with others in VR. Don’t believe us? Check out our 5 minute tutorial to see how easy it is.

With Spoke, all of the freely-licensed 3D content by thousands of amazing and generous 3D artists can be composed into places you can visit together in VR. We’ve made it easy to import and arrange your own 3D content as well. In a few clicks, you can meet up in a custom 3D scene, in VR, all by just sharing a link. And since you’re in Hubs, you can draw, bring in content from the web, or even take selfies with one another!

We’re beyond excited to get Spoke into your hands, and we can’t wait to see the amazing scenes you create. We’ll be adding more capabilities to Spoke over the coming months which will open up even more possibilities. As always, please join us on our Discord server or file a GitHub issue if you have feedback.

Encrypted SNI Comes to Firefox Nightly


TL;DR: Firefox Nightly now supports encrypting the TLS Server Name Indication (SNI) extension, which helps prevent attackers on your network from learning your browsing history. You can enable encrypted SNI today and it will automatically work with any site that supports it. Currently, that means any site hosted by Cloudflare, but we’re hoping other providers will add ESNI support soon.

Concealing Your Browsing History

Although an increasing fraction of Web traffic is encrypted with HTTPS, that encryption isn’t enough to prevent network attackers from learning which sites you are going to. It’s true that HTTPS conceals the exact page you’re going to, but there are a number of ways in which the site’s identity leaks. This can itself be sensitive information: do you want the person at the coffee shop next to you to know you’re visiting cancer.org?

There are four main ways in which browsing history information leaks to the network: the TLS certificate message, DNS name resolution, the IP address of the server, and the TLS Server Name Indication extension. Fortunately, we’ve made good progress shutting down the first two of these: The new TLS 1.3 standard encrypts the server certificate by default and over the past several months, we’ve been exploring the use of DNS over HTTPS to protect DNS traffic. This is looking good and we are hoping to roll it out to all Firefox users over the coming months. The IP address remains a problem, but in many cases, multiple sites share the same IP address, so that leaves SNI.

Why do we need SNI anyway and why didn’t this get fixed before?

Ironically, the reason you need an SNI field is because multiple servers share the same IP address. When you connect to the server, it needs to give you the right certificate to prove that you’re connecting to a legitimate server and not an attacker. However, if there is more than one server on the same IP address, then which certificate should it choose? The SNI field tells the server which host name you are trying to connect to, allowing it to choose the right certificate. In other words, SNI helps make large-scale TLS hosting work.

We’ve known that SNI was a privacy problem from the beginning of TLS 1.3. The basic idea is easy: encrypt the SNI field (hence “encrypted SNI” or ESNI). Unfortunately every design we tried had drawbacks. The technical details are kind of complicated, but the basic story isn’t: every design we had for ESNI involved some sort of performance tradeoff and so it looked like only sites which were “sensitive” (i.e., you might want to conceal you went there) would be willing to enable ESNI. As you can imagine, that defeats the point, because if only sensitive sites use ESNI, then just using ESNI is itself a signal that your traffic demands a closer look. So, despite a lot of enthusiasm, we eventually decided to publish TLS 1.3 without ESNI.

However, at the beginning of this year, we realized that there was actually a pretty good 80-20 solution: big Content Distribution Networks (CDNs) host a lot of sites all on the same machines. If they’re willing to convert all their customers to ESNI at once, then suddenly ESNI no longer reveals a useful signal because the attacker can see what CDN you are going to anyway. This realization broke things open and enabled a design for how to make ESNI work in TLS 1.3 (see Alessandro Ghedini’s writeup of the technical details). Of course, this only works if you can mass-configure all the sites on a given set of servers, but that’s a pretty common configuration.

How do I get it?

This is brand-new technology and Firefox is the first browser to get it. At the moment we’re not ready to turn it on for all Firefox users. However, Nightly users can try out this privacy-enhancing feature now by performing the following steps: First, you need to make sure you have DNS over HTTPS enabled (see: https://blog.nightly.mozilla.org/2018/06/01/improving-dns-privacy-in-firefox/). Once you’ve done that, you also need to set the “network.security.esni.enabled” preference in about:config to “true”. This should automatically enable ESNI for any site that supports it. Right now, that’s just Cloudflare, which has enabled ESNI for all its customers, but we’re hoping that other providers will follow them. You can go to https://www.cloudflare.com/ssl/encrypted-sni/ to check for yourself that it’s working.

What’s Next?

During the development of TLS 1.3 we found a number of problems where network devices (typically firewalls and the like) would break when you tried to use TLS 1.3. We’ve been pretty careful about the design, but it’s possible that we’ll see similar problems with ESNI. In order to test this, we’ll be running a set of experiments over the next few months and measuring for breakage. We’d also love to hear from you: if you enable ESNI and it works or causes any problems, please let us know.


Opus is a totally open, royalty-free audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is now included in all major browsers and mobile operating systems. It has been adopted for a wide range of applications, and is the default WebRTC codec.

This release brings quality improvements to both speech and music compression, while remaining fully compatible with RFC 6716. Here’s a few of the upgrades that users and implementers will care about the most.

Opus 1.3 includes a brand new speech/music detector. It is based on a recurrent neural network and is both simpler and more reliable than the detector that has been used since version 1.1. The new detector should improve the Opus performance on mixed content encoding, especially at bitrates below 48 kb/s.

There are also many improvements for speech encoding at lower bitrates, both for mono and stereo. The demo contains many more details, as well as some audio samples. This new release also includes a cool new feature: ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.


In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

While Scuttlebutt is person-centric and IPFS is document-centric, today you’ll learn about Matrix, which is all about messages. Instead of inventing a whole new stack, they’ve leaned on some familiar parts of the web today – HTTP as a transport, and JSON for the message format. How those messages get around is what distinguishes it – a system of decentralized servers, designed with interoperability in mind from the beginning, and an extensibility model for adapting to different use-cases. Please enjoy this introduction from Ben Parsons, developer advocate for Matrix.org.

– Dietrich Ayala

What is Matrix?

Matrix is an open standard for interoperable, decentralised, real-time communication over the Internet. It provides a standard HTTP API for publishing and subscribing to real-time data in specified channels, which means it can be used to power Instant Messaging, VoIP/WebRTC signalling, Internet of Things communication, and anything else that can be expressed as JSON and needs to be transmitted in real-time over HTTP. The most common use of Matrix today is as an Instant Messaging platform.

Matrix is interoperable in that it follows an open standard and can freely communicate with other platforms. Matrix messages are JSON, and easy to parse. Bridges are provided to enable communication with other platforms.

Matrix is decentralised – there is no central server. To communicate on Matrix, you connect your client to a single “homeserver” – this server then communicates with other homeservers. For every room you are in, your homeserver will maintain a copy of the history of that room. This means that no one homeserver is the host or owner of a room if there is more than one homeserver connected to it. Anyone is free to host their own homeserver, just as they would host their own website or email server.

Why create another messaging platform?

The initial goal is to fix the problem of fragmented IP communications: letting users message and call each other without having to care what app the other user is on – making it as easy as sending an email.

In future, we want to see Matrix used as a generic HTTP messaging and data synchronization system for the whole web, enabling IoT and other applications through a single unified, understandable interface.

What does Matrix provide?

Matrix is an Open Standard, with a specification that describes the interaction of homeservers, clients and Application Services that can extend Matrix.

There are reference implementations of clients, servers and SDKs for various programming languages.

Architecture

You connect to Matrix via a client. Your client connects to a single server – this is your homeserver. Your homeserver stores and provides history and account information for the connected user, and room history for rooms that user is a member of. To sign up, you can find a list of public homeservers at hello-matrix.net, or if using Riot as your client, the client will suggest a default location.

Homeservers synchronize message history with other homeservers. In this way, your homeserver is responsible for storing the state of rooms and providing message history.

Let’s take a look at an example of how this works. Homeservers and clients are connected as in the diagram in figure 1.

Figure 1. Homeservers with clients


If we join a homeserver (Figure 3), that means we are connecting our client to an account on that homeserver.

Figure 3.

Now we send a message. This message is sent into a room specified by our client, and given an event id by the homeserver.

Figure 4.

Our homeserver sends the message event to every homeserver which has a user account belonging to it in the room. It also sends the event to every local client in the room. (Figure 5.)

Figure 5.

Finally, the remote homeservers send the message event to their clients, which are in the appropriate room.

Figure 6.

Usage Example – simple chatbot

Let’s use the matrix-js-sdk to create a small chatbot, which listens in a room and responds back with an echo.

Make a new directory, install matrix-js-sdk and let’s get started:

mkdir my-bot
cd my-bot
npm install matrix-js-sdk
touch index.js

Now open index.js in your editor. We first create a client instance; this connects our client to our homeserver:
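Here is a minimal version of that setup, based on the matrix-js-sdk API (the homeserver URL, access token, and user ID are placeholders you’d swap for your own). It also starts the client and wires room timeline events to the handleEvent function defined below:

var sdk = require("matrix-js-sdk");

// Connect to a homeserver; substitute your own credentials.
var client = sdk.createClient({
    baseUrl: "https://matrix.org",
    accessToken: "YOUR_ACCESS_TOKEN",
    userId: "@YOUR_USERNAME:matrix.org"
});

// Start syncing, and hand every room timeline event to our handler.
client.startClient();
client.on("Room.timeline", function(event, room, toStartOfTimeline) {
    handleEvent(event);
});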

Finally, we respond to the events by echoing back messages starting with “!”:

function handleEvent(event) {
    // we know we only want to respond to messages
    if (event.getType() !== "m.room.message") {
        return;
    }
    // we are only interested in messages which start with "!"
    if (event.getContent().body[0] === '!') {
        // create an object with everything after the "!"
        var content = {
            "body": event.getContent().body.substring(1),
            "msgtype": "m.notice"
        };
        // send the message back to the room it came from
        client.sendEvent(event.getRoomId(), "m.room.message", content, "", (err, res) => {
            console.log(err);
        });
    }
}

Learn More

The best place to come and find out more about Matrix is on Matrix itself! The absolute quickest way to participate in Matrix is to use Riot, a popular web-based client. Head to https://riot.im/app, sign up for an account and join the #matrix:matrix.org room to introduce yourself.

Show your support for Firefox with new badges


Firefox is only as strong as its passionate users. Because we’re independent, people need to make a conscious choice to use a non-default browser on their system. We’re most successful when happy users tell others about an alternative worth trying.

If you’re a Firefox user and want to show your support, we’ve made a collection of badges you can add to your website to tell users, “I use Firefox, and you should too!”

You can browse the badges and grab the code to display them on a dedicated microsite we’ve built, so there’s no need to download them (though you’re welcome to if you want). Images are hosted on a Mozilla CDN for convenience and performance only. We do no tracking of traffic to the CDN. We’ll be adding more badges as time goes on as well.

So whether you’re excited to use a browser from a non-profit with a mission to build a better Internet, or just think Firefox is a kick-ass product, we’d love for you to spread the word.

Mozilla’s ninth-annual festival — slated for October 22-28 in London — examines how the internet and human life intersect

Workshops that teach you how to detect misinformation and mobile trackers. A series of art installations that turn online data into artwork. A panel about the unintended consequences of AI, featuring a former YouTube engineer and a former FBI agent. And a conversation with the inventor of the web.

These are just a handful of the experiences at this year’s MozFest, Mozilla’s annual festival for, by, and about people who love the internet. From October 22-28 at the Royal Society of Arts (RSA) and Ravensbourne University in central London, more than 2,500 developers, designers, activists, and artists from dozens of countries will gather to explore privacy, security, openness, and inclusion online.

Tickets are just £45, and provide access to hundreds of sessions, talks, art, swag, meals, and more.

Says Mark Surman, Mozilla’s Executive Director: “At MozFest, people from across the globe — technologists from Nairobi, educators from Berlin — come together to build a healthier internet. We examine the most pressing issues online, like misinformation and the erosion of privacy. Then we roll up our sleeves to find solutions. In a way, MozFest is just the start: The ideas we bat around and the code we write always evolves into new campaigns and new open-source products.”

You can learn more and purchase tickets at mozillafestival.org. In the meantime, here’s a closer look at what you can expect:

Hundreds of hands-on workshops

MozFest is built around hands-on participation — many of your fellow attendees are leading sessions themselves. These sessions are divided among six spaces: Decentralisation; Digital Inclusion; Openness; Privacy and Security; Web Literacy; and the Youth Zone.

Sessions range from roundtable discussions to hackathons. Among them:

A scene from MozFest 2017

“Get the Upper Hand on Misinformation,” a session exploring concepts like confirmation bias, disinformation, and fake news. Participants will also suggest their own tools to combat these issues

“Tracking Mobile Trackers,” a session that teaches you how to detect — and jam — the mobile trackers that prey on your personal data

Talks

The MozFest Dialogues & Debates stage features leading thinkers from across the internet health movement. This year, 18 luminaries from France, India, Afghanistan, and beyond will participate in solo talks and spirited panels. Among them:

“Flaws in the Data-Driven Digital Economy,” a talk by Renée DiResta. Renée investigates the spread of disinformation and manipulated narratives across social networks. She is a Mozilla Fellow; the Director of Research at New Knowledge; and Head of Policy at nonprofit Data for Democracy

New Experiences

MozFest is always evolving — over nine years, it’s grown from a small gathering in a Barcelona museum to a global convening in the heart of London. This year, we’re excited to introduce:

A scene from MozFest 2017

Queering MozFest, a pan-festival experience that explores how internet issues intersect with gender and sexuality. Programming will reflect on the relationships between technology, normalisation, and marginalisation

Tracked, a game spanning the entire festival. The experience will engage players in various activities throughout the venue, demonstrating the trade-offs we each make when it comes to our personal data

Art + Data, a gallery of 36 interactive art installations that merge data and art — from ASCII scarves you can actually wear, to startling visualizations of the amount of personal data that’s public online

Mozilla’s second-ever *Privacy Not Included, a guide to help you shop for private and secure connected gifts this holiday season, will debut at MozFest. Some 70 products will be reviewed to reveal what exactly they do with your personal data

MozFest House

The Festival weekend — Saturday, October 27 and Sunday, October 28 — is where many sessions, talks, and experiences take place. But there’s an entire pre-week of programming, too. MozFest House runs from October 22 to October 26 at the Royal Society of the Arts (RSA) and extends the festival into a week-long affair. MozFest House programming includes:

A screening of “The Cleaners,” a documentary about the dark, day-to-day activities of online content moderators

“MisinfoCon,” a one-day conference exploring the spread of misinformation online — and how to fix it

“Viewsource,” a one-day conference where front-end developers and designers talk about CSS, JavaScript, HTML, Web Apps, and more

MozFest couldn’t happen without the time and talent of our extraordinary volunteer wranglers. And it is made possible by our presenting sponsor Private Internet Access, a leading personal virtual private network (VPN) service. The event is also supported by Internet Society, the nonprofit working for an open, globally-connected, trustworthy, and secure Internet for everyone.

We hope you’ll join us in London — or tune in remotely — and help us build a better internet. mozillafestival.org

Published 2018-10-16 by Mozilla on The Mozilla Blog (https://blog.mozilla.org).

Apply to Join the Featured Extensions Advisory Board (http://blog.mozilla.org/addons/?p=8569)


Do you love extensions? Do you have a keen sense of what makes a great extension? Want to help users discover extensions that will improve how they experience the web? If so, please consider applying to join our Featured Extensions Community Board!

Board members nominate and select new featured extensions each month to help millions of users find top-quality extensions to customize their Firefox browsers. Click here to learn more about the duties of the Featured Extensions Advisory Board. The current board is wrapping up its six-month tour of duty, and we are now assembling a new board of talented contributors for January – June 2019.

Extension developers, designers, advocates, and fans are all invited to apply to join the board. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally from the outgoing board.

To apply, please send us an email at amo-featured [at] mozilla [dot] org with your name and a few sentences about how you’re involved with AMO and why you are interested in joining the board. The deadline is Monday, October 22, 2018 at 11:59pm PDT. The new board will be announced shortly thereafter.

In previous research, The Extended Mind has documented how a 3D space automatically signals the rules of behavior to people. One of the key findings of that research is that synchrony in the design of a space helps communicate behavioral norms to visitors. That means that when there is complementarity among content, affordances, and avatars, it helps people learn how to act. One example would be creating a gym environment (content) with weights (affordances), but only letting avatars dress in tuxedos and evening gowns. The contradiction between the space and people’s appearances could demotivate weight-lifting (the desired behavior).

This article shares learnings from the Hubs by Mozilla user research on how the different locations participants visited impacted their behavior. Briefly, the researchers observed five pairs of participants in multiple 3D environments and watched as they navigated new ways of interacting with one another. In this particular study, participants visited a medieval fantasy world, a meeting room, an atrium, and a rooftop bunker.

To read more about the details and set up of the user study, read the intro blog post here.

The key environmental design insights are:

Users want to explore

The size of the space influences the type of conversation that users have

Objects in the environment shaped people’s expectations of what the space was for

The rest of the article will provide additional information on each of the insights.

Anticipate that people will want to explore upon arrival

Users immediately began exploring the space and quickly taught themselves to move. This might have been because people were new to Hubs by Mozilla and Social VR more generally. The general takeaway is that XR creators should give people something to discover once they arrive. Finding something will be satisfying to the user. Platforms could also embrace novelty and give people something new to discover every time they visit. E.g., in Hubs, there is a rubber duck. Perhaps the placement of the duck could be randomly generated so people would have to look for it every time they arrive.

One thing to consider from a technical perspective was that the participants in this study didn’t grasp that moving away from their companion made it harder to hear that person. They made comments to the researchers and to each other about the spatialized audio feature:

“You have to be close to me for me to hear you”

While spatialized audio has multiple benefits and adds a dimension of presence to immersive worlds, in this case, people’s lack of understanding meant that they sometimes had sound issues. When this was combined with people immediately exploring the space when they arrived earlier than their companion, it was sometimes challenging for people to connect with one another. This leads to the second insight about size of the space.

Smaller spaces were easier for close conversations

When people arrived in the smaller spaces, it was easier for them to find their companion and they were less likely to get lost. One world tested, based on a medieval fantasy book, was inviting with its warm colors, but it was large and people wandered off. That type of exploration sometimes got in the way of people enjoying conversations:

“I want to look at her robot face, but it’s hard because she keeps moving.”

This is another opportunity to consider use cases for any Social VR environment. If the use case is conversation, smaller rooms lead to more intimate talks. Participants who were new to VR were able to access this insight when describing their experience.

"The size of the space alludes to…[the] type of conversation. Being out in this bigger space feels more public, but when we were in the office, it feels more intimate."

This quote illustrates how size signaled privacy to users. It is also coherent with past research from The Extended Mind on how to configure a space to match users’ expectations.

…when you go to a large city, the avenues are really wide which means a lot of traffic and people. vs. small streets means more residential, less traffic, more privacy. All of those rules still apply [to XR].

The lesson for all creators is that the clearer they are on the use case of a space, the easier it should be to build it. In fact, participants were excited about the prospect of identifying or customizing their own spaces for a diverse set of activities or for meeting certain people:

“Find the best environment that suits what you want to do...

There is a final insight on how the environment shapes user behavior: objects change people’s perceptions, including around big concepts like privacy.

Objects shaped people’s expectations of what the space was for

There were two particular Hubs objects that users responded to in interesting ways. The first is the rubber duck and the second is a door. What’s interesting to note is that in both cases, participants are interpreting these objects on their own and no one is guiding them.

The rubber duck is unique to Hubs and was something that users quickly became attached to. When a participant clicked on the duck, it quacked and replicated itself, which motivated the users to click over and over again. It was a playful fidget-y type object, which helped users understand that it was fine to just sit and laugh with their companion and that they didn’t have to “do something” while they visited Hubs.

However, there were other objects that led users to make incorrect assumptions about privacy of Hubs. The presence of a door led a user to say:

“I thought opening one of those doors would lead me to a more public area.”

In reality, the door was not functional. Hubs’ locations are entirely private places accessible only via a unique URL.

What’s relevant to all creators is that their environmental design is open to interpretation by visitors. Even if creators scrub out objects and keep environments sparse, that will just lead users to make different assumptions about what the space is for. One pair of participants decided that one of the more basic Hubs spaces reminded them of an interrogation room, and they constructed an elaborate story for themselves that revolved around it.

Summary

Environmental cues can shape user expectations and behaviors when they enter an immersive space. In this test with Hubs by Mozilla, large locations led people to roam, while small places focused people’s attention on each other. The contents of the room also influenced the topics of conversation and how private people believed their discussions might be.

All of this indicates that XR creators should consider the subtle messages that their environments send to users. There’s value in user testing with multiple participants who come from different backgrounds, to understand how their interpretations vary (or don’t) from the intentions of the creator. Testing doesn’t have to be a huge undertaking requiring massive development hours in response; it may uncover small things that can be revised rapidly, such as tweaks to lighting and sound that impact people’s experience of a space. For the most part, people don’t find dim lighting inviting; a test could uncover that early in the process, and developers could amp up the brightness before a product with an immersive environment actually launches.

The final article in this blog series is going to focus on giving people the details of how this Hubs by Mozilla research study was executed and make recommendations for best practices in conducting usability research on cross platform (2D and VR) devices.

This article is part three of the series that reviews the user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product.

To read part one on accessibility, click here.
To read part two on the personal connections and playfulness of Hubs, click here.

Published 2018-10-15 by Jessica Outlaw on the Mozilla Mixed Reality Blog (https://blog.mozvr.com/).

Removing Old Versions of TLS (https://blog.mozilla.org/security/?p=2397)


In March of 2020, Firefox will disable support for TLS 1.0 and TLS 1.1.

On the Internet, 20 years is an eternity. TLS 1.0 will be 20 years old in January 2019. In that time, TLS has protected billions – and probably trillions – of connections from eavesdropping and attack.

In that time, we have collectively learned a lot about what it takes to design and build a security protocol.

Though we are not aware of specific problems with TLS 1.0 that require immediate action, several aspects of the design are neither as strong nor as robust as we would like given the nature of the Internet today. Most importantly, TLS 1.0 does not support modern cryptographic algorithms.

The Internet Engineering Task Force (IETF) no longer recommends the use of older TLS versions. A draft document describes the technical reasons in more detail.

We will disable TLS 1.1 at the same time. TLS 1.1 only addresses a limitation of TLS 1.0 that can be addressed in other ways. Our telemetry shows that only 0.1% of connections use TLS 1.1.

TLS versions for all connections established by Firefox Beta 62, August-September 2018

For sites that need to upgrade, the recently released TLS 1.3 includes an improved core design that has been rigorously analyzed by cryptographers. TLS 1.3 can also make connections faster than TLS 1.2. Firefox already makes far more connections with TLS 1.3 than with TLS 1.0 and 1.1 combined.

Be aware that these changes will appear in pre-release versions of Firefox (Beta, Developer Edition, and Nightly) earlier than March 2020. We will announce specific dates when we have more detailed plans.

We understand that upgrading something as fundamental as TLS can take some time. This change affects a large number of sites. That is why we are making this announcement so far in advance of the March 2020 removal date of TLS 1.0 and TLS 1.1.
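
If you operate a site and are not sure which TLS versions it still accepts, you can probe it directly. Below is a minimal, unofficial sketch in TypeScript for Node.js (assuming Node 12+, where tls.connect accepts the minVersion/maxVersion options); the host name is a placeholder.

    import * as tls from "tls";

    // Attempt a handshake pinned to a single TLS version; resolves true if the
    // server accepts that version and false if it refuses the handshake.
    function probe(host: string, version: tls.SecureVersion): Promise<boolean> {
      return new Promise((resolve) => {
        const socket = tls.connect(
          { host, port: 443, servername: host, minVersion: version, maxVersion: version },
          () => { socket.end(); resolve(true); }
        );
        socket.on("error", () => resolve(false));
      });
    }

    async function main() {
      const versions: tls.SecureVersion[] = ["TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"];
      for (const v of versions) {
        console.log(v, (await probe("example.com", v)) ? "accepted" : "refused");
      }
    }
    main();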

Other browsers have made similar announcements. Chrome, Edge, and Safari all plan to make the same change.

Changes and updates to the code, data, and tools that support MDN Web Docs. In September, the team launched MDN payments, improved MDN’s accessibility resources, and removed 15% of KumaScript macros. The team also shipped tweaks and fixes by merging 379 pull requests, including 66 pull requests from 38 new contributors.

Launched MDN payments

We’ve been thinking about the direction and growth of MDN. We’d like a more direct connection with developers, and to provide them with valuable features and benefits they need to be successful in their web projects. We’ve researched several promising ideas, and decided that direct payments would be the first experiment. Logged-in users and 1% of anonymous visitors see the banner that asks them to directly support MDN. See Ali Spivak’s and Kadir Topal’s post, A New Way to Support MDN, for more information.

Payment page on MDN

The implementation phase started in August, when Potato London was hired to design and implement payments. Potato did an amazing job executing on a 5-week schedule, including several design meetings, daily standups, and a trip from Bristol to London to meet face-to-face during the MDN work week. Thanks to the hard work from the Potato team, including Charlie Harding, Josh Jarvis, Matt Hall, Michał Macioszczyk, Philip Lackmaker, and Rachel Lee.

In honour of Potato, Tate Modern is exhibiting Magdalena Abakanowicz’s “Embryology”

Improved MDN’s accessibility resources

After the work week, we met with accessibility experts for the Hack on MDN event. Volunteers and staff improved MDN’s coverage of accessibility. This included discussions of accessibility topics, improving and expanding MDN’s documentation, and writing related blog posts. It also included code changes, improving MDN’s color contrast and adding markup for screen readers. See Janet Swisher’s Hack on MDN: Better accessibility for MDN Web Docs for the details.

Removed 15% of KumaScript macros

The MDN team got together for a week at the London office to reflect on the quarter and plan the coming year.

We discussed KumaScript, our macro language and rendering service that implements standardized sidebars, banners, and internal links. It’s been easier to analyze macros since we moved them to GitHub in November 2016. We’re happy with the performance gains, but code reviews take forever, translations are hard, and we’re slow to write tests. These issues contributed to an incident in August where a sidebar macro was broken, and all the API reference pages showed an error for a day (bug 1487640).

Staff is getting impatient with KumaScript, and wants to replace it with something better. Florian wrote up the notes from the meeting on Discourse as Next steps for KumaScript.

The team removed 72 macros in about 2 weeks, and will continue removing them for the rest of the year. This will leave a smaller number of important macros, and we can analyze them for the next steps in the project.

Planned for October

October is the start of the fourth quarter. We have a few yearly goals to complete, including the Python 3 transition, the next round of the payments experiment, and performance experiments. This quarter also contains major holidays and the Mozilla All Hands, which mean it has about half the working days of other quarters. Time to get to work!

Move to Mozilla IT infrastructure

In October, Ryan Johnson, Ed Lim, Dave Parfitt, and Josh Mize will complete the setup of MDN services in the Mozilla IT infrastructure, and switch production traffic to the new systems. This will complete the migration of MDN from Mozilla Marketing to Emerging Technologies, started in February 2018. The team is organizing the switch-over checklist, and experimenting with the parallel staging environments.

The production switch is planned for October 29th, and will include a few hours when the site is in read-only mode.


When it comes to smart home devices, protecting the safety and security of your home when you aren’t there is a popular area of adoption. Traditional home security systems are either completely offline (an alarm sounds in the house, but nobody is notified) or professionally monitored (with costly subscription services). Self-monitoring of your connected home therefore makes sense, but many current smart home solutions still require ongoing service fees and send your private data to a centralised cloud service.

The latest version of the Things Gateway rolls out today with new home monitoring features that let you directly monitor your home over the web, without a middleman. That means no monthly fees, your private data stays in your home by default, and you can choose from a variety of sensors from different brands.

Version 0.6 adds support for door sensors, motion sensors and customisable push notifications. Other enhancements include support for push buttons and a wider range of Apple HomeKit devices, as well as general robustness improvements and better error reporting.

Sensors

The latest update comes with support for door/window sensors and motion sensors, including the SmartThings Motion Sensor and SmartThings Multipurpose Sensor. These sensors make great triggers for a home monitoring system and also report temperature, battery level and tamper detection.
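
Because the gateway exposes devices through the Web Thing API, you can also read these sensor values yourself over local HTTP. The sketch below is illustrative rather than official sample code: the gateway address, thing ID, property name, and token are placeholders you would substitute from your own setup (it assumes a runtime with fetch, such as Node 18+ or a browser).

    const GATEWAY = "http://gateway.local";      // placeholder address
    const TOKEN = "<access-token-from-gateway>"; // placeholder bearer token

    // Read a single property, e.g. whether a motion sensor is currently triggered.
    async function readProperty(thingId: string, property: string): Promise<unknown> {
      const res = await fetch(`${GATEWAY}/things/${thingId}/properties/${property}`, {
        headers: { Accept: "application/json", Authorization: `Bearer ${TOKEN}` },
      });
      if (!res.ok) throw new Error(`Gateway returned HTTP ${res.status}`);
      return res.json(); // e.g. { "motion": true }
    }

    readProperty("motion-sensor-1", "motion").then(console.log);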

Push Notifications

You can now create rules which trigger a push notification to your desktop, laptop, tablet or smartphone. An example use case for this is to notify you when a door has been opened or motion is detected in your home, but you can use notifications for whatever you like!

To create a rule which triggers a push notification, simply drag and drop the notification output and customize it with your own message.

Thanks to the power of Progressive Web Apps, if you’ve installed the gateway’s web app on your smartphone or tablet you’ll receive notifications even if the web app is closed.
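
Under the hood this relies on standard service worker push machinery. The sketch below shows the general pattern any Progressive Web App uses to surface a notification while closed; it is not the gateway’s actual code, and the payload shape is hypothetical.

    // sw.ts: a generic service worker push handler (compile with the "webworker" lib).
    declare const self: ServiceWorkerGlobalScope;

    self.addEventListener("push", (event: PushEvent) => {
      // Fall back to a generic message if the push payload is empty.
      const data = event.data
        ? event.data.json()
        : { title: "Things Gateway", body: "A rule was triggered" };
      // waitUntil keeps the worker alive until the notification has been shown.
      event.waitUntil(
        self.registration.showNotification(data.title, { body: data.body })
      );
    });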

Push Buttons

We’ve also added support for push buttons, like the SmartThings Button, which you can program to trigger any action you like using the rules engine. Use a button to simply turn a light on, or set a whole scene with multiple outputs.

Error Reporting

Version 0.6 also comes with a range of robustness improvements, including connection detection and error reporting. That means it will be easier to tell whether you have lost connectivity to the gateway or one of your devices has dropped offline, and if something goes wrong with an add-on, you’ll be informed about it inside the gateway UI.

If a device has dropped offline, its icon is displayed as translucent until it comes back online. If your web app loses connectivity with the gateway, you’ll see a message appear at the bottom of the screen.

New Devices

This release also supports a wider range of device types, including:

Smart plugs

Bridges

Light bulbs

Sensors

These devices use the built-in Bluetooth or WiFi support of your Raspberry Pi-based gateway, so you don’t even need a USB dongle.

Download

You can download version 0.6 today from the website. If you’ve already built your own Things Gateway with a Raspberry Pi and have it connected to the Internet, it should automatically update itself soon.

We can’t wait to see what creative things you do with all these new features. Be sure to let us know on Discourse and Twitter!

Published 2018-10-11 by Ben Francis on Mozilla Hacks (https://hacks.mozilla.org).

Pocket Offers New Features to Help People Read, Watch and Listen across iOS, Android and Web (https://blog.mozilla.org/?p=11753)


We know that when you save something to Pocket, there is a reason why. You are saving something you want to learn about, something that fascinates you, something that will help shape and change you. That’s why we’ve worked hard to make Pocket a dedicated, quiet place to focus so that you can come back and absorb what you save when you are ready.

The trick is, in the reality of our lives, it’s not always that simple. Our lives don’t always offer a quiet moment with a coffee cup in one hand and Pocket in the other. We have work to do, kids to take care of, school to attend. That’s why we’ve always worked hard to ensure that Pocket gives you tools to fit content around your life, freeing you from the moment of distraction and putting you in control.

Today, we’re excited to share a new Pocket that makes it easier than ever to read, watch, and listen to all that you’ve saved, across all of the ways you use it: iOS, Android and Web.

Listen: A new way to read

You can listen to content you’ve saved from favorite publishers from all across the web—all from Pocket. Your Pocket list just became your own personal podcast, curated by you. Our new listen feature frees the content you’ve saved to fit into your busy life. It enables you to absorb articles whenever and wherever, whether you are driving, or walking, working out, cooking, or on the train.

With the latest version of listen on iOS and Android, we’re introducing a more human sounding voice, powered by Amazon Polly, and the ability to play through your list easily and hands-free. To start listening, simply open Pocket and tap the new listen icon in the top left corner.

A new Pocket, just for you

With Pocket’s app, we’ve intended it to be a different space from anything else on your device. It’s intentionally an uncluttered and distraction-free environment, built with care so you can really read.

We’ve doubled down on this with a new fresh design, tailored to let you focus, tune out the world and tune into your interests. When you open Pocket, you’ll see a Pocket that’s been redesigned top to bottom. We’ve created a new, clean, clutter-free article view to help you absorb and focus. Introduced new app-wide dark and sepia themes to make reading comfortable, no matter what time of day it is. And updated fonts and typography to make long reads more comfortable.

“At Mozilla, we love the web. Sometimes we want to surf, and the Firefox team has been working on ways to surf like an absolute champ with features like Firefox Advance,” said Mark Mayo, Chief Product Officer, Firefox. “Sometimes, though, we want to settle down and read or listen to a few great pages. That’s where Pocket shines, and the new Pocket makes it even easier to enjoy the best of the web when you’re on the go in your own focused and uncluttered space. I love it.”

Working hard for you

We’re excited to get Pocket 7.0 into your hands today. You can get the latest Pocket on Google Play, App Store, and by joining our Web Beta.


As usual, WebRender is making rapid progress. The team is working hard on nailing the remaining few blockers for enabling WebRender in Beta, after which focus will shift to the Release blockers. It’s hard to single out a particular highlight this week as the majority of bugs resolved were very impactful.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Published 2018-10-11 by Cornel Ionce on Mozilla Quality Assurance (https://quality.mozilla.org).

Firefox Reality 1.0.1 - with recline mode

Announcing Firefox Reality 1.0.1, including a new mode for viewing web content while reclined.

Firefox Reality 1.0.1 is now available for download in the Viveport, Oculus, and Daydream app stores. This is a minor point release, focused on fixing several performance issues and adding crash reporting UI and (thanks to popular request!) a reclined viewing mode.

We’ve been collecting feedback from users, and are working on a more fully-featured version for November with performance improvements, bookmarks, and an improved movie/theater mode (including 180/360 video support).

Keep the feedback coming, and don't forget to check out new content weekly!

Published 2018-10-10 by Andre Vrignaud on the Mozilla Mixed Reality Blog (https://blog.mozvr.com/).

Delaying Further Symantec TLS Certificate Distrust (https://blog.mozilla.org/security/?p=2386)

Due to a long list of documented issues, Mozilla previously announced our intent to distrust TLS certificates issued by the Symantec Certification Authority, which is now a part of DigiCert. On August 13th, the next phase of distrust was enabled in Firefox Nightly.

In my previous update, I pointed out that many popular sites are still using these certificates. They are apparently unaware of the planned distrust despite DigiCert’s outreach, or are waiting until the release date that was communicated in the consensus plan to finally replace their Symantec certificates. While the situation has been improving steadily, our latest data shows well over 1% of the top 1-million websites are still using a Symantec certificate that will be distrusted.

Unfortunately, because so many sites have not yet taken action, moving this change from Firefox 63 Nightly into Beta would impact a significant number of our users. It is unfortunate that so many website operators have waited to update their certificates, especially given that DigiCert is providing replacements for free.

We prioritize the safety of our users and recognize the additional risk caused by a delay in the implementation of the distrust plan. However, given the current situation, we believe that delaying the release of this change until later this year when more sites have replaced their Symantec TLS certificates is in the overall best interest of our users. This change will remain enabled in Nightly, and we plan to enable it in Firefox 64 Beta when it ships in mid-October.

We continue to strongly encourage website operators to replace Symantec TLS certificates immediately. Doing so improves the security of their websites and allows the tens of thousands of Firefox Nightly users to access them.

Published 2018-10-10 by Wayne Thayer on the Mozilla Security Blog (https://blog.mozilla.org/security).

Announcing a Competition for Ethics in Computer Science, with up to $3.5 Million in Prizes (https://blog.mozilla.org/?p=11749)

Today, computer scientists wield tremendous power. The code they write can be used by billions of people, and influence everything from what news stories we read, to what personal data companies collect, to who gets parole, insurance, or housing loans.

Software can empower democracy, heighten opportunity, and connect people continents away. But when it isn’t coupled with responsibility, the results can be drastic. In recent years, we’ve watched biased algorithms and broken recommendation engines radicalize users, promote racism, and spread misinformation.

That’s why Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies are launching the Responsible Computer Science Challenge: an ambitious initiative to integrate ethics and accountability into undergraduate computer science curricula and pedagogy at U.S. colleges and universities, with up to $3.5 million in prizes.

Says Kathy Pham: “In a world where software is entwined with much of our lives, it is not enough to simply know what software can do. We must also know what software should and shouldn’t do, and train ourselves to think critically about how our code can be used. Students of computer science go on to be the next leaders and creators in the world, and must understand how code intersects with human behavior, privacy, safety, vulnerability, equality, and many other factors.”

Pham adds: “Just like how algorithms, data structures, and networking are core computer science classes, we are excited to help empower faculty to also teach ethics and responsibility as an integrated core tenet of the curriculum.”

Pham is currently a Senior Fellow and Adjunct Lecturer at Harvard University, and an alum of Google, IBM, and the United States Digital Service at the White House. She will work closely with Responsible Computer Science applicants and winners.

Says Paula Goldman, Global Lead of the Tech and Society Solutions Lab at Omidyar Network: “To ensure technology fulfills its potential as a positive force in the world, we are supporting the growth of a tech movement that is guided by the emerging mantra to move purposefully and fix things. Treating ethical reflection and discernment as an opt-in sends the wrong message to computer science students: that ethical thinking can be an ancillary exploration or an afterthought, that it’s not part and parcel of making code in the first place. Our hope is that this effort helps ensure that the next generation of tech leaders is deeply connected to the societal implications of the products they build.”

Says Craig Newmark, founder of craigslist and Craig Newmark Philanthropies: “As an engineer, when you build something, you can’t predict all of the consequences of what you’ve made; there’s always something. Nowadays, we engineers have to understand the importance and impact of new technologies. We should aspire to create products that are fair to and respectful of people of all backgrounds, products that make life better and do no harm.”

Says Thomas Kalil, Chief Innovation Officer at Schmidt Futures: “Information and communication technologies are transforming our economy, society, politics, and culture. It is critical that we equip the next generation of computer scientists with the tools to advance the responsible development of these powerful technologies – both to maximize the upside and understand and manage the risks.”

Says Mary L. Gray, a Responsible Computer Science Challenge judge: “Computer science and engineering have deep domain expertise in securing and protecting data. But when it comes to drawing on theories and methods that attend to people’s ethical rights and social needs, CS and engineering programs are just getting started. This challenge will help the disciplines of CS and engineering identify the best ways to teach the next generation of technologists what they need to know to build more socially responsible and equitable technologies for the future.”

(Gray is senior researcher at Microsoft Research; fellow at Harvard University’s Berkman Klein Center for Internet & Society; and associate professor in the School of Informatics, Computing, and Engineering with affiliations in Anthropology and Gender Studies at Indiana University.)

Responsible Computer Science Challenge details

Through the Responsible Computer Science Challenge, Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies are supporting the conceptualization, development, and piloting of curricula that integrate ethics with computer science. Our hope is that this coursework will not only be implemented, but also scaled to colleges and universities across the country — and beyond.

Between December 2018 and July 2020, we will award up to $3.5 million in prizes to promising proposals. The challenge is open to both individual professors and collaborative teams consisting of professors, graduate students, and teaching assistants. We’re seeking educators who are passionate about teaching not only computer science, but how it can be deployed in a responsible, positive way.

The challenge consists of two stages:

In Stage 1, we will seek concepts for deeply integrating ethics into existing undergraduate computer science courses, either through syllabi changes (e.g. including a reading or exercise on ethics in each class meeting) or teaching methodology adjustments (e.g. pulling teaching assistants from ethics departments). Stage 1 winners will receive up to $150,000 each to develop and pilot their ideas. Winners will be announced in April 2019.

In Stage 2, we will support the spread and scale of the most promising approaches developed in Stage 1. Stage 2 winners will receive up to $200,000 each and will be announced in summer 2020.

Projects will be judged by an external review committee of academics, tech industry leaders, and others, who will use evaluation criteria developed jointly by Omidyar Network and Mozilla.

Judges include Bobby Schnabel, professor of computer science at the University of Colorado Boulder and former president of ACM; Maria Klawe, president of Harvey Mudd College; Joshua Cohen, Marta Sutton Weeks Professor of Ethics in Society at Stanford University; Brenda Darden Wilkerson, president and CEO of the Anita Borg Institute; and others.

We are accepting Initial Funding Concepts for Stage 1 now through December 13, 2018. Apply.

~

Pham concludes: “In the short term, we can create a new wave of engineers. In the long term, we can create a culture change in Silicon Valley and beyond — and as a result, a healthier internet.”

The Responsible Computer Science Challenge is part of Mozilla’s mission to empower the people and projects on the front lines of internet health work. Other recent awards include our WINS Challenges — which connect unconnected Americans — and the Mozilla Gigabit Community Fund.

Omidyar Network’s Tech and Society Solutions Lab draws on Omidyar Network’s long-standing belief in the promise of technology to create opportunity and social good, as well as the concern about unintended consequences that can result from technological innovation. The team aims to help technologists prevent, mitigate, and correct societal downsides of technology — and maximize positive impact.

ABOUT OMIDYAR NETWORK

Omidyar Network is a philanthropic investment firm dedicated to harnessing the power of markets to create opportunity for people to improve their lives. Established in 2004 by eBay founder Pierre Omidyar and his wife Pam, the organization invests in and helps scale innovative organizations to catalyze economic and social change. Omidyar Network has committed more than $1 billion to for-profit companies and nonprofit organizations that foster economic advancement and encourage individual participation across multiple initiatives, including Digital Identity, Education, Emerging Tech, Financial Inclusion, Governance & Citizen Engagement, and Property Rights. You can learn more here: www.omidyar.com.

ABOUT SCHMIDT FUTURES

Schmidt Futures is a philanthropic initiative, founded by Eric and Wendy Schmidt, that seeks to improve societal outcomes through the thoughtful development of emerging science and technologies that can benefit humanity. As a venture facility for public benefit, they invest risk capital in the most promising ideas and exceptional people across disciplines. Learn more at schmidtfutures.com

ABOUT CRAIG NEWMARK PHILANTHROPIES

Craig Newmark Philanthropies was created by craigslist founder Craig Newmark to support and connect people and drive broad civic engagement. The organization works to advance people and grassroots organizations that are getting stuff done in areas that include trustworthy journalism, voter protection, gender diversity in technology, and veterans and military families. For more information, please visit: CraigNewmarkPhilanthropies.org

Published 2018-10-10 by Mozilla on The Mozilla Blog (https://blog.mozilla.org).

Trusting the delivery of Firefox Updates (https://blog.mozilla.org/security/?p=2389)


Providing a web browser that you can depend on year after year is one of the core tenets of the Firefox security strategy. We put a lot of time and energy into making sure that the software you run has not been tampered with while being delivered to you.

In an effort to increase trust in Firefox, we regularly partner with external firms to verify the security of our products. Earlier this year, we hired X41 D-Sec GmbH to audit the mechanism by which Firefox ships updates, known internally as AUS, for Application Update Service. Today, we are releasing their report.

Four researchers spent a total of 27 days running a technical security review of both the backend service that manages updates (Balrog) and the client code that updates your browser. The scope of the audit included a cryptographic review of the update signing protocol, fuzzing of the client code, pentesting of the backend and manual code review of all components.

Mozilla Security continuously reviews and tests the security of Firefox, but external verification is a critical part of our operations security strategy. We are glad to say that X41 did not find any critical flaw in AUS, but they did find various issues ranging from low to high severity, as well as 21 side findings.

X41 D-Sec GmbH found the security level of AUS to be good. No critical vulnerabilities have been identified in any of the components. The most serious vulnerabilities that were discovered are a Cross-Site Request Forgery (CSRF) vulnerability in the administration web application interface that might allow attackers to trigger unintended administrative actions under certain conditions. Other vulnerabilities identified were memory corruption issues, insecure handling of untrusted data, and stability issues (Denial of Service (DoS)). Most of these issues were constrained by requiring to bypass cryptographic signatures.

Three vulnerabilities ranked as high, and all of them were located in the administration console of Balrog, the backend service of Firefox AUS, which is protected behind multiple factors of authentication inside our internal network. The extra layers of security effectively lower the risk of the vulnerabilities found by X41, but we fixed the issues they found regardless.

X41 found a handful of bugs in the C code that handles update files. Thankfully, the cryptographic signatures prevent a bad actor from crafting an update file that could impact Firefox. Here again, designing our systems with multiple layers of security has proven useful.

Finally, we’d like to thank X41 for their high quality work on conducting this security audit. And, as always, we invite you to help us keep Firefox secure by reporting issues through our bug bounty program.

At Mozilla, we want WebAssembly to be as fast as it can be. This started with its design, which gives it great throughput. Then we improved load times with a streaming baseline compiler. With this, we compile code faster than it comes over the network. Now, in the latest version of Firefox Beta, calls between JS and WebAssembly are faster than many JS to JS function calls. Here's how we made them fast - illustrated in code cartoons.

One of our big priorities is making it easy to combine JS and WebAssembly. But function calls between the two languages haven’t always been fast. In fact, they’ve had a reputation for being slow, as I talked about in my first series on WebAssembly.

That has changed: in the latest version of Firefox Beta, calls between JS and WebAssembly are faster than non-inlined JS to JS function calls. Hooray!

So these calls are fast in Firefox now. But, as always, I don’t just want to tell you that these calls are fast. I want to explain how we made them fast. So let’s look at how we improved each of the different kinds of calls in Firefox (and by how much).

But first, let’s look at how engines do these calls in the first place. (And if you already know how the engine handles function calls, you can skip to the optimizations.)

How do function calls work?

Functions are a big part of JavaScript code. A function can do lots of things, such as:

assign variables which are scoped to the function (called local variables)

use functions that are built-in to the browser, like Math.random

call other functions you’ve defined in your code

return a value
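
As a concrete stand-in (the post illustrates this with a code cartoon), here is a small sketch doing all four; the names are hypothetical:

    function rollDie(sides: number): number {
      const roll = Math.random() * sides; // a local variable, plus a built-in (Math.random)
      return Math.floor(roll) + 1;        // return a value
    }

    function rollTwice(sides: number): number {
      return rollDie(sides) + rollDie(sides); // call another function defined in our code
    }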

But how does this actually work? How does writing this function make the machine do what you actually want?

As I explained in my first WebAssembly article series, the languages that programmers use — like JavaScript — are very different than the language the computer understands. To run the code, the JavaScript we download in the .js file needs to be translated to the machine language that the machine understands.

Each browser has a built-in translator. This translator is sometimes called the JavaScript engine or JS runtime. However, these engines now handle WebAssembly too, so that terminology can be confusing. In this article, I’ll just call it the engine.

Each browser has its own engine:

Chrome has V8

Safari has JavaScriptCore (JSC)

Edge has Chakra

and in Firefox, we have SpiderMonkey

Even though each engine is different, many of the general ideas apply to all of them.

When the browser comes across some JavaScript code, it will fire up the engine to run that code. The engine needs to work its way through the code, going to all of the functions that need to be called until it gets to the end.

I think of this like a character going on a quest in a videogame.

Let’s say we want to play Conway’s Game of Life. The engine’s quest is to render the Game of Life board for us. But it turns out that it’s not so simple…

So the engine goes over to the next function. But the next function will send the engine on more quests by calling more functions.

The engine keeps having to go on these nested quests until it gets to a function that just gives it a result.

Then it can come back to each of the functions that it spoke to, in reverse order.

If the engine is going to do this correctly — if it’s going to give the right parameters to the right function and be able to make its way all the way back to the starting function — it needs to keep track of some information.

It does this using something called a stack frame (or a call frame). It’s basically like a sheet of paper that has the arguments to go into the function, says where the return value should go, and also keeps track of any of the local variables that the function creates.

The way it keeps track of all of these slips of paper is by putting them in a stack. The slip of paper for the function that it is currently working with is on top. When it finishes that quest, it throws out the slip of paper. Because it’s a stack, there’s a slip of paper underneath (which has now been revealed by throwing away the old one). That’s where we need to return to.

This stack of frames is called the call stack.

The engine builds up this call stack as it goes. As functions are called, frames are added to the stack. As functions return, frames are popped off of the stack. This keeps happening until we get all the way back down and have popped everything out of the stack.
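
To make that bookkeeping concrete, here is a toy model of the call stack in TypeScript. It is only a sketch of the idea; real engines store frames as raw memory, not JavaScript objects.

    type Frame = {
      fn: string;                     // which function this frame belongs to
      args: number[];                 // the arguments passed in
      locals: Record<string, number>; // local variables the function creates
    };

    const callStack: Frame[] = [];

    function enter(fn: string, args: number[]): Frame {
      const frame: Frame = { fn, args, locals: {} };
      callStack.push(frame); // a new slip of paper goes on top
      return frame;
    }

    function leave(): void {
      callStack.pop(); // throw the top slip away, revealing the caller's frame
    }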

So that’s the basics of how function calls work. Now, let’s look at what made function calls between JavaScript and WebAssembly slow, and talk about how we’ve made this faster in Firefox.

How we made WebAssembly function calls fast

With recent work in Firefox Nightly, we’ve optimized calls in both directions — both JavaScript to WebAssembly and WebAssembly to JavaScript. We’ve also made calls from WebAssembly to built-ins faster.

All of the optimizations that we’ve done are about making the engine’s work easier. The improvements fall into two groups:

Reducing bookkeeping — which means eliminating unnecessary work to organize stack frames

Cutting out intermediaries — which means taking the most direct path between functions

Let’s look at where each of these came into play.

Optimizing WebAssembly » JavaScript calls

When the engine is going through your code, it has to deal with functions that are speaking two different kinds of language—even if your code is all written in JavaScript.

Some of them—the ones that are running in the interpreter—have been turned into something called byte code. This is closer to machine code than JavaScript source code, but it isn’t quite machine code (and the interpreter does the work). This is pretty fast to run, but not as fast as it can possibly be.

Other functions — those which are being called a lot — are turned into machine code directly by the just-in-time compiler (JIT). When this happens, the code doesn’t run through the interpreter anymore.

So we have functions speaking two languages: byte code and machine code.

I think of these different functions which speak these different languages as being on different continents in our videogame.

The engine needs to be able to go back and forth between these continents. But when it does this jump between the different continents, it needs to have some information, like the place it left from on the other continent (which it will need to go back to). The engine also wants to separate the frames that it needs.

To organize its work, the engine gets a folder and puts the information it needs for its trip in one pocket — for example, where it entered the continent from.

It will use the other pocket to store the stack frames. That pocket will expand as the engine accrues more and more stack frames on this continent.

Sidenote: if you’re looking through the code in SpiderMonkey, these “folders” are called activations.

Each time it switches to a different continent, the engine will start a new folder. The only problem is that to start a folder, it has to go through C++. And going through C++ adds significant cost.

This is the trampolining that I talked about in my first series on WebAssembly.

Every time you have to use one of these trampolines, you lose time.

In our continent metaphor, it would be like having to do a mandatory layover on Trampoline Point for every single trip between two continents.

So how did this make things slower when working with WebAssembly?

When we first added WebAssembly support, we had a different type of folder for it. So even though JIT-ed JavaScript code and WebAssembly code were both compiled and speaking machine language, we treated them as if they were speaking different languages. We were treating them as if they were on separate continents.

This was unnecessarily costly in two ways:

it created an unnecessary folder, with the setup and teardown costs that come from that

it required trampolining through C++ (to create the folder and do other setup)

We fixed this by generalizing the code to use the same folder for both JIT-ed JavaScript and WebAssembly. It’s kind of like we pushed the two continents together, making it so you don’t need to leave the continent at all.

With this, calls from WebAssembly to JS were almost as fast as JS to JS calls.

We still had a little work to do to speed up calls going the other way, though.

Optimizing JavaScript » WebAssembly calls

Even in the case of JIT-ed JavaScript code, where JavaScript and WebAssembly are speaking the same language, they still use different customs.

Because JavaScript doesn’t have explicit types, types need to be figured out at runtime. The engine keeps track of the types of values by attaching a tag to the value.

It’s as if the JS engine put a box around this value. The box contains that tag indicating what type this value is. For example, the zero at the end would mean integer.

In order to compute the sum of two integers, a and b, the system needs to remove those boxes. It removes the box for a and then removes the box for b.

Then it adds the unboxed values together.

Then it needs to add that box back around the results so that the system knows the result’s type.

This turns what you expect to be 1 operation into 4 operations… so in cases where you don’t need to box (like statically typed languages) you don’t want to add this overhead.
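
Here is that idea in sketch form. Real engines use compact encodings (SpiderMonkey’s tagging is not literally an object like this), so treat it as a conceptual model only.

    type Tag = "int" | "double" | "object";
    type Boxed = { tag: Tag; payload: number };

    const box = (n: number): Boxed => ({ tag: "int", payload: n });

    function unbox(v: Boxed): number {
      if (v.tag !== "int") throw new TypeError("expected an int");
      return v.payload;
    }

    // a + b as the engine sees it: unbox, unbox, add, re-box. Four operations.
    function addBoxed(a: Boxed, b: Boxed): Boxed {
      return box(unbox(a) + unbox(b));
    }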

Sidenote: JavaScript JITs can avoid these extra boxing/unboxing operations in many cases, but in the general case, like function calls, JS needs to fall back to boxing.

This is why WebAssembly expects parameters to be unboxed, and why it doesn’t box its return values. WebAssembly is statically typed, so it doesn’t need to add this overhead. WebAssembly also expects values to be passed in at a certain place — in registers rather than the stack that JavaScript usually uses.

If the engine takes a parameter that it got from JavaScript, wrapped inside of a box, and gives it to a WebAssembly function, the WebAssembly function wouldn’t know how to use it.

So, before it gives the parameters to the WebAssembly function, the engine needs to unbox the values and put them in registers.

To do this, it would go through C++ again. So even though we didn’t need to trampoline through C++ to set up the activation, we still needed to do it to prepare the values (when going from JS to WebAssembly).

Going to this intermediary is a huge cost, especially for something that’s not that complicated. So it would be better if we could cut the middleman out altogether.

That’s what we did. We took the code that C++ was running — the entry stub — and made it directly callable from JIT code. When the engine goes from JavaScript to WebAssembly, the entry stub unboxes the values and places them in the right place. With this, we got rid of the C++ trampolining.

I think of this as a cheat sheet. The engine uses it so that it doesn’t have to go to the C++. Instead, it can unbox the values when it’s right there, going between the calling JavaScript function and the WebAssembly callee.

So that makes calls from JavaScript to WebAssembly fast.

But in some cases, we can make it even faster. In fact, we can make these calls even faster than JavaScript » JavaScript calls in many cases.

Even faster JavaScript » WebAssembly: Monomorphic calls

When a JavaScript function calls another function, it doesn’t know what the other function expects. So it defaults to putting things in boxes.

But what about when the JS function knows that it is calling a particular function with the same types of arguments every single time? Then that calling function can know in advance how to package up the arguments in the way that the callee wants them.

This is an instance of the general JS JIT optimization known as “type specialization”. When a function is specialized, it knows exactly what the function it is calling expects. This means it can prepare the arguments exactly how that other function wants them… which means that the engine doesn’t need that cheat sheet or the extra unboxing work.

This kind of call — where you call the same function every time — is called a monomorphic call. In JavaScript, for a call to be monomorphic, you need to call the function with the exact same types of arguments each time. But because WebAssembly functions have explicit types, calling code doesn’t need to worry about whether the types are exactly the same — they will be coerced on the way in.

If you can write your code so that JavaScript is always passing the same types to the same WebAssembly exported function, then your calls are going to be very fast. In fact, these calls are faster than many JavaScript to JavaScript calls.
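
In practice that looks like the sketch below. The module bytes and the exported function name (add) are hypothetical; the point is that the call site passes the same types on every call.

    async function run(wasmBytes: BufferSource): Promise<number> {
      const { instance } = await WebAssembly.instantiate(wasmBytes);
      const add = instance.exports.add as (a: number, b: number) => number;

      let total = 0;
      for (let i = 0; i < 1_000_000; i++) {
        total = add(total, i); // always (number, number): the call stays monomorphic
      }
      return total;
    }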

Future work

There’s only one case where an optimized call from JavaScript » WebAssembly is not faster than JavaScript » JavaScript. That is when JavaScript has in-lined a function.

The basic idea behind in-lining is that when you have a function that calls the same function over and over again, you can take an even bigger shortcut. Instead of having the engine go off to talk to that other function, the compiler can just copy that function into the calling function. This means that the engine doesn’t have to go anywhere — it can just stay in place and keep computing.

I think of this as the callee function teaching its skills to the calling function.

This is an optimization that JavaScript engines make when a function is being run a lot — when it’s “hot” — and when the function it’s calling is relatively small.
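
Conceptually, inlining is the transformation sketched below: the first version pays for a call on every iteration, while the second behaves as if the JIT had copied the callee’s body into the loop. (This illustrates the idea; it is not actual JIT output.)

    function square(x: number): number {
      return x * x;
    }

    // Before inlining: every iteration jumps into square().
    function sumSquares(n: number): number {
      let total = 0;
      for (let i = 0; i < n; i++) total += square(i);
      return total;
    }

    // After inlining, the engine effectively runs this: no call at all.
    function sumSquaresInlined(n: number): number {
      let total = 0;
      for (let i = 0; i < n; i++) total += i * i;
      return total;
    }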

We can definitely add support for in-lining WebAssembly into JavaScript at some point in the future, and this is a reason why it’s nice to have both of these languages working in the same engine. This means that they can use the same JIT backend and the same compiler intermediate representation, so it’s possible for them to interoperate in a way that wouldn’t be possible if they were split across different engines.

Optimizing WebAssembly » Built-in function calls

There was one more kind of call that was slower than it needed to be: when WebAssembly functions were calling built-ins.

Built-ins are functions that the browser gives you, like Math.random. It’s easy to forget that these are just functions that are called like any other function.

Sometimes the built-ins are implemented in JavaScript itself, in which case they are called self-hosted. This can make them faster because it means that you don’t have to go through C++: everything is just running in JavaScript. But some functions are just faster when they’re implemented in C++.

Different engines have made different decisions about which built-ins should be written in self-hosted JavaScript and which should be written in C++. And engines often use a mix of both for a single built-in.

In the case where a built-in is written in JavaScript, it will benefit from all of the optimizations that we have talked about above. But when that function is written in C++, we are back to having to trampoline.

These functions are called a lot, so you do want calls to them to be optimized. To make them faster, we’ve added a fast path specific to built-ins. When you pass a built-in into WebAssembly, the engine sees that what you’ve passed it is one of the built-ins, at which point it knows how to take the fast path. This means you don’t have to go through the trampoline that you would otherwise.

It’s kind of like we built a bridge over to the built-in continent. You can use that bridge if you’re going from WebAssembly to the built-in. (Sidenote: The JIT already did have optimizations for this case, even though it’s not shown in the drawing.)

With this, calls to these built-ins are much faster than they used to be.
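
Concretely, “passing a built-in into WebAssembly” happens through the import object. A hedged sketch follows; the module bytes and import names are hypothetical.

    async function instantiate(wasmBytes: BufferSource) {
      const imports = {
        env: {
          // Passing Math.sin itself lets the engine recognize a built-in and take
          // the fast path; wrapping it in an ordinary JS function would not.
          sin: Math.sin,
        },
      };
      const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
      return instance;
    }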

Future work

Currently, this fast path is mostly limited to the math built-ins. That’s because WebAssembly currently only has support for integers and floats as value types.

That works well for the math functions because they work with numbers, but it doesn’t work out so well for other things like the DOM built-ins. So currently when you want to call one of those functions, you have to go through JavaScript. That’s what wasm-bindgen does for you.

But WebAssembly is getting more flexible types very soon. Experimental support for the current proposal is already landed in Firefox Nightly behind the pref javascript.options.wasm_gc. Once these types are in place, you will be able to call these other built-ins directly from WebAssembly without having to go through JS.

The infrastructure we’ve put in place to optimize the Math built-ins can be extended to work for these other built-ins, too. This will ensure many built-ins are as fast as they can be.

But there are still a couple of cases where you will need to go through JavaScript: for example, built-ins that are called with new, or built-ins that are accessed through a getter or setter. These remaining built-ins will be addressed with the host-bindings proposal.

Conclusion

So that’s how we’ve made calls between JavaScript and WebAssembly fast in Firefox, and you can expect other browsers to do the same soon.

Thank you

Thank you to Benjamin Bouvier, Luke Wagner, and Till Schneidereit for their input and feedback.

Lin Clark, Mozilla Hacks – the Web developer blog, 2018-10-08

Close Conversation is the Future of Social VR


In many user experience (UX) studies, the researchers give the participants a task and then observe what happens next. Most research participants are earnest and usually attempt to follow instructions. However, in this study, research participants mostly ignored the instructions and, once they entered the immersive space, just started goofing off with each other and testing the limits of embodiment.

The goal of this blog post is to share insights from the Hubs by Mozilla usability study that other XR creators could apply to building a multi-user space.

The Extended Mind recruited pairs of people who communicate online with each other every day, which led to testing Hubs with people who have very close connections. There were three pairs of romantic partners in the study, one pair of roommates, and one set of high school BFFs. The Extended Mind recruited relatively intimate pairs of people because they wanted to understand the potential of Hubs as a communication platform for people who already have good relationships. They also believe this yielded more insight into how people would use Hubs in a natural setting than bringing in one person at a time and asking that person to hang out in VR with a stranger they had just met.

The two key insights that this blog post will cover are the ease of conversation that people had in Hubs and the playfulness that they embodied when using it.

Conversation Felt Natural

When people entered Hubs, the first thing they would do was look around to find the other person in the space. Regardless of whether they were on mobile, laptop, tablet, or in a VR headset, their primary goal was to connect. Once they located the other person, they immediately gave their impressions of the other person’s avatar and asked their companion what they themselves looked like. There was an element of fun in finding the other person and then discussing avatar appearances, including one romantic partner sincerely telling his companion:

“You are adorable,”

…which indicates that his warm feelings for her in the real world easily translated to her avatar.

The researchers created conversational prompts for all of the research participants, such as “Plan a potential vacation together,” but participants ignored the instructions and just talked about whatever caught their attention. Mostly, people were self-directed in exploring their capabilities in the environment and wanted to communicate with their companion. They relished having visual cues from the other person and experiencing embodiment:

“Having a hand to move around felt more connected. Especially when we both had hands.”

“It felt like we were next to each other.”

The youngest participants in the study were in their early twenties and stated that they avoided making phone calls. They rated Hubs more highly than a phone conversation due to the improved sense of connection it gave them.

[Hubs is] “better than a phone call.”

Some even considered it superior to texting for self-expression:

“Texting doesn’t capture our full [expression]”

The data from this study shows that communication using 2D devices and VR headsets has strong potential for personal conversation among friends and partners. People appeared to feel strong connections with their partners in the space. They wanted to revisit the space in the future with groups of close friends and share it with them as well.

Participants Had Fun

Because participants felt comfortable in the space and confident in their ability to express themselves, they relaxed during the testing session and let their sense of humor show through.

The researchers observed a lot of joke-telling and goofiness from people. A consequence of feeling embodied in the VR headset was that people acted in ways meant to entertain their companion:

“Physical humor works here.”

Users also discovered that Hubs has a rubber duck mascot that quacks and replicates itself when clicked. Playing with the duck was very popular.

“The duck makes a delightful sound.”

“Having things to play with is good.”

[Image: the rubber ducks multiplying quickly]

A future research question could be to determine the right balance between giving people something like the duck as a fidget activity and offering a formal board game or card game. The lack of formality in Hubs appeared to actually bolster the storytelling aspects that users brought to it. Two users established a whole rubber-duck Law & Order-style TV show, giving the ducks roles:

“Good cop duckie, bad cop duckie.”

People either forgot or ignored the researchers’ instructions to plan a vacation or follow other prompts because they were immersed in the fun and connection together. However, watching the users tell each other stories and experiment in the space was more entertaining and led to more insights.

While it wasn’t actually tested in this study, there are ways to add media and GIFs to Hubs to further enhance communication and comedy.

Summary: A Private Space That Lets People Be Themselves

The Extended Mind believes that the privacy of the Hubs space bolstered people’s intimate experiences. Because people must have a unique URL to gain access, the number of people in the room was limited. That gave people a sense of control and likely led to them feeling comfortable experimenting with the layers of embodiment and having fun with each other.

The next blog post will cover additional insights about how the different environments in Hubs impacted participants’ behavior and what other XR creators can apply to their own work.

This article is part two of the series that reviews the user testing conducted on Mozilla’s social XR platform, Hubs. Mozilla partnered with Jessica Outlaw and Tyesha Snow of The Extended Mind to validate that Hubs was accessible, safe, and scalable. The goal of the research was to generate insights about the user experience and deliver recommendations of how to improve the Hubs product.

To read part one of the blog series overview, which focused on accessibility, click here.

Jessica Outlaw, Mozilla Mixed Reality Blog, 2018-10-06

Drawing and Photos, now in Hubs

Two new features that will further enrich the ways you can connect and collaborate in rooms you create in Hubs: drawing and easy photo uploads.

As we covered in our last update, we recently added the ability for you to bring images, videos, and 3D models into the rooms you create in Hubs. This is a great way to bring content to view together in your virtual space, and it all works right in your browser.

We’re excited to announce two new features today that will further enrich the ways you can connect and collaborate in rooms you create in Hubs: drawing and easy photo uploads.

Hubs now has a pen tool you can use at any time to start drawing in 3D space. This is a great way to express ideas, spark your creativity, or just doodle around. You can draw by holding the pen in your hand if you are in Mixed Reality, or draw using your PC’s mouse or trackpad.

The new pen tool shines when combined with our media support. You can draw on images together or make a 3D sketch on top of a model from Sketchfab. You can also draw all over the walls if you want!

You can easily change the size and color of your pen strokes. You can write out text or even model out a rough 3D sketch.

If you’re using a phone, we’ve also added an easy way to quickly upload photos or take a snapshot with your phone’s camera. Just tap the photos button at the bottom of the screen to jump right into a photo picker.

This is a great way to share photos from your library or take a quick picture of something nearby. Selfies can be fun too, but don’t be surprised if people draw on your photo!

We hope you have fun with these new features. As always, please join us in the #social channel on the WebVR Slack or file a GitHub issue if you have feedback!

Kevin Lee, Mozilla Mixed Reality Blog, 2018-10-05

WebRender newsletter #24


Hi there, this is your twenty-fourth WebRender newsletter. A lot of work is in progress this week, so the change list is pretty short. To compensate, I added a list of noteworthy ongoing work which hasn’t landed yet but will probably land soon, and which gives a rough idea of what’s keeping us busy.

Without further ado,

Notable WebRender and Gecko changes

Bobby improved WebRender’s code documentation.

Jeff fixed a crash.

Kats fixed a bug that was causing issues with parallax-scrolling effects.


WebPush does more than let you know you’ve got an upcoming calendar appointment or bug you about subscribing to a site’s newsletter (particularly one you just visited and have zero interest in doing). Turns out that WebPush is a pretty good way for us to do a number of things as well. Things like let you send tabs from one install of Firefox to another, or push out important certificate updates. We’ll talk about those more when we get ready to roll them out, but for now, we need to know if some of the key bits work.
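For readers who haven’t used it, this is roughly what the standard page-side Push API flow looks like (a sketch; the service worker path and the VAPID key are placeholders):

    // Register a service worker, then subscribe it to push messages.
    navigator.serviceWorker.register('/sw.js')
      .then((registration) => registration.pushManager.subscribe({
        userVisibleOnly: true,
        applicationServerKey: vapidPublicKey, // assumption: your server's public key
      }))
      .then((subscription) => {
        console.log('push endpoint:', subscription.endpoint);
      });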

One of the things we need to test is whether our WebPush servers are up to the job of handling traffic, or whether there are any weird issues we might not have thought of. We’ve run tests, we’ve simulated loads, but honestly, nothing compares to real life for this sort of thing.

In the coming weeks, we’re going to be running an experiment. We’ll be using the Shield service to have your browser set up a web push connection. No data will go over that connection aside from the minimal communication that we need. It shouldn’t impact how you use Firefox, or annoy you with pop-ups. Chances are, you won’t even notice we’re doing this.

Why are we telling you, if it’s something you wouldn’t notice? We like to be open and clear about things. You might see a reference to “dom.push.alwaysConnect” in about:config and wonder what it means. Shield lets us flip that switch and gives us control over how many folks hit our servers at any given time. That’s important when you want to test your server and things don’t go as planned.

In this case, “dom.push.alwaysConnect” will ask your browser to open a connection to our servers, so we can test whether our servers can handle the load. Why do it this way instead of a load test? It turns out that trying to load test this effectively is problematic: it’s hard to duplicate “real world” load and all the issues that come with it. This test will help us make sure that things don’t fall over when we make this a full feature. When that configuration flag is set to “true”, your browser will try to connect to our push servers.

You can always opt out of the study, if you want, but we hope that you don’t mind being part of this. The more folks we have, and the more diverse the group, the more certain we can be that our servers are up for the challenge of keeping you safer and more in control.

JR Conlin, Mozilla Services, 2018-10-03

A New Way to Support MDN

MDN’s user base has grown exponentially in the last few years, so we are seeking support from our users to help accelerate content and platform development.

Starting this week, some visitors may notice something new on the MDN Web Docs site, the comprehensive resource for information about developing on the open web.

We are launching an experiment on MDN Web Docs, seeking direct support from our users in order to accelerate growth of our content and platform. Not only has our user base grown exponentially in the last few years (with corresponding platform maintenance costs), we also have a large list of cool new content, features, and programs we’d like to create that our current funding doesn’t fully cover.

In 2015, on our tenth anniversary (read about MDN’s evolution in the 10-year anniversary post), MDN had four million active monthly users. Now, just three years later, we have 12 million. Our last big platform update was in 2013. By asking for, and hopefully receiving, financial assistance from our users – which will be reinvested directly into MDN – we aim to speed up the modernization of MDN’s platform and offer more of what you love: content, features, and integration with the tools you use every day (like VS Code, Dev Tools, and others), plus better support for the 1,000+ volunteers contributing content, edits, tooling, and coding to MDN each month.

Currently, MDN is wholly funded by Mozilla Corporation, and has been since its inception in 2005. The MDN Product Advisory Board, formed in 2017, provides guidance and advice but not funding. The MDN board will never be pay-to-play, and although member companies may choose to sponsor events or other activities, sponsorship will never be a requirement for participation. This payment experiment was discussed at the last MDN board meeting and received approval from members.

Starting this week, approximately 1% of MDN users, chosen at random, will see a promotional box in the footer of MDN asking them to support MDN through a one-time payment.

[Image: banner placement on MDN]

Clicking on the “Support MDN” button will open the banner and allow you to enter payment information.

[Image: payment page on MDN]

If you don’t see the promotional banner on MDN and want to express your support, or read the FAQs, you can go directly to the payment page.

Because we want to keep things fully transparent, we’ll report how we spend the money on a monthly basis on MDN, so you can see what your support is paying for. We hope that, through this program, we will create a tighter, healthier loop between our audience (you), our content (written for and by you), and our supporters (also, you, again).

Throughout the next couple of months, and into 2019, we plan to roll out additional ways for you to engage with and support MDN. We will never put the existing MDN Web Docs site behind a paywall. We recognize the importance of this resource for the web and the people who work on it.


The HTTP Referrer Value

Navigating from one webpage to another, or requesting a sub-resource within a webpage, causes the web browser to send the top-level URL in the HTTP referrer field. Inspecting that HTTP header field on the receiving end allows sites to identify where the request originated, which enables them to log referrer data for operational and statistical purposes. As one can imagine, the top-level URL quite often includes sensitive user information, which can then leak through the referrer value, impacting an end user’s privacy.
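As a quick illustration, the receiving page can read the value that was sent (a sketch; the URLs are hypothetical):

    // Running on the page the user navigated to:
    console.log(document.referrer);
    // e.g. "https://social.example/messages/private-thread?id=12345"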

The Referrer Policy

To compensate, the HTTP Referrer Policy allows webpages to gain more control over the referrer values sent from their site. For example, a Referrer Policy of “origin” instructs the web browser to strip any path information and fill the HTTP referrer field with only the origin of the requesting webpage instead of the entire URL. More aggressively, a Referrer Policy of “no-referrer” advises the browser to suppress the referrer value entirely. Ultimately, the Referrer Policy gives website authors more control over the referrer value that is sent, and hence provides them a tool to respect an end user’s privacy.
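Script can also apply a policy per request, using the standard referrerPolicy option to fetch (a sketch; the URLs are hypothetical):

    // Suppress the referrer entirely for this request:
    fetch('https://analytics.example/collect', { referrerPolicy: 'no-referrer' });

    // Or send only the origin, stripping path and query:
    fetch('https://api.example/data', { referrerPolicy: 'origin' });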

Expanding the Referrer Policy to CSS

While Firefox has supported Referrer Policy since Firefox 50, we are happy to announce that Firefox will expand policy coverage and support Referrer Policy within style sheets starting in Firefox 64. With that update in coverage, requests originating from within style sheets will also respect a site’s Referrer Policy, ultimately contributing a cornerstone to a more privacy-respecting internet.