March 31, 2015

At the beginning of Q1, I set a goal to investigate races with Thread Sanitizer and to fix the “top” 10 races discovered with the tool. Ten races seemed like a conservative number; we didn’t know how many races there were, their impact, or how difficult fixing them would be. We also weren’t sure how much participation we could count on from area experts, and it might have turned out that I would spend a significant amount of time figuring out what was going on with the races and fixing them myself.

I’m happy to report that according to Bugzilla, nearly 30 of the races reported this quarter have been fixed. Folks in the networking stack, JavaScript GC, JavaScript JIT, and ImageLib have been super-responsive in addressing problems. There are even a few bugs in the afore-linked query that have been fixed by folks on the WebRTC team running TSan or similar thread-safety tools (Clang’s Thread Safety Analysis, to be specific) to detect bugs themselves, which is heartening to see. And it’s also worth mentioning that at least one of the unfixed DOM media bugs detected by TSan has seen some significant thread-safety refactoring work in its dependent bugs. All this attention to data races has been very encouraging.

I plan on continuing the TSan runs in Q2 along with getting TSan-on-Firefox working with more-or-less current versions of Clang. Having to download specific Subversion revisions of the tools or precompiled Clang 3.3 (!) binaries to make TSan work is discouraging, and that could obviously work much better.
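
For context, with a current Clang the whole setup should reduce to a couple of compiler flags; here is a minimal sketch (the file name is a placeholder):

```sh
# Build with ThreadSanitizer instrumentation; recent Clang releases ship
# the TSan runtime, so no special binaries or Subversion checkouts needed.
clang++ -fsanitize=thread -g -O1 racy.cpp -o racy
./racy   # data races are reported on stderr at runtime
```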

Buildduty is the friendly face of Mozilla release engineering that contributors see first. Whether you need a production machine to debug a failure, your try server push is slow, or hg is on the fritz, we’re the ones who dig in, help, and find out why. The buildduty role is almost entirely operational and interrupt-driven: we respond to requests as they come in.

We think it’s important for everyone in release engineering to rotate through the role so they can see how the various systems interact (and fail!) in production. It also allows them to forge the relationships with other teams — developers, sheriffs, release management, developer services, IT — necessary to fix failures when they occur. The challenge has been finding a suitable tenure for buildduty that allows us to make quantifiable improvements in the process.

Originally the tenure for buildduty was one week. This proved to be too short, and often conflicted with other work. Sometimes a big outage would make it impossible to tackle any other buildduty tasks in a given week. Some people were more conscientious than others about performing all of the buildduty tasks. Work tended to pile up until one of those conscientious people cycled through buildduty again. There were enough people on the team that each person might not be on buildduty more than once a quarter. One week was not long enough to become proficient at any of the buildduty tasks. We made almost no progress on process during this time, and our backlog of work grew.

In September of last year, we made buildduty a quarter-long (3 month) commitment. This made it easy to plan quarterly goals for the people involved in buildduty, but also proved hard to swallow for release engineers who were more used to doing development work than operational work. 3 months was too long, and had the potential to burn people out.

One surprising development was that even though the duration of buildduty was longer, it didn’t necessarily translate to process improvements. The volume of interrupts has been quite high over the past 6 months, so despite some standout innovations in machine health monitoring and reconfig automation, many buildduty focus areas still lack proper tooling.

Now we’re trying something different.

For the next 3 months, we’ve changed the buildduty tenure to be one month. This will allow more release engineers to rotate through the position more quickly, but hopefully still give them each enough time in the role to become proficient.

To address the tooling deficiency, we also created an adjunct role called “buildduty tools.” The buildduty person from one month will automatically rotate into the buildduty tools role for the following month. While in the buildduty tools role, you assist the front-line buildduty person as required, but primarily you write tools or fix bugs that you wish had existed when you were doing the front-line support the month before.

Callek will be covering afternoons PT for Massimo in April because, honestly, that’s when most of the action happens anyway, and it would be irresponsible to not have coverage during that part of the day.

Massimo starts his buildduty tenure tomorrow. It’s hard but rewarding work. Please be gentle as he finds his feet.

New versions of Firefox for Windows, Mac, Linux and Android are now available to update or download. For more info on what’s new in Firefox, please see the release notes for Firefox and Firefox for Android.

Introduction

Edison performed 10,000 failed experiments before successfully creating the long-lasting electrical light bulb. While Edison meticulously kept a list of failed experiments, a wider dissemination of earlier failures might have led to a quicker invention of the bulb and related technologies. Scientists learn a great deal from their own mistakes, as well as from the mistakes of others. The pervasive use of computing in science, or “e-science,” is fraught with complexity and is extremely sensitive to technical difficulties, leading to many missteps and mistakes. Our new workshop intends to treat this as a first-class problem, by focusing on the hard cases where computing broke down. We believe that these computational processes or experiments that yielded negative results can be a source of information for others to learn from.

Why it’s time for this workshop

Publicizing negative results leads to quicker and more critical evaluation of new techniques, tools, technologies, and ideas by the community.

Negative results and related issues are real and happen frequently. A publication bias towards positive results hurts progress since not enough people learn from these experiences.

We want to get something valuable out of failed experiments and processes; this redeems the costs, time and agony invested. Analysis of these failures helps narrow down possible causes and hastens progress.

We want to promote a culture of accepting, analyzing, communicating and learning from negative results.

The ERROR Workshop

The 1st E-science ReseaRch leading tO negative Results (ERROR) workshop (https://press3.mcs.anl.gov/errorworkshop), to be held in conjunction with the 11th IEEE International Conference on eScience (http://escience2015.mnm-team.org) on 3 September 2015 in Munich, Germany, will provide the community a dedicated and active forum for exchanging cross-discipline experiences in research leading to negative results.

The ERROR workshop aims to provide a forum for researchers who have invested significant effort in a piece of work that failed to bear the expected fruit. The focus is on various aspects of negative results, such as the premises and assumptions made, divergence between expected and actual outcomes, possible causes and remedial actions to avoid or prevent such situations, and possible course corrections. Both applications and systems areas are covered, including topics in research methodology, reproducibility, the applications/systems interface, resilience, fault tolerance and social problems in computational science, and other relevant areas. We invite original work in an accessible format of 8 pages.

The nature of Funsize is that we may start hundreds of jobs at the same time,
then stop sending new jobs and wait for hours. In other words, the service is
very bursty. Elastic Beanstalk is not ideal for this use case. Scaling up
and down very fast is hard to configure using EB-only tools. Also, running zero
instances is not easy.

I tried using Terraform, Cloud Formation and Auto Scaling, but they were
also not well suited. There were too many constraints (e.g. Terraform doesn't
support all needed AWS features) and they required considerable bespoke
setup/maintenance to auto-scale properly.

The next option was Taskcluster, and I was pleased that its design fitted our
requirements very well! I was impressed by the simplicity and flexibility
offered.

I have implemented a service which consumes Pulse messages for particular
buildbot jobs. For nightly builds, it schedules a task graph with three
tasks:

All tasks are run inside Docker containers which are published on the
docker.com registry (other registries can also be used). The task definition
essentially comprises the docker image name and a list of commands it should
run (usually this is a single script inside a docker image). In the same task
definition you can specify what artifacts should be published by Taskcluster.
The artifacts can be public or private.
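
For illustration, a minimal docker-worker task definition might look roughly like this; the field names follow the Taskcluster docs of the time, while the image, command and paths are made-up placeholders:

```json
{
  "provisionerId": "aws-provisioner",
  "workerType": "funsize",
  "created": "2015-03-31T00:00:00.000Z",
  "deadline": "2015-03-31T04:00:00.000Z",
  "metadata": {
    "name": "Example partial-update task",
    "description": "Runs a single script inside a docker image",
    "owner": "someone@example.com",
    "source": "https://example.com/scheduler"
  },
  "payload": {
    "image": "example/funsize-worker:latest",
    "command": ["/bin/bash", "-c", "/home/worker/bin/run-task.sh"],
    "maxRunTime": 3600,
    "artifacts": {
      "public/target.mar": {
        "type": "file",
        "path": "/home/worker/artifacts/target.mar",
        "expires": "2015-04-30T00:00:00.000Z"
      }
    }
  }
}
```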

Things that I really liked

Predefined task IDs. This is a great idea! There is no need to
talk to the Taskcluster APIs to get the ID (or multiple IDs for task
graphs) nor need to parse the response. Fire and forget! The task IDs
can be used in different places, like artifact URLs, dependent tasks,
etc.

Task graphs. This is basically a collection of tasks that can be
run in parallel and can depend on each other. This is a nice way to
declare your jobs and know them in advance. If needed, a task graph
can be extended dynamically by its own tasks (decision tasks).

Simplicity. All you need to do is generate a valid JSON document and
submit it to Taskcluster using the HTTP API (see the sketch just
after this list).

User defined docker images. One of the downsides of Buildbot is that you have a predefined list of slaves
with predefined environments (OS, installed software, etc). Taskcluster
leverages Docker by default to let you use your own images.
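
To make the last two points concrete, here is a rough sketch in Node.js of creating a task with a predefined ID (a sketch only: the task body is hypothetical, a fetch implementation is assumed, and real calls also need Taskcluster credentials, which I omit here):

```js
// Sketch: pick the task ID ourselves with slugid (the ID generator
// Taskcluster uses), then create the task with a single HTTP call.
var slugid = require('slugid');

var taskId = slugid.v4();   // known before the task exists, so it can be
                            // baked into artifact URLs, dependent tasks, etc.
var task = {
  // a task definition like the one sketched earlier
};

fetch('https://queue.taskcluster.net/v1/task/' + taskId, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(task)   // real requests also need auth headers
}).then(function (res) {
  console.log('createTask status:', res.status);
});
```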

Things that could be improved

Encrypted variables. I spent 2-3 days fighting with the encrypted
variables. My scheduler was written in Python, so I tried to use a
half dozen different Python PGP libraries, but for some reason all of
them were generating an incompatible OpenPGP format that Taskcluster
could not understand. This forced me to rewrite the scheduling part
in Node.js using openpgpjs. There is a bug to address
this problem globally. Also, using ISO time stamps would have
saved me hours of time. :)

It would be great to have a generic scheduler that doesn't require
third-party Taskcluster consumers to write their own daemons watching
for changes (AMQP, VCS, etc.) to generate tasks. This would lower the
entry barrier for beginners.

Conclusion

There are many other things that can be improved (and I believe they
will be!) - Taskcluster is still a new project. Regardless, it is
very flexible, easy to use and develop for. I would recommend using it!

Mozilla’s MDN project envisions a world where software development begins with web development — a world where developers build for the web by default. That is our great ambition, an audacious summary of MDN’s raison d’être. I discussed MDN’s business strategy at length in an earlier post. In this post I will talk about MDN’s product strategy.

Several posts ago I described MDN as a product ecosystem — “…a suite of physical products that fits into a bigger ecosystem that may entail services and digital content and support”, to use one designer’s words. The components of MDN’s product ecosystem — audience, contributors, platform, products, brand, governance, campaigns, and so forth — are united by a common purpose: to help people worldwide understand and build things with standard web technologies.

The Future

The efforts we undertake now must help people years hence. But projecting even a few years into the future of technology is … challenging, to say the least. It is also exactly what MDN’s product vision must do. So what does the future look like?

The three MDN products under heavy development now — the mature Reference product (MDN’s documentation wiki) and the new Services and Learning products — will evolve to meet this future:

Reference

In the future, web developers will still need a source of truth about open web technology development. MDN’s last 10 years of success have established it as that source of truth. Millions of web developers choose MDN over other online sources because, as a reference, MDN is more authoritative, more comprehensive, and more global. The vision of our Reference product is to use the power of MDN to build the most accessible, authoritative source of information about standard and emerging web technologies for web developers.

Services

In the future, MDN will still be an information resource, but its information will take different shapes depending on how and where it is accessed. MDN may look like the present-day reference product when it is rendered in a web browser; but sometimes it may render in a browser’s developer tools, in a pluggable code editor, or elsewhere. In those instances the information presented may be more focused on the things developers commonly need while coding. Some developers may still access MDN via search results; others will get something from MDN the moment they need it, in the context where it is most helpful. MDN’s articles will be used for learning and understanding; but subsets of MDN’s information may also power automation that enhances productivity and quality. These new uses all share one characteristic: They bring MDN’s information closer to developers through a service architecture. The vision for our Services products is to bring the power of MDN directly into professional web developers’ daily coding environments.

Learning

The future’s web developers are learning web development right now. MDN’s present-day material is already essential to many of them even though it is aimed at a more advanced audience. MDN’s new learning content and features will deliver beginner-focused content that authoritatively covers the subjects essential to becoming a great web developer. Unlike many other online learning tools, MDN needn’t keep learners inside the platform: We can integrate with any 3rd-party tool that helps learners become web developers, and we can create opportunities for web developers to learn from each other in person. The vision for our Learning products is to use the power of MDN to teach web development to future web developers.

The power of MDN

The success of all three products depends on something I call above “the power of MDN” — something that sets MDN apart from other sources of information about web development.

I have previously described information about web development as an “oral tradition”. Web development is a young, complex and constantly changing field. It is imperfectly documented in innumerable blogs, forums, Q&A sites and more. MDN’s unique power is its ability to aggregate the shared experience of web developers worldwide into an authoritative catalog of truth about web technologies.

This aspect of MDN is constant: We carry it with us into the future, come what may. For any MDN product to succeed at scale it must implement a contribution pathway that allows web developers to directly contribute their knowledge about web development. MDN’s products advance the field of web development at a global scale by sharing essential information discovered through the collective experience of the world’s web developers.

Together we are advancing the field. In 10 years web development will be concerned with new questions and new challenges, thanks to the state of the art that MDN aggregates and promulgates. We build on what we know; we share what we know on MDN. Here’s to another 10 years!

Last December in Portland, I said that Mozilla needs a more ambitious stance on how we teach the web. My argument: the web is at an open vs. closed crossroads, and helping people build know-how and agency is key if we want to take the open path. I began talking about Mozilla needing to do something in ‘learning’ in ways that can have the scale and impact of Firefox if we want this to happen.

The question is: what does this look like? We’ve begun talking about developing a common approach and brand for all our learning efforts: something like Mozilla University or Mozilla Academy. And we have a Mozilla Learning plan in place this year to advance our work on Webmaker products, Mozilla Clubs (aka Maker Party all year round), and other key building blocks. But we still don’t have a crisp and concrete vision for what all this might add up to. The idea of a global university or academy begins to get us there.

My task this quarter is to take a first cut at this vision — a consolidated approach for Mozilla’s efforts in learning. My plan is to start a set of conversations that get people involved in this process. The first step is to start to document the things we already know. That’s what this post is.

Within 10 years there will be five billion citizens of the web. Mozilla wants all of these people to know what the web can do. What’s possible. We want them to have the agency, tools and know-how they need to unlock the full power of the web. We want them to use the web to make their lives better. We want them to be full citizens of the web.

We wrote this paragraph right before Portland. I’d be interested to hear what people think about it a few months on.

What do we want to build?

The thing is, even if we agree that we want everyone to know what the web can do, we may not yet agree on how we get there. My first cut at what we need to build is this:

By 2017, we want to build a Mozilla Academy: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla to unlock the power of the web for themselves, their organizations and the world.

This language is more opinionated than what’s in the Mozilla Learning plan: it states we want a global classroom and lab. And it suggests a name.

Andrew Sliwinski has pointed out to me that this presupposes we want to engage primarily with people who want to learn. And, that we might move toward our goals in other ways, including using our product and marketing to help people ‘just figure the right things out’ as they use the web. I’d like to see us debate these two paths (and others) as we try to define what it is we need to build. By the way, we also need to debate the name — Mozilla Academy? Mozilla University? Something else?

What do we want people to know?

We’re fairly solid on this part: we want people to know that the web is a platform that belongs to all of us and that we can all use to do nearly anything.

We’ve spent three years developing Mozilla’s web literacy map to describe exactly what we mean by this. It breaks down ‘what we want people to know’ into three broad categories:

Exploring the web safely and effectively

Building things on the web that matter to you and others

Participating on the web as a critical, collaborative human

Helping people gain this know-how is partly about practical skills: understanding enough of the technology and mechanics of the web so they can do what they want to do (see below). But it is also about helping people understand that the web is based on a set of values — like sharing information and human expression — that are worth fighting for.

How do we think people learn these things?

Over the last few years, Mozilla and our broader network of partners have been working on what we might call ‘open source learning’ (my term) or ‘creative learning’ (Mitch Resnick’s term, which is probably better :)). The first principles of this approach include:

Learn by making things

Make real shit that matters

Do it with other people (or at least with others nearby)

There is another element though that should be manifested in our approach to learning, which is something like ‘care about good’ or even ‘care about excellence’ — the idea that people have a sense of what to aspire to and feedback loops that help them know if they are getting there. This is important both for motivation and for actually having the impact on ‘what people know’ that we’re aiming for.

My strong feeling is that this approach needs to be at the heart of all Mozilla’s learning work. It is key to what makes us different than most people who want to teach about the web — and will be key to success in terms of impact and scale. Michelle Thorne did a good post on how we embrace these principles today at Mozilla. We still need to have a conversation about how we apply this approach to everything we do as part of our broader learning effort.

How much do we want people to know?

Ever since we started talking about learning five years ago, people have asked: are you saying that everyone on the planet should be a web developer? The answer is clearly ‘no’. Different people need — and want — to understand the web at different levels. I think of it like this:

Literacy: use the web and create basic things

Skill: know-how that gets you a better job / makes your life better

Craft: expert knowledge that you hone over a lifetime

There is also a piece that includes ‘leadership’ — a commitment and skill level that has you teaching, helping, guiding or inspiring others. This is a fuzzier piece, but very important and something we will explore more deeply as we develop a Mozilla Academy.

We want a way to engage with people at all of these levels. The good news is that we have the seeds of an approach for each. SmartOn is an experiment by our engagement teams to provide mass-scale web literacy in-product and through marketing. Mozilla Clubs, Maker Party and our Webmaker Apps offer deeper web literacy and basic skills. MDN and others are thinking about teaching web developer skills and craft. Our fellowships do the same, although they use a lab method rather than teaching. What we need now is a common approach and brand like Mozilla Academy that connects all of these activities and speaks to a variety of audiences.

What do we have?

It’s really worth making this point again: we already have much of what we need to build an ambitious learning offering. Some of the things we have or are building include:

We also have an increasingly good reputation among people who care about and fund learning, education and empowerment programs. Partners like MacArthur Foundation, UNESCO, the National Writing Project and governments in a bunch of countries. Many of these organizations want to work with us to build — and be a part of — a more ambitious approach to teaching people about the web.

What other things are we thinking about?

In addition to the things we have in hand, people across our community are also talking about a whole range of ideas that could fit into something like a Mozilla Academy. Things I’ve heard people talking about include:

Almost all of these ideas are at a nascent stage. And many of them are extensions or versions of the things we’re already doing, but with an open source learning angle. Nonetheless, the very fact that these conversations are actively happening makes me believe that we have the creativity and ambition we need to build something like a Mozilla Academy.

Who is going to do all this?

There is a set of questions that starts with ‘who is the Mozilla Academy?’ Is it all people who are flag waving, t-shirt donning Mozillians? Or is it a broader group of people loosely connected under the Mozilla banner but doing their own thing?

If you look at the current collection of people working with Mozilla on learning, it’s both. Last year, we had nearly 10,000 contributors working with us on some aspect of this ‘classroom and lab’ concept. Some of these people are Mozilla Reps, Firefox Student Ambassadors and others heavily identified as Mozillians. Others are teachers, librarians, parents, journalists, scientists, activists and others who are inspired by what we’re doing and want to work alongside us. It’s a big tent.

My sense is that this is the same sort of mix we need if we want to grow: we will want a core set of dedicated Mozilla people and a broader set of people working with us in a common way for a common cause. We’ll need a way to connect (and count) all these people: our tools, skills framework and credentials might help. But we don’t need them all to act or do things in exactly the same way. In fact, diversity is likely key to growing the level of scale and impact we want.

Snapping it all together

As I said at the top of this post, we need to boil all this down and snap it into a crisp vision for what Mozilla — and others — will build in the coming years.

My (emerging) plan is to start this with a series of blog posts and online conversations that delve deeper into the topics above. I’m hoping that it won’t just be me blogging — this will work best if others can also riff on what they think are the key questions and opportunities. We did this process as we were defining Webmaker, and it worked well. You can see my summary of that process here.

In addition, I’m going to convene a number of informal roundtables with people who might want to participate and help us build Mozilla Academy. Some of these will happen opportunistically at events like eLearning Africa in Addis and the Open Education Global conference in Banff that are happening over the next couple of months. Others will happen in Mozilla Spaces or in the offices of partner orgs. I’ll write up a facilitation script so other people can organize their own conversations, as well. This will work best if there is a lot of conversation going on.

In addition to blogging, I plan to report out on progress at the Mozilla All-Hands work week in Whistler this June. By then, my hope is that we have a crisp vision that people can debate and get involved in building out. From there, I expect we can start figuring out how to build some of the pieces we’ll need to pull this whole idea together in 2016. If all goes well, we can use MozFest 2015 as a bit of a barn raising to prototype and share out some of these pieces.

Process-wise, we’ll use the Mozilla Learning wiki to track all this. If you write something or are doing an event, post it there. And, if you post in response to my posts, please put a link to the original post so I see the ping back. Twittering #mozacademy is also a good thing to do, at least until we get a different name.

Join me in building Mozilla Academy. It’s going to be fun. And important.

Marcela Oniga has been involved with Mozilla since 2011. She is from Cluj Napoca, Romania and works as a software quality assurance engineer in Bucharest. She has solid Linux system administration skills, based on which she founded her own web hosting company that provides VPS and managed hosting services. She keeps herself focused by being actively involved in many challenging projects. Volunteering is one of her favorite things to do. In her spare time, she plays ping pong and lawn tennis.

Marcela Oniga is from Romania in eastern Europe.

Hi Marcela! How did you discover the Web?

I guess I discovered the Web when I first installed Firefox. Before that, I had read articles about the Internet in computer magazines.

How did you hear about Mozilla?

I heard about Mozilla in 2010. This was a time when open source conferences and events in Romania were not so popular. Now it is very easy to teach and learn about Mozilla projects; Mozillians are all over.

How and why did you start contributing to Mozilla?

I’m passionate about technology and I’m a big fan of open source philosophy. This is one of the reasons why I founded a non-profit organization called Open Source Open Mind (OSOM) in 2010. OSOM compelled me to contribute to open source projects. Along with other fans of open source, I have organized FLOSS events in many cities across Romania. We organize an annual OSOM conference, through which we support and promote Free, Libre and Open Source Software.

Marcela Oniga speaks at the Open Source Open Mind conference in February 2013.

In 2011 Ioana Chiorean and many others started to rebuild the tech community in Romania. At that point Mozilla became the obvious choice for me. I knew it was one of the biggest open-source software projects around and that it was very community-driven.

Marcela Oniga at AdaCamp 2013, San Francisco. During the make-a-thon session she created a robot badge whose eyes are small LEDs.

What’s the contribution you’re the most proud of?

Firefox OS is my favorite project in Mozilla. I’m very excited about having the chance to meet and work with the Firefox OS QA team. I received a lot of support from the team. I’m proud and happy that my small contribution to Firefox OS as a quality assurance engineer matters.

You belong to the Mozilla Romania community. Please tell us more about your community. Is there anything you find particularly interesting or special about it?

We are a bunch of different people with different ideas and views, but we all have the same mission: to grow Mozilla and help the open web.

Marcela Oniga along with Alina Mierlus and Alex Lakatos at the Mozilla Romania booth during the World Fair, Mozilla Summit 2013.

What’s your best memory with your fellow community members?

My best memory is a trip I took back in 2013 with the coolest Mozilla Romania community members. We went to Fundatica, a commune in the historic region of Transylvania. Best weekend ever!

Marcela Oniga on a 2013 trip to Fundatica, Brașov County, Romania with members of the Mozilla Romania community.

What advice would you give to someone who is new and interested in contributing to Mozilla?

I advise everyone to contribute to open source projects, especially to Mozilla. It is an opportunity to learn something new; it’s fun and interesting and you can only gain from it.

Marcela Oniga with other members of WoMoz – Ioana Chiorean, Flore Allemandou and Delphine Lebédel – in front of the San Francisco Bay Bridge in June 2013.

If you had one word or sentence to describe Mozilla, what would it be?

An open-minded community that’s making the web a better place.

What exciting things do you envision for you and Mozilla in the future?

I believe Mozilla’s future is bright. Millions of people around the world will help push the open web forward through amazing open source software and new platforms and tools.

Is there anything else you’d like to say or add to the above questions?

Let’s keep the web open

The Mozilla QA and Tech Evangelism teams would like to thank Marcela Oniga for her contribution over the past 4 years.

Marcela has been a very enthusiastic contributor to the Firefox OS project. She really “thinks like a tester” when she files a bug, and I enjoy looking at the issues she uncovers during her testing. – Marcia Knous

I met Marcela when she invited me to speak at the OSOM conference in Cluj-Napoca, Romania. It was my first time that far into Eastern Europe and I wasn’t too sure what to expect there. Granted, I had already met Ioana and a few other Mozilla reps from Romania at the whirlpool of activity which is MozFest, but sadly those were just brief interactions.

However, Marcela shone from day one. She organised everything super efficiently, told me what they needed from me, ensured all my questions were answered promptly and made us feel at home. On the day of the conference, she pulled the strings and everything just fell into place, as if it were the most natural thing to do.

Probably the thing I liked most is that she is an accomplished, quiet leader. You don’t need to be loud and immensely popular to be successful. Passion and hard, constant work are what actually matter. Marcela is passionate about doing good and good things, and her dedication is nothing short of spectacular.

I’m equally proud and humbled that she chose to contribute to Mozilla. Thanks for being so excellent, Marcela! – Soledad Penadés

this quarter i’ve been working on redesigning how bugs are viewed and edited on bugzilla.mozilla.org — expect large changes to how bmo looks and feels!

unsurprisingly some of the oldest code in bugzilla is that which displays bugs; it has grown organically over time to cope with the many varying requirements of its users worldwide. while there have been ui improvements over time (such as the sandstone skin), we felt it was time to take a step back and look at bugzilla with a fresh set of eyes. we wanted something that was designed for mozilla’s workflow, that didn’t look like it was designed last century, and that would provide us with a flexible base upon which we could build further improvements.

a core idea of the design is to load the bug initially in a read-only “view” mode, requiring the user to click on an “edit” button to make most changes. this enables us to defer loading of a lot of data when the page is initially loaded, as well as providing a much cleaner and less overwhelming view of bugs.
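
for illustration, the deferred-loading idea boils down to something like this (a sketch only; the element id, endpoint and helper are made up, not bmo’s actual code):

```js
// stay in the read-only "view" mode until the user asks to edit, and only
// then fetch the heavyweight field data (products, components, versions, ...)
document.getElementById('mode-btn').addEventListener('click', function () {
  fetch('/rest/bug_modal/edit/' + bugId)        // hypothetical endpoint
    .then(function (response) { return response.json(); })
    .then(function (data) {
      populateEditFields(data);                 // hypothetical helper
      document.body.classList.add('edit-mode');
    });
});
```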

major interface changes include:

fields are grouped by function, with summaries of the functional groups where appropriate

fields which do not have a value set are not shown

an overall “bug summary” panel at the top of the bug should provide an “at a glance” status of the bug

the view/edit mode:

allows for deferring of loading data only required while editing a bug (eg. list of all products, components, versions, milestones, etc)

this results in 12% faster page loads on my development system

still allows for common actions to be performed without needing to switch modes

comments can always be added

the assignee can change the bug’s status/resolution

flag requestee can set flags

you can use it today!

this new view has been deployed to bugzilla.mozilla.org, and you can enable it by setting the user preference “experimental user interface” to “on”.

The protocol HTTP/2, as defined in draft-17, was approved by the IESG and is being implemented and deployed widely on the Internet today, even before it has turned up as an actual RFC. Back in February, already upwards of 5%, or maybe even more, of the web traffic was using HTTP/2.

My prediction: We’ll see >10% usage by the end of the year, possibly as much as 20-30%, depending a little on how fast some of the major and most popular platforms switch (Facebook, Instagram, Tumblr, Yahoo and others). In 2016 we might see HTTP/2 serve a majority of all HTTP requests – done by browsers at least.

Counted how? Yeah, the second I mention a rate I know you guys will start throwing hard questions at me, like what exactly I mean. What is the Internet and how would I count this? Let me express it loosely: the share of HTTP requests (by volume of requests, not by bandwidth of data, and not just counting browsers). I don’t know how to measure it and we can debate the numbers in December, and I guess we can all end up being right depending on what we think is the right way to count!

Who am I to tell? I’m just a person deeply interested in protocols and HTTP/2, so I’ve been involved in the HTTP work group for years and I also work on several HTTP/2 implementations. You can guess as well as I, but this just happens to be my blog!

The HTTP/2 Implementations wiki page currently lists 36 different implementations. Let’s take a closer look at the current situation and prospects in some areas.

Browsers

Firefox and Chrome have had solid support for a while now. Just use a recent version and you’re good.

Internet Explorer has been shown in a tech preview that spoke HTTP/2 fine. So, run that or wait for it to ship in a public version soon.

There is no news from Apple regarding support in Safari. Give up on them and switch over to a browser that keeps up!

Other browsers? Ask them what they do, or replace them with a browser that supports HTTP/2 already.

My estimate: By the end of 2015 the leading browsers with a market share way over 50% combined will support HTTP/2.

Server software

Apache HTTPd is still the most popular web server software on the planet. mod_h2 is a recent module for it that can speak HTTP/2 – still in “alpha” state. Give it time and help out in other ways and it will pay off.

HTTPS initiatives like Let’s Encrypt help make it even easier to deploy and run HTTPS on your own sites, which will smooth the way for HTTP/2 deployments on smaller sites as well. Getting sites onto the TLS train will remain a hurdle and will perhaps be the single biggest obstacle to even more adoption.

My estimate: By the end of 2015 the leading HTTP server products with a market share of more than 80% of the server market will support HTTP/2.

Proxies

HAproxy? I haven’t gotten a straight answer from that team, but Willy Tarreau has been actively participating in the HTTP/2 work all the time so I expect them to have work in progress.

While very critical of the protocol, PHK of the Varnish project has said that Varnish will support it if it gets traction.

My estimate: By the end of 2015, the leading proxy software projects will start to have or are already shipping HTTP/2 support.

Services

Google (including YouTube and other sites in the Google family) and Twitter have run with HTTP/2 enabled for months already.

Lots of existing services offer SPDY today, and I would imagine most of them are considering how to switch to HTTP/2, as Chrome has already announced it will drop SPDY during 2016 and Firefox will also abandon SPDY at some point.

My estimate: By the end of 2015 lots of the top sites of the world will be serving HTTP/2 or will be working on doing it.

Content Delivery Networks

Amazon has not given any response publicly that I can find for when they will support HTTP/2 on their services.

Not a totally bright situation but I also believe (or hope) that as soon as one or two of the bigger CDN players start to offer HTTP/2 the others might feel a bigger pressure to follow suit.

Non-browser clients

curl and libcurl have supported HTTP/2 for months now, and the HTTP/2 implementations page lists available implementations for just about all major languages: node-http2 for JavaScript, http2-perl, http2 for Go, Hyper for Python, OkHttp for Java, http-2 for Ruby and more. If you do HTTP today, you should be able to switch over to HTTP/2 relatively easily.
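
For instance, a recent curl can ask for HTTP/2 with a single flag (the URL here is just a convenient test server):

```sh
# request the headers of a page over HTTP/2,
# falling back to 1.1 if the server does not speak it
curl --http2 -I https://nghttp2.org/
```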

More?

I’m sure I’ve forgotten a few obvious points but I might update this as we go as soon as my dear readers point out my faults and mistakes!

How long is HTTP/1.1 going to be around?

My estimate: HTTP 1.1 will be around for many years to come. There is going to be a double-digit percentage share of the existing sites on the Internet (and who knows how many that aren’t even accessible from the Internet) for the foreseeable future. For technical reasons, for philosophical reasons and for good old we’ll-never-touch-it-again reasons.

The survey

Finally, I asked friends on Twitter, G+ and Facebook what they think the HTTP/2 share will be by the end of 2015, with the help of a little poll. This does of course not produce a sound or statistically safe number, but is still a collection of what a set of random people guessed. A quick poll to get a rough feel. This is how the 64 responses I received were distributed:

Evidently, if you take the median of these results you can see that the middle point is between the 5-10 and 10-15 buckets. I’ll make it easy and say that the poll showed a group estimate of 10%: ten percent of the total HTTP traffic to be HTTP/2 at the end of 2015.

I didn’t vote here, but I would’ve checked the 15-20 choice: a fair bit over the median but only slightly into the top quarter.

March 30, 2015

17 years ago today, the code shipped, and the Mozilla project was born. I’ve been involved for over 15 of those years, and it’s been a fantastic ride. With Firefox OS taking off, and freedom coming to the mobile space (compare: when the original code shipped, the hottest new thing you could download to your phone was a ringtone), I can’t wait to see where we go next.

TL;DR: Service Worker, a new Web API, can be used as a means of re-engineering client-side web applications, and a departure from the single-page web application paradigm. The details of realizing that are being experimented with and proposed in Gaia. In Gaia particularly, the “hosted packaged app” serves as a new iteration of the security model work, to make sure Service Workers work with Gaia.

Last week, I spent an entire week in face-to-face meetings, going through the technical plans for re-architecting Gaia apps, the web applications that power the front-end of Firefox OS, and the management plan on resourcing and deployment. Given that there were only a few developers in the meeting and the public promise of “the new architecture”, I think it makes sense to do a recap of what’s being proposed and the challenges already foreseen.

Using Service Worker

Before diving into the re-architecture plan, we need to explain what Service Worker is. From a broader perspective, Service Worker can be understood as simply a browser feature/Web API that allows web developers to insert a JavaScript-implemented proxy between the server content and the actual page shown. It is the latest piece of sexy Web technology, heavily marketed by Google. Mozilla’s platform engineering team is committed to shipping it as well.

Many things previously not possible can be done with the worker proxy. For starters, it could replace AppCache while keeping the flexibility of managing the cache in the hands of the app. The “flexibility” bit is the part where it gets interesting — theoretically everything not touching the DOM can be moved into the worker — effectively re-creating the server-client architecture without a real remote HTTP server.
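
A minimal sketch of that proxy idea, using the standard Service Worker and Cache APIs (the cache name and file list are made up):

```js
// sw.js — intercept fetches and answer from a cache, AppCache-style,
// but with the caching policy fully under the app's control.
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('app-shell-v1').then(function (cache) {
      return cache.addAll(['/', '/app.js', '/app.css']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);   // fall through to the network
    })
  );
});
```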

The Gaia Re-architecture Plan

Indeed, that’s what the proponents of the re-architecture are aiming for — my colleagues, mostly based in Paris, proposed this architecture as the 2015 iteration of, and departure from, the “traditional” single-page web application. What’s more, the intention is to create a framework where the backend, or “server” part of the code, is individually contained in its own worker threads, with strong interface definitions to achieve maximum reusability of these components — much like Web APIs themselves, if I understand it correctly.

It does not, however, tie you to a specific front-end framework. Users of the proposed framework should be free to use any strategy they feel comfortable with — the UI can be as hardcore as entirely rendered in WebGL, or simply plain HTML/CSS/jQuery.

The plan is public on a Wiki page, where I expect there will be changes as progress is made. This post intentionally does not cover many of the features the architecture promises to unlock, in favor of fresh content (as opposed to copy-editing), so I recommend readers check out the page.

Technical Challenges around using Service Workers

There are two major technical challenges: one is the possible performance impact (memory and cold-launch time) of fitting this multi-threaded framework and its binding middleware into a phone; the other is the security model changes needed to make the framework usable in Gaia.

Speaking of the back-end, “server” side: the one key difference between real remote servers and workers is that one lives in data centers with an endless power supply, while the other depends on your phone battery. Remote servers can push constructed HTML as soon as possible, but a local web app backed by workers might need to wait for the worker to spin up. For that, the architecture might depend on yet another out-of-spec feature of Service Worker: a cache that the worker thread has control of. The browser should render this pre-constructed HTML without waiting for the worker to launch.

Without considering the cache feature, and considering the memory usage, we kind of get to a point where we can only speak for sure about performance once there is an implementation to measure. The other solution the architecture proposes, to work around this on low-end phones, would be to “merge back” the back-end code into one single thread, although I personally suspect a risk of timing issues, as essentially doing so would require the implementation to simulate multi-threading in a single thread. We will just have to wait for the real implementation.

The security model part is really tricky. Gaia currently exists as packaged zips shipped on the phone and updated with OTA images, pinned to the Gecko version it ships along with. Packaging has been one sad workaround since Firefox OS v1.0 — the primary reasons for it are that (1) we want to make sure proprietary APIs do not pollute the general Web, and (2) we want a trusted third party (Mozilla) to be involved in security decisions for users, by checking and signing the content.

The current Gecko implementation of Service Worker does not work with the classic packaged apps, which are served from an app: URL. Incidentally, the app: URL is something we feel is not webby enough, so we are trying to get rid of it. The proposal of the week is called “hosted packaged apps”, which serves packages from the real, remote Web and allows referencing content in the package directly with a special path syntax. We can’t get rid of packages yet, for the reasons stated above, but serving content over HTTP should allow us to use Service Worker from the trusted content, i.e. Gaia.

One thing to note about this mix is that a signed package is offline by default in its own right, and its updates must be signed as well. The Service Worker spec will be violated a bit in order to make them work well — a detail currently being worked out.

Technical Challenges on the proposed implementation

As already mentioned in the paragraph on Service Worker challenges, one worker might introduce performance issues, let alone many workers. Each worker thread implies memory usage as well. For that, the proposal is for the framework to control starting up and shutting down threads (i.e. parts of the app) as necessary. But again, we will have to wait for the implementation and evaluate it.

The proposed framework restricts Web API access to the “back-end” only, to decouple the UI from the back-end as far as possible. However, having few Web APIs available in the worker threads will be a problem. The framework proposes to work around this with a message routing bus that sends the calls back to the UI thread, and by asking Gecko to expose APIs to workers over time.

As an abstraction over platform worker threads, and an attempt to abstract away platform/component changes, the architecture deserves special attention to the classic abstraction problems: abstraction eventually leaks, and abstraction always comes with overhead, whether that is runtime performance overhead or the human cost of learning the abstraction and debugging it. I am not the expert here; Joel is.

Technical Challenges on enabling Gaia

Arguably, Gaia is one of the most complex web projects in the industry. We inherit a strong Mozilla tradition of continuous integration. The architecture proposal calls for a strong separation between the front-end application codebase and the back-end application codebase, including separate integration of the two when building for different form factors. The integration plan itself is something worth rethinking to meet such a requirement.

With hosted packaged apps, the architecture proposal unlocks the possibility of deploying Gaia from the Web, instead of always shipping it with the OTA image. How to match Gaia and Gecko versions all the way down to every Nightly build is something to figure out too.

Conclusion

Given that everything is in flux and the immense amount of work (as outlined above), it’s hard to achieve any of the end goals without prioritizing the internals and landing/deploying them separately. Last week it was already concluded that parts of the security model changes will block Service Worker usage in signed packages — we will need to identify those parts and resolve them first. It’s also important to make sure the implementation does not suffer any performance issues before deploying the code and starting the major work of revamping every app. We should be able to figure out a scaled-back version of the work and realize that first.

If we plan and manage the work properly, I remain optimistic about the technical outcome of the architecture proposal. I trust my colleagues, particularly those who made the proposal, to make reasonable technical judgements. It’s been years since the introduction of the single-page web application — it’s indeed worth rethinking what’s possible if we depart from it.

The key here is to not do all the things at once, but to strengthen what’s working and amend what’s not, along the way to turning the proposal into a usable implementation.

Earlier this month I blogged about improving recognition of our Mozilla QA contributors. The goal was to get some constructive feedback this quarter and to take action on that feedback next quarter. As this quarter is coming to a close, we’ve received four responses. I’d like to think a lot more people out there have ideas and experiences from which we could learn. This is an opportunity to make your voice heard and make contributing at Mozilla better for everyone.

I’d like to announce my new project, readings.owlfolio.org, where I will be reading and reviewing papers from the academic literature mostly (but not exclusively) about information security. I made a false start at this near the end of 2013 (it is the same site that’s been linked under readings in the top bar since then) but now I have a posting queue and a rhythm going. Expect three to five reviews a week. It’s not going to be syndicated to Planet Mozilla, but I may mention it here when I post something I think is of particular interest to that audience.

Longtime readers of this blog will notice that it has been redesigned and matches readings. That process is not 100% complete, but it’s close enough that I feel comfortable inviting people to kick the tires. Feedback is welcome, particularly regarding readability and organization; but unfortunately you’re going to have to email it to me, because the new CMS has no comment system. (The old comments have been preserved.) I’d also welcome recommendations of comment systems which are self-hosted, open-source, database-free, and don’t involve me manually copying comments out of my email. There will probably be a technical postmortem on the new CMS eventually.

Today is my 5 year Mozilla anniversary. Back in 2010, I joined the support team to create awesome documentation for Firefox. That quickly evolved into looking for ways to help users before they ever reached the support site. And this year I joined the Firefox UX team to expand on that work. A lot of things have changed in those five years, but Mozilla’s work is as relevant as ever. That’s why I’m even more excited about the work we’re doing today than I was back in 2010. These last 5 years have been amazing and I’m looking forward to many more to come.

The Week in Review is our weekly roundup of what’s new in open science from the past week. If you have news or announcements you’d like passed on to the community, be sure to share on Twitter with @mozillascience and @billdoesphysics, or join our mailing list and get in touch there.

Conferences & Events

The International Science 2.0 Conference in Hamburg ran this past week; a couple of highlights from the conference:

The OKFN highlighted their proposed Open Definition, to help bring technical and legal clarity to what is meant by ‘openness’ in science.

GROBID (site, code), a tool for extracting bibliographic information from PDFs, was well-received by attendees.

Right here at the Mozilla Science Lab, we ran our first Ask us Anything forum event on ‘local user groups for coding in research’, co-organized with Noam Ross – check out the thread and our reflections on the event.

Noam Ross posted a detailed and data-driven analysis of which common errors people learning R struggle with (spoiler: an enormous fraction of them boil down to missing information). Ross goes on to consider how these trends can inform an effective strategy for teaching new R users to read and debug common errors.

Deep Ganguli published an illustrated journey of his first experiences using R for data manipulation and visualization. An experienced Pythonista, Ganguli reports on the advantages of each language, and the importance of using the right tool for the job.

Michael Strack began a blog series on his ongoing journey tackling a major data science challenge in his biological research. Strack describes the series as a case study in the importance of not working in isolation, and the value of a supportive and interactive research community.

In my February 23 blog post, I gave a brief overview of how search engines have evolved over the years and how today’s search engines learn from past searches to anticipate which results will be most relevant to a given query. This means that who succeeds in the $50 billion search business and who doesn’t mostly depends on who has access to search data. In this blog post, I will explore how search engines have obtained queries in the past and how (and why) that’s changing.

For some 90% of searches, a modern search engine analyzes and learns from past queries, rather than searching the Web itself, to deliver the most relevant results. Most of the time, this approach yields better results than full-text search. The Web has become so vast that searches often find millions or billions of result pages that are difficult to rank algorithmically.

One important way a search engine obtains data about past queries is by logging and retaining search results from its own users. For a search engine with many users, there’s enough data to learn from and make informed predictions. It’s a different story for a search engine that wants to enter a new market (and thus has no past search data!) or compete in a market where one search engine is very dominant.

In Germany, for example, where Google has over 95% market share, competing search engines don’t have access to adequate past search data to deliver search results that are as relevant as Google’s. And, because their search results aren’t as relevant as Google’s, it’s difficult for them to attract new users. You could call it a vicious circle.

Search engines with small user bases can acquire search traffic by working with large Internet Service providers (also called ISPs, think Comcast, Verizon, etc.) to capture searches that go from users’ browsers to competing search engines. This is one option that was available in the past to Google’s competitors such as Yahoo and Bing as they attempted to become competitive with Google’s results.

In an effort to improve privacy, Google began using encrypted connections to make searches unintelligible to ISPs. One side effect was that an important avenue was blocked for competing search engines to obtain data that would improve their products.

An alternative to working with ISPs is to work with popular content sites to track where visitors are coming from. In Web lingo this is called a “referer header.” When a user clicks on a link, the browser tells the target site where the user was before (what site “referred” the user). If the user was referred by a search result page, that address contains the query string, making it possible to associate the original search with the result link. Because the vast majority of Web traffic goes to a few thousand top sites, it is possible to reconstruct a pretty good model of what people frequently search for and what results they follow.
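
For example, before queries were encrypted, a target site could recover the search terms from the referer with a few lines of client-side JavaScript (shown here with today’s standard URL API; ‘q’ is the query parameter Google used):

```js
// Sketch: recover the search query that led a visitor to this page.
// Only worked while search result URLs still carried the query in clear text.
var ref = document.referrer;   // e.g. "http://www.google.com/search?q=open+data"
if (ref) {
  var query = new URL(ref).searchParams.get('q');
  console.log('visitor searched for:', query);   // -> "open data"
}
```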

Until late 2011, that is, when Google began encrypting the query in the referer header. Today, it’s no longer possible for the target site to reconstruct the user’s original query. This is of course good for user privacy—the target site knows only that a user was referred from Google after searching for something. At the same time, though, query encryption also locked out everyone (except Google) from accessing the underlying query data.

This chain of events has led to a “winner take all” situation in search, as a commenter on my previous blog post noted: a successful search engine is likely to get more and more successful, leaving in the dust the competitors who lack access to vital data.

These days, the search box in the browser is essentially the last remaining place where Google’s competitors can access a large volume of search queries. In 2011, Google famously accused Microsoft’s Bing search engine of doing exactly that: logging Google search traffic in Microsoft’s own Internet Explorer browser in order to improve the quality of Bing results. Having almost tripled the market share of Chrome since then, this is something Google has to worry much less about in the future. Its competitors will not be able to use Chrome’s search box to obtain data the way Microsoft did with Internet Explorer in the past.

So, if you have ever wondered why, in most markets, Google’s search results are so much better than their competitors’, don’t assume it’s because Google has a better search engine. The real reason is that Google has access to so much more search data. And the company has worked diligently over the past few years to make sure it stays that way.

This is a short and personal post written with the hope that it encourages you to join the new Open Data Community on Slack – a place for realtime communication and collaboration on the topic of open data.

Fostering open data is important, and so is providing a place for the discussion and sharing of ideas around its production and use. It's for this reason that we've created the Open Data Community: in the hope of not only giving something back for the things that we have taken, but also providing a place for people to come together to help further this common goal.

The Open Data Community is not ViziCities; it's a group of like-minded individuals, non-profits and corporations alike. It's for anyone interested in open data, as well as for those who produce, use or are otherwise involved in its lifecycle.

In just 2 days the community has grown to 250 strong – I look forward to seeing you there and talking open data!

March 29, 2015

31.6.0 is available (downloads, release notes, hashes). This includes all the security fixes to date, but no TenFourFox-specific changes. It becomes final Monday evening Pacific time as usual, assuming no critical issues are identified by you, our lovely and wonderful testing audience.

This is a celebratory post. Today I learned that the add-ons I've created for Firefox, Thunderbird, and SeaMonkey have been downloaded over 1,000,000 times in total. For some authors I'm sure that's not a major milestone – some add-ons have more than a million users – but for me it's something I think I can be proud of. (By comparison my add-ons have a collective 80,000 users.)

Here are some of them:

I started six years ago with Shrunked Image Resizer, which makes photos smaller for uploading to websites. Later I modified it to also make photos smaller in Thunderbird email, and that's far more popular than the original purpose.

Around the same time I got frustrated when developing websites, having to open the page I was looking at in different browsers to test. The process involved far more keystrokes and mouse clicks than I liked, so I created Open With, which turned that into a one-click job.

Later on I created Tab Badge, to provide a visual alert of stuff happening with a tab. This can be quite handy when watching trees, as well as with Twitter and Facebook.

Then there's New Tab Tools – currently my most popular add-on. It's the standard Firefox new tab page, plus a few things, and minus a few things. Kudos to those of you who wrote the built-in page, but I like mine better. :-)

Lastly I want to point out my newest add-on, which I think will do quite well once it gets some publicity. I call it Noise Control and it provides a visual indicator of tabs playing audio. (Yes, just like Chrome does.) I've seen lots of people asking for this sort of thing over the years, and the answer was always "it can't be done". Yes it can.

I've become concerned about the attitudes towards government in the
technology industry. It seems to me (perhaps exaggerating a little)
that much of the effort in computer security these days considers the
major adversary to be the government (whether acting legally or
illegally), rather than those attempting to gain illegal access to
systems or information (whether private or government actors).

Democracy requires that the government have power. Not absolute
power, but some, limited, power. Widespread use of technology that
makes it impossible for the government to exercise certain powers could
be a threat to democracy.

Let's look at a recent example: an article in the Economist about ransomware, malicious software that encrypts files on a computer, whose authors then demand payment to decrypt them. The payment demanded these days is typically in Bitcoin, a system designed to avoid the government's power. This means that Bitcoin avoids the mechanisms the international system has for finding and catching criminals by following the money they make, and thus makes a perfect system for authors of ransomware and other criminals. The losers are those who don't have the mix of computer expertise and luck needed to avoid the ransomware.

One of the things that democracies often try to do is to protect the
less powerful. For example, laws to protect property (in
well-functioning governments) protect everybody's property, not just the
property of those who can defend their property by force. Having laws
like these not only (often) provides a fairer chance for the weak, but
it also lets people use their labor on things that can improve people's
lives rather than on zero-sum fighting
over existing resources. Technology that keeps government out risks
making it impossible for government to do this.

I worry that things like ransomware payment in Bitcoin could be just
the tip of the iceberg. Technology is changing society quickly, and I
don't think this will be the only harmful result of technology designed
to keep government out. I don't want the Internet to turn into a
“wild west,” where only the deepest experts in technology
can survive. Such a change to the Internet risks either giving up many
of the potential benefits of the Internet for society by keeping
important things off of it, or alternatively risks moving society
towards anarchy, where there is no government power that can do what we
have relied on governments to do for centuries.

Now I'm not saying today's government is perfect; far from it.
Government has responsibility too, including to deserve the trust
that we need to place in it. I hope to write about that more in
the future.

I mentioned in my previous post a mercurial extension I wrote for making bookmarks easier to
manipulate. Since then it has undergone a large overhaul, and I believe it is now stable and
intuitive enough to advertise a bit more widely.

Introducing bookbinder

When working with bookmarks (or anonymous heads) I often wanted to operate on the
entire series of commits within the feature I was working on. I often found myself digging out
revision numbers to find the first commit in a bookmark to do things like rebasing, grafting or
diffing. This was annoying. I wanted bookmarks to work more like a git-style branch, which has a definite start as well as an end, and I wanted to be able to easily refer to the set of commits contained within. Enter bookbinder.

First, you can install bookbinder by cloning:

```bash
$ hg clone https://bitbucket.org/halbersa/bookbinder
```

Then add the following to your hgrc:

```ini
[extensions]
bookbinder = path/to/bookbinder
```

Usage is simple. Any command that accepts a revset with --rev will be wrapped so that bookmark labels are replaced with the series of commits contained within the bookmark.

For example, let's say we create a bookmark to work on a feature called foo and make two commits:
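The example itself isn't reproduced here, but the setup would look roughly like this (a hypothetical sketch; file names and commit messages are invented):

```bash
$ hg bookmark foo                                    # create the bookmark
$ echo change1 >> file.txt; hg commit -m "commit 1"
$ echo change2 >> file.txt; hg commit -m "commit 2"
$ hg log -r foo                                      # with bookbinder, shows both commits
```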

Remember, hg log is just one example. Bookbinder automatically detects and wraps all commands that have a --rev option and that can receive a series of commits. It even finds commands from arbitrary extensions that may be installed! Here are a few examples that I've found handy in addition to hg log:
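The original list isn't reproduced here; based on the rebasing, grafting and diffing mentioned above, these are plausible (hypothetical) examples:

```bash
$ hg rebase -r my_bookmark -d default   # rebase the whole feature, not just its head
$ hg graft -r my_bookmark               # graft every commit in the feature
$ hg strip -r my_bookmark               # strip the whole feature
# etc.
```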


They all replace the single commit pointed to by the bookmark with the series of commits within the
bookmark. But what if you actually only want the single commit pointed to by the bookmark label?
Bookbinder uses '.' as an escape character, so using the example above:

Bookbinder will also detect if bookmarks are based on top of one another:

```bash
$ hg rebase -r my_bookmark_2 -d my_bookmark_1
```

Running hg log -r my_bookmark_2 will not print any of the commits contained by my_bookmark_1.

The gory details

But how does bookbinder know where one feature ends, and another begins? Bookbinder implements a new
revset called "feature". The feature revset is roughly equivalent to the following alias
(kudos to smacleod for coming up with it):

```ini
[revsetalias]
feature($1) = ($1 or (ancestors($1) and not (excludemarks($1) or ancestors(excludemarks($1))))) and not public() and not merge()
excludemarks($1) = ancestors(parents($1)) and bookmark()
```

Here is a formal definition. A commit C is "within" a feature branch ending at revision R if all of
the following statements are true:

C is R or C is an ancestor of R

C is not public

C is not a merge commit

no bookmarks exist in [C, R) for C != R

all commits in (C, R) are also within R for C != R

In easier-to-understand terms, this means all ancestors of a revision that aren't public, a merge commit, or part of a different bookmark are within that revision's 'feature'. One thing to be aware of is that this definition allows empty bookmarks. For example, if you create a new bookmark on a public commit and haven't made any changes yet, that bookmark is "empty". Running hg log -r with an empty bookmark won't produce any output.

The feature revset that bookbinder exposes works just as well on revisions that don't have any associated bookmark. For example, if you are working with an anonymous head, you could do:

```bash
$ hg log -r 'feature(<rev>)'
```

In fact, when you pass in a bookmark label to a supported command, bookbinder is literally just
substituting -r <bookmark> with -r feature(<bookmark>). All the hard work is happening in the
feature revset.

In closing, bookbinder has helped me make a lot more sense out of my bookmark-based workflow. It's solving a problem I think should be handled in mercurial core; maybe one day I'll attempt to submit a patch upstream. But until then, I hope it can be useful to others as well.

For about a year, the Mozilla community in Pune, India has had an informal task force where four active members go out to local universities and conferences to speak about privacy, security, surveillance, and other topics. Mozilla Reps Ankit Gadgil, …

Progress! I got IonPower past the point PPCBC ran aground at -- it can now jump in and out of Baseline and Ion code on PowerPC without crashing or asserting. That's already worth celebrating, but as the judge who gave me the restraining order on behalf of Scarlett Johansson remarked, I always have to push it. So I tried our iterative π calculator again and really gave it a workout by forcing 3 million iterations. Just to be totally unfair, I've compared the utterly unoptimized IonPower (in full Ion mode) versus the fully optimized PPCBC (Baseline) in the forthcoming TenFourFox 31.6. Here we go (Quad G5, Highest Performance mode):

No, that's not a typo. The unoptimized IonPower, even in its primitive state, is 23 percent faster than PPCBC on this test, largely due to its superior use of floating point. The gap gets even wider when we do 30 million iterations:

That's 63 percent faster. And I haven't even gotten to fun things like leveraging the G5's square root instruction (the G3 and G4 versions will use David Kilbridge's software square root from JaegerMonkey), parallel compilation on the additional cores, or even some of the low-hanging fruit in branch optimization; on top of all that, IonPower is still running all its debugging code and sanity checks. I think this qualifies as IonPower phase 5 (basic operations), so now the final summit will be getting the test suite to pass in both sequential and parallel modes. When it does, it's time for TenFourFox 38!

By the way, for Ben's amusement, how does it compare to our old, beloved and heavily souped up JaegerMonkey implementation? (17.0.11 was our fastest version here; 19-22 had various gradual degradations in performance due to Mozilla's Ion development screwing around with methodjit.)

Yup. I'm that awesome. Now I'm gonna sit back and go play some well-deserved Bioshock Infinite on the Xbox 360 (tri-core PowerPC, thank you very much, and I look forward to cracking the firmware one of these days) while the G5 is finishing the 31.6 release candidates overnight. They should be ready for testing tomorrow, so watch this space.

what's Mozharness?

why in tree?

First and foremost: transparency.

There is an overarching goal to provide developers the keys to manage and stand up their own builds & tests (AKA self-serve). Having the automation step logic side by side with the compile and test step logic gives developers transparency and a sense of determinism, which leads to reason number 2.

deterministic builds & tests

This is somewhat already in place thanks to Armen's work on pinning specific Mozharness revisions to in-tree revisions. However, the pins can lag behind the latest Mozharness revisions, so we often end up landing multiple changes to Mozharness at once against a single in-tree revision.

Mozharness-based build & test jobs are not managed solely by Buildbot anymore. Taskcluster is starting to take the weight off Buildbot's hands and, because of its behaviour, Mozharness is better suited in-tree.

The A-Team is going to put effort this quarter into unifying how we run tests locally vs. in automation. Having Mozharness in-tree should make this easier.

this sounds great. why wouldn't we want to do this?

There are downsides. It arguably puts extra strain on Release Engineering for managing infra health. Though issues will be more isolated, it becomes trickier to keep a high-level view of when and where Mozharness changes land.

In addition, there is going to be more friction for deployments. This is because a number of our Mozharness scripts are not directly related to continuous integration jobs: e.g. releases, vcs-sync, b2g bumper, and merge tasks.

why wasn't this done yester-year?

Mozharness now handles more than 90% of our build and test jobs. Its internal components (config, script, and log logic) are starting to mature. However, this wasn't always the case.

When it was being developed and its uses were unknown, it made sense to develop it on the side and tie it closely to Buildbot deployments.

okay. I'm sold. can we just simply hg add mozharness?

Integrating Mozharness in-tree comes with a few challenges:

chicken and egg issue

currently, for build jobs, Mozharness is in charge of managing version control of the tree itself. How can Mozharness check out a repo if it itself lives within that repo?

test jobs don't require the src tree

test jobs only need a binary and a tests.zip. It doesn't make sense to keep a copy of our branches on each machine that runs tests. In line with that, putting mozharness inside tests.zip also leads us back to a similar 'chicken and egg' issue.

which branch and revisions do our release engineering scripts use?

how do we handle releases?

how do we not cause extra load on hg.m.o?

what about integrating into Buildbot without interruption?

it's easy!

This shouldn't be too hard to solve. Here is a basic outline of my plan of action and roadmap for this goal:

- land a copy of mozharness on a project branch
- add an endpoint on relengapi with the following logic (see the sketch after this list):
  - the endpoint takes 'mozharness' and a '$REVISION'
  - look in s3 for an equivalent mozharness archive
  - if not present: download a sub-repo dir archive from hg.m.o, run tests, and push that archive to s3
  - finally, return the url to the s3 archive
- integrate the endpoint into buildbot:
  - call the endpoint before scheduling jobs
  - add a builder step: download and unpack the archive on the slave
- for machines that run mozharness-based releng scripts:
  - add a manifest that points to a 'known good s3 archive'
  - fix the deploy model to listen for manifest changes and download/unpack mozharness in a similar manner to builds+tests
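To make the endpoint logic concrete, here is a rough Python sketch of how it could work. This is a hypothetical illustration rather than actual relengapi code: the bucket name, the hg.m.o archive URL, and the function name are all invented, and the test run and error handling are elided.

```python
import boto
import requests

# Invented names for illustration; the real layout would differ.
BUCKET = "releng-mozharness-archives"
HG_ARCHIVE_URL = "https://hg.mozilla.org/mozilla-central/archive/{rev}.tar.gz/testing/mozharness"

def mozharness_archive_url(rev):
    """Return an s3 url for the mozharness archive at `rev`, creating it on demand."""
    conn = boto.connect_s3()
    bucket = conn.get_bucket(BUCKET)
    key_name = "mozharness-{rev}.tar.gz".format(rev=rev)

    key = bucket.get_key(key_name)
    if key is None:
        # Not in s3 yet: download the sub-repo dir archive from hg.m.o.
        archive = requests.get(HG_ARCHIVE_URL.format(rev=rev)).content
        # (Run the mozharness unit tests against the archive here.)
        key = bucket.new_key(key_name)
        key.set_contents_from_string(archive)

    # Finally, return the url to the s3 archive.
    return key.generate_url(expires_in=3600)
```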

This is a loose outline of the integration strategy. What I like about this:

- no code change is required within Mozharness' code
- there is very little code change within Buildbot
- it allows Taskcluster to use Mozharness in whatever way it likes
- there is no chicken-and-egg problem, as (in the Buildbot world) Mozharness will exist before the tree exists on the slave
- there is no need to manage multiple repos and keep them in sync

I'm sure I am not taking into account many edge cases and I look forward to hitting those edges head on as I start this in Q2. Stay tuned for further developments.

One day, I'd like to see Mozharness (at least its internal parts) be made into isolated python packages installable by pip. However, that's another problem for another day.

Next week I’m spending most of Monday with my family before heading off to London. I’ll be keynoting and running a workshop at the London College of Fashion conference on Tuesday. On Wednesday and Thursday I’ll be working from the City & Guilds offices, getting to know people and putting things into motion!

Any time Facebook talks about technical matters I tend to listen.
They have a track record of demonstrating engineering leadership
in several spaces. And, unlike many companies that just talk, Facebook
often gives others access to those ideas via source code and healthy
open source projects. It's rare to see a company operating on the
frontier of the computing field provide so much insight into their
inner workings. You can gain so much by riding their coattails and following their lead instead of clinging to and cargo-culting from the past.

The Facebook F8 developer conference was this past week. All the
talks are now available online.
I encourage you to glance through the list of talks and watch whatever is relevant to you. There's really a little something for everyone.

"We don't want humans waiting on computers. We want computers waiting
on humans." (This is the common theme of the talk.)

In 2005, Facebook was on Subversion. In 2007 it moved to Git, deploying a bridge so people worked in Git and had a distributed workflow but pushed to Subversion under the hood.

New platforms over time. Server code, iOS, Android. One Git repo
per platform/project -> 3 Git repos. Initially no code sharing, so
no problem. Over time, code sharing between all repos. Lots of code
copying and confusion as to what is where and who owns what.

Facebook is mere weeks away from completing their migration to
consolidate the big three repos to a Mercurial monorepo. (See also
my post about monorepos.)

Reasons:

Easier code sharing.

Easier large-scale changes. Rewrite the universe at once.

Unified set of tooling.

Facebook employees run >1M source control commands per day. >100k
commits per week. VCS tool needs to be fast to prevent distractions
and context switching, which slow people down.

Facebook implemented sparse checkout and shallow history in Mercurial.
Necessary to scale distributed version control to large repos.

Quote from Google: "We're excited about the work Facebook is doing with
Mercurial and glad to be collaborating with Facebook on Mercurial
development." (Well, I guess the cat is finally out of the bag:
Google is working on Mercurial. This was kind of an open secret for
months. But I guess now it is official.)

Push-pull-rebase bottleneck: if you rebase and push and someone beats
you to it, you have to pull, rebase, and try again. This gets worse
as commit rate increases and people do needless legwork. Facebook
has moved to server-side rebasing on push to mostly eliminate this
pain point. (This is part of a still-experimental feature in Mercurial,
which should hopefully lose its experimental flag soon.)

Starting 13:00 in we have a speaker change and move away from version
control.

IDEs don't scale to Facebook scale. "Developing in Xcode at Facebook is an exercise in frustration." On average 3.5 minutes to open Facebook for iOS in Xcode. 5 minutes on average to index. Pegs the CPU and makes it not very responsive. 50 Xcode crashes per day across all Facebook iOS developers.

Facebook believes IDEs are worth the pain because they make people
more productive.

Facebook wants to support all editors and IDEs since people want to
use whatever is most comfortable.

React Native changed things. It supported developing on multiple platforms, which no single IDE supports. People launched several editors and tools to do React Native development and needed 4 windows to do it. That experience was "not acceptable." So they built their own IDE: a set of plugins on top of Atom, not a fork. They like the hackable and web-y nature of Atom.

The demo showing iOS development looks very nice! Doing Objective-C,
JavaScript, simulator integration, and version control in one window!

It can connect to remote servers and transparently save and
deploy changes. It can also get real-time compilation errors and hints
from the remote server! (Demo was with Hack. Not sure if other langs are supported. Having beefy central servers for e.g. Gecko development would be a fun experiment.)

Starting at 32:00 presentation shifts to continuous integration.

Number one goal of CI at Facebook is developer efficiency. We
don't want developers waiting on computers to build and test diffs.

Provide frequent feedback. Developers should know as soon as
possible after they did something. (I think this refers to local
feedback.)

Sandcastle is their CI system.

Diff lifecycle discussion.

Basic tests and lint run locally. (My understanding from talking with Facebookers is "local" often means on a Facebook server, not a local laptop. Machines at developers' fingertips are often dumb terminals.)

They appear to use code coverage to determine what tests to run.
"We're not going to run a test unless your diff might actually have
broken it."

They run flaky tests less often.

They run slow tests less often.

Goal is to get feedback to developers in under 10 minutes.

If they run fewer tests and get back to developers quicker,
things are less likely to break than if they run more tests but
take longer to give feedback.

They also want feedback quickly so reviewers can see results at
review time.

In addition to test results, performance and size metrics are reported.

They have a "Ship It" button on the diff.

Landcastle handles landing diff.

"It is not OK at Facebook to land a diff without using Landcastle."
(Read: developers don't push directly to the master repo.)

Once Landcastle lands something, it runs tests again. If an issue is found, a task is filed. A task can be "push blocking." Code won't ship to users until the "push blocking" issue is resolved. (Tweets confirm they do backouts "fairly aggressively." A valid resolution to a push blocking task is to back out. But fixing forward is fine as well.)

In addition to diff-based testing, they do continuous testing runs.
Much more comprehensive. No time restrictions. Continuous runs on
master and release candidate branches. Auto bisect to pin down
regressions.

Sandcastle processes >1000 test results per second. 5 years of machine
work per day. Thousands of machines in 5 data centers.

They started with buildbot. Single master. Hit the scaling limits of a single-threaded single master: the master could not push work to workers fast enough. Sandcastle has a distributed queue; workers just pull jobs from it.

There is a "not my fault" button that developers can use to report
bad signals.

"Whatever the scale of your engineering organization, developer
efficiency is the key thing that your infrastructure teams should be
striving for. This is why at Facebook we have some of our top
engineers working on developer infrastructure." (Preach it.)

Excellent talk. Mozillians doing infra work or who are in charge
of head count for infra work should watch this video.

March 27, 2015

So the other day, Indiana’s governor signed into law a bill passed by the Republican-controlled legislature called the Religious Freedom Restoration Act. The reality is that this bill has nothing to do with freedom of religion and everything to do with legalizing discrimination.

Anyways, to the point: I hate to open a can of worms, but when I heard this news I thought back to this same time last year and remembered how gung ho OkCupid was in opposing Mozilla’s appointment of Brendan Eich because of his personal beliefs, to the point that they ultimately decided to block all Firefox users.

I don’t really think OkCupid should block Indiana, but their lack of even a public tweet or statement in opposition to this legislation leads me back to my original conclusion: they were just riding the media train for their own benefit, not because they support the LGBT community.

If you are going to be about supporting the LGBT community, try to at least be consistent in that support and not just do it when it will make you look good in the media!

What’s happening at the Mozilla Foundation? This post contains the presentation slides from our recent Board Meeting, plus an audio interview that Matt Thompson did with me last week. It provides highlights from 2014, a brief summary of Mozilla’s 2015 plan and a progress report on what we’ve achieved over the past three months.

I’ve also written a brief summary of notes from the slides and interview below if you want a quick scan. These are also posted on the Webmaker blog.

What we did in 2014

Mozilla’s 2015 Plan

Mozilla-wide goals: grow long-term relationships that help people and promote the open web. By building product and empowering people. Webmaker+ goal: Expand participation in Webmaker through new software and on the ground clubs.

Building Mozilla Learning

By 2017, we’ve built Mozilla Learning: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla Learning to unlock the power of the web for themselves, their organizations and the world.

2015 Mozilla Foundation goals

Deepen learning networks (500 cities)

Build mass-appeal learning product (250k Monthly Active Users)

Craft ambitious Mozilla Learning and community strategy

Q1 Mozilla Foundation highlights

Major victory in US net neutrality, with Mozilla getting 330k people to sign a petition.

Just wanted to give another quick Perfherder update. Since the last time, I’ve added summary series (which is what GraphServer shows you), so we now have (in theory) the best of both worlds when it comes to Talos data: aggregate summaries of the various suites we run (tp5, tart, etc.), with the ability to dig into individual results as needed. This kind of analysis wasn’t possible with Graphserver, and I’m hopeful this will be helpful in tracking down the root causes of Talos regressions more effectively.

Let’s give an example of where this might be useful by showing how it can highlight problems. Recently we tracked a regression in the Customization Animation Tests (CART) suite from the commit in bug 1128354. Using Mishra Vikas‘s new “highlight revision mode” in Perfherder (combined with the revision hash when the regression was pushed to inbound), we can quickly zero in on the location of it:

It does indeed look like things ticked up after this commit for the CART suite, but why? By clicking on the datapoint, you can open up a subtest summary view beneath the graph:

We see here that it looks like the 3-customize-enter-css.all.TART entry ticked up a bunch. The related test 3-customize-enter-css.half.TART ticked up a bit too. The changes elsewhere look minimal. But is that a trend that holds across the data over time? We can add some of the relevant subtests to the overall graph view to get a closer look:

As is hopefully obvious, this confirms that the affected subtest continues to hold its higher value while another test just bounces around more or less in the range it was before.

Firefox 37 brings more encryption to the web through opportunistic encryption of some http:// based resources. It will be released the week of March 31st.

OE provides unauthenticated encryption over TLS for data that would otherwise be carried via clear text. This creates some confidentiality in the face of passive eavesdropping, and also provides much better integrity protection for your data than raw TCP does when dealing with random network noise. The server setup for it is trivial.

These are indeed nice bonuses for http:// - but it still isn't as nice as https://. If you can run https you should - full stop. Don't make me repeat it :) Only https protects you from active man in the middle attackers.

Install a TLS-based h2 or spdy server on a separate port. 443 is a good choice :). You can use a self-signed certificate if you like because OE is not authenticated.

Add a response header: Alt-Svc: h2=":443", or spdy/3.1 if you are using a spdy-enabled server like nginx.
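With nginx, for instance, that can be a one-line directive on the cleartext server (a hypothetical sketch; the server name and ports are placeholders):

```nginx
# cleartext server on port 80, advertising an HTTP/2 alternative on port 443
server {
    listen 80;
    server_name www.example.com;

    # tell supporting browsers to try the TLS service on port 443
    add_header Alt-Svc 'h2=":443"';
}
```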

When the browser consumes that response header it will start to verify that there is an HTTP/2 service on port 443. When a session with that port is established, it will start routing the requests it would normally send in cleartext to port 80 onto port 443 with encryption instead. There will be no delay in responsiveness because the new connection is fully established in the background before being used. If the alternative service (port 443) becomes unavailable or cannot be verified, Firefox will automatically return to using cleartext on port 80. Clients that don't speak the right protocols just ignore the header and continue to use port 80.

This mapping is saved and used in the future. It is important to understand that while the transaction is being routed to a different port, the origin of the resource hasn't changed (i.e. if the cleartext origin was http://www.example.com:80, then the origin, including the http scheme and the port 80, is unchanged even when routed to port 443 over TLS). OE is not available with HTTP/1 servers because that protocol does not carry the scheme as part of each transaction, which is a necessary ingredient for the Alt-Svc approach.

You can control some details about how long the Alt-Svc mappings last and some other details. The Internet-Draft is helpful as a reference. As the technology matures we will be tracking it; the recent HTTP working group meeting in Dallas decided this was ready to proceed to last call status in the working group.

There are learning, fundraising, and advocacy programs where there weren’t before. We’re empowering hundreds of thousands of people to teach each other the web. We’ve built a $15M/y fundraising program from scratch. And we’ve helped Mozilla find its voice again, playing a lead role in the most significant grassroots policy victory in a generation and the largest ever in telecommunications: the battle for net neutrality.

I’m grateful to Mark and Mitchell Baker for the opportunity and trust to help build something great, to my colleagues for their focus and dedication, and to all of Mozilla for fighting the good fight.

While the 10th will be my last day as an employee, I’ll be around until the end of June as a consultant, helping with the transition of my portfolio to new leadership. I’ll announce my new home closer to that time.

For now, as always, once a Mozillian always a Mozillian.

Thanks again to all of you. I’m looking forward to seeing what we accomplish next.

With that review out of the way, I had to swap a bunch of information about the plugin crash UI for e10s back into my head – in particular, some non-determinism that we have to handle. I explained that stuff (and hopefully didn’t spend too much time on it).

Then, I showed how far I’d gotten with the plugin crash UI for e10s. I was able to submit a crash report, but I found I wasn’t able to type into the comment text area.

After a while, I noticed that I couldn’t type into the comment text area on Nightly, even without my patch. And then I reproduced it in Aurora. And then in Beta. Luckily, I couldn’t reproduce it in Release – but with Beta transitioning to Release in only a few days, I didn’t have a lot of time to get a bug on file to shine some light on it.

Luckily, our brilliant Steven Michaud was on the case, and has just landed a patch to fix this. Talk about fast work!

AirMozilla video

TL;DR: I’m leaving Mozilla as a paid contributor because, as of next week, I’ll be a full-time consultant! I’ll write about that in a separate blog post.

Around four years ago, I stumbled across a project that the Mozilla Foundation was running with P2PU. It was called ‘Open Badges’ and it really piqued my interest. I was working in Higher Education at the time and finishing off my doctoral thesis. The prospect of being able to change education by offering a different approach to credentialing really intrigued me.

I started investigating further, blogging about it, and started getting more people interested in the Open Badges project. A few months later, the people behind MacArthur’s Digital Media and Learning (DML) programme asked me to be a judge for the badges-focused DML Competition. While I was in San Francisco for the judging process I met Erin Knight, then Director of Learning at Mozilla, in person. She asked if I was interested in working on her team. I jumped at the chance!

During my time at Mozilla I’ve worked on Open Badges, speaking and running keynotes at almost as many events as there are weeks in the year. I’ve helped bring a Web Literacy Map (originally ‘Standard’) into existence, and I’ve worked on various projects and with people who have changed my outlook on life. I’ve never come across a community with such a can-do attitude.

This June would have marked three years as a paid contributor to the Mozilla project. It was time to move on so as not to let the grass grow under my feet. Happily, because Mozilla is a global non-profit with a strong community that works openly, I’ll still be a volunteer contributor. And because of the wonders of the internet, I’ll still have a strong connection to the network I built up over the last few years.

I plan to write more about the things I learned and the things I did at Mozilla over the coming weeks. For now, I just want to thank all of the people I worked with over the past few years, and wish them all the best for the future. As of next week I’ll be a full-time consultant. More about that in an upcoming post!

Bug 783846 landing in Nightly means that Firefox for Android users—starting in version 39—can finally paste into contenteditable elements, which is huge news for the mobile-html5-responsive-shadow-and-or-virtual-dom-contenteditable apps crowd, developers and users alike.

"That's amazing! Can I cut text from contenteditable elements too?", you're asking yourself. The answer is um no, because we haven't fixed that yet. But if you wanna help me get that working come on over to bug 1112276 and let's party. Or write some JS and fix the bug or whatever.

Fun fact, when I told my manager I was working on this bug in my spare time he asked, "…Why?". 🎉

I’m taking a month of vacation. Today is my last working day for March, and I will be back on April 30th. While I won’t be totally incommunicado, for the most part I won’t be reading email. While I’m gone, any management-type inquiries can be passed on to Naveed Ihsannullah.

March 26, 2015

Since December 1st, 1975, by FAA mandate, no plane has been allowed to fly without a "Ground Proximity Warning System" (GPWS), or one of its successors.[1] For good reason, too: it has been estimated that 75% of the fatalities just one year prior (1974) could have been prevented using the system.[2]

In a slew of case studies, reviewers reckoned that a GPWS may have prevented crashes by giving pilots additional time to act before they smashed into the ground. Often, the GPWS's signature "Whoop, Whoop: Pull Up!" would have sounded a full fifteen seconds before any other alarms triggered.[3]

Instruments like this are indispensable to aviation because pilots operate in an environment outside of any realm where human intuition is useful. Lacking augmentation, our bodies and minds are simply not suited to the task of flying airliners.

For the same reason, thick layers of instrumentation and early warning systems are necessary for managing technical infrastructure. Like pilots, without proper tooling, system administrators often plow their vessels into the earth....

The St. Patrick's Day Massacre

Case in point: on Saint Patrick's Day we suffered two outages which likely could have been avoided via some additional alerts and a slightly modified deployment process.

The first outage was caused by the accidental removal of a variable from a config file which one of our utilities depends on. Our utilities are all managed by a dependency system called runner, and when any task fails the machine is prevented from doing work until it succeeds. This all-or-nothing behavior is correct, but should not lead to closed trees....

On our runner dashboards, the whole event looked like this (the smooth decline on the right is a fix being rolled out with ansible):

The second, and most severe, outage was caused by an insufficient wait time between retries upon failing to pull from our mercurial repositories.

There was a temporary disruption in service, and a large number of slaves failed to clone a repository. When this herd of machines began retrying the task it became the equivalent of a DDoS attack.

From the repository's point of view, the explosion looked like this:

Then, from runner's point of view, the retrying task:

In both of these cases, despite having the data (via runner logging), we missed the opportunity to catch the problem before it caused system downtime. Furthermore, especially in the first case, we could have avoided the issue even earlier by testing our updates and rolling them out gradually.

Avoiding Future Massacres

After these fires went out, I started working on a RelEng version of the Ground Proximity Warning System, to keep us from crashing in the future. Here's the plan:

In both of the above cases, we realized that things had gone amiss based on job backlog alerts. The problem is, once we have a large enough backlog to trigger those alarms, we're already hosed.

The good news is, the backlog is preceded by a spike in runner retries. Setting up better alerting here should buy us as much as an extra hour to respond to trouble.

We're already logging all task results to influxdb, but alerting on that data requires a custom nagios script. Instead of stringing that together, I opted to write runner output to syslog, where it's aggregated by papertrail.

Using papertrail, I can grep for runner retries and build alarms from the data. Below is a screenshot of our runner data in the papertrail dashboard:

Finally, when we update our slave images, the new version is not rolled out in a precise fashion. Instead, as old images die (3 hours after the new image releases), new ones are launched on the latest version. Because of this, every deploy is an all-or-nothing affair.

By the time we notice a problem, almost all of our hosts are using the bad instance and rolling back becomes a huge pain. We also do rollbacks by hand. Nein, nein, nein.

My plan here is to launch new instances with a weighted chance of picking up the latest ami. As we become more confident that things aren't breaking -- by monitoring the runner logs in papertrail/influxdb -- we can increase the percentage.

The new process will work like this:

00:00 - new AMI generated

00:01 - new slaves launch with a 12.5% chance of taking the latest version.

00:45 - new slaves launch with a 25% chance of taking the latest version.

01:30 - new slaves launch with a 50% chance of taking the latest version.

02:15 - new slaves launch with a 100% chance of taking the latest version.

Lastly, if we want to roll back, we can just lower the percentage down to zero while we figure things out. This also means that we can create sanity checks which roll back bad amis without any human intervention whatsoever.
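As a minimal sketch, assuming the ramp in the timeline above, the weighted choice could look something like this (the schedule constant and helper names are invented for illustration):

```python
import random
import time

# (minutes since release, probability of taking the new AMI), per the timeline above
ROLLOUT_SCHEDULE = [(0, 0.0), (1, 0.125), (45, 0.25), (90, 0.5), (135, 1.0)]

def new_ami_probability(minutes_since_release):
    """Return the chance a newly launched slave should use the latest AMI."""
    prob = 0.0
    for threshold, p in ROLLOUT_SCHEDULE:
        if minutes_since_release >= threshold:
            prob = p
    return prob

def pick_ami(latest_ami, previous_ami, released_at):
    """Weighted choice between the latest AMI and the last known-good one."""
    minutes = (time.time() - released_at) / 60.0
    if random.random() < new_ami_probability(minutes):
        return latest_ami
    return previous_ami  # rolling back is just forcing the probability to zero
```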

The intention being, any failure within the first 90 minutes will trigger a rollback and keep the doors open....

The original DruCall was based on SIPml5 and released in 2013 as a proof-of-concept.

It was later adapted to use JSCommunicator as the webphone implementation. JSCommunicator itself was updated by another GSoC student, Juliana Louback, in 2014.

It would be great to take DruCall further in 2015; here are some of the possibilities that are achievable in GSoC:

Updating it for Drupal 8

Support for logged-in users (currently it just makes anonymous calls, like a phone box)

Support for relaying shopping cart or other session cookie details to the call center operative who accepts the call

Help needed: could you be a co-mentor?

My background is in real-time and server-side infrastructure and I'm providing all the WebRTC SIP infrastructure that the student may need. However, for the project to have the most impact, it would also be helpful to have some input from a second mentor who knows about UI design, the Drupal way of doing things and maybe some Drupal 8 experience. Please contact me ASAP if you would be keen to participate either as a mentor or as a student. The deadline for student applications is just hours away but there is still more time for potential co-mentors to join in.

The council is excited to share with you our second group of new Mozilla Rep mentors this year.

These are Reps the council has recognized as being as good at inspiring and empowering others as they are at leading, globally and locally, in their communities.

As mentorship is core to the program, we are very grateful they have agreed to take on this new responsibility.

A crucial role in the Mozilla Reps ecosystem is that of a mentor. We strive for every Rep to become a mentor for the program to become self-sustaining and for Reps to play a central role in our ambitious goals for growing and enabling the Mozilla Community. We’ve just accepted eight new mentors, bringing the current total to 54.

Mozilla Reps recognizes that our primary goals are best reached through the support, encouragement, and empowerment of community through mentorship. Mentoring is a process for the informal transmission of knowledge, made possible through regular and supportive interaction.

We encourage mentors to be as open to learning from their mentees, as they are to teaching, for the benefit and growth of both individuals and the program as a whole.

Codec development is often an exercise in tracking down examples of "that's funny... why is it doing that?" The usual hope is that unexpected behaviors spring from a simple bug, and finding bugs is like finding free performance. Fix the bug, and things usually work better.

Often, though, hunting down the 'bug' is a frustrating exercise in finding that the code is not misbehaving at all; it's functioning exactly as designed. Then the question becomes a thornier issue of determining if the design is broken, and if so, how to fix it. If it's fixable. And the fix is worth it.


On Tuesday, the Science Lab conducted its first Ask Us Anything on the MSL Forum, organized by Noam Ross. The topic was lessons learned in running local study groups, users’ groups, hacky hours and other meetups for researchers using and writing code; many thanks go out to the seven panelists who were available to answer questions:

This was a tremendously successful event, with a sustained conversation of more than a post per minute on the topic for two full hours; a lot of interesting ideas came out of the discussion, a few of which I summarize here, followed by detailed discussion below; also, be sure to check out the full thread.

Summary

A few great ideas for study groups can be distilled from this event:

Get your event information in front of a few hundred people regularly; 10% attendance is normal.

Involve your institution in communicating about your event if possible.

Always have an exit strategy: how will you pass the baton to the next cohort of organizers?

And for online AMA-style events:

Make it a panel discussion; this creates the space for the community to ask questions, but keeps the thread lively and substantial with discussions between experienced community members.

Make it a time-limited event; this encourages active participation and re-creates some of the energy usually found only at conferences and in-person meetups.

Sticking Power

One of the first things the thread discussed was one of the most common problems for any regular community event: sustainability. How do we get people coming out and participating, month after month, and maintain momentum?

The panel quickly zeroed in on an interesting tension: will interest be inspired and sustained by highly targeted skills and tools trainings, or will keeping things as general as possible appeal to a wider audience? Highly specific material will be the most attractive to the small group of people already interested in it, while general topics might leave a potential attendee unclear on how relevant they’ll be, even if they apply, in principle, to a wider range of people.

This led to an important observation: the bigger the pool of people a study group is communicating with, the better its attendance will be. Panelists seemed to have a bit more success with the specific and clearly practically applicable; what allowed these groups to keep attendance up despite getting into the nitty gritty was developing a large audience of people aware of their activities. Numbers seemed to hover around 10% attendance if we compare the number of actual attendees to the size of mailing lists; but with a large audience (critical mass seemed to be around 200 people), there’s sure to be a cohort of people interested in whichever specific topic the group wants to take up.

But what about the early days, before a new group has gotten in front of that first 200? Fiona and Jeff made a key observation: stick to it, even if the first couple of events are just you and one or two other people. It takes time for word of mouth to spread, time for people to make up their minds that they’re comfortable dipping their toe into something like a meetup group – and, worst case, you’ve set aside some time to get some of your own work done and have a beer.

Finally on the topic of sustainability, another common concern that came up was the relationship of organizers to the host institution; post-docs and students move on after only a few short years, and without someone to pick up the torch, efforts can fizzle out. The panel agreed that it’s crucial for senior organizers to cultivate relationships with people who they can hand off to in future, but this calls out another key design question: how can we design a really smooth hand-off procedure between generations of organizers? This is a long term goal a bit beyond the concerns of groups just getting started, but I think with some savvy design, this process can be made quite smooth; more of my own ideas on this will be forthcoming on this blog very soon.

Communication

We need that pool of 200 people thinking about our event – how do we assemble them to begin with?

Organizers found, perhaps surprisingly, that their attendees were pretty quiet on Twitter, and didn’t generate much conversation there, although Twitter might be more effective as a ‘push’ platform, to let people know about events and content. More successful were blogs and mailing lists; panelists cited the familiarity of these formats to most researchers.

A novel approach that a few of the university-based groups took was to approach departments for inclusion in departmental newsletters and welcome packages for new students. Not only do these communication channels typically already exist in most institutions, they can put a group in front of a large number of potentially interested people quickly, and they lend a degree of inclusion in the establishment that helps catch people’s attention.

Novel Ideas

One thing I love about getting a bunch of people together to talk is that novel ideas always come out. One of my favorites was a whole other flavor of event that a study group could put on; Fiona Tweedie described ‘Research Speed Dating’, an event where a bunch of people set up short demos of tools they use in their research, and attendees circulate among them, exploring the tools in short, five-minute introductions to see if they might be interested in looking deeper into them at a future meetup. Topics that garner a lot of interest are chosen for deeper dives at future events, and prospective participants get to meet organizers and start developing connections in a relatively no-pressure atmosphere.

Another observation I found compelling came from Rayna Harris – graduate school often involves working on the same project for years, and the singular focus can be maddening after a while. It’s really refreshing to have a project that comes in little, month-long bites; from announcing a meetup to delivering it can easily take only a few weeks, giving a sense of delivery and completion on a much faster cadence than research naturally provides.

Meta-AMA

A number of people also asked me about the AMA format itself; I think it was a big success, and it was largely thanks to some design decisions Noam Ross made when we were setting this event up:

Have a panel of people to discuss the topic at hand. This worked very well, since even when there weren’t newcomers to ask questions, the panelists all talked amongst themselves, which led to some really deep and insightful questions and answers from old hands in the space. We had a seven-person panel, and everyone participated and seemed heard.

Put it all on one thread. I admit, I had some misgivings about having seven parallel conversations in one thread. It was about as chaotic as I imagined, but Noam was right: it was actually a good thing; it enhanced the panel’s ability to interact and made this as much a panel discussion as an AMA – call it an open panel discussion.

A remarkable thing about this event was that the same sort of skill and knowledge sharing that happens so naturally at a conference, and that I’ve been trying to produce online, came out here; by sitting a half dozen people down around a topic in a finite time window (we did two hours and it didn’t drag at all), the same sort of connections and mutual understanding emerged.

Conclusion

A number of interesting ideas, metrics and goals for study groups came out of this conversation, which we’ll be folding into our forthcoming support for setting up your own meetup – watch this space for news and opportunities in that project coming very soon, and in the meantime, make sure your local study group is on the map!

Map of Study Groups & Hacky Hours

Given what a great time and what a productive discussion everyone had on the forum on Tuesday, I’m looking forward to making these panel AMAs a regular event at the Lab; if you have a topic you’d like to suggest, post it in the Events section of the forum, or tweet it to us at @MozillaScience and @billdoesphysics. I hope you’ll join us!

In his blog post, Eric talks about people using refresh to… well, refresh the page. He means loading the same exact page over and over again. And indeed, for each reload (refresh) it means the browser creates a certain number of "unconditional and conditional HTTP requests to revalidate the page’s resources".

On the Web Compatibility side of things, I see the <meta http-equiv="refresh" …/> used quite often.

<meta http-equiv="refresh" content="0;url=http://example.com/there"/>

Note the 0. This is probably the result of sysadmins not being willing to touch the configuration of the servers, so front-end developers take the lead to "fix it" instead of using an HTTP 302 or HTTP 301 redirect. Anyway, most of the time it is used for redirecting to another domain name or URI. The Refresh HTTP header, on the other hand, I don't remember seeing that often.
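For comparison, the proper server-side redirect that such a meta tag emulates would look something like this (a sketch of a standard 301 response):

```http
HTTP/1.1 301 Moved Permanently
Location: http://example.com/there
```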

Should it be documented?

Simon is saying: "it would be good to specify it." I'm not so sure. First things first.

Browser Bugs?

On the Mozilla bug tracker, there are a certain number of bugs around refresh. This bug about inline resources is quite interesting and might indeed need to be addressed if there were documentation. The bug asks what the browser should do when the Refresh HTTP header is on an image included in a Web page (this could be another test). For now, the refresh is not done for inline resources. Then what about scripts, stylesheets, JSON files, HTML documents in iframes, etc.? In the SetupRefreshURIFromHeader code, there are Web Compatibility hacks in the source code of Firefox. We can read:

```cpp
// Also note that the seconds and URL separator can be either
// a ';' or a ','. The ',' separator should be illegal but CNN
// is using it.
```

also:

```cpp
// Note that URI should start with "url=" but we allow omission
```

and… spaces!

```cpp
// We've had at least one whitespace so tolerate the mistake
// and drop through.
// e.g. content="10 foo"
```

Good times…

On the WebKit bug tracker, I found another couple of bugs, but about meta refresh and not specifically Refresh:. I'm not sure whether it's handled by WebCore or elsewhere in Mac OS X (NSURLRequest, NSURLConnection, …). If someone knows, tell me. I haven't explored the source code yet.

So another thing you obviously want to do, in addition to causing the current
document to reload, is to cause another document to be reloaded in n seconds
in place of the current document. This is easy. The HTTP response header will
look like this:
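The example itself is missing from the quote; based on the syntax discussed above (seconds, a separator, then "url="), it presumably looked something like this reconstruction:

```http
Refresh: 5; url=http://example.com/next-document.html
```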

My concern with "Refresh" is that I do not want it to be a global
concept (a browser can only keep track of one refresh)--it looks to be
implemented this way in Netscape 2.x. I would like "Refresh" to apply
to individual objects (RE: the message below to netscape).

Refresh is no longer in the HTTP/1.1 document -- it has been deferred to
HTTP/1.2 (or later).

Should it be documented? Well, there are plenty of issues and plenty of hacks around it, and I have only scratched the surface. Maybe it would indeed be worth documenting how it works as implemented now, and how it is supposed to work where there is no interoperability. If I were silly enough, maybe I would do this. HTTP, archeology and Web compatibility issues: that seems close enough to my vices.

As the leader of our global human resource team at Mozilla, Allison will be responsible, above all, for ensuring our people have what they need to help move our mission forward. Specifically, her team will develop and execute the people-related strategies and activities that will help to foster growth, innovation, and our overall organizational effectiveness.

With over 20 years of experience, Allison joins us most recently from GoPro, where she served as Sr. Director of HR, overseeing the hiring of 900 people, opening offices in seven countries, integrating acquisitions, and building the HR processes and systems required to support a dynamic global organization. Prior to GoPro, she developed her HR expertise and track record for inspiring and supporting people at Perforce Software, Citibank, and Ingres.

Allison’s background, experience and passion for the human side of business make her an exceptional fit for Mozilla.

Once a month, web developers from across the Mozilla Project get together to design the most dangerous OSHA-compliant workstation possible. While searching for loopholes, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

Michael Kelly: dxr-cmd

A certain blog post author was first with dxr-cmd, a command-line client for making queries to DXR, Mozilla’s source code browser. The tool is installed via pip and supports any query you can make via the web interface. Output can be run through a pager utility such as less, and you can also control the syntax highlighting applied to the output.

Peter Bengtsson: Redunter

peterbe shared Redunter, a web service that helps hunt down unused CSS on your website. By embedding a small snippet of JS into your page and browsing through your website, Redunter will analyze the HTML being rendered and compare it to the CSS being served. The end result is a list of CSS rules that did not match any HTML that was delivered to the user. Redunter even works with sites that modify the DOM by watching for mutation events and tracking the altered HTML.
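I don't know Redunter's internals, but the core idea can be sketched with standard DOM APIs: collect the selectors from the served stylesheets, test each against the rendered document, and re-test as the DOM mutates so late-matching rules aren't flagged. Roughly (all names here are mine, not Redunter's):

// Sketch of the approach, not Redunter's actual code.
const unmatched = new Set<string>();

for (const sheet of Array.from(document.styleSheets)) {
  let rules: CSSRuleList;
  try {
    rules = sheet.cssRules; // throws for cross-origin stylesheets
  } catch {
    continue;
  }
  for (const rule of Array.from(rules)) {
    if (rule instanceof CSSStyleRule) unmatched.add(rule.selectorText);
  }
}

function recheck(): void {
  for (const selector of Array.from(unmatched)) {
    try {
      if (document.querySelector(selector)) unmatched.delete(selector);
    } catch {
      // Skip selectors the browser can't parse (vendor prefixes, etc.).
    }
  }
}

recheck();
// Re-test whenever the page modifies the DOM, so rules matching
// dynamically-inserted HTML are not reported as unused.
new MutationObserver(recheck).observe(document.documentElement, {
  childList: true,
  subtree: true,
  attributes: true,
});

Whatever is still in the unmatched set when you finish browsing is candidate dead CSS.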

Scott Michaud: GPU-Accelerated Audio

ScottMichaud returns with more fun stuff using the WebCL extension! Scott shared a demo of WebCL-powered audio where a virtual microphone was surrounded by individual raindrop sounds. By controlling the rate of raindrops, you can simulate a higher audio load and see the difference that pushing audio processing to the GPU can make.

Les Orchard: Parsec Patrol

Senior Space Cadet lorchard shared Parsec Patrol, a vector-based space game for the web. While there’s no full game made yet, there is a webpage with several demos showing collision detection, spaceship navigation, missiles, point-defense systems, and more!

Matthew Claypotch: a9r

Have you ever seen an abbreviation like l10n or i18n and had no idea what it meant? Have no fear, Uncle Potch is here with a9r, the answer to the abbreviation problem! Simply install the command and enter an abbreviation to receive a list of all words in the SOWPODS word list that match. Got a word that you need to abbreviate? Not only can a9r decipher abbreviations, it can create them!
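The numeronym scheme itself is mechanical: first letter, count of letters in between, last letter. A toy version of both directions (my own sketch, not a9r's code) fits in a few lines of TypeScript:

// "l10n" matches any word starting with "l", ending with "n",
// with exactly 10 letters in between.
function expand(abbrev: string, words: string[]): string[] {
  const m = abbrev.toLowerCase().match(/^([a-z])(\d+)([a-z])$/);
  if (!m) return [];
  const [, first, between, last] = m;
  const length = Number(between) + 2;
  return words.filter(
    (w) => w.length === length && w[0] === first && w[w.length - 1] === last
  );
}

function abbreviate(word: string): string {
  // "internationalization" -> "i18n"
  return word.length > 3
    ? `${word[0]}${word.length - 2}${word[word.length - 1]}`
    : word;
}

// expand("a11y", sowpods) would include "accessibility";
// abbreviate("localization") -> "l10n"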

Matthew Claypotch: socketpeer

In a slightly-less-whimsical vein, potch also shared socketpeer, a simple JavaScript library for 1:1 messaging via WebRTC Data Channels and WebSockets. Extracted from the Tanx demo that Mozilla showed at GDC 2015, socketpeer contains both a server API for establishing peer connections between users and a client API to handle the client-side communication. Potch also shared a demo of a peer-to-peer chat application using socketpeer.
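I haven't verified the current API surface, but going by the project README at the time, the client side is pleasantly small; something along these lines, where the option and event names (pairCode, url, connect, data) are assumptions taken from the README:

// Sketch based on socketpeer's README; treat the names as assumptions.
// socketpeer ships without TypeScript typings, so it's required loosely.
declare function require(name: string): any;
const SocketPeer = require("socketpeer");

const peer = new SocketPeer({
  pairCode: "yolo", // both peers supply the same code to get matched up
  url: "http://localhost:3000/socketpeer/", // the signalling/fallback server
});

peer.on("connect", () => {
  // Starts over WebSockets, upgrading to a WebRTC Data Channel when possible.
  peer.send("hello");
});

peer.on("data", (data: string) => {
  console.log("received:", data);
});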

Chris Van Wiemeersch: PhantomHAR

Next up was cvan, who shared PhantomHAR, a PhantomJS and SlimerJS script that generates an HTTP Archive (or HAR) for a URL. A HAR is an archive of data about HTTP transactions that can be used to export detailed performance data for tools to consume and analyze, and PhantomHAR allows you to easily generate the HAR for use by these tools.
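For the curious, a HAR is plain JSON defined by the HAR 1.2 spec. Heavily abridged, the shape looks like this (sketched as TypeScript types; see the spec for the full format):

interface Har {
  log: {
    version: string; // "1.2"
    creator: { name: string; version: string }; // e.g. the generating tool
    pages: { id: string; title: string; startedDateTime: string }[];
    entries: HarEntry[];
  };
}

interface HarEntry {
  pageref: string; // which page this request belongs to
  startedDateTime: string; // ISO 8601
  time: number; // total elapsed milliseconds
  request: {
    method: string;
    url: string;
    headers: { name: string; value: string }[];
  };
  response: {
    status: number;
    content: { size: number; mimeType: string };
  };
  timings: { send: number; wait: number; receive: number }; // per-phase ms
}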

Chris Van Wiemeersch: fetch-manifest

Next, cvan shared fetch-manifest, a small library that takes a URL, locates the W3C web app manifest for the page, fixes any relative URLs in the manifest, and returns it. This is useful for things like app marketplaces that want to allow people to submit web apps by submitting a single URL to the app they want to submit.
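The steps are simple enough to sketch. This is the general idea, not cvan's actual implementation: fetch the page, find the manifest link, then resolve member URLs against the manifest's own URL.

// Illustrative sketch; error handling and CORS concerns omitted.
async function fetchManifest(pageUrl: string): Promise<any> {
  const html = await (await fetch(pageUrl)).text();
  const doc = new DOMParser().parseFromString(html, "text/html");
  const link = doc.querySelector('link[rel="manifest"]');
  if (!link) throw new Error("no web app manifest found");

  // The manifest href may itself be relative to the page.
  const manifestUrl = new URL(link.getAttribute("href")!, pageUrl).href;
  const manifest = await (await fetch(manifestUrl)).json();

  // Member URLs (start_url, icon srcs, ...) resolve against the manifest URL.
  if (manifest.start_url) {
    manifest.start_url = new URL(manifest.start_url, manifestUrl).href;
  }
  for (const icon of manifest.icons || []) {
    if (icon.src) icon.src = new URL(icon.src, manifestUrl).href;
  }
  return manifest;
}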

Bill Walker: robot-threejs

Last up was bwalker, who shared robot-threejs, an experimental steampunk robot game powered by three.js and WebGL. The game currently allows you to fly around a 3D environment that has 3D positional audio emitting from an incredibly mysterious cube. CAN YOU SOLVE THE CUBE MYSTERY?

This month we think we’ve really got something special with our Seki Edge keyboard-and-mouse combo. Order now and get a free box of Band-aids at no additional cost!

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

Earlier this year I wrote about how 2015 will be a big year for Mozilla to scale and build better personalized experiences as we help move the ad industry forward. Today, I’m excited to announce two new additions to our Content Services team as we continue our mission to create innovative content offerings while always upholding Mozilla’s commitment to user privacy.

Accomplished interactive advertising expert Aaron Lasilla has joined Mozilla and our Content Services team as head of content partnerships. Aaron comes to us from EA Games, where he served as the global director of brand solutions and co-founded the in-game advertising group. Aaron was instrumental in negotiating and securing a number of strategic partnerships for EA’s publishing division as he built the group into a new business and revenue channel for EA, including the largest EA Online partnership ever (within Pogo.com’s casual games offering, in 2003). During his tenure, EA was established as the number-one publisher of integrated advertising placements and partnerships in and around games. Aaron previously managed Microsoft’s Premium Games Advertising offering and also worked in sales and sponsorship capacities for Double Fusion, Clear Channel Entertainment and Kemper Sports Marketing.

As we continue to develop and refine our new offerings like Firefox Tiles, Aaron will be focusing on engagement and value exchange for Mozilla’s offerings while maintaining the same quality and standards of user experience that Mozilla is known for.

In addition, I’m excited to formally announce that long-time Mozillian Patrick Finch joined our group late last year as director of marketing. Patrick has been with Mozilla for over seven years, based out of Sweden, and has worked in a number of strategic roles on Mozilla’s desktop and mobile projects over that time. Prior to joining Mozilla, Patrick spent over ten years at Sun Microsystems in a variety of capacities, including working on numerous open source projects.

As we continue the rollout of Firefox Tiles and bring on new partners, you’ll probably be seeing more of Aaron and Patrick on this blog. If you’re interested in partnering with us in our mission or if you’d just like to drop our team a line, feel free to reach out to us at contentservices@mozilla.com.

The connection experience is generally more straightforward (especially after
connecting to a device the first time) than with USB and also more convenient to
use since you're no longer tied down by a cable.

Security

A large portion of this project has gone towards making the debugging
connection secure, so that you can use it safely on a shared network, such as
in an office or coffee shop.

We use TLS for encryption and authentication. The computer and device both
create self-signed certificates. When you connect, a QR code is scanned to
verify that the certificates can be trusted. During the connection process, you
can choose to remember this information and connect immediately in the future if
desired.
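The interesting part is establishing trust without a certificate authority: the QR code is the out-of-band channel that lets you pin the self-signed certificate. Conceptually it reduces to a fingerprint comparison; here is a sketch of the idea in TypeScript, where the payload shape is hypothetical rather than WebIDE's actual wire format:

import { createHash } from "crypto";

// Hypothetical QR payload: enough to find the device and pin its certificate.
interface QrPayload {
  host: string;
  port: number;
  certFingerprint: string; // hex SHA-256 of the device's self-signed cert (DER)
}

function isTrustedPeer(presentedCertDer: Uint8Array, qr: QrPayload): boolean {
  // Hash the certificate actually presented during the TLS handshake...
  const actual = createHash("sha256").update(presentedCertDer).digest("hex");
  // ...and accept only if it matches the fingerprint carried by the QR code.
  const expected = qr.certFingerprint.toLowerCase().replace(/:/g, "");
  return actual === expected;
}

In those terms, "Scan and Remember" simply persists the verified fingerprint so future connections can skip the scan.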

A connection prompt will appear on the device; choose "Scan" or "Scan and Remember".

Scan the QR code displayed in WebIDE.

After scanning the QR code, the QR display should disappear and the "device"
icon in WebIDE will turn blue for "connected".

You can then access all of your remote apps and browser tabs just as you can
today over USB.

Technical Aside

This process does not use ADB at all on the device, so if you find ADB
inconvenient while debugging or would rather not install ADB at all, then
WiFi debugging is the way to go.

By skipping ADB, we don't have to worry about driver confusion, especially on
Windows and Linux.

Supported Devices

This feature should be supported on any Firefox OS device. So far, I've tested
it on the Flame and Nexus 4.

Known Issues

The QR code scanner can be a bit frustrating at the moment, as real devices
appear to capture a very low resolution picture. Bug 1145772 aims
to improve this soon. You should be able to scan with the Flame by trying a few
different orientations. I would suggest using "Scan and Remember", so that
scanning is only needed for the first connection.

If you find other issues while testing, please file bugs or contact me
on IRC.

Acknowledgments

This was quite a complex project, and many people provided advice and reviews
while I was working on this feature, including (in semi-random order):

Brian Warner

Trevor Perrin

David Keeler

Honza Bambas

Patrick McManus

Jason Duell

Panos Astithas

Jan Keromnes

Alexandre Poirot

Paul Rouget

Paul Theriault

I am probably forgetting others as well, so I apologize if you were omitted.

What's Next

I'd like to add this ability for Firefox for Android next. Thankfully, most of
the work done here can be reused there.

Planet Mozilla

Collected here are the most recent blog posts from all over the Mozilla community.
The content here is unfiltered and uncensored, and represents the views of individual community members.
Individual posts are owned by their authors -- see original source for licensing information.