Tag Archives: testdev

A few weeks ago, I posted a call for people to reach out and commit to participating for 8+ weeks. There were two projects, one of which was Developer Experience. Since then we have had some great interest; there are 5 awesome contributors participating (sorted by IRC nicks).

gma_fav – I met gma_fav on IRC when she heard about the program. She has a lot of energy, seems very detail-oriented, asks good questions, and brings fresh ideas to the team! She is a graduate of the Ascend project and is looking to continue her growth in development and open source. Her primary focus will be on the interface to the try server (think the try chooser page, the extension, and taking other experiments further).

kaustabh93 – I met Kaustabh on IRC about a year ago, and since then he has been a consistent friend and hacker. He is a university student, and in fact I owe him credit for large portions of alert manager. While working on this team, he will be focused on making run-by-dir a reality. There are two parts: getting the tests to run green, and reducing the overhead of the harness.

sehgalvibhor – I met Vibhor on IRC about 2 weeks ago. He was excited about the possibility of working on this project and jumped right in. Like Kaustabh, he is a student, just finishing up exams this week. His primary focus this summer will be working in a similar role to Stanley, making our test harnesses behave consistently and be more useful.

stanley – When this program was announced, Stanley was the first person to ping me on IRC. I have found him to be very organized, a pleasure to chat with, and someone who understands code quite well. Coding and open source are both new to Stanley, and we have the opportunity to give him a great view of both. Stanley will be focusing on making the commands we have for running tests via mach easier to use and more unified between harnesses.

Personally, I am looking forward to seeing folks' ambition translate into great solutions, learning more about each person, and sharing with Mozilla as a whole the great work they are doing.

No more 200-minute wait times; in fact, we are probably running too many chunks. A lot of heavy lifting took place, much of it in releng from Armen and Ben, along with much work from Gavin and RyanVM, who pushed hard and proposed great ideas to see this through.

What is next?

There are a few more test cases to fix, and we need to get all these changes onto Aurora. We have more (lower-priority) work we want to do on running the tests differently to help isolate issues where one test affects another.

In the next few weeks I want to put together a list of projects and bugs that we can work on to make our tests more useful and reliable. Stay tuned!

At Mozilla we have made unit testing on Android devices as important as desktop testing. Earlier today I was asked how we measure this and what our definition of success is. The obvious answer is no failures except for code that breaks a test, but in reality we allow for random failures and infrastructure failures. Our current goal is 5%.

So what are these acceptable failures, and what does 5% really mean? Failures can happen when we have tests which fail randomly: usually poorly written tests, or tests which were written a long time ago and hacked to work in today's environment. This doesn't mean any test that fails is a problem; it could be a previous test that changes a Firefox preference by accident. For Android testing, this currently means the browser failed to launch and load the test webpage properly, or it crashed in the middle of the test. Other failures are the device losing connectivity, our host machine having hiccups, the network going down, sdcard failures, and many other problems. In our current state of testing this mostly falls into the category of losing connectivity to the device. Infrastructure problems are indicated as Red or Purple, and test-related problems as Orange.

I took a look at the last 10 runs on mozilla-central (where we build Firefox nightlies from) and built this little graph:

Firefox Android Failures

Here you can see that our tests are causing 6.67% of the failures and 12.33% of the time we can expect a failure on Android.
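For the curious, both rates are simple counts over the individual jobs in those runs. Here is a quick sketch of the arithmetic; the job counts below are hypothetical, chosen only to reproduce the percentages above, and this assumes the rates are measured per job:

```python
# Hypothetical job counts for illustration.
# orange = test failures, red/purple = infrastructure failures.
green, orange, red_purple = 263, 20, 17
total = green + orange + red_purple

test_failure_rate = 100.0 * orange / total                 # jobs failing due to tests
any_failure_rate = 100.0 * (orange + red_purple) / total   # chance of any failure

print(round(test_failure_rate, 2))  # 6.67
print(round(any_failure_rate, 2))   # 12.33
```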

We have another branch called mozilla-inbound (we merge this into mozilla-central regularly) where most of the latest changes get checked in. I did the same thing here:

mozilla-inbound Android Failures

Here you can see that our tests are causing 7.77% of the failures and 9.89% of the time we can expect a failure on Android.

This is only a small sample of the tests, but it should give you a good idea of where we are.

Last week I created a Python webserver as a patch for make talos-remote. This ended up being fraught with performance issues, so I started looking into them. I based it on the profileserver.py that we have in mozilla-central, and while it worked, I found my tp4 tests were timing out.

It turns out we were using a synchronous webserver. This is easy to fix with a ThreadingMixIn, just like the chromium perf.py script:

Now the test was finishing, but very, very slowly (20+ minutes vs. <3 minutes). After hitting CTRL+C on the webserver, I saw a lot of requests hanging on log_message and gethostbyaddr() calls. So I ended up overriding those calls and things worked:

class MozRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    # I found on my local network that calls to this were timing out
    def address_string(self):
        return "a.b.c.d"

    # This produces a LOT of noise
    def log_message(self, format, *args):
        pass

Over the last year and a half I have been editing the talos harness for various bug fixes, but just recently I have needed to dive in and add new tests and pagesets to talos for Firefox and Fennec. Here are some of the things I didn't realize, or had inconveniently forgotten, about what goes on behind the scenes.

tp4 is really 4 tests: tp4, tp4_nochrome, tp4_shutdown, tp4_shutdown_nochrome. This is because in the .config file we have “shutdown: true”, which adds _shutdown to the test name, and running with --noChrome adds _nochrome to the test name. The same applies to any test that is run with the shutdown=true and nochrome options.
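The suffix rule can be sketched as a tiny helper (hypothetical code for illustration, not the actual talos implementation):

```python
def talos_test_name(base, shutdown=False, nochrome=False):
    """Build the reported test name from the base name and config options."""
    name = base
    if shutdown:
        name += "_shutdown"
    if nochrome:
        name += "_nochrome"
    return name

# The four names tp4 reports, with and without shutdown/--noChrome:
names = [talos_test_name("tp4", s, n) for s in (False, True) for n in (False, True)]
print(names)  # ['tp4', 'tp4_nochrome', 'tp4_shutdown', 'tp4_shutdown_nochrome']
```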

When adding new tests, we need to add the test information to the graph server (staging and production). This is done in the hg.mozilla.org/graphs repository by adding to data.sql.

When adding new pagesets (as I did for tp4 mobile), we need to provide a .zip of the pages and the pageloader manifest to release engineering, as well as modify the .config file in talos to point to the new manifest file. See bug 648307.

Also, when adding new pages, we need to add SQL for each page we load. This is also in the graphs repository, in pages_table.sql.

When editing the graph server, you need to file a bug with IT to update the live servers and attach an SQL file (not a diff). Some examples: bug 649774 and bug 650879.

After you have the graph servers updated, a green staging run, and review done, you can check in the patch for talos.

For new tests, you also have to create a buildbot config patch to add the test name to the list of tests that are run for talos.

The last step is to file a release engineering bug to update talos on the production servers. This is done by creating a .zip of talos, posting it on an ftp site somewhere, and providing a link to it in the bug.

One last thing: make sure the bug to update talos has an owner and is looked at; otherwise it can sit for weeks with no action!

This is my experience from getting ts_paint, tpaint, and tp4m (mobile only) tests added to Talos over the last couple months.

Yesterday I noticed our Mochitest-1 test was orange (since Nov 17th…still need to train others to watch the results), so I spent a few minutes looking into it, and it turned out to be a regression (which was promptly fixed by the mobile guys!). I know automation has uncovered errors in the past, but this is the first time we have had automation that was truly green for a week (we just turned it on) and then turned orange as the result of an unfriendly patch.

Last week I posted about mochikit.jar and what was done to enable testing on remote systems (specifically Android) for mochitest chrome-style tests. This post will discuss the work done to Talos for remote testing on Android. I have been working with bear in release engineering a lot to flush out any bugs. Now we are really close to turning this stuff on for the public-facing tinderbox builds.

Talos + Remote Testing:

Last year, I added all the remote-testing bits to Talos for Windows Mobile. Luckily, this time around I just had to clean up a few odds and ends (adding support for IPC). Talos is set up to access a webserver and communicate with a SUTAgent (when you set up your .config file properly). This means you can have a static webserver on your desktop or the network, and run talos from a host machine against any SUTAgent.

Talos + Android:

This is a harder challenge than remote testing. Android does not support redirecting to stdout, which talos requires. For talos and all related tests (fennecmark, pageloader), we need to write to a log file from the test itself.

Run it for yourself:

Those are the core changes that needed to be made. Here are some instructions for running it on your own: