<h1>Dave Hunt, Automationeer</h1>
<h2><a href="http://davehunt.co.uk/2018/09/14/europython-2018">EuroPython 2018</a> (2018-09-14)</h2>
<p>In July I took the train up to beautiful Edinburgh to attend the
<a href="https://ep2018.europython.eu/en/">EuroPython 2018</a> conference. Despite using Python
professionally for almost 8 years, this was my first experience of a Python conference.
The schedule was <strong>packed</strong>, and it was challenging deciding what talks to attend, but
I had a great time and enjoyed the strong community feeling of the event. We even went
for a <a href="https://www.strava.com/activities/1729797362">group run</a> around Holyrood Park
and Arthur’s Seat, which I hope is included in the schedule for future years.</p>
<p>Now that the videos of the talks have all been published, I wanted to share my personal
highlights, and list the talks I saw during and since the conference. I still haven’t
caught up on everything I wanted to see, so I’ve also included my watch list. First,
here’s the <a href="https://www.youtube.com/playlist?list=PL8uoeex94UhFrNUV2m5MigREebUms39U5">full playlist of talks from the conference</a>.</p>
<p>Here are my top picks from the talks I either attended or have watched since:</p>
<ul>
<li><a href="https://youtu.be/uSp0-TkGx3c">Stephane Wirtel - What’s new in Python 3.7</a></li>
<li><a href="https://youtu.be/8r4wNvbAYUQ">Hynek Schlawack - How to Write Deployment friendly Applications</a></li>
<li><a href="https://youtu.be/scum5a_mqBc">Nicole Harris - PyPI: Past, Present and Future</a></li>
<li><a href="https://youtu.be/3pokUifUyWM">Raphael Pierzina - The Challenges of Maintaining a Popular Open Source Project</a></li>
<li><a href="https://youtu.be/3zWHdyrGlDc">Sarah Bird - The Web is Terrifying! Using the PyData stack to spy on the spies</a></li>
<li><a href="https://youtu.be/tEOGJ_h0Lx0">Doug Hellmann - reno - A New Way to Manage Release Notes</a></li>
</ul>
<p>I also wanted to highlight the following lightning talks:</p>
<ul>
<li><a href="https://youtu.be/iRPPHg7sGrs?t=21m55s">Leszek Jakubwski - The Ops Mindset</a></li>
<li><a href="https://youtu.be/iRPPHg7sGrs?t=6s">Hynek Schlawack - Imagine a Better World</a></li>
</ul>
<p>Here is a list of the other talks I either attended at the conference or have watched
since:</p>
<ul>
<li><a href="https://youtu.be/xOyJiN3yGfU">David Beazley - Die Threads</a></li>
<li><a href="https://youtu.be/ReXxO_azV-w">Yury Selivanov - asyncio in Python 3.7 and 3.8</a></li>
<li><a href="https://youtu.be/1AqW9-E6VCM">Łukasz Kąkol - Pythonic code vs performance</a></li>
<li><a href="https://youtu.be/FCKrfWXBPE4">Romain Dorgueil - Using Bonobo, Airflow and Grafana to visualize your business</a></li>
<li><a href="https://youtu.be/u2kKxmb9BWs">Almar Klein - Let’s embrace WebAssembly!</a></li>
<li><a href="https://youtu.be/JN6RAaufFzU">Pascal van Kooten - When to use Machine Learning: Tips, Tricks and Warnings</a></li>
<li><a href="https://youtu.be/SFqna5ilqig">Bernat Gabor - Standardize Testing in Python</a></li>
<li><a href="https://youtu.be/qx7cumg6EEE">James Saryerwinnie - Debugging Your Code with Data Visualization</a></li>
<li><a href="https://youtu.be/vIkpCOY-yGs">Mark Smith - More Than You Ever Wanted To Know About Python Functions</a></li>
<li><a href="https://youtu.be/IZmlkoOO8Mg">Neil Gall - System testing with Pytest, Docker, and Flask</a></li>
<li><a href="https://youtu.be/dZ4ETY9Mzws">Sven Hendrik Haase - Rust and Python - Oxidize Your Snake</a></li>
<li><a href="https://youtu.be/h5tmNkyNAKs">Becky Smith - Python 2 is dead! Drag your old code into the modern age</a></li>
<li><a href="https://youtu.be/QB59ZibEOZ0">Anastasiia Tymoshchuk - How to develop your project from an idea to architecture design</a></li>
<li><a href="https://youtu.be/9s0AUlyIbUU">Marco Buttu - White Mars living far away from any form of life</a></li>
<li><a href="https://youtu.be/9QXACKrJ-1k">Mika Boström, Alexander Schmolck - Marge: A bot for better Git’ing</a></li>
<li><a href="https://youtu.be/glvXlcpmVQo">Dougal Matthews - 10 years of EuroPython and the Python community</a></li>
<li><a href="https://youtu.be/74AsJ7RET20">Ines Montani - How to Ignore Most Startup Advice and Build a Decent Software Business</a></li>
<li><a href="https://youtu.be/UXSr1OL5JKo">Ian Ozsvald - Citizen Science with Python</a></li>
<li><a href="https://youtu.be/LxoKPGvMXf0">Alec MacQueen - Python and GraphQL</a></li>
<li><a href="https://youtu.be/apcNyycpidw">Alexandre Figura - Integration Tests with Super Powers</a></li>
<li><a href="https://youtu.be/hgPH19nBlrk">Lightning talks on Wednesday, July 25</a></li>
<li><a href="https://youtu.be/z8rYW1UiHNc">Lightning talks on Thursday, July 26</a></li>
<li><a href="https://youtu.be/iRPPHg7sGrs">Lightning talks on Friday, July 27</a></li>
</ul>
<p>Here’s my list of talks I have yet to watch:</p>
<ul>
<li><a href="https://youtu.be/OHaxQZPKURg">Victor Stinner - Python 3: ten years later</a></li>
<li><a href="https://youtu.be/6L3ZVLtSeo8">Nina Zakharenko - Code Review Skills for Pythonistas</a></li>
<li><a href="https://youtu.be/eJsu-7hFXyA">Guillaume Gelin - PEP 557* versus the world</a></li>
<li><a href="https://youtu.be/oo0Nq44d1yQ">Ed Singleton - Autism in development</a></li>
<li><a href="https://youtu.be/zM3cMTcmmk0">Hrafn Eiriksson - Asyncio in production</a></li>
<li><a href="https://youtu.be/DK4SwlyWm-k">Emmanuel Leblond - Trio: A pythonic way to do async programming</a></li>
<li><a href="https://youtu.be/ih2reTLOzWI">Elisabetta Bergamini - Bad hotel again? Find your perfect match!</a></li>
<li><a href="https://youtu.be/7qLNrcYkQiY">Steve Barnes - Why develop a CLI Command Line Interface first?</a></li>
<li><a href="https://youtu.be/1lJDZx6f6tY">Lynn Root - asyncio in Practice: We Did It Wrong</a></li>
<li><a href="https://youtu.be/btqFjNDdTlE">Alex Grönholm - Automating testing and deployment with Github and Travis</a></li>
</ul>
<p>Were you at EuroPython 2018? Let me know if you have any favourite talks that aren’t
already on my list! I’m keen to attend again next year, if my travel schedule allows
for it.</p>
<h2><a href="http://davehunt.co.uk/2018/06/29/python-unit-tests-now-running-with-python-3-at-mozilla">Python unit tests now running with Python 3 at Mozilla</a> (2018-06-29)</h2>
<p>I’m excited to announce that you can now run the Python unit tests for packages in the Firefox source code against Python 3! This will allow us to gradually build support for Python 3, whilst ensuring that we don’t later regress. Any tests not currently passing in Python 3 are skipped with the condition <code class="highlighter-rouge">skip-if = python == 3</code> in the manifest files, so if you’d like to see how they fail (and maybe provide a patch to fix some!) then you will need to remove that condition locally. Once you’ve done this, use the <code class="highlighter-rouge">mach python-test</code> command with the new optional argument <code class="highlighter-rouge">--python</code>. This will accept a version number of Python or a path to the binary. You will need to make sure you have the appropriate version of Python installed.</p>
<p>Once you’re ready to enable tests to run in <a href="https://docs.taskcluster.net/docs">TaskCluster</a>, you can simply update the <code class="highlighter-rouge">python-version</code> value in <code class="highlighter-rouge">taskcluster/ci/source-test/python.yml</code> to include the major version numbers of Python to execute the tests against. At the current time our build machines have Python 2.7 and Python 3.5 available.</p>
<p>To summarise:</p>
<ol>
<li>Remove <code class="highlighter-rouge">skip-if = python == 3</code> from manifest files. These are typically named <code class="highlighter-rouge">manifest.ini</code> or <code class="highlighter-rouge">python.ini</code>, and are usually found in the <code class="highlighter-rouge">tests</code> directory for the package.</li>
<li>Run <code class="highlighter-rouge">mach python-test --python=3</code> with your target path or subsuite.</li>
<li>Fix the package(s) to support Python 3 and ensure the tests are passing.</li>
<li>Add Python 3 to the <code class="highlighter-rouge">python-version</code> for the appropriate job in <code class="highlighter-rouge">taskcluster/ci/source-test/python.yml</code>.</li>
</ol>
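<p>To make step 1 concrete, here’s a small, hypothetical helper (not part of <code class="highlighter-rouge">mach</code>) that lists which tests a manifest currently skips under Python 3. The real manifests are parsed by manifestparser, but for simple files the standard library’s INI parser is close enough for illustration:</p>

```python
from configparser import ConfigParser


def python3_skips(manifest_path):
    """Return the manifest sections that carry a
    `skip-if = python == 3` condition."""
    parser = ConfigParser()
    parser.read(manifest_path)
    return [section for section in parser.sections()
            if parser.get(section, 'skip-if', fallback='')
            .strip().startswith('python == 3')]
```

<p>Running this over a <code class="highlighter-rouge">python.ini</code> before and after your changes is a quick way to confirm which tests you’ve re-enabled.</p>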
<p>At the time of writing, <a href="https://pythonclock.org/">pythonclock.org</a> tells me that we have just over 18 months before Python 2.7 will be retired. What this actually means is still somewhat unknown, but it would be a good idea to check if your code is compatible with Python 3, and if it’s not, to do something about it. The Firefox build system at Mozilla uses Python, and it’s still some way from supporting Python 3. We have a lot of code, it’s going to be a long journey, and we could do with a bit of help!</p>
<p>Whilst we do plan to support Python 3 in the Firefox build system (see <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1388447">bug 1388447</a>), my initial concern and focus has been the Python packages we distribute on the <a href="https://pypi.org/">Python Package Index (PyPI)</a>. These are available to use outside of Mozilla’s build system, and therefore a lack of Python 3 support will prevent any users from adopting Python 3 in their projects. One such example is <a href="https://github.com/mozilla/treeherder">Treeherder</a>, which uses <a href="https://pypi.org/project/mozlog/">mozlog</a> for parsing log files. Treeherder is a <a href="https://www.djangoproject.com/">django</a> project, which recently dropped support for Python 2 (unless you’re using their long term support release, which will support Python 2 until 2020).</p>
<p>Updating these packages to support Python 3 isn’t necessarily that hard to do, especially with tools such as <a href="https://pythonhosted.org/six/">six</a>, which provides utilities for handling the differences between Python 2 and Python 3. The problem has been that we had no way to run the tests against Python 3 in TaskCluster. This is no longer the case, and Python unit tests can now be run against Python 3!</p>
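<p>For illustration, here’s a stdlib-only sketch of the kind of shim six provides (six’s real helpers here are <code class="highlighter-rouge">six.binary_type</code> and <code class="highlighter-rouge">six.text_type</code>):</p>

```python
import sys

PY2 = sys.version_info[0] == 2

# Equivalent to six.binary_type / six.text_type
binary_type = bytes
text_type = type(u'') if PY2 else str


def ensure_text(value, encoding='utf-8'):
    """Coerce bytes to text so callers see the same type on 2 and 3."""
    if isinstance(value, binary_type):
        return value.decode(encoding)
    if isinstance(value, text_type):
        return value
    raise TypeError('expected bytes or text, got %r' % type(value))
```

<p>With recent versions of six itself, this whole block collapses to a call to <code class="highlighter-rouge">six.ensure_text(value)</code>.</p>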
<p>So far I have enabled Python 3 jobs for our <a href="https://firefox-source-docs.mozilla.org/mozbase/index.html">mozbase</a> unit tests (this includes the aforementioned mozlog), and our <a href="https://pypi.org/project/mozterm/">mozterm</a> unit tests. There are still many tests in mozbase that are not passing in Python 3, so as mentioned above, these have been conditionally skipped in the manifest files. This will allow us to enable these tests as support is added, and this condition could even be used in the future if we have a package that doesn’t have full compatibility with Python 2.</p>
<p>Now that running the tests against multiple versions of Python is relatively easy, it’s a great time for me to encourage our community to help us with supporting Python 3. If you’d like to help, we have a <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1093212">tracking bug</a> for all of our mozbase packages. Find a package you’d like to work on, read the comments to understand what you need and how to get set up, and let me know if you get stuck!</p>
<h2><a href="http://davehunt.co.uk/2018/03/20/prototype-multi-device-firefox-tests">Prototype multi-device Firefox tests</a> (2018-03-20)</h2>
<p>With <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Tech/Firefox_Accounts">Firefox Accounts</a>, you can access your tabs, history, and bookmarks from any device. You can even send tabs from one device to another, which is great when I find myself on a page that’s not optimised for mobile, or if I get distracted at the weekend and find something I want to pick up when I get to work on Monday morning. While these features are awesome, I’ve had issues when the sync isn’t triggered, or things don’t go as expected. Some of these issues are known (and are being addressed), but currently it’s too easy for regressions to be introduced.</p>
<p>Let’s take the simple use case of saving a bookmark using Firefox on your phone, and later opening the bookmark on Firefox on desktop. In this scenario we have the mobile client, the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Tech/Firefox_Accounts">Firefox Accounts</a> service, the <a href="https://wiki.mozilla.org/CloudServices/Sync">Sync</a> service, and the desktop client. We could be using <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_for_Android">Firefox on Android</a> or <a href="https://github.com/mozilla-mobile/firefox-ios">iOS</a> as our mobile client, and we could be using Firefox on macOS, Linux, or Windows. Other scenarios could involve multiple different mobile clients, such as syncing between a tablet and phone. There’s a lot of configuration necessary, and many variations. Whilst each of the components has its own automated tests, there’s currently no automation to take care of the basic end-to-end scenarios.</p>
<p>Part of the issue is that there are many individual components, and many ways they can be combined. Integration testing is currently carried out manually, which is time-consuming, and doesn’t allow us to cover as many scenarios and device combinations as we’d like. Introducing automation to cover the basic scenarios will allow the testers more time to focus on exploration and more edge cases.</p>
<p>Over the last few weeks I’ve built a proof-of-concept test harness to automate end-to-end testing of multiple clients and server components. Initially I have focused on the previously mentioned scenario of syncing a bookmark from mobile to desktop, and limited to Firefox on iOS and macOS for now. Rather than create something entirely from scratch, I’ve brought together existing solutions for this initial prototype. This allowed me to pull something together relatively quickly, but does also bring some limitations and questions along.</p>
<h3 id="the-scenario">The Scenario</h3>
<p>Let’s remind ourselves of our initial scenario:</p>
<ul>
<li>Firefox on iOS:
<ul>
<li>Open a website</li>
<li>Save a bookmark</li>
<li>Sign into Firefox Accounts</li>
<li>Perform initial sync</li>
</ul>
</li>
<li>Firefox on macOS:
<ul>
<li>Sign into Firefox Accounts</li>
<li>Perform initial sync</li>
<li>Verify new bookmark exists</li>
</ul>
</li>
</ul>
<p>Finally, we’ll need to present these results in a way the user can interpret, and can investigate in the event of a failure. For this we’ll need a framework that can pull everything together, which is where I started.</p>
<h3 id="the-test-framework">The Test Framework</h3>
<p>My language of choice is Python, and my preferred test framework is <a href="https://docs.pytest.org/">pytest</a>. Being able to use something that I’m already familiar with for the framework allowed me to focus on the more challenging areas. By using pytest I’m also able to take advantage of several plugins I have built and maintain for other projects. Finally, if we decide in the future to land any part of this into mozilla-central, it shouldn’t require too many changes as Python and pytest are already in use there.</p>
<h3 id="generating-firefox-accounts">Generating Firefox Accounts</h3>
<p>As a prerequisite, we need credentials for Firefox Accounts. Fortunately, we already have the <a href="https://github.com/mozilla/PyFxA">PyFxA</a> package. This allowed me to create a pytest fixture to create accounts as needed and destroy them when they’re done with. The following is a slightly simplified version of the fixture, which creates an account, verifies it, and ultimately destroys it once it’s no longer needed:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">fx_account</span><span class="p">():</span>
<span class="n">account</span> <span class="o">=</span> <span class="n">TestEmailAccount</span><span class="p">()</span>
<span class="n">client</span> <span class="o">=</span> <span class="n">Client</span><span class="p">(</span><span class="s">'https://api-accounts.stage.mozaws.net/v1'</span><span class="p">)</span>
<span class="n">password</span> <span class="o">=</span> <span class="s">''</span><span class="o">.</span><span class="n">join</span><span class="p">([</span><span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">string</span><span class="o">.</span><span class="n">ascii_letters</span><span class="p">)</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">8</span><span class="p">)])</span>
<span class="n">session</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="n">create_account</span><span class="p">(</span><span class="n">account</span><span class="o">.</span><span class="n">email</span><span class="p">,</span> <span class="n">password</span><span class="p">)</span>
<span class="n">account</span><span class="o">.</span><span class="n">fetch</span><span class="p">()</span>
<span class="n">message</span> <span class="o">=</span> <span class="n">account</span><span class="o">.</span><span class="n">wait_for_email</span><span class="p">(</span><span class="k">lambda</span> <span class="n">m</span><span class="p">:</span> <span class="s">'x-verify-code'</span> <span class="ow">in</span> <span class="n">m</span><span class="p">[</span><span class="s">'headers'</span><span class="p">])</span>
<span class="n">session</span><span class="o">.</span><span class="n">verify_email_code</span><span class="p">(</span><span class="n">message</span><span class="p">[</span><span class="s">'headers'</span><span class="p">][</span><span class="s">'x-verify-code'</span><span class="p">])</span>
<span class="k">yield</span> <span class="p">{</span><span class="s">'email'</span><span class="p">:</span> <span class="n">account</span><span class="o">.</span><span class="n">email</span><span class="p">,</span> <span class="s">'password'</span><span class="p">:</span> <span class="n">password</span><span class="p">}</span>
<span class="n">account</span><span class="o">.</span><span class="n">clear</span><span class="p">()</span>
<span class="n">client</span><span class="o">.</span><span class="n">destroy_account</span><span class="p">(</span><span class="n">account</span><span class="o">.</span><span class="n">email</span><span class="p">,</span> <span class="n">password</span><span class="p">)</span>
</code></pre></div></div>
<p>Whilst building out the prototype it was useful to have a Firefox Account that wasn’t immediately destroyed, which is why I <a href="/2018/03/15/command-line-tool-for-firefox-accounts.html">built a command line tool for creating and destroying accounts</a>.</p>
<h3 id="automating-firefox-on-ios">Automating Firefox on iOS</h3>
<p>Fortunately there is already a suite of <a href="https://github.com/mozilla-mobile/firefox-ios/tree/master/XCUITests">automated UI tests for Firefox on iOS</a>, so I was able to build on the existing code. For our scenario I was able to get away with only making a few changes:</p>
<ol>
<li>Created an <code class="highlighter-rouge">IntegrationTests.swift</code> file for the new script. Note that although this is technically a test itself, I’m referring to it as a script. This is because it only forms part of the overall integration test, and is essentially executed as a script. Of course, any failure encountered while running it will result in a test failure.</li>
<li>Added <code class="highlighter-rouge">LaunchArguments.StageServer</code> as a launch argument in <code class="highlighter-rouge">BaseTestCase.swift</code> so the staging environment would be used for Firefox Accounts and Sync.</li>
<li>Switched from using <code class="highlighter-rouge">type</code> to <code class="highlighter-rouge">typeText</code> in <code class="highlighter-rouge">FxScreenGraph.swift</code> for the Firefox Accounts login screen. This allowed entry of characters not displayed on the initial keyboard layout. If we want to use the on screen keyboard, then we’ll need to enhance the screen graph to support switching layouts. Alternatively, we could avoid using such characters.</li>
<li>Added the ability to set the timeout for <code class="highlighter-rouge">waitforExistence</code> as loading the Firefox Accounts login screen and performing the initial sync were occasionally taking longer than the default of 5 seconds.</li>
<li>Modified the existing <code class="highlighter-rouge">Fennec_Enterprise_XCUITests</code> scheme to include environment variables for the Firefox Account email and password so they can be used from the script.</li>
</ol>
<p>With these changes, I was able to create the script to open a website and save it as a bookmark:</p>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">func</span> <span class="nf">testFxASyncBookmark</span> <span class="p">()</span> <span class="p">{</span>
<span class="c1">// Go to a webpage, and add to bookmarks</span>
<span class="n">navigator</span><span class="o">.</span><span class="nf">createNewTab</span><span class="p">()</span>
<span class="nf">loadWebPage</span><span class="p">(</span><span class="s">"www.example.com"</span><span class="p">)</span>
<span class="n">navigator</span><span class="o">.</span><span class="nf">nowAt</span><span class="p">(</span><span class="kt">BrowserTab</span><span class="p">)</span>
<span class="nf">bookmark</span><span class="p">()</span>
<span class="c1">// Sign into Firefox Accounts</span>
<span class="n">navigator</span><span class="o">.</span><span class="nf">goto</span><span class="p">(</span><span class="kt">FxASigninScreen</span><span class="p">)</span>
<span class="nf">waitforExistence</span><span class="p">(</span><span class="n">app</span><span class="o">.</span><span class="n">webViews</span><span class="o">.</span><span class="n">staticTexts</span><span class="p">[</span><span class="s">"Sign in"</span><span class="p">],</span> <span class="nv">timeout</span><span class="p">:</span> <span class="mi">10</span><span class="p">)</span>
<span class="n">userState</span><span class="o">.</span><span class="n">fxaUsername</span> <span class="o">=</span> <span class="kt">ProcessInfo</span><span class="o">.</span><span class="n">processInfo</span><span class="o">.</span><span class="n">environment</span><span class="p">[</span><span class="s">"FXA_EMAIL"</span><span class="p">]</span><span class="o">!</span>
<span class="n">userState</span><span class="o">.</span><span class="n">fxaPassword</span> <span class="o">=</span> <span class="kt">ProcessInfo</span><span class="o">.</span><span class="n">processInfo</span><span class="o">.</span><span class="n">environment</span><span class="p">[</span><span class="s">"FXA_PASSWORD"</span><span class="p">]</span><span class="o">!</span>
<span class="n">navigator</span><span class="o">.</span><span class="nf">performAction</span><span class="p">(</span><span class="kt">Action</span><span class="o">.</span><span class="kt">FxATypeEmail</span><span class="p">)</span>
<span class="n">navigator</span><span class="o">.</span><span class="nf">performAction</span><span class="p">(</span><span class="kt">Action</span><span class="o">.</span><span class="kt">FxATypePassword</span><span class="p">)</span>
<span class="n">navigator</span><span class="o">.</span><span class="nf">performAction</span><span class="p">(</span><span class="kt">Action</span><span class="o">.</span><span class="kt">FxATapOnSignInButton</span><span class="p">)</span>
<span class="nf">allowNotifications</span><span class="p">()</span>
<span class="c1">// Wait for initial sync to complete</span>
<span class="nf">waitforExistence</span><span class="p">(</span><span class="n">app</span><span class="o">.</span><span class="n">tables</span><span class="o">.</span><span class="n">staticTexts</span><span class="p">[</span><span class="s">"Sync Now"</span><span class="p">],</span> <span class="nv">timeout</span><span class="p">:</span> <span class="mi">10</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>
<p>To run XCUITests from outside of Xcode, you need to use the <code class="highlighter-rouge">xcodebuild</code> command line tool. So, using FxACLI to create a test account, I can run my new script using the following commands:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ fxacli create
Account created!
- 🌐 https://api-accounts.stage.mozaws.net/v1
- 📧 test-a478e06856@restmail.net
- 🔑 CokFkuRA
Account verified! 🎉
$ export FXA_EMAIL=test-a478e06856@restmail.net FXA_PASSWORD=CokFkuRA
$ xcodebuild test -scheme Fennec_Enterprise_XCUITests -destination "platform=iOS Simulator,name=iPhone X" -only-testing:XCUITests/IntegrationTests/testFxASyncBookmark
--- snip ---
** TEST SUCCEEDED **
$ fxacli destroy
Account destroyed! 💥
</code></pre></div></div>
<p>In order to run the script from my pytest framework, I created a fixture named <code class="highlighter-rouge">xcodebuild</code>. This fixture patches the environment with the Firefox Account variables, and yields an <code class="highlighter-rouge">XCodeBuild</code> object:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">xcodebuild_log</span><span class="p">(</span><span class="n">pytestconfig</span><span class="p">,</span> <span class="n">tmpdir</span><span class="p">):</span>
<span class="n">xcodebuild_log</span> <span class="o">=</span> <span class="nb">str</span><span class="p">(</span><span class="n">tmpdir</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="s">'xcodebuild.log'</span><span class="p">))</span>
<span class="n">pytestconfig</span><span class="o">.</span><span class="n">_xcodebuild_log</span> <span class="o">=</span> <span class="n">xcodebuild_log</span>
<span class="k">yield</span> <span class="n">xcodebuild_log</span>
<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">xcodebuild</span><span class="p">(</span><span class="n">fx_account</span><span class="p">,</span> <span class="n">monkeypatch</span><span class="p">,</span> <span class="n">xcodebuild_log</span><span class="p">):</span>
<span class="n">monkeypatch</span><span class="o">.</span><span class="n">setenv</span><span class="p">(</span><span class="s">'FXA_EMAIL'</span><span class="p">,</span> <span class="n">fx_account</span><span class="o">.</span><span class="n">email</span><span class="p">)</span>
<span class="n">monkeypatch</span><span class="o">.</span><span class="n">setenv</span><span class="p">(</span><span class="s">'FXA_PASSWORD'</span><span class="p">,</span> <span class="n">fx_account</span><span class="o">.</span><span class="n">password</span><span class="p">)</span>
<span class="k">yield</span> <span class="n">XCodeBuild</span><span class="p">(</span><span class="n">xcodebuild_log</span><span class="p">)</span>
</code></pre></div></div>
<p>The <code class="highlighter-rouge">XCodeBuild</code> object has a <code class="highlighter-rouge">test</code> method, which requires the identifier of the test to run. When the <code class="highlighter-rouge">test</code> method is called, the <code class="highlighter-rouge">xcodebuild</code> binary is started in a new process, and the output is redirected to a log file to later attach to the HTML report. The following test, although incomplete at this point, demonstrates running the XCUITest from pytest:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">test_sync_bookmark_from_device</span><span class="p">(</span><span class="n">xcodebuild</span><span class="p">):</span>
<span class="n">xcodebuild</span><span class="o">.</span><span class="n">test</span><span class="p">(</span><span class="s">'XCUITests/IntegrationTests/testFxASyncBookmark'</span><span class="p">)</span>
</code></pre></div></div>
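<p>For context, here’s a rough sketch of what such an <code class="highlighter-rouge">XCodeBuild</code> wrapper could look like. This is not the actual implementation; the scheme and destination values are simply taken from the shell session above:</p>

```python
import subprocess


class XCodeBuild(object):

    binary = 'xcodebuild'
    scheme = 'Fennec_Enterprise_XCUITests'
    destination = 'platform=iOS Simulator,name=iPhone X'

    def __init__(self, log):
        self.log = log

    def command(self, identifier):
        """Build the xcodebuild invocation for a single XCUITest."""
        return [self.binary, 'test',
                '-scheme', self.scheme,
                '-destination', self.destination,
                '-only-testing:{}'.format(identifier)]

    def test(self, identifier):
        # Redirect all output to the log file so it can later be
        # attached to the HTML report.
        with open(self.log, 'w') as log:
            subprocess.check_call(
                self.command(identifier), stdout=log, stderr=subprocess.STDOUT)
```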
<p>I noticed early on that once a test has signed into Firefox Accounts, the next test to run will remember the email address used and only ask for a password when attempting to sign in. There’s likely a less expensive solution, but for now I’ve resolved this by running <code class="highlighter-rouge">xcrun simctl shutdown all</code> to shutdown all simulators, followed by <code class="highlighter-rouge">xcrun simctl erase all</code> to wipe them.</p>
<h3 id="automating-firefox-on-desktop">Automating Firefox on Desktop</h3>
<p>So far we’ve added a bookmark in Firefox on iOS and performed an initial sync. We now need to sign into Firefox Accounts on desktop Firefox, perform a sync, and verify the new bookmark is added. Like our UI tests for Firefox on iOS, we already have a solution for performing integration tests for Sync. It’s called <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Projects/TPS_Tests">TPS</a>, and with the following tweaks I was able to get it working for my prototype:</p>
<ol>
<li>Added “mobile” bookmark folder to <code class="highlighter-rouge">bookmarks.jsm</code>, which is necessary to verify bookmarks in this location.</li>
<li>Removed attempt to load and validate the ping schema. The <code class="highlighter-rouge">_tryLoadPingSchema</code> method attempts to read a schema file from disk, which isn’t present for my prototype so I’ve removed that, and another related code path.</li>
</ol>
<p>The source code for TPS is stored in mozilla-central, and I don’t want the entire repository to be a requirement for running my prototype. If we decide that TPS is the best approach for these tests, then we’d probably need to find a better way to share the code than my current approach of simply copying the extension source code and making local changes.</p>
<p>TPS works by launching Firefox with the extension installed and passing an argument to the test to run. This means it’s necessary for us to write a TPS test in JavaScript for our scenario:</p>
<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">EnableEngines</span><span class="p">([</span><span class="s2">"bookmarks"</span><span class="p">]);</span>
<span class="kd">var</span> <span class="nx">phases</span> <span class="o">=</span> <span class="p">{</span> <span class="s2">"phase1"</span><span class="p">:</span> <span class="s2">"profile1"</span> <span class="p">};</span>
<span class="c1">// expected bookmark state</span>
<span class="kd">var</span> <span class="nx">bookmarksExpected</span> <span class="o">=</span> <span class="p">{</span>
<span class="s2">"mobile"</span><span class="p">:</span> <span class="p">[{</span>
<span class="na">uri</span><span class="p">:</span> <span class="s2">"http://www.example.com/"</span><span class="p">,</span>
<span class="na">title</span><span class="p">:</span> <span class="s2">"Example Domain"</span><span class="p">}]</span>
<span class="p">};</span>
<span class="c1">// sync and verify bookmarks</span>
<span class="nx">Phase</span><span class="p">(</span><span class="s2">"phase1"</span><span class="p">,</span> <span class="p">[</span>
<span class="p">[</span><span class="nx">Sync</span><span class="p">],</span>
<span class="p">[</span><span class="nx">Bookmarks</span><span class="p">.</span><span class="nx">verify</span><span class="p">,</span> <span class="nx">bookmarksExpected</span><span class="p">],</span>
<span class="p">]);</span>
</code></pre></div></div>
<p>The phases allow Firefox to be restarted and for syncing to be performed across multiple profiles. Whilst this could be useful, I’ve currently enforced a single phase/profile for my prototype.</p>
<p>There’s already a TPS test runner written in Python, so I was able to selectively pick what I needed for my prototype. I created several pytest fixtures that work together to package the TPS extension, configure a Firefox profile, and, in a similar way to our <code class="highlighter-rouge">xcodebuild</code> fixture, provide an interface for executing a test:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">@pytest.fixture</span><span class="p">(</span><span class="n">scope</span><span class="o">=</span><span class="s">'session'</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">tps_addon</span><span class="p">(</span><span class="n">tmpdir_factory</span><span class="p">):</span>
<span class="n">name</span> <span class="o">=</span> <span class="nb">str</span><span class="p">(</span><span class="n">tmpdir_factory</span><span class="o">.</span><span class="n">mktemp</span><span class="p">(</span><span class="s">'addon'</span><span class="p">)</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="s">'tps'</span><span class="p">))</span>
<span class="n">shutil</span><span class="o">.</span><span class="n">make_archive</span><span class="p">(</span><span class="n">name</span><span class="p">,</span> <span class="s">'zip'</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">here</span><span class="p">,</span> <span class="s">'tps'</span><span class="p">))</span>
<span class="n">os</span><span class="o">.</span><span class="n">rename</span><span class="p">(</span><span class="s">'{}.zip'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">name</span><span class="p">),</span> <span class="s">'{}.xpi'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">name</span><span class="p">))</span>
<span class="k">yield</span> <span class="s">'{}.xpi'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">name</span><span class="p">)</span>
<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">tps_config</span><span class="p">(</span><span class="n">fx_account</span><span class="p">):</span>
<span class="k">yield</span> <span class="p">{</span><span class="s">'fx_account'</span><span class="p">:</span> <span class="p">{</span>
<span class="s">'username'</span><span class="p">:</span> <span class="n">fx_account</span><span class="o">.</span><span class="n">email</span><span class="p">,</span>
<span class="s">'password'</span><span class="p">:</span> <span class="n">fx_account</span><span class="o">.</span><span class="n">password</span><span class="p">}}</span>
<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">tps_log</span><span class="p">(</span><span class="n">pytestconfig</span><span class="p">,</span> <span class="n">tmpdir</span><span class="p">):</span>
<span class="n">tps_log</span> <span class="o">=</span> <span class="nb">str</span><span class="p">(</span><span class="n">tmpdir</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="s">'tps.log'</span><span class="p">))</span>
<span class="n">pytestconfig</span><span class="o">.</span><span class="n">_tps_log</span> <span class="o">=</span> <span class="n">tps_log</span>
<span class="k">yield</span> <span class="n">tps_log</span>
<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">tps_profile</span><span class="p">(</span><span class="n">tps_addon</span><span class="p">,</span> <span class="n">tps_config</span><span class="p">,</span> <span class="n">tps_log</span><span class="p">):</span>
<span class="n">preferences</span> <span class="o">=</span> <span class="p">{</span>
<span class="s">'extensions.autoDisableScopes'</span><span class="p">:</span> <span class="mi">10</span><span class="p">,</span>
<span class="s">'extensions.legacy.enabled'</span><span class="p">:</span> <span class="bp">True</span><span class="p">,</span>
<span class="s">'identity.fxaccounts.autoconfig.uri'</span><span class="p">:</span> <span class="n">urls</span><span class="p">[</span><span class="s">'content'</span><span class="p">],</span>
<span class="s">'tps.config'</span><span class="p">:</span> <span class="n">json</span><span class="o">.</span><span class="n">dumps</span><span class="p">(</span><span class="n">tps_config</span><span class="p">),</span>
<span class="s">'tps.logfile'</span><span class="p">:</span> <span class="n">tps_log</span><span class="p">,</span>
<span class="s">'tps.seconds_since_epoch'</span><span class="p">:</span> <span class="nb">int</span><span class="p">(</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()),</span>
<span class="s">'xpinstall.signatures.required'</span><span class="p">:</span> <span class="bp">False</span>
<span class="p">}</span>
<span class="k">yield</span> <span class="n">Profile</span><span class="p">(</span><span class="n">addons</span><span class="o">=</span><span class="p">[</span><span class="n">tps_addon</span><span class="p">],</span> <span class="n">preferences</span><span class="o">=</span><span class="n">preferences</span><span class="p">)</span>
<span class="nd">@pytest.fixture</span>
<span class="k">def</span> <span class="nf">tps</span><span class="p">(</span><span class="n">pytestconfig</span><span class="p">,</span> <span class="n">tps_log</span><span class="p">,</span> <span class="n">tps_profile</span><span class="p">):</span>
<span class="k">yield</span> <span class="n">TPS</span><span class="p">(</span><span class="n">pytestconfig</span><span class="o">.</span><span class="n">getoption</span><span class="p">(</span><span class="s">'firefox'</span><span class="p">),</span> <span class="n">tps_log</span><span class="p">,</span> <span class="n">tps_profile</span><span class="p">)</span>
</code></pre></div></div>
<p>I also added a required command line option for the path to Firefox. The TPS test runner normally downloads the latest Firefox Nightly itself, which could be offered as an option in the future.</p>
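<p>For illustration, the option might be wired up in <code class="highlighter-rouge">conftest.py</code> along these lines. This is a sketch, not the actual implementation; in particular, the way the option is made required is an assumption:</p>

```python
# Sketch: wiring up a required --firefox command line option in conftest.py.
# The option name comes from the post; the enforcement approach is an assumption.
import pytest


def pytest_addoption(parser):
    parser.addoption('--firefox',
                     help='path to the Firefox binary for TPS to launch')


def pytest_configure(config):
    # fail early with a usage error if the option was not supplied
    if not config.getoption('firefox'):
        raise pytest.UsageError('--firefox is required')
```

<p>Fixtures such as <code class="highlighter-rouge">tps</code> can then read the value via <code class="highlighter-rouge">pytestconfig.getoption('firefox')</code>, as shown above.</p>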
<p>The <code class="highlighter-rouge">TPS</code> object provided by the <code class="highlighter-rouge">tps</code> fixture provides a <code class="highlighter-rouge">run</code> method, which takes a test file as an argument. After adding this to our Python test we have something that looks like this:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">test_sync_bookmark_from_device</span><span class="p">(</span><span class="n">tps</span><span class="p">,</span> <span class="n">xcodebuild</span><span class="p">):</span>
<span class="n">xcodebuild</span><span class="o">.</span><span class="n">test</span><span class="p">(</span><span class="s">'XCUITests/IntegrationTests/testFxASyncBookmark'</span><span class="p">)</span>
<span class="n">tps</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="s">'test_bookmark.js'</span><span class="p">)</span>
</code></pre></div></div>
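<p>For reference, here’s a rough sketch of what the object behind the <code class="highlighter-rouge">tps</code> fixture might look like. The command line argument used to select the test file and the log completion marker are assumptions, not the actual TPS interface:</p>

```python
# Sketch of the object behind the `tps` fixture. The real runner is adapted
# from the one in mozilla-central; the '-tps' argument and the 'FINISHED'
# log marker below are assumptions for illustration only.
import subprocess


class TPS(object):

    def __init__(self, firefox, log, profile):
        self.firefox = firefox  # path to the Firefox binary
        self.log = log  # path TPS writes its log to (tps.logfile pref)
        self.profile = profile  # mozprofile Profile with the TPS extension

    def run(self, test):
        # launch Firefox with the prepared profile; the extension reads its
        # configuration from the tps.* preferences set on the profile
        subprocess.check_call(
            [self.firefox, '-profile', self.profile.profile, '-tps', test])
        self.check_log()

    def check_log(self):
        # fail if the log doesn't report completion (marker assumed)
        with open(self.log) as f:
            assert 'FINISHED' in f.read(), 'TPS run did not complete'
```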
<p>Now we have a working prototype that satisfies our scenario. Note that there’s not much to see while the TPS test is running; however, if you open the settings menu in Firefox you can watch the state transition from not signed in, to signed in, and the initial sync being performed. If you’re really quick you can also see the mobile bookmark appear in the menu.</p>
<h3 id="running-the-tests">Running the Tests</h3>
<p>To run the tests you will need to first follow the instructions for <a href="https://github.com/mozilla-mobile/firefox-ios#building-the-code">building Firefox on iOS</a>. You will also need to ensure you have <a href="http://docs.python-guide.org/en/latest/starting/installation/#legacy-python-2-installation-guides">legacy Python</a> (2.7) installed (there are dependencies that have not yet been updated to support modern Python). Finally, install <a href="https://docs.pipenv.org/">pipenv</a>, which will take care of the remaining Python dependencies.</p>
<p>You can then run the tests as follows, making sure you set the correct path to your Firefox binary:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd python
$ pipenv install
$ pipenv run pytest --firefox=/path/to/Firefox.app/Contents/MacOS/firefox-bin
</code></pre></div></div>
<p>The tests will build and install the application to the simulator, which can cause a delay during which there is no feedback to the user. Also, note that each XCUITest that is executed will shut down and <strong>erase data from all available iOS simulators</strong>. This ensures that each execution starts from a known clean state.</p>
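<p>The shutdown/erase step corresponds to something like the following; the exact <code class="highlighter-rouge">simctl</code> invocation used in the project may differ:</p>

```python
# Sketch of resetting the iOS simulators before a test run. The simctl
# subcommands are standard Xcode tooling, but how the project invokes
# them is an assumption.
import subprocess


def reset_simulators():
    # shut down every booted simulator, then erase all simulator data,
    # so each XCUITest execution starts from a known clean state
    subprocess.check_call(['xcrun', 'simctl', 'shutdown', 'all'])
    subprocess.check_call(['xcrun', 'simctl', 'erase', 'all'])
```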
<h3 id="reviewing-results">Reviewing Results</h3>
<p>Hopefully you’ll see something like the following in your console:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ pipenv run pytest --firefox=/path/to/Firefox.app/Contents/MacOS/firefox-bin
============================= test session starts =============================
platform darwin -- Python 2.7.13, pytest-3.4.2, py-1.5.2, pluggy-0.6.0 -- /path/to/python2.7
cachedir: .pytest_cache
rootdir: /path/to/firefox-ios/python, inifile: pytest.ini
plugins: metadata-1.6.0, html-1.16.1, mozlog-3.7
collected 1 item
test_integration.py::test_sync_bookmark_from_device PASSED [100%]
-------------- generated html file: /path/to/results/index.html ---------------
========================= 1 passed in 273.26 seconds ==========================
</code></pre></div></div>
<p>Even if the test fails, you should see the ‘generated html file’ somewhere in your console. Open this file in Firefox to review the results. If there was a failure, the report will include the details as shown in the console. You’ll also find environment details including the version of desktop Firefox being used.</p>
<p>For each test there’s a link to logs for TPS and xcodebuild. At the moment these are included regardless of the outcome; however, they’re mostly valuable for investigating failures. In the future we may decide to exclude them when tests pass.</p>
<h3 id="whats-next">What’s Next?</h3>
<p>Well, the next thing is to gather feedback on this prototype. Does it make sense to use XCUITests, TPS, and pytest, or have I missed something that would improve the integration testing between multiple devices? If you’ve read this far, I suspect you have some opinions. Please get in touch by leaving a comment or finding me on IRC, Slack, Twitter, email, etc!</p>
<p>After I’ve given some time for feedback to reach me, we’ll need to decide how to roll out the prototype so that it can start to provide value. Initially we might start with a dedicated Mac Mini available via remote access to trigger the tests.</p>
<p>Then we’ll need to review the test cases that we’d want to write. Ideally we’d keep the suite relatively small, as the tests will take some time to run. The idea is to cover the basic functionality where risk is high, and the more obscure scenarios will be covered by manual testing.</p>
<p>Other ideas for future plans include:</p>
<ul>
<li>Pre-populate Firefox with history and bookmarks instead of creating them through the user interface. Our prototype test loads a website and bookmarks it as a user would, which is slow and prone to failure, so pre-populating the profile would be an improvement.</li>
<li>Experiment with going back-and-forth between devices. This could be achieved by saving and restoring state between sessions, or by using alternative tools that allow switching between multiple concurrent sessions.</li>
<li>Create a prototype for Firefox on Android. Perhaps we can reuse parts of this prototype, although we’d probably look into using some combination of UIAutomator, Espresso, and Appium for the Android automation parts.</li>
<li>Experiment with Appium as an alternative to XCUITest. I went with XCUITest because we already had something in place, but perhaps there’s some value in switching these tests to using Appium. It’s at least worth investigating.</li>
<li>Experiment with introducing WebDriver to the prototype. If we have a scenario that requires us to interact with either Firefox or web content (such as Firefox Accounts) then we may need to introduce WebDriver (Selenium).</li>
<li>Look into setting up Continuous Integration for these tests. If the tests prove to be valuable and stable, then it would be ideal to run them whenever a new version of Firefox is available. This could be a simple cron schedule, or something like a Jenkins agent running the tests when triggered.</li>
<li>Create a prototype for Firefox on other devices. In the future we may have Firefox Accounts integration in Firefox for Fire TV, so it would be great if we could include coverage here, too.</li>
</ul>
<p>I’ve been pushing my code to a branch on my fork of Firefox on iOS. You can see a comparison between my branch and the upstream master <a href="https://github.com/mozilla-mobile/firefox-ios/compare/master...davehunt:integration">here</a>.</p>
<h3 id="demo">Demo</h3>
<p>If you’re unable to run the tests locally, here’s a recording of the test running along with reviewing the HTML report and logs:</p>
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container"> <iframe title="YouTube video player" width="640" height="390" src="//www.youtube.com/embed/44DOpj4c07U" frameborder="0" allowfullscreen=""></iframe></div>Dave HuntWith Firefox Accounts, you can access your tabs, history, and bookmarks from any device. You can even send tabs from one device to another, which is great when I find myself on a page that’s not optimised for mobile, or if I get distracted at the weekend and find something I want to pick up when I get to work on Monday morning. While these features are awesome, I’ve had issues when the sync isn’t triggered, or things don’t go as expected. Some of these issues are known (and are being addressed), but currently it’s too easy for regressions to be introduced.Command line tool for Firefox Accounts2018-03-15T10:27:39+00:002018-03-15T10:27:39+00:00http://davehunt.co.uk/2018/03/15/command-line-tool-for-firefox-accounts<p>When testing services that depend on <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Tech/Firefox_Accounts">Firefox Accounts</a>, it’s useful to be able to create disposable test accounts. Fortunately we’ve had this ability from the very early days of the service, and our automated tests make heavy use of <a href="https://github.com/mozilla/PyFxA">PyFxA</a> to create, verify, and ultimately destroy accounts. As useful as this is, it hasn’t been particularly easy to create accounts for the purposes of manual testing. For the rare occasion that I’ve needed an account, I’ve either created them manually via the main user interface with a disposable email account, or I’ve created a simple one-off script to create a batch of accounts. As I had this need again recently, I decided to write a simple command line tool for creating verified accounts and subsequently destroying them.<!--more--></p>
<p>To use the tool, you’ll need Python (despite one dependency not claiming support for Python 3, it’s working fine for me using 3.6) and pip. It can then be installed by simply running <code class="highlighter-rouge">pip install fxacli</code>.</p>
<p>Any account created by the tool will be stored in a JSON file named <code class="highlighter-rouge">.accounts</code> in the working directory. This allows for one or more accounts to be created, and then destroyed without the need to specify the target account.</p>
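<p>The bookkeeping is simple enough to sketch. Assuming the file holds a JSON list of credentials (the real fxacli file format may differ), it could look something like this:</p>

```python
# Sketch of a .accounts store: a JSON list of created credentials in the
# working directory. The on-disk format used by fxacli is an assumption.
import json
import os

ACCOUNTS_FILE = '.accounts'


def load_accounts(path=ACCOUNTS_FILE):
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)


def save_account(email, password, path=ACCOUNTS_FILE):
    # record a newly created account so it can be destroyed later
    accounts = load_accounts(path)
    accounts.append({'email': email, 'password': password})
    with open(path, 'w') as f:
        json.dump(accounts, f)


def pop_account(path=ACCOUNTS_FILE):
    # `destroy` without --all removes the most recently created account
    accounts = load_accounts(path)
    account = accounts.pop()
    with open(path, 'w') as f:
        json.dump(accounts, f)
    return account
```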
<p>To create an account:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ fxacli create
Account created!
- 🌐 https://api-accounts.stage.mozaws.net/v1
- 📧 test-72a888a3f6@restmail.net
- 🔑 IvOhSLzI
Account verified! 🎉
</code></pre></div></div>
<p>To destroy the most recently created account:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ fxacli destroy
Account destroyed! 💥
- 🌐 https://api-accounts.stage.mozaws.net/v1
- 📧 test-72a888a3f6@restmail.net
- 🔑 IvOhSLzI
</code></pre></div></div>
<p>To destroy all accounts created using the tool, pass the <code class="highlighter-rouge">--all</code> flag.</p>
<p>If you want to destroy an account not created by the tool, or if your <code class="highlighter-rouge">.accounts</code> file is for any reason unreadable, you can pass <code class="highlighter-rouge">--email</code> and <code class="highlighter-rouge">--password</code> options.</p>
<p>By default, all accounts will be created and destroyed on the staging instance of Firefox Accounts. This is highly recommended, but if you know what you’re doing, the target environment can be specified using the <code class="highlighter-rouge">--env</code> command line option.</p>
<p>Visit the <a href="https://pypi.python.org/pypi/fxacli">FxACLI package on PyPI</a> for links to the source code, issue tracker, and more.</p>Dave HuntWhen testing services that depend on Firefox Accounts, it’s useful to be able to create disposable test accounts. Fortunately we’ve had this ability from the very early days of the service, and our automated tests make heavy use of PyFxA to create, verify, and ultimately destroy accounts. As useful as this is, it hasn’t been particularly easy to create accounts for the purposes of manual testing. For the rare occasion that I’ve needed an account, I’ve either created them manually via the main user interface with a disposable email account, or I’ve created a simple one-off script to create a batch of accounts. As I had this need again recently, I decided to write a simple command line tool for creating verified accounts and subsequently destroying them.Effective CI for Firefox projects developed in GitHub2017-06-28T00:00:00+01:002017-06-28T00:00:00+01:00http://davehunt.co.uk/2017/06/28/effective-ci-for-firefox-projects-developed-in-github<p>Whilst the <a href="https://hg.mozilla.org/">canonical repository</a> for the Firefox
source code uses <a href="https://www.mercurial-scm.org/">Mercurial</a>, it’s becoming
increasingly popular for Firefox projects to use <a href="https://github.com/">GitHub</a>
for development. When it’s time to ship, many of these projects will land their
code inside the canonical repository for inclusion in the upcoming Firefox
release. There are a few challenges that come with this approach.<!--more--></p>
<p>Primarily, we won’t know when a change in the project introduces a regression in
Firefox until we attempt to land the project alongside the Firefox source code.
Similarly, we won’t know when a change to Firefox might cause a regression in
the project.</p>
<p>This quarter I spent some time discussing these issues, learning how different
teams are addressing the problems, and helping to define a more generic approach
for the future.</p>
<h2 id="current-solutions">Current solutions</h2>
<p>Here’s a summary of some of the solutions I discovered teams are using:</p>
<ul>
<li><a href="https://github.com/mozilla/activity-stream">ActivityStream</a> are adding
their project to a local copy of the Firefox source code and pushing to an
alternate branch. This then runs all the tests and reports results to
<a href="https://treeherder.mozilla.org/#/jobs?repo=pine">Treeherder</a>. This relies on
access to the alternate branch, and is currently a manual process.</li>
<li><a href="https://github.com/mozilla/normandy">Normandy</a> download an archive from
<a href="https://github.com/mozilla/gecko-dev">gecko-dev</a> (a read-only Git mirror of
the Firefox repository), combine this with the project repository, build
Firefox, and run project specific tests. Every two weeks or so, they bundle
the changes up and push to the Firefox repository.</li>
<li><a href="https://github.com/devtools-html/debugger.html">debugger.html</a> use a Docker
image with the Firefox source code, and run project specific tests in
<a href="https://circleci.com/">Circle CI</a>.</li>
</ul>
<h2 id="proposed-solution">Proposed solution</h2>
<p>Whilst I wasn’t able to pull together a prototype this quarter, I did draft a
proposal based on my investigations:</p>
<ul>
<li>Each project would create an entry script <code class="highlighter-rouge">mcmerge.sh</code>, which would be
responsible for integrating the project source code with the Firefox source
code. In some instances this might simply copy files, but it could also apply
patches as needed.</li>
<li>Whenever a pull request is opened against the GitHub project, or a change is
pushed:
<ul>
<li>Check if the author of the pull/push is authorised to trigger continuous
integration. This might be that they’re a repository collaborator, or have
authority to push to the main Firefox repository. If they are not
authorised, we stop here.</li>
</ul>
</li>
<li>Prepare a new patch using the
<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1357597">vcssync</a> tool
currently in development, which will take the entire GitHub project repository
at the current revision and apply it as a patch to a vacant subdirectory in
the Firefox repository.</li>
<li>Push the prepared patch to our <a href="https://wiki.mozilla.org/Try">Try Server</a>,
which will trigger tests according to the commit message. This syntax should
be configurable in the project repository, but would likely default to all
platforms/tests.</li>
<li>If triggered by a pull request, publish a comment indicating that the tests
have been triggered and linking to the results in Treeherder.</li>
<li>During the regular Firefox build, we’d scan for these projects and execute
any <code class="highlighter-rouge">mcmerge.sh</code> scripts we encountered. This would integrate the project with
the Firefox source code.</li>
<li>Build Firefox and execute the tests as usual.</li>
<li>Reviewers of the pull request would be expected to check the results of the
Try push as part of their review.</li>
</ul>
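<p>Putting the steps above together, the pull request handling could be orchestrated along these lines. Every collaborator object and method name here is an illustrative stub standing in for the real GitHub, vcssync, and Try Server tooling:</p>

```python
# Sketch of the proposed CI flow for GitHub-hosted Firefox projects.
# All parameter objects and their methods are hypothetical placeholders.
def handle_pull_request(pr, github, vcssync, try_server):
    # 1. only authorised authors may trigger continuous integration
    if not github.is_authorized(pr['author']):
        return None
    # 2. fold the project revision into a patch against the Firefox repository
    patch = vcssync.prepare_patch(pr['repo'], pr['revision'])
    # 3. push to the Try server; test selection comes from the commit message
    push = try_server.push(patch)
    # 4. link the Treeherder results from the pull request for reviewers
    github.comment(pr, 'Try push triggered: {}'.format(push['treeherder_url']))
    return push
```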
<p>I also identified a few possible directions this may evolve in the future:</p>
<ul>
<li>We could use the same mechanism to automatically land changes that are pushed
to the GitHub repository into the Firefox repository. This would mean that the
Firefox repository would effectively hold a read-only mirror of the GitHub
project. This would remove the need for manually pushing the project to the
Firefox repository, and would allow us to determine when changes to the Firefox
source code cause regressions in the GitHub projects. This may not be
desirable for all projects, and would have implications for the sheriffs, who
may need to back out changes that cause bustage.</li>
<li>Rather than perform full builds of Firefox, we may be able to optimise these
in a similar fashion to the artefact builds. This isn’t something we can do
now, but it is a desirable build optimisation for other reasons.</li>
<li>Due to the intermittent nature of our tests, it’s not currently possible to
give a pass/fail indication in the GitHub pull request. If the scope of the
tests is narrow enough and they’re stable, it’s possible we could enhance this
to poll for the result and indicate the outcome in the pull.</li>
</ul>
<h2 id="next-steps">Next steps</h2>
<p>As mentioned, I was unfortunately not able to bring together a prototype based
on my proposal. I discovered early on that I wasn’t the only one looking into
this, and I even feel that the other work in this area is already in better
hands. The work in
<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1364561">bug 1364561</a> and
<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1357597">bug 1357597</a> are
particularly relevant to this project. It’s possible that I’ll return to this
project in the future, but for now I’ll be following the progress and
contributing to the discussions without actively working on the implementation.</p>Dave HuntWhilst the canonical repository for the Firefox source code uses Mercurial, it’s becoming increasingly popular for Firefox projects to use GitHub for development. When it’s time to ship, many of these projects will land their code inside the canonical repository for inclusion in the upcoming Firefox release. There are a few challenges that come with this approach.Reporting test results to Treeherder2017-04-10T00:00:00+01:002017-04-10T00:00:00+01:00http://davehunt.co.uk/2017/04/10/reporting-test-results-to-treeherder<p>Many of the web and services automated tests at Mozilla run in Jenkins, and until recently our instance was public. This meant it was easy for both paid and volunteer contributors to discover test failures, file issues, and provide fixes either for the tests or the projects they serve. Unfortunately, just like any software, Jenkins has had some security vulnerabilities. Last year, one of these prompted us to remove public access to our instance.<!--more--></p>
<p>As Jenkins is much more than just a dashboard of results, a compromised instance could cause loss of data, or worse, remote execution of code. Since removing access to Jenkins, we’ve been looking into alternative ways to share our test results publicly.</p>
<p>Last month <a href="/2017/03/21/analysing-pytest-results-using-activedata.html">I wrote about</a> how we’re making the test results accessible via <a href="https://wiki.mozilla.org/EngineeringProductivity/Projects/ActiveData">ActiveData</a>. This allows anyone to query our results and perform data analysis and visualisation on them. Whilst this is going to prove to be a valuable tool, it doesn’t provide a convenient dashboard for the results and is unlikely to attract much interest from the community.</p>
<p>The next step is submitting our results to <a href="https://wiki.mozilla.org/EngineeringProductivity/Projects/Treeherder">Treeherder</a> - a public dashboard for commits to Mozilla projects, which displays results of tasks such as builds, linting, and automated tests. Treeherder can be used to monitor the health of projects, and as a tool for investigating failures and raising defects, making it the perfect home for our test results.</p>
<p>The remainder of this post details the steps necessary to submit our test results to Treeherder. Note that whilst working on this I had local instances of most of the services and applications involved. This is a good practice as it reduces latency, and means you’re not filling up a live instance with experimental test data.</p>
<h2 id="creating-a-pulse-user">Creating a Pulse user</h2>
<p>Treeherder ingests information about jobs from <a href="https://wiki.mozilla.org/Auto-tools/Projects/Pulse">Pulse</a> exchanges. Pulse is a RabbitMQ cluster, and if you’re unfamiliar (as I was), I can highly recommend the <a href="http://www.rabbitmq.com/getstarted.html">tutorials</a> to learn more. The first step was to sign into <a href="https://pulseguardian.mozilla.org/">PulseGuardian</a> and create a user for publishing our messages.</p>
<h2 id="adding-project-repositories">Adding project repositories</h2>
<p>To have our project repositories shown in Treeherder, it’s necessary to <a href="http://treeherder.readthedocs.io/submitting_data.html#adding-a-github-repository">tell Treeherder about them</a>. Then, to get result sets showing up you’ll need to add <a href="https://github.com/integration/taskcluster">TaskCluster integration</a> for the repositories. For each revision, TaskCluster will send a message to Pulse, which Treeherder uses to build a collection of result sets for each repository. Note that you will need a GitHub organisation owner or repository administrator to enable TaskCluster integration. It’s also worth noting that historic revisions will not be available in Treeherder, so only new commits will show up.</p>
<h2 id="generating-a-message">Generating a message</h2>
<p>Treeherder provides a <a href="https://github.com/mozilla/treeherder/blob/master/schemas/pulse-job.yml">schema</a> that messages are expected to validate against. As we’re using Jenkins, and have recently <a href="/2017/03/23/migrating-to-declarative-jenkins-pipelines.html">migrated to declarative pipelines</a> with a <a href="https://github.com/mozilla/fxtest-jenkins-pipeline">shared library</a>, I wrote a <code class="highlighter-rouge">submitToTreeherder</code> step to generate the payload using <a href="https://github.com/daveclayton/json-schema-validator">json-schema-validator</a> to ensure we’re conforming to the schema. Whilst developing locally I used a simple Python script based on the <a href="http://www.rabbitmq.com/getstarted.html">RabbitMQ tutorials</a>, as this made it much easier to iterate on the payload. Interestingly, this validation actually highlighted a couple of issues with the schema (<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1352402">1352402</a>, <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1352403">1352403</a>). I also encountered issues with the Jenkins pipelines (<a href="https://issues.jenkins-ci.org/browse/JENKINS-43195">JENKINS-43195</a>, <a href="https://issues.jenkins-ci.org/browse/JENKINS-43246">JENKINS-43246</a>).</p>
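<p>While iterating locally, a crude stand-in for full schema validation can catch obvious mistakes before anything is published. The key names below are illustrative placeholders; the authoritative list lives in Treeherder’s <code class="highlighter-rouge">pulse-job.yml</code>:</p>

```python
# Rough sanity check for a Treeherder job payload. The required key names
# here are placeholders; real validation should use the pulse-job.yml
# schema with a proper JSON Schema validator.
def check_required(payload, required=('taskId', 'origin', 'display', 'state')):
    missing = [key for key in required if key not in payload]
    if missing:
        raise ValueError('missing required fields: {}'.format(missing))
    return True
```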
<h2 id="submitting-the-message">Submitting the message</h2>
<p>Once the message has been generated and validated against the schema, we send it as our Pulse user to an exchange including the username with a routing key that contains the username and project. Our Pulse username is <code class="highlighter-rouge">fxtesteng</code>, so our exchange would be <code class="highlighter-rouge">exchange/fxtesteng/jobs</code>, and for the <a href="https://github.com/mozilla/fxapom">FxAPOM</a> project our routing key would be <code class="highlighter-rouge">fxtesteng.fxapom</code>.</p>
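<p>That naming convention is mechanical enough to capture in a small helper:</p>

```python
# Build the Pulse destination for Treeherder job messages, following the
# convention described above: exchange 'exchange/<user>/jobs' and routing
# key '<user>.<project>'.
def pulse_destination(user, project):
    exchange = 'exchange/{}/jobs'.format(user)
    routing_key = '{}.{}'.format(user, project)
    return exchange, routing_key
```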
<h2 id="using-pulse-inspector">Using Pulse Inspector</h2>
<p>At this point Treeherder isn’t aware of the new exchange, but you can use <a href="https://tools.taskcluster.net/pulse-inspector/">Pulse Inspector</a> to listen on a specified exchange and routing key pattern to see the messages as they’re received by Pulse. If you want to listen to all messages published to the Firefox Test Engineering exchange for Treeherder jobs, enter the exchange <code class="highlighter-rouge">exchange/fxtesteng/jobs</code> and click ‘Start Listening’. Note that traffic on this exchange is currently very low, but it’s useful if you’ve triggered a job and want to see what’s being sent.</p>
<h2 id="registering-with-treeherder">Registering with Treeherder</h2>
<p>Now that we’re sending messages for each job, all that’s left is to <a href="https://treeherder.readthedocs.io/submitting_data.html#register-with-treeherder">tell Treeherder about your exchange</a>. Once that’s done, the next job published to Pulse should be picked up by Treeherder and displayed under the appropriate repository. The following screenshot shows FxAPOM results in Treeherder.</p>
<p><img src="/assets/treeherder-fxapom.png" alt="FxAPOM results in Treeherder" /></p>
<p>You can <a href="https://treeherder.allizom.org/#/jobs?repo=fxapom&amp;revision=cd87f7f8bb06ff035ac716081e01a6f55046911d">see these for yourself</a>, and take a look at the log files, HTML reports and other details for the results.</p>
<h2 id="whats-next">What’s next?</h2>
<p>At the time of writing, only FxAPOM is configured to submit results to Treeherder. This repository has a low volume of commits, and the tests are run once a day. Next, we’d like to submit results for more of our repositories. As we’re using our shared library, this is a relatively small change for most of our projects. There are also <a href="https://github.com/mozilla/fxtest-jenkins-pipeline/labels/treeherder">a number of enhancements</a> filed for the shared library, which will improve the display of the results in Treeherder. If you’re interested in working on any of these, please add a comment and I’d be happy to mentor you.</p>
<p>I’d like to think that this model could be repeated for other continuous integration services. It’s already provided by TaskCluster, but there’s certainly some value in submitting results from <a href="https://travis-ci.org/">Travis CI</a> or <a href="https://circleci.com/">CircleCI</a>. That said, there are no current plans to implement this (that I’m aware of). If this is something you’d like to work on, let me know!</p>
<h2 id="acknowledgements">Acknowledgements</h2>
<p>I would like to say thanks to the Treeherder team for their assistance and considerable patience whilst I worked on this.
I’d also like to thank <a href="https://github.com/abayer">Andrew Bayer</a> from the Jenkins team for his encouragement and assistance while I battled through a few Groovy and Jenkins pipeline issues. Whenever we’re in the same city, I definitely owe that guy a drink!</p>Dave HuntMany of the web and services automated tests at Mozilla run in Jenkins, and until recently our instance was public. This meant it was easy for both paid and volunteer contributors to discover test failures, file issues, and provide fixes either for the tests or the projects they serve. Unfortunately, just like any software, Jenkins has had some security vulnerabilities. Last year, one of these prompted us to remove public access to our instance.Migrated content from seleniumexamples.com2017-03-24T00:00:00+00:002017-03-24T00:00:00+00:00http://davehunt.co.uk/2017/03/24/migrated-content-from-seleniumexamples-com<p>I ran a blog at seleniumexamples.com from 2009-2011, where I posted examples
for using Selenium. This content has been unavailable for some time, so I
decided to migrate it here. Many of the examples are unlikely to work, and I
didn’t bother to migrate comments, but it might at least prove interesting to
some. Personal highlights include <a href="/2010/05/25/play-pacman-with-selenium-2.html">playing Pacman</a>,
<a href="/2009/11/25/a-successful-first-london-selenium-meetup.html">the first London meetup</a>, and <a href="/2010/11/07/cheesecake.html">cheesecake</a>. You can find the entire
archive <a href="/tag/seleniumexamples.com.html">here</a>.</p>Dave HuntI ran a blog at seleniumexamples.com from 2009-2011, where I posted examples for using Selenium. This content has been unavailable for some time, so I decided to migrate it here. Many of the examples are unlikely to work, and I didn’t bother to migrate comments, but it might at least prove interesting to some. Personal highlights include playing Pacman, the first London meetup, and cheesecake. You can find the entire archive here.Migrating to declarative Jenkins pipelines2017-03-23T00:00:00+00:002017-03-23T00:00:00+00:00http://davehunt.co.uk/2017/03/23/migrating-to-declarative-jenkins-pipelines<p>Last year I shared <a href="/2016/12/08/my-thoughts-on-jenkins-pipelines.html">my thoughts on Jenkins pipelines</a>
and provided a <a href="/2016/12/19/jenkins-pipeline-walkthrough.html">walkthrough</a> of how we’re using
pipelines at Mozilla. Since then, the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Model+Definition+Plugin">Pipeline Model Definition plugin</a> has
come out of beta, and we’ve been migrating our pipelines to the new declarative
syntax with a <a href="https://jenkins.io/doc/book/pipeline/shared-libraries/">shared library</a>.<!--more--></p>
<h2 id="shared-libraries">Shared libraries</h2>
<p>As soon as you’re implementing similar steps in multiple Jenkins pipelines, it
makes sense to consider writing a shared library. Our <a href="https://github.com/mozilla/fxtest-jenkins-pipeline">fxtest-jenkins-pipeline</a>
library allows us to centrally maintain custom steps such as IRC notifications,
or creating variables files with desired capabilities for Selenium, rather
than implementing these in every pipeline.</p>
<h2 id="pipeline-options">Pipeline options</h2>
<p>In our original pipelines it was necessary to wrap steps in order to configure
timeouts, ANSI colours, and timestamps. With declarative this is made so much
better by allowing <a href="https://jenkins.io/doc/book/pipeline/syntax/#options">pipeline-specific options</a> to be
configured. The following snippet demonstrates enabling all three of these:</p>
<div class="language-groovy highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">options</span> <span class="o">{</span>
<span class="n">ansiColor</span><span class="o">(</span><span class="s1">'xterm'</span><span class="o">)</span>
<span class="n">timestamps</span><span class="o">()</span>
<span class="n">timeout</span><span class="o">(</span><span class="nl">time:</span> <span class="mi">1</span><span class="o">,</span> <span class="nl">unit:</span> <span class="s1">'HOURS'</span><span class="o">)</span>
<span class="o">}</span>
</code></pre></div></div>
<p>Having everything defined in one place, with less nesting and indentation,
vastly improves the readability and maintainability of the pipelines.</p>
<h2 id="environment-variables">Environment variables</h2>
<p>Another huge improvement is the handling of
<a href="https://jenkins.io/doc/book/pipeline/syntax/#environment">environment variables</a> and accessing
credentials. Previously it was necessary to wrap steps in <code class="highlighter-rouge">withEnv</code> for
environment variables, and <code class="highlighter-rouge">withCredentials</code> for credentials. This is what we
previously would have needed:</p>
<div class="language-groovy highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">withCredentials</span><span class="o">([[</span>
<span class="n">$class</span><span class="o">:</span> <span class="s1">'StringBinding'</span><span class="o">,</span>
<span class="nl">credentialsId:</span> <span class="s1">'SAUCELABS_API_KEY'</span><span class="o">,</span>
<span class="nl">variable:</span> <span class="s1">'SAUCELABS_API_KEY'</span><span class="o">]])</span> <span class="o">{</span>
<span class="n">withEnv</span><span class="o">([</span><span class="s2">"PYTEST_ADDOPTS="</span> <span class="o">+</span>
<span class="s2">"-n=${processes} "</span> <span class="o">+</span>
<span class="s2">"--driver=SauceLabs "</span> <span class="o">+</span>
<span class="s2">"--variables=capabilities.json "</span> <span class="o">+</span>
<span class="s2">"--color=yes"</span><span class="o">])</span> <span class="o">{</span>
<span class="c1">// ...</span>
<span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div>
<p>With declarative, this can be replaced with the following:</p>
<div class="language-groovy highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">environment</span> <span class="o">{</span>
<span class="n">PYTEST_ADDOPTS</span> <span class="o">=</span>
<span class="s2">"-n=10 "</span> <span class="o">+</span>
<span class="s2">"--tb=short "</span> <span class="o">+</span>
<span class="s2">"--color=yes "</span> <span class="o">+</span>
<span class="s2">"--driver=SauceLabs "</span> <span class="o">+</span>
<span class="s2">"--variables=capabilities.json"</span>
<span class="n">SAUCELABS_API_KEY</span> <span class="o">=</span> <span class="n">credentials</span><span class="o">(</span><span class="s1">'SAUCELABS_API_KEY'</span><span class="o">)</span>
<span class="o">}</span>
</code></pre></div></div>
<p>Note that there’s a <a href="https://issues.jenkins-ci.org/browse/JENKINS-42857">regression</a> in version 1.1.1 of the plugin
that prevents strings from being split over multiple lines, but this is already
fixed and will be included in the next release.</p>
<h2 id="post-build-steps">Post build steps</h2>
<p>Possibly the most valuable addition to declarative pipelines is the post build
steps. It’s now possible to define steps to execute after each stage or the
entire pipeline, depending on the current status of the build. Previously we
were using try/catch/finally in our test step to ensure we reported our
results, and a broader try/catch to send failure notifications.</p>
<p>We now have a <code class="highlighter-rouge">post</code> section immediately after our test stage that publishes
artifacts, similar to the following snippet:</p>
<div class="language-groovy highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">stage</span><span class="o">(</span><span class="s1">'Test'</span><span class="o">)</span> <span class="o">{</span>
<span class="n">steps</span> <span class="o">{</span>
<span class="c1">// ...</span>
<span class="o">}</span>
<span class="n">post</span> <span class="o">{</span>
<span class="n">always</span> <span class="o">{</span>
<span class="n">archiveArtifacts</span> <span class="s1">'results/*'</span>
<span class="n">junit</span> <span class="s1">'results/*.xml'</span>
<span class="o">}</span>
<span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div>
<p>We also have a <code class="highlighter-rouge">post</code> section for the entire pipeline for notifications. This
means that we send notifications when any stage fails, which is another
improvement over our previous pipelines.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In my opinion declarative pipelines are a huge improvement over the original
scripted pipelines. The new syntax is succinct and easier to read and maintain.
As a result of migrating to declarative and our shared library, one of our
pipelines had a 60% reduction in the lines of code but with more functionality!</p>
<p>Here’s a <a href="https://github.com/mozilla/fxapom/blob/0f3b3cdb161940614ef50f2203a4633df1464c74/Jenkinsfile">full example</a> of one of our declarative Jenkins pipelines.</p>Dave HuntLast year I shared my thoughts on Jenkins pipelines and provided a walkthrough of how we’re using pipelines at Mozilla. Since then, the Pipeline Model Definition plugin has come out of beta, and we’ve been migrating our pipelines to the new declarative syntax with a shared library.Firefox: The Puppet Show2017-03-22T00:00:00+00:002017-03-22T00:00:00+00:00http://davehunt.co.uk/2017/03/22/firefox-the-puppet-show<p>This year marked my return to <a href="https://fosdem.org/">FOSDEM</a>, which I last
attended in 2013. On my previous visit I did a joint talk with
<a href="https://www.hskupin.info/">Henrik Skupin</a> on <a href="https://archive.fosdem.org/2013/schedule/event/automating_firefox_os/">Automating Firefox OS</a>
and we thought it would be fun to return with another talk on automating
the desktop Firefox application using Selenium.<!--more--></p>
<p>Unfortunately, Henrik wasn’t able to attend due to sickness in his family, so I
gave the talk on my own. The talk was named after
<a href="http://foxpuppet.readthedocs.io/en/latest/">FoxPuppet</a>, our Python package
that helps us to automate Firefox. I also brought along a real life fox puppet,
which I decided to name Henrik as he wasn’t able to be there in person. You can
<a href="https://fosdem.org/2017/schedule/event/mozilla_firefox_puppet_show/">watch the talk</a>
(sorry for the poor audio quality), and
<a href="https://whimboo.github.io/slides/170204_fosdem/">see the slides</a> online.</p>Dave HuntThis year marked my return to FOSDEM, which I last attended in 2013. On my previous visit I did a joint talk with Henrik Skupin on Automating Firefox OS and we thought it would be fun to return with another talk on automating the desktop Firefox application using Selenium.Analysing pytest results using ActiveData2017-03-21T00:00:00+00:002017-03-21T00:00:00+00:00http://davehunt.co.uk/2017/03/21/analysing-pytest-results-using-activedata<p>When attending <a href="http://2016.seleniumconf.co.uk/">Selenium Conference 2016</a> in
London, I was particularly interested in any talks on how others are analysing
their test results. There have been many times I wanted to know things such as
how many tests we’re running, what percentage of them are failing, and which
tests take the longest to run. Rather than implement something from scratch, I
was hoping that I could take inspiration from others. I wasn’t disappointed, as
both <a href="http://geekdave.com/">Dave Cadwallader</a> and
<a href="https://twitter.com/hughleo01">Hugh McCamphill</a> gave presentations on how they
gather and analyse their test results.<!--more--></p>
<p>Unfortunately, neither of these fitted neatly into our Python test suites. It
did however confirm that gathering this data in a way that can be queried and
visualised was worth pursuing, and provided the much needed inspiration.</p>
<p>At Mozilla, many of our test results, builds and performance data are already
collected by
<a href="https://wiki.mozilla.org/Auto-tools/Projects/ActiveData">ActiveData</a>, allowing
the data to be publicly available and directly queried. This made it the
perfect destination for the Selenium test suites that I wanted to discover more
about. The difference between the test results already in ActiveData and the
Selenium test results I wanted to introduce was the format of the data. The
existing tests are using a Python package named
<a href="http://mozbase.readthedocs.io/en/latest/mozlog.html">mozlog</a> to generate a raw
structured log, which could then be normalised and ingested by ActiveData.</p>
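<p>To give a flavour of the format, a raw structured log is a stream of JSON objects, one per line. The following hand-rolled illustration uses only the standard library; mozlog’s real output includes further fields, and the exact shape here is an approximation:</p>

```python
import json
import time

def log_entry(action, **fields):
    """Build one line of a mozlog-style structured log (simplified)."""
    entry = {"action": action, "time": int(time.time() * 1000),
             "source": "pytest", "thread": "MainThread"}
    entry.update(fields)
    return json.dumps(entry)

print(log_entry("suite_start", tests=["test_login", "test_logout"]))
print(log_entry("test_start", test="test_login"))
print(log_entry("test_end", test="test_login", status="PASS"))
```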
<p>Rather than modify ActiveData to understand an alternative (and more limited)
data format such as
<a href="https://en.wikipedia.org/wiki/XUnit#Test_result_formatter">xUnit</a> or
<a href="https://en.wikipedia.org/wiki/Test_Anything_Protocol">TAP</a>, it made sense to
add a simple plugin to mozlog so that tests using pytest could easily create
structured logs via the command line. One unfortunate side-effect of this is
that mozlog <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1093212">only currently supports legacy Python</a>,
so it’s not possible to use Python 3 if you want to generate these logs.</p>
<p>Once we started to create these structured logs, it quickly became clear that
the data lacked context. We had the test names, but nothing to indicate the
suite or the application under test. This provided the perfect opportunity to
create a plugin I’ve been thinking about for some time, so I wrote and released
<a href="https://pypi.python.org/pypi/pytest-metadata/">pytest-metadata</a>. This plugin
gathers various data about your test session, including the platform, version
of Python, pytest, associated packages, and plugins. It also detects when tests
are being run in a variety of continuous integration servers and pulls in
relevant details. I had written something similar within
<a href="https://pypi.python.org/pypi/pytest-html/">pytest-html</a> for displaying the
environment details in the report, but had been meaning to split it into a
separate plugin.</p>
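<p>The kind of session metadata the plugin collects can be approximated with the standard library alone. This is a simplified sketch; pytest-metadata also records package versions, active plugins, and variables from several continuous integration services:</p>

```python
import os
import platform

def gather_metadata():
    """Collect basic environment details, similar in spirit to pytest-metadata."""
    metadata = {
        "Platform": platform.platform(),
        "Python": platform.python_version(),
    }
    # Pull in continuous integration details when present (Jenkins example).
    for var in ("BUILD_NUMBER", "JOB_NAME", "GIT_COMMIT"):
        if var in os.environ:
            metadata[var] = os.environ[var]
    return metadata

print(gather_metadata())
```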
<p>With the enriched structured logs now being generated in our continuous
integration, all I needed to do was to publish them to Amazon S3 and hand over
the ActiveData work to <a href="https://github.com/klahnakoski">Kyle Lahnakoski</a>. In no
time at all, Kyle had the data being ingested and available to query. For now
we plan to run queries and plot data as needed, however we’re also planning to
connect our <a href="https://redash.io/">Redash</a> to ActiveData to make it easier for
anyone to visualise this data.</p>
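<p>ActiveData accepts JSON query documents over HTTP, so a query counting non-passing results per test might look something like the sketch below. The table and property names here are assumptions for illustration only; check the documentation linked further down for the real schema:</p>

```python
import json

# Hypothetical ActiveData-style query: count non-passing results by test.
query = {
    "from": "fx-test",          # assumed table name for these results
    "where": {"not": {"eq": {"result.result": "passed"}}},
    "groupby": ["result.test"],
    "select": {"aggregate": "count"},
    "limit": 10,
}
print(json.dumps(query, indent=2))
```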
<p>Here’s an example of the test outcomes (excluding passing tests) for the last
two weeks:</p>
<p style="text-align:center"><img src="/assets/test-outcomes.png" alt="Test outcomes" /></p>
<p>If you’re interested in setting something like this up for your tests, then you
may find that ActiveData is overkill. It’s not particularly difficult to
<a href="https://github.com/klahnakoski/ActiveData/tree/master">set it up</a>, but the
ActiveData-ETL pipeline involves
<a href="https://github.com/klahnakoski/ActiveData-ETL/blob/dev/activedata_etl/transforms/fx_test_to_normalized.py">normalising the results</a>, and
<a href="https://github.com/klahnakoski/ActiveData-ETL/blob/dev/activedata_etl/fx_test_logger.py">scanning the Amazon S3 bucket</a>. These code pointers belong to a larger network of machines and code, so getting them running on a single machine will require some effort.</p>
<p>If you’d like to read more about how this works, or you want to run
some queries on our test results you can
<a href="http://firefox-test-engineering.readthedocs.io/en/latest/reference/activedata.html">read the documentation</a>.
Let me know what you discover!</p>Dave HuntWhen attending Selenium Conference 2016 in London, I was particularly interested in any talks on how others are analysing their test results. There have been many times I wanted to know things such as how many tests we’re running, what percentage of them are failing, and which tests take the longest to run. Rather than implement something from scratch, I was hoping that I could take inspiration from others. I wasn’t disappointed, as both Dave Cadwallader and Hugh McCamphill gave presentations on how they gather and analyse their test results.