The official Sauce Labs Blog

Thanks again to those of you who attended our recent webinar with Applitools on automated visual testing. If you want to share it or if you happened to miss it, you can catch the audio and slides here. We also worked with Selenium expert Dave Haeffner to provide the how-to on the subject. Enjoy his post below.

The Problem

In previous write-ups I covered what automated visual testing is and how to do it. Unfortunately, based on the examples demonstrated, it may be unclear how automated visual testing fits into your existing automated testing practice.

Do you need to write and maintain a separate set of tests? What about your existing Selenium tests? What do you do if there isn’t a sufficient library for the programming language you’re currently using?

A Solution

You can rest easy knowing that you can build automated visual testing checks into your existing Selenium tests. By leveraging a third-party platform like Applitools Eyes, this is a simple feat.

And when coupled with Sauce Labs, you can quickly add coverage for those hard to reach browser, device, and platform combinations.

In the example, we load an instance of Firefox, visit the login page on the-internet, input the username & password, submit the form, assert that we reached a logged-in state, and close the browser.
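That baseline flow can be sketched in Ruby with the selenium-webdriver gem. This is a hypothetical reconstruction; the element locators are assumptions based on the-internet's login page:

```ruby
# A sketch of the baseline test, assuming the selenium-webdriver gem
# and the login page at http://the-internet.herokuapp.com/login
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox
begin
  driver.get 'http://the-internet.herokuapp.com/login'
  driver.find_element(id: 'username').send_keys 'tomsmith'
  driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'
  driver.find_element(css: 'button[type="submit"]').click

  # Assert that we reached a logged-in state via the success notification
  success = driver.find_element(css: '.flash.success').displayed?
  raise 'Login failed' unless success
ensure
  driver.quit
end
```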

Rather than storing the Selenium instance in the driver variable, we’re now storing it in a local browser variable and passing it into eyes.open — storing the WebDriver object that eyes.open returns in the driver variable instead.

This way the Eyes platform will be able to capture what our test is doing when we ask it to capture a screenshot. The Selenium actions in our test will not need to be modified.

Before calling eyes.open we provide the API key (which can be found on your Account Details page in Applitools). When calling eyes.open, we pass it the Selenium instance, the name of the app we’re testing (e.g., "the-internet"), and the name of the test (e.g., "Login succeeded").

With eyes.checkWindow() we specify where in the test’s workflow we’d like Applitools Eyes to capture a screenshot (along with some description text). For this test we want to check the page before logging in, and then the screen just after logging in — so we call eyes.checkWindow() twice.

NOTE: These visual checks are effectively doing the same work as the pre-existing assertion (e.g., where we’re asking Selenium if a success notification is displayed and asserting on the Boolean result) — in addition to reviewing other visual aspects of the page. So once we verify that our test is working correctly we can remove this assertion and still be covered.

We end the test with eyes.close. You may feel the urge to place this in teardown, but in addition to closing the session with Eyes, it acts like an assertion. If Eyes finds a failure in the app (or if a baseline image approval is required), then eyes.close will throw an exception, failing the test. So it’s best suited to live in the test.

NOTE: An exception from eyes.close will include a URL to the Applitools Eyes job in your test output. The job will include screenshots from each test step and enable you to play back the keystrokes and mouse movements from your Selenium tests.

When an exception gets thrown by eyes.close, the Eyes session will close. But if an exception occurs before eyes.close can fire, the session will remain open. To handle that, we’ll need to add an additional command to our teardown.
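Putting the pieces together in Ruby with the eyes_selenium gem, the wiring might look like this. Treat the exact calls as assumptions — the Ruby SDK uses snake_case names (eyes.open, check_window, close), and abort_if_not_closed is the cleanup command referred to above:

```ruby
require 'selenium-webdriver'
require 'eyes_selenium'

eyes = Applitools::Eyes.new
eyes.api_key = ENV['APPLITOOLS_API_KEY']  # from your Applitools Account Details page

browser = Selenium::WebDriver.for :firefox
# eyes.open wraps the Selenium instance; we drive the object it returns
driver = eyes.open(driver: browser,
                   app_name: 'the-internet',
                   test_name: 'Login succeeded')

begin
  driver.get 'http://the-internet.herokuapp.com/login'
  eyes.check_window 'Login page'        # visual check before logging in

  driver.find_element(id: 'username').send_keys 'tomsmith'
  driver.find_element(id: 'password').send_keys 'SuperSecretPassword!'
  driver.find_element(css: 'button[type="submit"]').click
  eyes.check_window 'Logged in'         # visual check after logging in

  eyes.close  # acts as an assertion: raises if Eyes flags a difference
ensure
  eyes.abort_if_not_closed  # cleans up the Eyes session if eyes.close never fired
  driver.quit
end
```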

We tell Sauce what we want in our test instance through DesiredCapabilities. The main things we want to specify are the browser, browser version, operating system (OS), and name of the test. You can see a full list of the available browser and OS combinations here.

In order to connect to Sauce, we need to provide an account username and access key. The access key can be found on your account page. These values get concatenated into a URL that points to Sauce’s on-demand Grid.

Once we have the DesiredCapabilities and concatenated URL, we create a Selenium Remote instance with them and store it in a local browser variable. Just like in our previous example, we feed browser to eyes.open and store the return object in the driver variable.
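Under the same assumptions, the Sauce-flavored setup might be sketched like so. The browser/version/platform values are illustrative, and the Grid URL follows Sauce's ondemand convention:

```ruby
require 'selenium-webdriver'
require 'eyes_selenium'

# Illustrative capabilities; any supported browser/OS combination works
caps = Selenium::WebDriver::Remote::Capabilities.internet_explorer(
  version:  '9',
  platform: 'Windows 7',
  name:     'Login succeeded')

# Credentials concatenated into the URL for Sauce's on-demand Grid
url = "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}" \
      '@ondemand.saucelabs.com:80/wd/hub'

browser = Selenium::WebDriver.for(:remote, url: url, desired_capabilities: caps)

eyes = Applitools::Eyes.new
eyes.api_key = ENV['APPLITOOLS_API_KEY']
driver = eyes.open(driver: browser,
                   app_name: 'the-internet',
                   test_name: 'Login succeeded')
# ... same test steps, eyes.check_window calls, and teardown as before ...
```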

Now when we run our test, if there’s a Selenium failure, a URL to the Sauce job will be returned in the test output.

Expected Outcome

Connect to Applitools Eyes

Load an instance of Selenium in Sauce Labs

Run the test, performing visual checks at specified points

Close the Applitools session

Close the Sauce Labs session

Return a URL to a failed job in either Applitools Eyes or Sauce Labs

Outro

Happy Testing!

About Dave Haeffner: Dave is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

If there is one myth in the [browser] automation world that drives me crazy, it is that browser automation scripts need to be written in the same language as the application. It seems like a Good Idea in principle, but in reality it is responsible for a lot of ‘failed’ automation efforts.

Let’s choose a language to pick on. How about C# with ASP.NET MVC? It has a large user base (especially in the enterprise space) and a pretty mature stack. (We could have picked any language…)

So now we have a nice ASP MVC application that we think is going to solve some customer’s burning needs and of course it’s nicely unit tested because you are doing some variant of TDD/BDD. Your browser automation scripts should naturally be written in C#, right?

Finally, a win-win-win for development, QA, and security! If your development team is looking for an easier way to incorporate security earlier, one that’s simple and easy for your team to understand, we may have a solution for you. Security defects are like any other defect: finding them early saves money and time. There are tools that execute security tests for security professionals, like NT OBJECTives’ NTOSpider. NTOSpider can use the application knowledge defined in Selenium scripts to execute a better, more comprehensive security test on an application.

Congrats to our VP of Engineering, Adam Christian! His Velocity Conference presentation, “The Black Magic of Leadership,” was featured on the Slideshare blog as one of three best leadership decks. Check out the original post and presentations here or below.

This is a follow-up post to a series highlighting Bleacher Report’s continuous integration and delivery methodology by Felix Rodriguez. To find out how they were previously handling their stack, visit the first, second, and third posts from June 2014.

There is definitely a huge Docker movement going on in the dev world right now and not many QA Engineers have gotten their hands dirty with the technology yet. What makes Docker so awesome is the ability to ship a container and almost guarantee its functionality.

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the sixth of eight posts; two new posts will be released each week.

Now that we have our tests written, refactored, and running locally it’s time to make them simple to launch by wrapping them with a command-line executor. After that, we’ll be able to easily add in the ability to run them in the cloud.

Quick Setup

appium_lib comes pre-wired with the ability to run our tests in Sauce Labs, but we’re still going to need two additional libraries to accomplish everything: rake for command-line execution, and sauce_whisk for some additional tasks not covered by appium_lib.

Notice that the syntax in this file reads a lot like Ruby — that’s because it is (along with some Rake specific syntax). For a primer on Rake, read this.

In this file we’ve created two tasks. One to run our iOS tests, and another for the Android tests. Each task changes directories into the correct device folder (e.g., Dir.chdir) and then launches the tests (e.g., exec 'rspec').
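Based on that description, the Rakefile might look like this. The android/ and ios/ directory names are assumptions carried over from the project layout in the earlier posts:

```ruby
# filename: Rakefile

desc 'Run iOS tests'
task :ios do
  # change into the device folder, then hand the process over to rspec
  Dir.chdir 'ios'
  exec 'rspec'
end

desc 'Run Android tests'
task :android do
  Dir.chdir 'android'
  exec 'rspec'
end
```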

If we save this file and run rake -T from the command-line, we will see these tasks listed along with their descriptions.

> rake -T
rake android # Run Android tests
rake ios # Run iOS tests

If we run either of these tasks (e.g., rake android or rake ios), they will execute the tests locally for each of the devices.

Running Your Tests In Sauce

As I mentioned before, appium_lib comes with the ability to run Appium tests in Sauce Labs. We just need to specify a Sauce account username and access key. To obtain an access key, you first need to have an account (if you don’t have one you can create a free trial one here). After that, log into the account and go to the bottom left of your dashboard; your access key will be listed there.

We’ll also need to make our apps available to Sauce. This can be accomplished by either uploading the app to Sauce, or making the app available from a publicly available URL. The former approach is easy enough to accomplish with the help of sauce_whisk.

Let’s go ahead and update our spec_helper.rb to add in this new upload capability (along with a couple of other bits).

Near the top of the file we pull in sauce_whisk. We then add in a couple of helper methods (using_sauce and upload_app). using_sauce checks to see if Sauce credentials have been set properly. upload_app uploads the application from local disk and then updates the capabilities to reference the path to the app on Sauce’s storage.
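A sketch of the two helpers, assuming the Sauce credentials live in environment variables and sauce_whisk is already required at the top of the spec helper (the sauce-storage: scheme is Sauce's convention for referencing uploaded apps):

```ruby
# Assumes `require 'sauce_whisk'` near the top of common/spec_helper.rb

def using_sauce
  # true only when both Sauce credentials are present and non-empty
  user = ENV['SAUCE_USERNAME']
  key  = ENV['SAUCE_ACCESS_KEY']
  !user.to_s.empty? && !key.to_s.empty?
end

def upload_app
  # upload the app from local disk, then point the capabilities
  # at its location in Sauce's temporary storage
  app = @caps[:caps][:app]
  SauceWhisk::Storage.new.upload app
  @caps[:caps][:app] = "sauce-storage:#{File.basename(app)}"
end
```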

We put these to use in setup_driver by wrapping them in a conditional to see if we are using Sauce. If so, we upload the app. We’re also removing the avd capability since it will cause issues with our Sauce run if we keep it in.

Next we’ll need to update our appium.txt files so they’ll play nice with Sauce.

In order to work with Sauce we need to specify the appium-version and the platformVersion. Everything else stays the same. You can see a full list of Sauce’s supported platforms and configuration options here.

Now let’s update our Rake tasks to be cloud aware. That way we can specify at run time whether to run things locally or in Sauce.

We’ve updated our Rake tasks so they can take an argument for the location. We then use this argument value and pass it to location_helper. The location_helper looks at the location value — if it is not set to 'sauce' then the Sauce credentials get set to nil. This helps us ensure that we really do want to run our tests on Sauce (e.g., we have to specify both the Sauce credentials AND the location).
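The cloud-aware tasks might be sketched like this; the helper that blanks the credentials unless 'sauce' was requested is the key bit:

```ruby
# filename: Rakefile

def location_helper(location)
  # unless Sauce was explicitly requested, blank the credentials so
  # the tests run locally even if the env vars happen to be set
  unless location == 'sauce'
    ENV['SAUCE_USERNAME'] = nil
    ENV['SAUCE_ACCESS_KEY'] = nil
  end
end

desc 'Run iOS tests'
task :ios, :location do |_t, args|
  location_helper args[:location]
  Dir.chdir 'ios'
  exec 'rspec'
end

desc 'Run Android tests'
task :android, :location do |_t, args|
  location_helper args[:location]
  Dir.chdir 'android'
  exec 'rspec'
end
```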

Now we can launch our tests locally just like before (e.g., rake ios) or in Sauce by specifying it as a location (e.g., rake ios['sauce']).

But in order for the tests to fire in Sauce Labs, we need to specify our credentials somehow. We’ve opted to keep them out of our Rakefile (and our test code) so that we can maintain future flexibility by not having them hard-coded, which is also more secure since we won’t be committing them to our repository.

After choosing a method for specifying your credentials, run your tests with one of the Rake tasks and specify 'sauce' as the location. Then log into your Sauce Account to see the test results and a video of the execution.

Making Your Sauce Runs Descriptive

It’s great that our tests are now running in Sauce. But it’s tough to sift through the test results since the name and test status are nondescript and all the same. Let’s fix that.

Fortunately, we can dynamically set the Sauce Labs job name and test status in our test code. We just need to provide this information before and after our test runs. To do that we’ll need to update the RSpec configuration in common/spec_helper.rb.

# filename: common/spec_helper.rb
...
RSpec.configure do |config|
  config.before(:each) do |example|
    $driver.caps[:name] = example.metadata[:full_description] if using_sauce
    $driver.start_driver
  end

  config.after(:each) do |example|
    if using_sauce
      SauceWhisk::Jobs.change_status $driver.driver.session_id, example.exception.nil?
    end
    driver_quit
  end
end

In before(:each) we update the name attribute of our capabilities (e.g., caps[:name]) with the name of the test. We get this name by tapping into the test’s metadata (e.g., example.metadata[:full_description]). And since we only want this to run if we’re using Sauce we wrap it in a conditional.

In after(:each) we leverage sauce_whisk to set the job status based on the test result, which we get by checking to see if any exceptions were raised. Again, we only want this to run if we’re using Sauce, so we wrap it in a conditional too.

Now if we run our tests in Sauce we will see them execute with the correct name and job status.

Outro

Now that we have local and cloud execution covered, it’s time to automate our test runs by plugging them into a Continuous Integration (CI) server.

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Last week Sauce Labs’ Chris Wren took a moment to chat with Vlad Filippov of Mozilla on his blog. Topics covered all things open source and front-end web development, so we thought we’d share. Click the image below to read the full interview, or just click here.

We love this blog post written by Quentin Thomas at HotelTonight! In it, he explains how they use Appium to automate their mobile tests. He also walks readers through specifics, such as the RSpec config helper. Read a snippet below.

Thanks to the engineers at Sauce Labs, it is now possible to tackle the mobile automation world with precision and consistency.

Appium, one of the newest automation frameworks introduced to the open source community, has become a valuable test tool for us at HotelTonight. The reason we chose this tool boils down to Appium’s philosophy.

“Appium is built on the idea that testing native apps shouldn’t require including an SDK or recompiling your app. And that you should be able to use your preferred test practices, frameworks, and tools”.

-Quentin Thomas, HotelTonight, June 17, 2014

To read the full post with code, click here. You can follow Quentin on Twitter at @TheQuengineer.

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.

This is the final post in a three part series highlighting Bleacher Report’s continuous integration and delivery methodology by Felix Rodriguez. Read the first post here and the second here.

Last week we discussed setting up an integration testing server that allows us to post, which then kicks off a suite of tests. Now that we are storing all of our suite runs and individual tests in a postgres database, we can do some interesting things – like track trends over time. At Bleacher Report we like to use a tool named Librato to store our metrics, create sweet graphs, and display pretty dashboards. One of the metrics that we record on every test run is our PageSpeed Insights score.

PageSpeed Insights

PageSpeed Insights is a tool provided by Google developers that analyzes your web or mobile page and gives you an overall rating. You can use the website to get a score manually, but instead we hooked into their API in order to submit our page-visit score to Librato. Each staging environment is recorded separately so that if any of them returns measurements that are off, we can attribute this to a server issue.

Any server that shows an extremely high rating is probably only loading a 500 error page. A server that shows an extremely low rating is probably some new, untested JS/CSS code we are running on that server.

Google’s PageSpeed Insights returns relatively fast, but as you start recording more metrics on each visit command to get results on both desktop and mobile, we suggest building a separate service that runs the desired performance test as a POST request, or at least in its own thread. This keeps the performance check from blocking the test run or causing a test to run long. Which brings us to our next topic.

Tracking Run Time

With Sauce Labs, you are able to quickly spot a test that takes a long time to run. But when you’re running hundreds of tests in parallel, all the time, it’s hard to keep track of the ones that normally take a long time to run versus the ones that have only recently started to take an abnormally long time to run. This is why our Cukebot service is so important to us.

Now that each test run is stored in our database, we grab the information Sauce stores for run time length and store it with the rest of the details from that test. We then submit that metric to Librato and track it over time in an instrument. Once again, if all of our tests take substantially longer to run on a specific environment, we can use that data to investigate issues with that server.

To do this, we take advantage of Cucumber’s before/after hooks to grab the time it took for the test to run in Sauce (or track it ourselves) and submit to Librato. We use the on_exit hook to record the total time of the suite and submit that as well.
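A sketch of those hooks, assuming the librato-metrics gem is authenticated elsewhere in the support code; the metric names and the STAGING_ENV variable are invented for illustration:

```ruby
# filename: features/support/hooks.rb
# Assumes Librato::Metrics.authenticate has been called during setup;
# metric names and ENV['STAGING_ENV'] are illustrative.

$suite_started_at = Time.now

Before do
  @scenario_started_at = Time.now
end

After do |_scenario|
  # time the scenario ourselves and submit it per staging environment
  elapsed = Time.now - @scenario_started_at
  Librato::Metrics.submit 'tests.run_time' => { value: elapsed,
                                                source: ENV['STAGING_ENV'] }
end

at_exit do
  # record the total time of the suite as well
  Librato::Metrics.submit 'tests.suite_time' => { value: Time.now - $suite_started_at }
end
```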

Test Pass/Fail Analytics

To see trends over time, we’d also like to measure our pass/fail percentage for each individual test on each separate staging environment as well as our entire suite pass/fail percentage. This would allow us to notify Ops about any servers that need to get “beefed up” if we run into a lot of timeout issues on that particular setup. This would also allow us to quickly make a decision about whether we should proceed with a deploy or not when there are failed tests that pass over 90% of the time and are currently failing.

The easiest way to achieve this is to use the Cucumber after-hook to query the postgres database for total passed test runs on the current environment in the last X amount of days, and divide that by the total test runs on the current environment in the same period to generate a percentage, store it, then track it over time to analyze trends.
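The percentage itself is simple arithmetic once the counts come back from the database. A sketch, with the query shape shown in a comment (table and column names are invented for illustration):

```ruby
# Hypothetical query shape against the Cukebot Postgres database:
#   SELECT count(*) FROM test_runs
#    WHERE environment = $1
#      AND created_at > now() - interval '30 days'
#      AND passed = true;
# ...and the same query without the `passed` filter for the total.

def pass_percentage(passed_count, total_count)
  return 0.0 if total_count.zero?
  (passed_count.to_f / total_count * 100).round(2)
end
```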

Summary:

Adding tools like these will allow you to look at a dashboard after each build and give your team the confidence to know that your code is ready to be released to the wild.

Running integration tests continuously used to be our biggest challenge. Now that we’ve finally arrived at the party, we’ve noticed that there are many other things we can automate. As our company strives for better product quality, this pushes our team’s standards with regard to what we choose to ship.

One tool we have been experimenting with and would like to add to our arsenal of automation is Blitz.io. So far we have seen great things from them and have caught a lot of traffic-related issues we would have missed otherwise.

Most of what I’ve talked about in this series has been done, but some is right around the corner from completion. If you believe we can enhance this process in any way, I would greatly appreciate any constructive criticism via my Twitter handle @feelobot. As Sauce says, “Automate all the Things!”

Have an idea for a blog post, webinar, or more? We want to hear from you! Submit topic ideas (or questions!) here.