There are many components to a successful web testing strategy, but one of the most often overlooked is the importance of visual UI testing in addition to functional testing.

Most teams will focus on one over the other, but to truly catch as many bugs as possible, you’ll need to incorporate both. First, you need to understand what the difference is and why they’re both needed.

Functional Testing

Let’s start by going over what functional testing actually is. Functional testing examines how the software actually works in relation to the given requirements. For example, a homepage on an e-commerce web application might have a menu button; if clicked, it’s expected to drop down with different options.

Functional testing is executed on tests of all sizes from unit tests to end-to-end cases, and it’s critical for making sure that user flows are working as intended and consistent with the product manager’s vision.

So for the same example with the menu button, you may also check that clicking one of the options in the drop-down takes you to the correct page. You can continue with a functional test case that adds an item to your cart and checks out, ensuring each of those actions works one after another and allows the user to successfully make a purchase.

To dive a little deeper, functional tests are basically actions and validations — the tester or tool performs an action and checks it against an expected validation, which either passes or fails.
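That action-and-validation loop can be sketched in a few lines. Here’s a minimal, framework-free illustration in Python (the `FakeMenu` page model is hypothetical, standing in for a real page driven by a browser automation tool):

```python
# A functional test step is an action followed by a validation.
# FakeMenu is a stand-in for a real page driven by a tool like Selenium.
class FakeMenu:
    def __init__(self):
        self.open = False

    def click(self):
        # The action: clicking the menu button opens the drop-down.
        self.open = True

def run_step(action, validation):
    # Perform the action, then report whether the expected result holds.
    action()
    return validation()

menu = FakeMenu()
passed = run_step(menu.click, lambda: menu.open)
print("PASS" if passed else "FAIL")
```

Real functional tests chain many such steps together (click the menu, choose an option, verify the new page), but each one reduces to this same act-then-check shape.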

Most of the time, when we talk about functional testing, we’re referring to automated testing where we’re getting a pass or fail test result. When thinking about cross-browser and device testing, functional testing should be included after deciding which browsers and devices to test in order to make sure those tests pass across different configurations.

If you’re in development, design, or QA, this information probably isn’t news to you. However, differentiating between functional and visual testing, and understanding when each is needed, will inform a more intelligent testing strategy.

Visual Testing

While it’s clearly important to test the functional elements of your application, some teams will make the mistake of doing the bulk of their testing to check functional efficiency with little or no regard for visual validation.

Visual testing requires the tester to consider how the application looks in addition to how it works. Oftentimes, the UI of an application can pass a functional test while visually defective design elements fly under the radar.

This becomes extremely important as we look at responsive design and consider the myriad of different devices, browsers, and operating systems. If an application is not built to be responsive, design elements may suffer on different screen sizes, compromising the user experience. This can go unnoticed if just functional testing is performed.

Visual bugs can be annoying or unpleasant to users, but they can also be far more severe than an inconvenience. Visual inconsistencies can disrupt the user journey to the point where it becomes difficult or even impossible to complete the intended action, even though the underlying functionality technically passes.

You can see the spectrum of severity when it comes to visual bugs in the following images from the Baymard Institute. In the screenshot of Amazon’s website, the text overlay has not rendered properly and is hard to read. In the example with the form field, visual issues make it difficult to fill out.

Photo via Baymard Institute

Photo via Baymard Institute

The issue may be that teams don’t have the time or capacity to do visual testing, or that they don’t prioritize it, but there are many organizations that are simply unaware that it should be an integral part of their strategy.

Automating Functional and Visual UI Testing

As mentioned, functional UI testing is most often done with a test automation tool, such as Selenium or Record & Replay, that will run your test in multiple browsers and give you a pass or fail result to tell you whether the application is working as intended.

However, this doesn’t mean you have to do visual testing all by hand and manually compare your website in different browsers. So how do you speed up visual testing?

A tool like CrossBrowserTesting can be used for visual UI testing to take automated screenshots across configurations. With the screenshot engine, you can compare a page on different browsers and devices side-by-side with your baseline configuration and evaluate highlighted layout differences that let you know where there may be bugs.

Since your team will probably be adding new features and changing the UI from time to time, you can also look at historical versions of your application for regression testing with an integration like Applitools Eyes.

Additionally, to ensure that your web application is put through regular functional and visual testing, you can also schedule tests in a multitude of ways — through Jenkins with Selenium, or with scheduling for Record & Replay or automated screenshots.

By implementing automation practices and incorporating visual testing into your existing strategy, your team can achieve more testing coverage without wasting any time. And your release cycles won’t be a guessing game.

In Summary…

Make sure your application works correctly and looks great

Visual bugs can be a mild inconvenience, or they could prevent your users from completing a crucial task

Ensure a positive overall user experience by testing across different browsers and devices

Leverage tools to automate both functional and visual testing for faster feedback

When we think of regression testing, we tend to think about functionality and how new code affects the way previously working elements behave. Will a new integration cause something to break? Will everything still work the same after updating backend code?

But just because you’re looking at functionality doesn’t mean you should let visual design testing fall through the cracks during regressions.

In fact, visual regression testing is necessary to make sure that style issues don’t pop up and your web application continues to look just as good as it works.

What is Visual Regression Testing?

Imagine trying to use a web application on your phone and not being able to find certain elements because of an overlapping image or text. Visual testing isn’t just important for making sure that your application looks good but also ensuring that users are able to navigate it without running into style issues that functional testing might pass as technically correct.

While visual testing compares screenshots of a web page in its current state side-by-side with different browser versions of the same page, visual regression testing is used to look at historical differences.

Just as regression testing is used to check that new code hasn’t disrupted the functionality of a previous version, visual regression testing lets you do the same with a visual testing tool, making sure that a web page still renders as expected across different browsers after changes to the code.
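As a toy illustration of the idea (not how any particular tool implements it), a visual regression check boils down to comparing the current render against a stored baseline:

```python
import hashlib

def image_fingerprint(png_bytes):
    # Hash the raw screenshot bytes; any rendering change alters the digest.
    return hashlib.sha256(png_bytes).hexdigest()

def has_regressed(baseline_png, current_png):
    # Crude visual-regression check: flag any byte-level difference.
    # Real tools compare pixels with tolerances instead of exact hashes.
    return image_fingerprint(baseline_png) != image_fingerprint(current_png)
```

In practice the baseline is the screenshot you approved for the last release, and the comparison is pixel-aware rather than an exact hash, but the workflow is the same: store, re-render, compare.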

Many organizations, such as America’s Test Kitchen, use visual regression testing to archive versions of a certain page to record how it changes from iteration to iteration.

With software teams that are shifting left and moving towards more continuous integration and continuous testing, visual regression testing is important to make sure that new changes don’t cause chaos to your layout. This is important to consider as the application evolves from version to version as well as across browsers.

How Can You Automate Visual Testing?

Tools like CrossBrowserTesting use a screenshot comparison engine to take automated screenshots of the same page across multiple browsers and devices. From here, you can choose a baseline browser, then compare highlighted layout differences side-by-side.

However, this process is not completely automated. Since pages are not passing or failing like in a Selenium test, it’s up to you to decide which differences are acceptable and which aren’t. For example, an image may be off by a few pixels in one browser but not another. This is probably fine unless it’s throwing off other elements.
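That judgment call about “off by a few pixels” is essentially a tolerance threshold. A simplified sketch of the idea, treating an image as a 2D grid of grayscale values (real comparison engines work on full-color bitmaps and group differences into regions):

```python
def diff_pixels(baseline, candidate, tolerance=0):
    # Return (x, y) coordinates where the two images differ by more than
    # the tolerance. Images are equal-sized 2D lists of grayscale values.
    diffs = []
    for y, (row_a, row_b) in enumerate(zip(baseline, candidate)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > tolerance:
                diffs.append((x, y))
    return diffs
```

With `tolerance=0` every stray pixel is flagged; raising it lets small rendering offsets pass while larger layout breaks still show up.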

The same goes for visual regression testing. CrossBrowserTesting can automatically spin up the recent and previous versions of a web page, but you have to analyze where there are bugs or inconsistencies.

Tools and Tips for Automated Visual Regression Testing

There are a few ways to do visual regression testing that are worth considering.

Wraith – You can also use Wraith, which works by crawling two websites, taking screenshots, and comparing them. Like PhantomCSS, it requires installation and scripting to use, but Wraith is comparatively more useful for testing sites with dynamic content. You can see this tutorial using Wraith for visual regression testing.

Selenium – Selenium isn’t traditionally known as a visual testing tool, but that doesn’t mean there’s no way to take screenshots. Our Selenium 101 tutorial shows you how to take screenshots during automated tests. This way, you can have automated functional, visual, and browser testing all in one.

CrossBrowserTesting and Applitools – For visual regression testing that requires no installation, setup, or coding knowledge, our integration with Applitools is your best bet. You can run visual comparisons of the same web page on the same browser or mobile device from a previous test session and get alerted of regression errors upon deployment or new releases.
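For the Selenium route above, the pattern is simply to call the driver’s `save_screenshot` at each step you care about. A small helper like the following keeps the files organized (the helper and its naming scheme are our own invention; it works with any object exposing `save_screenshot`, which every Selenium WebDriver does):

```python
def capture_step(driver, test_name, step_number, description):
    # Save a screenshot with a deterministic, sortable name per test step,
    # e.g. checkout_02_add-to-cart.png. `driver` can be any Selenium
    # WebDriver, since they all expose save_screenshot().
    filename = "%s_%02d_%s.png" % (
        test_name, step_number, description.lower().replace(" ", "-"))
    driver.save_screenshot(filename)
    return filename
```

Calling `capture_step(driver, "checkout", 2, "Add to Cart")` right after each functional assertion gives you a visual record of the whole flow alongside the pass/fail results.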

Making Your Website Visually and Functionally Flawless

You don’t have to be shallow to care about how your application looks.

A working web application is important, but to ensure it’s also visually appealing takes its own type of testing.

Visual regression testing allows you to pinpoint every pixel at the UI level so that HTML, CSS, and JavaScript differences don’t affect your user experience.

Have you ever wished that there was a way to combine visual and functional testing without having to either write a new script or manually run both alongside each other? The ability to take automated screenshots in visual testing is clearly valuable when comparing application design across browsers. However, for teams that incorporate test automation, it can also be useful to have screenshots taken when running checks.

Having the ability to debug automated test failures is one of the best ways for QA and development teams to speed up shipping, but it can’t be done without proper reporting. Creating a reliable and accurate pipeline of defect feedback is important for teams to be able to understand errors and identify improvements, and sharing screenshots through Jira, Slack, or email extends these capabilities to your team. But how do you take screenshots during a Selenium test?

Well, at CrossBrowserTesting, we strive to provide an all-in-one testing platform and this is just the kind of problem we love to solve. Customers have been enjoying using our automated screenshot capability for years, measuring responsiveness and visualizing mobile layouts on hundreds of browsers at once. Below, we’ll go over some of the ways in which you can combine our Automated Screenshot capability and your Selenium tests.

Use Cases

We’ve already come to the conclusion that it’s possible to combine visual and functional testing, but now let’s get to the why:

Take screenshots when tests fail at the exact moment of failure to help developers debug what might have gone wrong.

Take screenshots during your test to capture specific elements in your viewport to test specific layouts.

Capture each step of your test case both functionally and with a screenshot.

Taking Screenshots When Tests Fail

To take a screenshot when an assertion fails in your Selenium test, we’ll invoke CrossBrowserTesting’s API. The API endpoint for taking a snapshot can be found here. As you can see, if you already have a Selenium test started, it’s as easy as making a POST request inside your script with the Selenium session ID that you already have. Each WebDriver object creates its own Selenium session ID, and our hub uses that ID to match up test results. You’ll find an example of doing this in Python:
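A minimal sketch of that POST using only the standard library (the endpoint path follows the API documentation linked above; the username, authkey, and session ID are placeholders you’d supply from your own account and running WebDriver session):

```python
import base64
from urllib import request

API_BASE = "https://crossbrowsertesting.com/api/v3"

def snapshot_url(session_id):
    # The hub matches results by the Selenium session ID, so the
    # snapshot endpoint is keyed on it.
    return "%s/selenium/%s/snapshots" % (API_BASE, session_id)

def take_snapshot(session_id, username, authkey):
    # POST with HTTP basic auth to snapshot the current page state.
    req = request.Request(snapshot_url(session_id), data=b"", method="POST")
    token = base64.b64encode(("%s:%s" % (username, authkey)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return request.urlopen(req)
```

In a test you’d call it from an except block: perform the assertion, and on `AssertionError` call `take_snapshot(driver.session_id, username, authkey)` before re-raising, so the failure is captured at the exact moment it happens.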

From there, you should be able to see the results from within the app, and the script itself will have printed out a public-facing URL where your results can be seen during execution.

Creating Snapshots Through CrossBrowserTesting’s API

CBT’s API provides many useful functions that can enhance your ability to retrieve test results through your automated tests. One of the more commonly asked questions is “how can I take a snapshot during my Selenium test?” Doing so through our API is easy, and it can come in handy when catching the page during those all too prevalent bugs.

It’s not difficult to see where adding screenshots could be useful when running your scripts in Selenium. By logging errors and keeping track of whether tests pass or fail, teams can maintain accurate documentation and release software faster and with fewer bugs.

Browse the rest of our Selenium 101 series to learn more about getting started with Selenium.

From our humble beginnings as a manual testing tool with just around 50 browsers, CrossBrowserTesting has come a long way to be the reliable end-to-end testing tool you know today. With a range of offerings as old and problematic as IE6 and as new as Chrome 63 amongst our 1,500 configurations, we’re unquestionably the most complete cross-browser testing platform.

Another part of growing up and expanding was adding visual testing and automated testing capabilities in addition to live testing. Today, individuals and teams across the country depend on these features, but we want everyone to get the most bang for their buck. So, in order to truly understand our core functionalities and the best times to implement them, we’re explaining the best uses for live testing, visual testing, and automated testing for websites.

Live Testing

We started with live testing to give people a way to manually test their website in a variety of browsers. Today, manual testing is as important as ever. As teams adopt Agile and Continuous Testing, it’s important that manual testing isn’t overlooked and continues to be a part of the SDLC into the future. Having dedicated testers who understand software and are focused on finding defects and reporting bugs is essential to proper live testing.

Best Use Case – Exploratory testing.

Unlike automated testing, exploratory testing is not a strict set of rules for checking whether or not the application works. While there should be a plan going into exploratory testing, it’s also a somewhat ad-hoc process based on testing the limits of a new integration within the software, or even a completely new application, before it’s deployed. It takes a keen sense of observation and an understanding of the application’s design and purpose to execute well.

Using the live testing feature in CrossBrowserTesting, exploratory testing becomes more manageable with access to browsers, operating systems, and devices that teams may not have had access to before, but that their customers are using. By remotely accessing a variety of configurations rather than just the browsers your developers use, carrying out this detailed observation for a broad user base is far more practical.

Visual Testing

Visual testing is used to assess a web application’s responsiveness across browsers. By performing visual testing, you’re looking at the UI/UX components on the front end to decide whether the application under test is acceptable on a variety of browsers, devices, and screen resolutions, since they all provide a different experience.

Best Use Case – Screenshot comparisons for visual validation.

Our screenshot comparison engine lets you take screenshots across multiple browsers in a matter of seconds, allowing you to easily compare full-page layouts. By highlighting the differences from your baseline browser, it’s easier than ever to decide whether your web page is consistent across browsers without having to compare them manually. With the option to test on hundreds of devices, you can be confident that you’re accounting for every customer.

By performing visual regression testing, you can also compare new changes to historical versions to make sure any added integrations or application updates are supporting improvement rather than adding any glitches to the user experience.

Tips for Visual Testing – Run a risk analysis and pick five to ten of your highest priority pages. Find out what the most popular browsers are that your users are on, and run each of those targeted pages on those configurations to give you a basis as to whether or not the application is visually acceptable for the majority of your users. If you happen to find that a page is faulty, take to live testing to debug.

Automated Testing

Automated testing in CrossBrowserTesting relies on the Selenium and Appium open-source frameworks, allowing you to create test scripts in any major language and run them across multiple browsers. While effective automated testing with these frameworks does require programming knowledge, teams prefer it because tests run in a fraction of the time without every step being performed manually. Alternatively, Record & Replay provides a lightweight, codeless test automation solution for teams that haven’t yet learned to write Selenium scripts.

Best Use Case – Functional and regression testing.

You can take all the screenshots in the world, but just because something looks great doesn’t mean it also works like it’s supposed to. Automated testing digs deeper into the backend of a web application to exercise unit, integration, and end-to-end test cases and make sure that code renders properly on different browser configurations. By looking beyond the surface of the application, you gain better insight into whether or not it will actually work correctly for users at different stages of their journey through your application.

Test automation is also a preferred method for regression testing since repeated actions of long test suites don’t have to be done manually. Instead, running an automated regression test can check that previously stable code still is functioning as it’s supposed to after a new integration, feature, or bug fix is added.

Automated Testing Best Practices – Running an automated test across browsers in parallel can make testing up to twenty times faster, depending on how many configurations you include. While automation is already a great way to speed up repetitive or boring tests, parallel testing allows you to run that same script on other configurations at the same time rather than waiting until each is done to start the next.
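The speedup comes from fanning the same suite out across configurations concurrently instead of serially. A sketch with a stubbed-out suite runner (in practice `run_suite` would launch a remote Selenium session against the named configuration; the configuration strings are examples):

```python
from concurrent.futures import ThreadPoolExecutor

CONFIGS = [
    "Chrome 63 / Windows 10",
    "Firefox 57 / macOS High Sierra",
    "Safari / iOS 11",
]

def run_suite(config):
    # Stub: stands in for running the full Selenium suite against one
    # remote browser configuration; returns (config, passed).
    return (config, True)

def run_all(configs):
    # Run the same suite on every configuration at the same time,
    # rather than waiting for each to finish before starting the next.
    with ThreadPoolExecutor(max_workers=len(configs)) as pool:
        return list(pool.map(run_suite, configs))
```

With a thread per configuration, total wall-clock time approaches that of the slowest single run instead of the sum of all runs, which is where the "up to twenty times faster" figure comes from.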

Conclusion

While live testing, visual testing, and automated testing each have their own purpose, designing a strategy that involves all three is the key to truly comprehensive testing. As your team and product grow, using these capabilities to help your organization balance speed and quality will be what sets you apart from the competition and ensures your brand’s trustworthiness.

Start the New Year right by joining us on Thursday, January 18th from 1:00 to 2:00 PM EDT as Applitools’ Aakrit Prasad and our very own Chase Cook take you through our newest integration and teach you how to automate visual testing without complicated scripting or a lengthy manual process.

With no coding or Selenium knowledge needed, the integration is truly accessible to anyone. However, whether you’re a QA novice or test automation expert, the capabilities being introduced during this webinar will also be an asset to every software team going into the new year.

By combining the power of 1,500 real desktop and mobile browsers in the cloud with the leader in AI-powered visual comparison testing tools, there’s truly no better alternative for visual regression testing.

Don’t fall behind in 2018. Take your visual testing to the next level by learning how to leverage automated visual testing to simplify and shorten testing cycles at the same time.

What happens when you combine over 1,500 real desktop and mobile browsers with the leader in AI-powered visual comparison testing tools? Simply put, an unbeatable codeless visual testing strategy.

Now it’s easier than ever to take automated browser screenshots across hundreds of different browsers with CrossBrowserTesting, and then have Applitools Eyes analyze the differences, down to the pixel level.

We’ve always thought Applitools did an amazing job with their visual comparisons, and once we investigated the solution further, we saw what a great choice it was for regression testing. We also thought we could couple our browser and device lab’s Automated Screenshot functionality with Applitools’ Image Comparison engine to create something really special and bring customers a faster, simpler way to produce the screenshots needed for comparison.

What is Visual Testing?

So, what is visual testing in the first place, and why should you be excited about this partnership? Visual testing allows you to verify that your website or web application looks correct at the UI level. While functional testing focuses on making sure things work, visual testing focuses on making sure things look right. And it’s becoming a bigger deal!

Since different browsers will often render an application differently, visual testing is critical in cross-browser testing to make sure the actual user experience aligns with how it’s designed to look in terms of layout, text, images, etc. If a button can be clicked, but it’s nearly off the page entirely on an iPhone 6 and looks awful, this code shouldn’t go to production. But if you simply had Selenium tests or another type of functional testing, this error would be missed as it “technically” did pass our functionality test.

Previously, visual testing on multiple browsers and devices required technical knowledge of scripting or a long manual process. However, with the CrossBrowserTesting and Applitools integration, comparing screenshot regressions is smooth and streamlined for a tester of any level — no coding necessary.

Getting Started with the Applitools Integration in CrossBrowserTesting

When you run a test under “Screenshots” in CrossBrowserTesting and click advanced options, you should see an option to “Send to Applitools”. Click “Set” to enter your Applitools API key, pick the browsers you want to use, run the test, and that’s it! You will receive screenshots of your application in each browser, and the test will be automatically sent to Applitools, where you can see each batch.

The best part is that when you run your tests again in the future, it’s easy to find the test in CrossBrowserTesting and select “Run Again” to create new screenshots that will automatically upload to Applitools and match against their earlier versions. That’s when you can go to Applitools for a visual comparison that shows passing and failing tests. These types of visual regression tests can be incredibly powerful and save hours of time writing code.

For more information on using the CrossBrowserTesting and Applitools integration, you can visit our support page here.

Happier Testing

To sum it up, here are a few reasons why the CrossBrowserTesting and Applitools Eyes integration will be your new favorite visual testing duo:

Visual regression testing with screenshot comparisons

1,500+ real desktop and mobile browsers

Zero Selenium knowledge or coding skills needed

Easy to save and easy to share test results

While CrossBrowserTesting and Applitools have each exceeded expectations on their own, the integration will be a powerhouse for teams looking for the ultimate responsive design package. Test faster and across more browsers with no coding or maintenance, confident that you’ll be releasing an application that will impress your users.

Recently, visual testing tools have been a popular way for designers and developers to evaluate a website’s responsiveness across browsers. However, to some software teams, a visual testing tool might seem like a luxury instead of a necessity.

While the idea of a good full-page screenshot is satisfying to any UX enthusiast, when does visual testing become really valuable? There are a few key moments where software teams tend to end up wishing they had a visual testing tool.

1. When you change code – In a perfect world, we could simply integrate changes to code every day without worrying about breaking another part of our application. Sadly, that’s not the reality. When we change code, we have to check to make sure everything still works. Then, we have to do this again on a few different browsers. However, doing this manually every time on your work computer is not only annoying, it’s also inefficient. Instead, running visual tests lets you compare your new changes to historical versions across browsers for easier and more accurate regression testing.

2. When you don’t have the same machine your customers use – Again, life would be a lot easier if you could get away with just testing on the same computer you use for developing every time. Of course, your customers are actually on hundreds of different browser and device combinations. Check out Google Analytics to see which configurations you should be testing on, and make sure they’re being visually verified. You can do this laboriously with a device lab or smoothly with a visual testing tool for screenshot comparisons — the choice is yours.

3. When a new browser, OS, or device is released – Even after looking at Google Analytics, you can’t depend on these few machines to cover your testing needs for very long. New browsers, operating systems, and devices are coming out all the time. While a good amount of users might be on the iPhone 7 now, the iPhone X will surely throw a wrench in your testing. Unless you want to go out and buy all these devices yourself and physically compare them side-by-side, a visual testing tool provides a way to access browsers that are continuously added, updated, and maintained via the cloud.

4. When you want to increase communication – Test reporting and documentation is always a struggle, not to mention getting results to the people that need them. Fortunately, visual testing tools usually have a few ways to make test reporting more accessible for everyone. With integrations like Slack, Hipchat, and Jira, screenshots can be easily shared on the messaging platforms that your entire team uses. Additionally, features like visual reporting make it simple to analyze usage and stay on the same page every sprint.

5. When you need extra help finding layout differences – There are a lot of reasons we still need dedicated testers. But if you’re working on building your QA team or operating as a one-man band (a.k.a. freelance developer/designer), then having a tool that literally highlights browser differences can be a lifesaver. This way, you can stop searching each browser for elements that ruin your design and instead get to fixing them faster.

6. When you want to speed things up – We’re living in the golden age of test automation. It’s not good enough just to have access to unlimited browsers at the drop of a hat; getting the results should be automated, too. While live testing is a great tool in itself, it can only take you so far when it comes to gaining fast feedback and meeting deadlines. Automating your visual testing has multiple benefits: you can pull up screenshots more easily, evaluate them more quickly, and debug them sooner.

7. When you want to test your design before the public sees it – The whole point of testing is to make sure your application looks great before it gets to the user, so what’s the point of running regression tests after you put a redesign into production? Using a tool that includes local testing allows you to address issues before your website goes live so a bug doesn’t ruin your latest unveil.

Conclusion

Visual testing is more than just a pretty interface; it’s a tool that’s inherently helpful during development. Additionally, as teams continue to shift left and users continue to access the web from increasingly fragmented devices, an efficient website visual testing tool will be an asset in building the app you’re aiming for.

We had many customers asking for better reporting tools and more intuitive visualization of their testing usage. At CrossBrowserTesting, we always keep an ear out for customer feedback, and we actually implement the really good suggestions into our product whenever we can. This was clearly something that we had to pursue.

Since our front-end is built on Angular, we had a few requirements for this project in terms of its scope and the types of libraries we could consider for this task.

When we began the process, we were mainly deciding between three libraries:

Angular-nvD3.js

Angular-Chart.js

FusionCharts

In the end, it ultimately came down to the following requirements:

– Ease of implementation – Probably the most frustrating part of open-source development is when you find a great library with terrible/hard-to-understand documentation that makes using it next to impossible. We wanted something that included docs, simple quick-start tutorials, etc. that would make it easier on us.
– A focus on implementation with Angular – While we could build our own Angular directives to wrap around a charting library, we greatly preferred to have one pre-built.
– Powerful customization – If we need borders, we should be able, but not required, to put them on the chart. The same goes for legends, titles, hover states, click handlers, etc.
– A relatively recent library that we’re reasonably sure is maintained – We can’t use a library that hasn’t been updated in several years, or else we run a high risk of running into errors that we don’t have the time to fix.

Out of all the charting tools we tried, Angular-Chart.js was by far the easiest to set up. The documentation was actually the most difficult part of the project because the examples were too simple and didn’t provide the additional context that would’ve made several components — such as setting up the correct axis scaling — a more intuitive process.

Angular-Chart.js was, as the name implies, an Angular implementation of Chart.js. Again, one of the key attributes we were looking for in a charting library was an Angular implementation, and in this regard, Angular-Chart.js performed flawlessly.

The library strikes an intuitive balance between ease-of-use and customization — we were easily able to turn the chart into exactly what we needed, and it only took us a couple of days. The finer points of the charts are still a bit confusing, but being able to understand the library enough to implement it fully as we needed was part of what helped us to choose Angular-Chart.js.

In terms of power and customization, Angular-Chart.js is second-to-none. We were actually able to use a pie chart to create a gauge that tells our users exactly how many of their parallel tests are being run at once. That is, we used a chart designed for showing percentages and turned it into a clean-looking counter. This is exactly the type of customization we were looking for in an open-source charting framework.

The parallel testing usage chart shows the user what their maximum parallel usage is for manual and automated tests versus how much they have used.

The parallel usage gauge shows you how many parallel tests you are running in real time at any given moment.

In the end, the biggest challenge was using the documentation for Chart.js to plug holes in our knowledge of Angular-Chart.js. It took two iterations to complete what is now our current version of our reporting usage page, though we do still have plans for more charting implementation.

On that note, we have a pretty good idea of how documentation could be improved at this point. If anyone on the Angular-Chart.js team wants some suggestions for how to help users onboard more efficiently, we’d be happy to help out if you give us a call — we ♥ Open Source projects.

Thanks again to our users for another amazing suggestion. We hope you enjoy our new visual reporting built with Angular-Chart.js and look forward to hearing more feedback from the testing community.