Category: Software Testing

The Selenium Story

In 2004, Jason Huggins developed an early version of Selenium-RC as an internal tool at ThoughtWorks. Selenium IDE was originally created by Shinya Kasatani and donated to the Selenium project in 2006. Google got involved in 2009 and is largely responsible for what became WebDriver. Then in 2012 a W3C standard draft was published, solidifying WebDriver’s place as the de facto standard. Today Selenium is widely used in the QA community. Just hop on any employment board and see how many QA Automation positions mention Selenium.

Requirements

For the Selenium IDE content, you’ll need a basic understanding of HTML and JavaScript.

For the WebDriver content, you’ll need a basic understanding of Java or another supported language.

Selenium IDE

The Selenium IDE is a Firefox add-on that allows the user to record, edit, and play back actions in the browser. If you have ever used macros in Microsoft’s Office products or AutoIt, this will be familiar. These tests are saved as HTML files in a format referred to as Selenese, and test playback is done via JavaScript. So when you have Command = open and Target = http://timothycope.com,

it is executed almost as if you opened the Developer Tools Console (F12) and in the Console tab ran window.location.href = 'http://timothycope.com';
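To make the Selenese format concrete, here is a minimal, stdlib-only Java sketch of pulling the command/target/value triple out of one Selenese table row. This is not the IDE’s actual parser, just an illustration of how a step is stored as three HTML table cells:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SeleneseRow {
    // Matches the contents of each <td> cell in one Selenese table row.
    private static final Pattern CELL = Pattern.compile("<td>(.*?)</td>");

    /** Returns {command, target, value} for a single <tr> of a Selenese test. */
    public static String[] parse(String row) {
        Matcher m = CELL.matcher(row);
        String[] step = new String[3];
        for (int i = 0; i < 3 && m.find(); i++) {
            step[i] = m.group(1);
        }
        return step;
    }

    public static void main(String[] args) {
        String row = "<tr><td>open</td><td>http://timothycope.com</td><td></td></tr>";
        String[] step = parse(row);
        System.out.println(step[0] + " | " + step[1]); // open | http://timothycope.com
    }
}
```

A real Selenese file wraps many such rows in a table; the IDE reads them back the same way, cell by cell.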

Selenium Tests

Test steps are composed of three things: command, target, and value.

A command tells Selenium what to do. Selenium commands come in three ‘flavors’:

Actions are commands that generally manipulate the state of the application. They do things like “click this link” and “select that option”. If an Action fails, or has an error, the execution of the current test is stopped.

Many Actions can be called with the “AndWait” suffix, e.g. “clickAndWait”. This suffix tells Selenium that the action will cause the browser to make a call to the server, and that Selenium should wait for a new page to load.

Accessors examine the state of the application and store the results in variables, e.g. “storeTitle”. They are also used to automatically generate Assertions.

Assertions are like Accessors, but they verify that the state of the application conforms to what is expected. Examples include “make sure the page title is X” and “verify that this checkbox is checked”.

The target is an element locator for the HTML element. There are a number of supported locator strategies:

By ID – This is the most efficient and preferred way to locate an element. Common pitfalls for UI developers are non-unique ids on a page and auto-generated ids; both should be avoided. A class on an HTML element is more appropriate than an auto-generated id.

By Class – “Class” in this case refers to the attribute on the DOM element. In practice there are often many DOM elements with the same class name, so finding multiple elements becomes more practical than finding just the first element.

By CSS – As the name implies, this is a locator strategy using CSS selectors. Native browser support is used by default, so refer to the W3C CSS selectors documentation for a list of generally available selectors. If a browser does not have native support for CSS queries, then Sizzle is used; IE 6/7 and Firefox 3.0 currently use Sizzle as the CSS query engine.

Beware that not all browsers were created equal; some CSS that works in one version may not work in another.

By XPath – At a high level, WebDriver uses a browser’s native XPath capabilities wherever possible. On browsers that don’t have native XPath support, Selenium provides its own implementation. This can lead to some unexpected behaviour unless you are aware of the differences between the various XPath engines.

Using JavaScript – You can execute arbitrary JavaScript to find an element; as long as you return a DOM element, it will be automatically converted to a WebElement object.
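Since XPath behaviour differs across engines, it can help to see what an XPath expression actually resolves to. Here is a small sketch using Java’s built-in javax.xml.xpath against a well-formed snippet; note this is only an illustration of XPath evaluation, not how WebDriver resolves locators internally:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class XPathDemo {
    /** Evaluates an XPath expression against a well-formed markup snippet
        and returns the text of the first matching node, or null if none match. */
    public static String firstMatch(String markup, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(markup.getBytes(StandardCharsets.UTF_8)));
        Node node = (Node) XPathFactory.newInstance().newXPath()
                .evaluate(xpath, doc, XPathConstants.NODE);
        return node == null ? null : node.getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String page = "<html><body><a id='home' href='/'>Home</a></body></html>";
        // The same kind of expression you would hand to an XPath locator.
        System.out.println(firstMatch(page, "//a[@id='home']")); // Home
    }
}
```

Real pages are rarely well-formed XML, which is exactly why the browser engines (and Selenium’s fallback implementation) can disagree.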

The value is used for certain commands. Official documentation is currently lacking with respect to a list of all API endpoints and their support in the various frameworks. Here is a list of all actions, though.

IDE to WebDriver

You can “write” your tests by using Selenium IDE to record your actions, then export to your language of choice. You may find that some of your tests are not exported how you might expect. Some libraries are not fully fleshed out, so you’ll get a comment in the exported code saying that the command is unsupported. When this happens you’ll need to write something language-specific to suit your needs.

After you export the test from IDE to WebDriver, your code will now be interpreted by WebDriver and executed on the browser.

It is more likely that you will build a suite of tests and have them executed by something like JUnit or NUnit. You could even run them on multiple machines using something like Selenium Grid 2. Many CI tools, like Jenkins, can kick off the tests automatically with any new code change.

Selenium WebDriver

The Selenium WebDriver is an application that works as a middleman between your automation code and your browser. It can be called from many languages and frameworks using the WebDriver API. The level of support you get differs based on the programming language, operating system, and browser you use; know that Selenium is developed primarily against Java, Windows, and Firefox, with limited support for Linux and OS X. Additional browsers are supported via extensions. Headless browsers, like HtmlUnit, allow a UI-less web view to be used as a web browser.

Background

I was hired to start up a QA department from scratch for a website agency. The company had been getting along without QA but saw the need to improve its product(s). People had been used to doing things their way, the way that “works”. Getting the company in line would be a challenge.

The Learning Curve

In past positions, I had just walked into a QA department and hit the ground running (they had established processes). This time, though, I had to figure it all out. The benefit was that I got to do it my way. I had lessons learned from previous companies, so I knew what to keep an eye out for. First, I needed to get on the same page as management. I kept hearing buzzwords from the VP and decided that it would be wise to investigate. This consisted of buying a handful of books (I’ll list them at the end of this post).

Lessons Learned

I had heard “Scrum” and “Agile” before and had been part of half-assed implementations. This half-assery led to abandoning them and going back to “Waterfall“. So I knew I needed to dive in completely and commit. Through that experience, I found that being on the bottom of the totem pole sucks. Management, it felt, came out of the ’80s school of business that Juran describes:

… many upper managers adopted strategies that involved setting vague goals and then delegating … the responsibility for meeting those vague goals. – Juran on Quality by Design (p. 24)

To fill you in further, my QA Director at the time had suggested that the team go out and subscribe to Scrum and Agile newsletters. This aggravates me now. If you want to adopt something, commit the entire department and train them. I think the gap there was that the director didn’t understand the concepts fully. My goal is to understand and train.

Learning for Myself

Years ago, work offered to pay for an IIST certification. I took the class and hated it. As a QA tester, it was hard to look past such a shitty site that only worked in IE8. I would later read reviews and find that most people felt the same. But I digress; I did learn something. Throughout the course, Effective Test Design, I kept hearing the narrator reference a book. Reflecting on this, I decided that I could read (and pay for) other people’s interpretations, or I could read those volumes myself and come to my own conclusions. At another previous job, I had killed time when I was slow by watching whatever I could on YouTube (related to my job). I came across one of James Bach‘s keynotes. In it, he talked about the importance of being an autodidact.

Where to Start?

As I mentioned in The Learning Curve (above), I needed to get on the same page as management. So, I looked up this word “kaizen” that I kept hearing, which led me to Taiichi Ohno’s Workplace Management. I bought the book and read it in a few sittings at home. It is a short read and I highly recommend it, no matter your position. From this book, I gained an understanding of “Lean“. The company that hired me to create a QA department is big on Lean.

My next book was one that I think was referenced in the IIST course I took: “The Art of Software Testing“. That book seems to have been the QA Bible for the past 25+ years. Reading through, my mind was on the lookout for things we could try. I tell my team to “be all kaizen and shit”, meaning “let us try, committing fully, to continuously improve.” Some lessons learned from the book didn’t apply just to QA. I was able to establish guidelines for Code Reviews (The Art of Software Testing, p. 22) and give our sales team a quote or two so they could sell our QA service as a product (The Art of Software Testing, p. 141).

Next up, “Juran on Quality by Design“. This one was recommended to one of my employees by a family friend. As the tagline states, it is about planning for quality. From it I learned that it is best to train everyone to be an expert on quality, rather than to rely on a couple of “experts”.

There are so many lessons learned from each book that I could write a post on each. I’ll keep it short and sweet right now, since I am focused on the subject title, “Training Quality”.

Lessons Learned

Commit fully

Continuously Improve

Always look for “waste”

Implement new processes that you believe will have benefit

Be prepared to admit when things do not work

Management needs to be with and around their team

Quality needs to be understood by everyone

Companies automate processes to save time, but end up automating bad processes

Implementing Changes

Some changes are easier to implement than others, mostly due to the perceived interruptions. I had to tell the sales and project management teams that billable hours were going to go up. This shouldn’t have been a surprise, but I had set a dangerous precedent. In my efforts to hit the ground running, I did my best to be as frictionless as possible and just hammer through tickets until I could judge the workload and get the appropriate number of people working under me. This meant that requirements were lax and no test plans or automation were being done.

First Things First

We needed to get our tickets (for “Kanban“) under control. No one ticket looked like another; we needed a standard. I worked with the VP and PM Director to hammer out a template that would fit all user stories. It ended up being based on Kent Beck‘s “user stories“.

Without standards, there can be no Kaizen – Taiichi Ohno

It was an uphill battle as people used “what worked” before. I got the VP involved and we sat down with the PM/PO team to explain why the change was needed. Next, we needed to police it. If you tell someone to change the process and they revert, or just as bad, “half-ass” it, then no real benefits can be sussed out. I asked the Dev team to help by pushing back tickets (moving them left on the Kanban board) until they were right. This resulted in a lot of side convos of “Oh, I meant this…”, which was never put into the ticket. I had adopted the mantra, “If it wasn’t written, it wasn’t said”: QA passes or fails tickets based on the documented Acceptance Criteria. So when QA got the ticket in our queue, we would have to send it left.

One suggestion we ran with is “moved-left” tracking. We, in QA, would note on a shared spreadsheet who was fucking up and why. This included anything that made a ticket “move left” on the board by anyone (PMs, Designers, Developers, QA). This was met with some resistance, and I had to clarify that we are not on a witch hunt. We are just looking for common issues among people in the same departments, or even a single person making the same mistakes repeatedly. To alleviate the dark nature, we added a “kudos” tab to call out people who are improving. I jokingly hand out a gold star each week. With this arrow in our proverbial quiver, we are able to address issues with standards and metrics!

Lead by Example

We in QA also needed to try out a few things and improve the process while it was still in its infancy. Here is a short list of things we tried and their results:

Extreme QA: Taking a page from Extreme Programming by having two testers on a single ticket

Result: Abandoned. We felt like more issues were being caught but our backlog quickly grew.

4x10s: We were light on work on Monday and Friday so I asked my team to pull 4x10s one sprint.

Result: Optional. We saw that our billable hours evened out but at the cost of energy.

Automating Regression: Automate a ticket after it has been manually tested. A ticket does not move right (to UAT) until it has been automated.

Result: Partial. We had to find a balance. Working at an agency means a project is constantly in a state of flux until it is done. We now automate everything on the back end and use our best judgement for the front end.

Test Plans

Result: Kept. We went through a few iterations of Test Plan templates and I think our current iteration will suffice. The long story short here is that the “old way” of doing it was too granular. Now it is essentially a collection of user stories.

By trying these and communicating our efforts, we show the company that it is okay to try and fail. I want to take a moment to point out something I picked up from Ohno: a manager does not need reports on processes; they should be on the floor (the “Gemba“) with the team, able to see it for themselves.

Background

I have worked in “enterprise” settings most of my career. In these settings automation handled a full regression and was costly to maintain. In this article I will point out a few lessons learned from working in an agency setting. This will be scoped to web applications.

Definitions

Enterprise (setting) – Typically an app or two with a very long life cycle. Most often, a bug introduced could mean lost revenue.

Agency (setting) – Applications are developed and handed over to the client when development is complete.

Concurrent (testing model) – ‘Regression’ automated tests are run after a new build. At the same time, or shortly after, you test the new functionality. A ticket is not complete until the new functionality is added to the automated regression plan.

Considerations

Does the automation require heavy maintenance?

In development a page can change with every commit.

How well does your automation lend itself to the Scrum methodology?

If your automation package requires a hand off longer than a couple hours, then it is likely not set up in an intuitive way.

Can anyone contribute at any time?

Multiple testers on a given project ought to be able to make changes to test cases as needed.

The answer to this question can depend on a couple of factors:

Do you use source control?

Is your automation framework available to the entire team?

What value does the automation provide?

If the manual regression test takes longer than a day and needs to be run more than once a month, then automation is likely a worthwhile investment.

Providing Cost Benefit

In an agency setting we bill time and materials. If it takes me an hour to test a new function, it might take an hour and a half more to make a solid automated test (I always ballpark initial automation as “time to test + 50%”). So already we are at 2.5 hours. This isn’t a big deal, as the automated test provides value by saving regression hours on subsequent builds. Additional cost is incurred with each build that breaks your test; as your regression suite grows, so too does your required maintenance. A way to minimize this cost is to use a simple framework. QA automation specialists have their choice of many frameworks, and those frameworks typically offer more than you really need. Selenium is one such tool: it provides a bevy of commands that mostly go unused and requires many additional tools to actually use. With Selenium you work in a fragmented system. You could develop your test using Selenium IDE and export to your language of choice. You then fix any issues with your exported code and use something like JUnit to run the tests. If you want to schedule your tests and email results, you’ll have to write that yourself.
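The “time to test + 50%” ballpark above can be sketched as a tiny Java helper. The break-even framing is my own illustration of the article’s numbers, not a formula the author gives:

```java
public class AutomationEstimate {
    /** Ballpark: automating takes about 1.5x the manual testing time,
        so the initial cost is the manual pass plus the automation effort. */
    public static double initialCostHours(double manualHours) {
        return manualHours + manualHours * 1.5;
    }

    public static void main(String[] args) {
        // One hour of manual testing plus an hour and a half of automation work.
        System.out.println(initialCostHours(1.0)); // 2.5
    }
}
```

Every subsequent build that reuses the automated test instead of a manual pass claws back that up-front cost, which is why the estimate only pays off for functionality that gets regression-tested repeatedly.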

I am a fan of a paid product called Telerik Test Studio. Telerik allows us to make, edit, run, schedule, and report on tests quite easily. Its downfall is that it is only available on Windows and tests only IE, Firefox, Chrome, and WPF. Most shops end up using Selenium as it’s free and can be extended to many devices via third-party plugins. Microsoft Test Manager uses the Selenium driver for Chrome, too.

I use my automation suite primarily to test MVP flows and happy paths. I will include some negative tests, of course, but most of this should have already been addressed in manual testing, so these tests provide little value outside of a “sanity” check. “But it doesn’t cover Safari or mobile,” you might contend. I have found mobile device emulators to be unreliable, and outside of Appium there isn’t a one-size-fits-all solution. I do most of my initial, manual testing in Safari, as my primary machine is a Mac. I also have a pile of mobile devices and an intern. If something works in Telerik but not on those devices, it’s usually a device-specific issue anyway.

The Value of Automation

As noted before, automation provides a quick way to verify previous functionality in new builds. The argument that it is necessary for quicker and more reliable QA is easily won; I have never had an employer refuse to purchase an automation solution. My argument was presented by showing how long it takes to create and run tests in given frameworks. The cost, compared to Selenium, is often paid for in saved development hours. In the agency setting, I am always thinking of fairness to the client. Should the client have to pay for extra maintenance hours because you chose the wrong framework? No. That means maintenance needs to be as simple and quick as possible: simplicity allows any team member to update the test case(s), and the speed with which issues are resolved is tied to your testing framework.

Other Facets of Test Automation

In addition to automated regression tests, I rely on SortSite, which checks for things like broken links, accessibility issues, and standards compliance. This tool saves a lot of time over checking for these things manually and helps ensure a quality product.

A Note on CI

When your dev team finds out you have automation, they’ll likely ask if it can be included as part of the build process. This comes from their love of the unit tests included in the project. It is certainly feasible, but not without issues: it is almost guaranteed that some tests will fail because the new build changed an element’s id or an XPath. For that reason I don’t include automated regression in the build process; I just run it when I need to, which is often right after a build or deployment.

Background

HtmlUnit is a “GUI-Less browser for Java programs”. It models HTML documents and provides an API that allows you to invoke pages, fill out forms, click links, etc… just like you do in your “normal” browser. – http://htmlunit.sourceforge.net/

Problem

We want to use a headless browser’s functions to scrape a webpage for all instances of <a> to verify each contains a title="" attribute. This will be an accessibility test.

Environment

I am using OS X, Eclipse for Java, and JUnit, but everything I cover can be applied to whatever environment you develop in. My environment is the one set up in a previous post, http://timothycope.com/?p=274

Solution – Using HtmlUnit to Scrape Webpages

We’ll need to import HtmlUnit’s .jar file into the Eclipse project. After that’s done we can create an instance of HtmlUnit, called a WebClient.
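Since wiring up HtmlUnit’s WebClient depends on your project setup, here is a simplified, stdlib-only sketch of the accessibility check itself: scanning anchor tags for a title attribute with a regex. With HtmlUnit you would walk the parsed DOM (e.g. the page’s anchor elements) instead of regex-matching raw markup:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AnchorTitleCheck {
    // Grabs each opening <a ...> tag; a real run would use HtmlUnit's parsed DOM.
    private static final Pattern ANCHOR =
            Pattern.compile("<a\\b[^>]*>", Pattern.CASE_INSENSITIVE);

    /** Returns the opening tags of all anchors missing a title="" attribute. */
    public static List<String> anchorsMissingTitle(String html) {
        List<String> missing = new ArrayList<>();
        Matcher m = ANCHOR.matcher(html);
        while (m.find()) {
            if (!m.group().toLowerCase().contains("title=")) {
                missing.add(m.group());
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        String html = "<a href='/' title='Home'>Home</a><a href='/blog'>Blog</a>";
        System.out.println(anchorsMissingTitle(html)); // [<a href='/blog'>]
    }
}
```

The assertion for the test is then simply that the returned list is empty; any entries are the offending anchors to report.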

Background

Markup Validator Web Service

Interface applications with the Markup Validator through its experimental API. This is version 0.2, dated May 2007. For a history of the format, see Change Log. – http://validator.w3.org/docs/api.html

To call this service you’ll need to make an HTTP request to http://validator.w3.org/check?output=soap12&uri= with your URL appended. The output parameter tells the API we want a SOAP 1.2 response, which is in XML format. If you remove the parameter, you will receive an HTML response.
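Building that request URL in Java is a one-liner; URL-encoding the page address is my own precaution (the quoted API doc doesn’t spell out the encoding), since a raw URL contains characters like : and / that belong to the query string:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class ValidatorUrl {
    private static final String ENDPOINT = "http://validator.w3.org/check";

    /** Builds the check URL; output=soap12 asks for a SOAP 1.2 (XML) response. */
    public static String checkUrl(String pageUrl) throws UnsupportedEncodingException {
        return ENDPOINT + "?output=soap12&uri=" + URLEncoder.encode(pageUrl, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(checkUrl("http://timothycope.com"));
        // http://validator.w3.org/check?output=soap12&uri=http%3A%2F%2Ftimothycope.com
    }
}
```

From there, fetching the URL with any HTTP client returns the SOAP 1.2 XML body you can parse for validation errors.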

Background

Selenium automates browsers. That’s it! What you do with that power is entirely up to you. Primarily, it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should!) also be automated as well. – http://docs.seleniumhq.org/

JUnit is a simple framework to write repeatable tests. It is an instance of the xUnit architecture for unit testing frameworks. – http://junit.org/

Eclipse is a platform that has been designed from the ground up for building integrated web and application development tooling. By design, the platform does not provide a great deal of end user functionality by itself. The value of the platform is what it encourages: rapid development of integrated features based on a plug-in model. – https://www.eclipse.org/

What is Systems Development Life Cycle

The systems development life cycle (SDLC), also referred to as the application development life-cycle, is a term used in systems engineering, information systems and software engineering to describe a process for planning, creating, testing, and deploying an information system. The systems development life-cycle concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. – http://en.wikipedia.org/wiki/Systems_development_life_cycle

QA Involvement

It is imperative that QA or a representative of QA be present as early in the SDLC as possible. This will allow the QA member or team to begin their process. This includes, but is not limited to:

Resource Planning

Who the tester will be

What the tester will need

What existing test plans can be executed (to include automation)

Release Planning

Given past projects, what is the likelihood of meeting the proposed release date

Can smaller chunks of the User Story be passed to QA sooner

Deployment planning

Smoke tests should be executed on production environments

Resources for Managing SDLC

Gone are the days of being told there is something to test and reporting pass or fail w/ bugs via email or TPS reports. Many systems can be used in conjunction as well. There are benefits to using a mixture of separate systems, mainly cost, but a fragmented system can lead to man-hours wasted managing tasks. The big appeal of a unified system, aside from time savings, is tracking. In an application such as TFS or Atlassian’s tools, all User Stories, tasks, issues, and hours are logged in a central location.

A QA member can log into these systems to access the work assigned to them. When issues are found, they can be logged and assigned back to the developer, and with non-showstopping bugs work can continue. The developer can see the issue assigned to them and submit a fix.

I have worked in a QA department that used FogBugz for the entire life cycle, where a single ticket was passed from the Stakeholder to Dev to QA and then back to the Stakeholder. With this system it was difficult to see where a project was at a given point. We then made a transition to TFS. Tasks were split up and a User Story would be closed after UAT. The sysadmin set up a few TVs around the office that would display a Kanban board. This board (also available online) gave an at-a-glance report of where a project was in development.

At another company, we used a custom WinForms application for story pointing, sprint planning, and hours tracking. We also used BugTracker.NET to track issues. In lieu of a Kanban board they used a whiteboard with Post-it notes. This fragmented system meant hours were wasted updating a project in multiple systems, as well as having to get up and physically move tasks on the whiteboard. Sometimes a developer would set something ‘Ready for QA’ in the WinForms app but not move the Post-it note, or vice versa. Thankfully, this anecdote ends with the transition to TFS.

After each sprint, an analysis can be done using these systems (TFS, Atlassian) to review the development process. Charts and graphs can be populated to show the successes and pitfalls of the last sprint.

Incorporating QA into SDLC

Using a system like Kanban, it can be easy to flesh out a sprint or release cycle and visualize the work to be done. In Kanban you have vertical silos for each stage of the development process. You can also add swim lanes to delineate the work needing to be done at each step of the process, or to split responsibilities by department.

QA will own the QA column, and any work “Ready for QA” should have its resources already planned. If a bug is found, an issue is attached to the task and either sent back to Dev or work continues after Dev is notified of the issue. When all tasks are completed and have passed acceptance, the task can be moved to the Deployment column. A build should be kicked off and the project deployed to a production environment.

In production, another round of QA is needed as things like a bad code merge could corrupt the project. The QA team should have well established test plans and automation by this point so the time this step takes ought to be minimal.

QA Columns in KanBan

QA testing consists of many subsets, such as: Unit, Path, Security, Integration, Regression, Automation, and UAT. To have a column for each would render the Kanban board too large and require constant moving of tasks by the QA team. Instead, I recommend only three columns:

Background

Telerik Test Studio is an application I have used in Software QA Automation. One project I had was API testing a REST web API for an MVC solution. This code can be used in almost any testing suite that allows you to import .NET 4.5.