We have a software product that has a lot of usability features, and we're constantly changing it. We have issues with things like persisting scroll position that seem to get fixed, but then break again when we release. We have good testers, but we miss things like this because there are so many features in the application that thorough regression testing is hard to perform.

How is this problem best solved? I'd love to have automated UI tests, but those are fragile. I'd love to have our testers test every feature every time we implement a new feature, but that would take forever, and seems impractical. I wanted to gauge how other software shops do this and how they avoid recurring bugs.

In addition, Selenium can come in handy for creating an automated regression suite. You raised the concern that these tests are fragile, which is true to an extent, but done well they can prove to be an asset in the long run. You need to balance automation and manual exploratory effort to get the best of both worlds.

Gojko has described this more beautifully than I ever could, so here's the link

It is hard to draw reliable conclusions about an organization from two paragraphs in StackExchange, but it sounds to me as if you're going too fast. The fact that you are constantly changing the product suggests it is fairly new. With new products and new companies, when most of your users are early adopters, it may be more important to release quickly than to release with high quality. If that's really where you are, you may have to live with that condition until you have the luxury of more time.

You referred to UI issues in the title and twice in the question, so I assume your quality issues are specific to the UI; i.e. the rest of the software is in better shape. If you have the time to spare, I suggest slowing down and taking measures to ensure the UI is more thoroughly and consistently tested. If your testers don't use a test plan, someone should write one for them, at least for the aspects of the UI that are especially buggy. Are your testers using the same kind and variety of environments as your users? For example, if you have a browser-based UI, are your testers using the same variety of browsers and screen resolutions?

You might investigate why your developers are creating a lot of UI bugs. Are they aware of the quality problems? From their perspective, is it more important to go fast or to unit-test their work before they check it in? Are the developers using the right infrastructure? If a change in one thing causes a cascade of bugs in other places, you may have a design problem.

I'd agree with the test plan portion of this answer; if you have areas that continually regress, you need a checklist, or plan, to make sure certain areas are ALWAYS covered regardless. Whether this is automated (which might be hard if you have continual changes) or a checklist that devs can run after they make changes, you need something structured. I do the same when I have code that is continually changing and also continually has issues.
–
MichaelF Jun 16 '11 at 12:46

+1 for referencing the developers needing to test their own work. GUI components and objects shouldn't be that difficult to work with, so the fact that their rendering and such is breaking on almost a build-to-build basis indicates that the developers aren't even exercising their code at a high level.
–
TristaanOgre Jun 16 '11 at 16:21

Short answer: develop an automated regression suite; UI tests are not fragile when done right.

Longer answer:

I would suggest investing time in developing an automated regression test suite using Selenium. People often say UI tests are fragile, but in my opinion they're only fragile when they're not programmed right. It is true that it's hard to create a well-functioning test suite, but in the end it's worth it.
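One common way to keep Selenium tests from being fragile is the Page Object pattern: every locator lives in one class, so a UI change means editing one file rather than every test. Here is a minimal sketch; the page and element names are hypothetical, and the driver here is a small stub so the example runs anywhere, but a real Selenium WebDriver would be injected the same way.

```python
# Page Object pattern: locators live in one place, so a UI change
# means editing one class instead of every test script.
# Page/element names are hypothetical; the driver can be a real
# Selenium WebDriver or, as here, a minimal stub for illustration.

class LoginPage:
    # Locators kept as class data, not scattered through the tests.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "login-button")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self  # a real suite would return the next page object


# Stub standing in for selenium.webdriver so the sketch is self-contained.
class StubElement:
    def __init__(self, log, locator):
        self.log, self.locator = log, locator
    def send_keys(self, text):
        self.log.append(("type", self.locator, text))
    def click(self):
        self.log.append(("click", self.locator))

class StubDriver:
    def __init__(self):
        self.log = []
    def find_element(self, by, value):
        return StubElement(self.log, (by, value))

driver = StubDriver()
LoginPage(driver).log_in("alice", "secret")
print(driver.log[-1])  # the click on the submit button
```

When the submit button's locator changes, only `LoginPage.SUBMIT` needs updating, which is most of what "done right" means in practice.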

Think about roughly how much time your testers will spend executing repetitive tasks between now and the end of the project, and compare that with how long it would take to develop a well-functioning, maintainable test suite that removes those repetitive tasks.

If you choose to create an automated test suite, you or your client will end up with a system which is far easier and cheaper to maintain. A well-developed automated test suite is worth a lot of money, while repetitive manual testing is more like throwing money away.

Your team now carries a huge technical debt because there is no automated regression suite. Figure out whether it's worth the effort to pay down that debt right now, and if so, go ahead and do it. Try to make everyone involved understand the ROI of developing and maintaining such a suite, and prioritise its development along with the rest of the work.

It's not always easy to convince management that spending a lot of time developing a test suite is a better idea than spending that time developing features, but I think you should try anyway.

It might also be a good idea to hire a Selenium consultant or trainer for a while to teach your testers and developers how to build a well-designed, well-functioning test suite (if your company doesn't already have any experts). Developing a good test suite is very hard, and experts can keep you from walking into all the pitfalls.

If it's impractical for the test team to thoroughly test the GUI, then see if you can get someone else to test it instead!

Find out what testing the developers do before they hand over the software, and see if you can get them to do more or better testing. Alternatively, can you get your customers to test the software for you (i.e. beta testing)?

A few problems that your team is facing, and possible solutions:

Keeping up / too much load - Either the company is trying to roll out things too fast, or your product genuinely demands constant feature changes, or the dev team is not doing its part of the testing. In any of these cases (or a combination of them), a lot of load falls on the test team to keep up. This can result in tester fatigue, or the attitude that no matter how hard we try, there will always be bugs.

Automation - Test automation is only as fragile as the team that develops it. If you have engineers who can develop good test automation, that will help beyond words. If you are not confident yet, start by automating tasks that are relatively stable and highly repetitive; e.g. if you have to test 'create user' each test cycle, it is worth automating. Automation will also give the testers a level of creative satisfaction and something to look forward to. And it will, most certainly, give you more coverage at greater speed.

Communicate with other members of the project - Tell the project manager (or someone in a similar role) that the test team is being overloaded, and ask whether they can reconsider the number of features rolled out each release. It is quite likely the development team feels the same and, as a result, may not be testing its code thoroughly before passing it on to the test team. The company has a choice: more features with more bugs, or fewer features with better quality.

Test team - Finally and most importantly (if this is not already done), have the manager meet each individual team member at least once every two weeks and get their feedback: how satisfied they are, what they think about their work and load, and so on. Get updates on what they did since the last meeting, what they aim to achieve before the next one, and any suggestions for improving the quality of the product. This will also give insight into how good the team actually is, whether they are motivated, and whether anyone is falling behind. You can also hold daily stand-ups within the QA team.
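The "start with stable, repetitive tasks" advice above can be sketched as a data-driven check: one script exercises the same flow with a table of inputs, so each cycle the testers only review failures instead of re-executing every case by hand. `create_user` here is a hypothetical stand-in for whatever would actually drive your UI or API.

```python
# Data-driven regression check for a stable, repetitive flow ("create user").
# create_user is a hypothetical stand-in for the real UI/API driver code.

def create_user(name, role):
    # Stand-in: the real version would drive the UI or call the API.
    if not name:
        raise ValueError("name required")
    return {"name": name, "role": role, "active": True}

# One table of cases replaces N hand-executed test scripts.
CASES = [
    ("alice", "admin"),
    ("bob", "viewer"),
    ("carol", "editor"),
]

failures = []
for name, role in CASES:
    try:
        user = create_user(name, role)
        assert user["active"], f"{name} should start active"
    except Exception as exc:
        failures.append((name, exc))

print(f"{len(CASES) - len(failures)}/{len(CASES)} cases passed")
```

Growing the table is cheap, which is exactly why the repetitive flows are the best first candidates for automation.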

I am skeptical about the wisdom of automating tests for rapidly-changing interfaces, but I like and agree with the rest of your answer.
–
user246 Jun 16 '11 at 17:20

If the product is of that nature, you can't do much else. You still have to automate, but selectively. Calculating ROI here becomes extremely important.
–
Suchit Parikh Jun 16 '11 at 17:42

@user246 Depending upon the tool, the interface may change but you might not break the automation. The tool I use has a feature that maps components and objects to a hierarchical tree with user-defined aliases. The component may change, the internal hierarchy of the application may change, but there's enough flexibility in the tool via the aliases that the majority of the automation does not need to change.
–
TristaanOgre Jun 16 '11 at 18:12

I think we both agree you need to consider the ROI.
–
user246 Jun 16 '11 at 20:44

Fast and reliable automated tests will point out a regression minutes after the problematic checkin. I don't see why you say "it might not even help at all".
–
Ivo Grootjes Jun 16 '11 at 22:32


Automated regression tests are great. But pointing out a regression - no matter how quickly - doesn't fix the root problem of "why does this keep happening?" And a development process where the same bug is found within minutes of every checkin is simply a broken process. Far better is to find out "why", then work to cure the problem.
–
Joe Strazzere Jun 17 '11 at 11:47

+1. I'd also ask: are the Dev, QA, and Prod environments identical? We had problems and found that one environment was working with one Java library, and another environment with a different one.
–
John Oglesby Dec 30 '13 at 14:14

I've found in the past that development peer reviews coupled with test peer reviews have helped a lot to reduce regressions such as this. A checklist can be compiled identifying the top 5 to top 10 riskiest areas and set a rule that if any of these break on a developer's machine (provided they have the latest code loaded), they are not allowed to check in. The checklist should take no longer than 5–10 minutes to run. These tests can be manual or automated, depending on how much those screens change.
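That 5-10 minute pre-checkin checklist can even be encoded as a small script developers run before committing. This is only a sketch: the two checks are hypothetical placeholders standing in for whatever your riskiest areas are (the real versions would exercise the running application, manually or via automation).

```python
# Pre-checkin checklist: each entry is a short check over a risky area.
# The checks here are hypothetical placeholders; real ones would hit
# the running app before a checkin is allowed.

def check_scroll_position_persists():
    return True  # placeholder: navigate away and back, compare offsets

def check_login_page_renders():
    return True  # placeholder: load the page, assert key elements exist

CHECKLIST = [
    ("scroll position persists", check_scroll_position_persists),
    ("login page renders", check_login_page_renders),
]

def run_checklist():
    # Return the names of every failed check; empty list means go ahead.
    return [name for name, check in CHECKLIST if not check()]

failed = run_checklist()
print("OK to check in" if not failed else f"blocked by: {failed}")
```

Keeping the list short (the top 5 to 10 riskiest areas, as above) is what keeps the run under ten minutes.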

Yes, it will add to the check-in time, but it ensures you catch those bugs earlier and fix them on the spot, rather than spending the time testing, raising bugs, fixing them, and testing again.

And with this process in place, soon enough, the developers will learn to check these things before you even do the peer-test review. As a result, they will produce better code.

Automation can address this issue if we plan tests around production / real-time scenarios. Below is an approach to automating the product.

Phase I

Get started with automation during the development phase

Start when you have the mock UI and element IDs available

Phase II

During testing, ensure your automation stays in a usable state

Add or update automation cases based on the bugs identified

Phase III

Once the code is deployed in production, capture the most common user workflow sequences from the logs.

For example:

50% of users log in, do a search, and then buy a single product

30% of users log in, look for offers, and order multiple products

Automation is most useful when we base it on actual user scenarios from production logs. That way you can be sure you test all the major user workflows.
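The Phase III step above can be sketched as a small log analysis: group each user's actions into an ordered workflow, then count how often each workflow occurs, so the automation budget goes to the flows users actually exercise. The one-action-per-line log format here is a simplifying assumption for illustration.

```python
# Derive the most common user workflows from production logs.
# Assumed (hypothetical) log format: "user_id action", one per line.
from collections import Counter, defaultdict

LOG_LINES = [
    "u1 login", "u1 search", "u1 buy",
    "u2 login", "u2 offers", "u2 buy", "u2 buy",
    "u3 login", "u3 search", "u3 buy",
]

# Group each user's actions into an ordered workflow.
sessions = defaultdict(list)
for line in LOG_LINES:
    user, action = line.split()
    sessions[user].append(action)

# Count identical workflows across users and report their share.
workflows = Counter(tuple(actions) for actions in sessions.values())
total = sum(workflows.values())
for flow, count in workflows.most_common():
    print(f"{100 * count / total:.0f}% of users: {' -> '.join(flow)}")
```

The workflows at the top of the resulting list are the ones worth automating first.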

Functional testing effort + regression testing effort + automation-update effort may be a challenge given the timelines. A possible solution is to have developers run the automation suite and identify bugs at an earlier stage. The test team can make adding production workflows part of its scope of work (e.g. the top six workflows from the captured logs as the criterion for automation effort).

Even if you cannot automate the production workflows, keep track of production bug counts and missed scenarios, and ensure they are regression-tested for every build.

Have a knowledge repository or guidelines; not full test cases, but at least a checklist of the areas where bugs are likely to occur. That would help.

Above all, the ownership each individual demonstrates matters: learn from the bugs that were missed, and add relevant checkpoints to address them in future releases.

Acquiring domain expertise takes time, but good knowledge of the product helps you do good exploratory testing.

Track the bugs and check for patterns of recurring issues. If you can determine that quality in a certain area, or from certain developers, needs to improve, take it forward as a recommendation from the QA team.
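Spotting those recurring patterns can be as simple as counting tracker entries by area; anything that regresses across releases stands out immediately. The bug records below are hypothetical; in practice they would come from your bug tracker's export.

```python
# Find recurring problem areas from bug-tracker data, so QA can back
# its recommendations with numbers. Records here are hypothetical.
from collections import Counter

BUGS = [
    {"id": 101, "area": "scroll position", "release": "1.1"},
    {"id": 145, "area": "scroll position", "release": "1.2"},
    {"id": 160, "area": "login", "release": "1.2"},
    {"id": 188, "area": "scroll position", "release": "1.3"},
]

by_area = Counter(bug["area"] for bug in BUGS)
# Any area with more than one bug across releases is a recurrence candidate.
recurring = [area for area, count in by_area.most_common() if count > 1]
print("recurring problem areas:", recurring)
```

An area that shows up release after release, like the scroll-position example in the question, is exactly the place to add a checklist item or automated check.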