Tuesday, September 8, 2009

We have been using QA Feature Owner (QFO) methodology for two months now so I guess it's time for me to review where this is going. My vision is very clear to me:

I (QA manager/Team Leader) don't want to decide what will be tested in each build

I want to increase testers' commitment and familiarity with "their" features

I want each tester to be skilled in all three testing methodologies we employ (Greybox/integration tests, UI automation tests and manual tests), as well as in core QA skills such as reading a specification document, analyzing it, deriving risks, etc.

If that wasn't clear enough, I'll now emphasize what team I don't want:

I don't want a team in which I have to decide which tests will run in each new build. There are too many test cases and too many changes in each build for me to do this effectively.

I don't want a team of domain experts who only test their area of expertise. I don't want the UI automation expert doing nothing but recording scripts, and I don't want the person who is used to testing manually to treat automation like black magic.

I don't want testers who test features when they are assigned to them and forget them after the testing is done. I don't want testers who move from feature to feature.

I also don't want testers who get job security because they are the only ones who can "do this thing". This is not exactly QFO related, so I'll deal with the versatility issue in another post.

Since we started using the QFO methodology, we are testing differently. When we get a new build I sit with each QFO, asking her what she wants to test and why. Why is this test left out? Why do we run that test again? While I'm there to supervise, each QFO is responsible for creating "test runs" (that's what a collection of tests is called in Testopia, our testing suite) for new and existing features. QFOs run some of the tests themselves; other tests they delegate to other team members. That's OK, that's how the system works.
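To make the workflow concrete, here is a minimal sketch of what a QFO's "test run" amounts to: a selection of cases plus a recorded reason for every case left out, so the "why is this test left out?" question always has an answer. This is an illustrative model only; the class names, fields and sample data are hypothetical and not Testopia's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    last_result: str  # e.g. "pass", "fail", "not run"

@dataclass
class TestRun:
    feature: str
    owner: str                                   # the QFO for this feature
    cases: list = field(default_factory=list)    # cases selected for this build
    skipped: dict = field(default_factory=dict)  # case name -> reason left out

    def include(self, case: TestCase) -> None:
        self.cases.append(case)

    def skip(self, case: TestCase, reason: str) -> None:
        # The QFO must be able to justify every omission.
        self.skipped[case.name] = reason

# Hypothetical example: the QFO retests a failure and skips an unchanged area.
run = TestRun(feature="Search suggestions", owner="Dana")
run.include(TestCase("query parsing", "fail"))  # failed last build: retest
run.skip(TestCase("index rebuild", "pass"), "unchanged since previous build")
```

The point of the `skipped` dictionary is exactly the supervision conversation described above: the selection and its rationale live together, rather than in the manager's head.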

This system is obviously not suited for every testing team. Team members need to be able to (and want to!) take responsibility over their features. Team members need to delegate tasks to other team members and perform testing tasks which aren't "theirs" so good communication skills are also required. It is geared towards responsible testers who want to be involved with the product and not just perform tests assigned to them by some authority figure.

Monday, July 27, 2009

In our R&D organization, new features are usually led by developers. While managing the new feature, the leading developer is assigned the title Feature Owner for that feature. She delegates some work to other developers while performing some coding tasks personally. The larger the feature, the more time is spent on managing and coordinating efforts; in smaller ones the FO spends more time coding. The interesting issue (for me) is the interaction with the testing team.

Most features are multi-tiered: there's a backend layer, some application layer and a UI. Tests for each layer are different, of course: Greybox testing for the backend layer, SQL scripts for database-related tests, UI automation and manual tests for the UI, etc. The problem is, most testers don't feel equally comfortable with each testing methodology, so which tester do I assign to each feature? If I assign a manual tester to a "mostly UI" feature, does it mean we won't test the backend? If my coding-oriented tester can't handle the UI automation suite, maybe I'll just skip the automatic UI tests?

The other alternative is assigning several testers for each feature, based on their skills. This ensures that each tester is assigned tasks she's good at but it also poses other problems:

The overhead of testers learning new features is now multiplied by the number of testers involved in testing the feature

Dispatching work between the testers and writing a test plan cannot be done by the development FO, as she does not have the testing know-how. Also, who is to know all the bugs related to the feature? Each assigned tester? Too messy, and getting a quality overview is hard.

Why not assign the QA team leader to dispatch tasks and write the test plan? Because most of the time she will not do the actual testing on the feature, so we have a paradox: the QA person who studied the feature and designed the tests does not test it, and the people who test it did not study it thoroughly (overhead, remember?)

The solution we are practicing now in my testing team is assigning a QA Feature Owner (QFO) to the development Feature Owner (FO). The assigned QFO is usually the person whose skills are the most relevant to the feature, but she needn't possess all the required testing skills: the QFO will assign tasks to other testers in the team in areas where she lacks the skills or simply does not have time to design or run the tests.

This is how it works: upon reading the spec, the QFO designs a test plan, a document detailing the general testing strategy. In this document risks are identified and then tests are written as a sketch: a few words describing the general purpose of each test. We find it convenient to start by designing Greybox (GB) tests, then Automatic UI tests (AUI) and finally manual tests.

It doesn't matter whether the QFO is a programmer when designing GB tests. Using Data Driven Testing methodologies, most GB test cases are lists of methods and what they are supposed to do, plus attached CSV files which are no more than input/output files based on the original spec. AUI tests are simply stories in English ("go here, click there") which can also be augmented beforehand with CSV files. Test cases are later reviewed by other team members to validate their testing logic and enrich the tests in areas where the QFO is weaker. For example, a manual tester can review the manual tests written by a QFO with a strong programming background and suggest additional tests. Test cases will later be examined by the FO and QFO together, in order to ensure full coverage.
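The CSV-driven approach described above can be sketched in a few lines: a spec-derived table of inputs and expected outputs is fed to a generic runner that checks each row against the method under test. This is a minimal illustration, not our actual harness; the `square` function, the column names and the inline CSV are all hypothetical stand-ins for a real backend method and a real spec file.

```python
import csv
import io

# Hypothetical spec-derived CSV: each row pairs an input with its expected
# output. In practice the QFO would author this file from the spec document.
SPEC_CASES = """input,expected
3,9
-2,4
0,0
"""

def square(n: int) -> int:
    """Stand-in for the backend method under test."""
    return n * n

def run_data_driven_cases(rows, func):
    """Run func on every (input, expected) row; return the failing rows."""
    failures = []
    for row in rows:
        actual = func(int(row["input"]))
        if actual != int(row["expected"]):
            failures.append((row["input"], row["expected"], actual))
    return failures

if __name__ == "__main__":
    rows = csv.DictReader(io.StringIO(SPEC_CASES))
    print(run_data_driven_cases(rows, square))  # [] means every row passed
```

Because the test logic lives in the generic runner and the cases live in the CSV, a QFO with no programming background can still extend coverage by adding rows to the file.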

Once all test cases are approved by the QFO, and feature and test coding begins, the QFO starts following the testing project. The QFO is responsible for ensuring that the tests are run, but not necessarily by the QFO personally: testing tasks can be assigned to other testers as long as the QFO remains the testing team's focal point for the feature. The QFO does not need to be the one reporting all defects, but must know all open defects relevant to her feature. The QFO is responsible for the testing of the feature, but the actual tests may be performed by others.

We started implementing this methodology only a few weeks ago, and it remains to be seen whether we benefit from it in the long run. I don't take for granted the fact that we're asking hands-on testers to become project managers, and I believe some will adapt to it better than others. Others will resent the time spent in meetings and reporting, and will feel more at home getting their hands dirty with testing. I have no intention of forcing any tester to become a QFO; I consider it a perk, not a requirement. I do, however, believe that testers assuming the role of QFO will enjoy the challenge of managing a testing project, interacting with more R&D team members than they are used to, and experiencing new testing methodologies. I believe a good QFO is a potential testing leader.

Thursday, February 5, 2009

After 8 years in big companies like Mercury (R.I.P.) and Symantec, I started working for Delver, a startup company, in late 2007. After 15 months of starting and running the QA team, I am wiser about QAing in a startup company.

Testing is Testing is Testing. Good QA is about methodology: be methodical in your risk analysis and in your tests. This principle doesn't depend on the size of the company. In Delver we won't write a test case for a feature if it's not documented. I don't settle for an oral briefing or an email overview. No document, no test case. Easy, really. On the same principle, no testing without a test case, even for small features.

Speed over quality. In a startup company the emphasis is on fast development with speed being first priority, quality second. This perspective dictates more investment in speeding up the testing process. We used the following methods:

- Automation for testing relatively stable components
- Testing new components outside the main build if the build is not ready
- Issuing "non release" builds for focusing on delicate features

QA freedom/No safety net. Another aspect of a speed-oriented, developer-dominated organization is the freedom it gives QA. Since the focus is usually on the developers, the QA team in Delver enjoys more professional latitude than what I have experienced in larger organizations. The flip side is the lack of testing peers and no organizational testing history. There is no safety net for me in Delver, and when I make a methodological mistake there will usually be no one to correct me. This is an exhilarating feeling, but it's also a scary one. Like all good thrills, I guess.

Smaller budgets. Big, commercial testing software is expensive. Startups don't like "expensive". Startups like "cheap" or, even better, "free". On one hand this seems like a major limitation (oh, who am I fooling. It is a major limitation.) but on the other hand it's also a challenge, and a cool one. Finding a $300 alternative to an $80K load testing suite is a personal victory and shows me that I can overcome seemingly impassable hurdles (financial ones, for example). In the future I'll dedicate a post to the cheap and free tools we use in Delver.

No bureaucracy. Well, this one is obvious but it's still cool. When I need to ask something I can go to the guy and ask him. No email to another office, no phone calls to another continent. Go to the guy. Ask. End of story. If I want to hire a candidate I can give him a contract within a day or two of the first interview. If I need a new license I simply go to the room of the guy who signs the checks and ask for the money. Simple. Fast.

More flexibility in reporting bugs. I don't open a bug for every problem we encounter. Since the company is a small one, when I'm not sure I can simply go to the room of the developer responsible for the code I found the problem in and verify whether it's a bug or not. This can be done in larger organizations as well, but sometimes the relevant developer is not in the same building. Or town. Or continent.

Wednesday, January 21, 2009

Often, we find ourselves in a situation in which a feature is ready but the version as a whole is not: it can take anywhere from a day or two to a week until all the developers have finished working on a build and all integration tests are concluded, but during this time some features can be tested. So we came up with a method for speeding up the testing process: testing incomplete builds.

The idea is simple. When a feature is ready, we get an incomplete version. This version is installed on a separate, local system and only the specific feature (or features) is tested. We don't test other components of the application, and when bugs are found there we ignore them. This way, by focusing on specific new features, we are able to detect bugs before the release candidate version is released to QA. Once we get the official version, we are more familiar with the new components and they are usually more robust, as the most glaring bugs were purged earlier.

This method helps us test more, and more often, but one must bear in mind that it can't replace thorough testing later, when the full version is tested. Some bugs only rear their ugly heads in more complex environments and need to be detected separately.

About Me

I've been in QA since 2000; I started by specializing in performance testing and later built a QA group from a tiny start-up phase to corporate scale. I also collect and paint miniatures, raise two daughters and a boy, and ride a bike. Usually not at the same time.