Our company has been growing, and with it our products and our QA team.

Previously, our QA team handled only the company's single "headline" product and had a "startup"-like, flexible and agile relationship with the development team. But now that we've grown into part of a bigger corporation, a couple of products that were previously internal are becoming important and customer-facing, which requires higher quality standards.

We've hired new testers to handle the increased load, but now we have a question to answer: should we focus the new testers on a particular product and let them handle its testing and quality assurance, or should we make them part of a "generic" QA team that handles all the products?

In other words, should we have multiple product-specific QA teams or a single QA team handling all the products? What are the pros and cons of these two approaches?

Can you answer: 1. Are the products very unique from each other? 2. If you were to split the resources on each product, do you think those resources would be stretched thin?
– MCsuchNsuch Nov 19 '17 at 2:03

@MCsuchNsuch good questions! 1. That's something I probably should have mentioned - they have common parts/modules, but overall they are quite different, so there is some intersection in functionality. 2. No, time- and workload-wise, I think we could cover it (at least at this point in time). Thanks!
– alecxe♦ Nov 19 '17 at 2:09

9 Answers

I'd argue for a combination of the two: a QA team that handles all products, with a mixture of specialists and generalists.

Within that team, you'd ideally want a mixture of skills: some who are skilled in product A, some in product B, and some who can float or are specialized in particular test areas/techniques (say, performance, security, usability, etc.). This way, you can make sure people always have things to do. If you have someone who specializes in performance testing product A, what do they do when product A isn't ready for performance testing? (I'm assuming here that management won't allow someone to spend half their time improving their skills, building tools, or something similar.)

The pros of this approach are the following:

When a product needs focused testing for a time, you can shift people over to that product, and not have "idle" testers.

It can allow for much easier sharing of tools, techniques, and so on.

From a career point of view, it's easier to compare QA folks against each other than to compare a QA person against a developer.

Cons:

You have to find the right sort of person for these various roles. Some people will want to specialize, and others will want to float. It can be a problem if you try to put one sort of person into a role they aren't suited for.

Depending on the products in question, it may be that sharing skills/tools/etc. is a problem, if the products are different enough that they require vastly different test skills or tools.

The larger the team, the more institutional inertia there's going to be. On a small team, it would be easier for them to move to a new tool/technique/etc, as there would be fewer people to persuade.

Honestly, though, one advantage of the larger team model is that at some level, unless your management team is incredibly disciplined, there's almost always going to be some sharing between projects, as the problem project of the moment is going to request/require more resources. With a larger, unified team, you can absorb that more easily.

And, from a management point of view, it can be very helpful to have the QA manager equivalent in stature to development managers. This is because there are always going to be conflicts when development wants to ship, and QA doesn't think the product is ready. You ideally want that decision to be made on technical grounds, not because the development manager is somehow senior to the QA manager. Ideally, development management and QA management will report to the same manager/executive who is responsible for the overall quality of a product, and who can make trade-off decisions from a business point of view.

(This is ignoring another possibility of having the QA folks embedded in development departments, which has its own set of tradeoffs.)

What I've experienced so far is that QA ends up being a little like a consultancy. We're attached to product teams and work closely with them, developing specific product knowledge and automated tests. When we meet, it's to compare notes on teams, skills and tools.

In my experience, attempting to treat QA engineering as one department of interchangeable engineers, where everyone is cross-trained on everything, just burns everyone out and leads to insufficient testing. And there's a bit of a fallacy in thinking that because one engineer is out, another could test just as efficiently in their place. Having "back-up" QA engineers just means that another person has to be trained and kept completely up to date on a product, in addition to whatever product or products they are normally on. It might be cheaper in the long run to just say "we only have one QA engineer on this product, plan accordingly."

In my opinion, you could (should?) evolve what you have already established: testers in a "flexible and agile relationship with the development team". This is a good moment to mature your apparently agile(-ish?) environment.

I would assume that the different products are also not maintained by only one big and growing development team (e.g. coders). So this could be the best moment to grow into product teams that include both coders and testers. As with coders, some testers can be shared among teams because of their specific specialization, while others might stay on one product.

It depends on your organization and context, but in mine, coders and testers (and devops and other techies) report to one Engineering manager. That also removes the issue of "QA" as naysayer, or the separation of "development" versus "QA"...

It might be a bit more challenging if you are trying to compare people in detail, as in this case "the team" is more the entity to measure, I guess. And it tends to demand (much) more responsibility and drive from team members. Some people don't like that.

In my experience, it does show when certain members are really lacking or misbehaving and the team is unable to handle that, in which case the manager needs to step in.

It is not by any means the solution... it really depends on your context ;-)

As there are multiple products, there must be specific developer teams working on specific products. If there is still only one generic developer team, it will be very tough to manage testing efficiently.

Rather, merging the corresponding developers and testers into product-specific teams, while also maintaining a separate generic QA team, would be a better choice. The generic QA team can take care of the common modules and can also take part in requirement analysis and admin decisions.

Otherwise, the QA team will become distorted and headless, and a situation might arise where the tester community lacks a voice to stand before developers or admins!

Automation Engineers ('testers') are co-located with developers and work in their respective scrum teams. Note that I use the terms "Application Engineer" and "Automation Engineer" to try to address the second-class-citizen problem that Automation Engineers often experience. If there is more than one Automation Engineer, they may be considered a local quality group working within the app dev team.

The organization will also have a 'QA' department that the Automation Engineers belong to. This organization exists to hire, fire, train, educate, and share testing knowledge and approaches.

Organizations may also have a separate 'personnel' manager who deals with performance reviews, professional growth, and interpersonal issues.

The con of this approach is that the developers perceive the QA team as a roadblock. The developers may even cause bottleneck situations to make the QA team look inefficient or ineffective. I know it sounds petty and has no place in a professional work environment, but I have seen this SEVERAL times. With dedicated QA resources for each team, on the other hand, the developers have mostly embraced those resources: QA is seen as an asset instead of a liability by the development team.
– Jeremy Kowalski Dec 11 '17 at 18:31

If you are in favor of a single QA team, then are you also in favor of shopping that work out to a consultancy firm?

QA resources in an agile environment should be a part of a cross-functional development team. They should also have a community of practice for QA across the organization, so they can share tools and approaches to refine their QA practice area.

If you are concerned about the reporting of metrics across the organization, then the community of practice for the QA resources needs to define and produce a document that specifies the reporting requirements for the organization. It is then up to the QA resources on the project teams to implement the reporting standards.