This question is for experienced testers or test leads.
This is a scenario from a software project:

Say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start building 10 new features for the next iteration. During this time the test team finds defects and raises some bugs. The bugs are prioritised, and some of them have to be fixed before the next iteration. The catch is that the test team will not accept a new release containing any new features or changes to existing features until all of those bugs are fixed. Their argument is: how can we guarantee a stable release for testing if new features are introduced along with the bug fixes? They also cannot re-run their full set of regression tests every iteration. Apparently this is the proper testing process according to ISTQB.

This means the dev team has to maintain one branch solely for bug fixing and another branch where development continues. That adds merging overhead, especially with refactoring and architectural changes.

Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice on your projects?

This is not a release of the real implementation to the users; that would come only after many iterations. I used the word "release" to mean deploying the build for system testing after each iteration.
– Pratik, Nov 20 '10 at 10:32

@Pratik: from the dev team's perspective, it's a "release". The code is in a state that they consider "done" and ready to be seen by external eyes.
– user4051, Nov 20 '10 at 17:06

We use a hybrid approach. For customer releases, we definitely have a dedicated branch that is strictly for critical bug fixes only.

Regular development continues on multiple software versions. For example, let's say the latest stable release version is 2.0. All risky features are added to the 3.0 branch. Only bug fixes go into the 2.0 branch. Testing by the dedicated QA team is done only on stable branches. Customer releases are of course done from another branch based off 2.0. Long-running features, like next-gen platform development, go into 4.0, not even 3.0.

All this looks good on paper. But if a customer wants a specific feature, it needs to be added to the 2.0 branch itself, since 3.0 is not stable enough to be released to customers. This means the QA team has to re-run the entire regression suite.

One thing we do is collect code coverage for each regression test case. Only those regression test cases that are affected by the code changes for the feature are run. Of course, for a customer release, the full regression suite is run.
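
As a rough illustration of that kind of coverage-based selection (the file and test names here are made up, and real tooling would collect the coverage map automatically), the core idea is just an intersection between what a test covers and what changed:

    # Hypothetical sketch: pick only the regression tests whose coverage overlaps the change.
    # Map of test case -> source files it exercises, gathered from a coverage run.
    COVERAGE_MAP = {
        "test_login": {"auth.py", "session.py"},
        "test_report_export": {"reports.py", "pdf_writer.py"},
        "test_file_parsing": {"parser.py", "loader.py"},
    }

    def select_regression_tests(changed_files, coverage_map=COVERAGE_MAP):
        """Return the test cases whose covered files overlap the changed files."""
        changed = set(changed_files)
        return sorted(test for test, covered in coverage_map.items() if covered & changed)

    # A bug fix that only touches the parser triggers only the parsing tests.
    print(select_regression_tests(["parser.py"]))  # ['test_file_parsing']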

How does your release to end users fit into this process? Your system test team should be less concerned with the development schedule and instead focus on the customer release schedule.

There is little point in trying to formally test new features while development continues, because chances are good that your next 10 features are going to touch the same functionality and require them to test those areas again.

They can continue to informally test interim internal releases during development and flesh out their test design (hopefully catching most of the bugs in those new features), but they will need an additional period at the end of development for formal testing of new features and regression testing.

When they estimate 5 days required for testing your 10 new features, what they should be considering is that they need 5 days at the end of the development cycle, before the release to customers, to validate the new features (and probably more time to iterate if bugs are found). During this period the customer release can be branched off for testing, and new feature development can continue for the next release.

That really depends on how closely coupled the new features are with the portions that require bug fixes. E.g. if you add a new drag-and-drop feature to one small portion of the UI, it 'should not' affect a bug related to parsing the file loaded by the user.

Having said that, it is understandable (though not necessarily justified) for testers to want to test the fixes 'ceteris paribus' (all 'other' things being equal).

There might be some other concerns with the manner of release and end-user expectations. E.g. you might need to release only after one iteration of bug fixes plus testing and one more of new features plus testing, because users ONLY want to reinstall or upgrade when there are new features. Others might demand fixes as top priority, ASAP.

The test team definitely has a valid concern, but I would question the need for multiple iterations of testing for each release. Why go through an entire round of testing on a version of code that users will never see?

If the testers are attempting to get a defined release to a customer that is not expecting the new features, then their request is reasonable and justified, and you should bend over backwards to deliver it.

If this is just to assist their "processes" during normal development phases and to ensure that the bug list is not running out of control, then, without making an issue of it, ask the head of testing whether this constraint can be relaxed a little until you get closer to the release point.

Consider changing your source control system to a distributed product. This will make it much easier to deliver such a release.

You did not ask whose responsibility this is, but it is the Configuration Manager's responsibility. This stream strategy should be in his or her CMP (configuration management plan); otherwise, fire him or her.
I think the response from Pierre 303 is also very good, but of course only where it is possible technically (e.g. thinking of a Siebel release...) and organizationally.

The issue is that if they test the bug fixes on a branch, they still need to retest and regression test them on the trunk once they're merged back in (unless they're very trusting, which good testers rarely are). This isn't just making more work for the developers; it's making more work for the testers.

There is no right answer here but a few things you should consider:

Might these bug-fix releases (without the new functionality) ever go to the users? If so, then yes, they must be branched and tested, and everyone needs to accept that as an overhead.

Is it possible to divide the new functionality out in such a way that it sits in entirely separate areas of the application from the previously worked-on chunks? If so, this gives you options: the testers can carry out the bug retests and regression test those parts of the application. It's not ideal, but it's a compromise that might work and give them some of what they want.

How much work is it really to branch them a release? Generally it's a pain, but the actual amount of work isn't normally that great. Obviously you'd need them to confirm it's not just more work for them too (see the very first thing I said), but I've seen places make this work.

Is there a better way to use version control here? Something like Mercurial (see http://hginit.com/ - read it, it's good) or another distributed version control system branches and merges in a different way and may allow you to get around the problem. Really, have a look at it, because I think it might be the answer to your problem.
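
To make that concrete, here is a minimal sketch of what the bug-fix branch could look like with Mercurial, driven from a small Python script purely for illustration (the branch name is invented, the actual fixes are elided, and it assumes hg is installed and the script runs inside the repository):

    # Hypothetical sketch: isolate iteration-1 bug fixes on a named Mercurial branch,
    # then merge them back into the default branch where feature work continues.
    import subprocess

    def hg(*args):
        print("$ hg " + " ".join(args))            # echo the command for readability
        subprocess.run(["hg", *args], check=True)  # stop on the first failure

    hg("branch", "iteration-1-fixes")              # start a branch for the bug fixes
    # (apply the bug fixes here)
    hg("commit", "-m", "Fix defects raised by system testing")

    hg("update", "default")                        # back to the main development branch
    hg("merge", "iteration-1-fixes")               # pull the fixes into ongoing work
    hg("commit", "-m", "Merge iteration-1 bug fixes")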

But good luck; it's a pain. Above all, remember that the best way forward will be very dependent on your company, product and situation, so make sure you think about that and don't just pull something off the shelf and believe you must adhere to it 100%.

If the bugs that you describe are actual defects and not design optimizations, then yes you should really try to fix them before beginning work on new features.

If you build new features on top of known bugs, you are creating a house of cards. You will likely end up with brittle, unpredictable software. Depending on the level of isolation between the buggy code and your new features, the bugs could impact your new features as well. If so, how would you know whether your new features work correctly?

If you fix your bugs first, you will have a stronger foundation for any new features that you add.

Certainly, there are times when external forces pressure you to go against your better judgement. Try to help decision makers reach an informed decision where they are aware of consequences of either course of action (i.e. unresolved defects versus missed feature delivery dates) and allow them to exercise their judgement. There are sometimes legal and financial reasons where a course of action, while not preferable, is necessary.

Where I work, we handle this scenario by giving each intended release to production its own branch. For example, let's assume there is going to be a release at the end of June and another at the end of July. The June release gets its own branch, all of its features are added there, and it is sent to QA. At that point we begin working on July's release, branching from June's branch. When QA finds bugs, we fix them in June's branch, and once the fixes have been pushed to QA they are merged into July's release branch.

This does add a bit of overhead to handle the merges, but typically they are fairly painless. Once in a while it is a big pain in the you-know-what, but that only occurs when wholesale changes are made, and those shouldn't happen during the QA cycle (though they happen more often than I like to admit). With a good suite of tests (unit and integration), plus code coverage and all the other TDD buzzwords, the risks are mitigated a bit. To help out, we typically have one person handle merges for each project.
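
A minimal sketch of that merge-forward step, driven from Python with Mercurial-style commands purely for illustration (the answer does not say which tool is used; the branch names are invented and the actual fix is elided):

    # Hypothetical sketch: a QA bug fix lands on the June release branch and is then
    # merged forward into July's branch so the next release carries the same fix.
    import subprocess

    def hg(*args):
        print("$ hg " + " ".join(args))
        subprocess.run(["hg", *args], check=True)

    hg("update", "release-june")                   # the fix is committed here first
    # (apply the bug fix here)
    hg("commit", "-m", "Fix defect found by QA in the June release")

    hg("update", "release-july")                   # switch to the next release's branch
    hg("merge", "release-june")                    # carry June's fixes forward
    hg("commit", "-m", "Merge June bug fixes into the July release branch")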