In an agile environment, what testing approaches are recommended for sprints that deliver small updates? Do we just do exploratory testing after the SUT has been built and before releasing it, or do we do exploratory testing and also run through some functional or regression test cases?

Additional Info: We only have test cases for manual testing. We don't have any automated test cases yet, as we're still working on creating them for a future sprint.

UPDATE:

Thanks for all your input. Unfortunately, we do not have any automated test cases at this time, as we're still creating them. Because of this, what we did on past small-update (SU) sprints was manually test each task as soon as it reached the ready-for-testing column. Once all tasks were done and the small update was built on the second-to-last day of the sprint, everyone did manual and exploratory testing of workflows and some critical functional test cases. During downtime in the sprint, whenever there was no task available in the testing column, testers just tested ad hoc around the application. We have about 7 testers on our team, so it's hard to keep them all busy.

5 Answers

As Suchit said, it depends a lot on the nature of the update. I've seen a one-line code change trigger a full regression because that one line happened to be in one of the core calculation engines.

My suggestion for any development process, agile or not, is to have multiple suites of automated regression tests running at least daily. These tests should cover the core functionality of the application (or applications) and run against the most recent build available. That way, no matter what happens in development, no more than 24 hours pass between someone introducing a regression into core functionality and it being detected.
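
If it helps to make that concrete, here's a minimal sketch of one such daily suite, assuming pytest and a web API (the URL, endpoints, and credentials are invented; any stack where a scheduler can run the suite against the latest build would do the same job):

```python
# core_regression_test.py -- hypothetical sketch of a "core" suite.
# Register the marker in pytest.ini, then have CI run it nightly
# against the newest build:  pytest -m core --junitxml=nightly.xml
import pytest
import requests

BASE_URL = "https://staging.example.com"  # most recent build under test

@pytest.mark.core
def test_health_endpoint_responds():
    # If this fails, every other result is suspect -- check it first.
    assert requests.get(f"{BASE_URL}/health", timeout=10).status_code == 200

@pytest.mark.core
def test_login_rejects_bad_credentials():
    resp = requests.post(f"{BASE_URL}/login",
                         json={"user": "nobody", "password": "wrong"},
                         timeout=10)
    assert resp.status_code in (401, 403)
```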

If you don't already have that level of automation, it will take time to build up to it. In that case, I'd recommend getting a small slice of core functionality, preferably one that's used all the time (logging on is a good one, because you'll need it later anyway), into automated regression, then building on that. The automation process should ideally be its own development project - although I've yet to be fortunate enough to work somewhere where this occurs!
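
As a sketch of what that first automated slice could look like, assuming a web UI and Selenium (the URL and element IDs are placeholders; substitute your application's real locators):

```python
# login_smoke_test.py -- one possible starting point for automating
# the login slice. All locators here are hypothetical.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_valid_login_reaches_dashboard(driver):
    driver.get("https://staging.example.com/login")
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("test_password")
    driver.find_element(By.ID, "submit").click()
    # Landing on the post-login page is the cheapest end-to-end signal
    # that the whole authentication slice still works.
    assert "dashboard" in driver.current_url
```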

If you have no automation yet, then while you're building it, I'd suggest identifying your steel-thread/happy-path test cases for the application, noting which of them are exercised on the way to the new or changed functionality (for instance, if your application requires a valid login to function, every test will start with logging in, so there's no need to regress that separately), and building a lightweight manual regression to run every time until the automation is in place to remove the need.
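
Until the automation exists, even a throwaway helper can keep that lightweight manual regression consistent between runs. A toy sketch, with placeholder steel-thread cases:

```python
# checklist_run.py -- toy helper for the interim manual regression.
# The cases listed are placeholders; fill in your own steel-thread
# scenarios for the application under test.
STEEL_THREAD = [
    "Log in with a valid account",
    "Create a new record and save it",
    "Search for the record just created",
    "Run the core calculation / report",
    "Log out cleanly",
]

def run_checklist():
    results = {}
    for case in STEEL_THREAD:
        answer = input(f"{case} -- pass? [y/n] ").strip().lower()
        results[case] = (answer == "y")
    failed = [case for case, ok in results.items() if not ok]
    print(f"\n{len(results) - len(failed)}/{len(results)} passed")
    for case in failed:
        print(f"FAILED: {case}")

if __name__ == "__main__":
    run_checklist()
```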

In software, short of fixing typos in captions, there really is no such thing as a trivial change. I've seen a misplaced "not" break an entire system. If you've got comprehensive automated regression running daily, you can focus your manual testing on the areas most impacted by the change with exploration around it and know that any other critical regressions will be found by your automation. It's really a matter of best using the resources available to you.

This disconnected ramble brought to you by insufficient caffeine and too early in the morning - I hope there's something here you can use.

Testing is not an individual function on a scrum team. The concept that developers develop and testers test ("I coded it, I'm done, you test it") is a waterfall practice. The whole team is responsible for getting a feature into a "done" state. Testers with SQA experience, or who practice testing as their professional discipline, bring value through their understanding of how to test and how to produce quality.

Testing late in the sprint is a recipe for disaster and sets up the old waterfall "testing crunch": not enough time to test because everything upstream ran late. The features must be coded, tested, and approved; even if a feature includes a vertical slice through many apps, the integration testing must be performed by the team in near real time, along with bug fixes. Don't move forward until it's "done". Build often, merge often, release to user testing/review often! Avoid the Scrumfall!

It really depends on what the small update is. It might be a small change, but it can affect many parts of the system, e.g. updating something in the database.

The testing shouldn't be tied to whether it is a small update or a big one, however those are defined. It should be tied to how large you think the effect of the given update is. A small but critical update may require extensive testing, and the team should not rush to push out the change.

If this is an update to an existing product, then presumably you already have some tests for it, and likely those tests are automated or can be run in batch mode. Why would you not run all of them and then see which ones fail?

I think that's probably the first step once an update comes out of development. Then I'd take the list of failed tests and identify which failed because the update changed the way the test should work (changed API parameters, the physical page layout, etc.). The rest should be regression bugs.
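
As a sketch of that triage step, assuming the suite emits standard JUnit-style XML (e.g. pytest --junitxml=results.xml; the file name and workflow here are illustrative, not a prescribed tool):

```python
# triage_failures.py -- list failed tests from a JUnit-style results
# file so each can be sorted into "test needs updating because the
# behaviour intentionally changed" vs. "regression bug".
import xml.etree.ElementTree as ET

def failed_tests(junit_xml_path):
    root = ET.parse(junit_xml_path).getroot()
    failures = []
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            failures.append(f"{case.get('classname')}.{case.get('name')}")
    return failures

if __name__ == "__main__":
    for name in failed_tests("results.xml"):
        print(name)  # triage: intended change, or regression bug?
```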

Then you'll need to add new tests, in whatever testing framework you use, to actually cover the features that were updated. This is where you have more flexibility. In my environment, we do the above steps and then manually verify the new features before adding automated tests (Post API/WSDL/watir-webdriver), then we throw the manually verified build on our "preview" environment as a source of feedback and a place for customers to view upcoming features. Ideally we'd have the automated tests in place before that, but honestly, in the agile world it doesn't always work like that.
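
For example, a new automated test pinned to the updated feature might look something like this (the endpoint, payload, and response field are hypothetical stand-ins for whatever your update actually changed):

```python
# test_updated_feature.py -- sketch of a test added after the updated
# feature has been manually verified. Everything here is invented.
import requests

BASE_URL = "https://staging.example.com/api"

def test_updated_endpoint_returns_new_field():
    resp = requests.post(f"{BASE_URL}/orders",
                         json={"item": "widget", "qty": 2},
                         timeout=10)
    assert resp.status_code == 201
    # Suppose the sprint's small update added "estimated_ship_date";
    # pin it down so a later change can't silently remove it.
    assert "estimated_ship_date" in resp.json()
```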

I'm a huge fan of relying primarily on exploratory testing, possibly with checklists of features or James Bach's "Session Based Testing" approach, for the majority (or all) of the manual testing work. I find checklists or sessions help to avoid holes in exploratory testing. I prefer to run through manual test scripts / test cases only once I am confident that they will probably pass - meaning late in the sprint. There are several reasons for this:

I find I hit the important issues that customers will care about faster when I perform exploratory testing than I do when using manual test cases.

Manual test cases can be inspired by exploratory testing: when you perform an action and either find a bug (which should become a regression test) or find yourself thinking, "This is an action I should try again later," you can make a note while testing and come back to it later to write a full script (if a script is needed for your team). This means that exploratory testing also becomes a manual test case design activity; a sketch of turning such a note into an automated check appears after these points.

IME, test cases and scripts in general are better for verifying functionality than for finding bugs. I want to find bugs early, so we have time to fix them. Later in the cycle, I'm usually more concerned about making sure everything is working well enough to ship.

I really don't like scripted manual test cases (in part because the return on effort is so low, in part because they are boring), and don't want to perform them more than once.
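
For what it's worth, here is a sketch of promoting an exploratory note into a permanent check; the scenario (an empty search once returning a 500) is invented purely for illustration:

```python
# test_from_exploration.py -- sketch of an exploratory-testing note
# promoted into a regression test. Scenario and URL are hypothetical.
import requests

BASE_URL = "https://staging.example.com"

def test_empty_search_does_not_error():
    # Found during an exploratory session: submitting an empty search
    # returned a 500. Captured here so the fix stays fixed.
    resp = requests.get(f"{BASE_URL}/search", params={"q": ""}, timeout=10)
    assert resp.status_code == 200
```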

However, I prefer even more to combine automated tests with exploratory testing and not perform any scripted manual tests at all.

If you are planning to automate tests in future sprints, manual scripted tests may have some additional value as precursors to that automation - but I would still encourage you to focus on exploratory testing at first as your main method of finding bugs, and use scripted test cases to verify that the functionality that you most care about works before release.