Adam Kolawa is the co-founder and CEO of Parasoft. Kolawa, co-author of Bulletproofing Web Applications (Wiley, 2001), has contributed to and written hundreds of commentary pieces and technical articles for publications such as The Wall Street Journal, CIO, Computerworld, Dr. Dobb's Journal, and IEEE Computer. He has also authored numerous scientific papers on physics and parallel processing.

Regression Testing

By Adam Kolawa, Co-Founder of Parasoft

Establishing a policy for regular regression testing is key to achieving successful, reliable, and predictable software development projects. Regression testing provides the only reliable means to verify that code base changes and additions don't "break" an application's existing functionality, and it can have the single greatest impact on controlling product release delays, budget overruns, and the risk of software errors slipping into released or deployed products. Yet regression testing is not widely practiced. Development organizations often give up on it because they find it complicated and difficult to maintain. The major reason for failure, however, is the absence of a well-defined, well-implemented policy and an organizational commitment to that policy.

Regression testing identifies when code modifications cause previously working functionality to regress, or fail, allowing you to catch regression errors as soon as they are introduced. Most organizations verify critical functionality once and then assume it continues to work unless they intentionally modify it. However, even routine and minor code changes can have unexpected side effects that break previously verified functionality.

The purpose of regression testing is to detect unexpected faults, especially those introduced because a developer did not fully understand the internal interdependencies of the code when modifying or extending it. Every time code is modified or used in a new environment, regression testing should be used to check the code's integrity. Ideally, regression testing is performed nightly (during automated builds) to ensure that errors are detected and fixed as soon as possible.

Regression testing should be tightly linked to functional testing and built from the successful test cases developed and used there. These test cases, which verified an application's behavior or functionality, are rerun regularly as regression tests and become the means of verifying that the application continues to work correctly as new code is added. During regression testing, the specified test cases are run and current outcomes are compared to previously recorded outcomes. Any discrepancy between a current response and its control response raises an alert, and subsequent regression test runs will continue to flag the discrepancy until the application again returns the expected result.
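To make this record-and-compare mechanism concrete, here is a minimal sketch in Python. The application function `format_invoice` and the golden-file name are hypothetical; the pattern is what matters: the first verified run records the outcome as the control response, and every later run compares the current outcome against it.

```python
import json
import os

def format_invoice(items):
    """Hypothetical application function under regression test."""
    total = sum(price for _, price in items)
    return {
        "lines": [f"{name}: {price:.2f}" for name, price in items],
        "total": f"{total:.2f}",
    }

GOLDEN_FILE = "format_invoice.golden.json"

def run_regression_test():
    current = format_invoice([("widget", 9.99), ("gadget", 24.50)])
    if not os.path.exists(GOLDEN_FILE):
        # First run: record the verified outcome as the control response.
        with open(GOLDEN_FILE, "w") as f:
            json.dump(current, f)
    with open(GOLDEN_FILE) as f:
        control = json.load(f)
    # False means a discrepancy between the current response and the
    # control response, i.e., a candidate regression to investigate.
    return current == control
```

In practice, control files like this are committed alongside the test suite and updated only when a change in output has been reviewed and accepted as intentional; otherwise every subsequent run keeps reporting the discrepancy.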

Software development organizations with effective regression testing policies significantly improve the productivity of their development staff and the success of their projects. Early identification of problems introduced by code modifications can save countless hours otherwise spent chasing and resolving software errors, and allows the team to maintain and modify the application without fear of breaking previously correct functionality.

An effective regression testing policy defines guidelines for how the regression system is used, then implements and integrates those guidelines (along with supporting technologies and configurations) into your software development lifecycle to ensure that your teams apply the policy consistently and regularly. It also requires a means to monitor and measure the policy's application and to report the data being tracked. Inconsistent usage can result in existing errors going undetected, or new errors being introduced, through unchecked modifications to the code over time.