How to verify system configuration is done right

Verification is the process of testing whether a system configuration does what it was designed to do, as set out in the agreed design specification document.

During the verification phase of a system build or modification, the change is tested by the testing team (which may consist of Developers, Quality Assurance and Business Analysts), who put the software through its paces to confirm that it operates as expected and conforms to the agreed design specifications.

Verification testing includes four phases: one pretest phase and three phases of actual testing. The stages are as follows:

SMOKE TEST

Also known as a build verification test, this test determines whether full testing can begin. It reveals any simple failures in the solution that may prevent you from executing the tests in the next three phases. Some project teams fold this test into unit testing, but I suggest you keep it separate; otherwise you may discover issues much later, during unit testing, which could lead to wasted time and effort.
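As a sketch of the idea, a smoke test can be as small as a single check that the build is alive enough to test further. The `service_health` function below is a hypothetical stand-in for whatever health check or entry point your system exposes:

```python
# Minimal smoke-test sketch. `service_health` is an illustrative stand-in
# for a real health-check endpoint on the deployed build.

def service_health() -> dict:
    # In a real system this would call the newly deployed build.
    return {"status": "ok", "version": "1.0.0"}

def smoke_test() -> bool:
    """Pass only if the build is alive enough for deeper testing."""
    health = service_health()
    return health.get("status") == "ok" and "version" in health

if __name__ == "__main__":
    assert smoke_test(), "Smoke test failed: halt deeper testing"
    print("smoke: PASS")
```

If this check fails, the build goes back to the Development team before any unit, integration or system testing is attempted.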

UNIT TEST

The unit test is the first actual phase of testing. It involves testing each unit of the system as a stand-alone test. The Development team generally performs line-by-line testing of both function and structure to find bugs within the unit before any other tests are done.

Although unit tests are normally performed by the Development team, I recommend that other stakeholders such as Business Analysts and/or Quality Assurance take part, in order to ensure unbiased testing.
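To make this concrete, here is a minimal sketch of a unit tested stand-alone. The discount calculator is a hypothetical unit chosen for illustration; the point is that it is exercised in isolation, including its failure paths:

```python
# Unit-test sketch: one unit (an illustrative discount calculator)
# tested stand-alone, before any integration with other units.

def apply_discount(price: float, percent: float) -> float:
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_normal_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert apply_discount(80.0, 0) == 80.0

def test_invalid_percent_rejected():
    # Structure is tested too: invalid input must be rejected.
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Each test covers one behaviour of the unit, so a failure points straight at the line of logic responsible.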

INTEGRATION TEST

Integration testing is the second phase; it ensures the individual units can actually work together. These individual units working together can be considered a sub-system or simply linked units. The aim of this test is to find issues with how the components of the system work together; it tests the validity of the software architecture design. The Development team generally performs the integration test, although Business Analysts may help by providing test cases and validating test results.

Sometimes integration tests can have multiple levels of integration. That is, sometimes several sub-systems are brought together and tested, and then those sub-systems are integrated with larger sub-systems.
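A small sketch of the idea: two units that each pass their own unit tests are exercised together, with the output of one feeding the input of the other. Both components here are illustrative:

```python
# Integration-test sketch: two illustrative units, a parser and a
# pricer, are exercised together so defects in the hand-off show up.

def parse_order(line: str) -> dict:
    """Unit 1: parse a raw order line into a structured order."""
    sku, qty = line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def price_order(order: dict, catalogue: dict) -> float:
    """Unit 2: price a structured order against a catalogue."""
    return catalogue[order["sku"]] * order["qty"]

def test_parse_then_price():
    catalogue = {"A100": 2.50}
    order = parse_order("A100, 4")   # unit 1 output feeds unit 2
    assert price_order(order, catalogue) == 10.0
```

Each unit may be correct on its own, but this test is what catches a mismatch in the data each expects from the other.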

SYSTEM TEST

This is the testing phase Business Analysts are most involved in. The aim of the system test is to find issues with how the system meets the users’ needs. You run this test through the entire built system from end to end, auditing all units and integrations from a linear perspective.

The system test is the last chance for the project team to verify the product before it gets turned over to the business users for User Acceptance Testing (UAT). It also confirms whether the solution meets the original requirements, answering the “Did we build it right?” question.
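Continuing the illustrative order pipeline from the earlier examples, a system test drives the whole flow from raw user input to the final artefact and checks the result against the agreed behaviour. All stage names here are hypothetical:

```python
# System-test sketch: drive the whole (illustrative) pipeline end to
# end, from raw input to the final artefact a user would see.

def capture_input() -> str:
    return "A100, 4"                        # what the user would enter

def process(line: str, catalogue: dict) -> float:
    sku, qty = (p.strip() for p in line.split(","))
    return catalogue[sku] * int(qty)

def render_invoice(total: float) -> str:
    return f"TOTAL DUE: £{total:.2f}"

def test_end_to_end():
    catalogue = {"A100": 2.50}
    invoice = render_invoice(process(capture_input(), catalogue))
    assert invoice == "TOTAL DUE: £10.00"   # the agreed behaviour
```

Unlike the unit and integration tests, the assertion here is on what the user receives, not on any internal value.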

REQUIREMENTS VALIDATION TEST

This test verifies the system logic to ensure it supports the analysis requirements. Although this work may seem like it should be part of validation, you’re actually verifying whether the system has been built according to what the requirements dictate.

REGRESSION TEST

This is the test most project teams fail to fully appreciate, but regression testing is absolutely vital: failure to carry it out properly could lead to problems once the system/configuration is introduced into the live environment, potentially causing costly disruption to the business (regression refers to going backward).

You use this test to ensure that the changes made to the system as part of the solution do not break what was already working. Regression usually impacts more than one program and requires more than one test. When thinking about regression tests, you need to know which applications are impacted by the solution so you can test those applications and make sure nothing has broken. This is where a Traceability Matrix can come in very handy.
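One simple way to sketch a Traceability Matrix in code is a mapping from each component touched by the change to the regression tests that must be re-run. The component and test names below are illustrative:

```python
# Traceability-matrix sketch: map each component the change touches to
# the regression tests that must be re-run. All names are illustrative.

TRACEABILITY = {
    "pricing_engine": ["test_discounts", "test_invoicing"],
    "login_service":  ["test_authentication"],
}

def regression_scope(changed: list) -> set:
    """Return every regression test impacted by the changed components."""
    return {t for c in changed for t in TRACEABILITY.get(c, [])}

# A change to the pricing engine pulls in both dependent test suites.
print(sorted(regression_scope(["pricing_engine"])))
# → ['test_discounts', 'test_invoicing']
```

Kept alongside the requirements, a matrix like this turns "what could this change have broken?" into a lookup rather than guesswork.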

DYNAMIC TEST

In a dynamic test, the software is tested to see how it performs when run under different circumstances, checking the system’s response as those variables change over time. This term is linked with three different types of tests:

Performance test: Measures how fast the system can complete a function. To determine whether the test passes or fails, refer to the non-functional requirements in the documentation, which state what the response time should be.

Stress test: The stress test seeks to push software to its limits in terms of users, rate of input, and the speed of the response. If you have only 5 users, you probably can do this test manually; however, if you have to ensure that 1,000 users can be logged in at the same time, you’re probably going to have to use an automated tool to load the system with that number of users.

Volume test: This test checks high-volume transactions to verify the software can handle all growth projections.
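A minimal performance-test sketch: time the function under test and compare the elapsed time against the response-time target from the non-functional requirements. The 0.5-second budget and the workload below are both assumptions for illustration:

```python
import time

# Performance-test sketch. The response budget and the workload are
# illustrative; a real budget comes from the non-functional requirements.

RESPONSE_BUDGET_S = 0.5

def complete_function() -> int:
    # Stand-in workload for the system function being measured.
    return sum(i * i for i in range(100_000))

def test_performance():
    start = time.perf_counter()
    complete_function()
    elapsed = time.perf_counter() - start
    assert elapsed < RESPONSE_BUDGET_S, f"{elapsed:.3f}s exceeds budget"
```

Stress and volume variants follow the same pattern, but ramp up the number of concurrent callers or the size of the data instead of just timing one call.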

SECURITY TEST

Security testing ensures that unauthorised users cannot gain access to sensitive/confidential data. It also confirms that authorised users can effectively complete their tasks. A good diagram to determine which users can perform which functions is a Use Case Diagram or a Security Matrix (a diagram that shows which users may access which functions).
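A Security Matrix can be sketched directly as a lookup from role to permitted functions, which makes both halves of the check (authorised users can act, unauthorised users cannot) easy to assert. The roles and functions here are illustrative:

```python
# Security-matrix sketch: which roles may perform which functions.
# Roles and function names are illustrative.

SECURITY_MATRIX = {
    "admin":   {"view_report", "edit_record", "delete_record"},
    "analyst": {"view_report"},
}

def can_perform(role: str, action: str) -> bool:
    """Unknown roles get no access by default."""
    return action in SECURITY_MATRIX.get(role, set())

# Authorised users can complete their tasks...
assert can_perform("admin", "delete_record")
assert can_perform("analyst", "view_report")
# ...and unauthorised access is denied, including unknown roles.
assert not can_perform("analyst", "delete_record")
assert not can_perform("guest", "view_report")
```

Defaulting unknown roles to an empty permission set mirrors the deny-by-default stance a security test should verify.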

INSTALLATION TEST

This test makes sure the software installs on the server/machine as you expect it to, with no problems in the installation process. When testing, make sure the requirements for the system you are installing on are clearly stated.

CONFIGURATION TEST

This test determines how well the product works with different environmental configurations. For example, if your requirements state the product must work on a PC or Mac with Internet Explorer’s latest version or Safari, you need to test installation with both operating systems and with the configuration of the browsers on both systems.
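One way to keep the configuration coverage honest is to enumerate the environment matrix from the requirements so every supported combination gets its own test run. The operating systems and browsers below follow the example in the text but are otherwise illustrative:

```python
# Configuration-test sketch: enumerate the environment matrix from the
# requirements so no supported combination is skipped. The pairs below
# are illustrative.

OPERATING_SYSTEMS = ["Windows", "macOS"]
BROWSERS = {"Windows": ["Internet Explorer"], "macOS": ["Safari"]}

def configurations():
    """Yield every (operating system, browser) pair to be tested."""
    for os_name in OPERATING_SYSTEMS:
        for browser in BROWSERS[os_name]:
            yield (os_name, browser)

for os_name, browser in configurations():
    # In a real suite, each pair would drive an install-and-launch test.
    print(f"test install on {os_name} with {browser}")
```

Driving the tests from a single matrix also means that adding a newly supported browser to the requirements automatically adds it to the test plan.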

USABILITY TEST

This is really a validation test; however, it’s sometimes done during the system test. If it’s a website that millions of visitors will use, chances are you want to bring in usability engineers to build in usability from the start instead of waiting to test it at the end of the project.

Although your project may not be a multimillion pound release, you still need to ensure that users will be able to effectively use it.

Next time you apply major changes to a system, carry out the tests above before deploying the solution; hopefully it will save you pain and help ensure a bug-free release.
