If you have been a software tester for any length of time, or if you have earned an ISEB/ISTQB certification (ISEB Foundation, ISEB Intermediate, ISTQB Advanced), you probably understand the fallacy of exhaustive testing. In fact, I wrote a blog post about this a few months ago. This fallacy states that it is not possible to test every possible variable and outcome, because all testing efforts are done under resource constraints. Additionally, software is so complex nowadays that humans are simply not capable of anticipating every idiosyncrasy before testing actually starts.

Related to this concept is the idea that some features simply cannot be tested prior to launch, even if they are known beforehand. The difference is that the fallacy of exhaustive testing means it is not possible to identify all testing scenarios 100% of the time, whereas here an issue has been identified but some barrier prevents testing it before launch. Barriers may include technological limitations, such as a process that is disabled on the staging server, or a feature slated for an upcoming product version that has not yet been deployed in the market.

So if you find yourself in the unfortunate situation where you are unable to test something within the test plan, there are a few things you can do to avoid catastrophe. First and foremost, create a log of all known features that cannot be tested. This will allow you to keep a close eye on these items after launch.

Next, for each untestable feature, create a plan of attack in case things do not go as expected after launch. Communicate this plan internally, and earmark testing and development resources in advance so you can respond quickly if something goes wrong.
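The two steps above, logging untestable features and attaching a contingency plan to each one, can be captured in something as simple as the following sketch. This is purely illustrative; the field names, example feature, and thresholds are all hypothetical, and a shared spreadsheet would serve just as well.

```python
from dataclasses import dataclass

# Hypothetical sketch: a minimal log of features that cannot be tested
# before launch, each paired with a contingency plan and an owner.
@dataclass
class UntestableFeature:
    name: str         # feature identifier
    reason: str       # why it cannot be tested pre-launch
    contingency: str  # what to do if it misbehaves after launch
    owner: str        # who is earmarked to respond

log: list[UntestableFeature] = []

def register(feature: UntestableFeature) -> None:
    """Record a feature that cannot be tested before launch."""
    log.append(feature)

# Example entry (all details invented for illustration):
register(UntestableFeature(
    name="payment-gateway-callback",
    reason="Gateway callbacks are disabled on the staging server",
    contingency="Monitor callback error rate after launch; roll back if elevated",
    owner="payments team",
))

# After launch, the log doubles as the post-launch watch list.
for entry in log:
    print(f"{entry.name} ({entry.owner}): {entry.contingency}")
```

Whatever form it takes, the point is that every item in the log answers three questions before launch: why it was not tested, what "gone wrong" looks like, and who acts on it.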

Finally, for any critical untestable issue that you are absolutely sure needs to be tested, make a business case and get it done. For example, if the testing sandbox cannot handle a specific functional item, make a pitch for development resources to upgrade the sandbox to enable the applicable testing.

In the final analysis, it is not always possible to test everything in the real world, but you can and should take steps to “cover your butt” in these situations. Proper documentation and communication are critical. Good luck!