Musings on technical writing...

Testing ASP.NET 2.0

Ever wonder what kind of testing goes into software systems like ASP.NET? Scott Guthrie's latest blog entry, Testing ASP.NET 2.0 and Visual Web Developer, delves into the testing rigor the ASP.NET team applies. Obviously, every software product strives for zero bugs, but given the scope of large software projects that's rarely, if ever, possible. From Scott's blog entry:

For ASP.NET 2.0 and Visual Web Developer, we have to be able to deliver a super high quality product that is rock solid from a functional perspective, can run the world’s largest sites/applications for months without hiccups, is bullet-proof secure, and is faster than previous versions despite having infinitely more features (do a file size diff on System.Web.dll comparing V2 with V1.1 and you’ll see that it is 4 times larger).

Now doing all of the above is challenging. What makes it even harder is the fact that we need to deliver it on the same date on three radically different processor architectures (x86, IA-64, and x64 processor architectures), on 4 different major OS variations (Windows 2000, Windows XP, Windows 2003 and Longhorn), support design-time scenarios with 7 different Visual Studio SKUs, and be localized into 34+ languages (including BiDi languages which bring unique challenges).

In order to perform the sheer quantity of tests that need to be run - over 105,000 test cases and 505,000 test scenarios - the ASP.NET team employs 1.4 testers for every developer, and relies on automated testing techniques to achieve the breadth of test cases executed. Specifically, a system called “Maddog” helps run these automated tests.

A tester can use Maddog within their office to build a query of tests to run (selecting either a sub-node of feature areas – or doing a search for tests based on some other criteria), then pick what hardware and OS version the tests should run on, pick what language they should be run under (Arabic, German, Japanese, etc), what ASP.NET and Visual Studio build should be installed on the machine, and then how many machines it should be distributed over.

Maddog will then identify free machines in the lab, automatically format and re-image them with the appropriate operating system, install the right build on them, build and deploy the tests selected onto them, and then run the tests. When the run is over the tester can examine the results within Maddog, investigate all failures, publish the results (all through the Maddog system), and then release the machines for other Maddog runs. Published test results stay in the system forever (or until we delete them) – allowing test leads and my test manager to review them and make sure everything is getting covered. All this gets done without the tester ever having to leave their office.
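Maddog itself is an internal Microsoft tool, so its actual interface isn't public - but the workflow Scott describes (pick tests, pick an OS, architecture, language, and build, then fan the tests out across lab machines) can be sketched in a few lines. Everything below, from the class and field names to the round-robin distribution, is an illustrative assumption, not Maddog's real design:

```python
# Purely illustrative sketch -- Maddog's real API is internal to Microsoft.
# This models the workflow described above: a run request names the target
# OS/architecture/language/build, and the selected tests are distributed
# across some number of lab machines.
from dataclasses import dataclass, field

@dataclass
class RunRequest:
    """A hypothetical Maddog-style run specification (names are assumptions)."""
    os: str                # e.g. "Windows 2003"
    architecture: str      # e.g. "x64"
    language: str          # e.g. "Japanese"
    build: str             # which ASP.NET / Visual Studio build to install
    machine_count: int     # how many lab machines to distribute over
    tests: list = field(default_factory=list)

def distribute(tests, machine_count):
    """Round-robin the selected tests across the requested machines."""
    buckets = [[] for _ in range(machine_count)]
    for i, test in enumerate(tests):
        buckets[i % machine_count].append(test)
    return buckets

request = RunRequest(
    os="Windows 2003", architecture="x64", language="Japanese",
    build="2.0", machine_count=3,
    tests=[f"case_{n}" for n in range(10)],
)
assignments = distribute(request.tests, request.machine_count)
# Machine 0 runs case_0, case_3, case_6, case_9; machine 1 runs case_1, ...
```

The real system, of course, also re-images each machine, installs the build, deploys the tests, and collects the results - the sketch only captures the scheduling step.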

The lab Scott mentions is actually four labs, containing over 1,200 machines in total. Scott posts a picture of one of the many rows of computers in one of the test labs. During the MVP Conference in April, Scott showed me and a few other MVPs one of these test labs. If I'm remembering correctly, the room had about 20 or so rows like the one shown. It was pretty impressive, especially considering it was just one of four such labs.

For more on the testing process, be sure to read Scott's blog entry; there's a ton of information, Maddog screenshots, and so on. Definitely worth reading if you want a peek into how your favorite Web programming technology is tested.