Overall, the execution of automated tests proceeds according to a four-phase pattern: setup, exercise, verify, teardown. The setup phase is used to place the system into a state in which it can exhibit the behavior that is the target of a test. For data-intensive systems such as Dynamics NAV, the test data is an important part of setting system state. To test the award of discounts, for instance, the setup phase may include creating a customer and setting up a discount.

One of the biggest challenges of testing software systems is deciding what test data to use and how and when to create it. In software testing the term fixture describes the state the system under test must be in to produce a particular outcome when exercised. The fixture is created in the setup phase. In the case of an NAV application that state is mostly determined by the values of all fields in all records in the database. Essentially, there are two options for how to create a test fixture:

Create the test fixture from within each test.

Use a prebuilt test fixture that is loaded before the tests execute.

The advantage of creating the fixture in the test is that the test developer has as much control over the fixture as possible when tests execute. On the other hand, completely creating the test fixture from within a test might not be feasible in terms of development effort and performance. For example, consider all the records that need to be created for a simple test that posts a sales order: G/L Setup, G/L Account, General Business Posting Group, General Product Posting Group, General Posting Setup, VAT Business Posting Group, VAT Product Posting Group, Customer, Item, and so forth.

The advantage of using a prebuilt test fixture is that most data required to start executing test scenarios is already present. In the NAV test team at Microsoft, for instance, much of the test automation is executed against the same demonstration data that is installed when the Install Demo option is selected in the product installer (i.e., CRONUS). That data is reloaded when necessary as tests execute to ensure a consistent starting state.

In practice, a hybrid approach is often used: a common set of test data is loaded before each test executes and each test also creates additional data specific to its particular purpose.

To reset the common set of test data (the default fixture), one can either execute code that (re)creates that data or restore a previously created backup of that default fixture. The Application Test Toolset contains a codeunit named Backup Management. This codeunit implements a backup-restore mechanism at the application level. It may be used to back up and restore individual tables, sets of tables, or an entire company. Table 1 lists some of the function triggers available in the Backup Management codeunit. The DefaultFixture function trigger is particularly useful for recreating a fixture.

DefaultFixture

When executed the first time, creates a special backup of all the records in the current company. Any subsequent time it is executed, restores that backup in the current company.

BackupSharedFixture(filter)

Creates a special backup of all tables included in the filter.

RestoreSharedFixture

Restores the special backup that was created earlier with BackupSharedFixture.

BackupDatabase(name)

Creates a named backup of the current company.

RestoreDatabase(name)

Restores the named backup in the current company.

BackupTable(name, table id)

Creates a backup of a table in a named backup.

RestoreTable(name, table id)

Restores a table from a named backup.
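For illustration, a company-level backup/restore cycle using the function triggers from Table 1 might look like the following C/AL sketch (the BackupMgt variable and the backup name 'BASELINE' are assumptions for the example):

```
// BackupMgt : Codeunit "Backup Management"

// Create a named backup of the current company once, up front.
BackupMgt.BackupDatabase('BASELINE');

// ... execute one or more tests that modify the data ...

// Return the company to its baseline state.
BackupMgt.RestoreDatabase('BASELINE');
```

The same pattern applies at table granularity with BackupTable and RestoreTable when only a few tables are touched by a test.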

There are both maintainability and performance perspectives on the creation of fixtures. From a maintainability perspective, there is the code that creates the fixture. When multiple tests use the same fixture it makes sense to prevent code duplication and share that code by modularizing it in separate (creation) functions and codeunits.

From a performance perspective there is the time required to create a fixture. For a large fixture this time is likely to be a significant part of the total execution time of a test. In these cases, consider sharing not only the code that creates the fixture but also the fixture itself (i.e., the instance) by running multiple tests without restoring the default fixture. A shared fixture, however, introduces the risk of dependencies between tests, which may result in hard-to-debug test failures. This risk can be mitigated by applying test patterns that minimize data sensitivity.

A test fixture strategy should clarify how much data is included in the default fixture and how often it is restored. Deciding how often to restore the fixture is a balance between performance and reliability of tests. In the NAV test team we've tried the following strategies for restoring the fixture for each

test function,

test codeunit,

test run, or

test failure.

With the NAV test features there are two ways of implementing the fixture resetting strategy. In all cases the fixture (re)creation code is modularized. The difference between the approaches is where the fixture creation code is called.

The fixture can be reset from within each test function or, when a test runner is used, from the OnBeforeTestRun trigger. Resetting the fixture this frequently overcomes the problem of interdependent tests, but is really only suitable for very small test suites or cases where the default fixture can be restored very quickly.
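A minimal sketch of the runner-based variant might look as follows in C/AL. The exact trigger signature depends on the NAV version, and the BackupMgt variable referencing the Backup Management codeunit is an assumption:

```
// In a test runner codeunit (Subtype = TestRunner):
OnBeforeTestRun(CodeunitID : Integer; CodeunitName : Text; FunctionName : Text) : Boolean
BEGIN
  // Restore the default fixture before every single test function.
  BackupMgt.DefaultFixture;
  EXIT(TRUE);  // TRUE = execute the test function
END;
```

Because the reset lives in the runner, the test codeunits themselves stay free of fixture-management code.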

An alternative strategy is to recreate the fixture only once per test codeunit. The obvious advantage is the reduced execution time. The risk of interdependent tests is limited to the boundaries of a single test codeunit. As long as test codeunits do not contain too many test functions and are owned by a single tester, this should not cause too many problems. This strategy may be implemented by the test runner, the test codeunit's OnRun trigger, or the Lazy Setup pattern.

With the Lazy Setup pattern, an initialization function trigger is called from each test in a test codeunit. Only the first time it is executed will it restore the default fixture. As such, the fixture is only restored once per test codeunit. This pattern works even if not all tests in a test codeunit are executed or when they are executed in a different order.
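A Lazy Setup implementation can be sketched in C/AL as below; the Initialized flag and the BackupMgt variable are assumed globals of the test codeunit:

```
// Globals: Initialized : Boolean; BackupMgt : Codeunit "Backup Management"

LOCAL PROCEDURE Initialize();
BEGIN
  // Called at the start of every test function in this codeunit.
  IF Initialized THEN
    EXIT;  // fixture already (re)created during this codeunit run

  BackupMgt.DefaultFixture;  // restore the default fixture once
  // ... create data shared by all tests in this codeunit ...

  Initialized := TRUE;
END;
```

Each test function simply calls Initialize as its first statement; only the first caller pays the cost of restoring the fixture.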

As the number of test codeunits grows the overhead of recreating the test fixture for each test codeunit may still get too large. For a large number of tests, resetting the fixture once per codeunit will only work when the tests are completely insensitive to any change in test data. For most tests this will not be the case.

The last strategy only recreates the test fixture when a test fails. To detect failures caused by (hidden) data sensitivities, the failing test is then rerun against the newly created fixture. As such, certain false positives (i.e., test failures not caused by a product defect) can be avoided. To implement this strategy, the test results need to be recorded in a dedicated table from the OnAfterTestRun trigger of the test runner. When the execution of a test codeunit is finished, the test results can be examined to determine whether a test failed and the test codeunit should be run again.
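Recording the results could be sketched as follows; the "Test Result" table and its fields are hypothetical names for the example, and the trigger signature may differ per NAV version:

```
// In the test runner codeunit (TestResult : Record "Test Result"):
OnAfterTestRun(CodeunitID : Integer; CodeunitName : Text; FunctionName : Text; Success : Boolean)
BEGIN
  // Record every result in a dedicated table for later inspection.
  TestResult.INIT;
  TestResult."Codeunit ID" := CodeunitID;
  TestResult."Function Name" := FunctionName;
  TestResult.Success := Success;
  TestResult.INSERT;
END;
```

After the codeunit finishes, the driver scans this table; any failed entry triggers a fixture reset followed by a rerun of the failing test.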

For the implementation of each of these fixture strategies it is important to consider any dependencies that are introduced between test codeunits and the test execution infrastructure (e.g., test runners). The consequence of implementing the setup of the default fixture in the test runner codeunit may be that it becomes more difficult to execute tests in other contexts. On the other hand, if test codeunits are completely self-contained it becomes really easy to import and execute them in other environments (e.g., at a customer's site).

Excellent post, I liked this so much that I guess this will be the first technical automation post on my blog. My blog is going to have the best of the posts that are posted across the testing blogosphere/web. I shall add this one. Do let me know if you have anything that you want to add to it.

But we are not able to use specific commands/libraries like Assert, Backup Management, and so on in our tests (I would assume everything from CU 130000..137013). At the moment we have a -stupid- workaround: we check the result only via IF/ELSE and raise an error message. I am aware that this is not really clever, but it is the only way to use a little bit of the test framework. We would like to use all the possibilities of the whole framework for better quality. 😉

Our VOICE Customer No is 5123477. Maybe that can help to identify the issue? Do you need more information? Thanks in advance.

As far as I know the framework can be used with any partner license, that is, I've never heard otherwise. What kind of license do you use? And why can't you use those objects? Is it because you cannot import them?