The values we use for the attributes of objects in our test fixtures and the
result verification parts of our tests are often related to each other in a way
that is defined in the requirements. Getting these values right, and in
particular getting the relationship between the preconditions and the
post-conditions right, is crucial because this is what will drive the correct
behavior into the system under test (SUT) and will help the tests act as
documentation of our software.

Often, some of the values can be derived from other values in the same test.
In these cases the value of our Tests as Documentation (see Goals of Test Automation on page X) is
improved if we show the derivation by calculating the values using the
appropriate expression.

How It Works

Computers are really good at math and string concatenation. We can avoid
doing the math in our head (or with a calculator) by coding the math for
expected results as parameters of the Assertion Method (page X) calls
right into the tests. We can also use Derived Values as arguments for fixture object
creation and as method parameters when exercising the SUT.

Derived Values by their very nature lead us to use variables or symbolic constants
to hold the values. These can be initialized at compile time (constants), during
class or Testcase Object (page X) initialization, during fixture setup, or
within the body of the Test Method (page X).
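For example, here is a minimal sketch of the first two options; the
Invoice-related names are illustrative assumptions, not part of the pattern:

import java.math.BigDecimal;
import junit.framework.TestCase;

public class InvoiceTest extends TestCase {
   // Initialized at compile time (symbolic constants):
   private static final BigDecimal UNIT_PRICE = new BigDecimal("19.99");
   private static final int QUANTITY = 5;
   // Derived during Testcase Object initialization:
   private final BigDecimal expectedExtendedPrice =
         UNIT_PRICE.multiply(new BigDecimal(QUANTITY));
   // The same derivation could instead be done in setUp() or in the
   // body of an individual Test Method.
}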

When To Use It

We should use a Derived Value whenever we have values that can be derived in some
deterministic way from other values in our tests. The main drawback of Derived Value
is that we could have the same math error (e.g. rounding
errors) in the SUT and the tests. To be safe, we might want to code a few of
the pathological test cases using Literal Values (page X) on the
off chance that there might be a problem. If the values we are using must be
unique or don't affect the logic in the SUT, we may be better off using
Generated Values (page X).

Variation: Derived Input

Sometimes our test fixture contains similar values that the SUT might
compare, or whose difference might drive the SUT's logic. We can highlight
this by using a Derived Input that is calculated in the fixture setup
portion of the test by adding the difference to a base value. This makes the
relationship between the two values explicit. We can even put the value to be
added in a symbolic constant with an Intent Revealing Name[SBPP]
such as MAXIMUM_ALLOWABLE_TIME_DIFFERENCE.
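For example, suppose the SUT flags pairs of events whose timestamps are too
far apart. In this sketch, the EventAnalyzer class and its isTooFarApart
method are hypothetical stand-ins for the real SUT:

private static final long MAXIMUM_ALLOWABLE_TIME_DIFFERENCE = 5000; // msec

public void testAnalyze_eventsTooFarApart() {
   // Derived Input: the second timestamp is calculated from the first,
   // making the just-over-the-limit difference explicit.
   long firstEventTime = 1000000000L; // arbitrary base time
   long secondEventTime =
         firstEventTime + MAXIMUM_ALLOWABLE_TIME_DIFFERENCE + 1;
   EventAnalyzer sut = new EventAnalyzer();
   assertTrue("events should be flagged as too far apart",
              sut.isTooFarApart(firstEventTime, secondEventTime));
}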

Variation: One Bad Attribute

A very common application of Derived Input arises when we
need to test a method that takes a complex object as an argument. Thorough
"input validation" testing requires that we exercise the method with each of
the attributes of the object set to one or more possible invalid values to
ensure that it handles all these cases correctly. Since the first rejected
value could cause termination of the method, we must verify each bad
attribute in a separate call to the SUT, and each call should be made in a
separate Test Method so that each test is a Single Condition Test (see Principles of Test Automation on page X). We can instantiate the invalid object easily
by first creating a valid one and then replacing one of the attributes with
an invalid value. It is best to create the valid object using a Creation Method (page X) to avoid Test Code Duplication (page X).

Variation: Derived Expectation

When some value produced by the SUT should be related to one or more of
the values we passed in to the SUT as arguments or as values in the
fixture, we can often derive the expected value from the input values as the
test executes rather than using precalculated Literal Values.
We then use this value as the expected value in an Equality Assertion (see Assertion Method).

Motivating Example

Here is an example of a test that doesn't use Derived Values. Note the use of
Literal Values in both the fixture setup logic and the
assertion.
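Here is a sketch of what such a test might look like; the Invoice, Product,
and LineItem classes are assumed domain objects rather than part of the
pattern:

public void testAddItemQuantity_severalQuantity() {
   // Fixture setup using a Literal Value for the unit price:
   Product product = new Product("Widget", new BigDecimal("19.99"));
   Invoice invoice = new Invoice();
   // Exercise the SUT:
   invoice.addItemQuantity(product, 5);
   // Result verification using another Literal Value; the reader must
   // multiply 19.99 by 5 to see where 99.95 comes from:
   LineItem actualItem = (LineItem) invoice.getLineItems().get(0);
   assertEquals(new BigDecimal("99.95"), actualItem.getExtendedPrice());
}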

The test reader may have to do some math in their head to fully understand
the relationship between the values in the fixture setup and the value in the
result verification part of the test.

Refactoring Notes

To make this test more readable, we can replace any Literal Values that are in fact derived from other values with a formula
that calculates the value.

Example: Derived Expectation
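Here is a sketch of the refactored test, using the same assumed domain
classes as in the motivating example:

private static final BigDecimal UNIT_PRICE = new BigDecimal("19.99");
private static final int QUANTITY = 5;

public void testAddItemQuantity_severalQuantity() {
   // Fixture setup using the symbolic constants:
   Product product = new Product("Widget", UNIT_PRICE);
   Invoice invoice = new Invoice();
   // Exercise the SUT:
   invoice.addItemQuantity(product, QUANTITY);
   // Result verification using a Derived Expectation:
   BigDecimal expectedExtendedPrice =
         UNIT_PRICE.multiply(new BigDecimal(QUANTITY));
   LineItem actualItem = (LineItem) invoice.getLineItems().get(0);
   assertEquals(expectedExtendedPrice, actualItem.getExtendedPrice());
}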

In this example, there is only one line item, but it is for five instances of
the product, so we calculate the expected value of the extended price attribute
by multiplying the unit price by the quantity. This makes the relationship
between the values explicit.

Note that we have also introduced symbolic constants for the unit price and
quantity to make the expression even more obvious and to reduce the effort of
changing the values later.

Example: One Bad Attribute

Suppose we have a Customer Factory Method[GOF] that
takes a CustomerDto object as a parameter. We want to write tests to verify what
occurs when we pass in invalid values for each of the attributes in the
CustomerDto. We could create the CustomerDto inline in each Test Method with the appropriate attribute initialized to some invalid
value.
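Here is a sketch of one such inline test; the CustomerDto fields, the sut
field, its createCustomer method, and InvalidInputException are assumptions
chosen for illustration:

public void testCreateCustomer_badCity() {
   // Build the DTO inline, with every attribute valid except the city:
   CustomerDto customerDto = new CustomerDto();
   customerDto.firstName = "John";
   customerDto.lastName = "Doe";
   customerDto.city = ""; // the one bad attribute
   customerDto.postalCode = "T2N 2V2";
   try {
      sut.createCustomer(customerDto);
      fail("Expected an InvalidInputException for the bad city");
   } catch (InvalidInputException expected) {
      // the SUT rejected the invalid attribute, as required
   }
}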

The obvious problem with this is that we end up with a lot of Test Code Duplication, since we need at least one test per attribute. The
problem gets even worse if we are doing incremental development, because each
newly added attribute will require more tests, and we will have to revisit
all the existing tests to initialize the new attribute in every inline CustomerDto.

The solution is to define a Creation Method that produces a
valid instance of the CustomerDto (by doing an Extract Method[Fowler] refactoring on one of the tests) and to use it in each test to create a
valid DTO. Then we simply replace one of the attributes with an
invalid value in each of the tests. Each test now has an object with One Bad Attribute,
each slightly different.
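Here is a sketch of the refactored tests, under the same assumptions as the
previous listing:

private CustomerDto createValidCustomerDto() {
   CustomerDto customerDto = new CustomerDto();
   customerDto.firstName = "John";
   customerDto.lastName = "Doe";
   customerDto.city = "Calgary";
   customerDto.postalCode = "T2N 2V2";
   return customerDto;
}

public void testCreateCustomer_badCity() {
   // One Bad Attribute: start from a valid DTO and corrupt just one field.
   CustomerDto customerDto = createValidCustomerDto();
   customerDto.city = ""; // the one bad attribute
   try {
      sut.createCustomer(customerDto);
      fail("Expected an InvalidInputException for the bad city");
   } catch (InvalidInputException expected) {
      // the SUT rejected the invalid attribute, as required
   }
}

Adding a new attribute to the CustomerDto now requires changing only the
Creation Method, plus writing one new test for that attribute's invalid
values.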