Introduction

In the world of software development, the concept of Agile Development is very popular. It has many different interpretations, and correspondingly many different implementations, but one aspect that appears in most of them is the idea of Test Driven Development, or TDD.

The idea is easy to explain. For any given software feature, a number of scenarios and use cases are identified. These form the basis of a series of tests. The tests confirm that the code correctly implements the required feature. Once the feature is successfully implemented, the tests can be re-run, repeatedly and automatically, to ensure that subsequent updates to the overall project do not break what is known to be a correct implementation. TDD is not necessarily a perfect solution to the challenges of modern software development, but it is a simple and cost-effective way of addressing some key objectives:

What do we want to implement?

How do we know when we have implemented it correctly?

How do we prove that later changes or updates have not broken the implementation?

Interestingly, these three objectives are just as important in content development. Indeed, if we replace the word 'implementation' with 'documentation', we arrive at a nice succinct list of technical writing objectives.

The implication is that the power and simplicity TDD brings to coding would be just as desirable for information developers. In this article, we will look at how the principles of TDD can be applied to technical writing, through the use of Test Driven Documentation, or TDDoc.

A caveat

We must begin by recognizing that TDDoc can never be fully equivalent to TDD. To understand why, look again at the first objective above. For the TDDoc practitioner, the objective reads as 'What do we want to document?' The answer depends on the type of documentation (introduction, scenario, reference, and so on), the audience (novice, expert, administrator, and so on), and similar factors. In other words, we have to take into account semantic and contextual considerations, and these make it practically impossible to create and apply to content the kind of meaningful tests normally created as part of true TDD.

A similar consideration applies to the second objective, because the words used in 'correct' documentation could vary dramatically from one writer to another.

Nevertheless, given that we have created correct content, we can apply TDDoc to help us ensure that our implementation remains correct. Furthermore, if we have to change the content in the light of later feature clarification, TDDoc helps us ensure that all the required updates are made throughout the documentation; in other words, updates are made to all affected sections of the content, rather than risk missing one or two sections.

The basic concept

The key to understanding TDDoc is the basic concept of creating a testable unit. This is a form of content that can usefully be monitored or tested.

To illustrate the TDDoc concept, assume a simple documentation project consisting of three DITA files (file1.dita, file2.dita, and file3.dita). Content inclusion or exclusion when we build these files is controlled by a single DITAval file (allfiles.ditaval). The deliverable documentation is generated using a typical build tool such as the DITA Open Toolkit. The build process results in corresponding XHTML files (file1.html, and so on).

Creating the testable unit for file1.dita requires the following steps:

Take the generated file1.html

Remove all the XHTML markup tags in their entirety, leaving behind only 'ordinary' plain text words.

Convert each and every instance of white space (such as spaces, tabs, and newlines) to a single newline.

For conversion purposes, treat sequences of two or more white space symbols as one.

Any punctuation 'attached' to a word is treated as part of that word.

Any punctuation surrounded by white space, such as a hyphen symbol, is treated just like any other standalone word.

Call the resulting file file1.test

As an example, the following segment of input content:

    This is a simple - or easy - example for <i>illustration</i> purposes.

... would be rendered as:

    This
    is
    a
    simple
    -
    or
    easy
    -
    example
    for
    illustration
    purposes.
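
To make the conversion steps concrete, here is a minimal sketch in Python. It assumes the generated XHTML is simple enough for a regular expression to strip the tags; a production version might use a proper XHTML parser. The file names follow the example project, and the function name is purely illustrative.

    import re

    def make_testable_unit(html_path, test_path):
        """Convert a built XHTML file into a one-word-per-line test file."""
        with open(html_path, encoding="utf-8") as f:
            text = f.read()
        # Remove all markup tags in their entirety (a naive regex approach).
        text = re.sub(r"<[^>]+>", " ", text)
        # Treat any run of white space as a single separator, then write
        # one 'word' per line. Punctuation attached to a word stays with
        # it; a standalone hyphen becomes a line of its own.
        words = text.split()
        with open(test_path, "w", encoding="utf-8") as f:
            f.write("\n".join(words) + "\n")

    make_testable_unit("file1.html", "file1.test")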

We might be tempted to store a simple hash code rather than the complete set of ‘normal’ words from the document, but there is a specific benefit to storing all the words. The benefit becomes clear when we discuss later how to deal with test failures.

Markup within the generated files is removed because it often contains build-specific data, such as a date and time stamp. These would certainly vary from build to build, introducing many 'false positive' results for our tests. It is true that removing other, simpler markup elements, such as a <b> tag for bold text, suggests a loss of information. In practice, however, such elements tend to provide emphasis rather than semantic meaning, so removing them probably does not significantly undermine the value of the test mechanism.

The reason for creating the testable unit from the built files, rather than from the original source, is that a single source file might produce several different output files, depending on the inclusion or exclusion filters specified within the corresponding DITAval file. Ultimately, we are trying to help ensure that the final documentation remains correct, and therefore the tests must focus on that same deliverable content rather than on the source.

Using the testable unit

Now that we have explained how to create a testable unit, we can describe how to use TDDoc.

TDDoc works by focusing on a known set of testable units for a given content requirement. For example, documenting feature X might require some content in file2.dita. After creating and reviewing the content, and confirming that it is correct, we generate and store a corresponding testable unit, file2.test. This gives us everything we need to define a TDDoc test for feature X: if feature X remains correctly documented, file2.dita always builds in the same way, because the content describing feature X is unchanged. We can confirm this by comparing subsequent versions of file2.test with the original version stored as part of the TDDoc test. In other words, as long as each newly built file2.test remains identical to the stored TDDoc version, we can have a high degree of confidence that the content also remains unchanged.

In effect, the TDDoc test for feature X is defined as a snapshot copy of the file2.test unit. Every time we wish to apply the TDDoc feature X test, we simply run the normal documentation build process, take the freshly built file2.html file and use that to generate the new file2.test file. Finally, we compare the original file2.test unit with the newly generated version. If they match, the test has passed: nothing has changed or broken the correct documentation. If they do not match, something unexpected has happened and an alert should be raised, advising that the implementation of feature X documentation must be checked.
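
As a sketch, and assuming the stored snapshot and the freshly built unit are kept in separate directories (the paths here are illustrative), the test itself reduces to a byte-for-byte file comparison:

    import filecmp

    def tddoc_test(stored_unit, fresh_unit):
        """Return True if the freshly built unit matches the stored snapshot."""
        return filecmp.cmp(stored_unit, fresh_unit, shallow=False)

    # After the documentation build has produced a new file2.test:
    if tddoc_test("snapshots/file2.test", "build/file2.test"):
        print("Feature X test passed: documentation unchanged.")
    else:
        print("Feature X test FAILED: check the feature X documentation.")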

To see an example of how an unexpected change might occur, consider the following scenario. Assume that a colleague has been working on the documentation for feature Y. Suppose the work requires changes to two of the files in our example project: file3.dita and the allfiles.ditaval control file. The changes to file3.dita are unlikely to be of immediate concern to us. However, the changes to the DITAval control file might have an effect on the file2.dita build, perhaps as a result of a change in a DITA attribute definition. The point is that a change arising from an apparently separate work unit might indeed impact us, but the previously created TDDoc feature X test will immediately identify and report the discrepancy.

Dealing with a test failure

Having defined a number of TDDoc tests for a project, we might reasonably expect a test failure to occur at some point. A failure means that a freshly built unit file differs in some way from the version stored following an earlier build of that same file. We can identify exactly where the problem occurs by comparing the 'before' and 'after' versions, a task made possible by the unit test creation process described earlier.

Each unit test file is still 'human readable', albeit consisting of a long sequence of one-word lines. Most software 'diff' utilities are extremely efficient at comparing such files, making it quick and easy to locate each discrepancy. This is why we store TDDoc unit test files as a 'normal' text snapshot, rather than as an apparently more efficient hash value for the file.
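
For example, a few lines of Python are enough to report exactly which words changed (an ordinary command-line diff would do equally well; the paths are again illustrative):

    import difflib

    with open("snapshots/file2.test", encoding="utf-8") as f:
        before = f.readlines()
    with open("build/file2.test", encoding="utf-8") as f:
        after = f.readlines()

    # One word per line means each changed word appears as its own diff line.
    for line in difflib.unified_diff(before, after,
                                     fromfile="stored file2.test",
                                     tofile="fresh file2.test"):
        print(line, end="")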

There are two possible options for dealing with a test failure.

Option 1 is where inspection of the files shows that a change has indeed been introduced, but it does not involve or affect the correct content required to describe the corresponding feature. When this situation is encountered, the correct action is to store the newly created unit test file, replacing the original file associated with the test. In effect, the test conditions are updated to allow for the new content which has no impact on the original feature description.

Option 2 is where the change does affect the previously correct content. Either intentionally or accidentally, a change has been made. Having been alerted to the problem by the test failure, a writer can inspect the content to see what fixes - if any - should be applied. It might be sufficient to ‘roll back’ the changes. It might be necessary to discuss alternatives with the writer who ‘broke’ the previously correct content. Ultimately, either the content is restored to its previous form - in which case the test would now complete successfully - or modified content is accepted as the new correct form and a new unit test file is created and associated with the feature test.
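
For Option 1, 'storing the newly created unit test file' amounts to overwriting the old snapshot with the new one, which in the same illustrative Python setting is a single copy operation:

    import shutil

    # Accept the new content as the baseline for future feature X tests.
    shutil.copyfile("build/file2.test", "snapshots/file2.test")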

This ability to identify quickly and easily whether changes or updates have broken the documentation is the way in which TDDoc implements the third TDD objective. It is applicable to small or single-person documentation teams, but is especially helpful where there are larger, distributed teams who might be making modifications elsewhere in the documentation source.

Limitations of TDDoc

It is not possible for TDDoc to provide complete assurance. For example, if a product name is included in documentation by use of a DITA content reference, a subsequent branding change will instantly propagate through the documentation, resulting in many TDDoc failures. This in itself is not a limitation of TDDoc. But if there are any instances of the product name that were included as hard-coded text, rather than content references, they will not be updated when the content reference changes. TDDoc cannot identify such instances because the old product name forms part of the specific TDDoc test, and by virtue of being unchanged, will not be flagged as a test failure.

Another limitation of TDDoc is what might be termed ‘cascading’ test failures. Certain types of changes might produce a very large number of false test failures. A good example would be a set of documentation which includes publication dates as actual text on each page. If a fresh publication results in a new publication date, then each and every page could be expected to fail the test. However, mitigating this is the fact that each change would be very similar, and therefore very quick to fix - in this case by updating the stored unit test file to ‘accept’ the new date.

We can even use this limitation as a way of promoting qualitative improvement. The likelihood of cascading failures can be reduced by minimizing the size of the original files. In other words, rather than having a small number of large files, you would prefer to have a larger number of smaller files. This approach helps improve quality by emphasizing task-oriented documentation, and it also makes it less likely that there are large pages, each of which would report a TDDoc failure whenever a single change is made, even though the vast majority of the page is unchanged.

There are also two aspects of TDDoc that are not actual limitations.

One is the assumption that TDDoc applies only to English content. In practice, the method of removing only markup from built files means that 'ordinary' text remains, where that text could be in any language, or indeed any script. Further, even when the documentation is translated into several languages, it is reasonable to assume that the content is first created in one language, thereby creating the definitive content which is then translated into the other languages. The TDDoc tests need only be applied to that first, definitive language.

The second aspect is the apparent restriction that built pages being tested must be (X)HTML or web pages. It is true that using XHTML makes understanding and applying TDDoc easier. However, there are various utilities available that convert other output formats into a text-only form so that TDDoc can be applied. A good example would be a PDF-to-Text conversion utility. Some of these work by converting the entire PDF into a single, large text version; other utilities work by converting one page at a time. While not as clean as markup-based TDDoc, text versions of PDFs can indeed be monitored using TDDoc.
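
As an illustration, and assuming the pdftotext utility from the Poppler tools is installed (the file names are hypothetical), the extra conversion step might look like this, after which the usual white space normalization applies:

    import subprocess

    # Convert the built PDF to plain text using the pdftotext utility.
    subprocess.run(["pdftotext", "manual.pdf", "manual.txt"], check=True)

    # Then normalize exactly as for the XHTML case: one word per line.
    with open("manual.txt", encoding="utf-8") as f:
        words = f.read().split()
    with open("manual.test", "w", encoding="utf-8") as f:
        f.write("\n".join(words) + "\n")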

Summary

Realistically, we cannot expect a simple tool like TDDoc to detect all changes, and certainly not to intelligently assess whether a literal change leaves the semantic meaning unchanged. But TDDoc does provide an easy-to-automate technology that can be included and invoked as part of the normal content build process. As such, it is a helpful and comparatively low-cost addition to the content developer's toolkit.

Further, in a technical communication world that increasingly depends on distributed teams working on different parts of the content source at the same time, TDDoc helps reduce the likelihood of content error or inconsistency by flagging unexpected changes or side effects as soon as they appear.