Lucid Testing and Description Languages

I believe in the concept of what I call Lucid Testing. I even tried to promote my own BDD-style tool solution called — what else? — Lucid. The focus of lucid testing is on making testing a design activity. As such an activity, testing makes the business domain — and how that business domain provides value — lucid (expressive and understandable) to anyone who has a vested interest. Core to these ideas is that testing is a design activity and that its primary value is as a communication activity.

The idea of “being lucid” actually stuck with me when I read Domain Driven Design by Eric Evans. In that book Eric describes a domain-driven team and says (emphases mine):

The biggest gains come when a team joins together to apply a domain-driven design approach and to move the domain model to the project’s center of discourse. By doing so, the team members will share a language that enriches their communication and keeps it connected to the software. They will produce a lucid implementation in step with a model, giving leverage to application development. They will share a map of how the design work of different teams relates, and they will systematically focus attention on the features that are most distinctive and valuable to the organization.

I believe testing, when lucid, can be that shared map and language that provides leverage and enriches communication. Models are just one way that tests can be expressed. In fact, tests form a domain model. And as Martin Fowler has said:

The greatest value of a domain model is that it provides a ubiquitous language that ties domain experts and technologists together.

So let’s talk a bit about how this concept is framed in the current BDD and/or xBehave style of testing, particularly when it’s fused with development and business analysis activities.

Start with a Story

You start with a story. That story is essentially a set of requirements that focus on a specific feature or ability. We’ve all probably dealt with some notion of story. In this context, any portions of a story that are elaborated — meaning that they specify behavior, and thus require validation — should become a test specification (“test spec” for short).

Evolve the Story to a Specification

The idea here is that the parts of a story that are elaborated morph from being a business specification to a test specification. In reality, it doesn’t matter what the overall repository of information is called, but it is a specification, and its purpose should be to communicate how a given feature or ability provides value by giving examples of how that feature or ability will be exercised to provide that value.

So each test spec should talk about a particular feature or ability. Let’s consider a few more facts.

This test spec will likely be started by business stakeholders.

They will tend to put in a series of “pass conditions” that describe the high-level business intent of how they want a particular feature to work.

Those pass conditions will be described as scenarios within the test spec.

Testers and developers will work with the business to flesh out the spec with test conditions and data conditions.

This means there will be more “pass conditions” as well as many “fail conditions.”
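As a sketch of how that collaboration might look, consider a hypothetical spec fragment (the domain, names, and amounts here are invented purely for illustration). The first scenario is the kind of pass condition a business stakeholder might start with; the second is a fail condition a tester might add:

```gherkin
Feature: Account withdrawal

  Scenario: Withdraw funds from an account with a sufficient balance
    Given an account with a balance of 100 dollars
    When the account holder withdraws 40 dollars
    Then the account balance should be 60 dollars

  Scenario: Attempt to withdraw more than the available balance
    Given an account with a balance of 100 dollars
    When the account holder withdraws 150 dollars
    Then the withdrawal should be refused
    And the account balance should remain 100 dollars
```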

The test specs should ideally utilize a structuring language. One of the most common, known as Gherkin, is popular in tools like Cucumber and SpecFlow. Tools like FitNesse and Robot Framework use HTML pages, and tables within those pages, as their structuring element. Speaking to Gherkin for a moment:

Gherkin is a de facto industry standard that provides a shared, public way to organize how scenarios are specified.

Gherkin utilizes Given/When/Then clauses that break up a scenario into context, action, and result.

The test specs also contain a Business Description Language (BDL).

The BDL is everything that comes after a Gherkin clause.

The BDL is everything that is not part of the table in FitNesse or Robot Framework.

The BDL is simply the English used in the business to describe the business domain.
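To make that distinction concrete with an invented example: in the scenario below, the Given/When/Then keywords are the Gherkin structuring element, while every phrase after a keyword — “a registered user on the catalog page,” “searches for a product by name” — is the BDL:

```gherkin
Scenario: Search the catalog by product name
  Given a registered user on the catalog page
  When the user searches for a product by name
  Then matching products should be listed by relevance
```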

Regardless of the structuring element, what the test specification ultimately becomes are the elaborated requirements written as tests.

An important point here is that traceability between requirement and test is built into a single artifact. Since that artifact should be the result of solution leads, business analysts, developers, and testers working together, the test spec should encode a shared notion of what everyone believes constitutes “quality” for the given feature. Further, if the test spec is created before development work starts, the artifact serves as a specification for what developers have to provide.

Make the Specifications Executable

If the specification is written as a set of tests, then the specification is always executable. But this is from a manual execution standpoint. Given the complexity of most applications, as well as the sheer number of pass and fail conditions, automated execution is usually preferred. In order to make the specification executable in an automated way, it’s necessary to translate the natural language BDL down to executable test code logic that can exercise an application. That is handled by test definitions.

Test definitions serve as a bridge between how something is specified and how that specification can be carried out via actions. Remember: specifications specify behavior and thus must be capable of being carried out by a discrete set of actions.

Once test definitions exist that can match up natural language artifacts with code artifacts, the work of how to execute the spec is delegated to a test library. This library handles the low-level details of calling out to a web service/API, interacting with pages in a browser, querying a database, parsing a log file, monitoring file system changes, and so on.
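As a minimal sketch of that bridging idea — the phrases, actions, and the little registry here are all invented for illustration, standing in for what a real tool like Cucumber or Robot Framework provides — test definitions can be thought of as a mapping from natural language patterns to code:

```python
import re

# Registry mapping BDL phrases (as regex patterns) to executable actions.
step_definitions = {}

def step(pattern):
    """Register a test definition for a Gherkin-style phrase."""
    def register(func):
        step_definitions[re.compile(pattern)] = func
        return func
    return register

# Hypothetical test-library actions; a real test library would be
# driving a browser, calling an API, querying a database, and so on.
@step(r'the user has (\d+) items in the cart')
def seed_cart(context, count):
    context['cart'] = int(count)

@step(r'the user checks out')
def check_out(context):
    context['order'] = context['cart']  # stand-in for a real checkout action

@step(r'an order is created for (\d+) items')
def verify_order(context, count):
    assert context['order'] == int(count)

def execute(scenario_lines):
    """Match each line's BDL against the registered definitions and run it."""
    context = {}
    results = []
    for line in scenario_lines:
        # Strip the Gherkin keyword; everything after it is the BDL phrase.
        phrase = line.split(' ', 1)[1]
        for pattern, func in step_definitions.items():
            match = pattern.fullmatch(phrase)
            if match:
                func(context, *match.groups())
                results.append((line, 'passed'))
                break
        else:
            results.append((line, 'undefined'))
    return results

results = execute([
    'Given the user has 3 items in the cart',
    'When the user checks out',
    'Then an order is created for 3 items',
])
```

Note how the results come back keyed by the original natural language lines, which is exactly what lets execution output read in the same language as the spec.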

As those actions are carried out, the test structuring elements should guide expression of the results. For example, in Gherkin-like approaches, the results are returned in the context of the natural language phrase that specified the action in the first place. In table-based approaches, usually the table rows themselves are executed.

Either way, this means that the spec execution results are described in the exact same language as the test specification, so it is immediately clear what was executed, what passed, and what failed.

Now let’s dig into test specs a little bit more.

Test Specs

A test specification (“test spec”) is a set of requirements that are written up in a testable format. With this approach, you have traceability built in. The requirements and the tests are essentially the same thing. You also have acceptance built in because the test spec can be written by developers, testers, or business analysts. Ideally you have everyone collaborating as specs are written. This distributes “user acceptance testing” all through the process of development rather than during some UAT phase at the end, which is the worst place for “acceptance” to be occurring.

Specs, like user stories, are vertical slices of system functionality that are (in theory, anyway) deliverable independently. Stories are generally chunks of work that tend to be small. This is to support their use as a description of scope as well as a means of frequent delivery. This means a single feature can be provided through numerous stories, which would mean that feature can be distributed among several test specs.

There can be many audiences for a test spec. Think about your specs as documentation that will be read much more than it will be changed. Consider that other people will need to read and understand your specs months and perhaps even years after you write them. Certainly one use of test specs is to serve as a target for development. This target should prevent misunderstandings and ambiguities because scenarios are specified as examples. This means scenarios in a spec file do two main things:

Specify the direct value that the behavioral functionality the spec describes brings to its users.

Describe some aspect of behavior with representative examples of expected outcomes.

Those are achieved by making sure that test specs are …

… written with precisely testable conditions.

… written as specifications, not scripts.

… written in business domain language.

… focused on varying data conditions.

… focused on intent, not implementation.

… talking about business functionality, not software design.
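To illustrate those last points with an invented example, compare a script-like scenario tied to implementation details with a specification that states intent:

```gherkin
# Script-like: coupled to the user interface implementation
Scenario: Log in
  Given I open "/login"
  When I type "jeff" into the "username" field
  And I type "secret" into the "password" field
  And I click the "Submit" button
  Then I see the text "Welcome"

# Specification-like: states business intent
Scenario: A registered user gains access to their dashboard
  Given a registered user
  When the user logs in with valid credentials
  Then the user should see their personal dashboard
```

The second form survives a redesign of the login page unchanged, because it talks about business functionality rather than software design.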

Description Language

A Lucid-style — which is a BDD-style — approach has a focus on delivering software with the added caveat that, at all points in the delivery cycle, there is a single source of truth that people can look at to determine the intent of particular features as well as their implementation. This single source of truth serves as a means of understanding the testing now, and of preserving that testing in a historical archive that doubles as an executable repository.

This approach requires the construction of a domain language that is relevant and applicable to the business domain. That domain language must then be encoded in an artifact that I would refer to as a test specification or feature specification.

You might hear this domain language referred to as a DSL (Domain Specific Language). That’s not an accurate phrase, however. A DSL tends to be a programmatic term referring to a computer language that has been specialized for a given domain. (SQL is a great example of that.) The test specifications, however, are not written in a programmatic computer language. They are the natural language that we all use to communicate about our software and the business domain. These are more like specification languages than domain-specific languages.

So what is this language? It’s best referred to as a BDL (Business Description Language) or TDL (Test Description Language). In fact, there is no difference between a BDL and a TDL — if you adhere to the idea that testing is a design activity and all elaborated requirements should be stated as tests. Arguably, as a business writer or test writer, you are always using a TDL. The question is whether and to what extent your TDL is structured and what consistent principles guide the expression of tests.

This is still a developing area in the industry as a whole, even though its pedigree goes back to Tom Gilb’s Competitive Engineering. To offset the lack of material, I have written some posts on the topic that may assist you in seeing some of the thinking behind using a description language.

I’m planning to cover the use of description language in more posts. Specifically, what I don’t want is for testers to get constrained in their thinking about how to use a BDL/TDL. If you run with the Cucumber crowd, for example, they’ll often be telling you all the ways you are doing it wrong. My view is that a BDL/TDL can be used in many ways, both declarative and imperative. The goal is a language in the service of requirements-written-as-tests, serving as a single source of truth with traceability built in.

About Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words.
If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.