MDD meets TDD: mapping requirements as model test cases

Executable models, as the name implies, are models that are complete and precise enough to be executed. One of the key benefits is that you can evaluate your model very early in the development life cycle. That allows you to ensure the model is generally correct and satisfies the requirements even before you have committed to a particular implementation platform.

One way to perform early validation is to automatically generate a prototype that non-technical stakeholders can play with and (manually) confirm the proposed model does indeed satisfy their needs (like this).

Another, less obvious way to benefit from executable models from day one is automated testing.

The requirements

For instance, let’s consider an application that needs to deal with money sums:

REQ1: a money sum is associated with a currency

REQ2: you can add or subtract two money sums

REQ3: you can convert a money sum to another currency given an exchange rate

REQ4: you cannot combine money sums with different currencies

The solution

A possible solution for the requirements above is a single Money class, modeled in TextUML: it pairs an amount with a currency, offers operations to add, subtract and convert to another currency, and refuses to combine sums in different currencies.
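As a rough approximation of what that model expresses, here is the same shape sketched in plain Java (the identifiers Money, CurrencyMismatchException, add, subtract and convert are illustrative assumptions, not taken from the original TextUML model; double stands in for a proper money type to keep the sketch short):

// Illustrative plain-Java approximation of the Money model.
public final class Money {

    // Signals a violated business rule: sums in different currencies
    // cannot be combined (REQ4).
    public static class CurrencyMismatchException extends RuntimeException {
        public CurrencyMismatchException(String message) {
            super(message);
        }
    }

    private final double amount;      // a real implementation would use BigDecimal
    private final String currency;    // REQ1: every money sum carries a currency

    public Money(double amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    public double amount() { return amount; }

    public String currency() { return currency; }

    // REQ2: two money sums can be added or subtracted
    public Money add(Money other) {
        requireSameCurrency(other);
        return new Money(amount + other.amount, currency);
    }

    public Money subtract(Money other) {
        requireSameCurrency(other);
        return new Money(amount - other.amount, currency);
    }

    // REQ3: a money sum can be converted to another currency given an exchange rate
    public Money convert(String targetCurrency, double exchangeRate) {
        return new Money(amount * exchangeRate, targetCurrency);
    }

    // REQ4: combining sums in different currencies violates a business rule
    private void requireSameCurrency(Money other) {
        if (!currency.equals(other.currency)) {
            throw new CurrencyMismatchException(
                    currency + " cannot be combined with " + other.currency);
        }
    }
}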

When two money sums in different currencies are combined, we expect the operation to fail due to a violation of a business rule. The business rule is identified by an object of a proper exception type.
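As acceptance tests (again a hypothetical sketch, written with JUnit against the Java approximation above), the requirements read almost verbatim, and the expected exception type names the violated rule:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class MoneyTest {

    // REQ2: money sums in the same currency can be added
    @Test
    void addsSumsInTheSameCurrency() {
        Money total = new Money(5, "USD").add(new Money(10, "USD"));
        assertEquals(15.0, total.amount(), 0.001);
        assertEquals("USD", total.currency());
    }

    // REQ4: combining different currencies fails, and the violated
    // business rule is identified by the exception type
    @Test
    void rejectsCombiningDifferentCurrencies() {
        Money fiveDollars = new Money(5, "USD");
        Money tenEuros = new Money(10, "EUR");
        assertThrows(Money.CurrencyMismatchException.class,
                () -> fiveDollars.add(tenEuros));
    }
}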

There you go. Because we are using executable models, even before deciding what implementation platform to target, we already have a solution that we are highly confident addresses the domain-centric functional requirements of the application to be developed.

Can you say “Test-driven modeling”?

Imagine you could encode all non-technical functional requirements for the system in the form of acceptance tests. The tests run against your models whenever either the model or a test changes. Following the Test-Driven Development approach, you alternate between encoding the next requirement as a test case and enhancing the model to address the test you just added.
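For instance, REQ3 maps one-to-one to a test case like the one below (a hypothetical sketch reusing the illustrative Java names from above); following the cycle, you would write it first, watch it fail, and only then grow the model with a convert operation:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class MoneyConversionTest {

    // REQ3: a money sum can be converted to another currency given an exchange rate
    @Test
    void convertsToAnotherCurrencyGivenAnExchangeRate() {
        Money tenDollars = new Money(10, "USD");
        Money converted = tenDollars.convert("EUR", 0.9);
        assertEquals(9.0, converted.amount(), 0.001);
        assertEquals("EUR", converted.currency());
    }
}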

Whenever requirements change, you change the corresponding tests, and the failures tell you how the model must be modified to satisfy the new requirements. If you want to know why some aspect of the solution is the way it is, change that aspect of the model and see which tests fail. There is your requirements traceability right there.

I think it is a relevant question, Damien, and one that I was actually expecting.

For one, the example model is very simple, so it does not require features that you won't find in ordinary programming languages. If it dealt with associations, state transitions or events (things that UML supports natively but that need to be emulated in programming languages), the difference would be more evident.
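To make that concrete, here is a hypothetical Java sketch of hand-emulating a single state transition that a UML state machine would simply declare (the Order class, its states and its close event are made-up illustrations, not part of the example model):

// Emulating a two-state machine by hand: the states, the guard and the
// transition all have to be hand-coded, where UML declares them directly.
public class Order {

    enum Status { OPEN, CLOSED }

    private Status status = Status.OPEN;

    // The "close" event
    public void close() {
        if (status != Status.OPEN) {
            throw new IllegalStateException("only an open order can be closed");
        }
        status = Status.CLOSED;
    }

    public Status status() { return status; }
}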

But after all, modeling and programming (whether in Java or assembly) are just different shades of the same gradient, so if you ignore the nuances (such as the intended use cases), it is hard to see any difference between them.

Another thing is that, no matter the technical architecture of the application, both the model and the tests remain much simpler to understand than the implementation code, because at the model level we ignore the technical architecture (you won't see implementation concerns such as database technology, transactions, etc.). You could certainly achieve something similar with an implementation-oriented language (see DDD), but it is not as natural.
