Introduction

This article discusses:

How self-describing components emerge from responsibility-driven development practices;

How careful use of the Visual Studio “code regions” tool can help to clarify code intent, and how this can be particularly useful in responsibility-driven development;

How to test component collaborations using behaviour tests and mock objects.

A characteristic common to many .NET class files that I encounter is that they obscure their intent through use of the Visual Studio “code regions” tool. The out-of-the-box, and generally encouraged, use of this tool is to categorize the members of a class by accessibility and scope (i.e., grouping all public methods or all private fields under a so-called “region”). At best, this approach fails to fully reveal the intent of the class; at worst, it downright confuses the reader. I stumbled on an alternative use for the code regions tool whilst working within a responsibility-driven development environment, and, realizing that this particular usage had emerged naturally from responsibility-driven practices (rather than the other way around), I thought it would be a good idea to introduce this little technique from the responsibility-driven perspective.

For a more in-depth discussion of the practices of responsibility-driven design, I would recommend the resources available from the Wirfs-Brock website.

Getting Started

To explore the development of self-describing components, and to demonstrate some responsibility-driven and test-driven techniques, we need an example; so, let’s start with a single requirement for a simple message handling and forwarding system.

Story #1

A message owner (originator) passes a message to a channel via a queue. The queue holds the message until the queue buffer has reached a defined capacity, then this message and all other messages in the queue are forwarded to the channel associated with the queue. The message originator is notified when the message they placed on the queue has been forwarded to the channel. At any point in time, the message originator may purge the message queue of messages, resulting in the immediate forwarding of all messages currently in the queue.

Doing It Responsibly

We would like to consider responsibility-driven development practices, but we are conscious also of the value of the “test-code-refactor” cycle recommended by test-driven development (TDD). The responsibility-driven approach - being geared towards design practices more than anything else - may seem at odds with TDD, which favours a no-nonsense, test-first mindset; but we are aiming nonetheless to integrate the two approaches as best as we can, recognizing the value of each.

When we do responsibility-driven development, we imagine how a requirement might be implemented by considering, first of all, the responsibilities suggested by the requirement. This “mining for responsibilities” is our primary focus before we move into the traditional TDD cycle of test-code-refactor. The objective of identifying and sketching out responsibilities before leaping into writing tests is to break out from the abstract confines of a requirement (one that might be expressed in natural, conversational language, such as the user story above) into something more concrete. This may not seem at all different from some of the traditional test-driven methods for stepping out from requirements into implementation, but there are differences, and these are discussed a little later.

The extent to which a requirement speaks to us of responsibilities depends largely on the language employed, but a typical well-written user story naturally implies at least some discrete responsibilities. A responsibility-focused developer learns to recognize responsibilities implicit in natural language, and this skill develops much like any other. Developers in responsibility-driven environments are encouraged to mine a requirement for discrete activities and specific “knowledge-keeping” responsibilities that, if implemented by suitably capable components, should yield a full implementation of the requirement.

Responsibilities and Roles Lead to Design Cohesion

The end result of responsibility-driven component development should be a cohesive design, and this begins with the developer giving consideration to responsibilities and roles before anything else. “Thinking in responsibilities” is what responsibility-driven developers do, and it is this attitude of mind that helps the developer arrive at a solution based on loosely-coupled components that implement clearly-defined roles. By roles, we are suggesting interface-based development, where a component invites other components to interact with it via a narrowly-defined, intention-revealing interface. Roles emerge alongside responsibilities, and it is not unknown for a single role to assume more than one responsibility (as is the case in the solution we arrive at later).
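To make “narrowly-defined, intention-revealing interface” a little more concrete, here is a minimal sketch of the channel role used later in this article. The single member name (Forward) is my own illustrative assumption; the attached sample may name things differently.

```csharp
// A message, as seen by its collaborators: a minimal knowledge-keeping role.
public interface IMessage
{
    string Body { get; }
}

// A narrow, intention-revealing role: one capability, offered via an interface.
public interface IMessageChannel
{
    // The single thing this role promises its collaborators:
    // deliver a message to an end-point.
    void Forward(IMessage message);
}
```

Because the role is expressed as an interface, any component that needs a channel depends only on this narrow contract, not on a concrete channel implementation.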

When the role of a component is narrow, yet wide enough to justify its existence (i.e., the resulting implementation is not a “lazy class” (Fowler)), and when discrete responsibilities are appropriately allocated within the network of objects, design cohesion is the natural outcome. This is because objects that assume clearly-defined responsibilities, and especially those which delegate responsibility appropriately within the model, should only ever have one reason to change. This is the nub of the single responsibility principle, and it is a characteristic we want to encourage if we are to arrive at a cohesive design. Responsibility-driven development gently steers us towards this outcome by prompting the developer to think about responsibilities and roles before anything else.

Working with a system that expresses narrowly-defined roles can help us to develop components confidently whilst working in a project environment that “embraces change” such as XP or Scrum. When change happens, and we have at our disposal a set of components whose responsibilities and capabilities are clearly defined (e.g., in self-documenting class code), and when their roles are narrow and focused, it is easier to imagine the impact of the change. When roles and responsibilities are presented to us clearly, we can arrive at a decision about how best to utilize the components we have available, and at the same time, we can determine which areas require attention (such as refactoring) in response to the change.

Categorizing and Allocating Responsibilities

A common practice during responsibility mining is to categorize the responsibilities into role stereotypes, such as “Knowledge keeper” (knows and provides information), “Controller” (makes decisions and closely directs others’ actions), and “Co-ordinator” (mechanically reacts to events by delegating work). Allocating role stereotypes prompts us to think about what would be the core activity of a component capable of implementing a particular responsibility.

Returning to our original requirement, and bearing in mind the role stereotypes just mentioned, we begin to see candidate components emerge from the mix. Because we are aiming to work within the TDD cycle, these candidates become the as-yet unwritten components which, we speculate, might be capable of assuming one or more identified responsibilities. In keeping with TDD, we haven’t yet written a single line of code for the component. Candidate names are not critical at this stage, but components will ultimately be expected to describe themselves as fully as possible in a single name, all in the spirit of intention-revealing code. The following is a list of responsibilities, roles, and candidates extracted from our single user story, showing where our candidate component might fit the bill (the candidate is identified as ComponentX):

It seems that we’ve allocated nearly all of our responsibilities to ComponentX, and there appears not to be a ComponentY or ComponentZ to worry about. We now turn our attention to defining an “official line” for the candidate component, bringing together the responsibilities we have allocated into a brief description of the component’s overall capability. At the same time, we choose a suitably descriptive name for the component.

Capability: Accept messages for a specific channel, store them in a buffer, and forward them to the channel when the buffer exceeds capacity.

Candidate Implementation (Component): BufferedChannelQueue

That the textual description of the overall capability of our candidate matches the abstract content of our requirement should come as no surprise, and, with the exception of the “notify message originator” responsibility (omitted here for brevity), there are no spurious responsibilities hanging around; therefore, we seem to have nailed down our candidate.

It may appear, from the description of the process for discovering responsibilities and candidates, that such a process is a time-consuming one; but the decisions made during this process should take mere minutes in most cases, and are, for the OO-savvy, pattern-oriented developer, something approaching an automatic response. The aim is not to “design as we go”, nor to dwell on responsibilities too much, but to get into the test-driven rhythm as soon as possible to check our assumptions. The TDD rhythm is the reliable partner in our evolving design endeavour, so we aim to get into it sooner rather than later.

Discovering Collaborations

At the same time as we recognize discrete responsibilities, roles, and candidates, we begin to identify where collaborations between our “responsible components” might be necessary in order to fulfill the broader requirement. Collaborations emerge when it is clearly preferable for one component to delegate some of its responsibilities to another component. There are numerous reasons for this; perhaps the interface of the “delegated-to” component conforms to a well-known specification that we are expected to use (in which case we are adapting it), or else the component is fit-for-purpose in a way that our own component could never be (as is the case with Queue<IMessage> above - we would not wish to implement our own queuing logic when .NET supplies a perfectly good one). The delegation referred to here is synonymous with delegation in traditional OO designs, and suggestive also of the favouring of delegation over inheritance common to pattern-oriented solutions. When we identify collaborations between responsible components, we are thinking all the time of the end result we are aiming for:

The result should be a network of loosely-coupled, collaborating objects.

Returning to our requirement, we can see some obvious collaborations (note how the name of the component not only suggests its own capability, but happens also to suggest with whom it collaborates):

Component: BufferedChannelQueue
Collaborates With: Message Queue (the ‘Queue’ in the name)
Delegated Responsibility: Keep messages in a collection with “first-in-first-out” order
Candidate Collaborator: Queue<IMessage>

Component: BufferedChannelQueue
Collaborates With: Message Channel (the ‘Channel’ in the name)
Delegated Responsibility: Deliver a message to an end-point
Candidate Collaborator: MessageChannel
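One way to picture these two delegations in code is a constructor-injected sketch like the one below. This is a hypothetical outline only: the member names (Enqueue, Purge, Forward) and the exact capacity check are my assumptions for illustration, not the attached sample's code.

```csharp
using System.Collections.Generic;

public interface IMessage { }

public interface IMessageChannel
{
    void Forward(IMessage message); // member name assumed for illustration
}

public class BufferedChannelQueue
{
    // Queuing responsibility delegated to the .NET FIFO collection...
    private readonly Queue<IMessage> _queue = new Queue<IMessage>();

    // ...and delivery responsibility delegated to the channel collaborator.
    private readonly IMessageChannel _messageChannel;
    private readonly int _queueCapacity;

    public BufferedChannelQueue(IMessageChannel messageChannel, int queueCapacity)
    {
        _messageChannel = messageChannel;
        _queueCapacity = queueCapacity;
    }

    public void Enqueue(IMessage message)
    {
        _queue.Enqueue(message);
        if (_queue.Count >= _queueCapacity)
            Purge(); // buffer full: forward everything to the channel
    }

    public void Purge()
    {
        while (_queue.Count > 0)
            _messageChannel.Forward(_queue.Dequeue());
    }
}
```

Note how BufferedChannelQueue holds no delivery logic of its own; it merely co-ordinates its two collaborators.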

Collaborations are discovered rapidly when we are committed to refactoring to patterns during our TDD cycle, due mostly to the fact that many common refactorings favour delegation over inheritance. In this context, suggesting a collaboration should not be considered on a par with making a rod for our own back; on the contrary, we are introducing a small amount of work (defining the delegation activity) for the much greater reward of a loosely-coupled, extensible network of objects (and that's before mentioning the fact that, by refactoring to patterns, we are guarding against design deficit). As more collaborations emerge, it is tempting to carry on building a network of loosely-coupled objects solely through the use of mocks and other test doubles, before considering any of the finer details (internal implementation, algorithms, etc.). This technique is discussed in this article. In practice, I have found this approach to work very well, and it has a rhythm of its own that fits nicely into TDD.

Below is a static structure diagram that summarizes the roles and collaborations of our solution so far. The collaborations set-out in the table above are indicated on the diagram as points ‘1’ and ‘2’.

Exercising Responsibilities, Roles, and Collaborations

How far should we delve into discovering and documenting responsibilities before writing our first line of code? Well, identifying responsibilities, roles, and collaborations en masse, then leaping into development, has nothing to do with the process of evolving responsible components. We are aiming, as with other agile-friendly practices, for an incremental approach to developing our components; therefore, the sooner we enter into an empirical process to test our assumptions, the better. We should not be entering the TDD cycle with a lengthy list of responsibilities and assumptions about collaborations; instead, we should focus on one responsibility at a time, and then explore the collaborations that spawn off from it. As soon as we suspect a responsibility can be fulfilled by a particular kind of component (e.g., a problem can be solved with a simple class), we should write a test to confirm our suspicions.

By practicing the test-code-refactor cycle, we can check that our ideas about who should be responsible for what, and about who should collaborate with whom, are realistic. This should encourage us to write loosely-coupled components that, working together, should be able to fulfill all of the identified responsibilities. It is the test-code-refactor cycle that gives us the confidence to experiment with the allocation of responsibilities, and to try-out new collaborations, without fear of breaking what we have so far. This, in turn, should encourage further responsibility and collaboration discoveries, and give us the confidence to experiment with delegation techniques in order to maximize component de-coupling.

Although similar to the traditional XP style of TDD, responsibility-driven testing “feels” a little different when it gets started. This is, I think, due to the intellectual process of considering responsibilities before anything else. As the first step from requirement to code, thinking about responsibilities can easily introduce some design assumptions as we go along. This is understandably controversial because, if we have speculated about the existence of responsibilities, we have effectively made some design assumptions before testing; for dyed-in-the-wool XP practitioners, this is YAGNI, and something of a show-stopper. I would have to say, however, that, in practice, to get from our somewhat abstract requirement into an empirical test process, there has to be a first step across the disciplines, and this is always going to include an assumption of some sort. An article by Jeremy Miller, which talks in greater depth about breaking out from requirements when doing responsibility-driven development, can be found here.

One approach to “breaking the ice” of a requirement should be familiar to TDD practitioners: we imagine how we, as a potential client of the component, would ideally like to use the component, and then we write a test expressing this usage. The test doesn’t compile because the component doesn’t exist, so we implement just enough of the component to compile and pass the test, and so on... This approach is indeed appropriate to responsibility-driven development, but the feeling, as we embark down this route, is that there is an added “thinking” step just prior to writing our first test. Our first step towards testing is more akin to, “imagine how we would like a component to take responsibility for something”, quickly followed by, “write a test to confirm that there is a component somewhere that can assume this responsibility”. This subtle semantic difference can yield a quite different syntactic expression in the test, which in turn can result in a different story being told to the reader, both in the test suite and in the resulting component code. Perhaps, at the back of our minds during responsibility-driven testing, should be the following basic goal, which each responsibility test aims for:

When we write test code to exercise a responsibility, the result should be a test that points to a capable component.

I.e., our test says, “here is a component that can carry out responsibilities ‘A’, ‘B’, and ‘C’ for you, and here is how to ask the component to do it”.

Guidelines for the Process

The goal is to build a suite of tests that describes a cohesive group of components, each of which reveals its ability to:

Assume specific responsibilities, and

Collaborate with other components to achieve goals.

In my experience, the following guidelines can help us achieve this goal:

As soon as we have an idea of how we would like a component to assume some responsibility for us, we write a test, expressing this delegation of responsibility.

The test should confirm that the component is able to assume the given responsibility.

It’s usually enough to write a simple unit test; e.g., a call to a method of the component, followed by a check on the state of the component (or the system).

We begin by imagining how we, as a client of this yet-to-be-created component, might wish to use it.
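As a sketch of the shape such a state test might take, here is a hypothetical knowledge-keeping component exercised with NUnit. Both the Message component and the choice of test framework are illustrative assumptions, not taken from the attached sample.

```csharp
using NUnit.Framework;

// Hypothetical component with a single knowledge-keeping responsibility.
public class Message
{
    public Message(string body) { Body = body; }
    public string Body { get; private set; }
}

[TestFixture]
public class MessageTests
{
    [Test]
    public void Message_KnowsItsBody_WhenCreated()
    {
        // Act on the component, then check its state: the test confirms
        // that the component has assumed the "know the body" responsibility.
        var message = new Message("hello");
        Assert.AreEqual("hello", message.Body);
    }
}
```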

As we evolve and test our responsible components, we begin to recognize the value of collaborations between objects, which brings us to our second guideline:

As soon as we have an idea of how we think two objects should collaborate, we write a behaviour test to exercise this collaboration.

The test should confirm that the components collaborate as expected.

The test should be a “real” behaviour test that uses mock objects to confirm behaviour. It should not rely on state checks.

Such a test might resemble an integration test.

To summarize our use of the different types of tests:

Discrete responsibilities are exercised by state tests.

Collaborations are exercised by behaviour tests.

Behaviour Tests

State tests make a regular appearance as unit tests, and are useful for exercising the discrete responsibilities of components. Behaviour testing - and the use of mock objects - is an approach that becomes valuable when we need to exercise collaborations. The collaborations suggested by our model were:

The passing of messages from the queue buffer to the channel, and

The delegation of the internal queuing algorithm to an appropriately capable .NET collection class (Queue<T>).

It is in exercising these collaborations that we should consider using mock objects, rather than any other kind of test double.

Design for Testing and YAGNI

Take, for example, the collaboration that is required in order to pass messages from a queue to a channel. It is the responsibility of BufferedChannelQueue to forward each message to the destination channel when the buffer exceeds capacity, and in carrying out this responsibility, BufferedChannelQueue is expected to collaborate directly with a specific channel instance.

Without using mocks to exercise this collaboration, the only option would be to ask BufferedChannelQueue whether it has carried out its responsibility, by questioning some internal flag or other change of state. This would introduce the need for a public property on BufferedChannelQueue, designed to answer a question such as “Have you sent the messages yet?“ or “Message count please?”. In disciplines such as eXtreme Programming, introducing this property might be considered “design for testing”, and therefore exempt from all charges of YAGNI (You Ain't Gonna Need It). I would suggest, however, that this property is YAGNI, and that it has been introduced because the alternative (behaviour testing) was not considered. The fact is that state testing does not work for collaborations; it merely serves to muddy the water by encouraging the inclusion of unnecessary code.

Sometimes "design-for-testing" can introduce YAGNI artifacts

As well as introducing spurious code to support state checks, relying too much on “asking” components encourages closely-coupled models. This is because asking implies a greater knowledge of the “component being asked” (on the part of the component doing the asking) than may be necessary. The alternative is to encourage a “tell don’t ask” culture, through the use of events, double-dispatch, and other design patterns.

Using Mocks

Instead of faking behaviour checks by checking state, we should be encouraged to write “real” behaviour tests that make use of mock objects. When we test the above collaboration using a mock of IMessageChannel, we confirm that BufferedChannelQueue has called IMessageChannel as expected, and need not bother asking any further questions regarding the internal state of BufferedChannelQueue – it is simply not relevant. If IMessageChannel was called three times because three messages were in BufferedChannelQueue's message buffer when it was purged, then that is all we need to know. BufferedChannelQueue passes the “collaboration responsibility” test, and we move on.
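A sketch of what such a mock-based collaboration test might look like, using Moq and NUnit for illustration (the attached sample may use different tools, and member names such as Enqueue and Forward are my assumptions). A compact version of the component under test is included to keep the example self-contained.

```csharp
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

public interface IMessage { }

public interface IMessageChannel
{
    void Forward(IMessage message); // member name assumed for illustration
}

// Minimal sketch of the component under test.
public class BufferedChannelQueue
{
    private readonly Queue<IMessage> _queue = new Queue<IMessage>();
    private readonly IMessageChannel _messageChannel;
    private readonly int _queueCapacity;

    public BufferedChannelQueue(IMessageChannel messageChannel, int queueCapacity)
    {
        _messageChannel = messageChannel;
        _queueCapacity = queueCapacity;
    }

    public void Enqueue(IMessage message)
    {
        _queue.Enqueue(message);
        if (_queue.Count >= _queueCapacity) Purge();
    }

    public void Purge()
    {
        while (_queue.Count > 0) _messageChannel.Forward(_queue.Dequeue());
    }
}

[TestFixture]
public class BufferedChannelQueueTests
{
    [Test]
    public void BufferedChannelQueue_ForwardsAllMessages_WhenQueueReachesCapacity()
    {
        var channel = new Mock<IMessageChannel>();
        var queue = new BufferedChannelQueue(channel.Object, 3);

        queue.Enqueue(new Mock<IMessage>().Object);
        queue.Enqueue(new Mock<IMessage>().Object);
        queue.Enqueue(new Mock<IMessage>().Object); // capacity reached: purge

        // Behaviour, not state: we verify the collaboration itself, and ask
        // no questions about BufferedChannelQueue's internals.
        channel.Verify(c => c.Forward(It.IsAny<IMessage>()), Times.Exactly(3));
    }
}
```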

Below is a diagram showing how a collaboration test could be introduced to exercise and verify collaboration ‘1’ of our model.

Take a look at the test class BufferedChannelQueueTests in the attached code sample; tests 2 and 3 serve to highlight the difference between a “fake” collaboration test (relying on state checks) and a "real" collaboration test (using mocks). Test number 3 (BufferedChannelQueue_ForwardsAllMessages_WhenQueueReachesCapacity) is our "real" collaboration test, designed to exercise collaboration '1' by mocking the IMessageChannel interface, as illustrated above.

The Role of Intention-Revealing Code

The responsibility-driven approach to implementing requirements can be effective, but it can be immediately undermined when code is poorly-documented, and when class modules, in particular, fail to reveal their intent. This undermining is particularly acute in a team that is used to working with components that describe their own capability, since such components make it easy for developer-designers to make appropriate decisions regarding their use. A team that is geared towards “thinking in responsibilities” can be thrown off track when the class files they read tell them nothing more about the capabilities of the class than the accessibility of its members.

It is generally accepted that how a class presents itself in human-readable form can determine whether it is an enabler of knowledge-sharing or a hindrance to it. Because of this, our components should aim to reveal as much about themselves in the code file as they do in any other form of documentation, and perhaps even more. Developers who are used to dealing with rapidly-changing requirements in agile project environments understand the frustration of working with components that do not self-describe: it simply slows down the decision-making process. Under these often pressurized circumstances, instead of seeking to create more and more UML diagrams, verbose technical architecture documents (that quickly become stale), and architectural scribblings on the whiteboard that never get updated, what we need to aim for is:

Tests that describe the capability of components.

E.g., A suite of tests that proves how components “A” and “B” are capable of assuming responsibilities “X”, “Y”, and “Z”.

Components that describe themselves fully.

If you need something to assume a particular responsibility for you, then maybe "ComponentX" is the one to do it.

A quick and easy way to search for responsibility descriptions, and to match those descriptions to capable components.

E.g., a tool enabling responsibility descriptions to be searched, and components (e.g., class files) to be located, along with their tests.

Self-Describing in .NET

When the responsibilities we identify are to be implemented by one or more .NET classes, development teams can benefit from an effective use of the Visual Studio “code regions” tool to clarify intent. Within the attached code sample, there are two versions of the BufferedChannelQueue component, both implemented as .NET classes of the same name, differentiated by namespace:

ResponsibilitiesExample.BufferedChannelQueue

An implementation that makes use of standard .NET code regions formatting.

ResponsibilitiesExample.ResponsibilityFocused.BufferedChannelQueue

An implementation whose code regions group class members by responsibility.

When we compare the two classes, we see how clarity can be introduced by emphasising component responsibilities using the code regions tool. Firstly, we note the characteristics of ResponsibilitiesExample.BufferedChannelQueue, which conforms to the "out-of-the-box" code regions formatting standard:

When the reader “collapses to definitions”, the class communicates little more than member scope.

Compare and contrast this with the formatting applied to ResponsibilitiesExample.ResponsibilityFocused.BufferedChannelQueue:

Class members are grouped by responsibility. This means that public methods, private methods, overridden members, and interface implementations may actually be grouped together, regardless of their accessibility.

Class members - whether they are collaborators with their own responsibilities (e.g., _messageChannel) or simple value objects responsible for holding state (e.g., _queueCapacity) - are often grouped together with related methods.

See NotifyMessageForwarded(IMessage message), which is coupled with the MessageForwarded event (this event being the collaborator in notifying the message originator).

Compare this to classes that split-up collaborators, and choose rather to group private members in a “Private members” region somewhere at the top of the file.

By “collapsing to definitions”, the reader can return to a view that tells them all they need to know about the responsibilities, and ultimately, capabilities of the class.

Member accessibility and scope is not the overriding concern. If the reader needs this kind of information, then they can refer to the drop-down list of members at the top-right of the code pane, which groups class members by accessibility.
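As a sketch of this layout (the region titles, member bodies, and supporting interfaces below are my own illustration, not lifted from the attached sample), a responsibility-grouped class might read like this:

```csharp
using System;
using System.Collections.Generic;

// Supporting roles, repeated here only to keep the sketch self-contained.
public interface IMessage { }
public interface IMessageChannel { void Forward(IMessage message); }

public class BufferedChannelQueue
{
    #region Accept messages and hold them until the buffer reaches capacity

    private readonly Queue<IMessage> _queue = new Queue<IMessage>();
    private readonly int _queueCapacity;

    public void Enqueue(IMessage message) { /* ... */ }

    #endregion

    #region Forward queued messages to the channel

    private readonly IMessageChannel _messageChannel;

    public void Purge() { /* ... */ }

    #endregion

    #region Notify the message originator when their message has been forwarded

    public event EventHandler MessageForwarded;

    private void NotifyMessageForwarded(IMessage message) { /* ... */ }

    #endregion
}
```

Collapsed to definitions, the reader sees three responsibility headings rather than “Public Methods” and “Private Fields”; the members that fulfil each responsibility, whatever their accessibility, sit together under the heading that explains them.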

Conclusion

I started this article by recalling my experience of using the Visual Studio code regions tool to clarify code intent. This technique had emerged from the broader discipline of responsibility-driven development, which I have presented in this article, with an emphasis on the test-code-refactor cycle. One of the most powerful tools for confirming behaviours and collaborations is testing with mocks, which I hope I have described here (and demonstrated in the attached code) in enough detail for readers to appreciate its usefulness.