Active Architecture for Agile Projects

I had previously blogged about how I felt that "System User Stories" were required and filled an important gap. I now believe that extending User Stories to create these System User Stories is not the best solution.

I still believe that there is a gap in Agile techniques that needs to be addressed for certain types of applications. For simple CRUD applications it is quite possible that you will not require these additional techniques; this discussion is aimed at non-trivial applications. I think my error was in trying to extend the brilliant User Story concept to cover System Stories. I now see that it was a bit of a forced fit. What I would propose instead is something I refer to as Active Architecture documentation.

The argument for Active Architecture

Too frequently in Agile, the focus on requirements is only on User Stories. Sometimes the focus is on User Stories and Technical Tasks, but not on any architecture or holistic system design requirements. While this may be acceptable for Web applications that are simple, trivial, and primarily event-driven, it certainly does not suffice for applications that have complex back-end components.

Unfortunately, Agile has greatly reduced the amount of Architecture and Design documentation because of its sheer volume and the waste involved. I agree 100% that there is waste, but I would suggest that the correct response is to create a new, leaner type of Architecture document. Instead of the traditional voluminous, passive Architecture document, I recommend a new Active Architecture document composed of Component Conversations.


The issue with traditional architecture documents is that they defined what the architecture was in a passive and distant way, somewhat like a road map. What I am proposing is to define the architecture in an active way. Rather than just defining the map of roads, the documentation needs to define how the routes will be driven, and in that active way describe the road map. The traditional style of architecture documentation was too far removed from the actual act of system use to be relevant and effective. Architecture documents need to define what actions the architecture supports, just like User Stories do for user functions.

User Stories capture interactions between the user and the system efficiently, and Technical Tasks capture lower-level tasks that the system is required to do. But where in the lean documentation do we define, at a high level, the interactions between system components?

The types of Agile Requirements I recommend are:

User Stories - Stories of how the user interacts with the application or manual processes

Component Conversations - Conversations between components of the application

Technical Tasks - Technical tasks that the application needs to perform within a component.

Creating these component conversations and thinking them through for the entire system will ensure the following:

That our User Stories are consistent in how functionality is handled across components

That our User Stories do not create an undue amount of rework when a story is encountered in a later iteration

That the entire solution has been thought through at a high level

That the chance is reduced of discovering a story late that requires earlier stories to be revisited

That our Technical Tasks are implemented consistently across components

That the complex back-end, high-level requirements that would be inefficient and possibly inconsistent to define on a story-by-story basis are defined

That complex back-end, high-level requirements that may not be covered easily by User Stories are defined

That the Component Conversations can be reviewed, ideally in a graphical way, to ensure all the system functionality is covered and there are no gaps

These Component Conversations define the Active Architecture of the solution. One possible format for a Component Conversation is shown in the example below, and you can customize it to fit your particular requirements.

Component Conversation Example

Perhaps an example would be beneficial to illustrate what these Component Conversations look like. One important aspect is the timing of when they should be created: Component Conversations should be created after User Stories have been written, and ideally after a User Story Mapping session has been held. The User Stories and the User Story Map provide inputs to the creation of the Component Conversations.

Let’s use the Facebook function of browsing suggested friends as a well-known example of a user-event-driven system that could benefit from Component Conversations.

Disclaimer: These stories are just a fictional representation of a well-known interface. They do not imply or propose actual Facebook functionality.

If we have the User Stories already defined, we could create the following Component Conversations to provide additional detail and context:

[1][SuggestedFriends] does
[create].[SuggestedFriendList] by
[filtering common connections across all friends].[Connection-DB] when
[user logs on]

Now this is a very rudimentary example, but I hope it illustrates the level of information that is captured in Component Conversations. Even so, I think the following four ideas have been captured:

There is a process that creates these SuggestedFriendLists separate from the User interactions

There is a Connection-DB that is used to create this SuggestedFriendList and possibly the HideFriendList

There is an object that contains the Pending Invitations. (And possibly a similar concept for other pending objects)

There is the concept of a persisted HideFriendList to ensure suggestions that have been selected to be hidden will not appear again.

These ideas probably would have eventually been discussed when the stories were worked on in later iterations, but the real advantage of Component Conversations is to discuss them early, think them through, and ensure they are handled in a consistent way across the entire application. Discussing Component Conversations early also addresses the risk of potential rework when items are discovered after some development is complete. For example, the Pending Invitations object might be designed differently depending on all the objects it needs to support. If we can provide a process with Component Conversations where the high-level requirements and behaviours are understood early, we should minimize such rework in later iterations.

The technical tasks would just be technical tasks that arise out of these Component Conversations, just like functional tasks are generated out of User Stories. (For example, a technical task might be “ensure Pending object is persisted and supports friends, events, and messages”).
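Since each Component Conversation follows the same [component] does [action].[subject] by [mechanism].[source] when [trigger] template, a team could even capture them as structured data and review the set for coverage. Here is a minimal sketch in Python; the class and field names are my own illustration, not part of the article's proposal:

```python
from dataclasses import dataclass

@dataclass
class ComponentConversation:
    """One Component Conversation in the template
    [component] does [action].[subject] by [mechanism].[source] when [trigger]."""
    component: str   # the component doing the work
    action: str      # what it does
    subject: str     # what it produces or acts upon
    mechanism: str   # how it does it
    source: str      # the component/data store it draws on
    trigger: str     # the event that starts the conversation

    def describe(self) -> str:
        # Render back into the article's bracketed template form
        return (f"[{self.component}] does [{self.action}].[{self.subject}] "
                f"by [{self.mechanism}].[{self.source}] when [{self.trigger}]")

# The SuggestedFriends example from the article, expressed as data:
suggested_friends = ComponentConversation(
    component="SuggestedFriends",
    action="create",
    subject="SuggestedFriendList",
    mechanism="filtering common connections across all friends",
    source="Connection-DB",
    trigger="user logs on",
)
print(suggested_friends.describe())
```

A structured form like this makes it easy to, say, list every conversation that touches Connection-DB when checking for gaps or inconsistencies.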

Summary

I've used these Component Conversations on my current project and they have greatly improved the understanding for both myself and my team members. It is crucial that these Component Conversations do not become overly involved or take a long time; a time box of 1-2 days should be sufficient to ensure a consistent understanding of the system exists. The Component Conversations can then be modified and changed as we proceed through iterations. The focus should be on Lean and on what is enough to provide the required value.

The concept has provided great value for my current project.

About the Author

Terry Bunio is currently a Principal Consultant at Protegra. Terry never wanted to be a Project Manager. He started as a software developer and found his technical calling in Data Architecture. Along the way Terry discovered that he enjoys helping to build teams, growing client trust, encouraging individual career growth, completing project deliverables, and helping to guide solutions.

It seems that some people like to call that Project Management. As a practical Project Manager, Terry is known to challenge assumptions and strive to strike a balance between theoretical, by-the-book Agile and real-world approaches.

Terry considers himself a born-again agilist, as Agile implemented according to Lean principles has made him once again enjoy software development and believe in what can be accomplished. Terry is a fan of Agile implemented according to Lean principles, the Green Bay Packers, the Winnipeg Jets, Data Architecture, XML databases, and asking why.

If you do the following:

On [Event]: sorted list of ([Actor] [action] [subject] [context])

You'll get a use case description.
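For illustration, that template can be rendered mechanically. A hypothetical sketch in Python; the event and the actor/action/subject/context tuples are invented for the example:

```python
def use_case(event, steps):
    """Render a use case description from the template
    On [Event]: sorted list of ([Actor] [action] [subject] [context]).
    `steps` is a list of (actor, action, subject, context) tuples, in order."""
    lines = [f"On {event}:"]
    for actor, action, subject, context in steps:
        lines.append(f"  {actor} {action} {subject} {context}")
    return "\n".join(lines)

print(use_case("user logs on", [
    ("System", "builds", "SuggestedFriendList", "from common connections"),
    ("User", "reviews", "suggested friends", "on the home page"),
]))
```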

There's nothing wrong with that; UCA was a brilliant and really lightweight method in its age! Except it is not fully technocratic (like unit tests are), and it was seen as formal documentation, hence thrown out.

Another disadvantage is that, in reality, it cannot step out of the documentation phase: it always has to be bent a little, and it could never be formalized well enough to actually run as code. You either lose some of its advantages, make it hard to write (and/or read), or get a UX disaster as the output.

Nevertheless, in my mind it remains one of the most beautiful methods, one which also makes development somewhat automatic - something that wasn't welcomed by early agilists, as they needed creative freedom; automatism is for computers.

Perhaps we could write some domain-specific languages for specific cases. I doubt it could be automated while retaining its readability, however, but you could try it of course...

It's true that you should always start from use cases; however, use cases were ruthless to dreamers: they showed if something was absolutely useless, while User Stories are able to hide this cold truth from their authors. How many times have I stood in front of a Scrum board, looking at user stories starting with "As a User I want...", and thought, "No user wants that; it's just to show off to people who won't use the product anyway." It's that way sometimes; after all, the people who just want to look cool are also stakeholders, aren't they?

Yes, use-case-based modeling is a brilliant way to do things, and whenever you do something, you should make sure that the overall system is consistent in the mirror of its real dialogues between user and computer, and that those dialogues require minimal effort from the user. It would be cool if we could automate them well enough, but I'm not sure it could be done well.

I do understand use cases, but how do you integrate the use case concept with agile methodologies?

Short answer: You do it on a per task basis.

Long answer: it takes about ten minutes to an hour to do. What's most important is that it forces you to think in certain ways, yet it doesn't restrict you by requiring runnability, and you can go back to the customer and ask: is this what you'd like to see?

Nowadays I don't use a written scenario-based approach; rather, I do it as a set of screens, but there are still important factors:

A use case is something a user needs to do, and that provides a benefit to the user

The usage of the system is a dialogue between a user and a machine

The user does not click; the user chooses from options, does something, or requests something to be done - a click is a solution to the problem, not the story of it

Concentrate on problems, not solutions: you need to understand the problem at hand first in order to find the best solution for sure.

The intent of the system is not to support 'wants' but help the end-user reach real-world goals

Use case write-ups capture a lot about why something has to be done; they give context, they capture intent. This 'why' is really important if you want to get out of the customer-wish-executor-slave mode of some agile teams. Unfortunately, with screen flows the whys are lost, even though screen flows are more visual.

More can be read in Writing Effective Use Cases by Alistair Cockburn, or on his blog, and he even wrote about Agile vs. use cases.

Or you can use most UML books, of course; the old RUP/UML system, or the 4+1 view model, was centered around use cases: what the system is needed for, and what the steps are to solve those needs. Everything else - the class diagrams, the action flows - was just support for this. It didn't concentrate on customer-dev-team or management-customer problems like user stories do: it concentrated on user-computer dialogues and real-world goals.

As a Solution Architect, I'm scratching my head to think that "component conversations" are an appropriate treatment for a highly complex environment. Modules have conversations with each other - okay, I get that, but have we answered the HOW part? Using WHAT infrastructure? Just because modules communicate with each other doesn't necessarily mean they do it efficiently, reliably, or securely - they are often on different platforms, different OSes, and geographically isolated. At what point does Agile address these issues?

I can see these as an upfront approach to walking through the architectural impact of a set of user stories. What I don't get is how they "can then be modified and changed as we proceed through iterations."

User stories get thrown away. There's no point to keeping them because, like all design documents, they go stale. What stays maintained are the runnable acceptance tests. To the extent that these component conversations are going to similarly be maintained, they really need to become clear high-level code.

I agree that these stories don't address all aspects of Architecture and in particular the infrastructure. This method was intended to be a step in the right direction of envisioning the solution more than what is typically done with Agile User Stories.

I view the questions you have asked as important ones that need to be examined. But my perspective is those questions need to be examined after you have determined that the software solution is complete, consistent, and correct. Now we can start to talk about the how and the many options available.

I agree that it is not a traditional Architecture document, but hopefully it can start to move us in the direction of creating more architectural deliverables in Agile.

I have found them easy to maintain if they are at a high enough level. Although it does take a bit of effort, the value is realized in that these stories provide an excellent introduction to the solution for new team members.

I'm not a believer in the principle that your runnable tests are your system documentation. You never have enough time to create tests for all situations and that could potentially create large gaps in your system documentation and understanding.

I think Terry has been exposed to some less sophisticated Agile teams if he thinks that Stories only work for simple web sites. That's a bogus premise from the outset.

Stories express the 'who', 'what' and 'why' of a feature. The 'how' evolves from the product vision that is developed with the business partner.

In "complex" applications there are always constraints: which database we can use, which tables we can use, whether data access must be through stored procedures or services, whether the application must have 99.99% uptime, etc.

Architecture evolves too and there may be constraints here that need to be taken into account as well, but they are constraints, not stories. The simpler the architecture, the better of course. "Make the solution as simple as possible, but no simpler", paraphrasing Einstein, but to arrive at simple, one must start with simple.

I don't buy that anything more than acceptance tests that prove the appropriate functionality (and non-function requirements) are necessary. I can always reverse engineer diagrams out of the living code; it's the only design document that's not stale.

I don't buy that anything more than acceptance tests that prove the appropriate functionality (and non-function requirements) are necessary. I can always reverse engineer diagrams out of the living code; it's the only design document that's not stale.

Second, if you have only automated acceptance tests, you're likely building software which is used by other software. If that's not the case, the chances that you're messing it up are pretty high. I've seen environments, a lot of them, where developers don't meet users at all. This is really sad - I also know people who like it this way - but it doesn't mean automated acceptance tests will help you solve the human problems your system is meant to solve in the end.

(Problems are always human problems. Even if you're writing a kernel driver, you're doing that so that the humans using the given device can solve their real-life problems better. Or at least if it ever gets used.)

As for design documents, you have to create the most elegant (the simplest, yet not a single step simpler) solution for the problem at hand. If you can understand the problem on-the-go, that's fine, but if you can't, you'll have to write things down.

Yes, they'll go stale, but they shall reflect an understanding of the moment. How could you build a solution to something which you don't understand? Are you sure that the effort needed for every little detail should go into runnable code instead of paper? It's not always important to be able to run them; regression is not an issue in most cases. It is important to record them, and runnable tests are anything but a simple solution to that problem.

Write problems down. Ask users if that's the problem they have. Build a demo, perhaps. Make sure you take note of things and why they are important. Make sure you keep consistency. Don't put effort into making a computer understand something for which your human brain with a pen and paper would be perfectly adequate.

No, you can't reverse engineer diagrams from living code; that's for one. Information gets lost.

Nothing loses information faster than a written document. There is tacit knowledge that never makes it into the document, and that means something like 50% information loss with every handoff. See the Poppendiecks on the subject of the seven software development wastes, and a good write-up on Handoff Waste.

That means documents are good short term reminders for the writer, but little else. If, as is usual in 'complex environments', there are Architects doing System Architecture and handing the documents over for implementation, not only is that not Agile, it's a lossy method of information exchange.

Second, if you have only automated acceptance tests, you're likely building software which is used by other software. If that's not the case, the chances that you're messing it up are pretty high. I've seen environments, a lot of them, where developers don't meet users at all. This is really sad - I also know people who like it this way - but it doesn't mean automated acceptance tests will help you solve the human problems your system is meant to solve in the end.

I guess you've never worked with Selenium or Watir or any of the other automated UI testing frameworks that act like a user, but I agree completely that you can't leave the user out of the process. Neither can you leave out the other stakeholders like your business partner, the aforementioned architects, compliance officers, project governance personnel, etc. It's a collaboration all around.

When I talk about acceptance tests, I'm talking about tests that prove the solution does what it's supposed to do. I'm talking about unit tests, integration tests, regression tests, functional tests, static analyses, security scanning, etc. If you can run them once, you can run them repeatedly.

And I would say ALWAYS automate testing at every level you can. As you point out, humans make mistakes; they don't run the tests the same way every time. Sometimes they don't run them at all.
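As a minimal illustration of "run them once, run them repeatedly": an automated acceptance check is just a deterministic function call plus assertions, so every run exercises the same steps the same way. A toy sketch in plain Python (the friend-suggestion logic and names are invented for the example, echoing the article's Facebook scenario):

```python
def suggest_friends(connections, user):
    """Suggest people who share at least one friend with `user` (toy logic).
    `connections` maps each person to the set of people they are connected to."""
    friends = connections.get(user, set())
    suggestions = set()
    for friend in friends:
        suggestions |= connections.get(friend, set())
    # Don't suggest the user themselves or people they already know
    return suggestions - friends - {user}

def test_mutual_connection_is_suggested():
    connections = {
        "alice": {"bob"},
        "bob": {"alice", "carol"},
        "carol": {"bob"},
    }
    # carol shares the mutual friend bob with alice, so she is suggested
    assert suggest_friends(connections, "alice") == {"carol"}

test_mutual_connection_is_suggested()  # runs identically every time it is invoked
```

Unlike a human tester, this check cannot skip a step or vary the inputs between runs, which is the point being made above.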

Perhaps my response was reactionary. I find that whenever I hear (or read) someone talking about 'practical' Agile, what they really mean is, "I'm not comfortable moving ahead without a lot of analysis up front."

I don't accept that. I think 'just enough' to continue is appropriate, and I never trust an architect that doesn't write code.

As an aside, they did a study that analyzed people's understanding of written communications, especially e-mail and similar (like this forum I would imagine), and they found that fully 50% of the time, the reader understood exactly the opposite of what was meant by the writer.

I suggest that your response was one of those misunderstandings because, aside from the basic disagreement that documents are a good method of communicating design, etc., we agree completely.

Nothing loses information faster than a written document. There is tacit knowledge that never makes it into the document, and that means something like 50% information loss with every handoff. See the Poppendiecks on the subject of the seven software development wastes, and a good write-up on Handoff Waste.

That means documents are good short term reminders for the writer, but little else. If, as is usual in 'complex environments', there are Architects doing System Architecture and handing the documents over for implementation, not only is that not Agile, it's a lossy method of information exchange.

Don't buy everything Poppendieck writes, especially not her conclusions. I think there are a lot of misunderstandings there. Maybe she was part of a lean handoff, with all due respect.

If I hand you a paper saying "The fish is blue", and ask you to make a photocopy of it and hand it over to the next guy with the same instructions, then with our modern technology it'll contain the same information no matter where you are in the waterfall. The information loss is 0%, which is 50% less than what Poppendieck writes.

Whereas oral communication usually puts out some garbage after round 2.

There are methods to record knowledge. There are techniques to record knowledge effectively. What's more, they teach them in every high school in the world, for good reason.

When I write documentation, I don't write it the way I do on forums. I pretty much learned how to do it in a way that the person executing the ideas written down has a clue what to do and why to do it.

If I told them orally, I wouldn't have a checklist saying: OK, done this, done that, missing this, work on that.

And there's a huge problem with recording something in code: when you record something in code, you can check it back, but the customer can't. When we write a checklist, both of us can check it.

If you can trace back from pure code exactly what the application is used for and why, good luck with that. If that were possible, maintenance developers would always do a bug-free job. Good luck also figuring that out from acceptance tests; I haven't seen good acceptance tests yet.

I guess you've never worked with Selenium or Watir or any of the other automated UI testing frameworks that act like a user, but I agree completely that you can't leave the user out of the process

I'm familiar with Selenium; I've been working in environments where people have used it for around five years. Back then I was at a startup and I wrote our own Selenium tests. It's not useless - it helps you with a lot of things. Test automation is a good thing, yet it's simply inadequate for testing UI effectiveness. It also doesn't tell you much about the whys.

A Selenium test is a bureaucrat: it doesn't know why it does things. It only knows it was told to click on this button, enter John Smith, click on this dropdown, choose Mr, click on another button, and check if the color of the box named "status" is green. Why this is to be done, Selenium does not and never will have a clue about, whatever you do.

It could easily be that the Mr dropdown has not been used by users for years, but the Selenium worker will still exercise it, and people will still try to adhere to this acceptance test. If you had an understanding of why it does what it does, you would be able to argue for and against it.

As an aside, they did a study that analyzed people's understanding of written communications, especially e-mail and similar (like this forum I would imagine), and they found that fully 50% of the time, the reader understood exactly the opposite of what was meant by the writer.

If someone is trained in any kind of design, it's easy to pick up how to write in a way that cannot be misunderstood. It takes practice, and it takes a lot of mistakes.

Emails and other informal communications are a bad example: here I don't take the time to consider each word, keep a dictionary with me, or compare against different vocabularies (i.e., make sure I use the same word consistently in the same way, just like with class names). It's a really hard business, but it can be done. You need to care, and you need to be humble, just like with development.

Humble toward what, you could ask? The only thing I could say is that humbleness in software engineering is toward the requirements placed on the software you write - that each part of your software is about satisfying those requirements. But these requirements are told to you by humans: your acceptance tests are not your requirements, and your unit tests are not your requirements, as you can't discuss those with the people who required them. The only way to do it precisely is through written, non-runnable documentation.

PS: I'm an architect who hates to code, but does it anyway, since currently the market seemingly can't tell the difference; I'm told I'm an outstanding coder, which came as a surprise to me given how bored I am with implementation. You shouldn't do too much upfront design - I have no problem with the schedule; I have a problem with the amount.

Good discussion, gentlemen. This is exactly the type of discussion that I think is valuable to have.

I think Terry has been exposed to some less sophisticated Agile teams if he thinks that Stories only work for simple web sites. That's a bogus premise from the outset.

Who knows how sophisticated the teams are that anyone has dealt with? My belief is that Agile is a continuum and its practices are not absolutes. I did not say that User Stories only work for simple web sites; they work for almost all applications. But I believe some other forms of documentation can also work, depending on the circumstances. To discount all other deliverables and place faith only in user stories and tests can be very risky.

Unwavering faith in the Agile practices can be just as misplaced as the belief that you can define a complete Work Breakdown Structure. The trick is always where in the middle does the most value lie for each client and project.

Any technical decision by a software architect who does not write code will most probably be a waste. Even those who do write code get a lot of misses, because the many assumptions that people make rarely translate into the real world. The architect has to be an active developer of the application. The best experiences I have had with architects involved architects who were writing code along with me. That allowed the architects to change elements of the architecture to match reality.

"I'm not comfortable moving ahead without a lot of analysis up front."

I always wondered how one can analyse something that does not yet exist. In short, "analysis up front" is prediction, and predictions seldom come true.

You skip over how you make the components. These are the architectural elements of the software design, and this is the architect's main job. A functional spec is not the "architecture", just an interesting doc to an architect who is trying to come up with a software architecture.

OO is the best way to proceed from User Stories. It is well known that functional decomposition is never the best way to design a system, as N people will come up with N different designs. It is not a reproducible process, hence a useless way to develop. You want an approach that different people will use and come to the same or similar architectures - hence use OO. After you have the User Stories, do an OO analysis, do an OO design, code it, and deliver it. Breaking up the software into arbitrary components is a waste of time. Any new User Stories are used to add to the OO analysis, design, etc. in a linear way, and never cause you to refactor or perform other waste-of-time activities. OO was adopted because of these characteristics, but is downplayed today because "agile" is a better term to have on a resume.

If you do break code into components, then do it based on SOA principles, which means you have to have the process flow diagrams of the business process. This is reproducible if N people do the task. The full flow diagrams may not be available until the end of a project (if it ever ends), so it will always be problematic to form components in an agile dev process. You never have the whole picture needed to make the best solution. The best you can do is keep moving software between components as you get a better understanding of the business process and what services exist. The OO code will not change, as the basic actions of the flow process will always be the operations performed by roles. So write your code so that such reorganization is possible - George