Abstract
This article examines the use of the ‘Architect’ role within Agile environments, drawing on the author’s experience as well as the opinions of other software professionals who have found their own versions of successful software architecture through different means in agile environments. It discusses whether an agile team should include the ‘architect’ role, how the architect can ensure the software does not become fragmented, and how architectural governance and accountability can be maintained.

Architecture for any software project is the glue and foundation that keeps the software together, stable, maintainable, and performing well, among other things. How you “get to the architecture” depends on many factors, including the structure of the project, the methodology used, and the people on the project. This article focuses on architecture as it is used specifically within Agile environments.

In addition to my own background and insight from working as an architect in Agile (as well as waterfall and other) environments, I recently put a few questions out to the architecture and agile communities, hoping to gather additional opinions from those practising architecture in Agile environments and to learn how architecture is used and influenced across these different cultures. The response and encouragement have been overwhelming. Because this article is part opinion and part research, it takes into account both my own experiences and those of other top professionals. My goal is to be as objective as possible while sharing the experiences and opinions of other professionals, whether or not I have shared similar experiences, use different methodologies, or agree or disagree with them.

Using the waterfall methodology, many software projects try to nail down all of the requirements up front, both functional and non-functional. Architecture documents are also created, to varying levels of detail, describing the architecture. Waterfall typically can’t account well for variations in business or technical requirements along the way, so trying to get it right the first time (which never happens) is important to many organizations. As a result, a lot of time is typically spent up front trying to nail this down as best as possible.

Agile teams handle architecture differently. Depending on the team, there may be an initial iteration (or two, or three) to nail down the initial architecture. Some agile teams will try to at least nail the most significant architectural decisions early (see Introducing Significant Architectural Change within the Agile Iterative Development Process), hoping to mitigate future architectural changes while understanding the costs associated with them. Other Agile teams let their software grow organically, as many Agile proponents promote YAGNI (“You aren’t gonna need it”), or, more verbosely, “let’s not write anything until we actually have a business or technical reason to do it” – in other words, let’s not over-architect. YAGNI is great for a lot of things, but it doesn’t go far enough to account for significant architectural decisions that need to be baked into the software early to avoid significant cost down the road. The “Last Responsible Moment” principle tries to address this as well, and it works great for some teams, but it’s up to each team to determine when the “Last Responsible Moment” for creating and implementing the architecture is. That determination is very subjective, and the longer you wait, the more re-engineering work is required.

So, how are Agile teams doing Architecture out in the field? My experience tells me how I’ve done it, as well as what has worked in the past for me and my teams. How does everybody else do it? Do they do it like me, do they have a better way, do they do it worse? Are they successful? Knowing that “the team” makes all the difference and what works for one team won’t necessarily work for another team, I have compiled what I think are the best responses, along with my commentary, to the questions put out to the community.

Dani Mannes – Agile Modellers & Developers

Dani Mannes is the Founder and Chief Architect at ACTL Systems Ltd. His work focuses on the defence industry, where he acts as a consultant and trainer helping this predominantly waterfall industry adopt agile.

In Dani’s approach, he uses the terms “agile developers” and “agile modellers” to define agile team members who do either development or design. He runs into a typical problem that I have seen in many Agile environments; as Dani puts it, “The teams are supposed to refactor, but they often don’t do it because of time pressure. This leads ultimately to spaghetti code/architecture and the velocity will eventually drop dramatically.”

The modellers use a modelling tool to sketch the architecture and to ensure the architecture documentation stays up to date. “So in each sprint you have an architecture description of the sprint scope. But the architecture should not only focus on the current sprint but also take into consideration stories that will most probably be tackled in the next 2 sprints”. Dani emphasizes that taking future stories into consideration is essential, but they should not be modelled at this point, since “taking into consideration means only to think about them but not actually find a solution for them”.

In Dani’s world, “the team acts as the architect”, and he states that there is no need for a dedicated architect role. He does, however, keep one person in the role of architect, or “architect champion”, just to keep discussions short and to monitor the need for architectural change during each sprint.

“Our experience has shown that when the team applies a model based design process during first days of each sprint where focus is set on the sprint scope and attention is given to the scope of the next 2-3 sprints, the team is capable of coming up with a good architecture that serves as guidance during implementing the sprint scope,” says Dani, and “Since the team has come up together with the architecture, all members are aware of the modules.”

Lee Fox – Architecture as Part of The Team

Lee Fox is a Software and Cloud Architect, Agilist, and Innovator, and he ensures that the architect is always a contributor and a team player. In his experience with waterfall projects, the architect is isolated from the rest of the team and isn’t necessarily even part of it. “As part of the team, I preach that the architect MUST be a contributing member and with some degree of consistency even contribute to team deliverables,” he says, and “the architect needs to really enhance the idea of empowerment and encourage the team to make architectural decisions.” Lee also emphasizes that the architect must have the team’s trust and maintain the big picture vision. This is a recurring theme in many agile processes when it comes to working within an agile team.

Lee’s vision is that “Agile architects work both in the low level with the team as well as the high level with the business. They use their broad exposure with the business to help guide a team’s decisions in the right direction.” He is fine with architects working on multiple teams, but cautions that “the architect must contribute to each team he is a member of and keep up with the big picture”. What Lee has seen through his approach and coaching is an increase in both velocity and code quality.

The Need for Governance – Dan’s Thoughts

In Agile, teams are self-organizing: no one on the team should have more responsibility than anyone else, as everyone is working together to achieve the same goal. Work is picked up by any team member, and the expertise that is created is shared amongst all team members. However, I believe, along with other respected software professionals such as Simon Brown, author of Software Architecture for Developers (https://leanpub.com/software-architecture-for-developers), that you need someone responsible for the Big Picture of the software, and that includes the architecture. This creates accountability and governance for the architecture and all non-functional requirements (NFRs).

There are multiple ways to approach this, but ultimately, having an architect who is responsible for the big picture helps ensure that the architecture stays continually in line with the functional and non-functional requirements. The architect has increased access to both the technical side and the business side, and can ensure that the team(s) continually align not only with the business/functional requirements, but also that the architecture is in alignment with both short and long term functional and non-functional requirements, and that the existing architecture is followed, revised, and re-worked as necessary.

Ok, so I know some agile purists out there are thinking, “Long term requirements are very subjective. Until a user story gets chosen by a ‘Product Manager’ and moves from the backlog to being actively worked on during a sprint, it’s not really a requirement yet.” Ok, fine. I get that, and I understand the advantage here: it makes agile a process that helps you change direction or add new features to the project midway through a development phase. User stories are (or should be) always functional requirements, though. When considering the architecture, we need to understand what is in the backlog, or at least the general type of functional items in the backlog, which helps the architect create the technical vision. The technical vision can change as the project progresses, but ultimately understanding the grand vision will help ensure an architecture that takes into account current requirements and meets future product functionality with minimal re-work.

A drawback of Agile teams without architectural governance is that the system tends to fragment or suffer from too much rework, and often non-functional requirements (such as performance, scalability, and others related to architecture) get tossed out the window. Imagine a team has estimated a total of 20 points for the upcoming sprint. They have to consider: how can they get it done? Among these considerations is what is needed now, and unfortunately, what is needed now often comes at the expense of technical accountability. Fragmentation of the system (or of the architecture) occurs when the now becomes the most important piece, rather than ensuring we are adhering to sound architectural principles and meeting our NFRs as well as the long term architectural vision of the software. This is why architectural governance is important.

In agile planning meetings, the team will typically talk about design and may also discuss architectural changes. Ultimately, the team may still be on their own to make these decisions and to ensure that the architecture they have will meet the existing business and technical requirements of the sprint, but it’s up to the architect to ensure that the decisions and approaches undertaken by the team are in fact consistent with the architectural vision. The architect is accountable for the team’s decisions related to architecture and for any architectural changes that come about.

The architect is also a team member who may code, but who ultimately has responsibility for the continual evolution of the architecture. Typically, the architect needs to work in the trenches alongside the development and business teams to ensure constant communication between the technical team and the business. The architect should be involved in all of the agile processes, from planning and development to retrospectives. If the architecture has failed, or we spent too much of a sprint worrying about “the now” and compromised our architecture, it needs to be brought up and addressed (think retrospective). The whole team should be able to come up with reasons why it failed and how we can improve.

In Summary

When using agile teams, the role of the architect deserves a high level of consideration. It’s important for this role to work alongside the team and to have a sound vision of how the architecture will meet short and long term objectives with minimal re-work. The architect should be part of all regular agile processes and held accountable, while the entire team is still free to propose and shape the architecture. Having the architect maintain accountability provides a level of governance to the project from a technical architecture perspective that would otherwise get lost and lead to fragmentation in most agile environments.

This article contrasts Scrum with Agile methodologies and points out that while Scrum can fit within an Agile development team, Scrum alone doesn’t mean your organization has become Agile. Practices such as task estimation and burndowns may be established Scrum processes, but upon analyzing their effects you may find that they are actually working against your team’s agility. It’s important to evaluate the value of established processes to determine whether they are helping or detrimental to your organization’s Agile initiatives. A real world example demonstrates how a well-intentioned process became inefficient within one organization’s Agile implementation.

Are Scrum Processes Such As Burndowns and Task Estimation Working Against Your Organization’s Agility?

Recently, a colleague asked me about task estimation and how to reduce the amount of time it’s taking his team to do it. I gave him some suggestions, but it got me thinking more about task estimation and why agile teams are doing it at all. I’ve also been involved with many organizations that are following Scrum processes and calling themselves agile; they are getting there, but they are still putting Scrum processes before people and not really thinking about the value of the processes they are following. This article discusses how Scrum processes such as burndown charts and task estimation can actually work against your organization’s initiatives to improve agility.

Scrum actually predates the agile manifesto, so it’s fair to say that just because you are using Scrum, you are not necessarily embracing agile practices. However, if you look up Scrum on Wikipedia you will see the following: “Scrum is an iterative and incremental agile software development framework for managing software projects and product or application development.” Ok, so it is an “agile software development framework”, so you can forgive people’s perception (even mine) when they feel that by following Scrum processes we are Agile. But I believe there still needs to be a further distinction here. Scrum is just a process for managing software development, and I see Scrum as a set of rules that could be used in an agile environment; by following Scrum processes to the letter of the law, we could actually be putting processes over people – which is definitely the opposite of what the agile manifesto is trying to achieve.

Scrum, like many other agile implementations, has the basic premise of a user story backlog. We estimate story points as relative numbers, instead of using an absolute measurement of time, and over time we can determine our velocity – how many story points we can generally complete in a sprint or iteration. We can then easily judge our backlog to get an idea of how long it will take to complete the stories in it.
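As a sketch of the arithmetic involved (every number below is hypothetical, not from any team mentioned in this article), velocity and a rough backlog forecast come down to a couple of lines:

```python
import math

def average_velocity(completed_points_per_sprint):
    """Velocity = mean story points completed per sprint."""
    return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

def sprints_to_finish(backlog_points, velocity):
    """Round up: a partially used sprint still counts as a full sprint."""
    return math.ceil(backlog_points / velocity)

past_sprints = [18, 22, 20]               # points completed in the last three sprints
velocity = average_velocity(past_sprints)  # 20.0
remaining_backlog = 130                    # story points left in the backlog
print(sprints_to_finish(remaining_backlog, velocity))  # 7
```

The point of the relative-number approach is that the team never converts points to hours; the forecast comes purely from observed throughput.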

Once the stories are estimated, we task them out for the sprint, but unlike standard Scrum implementations, I don’t like to estimate hours for tasks. Inherently there is nothing wrong with it, but I, along with many Agile experts, don’t see a lot of value in it. It’s just very difficult to estimate work accurately in absolute measurements of time – that’s why we story point the stories to begin with. Estimating at the task level suffers from the same accuracy problems.

With Scrum implementations, estimating tasks in absolute time is still a vital part of the process. Wikipedia defines a Scrum task as follows: “Added to the story at the beginning of a sprint and broken down into hours. Each task should not exceed 12 hours, but it’s common for teams to insist that a task take no more than a day to finish.” [Author’s note: the latest version of the Scrum Guide has removed task estimation from the Scrum requirements.]

I had a friendly discussion with a previous client on estimating tasks, and I could never get a good answer as to why they do it other than “it’s part of the process” and “it’s agile”. There certainly is a lot of confusion out there about what exactly Agile is and what constitutes adding value as part of an agile implementation. Sometimes teams feel that by following a Scrum process, they’re agile – but as a team we need to think about value. Scrum, agile, or not: what value are we actually getting from spending the time to estimate tasks? If, as a team, we cannot answer that question, we need to re-evaluate this process and determine whether it’s a waste of time. If there is value, sure, let’s continue doing it – but many times a process is followed for its own sake.

However, it’s easy to see why task estimation is being done. We need the information for our burndown charts. These charts tell us how many task hours we have completed versus how many hours remain, and they are part of the Scrum process that should be presented during the daily stand-up meeting. Ok, sure – then theoretically, IF we need burndown charts, we need tasks estimated in hours.

But, do we need burndown charts?

There are a few teams out there that can accurately estimate blocks of development work in absolute units of time, but they are not the majority. The reason we estimate user stories in story points is that software estimates in absolute measurements of time are hardly ever accurate. So why does Scrum use relative story points for user stories, but insist on absolute hours for the individual tasks within those stories? Even when estimating dozens of small tasks individually, we succumb to the same inaccuracy as we would if we estimated user stories the same way.

The only thing that is important at the end of the sprint (or iteration) is a completed story. A non-completed story isn’t worth anything at the end of the sprint, even if 10 of its 12 estimated hours are complete. A quick look at completed versus non-completed tasks should be enough for the team to know and decide what needs to be done to complete the work by the end of the sprint. Many agile experts would agree, including George Dinwiddie, who recommends in a Better Software article using other indicators for burndown charts instead of hours remaining, such as burning down or counting story points. Gil Broza, the author of “The Human Side of Agile”, recommends not using burndown charts at all and instead, among other things, using swim lanes for tracking progress.
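To illustrate the distinction (the story names and point values below are invented purely for illustration), a story-point burndown only removes a story’s points from the chart when the whole story is done – a story that is “10 of 12 hours complete” still counts as fully remaining:

```python
# Each story carries its point estimate and a binary done/not-done state.
stories = [
    {"name": "login",  "points": 5, "done": True},
    {"name": "search", "points": 8, "done": False},  # partially coded, so NOT burned down
    {"name": "export", "points": 3, "done": True},
]

total = sum(s["points"] for s in stories)                       # 16
remaining = sum(s["points"] for s in stories if not s["done"])  # 8

print(f"{remaining} of {total} points remaining")  # 8 of 16 points remaining
```

Because partial progress contributes nothing, this metric can’t give the false comfort that hour-based burndowns do when many tasks are “almost finished”.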

In my experience, knowing the number of task hours outstanding in a sprint doesn’t help, and it is an inaccurate metric for planning additional “resource” hours in the sprint. Even though some organizations do this, using task hours to plan for additional “resource” hours during a sprint isn’t valuable, since the absolute time measurements aren’t accurate in the first place.

If you really are determined to be Agile, you need to make sure you understand where processes make sense. After all, the agile manifesto preaches “individuals and interactions over processes and tools”. Following Scrum processes doesn’t necessarily make you Agile; you need to really think about the value of these processes and determine whether they are working for or against your initiatives to become more agile. In the real world examples of task estimation and burndown charts, it was clear that these established Scrum processes were actually working against the organization in terms of becoming more agile. Even if your goal isn’t to be agile, by identifying and eliminating processes that don’t add value, you are eliminating waste and opening the door to replace those processes with initiatives that can truly add new value.

Dan Douglas is a professional independent Software Consultant and an experienced and proven subject matter expert, decision maker, and leader in the area of Software Architecture. His professional experience represents over 12 years of architecting and developing highly successful large scale solutions. Dan has been the architect lead on over 15 development projects.

This article helps clarify the second value of the agile manifesto, “Working software over comprehensive documentation”, by defining the different types of documentation being referred to. An analysis is given of code documentation, indicating the complexities that warrant it and how refactoring and pair programming can also help reduce complexity. Finally, an approach is given for evaluating when to document your code and when not to.

The second item in the agile manifesto, “Working software over comprehensive documentation”, indicates that working software is valued more than comprehensive documentation, but it’s important to note that there is still real value in documentation.

I see two types of documentation being referred to here: 1) code and technical documentation, typically created in the code by developers working on the system, and 2) system and architectural documentation, created by a combination of developers and architects to document the system at a higher level.

I will save discussing system and architectural documentation for future articles, as it is a much more in-depth topic.

So let’s discuss the first point – code documentation

Questions that many teams ask (agile or not) are “How much code documentation is enough?” and “Do we need to document our code at all?” Some teams don’t ask, and subsequently don’t document. Many TDD and Agile experts will tell you that TDD will go a long way toward self-documenting your code. You don’t necessarily need to do TDD, but there is a general consensus that good code should be somewhat self-documenting – to what level is subjective, and opinions will vary.

Software code can be self-documenting, but there are almost always complex use cases or business logic that need more thorough documentation. In these cases it’s important that just enough documentation is created to reduce future maintenance and technical debt costs. Documentation in the code helps people understand and visualize what the code is supposed to do.

If code is complex to implement, does something unexpected (such as indirectly affecting another part of the system), or is difficult to understand without examining every step (or line of code), then you should at minimum consider refactoring it to make it easier to follow and understand. Refactoring may only go so far, and there may be complexities that need to be documented even after the code has been nicely refactored. If the code cannot be refactored for any reason, you have another warrant for documentation.
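As a hypothetical illustration of that principle (the discount rule and all names below are invented), refactoring can make most of the code self-documenting, leaving a comment only for the business rule that names alone can’t express:

```python
# Before: opaque logic that forces every reader to trace each branch.
def calc(o):
    if o["t"] == "B" and o["v"] > 1000 and o["c"] != "US":
        return o["v"] * 0.85
    return o["v"]

# After: the same behavior refactored into intention-revealing names.
# Only the genuinely surprising rule keeps a comment.
BULK_DISCOUNT_RATE = 0.85

def is_bulk_international(order):
    return (order["type"] == "bulk"
            and order["value"] > 1000
            and order["country"] != "US")

def order_total(order):
    # Business rule: international bulk orders over $1000 are discounted
    # to offset customs fees charged back to the customer.
    if is_bulk_international(order):
        return order["value"] * BULK_DISCOUNT_RATE
    return order["value"]
```

The refactored version documents the *what* through its names; the comment is reserved for the *why*, which no amount of renaming could convey.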

Think about what can happen if code isn’t documented well. Developers will spend too much time poring over complex code trying to figure out what a system does, how it works, and how it affects other systems. A developer may also jump in without a full understanding of how everything works and what the effects are on other systems, which could create serious regression bugs and technical debt if the original intent of the code is deviated from. A little documentation gives you the quick facts about the intent of the code – something every developer will value when looking at the code in the future.

Of course, in addition to documentation, pair programming can and should be used to aid in knowledge transfer and can be a good mechanism for peers to help each other understand how the code and system work. Pair programming is also a good mechanism for helping junior and intermediate developers understand when and where they should be documenting their code.

The way I distinguish what code should be documented and what shouldn’t is based on future maintenance cost. Consider how your documentation lends itself to ease of ongoing maintenance, and how easing that maintenance will reduce technical debt and contribute to working software over time. If your documentation will directly contribute to working software by eliminating future complexity and maintenance, then document. If there is no value to ongoing future maintenance, then don’t. If you are unsure, ask someone a little more senior, or do some pair programming to help figure it out. I’ve also seen value in peer reviews to help ensure documentation is being covered adequately, but I still prefer to instill trust within the team to get it done properly rather than rely on a formal review process. When in doubt, document – a little more documentation is always better than missing documentation.

A question came in via the comments about how to weigh the amount of documentation needed in a project. This article was meant to address that question for code documentation, with my own opinions. System level documentation is much more in-depth, so look for a future article addressing documentation at the system level.

-Dan


This article introduces the reasons why organizations choose to standardize on a technology stack for existing and future projects in order to maximize the ROI of their technology choices. When selecting technology for new projects, the architect should consider technologies both within and outside the existing technology stack, but the big picture needs to be carefully understood, and consideration needs to be given to the ROI of introducing new technology versus using existing technology within the stack.

Navigating the Technology Stack to Get a Bigger Return on Your Technology Selection

As a Software Architect, understanding the long term effects and ROI of technology selection is critical. When thinking about technology selection for a new and upcoming project, you need to consider the project requirements, but also look beyond them at the bigger picture. Even though, at first glance, a specific technology might seem perfect for a new project, there may already be a more familiar technology that will actually have a much bigger return on investment in the long term.

Many organizations stick to specific technology stacks to avoid the cost, overhead, and complexity of dealing with too many platforms. An architect should have specialized technical knowledge of the technology stack used by the organization, and if the organization’s technology stack isn’t standardized, the architect should work to standardize it.

Advantages to an organization by standardizing their technology stack

Development costs – It’s easier to find employees who are specialized in a specific platform than to maintain multiple, conflicting platform specializations. When you need developers with conflicting skillsets (e.g., .NET and Java), you will likely need to pay for additional employees to fill the specialization gap.

Licensing costs – It’s typically advantageous to stick with only a few technology vendors to attain better group discounts and lower licensing costs.

Physical tier costs – It’s cheaper and more manageable to run physical tiers that use the same platforms or technologies. Using multiple platforms (e.g., Apache and Windows based web servers) requires double the skillset to maintain both server types and to develop applications that work in both environments.

As an architect, you have a responsibility in technology selection as it pertains to ROI. Once you are familiar with the organization’s technology stack and the related constraints, you can make a better decision about technology selection for your new project. You may want to put a higher weight on technology choices known collectively by your team, but it comes down to understanding the bigger picture beyond your current project and understanding the ROI of both the project and the ongoing costs of the technology choices used. You may need to deviate from your existing technology stack to get a bigger ROI, but be careful that the long term cost of supporting multiple platforms and technologies doesn’t exceed the savings of using a specific specialized technology for a specific case.
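A back-of-envelope version of that ROI comparison might look like the following sketch; every figure here is invented purely for illustration:

```python
# Compare the total cost of delivering a project on the existing stack
# versus introducing a new technology, over a multi-year horizon.
def total_cost(build_cost, yearly_support, years):
    return build_cost + yearly_support * years

# Existing stack: slower to build, but support skills are already in-house.
existing = total_cost(build_cost=120_000, yearly_support=20_000, years=5)  # 220000

# New technology: faster build, but licensing and specialized hires every year.
new_tech = total_cost(build_cost=80_000, yearly_support=45_000, years=5)   # 305000

print("stay on existing stack" if existing < new_tech else "adopt new technology")
```

Crude as it is, the exercise makes the article’s point concrete: a technology that wins on initial build cost can still lose once the ongoing cost of supporting an extra platform is priced in.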

When Microsoft released .NET and its related languages (VB.NET and C#) in 2001, many organizations made the choice to adopt VB.NET or C# and phase out classic VB development. Those that made the switch early paid an initial learning curve cost. Organizations that chose to keep classic VB as their development technology avoided additional costs at the onset; however, they paid a bigger price later, when employees left the company, finding classic VB developers became more difficult, and the technology became so out of date that maintenance costs and technical debt began to increase dramatically.

Sometimes the choice and ROI will be obvious: the technology in question might not be in use by your organization, but it lends itself well to your existing technology. For example, introducing SQL Server Reporting Services is a logical next step if you are already using SQL Server, and introducing WPF and WCF will complement an organization that is already familiar with development on the Microsoft .NET platform.

In another case, it may make sense to add completely new technology to your technology stack. For example, it may be advantageous from a cost perspective to roll out Apple iPhones and iPads to your users in the field, even though your primary development technology has been Microsoft based. Users are already familiar with the devices, and there are many existing productivity apps they will benefit from. Developing mobile applications will require an investment to learn Apple iOS development or HTML5 development, but the total ROI will be higher than if the organization decided to roll out Microsoft Windows 8 based devices just because their development team is more familiar with Windows platform development.

Finally, there will be cases where even though the new technology solves a business problem more elegantly than your existing technology stack could, it doesn’t make sense to do a complete platform change in order to get there. In these cases, the ongoing licensing costs, costs of hiring specialized people, and complexities introduced down the line far outweigh any benefits gained by using the new technology.

Summary

It’s important that the software architect facilitates the technology selection process by evaluating technology based on the ROI of the project while also considering the long term ROI and associated costs of the selected technology. However, the focus should not rest only on your existing technology stack; consideration should be given to unknown or emerging technologies during technology selection. Careful consideration should be given to the cost of change and ongoing maintenance of any new technology, and its ROI needs to be evaluated against the ROI of sticking with the existing technology stack over the long term.


As an architect working within an iterative agile environment, it’s important to understand that significant architectural decisions need to be made as early in the development process as possible, to mitigate the high cost of making them too late in the game. In iterative development, it’s important to recognize the distinction between requirements that require significant consideration of the architecture and those that are insignificant. This article contrasts significant and insignificant requirements and demonstrates the best approach to implementing both types. In agile environments, significant architecture decisions will sometimes need to be made iteratively and late in the development cycle. Guidance is provided on how to move forward by considering many factors, including ROI, risk, cost of change, scope, alternative options, regression, testing bottlenecks, release dates, and more. Further guidance is provided on how the architect can collaboratively move forward with an approach to implementation while ensuring team vision and architectural alignment with the business requirement.

Introducing Significant Architectural Change within the Agile Iterative Development Process

As agile methodologies such as Scrum and XP focus on iterative development (that is, development completed within short iterations of days or weeks), it’s important to distinguish the requirements that are significant to your architecture within the iterative development process. Contributing to your software architecture iteratively is very important to maintaining that architecture, ensuring the right balance between architecture and over-architecture, and ensuring that the architecture stays aligned with the business objectives and ROI throughout the iterative development process.

Many agile teams are not making a conscious effort to ensure that significant software architecture decisions are accounted for iteratively throughout the development process. Sometimes it’s because there is a rush to complete the features required within the iteration and little thought is given to significant architectural changes; sometimes it’s a lack of experience or of a team vision. There is usually a lack of understanding of how the guiding principles of the architecture need to be continually established, and of how they shape the finished product and align with the business objectives in order to see a return on investment. This is where the architect plays a huge role within the iterative development process.

Inattentiveness to significant architecture decisions, whether at the beginning or mid-way through a release cycle, can cause significant long-term costs and delay product shipping considerably. Many teams find out too late in the game that the guiding principles of their architecture are not in place, and that strategies to get to that point still need to be devised at the expense of time, technical debt, and being late to ship. When requirements from the product team involve core architecture changes or re-engineering, the changes are sometimes made without recognizing the need to strategize and ensure that the guiding principles of your core architecture are in place to secure ongoing business alignment and minimal technical-debt cost.

Within the iterative development process, it is important that the agile development team (including the architect) learns to recognize when new requirements are significant and when they are not. Deciding which requirements are significant and will be carried forward as guiding principles of your architecture can be worked out collaboratively by the developers and architects during iteration planning or during a collaborative architectural review session. This will help ensure that development on new significant requirements is not started without consideration of the architecture, and that there is time to get team buy-in, ensure business alignment, and create a shared vision of the new guiding principles of your architecture.

In addition, to help prevent surprises during iteration planning, the architect can work with the product team when preparing the user story backlog, identifying stories that could have a significant impact on the architecture before iteration planning begins. Steps can be put in place to help the product team understand the impact, and to assist them in weighing the expected ROI against the cost of implementing major architectural changes.

Separate the requirements that have architectural significance from those that do not, where significant is distinguished by alignment with business objectives and a high cost of change, and insignificant is more closely aligned with changing functional requirements within the iterative development process.

An insignificant architectural requirement will still be significant as it pertains to the functional requirements, but not in terms of the core architecture. To further contrast which decisions to consider significant and which to consider insignificant, take a look at the following table.

Significant: A high cost to change later if we get it wrong.
Insignificant: We write code to satisfy functional requirements and can easily refactor as needed.

Significant: Functionality is highly aligned to key business drivers, such as modular components paid for by our customers and customer platform requirements.
Insignificant: A new functional requirement that can be improved or duplicated later by means of refactoring if necessary.

Significant: The impact is core to the application, and introducing the functionality too late in the game will have a very high refactoring and technical-debt cost.
Insignificant: The impact is localized and can be refactored easily in the future if necessary.

Significant: Decisions that affect the direction of development, including platform, database selection, technology selection, development tools, etc.
Insignificant: Development decisions such as which design patterns to use, when to refactor existing components and decouple them for re-use, how to introduce new behavior and functionality, etc.

Significant: Some of the ‘ilities’ (including scalability, maintainability/testability, and portability).
Insignificant: Some of the ‘ilities’, such as usability, can in specific cases be better mapped to functional requirements; some functional requirements may require more usability engineering than others.

It is best to handle the significant decisions as early as possible in the development process. As contrasted in the tables below, you can see how the iterative approach lends itself well to requirements that have an insignificant impact on the architecture. You can also see how significant architectural requirements really form the guiding principles of your architecture, and how getting them right early on lessens the impact of change on the product.

Insignificant Decisions

Examples: new functional requirements (e.g. allowing users to export a report to PDF; there is talk of allowing export to Excel in the future, but it is currently not in scope); modifications and additions to existing functionality and business logic.

How to approach: Use an agile, iterative approach to development. This is a functional requirement with a low cost of change and a low cost of refactoring. Write the component to handle only its specific case, and don’t over-plan your code for what you think the future requirements might be. If the time comes to add or improve functionality, refactor the original code to expose a common interface, use repeatable patterns, etc. In true agile form, this prevents over-architecture if future advances within the functional requirements are never realized, and there is minimal cost to refactor if necessary.
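To make that refactoring path concrete, here is a minimal sketch (in Python, with hypothetical names) of how a component written for only its specific case might later be refactored behind a common interface once the Excel requirement actually materializes:

```python
from abc import ABC, abstractmethod

# First iteration: write only what the story needs -- a PDF export.
class PdfExporter:
    def export(self, report: str) -> str:
        return f"PDF[{report}]"

# Later iteration, only if Excel export actually becomes a requirement:
# refactor to a common interface and slot both exporters behind it.
class Exporter(ABC):
    @abstractmethod
    def export(self, report: str) -> str: ...

class PdfReportExporter(Exporter):
    def export(self, report: str) -> str:
        return f"PDF[{report}]"

class ExcelReportExporter(Exporter):
    def export(self, report: str) -> str:
        return f"XLSX[{report}]"

def export_report(report: str, exporter: Exporter) -> str:
    # Callers now depend on the interface, not on a concrete exporter.
    return exporter.export(report)
```

Nothing about the second shape was designed up front; the interface is extracted only when the second concrete case arrives, which is exactly what keeps the first iteration cheap.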

Significant Decisions

Examples: our customers use both Oracle and SQL Server; performance and scalability; security considerations; core application features that have a profound impact on the rest of the system (for example, a shared undo/redo system across all existing components).

How to approach: These decisions need to be made as early as possible, and an architectural approach has to be put in place to satisfy the business and software requirements. They are usually significant because there is a huge cost of change (refactoring and technical debt), and potential revenue loss, if they are not put in place correctly or need to be refactored later. These decisions are core to the key business requirements and will have a huge cost of change if refactoring is required later.

Agile Isn’t Optimized For Significant Architectural Change

It’s unrealistic to assume that the agile development process is built to excel at introducing significant functionality and architecture changes without a large cost. This is why some requirements are significant and need to be addressed as early in the development cycle as possible, while ensuring there is alignment with the business objectives.

As great as it would be to map out every possible significant requirement as early as possible, there are sometimes surprises. This is agile development, after all. We need to understand that, along with changing functional requirements, significant business changes can occur part way through the development process and could still have a significant impact on our core architecture and guiding principles, so we need a strategy to mitigate the cost and move forward.

Certainly, it’s possible to introduce significant core-architecture changes by means of refactoring, or by scrapping old code and writing new functionality – that’s the agile approach to changing functional requirements, and it’s how agile helps prevent over-architecture and ensures we are only developing what’s needed. The problem is that it doesn’t work well for the significant decisions of your architecture; when we do refactor, the cost can be so high that it will exponentially increase your time to market, cause revenue or customer loss, and potentially disrupt your business. In these cases, the architect, along with the product and development teams, needs to create a plan to get to where they need to be.

Mitigating the Cost of Significant Change

There is always a cost to introducing core architectural functionality too late in the game. The higher the cost of the change and the higher the risk of impact, the more thought needs to be put into the points below.

The refactoring cost will be high. Is there an alternative way we can introduce this functionality in a way that will have a minimal impact now without affecting the integrity of the system later?

This change is significant and will place a huge burden on the development team to get it right. Will we have a significant ROI to justify the huge cost of change? For example, is Oracle support really necessary after developing only for SQL Server for the first six months, or is it just a wish from the product team? Do we really have customers that will only buy our product if it runs on Oracle, and what are the sales projections for these customers? Is there a way we can convince these customers to buy a SQL Server version of the product? The architect needs to work with the product and business teams to determine next steps.

How will this affect regression testing? Are we creating a burden for the testing team that will require a massive regression testing initiative that will push back our ship date even further? Is it worth it?

How close are we to release? Do we have time to do this and make our release ship date?

What is the impact of delaying our product release because of this change?

Is it critical for this release or can it wait until a later release?

Can we compromise on functionality? Can we introduce some functionality now to satisfy some of the core requirements and put a plan in place to slowly introduce refactoring and change to have a minimal impact up front, but still allow us to meet our goal in the future?

What is the minimal amount of work and refactoring we need to do?

What is the risk of regression bugs implementing these major changes late in the game? Do we have capacity in our testing team to regression test while also testing new functionality?

Are we introducing unnecessary complexity? Can we make this simple?

Everyone involved in the software needs to be aware of the impact and high cost that significant late-in-the-game changes will have on the system, the development and testing teams, ship dates, complexity, refactoring, and the technical debt that could be introduced. There are strategies that can be used, and the points above are a great start in determining how to strategize the implementation of significant architecture changes. One of the roles of the architect is to help facilitate and create the architecture and guiding principles of the system and ensure its long-term consistency. As the system grows larger and more development is completed, introducing significant architecture changes becomes more complex. The architect needs to work with all facets of the business (developers, QA, product team, sales, business teams, business analysts, executives, etc.) to help ensure business alignment and a solid ROI for significant architecture decisions.

Moving Forward

Once a significant decision is made that will form part of your architecture’s guiding principles, the architect needs to understand the scope of work, determine what will be included and what won’t, collaboratively create a plan for how to get there, and understand how the changes will fit within the iterative development cycle moving forward. The architect needs to ensure that the product and development teams share the vision, understand the reasons for introducing the significant change, and understand the work that will be required to get there. If your team is not already actively pairing, it may be a good time to introduce it, or alternatively introduce peer reviews or other mechanisms to help ensure consistent quality when refactoring existing code to support significant architectural changes.

Depending on the level of complexity, the testing team may need to adjust their testing process to ensure adequate regression testing takes place, covering both new and existing requirements that are affected by the significant architecture change. For example, if we make a significant change to support both Oracle and SQL Server, we need to ensure that existing functionality which was only tested against SQL Server is now re-tested in both Oracle and SQL Server environments. The architect or developers can work with the testing team for a short time to help determine the degree of testing and which pieces of functionality specifically need to be focused on, ensuring the QA/testing teams are correctly directing their efforts.
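As a sketch of what that cross-platform regression pass might look like, the following Python snippet uses hypothetical in-memory stand-ins for the real database connections and replays the same functional check against every supported platform:

```python
# Hypothetical stand-ins for the two supported platforms; in a real suite
# these would be connection factories pointed at SQL Server and Oracle
# test environments.
class SqlServerBackend:
    name = "SQL Server"

    def fetch_customer(self, customer_id):
        return {"id": customer_id, "platform": self.name}

class OracleBackend:
    name = "Oracle"

    def fetch_customer(self, customer_id):
        return {"id": customer_id, "platform": self.name}

BACKENDS = [SqlServerBackend(), OracleBackend()]

def run_customer_regression() -> list:
    """Replay the same functional check against every supported platform,
    returning the names of any platforms that failed."""
    failures = []
    for backend in BACKENDS:
        customer = backend.fetch_customer(42)
        if customer["id"] != 42:
            failures.append(backend.name)
    return failures
```

Because every existing functional test is driven through the list of backends, adding Oracle support automatically doubles the regression surface rather than relying on testers to remember which features were only ever exercised on SQL Server.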

Summary

It’s important to distinguish significant architecture decisions from requirements that are insignificant as they relate to the core architecture of your system. When introducing significant architecture changes iteratively within an agile environment, you must understand the impact and complexity that such changes carry when introduced late in the game. Understand the business impact of the changes, and ensure the architect works with the rest of the organization to determine business alignment, risk, and ROI, while understanding the cost of change, before moving forward with a plan to introduce the significant architectural changes.


The ongoing consistency of your software architecture depends on its alignment with your business objectives, the suitability of the technology choices, and whether your team has bought into and is following the architectural vision. This article demonstrates key factors in determining how well your software architecture can sustain a collaborative long-term vision and growth of your software, along with the difficulties of trying to ensure business value without a shared vision or collaborative team input. Real-world examples from the field are used to demonstrate successful scenarios, the thought behind them, how buy-in was attained, and how they complemented the QA testing strategy for the business objectives. An approach is given for recovering failing projects where there was no consistent strategy, turning chaos into a coherent strategy aligned with the business objectives.

Your Team’s Ongoing Vision Will Determine the Long Term Consistency and Alignment of Your Software Architecture

One of the problems that the architecture of your software should address is consistency: consistency of technology, design patterns, approaches, layers, frameworks, etc. As software projects evolve, it’s important to ensure that the architecture, especially as aligned with your business objectives, remains consistent. Functional requirements can certainly change along the way, but the key is to do enough architecture work, and ensure consistency, for those big design decisions that need to be made as early in the game as possible.

Ensuring consistency in this capacity isn’t about guidelines for best practices such as re-use of existing components and developing components that are decoupled from one another. These best practices should be part of most (all) development projects. Ensuring consistency of the architecture is about ensuring that the guiding principles of your core-architecture have buy-in, are clear, are being followed, and are driving the long term success of your software in relation to your critical business objectives.

Ensuring consistency can require a certain level of control in some scenarios; however, more often than not, very few controls are required when you have team buy-in and a shared vision from the beginning as to how the business value is being provided. This is especially true once you have a team that is consistently delivering business value with the software. Enforcing overly strict controls can be demoralizing for most teams, and I’m very opposed to forcing development teams to do things a certain way or dictating how work will be done. I’m all for ensuring consistency and a good architecture across the application, but this can be done without forcing, controlling, and dictating how it will be done.

To help ensure consistency and buy-in across the board, it is important to consider the following when coming up with your architecture:

How crucial are the recommendations at hand to the business?

Will we see a business benefit by following the guidelines that come out of our evaluation?

What is the long term detrimental impact to the software of not doing this or doing it too late in the game?

Will following these recommendations eliminate refactoring costs and technical debt later on?

These considerations will help ensure buy-in and ease collaborative agreements with the team as the team will have a better understanding and vision as to the importance of the architecture to the business. Having a consistent vision of the business value helps ensure consistency with the architecture moving forward.

Be sure that team members who want to participate can help collaborate on the architecture or guidelines. This provides ownership by the team members which helps drive and ensure continued consistency. I’ve said many times that architects should not dictate requirements, rather they should create recommendations facilitated by understanding the software, technology, business, customers, etc. These recommendations should involve collaboration and review with the other team members before final architecture decisions are agreed upon and finalized.

Let’s look at some real-life examples from the field:

Example 1) The sales team had trouble in the past selling to some large customers who primarily used Oracle database servers. The architect discussed the scenario with the business leaders, who made a business case for supporting both Oracle and SQL Server. Collaboratively with the development team, the architect determined that the code base of the entire application could remain the same, but the data layer could be swapped out to support different platforms. The business case helped ensure buy-in for an ORM tool to be used as a data layer that supported both platforms. The team collaborated and evaluated different options, finally selecting a tool that met all of the business requirements for performance, scalability, and multiple platforms. In fact, the tool selected would work easily with SQL Server and Oracle platforms with little overhead. It was clear to the entire team how important the ongoing use of this ORM framework throughout development would be to the business, and that deviating from it could cause considerable damage to the product and business model. The team was completely on board with the vision. In addition, this drove the QA testing team to put controls in their testing processes to ensure compliance with this business requirement. As part of their process, the QA testing team made sure to test the software on both SQL Server and Oracle platforms, and multiple platform testing environments were created.

Example 2) Another scenario allowed customers to pay for specific application modules, but not others. It was evident that the architecture needed to adhere to this business model, and that consistency of the architecture throughout the development process would ensure ongoing compliance with the business objectives. Collaborative team buy-in on the business benefit, along with the dependency injection and inversion of control framework, ensured that the components being built were modular and able to be swapped in and out of the application easily. This would also drive QA testing initiatives ensuring test plans accounted for and tested this modularity, as it is a core part of the business model. From a development standpoint, the team came to a shared vision as to the business reason components are built using this approach. Teams understand that by not doing this, or by deviating from this approach, they are creating a refactoring and technical-debt cost to fix later on. Of course, teams are free to improve upon this when new functional requirements are added within the iterative development process.

In these two examples, it was clear that team buy-in and a shared vision were established because the architectural approaches were well thought out, evaluated, aligned to the business requirements, and carried a huge cost of change if we got them wrong. Importantly, the entire team had an opportunity to be involved collaboratively in coming up with the architecture, which further strengthened their commitment, as they implicitly took ownership of the architecture themselves.

It is much more difficult to ensure consistency across development teams without a shared vision of the value being delivered. For example, dictating that certain design patterns be used over others is a subjective decision that will likely fail to win buy-in, as the business value isn’t clear. A forced buy-in approach will likely fail and lead to team demoralization.

Guidance, recommended patterns, approaches, and coding standards can all be put in place. In reality, they mean very little unless we are leading by example and have shown through practice, team buy-in, and business value why we are using those approaches and what the business advantage is. Instead of working on aligning architecture with business requirements, I’ve seen teams spend weeks coming up with coding practices (how to declare variables, which variable-naming pattern to use, etc.) for new projects. The problem is that most people don’t read, or care to look at, the documents created in these team sessions, and ensuring compliance for compliance’s sake is difficult and a waste of time. Even if there is a little value in standardizing how code is written and keeping variable naming consistent across the board, I don’t believe documents standardizing the approach provide the value or the incentive to do this.

Sometimes, to rescue a failing project, you may need to assert more control and constraints in order to get to a point where the software is coherent and beginning to meet the business objectives. This is a state that we are trying to avoid by doing our up-front architecture work and ensuring consistency with a shared vision. However, as consultants, sometimes we are brought into the project too late in the game. If this happens, trying to fix the solution may require short term measures and controls, but don’t lose sight of the fact that the real value is in ensuring consistent business value through architecture and a shared vision. There may be a lot of refactoring that needs to be done, but the teams still need to share a vision as to the business value of what is trying to be accomplished. My experience is that dictating control will only work in emergency scenarios in the short term just to get to a stabilization point, but for the long term, the team needs to work with a shared vision and understanding of the business value to make consistent progress.

Getting the team’s buy-in and creating a shared vision may be challenging and may take longer in failing projects where the vision wasn’t there from the beginning, but it’s the best bet for the long-term success of your software and for consistency with your business objectives. Once the team has a shared vision and is consistently contributing to the business value, fewer controls will be required. Your software will be continuously aligned with your business objectives as the development and testing teams work together to ensure that your software is adhering to your critical business objectives.


Software development projects need to be aligned with your key business objectives. The key business objectives that carry the biggest cost of change need to be baked into your core architecture as early in the development process as possible. Objectives relating to what the customer is paying for – such as modularity of components (e.g. paying for some components but not others), performance and scalability, and data accessibility – are a few of the many possible key business objectives that need to be baked into the core-architecture. Failure to do so can double or triple your development costs, leading to months of refactoring and potential customer and revenue loss. For agile environments, this article stresses that significant core-architecture decisions must not be made too late in the game, debunking the myth of “no upfront architecture in agile” with real-world examples where user stories, while aligned with functional requirements, were aligned with neither the key business objectives nor the core-architecture.

Aligning Your Software Architecture With Your Key Business Objectives and Why Your Business Needs It

Software projects need an architecture – a core architecture – but defining that architecture and ensuring it meets the business and technology objectives requires more thought than just doing “architecture”. This article will focus on specific areas of creating a core-architecture as related to key business objectives.

Listing some design patterns and layers for your new system and presenting them to the team might have some technical merit, but it isn’t enough if you want a well-thought-out and successful software system that meets your business objectives. Architecture isn’t just design patterns, layers, and code design. In fact, that’s a very small part of it, if it even qualifies at all. Architecture is about making the very significant decisions that will help ensure the alignment of your completed software with your business and technology objectives.

Creating the right architecture requires business and domain knowledge, product and customer knowledge, research, communication, technological evaluations, technical agility, and expert experience in software development. The decisions made here will shape the final software solution, so your core-architecture really represents these decisions that have been made during this architecture creation phase.

Now, separate this from day to day software development. Teams make “architectural-like” decisions all the time when they determine how to implement specific functionality. Ideas will get tossed around about how many layers of abstraction will be implemented, which design patterns to use, and so forth. Although some would categorize this solely as design, in a general sense this is still architecture, but maybe not your core-architecture. You could certainly say that not all of these design decisions will have a significant impact on the software.

Grady Booch states that “All architecture is design, but not all design is architecture.” He also states that “Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change.”

Your core-architecture needs to represent those significant decisions and, most important of all, they need to be aligned with the business requirements.

Some examples of these significant decisions that become part of your core architecture include technology selection (development languages, frameworks, server platforms, etc), application deployment considerations, and technology considerations for key business objectives.

Some examples of architectural decisions to support technology considerations for key business objectives are as follows:

The system must work on both Oracle and SQL Server databases. -> An ORM tool is selected that will allow the application to easily swap DBMS vendors. The ORM tool that has been selected has been evaluated against other tools and also meets the performance requirements. Using this tool will also lead to faster data layer development time over the alternative tools.
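A minimal sketch of that decision, with hypothetical names and Python used purely for illustration: the application codes against a vendor-neutral data-access interface, so the DBMS vendor becomes a configuration choice rather than a code change:

```python
from abc import ABC, abstractmethod

class CustomerStore(ABC):
    """The application codes against this vendor-neutral interface;
    only the data layer knows which DBMS is behind it."""
    @abstractmethod
    def get_customer(self, customer_id: int) -> dict: ...

class SqlServerStore(CustomerStore):
    def get_customer(self, customer_id: int) -> dict:
        # A real implementation would issue the query through the
        # selected ORM against SQL Server.
        return {"id": customer_id, "vendor": "sqlserver"}

class OracleStore(CustomerStore):
    def get_customer(self, customer_id: int) -> dict:
        return {"id": customer_id, "vendor": "oracle"}

def make_store(vendor: str) -> CustomerStore:
    # Swapping DBMS vendors is a configuration decision, not a code change.
    stores = {"sqlserver": SqlServerStore, "oracle": OracleStore}
    return stores[vendor]()
```

In practice the ORM supplies the two concrete implementations; the point is that nothing above the data layer mentions a vendor.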

Customers can use and pay for a variety of modules that need to be plugged in at run time. The modules need to work together to share information, but also operate independently if related modules are not available. -> Dependency Injection and Inversion of Control frameworks are evaluated against writing an internal DI/IoC framework, and an approach is selected to control how and when the components get loaded and used. The components will adhere to a specific interface to ensure compatibility and modularity within the system.
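As an illustrative sketch of that modular approach (hypothetical names; plain Python standing in for a real DI/IoC framework), modules adhere to a common interface, only the modules the customer paid for are registered, and lookups degrade gracefully when a related module is absent:

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional

class Module(ABC):
    """Every paid-for module adheres to this interface so it can be
    plugged in at run time."""
    name: str

    @abstractmethod
    def start(self) -> str: ...

class BillingModule(Module):
    name = "billing"

    def start(self) -> str:
        return "billing started"

class ReportingModule(Module):
    name = "reporting"

    def __init__(self, registry: "ModuleRegistry"):
        self.registry = registry

    def start(self) -> str:
        # Cooperate with billing when it is installed; operate
        # independently when it is not.
        billing = self.registry.get("billing")
        return "reporting with billing" if billing else "reporting standalone"

class ModuleRegistry:
    """A tiny stand-in for an IoC container: only the modules the
    customer paid for get registered, and lookups degrade gracefully."""
    def __init__(self) -> None:
        self._modules: Dict[str, Module] = {}

    def register(self, module: Module) -> None:
        self._modules[module.name] = module

    def get(self, name: str) -> Optional[Module]:
        return self._modules.get(name)
```

The same registry shape is what lets QA exercise each licensing combination: a test can register any subset of modules and assert the application still behaves correctly.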

The system has to be fast – a single implementation must handle a minimum of 100 customers and 5,000 simultaneous users with no performance degradation. This performance must be maintained with over 100,000,000 database records in the core database tables, and will be measured by… etc… -> Performance considerations relating to how data is retrieved and stored, caching, scalability, and data load operations are reviewed. Frameworks and architecture decisions related to this are reviewed and selected to ensure that performance considerations are baked into the core architecture. Coding standards and design patterns are put in place to ensure the UI is always responsive and UI data is loaded asynchronously to not freeze the user experience for any amount of time.
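One small illustration of the “keep the UI responsive” rule (Python threads standing in for whatever asynchronous mechanism the real platform provides): the slow data load runs on a worker thread while the calling thread stays free to service the user:

```python
import queue
import threading
import time

def load_data_slowly() -> list:
    # Stand-in for a slow database query or data-load operation.
    time.sleep(0.05)
    return [1, 2, 3]

def load_async(results: "queue.Queue") -> threading.Thread:
    """Run the slow load on a worker thread; the UI thread keeps
    servicing the user and polls the queue for the result."""
    worker = threading.Thread(
        target=lambda: results.put(load_data_slowly()),
        daemon=True,
    )
    worker.start()
    return worker
```

The coding standard the passage describes amounts to making this the default shape for every data fetch, so no single query can freeze the user experience.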

Customers are paying to be able to access their data in a standard way using 3rd party tools, and want to write automated scripts to retrieve and access this data. -> The business layer will expose a corresponding secure REST API through which all customer data will be accessible.
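To make the REST decision concrete, here is a toy dispatcher in Python (the URL scheme, token, and customer data are invented for illustration; the real API would sit behind a proper web server). The point is the shape: every customer record is reachable over a uniform, scriptable, authenticated URL.

```python
import json

CUSTOMER_DATA = {"42": {"name": "Acme", "orders": 3}}

def handle_request(method: str, path: str, token: str) -> tuple[int, str]:
    """Tiny REST-style dispatcher: authenticate, route, return JSON."""
    if token != "secret-token":
        return 401, json.dumps({"error": "unauthorized"})
    if method == "GET" and path.startswith("/api/customers/"):
        customer_id = path.rsplit("/", 1)[-1]
        record = CUSTOMER_DATA.get(customer_id)
        if record is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(record)
    return 405, json.dumps({"error": "unsupported"})

status, body = handle_request("GET", "/api/customers/42", "secret-token")
```

A customer's automated script is then just an HTTP GET with a token, which is exactly the "standard way using 3rd party tools" the requirement asks for.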

We have licensed 3rd party vendors that pay a yearly fee to write and sell reports to our customers. We need an open reporting tool with easy access to report data. -> A technology evaluation is completed and multiple technology vendors are evaluated. The choice is made to use SQL Server Reporting Services to allow other vendors to easily create and sell reports to our customers. Reporting will not be done on the live database to minimize performance impact and mitigate the risk of rogue reports making their way into the reporting module. A separate star-schema analytics server will be deployed that contains aggregated customer data that is suitable for very fast customer reporting with no impact to the live production system.

As part of this process you may need to include specific details about how this will be implemented, measured, how the testing team will test the performance, the specific technology in question and how it will be used, etc. Your project will also need a shared vision of this architecture as the success of the project will depend on ensuring this architecture is maintained throughout the development process.

It should be clear that the core-architecture will represent these decisions, which will have a big impact on the completed software. Trying to change or implement these core-architecture decisions mid-way through the development process can have severe consequences, with a refactoring cost that could require months of development time. In the worst case, the change could be deemed virtually impossible without a major re-write if you have moved too far in another direction.

Could you imagine discovering, close to a customer release, that your software that was supposed to scale, doesn’t scale? Or that specific functionality doesn’t work when more than a few users are accessing it at the same time? This happens often when the software team was not mature enough to establish a core-architecture aligned with the key business objectives and to ensure that the developed code stayed aligned with that core-architecture. These things must be accounted for and mitigated against by your core-architecture, and the concepts of the architecture need to be baked into all development decisions made by the team throughout the development process.

If the problems mentioned above happen on your development project it is likely that your core-architecture was incorrect or never established, never followed, or effectively established too late in the development process. These problems alone could easily double or triple your total development costs, cause considerable delay to product shipping, and cost your organization customer and profit losses.

“What about Agile? We don’t do up front architecture in Agile, we do it as needed during our iterations and we follow the last responsible moment design principle in doing so.”

I’m a proponent of agile methodologies, and I’ve seen the great benefits that effective agile teams can have on a development project, but one thing I’ve seen over and over again in many agile environments is the assumption that less (sometimes zero) time should be spent on the architecture up front. The principle itself is sound as removing most up-front design for functional requirements to be developed as slices of functionality throughout the iterations provides a benefit of not over complicating and over architecting code. However, the assumption that we throw away all up-front design is incorrect. Even on agile projects there are significant decisions relating to your core-architecture and key business requirements that need to be made before development begins and other significant decisions that need to be made as early in the development process as possible. These decisions and related architecture need to be reviewed, updated, and maintained throughout the iterative development process.

Agility is fantastic when dealing with functional requirements and the need to respond quickly to changing functional requirements. However, this agility needs to be kept separate from the up-front core-architectural decisions that need to be made, aligned with the key business objectives, that will help ensure your software product’s success and conformance to those objectives.

Even in agile environments where YAGNI (You aren’t gonna need it) and “build now, refactor later” are the trends, refactoring code too late in the development process in order to meet significant design decisions aligned with the business objectives is going to cost you. As mentioned earlier, refactoring costs of months and months of development time are the norm for organizations that didn’t account for core-architecture decisions in their development – especially in agile environments, where there is a misconception that these architecture decisions were supposed to be made as late in the game as possible.

In many agile implementations, these core-architecture decisions are made too late because they don’t typically relate to a single user story, and they end up carrying a huge technical debt cost: once the architecture decisions finally get made, there is a large cost to change and refactor the existing code. Days, weeks, or months can be lost to refactoring.

Another problem on some agile teams is what I call “the race to the finish”. Development teams race to satisfy the requirements of the user stories as quickly as possible to ensure they complete the stories within their iteration and to keep their velocity up (average user story points completed per sprint). And although the functional requirements of the user stories are solid and intact, thought isn’t always given to core-architecture concerns such as modularity, scalability, and performance, even though the core-architecture is aligned with the business objectives. Especially if the core-architecture hasn’t been defined, or has only been loosely defined, you can expect even less consideration, as the focus turns to completing the work as described instead of ensuring the development is in alignment with the core-architecture. To mitigate this in true agile form, agile teams and product owners need to ensure that business objectives relating to the core-architecture are part of the acceptance criteria for the user stories, and that they are thoroughly tested before the testing team can give the “Ok” on the completed user stories.

Depending on the agile team, how experienced they are, and how senior they are, the focus on how and when design decisions are made can also vary, and not every decision needs to be made during the inception of the project. Typically, the biggest decisions with the biggest cost to change should be made as early in the software development process as possible. Make 100% sure that you don’t lose sight of the “architecture”, the significant design decisions within it, and how those decisions align with the key business requirements.

In summary, part of a successful architecture for a successful software product will require significant design decisions to be made up front. The bigger the cost of change for the design, the sooner the decision needs to be made and implemented within the solution. Not establishing an architecture can lead to months of wasted development time refactoring code that wasn’t originally aligned with key business objectives such as modularity, performance, and scalability objectives. In agile environments, it’s especially important to ensure that thought is given to up-front architecture to mitigate the cost of refactoring in the future. Establishing and maintaining a core-architecture that is aligned with the key business objectives will go a long way to ensuring a successful product rollout.

Dan Douglas is a professional independent Software Consultant and an experienced and proven subject matter expert, decision maker, and leader in the area of Software Architecture. His professional experience represents over 12 years of architecting and developing highly successful large scale solutions. Dan has been the architect lead on over 15 development projects.

Perhaps you want to learn more about how to establish the architecture, how to present and collaborate with your development team to finalize the architecture and create a shared vision, how to ensure consistency across the architecture, how to bake in non-business requirements such as logging, cross-cutting, and other technical concerns into the architecture.

Or maybe your architecture is established, and your team has done a damn fine job of ensuring the architecture is solid and that it will meet the key business and technology objectives. How do you maintain this? How do you cope with architecture change when it is warranted or when the business changes?

Stay tuned, as I am writing a series of articles on these topics which will be available in the future.

My latest blog post touched on the fine line between no architecture and over architecture in software. I talked a lot about technical debt and why it’s bad. I got some feedback and some were wondering exactly how do you find that fine line, so here are some suggestions for improving in order to get closer to that fine line.

There are a lot of factors to look at to find that ‘fine line’ between over architecture and no architecture. If we are looking at a team, some questions to ask might be – How well is the team working together? What are the team dynamics like? Is there solid trust within the team? Is the team focused on team goals and team wins, or is it more or less individuals on the team looking for personal glory or wins as a higher priority than the team’s goals? Or does the team even have any team goals – is it strictly individual goals being strived for? Are people afraid of their superiors, or do they feel free to provide their own opinions, even critical ones, in order to reach the right team solution?

Asking these questions can help narrow down if there are general team problems that are contributing to the software problems…

If it’s a team problem, I’d suggest starting improvements at that level. Strategize some team building sessions, and meet weekly to discuss and resolve any issues from the prior week (in agile environments, this would be done at the retrospective at the end of each sprint/iteration). Get the team working together, setting goals together, and build a culture where the team is working together all the time – people aren’t afraid to speak up, and everybody is working together to improve because the only goals that are important are the team goals. If the team wins, everybody wins; if the team loses, everybody loses.

The teams that are the most successful are the teams that work well together. This creates a much bigger win than individuals working in silos would be able to achieve. The result is vastly improved software and less technical debt. The same could be said for a sports team or a team of any nature.

Ok, so that’s some basics around team stuff and sorting out team dysfunction.

Once that is sorted out you need to collectively create goals. With mutual respect in place among team members, better architectural discussions can be had with everybody speaking up about the architecture and coding standards. And do some pair programming as well :)…

Some say code reviews are ineffective and don’t work. They do work; I’ve seen them work. So, try code reviews – a great way for the entire team to review and interact on existing code and talk about refactoring for future software updates.

Understand the business requirements as best as you can up front, ask the right questions, and get everyone on board with where the software is going.

One thing I’ve always done when thinking about what to include in an architecture is come up with a scenario where the said architecture or pattern will help solve a problem (problems could be barriers to implementation, complexity concerns, business use cases, etc) and think about how the architecture lends itself to that. Look at the cost of implementing and maintaining the architecture, and also the learning curve required, versus the added value you are providing. Sometimes, I’ll only partly implement a pattern or architecture just so it’s there, and if I really need to fully implement it in the future, the refactoring is simplified to make it so. This has a minimal effect on maintainability and avoids over architecture, as it creates a scenario where you can complete the architecture later, when needed, without a mega refactoring.

Also – try not to create lists, weighting scales, or pro and con arguments on paper or a white board in coming up with your design. If you are solo just take some time out and really think about it, and talk to colleagues about it. In a team, get the team together and pound it out. Don’t analyze it to death though, it’s not worth the cost of that ;)

Have the architects create a vision and strategy and then discuss it with the entire team to come up with something even better. Implement it and continually review and adapt it as necessary. Don’t have someone make an individual decision about an architecture or pattern and then leave it to the other developers to have to deal with it and the technical debt it could create. Always get team buy in for technical strategy.

Someone at the Senior Architect level, by my standard, should surely know how to listen and communicate with the developers on these types of decisions in order to get the best possible outcome.

As I am writing this, I am sitting on my condo’s terrace in downtown Toronto, baking in the sun while I take in the noise of the city and cars and people down below at the street level. There is a helicopter that seems to be circling the downtown core for quite a while now. It’s freaking hot today and I could probably use some water. Be right back.


(1 minute later…)

Ok – water has been acquired….. I also grabbed a Kilkenny and poured it into my Kilkenny glass (Kilkenny requires the right glass to be enjoyed properly)

It’s been a while since I’ve posted, and career wise a lot of big and exciting changes have been made. I made the move to independent consulting, and I am enjoying it big time! Now my efforts are shifting from helping one organization develop bleeding edge scalable systems to helping many organizations with development, architecture, minimizing technical debt, and team building.

Along the lines of the type of work I’m focusing on, I want to write a bit about bad architecture, over architecture, and technical debt.

Ok, so I’m going to talk about the benefits of a “value added approach” to software. I’ve seen a lot of systems in my day – ranging from poorly architected apps that deliver high amounts of business value to over architected systems that, instead of delivering their expected business value, become a total, utter failure for a multitude of reasons… and everything in between.

To the business, a successful system is typically one that delivers on its promise of value to the business. The negative impact of technical debt is not always seen by the business teams, and is sometimes seen, albeit indirectly, as “necessary”. So, in these scenarios, why is it necessary? Is it job security for the dev team? Lack of training? Lack of standards? There could be a plethora of reasons, but in the end the technical debt introduced by these systems is high and could cost the organization millions of dollars.

High business value systems and mission critical systems can suffer from an endless amount of technical debt due to lack of design standards and architecture. This technical debt is not always apparent to the senior business teams. The business loves the system, but they sure don’t understand why it takes such a long time to add new features or track down bugs. The business leaders at the top see the system as great too, and the fact that it’s overly fragile and requires daily maintenance and an overly large team just to support it seems somewhat necessary – plus it’s “just the way it is”, right?

The real deal here is that technical debt, and overly large dev support teams are just not necessary at all. The right people, training, technical skills, system architecture, and business leaders have the potential to create the right systems and find that fine line between a proper development architecture and business value.

Yet, there is another extreme….. Over architecting and big egos….

People have egos… fact of life. When architecting a system, I’ve seen many software architects with an ego. Their system is great, using all the right design patterns.. Look at how cool my undo system is with the command pattern implementation I came up with. Watch how I can add one entry to the configuration file and all of a sudden the entire behavior and business logic of the app can be changed… Our customers can now create their own custom modules using my handy dependency injection techniques and augment the app with their own fancy things they want to do….. Pretty sweet, eh?

I agree, ok it’s pretty sweet (and fun to work on), and I’ve seen and created my fair share of ‘coolness’ in my systems. There are always business cases for these types of things, and having the right technical team and abilities to implement them properly is key. The problem is over architecting these things when they aren’t needed, or for the dreaded reason ‘just in case, in the future’. The fact is, this can sometimes delay shipping the product and complicate the development. There is typically a big disconnect between the development team and the business value in these scenarios. You end up having a technical team who is more focused on architecting than on providing business value. Focus on what’s needed, and if there is a real case for having it… build away and have fun!

Both of these scenarios above can lead to long term technical debt. The trick in software is building a team who can leave their egos out of it, share knowledge of the system and the business value proposition, and focus on what’s important for the long term success of the project while minimizing technical debt. Doing this has the potential to create phenomenal systems that are well architected, bugs become easy to track down, the system can grow organically the way it needs to, the size of the development team can remain minimal, and the business is happy knowing that maintenance costs are reduced and new features can be added to the system quickly. There is no more fragility – the system becomes clear, and small changes are less likely to have unexpected bad consequences.

There is a fine line between both of these scenarios and it requires practice and discipline to build the right team who can achieve and continually create well architected solutions that maximize business value and eliminate technical debt. It can be done and can be done very successfully. To truly master this as a software/business team, it needs to be instilled as part of the culture of the team. The right leaders and people are very important in truly creating world class software solutions.

A well architected component is easy to develop given the right technical knowledge, practice, skill, and motivation to do the right thing. I’m taking a practical example of a component I developed recently, described in the blog post titled Getting to the Monetary Value of Software Architecture. Relatively speaking this component wasn’t complex to develop, but it did require some trial and error and some clever thinking to perfect it. Development of this component took just over 4 hours to complete – and contains 80+ lines of code (that’s code, not including comments or blank lines), and once it was complete we could apply and reuse this functionality in various places with extreme simplicity. Actually, with only one line of code!! That’s the power of simplicity and reusability. See the actual code for the component here: Class to Add Instant ‘As You Type’ Filter Functionality to Infragistics UltraCombo Control

In a recent blog post, I talked about the monetization of well architected solutions. Here I am going to put some hard values against it. I used the following variables to come up with the data to put into the chart.

Initial development time required: 240 mins

Time required to duplicate this functionality once (non architected solution): 15 mins (assuming the user is somewhat familiar with the code and is copying and pasting) – I would comfortably say it could take 30 mins for someone who isn’t quite sure how the code worked to begin with, and therefore doesn’t know exactly which code needs to be copied – this would double the blue time line shown on the chart below.

Time required to duplicate this functionality once (well architected solution): 1 min (it’s literally one line of code!)

These numbers were taken from the time it took to actually do the implementation several times. The 15 min value required to duplicate the functionality for the non architected solution was derived from practicing a copy-and-paste scenario to integrate the code required to add the functionality.

This does not account for the extra effort required in the future to maintain the code or make updates and enhancements. With the well architected component you have to make an update somewhere in 80 lines of code to fix a bug or add an enhancement. In the non architected version, you would be making a change within (80 x N) lines of code where N is the number of implementations. Let’s say 10 – so we’ve got 800 lines of this code that essentially do the same thing over again that need to be looked at to make a code change! Plus all of the other application code intertwined within it that you have to ignore.

This is just one example! There are great architectures all over the place and we all (mostly) know how valuable they are. Other than just saying “Yeah, they save tons of time..”, I’m here putting some hard values against it.

This is one scenario, but there are many more scenarios that save even more time – I’d even say, exponentially save more time!

Software Architecture is a huge topic and it’s something I am passionate about. I believe, and can prove, that continuous improvement in this area will contribute to overall better system design, faster bug tracking and fixing, reduced maintenance time, faster development time, developer happiness, quicker time to market, and allowing you to allocate more time to keep up to date on the newest technology – to name just a few things….

Now, translate this into the Bottom Line of the Business….

Faster ROI for software projects

Reduced Downtime

Faster time to market for new features

Reduced labour costs on software projects where the additional labour can be put towards additional stages of the project or other areas

Bottom Line -> Reduced Costs, Greater Profits

Let me share a case in point that complements this theory nicely, along with details indicating the higher monetary value of the well architected solution …..

I’m going to discuss a feature we added to an existing project and how it was implemented using a sound architectural strategy. This is just a small piece, an enhancement really, to an existing system that has been well architected.

We developed a solution to allow users to search and filter a combo box while typing into the text area of the combo. This allowed users to find what they were looking for faster. The before and after scenarios are contrasted below…

Before:

User had to know the complete item name in the list of potentially thousands of items and carefully scroll through the list to find the item.

After:

Users only need to know part of the item name that they are looking for and just have to type it into the ComboBox to filter. This is an incredibly quick and easy feature for the user and eliminates time required for them to scroll through the list. As an added feature, a little filter icon is shown in the combo box to denote that the combo box list is only showing items that match the filter criteria.

(Note: Some data has been blurred to keep certain customer information confidential)

As you can see above, the user just types in ‘007’ which is the suffix to the part they were looking for in the list. This enabled them to quickly find what they were looking for; in this case, only one part had that specific suffix. In this case, there are over 240 items in the drop down list that the user would have to navigate through to find what they were looking for if this filter functionality was not in place.

Implementation Scenarios

I could have implemented this in many ways, but I decided on an architecture that would allow me to reuse this functionality many times over with little effort. I created a class that encapsulated all of the functionality required for the filter functionality to work. This approach requires no additional filter specific code in the main application. With this new class (about 64 lines of new code), I can create a new instance of it in any project and have instant and seamless filter/search functionality for any Combo Box.

It’s now literally one line of code to add this functionality (essentially 64 lines worth of functionality) to any of our ComboBoxes. You could loosely (very loosely) say that your productivity is increased by 64 times when implementing this filter functionality in any new scenario using this approach. However, read further and I’ll cover a more accurate metric.
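The actual component is a .NET class for the Infragistics UltraCombo control, linked above. As a language-neutral illustration of the shape of that design (the `ComboFilter` name and API here are hypothetical, not the author's actual code), the key idea is that all of the matching logic is encapsulated in one class, so attaching it to another list really is one line:

```python
class ComboFilter:
    """Encapsulated 'as you type' filter: all matching logic lives here,
    so wiring it to another combo box is a single constructor call."""
    def __init__(self, items: list[str]):
        self._items = items

    def apply(self, typed: str) -> list[str]:
        # case-insensitive substring match, like typing into the combo
        needle = typed.lower()
        return [item for item in self._items if needle in item.lower()]

parts = ["PART-001", "PART-007", "WIDGET-007", "GADGET-123"]
flt = ComboFilter(parts)   # the "one line of code" per combo box
matches = flt.apply("007")
```

In the real component the class also subscribes to the control's events, but the reuse story is the same: the application code never contains filter logic, only the one-line attachment.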

I’ll contrast this implementation scenario with another very common scenario:

User: “This application is great, but man it sure is hard to find the information I need sometimes – especially when I only know the suffix to the part I am looking for”

Developer: “I have a great idea, let me add filter functionality for you to search your list and find what you need. “

User: “That sounds great! That’ll save me a ton of time!”

The developer opens the project and finds the Combo Box he is going to add the filter functionality to and starts coding away by handling the events of the Combo Box, adding code to various places, debugging, and a few hours later he has something that works pretty well. He/she does a bit more testing, fixes some bugs, makes a few changes, and here we have something that seems to be perfected.

The change is rolled into production and the users LOVE it! They can think of how useful it would be to have this functionality on a few more Combo Boxes.

(this is where it all goes wrong)

The developer goes to add this functionality to a few more Combo Boxes. For each combo box the developer is doing the following:

1) Find all the code in another area of the application that has the filter combo box functionality and copy it …. (“Where is all the code I need? Grrr – which pieces of it do I need again?”)

2) Paste it into the code module in another area of the application where the new functionality is needed

3) Look through all of the code and replace control names, key strings, and other variables with the ones we want to use for this instance

4) Test everything to make sure we aren’t missing anything

5) Ooops, something is not working right – maybe I forgot to copy something or change a value somewhere?

6) Ahh found it, I didn’t handle one of the events of the Combo Box properly and this was causing all kinds of problems

7) Copy and paste this piece of it

8) Code has been added in various places to support this new functionality, I need to do some serious integration testing to make sure I didn’t screw something else up

9) Ok, finally, everything is a go – let me post this.

Now the user wants the functionality in a few more places. The developer finds this tedious, but continues to do it this way for each combo box. This is tedious and time consuming, and introduces more opportunity for bugs, due to code being integrated into multiple places and tightly coupled within the system; due to time constraints, it becomes tougher to introduce this functionality in many additional areas.

With the well architected solution we get the following direct benefits that we would not see in the above scenario:

Write once, reuse many times – easily!

Effort does not need to be repeated for each place we want to use this functionality

Changes to functionality are made in the Class and bug fixes and enhancements do not need to be manually replicated for as many times as the functionality has been implemented

Code is refined, tested, and encapsulated from the rest of the project and from the ComboBox itself

Test Driven Development is supported as the idea is to de-couple the functionality from the system so that it can be reused – this decoupling makes it easy to write automated tests if necessary

Testing of the main components limits any re-testing required because the code is always the exact same code and not just copied and re-integrated from place to place

How can we put a monetary value on this?

Now, add up the time it took to create, test, and debug the initial filter component and get it working once. This is your Y value.

Now, add up the time it takes to copy the functionality to one more area as per the steps listed above and multiply that by the number of times you need to duplicate the functionality. This is your X value.

Now, come up with a number as to how many places could benefit from this new introduced functionality. This is your N value.

Now, add up the time it takes to add one line of code to a project to enable this filter functionality on an additional Combo Box. This is your Z value.

Cost of a well architected approach to implement: Time = Y + (N x Z)

Non architected solution: Time = Y + (N x X)
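Plugging in the numbers from earlier in the post (Y = 240 mins, X = 15 mins, Z = 1 min, and N = 10 implementations, as used in the maintenance example) makes the gap concrete:

```python
Y = 240  # initial development time (mins) to build the component once
X = 15   # mins to copy/paste/adapt the functionality once (non architected)
Z = 1    # mins to add the one line of code (well architected)
N = 10   # number of places that need the functionality

well_architected = Y + N * Z   # 240 + 10  = 250 mins
non_architected = Y + N * X    # 240 + 150 = 390 mins
savings = non_architected - well_architected  # 140 mins saved up front
```

And that 140-minute gap only widens with every additional implementation, bug fix, or enhancement.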

This just shows the initial up front cost savings. Take the other benefits into account, as listed above, and you can see a much higher monetary value.

To Add Insult To Injury…..

Now, wouldn’t it be great if we could have the filter search all of the columns of the combo box instead of just the one? We could just add a new option to the class to allow that, or enable it by default for all ComboBoxes using the filter. Let’s hope they are all using this architecture – I don’t know a developer who would enjoy going back and modifying (or adding) the code for every combo box so that it can support this extra functionality. Unfortunately, in that scenario you’ll have added unnecessary labour cost to the project to get this additional functionality, or the functionality just wouldn’t be added and the feature set of the system would suffer.

In a follow-up blog post, I’ll discuss the code used and the approach taken for this solution in particular. I will, however, share in this blog post the one line of code required to duplicate this functionality (64 lines of dispersed code, reduced to 1):

Reusability is the art of planning and developing application components so that they can be easily reused in other areas, be easily built on top of, and provide a decoupled approach to development and testing.

When developing software and writing your code, a great amount of care has to be taken into consideration on the subject of reusability. The first question you should be thinking about is:

Which components do we have available to use that have already been developed?

So, we’re thinking about which components or code have already been developed that we can reuse (either within the same application or a new application). Components, in this context, could mean any of the following scenarios:

1. Code that we have available that wasn’t necessarily designed to be reusable:

This is code that we may have developed within another application without really thinking about its reusability; even though it wasn’t designed to be reusable, we can still harness its value. Depending on the situation, we could abstract the code from its original location and make it reusable – this is worth doing if you can really see this piece of code or component being reused again and again. It requires modifying the original code or component, and the application using it, in order to get the abstraction. The original application will now use this component, and your new application will reuse that exact same component. Improvements to the component can now benefit both applications.
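As a tiny, language-neutral sketch of that abstraction step (Python here; the function names and the currency-formatting scenario are hypothetical), the refactoring might look like this:

```python
# Before: currency formatting hard-wired into one application's
# invoice report (a hypothetical example of non-reusable code).
def print_invoice_line_old(price_cents):
    print("$" + format(price_cents / 100, ".2f"))

# After: the logic is abstracted into a reusable component. The
# original application delegates to it, and a new application can
# reuse the exact same function -- improvements now benefit both.
def format_currency(price_cents, symbol="$"):
    """Reusable helper: turn integer cents into a display string."""
    return symbol + format(price_cents / 100, ".2f")

def print_invoice_line(price_cents):
    print(format_currency(price_cents))  # original app, refactored

print(format_currency(1999))         # used by the original application
print(format_currency(500, "CAD "))  # reused by a new application
```

The key point is that both callers now depend on one shared component, so a bug fix or enhancement lands in both applications at once.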

Another option is the good ol’ copy-and-paste method. Bad, bad, bad! Well, sometimes it’s bad – not always. Go into the other application, select what you want to copy, and paste it into the new application – modifying as needed. Presto! We’ve all done this, and it can be justified if the effort required to copy/paste/modify as many times as you project you will ever need this code is much less than the time it would take to decouple it. Sometimes you may just do it out of laziness – hopefully it doesn’t bite you in the ass the next three or four times you want to reuse the same code, wishing you’d decoupled it from the get-go.

Sometimes you may have code or components that you want to reuse but have difficulty decoupling from their original source. Reusability wasn’t taken into account when the code was originally written, and it’s too tightly coupled to the original application. The reusability factor here is lost, and typically you have to duplicate the effort and rewrite from scratch for the new application. Hopefully the second time it gets written, it’s designed to be reusable.

2. 3rd party components we can plug into our application:

There are tons of time-saving components from 3rd party vendors out there that we can plug into our applications. These components typically provide functionality that is not native out-of-the-box functionality for your application. Some examples of available third party components are data grids, ORM tools, charting, reporting, etc. These can provide an enhancement to your applications that will save you development time in exchange for the licensing fee of the component. Purchasing new 3rd party components can be time consuming, as you want to do an extensive search and evaluation of competing components from many vendors before making a purchase decision.

3. Using free source code or components found online:

There are many great source code examples and free components available online that you can plug into your application. These can be a real timesaver, but they typically should be tested before production more thoroughly than other components, as they usually come with no warranty and can introduce very unexpected bugs if you are not careful.

Ok, so you’ve thought about the ideas above but still feel you must begin development with new code – you now need to think about future reusability of the code you are writing.

I’ll get into more detail about developing for reusability in Part II of this blog posting. Coming Soon!

At our last CIPS executive meeting, held on July 21, 2009, we decided that it would be a good idea to create the first Coffee and Code event, held jointly by the CIPS London (Ontario) chapter and the new London .NET User Group. We held the event last night (August 18, 2009 – on my birthday, not purposely). As part of this event I developed and presented a PowerPoint presentation titled The Basics of Software Architecture for .NET Developers. The presentation touches on the basics of software architecture, along with ideas, tools, and resources that go along with software architecture for the .NET developer.

I’d like to give a shout out to Tony Curcio, President of the London .NET User Group (http://www.forestcityasp.net/). He’s done a great job of putting together this new .NET User Group in London. Tim Hodges is the CIPS London chapter president, who has also done a great job organizing local CIPS events, past and present.

Content on slides 2, 3, and 4 was taken from the Software Architecture presentation I worked on along with team members Adam DeMille and Matt Higgins for the Top Gun training program in 2008.

Implicit requirements are those that engineers automatically include as a matter of professional duty. Most of these are requirements the engineer knows more about than their sponsor. For instance, the original Tacoma Narrows Bridge showed that winds are a problem for suspension bridges. The average politician is not expected to know about this, however. Civil engineers would never allow themselves to be in a position to say, after a new bridge has collapsed, “We knew wind would be a problem, but you didn’t ask us to deal with it in your requirements.”

This is a great analogy to implicit requirements within software architecture, and I believe that this idea separates the experienced senior software developers and software architects within the industry.

In determining the architecture of a software system, it is the “duty” of the software architect to identify potential problems or risks with a design and mitigate or eliminate those risks. The stakeholders of the project don’t necessarily understand these risks, nor their importance to the long-term success of the project.

Let me describe four risks in software architecture and development that a Software Architect needs to implicitly understand and realize about the system they are designing. When it comes to these potential risks, getting it right the first time should be a top priority in the architecture of the system.

Scaling

Recognizing the scalability requirements of an application is very important. It is important to understand the projected future usage, user growth, and data growth. A good rule of thumb is to multiply these projections by a factor of 2 or 3 and develop the system based on that projected growth. Your development environment should be continually tested against this high usage, to ensure that your development methods, strategy, tools, environment, and connected systems will scale effectively as well.

Also, regardless of the future requirements to scale, experience will point to the types of development techniques or tools that scale well without necessarily adding development time or effort. These are approaches that should always be used, and they are a testament to the skill and experience of the developer or the individual leading the developers, such as the Software Architect. An example is developing your database views or queries: experience shows that some ways of writing these queries give the best performance out of the box, while other designs may give the same results but are slower, inefficient, and don’t scale well.
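One concrete, hypothetical example of this kind of query design (sketched in Python with an in-memory SQLite database; the table names are invented for illustration): issuing one query per record gives the same results as a single set-based join, but the per-record version multiplies round trips and degrades as data grows.

```python
import sqlite3

# Toy schema and data, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
""")

# Doesn't scale: one query per order (the classic "N+1 query" shape).
def order_names_slow():
    names = []
    for (cust_id,) in conn.execute(
            "SELECT customer_id FROM orders ORDER BY id"):
        row = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (cust_id,)).fetchone()
        names.append(row[0])
    return names

# Scales: one set-based join returns the same result in a single query.
def order_names_fast():
    rows = conn.execute("""
        SELECT c.name
        FROM orders o JOIN customers c ON c.id = o.customer_id
        ORDER BY o.id
    """)
    return [name for (name,) in rows]

assert order_names_slow() == order_names_fast()
print(order_names_fast())
```

Both functions return identical results; only the second keeps returning them quickly as the tables grow.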

By overlooking the importance of scalability, there is the potential for complete system breakdown when the usage of the system exceeds its capacity. This will leave developers scrambling to spend additional time to fix the core scaling problems of the system, or force a potentially expensive purchase of beefier hardware (that otherwise should not be required) to support a badly scaling system.

Incompatibility

It is necessary to identify any points of incompatibility within the software system. You have to look at all of the interfaces and interactions of the software system, human and system, now and in the future. This ranges from the users using the system to the other software and hardware components that interact with it, directly or indirectly. It also includes future compatibility, because it’s important to look at future requirements and ensure that the system is developed to meet them. To do this effectively, the Software Architect needs a broad understanding of a wide range of technology, as well as the business processes around the software system, in order to make the right choices. In essence, based on experience and skill, the Software Architect will pick the correct technology to support the current and future compatibility of the application.

Failing to perform this step effectively could mean overlooking a critical system connection, requiring additional development, resources, and funding to correct. A system could leave some users in the dark, unable to access or use it because they are on older or unsupported systems. A good architecture would have accounted for this from the beginning to ensure all users (legacy and current) can use the system. Another example is a web application or intranet site that doesn’t work properly with a newer browser such as Internet Explorer 8. Additional time and money would then need to be spent to bring it up to a standard that works across multiple web browsers, and the problem could impede a company-wide initiative to upgrade all web browsers to Internet Explorer 8.

Future Maintenance and Enhancements

The future maintenance of a software system is incredibly important. This idea should be instilled in your brain from the beginning of the software project. Future maintenance and enhancements encompass everything that will make future updates, bug fixes, and new functionality easier. A solid framework for your application is important, along with development/coding consistency, standards, design patterns, reusability, modularity, and documentation. It is important to understand these concepts, in order to fully benefit and utilize them to their full extent. An experienced Senior Developer or Software Architect should have a full understanding of these concepts, why they are important, and how to implement them effectively.

Overlooking this key factor could leave you with a working application, but code updates, fixes, enhancements, and testing will become far more difficult, and the learning curve for new project members will be much steeper.

This step is what I sometimes call a “silent killer”: missing it, or lacking experience in this area, may not be apparent to the end users or stakeholders of the software system at first, but it will be a huge drain on the ability to use, leverage, and maintain the application.

Some serious disadvantages I’ve seen first-hand with this type of system: users report critical bugs that are difficult for the developers to track down and fix, and developers become “mentally drained” and discouraged from doing any kind of maintenance or enhancements to the system. Because of this, and because it takes many times longer to add new functionality to a poorly maintainable application, these types of systems evolve poorly and in many cases end up being completely replaced by another system. Think about the potential long-term financial and business consequences when this step is overlooked!

Usability

The software has to be usable. You need to determine which functions are most common for the user and ensure they are easy to find and are the most prevalent features within the application. Allowing users to customize the user interface goes a long way toward letting individual users get the most bang for their buck.

The user interface, the technology behind the user interface (is it web? Windows? Java? or a combination of these technologies?), user customization, colors, contrast, and user functionality are all important. I also believe that a user interface has to look somewhat attractive. The application itself should be usable and self-describing without requiring the user to read a manual or documentation. You’ll find that you have more enthusiastic users and fewer technical or service desk support calls when the application is easy to use and performs the functions that the user needs to perform. It should make the job of the user easier! Simple things in your application such as toolbars, context-sensitive menus, tabbed navigation, and even copy/paste functionality should not be overlooked. User interface standards also need to be followed, as you do not want the user to be confused because the basic operation of the application differs substantially from the applications they are used to.

Basically, if users or customers do not want to use the software because it is too difficult or cumbersome, you end up with users not actually using it and going back to the old way of doing things, or being forced by “the powers that be” to use it over their own objections about its usability. Neither of these situations is ideal, and both result in lost productivity or in potential future productivity gains never coming to fruition.

Conclusion

A failure to identify and mitigate or eliminate these issues could mean a failure or breakdown of the system. That costs large amounts of money and time in “after the fact” corrections, or in the worst case means completely wasted money on a failed implementation that ends up getting axed altogether. I’ve witnessed first-hand accounts of both of these scenarios, and they are not pleasant for anyone involved. To eliminate wasted time and money, we need to make sure that we do software right: by gaining the right experience and skill, and by paying attention to and understanding the implicit requirements expected of a Software Architect, we’ll have high-functioning software that serves its current and future requirements well and provides continual, exceptional value and return on investment.

In addition to the points above, and though not touched on in this posting, I haven’t forgotten about Buy-In, Security, Availability, Having Proper Business Processes In Place, The Role of The Business Analyst, Communication, Team Leadership, etc. These points are also very important for a solid software project foundation. I’ll definitely talk about these items in a future blog posting.

Thanks for reading! I welcome any comments (positive or constructive).

The speech was developed for a non-technical audience using layman’s terms and examples. I really tried hard in the speech to simplify the ideas of software architecture. The night I did the speech, there were three speakers speaking about different topics with an audience of about 20 people. At the end of the evening, I was voted as “Best Speaker” by the audience members. I felt good about that; Toastmasters really is a great organization to help you grow your public speaking.

Ok, here is the speech!

In my last speech I talked about being effective as a member of the Information Technology field. I briefly discussed steps involved in developing a solution – from concept to development. In this speech, I will take this one step further and go over another important step in the overall software development process. I will discuss important points to consider in the area of software architecture.

Software architecture is the fundamental design of a computer program. Consider the architecture of a car. A basic architecture of a standard car should have four wheels, an engine, fuel tank, etc. The architecture of a car also defines how these components will work together to produce a working vehicle. In software, it is the same idea. The basic architecture of a computer program dictates how the computer program will work, and how it will work together with other computer programs. Along with this basic architecture come proven design patterns: reliable, proven templates used for developing pieces of your applications. To put this in perspective, imagine someone attempting to develop a new car without knowledge of how current cars work and what their fundamental “design patterns” are. Consider the time that would be saved and the ease of future maintenance if they could build this new car upon an existing template. Would it not make sense to take the knowledge of an existing proven design and use it as a base model for your new software applications? Of course you may improve on top of the initial fundamental design while still keeping the fundamental concept of how the car works as per the basic car “design pattern”.

Looking at the option of being able to reuse existing components as building blocks or even sometimes main features to your software application is important. Why re-invent the wheel? Consider, when developing a new car, the cost savings that could be had in re-using standard components that have already been developed and proven on previous model years. Why start from scratch? Again, software design is very similar. Purchasing pre-existing components that have been developed by third party vendors or that are freely available could be highly beneficial. Take the following example: Your application requires a rich user experience that is highly functional with a look and feel very similar to Microsoft Word. Your development team could spend one month developing and testing this new feature themselves, but this time could have an estimated cost of $10,000. Meanwhile, there are dozens of vendors out there who are offering this as a component, or a “building block” that you can plug into your application to give you the functionality you need. All that your development team has to do is customize these proven components to suit the needs of the application. This could have an estimated cost of $1,000 for the license to use the components, and maybe another $1,000 worth of development time to customize the components as you need to. In this example, you could easily see an overall savings of $8,000.

Modularity is an important factor to consider as well when designing your software application. To be modular is to consist of “plug-in units” which can be added together, and on top of one another, to make the system larger or to improve its capabilities. As an example, think of a modular cabinet system where you can purchase additional cabinets and add them together to make one larger cabinet. This saves you money because you don’t have to throw away your existing cabinet when you need something bigger or better. In software the same ideas apply. You can design an application to be modular so that future enhancements can be developed faster, without having to re-design the application from scratch or change it to add additional functionality. As an example: part of your application contains information about customers, and new changes will now require the application to contain information about your customers’ suppliers. Being able to develop an independent “unit” that can be plugged into the application to hold this supplier information limits the amount of change that needs to be made to the existing program. This will save time in the future.
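The customer/supplier example above can be sketched in a few lines (Python; the class and function names are hypothetical): each “plug-in unit” exposes the same small contract, so the supplier unit can be added later without changing the existing code.

```python
# A hypothetical "plug-in unit" contract: each unit exposes the same
# describe() method, so new units plug in without touching old ones.
class CustomerInfoModule:
    def describe(self, customer):
        return "Customer: " + customer

# New requirement -- supplier information -- arrives later. We add an
# independent unit instead of re-designing the existing application.
class SupplierInfoModule:
    def describe(self, customer):
        return "Suppliers for " + customer + ": (looked up separately)"

def render(customer, modules):
    # The host application simply iterates over whatever units are
    # currently plugged in; it needs no changes when a unit is added.
    return [m.describe(customer) for m in modules]

print(render("Acme", [CustomerInfoModule()]))
print(render("Acme", [CustomerInfoModule(), SupplierInfoModule()]))
```

The host’s `render` function is the “cabinet frame”: adding the supplier cabinet is just one more element in the list.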

Designing a good workable software architecture takes time and practice, but if done properly will save you even more time in the future as you begin to work on implementing this design. Using effective and proven design patterns, pre-made components from 3rd party vendors, and keeping modularity in mind can help your development team come up with a stable and effective software architecture.

This blog post will go over the application architecture used to create mobile bar code scanning applications that take advantage of a reusable framework, design patterns, and a layered approach. We have several applications that run on mobile scanner devices running Windows Mobile OS (example: the Intermec CK31 device). This layered approach works for applications developed natively using the .NET Compact Framework as well as applications developed using the full framework running over a terminal server. I got involved in developing a new mobile scanning application recently, so I went over the source code of our previous scanning applications and found a big problem: the old mobile scanner applications were not layered or structured in a way that made architectural sense, were not easily understandable, and the code was tedious to look at, let alone update. I also looked for ways I could refactor the code to make it generic and reusable. Previously, all of our scanning applications had code to handle basic tasks like accepting input into a text box, validating the contents, moving to the next field to scan, checking for the proper qualifier (the beginning character of the scan), etc., but in addition to the problems listed above, the code was not reusable and looked like a garbled mess.

I created a project titled ScannerInput to encapsulate the functionality and validation that was typical in our bar code scanning applications. The project encapsulates all of the logic required for scanning functionality for our mobile scanning applications and its object model supports extensibility in functionality so that features such as custom validation can be implemented in a structured and clean way. In addition, the application framework internally references the user interface components to provide tighter control and logic flow within the UI elements.

What it does:

- Validates the qualifier used for the scanned input (multiple qualifiers can be specified)
- Moves to the next field if the scanned data is valid (yes, the framework references the UI element and takes care of this for the client application developer)
- Prevents the user from changing fields if the current field does not pass validation
- Clears the field and raises a validation event on the client if the scanned data is invalid
- Raises an event back to the client when the last field has been successfully scanned (the client decides if it should restart the process, or do whatever else it wants to do)
- Raises an event to the client if the user cancels the scanning process (the client decides what it wants to do in this event)
- The client controls the functionality of the ScannerInput objects by setting properties and handling events

Here is a class diagram representing the two main classes in the ScannerInput project:

ScannerInput Class

This class represents a textbox control on the form, along with the qualifier or qualifiers required for the input and how the textbox value should be validated. This is done using the strategy pattern (see below). Here is an example of client code that would instantiate a ScannerInput class and specify the validator strategy in the New constructor:

Internally, when the TextBoxControl property is assigned, the ScannerInput class adds its own event handlers to the control. For example, it handles the KeyPress event to know when to validate the contents of the field. The ScannerInput code handles all of the logic and user interface control flow such as navigation between text boxes as input is scanned, preventing changing focus of text boxes if validation hasn’t passed, validating qualifier, and validating scanned data.

In four lines of code we now have a text box that is jacked into the scanning routine of the framework. It will easily accept and validate scanned input. If the scanned data is valid, it will move on to the next text box for more input, or can be directed to open another form or perform any other programming task, as events are raised back to the client for it to handle however it chooses.

ScannerInputControls Class

This class inherits from the generic List(Of ScannerInput) class. It allows the client to add ScannerInput objects to the list (see below) and exposes a set of events that are raised back to the client. To add a ScannerInput control to the list you would simply use code as follows:

m_ScannerInputControls.Add(si)

Strategy Pattern Used For Validation Extensibility and Code Separation of Validation Logic

The strategy pattern is used for the validation piece of the architecture. As data is scanned into a field, the contents need to be validated. Here is the Interface used for the strategy and a couple of examples of the strategy implementation.

The interface:

Public Interface IScannerInputValidater
    ''' <summary>
    ''' Validates the content of the ScannerInput control textbox.
    ''' Part of the Strategy design pattern.
    ''' </summary>
    Function Validate(ByVal ScannerInputObject As ScannerInput, ByVal Value As String) As Boolean
End Interface

The nice thing here is that there are generic validators for simple things like numeric values or field-length ranges. A validator could easily be written and reused to check a database table to see, for example, whether a scanned item actually exists in a backend database. This could be done without modifying any of the ScannerInput project files, simply by passing an instance of the validator to the New constructor of the ScannerInput instance. Looking back at our old way of doing things, we had scanner input control logic intertwined with validation code, along with constants used to specify the qualifiers for each field, etc., and all of this was in a single code file. It was unsightly.
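The real validators here are VB.NET classes implementing IScannerInputValidater; as a hedged, language-neutral sketch of the same strategy idea (Python, with hypothetical names), a generic numeric validator and a length-range validator might look like this:

```python
# Two generic validator strategies, mirroring the kinds described above.
class NumericValidator:
    """Strategy: passes only if the scanned value is all digits."""
    def validate(self, value):
        return value.isdigit()

class LengthRangeValidator:
    """Strategy: passes only if the value's length falls in a range."""
    def __init__(self, minimum, maximum):
        self.minimum, self.maximum = minimum, maximum
    def validate(self, value):
        return self.minimum <= len(value) <= self.maximum

# A field object that simply defers to whichever strategy it was given,
# the way ScannerInput defers to its injected validator.
class Field:
    def __init__(self, validator):
        self.validator = validator
    def accept(self, value):
        return self.validator.validate(value)

print(Field(NumericValidator()).accept("12345"))        # True
print(Field(LengthRangeValidator(4, 10)).accept("ab"))  # False
```

The field never knows which rule it is enforcing; swapping validation behaviour is just a matter of passing a different strategy object in.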

Now, if you are wondering: what if you want multiple validators to validate a ScannerInput object?

The New constructor of the ScannerInput object has only one validator parameter, but you can easily bundle calls to multiple validators by creating a new class of type IScannerInputValidater that internally uses multiple validators, or that can accept validators added to a collection at runtime, and sequentially calls the Validate method of each one. An instance of that object is then passed into the New constructor of the ScannerInput object.
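Such a composite could be sketched as follows (Python again, hypothetical names; the original would be a VB.NET class implementing IScannerInputValidater): it satisfies the same single-validator contract while running a whole list of validators sequentially.

```python
# A composite validator: exposes the same validate() contract as any
# single validator, but requires every validator it holds to pass.
class CompositeValidator:
    def __init__(self, validators=None):
        self.validators = list(validators or [])
    def add(self, validator):
        self.validators.append(validator)  # can also add at runtime
    def validate(self, value):
        return all(v.validate(value) for v in self.validators)

# Two trivial strategies for the demonstration (hypothetical):
class NotEmpty:
    def validate(self, value):
        return len(value) > 0

class MaxLength:
    def __init__(self, n):
        self.n = n
    def validate(self, value):
        return len(value) <= self.n

combo = CompositeValidator([NotEmpty()])
combo.add(MaxLength(8))
print(combo.validate("A123"))          # True: passes both
print(combo.validate(""))              # False: fails NotEmpty
print(combo.validate("WAY-TOO-LONG"))  # False: fails MaxLength
```

Because the composite implements the same interface, the ScannerInput constructor never needs to know whether it received one rule or many.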

Client Application

I just want to briefly describe the application used to develop and test the Mobile Scanner Application Framework. The client application was designed using an n-Layer approach (UI, Data, Business, and Facade layers), ORM mapping, and the Microsoft Enterprise Library Logging and Instrumentation Block. Future mobile scanning applications can use the Mobile Scanner Application Framework by referencing the ScannerInput project from the user interface layer and simply creating ScannerInput objects and assigning them a few properties (as we saw in the code samples above). The ScannerInput project takes care of the sequential program flow and validation of data scanned into the text boxes. There is very little code in the UI client project in comparison to our previous scanning applications. Most of the code in this new approach consists of creating a couple of instances of ScannerInput objects, assigning their properties, and then handling the events raised back to the client by the ScannerInputControls list object.

Here’s a screen shot of a client using the Mobile Scanner Application Framework:

In this case the letter in brackets indicates the qualifier (the first character(s) required in the scanned input for it to be considered valid), which is handled entirely by the framework.

Each text box is tied to a ScannerInput object (see code example above) which handles validation and sending error notification back to the client to handle – in this case the client handles it by displaying a validation error message on the client form.

As scanned input is completed the framework takes care of moving between fields.

As per the object model shown at the top of this post, events are raised at certain points within the framework (Validation, CycleCompleted, CycleCanceled).

Conclusion

- The situation just required a little thought. (On deciding to develop the framework I asked myself: how could I make this easier, more structured, and more maintainable?)
- Client mobile scanner applications are easier to maintain than our non-structured mobile applications, without sacrificing anything in the user interface.
- The strategy pattern is used for validation; new validators can be added by creating a class that holds your custom validation logic and implements IScannerInputValidater, then simply passing it to the New constructor of the ScannerInput object. This separates validation logic from the rest of the application logic, making initial development, maintenance, and updates easier, more effective, and cleaner.
- Bugs are easier to track down and fix – it is very easy to identify where validation, user interface code, and mobile scanning code take place. Couple this with a well-designed user interface (layers, best practices, etc.) and you have an easy-to-maintain application.
- It took less time to create the ScannerInput project and write the client code from scratch than it would have taken to copy the code from an old scanning application and paste it into the new project, where the program flow relied on writing almost the same code for the KeyPress, Focus, and LostFocus events of every single input text box on the form.
- New mobile scanning applications can be created with ease; writing new mobile scanner client applications is much quicker and much less frustrating.
- Orchestrating and modifying the program flow of code scattered all over the place (from validation to program flow logic) would not have been fun with an unstructured approach.
- Adding a new text box for scanner input to the form requires 3 or 4 lines of code to hook it up to a ScannerInput object. The old way would have required copying and pasting event handlers, creating a new method in the code file for its own validation routine, etc.
- Client applications benefit immediately from enhancements to the program flow and logic contained within the framework as they are developed.

Note: I don’t have the ScannerInput project source code posted to this blog right now. I am hoping to find some more time very soon to revisit this and post the rest of the code including the internals of the ScannerInput object. If you are looking for more information in the meantime please contact me or leave me a comment.


Dan Douglas is based in Toronto, Ontario, Canada and does consulting work for both small organizations and large global organizations through his consulting company, Douglas Information Systems Corporation. He is an experienced and proven subject matter expert, decision maker, and leader in the area of Software Development and Architecture.

With over 16 years of experience, Dan has been the Architect Lead on over 15 development projects and has successfully delivered large-scale “best in class” end-to-end solutions. Dan has developed and architected solutions across a wide range of verticals, including government, medical, automotive, HR, manufacturing, technology, consulting, and software firms.
Dan writes a lot of code as a hands-on developer and is passionate about delivering the right solutions to customers through better code, better architected solutions, better business alignment, and better process.

"My articles are inspired by what's possible. My experience in my software consulting practice has given me the inspiration to write about what I've seen and what I've done, and to write about 'What's possible in software'." - Dan Douglas