My guest today is my good friend Mark Richards. Mark is an independent hands-on software architect, with 20 of his 30 years of industry experience spent playing some type of architecture role on a myriad of software projects. He is the author of several books, including Java Message Service, 2nd Edition, and is featured in the book 97 Things Every Software Architect Should Know. In addition, he is a frequent speaker on the conference circuit (which is how we met), and gives extremely popular software architecture training courses around the world.

After Mark provides us with some interesting aspects of his background (he started his career as an astronomer!), we start by discussing the horizontal and vertical aspects of the evolution of software architecture, and how each move along one axis can drive changes in the opposite axis, as depicted in Mark’s keynote slide:

Mark's keynote slide depicting the horizontal and vertical elements of the evolution of software architectures.

Some of these drivers are technical - often hardware taking some time to catch up with the needs of newer ideas and software - but other times these changes are driven by changes in the business. Mark relates a story that I tell in my book, Migrating to Cloud-Native Application Architectures, about the transitions that have taken place in the last twenty years in how we perform the simple task of checking our bank account balance. These transitions have moved banking systems from what Mark calls controlled access, such as visiting a human teller in a bank branch, to uncontrolled access, such as using a mobile banking application. These changes in access have caused a radical increase in the scale demands placed on these same systems. Most legacy systems were never built to handle this kind of demand, and this has driven us toward new types of architectures that scale much better.

The conversation then transitions into coverage of three specific factors that drive architectural decision making:

agility: having the characteristics of speed and coordination; the ability to react quickly and appropriately to change.

velocity: the speed of something in a given direction.

modularity: having independent parts that can be connected or combined in different ways.

Mark describes these factors as “ingredients that go into an evolutionary cauldron.” Many companies are trying to embrace agility, both from a technology and a business point of view. And this “agility in isolation” is what causes many businesses to fail, as they spin rapidly in circles and never go in the right direction. This is where the importance of velocity comes into play. Agility combined with velocity allows us to move quickly, respond to change, and move in the right direction. But these must also be combined with modularity, which itself has both a technical and a business aspect.

As we discuss business modularity, it becomes time for the requisite Conway’s Law reference. But we quickly transition back to technical modularity, and the concepts of loose coupling and high cohesion. We’ve talked about these things for many years, but that talk very often doesn’t lead to significant improvements in actual project architectures. Why? Mark sees the problem as very often stemming from a lack of drivers. Modularity comes with a definite price. It is very often driven by the desire for agility - it’s hard to achieve agility with a monolith. But the more modular our architecture becomes, the less reliable and available it becomes as we introduce distribution. Coming back to the idea of drivers, Mark brings up Martin Fowler’s blog post entitled “Sacrificial Architecture” - the concept of throwing away portions of our architecture that no longer support the required business functionality, something that we can only accomplish with a truly modular architecture. We see these same drivers inherent in the move to microservices, which comes with the same costs.

Volatility often becomes one of the key drivers moving us toward modularity and microservices. Many of the aspects of our system - such as admin or reporting functionality - simply aren’t that volatile. And so we can make the mistake of moving these things to microservices when there isn’t any payoff. But it always comes with a cost (what I like to call “the distributed systems tax”).

And so we turn the discussion to the million-dollar question: “What microservices should I have?” Where should I draw the boundaries? I relate the analogy of velocity in physics as a vector, where the magnitude of the velocity is indicated by the length of the vector, and its direction by the direction of the arrow. I compare this to the discussion of independent value streams that we find in the DevOps conversation. How many velocity vectors should you have? Well, how many different value streams do you have that move in different directions and at different speeds? These different velocity vectors can then be aligned with different deployable artifacts with independent lifecycles (i.e. microservices), thus preventing the tangling of these vectors together. Mark sees this as the perfect blending of agility, velocity, and modularity.

Mark transitions the conversation back to the three ingredients and how they provide us with a high degree of deployability, testability, and scalability. These lead us toward a definition of competitive advantage. But it’s about more than being able to push out product quickly. It’s also about having a feedback loop to which we can react quickly and change appropriately.

We start to wind down the conversation by discussing “What’s Next?” As Mark described three ingredients in the evolutionary cauldron, he also described three characteristics of the next evolutionary stage in software architectures:

A tighter integration of data and functionality

Self-healing systems

Architectures that constantly evolve

Streaming architectures are one step in the direction of a tighter integration of data and functionality, but Mark sees a greater paradigm shift in no longer treating data as a separate entity in the architecture, but as part of the greater whole.

Reactive architectures are currently one of Mark’s key passions. Mark describes the application of patterns that allow systems to grow without any human intervention: systems that can handle spikes in load or transaction volume, and systems that can self-heal, almost like biological systems. These include the Thread Delegate pattern and the Workflow Event pattern.

The final item is really a call to action: to discontinue the fool’s errand of gazing into crystal balls, trying to figure out what our architectures will look like. It’s impossible to do that kind of predictive analysis anymore. Instead, we can leverage the first two characteristics to create architectures that can truly evolve over time.

Mark closes with an exhortation to aspiring software architects to focus on improving their people skills. He sees this as not only the most important skill set for the software architect, but also the most difficult one to learn.

My guest for this episode is Tudor Girba. Tudor builds tools and techniques for improving the productivity and happiness of software teams. He currently acts as a software environmentalist at feenk gmbh, a coaching and consulting company that he co-founded.

In 2014 he received the Dahl-Nygaard Junior Award for his work on modeling and visualization of evolution and interplay of large numbers of objects.

He leads the work on the Moose platform for software and data analysis, he initiated the work on the Glamorous Toolkit project for reinventing the software development environments, and he is a board member of the Pharo programming language and environment. He also authored the humane assessment method for making software engineering decisions, and the demo-driven approach to embedding design thinking in software development.

We start out by discussing why simply reading code to solve problems is actually utilizing what Tudor calls an “inhumane assessment” method, similar to how we might say the same about a person plowing a field with their bare hands. When we write code, we’re very often trying to help individuals make decisions by hiding the raw data from them, instead presenting them with a usefully summarized view of those same data. But what do we do when trying to make decisions about our code? We go straight to the data. We read code.

We then get into a conversation about the types of tools that we can build to improve our decision making as developers, tools that Tudor says we should be able to construct in minutes instead of hours or days. Most of the problems we have are search problems. What do we also search quite frequently? A database. How do we do that? We write a query - a query that defines with rich semantics how we want the result to appear. So why don’t we have a query language for code? Why doesn’t the IDE have a query box? The same can be said for architectural problems. And queries are the types of tools we should be able to construct in minutes. We then combine these result sets with powerful visualization tools that allow us to see how our results are clustered together.
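To make the “query box for code” idea concrete, here is a minimal sketch in Python of querying source code the way we might query a database. It uses the standard library’s `ast` module; the sample source and the “more than four parameters” threshold are illustrative assumptions, not features of any particular IDE or of Moose.

```python
# A minimal sketch of "querying code like a database": parse source code
# into a syntax tree, then express the question as a query over its nodes.
# The sample code and the parameter threshold are illustrative assumptions.
import ast

source = """
def ok(a, b):
    pass

def too_wide(a, b, c, d, e):
    pass
"""

tree = ast.parse(source)

# Query: which functions take more than four parameters?
wide_functions = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and len(node.args.args) > 4
]
print(wide_functions)
```

A query like this takes minutes to write and is cheap enough to throw away once the question is answered - which is exactly the economics Tudor describes.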

The power comes in how cheap these queries are to write, because you’re no longer building with a view toward reuse. Tudor states that “most developers throw away many queries every day,” but that’s only because they are so cheap to write. If your decision making tools for code and architecture had the same economics (relatively cheap compared to their power), then the entire game is changed. And this is the type of environment that Moose strives to create.

After we spend some time talking about how the Moose environment works, we transition into a discussion of detangling monoliths, looking for natural seams in the code. While we’d like to simply determine what seams the business model should have and decompose from there, very often the code is structured in such a way that the business seams and the natural seams don’t overlap. Sometimes this can stem from a team not having a strong understanding of how they want to model the business. This problem worsens when the business itself doesn’t understand how to model the business.

The conversation then pivots into a discussion of the common motivations behind moving to microservices, and many people say they are seeking modularity. Tudor’s assertion is that they’re really looking for constraints to help them maintain modularity, and that the distributed system model is a high price to pay for those constraints. But if we can write a query of our system’s architecture, we can turn that same query into a test, and that test can become the architectural constraint. And this is exactly the type of tool that Moose provides, in a DSL for architectural constraints.
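To illustrate the idea of turning an architectural query into a test, here is a hedged Python sketch. The layer names (“web” and “database”) and the sample source are hypothetical, and this is just the general shape of the technique, not Moose’s actual DSL for architectural constraints.

```python
# Sketch: express an architectural constraint ("the web layer must not
# import the database layer directly") as a query over import statements,
# then assert it as a test. Layer names here are illustrative assumptions.
import ast

def imported_modules(source: str) -> set[str]:
    """Collect the top-level module names imported by a source file."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

# Pretend this is the contents of a file in our hypothetical web layer.
web_layer_source = "import json\nfrom services import orders\n"

def test_web_layer_does_not_touch_database():
    assert "database" not in imported_modules(web_layer_source)

test_web_layer_does_not_touch_database()
print("architecture constraint holds")
```

Once a query like this lives in the test suite, the modularity constraint is maintained continuously, without paying the distributed-systems price of microservice boundaries.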

We close with a conversation about agile architecture. As it turns out, there’s not one architect and several developers, but several architects. And architectural decision making is a commons-based approach. So how do we steer the architecture? This is where humane assessment comes into play, helping us to perform the assessments needed to make appropriate tradeoffs in decision making as a group of architects.

Audio Notes: roughly 20 minutes into the conversation, there are a few minutes of background artifacts that we couldn’t isolate from the recording. The same happens around 41:58 with a loud motorcycle just outside where we recorded the episode. We apologize for the poor listening experience.

My guest for this show is Tim Berglund, Vice President of Developer Education at DataStax. We start by discussing the unique challenges of distributed systems for architects, eventually landing on the most important piece of distributed systems advice a software architect can receive. We then dive into types of distributed systems and how to build up one's skill set in this important area.