I just ran across Alan Cooper's keynote at Agile 2008. The gist is that he's making the case for integrating interaction design into Agile development, something that is near and dear to me, as well. I was pleasantly surprised by his talk, and I recommend it to all my dev friends.

You can quickly scan through the slides and his notes to get the whole story. I'm not sure if I could have said it better myself!

I'm becoming more and more averse to the terms architecture and architect as applied to creating software, partly because architecture is such an overloaded term that it seems to invite continual opining about its meaning but, more importantly, because I don't think it resembles what we do, at least not to the extent that seems to be commonly thought.

We are not building houses (or bridges, or skyscrapers, or cities). Houses and other physical constructions rely on essentially immutable laws of nature--of physics, chemistry, and so on. These sciences are sciences in the established sense: you can perform experiments repeatedly and get the same results, and others who perform those experiments will get the same results. Physical building, architecture, and engineering are fundamentally scientific endeavors because they essentially serve scientific laws.1

Software Serves Human Social Needs

Software, on the other hand, is fundamentally a human and social endeavor. Above the basic electrical and magnetic level, i.e., hardware, it consists purely of human constructs built on layers of human-created abstractions, all created to serve human social needs--ultimately, for business or pleasure. As such, we (as a human industry) are pretty much free to create the abstractions as we see fit.

Beyond the basic hardware translation layer, we are not bound by elemental laws, only by our imagination. The problem, it seems to me, is that early software development was very closely tied to the electrical engineering disciplines that gave birth to computing machinery, so the early abstractions were engineering-oriented and assumed an unnecessary scientific and engineering bent. Subsequent developments, for the most part, have built on this engineering basis, and our educational system has perpetuated it. Even though relatively few software creators these days need to understand the inner workings of the hardware (or even one layer of abstraction up), such low-level engineering is at the core of many computer science curricula.

As the power of computing machinery has grown, we've expanded the uses of software to take advantage of the new power, but we have remained an essentially engineering-based culture and have accrued other engineering-related words such as architecture and architect. We have engineers and developers, systems analysts, and architects. We have projects and project managers, and many try to manage software projects as if they were building projects. We have builds, and we say we're building or developing or engineering software.

We have, built into our very language, an implicit association with physical building, and we have the association repeatedly reinforced by those who want to draw direct analogies between our trades. Certainly, there are similarities, but I tend to think many of those similarities have been manufactured--they're not inherent to the nature of software. We've painted ourselves into a corner by such analogies and borrowing of techniques and language.

Perceived Crisis of Complexity and Terminology

Now we're having this crisis, as some seem to paint it, where we need to elaborate further and push the idea that our systems are like cities and that we need varying levels and kinds of architects to help plan, build, maintain, and expand these software cities. We have folks struggling to define what an architect is, what architecture is, and creating various stratifications within it to expand on this analogy. We purportedly need enterprise architects, solutions architects, infrastructure architects, data architects, and more.

There is, I think, a well-intentioned effort to fix it because we do see this corner we've painted ourselves into, but we're reaching for the same paint brush and bucket to solve it--reaching for those same ill-fashioned analogies, techniques, mindsets, and culture. We see all this accrued complexity, and our solution is to make things even more complex, both terminologically and systematically, because we're engineers and scientists, and scientific problems are solved with scientific methods and precision, no?

It seems the underlying problem is that we're approaching the problem all wrong. The problems we're solving are fundamentally human problems, particularly social problems. And by social, I don't mean the social networking software that is now in vogue; I mean social in the basic sense of dealing with interactions between humans, be they economic, entertainment, education, social connection, or whatever. It follows, then, that the best solution will be fundamentally human in nature--not scientific, not engineering.

Realigning with Our Core Problem Domain

Maybe we should avoid likening ourselves to engineering and scientific disciplines, and especially, we should shun terminology that ties us to them and binds our thinking into those molds. As a man thinks, so is he, as the saying goes. Surely, we can and should learn what we can from other disciplines, but we should be more reluctant to insinuate them into our own as we have done with building.

I do think various solutions have been tried to better align software with its problem domain. Object-oriented design is, at a generic level, an attempt to urge this sort of alignment, as is its more developed kin, domain-driven design. Agile and its like work toward human-oriented processes for creating software. Natural language systems, workflow systems, small-scale (solution-level) rule engines, and even some higher-level languages have attempted this. And in fact, as a rule, I think they succeed better than those more closely tied to the computing and building conceptual models--except that even these more human-oriented abstractions are chained to the lower-level abstractions we've created.

What we need to do is continue to develop those human-oriented models of creating software. It seems that we may be at a breaking point, however, for our continued use of the building paradigm. Our repeated struggles with the terminology certainly seem to speak to that. Our terribly confused and complicated enterprise systems landscape seems to speak to that. And our control-driven, formal, gated processes have been shown most clearly to be broken and ill-suited to the task of software creation.

New Terminology

To make the next step, perhaps we should reexamine at a fundamental level how we think about software, both the artifacts and how we create them. I think we need to make a clean break with the engineering and building analogy. Start fresh. Unshackle our minds. Maybe we need to drill down the abstraction layers and figure out where we can most effectively make the transition from hardware control to our human, social domain. I imagine it would be lower than we have it now. Or maybe it is just a matter of creating a better language--an intentional language (or languages)--and a move away from our control-oriented languages.

At a higher level, we certainly need to rethink how we think about what we do. Some folks talk about the "architect" being the "bridge" (or translator) between the business and the technical folks. If that is a technical role, which I tend to doubt, it seems like a more appropriate title would be Technical Bridge or Technical Translator or Technical Business Facilitator or even just Software Facilitator. Call it what it is--don't draw unnecessarily from another dubiously-related profession.

But maybe thinking this role is best served by a technical person is not ideal. Maybe we technical folks are again trying to solve the problem with the wrong tools--us. Well-intentioned though many are, if we are technical in tendency, skills, talent, and experience, we are not as well equipped to understand the squishy, human needs that software serves or to identify how best to solve such squishy human problems.

Since software is essentially a human-oriented endeavor, perhaps we need a role more like those that have been emerging on the UX side of things, such as [user] experience designer or interaction designer. They are better equipped to really grok the essentially human needs being addressed by the software, and they can provide precise enough specification and translation to technical folks to create the experiences they're designing, even with the tools we have today.

Then again, some say that architects are the ones concerned with the high-level, "important" views of a solution and the interactions among its individual pieces--that they are the ones who model these high-level concerns and even provide concrete tools and frameworks to help effectively piece them together. I say that we could call this role solution coordinator, solution designer, or solution modeler. But then, according to folks like Eric Evans, these folks should be hands-on to be effective,2 which I also believe to be true. In that case, what they become, really, is a kind of manager or, simply, team leader--someone who's been there and done that and can help guide others in the best way to do it. At this point, the skills needed are essentially technical and usually just a matured version of those of the people actually crafting the solution.

Instead of software developers and architects, how about we just have technical craftsmen? The term is appropriate--we are shaping (crafting) technology for human use. It also scales well--you can add the usual qualifiers like "lead," "manager," "senior," or whatever fits your needs. There's no unnecessary distinction between activities--whether the craftsman is working on a higher-level design or a lower-level one, it is all essentially the activity of shaping technology for human use. Depending on the scale of the team or endeavor, one craftsman may handle all levels of the craft or only part, and in the latter case, the division can easily be made based on experience and leadership. And finally, it does not introduce cognitive dissonance through extremely overextended and inaccurate analogy (like developer and architect).

Even if you don't like the term craftsman--we could collaborate to choose another that doesn't chain us to wrong thinking--the point remains that we should recognize that we've introduced unnecessary and unhelpful distinction in our discipline by using the dev and architect terminology. We could begin to solve the conundrum by abandoning these titles.

Resisting the Urge to Rationalize and Control

Also, by looking at each solution as a craft--an individual solution tailored to address a particular human problem--it becomes clearer that we need not be so ready to try to rationalize all of these solutions into some greater system. As soon as we do that, we fall back into the engineering and computing mode of thinking that will begin to impose unnatural constraints on the solutions and inhibit their ability to precisely and accurately solve the particular human need.3

As I suggested before, we should rather treat these solutions more like a biological ecosystem--letting the selection and mutation mechanisms that such systems have so well embedded in their nature prevail in a purely pragmatic way. I believe it is a misplaced good intention to try to govern these systems in a rationalistic, control-driven way. We deceive ourselves into thinking that we are managing complexity and increasing efficiency when in reality we are adding complexity that then, recursively, also has to be managed under such a philosophy (creating an infinite complexity-management loop). We also reduce the efficiency and effectiveness (well-fittedness) of solutions by interfering with them through controls and by imposing artificial, external constraints on them to serve our governance schemes.4

Wrapping It All Up

Once we stop trying to align ourselves with a fundamentally different endeavor--physical building--we free ourselves to orient what we're doing toward the right domain--human social problems. In doing so, we can re-examine our abstraction layers to ensure they most effectively fit that domain at the lowest possible level, and then we can start building new layers as needed to further enable effective (well-fitted) solutions for that domain. By changing our language, we resolve the cognitive dissonance and illuminate where distinctions are truly needed, or not needed, and we may even recognize where skills that are not inherently technical would better serve our solutions (such as UX pros). And lastly, by treating the solutions as fundamentally human, we recognize that the most efficient, effective, time-tested5 and proven technique for managing them is more biological and less rational. We see that they can best manage themselves, adapting as needed to fit their environment in the most appropriate way possible.

If we're going to have a go at fixing the perceived problem of complexity in software and, by extension, further understand how to solve it through our profession, I suggest that a somewhat radical departure from our current mode of thinking is needed--that we need to break away from the physical building analogy--and it seems to me that something like what I propose above holds the most long-term promise for such a solution. What do you think?

Notes

1. I should note that I recognize the artistic and ultimately social aspects of physical constructions; however, they are still fundamentally physical in nature--bridges are physically needed to facilitate crossing of water or expanse, buildings are needed physically for shelter. The social aspects are adornments not inherent to the basic problems that these constructions solve. The same cannot be said of software; it exists solely to serve human social advancement in one form or another.

2. See Eric Evans's "Hands-On Modeler" in Domain-Driven Design: Tackling Complexity in the Heart of Software.

3. As an aside, I truly do wonder why we should have to try to convince businesses of the need for the "architect" role. If you ask me, the need, and our value/solution, should be obvious. If it takes a lot of talking and hand waving, maybe we should question if the solution we're proposing is actually the right one.

4. I have to nuance this. Obviously, if there are governmental regulations you have to follow, some such controls are required; however, if you think about it, this is still adapting the solution to best fit the human problem because the human problem likely involves some need of societal protection. Certainly not all systems need such controls, and even then only some within an organization need them. Keep the controls scoped to the solutions that require them due to the human social conditions. On the whole, I'd say that there are vastly more systems that don't need them, though the ones that do loom large in our minds.

5. By this I mean to say that, according to evolutionary theory, biological processes have developed over many millions of years and have proven themselves as an effective means for highly emergent, living systems to self-govern. Businesses and human social structures in general, especially these days, are highly emergent, dynamic, and living and need software that reflects that mode of being.

At a fascinating talk at the XP 2002 conference1, Enrico Zaninotto, an economist, analyzed the underlying thinking behind agile ideas in manufacturing and software development. One aspect I found particularly interesting was his comment that irreversibility was one of the prime drivers of complexity. He saw agile methods, in manufacturing and software development, as a shift that seeks to contain complexity by reducing irreversibility—as opposed to tackling other complexity drivers. I think that one of an architect’s most important tasks is to remove architecture by finding ways to eliminate irreversibility in software designs.

How interestingly this melds with my recent thoughts on managing complexity.2 You see, adding processes, management systems, and "governance" in general makes things more ossified, more difficult to change, i.e., less reversible. According to Zaninotto, this would mean that the more governance we put in place to, theoretically, manage the complexity of our software systems, the more complex they are bound to become, which I think logically means that we are increasing our complexity woes rather than helping them through such efforts.

I came across this in a recent thread on our (now-retired) architect MVP email list, where the age-old discussion of "what is an architect?" has come up again. I have to admit, when I first seriously confronted this question, I was drawn in and fascinated. I even wrote an article about it on ASPAlliance.3 Since writing that, I've been keeping an eye on the developments at IASA and elsewhere in this space, and developing my own thoughts.

I've delved even more into agile approaches, particularly Scrum and domain-driven design (DDD), and into this thing we call "user experience,"4 which at first glance seems counter to our architectural/engineering approaches to building software. I've gained more experience building software as an architect and manager and observing software being built at the commercial level. I've been more involved in the business and marketing side of things, and I've been blessed with the opportunity to learn from some of the leading minds in our profession.

At this point, I'm of the get 'er done school, which I suppose might map loosely to Fowler's Architectus Oryzus, Eric Evans' Hands On Modeler, and others along those lines. I'm bought into User-Centered Design (or human-centered design, for those who prefer that), though I think we need to figure out a good way to merge DDD with UCD and a smattering of service orientation (as needed!) to make software the best it can be.

Software is complex enough without our making it more so with artificial taxonomic and gubernatorial schemes. Software should be teleological by nature. It exists to serve an end, a purpose, and if it isn't serving that purpose, the answer is not to create counterproductive metastructures around it but rather to make the software itself better.

One of the chief complaints about IT is that we seem resistant to change or at least that we can't change at the speed of business. Putting more processes, formalization, standardization, etc. in place exacerbates that problem. The other biggie is that software doesn't meet the need it was designed to meet. Both of these, at their core, have the same problem--ineffective and inefficient processes that are put in place to manage or govern the project.

I tend to think that projects need managing less than people need managing or, rather, coaching. You get the right people, you give them the equipment, the training, and the opportunity to do the right thing, and you get out of the way and help them do it. You don't manage to dates (or specs!); you manage to results. If you don't have a solution that meets or exceeds the need at the end of the day, you failed. In fact, I might go as far as to say that if what you built matches the original specs, you did something wrong.

Any managerial involvement should have a concrete and direct end in mind. For instance, coordination with marketing and other groups requires some management, but such management should be communication-oriented, not control-oriented. Start small and evolve your management over time. Management, like other things that are designed, is best evolved over time5 to meet these concrete, real needs--and you should keep an eye out for vestigial management that can be extracted.

Similarly, I don't think we need to tackle the software (IT) profession by trying to define and stratify everything we do. In fact, I feel it would be a rather monumental waste of our collective valuable time. One thing is certain: our profession will change. New technologies and new ideas will combine with rapidly changing business needs, and new roles will emerge while old roles become irrelevant (or are at least subsumed into new roles). Monolithic efforts at cataloguing and defining (and, by extension, attempting to control) will, in the best of all possible worlds, be useful only for a short time.

It's clear that there are many approaches to doing software. It's axiomatic that there are many distinct, even unique, business needs (inasmuch as there are many unique individuals in the businesses). What we should be doing as a profession (indeed, what I imagine and hope most of us are doing) is focusing on how to make great, successful software, not whiling away our lives and energy talking about ourselves.

If you ask me what I do (e.g., on a demographic form), I tend to put software maker, or just software. Obviously, that's not specific enough for hiring purposes. But in hiring, we're really looking for knowledge, experience, skills, talents, and attributes, not a role or title. A title is just a hook, a handy way to get someone interested. If the market shows that using "architect" in a title catches the attention you want, use it (whether you're a worker or looking for workers). The job description and interview process will filter at a finer level to see if there's a match.

Outside of that, we don't really need to spend a lot of time discussing it. We're all just making software. We all have unique knowledge, experience, talents, skills, and attributes, so there really is very little use in trying to categorize it much beyond the basic level. So how about we stop agonizing over defining and stratifying "architecture" and "architect," stop worrying about controlling and governing and taxonomifying, and instead invest all that valuable time in just doing what we do--better!?

This thought occurred to me the other day. Maybe the right approach to managing complexity in business software is something akin to creating a biological ecosystem. By this, I mean designing correcting mechanisms to address chaos as it emerges and, ultimately (the dream), designing systems that are biologically aggressive--that is, they look for niches to fill and also take steps to preserve themselves.

I don't know. I'm sure I'm not the first person to think about this. It just hit me the other day as I was walking into work. It seems like the more common approach we take is to try to create a mechanical system as if the complexities of human interactions (i.e., business) can be specified and accounted for in a closed system.

I attended a session on managing complexity at the ITARC in San Diego last October, and the presenter was, if I recall correctly, advocating the more precise specification of business rules through the use of Object Role Modeling (in fact, Dr. Terry Halpin was in attendance at that session and was an active participant). I had attended another session the previous day by a fellow from Fair Isaac on business rule management software.

All of these folks struck me as very intelligent and knowledgeable, and yet it seems to me that they are going in exactly the wrong direction. In fact, I left that conference feeling very whelmed. I felt as if I were living in a separate universe; at least I got the sense that there is a software multiverse, parallel software development universes, with me living in one and a lot of those guys in another. All this talk of "governance" and highfalutin systems (e.g., grid SOA) leaves one feeling so disconnected from the everyday experience of being a software professional.

It seems to me that the solution to complexity in IT is not to create ever more complex mechanical systems, policies, and infrastructure to "govern" the problem. It seems like that's throwing gasoline on the fire. Not only that, it seems fundamentally opposed to the reality that is business, which is essentially a human enterprise based on humans interacting with other humans, usually trying to convince other humans to give them money instead of giving it to some other humans that want their money.

Because humans are intelligent and adaptable, particularly humans driven by, dare I say, greed (or at least self-preservation), these humans are constantly tweaking how they convince other humans to give them money. The point is, business is fundamentally a human, and an aggressively biological, enterprise. It consists of humans who are constantly on the lookout to fill new niches while aggressively defending their territories. So it seems to me that business software should be modeled, at a fundamental level, on this paradigm rather than on the mechanical paradigm.

Of course, the problem is that the materials we're working with are not exactly conducive to that, but therein lies the challenge. I tend to think that the efforts and direction being made by the agile community and approaches like domain-driven design are headed in the right direction. At least they're focusing on the human aspects of software development and focusing in on the core business domains. That's the right perspective to take.

Extend that to IT governance, and that means giving various IT departments within an enterprise the freedom to function in the best way that meets the needs of their local business units rather than trying to establish a monolithic, central architecture that attempts to handle all needs (think local government versus federal government). It means developing with a focus on correction rather than anticipation, building leaner so that when change is needed, it is less costly (in a retrospective sense as well as in total cost of ownership).

I'm not advocating giving ourselves over to the chaos; I'm just thinking that this is a better way to manage the chaos. And as we learn the best patterns to manage complexity in this way, it seems not too far a stretch to think that we could start automating mechanisms that help software systems be ever more agile and ultimately even anticipate the change that is needed by the business, either automatically making the adjustments needed or at the very least suggesting them. That would be true business intelligence in software.

Maybe it's a pipe dream, but I think that without such dreams, we don't improve. At the very least, I think it suggests that the agile approach to software is the right one, and that this approach should be extended and improved, not only in software development but also in architecture and IT in general.

Far be it from me to put words in Phil's mouth, but I hope that folks recognize that his post about favoring composition over inheritance is not specifically about that one best practice (the comments seem to indicate this is being missed). It's pretty clear to me that the thrust of that post is around a philosophical approach that he thinks the ALT.NET community should make.

Two things stand out from Phil's post in this respect: 1) don't appeal to authority, and 2) don't organize yourself around a set of technical principles (best practices), but rather organize yourself around the non-technical values of independent thinking and desire to improve. I hope that everyone can agree that these latter two values are good ones that should indeed be encouraged.

That said, should a community like ALT.NET eschew forming a more formal consensus on technical best practices? I tend to think not. While independent, critical thinking is valuable, it is not the summit of perfection. The summit of perfection, in the realm of ideas at least, is conformance with truth (what actually is versus what I think is), and independent thinking at odds with what is true is not only not valuable in itself, it can be downright detrimental.

For instance, what if you independently and critically think that security and privacy are not important aspects of the online banking application you are tasked with building? Is that kind of independent, critical thinking valuable in itself? Or will it potentially lead to great harm? Independent, critical thinking is valuable only in as much as it deepens one's understanding of and conformance to truth.

So I think there is value in a community such as ALT.NET expending the effort to define, through critical thinking and argumentation, principles that it will hold up as ideals, i.e., things that seem to be most in accord with the truth as we know it. This is where things like patterns and best practices come into play; they are the shared, accumulated wisdom of the technical community.

Now what about the broader idea of eschewing appeals to authority? Far be it from me to claim to be an authority in logic, but it seems to me that not all appeals to authority are invalid (the Wikipedia article Phil links to discusses this to some degree but does not go far enough, in my estimation). The valid reasons for appealing to authority are discussed at the bottom of that article: 1) not enough time and 2) concern about one's ability to make the other understand the reasoning underlying the truth being expressed.

In terms of logic, it is not a fallacy to appeal to an authority on a topic when that authority is accepted by all those involved in an argument. We're talking about presuppositions here, and without them, we'd never get anywhere in our search for truth. If you always have to argue from first principles (if you even acknowledge those), you simply get stuck in a quagmire. In terms of the topic at hand, if folks accept (as they generally do) that the GoF et al. are authorities on the subject of OOD, then it is valid, logically speaking, to appeal to their authority to establish the principle that you should favor composition over inheritance.
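Since the principle is discussed here only by name, a minimal sketch of what "favor composition over inheritance" looks like in practice may help. The classes below (CsvFormatter, JsonFormatter, ReportPrinter) are hypothetical examples of my own, not drawn from GoF:

```python
import json

# Composition: ReportPrinter *has a* formatter rather than *is a* formatter.
# The formatting strategy can be swapped at runtime without building a class
# hierarchy that couples the printer to any one formatter's implementation.

class CsvFormatter:
    def format(self, rows):
        return "\n".join(",".join(str(v) for v in row) for row in rows)

class JsonFormatter:
    def format(self, rows):
        return json.dumps(rows)

class ReportPrinter:
    def __init__(self, formatter):
        self.formatter = formatter  # has-a, not is-a

    def print_report(self, rows):
        return self.formatter.format(rows)

rows = [[1, 2], [3, 4]]
print(ReportPrinter(CsvFormatter()).print_report(rows))
print(ReportPrinter(JsonFormatter()).print_report(rows))
```

An inheritance-based design (say, CsvReportPrinter subclassing a formatter) would fix the formatting choice at class-definition time; here it is just a constructor argument.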

The things to watch out for in appeals to authority are 1) assuming that the authority is incapable of being wrong and 2) failing to ensure that the parties involved accept the authority. With the latter, you simply cannot argue (or at least the argument won't carry weight) from an authority that is not accepted. With the former, unless those involved share the presupposition that the authority is indeed infallible, you should keep in mind that even if you buy into the authority's credentials, the authority can still be wrong.

So I would nuance what Phil says and say that if the ALT.NET community agrees that GoF is an authority, it is valid to appeal to them, while remaining open to criticism of the concepts involved (even those backed by an authority). The authority adds logical weight; it does not impose absolute authority.

We just don't have time to argue everything from first principles. Others who are generally acknowledged to be qualified have already taken the time to research, think about, and propose some good patterns and practices, and unless there is good reason to object, there is no need to rehash those. Instead, I'd suggest that the community focus on spreading knowledge of these patterns and practices all the while refining them, functioning essentially as a group in the way that Phil recommends individuals function--thinking critically and always working to improve. Doing this will help ensure that the community does not fall into a quagmire of unnecessary argumentation, and it will ensure that the patterns and practices that they agree upon can be continuously refined and enhanced as new technologies emerge and greater wisdom is gained over time.

Further, it gives the group a purpose that has meaning. After all, if the group's message is only "think for yourself and be all that you can be," there isn't much of substance to say after that. On the other hand, because it is a technical community that espouses that philosophy, it should take that philosophy on itself (as a group, not just the individuals in it). I would suggest this includes establishing greater consensus on best practices and patterns and then spreading the word about them to others. Be better together. :)

You see, it is not about setting down an infallible manifesto and excluding those who disagree, which is I think more than anything what Phil is concerned about. However, it also isn't about best practices just being true for you but not for me (best practices relativism?). Put another way, I suggest ALT.NET should favor thoughtful adherence to best patterns and practices, not blind adherence.

As I read1 the works of Christopher Alexander, I grew increasingly concerned that the software industry may be missing the point of patterns. Well, maybe that's not the right way to put it. I think we may be missing the real value that patterns bring to the table.

For whatever reason, it seems we approach them (broadly speaking) almost as loose algorithms to be applied here and there as seems fit. Or maybe we just see them as convenient ways to talk about things we already know, or maybe we even use them to learn particular solutions to particular problems. And then maybe we just use them because they are in vogue.

It seems to me that the real value to derive from patterns (as an idea, not necessarily as they are often proposed in the software world) is in learning to see and think about creating software in the best way. What Alexander proposes at the end of The Timeless Way is that it isn't using patterns or pattern languages, per se, that gives our creations the quality without a name. No, he proposes that the value lies in helping us to recognize the quality and teaching us to build in the timeless way.

The timeless way is more than patterns. The thing is, patterns help us to get there. I think in some ways, we do get it. Those who are really into patterns do seem to recognize that patterns are not the solution to everything. The problem, I think, is that we are not using patterns in the most profitable way.

I think part of the problem is in not using patterns as a language. We have numerous catalogues of patterns. To be sure, we do not lack for patterns, and sure, there is obviously value just in having these catalogues and in using the patterns here and there. But I think that as long as we see patterns as individual things in a pattern catalogue, we won't use them to their full effectiveness.

Perhaps what we need to do is to figure out how to use them as a language. Perhaps we need to weave them into our thoughts so that when we approach the problem of building software, patterns are there, guiding our thinking, helping us to best arrange a solution to fit the problem. When we use our natural language, it does the same thing. Our thoughts are constrained by our languages, but at the same time, our thoughts are guided by our languages. The ideas form in our heads and rapidly coalesce into some structure that is based upon our language, and the structure works because of the language--it tells us what works and what doesn't work to articulate our ideas.

I think that a pattern language would have the same power. If we get the patterns into our heads, then when we're faced with articulating a solution to a problem, we will think in terms of the patterns. The patterns will give form to our solution, and because they are patterns, the solution will work. The pattern language will both guide and shape our thinking towards solutions that have the quality without a name.

But then, as Alexander says of "the kernel," once we master the language, we move beyond it, so to speak. The language is not an end in itself but a means to an end, a means to learn the timeless way. It shapes our thinking to the extent that we are able to perceive the way even without a pattern. And this is the superlative value in patterns that I think we're missing.

Patterns, in themselves, have value, but as many have noted, they can be abused and misapplied. The reason for this is not that a pattern (or patterns in general) is bad but that we're using them as an end in themselves. If we simply let patterns shape the way we think about designing software, if we let them become a language, then we will learn to use them in ways that make sense and ultimately go beyond them and build great software even where a pattern doesn't exist.

So how do we do this? Well, I think to some extent, we already do it. I think there are people who use the language, who know the way, without necessarily being conscious of it. And I think that there is a lot of great guidance out there that in a roundabout way does lead to building great software, even though it may not consciously be using patterns as a language. But I do tend to think that there is far more bad or failed software out there that has come about because the language is not known, it is not explicit.

I think that what we need to do is to continue identifying patterns as best we can, but we need to start thinking about how to more firmly incorporate them into how we create software. In fact, I think doing this, attempting to incorporate patterns more into development, will drive the further identification of patterns, to fill out patterns where we are lacking. I also think it will help us to realize how patterns relate to each other, which is a big part of using them as a language and not just a bunch of monads floating about in the ether. As we see them relating, see how they work together to form complete solutions, we'll better understand the language as well as the value of the language, and ultimately, we'll be able to impart that language to enable more of us to speak it.

This calls for those who build great software, who theoretically already know the way, to be introspective and retrospective. It's not just a matter of looking about in the software world for repeated, similar solutions. It's about identifying good solutions, solutions that bring software to life, not just addressing functional requirements, and forming from those solutions a language of patterns for building such software. What do you think?

Previously, I mentioned I was working on an example of using Visual Studio to create a concrete domain model using object thinking, and here it is. The domain I ended up modeling was that of a shared event calendar, including event registration and agenda planning. This is something that's been kind of rolling in and out of my mind for quite a while now because it seems that we need a good system for this for all the code camps and like events that occur. Of course, lately I've come across a few solutions that are already built1, but it seemed like a domain I knew enough about that I could take a whack at modeling it on my own. I also figured it was small enough in scope for a sample.

So without further ado, I present you with the domain model:

I put this together in about an hour, maybe an hour and a half, on the train up to SD Best Practices. When I started out modeling it, I was actually thinking more generally in the context of a calendar (like in Outlook), but I transformed the idea more towards the event planning calendar domain. So you see some blending of an attendee being invited to a meeting with the event planning objects & behaviors (agenda, speaker, etc.). Interestingly, they seem to meld okay, though it probably needs a bit of refactoring to, e.g., have an Attendee Register(Person) method on the Event object.

So the interesting thing to see here, contrasting it to the typical model you see in the .NET world (if you're lucky enough to see one at all!), is that there is pretty much no data, no simple properties or attributes, in the model. The model is entirely objects and their behaviors and relationships to other objects. You can look at this model and get a pretty darn good feel for the domain and also how the system functions as a whole to serve this domain. I was able to identify and model the objects without once thinking about (and getting distracted with) particular data attributes.2

In the story of our Tangerine project, I describe in some depth the compromise I had to make with the .NET framework when it comes to data properties. I think if I were to continue with this event calendar project, after I had nailed down the objects based on their behaviors (as begun in this example) and felt pretty good that it was spot on, at that point, I'd think about the data and do something like I did on Tangerine, having the open-ended property bag but also adding strongly-typed properties as needed to support framework tooling.3
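To make the idea concrete, here is a minimal sketch of what a behavior-first Event object plus that compromise might look like. It's in Java rather than .NET for illustration, and all the names are hypothetical (not from the actual model): the object exposes behaviors and relationships, with an open-ended property bag to which strongly-typed accessors would be added only as tooling requires.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the model is behaviors and relationships,
// not data attributes.
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

class Attendee {
    final Person person;
    Attendee(Person person) { this.person = person; }
}

class Event {
    private final List<Attendee> attendees = new ArrayList<>();
    // Open-ended property bag, as on Tangerine; strongly-typed
    // properties get added later only to support framework tooling.
    private final Map<String, Object> properties = new HashMap<>();

    // Behavior first: registering a Person yields an Attendee.
    Attendee register(Person person) {
        Attendee attendee = new Attendee(person);
        attendees.add(attendee);
        return attendee;
    }

    int attendeeCount() {
        return attendees.size();
    }

    void setProperty(String key, Object value) {
        properties.put(key, value);
    }

    Object getProperty(String key) {
        return properties.get(key);
    }
}
```

Note that a caller reads naturally in domain terms--`event.register(person)`--with no data attributes in sight until the property bag is needed.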

I hope you can imagine how you could sit with your clients or whoever your domain experts are and quickly map out a lightweight model of the domain using the VS Class Designer DSL. I'll wager that if we took this diagram and showed it to a non-technical person, with a little help (maybe adding a key/legend), they'd quickly understand what's going on with the system. And if you're building it with the domain expert, you'll have that dialog done already so that everyone will be on the same page.

Sure, there will be further refinement of both the domain model and the code; the nice thing about using the class designer DSL is that tweaking the model tweaks the code, so the two stay in sync. We already mentioned the need to focus on the data at some point, and depending on your situation, you can do this with the domain experts or maybe you'll have an existing data model to work with. As the developer, you're going to want to get in there and tweak the classes and methods to use best coding and framework practices, things that aren't best expressed in such a model. You will have other concerns in the system to think about like security, performance, logging, user interface, etc., but that's all stuff you need to do regardless of how you approach analyzing and modeling your domain.

In the end, you will have a fairly refined model of the domain (or part of the domain) that is using a language that everyone gets and agrees on (Eric Evans's "ubiquitous language"); you'll have identified the objects in the domain accurately based on their behaviors and relationships, and you'll even have a starting point in code for the implementation. You also have objects that are responsible and that collaborate to get the job done, so in that way you avoid code complexity by reducing imperative control constructs. All in all, it seems like a great foundation upon which to build the software.

Notes
1. Such as Microsoft Group Events, Community Megaphone, and Eventbrite.
2. Okay, so maybe I was tempted once or twice, but I fought the urge. :)
3. I suppose another option would be to create LINQ-based DTOs; I have to think more about how best to meld this kind of domain modeling with LINQ.

I finally got around to finishing The Timeless Way of Building, by Christopher Alexander (best known in software as the source of the patterns movement). The last part of the book, called "The Way," is about how to actually build things. His focus is physical architecture, but it is interesting how closely it resembles agile software development.

There are a few similarities that I see. First, he advocates (or at least shows in his example) working directly with the folks who are going to be using the building(s) when designing it with the pattern language. You design it together with them. Similarly, agile seems to advocate the same process of working as closely as possible with those who will be using the system.1

But Alexander goes on to say, using this real-world example of a health care complex he helped to build, that it almost failed (in terms of having the quality without a name) because even though it was initially designed using the pattern language, it was in the end passed off to builders who conformed the design to "drawings" (think UML) that ultimately caused it to lose a large amount of the quality.

The point he goes on to make is that you can't just use the language up front and then go translate it into formal design techniques and end up with the quality. Rather, you have to build using the language, and in particular, build each part of the structure piecemeal, to best fit its particular environment, forces, context, and needs. This is the only way that you can get the quality. Here I see another similarity with agile and its focus on iterations and regular feedback. You build in pieces, adapting each piece to its whole as it is built and ensuring that it best fits the needs, context, forces, and environment.

He also says that invariably our initial ideas and designs for a solution don't exactly reflect the ways in which the solution will be used. And this disparity between our design and reality gets worse as the solution grows in scope. Again, this is true in software and why the regular feedback is important, but Alexander proposes repair as a creative process in which we better adapt the solution to its environment based on deepening understanding of needs or when the design just isn't working or breaks. This is akin to what we call refactoring, and like we do in software, Alexander advocates a continual process of repair (refactoring). And this process doesn't stop when the initial thing is built--we keep tweaking it ad infinitum.

This seems somewhat intuitive, yet in software we're always talking about legacy systems and many have and continue to suggest "rewrites" as the answer to software woes. While I understand that this is one area where software differs from real-world building (in the relative ease that something can be redone), I do think that we software folks tend to err too much on the side of rewriting, thinking that if only we can start from scratch, our new system will be this glorious, shining zenith of elegance that will last forever.

It is this thinking, too, that even causes many of these rewrites to fail because so much time is spent trying to design a system that will last forever that the system is never completed (or becomes so complex that no one can maintain it), providing the next impetus for another "rewrite of the legacy system." On the contrary, some of the best software I've seen is that which has simply been continuously maintained and improved, piece by piece, rather than trying to design (or redesign) an entire system at once.

What is interesting to me in all this is the similarities between the process of building physical structures and that of building software, the general applicability of Alexander's thought to the creation of software. I continually see this in Alexander's writing. In part, it is good to see a confirmation of what we've been realizing in the software industry--that waterfall just doesn't work, that pre-built, reusable modules don't really work well, that we need regular, repeated input from stakeholders and users, that we shouldn't try to design it all up front, that we shouldn't use formal notations and categories that create solutions that fit the notations and categories better than their contexts, environments, and needs, that we should create and use pattern languages that are intelligible by ordinary people, and more.

There is one last observation I'd make about The Timeless Way of Building, regarding the "kernel of the way." Alexander says that when it comes down to it, the core (the kernel) of the timeless way of building is not in the pattern language itself (the language is there to facilitate learning the timeless way); he says the core is in building in a way that is "egoless."

In some ways, I think the concern about ego is less pronounced in the software world--rarely is a piece of software admired as a piece of art--but at the same time, the underlying message is that you build something to fit just so--not imposing your own preconceptions on how the thing should be built. For software developers, I think the challenge is more in learning to see the world for what it is, to really understand the problem domain, to look at it through the eyes of the users and design a solution to fit that rather than trying to foist the software worldview onto the users. To put it another way, we need to build software from the outside in, not the inside out. The timeless way is really about truly seeing and then building to fit what you see.

Notes
1. At this point, another interesting thought occurs to me about pattern languages; I see a relation to Eric Evans's "ubiquitous language" in that the language you use needs to be shared between the builders and those using the thing being built. What stands out to me is the idea of building a pattern language that is intelligible enough to non-software experts to be incorporated into the ubiquitous language shared by both the domain experts and the software experts. Software patterns vary on this point; some are intelligible and some are not so intelligible; we need to make them intelligible.

As I sit here on the train home, I've been thinking (and writing) about a lot of stuff. But I figured I should put this post together for completeness and finality, even though I only made it to one session today before I left early. Last night I was shocked and somewhat dismayed to find that I had somehow managed to book the train for return on Saturday afternoon rather than today. I looked at my reservation email, thinking surely the ticket was misprinted, but nope, the reservation says the 22nd clearly in black and white.

Now, those who spend much time with me know that I tend to be sort of the absent-minded professor type. I often have trouble keeping track of the details of day-to-day things (but I can tie my shoes!). I like to think there are good reasons for this, but whatever the reasons, that's me. So I can totally imagine that somehow I tricked my brain into thinking that the 22nd was the day I wanted to return when I booked the train.

That said, I think this is a good opportunity to observe a way in which the UX of the reservations system could be improved. If it had simply said somewhere that the return trip was on SATURDAY and not just used these obscure things called numeric dates, I'd immediately have seen and avoided my mistake. But nowhere online nor in the email nor on the ticket does it say Saturday. In fact, there is SO MUCH GARBAGE on the ticket that the non-initiate has trouble finding anything of value. So think about that if you're designing some sort of booking system--show the day of the week, please. :)

Lean Process Improvement So this morning, on top of being tired because I stayed up late writing, I was late for the class I wanted to attend, one called Agile Architecture. Unfortunately, it was in the smallest room in the conference (same one as the manager meeting yesterday), and unfortunately, the planners didn't correctly anticipate attendance for that session. Plus, this room had this odd little old lady who felt it was her duty to prevent anyone from attending who had to stand.

Yesterday, I watched her try to turn away quite a few folks (a few successfully), even though there was plenty of room on the far side to stand. She kept saying "there really is no room," but there was. What made the whole scene kind of comical was that she refused to go sit OUTSIDE the door, so rather than simply preventing folks from coming in and causing a distraction, she let them come in, then animatedly tried to convince them to leave, causing even more distraction.

Well, when I peeked in the door this morning, saw the full room and saw her start heading toward me, I knew I was out of luck. I just didn't have the heart to muscle by her and ignore her pleading to go stand on the other side, and besides, I don't like standing still for 1.5 hours anyway. So I was off to find an alternative.

I knew there wasn't much else I wanted to see during that hour, but by golly I was there and this was the only slot I could make today, so I was going to make it to a session! After two more failed entries into full sessions and studiously avoiding some that sounded extremely dull by their titles, I finally found one that sounded nominally interesting and had a lot of open space. I really had no clue what I was getting into...

It ended up being somewhat interesting. It was about applying the "lean process" from the manufacturing space to software development. I'm personally not really into process and methodologies, particularly when they come from disciplines that are only marginally like our own. But this did sound like it could be useful in some situations, particularly in software product (i.e., commercial) development.

The presenter talked about value stream mapping, which is basically modeling the process flow of specific activities in product development from beginning to end (so you'd do one for new feature development, one for enhancements, one for hot fixes, etc.). It sounds like it does have potential to be useful as long as you don't spend too much time on it. Particularly if you think you have a problem in your process, this method can help you to both visualize and identify potential problems. If you do product development, it's worth a look.

Final Thoughts After that session, I made off to go to the 12:05 mass at the chapel outside the convention center. My deacon friend had let me know about it, and I was glad of it. And he was there, so after mass, we went back into the conference to grab lunch together. Talked more about the usual, and then I had to run off to catch my train.

Looking back, I feel that this is definitely a conference worth attending. Of course, your mileage will vary. I wouldn't come here to go to a bunch of sessions on topics you're already an expert on. But the nice thing about this conference over others I've been to is that it really is focused on best practices. It's not really focused much on technology-specific stuff (though there was a bit of that), so you can derive value whether you do Java, C/C++, .NET, or whatever.

Also, it is a good place to come to meetings of minds from other technology experts, so you get some more exposure than you might normally to how folks are doing software outside of your technological community. And one interesting thing I noticed is that there is a tangible presence of software product developers, and that's a different and valuable perspective for those who are more used to, say, standard custom/consulting/corporate IT software.

Overall, if you look over the sessions and see topics that you haven't had a chance to explore in depth or maybe you want to just get exposed to other ideas in the software space, this seems like a good conference for that. I really enjoyed it.

Today I stumbled into Barnes & Noble (because it had the nearest Starbucks), wandered into the notebook section, and was reminded that my current Moleskine notebook was almost full. Silly me, I still have two back at the office, so I thought it must be fate for me to go ahead and restock while I'm here. I highly recommend Moleskine; I like the small, book-like ones without lines because small is convenient enough to put in pocket and I don't like to conform to lines or have even the suggestion that I should, but they have all kinds. Good, tough little notebooks, and supposedly they've been used by some famous people. This has not been a paid advertisement for Moleskine. Now we return you to your regular program.

Applying Perspectives to Software Views (Cont'd) Yesterday I talked about Rebecca Wirfs-Brock's session on software views. There's a lot more to what she said than what I communicated, but I'm just passing along what stuck with me. Looking at my notes, I forgot to mention another key thing, which is that you should model these views and model them in a way that effectively communicates to the stakeholders that their needs are being addressed. She put up some UML diagrams, commenting that they're probably not good for most business folks. (I think UML is not good for most technical folks either, but I'm a rebel like that.) The point she made, though, was that regardless of what notation you use, provide a key that lets people know how to unlock the meaning of the model. Good point for sure.

Actually, this reminds me of Beautiful Evidence, by Edward Tufte. I recommend Tufte for his area of expertise, though I'd suggest skipping the chapter on PowerPoint (which sadly was released as a separate booklet) because it's not his area of expertise and it shows. Anyways, when he is sticking to the realm of visual communication, he is excellent, and Beautiful Evidence is a pretty easy read that helps you start thinking about how to communicate "outside the box" as it were. I bring it up here because applying his ideas in the area of modeling software, particularly to non-technical audiences, is something we should explore.

Now, back to Day III.

Software Managers The first session I made it to kind of late (and it was absolutely packed--standing room only) was a session on tips for being a good technical/software manager. Having become one of these this year, it is definitely a subject of interest, and I'm always on the lookout for more tips, though I must say that I think management books (as a rule) are really bad about regurgitating each other. You get to where it becomes increasingly hard to find new, good insights the more you read them.

But I thought this session would be good since it is specifically focused on managing technical teams. Some of her points were standard managerial stuff, but it was nice to have it focused in on the IT industry. I always end up feeling a bit guilty, though, because I know I've already made numerous faux pas (not sure how to pluralize that). I hope my guys know I love them even though I screw up being a good manager at times. :)

One recurring theme I keep coming across is having regular 1-1s with your peeps. I've heard weekly and bi-weekly, but it seems like both of those would be overkill for my group since we have daily meetings, often go out to lunch, etc., so I'm going to try monthly first. It'll be better than nothing!

I have to say that managing well is a lot harder than I expected it to be. For those of us who aren't natural people persons, it is definitely an effort. I'm sure it is tough regardless, but I gotta think that it'd be easier if I were naturally more of a people person. Anyways, I keep tryin' for now at least.

Designing for User Success Went to another Larry Constantine session around UX. This one was really good. He, like Patton, affirmed that "user experience is about everything." Again, it's nice to know I'm not crazy, and it takes a burden off me knowing that I won't be a lone voice crying out about that. It seems that maybe just those who don't know anything about UX think it is "just another term for UI." Of course, these "UX professionals" are naturally focused in on their areas of expertise (usability, information architecture, human factors, human-computer interaction, visual design, interaction design, etc.), so maybe I'm still a bit odd in my contention that architects must be the chief experience officers on their projects.

Anyhoo, this session focused in on "user performance" as a distinct focus, meaning that you are providing the tools to get the best performance out of people. Though none of the session was spent explicitly justifying the importance of a focus on UX, implicitly the whole session was an illustration of why it is important. I have a ton of good notes from this session, but I won't bore you with them (you can probably get most of it from his slides or other presentations he's done). If you get nothing else, though, it's to change the way you think about designing software--design from the outside in. If you're a smart person, you'll realize this has huge implications. And also, recognize that you won't make all parts of your system perfectly usable, so prioritize your usability efforts based first on frequency of use and second on severity of impact (i.e., those things that will have serious ramifications if not done correctly).
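As a rough sketch of that prioritization rule (with hypothetical names, not anything from the session), ordering usability work first by frequency of use and then by severity of impact might look like this:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank usability work by frequency of use first,
// then by severity of impact if done wrong.
class UsabilityItem {
    final String name;
    final int frequency; // higher = used more often
    final int severity;  // higher = worse consequences of a mistake

    UsabilityItem(String name, int frequency, int severity) {
        this.name = name;
        this.frequency = frequency;
        this.severity = severity;
    }

    static List<UsabilityItem> prioritize(List<UsabilityItem> items) {
        List<UsabilityItem> sorted = new ArrayList<>(items);
        sorted.sort(Comparator
                .comparingInt((UsabilityItem i) -> i.frequency).reversed()
                .thenComparing(Comparator
                        .comparingInt((UsabilityItem i) -> i.severity).reversed()));
        return sorted;
    }
}
```

A frequently used feature with mild consequences still outranks a rarely used one with severe consequences under this ordering, which is exactly the point of putting frequency first.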

Human Factors in API Design The next session I hit was one related to UX for developers. Here are some salient one-liners:

Consistency is next to godliness.

API = Application Programmer Interface

When in doubt, leave it out. <-- More specifically, unless you have at least two, real use cases, don't stick it in your API.

Use the Iceberg Principle. <-- This means what people see of your code should only be the tip of the iceberg--keep it small, simple, and focused.

This session actually seemed to be a blend of general UX guidelines (yes, they apply here, too, not just on end-user interfaces) and more general framework design principles that only had varying degrees of pertinence to ease of use. Some highlights:

Default all members to private; only raise visibility with justification.

Prefer constructors to the factory/builder pattern, and set up the object fully with the constructor where possible.
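A tiny, made-up illustration of those two guidelines together: everything defaults to private, visibility is raised only with justification, and a single constructor sets the object up fully so no setters are needed.

```java
// Hypothetical example: members default to private, and the
// constructor fully initializes the object--no setters, and only
// the tip of the iceberg is visible to callers.
final class TemperatureReading {
    private final double celsius;        // private: callers never mutate it
    private final long timestampMillis;  // private: exposed only if justified

    // One constructor sets up the object completely.
    TemperatureReading(double celsius, long timestampMillis) {
        this.celsius = celsius;
        this.timestampMillis = timestampMillis;
    }

    double celsius() {
        return celsius;
    }

    double fahrenheit() {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```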

There's a good deal more, and I'm not offering the justification he proposed (for brevity's sake). I agree with varying levels of vehemence with most of what he said, but one area where I think I have to disagree is his advice to only refactor to patterns. I can imagine where this comes from--because patterns can be abused ("patternitis," as he said). But I think saying to only refactor to patterns shows a big misunderstanding of the point and value of patterns. This is why it's important to pay attention to the context and rationale in a pattern--so you know when to apply it. But patterns should be used where they apply--they're known, established, tried and true ways of solving particular problems in particular contexts! If consistency is akin to godliness, using patterns is ambrosia.

One last interesting note from this session was the admonition to consider using or creating a domain-specific language where it helps with the usability of the API. His example contrasted JMidi and JFugue: where JMidi is a terribly verbose API, requiring the construction and coordination of a host of objects to do something simple like play a note, JFugue offers a simple string-based DSL, based on musical notation, that lets you place a whole series of notes very compactly. Good/interesting advice.

Pair Programming The last session I went to today was one based on practical pair programming. I was actually on my way to a class on Business Process Modeling Notation, which would have been potentially more intellectually stimulating, but I walked by the room hosting the Pair Programming session and had a sudden feeling I should attend it. When I thought about it, I figured that I'd put off giving the idea fair play long enough and that I should take the time to hear it in more depth. I figured it'd have more immediate relevance to my current work situation in any case.

I won't belabor all the points because I suspect with good reason that they're all the standard arguments for pair programming along with a good bit of the "how" to do it in real situations. He actually has a number of patterns and anti-patterns to further illustrate good/bad practices in pair programming. It was an interesting extension of the pattern-based approach (to people). Suffice it to say, I think if you can get buy-in in your organization, it is definitely worth a try. There are numerous difficulties with it, the chief one being that it is hard to do effectively in a non-co-located environment, but I think I'd try it given the opportunity.

Random Thoughts One conclusion I've come to while being here is that TDD seems to be unanimously accepted as a best practice by those who have actually tried it. The API guy went so far as to say that he won't hire devs who don't have TDD experience. (I think that's a bit short-sighted, but I take his point.) It's something to think about for those still hesitating to adopt TDD.

I met up again with the same fella I met last night. We were both in the pair programming class at the end of the day; he's been doing pair programming on a few teams at his company for years and is a fan, though he definitely attests to the difficulty of dealing with prima donnas, which apparently are more tolerated in his LOB (because they have very specialized knowledge that requires PhD level education). So he wasn't able to carry XP to his entire company. He also said that pairing (which was echoed by the presenter) is a taxing process; 4-5 hours max is good.

We also had a good long chat about things Catholic. It's good to know that we Catholics will be getting another good, solid deacon in him. I imagine tonight won't be the last time we talk.

All in all, another great day. Learned a bunch. No sessions I regret going to thus far, which is I think a big compliment for a conference. :)

The first class I attended was a whirlwind tour of user experience. I was heartened to learn that I am not alone (or crazy) in recognizing that a number of disciplines go into this thing we call UX. The presenter, Jeff Patton, also recognizes that virtually every role in developing software has an effect on UX, a conclusion I've reached as well (as I hint at in IG's UX area). I develop the idea more explicitly in an unpublished paper I'm working on. (I'm hoping the input I get from this conference will help me finish it.)

I actually think all of this UX stuff falls under the architect's purview because (in my mind at least) he or she is primarily responsible for designing the software as a whole. That means architects need at least a conversational familiarity with the disciplines people traditionally think of as user-oriented, but I'd take it a step further and say the architect needs to be the chief experience officer, as it were, on a software project. The architect needs to ensure that the appropriate expertise in user-oriented disciplines is brought to bear on the project, and also needs to understand how the other aspects of software design and development impact UX and optimize them accordingly.

That discussion aside, Jeff had a pretty clever graph showing how the kind of software being developed affects the perceived ROI of spending on UX. His talk was also about as effective an introduction to UX as I can imagine: he dealt with what it is and why it's important, then offered a high-level overview of key bits of knowledge for people to make use of. I want to steal his slides! :)

Global Teams & Outsourcing Agilely

The keynote during lunch today was given by Scott Ambler. It was nice to finally see and hear him in person, since I've heard so much about him. I got the feeling (which he more or less admitted) that the material wasn't entirely his own--from what I could tell, he was presenting an overview of a book that IBM publishes on the subject (related). But that didn't take away from the value of the knowledge by any means. I'd definitely check it out if you're going to be dealing with geographically distributed teams.

Usability Peer Reviews

In my continuing quest to learn more about UX (part of which is usability), I attended a class by Larry Constantine on lightweight usability practice through peer review/inspection (related paper). I was actually surprised, because he has a very formal methodology for this, which means he's put a lot of thought into it and, more importantly, he's used it a lot in consulting, so it's tested. I'm personally not a big fan of being too formal with these things. I understand the value in formalizing guidance into a repeatable methodology, but I've always felt these things should be learned for their principles more than their strictures. Of course, that runs the risk of missing something important, but I guess that's the trade-off. Regardless of whether you follow it to a T, there's a ton of good stuff to be learned from this technique about how to plug usability QA into the software process.

Applying Perspectives to Software Views

After that, I slipped over to another Rebecca Wirfs-Brock presentation, on applying perspectives to software views in architecture. (She was presenting the subject of this book.) To me, the key takeaway was that we should figure out the most important aspects of our system and focus on those. It echoed (in my mind) core sentiments of domain-driven design, though with different terminology and approach. I think the two are complementary: the view approach helps you think about the different non-functional aspects, while strategic DDD (in particular, distilling the domain) helps you and your stakeholders focus in on the most important aspects of the system from a domain strategy perspective, which in turn informs which views and perspectives need the focus.

This approach also echoes the sentiment Evans expressed yesterday that you can't make every part of the system well designed (elegant, close to perfection). Once you accept that, you can use these approaches to find the parts of the system where you need to focus most of your energy. I really like that this practical truth is being made explicit, because I think it can help overcome a lot of the problems in software development that stem from the generally idealistic nature we geeks have.

Expo

After the classes today, they had the expo open. In terms of professional presentation, it was on par with TechEd's expo, though the scope (number of sponsors) was far smaller. That said, I popped into the embedded systems expo, which was a new experience for me. It was interesting to see almost every booth with some kind of exposed hardware on display; as a software guy, I tend to take all that stuff for granted. They even had a booth with specialized networked sensors for tanks of liquid. It all stirred recollections of Weird Science and the other fun fantasies geeky kids have about building computerized machines. The coolest thing there was the Intel chopper, apparently built by the Orange County Choppers guys, with a lot of fancy embedded-systems gear on it. I didn't stick around to hear the spiel, but it was pretty cool.

After the expo, I bumped into a guy at the Cheesecake Factory. We started chatting, and it turns out he's in the process of becoming a Roman Catholic deacon--a pretty cool coincidence for me! We talked about two of my top passions: my faith and software development (as exemplified here on dotNetTemplar!). It was a good dinner. He works at a company that does computer-aided engineering; sounds like neat stuff, with all that 3D modeling and virtual physics. Way out of my league!

I meant to write this last night, but I didn't get back to my room till late and just felt like crashing. I'm at the SD Best Practices conference in Boston this week, which is a new experience for me. It's one of very few non-MS-oriented conferences I've attended, and I really wanted to come because best practices are a passion of mine (and part of my job). Infragistics was kind enough to send me. I thought I'd share my experiences for anyone else considering going (and just for my own reference... hehe). Anyways, enough of the intro...

Day 1 - Tuesday, 18 September 2007

First off, let me say I like the idea of starting on a Tuesday. It let me work for a good part of the day on Monday and still make it out here by train on Monday night. I've found in the past that attending sessions non-stop for a few days can really wear you out, so four days seems about right.

The conference is in the Hynes Convention Center, and I'm at the Westin, a stone's throw away. It's also right next to Back Bay Station, so thus far the logistics have worked out quite well for me. I'd personally much rather take a train than a plane anytime.

Responsibility-Driven Design

Tuesday was a day of "tutorials," which are half-day sessions. In the morning, I attended Rebecca Wirfs-Brock's tour of responsibility-driven design (RDD). I actually had her book at one point, because it was mentioned in a good light by Dr. West in his Object Thinking, but somewhere along the line I seem to have lost it. Anyways, I was glad to get the chance to learn from the author directly and to interact.

From what I can ascertain, RDD has some good insight into how to do good object design. Thinking in terms of responsibilities can help you properly break the domain apart into objects if you struggle with thinking purely in terms of behavior. It's potentially easier because, while behaviors are certainly responsibilities, objects can also have the responsibility to "know" certain things, so it's a broader way of thinking about objects that includes their data.

That said, it doesn't negate the point of focusing on behaviors, particularly for folks with a data-oriented background, because I do think focusing on behaviors is the right way to discover objects and assign them the appropriate responsibilities. The key difference is that with the object-thinking approach, you know there will be data and that it's important to deal with, but you keep it in the right perspective--you don't let it become the focus of your object discovery.
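The knowing-versus-doing distinction is easier to see in code. Here's a minimal sketch of how I understood it (the class and all the names are my own invention, not from the session, and it's in Java here just for illustration):

```java
// A sketch of "knowing" vs. "doing" responsibilities on one object.
class Reservation {
    // "Knowing" responsibilities: the object is responsible for knowing
    // who the guest is and how long the stay is.
    private final String guestName;
    private final int nights;

    Reservation(String guestName, int nights) {
        this.guestName = guestName;
        this.nights = nights;
    }

    String guestName() {
        return guestName;
    }

    // "Doing" responsibility: the object computes its own total rather
    // than handing raw data to some other object to operate on.
    int totalCost(int ratePerNight) {
        return nights * ratePerNight;
    }
}
```

The point (as I took it) is that the data isn't ignored--knowing the guest and the nights is a legitimate responsibility--but the behavior (`totalCost`) is what justifies the object's place in the design.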

Another beneficial idea from Ms. Wirfs-Brock is using stereotypes as a way to discover objects in the domain. This is more helpful, I think, when dealing with objects that belong to the software domain rather than the business domain, because the stereotypes are very software-oriented (interfacers, information holders, etc.).

In terms of process, she advocates having everyone on a team write down their thoughts about the problem being faced in a few sentences, focusing on what seems like it'll be a challenge, what will be easy, what you've run into before, etc. Then everyone brings those to the initial design meetings. I like the idea because it bypasses the introvert-extrovert problem you sometimes get in meetings, and you can start out with a lot of ideas to really jump start the design. It's a good way to ensure you don't miss out on ideas due to personality issues.

The other thing I like in her process is writing down a purpose statement for objects as you discover them and thinking of them as candidates. This is part of the CRC card process (the first C is now "candidates"). The reason I like it is that it helps you to focus on the point of the object and sort of justify its existence, which can help weed out some bad ideas.

What I don't like about the process is the overall CRC card idea. While it's surely more lightweight than many approaches to object design, you still end up with a bunch of paper that you then have to translate into code at some point. I much prefer to use a tool that literally creates the code as I design, and I've found the VS class designer serves this purpose quite well. In fact, on the way up here, I spent some time doing up a sample class diagram using the object-thinking approach to share as an example of domain modeling. I'll be sharing it soon, but I mention it to say this is not just speculation. It was actually very lightweight and easy to discover objects and model the domain that way, and at the end I had literal code that I can either fill out or hand off to other devs, who can then further refine it.

Domain-Driven Design

The second session I attended was one by Eric Evans on strategic domain-driven design. Eric wrote a book on the subject that's been well received by everyone I've encountered who has spent time with it. I've seen a presentation on it, and I've read parts of Jimmy Nilsson's Applying Domain-Driven Design and Patterns book. So I thought I was acquainted well enough with the ideas, but as I often find to be the case, if you rely on second-hand info, you'll inevitably get a version of it that has been interpreted and is biased towards that person's point of view.

For instance, most of what I've seen on DDD is focused on what Eric calls "tactical" DDD, i.e., figuring out the objects in the domain and staying on track with the domain using what he calls the "ubiquitous language." Yesterday, Eric presented the parts of his ideas he calls "strategic," because they're geared towards strategic-level thinking about how you approach building your software. I saw two key takeaways: what he calls context mapping, which seems to be a really effective way to analyze existing software to find where the real problems lie, and distilling the domain, which is a way to really focus in on the core part of a system that you need to design.

In short (very abbreviated), he claims (and I agree) that no large system will be completely well designed, nor does it need to be. This isn't to say you should be sloppy, but it helps you focus your energies where they need to be focused--on the core domain. Doing this can actually help a business figure out where to consider buying off-the-shelf solutions or outsourcing, as well as where to focus their best folks. It's a pretty concrete way to answer the buy-vs.-build question.
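Context mapping is easier to picture with a concrete (if contrived) boundary. Here's a minimal sketch of one kind of relationship you might find on a context map--a translator that keeps a legacy system's model from leaking into the core domain. Every name here is invented for illustration; none of it comes from Eric's talk:

```java
// Shape dictated by the legacy context--cryptic names and all.
class LegacyCustomerRecord {
    String CUST_NM;
    int STATUS_CD; // assume 1 means "active" in the legacy system

    LegacyCustomerRecord(String name, int status) {
        CUST_NM = name;
        STATUS_CD = status;
    }
}

// The core domain's model, expressed in its own ubiquitous language.
class Customer {
    final String name;
    final boolean active;

    Customer(String name, boolean active) {
        this.name = name;
        this.active = active;
    }
}

// The translation lives at the boundary, so legacy quirks never leak
// into the core domain model.
class LegacyCustomerTranslator {
    Customer toDomain(LegacyCustomerRecord rec) {
        return new Customer(rec.CUST_NM.trim(), rec.STATUS_CD == 1);
    }
}
```

The payoff is that the mess stays at the edge: the core domain, where you're spending your best design energy, only ever sees `Customer`.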

Anyways, I'm definitely going to get his book to dig in deeper (it's already on the way). Please don't take my Cliffs Notes here as the end of your exploration of DDD. It definitely warrants further digging, and it's very complementary to a good OOD approach.

After all this, I was privileged enough to bump into Eric and have dinner with him, getting to pick his brain a bit about how all his thinking on DDD came together, his perspectives on software development, and how to encourage adoption of better design practices (among other things). Very interesting conversation, one that would have been good for a podcast. I won't share the details, but I'm sure folks will eventually see some influence this conversation had on me. Good stuff.

Software for Your Head

I almost forgot about Jim McCarthy's keynote. I've only seen Jim speak twice (once in person and once recorded). He's a very interesting and dynamic speaker, which makes up for a certain lack of coherence--I find the best speakers tend to come across a bit less coherent because they let speaking become an adventure that takes them where it will. But there was definitely value in his message. I tend to agree with his assertion that what we all do on a daily basis has a larger impact on humanity than we realize, and I can't argue with his experience in building teams that work. http://www.mccarthyshow.com/ is definitely worth a look.

Overall, Tuesday was a big success from an attendee perspective. So far so good!