Value-Driven IT


Friday, April 22, 2016

The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

Programmers experience the resulting pain of the poor implementation of the Web on a daily basis. Two glaring examples are REST and JSON.

You might wonder, Wait - JSON was created as a better alternative to SOAP, so isn't it really better? And REST was created as a better alternative to WSDL - so isn't that better also?

Well, better, yes, but that's a pretty low bar. Let's not dredge back up WSDL and SOAP - let's please leave those in the trash can of horrors where they belong. REST and JSON are sufficiently terrible that we don't need to go back to things that were even more terrible.

So why is REST terrible? It all started with the notion that inter-system messages need to be human readable. Really, I think it started with firewalls: in the late '90s programmers wanted to make remote requests across firewalls, and existing protocols had trouble doing that, so they turned to HTTP, which was designed for human readable content. Programmers wanted to send inter-system messages, so they grabbed XML, a convenient data format that could easily be pumped over HTTP. Then OASIS and W3C got into the mix, and soon we had WSDL and a raft of other standards - all of which repeat the mistakes of HTTP: no type safety, and no scoping of the standard, so that you have to figure out what you don't know that you need to know. For example, which HTTP headers are appropriate for the data you are sending? Header types are defined in an ever-growing list of ever-updated RFCs, and there is no "header compiler" - no way to validate your headers or content body format without actually running the code.

HTTP is, frankly, a mess.

REST tried to simplify away the horrors of WSDL by defining a simpler approach. After all, all we are trying to do is send a friggin' message. REST says, Just put the message in an HTTP payload - forget all the WSDL definition. The client will parse the payload and know what to do.

The problem is, clients now have to parse the message. Message parsing is something that should be done behind the scenes - it should be automatic. Client and server endpoint programs should be able to work with an API that enables them to send a data structure to another machine, or receive a data structure - in the language in which they are working. Application programmers should not have to parse messages.

Languages like Go make JSON and XML parsing easier because parsing support is built into the standard library, but it is still a lot of work - and a lot of code. E.g., in Go, if you have not defined a struct that mirrors the message, a JSON stream will be parsed into a data structure - but it is not the data structure you want: it is a hashtable of "interface{}" values. You have to programmatically convert the hashtable into your desired strongly typed object. It is all quite clunky.

JSON was created as a better alternative to XML, which is very hard to read. However, JSON suffers from the fact that it is still a message syntax - that is, one writes an actual message in JSON, rather than defining a message schema. Thus, there is no compiler - and therefore no way to check a JSON message until you actually run your code and send a message. Actually, that is not entirely true now - someone has realized this problem and invented a JSON schema tool. But then if one has defined a schema, why code JSON messages by hand? - why not generate the code that does the message marshaling and unmarshaling?

Ironically, Google - the creator of Go - has come up with Protocol Buffers as an alternative to REST and JSON. And guess what? - messages are not human readable, and the programmer only defines the message schema - all the parsing code is automatically generated. Hmmm - that's what CORBA did. Why did Google do this? Answer: it turns out that message processing efficiency matters when you scale: imagine that REST/JSON messages require X CPU cycles to marshal and unmarshal, and Y amount of bandwidth, and that the same application using Protocol Buffers requires X/100 CPU cycles and Y/100 bandwidth - if X and Y are Internet-scale, that translates to real dollars, like needing ten machines instead of a thousand. Google has embraced Go for the same reason: natively compiled code runs faster than scripted code - a lot faster - and that translates to less compute resources.

So we are back to the future. We have come full circle. What a circuitous detour. So much wasted effort.

Tuesday, December 29, 2015

In XP, the [elements of planning] are the stories. The [scope units] are the estimates attached to the stories. The [scope constraint] is the amount of time available.

Yet, it seems like every time I have coached an Agile team, the team is compelled by management to do task level planning - that is, decomposing each story into work tasks. On top of this, most of the popular Agile planning tools, including VersionOne, TFS, Rally, and Jira, all have a heavy emphasis on task level planning: e.g., in Rally, you cannot define a story without defining its tasks. As someone who has used Rally a great deal, I found this to be a horrible nuisance.

Task level planning runs counter to Agile in many ways, and I have seen it greatly undermine Agile teams. Some of the problems with task level planning in an Agile project are,

1. Task level planning is excessively time consuming; and since planning involves the entire team, this ties up the team for too much time - the team would rather get to work.

2. Task level estimates are usually wildly wrong; story level estimates, in contrast, are often quite accurate, in the sense of being consistent.

3. The actual tasks needed to complete a story do not reveal themselves until the developer starts working on the story.

4. Partly because of #3, adding up a story's tasks does not yield the time required to complete the story.

5. Task completion does not prove progress - only story completion does: that is the entire point of stories - that a story represents demonstrable progress, and that completion is defined by the story's acceptance criteria and the team's definition of done for its stories. Tasks do not have these attributes. This is central to Agile: waterfall projects are notorious for being "on schedule" until release day, when they suddenly need more time - yet the project hit all of its prior milestones, with all tasks such as "design", "code", etc. completing - but with nothing actually demonstrable. It is the crucible of running software, passing acceptance tests, that proves progress - nothing else does.

6. Completion of a task often (usually?) does not mean that the task is really complete: since tasks are often inter-dependent, completing one task might reveal that another task - which was thought to be done - is actually not done. For example, a test programmer might write some acceptance tests, but when the app programmer runs them against the story's implementation, the programmer finds that some tests fail that should pass - indicating that the tests are wrong, and meaning that the testing task was not actually done - yet it had been marked as done. Only running software, passing tests, proves that the story is done. Task progress is suspect.

That said, some level of task planning is useful. For example, it makes sense especially when more than one person is involved in implementing a story, such as a test programmer and an app programmer. One can then have tasks for the story, such as "write automated acceptance tests" and "write unit tests and app code". But progress should not be measured by task completion; and it is a total waste of time to come up with estimates for these tasks ahead of time. Instead, it is better to have people estimate on the spot, the day they plan to work on a task - that is likely to be more accurate than an estimate made a week or two before.

Some of the consequences of paying too much attention to tasks in an Agile project are,

Parties external to the team, such as the project manager, start to think of the work at a task level, and report progress based on that, with all of its pitfalls (see #5 above).

Parties who pay attention to task estimates, such as the team lead, will be constantly disappointed, because of #2,3,4 above.

Teams will lose an entire day or more to planning each sprint, because of #1.

Team members will collaborate less, feeling that "I did my task - now it's in your court", instead of working together to get app code to pass its tests.

Even though many Agile authors talk about tasks, and many "Agile" tools support task level planning, task level planning is antithetical to Agile - and to the Agile Manifesto's preference for individuals and interactions over processes and tools.

Friday, December 25, 2015

I have been using Go for the past six months, in an effort to learn a new natively compiled language for high performance applications. I had been hoping that Go was it - sadly, it is not.

Go is, frankly, a mess. One of its creators, Ken Thompson of Unix and C fame, called Go an "experiment" - IMO, it is an experiment that produced Frankenstein's monster.

It is OO, but has arcane and confusing syntax

Go is object oriented, but unlike most OO languages, the syntax for defining interfaces and concrete objects is completely different: one defines an "interface" and then one defines a struct - and these are quite different things. But also unlike many OO languages, the methods of a concrete object type are not defined with the object type - they are defined outside the object definition - in fact, they can be in any file that is labeled as belonging to the "package" in which the object type (struct) is defined. Thus, you cannot tell at a glance what a type's methods are. On top of that, there is no syntax for saying that "concrete type A implements interface I", so you cannot tell if a concrete type implements an interface unless you try to compile it and see if you get an error: the rule is that a concrete type implements an interface if the concrete type has all of the methods that are defined by the interface - and yet the concrete type's methods are strewn all over the place. What a mess.

As a result, there is no language-provided declaration of a type network - interface types and the concrete types that implement them. You have to keep track of that on a piece of paper somewhere, or use naming conventions to link them. The reason for this chaos escapes me, as I have not seen any helpful language feature that results from it - you cannot extend types dynamically, so I see no advantage to the forceful decoupling of interface types, concrete types, and the methods that belong to the concrete types. Perhaps this was part of the experiment - with terrible results.

Its polymorphism is broken

Go lets you define an interface and then define concrete types (structs) that implement that interface (and possibly others). Yet the way this works is peculiar and likely to trip up programmers. If you create an instance of a concrete type and call an interface method on it directly, you get what you expect - the right method for the concrete type is called. But Go does not actually have abstract types, so to simulate one you define a "base" struct and give it a dummy method for each method that you don't want to implement, and then embed it in your concrete types. The catch is that when a method of the embedded struct calls another method, the embedded struct's version is called - not the "overriding" method of the outer type. My point here is that this dispatch is statically determined, so the behavior depends on the calling context - and that is very confusing, likely to introduce subtle errors, and defeats most of the value proposition of polymorphism.
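The program this post discusses appears to have been lost; what follows is a minimal reconstruction of the behavior described. The type and method names (InMemResource, InMemDockerfile, getParentId) come from the discussion below; the method bodies are hypothetical.

```go
package main

import "fmt"

// InMemResource plays the role of the "abstract" base type.
type InMemResource struct{}

func (r *InMemResource) getParentId() string { return "resource-parent" }

// describe calls getParentId - but the call is bound to InMemResource's
// method at compile time, regardless of which type embeds it.
func (r *InMemResource) describe() string {
	return "parent=" + r.getParentId()
}

// InMemDockerfile embeds InMemResource and "overrides" getParentId.
type InMemDockerfile struct{ InMemResource }

func (d *InMemDockerfile) getParentId() string { return "dockerfile-parent" }

func main() {
	curresource := &InMemDockerfile{}
	fmt.Println(curresource.getParentId()) // direct call: prints "dockerfile-parent"
	fmt.Println(curresource.describe())    // via embedded method: prints "parent=resource-parent"
}
```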

When you run it, you will see that the getParentId method defined by InMemResource is called - instead of the getParentId defined by InMemDockerfile - which is the one that, IMO, should be called, because the object (struct) is actually an InMemDockerfile. Yet if you call curresource.getParentId directly from the main function, you get the expected polymorphic behavior.

If the missing method implementation is added to the above program, it works. Thus, the program did not work because one of the methods being called did not have an implementation in the concrete type (InMemDockerfile) - that effectively obscured the actual type from the final method in the call sequence. Programmers who are accustomed to dynamic dispatch, as in Java, will find this behavior surprising.

Type casting affects reference value

Another peculiarity of the Go type system is that comparing a value with nil can fail (so it is apparently not nil), yet if you type cast the same value and compare with nil again, the comparison can succeed. Here is an example:
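The original example seems to have been lost from the post; this reconstruction shows what is almost certainly the behavior being described - a nil pointer stored in an interface value, which makes the interface itself non-nil. The MyError type and mayFail function are made up for illustration.

```go
package main

import "fmt"

type MyError struct{}

func (e *MyError) Error() string { return "my error" }

// mayFail returns a nil *MyError wrapped in the error interface.
// The interface value carries the type *MyError, so it is not nil -
// even though the pointer inside it is.
func mayFail() error {
	var e *MyError // nil pointer
	return e       // non-nil interface holding a nil pointer
}

func main() {
	err := mayFail()
	fmt.Println(err == nil)            // prints "false": the interface is non-nil
	fmt.Println(err.(*MyError) == nil) // prints "true": the pointer inside is nil
}
```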

The surprising comparison succeeds; draw your own conclusions - but regardless, I expect this unexpected behavior to be the source of a great many bugs in programmers' code.

Its compilation rules are too confining

With C, one compiles to a binary that one can then link or save somewhere. With Go, the binaries are managed "magically" by the compiler, and you have to "install" them. Go's approach tries to make compilation and binary management simple for stupid people - yet anyone using Go is not likely to be stupid, and will likely want to decide how to compile and manage binaries. To get out of the Go "box" one has to reverse engineer what the tools do and take control using undocumented features. Nice - not!

Its package mechanism is broken

Go's package rules are so confusing that when I finally got my package structure to compile I quickly wrote the derived rules down, so that I would not have to repeat the trial and error process. The rules, as I found them to be, are:

Package names can be anything.

Subdirectory names can be anything - as long as they are all under a directory that represents the project name - that is what must be referenced in an install command. But when you refer to a sub-package, you must prefix it with the sub-directory name.

When referring to a package in an import, prefix it with the project name, which must be the same as the main directory name that is immediately under the src directory.

Packages must be installed before they can be used by other packages - you cannot build multiple packages at once.

There must be a main.go file immediately under the project directory. It can be in package “main”, as can other files in other directories.

Are there other arrangements that work? No doubt - this is what I found to work. The rules are very poorly documented, and they might even be specific to the tool chain rather than the language - I am not sure, but it seems that way. And here is an interesting blog post about the golang tools.

It is hard to find answers to programming questions

This is partly because of the name, "go" - try googling "go" and see what you get. So you have to search for "golang" - but much of the information about Go is not indexed as "golang": if someone (like me) writes a blog post about Go, he or she will call it Go, not "golang", so the search engines will not surface it.

Another reason is that the creators of go don't seem to know that it is their responsibility to be online. Creators of important tools nowadays go online and answer questions about the language, and that results in a wealth of information that helps programmers to get answers quickly; with go, one is lucky to find answers.

The Up Side

One positive thing that I did find is that Go is very robust under refactoring. I performed major reorganizations of the code several times, and each time, once the new code compiled, it worked without a single error. This is testimony to the idea that type safety has value, and Go has very robust type safety. I would venture to say that for languages such as Go, unit testing is largely a waste of time: I found a full suite of behavioral tests to be sufficient, because refactoring never introduced a single error. This is very different from languages such as Ruby, where refactoring can cause a large number of errors because of the lack of type safety: for such languages, comprehensive unit tests are paramount - and that imposes a large cost on the flexibility of the code base, because of the effort required to maintain so many unit tests.

Summary

When I finish the test project that I have been working on, I am going to go back to other languages, or perhaps explore some new ones. Among natively compiled languages, the "rust" language intrigues me. I also think that C++, which I used a lot many years ago, deserves another chance, but with some discipline to use it in a way that produces compact and clear code - because C++ gives you the freedom to write horribly confusing and bloated code. I am not going to use Go for any new projects though - it has proved to be a terrible language for so many reasons.

Saturday, November 21, 2015

I smile when I hear younger programmers talk about Web services; but my smile is a smile of sadness - because what I am thinking is that they don't know what they are missing. They don't know just how broken things are.

A colleague of mine recently had to implement a Web app that accesses a set of REST services running on another Web server. Being a little stale in the current tools - because they change yearly - he had to learn a set of new frameworks. He got up to speed quickly, and things went pretty well until he tried to access the REST service directly from the Javascript side (bypassing his own server) - at that point he hit the "CORS" wall: the REST service did not set the "Access-Control-Allow-Origin" header.

He worked around that and things went fine until he tried to use a REST method that required some form parameters and also required a file attachment. He ended up wading through headers and the "multipart/form-data" versus "application/x-www-form-urlencoded" mess. It took him a week to figure out what the problem actually was and use his framework to format things the way that the REST service was expecting.

It doesn't have to be this way. Frankly, the foundation of the Web - HTTP - is a horrendous mess. From a computer science and software engineering perspective, it violates core principles of encapsulation, information hiding, and maintainability. HTTP mixes together directives for encoding with directives for control, and it is a forest of special cases and optional features that are defined in a never-ending sequence of add-on standards. The main challenge in using HTTP is that you cannot easily determine the things you don't know that matter for what you are doing. Case in point: my colleague did not even know about CORS until his Javascript request failed - and then he had to Google for the error responses, which contained references to CORS, and then search out what that was, and eventually look at headers (control information). Figuring out exactly what the server wanted was a matter of trial and error - the REST interface does not define a clear spec for what is required in terms of headers for the range of usage scenarios that are possible.

Many of the attacks that are possible in the Web are the result of the fact that browsers exchange application level information (HTML) that places control constructs side by side with rendering constructs - it is this fact that makes Javascript injection possible.

Yet it could have been like this: Imagine that one wants to send a request to a server, asking for data. Imagine that the request could be written as in a programming language, such as,

getCustomerAddress(customerName: string) : array of string

Of course, one would run this through a compiler to generate the code that performs the message formatting and byte level encoding - application level programmers should not have to think about those things.
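As a sketch of what such generated client code might look like, using the getCustomerAddress signature above - everything here is hypothetical, standing in for what an interface compiler would emit:

```go
package main

import "fmt"

// CustomerClient is a hypothetical generated client stub: the application
// programmer calls a typed function; marshaling, headers, and transport
// would all live below this layer, generated by the interface compiler.
type CustomerClient struct{ endpoint string }

func (c *CustomerClient) GetCustomerAddress(customerName string) ([]string, error) {
	// In real generated code this would marshal the request, send it to
	// c.endpoint, and unmarshal the reply. Stubbed here with canned data.
	return []string{"1 Main St", "Springfield"}, nil
}

func main() {
	client := &CustomerClient{endpoint: "https://example.com/rpc"} // hypothetical URL
	addr, err := client.GetCustomerAddress("Alice")
	fmt.Println(addr, err) // prints "[1 Main St Springfield] <nil>"
}
```

The point is what the application programmer sees: a typed call, checkable by a compiler, with no headers, encodings, or payload syntax in sight.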

Yet today, an application programmer has to get down into the details of the way the URL is constructed (the REST "endpoint"), the HTTP headers (of which there are many - and all defined in different RFCs!), the type of HTTP method to use, and data encodings - and the many attacks that are possible if one is not very careful about encodings!

The result is terrible productivity for Web app development - especially when someone learns a new framework, which is a frequent activity nowadays.

The problem traces back to the origin of the Internet, and the use of RFCs - essentially suggestions for standards. It appears that early RFCs did not give much thought to how the Internet would be used by programmers. From the beginning, all the terrible practices that I talk about were used. Even the concept of Web pages and hyperlinking - something that came about much later - is terribly conceived: the RFC for URLs talks about "unsafe" characters in URLs. Instead, it should have defined an API function for constructing an encoded URL - making it unnecessary for application programmers to worry about it. The behavior of that function could be defined in a separate spec - one that most programmers would never have to read. Information hiding. Encapsulation of function. Separation of control and data. The same is true for HTTP and all of the other myriad specs that IETF and W3C have pumped out - they all suffer from over-complexity and a failure to separate what tool programmers need to know versus what application programmers need to know.

Today's younger programmers do not know that it could be better, because they have not seen it better. I remember the Object Management Group's attempt to bring order to the task of distributed computing - and how all that progress got swept away by XML-based hacks created to get through firewalls by hiding remote calls in HTTP. Today, more and more layers get heaped on the bad foundation that we have - more headers, more frameworks, more XML-based standards, except that now we have JSON, which is almost as bad. (Why is JSON bad? Reason: you don't find out if your JSON is wrong until runtime). We really need a clean break - a typesafe statically verifiable messaging API standard, as an alternative to the HTTP/REST/XML/JSON tangle, and a standard set of API-defined functions built on top of the messaging layer.

Tuesday, November 11, 2014

This is for those who think that Agile is a recent evolutionary advance in software engineering. It is not. Before the 1990s, a great many - perhaps most? - software projects were executed in a non-waterfall way. Some were agile, some were not. In the 1980s I was fortunate to have been on many that were: projects with a servant leader, with full automated regression testing run daily, with test results displayed from a database, with a backlog of small demonstrable features, with co-location (individual offices side by side), with daily sharing of issues, with collaborative and evolutionary design, and with a sustainable pace. I can recall personally writing up to 1000 lines of tested C code in a day on my Sun Unix "pizzabox" workstation: those projects were highly productive - today's tools and methodologies do not exceed that productivity.

However, over time more and more large software projects came to be managed by administrative program managers and procurement managers who had never personally developed software, and they foolishly applied a procurement approach that is appropriate for commodities - but not for custom built software. This was motivated by a desire to tightly control costs and hold vendors accountable. Waterfall provided the perfect model for these projects: the up-front requirements could be done first and then serve as the basis for a fixed cost, fixed schedule "procurement" involving the implementation phases.

This was a horrible failure. Software people knew in the 1960s that this approach could not work.

So in the late 1990s a movement finally came together to push back on the trend of more and more waterfall projects, by returning to what had worked before: iterative development of demonstrable features by small teams, and a rejection of communication primarily by documents. This basic approach took many forms, as shown by the chart. And that is why I am against "prescriptive Agile" - that is, following a template or rule book (such as Scrum) for how to do Agile. There are many, many ways to do Agile, and the right way depends on the situation! And first and foremost, Agile is about thinking and applying contextual judgment - not "following a plan"!

And then you have young people come along, their software engineering experience dating no farther back than 1990, and they claim that Agile is a breakthrough and that the "prior waterfall approach" is wrong. Well, it was always wrong - people who actually wrote code always knew that waterfall was idiotic. There is nothing new there. And Agile is not new. So when an Agile newbie tells a seasoned developer that he or she should use Scrum, or is not doing Agile the right way, it demonstrates tremendous naiveté. People who developed software during the '70s and '80s, long before the Agile Manifesto, know the real Agile: they know what really matters and what makes a project agile (lowercase "a") and successful - regardless of which "ceremonies" you perform, regardless of which roles you have on a team, and so on. It turns out that most of those ceremonies don't matter: what matters most - by far - is the personalities, the leadership styles, and the knowledge.

This chart was developed by a colleague at a company that I worked at, Santeon. The information in the graphic was taken from an article by Craig Larman. Here is the article.

Thursday, November 6, 2014

Recently I wrote a performance testing tool in Ruby and I have been rewriting it in Java. The tool uses Cucumber, so I have decided to substitute JBehave, since JBehave is the predominant BDD tool in the Java space, and also because I tried to use the Java version of Cucumber but it is broken and incomplete. (Sigh - why not call it "beta"?)

So I first looked at the JBehave docs, and was irritated to discover that there are no code examples: you have to jump through hoops, such as running Etsy.com, in order to just see an example. I don't know what Etsy.com is and I don't want to know - I just want to see a friggin' code example. So I googled and found one - a good one - here.

Even better, the example gets right to the point and shows me how to run JBehave without having to use any other tools - most JBehave examples use JUnit, which I detest. I just want to run JBehave. Period. No complications. This is how you do it:

The file path ending in ".story" is from the example, and I wanted to find out the exact rules for what that path could be (the explanation of the example is not clear), so I went to the JBehave Javadocs, and this is what I found:

Are you kidding me??? - oh, I already said that.

I am used to Javadocs serving as a definitive specification for what a method does. In contrast, the JBehave methods have no header comments, and so the Javadoc methods have no specs. How is one supposed to know what each method's intended behavior is?

Am I supposed to go and find the unit tests and read them and infer what the intended behavioral rules are? Maybe if I had hours of spare time and that kind of perverse gearhead curiosity I would do that, but I just want to use the runStoriesAsPaths method. An alternative is to dig through examples and infer, but that is guesswork and needlessly time consuming.

Unfortunately, this is a trend today with open source tools: not commenting code. The method name gives me a hint about the method's intended behavior, but it does not fill in the gaps. For example, can a path be a directory? Is the path a feature file? What will happen if there are no paths provided - will an exception be thrown or will the method silently do nothing?

This is trash programming. Methods need human readable specifications. Agile is about keeping things lean, but zero is not lean - it is incompetent and lazy. A good programmer should always write a method description as part of the activity of writing a method: otherwise, you don't know what your intentions are: you are hacking, trying this and that until it does something that you want and then hurrying on to the next method. This is what I would expect a beginner to do - not an experienced programmer.

Yet so many tools today are like this. It used to be that if you used a new tool, you could rely on the documentation to tell you truthful things: if something did not work, you either did not understand the documentation or there was a software bug. Today, the documentation is often incomplete, or just plain wrong: it often tells you that you can do something, but in reality you have to do it in a certain way that is not documented. That is what I found to be the case with the Java plugin for Gradle. Recently I wrote a Java program that took me two hours to write and test (without JUnit or any other tools - just writing some quick test code), and then I spent a whole day trying to get the Gradle Java plugin to do what I wanted. That is not a productivity gain!

Tools that are fragile and undocumented are a disservice to us all. If you are going to write a tool, make sure that the parts that you write and make available work, and are documented, and work according to what the documentation says - and don't require a particular pattern of usage to work.

Saturday, October 25, 2014

Test driven development (TDD) is one of the sacred cows of certain segments of the agile community. The theory is that,

1. If you write tests before you write behavior, it will clarify your thinking and you will write better code.
2. The tests will expose the need to remove unnecessary coupling between methods, because coupling forces you to write "mocks", and that is painful.
3. When the code is done, it will have a full coverage test suite. To a large extent, that obviates the need for "testers" to write additional (functional) tests.
4. The tests define the behavior of the code, so a spec for the code's methods is not necessary.

Many people in the agile community have long felt that there was something wrong with the logic here. What about design? To design a feature, one should think holistically, and that means designing an entire aspect of a system at a time - not a feature at a time. Certainly, the design must be allowed to evolve, and should not address details before those details are actually understood, but thinking holistically is essential for good design. TDD forces you to focus on a feature at a time. Does the design end up being the equivalent of Frankenstein's monster, with pieces added on and on? Proponents of TDD say no, because each time you add a feature, you refactor - i.e., you rearrange the entire codebase to accommodate the new feature in an elegant and appropriate manner, as if you had designed the feature and all preceding features together.

That's a lot of rework though: every time you add a feature, you have to do all that refactoring. Does it slow you down, for marginal gains in quality? Well, that's the central question. It is a question of tradeoffs.

There is another question though: how people work. People work differently. In the sciences, there is an implicit division between the "theorists" and the "experimentalists". The theorists are people who spend their time with theory: to them, a "design" is something that completely defines a solution to a problem. The experimentalists, in contrast, spend their time trying things. They create experiments, and they see what happens. In the sciences, it turns out we need both: without both camps, science stalls.

TDD is fundamentally experimentalism. It is hacking: you write some code and see what happens. That's ok. That is a personality type. But not everyone thinks that way. For some people it is very unnatural. Some people need to think a problem through in its entirety, and map it out, before they write a line of code. For those people, TDD is a brain aneurysm. It is antithetical to how they think and who they are. Being forced to do it is like a ballet dancer being forced to sit at a desk. It is like an artist being forced to do accounting. It is futile.

That is not to say that a TDD experience cannot add positively to someone's expertise in programming. Doing some TDD can help you to think differently about coupling and about testing; but being forced to do it all the time, for all of your work - that's another thing entirely.

Doesn't the Agile Manifesto say, "Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done"?

I.e., don't force people to work a certain way. Let them decide what works best for them. Don't force TDD on someone who does not want to work that way.

But if everyone on a team does TDD, there is consistency, and that is good.

The argument is always, "If we all do TDD, then we can completely change our approach as a team: we don't need testers, we don't need to document our interfaces, and we will get better code as a team. So people who can't do TDD really don't fit on our team."

So if Donald Knuth applied to work on your team, you would say, "Sorry, you don't fit in"; because Donald Knuth doesn't do TDD.

Whatever happened to diversity of thought? Why has agile become so prescriptive?

Also, many of the arguments for TDD don't actually hold up. #1 above is true: TDD will help you to think through the design. But TDD prevents you from thinking holistically, so one could argue that it actually degrades the design, and constrains many people's ability to design complex things creatively. And that's a shame. That's a loss.

#2 about improving coupling is true, but one does not have to do TDD for that. Instead, one can write methods and then attempt to write unit tests for them. The exercise of writing the unit tests will force one to think through the coupling issues. One does not have to do this for every single method - something that TDD requires - one can merely do it for the methods where one suspects there might be coupling issues. That's a lot more efficient.
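Here is a sketch of that test-after approach, with hypothetical names. Suppose the original method called `System.currentTimeMillis()` directly; the attempt to write a unit test for it exposes the coupling and forces the clock behind a small interface, and a hand-rolled fake stands in for a mocking framework:

```java
// A tiny seam extracted because the test could not control the real clock.
interface Clock {
    long nowMillis();
}

class SessionChecker {
    private final Clock clock;

    SessionChecker(Clock clock) { this.clock = clock; }

    // A session is expired if more than timeoutMillis has elapsed
    // since it started.
    boolean isExpired(long startedAtMillis, long timeoutMillis) {
        return clock.nowMillis() - startedAtMillis > timeoutMillis;
    }
}

class SessionCheckerTest {
    public static void main(String[] args) {
        // A hand-rolled fake clock frozen at t = 10 seconds; no mocking
        // framework needed once the dependency is injected.
        Clock fixed = () -> 10_000L;
        SessionChecker checker = new SessionChecker(fixed);
        assert checker.isExpired(1_000L, 5_000L);   // 9s old, 5s timeout
        assert !checker.isExpired(8_000L, 5_000L);  // 2s old, 5s timeout
        System.out.println("ok");
    }
}
```

The coupling lesson is learned from writing one test against one suspect method, not from test-driving every method in the codebase.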

It can be argued that the enormous number of tests that TDD generates results in less agility - not more. Full coverage tests at an interface level provide plenty of protection against unintended consequences of code changes. For those who use type-safe languages, type safety is also very effective for guarding against unintended consequences during maintenance. One does not need a mountain of unit tests. Type safety is not about productivity: it is about maintainability, and it works.

#3 about code coverage is foolish. The fox is guarding the henhouse. One of the things that tests are supposed to check is that the programmer understands the requirements. If the programmer who writes the code also writes the tests, and if the programmer did not listen carefully to the Product Owner, then the programmer's misunderstanding will end up embedded in the tests. This is the test independence issue. Also, functional testing is but one aspect of testing, so we still need test programmers.

One response to the issue about test independence is that acceptance tests will ensure that the code does what the Product Owner wants it to do. But the contradiction there is that someone must write the code that implements the acceptance criteria: who is that? If it is the person who wrote the feature code, then the tests themselves are suspect, because there is a lot of interpretation that goes on between a test condition and the implementation. For example, "When the user enters their name, Then the system checks that the user is authorized to perform the action". What does that mean? The Product Owner might think that the programmer knows what "authorized" means in that context, but if there is a misunderstanding, then the test can be wrong and no one will know - until a bug shows up in production. Having separate people - who work independently and who both have equal access to the Product Owner - write the code and the test is crucial.

I saved the best for last. #4.

Let me say this clearly.

Tests. Do. Not. Define. Behavior.

And,

Tests. Are. A. Horrible. Substitute. For. An. Interface. Spec.

Tests do not define behavior because (1) the test might be wrong, and (2) the test specifies what is expected to happen in a particular instance. In other words, tests do not express the conceptual intention. When people look up a method to find out what it does, they want to learn the conceptual intention, because that conveys the knowledge about the method's behavior most quickly and succinctly, in a way that is easiest to incorporate into one's thinking. If one has to read through tests and infer - reverse engineer - what a method does, it can be time wasting and confusing.
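To see the difference, contrast a hypothetical method spec with the tests for it. The javadoc states the conceptual intention in two sentences; the tests, though correct, only pin down particular instances that a reader must reverse engineer into the general rule:

```java
class Text {
    /**
     * Returns s with leading and trailing whitespace removed and every
     * internal run of whitespace collapsed to a single space.
     * Never returns null; an all-whitespace input yields "".
     */
    static String normalize(String s) {
        return s.trim().replaceAll("\\s+", " ");
    }

    public static void main(String[] args) {
        // These tests are accurate, but they describe instances, not
        // intention: nothing here says "every run of whitespace", and a
        // reader cannot tell which aspects of the examples are essential.
        assert Text.normalize("  a  b ").equals("a b");
        assert Text.normalize("   ").equals("");
        System.out.println("ok");
    }
}
```

Both artifacts are worth having; the point is that the tests are no substitute for the spec.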

The argument that one gets from the TDD community is that method descriptions can be wrong. Well, tests can be incomplete, leading to an incorrect understanding of a method's intended behavior. There is no silver bullet for keeping these artifacts complete and accurate, and that applies to the tests as well as the code comments. It is a matter of discipline. But a method spec has a much better chance of staying accurate, because people read it frequently (in the form of javadocs or ruby docs), and if it is incomplete or wrong, people will notice. A missing unit test goes unnoticed.

Conclusion

If people want to do TDD, it is right for them and it makes them productive, so let them do it. But don't force everyone else to do it!