
When people hear about the many new mobile services that we have introduced, and are continuing to introduce very rapidly, they ask “so are you doing agile development?”. At one level, we are building a platform that allows our mobile services to be built very quickly and that facilitates easy iteration and development of features for those services. So we certainly want to facilitate “agile development” of mobile services, with a lowercase a. As to whether we develop our core platform and tools with the Agile Development model such as Scrum, the answer is probably no, although many of the processes we follow (daily brief status meetings, continuous builds, short release cycles, primacy of the bug database) may seem to have parallels there.

That said, here are my “top ten issues with Agile Development and Scrum”, especially as applied to the development of services. These points came out of a talk I gave today at the New Software Industry conference sponsored by Carnegie Mellon and UC Berkeley.

bias against upfront analysis in conflict with service-oriented development

Agile advocates wax poetic about the impossibility of determining requirements ahead of time. They state that it’s just impossible to know a system’s inputs and outputs, and cite reams of “industrial process control science” in support. But service-oriented development applied to a large system hinges on doing just this: determining the major services required by a system, and then designing contracts for their implementation. That investment in a small amount of upfront analysis (scoffed at by agile advocates) pays off by allowing multiple teams to build services independently, as long as they comply with the designed contracts.

flaws in the team scaling model

Agile advocates recommend a “scrum of scrums”, with scrum leaders represented, to manage cross-team dependencies. They need this precisely because there is no emphasis on service contracts, so problems must get hashed out in a broader, freewheeling way. With service contracts, issues are far more likely to get solved bilaterally between the service provider and the service consumer.

“old wine in new bottles”

Many of the better ideas in agile have been practiced for a long time in the more successful development organizations. Such ideas include running projects day to day by driving all work from the bug database (the Product Backlog). At Microsoft and Google and many other companies this is almost a religion. Small teams (7±2). Daily short status meetings run round-robin. Continuous builds. Small code reviews before checkins (almost XP). Short releases (less than three months). Having been fortunate enough to be around some great engineering leaders, I’ve been doing projects this way for over 15 years. So why is it a problem to call this agile or Scrum and give these best-practice artifacts new names? Teams can lose some of the process maturity and nuance they’ve built around these techniques.

wholesale process change or “incrementalism” and “team ownership of process”

As mentioned, and acknowledged by agile gurus (such as Schwaber and Beedle), in the better shops most of these best process techniques are already being applied. However, there are almost always improvements that can be made in how a team approaches the development cycle and day-to-day work. I have found that building on what already works and changing processes incrementally (i.e. not every change in one release cycle, which I also like to be short) works best. I’ve also found it valuable to maintain an open and questioning attitude toward the next best incremental process change, involving all team members in deciding on changes. A team that feels it has made its own decision about what the process should be has much better buy-in, compliance and consistent, enthusiastic participation in the process than one told “ok folks, now we’re doing Scrum”. Another way of putting it: any team or organization considering a new methodology (or “anti-methodology”) should first ask “what problem in the existing process are you trying to solve?”. Then go solve that problem directly rather than starting from scratch.

arbitrary timeboxes

Almost all of the timelines and timeboxes for specific events and durations in Scrum are arbitrary, from the daily status meetings to every other cycle, meeting or artifact. It doesn’t seem very agile to ignore the exigencies of a specific project. For example, the one-month “Sprints” are a very arbitrary timeline, and it is often difficult to build risky and time-consuming blocks of features within those timeboxes. Agile advocates acknowledge this and cite examples of releases with two Sprints: a “feature sprint” and a stabilization sprint. This sounds not just arbitrary but pretty mushy and vague as well. More specifically…

one month Sprints

I’ve already mentioned my generic issues with the arbitrariness of one-month Sprints. Then there is the practicability of such timeframes. Anecdotally (after working on dozens of software projects in my career), my experience is that this can work for “end applications” or simpler websites, even from “version 1”. For early releases of platform, infrastructure or tools code, it is not quite feasible. I do accept that in either “app” or “platform” code, several releases into a product’s lifetime, one-month Sprints are feasible. There is still the question of the arbitrariness and optimality of such a timeframe (there is a high overhead of QA and integration effort for such a short cycle). Again, the question is “what problem are you trying to solve?”. Given that stabilization overhead decreases as a percentage of effort in a two-month release, why is one month better? It may also be that “the business” (operations, marketing, sales and of course the users themselves) is not ready to make maximum use of a monthly release.

Additionally, the “way out” is to say that most projects are much larger than one-month Sprints and that a project is inherently “a bunch of Sprints”. There are already more mature existing best practices for “themed incremental milestones”, and most successful software-focused organizations have evolved more subtlety and insight about how to handle such complexity than is evident in most agile screeds.

arbitrary guidelines are presented with pseudoscientific justifications of dubious certitude

Most of the agile practices I’ve seen recommended have a core of common sense. My big issue is that the best organizations are already working at a level of sophistication beyond those approximations. And those approximations are dressed in questionable garb borrowed from other sciences. For example, a team size of 7 doesn’t have much to do with the brain’s ability to handle 7±2 objects [Schwaber]. In some prominent agile writings, other parallels are drawn to process control, chaos theory and physics. The problem with these dubious connections is that they are held up as evidence for arbitrary decisions, rather than grounding those decisions in evidence from development projects themselves.

artificial straw man of traditional best practices

The process and foibles of the “waterfall model” (does anyone really espouse such a completely serial model anymore, or use that term to describe how they do things?), as characterized in the Agile Manifesto and in books such as Schwaber and Beedle’s Agile Software Development with Scrum, don’t resemble anything conducted in the dozens of projects I’ve participated in. More typical is that even in the early 90s I worked on teams where product managers prepared requirements, program managers and dev leads wrote specs, devs prototyped code, and QA leads wrote test plans simultaneously, early in a project. The desirability of such an approach, which was anything but “waterfall”, was recognized by most major software companies long before anyone used the term “agile development”. Microsoft codified this in written form as the Microsoft Solutions Framework, but when I was there it was just “how stuff got done”. Books such as “Rapid Development” summarized what had already become industry-wide recognized best practices, but the “words for things” remained the same as they were in the eighties. Much of Agile seems to be an attempt to put new words around widely accepted best practices, and arbitrary guidelines (one-month sprints, 7-person teams) around reasonable defaults that most development shops were already more nuanced about.

optimal software process is an evolution

The terminology, milestones and deliverables in “conventional” ideas about the software process have evolved over many years in evolutionary (not revolutionary) response to changing technology and customer expectations. They can and should continue to evolve. For example, there may be deliverables that can be abbreviated or intentionally skipped (e.g. a requirements phase may be folded into a smaller specification). But it doesn’t hurt to make that choice consciously.

modern software tools can make “old artifacts” lightweight and even magnify their original value

Even more powerful is using modern tools to make such steps in the process much lighter weight, collaborative and distributed. As an example, take the case of two bogeymen of “agile development”: the ideas that upfront requirements and specifications are too time-consuming to be worthwhile. In our shop, feature requests collect in the bug database (e.g. Bugzilla or Trac) for a subsequent release (similar to a Scrum Product Backlog). Related sets of features are written up in narrative form on a requirements wiki page, often initially by a product manager (this is a chance to include use cases when and where helpful). But they are commented on and revised (with change-control history, of course) by many interested team members, including the likely assigned developers and QA.

After this brief in-person requirements review, the feature items are assigned to a responsible engineer for spec and implementation (along with other related features). The dev writes a few paragraphs about the feature set on a specs wiki page, which again invites comments and revisions from interested parties such as PMs, other devs or QA. As with the requirements wiki page, the value of the spec page is to allow related features to be discussed, analyzed and described in toto. Just how much specificity is required or optimal is determined by the dev. Often much of the content from the specs page can be leveraged later in a user manual, if one is required; more often, the specs page is all that is needed by anyone who has to use or maintain the software. When the spec is complete, the dev holds a timeboxed spec review with interested parties and, if there is consensus on the approach, proceeds with implementation. It’s difficult to find the wasted overhead in what I’ve described, and tools such as wikis can make these steps introduce very little overhead or slowdown.

In our shop we’ve automated the linkages between the wiki and the bug database to make this even easier.
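As a sketch of what one such linkage might look like (the `#1234` reference convention and tracker URL below are illustrative assumptions on my part, not an actual Bugzilla or Trac API), a small filter can rewrite bug references in wiki markup into links back to the tracker:

```ruby
# Hypothetical sketch: rewrite "#1234"-style bug references in wiki
# text into links pointing at the bug tracker's ticket pages.
# TRACKER_URL and the "#NNNN" convention are illustrative assumptions.
TRACKER_URL = "https://tracker.example.com/ticket".freeze

def link_bug_references(wiki_text)
  wiki_text.gsub(/#(\d+)/) do
    id = Regexp.last_match(1)
    "[##{id}](#{TRACKER_URL}/#{id})"
  end
end

spec_page = "Implements the search box from #1234 and fixes #987."
puts link_bug_references(spec_page)
```

A real integration would also go the other way (the tracker linking to the spec page for a ticket), but even a one-directional rewrite like this removes most of the friction of cross-referencing.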

There’s nothing magic about this approach. The overall point is that these “traditional artifacts” of requirements and specifications are easier to produce with modern tools, and their value is, I think, even higher given the ability of more stakeholders to contribute to and benefit from the process.

In summary, on balance, in an existing high-performing team I prefer to see familiar processes and nomenclature iteratively refined to meet the needs of a particular business, product or team, rather than starting afresh with a new methodology (even one that claims not to be a methodology) with a fresh set of names.

I’ve spent a lot of my career in the web services world (before it was even called web services). So my colleagues often ask me, “you like Rails?! how is that possible? the Rails community hates SOAP-based web services”. I’ve often wondered about this myself, specifically at the seemingly religious antipathy of the Rails community (including the illustrious DHH himself) to the “WS-Deathstar”.

So (primarily for people who are confused by a web services zealot’s embrace of Rails) here’s my take on the matter: REST is great for many problems and applications. The first-class support that DHH is providing for REST in the Rails core is very useful. I love the scaffold_resource generator and what it provides (as I’ve mentioned earlier here). I also like the approach taken to providing it: the idea of single controllers that respond to different clients (HTML, REST and others) with different content. I do have reservations about the need to put wordy respond_to clauses in each controller action (I’ve proposed what I think is a simpler way here). Anyway, automatic support for REST is a good thing. REST is a great way to build simple point-to-point distributed app-to-app connectivity. And I agree that, for someone who wants to be aware of the innards of the code they are working with, it is simpler. I also agree that there are scenarios where a REST interface just makes it easier for multiple arbitrary clients on more diverse platforms and languages (there is perhaps still a small chance that it may be difficult to integrate with a SOAP client stack on a device, but there’s basically no chance that it will be hard to invoke a REST interface from another platform).
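To make the “one controller, many representations” idea concrete, here is a framework-free Ruby sketch of it; the Responder class and method names are my simplification, not actual Rails internals (in real Rails, respond_to infers the format from the request rather than taking it as an argument):

```ruby
require 'json'

# Minimal stand-in for a model object an action might serve.
Post = Struct.new(:id, :title) do
  def to_xml
    "<post><id>#{id}</id><title>#{title}</title></post>"
  end
end

# A tiny Responder mimicking the spirit (not the API) of Rails'
# respond_to: each format registers a block, and only the block for
# the client's requested format is evaluated.
class Responder
  attr_reader :result

  def initialize(requested_format)
    @requested = requested_format
  end

  def respond(format)
    @result = yield if format == @requested
  end
end

# One "controller action" that can answer HTML, XML and JSON clients.
def show(post, format)
  responder = Responder.new(format)
  responder.respond(:html) { "<h1>#{post.title}</h1>" }
  responder.respond(:xml)  { post.to_xml }
  responder.respond(:json) { { id: post.id, title: post.title }.to_json }
  responder.result
end

post = Post.new(1, "Hello")
puts show(post, :xml)   # <post><id>1</id><title>Hello</title></post>
```

The wordiness I complain about above is visible even in this toy: every action carries one block per format, which is exactly the boilerplate a more declarative mechanism could remove.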

But does that mean that SOAP is bad? In the simple app-to-app connectivity scenario, my opinion is that it basically doesn’t matter. In the vast majority of languages and platforms, the work of consuming a SOAP-based web service is just as easy as consuming a REST service. But I agree that there’s little tangible advantage to using SOAP if all you ever need is a distributed method call.
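As a small illustration of how lightweight the REST side of that call is, here is a sketch using Ruby’s standard Net::HTTP; the host and resource path are placeholders, and the request is only built, not sent to a live service:

```ruby
require 'net/http'
require 'uri'

# Invoking a REST interface is just an HTTP request. The host and
# resource naming convention here are placeholder assumptions; we
# build the GET rather than sending it to a live service.
def build_rest_get(base_url, resource, id, format = "xml")
  uri = URI("#{base_url}/#{resource}/#{id}.#{format}")
  Net::HTTP::Get.new(uri)
end

request = build_rest_get("http://example.com", "posts", 42)
puts request.path   # /posts/42.xml
```

Actually dispatching it is one more line (`Net::HTTP.get_response(uri)`); a SOAP client stack does roughly the same amount of work for you once the stub is generated, which is why for a plain distributed method call the choice matters so little.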

The value of the SOAP-based WS-Deathstar comes when more complex connectivity is required: multihop message routing, guaranteed delivery, publish-subscribe one-to-many integration, more advanced security in the plumbing than https (the latter probably only necessary when you’re routing messages). If this is never going to be necessary (probably true for the typical consumer-facing website), then the whole SOAP vs. REST debate is moot, and going with the default easier path of REST doesn’t hurt one whit. For more advanced distributed connectivity problems (endemic to many “enterprise applications” that are seemingly anathema to the Rails community), a robust SOAP stack with extensive WS-Deathstar support would be useful for developers building complex connectivity. And yes, I mean a whole bunch of those nasty evil WS-* specs, including those not yet widely implemented. My top requests include WS-Addressing, WS-Eventing and full WS-Security. Doing these would also obviate the need to provide other APIs for such services as publish/subscribe message routing. Given Ruby’s stunning productivity advantages, I think any and all of these would be trivial to implement. I remember implementing a very early WS-Eventing client prototype in C# in about a page of code. It would be even easier in Ruby (if there’s demand for it, I’d consider rewriting it, if only to demonstrate that a seemingly complex spec can in fact be made easy to use).
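To give a flavor of how small such a client really is, here is a rough sketch of a WS-Eventing Subscribe request assembled as a plain string, without a SOAP stack; the namespaces follow my recollection of the 2004/08 member-submission specs, and the sink URL is a placeholder:

```ruby
# Rough sketch of a WS-Eventing Subscribe envelope, assembled as a
# plain string rather than via a SOAP stack. Namespace URIs are from
# the 2004/08 member submissions; the sink address is a placeholder.
def subscribe_envelope(notify_to)
  <<~XML
    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
                xmlns:wse="http://schemas.xmlsoap.org/ws/2004/08/eventing">
      <s:Header>
        <wsa:Action>http://schemas.xmlsoap.org/ws/2004/08/eventing/Subscribe</wsa:Action>
      </s:Header>
      <s:Body>
        <wse:Subscribe>
          <wse:Delivery>
            <wse:NotifyTo>
              <wsa:Address>#{notify_to}</wsa:Address>
            </wse:NotifyTo>
          </wse:Delivery>
        </wse:Subscribe>
      </s:Body>
    </s:Envelope>
  XML
end

puts subscribe_envelope("http://example.com/event-sink")
```

A full client would POST this to the event source and parse the SubscribeResponse (and handle Renew and Unsubscribe), but the message itself is tiny, which is the point: the perceived complexity is mostly in the spec prose, not in what goes over the wire.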

Are the WS-* specs overly complex? Yes, I think many of them are. There are plenty of “design by committee” artifacts in there. But for the most part such unnecessary complexity can be hidden from the programmer using them. A good API with reasonable default behavior (convention over configuration) covers a thousand design sins, and the layered nature of the specs means that you can pick and choose which facilities help you. Embracing the Deathstar specs would make Ruby a more powerful tool for a wider set of applications. The availability of Rails plugins (or just Ruby libraries) for these capabilities would make it more likely that Ruby gets used for a wider variety of programming tasks, which can only help to grow and advance the Ruby and Rails community.