What do you mean by less agile? Do you mean less productive? Agile is not about being more productive (in the sense of lines of code per week) but more effective: you may slow down the production process by adding extra activities that have to be carried out more often, but you reduce the risk of producing something the customer does not want by detecting problems (ill-specified requirements) earlier.
–
GiorgioFeb 23 '14 at 13:08

Less agile in the sense that you will be less inclined to change a feature when you've spent all that time polishing it to be ready for shipping. This means you are less adaptive to change - less agile.
–
EugeneFeb 23 '14 at 13:17


If you think skipping or postponing the steps necessary to the "definition of done" lowers costs, you are buying into a false economy.
–
CodeGnomeFeb 23 '14 at 17:31

@Eugene You really should read the Agile Manifesto. The things you're describing here and in some of the comments to the answers below are not Agile, just one person's attempt to formalize something that is inherently informal. Agile is about interacting with humans instead of following rigid processes, and about ongoing, frequent collaboration with product owners. You should never follow a process, whatever its name may be, that does not deliver the best possible product in the shortest time possible.
–
David SchwartzFeb 24 '14 at 2:38

4 Answers

On the contrary, creating a potentially shippable product is the very definition of agile, both the lowercase and uppercase varieties.

Working in cycles or iterations but not producing anything shippable at the end of it is essentially what most people in the business call water-scrum-fall. It's a kind of cargo cult, following the motions and rituals of some Agile methodology without actually being, well, agile.

If "agile" does not mean "ability to ship at (almost) any time", then what good is it? If you say that your team is agile, you are communicating to the business that you are able to respond to changes to requirements or to the business environment in general. But if you can't actually ship anything, then how are you "responding" to that change? If you can't ship, then all you are really telling the business is "we won't complain too loudly when you change your mind", which is a lie of omission when they think they're hearing "we can deliver whatever you need, whenever you need it".

I can't find the link anymore, but Microsoft recently did a case study on this. There was a team that converted to some sort of cycle/increment-based Agile methodology and relied on frequent product demos to show progress, but when management said "OK, we're ready, release it" it turned out that the team wasn't actually "done" with any of the work they'd demoed. There were hundreds of critical bugs that had to be fixed, data migration that needed to be performed, validation and security requirements, and so on and so forth. And when all was said and done, it took something like 6 more months to actually get it ready to ship.

My team had a similar experience last year, during which I was the lead and learned my lesson pretty well. Oh, I was obsessive about quality, and so we really were done according to our definition of done. But, as it turned out, a ton of mostly-unpredictable faults happened when we finally hit the staging environment, and it took almost two more months before we were able to go from "done" to "shipped". The business was fairly understanding, considering the size and importance of the product, but it was stressful as hell for those of us essentially getting no "real" work done for two months, and during those two months the business could potentially have been bringing in revenue on a perhaps less complete, less polished, but still shippable product.

The idea of bringing pain forward is at the heart of Continuous Integration. CI says that if integration is hard and expensive, you should do it today, not tomorrow or next week, because the total theoretical amount of effort required to integrate is the same whether you do it as a series of 10-minute interruptions or one 10-day death march. Not only that, but in practice, it actually costs more to integrate later, because individual contributors/teams are able to diverge from each other much more (intentionally or unintentionally) between monthly integrations than they would be between daily integrations, and the higher number of risky or potentially destructive changes means a higher percentage of time spent on just debugging. So, smart teams use CI, and not just in the sense of having a build server but in the sense of forcing everyone to either commit directly to the mainline or integrate with it every day, and deploy to a shared environment every day (or every hour, or whatever).
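The cost asymmetry described above can be sketched with a toy model. Everything here is invented for illustration (the quadratic cost function and the numbers are assumptions, not data from the answer): the point is only that if the pain of one integration grows faster than linearly with divergence, many small integrations beat one big one.

```python
# Toy model (invented numbers): the cost of a single integration grows
# superlinearly with how far the branches have diverged, so frequent
# small integrations are cheaper in total than one big-bang merge.

def integration_cost(days_diverged: int) -> int:
    # Assumption: each day of divergence adds conflicts, and conflicts
    # interact, so resolving them costs roughly days^2 "units of pain".
    return days_diverged ** 2

def total_cost(total_days: int, days_between_integrations: int) -> int:
    # Total pain over a development period, integrating at a fixed cadence.
    rounds = total_days // days_between_integrations
    return rounds * integration_cost(days_between_integrations)

if __name__ == "__main__":
    # 30 working days of development, integrated daily vs. once at the end.
    print(total_cost(30, 1))   # 30 rounds * 1^2  = 30 units
    print(total_cost(30, 30))  # 1 round  * 30^2 = 900 units
```

Under these (made-up) assumptions, daily integration costs 30 units of pain versus 900 for a single monthly merge, even though the "theoretical" amount of work integrated is identical.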

But CI is a beginning, not an end. CI is for the dev team; it doesn't help the business at all. Just because you got something working in CI doesn't mean it's shippable. And all of the problems with delayed integration that CI aims to solve have technical or business counterparts when production releases are delayed for too long. There's a risk that the dev team has diverged very far from the product requirements, or that the CI infrastructure/environment has diverged very far from production. The only way to minimize that risk is to ship often, which is what "continuous delivery" practices aim to solve, by extending the CI pipeline to include end-to-end test automation, infrastructure automation, and release automation.
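At its core, the delivery pipeline described above is just an ordered series of gates a build must pass before it counts as shippable. A minimal sketch follows; the stage names and their order are illustrative assumptions, not a prescription from any particular tool:

```python
# Minimal sketch of a continuous-delivery pipeline: each stage is a
# pass/fail gate, and a build is only a release candidate once every
# gate has passed. Stage names here are invented for illustration.

from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> bool:
    for name, gate in stages:
        if not gate():
            print(f"pipeline stopped at: {name}")
            return False
        print(f"passed: {name}")
    return True  # only now is the build potentially shippable

if __name__ == "__main__":
    pipeline: List[Stage] = [
        ("compile + unit tests", lambda: True),
        ("end-to-end tests",     lambda: True),
        ("provision staging",    lambda: True),
        ("deploy + smoke tests", lambda: True),
    ]
    print("shippable" if run_pipeline(pipeline) else "not shippable")
```

The design point is that the pipeline fails fast: the first gate that fails stops the run, so a build that hasn't cleared end-to-end tests never reaches the deployment stages.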

You don't have to automate everything; plenty of teams scrape by with manual QA and deployments. But then they can manage maybe bi-monthly releases, or monthly if they're lucky. That's nowhere near as agile as the best companies in the business.

When I hear the word agile, I think of companies like Amazon, Facebook, or Google. It's easy to be agile when your product is only 500 lines of code and you don't have to do much testing, but the real test is when you're managing sprawling products with millions of lines of code and hundreds or thousands of developers. Facebook releases twice a day. Google sometimes releases the same product a few times in the same week. Amazon is on record as releasing something every 11.6 seconds (2011 statistic), although I think Amazon is a bit of a special case because they aren't actually releasing the same thing each time, their architecture is massively service-oriented and distributed.

Simply put, if you can't basically push a button to release shortly after the end of an iteration, then you aren't agile. Being able to respond to changes in or from the business means that you can ship. The business doesn't care that you can start working on something else at the beginning of each iteration, they care if you can deliver what you worked on at the end of it.

N.B. Things like "writing tests" and "polishing the UI design" should happen during the iteration, not at the end of it! Most agile teams have a "definition of done" and those things are very, very high on the list of priorities. I don't even consider that part of shipping, that's fundamental product work that is very dangerous to defer. You're not even close to being agile if those things aren't done at the end of a sprint/iteration/whatever. If you're having trouble completing those things on time, you may need to revisit your team structure - it's a common problem in manufacturing-style teams where developers, designers, testers, and infrastructure are all treated as different teams and separated by literal or metaphorical walls. This is a terrible model but some companies still use it because there's too much bureaucracy or simply too much stubbornness to reorganize.

In a non-agile environment you can develop the feature without fixing all the bugs, doing thorough testing, or updating the user manual and other documentation. You can still show the feature to the user and get feedback.

I think you have a big misconception about the term "agile". Agile in its core means that you adapt your development process to your needs - instead of blindly following some fixed written process. Using your brain is permitted, especially in an agile process. If you have a requirement for which you know that you first need a prototype to discuss it with some of your users (but without writing complete docs or full testing), then develop one. Agile or not, you don't want your users to use such an unfinished feature in the "production version" of the product, so take measures to prevent that.
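One concrete measure (my illustration, not something the answer prescribes) is to hide the unfinished feature behind a feature flag, so the code can ship dark in production while the prototype is only visible to reviewers. The flag name and environment-variable scheme below are hypothetical:

```python
# Illustrative sketch: guard an unfinished prototype feature behind a
# flag so production users never see it. A deliberately simple flag
# store is used here (an environment variable); real systems often use
# a config service instead.

import os

def feature_enabled(name: str) -> bool:
    # Enabled when e.g. FEATURE_NEW_REPORT=1 is set in the environment.
    return os.environ.get(f"FEATURE_{name.upper()}") == "1"

def render_dashboard() -> str:
    parts = ["core dashboard"]
    if feature_enabled("new_report"):  # the unfinished prototype
        parts.append("new report (prototype)")
    return ", ".join(parts)
```

With the flag unset, production users only see the core dashboard; setting `FEATURE_NEW_REPORT=1` for a demo environment exposes the prototype for feedback without a separate build.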

Agile also means that you implement small feature slices, for which you believe you have understood the requirements, to bring them into production quickly. In my experience, you are likely to get more and better feedback from your user base once they are using the thing in production. When you have discussed that feature beforehand using a prototype, it is unlikely that you will have to rewrite the documentation and the tests completely, but it is likely that you will have to make some corrections to those documents and that code.

I fail to see any point in there that is specific to agile. Would you leave any of them out in other methodologies, or do you think you could, e.g., save money by updating the user manual for several different changes all at once?

Also, the later points in that list typically cause rework in the earlier points (the failure of the waterfall model). From my experience (and I believe this is backed by the standard literature on software design, though I'd need to go looking for references), going back to a step considerably later is more expensive than going back soon after you thought you were done with it. Which means that if polishing the user interface reveals a design flaw and a way to make the feature much more user-friendly, you are more likely to fix it if the programmers declared their part done three days ago than if they've been working on other projects for two months.

It all depends on how often and by whom you get feedback. If you are thinking about change requests from the VP who only looks at your software twice a year, then sure, changing finished features is more expensive than changing half-finished ones where you held off for exactly that occasion. Agile requires frequent customer feedback for that reason.

I agree that it's less expensive to do all the work related to a feature right when it is developed rather than half a year later. I don't necessarily agree that in order to show the user a feature you need it 100% completed (including the user manual and such). Most of the time you'll get decent feedback even if the feature has some bugs and isn't very polished. Regarding architectural flaws: yes, it's better to discover them as soon as possible, and producing a potentially shippable product helps with that. But my original point still stands: changing a 100% completed feature is harder.
–
EugeneFeb 23 '14 at 13:14


Who says you should have the feature 100% done before showing it? Agile means you show whatever you have, as often as possible, certainly many times a week. And if it’s just an idea on a napkin, that’s a great place to get feedback on. Exactly because it’s hard to change anything 100% completed – that is one of the things agile is trying to address, not something it freshly introduces. If you could highlight in your question why you think that is a problem in agile development in particular (maybe contrasting why it is less so otherwise), that might help.
–
Christopher CreutzigFeb 23 '14 at 13:27

@Christopher Creutzig: I think Eugene is referring to all the activities that have to be performed regularly no matter how complete a feature is: unit tests must all pass, the feature (even an incomplete one) must have been tested (and any bugs entered in the bug tracking system), a sprint presentation has to be prepared, and so on. You have to go through this cycle more often, which causes more overhead.
–
GiorgioFeb 23 '14 at 13:41

@Giorgio That may or may not be true, depending on what you measure against and what your environment is, but in either case, I don’t think he is talking about it at all, since he explicitly and predominantly talks about 100% completion.
–
Christopher CreutzigFeb 23 '14 at 14:21

@ChristopherCreutzig: All other things being equal, performing an activity always takes more time than not performing it. Regarding the 100% completion: maybe you are right, though I admit the term "potentially shippable" would actually suggest it: who would want to ship an unfinished feature?
–
GiorgioFeb 23 '14 at 14:28

If you define "potentially shippable product" as a product that can be taken into production and shipped to the end-user without further work, then there can be situations where producing a potentially shippable product each sprint creates an insurmountable burden.

To use my current situation as an example (in the medical field, so strongly regulated): for an official release, we need about two weeks of manual (regression) testing on a single stable version. This testing cannot be automated, because it involves measurements on a human body. In this case, doing a full regression test each sprint is not effective, as it would consume the majority of the sprint.

We are not doing agile, but I am considering proposing it, with the following 'adaptations' from the "textbook implementation":

A "potentially shippable product" means that the product is ready for a (final) formal verification test, where the developers have high confidence that no major issues will be found in either the product or the test cases.

Documentation is kept up-to-date, but not officially signed off for release until the decision falls to actually ship the product. The same goes for all the regulatory paperwork.

When the decision to ship comes, there should be no need to perform any rework on either UI, code or documentation.

@Aaronaught: For the foreseeable future, yes. It would be a multi-man-year project in itself to develop a proper programmable simulation of the human body and prove (to FDA standards) that it works accurately. In our case, most bugs are found during the informal testing that gets done on the daily build. The formal two-week regression test is only done when we are confident that we can actually release that version.
–
Bart van Ingen SchenauFeb 23 '14 at 17:45

Then you're basically in line with where HP was. Their technical team did a 1-year embargo on development so that they could redo their architecture and delivery process in order to automate nearly everything. Don't ask me how they got the executive team to okay it; I think they were truly desperate. Please don't interpret this as an implication that I think you're doing anything wrong, I'm just saying, "never say never".
–
AaronaughtFeb 23 '14 at 17:52

Seems fine, your definition of done is that you've done everything you can to get the product to your regression testing team. That's perfectly valid.
–
AndyFeb 23 '14 at 23:42