The presentation started out as an intro to agile development with the usual description of TDD, acceptance and regression tests, and explanations about how agile produces high quality code. Scott talked about how developers and stakeholders together do testing within each iteration, and how, since each iteration is supposed to end in working, quality code, there is less and less need for a completely separate testing org that gets handed a test plan and some software and left to its own devices. Scott even predicted that testing groups in large companies will be reduced to 10% of their original sizes thanks to agile development.

Is this true, or even desirable? QA is often the last bulkhead protecting against lazy developers or poor requirements, and often helps define requirements and tests simply by using the application "from the outside in," instead of testing code itself. Is Scott recommending something that would be an IT shop's equivalent to suicide?

My experience when using agile development methods reflects Scott's - you don't need the large monolithic testing organization to which you throw the system over the wall just prior to release.

What you do need, however, is intimate involvement by QA people from the start of the project to the end. This point seems to be missed in Floyd's post, although perhaps Scott didn't communicate that very well. When he talks about reducing the QA staff size to 10%, Scott is talking about organizations that don't use any form of automated unit testing by the developers, and do all of the acceptance testing at the end of the development cycle.

Agile methods stipulate that the developers should be writing and maintaining comprehensive unit test suites that can be run at any time, and that acceptance testing is performed with each development iteration.
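As a concrete illustration of such an always-runnable suite: a minimal sketch using Python's stdlib unittest (the thread's JUnit equivalents translate directly). The PriceCalculator class is hypothetical, standing in for any small piece of production code a developer would cover.

```python
import unittest

# Hypothetical class under test; it stands in for any small piece of
# production code that a developer would cover with programmer tests.
class PriceCalculator:
    def total(self, unit_price, quantity):
        if quantity < 0:
            raise ValueError("quantity must be non-negative")
        return unit_price * quantity

class PriceCalculatorTest(unittest.TestCase):
    def test_total_multiplies_price_by_quantity(self):
        self.assertEqual(PriceCalculator().total(10, 3), 30)

    def test_zero_quantity_costs_nothing(self):
        self.assertEqual(PriceCalculator().total(10, 0), 0)

    def test_negative_quantity_is_rejected(self):
        with self.assertRaises(ValueError):
            PriceCalculator().total(10, -1)

# The whole suite can be run at any time, in well under a second.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceCalculatorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```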

I don't believe that QA will lose its importance in an agile world at all, quite the opposite in fact. What I do see (and have already seen in practice) is that they are involved at the beginning of development, and continuously thereafter. QA effectively works with the Customer to validate that a system is doing what it's supposed to from a business perspective, rather than finding system bugs that should have been caught by the developers. To me, that's a much more productive role.

I don't believe that QA will lose its importance in an agile world at all, quite the opposite in fact. What I do see (and have already seen in practice) is that they are involved at the beginning of development, and continuously thereafter. QA effectively works with the Customer to validate that a system is doing what it's supposed to from a business perspective, rather than finding system bugs that should have been caught by the developers. To me, that's a much more productive role.

+1

TDD should free QA people to focus on broader quality issues that frequently get ignored in the rush to just make the system work.

I think the question is: Is QA a distinct discipline requiring a special skillset, or is it a skill that every developer should possess?

Personally, I think there's enough to it, whether it's the technical skills required to do large-scale automated integration testing or the business skills required to work with the customer to validate the system, to justify dedicated persons with special skillsets on large enough projects or organizations.

TDD should free QA people to focus on broader quality issues that frequently get ignored in the rush to just make the system work.

Yes!! I absolutely agree... QA should be finding business logic that hasn't been properly implemented (or perhaps properly articulated), and not simple coding errors.

I think the question is: Is QA a distinct discipline requiring a special skillset, or is it a skill that every developer should possess?

I'm personally not yet ready to hand over the testing keys to the developers. The QA people that I have worked with in an agile environment still do better exploratory testing than the programmers (myself included). As for just validating that the business requirements have been satisfied, that can be done through tests. What you can't automate, though, are issues such as usability, aesthetics, etc. You still need a warm body with training in that area to perform that type of testing.

Agile methods stipulate that the developers should be writing and maintaining comprehensive unit test suites that can be run at any time, and that acceptance testing is performed with each development iteration.

Very true. A key issue is therefore the quality of the tests, so the next question is whether Dev or QA people produce "better" tests. But that question shouldn't be answered as posed, because it is an unhelpful question.

More helpful questions are "what are the weaknesses in Dev people's tests and in QA people's tests?", and "can they cover each other's weaknesses?" IMHO yes, but as usual it is critically dependent on the organisational culture and on the individuals.

A key issue is therefore the quality of the tests, so the next question is whether Dev or QA people produce "better" tests.

Actually, they produce completely different tests. The unit tests written by the developers are intended to test the down and dirty guts of the code, focusing on a single class at a time. These are programmer tests.

The tests produced by QA are business-facing tests - they verify that the more integrated business logic has been properly implemented. These are acceptance tests.

There's a move in the agile world now to stop calling them tests at all... they are specifications of what the code should do. The unit tests are object-level specs, and the acceptance tests are business-level specs. By having comprehensive suites of these behaviour specs, you can simply use (insert tool here) to run the suite whenever you wish, and verify that the system still conforms to the specs.
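A sketch of what "tests as specs" can look like in practice, using Python's stdlib unittest for illustration; the Account class and its behaviour are hypothetical. The only real change is that each test name reads as a sentence of the specification:

```python
import unittest

# Hypothetical domain class -- the "object" whose behaviour is specified.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Each test name is a sentence describing expected behaviour, so the
# suite doubles as an executable object-level specification.
class AccountSpec(unittest.TestCase):
    def test_a_withdrawal_reduces_the_balance(self):
        account = Account(balance=100)
        account.withdraw(30)
        self.assertEqual(account.balance, 70)

    def test_a_withdrawal_beyond_the_balance_is_refused(self):
        account = Account(balance=100)
        with self.assertRaises(ValueError):
            account.withdraw(150)

# Run the "spec" whenever you wish to verify the system still conforms.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AccountSpec)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```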

Actually, they produce completely different tests. The unit tests written by the developers are intended to test the down and dirty guts of the code, focusing on a single class at a time. These are programmer tests.

The tests produced by QA are business-facing tests - they verify that the more integrated business logic has been properly implemented. These are acceptance tests.

There's a move in the agile world now to stop calling them tests at all... they are specifications of what the code should do. The unit tests are object-level specs, and the acceptance tests are business-level specs. By having comprehensive suites of these behaviour specs, you can simply use (insert tool here) to run the suite whenever you wish, and verify that the system still conforms to the specs.

We also use this method for our development. When the system is developed, the developers create unit tests to cover the code. This is mostly done at the server-side.

In parallel, the QA staff turns the requirements into high-level test cases that ensure the validity of the system, in co-operation with the customers. This part just can't be replaced with unit tests. These tests are turned into automated tests which cover the entire application, from the GUI to the database. When the product is delivered, the automated tests in combination with the unit tests serve as regression tests for the system.

If you believe that unit tests are the be-all and end-all of testing, you are going to spend enormous amounts of time writing them, or risk that large parts of the system go untested.

If you believe that unit tests are the be-all and end-all of testing, you are going to spend enormous amounts of time writing them, or risk that large parts of the system go untested.

One misapprehension that I have seen too often is that the Unit Under Test in a Unit Test should only be a single class. The UUT should be whatever is appropriate, from a single class, to a set of closely interacting classes, to an FSM, to a complete subsystem, to a database.

In fact I can't remember ever seeing just a single class tested, except in noddy academic exercises. The "single class" always includes external library classes, even if they are only Integers and Strings.
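A sketch of a unit test whose UUT is a pair of closely interacting classes rather than a single one. Both Discount and Order are hypothetical, and Python's stdlib unittest stands in for whatever xUnit tool the team uses:

```python
import unittest

# Two closely interacting (hypothetical) classes: the "unit" under test
# here is their collaboration, not either class in isolation.
class Discount:
    def __init__(self, percent):
        self.percent = percent

    def apply(self, amount):
        return amount * (100 - self.percent) / 100

class Order:
    def __init__(self, discount):
        self.items = []
        self.discount = discount

    def add(self, price):
        self.items.append(price)

    def total(self):
        return self.discount.apply(sum(self.items))

class OrderWithDiscountTest(unittest.TestCase):
    def test_total_reflects_the_discount(self):
        order = Order(Discount(percent=10))
        order.add(50)
        order.add(50)
        self.assertEqual(order.total(), 90)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderWithDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```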

If you believe that unit tests are the be-all and end-all of testing, you are going to spend enormous amounts of time writing them, or risk that large parts of the system go untested.

One misapprehension that I have seen too often is that the Unit Under Test in a Unit Test should only be a single class. The UUT should be whatever is appropriate, from a single class, to a set of closely interacting classes, to an FSM, to a complete subsystem, to a database.

In fact I can't remember ever seeing just a single class tested, except in noddy academic exercises. The "single class" always includes external library classes, even if they are only Integers and Strings.

If you are testing a complete subsystem, it's not a unit test. Whether the class depends on other classes for its behavior isn't relevant.

If you are testing a complete subsystem, it's not a unit test. Whether the class depends on other classes for its behavior isn't relevant.

But that says nothing about whether or not the test is needed, only that it's testing something bigger than a unit.

Which is why arguing about what really is a unit test is a waste of time. How many classes fail in and of themselves, and how many fail because of their interactions with other classes?

IMHO, three sets of automated tests are needed:
1. One that can be run very quickly by the developer
2. One that can be run in less than half a day
3. One that validates the system to the maximum extent possible

Obviously if you can validate the system in five seconds you only need one set of tests.
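One possible way to organize those three tiers, sketched with placeholder test cases (the class names and tier assignments are assumptions, using Python's stdlib unittest): keep each tier in its own suite, so developers run only the fast one on every change while the build machine runs everything.

```python
import unittest

class FastUnitTests(unittest.TestCase):              # tier 1: seconds
    def test_arithmetic_core(self):
        self.assertEqual(2 + 2, 4)

class OvernightIntegrationTests(unittest.TestCase):  # tier 2: < half a day
    def test_subsystems_talk_to_each_other(self):
        self.assertTrue(True)  # placeholder for a slow integration check

class FullSystemValidation(unittest.TestCase):       # tier 3: maximum coverage
    def test_end_to_end_scenario(self):
        self.assertTrue(True)  # placeholder for an end-to-end scenario

loader = unittest.defaultTestLoader

# Developers run only the fast tier on every change...
fast = loader.loadTestsFromTestCase(FastUnitTests)

# ...while the build machine runs all three tiers together.
everything = unittest.TestSuite([
    fast,
    loader.loadTestsFromTestCase(OvernightIntegrationTests),
    loader.loadTestsFromTestCase(FullSystemValidation),
])
result = unittest.TextTestRunner(verbosity=0).run(everything)
```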

If you are testing a complete subsystem, it's not a unit test. Whether the class depends on other classes for its behavior isn't relevant.

But that says nothing about whether or not the test is needed, only that it's testing something bigger than a unit.

Which is why arguing about what really is a unit test is a waste of time. How many classes fail in and of themselves, and how many fail because of their interactions with other classes?

IMHO, three sets of automated tests are needed:
1. One that can be run very quickly by the developer
2. One that can be run in less than half a day
3. One that validates the system to the maximum extent possible

Obviously if you can validate the system in five seconds you only need one set of tests.

I view a unit test as verifying the behavior of a distinct and fairly atomic piece of a system. The point is not to verify any sort of business requirement, it's to verify the implementation of that code. I develop lots of classes that are merely building blocks to a larger solution. There is no requirement to track them back to, no one outside of the developers needs to know about them or care about them. But it's important that each piece of functionality is correct, because everything else depends upon it. Just testing the higher-level functionality tells me it works in that specific scenario, but doesn't ensure it will work as designed when Joe Blow down the cube-hall tries to use it later. Unit tests are like checking welds in a piece of heavy machinery. What does it do when a null is passed in? What does it do when a negative value is passed in? What does it do when the configuration file isn't found? These are things that high-level testing does not always address, because you don't pass nulls in the normal use of the class and the config file is normally there.
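Those weld-checking questions translate directly into small tests. A sketch, with a hypothetical parse_quantity building block and Python's stdlib unittest used for illustration:

```python
import unittest

# Hypothetical building-block function whose "welds" we check.
def parse_quantity(raw):
    if raw is None:
        raise ValueError("quantity is required")
    value = int(raw)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class ParseQuantityWeldTest(unittest.TestCase):
    def test_none_is_rejected(self):            # what happens with a null?
        with self.assertRaises(ValueError):
            parse_quantity(None)

    def test_negative_value_is_rejected(self):  # what about a negative value?
        with self.assertRaises(ValueError):
            parse_quantity("-3")

    def test_normal_value_parses(self):
        self.assertEqual(parse_quantity("7"), 7)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseQuantityWeldTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```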

If you are testing a complete subsystem, it's not a unit test. Whether the class depends on other classes for its behavior isn't relevant.

But that says nothing about whether or not the test is needed, only that it's testing something bigger than a unit.

Which is why arguing about what really is a unit test is a waste of time. How many classes fail in and of themselves, and how many fail because of their interactions with other classes?

IMHO, three sets of automated tests are needed:
1. One that can be run very quickly by the developer
2. One that can be run in less than half a day
3. One that validates the system to the maximum extent possible

Obviously if you can validate the system in five seconds you only need one set of tests.

I view a unit test as verifying the behavior of a distinct and fairly atomic piece of a system. The point is not to verify any sort of business requirement, it's to verify the implementation of that code. I develop lots of classes that are merely building blocks to a larger solution. There is no requirement to track them back to, no one outside of the developers needs to know about them or care about them. But it's important that each piece of functionality is correct, because everything else depends upon it. Just testing the higher-level functionality tells me it works in that specific scenario, but doesn't ensure it will work as designed when Joe Blow down the cube-hall tries to use it later. Unit tests are like checking welds in a piece of heavy machinery. What does it do when a null is passed in? What does it do when a negative value is passed in? What does it do when the configuration file isn't found? These are things that high-level testing does not always address, because you don't pass nulls in the normal use of the class and the config file is normally there.

While I agree with you that acceptance (functional) tests are quite different from unit and integration testing, I don't think there is a neat distinction between the last two.

A unit test is a mini-integration test unless you use mock objects everywhere and verify every call made on them by the tested object, but in my mind that breaks the purpose of encapsulation and the contract theory. You never test a class alone, you just verify that after calling some method its state is consistent and the same as what was expected. Personally, I classify integration tests as tests that check the correctness of a coarse-grained component (for instance an application layer or the entire application), while unit tests focus on verifying specific class behaviors that aren't complete components by themselves.
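The two styles being contrasted here, verifying every call on a mock versus checking the resulting state, can be sketched with Python's stdlib unittest.mock; OrderService and the repositories are hypothetical:

```python
import unittest
from unittest.mock import Mock

# Hypothetical object under test and its collaborator.
class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def place(self, order_id):
        self.repository.save(order_id)

class InMemoryRepository:
    """A state-based test double: a trivial fake instead of a mock."""
    def __init__(self):
        self.saved = []

    def save(self, order_id):
        self.saved.append(order_id)

class InteractionVsStateTest(unittest.TestCase):
    def test_interaction_style_verifies_every_call(self):
        # Mock-heavy style: assert on the exact calls the object makes,
        # which couples the test to the implementation's internals.
        repository = Mock()
        OrderService(repository).place(42)
        repository.save.assert_called_once_with(42)

    def test_state_style_checks_the_resulting_state(self):
        # State-based style: only the observable outcome matters.
        repository = InMemoryRepository()
        OrderService(repository).place(42)
        self.assertEqual(repository.saved, [42])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(InteractionVsStateTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```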

However, I do agree that functional testing can't be achieved by unit/integration testing and is definitely better handled by a different team.

While I agree with you that acceptance (functional) tests are quite different from unit and integration testing, I don't think there is a neat distinction between the last two. A unit test is a mini-integration test unless you use mock objects everywhere and verify every call made on them by the tested object, but in my mind that breaks the purpose of encapsulation and the contract theory.

I agree completely and your point is very well stated.

My point was really that if you are using JUnit as your only testing, then you are not testing a lot of things, or you are using unit tests in the wrong way. Unit testing is mainly used to verify that code works correctly. It's not meant to address whether the requirements are met.

A little off topic, but I am a little bit confused here. What do you define as agile methodologies?

Where I work we use a kind of RUP, which is usually recognized in the industry as a semi-agile process. We use automatic unit testing, writing our tests before writing the code, refactoring, ... For me, those are development practices and not a methodology in themselves. You can use them in almost any development process. Heck, even in a waterfall style of development.

We don't want to use XP because of its lack of global architecture and because it puts too much responsibility in the hands of the customer. We use some form of TDD because it is actually true that tests help design correctly, but at a fine-grained level. You still need a global architecture to know where you are going, in my mind, and whether it is going to fit in the big picture. What we like about XP, which is also found in RUP, is the iterative, incremental approach. But I agree more with RUP's point of view: the focus of tasks tends to change as you progress. For instance, by the later iterations most use cases should already have been identified and detailed, and the focus should be more on development and on testing.

From what I have heard, there are 2 main dangers in using XP for a large project:

- Its lack of global direction. For instance, since there is no architecture, there aren't any prioritized use cases, and development can turn into an endless cycle of iterations.
- It puts too much responsibility in the customer's hands. For instance, if the customer representative doesn't take it seriously, you are in trouble, since he won't be held responsible for the project's failure. And sometimes the customer doesn't even know what he wants. In my opinion, it's the functional analyst's task to help him clarify his needs.

I could be wrong, I am far from being an XP expert, but I think XP is a gathering of great development practices, yet I find it a rather weak process, usable only in small teams. So I guess I am wondering if XP is usable as a process, or if people are just using the good practices it enforces. Any experiences?

As far as architecture goes, many organizations tailor AMDD techniques, http://www.agilemodeling.com/essays/amdd.htm , into XP to scale it. Actually, my March column in Software Development (www.sdmagazine.com) discusses techniques for how to scale agile processes, so you might want to check it out.

With respect to the customer representative issue, if your stakeholders don't have their act together you're pretty much out of luck regardless of the methodology that you use. At least XP makes it incredibly clear that you're in trouble right away, most other methods allow you to hide/ignore this problem and actually compound the damage that's caused. You might find http://www.agilemodeling.com/essays/activeStakeholderParticipation.htm interesting.

With respect to the customer representative issue, if your stakeholders don't have their act together you're pretty much out of luck regardless of the methodology that you use. At least XP makes it incredibly clear that you're in trouble right away, most other methods allow you to hide/ignore this problem and actually compound the damage that's caused.

+1 You beat me to it. If you don't believe Scott, again have a look at the Standish research:

That's from the 1994 report. The 1996, 1998, 2000 and 2002 reports all list User Involvement as either the #1 or #2 factor affecting project success (with the other top factor being Executive Support).

As Scott says, what distinguishes agile methods from traditional ones is that this level of Customer Involvement is enshrined in the process itself. It isn't something that just happens or is a happy side-effect; it has to be done in order to be successful.

So now that Customer involvement is in place, what other items from the Standish Top 10 list do agile methods use:

Clear Business Objectives - you need a target, and the combination of the Customer's involvement and the business-oriented Stories provide this.

Minimized Scope - the practice of short releases nails this one.

Firm Basic Requirements - User Stories accomplish this very well, and having small pieces of the software ready for feedback from the Customer on a regular basis improves upon it.

Reliable Estimates - because Stories are broken down to such a granular level, the estimates are much more reliable. Plus, the constant planning review process allows the team (including the Customer) to refine the estimates over time as they learn more.

The other items in the list, such as Executive Support, an Experienced Project Manager, and Standard Software Infrastructure are outside the scope of agile methods. All of them, though, are critical to any process being used.

The one item I hadn't mentioned is Formal Methodology. If you read the description of that item in the section below the table, it describes how the methodology should be reflective ("Lessons learned can be adapted into active projects") and adaptable to changing requirements ("A project team can proceed with a higher level of confidence or steps can either be halted or altered to fit changing requirements. This ability to adjust in real time enhances project skills and reduces project risk"). That's agile!

Again, keep in mind that the Standish Group has no axe to grind regarding a particular methodology or approach to building software. They are simply providing the statistical basis, and agile methods are exploiting it.

So do the research, and try the methods with an open mind. If you find yourself saying, "That's great, but it will never work here.", try asking instead, "What would have to happen in order to make that work here?"

Clear Business Objectives - you need a target, and the combination of the Customer's involvement and the business-oriented Stories provide this.

Minimized Scope - the practice of short releases nails this one.

Firm Basic Requirements - User Stories accomplish this very well, and having small pieces of the software ready for feedback from the Customer on a regular basis improves upon it.

Reliable Estimates - because Stories are broken down to such a granular level, the estimates are much more reliable. Plus, the constant planning review process allows the team (including the Customer) to refine the estimates over time as they learn more.

You totally lost me there, since I am not an XP expert. What is the difference between user stories and the traditional use case approach, where each use case is always detailed by a main scenario and some secondary ones?

I still feel like XP is not really a process overall but a set of good practices that you can mix with the traditional ones (using CRC cards instead of drawing some System Sequence Diagrams illustrating how each use case is realized). An analogy could be OOP vs design patterns. Design patterns don't replace OOP, but they can really enhance your design if you use them wisely, as with XP practices.

However, in my mind, as a process XP still looks like an endless cycle of development iterations. XP seems to address just the micro aspects and pretends the macro aspects are not important and are going to fix themselves. While it is true that in the past we placed too much importance on having a high-level vision of everything, I would say the other extreme is not better. This is where XP lost me. I would like to hear your opinion about this.

There's a move in the agile world now to stop calling them tests at all... they are specifications of what the code should do. The unit tests are object-level specs, and the acceptance tests are business-level specs. By having comprehensive suites of these behaviour specs, you can simply use (insert tool here) to run the suite whenever you wish, and verify that the system still conforms to the specs.

For a while now I've suspected that agile development isn't about reducing documentation, it's about ensuring that documentation is produced in a useful form. In this, it assumes that the only detailed specification for an application that the customer finds useful is a working system. I don't fully agree with this, although in practice conops and requirements specs that are intended for the customer turn out to be useless to them.

I've worked with agile techniques as a software developer and manager for over 5 years, and with RUP, waterfall, and total chaos in the 10 years before that. I believe that QA is absolutely necessary (although how many people you need is debatable). I'd summarize the ideal role of QA as follows:

1. Works with developers up front to understand customer requirements, and to ask tough questions that don't occur to people without a testing mindset. Example: Will this new AJAX entry screen support cut and paste like the dumb text field used to?

2. Creates acceptance tests in conjunction with the customer. Helps to think of exceptional usage and performance scenarios. Example: What happens if I hold down the Enter key in your new AJAX widget?

3. Reports the state of the system in business terms. When someone asks "Is it ready to ship", QA says: "The core system is stable, but the new AJAX widget would be risky to ship today. We have run only 30% of the tests we had planned for it, and it failed 3 significant ones that customers will likely be exercising." They are not gatekeepers, they are information providers.

Agile methods focus on having the developers produce the best quality output they can, so they can stay nimble and not get bogged down in maintaining crappy code. But that doesn't mean that you don't want someone independent thinking about real world usage scenarios, exceptional cases, etc. This is typically lumped into the "customer" role, but it's still important, and it's usually performed by QA staff.

This is one of my favorite topics. I have often said: "When you go into an organization that is producing a poor quality product, the first thing you do to improve the quality is to get rid of the entire QA team." People ask me, "Well, do you hire a new one? What do you replace them with?" The answer is simple: Nothing.

The problem with QA is that programmers lose the responsibility for quality, because they start to think "it's QA's job." No, quality is not QA's job. It is the programmer's job, and the designer's job, and the architect's job. Blaming QA people for bugs is like blaming the canary for the bad air in a coal mine. (Note that while I suggest getting rid of the separate QA function, I do not suggest getting rid of the canary in the coal mine.)

You want quality in software? Make your programmers do peer reviews of all code before it gets checked in. (BTW Pair programming in XP just does this "in real time".) And don't let them do peer reviews with just their buddies who agree with all their harebrained programming ideas -- make them defend their code to their worst critics.

You want quality in software? Don't let your programmers start writing code until they can explain it clearly in English (or whatever spoken language your team has standardized on) to someone else on the team. Make sure the design makes sense before they get half-way done building the new pile of spaghetti code. (BTW This can help to achieve the XP goal of "Never Add Functionality Early." You'd be surprised how many "absolutely necessary features" become not-so-necessary when you shine the light of peer review on them.)

Our industry -- the software industry and those who write applications -- as a whole produces mostly crap. Most applications simply do not work. I mean it. Most applications do not work. We rail on and on about how bad Microsoft Windows is, but I guarantee you that Windows is 100x better quality than the average business application that is out there. And until we take quality as seriously as we take all these other stupid fads and gimmicks, our industry is going to continue to produce mostly crap.

The problem with QA is that programmers lose the responsibility for quality, because they start to think "it's QA's job." No, quality is not QA's job. It is the programmer's job, and the designer's job, and the architect's job. Blaming QA people for bugs is like blaming the canary for the bad air in a coal mine. (Note that while I suggest getting rid of the separate QA function, I do not suggest getting rid of the canary in the coal mine.)

Removing the people who find the problems in an organization that is ready to transfer blame for the problems it creates from the people who create the problems to the people who find them won't fix anything. It will just make it easier for them to push errors through.

You want quality in software? Make your programmers do peer reviews of all code before it gets checked in. (BTW Pair programming in XP just does this "in real time".) And don't let them do peer reviews with just their buddies who agree with all their harebrained programming ideas -- make them defend their code to their worst critics.

This is a far too code-centric view. Maybe I'm just a dolt, but unlike the people in the Matrix I can't look at a sea of symbols and instantly envision a working application. An application can have terrible code and meet the business objectives that it's meant to satisfy. An application can have extraordinary code and fail (although some would argue this would mean the code is bad).

Someone has to validate that the application not only works, but that it will meet its objectives and its users will understand it.

Our industry -- the software industry and those who write applications -- as a whole produces mostly crap. Most applications simply do not work. I mean it. Most applications do not work. We rail on and on about how bad Microsoft Windows is, but I guarantee you that Windows is 100x better quality than the average business application that is out there. And until we take quality as seriously as we take all these other stupid fads and gimmicks, our industry is going to continue to produce mostly crap.

Absolutely. I couldn't agree more, except that you must see some pretty high-quality business systems if you think Windows is only 100x better.

Are you advocating getting rid of the separate QA Team, or the QA function altogether? I don't believe that getting rid of the QA people is a good idea, but rather having them as an integrated part of the development team from start to finish.

No, quality is not QA's job. It is the programmer's job, and the designer's job, and the architect's job.

Yes, it's everyone's job! (Sounds like an old Ford commercial)

Developers have gotten off light for many years for producing crap. Their management has gotten off light for many years for forcing unrealistic schedules on the developers and estimating their work for them. Customers (of the software) have gotten off light for many years by throwing nebulous requirements at a development team and disengaging themselves from the development process. The poor QA people then have to deal with a testing schedule that has been cut due to development time overruns, low quality systems that suffer because the developers didn't have enough time to properly test and write their code, and test plans that don't really correspond with what the system does in the end, and what the Customer actually wanted in the first place!

Explain to me again why I made this my career? :)

And until we take quality as seriously as we take all these other stupid fads and gimmicks, our industry is going to continue to produce mostly crap.

Our industry -- the software industry and those who write applications -- as a whole produces mostly crap. Most applications simply do not work. I mean it. Most applications do not work. We rail on and on about how bad Microsoft Windows is, but I guarantee you that Windows is 100x better quality than the average business application that is out there. And until we take quality as seriously as we take all these other stupid fads and gimmicks, our industry is going to continue to produce mostly crap.

Amen to that. The atrocity of the average business application is so profound that I'm sometimes in disbelief. But the worst thing in a consulting engagement is when you have grasped the full atrocity of the software beast under examination and the client then asks you to implement terrible ideas that would make the mess even worse, much worse! I sometimes have to politely explain that I'm not interested in further increasing the entropy of the universe through bad code.

I second your point about the importance of design and code reviews. Of course there are many places where doing this is not politically acceptable! Regarding QA, it's not only about the code but about complying with the functional and non-functional requirements. It's also quite common that some QA tasks like full-scale load testing cannot be performed in a development environment. I'd say developers should be made responsible for quality as far as possible, but then you really need a second line of QA with a broader perspective before you can be confident that a system can go live.

It's also quite common that some QA tasks like full-scale load testing cannot be performed in a development environment.

Why? I would expect the developers to test this before it goes to QA, so that QA is just verifying that it does work.

I agree that developers should do as much load testing as possible themselves while they build the system - and as early as possible too, not when it's complete! But in some projects / environments, "full-scale", end-to-end load testing requires various things such as a monster SMP box or connectors to other internal systems, or even third-party systems that are available only during tightly controlled time windows and only with complex security requirements that restrict their usage to the testing environment. You see? So I think we are in agreement but I wanted to stress that not everything can always be fully tested by the developers themselves in their development environment.

So I think we are in agreement but I wanted to stress that not everything can always be fully tested by the developers themselves in their development environment.

Very true. But the devs (some of them) need to be involved in the full-scale testing; it isn't just QA's job. I've had that happen: they load tested it (well, we think they did) and said it didn't work. No idea why or how, because those who actually understood the technology were not involved. I am not saying, however, that all the developers would have been able to solve it.

I don't think that anybody in the agile community is claiming that the developers will do all of the testing. I certainly didn't in the presentation. Unit testing is important, but it's only one part of the overall picture as I try to describe at http://www.ambysoft.com/essays/floot.html .

What we are saying, and research is now starting to show this, is that agile developers are producing significantly higher quality systems as the result of practices such as code refactoring, database refactoring, and test-first design. This shouldn't be a surprise: we're doing far more testing than traditionalists typically do and we actively strive to keep our design of the highest quality possible.

It's also quite common that some QA tasks like full-scale load testing cannot be performed in a development environment.

Good point. Developer workstations, and even build machines performing continuous integration builds, are not the same as the environments that can address performance and stress testing. This is one area that is often left out of the testing cycle, and only when the code ends up in production does it exhibit undesirable characteristics (over-synchronization causing locked threads, memory leaks, undersized pools, and poorly performing external processes). Unfortunately, these are often the most difficult issues to solve, and unless you have a performance environment and QA staff to conduct these tests, you'll need to form a team of developers just to address these production performance problems.
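The gap between a dev box and a performance environment can be illustrated even with a toy harness. The sketch below (Python, purely illustrative; `load_test` and the sample workload are made-up names, not any real tool) hammers a function from several threads and records per-call latency. A real load test needs production-scale hardware and data, which is exactly the point above, but even this much on a dev box can surface gross locking problems early.

```python
import threading
import time

# A toy load-test harness; illustrative only. load_test and the
# workload below are invented for this sketch.
def load_test(fn, threads=8, calls_per_thread=50):
    latencies = []
    lock = threading.Lock()

    def worker():
        for _ in range(calls_per_thread):
            start = time.perf_counter()
            fn()                        # the code under test
            elapsed = time.perf_counter() - start
            with lock:                  # protect the shared result list
                latencies.append(elapsed)

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return len(latencies), max(latencies)

calls, worst = load_test(lambda: sum(range(1000)))
print(calls)  # 400: every call completed under concurrency
```

If `worst` balloons as the thread count rises, you have a contention problem worth investigating long before production does it for you.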

Exactly. This is why in my presentation I discussed the need for supporting various types of sandboxes, a pre-production sandbox for system/acceptance testing being one of them. See http://www.agiledata.org/essays/sandboxes.html for details.

Some manufacturing companies are moving over to Total Quality Management (TQM), which is seen as the next step from QA. One of the ideas of TQM is that responsibility for quality is placed back in the hands of those on the production line, rather than a separate QA department. This is because when quality is seen as the preserve of a separate department, workers on the production line do not see the need to take responsibility for quality. In TQM the QA department still exists, but acts as a consultant rather than an enforcer of quality.

If the software industry wants to follow best practice in manufacturing, it should be making developers and architects totally responsible for the quality of what they produce. There should be no need for testers. Having a separate test department only increases waste: time is lost passing defects back and forth between developers and testers, and costs rise both from that lost time and from maintaining a separate test team.

The software industry should aim for zero defects, through improved tooling and training. This is an ideal, but as the Japanese philosophy of Kaizen (continuous improvement) says, if you don't start to take small steps towards an ideal, you'll never improve.

That is exactly what I'm talking about. Quality is TQM. You can't have real quality without a healthy lust for kaizen, and you can't have it outside of an environment in which all participants feel individually and corporately responsible for it. Thank you for putting it so succinctly.

Although TQM isn't new, especially in manufacturing. We were doing it in the AF 20 years ago. But thanks for posting this; I had forgotten about TQM and my training. Well, it does affect my thinking. But since software dev seems to be so anti-TQM... how soon we forget.

* Testing: The process of executing a system with the intent of finding defects, including test planning prior to the execution of the test cases.
* Quality Control: A set of activities designed to evaluate a developed working product.
* Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.

(from the ANSI/IEEE standard)

Prior to the advent of the Agile Age, most software development organizations did not have QA -- they only had QC/Testing. Testing by itself does not necessarily assure quality. (I'll leave the proof of this as an exercise for the reader. :) )

When you boil it down, the Agile methodologies are nothing but QA methodologies for software development, so one could assert that with the introduction of Agile development the QA staff has blossomed!

It's true that the number of dedicated testers will likely be reduced as the developers shoulder some of that work. But in most organizations that do not have full-life-cycle QA methodologies (i.e. not just testing), the role of the tester was only to smoke test anyway (or, if you were lucky, to do some form of rudimentary acceptance testing), which really only provides the illusion of quality.

I think there are different levels of testing, but the core problem of quality software is good developers and discipline. Without good developers and an established discipline, crap code will become the norm rather than the exception. I'm sure others will disagree, but I think QA should be a second level of validation.

By that I mean the QA team validates the functionality at the user level and does things developers do not expect. Often when developers work on a piece of functionality, they will write tests that cover the intended functionality and maybe some negative cases. QA's job is to do all the stupid things users do and validate that the software still behaves according to the specifications. Things like stress, load, and installation testing are also important.
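To make that distinction concrete, here is a small sketch in Python (`parse_age` is a made-up example function, not from the thread): the developer's test covers the intended input, while the tester's cases are the junk real users actually type.

```python
# Illustrative only: a developer's happy-path test versus the kind of
# hostile input a tester tries. parse_age is invented for this sketch.

def parse_age(text):
    """Parse a user-entered age, rejecting junk instead of crashing."""
    try:
        age = int(text.strip())
    except (ValueError, AttributeError):
        return None
    return age if 0 <= age <= 150 else None

# The developer's test: the intended case works.
assert parse_age("42") == 42

# The tester's cases: all the "stupid things users do".
for junk in ["", "  ", "forty-two", "-1", "9999", None]:
    assert parse_age(junk) is None
```

Every one of the "junk" inputs is something a real user has typed into a real form somewhere; a developer testing their own intent rarely thinks to try them.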

A "separate testing org that gets handed a test plan and some software and left alone to their own devices" would indeed be a waste of time. It's the QA team that takes the requirements and produces the test plan in the first place. Good QA teams do this very well and think of things that a dev team usually have not. Why would a dev team tell the QA team what to test? This is the implication, and I'm puzzled to hear of any setup that considers that a good idea.

Developers produce stuff that they think does the job; why wouldn't they? Unit testing is great for a developer to prove that something does what they think it should. A unit test is not much more than a pointless tautology. The producer of code (and unit tests) should be the last person to assess the quality of what they've produced.

Add to that the fact that the units must fit together to produce a functionally coherent whole (OK, there's Integration Testing) and the end result is that dev team testing is really just the start, not the end.

Even a fairly simple statement of business requirement, a single paragraph in a larger document, can explode into a mass of execution paths that could and should be proven through functional testing.

I must say I have heard similar statements to FM's from pugnacious dev team leads, and these have ALWAYS ended with embarrassing climb-downs and the inevitable consequence: more QA resource, not less.

It's the QA team that takes the requirements and produces the test plan in the first place. Good QA teams do this very well and think of things that a dev team usually have not. Why would a dev team tell the QA team what to test?

I agree with this wholeheartedly. QA Testing is a function of the business, not of the development team. They need to work together, just as the Customer needs to work with development, from the start of a project to the end.

Unit testing is great for a developer to prove that something does what they think it should. A unit test is not much more than a pointless tautology.

I'm puzzled by this. How can a unit test be pointless? Besides, in Test-Driven Development (or Behaviour-Driven Development) the test isn't for verification, it's the specification.
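As a minimal sketch of that idea (Python's unittest standing in for JUnit; the discount rule is invented for illustration): in TDD the test class below is written first, as the executable specification, and the function is written afterwards to make it pass.

```python
import unittest

def discount(order_total):
    """Written *after* the tests below, to make them pass."""
    return order_total * 0.10 if order_total > 100 else 0.0

# In TDD these tests come first: they specify the behaviour we want
# before any production code exists.
class DiscountSpec(unittest.TestCase):
    def test_orders_over_100_get_ten_percent_off(self):
        self.assertAlmostEqual(discount(200), 20.0)

    def test_small_orders_get_no_discount(self):
        self.assertAlmostEqual(discount(50), 0.0)
```

Run with `python -m unittest`. Until `discount` exists, the suite fails; the red-to-green transition is what marks the specification as satisfied.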

It's the acceptance tests that "put it all together" to ensure that all the parts work as a whole, and work according the business requirements.

Even a fairly simple statement of business requirement, a single paragraph in a larger document, can explode into a mass of execution paths that could and should be proven through functional testing.

Agile methods use very small requirements (known as User Stories in XP). Each one would probably correspond to what you describe as an execution path. Each one of those by itself is, in my experience, relatively easy to test.

If you don't believe me, I and many others from the agile community would be happy to show you! ;)

I'm puzzled by this. How can a unit test be pointless? Besides, in Test-Driven Development (or Behaviour-Driven Development) the test isn't for verification, it's the specification.

I didn't say that a unit test was pointless, just that it can be almost pointless. As for the test being the specification: the specification of what? The expected behaviour of a discrete, arbitrary unit in the software architecture that has no necessary relationship to a requirement.

As for testing "user stories", or "happy paths" through use-cases, or exception paths or whatever else, this is all well and good. But if you develop a deliverable that satisfies the requirement of a story and that has no impact on the other stories in the book (so to speak) then that again suggests to some degree a decomposition to satisfy the developer and not the Business or QA team that represents them.

If agile development delivers better quality software then the facts will speak for themselves. If QA departments were to suddenly be confounded by being presented with perfect systems that they could find no fault with then their days would be numbered. What company would fund a pointless exercise?

I hope that AD can one day deliver this utopia, but in the meantime I remain realistic.

As for the test being the specification: the specification of what? The expected behaviour of a discrete, arbitrary unit in the software architecture that has no necessary relationship to a requirement.

In agile development, no code in the system should exist without a relationship to a requirement. I would go so far as to say that the above sentence should apply to any development process.

Regarding User Stories, they are written by the Customer in the Customer's terminology. They are a fundamental statement of business value, e.g.:

A Customer that purchases more than $5,000 is a Preferred Customer.

And a related story could be:

A Preferred Customer receives a 10% discount on all prices.

Those Stories have absolutely nothing technical in them - they are statements about the business, and something that the system's Customer believes should be in the system. They are also something from which the QA people can start developing their acceptance tests.
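A hedged sketch of how that might look in practice (Python; the function names are invented, and real acceptance tests would exercise the system through its interface rather than plain functions): each check maps one-to-one onto a story above.

```python
# Illustrative only: turning the two Stories above into executable
# acceptance checks, independent of how developers implement them.

def is_preferred(total_purchases):
    # Story 1: a Customer that purchases more than $5,000 is Preferred.
    return total_purchases > 5000

def price_for(customer_purchases, list_price):
    # Story 2: a Preferred Customer receives a 10% discount on all prices.
    return list_price * 0.9 if is_preferred(customer_purchases) else list_price

# Acceptance checks derived directly from the Stories:
assert is_preferred(5001) and not is_preferred(5000)
assert price_for(6000, 100.0) == 90.0   # preferred: 10% off
assert price_for(1000, 100.0) == 100.0  # not preferred: full price
```

Note how the boundary case ($5,000 exactly) falls straight out of the story's wording ("more than") - the kind of question QA raises with the Customer before a line of production code exists.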

When it comes time to actually implement the stories, the developers break them down into the discrete tasks that are required. This is done in collaboration with the Customer and QA people in order to define what the expectations are in terms of the interface, persistence, work flow, etc. That's when the technical aspect enters into the process.

My experience over the past 5 years using agile methods has been that the quality of systems developed using agile methods is hands down better than that of traditional methods. There are caveats, however: the Customer (the Business) must be engaged in the process at all times. If they disengage, the quality goes with them.

In agile development, no code in the system should exist without a relationship to a requirement. I would go so far as to say that the above sentence should apply to any development process.

The fact that a piece of code is traceable back to a requirement does not mean that it satisfies that requirement. It may participate in satisfying a requirement, or many requirements. From an object oriented perspective, maintaining complete traceability is not practicable as it suggests that as each feature of each component is developed it is tagged with all related requirements. A developer does not have this perspective and nor should they.

I accept that bidirectional traceability promises a lot and that it is seductive to imagine that a clause in a statement of requirements can map to a concept in an analysis model that can map to an entity in an implementation model to a class in the software model to a test in a test suite to a path through a use case and that by pulling the string at one end you can assess the impact at the other.

CMM suggests this level of traceability for full compliance but I have never seen it achieved and have not seen any real benefit come from focussing on this above other more realistically achievable quality aims.

I take your point on the stories being business driven.

I am not against agile development. But I think that the grey areas that you fill with best case scenarios I am filling with the worst.

I am certainly an advocate of better testing across the lifecycle but feel, from my own experience, that repeatable end-to-end functional test scenarios that simulate end user activity is the most robust practicable solution to software quality and am presently prodding at this proposition with the help of Jython and related technologies.

I accept that bidirectional traceability promises a lot and that it is seductive to imagine that a clause in a statement of requirements can map to a concept in an analysis model that can map to an entity in an implementation model to a class in the software model to a test in a test suite to a path through a use case and that by pulling the string at one end you can assess the impact at the other.

I have seen this Holy Grail sought after for a decade or so. Has anyone seen any ROI studies that show it is worth the effort to attempt to forge traceability? Anybody actually doing it and using it for positive effects? If so, please describe how you do it, how you use it, how you maintain it, and what cost benefits were realized.

There is only one thing wrong with FM's proposition - everything.

A "separate testing org that gets handed a test plan and some software and left alone to their own devices" would indeed be a waste of time. It's the QA team that takes the requirements and produces the test plan in the first place. Good QA teams do this very well and think of things that a dev team usually have not. Why would a dev team tell the QA team what to test? This is the implication, and I'm puzzled to hear of any setup that considers that a good idea.

Developers produce stuff that they think does the job; why wouldn't they? Unit testing is great for a developer to prove that something does what they think it should.

I agree with this in part. I have worked on teams where developers did all the testing. The quality of the software was dreadful. In short, the problem is that developers aren't going to realize something when they put on their tester's hat that they didn't realize when they were wearing their developer's hat. They don't get a different brain.

I disagree that unit tests are not useful. Just because a developer thought they coded something to work a certain way doesn't mean it does. During the lifecycle of an application or module, they are very helpful in verifying that changes did not break adjacent code. But in no way are they enough by themselves. They are input-output checks.

I was on a team where developers did all the testing; later we added a functional team that gathered and created the requirements and then tested the product to verify that it met them. The quality of our product improved dramatically over when the developers did all the testing. The customers were ecstatic.

I see several reasons for this:

1. Developers hate testing. I don't mean unit testing; I mean opening the GUI and clicking x, y, and z in a 100 different scenarios. They'd much rather be fixing and writing code and/or automating unit/regression tests.
2. A lot of developers don't really like development. They aren't cut out for it. They chose it as a profession because it paid well. Often these people make good testers.
3. The functional people are creating the requirements, so they know whether they are fulfilled. It was amazing to me how many times I thought I had fulfilled a requirement and had it fail when the functional team tested it. Sometimes the requirement wasn't clear; sometimes I just misinterpreted it. In either case, customer unhappiness was easily averted.

Before we had a functional team, we would have 3 BETA tests. After, we had fewer bugs at the BETA test than we would normally have at the system acceptance test.

Where I work now, we have a QA team. AFAICT, they don't do any testing at all and they have never even seen the requirements. They have no ownership and don't care. Part of the problem is that they are contractors. In any event, I don't know whether what I am describing is a QA department, but a functional team works: they own the requirements, and if the software doesn't fit the bill, it's their problem. Because their asses are on the line, they make sure they get what they want from the developers. But the ownership is key. If they can point to the developers and the developers can point to them, you'll get crap.

I disagree that unit tests are not useful. Just because a developer thought they coded something to work a certain way doesn't mean it does. During the lifecycle of an application or module, they are very helpful in verifying that changes did not break adjacent code. But in no way are they enough by themselves. They are input-output checks.

Of course, but you can gain the same assurances from a functional test, executed from a vantage point closer to the end-user. If a unit test is not enough by itself, then why not focus on something that is?

I would say that unit tests save me a lot of time. When I develop a new class, I create some basic unit tests to test my assumptions about its behavior. This often uncovers a bug or two right off the bat, and saves me from having a higher-level test fail and then searching for that bug. Later on, changes in the system that break my class become apparent well ahead of a full test cycle. Using high-level testing to uncover simple coding errors is not efficient.

OK, yes, I'm currently working in testing, and am a contractor - so whether I get re-hired depends on whether I do my work well and in a professional way.

Don't forget I'm effectively taking an appraisal every few months when I get re-hired or go for a new piece of work, in contrast to 'employee' staff who usually get appraised once a year, and won't get sacked on the basis of it.

So please say again what the problem is. Has your manager taken on contract staff who are no good at their work? In that case it's a recruitment/interviewing problem.

From personal experience I have seen projects which were hardly unit tested at all but were focussed on business delivery and tested by their actual end-users from day one being much more successful (and even stable too) than projects which took double the time to build, were full of junit tests but didn't have that business focus. My point here: I don't know really. *Maybe* unit testing everything is overrated for business applications and actually sucks away time that could be used to stay focussed on the business/functionality side of things.

Unit testing is a developer tool. I don't see it as having a direct relationship to business requirements. It's a way to verify components of the technical solution. As developer tools go, I find unit tests excellent.

Exactly. It's part of the build/deployment process. Developers use unit tests as an addition (or alternative) to compilation. Just because a piece of code compiles doesn't mean it works. Similarly, just because it compiles and passes a few simple input validations doesn't mean it works either.
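A tiny Python illustration of that point (the `average` function is invented for this sketch): the code below is perfectly valid to the compiler/interpreter, yet a one-line test exposes a real defect immediately.

```python
# Illustrative: this function "compiles" (loads without error), but a
# trivial unit test still catches a defect the compiler never could.

def average(values):
    return sum(values) / len(values)  # crashes on an empty list

try:
    average([])
    caught = False
except ZeroDivisionError:
    caught = True

assert caught                       # the bug a compiler can't see
assert average([2, 4, 6]) == 4.0    # the intended behaviour still works
```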

From personal experience I have seen projects which were hardly unit tested at all but were focussed on business delivery and tested by their actual end-users from day one being much more successful (and even stable too) than projects which took double the time to build, were full of junit tests but didn't have that business focus.

I agree totally. This highlights the key importance of the Customer role in any development process - if they are involved, the product will be better. It's a simple as that.

The difference with agile methods is that they acknowledge this fact and embrace it. By having the Customer involved, and rigorous testing, you have your cake and you get to eat it too!

From personal experience I have seen projects which were hardly unit tested at all but were focussed on business delivery and tested by their actual end-users from day one being much more successful (and even stable too) than projects which took double the time to build, were full of junit tests but didn't have that business focus. My point here: I don't know really. *Maybe* unit testing everything is overrated for business applications and actually sucks away time that could be used to stay focussed on the business/functionality side of things.

I have to agree, though I couldn't put it in such black and white terms based on my own experience.

For business systems (i.e. not system software or whatever) let's focus on functional, end to end, business centric testing. And yes, at the expense of (J)unit testing.

It wasn't meant to be a black and white remark though. Lots of times unit testing is great, and it sure saved my bottom a couple of times. What I am saying is that I don't think overdoing it is a good thing. A lot of XP people (including Scott Ambler, who I specifically asked about it at a conference one month ago) think that any code should be covered 100% by test cases. I think that goes way too far, for your average business system that is, as a lot of those tests won't be of any real value at all; and my concern about keeping focussed on the business side of things is just that writing those tests costs time which could alternatively be spent communicating with customers etc. And functional testing by real end users, not so much the average QA staff I've seen... that works really great for getting a product out of the door that fulfils actual customer demand.

Disclaimer: this is all written from the perspective of shops that have to deliver business applications with limited resources for customers that are usually in a hurry to get their problems fixed; not talking about OSes, frameworks, large public sites etc.

If someone is to assure "quality" of a product I hope it has more to it than just finding code bugs. In our organisation, the QA is responsible for checking that the user interface works as expected, that similar functions works similarly UI-wise throughout the entire product, and such things. Aspects of the output which are difficult for a single developer to have a coherent view of. This is useful QA work in my opinion. No amount of "testing" by any individual developer is going to be able to remove that work.

The things found by QA should be fixed and recurring problems should be put into a guideline document that the developers can use, so that there's a feedback loop between the developers who have a narrow view, and the QA testers which have a broad view.

To say that "everyone should be all things" is silly. "Everyone should be like Superman and everything would be great". Right. It just doesn't work that way.

It has little to do specifically with software, and everything to do with complex system development. The principles and practices that the GAO is advocating are what the agile community have been talking about all along.

I work for a small English company, which I joined a little more than two years ago. The biggest problem at the time: bugs were delivered to the clients, with or without QA, and the costs of developing applications were up.

When you develop an app for 10 bucks, with 5 of that as profit, and you develop it in 2 weeks, that means it costs you $2.50 per week to develop. Delivering an application with bugs increases costs for 3 reasons:

- Your image is jeopardized.
- Solving the bugs does not mean you're paid extra, but it still costs. If you need another week to solve the bugs, for which you get nothing, it still costs you $2.50, so the initial profit of 5 is now 2.5.
- In the time you spent solving bugs, you could have done something else: developing for a week could have got you 5 income - 2.5 cost = 2.5 profit. Since you were busy fixing bugs, this money is lost too. So what is your real profit?

So doing TDD, building automated tests for everything from code units to complex functional and business requirements, can really get your costs down. If it takes you 2 weeks and 3 days to develop the application, completely tested, it costs 5 + 1.5 = 6.5; you get 10 and keep 3.5. And that's only in the first iteration: in the next, having tests means that you can refactor a lot and move forward faster, which means your costs will decrease more and more as the project gets bigger. You won't fear changes in big projects anymore.
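The arithmetic in the two paragraphs above can be checked directly; a quick sketch in Python, with all the numbers taken from the example itself:

```python
# Reproducing the arithmetic above: a $10 job with $5 profit over
# 2 weeks implies a development cost of $2.50 per week.
price, cost_per_week = 10.0, 2.5

# Buggy release: one unpaid extra week of bug fixing...
profit_after_fixes = price - 2 * cost_per_week - 1 * cost_per_week   # 2.5
# ...plus the profit the lost week would have earned elsewhere:
opportunity_cost = 5.0 - cost_per_week                               # 2.5
real_profit = profit_after_fixes - opportunity_cost                  # 0.0

# TDD release: 2 weeks + 3 days (0.6 of a week) of up-front testing.
tdd_profit = price - (2 + 0.6) * cost_per_week                       # 3.5

print(real_profit, tdd_profit)
```

The buggy release nets nothing once the opportunity cost is counted, while the fully tested release keeps 3.5 of the 10, exactly as the example claims.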

So TDD, pair programming, and the other agile methods do work, and very well. We have implemented them for the last two years, and more intensively for the last year, with great results. Some of the projects are BIG, and have been in development for 1 to 1.5 years, but they are still quite easy to change and manage, because of the tests.

Using TDD really decreased the number of bugs, and made the critical bugs almost disappear. The bugs we notice now only show up as "change that label to ..." or "this is spelled incorrectly", or cases where the business requirements weren't understood correctly. But we are now tackling that front as well, with Fit/FitNesse.

Actually, I've worked in a variety of F500 companies, non-F500 companies, and government organizations. And yes, I've applied agile techniques in these organizations. As for a real world example, ever hear of Eclipse? - Scott

Yes. I use it everyday. Pretty mediocre. Found some bugs in the SWT libraries.

Being that we've established the typical application as being "extremely crappy," Eclipse is ahead of the game.

Compared to other IDEs?

I've never managed to tolerate Eclipse for more than a couple of days, and that's usually only because it had some plugin that I wanted to use. NetBeans just works. Eclipse just needs one more plugin.

Eclipse is a great example of a product that is very different from the average business systems most of us grunts are building. For Eclipse, requirements can be very clear, and the end users are developers who are used to thinking about specifications. Stability, and hence unit testing, is ultra-important for a product like Eclipse, and the resources behind the project are huge (including all those people developing for it in their free time). Furthermore, due to the visibility of the project, it has a test base not many products can dream of: it is heavily beta-tested for free, and you don't need to deal with company political warfare to get your specification. Etc., etc.

We had Scrum founder Ken Schwaber come and speak at our company, and he said there were alternative roles for QA folks. One he cited was Scrum master, so as we move towards agile we are using QA staff in that role. I think that in the end, if you eliminate QA as this article proposes, you will fall into the trap of developers testing their own code, which is proven to fail. Customers don't have the time to be substitutes for QA.

Yup, software is perceived as higher quality when developers and end-users and testers work together throughout the planning and development cycles.

What a revelation. Nobody ever said that before.

Before the 60s, I mean, specifically regarding software, I mean.

Creeping Jeebus ! The nakedness of this particular emperor is so visible. I simply can't understand why people pay attention to these 'agile' salesmen.

Now, understanding why it does not happen that way would be valuable. Understanding why it is unlikely to change would be valuable.

To understand why many large organizations will, in fact, preserve a separation of concerns between development and QA, how QA is not the same as testing, and why it is NOT a good idea for developers ALONE to be 'personally responsible' for product quality, talk to a business manager or an experienced management consultant, not a software platitude recycler.

But, yeah, more press for the salesmen, I guess, here at TSS.

I have to agree here. We had ThoughtWorks consultants (they became known within the company as "I thought it worked") come to our company, and they lasted 3 weeks. The story cards were comical; they encouraged us to do no upfront design and to forget architecture.

There are components in XP I really like, such as unit testing and direct communication with the business guys. But with every process, you extract what you truly need.

I'm not sure if you are saying Agile doesn't work because these guys couldn't produce, but I don't think that is fair.

Everything boils down to a people problem, doesn't it? Methodologies come and go. I'd spend more time working on getting the best people on my team. That is truly the most difficult task in software. Experienced and knowledgeable people will produce good results regardless of (and many times in spite of) the chosen process.

Notice I said "getting the best people on my team" and not "training the people on my team". I can't make Joe Scripty a better programmer - only teach him a few tricks. Ivanna Learn has promise though.

Well said. This is the thing that drives me crazy about the focus on process. People think that they can hire a bunch of trained bonobos and if they have the right process they can produce great software. The developer is just a cog in the machine. Completely interchangeable with any other developer.

I think that you might want to distinguish one bad experience with a team claiming to be agile from the viability of agile software development in general.

First and foremost, we really don't know what happened with the TW guys, all we have is the biased report of someone who may not have even been directly involved with the team. As we've seen in this conversation about my presentation, there can be a significant difference between what gets reported in a blog and what actually happened in the real world.

My advice to everyone is to do some reading on your own, talk with some people with actual experience trying agile techniques, and then think it through for yourself. As I indicated in another posting, agile techniques work, they're growing in popularity, and they're here to stay. The agile adoption curve is virtually identical to the object adoption curve of 10-15 years ago. In the early 90s a lot of people chose to stick their heads in the sand and ignore OO, and many people are doing the same when it comes to agile software development. It's always a good idea to learn from the mistakes of others.

Ethan, do you realize that you sound like one of the people from the early 90s that said that object technology was BS, didn't work, blah blah blah? The fact is that agile techniques work, many people are successful applying them, and they're clearly growing in popularity.

Instead of trying to tear things down, why don't you instead try to find ways to help other people learn effective software development techniques? If you think something is missing from agile, then why don't you find positive ways to help fill in those holes? Or would that require more maturity than you're capable of?

- Scott
http://www.ambysoft.com/scottAmbler.html

LOOOOOOOOOOOL

Scott, following the analogy, do you really think you could sell yourself as a new Alan Kay?

Scott, do you realize that you sound like one of those people that didn't read my post ?

Of course much of what you are selling works. Of course it does! There is no controversy here. We knew that in the 60s, for software. We knew it in the 1200s when we were building churches, too, but that's (maybe) another story. Does it work always, everywhere? Of course not, no more than anything else. No biggie.

The only thing - the ONLY thing, Scott - that irritates me about the brand you are trying to sell is that you pretend that it's new - that something called 'agile' invented so many of these ideas. That's nonsense, and you know that better than most, because you've studied the history of software development.

You're even using your response to my post in an attempt to reestablish your position as an innovator and a supporter of some kind of rebellious new idea that guys like me refuse to accept - but you are not a supporter of a new idea, at least as far as the topics covered in this thread are concerned, and guys like me have these lean options for methodology built into our practice. (We also know some of the times they will not work.)

Now, some of the stuff TSS spouted here is simply untrue and exaggerated. Integrating QA staff with development staff is not the same as doing away with QA staff. I see that as a way to get attention, which is OK ... sort of. I am not blaming you for that, as it is TSS, and possibly the American way of BS.

But I am unhappy with the claims of originality coming from not only yourself but any number of people concerning 'agile' methodologies. It creates an unnecessarily divisive posture and slows methodological progress. That's why I posted.

The only thing - the ONLY thing, Scott - that irritates me about the brand you are trying to sell is that you pretend that it's new - that something called 'agile' invented so many of these ideas. That's nonsense, and you know that better than most, because you've studied the history of software development.

Show me where we're saying that it's new? I even posted a link to Larman and Basili's article on the history of Iterative and Incremental development that traces its roots back to the Mercury space program in the late 1950s.

Kent Beck freely admits that XP circa 1996 was based on all the practices that he had collected in his experience that worked. He never claimed to invent them, but rather just the specific collection of practices that supported each other.

If the names bother you, how about Responsible Programming or Practical Development instead of XP or Agile?

If supporting Agile Development means that people like Scott Ambler and myself are supporting old practices, then so be it. I wish that more people would support these practices so that we could save the global economy tens of billions of dollars per year in lost revenue due to software projects that fail outright, are late, are full of defects and, sometimes, kill people.

He never claimed to invent them, but rather just the specific collection of practices that supported each other.

Right, and that's BS.

Why not call it ... software development ? The effort to brand it as something different is also BS, Dave.

Of course we all support better software. That's not the exclusive province of projects run with an iterative methodology ... or of any single methodology.

We need to get off the brand-name hype and start thinking in terms of a toolbox of MANY methodologies. Sometimes big, fat, slow and formal is appropriate. We need to have total business, not software development unit, focus.

Why not call it ... software development ? The effort to brand it as something different is also BS, Dave.

Because so many people are NOT doing it, and thus we have the level of crap in software development that exists today. You speak of millions of successful projects - show me the stats. I can quote the Standish CHAOS reports, the US GAO, the Canadian Auditor-General... what do you have? Are there successful projects out there? Yes. Are there more challenged or unsuccessful projects? Absolutely.

Of course we all support better software. That's not the exclusive province of projects run with an iterative methodology ... or of any single methodology.

Then why aren't you open to doing something different (compared to current common practices)? Is testing a good thing? Are peer reviews a good thing? Is working closely with the Customer a good thing? Is shipping smaller chunks of software at regular intervals a good thing?

If you're doing all of this, then I applaud you! If not, what are you afraid of?

Dave Rooney
Mayford Technologies

Testing ? Peer review ? JAD ? Incremental releases ? Those ARE current common practices, Dave. Why do you think they are new ? I'm a consultant, so I've been many places ... none of these are unusual practices.

Stats for successful programs ... Chase (the giant financial) WORKS. Multiply that by however many American businesses you care to define as successful, times the decades they've been successful.

I'm well aware that the stats for project 'failure' range from 33% - 67% depending on who you talk to. But ... Chase still works. Let's not ignore the problems ... but let's not try to overstate the problems, either.

'Crap' is an interesting term ... I've heard it applied to Windows software, for instance, many times. Often, I think people use the word because they are speaking from a software developer's point of view rather than a businessman's point of view. Both views are legitimate, of course. From the businessman's point of view, however, the Windows OS line is the best consumer software of all time. He does not confuse the laudable desire for higher quality with the goals of the organization.

Testing ? Peer review ? JAD ? Incremental releases ? Those ARE current common practices, Dave. Why do you think they are new ? I'm a consultant, so I've been many places ... none of these are unusual practices.

I was too till last year. I didn't find them anywhere. Except where I tried to implement them. And I hear the same from many others.

Testing ? Peer review ? JAD ? Incremental releases ? Those ARE current common practices, Dave. Why do you think they are new ?

Why do you keep saying that I'm saying that those practices are new? Going back over the posts, I can't see anywhere where I gave that impression.

I found out about XP just over 5 years ago. Another consultant and I were doing some design on part of a framework for an upcoming large project. We were working together in Rose on his machine, then we switched to my machine and wrote code for that design. The act of writing the code showed where we needed to change the design somewhat, so we switched back to his machine. While reworking the design a bit, we refined it some more. Then, we switched back to my machine and updated the code.

My colleague at that point said, "Hey, this sounds like that Extreme Programming thing I heard about!" A quick search on the Net brought me to sites describing XP and its practices. Of the 12 practices originally defined in XP, I had done 10 of them... we were Pair Programming at the time we searched the Internet for XP information... with Test-Driven Development and the System Metaphor being the only ones that I hadn't done before (although I had done plenty of after-the-fact testing).

One of the things that I read about XP almost immediately was that the practices weren't new, but the combination of them and the level to which you used them were. Can you honestly tell me that you have been doing Test-Driven Development for your whole career? I don't mean writing some tests after the fact, but writing a test, then writing the code to make that test pass. Then writing another test, writing the code to make that test pass. Rinse, repeat.
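For anyone who hasn't tried it, the cycle described above looks like this in miniature. (Python and a trivial `add` function are used purely for illustration; nothing here comes from the thread.)

```python
import unittest

# Red: in real TDD this test is written first and fails, because
# `add` doesn't exist yet.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    # Next cycle: write another failing test, then extend the code.
    def test_adds_negatives(self):
        self.assertEqual(add(-2, -3), -5)

# Green: write just enough code to make the tests pass. Rinse, repeat.
def add(a, b):
    return a + b

# The growing suite doubles as the regression safety net:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The discipline is in the ordering: no production code without a failing test demanding it, so every behavior ends up covered as a side effect.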

I don't believe for a second, though, that agile methods are without drawbacks:

You need a high level of competency on the team. This may sound bloody obvious, but the better the people are, the less time you need to spend on up-front design.

A corollary to the first point is that people who aren't that good will stick out like a sore thumb. It can make people very uncomfortable when they have to be accountable for their work on a daily basis.

You absolutely have to have an engaged Customer. Again, this applies to any development method, but it's especially important in agile.

You need to have an integrated team, not developers who beg for changes from the Database High Priests :), and then throw their code over the wall to the testing group.

Those points can be applied to any situation, but they are especially important when using agile development methods.

Again, and just to make myself Crystal Clear (which is an agile method from Alistair Cockburn!), the practices aren't new, but the collection of them together and how they work together are.

I'm a consultant, so I've been many places ... none of these are unusual practices.

I've been an independent consultant for 12 years, and worked as an employee of a consulting company for 3 years before that. I've "been around" too. What I have seen is people "try real hard" to do the practices of whatever waterfall methodology, or even RUP, but they always toss them out and just hammer away at the code when push comes to shove at the end of the development cycle. Places with really good people tend to do the practices that support agile development in the end anyway... testing, incremental development, customer involvement, etc.

The Standish Group's initial 1994 CHAOS report was based on surveys of 365 companies and represented 8,380 applications. In the 2001 report (based on year 2000 data sampling), they extrapolated their sampling to indicate that there were 300,000 new projects started in the U.S. Of those, 137,000 were late, over budget, didn't ship the expected features or a combination of those problems. 65,000 projects failed outright. Think for a second about the economic impact of just the 65,000 failed projects. If we, as an industry, could cut that rate in half, it would mean literally billions of dollars in revenues and tax money that would be saved. The factors outlined by the Standish reports for achieving project success align very well with agile methods, and the Standish people have no vested interest in agile.
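Taking the post's Standish figures at face value, the proportions work out as follows (a quick sanity check on the numbers quoted above, not new data):

```python
total_projects = 300_000   # new U.S. projects, per the 2001 report
challenged = 137_000       # late, over budget, or missing features
failed = 65_000            # failed outright

succeeded = total_projects - challenged - failed        # 98,000
failed_rate = failed / total_projects                   # roughly 22%
troubled_rate = (challenged + failed) / total_projects  # roughly 67%
```

That roughly two-thirds "challenged or failed" figure is the same ballpark quoted elsewhere in this thread as the 33-67% failure range.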

So, forget about the arguments about the practices being nothing new... we agree on that point. Educating people on how to use them in order to improve their success rate is perfectly valid, just as educating people on good design and coding practices is valid.

I don't believe for a second, though, that agile methods are without drawbacks:

You need a high level of competency on the team. This may sound bloody obvious, but the better the people are, the less time you need to spend on up-front design.

A corollary to the first point is that people who aren't that good will stick out like a sore thumb. It can make people very uncomfortable when they have to be accountable for their work on a daily basis.

You absolutely have to have an engaged Customer. Again, this applies to any development method, but it's especially important in agile.

In my opinion, in most organizations the purpose of processes is to make up for the fact that attracting and retaining enough highly competent and motivated people to staff hundreds or even thousands of projects seems impossible. Not only because such people are in demand, but also because it's almost impossible to tell who they are without working with them. Hiring a person is expensive. A person who's great on one project may be worthless on another. Consequently, companies decide they need to solve the problem through other means: process.

The same thing goes for customer involvement. Customers that have the time and patience to really be involved in the development project are rare. And when you do find a customer where a few individuals make themselves available, chances are those individuals aren't in the least bit representative of the general user community.

Furthermore, predictability over a long period is oftentimes much more important than efficiency.

Agile methods don't address these problems. At least not from what I've seen. Properly implemented "heavyweight" processes do (but I'll say that's extremely rare). These problems need to be addressed.

IMHO what's needed is an effective, scalable way to ensure the right people are assigned to teams. I think that would mean teams could be much smaller. Smaller teams means smaller staffs, which means more money available to ensure people stay (and to attract them away......).

Ethan, thanks for sharing your complete and utter ignorance of what I actually spoke about the other night.

The reality was that I was very clear about the fact that none of this stuff is new, and that if people were to go back and read the writings of Fred Brooks, Gerry Weinberg, Tom DeMarco, and others from the 1970s, they would see that these ideas have been around for a long time. I was also very clear that we're simply going back to those values that proved to work in practice.

Furthermore, I have also stated the same thing in print many times, even as far back as 2002 in the Agile Modeling book.

Luckily Floyd recorded the presentation and will be posting it online as several downloads; he's going to break it up into multiple parts, so you'll actually be able to see what was said at some point.

Ethan, you're the one who is claiming that I said agile software development techniques are new. Had you actually been at the presentation, or taken the time to read my writings on this subject, you would know that this simply isn't true. In short, from what I can tell you're just one more crank, and that's why I'm reacting to your postings the way that I am.

And that, after all is said and done, is really the point I wanted to make.

All infighting to the side, however, here's an interesting piece re: people from a current newsfeed:

<"America's Corporate Coach" is coming to Milwaukee to provide advice to Wisconsin employers, who will face the greatest labor shortage in American history over the next several years.

Author and business consultant Jack Lannom will be the keynote speaker at the Bravo! Entrepreneur luncheon, which will be held May 3 during the Wisconsin Business & Technology Expo.

Lannom will provide advice for company owners and executives about how they will need to change their core human resource philosophies to address the labor shortage that looms just around the corner.

Wisconsin will face the "perfect storm" over the next several years, when baby boomers (born 1946-64) are retiring en masse, to be replaced by a much smaller Generation X (born 1965-79) and the beginning of the Millennials generation (1980-1999).

According to Lannom, a drastic change is coming to corporate America. The U.S. Bureau of Labor Statistics predicts that by 2011, the American workforce will experience the worst shortage of skilled workers in U.S. history. As the baby boomers retire over the next six years, the U.S. economy will lose an estimated 10 million skilled workers.

The shortage will create an unprecedented need for organizations to attract and retain their high-performing and loyal employees.

Lannom's message is described in detail in his newest book, "People First."

"The new business model for high profitability and long-term viability will be a philosophy that puts people first," Lannom said. "Every human being needs to know that who they are and what they do in a company has purpose, meaning, and immense significance. People will work in excellence when they are treated excellently, and with this excellence will come profits … If you want your profits to grow, you must grow people first."

Lannom developed his philosophy after spending 30 years in the industry and working in the trenches with top executives from various Fortune 500 companies such as Citibank, AT&T and Blockbuster Video.

So selling the importance of people is not just for software anymore, I guess. How this interacts with massive layoffs is another question ... Anyway, I imagine that Lannom would also point out that his ideas are not new.

Ethan, do you realize that you sound like one of the people from the early 90s that said that object technology was BS, didn't work, blah blah blah? The fact is that agile techniques work, many people are successful applying them, and they're clearly growing in popularity.

Instead of trying to tear things down, why don't you instead try to find ways to help other people learn effective software development techniques? If you think something is missing from agile, then why don't you find positive ways to help fill in those holes? Or would that require more maturity than you're capable of?

- Scott
http://www.ambysoft.com/scottAmbler.html

Scott, what's with all the fallacious rhetoric? You would be a lot more persuasive (to me) if you just made reasoned arguments. Because someone said in the past that OO wasn't going to work doesn't mean that anyone who says some other new philosophy won't work is wrong. Your critics are just immature?

If you really feel that you are right in your assertions, you shouldn't need to stoop to these cheap tricks. I feel that you do have some good points so I don't know why you are tainting your posts with this kind of thing.

Ambler is just trying to refocus the discussion on whether his 50-year-old ideas work. Of course they work; that was not the point of my post - but it is harder to respond to my point. Maybe it's unnecessary. We all have to make a living somehow.

I'm a young guy, by the way, not an old guy. Sometimes, though, I believe we should acknowledge that the benighted managers of those massive old COBOL projects knew very well what they were doing - especially considering that the American business world still runs on those programs. One of the things they understood was that their organizations were not about software, and that projects could not necessarily be run successfully based only on the practices required to produce high-quality software. Sometimes it had to be processes that militated against those practices ... sometimes not. Just like today.

New ideas are good and welcome. But someone trying to claim these old ideas are new just gets my goat. I feel like someone's trying to tell me half the story and asking me to pay. Feh.

I'm a young guy, by the way, not an old guy. Sometimes, though, I believe we should acknowledge that the benighted managers of those massive old COBOL projects knew very well what they were doing - especially considering that the American business world still runs on those programs. One of the things they understood was that their organizations were not about software, and that projects could not necessarily be run successfully based only on the practices required to produce high-quality software. Sometimes it had to be processes that militated against those practices ... sometimes not. Just like today.

It kind of makes software development sound like a fantasy novel where the world is dependent on some magic/technology that nobody remembers how to work, but that's OK because it just works. Until, of course, it stops working or someone evil figures out how to manipulate it.

Only instead of a twisted wizard with a devil whispering in his ear, it will be someone from marketing with a consultant whispering in his ear.

"You can rewrite the system that's been working 24x7 for 30 years in 4 days using RoR and agile methods!"

LOL ... I think it was a science-fiction writer, Arthur C. Clarke, who argued that 'any sufficiently advanced technology is indistinguishable from magic'. Which is true, in most senses. I could barely tell you how a light comes on when I flip the switch. Why eating pills works when I am sick is beyond me.

But it was Isaac Asimov who pointed out that 'the distinguishing characteristic of the religion of science is, that it works' - and while I would not try to equate magic and religion, the two thoughts work well together ;-) ... and in that sense, technology differs from magic ...

If the QA function is going to be moved from a separate team to the development team, you need to be careful. Capers Jones did a study that found that in large projects (these were all non-agile projects), quality control is the most important single factor in determining success or failure:

"Effective software quality control is the most important single factor that separates successful projects from delays and disasters. The reason for this is because finding and fixing bugs is the most expensive cost element for large systems and takes more time than any other activity."

If the QA function is going to be moved from a separate team to the development team, you need to be careful.

Just to be clear - I'm not advocating moving the QA responsibility to the developers. I'm advocating the integration of the QA people into the development team.

What we're trying to get away from is "throw it over the wall" testing, in which development says they're finished and the QA team spends a very long time testing the system. Essentially, the whole process, including testing, is iterative.

With respect to the title of your post, I couldn't agree more - quality control is the most important factor. However, the QA team is not the sole place where that needs to occur. Quality can be very effectively controlled through iterative development, which includes testing at all times. Having the Customer involved provides the feedback required to provide real business value, and having the developers write unit tests provides both feedback that the code is working, and a regression safety net. Quality control is the responsibility of the entire team.

If the QA function is going to be moved from a separate team to the development team you need to be careful. Capers Jones did a study that found that in large projects (these were all non-agile projects) Quality Control is the most important single factor in determining success or failure:

"Effective software quality control is the most important single factor that separates successful projects from delays and disasters. The reason for this is because finding and fixing bugs is the most expensive cost element for large systems and takes more time than any other activity."

Again, I have to respectfully disagree. I am sure that Capers Jones is far more educated and scientific than I am, but I guarantee you (because I've seen it happen time and time again in the real world) that in most applications that ultimately fail, by the time the first line of code is written the application is already doomed. In other words, way before QA has a chance to tell you that something is broken, it is already way too late.

Someone mentioned Ford. Do you think that Ford sticks a whole bunch of untrained people on an assembly line, gives them some tools they've never seen before, gives them a bucket of screws and some sheet metal, lets them play with it until something falls off the other end of the assembly line and then turns whatever random pile of resulting junk over to QA and says "make sure this car works"?

Well, that's what we do in the computer software industry. It's insane, but we tell ourselves: "We've got QA, so everything will be OK."

In other industries, they're pretty sure that what comes off the assembly line will work. They designed it to work. They modeled it. They simulated it. They put it in front of focus groups. They sketched it. They built a prototype. They trained the line workers. They did a trial run.

"If architects designed buildings the way that software people design software, the first woodpecker that came along would be the end of civilization."

QA is a subset of quality control, not the entirety of quality control. Your illustration of putting untrained people on an assembly line is a quality control problem. The article makes it clear that quality control has to be integrated into the entire software development process.

Projects that are doomed to fail before the first line of code is written usually have management that is so bad that no one has any confidence in the management team from the beginning. They make bad decision after bad decision and refuse to listen to the development team.

I recently left a project that was a $31 million failure and it was doomed from the beginning. Everyone on the development team who had enterprise-level experience told the government managers very clearly that what they were doing would fail, but they refused to listen. The indifference that the managers showed for the opinions of the developers was matched only by the low opinion in which the developers held the managers.

That failure was guaranteed by a dysfunctional management culture that couldn't even get to the point where quality control mattered. It takes a dysfunctional culture to doom a project to failure before it is started. Assuming that you don't have that then I think the article is correct about quality control being the most important factor.

Cameron, QA is a subset of quality control, not the entirety of quality control. Your illustration of putting untrained people on an assembly line is a quality control problem. The article makes it clear that quality control has to be integrated into the entire software development process.

Then you and I are probably in agreement, but using different terms; I would say that the "illustration of putting untrained people on an assembly line is a quality problem."

The term "quality control" to me has always implied the statistical aspect of measuring products against their specifications. If you do a quick google on "quality control", you'll see that the top results are all dominated by statistical references (measuring rates of defects in manufacturing, charting statistics of failures, etc.)

To me, the term "quality" encompasses quality control (what we call "QA" in software, since it is something that is supposed to measure the resulting product against its specifications), but it is much broader. Quality is a way of life. Quality is about building an environment for success, about planning for success, about reaching consensus that success is achievable at every stage of vision, architecture, design, coding, etc.

Someone (Ethan?) mentioned in a different thread that some of the talk (e.g. getting rid of the QA team) was just typical yank bluster, but it's not. I honestly believe that zero defect software is an achievable goal, although not necessarily with the platforms, languages, tools and processes that we use today. Furthermore, the institution of QA into software development has given managers a false hope and a false sense of quality. I think we need to burst that bubble and start looking at the core problems of software quality.

If you do a quick google on "quality control", you'll see that the top results are all dominated by statistical references (measuring rates of defects in manufacturing, charting statistics of failures, etc.)

To me, the term "quality" encompasses quality control (what we call "QA" in software, since it is something that is supposed to measure the resulting product against its specifications), but it is much broader.

And in other industries, like the one you mentioned, QC and/or QA doesn't happen just at the end. The majority is during the process.

If you do a quick google on "quality control", you'll see that the top results are all dominated by statistical references (measuring rates of defects in manufacturing, charting statistics of failures, etc.)

To me, the term "quality" encompasses quality control (what we call "QA" in software, since it is something that is supposed to measure the resulting product against its specifications), but it is much broader.

And in other industries, like the one you mentioned, QC and/or QA doesn't happen just at the end. The majority is during the process.

I'm listening. Tell me how you see quality control integrated into the entire software product lifecycle.

If you do a quick google on "quality control", you'll see that the top results are all dominated by statistical references (measuring rates of defects in manufacturing, charting statistics of failures, etc.)

To me, the term "quality" encompasses quality control (what we call "QA" in software, since it is something that is supposed to measure the resulting product against its specifications), but it is much broader.

And in other industries, like the one you mentioned, QC and/or QA doesn't happen just at the end. The majority is during the process.

I'm listening. Tell me how you see quality control integrated into the entire software product lifecycle.

I don't. I was agreeing with you, I think, and adding to what you said, I think. :)

Or do you mean how do I see that it should be integrated? Well, not to cover everything, I see QC as everyone's job. Someone needs to be the SME, but just like PM, it is everyone's job. Small cycles. QA should be involved then. Whatever is learned should be rolled in and tested for up front next time. It isn't easy. It isn't fun. Somehow we (developers) have got to have the time and the gumption to test.

Do you think that Ford sticks a whole bunch of untrained people on an assembly line, gives them some tools they've never seen before, gives them a bucket of screws and some sheet metal, lets them play with it until something falls off the other end of the assembly line and then turns whatever random pile of resulting junk over to QA and says "make sure this car works"?

Well, hopefully not anymore.

Side note, I know of one guy who did start off in one of the auto plants (before he HAD to get out) and some of the things he had to tell, well, you'd be surprised cars run at all, or at least as many as do.

This has been an interesting thread, but unfortunately a lot of FUD seems to be getting shared. Here's a synopsis of what I said last night:

1. At the beginning of the talk, and several times during, I made it very clear that some of what I was talking about wouldn't be possible in the current organizational culture that many people work in. I also pointed out, via an example, that many people would even struggle with some of the ideas presented because it was too far out of their current experience. To try to make this point, I asked the audience whether they could rename a column in the Customer table within their production database and safely roll it into production in a single day. Only one person said that was possible, but he worked for a small company so I said we should ignore that. Later in the presentation I showed them how they could in fact safely implement this change following an agile technique called database refactoring. The solution is presented at http://www.agiledata.org/essays/databaseRefactoringCatalogStructural.html#RenameColumn, BTW. The implication is that even though they couldn't conceive of a solution on their own, and certainly couldn't within their current environments, agilists could in fact do so.

2. Most of the talk focused on agile techniques, such as TDD, refactoring, agile approaches to modeling and documentation (see http://www.agilemodeling.com), continuous integration, and so on. The point was to show how agile techniques address many of the quality concerns typically addressed by QA departments.

3. The 10% figure was an off-the-cuff answer to a lady who badgered me for it. That question had followed another question by someone who asked for an org chart for a typical agile team. My answer is that the roles on agile teams are pretty straightforward -- Coach/Scrum Master, Programmer/Developer, and Customer/Stakeholder. She struggled with the concept that we don't have the range of specialist roles (such as QA) that you typically see on traditional projects, that instead agilists are typically generalizing specialists ( http://www.agilemodeling.com/essays/generalizingSpecialists.htm ).

4. I was very clear about the need for a separate QA role which validated the work of the project team. I had a slide which showed a logical breakdown of different environments (developers, project, pre-production QA, production). I also had a slide showing different development iterations, and very clearly stated (several times in fact) that we hand off working software into the pre-production test environment on a regular basis.

5. I pointed out that testers and QA professionals can still be active members of agile teams, but that they would have to expand their horizons and be prepared to do more than just QA or just testing. I also stated that they bring valuable skills to the table that should be shared with developers to enable them to improve their skills via pairing and other forms of collaboration.

I think that many of the traditional QA people in the audience, and perhaps in this conversation, struggled with the concept that agilists address quality in many of their development techniques and don't need as much involvement by QA/test specialists. Many people also struggled with the idea that there is far less effort invested on documentation, we still do it as I point out at http://www.agilemodeling.com/essays/agileDocumentation.htm , it's just that we focus on high-value documentation and thus end up doing a lot less of it.

To try to make this point, I asked the audience whether they could rename a column in the Customer table within their production database and safely roll it into production in a single day. Only one person said that was possible, but he worked for a small company so I said we should ignore that. Later in the presentation I showed them how they could in fact safely implement this change following an agile technique called database refactoring. The solution is presented at http://www.agiledata.org/essays/databaseRefactoringCatalogStructural.html#RenameColumn, BTW. The implication is that even though they couldn't conceive of a solution on their own, and certainly couldn't within their current environments, agilists could in fact do so.

What I am struggling with is what does this DBA trick have to do with the topic? Are you saying that you couldn't create a new column and a trigger unless you are 'agile'? Are you claiming that agile methods caused this trick to be thought of? This is a technical technique and it's of questionable value.

What I am struggling with is what does this DBA trick have to do with the topic? Are you saying that you couldn't create a new column and a trigger unless you are 'agile'? Are you claiming that agile methods caused this trick to be thought of? This is a technical technique and it's of questionable value.

No, I believe Scott's point (certainly what it communicated to me) is that there are ways to refactor even databases that don't necessarily require a full release, conversion and regression testing.

What I am struggling with is what does this DBA trick have to do with the topic? Are you saying that you couldn't create a new column and a trigger unless you are 'agile'? Are you claiming that agile methods caused this trick to be thought of? This is a technical technique and it's of questionable value.

No, I believe Scott's point (certainly what it communicated to me) is that there are ways to refactor even databases that don't necessarily require a full release, conversion and regression testing.

It's the mindset, not the trick itself.

Dave Rooney
Mayford Technologies

It's a specious argument that because only one (ignorable) person raised his hand, this can only be thought of by an 'agile' person. It also ignores the fact that all the code that refers to the old column still needs to change. We are supposed to pretend that work doesn't count, I suppose.

What I am struggling with is what does this DBA trick have to do with the topic? Are you saying that you couldn't create a new column and a trigger unless you are 'agile'? Are you claiming that agile methods caused this trick to be thought of? This is a technical technique and it's of questionable value.

No, I believe Scott's point (certainly what it communicated to me) is that there are ways to refactor even databases that don't necessarily require a full release, conversion and regression testing.

It's the mindset, not the trick itself.

Dave Rooney
Mayford Technologies

It's a specious argument that because only one (ignorable) person raised his hand, this can only be thought of by an 'agile' person. It also ignores the fact that all the code that refers to the old column still needs to change. We are supposed to pretend that work doesn't count, I suppose.

The trick is eliminating QA. A QA person would have asked: Exactly why do you consider grepping the codebase as good as regression testing?

And eliminating the DBA: Exactly what do you gain from renaming the column?

And the project manager: Why are you spending time and money renaming a column?

If you are truly agile, there is no one to question the wisdom of the developer except the customer, and the customer probably doesn't know what it means.

Seriously, I don't think being agile means the same thing as being irresponsible. There's a difference between embracing change and snorting it.

If you are truly agile, there is no one to question the wisdom of the developer except the customer, and the customer probably doesn't know what it means.

And considering a lot of the developers out there, I'm not sure this is always such a good idea.

Let me point out that I am not saying I am against Agile development. On a very personal level I find it very attractive. But I question it along with everything else.

In fact the situation in the first quote would be a serious problem for traditional American business, regardless of the Godlike insight of the developer. I presume the first quote is either out of context or meant as a joke?

Let me point out that I am not saying I am against Agile development. On a very personal level I find it very attractive. But I question it along with everything else.

I think that's more or less how most of us feel.

Personally, before I create anything, I always try to ask myself: "Is this going to be used after I create it? If so, how? Is it worth the effort? Will I really finish it to the point that it will be useful? If not, do I need it simply a step in my thought process?"

There are times when spending months documenting requirements makes perfect sense. IMHO, requirements are something that either belong in an incredibly refined and reviewed document, or just as a hit list in a spreadsheet or database. Half-baked requirements documents are the biggest waste of time on the planet. Either the problem is complicated, poorly understood, and needs something to bring cohesion to the project - or it's not and all that's needed is something to track what's done, what's to be done, and what's been thrown out.

The same thing can be said for pretty much any other documentation. Sometimes it's needed, sometimes it's not, but it's always worthless when it's half-baked.

That's the problem with code. Lots of half-baked code is useful (meaning a decent proportion of existing half-baked code, not that more is needed), and consequently no one really tries to stop the production of half-baked code.

That's the problem with code. Lots of half-baked code is useful (meaning a decent proportion of existing half-baked code, not that more is needed), and consequently no one really tries to stop the production of half-baked code.

Where I work no one tries to stop half-baked documentation. Documentation is much more important than code here. If you document something, the results are fairly immaterial.

This is because the all-important process is about documents, not code or results. We continually work to improve our process but we do not work to improve our code or our results.

My dad used to be fond of saying "I don't want excuses. I want results." I feel like saying, "I don't want process. I want results." I can name a dozen problems that our developers deal with on a daily basis without breaking a sweat. Nobody cares. They are busy adding more documents to our process as if one more document is going to solve all our problems. The most ironic thing about this is that we use a document control system that's virtually unsearchable so once you put in there, it's highly unlikely anyone else will ever look at it. That reminds me, I need to email Scott Adams. Apparently he's running out of ideas.

The point was that the traditional community is stuck in its ways. They've in effect given up and assumed that it's incredibly difficult to refactor database schemas. With the exception of the one person in the "very easy scenario" everyone else in the audience worked in organizations that couldn't do the trivial task of successfully renaming a column. If they can't do trivial things, then certainly it must also be challenging for them to make changes which actually matter to the business. This is a clear reflection of the mindset within the traditional data community.

However, the agile community has come along and realized that it should be possible to evolve your database schema over time just like you evolve your application source code. So we thought about it, experimented a bit, and figured out how to do it. We've done something that the traditional data community has never been able to figure out, and frankly are still in denial of (sadly).

The reason why I worked through this example was to point out that people need to think outside of the box, because the agile community is certainly doing so. The example had the additional benefit of showing how it is possible to improve the quality of database schemas, something that many people have given up on.

And yes, during the presentation we also discussed the need to remove the old column and the trigger after the transition period expired. In fact, I spent several minutes discussing this because someone worked in an organization that was so dysfunctional he doubted that they could even do something simple such as drop a column and trigger at some point in the future.

And we also discussed the need to refactor the external applications to work with the improved database schema. One person suggested that we needed tools to do this automatically, a good idea but probably unrealistic. However, I suggested that if he was able to develop a tool that could refactor the schema, the COBOL source code, the Java source, the VB source, ... which was coupled to the schema then he'd probably be a very rich man. Personally, I'm not going to hold my breath waiting for this wonder tool.

When Refactoring Databases, http://www.ambysoft.com/books/refactoringDatabases.html , is published in March you'll see that we go into great detail as to how to implement each individual refactoring, including both database source code (Oracle examples) and application source code (Java examples) and sometimes Hibernate mappings. We also provide the code to run to remove the old schema and any scaffolding code (e.g. the triggers) once the transition period has expired. The book also includes chapters describing the process for implementing the refactorings, it's more complicated than you would think, and for deploying them into production.

Database refactoring is no trick, it's an important and complex technique which frankly we've needed for years. Unfortunately, the so-called "thought leaders" within the data community never really came up with a viable strategy for evolving database schemas; it took the "practice leaders" within the agile community to do so. So perhaps there are a few new things in agile software development after all.
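The rename-column refactoring discussed above can be sketched concretely. The following is a minimal illustration, not Scott's catalog code, using Python's built-in sqlite3 with made-up table and column names; the same pattern (new column, synchronization triggers during a transition period, then cleanup) applies to any database with trigger support:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Original schema: a badly named column we want to rename.
cur.execute("CREATE TABLE Customer (id INTEGER PRIMARY KEY, FName TEXT)")
cur.execute("INSERT INTO Customer (id, FName) VALUES (1, 'Alice')")

# Step 1: introduce the new column and backfill it from the old one.
cur.execute("ALTER TABLE Customer ADD COLUMN FirstName TEXT")
cur.execute("UPDATE Customer SET FirstName = FName")

# Step 2: triggers keep the columns in sync during the transition period,
# so applications still writing FName coexist with migrated applications.
# The WHEN guards keep the two triggers from ping-ponging forever.
cur.execute("""
    CREATE TRIGGER SyncOldToNew AFTER UPDATE OF FName ON Customer
    FOR EACH ROW WHEN NEW.FName IS NOT OLD.FName
    BEGIN
        UPDATE Customer SET FirstName = NEW.FName WHERE id = NEW.id;
    END""")
cur.execute("""
    CREATE TRIGGER SyncNewToOld AFTER UPDATE OF FirstName ON Customer
    FOR EACH ROW WHEN NEW.FirstName IS NOT OLD.FirstName
    BEGIN
        UPDATE Customer SET FName = NEW.FirstName WHERE id = NEW.id;
    END""")

# A legacy application writes through the old column ...
cur.execute("UPDATE Customer SET FName = 'Bob' WHERE id = 1")
# ... and a migrated application writes through the new one.
cur.execute("UPDATE Customer SET FirstName = 'Carol' WHERE id = 1")

# Both columns hold the same value either way.
print(cur.execute("SELECT FName, FirstName FROM Customer").fetchone())

# Step 3 (after the transition period expires): drop the scaffolding,
# then drop the old column itself.
cur.execute("DROP TRIGGER SyncOldToNew")
cur.execute("DROP TRIGGER SyncNewToOld")
```

The expensive part, as the thread points out, is still updating every application that reads or writes `FName`; the triggers only buy a transition window in which old and new code can run side by side.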

The point was that the traditional community is stuck in its ways. They've in effect given up and assumed that it's incredibly difficult to refactor database schemas. With the exception of the one person in the "very easy scenario" everyone else in the audience worked in organizations that couldn't do the trivial task of successfully renaming a column.

I believe you said "in a day". Not being able to rename a column in a day is not the same thing as not being able to rename it.

If they can't do trivial things, then certainly it must also be challenging for them to make changes which actually matter to the business. This is a clear reflection of the mindset within the traditional data community.

Not being part of the data community, I can't really comment.

[snip] We also provide the code to run to remove the old schema and any scaffolding code (e.g. the triggers) once the transition period has expired. The book also includes chapters describing the process for implementing the refactorings, it's more complicated than you would think, and for deploying them into production.

Database refactoring is no trick, it's an important and complex technique which frankly we've needed for years. Unfortunately, the so-called "thought leaders" within the data community never really came up with a viable strategy for evolving database schemas; it took the "practice leaders" within the agile community to do so.

You say that the "data" community never came up with a solution but what about views? What about stored procedures? Are these not solutions?

That it's difficult to refactor the DB is not the data community's fault, it's the software community's fault. The only reason it's hard to rename a column is because there is code out there that depends upon it i.e. there's a very strong coupling between the software and the DB schema. How is that the fault of the data community? If a good abstraction layer was built between the database and the code, changing the DB would be a piece of cake.

It's the software community that has crippled the ability of the DBAs to refine the schema. Don't blame the data community for the developer community's mistakes. The only thing I can blame the data community for is not being more adamant that code not refer to concrete tables directly and allowing the software development community to tell them "no views, no stored procedures."
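The view-based abstraction layer argued for here can be illustrated in a few lines. In this sketch (Python's built-in sqlite3; all table, view, and column names are invented for illustration), application code binds only to a view, so the physical column can be renamed by changing one view definition rather than every caller:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE customer_t (id INTEGER PRIMARY KEY, cust_nm TEXT)")
cur.execute("INSERT INTO customer_t (id, cust_nm) VALUES (1, 'Alice')")

# The view is the contract: applications never see the base table.
cur.execute("CREATE VIEW customer AS SELECT id, cust_nm AS name FROM customer_t")

def fetch_name(cur, customer_id):
    # Application code depends only on the view's columns.
    return cur.execute(
        "SELECT name FROM customer WHERE id = ?", (customer_id,)
    ).fetchone()[0]

print(fetch_name(cur, 1))  # Alice

# The DBA renames the physical column (RENAME COLUMN needs SQLite 3.25+)
# and recreates the one view; no application code changes.
cur.execute("DROP VIEW customer")
cur.execute("ALTER TABLE customer_t RENAME COLUMN cust_nm TO customer_name")
cur.execute("CREATE VIEW customer AS SELECT id, customer_name AS name FROM customer_t")

print(fetch_name(cur, 1))  # Alice - the application is untouched
```

Of course, as Scott notes elsewhere in the thread, this only helps to the extent that every application actually goes through the abstraction layer, which is exactly the discipline this poster says the software community failed to enforce.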

That it's difficult to refactor the DB is not the data community's fault, it's the software community's fault. The only reason it's hard to rename a column is because there is code out there that depends upon it i.e. there's a very strong coupling between the software and the DB schema. How is that the fault of the data community? If a good abstraction layer was built between the database and the code, changing the DB would be a piece of cake.

Think about bigger refactorings. I once was working on a system that had a table (well, class, it used a proprietary ORM) where three columns formed the primary key. Unfortunately, if the person who originally designed the table (a consultant, gotta love 'em) had had any clue about the meaning of the data, the primary key would have been a single column. So you've got thousands of records with tens of thousands of relations to other records that you need to clean up, with more added every day. Furthermore, when you merge two records you can't arbitrarily make one be the master. So now you need to work with a domain expert to define what operations need to be performed, and write a custom script to process the operations and do all the update and delete actions.

That's expensive refactoring. It was also critical refactoring, because junk data was being added every day because the constraints on the table were wrong.

There's a reason why the data community doesn't want you thinking the database is easy to refactor. It's because once the database actually has data in it it can be very difficult to do non-trivial refactorings.

Think about bigger refactorings. I once was working on a system that had a table (well, class, it used a proprietary ORM) where three columns formed the primary key. Unfortunately, if the person who originally designed the table (a consultant, gotta love 'em) had had any clue about the meaning of the data, the primary key would have been a single column. So you've got thousands of records with tens of thousands of relations to other records that you need to clean up, with more added every day. Furthermore, when you merge two records you can't arbitrarily make one be the master. So now you need to work with a domain expert to define what operations need to be performed, and write a custom script to process the operations and do all the update and delete actions.

That's expensive refactoring. It was also critical refactoring, because junk data was being added every day because the constraints on the table were wrong.

There's a reason why the data community doesn't want you thinking the database is easy to refactor. It's because once the database actually has data in it, it can be very difficult to do non-trivial refactorings.

I'm not sure I understand your example completely, but if the table had too many columns in the primary key, that's a problem; but if the software wasn't using these three values as its logical key, there shouldn't be any data to clean up. In other words, the database isn't forcing anyone to write code that puts junk data in it. My guess is that the table designer was a developer, right?

I have been through huge database refactorings (changing the primary key for virtually every table in a DB and normalizing a number of 100+ column tables). While the DBA's work was significant, one person was able to do it, while half a dozen were required to work for a longer period of time to modify all the code. Actually changing the database and correcting the data represented a small portion of the total cost.

I'm not sure I understand your example completely, but if the table had too many columns in the primary key, that's a problem; but if the software wasn't using these three values as its logical key, there shouldn't be any data to clean up. In other words, the database isn't forcing anyone to write code that puts junk data in it. My guess is that the table designer was a developer, right?

The software was using all three columns as the logical key and had been for years.

I never met the table designer. I met other consultants from the same company. He probably had a title like "Solution Architect" that conveyed oh-so-much information about what he did.

In my experience it's changing the data that takes up all the time. Changing code and configuration files can be annoying - but that's all electronically searchable. Having people look over thousands of records takes a lot of time and money.

I'm not sure I understand your example completely, but if the table had too many columns in the primary key, that's a problem; but if the software wasn't using these three values as its logical key, there shouldn't be any data to clean up. In other words, the database isn't forcing anyone to write code that puts junk data in it. My guess is that the table designer was a developer, right?

The software was using all three columns as the logical key and had been for years.

I never met the table designer. I met other consultants from the same company. He probably had a title like "Solution Architect" that conveyed oh-so-much information about what he did.

In my experience it's changing the data that takes up all the time. Changing code and configuration files can be annoying - but that's all electronically searchable. Having people look over thousands of records takes a lot of time and money.

Yeah, one of the 'process improvements' I'd like to see is not bringing in 'consultants' and assuming they have brains.

I guess I don't understand why the records are not electronically searchable but again, I'm not sure what the problem was exactly.

But you have confirmed to me that the problem was not merely a DB problem it was also a software issue. It seems to me that most DB structures are specified by developers with little or no oversight by 'data people'. So again, I say that these kinds of issues are normally caused by developers. Not by the stodgy old data geezers who just don't 'get it'.

I guess I don't understand why the records are not electronically searchable but again, I'm not sure what the problem was exactly.

It is/was electronically searchable. But all electronic searching can do is generate a list of records that would violate integrity constraints if the logical key was changed.

In other words, there are four records in the table that should be one record. Each one has different data. A domain expert has to pick a master, possibly by selecting data from all four. Then scripts have to make all the necessary updates to the database.
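The mechanical half of what the poster describes, finding the groups of rows that would collide under the corrected single-column key, is easy to script; it's choosing the surviving master record that needs a domain expert. A sketch of the mechanical half (all field names and data are invented for illustration):

```python
from collections import defaultdict

# Rows currently identified by a three-part composite key; suppose the
# corrected logical key is just the first field.
rows = [
    ("ACME", "US", "001", "Acme Corp"),
    ("ACME", "CA", "001", "Acme Corporation"),
    ("GLOBEX", "US", "002", "Globex"),
]

groups = defaultdict(list)
for row in rows:
    groups[row[0]].append(row)  # group by the corrected logical key

# Any group with more than one row would violate the new primary key;
# these are exactly the records a domain expert must merge by hand,
# picking (or assembling) the master record for each group.
conflicts = {key: recs for key, recs in groups.items() if len(recs) > 1}
print(sorted(conflicts))  # ['ACME']
```

The script can list the conflicts, but not resolve them: merging "Acme Corp" and "Acme Corporation" into one master row, and repointing every related record, is the manual work that makes this kind of refactoring expensive.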

But you have confirmed to me that the problem was not merely a DB problem it was also a software issue. It seems to me that most DB structures are specified by developers with little or no oversight by 'data people'. So again, I say that these kinds of issues are normally caused by developers. Not by the stodgy old data geezers who just don't 'get it'.

Actually, it was (mostly) a matter of understanding the domain. The logical key was fairly obvious if you took a little time to understand the domain. I suspect the raw data that was initially loaded into the table came from a spreadsheet that wasn't normalized. So if you just looked at the data without understanding it you would come to the conclusion that multiple columns were required for the logical key.

Side Note: I've been bashing consultants. I have worked with some great consultants and some awful consultants. In my experience consultants are useful when your organization lacks a very specialized skill that's needed on a temporary basis. The problem is when consultants are brought in because your organization lacks a specialized skill that it's going to need on a long-term basis, or when consultants are brought in as advisors to people who won't listen to their own employees. In other words, consultants should never be used as schedule-accelerators (i.e. keeping a project from crashing while employees climb the learning curve) or as instruments of change. Contractors (longer-term than consultants but typically shorter-term than employees) can be good for meeting temporary demands, or needs for special skills that fall too far below one person to justify hiring an additional employee.

I'm a firm believer in abstraction layers, in fact years ago I published a design for building one (which has been used as the basis for several persistence frameworks). In Refactoring Databases we also discuss the need for such a layer and actively promote the concept. However, the reality is that you'll need many layers because they typically aren't multi-platform. Most organizations that I work in have COBOL, Java, VB, C, ... code.

So, encapsulation is definitely a step in the right direction and it's part of the overall solution, but it's not the entire solution.

I'm a firm believer in abstraction layers, in fact years ago I published a design for building one (which has been used as the basis for several persistence frameworks). In Refactoring Databases we also discuss the need for such a layer and actively promote the concept. However, the reality is that you'll need many layers because they typically aren't multi-platform. Most organizations that I work in have COBOL, Java, VB, C, ... code. So, encapsulation is definitely a step in the right direction and it's part of the overall solution, but it's not the entire solution.

- Scott

My organization uses Tibco, COBOL, Java, PowerBuilder, at least three flavors of DBMS, Windows, Mainframe, multiple flavors of Unix, Linux and probably a lot of other things I don't even want to know about.

I'm talking about encapsulation at the DB layer. If no code (regardless of what it's written in) accesses the physical tables, you get a data abstraction layer across all platforms.

That it's difficult to refactor the DB is not the data community's fault, it's the software community's fault. The only reason it's hard to rename a column is because there is code out there that depends upon it i.e. there's a very strong coupling between the software and the DB schema. How is that the fault of the data community? If a good abstraction layer was built between the database and the code, changing the DB would be a piece of cake.

It's the software community that has crippled the ability of the DBAs to refine the schema. Don't blame the data community for the developer community's mistakes. The only thing I can blame the data community for is not being more adamant that code not refer to concrete tables directly, and allowing the software development community to tell them "no views, no stored procedures."

Funny. I see the data community as the major problem. Treating the db as the center of the universe. Not that a lot of developers haven't contributed to the problem. Views and SPs can help, but they don't provide an adequate abstraction layer. At least not on their own. Of course a lot of blame lies on Management too.

Funny. I see the data community as the major problem. Treating the db as the center of the universe. Not that a lot of developers haven't contributed to the problem. Views and SPs can help, but they don't provide an adequate abstraction layer. At least not on their own. Of course a lot of blame lies on Management too.

It may just be a difference in experience. What I've seen is that the DBAs are marginalized. They are treated as DB babysitters and have no direct role in data modelling. In some cases, they don't have the skills, but this isn't surprising since no one seems to desire those skills in DBAs. I've also seen how a person focusing on the data model, involved in the architecture and design, can contribute a lot to a system, and how a DBA can help developers to use the DB more effectively.

A lot of the problems I see in the Java world with the DB are a result of the 'Core J2EE Pattern' of not using the DB to its potential and burying all kinds of SQL in inscrutable code. Hibernate seems to be changing this for the (much) better.

It seems to me that expecting developers to all build proper data abstractions is a pipe dream, especially in an IT department as large and varied as the one I work in.

If nothing were allowed to retrieve data except through views or stored procedures, the DBAs could restructure and repair the DB without affecting any code in many cases.
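The view-based encapsulation described above can be sketched in a few lines. This is a hypothetical example (the table, view, and column names are invented): client code reads only through a view, so the DBA can rename the cryptic physical column and repoint the view without touching a single caller.

```python
import sqlite3  # ALTER TABLE ... RENAME COLUMN needs SQLite >= 3.25

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cust_tbl (cst_nm TEXT)")  # cryptic physical name
conn.execute("INSERT INTO cust_tbl VALUES ('Acme')")
# The view is the only thing application code is allowed to query.
conn.execute("CREATE VIEW customer AS SELECT cst_nm AS name FROM cust_tbl")

# DBA refactoring: drop the view, rename the physical column, repoint the view.
conn.execute("DROP VIEW customer")
conn.execute("ALTER TABLE cust_tbl RENAME COLUMN cst_nm TO customer_name")
conn.execute("CREATE VIEW customer AS SELECT customer_name AS name FROM cust_tbl")

# Client code is unchanged: it still selects `name` from `customer`.
rows = conn.execute("SELECT name FROM customer").fetchall()
print(rows)  # [('Acme',)]
```

Because the view preserves the logical name `name`, the schema change is invisible to every caller, whatever language it's written in - which is exactly the cross-platform abstraction layer the post argues for.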

4. I was very clear about the need for a separate QA role which validated the work of the project team. I had a slide which showed a logical breakdown of different environments (developers, project, pre-production QA, production). I also had a slide showing different development iterations, and very clearly stated (several times in fact) that we hand off working software into the pre-production test environment on a regular basis.

5. I pointed out that testers and QA professionals can still be active members of agile teams, but that they would have to expand their horizons and be prepared to do more than just QA or just testing. I also stated that they bring valuable skills to the table that should be shared with developers to enable them to improve their skills via pairing and other forms of collaboration.

I think that many of the traditional QA people in the audience, and perhaps in this conversation, struggled with the concept that agilists address quality in many of their development techniques and don't need as much involvement by QA/test specialists.

Thanks for the clarification, Scott, that's a more balanced view than was presented at the top of this discussion.

I've worked with QA people who were really interested in all aspects of the development process, and I think those people will be fine in an Agile world. Those who are more focused on a narrow definition of QA as verification of software against formal requirements may have a harder time adjusting.

This is, of course, also true of developers, project managers, and business stakeholders.

And to respond to an earlier post about people being the main success criteria for a project, I have to agree with that. Having great people is far more important than having a great process. But since most teams have only a few great people (if any), we have to do the best we can, and agile techniques can help.

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Although they have accomplished much, and, quite frankly, I admire and respect its leaders, I sometimes think the only thing not agile about the agile movement is the training, publishing and consulting industry which has sprung up around it.

There is room for a second Agile Manifesto for these areas, which would contain principles like:

Although they have accomplished much, and, quite frankly, I admire and respect its leaders, I sometimes think the only thing not agile about the agile movement is the training, publishing and consulting industry which has sprung up around it.

Agile methods are too intuitive to need consultants. The Agile Manifesto is great - and I think if most people just kept it in mind when planning projects and making decisions, we would be a lot better off.

But that doesn't require consultants. Where would processes come from if it weren't for consultants?

As a tester, and the person who invited Scott to present at TASSQ (but who was unable to attend the presentation because I was away, studying and teaching testing), some might be interested in my perspective. It's here:

I don't label myself as "quality assurance"; I don't have control over schedule, budget, people, product scope, and so on. Those are management functions, and in that sense, "QA" as we know it--in which non-management personnel believe that they can assure quality without that authority -- probably should go away.

I have a different job. I'm a tester (and teach testing to people). Testing (and testers) will exist as long as someone wants to know something about the product, something important that they might not have known or considered before that would present a threat to the product's value. I believe that I'm highly skilled at that, and I believe that I can thereby provide value to virtually any team, agile or not.

There are some members of the agile community who believe that testers are useless. There are some useless testers (just as there are some useless developers), and there are processes and organizational models that practically guarantee that testers will be useless, mostly by trying to ignore or mitigate the risks of tester incompetence in some way. I'd like those processes AND those testers to go away, just as much as Scott would.

I'm also convinced, though, that through a fundamental ignorance of what testing is and what it can be, the agile community spouts a good deal of nonsense about the role of testing and testers. (Some of that nonsense is restated in this thread.) Specifically, the agile lore on testing (just like the traditional lore on testing) is focused on confirmatory rather than investigative perspectives on testing. Excellent software requires both, or a whole slew of risks will be missed or ignored. I contend that agile approaches promote a better job of confirmatory testing (thus making much of the traditionalist QA role unnecessary, as Scott argues), but that the community is still mostly clueless about investigative, risk-focused, exploratory (and, yes, manual) testing. I hope to help change that, but the community needs to be willing to listen, too.

And in my experience, really hip people bring up the guy in Office Space...

---Michael B.

That was an (attempted at least) quote from the movie, or are you trying to be sarcastic? Anyway, sorry, I'm sure you are tired of hearing or reading that kind of thing.

The smiley should have suggested that I was trying to be sarcastic, but in a lighthearted way. No problem, James.

I contend that agile approaches promote a better job of confirmatory testing (thus makes much of the traditionalist QA role unnecessary, as Scott argues), but that it is still mostly clueless about investigative, risk-focused, exploratory (and, yes, manual) testing. I hope to help change that, but the community needs to be willing to listen, too.

I'm very willing to listen, Michael! Drop me a line to discuss this offline - I'd like to hear your ideas. Refer to the Contact page at Mayford Technologies for an e-mail address.

I personally believe that there are things that automated tests can't test well - aesthetics, ease of use, proper correlation to the business work flow, and, yes, exploratory testing. After all, there are things that testers do that normal humans would never dream of! :)

At a recent client, we were using agile development methods. We also had 3 dedicated testers. The quality of the system was better as a result of their testing, because they can go far beyond the types of test that can be automated.

After all, there are things that testers do that normal humans would never dream of! :)

Hey... who you callin' abnormal?! :)

Actually, that's a great example of a reframe that we like to apply, and that we try to teach to improve tester skill. When a programmer (or anyone else, for that matter) says "No user would ever do that," we restate it: "No user <em>that I've thought of</em>, and <em>that I like</em>, would do that <em>on purpose</em>." What users are you forgetting? What users might you not like? What might a user that you like do by accident?

What users are you forgetting? What users might you not like? What might a user that you like do by accident?

Exactly. As I was once told by someone very wise, "Nothing is foolproof, because fools are so ingenious!"

Early in my career, I worked on a system where all features had to be "Bob-tested". One of the users (that would be Bob!) just seemed to have a knack for accidentally finding that little unchecked execution path that you left hiding in the code. So, before I put anything into production, I'd have Bob play with it for a while. The result was a good quality system, and a good dose of humility for yours truly!

Alright, I have read the first twenty posts and stopped. IMHO this flat format is barely usable, at least for threads with tens of messages.

I have a very interesting question: Guys, how do you manage to track who replies to whom and when? Do you use some kind of a reader or feed? Please, drop me a line at imeshev at yahoo dot com if you have a solution.

Scott,
I have to say that although, yes, I can't find any quotes of you saying literally that "agile is a silver bullet", you do use rhetorical devices in your writings to imply just that.
For instance, in an earlier thread, you said, "The agile adoption curve is virtually identical to the object adoption curve of 10-15 years ago. In the early 90s a lot of people chose to stick their heads in the sand and ignore OO, and many people are doing the same when it comes to agile software development. It's always a good idea to learn from the mistakes of others." Clearly then, anyone who ignores agile software development is making a mistake, right? And this is because the curves of OO adoption and Agile SW adoption are similar and we all know how "silly" those people now look who were against OO in the early 90's.
In another thread, you noted that "people need to think outside of the box, because the agile community is certainly doing so."
In your postings in general, you seem to group software developers into camps: agilists and non-agilists. You imply that a developer is either agile or is not; if I'm agile, then I'm with you, if I'm not agile, then I'm apparently against you. You state that to be agile one needs to be flexible, even if that means using RUP (see http://www.agiledata.org/essays/differentStrategies.html). So I'm puzzled. I am a very practical developer. I'll choose a method that will help me get the end user the software they need given all the resource restrictions in place (money, people, time, etc.). Does that mean that I'm an "Agilist"? Do I now believe in "Agilism"? And if "Agilism" incorporates RUP, then what exactly is an agile method? Is it something different or just a practical way of approaching a problem with the appropriate methodology?
I find your rhetoric to be fairly divisive. We really don't need another figure seeing the world in terms of "us versus them".
