I think this is a simple scenario that can happen during Coded UI testing, and it would be useful if someone could point me in the right direction.

Let's say we have an ASP.NET application with several controls on one web form. We wrote a Coded UI test and everything is OK. After some time, the development team decided to change the content of the form according to new requirements, and one button was removed. But the test remained the same as it was when the button was still on the form. What to do in this case? Of course, we can change the test as well. But how do we handle this efficiently if we have, for example, 1000+ Coded UI tests and the changes affect many forms? How can we detect the changes on many forms programmatically, so that we learn about significant changes early and avoid executing the tests where those significant changes occurred?

I'm interested in whether we can use the .uitest file, or anything else, as a central repository of the elements on all the forms we're testing. Is this possible to achieve?


Use reflection: load the assembly, enumerate all the forms, and enumerate all the controls on those forms. You can then write the names of the controls into a database table, along with a date. You could record other properties too; it might be easier to decorate them with a custom attribute.
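A minimal C# sketch of that idea, assuming the pages live in a compiled web-application assembly and that each control is a designer-generated field on the page class (the class name `FormInventory` and the tab-separated output are illustrative — in practice you would INSERT each row into your database table):

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Web.UI;

// Builds a snapshot of every control field declared on each Page-derived
// type in the web application's assembly.
static class FormInventory
{
    public static void Snapshot(string assemblyPath)
    {
        Assembly asm = Assembly.LoadFrom(assemblyPath);

        foreach (Type form in asm.GetTypes()
                                 .Where(t => typeof(Page).IsAssignableFrom(t)))
        {
            // Designer-generated controls are protected instance fields
            // on the code-behind class.
            var controls = form.GetFields(BindingFlags.Instance |
                                          BindingFlags.Public |
                                          BindingFlags.NonPublic)
                               .Where(f => typeof(Control).IsAssignableFrom(f.FieldType));

            foreach (FieldInfo control in controls)
            {
                // Persist form name, control name, control type and a date;
                // writing to the console here stands in for the DB insert.
                Console.WriteLine("{0}\t{1}\t{2}\t{3:o}",
                    form.FullName, control.Name,
                    control.FieldType.Name, DateTime.UtcNow);
            }
        }
    }
}
```

For a WinForms application the same loop works with `System.Windows.Forms.Form` and `System.Windows.Forms.Control` in place of `Page` and `System.Web.UI.Control`.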

When validating changes, loop over the controls again and compare them to the values stored in the database. Count how many things have changed, and skip all tests for forms scoring above a previously defined threshold.
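The comparison step can be sketched like this, assuming the baseline and the fresh snapshot have each been loaded into a set of control names per form (the class name `ChangeDetector` and the threshold semantics are assumptions, not part of the original answer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Compares a fresh control inventory for one form against the baseline
// recorded earlier, and decides whether the form has changed too much
// for its Coded UI tests to be worth executing.
static class ChangeDetector
{
    public static bool ExceedsThreshold(ISet<string> baseline,
                                        ISet<string> current,
                                        int threshold)
    {
        int removed = baseline.Except(current).Count(); // controls that disappeared
        int added   = current.Except(baseline).Count(); // controls that are new
        return removed + added > threshold;
    }
}
```

For example, a form whose baseline was { btnSave, btnDelete, txtName } and whose current snapshot is { btnSave, txtName } has one removed control; with a threshold of 0 its tests would be flagged for review instead of executed.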

Or, ask the developers to mark the forms that they have modified extensively.

1. Conceptual Design - in which marketing types toss around ideas that will make life miserable for the engineers and programmers who will eventually be called upon to actualize their insane, drunken imaginings.

2. Detail Design - in which phase the marketers release a "requirements" document to engineering, leading to much anguish and scribblings on cocktail napkins.

3. Implementation - wherein the engineers attempt to read the alleged minds of Marketing, and provide specifications to the programmers who have to code the vague descriptions from Marketing into a product that someone will want to buy.

4. Internal Testing (alpha) - in which phase the experts are asked to test their own code against the ever-changing requirements promulgated by Marketing; they patch the most obvious problems themselves, bypassing version control.

5. External Testing (beta) - during which selected computer-savvy customers are given free software to try out in real-world situations in return for feedback and bug reports to help the programmers make Marketing's drug-induced wet dream into a product someone will actually find useful.

6. Release - finally a product that does something useful, however badly! Of course, it only works for those computer-savvy beta testers; real people haven't a clue how to make it work, and there's no manual.

7. Maintenance - pesky customers will persist in finding flaws that must be fixed, else those stock options will expire worthless. Support programmers are busy in this phase just making the product function for users who want to do more than just log on and watch the pretty videos.

8. Retirement - the phase that begins about 30 minutes after entering the Maintenance phase - maintenance is expensive! Tech Support changes their phone number, and patches are phased out over a period of time. After all, the new version has just been released; who could possibly be using the old one?

Of course, for those on a tight budget, the Microsoft Endrun is available:

We have a well defined lifecycle, which we follow with a messianic zeal, except for those times when we don't or can't be bothered or the timescale is too tight and the money too good, but apart from that we never deviate.

It's pointless waiting until the end of the project to determine whether or not the lifecycle is actually working. We have quality gates at several key points in the process that we use to determine the actual usefulness of the process to the particular project. There is no one-size-fits-all process that we can use, as we tend to have to fit in with the way the clients work; so while we can agree up front on things such as maintenance releases, release schedules, beta drops, in-phase documentation and the like, other parts have to be fluid and we cope as we see fit.

"WPF has many lovers. It's a veritable porn star!" - Josh Smith

As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.