There's something to be said for voodoo that works :)
–
corsiKa♦May 26 '11 at 15:39

Question, Bruce... are you talking general "automated testing" or the specific subset of "automated regression testing"? Depending upon what you're doing, there are additional things you'll want to consider for automated regression... (see sqa.stackexchange.com/q/656/453) for a conversation on the difference.
–
TristaanOgreMay 27 '11 at 14:01

12 Answers

I think it's hard to determine a "Best Practice" in this field, since much of the software/hardware under test is customized to the environment it was developed for; what works in one environment does not work in another. There are generic cases, such as highly repetitive tasks with low return or with standard results, which make good candidates for automation. Other aspects of the environment are largely subjective: you know which areas are best suited to automation, either by their stability or by the way they work, where you know that input X can only result in output Y, or in a set Z through which you can verify one of many results depending on some external factor.

Deciding what to automate is hard, but I believe we know our environments best and can decide what tasks are:

I would add deterministic to your list, as distinct from "stable enough". I tend to think of "stable" as a feature of the component being tested (its rate of churn/change), while "deterministic" means the test will always pass when things are good and fail when there is a problem, without false positives.
–
Tom EMay 26 '11 at 20:56

I always think through the ROI to make my decision. I don't do anything formal, I usually just ask myself a few questions. This is generally a quick mental exercise. Things to consider (in order of priority).
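That quick mental ROI check can be made concrete as a break-even estimate. This is a hypothetical sketch, not part of the original answer; the function name and all the numbers are illustrative assumptions.

```python
# Hypothetical break-even estimate for automating a single test.
# All figures are illustrative assumptions, not rules of thumb.

def breakeven_runs(automation_cost_h, manual_run_h, automated_run_h,
                   maintenance_h_per_run=0.0):
    """Return how many runs it takes for automation to pay off,
    or None if each automated run saves no time at all."""
    saving_per_run = manual_run_h - automated_run_h - maintenance_h_per_run
    if saving_per_run <= 0:
        return None  # automation never pays off on time savings alone
    return automation_cost_h / saving_per_run

# e.g. 40 h to automate, 2 h manual vs 0.1 h automated, 0.1 h upkeep/run
runs = breakeven_runs(40, 2.0, 0.1, 0.1)
print(round(runs))  # about 22 runs before the effort pays off
```

The point of the exercise is the last line: if the test will only ever be run a handful of times, the ROI is negative no matter how elegant the automation.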

+1 : ROI is really the only measure, though some people will weight different "Return" differently and some will weight "Investment" differently. I'd add: How many times will the test be run, as that will impact the answers to your points 5 and 6. If the test is only run once, meh. If it's run many, many times you keep reaping the benefits.
–
Peter K.May 28 '11 at 23:59

Of course, it can be hard to quantify ROI.
–
user246Jun 20 '11 at 22:20

While ROI seems to be a good measure, I have to object to "3. How likely is it that this code will fail in the wild?" and "4. How often is this code expected to change after release?". Regarding 3: if you think you can estimate the likelihood of an error, you have already failed. Errors often appear in the most unlikely places, and sometimes those errors are the most critical. Regarding 4: it is not necessarily the code itself that changes, but perhaps code that it depends on, so this is not a good criterion either. Having said that, I think the ROI cannot be determined beforehand.
–
Daniel AlbuschatJul 16 '14 at 10:36

I'd recommend looking at tedious, highly repetitive tests that need to be repeated on a regular basis. For instance, if you're routinely spending several days on manual regression in a specific area of the application each time you release, it's probably a good candidate for automation.

I don't think there is a "best practice" for what should be automated, because the decision depends a lot on the application under test, the environment, legislative requirements, hardware dependencies, and potential impacts.

In my place of work, for instance, the goal is to automate as much of the transactional testing as we can - because we develop point of sale systems for a niche industry, and need to be able to handle a variety of tax scenarios for customers who face serious financial issues if the calculations are even slightly off (among other things). We focus on transactions over configuration because transactions happen more often in the live environment, and because transactions are far more important to our customers than the setup. If they need to use an awkward workaround to set up, that's not as bad as if they can't sell something.

Of course, the best internal ROI can be achieved by automating the one task everyone in your team utterly detests! (Tax regression.... at least in my world).

1. Is it need-based automation?
e.g. you cannot run the test or observe the test results accurately in a non-automated manner; it is a very controlled test environment.

Here the question of "when" does not arise. You just have to take a call on whether to automate the test or not.

2. Is it time-based automation?
e.g. regression tests.
Generally these are the ideal candidates for automation, and automating them usually takes place once the functionality has 'matured', when you have a safe level of confidence in its future stability (including a risk analysis of visible changes).
For such candidates, the automation is easier to develop, the tests are more reliable (false failures are less probable), and you can concentrate on testing "new" functionality.

These are good considerations that go into the ROI you will get when automating something. I especially like that you covered Utility items.. this is a case where automation is a test tool to aid manual testing (or other automated testing), it might be getting the system into a known state, or populating with known data etc.. This is a great use for automation beyond the automating of test cases, and is something that should not be neglected when considering where to utilize automation resources.
–
Chuck van der LindenMay 30 '11 at 1:29

Why not automate? Apart from the obvious case of a test that can't be automated (not uncommon in embedded systems), some reasons are: the automated test is not the same as the manual test, for example injecting GUI events is not exactly the same as real usage; automating the test requires too many resources, from people to test equipment to plain time; and automatically parsing the results of the test can be difficult. I once automated a complicated setup including a SmartBits machine, a Wi-Fi access point and several Wi-Fi stations. Exercising it was easy; deciding a pass/fail result was impossible.
–
RsfMay 26 '11 at 11:13

1

That you cannot automate something doesn't mean you shouldn't when you can. And it's only a matter of time before manual testing costs more than automated testing, but that doesn't mean automated tests completely replace manual tests.
–
Alexis DufrenoyMay 26 '11 at 11:59

@Rsf: I disagree: the last embedded system I worked on, we managed to get 80% of our system tests automated. OK, so some tests couldn't be (bad ROI), but 80% is still a good figure.
–
Peter K.May 28 '11 at 23:57

Let me know when you find a cheap and easy way to automate usability, or look and feel, especially one that doesn't require a ton of test time to update when some designer says 'that bar should be a gradient instead of a solid color'. Also, there may be other tests that will yield a lot more ROI that you could be automating instead of the borderline ones where the payoff won't be seen for several years. I'd agree that most functional tests ought to be automatable in most conditions, but there are many classes of test beyond functional, and many are not suited to automation.
–
Chuck van der LindenMay 30 '11 at 1:25

Buggiest-first is not always a good choice; you might want to leave the buggiest areas to experienced exploratory testers.
I usually start by sorting the tests by their expected complexity level. Then I further sort them into groups of related tests, and for each group define the needed infrastructure.
In many cases your ROI will be better from implementing a lot of simple tests rather than a few complicated ones, so I begin with a group of related simple tests and their shared infrastructure. I will usually choose a group that has the most in common with the others.

I am wary of using the phrase "best practice", but I suggest automating tests from the very beginning; at the very least, don't wait until the application UI is available to you.
I begin writing Selenium tests as soon as I have an application mock available. The only part missing is the application objects, and I fill in those placeholders once the app UI is available to me.

This approach works well when you are not hard-coding application objects in the tests themselves, and instead externalize application navigation using the Page Object pattern.

Apropos of what to automate: if you have a large application and limited resources available for automation (which is the usual situation), first pick your User Acceptance Tests, along with the scenarios that have already resulted in defects.

But if you have umpteen resources (which I highly doubt), then the sky is the limit...

I would suggest that the things that take the longest to test manually are the best candidates for automation. This then frees up more of your time for exploratory testing (which is where you are most likely to find the horrible defects).

- When you need to simulate a large number of users using the application's resources

- When the AUT has a comparatively stable UI

- When you have a large set of BVT cases

Want to select an automation tool for your project?
Automation testing success largely depends on selecting the right testing tool. It takes a lot of time to evaluate the relevant automation tools available in the market, but this is a necessary one-time exercise.
Here are the criteria you need to consider before selecting any testing tool:

1) Do you have the necessary skilled resources to allocate to automation tasks?

2) What is your budget?

3) Does the tool satisfy your testing needs? Is it suitable for the project environment and technology you are using? Does it support all the tools and objects used in the code? Sometimes you may get stuck on small tests due to the tool's inability to identify the objects used in the application.