Can you explain or provide a reference to those 5 models? Where do they come from? In particular, what input variables do they use to quantify the effort, and what assumptions do they make?
– dzieciou Jun 8 '13 at 17:40


"accuracy of testing efforts"? Do you mean "accuracy of the estimate"? Or something else?
– Joe Strazzere Jun 9 '13 at 22:16

2 Answers

No doubt there are academic papers that examine at least some of those kinds of test effort, but you should be reluctant to generalize them to your own work.

Here are a couple of reasons why:

There are lots of confounding factors that affect the accuracy of an estimation model, e.g. adherence to the methodology under study, the experience level of the testers who produce the estimate, the experience level of the testers doing the actual testing, the quality of the system under test, and the amount of churn during the test process. Any worthy academic paper will include a section on potential confounders and their impact on the conclusions, but you should not assume the authors list them all, or are even aware of them all.
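To get a feel for how much confounders like these can swing a model's output, here is a COCOMO-style sketch in which each factor contributes a multiplier to a base estimate. The factor names and multiplier values are made up for illustration; they are not taken from any of the five models in the question or from any published calibration:

```python
# Illustrative sketch: a COCOMO-style effort model where each
# confounding factor contributes a multiplier to a base estimate.
# Factor names and values below are invented for illustration.

def adjusted_effort(base_hours, multipliers):
    """Multiply a base effort estimate by each adjustment factor."""
    effort = base_hours
    for factor in multipliers.values():
        effort *= factor
    return effort

optimistic = {
    "tester_experience": 0.8,   # seasoned team
    "system_testability": 0.9,  # clean handles for automation
    "requirements_churn": 1.0,  # stable requirements
}
pessimistic = {
    "tester_experience": 1.3,   # team new to the tool
    "system_testability": 1.4,  # closed components, little to hook into
    "requirements_churn": 1.5,  # heavy churn mid-project
}

low = adjusted_effort(100, optimistic)    # roughly 72 hours
high = adjusted_effort(100, pessimistic)  # roughly 273 hours
print(low, high)
```

Even with only three hypothetical factors, the same 100-hour base estimate spreads by almost a factor of four, which is why a model calibrated in one organization can be badly wrong in another.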

There are implicit biases in academic literature, e.g. publication bias (a paper that confirms what we already believe is less likely to be published than a paper with a surprising result).

If you want to generalize a study to your own organization, ask yourself whether their organization is sufficiently like your own.

I'm sure there's no shortage of academic work along these lines, but like user246 I'd be hesitant to use any of it to estimate anything I'm doing.

Some of my reasons:

No two code bases are directly comparable. Test automation effort depends a lot on how testable the system under test is. Two applications can seem very similar on the surface, but without further research there's no telling whether they're built from closed components that offer nothing for an automator to interact with, or whether they're well encapsulated and have nice accessible handles for automation to grab.
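The same difference shows up even at the unit level. A hedged sketch (both classes and their markup are hypothetical) of two components with identical visible behavior but very different automation effort:

```python
# Hypothetical sketch: two components that look the same to a user
# but differ sharply in how much they offer an automator.

class ClosedWidget:
    """Renders its state but exposes no handle for automation to query."""
    def __init__(self):
        self._state = "ready"

    def render(self):
        # An automator is reduced to scraping this string (or the screen).
        return f"<widget>{self._state}</widget>"

class AccessibleWidget:
    """Same behavior, plus an explicit handle automation can grab."""
    def __init__(self):
        self._state = "ready"

    def render(self):
        # The data-state attribute gives tooling a stable hook.
        return f"<widget data-state='{self._state}'>{self._state}</widget>"

    @property
    def state(self):
        """The 'nice accessible handle': query state without scraping."""
        return self._state

# Checking the closed widget means parsing markup; the accessible
# one can be asserted against directly.
assert "ready" in ClosedWidget().render()
assert AccessibleWidget().state == "ready"
```

Scraping-based checks like the first one are exactly the kind of work that makes one code base far more expensive to automate than a superficially similar one.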

A well-managed test automation framework takes less time to add to than a poorly managed one (been there... got the scars to prove it). The same level of functional additions, using any of the methods above, can take much more time to build when the automation framework is a patchwork of ad-hoc additions.

A badly documented framework always takes longer to work with - because of the time wasted checking whether there are any helper functions for common tasks, and the time taken to build your own when you don't find any (even if they do exist but live in an obscurely named, undocumented library - been there, too).

The unknown unknowns will bite you every time - and in my experience what happens is that something that should be simple proves to be a major pain to work with and takes ages to automate, completely destroying your estimates.

All these factors apply to every model of test automation in existence - estimates are functionally meaningless unless the person giving the estimate is already familiar with the features and functions of the application and how they interact with the automation tool. Even then, estimation is only accurate for well-defined small-scope scenarios. It's no different from estimations for software development in general in that regard.
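One common way to acknowledge that uncertainty while still estimating well-defined, small-scope scenarios is three-point (PERT) estimation, where each scenario gets an optimistic, most-likely, and pessimistic figure and the weighted mean (O + 4M + P) / 6 is used. The scenario names and hour figures below are hypothetical:

```python
# Three-point (PERT) estimation sketch for small, well-defined
# automation tasks. Scenario names and hour figures are hypothetical.

def pert(optimistic, most_likely, pessimistic):
    """Weighted PERT mean: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

scenarios = {
    "login flow":     (2, 4, 8),
    "search results": (3, 5, 12),
    "checkout":       (4, 8, 40),  # a likely home of unknown unknowns
}

for name, (o, m, p) in scenarios.items():
    print(f"{name}: {pert(o, m, p):.1f} hours")

total = sum(pert(o, m, p) for o, m, p in scenarios.values())
print(f"total: {total:.1f} hours")
```

Note how a single scenario with a fat pessimistic tail (the checkout example) dominates the total - which is exactly the "unknown unknowns destroy your estimates" effect described above, just made visible in numbers.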