Software testing has never been as difficult as software development!
I would rather not get into debating the statement above. Let us focus on something that adds more value to the time we spend on this portal. The primary goal of this article is to discuss managing testing with optimum utilization of the test management system.

When we talk of designing test cases in a system, various tools offer some basic parameters that enhance the overall ease of managing the work. Let us discuss some aspects that add to what we generally do. We will not discuss the basics such as title, step description, automation status, etc., as we all know them and are adept at managing them.

In addition to what we have, we should also maintain the parameters outlined below. Let us visit them one by one: why we need these additional parameters, and in what type of situation we can cash in on the opportunity they offer.

1. Iteration : This parameter gives us the opportunity to segregate test cases based on when their feature was implemented in the product. Software is developed in multiple cycles, and this is one aspect that helps us track when a test case was created and first entered the system. Gradually, as iterations progress, we end up with test cases belonging to various iterations. Any test case can belong to just one iteration, and that property also helps us identify the feature-implementation-iteration relationship. Typical values: Iteration 1, Iteration 2, Iteration 3.

2. Functional Area : This property enables us to identify the feature to which a test case belongs. We may have multiple features implemented within a software product. Take an example from the e-commerce domain: the primary features would be the various business segments such as Apparel, Home decor, Electrical appliances, etc. This is an ideal property to model as a tree structure, simply because implementation of such big features may not be completed in full within a particular iteration. So how do we track such timelines? The tree structure provides the solution. Apparel can be broken into multiple sub-areas such as women, men, children, and so on, and this depth can go to any level. One thing we must keep at the back of our mind: a single feature should not flow into multiple iterations, as that results in complications in reporting and traceability. A simple rule of thumb: "A single iteration can have multiple functional areas, but the vice versa is not possible." Typical values: Payment, Authentication/Authorization, Reports, Usability.

3. Priority : This parameter classifies test cases by how critical they are to the business.
P1 - Critical path business scenarios; important to reject or accept a build.
P2 - Average path business scenarios; important to decide on pre-scheduling of any interim drop.
P3 - Low path business scenarios; non-functional aspects of the system under test.

Will continue once I have had some sleep. Too tired to type, though the thinking continues to make it more interesting!

Ensure that the SQL Server instance has 'Optimize for Ad Hoc Workloads' enabled. This will store a plan stub in memory the first time a query is passed, rather than storing a full plan. This can help with memory management.
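As a sketch, the option can be enabled with sp_configure; it is an advanced option, so 'show advanced options' has to be switched on first:

```sql
-- Enable 'optimize for ad hoc workloads' at the instance level
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```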

SELECT * is
not always a good idea and you should only move the data you really need to move and
only when you really need it, in order to avoid network, disk and memory
contention on your server.
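As an illustration, with a hypothetical dbo.Orders table, the difference looks like this:

```sql
-- Avoid this: pulls every column across the network
SELECT * FROM dbo.Orders WHERE CustomerId = 42;

-- Prefer this: only the columns the caller actually needs
SELECT OrderId, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerId = 42;
```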

Keep transactions as short as possible and never use them unnecessarily. The longer a lock is held, the more likely it is that another user will be blocked. Never hold a transaction open after control is passed back to the application; use optimistic locking instead.

For small sets of data that are infrequently updated, such as lookup values, build a method of caching them in memory on your application server rather than constantly querying them in the database.

When processing within a transaction, do not perform updates until last if possible, to minimize the need for exclusive locks.

Cursors within SQL Server can cause
severe performance bottlenecks.

The WHILE loop
within SQL Server
is just as bad as a cursor.

Ensure your variables and
parameters are the same data types as the columns. An implicit
or explicit conversion can lead to table scans and slow performance.

A function on
columns in the WHERE clause or JOIN criteria means that SQL Server can’t use indexes
appropriately and will lead to table scans and slow performance.
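For example, assuming a hypothetical dbo.Orders table with an index on OrderDate, the predicate can be rewritten so the index remains usable:

```sql
-- Non-sargable: the function on OrderDate prevents an index seek
SELECT OrderId FROM dbo.Orders
WHERE YEAR(OrderDate) = 2016;

-- Sargable: a range predicate on the bare column can use the index
SELECT OrderId FROM dbo.Orders
WHERE OrderDate >= '20160101' AND OrderDate < '20170101';
```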

Use DISTINCT, ORDER BY, and UNION carefully.

Table variables do not have any statistics within SQL Server. This makes them useful in situations where a statement-level recompile can slow performance. But that lack of statistics makes them very inefficient where you need to do searches or joins. Use table variables only where appropriate.
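A quick illustration of the two constructs (table and column names are made up):

```sql
-- Table variable: no statistics, no recompiles; fine for small row counts
DECLARE @Lookup TABLE (Id int PRIMARY KEY, Name nvarchar(50));

-- Temporary table: has statistics; better for searches and joins over larger sets
CREATE TABLE #Staging (Id int PRIMARY KEY, Name nvarchar(50));
```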

Multi-statement user-defined functions work through table variables, which don't work well in situations where statistics are required. Avoid using them if a join or filtering is required.

One of the most abused query hints is NOLOCK. This can lead to extra or missing rows in data sets. Instead of using NOLOCK, consider using a snapshot isolation level such as READ_COMMITTED_SNAPSHOT.
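A sketch of enabling the database option (SalesDb is a placeholder name; the statement needs exclusive access to the database to complete):

```sql
-- Turn on read committed snapshot isolation for the database,
-- so readers see row versions instead of taking shared locks
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;
```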

Avoid creating stored
procedures that have a wide range of data supplied to them as parameters.
These are compiled to use just one query plan.

Try not to interleave data
definition language with your data manipulation language queries within
SQL Server. This can lead to recompiles which hurts performance.

Temporary tables have statistics which get updated as data is inserted into them. As these updates occur, you can get recompiles. Where possible, substitute table variables to avoid this issue.

If possible, avoid NULL
values in your database. If not, use the appropriate IS NULL and IS NOT NULL
code.

A view is meant to mask or
modify how tables are presented to the end user. These are fine
constructs. But when you start joining one view to another or
nesting views within views, performance will suffer. Refer only to tables within
a view.

Use extended events to
monitor the queries in your system in order to identify
slow running queries.
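A minimal sketch of such a session, assuming you want statements that run longer than one second (the duration field here is in microseconds; session and file names are illustrative):

```sql
-- Capture completed statements slower than one second to an event file
CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE (duration > 1000000))
ADD TARGET package0.event_file (SET filename = N'LongQueries.xel');

ALTER EVENT SESSION LongQueries ON SERVER STATE = START;
```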

You get exactly one clustered index on a table. Ensure you have it in the right place. First choice is the most frequently accessed column, which may or may not be the primary key. Second choice is a column that structures the storage in a way that helps performance. This is a must for partitioning data.

Clustered indexes work well on columns that are used a lot for 'range' WHERE clauses such as BETWEEN and LIKE, and on columns frequently used in ORDER BY or GROUP BY clauses.

If clustered indexes are
narrow (involve few columns) then this will mean that less storage is
needed for non-clustered indexes for that table.

Avoid using a column in a
clustered index that has values that are frequently updated.

Keep your indexes as
narrow as
possible. This means reducing the number and size of the columns used in
the index key. This helps make the index more efficient.

Always index your
foreign key columns
if you are likely to delete rows from the referenced table. This avoids a
table scan.
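For example, if a hypothetical dbo.Orders table has a foreign key on CustomerId referencing dbo.Customers, an index like this supports the delete check:

```sql
-- Without this index, deleting a Customers row forces a scan of Orders
-- to verify that no rows still reference it
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);
```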

A clustered index on a GUID can lead to serious fragmentation of the index due to the random nature of the GUID. You can use the function NEWSEQUENTIALID() to generate a GUID that will not lead to as much fragmentation.

Performance is enhanced when indexes are placed on columns used in WHERE, JOIN, ORDER BY, GROUP BY, and TOP.

A unique index absolutely performs faster
than a non-unique index, even with the same values.

Data normalization is a
performance tuning technique as well as a storage mechanism.

Referential integrity
constraints such as foreign keys actually help performance, because the optimizer
can recognize these enforced constraints and make better choices for joins and other
data access.

Make sure your database
doesn’t hold ‘historic’ data that is no longer used. Archive it out, either into
a special ‘archive’ database, a reporting OLAP database, or on file. Large
tables mean longer table scans and deeper indexes. This in turn can mean
that locks are held for longer. Admin tasks such as Statistics updates, DBCC checks, and index builds
take longer, as do backups.

Separate out the reporting
functions from the OLTP production functions. OLTP databases usually have short transactions
with a lot of updates whereas reporting databases, such as OLAP and data
warehouse systems, have longer data-heavy queries. If possible, put them
on different servers.

I love talking to people from the software industry, especially those who have crazy definitions of automation in the QA process. Let me outline one of the recent conversations I had.

Hey, hi Vip! How are you doing as an automation tester?

I am an automation tester, primarily responsible for automating test cases related to the UI, and also for database test automation.

Oh great, so what is it that you can automate?

I can automate the set of test cases that are written as part of a test suite.

Oh, that sounds really good. So how many test cases can you automate in a day?

Well, the answer is very simple: I cannot give you an exact or even a near count.

Why? But you said you are an automation tester, primarily responsible for automating test cases related to the UI.

So now tell me from your experience, what is your productivity in terms of automating test cases?

It is really not feasible to answer that unless I see the test suite that needs to be automated.

Do you mean to say that the estimate can be given on the basis of the test suite and the test cases it contains?

Of course that is what I mean to say, because it is only when you have a test suite that an automation tester can give you an estimate; an automation tester does not believe in stories when it comes to estimation.

There are several parameters that need to be assessed before concluding what sort of breakdown into test script points needs to be done to provide an estimate that will hold relevance and instill confidence in the other stakeholders.

I thought only project managers gave such long stories when it is estimation time for the project schedule, but now even automation testers have stepped into the same shoes.

Yeah, just because automation testing has grown along the same lines, and now you have a lot of test managers in the industry who need to take care of these things.

OK. So you were talking of several parameters on which you can provide the test automation estimation.

Oh yes, at least now I can see that you are listening seriously to what I have been saying for the last couple of minutes.

Nope, I am absolutely fine with what you said. It is not the first time I have seen someone talk logically about test automation, and with the bandwidth available, we can take it to some extent. So what are those parameters?

Yeah. Basically, no automation suite targets execution of every test case; it is just a subset of the test cases that is taken into account when we target automation.

Now, the criteria for creating that subset can be decided by three sets of people.

Oh, this seems to be getting interesting now: a subset of test cases and then a subset of people.

Yes. One is from the technical category, who assesses feasibility from the technology front: which tool can be used with the current application in terms of support. For example, if we target automation of a UI that has SAP components and we are trying to use QTP, we need the SAP add-in with the QTP licenses. This person should also know the pricing of the tool, the script development timelines, the coding complexity, and the resources available ready-made; or, if they are not available, the time needed to ramp people up by getting them trained by professionals or through self-coaching. Another parameter here is the coding-language proficiency of the script writers. For example, if you want to use the IBM Rational tool, you should have people who are well versed in Java.
Now that sounds a bit scary. Would you also say that the testers who will do automation scripting should have programming skills?
Oh yes, that is an entry criterion for building up the team. Strong logical thinking is a must, or else you will end up nowhere. But then, a lot has been done in this area to make tools more user-friendly, such as what Visual Studio offers with its Coded UI test classes. Most things get auto-generated, and if you have a 7/10 rating in a programming language, you can learn it in a short time. But analytical and logical reasoning skills are something you should have in abundance.
Why do you think so? I think with the test suite available you do not need those skills, as the automation tester just needs to script what is already written in the manual test cases. Or am I missing something here?
Yes, dear, you are missing something big. It will hugely impact the execution timelines if you do not organize the test cases while writing test scripts. You would not want to come back to the same page again and again; with analytical ability you can create the scripts in a manner that minimizes the number of times you perform the same set of activities.
I still cannot understand this parameter, dear. The people and technical parameters are well understood, though.
OK, so let me ask you a basic thing. Suppose I ask you to go to the shop and buy some medicine for the mild headache your dull brain is giving me. What would you do? I know that, being a good friend of mine, you would do the needful. But then I ask you again to bring some sandwiches from a shop that was on the same way. Don't you feel irritated?
Oh yes, I would blast at you. Why the heck did you not tell me when you asked me to bring the medicine, since it is on the same street?
Exactly. Now you are getting it. So plan upfront which scripts need to run in continuation with other scripts, rather than making a hell of a lot of round trips. You got it this time!
WOW.
Hmm, my dear friend understands now that he himself is forced into unnecessary round trips. So avoid these by bringing your strong analytical and logical skills to script development and the integration of several modules. This comes in handy when certain things are created in module A and consumed in some other module B, or at times in multiple modules.

There are indeed a lot of things that need to be taken care of before estimating. It is not as simple as it seemed at the outset.
Good, now you will listen to what I say.
So what else do you guys take into account?
Now it is a simple breakdown of the test cases that have been identified as part of the automation test suite.
Breakdown of test cases? What is it that you will do now?
One very simple thing we all understand here:
All test cases will not have the same level of technical complexity involved in scripting.
All test cases will not have the same number of steps.
So some sort of normalization comes into the picture. Technical complexity is something that may not be estimated with absolute precision, but with experience it can be accounted for and kept as a separate addition to the estimated timelines.
However, the number of steps can be estimated with a clear calculation.
For example:
Simple test cases - up to 7 steps
Medium test cases - 8-15 steps
Complex test cases - 16-23 steps
Above 23? What will you do then?
Oh, you are listening logically and analytically, using your brain. Cool. Generally I do not prefer to have such test cases as one test case, but if we are in a scenario where we are not the author of the test cases, it will be broken down as follows: 30 test steps = 1 complex test case + 1 simple test case, and so on and so forth.

Suppose my automation suite has 50 test cases
Simple - 15
Medium - 25
Complex - 10
I will do some normalization over here to calculate some test case point in simple terms by associating some weight factors as decided by the team.
Weight factor for Simple - 1
Weight factor for Medium - 3

Weight factor for Complex - 5

Now my suite in terms of Test case point will look like this :

15*1 + 25*3 + 10*5 = 140

Check the productivity of converting a test case point into a test script point. On average it is 30 test script points per man day, so the scripting timeline would be 140/30 ≈ 4.7, rounded up to 5 man days, plus some buffer for technical complexity.
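The same arithmetic, written as a quick T-SQL sketch in keeping with the rest of this post (the counts and weights are the example values above):

```sql
-- Weighted test case points for the example suite of 50 test cases
DECLARE @simple int = 15, @medium int = 25, @complex int = 10;
DECLARE @points int = @simple * 1 + @medium * 3 + @complex * 5;

-- 140 points at 30 test script points per man day, rounded up
SELECT @points AS TestCasePoints,
       CEILING(@points / 30.0) AS ManDays;
```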

Are you awake, buddy? Good, keep sleeping...

So that is how it is. These numbers are not absolute; they are relative. I just gave you an overview of how we estimate when some automation testing needs to be done. Ideally we should target a POC first and collect all feedback on code modularization, the multiple environments in which execution needs to be done (such as running the test scripts against two URLs simultaneously), and additional features such as cross-browser script runs and logs for the executed test scripts with timestamps. This feedback should be addressed within the POC phase so that no surprises spring up during the course of scripting.