"To Error is Human" is the fact which drives the need for automation testing. Because, manual testers may not execute the test cases correctly. There will be lot of possibilities for making mistakes. They may give wrong input data due to typo, or they may not notice the actual behavior of the system correctly, or they may not report the test result correctly, or they may miss to execute some test cases, or they may forget to run some preconditions, or they may change the sequence of test case execution in case sequence is important.

· Another important factor is that automation test scripts act as a store of the domain, project, and task knowledge gained by the test engineers. For example, a tester who has worked on a project for a year will have spent considerable time learning the domain, the purpose of the project, its modules, and the flow of all its functionality, and will be familiar with the known issues and challenges.
If this tester leaves the project, the knowledge he has gained leaves with him.
It is very difficult for a newly joined tester to understand everything from the test case document alone.

If automation test scripts are already available, the new tester can start testing simply by running them, without first acquiring deep knowledge of the project.

He can understand the flows and data by watching the automation test scripts execute. He should still gain project and domain knowledge in order to enhance and update the scripts further.
So we can say that test automation is a way of storing knowledge.

· Automation tools such as QTP (QuickTest Professional) can store a screenshot of every page navigated during execution. The screenshots serve as proof that testing was completed, and screenshots from previous runs can be consulted whenever there is a need to refer to them.

· Test results can be written automatically to a customized report page, which ensures the accuracy of the report and can also improve its look and feel.

· The most important advantage of automated testing over manual testing is execution speed. Test execution completes quickly, and scripts can also be run overnight without human involvement. The total time needed for testing is therefore reduced, which helps significantly with timely project completion.

· There may be a requirement to run some tests at a specific time. This is easily achieved by scheduling the automation test scripts with a task scheduler or cron job; tools such as QTP support an automation object model for this purpose.

· Functional test automation scripts are also useful for performance testing, because many performance test tools support reusing or calling these scripts.

· Some testing involves comparing large amounts of data between the previous version and the current version as part of regression testing. Doing this manually may not be practical, but the problem can be easily solved with a simple shell script or another scripting technology such as VBS or WSH.

· Because automation test tools support data-driven testing, test execution can be repeated with many different data sets.

Before starting to evaluate tools, we should analyze whether automating software testing will really provide any benefit over manual testing for our needs.

Software test automation is a good way to cut down time and cost, but it reduces them only when it is really necessary and is used effectively.

Test automation is not required if the application will be used only once or for a short period. For example, assume you have a website developed in ASP and are making some changes to it, and that you have a firm plan to migrate this ASP site to either ASP.NET or PHP in the near future.

In this situation, it is not advisable to automate testing of the new changes made to the ASP site.
Simply complete that testing manually, and start preparing your test automation once the migration is done.

So, essentially, we should automate our testing process when there is a lot of regression work.

Once the decision to automate has been made, the next step is selecting an appropriate automation tool.

Hundreds of tools are available for automating software testing, and some of them are free. TestComplete, SilkTest, SilkPerformer, QARun, QALoad, TestPartner, WinRunner, LoadRunner, QTP, Rational Robot, Selenium, WATIR, and OpenSTA are some of them.

Some of these tools (e.g., Selenium) are open source.

We need to select the appropriate tool based on the factors below.

Budget allocated for the testing process - Prices vary from tool to tool: some are costly, and some are even free.
Licensing models also vary from tool to tool. The license cost of some tools depends on geographic location, and some vendors set different prices for seat licenses and floating licenses.

So, first we need to decide on our licensing needs, i.e., ask the questions below:
- If the tool's price changes according to geographic location, will it be cost-effective for your location?

- How many automation test engineers will work simultaneously on your automation project?

- Do you need separate setups for developing the scripts and for executing them?

- Do you plan to automate any of your other testing activities? Can the selected tool be used for other projects as well?

Support available for the automation tool - We need to evaluate whether the tool vendor provides adequate support and bug-fix releases, and we also need to consider the support available from the forum community.

Analyze whether the execution speed of the automation tool meets your requirements.

Check the software and hardware requirements for installing the automation tool in your test script development environment.

List the current skill set of your testers and check whether they can use the tool effectively. For example, QTP supports VBScript; if your testers know VBScript, they can learn QTP easily.

A feasibility study is very important before finalizing the tool. Most tools offer an evaluation or trial period; for example, QTP can be downloaded from the HP site and used for 14 days. During the trial period, try to automate different portions of your application to make sure the tool can handle your testing needs.

Analyze the market share and financial stability of the tool's vendor. This becomes significant if you plan to use the automation tool for long-term regression testing.

Check whether the tool integrates easily with bug tracking tools. For example, QTP is closely integrated with Quality Center (formerly TestDirector), a test management tool.

The best approach is to prepare a list of all these factors, add remarks for each tool, and then select the tool by analyzing that list.

LDAP, Lightweight Directory Access Protocol, is an Internet protocol that email and other programs use to look up information from a server.

Every email program has a personal address book, but how do you look up an address for someone who's never sent you email? How can an organization keep one centralized up-to-date phone book that everybody has access to?

That question led software companies such as Microsoft, IBM, Lotus, and Netscape to support a standard called LDAP. "LDAP-aware" client programs can ask LDAP servers to look up entries in a wide variety of ways. LDAP servers index all the data in their entries, and "filters" may be used to select just the person or group you want, and return just the information you want. For example, here's an LDAP search translated into plain English: "Search for all people located in Chicago whose name contains 'Fred' and who have an email address. Please return their full name, email, title, and description."
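In LDAP's own filter notation, that example could be written roughly as follows (the attribute names `l`, `cn`, and `mail` are the conventional ones; a particular directory's schema may differ):

```
(&(l=Chicago)(cn=*Fred*)(mail=*))
```

The attributes to return — cn, mail, title, and description — are then listed separately in the search request itself.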

LDAP is not limited to contact information, or even information about people. LDAP is used to look up encryption certificates, pointers to printers and other services on a network, and provide "single signon" where one password for a user is shared between many services. LDAP is appropriate for any kind of directory-like information, where fast lookups and less-frequent updates are the norm.

As a protocol, LDAP does not define how programs work on either the client or server side. It defines the "language" used for client programs to talk to servers (and servers to servers, too). On the client side, a client may be an email program, a printer browser, or an address book. The server may speak only LDAP, or have other methods of sending and receiving data—LDAP may just be an add-on method.

If you have an email program (as opposed to web-based email), it probably supports LDAP. Most LDAP clients can only read from a server. Search abilities of clients (as seen in email programs) vary widely. A few can write or update information, but LDAP does not include security or encryption, so updates usually require additional protection such as an encrypted SSL connection to the LDAP server.

LDAP also defines permissions and schemas. Permissions are set by the administrator to allow only certain people to access the LDAP database, and optionally to keep certain data private. A schema is a way to describe the format and attributes of data in the server. For example, a schema entered in an LDAP server might define a "groovyPerson" entry type, which has attributes of "instantMessageAddress" and "coffeeRoastPreference". The normal attributes of name, email address, etc., would be inherited from one of the standard schemas, which are rooted in X.500 (see below).

LDAP was designed at the University of Michigan to adapt a complex enterprise directory system (called X.500) to the modern Internet. X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service "for the rest of us."

LDAP servers exist at three levels: There are big public servers, large organizational servers at universities and corporations, and smaller LDAP servers for workgroups. Most public servers from around year 2000 have disappeared, although directory.verisign.com exists for looking up X.509 certificates. The idea of publicly listing your email address for the world to see, of course, has been crushed by spam.

While LDAP didn't bring us the worldwide email address book, it continues to be a popular standard for communicating record-based, directory-like data between programs.

VSTS 2010, commonly known as Visual Studio, has far more testing features than any previously released version. With it, Microsoft has tried to enter the arena of automated testing software. That arena has long been dominated by HP QTP, a tool of marvellous excellence, especially in the way it creates its logically related VBScript files and in its Object Repository, a strength that still has no competitor.
Google, meanwhile, has been investing heavily in other tools, especially the open-source world's Selenium.
However, VSTS as a tool still needs to mature well beyond its current standard.
Right now it depends largely on the content of source files, which seems odd, since that is only one part of how objects are recognized in the tool.

VSTS can recognize objects on the basis of the object's initial and ending context.
The hierarchy concept, which is quite simple in QTP, fails here for controls that are generated dynamically. Nor has any real framework emerged for VSTS in this regard yet, such as the hybrid or keyword-driven frameworks, or the notably successful data-driven framework.

Script maintenance and code modularization, which are excellent in QTP, are virtually nonexistent here.
I still go with QTP.

Policy-Based Management enables efficient management of multiple SQL Server instances from a single location. You can easily create policies that control security, database options, object naming conventions, and other settings at a highly granular level. Policies can evaluate servers for compliance with a set of predefined conditions and prevent undesirable changes from being made to servers.

Performance Data Collection (Data Collector)

The Data Collector provides a convenient way to collect, store, and view performance data automatically. It collects disk usage, server activity, and query statistics data and loads them into a management data warehouse; the performance data can then be reviewed in SQL Server Management Studio or with third-party tools.

Data Compression

Data compression reduces the amount of storage space needed to store tables and indexes, which enables more efficient storage of data. Data compression does not require changes to be made to applications in order to be enabled.
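As a sketch, compression can be enabled on an existing table with a single DDL statement (the table and index names here are hypothetical):

```sql
-- Rebuild a table with page-level compression
ALTER TABLE dbo.Sales REBUILD
WITH (DATA_COMPRESSION = PAGE);

-- Indexes are compressed separately
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales REBUILD
WITH (DATA_COMPRESSION = PAGE);
```

ROW compression is the lighter alternative to PAGE and can be substituted in the same statements.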

Resource Governor

The Resource Governor enables administrators to control and allocate CPU and memory resources to high-priority applications. This enables predictable performance to be maintained and helps prevent performance from being negatively affected by resource-intensive applications or processes.
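A minimal sketch of capping a low-priority workload is shown below. The pool and group names are hypothetical, and a classifier function (not shown) is still needed to route incoming sessions into the workload group:

```sql
-- Limit CPU and memory available to reporting sessions
CREATE RESOURCE POOL ReportingPool
WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 25);

CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;

-- Apply the new configuration
ALTER RESOURCE GOVERNOR RECONFIGURE;
```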

Transparent Data Encryption

Transparent Data Encryption enables data to be stored securely by encrypting the database files. If the disks that contain database files become compromised, data in those files is protected because it can only be decrypted by an authorized agent. SQL Server performs the encryption and decryption directly, so the process is entirely transparent to connecting applications. Applications can continue to read and write data to and from the database as they normally would. Backup copies of encrypted database files are also automatically encrypted.
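Enabling TDE is a short sequence, sketched below. It assumes a database master key and a server certificate (here called MyServerCert, a hypothetical name) have already been created in the master database:

```sql
USE AdventureWorks;  -- hypothetical database name

-- Create the database encryption key, protected by the server certificate
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;

-- Turn encryption on for the database
ALTER DATABASE AdventureWorks SET ENCRYPTION ON;
```

Back up the certificate and its private key immediately; without them an encrypted database cannot be restored elsewhere.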

External Key Management / Extensible Key Management

External Key Management enables certificates and encryption keys to be stored using third-party hardware security modules that are designed specifically for this purpose. Storing the keys separately from the data enables a more extensible and robust security architecture.

Data Auditing

Data Auditing provides a simple way to track and log events relating to your databases and servers. You can audit logons, password changes, data access and modification, and many other events. Tracking these events helps maintain security and can also provide valuable troubleshooting information. The results of audits can be saved to file or to the Windows Security or Application logs for later analysis or archiving.

Hot-Add CPUs and Hot-Add Memory

Hot-add CPUs, a feature available with the 64-bit edition SQL Server Enterprise, allows CPUs to be dynamically added to servers as needed, without the need to shut down the server or limit client connections. Hot-add memory enables memory to be added in the same way.

Streamlined Installation

The SQL Server 2008 installation process has been streamlined to be easier and more efficient. Individual SQL Server components, such as Database Services, Analysis Services, and Integration Services, can be optionally selected for installation. Failover cluster support configuration has also been added to the installation.

Server Group Management

Server Group management enables T-SQL queries to be issued against multiple servers from a single Central Management Server, which simplifies administration. Results of multi-server queries can be streamed into a single result set or into multiple result sets, and policies can be evaluated against an entire server group.

Upgrade Advisor

The Upgrade Advisor generates a report that highlights any issues that might hinder an upgrade. This provides administrators with detailed information that can be used to prepare for upgrades.

Partition Aligned Indexed Views

Indexed Views let SQL Server persist the results of a view, instead of having to dynamically combine the results from the individual queries in the view definition. Indexed Views can now be created to follow the partitioning scheme of the table that they reference. Indexed views that are aligned in this manner do not need to be dropped before a partition is switched out of the partitioned table, as was the case with SQL Server 2005 indexed views.

Backup Compression

Backup compression enables the backup of a database to be compressed without having to compress the database itself. All backup types, including log backups, are supported and data is automatically uncompressed upon restore.
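Compression is requested per backup with the WITH COMPRESSION option (the database name and path below are hypothetical):

```sql
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION;
```

No special syntax is needed on restore; the data is decompressed automatically.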

Extended Events

The extended events infrastructure provides an in-depth troubleshooting tool that enables administrators to address difficult-to-solve problems more efficiently. Administrators can investigate excessive CPU usage, deadlocks, and application time outs as well as many other issues. Extended events data can be correlated with Windows events data to obtain a more complete picture that will aid in problem resolution.

Dynamic Development

Grouping Sets

Use GROUPING SETS to obtain results similar to those generated by using CUBE and ROLLUP; GROUPING SETS, however, is more flexible, offers better performance, and is ANSI SQL 2006 compliant. GROUPING SETS enables the GROUP BY clause to generate multiple grouped aggregations in a single result set. It is equivalent to using UNION ALL to return a result set from multiple SELECT statements, each of which has a GROUP BY clause.
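For example, assuming a hypothetical Sales table, the following single query returns subtotals per region, subtotals per product, and a grand total in one result set:

```sql
SELECT Region, Product, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY GROUPING SETS
(
    (Region),    -- subtotal per region
    (Product),   -- subtotal per product
    ()           -- grand total
);
```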

MERGE Operator

The new MERGE operator streamlines the process of populating a data warehouse from a source database. For example, rows that get updated in the source database will probably already exist in the data warehouse but rows that are inserted into the source database will not already exist in the data warehouse. The MERGE statement distinguishes between the new and updated rows from the source database so that the appropriate action (insert or update) can be performed against the data warehouse in one single call.
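A sketch of that pattern, with hypothetical staging and warehouse tables:

```sql
MERGE INTO dbo.DimCustomer AS target
USING dbo.StagingCustomer AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    -- Row already exists in the warehouse: update it
    UPDATE SET target.Name = source.Name,
               target.City = source.City
WHEN NOT MATCHED BY TARGET THEN
    -- Row is new: insert it
    INSERT (CustomerID, Name, City)
    VALUES (source.CustomerID, source.Name, source.City);
```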

LINQ

Language Integrated Query (LINQ) is a .NET Framework version 3.5 feature that provides developers with a common syntax to query any data source from client applications. Using LINQ to SQL or LINQ to Entities, developers can select, insert, update, and delete data that is stored in SQL Server 2008 databases using any .NET programming language such as C# and VB.NET.

Change Data Capture

Use Change Data Capture (CDC) to track changes to the data in your tables. CDC uses a SQL Server Agent job to capture insert, update and delete activity. This information is stored in a relational table, from where it can be accessed by data consumers such as SQL Server 2008 Integration Services. Use CDC in conjunction with Integration Services to incrementally populate data warehouses, enabling you to produce more frequent reports that contain up-to-date information. It also allows sync-enabled mobile and desktop applications to perform efficient data synchronization between client and server, without requiring changes to the database.
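CDC is switched on first for the database and then per table, using the system stored procedures shown below (the table name is hypothetical):

```sql
-- Enable CDC for the current database
EXEC sys.sp_cdc_enable_db;

-- Track changes to one table; a NULL role means no gating role is required
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customer',
    @role_name     = NULL;
```

From then on, the SQL Server Agent capture job records insert, update, and delete activity into a change table that consumers can query.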

Table-Valued Parameters

Table-Valued Parameters (TVPs) allow stored procedures to accept sets of rows as parameters. Developers can write applications that pass sets of data into stored procedures rather than just one value at a time. Table-valued parameters make the development of stored procedures that manipulate data more straightforward and can improve performance by reducing the number of round trips the application needs to make to the database.
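As a sketch with hypothetical names: a table type is declared once, and the procedure then takes it as a parameter (TVPs must be marked READONLY):

```sql
CREATE TYPE dbo.OrderLineList AS TABLE
(
    ProductID int NOT NULL,
    Quantity  int NOT NULL
);
GO

CREATE PROCEDURE dbo.InsertOrderLines
    @OrderID int,
    @Lines   dbo.OrderLineList READONLY  -- whole set of rows in one call
AS
    INSERT INTO dbo.OrderLine (OrderID, ProductID, Quantity)
    SELECT @OrderID, ProductID, Quantity
    FROM @Lines;
```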

ADO.NET Entity Framework and the Entity Data Model

SQL Server 2008 databases store data in a relational format, but developers typically access the data they contain by using an application that was developed in an object-oriented programming language. Creating such applications can be made more complex if you need to build knowledge of the underlying database schema into the applications.

The ADO.NET Entity Framework allows a database to be abstracted and modeled into business objects, or entities, which can be more efficiently used by object-oriented programming languages such as C# and VB.NET. Applications can then use LINQ to query these entities without having to understand the underlying physical database schema.

Synchronization Services for ADO.NET

Synchronization Services for ADO.NET enables developers to build occasionally connected systems (OCSs), such as personal digital assistants (PDAs), laptop computers, and mobile phones, that synchronize with server-based databases. Users can work with a copy of the data that is cached on their local device and then synchronize changes with a server when a connection becomes available.

CLR Improvements

Common Language Runtime functionality in SQL Server 2008 has been improved in several areas. User-defined aggregates (UDAs) now support up to 2GB of data and can accept multiple inputs. User-defined types (UDTs), like UDAs, also now support up to 2GB of data. CLR table-valued functions now feature an optional ORDER clause in the CREATE FUNCTION statement, which helps the optimizer run the query more efficiently.

Conflict Detection in Peer-to-Peer Replication

In a peer-to-peer replication scenario, all nodes in the replication topology contain the same data and any node can replicate to any other node, leading to the possibility of data conflicts. Use conflict detection to make sure that no such errors go undetected and that data remains consistent.

Service Broker Priorities and Diagnostics

Service Broker provides an asynchronous communication mechanism that allows servers to communicate by exchanging queued messages. Service Broker can be configured to prioritize certain messages so that they are sent and processed before other lower priority messages. Use the Service Broker Diagnostic Utility to investigate communication problems between participating Service Broker services.

ADO.NET Data Services

Microsoft ADO.NET Data Services provides a data access infrastructure for Internet applications by enabling Web applications to expose SQL Server data as a service that can be consumed by client applications in corporate networks and across the Internet.

Beyond Relational

Spatial data with GEOGRAPHY and GEOMETRY data types

New GEOGRAPHY and GEOMETRY data types allow spatial data to be stored directly in a SQL Server 2008 database. Use these spatial data types to work with location-based data that describes physical locations, such as longitude and latitude.

GEOGRAPHY enables you to represent three-dimensional geodetic data such as GPS applications use. GEOMETRY enables you to represent two-dimensional planar data such as points on maps. Spatial data types help you to answer questions like ‘How many of our stores are located within 20 miles of Seattle?’
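A small sketch of the "within 20 miles" question, using hypothetical coordinates. geography::Point takes latitude, longitude, and a spatial reference ID (4326 is WGS 84, the GPS datum), and STDistance returns meters:

```sql
DECLARE @seattle geography = geography::Point(47.6062, -122.3321, 4326);
DECLARE @store   geography = geography::Point(47.6740, -122.1215, 4326);

-- 20 miles is roughly 32187 meters
SELECT CASE WHEN @seattle.STDistance(@store) <= 32187
            THEN 'within 20 miles' ELSE 'outside' END AS Answer;
```

In a real query the store locations would live in a geography column, usually with a spatial index over it.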

Virtual Earth Integration

Use the new spatial data types in SQL Server 2008 with Microsoft Virtual Earth to deliver rich graphical representations of the physical locations stored in a database. Use Virtual Earth support to create applications that display data about locations in desktop maps or web pages. For example, SQL Server 2008 makes it easy to show the locations of all company sites that are less than 50 kilometers from Denver.

Sparse Columns

Sparse columns provide an efficient way to store NULL data in tables by not requiring NULL values to take up space. Applications that reference sparse columns can access them in the same way as they access regular columns. Multiple sparse columns in a table are supported by using a column set.
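A sketch with a hypothetical table: columns expected to be mostly NULL are marked SPARSE, and an optional XML column set exposes all sparse columns as a group:

```sql
CREATE TABLE dbo.Product
(
    ProductID int PRIMARY KEY,
    Name      nvarchar(100) NOT NULL,
    Weight    decimal(10,2) SPARSE NULL,  -- rarely populated
    Color     nvarchar(20)  SPARSE NULL,  -- rarely populated
    AllProps  xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);
```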

Filtered Indexes

A filtered index is essentially an index that supports a WHERE condition and includes only the rows that match it. It is a nonclustered index created on a subset of rows. Because a filtered index generally does not contain all the rows in the table, it is smaller and delivers faster performance for queries that reference the rows it contains.

Use filtered indexes to optimize performance for specific queries by ensuring that they contain only the rows referenced by the queries.
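For example, assuming a hypothetical Orders table where most rows are closed, an index that covers only open orders stays small and fast:

```sql
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE Status = 'Open';
```

Queries whose own WHERE clause includes the same Status = 'Open' predicate can then be satisfied from this much smaller index.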

Integrated Full-Text Search

Full text indexes enable queries to be performed for words and phrases on text stored in your databases. The Full-Text Engine in SQL Server 2008 is fully integrated into the database and full-text indexes are stored within database files rather than externally in the file system. This allows Full text indexes to be fully backed up and restored along with the rest of the database. Full-text indexes are also integrated with the Query Processor, so they are used more efficiently.
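Once a full-text index exists on the column, word and phrase queries use predicates such as CONTAINS (table and column names here are hypothetical):

```sql
SELECT ProductID, Description
FROM dbo.Product
WHERE CONTAINS(Description, N'"stainless steel"');
```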

FILESTREAM Data

FILESTREAM enables binary large object (BLOB) data to be stored in the Microsoft Windows NTFS file system instead of in a database file. Data that is stored using FILESTREAM behaves like any other data type and can be manipulated using T-SQL SELECT, INSERT, UPDATE, and DELETE statements.

Unlike traditional BLOB storage, FILESTREAM data is logically part of the database while being stored efficiently outside it in the NTFS file system. FILESTREAM data participates in all SQL Server transactions and backup operations, along with the rest of the database.

Large User-Defined Types (UDTs)

Create user-defined types (UDTs) that go beyond the traditional data types supported to describe custom data types. UDTs in SQL Server 2008 are more extensible than previous versions since the 8KB size limit has been increased to 2GB. Note that the powerful new spatial data types GEOMETRY and GEOGRAPHY in SQL Server 2008 were developed using this new UDT architecture.

Large User-Defined Aggregates (UDAs)

SQL Server 2008 features a set of built-in aggregate functions that can be used to perform common aggregations such as summing or averaging data. Create custom, user-defined aggregates (UDAs) to manage custom aggregations. UDAs in SQL Server 2008 are more extensible than previous versions since the 8KB size limit has been increased to 2GB.

DATE / TIME Data Types

SQL Server 2008 introduces several new date- and time-based data types. DATETIME2 extends the range and accuracy of DATETIME, storing date and time data to a precision of 100 nanoseconds. The new DATE and TIME data types enable you to store date and time data separately. The new DATETIMEOFFSET data type introduces time zone awareness by storing the date, the time, and an offset such as 'plus 5 hours'.
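A short sketch of the new types and their literals:

```sql
DECLARE @d   date           = '2008-11-15';
DECLARE @t   time(7)        = '23:59:59.9999999';  -- 100 ns precision
DECLARE @dt2 datetime2(7)   = '2008-11-15 23:59:59.9999999';
DECLARE @dto datetimeoffset = '2008-11-15 23:59:59.9999999 +05:00';

SELECT @d AS TheDate, @t AS TheTime, @dt2 AS Stamp, @dto AS StampWithZone;
```

The number in parentheses is the fractional-second precision, from 0 to 7 digits.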

Improved XML Support

SQL Server 2008 features several XML enhancements: lax validation, support for the xs:dateTime data type, and union functionality for list types all provide greater flexibility for defining XML schemas. XQuery includes support for the let clause, and the modify method of the xml data type now accepts xml variables as input for an insert expression.

ORDPATH

Hierarchical data is organized differently from relational data, typically in the form of a tree. An example of hierarchical data is a typical organization chart that outlines the relationships between managers and the employees they manage. A column that uses the HierarchyID data type contains data that explicitly describes the hierarchical relationships between rows in the form of a path. ORDPATH is the labeling scheme that makes the HierarchyID data type an efficient way to program hierarchical data.
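A sketch of an org chart stored with hierarchyid; the path strings such as '/1/1/' encode each node's position under its parent (the table and names are hypothetical):

```sql
CREATE TABLE dbo.Employee
(
    EmployeeID int PRIMARY KEY,
    Name       nvarchar(50) NOT NULL,
    OrgNode    hierarchyid  NOT NULL
);

INSERT INTO dbo.Employee VALUES (1, N'CEO',     hierarchyid::GetRoot());
INSERT INTO dbo.Employee VALUES (2, N'Manager', hierarchyid::Parse('/1/'));
INSERT INTO dbo.Employee VALUES (3, N'Report',  hierarchyid::Parse('/1/1/'));

-- The manager's subtree (IsDescendantOf includes the node itself)
SELECT Name
FROM dbo.Employee
WHERE OrgNode.IsDescendantOf(hierarchyid::Parse('/1/')) = 1;
```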

Pervasive Insight

Fixed Query Plan Guides (Plan Freezing)

Freezing query plans enables you to influence how the SQL Server query optimizer executes queries. SQL Server 2008 allows an existing query execution plan to be imported into a plan guide, which forces the query optimizer to always use a particular execution plan for a specific query. Using fixed query plans ensures that queries are executed in the same way every time they run.

Star Join Query Optimization

Data warehouses are often implemented as star schemas. A star schema has a fact table at its centre, which typically contains a very large number of rows. Star join query optimization can provide improvements in performance for queries that select a subset of those rows. When SQL Server processes queries using star join query optimization, bitmap filters eliminate rows that do not qualify for inclusion in the result set very early on, so that the rest of the query is processed more efficiently.

Enterprise Reporting Engine

The reporting engine in SQL Server 2008 Reporting Services enables the pulling together of data from multiple heterogeneous sources from across an Enterprise. Large and complex reports can be produced in various formats, including list, chart, table, matrix, and tablix (a table/matrix hybrid).

Access and manage reports through a Microsoft SharePoint Services site, simplifying administration, security, and collaboration, and making reports more easily available.

Report Builder Enhancements

Report Builder is an end-user tool for the creation and editing of reports. Report Builder in SQL Server 2008 has an interface that is consistent with Microsoft Office 2007 products, and because it masks the underlying complexity of report building, nontechnical users can create sophisticated reports with relative ease.

Improved Rendering for Microsoft Office Word and Excel

Reports generated by SQL Server 2008 Reporting Services can be viewed and edited by using Microsoft Office Excel and Microsoft Office Word. The Excel rendering extension produces .xls files that are compatible with versions of Microsoft Office Excel from version 97 upwards.

It offers improved options over previous versions, such as the rendering of subreports. The Word rendering extension, which is new in SQL Server 2008 Reporting Services, produces .doc files that are compatible with versions of Microsoft Office Word from version 2000 upwards.

Partitioned Table Parallelism

Parallelism refers to using multiple processors in parallel to process a query, which improves query response time. On a multiprocessor system, SQL Server 2008 uses parallel processing when you run queries that reference partitioned tables.

When SQL Server 2008 processes such a query, rather than allocating just one processor for each partition referenced by the query, it can allocate all available processors, regardless of the number of partitions referenced.

IIS Agnostic Report Deployments

Reporting Services in SQL Server 2008 does not depend on IIS to provide core functionality as it did in SQL Server 2005. Reporting Services can directly generate and deliver reports by accessing the HTTP.SYS driver directly. This has the effect of simplifying the deployment and management of Reporting Services in addition to offering better performance when generating larger reports.

Persistent Lookups

SQL Server Integration Services packages use lookups to reference external data rows in the data flow. Lookup data flow transformations load the external data into cache to improve the performance of this operation. SQL Server 2008 Integration Services uses persistent lookups so that data loaded into the lookup cache is available to other packages, or to multiple pipelines within the same package, without the need to reload the cache.

Analysis Services Query and Writeback Performance

Cell writeback in SQL Server Analysis Services enables users to perform speculative analysis on data. Users can modify specific data values and then issue queries to see the effect of the changes. This can be useful for forecasting, for example.

In SQL Server 2008 Analysis Services, the values that a user changes are stored in a MOLAP format writeback partition, which results in better query and writeback performance than the ROLAP format that was used in SQL Server 2005 Analysis Services.

Best Practice Design Alerts

Good design is fundamental to creating optimal Analysis Services solutions. SQL Server 2008 Analysis Services uses Analysis Management Objects (AMO) warnings to alert you when the choices you make in your design deviate from best practice.

Design problems are underlined in blue, similar to the way spelling mistakes are underlined in red in Microsoft Office Word. You can see the full text of a warning by hovering the pointer over the underlined object. You can disable AMO warnings if you choose.

Analysis Services Dimension Design

Various new features in SQL Server 2008 Analysis Services contribute to improving and simplifying the dimension design process. Analysis Management Objects (AMO) warnings help ensure designs comply with best practice, the Attribute Relationship Designer is a visual tool for defining attribute relationships, and key column management is easier with the key columns dialog box.

Analysis Services Time Series

Microsoft Time Series enables trends over time to be forecasted. For example, you can use it to predict product sales over the coming 12-month period. SQL Server 2008 Analysis Services includes the same algorithm for short-term analysis that SQL Server 2005 Analysis Services used, and additionally introduces an algorithm for long-term trend analysis. Both algorithms are used by default, and you can also choose to use just one or the other.

Data Profiling

SQL Server 2008 Integration Services includes the Data Profiling task, which enables the quality of data to be inspected before adding it to your databases. The task creates a profile that includes information such as the number of rows, NULL values, and distinct values that are present. Read profiles created by the Data Profiling task by using the Data Profile Viewer, and then clean and standardize the data as appropriate.

    msgbox "Html Id not found, searching for Name"
    objId = obj(x).GetROProperty("name")
    If Not (StrComp(objId, "", 1) = 0) Then
        msgbox "Name: " & objId
    Else
        msgbox "Html Id as well as Name not found, searching for Innertext"
        objId = obj(x).GetROProperty("innertext")
        If Not (StrComp(objId, "", 1) = 0) Then
            msgbox "Innertext: " & objId
        Else
            msgbox "Html Id, Name and Innertext not found, searching for ???"
            objId = obj(x).GetROProperty("??")
            If Not (StrComp(objId, "", 1) = 0) Then
                msgbox "??: " & objId
            Else
                msgbox "Html Id, Name, Innertext and ?? not found"
            End If
        End If
    End If
End If
Next

Set GetAllSpecificControls = Page.ChildObjects(Desc)
End Function
' get each of the links
Test1 0
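The fragment above relies on a ChildObjects collection built earlier in the function, so it is not runnable on its own. As a self-contained sketch of the same ChildObjects pattern, here is how the links on a page might be enumerated; the browser, page, and property names used here are hypothetical placeholders, not taken from the original script:

```vbscript
'Hypothetical sketch: enumerate all links on a page via ChildObjects
Dim oLinkDesc, colLinks, i

'Build a description that matches every Link test object
Set oLinkDesc = Description.Create()
oLinkDesc("micclass").Value = "Link"

'ChildObjects returns a zero-based collection of matching test objects
Set colLinks = Browser("MyApp").Page("MyPage").ChildObjects(oLinkDesc)

For i = 0 To colLinks.Count - 1
    Msgbox colLinks(i).GetROProperty("innertext")
Next
```

The same approach generalizes to the function above: once the collection is obtained, each object's runtime properties (html id, name, innertext) can be queried with GetROProperty.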

Virtual Environment

A friend insists that we'll only know the recession is over when software vendors no longer start every whitepaper with the phrase "In these tough economic times ..." It may be as reliable an indicator as any.

Meanwhile, in these tough economic times, I often read of factories suffering so badly that they are "operating at only 50% of capacity." For a manufacturing plant, such low utilization is a disaster. So, gentle reader, what do you think would be the average utilization of your data center's capacity? Nothing like 50%, that's for sure. Typical enterprise servers run at about 10% utilization according to a recent McKinsey report. They may, just may, be able to reach as high as 35% with a concerted effort.

There are many good excuses for this situation, with both business and technical justifications. Enterprise applications on the same server do not always play together nicely. One will demand all the memory it can get, sulking unresponsively in a corner if it can't get it; another will push over less aggressive applications in order to grab more CPU. In the SQL Server world, we're working on that continuously, with every version adding better resource governance and management. (See http://bit.ly/ss2008rg for specific information about SQL Server 2008.) Then again, these same applications are often mission-critical, and it is business requirements that force us to isolate them from the risk of downtime or other disruptions. Approaching our problems in this way, it's quite easy to add a new server for this app, and another server for that one, and sure enough, the result is soon 10% utilization.

It won't do. There's a capital cost, and fixed running costs, for every server we add, not to mention the environmental considerations of wasted energy and resources that weigh heavily on many of us, recession or not. I have visited datacenters in emerging economies from Egypt to China where simply having enough power available is a problem and resource management is imperative.

In the database world, we have traditionally approached these problems by running multiple native instances of servers on the same box. This can indeed consolidate hardware and reduce costs. Nevertheless, IT managers and DBAs are increasingly looking to virtualization. Why? There are numerous advantages. For example, with virtualization each application can have a dedicated, rather than shared, Windows instance: especially useful for mixed workloads; and with virtualization, instances are limited only by the capacity of the machine, rather than the native 50-instance limit.

SQL Server 2008 works exceptionally well with Windows Server 2008 R2 and Hyper-V to deliver effective virtualization. In SQL Server 2008 R2 (shipping in the first half of 2010) we will support up to 256 logical processors on that platform to scale those solutions even further. There are some great scenarios for this. Business Intelligence applications such as Analysis Services and Reporting Services are prime candidates, especially when mixed BI and operational workloads peak at different times. Virtualization has other benefits for the database user: for example, the lifecycle from development to test to production becomes easier to manage with a consistent, virtualized, environment.

It's really worth considering virtualization, and building up your understanding of the technology and requirements. There's a great whitepaper at http://bit.ly/sqlcatvirtual with sound advice and background for any SQL Server 2008 DBA considering this technology. Good material to have to hand, in these tough economic times.

Actions help divide your test into logical units, like the main sections of a Web site, or specific activities that you perform in your application.

There are three kinds of actions:

non-reusable action — an action that can be called only in the test with which it is stored, and can be called only once.

reusable action — an action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.

external action — a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.
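As a sketch, calling a local reusable action and an external action from another test looks like this in QuickTest; the action and test names here are hypothetical:

```vbscript
'Call a reusable action stored with the current (local) test
RunAction "Login", oneIteration

'Call an external action stored with another test;
'the owning test's name appears in square brackets
RunAction "Login [CommonActions]", oneIteration
```

External actions called this way remain read-only in the calling test, as described above.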

Types of Recording Modes

Normal Mode: QuickTest’s normal recording mode records the objects in your application and the operations performed on them.

Analog Mode: Analog recording mode records exact mouse and keyboard operations. The recorded track of operations is stored in an external data file. Note: You cannot edit analog recording steps from within QuickTest.

Low-Level Recording: Supports the following methods for each test object:
Click, DblClick, Drag, Drop, Type, Activate, Minimize, Restore, Maximize Note: Steps recorded using low-level mode may not run correctly on all objects.

Running Modes

Run - Normal run; use this mode when there are no errors or show-stoppers.

Run From Step - Executes a particular portion of the script, or a specific Action.

Update Run - This can be used to update object properties, checkpoint properties, and Active Screen images and values.

Debug Run - To identify problems we can use Debug mode. By using Step Into, Step Out, and Step Over we can locate the problem quickly.

Debug Run

Step Into
Choose Debug > Step Into, click the Step Into button, or press F11 to run only the current line of the active test or component. If the current line of the active test or component calls another action or a function, the called action/function is displayed in the QuickTest window, and the test or component pauses at the first line of the called action/function.

Step Out
Choose Debug > Step Out or click the Step Out button, or press SHIFT+F11, only after using Step Into to enter an action or a user-defined function. Step Out runs to the end of the called action or user-defined function, then returns to the calling action and pauses the run session.

Step Over
Choose Debug > Step Over or click the Step Over button, or press F10 to run only the current step in the active test or component. When the current step calls another action or a user-defined function, the called action or function is executed in its entirety, but the called action script is not displayed in the QuickTest window.

Reusable Actions are more efficient than VB functions, since objects placed in the OR reduce memory leaks. With a reusable Action, others can copy it, and any changes made to the object repository of the copied Actions ripple through to all copies automatically. That's the most useful thing about Actions that I can see. HP recommends that you create one big test with all of your Actions in it and call that the "Action Repository". From there, everyone can copy these Actions and use them in their own test flows. This can't be done with function libraries, because function libraries don't have object repositories.

You can try this yourself: create a test Action Repository with a Login Action, a Query Action, and a Logoff Action. Then create a test that calls copies of these, run it, change the names of the buttons or edit boxes in the repository, and watch how the new object names are automatically rippled into the copies you made. Now imagine how powerful that can be if you have hundreds of Actions. In theory, you would never have to change any of the copied test flows, only the one place: the Action Repository. That's where the fun stops, in my opinion, because when you do the merge to convert the local repositories to shared repositories, I think you can get into trouble... at least I do.

Our general concept is what everyone thinks, but the actual process of identification is different.

Object identification:

QTP identifies an object in the following manner during a run.

It first interprets the script statement.

It then realizes that it needs to perform some action on some object, and for that it needs information about the object.

For that information it goes to the object repository and retrieves it from there.

With that information it tries to identify the object. Once the object is identified, it performs the action on it. Here, "information" means object properties.

The question, then, is how QTP learns those properties in order to identify the objects.

This is the point where everyone has their own theory, but the actual learning process is as follows.

There are two types of object identification that QTP normally uses, apart from the ordinal identifiers.

In general, QTP has four types of properties:

1. Mandatory properties.

2. Assistive properties.

3. Base filter properties.

4. Optional filter properties.

In addition, there are ordinal identifiers such as Location, Index, and CreationTime.

As noted, QTP uses two types of object identification.

The first is normal identification.

In normal identification, the properties are learned in the following way.

First, QTP learns all the mandatory properties at once and tries to identify the object with them. If it finds these properties sufficient to identify the object uniquely, it stops learning and uses those properties. If not, it learns the first assistive property and tries to identify the object with the combined set (the mandatory properties plus the first assistive property). If that set is sufficient to identify the object uniquely, it stops learning and uses those properties. If not, it learns the second assistive property and repeats the same process until the object is identified uniquely. If it fails to identify the object uniquely even with all of these properties, the ordinal identifiers come into the picture. This is how QTP identifies objects when smart identification is disabled.

The second type of identification is smart identification.

In this process, QTP learns all the base filter properties and optional filter properties at once, but the identification process is as follows.

Even when smart identification is enabled, QTP first tries to identify the object using the normal identification process described above, except that it does not use the ordinal identifiers. If normal identification fails, it falls back to smart identification, discarding the properties learned during normal identification. It then uses all the base filter properties at once and tries to identify the object uniquely. If these are sufficient, it uses them to identify the object; otherwise it takes the first optional filter property, and the same process continues as above until the object is identified uniquely. If smart identification also fails, QTP resorts to the ordinal identifiers. This is the actual process of identification.

1. What is the use of a library file, and what does it contain?
2. What is the difference between writing descriptive programming and writing code in a library file?

Anything that you cannot do in QTP itself can be done in VBScript, e.g. implementing business logic. This code can be written in VBScript in the library file. Using library files you can separate the code, so you get a kind of modularity in the code.

2. When the objects in the AUT are dynamic in nature, we can use DP (descriptive programming) for identifying the objects. DP code can be written in the library file.

DP is a kind of programming that eliminates the need for the OR.

You can pass the property:value pair to identify the object, or use a
Description object to do the same.
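Both styles can be sketched as follows; the object names and property values here are hypothetical placeholders:

```vbscript
'Style 1: inline property:=value pairs
Browser("name:=MyApp").Page("title:=Login").WebButton("name:=Sign in").Click

'Style 2: a Description object built at run-time
Dim oBtn
Set oBtn = Description.Create()
oBtn("micclass").Value = "WebButton"
oBtn("name").Value = "Sign in"
Browser("name:=MyApp").Page("title:=Login").WebButton(oBtn).Click
```

The Description object style is usually preferred when several properties must be combined, or when the description needs to be reused, for example with ChildObjects.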

How can we write a descriptive program to check the first five check boxes out of ten check boxes?

Dim strChkBox
Dim intCntr
Dim colChkBoxes

'Create a description that matches all web check boxes
Set strChkBox = Description.Create()
strChkBox("micclass").Value = "WebCheckbox"

'Get all matching check boxes on the page
Set colChkBoxes = Browser("YahooMail").Page("YahooInbox").ChildObjects(strChkBox)

'Check the first five boxes (ChildObjects collections are zero-based)
For intCntr = 0 To colChkBoxes.Count - 1
    If intCntr < 5 Then
        colChkBoxes(intCntr).Set "ON"
    Else
        Exit For
    End If
Next
If an object's attributes change at runtime, it is a different matter. Often we can
handle dynamic changes by using a regular expression, but sometimes an RE doesn't
help. Here is an example:

RowCnt = DataTable.GetSheet("Action1").GetRowCount
ReDim Arr(RowCnt - 1)

'Read all values from the "RollNos" column into the array
For i = 0 To RowCnt - 1
    Arr(i) = DataTable.GetSheet("Action1").GetParameter("RollNos").ValueByRow(i + 1)
Next

'Sort the array in ascending order
'(the comparison operator was missing in the original; > is assumed)
For i = 0 To UBound(Arr)
    For j = i + 1 To UBound(Arr)
        If Arr(i) > Arr(j) Then
            Tmp = Arr(j)
            Arr(j) = Arr(i)
            Arr(i) = Tmp
        End If
    Next
Next
For i = 0 to (RowCnt-1)

Checkpoint Enhancements
. Ability to alter properties of all checkpoints during run-time. E.g. expected bitmap of a bitmap checkpoint, expected values of a Table/DB checkpoint etc.
. Ability to create checkpoints at run-time
. Ability to load checkpoints from an external file at run-time
. Ability to enumerate all checkpoints present in the current script

Set oDosW = Description.Create
oDosW("regexpwndtitle").Value = "C:\\Windows\\System32\\cmd\.exe"
oDosW("regexpwndtitle").RegularExpression = False
Window(oDosW).Activate

'Launch the window with title MyPuttyTesting
SystemUtil.Run "cmd", "/K title MyPuttyTesting"
'Launch the window with title MyFTPTesting
SystemUtil.Run "cmd", "/K title MyFTPTesting"

'Uniquely recognize console applications simultaneously without any ordinal identifier
Window("title:=MyPuttyTesting").Activate
Window("title:=MyFTPTesting").Activate

To get the current script name we can use the below line of code

Msgbox Environment("TestName")
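The built-in Environment object exposes several other run-time values as well. For example, assuming the standard built-in environment variables:

```vbscript
Msgbox Environment("TestName")    'name of the current test
Msgbox Environment("TestDir")     'folder in which the test is stored
Msgbox Environment("ActionName")  'name of the currently running action
Msgbox Environment("OS")          'operating system of the host machine
```

User-defined environment variables can be added through Test Settings and read the same way.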

'Description: Function to import all the sheets present in the file
'Params:
'@FileName - File to import
Function ImportAllSheets(ByVal FileName)
    Dim oExcel, oBook, oSheet

    'Launch Excel
    Set oExcel = GetObject("", "Excel.Application")

    'Open the file in read-only mode
    Set oBook = oExcel.WorkBooks.Open(FileName, , True)

    'Enumerate through all the sheets present in the file
    For Each oSheet In oBook.WorkSheets
        'Check if a DataTable sheet with the current name already exists
        If Not IfDataSheetExist(oSheet.Name) Then
            'A sheet cannot be imported if it does not exist in the DataTable
            DataTable.AddSheet oSheet.Name
        End If

        'Import the sheet
        DataTable.ImportSheet FileName, oSheet.Name, oSheet.Name
    Next

    Set oBook = Nothing

    'Quit Excel
    oExcel.Quit
    Set oExcel = Nothing
End Function

'Function to check if a DataTable sheet exists or not
Function IfDataSheetExist(ByVal SheetName)
    IfDataSheetExist = True
    On Error Resume Next
    Dim oTest
    Set oTest = DataTable.GetSheet(SheetName)
    If Err.Number Then IfDataSheetExist = False
    On Error Goto 0
End Function

Smart identification is an algorithm that QTP uses when it is not able to identify an object. The algorithm tries to figure out whether there is a unique control on the page that matches some of the properties of the failed object. For example, when QTP smart identification starts looking for a Logout button and there is none, but there is only one button on the page, smart identification assumes that this is the Logout button we were looking for. Smart identification can be disabled at run-time:

Setting("DisableReplayUsingAlgorithm") = 1
'or using AOM code
CreateObject("QuickTest.Application").Test.Settings.Run.DisableSmartIdentification = True

Scripting Enhancements. More options for other scripting languages i.e. JScript, VB.NET, C#.NET etc…
. Support for start and finish events in Action and Test. E.g. – Test_Init, Test_Terminate, Action_Init, Action_Terminate. Currently this can be achieved through the use of a class, but it would be easier for people to use if the functionality were built-in
. Ability to execute specified VBScript even before the test execution starts. This script would run outside QTP and would help make changes to QTP settings that cannot be done during run-time. This would help in overcoming limitations like Libraries cannot be associated at run-time
. Ability to execute specified VBScript even after the test execution ends. This script would run outside QTP and would help perform post-execution code. This would be helpful in scenarios like sending email with the report
. Ability to create error handler functions which are automatically called when an error occurs
. Performance improvement when using Descriptive programming when compared to Object Repository usage
. Ability to Debug libraries loaded during run-time using Execute, ExecuteGlobal and ExecuteFile
. Ability to unload a library at run-time
. Ability to register generic functions for any object type. Currently RegisterUserFunc can only be used to register the method for a single object type. So if a new Set method is created then multiple RegisterUserFunc statements need to be written. QTP should provide some way to use a pattern or something to apply the same method to multiple object types
. Ability to Unlock a locked system from within the code
. Ability to prevent screen locking during the execution
. Ability to debug error which occurs during terminations of script. Currently any errors that occur during the termination of the script cannot be debugged and launches
. Ability to Enumerate current Action parameters
. Ability to Enumerate current Test Parameters
. Ability to Enumerate current Environment parameters
. Ability to Export environment variables to XML in code
. Ability to save data directly to Design time data table
. Ability to load Excel 2007 files into data table
. Ability to Encrypt the script code
. Ability to password protect the scripts
. Ability to save the list of libraries open and the libraries to automatically open when opening the script
. Ability to save the breakpoints and bookmarks with the script
. Ability to move a script from write mode to read-only mode. Currently Enable editing button only enables read-only to write mode transition in tests/components/libraries and not vice-versa
. Ability to execute in thread. Even one thread would do
. Ability to call Windows API which require structures
. Ability to Load/Unload add-ins at run-time
. Add-in SDK documentation to create new add-ins
. Ability to use an external Excel file directly as a DataTable and have access to all the Excel objects as well
. Ability to pause/break a script. Similar to Debug.Assert method in VB
. Ability to reconnect to QC automatically when the connect times out to the server
. Built-in support for Web 2.0 and AJAX
. Ability to call one Test from another test
. Auto include of SOR and associated libraries when calling an external action present in some other script
. Ability to clear session cookies of a browser without closing the browser
. Ability to get the stack trace in the code in case of an error
. Improvement in error messages in case of general error messages
. Ability to get the current function name from inside the function
. Ability to get the time taken by a transaction when using Services.EndTransaction
. Ability to do OCR on images (captcha) and convert them to text

QTP AOM Enhancements
. Ability to open Business process script in QTP
. Ability to open/close library file in UI using AOM
. Ability to create a library file through AOM and add code to the same
. Ability to create, open and modify a BPT Application area through AOM
. Ability to set/delete breakpoints in IDE using AOM
. Ability to change any option at run-time and having the change impacted instantly
. Ability to convert a BPT component to a normal test using AOM and through UI as well

Recovery Scenarios Enhancements. Recovery scenarios (RS) to run in a separate thread. Currently recovery scenarios run in the same thread as QTP. This causes recovery scenarios to be useless in case a Modal dialog blocks QTP
. Option to stop chaining in recovery scenarios. Currently if RS1 and RS2 both match the trigger criteria then both of the scenarios are executed. There is no way to specify that RS2 should not be executed if RS1 is executed
. Currently there is no way to enumerate recovery scenarios present in a File
. Recovery scenarios don’t work when they are associated at run-time
. Ability to test the RS trigger from the UI. This would help in debugging issues when a recovery scenario does not get fired

Test Reporting Enhancements . Ability to create reports in different format. Excel, Word, HTML etc…
. Reporting of errors or failures in different categories and priorities. E.g. – Reporting an error with severity, priority and category (Application, Script, Validation, Checkpoint)
. Exact location of the error in the script. This should include the line of code, error number, error statement
. Direct ability to send the report to a specified list of email addresses when the test ends
. Currently the reports can only be viewed through the QTP report viewer and cannot be viewed on machines without this tool. The report should be made completely Web compatible so that it can be viewed in any browser in the same way it is displayed in the report viewer
. Ability to view status of current Action. Currently Reporter.RunStatus only provides the status of whole test and not for current action
. Ability to enumerate the current status from within a Test i.e. Actions executed with details, checkpoints executed with details, recovery scenarios executed with details
. Hosting web server to capture and display script status in batch
. Ability to report to an existing result file
. Ability to override the Action Name in reports
. Ability to specify the time zone to be used in reports
. Ability to specify the results path during the run