This site is all about automation: tools and frameworks explored and explained.

Author: Luzju

1 Introduction

Ansible is one of the most popular systems for remote server configuration management.

1.1 Benefits of using Ansible

Ansible is used to automate a large number of servers, for example in a datacenter

Ansible scripts are easier to maintain than shell scripts and are easier to scale up

Ansible makes it easier to deploy a desired state than a shell script (which can go wrong, leaving a state that is difficult to recover from)

Ansible makes it easier to work across different Linux distributions (as opposed to shell scripts)

Ansible is a configuration management tool. Once a desired state is defined through Ansible, if the current state changes, just reapply the desired state

Ansible is easier than other solutions such as Puppet, Chef, Salt, CFEngine etc.

Ansible does not need an agent on managed servers. It uses SSH

Ansible is modular and therefore flexible

Ansible modules are usually written in Python, which is a highly popular language. There are hundreds of modules already available to administrators

1.2 Installing Ansible

1.2.1 Minimum Requirements

One server acting as the Controller Node and one server acting as a Managed Host. RHEL 7.3 or CentOS 7.3 (or above) are the recommended platforms.

1.2.2 Installing the Controller Node

To install the controller node you need to install Python 2.x. The Ansible version that comes with the repos for 7.3 supports Python 2.x. If you are using a later version of Ansible (say 2.4 or later), then Python 3 is also supported.

You need to add the EPEL repo since this is the one that contains Ansible. You will also need to create a non-root user which is used to perform all Ansible tasks.

1.2.3 Installing the Managed Node

The requirements are the same as the controller node. You need to install Python. You will also need to set up SSH communications.

1.2.4 Configuring SSH

Set up SSH key-based authentication using ssh-keygen. This creates a public key as well as a private key. The server that has the public key sends a challenge that can only be answered with the private key. The private key should be kept in the local user account on the control node. The public key should be appended to the ~/.ssh/authorized_keys file in the target user's home directory. To transfer the public key use the command ssh-copy-id user@remotehost. Notice that the local user name and the remote user name do NOT have to be the same, but it is convenient to have such a setup.

Note: if you also want to use Ansible to manage the controller node, run the same command locally to copy the public key to ~/.ssh/authorized_keys. In this case the command would be like this: ssh-copy-id user@localhost

1.3 Managing Inventory

After installation is complete, you can use Ansible against remote hosts. The remote hosts that need to be managed must be defined in an inventory file. (Please note the inventory file is NOT similar to the hosts file found under /etc/hosts. It is set up completely differently; this file is very specific to Ansible.) The hosts are registered in the inventory file using their Fully Qualified Domain Name (FQDN) or IP address. The hosts may be registered more than once, but in different logical groups. This means that you can address different groups to manage desired state configurations, and a host may be a member of more than one logical group, which is quite normal. When running Ansible commands you mention both the host names and the inventory file you intend to use. You may have more than one inventory file.

For example:

ansible server1.cybg.com,server2.cybg.com -i myinventory --list-hosts

You would usually create an Ansible project directory in your home directory and put the inventory file in it. It is uncommon to use only one inventory in the whole Ansible environment. You might have many administrators accessing Ansible, and they would in turn have a number of inventory files to manage their servers.

1.4 Hands on

1.4.1 Installing Ansible

Log onto the controller node

Switch to root:

su -

Password:

Enter the password.

Enter the command:

visudo

Search for the group definition wheel. You should see a line similar to the following:

%wheel ALL=(ALL) ALL

Note the line must be active as above (not commented out with a # at the front of the line). This means that members of the wheel group will be allowed to use the sudo command and perform administrator-privileged tasks. This way there is no need to use the root account.

Now assuming you have created a user called user on both the Controller node and the managed host, add the user to the wheel group on both machines:

usermod -aG wheel user

Now we move on to installing Python, since this is a prerequisite for Ansible. On both the controller node and the managed host run the following command:

yum install -y python2 epel-release

Now let's install Ansible on the controller node:

yum install -y ansible

If you do not have FQDNs set up on these machines you might want to set up the naming using the /etc/hosts file. For example, on both the controller node and the managed host you can put the following lines in /etc/hosts:

192.168.4.200 controller.cybg.com controller
192.168.4.201 managedhost1.cybg.com managedhost1

1.4.2 Setting up SSH

On the controller node log out from root and log back in as user. Remember this user can now run sudo tasks.

Next we need to make sure that the remote host key is cached on the controller. On the controller node type in the following:

ssh managedhost1

You will be prompted: "The authenticity of this host can't be established. ECDSA key fingerprint is xxxxxxxxxxxx. Are you sure you want to continue connecting (yes/no)?"

Type in yes to proceed, and type in the password to finish caching the key in the local known_hosts file, then close the ssh connection. Since Ansible relies heavily on SSH, we do not want Ansible to prompt for host key verification halfway through a set of commands.

Now we move onto generating the public / private key for the controller node. Type in the following:

ssh-keygen

Accept all the defaults; there is no need to change the default file name or enter any passphrase at this stage. Under /home/user/.ssh you will find two new files: the private key id_rsa and the public key id_rsa.pub. Now we copy the public key to managedhost1. Type the following:

ssh-copy-id managedhost1.cybg.com

Type in yes to confirm you want to proceed with authentication. Type in the password when prompted. Notice that if you first used only the hostname and now use the fully qualified domain name, you will be prompted again to accept the name and store it in the cache. So now you have two names in the cache on the controller node. Notice that if you are going to perform tests using the all group defined in the inventory file, then you also need to ssh at least once to controller and controller.cybg.com. Essentially this must be done for every hostname and FQDN listed in a group in an inventory file, to avoid authentication and authorization prompts.

So repeat the ssh-copy-id command for the controller node so it is also set up with a public key.

ssh-copy-id controller.cybg.com

At this point we want to create a project directory for this exercise. Whenever a new Ansible project is in the works, a project directory is usually created to compartmentalize the work. Type the following on the controller node:

mkdir install

cd install

Next we create an inventory file called inventory. For example (using the host names set up earlier), it could contain:

controller.cybg.com
managedhost1.cybg.com

Now run the following command:

ansible all -i inventory --list-hosts

The above command states: select the group all (the built-in group containing every host) defined in the file inventory, which is relative to the current directory, and apply the option --list-hosts, which prints out a list of all the hosts defined in the said group.

1.5 Creating the Ansible Config File

When working with Ansible you need to pass a set of configuration options and these can be held in an Ansible configuration file. The file is called ansible.cfg and can be found in different locations on the controller node. For example:

The generic file /etc/ansible/ansible.cfg

The user specific file ~/ansible.cfg

The ansible.cfg file in the project directory (takes precedence)

It is common practice to use an ansible.cfg file in the project directory; alternatively you can specify exactly which ansible.cfg file to use by defining the $ANSIBLE_CONFIG environment variable.

It is very important that the ansible.cfg file that is used contains all the settings you need: only one ansible.cfg file is used at a time, so settings are not merged across the different locations.

To find out which ansible.cfg file is being used, run the command ansible --version

In ansible.cfg the following entries are commonly specified:

become

specifies how to escalate privileges on managed hosts (for example use sudo)

become_user

specifies which user account to switch to on the managed host (typically root)

become_ask_pass

use this to determine whether or not a password should be asked when becoming another user

inventory

specifies which inventory file to use

remote_user

specifies the name of the user account on the managed machine. This is not set by default which results in the local user name being used if not specified
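Putting these entries together, a minimal ansible.cfg in the project directory might look like this (a sketch; the inventory file name and the user name follow the earlier examples):

```ini
[defaults]
; inventory file relative to the project directory
inventory = ./inventory
; user account used to connect to managed hosts
remote_user = user

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
```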

1.6 Understanding Privilege Escalation

Ansible runs tasks on the managed hosts with the same user account as the local user. Make sure that the SSH keys are copied to the user’s SSH config on the remote host

Set remote_user in ansible.cfg to specify another user to be used

If remote_user is not specified, privilege escalation can be used

Enable it in the [privilege_escalation] section in ansible.cfg as shown below

[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False

The above settings ensure that tasks are run using the root user through the sudo command and no password prompts will appear to stop a task from proceeding.

Privilege escalation needs a sudo configuration in order for it to work. For the default Ansible account on the control node, create a sudoers file on all the Ansible managed hosts (including the controller node). A typical sudoers file for the user user would be created in /etc/sudoers.d/user:

cat /etc/sudoers.d/user

user ALL=(ALL) NOPASSWD: ALL

1.6 Running Ansible ad-hoc commands

Ad-hoc commands are Ansible commands that you can run on the command line. This is not typically how you want to use Ansible; you'll typically want to create playbooks to automate tasks against multiple managed servers. But to quickly make changes to many managed hosts, ad-hoc commands are convenient. Ad-hoc commands can also be used for diagnostic purposes, like querying a large number of hosts. In ad-hoc commands, modules are typically used.

1.6.1 Understanding Modules

A module is used to accomplish specific tasks in Ansible.

Modules can run with their own specific arguments. They are written in Python.

Modules are specified with the -m option followed by the name of the module.

Module arguments are referred to with the -a option followed by the argument name.

The default module can be set in ansible.cfg. It is predefined as the command module

This module allows you to run arbitrary Linux commands against managed hosts

As command is the default module, it does not have to be referred to using the -m option

Notice that commands run through the command module are not interpreted by a shell on the managed host, and for that reason it cannot work with variables, pipes and redirects

Consider using the shell module if you need full shell functionality.

1.6.2 Three Common Modules

command

runs a command on a managed host

shell

runs a command on a managed host through that host's shell

copy

copies a file to a managed host, or changes content in a target file on the managed host

1.6.3 Ad-hoc Command Examples

ansible all -i inventory -m command -a id

Runs the command module with the id command as its argument against all hosts in the inventory file (all is a built-in group that contains every host in the inventory).

ansible all -i inventory -m command -a id -o

Runs the same command as above, but condenses the output to a single line per host

ansible all -i inventory -m command -a env

This one fails because the command module does not run through a shell on the managed host, so shell environment handling is not available

2.2 Ansible and Windows

Windows is also supported, but using PowerShell, not SSH

PowerShell scripts can be pushed and executed

all Windows features can be managed

2.3 Running Ansible Deployments

In real life, in a server farm environment, a new server gets installed with some initial tool like Kickstart or similar. Once installed, Ansible can be used to finish the configuration and take care of multiple tasks, according to the needs of the server, for example:

Configuration of software repositories

Application installation

Configuration file modifications

Opening ports in the firewall

Starting services

2.4 Connection Plugins

Ansible uses plugins to extend what the system is doing under the hood. Connection plugins provide communication through different mechanisms, such as:

native SSH

paramiko SSH

local

chroot

docker

2.5 Understanding Modules

Modules are programs that Ansible runs to perform tasks on managed hosts. They are included in playbooks, or they are referred to when running ad-hoc commands. Ansible comes with hundreds of modules, and administrators can write custom modules (in Python). Core modules are included with Ansible and maintained by the Ansible developers. Extra modules are additional modules that are maintained by the community; this may also include external communities, such as OpenStack. Custom modules are the modules that administrators develop. The module location depends on the Linux distribution; on CentOS they are in /usr/lib/python2.7/site-packages/ansible/modules. The best place to look for documentation on the modules is the authoritative documentation on docs.ansible.com.

You can also find information by using the command:

ansible-doc -l

To get a module specific information use the following command:

ansible-doc <modulename>

Modules can be invoked using the ansible -m <modulename> command. For example:

ansible -m ping all

where all is the built-in group matching all hosts in the inventory file.

Modules can also be included in an Ansible task in a playbook. For example:

tasks:
  - name: Install a package
    yum:
      name: vsftpd
      state: latest

3 Working with Playbooks

3.1 Understanding YAML

YAML, which stands for "YAML Ain't Markup Language", is a serialization standard that was developed to represent data structures in an easily readable way. Structures are represented using indentation, and not by using braces and brackets or opening and closing tags, which is the case in many other serialization standards. Space characters are used for indentation; the only requirement is that data elements at the same level in the hierarchy must have the same indentation. Do NOT use tabs for indentation! It is common, but not required, to start a YAML file with three dashes, and to end it with three dots. This allows you to include YAML code in other files.

3.1.1 Sample YAML File

---
# example YAML file
item1:
  parameter1:
  parameter2:
    - option1
    - option2
item2:
  parameter3:
...

Note: The indentation is 2 spaces long.

3.1.2 YAML File Contents

Typically in a YAML file you will define a dictionary. A dictionary is a key/value pair, written in key: value notation. You can also use lists, which represent a list of items. List items are enumerated as - item; a space behind the - is mandatory. You can use strings as well, which can be enclosed in either double or single quotes, but quotes are not mandatory. In a multi-line string, the first line is ended by either a | or a > and the next lines are indented. To verify YAML file syntax, run ansible-playbook --syntax-check mycode.yaml.
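The constructs described above can be combined in one small example file (the key names here are made up for illustration):

```yaml
# a dictionary: key/value pairs
user:
  name: linda
  shell: /bin/bash

# a list: each item starts with "- "
packages:
  - httpd
  - vsftpd

# a multi-line string: | preserves line breaks, > folds them
description: |
  first line
  second line
```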

3.2 Creating Playbooks

3.2.1 Playbook Structure

A playbook is a collection of plays. Each play defines a set of tasks that are executed on the managed hosts. Tasks are performed using Ansible modules. Ordering is important: plays and tasks are executed in the order they are presented. A playbook defines a desired state. Ansible playbooks are idempotent. This means that playbooks will not change anything on a managed host that already is in the desired state. Avoid using modules like command, shell and raw, as the commands they use are not idempotent by nature. Multiple playbooks may be defined; each playbook will have its own YAML file.

3.2.2 Playbook Contents: The Task Attribute

There are different types of attributes that may be used, depending on the included Ansible modules. The most important attribute is the tasks attribute:

tasks:
  - name: run service
    service: name=vsftpd enabled=true

In the above example, the - marks the beginning of a list of attributes. The service item is indented at the same level as name, which identifies it as another attribute of the same task. If multiple tasks are defined, the first attribute of each task starts with a -.

3.2.3 Playbook Contents: Other Attributes

These are the most common generic attributes:

name:

Used to assign a label to a play

hosts:

Uses patterns to define on which hosts to run a play

remote_user:

Overwrites the remote_user setting in ansible.cfg

become:

Overwrites the become setting in ansible.cfg

become_method:

Overwrites the become_method setting in ansible.cfg

become_user:

Overwrites the become_user setting in ansible.cfg

3.2.4 Formatting Playbooks

If playbooks are getting larger, playbook formatting becomes more important to increase readability. Imagine a module that is invoked with multiple arguments. Multi-line formatting allows you to specify all arguments on multiple lines. Dictionary formatting specifies all arguments on different lines, using indentation. Block formatting allows you to group tasks. As you can see, there are multiple ways a playbook can be formatted.
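As an illustrative sketch (reusing the yum and service modules from earlier examples), here is the same task in multi-line dictionary format, plus a block grouping related tasks:

```yaml
tasks:
  # dictionary formatting: each argument on its own indented line
  - name: Install a package
    yum:
      name: vsftpd
      state: latest

  # block formatting: group tasks that belong together
  - name: Configure the ftp service
    block:
      - service:
          name: vsftpd
          enabled: true
      - service:
          name: vsftpd
          state: started
```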

3.2.5 Running Playbooks

Playbooks are executed with ansible-playbook

Use ansible-playbook --syntax-check simple.yml for a syntax check

Use ansible-playbook -C simple.yml for a dry run

Use ansible-playbook --step simple.yml for a step-by-step execution

where simple.yml is an example playbook
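A minimal simple.yml to try these commands against could look like this (a sketch based on the earlier examples):

```yaml
---
# one play, run against the built-in group "all"
- name: Example play
  hosts: all
  tasks:
    - name: Ensure vsftpd is installed
      yum:
        name: vsftpd
        state: latest
```

Running ansible-playbook -C simple.yml then reports what would change without actually changing it.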

4 Working with Variables, Inclusions and Task Control

4.1 Working with Variables

Using variables makes it easier to repeat tasks in complex playbooks; they are convenient for anything that needs to be done multiple times, such as creating users, removing files, or installing packages. These are typically tasks where you do not want to hard-code the specific names of users, files or packages. A variable is a label that can be referred to from anywhere in the playbook, and it can contain different values, referring to anything. Variable names must start with a letter and can contain letters, underscores and numbers. Other characters (such as - or #) are not valid.

Variables can be defined at different levels. They can be defined with a different scope such as:

Global scope: these variables are set from the command line or from the Ansible configuration file

Play scope: these variables relate to the play and related structures

Host scope: variables are defined on groups and individual hosts (this can be done through an inventory file)

Variables can be defined in a playbook or included from external files. See example below:

- hosts: all
  vars:
    user: linda
    home: /home/linda

When using variable files, a YAML file needs to be created to contain the variables. The file uses a path relative to the playbook path. This file is called from the playbook, using vars_files:

- hosts: all
  vars_files:
    - vars/users.yml

Looking at the contents of vars/users.yml inside the project directory:

user: linda
home: /home/linda
user: anna
home: /home/anna

4.1.1 Using Variables

To use variables in a playbook, the variable is referred to using double curly braces. If the variable is used as the first element to start a value, you need to use double quotes around the curly braces too. See example below:

tasks:
  - name: Creates the user {{ user }}
    user:
      name: "{{ user }}"

Notice the different uses of the variable user.

4.2 Managing Host Variable and Group Variables

A host variable is a variable that applies to one host that is defined in the inventory file. A group variable applies to multiple hosts as defined in a group in the inventory file. These variables may be defined in the inventory file, but that method is deprecated. The recommended method is to use group_vars and host_vars directories. So within the project directory, which contains the inventory file, create directories group_vars and host_vars.

If for example you have a host group called webservers that is defined in the inventory file, create a file with the name group_vars/webservers and in that file define the variable. This is the same for individual host variables: create a file with the name of the host and put it in host_vars.

At any time, variables can be overridden from the command line using the -e "key=value" option to the ansible-playbook command.

4.2.1 Demo

Create a directory vardemo.

Under vardemo create an inventory file inventory with the following contents:

[webservers]
server1.example.com
server2.example.com

[ftpservers]
server3.example.com
server4.example.com

Under vardemo create a subdirectory group_vars. Under vardemo/group_vars create a file webservers with the following variable:

package: httpd

Under vardemo/group_vars also create a file ftpservers with the following variable:

package: vsftpd

Now with this setup you can define a generic playbook that uses the variable package, which will load the httpd string when used in relation to the webservers group and the vsftpd string when used in relation to the ftpservers group. This way we are splitting static content from dynamic content, making playbook maintenance and reuse easy.
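A sketch of such a generic playbook (the play name is made up; it assumes every targeted host belongs to a group that defines package):

```yaml
---
- name: Install the package defined per group
  hosts: webservers:ftpservers
  tasks:
    - name: Ensure the group-specific package is present
      yum:
        name: "{{ package }}"
        state: latest
```

Run against the webservers group this installs httpd; against the ftpservers group it installs vsftpd.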

4.3 Understanding Arrays

An array is a variable that defines multiple values, including specific properties. You refer to an element using dot notation, for example users.linda.first_name, which would be defined as follows in vars/users.yml:

users:
  linda:
    first_name: linda
    last_name: thomsen
    home_dir: /home/linda
  anna:
    first_name: anna
    last_name: jomes
    home_dir: /home/anna
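A hypothetical task referencing the array above with dot notation:

```yaml
tasks:
  - name: Create the home directory for linda
    file:
      path: "{{ users.linda.home_dir }}"
      state: directory
```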

4.4 Understanding Facts

A fact contains discovered information about a host. Facts can be used in conditional statements to make sure certain tasks only happen if they are really necessary. The setup module is used to gather fact information. You can run Ansible on a host to gather the facts. For example:

ansible managed1.ansible.local -m setup

Facts provide a lot of information. Filters can be applied on the level 1 information that is displayed by the facts. Level 1 is the first indentation level as shown when displaying facts. To limit the output, use a filter by passing the -a 'filter=...' option, for example:

ansible managed1.ansible.local -m setup -a 'filter=ansible_kernel'

The result of the filter can then be used to assess conditionals. Say for example in a playbook: proceed to the next step and install a package provided the kernel version is above a certain level.

4.4.1 Defining Custom Facts

You can also create and work with custom facts. Custom facts can be created by administrators to display information about a host. Custom facts must be defined in a file using the INI or JSON format and the .fact extension. The fact files must be stored in the /etc/ansible/facts.d directory and will be shown as an "ansible_local" fact. Below is an example of a .fact file (a hypothetical /etc/ansible/facts.d/localfacts.fact in INI format):

[general]
web_package = httpd
ftp_package = vsftpd

4.5 Using Inclusions

Inclusions make it easy to create a modular Ansible setup. Tasks can be included in a playbook from an external YAML file using the include directive. Using task inclusion makes sense in complex setups, as it allows for the creation of separate files for different tasks, which can be managed independently. If task inclusions are used, the main variables would need to be set in the master Ansible file, whereas the generic tasks will be defined in the included files. Variables can be included from a YAML or JSON file using the include_vars directive. Using this method overrides any other method of defining variables. If you want to do this, make sure the include_vars happens before the actual usage of the variables. Notice that all these different ways of working with variables can make it difficult to find out which one is going to be effective.

4.5.1 Variable Precedence

With all these different methods of defining variables, it is good to know about precedence. Check the following order:

Variables defined with include_vars

Variables with a global scope (set from the command line or Ansible configuration file)

Variables defined by the playbook

Variables defined at the host level

5 Using Flow Control, Conditionals and Jinja2 Templates

5.1 An Introduction to Flow Control

Flow control works with loops and conditionals to process items. A loop is used to process a series of values in an array (like creating multiple users, installing multiple packages, etc.). A conditional is a task that is executed only if specific conditions are met (for example using facts, like "{{ min_memory }} < 128").

5.1.2 Understanding Loop Types: Simple Loops

A simple loop is just a list of items that is processed through the with_items statement, for example:

- yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - nmap
    - net-tools

Note: The item variable in the above example follows from the with_items loop type.

A more complex items list is defined in the multi-item array below:

- name: create users
  hosts: all
  tasks:
    - user:
        name: "{{ item.name }}"
        state: present
        groups: "{{ item.groups }}"
      with_items:
        - { name: 'Linda', groups: 'wheel' }
        - { name: 'Lisa', groups: 'root' }

5.1.3 Understanding Loop Types: Nested Loops

A nested loop is a loop inside a loop. When these are used, Ansible iterates over the first array, and applies the values in the second array to each item in the first array. This is useful if a series of tasks needs to be executed on items in the array. For example:
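A sketch using the with_nested statement (the user and group names are made up):

```yaml
- name: Add each user to each group
  user:
    name: "{{ item[0] }}"
    groups: "{{ item[1] }}"
    append: yes
  with_nested:
    - [ 'linda', 'anna' ]
    - [ 'wheel', 'users' ]
```

Here Ansible iterates over the first array and applies every value in the second array to each item in the first array.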

5.2 Understanding Conditionals

Conditionals can be used to run tasks only on hosts that meet specific conditions. In conditionals, operators are used, such as string comparison, mathematical operators and booleans. Conditionals can look at different items for validation, for example values of registered variables, Ansible facts and output of commands. See the table below:

Equal (==): "{{ max_memory }} == 1024"
Less than (<): "{{ min_memory }} < 128"
Greater than (>): "{{ min_memory }} > 256"
Less than or equal to (<=): "{{ min_memory }} <= 512"
Greater than or equal to (>=): "{{ min_memory }} >= 1024"
Not equal (!=): "{{ min_memory }} != 512"
Variable exists (is defined): "{{ min_memory }} is defined"
Variable does not exist (is not defined): "{{ min_memory }} is not defined"
Variable is set to yes, true or 1 (the bare variable): "{{ available_memory }}"
Variable is set to no, false or 0 (not): "not {{ available_memory }}"
Value is present in a variable or array (in): "{{ users }} in users['db_admins']"

5.2.1 Using the When Statement

The when statement is used to implement a condition. For example:

- name: Install the mariadb package
  package:
    name: mariadb
  when: inventory_hostname in groups["databases"]

Multiple conditions can be combined with the and and or keywords, or grouped with parentheses. For example:
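A sketch combining conditions (the distribution and memory facts are standard Ansible facts; the threshold is made up):

```yaml
- name: Install mariadb on larger CentOS or Red Hat hosts
  package:
    name: mariadb
  when: >
    (ansible_distribution == "CentOS" or
     ansible_distribution == "RedHat") and
    ansible_memtotal_mb >= 512
```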

5.3 Understanding Jinja2 Templates

Jinja2 templates are Python-based templates that are used to put host-specific data on hosts, using generic YAML and Jinja2 files. Jinja2 templates are used to modify files before they are sent to the managed host. Jinja2 can also be used to reference variables in playbooks. As advanced usage, Jinja2 loops and conditionals can be used in templates to generate very specific code. The host-specific data is generated through variables or facts.

Below is an example of a Jinja2 template, called motd.j2:

This is the system {{ ansible_hostname }}.
Today it is {{ ansible_date_time.date }}.
Only use this system if {{ system_owner }} has granted you permission.

The variables referred to above are set in a YAML file, in this case MOTD.yml, which in turn refers to the motd.j2 template.
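MOTD.yml could then look something like this (a sketch; the system_owner value is an assumption):

```yaml
---
- name: Deploy the message of the day
  hosts: all
  vars:
    system_owner: admin@cybg.com
  tasks:
    - name: Install /etc/motd from the Jinja2 template
      template:
        src: motd.j2
        dest: /etc/motd
```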

Web elements are the foundation of all that is visible through a browser. For more information check W3Schools. A web page is made up of HTML elements: buttons, links, labels, input fields, and so on. Selenium WebDriver addresses and recognizes these as Web Elements. These Web Elements are what Selenium interacts with to interrogate and manipulate a web page.

A Test Automation Developer uses Selenium to locate, interrogate and interact with these Web Elements to produce a desired state, and eventually feeds the result to a set of verifications to pass or fail a test. So Selenium is part of a bigger picture, but a very essential part.

Moving On . . .

You might ask yourself: "How do I know what to look for? First I need to see what I have, and then interrogate it." You are right: we need to see the source of a web page, figure out which element we want to interact with, locate it, make sure we got the right one, and then act on it. We need to be able to do this with an accurate source, not an old, stale read of the page, because a page can be very dynamic and asynchronous in nature. Luckily there is something you can do about it and avoid the hassle of having to comb through the hundreds of lines of elements in the page source.

I will show you two ways of doing this using some tools. The first method is no longer supported; however, I will still show you how, just in case you are working for a company, such as, say, a bank, where you are restricted in the versions of tools you are allowed to use.

The second method is the way to go: although not as easy as the first method, it is the currently supported method, and that is what we are moving onto.

We will now go ahead and install some additional tools.

Install the latest version of Firefox ESR

You might be asking yourself why Firefox ESR and not the normal Firefox. Well, Firefox ESR, as its name suggests, is the Extended Support Release. This is what companies use for their applications. They need a stable browser to act as a front end to the systems that they use. If the browser is always accumulating new functionality, there is a risk of encountering backward compatibility issues, and the result is broken legacy systems.

Therefore for this exercise we will use Firefox ESR as the basis for other important plugins that will help us identify web elements.

Notice the multitude of supported languages. I tend to download the British English localised version (notice how I spelled localised not localized lol). At this point select your language preference and download the 64 bit version for your operating system. Since I am installing this on Windows I am downloading the 64 bit Windows version.

Go ahead and install it!

Installing Firebug

Ok, once it is installed we are going to install our first helper plugin. Firebug integrates with Firefox, and then you can edit, debug, and monitor CSS, HTML, and JavaScript live in the web pages you navigate to. Although this is no longer maintained and we should be moving onto Firebug.next, this is what the old test automation developers grew up with, so it is only fair to cover it. I still use it today.

Click on the Firebug 2.0.19 link and you will be redirected to the following page

Click on the Add to Firefox button

The add-on starts downloading.

When the add-on is successfully downloaded, the Install button is enabled. Click it to complete the installation.

If you navigate to the Extensions in your browser notice Firebug is now installed and enabled.

Navigate to any page and click on the bug.

Firebug opens at the bottom of the page and you can investigate the loaded DOM for the page.

Installing FirePath

Navigate to https://addons.mozilla.org and in the search bar, search for FirePath. Once it is located click on the Add to Firefox button

Once installed Firefox might ask you to restart. Do that. Then navigate to a page of your choice and click on the Firebug button situated on the top right next to the Search bar. Notice the new FirePath tab.

Installing Firefinder

Navigate to https://addons.mozilla.org and in the search bar, search for Firefinder. Once it is located click on the Add to Firefox button

Once installed Firefox might ask you to restart. Do that. Then navigate to a page of your choice and click on the Firebug button situated on the top right next to the Search bar. Notice the new Firefinder tab.

A Demo to look for some elements

Ok so now that we have the tools installed, how do we go about using them? Let’s use my blog site as an example: I would like to be able to address the Search input box and the Search button. How do I go about it? Easy!

Click the Firebug button

Once Firebug opens at the bottom click the inspect button

Next move the mouse to the desired web element, in this case the search input box and click it.

Notice in the HTML tab that the relevant line representing the web element is automatically found and highlighted.

Notice the type of element which is an input, the class name and the other attributes. These can all be used in identifying a web element.

Click the FirePath tab, then click the inspect button again, and then click the search text box. Notice the xpath that is automatically created to specifically point to the Search text box.


Notice that the xpath itself is very specific and can become brittle if this page changes in the future. To avoid such situations it is recommended to simplify and generalize the xpaths used in your tests. Let’s try:

from

.//*[@id='search-4']/form/label/input

to

//aside[@id='search-4']/form/label/input

The above returns one matching node which is the same as the Search text box. If you hover on the matching result you can see what web element gets highlighted.
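As a side note, you can experiment with xpaths outside the browser too. Below is a minimal, self-contained sketch using the JDK's built-in javax.xml.xpath package; the little markup string is my own simplified stand-in for the page's search widget (an assumption, not the real page source), but it shows that both the FirePath-generated xpath and the generalized one match exactly one node:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        // Simplified stand-in for the blog's search widget markup (hypothetical).
        String html =
            "<body>" +
            "<aside id='search-4'>" +
            "<form><label><input type='search'/></label>" +
            "<input type='submit' value='SEARCH'/></form>" +
            "</aside>" +
            "</body>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));

        // The xpath FirePath generated, and the generalized version.
        String generated = ".//*[@id='search-4']/form/label/input";
        String general = "//aside[@id='search-4']/form/label/input";

        XPathFactory xf = XPathFactory.newInstance();
        NodeList a = (NodeList) xf.newXPath().evaluate(generated, doc, XPathConstants.NODESET);
        NodeList b = (NodeList) xf.newXPath().evaluate(general, doc, XPathConstants.NODESET);

        // Both match the single Search text box.
        System.out.println("generated matches: " + a.getLength());
        System.out.println("generalized matches: " + b.getLength());
    }
}
```

The takeaway is that the generalized form is just as precise here, while being less tied to the exact nesting FirePath recorded.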

Ok let’s move on to addressing the SEARCH button.

Click the inspect button

Next move the mouse to the SEARCH button and click it.

Notice the xpath that is generated:

.//*[@id='search-4']/form/input

Let’s change it a bit. Let’s try:

//aside[@id='search-4']/form/input

Again this returns one matching node, which is the SEARCH button.

Bye Bye Firebug… Hello Firefox Dev Tools

Since the Firebug add-on is now in maintenance mode and nearing extinction, we need to look at the new way of doing things. It is time to get acquainted with the built-in Firefox dev tools and figure out how to retrieve the same information to locate web elements.

We will use my blog again for this example:

Navigate to my blog

Press F12 to open up DevTools.

Select the Console tab and type in:

$x("//aside[@id='search-4']/form/input")

Press enter and notice the result. The Console returns an array of one element in it, which is what we want.

This turns out to be the SEARCH button. If you hover over the element in the array you get the following visualisation.

That’s it! This is the new way of checking for elements, in this case using xpaths (my favourite). I know, I know, it is not as helpful as FirePath, but in reality FirePath’s default result tends to be a lazy way for a developer to get the gist of the xpath before cleaning it up. You can do the same by looking at the HTML directly and then testing your xpath with $x("put your xpath here") in the console in the Firefox dev tools.

So there you go. Do spend time familiarising yourself with xpaths. There are numerous resources out there. Remember Google is your friend.

If you are in the business of automating web sites, testing all the bits and bobs on pages, validating workflows, and pushing the boundaries of stress testing application servers, maybe even extending to denial-of-service tests, then look no further than Selenium WebDriver.

Selenium is very powerful and is backed by the major browser vendors. It is open source and does not come with any license fees. These are big pluses for employers, professionals, and students alike, who can freely download the library and start using it with their preferred language. Have a look at http://www.seleniumhq.org and navigate through the site to check whether your language of choice is supported, then get on with automating! My language of choice has always been Java. If your language of choice is C#, Python, Ruby … then go ahead, what are you waiting for?

In this article I will describe how to set up a solid development environment that is scalable and ready for heavy-duty automation. I will describe in detail how to install the latest version of the Java JDK and Eclipse IDE for Java EE, which includes the very powerful Maven integration. I will then install TestNG, which will be used to organize and drive our tests, and finally download the different WebDriver drivers for the various browsers of interest.

Finally I will write the first Selenium class example and execute against a browser.

I am very excited about the capabilities of WebDriver, especially Remote WebDriver and Selenium’s Grid. I personally use only this type of setup for my tests at work. It is ready to scale up and also easy to integrate with DevOps tools such as Jenkins. I will cover Remote WebDriver and Grid setup in a separate article. Jenkins deserves its own set of articles.

One step at a time …

Downloading and Installing the Java JDK:

Open your browser and navigate to the site java.oracle.com. On the Top Right hand side locate the link to the latest Java JDK and select it.

Click on the Download button under the JDK. We want the development kit not just the JRE.

Next accept the license agreement and select the right binary for your OS (in my case it is the Windows x64 installer).

Once the binary is downloaded successfully, locate it and execute it. Click Yes button for any security prompts and go ahead with the installation.

The installer starts and displays an interactive window. Click the Next button.

The next screen lets you select optional features. I tend to leave everything at the default values. Click the Next button.

The installer starts the background configuration. Then it prompts you to select the destination folder. I leave mine as the default and click Next.

Then the installer starts doing its job. Just sit tight and wait for it to complete.

Once the Installer states it has completed the installation, click the close button.

The next task is very important: this is how Eclipse (my favourite IDE) will know where to find the JDK and be able to launch itself. We need to set a system environment variable, JAVA_HOME.

From Explorer, select Computer, right-click, and select Properties from the menu.

This will open the System window. Choose the Advanced system settings option on the left hand side.

In the new window click Environment Variables.

We want to add a system variable so click the New… button at the bottom of the window.

Enter the following values:

Variable name: JAVA_HOME

Variable value: C:\Program Files\Java\jdk1.8.0_141

Note: if you have a different (later) version of Java, or if you have changed the default installation path, this must be reflected in the value field.

Click OK to accept the new system variable. Click OK to close the Environment Variables window.

And finally, to verify the setup, open a new command prompt and type echo %JAVA_HOME%; the JDK path should be displayed. Also type java -version: you should not get any errors, and the version of Java is displayed on the screen.
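If you prefer a programmatic check, here is a tiny throwaway Java class (hypothetical, not part of the setup itself) that prints the running JVM's version and the JAVA_HOME value as the current process sees it:

```java
public class CheckEnv {
    public static void main(String[] args) {
        // The java.version property comes from the running JVM itself.
        String version = System.getProperty("java.version");
        System.out.println("Java version: " + version);

        // JAVA_HOME is read from the environment; it will be null if the
        // variable was not set, or if this process was started from a
        // command prompt opened before the variable was created.
        String home = System.getenv("JAVA_HOME");
        System.out.println("JAVA_HOME: " + (home == null ? "<not set>" : home));
    }
}
```

Remember that environment variable changes only apply to newly opened command prompts, which is the most common reason this check appears to fail.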

So now that we have Java installed, we move on to the next part of the setup, which is the IDE. In this instance I am installing Eclipse. As of this article, Oxygen is the latest version, so I will go ahead with that.

Select the Java EE version. In my case I am downloading the x64 version for Windows.

Hit the Download button.

Save the file to your downloads location. We will extract it later.

Once the download completes, extract the zip file to the Installation location.

I tend to keep my installed folders in a parent folder called Apps. Do what works best for you.

Click the Extract Button.

Once the Extract is complete we are ready to work on customizing eclipse.

Customizing Eclipse:

To launch Eclipse, navigate to the installed folder and launch the eclipse executable.

Click the Run button. And off it goes.

Accept the default workspace provided and click the Launch button

The IDE loads and you are presented with a welcome page. Close the page and let’s get down to business.

Installing TestNG:

TestNG will be our mechanism / framework to group and invoke tests. We can do loads of beautiful test customizations. So let’s go ahead and install it. In the IDE, navigate to the Help menu and select Eclipse Marketplace.

In the Find field enter the string testng and hit Enter. The listing updates and shows the TestNG plugin for Eclipse. Click the Install button.

Click the Confirm button.

Accept the license and click the Finish button.

Notice the installation progress in the bottom right hand corner of the IDE.

When you get a warning about unsigned software click the Install anyway button.

Click the Restart Now button.

Once the Eclipse IDE restarts go to the Window menu and select Preferences

Notice the new addition of a TestNG entry on the left hand side of the window. This means TestNG has installed successfully.

Click the Cancel button to dismiss the window.

Setting up a Maven Project and configuring the POM:

The beauty of installing Eclipse IDE for Java EE is that it comes with the Maven plugin already built in. This is ideal, since Maven is one of the most popular tools to keep external libraries in sync with your Java project. Most software companies use the Maven repository to store and regulate software library releases. Since we will be using the TestNG and Selenium libraries, we might as well take advantage of Maven.

So first thing is to create a new Maven project.

In your Eclipse IDE select File menu -> New -> Maven Project

Click the Next button.

Leave the default archetype and click the Next button

Fill in the Group Id with your company’s URL name in reverse. Fill in the project name in the Artifact Id. In my case I used the values com.code-test-automate and AutomateMyWay. Click the Finish button.

You will see a new project listed in the Project Explorer. In my case it is AutomateMyWay.
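The exact pom.xml edits are not reproduced here, but as a hedged sketch, the dependencies section that pulls in the seleniumhq and testng artifacts would look roughly like this (the version numbers are illustrative only; pick the latest ones from the Maven repository):

```xml
<dependencies>
  <!-- Selenium WebDriver Java bindings -->
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.4.0</version>
  </dependency>
  <!-- TestNG, used later to organize and drive the tests -->
  <dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>6.11</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

This block goes inside the &lt;project&gt; element of the generated pom.xml; once saved, Maven resolves and downloads both libraries.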

Once the pom.xml is saved, select Project menu -> Clean. This will force the new dependencies to be downloaded from the Maven repos.

Click the Clean button

Now go to your user directory and look for the .m2 directory. Notice the two new directories, one for seleniumhq and one for testng.

Now before we delve into the coding world of automation we need to make sure we have the browser and associated driver software. I am going to use Internet Explorer, which comes with the OS. If you want to use a different browser, consult the http://www.seleniumhq.org pages and go ahead and download the appropriate driver for your browser of choice.

I decided to save mine in a directory structure that is easily maintainable. I chose c:\Drivers\WebDriver\IE. When you have selected the destination click the OK button.

And here we have the driver ready for us to use and abuse.

Writing your first automation program

Time to code and run some magic. Under the project structure, in src/main/java, locate App.java and double-click it. It will open in the main view. Select all the code and replace it with the following:
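The original listing is not reproduced here, but based on the behaviour described below (launch Internet Explorer, navigate to this site, wait five seconds, close), a minimal sketch would look something like this. Note my assumptions: the driver path matches the c:\Drivers\WebDriver\IE folder chosen earlier, the URL is a stand-in for my blog address, and the selenium-java library from the pom must be on the classpath:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class App {
    public static void main(String[] args) throws InterruptedException {
        // Tell Selenium where the IE driver executable lives
        // (the folder chosen earlier in this article).
        System.setProperty("webdriver.ie.driver",
                "c:\\Drivers\\WebDriver\\IE\\IEDriverServer.exe");

        // Launch Internet Explorer.
        WebDriver driver = new InternetExplorerDriver();

        // Navigate to the site (substitute the URL you want to open).
        driver.get("http://www.code-test-automate.com");

        // Wait 5 seconds so we can see the page, then close the browser.
        Thread.sleep(5000);
        driver.quit();
    }
}
```

Always call quit() rather than just letting the program end, otherwise the driver process can linger in the background.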

Provided you did not make any typing errors, you should be ready to run this. Right-click on App.java and select Run As -> Java Application.

The program executes. It launches Internet Explorer and navigates to this website! After 5 seconds it closes the browser and the program terminates.

This concludes our first run! You are now set up and ready to start exploring the world of Selenium WebDriver. In the next posts I will introduce TestNG and we will start writing test scripts that run automation tasks with Selenium WebDriver.

This is my very first post! YAY!!! My name is Luzju Cardona; my mother always called me Luallan, everybody else calls me Lu. I am an automation techie and love to see things happening like magic! So I have finally decided to create a site to put all the stuff that I learned during my automation feats. The idea behind this site is to share the knowledge gained, the pains I went through to get automation off the ground, and the challenges I faced and still face on customer projects, hopefully making this a meaningful one-stop shop for your automation queries.