Prerequisites

How to start the JMeter GUI

Go to the folder where you downloaded the binaries and run `jmeter.bat` from the command line. Or simply type `jmeter.bat` anywhere, if JMeter's `bin` folder is in your PATH.

Do not run performance tests from the GUI. Use it only for design and debugging purposes, with a low load.

How to run JMeter from the command line

jmeter -n -e -l appLog.csv -o appReport -t app.jmx

`-n` run in CLI (non-GUI) mode

`-e` generate the HTML report at the end of the run; it requires `-l`

`-l` path and name of the results (log) file

`-t` path to the test plan (`.jmx`) file

`-o` output folder for the generated report; it must be empty or not exist yet
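Putting the flags together, a typical non-GUI run could look like this (note that JMeter refuses to generate the report into an existing non-empty folder, so it is cleared first):

```shell
# clear any previous report so that -o can write into the folder
rm -rf appReport

# non-GUI run: results logged to appLog.csv, HTML report in appReport/
jmeter -n -t app.jmx -l appLog.csv -e -o appReport
```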

Record script

Use the HTTP(S) Test Script Recorder on port 8000.

On the Root > Add > Non-Test Elements > HTTP(S) Test Script Recorder

Use Firefox and set its proxy to localhost:8000.

Record under the “HTTP(S) Test Script Recorder”, then copy the recorded steps into each “Recording Controller”.

On a Thread Group > Add > Logic Controller > Recording Controller

Record each step with a different label

HTTP Sampler settings: Prefix

Update each step with a meaningful step name.

Avoid “Retrieve All Embedded Resources” to keep things simple.

Avoid “Redirect Automatically”.

Allow “Follow Redirects”.

Allow “Use KeepAlive”.

Thread Group structure

“HTTP Request Defaults”: to set the default root URL

On a Thread Group > Add > Config Elements > HTTP Request Defaults

“HTTP Cookie Manager”: to handle cookies, reset at each iteration

On a Thread Group > Add > Config Elements > HTTP Cookie Manager

“User Parameters”: to define the variables per user

On a Thread Group > Add > Pre Processors > User Parameters

“Debug Sampler”: to dump all the variables for debugging purposes

On a Thread Group > Add > Sampler > Debug Sampler

“View Results Tree”: to see the details of each step

On a Thread Group > Add > Listener > View Results Tree

“Summary Report”: to see the metrics

On a Thread Group > Add > Listener > Summary Report

To catch a value in a response

Use the JSON Extractor: it gets a value from a JSON response using a JSON Path expression and assigns it to an existing variable (“JMeter Variable Name to use”) or to new variables (“Names of created variables”), with multiple names separated by semicolons.
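As a worked example, with a hypothetical login response the extractor fields could be filled in as follows (all names below are illustrative):

```
Sampler response:            {"token":"abc123","user":{"id":42}}
Names of created variables:  authToken;userId
JSON Path expressions:       $.token;$.user.id
Resulting variables:         ${authToken} = abc123 and ${userId} = 42
```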

I recently had some trouble with an Angular interface, so here is an interesting workaround I found.

Scenario 1: you have a list of checkboxes to click. They are not hidden per se, but sit behind another element, like a pop-up window or a menu button, while remaining partially visible to a human user.
The workaround: move the checkbox to a visible position on the page. How?
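A minimal Selenium/Java sketch of that workaround (the locator and style values are assumptions for illustration): use a JavascriptExecutor to pin the checkbox to a visible corner of the page, above everything else, then click it.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class CheckboxWorkaround {
    public static void clickHiddenCheckbox(WebDriver driver) {
        // Hypothetical locator: adapt it to the real Angular page.
        WebElement checkbox = driver.findElement(By.id("option-1"));
        // Move the element to the top-left corner, above any overlay.
        ((JavascriptExecutor) driver).executeScript(
            "arguments[0].style.position='fixed';"
          + "arguments[0].style.top='0'; arguments[0].style.left='0';"
          + "arguments[0].style.zIndex='9999';", checkbox);
        checkbox.click();
    }
}
```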

Business case

When a customer's website is critical for their business, many users interact with it. Even with a content review process, some non-compliant content or issues may be missed. On top of that, pictures sometimes get less attention than the text.
But how could we test picture content?

Quality objectives

What are the quality objectives for the pictures displayed on the website? Here is a non-exhaustive list:

The objects in the picture are relevant to its description.

Certain types of object content are avoided in the pictures.

The picture does not contain unauthorised logos.

The people in the picture look happy enough.

The picture colours follow the graphic charter.

The titles in the picture are correctly spelled.

What are the technology solutions to achieve this goal?

There could be several solutions, but let's narrow it down to one particular solution that we are working on.

We started from a test factory based on Serenity, Cucumber and Selenium.

Depending on the test scenario, when a picture needs to be validated, we send it to a service that validates its content against acceptance criteria such as the quality objectives defined above.

How does this picture validation work?

The service is based on the Google Vision API, which can identify objects in a picture. But this API is not perfect and only provides a relevancy percentage for each candidate. To be accurate, the AI needs some context, and that is what we provide so it can reach a better relevancy and a final answer.

This service solution is not free: Google charges for every API transaction. So a caching process also needs to be taken into account when the same picture with identical content is analysed several times.
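A minimal sketch of such a caching layer (the class and method names are ours, not a real API): key the cache on a hash of the image bytes, so an identical picture is only sent, and billed, once.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a cache keyed on the image content, so that the same picture
// is only sent (and billed) to the Vision service once.
class VisionResultCache {

    private final Map<String, String> cache = new HashMap<>();

    // SHA-256 of the raw bytes: identical pictures share one cache entry.
    static String key(byte[] imageBytes) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(imageBytes);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns the cached analysis, or calls the (billed) service and stores the result.
    String analyse(byte[] imageBytes, Function<byte[], String> callVisionApi) {
        return cache.computeIfAbsent(key(imageBytes), k -> callVisionApi.apply(imageBytes));
    }
}
```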

This Google API relies on machine learning and Google's big data. It is a very popular service, even used by the US Air Force to identify objects in pictures taken by drones or satellites. Every day it becomes more and more “clever”. That is why we chose this API to build our service for picture object identification.

The identified objects can include human constructions, human faces and their expressions, animals, plants, and much more.

AI or not AI

Some AI “experts” might disagree with this, but we want to keep it simple. For us, AI means “a branch of computer science dealing with the simulation of intelligent behavior in computers” (Merriam-Webster). The Google Vision API builds on Google's big data and machine learning to simulate this human behaviour, so for that reason we consider it AI.

AI service release

We are still working on this service. It is in the alpha stage and we are preparing an initial release for Q1 2019; we are now in an optimisation and testing phase. Stay tuned for more info on Twitter @fanaticaltest.

What’s new in 2.0?

We have removed our own framework and adopted the Serenity BDD framework instead, in order to focus on test delivery rather than reinventing the wheel. This framework also has so many contributors that it keeps itself up to date.

The factory is also delivered with a Gradle build that includes some example tasks to manage several levels of tests.

Last but not least, we provide a Docker container for the Selenium agent in order to have a full Selenium Grid.

On Thursday 30th Nov 2017, with Itecor, we released for one of its customers (in the health industry) a tool to set up a Master Test Plan (MTP). It allows defining a test strategy, the requirements and the test cases. The tool can also export the requirements and test cases to TestLink.

With this tool, the customer will organise a standardised test strategy across hundreds of projects. With this MTP, all test management will be centralised in a TestLink instance that will also handle the test campaigns and define the automated tests.

The tool also has a wizard that helps the project manager and the product manager define a test strategy. The main constraint in this implementation is the low level of maturity in test management; this tool should take the organisation to the next maturity level.

The next step is now to define a global automated test strategy and implement Tosca as the automated testing tool.

On Friday 17th of November, with Itecor, we released for one of its customers (in the food and beverage industry) an IoT test lab. It allows automated tests to run continuously against IoT devices.

In the DevOps model there are many new test challenges. One of them is automating tests with IoT devices without a human interacting with the device: everything is handled by a custom controller for the IoT devices.

In terms of architecture there are plenty of ways to connect to an IoT device; at least there are:

Devices connected to a mobile

Devices connected to a desktop or a laptop

Devices connected to a cloud solution

Today we will talk about IoT devices connected to a mobile. How are these devices connected?

Mainly they use Bluetooth. If they use Wi-Fi, then they go through a cloud solution as a proxy, which means we can categorise them as devices connected to a cloud solution. We will discuss how to test those in a future article.

Before exploring how to test them automatically, let's give a few examples of IoT devices connected to a mobile:

Watch connected to a mobile (Apple Watch, FitBit, etc.)

Beverage cooler and Freezer

Coffee machine

Construction tools

Healthcare equipment

One last point before starting: we will cover the software part of the test here. The hardware is not covered in this article.

Challenges

The first challenge is how to get feedback from the IoT devices. IoT devices usually have a very simplified interface with a very limited scope of functionality, but when we test we need access to a bit more than that: detailed logs, feedback from functions that return nothing, and a notification when the device changes state. Usually the hardware and firmware vendor needs to provide a board that simulates the hardware while running the real firmware, exactly as on a real device. In fact, what we test here is whether the firmware reacts properly as per the specification, while the firmware interacts with the board as if it were real hardware.

When it comes to the mobile phone itself, we need to cover a huge number of device versions (hardware and OS). And because of the way Bluetooth works, behaviour can sometimes vary between phone models.

There is no end-to-end solution for this type of architecture. You will need to take what already exists and what you have developed, and put them together.

Architecture and test

Here is an example of such an architecture and the tools used to automate the tests.

To test this architecture, we could use a Java test factory based on Appium, Selenium and Rest-Assured. This factory could be driven by BDD (behaviour-driven development) with Cucumber to define the test cases. The only missing part is the machine controller.

As mentioned before, the main challenge is to set up a hardware emulator in order to test the real firmware with full access to all input and output logs. This emulator is handled by a custom-developed machine controller, which interfaces with the device through a serial port and is expected to parse the MCP messages.
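The source does not define the MCP frame layout, so as an illustration assume a simple text frame of the form `TYPE;key=value;key=value` read from the serial port; the controller's parsing step could then look like this sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the controller's parsing step for a *hypothetical* text frame
// "TYPE;key=value;key=value". The real MCP layout is vendor-specific and
// not documented here.
class McpParser {

    static Map<String, String> parse(String frame) {
        String[] parts = frame.trim().split(";");
        Map<String, String> message = new LinkedHashMap<>();
        message.put("type", parts[0]);           // first token is the message type
        for (int i = 1; i < parts.length; i++) { // the rest are key=value pairs
            String[] kv = parts[i].split("=", 2);
            message.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return message;
    }
}
```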

Regarding Appium, this setup is required to test a native or web application running on a mobile device.

To test the backend, let's assume it is mostly accessible through a web interface or a REST API. So the best tools for that are Selenium for the web and Rest-Assured for the API.
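As a sketch, a backend check with Rest-Assured could look like this (the base URI, path and JSON field are hypothetical, not from a real backend):

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class DeviceBackendTest {
    public static void main(String[] args) {
        given()
            .baseUri("https://backend.example.com")   // hypothetical backend
        .when()
            .get("/api/devices/42/status")
        .then()
            .statusCode(200)
            .body("state", equalTo("IDLE"));          // hypothetical JSON field
    }
}
```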

Conclusion

The major effort in setting up this kind of end-to-end test is the machine controller, which can take more than 50% of the time required to set up the environment. What is missing is a standard protocol to manage devices over serial. I am also quite sure that sooner or later this pain will be solved by a major automated test framework vendor (HP, Tosca) or by an open-source community. But in the meantime… good luck!


Introduction

Appium Desktop is a good way to understand how Appium works. You will see how elements are identified and what types of interaction you can perform. After that, we can move on to industrialisation with a test factory.

Today we are focusing on an iOS application, so we will use an iOS test application, and it will require a Mac environment.

There are two main purposes for using Appium Desktop:

Investigate how the application could be automated and identify the objects.

Add to the “Desired Capabilities” the properties required to run the simulator and define the application to test.

Here we use a test application that allows you to practice and understand how to use Appium. (http://appium.s3.amazonaws.com/TestApp7.1.app.zip)
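For this test application, the “Desired Capabilities” JSON could look like the following (the platform version and device name depend on the simulators installed on your Mac, and the app path is a placeholder):

```json
{
  "platformName": "iOS",
  "platformVersion": "12.1",
  "deviceName": "iPhone 8",
  "automationName": "XCUITest",
  "app": "/path/to/TestApp7.1.app"
}
```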

Appium Desktop shows the screen of the iOS simulator (1), the inspector pane (2) and the detailed properties of the selected element (3). (4) is the actual simulator running on your machine: Appium uses a simulator to interact with your application rather than providing its own, and this is a good approach; the simulator is to Appium what the browser is to Selenium.
Appium Desktop (1) (2) (3) and iOS Simulator (4)

When you select an element in (1) by double-clicking, you will see in (2) the name of the selected element and in (3) all the properties of that element. In this example you could “Send Keys” or “Tap” on the selected field.

How an element is defined and how you can interact with it.

Conclusion

Now you can play with Appium and see how you can automate tests for your app. It is a good way to see the challenges you may face when you try to automate your iOS, Android or Windows Phone app.