Tag: Testing

In Open Event Frontend, once a pull request is opened, several checks run against it, such as ‘Codacy’, ‘Codecov’, ‘Travis’, etc. New contributors often get confused about what these checks mean. So this blog is a walkthrough of these terms and what they tell us about the PR.

Travis: Every time you make a pull request, you will see this check running and, after some time, reporting whether the build passed or failed. Travis is the continuous integration service we use to verify that the changes proposed in your pull request do not break anything else. Sometimes you will see the following message, which indicates that your changes are breaking something unintentionally.

Thus, by looking at the Travis logs, you can see where the changes proposed in the pull request break things. You can then correct the code and push again to re-run the Travis build until it passes.

Codacy: Codacy is used to check code style, duplication, complexity, coverage, etc. When you create or update a pull request, this check runs and verifies whether the code follows a certain style guide, whether there is duplication in the code, and so on. For instance, if your code has an HTML page in which a tag has an attribute that is left undefined, Codacy will throw an error and fail the check. You then need to read the logs and correct the bug in the code. The following message shows that the Codacy check has passed.

Codecov:

Codecov is a code coverage check which indicates how much of the code changed in the pull request is actually executed by the tests. If, out of the 100 lines of code that you wrote, only 80 lines are actually executed and the rest are not, the code coverage decreases. The following indicates the Codecov report.

Thus, it can be seen which files are affected and by what percentage.

Surge:

The surge link is simply the deployment link for the changes in your pull request.

Thus, by checking the link manually, we can test the behaviour of the app in terms of UI/UX and the other features that the pull request adds.

With all the major functionalities packed into the badgeyay web application, it was time to add some automated testing to speed up the review process for known errors and to check that code contributions do not break anything. We decided to go with Selenium for our testing requirements.

What is Selenium?

Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: Selenium tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction.

First things first:
To install these packages, run the following commands on the CLI:

pip install selenium==2.40
pip install nose

Don’t forget to add them to the requirements.txt file.

Web Browser:
You also need Firefox installed on your machine.

Writing the Test
An automated test automates what you’d do via manual testing, but it is done by the computer. This frees up time and lets you do other things, as well as repeat your testing. The test code runs a series of instructions to interact with a web browser, mimicking how an actual end user would interact with an application. The script navigates the browser, clicks a button, enters some text input, clicks a radio button, selects a drop down, drags and drops, etc. In short, the code tests the functionality of the web application.
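A minimal sketch of such a test in Python with nose (the URL, element ids, and form fields here are hypothetical, not badgeyay’s actual ones):

from selenium import webdriver
from nose.tools import assert_in


class TestSignupForm:
    def setup(self):
        # launch a real Firefox window that the test will drive
        self.driver = webdriver.Firefox()

    def teardown(self):
        self.driver.quit()

    def test_submit_form(self):
        # navigate, fill the form, and submit it, as an end user would
        self.driver.get('http://localhost:5000')
        self.driver.find_element_by_id('name').send_keys('John Doe')
        self.driver.find_element_by_id('submit').click()
        # assert that the page reports success
        assert_in('Success', self.driver.page_source)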

We can also use the PhantomJS package along with Selenium for UI testing purposes without opening a web browser window. We use this for badgeyay to run the tests for every commit in Travis CI, which cannot open a program window.
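Switching to headless execution is essentially a one-line change (a sketch, assuming PhantomJS is installed and on the PATH):

from selenium import webdriver

# PhantomJS renders pages without opening a window, which is what
# allows the same tests to run on Travis CI
driver = webdriver.PhantomJS()
driver.get('http://localhost:5000')
print(driver.title)
driver.quit()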

In this blog I will be describing some of the recent improvements made to the Loklak apps site. A new utility script has been added to automatically update the Loklak app wall after a new app has been made, and invalid app queries on the app details page are now handled gracefully.

A proper message is shown when a user enters an invalid app name in the URL of the details page. Tests have been added for the details page.

Developing updatewall script

This is a small utility script to update the Loklak wall in order to expose a newly created app or update an existing app. Before moving into the working of this script, let us discuss how the Loklak apps site tracks all the apps and their details. In the root of the project there is a file named apps.json. This file contains an aggregation of all the app.json files present in the individual apps. When the site is loaded, index.html loads the Javascript code present in app_list.js. This app_list.js file makes an ajax call to the root apps.json file, loads all the app details into a list and attaches this list to an AngularJS scope variable. After this, the app wall consisting of the various app details is rendered as HTML.

So whenever a new app is created, in order to expose it on the wall, the developer needs to copy the contents of the application’s app.json and paste them into the root apps.json file. This is quite tedious for the developer, who should not have to know how the site works just to register a new app. Moreover, whenever the app.json of an app is updated, the root apps.json file has to be updated with the new data as well.

This newly added script (updatewall) automates this entire process. After creating a new app all that the developer needs to do is run this script from within his app directory and the app wall will be updated automatically.

Now, let us move into the working of this script. The basic workflow of the updatewall script can be described as follows. The script loads the json data present in the app.json file of the app under consideration. Next it loads the json data present in the root apps.json file.

When we load the json data, we use object_pairs_hook in order to load it into an OrderedDict rather than a normal Python dictionary. We do this so that the order of the dictionary items is maintained. Once the data is loaded, we invoke the expose method.

The apps.json file contains a key called apps. The value of this key is a list of json objects, each object being the json data of an individual app’s app.json file. In the expose function we iterate over all the json objects present in the list. If we are unable to find a json object whose name value is the same as that of the newly created app, then we simply append the new app’s app.json object to the list. However, if we find an object containing the same name value as that of the newly created app, then we simply update its properties. In short, if the app is a new one, its data gets added to apps.json; otherwise the corresponding app data is updated.
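A sketch of this logic (the file paths are assumptions; the actual script may organise things differently):

import json
from collections import OrderedDict

def expose(app_json, apps_json):
    # look for an existing entry with the same name
    for obj in apps_json['apps']:
        if obj['name'] == app_json['name']:
            # existing app: update its properties
            obj.update(app_json)
            break
    else:
        # new app: append its data to the list
        apps_json['apps'].append(app_json)
    return apps_json

# object_pairs_hook=OrderedDict preserves the order of the keys
with open('app.json') as f:
    app_json = json.load(f, object_pairs_hook=OrderedDict)
with open('../apps.json') as f:
    apps_json = json.load(f, object_pairs_hook=OrderedDict)

with open('../apps.json', 'w') as f:
    json.dump(expose(app_json, apps_json), f, indent=4)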

Handling invalid app names in the URL of details page

The URL of the app details page takes the app name as a parameter. If a user wants to see the store listing of an app, he has to use the following URL.

https://apps.loklak.org/details.html?q=<app_name>

Here app name is a URL parameter used to load the store listing information. Now if anyone enters an invalid app name, that is, an app which does not exist, then a proper error message has to be shown to the user. This can be done by checking whether the given app name is present in the root apps.json file or not. If it is not present, we simply set a flag so that the error message can be conditionally rendered.

Writing tests for store listing page

Almost the entire content of the store listing is loaded dynamically by Javascript logic, so it is very important to write tests for the store listing page. The Protractor framework has been used to write automated browser tests. The tests make sure that for a given app, the content of the middle section is loaded correctly.

The Loklak apps site’s home page and app details page have sections where data is dynamically loaded from external Javascript and json files. Data is fetched from json files using AngularJS, processed, and then rendered to the corresponding views by controllers. Any erroneous modification to the controller functions might cause discrepancies in the frontend. Since Loklak apps is a frontend project, any bug in the home page or details page will lead to poor UI/UX. How do we deal with this? One way is to write unit tests for the various controller functions and check their behaviours. But how do we test the behaviour of the site itself? Most of the controller functions render something on the view, so we can simulate the various browser actions and test the site against known, accepted behaviours with Protractor.

What is Protractor?

Protractor is an end-to-end test framework for Angular and AngularJS apps. It runs tests against our app running in a browser as if a real user were interacting with it, using browser-specific drivers to interact with the web application as any user would.

Using Protractor to write tests for Loklak apps site

First we need to install Protractor and its dependencies. Let us begin by creating a bare package.json file in the project directory using the following command.

echo {} > package.json

Next we will have to install Protractor.
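A typical way to do this with npm (the exact flags may differ) is:

npm install --save-dev protractor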

The above command installs protractor and webdriver-manager. After this we need to get the necessary binaries to set up our selenium server. This can be done using the following.
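With a local install, the binaries can be fetched with webdriver-manager, along the lines of:

./node_modules/.bin/webdriver-manager update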

The package.json file currently holds our dependencies and scripts. It contains commands for starting the development server, updating the webdriver, starting the webdriver (mentioned just before this), and running the tests.

Next we need to include a configuration file for Protractor. The configuration file should contain the test framework to be used, the address at which selenium is running, and the path to the spec file.
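A minimal sketch of such a conf.js (the spec file name is an assumption):

exports.config = {
  framework: 'jasmine',
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['spec.js']
};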

We have set the framework as jasmine and the selenium address as http://localhost:4444/wd/hub. Next we need to define our actual spec file. But before writing tests we need to find out what things we need to test. We will mostly be testing dynamic content loaded by the Javascript files. Let us define a spec. A spec is a collection of tests. We will start by testing the category name. Initially, when the page loads, it should be equal to All apps. Next we test the top right hand side menu, which is loaded by Javascript using the topmenu.json file.

As mentioned earlier, we are using the jasmine framework for writing our specs. In the code snippet, ‘it’ describes a particular test. It takes a test description and a callback function, thereby providing a very efficient way to document our tests while writing the test code itself. In the first test we use the expect function to check whether the category name is equal to All apps or not. Here we select the div containing the category name by its id.

Next we write a test for the top menu. There should be five menu options in total for the top menu. We select all the list items that are supposed to contain the top menu items and check whether the number of such items is five or not using the expect function. As can be seen from the snippet, the process of selecting a node is very similar to that of the jQuery library.
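A sketch of these first two tests (the element ids here are assumptions, not necessarily those of the real page):

describe('Loklak apps home page', function() {
  it('should initially show All apps as the category name', function() {
    browser.get('https://apps.loklak.org/');
    // select the div containing the category name by its id
    expect(element(by.id('categoryName')).getText()).toEqual('All apps');
  });

  it('should load five items in the top menu', function() {
    // select all list items of the top menu and count them
    expect(element.all(by.css('#topMenu li')).count()).toEqual(5);
  });
});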

Next we test the left hand side category list. This list is loaded by an AngularJS controller from the apps.json file. We should make sure the list is loaded properly and all the options are present.

At first we maintain two lists, of category ids and category names. We begin by confirming that the category title is equal to Categories. Next we get the list of categories and iterate over them. For each category we check whether the corresponding id is present in the DOM or not. After confirming this, we match the names of the categories with the expected names. The element.all function allows us to get a list of selected nodes.

Finally we check the click functionality of the left side menu. The expected behaviour is that on clicking a menu item, the category name gets replaced with the selected category’s name. For this we need to simulate the click event, which Protractor allows us to do very easily using the click function.

Once again we maintain two lists, of category ids and category names. We obtain the present list of categories and iterate over them. For each category link we simulate a click event, and for each click event we check two values: the new browser URL, which should now contain the category id, and the value of the category name, which should be equal to the category selected.
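A sketch of one such click test (the ids and names are assumptions):

it('should update the category name when a menu item is clicked', function() {
  element(by.id('Android')).click();
  // the URL should now contain the category id
  expect(browser.getCurrentUrl()).toContain('Android');
  // the heading should show the selected category
  expect(element(by.id('categoryName')).getText()).toEqual('Android');
});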
Finally, after all the tests are over, we get the final report on our terminal.

In order to run the tests, use the following command.
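Assuming the configuration file is named conf.js and Protractor is installed locally, that is:

./node_modules/.bin/protractor conf.js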

Travis is a continuous integration service that enables you to run tests against your latest Android builds. You can set up your projects to run both unit and integration tests, which can also include launching an emulator. I recently added Travis continuous integration to the Connfa, Giggity and Giraffe apps. In this blog, I describe how to set up Travis continuous integration in an Android project with reference to the Giggity app.

Once you’re signed in to Travis CI, and synchronized your GitHub repositories, go to your profile page and enable the repository you want to build:

Now you need to add a .travis.yml file to the root of your project. This file tells Travis how to handle the builds. You should check your .travis.yml file on Travis Web Lint before committing any changes to it.

You can find the very basic instructions for building an Android project in the Travis documentation. Here we specify the .travis.yml build accordingly for Giggity’s continuous integration. The language key shows that it is an Android project; we would write “language: ruby” if it were a Ruby project. If you need a more customizable environment running in a virtual machine, use the sudo-enabled infrastructure. Similarly, we define the API, play services and libraries as shown.
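A sketch of such a file (the API level and build-tools version here are assumptions, not Giggity’s actual ones):

language: android
jdk: oraclejdk8
android:
  components:
    - tools
    - platform-tools
    - build-tools-26.0.2
    - android-26
    - extra-google-google_play_services
    - extra-android-m2repository
script:
  - ./gradlew build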

Now when you make a commit or pull request, Travis checks whether all the defined checks pass and whether it can be merged. To go further, you can also configure whether to build APKs with every build.

Now that we have started writing test cases for Phimpme Android, while running my instrumentation test case I saw a Cloud Testing tab in Android Studio. This is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Not everyone has devices for every Android version, but testing on all of them is equally important.

How I used test lab in Phimpme

Run your first test on Firebase

Select Test Lab in your project in the left nav of the Firebase console, and then click Run a Robo test. The Robo test automatically explores your app on a wide array of devices to find defects and reports any crashes that occur. It doesn’t require you to write test cases. All you need is the app’s APK; nothing else is needed to use a Robo test.

Upload your application’s APK (app-debug-unaligned.apk) in the next screen and click Continue.

Configure the device selection; a wide range of devices and all API levels are available there. You can save the template for future use.

Click on Start Test to begin. It will run the tests and show real-time progress as well.

Using Firebase Test Lab from Android Studio

It requires Android Studio 2.0+. You need to edit the configuration of the Android instrumentation test.

Select the Firebase Test Lab Device Matrix under the Target. You can configure the matrix, which defines the virtual and physical devices on which your tests will run. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run our test cases on multiple devices and make our app more robust.

We are heading toward a release of Phimpme soon, so we are increasing the code coverage by writing test cases for our app. What is a test case? A test case is a script against which we run our code to verify a feature’s implementation; it basically encodes the expected output, flow, and feature steps of the app. To release an app on multiple platforms, it is highly recommended to run it against test cases.

For example, consider an app which has one button. First we write a UI test case which checks whether the button is displayed on the screen or not, and in response it reports the test case as passed or failed.

Steps to add a UI test case using Espresso

The Espresso testing framework provides APIs to simulate user interactions, and it has a concise API. In new versions of Android Studio, there is even a feature to record Espresso test cases. I’ll show how to write test cases with the recorder in the steps below.

Setup Project Directory

Android instrumentation tests must be placed in the androidTest directory. If it is not there, create a directory at app/src/androidTest/java…

Write Test Case

Firstly, I am writing a very simple test case which checks whether the three bottom navigation view items are displayed or not.

The Espresso testing framework basically has three components:

ViewMatchers

This helps to find the correct view on which some action can be performed, e.g. onView(withId(R.id.navigation_accounts)). Here I am taking the view of the accounts item in the Bottom Navigation View.

ViewActions

This allows us to perform actions on the view we found earlier, e.g. the very basic operation click().

ViewAssertions

This allows us to assert the current state of the view, e.g. isDisplayed() is an assertion on the view we found. So the basic architecture of an Espresso test case is

onView(ViewMatcher)
.perform(ViewAction)
.check(ViewAssertion);
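Putting the three components together, a sketch of the bottom navigation test might look like this (the navigation_home and navigation_share ids are assumptions; only navigation_accounts appears above):

@Test
public void testBottomNavigationIsDisplayed() {
    // ViewMatcher finds each view; ViewAssertion checks its state
    onView(withId(R.id.navigation_home)).check(matches(isDisplayed()));
    onView(withId(R.id.navigation_share)).check(matches(isDisplayed()));
    onView(withId(R.id.navigation_accounts)).check(matches(isDisplayed()));
    // ViewAction: simulate a user click on the accounts item
    onView(withId(R.id.navigation_accounts)).perform(click());
}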

We can also use the Hamcrest framework, which provides extra matchers for checking conditions in the code.

The hardware design of the PSLab aims to achieve the maximum possible performance from a very conservative bill of materials. There are several analog components such as op-amps, voltage dividers, and level-shifters involved in input signal processing that have inherent offsets and slopes which must be corrected in order to get the best results. Similarly, some analog output signals from the PSLab are also modified by buffers, amplifiers, and level-shifting circuits.

One way to improve the initial accuracy is to choose high performance analog components that are factory calibrated, and do not require any additional correction to achieve error margins that are less than the least count of the PSLab’s measurement capabilities. However, such components, like laser-trimmed resistor pairs and low-offset op-amps, are quite expensive, and we must instead use software-based correction methods to achieve similar performance from affordable parts.

Identifying a suitable calibrator for analog signals

In order to calibrate a device, we must first own a similar device whose measurements we can trust, and which has a finer resolution than the PSLab itself. Calibration is a one-time task that quantifies and stores the gain and offset errors, and these errors are not expected to change significantly unless a large change in temperature, or mechanical stress, is experienced.

Such a device may be as expensive as a 24-bit, research-grade multimeter, which generally costs upwards of $500, or as inexpensive as an analog-to-digital converter that might require some expertise to extract data from, but can still be used for calibration.

Fortunately, we have been able to identify a cheaply available device that puts the calibration process within the reach and capabilities of the end user. The ADS1115 16-bit ADC is a 4-channel, 0-3.3V ADC that can be interfaced via I2C. The typical initial accuracy of its internal voltage reference is 0.01%, and data rates higher than 500 SPS are possible. It is cost effective, and is available in convenient module formats that can be directly plugged into the PSLab itself. It can be purchased through various vendors (A, B, C).

Therefore, it appears to be most suited to calibrate individual PSLab devices.

Basic requirements for the calibration process

The process to calibrate the analog inputs and outputs involves looping them externally, and monitoring the actual values via the external calibrator.

We’re killing two birds with one stone by calibrating inputs and outputs in tandem, and it makes for a faster calibration process. The complete calibration process for Digital to Analog converter outputs has enough complexity to warrant a separate blog post.

Let’s take an example: PV1 (an analog output that can be set between -5V and +5V) can be connected to CH1 (an analog input which can read voltage values between -16 and +16 volts) with a small segment of wire, and various voltage values can be set on PV1 and read back by CH1. At the same time, the external calibration utility will also monitor this voltage, and store the error in PV1 (set voltage - actual voltage) as well as the error in CH1 (read voltage - actual voltage).

In a similar manner, PV2 can be connected to CH2, and the second channel of the ADS1115 calibrator can be used to monitor the real value, and so on.

The recorded data shows the deviations of the various analog input channels, over their different voltage ranges, from the actual values. As is evident from the graph, errors can be as much as 40mV on a full-scale range of +/-16,000mV. But since these errors are quite repeatable, we can apply a calibration polynomial to correct them.

Integral Non-linearity of the ADC

In addition to the overall slope and offset, you have probably observed in the previous image a sawtooth pattern, with an amplitude as small as the least count of the analog inputs, superimposed on them. This error arises from the integral non-linearity (INL) of the analog-to-digital converter of the PIC, and affects all analog inputs uniformly. While in principle we can ignore this for all practical purposes, in order to further improve the analog accuracy, we can also store this INL error of the ADC, and apply the correction to any channel after its slope and offset have been corrected.

The overall slope and offset are caused by the analog references and components, and can be corrected with a simple 3-degree polynomial. However, the sawtooth pattern is characteristic of the INL, and must be stored in a correction array with 4096 elements (each element represents the error of the corresponding ADC code of the 12-bit ADC). The yellow trace in the graph represents the error in readings from the ADC after applying polynomial and table based correction. There appears to be a small offset that can be attributed to a change in ambient temperature, but it can be neglected as it is in the order of 100uV.
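A sketch of this two-stage correction in Python (the sample readings and the zeroed INL table are placeholders, not real calibration data):

import numpy as np

# voltages read by a PSLab channel and by the trusted ADS1115
# calibrator at the same test points (dummy values for illustration)
raw = np.array([-4.96, -2.49, 0.02, 2.51, 4.98])
actual = np.array([-5.00, -2.50, 0.00, 2.50, 5.00])

# the overall slope and offset are corrected with a degree-3 polynomial
correct = np.poly1d(np.polyfit(raw, actual, 3))

# the INL of the 12-bit ADC is stored as a 4096-element error table,
# one entry per ADC code (zeros here as a placeholder)
inl_table = np.zeros(4096)

def calibrated_voltage(adc_code, raw_voltage):
    # apply the polynomial first, then subtract the INL error
    return correct(raw_voltage) - inl_table[adc_code]

print(calibrated_voltage(2048, 0.02))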

The following utilities and code are necessary for this process:

An I2C communication library for ADS1115 must be present in order to acquire data from it via the PSLab

The library should be able to handle the following tasks:

Read single-ended and differential voltage values from any of the channels

Enable selection of voltage range and voltage reference

A graphical interface with the following features and algorithms will be required:

Vary the output voltages from PV1,2,3 in small, definite intervals

Store the errors in the analog outputs and inputs as a function of the actual voltage

Generate cubic interpolation functions for each input and output channel (a sketch follows this list)

The Programmable Current Source can be calibrated using a measured Load resistor, and calibrated analog channel. Its interpolation function must also be stored.

Write all calibration constants into flash memory after assigning a timestamp

Store raw calibration data in a client-side folder
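As referenced above, generating a cubic interpolation function for one channel might look like this (the data points are dummies):

import numpy as np
from scipy.interpolate import interp1d

# actual voltages set on PV1, and the errors recorded for CH1 (dummy data)
actual = np.linspace(-5, 5, 11)
errors = 0.01 * np.sin(actual)

# cubic interpolation of the error as a function of the actual voltage
error_fn = interp1d(actual, errors, kind='cubic')

# correcting a reading: subtract the interpolated error
reading = 1.23
print(reading - error_fn(reading))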

Testing of the analog inputs after applying the calibration polynomials shows that the accuracy has been brought within a +/-5mV range for the wide input channels. For CH3, a +/-1mV accuracy is achieved.

Like all other helper functions in FOSSASIA‘s Open Event Server, we also need to test the exception and error helper functions and classes. The error helper classes are mainly used to create error handler responses for known errors. For example, we know error 403 is Access Forbidden, but we want to send a proper source message along with a proper error message to help identify and handle the error; hence we use the error classes. To ensure that future commits do not mix up the errors, we implemented unit tests for them.

There are mainly two kinds of error classes: one is the HTTP status errors and the other is the exceptions. Depending on the type of error we get in the try-except block for a particular API, we raise that particular exception or error.

Unit Test for Exception

Exceptions are written in this form:

@validates_schema
def validate_quantity(self, data):
    if 'max_order' in data and 'min_order' in data:
        if data['max_order'] < data['min_order']:
            raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
                                      "max-order should be greater than min-order")

This error is raised wherever the data that is sent as POST or PATCH is unprocessable. For example, this is how we raise this error:

raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'}, "min-quantity should be less than max-quantity")

This exception is raised due to error in validation of data where maximum quantity should be more than minimum quantity.

To test that the above line indeed raises an exception of UnprocessableEntity with status 422, we use the assertRaises() function. Following is the code:

def test_exceptions(self):
    # Unprocessable Entity Exception
    with self.assertRaises(UnprocessableEntity):
        raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                  "min-quantity should be less than max-quantity")

In the above code, with self.assertRaises() creates a context for the exception type, so that when the next line raises an exception, it asserts that the exception raised is the same as the exception it was expecting, and hence ensures that the correct exception is being raised.

Unit Test for Error

In the error helper classes, what we do is, for known HTTP status codes, return a response that is readable and understandable by the user. So this is how we raise an error:

ForbiddenError({'source': ''}, 'Super admin access is required')

This is basically the 403: Access Denied error, but with the “Super admin access is required” message it becomes far more clear. However, we need to ensure that the status code returned when this error message is shown still stays 403 and isn’t modified unintentionally in the future.

Here, errors and exceptions work a little differently. When we declare a custom error class, we don’t really raise that error; instead, we show that error as a response. So we can’t use the assertRaises() function. However, we can compare the status code and ensure that the error raised is the same as the expected one. So we do this:

def test_errors(self):
    with app.test_request_context():
        # Forbidden Error
        forbidden_error = ForbiddenError({'source': ''}, 'Super admin access is required')
        self.assertEqual(forbidden_error.status, 403)

        # Not Found Error
        not_found_error = NotFoundError({'source': ''}, 'Object not found.')
        self.assertEqual(not_found_error.status, 404)

Here we first create an object of the error class ForbiddenError with a sample source and message. We then assert, using the assertEqual() function, that the status attribute of this object is 403, which ensures that this error is of the Access Denied type, as expected. This helps us ensure that no one in the future unknowingly or by mistake changes the error messages and status codes, so that the HTTP status codes in the responses are maintained.

FOSSASIA‘s Open Event Server project uses a certain set of functions in order to resize an image from its original size to, for example, a thumbnail, icon, or larger image. How do we test this image-resizing functionality in the Open Event Server project? We need to verify that the resized image’s dimensions are the same as the dimensions provided for the resize. For example, in this function, we provide the URL of the image that we received, and it creates and saves a resized version.

In this function, we send the image URL, the width and height to be resized to, and the aspect ratio as either True or False, along with the folder to be saved in. For this blog, we assume the aspect ratio is False, which means that we don’t maintain the aspect ratio while resizing. So, given the above-mentioned parameters, we get the URL for the resized image that is saved.

To test whether it has been resized to the correct dimensions, we use Pillow or, as it is popularly known, PIL. So we write a separate function named getsizes() which takes the image file as a parameter. Then, using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has an attribute size which returns (width, height). So from this function, we return the size attribute. Following is the code:
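A sketch of this helper, matching the description above:

from PIL import Image

def getsizes(file):
    # open the image file and return its (width, height) tuple
    im = Image.open(file)
    return im.size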

Now that we have this function, it’s time to look into the unit test itself. In the unit test we set the dummy width and height that we want to resize to, and set the aspect ratio as False, as discussed above. This helps us test that both width and height are properly resized. We are using a creative-commons-licensed image for resizing. This is the code:

From create_save_resized_image, we receive the URL for the resized image. Since we have written all the unit tests for local settings, we get a URL with localhost as the server. However, we don’t have the server running, so we can’t access the image through the URL. So we build the absolute path to the image file from the URL and store it in resized_image_file. Then we find the size of the image using the getsizes function that we have already written. This gives us the width and height of the newly resized image. We now assert that the width we wanted to resize to is equal to the actual width of the resized image, and we make the same check with the height as well. If both match, then the resizing function has worked perfectly. Here is the complete code:
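A hedged sketch of such a test (the parameter order of create_save_resized_image, the config key, and the image URL are assumptions; create_save_resized_image, getsizes and app are the project’s own helpers, imports omitted):

import unittest
from urllib.parse import urlparse

class TestImageSizes(unittest.TestCase):
    def test_create_save_resized_image(self):
        width, height = 500, 200
        image_url = 'https://example.com/cc-licensed-image.jpg'
        resized_url = create_save_resized_image(image_url, width, height,
                                                aspect=False, folder='images')
        # the URL points at localhost; build the absolute file path instead
        resized_image_file = app.config['BASE_DIR'] + urlparse(resized_url).path
        resized_width, resized_height = getsizes(resized_image_file)
        self.assertEqual(resized_width, width)
        self.assertEqual(resized_height, height)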

In the Open Event Orga Server, we use this resize function to create three resized images in various modules, such as events, users, etc. The three sizes are named Large, Thumbnail and Icon. Depending on which one is more suitable, we use it, avoiding the need to load a very big image for a very small div. The exact width and height for these three sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and run a loop to check all of the sizes. Here is the code:
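Continuing the sketch above, the loop version might look like this (the size-settings keys are assumptions):

def test_image_sizes(self):
    image_url = 'https://example.com/cc-licensed-image.jpg'
    # width/height for large, thumbnail, and icon come from admin settings
    for size_name in ('large', 'thumbnail', 'icon'):
        width = image_sizes[size_name + '_width']
        height = image_sizes[size_name + '_height']
        resized_url = create_save_resized_image(image_url, width, height,
                                                aspect=False, folder='images')
        resized_image_file = app.config['BASE_DIR'] + urlparse(resized_url).path
        resized_width, resized_height = getsizes(resized_image_file)
        self.assertEqual(resized_width, width)
        self.assertEqual(resized_height, height)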