Archive

Today I will share a tool that will help you perform SQL injection tests on your website.

What is a SQL injection test? It is a type of security test that you can perform on your web application. You need to be sure that your website prevents users and attackers from accessing your database through SQL injection.

To test if your web page has a SQL injection vulnerability, you need to check whether it accepts dynamic user-provided values via GET, POST or Cookie parameters, or via the HTTP User-Agent request header. You then try to exploit them to retrieve as much information as possible from the back-end database management system, or even to access the underlying file system and operating system.

This tool, sqlmap, can automate the process of identifying and exploiting this type of vulnerability. I will give you some tips here:
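As a minimal sketch of a first sqlmap session (the URL, parameters and database name below are placeholders, and you should only ever run this against applications you are authorised to test):

```shell
# Probe a GET parameter and enumerate the databases (placeholder URL).
sqlmap -u "http://target.example/item.php?id=1" --batch --dbs

# Test a POST parameter instead of a GET one.
sqlmap -u "http://target.example/login.php" --data "user=a&pass=b" --batch

# Once a database is found, list its tables ("shop" is an example name).
sqlmap -u "http://target.example/item.php?id=1" -D shop --tables --batch
```

The `--batch` flag makes sqlmap pick the default answer for every prompt, which is handy when you are scripting these checks.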

Hey guys, today I will post an example of a Dockerfile that lets you run your Protractor automation in a Docker container on Firefox and Chrome.

You will need the known hosts and the public key from GitHub to be able to clone the repository and run the automation. You also need to install Java to be able to run the Selenium server in your Docker container.
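A minimal sketch of such a Dockerfile (the base image, repository URL and file paths are assumptions to adapt to your project; installing the browsers themselves is left out for brevity):

```dockerfile
# Node base image for Protractor; pin the version your project supports.
FROM node:8

# Java is needed to run the standalone Selenium server.
RUN apt-get update && \
    apt-get install -y default-jre git && \
    rm -rf /var/lib/apt/lists/*

# SSH key and known hosts so the container can clone the repo from GitHub.
RUN mkdir -p /root/.ssh
COPY id_rsa /root/.ssh/id_rsa
COPY known_hosts /root/.ssh/known_hosts

# Clone the automation repository (placeholder URL) and install dependencies.
RUN git clone git@github.com:your-org/your-automation.git /automation
WORKDIR /automation
RUN npm install && ./node_modules/.bin/webdriver-manager update

CMD ["npm", "test"]
```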

After adding these two functions, which create the report for multiple browsers and add the version and the device name (desktop or mobile) to the report, you can attach pictures when a scenario fails:
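A minimal sketch of such a hook, assuming cucumber-js with Protractor's global `browser` (the exact import path and status API depend on your cucumber version):

```javascript
const { After, Status } = require('cucumber');

// Attach a screenshot to the report only when the scenario failed.
After(async function (scenario) {
  if (scenario.result.status === Status.FAILED) {
    // Protractor's takeScreenshot resolves to a base64-encoded PNG.
    const png = await browser.takeScreenshot();
    this.attach(png, 'image/png');
  }
});
```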

You can always customise and improve the functions; if you feel that you need a picture after every scenario even when they pass, you can just remove the condition in the After hook. Different themes and other options are also available at https://www.npmjs.com/package/cucumber-html-reporter

I usually write about the best practices to follow when writing your BDD scenarios. This time I will do something different and show some examples I have found of how not to write your BDD scenarios.

Example 1 – Too many steps:

Scenario: Valid data entered
Given a user exists
When I visit the details access page
And I fill in email with "test@email.com"
And I select "read only" from "Permissions"
And I fill in "Valid until" with "2010-03-01"
And I press "Grant access"
Then I should see "Access granted till 2010-03-01."
And I should be on the details page

Example 2 – UI elements dependency:

Scenario: Adding a picture
Given I go to the Home page
When I click the Add picture button
And I click on the drop down "Gallery"
And I click on the first image
Then I should see the image added on the home page

Example 3 – Excessive use of tables:

Scenario: Adding a new data user
Given I am on the user details page
When I select an existing user
And I send some new user data
|name |age|country|language|address |telephone|
|James|20 |england|english |street 1|123456789|
|Chris|30 |spain |spanish |street 2|987654321|
Then I should see the correct previously registered data
|gender |married|smoke|
|male |true |true |
|male |false |false|

Example 4 – Code and data structure:

Scenario: Include attachment
Given I go to the Attachments page
When I click the Add attachment button with css "#attachment-button"
And I click on the first csv file with class ".file .csv"
Then I should see the file attached with the id ".attachment"

Write declarative scenarios.

Write at a higher level of abstraction, instead of being concerned with clicking widgets on a page. Surfacing UI concerns in a feature file can make it fragile to ordinary design changes.

Try to stick to no more than 5 steps per scenario; it’s easier to read.
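For instance, the UI-dependent Example 2 above could be rewritten declaratively (the step wording here is hypothetical and would need matching step definitions on your side):

```gherkin
Scenario: Adding a picture
  Given I am on the Home page
  When I add the first picture from the Gallery
  Then I should see the picture on the home page
```

The widget-level clicks move into the step definitions, so the feature file survives ordinary design changes.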

When working in an Agile team, the feedback loop should be as quick as possible. If you don’t send the feedback at the right time, future bugs could be costly, as the feature will already have a large amount of code behind it.

What is this feedback loop?

If you have implemented continuous integration and have automated tests running after each new commit to your dev environment, you need to report the result of these tests back as soon as possible. This is the feedback loop, and you need to know the right time to report an issue to the dev team.

If your automation is taking too long to run the tests after a new commit, that is a sign that you need to improve your smoke tests: maybe their scope is too broad, or maybe the automation is slow for other reasons, such as sleeps or a setup that does not scale.

The feedback loop influences how your agile process works and whether you are saving or wasting time during development. Tight feedback loops improve the performance of the team in general, give confidence, save time and avoid costly bug fixes.

Feedback loops are not only about continuous integration; they also cover pair programming and unit tests. This time, though, we will focus on continuous integration tests.

When you are implementing a new scenario in your automated tests, you want to know ASAP if something you implemented is breaking another scenario, or the same one. The same applies when you are developing something related to that feature and want to know if the new implementation breaks the tests. It is easier to fix while it is fresh in your mind; you don’t want to wait 30 minutes to find out there is a bug because you changed the name of a variable.

In my personal opinion, if you don’t have parallel tests to check multiple browsers or mobiles at the same time, it is better to focus on the most used browser/mobile, since that is the first priority in all cases.

Use case: 90% of users are on Chrome on desktop, 5% are on Firefox mobile and 5% are on Safari mobile. What is the best strategy?

After commit:

1. Run smoke tests on all the browsers and take 15 minutes to receive feedback?

2. Run regression tests on all the browsers and take 40 minutes to receive feedback?

3. Run smoke tests on only the most used browser, take 5 minutes to receive feedback, and leave a job running on all the browsers every hour?

4. Run regression tests on only the most used browser, take 10 minutes to receive feedback, and leave a job running on all the browsers every hour?

There is no fixed rule to follow, but since in this case you don’t have parallel tests, I would go for the third option. You can then focus on the most used browser and leave the other browsers running in a dedicated job every hour. Why not the fourth option? Because you need to keep the business value in mind.

Of course we need to deliver the feature on all supported browsers, but when time is tight (very often) and you need to deliver as fast as you can, you go for the option with the most business value and cover the other browsers afterwards. Don’t forget that when you automate, you are not only helping development; you are also helping the end users.
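As a sketch of the third option in Jenkins (job names, cron spec and npm scripts are assumptions to adapt to your setup), the smoke job runs on every commit against Chrome only, while a separate pipeline covers the remaining browsers on an hourly schedule:

```groovy
// Hypothetical Jenkinsfile for the hourly cross-browser job.
pipeline {
    agent any
    triggers {
        // 'H * * * *' runs roughly once an hour, spread to avoid load spikes.
        cron('H * * * *')
    }
    stages {
        stage('Cross-browser smoke') {
            steps {
                // Browser targets are placeholders; map them to your Protractor configs.
                sh 'npm run smoke -- --browser=firefox-mobile'
                sh 'npm run smoke -- --browser=safari-mobile'
            }
        }
    }
}
```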

If you are wondering how long each type of test should take to give feedback, you can build your own process based on this graph:

For how long should the team keep the test reports ?

It depends on how many times you run the tests through the day. There is no rule for that, so you need to find the best option for you and your team. In my team we keep the test reports until the 15th run on Jenkins; after that we discard the report and the logs. In most cases, I’ve found that if something goes back more than 3 major versions, looking for more resolution is a waste of time.
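In a Jenkins declarative pipeline, that retention policy can be expressed with a build discarder (the value 15 mirrors my team's setting; adjust it to yours):

```groovy
pipeline {
    agent any
    options {
        // Keep only the last 15 builds, including their reports and logs.
        buildDiscarder(logRotator(numToKeepStr: '15'))
    }
    stages {
        stage('Tests') {
            steps {
                sh 'npm test'
            }
        }
    }
}
```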

If regressions are reported as soon as they’re observed, the report should include the first known failing build and the last known good build. Ideally these are sequential, but this isn’t necessarily the case. Some people like to archive old reports outside Jenkins. I haven’t felt the need for this so far, but it is up to you whether to keep these reports outside Jenkins.