iAmALittleTester (https://imalittletester.com)
Testing. With Java, Selenium, TestNG, Maven, Spring, IntelliJ and friends.
The tester and the code review
https://imalittletester.com/2018/03/08/the-tester-and-the-code-review/

Code review, although very important and frequent in the software development world, is not as frequent in the automation testing world. Normally, it would be part of the whole process: someone writes code, reviews it themselves, and makes it available to the rest of the team; the team reviews it, changes are made where needed, and the improved code then becomes available back to the team. This helps in having better code and in having awareness inside the team of what is being implemented.

Code is still code, no matter whether it is created for implementing or testing a feature, so there should be code reviews for all of it.

Who should participate in code reviews?

For the developer’s code

If the developer creates a review for some code he/she wrote, normally only developers are asked to take a look at the changes. However, it is equally useful for a tester to read the code, to identify possible testing scenarios or to understand how the implemented feature will affect other features. Sure, the tester might not understand all the code, but at least he/she will have the chance to read the implementation and, with a little help from the developer, gain more insight into how and why the code was written.

For the tester’s code

When it comes to automation test code, not many testers feel confident enough, or want, to create code reviews, let alone include developers in them. Many times testers feel they will get a negative response when showing their code to others, when in fact code review is nothing more than a learning process and a way to improve. Sure, sometimes feedback can be a bit harsh in the way it is written, but the idea is not to take it personally. Instead, testers should see that code review can offer good suggestions on how to improve what was already written, leading to more documentation reading and to learning new things.

That will help in finding a good approach whenever similar code needs to be written in the future. Of course, in the best case scenario, nothing will need to be changed in the initial test implementation. But having code reviews can make the testing team aware of what new tests were written and what testing helper code has been added (code that can be reused in future tests written for other functionalities under test).

Consider the code review: a process for improving code, learning new things and getting everyone up to speed on what tests and utilities are now available.

Types of feedback from reviews

Personally, I consider that feedback comes in several flavors and can be categorized according to severity and the actions to be taken.

Level 0

Or the “I should have fixed it before committing the code” level

This category includes things that should have been corrected before committing the code to the repository. These are common-sense things that are also most definitely highlighted by the IDE you are using, in case you missed them yourself. Such things might be: unused imports, variables that are declared but never used, errors that lead to the code not compiling, incorrect naming (for example a method name starting with an upper-case letter) and so on.
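To illustrate, here is a small hypothetical Java snippet (not from the original post) containing a few typical Level 0 issues:

```java
import java.util.List; // unused import: nothing in this class uses List

public class LoginTests {

    private String unusedVariable = "never read anywhere"; // declared but never used

    // incorrect naming: Java method names should start with a lower-case letter
    public void VerifyLoginPageTitle() {
        System.out.println("checking the login page title");
    }
}
```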

Tip: before committing the code, do a proper review of it yourself. Depending on the IDE you are using, some warnings are usually displayed when you try to commit the code. Additionally, in the case of IntelliJ, you can use the Inspect Code feature to identify changes you need to make before making the code available to your peers. It is better to fix these issues yourself than to let this kind of code get as far as the code review process.

Level 1

Or the “I should have studied this a bit more” level

To say it nicely, these items will get a bad review if they are found inside the committed code. Developers would freak out if they saw them, simply because they are bad coding. One example would be a 'for' inside a 'while', inside which there are 10 'ifs' and a few try/catches: basically gross code that contradicts every coding best practice and defies logic.

Tip: when you need to write code that seems too complex or difficult, read some documentation on the topic, or feel free to discuss it with your developers. They can advise you on a solution that will be efficient and elegant. There is no shame in asking for advice in order to find the best solution for your automation code. It is far better to talk to them than to commit code that contradicts every good coding practice.

Level 2

Or the “there’s another way to do it” level

This category contains suggestions rather than clear indications that something needs to be changed. In the world of code, most tasks can be solved in several different ways. Therefore, people participating in a code review might lean towards using a particular library for the code that is needed, or towards writing it from scratch, so there might be a difference of opinion between the person who wrote the code and the reviewers. The code the initial writer produced is not bad, but it could be written in a different way.

Tip: when a suggestion comes to write code in another manner, first evaluate the differences between the approach you, the code writer, took and the approach the reviewer suggested. You need to understand the pros and cons of both approaches, and if you feel that what you wrote initially is the better choice, you can leave it as it is. However, if the reviewer's suggestion seems a better fit, don't hesitate to refactor your code accordingly. Things to consider when choosing one approach over the other include: how much code needs to be written, how long the test takes to run, any negative impact on the resources of the machine where it runs, and so on.

Level 3

Or the “looks are everything” level

Sometimes the feedback from a code review does not relate to how the code was written, but more to what it looks like. Many projects set their own standard for how to structure the code, which might include things like: naming conventions for certain types of files, patterns for messages that are written to the console, how many spaces to indent the code with, what to name utility classes, and so on. These are all meant to give the code a uniform, consistent look.

Tip: before writing code for such projects, get familiar with all the rules regarding how the code should be structured. This will help avoid having to fix the appearance of the code later on. Also, if such feedback is received during code review, the code should be updated to follow the project's “design” guidelines.

Level Na-ah

Or the “this is not a valid point” level

Sometimes, it just happens that the points specified as feedback are not valid.

Tip: if you feel that the feedback received is not valid, again, check the documentation and get in touch with developers. If it turns out the feedback is indeed invalid, just disregard it, and possibly provide some feedback-back (explain to the people who raised the points why they were not valid).

Automated testing of translations by using property files
https://imalittletester.com/2018/01/30/automated-testing-of-translations-by-using-property-files/

Whenever you need to write tests that check for a text in several languages, you don't need to write one test for each language you check. Instead, you can use property files to store the translations and write just one test that checks the text across all supported languages. Read below to see how, and check out my GitHub project for the examples presented in this post.

A property file in Java is nothing more than a file that contains key/value pairs. In the case of translations, you can think of the key as the identifier of the text that you need to check, and of the value as the corresponding text in the language for which you built that property file.

Creating the property file

The property file for each language that is in scope of your testing needs to have a name that reflects that language. For example, you could choose “en” as the name for the English language, and “de” for the German file.

Basically you need to follow some sort of localization convention, and you can check out the Java locale support documentation for an idea of what to name these files. This documentation suggests going further than just naming the English file “en”, by specifying whether you are referring to British English or American English. In this case you might have an “en_GB” file and an “en_US” file. But if you only need to generically test English texts, you can simply name the property file “en”.

All of the translation files need to have a .properties extension, therefore you will create the “en.properties” file. All these files need to be placed inside your Java project, under the “src/main/resources” folder. Here, you can further create other folders, to reflect what you are doing.

This approach is particularly useful because it allows you to store all translations related to a feature in the same location, making them easy to find in case you need to look at or modify them.

Adding the key/value pairs

Now that you have the files, you need to add the translated text in all of these. First you will need to define the key that will represent the text. The recommendation for key names is to be all lower case, and if you need a separator inside the key name, you should use a dot character (“.”). The translated text will be the value. A key/value pair will look like key=value, each on a new line.

You can use just one word as the key, if that is enough. However if you need to define several keys for related items, you should have a structure like “feature.firstkey”, “feature.secondkey” and so on.

As an example, if the key needs to represent just one color, the key can be the name of the color in English. For example blue. An entry for blue in the English property file will be blue=blue. The same key needs to be present in all the translation property files you will use, together with the translated value. So, for German you would have blue=blau, for Spanish blue=azul, and so on.

Now, if you want to define more colors, you can group them like: color.blue, color.green, color.red. Therefore the English file will contain, each on a new line and without separators at the end of the line: color.blue=blue, color.green=green, color.red=red. The corresponding Spanish file would contain: color.blue=azul, color.green=verde and color.red=rojo. You can use this same convention when you have several words as a key and you need a separator for them. For example, if your key represents the “grocery store timetable”, you can name it grocery.store.timetable.
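For illustration, the English and Spanish files built from the entries above could look like this (the Spanish file name “es” is my assumption; the snippet itself is not from the original post):

```properties
# en.properties
color.blue=blue
color.green=green
color.red=red

# es.properties
color.blue=azul
color.green=verde
color.red=rojo
```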

Reading from the property file

Now that you have the translations in place, you need a way for your tests to read them. For this purpose you can create a method like the one described below:

The method I wrote here opens the property file and reads the value corresponding to the key it receives as its first parameter. The file is determined by the second parameter, which represents the name of the file you defined earlier (without the .properties extension). You can see that I wrote the path to the language files inside the method, which is why you only need to pass the file name to it. In my case, the translation files are inside the src/main/resources/languages folder. If you store your translations elsewhere, this needs to be reflected in this method.
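The original code listing is not included in this export; a minimal sketch of such a method, following the description above (the class name, the use of java.util.Properties and the exception handling are my assumptions), could look like this:

```java
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

public class TranslationReader {

    // reads the value of 'key' from the property file corresponding to the given language
    public static String getTranslation(String key, String language) {
        // the path to the translation files is written inside the method, as described above
        String filePath = "src/main/resources/languages/" + language + ".properties";
        Properties properties = new Properties();
        try (FileReader reader = new FileReader(filePath)) {
            properties.load(reader);
        } catch (IOException e) {
            throw new RuntimeException("Could not read translation file: " + filePath, e);
        }
        return properties.getProperty(key);
    }
}
```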

A usage example for this method would be:

getTranslation(key, language);

Let’s say, for key color, and language German, this call would be:

getTranslation("color", "de");

Writing the test

Ok, so now you have the property files and a way to read them. How do you write just one test to check for the translated text? Well, looking at the getTranslation() method, and considering that in a test you need to check one property in each language where it is available, the key will always be the same when calling getTranslation() (when you test for that particular key), but you will need to run through all the available languages. For this purpose you can use a data provider which passes the language to the test. A sketch of such a test is shown below.

The texts that need to be checked in this example are: an “i love testing” text, translated into each supported language, and a superhero's first name and last name. For this task, the following three keys were defined: testing, superhero.firstname and superhero.lastname.

So in this case, the call to the getTranslation() method receives the testing key and, for each run, a different language. Based on it, the test reads the translation corresponding to this key from each available language. My test here just prints these results to the console, as follows:
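(The original listing is not part of this export; the following TestNG sketch is a reconstruction that reuses the hypothetical TranslationReader from the earlier sketch, and the list of languages in the data provider is an assumption.)

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class TranslationTest {

    // hypothetical list of supported languages; adapt it to your own property files
    @DataProvider(name = "languages")
    public Object[][] languages() {
        return new Object[][]{{"en"}, {"de"}, {"es"}};
    }

    @Test(dataProvider = "languages")
    public void printTranslations(String language) {
        System.out.println("testing = " + TranslationReader.getTranslation("testing", language));
        System.out.println("superhero.firstname = " + TranslationReader.getTranslation("superhero.firstname", language));
        System.out.println("superhero.lastname = " + TranslationReader.getTranslation("superhero.lastname", language));
    }
}
```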

Why you shouldn’t use your developers’ property files

Chances are that, for the same feature for which you need to check translations, the developers have already created property files structured as I describe in this post, in order to implement that feature.

You shouldn’t use their files to define your expected translations, since that is the text they are using for the implementation. It would mean comparing the actual text with itself. If there is anything wrong in those files, your tests would not pick that up, since their “expected” content would be the “actual” one.

That is why I recommend keeping separate property files for the testing activities.

Write automated tests with repeatable results
https://imalittletester.com/2018/01/23/write-automated-tests-with-repeatable-results/

Writing automated tests is no longer the biggest challenge in the testing community. Writing reliable automated tests is. So many times, tests that were once written are sent to the garbage bin or thrown into oblivion. They are unreliable, and people just ignore them when they run, simply because they have a history of failing for various random, invalid reasons.

One of the reasons for having such useless tests is that they are written either in haste or carelessly. Many times people have tasks they need to accomplish within a pre-allocated timeframe, usually to finish the testing during the sprint, or they have to write a specified number of tests in one day. These are the cases when, once a test gets written and passes at least once, people are happy to cross it off their list and move on to the next one, even if, out of 3 test runs, it only passed once. Managers are probably happier hearing that 100 new tests were written than hearing that 5 sturdy and reliable ones were. They like math and reporting.

But we are testers, and we should not be concerned with these numbers. Our main focus should be to write tests that are reliable, which means a lot of things. Among others, a good automated test must have repeatable results. This means that if the software under test does not change, the test must have the same outcome each time it runs. If you create a test that checks the happy flow, and all the code behind the happy flow is correct, that test should pass every time it runs on the same code version. That is, if there is no bug in the software.

How to achieve repeatable results?

Well, to start with, run each test that you wrote more than three times. I usually run mine about 20 times around the time I finish writing them. But I also run them several times throughout the day, to see how they are impacted by the different activity going on in the environment where they run, and several times throughout the week, simply to have them running against different 'states' of the environment. If you are doing continuous delivery that should be no problem, as you can schedule periodic CI jobs to run them. Even if you don't, you can just pick them up and run them yourself.
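If you use TestNG, for example, one quick way to run a freshly written test many times locally is the invocationCount attribute (a small illustration, not from the original post):

```java
import org.testng.annotations.Test;

public class HappyFlowTest {

    // run this test 20 times in a row, to check that its result is repeatable
    @Test(invocationCount = 20)
    public void userCanCompleteCheckout() {
        // the actual test steps would go here
    }
}
```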

Doing that will easily reveal if there are any issues in your tests that you need to address. If you are doing front end testing for example, several test runs will highlight any timing issues and any tweaks you need to implement due to pages not loading at the same speed each time you interact with them.

When you run tests enough times, it will seem like each time they fail there is a different reason for the failure. Look at the failures one by one and see why they happen: is it a bug, is it an environment issue, or is it an inevitable delay or some similar event? If it's a bug, raise it with the team. If it's an environment issue, raise it with the relevant people. If it's just one of those timing issues (for example), update the test to handle the timing. Don't make the test pass no matter what; make it pass by addressing its bottlenecks.
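For the timing case, 'handling the timing' in a Selenium test usually means replacing fixed pauses with explicit waits. A minimal sketch, assuming Selenium 4's WebDriverWait and a hypothetical element id:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {

    // wait up to 10 seconds for the element to become visible,
    // instead of assuming the page loaded instantly
    public void waitForCheckoutButton(WebDriver driver) {
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("checkoutButton")));
    }
}
```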

The update might be tricky at times; you might need to do plenty of debugging and have lots of patience to get to a green test. But consider it a challenge, and once the challenge is solved, you will feel good about having created a reliable test that will be consistent across all its future runs.

The best thing about having a reliable test with repeatable results is that it will require less maintenance in the future, and it will not require any re-running, hence it will not add any extra run time when you are running a huge suite of tests at once (like when you are preparing for a release).

Therefore, if you see a test that does not pass at each run, when it should, don't hesitate to take a look at it and transform it into a beautifully reliable test. Managers are also happy to hear that you spend extra time writing a good test only once, as opposed to spending even more time whenever the test fails randomly, when it shouldn't.

Why you need to test your production environment
https://imalittletester.com/2018/01/16/why-you-need-to-test-your-production-environment/

It occurred to me lately, after chatting with some people from the testing community, that not everyone runs automated tests or does any kind of testing in the production environment. To me that seems a bit unnatural, since I have been doing it on all the projects I have worked on. So, here are a few thoughts that might convince you that you do need to run automated tests even in production:

when you have no production monitoring: there are cases when the project you are working on is very old, or when simply no production monitoring or tests are in place. When there is no production monitoring whatsoever, how exactly can you know when your software is not working properly? Usually it's when customers ring in, complaining about the issues they have using your software. But that late feedback can be avoided by creating an automation suite that runs periodically in your production environment. Having tests that run for the most important user flows will help you get early feedback that there is some kind of mishap in the production environment.

when you have production monitoring: it is very useful to have production monitoring. However, you need to keep in mind that monitoring usually detects hardware issues or load issues, not functionality issues. In many cases monitoring cannot determine that specific user scenarios are not working. It can tell you that the application is slow, that certain aspects of it will not work (if a dependency/service is down, for example), or that a user transaction is not going through. But it cannot tell you that the user could not increase the number of products in their shopping cart because the button was not working, or that they could not choose expedited shipping for a product because that dropdown did not work, and so on. Automated tests for user scenarios can detect issues that users have when trying to use your software and can give you insight into why certain functionalities are rarely or never used, even if you would expect them to be.

production environment architecture and setup are different from the ones in a test environment: how many times does a test environment correctly reflect the architecture or setup of the production environment? Possibly never. Test environments are usually the worst: not enough resources allocated, not the same configuration, not the same architecture, and so on. Therefore, many times, tests that run on a test environment would fail if instructed to run in production. That means a functionality will not work in production, because it was created and tested in an environment different from the one it was really meant for. Just because a piece of software works properly in one environment, that does not guarantee that it will run the same way in another one. Keep in mind there might be all kinds of settings in production that can make your software behave differently from the test environments.

number of active users is different from test environments: in a test environment you cannot realistically simulate the number of users active on your software during normal functioning hours. The load generated by users can also be a factor that leads to software degradation, so having automated tests running in production under normal and high load can identify issues caused by load.

one of your dependencies performs an unannounced release: many times the features you push to production depend on other features or libraries. If somebody who works on these dependencies changes something which affects your own features, production tests can detect that your features are not working properly anymore. This is useful when your dependencies change things without letting you know, and without you having the option to test these changes in a pre-production environment.

if your site is content managed: someone could change some areas of your product without you knowing, through a content management system (CMS), which can lead to broken pages or invalid or missing content. Having your automated tests run periodically will validate whether those areas still work properly, without you having to be aware of when the content changes are made.

you might have some flags that need to be set up in production: maybe you want to enable certain features or set some feature properties by enabling a flag somewhere. If you have automated tests running periodically, you can check whether the expected features were indeed enabled or not when they were supposed to be. Also, after enabling the flag, the automated tests will report a failure if for some reason the flag value was modified by mistake to a value it shouldn’t have.

regression when changes that should not affect a feature do affect that feature: sometimes the changes that are deployed to production are rather small, and before releasing you might not think you need to do a full regression. Production automated tests that run periodically can detect whether deploying these small changes affected the main, important flows.

One thing is sure: the set of tests that need to run in production is not the same set of tests that run in the test environments. Production tests are lighter and should cover only the main critical scenarios, not every test scenario you can think of for a given feature. They need to cover at least the happy flows, to make sure that the user can use the site according to its purpose.
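One way to keep such a lighter subset inside the same code base is to tag the critical tests, for example with TestNG groups (an illustration with hypothetical test and group names, not from the original post):

```java
import org.testng.annotations.Test;

public class CheckoutTests {

    // critical happy flow: part of the "production" group, so it also runs against production
    @Test(groups = {"production", "regression"})
    public void userCanCompleteCheckout() {
        // test steps
    }

    // edge case: only part of the full regression suite run in the test environments
    @Test(groups = {"regression"})
    public void checkoutFailsGracefullyForExpiredCard() {
        // test steps
    }
}
```

The production job would then run only the "production" group, while the full regression job runs everything.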

https://imalittletester.com/2017/12/20/1204/

Check out my article for TechBeacon on the top 5 Apache Commons utilities for automation engineers: https://techbeacon.com/5-best-apache-commons-utilities-automation-engineers
TestTalk on why automation is fun
https://imalittletester.com/2017/12/17/testtalk-on-why-automation-is-fun/

Check out the talk I had with Joe on why automation is fun and why you should also get into automation: https://joecolantonio.com/testtalks/183-test-automation-fun-corina-pip/

Happy listening!

My HUSTEF 2017 conference experience
https://imalittletester.com/2017/11/29/my-hustef-2017-conference-experience/

My last conference of the year recently wrapped up in beautiful Budapest. I took part, as a speaker, in two days filled with practical and useful advice, in a fantastic location.

The conference structure was different from any other that I previously attended. Each day we had one keynote in the morning, followed by short presentations by representatives of the sponsors: a sort of 5-minute lightning talks, mostly regarding the sponsors' endeavors. A fast-paced and entertaining way to start the morning.

Then, the tracks started. There were two parallel tracks at any given time, and talks were grouped by interest. For each interest, there were three 25-minute talks with a 5-minute break between them. To give you an idea, interests included: automation, security testing, delivery, process improvement and so on.

The days closed with another set of keynotes, followed, on day one, by the conference party, where there was street food, wine, a live band, and even trips to the nearby historical sites.

There were around 650 participants, so there was a lot of chatting and knowledge sharing going around. And everyone was very friendly and fun.

I would like to congratulate the organizers of the event. Everything was very well thought out and properly planned. Some of the highlights when it comes to the organizers include:
– the conference venue was amazing. Right near the Danube, below the Budapest Castle, inside a newly renovated building with impressive architecture. The night view of the whole area is simply wow.
– providing useful information to speakers, and being very warm and welcoming towards them was great, so thank you for that
– having an official app where information was distributed to the participants, and where participants could share their thoughts and photos taken at the event
– one thing about the app that I loved was that there were polls before each set of talks, where participants would choose which set of talks they would go to, before they started. The voting determined whether a set of talks would be held in the larger or in the smaller room. This way they could accommodate the right number of people in the properly sized room. Something I never saw before at a conference, but found very useful.
– also integrated in the app was the Sli.do module, which allowed questions to be addressed to the speaker during his or her talk. The speaker was not aware of the questions until the end of the talk, in the Q&A timeframe. But this way more questions could be addressed during that timeframe, since they were already available, and there was no need to have people running across the room, passing microphones to whomever raised their hand with a question. A very useful approach.

To wrap it up, I can only say: don't hesitate to attend this conference next year. It is fun, you learn things in a beautiful setting, and the prices are very affordable.

Check that the value you think you typed into a field has actually been typed correctly
https://imalittletester.com/2017/10/30/weekly-automation-1-check-that-the-value-you-think-you-typed-into-a-field-has-actually-been-typed-correctly/

You are writing some automated tests with Selenium that require you to fill in some text fields in a form. You are pretty confident you typed the values you expected to type, into the fields you expected to type into. But here are just three reasons why you should write some code that checks you actually wrote what you thought, where you thought, before submitting the form you are trying to fill in.

Before jumping to the “why we should check those values”, let me give you a few short tips that will help you when working with text fields:

before typing into a field, make sure you clear the field beforehand. That way, when you start typing, you are more confident that you don't have any residual text present along with what you want to type. This is done by calling textFieldWebElement.clear()

checking what was typed into a field is normally done by checking the “value” attribute of the text field. Therefore, if you want to read the value of the text written into the field, you should do something like: assertEquals(fieldWebElement.getAttribute("value"), expectedText);
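Putting these two tips together, a small helper (the class and method names here are my own, not from the original post) can clear the field, type the text and immediately verify it:

```java
import org.openqa.selenium.WebElement;

import static org.testng.Assert.assertEquals;

public class TextFieldUtils {

    // clears the field, types the text, then checks that the field really contains what was typed
    public static void typeAndVerify(WebElement textField, String text) {
        textField.clear();
        textField.sendKeys(text);
        assertEquals(textField.getAttribute("value"), text,
                "The text field does not contain the value that was just typed");
    }
}
```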

Now, why would you want to make sure you typed what you think you typed:

Many times your fields will come with some kind of suggestion that tells the user what pattern should be used for typing into that field. Sometimes this suggestion is displayed right inside the field, maybe as grayed-out placeholder text. In this case, in order to make sure that the text you typed does not include the suggestion too, make sure to check the value of the text from inside the field.

If you are mistaken about what you typed into a field, and in fact you typed something else, your test is not relevant because you are not sending the data you thought you were sending into the system. So you mistakenly think that for a particular String, the system works fine, but in fact that String never made it into the system.

When you need to type into many text fields to fill in all the data in a form, you sometimes just copy-paste the sendKeys command (the one that writes text inside a field) and then rename the fields. For example, say you want to fill in First Name and Last Name. If you write the code for filling in the First Name, then copy-paste that code and forget to change the second occurrence to use the WebElement corresponding to the Last Name, you will in fact never type into the Last Name field. You will just type into the First Name field twice, replacing what you typed initially with the text from the second occurrence of the code. Even worse, if the application under test has these as mandatory fields and there is a bug in the system, you might not realize that the “mandatory” validation has failed: you thought you typed into all mandatory fields, but you did not, and the system did not prompt an error.

My TestCon Vilnius 2017 experience
https://imalittletester.com/2017/10/25/my-testcon-vilnius-2017-experience/

‘Twas a not so warm day in October, that in Vilnius awesome testers united to talk about their crafts. Some crushed candies, some with machines worked, others Selenium magic performed. And this is how it all unfolded.

The TestCon event consisted of one day of workshops, followed by one day of actual conference (talks). I was only present for the talks, and I was pleasantly surprised to see a lot of attendees.
The location was quite fun: a cinema inside a shopping mall. We basically had the cinema all to ourselves. It was quite interesting to give a talk while standing in front of the huge screen, with attendees sitting comfortably in the cinema chairs, some even having grabbed some popcorn. The projection screen displayed the speaker for everyone to see, even in the back, as well as the speaker's laptop screen. The speaker was a bit unaware of how the people in the room felt about the talk, since the projectors were focused on the stage and the rest of the room was in semi-darkness. But I am pretty sure not many people fell asleep even with this setup. That's because the talks I saw were awesome. And there were plenty of questions and prizes to go around in the room, for those who decided not to be shy and just grab the microphone and ask what was on their mind.

The first thing I have to mention is that, because I had a very late flight, I was not able to wake up in time to see the first talk of the day, the keynote. Nor did I see the last keynote. But still, great content I saw. I am happy to say that I had to write down all kinds of things to do research on after the conference.
The first talk that I saw was by Eddy Bruin, on knowing your customer. This was an interesting perspective on how to test your products by thinking as the user. Some key points from the talk: using personas to simulate user groups and test from their perspective; doing usability testing to understand how easy it is for customers to use the software you are developing, to see whether they can find what they need easily; having super review sessions, where anyone in the company can come and take a look at what your team developed, on all kinds of devices, in order to provide feedback; and, from time to time, participating in helpdesk support, to get first-hand feedback from the customers.

Next up was a talk on how AI is used for testing the famous Candy Crush game. Alexander Andelkovic showed how neural networks are used to create bots that play the game, trying to simulate a regular user's gameplay as much as possible. Some of the key points from this talk were, for me: how bots are evolving, learning, trying different moves in the game, and mixing together once they are stuck in their learning process; and to do some reading on NeuroEvolution of Augmenting Topologies (quite a mouthful) and see how that can be applied to other projects.

Next I went to the talk on accessibility testing by Jurij Nesvat. The talk started with some introduction, examples and terminology, and some examples of common accessibility issues. The thing that I noted (having previously done this type of testing) was that apparently there is a concept of accessibility labs. In such places, people wear all kinds of devices that restrict their normal abilities, like muscle movement or vision, in order to simulate how a person with disabilities would use the products under test.

Right after lunch and my talk, I went to see Gil Tayar talk about how to do automation testing for front end components, without using the browser. Quite an interesting approach: using Node and JSDOM to simulate the DOM and the browser, and interacting with this DOM. The promise is that you can even simulate a click event without actually clicking inside a browser window. I am really looking forward to the slides on this one, as it is something I would probably like to use in the future. Oh, and the talk ended with a poem that summarized what was shown.

The last talk I managed to see was on machine learning used for automation testing, by Dzmitry Humianiuk. Probably the second slide of this talk showed a ton of mathematical equations, which hinted at how complex this activity can be. The talk focused on how the approach is used by the team that develops the Report Portal tool.

I have to give a big thank you to the people who organized this event, because: they were very kind and helpful in getting us, the speakers, to the event; all our costs were covered by them; and we also got a very cute thank-you gift. The event itself was very fun and the talks very useful. Don't hesitate to attend this conference: the cost is very decent and you get the chance to learn and to meet a lot of people from the testing community. Aaaaaand… there is ice cream.

Test design: write tests with proper console output to easily identify failure reasons
https://imalittletester.com/2017/10/16/test-design-write-tests-with-proper-console-output-to-easily-identify-failure-reasons/

When automated tests are running, they are either running on your own machine (when you write them or run them to check something) or in your CI.

When you run the test on your machine, if there are failures, it might be easy for you to look at what is running (if you have some visual tests that interact with either browsers or apps on your machine). You can just rerun a failed test and visually inspect for failure reasons. But if tests are running on a CI machine, visual inspection is either very difficult or even impossible. You might not have access to connect to that machine, or to see how the tests are being run.

When you have a failure, the first thing you will want to do is try to reproduce the failure, to understand what caused it. Many times you want to do that manually. Or you just want to understand the data that was used during test execution. Without proper output from the tests, you can see at what line of code the test failed, but it is more difficult to understand the context in which it failed.

Therefore, proper console output about the state of the test run is advisable. Although developers would argue that writing plenty of System.outs is messy and that you should not write messages to the console, in the case of automated tests it can become very useful and helpful.

When would you want to write something to the console within the test? Here are just some examples.

Note: below I might say “print”, “write to the console” or similar, but in fact they all mean writing a System.out line of code. Of course, you might choose to use another means of writing to the console, possibly with the help of an external library. But you get the point: the result should be relevant text in the console.

Also, whenever a failure occurs inside an assertEquals statement, console output is not needed, as both the actual and the expected values are displayed when the failure occurs. Think about the need for output more in cases where assertTrue or waits are used, for example. When no other clues about the test failure are available, you should provide some yourself.
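For example (an illustrative sketch, not from the original post), a bare assertTrue only reports that it expected true but found false, so printing the relevant context right before it makes the failure easier to diagnose:

```java
import static org.testng.Assert.assertTrue;

public class OutputExample {

    // hypothetical check: verify the user landed on the account page after logging in
    public void checkLoginRedirect(String currentUrl) {
        // assertTrue alone would only report "expected [true] but found [false]",
        // so print the relevant context before the check
        System.out.println("Current url after clicking the login button: " + currentUrl);
        assertTrue(currentUrl.contains("/account"));
    }
}
```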

Whenever you are using randomly generated data: print it, so that you can understand what values were used in the tests. Since random data is, well, different every time a test runs, knowing exactly the values generated for the particular test run that failed can help you reproduce the full scenario the test was covering (a short sketch illustrating this follows after these examples).

When your test expects a certain url to be loaded in the browser, but another one is actually loaded: you need to be aware of what url really loaded, so that you understand whether an error was thrown or the initial page did not trigger the correct event that would cause the new page to load.

When you expect an image source to be a specific one, but it isn't: print the currently configured image source.

When you expected the button you wanted to click on to have a certain label, but it has a different one: print the label, to understand whether, for some reason, the selector behind it is not the one you thought and in fact has a different purpose.

Before a more complex step is performed, you might want to print something like: “starting this very complex step”, just so you know the test is not hanging if the step takes a long time to be processed. Many times when you run a test in the CI, you will just look at the CI output, but you will not see anything, and you won't be aware that the complex step is in progress. You will have no clue about what step the test has reached. Also, when it finishes executing, you could write a “this complex step finished successfully” message.

If you select a random value from a dropdown in each test: know which value was selected by printing it.
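As a sketch of the random-data examples above (method and variable names are hypothetical, not from the original post):

```java
import java.util.List;
import java.util.Random;

public class RandomDataExample {

    private static final Random RANDOM = new Random();

    // pick a random dropdown option and print it, so that a failing run can be reproduced with the same value
    public static String pickRandomOption(List<String> dropdownOptions) {
        String selected = dropdownOptions.get(RANDOM.nextInt(dropdownOptions.size()));
        System.out.println("Randomly selected dropdown option: " + selected);
        return selected;
    }
}
```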

Well, these are just some examples. But the point is: whenever you feel some of the data being used in the tests would be useful to whoever is looking at the results, print it to the console. Just make sure you don't pollute the test output with information that is not relevant or useful.