When I was searching for different types of testing, I found one that I had never heard of: documentation testing. I was curious, since the only documentation I knew was the kind that testers/developers add at the beginning of methods/classes/test cases to give others basic information about them. I wanted to know more about this type of testing, so I chose a basic introduction to documentation testing to read. Below is the URL of the post.

In this blog, Kirti Mansabdar explained the definition of documentation testing, some common documents that should be used and maintained regularly, and the importance of documents. According to her, documentation testing was a non-functional type of software testing. Poor quality documentation reflected badly on the quality of the product and the vendor. She also provided some advantages and disadvantages of preparing documentation. Some of the advantages were making project testing easy and systematic, saving time and cost, maintaining a good relationship with the client, keeping the client satisfied, etc.

Kirti said that in many cases, projects could be rejected in the proposal/acceptance phase for lack of documentation. This point deserved attention, since being rejected just because of a lack of documentation seemed like a waste. In the beginning of the blog, she mentioned that one of the reasons people did not talk much about documentation in software testing was that they did not want to waste time preparing documents. They wanted to spend all their time on the more functional aspects of their jobs. I disagree with them. In my opinion, even though testers knew what they were doing, their projects would be presented to and used by people who knew little or nothing about them. Without documents, it would be hard for others to fully understand what the projects were and how useful they were.

Among the important documents that Kirti recommended using and maintaining regularly, there were some that I had never heard of, had never created, and wanted to try when I had a chance, like the Test Plan Document, the Weekly Status Report, and the User Acceptance Document. The Test Plan Document included the testing schedule, team structure, H/W-S/W specification, environment specifications, risk analysis, and scope of testing, among other points. The Weekly Status Report included the status of all bugs and requirements, which could help in improving the quality of the product. This one would be useful for me, since I rarely report anything weekly. It was also good to know that these documents could be used as evidence for a delay when there were problems regarding product delivery.

“Did my tests cover all the code?” – this was a question that often popped up in testers’ minds when they were writing their tests. Code coverage could answer this question for them: it helped testers understand how much of their code was tested. Since I had not used any code coverage tools before, I thought that it would be a good idea to start learning about them through an introduction post. Below is the URL of the post.

In this post, Sten Pittet, who had been in the software business for ten years in various roles from development to product management, introduced the definition of code coverage, the common metrics in coverage reports, and a tip for choosing the right tool for different projects. He also discussed what percentage of coverage testers should aim for. Moreover, he thought that testers should focus on unit testing first, use coverage reports to identify critical misses in testing, and make code coverage part of their continuous integration flow.

Sten mentioned a few metrics that testers should pay attention to when reading coverage reports. They were function coverage, statement coverage, branch coverage, condition coverage, and line coverage. Function coverage showed how many of the functions defined had been called. Statement coverage showed how many of the statements in the program had been executed. Branch coverage showed how many branches of the control structures (such as if statements) had been executed, while condition coverage showed how many of the Boolean sub-expressions had been tested for both a true and a false value. Line coverage showed how many lines of source code had been tested. The example given in the post helped me understand the terms more easily.
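The terms became clearer to me with a toy example. The sketch below is my own hypothetical illustration, not from Sten's post, of why line coverage and branch coverage can disagree:

```python
# A minimal sketch of the difference between line and branch coverage.
# The function and test below are hypothetical examples.

def absolute(n):
    result = n
    if n < 0:          # branch: the "true" side runs only when n is negative
        result = -n
    return result

# This single test executes every line of the function (100% line coverage)...
assert absolute(-5) == 5

# ...but full branch coverage also requires the "false" side of the if,
# where the body is skipped entirely:
assert absolute(3) == 3
```

A coverage tool would report the first assertion alone as full line coverage but only partial branch coverage, which is exactly the kind of gap these metrics are meant to expose.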

In Sten’s opinion, 80% code coverage was a good goal to aim for. Trying to reach higher coverage might turn out to be costly while not necessarily producing enough benefit. He also said that it was normal to have low coverage on the first run, and testers should not feel pressured to reach 80% coverage right away. To be honest, I was really surprised that he only recommended 80% coverage. But when I thought harder, it made sense that chasing higher coverage might be costlier and less beneficial, since real-life projects were usually much bigger than school projects. He also highlighted that testers should write tests based on the business requirements of the applications rather than write tests that hit every line of the code.

Furthermore, Sten emphasized that good coverage did not equal good tests. Code coverage tools could help testers understand where they should focus next, but they would not tell them whether their existing tests were robust enough for unexpected behaviors. Therefore, besides achieving great coverage, testers should have a good, robust test suite and verify the integrity of the system. I agreed with him. Looking at his example, I could see clearly how bad it would be if we only relied on the tools to write tests. Besides the information about code coverage, I could apply his advice whenever I wrote tests.

This week, I chose Test-Driven Development (TDD) as my topic. After reading many articles, I realized that I only knew a little bit about it, namely that I needed to create tests before writing functional code. Therefore, I thought that I should research it more through an introduction article. I chose the article below because it had detailed information about TDD. Below is the URL of the article.

In this article, Scott Ambler explained the definition of TDD, the two levels of TDD, the advantages and disadvantages of TDD, the importance of documentation, Test-Driven Database Development, Agile Model-Driven Development, and the myths and misconceptions about TDD. He also provided the adoption rates of TDD and a list of TDD tools.

The basic idea of TDD is dividing the coding process into smaller steps: writing one test first for a small bit of corresponding functional code at a time. According to Scott, taking the TDD approach required great discipline, because it was easy to “slip” and write functional code without first writing a new test. I agreed with him, since I still sometimes forgot to create tests first when coding. There were two levels of TDD: Acceptance TDD (ATDD) and Developer TDD. The goal of ATDD, also known as Behavior-Driven Development (BDD), was to specify detailed, executable requirements for the solution on a just-in-time basis. Developers only needed to write an acceptance test and enough functional code to fulfill that test. The goal of developer TDD, often simply called TDD, was to specify a detailed, executable design for the solution on a just-in-time basis. Developers only needed to write a developer test (unit test) and enough functional code to fulfill the test. Developers could apply both levels or just one of them in their projects. I usually applied developer TDD in my projects; therefore, I would consider using the other level as well.
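To make the cycle concrete, here is a minimal sketch of one developer-TDD iteration using Python's built-in unittest; the Stack class is a hypothetical example of mine, not from the article:

```python
# One developer-TDD cycle, sketched with Python's built-in unittest.
# The Stack class here is a hypothetical example.
import unittest

# Step 1 (red): write the test first. Run it now and it fails,
# because Stack does not exist yet.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

# Step 2 (green): write just enough functional code to make the test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Step 3: refactor while keeping the test green, then repeat with a new test.
```

The discipline Scott describes lives in step 1: resisting the urge to write `Stack` before its test exists.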

According to Scott, there were three challenges that made Test-Driven Database Development (TDDD) not work as smoothly as application TDD: the lack of tool support, the lack of motivation for taking a TDD approach, and the popularity of the model-driven approach. I understood the challenges that TDDD faced when it was a “new” approach in many data professionals’ eyes. In my opinion, if we could overcome the lack of tool support, the situation would be better. With enough tool support, we could encourage people to try this approach.

Scott also said that TDD worked for both small projects and “real” projects. He used his experience and Beck’s report to support his statement. Personally, I doubted that I could even use it for “real” projects, since the adoption rates of TDD (mentioned in the article) were not high. Moreover, most of my previous partners did not even use TDD for our small projects. Therefore, it might not be useful for me right now, but I would consider it in the future.

Since last week’s blog did not have a lot of information about Record and Replay (or Record and Playback), I did not know whether I should use it for Graphical User Interface (GUI) testing or not. Therefore, I decided that I should learn more about it and its advantages/disadvantages. After reading blogs and articles related to Record and Playback, I chose this particular article because it clearly stated the problems testers could have when using Record and Playback tools and the scenarios in which Record and Playback could be useful. Below is the URL of the blog:

In this article, Troy T. Walsh, a principal consultant at Magenic in St. Louis Park, shared his thoughts on Record and Playback as a trap that many projects fell into. He laid out the disadvantages of these tools, for example high maintenance cost, limited test coverage, poor understanding of the tools, poor integration, limited features, high price, and vendor lock-in. He also gave some scenarios in which Record and Playback might be a good option, like learning the underlying automation framework from the code it generates, load testing, and proving a concept.

According to Troy, Record and Playback had limited test coverage. Since it followed the exact steps the testers recorded, it was limited to testing against the user interface. Therefore, it made sense that Record and Playback was recommended for GUI testing last week; for test automation, however, it did not have great value. He also thought that most testers had an incomplete understanding of what exactly these tools were doing, which could lead to huge gaps in test coverage. In my opinion, this disadvantage could be fixed if the testers studied the tools before using them. Furthermore, Record and Playback tools lacked features that are important for test automation, like remote execution, parallelization, configuration, data driving, and test management integration, and to use feature-rich options, users needed to pay a lot of money every year. Besides those disadvantages, Record and Playback could be used to study the underlying automation framework of the code by recording the steps and observing what got generated.

After reading the advantages and disadvantages of Record and Playback, I could see that it was not a good tool for test automation, since it was limited in many aspects: high price, high maintenance cost, limited features, limited test coverage, etc. However, in my opinion, it was good enough to be a GUI testing tool. Since GUI testing checked whether the expected executions, the error messages, the GUI elements’ layout, the font, the color, etc. were correct, the testers only needed to “record” the steps that the users would take and “playback” to see the results. Therefore, I would try Record and Playback for testing GUIs but not for test automation.
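The record/playback mechanism itself can be sketched in a few lines of plain Python. Everything below (the Recorder and the fake login screen) is hypothetical, just to show why playback only ever covers the exact path that was recorded:

```python
# A rough, hypothetical sketch of the record-and-playback idea.
class Recorder:
    def __init__(self):
        self.steps = []          # recorded (action, args) pairs

    def record(self, action, *args):
        self.steps.append((action, args))

    def playback(self, app):
        # Playback repeats the exact recorded steps, which is why the
        # resulting coverage is limited to the path the tester clicked through.
        for action, args in self.steps:
            getattr(app, action)(*args)

class FakeLoginScreen:
    """Stands in for a real GUI during this sketch."""
    def __init__(self):
        self.log = []

    def type_text(self, field, text):
        self.log.append(f"type {text} into {field}")

    def click(self, button):
        self.log.append(f"click {button}")

# "Record" a session once...
rec = Recorder()
rec.record("type_text", "username", "alice")
rec.record("click", "login")

# ...then "playback" against the screen whenever we want to retest.
screen = FakeLoginScreen()
rec.playback(screen)
print(screen.log)
```

Real tools capture actual UI events instead of method calls, but the shape is the same: a fixed script of steps replayed verbatim.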

When I read the definition of one of the course topics, Test Automation, on Wikipedia, Graphical User Interface (GUI) testing made me curious, since I hadn’t tested a GUI “formally” before. In the past, I only tested the basics, like the behaviors of the components, the expected outputs, and the layout of the GUI. In order to better understand what I should do when testing a GUI, I chose a blog that had a complete guide to GUI testing. I hoped that after reading this blog, I could learn how to test a GUI correctly. Below is the URL of the blog.

This blog explained the definition of GUI and GUI testing, and the purpose and importance of GUI testing. It also provided a checklist to ensure detailed GUI testing: checking all the GUI elements, the expected executions, correctly displayed error messages, the demarcation, the font, the alignment, the color, the pictures’ clarity and alignment, and the GUI elements’ positions for different screen resolutions. Moreover, it mentioned the three approaches to GUI testing: Manual Based, Record and Replay, and Model Based. It also included a list of test cases and testing tools for GUI testing and its challenges.

After reading the checklist, I recognized that there was one test case I had never paid attention to before: checking the positioning of GUI elements for different screen resolutions. This meant that neither images nor content should shrink, crop, or overlap when the user resized the screen. When I thought about it, it really made sense that users probably would not use an application again if it was hard to read the content or see the pictures just because their phones/PCs/laptops did not have the same screen resolution as the default one the developers set up. Therefore, it was important to remember to test the GUI elements’ positions for different screen resolutions. I would remember this and apply it the next time I tested a GUI.
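As a toy illustration of how part of this check could be automated, here is a hypothetical helper of my own (not from the blog) that verifies two GUI elements' bounding boxes do not overlap once a layout has been computed for a given resolution:

```python
# A hypothetical sketch of one automated layout check: verify that two GUI
# elements' bounding boxes do not overlap at a given screen resolution.
def boxes_overlap(a, b):
    # Each box is (x, y, width, height) in pixels.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Imagine these boxes came from laying out the screen at a 320px-wide
# resolution; a real test would repeat this for several resolutions.
header = (0, 0, 320, 50)
body = (0, 50, 320, 400)
assert not boxes_overlap(header, body)   # content does not cover the header
```

A real GUI framework would supply the boxes itself, but the assertion at the end is the essence of the test case from the checklist.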

About the approaches to GUI testing, I had only done manual based testing and model based testing. The Record and Replay approach was interesting in my opinion: I could “record” all the test steps the first time, then “replay” them whenever I wanted to test the application again. The blog did not provide a lot of information about this approach besides a brief introduction and a link to one of the tools used for it, called QTP. Since I had not used it before, I did not know its advantages and disadvantages. Therefore, I would have to research and try it first before deciding whether to use it in the future.

When I read the blog entry about unit testing last week, the author mentioned integration testing as a tool to detect regressions. Since the topic of that entry was unit testing, I did not have a chance to research this type of testing, so I decided to choose integration testing as the topic for this week’s entry. Because I did not know much about it, I found a blog entry with a detailed introduction to integration testing. It was written by Shilpa C. Roy, a member of the STH team, who had been working in the software testing field for the past nine years. Below is the URL of the blog entry:

In this blog, Shilpa introduced the definition of integration testing, its approaches, and its purpose, along with the steps to create an integration test. In Shilpa’s opinion, integration testing was a “level” of testing rather than a “type” of testing. She also believed that the concept of integration testing could be applied not only in the White Box technique but also in the Black Box technique. Besides the two main approaches to this testing, Bottom Up and Top Down, she also mentioned another approach called “Sandwich Testing”, which combined the features of both. Moreover, Shilpa gave an example of how integration testing could be applied in the Black Box technique.

I thought that the “third” approach, Sandwich Testing, was interesting. I had always thought I could only test either top down or bottom up, but this really changed my way of thinking. By starting at the middle layer and moving simultaneously upward and downward, the job was divided into two smaller parts, which was more efficient. Unfortunately, this technique was complex and required more people with specific skill sets, so I really doubted whether I could use it in the future. But the general idea that I could start the process at the middle layer would be useful.
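For a sense of what one top-down slice of integration might look like in code, here is a hypothetical sketch of my own (not from Shilpa's blog): the upper layer is integrated first, and the not-yet-integrated lower module is replaced with a stub so the test can run.

```python
# A hypothetical top-down integration sketch: the real payment module is not
# integrated yet, so a stub stands in for it.
class PaymentStub:
    """Canned stand-in for the lower-level payment module."""
    def charge(self, amount):
        return "approved"        # fixed response instead of a real charge

class OrderService:
    """The upper layer under integration."""
    def __init__(self, payment):
        self.payment = payment

    def place_order(self, amount):
        status = self.payment.charge(amount)
        return f"order {status}"

# Integrate and exercise the top layer against the stub; later, the stub is
# swapped for the real module and the same test is rerun.
service = OrderService(PaymentStub())
assert service.place_order(100) == "order approved"
```

Bottom-up integration mirrors this with drivers instead of stubs, and sandwich testing runs both kinds of slice at once from the middle layer outward.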

According to Shilpa, validating XML files could be considered part of integration testing for a product whose architecture was all we knew. Since the users’ inputs would be converted into an XML format and then transferred from one module to another, validating the XML files could test the behavior of the product. In my opinion, this example made it easy to understand how integration testing could be applied in the Black Box technique. I had always thought that it was only available for the White Box technique, but this example proved me wrong. Now that I knew about it, I could try to apply it the next time I encountered a similar scenario.
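The XML-validation idea might look something like the sketch below; the tag names and helper functions are hypothetical, just to show the black-box pattern of checking the XML one module hands to the next without inspecting either module's internals:

```python
# A hypothetical sketch of black-box integration via XML validation,
# using Python's standard library XML parser.
import xml.etree.ElementTree as ET

def user_input_to_xml(name, email):
    # Imagine this is the module that converts user input into XML
    # before handing it to the next module.
    return f"<user><name>{name}</name><email>{email}</email></user>"

def validate_user_xml(xml_text):
    # Black-box check: parse the XML and verify the fields the receiving
    # module expects are present, without looking at how it was produced.
    root = ET.fromstring(xml_text)
    return (root.tag == "user"
            and root.find("name") is not None
            and root.find("email") is not None)

assert validate_user_xml(user_input_to_xml("Alice", "alice@example.com"))
```

If the producing module ever changed its output format, this check would fail at the module boundary, which is exactly where integration testing wants to catch it.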