Q&A about the book Common System and Software Testing Pitfalls

The book Common System and Software Testing Pitfalls by Donald Firesmith describes 92 pitfalls that make testing less efficient and less effective. The descriptions explain what testers and stakeholders can do to avoid falling into the pitfalls and how to deal with the consequences when they have fallen into them.

Donald also maintains the website Common Testing Pitfalls, where additions and modifications to the pitfalls taxonomy are published.

InfoQ interviewed Donald about finding and preventing defects, effectiveness and efficiency of testing, testing mindset, integrating testing into the overall development process, communicating about testing, and how you can take part in extending and improving the repository of testing pitfalls.

InfoQ: What made you decide to write a book on testing pitfalls?

Donald: Over the years, I have taken part in a great many independent technical assessments of system and software development projects, and part of any comprehensive assessment includes an evaluation of the development organization’s testing program. One cannot do this very many times before noticing that the same mistakes are being made over and over again and that no project seems immune. Frustrated by this widespread ongoing lack of progress, I developed a popular short presentation that rapidly turned into a series of increasingly complete conference presentations and tutorials. Since a great number of very good “how-to” testing books have not prevented projects from falling into commonly occurring testing pitfalls, I eventually decided that a “how-not-to” book of testing anti-patterns would be the best way to make the most people aware of these common pitfalls, show them how to avoid falling into the pitfalls, and show them how to dig themselves out of any they have been unfortunate enough to fall into.

InfoQ: What in your opinion makes testing so important?

Donald: Being human, system and software developers make mistakes, which is why all systems and software applications contain defects. The many different types of testing are some of the most powerful ways of uncovering these defects so that they can be fixed. Testing also provides evidence of a system or application’s quality, its fitness for purpose, and its readiness for delivery and being placed into operation. Testing can even be used to prevent defects in the first place, such as when test-driven development (TDD) is used. Finally, a great deal of a project’s resources (budget, schedule, staffing, and facilities) are devoted to testing. Thus, improving testing can have a huge positive impact on cost, time to market, quality, and even the amount of functionality that can be delivered.

InfoQ: What about other practices besides testing for finding and preventing defects, are they also important?

Donald: Testing is only one of the ways that can be used to verify and validate a system or software application. Other classic approaches include analysis, demonstration, and inspection. For example, static analysis is a very effective and efficient way of identifying certain types of software bugs and security vulnerabilities. Sometimes testing is not the most effective method for verifying certain requirements, and certain perfectly good requirements are verifiable but not testable. Finally, as summarized in Capers Jones, “Software Quality and Defect Removal Efficiency (DRE)” (Namcook Analytics LLC, 23 June 2013) and Steve McConnell, Code Complete: A Practical Handbook of Software Construction, Second Edition (Microsoft Press, 2004), research shows that the highest quality is achieved when multiple verification methods (including multiple types of testing) are used. Thus, testing should be viewed as absolutely necessary but not sufficient when it comes to system and software development.
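The Capers Jones reference cited above defines defect removal efficiency (DRE) as the fraction of all defects removed before delivery. A small sketch of the calculation (the defect counts below are illustrative, not figures from the paper):

```python
# Defect Removal Efficiency (DRE) as defined in the Capers Jones
# reference: pre-release defects removed divided by total defects
# (pre-release plus those found after release).

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Return DRE as a fraction between 0.0 and 1.0."""
    total = found_before_release + found_after_release
    if total == 0:
        raise ValueError("no defects recorded")
    return found_before_release / total

# Illustrative numbers: a project that removed 950 defects via
# reviews, static analysis, and testing, with 50 escaping to the field.
print(f"{defect_removal_efficiency(950, 50):.0%}")  # prints 95%
```

Combining several verification methods raises the numerator (defects caught before release), which is why multi-method projects tend to show the highest DRE.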

InfoQ: In the book you state that "Unfortunately, people have come up with many ways in which to unintentionally make testing less effective, less efficient, and more frustrating to perform." Can you elaborate what in your opinion caused this to happen? What can be done to make testing more effective and efficient, and less frustrating to perform?

Donald: There are many reasons why testing mistakes are commonly made. A great deal of testing is managed and performed by people with insufficient testing training, experience, and expertise. Many do not realize that testing is an engineering discipline that requires just as much knowledge as software design and implementation. Too little communication regarding testing takes place. Testing-related lessons learned on previous projects are often not applied to current projects. And naturally, because many managers, developers, and testers are unaware of the existence and dangers of these testing pitfalls, testing mistakes are made, projects fall into the associated testing pitfalls, the resulting negative consequences occur, and the projects must be dug out of the pitfalls.

InfoQ: Let's explore some of the pitfalls that are described in your book. One is called "Wrong testing mindset". Can you elaborate what it is, and how to deal with it?

Donald: Many managers, developers, and other stakeholders hold mistaken beliefs concerning testing and what it can achieve. The primary purpose of testing is to uncover defects so that they can be fixed. Testing also provides evidence to help determine whether the system or software is ready for delivery and being placed into production and operation. Finally, certain testing approaches such as test driven development (TDD) and the testing of executable requirements, architecture, and design models can help prevent defects in the first place.
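As a minimal hypothetical sketch of how TDD prevents defects (the function and its behavior are invented for illustration, not taken from the book): the tests are written first and encode the intended behavior, so a defect such as silently accepting an out-of-range discount is caught before the implementation is considered done.

```python
# TDD-style sketch: the tests below were (conceptually) written first,
# and the implementation exists only to make them pass.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject out-of-range percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_apply_discount():
    assert apply_discount(80.0, 25) == 60.0    # typical case
    assert apply_discount(19.99, 0) == 19.99   # boundary: no discount
    try:
        apply_discount(80.0, 150)              # defect the tests prevent
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
```

Because the test specifying the invalid-percentage case exists before the code, the defect never enters the implementation, which is the preventive effect described above.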

What testing cannot do is show that the system or software works as it should under more than a tiny fraction of conditions and inputs because testing cannot be exhaustive. Testing certainly cannot show, let alone prove, that the item under test is defect free. Passing testing depends not just on the quality of the system or software, but also on the quality and completeness of testing. This is why even the best testing tends to find less than 90% of all defects. For this reason, quality must be built into the system and testing must be performed from the very beginning, and quality is everybody’s responsibility, not just the testers’. The best way to combat these mistaken mindsets is increased training and communication between testers and testing’s stakeholders. I naturally hope that my repository of testing pitfalls will also help people develop more realistic and helpful testing mindsets.

InfoQ: Another pitfall is "Testing and engineering processes not integrated". Can multidisciplinary agile teams help to address this?

Donald: Far too often, testing is considered an engineering activity that happens separate from, in parallel to, and after the main engineering activities of development. This separation can easily result in separate and inconsistent processes and schedules.

Agile cross-functional teams naturally integrate testing with requirements, design, implementation, and integration activities in evolutionary (i.e., iterative, incremental, parallel, and time-boxed) development cycles. However, agile is not by any means the only or even a complete approach to avoiding this pitfall. Not every activity can be performed by small agile teams, and there is still a need for independent and highly specialized testing that goes beyond the skillsets of almost all agile developers. Additionally, research shows that the best quality is achieved by a combination of agile and traditional approaches, and testing needs to be integrated with both.

InfoQ: One problem that I often see is a lack of effective communication in teams and between teams and their stakeholders. You also mention this in the "inadequate communication concerning testing" pitfall. Why does this happen and what can you do to avoid it?

Donald: The people involved with system and software development are often separated into silos with little cross silo communication. They are also extremely busy with little time for writing or reading documentation, and this documentation is often developed by people with poor writing skills and little time or inclination to produce high quality documents. And defects do not just hide within the software; they lurk in all work products from requirements right down to test documentation. While agile methods attempt to address this problem with face-to-face communication and “self-documenting” code, the problem is that the development and maintenance of large complex systems and software requires a great deal of communication between people who can spend little or no time in direct communication. If nothing else, people move on during the 10-20 year lifespan of many systems.

Unfortunately, there is no single silver bullet for avoiding this pitfall; it must be addressed with an ongoing, many-pronged approach. For example, requirements management tools become the repositories of living requirements, while requirements specification documents become merely [partial] snapshots of the requirements at specific times. Wikis can replace paper test strategies and plans (not to mention architecture and design documents). Frequent and regular training can occur in the form of lunch-and-learns, newsletters, and official mentoring programs. Representative testers can be made members of the management, requirements, architecture, and other teams and take an active part in their meetings. Test communication can be made an important official responsibility of testers. And once again, my book and my repository of additions and modifications to the book, my numerous presentations at conferences, and this interview are ways I am using to communicate the testing pitfalls that people need to be aware of, avoid, and mitigate.

InfoQ: Your research on Common Testing Pitfalls is a work in progress. If people want to be involved and share their testing pitfalls, how can they do that?

Donald: The original book documented 92 testing pitfalls organized into 14 categories. Since the book manuscript was baselined in the middle of last November, those numbers have grown to 139 pitfalls in 20 categories. Additionally, people are recommending additional pitfall information such as characteristic symptoms, potential negative consequences, causes, and recommendations.

People who want to get involved can email me at dgf@sei.cmu.edu with their recommended additions and changes. This certainly applies to updates to existing pitfalls, but especially applies to the new pitfalls and pitfall categories. Unlike the contents of the book, these completely new additions have not had the benefit of being reviewed by a large international group of testing subject matter experts, and are thus undoubtedly incomplete and contain errors. I will carefully consider all recommendations and comments for inclusion in the pitfall repository and the second edition of the book. As an example of such inputs, I have recently added two new categories for pitfalls related to Testing-as-a-Service (TaaS) and pitfalls involving misuses of the testing pitfalls themselves based on inputs I received at the Next Generation Testing conference I recently chaired. In addition to email, I plan to start several new discussions with major LinkedIn testing groups and will soon be asking for volunteers to technically review the book’s second edition.

About the Book Author

Donald Firesmith (SEI Webpage) is a principal engineer at the Software Engineering Institute at Carnegie Mellon University, where he helps government program offices acquire software-reliant systems. With over 35 years of software and system development experience, he has written seven technical books, spoken at numerous conferences, and published many articles and papers. His areas of expertise are requirements engineering, system and software architecture engineering, process engineering, object-oriented development, and, naturally, system and software testing. His numerous publications can be downloaded from here.