
Automation in Testing over Test Automation

At the Agile Testing Days 2015, Richard Bradshaw explored how using the term "test automation" restricts teams in exploiting the benefits of automation.

InfoQ interviewed Bradshaw about the difference between testing and checking and why they are both important, how automation can support testing, using automation frameworks, and why we should always focus on the testing problem.

InfoQ: Can you elaborate on the difference between testing and checking?

Bradshaw: My views still align with those defined by James Bach and Michael Bolton in their post "Testing and Checking Refined", which I encourage everyone to read to form their own views. The main difference for me is centred on learning, learning being the acquisition of knowledge. We test software to gain knowledge about it, knowledge the business needs to make informed decisions. When testing, we are free to explore the system, follow heuristics, explore based on our findings, and seek out new information. We learn. Checking, however, is exactly that: checking. We are checking the system against some model in order to detect change against that model. It's a set algorithm that is being executed; it doesn't learn. We then have to evaluate the results of those checks to determine whether there is a problem, something only a human can do.
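Bradshaw's distinction can be illustrated with a minimal sketch (all names here are hypothetical, not from the interview): an automated check encodes a fixed model of expected behaviour and can only detect change against that model, while judging whether a failure actually matters remains a human activity.

```python
# A check encodes a fixed model: expected behaviour is hard-coded.
# It can detect change against that model, but it cannot explore or learn.

def apply_discount(price: float, percent: float) -> float:
    """System under test (hypothetical): apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def check_discount() -> list[str]:
    """An automated check: fixed inputs, fixed expected outputs."""
    model = [  # (price, percent, expected) -- the codified model
        (100.0, 10, 90.0),
        (50.0, 0, 50.0),
        (80.0, 25, 60.0),
    ]
    failures = []
    for price, percent, expected in model:
        actual = apply_discount(price, percent)
        if actual != expected:
            failures.append(f"{price} @ {percent}%: got {actual}, want {expected}")
    return failures  # a human still evaluates whether any failure matters

print(check_discount())  # -> [] while the model still holds
```

If the discount logic changes, the check flags the divergence, but only a person can decide whether the model or the system is wrong.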

InfoQ: Why do you consider them both to be important?

Bradshaw: They complement each other; I find it incredibly hard to do one without the other. We have to check key behaviours of the system, especially behaviours that could cost the business money or damage its reputation. Of course, you could argue all bugs could, so I am talking about key things, such as not being able to purchase on Amazon or send a message on Slack. At the same time, we have to test new features of the system so that the business understands their behaviour, but we also have to test how those new features impact the rest of the system. Checks can help guide such testing: if they detect change in an area, that is an open invitation to explore and test it.

InfoQ: In your talk you mentioned that automation should support testing and not replace it. Can you explain this?

Bradshaw: In some previous roles, I was told the goal of automation was to replace testing, mainly regression testing. I have even been exposed to situations where it was discussed that automation could actually replace testers. This is hokum, of course. The main reason I added it to the abstract was to align with my comments on testing and checking. In my opinion, what most people call automated testing is actually automated checking: they have codified a model of the system that they are checking it against. But as mentioned above, we must still test, so it's about understanding that these automated checks are supporting our testing efforts, not replacing them. Also, if we acknowledge that testing is required, let's explore how we can make testing faster or deeper, or extend the tester's reach. What tools can we build that support testing? Things like data management, state manipulation, and log file parsing, to name a few examples.
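As one sketch of a tool that supports testing rather than replacing it, a small log parser can summarise errors per component so a tester knows where to dig. The log format and field names below are assumptions for illustration, not anything from the interview.

```python
import re
from collections import Counter

# Hypothetical log line format: "2024-01-15 10:32:01 ERROR payment: card declined"
LINE_RE = re.compile(r"^\S+ \S+ (?P<level>\w+) (?P<component>\w+): (?P<message>.*)$")

def summarise_errors(lines: list[str]) -> Counter:
    """Count ERROR entries per component, pointing the tester at hot spots."""
    counts: Counter = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return counts

log = [
    "2024-01-15 10:32:01 ERROR payment: card declined",
    "2024-01-15 10:32:05 INFO checkout: order placed",
    "2024-01-15 10:33:12 ERROR payment: gateway timeout",
]
print(summarise_errors(log))  # -> Counter({'payment': 2})
```

The tool draws no conclusions itself; it compresses information so the human testing effort goes further.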

InfoQ: Can you give some examples of automation frameworks that you are using, and how you are using them to automate in testing?

Bradshaw: I like to use the analogy of a jigsaw: a jigsaw architecture. It's a catchy name for abstraction and decoupling. When designing an architecture, I like to design all the pieces as standalone pieces. For example, a common approach to GUI automated checking is to have the following: some code that manages data, be it reading it from a spreadsheet or creating it in the database; some code that creates browser instances for us; some code that manages our interactions with the page, most commonly PageObjects; and finally, some code that handles reporting results for us. Together, those pieces form a tool we can use to do automated GUI checking. However, having designed the architecture as smaller pieces, we can now utilise them throughout our testing efforts. We could build a GUI for our data creation code, or even a command line interface, allowing members of the team to use this code to create data while they test, instead of having to go into the database themselves. Another example I am currently working on is using parts of my mobile automation architecture to deploy the app to 1-N devices, launch the app, log me in, and stop. This allows me to update multiple phones in one hit, saving me probably 15-30 minutes per version, which gives me 15-30 minutes more testing time.
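The jigsaw idea can be sketched minimally (all names hypothetical): because the data-creation piece stands alone, the same code can back an automated check and a command-line tool that testers run while exploring, instead of being buried inside one monolithic check suite.

```python
import argparse

# Standalone "jigsaw piece": test-data creation, independent of any framework.
def create_user(name: str, role: str = "customer") -> dict:
    """Create a test user (stubbed here; a real version would hit the DB or an API)."""
    return {"name": name, "role": role}

# The same piece backs an automated check...
def check_new_user_defaults() -> bool:
    return create_user("alice")["role"] == "customer"

# ...and a command-line tool testers can run while exploring.
def build_cli() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Create test data on demand")
    parser.add_argument("name")
    parser.add_argument("--role", default="customer")
    return parser

# Simulated CLI invocation: a tester creates an admin user without touching the DB.
args = build_cli().parse_args(["bob", "--role", "admin"])
print(create_user(args.name, args.role))  # -> {'name': 'bob', 'role': 'admin'}
```

The design choice is the point: each piece has one job and no dependency on the others, so any of them can be re-composed into new tools as testing needs emerge.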

InfoQ: What learnings do you want to share with testers when it comes to testing and automation?

Bradshaw: The way we talk and think about automation needs to change. We need to remember it's the testing problem we are trying to solve, not the "we need automation" problem. Think critically about your testing problem, use tools where appropriate, and acknowledge their limits.

The point I am trying to make is that I have witnessed companies blindly chase automation, like it's the must-have toy. I have also witnessed teams solely view automation as end-to-end scenarios: a script that includes some set-up, some actions, and then some assertions. We need to break these moulds. Think critically about where your testing needs improvement, and if the decision is that automation can help, incorporate it accordingly. Analyse the bottlenecks in your approaches, but go deeper than, for example, "regression testing takes too long". What part of it is taking too long? Look at automating part of the bottleneck; it doesn't always have to be the whole thing.

Finally, remember that automation is dumb and requires continuous education. That takes time; are you sure you have it? You, a human, are always learning. Imagine if you were armed with some custom tools: how much could your testing improve then?