Testing Redux State

💻Jan 2, 2018

Redux is, at heart, a simple concept: it's somewhere to put your global application state, plus a mechanism to update that state. One aspect of Redux I'm particularly interested in is its testability - how to approach your testing and what to actually test. In this post I'll dive into my preferences and the reasoning behind these choices.

On the surface testing Redux state is very simple. Let’s quickly define a reducer, some actions and a selector function that I will use throughout:
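The original snippet isn't reproduced here, but a minimal sketch of such a state module, consistent with the tests below, might look like this (the action type names and the `amount` payload field are assumptions; `increment`, `decrement` and `getCount` come from the surrounding text):

```javascript
// state.js (sketch) - in the real module these would be exported:
// export default reducer; export { increment, decrement, getCount };

// Action creators
const increment = (amount = 1) => ({ type: 'INCREMENT', amount });
const decrement = (amount = 1) => ({ type: 'DECREMENT', amount });

// Selector: encapsulates exactly where on the state the count lives
const getCount = state => state.count;

// Reducer: returns new state objects, never mutates
const initialState = { count: 0 };
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, count: state.count + action.amount };
    case 'DECREMENT':
      return { ...state, count: state.count - action.amount };
    default:
      return state;
  }
};
```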

Nothing too strange there, just a reducer to track a count, with actions to increment and decrement it. There's also a selector function, getCount. Selectors perform data retrieval, reading from the store and encapsulating exactly where on the store the data live. Typically your UI components will access state via the selectors, rather than reading from the state themselves. In larger Redux applications it's then possible to use libraries such as reselect to provide efficient combination and memoization of selectors, to reduce redraws.
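To illustrate the memoization that reselect's `createSelector` provides, here is a hand-rolled sketch of the idea (a simplification, not reselect's actual implementation): the derived value is only recomputed when the input selectors' results change.

```javascript
// Sketch of the memoization idea behind reselect's createSelector:
// re-run the compute function only when the inputs' results change.
const createSelector = (inputSelectors, compute) => {
  let lastArgs = null;
  let lastResult;
  return state => {
    const args = inputSelectors.map(sel => sel(state));
    if (lastArgs === null || args.some((arg, i) => arg !== lastArgs[i])) {
      lastResult = compute(...args);
      lastArgs = args;
    }
    return lastResult;
  };
};

// Usage: derive a value from the basic getCount selector
const getCount = state => state.count;
const getCountLabel = createSelector([getCount], count => `Count: ${count}`);
```

Even if a fresh state object is produced on every dispatch, the label is only rebuilt when the count itself changes, which is what cuts down unnecessary redraws.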

Let’s write a test for the increment behaviour!

```javascript
import reducer, { getCount } from './state';

describe('counter', () => {
  const state = reducer(undefined, { type: 'none' });

  it('initially has a count of 0', () => {
    expect(state.count).toEqual(0);
  });

  it('increments the counter', () => {
    state.count = 3;
    expect(state.count).toEqual(3);
  });

  it('retrieves count', () => {
    state.count = 4;
    expect(getCount(state)).toEqual(4);
  });
});
```

These tests aren’t utilising the reducer in the way it’s used in production. The tests are directly manipulating and reading the state, whereas in production this will be handled through dispatching actions. Notice that there’s also duplication: the selector test is doing exactly the same as the test that reads directly from the state. I’m not being overly simplistic here, I’ve seen these kinds of tests in real code. This typically happens when a developer wishes to demonstrate that they’re practicing the craft of unit testing, albeit in the literal sense of unit testing everything. Whilst these tests will gain you some good code coverage metrics, in the long term they will hinder your project.

Refactoring the state will mean that your tests fail, and you’ll then have to update them. But wait, isn’t this a good thing? Red-green-refactor? Sure, it can help, but do you want to spend all of your time fixing failing tests? I want my tests to fail if I’ve broken a requirement or some sort of contract, rather than because I’ve just rejigged the code. I’d like my tests to capture that the system is doing what it ought to be doing, rather than asserting that it’s doing whatever it currently does.

How can the tests be improved? I’d make my tests match how the code will be executed in reality. Dispatch actions to the reducer, don’t set the state directly. Use the selectors to read the state. Unit testing purists may be thinking that this is no longer a ‘unit’ because it’s not the smallest possible scope. Instead, I’d argue that it’s the smallest logical scope for my tests.

```javascript
import reducer, { getCount, increment } from './state';

describe('counter', () => {
  let state = reducer(undefined, { type: 'none' });

  it('initially has a count of 0', () => {
    expect(getCount(state)).toEqual(0);
  });

  it('increments the counter', () => {
    state = reducer(state, increment(3));
    expect(getCount(state)).toEqual(3);
  });
});
```

When I refactor the state, if I don’t also update the selector, I’ve broken a contract and my test will fail. In the first example, test 3 would fail but test 2 would pass. Test 2 isn’t representative of reality, it’s just noise that slows the developer down. I can now refactor with more safety. Furthermore, my tests now focus on the behaviour we’re actually trying to capture, and less on proving that each moving part works in isolation.
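To make the contract point concrete, here is a hypothetical refactor sketch (the rename to a `value` field is my invention for illustration): the state shape changes, but because the reducer and selector change together, any test that dispatches actions and reads through `getCount` keeps passing, while a test poking at `state.count` directly would break for no good reason.

```javascript
// Hypothetical refactor: the state shape changes from { count } to { value }.
// Only the reducer and the selector need to change in step.
const initialState = { value: 0 };
const increment = (amount = 1) => ({ type: 'INCREMENT', amount });
const reducer = (state = initialState, action) =>
  action.type === 'INCREMENT'
    ? { ...state, value: state.value + action.amount }
    : state;
const getCount = state => state.value; // the contract is preserved
```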

Understandably, this is a fine line. To some degree it’s playing with semantics, and in such a small example it can feel marginal. However, consider a project with a larger state, complex interactions and a more involved workflow. A solid and representative approach to testing will help maintain test cases, ensuring they don’t hinder changing the code while acting as sensible red flags when functionality deviates from expected behaviour.