One important thing to note: except for `__support__`, the directory structure matches the `/lib` structure. We want to keep it this way as much as possible, to make it easy to identify what is being tested. If you have worked with Go, this practice should be familiar.

In most packages, testing the code requires launching an Ark Node, or at least parts of it. This is why you will often see a setup.js file, which is used to start the components of the node that our tests need.

We declare a setUp and a tearDown method; these will be used in our tests' beforeAll and afterAll hooks.

We use @arkecosystem/core-test-utils to help us set up the container.

The containerHelper.setUp method accepts a configuration object that determines which modules of the Ark Node are launched and which are skipped.
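For illustration, such a configuration object might look like the sketch below. The option names (`network`, `exclude`) and the listed package names are assumptions, not the actual core-test-utils API, so check the helper's source for the real shape:

```javascript
// A hypothetical configuration object for containerHelper.setUp.
// Option names below are assumptions made for this sketch.
const setUpOptions = {
  network: 'testnet', // which network configuration to load
  exclude: [
    // modules we do NOT want to launch for this test suite
    '@arkecosystem/core-forger',
    '@arkecosystem/core-webhooks',
  ],
}
```

The idea is simply that each test suite starts only the modules it actually exercises, which keeps the suites fast and independent.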

Now this can be used in every test that needs it, just like this:

```js
const app = require('./__support__/setup')

let container

beforeAll(async () => {
  container = await app.setUp()

  // After the container has been set up, we can require and use any module
  const logger = container.resolvePlugin('logger')
  logger.debug('Hello')
})

afterAll(async () => {
  await app.tearDown()
})
```

When we write new tests, we generally start by checking that the feature works as expected in the general case, which is perfectly fine. However, please do not stop there: it is the edge cases we are worried about.

Go deeper and test the feature with different parameters. Ask yourself: under which conditions could this fail, for example with a particular set of parameters? If I were to refactor the feature, what would I want the tests to cover?