
In this article we are going to see how to use JUnit Benchmarks, a library that lets us run small performance tests on top of existing unit tests.

What is JUnit Benchmarks?
JUnit Benchmarks is a library for running your unit tests with multiple threads. It provides a simple annotation to support this type of execution, and it uses a JUnit Rule to drive the multi-threaded runs.

So the idea is simple: add the rule, apply the annotation, and you can easily benchmark a test method.

You can use it with Java 6, 7, and 8. The same idea has since been adopted by the OpenJDK project as JMH (the Java Microbenchmark Harness), which is the recommended tool on newer JDKs.

Example usage:

Let's see an example of how to use it.
I am using a simple Calculator class with the following methods.
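The actual listing lives in the repository; a minimal sketch of such a class and a benchmarked test might look like this (class and method names are illustrative, and the com.carrotsearch:junit-benchmarks and JUnit 4 dependencies are assumed to be on the classpath):

```java
import com.carrotsearch.junitbenchmarks.BenchmarkOptions;
import com.carrotsearch.junitbenchmarks.BenchmarkRule;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;

import static org.junit.Assert.assertEquals;

// A trivial class under test (illustrative).
class Calculator {
    public int add(int a, int b)      { return a + b; }
    public int subtract(int a, int b) { return a - b; }
}

public class CalculatorBenchmarkTest {

    // The rule that drives the benchmarked, multi-threaded execution.
    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    private final Calculator calculator = new Calculator();

    // Run this test method with 2 concurrent threads.
    @BenchmarkOptions(concurrency = 2)
    @Test
    public void addBenchmark() {
        assertEquals(5, calculator.add(2, 3));
    }
}
```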

And that's it. If you run it, your test will execute with 2 concurrent threads.

Now, let's look at what we have done with the benchmark options. As you can see, there are 3 parameters:

concurrency (a number) => how many threads you want to use to execute your test.

warmupRounds (a number) => how many times the test runs to initialize your environment before the actual measurement starts. This is very important from a testing standpoint: JVM-based applications need some initial time to start up, so one or two warm-up rounds keep the observed timings realistic. These rounds are not counted in the results; after the warm-up rounds, you get timings from an already-running environment.

benchmarkRounds (a number) => how many measured iterations you want to run.

If you are a JMeter user, you may have seen similar parameters in thread groups.

The test is very small and runs very fast, so I put some thread delay in the test case to be able to watch it run. After adding the delay, the test case becomes:
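A sketch of what that delayed test might look like, assuming a test class that already has the BenchmarkRule in place (the 10 ms delay and the option values are illustrative):

```java
// Warm up once (warmupRounds = 1), then run 10 measured rounds
// (benchmarkRounds = 10) using 20 concurrent threads (concurrency = 20).
@BenchmarkOptions(concurrency = 20, warmupRounds = 1, benchmarkRounds = 10)
@Test
public void addBenchmarkWithDelay() throws InterruptedException {
    Thread.sleep(10); // artificial delay so each run is observable
    assertEquals(5, calculator.add(2, 3));
}
```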

It will run:
1. A single warm-up iteration,
2. Then 20 threads will run the tests,
3. And the threads will continue for 10 rounds.

So, in total it will run 20 x 10 + 1 = 201 times.

To verify the number of concurrent threads, we can call Thread.activeCount() inside the test method to print the current thread count.
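For instance (plain Java, no benchmark harness needed, so you can try it standalone):

```java
public class ActiveThreadCount {
    public static void main(String[] args) {
        // Thread.activeCount() returns an estimate of the number of live
        // threads in the current thread's thread group.
        int count = Thread.activeCount();
        System.out.println("Active threads: " + count);
        // At least the main thread is always live.
        assert count >= 1;
    }
}
```

Inside a benchmarked test with concurrency enabled, the same call prints a noticeably higher number.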

We can apply this annotation at both the method level and the class level.
A class-level annotation applies to all test methods, while a method-level annotation overrides the class-level one when both are used.
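A sketch of that override behavior (assuming the junit-benchmarks dependency; class and method names are illustrative):

```java
// Class-level default: every test in this class runs with 4 threads...
@BenchmarkOptions(concurrency = 4)
public class MixedLevelBenchmarkTest {

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    @Test
    public void usesClassLevelOptions() {
        // inherits concurrency = 4 from the class annotation
    }

    // ...except this one, whose method-level annotation wins.
    @BenchmarkOptions(concurrency = 10, benchmarkRounds = 5)
    @Test
    public void overridesClassLevelOptions() {
        // runs with concurrency = 10 instead
    }
}
```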

Now, this class in my repository has some extra items for reporting and for testing delays. We will see those later.

Bonus features: Result Storing & Reporting

It is common practice to present performance-test results as a report. Besides printing results on the command line, JUnit Benchmarks has built-in capabilities to store your test results in an H2 or MySQL database. I will show examples with an H2 database. For this, we use the following annotations:
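Result storage is configured through jub.* system properties (or a jub.properties file). A minimal sketch for H2, where the file names and paths are illustrative:

```properties
# Write results to the console and to an H2 database.
jub.consumers=CONSOLE,H2
# H2 database file (created if missing; path is illustrative).
jub.db.file=.benchmarks
# Directory where the generated charts are written.
jub.charts.dir=charts
```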

a. @AxisRange: represents the axis range of the generated chart. We can set min & max values.

b. @BenchmarkMethodChart: provides the name (file prefix) of the report that will be generated.

c. @BenchmarkHistoryChart: tells how many historical results will be included in the DB and the report. maxRuns sets how many runs to keep, and labelWith chooses which type of label to attach to each entry. I have added the run ID as the label; we can also use TIMESTAMP or our own CUSTOM_KEY. The CUSTOM_KEY should be defined in jub.properties.
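Put together on a test class, the three annotations might look like this (a sketch; the prefix and option values are illustrative):

```java
import com.carrotsearch.junitbenchmarks.BenchmarkOptions;
import com.carrotsearch.junitbenchmarks.BenchmarkRule;
import com.carrotsearch.junitbenchmarks.annotation.AxisRange;
import com.carrotsearch.junitbenchmarks.annotation.BenchmarkHistoryChart;
import com.carrotsearch.junitbenchmarks.annotation.BenchmarkMethodChart;
import com.carrotsearch.junitbenchmarks.annotation.LabelType;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;

@AxisRange(min = 0, max = 1)                               // chart axis bounds
@BenchmarkMethodChart(filePrefix = "calculator-benchmark") // report file name
@BenchmarkHistoryChart(maxRuns = 20, labelWith = LabelType.RUN_ID)
public class CalculatorChartTest {

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    @BenchmarkOptions(concurrency = 20, warmupRounds = 1, benchmarkRounds = 10)
    @Test
    public void addBenchmark() {
        new Calculator().add(2, 3);
    }
}
```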
Here is the example. As you can see from my properties file, the custom key was:

jub.customkey=AddBenchMark

So I can also see this in the report.

After running several tests, my chart folder looks like this.

Here, the jsonp file contains the results as a JSON object. You can also parse it to feed a JSON-based live dashboard (like Grafana).

Where to use this?
1. You have unit tests: use this to learn how they perform under concurrency.
Since this measures unit-level performance, it does not prove anything about concurrent user actions, but it can prove concurrent request processing.
So, when you have a strict SLA, use this to validate throughput.
This type of test is not suitable for response-time SLAs, as what it measures is not user-perceived time.
And one more important factor: error rate, or error tolerance. This type of test can (mostly) assess how likely the server is to produce errors.

2. You have functional integration tests that validate backend requests, like DB requests or web service calls via the UI layer.
You can use this to test performance before the actual performance tests. It will help you understand how the system behaves when its parts are integrated.
Sometimes it is very useful to run alongside manual tests in a QA environment:
a number of parallel requests keep running while a manual tester exercises the application's UI behavior.

3. Mission-critical/business-critical data concurrency tests can easily be done this way.
For example, business transaction data concurrency testing in the banking or financial domain: this can validate data integrity while concurrent requests are in flight in the system (synchronization, locks, etc.).

Note: while running the tests, pay attention to the resources used in the test that are affected by concurrency. DB connections, file handles, open ports, created sessions, etc. should be handled properly when testing this way.