Play Framework (6) – performance optimization (2)

Performance issues keep changing as environments differ, versions are upgraded, transaction volume grows, and so on. In a previous post, Play application performance optimization (1), I already gave some commands that can help us find obvious performance problems, and I listed some problems I had met. In another post, JVM Memory Management, I introduced some basic concepts as background knowledge. So in this post, I continue with more detailed methods for tracking down a memory leak, and give an example that appeared in my project.

First, let me describe the current problem in my project. I use New Relic to monitor it, but one day my server went down without any warning. I thought a colleague might have shut it down; in fact, nobody had touched the server. The server went down because memory was used up, and the process was finally killed. So the memory issue came to my attention. I found that memory was increasing daily: every transaction increased memory usage, but not always. It only happened for a new user; for an existing user, memory increased only once. I used VisualVM to check heap usage, and GC performed well. Here is how to use VisualVM to monitor a remote server.

Install VisualVM on your local machine.

Install VisualVM on your remote server:

sudo apt-get install visualvm

Create a policy file on your remote server:

vi jstatd.all.policy

Fill it with the following content to grant access to every application from any host:

grant { permission java.security.AllPermission; };

Start jstatd and tell it to expose the IP address of the machine running it.
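A typical invocation looks like the following; this is a sketch, assuming the policy file above is in the current directory, and `<server-ip>` is a placeholder for your remote machine's address:

```shell
# run jstatd with the permissive policy file created above,
# and expose the server's own address to remote RMI clients
jstatd -J-Djava.security.policy=jstatd.all.policy \
       -J-Djava.rmi.server.hostname=<server-ip> &
```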

Fire up VisualVM on your local machine, click “File/Add Remote Host…”, and enter the IP.

Checking VisualVM, I found that threads also behaved well; here “well” means the count automatically returns to a normal baseline instead of always increasing. GC also runs automatically when the heap size reaches its limit. But only the number of loaded classes keeps increasing, and classes are never unloaded. So I suspect the memory leak happens in a class loader.
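Although the symptom here points at a class loader, a common pattern with the same signature (memory grows once per new user, never again for an existing user) is a static collection that is filled on first sight of a user and never evicted. Below is a minimal sketch of that pattern with hypothetical names; none of this is the actual project code:

```java
import java.util.HashMap;
import java.util.Map;

public class UserCacheLeak {
    // Entries are added on first sight of a user and never removed,
    // so memory grows with every new user but only once per user.
    static final Map<String, byte[]> PER_USER_CACHE = new HashMap<>();

    static void handleRequest(String userId) {
        // computeIfAbsent allocates only for a user we have not seen before
        PER_USER_CACHE.computeIfAbsent(userId, id -> new byte[1024]);
    }

    public static void main(String[] args) {
        handleRequest("alice");
        handleRequest("alice"); // existing user: no further growth
        handleRequest("bob");   // new user: cache grows again
        System.out.println(PER_USER_CACHE.size()); // prints 2
    }
}
```

If the leaked references are themselves class loaders (or objects that pin one), the loaded-class count climbs exactly as observed, because a class cannot be unloaded while its loader is reachable.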

Then, to analyze the result more easily, I use jhat to visualize the dump file. If you meet “java.lang.OutOfMemoryError: Java heap space”, you should enlarge the -J-Xmx size. But that does not always solve the problem; sometimes I tried several times and it still failed.
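For completeness, the `leak` dump file used below can be captured with jmap; a sketch, where `<pid>` is a placeholder for the Play application's process id (find it with jps):

```shell
# list running JVMs to find the Play application's pid
jps -l
# write a binary heap dump named "leak" for that process
jmap -dump:format=b,file=leak <pid>
```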

Visualize the heap with jhat. Here 512m is jhat's own heap limit; set it according to the dump file's size, for example -J-Xmx2g.

jhat -J-Xmx512m leak

jhat -port 7401 leak

Now visit the port (7000 by default, or 7401 as set above) to see the result. The opening screen shows, among other things, all classes that are found in the dump.

Finding leaked objects is easy, since I know I should not see any objects of the classes that I deployed.