PreparedStatement Java memory leak

Developers favor PreparedStatement objects for sending SQL statements to the database. When a PreparedStatement is created with a SQL statement, the statement is sent to the database and compiled immediately. As a result, when the PreparedStatement is executed, the database simply runs the precompiled statement without having to compile it first. Another advantage is that the SQL can be parameterized, i.e. you can execute the same statement with different values each time.

We used the Eclipse MAT (Memory Analyzer Tool) to analyze the heap dump. The analysis is summarized below:

A large number of prepared statements were being cached in the JVM, outgrowing the memory available to it, causing Java heap exhaustion and ultimately an OutOfMemoryError (OOM). This is a typical scenario under high concurrency with limited resources.

The memory leak suspects report showed that 87.54% of the total heap was taken up by a large number of prepared statements.

The dominator tree showed a huge number of requests, which caused the prepared statement cache to grow until the JVM ran out of memory.

This kind of situation happens mainly for one of two reasons:

1. A genuine PreparedStatement/cursor leak in the application code

2. Extreme concurrency with an inadequately sized JVM

In one use case, the application was going through a load test that fired thousands of SQL queries at the database via application workflows. In such cases you may see this error quite often: the JVM goes out of memory or stops responding to further requests. Although there was no memory leak, a 3 GB JVM heap (-Xmx) was inadequate to run 200 concurrent users against application tasks that were very resource intensive.

If you suspect a code-related memory leak, it is better to take a heap dump at the time of the OOM and start tracing by extracting the actual SQL from the leaked cursors, then searching the code for where that SQL is created. A very common cause of leaked cursors is a missing .close() call:
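As an illustration, here is a minimal sketch of the leaky pattern next to the safe try-with-resources form; the table and column names are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CursorLeakExample {

    // Leaky pattern: the statement and result set are never closed, so the
    // driver-side cursor and its row buffers stay alive far longer than needed.
    static void leaky(Connection conn) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT name FROM emp WHERE id = ?");
        ps.setInt(1, 42);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) { /* process row */ }
        // missing rs.close() and ps.close()
    }

    // Safe pattern: try-with-resources closes both, even when an exception is thrown.
    static void safe(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement("SELECT name FROM emp WHERE id = ?")) {
            ps.setInt(1, 42);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* process row */ }
            }
        }
    }

    public static void main(String[] args) {
        // No database here; this class only demonstrates the two patterns.
        System.out.println("ok");
    }
}
```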

If you are sure there are no memory leaks like the above in the code, you can try the settings below to improve performance and reduce PreparedStatement-related OutOfMemory errors. We made the following changes to keep the OOM errors under control:

Assumptions:

The app is running on Apache Tomcat with Oracle as the backend database.

Tomcat Changes:

1. In the Tomcat\conf\context.xml file, the JNDI connection pool maxActive was increased and validationQuery was set to "select 1 from dual" instead of "select * from dual".
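For reference, a minimal sketch of what that Resource entry in context.xml might look like; the resource name, credentials, URL, and pool sizes below are illustrative placeholders, not the values from the actual incident:

```xml
<!-- Hypothetical JNDI pool definition in Tomcat\conf\context.xml -->
<Context>
  <Resource name="jdbc/appDB" auth="Container" type="javax.sql.DataSource"
            driverClassName="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@//dbhost:1521/ORCL"
            username="app" password="secret"
            maxActive="200" maxIdle="20" maxWait="10000"
            validationQuery="select 1 from dual"/>
</Context>
```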

Oracle explains that each Statement (including PreparedStatement and CallableStatement) holds two buffers, one byte[] and one char[]. The char[] stores all the row data of character type: CHAR, VARCHAR2, NCHAR, etc. The byte[] stores all the rest. These buffers are allocated when the SQL string is parsed, generally the first time the Statement is executed. The Statement holds these two buffers until it is closed.

Since the buffers are allocated when the SQL is parsed, the size of the buffers depends not on the actual size of the row data returned by the query, but on the maximum size possible for the row data. After the SQL is parsed, the type of every column is known and from that information the driver can compute the maximum amount of memory required to store each column. The driver also has the fetchSize, the number of rows to retrieve on each fetch. With the size of each column and the number of rows, the driver can compute the absolute maximum size of the data returned in a single fetch. That is the size of the buffers.

Example:

When the driver executes the executeQuery method, the database parses the SQL and reports that the result will have three columns: a NUMBER(10), a VARCHAR2(40), and a DATE. The first column needs (approximately) 22 bytes per row. The second column needs 40 chars per row. The third column needs (approximately) 22 bytes per row. Remembering that each character needs two bytes, one row needs 22 + (40 * 2) + 22 = 124 bytes. The default fetchSize is 10 rows, so the driver will allocate a char[] of 10 * 40 = 400 chars (800 bytes) and a byte[] of 10 * (22 + 22) = 440 bytes, a total of 1240 bytes. 1240 bytes is not going to cause a memory problem.

Some query results are bigger. In the worst case, consider a query that returns 255 VARCHAR2(4000) columns. Each column takes 8K bytes per row; times 255 columns, that is 2040K bytes, or about 2 MB per row. If the fetchSize is set to 1000 rows, then the driver will try to allocate a 2 GB char[]. This would be bad.
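The arithmetic above can be reproduced in a few lines; the per-column byte counts are the approximations from the example, not exact driver internals:

```java
public class FetchBufferEstimate {

    // char[] size in bytes: each Java char takes two bytes
    static long charBufferBytes(int charsPerRow, int fetchSize) {
        return (long) charsPerRow * fetchSize * 2;
    }

    // byte[] size for the non-character columns
    static long byteBufferBytes(int bytesPerRow, int fetchSize) {
        return (long) bytesPerRow * fetchSize;
    }

    public static void main(String[] args) {
        // NUMBER(10) ~22 bytes, VARCHAR2(40) = 40 chars, DATE ~22 bytes
        int fetchSize = 10;
        long total = charBufferBytes(40, fetchSize) + byteBufferBytes(22 + 22, fetchSize);
        System.out.println(total); // 800 + 440 = 1240 bytes

        // Worst case: 255 VARCHAR2(4000) columns, fetchSize 1000
        long worst = charBufferBytes(255 * 4000, 1000);
        System.out.println(worst); // 2040000000 bytes, roughly 2 GB
    }
}
```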

Oracle JDBC Drivers:

The 11.1.0.7.0 drivers introduced a connection property to address the large-buffer problem. This property bounds the maximum size of buffer that will be saved in the buffer cache. All larger buffers are freed when the PreparedStatement is put into the Implicit Statement Cache and reallocated when the PreparedStatement is retrieved from the cache. If most PreparedStatements require a modest buffer, say less than 100 KB, but a few require much larger buffers, then setting the property to 110 KB allows the frequently used small buffers to be reused without imposing the burden of allocating many maximum-size buffers. Setting this property can improve performance and can even prevent OutOfMemoryErrors. The connection property is oracle.jdbc.maxCachedBufferSize (in bytes), and it can also be passed as a JVM argument: -Doracle.jdbc.maxCachedBufferSize=112640
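Besides the JVM argument, the property can be set per connection through java.util.Properties passed to the driver. The credentials and URL below are placeholders, and the commented-out getConnection call assumes the Oracle JDBC driver is on the classpath:

```java
import java.util.Properties;

public class MaxCachedBufferSizeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("user", "scott");      // placeholder credentials
        props.setProperty("password", "tiger");

        // Cap the buffers kept in the statement buffer cache at 110 KB (112640 bytes)
        props.setProperty("oracle.jdbc.maxCachedBufferSize", "112640");

        // Connection conn = DriverManager.getConnection(
        //         "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);

        System.out.println(props.getProperty("oracle.jdbc.maxCachedBufferSize"));
    }
}
```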

A few other things to look at are the oracle.jdbc.useThreadLocalBufferCache and oracle.jdbc.implicitStatementCacheSize settings. A good read can be found in the Oracle database memory PDF from Oracle.
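Assuming all of these properties are applied at the JVM level, a combined launch command might look like the following; the heap size, cache sizes, and jar name are illustrative only:

```shell
# Hypothetical JVM flags combining the Oracle JDBC tuning properties discussed above
java -Xmx3g \
  -Doracle.jdbc.maxCachedBufferSize=112640 \
  -Doracle.jdbc.useThreadLocalBufferCache=true \
  -Doracle.jdbc.implicitStatementCacheSize=10 \
  -jar app.jar
```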