Statement Cache causes OutOfMemoryError (JDBC)

How efficient can parse once – execute many get in JDBC? The obvious recommendation, keeping a PreparedStatement open for as long as possible, turns out not to be easy to follow. Testing a custom-built statement cache proved that it did not scale well: increasing the number of concurrent connections or the size of the cache of open statements led to inevitable OutOfMemoryError exceptions.
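For context, this is the pattern in question – a minimal sketch, with a made-up table and bind values:

PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO test_tab (id, payload) VALUES (?, ?)"); // parsed once
for (int i = 0; i < 10000; i++) {
    ps.setInt(1, i);                // only the binds change between executions
    ps.setString(2, "row " + i);
    ps.executeUpdate();             // executed many times, no re-parse
}
ps.close();                         // closing gives up the cursor, hence the caching idea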

During my testing, fetching from tables of different “widths” and playing with the prefetch row count, I noticed that the wider the row or the higher the prefetch count, the faster my cache grew. I debugged my code for many hours, checking whether I was closing result sets and looking for anything else obvious. Nope, no luck – everything was correct. And yet the OOME was consistent.
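A stripped-down version of my test loop looked roughly like this (the table name is made up; setRowPrefetch is the Oracle-specific knob for the prefetch row count):

OraclePreparedStatement ps = (OraclePreparedStatement)
        conn.prepareStatement("SELECT * FROM wide_tab"); // the wider the row...
ps.setRowPrefetch(1000);   // ...and the higher the prefetch, the faster the growth
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    // consume the rows
}
rs.close();                // result set closed correctly
// the statement stays open in the cache for reuse - so far, all by the book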

In the end it turned out that the JDBC OracleStatement keeps references to so-called accessor arrays (essentially buffers for the prefetched row data) of the last executed result set, even after that result set has been closed. The size of that data is proportional to the product of the fetched row width (of my test table) and the prefetch row count. I then tried JDBC implicit and explicit statement caching, and both showed precisely the same memory footprint.
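For the record, this is roughly how the two driver-side caches are enabled and used in the 10g driver (the cache size and the key are made up for illustration; a single cache size setting serves both schemes):

OracleConnection oconn = (OracleConnection) conn;
oconn.setStatementCacheSize(50);

// implicit cache: keyed by the SQL text, transparent on prepare/close
oconn.setImplicitCachingEnabled(true);

// explicit cache: keyed by an arbitrary string of your choice
oconn.setExplicitCachingEnabled(true);
PreparedStatement ps = oconn.getStatementWithKey("wide-query");
if (ps == null)                       // not cached yet, prepare it
    ps = oconn.prepareStatement("SELECT * FROM wide_tab");
// ... execute, fetch, close the result set ...
((OraclePreparedStatement) ps).closeWithKey("wide-query"); // back into the cache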

I opened an SR, and after a few weeks a nice Oracle TS lady advised me that there was one workaround (described below): the connection property FreeMemoryOnEnterImplicitCache. So instead of this:

conn = (OracleConnection) DriverManager.getConnection(url, user, password);

use this:

OracleDataSource ods = new OracleDataSource();
Properties props = new Properties();
props.put("user", user);
props.put("password", password);
props.put("oracle.jdbc.FreeMemoryOnEnterImplicitCache", true);
ods.setConnectionProperties(props);
ods.setURL(url);
conn = ods.getConnection();
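(Note that the property value has to be the string "true" rather than a Boolean, since the driver reads connection properties as strings.) If I understood the explanation correctly, with this property set the driver frees the accessor arrays at the moment a statement is returned to the implicit cache and reallocates them on the next execute, so you trade a little speed on reuse for a bounded heap.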

During testing I didn’t find a way to use a custom cache or the explicit cache with a key effectively, especially when they are LRU caches (the latter turned out to be an LRU type of cache) – the residual data gets promoted into the old generation before the statement is reused and the arrays are repopulated. I really hope the “bug” is going to be fixed in JDBC 10.2.0.4.
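To get a feel for the proportionality, with purely illustrative numbers: a 2 KB row, a prefetch count of 1000, 50 cached statements per connection and 100 connections give 2 KB × 1000 × 50 × 100 ≈ 10 GB of residue, all of it held by already-closed result sets that nothing can reach.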

Metalink bug reference: 6082980

