How efficient can parse once, execute many get in JDBC? It turns out the obvious recommendation, to keep a PreparedStatement open as long as possible, is not easy to follow. Testing a custom-built statement cache proved it did not scale well: increasing the number of concurrent connections or the size of the cache of open statements led to inevitable OutOfMemoryError exceptions.
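A minimal sketch of what parse once, execute many means in JDBC terms (table and column names are made up; conn is an already-open connection):
PreparedStatement ps = conn.prepareStatement(
        "SELECT col1, col2 FROM wide_table WHERE id = ?"); // parsed once
for (int id = 0; id < 10000; id++) {
    ps.setInt(1, id);                 // rebind...
    ResultSet rs = ps.executeQuery(); // ...and execute many times
    while (rs.next()) { /* process the row */ }
    rs.close();
}
// Keeping ps open across iterations is exactly what a statement cache automates.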
During my testing I noticed, while fetching from tables of different widths and playing with the prefetch row count, that the wider the row or the higher the prefetch count, the faster my cache grew. I spent many hours debugging my code, checking whether I was failing to close result sets or missing something equally obvious. No luck, everything was correct. And yet the OOME was consistent.
In the end it turned out that the JDBC OracleStatement keeps a reference to so-called accessor arrays (essentially a cache of the prefetched row data) for the last executed result set, even after that result set has been closed. The size of that data is proportional to the product of the fetched row width (of my test table) and the prefetch row count. I then tried the JDBC implicit and explicit statement caches, and both showed precisely the same memory footprint.
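To make the mechanism concrete, here is a rough sketch of the access pattern from my tests (table name and fetch size are made up; conn is an open connection):
PreparedStatement ps = conn.prepareStatement("SELECT * FROM wide_table");
ps.setFetchSize(500);                 // prefetch row count
ResultSet rs = ps.executeQuery();
while (rs.next()) { /* ... */ }
rs.close();
// Even now the OracleStatement still references the accessor arrays,
// whose size is roughly proportional to row width * fetch size; multiply
// that by every statement sitting in a cache and the heap fills up.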
I opened an SR, and after a few weeks was advised by a nice Oracle TS lady that there is one workaround: the connection property FreeMemoryOnEnterImplicitCache. So instead of obtaining the connection like this:
conn = (OracleConnection) DriverManager.getConnection(url, user, password);
one has to obtain it from an OracleDataSource with the property passed in as a connection property:
OracleDataSource ods = new OracleDataSource();
ods.setURL(url); // plus user and password as before
Properties props = new Properties();
props.setProperty("FreeMemoryOnEnterImplicitCache", "true"); // the workaround property
ods.setConnectionProperties(props);
conn = (OracleConnection) ods.getConnection();
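For completeness, a hedged sketch of how the implicit statement cache itself is switched on (standard OracleConnection extension methods; the cache size and table name are arbitrary example values). With the property set, the accessor buffers should be released each time a statement re-enters the cache:
conn.setImplicitCachingEnabled(true); // turn the implicit cache on
conn.setStatementCacheSize(50);       // arbitrary example size
PreparedStatement ps = conn.prepareStatement("SELECT * FROM wide_table");
// ... execute and fetch as usual ...
ps.close(); // returns the statement to the implicit cache; with
            // FreeMemoryOnEnterImplicitCache=true the retained
            // accessor arrays are freed here instead of lingering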
During testing I did not find a way to use a custom cache or the explicit cache with key effectively, especially since they are LRU caches (the latter turned out to be an LRU type of cache): the residual accessor data gets promoted into the OldGeneration before the statement is reused and the arrays are repopulated. I really hope the “bug” is going to be fixed in JDBC 10.2.0.4.
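For reference, the explicit cache with key pattern I was testing looks roughly like this (Oracle JDBC extension API; the key string and table name are made up):
PreparedStatement ps = conn.getStatementWithKey("wide-query"); // cache hit?
if (ps == null) {
    ps = conn.prepareStatement("SELECT * FROM wide_table");    // parse once
}
// ... execute and fetch ...
((OraclePreparedStatement) ps).closeWithKey("wide-query");
// The statement re-enters the LRU cache with its accessor arrays intact,
// and that residue survives long enough to be promoted to OldGeneration.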
Metalink bug reference: 6082980