Note: This article remained unpublished for quite some time. Its main parts date back to December 2007. Therefore, if some version numbers seem outdated — I am referring to the state things had back in those days.
We have a Java application with an embedded Jython scripting engine. The Jython scripts do mass computations on data sets. So far, we had at most 3000 – 4000 data sets in one chunk. Now, a new customer starts and will have 8000 data sets and more.
„No big deal,” I thought, and started the test run one day before our customer was to have the whole thing running on the production system for the first time. Computation starts: 1000… 2000… 3000… 4000… 5000… bang: „Out of memory. You should try to increase heap size”. The server application halts completely and without any further warning.
I’m a bit shocked. The big problems always arise in places where one would least expect them. I start circumvention attempts: Splitting the run into smaller chunks — does not work. Reinitializing the Jython environment periodically — makes things worse. Replacing our rather outdated (but otherwise functional) Jython 2.1 with the then-current Jython 2.2.1 — does not matter either. I do not seem to be able to cycle through the script more than about 5200 times before I catch an „Out of memory” situation — or have to restart the whole server process.
Weird. What should I tell the customer? „Well, you cannot run your computations in one step. Start with the first half, then call us, we will restart the server, then do the second half.”? Not a very professional way of doing things. Even more surprisingly, looking at the memory situation with Runtime.freeMemory() and friends shows that there is no shortage of memory at all. Actually, when the application crashes, it has used no more than 600 MB out of 2048 MB of heap space, and more than 50 MB are still marked as „free”. This is not precisely what I would call „out of memory”…
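For reference, this is the kind of heap snapshot I mean — a minimal sketch using only the standard Runtime API (the class name is mine; the MB figures it prints are whatever your JVM reports, not the original log output):

```java
public class HeapSnapshot {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        long total = rt.totalMemory(); // heap currently claimed by the JVM
        long free  = rt.freeMemory();  // unused part of the claimed heap
        long max   = rt.maxMemory();   // upper heap limit (-Xmx)
        long used  = total - free;
        System.out.println("used: " + used / mb + " MB");
        System.out.println("free: " + free / mb + " MB");
        System.out.println("max: "  + max  / mb + " MB");
    }
}
```

Note that freeMemory() only refers to the heap the JVM has already claimed; the real headroom is maxMemory() minus the used amount — which is exactly why the numbers above made the „out of memory” message so implausible.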
Finally, poking Google once more brings the solution. I find an article about just such a problem. Fortunately, it has a solution and even explains what’s going on: Jython keeps an internal mapping of PyXXX wrappers to Java objects. The default configuration uses normal (strong) references, which makes these mappings resistant to garbage collection. Due to mechanisms I do not fully understand, this leads to enormous growth of the mapping tables and finally to an out-of-memory situation in the internal resource management.
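This is not Jython’s actual code, of course, but the general mechanism can be illustrated with a plain WeakHashMap: an entry whose key is no longer strongly referenced anywhere else becomes eligible for collection, while a normal HashMap pins it forever (the class and entry names here are purely illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

public class WrapperTableDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> strongTable = new HashMap<>();
        Map<Object, String> weakTable = new WeakHashMap<>();

        Object key = new Object();         // stands in for a wrapped Java object
        strongTable.put(key, "PyWrapper"); // hypothetical wrapper entry
        weakTable.put(key, "PyWrapper");

        key = null;                        // the application drops its last reference
        for (int i = 0; i < 10 && !weakTable.isEmpty(); i++) {
            System.gc();                   // request collection (best effort)
            Thread.sleep(50);
        }

        System.out.println("strong table size: " + strongTable.size());
        System.out.println("weak table size: "   + weakTable.size());
    }
}
```

The strong table still holds its entry after the key is dropped; the weak table lets the garbage collector reclaim it. Multiply the strong variant by thousands of script iterations and you get exactly the growth pattern described above.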
Fortunately, the solution is as simple as setting a single Jython option somewhere in the code before the Jython subsystem is initialized. Then the internal tables are built with weak references and suddenly everything runs smoothly. The 8000 data sets are no problem any more and I can deliver the application as expected. Lucky me.
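As far as I can reconstruct it from Jython’s registry documentation, the setting in question is the python.options.internalTablesImpl option, which can be passed as a plain system property before the first interpreter is created (the class name below is mine, and the Jython call is left commented out so the sketch stands on its own):

```java
public class JythonSetup {
    public static void main(String[] args) {
        // Must be set before the Jython runtime builds its internal tables;
        // once initialized, the table implementation is fixed.
        System.setProperty("python.options.internalTablesImpl", "weak");

        // ... now initialize the Jython subsystem, e.g. (requires jython.jar):
        // org.python.util.PythonInterpreter interp = new org.python.util.PythonInterpreter();

        System.out.println(System.getProperty("python.options.internalTablesImpl"));
    }
}
```

The same option can alternatively be supplied on the command line as -Dpython.options.internalTablesImpl=weak or in the Jython registry file, whichever fits your deployment better.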
There is only one question remaining: What kind of parapsychological abilities are developers expected to have to find such a solution without the luck of stumbling upon an article describing it? And: Why the heck does Jython not use weak references by default? I could not find any problems or even speed penalties with them.