Affects Version/s: 4.0.1
Fix Version/s: 4.4
Environment: Magnolia 3.6.3 CE
Environment 1: Sun JDK 1.6.0_11, 32-bit, RHEL 5.3 Beta
Environment 2: Sun JDK 1.6.0_03, 32-bit, Windows 2000
Serving big binary node data (>100MB) may cause an OutOfMemoryError when the server does not have enough memory assigned. The issue is due to the nature of objects in EhCache: cached objects must be Java objects, which are then serialized and kept in memory or on the file system by the cache itself.
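To illustrate the mechanism, here is a minimal sketch of a serializable cache entry that buffers the whole response body on the heap. CachedBinary is a hypothetical stand-in for the real cached-entry type, not Magnolia's actual class:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.Serializable;

// Hypothetical stand-in for a cached entry: EhCache requires a serializable
// Java object, so the entire binary ends up as a byte[] on the heap.
public final class CachedBinary implements Serializable {
    private final byte[] body; // a 100MB document means a 100MB heap allocation

    private CachedBinary(byte[] body) {
        this.body = body;
    }

    public static CachedBinary fromStream(InputStream in) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int read;
        while ((read = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, read); // the whole stream is buffered in memory
        }
        return new CachedBinary(buffer.toByteArray());
    }
}

Entries built this way for a few concurrent 100MB downloads already need several hundred megabytes of heap before serialization overhead, which exceeds an -Xmx256m limit.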
One possible solution would be to store such documents in the file system and keep a File object as part of the cachedPage; however, this would require an extra file store (probably in the same location as the cache itself).
Since the overhead of accessing the repository and serving big data streams directly is small relative to the total time it takes to stream the document to the client, the simplest solution is to alter the cache policy so that such documents are not cached. This has been dealt with for DMS documents in MGNLDMS-159. Storing big binary data directly in the website workspace is not recommended; use the DMS instead.
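For comparison, streaming the binary directly from the repository needs only a small, constant buffer regardless of file size. A minimal sketch using the plain JCR 1.0 API (a standard nt:file node layout is assumed; this is not Magnolia's actual serving code):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.jcr.Node;
import javax.jcr.RepositoryException;

public final class DirectStreamer {

    // Copies the binary of an nt:file node to the response output stream
    // in 8KB chunks, so memory use stays constant regardless of file size.
    public static void stream(Node fileNode, OutputStream out)
            throws RepositoryException, IOException {
        InputStream in = fileNode.getNode("jcr:content").getProperty("jcr:data").getStream();
        try {
            byte[] chunk = new byte[8192];
            int read;
            while ((read = in.read(chunk)) != -1) {
                out.write(chunk, 0, read);
            }
        } finally {
            in.close();
        }
    }
}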
NOTE: this error occurs only on public instances that have caching enabled and a heap size that is small relative to the size of the data served!
How to reproduce:
1) Start a PUBLIC Magnolia instance with a heap smaller than the DMS repository size, for example -Xmx256m
2) Upload a few large files to the PUBLIC instance (for example, three 100MB PDF files)
3) Launch a new anonymous browser session (to ensure that the cache is used)
4) Download (do not interrupt) the three 100MB files from the public instance
5) Usually the second download will cause:
java.lang.OutOfMemoryError: Java heap space
(Full stack trace to be attached later.)
Workaround: add a voter to the cache policy to avoid caching big data, or disable caching entirely. A hedged sketch of such a voter follows.
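The sketch below models a size-based voter. The int-returning vote() method is modeled on Magnolia's info.magnolia.voting.Voter contract; the class name, the 10MB threshold, and the way the content length is passed in are illustrative assumptions, not the shipped implementation:

public class ResponseSizeVoter {

    // Assumed cut-off: responses bigger than this are not cached.
    private long maxCacheableBytes = 10L * 1024 * 1024;

    // Modeled on Magnolia's voting contract: a negative result vetoes
    // caching, a non-negative one leaves the decision to other voters.
    public int vote(Object value) {
        if (value instanceof Long) {
            long contentLength = ((Long) value).longValue();
            if (contentLength > maxCacheableBytes) {
                return -1; // veto caching of big binaries
            }
        }
        return 0; // stay neutral for anything else
    }

    public void setMaxCacheableBytes(long maxCacheableBytes) {
        // assumed to be set from the cache policy configuration
        this.maxCacheableBytes = maxCacheableBytes;
    }
}

Wired into the cache policy's voter set, a voter of this shape keeps pages and small assets cached while big downloads are streamed straight from the repository.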