[MGNLCACHE-97] Allow caching of time-consuming resources even if they are bigger than the in-memory threshold Created: 08/May/15 Updated: 23/Jul/15 Resolved: 21/May/15 |
|
| Status: | Closed |
| Project: | Cache Modules |
| Component/s: | None |
| Affects Version/s: | 5.3.1 |
| Fix Version/s: | 5.3.2 |
| Type: | Improvement | Priority: | Neutral |
| Reporter: | Florian Fuchs | Assignee: | Roman Kovařík |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | support | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Description |
|
The default threshold for in-memory cache items is hardcoded to 500K in info.magnolia.module.cache.executor.Store (via the constant CacheResponseWrapper.DEFAULT_THRESHOLD); when this threshold is reached, the item is swapped out to disk. If highly frequented resources of a page exceed this threshold, the in-memory Magnolia cache cannot be used for them, which results in a performance decrease. This threshold should be configurable from within Magnolia. |
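For illustration, here is a minimal, self-contained sketch of the swap-to-disk behaviour described above. This is not Magnolia's actual CacheResponseWrapper; the class and method names are invented for the example. Bytes are buffered in memory until a configurable threshold is crossed, at which point the buffer spills over to a temporary file:

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Illustrative only -- not Magnolia's CacheResponseWrapper.
 * Buffers written bytes in memory until a configurable threshold is
 * reached, then spills everything to a temporary file on disk.
 */
public class ThresholdBufferStream extends OutputStream {

    private final int threshold; // e.g. 500 * 1024, the 500K default discussed above
    private ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private OutputStream disk; // non-null once the threshold was exceeded
    private long written;

    public ThresholdBufferStream(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public void write(int b) throws IOException {
        if (disk == null && written + 1 > threshold) {
            spillToDisk();
        }
        (disk != null ? disk : memory).write(b);
        written++;
    }

    /** True while the content still fits under the in-memory threshold. */
    public boolean isInMemory() {
        return disk == null;
    }

    private void spillToDisk() throws IOException {
        File tmp = File.createTempFile("cache-", ".bin");
        tmp.deleteOnExit();
        disk = new FileOutputStream(tmp);
        memory.writeTo(disk); // copy what was buffered so far
        memory = null;
    }

    @Override
    public void close() throws IOException {
        if (disk != null) {
            disk.close();
        }
    }
}
```

Making the `threshold` value injectable, as this issue requests, would allow expensive oversized resources to stay in the in-memory cache.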
| Comments |
| Comment by Jan Haderka [ 15/May/15 ] |
|
The value of 500K was not selected randomly, but as a result of testing which showed that 98% of resources exceeding this value were served as fast from the repository as from memory. This is mainly because transferring that amount of data offsets the time needed to access the repository. I suspect that if this doesn't hold true in your case, it is because you are manipulating or processing the files being served. Is that correct? If so, then simply increasing the number or making it configurable will never really solve your problem; it would just make it less visible at the expense of memory consumed by the cache. Again, assuming you really do some processing of the resources, wouldn't it be more beneficial if this could somehow be signalled to the cache, so it knows that generating this content was expensive and should be cached no matter which threshold it exceeds? |
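A rough sketch of this "signal the cost" idea (purely hypothetical, not an existing Magnolia API; the header name and cut-off value are invented for the example): a timing filter marks responses whose generation was slow, so that downstream cache logic could decide to keep them even past the size threshold:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

/**
 * Hypothetical sketch of the "signal expensive content" idea -- not an
 * existing Magnolia API. Marks responses whose generation took long, so a
 * cache could keep them regardless of the in-memory size threshold.
 */
public class GenerationCostFilter implements Filter {

    /** Illustrative marker header; the name is an assumption, not a Magnolia convention. */
    public static final String COST_HEADER = "X-Generation-Cost-Millis";

    private long costThresholdMillis = 200; // assumed cut-off for "expensive"

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        chain.doFilter(req, res);
        long elapsed = System.currentTimeMillis() - start;
        // Signal that this entry is worth caching regardless of its size,
        // because regenerating it is slow.
        if (elapsed > costThresholdMillis && res instanceof HttpServletResponse) {
            ((HttpServletResponse) res).setHeader(COST_HEADER, Long.toString(elapsed));
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```

Note that setting a header after `chain.doFilter(...)` only works while the response is still uncommitted; a real implementation would wrap the response or record the timing elsewhere. This is only meant to illustrate the signalling idea.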
| Comment by Florian Fuchs [ 15/May/15 ] |
That is true - in our system, the problematic resources were minified and compressed JavaScripts; the compression/minification happens on demand within the "processed javascript", so when the resource wasn't cached, this computation had to be done again. I fully agree with the rest of your comment. |
| Comment by Jan Haderka [ 20/May/15 ] |
|
Make it smarter than just a number |
| Comment by Thomas Duffey [ 21/Jul/15 ] |
|
What was the resolution to this? We're working with one of your EE customers and had to implement our own cache Store executor to increase the threshold. They're outputting a lot of data on their pages, causing the page size to exceed the 500KB threshold. In this specific case it is very advantageous to cache these pages, as they are expensive to build (slow JCR queries). Wondering if there is a better solution than implementing our own Store executor that is a total copy/paste of yours with only the threshold value changed. |
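The copy/paste workaround exists because the 5.3.1 Store executor inlines the constant. A hypothetical sketch of what a configurable variant could look like (the class and property names are assumptions; Magnolia 5.x populates such bean properties from the cache configuration tree, so a `threshold` property on the executor's config node would be injected through the setter):

```java
/**
 * Hypothetical sketch only -- not the actual fix shipped in 5.3.2. The real
 * info.magnolia.module.cache.executor.Store in 5.3.1 uses
 * CacheResponseWrapper.DEFAULT_THRESHOLD directly, which is why a
 * copy/paste of the whole executor was needed just to change the value.
 */
public class ConfigurableThresholdStore /* extends Store */ {

    // Mirrors the 500K default discussed in this issue.
    private int threshold = 500 * 1024;

    public int getThreshold() {
        return threshold;
    }

    public void setThreshold(int threshold) {
        this.threshold = threshold;
    }

    // The request-processing logic would then pass this.threshold (instead
    // of the constant) to the response wrapper it creates.
}
```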
| Comment by Roman Kovařík [ 23/Jul/15 ] |
|
Hello Thomas, Regards. |