Task
Resolution: Obsolete
Major
6.2.4
- Consolidate past/recent investigations
- Conduct proper benchmarking, in order to reach a definitive, well-supported conclusion about the memory issues linked to the usage of periscope-ranker.
Now, to move forward, I propose to benchmark this properly, in a similar spirit to Duy's recent benchmarks of JVM/GC options, or Maxime's earlier efforts on Running Performance Load Tests:
- Define a clear activity scenario, likely triggering anywhere from a few to many find-bar searches, and document the target setup specifications (e.g. local Docker with a low-memory configuration, or a cloud environment).
- Define which setups we put under test; proposed: 1. enabled-no-flags, 2. enabled-with-flags, 3. disabled, 4. excluded.
- Define which metrics we're interested in: likely JVM heap + non-heap, or overall committed memory; others?
- Produce charts for each setup under the activity scenario, be it via Datadog (cloud) or ad-hoc tooling / a local Grafana setup.
- Consider automation, whether with the Test Framework, or by crafting a mini-API in front of periscope if that helps load-testing tools (instead of scripting the infamous Vaadin UIDL calls).
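To make the chart-production step above concrete, here is a minimal sketch of how per-setup runs could be summarized before charting. All setup names match the four proposed above, but the sample values are invented for illustration; real series would come from the JVM (e.g. via MemoryMXBean or JMX) or from Datadog:

```python
# Hypothetical committed-memory samples (MB) collected during one activity
# scenario run per setup. These numbers are placeholders, not measurements.
samples = {
    "enabled-no-flags":   [512, 640, 780, 760],
    "enabled-with-flags": [512, 580, 610, 600],
    "disabled":           [512, 530, 540, 535],
    "excluded":           [500, 510, 515, 512],
}

def summarize(series):
    """Peak and average committed memory for one setup's run."""
    return {"peak_mb": max(series), "avg_mb": sum(series) / len(series)}

# One summary row per setup, ready to chart or tabulate side by side.
summary = {setup: summarize(s) for setup, s in samples.items()}
for setup, stats in summary.items():
    print(f"{setup}: peak={stats['peak_mb']} MB, avg={stats['avg_mb']:.0f} MB")
```

Comparing peak and average committed memory across the four setups is one simple way to phrase the final Keep/Patch/Disable/Remove statement in terms of numbers rather than impressions.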
Let's use this as a guinea-pig example to establish a proper benchmarking methodology, so the effort pays off beyond this one case. We need such tests in place so that we measure the impact of changes (be it from core, JVM setup, or elsewhere) before they reach production, rather than discovering it there.
Expected outcome: a clear statement from Product development (Keep/Patch/Disable/Remove) and an aligned decision on Core/Cloud.
Relates to:
- MGNLPER-82 Consider non-AI alternatives for search result rankings (Closed)
- MGNLPER-152 Result Ranking Tech Issues (Closed)
- MGNLPER-154 Remove ranking from bundle (Closed)
Links to: