Details
- Type: Bug
- Resolution: Workaround exists
Description
Steps to reproduce
- We do not yet know how to reproduce this issue, but it has recently affected a wide range of customers. Please refer to the linked issues for more information.
Expected results
The bug is fixed, an improvement is implemented, or at least a preventive action is identified.
Actual results
Customers periodically hit a Lucene IndexMerger error while merging indexes: "no segments* file found", with only a NativeFSLockFactory write.lock file left in the index directory.
Log entries matching this pattern flood the log:
2020-09-07 09:43:53,087 ERROR rg.apache.jackrabbit.core.query.lucene.IndexMerger: Error while merging indexes: org.apache.lucene.index.IndexNotFoundException: no segments* file found in org.apache.jackrabbit.core.query.lucene.directory.FSDirectoryManager$FSDir@org.apache.lucene.store.SimpleFSDirectory@/magnolia/repositories2/magnoliaAuthor/magnolia/workspaces/mgnlVersion/index/_4d lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d68bae9: files: [write.lock]
    at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:667) ~[lucene-core-3.6.0.jar:3.6.0 1310449 - rmuir - 2012-04-06 11:31:16]
    at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:72) ~[lucene-core-3.6.0.jar:3.6.0 1310449 - rmuir - 2012-04-06 11:31:16]
    at org.apache.lucene.index.IndexReader.open(IndexReader.java:454) ~[lucene-core-3.6.0.jar:3.6.0 1310449 - rmuir - 2012-04-06 11:31:16]
    at org.apache.jackrabbit.core.query.lucene.AbstractIndex.getReadOnlyIndexReader(AbstractIndex.java:312) ~[jackrabbit-core-2.18.1.jar:2.18.1]
    at org.apache.jackrabbit.core.query.lucene.AbstractIndex.getReadOnlyIndexReader(AbstractIndex.java:334) ~[jackrabbit-core-2.18.1.jar:2.18.1]
    at org.apache.jackrabbit.core.query.lucene.PersistentIndex.getReadOnlyIndexReader(PersistentIndex.java:168) ~[jackrabbit-core-2.18.1.jar:2.18.1]
    at org.apache.jackrabbit.core.query.lucene.MultiIndex.getIndexReaders(MultiIndex.java:552) ~[jackrabbit-core-2.18.1.jar:2.18.1]
    at org.apache.jackrabbit.core.query.lucene.IndexMerger$Worker.run(IndexMerger.java:522) [jackrabbit-core-2.18.1.jar:2.18.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_211]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_211]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_211]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_211]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_211]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_211]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_211]
The repeated entries eventually fill the disk.
This issue now occurs more frequently, and more and more customers are affected, so we need a deeper investigation and preventive actions.
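Since no reproduction is known, one interim option is to detect the corrupted state before it floods the log. The sketch below is our own assumption, not an official tool: it walks a repository home (the directory layout follows the path visible in the stack trace) and flags every directory that contains a Lucene write.lock but no segments* file.

```python
import os

def find_broken_indexes(repo_root):
    """Return directories under repo_root that contain a Lucene
    write.lock but no segments* file -- the corrupted state that
    triggers the 'no segments* file found' IndexMerger error."""
    broken = []
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        has_lock = "write.lock" in filenames
        has_segments = any(f.startswith("segments") for f in filenames)
        if has_lock and not has_segments:
            broken.append(dirpath)
    return broken
```

Running a check like this before startup, or periodically from a monitoring job, would give early warning before the IndexMerger starts flooding the log.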
Workaround
- Stop the instance (in practice it has usually already become unresponsive).
- Install the patched Lucene dependency:
<dependency>
  <groupId>org.apache.lucene</groupId>
  <artifactId>lucene-core</artifactId>
  <version>3.6.0-LUCENE-4738</version>
</dependency>
- Remove the flooded log file and all repository index folders.
- Start the instance again so that the indexes are recreated.
- Ideally, the customer also follows our "Consistency checks and fixes" guideline.
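The index-removal step above can be scripted. This is a minimal sketch under our own assumptions (the function name and dry_run parameter are ours, and we assume the standard layout where each workspace keeps its Lucene index in a folder named index); it is destructive and must only run while the instance is stopped.

```python
import os
import shutil

def remove_index_folders(repo_home, dry_run=True):
    """List (and, when dry_run=False, delete) every 'index' folder under
    repo_home so that the indexes are rebuilt on the next startup.
    Only run this while the instance is stopped."""
    removed = []
    for dirpath, dirnames, _filenames in os.walk(repo_home):
        if "index" in dirnames:
            target = os.path.join(dirpath, "index")
            removed.append(target)
            dirnames.remove("index")  # do not descend into the folder we handle
            if not dry_run:
                shutil.rmtree(target)
    return removed
```

The dry run default lets an operator review the list of folders before deleting anything.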
Development notes
Notes to/from developers
Please consider my comment below about enabling some flags (enableConsistencyCheck, forceConsistencyCheck, autoRepair) by default when bundling the releases.
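For reference, these parameters belong on the SearchIndex element of a workspace's workspace.xml. A sketch, assuming a default Jackrabbit search index setup (the path value and any other params will differ per installation):

```xml
<SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
  <param name="path" value="${wsp.home}/index"/>
  <param name="enableConsistencyCheck" value="true"/>
  <param name="forceConsistencyCheck" value="true"/>
  <param name="autoRepair" value="true"/>
</SearchIndex>
```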
Checklists
Attachments
Issue Links
- is related to: MGNLBACKUP-139 Unable to commit volatile index (Open)
- links to: Wiki Page