[MAGNOLIA-6139] WebappVersionHandler should bootstrap even if initialized workspaces exist in a cluster Created: 20/Mar/15 Updated: 11/Nov/16 Resolved: 26/Oct/15 |
|
| Status: | Closed |
| Project: | Magnolia |
| Component/s: | None |
| Affects Version/s: | 5.3.7 |
| Fix Version/s: | 5.3.12, 5.4.3 |
| Type: | Bug | Priority: | Neutral |
| Reporter: | Richard Gange | Assignee: | Ilgun Ilgun |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | bootstrap, clustering, initialization, support | ||
| Remaining Estimate: | 0d | ||
| Time Spent: | 13d 7h | ||
| Original Estimate: | 1d | ||
| Attachments: |
|
||||
| Issue Links: |
|
||||
| Template: |
|
||||
| Acceptance criteria: |
Empty
|
||||
| Task DoD: |
[ ]* Doc/release notes changes? Comment present?
[ ]* Downstream builds green?
[ ]* Solution information and context easily available?
[ ]* Tests
[ ]* FixVersion filled and not yet released
[ ] Architecture Decision Record (ADR)
|
||||
| Bug DoR: |
[ ]* Steps to reproduce, expected, and actual results filled
[ ]* Affected version filled
|
||||
| Release notes required: |
Yes
|
||||
| Date of First Response: | |||||
| Sprint: | Sprint 2 (Basel), Basel 11, Kromeriz 17 | ||||
| Story Points: | 8 | ||||
| Description |
|
If the WebappVersionHandler detects initialized workspaces in a cluster, then it will not bootstrap the webapp. But just because an initialized workspace exists in the cluster doesn't mean that the webapp has been installed. For example, if author, public1, and public2 share the forum workspace, the author instance may have created and initialized it before public1 installs. |
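The flawed decision can be illustrated with a minimal sketch (all names here are illustrative stand-ins, not Magnolia's actual API): the old check treats the webapp as installed if any workspace has content, while the fix must exclude workspaces shared across the cluster from that check.

```java
import java.util.List;

public class BootstrapCheckSketch {

    // In the ticket's scenario, the shared "forum" workspace was already
    // initialized by the author instance before public1 installs.
    static boolean isClustered(String workspace) {
        return workspace.equals("forum");
    }

    static boolean isInitialized(String workspace) {
        return workspace.equals("forum"); // only the shared workspace has content so far
    }

    // Old behavior: any initialized workspace suppresses bootstrapping.
    static boolean shouldBootstrapOld(List<String> workspaces) {
        return workspaces.stream().noneMatch(BootstrapCheckSketch::isInitialized);
    }

    // Fixed behavior: clustered workspaces are excluded from the check.
    static boolean shouldBootstrapFixed(List<String> workspaces) {
        return workspaces.stream()
                .filter(ws -> !isClustered(ws))
                .noneMatch(BootstrapCheckSketch::isInitialized);
    }

    public static void main(String[] args) {
        List<String> workspaces = List.of("website", "config", "forum");
        System.out.println(shouldBootstrapOld(workspaces));   // false: public1 would skip install
        System.out.println(shouldBootstrapFixed(workspaces)); // true: install proceeds
    }
}
```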
| Comments |
| Comment by Trang Truong [ 05/Jun/15 ] |
|
This is fixed by skipping the webapp bootstrap trigger for clustered workspaces. The configFile param in the RepositoryDefinition is compared against the cluster configuration referenced by the <code>magnolia.repositories.jackrabbit.cluster.config</code> key in magnolia.properties to determine whether a repository is clustered. |
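The detection idea can be sketched as follows (the property key comes from the ticket; the method name and paths are illustrative assumptions, not Magnolia's actual API): a repository counts as clustered when its configFile matches the file the cluster key points at.

```java
import java.util.Map;

public class ClusterDetectionSketch {

    // Illustrative check: a repository is considered clustered when its
    // RepositoryDefinition configFile equals the file referenced by the
    // magnolia.repositories.jackrabbit.cluster.config property.
    static boolean isClusterRepository(Map<String, String> magnoliaProperties,
                                       String repositoryConfigFile) {
        String clusterConfig = magnoliaProperties.get("magnolia.repositories.jackrabbit.cluster.config");
        return clusterConfig != null && clusterConfig.equals(repositoryConfigFile);
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
                "magnolia.repositories.jackrabbit.cluster.config",
                "WEB-INF/config/jackrabbit-cluster.xml");
        System.out.println(isClusterRepository(props, "WEB-INF/config/jackrabbit-cluster.xml")); // true
        System.out.println(isClusterRepository(props, "WEB-INF/config/jackrabbit-bundle.xml"));  // false
    }
}
```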
| Comment by Roman Kovařík [ 07/Jul/15 ] |
|
Reopened:
|
| Comment by Trang Truong [ 08/Jul/15 ] |
|
The reopened issue is fixed by removing the exposed API method info.magnolia.repository.RepositoryManager#isClusterWorkspace from the interface |
| Comment by Philip Mundt [ 13/Jul/15 ] |
|
| Comment by Trang Truong [ 15/Jul/15 ] |
|
Provide unit test |
| Comment by Jan Haderka [ 15/Jul/15 ] |
|
trang.truong did you try to time the whole core build before and after your changes to the test? IIUIC, not only your tests but all tests will now use 2 repos (normal/default plus clustered), so I was wondering what the penalty for doing that is. |
| Comment by Philip Mundt [ 15/Jul/15 ] |
|
While you're at it trang.truong: <code>public DefaultRepositoryManager() { this.magnoliaConfigurationProperties = Components.getComponent(MagnoliaConfigurationProperties.class); }</code> Also make <code>private MagnoliaConfigurationProperties magnoliaConfigurationProperties;</code> final. |
| Comment by Philip Mundt [ 16/Jul/15 ] |
|
Resolved on branch |
| Comment by Trang Truong [ 24/Jul/15 ] |
|
sang.ngo please see the attached config file; put it into WEB-INF to run a cluster environment. |
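For context, the attached file plugs into the mechanism discussed above via magnolia.properties. A sketch only: the key name comes from this ticket, but the path is an illustrative assumption, not the actual attached file.

```
# magnolia.properties (illustrative): point the cluster key at the Jackrabbit
# repository config used by the clustered instances.
magnolia.repositories.jackrabbit.cluster.config=WEB-INF/config/jackrabbit-cluster.xml
```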
| Comment by Sang Ngo Huu [ 24/Jul/15 ] |
|
These are the steps I used to start QA:
|
| Comment by Jan Haderka [ 24/Jul/15 ] |
|
The error is due to the fact that we are trying to re-bootstrap content that already exists in the clustered workspace from all nodes in the cluster.
|
| Comment by Philip Mundt [ 08/Sep/15 ] |
|
After a short discussion with ilgun we identified potential remaining issues with tasks that are executed twice (by two instances, even though only the master cluster node does the session.save()) if they are not written in such a way that they can actually be executed multiple times. Whenever we run into such an issue we'll try to fix it, but we are aware of the fact that we might not be able to cover all cases. Clients using this feature should also be aware of this. |
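The "safe to execute twice" requirement amounts to writing tasks idempotently, e.g. get-or-create instead of create. A minimal sketch of the pattern (a plain-Java simulation with illustrative names; in Magnolia this would run against a JCR session):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IdempotentTaskSketch {

    // Simulated content store keyed by node path.
    static final Map<String, String> nodes = new LinkedHashMap<>();

    // Safe on every cluster node: "creating" an existing node is a no-op,
    // so a second execution neither duplicates content nor fails.
    static void execute() {
        nodes.computeIfAbsent("/modules/forum/config", path -> "default-config");
    }

    public static void main(String[] args) {
        execute(); // first instance (cluster master)
        execute(); // second instance re-runs the same task
        System.out.println(nodes.size()); // 1
    }
}
```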
| Comment by Ilgun Ilgun [ 11/Sep/15 ] |
|
Changes are present in the following branches - > |
| Comment by Espen Jervidalo [ 16/Sep/15 ] |
|
info.magnolia.module.webapp.WebappVersionHandler#checkIfInitialized
info.magnolia.module.ModuleManagerImpl#saveChanges
info.magnolia.repository.DefaultRepositoryManager#DefaultRepositoryManager(info.magnolia.init.MagnoliaConfigurationProperties, info.magnolia.repository.WorkspaceMapping) |
| Comment by Ilgun Ilgun [ 18/Sep/15 ] |
|
Pushed the changes to the same branches. |
| Comment by Espen Jervidalo [ 18/Sep/15 ] |
|
info.magnolia.module.webapp.WebappVersionHandler#checkIfInitialized
info.magnolia.module.ModuleManagerImpl#isClustered(RepositoryManager repositoryManager, ..)
info.magnolia.module.ModuleManagerImpl#saveChanges
|
| Comment by Magnolia International [ 08/Oct/15 ] |
|
In the following snippet, `persist` means "tasks were executed successfully". In previous versions that translated into "yes, save all changes made by tasks", or "roll it back". Now, it looks to me like we're still rolling back on failure, but saving only on master or non-clustered workspaces. Which leads me to think that on a clustered workspace, that session will be "dirty" forever; I'm not even sure how clustering will work with that: will it override the dirty stuff, or not apply changes from master until the session is cleaned/refreshed? Shouldn't we simply also roll back in those cases? (and let the master node apply the changes whenever) As a side note: should the title of this ticket still be specific to WebappVersionHandler and bootstrapping, while the fix isn't?

    if (session.hasPendingChanges()) {
        if (persist) {
            boolean isMasterCluster = isMasterCluster();
            boolean isClustered = isClustered(repoName);
            if (!isClustered || isMasterCluster) {
                session.save();
            } else {
                log.info("Skipping bootstrapping of repository '{}' because it is clustered and node is not cluster master ", repoName);
            }
        } else {
            session.refresh(false);
        }
    } |
| Comment by Jan Haderka [ 09/Oct/15 ] |
|
Indeed. The inner else clause should have not only log.info but also session.refresh(false) to drop pending changes. Good catch. Did you run into an issue with it or did you just spot it in the code? AFAIK as long as we keep dirty changes, upon the cluster node sync interval those changes will be synced with changes coming from the cluster, and in case of conflict an InvalidItemStateException would be thrown upon an attempt to save such a session.
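The suggested correction can be sketched as follows. This is a self-contained simulation (the Session class is a minimal stand-in for javax.jcr.Session, and all method names besides save/refresh/hasPendingChanges are illustrative): a non-master node on a clustered workspace now drops its pending changes instead of leaving the session dirty.

```java
public class SaveChangesSketch {

    // Minimal stand-in for javax.jcr.Session: save() persists pending changes,
    // refresh(false) discards them.
    static class Session {
        private boolean dirty = true;
        void save()                 { dirty = false; }
        void refresh(boolean keep)  { if (!keep) dirty = false; }
        boolean hasPendingChanges() { return dirty; }
    }

    static void saveChanges(Session session, boolean persist,
                            boolean isClustered, boolean isMasterCluster) {
        if (session.hasPendingChanges()) {
            if (persist) {
                if (!isClustered || isMasterCluster) {
                    session.save();
                } else {
                    // The suggested fix: also refresh(false) here so the session
                    // is not left dirty forever on non-master cluster nodes.
                    session.refresh(false);
                }
            } else {
                session.refresh(false); // tasks failed: roll back
            }
        }
    }

    public static void main(String[] args) {
        Session s = new Session();
        // Clustered workspace, this node is not the cluster master.
        saveChanges(s, true, true, false);
        System.out.println(s.hasPendingChanges()); // false: changes dropped, not dangling
    }
}
```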