[MGNLSCH-37] Job persistence and reliability Created: 15/Jan/13 Updated: 19/May/22 Resolved: 19/May/22 |
|
| Status: | Closed |
| Project: | Scheduler |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major |
| Reporter: | Magnolia International | Assignee: | Unassigned |
| Resolution: | Won't Do | Votes: | 0 |
| Labels: | next | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Template: |
|
| Acceptance criteria: |
Empty
|
| Task DoD: |
[ ]* Doc/release notes changes? Comment present?
[ ]* Downstream builds green?
[ ]* Solution information and context easily available?
[ ]* Tests
[ ]* FixVersion filled and not yet released
[ ] Architecture Decision Record (ADR)
|
| Date of First Response: |
| Description |
|
This is perhaps a topic that needs discussion before implementation. Here are a couple of examples. Let's say I have 3 configured jobs.
Now, assume my server goes down on Jan 15 at 2pm for 3 hours and comes back up at 5pm (I know this example is highly unrealistic). Here's what I'd want to happen, ideally:
I guess what I'm getting at is we'd need
If I recall correctly, Quartz has an API to store job states, but our current implementation uses an in-memory store. |
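To make the Quartz side of this concrete, here is a hedged sketch (assuming Quartz 2.x; the job class, data source, and schedule are illustrative, not Magnolia's actual configuration) of switching from the in-memory store to the persistent JDBC store and marking a job for recovery:
{code:java}
// Hypothetical sketch (Quartz 2.x): swap the default in-memory store
// (org.quartz.simpl.RAMJobStore) for the JDBC-backed JobStoreTX so job/trigger
// state survives a restart, and mark a job as recoverable. Requires the Quartz
// table schema (tables_*.sql from the Quartz distribution) in the database.
import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.JobBuilder.newJob;
import static org.quartz.TriggerBuilder.newTrigger;

import java.util.Properties;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class PersistentSchedulerSketch {

    /** Trivial placeholder job; a real one would do actual work. */
    public static class NightlyExportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // ...
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "PersistentScheduler");
        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "3");
        // Persistent JobStore instead of the default RAMJobStore:
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.dataSource", "myDS");
        // "myDS" is an illustrative name; any JDBC database holding the Quartz tables works.
        props.setProperty("org.quartz.dataSource.myDS.driver", "org.h2.Driver");
        props.setProperty("org.quartz.dataSource.myDS.URL", "jdbc:h2:./scheduler-store");
        props.setProperty("org.quartz.dataSource.myDS.user", "sa");
        props.setProperty("org.quartz.dataSource.myDS.password", "");

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();

        JobDetail job = newJob(NightlyExportJob.class)
                .withIdentity("nightlyExport", "examples")
                // Re-execute this job on startup if the scheduler died while it ran.
                .requestRecovery(true)
                .build();

        Trigger trigger = newTrigger()
                .withIdentity("nightlyExportTrigger", "examples")
                .withSchedule(cronSchedule("0 0 2 * * ?")
                        // Fire times missed during downtime are fired once on
                        // restart, then the normal schedule resumes.
                        .withMisfireHandlingInstructionFireAndProceed())
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}
{code}
With this setup, requestRecovery covers jobs killed mid-execution, while the trigger's misfire instruction covers fire times missed during downtime, which together cover the outage scenario described above.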
| Comments |
| Comment by Magnolia International [ 15/Jan/13 ] |
|
Bingo. The interface is org.quartz.spi.JobStore, and org.quartz.impl.jdbcjobstore.JobStoreSupport has, among other things, a recoverJobs() method which likely does what I'm trying to describe above. Unfortunately, JobStoreSupport is pretty much tied to JDBC and SQL; in the current version of Quartz there doesn't seem to be an intermediate abstract class that would let us easily implement this over JCR-based storage.
edit: Had a quick look at 2.1.6 and this particular point doesn't seem to have changed. |
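For illustration, a sketch of what implementing this over JCR would entail (hypothetical code, not part of the module; assuming the Quartz 2.x SPI). Lacking an intermediate abstract base, a JCR-backed store would have to implement org.quartz.spi.JobStore directly; the class below is declared abstract only so the sketch compiles while most of the interface is left out:
{code:java}
// Hypothetical skeleton of a JCR-backed JobStore. Only the lifecycle hooks
// relevant to recovery are shown; everything else is omitted.
import org.quartz.SchedulerConfigException;
import org.quartz.SchedulerException;
import org.quartz.spi.ClassLoadHelper;
import org.quartz.spi.JobStore;
import org.quartz.spi.SchedulerSignaler;

public abstract class JcrJobStore implements JobStore {

    private SchedulerSignaler signaler;

    @Override
    public void initialize(ClassLoadHelper loadHelper, SchedulerSignaler signaler)
            throws SchedulerConfigException {
        // Keep the signaler so recovered/misfired triggers can be reported back
        // to the scheduler, as JobStoreSupport does.
        this.signaler = signaler;
    }

    @Override
    public void schedulerStarted() throws SchedulerException {
        // The hook where JobStoreSupport invokes its recoverJobs(): on startup,
        // re-fire jobs that were in-flight when the instance went down (and had
        // requestsRecovery set) and apply misfire instructions to triggers
        // whose fire times were missed.
        recoverJobs();
    }

    /** Would read persisted job/trigger state from a JCR workspace and recover it. */
    protected abstract void recoverJobs() throws SchedulerException;

    @Override
    public boolean supportsPersistence() {
        return true;
    }

    // Remaining JobStore methods (storeJob, retrieveJob, acquireNextTriggers,
    // triggeredJobComplete, ...) omitted from this sketch.
}
{code}
A custom store like this would then be selected through the org.quartz.jobStore.class property, the same way JobStoreTX is.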
| Comment by Roman Kovařík [ 19/May/22 ] |
|
Hello,
This ticket is now marked as closed due to one of the following reasons:
If you are still facing a problem or consider this issue still relevant, please feel free to re-open the ticket and we will reach out to you.
Thank you, |