Hello people,
According to the documentation, @DisallowConcurrentExecution works based on the JobKey definition:
An annotation that marks a {@link Job} class as one that must not have multiple instances executed concurrently (where instance is based-upon a {@link JobDetail} definition - or in other words based upon a {@link JobKey}).
I have one class implementing the Job interface, of which I create multiple instances.
The creation makes sure that each JobKey has a different name within the same group
(I verified the JobKeys persisted in the database). I also configured 10 worker threads.
Nevertheless, Quartz does not execute the job multiple times in different threads, and the logs contain a lot of "trigger misfire handle" messages.
I am afraid that the job class is what is being evaluated for @DisallowConcurrentExecution, and not the JobKey. Maybe this problem is specifically related to the clustered job store and not the in-memory job store?
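The semantics described in that javadoc can be pictured as Quartz keeping a busy-flag per JobKey (group plus name), never per job class. Here is a minimal plain-Java sketch of that idea — this is not Quartz code; the KeyedExecutionGuard class and its method names are invented for illustration:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: mimics how @DisallowConcurrentExecution serializes
// executions per JobKey (group + name), not per job class.
public class KeyedExecutionGuard {
    private final Set<String> running = ConcurrentHashMap.newKeySet();

    /** Returns true if the job identified by this key may start now. */
    public boolean tryStart(String group, String name) {
        return running.add(group + "." + name);
    }

    /** Marks the job identified by this key as finished. */
    public void finish(String group, String name) {
        running.remove(group + "." + name);
    }

    public static void main(String[] args) {
        KeyedExecutionGuard guard = new KeyedExecutionGuard();
        // Same job class, two different JobKeys: both may run concurrently.
        System.out.println(guard.tryStart("grp", "job-1")); // true
        System.out.println(guard.tryStart("grp", "job-2")); // true
        // Same JobKey again while still running: blocked.
        System.out.println(guard.tryStart("grp", "job-1")); // false
        guard.finish("grp", "job-1");
        System.out.println(guard.tryStart("grp", "job-1")); // true
    }
}
```

Under these semantics, two JobDetails of the same class but with different JobKeys should run in parallel, which is what the question above is testing.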
Best regards
Thanks for confirming. No logic is based on job class / class name. Only on keys.
I really have no solid guess at what might be happening in your case. How long does the job take to execute, and while it is executing, can you check whether the triggers related to the other JobDetail instances go into the BLOCKED state (this would be in the _TRIGGERS table)?
The only other things I can think of that would prevent them from firing are: having been paused (the triggers would be in the PAUSED state), or no available worker threads (but you said you have 10).
Hello @jhouserizer,
I have set up an example project: https://github.com/rfelgent/springboot-quartz-example
I am afraid that your assumption is right.
The scheduler is working correctly with regard to @DisallowConcurrentExecution.
While setting up the example project, I realized that the mistake in my production code is somewhere else...
We can close this issue. Thanks for your help!
Sometimes you need to schedule the same job with different parameters at different times, and you need to handle the overlapping yourself; @DisallowConcurrentExecution should cover such a use case.
I see that you need to create a single job and multiple triggers in that case, but once a trigger has completed it vanishes (and thus its JobDataMap does too), so if it produced a result you need to show to the user, you lose it.
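The single-job / multiple-triggers setup mentioned above is usually built with a durable JobDetail and one JobDataMap per trigger. A sketch using the standard Quartz 2.x builder API follows — the job class, keys, and the "param" entry are invented for illustration, and since trigger-level data disappears with the trigger, any result the user must see should be written to some persistent store from inside execute():

```java
import java.util.Set;
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class MultiTriggerExample {

    @DisallowConcurrentExecution // serializes per JobKey, i.e. per JobDetail
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext ctx) {
            // Merged map: job-level entries overridden by trigger-level ones.
            String param = ctx.getMergedJobDataMap().getString("param");
            System.out.println("running with param=" + param);
            // Persist any result externally here: the trigger (and its
            // JobDataMap) is discarded once it has no more fire times.
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(ReportJob.class)
                .withIdentity("report", "demo")
                .storeDurably() // keep the JobDetail after its triggers are gone
                .build();

        Trigger first = TriggerBuilder.newTrigger()
                .withIdentity("report-now", "demo")
                .forJob(job)
                .usingJobData("param", "A") // per-trigger parameters
                .startNow()
                .build();

        Trigger second = TriggerBuilder.newTrigger()
                .withIdentity("report-later", "demo")
                .forJob(job)
                .usingJobData("param", "B")
                .startAt(DateBuilder.futureDate(10, DateBuilder.IntervalUnit.SECOND))
                .build();

        scheduler.scheduleJob(job, Set.of(first, second), false);
        scheduler.start();
    }
}
```

Because the JobDetail is durable, it survives after both triggers have fired, so a new trigger with fresh parameters can be attached to the same job later.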