I know it's unfair to the Terracotta guys, but has anyone tried using Hazelcast to run scheduled jobs in a clustered environment?
The simplest implementation I can imagine is the following architecture:
- A global Hazelcast lock that ensures only one server starts up the Quartz scheduler.
- Running the actual tasks as DistributedTasks. (This can be done later; for the moment the heavy scheduled tasks will need to take care of triggering a DistributedTask themselves, as in the sketch after this list.)
- As soon as the server holding the lock goes down, another server acquires the lock.
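To illustrate the second point, here is a minimal sketch of what I mean by triggering a DistributedTask from a scheduled job. It assumes the pre-3.x Hazelcast API (where DistributedTask is still available); HeavyWorkJob and HeavyWork are hypothetical placeholder names:

import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

import com.hazelcast.core.DistributedTask;
import com.hazelcast.core.Hazelcast;

// Quartz job that stays lightweight: it only submits the heavy work
// to the cluster instead of running it on the master node itself.
public class HeavyWorkJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        ExecutorService executor = Hazelcast.getExecutorService();
        // The callable may run on any member of the cluster, not
        // necessarily on the node holding the Quartz master lock.
        executor.execute(new DistributedTask<Void>(new HeavyWork()));
    }

    // The actual work; it must be Serializable to travel across the cluster.
    static class HeavyWork implements Callable<Void>, Serializable {
        private static final long serialVersionUID = 1L;

        @Override
        public Void call() {
            // ... heavy lifting goes here ...
            return null;
        }
    }
}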
I believe this would be a great advantage for people who already have Hazelcast, since they won't have to go through the whole dev-environment hassle of firing up the Terracotta stack all the time.
For the moment I have coded the simplest solution: only one node is in charge of executing the Quartz triggers. Since I only use cron-like triggers, it could be an acceptable solution as long as I take care of creating DistributedTasks for the heavy trigger tasks.
Here's my org.springframework.scheduling.quartz.SchedulerFactoryBean extension that makes it happen:
@Override
public void start() throws SchedulingException {
    // Compete for the cluster-wide lock in a background thread so that
    // startup is not blocked on the nodes that don't become the master.
    new Thread(new Runnable() {
        @Override
        public void run() {
            final Lock lock = getLock();
            lock.lock(); // blocks until this node is elected master
            log.warn("This node is the master Quartz");
            SchedulerFactoryBean.super.start();
        }
    }).start();
    log.info("Starting..");
}

@Override
public void destroy() throws SchedulerException {
    super.destroy();
    getLock().unlock(); // let another node take over as master
}
Please let me know if I am missing something big, and whether this can be done.
I have added the two files to GitHub. Here's the RAMJobStore extension:
And here's the Spring SchedulerFactoryBean extension: