
This has been asked here before, but I don't think it was answered. The only answer talks about how the aggregator uses the correlationId, but the real issue is how the job status is updated without checking the JobExecutionId in replies. I don't have enough reputation to comment on the existing question, so I am asking here again.

According to the Javadoc on MessageChannelPartitionHandler, it is supposed to be step- or job-scoped. In a remote partitioning scenario we use RemotePartitioningManagerStepBuilder to build the manager step, which does not allow setting a PartitionHandler. Given that every job uses the same queue on RabbitMQ, replies from worker nodes can get crossed between jobs. There is no simple way to reproduce this, but I can observe the behavior with the manual steps below:

  1. Launch first job
  2. Kill the manager node before worker can reply
  3. Let worker node finish handling all partitions and send a reply on rabbitmq
  4. Start manager node again and launch a new job
  5. Have some mechanism to fail the second job, e.g. throw an exception explicitly in the reader/writer
  6. Check the status of 2 jobs

Expected result: Job-1 marked COMPLETED and Job-2 FAILED

Actual result: Job-1 remains in STARTED and Job-2 is marked COMPLETED even though its worker steps are marked FAILED

Below is sample code showing how the manager and worker steps are configured:

@Bean
public Step importDataStep(RemotePartitioningManagerStepBuilderFactory managerStepBuilderFactory) {
    return managerStepBuilderFactory.get("importDataStep")
            .<String, String>partitioner("worker", partitioner())
            .gridSize(2)
            .outputChannel(outgoingRequestsToWorkers)
            .inputChannel(incomingRepliesFromWorkers)
            .listener(stepExecutionListener)
            .build();
}

@Bean
public Step worker(RemotePartitioningWorkerStepBuilderFactory workerStepBuilderFactory) {
    return workerStepBuilderFactory.get("worker")
            .listener(stepExecutionListener)
            .inputChannel(incomingRequestsFromManager())
            .outputChannel(outgoingRepliesToManager())
            .<String, String>chunk(10)
            .reader(itemReader())
            .processor(itemProcessor())
            .writer(itemWriter())
            .build();
}

Alternatively, I can use polling instead of replies, where crossing of messages does not occur. But polling cannot be resumed if the manager node crashed while worker nodes were still processing. If I follow the same steps above using polling:

Actual result: Job-1 remains in STARTED and Job-2 is marked FAILED as expected

This issue does not occur with polling because each poller uses the exact jobExecutionId to poll and update the corresponding manager step/job.
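For reference, this is roughly how the polling variant is wired up in my setup. This is a sketch: the step name is a placeholder, and my understanding (which may be wrong) is that when no inputChannel is set, the partition handler created by the builder polls the job repository for worker StepExecution statuses instead of aggregating reply messages:

```java
@Bean
public Step importDataStepPolling(
        RemotePartitioningManagerStepBuilderFactory managerStepBuilderFactory,
        JobExplorer jobExplorer) {
    // Without an inputChannel, the MessageChannelPartitionHandler built by
    // the builder polls the job repository (via the JobExplorer) for the
    // worker StepExecutions rather than waiting for reply messages.
    return managerStepBuilderFactory.get("importDataStepPolling")
            .<String, String>partitioner("worker", partitioner())
            .gridSize(2)
            .outputChannel(outgoingRequestsToWorkers)
            .jobExplorer(jobExplorer)
            .pollInterval(10_000) // milliseconds between repository polls
            .build();
}
```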

What am I doing wrong? Is there a better way to handle this scenario?

sanjay
  • Is your question similar to https://stackoverflow.com/questions/65949185/multiple-instances-of-a-partitioned-spring-batch-job? I think the same thing would happen with remote chunking, see https://github.com/spring-projects/spring-batch/issues/1372#issuecomment-566277247. – Mahmoud Ben Hassine Feb 04 '21 at 12:48
  • @MahmoudBenHassine that's right, same issue. Looks like I did not search enough. So does that mean the only option I have is to use polling? If so, is there a way to restart that polling in case the manager node crashed? – sanjay Feb 04 '21 at 14:02
  • With polling, the manager and workers are independent, and the entire process is based on the state of the job repository: workers update their step execution status in the job repository and the manager polls that. Now if the manager dies and you restart it, it will either see (based on the statuses in the job repository) that all workers have finished and mark the job as completed, or resume polling and wait for workers to finish. If one of the workers failed, it will restart it as well. Is that what you are seeing? If not, please elaborate on what is not working. – Mahmoud Ben Hassine Feb 05 '21 at 08:37
  • Polling is not the only option. The message aggregation approach should work as well. Have you tried to set up a job scoped `MessageChannelPartitionHandler` **without** using the `RemotePartitioningManagerStepBuilder`? – Mahmoud Ben Hassine Feb 05 '21 at 08:39
  • @MahmoudBenHassine thanks for the inputs. When I say the manager crashed, I mean a JVM exit. Therefore in my case, even though the worker steps are finished, the manager step is not updated (even after a JVM restart) and hence the job remains in STARTED state. I am using Spring Boot to run my application and restarting it does not restart the manager's repository poll. If there is a way, please guide. Regarding a step or job scoped `MessageChannelPartitionHandler` without using `RemotePartitioningManagerStepBuilder`, can you please share any example that I can refer to? So far I could not find one. Thanks in advance. – sanjay Feb 06 '21 at 14:54
  • @MahmoudBenHassine I managed to configure `MessageChannelPartitionHandler` without using `RemotePartitioningManagerStepBuilder`. It's able to send requests to the worker, but while receiving replies from the worker it fails with the error `Dispatcher has no subscribers for channel application.incomingRepliesFromWorkers`. I can't figure out what I am doing wrong though. – sanjay Feb 07 '21 at 17:29
  • If the manager's VM crashes, the job won't get a chance to stop gracefully and update its status, that's why it will be stuck at STARTED, and you won't be able to restart it until you change its status manually in the DB. For your last question about the dispatcher issue, please check https://stackoverflow.com/questions/41239553/spring-integration-dispatcher-has-no-subscribers-for-channel – Mahmoud Ben Hassine Feb 08 '21 at 08:50
  • @MahmoudBenHassine forgot to update, I managed to solve the dispatcher error by marking `autoStartup(true)` during IntegrationFlow registration. However, I don't think I have set up `MessageChannelPartitionHandler` correctly because the behavior has changed significantly. For instance, during a VM shutdown/crash the job earlier used to remain in STARTED state, but now it gets marked as FAILED due to `Timeout occurred before all partitions returned` from the `receiveReplies` method. Can you please share any working example of `MessageChannelPartitionHandler` without `RemotePartitioningManagerStepBuilder`? – sanjay Feb 08 '21 at 09:13
  • `RemotePartitioningManagerStepBuilder` was introduced in v4.1 to ease the configuration of a remote partitioning setup. You can check the docs of v4.0, which contain an example without it: https://docs.spring.io/spring-batch/docs/4.0.x/reference/html/spring-batch-integration.html#remote-partitioning – Mahmoud Ben Hassine Feb 08 '21 at 09:20
  • @MahmoudBenHassine Thanks for reference, I will compare this with my implementation. On a side note, I have separate jvms for running manager step too. And in my understanding when there are competing consumers on rabbitmq it does not really control which consumer would get which message. So I am not sure how StepScoped partition handler is supposed to solve this issue? I feel that reply listener(s) should be independent of masterStepExecution i.e. it should look up jobExecution based on stepExecutionId received and update status. Of course there might be a reason why its not that way :) – sanjay Feb 08 '21 at 09:28
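Following the 4.0-style approach referenced in the comments, a job-scoped `MessageChannelPartitionHandler` configured without the builder might look roughly like the sketch below. The channel beans (`outgoingRequestsToWorkers`, `incomingRepliesFromWorkers`), the `partitioner()` bean, and the step/bean names are assumptions carried over from the question's setup, not a verified working configuration:

```java
@Bean
@JobScope
public MessageChannelPartitionHandler partitionHandler(MessagingTemplate messagingTemplate) {
    MessageChannelPartitionHandler handler = new MessageChannelPartitionHandler();
    handler.setStepName("worker");                      // name of the worker step
    handler.setGridSize(2);
    handler.setMessagingOperations(messagingTemplate);  // sends partition requests
    handler.setReplyChannel(incomingRepliesFromWorkers); // must be a PollableChannel
    return handler;
}

@Bean
public MessagingTemplate messagingTemplate() {
    MessagingTemplate template = new MessagingTemplate();
    // Requests go out on the channel bridged to the RabbitMQ request queue.
    template.setDefaultChannel(outgoingRequestsToWorkers);
    template.setReceiveTimeout(100000);
    return template;
}

@Bean
public Step importDataStep(StepBuilderFactory stepBuilderFactory,
                           PartitionHandler partitionHandler) {
    // Plain StepBuilderFactory, so the PartitionHandler can be set explicitly.
    return stepBuilderFactory.get("importDataStep")
            .partitioner("worker", partitioner())
            .partitionHandler(partitionHandler)
            .build();
}
```

Since the handler is job-scoped, a new instance (with its own aggregation state) is created per job execution, which is what the Javadoc's scoping recommendation is about; whether that alone prevents replies from being crossed on a shared queue is exactly the open question here.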

0 Answers