
I am designing a Dataproc workflow template with multiple Spark jobs that run in sequence, one after the other. There could be scenarios where the workflow runs some jobs successfully but fails for others. Is there a way to rerun just the failed jobs once I have applied a workaround for the issues that caused them to fail in the first place? Please note that I am not looking for the job retry mechanism. I want to re-run the workflow while skipping the jobs that already succeeded.

Amol T K

1 Answer


Dataproc Workflows do not support this use case.

Please take a look at Cloud Composer, an Apache Airflow-based orchestration service that is more flexible and should be able to satisfy your use case.
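To illustrate, here is a minimal Airflow DAG sketch of that approach: each Spark job becomes its own task, so when one fails you can clear just that task and re-run it without repeating the tasks that already succeeded. All project, region, cluster, bucket, and class names below are placeholders, and `DataprocSubmitJobOperator` comes from the `apache-airflow-providers-google` package.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

PROJECT_ID = "my-project"    # placeholder
REGION = "us-central1"       # placeholder
CLUSTER_NAME = "my-cluster"  # placeholder


def spark_job(main_class: str) -> dict:
    """Build a Dataproc Spark job spec for the given main class."""
    return {
        "reference": {"project_id": PROJECT_ID},
        "placement": {"cluster_name": CLUSTER_NAME},
        "spark_job": {
            "jar_file_uris": ["gs://my-bucket/jobs.jar"],  # placeholder
            "main_class": main_class,
        },
    }


with DAG(
    dag_id="sequential_spark_jobs",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,  # trigger manually
    catchup=False,
) as dag:
    job_a = DataprocSubmitJobOperator(
        task_id="spark_job_a",
        project_id=PROJECT_ID,
        region=REGION,
        job=spark_job("com.example.JobA"),  # placeholder class
    )
    job_b = DataprocSubmitJobOperator(
        task_id="spark_job_b",
        project_id=PROJECT_ID,
        region=REGION,
        job=spark_job("com.example.JobB"),  # placeholder class
    )

    # Run in sequence. If spark_job_b fails, clear only that task in the
    # Airflow UI (or via `airflow tasks clear`) and it re-runs without
    # repeating spark_job_a.
    job_a >> job_b
```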

Igor Dvorzhak