Let's say we have a Composed Task in Spring Cloud Data Flow defined as:
JobA && JobB
JobA needs to write data to an external database, so I am using two datasource configurations.
One for Task and Batch processing (received from Spring Cloud Data Flow) and one for the actual data processing (defined in task properties at task execution).
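The second (external) datasource is declared as a separate bean, roughly like this (a simplified sketch; the property prefix and bean name are only illustrative, the real values come from the task properties):

import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ExternalDataSourceConfiguration {

    // Bound from the task properties passed at execution,
    // e.g. external.datasource.url, external.datasource.username, ...
    @Bean(name = "externalDataSource")
    @ConfigurationProperties(prefix = "external.datasource")
    public DataSource externalDataSource() {
        return DataSourceBuilder.create().build();
    }
}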
This is how I am overriding the batch datasource:
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.cloud.task.configuration.DefaultTaskConfigurer;
import org.springframework.context.annotation.Configuration;
import lombok.extern.slf4j.Slf4j;

@Configuration
@Slf4j
public class CustomTaskConfigurer extends DefaultTaskConfigurer {

    // Point the Spring Cloud Task repository at the qualified datasource
    @Autowired
    public CustomTaskConfigurer(@Qualifier("batchDataSource") DataSource dataSource) {
        super(dataSource);
        log.info("Batch datasource changed");
    }
}
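The batchDataSource bean that the qualifier refers to is declared along these lines (again a simplified sketch, assuming it is bound to the spring.datasource properties that Data Flow passes to the launched task):

import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class TaskDataSourceConfiguration {

    // The datasource Spring Cloud Data Flow provides, used for the Task/Batch metadata tables
    @Bean(name = "batchDataSource")
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource batchDataSource() {
        return DataSourceBuilder.create().build();
    }
}

The idea is that the Task/Batch metadata ends up in batchDataSource, while JobA's own writes go through externalDataSource.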
However, when the task runs in Spring Cloud Data Flow on Kubernetes and either datasource configuration is wrong, or the databases are unavailable, the parent (composed) task executes correctly (which is fine). It then launches JobA, which fails during datasource initialization (before the CommandLineRunner executes). JobA stays "unexecuted" with STATUS: UNKNOWN even though the Kubernetes pod ends up in STATUS: Error.
The question is: how do I make JobA register as failed when the context initialization of the Docker image fails, so that the composed task continues with JobB? I did not find any solution for handling exceptions before the Job actually starts.
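For illustration, Spring Boot does publish an ApplicationFailedEvent when context initialization fails, and a listener registered directly on the SpringApplication still fires in that case (sketch below; the class name is illustrative). As far as I can tell this only lets me log the failure: it does not mark JobA as failed in Data Flow and does not make the composed task continue with JobB.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationFailedEvent;
import org.springframework.context.ApplicationListener;

@SpringBootApplication
public class JobAApplication {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(JobAApplication.class);
        // Fires even when the context fails to start (e.g. bad datasource config),
        // but logging here does not change JobA's STATUS in Data Flow.
        app.addListeners(new ApplicationListener<ApplicationFailedEvent>() {
            @Override
            public void onApplicationEvent(ApplicationFailedEvent event) {
                System.err.println("Context initialization failed: " + event.getException());
            }
        });
        app.run(args);
    }
}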