
I am trying to compile the given SQL into a Flink JobGraph and submit it to YARN.

JobGraph jobGraph = streamExecutionEnv.getStreamGraph().getJobGraph();

new YarnDeployer().deployJob(jobGraph);

YarnDeployer is a custom class that uses Flink's YarnClusterDescriptor and ClusterSpecification APIs to submit the job.
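For context, a minimal sketch of what such a deployer might look like against the Flink 1.9-era YARN APIs. The `YarnClusterDescriptor` constructor arguments differ between Flink versions, so it is assumed to be built elsewhere; the `YarnDeployer` class itself is the asker's own and this is only an illustration:

```java
import org.apache.flink.client.deployment.ClusterSpecification;
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.yarn.YarnClusterDescriptor;

public class YarnDeployer {
    private final YarnClusterDescriptor descriptor;

    // Sketch only: constructing a YarnClusterDescriptor (Flink config,
    // YarnConfiguration, YarnClient, ...) is version-specific and omitted here.
    public YarnDeployer(YarnClusterDescriptor descriptor) {
        this.descriptor = descriptor;
    }

    public void deployJob(JobGraph jobGraph) throws Exception {
        // Resources requested for the per-job YARN application.
        ClusterSpecification spec = new ClusterSpecification.ClusterSpecificationBuilder()
                .setMasterMemoryMB(1024)
                .setTaskManagerMemoryMB(1024)
                .setSlotsPerTaskManager(1)
                .createClusterSpecification();

        // deployJobCluster submits the JobGraph as its own YARN application;
        // the boolean flag requests detached mode.
        ClusterClient<?> client = descriptor.deployJobCluster(spec, jobGraph, true);
        client.shutdown(); // close() in newer Flink versions
    }
}
```

This per-job deployment path does not require `env.execute()` at all, because the JobGraph is handed to the cluster directly rather than detected from a running `main()`.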

On EMR, I started a Flink YARN session and submitted the job using flink run.

I am getting the below error: "The program didn't contain a Flink job. Perhaps you forgot to call execute() on the execution environment."

Is it possible to run the JobGraph without calling execute()? I don't want to run continuous jobs.
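For what it's worth, `flink run` decides whether the jar "contained a Flink job" by whether `execute()` was reached inside `main()`; extracting the JobGraph yourself bypasses that detection, which is what produces the error above. If the JobGraph is instead handed to a `ClusterClient` directly, no `execute()` call is needed. A rough sketch against the 1.9-era `ClusterClient` API (the submission methods changed in 1.10, and the trivial pipeline here stands in for the compiled SQL job):

```java
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DirectSubmit {
    // The ClusterClient is assumed to be obtained from an existing
    // session cluster (e.g. via a cluster descriptor's retrieve method).
    public static void submit(ClusterClient<?> client) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder pipeline; the real job would come from the SQL planner.
        env.fromElements(1, 2, 3).map(x -> x * 2).print();

        // Instead of env.execute(), extract the JobGraph and submit it directly.
        JobGraph jobGraph = env.getStreamGraph().getJobGraph();
        client.submitJob(jobGraph, DirectSubmit.class.getClassLoader());
    }
}
```

Note that `getStreamGraph()` clears the environment's transformations in some versions, so the JobGraph should be extracted only once per pipeline.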

damjad
  • I think you must call env.execute() or the stream job will not execute – Snakienn Feb 12 '20 at 08:59
  • I am looking at https://github.com/apache/flink/blob/master/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNITCase.java as reference –  Feb 12 '20 at 11:58
  • Please give us more information: What source(s) are you using; what are you trying to avoid; and why aren't you using batch rather than streaming? – David Anderson Feb 12 '20 at 18:54
  • FYI, Flink 1.10 introduces new interfaces for job submission and related activities. See https://flink.apache.org/news/2020/02/11/release-1.10.0.html#unified-logic-for-job-submission. – David Anderson Feb 12 '20 at 18:57
  • I have a mix of batch sources and streaming sources. If I run continuous jobs using execute, each job would be running separately and I may not be able to scale. Also, if there are no messages from the streaming source, containers would be running idle. What I am thinking is to periodically submit the SQL job to YARN. –  Feb 12 '20 at 21:06

0 Answers