I have a question regarding the behaviour of Flink. Below is my code snippet. As you can see, some service supplies a list of SQL criteria (say about 10k SQLs) that Flink executes one by one. My issue is: whenever one of the SQLs gets updated, how do I tell Flink to work with the new SQL? One way I see is to stop and restart the Flink service, which I want to avoid, since the other SQL criteria need to keep running at all times; only the one being updated should be stopped/started/updated dynamically. Also, I don't want to submit the 10k SQLs as 10k different jobs. Is the behaviour I am looking for possible with Flink version 1.11?

env is the StreamExecutionEnvironment and tableEnv is the StreamTableEnvironment created from it...

Pseudo-code:

List<String> allConditionSqls = get_SQL_FROM_some_Service();
for (String sql : allConditionSqls) {
    Table table = tableEnv.sqlQuery(sql);
    tableEnv.toRetractStream(table, Row.class)
        .process(new ProcessFunction<Tuple2<Boolean, Row>, Object>() {
            @Override
            public void processElement(Tuple2<Boolean, Row> value, Context ctx, Collector<Object> out) throws Exception {
                Row ev = value.f1; // f0 is the retract flag, f1 is the row
                log.info(ev.toString());
                // more code here
            }
        });
}
ParagM

1 Answer


No, the only way to do this would be to run each query as a separate job. (For what it's worth, there are folks dynamically generating 10000's of Flink jobs on a daily basis -- it can be done.)
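A minimal sketch of that per-query approach, assuming each SQL string is handed to its own job at submission time (the class name, argument handling, and job naming here are illustrative, not from the question):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

// Hypothetical entry point: submit one instance of this job per SQL string,
// e.g. `flink run single-query-job.jar "<sql>"`. Updating a query then means
// cancelling and resubmitting only that one job, while the others keep running.
public class SingleQueryJob {
    public static void main(String[] args) throws Exception {
        String sql = args[0]; // the one SQL criterion this job executes

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        Table table = tableEnv.sqlQuery(sql);
        // Tuple2<Boolean, Row>: f0 is the retract flag, f1 is the row
        tableEnv.toRetractStream(table, Row.class).print();

        env.execute("query-job-" + Math.abs(sql.hashCode())); // per-query job name
    }
}
```

A small orchestration service that calls `flink run` (or the REST API) per SQL, and cancels/resubmits the affected job on change, would complete the picture.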

David Anderson