
Why does Flink split the pipeline into several jobs if there is an execute_insert in the pipeline?

docker-compose exec jobmanager ./bin/flink run --pyModule my.main -d --pyFiles /opt/pyflink/ -d
Job has been submitted with JobID 3b0e179dad500a362525f23e82e2c826
Job has been submitted with JobID 93d122a6331b4b9ec2578fe67e748a8e

End of the pipeline:

t_env.execute_sql("""
        CREATE TABLE mySink (
          id STRING,
          name STRING,
          data_ranges ARRAY<ROW<start BIGINT, end BIGINT>>,
          meta ARRAY<ROW<name STRING, text STRING>>,
          current_hour INT
        ) PARTITIONED BY (current_hour) WITH (
          'connector' = 'filesystem',
          'format' = 'avro',
          'path' = '/opt/pyflink-walkthrough/output/table',
          'sink.rolling-policy.rollover-interval' = '1 hour',
          'partition.time-extractor.timestamp-pattern' = '$current_hour',
          'sink.partition-commit.delay' = '1 hour',
          'sink.partition-commit.trigger' = 'process-time',
          'sink.partition-commit.policy.kind' = 'success-file'
        )
    """)
table = t_env.from_data_stream(
    ds,
    ds_schema,
).select('id, name, data_ranges, meta, current_hour').execute_insert("mySink")

If I comment out .execute_insert("mySink"), the pipeline is not split and only one job is submitted:

docker-compose exec jobmanager ./bin/flink run --pyModule eywa.main -d --pyFiles /opt/pyflink/ -d
Job has been submitted with JobID 814a105559b58d5f65e4de8ca8c0688e

1 Answer


This is explained in the section of the docs on execution behavior. In short, you can combine your currently separate pipelines into a single job if you wrap them in a statement set. Note that if you do, then those pipelines will be jointly planned and optimized.
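As a rough sketch of that pattern, reusing the t_env, table, and mySink names from the question (the commented-out second sink is a hypothetical placeholder for any additional pipeline, not something from the original code):

# Sketch only: register the INSERTs in one statement set instead of calling
# execute_insert() on each table, so everything is submitted as a single job.
stmt_set = t_env.create_statement_set()

# add_insert() only registers a pipeline; nothing is submitted yet.
stmt_set.add_insert("mySink", table)
# stmt_set.add_insert("anotherSink", another_table)  # hypothetical second pipeline

# A single execute() submits one job containing all registered pipelines,
# which are then planned and optimized together.
stmt_set.execute()

Here execute_insert() is dropped from the end of the Table API chain, since it is exactly the call that eagerly submits its own job; the statement set's execute() takes over job submission for all of the inserts at once.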

David Anderson