Questions tagged [flink-table-api]

Apache Flink is an open source platform for scalable batch and stream data processing. Flink supports batch and streaming analytics in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala. Apache Flink features the Table API (and SQL API) as unified APIs for stream and batch processing.

81 questions
1
vote
1 answer

Is there a Flink Table API equivalent to Window Functions using row_number(), rank(), dense_rank()?

In an attempt to discover the possibilities and limitations of the Flink Table API for use in a current project, I was trying to translate a Flink SQL statement into its equivalent Flink Table API version. For the most part, I am able to translate the…
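A common workaround, sketched minimally below: the Java Table API exposes no rowNumber()/rank() expression directly, so the OVER … ROW_NUMBER() clause is embedded via sqlQuery and the result is used as an ordinary Table. The sensors table, its columns, and the datagen source are hypothetical stand-ins so the snippet runs without external systems.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;

    public class TopNSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // Hypothetical source, just to make the sketch self-contained.
            tEnv.executeSql("CREATE TABLE sensors (sensor_id INT, reading DOUBLE, ts TIMESTAMP(3))"
                    + " WITH ('connector' = 'datagen')");
            // ROW_NUMBER per partition, expressed in SQL and mixed back into
            // the Table API pipeline as a regular Table object.
            Table latestPerSensor = tEnv.sqlQuery(
                    "SELECT sensor_id, reading FROM ("
                    + " SELECT *, ROW_NUMBER() OVER ("
                    + "   PARTITION BY sensor_id ORDER BY ts DESC) AS rn"
                    + " FROM sensors)"
                    + " WHERE rn = 1");
            latestPerSensor.execute().print();
        }
    }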
1
vote
1 answer

"Cannot map checkpoint/savepoint state for operator" when using fromChangelogStream

I want to use the savepoint mechanism to move existing jobs from one version of Flink to another, by: stopping a job with a savepoint, then creating a new job from that savepoint on the new version. Up to Flink 1.14 I have no problem, but in Flink…
Colin Smetz
  • 131
  • 7
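For reference, the fromChangelogStream shape the question refers to, following the example in Flink's DataStream API integration documentation. Note that savepoint compatibility for Table API programs across Flink versions is generally not guaranteed, since operator IDs are derived from the optimized plan; that is presumably what the error reflects.

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;
    import org.apache.flink.types.RowKind;

    public class ChangelogSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            // An upsert-style changelog, as in the official example.
            DataStream<Row> changelog = env.fromElements(
                    Row.ofKind(RowKind.INSERT, "alice", 12),
                    Row.ofKind(RowKind.UPDATE_BEFORE, "alice", 12),
                    Row.ofKind(RowKind.UPDATE_AFTER, "alice", 100));
            // Interpret the stream of +I/-U/+U records as a Table.
            Table table = tEnv.fromChangelogStream(changelog);
            table.execute().print();
        }
    }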
1
vote
0 answers

In the Flink Table API, how do you use Postgres timestamps in scan.partition.column, scan.partition.lower-bound, etc.?

In Flink 1.13, how do you configure a CREATE TABLE statement to partition by a Postgres timestamp column? Things I have tried: in Postgres, I have a column named 'my_timestamp' of type TIMESTAMP WITHOUT TIME ZONE; in my Flink CREATE TABLE…
Jordan Morris
  • 2,101
  • 2
  • 24
  • 41
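A sketch of what one would naturally try: the JDBC connector's partitioned-scan options with timestamp-literal bounds. The URL, credentials, and table names are hypothetical, and whether string timestamp bounds parse at all is exactly the open question here; in the 1.13 documentation the two bound options appear to be integer-typed, which may be why this has no answer.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class JdbcPartitionSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inBatchMode().build());
            // Hypothetical connection details; the timestamp-shaped bounds are
            // the part under test.
            tEnv.executeSql("CREATE TABLE pg_events ("
                    + "  id BIGINT,"
                    + "  my_timestamp TIMESTAMP(3)"
                    + ") WITH ("
                    + "  'connector' = 'jdbc',"
                    + "  'url' = 'jdbc:postgresql://localhost:5432/mydb',"
                    + "  'table-name' = 'events',"
                    + "  'scan.partition.column' = 'my_timestamp',"
                    + "  'scan.partition.num' = '10',"
                    + "  'scan.partition.lower-bound' = '2021-01-01 00:00:00',"
                    + "  'scan.partition.upper-bound' = '2021-12-31 23:59:59'"
                    + ")");
        }
    }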
1
vote
0 answers

Unexpected type: BINARY

I am trying to read Parquet files via the Flink Table API, and it throws this error when I select one of the timestamps. My Parquet table is something like this. I create the table with this SQL: CREATE TABLE MyDummyTable ( `id` INT, …
None
  • 330
  • 2
  • 16
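A hedged mitigation sketch: declare the offending Parquet column as STRING at scan time and cast in the query, assuming the physical BINARY column is UTF-8 encoded text. The table name echoes the question; the path is hypothetical.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class ParquetReadSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inBatchMode().build());
            // Declaring the problematic column as STRING sidesteps the BINARY
            // mismatch at scan time; the path is a placeholder.
            tEnv.executeSql("CREATE TABLE MyDummyTable ("
                    + " `id` INT,"
                    + " `created_at` STRING"
                    + ") WITH ("
                    + " 'connector' = 'filesystem',"
                    + " 'path' = 'file:///tmp/parquet-in',"
                    + " 'format' = 'parquet'"
                    + ")");
            // The cast then happens in the query instead of in the scan.
            tEnv.sqlQuery("SELECT id, TO_TIMESTAMP(created_at) AS created_at FROM MyDummyTable")
                .execute().print();
        }
    }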
1
vote
1 answer

java.lang.NoClassDefFoundError: org/apache/flink/table/types/logical/LogicalTypeRoot

I am trying to read Parquet files via ParquetAvroInputFormat but it throws NoClassDefFoundError exceptions. I can find the class in the flink-table-common library, at org.apache.flink.table.types.logical.LogicalTypeRoot, but the…
None
  • 330
  • 2
  • 16
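For context, LogicalTypeRoot does live in flink-table-common, so NoClassDefFoundError at run time usually means that jar is absent from the runtime classpath (for example scoped as "provided" without actually being in the cluster's lib/ directory). A hedged Maven fragment; the version is hypothetical:

    <!-- Hypothetical version; LogicalTypeRoot is in flink-table-common.
         If this is marked "provided", the jar must already be on the cluster
         classpath (e.g. Flink's lib/ directory) when the job runs. -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table-common</artifactId>
        <version>1.14.4</version>
    </dependency>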
1
vote
1 answer

Apache Flink Create Table

I'm trying to create a table in Flink using the Table API in Java (in Eclipse) with the following code. EnvironmentSettings settings = EnvironmentSettings .newInstance() .inStreamingMode() .build(); TableEnvironment…
Ajj
  • 13
  • 2
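The excerpt's snippet completed into a runnable shape, assuming a pure Table API program is intended; the orders schema and the datagen connector are placeholders. If TableEnvironment.create itself fails, a planner dependency missing from the classpath is a common culprit.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class CreateTableSketch {
        public static void main(String[] args) {
            EnvironmentSettings settings = EnvironmentSettings
                    .newInstance()
                    .inStreamingMode()
                    .build();
            TableEnvironment tableEnv = TableEnvironment.create(settings);
            // Hypothetical schema; datagen needs no external system.
            tableEnv.executeSql(
                    "CREATE TABLE orders (order_id BIGINT, amount DOUBLE)"
                    + " WITH ('connector' = 'datagen')");
            tableEnv.from("orders").printSchema();
        }
    }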
1
vote
1 answer

How can I join two continuous queries in Flink Table API?

I'd like to stack two continuous queries (views based on a single upstream connector) and eventually get a consistent sink result at the end of the pipeline. The first view will remove the duplicated events per source. The second view…
Hako
  • 361
  • 1
  • 2
  • 9
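A minimal sketch of the stacking itself, with hypothetical table and column names: view one deduplicates, view two aggregates on top, and the whole pipeline is a single changelog, so the sink must accept updates (upsert or retract) for the end result to converge.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;

    public class StackedViewsSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // Hypothetical stand-in for the single upstream connector.
            tEnv.executeSql("CREATE TABLE events (id INT, source STRING, ts TIMESTAMP(3))"
                    + " WITH ('connector' = 'datagen')");
            // View 1: deduplication per id (first row wins).
            Table deduped = tEnv.sqlQuery(
                    "SELECT id, source, ts FROM ("
                    + " SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts ASC) AS rn"
                    + " FROM events) WHERE rn = 1");
            tEnv.createTemporaryView("deduped", deduped);
            // View 2: a continuous aggregate stacked on view 1.
            Table perSource = tEnv.sqlQuery(
                    "SELECT source, COUNT(*) AS cnt FROM deduped GROUP BY source");
            perSource.execute().print();
        }
    }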
1
vote
1 answer

Difference between Flink mysql and mysql-cdc connector?

In order to enrich the data stream, we are planning to connect a MySQL (MemSQL) server to our existing Flink streaming application. As we can see, Flink provides a Table API with a JDBC connector…
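A sketch of the contrast, with hypothetical hosts and credentials: 'jdbc' performs a bounded snapshot scan (or per-key lookups when used in a lookup join), while 'mysql-cdc', which comes from the separate flink-cdc-connectors project rather than Flink itself, tails the MySQL binlog and yields an unbounded changelog.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class MysqlConnectorsSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // JDBC: bounded snapshot at scan time; also usable for lookup joins.
            tEnv.executeSql("CREATE TABLE users_jdbc (id BIGINT, name STRING) WITH ("
                    + " 'connector' = 'jdbc',"
                    + " 'url' = 'jdbc:mysql://localhost:3306/mydb',"
                    + " 'table-name' = 'users')");
            // CDC: continuous changelog from the binlog (requires the separate
            // flink-connector-mysql-cdc dependency).
            tEnv.executeSql("CREATE TABLE users_cdc (id BIGINT, name STRING) WITH ("
                    + " 'connector' = 'mysql-cdc',"
                    + " 'hostname' = 'localhost',"
                    + " 'port' = '3306',"
                    + " 'username' = 'flink',"
                    + " 'password' = 'secret',"
                    + " 'database-name' = 'mydb',"
                    + " 'table-name' = 'users')");
        }
    }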
1
vote
1 answer

How to get the last result when doing a table join (using toRetractStream) in Flink SQL

Here is my code, using the Flink SQL API to join two tables: tEnv.createTemporaryView("A", streamA, "speed_sum,cnt,window_start_time,window_end_time"); tEnv.createTemporaryView("B", streamB, "speed_sum,cnt,window_start_time,window_end_time"); String…
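A minimal sketch of what toRetractStream hands back, with a hypothetical datagen source: each update to the aggregate or join arrives as a retract/add pair, and the "last result" per key is simply the latest record whose boolean flag is true.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class RetractSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            tEnv.executeSql("CREATE TABLE src (k STRING, v INT)"
                    + " WITH ('connector' = 'datagen', 'rows-per-second' = '5')");
            Table agg = tEnv.sqlQuery("SELECT k, SUM(v) AS total FROM src GROUP BY k");
            // f0 == false retracts a stale row, f0 == true adds the current
            // one; the freshest value per key is the latest f0 == true record.
            tEnv.toRetractStream(agg, Row.class)
                .filter(change -> change.f0)
                .print();
            env.execute("retract-sketch");
        }
    }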
1
vote
1 answer

How to map Java LocalDateTime to Flink TIMESTAMP when using the Table API

My code is something like: DataStreamSource<…> src = ...; tableEnv.createTemporaryView("input_table", src, $("name"), $("dt")); I then realized that the field dt is not a TIMESTAMP after trying to call date_format on…
gfytd
  • 1,747
  • 2
  • 24
  • 47
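One documented way to pin the type, sketched with a hypothetical two-field stream: declare the column's data type explicitly in a Schema when bridging from the DataStream API, rather than relying on the reflective default derivation.

    import java.time.LocalDateTime;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.DataTypes;
    import org.apache.flink.table.api.Schema;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class TimestampMappingSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            DataStream<Tuple2<String, LocalDateTime>> src =
                    env.fromElements(Tuple2.of("a", LocalDateTime.now()));
            // f0/f1 are the physical Tuple2 field names; the declared types
            // override the reflectively derived ones.
            Table t = tEnv.fromDataStream(src, Schema.newBuilder()
                    .column("f0", DataTypes.STRING())
                    .column("f1", DataTypes.TIMESTAMP(3))
                    .build());
            tEnv.createTemporaryView("input_table", t);
            tEnv.sqlQuery("SELECT f0 AS name, DATE_FORMAT(f1, 'yyyy-MM-dd') FROM input_table")
                .execute().print();
        }
    }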
1
vote
1 answer

Wrong result in Apache Flink full outer join

I have two data streams which were created from two tables, like: Table orderRes1 = ste.sqlQuery( "SELECT orderId, userId, SUM(bidPrice) as q FROM " + tble + " Group by orderId, userId"); Table orderRes2 =…
Eli m
  • 79
  • 1
  • 10
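A sketch of why results can look wrong, with hypothetical tables: a streaming FULL OUTER JOIN over non-windowed aggregates first emits null-padded rows and later retracts them as the other side catches up, so a consumer that ignores the retraction flag sees stale intermediates.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class OuterJoinSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            tEnv.executeSql("CREATE TABLE bids (orderId INT, userId INT, bidPrice DOUBLE)"
                    + " WITH ('connector' = 'datagen', 'rows-per-second' = '5')");
            Table left = tEnv.sqlQuery(
                    "SELECT orderId, SUM(bidPrice) AS q FROM bids GROUP BY orderId");
            Table right = tEnv.sqlQuery(
                    "SELECT orderId, COUNT(*) AS cnt FROM bids GROUP BY orderId");
            tEnv.createTemporaryView("l", left);
            tEnv.createTemporaryView("r", right);
            Table joined = tEnv.sqlQuery(
                    "SELECT l.orderId, l.q, r.cnt FROM l FULL OUTER JOIN r ON l.orderId = r.orderId");
            // true = add, false = retraction of an earlier (now stale) row.
            tEnv.toRetractStream(joined, Row.class).print();
            env.execute("outer-join-sketch");
        }
    }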
1
vote
0 answers

Out of memory error (heap) when storing Parquet files using the Flink Table API (Flink 1.12.0) in Google Cloud Storage

We are currently using the Flink Table API (Flink 1.12.0) to stream data from Kafka and store it in Google Cloud Storage. The file format we are using to store data is Parquet. Initially the Flink job worked perfectly…
Aswin Ram
  • 35
  • 5
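One knob usually examined first, as a hedged sketch: bulk formats like Parquet buffer whole row groups in memory and part files only roll on checkpoints, so a long checkpoint interval (the 60 s below is illustrative, not from the question) multiplied by many open buckets can exhaust the heap.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class CheckpointSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Parquet part files are only finalized on checkpoints, so the
            // interval bounds how long writers buffer data in memory.
            env.enableCheckpointing(60_000L);
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            // ... define the Kafka source and the filesystem/Parquet sink here.
        }
    }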
1
vote
1 answer

Flink Table and Hive Catalog storage

I have a Kafka topic and a Hive Metastore. I want to join the incoming events from the Kafka topic with records from the metastore. I saw that Flink can use a catalog to query the Hive Metastore. So I see two ways to handle this: using…
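A sketch of the catalog route, with hypothetical catalog name, database, and hive-conf path (and requiring Flink's Hive connector dependencies): once the HiveCatalog is registered, metastore tables are addressable by qualified name and can be joined against the Kafka-backed table in plain SQL.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;
    import org.apache.flink.table.catalog.hive.HiveCatalog;

    public class HiveCatalogSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // Catalog name, database, and hive-conf directory are placeholders.
            HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive/conf");
            tEnv.registerCatalog("myhive", hive);
            // Metastore tables are then addressable by qualified name, e.g.:
            // tEnv.sqlQuery("SELECT ... FROM kafka_events e"
            //         + " JOIN myhive.`default`.dim d ON e.id = d.id");
        }
    }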
1
vote
1 answer

Can we connect to/from a Kafka compacted topic with the Flink Kafka upsert connector?

This feels obvious, but I'm asking anyway since I can't find a clear confirmation in the documentation: the semantics of the Flink Table API upsert-kafka connector available in Flink 1.12 match pretty well the semantics of a Kafka compacted topic:…
Svend
  • 6,352
  • 1
  • 25
  • 38
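That reading matches the 1.12 documentation: upsert-kafka interprets records as upserts by key and a null value as a DELETE (tombstone), which is exactly the compacted-topic contract. A minimal DDL sketch with hypothetical topic, formats, and bootstrap servers:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class UpsertKafkaSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());
            // The PRIMARY KEY drives the Kafka record key; a null value is
            // read/written as a deletion, i.e. a tombstone.
            tEnv.executeSql("CREATE TABLE balances ("
                    + " user_id STRING,"
                    + " balance DOUBLE,"
                    + " PRIMARY KEY (user_id) NOT ENFORCED"
                    + ") WITH ("
                    + " 'connector' = 'upsert-kafka',"
                    + " 'topic' = 'balances',"
                    + " 'properties.bootstrap.servers' = 'localhost:9092',"
                    + " 'key.format' = 'raw',"
                    + " 'value.format' = 'json')");
        }
    }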
1
vote
0 answers

Apache Flink 1.11 Streaming Sink to S3

I'm using the Flink FileSystem SQL connector to read events from Kafka and write to S3 (using MinIO). Here is my code: exec_env = StreamExecutionEnvironment.get_execution_environment() exec_env.set_parallelism(1) # start a checkpoint every 10…
Vidura Mudalige
  • 810
  • 2
  • 18
  • 31
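The question uses PyFlink, but the moving parts are the same; a Java sketch with hypothetical bucket, format, and thresholds: the streaming filesystem sink only commits files on checkpoints, and the rolling-policy options bound part-file size and age for row formats.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class S3SinkSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Without checkpoints the streaming filesystem sink never commits
            // its in-progress files, so nothing ever "appears" in S3.
            env.enableCheckpointing(10_000L);
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
            // Bucket path and rolling thresholds are placeholders.
            tEnv.executeSql("CREATE TABLE s3_sink ("
                    + " id BIGINT, payload STRING"
                    + ") WITH ("
                    + " 'connector' = 'filesystem',"
                    + " 'path' = 's3://my-bucket/events',"
                    + " 'format' = 'json',"
                    + " 'sink.rolling-policy.file-size' = '128MB',"
                    + " 'sink.rolling-policy.rollover-interval' = '15 min')");
        }
    }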