In Flink 1.13, how do you configure a `CREATE TABLE` statement to partition on a Postgres timestamp column?
Things I have tried:
In Postgres, I have a column named `my_timestamp` of type `TIMESTAMP WITHOUT TIME ZONE`.
In my Flink `CREATE TABLE` statement, I'm specifying it as the partition column like so:
```
...
my_timestamp TIMESTAMP WITHOUT TIME ZONE
...
'scan.partition.column' = 'my_timestamp',
'scan.partition.num' = '4',
'scan.partition.lower-bound' = '" + lower_bound_bigint + "',
'scan.partition.upper-bound' = '" + upper_bound_bigint + "'
```
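For reference, here is a fuller sketch of the statement I'm assembling, with the Java string concatenation replaced by example values (the connector options, table name, and epoch-second bounds below are illustrative placeholders, not my real configuration):

```sql
-- Sketch only: option names follow the Flink 1.13 JDBC connector;
-- bound values are hypothetical epoch seconds.
CREATE TABLE my_source (
    my_timestamp TIMESTAMP WITHOUT TIME ZONE
    -- ... other columns elided ...
) WITH (
    'connector' = 'jdbc',
    'url' = 'jdbc:postgresql://localhost:5432/mydb',
    'table-name' = 'my_table',
    'scan.partition.column' = 'my_timestamp',
    'scan.partition.num' = '4',
    'scan.partition.lower-bound' = '1609459200',  -- 2021-01-01 00:00:00 UTC
    'scan.partition.upper-bound' = '1612137600'   -- 2021-02-01 00:00:00 UTC
);
```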
where the variables `lower_bound_bigint` and `upper_bound_bigint` are epoch seconds of Java type `long`. `scan.partition.lower-bound` and `scan.partition.upper-bound` don't accept `String` representations of times, whereas they seem to accept `long`s.
However, this results in malformed queries being sent to Postgres, producing the error:

```
org.postgresql.util.PSQLException: ERROR: operator does not exist: timestamp without time zone >= bigint
```
I thought Flink would take care of converting the `long` timestamps to Postgres `TIMESTAMP WITHOUT TIME ZONE` in the generated `WHERE` clause, but apparently not.
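For illustration, on the Postgres side this is the difference between the comparison that fails and one that would type-check (`my_table` and the epoch value `1609459200` are hypothetical stand-ins for my actual table and bound):

```sql
-- What the error message implies is being evaluated (fails):
--   my_timestamp >= 1609459200
-- ERROR: operator does not exist: timestamp without time zone >= bigint

-- The same predicate type-checks once the epoch value is cast:
SELECT *
FROM my_table
WHERE my_timestamp >= to_timestamp(1609459200)::timestamp;
```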
The documentation says you can use timestamps for partitioning, but I'm not sure how to complete the pattern, nor how to intercept the `long`s in the query's generated `WHERE` clause to manually cast them back to `TIMESTAMP WITHOUT TIME ZONE`, if that's what's needed.