In order to build sources & sinks on top of existing ones (as opposed to building them from scratch or with extra boilerplate), I'd like to:
- Append a conversion function `f: A -> B` to a `Source<A>` in order to get a `Source<B>` (corresponds to `map` in FP circles)
- Prepend a conversion function `f: B -> A` to a `Sink<A>` in order to get a `Sink<B>` (corresponds to `contramap` in FP circles)
In Scala, this pattern is commonly used by JSON libraries such as Circe:
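For example, Circe's `Decoder` exposes `map` and its `Encoder` exposes `contramap`, so a wrapper type can reuse the existing `String` instances (a minimal sketch; the `UserId` type is made up for illustration):

```scala
import io.circe.{Decoder, Encoder}

// A wrapper type we want to (de)serialize via its underlying String.
final case class UserId(value: String)

object UserId {
  // map: turn the existing Decoder[String] into a Decoder[UserId].
  implicit val decoder: Decoder[UserId] =
    Decoder[String].map(UserId(_))

  // contramap: turn the existing Encoder[String] into an Encoder[UserId].
  implicit val encoder: Encoder[UserId] =
    Encoder[String].contramap(_.value)
}
```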
Does this pattern apply well to the case at hand (Flink sources & sinks)?
Otherwise, what would be the recommended way of approaching this problem?
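To make the ask concrete, here is roughly the combinator I have in mind for the sink side, sketched against the legacy `SinkFunction` interface (the names `SinkOps`, `ContramappedSink` and `contramap` are mine; note the wrapper does not forward rich-function lifecycle calls or checkpointing hooks):

```scala
import org.apache.flink.streaming.api.functions.sink.SinkFunction

object SinkOps {
  // "contramap": prepend f: B => A to a SinkFunction[A] to obtain a SinkFunction[B].
  // Caveat: open/close of a RichSinkFunction and checkpoint hooks are NOT forwarded,
  // so this is only safe for plain, stateless sinks; f must also be serializable.
  class ContramappedSink[B, A](underlying: SinkFunction[A], f: B => A)
      extends SinkFunction[B] {

    override def invoke(value: B, context: SinkFunction.Context): Unit =
      underlying.invoke(f(value), context)
  }

  def contramap[B, A](sink: SinkFunction[A])(f: B => A): SinkFunction[B] =
    new ContramappedSink(sink, f)
}
```

At the plain DataStream level both directions seem to be covered by `.map` before/after the operator anyway; as far as I can tell, a wrapper like the above only becomes necessary when an API requires the `SinkFunction` object itself, which is exactly the Table/SQL case below.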
As a special instance of the problem, you might consider the following example:
- You have an existing DataStream connector (source & sink) which works for a fixed data type
- You want to reuse that connector for the Table/SQL API, which requires `RowData` as the data type (see the sketch after this list)
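With the `contramap` sketch from above, reusing a hypothetical `SinkFunction[MyEvent]` from the existing connector might then look like this (the `MyEvent` type and the `RowData` field positions/types are made up):

```scala
import org.apache.flink.streaming.api.functions.sink.SinkFunction
import org.apache.flink.table.data.RowData
import SinkOps.contramap // from the sketch above

// Hypothetical fixed data type of the existing DataStream connector.
final case class MyEvent(id: String, amount: Long)

object Example {
  val existingSink: SinkFunction[MyEvent] = ??? // provided by the existing connector

  // Prepend RowData => MyEvent to obtain a sink usable from the Table/SQL stack.
  val rowDataSink: SinkFunction[RowData] =
    contramap(existingSink) { row =>
      MyEvent(row.getString(0).toString, row.getLong(1))
    }
}
```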
The official docs already touch upon this here:
In particular:
> If you want to develop a connector that needs to bridge with DataStream APIs (i.e. if you want to adapt a DataStream connector to the Table API), you need to add this dependency: "org.apache.flink:flink-table-api-java-bridge:1.16-SNAPSHOT"
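In sbt terms (assuming an sbt build), that would be:

```scala
libraryDependencies += "org.apache.flink" % "flink-table-api-java-bridge" % "1.16-SNAPSHOT"
```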
Here you can see this in action within the Kafka connector:
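As far as I understand, the bridging boils down to implementing `DynamicTableSink` and handing a `SinkFunction[RowData]` to the planner via `SinkFunctionProvider`. A simplified sketch, reusing `MyEvent` and `contramap` from above and assuming an append-only sink:

```scala
import org.apache.flink.streaming.api.functions.sink.SinkFunction
import org.apache.flink.table.connector.ChangelogMode
import org.apache.flink.table.connector.sink.{DynamicTableSink, SinkFunctionProvider}
import SinkOps.contramap // from the sketch above

// Sketch: exposing the contramapped sink to the Table/SQL planner.
class MyEventDynamicSink(underlying: SinkFunction[MyEvent]) extends DynamicTableSink {

  override def getChangelogMode(requested: ChangelogMode): ChangelogMode =
    ChangelogMode.insertOnly() // assumption: the sink is append-only

  override def getSinkRuntimeProvider(
      context: DynamicTableSink.Context): DynamicTableSink.SinkRuntimeProvider =
    SinkFunctionProvider.of(contramap[org.apache.flink.table.data.RowData, MyEvent](underlying) { row =>
      MyEvent(row.getString(0).toString, row.getLong(1))
    })

  override def copy(): DynamicTableSink = new MyEventDynamicSink(underlying)

  override def asSummaryString(): String = "MyEvent sink"
}
```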
Finally, see also the original question posted in the User mailing list: