I have a pipeline whose sink is a BigQuery table. I need to perform some steps only after the data has been written to BigQuery: running queries on that table, reading data from it, and writing the results to a different table.
How can I achieve this? Should I create a separate pipeline for the follow-up steps? If so, triggering it after the first pipeline finishes seems to be another problem in itself.
If none of the above works, is it possible to launch another Dataflow job (from a template) from within a running pipeline?
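For context, the orchestration I have in mind looks roughly like this (a sketch using the Python SDK; the bucket, project, dataset, table names, and the follow-up query are all placeholders):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from google.cloud import bigquery

options = PipelineOptions()  # runner, project, region, etc. from CLI args

# First pipeline: write to the sink table (all names are placeholders).
with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.csv")
     | "Parse" >> beam.Map(lambda line: {"value": line})
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           "my-project:my_dataset.sink_table"))
# Exiting the `with` block calls run() and wait_until_finish(),
# so everything below executes only after the pipeline has completed.

# Follow-up steps: query the sink table and write to a different table,
# here done with the plain BigQuery client rather than a second pipeline.
client = bigquery.Client(project="my-project")
job = client.query(
    "SELECT value, COUNT(*) AS n "
    "FROM `my-project.my_dataset.sink_table` GROUP BY value",
    job_config=bigquery.QueryJobConfig(
        destination="my-project.my_dataset.other_table",
        write_disposition="WRITE_TRUNCATE",
    ),
)
job.result()  # block until the query job finishes
```

This only works if the driver program stays alive to wait on the first job, which is exactly what I'm unsure about, hence the question.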
I'd really appreciate some help with this. Thanks.