I want to create a generic pipeline where I can pass a table name or a custom SQL query as input and load the required data from BigQuery into SQL Server. The pipeline should handle both the daily incremental load and the initial historical load (around 100 GB).
I am trying to build it with Apache Beam (Dataflow), where I am facing some coding challenges, but before diving deeper into Dataflow development I want to understand the best way to extract data from BigQuery and load it into any database (Oracle, SQL Server, Postgres, etc.). Is there a better-optimized approach than Dataflow? For context, a minimal sketch of what I am attempting is below.
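This is roughly the shape of the pipeline I have been trying to write with the Beam Python SDK. The query, the row schema (`OrderRow`), and all JDBC connection details are placeholders for illustration, not my real configuration:

```python
import typing

import apache_beam as beam
from apache_beam import coders
from apache_beam.io.jdbc import WriteToJdbc
from apache_beam.options.pipeline_options import PipelineOptions

# WriteToJdbc expects schema'd (NamedTuple) rows with a registered RowCoder.
# OrderRow is a made-up example schema, not my actual table.
class OrderRow(typing.NamedTuple):
    order_id: int
    amount: float

coders.registry.register_coder(OrderRow, coders.RowCoder)

# Placeholder query; in the real pipeline this (or a table name) would be
# passed in as a parameter.
SOURCE_QUERY = "SELECT order_id, amount FROM `my_project.my_dataset.orders`"

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "ReadFromBQ" >> beam.io.ReadFromBigQuery(
            query=SOURCE_QUERY, use_standard_sql=True)
        # ReadFromBigQuery yields plain dicts; convert them to the
        # NamedTuple type so the JDBC sink can encode them.
        | "ToRow" >> beam.Map(
            lambda d: OrderRow(order_id=d["order_id"], amount=d["amount"])
        ).with_output_types(OrderRow)
        | "WriteToSqlServer" >> WriteToJdbc(
            table_name="dbo.orders",
            driver_class_name="com.microsoft.sqlserver.jdbc.SQLServerDriver",
            jdbc_url="jdbc:sqlserver://<host>:1433;databaseName=<db>",
            username="<user>",
            password="<password>",
        )
    )
```

Note that `WriteToJdbc` is a cross-language transform backed by Java's JdbcIO, so it needs a Java runtime available for the expansion service, which is part of why I am wondering whether Dataflow/Beam is even the right tool here.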