Is there any documentation on automating and scheduling data transformation pipelines built with GCP, BigQuery, and JupyterLab?
For instance, suppose a project contains 6 BigQuery tables. I would like to apply transformations to these tables in 3 JupyterLab notebooks, aggregate the resulting dataframes, and write the output to a new BigQuery table via an automated, scheduled pipeline.
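To make the aggregation step concrete, here is a minimal sketch of what one of the notebooks does today. The dataframes, column names, and destination table are all hypothetical stand-ins; in practice each frame would come from BigQuery (e.g. via `google.cloud.bigquery.Client().query(sql).to_dataframe()`), and the result would be written back (e.g. with `pandas.DataFrame.to_gbq`):

```python
import pandas as pd

# Toy stand-ins for the per-notebook transformation outputs; in practice each
# frame would be loaded from one of the BigQuery tables.
sales_a = pd.DataFrame({"region": ["east", "west"], "revenue": [100, 200]})
sales_b = pd.DataFrame({"region": ["east", "west"], "revenue": [50, 75]})

def aggregate(frames):
    """Concatenate the transformed frames and sum revenue by region."""
    combined = pd.concat(frames, ignore_index=True)
    return combined.groupby("region", as_index=False)["revenue"].sum()

result = aggregate([sales_a, sales_b])
print(result)
# The aggregated frame could then be written back to BigQuery, e.g.
# result.to_gbq("my_dataset.aggregated_sales", if_exists="replace")
# (dataset and table names here are hypothetical).
```

The question is essentially how to run this kind of notebook logic on a schedule rather than by hand.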
I was able to produce the above results by running the notebooks manually, but I would love to learn the automated approaches, along with relevant resources.
Thanks in advance.