We are currently saving events to BigQuery by uploading files to Google Cloud Storage and then loading those files into BigQuery.
We have a very active application running on roughly 300 nodes, saving around 1 billion events per day.
We now plan to change this to use the "new" streaming API.
My concern is that our current solution creates the table if it does not exist, which the streaming API does not do. (Our event tables are sharded by game + month to reduce the data we have to query.)
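To make the concern concrete, the per-node logic we would need looks roughly like the sketch below. The client here is an in-memory stub, and the table-naming scheme (`events_<game>_<YYYYMM>`) is just an illustration of our game + month sharding, not our actual naming; with the real BigQuery API, attempting to create an existing table returns HTTP 409, which plays the same role as the `ValueError` here:

```python
from datetime import date

def shard_table_id(game, month):
    # Hypothetical naming scheme: one table per game and month,
    # e.g. "events_mygame_201510".
    return "events_{}_{:%Y%m}".format(game, month)

class StubBigQueryClient:
    """In-memory stand-in for the BigQuery API, for illustration only."""
    def __init__(self):
        self.tables = set()

    def create_table(self, table_id):
        if table_id in self.tables:
            # The real API would return HTTP 409 "duplicate" here.
            raise ValueError("duplicate")
        self.tables.add(table_id)

def ensure_table(client, table_id):
    # Idempotent create: every node calls this before streaming.
    # A "duplicate" error just means another node created it first.
    try:
        client.create_table(table_id)
    except ValueError:
        pass

client = StubBigQueryClient()
table = shard_table_id("mygame", date(2015, 10, 1))
ensure_table(client, table)  # first node creates the table
ensure_table(client, table)  # later nodes are a no-op
print(table)  # -> events_mygame_201510
```

The open question is whether every node should really race on this create-if-missing call, or whether there is a better-supported way to do it.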
What is the best way to solve this? I.e., having 300+ nodes streaming data to BigQuery while new tables get created as needed.
Thanks in advance!
/Gunnar Eketrapp