As far as I know, Google BigQuery follows an "eventual consistency" architecture, meaning that table creation, schema changes, and data imports are not synchronous.
I'm building a system that synchronizes a couple of frequently updated tables to BQ on a schedule. BQ itself is not well suited to row-level updates, so my approach is to drop all previous data and re-import everything (the data volume for these tables is low, so this seems feasible for now). The problem shows up around table re-creation: I delete the existing table, create a new one (providing the schema), and insert the data (via insert_rows, which does a streaming insert, AFAIK), roughly as in the sketch below.
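A minimal sketch of the flow I'm describing, assuming a recent google-cloud-bigquery Python client; the project/dataset/table names and the schema are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # placeholder

# Placeholder schema; in reality it may change between runs.
schema = [
    bigquery.SchemaField("id", "INTEGER"),
    bigquery.SchemaField("name", "STRING"),
]

# Drop the previous table (ignore the error if it doesn't exist yet).
client.delete_table(table_id, not_found_ok=True)

# Re-create the table with the (possibly changed) schema.
table = client.create_table(bigquery.Table(table_id, schema=schema))

# Stream the fresh rows in; this is the step where the rows sometimes
# seem to vanish after the delete/create above.
rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
errors = client.insert_rows(table, rows)
if errors:
    raise RuntimeError(f"Streaming insert reported errors: {errors}")
```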
What I observe is inconsistent: most of the time (though not always) I end up with an empty table that has the new schema. My understanding is that this can happen when the streaming insert still knows about the old table and writes the rows into it, and only afterwards does the deletion/re-creation propagate.
So my question is: how can I reliably detect the point at which table/schema changes have converged across all nodes/regions/etc.? Or is there another way to reliably re-import the data when the schema may have changed?