According to the Cloud Bigtable performance docs, I should have a certain amount of data to ensure the highest throughput.
Under "Causes of slower performance" it says:
The workload isn't appropriate for Cloud Bigtable. If you test with a small amount (< 300 GB) of data
Does this limit apply to the table's size or to the total size of the instance?
I have one table of 100 GB and another of 1 TB, and I want to know whether I should merge them.