Google BigTable uses an LSM-tree as its core storage data structure. An LSM-tree can use different merge strategies; the two most common are (1) leveled merging, which is more read-optimized, and (2) tiered merging, which is more write-optimized. Either strategy can further be configured by adjusting the size ratio between adjacent levels.
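To make the size-ratio knob concrete, here is a back-of-the-envelope sketch (the entry counts and memtable size are made-up assumptions for illustration, not BigTable defaults):

```python
import math

def num_levels(total_entries, memtable_entries, T):
    """Number of LSM-tree levels for a given size ratio T (illustrative)."""
    return max(1, math.ceil(math.log(total_entries / memtable_entries, T)))

# Assumed numbers: 1 billion entries, a memtable holding 1 million entries.
for T in (2, 4, 10):
    print(T, num_levels(1_000_000_000, 1_000_000, T))
# T=2 -> 10 levels, T=4 -> 5 levels, T=10 -> 3 levels: under leveling,
# a larger T means fewer levels (fewer runs to check on reads) but more
# rewriting within each level (costlier writes).
```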
I have not been able to find any documentation of BigTable's default behavior in these respects, or of whether it can be tuned. As a result, it is hard to understand its default performance characteristics and how to adapt them to different workloads.
With tiered merging, a level of the LSM-tree gathers runs until it reaches capacity. It then merges these runs into a single run and flushes it to the next, larger level. There are at most T runs at each level, so O(T * log_T(N)) runs in total, and the amortized write cost per entry is O(log_T(N) / B), where N is the data size, B is the block size, and T is the size ratio between levels.
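To illustrate the mechanics, here is a minimal Python sketch of tiered compaction over toy in-memory runs of (key, value) pairs (the function names and semantics are my own simplification, not anything from BigTable):

```python
def merge_runs(runs):
    """Merge several sorted runs into one sorted run; runs later in
    the list win on duplicate keys (toy update semantics)."""
    d = {}
    for run in runs:
        d.update(run)
    return sorted(d.items())

def tiered_insert(levels, run, T):
    """Toy tiered compaction: each level accumulates up to T runs.
    When a level is full, its runs are merged into a single run,
    which is flushed to the next, larger level."""
    levels[0].append(run)
    i = 0
    while len(levels[i]) >= T:
        merged = merge_runs(levels[i])
        levels[i] = []
        if i + 1 == len(levels):
            levels.append([])
        levels[i + 1].append(merged)
        i += 1

# Example: each level is a list of runs; start with one empty level.
levels = [[]]
for k in range(8):
    tiered_insert(levels, [(f"k{k}", k)], T=4)
print([len(level) for level in levels])  # runs per level: [0, 2]
```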
With leveled merging, there is a single run at each level of the LSM-tree. A merge takes place as soon as a new run comes into the level, and if the level then exceeds capacity, the resulting run is flushed to the next, larger level. There are therefore O(log_T(N)) runs in total, and the amortized write cost per entry is O((T * log_T(N)) / B), since each entry is rewritten up to T times per level.
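And a matching toy sketch of leveled compaction (again, the names and the base capacity are illustrative assumptions only):

```python
def merge_into(existing, incoming):
    """Merge two sorted runs of (key, value) pairs; incoming wins on ties."""
    d = dict(existing)
    d.update(incoming)
    return sorted(d.items())

def leveled_insert(levels, run, T, base_capacity=4):
    """Toy leveled compaction: each level holds at most one sorted run.
    An arriving run is merged into the level immediately; if the level
    then exceeds its capacity (which grows by a factor of T per level),
    the whole run spills down and is merged into the next level."""
    capacity = base_capacity
    i = 0
    while True:
        levels[i] = merge_into(levels[i], run)
        if len(levels[i]) <= capacity:
            return
        run, levels[i] = levels[i], []
        if i + 1 == len(levels):
            levels.append([])
        i += 1
        capacity *= T

# Example: levels start as a single empty level (one run per level).
levels = [[]]
for k in range(10):
    leveled_insert(levels, [(f"k{k:02d}", k)], T=2)
print([len(run) for run in levels])  # entries per level: [0, 0, 10]
```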
As a result, the two schemes have different read/write performance trade-offs. However, I have been unable to find any source stating whether Google's BigTable uses leveled or tiered merging, or what its default size ratio T is. Are these aspects of the system fixed, or are they tunable?