Regarding this statement from a blog post about Databricks SQL:
Throughput vs latency trade off
Throughput vs latency is the classic tradeoff in computer systems, meaning that a system cannot get high throughput and low latency simultaneously. If a design favors throughput (e.g. by batching data), it would have to sacrifice latency. In the context of data systems, this means a system cannot process large queries and small queries efficiently at the same time.
Doesn't low latency mean high throughput by definition? Why are they suggesting that low latency comes at the cost of throughput?
If throughput refers to the number of requests fulfilled in a given time, and latency refers to the time taken to serve a single request, then surely less time per request means we can serve more requests in the same time frame.
For instance, if latency is 1 second per request, then the server can process 10 requests in 10 seconds.
If latency is reduced to 0.5 seconds per request, then the server's throughput is 20 requests in 10 seconds.
Shouldn't low latency mean high throughput by this definition?
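For reference, here is a minimal sketch of the model I have in mind. The function name and the strictly serial, one-request-at-a-time assumption are my own illustration, not something from the blog:

def throughput(latency_seconds: float, window_seconds: float = 10.0) -> float:
    """Requests completed per window, assuming the server handles
    exactly one request at a time (my assumption)."""
    return window_seconds / latency_seconds

print(throughput(1.0))   # 10.0 requests in 10 seconds
print(throughput(0.5))   # 20.0 requests in 10 seconds

Under that assumption, lowering latency directly raises throughput, which is why the blog's claim that the two trade off against each other confuses me.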