
I was watching this video in which Jeff Dean talks about Latency and Scaling - https://www.youtube.com/watch?v=nK6daeTZGA8#t=515

At the 00:07:34 mark, he gives an example of latency that goes like this -

Let's say you have a bunch of servers. Their average response time to a request is 10 ms. But 1% of the time they take 1 sec or more to respond. So if you touch one of these servers, 1% of your requests take 1 sec or more. Touch 100 of these servers, and 63% of your requests take 1 sec or more.

How did he arrive at that 63% figure? What is the logic/math behind it?

Quest Monger

2 Answers


It's just probability. If each server is slow with probability 0.01 independently, then the probability that all 100 servers you touch respond fast is 0.99^100, so the probability that at least one of them is slow is 1.0 − 0.99^100 ≈ 0.634 = 63.4%.
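As a quick check, the calculation can be reproduced directly (using the figures from the question: a 1% slow rate per server, 100 servers per request):

```python
# Probability that at least one of n independent servers is slow,
# given each one is slow with probability p.
p = 0.01   # 1% of requests to a single server take >= 1 sec
n = 100    # number of servers touched per request

p_all_fast = (1 - p) ** n             # every server responds quickly
p_at_least_one_slow = 1 - p_all_fast

print(f"{p_at_least_one_slow:.3f}")   # → 0.634
```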

Paul R

As noted in the previous answer, this is just the probability that at least one of the 100 touched servers belongs to the slow 1%.

The method by which he arrived at this approximation is likely to be:

(1 − p)^n ≈ e^(−np) for small p, so

1 − 0.99^100 ≈ 1 − e^(−100 × 0.01) = 1 − e^(−1) ≈ 0.632 ≈ 63%
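Assuming the approximation in question is the standard small-p identity (1 − p)^n ≈ e^(−np), a short sketch comparing the exact value, the approximation, and a Monte Carlo estimate:

```python
import math
import random

p, n = 0.01, 100   # figures from the question: 1% slow, 100 servers

exact = 1 - (1 - p) ** n       # exact: 1 - 0.99^100
approx = 1 - math.exp(-n * p)  # approximation: 1 - e^-1

# Monte Carlo sanity check: simulate requests fanning out to n servers,
# counting a trial as "slow" if any server in the fan-out is slow.
random.seed(42)
trials = 50_000
slow = sum(
    any(random.random() < p for _ in range(n))
    for _ in range(trials)
)

print(f"exact     = {exact:.4f}")   # 0.6340
print(f"approx    = {approx:.4f}")  # 0.6321
print(f"simulated = {slow / trials:.4f}")
```

The exponential form is handy because it makes the 63% figure obvious at a glance: whenever n × p = 1, about 1 − 1/e ≈ 63% of requests hit at least one slow server.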

orizon