
I tried to gain some understanding of how SSL/TLS works and looked at the TLS handshake in TLS 1.2 and TLS 1.3, and where random numbers from the server come into play. Since every TLS request has a cost in terms of entropy, because cryptographic keys need to be derived, I wondered why servers don't run out of entropy quickly.

First I had a look at TLS 1.2 with RSA key exchange:
According to the TLS 1.2 standard, section 6, the server random that goes into the master secret derivation is in every case 32 bytes long. I would expect the server to take these 32 bytes of random data from /dev/random.
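As a minimal sketch of what I imagine happens (Python, my own illustration; a real TLS stack does this inside its library's CSPRNG):

```python
import os

# TLS 1.2 ServerHello carries a 32-byte random value (RFC 5246).
# Sketch only: pull 32 bytes from the kernel CSPRNG.
server_random = os.urandom(32)
assert len(server_random) == 32
```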

Next I had a look at TLS 1.3 with ephemeral Diffie-Hellman key exchange:
Both the client and the server generate their own private set of ECDHE parameters. They then perform the Diffie-Hellman exchange and obtain a shared secret. This shared secret is used to derive the symmetric encryption key and the keys used to check message integrity. Hence I would assume that the quality of my encryption relies on the quality of the ECDHE parameters. If I use the curve NIST P-256, then I need at least a 128-bit seed according to this answer.
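A sketch of that exchange with the Python `cryptography` package (the variable names and the single HKDF step are my own simplification of the TLS 1.3 key schedule):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates an ephemeral key pair on NIST P-256.
# The private scalar is where the fresh randomness goes in.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged; each side computes the same shared secret.
shared_client = client_priv.exchange(ec.ECDH(), server_priv.public_key())
shared_server = server_priv.exchange(ec.ECDH(), client_priv.public_key())
assert shared_client == shared_server

# TLS 1.3 feeds the shared secret through an HKDF-based key schedule;
# one HKDF step stands in for that here.
traffic_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"toy tls 1.3 key schedule").derive(shared_client)
```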

In conclusion:
In my TLS 1.2 example the server needs to generate 256 bits of entropy, and in the TLS 1.3 example 128 bits. I assume that the necessary bits are taken from /dev/random. The maximum size of my entropy pool, the 4096 bits that cat /proc/sys/kernel/random/poolsize returns, seems extremely small compared to the number of bits I need for a single TLS handshake. Unless my calculations are off, I would completely deplete my entropy pool with only 16 requests for TLS 1.2, assuming that the entropy pool is not refilled quickly.
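The arithmetic behind that estimate, under my assumption that every handshake drains the pool by its full cost:

```python
pool_bits = 4096        # cat /proc/sys/kernel/random/poolsize
cost_tls12 = 32 * 8     # 32-byte server random = 256 bits
cost_tls13 = 128        # assumed seed for a P-256 ephemeral key

print(pool_bits // cost_tls12)  # 16 handshakes until an unrefilled pool is empty
print(pool_bits // cost_tls13)  # 32 handshakes
```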

Questions:

  1. Will my server run out of entropy if it receives a lot of TLS requests? Or can it replenish the entropy pool somehow from the TLS traffic itself, for example by using the time the packets take to travel back and forth?
  2. Let's say I would like to save some entropy. Will TLS 1.3 with a 256-bit elliptic curve have a lower cost in terms of entropy than TLS 1.2? In my example above I found a cost of 256 bits of entropy for TLS 1.2 and only 128 bits for TLS 1.3.
  3. If someone sends a lot of Client Hello messages without ever establishing a real connection, could he deplete my entropy pool that way? I would assume that a single Client Hello does not give the server much in terms of entropy, but puts a large burden on it, because it has to answer with a Server Hello containing 32 bytes of random data in TLS 1.2.
Max1

1 Answer


> I assume that the necessary bits are taken from /dev/random.

Don't. Blocking is for when there is a risk that the system has no entropy at all, perhaps for ssh host key generation on first boot. It is not for TLS during normal operation; there is no point in causing a denial of service because the server is starved for random bits.

Use a non-blocking cryptographically secure pseudorandom number generator. If you wish to use a kernel source, consider getrandom() or /dev/urandom on Linux, or BCryptGenRandom on Windows. These use the same cryptographic primitives that make TLS and other algorithms work; if they could not generate enormous amounts of apparently random bits from a small seed, crypto would be broken.
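For example, on Linux (a sketch; `os.getrandom()` wraps the getrandom() system call and, once the kernel CSPRNG is initialized at boot, never blocks):

```python
import os

# Preferred on modern Linux: the getrandom() syscall.
key = os.getrandom(32)

# Equivalent in practice on a booted system: read /dev/urandom.
with open("/dev/urandom", "rb") as f:
    key2 = f.read(32)
```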

In practice, a TLS library will probably use its own CSPRNG, seeded with only a small amount of data from the kernel source. Simply adding up the random bits in the protocol does not tell you how much is read from the system entropy pool.
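A toy version of that pattern, seeding once from the kernel and then expanding with a stream cipher (illustration only, not a hardened DRBG like the ones real TLS libraries ship):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

class ToyCSPRNG:
    """Expand a single 32-byte kernel seed into many output bytes using
    the ChaCha20 keystream. Illustration only: no reseeding, and no
    recovery if the internal state is ever compromised."""

    def __init__(self):
        seed = os.urandom(32)               # one small read from the kernel
        nonce = (0).to_bytes(16, "little")  # fixed nonce is fine for a one-off key
        self._ks = Cipher(algorithms.ChaCha20(seed, nonce), mode=None).encryptor()

    def read(self, n: int) -> bytes:
        # Encrypting zeros yields raw keystream bytes.
        return self._ks.update(b"\x00" * n)

rng = ToyCSPRNG()
server_random = rng.read(32)  # further handshakes never touch the kernel pool
```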

Regarding the OS, be sure to say which distro you use. Linux's aggressive blocking of /dev/random is atypical; most BSDs treat it the same way as /dev/urandom.

Short answer: use /dev/urandom. The scariness attached to the blocking /dev/random on Linux is superstition; an analysis of tens of TB of random bits shows the two to be the same.

John Mahowald
  • Alright, I see that `/dev/urandom` should be used and that my server won't run out of entropy, since `/dev/urandom` can provide an arbitrary amount of pseudorandom numbers if seeded with enough randomness. One weak point I see with `/dev/(u)random`, which is also mentioned in the first link (re-seeding), is that an attacker could calculate future states if he learns the internal state of the PRNG at some point. – Max1 Apr 13 '20 at 07:01
  • If an attacker has the state of the CSPRNG, it was not seeded enough at init, or the host is thoroughly compromised. – John Mahowald Apr 13 '20 at 11:48