The man page for OpenSSL's SSL_read() call states:
> SSL_read() works based on the SSL/TLS records. The data are received in records (with a maximum record size of 16kB for SSLv3/TLSv1). Only when a record has been completely received, it can be processed (decryption and check of integrity). Therefore data that was not retrieved at the last call of SSL_read() can still be buffered inside the SSL layer and will be retrieved on the next call to SSL_read().
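In practical terms, that means a caller may need multiple SSL_read() calls to collect a fixed amount of data. A minimal C sketch, assuming an already-connected SSL* handle (the read_exact() helper is my own illustration, not part of OpenSSL):

```c
#include <openssl/ssl.h>

/* Loop until `want` bytes have arrived: a single SSL_read() may return
 * less than requested, since only fully received records can be
 * decrypted and handed out. */
static int read_exact(SSL *ssl, char *buf, int want)
{
    int got = 0;
    while (got < want) {
        int n = SSL_read(ssl, buf + got, want - got);
        if (n <= 0)
            return n;  /* real code would consult SSL_get_error() */
        got += n;      /* each call drains at most what's buffered */
    }
    return got;
}
```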
Given that:
- HTTP headers of a single outgoing message can always be sent in one go
- a single SSL/TLS record can apparently hold 16KB of data, which should be enough for everyone (or at least any non-perverse HTTP request)
Browsers have little reason to divide the headers into multiple SSL records, right? Or are there browsers out there that are so aggressive with regard to latency that they will chop up even these kinds of small payloads into multiple records?
I'm asking this because it would be nice to be able to parse an entire set of HTTP headers from a single read buffer filled by a single successful SSL_read() call. If that means denying the odd few requests (e.g. if only 0.0000X% of all requests are affected), that might be worth it to me.
edit: Alexei Levenkov made the valid point that cookies can be really long. But let's then consider the scenario where cookies are never set or expected by this particular server.
edit2: This question was a little premature. I've meanwhile written code that stores per-client state efficiently enough to accept an arbitrary number of SSL records while parsing, without incurring a performance penalty of any significance. Prior to doing so I was wondering if I could take a shortcut, but the general consensus seems to be that I'd better play by the book. Point conceded.
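For what it's worth, a rough sketch of that kind of per-client accumulation (not my actual code; struct client, feed_headers(), and the HDR_MAX cap are all made up for illustration): append whatever each SSL_read() returns to a per-client buffer, and only parse once the "\r\n\r\n" terminator shows up, however many records that takes.

```c
#include <openssl/ssl.h>
#include <string.h>

#define HDR_MAX 8192  /* illustrative cap on total header size */

struct client {
    SSL   *ssl;
    char   buf[HDR_MAX];
    size_t used;
};

/* Returns 1 once the full header block has arrived, 0 to keep waiting,
 * -1 on error or when the headers exceed HDR_MAX. */
static int feed_headers(struct client *c)
{
    if (c->used >= HDR_MAX - 1)
        return -1;  /* header block too large for this sketch */

    int n = SSL_read(c->ssl, c->buf + c->used, (int)(HDR_MAX - 1 - c->used));
    if (n <= 0)
        return -1;  /* real code would consult SSL_get_error() */
    c->used += (size_t)n;

    /* NUL-terminate so strstr() can scan safely; a production parser
     * would remember its scan position instead of rescanning. */
    c->buf[c->used] = '\0';
    return strstr(c->buf, "\r\n\r\n") ? 1 : 0;
}
```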