
I have a security scan finding directing me to disable TCP timestamps. I understand the reasons for the recommendation: the timestamp can be used to calculate server uptime, which can be helpful to an attacker (good explanation under heading "TCP Timestamps" at http://www.silby.com/eurobsdcon05/eurobsdcon_silbersack.pdf).

However, it's my understanding that TCP timestamps are intended to enhance TCP performance. Naturally, in the cost/benefit analysis, performance degradation is a big, possibly too big, cost. I'm having a hard time understanding how much, if any, performance cost there is likely to be. Any nodes in the hivemind care to assist?

Paul Degnan
  • I agree; it is risky to exaggerate the security side while disregarding the know-how of the network engineers who invented TCP timestamps. Operating systems can change their implementation of TCP timestamps to avoid exposing the uptime. For Linux, see my answer http://security.stackexchange.com/a/224696/90485 – Massimo Jan 23 '20 at 09:50

5 Answers


The answer is most succinctly expressed in RFC 1323 - Round-Trip Measurement... The introduction to the RFC also provides some relevant historical context...

   Introduction

   The introduction of fiber optics is resulting in ever-higher
   transmission speeds, and the fastest paths are moving out of the
   domain for which TCP was originally engineered.  This memo defines a
   set of modest extensions to TCP to extend the domain of its
   application to match this increasing network capability.  It is based
   upon and obsoletes RFC-1072 [Jacobson88b] and RFC-1185 [Jacobson90b].


  (3)  Round-Trip Measurement

       TCP implements reliable data delivery by retransmitting
       segments that are not acknowledged within some retransmission
       timeout (RTO) interval.  Accurate dynamic determination of an
       appropriate RTO is essential to TCP performance.  RTO is
       determined by estimating the mean and variance of the
       measured round-trip time (RTT), i.e., the time interval
       between sending a segment and receiving an acknowledgment for
       it [Jacobson88a].

       Section 4 introduces a new TCP option, "Timestamps", and then
       defines a mechanism using this option that allows nearly
       every segment, including retransmissions, to be timed at
       negligible computational cost.  We use the mnemonic RTTM
       (Round Trip Time Measurement) for this mechanism, to
       distinguish it from other uses of the Timestamps option.

The specific performance penalty you incur by disabling timestamps will depend on your server operating system and how you do it (for examples, see this PSC doc on performance tuning). Some operating systems require that you enable or disable all RFC 1323 options at once; others let you enable RFC 1323 options selectively.
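On Linux, for instance, the relevant knob is the `net.ipv4.tcp_timestamps` sysctl. A minimal Python sketch to check the current setting, assuming the standard Linux `/proc` path (other operating systems expose this differently, or not at all):

```python
from pathlib import Path

def tcp_timestamps_enabled(path="/proc/sys/net/ipv4/tcp_timestamps"):
    """Return True/False for the Linux tcp_timestamps sysctl, or None if unreadable."""
    try:
        # "0" disables the option; "1" (and "2") enable it.
        return Path(path).read_text().strip() != "0"
    except OSError:
        return None  # not Linux, or /proc not mounted
```

Disabling it at runtime would be `sysctl -w net.ipv4.tcp_timestamps=0`; as noted above, whether you can toggle it independently of the other RFC 1323 options depends on the OS.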

If your data transfer is somehow throttled by your virtual server (maybe you only bought the cheap vhost plan), then perhaps you couldn't take advantage of higher performance anyway, and it may be worth turning timestamps off to see. If you do, be sure to benchmark your before-and-after performance from several different locations, if possible.

Mike Pennington

Why would the security people want you to disable timestamps? What possible threat could a timestamp represent? I bet the NTP crew would be unhappy with this ;^)

The TCP timestamp, when enabled, allows you to guess the uptime of a target system (`nmap -v -O <target>`). Knowing how long a system has been up lets you determine whether security patches that require a reboot have been applied or not.

R V Marti
    You are suggesting that `nmap -v -O example.org` can give the target's uptime, but how is this result displayed? I have tried on a test system that implements TCP timestamps and I can't get a result. – tuxayo May 22 '16 at 09:43

To Daniel and anyone else wanting clarification:

http://www.forensicswiki.org/wiki/TCP_timestamps

"TCP timestamps are used to provide protection against wrapped sequence numbers. It is possible to calculate system uptime (and boot time) by analyzing TCP timestamps (see below). These calculated uptimes (and boot times) can help in detecting hidden network-enabled operating systems (see TrueCrypt), linking spoofed IP and MAC addresses together, linking IP addresses with Ad-Hoc wireless APs, etc."

It's flagged as a low-risk vulnerability in PCI compliance scans.

psouza4

I got asked a similar question on this topic, today. My take is as follows:

An unpatched system is the vulnerability, not whether attacker(s) can easily find it. The solution, therefore, is to patch your systems regularly. Disabling TCP timestamps won't do anything to make your systems less vulnerable - it's simply security through obscurity, which is no security at all.

Turning the question on its head, consider scripting a solution that uses TCP timestamps to identify hosts on your network that have the longest uptimes. These will typically be your most vulnerable systems. Use this information to prioritise patching, to ensure that your network remains protected.
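As a rough sketch of that idea: `nmap -O` prints an "Uptime guess" line when the target echoes TCP timestamps, and you can parse that line and sort on it. The sample line below is hypothetical, and you should only scan hosts you are authorized to scan:

```python
import re

UPTIME_RE = re.compile(r"Uptime guess:\s*([\d.]+)\s*days")

def uptime_days(nmap_output: str):
    """Extract nmap's TCP-timestamp-based uptime estimate (in days), or None."""
    m = UPTIME_RE.search(nmap_output)
    return float(m.group(1)) if m else None

# Hypothetical sample of the relevant nmap -O output line:
sample = "Uptime guess: 97.412 days (since Mon Feb 10 03:14:07 2020)"
```

Run `nmap -O <host>` for each host, feed the output through `uptime_days()`, and sort descending to get a patch-priority list.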

Don't forget that information like uptime can also be useful to your system administrators. :)

Oliver Jones

I wouldn't do it.

Without timestamps, the TCP Protection Against Wrapped Sequence numbers (PAWS) mechanism won't work. PAWS uses the timestamp option to determine that a sudden, seemingly random sequence-number change is a wrap of the 32-bit sequence space rather than an insane packet from another flow.

If you don't have this, then your TCP sessions will burp every once in a while, with a frequency set by how fast they use up the sequence-number space.
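The check PAWS performs is simple. A rough sketch, comparing 32-bit timestamps with serial-number arithmetic as RFC 1323 describes (a real implementation also maintains ts_recent per connection and handles long-idle sessions):

```python
MOD = 2 ** 32

def ts_before(a: int, b: int) -> bool:
    """True if 32-bit timestamp a precedes b under serial-number arithmetic."""
    diff = (b - a) % MOD
    return 0 < diff < MOD // 2

def paws_reject(seg_tsval: int, ts_recent: int) -> bool:
    """PAWS: drop a segment whose timestamp is older than the last accepted one."""
    return ts_before(seg_tsval, ts_recent)
```

Note how the modular comparison still works across a timestamp wrap: a segment stamped just before the wrap is correctly treated as old once newer, post-wrap timestamps have been seen.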

From RFC 1185:

Network       Bit rate   Byte rate    Twrap (secs)
ARPANET       56kbps       7KBps      3*10**5 (~3.6 days)
DS1          1.5Mbps     190KBps      10**4 (~3 hours)
Ethernet      10Mbps    1.25MBps      1700 (~30 mins)
DS3           45Mbps     5.6MBps      380
FDDI         100Mbps    12.5MBps      170
Gigabit        1Gbps     125MBps      17

Take 45Mbps (well within 802.11n speeds): we would get a burp every ~380 seconds. Not horrible, but annoying.
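The wrap intervals in that table fall straight out of the sequence-space size. A quick sanity check, using 2^31 bytes (half the 32-bit space, which is what the RFC's figures correspond to):

```python
def wrap_seconds(bits_per_second: float) -> float:
    """Seconds to consume half the 32-bit sequence space at a given line rate."""
    bytes_per_second = bits_per_second / 8
    return 2 ** 31 / bytes_per_second

for label, bps in [("DS3 45Mbps", 45e6), ("Ethernet 10Mbps", 10e6), ("Gigabit 1Gbps", 1e9)]:
    print(f"{label}: ~{wrap_seconds(bps):.0f} s")
```

The results match the ~380 s, ~30 min, and 17 s figures in the table above.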

Why would the security people want you to disable timestamps? What possible threat could a timestamp represent? I bet the NTP crew would be unhappy with this ;^)

Hmmmm, I read something about using TCP timestamps to guess the clock frequency of the sender? Maybe this is what they are scared of? I don't know ;^)

Timestamps are less important to RTT estimation than you would think. I happen to like them because they are useful in determining RTT at the receiver or a middlebox. However, according to the canon of TCP, only the sender needs such forbidden knowledge ;^)

The sender does not need timestamps to calculate the RTT. t1 = timestamp when I sent the packet, t2 = timestamp when I received the ACK. RTT = t2 - t1. Do a little smoothing on that and you are good to go!
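That smoothing is standardized in RFC 6298. A minimal sketch of the sender-side estimator (alpha = 1/8, beta = 1/4):

```python
class RttEstimator:
    """RFC 6298 smoothed round-trip-time estimator."""

    def __init__(self):
        self.srtt = None    # smoothed RTT
        self.rttvar = None  # RTT variance estimate

    def update(self, rtt: float) -> float:
        """Feed one RTT sample (seconds); return the new retransmission timeout."""
        if self.srtt is None:
            # First measurement initializes both estimates.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        # RTO = SRTT + 4 * RTTVAR, floored at 1 second per the RFC.
        return max(1.0, self.srtt + 4 * self.rttvar)
```

The catch (Karn's algorithm) is that samples from retransmitted segments must be discarded, because the sender can't tell which transmission an ACK answers; that ambiguity is exactly what the timestamp option removes.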

...Daniel

Alexis Wilke
Daniel
    what do you mean by a TCP "burp"? Will the TCP connection be torn down, or is it just a performance penalty? – misteryes May 21 '13 at 17:15
    @misteryes: I would guess it needed to be reestablished because the sequence number wouldn't make sense to the receiver if outside the 2^16 range. That is, it is OK to wrap in the 32 bit address space, however if it advances more than 2^16 then the connection will drop. – SilverlightFox Jan 27 '16 at 09:53
    @misteryes: Actually, thinking about it more, as the maximum TCP packet size is 65535 bytes (2^16 - 1), the next packet would fall into the range, it would just be the subsequent packets that would be outside it and would just need to be retransmitted (just contradicted myself within a few minutes). – SilverlightFox Jan 27 '16 at 09:58
    "*The sender does not need timestamps to calculate the RTT.*" - it does when it retransmits. Without TCP timestamps, the ACKs received by the sender cannot be known to have come from the original transmission or the retransmission. Therefore when retransmissions occur, RTT cannot be determined until an ACK is received with no retransmission. This can result in pathological behavior when many retransmissions are happening, as the sender may slow down to the minimum sending rate of one packet per 120 seconds. – John Zwinck Jun 06 '17 at 03:13