
This switch datasheet specifies that the "cut-through latency" is 300ns. What exactly does "cut-through latency" mean?

Is it:

  1. The difference in time between a packet's head entry and that packet's tail exit?
  2. The difference in time between a packet's head entry and that packet's head exit?
  3. Something else?
Randomblue

1 Answer


Switches mainly have two forwarding strategies available to them:

  1. receive a frame completely into the buffer, evaluate the destination address, send frame from buffer to destination
  2. receive the frame's header into the buffer, evaluate the destination address and make forwarding decision, start sending frame to destination as data comes in

The first is generally referred to as store-and-forward, the second as cut-through. As you have already noted, there may be many definitions for "latency" in each of these scenarios, but two are in common use and even found their way into RFC 1242 (section 3.8):

  • First-in-first-out latency, or the time between the reception of the first bit of a particular frame and the sending of that frame's first bit
  • Last-in-first-out latency, or the time between the reception of the frame's last bit and the sending of its first bit
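To make the difference between the two definitions concrete, here is a minimal sketch (the 1500-byte frame size and the 300 ns figure are assumptions for illustration): a store-and-forward switch must clock in the entire frame before its first-in-first-out latency clock can even approach zero, so its FIFO latency grows with frame size, while cut-through FIFO latency does not.

```python
# Sketch: how frame serialization time relates the two latency definitions.
# Assumed example numbers: 1500-byte frame, 1 Gbit/s link, 300 ns cut-through delay.

FRAME_BITS = 1500 * 8    # bits in a full-size Ethernet payload frame
LINE_RATE = 1e9          # 1 Gbit/s

# Time for the whole frame to arrive on the wire (serialization delay)
serialization_ns = FRAME_BITS * 1e9 / LINE_RATE

# Cut-through: first bit in -> first bit out, independent of frame size
cut_through_fifo_ns = 300

# Store-and-forward: the frame must be fully received first, so its
# first-in-first-out latency includes the full serialization delay.
saf_fifo_ns = serialization_ns + cut_through_fifo_ns

print(serialization_ns)  # 12000.0 ns to receive the whole frame
print(saf_fifo_ns)       # 12300.0 ns first-in-first-out for store-and-forward
```

This also shows why quoting a single nanosecond figure only makes sense for cut-through: the store-and-forward number is dominated by a frame-size-dependent term.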

There is also the last-out-last-received method of measuring end-to-end latency implicitly defined in RFC 2544 section 26.2, but it is very unlikely to appear in vendors' data sheets.

A 2012 whitepaper from Juniper titled "Latency: Not All Numbers Are Measured The Same" (only available from third parties, as it has since been removed from Juniper's site) and a number of other sources suggest that cut-through latency is in fact first-in-first-out latency.

Let's do some numbers. For the switch to be able to make a forwarding decision, it has to at least receive the destination MAC address of the Ethernet frame. Given the Ethernet header layout, this means reading at least the first 14 bytes (112 bits) of the frame (see the Ethernet frame header diagram on Wikipedia). At a rate of 10^9 bits per second, receiving those bits takes 112 ns, leaving 188 ns of the 300 ns latency for the forwarding decision.
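The arithmetic above can be reproduced in a few lines (the 300 ns budget is the datasheet figure discussed in the question; the rest follows from the Ethernet header size and the line rate):

```python
# Reproduce the paragraph's calculation: time to receive the 14-byte
# Ethernet header at 1 Gbit/s, and the remaining forwarding-decision budget.

HEADER_BITS = 14 * 8     # dest MAC (6 B) + src MAC (6 B) + EtherType (2 B)
LINE_RATE = 1e9          # 1 Gbit/s
BUDGET_NS = 300          # datasheet cut-through latency figure

header_time_ns = HEADER_BITS * 1e9 / LINE_RATE   # serialization time of the header
decision_budget_ns = BUDGET_NS - header_time_ns  # time left for the lookup

print(header_time_ns)     # 112.0
print(decision_budget_ns) # 188.0
```

Note that at 10 Gbit/s the header would arrive in 11.2 ns, which is why cut-through latency figures shrink so dramatically on faster interfaces.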

So, for the Gigabit interface of your FM4224 the figure looks sane, assuming the first-in-first-out latency measurement. But obviously, Intel could have chosen its very own definition for its numbers - you would need to ask a sufficiently savvy representative for a definitive statement.

the-wabbit
  • I read RFC 1242 (s3.8) differently: it suggests that cut-thru is a form of store-and-forward and also notes that latency would be negative: "In this case, the device would still be considered a store and forward device and the latency would still be from last bit in to first bit out, even though the value would be negative." - makes no sense to me, and RFC 2544 (s 26.2) ignores RFC 1242 (s 3.8) and measures store-and-forward (presumably including cut-thru) as last-in to last-out. – philcolbourn Aug 17 '18 at 01:30
  • I agree with your conclusion that 1242 is ambiguous and pretty outdated, so it probably has not been a good reference source to begin with, especially since I linked the "cut-through" term to it, which is not used throughout 1242. I tried to fix this part and re-referenced the Juniper whitepaper link, let's see if this is better. – the-wabbit Sep 12 '18 at 08:51