
Please excuse me if this has been answered before, but I couldn't easily find an answer.

My company builds high-speed measurement equipment that produces roughly 0.7 Gbit/s of UDP data. Each set of samples is about 2500 bytes long, so we rely on IPv4 fragmentation to send the data. UDP checksums are currently not used (set to 0). The receiving end is a fairly standard Linux box, with the IPv4 fragment reassembly timeout left at its default of 30 seconds. Given our particular setup, we expect packet loss.

Given the high volume of data, the fact that the IPv4 identification field is only 16 bits, and the expectation of packet loss, I'm wondering whether incorrect reassembly is possible. At our data rate the 16-bit identification field wraps around in well under 30 seconds (roughly 35,000 datagrams per second, so a wraparound about every 1.9 seconds).

Can this cause incorrect fragment reassembly, which then goes unnoticed because the UDP checksum is disabled? Or is there some mechanism I'm not aware of that prevents incorrect reassembly?
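For concreteness, here is a back-of-the-envelope sketch of the wraparound figures implied by the numbers above (0.7 Gbit/s, ~2500-byte datagrams, 30 s timeout); these are estimates derived from the question, not measurements:

```python
# Estimate how quickly the 16-bit IPv4 identification field wraps
# at the data rate described above (assumed figures from the question).

DATA_RATE_BPS = 0.7e9        # 0.7 Gbit/s of UDP data
DATAGRAM_SIZE = 2500         # bytes per set of samples (one UDP datagram)
ID_SPACE = 2 ** 16           # 16-bit IPv4 identification field
REASSEMBLY_TIMEOUT = 30.0    # Linux default ipfrag_time, in seconds

datagrams_per_second = DATA_RATE_BPS / 8 / DATAGRAM_SIZE   # ~35,000/s
wraparound_seconds = ID_SPACE / datagrams_per_second       # ~1.87 s

print(f"{datagrams_per_second:.0f} datagrams/s")
print(f"ID space wraps every {wraparound_seconds:.2f} s")
print(f"each ID value can be reused ~{REASSEMBLY_TIMEOUT / wraparound_seconds:.0f} "
      f"times within one reassembly timeout")
```

So within a single 30-second reassembly window, each identification value recurs roughly 16 times, which is why lingering fragments from a lost datagram could in principle be combined with fragments of a later datagram carrying the same ID.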

    Maybe you should use IPv6 with the fragmentation extension header that has a 32-bit identification. Otherwise, you probably need to create an application-layer protocol that can segment the data prior to passing it to UDP. – Ron Maupin Mar 18 '22 at 16:26
  • @RonMaupin thanks for your response! Unfortunately we don't have the luxury of switching to IPv6 at this point. But it's still a very good suggestion for a later moment. Does your answer imply that my concerns are valid? If so, I can think of a few solutions: 1. Shorten the IP fragment reassembly timeout in Linux using proc/sysctl. 2. Enable UDP checksums so incorrectly assembled packets are dropped – Roel Baardman Mar 19 '22 at 18:34
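The first mitigation mentioned in the comment above (shortening the reassembly timeout) might be sketched as follows on the receiving Linux box. The 1-second value is an illustrative choice, picked to sit below the ~1.9 s ID wraparound period estimated earlier; it is not a recommendation from this thread, and writing the sysctl requires root:

```python
# Sketch: shorten the IPv4 fragment reassembly timeout on the receiver
# so stale fragments are discarded before their 16-bit ID can be reused.
# With ~35,000 datagrams/s the ID space wraps in ~1.9 s, so the timeout
# must be below that; "1" (second) is an illustrative value. Needs root.

IPFRAG_TIME = "/proc/sys/net/ipv4/ipfrag_time"   # Linux sysctl, in seconds

with open(IPFRAG_TIME) as f:
    print("current reassembly timeout:", f.read().strip(), "s")

with open(IPFRAG_TIME, "w") as f:   # equivalent to:
    f.write("1")                    #   sysctl -w net.ipv4.ipfrag_time=1
```

Note that the second mitigation (enabling UDP checksums) is probabilistic rather than airtight: the UDP checksum is only 16 bits, so a small fraction of mis-reassembled datagrams would still pass undetected.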

0 Answers