
I'm just curious about how the server knows if the received segment is a UDP or a TCP segment, especially when the listening port can listen on both UDP and TCP.

I know the client can use SOCK_DGRAM to generate UDP segments and SOCK_STREAM for TCP segments, but what is transmitted is still just a sequence of bits. How can the server know whether to interpret these bits as a UDP segment or as a TCP segment? What if the bits form a UDP segment that, by coincidence, also looks plausible when interpreted as a TCP segment?
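For context, here is a minimal sketch of what I mean by the client choosing the transport at socket-creation time; the bytes sent later carry no explicit "I am UDP" or "I am TCP" marker of their own:

```python
import socket

# The transport is chosen when the socket is created,
# not encoded separately in the application data.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP

print(tcp_sock.type == socket.SOCK_STREAM)  # → True
print(udp_sock.type == socket.SOCK_DGRAM)   # → True

tcp_sock.close()
udp_sock.close()
```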

vincentvangaogh
    Since each segment always starts with a header? – Cyclonecode Mar 09 '15 at 23:26
  • I don't think the header alone can always differentiate the types. It's quite possible that the server interprets the content of a TCP segment starting from bit 32 as 'Length', 'Checksum', and payload, unless the client and server agree on a convention such as "a UDP segment must start with 100 zeros, anything else is a TCP segment". – vincentvangaogh Mar 09 '15 at 23:33
  • @WanxinGao That's what the header is *for,* among other things such as the port number. The client and server don't have to agree on anything. The data is sent by either a UDP or a TCP socket and it is received the same way. The issue is how the TCP/IP *stack* in the *kernel* tells the difference, so it knows whether to give it to a TCP socket or a UDP socket. – user207421 Mar 10 '15 at 00:23

1 Answer

What arrives is first an IP packet, and the IP header contains a Protocol field identifying the transport. The payload of the IP packet is then either a TCP segment or a UDP datagram, so the kernel never has to guess from the payload bits themselves.

user207421