
I want to set the Don't Fragment flag on an IP packet. Is there a way to do so via the setsockopt() function or via the flags of the sendto() function?

Can I do this with "normal" sockets, or do I have to use raw sockets and build the entire IP header myself, setting its offset field to IP_DF (which is defined in ip.h)?

j00hi
  • In a comment on the current answer, poster pointed out they were using OS X. – TBBle May 03 '17 at 10:38
  • Possible duplicate of [IP Don't fragment bit on Mac OS](http://stackoverflow.com/questions/4415725/ip-dont-fragment-bit-on-mac-os) – TBBle May 03 '17 at 10:38

2 Answers


According to this page, you can set the IP_DONTFRAG option at the IP layer for datagram (UDP) sockets. This SO discussion points in a similar direction.
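
Neither option is available everywhere, so here is a minimal sketch (untested; the helper name set_dont_fragment is mine) that uses the BSD/Solaris-style IP_DONTFRAG where it exists and falls back to Linux's IP_MTU_DISCOVER with IP_PMTUDISC_DO:

```c
/* Sketch: ask the kernel to set the DF bit on a UDP socket.
 * IP_DONTFRAG is the BSD/Solaris-style option; Linux uses IP_MTU_DISCOVER.
 * Neither is guaranteed to exist on every platform, hence the #ifdefs. */
#include <netinet/in.h>
#include <sys/socket.h>

int set_dont_fragment(int sock)
{
#if defined(IP_DONTFRAG)
    int on = 1;
    return setsockopt(sock, IPPROTO_IP, IP_DONTFRAG, &on, sizeof(on));
#elif defined(IP_MTU_DISCOVER) && defined(IP_PMTUDISC_DO)
    int val = IP_PMTUDISC_DO;   /* sets DF and enables path-MTU discovery */
    return setsockopt(sock, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
#else
    (void)sock;
    return -1;                  /* no per-socket DF control on this platform */
#endif
}
```

Call it on the UDP socket before sendto(). Note that on Linux, IP_PMTUDISC_DO also means a send larger than the current path MTU fails with EMSGSIZE instead of being fragmented.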

Eli Bendersky
  • Thanks for the links. I'm developing on Mac. All these constants are defined in the file in.h. Unfortunately, I can find neither the IP_DONTFRAG nor the IP_MTU_DISCOVER constant. Most (or all) of the other constants are there, however - for example IP_MULTICAST_TTL and so on. Any idea about this issue on Mac? – j00hi Sep 02 '10 at 16:02

Looking in /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Kernel.framework/Versions/A/Headers, I did find some constants:

```
netinet/ip.h
99:#define IP_DF 0x4000                    /* dont fragment flag */
netinet6/in6.h
547:#define IPV6_DONTFRAG           62 /* bool; disable IPv6 fragmentation */
```
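
So in this SDK there is apparently no IPv4-level socket option for this, only the IP_DF header bit (which would require a raw socket) and the IPv6-level option. For an IPv6 UDP socket, a minimal sketch (untested; the helper name is mine) using the IPV6_DONTFRAG constant found above would be:

```c
/* Sketch: on an IPv6 UDP socket, IPV6_DONTFRAG (RFC 3542) disables
 * fragmentation of outgoing packets. Availability depends on the OS. */
#include <netinet/in.h>
#include <sys/socket.h>

int set_v6_dont_fragment(int sock6)
{
#ifdef IPV6_DONTFRAG
    int on = 1;
    return setsockopt(sock6, IPPROTO_IPV6, IPV6_DONTFRAG, &on, sizeof(on));
#else
    (void)sock6;
    return -1;   /* option not available on this platform */
#endif
}
```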
djc