
I am experiencing the following situation: I open one of my network interfaces with pcap_open_live(). Then I compile a pcap filter to capture only a specific Ethernet type (ether proto 0x1234). Now I start pcap_loop(). The only thing the callback function does is send a frame via pcap_inject(). (The frame is hard-coded as a global char array.)
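Roughly, the setup looks like this (the interface name, frame contents, and error handling are illustrative placeholders, not my actual code):

```c
#include <pcap/pcap.h>
#include <stdio.h>

/* Hard-coded frame to send back (contents are placeholders). */
static unsigned char frame[64]; /* dst MAC, src MAC, 0x12 0x34 ethertype, payload... */

static void callback(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    pcap_t *handle = (pcap_t *)user;
    (void)h; (void)bytes;
    /* React to the received frame by injecting the prepared one. */
    pcap_inject(handle, frame, sizeof frame);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program fp;

    /* snaplen 65535, promiscuous mode, 10 ms read timeout */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1, 10, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    if (pcap_compile(handle, &fp, "ether proto 0x1234", 1, PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(handle, &fp);

    pcap_loop(handle, -1, callback, (u_char *)handle);
    pcap_close(handle);
    return 0;
}
```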

When I compare the timestamps of the received and the sent frame (e.g. in Wireshark on a third, non-involved computer), the delay is around 3 milliseconds (minimum 1 millisecond, but also up to 10 milliseconds). So on average pcap needs around 3 milliseconds to process the received frame and call the callback function that sends the new frame. I want/have to decrease that delay.

I have already tried the following:

  • tried all different read-timeout values (in ms) in pcap_open_live(): even a read timeout of -1, which to my knowledge should mean polling, still produces a delay of around 3 milliseconds
  • setting no filter
  • setting a higher priority for the process
  • set InterruptThrottleRate=0 and other parameters of the e1000/e1000e kernel module to force the hardware to raise an interrupt for every single frame

But I never got the delay below an average of 3 milliseconds.

For my planned application it is necessary to react to incoming packets within 100 microseconds. Is this even generally doable with libpcap? Or are there any other suggestions for realizing such an application?

Thanks for all your replies, I hope someone can help me!

Notes: I am deploying under Linux/Ubuntu in C/C++.

Joojoo
  • "_e.g. on wireshark on a third non-involved computer_". How exactly are you measuring the delay? Sending a packet from this non-involved computer to the one with libpcap and receiving the response back, hence measuring the round trip? If that's the case you also have the network delay to take into account. If you are comparing timestamps from different computers you could also be facing a time sync problem between them. – Nacho Apr 13 '16 at 13:05
  • I was sniffing it with a NuDog device in TAP mode, so I only compare timestamps from one computer. But now, with the pcap_set_immediate_mode() function, I can decrease the delay dramatically! – Joojoo Apr 14 '16 at 06:34
  • Note: any modern system has page faults and OS CPU scheduling. Those typically cost from 1-2 ms on an RT Linux to around 20 ms on Windows. Aiming for <0.1 ms real time (even soft) is optimistic at best. When I had to meet hard real-time deadlines of a few µs, the only solution I found was to have/create specialized hardware. – Adrian Maire Jul 30 '21 at 11:37

1 Answer


For my planned application it is necessary to react to incoming packets in a time under 100 microseconds.

Then the buffering that many of the capture mechanisms atop which libpcap runs (BPF except on AIX, TPACKET_V3 on Linux with newer libpcap and kernel, DLPI on Solaris 10 and earlier, etc.) provide in order to reduce per-packet overhead would get in your way.

If the libpcap on your system has the pcap_set_immediate_mode() function, then:

  • use pcap_create() and pcap_activate(), rather than pcap_open_live(), to open the capture device;
  • call pcap_set_immediate_mode() between the pcap_create() and pcap_activate() calls.

In "immediate mode", packets should be delivered to the application by the capture mechanism as soon as the capture mechanism receives them.
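Putting the two steps together, a minimal sketch might look like this (the interface name "eth0" is a placeholder, and real code should also check the return values of the pcap_set_*() calls):

```c
#include <pcap/pcap.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Create the handle instead of using pcap_open_live(). */
    pcap_t *handle = pcap_create("eth0", errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return EXIT_FAILURE;
    }

    pcap_set_snaplen(handle, 65535);
    pcap_set_timeout(handle, 1000);      /* read timeout in ms */
    pcap_set_immediate_mode(handle, 1);  /* deliver packets as they arrive */

    /* Options must be set before activation. */
    if (pcap_activate(handle) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(handle));
        pcap_close(handle);
        return EXIT_FAILURE;
    }

    /* ... compile/set the filter and run pcap_loop() here ... */

    pcap_close(handle);
    return EXIT_SUCCESS;
}
```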

  • Thanks for your reply! So I can decrease the delay to an average of 150 microseconds... I don't know why I didn't find that solution on my own, but the examples always show only pcap_open_live() and that's it. – Joojoo Apr 14 '16 at 06:30