2

I'm using the asio (non-Boost version) library to capture incoming UDP packets via a 10 Gb Ethernet adapter. 150k packets per second is fine, but I start getting dropped packets when I go to higher rates, like 300k packets/sec.

I'm pretty sure the bottleneck is in DMA'ing 300k separate transfers from the network card to the host system. The transfers aren't big, only 1400 bytes per transfer, so it's not a bandwidth issue.

Ideally I would like a mechanism to coalesce the data from multiple packets into a single DMA transfer to the host. Currently I am using asio::receive to do synchronous transfers, which gives better performance than async_receive.

I have tried calling receive with a larger buffer, or with an array of multiple buffers, but I always seem to get a single read of 1400 bytes.
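Roughly what both attempts look like, as a minimal sketch (the port and buffer sizes here are placeholders, not my real code):

```cpp
#include <array>
#include <cstddef>
#include <iostream>
#include <asio.hpp>

int main() {
    asio::io_context io;
    asio::ip::udp::socket socket(io,
        asio::ip::udp::endpoint(asio::ip::udp::v4(), 12345)); // placeholder port

    // Attempt 1: one large buffer; still only one datagram per call.
    std::array<char, 64 * 1024> big_buf;
    std::size_t n = socket.receive(asio::buffer(big_buf));
    std::cout << "large buffer: " << n << " bytes\n";   // ~1400

    // Attempt 2: an array of buffers (scatter read); same result,
    // the read still stops at the datagram boundary.
    std::array<char, 1400> b0, b1, b2, b3;
    std::array<asio::mutable_buffer, 4> bufs = {
        asio::buffer(b0), asio::buffer(b1), asio::buffer(b2), asio::buffer(b3)
    };
    n = socket.receive(bufs);
    std::cout << "buffer array: " << n << " bytes\n";   // still ~1400
}
```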

Is there any way around this?

Ideally I would like to read some multiple of the 1400 bytes at a time, as long as it didn't take too long for the total to be filled, i.e. wait up to 4 ms and then return 4 x 1400 bytes, or simply return after 4 ms with however many bytes are available...
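In other words, something like the sketch below, which is just to illustrate the behaviour I'm after (the deadline, port and packet size are made up, and the busy-poll is only there to show the idea):

```cpp
#include <chrono>
#include <cstddef>
#include <vector>
#include <asio.hpp>

// Drain whatever datagrams arrive within 'deadline' into one contiguous
// buffer, then hand the whole batch back to the caller.
std::size_t receive_batch(asio::ip::udp::socket& socket,
                          std::vector<char>& out,
                          std::chrono::milliseconds deadline)
{
    socket.non_blocking(true);
    const auto end = std::chrono::steady_clock::now() + deadline;
    std::size_t total = 0;
    char pkt[1500];

    while (std::chrono::steady_clock::now() < end) {
        asio::error_code ec;
        std::size_t n = socket.receive(asio::buffer(pkt), 0, ec);
        if (ec == asio::error::would_block) continue;  // nothing queued yet
        if (ec) break;                                 // real error
        out.insert(out.end(), pkt, pkt + n);           // append this datagram
        total += n;
    }
    return total;  // e.g. 4 x 1400 bytes after ~4 ms at the rates above
}
```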

I do not control the entire network, so I cannot force jumbo frames :(

Cheers,

James
  • Perhaps I should add that I am seeing one core of the machine sit at 100% usage whilst running my test app. I have lots of cores available, but it seems that receiving packets on a single port ends up being limited by a single thread... Is this correct? – James Mar 01 '17 at 06:17
  • One thread can only use one core. But if you're CPU bound you need to examine your code, not network interfaces. – user207421 Mar 01 '17 at 08:49
  • Again, maybe I should make my goal clearer: I am asking if there is any way to force the asio interface to deliver larger chunks of data (at a slower rate). – James Mar 01 '17 at 21:56
  • So after more research it appears that the RSS queue setting is not working correctly in my test systems. Does anyone know of a way to enable proper RSS queue support in Windows 10 Pro? I set the property in the driver panel, but it does not appear to be working correctly. – James Mar 02 '17 at 01:55

2 Answers

3

I would remove the asio layer and go direct to the metal.

If you're on Linux you should use recvmmsg(2) rather than recvmsg() or recvfrom(), as it at least allows for the possibility of transferring multiple messages at a time within the kernel, which the others don't.
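A minimal sketch of the recvmmsg() route (the port, batch size and timeout are illustrative only, not tuned values; error handling omitted):

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // recvmmsg() is a GNU/Linux extension
#endif
#include <cstdio>
#include <cstring>
#include <ctime>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                 // illustrative port
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    constexpr int kBatch = 64;                    // datagrams per syscall
    char bufs[kBatch][1500];
    iovec iov[kBatch];
    mmsghdr msgs[kBatch];
    for (int i = 0; i < kBatch; ++i) {
        iov[i] = { bufs[i], sizeof(bufs[i]) };
        std::memset(&msgs[i], 0, sizeof(msgs[i]));
        msgs[i].msg_hdr.msg_iov = &iov[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    // One call can return up to kBatch datagrams; the timeout is a rough
    // cap on how long the call keeps collecting them.
    timespec timeout{0, 4 * 1000 * 1000};         // ~4 ms
    int n = recvmmsg(fd, msgs, kBatch, 0, &timeout);
    for (int i = 0; i < n; ++i)
        std::printf("datagram %d: %u bytes\n", i, msgs[i].msg_len);

    close(fd);
}
```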

If you can't do either of these things, you need to at least moderate your expectations. recvfrom() and recvmsg() and whatever lies over them in asio will never deliver more than one UDP datagram at a time. You need to:

  • speed up your receiving loop as much as possible, eliminating all possible overhead, especially dynamic memory allocation and I/O to other sockets or files.
  • ensure that the socket receive buffer is as large as possible, at least a megabyte, via setsockopt()/SO_RCVBUF, and don't assume that what you set is what you got: read it back via getsockopt() to see whether the platform has limited you in some way; see the sketch below.
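As a sketch of that last point, asio exposes SO_RCVBUF as socket_base::receive_buffer_size, so the set-then-verify step looks like this (the 4 MB figure is just an example):

```cpp
#include <iostream>
#include <asio.hpp>

void grow_receive_buffer(asio::ip::udp::socket& socket) {
    // Ask for a large receive buffer (maps to setsockopt(SO_RCVBUF)).
    socket.set_option(asio::socket_base::receive_buffer_size(4 * 1024 * 1024));

    // Don't assume the request was honoured: read it back (getsockopt(SO_RCVBUF)).
    asio::socket_base::receive_buffer_size actual;
    socket.get_option(actual);
    std::cout << "SO_RCVBUF is now " << actual.value() << " bytes\n";
}
```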
user207421
  • I have tried a Winsock implementation of the same code and it performs in a similar manner to the asio sync version. Plus this is going to be used across multiple platforms, so I'm kinda stuck with asio. – James Mar 01 '17 at 05:07
  • Unfortunately, getting this level of performance pretty much requires platform-specific code. – David Schwartz Mar 01 '17 at 06:51
1

Maybe you can try a workaround with tcpdump, using the libpcap library (http://www.tcpdump.org/) and filtering to receive UDP packets.
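A rough sketch of that idea (the device name and port are assumptions; pcap buffers packets in the kernel/driver and hands them to your callback):

```cpp
#include <cstdio>
#include <pcap.h>

static void on_packet(u_char*, const pcap_pkthdr* hdr, const u_char*) {
    std::printf("captured %u bytes\n", hdr->caplen);
}

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];
    // 65535-byte snaplen, non-promiscuous, 4 ms read timeout (all assumptions).
    pcap_t* handle = pcap_open_live("eth0", 65535, 0, 4, errbuf);
    if (!handle) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Only hand back UDP packets on the port of interest.
    bpf_program filter;
    if (pcap_compile(handle, &filter, "udp port 12345", 1, PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(handle, &filter);

    pcap_loop(handle, -1, on_packet, nullptr);  // runs until pcap_breakloop()
    pcap_close(handle);
}
```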

jahndev
  • See above, I need cross-platform support, so I'm stuck with asio. I am looking for a solution within the asio library, not a "use something else" solution. I am very new to asio, and I suspect there may be things I am misusing. – James Mar 01 '17 at 05:08
  • Sorry, I didn't do due diligence on this. Apparently pcap is cross-platform. I am experimenting with it now and it looks like it might be a suitable interface for the task at hand... Either way this was a useful suggestion. – James May 26 '17 at 05:11