
I am using an Intel 2P X520 adapter on a Xeon(R) CPU E5-2640 v3 based server running Ubuntu 16.04. I am interested in measuring the performance (throughput) of the application when the batching factor is changed at the NIC and in the application. By increasing the batch size in the application, we get higher throughput until PCIe becomes the bottleneck.

I am not sure how to change the batch size at the NIC. What needs to be changed in the code to change the batch size at the NIC, and what is the default batch size for the X520 NIC in DPDK (version 16.07)?

PS: For some applications a larger batch size is a problem, since the per-packet latency grows with the batch size. Here I am only interested in throughput, not in per-packet latency.

A-B

1 Answer


The batch size is basically the nb_pkts parameter of rte_eth_rx_burst():

http://dpdk.org/doc/api/rte__ethdev_8h.html#aee7daffe261e67355a78b106627c4c45

So basically, how to change the batch size depends on the application. For most of the DPDK examples you just change MAX_PKT_BURST; for the testpmd app you can use the --burst command-line argument.
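For illustration, here is a minimal sketch of such a receive loop against the DPDK 16.07 API; rx_loop and MAX_PKT_BURST are made-up names for this example, and the application batch size is simply the nb_pkts argument passed to rte_eth_rx_burst():

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST 32   /* application-level batch size (nb_pkts) */

static void
rx_loop(uint8_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[MAX_PKT_BURST];

    for (;;) {
        /* nb_pkts only caps how many packets one call may return;
         * it does not reconfigure anything on the NIC itself. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          pkts, MAX_PKT_BURST);

        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]); /* placeholder for real processing */
    }
}
```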

Andriy Berestovskyy
  • Thanks for your answer. This is batching in the application and not at the NIC level. The PMD fetches the packets from the NIC to the user-level application, and then the application does this batching. – A-B May 08 '17 at 05:08
  • Do you mean RTE_PMD_IXGBE_RX/TX_MAX_BURST? Make sure you are using bulk allocation as described [here](http://dpdk.readthedocs.io/en/latest/nics/ixgbe.html#prerequisite), change RTE_PMD_IXGBE_RX/TX_MAX_BURST, then recompile DPDK. – Andriy Berestovskyy May 08 '17 at 09:36
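Regarding the bulk-allocation prerequisites mentioned in the last comment, below is a hedged sketch (DPDK 16.07-era API) of an RX queue setup that satisfies the conditions listed in the linked ixgbe guide; setup_rx_queue and mbuf_pool are hypothetical names, and the concrete values (rx_free_thresh = 32, 512 descriptors) are just one choice that meets the documented constraints:

```c
#include <rte_ethdev.h>
#include <rte_mempool.h>

int
setup_rx_queue(uint8_t port_id, uint16_t queue_id,
               struct rte_mempool *mbuf_pool)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_rxconf rx_conf;

    rte_eth_dev_info_get(port_id, &dev_info);
    rx_conf = dev_info.default_rxconf;

    /* The ixgbe guide requires rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST
     * (32 by default) and nb_rx_desc to be a multiple of it; otherwise the
     * PMD falls back to the non-bulk RX code path. */
    rx_conf.rx_free_thresh = 32;

    return rte_eth_rx_queue_setup(port_id, queue_id,
                                  512 /* nb_rx_desc, a multiple of 32 */,
                                  rte_eth_dev_socket_id(port_id),
                                  &rx_conf, mbuf_pool);
}
```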