I’m using a ConnectX-5 NIC.
I have a DPDK application in which I want to support jumbo packets.
To do that, I add the RX offload capabilities DEV_RX_OFFLOAD_JUMBO_FRAME and DEV_RX_OFFLOAD_SCATTER, and the TX offload capability DEV_TX_OFFLOAD_MULTI_SEGS. I also increase max_rx_pkt_len so the port accepts jumbo packets (9k).
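For reference, this is roughly how I check that the PMD actually reports these capabilities before enabling them (a simplified sketch against the DPDK 19.11 API; the function name, port id, and error handling are placeholders, not my exact code):

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: confirm the port reports the RX/TX offloads needed for jumbo frames. */
static int check_jumbo_capabilities(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;

    if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
        return -1;

    /* RX side: jumbo frames plus scatter (one packet spread over several mbufs). */
    if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) ||
        !(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER)) {
        printf("port %u: RX jumbo/scatter offloads not supported\n", port_id);
        return -1;
    }

    /* TX side: multi-segment mbuf transmission. */
    if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)) {
        printf("port %u: TX multi-seg offload not supported\n", port_id);
        return -1;
    }

    /* The PMD also reports the maximum frame length it accepts. */
    printf("port %u: max_rx_pktlen = %u\n", port_id, dev_info.max_rx_pktlen);
    return 0;
}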
I’ve noticed that adding these offload capabilities and increasing max_rx_pkt_len hurts performance. For example, without those offload flags I can redirect 80 Gbps of 512-byte packets without any drops; with the flags enabled, the rate I can handle without losses drops to ~55 Gbps.
I’m using DPDK 19.11.6.
Note that in this test I do not actually send any jumbo packets; I just want to understand how enabling these capabilities affects traffic with average packet sizes.
Is this expected? Should using those offload flags degrade performance?
Thanks
-- Edit --
Port configuration:
const struct rte_eth_conf port_conf = {
    .rxmode = {
        .split_hdr_size = 0,
        .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER,
        .max_rx_pkt_len = 9614,
    },
    .txmode = {
        .mq_mode = ETH_MQ_TX_NONE,
        .offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
    },
    .intr_conf.lsc = 0,
};
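For completeness, a minimal sketch of how such a port_conf could be applied (placeholder queue count, descriptor counts, and mbuf pool parameters; not taken from my actual application). With default 2 KB mbufs, a 9k frame is received as a chain of roughly five segments via DEV_RX_OFFLOAD_SCATTER:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_MBUFS 8192
#define RX_DESC  1024
#define TX_DESC  1024

/* Sketch: configure one RX/TX queue and start the port with the above conf. */
static int setup_port(uint16_t port_id, const struct rte_eth_conf *conf)
{
    int socket = rte_eth_dev_socket_id(port_id);
    struct rte_mempool *pool;

    /* Default 2 KB data room: scatter RX chains mbufs instead of requiring
       one 9k buffer per packet. */
    pool = rte_pktmbuf_pool_create("rx_pool", NB_MBUFS, 256, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE, socket);
    if (pool == NULL)
        return -1;

    if (rte_eth_dev_configure(port_id, 1, 1, conf) != 0)
        return -1;
    if (rte_eth_rx_queue_setup(port_id, 0, RX_DESC, socket, NULL, pool) != 0)
        return -1;
    if (rte_eth_tx_queue_setup(port_id, 0, TX_DESC, socket, NULL) != 0)
        return -1;

    return rte_eth_dev_start(port_id);
}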
-- Further information --
# lspci | grep Mellanox
37:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
37:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
# lspci -vv -s 37:00.0 | grep "Part number" -A 3
[PN] Part number: P23842-001
[EC] Engineering changes: A6
[SN] Serial number: IL201202BS
[V0] Vendor specific: PCIe EDR x16 25W