
I'm using a ConnectX-5 NIC.
I have a DPDK application in which I want to support jumbo packets.
To do that I add the rx offload capabilities DEV_RX_OFFLOAD_JUMBO_FRAME and DEV_RX_OFFLOAD_SCATTER,
and the tx offload capability DEV_TX_OFFLOAD_MULTI_SEGS.
I also increase max_rx_pkt_len so the port accepts jumbo packets (9k).
I've noticed that adding these offload capabilities and increasing max_rx_pkt_len harms performance.
For example, without those offload flags I can redirect 80Gbps of 512-byte packets without any drop.
With those flags enabled the rate drops to ~55Gbps, still without losses.
I'm using DPDK 19.11.6.
In this test I do not actually send jumbo packets; I just want to understand how enabling jumbo support affects traffic at this average packet size.

Is this expected? Should using those offload flags degrade performance?
Thanks

-- Edit --

Port configuration:

const struct rte_eth_conf port_conf = {
        .rxmode = {
                .split_hdr_size = 0,
                .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER,
                .max_rx_pkt_len = 9614,
        },
        .txmode = {
                .mq_mode = ETH_MQ_TX_NONE,
                .offloads = DEV_TX_OFFLOAD_MULTI_SEGS
        },
        .intr_conf.lsc = 0
};
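
For completeness, a minimal sketch of how the mbuf pool's data room interacts with DEV_RX_OFFLOAD_SCATTER (the pool name, mbuf count and cache size below are illustrative, not my exact values):

#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Illustrative sketch: with DEV_RX_OFFLOAD_SCATTER a frame larger than the
 * mbuf data room is delivered as a chain of mbufs, so the default ~2KB data
 * room is enough even for 9K frames (one frame then spans roughly five
 * chained mbufs). Without SCATTER the data room must hold the whole frame. */
static struct rte_mempool *
create_rx_pool(int use_scatter)
{
        uint16_t data_room = use_scatter ?
                RTE_MBUF_DEFAULT_BUF_SIZE :
                (uint16_t)(9614 + RTE_PKTMBUF_HEADROOM);

        return rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                                       data_room, rte_socket_id());
}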

-- Further information --

# lspci | grep Mellanox
37:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
37:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]

# lspci -vv -s 37:00.0 | grep "Part number" -A 3
            [PN] Part number: P23842-001
            [EC] Engineering changes: A6
            [SN] Serial number: IL201202BS
            [V0] Vendor specific: PCIe EDR x16 25W
– hudac
  • Can you please share the NIC arguments enabled, if any? Can you also share whether the configuration is applied for both port and queue setup, and whether mbuf_fast_free is disabled? – Vipin Varghese Feb 24 '22 at 16:07
  • @VipinVarghese thanks. I shared the above port configuration. Is that what you meant? This configuration is set per port. I do not set `mbuf_fast_free` because I might use multiple cores. Does it answer your questions? Please let me know if it does not. Thanks – hudac Feb 27 '22 at 07:57
  • Thanks, will take a look. Please also update the NIC model, firmware, and arguments used, if any. – Vipin Varghese Feb 28 '22 at 08:04
  • @VipinVarghese I'm not sure how to get that information. I found this for the firmware - please see the updated question. Can you let me know how to get the rest? Thanks – hudac Feb 28 '22 at 10:32
  • Please run the commands `mst start; ofed_info -s; ibv_devinfo; ethtool -i ` – Vipin Varghese Mar 01 '22 at 10:19
  • With the mlx5 driver I need to pass `mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128` to enable jumbo frame rx/tx. Hence I have requested these details to better understand how to reproduce the issue. Please let me know. – Vipin Varghese Mar 01 '22 at 16:04
  • Thanks. I'm still trying to figure out whether I have the `mst` application on my device. I currently cannot find it. – hudac Mar 02 '22 at 18:40
  • I have updated with an answer showcasing that enabling jumbo does not affect line rate or throughput. – Vipin Varghese Mar 03 '22 at 04:52
  • Thank you. I will try to learn the answer – hudac Mar 06 '22 at 08:50
  • It clearly shows that the initial assumption that enabling jumbo degrades performance is incorrect. Please close the question by accepting the answer to help others find the right answer. – Vipin Varghese Sep 05 '22 at 01:20
  • @VipinVarghese I cannot accept because I couldn't test it yet. I hope I'll be able to soon. :/ – hudac Sep 05 '22 at 17:56
  • Thanks for the reply `that you have not test`. Please test ASAP and accept, as it will help others. – Vipin Varghese Sep 06 '22 at 04:47

1 Answer


With DPDK 21.11.0 LTS using MLX5, I am able to send and receive 8000B packets at line rate using a single RX queue with 1 CPU core, and for smaller packet sizes like 64B, 128B, 256B and 512B (with jumbo frame support enabled) I also reach line rate with the right configuration. Hence I highly recommend using DPDK 21.11 or its latest LTS drop, as it contains fixes and updates for the MLX5 NIC that DPDK 19.11.6 may be missing (there has been rework of the mprq and vector code in mlx5).


Packet throughput with 1 RX and 1 TX queue: [throughput chart]

Note: Using the following devargs, I am only able to RX and TX jumbo frames with an MLX5-supported NIC: mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128
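
One way to hand those devargs to the application (a sketch only: the PCI address and core list below are placeholders, not taken from this setup) is through the EAL allow-list option:

#include <rte_common.h>
#include <rte_eal.h>

/* Sketch only: PCI address and core list are placeholders. This passes the
 * mlx5 devargs above via the EAL allow-list option (-a in DPDK 21.11,
 * -w on DPDK 19.11). */
static int
init_eal_with_mlx5_devargs(void)
{
        char *argv[] = {
                "l2fwd", "-l", "0-1",
                "-a", "0000:37:00.0,mprq_en=1,rxqs_min_mprq=1,"
                      "mprq_log_stride_num=9,txq_inline_mpw=128",
                NULL,
        };

        return rte_eal_init((int)RTE_DIM(argv) - 1, argv);
}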

Steps to follow:

  1. Download dpdk using wget http://fast.dpdk.org/rel/dpdk-21.11.tar.xz

  2. Build dpdk using tar xvf dpdk-21.11.tar.xz; meson build; ninja -C build install; ldconfig

  3. Modify DPDK example code like l2fwd to support jumbo frames (refer to the code below).

  4. Enable an MTU of at least 9000B on the NIC (in the case of an MLX NIC, since the PMD does not own the NIC and shares it with the kernel driver, also change the MTU on the kernel network interface with the ifconfig or ip command); see the sketch after this list for the DPDK side.

  5. Use Ixia, Spirent, packETH, Ostinato, Xena, or dpdk-pktgen to send jumbo frames.
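
A minimal sketch of the DPDK side of step 4 (an illustration only: the helper name is made up, the port id is assumed to come from the application, and on mlx5 the kernel netdev MTU still has to be raised separately):

#include <rte_ethdev.h>

/* Illustration for step 4: raise the port MTU so 9000B frames are accepted. */
static int
enable_jumbo_mtu(uint16_t port_id)
{
        struct rte_eth_dev_info info;
        uint16_t mtu = 9000;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
                return ret;
        if (info.max_mtu < mtu)
                mtu = info.max_mtu;     /* clamp to what the device reports */

        return rte_eth_dev_set_mtu(port_id, mtu);
}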

l2fwd modification to support JUMBO in DPDK 21.11.0:

static struct rte_eth_conf port_conf = {
        .rxmode = {
                .max_lro_pkt_size = 9000,
                .split_hdr_size = 0,
        },
        .txmode = {
                .mq_mode = ETH_MQ_TX_NONE,
                .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
                             DEV_TX_OFFLOAD_MULTI_SEGS),
        },
};
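
A related sanity check (an illustration, not part of the l2fwd patch above): verify that the port actually advertises the offload bits before requesting them, so rte_eth_dev_configure() is never asked for unsupported offloads:

#include <rte_ethdev.h>

/* Illustration only: OR in SCATTER / MULTI_SEGS only when the PMD reports
 * them in its capability masks. */
static void
apply_supported_offloads(uint16_t port_id, struct rte_eth_conf *conf)
{
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
                return;

        if (info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER)
                conf->rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
        if (info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)
                conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}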
– Vipin Varghese