Questions tagged [mellanox]

Mellanox Technologies (NASDAQ: MLNX) offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application run time and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services.

Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability.

Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California and Yokneam, Israel.

63 questions
0 votes • 1 answer

How does SEND bandwidth improve when the registered memory is aligned to the system page size? (Mellanox InfiniBand)

Operating system: RHEL/CentOS 7.9, latest. Operation: sending 500 MB chunks 21 times from one system to another, connected via Mellanox cables (Ethernet controller: Mellanox Technologies MT28908 Family [ConnectX-6]). (The registered memory region…
Vaishakh • 67 • 5
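For the question above, a minimal sketch (not the asker's code) of what page-aligned registration looks like with libibverbs: aligning the buffer start to a page boundary means the HCA's address-translation entries cover whole pages and neither end of the region straddles a page shared with unrelated data. The `pd` handle and the access flags are assumptions.

```c
#include <infiniband/verbs.h>
#include <stdlib.h>
#include <unistd.h>

/* Register a send buffer whose start is aligned to the system page size. */
static struct ibv_mr *reg_aligned(struct ibv_pd *pd, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);   /* e.g. 4096 on x86-64 */
    void *buf = NULL;

    if (posix_memalign(&buf, (size_t)page, len) != 0)
        return NULL;

    /* The NIC translates this region page by page; an aligned start
     * avoids consuming an extra partial page at either end. */
    return ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
}
```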
0 votes • 1 answer

DPDK RSS flow over GRE

I am using a Mellanox Technologies MT27800 Family [ConnectX-5] with DPDK 19.11 and multiple Rx queues with RSS ("ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP"). I receive packets with ETH:IP:GRE:ETH:IP:UDP and I want the load balancing to be according to the inner IP + port and…
yaron • 439 • 6 • 16
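Both of this user's GRE questions come down to the same knob: `struct rte_flow_action_rss` in the DPDK 19.11 rte_flow API has a `level` field, and `level = 2` asks the device to hash the innermost encapsulated headers. A hedged sketch follows; whether mlx5 on a given firmware accepts it is exactly what `rte_flow_validate()` reports. The port id and queue list are placeholders.

```c
#include <stdint.h>
#include <rte_flow.h>

/* Steer GRE-encapsulated traffic across queues by hashing inner IP/UDP. */
static struct rte_flow *inner_rss_flow(uint16_t port_id, const uint16_t *queues,
                                       uint32_t nq, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };

    /* Match outer Ethernet / IPv4 / GRE; inner payload left unspecified. */
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_GRE },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    /* level = 2: hash the innermost headers, not the outer GRE tunnel. */
    struct rte_flow_action_rss rss = {
        .level = 2,
        .types = ETH_RSS_IP | ETH_RSS_UDP,
        .queue = queues,
        .queue_num = nq,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
        return NULL;    /* PMD/firmware rejected inner-level RSS */
    return rte_flow_create(port_id, &attr, pattern, actions, err);
}
```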
0 votes • 1 answer

modprobe fails to insert beegfs after installing Mellanox drivers

I have a storage cluster that has been churning along for a few years. It's based around a pretty stock CentOS 7.6 setup using BeeGFS. In an effort to increase throughput I've decided to do a test upgrade of the network from 10GbE to 40GbE.…
Jarmund • 3,003 • 4 • 22 • 45
0 votes • 1 answer

RSS hash for IP-over-GRE packets

I am using a Mellanox Technologies MT27800 Family [ConnectX-5] with DPDK and multiple Rx queues with RSS ("ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP"). I receive packets with ETH:IP:GRE:ETH:IP:UDP and I want the load balancing to be according to the inner IP + port and not…
yaron • 439 • 6 • 16
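If the NIC cannot be convinced to hash inner headers, the same spreading can be reproduced in software: parse down to the inner IPv4/UDP header and feed the 5-tuple to DPDK's Toeplitz helper in `rte_thash.h`. A sketch under that assumption; the 40-byte key is a placeholder and must be identical in every process so all packets of a session pick the same worker.

```c
#include <stdint.h>
#include <rte_thash.h>

/* Placeholder key: any fixed 40-byte RSS key shared by all workers. */
static const uint8_t rss_key[40] = { 0x6d, 0x5a, /* ... */ };

/* Software Toeplitz hash over the inner 5-tuple (host byte order,
 * as rte_softrss() expects). */
static uint32_t inner_hash(uint32_t src_ip, uint32_t dst_ip,
                           uint16_t src_port, uint16_t dst_port)
{
    union rte_thash_tuple t;

    t.v4.src_addr = src_ip;
    t.v4.dst_addr = dst_ip;
    t.v4.sport    = src_port;
    t.v4.dport    = dst_port;

    return rte_softrss((uint32_t *)&t, RTE_THASH_V4_L4_LEN, rss_key);
}
```

A worker index is then just `inner_hash(...) % nb_workers`, applied after stripping the GRE encapsulation.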
0 votes • 1 answer

ConnectX-6 LX scheduled sending only sends 25 packets

We are trying to use send scheduling on a ConnectX-6 LX. If we set no timestamps on the packet buffers and manually send each packet at approximately the right time, everything works. However, if we set timestamps in the buffers, then the first 25…
Alan Birtles • 32,622 • 4 • 31 • 60
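For context, the documented way to hand DPDK a per-packet transmit time (what mlx5 send scheduling consumes when the port is started with the `tx_pp` devarg) is the dynamic mbuf timestamp field. This sketch assumes DPDK 20.11 or later, where the names below exist; it shows the stamping mechanism only and does not attempt to reproduce the asker's 25-packet stall.

```c
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_bitops.h>

static int ts_off;          /* byte offset of the timestamp dynfield */
static uint64_t ts_flag;    /* ol_flags bit: "schedule this mbuf" */

static int init_tx_timestamping(void)
{
    ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
    int bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
    if (ts_off < 0 || bit < 0)
        return -1;          /* PMD did not register the scheduling fields */
    ts_flag = RTE_BIT64(bit);
    return 0;
}

static void schedule_mbuf(struct rte_mbuf *m, uint64_t when)
{
    /* Absolute device-clock time at which the packet should leave. */
    *RTE_MBUF_DYNFIELD(m, ts_off, uint64_t *) = when;
    m->ol_flags |= ts_flag;
}
```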
0 votes • 1 answer

Full Linux cache causes drops at the NIC

I have a DPDK 19 application that reads from the NIC (MT27800 Family [ConnectX-5] 100G) with 32 Rx queues with RSS. So there are 32 processes that receive traffic from the NIC with DPDK; each process reads from a different queue and copies from the mbuf the…
yaron • 439 • 6 • 16
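When the host side (for instance a saturated cache or memory path) stops draining the rings fast enough, the drop shows up in the port counters rather than inside the application. A small check, assuming the standard ethdev stats API:

```c
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void report_drops(uint16_t port_id)
{
    struct rte_eth_stats st;

    if (rte_eth_stats_get(port_id, &st) != 0)
        return;

    /* imissed: the NIC dropped packets because Rx descriptors ran out,
     * i.e. the application fell behind; rx_nombuf: mbuf pool exhaustion;
     * ierrors: actual receive errors on the wire. */
    printf("port %u: imissed=%" PRIu64 " rx_nombuf=%" PRIu64
           " ierrors=%" PRIu64 "\n",
           port_id, st.imissed, st.rx_nombuf, st.ierrors);
}
```

A growing `imissed` with zero `ierrors` points at the host, not the link.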
0 votes • 2 answers

RSS hash for fragmented packets

I am using a Mellanox Technologies MT27800 Family [ConnectX-5] with DPDK and multiple Rx queues with RSS ("ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP"). I analyze traffic and need all packets of the same session to arrive at the same process (a session for now can be…
yaron • 439 • 6 • 16
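One common workaround for the fragment problem: hash on IP addresses only, so a fragment (which carries no L4 header for the NIC to read) and a full packet of the same session land on the same queue. A sketch with DPDK 19.11 macro names; the trade-off is losing port-based spreading.

```c
#include <rte_ethdev.h>

/* Port configuration that hashes on IP addresses only, so fragmented
 * and unfragmented packets of one session map to the same Rx queue. */
static const struct rte_eth_conf port_conf = {
    .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,          /* keep the PMD's default key */
            .rss_hf  = ETH_RSS_IP,    /* no TCP/UDP ports in the hash */
        },
    },
};
```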
0 votes • 1 answer

mlx5_core 0000:b5:00.0: mlx5_cmd_check:772:(pid 5271): CREATE_SQ(0x904) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0xd61c0b)

I am facing the following issue when creating a pod using an SR-IOV network. When I check the device driver output using dmesg: mlx5_core 0000:b5:00.0: mlx5_cmd_check:772:(pid 5271): CREATE_SQ(0x904) op_mod(0x0) failed, status bad parameter(0x3), syndrome…
0 votes • 0 answers

How to install the Mellanox driver?

I have nodes with an InfiniBand connection and CentOS 7.9 installed. When I execute the following: # lspci | grep Mellanox 01:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3] # lspci -vv -s 01:00.0 | grep "Part number" -A…
targat • 25 • 8
0 votes • 1 answer

Performance issue with Mellanox

Trying to do a performance test for a Mellanox NIC with traffic generated using IXIA. A 10G cable connects the traffic generator and DUT systems. Sending traffic with the IXIA traffic generator tool at 10G, but the reverse traffic throughput received…
ima • 539 • 2 • 7 • 15
0 votes • 0 answers

Unable to recognize master/representor on the multiple IB devices

I am getting a DPDK MLX5 probing issue. I have installed the mlx5/OFED driver and loaded the kernel modules. EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Selected IOVA mode 'PA' EAL: No available hugepages reported in…
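The mlx5 PMD prints this message when it cannot pair a PCI device with its representors during probe, so the devargs the port is probed with are worth checking. A hedged sketch of passing an explicit whitelist entry plus a representor list through EAL (DPDK 19.11 option syntax; the PCI address and representor range are placeholders):

```c
#include <rte_eal.h>

int main(int argc, char **argv)
{
    (void)argc; (void)argv;

    /* Probe one PF together with its VF representors; each representor
     * then appears as its own ethdev port. Address/range are placeholders. */
    char *eal_args[] = {
        "app", "-w", "0000:b5:00.0,representor=[0-3]",
    };
    int n = sizeof(eal_args) / sizeof(eal_args[0]);

    if (rte_eal_init(n, eal_args) < 0)
        return 1;
    return 0;
}
```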
0 votes • 1 answer

What to change in ibverbs when switching from UD to RC connections

I'm looking at ibverbs code from Mellanox with a send/recv operation via ibverbs. The code uses UD connections, but it didn't work when I changed qp_type = IBV_QPT_UD to IBV_QPT_RC. What do I need to change in this case other than the…
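In short: a UD QP reaches RTS with just a qkey and addresses each send via a work-request address handle, while an RC QP is bound to exactly one peer, so RTR needs the remote QPN, starting PSN and path, and RTS needs retry/timeout state. A hedged sketch of the extra `ibv_modify_qp()` calls; all numeric values are placeholders, and real code exchanges `dest_qpn`/`dlid`/PSNs out of band.

```c
#include <infiniband/verbs.h>
#include <string.h>

static int bring_rc_qp_to_rts(struct ibv_qp *qp, uint32_t dest_qpn,
                              uint16_t dlid, uint8_t port)
{
    struct ibv_qp_attr a;

    /* INIT: RC takes access flags where UD took a qkey. */
    memset(&a, 0, sizeof(a));
    a.qp_state = IBV_QPS_INIT;
    a.pkey_index = 0;
    a.port_num = port;
    a.qp_access_flags = IBV_ACCESS_REMOTE_WRITE | IBV_ACCESS_REMOTE_READ;
    if (ibv_modify_qp(qp, &a, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                              IBV_QP_PORT | IBV_QP_ACCESS_FLAGS))
        return -1;

    /* RTR: unlike UD, RC must be pointed at exactly one remote QP. */
    memset(&a, 0, sizeof(a));
    a.qp_state = IBV_QPS_RTR;
    a.path_mtu = IBV_MTU_1024;
    a.dest_qp_num = dest_qpn;
    a.rq_psn = 0;
    a.max_dest_rd_atomic = 1;
    a.min_rnr_timer = 12;
    a.ah_attr.dlid = dlid;
    a.ah_attr.port_num = port;
    if (ibv_modify_qp(qp, &a, IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                              IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                              IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER))
        return -1;

    /* RTS: retry/timeout fields that UD never needed. */
    memset(&a, 0, sizeof(a));
    a.qp_state = IBV_QPS_RTS;
    a.sq_psn = 0;
    a.timeout = 14;
    a.retry_cnt = 7;
    a.rnr_retry = 7;
    a.max_rd_atomic = 1;
    return ibv_modify_qp(qp, &a, IBV_QP_STATE | IBV_QP_SQ_PSN |
                                 IBV_QP_TIMEOUT | IBV_QP_RETRY_CNT |
                                 IBV_QP_RNR_RETRY | IBV_QP_MAX_QP_RD_ATOMIC);
}
```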
0 votes • 1 answer

Peculiar behaviour with Mellanox ConnectX-5 and DPDK in rxonly mode

Recently I observed peculiar behaviour with a Mellanox ConnectX-5 100 Gbps NIC while working at 100 Gbps in DPDK's rxonly mode. I was able to receive 142 Mpps using 12 queues; however, with 11 queues it was only 96…
0 votes • 1 answer

Is there any way to make RSS work against SRv6 packets?

I am using my GitHub project, which uses eBPF to filter/lookup/redirect/drop packets based on SRv6 routing. The eBPF code is running on a Mellanox ConnectX-5 for SRv6 functionality. My expectation is that the Mellanox ConnectX-5 will look into the SRv6 Destination…
takeru ta • 1 • 2
0 votes • 2 answers

Can DPDK selectively init NIC ports?

I'm using a dual-port NIC, a Mellanox ConnectX-5, and the DPDK version is dpdk-stable-19.11.3. After configuration, a call to rte_eth_dev_count_avail() returns 2, but only one port of my ConnectX-5 NIC is connected to the other machine. All I can…
Hovin • 39 • 8
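Two usual answers, sketched below: restrict probing itself with EAL's `-w <pci>` whitelist (19.11 naming), or let both ports probe and configure/start only one of them; nothing in ethdev requires every counted port to be initialized. `wanted` is a placeholder port id.

```c
#include <rte_ethdev.h>

/* Configure and use only the chosen port; leave the other one alone. */
static int init_single_port(uint16_t wanted)
{
    uint16_t pid;

    RTE_ETH_FOREACH_DEV(pid) {
        if (pid != wanted)
            continue;           /* skip the unconnected port entirely */
        if (rte_eth_dev_configure(pid, 1, 1,
                                  &(struct rte_eth_conf){0}) < 0)
            return -1;
        /* rte_eth_rx_queue_setup() / rte_eth_tx_queue_setup() /
         * rte_eth_dev_start() follow here for this port only. */
        return 0;
    }
    return -1;                  /* wanted port was never probed */
}
```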