Questions tagged [mellanox]

Mellanox Technologies (NASDAQ: MLNX) offers a choice of fast interconnect products: adapters, switches, software and silicon that accelerate application run time and maximize business results for a wide range of markets including high performance computing, enterprise data centers, Web 2.0, cloud, storage and financial services.

Mellanox interconnect solutions increase data center efficiency by providing the highest throughput and lowest latency, delivering data faster to applications and unlocking system performance capability.

Founded in 1999, Mellanox Technologies is headquartered in Sunnyvale, California and Yokneam, Israel.

63 questions
1
vote
0 answers

kernel program RDMA (krping)

I'm using a kernel module to do RDMA transfers in kernel space over InfiniBand (krping.c, from git.openfabrics.org, ~sgrimberg/krping.git). The cards I have are Mellanox ConnectX-4 (driver: mlx5), Linux kernel version: 3.13, Ubuntu 12.04,…
S. Salman
  • 590
  • 1
  • 6
  • 22
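krping is driven by writing a comma-separated option string to its proc interface (conventionally /proc/krping). A minimal sketch of assembling such a string, assuming the option names addr, port, size, and count documented in krping.c — verify them against the version you actually build:

```python
def krping_options(mode, addr, port, size=65536, count=100):
    """Build a krping option string of the form
    'client,addr=...,port=...,size=...,count=...'.
    Option names follow krping's /proc interface and should be
    checked against the krping.c source you compile."""
    if mode not in ("client", "server"):
        raise ValueError("mode must be 'client' or 'server'")
    return f"{mode},addr={addr},port={port},size={size},count={count}"

# The string is then written to the module's proc file, e.g.:
#   with open("/proc/krping", "w") as f:
#       f.write(krping_options("client", "192.168.0.2", 9999))
```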
1
vote
1 answer

Is it possible to use RDMA Mellanox libraries from within a kernel module?

I want to develop a kernel module that is able to send/receive RDMA messages. I am wondering if the Mellanox libraries can be called from kernel space. Can I call Mellanox RDMA functions from a kernel module? Answer: I have some working code here:…
JC1
  • 657
  • 6
  • 21
1
vote
2 answers

dpdk_nic_bind.py doesn't show Mellanox cards, why?

I'm trying to set up DPDK on a Mellanox ConnectX-3 card and run some of the applications that come with it, e.g., l2fwd. My understanding is that I need to use the dpdk_nic_bind.py script that comes with the DPDK distribution to bind ports to Mellanox…
Salem Derisavi
  • 137
  • 1
  • 10
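The usual explanation is that the Mellanox PMDs (mlx4/mlx5) use a bifurcated driver model: the port stays bound to the kernel driver (mlx4_core/mlx5_core), so dpdk_nic_bind.py deliberately does not offer them for uio/vfio rebinding. A sketch of that distinction — the driver names are the standard kernel module names, and the helper functions are hypothetical:

```python
import os

# DPDK PMDs with a bifurcated model keep the port bound to the kernel
# driver, so no uio/vfio rebinding is needed (and dpdk_nic_bind.py
# will not list the device as bindable).
BIFURCATED_DRIVERS = {"mlx4_core", "mlx5_core"}

def needs_uio_binding(kernel_driver: str) -> bool:
    """Return True if a NIC using this kernel driver must be rebound
    to a uio/vfio driver before DPDK can use it."""
    return kernel_driver not in BIFURCATED_DRIVERS

def pci_driver(pci_addr: str) -> str:
    """Read the currently bound driver for a PCI device from sysfs,
    e.g. pci_driver('0000:5e:00.0') -> 'mlx5_core'."""
    link = f"/sys/bus/pci/devices/{pci_addr}/driver"
    return os.path.basename(os.readlink(link))
```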
0
votes
0 answers

Finding the Temperature of Mellanox card and other computer components in C++

I want to log the performance of my system under very intensive network loads (sending over 100 Gbps). At some point after running my network at continuous >100 Gbps loads (using my own C++ implementation via winsock2), my transfer rates start dropping,…
Valdez
  • 46
  • 3
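On Linux, mlx5 NICs typically expose their temperature through the hwmon sysfs interface in millidegrees Celsius (the question's winsock2 setup suggests Windows, where vendor tooling would be needed instead, so treat this as the Linux-side sketch only; exact hwmon paths vary per system):

```python
import glob
import os

def millideg_to_celsius(raw: str) -> float:
    """hwmon tempN_input files hold millidegrees Celsius as text."""
    return int(raw.strip()) / 1000.0

def read_temperatures():
    """Yield (sensor name, degrees C) for each hwmon temperature input.
    On Linux, mlx5 NICs usually register an hwmon device."""
    for temp_file in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        name_file = os.path.join(os.path.dirname(temp_file), "name")
        try:
            with open(name_file) as f:
                name = f.read().strip()
            with open(temp_file) as f:
                yield name, millideg_to_celsius(f.read())
        except OSError:
            continue  # sensor disappeared or is unreadable
```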
0
votes
0 answers

Dpdk-testpmd panic "mlx5_net: Cannot register matcher" in DPDK 23.03

When I run dpdk-testpmd in DPDK 23.03, it reports this error. However, I did not face the same error when running dpdk-testpmd in DPDK 20.05. Why does this error occur? How should I fix it?
0
votes
0 answers

yum install kernel-devel-6.4.3-1.el8.elrepo.x86_64 Failed

I am trying to install MLNX_OFED_LINUX-23.04-1.1.3.0-rhel8.5-x86_64 on CentOS 8.5; it requires kernel-devel-6.4.3-1.el8.elrepo.x86_64 to be installed to continue further. But yum install kernel-devel-6.4.3-1.el8.elrepo.x86_64 failed with…
0
votes
0 answers

Do I need to install MLNX_OFED in Docker containers to use the mlx5 drivers?

I have installed MLNX_OFED on my physical servers. Now I will configure DPDK based on a Mellanox ConnectX-5 NIC in Docker; do I need to install MLNX_OFED in the container again? Can the DPDK installed in the container be linked to the mlx5 driver on the…
0
votes
0 answers

How to solve "PCI width status is below PCI capabilities" for BlueField Device?

I used the mlnx_tune command to check the status of the BlueField device, which shows: It warns that "PCI width status is below PCI capabilities." How can I configure the BIOS to solve it?
0
votes
0 answers

How to fix a Python "pyverbs" application whose RDMA write gets a negative acknowledge (NACK)

I'm currently working on a RoCE (RDMA over Converged Ethernet) Python application using the pyverbs library. First, I want to do a simple loopback test with an RDMA write. I tested the setup with ib_write_bw from perftest, which worked like a…
RealGhost
  • 1
  • 2
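When an RDMA write is NACKed, the work completion's status usually points at the cause. A hypothetical diagnostic table as a starting point — the status names follow libibverbs' ibv_wc_status enum, but the hint texts are my own informal summaries, not library output:

```python
# Common ibv_wc_status values seen when an RDMA write fails,
# mapped to the usual root cause. Names follow libibverbs'
# ibv_wc_status enum; the explanations are informal hints.
WC_STATUS_HINTS = {
    "IBV_WC_RETRY_EXC_ERR":
        "transport retries exhausted: remote QP not in RTR/RTS, "
        "or GID/QPN/PSN exchanged incorrectly",
    "IBV_WC_RNR_RETRY_EXC_ERR":
        "receiver not ready: remote side has no receive resources posted",
    "IBV_WC_REM_ACCESS_ERR":
        "remote access error: wrong rkey, or the remote MR lacks "
        "IBV_ACCESS_REMOTE_WRITE",
    "IBV_WC_LOC_PROT_ERR":
        "local protection error: local MR does not cover the buffer",
}

def explain_wc_status(status_name: str) -> str:
    """Map a work-completion status name to a likely-cause hint."""
    return WC_STATUS_HINTS.get(
        status_name, "unrecognized status; check ibv_wc_status_str()")
```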
0
votes
0 answers

Mellanox: Achieving inner IP RSS on tunnels

I'm working with a Mellanox ConnectX-6 Dx and using DPDK (ver 22.03) to capture and load-distribute traffic with inner-IP RSS. I'm facing a problem with load distribution on tunnelled traffic. With some knowledge I gathered over the internet, I was able to load…
0
votes
1 answer

OpenMPI 4.1.1 "There was an error initializing an OpenFabrics device" with InfiniBand Mellanox MT28908

Similar to the discussion at MPI hello_world to test infiniband, we are using OpenMPI 4.1.1 on RHEL 8 with 5e:00.0 Infiniband controller [0207]: Mellanox Technologies MT28908 Family [ConnectX-6] [15b3:101b], we see this warning with mpirun: WARNING:…
RobbieTheK
  • 178
  • 1
  • 11
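That warning typically comes from the legacy openib BTL probing the device. On OpenMPI 4.x with ConnectX-class hardware, the usual remedy is to route traffic through the UCX PML and disable openib; a hedged sketch of the mpirun flags (confirm the components exist in your build with ompi_info):

```shell
# Prefer the UCX PML for InfiniBand and disable the legacy openib BTL
# (OpenMPI 4.x with Mellanox ConnectX hardware).
mpirun --mca pml ucx --mca btl ^openib -np 4 ./hello_world
```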
0
votes
2 answers

Which RX queue will be bound to a specific CPU core?

I want to assign RX queues to CPU cores with a 1:1 mapping, and I use an mlx5 NIC. I want to make some different changes to the RX queue of each core, so I want to know the mapping between RX queue indices and CPU cores. I have noted that there is a…
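On mlx5, each RX queue's completion IRQ appears in /proc/interrupts with a name like "mlx5_comp3@pci:0000:5e:00.0", and the queue-to-core mapping is whatever smp_affinity assigns to that IRQ. A sketch under that naming assumption (check your own /proc/interrupts, since the format can vary by driver version):

```python
import re

# mlx5 completion IRQs show up in /proc/interrupts with names like
# 'mlx5_comp3@pci:0000:5e:00.0' -- one per RX/TX queue pair.
_COMP_RE = re.compile(r"mlx5_comp(\d+)@pci:([0-9a-fA-F:.]+)")

def parse_comp_queue(irq_name: str):
    """Extract (queue index, PCI address) from an mlx5 completion
    IRQ name, or return None if it is not one."""
    m = _COMP_RE.search(irq_name)
    if not m:
        return None
    return int(m.group(1)), m.group(2)

# The cores servicing queue N are then listed in
#   /proc/irq/<irq>/smp_affinity_list
# so a 1:1 pinning is done by writing one core id per queue IRQ there.
```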
0
votes
1 answer

How to resolve "Scatter offload is not configured" error when testing jumbo frames on Mellanox

How to resolve the scatter offload configuration error when testing jumbo frames on a Mellanox BlueField-2? DPDK version: 20.11.1. Error details: Initializing rx queues on lcore 1 ... rxq=0,0,0 mlx5_pci: port 0 Rx queue 0: Scatter offload is not configured…
ima
  • 539
  • 2
  • 7
  • 15
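The mlx5 PMD raises this when the maximum packet length exceeds a single mbuf but the scatter RX offload was not enabled. In the DPDK 20.11-era sample apps that log "Initializing rx queues on lcore", jumbo frames are usually enabled via command-line flags; a hedged sketch of an l3fwd invocation (flag names taken from 20.11-era l3fwd and removed in later releases, so verify against your app's usage text):

```shell
# DPDK 20.11 l3fwd: enable jumbo frames so the PMD configures the
# scatter RX offload for frames larger than a single mbuf.
./dpdk-l3fwd -l 1 -n 4 -- -p 0x1 --config="(0,0,1)" \
    --enable-jumbo --max-pkt-len 9000
```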
0
votes
1 answer

undefined reference to `mlx5dv_create_flow_action_packet_reformat' in DPDK mlx5

mlx5dv_create_flow_action_packet_reformat is a function in … The error shows: undefined reference to `mlx5dv_create_flow_action_packet_reformat'. Why can't I link the mlx5dv_create_flow_action_packet_reformat function? Which…
0
votes
1 answer

Mellanox NICs listed in lspci & lshw but not in ip link

Ubuntu 20.04, Mellanox ConnectX-5 100G NICs. I see the Mellanox NICs when running lspci and lshw, but I don't see them listed in ip link show. When looking at lshw for other NICs on the system I see a logical interface like eno1, but for the…
Dave0
  • 133
  • 7