Questions tagged [pci-e]

PCI Express (PCIe) is a high-speed, serial, point-to-point interconnect that evolved from PCI and PCI-X. The newest generation is Gen 5.0. PCIe is maintained and developed by the PCI-SIG.


Versions

  • PCIe Gen1 -- Released in 2003; signals at 2.5 GT/s per lane per direction.
  • PCIe Gen2 -- Released in January 2007; signals at 5 GT/s per lane per direction.
  • PCIe Gen3 -- Released in November 2010; signals at 8 GT/s per lane per direction.
  • PCIe Gen4 -- Released in June 2017; signals at 16 GT/s per lane per direction.
  • PCIe Gen5 -- Released in May 2019; signals at 32 GT/s per lane per direction.
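
As a quick worked example of turning these line rates into usable bandwidth: Gen 1 and Gen 2 use 8b/10b encoding, while Gen 3 and later use 128b/130b. A Gen 1 x1 link therefore carries 2.5 GT/s × 8/10 = 2 Gbit/s ≈ 250 MB/s per direction, and a Gen 3 x16 link carries 8 GT/s × 16 lanes × 128/130 ≈ 15.75 GB/s per direction, before TLP/DLLP protocol overhead.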

References

PCIe (Wikipedia)


449 questions
9 votes, 1 answer

how to do mmap for cacheable PCIe BAR

I am trying to write a driver with custom mmap() function for PCIe BAR, with the goal to make this BAR cacheable in the processor cache. I am aware this is not the best way to achieve highest bandwidth and that the order of writes is unpredictable…
sols

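For this question, a minimal sketch of a driver mmap() handler for a PCIe BAR is shown below. The bar_start/bar_len names are placeholders assumed to be filled in at probe time from pci_resource_start()/pci_resource_len(); the pgprot handling is where cacheability is decided, and whether a cacheable BAR mapping is actually honoured depends on the architecture (and on PAT/MTRR settings on x86):

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/pci.h>

    /* Illustrative globals; a real driver keeps these in its private data. */
    static resource_size_t bar_start;   /* from pci_resource_start(pdev, 0) */
    static resource_size_t bar_len;     /* from pci_resource_len(pdev, 0)   */

    static int my_bar_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > bar_len)
            return -EINVAL;

        /* The conventional choice for MMIO is uncached (or write-combined);
         * leaving vm_page_prot untouched is what a cacheable mapping would
         * look like, with the caveats noted above.                          */
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        return io_remap_pfn_range(vma, vma->vm_start,
                                  (bar_start >> PAGE_SHIFT) + vma->vm_pgoff,
                                  size, vma->vm_page_prot);
    }
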
8 votes, 1 answer

How to generate a zero-length read on PCIE Bus using x86-64 and Linux?

In the PCI Express Base specification, section 2.2.5 "First/Last DW Byte Enables Rules", it says a zero length read can be used as a flush request. However, in the linux kernel documentation, most examples just use either a 1B or 4B read…
Joel

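On the flush point raised here: the kernel has no portable API for emitting a true zero-length read TLP, so drivers conventionally flush posted MMIO writes by reading back any register in the same BAR. A minimal sketch (the 0x10 offset is a hypothetical control register):

    #include <linux/io.h>

    static void flush_posted_writes(void __iomem *regs)
    {
        writel(0x1, regs + 0x10);   /* posted MMIO write                     */
        (void)readl(regs + 0x10);   /* non-posted 4-byte read: guarantees the
                                     * preceding write reached the device    */
    }
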
8 votes, 3 answers

Linux PCIe DMA Driver (Xilinx XDMA)

I am currently working with the Xilinx XDMA driver (see here for source code: XDMA Source), and am attempting to get it to run (before you ask: I have contacted my technical support point of contact and the Xilinx forum is riddled with people having…
It'sPete

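For reference, once the XDMA driver loads it exposes per-channel character devices that user space drives with plain read()/write(). The sketch below assumes the driver's default /dev/xdma0_h2c_0 node name and a single host-to-card channel:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint8_t buf[4096] = { 0 };
        int fd = open("/dev/xdma0_h2c_0", O_WRONLY);

        if (fd < 0) { perror("open"); return 1; }

        /* The pwrite() offset selects the card-side address the DMA targets. */
        if (pwrite(fd, buf, sizeof(buf), 0x0) != (ssize_t)sizeof(buf))
            perror("pwrite");

        close(fd);
        return 0;
    }
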
8 votes, 1 answer

What's the difference between pci_enable_device and pcim_enable_device?

This book's PCI chapter explains int pci_enable_device(struct pci_dev *dev); however, there's also int pcim_enable_device(struct pci_dev *pdev). But besides stating it's a "Managed pci_enable_device", it has no explanation. So what's the…
Tar

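The short answer hinted at by the name: pcim_enable_device() is the device-resource-managed (devres) variant, so whatever it sets up is torn down automatically when the driver detaches, and probe error paths and remove() need no matching pci_disable_device(). A minimal probe sketch (the driver name string and BAR mask are illustrative):

    #include <linux/pci.h>

    static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int err;

        err = pcim_enable_device(pdev);          /* managed: auto-disabled later */
        if (err)
            return err;

        err = pcim_iomap_regions(pdev, BIT(0), "my_driver");
        if (err)
            return err;                          /* devres unwinds everything    */

        return 0;
    }
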
8 votes, 1 answer

Does the nVidia RDMA GPUDirect always operate only physical addresses (in physical address space of the CPU)?

As we know: http://en.wikipedia.org/wiki/IOMMU#Advantages Peripheral memory paging can be supported by an IOMMU. A peripheral using the PCI-SIG PCIe Address Translation Services (ATS) Page Request Interface (PRI) extension can detect and signal…
Alex

8 votes, 2 answers

MMIO read/write latency

I found my MMIO read/write latency is unreasonably high. I hope someone could give me some suggestions. In the kernel space, I wrote a simple program to read a 4 byte value in a PCIe device's BAR0 address. The device is a PCIe Intel 10G NIC and…
William Tu

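A minimal sketch of the kind of measurement described here, timing a single 4-byte MMIO read from kernel space; bar0 is assumed to come from pci_iomap(pdev, 0, 0) in the real driver:

    #include <linux/io.h>
    #include <linux/ktime.h>

    static u64 time_one_mmio_read(void __iomem *bar0)
    {
        ktime_t t0, t1;
        u32 val;

        t0 = ktime_get();
        val = readl(bar0);      /* non-posted: waits for the completion TLP */
        t1 = ktime_get();

        (void)val;
        return ktime_to_ns(ktime_sub(t1, t0));
    }

Latencies on the order of a microsecond are not unusual for a read that has to cross the PCIe link and wait for the completion to come back.
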
8 votes, 2 answers

Linux driver DMA transfer to a PCIe card with PC as master

I am working on a DMA routine to transfer data from PC to a FPGA on a PCIe card. I read DMA-API.txt and LDD3 ch. 15 for details. However, I could not figure out how to do a DMA transfer from PC to a consistent block of iomem on the PCIe card. The…
kohpe

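One point that often causes confusion here: a generic PCIe host has no "push" DMA engine, so even for a PC-to-card transfer the card's DMA engine does the reading; the host only maps the buffer and hands the card a bus address. A minimal streaming-DMA sketch (the card-programming step is left as a comment because those registers are device-specific):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static int start_h2c_dma(struct pci_dev *pdev, void *buf, size_t len)
    {
        dma_addr_t bus_addr;

        bus_addr = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(&pdev->dev, bus_addr))
            return -ENOMEM;

        /* Program the card's DMA engine with bus_addr and len, start the
         * transfer, and wait for its completion interrupt here.           */

        dma_unmap_single(&pdev->dev, bus_addr, len, DMA_TO_DEVICE);
        return 0;
    }
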
8 votes, 2 answers

Linux device driver to allow an FPGA to DMA directly to CPU RAM

I'm writing a linux device driver to allow an FPGA (currently connected to the PC via PCI express) to DMA data directly into CPU RAM. This needs to happen without any interaction and user space needs to have access to the data. Some details: -…
jedrumh

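A common pattern for this requirement is a coherent DMA buffer that the FPGA writes into and that user space mmap()s. A minimal sketch (buffer size and names are illustrative, and a real driver would call the mmap helper from its file_operations handler):

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    #define BUF_SIZE (4 << 20)                  /* 4 MiB, illustrative      */

    static void      *cpu_buf;                  /* kernel virtual address   */
    static dma_addr_t dma_handle;               /* address the FPGA DMAs to */

    static int alloc_dma_buffer(struct device *dev)
    {
        cpu_buf = dma_alloc_coherent(dev, BUF_SIZE, &dma_handle, GFP_KERNEL);
        return cpu_buf ? 0 : -ENOMEM;
    }

    static int map_buffer_to_user(struct device *dev, struct vm_area_struct *vma)
    {
        /* Gives user space a mapping of the same pages the FPGA writes. */
        return dma_mmap_coherent(dev, vma, cpu_buf, dma_handle, BUF_SIZE);
    }
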
7 votes, 1 answer

Is CPU to GPU data transfer slow in TensorFlow?

I've tested CPU to GPU data transfer throughput with TensorFlow and it seems to be significantly lower than in PyTorch. For large tensors between 2x and 5x slower. In TF, I reach maximum speed for 25MB tensors (~4 GB/s) and it drops down to 2 GB/s…
Michal Hradiš

7 votes, 2 answers

PCIe Driver - How does user space access it?

I am writing a PCIe driver for Linux, currently without DMA, and need to know how to read and write to the PCIe device once it is enabled from user space. In the driver I do the basics in…
user2205930

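If the driver only needs to expose BAR registers (no DMA), the simplest route is often not a custom char device at all: user space can mmap() the BAR through sysfs. A minimal sketch (the bus/device/function in the path and the register offsets are examples):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource0",
                      O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        printf("reg[0] = 0x%08x\n", regs[0]);   /* 32-bit MMIO read  */
        regs[1] = 0x1;                          /* 32-bit MMIO write */

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }
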
7 votes, 3 answers

Is Multi Message MSI implemented on Linux / x86?

I am working on a network driver for an FPGA endpoint that supports multi-message MSI interrupts (not msix) on a PCIe bus. The host processor is an x86 Intel i7 620LM running on CentOS with a 4.2 kernel. The FPGA endpoint correctly advertises…
Tanner

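For comparison, this is what allocating several MSI vectors looks like with the current kernel API (pci_alloc_irq_vectors() arrived after the 4.2 kernel named in the question, which would use pci_enable_msi_range() instead); whether more than one message is actually granted depends on the platform's interrupt controller:

    #include <linux/interrupt.h>
    #include <linux/pci.h>

    static irqreturn_t my_isr(int irq, void *data)
    {
        return IRQ_HANDLED;
    }

    static int setup_msi(struct pci_dev *pdev)
    {
        int i, nvec;

        nvec = pci_alloc_irq_vectors(pdev, 1, 8, PCI_IRQ_MSI);
        if (nvec < 0)
            return nvec;                 /* may grant fewer than 8 vectors */

        for (i = 0; i < nvec; i++) {
            int err = request_irq(pci_irq_vector(pdev, i), my_isr, 0,
                                  "my_fpga", pdev);
            if (err)
                return err;
        }
        return 0;
    }
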
7 votes, 1 answer

PCI-e lane allocation on 2-GPU cards?

The data rate of cudaMemcpy operations is heavily influenced by the number of PCI-e 3.0 (or 2.0) lanes that are allocated to run from the CPU to GPU. I'm curious about how PCI-e lanes are used on Nvidia devices containing two GPUs. Nvidia has a few…
solvingPuzzles

7 votes, 2 answers

PCIe 64-bit Non-Prefetchable Spaces

I've been reading through the horror that is the PCIe spec, and still can't get any kind of resolution to the following question pair. Does PCIe allow for mapping huge (say 16GB) 64-bit non-prefetchable memory spaces up above the 4GB boundary? Or…
Rgaddi

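Independent of what the spec permits, a driver can check at probe time where firmware actually placed a BAR and whether it was assigned as a 64-bit region; a small sketch (BAR index 2 is an example):

    #include <linux/pci.h>

    static void report_bar2(struct pci_dev *pdev)
    {
        resource_size_t start = pci_resource_start(pdev, 2);
        unsigned long   flags = pci_resource_flags(pdev, 2);

        if ((flags & IORESOURCE_MEM_64) && start >= 0x100000000ULL)  /* 4 GB */
            dev_info(&pdev->dev, "BAR2 is a 64-bit BAR placed above 4 GB\n");
    }
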
6 votes, 1 answer

PCI-e memory space access with mmap

I'm using PCI-e port on Freescale MPC8308 processor (which is based on PowerPC architecture) and I have some problems when trying to use it. The endpoint PCI-e device has memory space equal to 256 MB. I can easily read and write configuration space…
ali rasteh

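On the kernel side, mapping and reading the endpoint's memory space usually comes down to pci_iomap() plus the ioread*/iowrite* accessors; a minimal sketch (BAR 0 is assumed, and a maxlen of 0 maps the whole BAR):

    #include <linux/pci.h>

    static int map_and_peek(struct pci_dev *pdev)
    {
        void __iomem *bar = pci_iomap(pdev, 0, 0);

        if (!bar)
            return -ENOMEM;

        dev_info(&pdev->dev, "first word: 0x%08x\n", ioread32(bar));
        pci_iounmap(pdev, bar);
        return 0;
    }
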
6 votes, 2 answers

Why are MSI interrupts not shared?

Can anybody tell why MSI interrupts are not shareable in Linux? Pin-based interrupts can be shared by devices, but MSI interrupts are not; each device gets its own MSI IRQ number. Why can't MSI interrupts be shared?
valmiki