I have a custom PCIe endpoint (a generic endpoint supporting basic capabilities) whose MSI capability advertises 16 IRQ vectors (Multiple Message Capable). However, when I run an x86_64 Ubuntu desktop guest OS, whose PCIe host side programs the Multiple Message Enable field [bits 6:4 of the MSI Message Control register], only 1 MSI vector is ever configured. The guest runs in a VirtualBox environment.
As a result, multi-MSI is not being used.
In the Linux kernel, the pci_alloc_irq_vectors() API is what ends up programming the Multiple Message Enable bits [6:4]; in my case it returns 1, i.e. only one MSI vector is allocated even though the EP supports 16. pci_msi_vec_count() does return 16.
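For reference, the allocation in my driver looks roughly like the sketch below (a minimal illustration; MY_EP_NUM_VECTORS and the logging are made up for this post, not my exact code):

    #include <linux/pci.h>

    #define MY_EP_NUM_VECTORS 16    /* illustrative: vectors advertised by the EP */

    static int my_ep_setup_irqs(struct pci_dev *pdev)
    {
        int nvec;

        /* The EP advertises 16 vectors in Multiple Message Capable. */
        dev_info(&pdev->dev, "pci_msi_vec_count() = %d\n",
                 pci_msi_vec_count(pdev));

        /*
         * Request up to 16 MSI vectors. On my x86_64 VirtualBox guest this
         * returns 1; on ARM the same request gets all 16.
         */
        nvec = pci_alloc_irq_vectors(pdev, 1, MY_EP_NUM_VECTORS, PCI_IRQ_MSI);
        if (nvec < 0)
            return nvec;

        dev_info(&pdev->dev, "allocated %d MSI vector(s)\n", nvec);
        /* pci_irq_vector(pdev, i) then maps vector i to a Linux IRQ number. */
        return nvec;
    }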
Tracing the kernel code path shows that the PCI-MSI IRQ domain is being used, which does not support the multi-MSI capability. I have enabled all the config options that seem to be required for multi-MSI support via the IR-PCI-MSI (interrupt-remapping) domain, such as IRQ_REMAP, PCI_MSI, PCI_DOMAINS*, etc., but everything was in vain.
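One way I double-check what the host actually programmed, as opposed to what the EP advertises, is to dump the MSI Message Control register from the driver. A rough diagnostic sketch (not the exact code in my tree) looks like this:

    #include <linux/pci.h>

    static void my_ep_dump_msi_ctrl(struct pci_dev *pdev)
    {
        int pos = pci_find_capability(pdev, PCI_CAP_ID_MSI);
        u16 ctrl;

        if (!pos)
            return;

        pci_read_config_word(pdev, pos + PCI_MSI_FLAGS, &ctrl);

        /* Multiple Message Capable (bits 3:1): what the EP advertises. */
        dev_info(&pdev->dev, "MSI: capable of %u vectors\n",
                 1u << ((ctrl & PCI_MSI_FLAGS_QMASK) >> 1));
        /* Multiple Message Enable (bits 6:4): what the host configured. */
        dev_info(&pdev->dev, "MSI: %u vector(s) enabled\n",
                 1u << ((ctrl & PCI_MSI_FLAGS_QSIZE) >> 4));
    }

On my setup this would be expected to report 16 vectors capable but only 1 enabled.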
I would appreciate some help from VirtualBox experts with the following questions:
Does the generic ACPI table set supplied to the Ubuntu 18.04 64-bit guest have any effect on which MSI IRQ domain is chosen during PCI scanning? If so, what steps do I need to take to enable multi-MSI support?
I can see that TI's endpoint test driver works properly on the ARM architecture, but it fails on x86. I am using a similar driver compiled for my platform and can even load it for my EP device; see pci_endpoint_test.c for the driver in use.
Any suggestions are appreciated.
My current setup:
guest CPU detected: Intel Xeon E5*
OS: Ubuntu desktop 18.04.1 64-bit (amd64)
arch: x86_64
Linux kernels tried: 4.14, 4.18
Thanks in advance.