My understanding is that the BIOS or EFI detects the hardware during boot-up and determines each device's interrupt number, then passes this information to Linux once the kernel is up and running. Based on my research, the lower the interrupt number, the higher its priority.

My question is: how does the BIOS/EFI decide which hardware should have higher priority than another? Is this configurable, or is it hardcoded by the BIOS/EFI?

  • Physical interrupt lines are wired in the hardware; the firmware can't detect them, it has them hardcoded. The IRQ lines then go to the IOAPIC, followed by the LAPIC. The IOAPIC is programmed by the OS. So you're really asking how the OS distributes priorities (besides the MSI/MSI-X capability). – 0andriy Sep 13 '19 at 08:12

1 Answer


Kind of.

When using the legacy 8259A PIC, one of its priority modes is based on the IRQ number, with lower IRQs having higher priority.
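
To make that concrete, here is a minimal C sketch (illustrative only, not real driver code) of how the 8259A's default fully-nested mode picks a winner: among the pending requests (IRR) that are not masked (IMR), the lowest IR line wins, and it may only preempt an in-service routine (ISR) of lower priority:

    #include <stdint.h>

    /* Sketch of 8259A fully-nested priority resolution:
     * lower IR line number = higher priority. */
    static int pic8259_next_irq(uint8_t irr, uint8_t imr, uint8_t isr)
    {
        uint8_t pending = irr & (uint8_t)~imr;   /* requested and not masked */

        for (int line = 0; line < 8; line++) {   /* IR0 first: highest priority */
            uint8_t bit = (uint8_t)(1u << line);
            if (isr & bit)
                return -1;    /* an equal/higher-priority IRQ is in service: wait */
            if (pending & bit)
                return line;  /* this line is serviced next */
        }
        return -1;            /* nothing deliverable */
    }
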
However, with the IO APIC and MSI(-X), the interrupt priority is handled in the LAPIC and is configurable by the OS.
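
The LAPIC side can be sketched too: there, priority is derived from the interrupt vector the OS picked (bits 7:4 form the priority class), which is why it is software policy rather than wiring. A hedged illustration:

    #include <stdint.h>

    /* On the LAPIC, priority class = vector[7:4]; the OS chooses the
     * vector, hence the priority. The Task Priority Register (TPR)
     * blocks delivery of any class at or below its own. */
    static inline unsigned lapic_class(uint8_t vector) { return vector >> 4; }

    static inline int lapic_would_deliver(uint8_t vector, uint8_t tpr)
    {
        /* Delivered only if the vector's class is strictly above TPR's. */
        return lapic_class(vector) > lapic_class(tpr);
    }
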

In the legacy scenario, the classic ISA devices have fixed IRQs (not configurable).
The priorities were assigned so that important/frequent tasks could interrupt less important/frequent ones.
Today those devices are emulated and their IRQ can be reassigned (in some cases, depending on the chipset/Super I/O/embedded controller) if needed, but that could cause compatibility issues.
So every device that impersonates a legacy one (e.g. an HDD) is usually assigned its legacy IRQ number.
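
For reference, these are the classic fixed ISA assignments I'm referring to (well-known conventions, shown as a C enum only for compactness):

    /* Classic ISA IRQ assignments; fixed by convention, which is
     * why emulated legacy devices keep them. */
    enum isa_irq {
        IRQ_PIT           = 0,   /* system timer: frequent and critical  */
        IRQ_KEYBOARD      = 1,
        IRQ_CASCADE       = 2,   /* slave 8259A (IRQ8-15) cascades here  */
        IRQ_COM2          = 3,
        IRQ_COM1          = 4,
        IRQ_LPT2          = 5,
        IRQ_FLOPPY        = 6,
        IRQ_LPT1          = 7,
        IRQ_RTC           = 8,
        IRQ_PS2_MOUSE     = 12,
        IRQ_FPU           = 13,
        IRQ_ATA_PRIMARY   = 14,  /* a device impersonating a legacy HDD lands here */
        IRQ_ATA_SECONDARY = 15,
    };

Note that IRQ8-15 arrive through the cascade on IRQ2, so their effective priority sits between IRQ1 and IRQ3.
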

A different topic is the PCI interrupts (PCIe deprecated the INTx# lines in favour of MSI) for non-legacy devices (e.g. a NIC).
Those were (and are) the real programmable IRQs: each PCI-to-PCI bridge remaps its four PIRQA-PIRQD input pins to its four INTA#-INTD# output pins (which are connected to the bridge's parent PIRQA-PIRQD pins in a tangled fashion).
The Host-to-PCI bridge's INTA#-INTD# connect (conceptually) to the 8259A and the IO APIC.
The mapping is configurable with some chipset registers (e.g. see Chapter 29 of the Intel 200 Series PCH datasheet, Volume 2).
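
The "tangled fashion" follows a well-known pattern, the so-called INTx swizzle: each bridge rotates a device's pin by the device number. A sketch (0 = INTA# ... 3 = INTD#; the devnums array describing the path up to the host bridge is a hypothetical parameter of mine):

    /* Conventional PCI-to-PCI bridge INTx swizzle: the pin observed
     * upstream is the device's pin rotated by its device number,
     * applied once per bridge crossed on the way to the root. */
    static unsigned pci_swizzle_to_root(unsigned pin,            /* 0=INTA# .. 3=INTD# */
                                        const unsigned *devnums, /* device number at   */
                                        unsigned depth)          /* each bridge level  */
    {
        for (unsigned level = 0; level < depth; level++)
            pin = (pin + devnums[level]) % 4;
        return pin;  /* chipset registers then map this PIRQx to an IRQ */
    }
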

So the firmware is free to remap at least the PCI interrupts for non-legacy devices. I think the algorithm used is simply to assign the lowest free IRQ to the most "important" device.
However, as said above, as soon as the OS switches away from 8259A mode these priorities stop mattering.
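
If the firmware really does work that way (my guess above, not a documented algorithm), the policy would be as trivial as this sketch: enumerate devices from most to least important and hand each one the lowest IRQ still free:

    #include <stdint.h>

    /* Hypothetical lowest-free-IRQ policy; bit i of *used = IRQ i taken.
     * Call once per device, in descending order of "importance". */
    static int assign_lowest_free_irq(uint16_t *used)
    {
        for (int irq = 0; irq < 16; irq++) {
            if (!(*used & (1u << irq))) {
                *used |= (uint16_t)(1u << irq);
                return irq;  /* lowest free = highest 8259A priority */
            }
        }
        return -1;           /* no free line */
    }
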

Margaret Bloom
  • 41,768
  • 5
  • 78
  • 124
  • 1
  • Each device has an interrupt pin (hardwired to INTA on PCI), or for a PCIe device that doesn't use MSI, the interrupt pin field tells the device what INTx message to send (there are also chipset config registers that alias to the built-in device pin field). Each device (including a PCIe-to-PCI bridge) has a PIR route control mapping INTA-D to PIRQA-H in the chipset registers, and then there is a global PIRQx route control set of chipset registers to map PIRQA-H to IRQs. I think 8259 mode uses the same IRQs. I made an error in the MMIO answer which I fixed. I'll address your comment when I'm back in the area. – Lewis Kelsey Apr 15 '20 at 17:48
  • Even though it's hardwired to the INTA pin, the wire is braided on PCI, so it will end up being a different INT wire depending on the slot: https://electronics.stackexchange.com/questions/76867/what-do-the-different-interrupts-in-pcie-do-i-referring-to-msi-msi-x-and-intx – Lewis Kelsey Apr 15 '20 at 17:52
  • Apparently on the PIIX3, the USB controller is hardwired to PIRQD# internally, which can be routed to an IRQ using a register. It seems the external INTx braid is hardwired to the PIIX3's PIRQx pins. – Lewis Kelsey Apr 16 '20 at 12:08
  • @LewisKelsey Thanks for the info Lewis. I have written a few (private) notes that cover how IRQs work with PCI, but I don't remember how up to date this answer is. Right now I don't feel like editing it; if you want to, go ahead :) – Margaret Bloom Apr 16 '20 at 14:59
  • I've been digging for block diagrams of certain CPUs to clear some stuff up, because there was sort of a murky time between the loss of the FSB and the SnB-style ring bus. I've got proper diagrams of Nehalem-EX/EP and Westmere-EX/EP now, and not those silly conceptual ones showing an L3 cache connected on bars to CPUs. I'm still trying to find out whether Arrandale has a global queue as well or whether it's something else, because it's got that QPI link inside the CPU as a relic of the northbridge, which seems very weird, so it might not be the same as EP, which uses a global queue. – Lewis Kelsey Apr 19 '20 at 19:42
  • Both Nehalem-EX and Westmere-EX have that weird Rbox+Bbox, though, with the 'caching agents' seemingly separate from the HA on the ring. I've yet to find a Lynnfield or a Clarksfield diagram. The schematics don't help, obviously, but neither do the official datasheets. I'm going to upload this to my website just to get it out there so other people don't have to go through the ordeal. – Lewis Kelsey Apr 19 '20 at 19:46
  • @LewisKelsey Seems interesting (maybe not PCI related, but interesting). Do you mind sharing the address of your website? :) – Margaret Bloom Apr 20 '20 at 07:04
  • I'll tell you when it's got a decent amount of stuff on it, but it's not ready yet. I wouldn't mind sharing knowledge with someone who is interested in Intel chipsets/CPUs/bus specifications; I just don't really know how to initiate it on SO without having some inappropriate discussion in the comments. I created a group: https://chat.stackoverflow.com/rooms/info/212214/intel-cpus-and-chipsets – Lewis Kelsey Apr 22 '20 at 00:10