
Alright folks,

I'm attempting to cut my teeth on ARM assembly, but I can't quite grasp the concept of interrupts. Prior to this, I've worked with AVR interrupts, where the code explicitly links the interrupt vector to the subroutine or segment being jumped to:

.ORG $0022                     ; the timer's interrupt vector address
     RJMP TIMER_INTERRUPT      ; jump to the handler routine

However, in working with the Atmel SAMA5D3 series, I can't figure out how to accomplish something similar. I understand that the eight ARM exception vectors occupy the first eight word addresses (reset at 0x00 through FIQ at 0x1C), but I haven't been able to find a good, no-nonsense, easy-to-understand resource on how the individual peripheral interrupts (SPI, UART, TC modules) map onto that standardized ARM vector table.
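
For reference, the layout I have in mind is the standard ARM exception vector table, something like the following (the handler names are just placeholders):

.org 0x00
  b    reset_handler       @ 0x00: reset
  b    undef_handler       @ 0x04: undefined instruction
  b    swi_handler         @ 0x08: software interrupt (SWI)
  b    pabt_handler        @ 0x0C: prefetch abort
  b    dabt_handler        @ 0x10: data abort
  nop                      @ 0x14: reserved
  b    irq_handler         @ 0x18: IRQ
  b    fiq_handler         @ 0x1C: FIQ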

That is, suppose I configure the UART to raise interrupts. What happens when it does? Does the IRQ entry at 0x18 contain the address of a table of the device-specific vectors? How do the vendor- and device-specific peripherals assert interrupts through the core's standardized table?

I've consulted the SAMA5D3 datasheet as well as the ARM ARM, and I haven't found much that makes this any easier for a beginner. My apologies if anything is unclear, and shame on the world for eschewing assembly.

EDIT:

The SAMA5D3 datasheet, page 118, states the following:

It is assumed that:

  1. The Advanced Interrupt Controller has been programmed, AIC_SVR registers are loaded with corresponding interrupt service routine addresses and interrupts are enabled.
  2. The instruction at the ARM interrupt exception vector address is required to work with the vectoring:

LDR PC, [PC, # -&F20]

When nIRQ is asserted, if the bit "I" of CPSR is 0, the sequence is as follows:

  1. The CPSR is stored in SPSR_irq, the current value of the Program Counter is loaded in the Interrupt link register (R14_irq) and the Program Counter (R15) is loaded with 0x18. In the following cycle during fetch at address 0x1C, the ARM core adjusts R14_irq, decrementing it by four.
  2. The ARM core enters Interrupt mode, if it has not already done so.
  3. When the instruction loaded at address 0x18 is executed, the program counter is loaded with the value read in AIC_IVR.

This is understandable. From reading this and looking at the register descriptions, it sounds like I put the peripheral interrupt source number (e.g. SPI0/1) into the INTSEL field of AIC_SSR and then store the address of the ISR into AIC_SVR.

There are two problems. The first is that I don't understand the vectoring instruction (LDR PC, [PC, # -&F20]) that needs to be placed at the IRQ vector address (0x18).

The second is: how does one enable multiple interrupts? There is only one INTSEL field and one source vector register, so how would one run multiple interrupt handlers on this system?

My continued apologies, folks, if I'm not getting this, but I at least try to do my homework on it.

  • On the ARM devices I've worked with where I did any kind of interrupt handling, you'd get an IRQ and then a platform-specific memory-mapped I/O register would contain flags that indicated the reason for the IRQ (timer overflow, DMA transfer completion, etc). – Michael Sep 17 '14 at 05:43
  • Okay. In that case, I think I might have some idea how this is operating. I'll edit the info in from the datasheet, feel free to give it a look in a little while. – ecfedele Sep 17 '14 at 07:29

1 Answer


I'll admit I don't know this specific system beyond a quick skim of the linked documentation, but it seems straightforward enough. First, let's forget about the CPU and focus on the interrupt controller:

Rather than waste address space and complexity mapping separate control registers for every single interrupt source, the vectors are kept internal to the controller - instead you get a single set of registers which act as a window onto that internal data, plus a selector register which controls where that window is pointing. Thus the configuration sequence might go something like this:

  1. select interrupt 2 by writing 2 to AIC_SSR
  2. write interrupt 2's configuration to AIC_SMR and handler address to AIC_SVR
  3. now select interrupt 3 by writing 3 to AIC_SSR
  4. write interrupt 3's configuration to AIC_SMR and handler address to AIC_SVR
  5. etc...
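
In code, and assuming GNU assembler syntax, that sequence might look roughly like the sketch below. The base address and register offsets are placeholders I haven't verified - take the real values from the AIC register map in the datasheet - and the handler symbols are hypothetical:

  .equ AIC_BASE, 0xFFFFF000    @ placeholder base address - check the SAMA5D3 memory map
  .equ AIC_SSR,  0x00          @ assumed offset: Source Select Register (INTSEL field)
  .equ AIC_SMR,  0x04          @ assumed offset: Source Mode Register (priority/trigger)
  .equ AIC_SVR,  0x08          @ assumed offset: Source Vector Register (handler address)

configure_aic:
  ldr   r3, =AIC_BASE
  @ --- interrupt source 2 ---
  mov   r0, #2
  str   r0, [r3, #AIC_SSR]     @ point the register window at source 2
  mov   r0, #0                 @ placeholder mode value (lowest priority, default trigger)
  str   r0, [r3, #AIC_SMR]     @ source 2's mode/priority
  ldr   r0, =source2_handler
  str   r0, [r3, #AIC_SVR]     @ source 2's handler address
  @ --- interrupt source 3: same three registers, new selection ---
  mov   r0, #3
  str   r0, [r3, #AIC_SSR]     @ the window now shows source 3's slots
  mov   r0, #0
  str   r0, [r3, #AIC_SMR]
  ldr   r0, =source3_handler
  str   r0, [r3, #AIC_SVR]
  bx    lr                     @ enabling each source is a separate step - see the datasheet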

AIC_IVR works similarly as a window, except instead of a programmable selector, it's controlled by the current active interrupt. Thus e.g. when interrupt 3 fires, AIC_IVR will read as the address we programmed in step 4 above.

Now, the CPU side is rather more straightforward. Note that the layout of the ARM exception vectors means that each 'handler' there is only a single instruction, so they are almost always branches to full-blown handler routines elsewhere. There is a single handler for each type of exception, so when IRQ is asserted the CPU just jumps to the 'IRQ' address 0x18. This is where the external interrupt vectoring comes into play - because the interrupt controller already has all the details, the CPU doesn't need a top-level IRQ handler to work out which interrupt this IRQ is for and what to do; it just needs to load whatever address is showing in AIC_IVR and jump to it, and it'll be in the correct interrupt-specific handler.

This means the 'IRQ handler' can be reduced to the magic instruction - the 'magic' being that the AIC_IVR address (0xFFFFF100) is close enough to the CPU's exception vector for a PC-relative load (thanks to the address wraparound), thus an LDR targeting the PC can perform both the address load and the branch from the single exception vector instruction itself, minimising latency.
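
To make the arithmetic concrete: the '&F20' in the datasheet's instruction is just old-style ARM assembler notation for hex 0xF20, and the offset works because the PC reads 8 bytes ahead of the instruction being executed:

@ the vectoring instruction sits at the IRQ exception vector, address 0x18
@ when it executes, PC reads as 0x18 + 8 = 0x20
@ 0x20 - 0xF20 wraps around (mod 2^32) to 0xFFFFF100, i.e. AIC_IVR
  ldr   pc, [pc, #-0xF20]      @ pc := *(0xFFFFF100) - load and branch in one instruction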

Of course, you might not have to do it exactly that way - it's just the best option. I'd imagine something like this pointlessly long-winded equivalent should probably work too:

/* exception vectors */
.org 0
  b    start               @ 0x00: reset
  ...
  b    handle_irq          @ 0x18: IRQ
  ...

handle_irq:
  push {r0}                @ save a scratch register
  ldr  r0, =0xFFFFF100     @ address of AIC_IVR (via a literal pool)
  ldr  r0, [r0]            @ read the interrupt-specific handler address
  push {r0}                @ stack now holds: handler address, saved r0
  ldr  r0, [sp, #4]        @ restore the original r0
  ldr  pc, [sp], #8        @ jump to the handler, discarding both stack words
Notlikethat