
From what I can gather:

  • NT's KeAcquireSpinLock is equivalent to spin_lock_bh: the former raises IRQL to DISPATCH_LEVEL, the latter masks bottom-half interrupts -- functionally the same. But while the NT variant hands the caller the OldIrql, the Linux variant doesn't seem to store "wereInterruptsAlreadyMasked" anywhere. Does this mean spin_unlock_bh always unmasks them?
  • NT's KeAcquireInterruptSpinLock is like spin_lock_irqsave.

What is the NT equivalent of spin_lock?

If spin_unlock_bh always unmasks interrupts (in NT-speak, always drops IRQL below DISPATCH_LEVEL), does that mean spin_lock is akin to KeAcquireSpinLockAtDpcLevel?

Ilya
  • As you have pointed out, Windows' locking is achieved through IRQL tinkering. So if this concept is not present on Linux, then by that logic spin_lock indeed doesn't have a direct equivalent, since the two OSes use slightly different mechanisms to achieve the same thing – LordDoskias Oct 15 '11 at 01:41
  • IRQL is an abstraction on top of interrupt-masking, and Linux obviously has that. What I'm wondering is what scenario spin_lock is useful for, and why didn't the NT folks support this scenario? – Ilya Oct 15 '11 at 02:39
  • Also, I revised my question following some more things I understood re: spin_lock_bh. – Ilya Oct 15 '11 at 02:51
  • You've got a typo in the last sentence, I can't tell what you meant to say. – Harry Johnston Oct 16 '11 at 00:35

1 Answer


The raw spin_lock can be used when you know no interrupts or bottom-halves will ever contend for the lock. By avoiding interrupt masking, you keep interrupt latency down, while still avoiding the overhead of a mutex for critical sections short enough to spin on.

In practice, they seem to be used primarily by things like filesystem drivers, for locking internal cache structures and other data where there is never a need to block on IO while holding the lock. Since bottom halves and device interrupts never touch the FS driver directly, there's no need to mask interrupts.

I suspect the closest Windows analogue would be a CRITICAL_SECTION, or whatever the NT kernel API equivalent is; however, unlike an NT critical section, Linux spinlocks do not fall back to a mutex when contended; they just keep spinning.

And, yes, spin_unlock_bh unconditionally re-enables bottom halves. You can either keep track of when to enable/disable manually (since you should generally release locks in the opposite order of acquisition, this usually isn't a problem), or just resort to spin_lock_irqsave.

bdonlan
  • spin_lock still calls preempt_disable. You're saying preempt_disable just marks a flag and doesn't disable interrupts -- specifically bottom-half interrupts (DPCs in NT-speak?) and device interrupts? – Ilya Oct 16 '11 at 11:40
  • On NT, all functions that "acquire" a spinlock raise the IRQL to DISPATCH_LEVEL, meaning a) no preemption, and b) DPCs (bottom-half routines) will not be executed. Does this disable interrupts per se, though? Wouldn't the clock interrupt keep hitting but simply do nothing, because the IRQL is high enough? – Ilya Oct 16 '11 at 11:56
  • Also on Linux, I can see that local_bh_disable eventually just calls something called add_preempt_count, which is the same function called by preempt_disable. So what differentiates local_bh_disable and preempt_disable? – Ilya Oct 16 '11 at 12:02
  • `preempt_disable` doesn't prevent bottom halves from running, it just prevents context switches. Bottom halves can run (when not blocked) whenever an interrupt handler returns (except when returning from a nested interrupt) – bdonlan Oct 16 '11 at 12:59
  • For the details, see the source: http://lxr.linux.no/linux+v3.0.4/kernel/softirq.c basically, IRQ exit, or bh re-enable – bdonlan Oct 16 '11 at 14:43
  • Strictly speaking, DISPATCH_LEVEL does not guarantee no preemption: you can still be interrupted by a hardware interrupt at a higher IRQL. – Thomas Kejser Jan 18 '13 at 11:23