I'm writing code that rarely creates and removes objects (up to several thousand) but modifies them very frequently in soft IRQ context. These objects are also rarely read (and will probably also be rarely modified) from task context (via procfs: one file per object). Currently my code contains global per-CPU data blocks, each guarded by a spinlock. Each block contains a fixed-size hashtable for object storage.
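
Roughly, the current layout looks like this (all identifiers here are made up for illustration):

    /* Sketch of the current design: one spinlock guards a whole
     * per-CPU block; identifiers are made up for illustration. */
    #include <linux/hashtable.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    #define OBJ_HASH_BITS 10

    struct obj {
        u64 key;
        struct hlist_node node;
        /* ...fields updated from soft IRQ context... */
    };

    struct obj_block {
        spinlock_t lock;                         /* guards the whole table */
        DECLARE_HASHTABLE(table, OBJ_HASH_BITS);
    };

    static DEFINE_PER_CPU(struct obj_block, obj_blocks);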

Obviously the current design is not optimal, especially under very high object-update loads: reading objects from procfs causes the updating soft IRQs to drop data. I need to rewrite the synchronisation scheme to get rid of the global locks. The most obvious choice, a spinlock for each hashtable bucket, should scale well. The problem is that I'll probably need my own hashtable implementation, or at least to reimplement several top-level macros (I didn't find any in linux/hashtable.h for spinlock-protected buckets). Should I also look towards an RCU-enabled hashtable (though I have no solid understanding of that synchronisation approach)?
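
The per-bucket variant I have in mind would be a hand-rolled table along these lines (again, made-up identifiers, since linux/hashtable.h offers no such flavour):

    /* Hand-rolled table with a spinlock per bucket; struct obj is
     * the same made-up object as in the sketch above. */
    struct obj_bucket {
        spinlock_t lock;               /* guards only this bucket's chain */
        struct hlist_head head;
    };

    static struct obj_bucket obj_table[1 << OBJ_HASH_BITS];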

ababo

1 Answer

Buckets with lock protection are declared in the header linux/list_bl.h. They use the lowest bit of the head pointer as a lock bit.
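
For example, an insert under such a bit lock could look like this (a minimal sketch: obj_bl_table, struct obj and its bl_node member are assumptions for illustration; only the hlist_bl_* calls are the actual API):

    #include <linux/list_bl.h>
    #include <linux/types.h>

    /* Assumed object type; only the hlist_bl_* calls below are
     * the real list_bl.h API. */
    struct obj {
        u64 key;
        struct hlist_bl_node bl_node;
    };

    #define OBJ_HASH_BITS 10
    static struct hlist_bl_head obj_bl_table[1 << OBJ_HASH_BITS];

    static void obj_bl_insert(struct obj *o, u32 hash)
    {
        struct hlist_bl_head *head =
            &obj_bl_table[hash & ((1U << OBJ_HASH_BITS) - 1)];

        hlist_bl_lock(head);   /* spins on the lowest bit of the head pointer */
        hlist_bl_add_head(&o->bl_node, head);
        hlist_bl_unlock(head);
    }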

RCU-protected access to the buckets is defined alongside the other hash table functions in the header linux/hashtable.h (they have the _rcu suffix).
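
A lookup on the reader side would then be wrapped in an RCU read-side critical section, e.g. (a sketch with assumed names; writers still need their own lock):

    #include <linux/hashtable.h>
    #include <linux/rcupdate.h>

    /* Assumed table and object layout: struct obj as in the question,
     * plus an assumed u64 value field. Writers must still serialise
     * among themselves (e.g. with a per-bucket or global lock). */
    static DEFINE_HASHTABLE(obj_rcu_table, 10);

    static bool obj_lookup_rcu(u64 key, u64 *value)
    {
        struct obj *o;
        bool found = false;

        rcu_read_lock();
        hash_for_each_possible_rcu(obj_rcu_table, o, node, key) {
            if (o->key == key) {
                *value = o->value; /* copy out while still inside the RCU section */
                found = true;
                break;
            }
        }
        rcu_read_unlock();
        return found;
    }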

Choosing between locks and RCU is up to you. Note that RCU by itself cannot resolve modify-modify conflicts, and it helps mostly with frequently-read data, which does not seem to be your case.


As only one locking function - hlist_bl_lock - is declared for struct hlist_bl_head, and this function is unaware of IRQs, additional actions should be performed when the hash table can be used in IRQ or bottom-half context (a combined sketch follows the list):

  • equivalent of spin_lock_irqsave:

    local_irq_save(flags);
    hlist_bl_lock(...);

  • equivalent of spin_unlock_irqrestore:

    hlist_bl_unlock(...);
    local_irq_restore(flags);

  • equivalent of spin_lock_bh:

    local_bh_disable();
    hlist_bl_lock(...);

  • equivalent of spin_unlock_bh:

    hlist_bl_unlock(...);
    local_bh_enable();
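
Putting the bottom-half pair together, lock/unlock helpers for a table shared between soft IRQs and task context could look like this (a sketch; the obj_bucket_* names are mine, not kernel API):

    static void obj_bucket_lock_bh(struct hlist_bl_head *head)
    {
        local_bh_disable();    /* keep soft IRQs on this CPU from preempting us */
        hlist_bl_lock(head);
    }

    static void obj_bucket_unlock_bh(struct hlist_bl_head *head)
    {
        hlist_bl_unlock(head);
        local_bh_enable();
    }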
    
Tsyvarev
  • How significant is the performance degradation when using bit spinlocks compared to normal ones? Maybe I should prefer normal spinlocks, since I don't need too many buckets? – ababo Jan 16 '17 at 05:01
  • As far as I understand, you wouldn't degrade performance. On the contrary, you may improve performance because of the reduced *cache footprint* when accessing buckets. Using a single bit for the lock loses some *lock-debugging features*: at least, the *lock map* would be unaware of your locks, since it needs additional bytes in the lock object. But as long as you implement locking correctly, debugging your locks shouldn't bother you. **In summary**: unless you use locks in a very unusual way, bit locks are a good choice both for performance and for code readability. – Tsyvarev Jan 16 '17 at 07:20
  • I hope it's safe to use bit locks for sharing data between soft IRQs and kernel threads (since there's no `bit_spin_lock_bh`)? – ababo Feb 11 '17 at 18:40
  • 1
    Nice catch! You may use bit locks with IRQs, but you need to call additional functions. I have edited the answer post for that. – Tsyvarev Feb 13 '17 at 07:33