
I am implementing a web-app that holds some data in memory. Some requests read this data for processing and some requests update this data.

In this scenario, multiple readers can concurrently operate on the data, but a writer needs exclusive access to the data in memory. I want to implement a reader-writer lock to solve this problem. I also want the fairness property that waiters on the lock are processed in FIFO order to avoid read and write starvation.

The Haskell standard libraries do not appear to provide such functionality. I found that concurrent-extra provides it, but the library seems to be unmaintained (it was removed from Stackage after LTS 3.22), and its fairness properties are not clear to me.

I find it a bit surprising that there is no reader-writer lock library among the standard Haskell libraries or on Stackage - isn't the reader-writer pattern common in a lot of software? Or is there a completely different (perhaps lock-free) approach that is preferred in Haskell?

EDIT: More precisely on the fairness property: when a writer is blocked waiting to acquire the lock, subsequent read-lock requests should be granted only after that writer has acquired and released the write-lock - similar to the fairness property of MVars, which serve blocked threads in FIFO order.

donatello
  • Why not `Control.Concurrent.MVar` (using `takeMVar`)? – josejuan May 19 '16 at 09:12
  • Multiple readers cannot concurrently operate if they all have to take an MVar. In my scenario, multiple readers can work on the data, but a writer needs exclusive access. – donatello May 19 '16 at 09:14
  • I guess that since we have STM and `TVar`s now, some simpler forms of locking were no longer needed. You might either use `TVar`s and STM, or implement your locking on top of `MVar`s. (The latter looks non-trivial.) – chi May 19 '16 at 09:16
  • @donatello "that holds some data in memory" - I assumed your "in memory data" is immutable, in which case `MVar` actions are atomic; is that wrong? – josejuan May 19 '16 at 10:20
  • The shared data can be updated by some kinds of requests to the web-app (e.g. an in-memory cache). The problem with `MVar` is that only one thread may `takeMVar`, and until it has `putMVar`ed, no other thread may access the data in the `MVar`. This is not the desired behaviour when I want concurrent readers not to block each other. – donatello May 19 '16 at 11:12
  • Could you please elaborate on the "fairness" property? Usually starvation is prevented by a "write-biased" implementation, like this one: https://github.com/Yuras/qase/blob/master/lib/RWLock.hs Your fairness requirement looks different to me. – Yuras May 19 '16 at 11:29
  • @Yuras - My fairness requirement: when a writer is blocked waiting to acquire the lock, readers that subsequently request the lock should wait behind the writer (they should not be able to get the lock before the writer does - this is similar to what MVars provide - FIFO). Also, I am looking at your implementation and trying to understand it. – donatello May 19 '16 at 11:40
  • "no other thread may access the data", yes, they can, use `readMVar` (atomic operation) for read and `takeMVar` for write locks (next readers will wait to final write operation). I think you should clarify your read/write relation (e.g. writer cannot free "updated" data until all previous readers use the readed state) but I think `MVar` match perfectly with your last EDIT. – josejuan May 19 '16 at 18:57
  • @josejuan - I understand now. Thank you. If you propose this as an answer I will accept it. – donatello May 20 '16 at 09:23

3 Answers


A reader-writer lock is easy to implement on top of STM.

import Control.Concurrent.STM

data LockState = Writing | Reading Int
type Lock = TVar LockState

-- Retry (block) while a writer holds the lock; otherwise bump the reader count.
startReading :: Lock -> STM ()
startReading lock = do
   s <- readTVar lock
   case s of
      Writing -> retry
      Reading n -> writeTVar lock (Reading (succ n))

stopReading :: Lock -> STM ()
stopReading lock = do
   s <- readTVar lock
   case s of
      Writing -> error "stopReading: lock in writing state?!"
      Reading n -> writeTVar lock (Reading (pred n))

-- Retry until no readers remain, then take exclusive ownership.
startWriting :: Lock -> STM ()
startWriting lock = do
   s <- readTVar lock
   case s of
      Reading 0 -> writeTVar lock Writing
      _         -> retry

stopWriting :: Lock -> STM ()
stopWriting lock = do
   s <- readTVar lock
   case s of
      Writing -> writeTVar lock (Reading 0)
      _       -> error "stopWriting: lock in non-writing state?!"
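
To use these from IO, each operation is run with atomically; for example, one might add bracketed helpers along these lines (withReadLock and withWriteLock are names I am introducing, not part of the answer's code):

import Control.Exception (bracket_)

-- Run an action while holding the read (resp. write) lock,
-- releasing it even if the action throws an exception.
withReadLock :: Lock -> IO a -> IO a
withReadLock lock = bracket_ (atomically (startReading lock))
                             (atomically (stopReading lock))

withWriteLock :: Lock -> IO a -> IO a
withWriteLock lock = bracket_ (atomically (startWriting lock))
                              (atomically (stopWriting lock))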

My main concerns with the above are that 1) it looks like a bit of overkill to me, and 2) we still have no way to guarantee fairness (liveness) in STM.

I guess one could implement a similar library on top of MVars, though that would be more complex, especially if we want to guarantee fairness.

I would be tempted to avoid MVars and use semaphores (QSem) instead, which guarantee FIFO semantics. Using those, one can implement readers/writers Dijkstra-style, as sketched below.
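
For concreteness, here is a minimal sketch of that Dijkstra/Courtois-style construction on top of QSem (RWLock, withRead and withWrite are hypothetical names). Note that this classic variant is reader-preferring: on its own it still does not provide the FIFO fairness the question asks for, since a steady stream of readers can starve a writer.

import Control.Concurrent.MVar
import Control.Concurrent.QSem
import Control.Exception (bracket_)
import Control.Monad (when)

data RWLock = RWLock
  { readCount :: MVar Int  -- number of active readers
  , resource  :: QSem      -- binary semaphore guarding the shared resource
  }

newRWLock :: IO RWLock
newRWLock = RWLock <$> newMVar 0 <*> newQSem 1

withRead :: RWLock -> IO a -> IO a
withRead (RWLock cnt res) = bracket_ acquire release
  where
    acquire = modifyMVar_ cnt $ \n -> do
      when (n == 0) (waitQSem res)    -- first reader locks out writers
      return (n + 1)
    release = modifyMVar_ cnt $ \n -> do
      when (n == 1) (signalQSem res)  -- last reader lets writers back in
      return (n - 1)

withWrite :: RWLock -> IO a -> IO a
withWrite (RWLock _ res) = bracket_ (waitQSem res) (signalQSem res)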

chi
  • Yep, there is no fairness guarantee with this, but thank you for the implementation. I am less unfamiliar with `TVar`s now. – donatello May 19 '16 at 11:09

Indeed concurrent-extra doesn't provide fairness.

As chi wrote, there is no way to guarantee fairness in STM. But we can do it in IO using STM. The idea is to add another state to chi's LockState, one that indicates that readers can't acquire the lock:

data LockState = Writing | Reading Int | Waiting

Then the writer should first set the state to Waiting and only then wait for all readers to release the lock. Note that the waiting has to be performed in a separate STM transaction - that is why we can't guarantee fairness within STM itself.

Here is an example implementation. It is not on Hackage, but you can vendor it (it is BSD-licensed).

The implementation is optimized to minimize wake-ups. With a single TVar, when the lock is in the Waiting state, each reader release unnecessarily wakes up all readers waiting to acquire the lock. So I use two TVars: one for the lock state and the other for the number of readers.
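
To illustrate the idea, here is a minimal sketch of such a two-TVar, two-transaction scheme (the names are mine, and the linked implementation differs in detail; with a separate reader counter, the Reading constructor no longer needs an Int argument). Full FIFO ordering among several competing writers would need extra queueing on top of this.

import Control.Concurrent.STM

data LockState = Reading | Waiting | Writing

data RWLock = RWLock
  { lockState :: TVar LockState  -- coarse state of the lock
  , readers   :: TVar Int        -- number of readers currently inside
  }

acquireRead :: RWLock -> IO ()
acquireRead (RWLock st n) = atomically $ do
  s <- readTVar st
  case s of
    Reading -> modifyTVar' n (+ 1)  -- readers admitted only in Reading state
    _       -> retry                -- a waiting or active writer blocks us

releaseRead :: RWLock -> IO ()
releaseRead (RWLock _ n) = atomically $ modifyTVar' n (subtract 1)

acquireWrite :: RWLock -> IO ()
acquireWrite (RWLock st n) = do
  -- Transaction 1: announce intent, so no new reader can enter.
  atomically $ do
    s <- readTVar st
    case s of
      Reading -> writeTVar st Waiting
      _       -> retry
  -- Transaction 2: wait for the existing readers to drain.
  -- This only touches the counter, so it wakes only on reader releases.
  atomically $ do
    k <- readTVar n
    if k == 0 then writeTVar st Writing else retry

releaseWrite :: RWLock -> IO ()
releaseWrite (RWLock st _) = atomically $ writeTVar st Reading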

ADDED: Here is an interesting (and pretty long) discussion I had with IRC user Cale about the pitfalls of read-write lock implementations.

Yuras

The best solution depends on the readers/writers relation, but I think you can solve your problem using only MVar.

Let

import System.Clock
import Text.Printf
import Control.Monad
import Control.Concurrent
import Control.Concurrent.MVar

-- Log a message together with a seconds.nanoseconds timestamp and a thread id.
t__ :: Int -> String -> IO ()
t__ id msg = do
    TimeSpec s n <- getTime Realtime
    putStrLn $ printf "%3d.%-3d - %d %s" (s `mod` 1000) n id msg

-- readMVar leaves the MVar full, so readers never block each other.
reader :: MVar [Int] -> Int -> IO ()
reader mv id = do
    t__                            id $ "reader waiting"
    xs <- readMVar mv
    t__                            id $ "reader working begin"
    threadDelay (1 * 10^6)
    t__                            id $ "reader working ends, " ++ show (length xs) ++ " items"

-- takeMVar empties the MVar, so later readers and writers block until putMVar.
writer :: MVar [Int] -> Int -> IO ()
writer mv id = do
    t__                            id $ "WRITER waiting"
    xs <- takeMVar mv
    t__                            id $ "WRITER working begin"
    threadDelay (3 * 10^6)
    t__                            id $ "WRITER working ends, " ++ show (1 + length xs) ++ " items"
    putMVar mv (id : xs)

main = do

    mv <- newMVar []
    -- Spawn 10 threads, three readers for every writer, 0.1 s apart.
    forM_ (take 10 $ zipWith (\f id -> forkIO (f mv id)) (cycle [reader, reader, reader, writer]) [1..]) $ \p -> do
        threadDelay (10^5)
        p

    getLine  -- keep the process alive until Enter is pressed

with output

c:\tmp>mvar.exe +RTS -N20
486.306991300 - 1 reader waiting
486.306991300 - 1 reader working begin
486.416036100 - 2 reader waiting
486.416036100 - 2 reader working begin
486.525191000 - 3 reader waiting
486.525191000 - 3 reader working begin
486.634286500 - 4 WRITER waiting
486.634286500 - 4 WRITER working begin
486.743378400 - 5 reader waiting
486.852406800 - 6 reader waiting
486.961564300 - 7 reader waiting
487.070645900 - 8 WRITER waiting
487.179673900 - 9 reader waiting
487.288845100 - 10 reader waiting
487.320003300 - 1 reader working ends, 0 items
487.429028600 - 2 reader working ends, 0 items
487.538202000 - 3 reader working ends, 0 items
489.642147400 - 10 reader working begin
489.642147400 - 4 WRITER working ends, 1 items
489.642147400 - 5 reader working begin
489.642147400 - 6 reader working begin
489.642147400 - 7 reader working begin
489.642147400 - 8 WRITER working begin
489.642147400 - 9 reader working begin
490.655157400 - 10 reader working ends, 1 items
490.670730800 - 6 reader working ends, 1 items
490.670730800 - 7 reader working ends, 1 items
490.670730800 - 9 reader working ends, 1 items
490.686247400 - 5 reader working ends, 1 items
492.681178800 - 8 WRITER working ends, 2 items

Readers 1, 2 and 3 run simultaneously; once "4 WRITER working begin" appears, subsequent requests wait for the writer, but readers 1, 2 and 3 can still finish.

(The stdout ordering and the order in which threads enter the FIFO are not exact in this example, but the number of items read or written shows the real order.)

josejuan