
I'm trying to develop an application that consists of a pool of threads, using a work-stealing algorithm to concurrently execute tasks.

These tasks

  • access a predefined set of objects;
  • must "atomicly" acquire read/write permissions on all objects it accesses before actually running;
  • upon finishing (and are guaranteed to eventually finish) release the objects they acquire.
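
For concreteness, here is a minimal sketch of this task model in Java (the language choice, the ManagedObject/Task names, and the per-object ReentrantReadWriteLock are only assumptions for illustration):

```java
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical shared object, guarded by its own read/write lock.
// The id gives the objects a total order, used for the lock ordering discussed below.
class ManagedObject {
    final long id;
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    ManagedObject(long id) { this.id = id; }
}

// A task declares up front which objects it reads and which it writes.
interface Task extends Runnable {
    Set<ManagedObject> readSet();
    Set<ManagedObject> writeSet();
}
```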

One possible way to solve this problem is to have each thread pick up one task at a time, then try to lock each of the task's objects in a predefined order. If at least one lock fails, release all the locks already taken and proceed with another task.
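
A rough sketch of that try-in-order scheme, reusing the hypothetical ManagedObject/Task model above (objects are ordered by their id; an object in both the read and write set gets the write lock):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.stream.Collectors;
import java.util.stream.Stream;

final class OrderedTryLock {

    // Try to take every lock the task needs, in global id order.
    // Returns the held locks on success; on failure, releases whatever was taken and returns null.
    static List<Lock> tryAcquireAll(Task task) {
        List<ManagedObject> ordered = Stream
                .concat(task.readSet().stream(), task.writeSet().stream())
                .distinct()
                .sorted(Comparator.comparingLong((ManagedObject o) -> o.id))
                .collect(Collectors.toList());

        List<Lock> held = new ArrayList<>();
        for (ManagedObject o : ordered) {
            Lock l = task.writeSet().contains(o) ? o.lock.writeLock() : o.lock.readLock();
            if (!l.tryLock()) {        // contended: give everything back, caller picks another task
                releaseAll(held);
                return null;
            }
            held.add(l);
        }
        return held;
    }

    static void releaseAll(List<Lock> held) {
        // Release in reverse acquisition order.
        for (int i = held.size() - 1; i >= 0; i--) held.get(i).unlock();
    }
}
```

A worker thread would call tryAcquireAll(task), run the task if it gets a non-null list back, and call releaseAll in a finally block; otherwise it moves on to the next task in its deque.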

This method, however, increases the probability of starvation for tasks with large object dependencies, and may even lead to livelocks.

Is there another method to acquire a set of locks while maximizing concurrency (without a global lock)? Or perhaps a way to change the system so that this is no longer required? If so, any good papers about it?

PS: As thiton answered, this is a generalized version of the "dining philosophers" problem. I am looking for non-centralized solutions, in particular algorithms that fare well under high load (rapid addition and deletion of tasks).

João Rafael

3 Answers


Ordering resources is a valid approach. Another straightforward approach that comes to mind is to introduce a shared arbiter holding information about resource availability. Tasks would lock all the resources they need through the arbiter in a single atomic step "acquire(r1, r2, ..., rn)" and release them similarly with "release(r1, r2, ..., rn)".

If an "acquire" request A can be satisfied, the arbiter will make sure no other task may acquire any of the resources held by A until A releases them back.

The arbiter may use several strategies to satisfy incoming requests:

  1. Reject requests that can't be immediately satisfied - tasks will have to retry. This opens the door to livelocks and starvation.
  2. Keep all incoming requests in a queue and serve them in FIFO order as the resources needed by the request at the head become available.
  3. Keep all unsatisfied requests in a list (without blocking the resources they demand) and iterate through them (perhaps giving priority to older requests) each time some resources get released, to find a request that can be satisfied - see the sketch after this list.
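
A minimal sketch of variant 3, reusing the hypothetical ManagedObject/Task model sketched in the question; for brevity every acquisition is treated as exclusive (no read/write distinction), and the Arbiter class name is made up:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ExecutorService;

// Hypothetical arbiter: pending tasks wait in a queue and are re-examined
// whenever resources are released.
class Arbiter {
    private final Set<ManagedObject> busy = new HashSet<>();
    private final Queue<Task> pending = new ArrayDeque<>();
    private final ExecutorService pool;

    Arbiter(ExecutorService pool) { this.pool = pool; }

    // Submit a task; it runs as soon as all of its objects are free.
    synchronized void submit(Task task) {
        pending.add(task);
        dispatch();
    }

    // Called when a task finishes, to give its objects back.
    synchronized void release(Task task) {
        busy.removeAll(task.readSet());
        busy.removeAll(task.writeSet());
        dispatch();
    }

    // Scan pending tasks, oldest first, and start any whose objects are all free.
    private void dispatch() {
        for (Iterator<Task> it = pending.iterator(); it.hasNext(); ) {
            Task t = it.next();
            if (free(t)) {
                busy.addAll(t.readSet());
                busy.addAll(t.writeSet());
                it.remove();
                pool.submit(() -> { try { t.run(); } finally { release(t); } });
            }
        }
    }

    private boolean free(Task t) {
        for (ManagedObject o : t.readSet())  if (busy.contains(o)) return false;
        for (ManagedObject o : t.writeSet()) if (busy.contains(o)) return false;
        return true;
    }
}
```

Because submit and release are synchronized on a single arbiter, this is exactly the centralized bottleneck discussed in the comments below; it only pays off if tasks run long compared to the arbiter's bookkeeping.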
Vaclav Pech
  • It is a valid solution. However, since there is a single arbiter, all tasks must wait in the arbiter's lock queue, which causes contention. Furthermore, the work of deciding which tasks can run concurrently is done by a single thread, and that will put a cap on maximum concurrency. – João Rafael Aug 02 '11 at 16:43
  • Now, thinking about your very valid comment, I would probably design such an arbiter system so that the arbiter works with tasks rather than threads. The arbiter would only sort out resources for tasks, and blocking a task (a piece of work) while it waits for a resource is likely to have a much smaller impact on throughput than blocking threads. – Vaclav Pech Aug 02 '11 at 18:47
  • (continued) The arbiter would gradually read tasks, assign them requested resources and spit those that got all resources out into a thread pool for processing. Provided the tasks take some time to execute, the single-threaded arbiter may not become a bottleneck. – Vaclav Pech Aug 02 '11 at 18:54
  • I guess it all comes down to task size. As I envision the system, these tasks will be very small, perhaps on the order of a thousand CPU cycles. That's why I was asking for a decentralized algorithm. Maybe I'll implement both and see what works better. – João Rafael Aug 03 '11 at 00:05

If a task tries to lock objects only to fail at some point, it's possible that another task will fail to lock an object because the first task owns it at the time.

I think I would use a global lock when trying to acquire the locks initially and maybe also when releasing them all finally.
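
A minimal sketch of that idea, reusing the hypothetical ManagedObject/Task model sketched in the question: the global lock only serializes the acquisition phase, so once a task holds its objects it runs concurrently with others, and ordering no longer matters for deadlock avoidance because only one thread is ever acquiring at a time.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

final class GlobalLockAcquire {
    private static final Lock GLOBAL = new ReentrantLock();

    // Take every lock the task needs while holding the single global lock.
    // A thread may block here waiting for an object held by a running task,
    // but that task can still finish and release, so no cycle can form.
    static void acquireAll(Task task) {
        GLOBAL.lock();
        try {
            for (ManagedObject o : task.writeSet())
                o.lock.writeLock().lock();
            for (ManagedObject o : task.readSet())
                if (!task.writeSet().contains(o))   // objects in both sets are already covered by the write lock
                    o.lock.readLock().lock();
        } finally {
            GLOBAL.unlock();
        }
    }

    static void releaseAll(Task task) {
        for (ManagedObject o : task.writeSet()) o.lock.writeLock().unlock();
        for (ManagedObject o : task.readSet())
            if (!task.writeSet().contains(o)) o.lock.readLock().unlock();
    }
}
```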

I'd worry about maximum concurrency when a simple solution proves to be inadequate in practice.

MRAB
  • In theory, these tasks execute very fast, and the threads will spend most of their time trying to figure out which task to run. Using a global lock will make the problem worse, because the decision process will be made by a single thread instead of being distributed. – João Rafael Aug 02 '11 at 16:54

Your problem is called the Dining Philosophers. You should find all the literature you need under this keyword :-).

thiton
  • I remember now! More precisely, it's the "generalized dining philosophers" problem without the limit of 2 forks per philosopher. – João Rafael Aug 02 '11 at 16:50