
Is there any possible way to achieve this?

For instance, I have an I/O completion port that 10 worker threads are pulling tasks out of. Each task is associated with an object. Some objects cannot be worked on concurrently, so if one thread is working with one of these objects and a second thread pulls out a task that requires this object, the second thread has to wait for the first to complete.

As a workaround, objects could have an event that gets signaled upon release. If a thread is 'stuck' because the task it received requires a locked object, it could wait either for the locked object to be released or for a new task to be queued. If it picks up a new task, it would push the task it couldn't work on back into the queue.

I am aware of alternative approaches, but this seems like functionality that should exist. Can this be achieved with the Windows API?

Collin Dauphinee

2 Answers


Change your design.

Add an internal task queue to the object. When a task is posted to the IOCP, have the IOCP thread place the task in the object's task queue and, if no other thread is currently "processing" tasks for this object, mark the object as being processed and begin processing the task itself (lock the per-object queue, add the task, check whether we should become the processing thread, unlock the queue, then either process the task or return to the IOCP).

When another thread has a task for the same object, it goes through the same process. Note that the thread processing the object does NOT hold the lock on the object's task queue while it works, so the new IOCP thread can add its task to the object's queue, see that a thread is already processing, and simply return to the IOCP.

Once the thread has finished the current task it checks the object's task queue again and either continues processing the next task, or, if the queue is empty, marks the object as not processing and returns to the IOCP.

This prevents you from blocking IOCP threads on tasks which can't yet run, and it maintains locality of data to the thread that happens to be processing at the time.

The one potential issue is that some always-busy objects can starve others, but you can avoid this by checking how many tasks you have processed in a row and, if that exceeds a tunable maximum, pushing the next task back into the IOCP so that other objects get a chance.
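For concreteness, here is a minimal sketch of this per-object queue pattern. The `Task` type, `ProcessTask` handler, `kMaxTasksPerDrain` limit, and the convention of using the object's address as a "resume draining" completion key are all illustrative assumptions, not something the answer prescribes; the locking could equally be a `CRITICAL_SECTION` instead of `std::mutex`.

```cpp
#include <windows.h>
#include <deque>
#include <mutex>

struct Task { /* whatever describes one unit of work */ };

struct LockedObject {
    std::mutex       queueLock;
    std::deque<Task> tasks;
    bool             processing = false;  // true while some IOCP thread is draining 'tasks'
};

const int kMaxTasksPerDrain = 32;         // tunable anti-starvation limit (assumed value)

void DrainObject(HANDLE iocp, LockedObject& obj);

// Called by an IOCP worker thread when it dequeues a task destined for 'obj'.
void OnTaskForObject(HANDLE iocp, LockedObject& obj, Task task)
{
    {
        std::lock_guard<std::mutex> lock(obj.queueLock);
        obj.tasks.push_back(std::move(task));
        if (obj.processing)
            return;            // another thread owns this object's queue; back to the IOCP
        obj.processing = true; // we become the processing thread
    }
    DrainObject(iocp, obj);
}

// Runs tasks for 'obj' until its queue empties or the anti-starvation limit is hit.
void DrainObject(HANDLE iocp, LockedObject& obj)
{
    for (int processed = 0; ; ++processed) {
        Task next;
        {
            std::lock_guard<std::mutex> lock(obj.queueLock);
            if (obj.tasks.empty()) {
                obj.processing = false;   // release ownership under the lock
                return;
            }
            if (processed >= kMaxTasksPerDrain) {
                // Keep ownership but yield: post a completion whose key identifies the
                // object; the worker that dequeues it should call DrainObject again.
                PostQueuedCompletionStatus(iocp, 0,
                    reinterpret_cast<ULONG_PTR>(&obj), nullptr);
                return;
            }
            next = std::move(obj.tasks.front());
            obj.tasks.pop_front();
        }
        // Process outside the lock so other IOCP threads can enqueue freely.
        // ProcessTask(obj, next);       // assumed application-specific handler
        (void)next;                      // placeholder in this sketch
    }
}
```

Processing outside the lock is the crux: other IOCP threads only ever hold the lock long enough to enqueue and check the flag, so they are never blocked on the object's actual work.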

Len Holgate
    To minimize impact on your current design, you can have your task queue merely consist of the values returned by the `GetQueuedCompletionStatus` you want to defer. Processing the task queue consists of calling `PostQueuedCompletionStatus` to put the completion event back into the queue. – Raymond Chen Jun 17 '12 at 17:16
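To illustrate that comment, a small sketch of the deferral: the queued "task" is just the triple returned by `GetQueuedCompletionStatus`, and re-queuing a deferred task is a `PostQueuedCompletionStatus` call. The `Completion` struct and helper names are made up for illustration.

```cpp
#include <windows.h>

struct Completion {
    DWORD        bytes;
    ULONG_PTR    key;
    LPOVERLAPPED overlapped;
};

// Capture exactly what GetQueuedCompletionStatus returned so it can be deferred.
bool DequeueCompletion(HANDLE iocp, Completion& c)
{
    return GetQueuedCompletionStatus(iocp, &c.bytes, &c.key, &c.overlapped, INFINITE) != FALSE;
}

// Push a deferred completion back into the port for a worker to retry later.
bool Requeue(HANDLE iocp, const Completion& c)
{
    return PostQueuedCompletionStatus(iocp, c.bytes, c.key, c.overlapped) != FALSE;
}
```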

The ideal solution is to have a thread wait for the event and post to the completion port when the event occurs. Alternatively, have a thread wait for the event and just handle it. If you have two fundamentally different things you need to do, use two threads to do them.
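A minimal sketch of the first suggestion, assuming a dedicated waiter thread and a made-up completion key (`EVENT_SIGNALED_KEY`) that the IOCP workers would interpret as "the object was released":

```cpp
#include <windows.h>

const ULONG_PTR EVENT_SIGNALED_KEY = 1;  // arbitrary key, chosen for illustration

struct EventBridge {
    HANDLE iocp;    // existing completion port
    HANDLE event;   // event that signals the object's release
};

DWORD WINAPI EventWaiterThread(LPVOID param)
{
    EventBridge* bridge = static_cast<EventBridge*>(param);
    for (;;) {
        // Block here instead of tying up an IOCP worker.
        if (WaitForSingleObject(bridge->event, INFINITE) != WAIT_OBJECT_0)
            break;
        // Translate "event signaled" into a completion packet the workers already understand.
        PostQueuedCompletionStatus(bridge->iocp, 0, EVENT_SIGNALED_KEY, nullptr);
    }
    return 0;
}
```

Start the thread with `CreateThread` (or `std::thread`) and have the worker threads treat that key as the cue to retry the deferred task.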

David Schwartz
  • That doesn't really work for this case; the number of tasks of each type I'm going to be receiving isn't predictable. I need the worker threads to be able to adapt to spikes in specific types of tasks, so I can't dedicate one thread to only handle tasks of type A, and another thread to only handle tasks of type B. – Collin Dauphinee Jun 17 '12 at 04:39
  • Then use one group of threads waiting for IOCP and one waiting for events. There's no reason you need the same thread to wait for two different types of things. Creating a few extra threads won't hurt anything. (Personally, I'd use one group of threads primarily waiting on the IOCP and any time I needed to block on an event, I'd post a request to the IOCP to dispatch a thread to block on it. This way, I'd get the dispatching and scheduling advantages of the IOCP. Just make sure there are enough threads in the pool. The IOCP dispatch will idle them if needed automatically.) – David Schwartz Jun 17 '12 at 04:40