
I am reading data via ZMQ and executing tasks in a separate thread based on their topic. I noticed that when the data frequency is really high (~1 message every 1ms), some of the tasks take a really long time to execute.

This is basically what I'm doing:

while (true)
{
    item = zmqSubscriber.ReceiveData(out topic, out ConsumeErrorMsg);
    if (topic.Equals(topic1))
    {
        Task.Run(() => ExecuteTask1(item));
    }
    else if (topic.Equals(topic2))
    {
        Task.Run(() => ExecuteTask2(item));
    }
    else if (topic.Equals(topic3))
    {
        Task.Run(() => ExecuteTask3(item));
    }
}

When the data frequency is a bit lower (10 messages every 100ms), I did not notice any behavior issues.

I'm new to C# and I'm wondering if this could be due to the maximum number of active thread pool threads being too low. I read here that the number can be increased, but that it's not considered good practice: ThreadPool.SetMinThreads(Int32, Int32) Method

So I was wondering: is there a better way of achieving what I'm trying to do?
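
For reference, this is the kind of diagnostic I could add to check whether the pool itself is saturated (just a sketch, it assumes .NET Core 3.0+ where ThreadPool.ThreadCount and ThreadPool.PendingWorkItemCount are available):

using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: periodically print thread pool statistics while the subscriber runs,
// to see whether work items are queuing up faster than they complete.
_ = Task.Run(async () =>
{
    while (true)
    {
        Console.WriteLine(
            $"pool threads: {ThreadPool.ThreadCount}, " +
            $"pending work items: {ThreadPool.PendingWorkItemCount}, " +
            $"completed: {ThreadPool.CompletedWorkItemCount}");
        await Task.Delay(1000);
    }
});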

  • You could work with 3 ConcurrentQueues (or Channels) serving 3 separate Tasks as consumers. If there is a need to concurrently process items of the same type, multiple Tasks can be used per queue (a rough sketch of this approach follows these comments)... – Johan Donne Oct 25 '21 at 10:56
  • If there is some IO involved in processing those items (in `ExecuteTask` calls) then you might get some benefit from asynchronous processing. But even if it's pure CPU work, it's probably not a good idea to run a separate task for each item at such a frequency - you only have so many CPU cores anyway. – Evk Oct 25 '21 at 11:00
  • What kind of work are the `ExecuteTask1`, `ExecuteTask2` and `ExecuteTask3` methods doing? – Theodor Zoulias Oct 25 '21 at 11:23
  • What library (i.e. Nuget, etc) are you using? – Enigmativity Oct 25 '21 at 21:37
  • After races and deadlock, it is the 3rd most common threading bug, a *firehose bug*. Producing more work for the worker threads than they can handle. It will crash the program, eventually, on an OutOfMemoryException. Takes a long time, modern machines have a lot of it. The thread explosion is however easy to observe in the Debug > Windows > Threads debugger window. And the less than stellar responsiveness of the program. Throttling is required, could be as simple as a SemaphoreSlim that counts the busy workers. – Hans Passant Oct 26 '21 at 07:55
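
As a rough illustration of the queue-per-topic idea from the comments (a sketch only, not tested against your code; it assumes System.Threading.Channels plus the Message, zmqSubscriber, topic and ExecuteTaskN names from the question):

using System.Threading.Channels; // NuGet: System.Threading.Channels
using System.Threading.Tasks;

// One unbounded Channel per topic, each drained by a single long-running consumer.
// Items of the same topic are processed in order; different topics run in parallel.
var channel1 = Channel.CreateUnbounded<Message>();
var channel2 = Channel.CreateUnbounded<Message>();
var channel3 = Channel.CreateUnbounded<Message>();

_ = Task.Run(async () =>
{
    await foreach (var msg in channel1.Reader.ReadAllAsync())
        ExecuteTask1(msg);
});
// ...same consumer pattern for channel2/ExecuteTask2 and channel3/ExecuteTask3.

// Producer loop: route each received item to its topic's channel.
while (true)
{
    var item = zmqSubscriber.ReceiveData(out topic, out ConsumeErrorMsg);
    if (topic.Equals(topic1)) await channel1.Writer.WriteAsync(item);
    else if (topic.Equals(topic2)) await channel2.Writer.WriteAsync(item);
    else if (topic.Equals(topic3)) await channel3.Writer.WriteAsync(item);
}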

1 Answer


Rather than spinning up endless tasks and smashing the task scheduler, I'd personally use TPL Dataflow or Rx to help partition, queue and manage your workloads.

They both cater for synchronous and asynchronous operations, can take cancellation tokens, manage degrees of parallelism, and give you backpressure if you need it. You can also push the output into further pipelines.

var options = new ExecutionDataflowBlockOptions()
{
   //BoundedCapacity = <= set this if you want back pressure
   //CancellationToken = token <= set this if you like cancelling stuff
   //MaxDegreeOfParallelism = <= set this if you want limited parallelism
   SingleProducerConstrained = true
};

// This could all be done in the one action block,
// or different options for each block depending on your needs
var action1 = new ActionBlock<Message>(ExecuteTask1,options);
var action2 = new ActionBlock<Message>(ExecuteTask2,options);
var action3 = new ActionBlock<Message>(ExecuteTask3,options);

while (true)
{
   var item = zmqSubscriber.ReceiveData(out topic, out ConsumeErrorMsg);

   // Route each item to the block for its topic; SendAsync awaits when a
   // bounded block is full, which gives you backpressure.
   var accepted = topic switch
   {
      _ when topic.Equals(topic1) => await action1.SendAsync(item, token),
      _ when topic.Equals(topic2) => await action2.SendAsync(item, token),
      _ when topic.Equals(topic3) => await action3.SendAsync(item, token),
      _ => true // unknown topic, ignore
   };
}

Disclaimer: This isn't a tutorial on Dataflow; you will need to research this technology, and review and adapt any solution like this to your needs.

You should also implement some throttling strategy in case your messages outpace your processing.
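
For example (a sketch reusing the Message type and ExecuteTask1 from above), setting BoundedCapacity is the simplest throttle here: SendAsync then waits whenever a block's input queue is full, so the receive loop slows down instead of memory growing without bound.

// Bounded block: at most 1000 queued items; SendAsync awaits when the queue is full.
var boundedOptions = new ExecutionDataflowBlockOptions
{
   BoundedCapacity = 1000,            // tune to your memory/latency budget
   SingleProducerConstrained = true
};
var action1 = new ActionBlock<Message>(ExecuteTask1, boundedOptions);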
