
My end goal is to schedule many similar tasks to run repeatedly with a consistent period, and have them spread across their period to avoid CPU demand spikes.

For the first step, I've tried finding the right way to schedule tasks to run an exact duration later. None of my attempts at this step have worked.

public static void Main(string[] args)
{
    Console.WriteLine("Entry");

    const int n = 1000;
    int[] times = new int[n];
    Task[] tasks = new Task[n];

    DateTime wctStart = DateTime.Now;
    for (int i = 0; i < n; i++)
    {
        int ibv = i;
        DateTime now = DateTime.Now;

        // For n = 1,000 we expect s >= 1,000,000; wct >= 1,000

        // n=1,000 -> s=30k-80k; wct=200
        //tasks[i] = Task.Delay(1000)
        //    .ContinueWith(_ =>
        //    {
        //        times[ibv] = (DateTime.Now - now).Milliseconds;
        //    });

        // n=1,000 -> s=190k-300k; wct=200-400
        // This also takes 50 seconds and eats 2-3MB mem
        //tasks[i] = Task.Run(delegate
        //{
        //    Thread.Sleep(1000);
        //    times[ibv] = (DateTime.Now - now).Milliseconds;
        //});


        // n=1,000 -> s=30k-400k; wct=300-500
        //tasks[i] = Task.Run(async () =>
        //{
        //    await Task.Delay(1000);
        //    times[ibv] = (DateTime.Now - now).Milliseconds;
        //});
    }

    foreach (Task task in tasks)
    {
        task.Wait();
    }

    int s = 0;
    foreach (int v in times)
    {
        s += v;
    }

    int wct = (DateTime.Now - wctStart).Milliseconds;
    Console.WriteLine($"s={s:N0}, wct={wct:N0}");

    Console.ReadKey();
}

To run this code, uncomment one of the three options first.

I'm scheduling 1,000 tasks to run 1,000 milliseconds later. Each one gets an index and stores how many milliseconds after its creation it actually ran. A correct solution would produce roughly uniform measurements of 1,000 ms (or perhaps a few ms more), and therefore a value of s of at least 1,000,000 (perhaps a few thousand more), and a wall-clock time wct of at least 1,000 ms, or greater by a similar proportion.

What I observe instead is that all three of these approaches give widely variable values of s, in different ranges but all much less than 1,000,000, and narrowly variable values of wct, all much less than 1,000.

Why do these approaches apparently not work and how can they be fixed?

igk
  • There are quite a few mistakes you are making with handling tasks. When working with tasks from a synchronous thread it's very easy to accidentally start a task and not realize it. It's quite possible your Task.Run operations are causing the task to execute before you need it to. I think you should rethink what you are trying to accomplish. If you're attempting to do some calculation at regular intervals, consider a background worker with a loop that lasts forever and a sleep in the loop. You're using tasks wrong if you are attempting to make your own scheduler – Zakk Diaz Jan 19 '21 at 21:09 (see the sketch after these comments)
  • As a side note, I find this phrase confusing: *"...schedule many similar tasks to run repeatedly..."*. The `Task` class represents a single asynchronous operation, and therefore can run only once, not repeatedly. Maybe you are referring to "tasks" in an abstract way? – Theodor Zoulias Jan 19 '21 at 22:37
  • @TheodorZoulias that's correct. Perhaps I should have used a synonym. – igk Jan 19 '21 at 22:41
  • I guess that your intention is to have many long-running `Task` instances, where each instance will execute the same function again and again in a periodic fashion. If that is the case, then simply scheduling all `Task`s with the same [limited-concurrency](https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.concurrentexclusiveschedulerpair.concurrentscheduler) `TaskScheduler` will probably solve your problem naturally. The execution of each function will be postponed until the `TaskScheduler` has an available "slot", according to the max concurrency policy. – Theodor Zoulias Jan 19 '21 at 22:48
  • @TheodorZoulias that appears to be exactly the right thing for this problem. I'll develop the solution using that instead. – igk Jan 19 '21 at 22:58
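
A minimal sketch of the "background worker with an endless loop" idea from the first comment (the 1-second period and the placeholder work are purely illustrative, not from the question; Thread lives in System.Threading):

Thread worker = new Thread(() =>
{
    while (true)
    {
        // ... do the periodic work here ...
        Thread.Sleep(1000); // sleep for the desired period between runs
    }
});
worker.IsBackground = true; // a background thread won't keep the process alive on exit
worker.Start();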

1 Answer

From the documentation of the TimeSpan.Milliseconds property:

public int Milliseconds { get; }

Gets the milliseconds component of the time interval represented by the current TimeSpan structure.

So Milliseconds returns a value between 0 and 999. To get the total duration of the TimeSpan in milliseconds, you need to query the TimeSpan.TotalMilliseconds property:

public double TotalMilliseconds { get; }

Gets the value of the current TimeSpan structure expressed in whole and fractional milliseconds.
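
For example, a minimal illustration of the difference (the 1,005 ms duration is arbitrary, chosen only for the example):

TimeSpan elapsed = TimeSpan.FromMilliseconds(1005);
Console.WriteLine(elapsed.Milliseconds);      // 5    - only the 0-999 milliseconds component
Console.WriteLine(elapsed.TotalMilliseconds); // 1005 - the whole duration in milliseconds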

My suggestion is to use the Stopwatch class for measuring intervals. It is more accurate and more lightweight than subtracting DateTime values obtained from the DateTime.Now property.
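
For instance, a rough sketch of what your measurement loop could look like with a Stopwatch (based on your first approach, and assuming the same n, times and tasks arrays; Stopwatch lives in System.Diagnostics):

Stopwatch stopwatch = Stopwatch.StartNew();
for (int i = 0; i < n; i++)
{
    int ibv = i;
    long start = stopwatch.ElapsedMilliseconds;
    tasks[i] = Task.Delay(1000).ContinueWith(_ =>
    {
        // Store the whole elapsed duration, not just the 0-999 ms component
        times[ibv] = (int)(stopwatch.ElapsedMilliseconds - start);
    });
}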


Update: Here is some basic scaffolding for creating periodic executions of functions, with configurable interval and concurrency policy:

var concurrentScheduler = new ConcurrentExclusiveSchedulerPair(
    TaskScheduler.Default, maxConcurrencyLevel: 2).ConcurrentScheduler;

var cts = new CancellationTokenSource();

Task PeriodicExecutionAsync(TimeSpan interval, Action action)
{
    return Task.Factory.StartNew(async () =>
    {
        while (true)
        {
            var delayTask = Task.Delay(interval, cts.Token);
            action();
            await delayTask; // Continue on captured context
            cts.Token.ThrowIfCancellationRequested();
        }
    }, cts.Token, TaskCreationOptions.DenyChildAttach, concurrentScheduler).Unwrap();
}

Task periodicExecution1 = PeriodicExecutionAsync(TimeSpan.FromSeconds(1.0), () =>
{
    Thread.Sleep(200); // Simulate CPU-bound work
});

Task periodicExecution2 = PeriodicExecutionAsync(TimeSpan.FromSeconds(2.0), () =>
{
    Thread.Sleep(500); // Simulate CPU-bound work
});

For graceful termination you can call cts.Cancel() before closing the program, and then wait for all the tasks to complete. You can safely catch and ignore any OperationCanceledExceptions; they are benign.
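
A possible shutdown sequence, as a sketch (assuming the periodicExecution1 and periodicExecution2 tasks from above):

cts.Cancel();
try
{
    Task.WaitAll(periodicExecution1, periodicExecution2);
}
catch (AggregateException ae)
{
    // Task.WaitAll wraps failures in an AggregateException; cancellation surfaces
    // as an OperationCanceledException (or the derived TaskCanceledException).
    ae.Handle(ex => ex is OperationCanceledException);
}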

Theodor Zoulias
  • This resolved the matter. After further experimenting with these changes in place, the best approach turned out to be the first, with `s` usually under 1,150k and `wct` usually under 1,200ms. The third was worse by about 100k, 100ms, and unsurprisingly the second is not remotely acceptable. – igk Jan 19 '21 at 22:51
  • @igk you could make the second approach work by adding this at the start of your program: `ThreadPool.SetMinThreads(1000, 1000);`. But this is not recommended. Creating 1000 threads just for having them sleeping for most of their life is a waste of resources (each thread allocates 1 MB RAM for its stack by default). – Theodor Zoulias Jan 19 '21 at 22:57
  • That brought the `wct` for the second approach under 7,000 (`s` = 4,700k, max memory 31MB above other approaches). The fact that that memory number is so low is academically interesting; however, approach two is still clearly not The Right Way to do this. Good suggestion, though. – igk Jan 19 '21 at 23:06
  • @igk I updated my answer with some scheduler-related code, to get you on track quickly. – Theodor Zoulias Jan 19 '21 at 23:51