
In my code I have a method such as:

void PerformWork(List<Item> items)
{
    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        foreach (var item in items)
        {
            await itemHandler.PerformIndividualWork(item);
        }
    });
}

Where Item is just a known model and itemHandler just does some work based on the model (the ItemHandler class is defined in a separately maintained code base, distributed as a NuGet package I'd rather not modify). The purpose of this code is to have the work for a list of items done in the background, but synchronously: one item at a time, in order.

As part of the work, I would like to create a unit test to verify that when this method is called, the items are handled synchronously. I'm pretty sure the issue can be simplified down to this:

await MyTask(1);
await MyTask(2);
Assert.IsTrue(/* MyTask with arg 1 was completed before MyTask with arg 2 */);

The first half of this I can unit test easily: that the call sequence is maintained. For example, using NSubstitute I can check the method call order on the library code:

Received.InOrder(() =>
{
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "First item"));
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Second item"));
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Third item"));
});
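
For context, here is roughly what that check looks like as a full test in my case. This is a simplified sketch: ItemWorker and IBackgroundQueue are stand-ins for my actual class and the HostingEnvironment wrapper I mention below, Item and IItemHandler are the library types, and I happen to use NUnit with NSubstitute here, but the framework isn't the important part:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using NSubstitute;
using NUnit.Framework;

// Stand-in for my wrapper around HostingEnvironment.QueueBackgroundWorkItem
// so the test can get hold of the queued delegate.
public interface IBackgroundQueue
{
    void QueueBackgroundWorkItem(Func<CancellationToken, Task> workItem);
}

public class WorkOrderingTests
{
    [Test]
    public async Task Items_are_handled_in_list_order()
    {
        var itemHandler = Substitute.For<IItemHandler>();
        itemHandler.PerformIndividualWork(Arg.Any<Item>()).Returns(Task.CompletedTask);

        // Capture the delegate that PerformWork queues so the test can await it directly.
        Func<CancellationToken, Task> queuedWork = null;
        var queue = Substitute.For<IBackgroundQueue>();
        queue.When(q => q.QueueBackgroundWorkItem(Arg.Any<Func<CancellationToken, Task>>()))
             .Do(call => queuedWork = call.Arg<Func<CancellationToken, Task>>());

        // ItemWorker stands in for the class containing PerformWork, assumed to take
        // the handler and the queue wrapper through its constructor.
        var subject = new ItemWorker(itemHandler, queue);
        subject.PerformWork(new List<Item>
        {
            new Item { Name = "First item" },
            new Item { Name = "Second item" },
            new Item { Name = "Third item" },
        });

        await queuedWork(CancellationToken.None);

        Received.InOrder(() =>
        {
            itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "First item"));
            itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Second item"));
            itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Third item"));
        });
    }
}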

But I'm not quite sure how to ensure that they aren't run in parallel. I've had several ideas, all of which seem bad, like mocking the library so that PerformIndividualWork has an artificial delay, then either checking the elapsed time of the whole queued background task or checking the timestamps of the received itemHandler calls for a minimum gap between them. For instance, if I mock PerformIndividualWork to delay 500 milliseconds and I'm expecting three items, I could check the elapsed time:

stopwatch.Start();
// I have an interface instead of directly calling HostingEnvironment, so I can access the task being queued here
backgroundTask.Invoke(...);
stopwatch.Stop();

Assert.IsTrue(stopwatch.ElapsedMilliseconds > 1500);
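
For completeness, the delay mock itself is easy enough to set up with NSubstitute; it's the assertion side that bothers me. A rough sketch of the mocked delay:

// Pretend each item takes 500 ms, so three sequential items need ~1500 ms in total,
// while a parallel run would finish in roughly 500 ms.
itemHandler
    .PerformIndividualWork(Arg.Any<Item>())
    .Returns(async callInfo => await Task.Delay(500));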

But that doesn't feel right and could lead to false positives. Perhaps the solution lies in modifying the code itself; however, I can't think of a way to meaningfully change it to make this sort of unit test (testing that the tasks run in order) possible. We'll definitely have system/integration testing to ensure that the issue caused by handling the individual items asynchronously doesn't happen, but I would like to have test coverage at this level as well.

Herman
  • This sounds as if you are trying to test that `await` works as documented. That doesn't make sense, of course `await` needs to be tested, but I'm pretty sure the C# compiler devs have already done that. If I'm missing something, please clarify: what kind of programming error are you wanting to catch? – Charlie Feb 05 '20 at 20:54
  • You could have a look at Jon Skeet's [time machine](https://github.com/jskeet/DemoCode/blob/master/AsyncIntro/Code/Testing.NUnit/TimeMachine.cs) – Jesse de Wit Feb 05 '20 at 21:45
  • @Charlie the unit tests would mostly be there to ensure the code is not changed to run these events in parallel. They were initially called in parallel but it caused an issue for some cases where I have to run it as seen. – Herman Feb 05 '20 at 22:41
  • Thanks @JessedeWit I'll look into how I would use that in this case – Herman Feb 05 '20 at 22:51
  • OK. In your example, the await appears to be in the test itself, with the Assert right after it. I understand now you were only trying to illustrate the problem. Since it's in the production code and you have had past problems, @David's idea makes sense. – Charlie Feb 07 '20 at 00:54

1 Answer


Not sure if this is a good idea, but one approach could be to use an itemHandler that will detect when items are handled in parallel. Here is a quick and dirty example:

public class AssertSynchronousItemHandler : IItemHandler
{
    private volatile int concurrentWork = 0;
    public List<Item> Items = new List<Item>();

    public Task PerformIndividualWork(Item item) =>
        Task.Run(() => {
            // On entry, this should be the only item in flight.
            var result = Interlocked.Increment(ref concurrentWork);
            if (result != 1) {
                throw new Exception($"Expected 1 work item running at a time, but got {result}");
            }

            Items.Add(item);

            // On exit, nothing else should have started while this item was being handled.
            var after = Interlocked.Decrement(ref concurrentWork);
            if (after != 0) {
                throw new Exception($"Expected 0 work items running once this item finished, but got {after}");
            }
        });
}

There are probably big problems with this, but the basic idea is to increment a counter when we start handling an item and check that no other item is already in flight, then decrement it once we finish and check that nothing else started in the meantime. With threading I think it is very hard to make guarantees from tests alone, but with enough items processed this can give us a little confidence that it is working as expected:

[Fact]
public void Sample() {

    var handler = new AssertSynchronousItemHandler();
    var subject = new Subject(handler);
    var input = Enumerable.Range(0, 100).Select(x => new Item(x.ToString())).ToList();

    subject.PerformWork(input);

    // With the code from the question we don't have a way of detecting
    // when `PerformWork` finishes. If we can't change this we need to make
    // sure we wait "long enough". Yes this is yuck. :)
    Thread.Sleep(1000);
    Assert.Equal(input, handler.Items);
}
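
If sleeping for a fixed second feels too brittle, one small improvement (a sketch only, and it still races a plain List<T> read against the background writes) is to poll until the handler has seen all the items, so the Thread.Sleep and the final assert become something like:

// SpinWait.SpinUntil is from System.Threading; wait up to 5 seconds for the
// background work to drain instead of sleeping a fixed amount of time.
var finished = SpinWait.SpinUntil(() => handler.Items.Count == input.Count,
                                  TimeSpan.FromSeconds(5));

Assert.True(finished, "Background work did not complete within 5 seconds");
Assert.Equal(input, handler.Items);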

If I modify PerformWork to do things in parallel I get the test failing:

public void PerformWork2(List<Item> items) {
    Task.WhenAll(
        items.Select(item => itemHandler.PerformIndividualWork(item))
        ).Wait(2000);
}

// ---- System.Exception : Expected 1 work item running at a time, but got 4

That said, if it is very important that the items run synchronously, and that is not apparent from glancing at an async/await implementation, then maybe it is worth using a more obviously synchronous design, such as a queue serviced by a single thread, so that synchronous execution is guaranteed by design and people won't inadvertently make it parallel during a refactoring (i.e. it is deliberately synchronous and documented that way).
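
As a rough illustration of that idea (names are made up, Item and IItemHandler are the types from the question, and shutdown/error handling is omitted), a single consumer loop reading from a channel makes the sequential behaviour explicit:

using System.Threading.Channels;
using System.Threading.Tasks;

public class SequentialItemQueue
{
    private readonly Channel<Item> channel = Channel.CreateUnbounded<Item>();
    private readonly IItemHandler itemHandler;

    public SequentialItemQueue(IItemHandler itemHandler)
    {
        this.itemHandler = itemHandler;
        Task.Run(ProcessQueueAsync); // exactly one consumer loop
    }

    public void Enqueue(Item item) => channel.Writer.TryWrite(item);

    private async Task ProcessQueueAsync()
    {
        // Only this loop ever calls PerformIndividualWork, so items cannot overlap.
        while (await channel.Reader.WaitToReadAsync())
        {
            while (channel.Reader.TryRead(out var item))
            {
                await itemHandler.PerformIndividualWork(item);
            }
        }
    }
}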

David Tchepak
  • Thanks, that's pretty cool. In theory, I have a task scheduler interface that should be able to capture the incoming work and await so I don't have to Thread.Sleep, but I haven't been able to get this running at all as expected. I'll keep working on it and let you know. Part of the implementation design here is that only some cases we need synchronous runs. I believe the actual fix to this situation would be queuing in the interfacing system where the work is performed, but that's not a viable fix atm. – Herman Feb 06 '20 at 14:46