
I am currently working on a C# application that will act as the server side of a multiplayer game, and I am slightly unsure how I should be handling multi-threading issues. Before I continue, it is probably worth mentioning that I am quite new to this topic.

The problem

One of the requirements of the server-side application is that it should hold application-specific data, such as information on the peers that have connected to the server, their locations, and so on. The problem is that without some form of thread-safe mechanism, two requests could be reading and writing the same piece of data at the same time, which is obviously problematic.

Solving the problem

Up until now, to solve the problem I have simply been wrapping every request inside a lock block, ensuring that requests happen in serial order, so that the data is only ever being manipulated by one peer at a time.
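For context, my current approach looks roughly like this (a simplified sketch with placeholder names, not my actual code):

```csharp
using System;

public class GameServer
{
    // One global lock guarding all shared game state.
    private readonly object stateLock = new object();
    private int connectedPeers;

    // Every request handler is wrapped in the same lock,
    // so requests are processed strictly one at a time.
    public void HandlePeerConnected()
    {
        lock (stateLock)
        {
            connectedPeers++;
        }
    }

    public int GetPeerCount()
    {
        lock (stateLock)
        {
            return connectedPeers;
        }
    }
}
```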

Recently, after doing some research on the topic, I was introduced to the idea of fibers and of setting up a "fiber pool", which makes it possible to queue actions onto a single fiber as another way of ensuring that requests happen in serial order.
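As I understand the idea, it amounts to something like the following: a single dedicated worker drains a queue of actions, so everything queued onto it runs serially. This is my rough sketch of that concept (a single-threaded action queue, not actual fiber code):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public class SerialQueue
{
    private readonly BlockingCollection<Action> actions = new BlockingCollection<Action>();

    public SerialQueue()
    {
        // A single dedicated worker drains the queue,
        // so queued actions run strictly one after another.
        var worker = new Thread(() =>
        {
            foreach (var action in actions.GetConsumingEnumerable())
                action();
        });
        worker.IsBackground = true;
        worker.Start();
    }

    public void Enqueue(Action action)
    {
        actions.Add(action);
    }
}
```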

The question

My knowledge of threading and these types of topics is pretty limited. I would love to know more about this topic; in particular, I would love to know the pros and cons of each solution and, ultimately, which route I should be taking.

Any help would be greatly appreciated.

Donny Sutherland
  • There are a couple of interesting discussions on fibers here on Stack Overflow; try searching "c# using fibers". They point to a couple of MSDN articles that can probably help. – Sabrina_cs Sep 23 '15 at 15:30
  • @Sabrina_cs - thanks for the input. I have enjoyed reading through some of the discussions on the subject so far, and will likely continue to do so. What I'm struggling with at the moment is a clear comparison between the two. Thanks again for the input. – Donny Sutherland Sep 23 '15 at 15:40
  • 2
    Fibers are a complex optimization that doesn't do anything to solve your task problem which is how to handle shared data. One giant lock is a bad solution because it doesn't scale so you'll need finer locking and all that entails. – Voo Sep 23 '15 at 16:35
  • @Voo - thanks for the input. So your suggestion would be to stick to locking, and just be clever about it? That sounds good to me – Donny Sutherland Sep 23 '15 at 17:47
  • 1
    Well my suggestion would be to get a good book on concurrency before even thinking of trying to get a working solution. It's a hard enough topic without getting in the most obscure corners. – Voo Sep 23 '15 at 18:10
  • @Voo - sounds like a plan. Thanks again for the input! – Donny Sutherland Sep 23 '15 at 18:29
  • I developed some small-scale solutions a long time ago where I needed multithreading and had the problem of locking for the least amount of time possible. I solved it using locking and a queue: all threads could add jobs to the queue as fast as possible, and the multithreaded job processor locked the queue only for the time needed to take a job, so that no other thread would grab the same one. The time-consuming operation was then handled by that specific thread without crossover problems. I don't know if this fits your case, but it may give you some ideas. – Sabrina_cs Sep 25 '15 at 08:26
  • @Sabrina_cs - From what I understand so far, we "lock" in order to restrict access to critical sections of code to one thread, with the end goal of solving problems where multiple threads access the same resource at the same time. It sounds like we're on the same page about that. Your addition of a queue sounds really interesting and ties in with what George said about being async to reduce contention. This is definitely something I am keen to try. Thanks again for the useful input. – Donny Sutherland Sep 25 '15 at 08:50

1 Answer


I really can't figure out how fibers would solve your problem, since they basically don't provide a means to reduce contention on a shared memory resource.

I would rather focus on strategies to reduce contention on the resources, to eliminate duplicated computation, and to reduce thread resource usage with asynchronous processing.

Using a global lock on top of all request processing essentially serializes everything onto a single live thread at a time. As an alternative, you could reduce locking by taking locks only per resource.

Disclaimer: the example code presented here is by no means production quality; it is here just to illustrate the concepts.

Reduce contention

You can come up with a granular locking strategy, where you lock only the region of data involved in a particular operation.

Following is an example Sorting Game, which has simple rules: each player grabs an item in a list and swaps it with the next one if the left item is greater than the right. The game ends when all items are sorted. Nobody wins, it's just a lot of fun.

using System;
using System.Threading;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static void Main()
    {
        var game = new SortingGame();

        // Simulate a few concurrent players.
        // Note: System.Random is not thread-safe, so each player gets its own instance.
        for (var i = 0; i < 3; i++)
        {
            var random = new Random(234 + i);
            ThreadPool.QueueUserWorkItem(o =>
            {
                while (!game.IsSorted())
                {
                    var x = random.Next(game.Count() - 1);
                    game.PlayAt(x);
                    DumpGame(game);
                }
            });
        }

        Thread.Sleep(4000);

        DumpGame(game); 
    }

    static void DumpGame(SortingGame game)
    {
        var items = game.GetBoardSnapshot();

        Console.WriteLine(string.Join(",", items));
    }
}


class SortingGame
{
    List<int> items;
    List<object> lockers;

    // this lock is taken for the entire board to guard from inconsistent reads.
    object entireBoardLock = new object();

    public SortingGame()
    {
        const int N = 10;

        // Initialize a game with items in random order
        var random = new Random(1235678);
        var setup = Enumerable.Range(0, N).Select(i => new { x = i, position = random.Next(0, 100)}).ToList();
        items = setup.OrderBy(i => i.position).Select(i => i.x).ToList();
        lockers = Enumerable.Range(0, N).Select(i => new object()).ToList();
    }

    public int Count()
    {
        return items.Count;
    }

    public bool IsSorted()
    {
        var currentBoard = GetBoardSnapshot();
        var pairs = currentBoard.Zip(currentBoard.Skip(1), (a, b) => new { a, b});
        return pairs.All(p => p.a <= p.b);
    }

    public IEnumerable<int> GetBoardSnapshot()
    {
        lock (entireBoardLock)
            return new List<int>(items);
    }

    public void PlayAt(int x)
    {
        // Find the resource lockers for the two adjacent cells in question
        var locker1 = GetLockForCell(x);
        var locker2 = GetLockForCell(x + 1);

        // It's important to lock the resources in a particular order, same for all the contending writers and readers.
        // These can last for a long time, but are granular,
        // so the contention is greatly reduced.
        // Try to remove one of the following locks, and notice the duplicate items in the result
        lock (locker1)
        lock (locker2)
            {
                var a = items[x];
                var b = items[x + 1];
                if (a > b)
                {
                    // Simulate expensive computation
                    Thread.Sleep(100);
                    // Following is a lock to protect from incorrect game state read
                    // The lock lasts for a very short time.
                    lock (entireBoardLock)
                    {
                        items[x] = b;
                        items[x + 1] = a;
                    }
                }           
            }
    }

    object GetLockForCell(int x)
    {
        return lockers[x];
    }
}

Eliminate duplicated computations

If you need some expensive computation to be up to date, but it does not depend on a particular request, computing it for every request would just be a waste of resources.

The following approach lets you skip a repeated computation if the computation has already been started for another request.

It's different from caching, because this way you actually get the freshest result that can be computed in the time frame:

using System;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    static ParticularSharedComputation computation = new ParticularSharedComputation();

    public static void Main()
    {
        for (var i = 0; i < 100; i++)
        {
            Thread.Sleep(100);
            var j = i;
            ThreadPool.QueueUserWorkItem(o => {
                // In this example, the call blocks because of the Result property access.
                // In a real async method you would await the result instead.
                var result = computation.Get().Result;

                Console.WriteLine("{0} {1}", j, result);
            });
        }
    }
}

abstract class SharedComputation
{
    volatile Task<string> currentWork;
    object resourceLock = new object();
    public async Task<string> Get()
    {
        Task<string> current;
        // We are taking a lock here, but all the operations inside a lock are instant.
        // Actually we are just scheduling a task to run.
        lock (resourceLock)
        {
            if (currentWork == null)
            {
                Console.WriteLine("Looks like we have to do the job...");
                currentWork = Compute();
                currentWork.ContinueWith(t => {
                    lock (resourceLock)
                        currentWork = null;
                });
            }
            else
                Console.WriteLine("Someone is already computing. Ok, will wait a bit...");
            current = currentWork;
        }

        return await current;
    }

    protected abstract Task<string> Compute();
}

class ParticularSharedComputation : SharedComputation
{
    protected override async Task<string> Compute()
    {
        // This method is thread-safe if it accesses only its instance data,
        // as the base class allows only one simultaneous entry per instance.
        // Here you can safely access any data local to this instance.
        Console.WriteLine("Computing...");

        // Simulate a long computation.
        await Task.Delay(2000);

        Console.WriteLine("Computed.");
        return DateTime.Now.ToString();
    }
}

Go async, not just multithreaded

Even if you are doing multithreading, you could be wasting thread resources. Threads are expensive because of the stack memory allocated for each thread and because of context switching.

A well-designed async application would actually use only about as many threads as there are CPU cores in your system.

Look into making your application async, not just multithreaded.
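For example, a `SemaphoreSlim` used as an async lock lets a request handler wait for exclusive access to shared state without blocking a thread pool thread while it waits (a rough sketch with hypothetical names, not production code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class AsyncGameState
{
    // SemaphoreSlim(1, 1) acts as an async-compatible lock.
    private readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);
    private int score;

    public async Task AddScoreAsync(int points)
    {
        // The awaiting thread is returned to the pool
        // instead of blocking while waiting for the lock.
        await gate.WaitAsync();
        try
        {
            // Simulate some asynchronous work while holding the lock.
            await Task.Delay(10);
            score += points;
        }
        finally
        {
            gate.Release();
        }
    }

    public int Score { get { return score; } }
}
```

Unlike a lock statement, which cannot contain an await, this pattern lets you hold exclusive access across asynchronous operations while the waiting callers consume no threads at all.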

George Polevoy
  • I really like this answer. It has pretty much cleared everything up for me. Your examples are elegant and informative. Thank you for taking the time to do this. – Donny Sutherland Sep 23 '15 at 19:51