
I am looking at creating a ServiceManager area (a set of classes behind a facade interface) that other areas can sign up with to offer "services" and pass on requests. I plan on running this class in its own thread, and I want to keep critical regions to a minimum and handled purely within this area.

When an area signs up, it gets its own inbox and outbox of commands, each a List of ServiceRequests.

Now my dilemma: if I do not share the reference to the outbox (on the service provider's side), I will need to locate the box via a search for a match. That search is a critical region shared across all sign-ups, and therefore across all request submissions as well, defeating the purpose of a local outbox.

Alternatively, if I hand out the reference outside of the ServiceManager area, then whenever a new external area is coded it becomes possible to skip the critical region and update the list directly, bypassing the protection.


My solution is to create, within the code of the ServiceManager area, a "key" object whose reference never leaves the area (any object, even a completely empty one, will work).

When an area signs up, it receives an object in return holding a reference to the key above, together with a reference to the ServiceManager's record for the signed-up area (which contains the direct inbox/outbox references along with other information).

This "token" object also contains following code

public ServiceObject GetServiceObject(Key key)
{
    if (key == this.key)
    {
        return serviceObject;
    }
    return null;
}

This way, even though the token is public, its contents can only be accessed within the ServiceManager area, as it is the only area with a reference to the Key object. This addresses both concerns above: the shared critical region across all service users, and the risk of those service users accessing their lists directly.
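Roughly, the surrounding pieces would look like this (just a sketch; the Token class name and the SignUp signature are illustrative):

public class ServiceManager
{
    //the key never leaves the ServiceManager area
    private readonly Key key = new Key();

    //hand each signing-up area a token; only code holding the key
    //can unlock it later
    public Token SignUp()
    {
        var serviceObject = new ServiceObject(); //inbox, outbox, etc.
        return new Token(serviceObject, key);
    }

    //inside the ServiceManager the token resolves directly,
    //no shared lookup or search across all areas is needed
    private ServiceObject Resolve(Token token)
    {
        return token.GetServiceObject(key);
    }
}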

So it solves the problem, but to me this looks really ugly. Is there a smarter approach, and if so, what?

Taoh

2 Answers


Use the classes in System.Collections.Concurrent - they have efficient locking implementations and also provide producer/consumer behavior along with partitioning for distributing workloads.

With these classes you will be able to model any scenario, whether you have many readers, many writers, or both at once. All without having to write any locking code of your own.
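For example, a minimal producer/consumer sketch using BlockingCollection<Command> (which wraps a ConcurrentQueue<T> by default; the Command type here is just a stand-in) could look like this:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class Command { public string Name; }

public static class ProducerConsumerExample
{
    public static void Run()
    {
        //BlockingCollection handles all locking internally
        var commands = new BlockingCollection<Command>();

        //many producers can add concurrently without explicit locks
        var producers = Task.WhenAll(
            Task.Run(() => commands.Add(new Command { Name = "A" })),
            Task.Run(() => commands.Add(new Command { Name = "B" })));

        //a single consumer blocks until items arrive
        var consumer = Task.Run(() =>
        {
            foreach (var command in commands.GetConsumingEnumerable())
                Console.WriteLine(command.Name);
        });

        producers.Wait();
        commands.CompleteAdding(); //lets the consumer's loop end
        consumer.Wait();
    }
}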

Update: example code wrapping a queue to provide a facade for an optional future split of the command queue into multiple queues.

Let's assume that your concern about the number of threads inserting commands into the queue has some merit. You can cover yourself by creating a thin wrapper around the underlying queue, which allows you to pass out different wrapper instances if you later discover that this is necessary.

public interface ICommandQueue {
     void Add( Command command );
}

public class CommandQueue : ICommandQueue {
     // the ConcurrentQueue handles all locking internally
     private ConcurrentQueue<Command> queue = new ConcurrentQueue<Command>();

     public void Add( Command command ) {
         queue.Enqueue( command );
     }
}

When you instantiate a ServiceObject, make sure that you pass an ICommandQueue instance to the constructor (either manually or using an IoC container), or use a static CommandQueueFactory to have the ServiceObject obtain it on demand if that works better for you (a sketch of the factory follows the code below).

public class ServiceObject {
     // producers see only the ICommandQueue facade, never the
     // underlying collection, so the implementation can change freely
     private ICommandQueue queue;

     public ServiceObject( ICommandQueue queue ) {
         this.queue = queue;
     }
}
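If you prefer the factory route, a minimal sketch could be (the factory and its single default queue are just one possible shape):

public static class CommandQueueFactory
{
    //one shared queue by default; swap in per-area queues later
    //without touching any producer code
    private static readonly ICommandQueue defaultQueue = new CommandQueue();

    public static ICommandQueue GetQueue()
    {
        return defaultQueue;
    }
}

A ServiceObject could then call CommandQueueFactory.GetQueue() on demand instead of receiving the queue through its constructor.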

This will let you change the number of queues later, rather than writing all of the (potentially unnecessary) code now. Never solve a problem that you are not sure that you have :-)


Morten Mertner
  • I am not too worried about the locking code; my worry is a large number of different threads accessing the same locked region, potentially waiting for each other, which can be avoided. That is why I am looking at this. If there is some kind of method to read/write to a list without the critical region blocking other methods, then I would very much like to be told exactly what to look at. – Taoh Apr 28 '13 at 16:46
  • Did you follow the links and read about the collection classes? As I mentioned, there are implementations that make it very easy for you to build a producer/consumer pattern to ensure optimum throughput. But you need to study the options available and then make an informed choice on which classes you will want to use. – Morten Mertner Apr 28 '13 at 22:01
  • I did follow the links and read them. To me it reads as: these standard approaches can be used to achieve and maintain a low-impact mutex. But my issue is not how to set up the critical region, but whether my approach reduces the shared scope of the critical region, by sharing a token that does not need to be "looked up" within a critical region yet is still only usable within the specified code area. So my question was more: is it a bad approach/bad code, and if so why, and how could I do it smarter? I will attach the code I made to handle it as a potential answer, to clarify the problem. – Taoh Apr 29 '13 at 13:56
  • Thanks for your patience trying to help me out with this, Morten. I might very well be reading this wrong. Your suggestion is to replace the "locking area" used to add to the List with another container that already has this in place. I fully agree with that suggestion. Now my worry is that if I have N producers and 1 consumer, the 1 consumer might have a hard time accessing the queue to pick up its tasks. I feel it lacks an option to move all elements out in one operation (move the queue to an internal queue). Multiple queues would fix this, but I would need to share these, a return to the original problem. – Taoh May 01 '13 at 14:45
  • I will keep pondering the solution for this. I will mark your solution as having answered the question. It answers part of the problem via one approach. That it might not be how I like to think about the solution does not change the fact that, for the majority of readers, your suggestion would be correct for the problem I asked (reduce it to a single queue of a type that is already "threadsafe"). – Taoh May 01 '13 at 14:49
  • I would encourage you to experiment a bit with some actual code before you settle on a solution. Try it out, benchmark it (measure time spent using the Stopwatch class) and evaluate from there. The example code above will let you introduce multiple queues if you so desire, or even change the ConcurrentQueue to something else, without you having to change the code of any of the producers. As such, your code will be insulated and future changes to the consumer code will not affect any of the providers. – Morten Mertner May 01 '13 at 22:01
  • As an aside, you should not allocate threads (or indeed work with them directly in any way). Instead, use the TPL (Task Parallel Library) and use the Task class to encapsulate units of work that you need executed. It's another topic, but thought it worth mentioning ;) Hope you figure out a solution :) – Morten Mertner May 01 '13 at 22:04

The code I am currently using is this:

//base class of no use on its own. Used to lock the AreaIdentifier
public class Key
{
    public Key()
    {
    }
}

//public handle to the ServiceArea info; inaccessible without the internal key
public class AreaIdentifier
{
    Key key;
    ServiceArea serviceArea;
    public AreaIdentifier(ServiceArea serviceArea, Key key)
    {
        this.key = key;
        this.serviceArea = serviceArea;
    }

    public ServiceArea GetServiceArea(Key key)
    {
        if (this.key == key) return serviceArea;
        return null;
    }
}


The reason for this approach is that external areas must be coded such that the "inner code" of the "transport layer", and its adding and removing of commands, cannot be messed up by their actions. If I shared a direct reference to the list instead, an external area could bypass the locking entirely.

So when an outer area (in its own thread) wants to add a new command, it does so by making the following call, which is placed within the Communication Layer's code. (There is a private Key variable within this area that is not accessible via the AreaIdentifier (but known to it), nor shared in any other way outside the area.)

    //Put a command in the outBox for pickup by CommunicationLayer
    public bool AddCommand(AreaIdentifier area, Command command)
    {
        ServiceArea tempArea = area.GetServiceArea(key);
        List<Command> tempOutbox = tempArea.GetOutBox();

        //attempt to enter the critical region, or return false if busy
        if (!Monitor.TryEnter(tempOutbox)) return false;
        try
        {
            tempOutbox.Add(command);
        }
        finally
        {
            Monitor.Exit(tempOutbox); //exit critical region
        }
        return true;
    }
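For completeness, the dispatcher side (the single consumer) would drain each outbox under the same per-area lock. A sketch of that, where GetAllAreas is a placeholder for however the ServiceManager tracks its signed-up areas:

    //Drain every outbox in turn; a busy outbox is skipped and
    //retried on the next pass, so producers cannot starve the
    //dispatcher on all queues at once
    private void CollectCommands(List<Command> collected)
    {
        foreach (ServiceArea area in GetAllAreas()) //placeholder
        {
            List<Command> outbox = area.GetOutBox();
            if (!Monitor.TryEnter(outbox)) continue; //busy, skip for now
            try
            {
                collected.AddRange(outbox);
                outbox.Clear();
            }
            finally
            {
                Monitor.Exit(outbox);
            }
        }
    }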

The only downside I see with this approach is the public GetServiceArea function, which makes little sense outside the "internal" functions of the TransportLayer.

Taoh
    I don't see how any of this code will help you reduce access to the shared resource (aka critical region). All it does is introduce a local cache of items that you will still need to pass on; as such, it does not reduce your locking needs - it merely postpones them. Your problem is a standard problem of many writers (commands being added) and few consumers (command processors). My advice would be to implement this using a simple ConcurrentQueue and to have everyone write directly to the queue. If you want greater flexibility, use the ConsumerProducer stuff. – Morten Mertner Apr 29 '13 at 18:53
  • I might not understand that one correctly, so do correct me if I am wrong. If I have multiple writers writing to the same queue at the same time, would they not be blocking each other, or are they able to "dump" their work and continue, knowing that it will be picked up by the queue? – Taoh Apr 29 '13 at 19:34
  • ... as for how the above works: WaitHandle.WaitOne(Int32, Boolean) to enter and exit the region (did not code it yet, so still in pseudo comments), where Int32 is set to 0. Now with 10 producers (or more) and 1 consumer (dispatcher), the chance of grabbing the region would be 1/11, possibly starving out the dispatcher. With 10 regions (a separate boolean for each area) and the dispatcher checking each region separately, the chance of one being busy is lowered a lot; it might make a skip or two, but overall it is unlikely for the dispatcher to be starved at random. – Taoh Apr 29 '13 at 19:36
    The locking overhead of the concurrent collections is so small that you really do not need to worry about it (it's incredibly fast, but yes, inserting items will lock the queue). Try creating a small piece of code with 20 threads that add items to a single, shared ConcurrentQueue, as shown in this post: http://stackoverflow.com/questions/10103072/is-it-good-to-use-blockingcollectiont-as-single-producer-single-consumer-fifo?rq=1 – Morten Mertner Apr 29 '13 at 20:54
    Also, you shouldn't use Mutexes or WaitHandle-based locks if you can avoid it (i.e. if all your code is managed/C#, then you are only killing performance by using an unmanaged/kernel lock). Managed-only code can use much faster locking mechanisms, as threads are managed by the CLR rather than the OS. – Morten Mertner Apr 29 '13 at 20:57
  • Thanks Morten, I admit I know little about which locking method is less expensive. My primary concern has been that what I am doing here might end up requiring quite a few threads and areas, so my bigger worry is not having to rewrite how they communicate with the communication layer. The actual locking is handled within this code as shown above, so it can be replaced with more efficient methods by changing the code once, without changing the access to it. I feel I am missing something basic here with my solution, but I am unable to see a better approach (for now). – Taoh Apr 30 '13 at 05:26
  • I've updated my answer to show how I would design the basics, in a way that should make it easy to add additional queues later (if you determine that locking is in fact a bottleneck). Let me know if it doesn't make sense as explained in my answer. – Morten Mertner Apr 30 '13 at 17:56