2

I am designing a new improved Logger component (.NET 3.5, C#).

I would like to use a lock-free implementation.

Logging events will be sent from (potentially) multiple threads, although only a single thread will do the actual output to file/other storage medium.

In essence, all the writers are *enqueuing* their data into some queue, to be retrieved by some other process (LogFileWriter).

Can this be achieved in a lock-free manner? I could not find a direct reference to this particular problem on the net.

lysergic-acid
  • 19,570
  • 21
  • 109
  • 218
  • 4
    ..probably because there is no demand for it. Most logging activities have an intermittent supply of logs at one end and a slow disk at the other. A locked, kernel-managed queue is most efficient for such functionality. Is there some special reason why a lockless queue might be desirable in your app? – Martin James Jan 04 '12 at 23:59
  • I am not sure most loggers use locking, even when it's possible to not use it. My requirements are high performance, and this means freeing up the threads that call the logger as quickly as possible. Grabbing locks seems like a waste in this scenario. – lysergic-acid Jan 05 '12 at 00:02
  • 1
    I agree with Hans. http://logging.apache.org/log4net/ has a good chance of being faster than a home grown system, and will certainly take less time in development and debugging to get running properly. – That Chuck Guy Jan 05 '12 at 00:21
  • Regarding a performance logger, use DateTime.UtcNow. It is much faster than DateTime.Now, which in profiling my app was a significant hit to the logging thread. I use log4net. – Iain Jan 05 '12 at 00:41
  • @liortal - lock-free queue implementations that allow multiple producers are thin on the ground. Then there's the issue of how to signal the consumer/logger thread that a new entry has been added so that it can retrieve it and log it. I would use a semaphore, but there are other condvar thingies. Whatever you use, this probably means a kernel call to signal the logger (see the sketch after these comments). The only path where a kernel call can be avoided in a lock-free manner is if the producer takes a spinlock on the queue and ascertains that the queue count is not zero. If you can do that stuff, you don't need any help! – Martin James Jan 05 '12 at 00:55
  • Lock-free queues are the perpetuum mobile of software engineering. There's plenty of choice in Can't-see-the-lock queues. Not inventing the wheel with, say, log4net is the wise choice. Because the absolute worst place for the *one* log message that counts, the one you generate just before your program crashes, is in a lock-free queue. Unseen forever. – Hans Passant Jan 05 '12 at 02:04
  • @ChuckBlumreich I've found log4net to be slow enough that I've wrapped calls to it that pushed details into a lock-free queue first. Not as a general rule, but it did come up once in a piece that was both time-critical and also had requirements for logging. – Jon Hanna Jan 05 '12 at 15:34
  • @MartinJames Lock-free queue implementations that allow multiple producers are plentiful, including the fact that there's one in the framework library. I agree that signalling is the greater issue in this case. – Jon Hanna Jan 05 '12 at 16:13
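
To make the signalling point above concrete, here is a minimal sketch (illustrative names, not from any library): producers release a semaphore after enqueuing, and the single logger thread waits on it, so it sleeps in the kernel until there is something to log. A plain locked Queue<string> stands in for whatever queue implementation is actually chosen.

    using System.Collections.Generic;
    using System.Threading;

    class SignalledLogQueue
    {
        private readonly Queue<string> _queue = new Queue<string>();
        private readonly object _sync = new object();
        private readonly Semaphore _available = new Semaphore(0, int.MaxValue);

        // Producer side: enqueue, then signal the logger thread (a kernel call).
        public void Add(string message)
        {
            lock (_sync) { _queue.Enqueue(message); }
            _available.Release();
        }

        // Consumer side: blocks until at least one item has been enqueued.
        public string Take()
        {
            _available.WaitOne();
            lock (_sync) { return _queue.Dequeue(); }
        }
    }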

3 Answers

13

If you find that using a lock in this case is too slow, you have a much bigger problem. A lock, when it's not contended, takes about 75 nanoseconds on my system (2.0 GHz Core 2 Quad). When it's contended, of course, it's going to take somewhat longer. But since the lock is just protecting a call to Enqueue or Dequeue, it's unlikely that the total time for a log write will be much more than that 75 nanoseconds.

If the lock is a problem--that is, if you find your threads lining up behind that lock and causing noticeable slowdowns in your application--then it's unlikely that making a lock-free queue is going to help much. Why? Because if you're really writing that much to the log, your lock-free queue is going to fill up so fast that you'll be limited by the speed of the I/O subsystem.

I have a multi-threaded application that writes on the order of 200 log entries a second to a Queue<string> that's protected by a simple lock. I've never noticed any significant lock contention, and processing isn't slowed in the least bit. That 75 ns is dwarfed by the time it takes to do everything else.
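
For illustration, a minimal sketch of this kind of lock-protected queue (the names are made up, not my actual code): the lock is held only long enough to enqueue or to swap the queue out, and the file I/O happens outside the lock.

    using System.Collections.Generic;
    using System.IO;

    class LockedLogQueue
    {
        private Queue<string> _pending = new Queue<string>();
        private readonly object _sync = new object();

        // Called from any producer thread; the critical section is tiny.
        public void Add(string message)
        {
            lock (_sync) { _pending.Enqueue(message); }
        }

        // Called periodically by the single LogFileWriter thread:
        // swap the queue under the lock, then write outside the lock.
        public void Flush(TextWriter output)
        {
            Queue<string> toWrite;
            lock (_sync)
            {
                toWrite = _pending;
                _pending = new Queue<string>();
            }
            foreach (string message in toWrite)
                output.WriteLine(message);
        }
    }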

Jim Mischel
  • 131,090
  • 20
  • 188
  • 351
5

This implementation of a lock-free queue might be helpful; the queue is the data structure you'd use to enqueue the items to be dequeued and written out by the logger.

http://www.boyet.com/Articles/LockfreeQueue.html

You might also look at .NET 4's ConcurrentQueue:

http://www.albahari.com/threading/part5.aspx#_Concurrent_Collections

http://geekswithblogs.net/BlackRabbitCoder/archive/2011/02/10/c.net-little-wonders-the-concurrent-collections-1-of-3.aspx
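
As a rough sketch of how that would look (requires .NET 4, which is newer than the .NET 3.5 the question targets; names are illustrative): producers call Enqueue from any thread, and the single LogFileWriter thread drains the queue with TryDequeue.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    static class ConcurrentQueueLogger
    {
        private static readonly ConcurrentQueue<string> Messages =
            new ConcurrentQueue<string>();

        // Any number of producer threads may call this.
        public static void Log(string message)
        {
            Messages.Enqueue(message);
        }

        // Run this loop on the single writer thread.
        public static void WriteLoop(Action<string> writeToFile)
        {
            while (true)
            {
                string message;
                while (Messages.TryDequeue(out message))
                    writeToFile(message);

                Thread.Sleep(10); // crude back-off; a real logger would block on a signal
            }
        }
    }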

  • Thanks, I will check it out. Do you know if this works for all the different scenarios (e.g. multiple readers/multiple writers, single reader/multiple writers, etc.)? – lysergic-acid Jan 05 '12 at 00:05
  • I'm aware of them, but have not had an opportunity to use them, so can't answer definitively. In the case of the Concurrent collections, since they come along with the Parallel additions to .net, I think multiple readers/writers is the intention. In your case you probably need multiple writers, single reader, which shouldn't be terribly taxing on the queue. – hatchet - done with SOverflow Jan 05 '12 at 00:22
  • hatchet's on the right track. You have a producer/consumer pattern so a BlockingCollection with a ConcurrentQueue will do the trick. http://msdn.microsoft.com/en-us/library/dd267312.aspx. Yes it works in the different scenarios. – Richard Schneider Jan 05 '12 at 00:24
0

There are quite a few different implementations of lock-free queues out there.

My own at http://hackcraft.github.com/Ariadne/ uses a simple approach, and is open source so you can adapt it if necessary.

ConcurrentQueue is also lock-free, and will probably serve most purposes fine, though the one in Ariadne has a few members supporting other operations (like dequeuing an enumeration of the entire contents as an atomic operation, which allows for faster enumeration by a single consumer).
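
To illustrate the atomic dequeue-all idea (this is only a sketch of the technique, not Ariadne's actual API): producers push onto a lock-free linked list with a compare-and-swap, and the single consumer detaches the whole chain with one Interlocked.Exchange and then enumerates it at leisure.

    using System.Collections.Generic;
    using System.Threading;

    class DequeueAllList<T>
    {
        private class Node
        {
            public readonly T Value;
            public Node Next;
            public Node(T value) { Value = value; }
        }

        private Node _head;

        // Producers: lock-free push via a CAS loop.
        public void Add(T value)
        {
            var node = new Node(value);
            Node oldHead;
            do
            {
                oldHead = _head;
                node.Next = oldHead;
            } while (Interlocked.CompareExchange(ref _head, node, oldHead) != oldHead);
        }

        // Single consumer: detach the entire chain in one atomic step.
        public List<T> TakeAll()
        {
            Node chain = Interlocked.Exchange(ref _head, null);
            var items = new List<T>();
            for (Node n = chain; n != null; n = n.Next)
                items.Add(n.Value);
            items.Reverse(); // the chain is newest-first; reverse for arrival order
            return items;
        }
    }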

Jon Hanna
  • 110,372
  • 10
  • 146
  • 251