
Could someone please point out the flaws and performance drawbacks in this queue-like implementation?

type Queue struct {
    sync.Mutex
    Items []interface{}
}

func (q *Queue) Push(item interface{}) {
    q.Lock()
    defer q.Unlock()
    q.Items = append(q.Items, item)
}

func (q *Queue) Pop() interface{} {
    q.Lock()
    defer q.Unlock()
    if len(q.Items) == 0 {
        return nil
    }
    item := q.Items[0]
    q.Items = q.Items[1:]
    return item
}

I also have methods like PopMany and PushMany, and what I am concerned about is: is this much re-slicing that bad?

ifnotak
FPGA
    Add `q.Items[0] = nil` between when you set `item` and when you reslice. That will remove the reference to the item from the slice so that it can be garbage collected earlier (if it's something on the heap that can be collected). Otherwise you'll keep the reference around until the next time `append` within `Push` happens to do a realloc+copy of the array that is backing the slice. – Dave C Mar 02 '15 at 22:33
    If your queue will have a fixed maximum size then you can just use a buffered channel. When the buffer is full you could either have the "push" block or use a trivial `select` loop to return a "queue full" error. – Dave C Mar 02 '15 at 22:43
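The non-blocking "queue full" behavior Dave C describes can be sketched with a `select` that has a `default` case (the names here are illustrative, not from the question):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrQueueFull is returned when the channel's buffer has no room.
var ErrQueueFull = errors.New("queue full")

// TryPush attempts to enqueue without blocking. The default case
// fires when the buffered channel is full, instead of waiting.
func TryPush(queue chan interface{}, item interface{}) error {
	select {
	case queue <- item:
		return nil
	default:
		return ErrQueueFull
	}
}

func main() {
	queue := make(chan interface{}, 1)
	fmt.Println(TryPush(queue, "a")) // <nil>
	fmt.Println(TryPush(queue, "b")) // queue full
}
```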

4 Answers

10

You could simply use a buffered channel.

var queue = make(chan interface{}, 100)

The size of the buffer should be determined empirically: large enough for the high-water mark of the rate of pushes vs the rate of pops, but ideally not much larger than that, to avoid wasting memory.

Indeed, a smaller buffer size will also work, provided the interacting goroutines don't deadlock for other reasons. If you use a smaller buffer size, you are effectively getting queueing via the run queue of the Go scheduler, part of the Go runtime. (Quite possibly, a buffer size of zero would work in many circumstances.)

Channels allow many reader goroutines and many writer goroutines. The concurrency of their access is handled automatically by the Go runtime. All writes into the channel are interleaved so as to be a sequential stream. All the reads are also interleaved to extract values sequentially in the same order they were enqueued. Here's further discussion on this topic.
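The many-writers behavior described above can be sketched as follows (the buffer size and counts here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// produce starts several writer goroutines that all send into the
// same channel and waits for them to finish. The Go runtime
// serializes the concurrent sends into one sequential stream.
func produce(queue chan interface{}, writers, perWriter int) {
	var wg sync.WaitGroup
	for i := 0; i < writers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := 0; j < perWriter; j++ {
				queue <- id
			}
		}(i)
	}
	wg.Wait()
}

func main() {
	// A buffer of 100 is illustrative; tune it to your workload.
	queue := make(chan interface{}, 100)
	produce(queue, 3, 5)
	close(queue) // no more writers; readers can drain with range

	count := 0
	for range queue {
		count++
	}
	fmt.Println(count) // 15
}
```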

Sheshnath
Rick-777
    This should be the proper solution. – OneOfOne Mar 04 '15 at 01:22
  • Would this be considered the proper approach to threadsafe queue? – majidarif Jun 02 '17 at 20:42
    Yes, channels are the idiomatic solution for queues of messages. – Rick-777 Jun 05 '17 at 10:50
  • You would only need to implement a more complex alternative if disk-based persistence is needed - for example for durability over some app or server restart. Or possibly you need huge ('infinite') disk-based queues. In both these cases, you should design an api with channel ends for insertion and extraction operations so that the complex queue could be a drop-in replacement for a Go channel. – Rick-777 Jun 05 '17 at 10:54
4

The re-slicing is not an issue here. It also makes no difference whether you have a thread-safe or unsafe version, as this is pretty much how slice-based resizing is meant to be done.

You can alleviate some of the re-sizing overhead by initializing the queue with a capacity:

func NewQueue(capacity int) *Queue {
    return &Queue{
        Items: make([]interface{}, 0, capacity),
    }
}

This will initialize the queue. It can still grow beyond the given capacity, but you will not incur any unnecessary copying or re-allocation until that capacity is reached.

What may potentially cause problems with many concurrent accesses, is the mutex lock. At some point, you will be spending more time waiting for locks to be released than you are actually doing work. This is a general problem with lock contention and can be solved by implementing the queue as a lock-free data structure.

There are a few third-party packages out there which provide lock free implementations of basic data structures.

Whether this will actually be useful to you can only be determined with some benchmarking. Lock-free structures can have a higher base cost, but they scale much better when you get many concurrent users. There is a cutoff point at which mutex locks become more expensive than the lock-free approach.

jimt
    slicing like that is basically a memory leak, no? that slice is backed by an actual array, and a sub-slice is the same array. So as you consume you are just removing from view, not from memory. Maybe append will discard old items when it has to grow array. But if you use a large capacity up front, it's basically a memory leak, I believe. This may be ok depending on how long app runs and how much memory you have – David Budworth Mar 02 '15 at 12:26
    @DavidBudworth, as long as `q.Items[0] = nil` is added (as described in a comment on the question) then there is no leak. `append` will grow the underlying array as required and when doing so will only copy over the currently used items. E.g. if the slice is currently referencing the last n items of much larger array, when it reaches capacity then `append` will allocate an array of size n+1+someAmount and copy the n items before adding the new one. – Dave C Mar 02 '15 at 22:37
    Also, for that reason using an initial capacity that is much larger then the typical length of the queue won't make any difference after the first realloc. E.g. with an initial capacity of 100 and a queue that normally has ~5 items in it, after the 101st `Push` the underlying array will likely be just 8 items long. – Dave C Mar 02 '15 at 22:39
1

I think the best way to approach this is to use a linked list; there is already one available for you in the standard library's container/list package.
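A queue on top of container/list might look like this (a sketch, wrapped in the same mutex pattern as the question's code):

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// ListQueue is a FIFO queue backed by the standard library's
// doubly linked list, so Pop never re-slices or copies.
type ListQueue struct {
	mu    sync.Mutex
	items *list.List
}

func NewListQueue() *ListQueue {
	return &ListQueue{items: list.New()}
}

func (q *ListQueue) Push(item interface{}) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items.PushBack(item)
}

// Pop removes and returns the front item, or nil when empty.
func (q *ListQueue) Pop() interface{} {
	q.mu.Lock()
	defer q.mu.Unlock()
	front := q.items.Front()
	if front == nil {
		return nil
	}
	q.items.Remove(front)
	return front.Value
}

func main() {
	q := NewListQueue()
	q.Push("a")
	q.Push("b")
	fmt.Println(q.Pop(), q.Pop()) // a b
}
```

The trade-off is per-item allocation of list elements versus the slice version's occasional realloc-and-copy; as the comments below note, benchmarks would have to decide.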

kruczy
    I'd only use a linked list if it was commonly required to remove arbitrary items or insert items into an arbitrary position. For a simple FIFO queue either a channel or a slice is simpler, easier, etc. – Dave C Mar 02 '15 at 22:41
  • well I think a list is a better choice though probably running benchmarks might answer that better – kruczy Mar 03 '15 at 10:34
0

The answer marked correct says re-slicing is not an issue. That is not quite right: it can be an issue. What Dave is suggesting is right; we should set that element to nil before re-slicing. Read more about slices here: https://go.dev/blog/slices-intro
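The fix amounts to one extra line in the question's Pop (a sketch of the corrected method):

```go
package main

import (
	"fmt"
	"sync"
)

type Queue struct {
	sync.Mutex
	Items []interface{}
}

func (q *Queue) Push(item interface{}) {
	q.Lock()
	defer q.Unlock()
	q.Items = append(q.Items, item)
}

func (q *Queue) Pop() interface{} {
	q.Lock()
	defer q.Unlock()
	if len(q.Items) == 0 {
		return nil
	}
	item := q.Items[0]
	q.Items[0] = nil // drop the reference so the GC can reclaim the item
	q.Items = q.Items[1:]
	return item
}

func main() {
	q := &Queue{}
	q.Push("x")
	fmt.Println(q.Pop()) // x
}
```

Without the nil assignment, the backing array keeps referencing popped items until the next time append reallocates it.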