The requirement is: items to be processed are stored in a global queue. Several handler threads take items from the global queue and process them. A producer thread adds items to the global queue continuously and rapidly (much faster than all handler threads can process them). The handler threads are compute-intensive, so the best performance is when the CPU is fully used. To keep memory usage from growing too much, I use one more countKeeping thread that keeps the queue's length roughly within a specific range, from BOTTOM to TOP.
I use a ManualResetEvent to handle the 'can add to queue' status change. The global queue is
Queue<Object> mQueue = new Queue<Object>();
ManualResetEvent waitingKeeper = new ManualResetEvent(false);
Handler thread is
void Handle()
{
    while(true)
    {
        Object item = null;
        lock(mQueue)
        {
            if(mQueue.Count > 0)
                item = mQueue.Dequeue();
        }
        if(item == null)
            continue; // queue was empty, try again
        // deal with item, compute-intensive
    }
}
The producer thread calls the AddToQueue() function to add items to mQueue.
void AddToQueue(Object item)
{
    // blocks until the countKeeping thread signals that adding is allowed
    waitingKeeper.WaitOne();
    lock(mQueue)
    {
        mQueue.Enqueue(item);
    }
}
The countKeeping thread is mainly like the following:
void KeepQueueingCount()
{
    while(true)
    {
        // no 'lock(mQueue)' here, because I don't need the exact count of the queue;
        // I just need to keep the queue from using too much memory
        if(mQueue.Count < BOTTOM)
            waitingKeeper.Set();
        else if(mQueue.Count > TOP)
            waitingKeeper.Reset();
        Thread.Sleep(1000);
    }
}
Here is where the problem comes in.
When I set BOTTOM and TOP to smaller numbers, like BOTTOM = 20, TOP = 100, it works well on a quad-core CPU (CPU utilization is high), but not so well on a single-core CPU (CPU utilization fluctuates quite a lot).
When I set BOTTOM and TOP to larger numbers, like BOTTOM = 100, TOP = 300, it works well on a single-core CPU, but not so well on a quad-core CPU.
In both environments and with both settings, memory usage is not excessive (around 50 MB at most).
Logically, larger BOTTOM and TOP values should help performance (as long as memory usage stays low), because there are more items available for the handler threads. But that does not seem to be what happens.
I tried several ways to find the cause of the problem, and I found that when I use lock(mQueue) in the keeping thread, it works well under both of the CPU conditions above.
The new countKeeping thread is mainly like this:
void KeepQueueingCount()
{
    bool canAdd = false;
    while(true)
    {
        lock(mQueue)
        {
            if(mQueue.Count < BOTTOM)
                canAdd = true;
            else if(mQueue.Count > TOP)
                canAdd = false;
        }
        if(canAdd)
            waitingKeeper.Set();
        else
            waitingKeeper.Reset();
        // I also made a change here:
        // when the 'can add' status changes, sleep longer;
        // when it does not change, sleep less.
        // But this is not the main reason.
        Thread.Sleep(1000);
    }
}
So my questions are:

- When I don't use lock in the countKeeping thread, why does the range of the global queue affect performance (here, performance mainly means CPU utilization) differently under different CPU conditions?
- When I do use lock in the countKeeping thread, performance is good under both conditions. What does lock really do that affects this?
- Is there a better way to change the 'can add' status than using ManualResetEvent?
- Is there a better model that fits my requirement? Or is there a better way to keep memory usage under control when the producer thread works continuously and rapidly? (A rough sketch of one alternative I have been considering is below.)
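For the last two questions, this is roughly the alternative I have been considering (a sketch only, not tested): a bounded BlockingCollection<Object> from System.Collections.Concurrent. If I understand it correctly, Add() blocks when the bound is reached, so neither the countKeeping thread nor the ManualResetEvent would be needed.

// sketch only, assuming 'using System.Collections.Concurrent;'
// bounded to TOP items, so Add() blocks the producer automatically
// while the handler threads catch up
BlockingCollection<Object> mItems = new BlockingCollection<Object>(TOP);

void AddToQueue(Object item)
{
    // blocks while mItems already holds TOP items
    mItems.Add(item);
}

void Handle()
{
    // blocks when the collection is empty; ends after the producer
    // calls mItems.CompleteAdding() and the remaining items are handled
    foreach(Object item in mItems.GetConsumingEnumerable())
    {
        // deal with item, compute-intensive
    }
}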
---UPDATE---
The producer thread's main part is as follows. STEP is the number of items fetched in each query from the database. Queries run successively and in order until all items have been queried.
void Produce()
{
    while(true)
    {
        // query STEP items from the database
        var itemList = QuerySTEPFromDB();
        if(itemList.Count == 0)
        {
            // stop all handler threads:
            // wait for the handler threads to finish all items left in the queue,
            // then all handler threads exit
            break;
        }
        else
        {
            foreach(var item in itemList)
                AddToQueue(item);
        }
    }
}