I read data from a file and process it in a separate thread. I am trying to parallelize the reading and the processing across two threads, coordinating them with a condition variable and infinite loops, but I keep ending up with a deadlock.
#include <condition_variable>
#include <cstring>
#include <fstream>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

mutex m;
condition_variable cv;
bool ready = false;
ifstream inFile;

char totBuf[300000];
unsigned long totLen = 0;   // bytes read into totBuf so far
unsigned long lenProc = 0;  // bytes processed so far

void procData()
{
    while(lenProc < totLen)
    {
        //process data from totBuf
        //increment lenProc
    }
    ready = false;
    if(lenProc >= totLen && totLen > 100000)
    {
        cv.notify_one();
        unique_lock<mutex> lk(m);
        cv.wait(lk, []{return totLen > 0 && lenProc < totLen;});
    }
}
void readData()
{
    char oBf[100000];       // chunk buffer
    unsigned long len = 0;  // length of the current chunk
    //declared so that we notify procData only once
    bool firstNot = true;
    while(true)
    {
        //read the next chunk:
        //file.read(len);
        //file.read(oBf, len);
        memcpy(&totBuf[totLen], oBf, len);
        totLen += len;
        if(totLen > 10000)
            cv.notify_one();
        if(totLen > 100000)
        {
            cv.notify_one();
            unique_lock<mutex> lk(m);
            cv.wait(lk, []{return !ready;});
            totLen = 0;
            firstNot = true;
            lenProc = 0;
        }
    }
}
int main(int argc, char* argv[])
{
    inFile.open(argv[1], ios::in|ios::binary);
    thread prod(readData);
    thread cons(procData);
    prod.join();
    ready = true;
    cout << "prod joined\n";
    cv.notify_all();
    cons.join();
    cout << "cons joined\n";
    inFile.close();
    return 0;
}
Some explanations, in case it looks weird. Although totBuf is declared with a size of 300k, I reset totLen to 0 once it exceeds 100k, because I read data from the file in chunks and a new chunk can be large; resetting at 100k means the next chunks are written at the beginning of totBuf again. I notify the consumer as soon as the size reaches 10k so that reading and processing overlap as much as possible. I realize this could be a really bad design and I am willing to redesign from scratch. What I ultimately want is a totally lock-free implementation, but I am new to threads, so this is a stop-gap / the best I could do right now.
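For reference, this is roughly the kind of lock-free single-producer/single-consumer queue I would eventually like to move to. It is only a minimal sketch of my own, built on two std::atomic indices over a fixed-size ring; the name SpscQueue and the push/pop interface are mine, not from any library:

#include <atomic>
#include <cstddef>

// Minimal single-producer/single-consumer ring buffer.
// Exactly one thread may call push() and exactly one other thread may call pop().
template<typename T, std::size_t N>
class SpscQueue
{
    T buf[N];
    std::atomic<std::size_t> head{0}; // next slot to write, advanced only by the producer
    std::atomic<std::size_t> tail{0}; // next slot to read, advanced only by the consumer
public:
    bool push(const T& v)
    {
        std::size_t h = head.load(std::memory_order_relaxed);
        std::size_t next = (h + 1) % N;
        if(next == tail.load(std::memory_order_acquire))
            return false;             // queue is full; caller retries
        buf[h] = v;
        head.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& v)
    {
        std::size_t t = tail.load(std::memory_order_relaxed);
        if(t == head.load(std::memory_order_acquire))
            return false;             // queue is empty; caller retries
        v = buf[t];
        tail.store((t + 1) % N, std::memory_order_release);
        return true;
    }
};

The idea would be that the reader pushes chunks (or byte ranges) and the processor pops them, so neither thread ever blocks on a mutex; one slot is sacrificed to tell full apart from empty.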