
I have an OpenCV program that works like this:

VideoCapture cap(0);
Mat frame;
while(true) {
  cap >> frame;
  myprocess(frame);
}

The problem is that if myprocess takes longer than the camera's frame interval, the captured frames get delayed and are no longer synchronized with real time.

So I think that to solve this problem, the camera streaming and myprocess should run in parallel: one thread does the IO, the other does the CPU computation. When the camera finishes capturing a frame, it sends it to the worker thread for processing.

Is this idea right? Is there a better strategy to solve this problem?

Demo:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;
    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex){
        while (true) { // keep producing new frames
            cv::Mat tmp;
            cap >> tmp;
            mutex.lock();
            buffer = tmp.clone(); // copy the value
            mutex.unlock();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();

    while (cv::waitKey(20)) { // process in the main thread
        mutex.lock();
        cv::Mat tmp = buffer.clone(); // copy the value
        mutex.unlock();
        if(!tmp.data)
            std::cout<<"null"<<std::endl;
        else {
            std::cout<<"not null"<<std::endl;
            cv::imshow("test", tmp);
        }

    }
    return 0;
}
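
For completeness, the first demo could also use a std::condition_variable so that the main thread sleeps until a new frame arrives instead of polling the shared buffer every 20 ms. This is only a rough, untested sketch (the frame_ready flag is just an illustrative name, and shutdown handling is omitted as in the demos above):

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <thread>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat buffer;
    std::mutex mutex;
    std::condition_variable cond;
    bool frame_ready = false;

    std::thread producer([&]() {
        cv::Mat tmp;
        while (true) { // keep producing new frames
            cap >> tmp;
            {
                std::lock_guard<std::mutex> lock(mutex);
                buffer = tmp.clone(); // always overwrite with the newest frame
                frame_ready = true;
            }
            cond.notify_one(); // wake the consumer
        }
    });
    producer.detach();

    while (true) { // process in the main thread
        cv::Mat tmp;
        {
            std::unique_lock<std::mutex> lock(mutex);
            cond.wait(lock, [&]{ return frame_ready; });
            tmp = buffer.clone(); // copy the newest frame out
            frame_ready = false;  // mark it as consumed
        }
        if (!tmp.empty())
            cv::imshow("test", tmp);
        if (cv::waitKey(1) >= 0) break;
    }
    return 0;
}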

Alternatively, use a thread that keeps grabbing frames to clear the camera's buffer:

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;
    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex){
        while (true) { // keep grabbing to drain the camera's buffer
            cap.grab();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();
    int i = 0;
    while (true) { // process in the main thread
        cv::Mat tmp;
        cap.retrieve(tmp);
        if(!tmp.data)
            std::cout<<"null"<<i++<<std::endl;
        else {
            cv::imshow("test", tmp);
        }
        if(cv::waitKey(30) >= 0) break;
    }
    return 0;
}

Based on https://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html#videocapture-grab I thought the second demo should work, but it doesn't...

JustWe
  • Yes, the idea is right. – Nuzhny Mar 21 '19 at 07:12
  • @Nuzhny I added two demo codes; could you please have a look at which one is correct. – JustWe Mar 25 '19 at 05:00
  • I'm not sure that these ways will work. You need to order it so that frame N is captured while frame N-1 is processed. – Nuzhny Mar 25 '19 at 20:31
  • @Nuzhny I tested both: the first works, but the second doesn't. I only need the newest frame, so I didn't use a queue of N frames. I referred to https://stackoverflow.com/questions/30032063/opencv-videocapture-lag-due-to-the-capture-buffer and thought `grab` should work the same as `read`, but it didn't. – JustWe Mar 26 '19 at 04:55
  • On my Ubuntu the second test also works, but I'm not sure it works correctly without frame drops. And second: cv::VideoCapture is not a thread-safe class. – Nuzhny Mar 26 '19 at 10:30

1 Answer


In a project with multi-target tracking I used 2 buffers for frames (cv::Mat frames[2]) and 2 threads:

  1. One thread for capturing the next frame and detecting objects.

  2. A second thread for tracking the detected objects and drawing the result on the frame.

I used an index in [0, 1] to swap the buffers, and this index was protected with a mutex. Two condition variables were used to signal when each thread finished its work.

CaptureAndDetect works with the frames[capture_ind] buffer while Tracking works with the previous frames[1 - capture_ind] buffer. At the next step the buffers are switched: capture_ind = 1 - capture_ind.

You can see this project here: Multitarget-tracker.
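
Roughly, the handshake can be sketched like this (a simplified, illustrative example with made-up names, not the actual Multitarget-tracker code; detection and tracking are reduced to comments):

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <thread>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frames[2];
    int capture_ind = 0;                  // buffer the capture thread fills in this step
    std::mutex mutex;
    std::condition_variable request_cond; // "please capture the next frame"
    std::condition_variable done_cond;    // "capture and detection finished"
    bool capture_requested = true;        // start with one capture request
    bool capture_done = false;

    // Thread 1: capture the next frame and run detection on it.
    std::thread capture_and_detect([&]() {
        while (true) {
            int ind;
            {
                std::unique_lock<std::mutex> lock(mutex);
                request_cond.wait(lock, [&]{ return capture_requested; });
                capture_requested = false;
                ind = capture_ind;        // the index is fixed for this step
            }
            cap >> frames[ind];
            // ... object detection on frames[ind] would go here ...
            {
                std::lock_guard<std::mutex> lock(mutex);
                capture_done = true;
            }
            done_cond.notify_one();
        }
    });
    capture_and_detect.detach();          // shutdown handling omitted for brevity

    // Thread 2 (main): track and draw on the previously captured buffer.
    while (true) {
        {
            std::unique_lock<std::mutex> lock(mutex);
            done_cond.wait(lock, [&]{ return capture_done; });
            capture_done = false;
            capture_ind = 1 - capture_ind; // switch the buffers
            capture_requested = true;      // capture frame N while frame N-1 is processed
        }
        request_cond.notify_one();

        cv::Mat& frame = frames[1 - capture_ind]; // the buffer filled in the previous step
        // ... tracking and drawing on "frame" would go here, in parallel
        //     with the capture of the next frame ...
        if (!frame.empty())
            cv::imshow("tracking", frame);
        if (cv::waitKey(1) >= 0) break;
    }
    return 0;
}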

Nuzhny