
I have a custom USB camera with a custom driver, on a custom board with an Nvidia Jetson TX2, that is not detected by the OpenPose examples. I access the data using a custom GStreamer source. I currently pull frames into a cv::Mat, color-convert them, and feed them into OpenPose one image at a time. It works fine, but 30-40% slower than a comparable video stream from a plug-and-play camera. I would like to explore features like tracking that are available for streams, since I'm trying to maximize FPS. I believe the stream feed is faster due to better (continuous) use of the GPU.

In particular, the speedup would come at the expense of confidence, which would be addressed later: one frame goes through pose estimation, and the 3-4 subsequent frames just track the detected person with decreasing confidence levels. I tried that with a plug-and-play camera and the OpenPose example, and the results were somewhat satisfactory.
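The interleaving I have in mind can be sketched as a simple per-frame scheduler (a sketch only: the interval and decay factor are assumptions, and the actual tracker, e.g. OpenPose's experimental `--tracking` mode or an OpenCV tracker, would consume the "track" frames):

```python
def pose_or_track(frame_idx, pose_interval=4, decay=0.85):
    """Decide whether frame `frame_idx` gets full pose estimation or tracking.

    Returns ("pose", 1.0) on keyframes and ("track", c) otherwise, where the
    confidence c decays with the number of frames since the last keyframe.
    The interval and decay values here are arbitrary placeholders.
    """
    k = frame_idx % pose_interval
    if k == 0:
        return "pose", 1.0
    return "track", decay ** k
```

So with `pose_interval=4`, frames 0, 4, 8, … pay for full pose estimation, and the frames in between only pay for tracking, at progressively lower confidence.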

The point where I stumbled is that I can put the video stream into a cv::VideoCapture, but I do not know how to hand the frames from cv::VideoCapture to OpenPose for processing.
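Roughly, this is what I am aiming for (a hedged sketch, not working code: the GStreamer string is a placeholder for my custom source, and the OpenPose calls are only indicated in comments, since wiring them in is exactly what I don't know how to do):

```python
def gst_pipeline(width=1920, height=1080, fps=30):
    # Placeholder pipeline; my real custom source element would replace v4l2src.
    return (
        f"v4l2src device=/dev/video0 ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )

def capture_loop():
    import cv2  # imported here so the pipeline helper works without OpenCV installed
    # CAP_GSTREAMER makes OpenCV parse the string as a GStreamer pipeline.
    cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()  # frame is a BGR numpy array (the cv::Mat equivalent)
        if not ok:
            break
        # --> here is the gap: how do I hand `frame` to OpenPose?
        # With the Python API it would presumably look something like:
        #   datum = op.Datum(); datum.cvInputData = frame
        #   opWrapper.emplaceAndPop(op.VectorDatum([datum]))
    cap.release()
```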

If there is a better way to do it, I am happy to try different things, but the bottom line is that the custom camera stays (I know ;/). Solutions to the issue described, or entirely different ideas, are welcome.

Things I already tried:

  • Lowering the resolution of the camera (the camera crops below a certain resolution instead of binning, so I can't really go below 1920x1080; it's a 40+ megapixel video camera, by the way)
  • Using CUDA to shrink the image before feeding it to OpenPose (the shrink + pose-estimation time was virtually identical to pose estimation on the original image)
  • Since the camera view is static, checking for changes between frames, cropping the image down to the area that changed, and running pose estimation on that section (10% speedup, high risk of missing something)
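For reference, the change-detection crop from the last bullet is essentially this (a minimal numpy sketch; the threshold and padding values are arbitrary assumptions):

```python
import numpy as np

def changed_bbox(prev, curr, thresh=25, pad=16):
    """Bounding box (x0, y0, x1, y1) of pixels that changed between two
    HxWx3 uint8 frames, padded by `pad` pixels, or None if nothing changed.
    `thresh` and `pad` are placeholder values, tuned per scene in practice."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).max(axis=-1)
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None  # static frame: could skip pose estimation entirely
    h, w = diff.shape
    return (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + 1 + pad, w), min(int(ys.max()) + 1 + pad, h))
```

The crop `curr[y0:y1, x0:x1]` then goes to pose estimation instead of the full frame, which is where the ~10% speedup, and the risk of cropping out a person, comes from.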
  • You may try asynchronous mode in OpenPose if you haven't. The idea is to have a separate worker to read images. Please check an example [here](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/3915b922eb345de9d788cb46541d42641aa014c4/examples/tutorial_api_cpp/10_asynchronous_custom_input.cpp) or an implementation [here](https://github.com/ravijo/ros_openpose). – ravi May 07 '20 at 06:06
  • I have a separate worker to read images, a separate worker to preprocess the images, and a separate worker to run OpenPose. I distributed the auxiliary workers across 6 CPUs and OpenPose has the GPU to itself. Still, I don't know how to enable object tracking on a frame-by-frame basis using OpenPose. – Marcus Gee May 10 '20 at 19:35

0 Answers