
Challenge: I want to run three USB cameras at 1600x1300 @ 60 fps on a Jetson Xavier NX using Python. There are several ways of doing this, but my approach has been:

Main -> Camera1 Thread -> Memory 1 -> Visualization thread 1.

The main starts up three camera threads and three visualization threads. The problem is the latency. I store the images from camera 1 in Memory 1, which is shared with the visualization thread. There are thread locks on both the memory and the cv2.imshow call in the visualization thread.
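A minimal sketch of the pattern (the names FrameStore and camera_worker are placeholders, not my actual code):

    import threading
    import cv2

    class FrameStore:
        # Shared "Memory": holds only the latest frame, guarded by a lock.
        def __init__(self):
            self._lock = threading.Lock()
            self._frame = None

        def put(self, frame):
            with self._lock:
                self._frame = frame

        def get(self):
            with self._lock:
                return None if self._frame is None else self._frame.copy()

    def camera_worker(gst_string, store, stop_event):
        # Camera thread: grab frames via the GStreamer backend and publish the latest one.
        cap = cv2.VideoCapture(gst_string, cv2.CAP_GSTREAMER)
        while not stop_event.is_set():
            ok, frame = cap.read()
            if ok:
                store.put(frame)
        cap.release()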

  • Is there a way of speeding up the camera visualization? I get about 16 fps. Is it better to have one visualization thread showing all three images in one view, or three separate threads as I have now?

The input capture is:

cv2.VideoCapture(Gstreamer_string, cv2.CAP_GSTREAMER)

The output to disk is handled inside the GStreamer string by branching the stream to a multifilesink and an appsink. The file sink writes all three streams at 60 fps. It's just the visualization on screen that is extremely slow.
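My exact string differs, but the shape is roughly this (device path, caps and file names below are illustrative, not my real values):

    import cv2

    gstreamer_string = (
        "v4l2src device=/dev/video0 ! "
        "image/jpeg,width=1600,height=1300,framerate=60/1 ! tee name=t "
        "t. ! queue ! multifilesink location=cam0_%06d.jpg "
        "t. ! queue ! jpegdec ! videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )
    cap = cv2.VideoCapture(gstreamer_string, cv2.CAP_GSTREAMER)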

I have also tried visualizing directly after the capture in the camera thread, without the shared memory, with not much difference. I tend to think that the thread lock around imshow, which I need in order not to crash/freeze the GUI, is the reason. Perhaps combining all three into one view is faster.
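If I go the single-view route, I imagine something like this (builds on the FrameStore sketch above; one imshow/waitKey pair in a single thread, no per-window locks):

    import numpy as np
    import cv2

    def show_combined(stores, stop_event):
        # Poll each store, stack the latest frames side by side, show once per loop.
        while not stop_event.is_set():
            frames = [s.get() for s in stores]
            if all(f is not None for f in frames):
                cv2.imshow("cameras", np.hstack(frames))
            if cv2.waitKey(1) & 0xFF == ord('q'):
                stop_event.set()
        cv2.destroyAllWindows()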

Magnus_G

1 Answer


It is hard to guess without code, but possible bottlenecks may be:

  1. cv2.imshow is not very efficient on Jetsons. You may use an OpenCV VideoWriter with the GStreamer backend pointing to a display sink such as nveglglessink (see the sketch after this list).

  2. Your disk storage may not be able to store 3 streams at that resolution at 60 fps. Are you using an NVMe SSD? An SD card may be slow depending on the model. Does lowering the framerate help? Are you encoding, or trying to save raw video?

  3. OpenCV may also add some overhead. If OpenCV is not required for processing, a pure GStreamer pipeline may be able to display and record (if point 2 is not the issue).
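For point 1, a rough sketch of what I have in mind (caps and sink details are illustrative and may need adapting to your JetPack version):

    import cv2

    w, h, fps = 1600, 1300, 60
    # VideoWriter with GStreamer backend pushing BGR frames to a display sink,
    # replacing the imshow/waitKey pair. nvegltransform/nveglglessink are the
    # EGL display elements on Jetson.
    gst_out = ("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! "
               "video/x-raw,format=BGRx ! nvvidconv ! nvegltransform ! "
               "nveglglessink sync=false")
    writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (w, h))
    if not writer.isOpened():
        raise RuntimeError("failed to open display pipeline")

    # in your loop, after any processing:
    #     writer.write(frame)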

SeB
  • 1. I can't use an imagesink or similar since I need to add AI processing before showing on screen. I will try to use the GPU as much as possible; focusing on this next. 2. I use an NVMe SSD and it normally works like a charm for all three. Sometimes I get intermittent errors. I think it is the readkey() in main that collides with imshow somehow. 3. I would like to use something better than imshow, but a pure GStreamer pipeline is not working :( – Magnus_G Sep 12 '22 at 08:29
  • A VideoWriter with the GStreamer backend would just replace the imshow/waitKey pair. imshow prepares work for a drawing thread different from the application thread, and waitKey yields the scheduler so that the drawing thread can be scheduled. But this thread running on arm64 may not be that fast. A VideoWriter with a GStreamer pipeline to xvimagesink (or EGL using NVMM memory) would be similar but faster. If you intend to do GPU processing, you may do that in the main loop before displaying. Maybe DeepStream would better fit your case. – SeB Sep 12 '22 at 18:05
  • Thanks for the reply. I use GStreamer for writing all images to disk, but then I have a pipeline to an appsink. This is where I add AI processing and finally write the results. Then I save it to a property which contains the index and the image. This is picked up by my ImageToScreen thread that shows it. It shows all three cameras baked into one image; depending on the number of cameras (max 3) I get a wider image. I will add the code for you and others to look at and review. It can be a good starting point for someone else. – Magnus_G Sep 19 '22 at 08:15