I'm in the process of calibrating a camera, using Python together with the OpenCV library. I'm using the Waveshare IMX219 camera on a Jetson Nano.

I tried to capture images with the camera for calibration using the `VideoCapture` function, passing camera index 0 as a parameter (a minimal sketch of the call is shown after the warnings below). That's when the following warnings appear:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

The camera is correctly connected and is recognized by the device.
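
A minimal sketch of the call that produces these warnings:

import cv2

# Opening the camera by index; this call produces the GStreamer warnings above
cap = cv2.VideoCapture(0)
ret, frame = cap.read()  # ret is False, no frame is delivered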

  • This is not an error but a warning. You may try running your camera with a different VideoCapture backend instead of `gstreamer`, such as `v4l` or `ffmpeg`. [Here](https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html) is a list of the flags you can try. – Yunus Temurlenk Feb 15 '23 at 19:00
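
For instance, selecting a backend explicitly would look like this (a minimal sketch using the documented backend flags; whether the resulting raw frames are usable is a separate question):

import cv2

# Force the V4L2 backend instead of GStreamer; cv2.CAP_FFMPEG is another
# option from the linked videoio flags list
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
print('opened:', cap.isOpened())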

1 Answer

Passing index 0 uses either the V4L backend or the v4l2src plugin of the GStreamer backend (your case is the latter). The problem is that the IMX219 is a Bayer RG10 sensor; its raw video is not suitable for OpenCV, which expects BGR format for most algorithms (other formats are available, though, depending on your OpenCV version).
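
If a backend did hand you raw Bayer frames, OpenCV could in principle debayer them in software, though without the ISP tuning described below; a minimal sketch:

import cv2
import numpy as np

# Illustrative only: software-debayer a raw Bayer frame. The IMX219's
# 10-bit samples would arrive packed into 16-bit words; a placeholder
# array stands in for a real capture here.
raw = np.zeros((720, 1280), dtype=np.uint16)  # placeholder Bayer frame
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)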

On a Jetson, the usual path is Argus, which debayers and auto-tunes gains, exposure, white balance, etc. with the ISP, and provides NV12 frames in NVMM memory, where GStreamer can handle them through the nvarguscamerasrc plugin.

To provide BGR frames to an OpenCV application, you can first use the Jetson's VIC hardware to convert into BGRx format with nvvidconv, outputting into system memory, and then use videoconvert to get BGR. So the pipeline would be:

import cv2

# GStreamer pipeline: Argus capture (NV12 in NVMM) -> nvvidconv to BGRx in
# system memory -> videoconvert to BGR -> appsink for OpenCV
cam_pipeline_str = 'nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
cap = cv2.VideoCapture(cam_pipeline_str, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError('Failed to open camera pipeline')

# if you get here, loop reading frames and do what you want with them...
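
A minimal read loop might look like this (the cv2.imshow display is just an assumption for illustration; replace it with your calibration logic):

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('IMX219', frame)  # frames are BGR, ready for cv2.calibrateCamera etc.
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()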

Also note that the default device-tree/driver support in some Jetson L4T versions targets the RPi v2 IMX219 camera. I have no experience with Waveshare cameras, but vendors should provide an SDK for their products; check with your camera vendor for your L4T release.

SeB