Hardware: Computer -> Jetson Nano 2GB B01; Camera -> Arducam IMX519 (MIPI/CSI)
Software: OS -> Ubuntu 16; IDE -> Qt 5
Libraries: i2c, gstvideo, gstreamer-1.0
The testing I'm performing is on a barebones project so that I can prove the concept and have it wrapped in a class I can move into a larger project I'm working on. I figured keeping it modular would let me rule out other interfering issues. So please keep in mind that my code is rough and will get refactored before I export the class.
gstcontroled.cpp

```cpp
GstElement *m_pipeline;
GstBus *m_bus;
GstStateChangeReturn m_ret;
ostringstream m_launch_stream;
GError *m_error = nullptr;
static string m_launch_string;

gstcontroled::gstcontroled()
{
    this->m_launch_stream
        << "nvarguscamerasrc sensor-id=0 ! "
        << "video/x-raw(memory:NVMM), format=NV12, width=(int)1920, height=(int)1080, framerate=59/1 ! "
        << "nvvidconv ! "
        << "video/x-raw(memory:NVMM),width=1280,height=720,framerate=59/1 ! "
        << "nvoverlaysink enable-last-sample=true name=sink";

    m_launch_string = this->m_launch_stream.str();

    this->m_pipeline = gst_parse_launch(m_launch_string.c_str(), &this->m_error);
    if (this->m_pipeline == nullptr) {
        g_print("Failed to parse launch: %s\n", this->m_error->message);
        g_error_free(this->m_error);
        return;
    }
    if (this->m_error) {
        // Pipeline was built despite a recoverable parse warning; free it.
        g_error_free(this->m_error);
        this->m_error = nullptr;
    }

    this->FocalPoint(450);

    this->m_ret = gst_element_set_state(this->m_pipeline, GST_STATE_PLAYING);
    if (this->m_ret == GST_STATE_CHANGE_FAILURE) {
        g_printerr("Unable to set the pipeline to the playing state.\n");
        gst_object_unref(this->m_pipeline);
        return;
    }

    this->m_bus = gst_element_get_bus(this->m_pipeline);
    guint bus_watch_id = gst_bus_add_watch(this->m_bus, this->my_bus_callback, NULL);
}

void gstcontroled::Camera_CaptureSnapshot()
{
    g_print("started capture");

    GstCaps *caps;
    GstSample *from_sample, *to_sample;
    GError *err = NULL;
    GstBuffer *buf;
    GstMapInfo map_info;

    GstElement *v_sinkbin = gst_bin_get_by_name(GST_BIN(this->m_pipeline), "sink");
    if (v_sinkbin == NULL) {
        g_print("Sink was empty");
        return;
    }

    g_object_get(v_sinkbin, "last-sample", &from_sample, NULL);
    gst_object_unref(v_sinkbin);
    if (from_sample == NULL) {
        g_print("Error getting last sample from sink");
        return;
    }

    caps = gst_caps_from_string("image/png");
    to_sample = gst_video_convert_sample(from_sample, caps, GST_CLOCK_TIME_NONE, &err);
    gst_caps_unref(caps);
    gst_sample_unref(from_sample);
    if (to_sample == NULL) {
        g_print("Error converting frame: %s", err ? err->message : "unknown");
        g_clear_error(&err);
        return;
    }

    buf = gst_sample_get_buffer(to_sample);
    if (gst_buffer_map(buf, &map_info, GST_MAP_READ)) {
        const gchar *PIC_LOCATION = "/home/usr/Pictures/picture110222.png";
        // Write the whole mapped buffer, not a hard-coded byte count.
        if (!g_file_set_contents(PIC_LOCATION, (const char *) map_info.data,
                                 map_info.size, &err)) {
            g_print("Could not save thumbnail: %s", err->message);
            g_error_free(err);
        } else {
            g_print("Image saved");
        }
        gst_buffer_unmap(buf, &map_info);
    }
    gst_sample_unref(to_sample);
}
```
The functionality it currently has: The application opens the main window as well as the overlay stream. Because I am coding over remote desktop, I can see the form on the RDP window and the video stream on the small touch screen connected to the Nano. I am able to adjust the motorized focus from the form interface by changing the value and clicking the "Focus" button.
What is not working: When you click the "Capture" button, the function fetches the sink element from the pipeline and successfully fills `GstSample *from_sample` from its `last-sample` property. It then fails with "Internal data stream error." while executing `to_sample = gst_video_convert_sample (from_sample, caps, GST_CLOCK_TIME_NONE, &err);`.
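One thing worth noting: in this pipeline the sink's `last-sample` buffer still carries `video/x-raw(memory:NVMM)` caps, i.e. it lives in NVMM device memory rather than system memory, which may be what the CPU-side conversion chokes on. A minimal sketch of an alternative launch string that encodes snapshots in a dedicated branch instead of converting the last sample (the tee/appsink layout, the `jpegenc` choice, and the branch caps are my assumptions, not a configuration verified on the Nano):

```cpp
#include <sstream>
#include <string>

// Hypothetical snapshot pipeline: a tee feeds the overlay display exactly as
// before, while a second branch uses nvvidconv to copy NVMM buffers into
// system memory and JPEG-encodes them into an appsink, so no NVMM last-sample
// ever has to be converted on the CPU. Branch layout is an assumption.
std::string build_snapshot_launch_string()
{
    std::ostringstream s;
    s << "nvarguscamerasrc sensor-id=0 ! "
      << "video/x-raw(memory:NVMM), format=NV12, width=(int)1920, height=(int)1080, framerate=59/1 ! "
      << "tee name=t "
      // display branch (same elements as the original pipeline)
      << "t. ! queue ! nvvidconv ! "
      << "video/x-raw(memory:NVMM),width=1280,height=720 ! "
      << "nvoverlaysink name=sink "
      // snapshot branch: nvvidconv copies NVMM -> system memory, then JPEG
      << "t. ! queue ! nvvidconv ! video/x-raw, format=I420 ! "
      << "jpegenc ! appsink name=snap max-buffers=1 drop=true";
    return s.str();
}
```

Snapshots would then be pulled with `gst_app_sink_pull_sample()` on the `snap` element while the display branch keeps running.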
My troubleshooting: I have tried modifying the launch string to include enable-last-sample. I have tried adding another nvvidconv element with a PNG/JPEG/WebP encoder and an appsink/filesink. I also tried creating a class that writes straight to a JPEG file, but I need to adjust the focus before capturing an image, because the camera's focus resets each time a new pipeline is opened. I would send the capture with the initial launch string, but Arducam's focus hardware runs on a separate hardware line from the camera itself, which is why I have to use I2C to send commands to the camera to set the focal point. (The reset-on-every-pipeline behavior confuses me all the more: the two must be talking somehow for the pipeline to trigger a reset, unless it's a chip reset of some kind... but I digress.)
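For the focus side, the I2C write can stay completely independent of the pipeline. A minimal sketch using the Linux `i2c-dev` interface (the bus number, the device address `0x0c`, and the two-byte payload layout are hypothetical placeholders; the real values should come from Arducam's IMX519 focus documentation):

```cpp
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <utility>

// ASSUMPTIONS (not from the original post): the focus controller sits at
// I2C address 0x0c on /dev/i2c-1 and takes a 10-bit focus value split
// across two bytes, high byte first. Verify all three against Arducam's
// documentation before using this on real hardware.
constexpr int      kI2cBusHypothetical   = 1;
constexpr unsigned kFocusAddrHypothetical = 0x0c;

// Pure helper: split a focus value into the (assumed) two-byte payload.
std::pair<uint8_t, uint8_t> encode_focus(int value)
{
    uint16_t v = static_cast<uint16_t>(value) & 0x03FF;  // clamp to 10 bits
    return { static_cast<uint8_t>(v >> 8), static_cast<uint8_t>(v & 0xFF) };
}

bool set_focal_point(int value)
{
    char dev[32];
    std::snprintf(dev, sizeof(dev), "/dev/i2c-%d", kI2cBusHypothetical);
    int fd = open(dev, O_RDWR);
    if (fd < 0) { std::perror("open i2c"); return false; }
    if (ioctl(fd, I2C_SLAVE, kFocusAddrHypothetical) < 0) {
        std::perror("ioctl I2C_SLAVE");
        close(fd);
        return false;
    }
    auto payload = encode_focus(value);
    uint8_t buf[2] = { payload.first, payload.second };
    bool ok = (write(fd, buf, sizeof(buf)) == sizeof(buf));
    close(fd);
    return ok;
}
```

Something like `set_focal_point(450)` could stand in for the existing `FocalPoint(450)` call, assuming the payload matches what the focus controller actually expects.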
What I'm expecting: When I run gstcontroled::Camera_CaptureSnapshot(), I want an image captured from the playing pipeline (I have presumed that pulling from the last sample is the correct approach) and saved to the designated file location. The file name will change and be fed to the function at a later time; for now I can just overwrite the output image so I know it's working.