I managed to wrap kvsWebRTCClientViewer in a Python C extension, and I also managed to expose the frame data to Python through a callback function as a bytes object, but I don't understand how to convert these frames to images. Does this require a GStreamer pipeline?
The following is the C function that exposes the frame data to Python:
VOID handleFrame(UINT64 customData, PFrame pFrame)
{
    PyObject* retval;
    long result;
    PSampleStreamingSession pSampleStreamingSession = (PSampleStreamingSession) customData;

    DLOGV("Frame received. TrackId: %" PRIu64 ", Size: %u, Flags %u", pFrame->trackId, pFrame->size, pFrame->flags);

    if (pSampleStreamingSession->firstFrame) {
        pSampleStreamingSession->firstFrame = FALSE;
        pSampleStreamingSession->startUpLatency = (GETTIME() - pSampleStreamingSession->offerReceiveTime) / HUNDREDS_OF_NANOS_IN_A_MILLISECOND;
        printf("Start up latency from offer to first frame: %" PRIu64 "ms\n", pSampleStreamingSession->startUpLatency);
    }

    /* Pass the buffer together with its length; a bare "y" would treat the frame data
     * as a NUL-terminated string and truncate it at the first zero byte.
     * Requires #define PY_SSIZE_T_CLEAN before #include <Python.h>. */
    retval = PyObject_CallFunction(frame_callable, "y#", pFrame->frameData, (Py_ssize_t) pFrame->size);
    if (retval && PyLong_Check(retval)) {
        result = PyLong_AsLong(retval);
    } else {
        result = -1;
    }
    Py_XDECREF(retval);
}
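One thing I am not sure about is threading: as far as I can tell the SDK invokes this callback from its own receive thread rather than the thread that started the interpreter, so I have been wrapping the call into Python with the GIL state API. This is only a sketch of that idea, and whether my build actually needs it is an assumption on my part:

    /* Sketch only: take the GIL before touching the Python C API from an SDK thread.
     * PyGILState_Ensure() is safe even if the GIL is already held by this thread. */
    PyGILState_STATE gstate = PyGILState_Ensure();
    retval = PyObject_CallFunction(frame_callable, "y#", pFrame->frameData, (Py_ssize_t) pFrame->size);
    /* ... handle retval as above ... */
    PyGILState_Release(gstate);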
And the following is the Python code that starts the viewer and handles frames:
from samples import libkvsWebrtcClientViewer

def frame_handler(frame):
    print("Frame Type: ", type(frame))
    return 0  # return an int so the C side's PyLong_Check succeeds

libkvsWebrtcClientViewer.startViewer(frame_handler, "channel-name")
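Coming back to the actual question: my understanding is that the frames reaching frame_handler are still encoded access units (H.264 in my case, since that is what the SDK sample negotiates by default), not decoded pixels, so they need to go through a decoder before they can become images. I have been wondering whether a full GStreamer pipeline is really required, or whether decoding directly in Python would be enough, for example with PyAV. The following is only a rough sketch of that idea; the av package and the H.264/Annex-B assumptions are mine, not something the SDK guarantees:

# Sketch: decode the encoded frames with PyAV instead of a GStreamer pipeline.
# Assumes each callback delivers Annex-B encoded H.264 data as a bytes object.
import av

decoder = av.CodecContext.create("h264", "r")

def frame_handler(frame_bytes):
    # parse() buffers the raw bytes into complete packets, decode() yields video frames
    for packet in decoder.parse(frame_bytes):
        for decoded in decoder.decode(packet):
            image = decoded.to_image()  # PIL.Image; or decoded.to_ndarray(format="bgr24") for numpy/OpenCV
            image.save("frame.png")
    return 0

The appeal would be avoiding GStreamer entirely, but I am not sure whether this handles fragmented or reordered frames correctly, or whether the SDK already hands me complete access units.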
Any help in this regard is highly appreciated.