I'm using the Microsoft.MixedReality.WebRTC library and I am planning on using it for my next project: a real-time video chat app.
I have been able to establish a connection and pass video frames around.
How would I properly render those frames and display them as video?
Using WPF's MediaElement seems pretty easy, but as far as I know I can only set a Uri object as its source; I cannot feed it individual frames.
I have read that drawing bitmaps is a possible solution, but I suspect this would mean many hours of reinventing the wheel and testing, which I would rather avoid unless there is no other way.
The library works as follows: each time a new frame is received by the client, the Argb32VideoFrameReady event is raised, and an Argb32VideoFrame struct is passed to the callback. It contains an IntPtr to the raw data; Height, Width and Stride are also provided.
More information on the specific struct here
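To make the bitmap route concrete, here is a rough, untested sketch of what I imagine a WriteableBitmap-based renderer could look like. It makes a few assumptions: that the struct's fields are lowercase data/width/height/stride (as in the 2.0 docs), that the frame bytes are BGRA in memory order (hence PixelFormats.Bgra32), and that the XAML contains an Image element named RemoteVideoImage; the handler name is a placeholder too.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.MixedReality.WebRTC;

public partial class MainWindow : Window
{
    // Backing store for an <Image x:Name="RemoteVideoImage"/> element (name is a placeholder).
    private WriteableBitmap _bitmap;

    // Subscribed elsewhere, e.g.: remoteVideoTrack.Argb32VideoFrameReady += OnArgb32FrameReady;
    private void OnArgb32FrameReady(Argb32VideoFrame frame)
    {
        // The event fires on a WebRTC worker thread, and frame.data is only valid
        // for the duration of the callback, so copy the pixels out immediately.
        int width = (int)frame.width;
        int height = (int)frame.height;
        int stride = frame.stride;
        var pixels = new byte[stride * height]; // per-frame allocation keeps the sketch race-free
        Marshal.Copy(frame.data, pixels, 0, pixels.Length);

        // Hand the copy over to the UI thread for drawing.
        Dispatcher.BeginInvoke((Action)(() =>
        {
            if (_bitmap == null || _bitmap.PixelWidth != width || _bitmap.PixelHeight != height)
            {
                // Assuming the ARGB32 frames are BGRA in memory order (little-endian).
                _bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
                RemoteVideoImage.Source = _bitmap;
            }
            _bitmap.WritePixels(new Int32Rect(0, 0, width, height), pixels, stride, 0);
        }));
    }
}
```

As I understand it, a WriteableBitmap belongs to the UI thread that created it, which is why the copy is handed over via the Dispatcher. Allocating a fresh buffer per frame keeps the sketch simple, though I assume a pooled or double-buffered scheme would reduce GC pressure at 30+ fps. Is something like this the intended way, or is there a better-suited control or API?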
What would be some ways I could achieve this?
I am planning on using WPF. The solution should target Windows 7+ and .NET Framework 4.6.2.
Thanks in advance.