The aim is to implement a custom MFT for video processing, synchronized to an external application; the details are not important here. As a first step I would like to get the MFT up and running using DXVA video processing or DXVA-HD, but I haven't been able to do so.
Here's what I did: I built a topology with a source input node (my webcam), an MFT (the MFT_Grayscale sample) and the EVR, and wrapped it in a small application. The topology worked and I could see the monochrome stream from the camera. Now I want to change the code of the MFT_Grayscale sample so that it supports DXVA video processing and can use the hardware acceleration provided by the `VideoProcessBlt` method. The Microsoft documentation gives bits and pieces of information, but I wasn't able to get a running MFT.
What I did so far:
- In the `GetAttributes` method I indicate that this MFT is `MF_SA_D3D_AWARE`.
- In the `ProcessMessage` method I handle the `MFT_MESSAGE_SET_D3D_MANAGER` message to get a device handle, an `IDirect3DDeviceManager9` and an `IDirectXVideoProcessorService`.
- In the `SetInputType` method I use the methods described here https://msdn.microsoft.com/en-us/library/windows/desktop/ms694235(v=vs.85).aspx to get a `DXVA2_VideoDesc` structure, and follow this code https://msdn.microsoft.com/en-us/library/windows/desktop/cc307964(v=vs.85).aspx to create a video processing device. Additionally, I create the surfaces using `IDirectXVideoProcessorService::CreateSurface`.
- In the `GetOutputStreamInfo` method the `dwFlags` member looks like this:
      pStreamInfo->dwFlags =
          MFT_OUTPUT_STREAM_PROVIDES_SAMPLES |
          MFT_OUTPUT_STREAM_WHOLE_SAMPLES |
          MFT_OUTPUT_STREAM_SINGLE_SAMPLE_PER_BUFFER |
          MFT_OUTPUT_STREAM_FIXED_SAMPLE_SIZE;
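For reference, the first two steps above look roughly like this in my code (error handling trimmed; the class name `CGrayscaleMFT` and the `m_*` member names are placeholders from my implementation, not from the original sample):

```cpp
// Sketch: advertise D3D awareness and accept the device manager.
// m_pAttributes was created once with MFCreateAttributes(&m_pAttributes, 1).

HRESULT CGrayscaleMFT::GetAttributes(IMFAttributes** ppAttributes)
{
    if (ppAttributes == NULL) return E_POINTER;

    // Mark the MFT as Direct3D-aware so the topology loader sends
    // MFT_MESSAGE_SET_D3D_MANAGER.
    HRESULT hr = m_pAttributes->SetUINT32(MF_SA_D3D_AWARE, TRUE);
    if (FAILED(hr)) return hr;

    *ppAttributes = m_pAttributes;
    (*ppAttributes)->AddRef();
    return S_OK;
}

HRESULT CGrayscaleMFT::ProcessMessage(MFT_MESSAGE_TYPE eMessage, ULONG_PTR ulParam)
{
    if (eMessage == MFT_MESSAGE_SET_D3D_MANAGER)
    {
        SafeRelease(&m_pVPService);
        SafeRelease(&m_pDeviceManager);

        if (ulParam != 0)
        {
            IUnknown* pUnk = reinterpret_cast<IUnknown*>(ulParam);
            HRESULT hr = pUnk->QueryInterface(IID_PPV_ARGS(&m_pDeviceManager));
            if (SUCCEEDED(hr))
                hr = m_pDeviceManager->OpenDeviceHandle(&m_hDevice);
            if (SUCCEEDED(hr))
                hr = m_pDeviceManager->GetVideoService(
                        m_hDevice, IID_PPV_ARGS(&m_pVPService));
            return hr;
        }
        return S_OK;
    }
    // ... the remaining messages are handled as in the original sample ...
    return S_OK;
}
```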
Everything seems to be OK up to this point. Now my questions (I'm sorry that I cannot be more specific):
Do I have to adapt the `GetOutputAvailableType`/`SetOutputType` methods?

In the `ProcessInput` method I get the `IMFSample` and extract an `IMFMediaBuffer`. According to my function calls, the buffer does not manage an `IDirect3DSurface9`. Do I have to memcpy the data of the buffer to a Direct3D surface?

In the `ProcessOutput` method, as a starting point, I want to forward the incoming frame to the output; `VideoProcessBlt` should do a 1:1 blit from input to output. The documentation says: "Get an available surface that is not currently in use." How can I determine whether a surface is in use?
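To show what I mean by "my function calls": this is roughly how I probe the input buffer for an underlying surface (a sketch, assuming `MFGetService` with `MR_BUFFER_SERVICE` is the right way to do this; `pSample` is the sample passed to `ProcessInput`):

```cpp
// Sketch: check whether the input sample's buffer wraps a Direct3D surface.
IMFMediaBuffer* pBuffer = NULL;
IDirect3DSurface9* pSurface = NULL;

HRESULT hr = pSample->GetBufferByIndex(0, &pBuffer);
if (SUCCEEDED(hr))
{
    // MR_BUFFER_SERVICE succeeds if the buffer is backed by a D3D9
    // surface; for a plain system-memory buffer it fails.
    hr = MFGetService(pBuffer, MR_BUFFER_SERVICE, IID_PPV_ARGS(&pSurface));
    if (SUCCEEDED(hr))
    {
        // The sample is already surface-backed; pSurface could be used
        // directly as the VideoProcessBlt input.
        pSurface->Release();
    }
    pBuffer->Release();
}
```

In my case this call fails, which is why I suspect the buffer only holds system memory.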
How am I supposed to output the surface? Should I use `MFCreateVideoSampleFromSurface` or `MFCreateDXSurfaceBuffer`?

Unfortunately I am really lost and unable to make any progress using the documentation.
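For concreteness, the first of the two options I'm considering would look something like this (a sketch only; `pSurface` stands for one of the surfaces created earlier with `IDirectXVideoProcessorService::CreateSurface`):

```cpp
// Sketch: wrap a D3D9 surface in an IMFSample via
// MFCreateVideoSampleFromSurface (declared in evr.h).
IMFSample* pOutSample = NULL;
HRESULT hr = MFCreateVideoSampleFromSurface(pSurface, &pOutSample);
if (SUCCEEDED(hr))
{
    // Return pOutSample in MFT_OUTPUT_DATA_BUFFER.pSample, since
    // GetOutputStreamInfo advertises MFT_OUTPUT_STREAM_PROVIDES_SAMPLES.
}
```

I don't know whether this is the intended approach for an MFT, or whether video samples created this way are meant only for the EVR's presenter.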
The situation now is that I do not see any video output (the window just shows its default background color) and the webcam stops capturing frames after the first frame (its LED switches off). Beyond that nothing happens; the application just keeps running without displaying anything.
I hope that somebody can help me. I would also appreciate it if someone could point me to sample code for an MFT that uses DXVA video processing or DXVA-HD. I was not able to find anything...
Thanks