I'm trying to build a feature that applies filters to frames captured from the camera and streams them over the network.
Because filtering the CMSampleBuffer or CVPixelBuffer on the CPU performs badly, I convert the data from the CMSampleBuffer (raw camera capture) to a CIImage and apply some of Core Image's built-in filters. But when I convert back to a CVPixelBuffer (or CMSampleBuffer) to encode with VideoToolbox, it is too slow to hit a real-time target at a smooth fps (I guess the reason is transferring the data from the GPU back to the CPU).
So, is there any way to use a CIImage or a GL texture as VideoToolbox's input (like drawing a texture to a surface used as MediaCodec input on Android)? Or a faster conversion from CGImage/CIImage to CVPixelBuffer, or some way to use a shading language? I think this must be possible, because Messenger, Snapchat, Snow, etc. run well even on old devices like the iPhone 5.
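For what it's worth, here is a minimal sketch of the approach I've been considering (names like `makePixelBufferPool` and `encode` are my own, not from any framework): keep the whole pipeline on the GPU by rendering the filtered CIImage with a Metal-backed CIContext directly into an IOSurface-backed CVPixelBuffer drawn from a pool, then hand that same buffer to a VTCompressionSession, so no CPU readback should occur.

```swift
import CoreImage
import CoreVideo
import Metal
import VideoToolbox

// One Metal-backed CIContext, created once and reused across frames.
let ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)

// A pool of IOSurface-backed pixel buffers, so Core Image (GPU) and
// VideoToolbox can share the memory without copying through the CPU.
func makePixelBufferPool(width: Int, height: Int) -> CVPixelBufferPool? {
    let attrs: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: width,
        kCVPixelBufferHeightKey as String: height,
        // An empty dictionary requests IOSurface backing.
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ]
    var pool: CVPixelBufferPool?
    CVPixelBufferPoolCreate(nil, nil, attrs as CFDictionary, &pool)
    return pool
}

// Render the filtered image into a pooled buffer and submit it for encoding.
func encode(filtered: CIImage,
            pool: CVPixelBufferPool,
            session: VTCompressionSession,
            timestamp: CMTime) {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
    guard let buffer = pixelBuffer else { return }

    // GPU-side render into the shared buffer; no readback to the CPU here.
    ciContext.render(filtered, to: buffer)

    VTCompressionSessionEncodeFrame(session,
                                    imageBuffer: buffer,
                                    presentationTimeStamp: timestamp,
                                    duration: .invalid,
                                    frameProperties: nil,
                                    sourceFrameRefcon: nil,
                                    infoFlagsOut: nil)
}
```

Is something like this the right direction, or is there a cheaper path from the filtered image into the encoder?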
Thanks all.