Problem
In short, I am trying to use the Face Mesh SDK on some PNG images.
Steps
- Get the image as a UIImage:
UIImage(named: "image_name")
- Convert the UIImage into a CVPixelBuffer using the following extension:
extension UIImage {
    /// Renders the image into a newly created 32ARGB CVPixelBuffer.
    func toPixelBuffer() -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(size.width),
                                         Int(size.height),
                                         kCVPixelFormatType_32ARGB,
                                         attrs,
                                         &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return nil
        }

        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(buffer)

        // Draw the UIImage into the buffer's backing memory via a CGContext.
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: pixelData,
                                      width: Int(size.width),
                                      height: Int(size.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: rgbColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
            return nil
        }

        // Flip the coordinate system so UIKit drawing lands right side up.
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context)
        draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        UIGraphicsPopContext()

        CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        return buffer
    }
}
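One detail worth double-checking in the conversion above (a hedged aside, not part of the original code): UIImage.size is measured in points, not pixels, so for @2x or @3x assets the buffer created above is smaller than the PNG's actual bitmap. A minimal sketch of the point-to-pixel computation (the helper name pixelDimensions is mine):

```swift
import UIKit

// Hypothetical helper, not in the original project: UIImage.size is in
// points, so multiplying by the image's scale factor yields the true pixel
// dimensions (an @2x asset reports half its pixel size in points).
func pixelDimensions(of image: UIImage) -> (width: Int, height: Int) {
    (Int(image.size.width * image.scale),
     Int(image.size.height * image.scale))
}
```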
- Set up FaceMesh:
faceMesh = FaceMesh()
faceMesh.startGraph()
faceMesh.delegate = self
- Feed the pixel buffer (from the conversion step above) into MediaPipe FaceMesh:
if let frame = image.toPixelBuffer() {
    lastFrameBuffer = frame
    faceMesh.processVideoFrame(frame)
}
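To rule out differences between the converted buffers and the live-camera buffers, a hedged diagnostic sketch could log the attributes of each buffer right before it is sent (the helper name describePixelBuffer is mine, not part of the project):

```swift
import Foundation
import CoreVideo

// Hypothetical helper, not in the original project: logs the attributes
// that commonly differ between camera frames and manually created buffers.
func describePixelBuffer(_ buffer: CVPixelBuffer, label: String) {
    // FourCC pixel format code, e.g. kCVPixelFormatType_32BGRA vs. 32ARGB.
    let format = CVPixelBufferGetPixelFormatType(buffer)
    // Camera frames are IOSurface-backed; CVPixelBufferCreate without
    // kCVPixelBufferIOSurfacePropertiesKey produces a buffer that is not.
    let hasIOSurface = CVPixelBufferGetIOSurface(buffer) != nil
    print("[\(label)] format: \(String(format: "0x%08x", format)), " +
          "size: \(CVPixelBufferGetWidth(buffer))x\(CVPixelBufferGetHeight(buffer)), " +
          "IOSurface-backed: \(hasIOSurface)")
}
```

Calling this on both a camera frame and a converted PNG frame would make any mismatch in pixel format or IOSurface backing visible immediately.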
- The frame is passed into this Objective-C wrapper function:
- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer {
  NSLog(@"[FACEMESH] Processing Video Frame %@", kGraphName);
  const auto ts =
      mediapipe::Timestamp(self.timestamp++ * mediapipe::Timestamp::kTimestampUnitsPerSecond);
  NSError *err = nil;
  BOOL sent = [self.mediapipeGraph sendPixelBuffer:imageBuffer
                                        intoStream:kInputStream
                                        packetType:MPPPacketTypePixelBuffer
                                         timestamp:ts
                                    allowOverwrite:NO
                                             error:&err];
  NSLog(@"[FACEMESH] Sent? %d", sent);
  if (err == nil) {
    NSLog(@"[FACEMESH] NO ERROR");
  } else {
    NSLog(@"[FACEMESH] ERROR: %@", [err localizedDescription]);
  }
}
Result
The logs show that every step above completes successfully:
[FACEMESH] inited graph pure_face_mesh_mobile_gpu
[FACEMESH] Started graph pure_face_mesh_mobile_gpu
[FACEMESH] Processing Video Frame pure_face_mesh_mobile_gpu
[FACEMESH] Sent? 1
[FACEMESH] NO ERROR
However, and here is the problem: the delegate methods are never triggered, and the following Objective-C++ delegate callback is never reached:
- (void)mediapipeGraph:(MPPGraph *)graph
       didOutputPacket:(const ::mediapipe::Packet &)packet
            fromStream:(const std::string &)streamName {
  NSLog(@"[FACEMESH] Received Output Packet %@", kGraphName);
  if (streamName == kLandmarksOutputStream) {
    if (packet.IsEmpty()) {
      NSLog(@"[FACEMESH] Packet is empty for the given input %@", kGraphName);
      return;
    }
    ...
Note that the same pipeline works fine on CVPixelBuffers received from live camera frames.
What could be causing the converted frames to behave differently?
Thanks for any help!