I'm new to Flutter and TensorFlow. I'm developing a face detection app that should also extract face key points.
I'm learning from https://medium.com/@mundorap2010/face-detection-with-tflite-model-without-firebase-in-flutter-6eadf888f3b0, but I don't understand how the tflite file is turned into the face detection model. At https://netron.app/ I can see the model's input and output, but why do I need to create the Anchor and AnchorOption models?
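For reference, the tutorial defines Anchor and AnchorOption roughly like this (my paraphrase from reading its code, the field names may not be exact):

class Anchor {
  // One prior box: center point plus width/height in normalized coordinates.
  final double xCenter;
  final double yCenter;
  final double w;
  final double h;
  Anchor(this.xCenter, this.yCenter, this.w, this.h);
}

class AnchorOption {
  // Parameters for generating the SSD anchors the model was trained with.
  final int inputSizeWidth;
  final int inputSizeHeight;
  final double minScale;
  final double maxScale;
  final double anchorOffsetX;
  final double anchorOffsetY;
  final int numLayers;
  final List<int> strides;
  final List<double> aspectRatios;
  AnchorOption(
      this.inputSizeWidth,
      this.inputSizeHeight,
      this.minScale,
      this.maxScale,
      this.anchorOffsetX,
      this.anchorOffsetY,
      this.numLayers,
      this.strides,
      this.aspectRatios);
}

None of these values seem to appear in the tflite file itself, which is what confuses me.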
What do the type and location shown for the input and output represent?
If I have a new tflite file and can see its input and output, how do I create a new face model from it and use it?
My goal is to use TensorFlow with my own tflite file to recognize my face and get its key points.
My code is shown below:
import 'package:tflite_flutter/tflite_flutter.dart';
import 'package:tflite_flutter_helper/tflite_flutter_helper.dart';

loadInterPreter() async {
  try {
    // Load the tflite model from the app assets.
    interpreter = await Interpreter.fromAsset(
      MODEL_FILE_NAME,
      options: InterpreterOptions(),
    );
  } catch (e) {
    print("Error while creating interpreter: $e");
  }
}
// Cache the shape of every output tensor for building output buffers later.
final outputTensors = _interpreter.getOutputTensors();
for (var tensor in outputTensors) {
  _outputShapes.add(tensor.shape);
}
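To see what the "type" and "location" shown by Netron correspond to, I also print the tensors like this (I assume Tensor exposes name, shape, and type in tflite_flutter):

for (var tensor in _interpreter.getInputTensors()) {
  print('input: ${tensor.name} shape=${tensor.shape} type=${tensor.type}');
}
for (var tensor in _interpreter.getOutputTensors()) {
  print('output: ${tensor.name} shape=${tensor.shape} type=${tensor.type}');
}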
// Preprocessing: resize to the model's 128x128 input and normalize pixels to [-1, 1].
late final ImageProcessor _imageProcessor = ImageProcessorBuilder()
    .add(ResizeOp(128, 128, ResizeMethod.BILINEAR))
    .add(NormalizeOp(127.5, 127.5))
    .build();
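getProcessedImage in the snippet below is just a small helper that applies this processor (paraphrased):

TensorImage getProcessedImage(TensorImage tensorImage) {
  // Resize to 128x128 and map pixel values to [-1, 1].
  return _imageProcessor.process(tensorImage);
}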
// Convert the camera image to a float32 TensorImage and preprocess it.
final tensorImage = TensorImage(TfLiteType.float32);
tensorImage.loadImage(image);
final inputImage = getProcessedImage(tensorImage);

// Output buffer built from the first output tensor's shape.
TensorBuffer outputFaces = TensorBufferFloat(_outputShapes[0]);

final inputs = <Object>[inputImage.buffer];
final outputs = <int, Object>{
  0: outputFaces.buffer,
};

_interpreter.runForMultipleInputs(inputs, outputs);
It crashes when execution reaches the runForMultipleInputs call.
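I wonder if the crash is because runForMultipleInputs expects one output buffer per output tensor of the model. Would something like this be the right way to build the outputs map (just a guess based on _outputShapes)?

// Guess: allocate one TensorBuffer for every output tensor the interpreter reports.
final outputBuffers = <int, Object>{};
for (var i = 0; i < _outputShapes.length; i++) {
  outputBuffers[i] = TensorBufferFloat(_outputShapes[i]).buffer;
}
_interpreter.runForMultipleInputs(inputs, outputBuffers);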