I exported a YOLOv8 model (trained with the ultralytics package) to TFLite format and I'm trying to use it in my Flutter app. I'm using the tflite_flutter package to do so. There is no issue running the model without the GPU delegate, but as soon as I add one, it breaks on model loading with ArgumentError (Invalid argument(s): Unable to create interpreter.). There is no such problem with a dummy TF 2.12 model running on the GPU.
I'm exporting the model with the command:
yolo export model=yolov8m-2023-04-25.pt format=tflite
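(For context, I also tried an FP16 variant of the export, since the GPU delegate computes in FP16 internally; this is just the same command with the `half` flag, assuming the standard ultralytics export arguments:)

```shell
# FP16 export variant (half=True); the default export above produces FP32 weights.
yolo export model=yolov8m-2023-04-25.pt format=tflite half=True
```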
and I'm trying to load it with:
final gpuDelegateV2 = tfl.GpuDelegateV2(
    options: tfl.GpuDelegateOptionsV2(
        isPrecisionLossAllowed: false,
        inferencePreference: tfl.TfLiteGpuInferenceUsage.fastSingleAnswer,
        inferencePriority1: tfl.TfLiteGpuInferencePriority.minLatency,
        inferencePriority2: tfl.TfLiteGpuInferencePriority.auto,
        inferencePriority3: tfl.TfLiteGpuInferencePriority.auto,
        maxDelegatePartitions: 1));

var interpreterOptions = tfl.InterpreterOptions()
  ..addDelegate(gpuDelegateV2);

final interpreter = await tfl.Interpreter.fromAsset('yolov8.tflite',
    options: interpreterOptions);
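For now I'm working around the crash with a fallback: if creating the interpreter with the GPU delegate throws, I retry on CPU. This is just a sketch of my workaround (the thread count is an arbitrary choice), not a fix:

```dart
import 'package:tflite_flutter/tflite_flutter.dart' as tfl;

// Try to create a GPU-delegated interpreter; on ArgumentError
// (the failure described above), fall back to a multithreaded CPU one.
Future<tfl.Interpreter> loadInterpreter() async {
  try {
    final gpuDelegateV2 = tfl.GpuDelegateV2(
        options: tfl.GpuDelegateOptionsV2(isPrecisionLossAllowed: false));
    final options = tfl.InterpreterOptions()..addDelegate(gpuDelegateV2);
    return await tfl.Interpreter.fromAsset('yolov8.tflite', options: options);
  } on ArgumentError {
    // GPU delegate rejected the model; run on CPU instead.
    final options = tfl.InterpreterOptions()..threads = 4;
    return await tfl.Interpreter.fromAsset('yolov8.tflite', options: options);
  }
}
```

Obviously this just hides the problem; I'd still like the GPU path to work.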
Also, the TFLite Analyzer reports that the exported model should be compatible with the GPU delegate:
tf.lite.experimental.Analyzer.analyze(model_path="yolov8.tflite", gpu_compatibility=True)
I'm debugging on a Samsung Galaxy A52.
Does anyone know what is wrong here?