I'm having issues loading a TFLite model using the MappedByteBuffer method from the Tensorflow-for-poets-2 TFLite tutorial:
private MappedByteBuffer loadModelFile(Activity activity, String MODEL_FILE) throws IOException {
    // Memory-map the region of the APK that holds the model asset.
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_FILE);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
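For context, the returned buffer is passed straight to the TFLite Interpreter, roughly like this (a minimal sketch; the call from an Activity and the asset name, which matches the model linked at the bottom, are assumptions):

    // Sketch: build the interpreter from the memory-mapped model.
    Interpreter tflite = new Interpreter(loadModelFile(this, "float.tflite"));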
In particular, the model I converted with the tflite_convert tool (formerly toco) crashes when fileChannel.map(...) is executed in the return statement. The model is a floating-point TFLite model.
The problem seems to be caused by the startOffset and declaredLength values. The error I see in logcat is:
7525-7525/dp.thexor A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 7525 (dp.thexor), pid 7525 (dp.thexor)
If I hard-code these two values to ones taken from a model that loads successfully, the method returns from fileChannel.map(...) without crashing. The values from my model are startOffset = 2846788 and declaredLength = 45525464.
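A sanity check just before the map call should show whether the declared region even fits inside the file backing the channel (a sketch reusing the variables from the method above; the log tag is arbitrary):

    // Sketch: the mapped region must lie entirely within the file.
    long fileSize = fileChannel.size();
    Log.d("loadModelFile", "fileSize=" + fileSize
            + " startOffset=" + startOffset
            + " declaredLength=" + declaredLength
            + " mappedEnd=" + (startOffset + declaredLength));
    // If startOffset + declaredLength > fileSize, map() cannot succeed.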
I know that my TFLite model can successfully be mapped into memory, because tensorflow/contrib/lite/tools/benchmark:benchmark_model is able to benchmark it.
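As a possible workaround I could avoid mmap entirely and read the asset into a direct ByteBuffer, since as far as I can tell the Interpreter constructor also accepts one. A minimal sketch (needs java.io and java.nio imports; loadModelBuffer is a hypothetical helper name):

    // Sketch: copy the asset into a direct, native-order ByteBuffer instead of mapping it.
    private ByteBuffer loadModelBuffer(Activity activity, String modelFile) throws IOException {
        InputStream is = activity.getAssets().open(modelFile);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = is.read(chunk)) != -1) {
            bos.write(chunk, 0, n);
        }
        is.close();
        byte[] bytes = bos.toByteArray();
        ByteBuffer buffer = ByteBuffer.allocateDirect(bytes.length).order(ByteOrder.nativeOrder());
        buffer.put(bytes);
        buffer.rewind();
        return buffer;
    }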
I'd try loading a quantized TFLite model instead, but my model currently contains ops that do not have quantized equivalents (transpose_conv).
What could be causing this?
My .tflite model can be found here: https://github.com/andrewginns/CycleGAN-Tensorflow-PyTorch/releases/download/tf1.7-py3.6.4/float.tflite