I tried one of the object detection examples I found on GitHub. It works perfectly fine, but when I swap in my own tflite model in place of the one that came with it, it always crashes with this error:
E/AndroidRuntime: FATAL EXCEPTION: inference
Process: com.objdetector, PID: 3080
java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_images:0) with 307200 bytes from a Java Buffer with 270000 bytes.
at org.tensorflow.lite.Tensor.throwIfSrcShapeIsIncompatible(Tensor.java:423)
at org.tensorflow.lite.Tensor.setTo(Tensor.java:189)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:154)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:343)
at com.objdetector.deepmodel.MobileNetObjDetector.detectObjects(MobileNetObjDetector.java:130)
at com.objdetector.MainActivity.lambda$onImageAvailable$0$MainActivity(MainActivity.java:98)
at com.objdetector.-$$Lambda$MainActivity$MYtVDhek_YLxj8lClVnaIcVW0Gg.run(lambda)
at android.os.Handler.handleCallback(Handler.java:751)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:154)
at android.os.HandlerThread.run(HandlerThread.java:61)
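For context, the two byte counts in the exception line up with tensor sizes: 307200 bytes is 1 × 320 × 320 × 3 with one byte per channel (what the model's `serving_default_images:0` tensor expects), while 270000 bytes is 1 × 300 × 300 × 3 (what my code allocates with `INPUT_SIZE = 300`). This suggests the new model was exported for a 320×320 input. A quick check of that arithmetic:

```java
public class BufferSizeCheck {
    // Byte size of an image tensor shaped [batch, height, width, channels]
    // with the given number of bytes per channel (1 for a quantized uint8 model).
    static int tensorBytes(int batch, int height, int width, int channels, int bytesPerChannel) {
        return batch * height * width * channels * bytesPerChannel;
    }

    public static void main(String[] args) {
        // Size the model reports in the crash: consistent with a 320x320 uint8 input.
        System.out.println(tensorBytes(1, 320, 320, 3, 1)); // 307200
        // Size my code allocates with INPUT_SIZE = 300.
        System.out.println(tensorBytes(1, 300, 300, 3, 1)); // 270000
    }
}
```

So if this reading is right, changing `INPUT_SIZE` to 320 (and resizing the input `Bitmap` to match) should make the buffer sizes agree.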
This is the code the error points to:
private static final String MODEL_FILENAME = "letters.tflite";
private static final String LABEL_FILENAME = "label.txt";
private static final int INPUT_SIZE = 300;
private static final int NUM_BYTES_PER_CHANNEL = 1;
private static final float IMAGE_MEAN = 128.0f;
private static final float IMAGE_STD = 128.0f;
private static final int NUM_DETECTIONS = 10;
private static final String LOGGING_TAG = MobileNetObjDetector.class.getName();
private ByteBuffer imgData;
private Interpreter tfLite;
private int[] intValues;
private float[][][] outputLocations;
private float[][] outputClasses;
private float[][] outputScores;
private float[] numDetections;
private Vector<String> labels = new Vector<String>();
private MobileNetObjDetector(final AssetManager assetManager) throws IOException {
init(assetManager);
}
private void init(final AssetManager assetManager) throws IOException {
imgData = ByteBuffer.allocateDirect(1 * INPUT_SIZE * INPUT_SIZE * 3 * NUM_BYTES_PER_CHANNEL);
imgData.order(ByteOrder.nativeOrder());
intValues = new int[INPUT_SIZE * INPUT_SIZE];
outputLocations = new float[1][NUM_DETECTIONS][4];
outputClasses = new float[1][NUM_DETECTIONS];
outputScores = new float[1][NUM_DETECTIONS];
numDetections = new float[1];
InputStream labelsInput = assetManager.open(LABEL_FILENAME);
BufferedReader br = new BufferedReader(new InputStreamReader(labelsInput));
String line;
while ((line = br.readLine()) != null) {
labels.add(line);
}
br.close();
try {
tfLite = new Interpreter(loadModelFile(assetManager));
Log.i(LOGGING_TAG, "Input tensor shapes:");
for (int i=0; i<tfLite.getInputTensorCount(); i++) {
int[] shape = tfLite.getInputTensor(i).shape();
String stringShape = "";
for(int j = 0; j < shape.length; j++) {
stringShape = stringShape + ", " + shape[j];
}
Log.i(LOGGING_TAG, "Shape of input tensor " + i + ": " + stringShape);
}
Log.i(LOGGING_TAG, "Output tensor shapes:");
for (int i=0; i<tfLite.getOutputTensorCount(); i++) {
int[] shape = tfLite.getOutputTensor(i).shape();
String stringShape = "";
for(int j = 0; j < shape.length; j++) {
stringShape = stringShape + ", " + shape[j];
}
Log.i(LOGGING_TAG, "Shape of output tensor " + i + ": " + tfLite.getOutputTensor(i).name() + " " + stringShape);
}
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException(e);
}
}
public static MobileNetObjDetector create(final AssetManager assetManager) throws IOException {
return new MobileNetObjDetector(assetManager);
}
private static MappedByteBuffer loadModelFile(AssetManager assets)
throws IOException {
AssetFileDescriptor fileDescriptor = assets.openFd(MODEL_FILENAME);
FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
FileChannel fileChannel = inputStream.getChannel();
long startOffset = fileDescriptor.getStartOffset();
long declaredLength = fileDescriptor.getDeclaredLength();
return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
public void close() {
tfLite.close();
}
public List<DetectionResult> detectObjects(final Bitmap bitmap) {
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
imgData.rewind();
for (int i = 0; i < INPUT_SIZE; ++i) {
for (int j = 0; j < INPUT_SIZE; ++j) {
int pixelValue = intValues[i * INPUT_SIZE + j];
imgData.put((byte) ((pixelValue >> 16) & 0xFF));
imgData.put((byte) ((pixelValue >> 8) & 0xFF));
imgData.put((byte) (pixelValue & 0xFF));
}
}
Object[] inputArray = {imgData};
Map<Integer, Object> outputMap = new HashMap<>();
outputMap.put(0, outputLocations);
outputMap.put(1, outputClasses);
outputMap.put(2, outputScores);
outputMap.put(3, numDetections);
tfLite.runForMultipleInputsOutputs(inputArray, outputMap);
final ArrayList<DetectionResult> recognitions = new ArrayList<>(NUM_DETECTIONS);
for (int i = 0; i < NUM_DETECTIONS; ++i) {
final RectF detection =
new RectF(
outputLocations[0][i][1] * INPUT_SIZE,
outputLocations[0][i][0] * INPUT_SIZE,
outputLocations[0][i][3] * INPUT_SIZE,
outputLocations[0][i][2] * INPUT_SIZE);
int labelOffset = 1;
recognitions.add(
new DetectionResult(
i,
labels.get((int) outputClasses[0][i] + labelOffset),
outputScores[0][i],
detection));
}
return recognitions;
}
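Rather than hardcoding `INPUT_SIZE`, one idea I am considering (a sketch only, not verified against my exact setup) is to size the input buffer from whatever shape the interpreter reports — the `tfLite.getInputTensor(0).shape()` call is the same one my logging loop in `init()` already uses. The helper below is plain Java, with the shape from the crash plugged in by hand:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputBufferAllocator {
    // Allocates a direct ByteBuffer sized for a tensor shape like [1, H, W, C],
    // e.g. the int[] returned by tfLite.getInputTensor(0).shape().
    static ByteBuffer allocateForShape(int[] shape, int bytesPerChannel) {
        int elements = 1;
        for (int dim : shape) {
            elements *= dim;
        }
        ByteBuffer buffer = ByteBuffer.allocateDirect(elements * bytesPerChannel);
        buffer.order(ByteOrder.nativeOrder());
        return buffer;
    }

    public static void main(String[] args) {
        // Shape consistent with the 307200-byte tensor in the crash: [1, 320, 320, 3], uint8.
        int[] shape = {1, 320, 320, 3};
        ByteBuffer imgData = allocateForShape(shape, 1);
        System.out.println(imgData.capacity()); // 307200
    }
}
```

In `init()` this would replace the fixed `ByteBuffer.allocateDirect(...)` call, and `INPUT_SIZE` could then be taken from `shape[1]`/`shape[2]` so that `intValues` and the pixel loop stay consistent with the buffer.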
The model maker I am using is from a YouTube tutorial. I followed it and everything worked perfectly fine in Python, but as I said, when I change the tflite model on the Android side it won't run.
Also, the dataset I used is from Roboflow; I used only a small dataset for now and will try the larger ones next.
Links:
Dataset that I am using currently
I tried various changes I've found on Google, such as changing
MODEL_IMAGE_INPUT_SIZE
and trying other model makers, but to no avail.
I also tried the examples from TensorFlow on GitHub, but didn't dig too deep because they are in Kotlin.
I also tried a dataset I created myself with LabelImg, but that didn't work either.
This is the second time I am setting up this repo; the first time I couldn't remember what changes I had made, so I started over.
I am fairly sure that the model I created works, because when I use the tester in the model maker it detects and labels the input correctly.
I also tried other models, but I am not sure they are compatible with the example I am using. The model I trained is from that video, and I guess the setup is not the same, since he is using OpenCV.
I am new to machine learning and have no particular background in it, but I have built many Android projects before; this is just my first time using TensorFlow on Android.
When I changed the label map and the model, I was expecting it to work since the setup is almost the same, but apparently not.