
I am creating an Android app that needs to run a TensorFlow image-classification model. I created the following simple model in Python:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28, 2)))
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dense(13, activation="softmax"))

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

Here, the input is created from an image in the following manner:

img = Image.open("img1.png").convert("LA").resize((28, 28))
input = np.reshape(np.asarray(img), (1, 28, 28, 2))
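For reference, `convert("LA")` produces a two-channel image (luminance + alpha), so `np.asarray(img)` already has shape (28, 28, 2) and the reshape only adds the batch dimension. A minimal sketch with a synthetic array standing in for the decoded image (no image file assumed):

```python
import numpy as np

# Synthetic stand-in for np.asarray(img) on a 28x28 "LA" image:
# channel 0 is luminance (0-255), channel 1 is alpha (255 = opaque).
la = np.zeros((28, 28, 2), dtype=np.uint8)
la[..., 0] = 128   # mid-gray luminance
la[..., 1] = 255   # fully opaque

batch = np.reshape(la, (1, 28, 28, 2))
print(batch.shape)  # (1, 28, 28, 2)
print(batch.dtype)  # uint8 -- raw 0-255 values, not normalized to [0, 1]
```

Note that this input is raw 0-255 values, while the Android code below feeds normalized floats; that preprocessing difference is worth keeping in mind when comparing outputs between the two sides.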

The accuracy of the model is pretty good. I saved the model in TFLite format:

conv = tf.lite.TFLiteConverter.from_keras_model(model)
tfmodel = conv.convert()
open("mymodel.tflite", "wb").write(tfmodel)

Now, I want to use this model in Android. I am using Android Studio 4.1. I imported the tflite file via File > New > Others > TFLite and initialized the model in the following manner:

Mymodel model = Mymodel.newInstance(context);

I have a bitmap that I want to test with. To send input to the model, I need to create an array of shape (1, 28, 28, 2) from this bitmap and then build a ByteBuffer from that array. I am doing this in the following manner:

public static float[][][][] getBMPArray(Bitmap bmpTile) {
    int width = bmpTile.getWidth();
    float[][][][] bmpArray = new float[1][width][width][2];
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < width; j++) {
            float pixelVal = (float)(Color.red(bmpTile.getPixel(j, i)) * 299
                    + Color.green(bmpTile.getPixel(j, i)) * 587
                    + Color.blue(bmpTile.getPixel(j, i)) * 114) / 1000;
            bmpArray[0][i][j][0] = pixelVal / 255;
            bmpArray[0][i][j][1] = 1;
        }
    }
    return bmpArray;
}
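The loop above computes BT.601 luminance with integer weights (299/587/114 scaled by 1000) and normalizes the result to [0, 1]. A quick Python cross-check of the same formula, useful for comparing a single pixel value between the Android and Python sides:

```python
def luminance_android(r, g, b):
    # Mirrors the Android code: BT.601 weights scaled by 1000,
    # then 0-255 mapped down to 0-1.
    return float(r * 299 + g * 587 + b * 114) / 1000 / 255

print(luminance_android(255, 255, 255))  # 1.0 (pure white)
print(luminance_android(0, 0, 0))        # 0.0 (pure black)
```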

public static ByteBuffer bmpByteBuffer(Bitmap bmp) {
    int width = bmp.getWidth();
    float[][][][] bmpArray = getBMPArray(bmp);
    ByteBuffer bmpByteBuf = ByteBuffer.allocate(1 * 28 * 28 * 2 * 4);
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < width; j++) {
            bmpByteBuf.putFloat(bmpArray[0][i][j][0]);
            bmpByteBuf.putFloat(bmpArray[0][i][j][1]);
        }
    }
    return bmpByteBuf;
}
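One thing that may be worth checking with a buffer like this is byte order: `ByteBuffer.allocate` returns a big-endian buffer by default, while TFLite reads the floats in native order (little-endian on virtually all Android devices). Floats written with the wrong endianness decode as unrelated values, which can push a network to NaN. A small Python `struct` sketch of the effect:

```python
import struct

x = 1.0
be = struct.pack(">f", x)           # big-endian bytes, as ByteBuffer.allocate writes them
wrong = struct.unpack("<f", be)[0]  # read back little-endian, as a native reader would
print(wrong == x)  # False -- the same bytes decode to a completely different float
```

If that is the issue here, the Java side would need something like `bmpByteBuf.order(ByteOrder.nativeOrder())` before the `putFloat` calls (an assumption about this code, not something I have verified).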

I send the input to the model in the following way:

ByteBuffer bmpByteBuf = bmpByteBuffer(bmp);
TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 28, 28, 2}, DataType.FLOAT32);
inputFeature0.loadBuffer(bmpByteBuf);
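For reference, `loadBuffer` expects the buffer's size to match the tensor's byte size exactly; for this shape that size works out as follows:

```python
# A FLOAT32 tensor of shape (1, 28, 28, 2): 4 bytes per float.
num_floats = 1 * 28 * 28 * 2
num_bytes = num_floats * 4
print(num_floats, num_bytes)  # 1568 6272
```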

I then read the output in the following manner:

Mymodel.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
float[] predLabels = outputFeature0.getFloatArray();
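The output is a 13-element probability vector, so the predicted label is its argmax. A small sketch (Python for brevity) of interpreting such a vector, using the expected output shown below:

```python
import numpy as np

# The softmax output for one image: 13 class probabilities.
pred = np.array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
                dtype=np.float32)
label = int(np.argmax(pred))
print(label)  # 0
```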

I have checked two different images, running them through the model in both Python and Android. Python always gives the right result (as expected), but Android assigns wrong probabilities to the classes. Below is the output from both Python and Android for the same image and the same model (Python gives the right answer):

Python: array([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
Android: predLabels = {float[13]@9684} [NaN for every output]

I think the problem probably comes from my not parsing the image data correctly in Android. Can someone help me with this? Thanks!

skdhfgeq2134
  • I have tried to convert some HuggingFace Keras models, and from what I have seen, with `tf.lite.TFLiteConverter.from_keras_model(model)` the prediction function signatures in the converted TFLite model were different from those in the Keras model with checkpoints. `tf.lite.TFLiteConverter.from_concrete_functions([concrete_function], model)` is a better way – rudifus Apr 15 '22 at 23:16

0 Answers