
I have created a model in Google Colab that uses MobileNetV2 as the convolutional base. Then I converted it with this command:

path_to_h5 = working_dir + '/Tensorflow_PY_Model/SavedModel.h5'
path_tfjs = working_dir + '/TensorflowJS'

!tensorflowjs_converter --input_format keras \
                        {path_to_h5} \
                       {path_tfjs}

I used an image to test classification on both. In Python, I use the code below to make the prediction:

from google.colab import files
from io import BytesIO
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np

uploaded = files.upload()
last_uploaded = list(uploaded.keys())[-1]
im = Image.open(BytesIO(uploaded[last_uploaded]))

im = im.resize(size=(224,224))
img = np.array(im)
img = img / 255.0
prediction1 = model.predict(img[None,:,:])
print(prediction1)

That code above returns this array:

[[6.1504150e-05 4.8508531e-11 5.1813848e-15 2.1887154e-12 9.9993849e-01
  8.4171114e-13 1.4638757e-08 3.4268971e-14 7.5719299e-15 1.0649443e-16]]
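The highest probability is at index 4, which can be read off with `np.argmax` (the array below is just a copy of the output above, for illustration):

```python
import numpy as np

# Copy of the softmax output printed above, for illustration only
prediction1 = np.array([[6.1504150e-05, 4.8508531e-11, 5.1813848e-15, 2.1887154e-12,
                         9.9993849e-01, 8.4171114e-13, 1.4638757e-08, 3.4268971e-14,
                         7.5719299e-15, 1.0649443e-16]])

# Index of the most likely class for the first (and only) image in the batch
class_index = int(np.argmax(prediction1, axis=-1)[0])
print(class_index)  # → 4
```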

After that, I try to predict in JavaScript with the code below:

async function predict(image) {
    var model = await tf.loadLayersModel('./TFJs/model.json');
    let predictions = model.predict(preprocessImage(image)).dataSync();
    console.log(predictions);

    return predictions;
}

function preprocessImage(image) {
    let tensor = tf.browser.fromPixels(image);
    const resizedImage = tensor.resizeNearestNeighbor([224,224]);
    const batchedImage = resizedImage.expandDims(0);
    return batchedImage.toFloat().div(tf.scalar(255)).sub(tf.scalar(1));
}

document.querySelector('input[type="file"]').addEventListener("change", async function () {
        if (this.files && this.files[0]) {
            img = document.getElementById("uploaded-img");
            img.onload = () => {
                URL.revokeObjectURL(img.src); // no longer needed, free memory
            };
            img.src = URL.createObjectURL(this.files[0]);
            predictionResult = await predict(img);
            displayResult(predictionResult);
        }
    });

However, with the same image I used when predicting in Python, it returns the result below, and the result never changes no matter which image I choose.

Float32Array(10) [0.9489052295684814, 0.0036257198080420494, 0.000009185552698909305,
0.000029705168344662525, 0.04141413792967796, 1.4301890782775217e-9, 0.006003820803016424,
2.8357267645162665e-9, 0.000011812648153863847, 4.0659190858605143e-7]

So how can I fix this problem? What more should I do? Thanks in advance for the answers and suggestions!

  • Gah, that's an annoying issue. Are you sure the preprocessing is identical, for identical images? – Stanley Jul 29 '21 at 15:12
  • I think it's already identical, I have checked the shape result of the preprocessing and it's quite identical as in JS it's ([1,224,224,3]), meanwhile in python it's [None, 224, 224, 3]. – Dhana D. Jul 29 '21 at 15:28
  • Yea, the shape works. But maybe the image is the same every time, or some weird behavior. Is the image tensor itself identical? – Stanley Jul 29 '21 at 15:44
  • 1
    Thanks for that. I found that the image received by my web app was all 0s so that's why it behaved wrongly. – Dhana D. Jul 30 '21 at 19:14
  • Haha, knew it, done the same thing myself. No problem, best of luck. – Stanley Jul 30 '21 at 20:56

1 Answer


After debugging some possible causes, I realized that the problem was in this block of code:

document.querySelector('input[type="file"]').addEventListener("change", async function () {
        if (this.files && this.files[0]) {
            img = document.getElementById("uploaded-img");
            img.onload = () => {
                URL.revokeObjectURL(img.src); // no longer needed, free memory
            };
            img.src = URL.createObjectURL(this.files[0]);
            predictionResult = await predict(img);
            displayResult(predictionResult);
        }
    });

Initially, I wanted to automate it so that picking a file would instantly display the image and run the prediction in one pipeline. But that doesn't work: setting the src attribute only starts an asynchronous load, so when predict is called the img element still holds its previous (or empty) pixel data.

In my case, the whole block ran through to the prediction, and the uploaded image and the wrong prediction then appeared together. So I added a separate button just for predicting, took the prediction lines out of that block, and moved them into their own handler. It works well now.

document.querySelector('input[type="file"]').addEventListener("change", async function () {
        if (this.files && this.files[0]) {
            img = document.getElementById("uploaded-img");
            img.onload = () => {
                URL.revokeObjectURL(img.src); // no longer needed, free memory
            };
            img.src = URL.createObjectURL(this.files[0]);
        }
    });

document.querySelector('#predict-btn').addEventListener("click", async function ()  {
    img = document.getElementById("uploaded-img");
    predictionResult = await predict(img);
    displayResult(predictionResult);
});

I am still curious whether these functions can be combined into one pipeline, so there would be only one upload button and the rest would be done by the system.
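One way to get that single-step pipeline (a sketch, assuming the `predict()` and `displayResult()` functions above) is to run the prediction from inside the `onload` handler, which only fires after the newly selected file has actually loaded into the `<img>` element:

```javascript
document.querySelector('input[type="file"]').addEventListener("change", function () {
    if (this.files && this.files[0]) {
        const img = document.getElementById("uploaded-img");
        img.onload = async () => {
            URL.revokeObjectURL(img.src); // free the blob URL once the image is loaded
            // The pixels are guaranteed to be available here, so the
            // prediction no longer sees an empty (all-zero) image.
            const predictionResult = await predict(img);
            displayResult(predictionResult);
        };
        // Setting src starts the asynchronous load; onload runs when it finishes.
        img.src = URL.createObjectURL(this.files[0]);
    }
});
```

This keeps the display and the prediction in one flow without a second button, because the ordering is enforced by the load event rather than by the order of the statements.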
