I'm having trouble translating image preprocessing from TensorFlow in Python to TensorFlow.js.

In Python:
import numpy as np
from tensorflow.keras.preprocessing import image

single_coin = r"C:\temp\coins\20Saint-03o.jpg"

# Load the image at the model's input size and convert it to a float array
img = image.load_img(single_coin, target_size=(100, 100))
array = image.img_to_array(img)

# Add a batch dimension before predicting
x = np.expand_dims(array, axis=0)
vimage = np.vstack([x])

prediction = model.predict(vimage)
print(prediction[0])
I get the correct result:
[2.8914417e-05 3.5085387e-03 1.9252902e-03 6.2635467e-05 3.7389682e-03 1.2983804e-03 7.4157811e-04 1.4608903e-04 2.7099697e-06 1.1844193e-02 1.3398369e-04 9.3798796e-03 9.7308388e-05 7.3931034e-05 1.9695959e-04 9.6496813e-05 4.2653349e-04 8.7305409e-05 8.1476872e-04 4.9094640e-04 1.3498703e-04 9.6476960e-01]
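For reference, the Python pipeline above does no scaling of its own, so (if I understand the Keras image utilities correctly) the array that reaches model.predict should still hold raw pixel values in the 0–255 range. A quick check along these lines would confirm that:

# Sanity check of what the Python model actually receives;
# img_to_array returns float32 pixel values, so min/max should be roughly 0.0 and 255.0
print(vimage.dtype, vimage.shape)
print(vimage.min(), vimage.max())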
However, in TensorFlow.js, when I run the same image through the following preprocessing function:
function preprocess(img) {
  // Read pixels and resize to the model's 100x100 input, as floats
  let tensor = tf.browser.fromPixels(img);
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();

  // Scale by 255 and invert: output values are 1 - x/255
  const offset = tf.scalar(255.0);
  const normalized = tf.scalar(1.0).sub(resized.div(offset));

  // Add a batch dimension
  const batched = normalized.expandDims(0);
  return batched;
}
I get the following result:
[0.044167134910821915, 0.04726826772093773, 0.04546305909752846, 0.04596292972564697, 0.044733788818120956, 0.04367975518107414, 0.04373137652873993, 0.044592827558517456, 0.045657724142074585, 0.0449688546359539, 0.04648510739207268, 0.04426411911845207, 0.04494940862059593, 0.0457320399582386, 0.045905906707048416, 0.04473186656832695, 0.04691491648554802, 0.04441603645682335, 0.04782886058092117, 0.04696653410792351, 0.045027654618024826, 0.04655187949538231]
I'm obviously not translating the preprocessing appropriately. Does anyone see what I'm missing?
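One experiment I'm considering to narrow this down (untested sketch): mimic the TF.js normalization on the Python side and see whether the Keras model then produces a similarly flat output, which would point at the normalization step rather than the image decoding or resizing:

# Hypothetical check: apply the same 1 - x/255 transform in Python
# and feed it to the same Keras model for comparison
inverted = 1.0 - (vimage / 255.0)
prediction_inverted = model.predict(inverted)
print(prediction_inverted[0])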