I have written a TensorFlow script that trains an image classification model and saves it for later use. It is a sequential convolutional neural network that takes an image and classifies it into one of three output classes.
When I run the saved model on the same image file from the Spyder IDE and from the command prompt, I get significantly different predictions from each, even though both use the same virtual environment.
The resulting predictions for the three classes look like this:
Spyder IDE prediction: Class1 = 0.17 - Class2 = 0.05 - Class3 = 0.78
CMD prediction: Class1 = 0.03 - Class2 = 0.01 - Class3 = 0.96
Many thanks in advance if anyone knows why this happens. I am looking to learn here.
I don't think I am using dropout or anything of that kind that could cause this sort of randomness. I use the same virtual environment for both cases, so the TF version should be the same.
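One quick check to rule out in-process randomness (a minimal sketch; model and batch are placeholders for the loaded model and any preprocessed input batch) is to call predict twice in the same session and compare:

import numpy as np

# If two predict() calls in one process agree, the model itself is
# deterministic and the difference must come from the environment.
pred_a = model.predict(batch, verbose=0)
pred_b = model.predict(batch, verbose=0)
print(np.allclose(pred_a, pred_b))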
My script that creates the model:
import tensorflow as tf
import pandas as pd
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

training_data = tf.keras.utils.image_dataset_from_directory(data_dir,
                                                            validation_split=0.2,
                                                            subset="training",
                                                            batch_size=32,
                                                            image_size=(img_size, img_size),
                                                            seed=50)
validation_data = tf.keras.utils.image_dataset_from_directory(data_dir,
                                                              validation_split=0.2,
                                                              subset="validation",
                                                              batch_size=32,
                                                              image_size=(img_size, img_size),
                                                              seed=50)
# Get class names
class_names = training_data.class_names
print(class_names)
# Normalize pixel values between 0 & 1
# Each pixel is currently values between 0 & 255 for RGB colors
# Neural networks work best with normalized data
# Divide all values by the max value 255
norm_layer = tf.keras.layers.Rescaling(1/255.)
# Apply the division to all data in the training dataset
training_data_norm = training_data.map(lambda x, y: (norm_layer(x), y))
# Do same for test (validation) dataset
validation_data_norm = validation_data.map(lambda x, y: (norm_layer(x), y))
# Check for normalization
image_batch, labels_batch = next(iter(training_data_norm))
image_batch[0]  # Inspect one image; values should now be between 0 and 1
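As a quick sanity check (a sketch using TensorFlow's reduce ops), the minimum and maximum of a normalized image should fall within [0, 1]:

# Values should lie in [0, 1] after rescaling
print(tf.reduce_min(image_batch[0]).numpy(), tf.reduce_max(image_batch[0]).numpy())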
model_4 = tf.keras.models.Sequential([
    Conv2D(filters=10,
           kernel_size=3,  # Or (3, 3)
           activation="relu",
           # Input shape (height, width, colour channels)
           input_shape=(img_size, img_size, 3)),
    MaxPool2D(pool_size=2,  # Or (2, 2)
              padding="valid"),  # Or 'same'
    Conv2D(10, 3, activation="relu"),  # Filters, kernel size, activation
    MaxPool2D(),
    Conv2D(10, 3, activation="relu"),
    MaxPool2D(),
    Flatten(),
    Dense(3, activation="softmax")  # Three-class softmax output layer
])
# Compile the model
model_4.compile(loss="sparse_categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(),
                metrics=["accuracy"])
# Fit the model
history_4 = model_4.fit(training_data_norm,
                        epochs=epochs_tf,
                        # Step through 50 batches of 32
                        steps_per_epoch=len(training_data_norm),
                        # Validate while fitting
                        validation_data=validation_data_norm,
                        # Step through in batches of 32
                        validation_steps=len(validation_data_norm))
# Plot the model's training curves
pd.DataFrame(history_4.history).plot(figsize=(20, 10))
model_4.save(r"C:\nn")
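A sanity check I can add right after saving (a sketch reusing image_batch from above) is to reload the model and confirm it reproduces the in-memory model's predictions:

import numpy as np

# Reload the model just saved and compare its predictions to the
# in-memory model; if these match, the save/load round trip is fine.
reloaded = tf.keras.models.load_model(r"C:\nn")
print(np.allclose(model_4.predict(image_batch, verbose=0),
                  reloaded.predict(image_batch, verbose=0)))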
And this is how I run the model:
model = tf.keras.models.load_model(r"C:\nn")  # Load the saved model
my_image = tf.io.read_file(path)
my_image = tf.image.decode_image(my_image)  # Turn the file into a tensor
my_image = tf.image.resize(my_image, size=[model_dpi, model_dpi])  # Resize image
my_image = my_image / 255  # Normalize data
model.predict(tf.expand_dims(my_image, axis=0), verbose=0)
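The output is a softmax vector over the three classes, so the predicted label can be read off with argmax (a sketch; class_names is assumed to be redefined in this script, matching the order printed during training):

import numpy as np

pred = model.predict(tf.expand_dims(my_image, axis=0), verbose=0)
print(class_names[int(np.argmax(pred))])  # Most probable class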
I checked, and I am 100% sure that both systems are running the same file. I also made sure that both run in the same virtual environment, so the TF versions should be identical.
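To verify this at runtime, a small header like the following (just printing which interpreter and TensorFlow build each launcher actually uses) could be added to the top of the script and compared between the Spyder and CMD runs:

import sys
import tensorflow as tf

# These two lines should print identical values in Spyder and in CMD
# if both truly use the same virtual environment.
print(sys.executable)
print(tf.__version__)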