I'm trying to resize some images to use them with Inception. I want to do it as a separate preprocessing step to speed things up later. Running tf.image.resize on all of the images at once crashes, as does a loop.
I'd like to do it in batches, but I don't know how to do that without making it part of my neural network model; I want to keep it outside as a separate preprocessing step.
I made this:
import tensorflow as tf

# load InceptionV3 and keep the layer just before the classification head
inception = tf.keras.applications.inception_v3.InceptionV3(include_top=True, input_shape=(299, 299, 3))
inception = tf.keras.Model([inception.input], [inception.layers[-2].output])

# freeze Inception, since this is preprocessing rather than training
for layer in inception.layers:
    layer.trainable = False

# prepend a resizing layer so inputs of any size get scaled to 299x299
resize_incept = tf.keras.Sequential([
    tf.keras.layers.Resizing(299, 299),
    inception])
resize_incept.compile()
So can I just call it on my images? But then how do I batch it? When I call it like resize_incept(images), it crashes (too big), but if I call resize_incept(images, batch_size=25), it doesn't work (TypeError: call() got an unexpected keyword argument 'batch_size').
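From the docs, batch_size looks like an argument of predict() rather than of calling the model directly, so maybe something like this is the batched call I'm after (just a guess on my part; images here stands in for my array):

# predict() accepts batch_size and feeds the model in chunks internally,
# unlike calling the model directly like a function
features = resize_incept.predict(images, batch_size=25)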
EDIT: I'm trying to figure out if I can use tf.data.Dataset for this.
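Something like this sketch is what I have in mind, assuming from_tensor_slices and batch are the right calls (train_images and train_labels stand in for my arrays):

# wrap my (N, 32, 32, 3) array and labels in a Dataset, then batch it
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
train_dataset = train_dataset.batch(25)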
EDIT2: I've put my data (which was an array of shape (batch, 32, 32, 3)) into a tf.data.Dataset so I can try this:
train_dataset = train_dataset.map(lambda x, y: (resize_incept(x), y))
But when I try it, it gives me this error:
ValueError: Input 0 is incompatible with layer model_1: expected shape=(None, 299, 299, 3), found shape=(299, 299, 3)
The problem seems to be that whatever is coming out of the resize layer is somehow the wrong shape for going into the inception layer (what I'm putting in at first is (32, 32, 3), and there are no complaints about those dimensions). But the inception layer already has input_shape=(299, 299, 3), so I would think that's the shape it would take?
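Could it be that map() is applied per example, so resize_incept gets a single (32, 32, 3) image with no batch dimension? If so, maybe batching before mapping would fix it (again, just a guess):

# if batching happens first, map() should see (batch, 32, 32, 3)
# instead of a single (32, 32, 3) image
train_dataset = train_dataset.batch(25)
train_dataset = train_dataset.map(lambda x, y: (resize_incept(x), y))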