
I am training a model in TensorFlow with a variable batch size (input: [None, 320, 240, 3]). The problem is that during post-training quantization I cannot have any dynamic input dimension, so no "None", and with the edgetpu compiler I cannot have batch sizes greater than 1.

My current approach is to train for one more epoch with a fixed batch size of 1, but that is a bit tedious.

Is it somehow possible to change the input from [None, 320, 240, 3] to [1, 320, 240, 3] or [320, 240, 3] without having to train the model once more?

Jodo
  • You can always change batch_size after training; the model.predict() function has a parameter named batch_size, https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict – spb Oct 12 '21 at 15:53
  • I am not doing any predicting on the Keras model. I am doing post-training quantization and converting it to an Edge TPU model afterwards (https://www.tensorflow.org/lite/performance/post_training_quantization) – Jodo Oct 13 '21 at 06:19
  • @Jodo, have you figured out how to do it? – Gideon Kogan Sep 12 '22 at 08:00

1 Answer


A workaround that worked for me is creating a similar model with a specific batch size and copying the weights from the original model, as in the sketch below.
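A minimal sketch of the idea, assuming a Keras functional model; `build_model` here is a hypothetical stand-in for however you construct your architecture:

```python
import tensorflow as tf

# Hypothetical builder; replace with whatever function builds your architecture.
# The key point is that the same builder can produce the graph with either a
# dynamic or a fixed batch size via tf.keras.Input's batch_size argument.
def build_model(batch_size=None):
    inputs = tf.keras.Input(shape=(320, 240, 3), batch_size=batch_size)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10)(x)
    return tf.keras.Model(inputs, outputs)

model = build_model(batch_size=None)   # original model, trained with variable batch size
# ... train `model` as usual ...

fixed_model = build_model(batch_size=1)          # same architecture, batch size pinned to 1
fixed_model.set_weights(model.get_weights())     # copy trained weights; no retraining needed

# fixed_model can now go through post-training quantization / TFLite conversion:
converter = tf.lite.TFLiteConverter.from_keras_model(fixed_model)
```

This works because the weights of a convolutional or dense layer do not depend on the batch dimension, so get_weights()/set_weights() transfer cleanly between the two models as long as the layer order and shapes match.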

Gideon Kogan