I am training a model in TensorFlow with a variable batch size (input shape: [None, 320, 240, 3]). The problem is that post-training quantization does not allow a dynamic input dimension, so no "None", and the edgetpu compiler does not accept batch sizes greater than 1.
My current workaround is to train one more epoch with a fixed batch size of 1, but that is tedious.
Is it somehow possible to change the input shape from [None, 320, 240, 3] to [1, 320, 240, 3] or [320, 240, 3] without having to train the model again?
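Something along these lines is what I have in mind (just a sketch, the file name and variable names are placeholders, and I am not sure this is the right way to keep the trained weights):

```python
import tensorflow as tf

# Load the model that was trained with a [None, 320, 240, 3] input
trained_model = tf.keras.models.load_model("my_model.h5")

# Wrap it with a new input that has a fixed batch size of 1,
# reusing the trained weights without any further training
fixed_input = tf.keras.Input(shape=(320, 240, 3), batch_size=1)
fixed_output = trained_model(fixed_input)
fixed_model = tf.keras.Model(fixed_input, fixed_output)

# Hopefully this can then go through post-training quantization
# and the edgetpu compiler
converter = tf.lite.TFLiteConverter.from_keras_model(fixed_model)
```

Is this a valid approach, or is there a better/official way to do it?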