The documentation of tf.nn.conv2d_transpose says:
tf.nn.conv2d_transpose(
value,
filter,
output_shape,
strides,
padding='SAME',
data_format='NHWC',
name=None
)
The output_shape argument requires a 1-D tensor specifying the shape of the tensor output by this op. Since my conv-net has been built entirely on placeholders with a dynamic batch dimension, I can't seem to devise a workaround for the static batch_size requirement of output_shape for this op.
There are many discussions around the web about this, but I couldn't find a solid solution. Most of them are hacky, relying on a globally defined global_batch_size variable. I'd like to know the best possible way to handle this, since the trained model is going to be shipped as a deployed service.
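For reference, the workaround I've seen suggested most often is to read the batch size dynamically with tf.shape and build output_shape as a tensor rather than a Python list. A minimal sketch (the layer shapes, variable names, and stride here are hypothetical, just to illustrate the pattern):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Input with a dynamic (None) batch dimension, as in my conv-net.
x = tf.placeholder(tf.float32, [None, 8, 8, 16], name="x")

# Filter layout for conv2d_transpose is [height, width, out_channels, in_channels].
filt = tf.get_variable("filt", shape=[3, 3, 32, 16])

# tf.shape(x) is evaluated at run time, so this works for any batch size.
batch_size = tf.shape(x)[0]

# Build output_shape as a 1-D tensor; stride 2 doubles the spatial dims with SAME padding.
output_shape = tf.stack([batch_size, 16, 16, 32])

y = tf.nn.conv2d_transpose(
    x, filt, output_shape, strides=[1, 2, 2, 1], padding="SAME"
)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.zeros((5, 8, 8, 16), np.float32)})
    print(out.shape)  # batch of 5 passes through unchanged
```

The downside is that y then has a partially unknown static shape, which is what I'd like to avoid in a deployed service if there is a cleaner approach.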