I want to use a 2D convolutional layer in my network and feed it pictures as input. So I have a batch of pictures, which means an ndim=3 tensor, for example:
Dimensions of my input:

[10, 6, 7]

The value 10 is the batch size, and the other two values are the image size. So what is the fourth dimension that the conv2d layer requires?
Here are the relevant lines of code:
self.state_size = [6, 7]
self.inputs_ = tf.placeholder(tf.float32, shape=[None, *self.state_size], name="inputs_")
# Conv2D layer 1
self.conv1 = tf.layers.conv2d(inputs=self.inputs_,
                              filters=4,
                              kernel_size=[4, 4],
                              strides=[1, 1],
                              kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
Here is the error I get:

Input 0 of layer conv2d_1 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 6, 7]
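My guess is that the fourth dimension is the number of channels, but I am not sure. Here is a minimal sketch of what I would try, assuming grayscale images (so one channel); I dropped the self. prefix so the snippet is standalone, and this reshaping is only my guess, not a confirmed fix:

import tensorflow as tf

state_size = [6, 7]
# Guess: give the placeholder an explicit channel dimension (1 = grayscale)
inputs_ = tf.placeholder(tf.float32, shape=[None, *state_size, 1], name="inputs_")

# Alternative guess: keep the 3-D placeholder and append the channel axis before conv2d
# inputs_3d = tf.placeholder(tf.float32, shape=[None, *state_size], name="inputs_")
# inputs_4d = tf.expand_dims(inputs_3d, axis=-1)  # shape becomes [None, 6, 7, 1]

conv1 = tf.layers.conv2d(inputs=inputs_,
                         filters=4,
                         kernel_size=[4, 4],
                         strides=[1, 1],
                         kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())

Is adding the channel dimension like this the correct way to feed grayscale images to tf.layers.conv2d, or am I missing something else?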