
I have been using tf.contrib.layers.conv2d so far, but (e.g. to allow for weight decay on the filters, as discussed here) I want to switch to the tf.nn.conv2d implementation. However, I am confused about the parameters, since I apparently need to specify things that I didn't have to before.

With the docs and some SO entries I gave it a try. For 4D tensors of shape [batch_size, x, y, channels], are these two versions identical? I.e., am I correct in assuming that input_layer.shape[-1] gives the input_channels required in filter, and that strides must have one entry per dimension of my input tensor?

with tf.contrib.layers.conv2d (original)

down0a = tf.contrib.layers.conv2d(input_layer, n_features, (3, 3))
down0b = tf.contrib.layers.conv2d(down0a, n_features, (3, 3))
down0c = tf.contrib.layers.max_pool2d(down0b, (2, 2), padding='SAME')

with tf.nn.conv2d

down0a = tf.nn.conv2d(input_layer, filter=[3, 3, input_layer.shape[-1], n_features], strides=[1, 1, 1, 1], padding='SAME')
down0ar = tf.nn.relu(down0a)
down0b = tf.nn.conv2d(down0ar, filter=[3, 3, down0ar.shape[-1], n_features], strides=[1, 1, 1, 1], padding='SAME')
down0br = tf.nn.relu(down0b)
down0c = tf.nn.max_pool(down0br, [2, 2, down0br.shape[-1], n_features], strides=[1, 1, 1, 1], padding='SAME')
Honeybear

1 Answer


You seem to have gotten the shapes correct. The most obvious issue is that you aren't supposed to tell tf.nn.conv2d the filter *shape*; you're supposed to pass it the actual weight tensor:

down0w = tf.get_variable("down0w", shape=[3, 3, input_layer.shape[-1], n_features], initializer=tf.contrib.layers.xavier_initializer())
down0a = tf.nn.conv2d(input_layer, filter=down0w, strides=[1, 1, 1, 1], padding='SAME')
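Also note that tf.nn.max_pool takes ksize=[1, pool_h, pool_w, 1] (a window size per dimension, in NHWC order), not a filter-style shape with channel counts. As a quick sanity check of these parameter conventions without running TensorFlow, here is a NumPy sketch that mimics the shape behaviour of conv2d (stride 1, SAME padding) followed by ReLU and a 2x2 max pool. All sizes are hypothetical, and the pooling helper assumes even spatial dimensions; this is an illustration of the conventions, not the TF implementation.

```python
import numpy as np

# Hypothetical sizes for illustration only
batch, h, w, in_ch, n_features = 2, 8, 8, 3, 16

x = np.random.rand(batch, h, w, in_ch).astype(np.float32)

# tf.nn.conv2d expects a weight tensor of shape
# [filter_height, filter_width, in_channels, out_channels]
w0 = np.random.rand(3, 3, in_ch, n_features).astype(np.float32)

def conv2d_same(x, weights):
    """Shape-equivalent of tf.nn.conv2d(x, weights, strides=[1,1,1,1], padding='SAME')."""
    kh, kw = weights.shape[:2]
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw), (0, 0)))
    out = np.empty(x.shape[:3] + (weights.shape[3],), dtype=x.dtype)
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            patch = xp[:, i:i + kh, j:j + kw, :]  # [batch, kh, kw, in_ch]
            out[:, i, j, :] = np.tensordot(patch, weights,
                                           axes=([1, 2, 3], [0, 1, 2]))
    return out

def max_pool_2x2(x):
    """Shape-equivalent of tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME'),
    assuming even height/width."""
    b, h, w, c = x.shape
    return x.reshape(b, h // 2, 2, w // 2, 2, c).max(axis=(2, 4))

a = np.maximum(conv2d_same(x, w0), 0)  # conv + ReLU keeps spatial size under SAME padding
p = max_pool_2x2(a)                    # 2x2 pool halves height and width
print(a.shape)  # (2, 8, 8, 16)
print(p.shape)  # (2, 4, 4, 16)
```

So with strides=[1, 1, 1, 1] and padding='SAME' the spatial dimensions are preserved by the convolution, and it is the pooling step (via its own ksize/strides) that downsamples.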
David Parks