
When I use

def main_conv_nn(images, training):
    # Convolution

    convFilterShape = [3, 3, 1, 32]
    convFilterWeights = tf.Variable(tf.truncated_normal(convFilterShape, stddev=0.1))
    Layer1 = tf.nn.conv2d(images, convFilterWeights, strides=[1, 1, 1, 1], padding='SAME')

With this version, my MNIST code stays under 20% accuracy. Its performance is really bad.

However, when I change my code like this,

def main_conv_nn(images, training):
    # Convolution

    #convFilterShape = [3, 3, 1, 32]
    #convFilterWeights = tf.Variable(tf.truncated_normal(convFilterShape, stddev=0.1))
    #Layer1 = tf.nn.conv2d(images, convFilterWeights, strides=[1, 1, 1, 1], padding='SAME')

    Layer1 = tf.layers.conv2d(images, 32, [5, 5], padding='same')

it works perfectly.

Why does tf.nn.conv2d not work? (There is no error, but it behaves strangely.)

StandTall
  • The difference is because they use different kernel initialisers. The default for `layers.conv2d` is `variance_scaling_initializer`. – Vijay Mariappan Jul 21 '17 at 20:04
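The commenter's point can be sketched numerically. `tf.truncated_normal(..., stddev=0.1)` uses the same fixed scale for every filter shape, while a variance-scaling scheme ties the stddev to the filter's fan-in. A minimal pure-Python sketch, with hypothetical helper names (`fixed_stddev`, `variance_scaling_stddev`); TensorFlow's actual `variance_scaling_initializer` also applies a small correction for the truncated distribution, which this sketch ignores:

```python
import math

def fixed_stddev(filter_shape, stddev=0.1):
    # The question's approach: the same stddev regardless of filter size.
    return stddev

def variance_scaling_stddev(filter_shape, scale=1.0):
    # Fan-in for a conv filter [height, width, in_channels, out_channels]
    # is height * width * in_channels; the stddev shrinks as fan-in grows,
    # keeping activation variance roughly constant across layers.
    h, w, c_in, _ = filter_shape
    fan_in = h * w * c_in
    return math.sqrt(scale / fan_in)

print(fixed_stddev([3, 3, 1, 32]))              # 0.1, whatever the shape
print(variance_scaling_stddev([3, 3, 1, 32]))   # ~0.333 (fan-in is 9)
print(variance_scaling_stddev([5, 5, 64, 32]))  # much smaller (fan-in is 1600)
```

So the two snippets in the question start from differently scaled weights, which alone can change how (or whether) training converges.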

2 Answers


You may want to test under identical conditions first. The conv filter size is 5 x 5 in your layers example, and 3 x 3 in the first one. The 3 x 3 may be too small to capture some dependencies.

Zechrx
  1. tf.layers.conv2d is convolution + bias

  2. tf.nn.conv2d is convolution only
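The distinction above can be sketched with a toy 1-D convolution in pure Python (hypothetical names `conv1d_valid` and `conv1d_with_bias`): the bare operation is what `tf.nn.conv2d` gives you, while `tf.layers.conv2d` (with its default `use_bias=True`) also creates a bias variable and adds it to every output.

```python
def conv1d_valid(signal, kernel):
    # Plain sliding dot product, no bias -- analogous to tf.nn.conv2d.
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(n)]

def conv1d_with_bias(signal, kernel, bias):
    # Convolution plus a learned bias term -- analogous to what
    # tf.layers.conv2d does with use_bias=True (its default).
    return [y + bias for y in conv1d_valid(signal, kernel)]

x = [1.0, 2.0, 3.0, 4.0]
k = [1.0, 0.0, -1.0]
print(conv1d_valid(x, k))           # [-2.0, -2.0]
print(conv1d_with_bias(x, k, 0.5))  # [-1.5, -1.5]
```

With `tf.nn.conv2d` you would have to create the bias variable yourself and add it (e.g. via `tf.nn.bias_add`); forgetting it, on top of the different weight initialization, leaves the network with fewer learnable parameters than the `tf.layers.conv2d` version.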

Vladimir Bystricky