Is there any advantage in using tf.nn.* over tf.layers.*?

Most of the examples in the doc use tf.nn.conv2d, for instance, but it is not clear why they do so.

jul

6 Answers


As GBY mentioned, they use the same implementation.

There is a slight difference in the parameters.

For tf.nn.conv2d:

filter: A Tensor. Must have the same type as input. A 4-D tensor of shape [filter_height, filter_width, in_channels, out_channels]

For tf.layers.conv2d:

filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).

I would use tf.nn.conv2d when loading a pretrained model (example code: https://github.com/ry/tensorflow-vgg16), and tf.layers.conv2d for a model trained from scratch.
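For illustration, here is a minimal sketch of the two call styles (TF 1.x; the input and filter shapes are arbitrary):

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, [None, 28, 28, 3])

# tf.nn.conv2d: you create the filter variable yourself, shaped
# [filter_height, filter_width, in_channels, out_channels].
kernel = tf.get_variable('kernel', shape=[5, 5, 3, 64])
y_nn = tf.nn.conv2d(x, kernel, strides=[1, 1, 1, 1], padding='SAME')

# tf.layers.conv2d: you pass only the number of filters (an integer);
# the kernel variable is created internally.
y_layers = tf.layers.conv2d(x, filters=64, kernel_size=5, padding='same')

When loading pretrained weights, the explicit kernel tensor in the tf.nn version is what you would assign the stored values to.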

Mircea
  • Parameters `filter` and `filters` are exactly the difference! – Sergey Bushmanov Sep 08 '17 at 09:06
  • Isn't it also a major difference that for `tf.nn.conv2d` you explicitly need to specify the `input_channels` (as part of `filter`), whereas `tf.contrib.layers.conv2d` apparently determines this implicitly? Also, is there any case where `input_channels` is something other than `input.shape[-1]` (the last dim of the input)? – Honeybear Mar 28 '18 at 21:04
  • @Mircea Why would you use `tf.layers.conv2d` when building a model from scratch? `tf.nn.conv2d` lets you initialize the filters, which can speed up training. – Abhisek May 26 '18 at 13:58
  • I guess it is mostly a matter of preference. Note that tf.layers.conv2d also has the option to initialize the filters, like this: `layer = tf.layers.conv2d(..., kernel_initializer=tf.contrib.layers.xavier_initializer())`. The reason I prefer the tf.layers version is that I can specify kernel dimensions instead of building the tensor myself and passing it as a parameter. – Mircea Jun 10 '18 at 15:14

For convolution, they are the same. More precisely, tf.layers.conv2d (actually `_Conv`) uses tf.nn.convolution as the backend. You can follow the calling chain: `tf.layers.conv2d` > `Conv2D` > `Conv2D.apply()` > `_Conv` > `_Conv.apply()` > `_Layer.apply()` > `_Layer.__call__()` > `_Conv.call()` > `nn.convolution()` ...
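Since both end up in nn.convolution(), they give numerically identical results when fed the same kernel. A quick check (my own sketch, TF 1.x):

import numpy as np
import tensorflow as tf  # TF 1.x

np.random.seed(0)
x_val = np.random.rand(1, 8, 8, 3).astype(np.float32)
k_val = np.random.rand(3, 3, 3, 4).astype(np.float32)  # [h, w, in_channels, out_channels]

x = tf.constant(x_val)
y_nn = tf.nn.conv2d(x, tf.constant(k_val), strides=[1, 1, 1, 1], padding='SAME')
y_layers = tf.layers.conv2d(x, filters=4, kernel_size=3, padding='same',
                            use_bias=False,
                            kernel_initializer=tf.constant_initializer(k_val))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    a, b = sess.run([y_nn, y_layers])
    print(np.allclose(a, b))  # True: same kernel, same backend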

GBY
  • They are not the same, as they differ in the way they define a convolutional layer (see the answer below). – Sergey Bushmanov Sep 08 '17 at 09:07
  • You made the precondition "for convolution". Is max_pool the same, or what is the difference between the two there? – wolfog May 24 '18 at 03:14

As others mentioned, the parameters are different, especially "filter(s)". tf.nn.conv2d takes a tensor as its filter, which means you can specify the weight decay (or maybe other properties) as in the following snippet from the cifar10 code. (Whether you want/need weight decay in a conv layer is another question.)

# _variable_with_weight_decay is a helper from the TensorFlow cifar10 example:
# it creates the variable and, when wd > 0, adds an L2 weight-decay term
# to the 'losses' collection.
kernel = _variable_with_weight_decay('weights',
                                     shape=[5, 5, 3, 64],
                                     stddev=5e-2,
                                     wd=0.0)
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')

I'm not quite sure how to set weight decay in tf.layers.conv2d, since it only takes an integer as filters. Maybe using kernel_constraint? (A sketch using kernel_regularizer follows below.)

On the other hand, tf.layers.conv2d handles activation and bias automatically, while you have to write additional code for these if you use tf.nn.conv2d.
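A hedged sketch of what I would try instead (my assumption: kernel_regularizer, which the layers signature exposes, is the intended hook for weight decay; TF 1.x):

import tensorflow as tf  # TF 1.x

images = tf.placeholder(tf.float32, [None, 24, 24, 3])

# Weight decay expressed as an L2 kernel regularizer; `scale` plays the role of `wd`.
conv = tf.layers.conv2d(
    images, filters=64, kernel_size=5, padding='same',
    kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=1e-4))

# The regularization terms land in a collection and must be added to the loss yourself.
data_loss = tf.reduce_mean(tf.square(conv))  # stand-in for your real loss
reg_loss = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
total_loss = data_loss + reg_loss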

EXP0

All of the other replies talk about how the parameters differ, but actually the main difference between the tf.nn and tf.layers conv2d is that for tf.nn you need to create your own filter tensor and pass it in. This filter needs to have the shape [kernel_height, kernel_width, in_channels, num_filters].

Essentially, tf.nn is lower level than tf.layers. Unfortunately, this answer is not applicable anymore, as tf.layers is obsolete.
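For what it's worth (my addition, not part of the original answer): in TF 2.x the usual replacement for tf.layers.conv2d is tf.keras.layers.Conv2D, which keeps the same integer-number-of-filters style:

import tensorflow as tf  # TF 2.x

# tf.keras.layers.Conv2D is the successor to tf.layers.conv2d.
layer = tf.keras.layers.Conv2D(filters=64, kernel_size=5, padding='same',
                               activation='relu')
y = layer(tf.zeros([1, 28, 28, 3]))  # kernel/bias variables are created on first call
print(y.shape)  # (1, 28, 28, 64)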

Sagar Patil

DIFFERENCES IN PARAMETERS:

Using tf.layers.* in code:

# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu) 
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)

Using tf.nn.* in code (notice we need to create and pass weights and biases ourselves):

strides = 1
# Weights shape: [kernel_size (=5), kernel_size (=5), input_channels (=3), filters (=32)]
weights = tf.get_variable('weights', shape=[5, 5, 3, 32])
# Bias shape: [filters (=32)]
bias = tf.get_variable('bias', shape=[32])
out = tf.nn.conv2d(input, weights, padding="SAME", strides=[1, strides, strides, 1])
out = tf.nn.bias_add(out, bias)
out = tf.nn.relu(out)

Nikhil Banka
  • You can use activation like `tf.nn.conv2d(input, weights, activation=tf.nn.relu)` instead of `out = tf.nn.relu(out)`. – twostarxx Oct 16 '20 at 08:20

Take a look here: tensorflow > tf.layers.conv2d

and here: tensorflow > conv2d

As you can see, the arguments to the layers version are:

tf.layers.conv2d(inputs, filters, kernel_size, strides=(1, 1), padding='valid', data_format='channels_last', dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer=None, bias_initializer=tf.zeros_initializer(), kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, trainable=True, name=None, reuse=None)

and the nn version:

tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)

I think you can choose the one with the options you want/need/like!
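For example, if you want the conveniences listed in the layers signature above bundled into one call (a sketch, TF 1.x; the Xavier initializer echoes Mircea's comment):

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, [None, 32, 32, 3])

# One call covers activation, initialization, and regularization;
# with tf.nn.conv2d each of these would be a separate step.
y = tf.layers.conv2d(
    x, filters=64, kernel_size=3, padding='same',
    activation=tf.nn.relu,
    kernel_initializer=tf.contrib.layers.xavier_initializer(),
    kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-4))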

rmeertens
  • I read the docs, but I was just wondering if there is an advantage in using tf.nn.conv2d plus the initializers and all the functionality provided by tf.layers, over tf.layers.conv2d, e.g. whether it is faster. – jul Mar 14 '17 at 13:39