Core TensorFlow has no comparable mechanism for defining default argument values, so you have to specify the arguments explicitly for each layer.
For instance, this code:
with slim.arg_scope([slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005)):
  x = slim.fully_connected(x, 800)
  x = slim.fully_connected(x, 1000)
would become:
x = tf.layers.dense(x, 800, activation=tf.nn.relu,
                    kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))
x = tf.layers.dense(x, 1000, activation=tf.nn.relu,
                    kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))
Alternatively:
with tf.variable_scope('fc',
                       initializer=tf.truncated_normal_initializer(stddev=0.01)):
  x = tf.layers.dense(x, 800, activation=tf.nn.relu,
                      kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))
  x = tf.layers.dense(x, 1000, activation=tf.nn.relu,
                      kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))
Make sure to read the documentation of each layer to see which of its initializers fall back to the variable scope initializer. For example, the dense layer's kernel_initializer defaults to the variable scope initializer when one is set, while its bias_initializer defaults to tf.zeros_initializer() regardless.
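To see this behavior in practice, here is a minimal sketch (assuming TF 1.x graph mode in a fresh graph, so the layer gets the default name 'dense'; the variable-name lookups are illustrative, not part of the migration itself). The kernel picks up the scope's truncated-normal initializer, while the bias stays at zero:
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 10])
with tf.variable_scope('fc',
                       initializer=tf.truncated_normal_initializer(stddev=0.01)):
  y = tf.layers.dense(x, 800, activation=tf.nn.relu)

# Assumes the default layer name 'dense' inside the 'fc' scope.
vars_by_name = {v.name: v for v in tf.global_variables()}
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  kernel = sess.run(vars_by_name['fc/dense/kernel:0'])
  bias = sess.run(vars_by_name['fc/dense/bias:0'])
  print(np.std(kernel))      # close to 0.01: kernel uses the scope initializer
  print(np.abs(bias).max())  # 0.0: bias uses tf.zeros_initializer()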