This is the guide for making a custom estimator in TensorFlow: https://www.tensorflow.org/guide/custom_estimators
The hidden layers are built with tf.nn.relu as the activation:
# Build the hidden layers, sized according to the 'hidden_units' param.
for units in params['hidden_units']:
    net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
I altered the example a bit to learn XOR, with hidden_units=[4] and n_classes=2. When the activation function is changed to tf.nn.sigmoid, the example still works as before. Why is that? Is it still giving the correct result because the XOR inputs are just zeros and ones?
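For context, here is a minimal sketch of my altered setup. The model_fn structure follows the linked guide; the input pipeline, learning rate, and step count are details I filled in just to make it self-contained, not part of the guide itself:

import numpy as np
import tensorflow as tf  # TF 1.x API, as in the linked guide

def model_fn(features, labels, mode, params):
    # Input layer from the feature columns, then the hidden layers,
    # as in the guide but with sigmoid instead of relu.
    net = tf.feature_column.input_layer(features, params['feature_columns'])
    for units in params['hidden_units']:
        net = tf.layers.dense(net, units=units, activation=tf.nn.sigmoid)
    logits = tf.layers.dense(net, params['n_classes'], activation=None)

    predicted_classes = tf.argmax(logits, 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode, predictions={'class_ids': predicted_classes})

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# The four XOR input pairs and their labels.
x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]], dtype=np.float32)
y = np.array([0, 1, 1, 0], dtype=np.int32)

classifier = tf.estimator.Estimator(
    model_fn=model_fn,
    params={'feature_columns': [tf.feature_column.numeric_column('x', shape=[2])],
            'hidden_units': [4],
            'n_classes': 2})

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': x}, y=y, batch_size=4, num_epochs=None, shuffle=True)
classifier.train(input_fn=train_input_fn, steps=2000)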
Both activation functions give smooth loss curves that converge to zero.
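To check that "works as usual" means correct XOR outputs and not just a flat loss curve, I sanity-check the predictions on the four inputs (this reuses classifier and x from the sketch above):

# Predict on the same four points; correct XOR output is 0, 1, 1, 0.
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': x}, num_epochs=1, shuffle=False)
for inp, pred in zip(x, classifier.predict(input_fn=predict_input_fn)):
    print(inp, '->', pred['class_ids'])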