
Usually, an activation function is applied to all neurons of a given layer as in

layer = tf.nn.relu(layer)

How can I apply an activation function to say the second neuron only?

How can I apply a specific transformation (say tf.exp()) to a specific neuron only?

Slicing a column does not seem to apply here, since slicing a column would require knowing the number of rows, which is unknown at graph-construction time.

Maxim
user11634
  • What particular neuron do you have in mind? Any one? – Maxim Oct 22 '17 at 12:48
  • More precisely: The last layer of my neural network has two outputs, and I would like to pass the second output through the function log( 1+exp() ) to get strictly positive values, but the problem is fairly general (the answer to your question is thus: Any one). Nevertheless, I haven't found any solution on the web. – user11634 Oct 22 '17 at 14:06
  • So your output layer shape is `[?, 2]`, right? And you'd like to pass the whole `[:, 0]` slice through some activation function? – Maxim Oct 22 '17 at 14:10
  • Correct: the shape is [?,2] and I would like to pass say [:,0] (in general [:,p]) through an activation function or better use TensorFlow operations since tf.log( 1 + tf.exp() ) isn't an activation function right now. Obviously, I could construct such an activation function, but if I can avoid this complexity, it would be better. – user11634 Oct 22 '17 at 14:48

1 Answer


You can slice dynamically-shaped tensors just like statically-shaped ones. Here I stripped everything down to a [?, 2] tensor and its 0-slice:

import numpy as np
import tensorflow as tf

x = tf.placeholder(dtype=tf.float32, shape=[None, 2], name='x')
layer = tf.nn.relu(x)
slice = layer[:, 0]
activation = tf.log(1 + tf.exp(slice))

with tf.Session() as session:
  session.run(tf.global_variables_initializer())
  layer_val, slice_val, activ_val = session.run([layer, slice, activation],
                                                feed_dict={x: np.random.randn(10, 2)})
  print(layer_val[:, 0])
  print(slice_val)
  print(activ_val)

You should see that layer_val[:, 0] is the same as slice_val, and that activ_val is its transformation. By the way, tf.log(1 + tf.exp(...)) is also available directly as tf.nn.softplus.

Maxim
  • Thanks. I think I was mentally blocked because I wanted to have the slice back into the initial tensor, but I do not need that after all. Just out of curiosity (but it could be interesting in the future), would it be possible to put the slice "activation" back into layer[:,0] ? – user11634 Oct 22 '17 at 15:50
  • Sure, you can stack the two slices back - https://www.tensorflow.org/api_docs/python/tf/stack – Maxim Oct 22 '17 at 15:54
  • Thanks; I got an error message with `tf.stack()` so I went to `tf.concat()`, and it works as expected. – user11634 Oct 22 '17 at 18:06