
I am trying to implement the gradient projection technique from *Mitigating Unwanted Biases with Adversarial Learning*.

The model architecture is:

  • 1) Input layer
  • 2) Dense fixed-length layer
  • 3) Custom gradient-projection layer
    • 4a) Task 1 layers
    • 4b) Task 2 layers (the adversarial task)

I would like to manipulate the gradients from tasks 1 and 2 with a custom layer (3). Currently I plan to have something like this in the `call` of the custom layer:

```python
import tensorflow as tf
from keras import backend as K

@tf.RegisterGradient('blah')
def proj_gradients(op, grad):
    return grad[0] - grad[1]

g = K.get_session().graph
with g.gradient_override_map({'Identity': 'blah'}):
    y = tf.identity(X)
```

Is there a more intuitive Keras way for doing this?

Samarth Bharadwaj
    There is a link [here](https://github.com/tensorflow/tensorflow/issues/20630); I hope it helps. There is also a good example [here](https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb#scrollTo=d4tSNwymzf-q). – Amir Aug 29 '18 at 20:01

1 Answer


`tf.custom_gradient` is the best tool to use here. It lets you define the gradient function at the call site, has a nicer interface than `gradient_override_map`, and works well with eager execution.
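A minimal sketch of what that can look like, assuming a TensorFlow build with eager execution and `tf.keras`. The sign flip below is only a placeholder for the paper's projection step: in your layer you would replace `-dy` with the projected gradient (e.g. subtracting the component along the adversary's gradient).

```python
import tensorflow as tf

@tf.custom_gradient
def flip_gradient(x):
    """Identity on the forward pass; custom behaviour on the backward pass."""
    def grad(dy):
        # Placeholder for the projection step: here we simply negate the
        # incoming gradient, as in a gradient-reversal layer.
        return -dy
    return tf.identity(x), grad

class GradientFlip(tf.keras.layers.Layer):
    """Keras layer wrapping the custom-gradient op, usable at position (3)."""
    def call(self, inputs):
        return flip_gradient(inputs)
```

You can check the backward behaviour directly with `tf.GradientTape`: for `y = sum(flip_gradient(x) ** 2)` the gradient comes back as `-2 * x` instead of `2 * x`.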

Alexandre Passos