I am trying to replace the computation done in the graph with a custom op that does the same thing.

Let's say the graph has a constant A and a weight variable W. I create the custom op to take these two inputs and do the entire computation (except for the last step, the weight update):

custom_op_tensor = custom_module.custom_op([A, W])
g_def = tf.get_default_graph().as_graph_def()
# replaced_tensor is the output of the subgraph the custom op replaces
input_map = {replaced_tensor.name: custom_op_tensor}
train_op, = tf.import_graph_def(g_def, input_map=input_map,
                                return_elements=[train_op.name])

After import_graph_def, there are two W's: one from the original graph def and one in the imported graph. When we run the train op, the custom op ends up reading the old W while the new W is the one that gets updated. As a result, gradient descent fails to do the right thing.

The problem is that instantiating custom_op requires the input weight tensor W, the new W is known only after the import, and the import itself requires the custom op. How does one get around this circular dependency?
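One idea, sketched below and untested: pass W through the input_map as well, so the import reuses the original variable instead of creating a second copy. Then the W the custom op reads and the W the train op updates are the same variable (replaced_tensor is, as above, a stand-in name for the output the custom op replaces):

custom_op_tensor = custom_module.custom_op([A, W])
g_def = tf.get_default_graph().as_graph_def()
input_map = {
    replaced_tensor.name: custom_op_tensor,  # splice in the custom op
    W.name: W,  # reuse the original W instead of importing a duplicate
}
train_op, = tf.import_graph_def(g_def, input_map=input_map,
                                return_elements=[train_op.name])

Whether the ref-typed input of the variable update op can be remapped this way may depend on the TensorFlow version, so treat this as a starting point rather than a confirmed fix.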

Anil Shanbhag
    You are asking how to replace an op in the graph with another op. Until recently, graphs were append-only and it was not possible to do this. However, a graph editor library has since been added; perhaps there's some function there that can help -- https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.graph_editor.html#library-overview – Yaroslav Bulatov Oct 06 '16 at 00:06
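For reference, a minimal sketch of that approach, assuming the swap_ts function from the r0.11 graph editor linked above (original_output is a stand-in for the tensor the custom op replaces):

from tensorflow.contrib import graph_editor as ge

custom_op_tensor = custom_module.custom_op([A, W])
# Swap the consumers of the two tensors. Since custom_op_tensor has no
# consumers yet, this just makes everything that read original_output
# (including the training ops) read the custom op instead; W is never
# duplicated because nothing is re-imported.
ge.swap_ts([custom_op_tensor], [original_output])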

1 Answer


Could you specify which version of TensorFlow you are using: r0.8, r0.9, r0.10, or r0.11?

It is not possible to replace an op in the graph with another op. But if you can access W, you can still make a backup copy of W (using deepcopy() from the copy module) before running the train op that updates it.
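Note that deep-copying the Variable object itself would not snapshot its value; one way to take such a backup, sketched here assuming a live session sess, is to evaluate W to a NumPy array first:

import copy

w_backup = copy.deepcopy(sess.run(W))  # snapshot W's current value
sess.run(train_op)                     # W gets updated in the graph
# w_backup still holds the pre-update value of W.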

Regards

A. STEFANI