
I have successfully exported a re-trained InceptionV3 NN as a TensorFlow meta graph. I have read this protobuf back into Python successfully, but I am struggling to see a way to export each layer's weight and bias values, which I am assuming are stored within the meta graph protobuf, for recreating the NN outside of TensorFlow.

My workflow is as follows:

  1. Retrain the final layer for new categories
  2. Export the meta graph: tf.train.export_meta_graph(filename='model.meta')
  3. Build the Python pb2.py using protoc and meta_graph.proto
  4. Load the protobuf:

import meta_graph_pb2
saved = meta_graph_pb2.MetaGraphDef()  # a .meta file holds a MetaGraphDef message
with open('model.meta', 'rb') as f:
  saved.ParseFromString(f.read())

From here I can view most aspects of the graph, like node names and such, but I think my inexperience is making it difficult to track down the correct way to access the weight and bias values for each relevant layer.
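For reference, this is roughly how I am walking the parsed proto so far. The `node_summaries` helper name is my own; it assumes the file parses as a `MetaGraphDef` message whose `graph_def` field holds the nodes:

```python
def node_summaries(graph_def):
    """Return (name, op) pairs for every node in a GraphDef-like message."""
    return [(node.name, node.op) for node in graph_def.node]

# Usage against the proto parsed above (assumes model.meta and the compiled
# pb2 module from the steps above):
#   saved = meta_graph_pb2.MetaGraphDef()
#   with open('model.meta', 'rb') as f:
#       saved.ParseFromString(f.read())
#   for name, op in node_summaries(saved.graph_def):
#       print(name, op)
```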

Vinny M

1 Answer


The MetaGraphDef proto doesn't actually contain the values of the weights and biases. Instead it provides a way to associate a GraphDef with the weights stored in one or more checkpoint files, written by a tf.train.Saver. The MetaGraphDef tutorial has more details, but the approximate structure is as follows:

  1. In your training program, write out a checkpoint using a tf.train.Saver. This will also write a MetaGraphDef to a .meta file in the same directory.

    saver = tf.train.Saver(...)
    # ...
    saver.save(sess, "model")
    

    You should find files called model.meta and model-NNNN (for some integer NNNN) in your checkpoint directory.

  2. In another program, you can import the MetaGraphDef you just created, and restore from a checkpoint.

    saver = tf.train.import_meta_graph("model.meta")
    with tf.Session() as sess:
      saver.restore(sess, "model-NNNN")  # Or whatever checkpoint filename was written.
    

    If you want to get the value of each variable, you can (for example) find the variable in the tf.all_variables() collection and pass it to sess.run() to get its value. For example, to print the values of all variables, you can do the following:

    for var in tf.all_variables():
      print(var.name, sess.run(var))
    

    You could also filter tf.all_variables() to find the particular weights and biases that you're trying to extract from the model.
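As a minimal sketch of that filtering, the helper below picks out parameter-like names. The `select_params` name and the "weights"/"biases" patterns are assumptions: they match how the Inception graphs tend to name their variables, but your graph may use different naming.

```python
def select_params(var_names, keywords=("weights", "biases")):
    """Return the variable names that look like layer weights or biases.

    The keyword patterns are an assumption about the graph's naming scheme.
    """
    return [n for n in var_names if any(k in n for k in keywords)]

# Usage inside a session (TF 1.x-era API, matching the answer above):
#   for var in tf.all_variables():
#       if select_params([var.name]):
#           print(var.name, sess.run(var).shape)
```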

mrry
  • Thank you, this is a huge help. I am attempting to recreate a trained network and deploy it like this new [iOS example](https://developer.apple.com/library/prerelease/content/samplecode/MetalImageRecognition/Introduction/Intro.html). Given your expertise, is this the right way to go about establishing the network parameters (binary .dat files for each layer's weight and bias as arrays of floats) ? End goal will be a TensorFlow trained network, inference on Metal. – Vinny M Aug 24 '16 at 23:56
  • Hmm, it depends on how complex the network is. You might be able to go straight to the checkpoint (e.g. using `tf.train.NewCheckpointReader()`) and read out NumPy arrays from the checkpoint file, bypassing the `MetaGraphDef`. Indeed you might even be able to use the C++ implementation of the `CheckpointReader` in your iOS program, if you are able to link it in (though I'm not sure how difficult that linking would be). – mrry Aug 24 '16 at 23:59
  • The `tf.train.NewCheckpointReader()` is a wonderful API. I loaded it with an inception_v3 checkpoint hoping to recreate the arrays stored in the aforementioned dat files. There is a disconnect for me as to how they are producing their network parameters. They load in 190 files (1 bias, 1 weight file for each node) as network parameters, yet TF has over 1200 stored variables for the inception model. Any pointers to bridge the gap? – Vinny M Aug 25 '16 at 22:24
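Building on the `tf.train.NewCheckpointReader()` suggestion, one possible sketch for dumping each checkpoint variable to a flat binary file of floats. The `dat_filename` helper and its naming scheme are hypothetical; adapt both to whatever filenames the Metal sample expects.

```python
def dat_filename(var_name):
    """Map a checkpoint variable name to a flat .dat filename.

    Hypothetical naming scheme: slashes and colons become underscores.
    """
    return var_name.replace("/", "_").replace(":", "_") + ".dat"

# Usage with tf.train.NewCheckpointReader (assumes a checkpoint prefix
# "model-NNNN" exists; untested sketch):
#   import numpy as np
#   import tensorflow as tf
#   reader = tf.train.NewCheckpointReader("model-NNNN")
#   for name in reader.get_variable_to_shape_map():
#       arr = np.asarray(reader.get_tensor(name), dtype=np.float32)
#       arr.tofile(dat_filename(name))
```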