
I am new to TensorFlow Hub. I want to use the I3D module, fine-tune it on another dataset, and get the last hidden layer's output as well as the outputs of some other layers. Is there a way to get those other layers' activations? The only signature provided for I3D is "default". I would expect TensorFlow Hub modules to offer an easy way to get the output of every layer.

import tensorflow as tf
import tensorflow_hub as hub
# Input placeholder (shape is an assumption: a batch of RGB clips).
inp = tf.placeholder(tf.float32, [None, None, 224, 224, 3])
module = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-600/1", trainable=False)
logits = module(inp)

This gives me the final layer's output. How can I get the other layers' outputs, for example the second convolution layer's output?

Siavash
  • I have the same question. Have you found any solution? – Wubin Nov 27 '18 at 07:49
  • Actually, my workaround was to load the module with TF Hub and also reimplement the network from their GitHub code at https://github.com/deepmind/kinetics-i3d. I manually checked that I build the same graph their code creates, then used tf.assign operations to copy all of the module's variables into my own model. You can then save your model's variables and load them directly next time, without the TF Hub module, for better memory utilization (a sketch of this approach follows below). – Siavash Nov 28 '18 at 15:39
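
A rough sketch of the workaround described in the comment above, assuming i3d.py from the kinetics-i3d repository is importable and that variable names can be matched up between the two graphs (both are assumptions you need to check against your own setup):

import tensorflow as tf
import tensorflow_hub as hub
import i3d  # assumption: i3d.py from https://github.com/deepmind/kinetics-i3d on the path

inputs = tf.placeholder(tf.float32, [None, None, 224, 224, 3])  # assumed RGB clip shape

# Your own copy of the network, whose intermediate end points are accessible.
with tf.variable_scope('RGB'):
    model = i3d.InceptionI3d(num_classes=600, final_endpoint='Logits')
    my_logits, end_points = model(inputs, is_training=False)

# The published module, used here only as a source of pretrained weights.
module = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-600/1")

# Pair variables by name and copy the module's weights in with tf.assign.
# The matching below is a guess; print both sides and adjust it to however
# your variable scopes actually line up.
own_vars = {v.op.name: v for v in tf.global_variables() if v.op.name.startswith('RGB/')}
assign_ops = [tf.assign(own_vars[name], hub_var)
              for name, hub_var in module.variable_map.items()
              if name in own_vars]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_ops)
    # end_points now exposes intermediate activations, e.g. end_points['Conv3d_2c_3x3'],
    # and tf.train.Saver can save the copied weights so the Hub module is not
    # needed the next time.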

2 Answers


https://tfhub.dev/deepmind/i3d-kinetics-400/1 (and likewise the *-600 version) happens to export only the final layer, so there is no properly supported way to get the other layers. (That said, you can always experiment by inspecting the graph and selecting tensors by name, but this runs a real risk of breaking with newer module or library versions.)
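
For reference, a rough sketch of that kind of experiment. Everything here is unsupported: the filter on 'Conv3d' and the commented tensor name are illustrative guesses, not names the module guarantees.

import tensorflow as tf
import tensorflow_hub as hub

inp = tf.placeholder(tf.float32, [None, None, 224, 224, 3])  # assumed input shape
module = hub.Module("https://tfhub.dev/deepmind/i3d-kinetics-600/1")
logits = module(inp)

# List candidate intermediate operations to find the layer you are after.
graph = tf.get_default_graph()
for op in graph.get_operations():
    if 'Conv3d' in op.name and op.type == 'Relu':
        print(op.name, op.outputs[0].shape)

# Once you know a name, fetch that tensor directly (the name below is a guess,
# not something the module guarantees):
# conv2 = graph.get_tensor_by_name(
#     'module_apply_default/RGB/inception_i3d/Conv3d_2c_3x3/Relu:0')
# with tf.Session() as sess:
#     sess.run(tf.global_variables_initializer())
#     activations = sess.run(conv2, feed_dict={inp: my_video_batch})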

arnoegw
  • Can you please tell me a little more about the inspecting-the-graph and selecting-tensors-by-name part? How can I do that? Should I build the model myself from scratch and use the pre-initialized variables as weights, biases, etc.? – Siavash Sep 13 '18 at 17:42

You can get the other layers by name. Using Inception-v3 as an example:

import tensorflow as tf
import tensorflow_hub as hub

inp = tf.placeholder(tf.float32, [None, 299, 299, 3])  # assumed 299x299 RGB input
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
logits = module(inp, as_dict=True)  # as_dict=True returns every named output, not just the default

logits is now a dictionary containing the model's exported layers. You can view them by calling items():

print(logits.items())

This outputs a dictionary containing all the layers in the graph, a few of which are shown below:

dict_items([
('InceptionV3/Mixed_6c', <tf.Tensor 'module_2_apply_image_feature_vector/InceptionV3/InceptionV3/Mixed_6c/concat:0' shape=(1, 17, 17, 768) dtype=float32>), 
('InceptionV3/Mixed_6d', <tf.Tensor 'module_2_apply_image_feature_vector/InceptionV3/InceptionV3/Mixed_6d/concat:0' shape=(1, 17, 17, 768) dtype=float32>), 
('InceptionV3/Mixed_6e', <tf.Tensor 'module_2_apply_image_feature_vector/InceptionV3/InceptionV3/Mixed_6e/concat:0' shape=(1, 17, 17, 768) dtype=float32>),
('default', <tf.Tensor 'module_2_apply_image_feature_vector/hub_output/feature_vector/SpatialSqueeze:0' shape=(1, 2048) dtype=float32>),     
('InceptionV3/MaxPool_5a_3x3', <tf.Tensor 'module_2_apply_image_feature_vector/InceptionV3/InceptionV3/MaxPool_5a_3x3/MaxPool:0' shape=(1, 35, 35, 192) dtype=float32>)])

Usually to get the last layer, you would use default:

sess.run(logits['default'])

But you can just as easily get other layers using their name:

sess.run(logits['InceptionV3/MaxPool_5a_3x3'])
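
For completeness, here is a minimal end-to-end sketch of the above; the 299x299 input size in [0, 1] and the as_dict=True call are assumptions based on the Inception V3 feature-vector module's documentation.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

inp = tf.placeholder(tf.float32, [None, 299, 299, 3])  # assumed 299x299 RGB input in [0, 1]
module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
logits = module(inp, as_dict=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    images = np.random.rand(2, 299, 299, 3).astype(np.float32)  # stand-in batch
    features, pool_5a = sess.run(
        [logits['default'], logits['InceptionV3/MaxPool_5a_3x3']],
        feed_dict={inp: images})
    print(features.shape)  # (2, 2048)
    print(pool_5a.shape)   # (2, 35, 35, 192)
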
ysbecca
    Thank you. You can do this for Inception V3 because its feature-vector module is available on TensorFlow Hub and exposes those outputs, but the same is not true for I3D. – Siavash Nov 10 '18 at 21:08