I trained a Faster R-CNN from the TF Object Detection API and saved it using export_inference_graph.py. I have the following directory structure:
weights
|-checkpoint
|-frozen_inference_graph.pb
|-model.ckpt.data-00000-of-00001
|-model.ckpt.index
|-model.ckpt.meta
|-pipeline.config
|-saved_model
|--saved_model.pb
|--variables
I would like to load the first and second stages of the model separately. That is, I would like the following two models:

1. A model containing each variable in the scope FirstStageFeatureExtractor, which accepts an image (or serialized tf.data.Example) as input and outputs the feature map and RPN proposals.
2. A model containing each variable in the scopes SecondStageFeatureExtractor and SecondStageBoxPredictor, which accepts a feature map and RPN proposals as input and outputs the bounding box predictions and scores.
I basically want to be able to call _predict_first_stage and _predict_second_stage separately on my input data.
Currently, I only know how to load the entire model:
model = tf.saved_model.load("weights/saved_model")
model = model.signatures["serving_default"]
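For reference, this is roughly how I run it end to end (assuming the default image_tensor export, so the serving signature takes a uint8 batch under the key "inputs"; the exact input and output keys may differ depending on how the graph was exported):

import tensorflow as tf

model = tf.saved_model.load("weights/saved_model")
detect_fn = model.signatures["serving_default"]

# Inspect what the exported signature expects and returns.
print(detect_fn.structured_input_signature)
print(detect_fn.structured_outputs)

# Run the full two-stage model on a dummy uint8 image batch.
# Output keys such as detection_boxes/detection_scores depend on the export.
image = tf.zeros([1, 480, 640, 3], dtype=tf.uint8)
outputs = detect_fn(inputs=image)
print(outputs["detection_boxes"].shape)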
EDIT 6/7/2020:
For Model 1, I may be able to extract detection_features as in this question, but I'm still not sure about Model 2.
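One direction I'm considering is rebuilding the architecture from pipeline.config with the API's model_builder and restoring the checkpoint myself. This is a rough, untested sketch in TF1 graph mode (since export_inference_graph.py is a TF1 tool); the prediction-dict keys are taken from FasterRCNNMetaArch.predict and may vary between API versions:

import numpy as np
import tensorflow.compat.v1 as tf
from object_detection.builders import model_builder
from object_detection.utils import config_util

tf.disable_eager_execution()

# Rebuild the FasterRCNNMetaArch from the training config so that variable
# names match the checkpoint.
configs = config_util.get_configs_from_pipeline_file("weights/pipeline.config")
detection_model = model_builder.build(configs["model"], is_training=False)

image = tf.placeholder(tf.float32, shape=[1, None, None, 3])
preprocessed, true_shapes = detection_model.preprocess(image)

# predict() chains _predict_first_stage and _predict_second_stage internally
# and merges their outputs into a single dict.
prediction_dict = detection_model.predict(preprocessed, true_shapes)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "weights/model.ckpt")
    rpn_outputs = sess.run(
        {k: prediction_dict[k] for k in ("rpn_features_to_crop", "proposal_boxes")},
        feed_dict={image: np.zeros((1, 480, 640, 3), np.float32)})

Even if this works, both stages still live in the same graph, so it doesn't by itself give me the two standalone models described above.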