I am having trouble running inference from a TensorFlow 2.0 SavedModel loaded via the C API, because I cannot access the input and output operations by name.

I load the session via TF_LoadSessionFromSavedModel(...) successfully:

#include <tensorflow/c/c_api.h>

...

TF_Status* status = TF_NewStatus();
TF_Graph*  graph  = TF_NewGraph();
TF_Buffer* r_opts = TF_NewBufferFromString("",0);
TF_Buffer* meta_g = TF_NewBuffer();

TF_SessionOptions* opts = TF_NewSessionOptions();
const char* tags[] = {"serve"};

TF_Session* session = TF_LoadSessionFromSavedModel(opts, r_opts, "saved_model/tf2_model", tags, 1, graph, meta_g, status);

if (TF_GetCode(status) != TF_OK) exit(-1); // does not happen

However, I get an error when trying to set up the input and output tensors using:

TF_Operation* inputOp  = TF_GraphOperationByName(graph, "input");      // works with "serving_default_input"
TF_Operation* outputOp = TF_GraphOperationByName(graph, "prediction"); // does not work

The names I am passing as arguments are the ones assigned to the input and output Keras layers of the saved model, but they are not in the loaded graph. Running saved_model_cli (following the tf SavedModel tutorial here) shows that the tensors with these names exist under the SignatureDef serving_default, so I guess I need to instantiate serving_default into a graph (in other words, create a graph according to the signature). However, I could not find a way to do this using the C API.
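For reference, one way to see which operation names actually exist in the loaded graph is to iterate over it with the C API's graph iteration functions. This is a debugging sketch that continues the snippet above (it assumes `graph` has already been populated by TF_LoadSessionFromSavedModel):

```c
// Print the name and op type of every operation in the loaded graph.
// This is how one can spot the signature-mangled names such as
// "serving_default_input" or "StatefulPartitionedCall".
size_t pos = 0;
TF_Operation* oper;
while ((oper = TF_GraphNextOperation(graph, &pos)) != NULL) {
    printf("%s (%s)\n", TF_OperationName(oper), TF_OperationOpType(oper));
}
```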

Note that TensorFlow's C API test uses C++ tensorflow/core/ functionality to load a signature definition map from the metagraph and uses it to find input and output operation names, but I would like to avoid the dependency on C++.

Also note that accessing the operations by name works for frozen .pb graphs; however, that format is being deprecated.

Thanks in advance for any ideas and hints!

javor
    This tutorial seems to have a solution to this, albeit not so elegant because you still need to analyse the defined signatures via saved_model_cli https://medium.com/analytics-vidhya/deploying-tensorflow-2-1-as-c-c-executable-1d090845055c – Bersan May 18 '20 at 01:46

1 Answer

Currently (as of May 2020) the TensorFlow C API doesn't officially support the SavedModel (TensorFlow 2.0) format, though the functionality will probably be released soon.

Regardless, you can use the default SignatureDefs defined when exporting the model and find the names of the input and output tensors using the saved_model_cli tool.

Say you saved your model using

model.save('/path/to/model/folder')

Then, in a shell, run:

cd /python/folder/bin/
saved_model_cli show --dir /path/to/model/folder --tag_set serve --signature_def serving_default

(the actual location of saved_model_cli varies, but when using Anaconda it is installed by default in the bin/ folder)

By default it will print something like:

serving_default
The given SavedModel SignatureDef contains the following input(s):
  inputs['graph_input'] tensor_info:
      dtype: DT_DOUBLE
      shape: (-1, 28, 28)
      name: serving_default_graph_input:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['graph_output'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict

In this case, serving_default_graph_input is the input operation name and StatefulPartitionedCall is the output operation name (the trailing :0 is the tensor's output index within the operation, which is passed separately). You can then load those using TF_GraphOperationByName().
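Putting it together, inference from C then looks roughly like the sketch below. The model path, tensor shapes, and dtypes are assumptions taken from the example saved_model_cli output above; adjust them to your own signature:

```c
#include <stdio.h>
#include <stdlib.h>
#include <tensorflow/c/c_api.h>

int main(void) {
    TF_Status* status = TF_NewStatus();
    TF_Graph* graph = TF_NewGraph();
    TF_SessionOptions* opts = TF_NewSessionOptions();
    const char* tags[] = {"serve"};

    TF_Session* session = TF_LoadSessionFromSavedModel(
        opts, NULL, "/path/to/model/folder", tags, 1, graph, NULL, status);
    if (TF_GetCode(status) != TF_OK) return 1;

    // Names from saved_model_cli; drop the ":0" suffix -- the output
    // index goes into the TF_Output struct instead.
    TF_Output input  = {TF_GraphOperationByName(graph, "serving_default_graph_input"), 0};
    TF_Output output = {TF_GraphOperationByName(graph, "StatefulPartitionedCall"), 0};
    if (input.oper == NULL || output.oper == NULL) return 1;

    // Allocate an input tensor matching the signature: shape (1, 28, 28), DT_DOUBLE.
    int64_t dims[] = {1, 28, 28};
    TF_Tensor* in_tensor =
        TF_AllocateTensor(TF_DOUBLE, dims, 3, 1 * 28 * 28 * sizeof(double));
    // ... fill TF_TensorData(in_tensor) with your input values here ...

    TF_Tensor* out_tensor = NULL;
    TF_SessionRun(session, NULL,
                  &input, &in_tensor, 1,    // feeds
                  &output, &out_tensor, 1,  // fetches
                  NULL, 0,                  // no target ops
                  NULL, status);
    if (TF_GetCode(status) == TF_OK) {
        // Output signature is (1, 10) DT_FLOAT in this example.
        float* out = (float*)TF_TensorData(out_tensor);
        printf("class 0 score: %f\n", out[0]);
        TF_DeleteTensor(out_tensor);
    }

    TF_DeleteTensor(in_tensor);
    TF_CloseSession(session, status);
    TF_DeleteSession(session, status);
    TF_DeleteGraph(graph);
    TF_DeleteSessionOptions(opts);
    TF_DeleteStatus(status);
    return 0;
}
```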

With C API support for TensorFlow 2 you'd be able to save the model with a set of defined SignatureDefs and then load the desired concrete_function() without having to worry about tensor names. The current method, however, should still work.

Bersan