I'm currently working on importing a trained LSTM model (Python 3.8, TF==2.3) using the TensorFlow C API (TF==1.13.2). I have to stick with these software versions. Below I show my steps so far using a dummy example.
Inspecting the model (for this dummy-import purpose) with the saved_model_cli:
python3.8 ~/path/to/tensorflow/python/tools/saved_model_cli.py show --dir ~/path/to/model/folder --tag_set serve --signature_def serving_default
gives:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_1'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2, 1)
name: serving_default_input_1:0
The given SavedModel SignatureDef contains the following output(s):
outputs['dense'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
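For completeness, this is roughly how I create m_Graph_, m_Session_ and m_Status_ (a sketch of the standard TF_LoadSessionFromSavedModel pattern; the model directory is a placeholder):

```cpp
#include <cstdio>
#include <tensorflow/c/c_api.h>

// Sketch: load the SavedModel and obtain graph, session and status
// objects used in the snippets below. The export_dir is a placeholder.
TF_Graph* m_Graph_ = TF_NewGraph();
TF_Status* m_Status_ = TF_NewStatus();
TF_SessionOptions* session_opts = TF_NewSessionOptions();
const char* tags[] = {"serve"};            // matches --tag_set serve
const char* export_dir = "~/path/to/model/folder";

TF_Session* m_Session_ = TF_LoadSessionFromSavedModel(
    session_opts, nullptr,                 // no run options
    export_dir, tags, 1,                   // one tag
    m_Graph_, nullptr,                     // meta graph def not needed
    m_Status_);

if (TF_GetCode(m_Status_) != TF_OK) {
    fprintf(stderr, "Loading failed: %s\n", TF_Message(m_Status_));
}
```

Loading itself reports TF_OK, so the graph is found and deserialized without complaint.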
I look up the input and output operations of the graph with:
//********* Get Input tensor
uint8_t m_NumInputs = 1;
TF_Output* m_Input_ = static_cast<TF_Output*>(malloc(sizeof(TF_Output) * m_NumInputs));
TF_Output t0 = {TF_GraphOperationByName(m_Graph_, "serving_default_input_1"), 0};
m_Input_[0] = t0;
//********* Get Output tensor
uint8_t m_NumOutputs = 1;
TF_Output* m_Output_ = static_cast<TF_Output*>(malloc(sizeof(TF_Output) * m_NumOutputs));
TF_Output t2 = {TF_GraphOperationByName(m_Graph_, "StatefulPartitionedCall"), 0};
m_Output_[0] = t2;
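Both name lookups succeed for me; as a sanity check I verify they are non-NULL, since TF_GraphOperationByName returns NULL when no operation with that name exists in the graph:

```cpp
// Guard against typos in the operation names: TF_GraphOperationByName
// returns NULL for an unknown name, which would otherwise surface much
// later as a crash or an opaque error in TF_SessionRun.
if (TF_GraphOperationByName(m_Graph_, "serving_default_input_1") == nullptr) {
    fprintf(stderr, "input operation not found in graph\n");
}
if (TF_GraphOperationByName(m_Graph_, "StatefulPartitionedCall") == nullptr) {
    fprintf(stderr, "output operation not found in graph\n");
}
```

Neither message fires, so the operation names from the CLI output are correct.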
After preparing the input values I run a session with:
TF_Tensor** InputValues = static_cast<TF_Tensor**>(malloc(sizeof(TF_Tensor*) * m_NumInputs));
TF_Tensor** OutputValues = static_cast<TF_Tensor**>(malloc(sizeof(TF_Tensor*) * m_NumOutputs));
const std::vector<std::int64_t> dims = {1, 2, 1};
const auto data_size = std::accumulate(dims.begin(), dims.end(), sizeof(float), std::multiplies<std::int64_t>{});
auto data = static_cast<float*>(std::malloc(data_size));
std::vector<float> vals = {1.0, 1.0};
std::copy(vals.begin(), vals.end(), data); // init input_vals.
auto tensor = TF_NewTensor(
TF_FLOAT,
dims.data(), static_cast<int>(dims.size()),
data, data_size,
&NoOpDeallocator, nullptr
);
InputValues[0] = tensor;
TF_SessionRun(
m_Session_, NULL,
m_Input_, InputValues, m_NumInputs,
m_Output_, OutputValues, m_NumOutputs,
NULL, 0, NULL,
m_Status_
);
void* buff = TF_TensorData(OutputValues[0]);
float* offsets = static_cast<float*>(buff);
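For reference, NoOpDeallocator is simply an empty deallocator (my code owns the input buffer and frees it itself), and after TF_SessionRun I check m_Status_ before touching the output, roughly like this:

```cpp
// Empty deallocator passed to TF_NewTensor: the input buffer is
// allocated and freed by the caller, so TensorFlow must not free it.
static void NoOpDeallocator(void* data, size_t len, void* arg) {}

// After TF_SessionRun: inspect the status before reading OutputValues.
if (TF_GetCode(m_Status_) != TF_OK) {
    fprintf(stderr, "TF_SessionRun failed: %s\n", TF_Message(m_Status_));
} else {
    float* offsets = static_cast<float*>(TF_TensorData(OutputValues[0]));
    printf("prediction: %f\n", offsets[0]);
}
```

It is this status check that produces the error message below.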
At TF_SessionRun() I receive the following error:
Expected input[1] == 'TensorArrayV2_1/element_shape:output:0' to be a control input.
In {{node TensorArrayV2Stack/TensorListStack}}
[[{{node sequential/lstm/PartitionedCall}}]]
[[{{node StatefulPartitionedCall}}]]
[[{{node StatefulPartitionedCall}}]]
I just don't know what a control input means in this context. In the second code block I set the index of each TF_Output to 0 because the ":0" suffix in the "name" fields of the CLI output suggests it. I tried several different layers, and I only get this error with layers for time series (LSTM, GRU). Does anyone have a clue what I might have missed here? Thanks for every suggestion!