
I am trying to rewrite the Python code for mnist_client in C++. Since I am new to TensorFlow and TF Serving, I am having some difficulties. I went through the tutorials and the C++ client example (inception_client). The Python mnist_client works without any problems, but when I run my C++ client it gives me an "In[0] is not a matrix" error:

gRPC call return code: 3: In[0] is not a matrix 
 [[Node: MatMul = MatMul[T=DT_FLOAT, _output_shapes=[[?,10]], transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_x_0_0, Variable/read)]]

I trained the model as in the tutorial and I've checked that the MNIST data I read is OK.

From this question ("tensorflow Invalid argument: In[0] is not a matrix"), I understand that MatMul needs at least 2-dimensional data. But I went through the C++ code for inception_client and the Python mnist_client, and both read the image data into a 1-dimensional char array... What am I missing here?

The code for inception_client: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/inception_client.cc
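
For reference, the relevant part of that client builds its input roughly like this (paraphrased from memory, not copied verbatim; image_bytes stands for the raw JPEG contents and inputs for the request's inputs map):

// Paraphrased sketch of the inception_client input construction (not verbatim).
// The whole JPEG file goes into a single string element, so the tensor is a
// 1-D DT_STRING of shape [1]; the Inception serving graph presumably decodes
// the JPEG itself, which would explain why a flat string is enough there.
tensorflow::TensorProto proto;
proto.set_dtype(tensorflow::DataType::DT_STRING);
proto.add_string_val(image_bytes.data(), image_bytes.size());
proto.mutable_tensor_shape()->add_dim()->set_size(1);
inputs["images"] = proto;  // the exact input key may differ in inception_client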

Any help would be much appreciated. :)

class ServingClient{
public:
ServingClient(std::shared_ptr<Channel> channel) : stub_(PredictionService::NewStub(channel)){}

tensorflow::string callPredict( const tensorflow::string &model_name,
                                const tensorflow::string &model_signature,
                                const int num_tests){
PredictRequest request;
PredictResponse response;
ClientContext context;
int image_size;
int image_offset = 16;
int label_offset = 8;

request.mutable_model_spec()->set_name(model_name);
request.mutable_model_spec()->set_signature_name(model_signature);

google::protobuf::Map<tensorflow::string, tensorflow::TensorProto> &inputs = *request.mutable_inputs();

std::fstream imageFile("t10k-images-idx3-ubyte", std::ios::binary | std::ios::in);
std::fstream labelFile("t10k-labels-idx1-ubyte", std::ios::binary | std::ios::in);

labelFile.seekg(0);
imageFile.seekg(0);

uint32_t magic_number_images;
uint32_t nImages;
uint32_t magic_number_labels;
uint32_t rowsI =0;
uint32_t rowsL =0;
uint32_t colsI = 0;
uint32_t colsL = 0;


imageFile.read((char *)&magic_number_images, sizeof(magic_number_images));
imageFile.read((char *)&nImages, sizeof(nImages));
imageFile.read((char *)(&rowsI), sizeof(rowsI));
imageFile.read((char *)&colsI, sizeof(colsI));

image_size = ReverseInt(rowsI) * ReverseInt(colsI);

labelFile.read((char *)&magic_number_labels, sizeof(magic_number_labels));
labelFile.read((char *)&rowsL, sizeof(rowsL));

for(int i=0; i<num_tests; i++){
    tensorflow::TensorProto proto;

    labelFile.seekg(label_offset);
    imageFile.seekg(image_offset);

    //read mnist image
    char *img = new char[image_size]();
    char label = 0;
    imageFile.read((char *)img, image_size);

    image_offset += image_size;
    //read label
    labelFile.read(&label, 1);
    label_offset++;

    //predict
    proto.set_dtype(tensorflow::DataType::DT_STRING);
    proto.add_string_val(img, image_size);
    proto.mutable_tensor_shape()->add_dim()->set_size(1);
    inputs["images"] = proto;

    Status status = stub_->Predict(&context, request, &response);
    delete[] img;

    if(status.ok()){
        std::cout << "status OK." << std::endl;
        OutMap &map_outputs = *response.mutable_outputs();
        OutMap::iterator iter;
        int output_index = 0;

        for(iter = map_outputs.begin(); iter != map_outputs.end(); ++iter){
            tensorflow::TensorProto &result_tensor_proto = iter->second;
            tensorflow::Tensor tensor;
            //check if the response converted successfully
            bool converted = tensor.FromProto(result_tensor_proto);
            if(converted){
                std::cout << "the result tensor[" << output_index << "] is:" << std::endl
                          << tensor.SummarizeValue(10) << std::endl;
            }
            else{
                std::cout << "the result tensor[" << output_index
                          << "] convert failed." << std::endl;
            }
            ++output_index;
        }
    }
    else{
        std::cout << "gRPC call return code: " << status.error_code() << ": "
                  << status.error_message() << std::endl;
    }
}
imageFile.close();
labelFile.close();
return "";
}

private:
    std::unique_ptr<PredictionService::Stub> stub_;

};
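
For reference, this is roughly how I drive the class from main(); the address, model name and signature name below are just what my local setup uses:

// Minimal driver for the client above. "localhost:9000" is where my
// tensorflow_model_server listens, and "mnist" / "predict_images" are the
// model name and signature name I exported; adjust them to your setup.
int main(int argc, char **argv){
    ServingClient client(
        grpc::CreateChannel("localhost:9000", grpc::InsecureChannelCredentials()));
    client.callPredict("mnist", "predict_images", 1);
    return 0;
}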

EDIT 1: I assume the problem is in how the model was created and what dimensions the data sent by the client should have. I used the provided Python program that trains and exports the model, which sets the dimensions:

feature_configs = {'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32),}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
x = tf.identity(tf_example['x'], name='x')  # use tf.identity() to assign name
y_ = tf.placeholder('float', shape=[None, 10])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

1 Answer


As suspected, the fix was simple. All that had to be done was to add another dimension:

   proto.mutable_tensor_shape()->add_dim()->set_size(image_size);

to get a shape of [image_size, 1].
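
For context, putting that line next to the original two lines, the proto construction inside the loop becomes the following (I add the new dim before the existing set_size(1), so the shape comes out as [image_size, 1] as described above; everything else in callPredict stays as posted in the question):

//predict
tensorflow::TensorProto proto;
proto.set_dtype(tensorflow::DataType::DT_STRING);
proto.add_string_val(img, image_size);
proto.mutable_tensor_shape()->add_dim()->set_size(image_size);  // the added dimension
proto.mutable_tensor_shape()->add_dim()->set_size(1);
inputs["images"] = proto;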
