I trained a model on a GPU and saved it like this (export_path is my output directory):
builder = tf.saved_model.builder.SavedModelBuilder(export_path)

# Wrap the graph tensors as TensorInfo protos for the serving signature.
tensor_info_x = tf.saved_model.utils.build_tensor_info(self.Xph)
tensor_info_y = tf.saved_model.utils.build_tensor_info(self.predprob)
tensor_info_it = tf.saved_model.utils.build_tensor_info(self.istraining)
tensor_info_do = tf.saved_model.utils.build_tensor_info(self.dropout)

prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'myx': tensor_info_x,
                'istraining': tensor_info_it,
                'dropout': tensor_info_do},
        outputs={'ypred': tensor_info_y},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

# net is the session holding the trained variables.
builder.add_meta_graph_and_variables(
    net, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature})

builder.save()
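(In hindsight I also see that add_meta_graph_and_variables takes a clear_devices flag, so if re-exporting were an option, I assume the untested variant below would strip the device pinning at save time. But I'd rather fix the model I already have.)

builder.add_meta_graph_and_variables(
    net, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature},
    clear_devices=True)  # don't bake explicit /device:GPU:0 assignments into the export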
Now I'm trying to load this model and run predictions. It works fine on a machine with a GPU, but without a GPU around I get:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'rnn/while/rnn/multi_rnn_cell/cell_0/cell_0/layer_norm_basic_lstm_cell/dropout/add/Enter': Operation was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
Now I read about tf.train.import_meta_graph and its clear_devices option, but I can't get this to work. I'm loading my model like so:
from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(modelname)
at which point it throws the error mentioned above. modelname is the full path to the export directory containing the pb file. Is there a way to go through the nodes of the graph and manually set the device (or do something similar)? A sketch of what I was hoping for is below. I'm using TensorFlow 1.8.0.
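For reference, this is the kind of thing I was hoping would work: tf.saved_model.loader.load forwards extra keyword arguments to tf.train.import_meta_graph, so I assume clear_devices=True can be passed through it. The feed values here (my_x, the dropout value of 1.0) are just stand-ins for whatever the model expects:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # clear_devices is forwarded to import_meta_graph and should strip
    # the explicit /device:GPU:0 assignments when the graph is imported
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], modelname,
        clear_devices=True)
    # look up the tensor names recorded in the signature at export time
    sig = meta_graph_def.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    ypred = sess.run(sig.outputs['ypred'].name,
                     feed_dict={sig.inputs['myx'].name: my_x,
                                sig.inputs['istraining'].name: False,
                                sig.inputs['dropout'].name: 1.0})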
I saw Can a model trained on gpu used on cpu for inference and vice versa? but I don't think I'm duplicating it. The difference with that question is that I want to know what to do after training, with a model that has already been exported.
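To make the "go through the nodes" part of my question concrete: I imagine one could parse saved_model.pb, blank out the device field on every node, and write the file back, roughly as below (untested; pb_path would be the path to a copy of the saved_model.pb file). I just don't know whether this is a sanctioned way to do it:

from tensorflow.core.protobuf import saved_model_pb2

# Rewrite the SavedModel proto with all explicit device assignments cleared.
sm = saved_model_pb2.SavedModel()
with open(pb_path, 'rb') as f:
    sm.ParseFromString(f.read())
for mg in sm.meta_graphs:
    for node in mg.graph_def.node:
        node.device = ''  # let TensorFlow place the op on any available device
with open(pb_path, 'wb') as f:
    f.write(sm.SerializeToString())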