I'm not sure whether there is an optimal way to solve this problem, but this is how I solved it:
In my model I'm using a simple MLP, so my model() function has lines like this in it:
train_layer = tf.add(tf.matmul(x_train, weights['w1']), biases['b1'])
train_layer = tf.nn.relu(train_layer)
test_layer = tf.add(tf.matmul(x_test, weights['w1']), biases['b1'])
test_layer = tf.nn.relu(test_layer)
As you can see, I have two inputs, x_train and x_test. These are the handles that pull batches of data from the two tf.contrib.data dataset iterators:
x_train, x_train_labels = train_iter.get_next()
x_test, x_test_labels = test_iter.get_next()
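To make the iterator side concrete, something along these lines would work. This is only a minimal sketch using the tf.data API (the successor of tf.contrib.data); the array names train_images, train_labels, test_images, test_labels and the batch/buffer sizes are placeholders of mine, not part of the original setup:

import tensorflow as tf

# Build one dataset per split; both are batched and repeated so get_next()
# can be called indefinitely during training and evaluation.
train_dataset = (tf.data.Dataset.from_tensor_slices((train_images, train_labels))
                 .shuffle(10000).batch(128).repeat())
test_dataset = (tf.data.Dataset.from_tensor_slices((test_images, test_labels))
                .batch(128).repeat())

train_iter = train_dataset.make_one_shot_iterator()
test_iter = test_dataset.make_one_shot_iterator()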
So I essentially have two flows of data in the same graph, on which exactly the same operations are performed. I also have two outputs of the model, mlp_train and mlp_test, depending on whether the model was evaluated on the x_train or x_test inputs.
Now, if you create your optimiser from the mlp_train output and your testing metrics from the mlp_test output, you only need to run sess.run(optimiser) to train your system on the training dataset and sess.run(test_metrics) to evaluate it on the testing dataset, and you never need to use a feed_dict.
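Putting it all together, a condensed sketch of the whole graph and the two sess.run calls could look like the following. The layer sizes, the Adam optimiser, the softmax cross-entropy loss and the accuracy metric are placeholder choices of mine (they assume one-hot labels), not something prescribed by this approach:

# Shared parameters: the training and testing flows use the same variables.
weights = {'w1': tf.Variable(tf.random_normal([784, 256])),
           'out': tf.Variable(tf.random_normal([256, 10]))}
biases = {'b1': tf.Variable(tf.zeros([256])),
          'out': tf.Variable(tf.zeros([10]))}

def model(x):
    layer = tf.nn.relu(tf.add(tf.matmul(x, weights['w1']), biases['b1']))
    return tf.add(tf.matmul(layer, weights['out']), biases['out'])

mlp_train = model(x_train)   # training flow
mlp_test = model(x_test)     # testing flow, same weights

# Optimiser built on the training output only.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=x_train_labels,
                                               logits=mlp_train))
optimiser = tf.train.AdamOptimizer(1e-3).minimize(loss)

# A simple accuracy metric built on the testing output only.
correct = tf.equal(tf.argmax(mlp_test, 1), tf.argmax(x_test_labels, 1))
test_metrics = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        sess.run(optimiser)          # pulls a batch from the training iterator
    print(sess.run(test_metrics))    # pulls a batch from the testing iterator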
EDIT: I read your comment about using "data that was not available when the model was trained", and I don't think this answer addresses that case.