It's a hidden gem! Via the collections argument, you can provide a list of strings of your choice that label the summary node, e.g.
tf.summary.scalar('learning_rate', p_lr, collections=['train'])
tf.summary.scalar('loss', t_loss, collections=['train', 'test'])
and then fetch the summaries by their label, like so:
s_training = tf.summary.merge_all('train')
s_test = tf.summary.merge_all('test')
I'm doing it like that because I often want to log extra information during the validation phase. In the above example, when evaluating (and writing) the accuracy, I don't have to provide a value for the learning rate placeholder p_lr, or for anything else that only the training part of the graph relies on.
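Putting it together, here's a rough, self-contained sketch of what I mean (TF 1.x; the little model and all names besides p_lr and t_loss are made up for illustration):

import numpy as np
import tensorflow as tf

p_x = tf.placeholder(tf.float32, [None, 4])
p_y = tf.placeholder(tf.float32, [None, 1])
p_lr = tf.placeholder(tf.float32, [])  # only the training part needs this

logits = tf.layers.dense(p_x, 1)
t_loss = tf.reduce_mean(tf.squared_difference(logits, p_y))
t_accuracy = tf.reduce_mean(tf.cast(tf.abs(logits - p_y) < 0.5, tf.float32))
train_op = tf.train.GradientDescentOptimizer(p_lr).minimize(t_loss)

tf.summary.scalar('learning_rate', p_lr, collections=['train'])
tf.summary.scalar('loss', t_loss, collections=['train', 'test'])
tf.summary.scalar('accuracy', t_accuracy, collections=['test'])

s_training = tf.summary.merge_all('train')
s_test = tf.summary.merge_all('test')

x, y = np.random.rand(32, 4), np.random.rand(32, 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('/tmp/logs', sess.graph)

    # training step: the 'train' summaries reference p_lr, so it gets fed
    summ, _ = sess.run([s_training, train_op],
                       feed_dict={p_x: x, p_y: y, p_lr: 0.01})
    writer.add_summary(summ, global_step=0)

    # validation: the 'test' summaries only touch the inference part of
    # the graph, so there's no need to feed p_lr here
    summ, acc = sess.run([s_test, t_accuracy], feed_dict={p_x: x, p_y: y})
    writer.add_summary(summ, global_step=0)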
Providing (only) custom collections also has the nice side effect of keeping the node out of the default tf.GraphKeys.SUMMARIES collection, which hides it from the Supervisor's automatic summary service, for example. If you really want to have control over when exactly you write a summary (e.g. using sv.summary_computed() in the case of Supervisor), that's an easy way to go.
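With Supervisor, that manual route then looks roughly like this (assuming the ops from the sketch above, and passing summary_op=None so the supervisor doesn't start its own summary service):

sv = tf.train.Supervisor(logdir='/tmp/logs', summary_op=None)
with sv.managed_session() as sess:
    while not sv.should_stop():
        # ... run training steps here ...

        # evaluate and write the 'test' summaries whenever you like;
        # summary_computed() hands the result to the supervisor's writer
        summ = sess.run(s_test, feed_dict={p_x: x, p_y: y})
        sv.summary_computed(sess, summ)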