I am interested in using the `tune` library for reinforcement learning, and I would like to use its built-in TensorBoard capability. However, the metric I am using to tune my hyperparameters comes from a time-consuming evaluation procedure that should only be run infrequently.
According to the documentation, the `_train` method returns a dictionary that is used both for logging and for tuning hyperparameters. Is it possible to perform logging more frequently within the `_train` method? Alternatively, could I return the values I wish to log from the `_train` method but sometimes omit the expensive-to-compute metric from the dictionary?
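To make the second option concrete, here is a minimal plain-Python sketch of the pattern I have in mind (this is not actual Ray Tune code; the class, the `eval_interval` parameter, and the metric names are all hypothetical): cheap metrics are returned every iteration, while the expensive evaluation runs only every `eval_interval` iterations, with the last computed value cached and re-reported in between so the key is not simply missing from the result dictionary.

```python
class PeriodicEvalTrainer:
    """Hypothetical Trainable-style class (names are assumptions, not Tune's API)."""

    def __init__(self, eval_interval=10):
        self.eval_interval = eval_interval  # how often to run the slow evaluation
        self.iteration = 0
        self.last_eval = None  # cached result of the expensive metric

    def cheap_metrics(self):
        # Stand-in for fast per-iteration metrics (e.g. episode reward).
        return {"episode_reward": float(self.iteration)}

    def expensive_eval(self):
        # Stand-in for the time-consuming evaluation procedure.
        return float(self.iteration) * 2.0

    def step(self):
        # Analogue of _train: build the result dict for logging/tuning.
        self.iteration += 1
        result = self.cheap_metrics()
        if self.iteration % self.eval_interval == 0:
            self.last_eval = self.expensive_eval()
            result["eval_score"] = self.last_eval  # freshly computed
        elif self.last_eval is not None:
            result["eval_score"] = self.last_eval  # reuse the cached value
        return result
```

Whether a scheduler tolerates a result dictionary that omits the tuning metric entirely is exactly what I am unsure about, which is why this sketch falls back to re-reporting the cached value rather than dropping the key.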