I'm working on a deep learning model which takes image inputs, encodes them into a latent representation and reconstructs them.
I'm using visdom to visualise the inputs, outputs, and latent variables, and to monitor the loss function. I create a vis = visdom.Visdom() object and pass it into the network. As the network computes the various latent variables, the Visdom object visualises them with vis.image(...).
The problem is that this design means the images get drawn out of sync with each other, which makes it hard to tell which images in the visualisation correspond to the same iteration. I would like visdom to update only every n iterations, but it's not clear to me how to do this.
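To make the intent concrete, here is a minimal sketch of the kind of thing I have in mind: a hypothetical buffering wrapper (the class name BufferedVis and its step() method are my own invention, not part of visdom) that collects image calls and flushes them to the backend together every n iterations. A stub stands in for visdom.Visdom() so the sketch runs without a server:

```python
class BufferedVis:
    """Hypothetical wrapper: collects .image() calls and flushes them
    together every `n` iterations, so all panels update in sync.
    `backend` is anything with an .image(img, win=...) method,
    e.g. a visdom.Visdom instance."""

    def __init__(self, backend, n):
        self.backend = backend
        self.n = n
        self.step_count = 0
        self.buffer = {}  # window name -> latest image for that window

    def image(self, img, win):
        # Remember only the most recent image per window; nothing is drawn yet.
        self.buffer[win] = img

    def step(self):
        # Called once per training iteration; flush every n steps.
        self.step_count += 1
        if self.step_count % self.n == 0:
            for win, img in self.buffer.items():
                self.backend.image(img, win=win)
            self.buffer.clear()


# Stand-in for visdom.Visdom() so the sketch is runnable on its own.
class FakeVisdom:
    def __init__(self):
        self.calls = []

    def image(self, img, win=None):
        self.calls.append((win, img))


vis = BufferedVis(FakeVisdom(), n=3)
for i in range(6):
    vis.image(f"input_{i}", win="input")
    vis.image(f"recon_{i}", win="recon")
    vis.step()
# Flushes happen at iterations 3 and 6, each drawing the latest
# input/reconstruction pair together: 4 backend calls in total.
print(len(vis.backend.calls))  # → 4
```

The downside is that every module that receives the vis object would have to be handed this wrapper instead, and the training loop would need to call step(), which is close to the refactor I was hoping to avoid.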
Of course, I could make the network return all of its latent variables and call vis.image only in the training script, but is there a way to avoid restructuring the code like this?