
In short, I want to be able to visualize both train/test data in real time as the model is learning.

Below is how I'm currently visualizing the progress:

batch_size = 100
epochs = 30
init = tf.global_variables_initializer()
samples = []

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epochs):
        num_batches = mnist.train.num_examples // batch_size

        for i in range(num_batches):
            batch = mnist.train.next_batch(batch_size)
            batch_images = batch[0].reshape((batch_size, 784))
            batch_images = batch_images * 2 - 1  # rescale pixels to [-1, 1]
            batch_z = np.random.uniform(-1, 1, size=(batch_size, 100))

            # one discriminator step, then one generator step
            _ = sess.run(D_trainer, feed_dict={real_images: batch_images,
                                               z: batch_z})
            _ = sess.run(G_trainer, feed_dict={z: batch_z})
        print("ON EPOCH {}".format(epoch))

        # generate one sample per epoch for later inspection
        sample_z = np.random.uniform(-1, 1, size=(1, 100))
        gen_samples = sess.run(generator(z, reuse=True),
                               feed_dict={z: sample_z})
        samples.append(gen_samples)
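Rather than only appending samples to a list, the loop above could redraw a figure each epoch using matplotlib's interactive mode. Here is a minimal sketch; the `tile_images` helper is hypothetical (not part of the original code) and assumes flat 28x28 samples like `gen_samples`:

```python
import numpy as np

def tile_images(flat_batch, side=28, cols=5):
    """Arrange a (n, side*side) batch into one (rows*side, cols*side) grid."""
    n = flat_batch.shape[0]
    rows = int(np.ceil(n / cols))
    grid = np.zeros((rows * side, cols * side))
    for idx in range(n):
        r, c = divmod(idx, cols)
        grid[r*side:(r+1)*side, c*side:(c+1)*side] = \
            flat_batch[idx].reshape(side, side)
    return grid

# Inside the epoch loop you could then draw without blocking training:
# import matplotlib.pyplot as plt
# plt.ion()                       # interactive mode: show() no longer blocks
# fig, ax = plt.subplots()
# ...
# ax.clear()
# ax.imshow(tile_images(gen_samples), cmap="gray")
# plt.pause(0.001)                # lets the GUI event loop redraw
```

`plt.pause` briefly yields to the GUI event loop, so the window updates after every epoch while the session keeps training.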

new_samples = []
#saver = tf.train.Saver(var_list=g_vars)

with tf.Session() as sess:
    # saver.restore(sess, "...")

    for x in range(5):
        sample_z = np.random.uniform(-1, 1, size=(1, 100))
        gen_samples = sess.run(generator(z, reuse=True),
                               feed_dict={z: sample_z})
        new_samples.append(gen_samples)

plt.imshow(new_samples[0].reshape(28, 28))

For comparison, this is how I visualize a graph in real time for sentiment analysis, by running the script in a separate terminal:

import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import style
import time

style.use("ggplot")

fig = plt.figure()
ax1 = fig.add_subplot(1,1,1)

def animate(i):
    with open("twitter-out.txt", "r") as f:  # close the file after reading
        pullData = f.read()
    lines = pullData.split('\n')

    xar = []
    yar = []

    x = 0
    y = 0

    for l in lines[-200:]:
        x += 1
        if "pos" in l:
            y += 1
        elif "neg" in l:
            y -= 1

        xar.append(x)
        yar.append(y)

    ax1.clear()
    ax1.plot(xar, yar)

# keep a reference to the animation so it is not garbage-collected
ani = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()
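The same `FuncAnimation` approach can be reused for the GAN: have the training loop append one line per step to a log file, and let `animate` re-read it. A minimal sketch, assuming a CSV-style file named `gan-losses.txt` (the file name, format, and helper names are assumptions, not from the original code):

```python
def log_losses(path, epoch, d_loss, g_loss):
    """Append one CSV row per training step; the plotting script re-reads this file."""
    with open(path, "a") as f:
        f.write("{},{},{}\n".format(epoch, d_loss, g_loss))

def read_losses(path):
    """Parse the log back into three lists for plotting."""
    epochs, d_losses, g_losses = [], [], []
    with open(path) as f:
        for line in f:
            e, d, g = line.strip().split(",")
            epochs.append(int(e))
            d_losses.append(float(d))
            g_losses.append(float(g))
    return epochs, d_losses, g_losses

# In animate(i) you would then do something like:
# ax1.clear()
# _, d, g = read_losses("gan-losses.txt")
# ax1.plot(d)  # discriminator loss
# ax1.plot(g)  # generator loss
```

The training loop would call `log_losses(...)` with the values returned by `sess.run` on the loss tensors, while this script runs in a separate terminal exactly like the sentiment-analysis one.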

I have also attached a YouTube link below to further clarify the problem I'm having. I want to be able to see and hear images/speech as the model is being trained.

Starts at 1:10:23 - 1:11:03

Generating Real-time RNN-LSTM

1 Answer


If you are interested in training curves, look at this post: Keras + TensorFlow Realtime training chart (in which I recommend my package livelossplot).

In my Starting deep learning hands-on: image classification on CIFAR-10 tutorial, I insist on keeping track of both:

  • global metrics (log-loss, accuracy),
  • examples (correctly and incorrectly classified cases).

The latter may help tell which kinds of patterns are problematic, and on numerous occasions has helped me decide to change the network (or to supplement the training data, when that was the issue).
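Collecting those incorrectly classified cases is straightforward with NumPy. A minimal sketch, assuming arrays of predicted class probabilities and integer labels (the function and variable names are hypothetical):

```python
import numpy as np

def misclassified(probs, labels):
    """Return indices of wrongly classified examples, most confidently wrong first."""
    preds = probs.argmax(axis=1)
    wrong = np.where(preds != labels)[0]
    # sort by the (wrong) predicted probability, descending
    confidence = probs[wrong, preds[wrong]]
    return wrong[np.argsort(-confidence)]
```

You could then feed the top few indices into `plt.imshow` (or an image channel of a logger) after each epoch to watch which patterns the network keeps getting wrong.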

An example of how it works (here with Neptune, though you can do it manually in a Jupyter Notebook, or using the TensorBoard image channel):

Misclassified images by neural network - Neptune

And then looking at particular examples, along with the predicted probabilities:


Full disclaimer: I collaborate with deepsense.ai, the creators of Neptune - Machine Learning Lab.

Piotr Migdal