
I'm building a simple neural network in Python using TensorFlow and Keras, and I need to make this code run on a GPU using PyCUDA. My plan is to parallelize the training across all the epochs; however, since Keras is very minimalistic, all of the epoch training (at least from my understanding) is done in one line:

model.fit(train_images, train_labels, epochs=100)
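
From what I can tell, that one call behaves roughly like the explicit loop below (just a sketch of my understanding; model, train_images and train_labels are defined in the code further down):

# Rough equivalent of fit(epochs=100): run one-epoch fits back to back
# on the same model object.
for epoch in range(100):
    model.fit(train_images, train_labels, epochs=1, verbose=0)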

How would it be possible to "extract" something from this function that could be fed to a PyCUDA kernel function? This is my code so far:

#TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

#Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import cv2

print(tf.__version__)

# Load the Fashion-MNIST dataset: 60,000 training and 10,000 test 28x28 grayscale images
fashion_mnist = keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Inspect the data
print(train_images.shape)
print(len(train_labels))
print(train_labels)
print(test_images.shape)
print(len(test_labels))

# Show the first training image with its raw pixel values
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()

# Scale the pixel values from [0, 255] to [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0

# Preview the first 25 training images with their class names
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])

plt.show()

# Simple feed-forward network: flatten the 28x28 image, one hidden layer of 128 ReLU units, 10-way softmax output
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

# Configure training; tf.train.AdamOptimizer is the TF 1.x API (on TF 2.x use optimizer='adam')
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train for 100 epochs; each epoch is one full pass over the training set
model.fit(train_images, train_labels, epochs=100)

– jonny
    Are you sure you want that? Tensorflow and Keras have native GPU support. Also, when you say you want to parallelize training all the epochs, that doesn't really make sense since you need to update the weights within each epoch – RunOrVeith Jan 19 '19 at 12:42
  • Yes, sorry but what you are trying to do makes no sense. Keras/TF already support GPUs and you can't parallelize across epochs. – Dr. Snoopy Jan 19 '19 at 15:57
  • @RunOrVeith is right. Maybe you're aiming at distributing them to different GPUs? If that's the case, in Keras, it's as simple as using `keras.utils.training_utils.multi_gpu_model`. – afagarap Jan 19 '19 at 15:57
  • Thank you, now I understand. I thought that epochs can be trained in parallel. If that is the case, then how can neural networks be accelerated via a GPU? – jonny Jan 20 '19 at 16:04
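
To summarize the comments: with the tensorflow-gpu package installed, the model.fit call above already runs on the GPU with no code changes, because the GPU parallelizes the matrix math inside each training step rather than running epochs concurrently. The multi-GPU helper afagarap mentions would look roughly like the sketch below (an illustration only, assuming TF 1.x and a machine with 2 GPUs; in tf.keras the utility is exposed as tf.keras.utils.multi_gpu_model, and it was removed in later TF 2.x releases in favour of tf.distribute.MirroredStrategy):

# Replicate the model onto 2 GPUs; each training batch is split between
# them (data parallelism within a step, not across epochs).
parallel_model = tf.keras.utils.multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer='adam',
                       loss='sparse_categorical_crossentropy',
                       metrics=['accuracy'])
parallel_model.fit(train_images, train_labels, epochs=100)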

0 Answers