
I'm following the autoencoder example at https://blog.keras.io/building-autoencoders-in-keras.html but using my own data. I get very low GPU utilization and almost no GPU memory usage.

I'm wondering if it's just having trouble fitting batches onto the GPU. My input data has 5k dimensions, and I'm encoding it to a hidden representation of 250 dimensions. When I drop the batch size all the way down to one, I get higher GPU usage, but training is obviously quite slow (lots of shuffling of data back and forth). When I go higher, I get almost no GPU usage and it's still pretty slow, and in fact slower than running on the CPU (the lowest I've seen on the GPU is about 3.5k seconds versus 1.8k seconds on the CPU). My GPU is a GTX 970, and it otherwise appears to be working fine.

#imports (Keras 1.x API, matching the blog post)
from keras.layers import Input, Dense
from keras.models import Model

#input layer and hidden dimension parameters
input_dimensions = Input(shape=(5000,))
encoded_dimensions = 250

#build autoencoder model
encoded = Dense(encoded_dimensions, activation='relu')(input_dimensions)
decoded = Dense(5000, activation='sigmoid')(encoded)
autoencoder = Model(input=input_dimensions, output=decoded)

#build encoder model
encoder = Model(input=input_dimensions, output=encoded)

#build decoder model
encoded_input = Input(shape=(encoded_dimensions,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))

autoencoder.compile(optimizer='adadelta', loss='mae')
autoencoder.fit(data, data, nb_epoch=10, batch_size=512, shuffle=True, validation_split=0.1)
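
For context, here is a rough sketch (illustrative, not my exact runs) of how one could time the same model across a few batch sizes to compare throughput; `data` is assumed to be a float32 array of shape (n_samples, 5000) as above:

#rough timing sketch: compare seconds per epoch at a few batch sizes
#to see where throughput peaks (re-fitting the same model is fine for a rough check)
import time

for bs in (32, 128, 512):
    start = time.time()
    autoencoder.fit(data, data, nb_epoch=1, batch_size=bs,
                    shuffle=True, validation_split=0.1, verbose=0)
    print('batch_size=%d: %.1f s/epoch' % (bs, time.time() - start))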

Is there a problem with my code that's causing it to run slowly, or perhaps some strange configuration issue (my .theanorc, for what it's worth, is configured for the GPU, and Theano reports that it is using the GPU), or is it just a function of my data?
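
For reference, here is a minimal sanity check, adapted from the standard Theano GPU test, that one could use to confirm ops are actually being compiled for the GPU (the exact device string depends on which Theano backend is in use):

#sanity-check sketch: confirm Theano compiles ops for the GPU
#(if the graph below contains no Gpu* nodes, the device flag isn't taking effect)
import numpy
import theano
import theano.tensor as T

print(theano.config.device)   # expect something like 'gpu' or 'cuda0', not 'cpu'
print(theano.config.floatX)   # should be 'float32' for GPU work

x = theano.shared(numpy.random.rand(1000).astype(theano.config.floatX))
f = theano.function([], T.exp(x))
print(f.maker.fgraph.toposort())   # look for GpuElemwise / GpuFromHost ops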

  • I'm struggling with a similar problem, any success? – reith Aug 14 '17 at 05:49
  • I honestly think it had to do with the batch size, which was a bit large for what I was attempting, and I think it had trouble fitting everything into GPU memory. Try a smaller batch size if you're having a similar problem and see if that helps. Definitely make sure you have up-to-date drivers and libraries as well. You can also try switching the backend to TensorFlow (see the sketch after these comments) to see if that changes anything. I abandoned this particular experiment since it didn't work, so I don't have a specific solution to what I experienced. Best of luck! – Keyboard Frenzy Aug 16 '17 at 00:36
  • Actually my problem was that the CPU was handling part of the work (the last layer), so it's not really related to your problem :) After making sure everything ran on the GPU, the problem was solved. – reith Aug 16 '17 at 05:42
  • I thought about batch size too. It may be worth mentioning that with smaller batches I got higher GPU utilization, but overall throughput (number of samples trained per unit time) dropped significantly. As I increased the batch size I noticed the GPU more often (about half of the time) stalled at zero utilization, but it was actually doing much more work in the same amount of time. – reith Aug 16 '17 at 05:47
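
Regarding the backend suggestion above, here is a minimal sketch of one way to switch Keras to the TensorFlow backend (assuming TensorFlow is installed; the environment variable must be set before the first keras import, or the "backend" field in ~/.keras/keras.json can be edited instead):

#backend-switch sketch: force the TensorFlow backend before importing Keras
import os
os.environ['KERAS_BACKEND'] = 'tensorflow'

import keras   # should print "Using TensorFlow backend."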

0 Answers