
I apologize that my English is not very good.


Well... I'm trying to feed my own data, which contains 4,000 images, into a given placeholder.

For example,

import numpy as np
import tensorflow as tf
from PIL import Image

class Dataset():
    def __init__(self):
        # load every image into memory up front
        self.data = []
        for file in myfolder:          # myfolder: list of image file paths
            image = np.asarray(Image.open(file))
            self.data.append(image)
    ...

...
X = tf.placeholder(tf.float32, shape=[None, 32, 32, 1])
trainer = ...minimize(loss)
...
dataset = Dataset()
X_data = dataset.next_batch(10)        # returns a [10, 32, 32, 1] array
sess.run(trainer, feed_dict={X: X_data})

This works very well, but there is a problem: I cannot increase the batch size because I run out of GPU memory.

I guess the above code loads the whole dataset into a single array. So I have tried reading the image data at every iteration instead, but that takes a pretty long time :(
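
Roughly, the per-iteration version I tried looks like this (just a sketch; next_batch_paths is a made-up helper that returns the next 10 file paths instead of pixel data):

paths = dataset.next_batch_paths(10)   # hypothetical helper: file paths only, no pixels cached in RAM
X_data = np.stack([np.asarray(Image.open(p)) for p in paths])
X_data = X_data.reshape(-1, 32, 32, 1).astype(np.float32)
sess.run(trainer, feed_dict={X: X_data})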

How can I solve this? Or should I divide my data into K arrays and load one array onto the GPU at a time?
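
By K arrays I mean something roughly like this (only a sketch of the idea; K and all_files are placeholders):

chunks = np.array_split(np.array(all_files), K)   # split the 4,000 file names into K groups
for chunk in chunks:
    # load one group at a time, so only 4000/K images sit in memory
    images = np.stack([np.asarray(Image.open(p)) for p in chunk])
    images = images.reshape(-1, 32, 32, 1).astype(np.float32)
    for i in range(0, len(images), 10):
        sess.run(trainer, feed_dict={X: images[i:i + 10]})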

James
  • **BATCH-FILE** does not mean processing multiple files! Please read the tag description, thanks. –  Aug 17 '17 at 05:15
  • Sorry, I approved your edit! thanks :) – James Aug 17 '17 at 05:19
  • It's ok! I see a lot of people make the same mistake. Now that you understand it's not the same, you can also teach people. –  Aug 17 '17 at 05:20
  • I guess you're actually running out of CPU memory, not GPU memory. What you could do is, instead of reading everything, use queues in between; take a look at https://www.tensorflow.org/programmers_guide/reading_data to get started (a rough sketch is below). – amo-ej1 Aug 17 '17 at 08:09
  • Wow... that helps a lot!! :) Thanks. – James Aug 18 '17 at 01:01
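
A minimal sketch of the queue-based input pipeline that the linked guide describes, assuming TensorFlow 1.x, PNG files, and 32x32 grayscale images (the file pattern, batch size, and thread count below are placeholders):

import tensorflow as tf

# Queue of file names plus a reader that decodes one image at a time.
filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once('myfolder/*.png'), shuffle=True)
reader = tf.WholeFileReader()
_, contents = reader.read(filename_queue)
image = tf.image.decode_png(contents, channels=1)
image = tf.cast(image, tf.float32) / 255.0
image.set_shape([32, 32, 1])

# Background threads keep this batch filled; the training step just consumes it.
image_batch = tf.train.batch([image], batch_size=64, num_threads=4, capacity=1000)

# ... build the model on image_batch instead of a placeholder ...

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())   # needed by match_filenames_once
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # for step in range(num_steps):
    #     sess.run(trainer)
    coord.request_stop()
    coord.join(threads)

With this setup the decoding happens in background threads, so only one batch at a time needs to be materialized and the whole dataset never has to fit in memory at once.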

0 Answers