I apologize in advance; I'm not very good at English.
I'm trying to feed my own data, which contains 4,000 images, into a placeholder.
For example,
from PIL import Image
import numpy as np
import tensorflow as tf

class dataset():
    def __init__(self):
        # load every image in the folder into one big in-memory list
        self.data = []
        for file in myfolder:
            image = np.asarray(Image.open(file))
            self.data.append(image)
    ...  # next_batch() and other helpers omitted
...
X = tf.placeholder(tf.float32, shape=[None, 32, 32, 1])
trainer = ...minimize(loss)
...
# fetch a mini-batch of 10 images and feed it to the placeholder
X_data = dataset.next_batch(10)
sess.run(trainer, feed_dict={X: X_data})
This works well, but there is a problem: I cannot increase the batch size because the GPU runs out of memory.
I guess the code above loads the whole dataset into a single array. So I tried reading the image files at every iteration instead, but that takes a long time :(
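Roughly, the per-iteration loading I tried looks like the sketch below (get_batch_files and num_steps are just placeholder names for how I pick the files and how long I train):

def load_batch(file_list):
    # open and decode only the files needed for one batch
    images = [np.asarray(Image.open(f)) for f in file_list]
    return np.stack(images).reshape(-1, 32, 32, 1).astype(np.float32)

for step in range(num_steps):
    batch_files = get_batch_files(step, batch_size=10)  # hypothetical helper
    X_data = load_batch(batch_files)  # disk I/O + decoding on every step, which is what makes it slow
    sess.run(trainer, feed_dict={X: X_data})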
How can I solve this? Or should I divide my data into K arrays and load one array onto the GPU at a time?
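By K arrays I mean something like this (all_images and K = 4 are just illustrative):

# all_images: the full [4000, 32, 32, 1] float32 array built by the dataset class above
K = 4                                    # example value
chunks = np.array_split(all_images, K)   # K arrays of roughly 1,000 images each

for chunk in chunks:
    # feed one chunk at a time instead of the whole dataset
    sess.run(trainer, feed_dict={X: chunk})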