
When training a neural network, a `StopIteration` error occurs.

This is the code to fit the model:

model.fit(
    train_generator,
    steps_per_epoch = num_train_samples // batch_size,
    epochs = 10,
    validation_data = validation_generator,
    validation_steps = num_val_samples // batch_size)
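
For context, both step arguments are plain integer divisions of the sample counts. A quick sanity check of the values being passed (assuming `num_train_samples = len(train_samples)`, analogously to `num_val_samples` below) would be:

num_train_samples = len(train_samples)
num_val_samples = len(val_samples)

# Integer division truncates, so either value is 0 when there are
# fewer samples than batch_size.
print(num_train_samples // batch_size)   # steps_per_epoch
print(num_val_samples // batch_size)     # validation_steps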

This is the error:

---------------------------------------------------------------------------
StopIteration                             Traceback (most recent call last)
<ipython-input-33-d4541a7a4ae1> in <module>()
      4     epochs = 10,
      5     validation_data = validation_generator,
----> 6     validation_steps = num_val_samples // batch_size)

3 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
   1145                 use_multiprocessing=use_multiprocessing,
   1146                 shuffle=shuffle,
-> 1147                 initial_epoch=initial_epoch)
   1148 
   1149         # Case 2: Symbolic tensors or Numpy array-like.

/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name + '` call to the ' +
     90                               'Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1730             use_multiprocessing=use_multiprocessing,
   1731             shuffle=shuffle,
-> 1732             initial_epoch=initial_epoch)
   1733 
   1734     @interfaces.legacy_generator_methods_support

/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
    183             batch_index = 0
    184             while steps_done < steps_per_epoch:
--> 185                 generator_output = next(output_generator)
    186 
    187                 if not hasattr(generator_output, '__len__'):

StopIteration: 

I ran the code for another model and it works. Some more information: `num_val_samples` is an integer, defined as:

num_val_samples = len(val_samples)

Edit: Here are the definitions of `train_generator` and `validation_generator`:

batch_size = 32
train_generator = data_generator(train_samples, batch_size=32)
validation_generator = data_generator(val_samples, batch_size=32)

Additionally:

train_samples = load_samples(train_data_path)
val_samples = load_samples(val_data_path)

And the definition of `data_generator`:

import os
import random

import cv2
import numpy as np

def data_generator(samples, batch_size, shuffle_data = True, resize=224):
  num_samples = len(samples)
  while True:  # loop forever so the generator never runs out of batches
    random.shuffle(samples)

    for offset in range(0, num_samples, batch_size):
      batch_samples = samples[offset: offset + batch_size]

      X_train = []
      y_train = []

      for batch_sample in batch_samples:
        # each sample is a (image file name, label) pair
        img_name = batch_sample[0]
        label = batch_sample[1]
        img = cv2.imread(os.path.join(root_dir, img_name))

        #img, label = preprocessing(img, label, new_height=224, new_width=224, num_classes=37)
        img = preprocessing(img, new_height=224, new_width=224)

        X_train.append(img)
        y_train.append(label)

      X_train = np.array(X_train)
      y_train = np.array(y_train)

      yield X_train, y_train
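
For reference, a minimal sketch for exercising the generator on its own, outside `model.fit` (assuming `train_samples`, `root_dir`, and `preprocessing` are defined as above), is:

check_generator = data_generator(train_samples, batch_size=32)
X_batch, y_batch = next(check_generator)   # pull one batch manually
print(X_batch.shape, y_batch.shape)
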
  • Can you show how you define `train_generator` and `validation_generator`? – Yoskutik May 30 '20 at 16:36
  • I amended the code :) Thank you! – Tobitor May 30 '20 at 16:46
  • 1
  • The documentation says `If x is a tf.data dataset, and 'steps_per_epoch' is None, the epoch will run until the input dataset is exhausted.` Have you tried not to define it at all? – Yoskutik May 30 '20 at 16:55
  • Sorry, now I included the definition of the `data_generator`. – Tobitor May 30 '20 at 16:59
  • Does the first epoch go OK and the second one fails? – Yoskutik May 30 '20 at 17:09
  • No, now no epoch completes... Here I posted the code with the data generator where the first epoch was running: https://stackoverflow.com/questions/62090925/how-to-get-data-generator-more-efficient The one in this question was amended a little because I one-hot encoded the labels outside the generator. – Tobitor May 30 '20 at 17:11
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/214994/discussion-between-yoskutik-and-tobitor). – Yoskutik May 30 '20 at 17:17
