
The Task

Running `keras.Model.fit_generator` with `use_multiprocessing=True` and multiple workers on a data generator that itself contains a TensorFlow or Keras model.

This issue is closely related: https://github.com/tensorflow/tensorflow/issues/5448#issuecomment-258934405

def create_minimal_keras_model():
    ##### Create Model A #####
    in1 = keras.layers.Input(shape=(1,))
    d = keras.layers.Dense(1)(in1)
    a = keras.Model(inputs=in1, outputs=d)
    opt = keras.optimizers.Adam(lr=0.01)
    loss = keras.losses.mse
    a.compile(opt, loss)
    #####

    return a

class TestGenerator(keras.utils.Sequence):
    def __init__(self):
        self.len = int(1e2)
        self.model = None

        # self.init_model()

    def init_model(self):
        self.graph = tf.Graph()
        with self.graph.as_default():
            self.model = create_minimal_keras_model()

    def __len__(self):
        """
        Number of batches for generator.
        """

        return self.len

    def __getitem__(self, index):
        """
        Keras sequence method for generating batches.
        """
        if not self.model:
            self.init_model()

        if self.model:
            with self.graph.as_default():
                res = self.model.predict(np.array([1]))

        return (np.array([index]), np.array([-index/2 + 3]))

The Error

The training hangs at the beginning of the 2nd epoch.

What I tried

  • initializing the model when the data generator is constructed (main process)
  • initializing the model on the first call of the generation loop (subprocess)
  • calling tf.Session() and certain other functions causes a deadlock at the start of the 1st epoch
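One avenue suggested in the linked issue is changing the multiprocessing start method from fork to spawn (or forkserver), so that workers start from a fresh interpreter instead of inheriting the parent's TensorFlow state. A minimal sketch of the mechanism, with the TensorFlow/Keras parts omitted for brevity (note the comment further below reporting pickling problems with `keras.utils.Sequence` when this is tried):

```python
import multiprocessing as mp

def worker(x):
    # A spawned process inherits nothing from the parent's TensorFlow
    # runtime; the model would have to be (re)built here in each worker.
    return x * x

if __name__ == '__main__':
    # 'spawn' starts each worker from a fresh interpreter instead of fork()ing,
    # which avoids inheriting locked TF/CUDA state -- but everything handed to
    # the workers (including the Sequence object) must then be picklable.
    ctx = mp.get_context('spawn')
    with ctx.Pool(2) as pool:
        print(pool.map(worker, [1, 2, 3]))
```

The trade-off is exactly the pickling requirement: with spawn, the generator object is serialized for each worker, which is where the `can't pickle _thread.lock` error in the comments comes from.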

The complete example code:

import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
import os


def create_minimal_keras_model():
    ##### Create Model A #####
    in1 = keras.layers.Input(shape=(1,))
    d = keras.layers.Dense(1)(in1)
    a = keras.Model(inputs=in1, outputs=d)
    opt = keras.optimizers.Adam(lr=0.01)
    loss = keras.losses.mse
    a.compile(opt, loss)
    #####

    return a

class TestGenerator(keras.utils.Sequence):
    def __init__(self):
        self.len = int(1e2)
        self.model = None

        # self.init_model()

    def init_model(self):
        self.graph = tf.Graph()
        with self.graph.as_default():
            self.model = create_minimal_keras_model()

    def __len__(self):
        """
        Number of batches for generator.
        """

        return self.len

    def __getitem__(self, index):
        """
        Keras sequence method for generating batches.
        """
        if not self.model:
            self.init_model()

        if self.model:
            with self.graph.as_default():
                res = self.model.predict(np.array([1]))

        return (np.array([index]), np.array([-index/2 + 3]))



os.environ['CUDA_VISIBLE_DEVICES'] = ''

a = create_minimal_keras_model()
a.summary()


##########################################
##### Functions That Halt Before 1st Epoch #####
##########################################
# tf.Session()
# a.save_weights('tmp_model_weights.h5')
# a.load_weights('tmp_model_weights.h5')
# a.save('tmp_model.h5')
# keras.models.load_model('tmp_model.h5')
##########################################
##########################################
##########################################


##########################################
##### Functions Causing NO Deadlocks #####
##########################################
tf.get_default_session()
tf.Graph()
keras.__version__
with tf.device('/cpu:0'):
    _ = tf.constant(0)
keras.utils.plot_model(a, to_file='tmp_plot_model.png', show_shapes=True)
[a.get_layer(l_name).output for l_name in [a.layers[-1].name]]
_ = keras.backend.variable(4)
_ = keras.backend.image_data_format() 
_ = keras.backend.shape(tf.constant(1, shape=(5,5,5)))
_ = a.layers[0].get_config()
tf.random.set_random_seed(0)
##########################################
##########################################
##########################################


# The training hangs at the start of the second epoch
a.fit_generator(generator=TestGenerator(), steps_per_epoch=100, epochs=5, workers=4, use_multiprocessing=True)

The Question

What options do I have for running a data generator, which uses a TensorFlow model internally, during multiprocess training?

The options I identified are:
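One option (an assumption on my part, since the question has no accepted answer) is to harden the lazy initialization already sketched in `__getitem__`: key the model on the current process ID, so a forked worker never reuses a graph built in the parent process. A minimal sketch of the pattern, with a hypothetical placeholder string standing in for the real `tf.Graph()`/model construction:

```python
import os

class PerProcessModel:
    """Sketch: rebuild a per-process resource whenever __getitem__
    runs in a process it has not seen before."""

    def __init__(self):
        self._pid = None   # PID the current model was built in
        self.model = None  # built lazily, per process

    def _ensure_model(self):
        pid = os.getpid()
        if self.model is None or self._pid != pid:
            # In the real generator: create a fresh tf.Graph() and
            # build the Keras model inside it here.
            self.model = f'model-built-in-{pid}'  # hypothetical placeholder
            self._pid = pid

    def __getitem__(self, index):
        self._ensure_model()
        return index

gen = PerProcessModel()
gen[0]
print(gen.model)
```

After a fork, the child inherits `self._pid` from the parent, the PID check fails, and the model is rebuilt inside the worker rather than shared across processes.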

Markus Weber
  • In your first link there is a suggestion to use python 3.4+ and change how multiprocessing forks processes, did you try that? – Dr. Snoopy Jul 04 '19 at 09:26
  • Yes, then I get the following error: .... File "/usr/lib/python3.5/multiprocessing/reduction.py", line 59, in dump ForkingPickler(file, protocol).dump(obj) TypeError: can't pickle _thread.lock. I'm not sure, but this might be caused by using `keras.utils.Sequence` as the base class of the generator. – Markus Weber Jul 06 '19 at 13:37
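The `can't pickle _thread.lock` error arises because, with spawn, the whole Sequence object (including the TF model and graph it holds) is pickled for each worker. A possible workaround, sketched here without Keras for brevity, is to exclude the model from the pickled state via `__getstate__` and rely on the existing lazy rebuild in `__getitem__`:

```python
import pickle
import threading

class LazyModelSequence:
    """Sketch of a Sequence-like generator that drops its (unpicklable)
    model from the pickled state and rebuilds it lazily per worker."""

    def __init__(self):
        self.len = 100
        self.model = None  # built lazily in __getitem__

    def __getstate__(self):
        # Exclude the model so the object survives pickling; the worker
        # rebuilds it on first use.
        state = self.__dict__.copy()
        state['model'] = None
        return state

    def __len__(self):
        return self.len

    def __getitem__(self, index):
        if self.model is None:
            # In the real generator this would call init_model().
            self.model = object()  # placeholder for the TF model
        return index

seq = LazyModelSequence()
seq.model = threading.Lock()  # stand-in for the unpicklable TF model state
clone = pickle.loads(pickle.dumps(seq))  # succeeds: the lock was dropped
print(clone.model)
```

Whether this suffices for a real Keras model depends on what else the Sequence holds; anything reachable from its `__dict__` that is not picklable needs the same treatment.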

0 Answers