
Good day to all!

I am trying to add the ability to control the "randomness" of the augmentation applied to my image data: each augmentation operation should be performed with a certain probability (for example 0.1). Since I use plain TensorFlow 2 operations for augmentation, I used the tf.cond operator to choose whether or not to apply an augmentation. However, while executing the code, I get the following error:

OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
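
As far as I understand, this error generally appears when a symbolic tensor is forced through Python's bool() during tracing, e.g. an `if` on a tensor in a function that AutoGraph does not convert. A minimal, unrelated illustration of the failure mode (the function name `check` is made up, and autograph is switched off on purpose to show the bool() conversion failing):

    import tensorflow as tf

    @tf.function(autograph=False)
    def check(x):
        # bool() on the symbolic tensor `x > 0` during tracing raises
        # OperatorNotAllowedInGraphError
        if x > 0:
            return x
        return -x

    check(tf.constant(1.0))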

A sample of my actual code is below:

    import tensorflow as tf

    # params
    augment = True
    augment_prob = tf.constant(0.1, dtype=tf.float32)
    # augmentation methods to apply
    augment_methods = [
        random_rotate90_image,
        random_flip_vertical_image,
        random_flip_horizontal_image,
        random_crop_image,
        random_convert_to_grayscale_image]

    AUTOTUNE = tf.data.AUTOTUNE
    # load data
    train = data_load()
    # creating tf.data.Dataset and defining other operations
    dataset = tf.data.Dataset.from_tensor_slices((train.iloc[:, 0], train.iloc[:, 1:]))
    dataset = dataset.shuffle(train.shape[0])
    dataset = dataset.map(load_image, num_parallel_calls=AUTOTUNE)
    dataset = dataset.cache()
    # add augment if needed
    if augment:
        for method in augment_methods:
            dataset = dataset.map(lambda x, y: tf.cond(
                tf.math.less(tf.random.uniform([], 0., 1.), augment_prob),  # if the generated value is less than 0.1, perform augmentation
                lambda: method(x, y),                                       # perform augmentation
                lambda: (x, y)),                                            # do not perform, pass the pair through unchanged
                num_parallel_calls=AUTOTUNE)

    dataset = dataset.batch(batch_size)

    model = create_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='categorical_crossentropy')
    model.summary()

    model.fit(x=dataset, epochs=10)

Without the tf.cond statement everything works fine. As far as I can tell, the exception is raised inside tf.cond while the boolean condition is checked. I have tried to search for similar cases but unfortunately did not find any. Has anybody faced the same problem? Could you tell me how you overcame it? I would be very thankful for any help.

Denis D.
  • Do you see an error when you run this type of code? `def augment( x ): result = tf.cond(tf.math.less(tf.random.uniform([], 0., 1.), augment_prob), lambda: x, lambda: 0) return result dataset = dataset.map(lambda x: tf.py_function(func=augment, inp=[x], Tout=(tf.int32)))` – Mohan Radhakrishnan Mar 14 '22 at 08:42
  • Thank you for your help! Interestingly, that works. However, I do not know how to apply the augmentation in that case. Did you mean that I would need to include the tf.cond call right inside every augmentation function? – Denis D. Mar 14 '22 at 15:44
  • Yes. Another option seems to be [this](https://www.tensorflow.org/tutorials/images/data_augmentation). You can look for `aug_ds = train_ds.map( lambda x, y: (resize_and_rescale(x, training=True), y))`. But introducing a condition may still need a different function for each augmentation; see the sketch below these comments. – Mohan Radhakrishnan Mar 15 '22 at 07:55
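
Update (based on the comments above): here is a minimal sketch of moving tf.cond inside the mapped function, so that both branches return the same (image, label) structure. The `maybe_augment` wrapper and the flip body are only illustrative, not my real augmentation functions, and `dataset` is the dataset from the code above; I have not verified this against the full pipeline:

    import tensorflow as tf

    augment_prob = tf.constant(0.1, dtype=tf.float32)

    def random_flip_horizontal_image(x, y):
        # illustrative augmentation: flips the image, leaves the label untouched
        return tf.image.flip_left_right(x), y

    def maybe_augment(method):
        # wraps a single augmentation so that tf.cond lives inside the mapped
        # function and both branches return the same (image, label) structure
        def _apply(x, y):
            return tf.cond(
                tf.math.less(tf.random.uniform([], 0., 1.), augment_prob),
                lambda: method(x, y),   # apply the augmentation
                lambda: (x, y))         # pass the pair through unchanged
        return _apply

    dataset = dataset.map(maybe_augment(random_flip_horizontal_image),
                          num_parallel_calls=tf.data.AUTOTUNE)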

0 Answers