
I am using the TF2 Object Detection API to train an ssd_resnet50 model. Each time I train it I get different losses and evaluation scores, as shown in the TensorBoard graphs below.

I am using the VOC2012 dataset to retrain the pretrained ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 model. I have religiously followed the API setup from this tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html.

  1. Organised my workspace/training files
  2. Prepared/annotated the image datasets
  3. Generated TFRecords from those datasets (a minimal sketch of this step follows the list)
  4. Configured a simple training pipeline
  5. Trained the model and monitored its progress
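
For context, step 3 boils down to serializing each annotated image into a tf.train.Example with the feature keys the Object Detection API expects. A minimal sketch for one image with a single box; the file names, dimensions, and coordinates below are hypothetical placeholders:

import tensorflow as tf

def _bytes(v): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[v]))
def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
def _ints(v): return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

with tf.io.TFRecordWriter('annotations/train.record') as writer:
    with tf.io.gfile.GFile('images/example.jpg', 'rb') as f:
        encoded_jpg = f.read()
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes(encoded_jpg),
        'image/format': _bytes(b'jpeg'),
        'image/height': _ints([640]),
        'image/width': _ints([640]),
        # Box coordinates are normalized to [0, 1]
        'image/object/bbox/xmin': _floats([0.10]),
        'image/object/bbox/xmax': _floats([0.55]),
        'image/object/bbox/ymin': _floats([0.20]),
        'image/object/bbox/ymax': _floats([0.60]),
        'image/object/class/text': _bytes(b'dog'),
        'image/object/class/label': _ints([1]),  # id from label_map.pbtxt
    }))
    writer.write(example.SerializeToString())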

Everything works just fine except for reproducibility. To train the customised model I am using the pipeline.config shown below.

model {
  ssd {
    num_classes: 20 # Set this to the number of different label classes
    image_resizer {
      fixed_shape_resizer {
        height: 640
        width: 640
      }
    }
    feature_extractor {
      type: "ssd_resnet50_v1_fpn_keras"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 0.00039999998989515007
          }
        }
        initializer {
          truncated_normal_initializer {
            mean: 0.0
            stddev: 0.029999999329447746
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.996999979019165
          scale: true
          epsilon: 0.0010000000474974513
        }
      }
      override_base_feature_extractor_hyperparams: true
      fpn {
        min_level: 3
        max_level: 7
      }
    }
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    box_predictor {
      weight_shared_convolutional_box_predictor {
        conv_hyperparams {
          regularizer {
            l2_regularizer {
              weight: 0.00039999998989515007
            }
          }
          initializer {
            random_normal_initializer {
              mean: 0.0
              stddev: 0.009999999776482582
            }
          }
          activation: RELU_6
          batch_norm {
            decay: 0.996999979019165
            scale: true
            epsilon: 0.0010000000474974513
          }
        }
        depth: 256
        num_layers_before_predictor: 4
        kernel_size: 3
        class_prediction_bias_init: -4.599999904632568
      }
    }
    anchor_generator {
      multiscale_anchor_generator {
        min_level: 3
        max_level: 7
        anchor_scale: 4.0
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        scales_per_octave: 2
      }
    }
    post_processing {
      batch_non_max_suppression {
        score_threshold: 9.99999993922529e-09
        iou_threshold: 0.6000000238418579
        max_detections_per_class: 100
        max_total_detections: 100
        use_static_shapes: false
      }
      score_converter: SIGMOID
    }
    normalize_loss_by_num_matches: true
    loss {
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      classification_loss {
        weighted_sigmoid_focal {
          gamma: 2.0
          alpha: 0.25
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    encode_background_as_zeros: true
    normalize_loc_loss_by_codesize: true
    inplace_batchnorm_update: true
    freeze_batchnorm: false
  }
}
train_config {
  batch_size: 8 # Increase/Decrease this value depending on the available memory (Higher values require more memory and vice-versa)
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_crop_image {
      min_object_covered: 0.0
      min_aspect_ratio: 0.75
      max_aspect_ratio: 3.0
      min_area: 0.75
      max_area: 1.0
      overlap_thresh: 0.0
    }
  }
  sync_replicas: true
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.03999999910593033
          total_steps: 25000
          warmup_learning_rate: 0.013333000242710114
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "pre-trained-models/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0" # Path to checkpoint of pre-trained model
  num_steps: 25000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "detection" # Set this to "detection" since we want to be training the full detection model
  use_bfloat16: false # Set this to false if you are not training on a TPU
  fine_tune_checkpoint_version: V2
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt" # Path to label map file
  tf_record_input_reader {
    input_path: "annotations/train.record" # Path to training TFRecord file
  }
}
eval_config {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "annotations/label_map.pbtxt" # Path to label map file
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "annotations/test.record" # Path to testing TFRecord
  }
}

I tried a few things to achieve reproducibility:

  1. Set a global seed in various modules across the training pipeline (see the seeding sketch after this list)
  2. Set operation-level seeds, e.g. for shuffling and data augmentation
  3. Manually set attributes coming from the .proto files, such as shuffle = False in the build() function of the Tensorflow/models/research/object_detection/builders/dataset_builder.py module
  4. Removed data_augmentation_options from the pipeline.config file altogether
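
For reference, the global seeding from point 1 looked roughly like the sketch below; the seed value is an arbitrary example, and PYTHONHASHSEED generally has to be set before the interpreter starts to fully take effect:

import os
os.environ['PYTHONHASHSEED'] = '1234'  # ideally set before Python even starts

import random
import numpy as np
import tensorflow as tf

random.seed(1234)         # Python's built-in RNG
np.random.seed(1234)      # NumPy, used by some preprocessing code
tf.random.set_seed(1234)  # TensorFlow's global seed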

[TensorBoard graphs comparing losses and evaluation scores from the two training runs]

The graphs above show two separate training runs. Everything was kept the same in both experiments: the global seed was set in dataset_builder.py via tf.random.set_seed(1234), no data augmentation was used (i.e. data_augmentation_options was removed from pipeline.config), and data shuffling and the related attributes were adjusted as described in point 3, shown in the snippet below.

import tensorflow as tf

# Operations that rely on a random seed actually derive it from two seeds:
# the global and the operation-level seed. Placing this at the top of a
# module sets the global seed.
tf.random.set_seed(1234)

# `config` is the InputReader proto that dataset_builder.build() receives.
# Switch off shuffling (see also config.filenames_shuffle_buffer_size).
config.shuffle = False
# Use a single reader so records arrive in a deterministic order
# (zero readers would yield no data at all).
config.num_readers = 1
# Set sample_from_datasets_weights to zero.
config.sample_from_datasets_weights = 0
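
For point 2, an operation-level seed can be attached directly to the tf.data ops. A minimal sketch, where the buffer size is an arbitrary placeholder and the dataset stands in for the one dataset_builder.py constructs:

import tensorflow as tf

dataset = tf.data.TFRecordDataset('annotations/train.record')
# A fixed seed plus reshuffle_each_iteration=False keeps the record
# order identical across epochs and across runs.
dataset = dataset.shuffle(buffer_size=2048, seed=1234,
                          reshuffle_each_iteration=False)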

1 Answer


You might try setting the Python hash seed; at least for some people this did the job. See
https://keras.io/getting_started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development or
https://github.com/keras-team/keras/issues/2280#issuecomment-411542012
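
Note that PYTHONHASHSEED only takes effect at interpreter start-up, so exporting it before launching training is the reliable route. A hedged sketch of one way to enforce it from inside a script (this re-exec pattern is an assumption, not from the linked FAQ verbatim):

import os
import sys

# Re-execute the interpreter with a fixed hash seed if it was not
# already set; os.execv replaces the current process.
if os.environ.get('PYTHONHASHSEED') != '0':
    os.environ['PYTHONHASHSEED'] = '0'
    os.execv(sys.executable, [sys.executable] + sys.argv)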

Another thing that might help is trying a different optimizer. Some optimizers, such as Adam, have an internal random initialization.

grosser