
I'm working on a basic model to identify the e-ink display of my DGT Centaur chess board.

Most of the labels for the bounding boxes that identify the display read "N/A". There are a few instances where the bounding box uses the "display" label, but only when it recognizes part of my body in the image.

I'm assuming there is an incorrect configuration with my labels, but I'm not sure how to debug it. My labelmap.pbtxt and training configuration are below.

Thank you for any help!


Example images:

My body labeled with "display":

[image: result with board and person]

My body labeled with "display," the board/display labeled "N/A":

[image]

Display labeled with "N/A":

[image]


labelmap.pbtxt

item {
    id: 1
    name: 'display'
    display_name: 'display'
}

faster_rcnn_inception_v2_pets.config

# Faster R-CNN with Inception v2, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  faster_rcnn {
    num_classes: 1
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_v2'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0002
          schedule {
            step: 900000
            learning_rate: .00002
          }
          schedule {
            step: 1200000
            learning_rate: .000002
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "/Users/cs/GitHub/dgt-centaur-transcriber/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}


train_input_reader: {
  tf_record_input_reader {
    input_path: "/Users/cs/GitHub/dgt-centaur-transcriber/models/research/object_detection/train.record"
  }
  label_map_path: "/Users/cs/GitHub/dgt-centaur-transcriber/models/research/object_detection/training/labelmap.pbtxt"
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 1
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "/Users/cs/GitHub/dgt-centaur-transcriber/models/research/object_detection/test.record"
  }
  label_map_path: "/Users/cs/GitHub/dgt-centaur-transcriber/models/research/object_detection/training/labelmap.pbtxt"
  shuffle: true
  num_readers: 1
}
charliesneath

1 Answer


Not sure whether you have already solved this, but my 2 cents: this type of "N/A" issue usually appears when:

  1. The labelmap.pbtxt file has an issue (it doesn't parse, or the ids/names don't match your training data).
  2. The number of classes in the config file is incorrect.
  3. Last but not least, the Python script you use to display the images with labels (e.g. predict_image.py) may build its category index with the wrong number of classes or the wrong structure (see the sketch below).
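A quick way to rule out (1) and (3) is to load the label map the same way the Object Detection API's visualization step does and print the resulting category index. This is only a minimal sketch, assuming object_detection.utils.label_map_util is importable and using a placeholder path:

from object_detection.utils import label_map_util

LABELMAP_PATH = 'training/labelmap.pbtxt'  # placeholder: point this at your labelmap.pbtxt
NUM_CLASSES = 1                            # must match num_classes in the .config

# The file has to parse cleanly; stray characters or bad quotes show up here.
label_map = label_map_util.load_labelmap(LABELMAP_PATH)

# Build the category index the way the visualization utilities expect it:
# a dict keyed by the integer class id.
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

print(category_index)
# Expected for this label map: {1: {'id': 1, 'name': 'display'}}

If whatever your prediction script passes to visualize_boxes_and_labels_on_image_array is not shaped like that (for example a flat dict such as {'name': ..., 'id': 1}), the drawing code cannot look up the predicted class id and falls back to "N/A", which matches what you are seeing.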
  • I have the same problem as @charliesneath. I checked everything that @Satya Allamraju suggested and everything is correct... however, the error remains. I also use this line: `category_index = {'name':'dummyname','id':1}`, from here: https://stackoverflow.com/questions/71488542/nameerror-name-category-index-is-not-defined-how-to-fix Can this be the problem? – just_learning Aug 15 '22 at 10:59