
I'm quite confused about what a scope or an operation means when referring to a layer of a network. If I want to fine-tune the last softmax layer of the network, should I work with the scope, the operation, or the variables? What are the differences between these terms?

I have looked at the TF-Slim walkthrough ipynb tutorial, and here is how they excluded some scopes from restoration for fine-tuning:

def get_init_fn():
    """Returns a function run by the chief worker to warm-start the training."""
    checkpoint_exclude_scopes = ["InceptionV1/Logits", "InceptionV1/AuxLogits"]

    exclusions = [scope.strip() for scope in checkpoint_exclude_scopes]

    variables_to_restore = []
    for var in slim.get_model_variables():
        excluded = False
        for exclusion in exclusions:
            if var.op.name.startswith(exclusion):
                excluded = True
                break
        if not excluded:
            variables_to_restore.append(var)

    return slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
        variables_to_restore)

It seems that they listed the scopes to exclude, ["InceptionV1/Logits", "InceptionV1/AuxLogits"], dropped every variable whose name starts with one of those scopes, and added every variable from the scopes they did not list to variables_to_restore. Is it safe to say that a scope actually refers to a layer?
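If I understand it right, the filtering boils down to plain string matching on variable names, since a scope is just a name prefix. Here is a minimal pure-Python sketch of that logic, using hypothetical variable op names in place of real slim model variables:

```python
def variables_to_restore(var_names, exclude_scopes):
    """Keep every name that does not start with one of the excluded scope prefixes."""
    keep = []
    for name in var_names:
        if not any(name.startswith(scope) for scope in exclude_scopes):
            keep.append(name)
    return keep

# Hypothetical op names following the InceptionV1 naming convention.
names = [
    "InceptionV1/Conv2d_1a_7x7/weights",         # kept: restored from checkpoint
    "InceptionV1/Logits/Conv2d_0c_1x1/weights",  # dropped: new task head
]
print(variables_to_restore(names, ["InceptionV1/Logits", "InceptionV1/AuxLogits"]))
# ['InceptionV1/Conv2d_1a_7x7/weights']
```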

If so, this is the confusing part: how do variables relate to ops? I have the impression that an op.name is used to find the scope name, as in ["InceptionV1/Logits", "InceptionV1/AuxLogits"], for instance by writing [op.name for op in g.get_operations()]. If that is the case, why does a variable also have an op.name?
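As far as I can tell, a variable is backed by an op, and its tensor name is the op name plus an output index suffix like ":0". A small sketch of that naming convention, using a hypothetical variable name from an InceptionV1/Logits scope:

```python
def op_name(tensor_name):
    """TF tensor names look like '<op_name>:<output_index>';
    the op name is the part before the colon."""
    return tensor_name.split(":")[0]

var_name = "InceptionV1/Logits/Conv2d_0c_1x1/weights:0"  # hypothetical var.name
print(op_name(var_name))
# InceptionV1/Logits/Conv2d_0c_1x1/weights
print(op_name(var_name).startswith("InceptionV1/Logits"))
# True
```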

How should one find the scope names in order to select certain layers to fine-tune? I think clearing this up will be very important in resolving my confusion.
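The best I have come up with so far is to list the variable (or op) names and look at their common prefixes. A pure-Python sketch of that idea, with hypothetical names standing in for what slim.get_model_variables() would return:

```python
def scope_prefixes(var_names, depth=2):
    """Collect the distinct scope prefixes (first `depth` path components)
    from a list of variable op names."""
    return sorted({"/".join(name.split("/")[:depth]) for name in var_names})

# Hypothetical op names following the InceptionV1 naming convention.
names = [
    "InceptionV1/Conv2d_1a_7x7/weights",
    "InceptionV1/Logits/Conv2d_0c_1x1/weights",
    "InceptionV1/Logits/Conv2d_0c_1x1/biases",
]
print(scope_prefixes(names))
# ['InceptionV1/Conv2d_1a_7x7', 'InceptionV1/Logits']
```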

Thank you all for your help.

kwotsin
