
I have something like this:

model = BertModel.from_pretrained('bert-base-uncased', return_dict=True)

What exactly is this return_dict argument used for? What happens when it is True, and what when it is False?


1 Answer


When instantiating a BertModel with return_dict left as False (the default in transformers versions before 4.0), the output of the model when evaluating or predicting is a plain tuple whose entries correspond to loss, logits, hidden_states and attentions; the exact fields depend on the model class. In the example below, ids_tensor is a tensor of input token ids (e.g. produced by a tokenizer), and the sample output was printed from a masked-LM checkpoint:

predictions = model(ids_tensor)
print(predictions)
# MaskedLMOutput(loss=None, logits=tensor([[
#         [ -0.2506,  -5.6671,  -5.1753,  ...,  -5.3228,  -7.9154,  -4.5786],
#         [ -4.1528,  -8.2391,  -8.5691,  ...,  -8.4557,  -8.2903, -10.1395],
#         [-15.5995, -17.0001, -16.9896,  ..., -14.1423, -15.6004, -15.8228],
#         ...,
#         [  3.0180,  -2.9339,  -3.3522,  ...,  -4.1684,  -4.9487,  -1.7176],
#         [-12.7654, -12.9510, -12.9151,  ..., -10.5786, -11.1695,  -9.6117],
#         [ -4.0356,  -9.7091,  -9.5329,  ...,  -9.3969, -10.5371,  -9.2839]]]),
#     hidden_states=None, attentions=None)
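
For contrast, here is a minimal, self-contained sketch (not part of the original answer; the input sentence and printed shapes are illustrative) of what the tuple output looks like when return_dict is explicitly set to False and the result has to be unpacked by position:

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello world", return_tensors='pt')

# With return_dict=False the forward pass returns a plain tuple,
# so its fields have to be unpacked (or indexed) by position.
model = BertModel.from_pretrained('bert-base-uncased', return_dict=False)
with torch.no_grad():
    last_hidden_state, pooler_output = model(**inputs)

print(last_hidden_state.shape)  # e.g. torch.Size([1, 4, 768])
print(pooler_output.shape)      # e.g. torch.Size([1, 768])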

If the argument return_dict is set to True, the output becomes a ModelOutput, as it is called in the Hugging Face documentation. For BertModel this object consists of the elements last_hidden_state, pooler_output, hidden_states, past_key_values, attentions and cross_attentions, each accessible by name. Hope I could be of help.

model = BertModel.from_pretrained('bert-base-uncased', return_dict=True)
predictions = model(ids_tensor)
print(predictions)
# BaseModelOutputWithPoolingAndCrossAttentions(last_hidden_state=tensor([[
#         [ 0.0769, -0.0024,  0.0389,  ..., -0.0489,  0.0484,  0.4760],
#         [-0.1383, -0.3266,  0.2738,  ..., -0.0745,  0.0224,  0.8426],
#         [-0.4573, -0.0621,  0.4206,  ...,  0.0188,  0.1578,  0.4477],
#         ...,
#         [ 0.7070, -0.1623,  0.4451,  ..., -0.1530,  0.0902,  0.8289],
#         [ 0.7154,  0.0767, -0.2292,  ...,  0.2946, -0.5152, -0.2444],
#         [ 0.3558,  0.1660,  0.0459,  ...,  0.5960, -0.7525, -0.0851]]]),
#     pooler_output=tensor([[-7.4716e-01, -1.4339e-01, …, 7.6550e-01]]),
#     hidden_states=None,
#     past_key_values=None,
#     attentions=None,
#     cross_attentions=None)
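
With return_dict=True the same values can be read by attribute name, or by key, since a ModelOutput also behaves like a dictionary. A minimal sketch, assuming the same tokenizer and inputs as in the return_dict=False snippet above:

with torch.no_grad():
    outputs = model(**inputs)

# Fields are accessed by name instead of by position...
print(outputs.last_hidden_state.shape)  # same tensor as outputs[0]
print(outputs['pooler_output'].shape)   # dict-style access also works

# ...and the output can still be converted back to a plain tuple if needed.
last_hidden_state, pooler_output = outputs.to_tuple()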

Source: BERT documentation in Transformers (Hugging Face)
