
I am fine-tuning BERT for a binary sentiment classification task using TensorFlow. I want to use a custom training loop/loss function. However, when I train the model, I get the following error:

ValueError: Internal error: Tried to take gradients (or similar) of a variable without handle data: Tensor("transformer_encoder/StatefulPartitionedCall:1019", shape=(), dtype=resource)

To debug, I tried simplifying my training loop to just compute standard binary cross entropy, which should be equivalent to calling model.fit() with binary cross entropy as the loss function (and that works completely fine). However, I get the same error as above when running this simplified training loop, and I am not sure what's causing it. Note: I am using TensorFlow 2.3.0.

Here is the model:

import tensorflow as tf
import tensorflow_hub as hub

def create_model():
  max_seq_length = 512
  input_word_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,
                                         name="input_word_ids")
  input_mask = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,
                                     name="input_mask")
  input_type_ids = tf.keras.layers.Input(shape=(max_seq_length,), dtype=tf.int32,
                                         name="input_type_ids")

  # BERT encoder from TF Hub; v2 of this model takes a list of inputs
  # and returns (pooled_output, sequence_output).
  bert_layer = hub.KerasLayer(
      "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2",
      trainable=True)
  pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, input_type_ids])
  drop = tf.keras.layers.Dropout(0.3)(pooled_output)
  output = tf.keras.layers.Dense(1, activation='sigmoid', name="output")(drop)

  model = tf.keras.Model(
      inputs={
          'input_word_ids': input_word_ids,
          'input_mask': input_mask,
          'input_type_ids': input_type_ids
      },
      outputs=output
  )

  return model
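
For completeness, the model and optimizer used by the training loop below are created roughly like this (2e-5 is the same learning rate I use when compiling for model.fit() at the end):

# Build the model and the optimizer used by the custom training loop.
model = create_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)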

Here is the training loop function. The error seems to come up when running ypred = model(train_x) inside the tf.GradientTape() context:

def train_step(train_batch):
  train_x, train_y = train_batch
  with tf.GradientTape() as tape:
    ypred = model(train_x)  # this is where the handle-data ValueError is raised
    loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(train_y, ypred))
  grads = tape.gradient(loss, model.trainable_weights)
  optimizer.apply_gradients(zip(grads, model.trainable_weights))
  return loss
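
I call train_step once per batch over the same train_data I later pass to model.fit(), roughly like this:

# Minimal driver for the custom loop; train_data yields
# ({'input_word_ids': ..., 'input_mask': ..., 'input_type_ids': ...}, labels) batches.
for epoch in range(epochs):
  for batch in train_data:
    loss = train_step(batch)
  print(f"epoch {epoch}: last batch loss = {float(loss):.4f}")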

Again, this error seems to happen only with tf.GradientTape(), since compiling and calling model.fit() does not result in any issues:

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy()])

model.fit(train_data,
          validation_data=valid_data,
          epochs=epochs,
          verbose=1)

1 Answer


Could you retry with the newest version of the model (https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4)? Version 4 introduced support for gradient tapes, so that might be why you see issues when trying to use tf.GradientTape with v2.
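A sketch of the change inside create_model (note: as far as I remember, v3 of this model also switched to dict-based inputs and outputs, so the call into bert_layer needs a small update as well):

# Load v4 of the encoder instead of v2.
bert_layer = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=True)

# v3+ of this model takes and returns dicts rather than the
# list/tuple interface used by v2.
outputs = bert_layer(dict(
    input_word_ids=input_word_ids,
    input_mask=input_mask,
    input_type_ids=input_type_ids))
pooled_output = outputs["pooled_output"]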

I found that just changing versions of packages helped. However, this may have also done the trick. Thanks! – Jane Sully Aug 26 '21 at 11:00