https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/eager/backprop.py#L994-L1095 is the source code for the gradient method of the 'GradientTape' class. However, making changes in this source code does not affect the function's behavior, as demonstrated below. The method ends with:
flat_grad = imperative_grad.imperative_grad(
    self._tape,
    flat_targets,
    flat_sources,
    output_gradients=output_gradients,
    sources_raw=flat_sources_raw,
    unconnected_gradients=unconnected_gradients)

if not self._persistent:
  # Keep track of watched variables before setting tape to None
  self._watched_variables = self._tape.watched_variables()
  self._tape = None

grad = nest.pack_sequence_as(sources, flat_grad)
return 0  # changed from 'return grad'
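(A likely explanation, though it is an assumption on my part, is that Python is importing the backprop.py that ships with the installed tensorflow package, e.g. under site-packages, rather than the edited checkout. A minimal sketch using the standard inspect module to check which file the running interpreter actually loads:)

import inspect
import tensorflow as tf

# Path of the file that defines the gradient() method actually in use.
# If this points into site-packages rather than the edited clone,
# changes made to the clone are never picked up.
print(inspect.getsourcefile(tf.GradientTape.gradient))
print(tf.__file__)  # root of the tensorflow package being imported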
In the backprop.py snippet above I have changed the final return statement from 'return grad' to 'return 0'. However, running
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x**2
dy_dx = tape.gradient(y, x)
dy_dx.numpy()
still gives 6.0 as the output. What else needs to be changed?
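(As an aside, one way to experiment with the same change without editing any installed files is to monkey-patch the method on the class at runtime, which sidesteps the import question entirely. This is a minimal sketch, not taken from the question; the helper names _original_gradient and _patched_gradient are mine:)

import tensorflow as tf

# Keep a reference to the real method so its bookkeeping still runs.
_original_gradient = tf.GradientTape.gradient

def _patched_gradient(self, target, sources, *args, **kwargs):
    # Run the real gradient computation, then discard the result,
    # mimicking the 'return 0' edit to backprop.py shown above.
    _original_gradient(self, target, sources, *args, **kwargs)
    return 0

tf.GradientTape.gradient = _patched_gradient

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x**2
print(tape.gradient(y, x))  # prints 0, confirming the patch took effect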