I am trying to measure the time required for a model forward pass. I came across a post mentioning the drawbacks of using Python's time module for this, since GPU work is launched asynchronously.

The post relies on PyTorch and uses torch.cuda.Event(enable_timing=True) to record timestamps; in TensorFlow, I found what seems to be a similar function, tf.timestamp().
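For reference, the CUDA-event pattern that post describes looks roughly like this (a minimal sketch, assuming a CUDA-capable PyTorch build; model and x are placeholders):

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
y = model(x)                      # forward pass to be timed
end.record()
torch.cuda.synchronize()          # wait for the GPU to finish before reading the timer
elapsed_ms = start.elapsed_time(end)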

However, calling this function with os.environ['TF_DETERMINISTIC_OPS'] = '1' set leads to the following error: tensorflow.python.framework.errors_impl.FailedPreconditionError: Timestamp cannot be called when determinism is enabled [Op:Timestamp]
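A minimal sketch that should reproduce the error (assuming the TF_DETERMINISTIC_OPS flag is honored the same way in your TF 2.x version; no model is needed):

import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'   # enable deterministic ops before importing TensorFlow

import tensorflow as tf

print(tf.timestamp())   # raises FailedPreconditionError: Timestamp cannot be called when determinism is enabled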

I am interested in knowing why tf.timestamp() requires determinism to be disabled. Any ideas?

Code Idea:

import os
import time

os.environ['TF_DETERMINISTIC_OPS'] = '1'   # set before TensorFlow is imported so the flag takes effect

import tensorflow as tf
import tensorflow.experimental.numpy as tnp


@tf.function()
def forward_pass(model, x):
    y = model(x, training=False)
    return y


def inspect_time(model, model_in, runs):
    time_start = time.time()           # host-side (CPU) clock
    time_start_gpu = tf.timestamp()    # TF timestamp op; this is the call that fails under determinism
    for i in range(runs):
        pred = forward_pass(model, model_in)
    time_avg_gpu = (tf.timestamp() - time_start_gpu) / runs
    time_avg_cpu = (time.time() - time_start) / runs

    return time_avg_cpu, time_avg_gpu

if __name__ == '__main__':

    model = make_model()   # user-defined model constructor
    logical_gpus = tf.config.list_logical_devices('GPU')

    with tf.device(logical_gpus[0].name):
        x_batch, _ = train_dataset.take(1).get_single_element()   # train_dataset defined elsewhere
        x_batch = tnp.copy(x_batch)
        assert x_batch.device.endswith("GPU:0")

    time_cpu, time_gpu = inspect_time(model, x_batch, 100)
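As a cross-check that avoids tf.timestamp() entirely, one can block on the output before stopping the host-side timer, since GPU execution is asynchronous. This is only a sketch: inspect_time_sync is a hypothetical helper name, and it assumes the model returns a single tensor so .numpy() can force the result back to the host.

def inspect_time_sync(model, model_in, runs):
    # warm-up run so tf.function tracing is not included in the measurement
    _ = forward_pass(model, model_in).numpy()

    time_start = time.perf_counter()
    for _ in range(runs):
        pred = forward_pass(model, model_in)
    _ = pred.numpy()   # block until the GPU has finished the last run
    return (time.perf_counter() - time_start) / runs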
