I'm running a script with a fixed seed. Results reproduce exactly on consecutive runs, but running the same script with the same seed a few days later produces different output, so I only get short-term reproducibility, which is strange. For reproducibility, my script already includes the following statements:
```python
torch.backends.cudnn.benchmark = False     # no cuDNN autotuning of conv algorithms
torch.backends.cudnn.deterministic = True  # only deterministic cuDNN kernels
torch.use_deterministic_algorithms(True)   # error out on nondeterministic ops
random.seed(args.seed)                     # Python RNG
np.random.seed(args.seed)                  # NumPy global RNG
torch.manual_seed(args.seed)               # PyTorch RNGs (CPU, and CUDA in recent versions)
```
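Putting that together with the imports, my setup block looks roughly like this (`set_seed` is just an illustrative helper name, and the `CUBLAS_WORKSPACE_CONFIG` line reflects my understanding of what deterministic cuBLAS needs on GPU; it isn't in the list above):

```python
import os
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    # My understanding: deterministic cuBLAS on CUDA 10.2+ needs this env var
    # once torch.use_deterministic_algorithms(True) is enabled.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

    torch.backends.cudnn.benchmark = False     # no cuDNN autotuning
    torch.backends.cudnn.deterministic = True  # deterministic cuDNN kernels
    torch.use_deterministic_algorithms(True)   # raise on nondeterministic ops

    random.seed(seed)                          # Python RNG
    np.random.seed(seed)                       # NumPy global RNG
    torch.manual_seed(seed)                    # PyTorch CPU/CUDA RNGs


set_seed(0)  # in my script this is set_seed(args.seed)
```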
I also checked the sequence of instance ids produced by the RandomSampler for the training DataLoader, and it is identical across runs (a rough sketch of that check is at the end of the post). I have also set num_workers=0 in the DataLoader. What could be causing the output to change?
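For reference, this is roughly how I checked the sampler order (the TensorDataset is a stand-in for my real training set, and the literal seed stands in for args.seed):

```python
import torch
from torch.utils.data import RandomSampler, TensorDataset

dataset = TensorDataset(torch.arange(1000).float())  # stand-in for my dataset

torch.manual_seed(0)               # args.seed in my script
sampler = RandomSampler(dataset)   # draws its permutation from the global torch RNG

order = list(iter(sampler))        # instance ids in the order they would be served
print(order[:20])                  # this prefix (and the full sequence) matched across runs
```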