
I would like to know how to make my data loader yield its indices in the same order on every training run. I want this because saving visual outputs of the results on the validation set takes a long time, so I decided to save only the first N examples seen during validation.

However, when I pass all my batches in my validation loop:

for t, (x, y, indices) in enumerate(dataset['loader_val']):

    x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU
    y = y.to(device=device, dtype=dtype)  # move to device, e.g. GPU

    # Obtain the scores
    scores = model(x)

    ......

the values of `indices` are not always the same, even on the first iteration. I am sure the validation set itself is fixed, since I checked that `dataset['loader_val'].sampler.indices` is always the same array, in the same order. Is there a way to make `enumerate()` iterate over `dataset['loader_val']` in the same order every time?
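For reference, a minimal sketch of two standard ways to get a deterministic batch order from a `DataLoader` (the toy `TensorDataset` here is a hypothetical stand-in for the data behind `dataset['loader_val']`): either disable shuffling entirely, or pass a `torch.Generator` with a fixed seed and re-seed it before each pass.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset standing in for the real validation data.
data = TensorDataset(torch.arange(10).float().unsqueeze(1), torch.arange(10))

# Option 1: no shuffling -- samples always come out in index order 0, 1, 2, ...
loader_fixed = DataLoader(data, batch_size=4, shuffle=False)

# Option 2: shuffle, but drive the sampler with a dedicated generator.
# Re-seeding the generator before each pass makes every epoch identical.
g = torch.Generator()
loader_seeded = DataLoader(data, batch_size=4, shuffle=True, generator=g)

def epoch_order(loader):
    """Collect the label (= original index) of every sample in one pass."""
    return [int(v) for _, yb in loader for v in yb]

order_a = epoch_order(loader_fixed)          # deterministic by construction

g.manual_seed(0)
order_b = epoch_order(loader_seeded)         # shuffled, but reproducible
g.manual_seed(0)
order_c = epoch_order(loader_seeded)         # same shuffled order again
```

Without the `g.manual_seed(0)` calls, the generator's state advances between passes, so each epoch over `loader_seeded` would yield a different order.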

luigui2906
    Check if fixing the random seed helps `import torch;torch.manual_seed(0)`. You can check the reproducibility guidelines for more details: https://pytorch.org/docs/stable/notes/randomness.html – Mohamed Ali JAMAOUI Dec 12 '19 at 10:11
  • Does your validation loader have `shuffle=True`? – Shai Dec 12 '19 at 11:41
  • Thanks, this works: just calling `torch.manual_seed(0)` before the for loop gives the same seed, and therefore the same order, every time. Thanks for the reply. – luigui2906 Dec 12 '19 at 14:39

0 Answers