I'm currently trying to make a custom PyTorch DataLoader.
I'm aware that setting drop_last=True when constructing the DataLoader tells it to drop the last incomplete batch when the dataset size is not divisible by the batch size. However, I was wondering if that could be done in reverse, where the DataLoader computes the number of full batches and counts from the back.
The reason I'm asking is that the data I'm working with is time-series data, and I want to use the most recent samples, so ideally the "leftover" samples would be dropped from the oldest portion of the data.
I've thought of workarounds, such as reversing the data before creating the DataLoader and then reversing each batch back afterwards, or reversing the data and feeding the indices to __getitem__ in reverse order, but these seem troublesome and error-prone, so I was wondering if PyTorch offers this behavior out of the box.
Thanks in advance.