Here is my current code.
import torchvision
from torchvision import transforms
from torchvision.datasets import EMNIST

affine = transforms.RandomAffine([-15, 15], scale=(0.8, 1.2))  # rotation and rescaling
normalize = transforms.Normalize((0.0,), (1.0,))  # mean 0, std 1 (EMNIST images are single-channel)
to_tensor = transforms.ToTensor()
transform_hyouji = transforms.Compose([to_tensor, affine])

splits = ('byclass', 'bymerge', 'balanced', 'letters', 'digits', 'mnist')

emnist_data = EMNIST(root='./EMNIST_1st', split=splits[-2],
                     train=False, download=True,
                     transform=transforms.ToTensor())
EMNIST_train = EMNIST(root='./EMNIST_1st', split=splits[-2],
                      train=True, download=True,
                      transform=transforms.ToTensor())
EMNIST_test = EMNIST(root='./EMNIST_1st', split=splits[-2],
                     train=False, download=True,
                     transform=transforms.ToTensor())
However, the sizes of these datasets are:

len(EMNIST_train), len(EMNIST_test)
# (240000, 40000)
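For reference, I check the current per-label counts like this (assuming the MNIST-style datasets expose their labels as a .targets tensor, which EMNIST seems to):

import torch

# Number of samples per label in each split
print(torch.bincount(EMNIST_train.targets))
print(torch.bincount(EMNIST_test.targets))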
I would like to reduce these sizes while keeping the labels balanced, i.e. roughly the same proportion of samples for each label.
I think this question (How do you alter the size of a Pytorch Dataset?) is helpful, but as far as I can tell the approach there does not keep the labels balanced.
If you could tell me how to do this, I would appreciate it.
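This is roughly the kind of thing I have in mind: sample a fixed number of indices per label and wrap them in a torch.utils.data.Subset (balanced_subset and n_per_class are just names I made up, not library functions). I am not sure whether this is correct or the most idiomatic way:

import torch
from torch.utils.data import Subset

def balanced_subset(dataset, n_per_class, seed=0):
    # Pick at most n_per_class random indices for every label, then wrap them in a Subset.
    g = torch.Generator().manual_seed(seed)
    targets = dataset.targets  # label tensor of the MNIST-style datasets
    indices = []
    for label in targets.unique():
        label_idx = (targets == label).nonzero(as_tuple=True)[0]
        perm = label_idx[torch.randperm(len(label_idx), generator=g)]
        indices.append(perm[:n_per_class])
    return Subset(dataset, torch.cat(indices).tolist())

small_train = balanced_subset(EMNIST_train, n_per_class=1000)
small_test = balanced_subset(EMNIST_test, n_per_class=200)
print(len(small_train), len(small_test))  # 10000 2000 for the 'digits' split

Using Subset keeps the original transforms and underlying data intact, only restricting which indices are visible, but maybe there is a better built-in way to do this.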