You can implement the transform using a lambda function, as @dhananjay correctly pointed out.
Building on that comment, the implementation would look like this:
from skimage.feature import local_binary_pattern
import numpy as np

def lbp(x):
    radius = 2
    n_points = 8 * radius
    METHOD = 'uniform'
    # LBP expects a 2D array, so convert the PIL image to greyscale first
    x = np.asarray(x.convert('L'))
    return local_binary_pattern(x, n_points, radius, METHOD)
data_transforms = {
    'train': transforms.Compose([
        transforms.CenterCrop(178),
        transforms.RandomHorizontalFlip(),
        transforms.Lambda(lbp),
        transforms.ToTensor(),
        # the LBP map is single-channel, so normalize with one mean/std value
        transforms.Normalize([0.5], [0.5])
    ]),
    'val': transforms.Compose([
        transforms.CenterCrop(178),
        transforms.Lambda(lbp),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5])
    ]),
}
BUT. This is a bad idea, because it defeats the very purpose of a PyTorch transform.
A transform is ideal for an operation that either
1. can be computed trivially (at low compute cost) from the original data, so there is no advantage to applying it once and storing a copy. Normalize is one such transform; or
2. introduces an element of stochasticity or random perturbation into the original data, e.g. RandomHorizontalFlip.
The key thing to remember is that your transform is applied to every sample, every epoch, each time a batch is loaded during training.
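To make that concrete, here is a typical way such a pipeline is wired up (the dataset root, ImageFolder layout and loader settings here are assumptions, not part of the question): the lbp lambda runs on every image, every epoch, whenever a batch is fetched.

from torchvision import datasets
from torch.utils.data import DataLoader

# hypothetical dataset root laid out for ImageFolder
train_set = datasets.ImageFolder('data/train', transform=data_transforms['train'])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

for images, labels in train_loader:
    # every image in this batch has just been cropped, flipped, passed
    # through the LBP lambda, converted to a tensor and normalized,
    # and the same work is repeated on the next epoch
    ...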
Considering the above, you absolutely do not want to implement your LBP as a transform. It is better
to compute it offline and store the result; otherwise you will significantly slow down your batch loading.
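For example, here is a minimal sketch of precomputing the LBP maps once and saving them to disk; the directory names, greyscale conversion and .npy format are assumptions you would adapt to your own setup:

import os
import numpy as np
from PIL import Image
from skimage.feature import local_binary_pattern

src_dir = 'data/images'   # hypothetical folder of raw images
dst_dir = 'data/lbp'      # where the precomputed LBP maps go
os.makedirs(dst_dir, exist_ok=True)

for fname in os.listdir(src_dir):
    # load as single-channel greyscale, since LBP works on a 2D array
    img = np.asarray(Image.open(os.path.join(src_dir, fname)).convert('L'))
    lbp_map = local_binary_pattern(img, 16, 2, 'uniform')
    # save once; a custom Dataset can then load this in __getitem__
    np.save(os.path.join(dst_dir, os.path.splitext(fname)[0] + '.npy'), lbp_map)

At training time your Dataset's __getitem__ then simply loads the stored array (e.g. with np.load) and applies only the cheap transforms such as ToTensor and Normalize.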