I have two PyTorch models that I think should be equivalent; the only difference between them is how the padding is applied:
import torch
import torch.nn as nn

i = torch.arange(9, dtype=torch.float).reshape(1,1,3,3)

# First model:
model1 = nn.Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode='reflection')
print(model1(i))
# tensor([[[[-0.6095, -0.0321,  2.2022],
#           [ 0.1018,  1.7650,  5.5392],
#           [ 1.7988,  3.9165,  5.6506]]]], grad_fn=<MkldnnConvolutionBackward>)

# Second model:
model2 = nn.Sequential(nn.ReflectionPad2d((1, 1, 1, 1)),
                       nn.Conv2d(1, 1, kernel_size=3))
print(model2(i))
# tensor([[[[1.4751, 1.5513, 2.6566],
#           [4.0281, 4.1043, 5.2096],
#           [2.6149, 2.6911, 3.7964]]]], grad_fn=<MkldnnConvolutionBackward>)
I was wondering why and when you would use each approach. The outputs of the two models are different, but as I see it they should be the same, because both use reflection padding.
I would appreciate some help in understanding this.
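For reference, here is what reflection padding by itself does to the 3x3 input above (a quick standalone check, independent of either model), which is why I expect both models to effectively convolve over the same padded tensor:

pad = nn.ReflectionPad2d((1, 1, 1, 1))
print(pad(i))
# expected (values mirrored across the border, edge row/column not repeated):
# tensor([[[[4., 3., 4., 5., 4.],
#           [1., 0., 1., 2., 1.],
#           [4., 3., 4., 5., 4.],
#           [7., 6., 7., 8., 7.],
#           [4., 3., 4., 5., 4.]]]])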
EDIT
Following @Ash's comment, I wanted to check whether the randomly initialized weights were the cause, so I pinned all of them to the same value, and there is still a difference between the two methods:
import torch
import torch.nn as nn
i = torch.arange(9, dtype=torch.float).reshape(1,1,3,3)
# First model:
model1 = nn.Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode='reflection')
model1.weight.data = torch.full(model1.weight.data.shape, 0.4)
print(model1(i))
print(model1.weight)
# tensor([[[[ 3.4411,  6.2411,  5.0412],
#           [ 8.6411, 14.6411, 11.0412],
#           [ 8.2411, 13.4411,  9.8412]]]], grad_fn=<MkldnnConvolutionBackward>)
# Parameter containing:
# tensor([[[[0.4000, 0.4000, 0.4000],
#           [0.4000, 0.4000, 0.4000],
#           [0.4000, 0.4000, 0.4000]]]], requires_grad=True)
# Second model:
model2 = [nn.ReflectionPad2d((1, 1, 1, 1)),
          nn.Conv2d(1, 1, kernel_size=3)]
model2[1].weight.data = torch.full(model2[1].weight.data.shape, 0.4)
model2 = nn.Sequential(*model2)
print(model2(i))
print(model2[1].weight)
# tensor([[[[ 9.8926, 11.0926, 12.2926],
#           [13.4926, 14.6926, 15.8926],
#           [17.0926, 18.2926, 19.4926]]]], grad_fn=<MkldnnConvolutionBackward>)
# Parameter containing:
# tensor([[[[0.4000, 0.4000, 0.4000],
#           [0.4000, 0.4000, 0.4000],
#           [0.4000, 0.4000, 0.4000]]]], requires_grad=True)
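Note that both conv layers also have their own randomly initialized bias, which I did not pin above. Here is a sketch of the follow-up check I have in mind, pinning the bias to zero as well so that only the padding behaviour can differ (conv2 is just a local name I introduce for the second conv layer):

# Same setup as above, but with both weight and bias pinned
model1 = nn.Conv2d(1, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode='reflection')
model1.weight.data = torch.full(model1.weight.data.shape, 0.4)
model1.bias.data = torch.zeros_like(model1.bias.data)

conv2 = nn.Conv2d(1, 1, kernel_size=3)
conv2.weight.data = torch.full(conv2.weight.data.shape, 0.4)
conv2.bias.data = torch.zeros_like(conv2.bias.data)
model2 = nn.Sequential(nn.ReflectionPad2d((1, 1, 1, 1)), conv2)

print(model1(i))
print(model2(i))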