
I have an image matrix A. I want to learn a convolution kernel H that does the following:

A*H gives a tensor "Intermediate", and Intermediate * H gives back "A".

Here * represents the convolution operation (possibly implemented via FFT). I only have the image. I started with a random H matrix. I want to minimise the loss between the final output [(A*H)*H] and A, and use that to obtain the optimised H. Can someone suggest how I should proceed using Torch?

N.B: I've written a function that does the convolution operations and returns a tensor that I want to be like A.
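For reference, a minimal sketch of such a convolution step (the helper name `convolve` and the 3×3 kernel size are placeholders; it uses `torch.nn.functional.conv2d` with `'same'` padding so the output keeps A's shape):

```python
import torch
import torch.nn.functional as F

def convolve(A, H):
    # A: image as a (batch, channels, height, width) tensor
    # H: learnable kernel as a (kh, kw) tensor
    # reshape H to the Conv2d weight layout (out_ch, in_ch, kh, kw)
    return F.conv2d(A, H.unsqueeze(0).unsqueeze(0), padding="same")

A = torch.randn(1, 1, 8, 8)                # the image
H = torch.randn(3, 3, requires_grad=True)  # random initial kernel
out = convolve(convolve(A, H), H)          # (A*H)*H, same shape as A
```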

d_n2001
  • There is only one kernel that could satisfy that requirement, and it is the identity kernel (i.e. a single pixel with a value of 1). This makes `Intermediate == A`. – Cris Luengo May 27 '22 at 13:34
  • “I've written a function that does the convolution operations” Why? Torch already has this operation built in. There are a million and one implementations easily available, including one in numpy and several different ones in scipy. – Cris Luengo May 27 '22 at 13:36

1 Answer


Does this code match your requirement?

import torch

A = torch.randn([1, 1, 4, 4])    # the target image
conv = torch.nn.Conv2d(1, 1, 1)  # 1x1 convolution; its weight plays the role of H
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(conv.parameters(), lr=0.001)

for i in range(1000):
  optimizer.zero_grad()
  out = conv(conv(A))            # (A*H)*H
  loss = criterion(out, A)       # compare against A
  loss.backward()
  optimizer.step()
  if i % 100 == 0:
    print(i, loss.item())

And of course, the weight of the convolution will converge to 1 (or −1), with the bias going to 0.
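If you want a larger kernel (closer to the random H matrix in the question), the same training loop works with a bare learnable tensor and `torch.nn.functional.conv2d` instead of a Conv2d module. A sketch under the assumptions of a 3×3 kernel, `'same'` padding, and Adam as the optimiser:

```python
import torch
import torch.nn.functional as F

A = torch.randn(1, 1, 8, 8)                      # the image
H = torch.randn(1, 1, 3, 3, requires_grad=True)  # random initial kernel
optimizer = torch.optim.Adam([H], lr=0.01)

for i in range(1000):
    optimizer.zero_grad()
    # (A*H)*H, with 'same' padding so shapes match A
    out = F.conv2d(F.conv2d(A, H, padding="same"), H, padding="same")
    loss = F.mse_loss(out, A)
    loss.backward()
    optimizer.step()
```

Note that the loss landscape here is quartic in H, so convergence depends on the initialisation and learning rate; in the frequency domain the constraint is that the transfer function of H squares to 1, which the identity kernel satisfies trivially.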

Hayoung