
I'm having trouble implementing a gradient descent algorithm to solve an image deblurring optimisation problem.

Here is my initial optimisation function:

E[u] = |g - u*k|^2 + λ

where g is a blurry image, u is the sharp image, k is a 2x2 blur kernel, and lambda is a regularisation term.

I found the gradient and tried to implement it with different parameters. However, my image only gets blurrier. Here is my code:

grad = np.zeros((30,30))
cur_img = blur(sample_image)
g = blur(sample_image)
rate = 0.01
max_iters = 2000
iters = 0

while iters < max_iters:
    prev_img = cur_img

    for i in range(28):
        for j in range(28):
            # Calculate gradient
            grad[i,j] = prev_img[i,j] + 0/5*prev_img[i-1,j-1] + 0/5*prev_img[i+1,j+1] - g[i-1,j-1]-g[i,j]
    # Gradient Descent
    cur_img = cur_img - rate * grad
    iters = iters+1

plt.imshow(cur_img, cmap ="gray")
plt.show()

Please help me understand the right way of implementing this. Any help would be highly appreciated.

2 Answers


This won't fix your whole problem, but you might want to start with the correct values of k: you probably meant 0.5 instead of 0/5, which evaluates to 0.0 in Python, so those kernel terms vanish entirely. Then look into the boundary conditions.
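For reference, here is a minimal sketch of the whole descent once the kernel values are fixed. The 2x2 kernel entries, the random test image, and the periodic ("wrap") boundary handling are assumptions for illustration, not the asker's exact setup; the gradient of the data term |g - u*k|^2 is computed with scipy.ndimage.correlate, which is the adjoint of convolve under periodic boundaries (the factor of 2 is absorbed into the step size):

```python
import numpy as np
from scipy.ndimage import convolve, correlate

rng = np.random.default_rng(0)
sharp = rng.random((30, 30))            # stand-in for the unknown sharp image
kernel = np.array([[0.5, 0.0],
                   [0.0, 0.5]])         # assumed 2x2 blur kernel (0.5, not 0/5)
g = convolve(sharp, kernel, mode='wrap')  # the blurred observation

def energy(u):
    # data term of E[u] = |g - u*k|^2 (regularisation omitted for brevity)
    return np.sum((convolve(u, kernel, mode='wrap') - g) ** 2)

u = g.copy()                            # start the descent from the blurry image
rate = 0.1
energy_before = energy(u)
for _ in range(2000):
    residual = convolve(u, kernel, mode='wrap') - g
    # correlate is the adjoint of convolve under 'wrap' boundaries,
    # so this is (up to a constant factor) the gradient of the data term
    u = u - rate * correlate(residual, kernel, mode='wrap')
energy_after = energy(u)
```

With this update the energy decreases monotonically for a small enough step size, instead of the image getting blurrier.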


Seeing that grad has shape (30, 30), I assume that g has shape (29, 29). The image needs to have the same dimensions as grad, so you need to initialise it differently. E.g. use

i_max, j_max, *_ = g.shape
cur_img = np.zeros((i_max+1,j_max+1))
for i in range(i_max):
    for j in range(j_max):
        cur_img[i,j] = g[i,j]
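The explicit copy loop above can also be written with NumPy slicing. A toy sketch, using a small array in place of the real g:

```python
import numpy as np

g = np.arange(9.0).reshape(3, 3)        # toy stand-in for the blurred image
i_max, j_max = g.shape
cur_img = np.zeros((i_max + 1, j_max + 1))
cur_img[:i_max, :j_max] = g             # copy g into the top-left block
# the extra last row and column stay at 0 (the untouched boundary)
```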

The current image will then have an extra boundary row and column that are never updated. You can leave them at 0 or apply other boundary conditions.

If you then handle the boundary conditions inside your iteration, i.e. check whether i and j are 0 or at their maximum value and adjust grad accordingly, you will see that you can actually iterate over range(30) for both i and j. This should solve the problem.
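One way to realise the "check whether i and j are at the boundary" advice is to clamp the neighbour indices, which amounts to a replicate boundary condition. This is a sketch under that assumption, using the asker's stencil with 0.5 substituted for 0/5 and random arrays in place of the real images:

```python
import numpy as np

def clamp(idx, n):
    # keep an index inside [0, n-1] (replicate boundary condition)
    return min(max(idx, 0), n - 1)

n = 30
rng = np.random.default_rng(1)
prev_img = rng.random((n, n))           # stand-in for the current estimate
g = rng.random((n, n))                  # stand-in for the blurred image
grad = np.zeros((n, n))

for i in range(n):                      # the full range(30) now works
    for j in range(n):
        im, jm = clamp(i - 1, n), clamp(j - 1, n)
        ip, jp = clamp(i + 1, n), clamp(j + 1, n)
        grad[i, j] = (prev_img[i, j]
                      + 0.5 * prev_img[im, jm]
                      + 0.5 * prev_img[ip, jp]
                      - g[im, jm] - g[i, j])
```

Note that without clamping, an index like prev_img[i-1, j-1] at i = 0 silently wraps around to the last row in NumPy rather than raising an error, which corrupts the gradient at the boundary.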

  • Thank you for your answer. I will try it out. I'm also currently trying to understand the following solution, which works for me with some small changes. What I don't understand is the gradient function: scipy.ndimage.convolve requires two parameters, as shown in the loss function. Do you think that the following gradient function is wrong?

        def loss(image):
            return np.sum(convolve(image, kernel) - blurred_image)

        def gradient(image):
            return convolve(convolve(image, kernel) - blurred_image)

        for _ in range(maxit):
            deblurred -= learning_rate * gradient(image)

    – ScienceOfficer Nov 09 '20 at 14:07