I'm having trouble implementing a gradient descent algorithm to solve an image deblurring optimisation problem.
Here is the energy I am trying to minimise:
E[u] = |g - u*k|^2 + λ
where g is the blurry image, u is the sharp image I want to recover, k is a 2x2 blur kernel, and λ is a regularisation term.
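For what it's worth, my understanding is that the gradient of the data term |g - u*k|^2 with respect to u is 2 * k_flipped * (u*k - g), i.e. the residual correlated with the kernel (convolution with the flipped kernel), and that a bare constant λ contributes nothing to the gradient. Here is a rough sketch of how I would compute that with scipy (the data_term_gradient name and the boundary handling are just my own choices, I'm not sure they are right):

import numpy as np
from scipy.signal import convolve2d

def data_term_gradient(u, g, k):
    # residual = (current estimate blurred by k) minus the observed image
    residual = convolve2d(u, k, mode="same", boundary="symm") - g
    # the adjoint of "convolve with k" is correlation, i.e. convolution
    # with the kernel flipped in both axes
    k_flipped = k[::-1, ::-1]
    return 2.0 * convolve2d(residual, k_flipped, mode="same", boundary="symm")

I can't see how my hand-derived per-pixel formula below relates to this, which may be where I'm going wrong.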
I derived the gradient by hand and tried to implement it with different parameters, but the image only gets blurrier. Here is my code:
import numpy as np
import matplotlib.pyplot as plt

grad = np.zeros((30, 30))
cur_img = blur(sample_image)   # current estimate, initialised with the blurry image
g = blur(sample_image)         # observed blurry image
rate = 0.01
max_iters = 2000
iters = 0

while iters < max_iters:
    prev_img = cur_img
    for i in range(28):
        for j in range(28):
            # Calculate gradient (my hand-derived per-pixel formula)
            grad[i, j] = (prev_img[i, j]
                          + 0.5 * prev_img[i - 1, j - 1]
                          + 0.5 * prev_img[i + 1, j + 1]
                          - g[i - 1, j - 1] - g[i, j])
    # Gradient descent update
    cur_img = cur_img - rate * grad
    iters = iters + 1

plt.imshow(cur_img, cmap="gray")
plt.show()
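In case it clarifies what I was aiming for, here is a minimal sketch of the vectorised loop I would like to end up with. It reuses the data_term_gradient helper from the sketch above; the 2x2 box kernel and the random test image are only placeholders for my real kernel and blurry image.

import numpy as np

k = np.ones((2, 2)) / 4.0      # placeholder 2x2 averaging kernel
g = np.random.rand(30, 30)     # placeholder for my observed blurry image
cur_img = g.copy()             # start the estimate from the blurry image
rate = 0.01

for _ in range(2000):
    # data_term_gradient is the helper sketched earlier in this post
    cur_img = cur_img - rate * data_term_gradient(cur_img, g, k)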
Please help me understand the right way of implementing this. Any help would be highly appreciated.