
I have been working on a super-resolution task, and I have a question about choosing the loss function. For the task at hand I went with SSIM as the loss function to train my model, and I got a good set of results. Recently I came across the perceptual loss, where we compare how a pretrained network "sees" the ground-truth (GT) image versus the super-resolved (SR) image (the image generated by the model). I am thinking of using both losses for backpropagation, i.e. (1 - SSIM(SR, GT)) + PerceptualLoss(SR, GT). Should I use a trade-off parameter between these two losses? If so, how can I set it? Or should I simply add the losses with equal weights?

PS: the perceptual loss is calculated by computing SSIM between the feature maps of the GT and SR images extracted from the pretrained model.
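
For concreteness, a minimal sketch of what I mean by the combined loss is below. It assumes the pytorch-ssim package and a torchvision VGG16 feature extractor; both are just placeholders for whatever SSIM implementation and pretrained network is actually used, and input normalization/channel handling for VGG is omitted for brevity.

```python
import torch
import torch.nn as nn
import pytorch_ssim                      # https://github.com/Po-Hsun-Su/pytorch-ssim
from torchvision.models import vgg16


class CombinedLoss(nn.Module):
    def __init__(self, lam=1.0, feature_layer=16):
        super().__init__()
        self.lam = lam                   # trade-off weight for the perceptual term
        # Frozen pretrained feature extractor (hypothetical choice: VGG16 up to relu3_3).
        vgg = vgg16(pretrained=True).features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.features = vgg

    def forward(self, sr, gt):
        # Structural term: 1 - SSIM in image space.
        ssim_term = 1.0 - pytorch_ssim.ssim(sr, gt)
        # Perceptual term: SSIM-based distance between feature maps of SR and GT.
        perc_term = 1.0 - pytorch_ssim.ssim(self.features(sr), self.features(gt))
        return ssim_term + self.lam * perc_term
```

With `lam=1.0` this is the equal-weight sum; the question is whether and how to tune `lam`.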

  • I doubt there's a good theoretical justification for the value of a trade-off parameter between the two. You would probably have to set up some form of hyperparameter search to find a good scale. A good place to start is probably to scale one of the terms so both have the same average magnitude (see the sketch after these comments). – jodag Nov 26 '20 at 01:10
  • @user14709645 Did you implement the SSIM loss in TensorFlow 2? – shaurov2253 Dec 03 '20 at 10:50
  • I use PyTorch! [Pytorch-SSIM](https://github.com/Po-Hsun-Su/pytorch-ssim) worked fine for me. – user14709645 Dec 17 '20 at 17:39
  • The SSIM implementation of the [`piqa`](https://github.com/francois-rozet/piqa) package is (2-3x) faster. You might consider it. – Donshel Jan 10 '21 at 22:26
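
Following jodag's suggestion, here is a rough sketch of how one might pick the trade-off weight by matching the average magnitudes of the two terms on a few training batches. The helper below and its batch-averaging approach are an assumption about how to apply the suggestion, not something from the original question.

```python
import torch


def estimate_lambda(model, loss_terms, loader, n_batches=10, device="cpu"):
    """Estimate a trade-off weight so both loss terms start at a comparable scale.

    loss_terms(sr, gt) is assumed to return (ssim_term, perc_term) as scalars.
    """
    ssim_sum, perc_sum = 0.0, 0.0
    model.eval()
    with torch.no_grad():
        for i, (lr, gt) in enumerate(loader):
            if i >= n_batches:
                break
            sr = model(lr.to(device))
            s, p = loss_terms(sr, gt.to(device))
            ssim_sum += s.item()
            perc_sum += p.item()
    # Lambda that rescales the perceptual term to the SSIM term's average magnitude.
    return ssim_sum / max(perc_sum, 1e-8)
```

The resulting value is only a starting point; a small hyperparameter search around it is still advisable.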

0 Answers