I want to determine whether an image has been overly compressed, i.e. whether it contains those pixelated artifacts you can see clearly, for example, in the upper right portion of the image below. The following comparison shows two JPEG images: the left one is the original, and the right one has been saved at 30% quality and then saved again at 80% quality.
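For reference, the degraded version was produced with ImageMagick along these lines (filenames are just placeholders):

    convert original.jpg -quality 30 temp.jpg     # heavy compression introduces the artifacts
    convert temp.jpg -quality 80 degraded.jpg     # re-saving at 80% hides the low quality setting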
The loss of detail in the right one is easily detectable with the naked eye. I'm looking for an algorithm which, given only the final image and not the original, detects whether it has been overly compressed, i.e. whether it shows this kind of "disturbance" that produces those clusters of similar/identical pixels and therefore indicates an overall poor level of detail.
I analyzed both through ImageMagick and they have very similar values and histograms, and pretty much the same min/max values on the RGB channels. The original image reports a quality of 71% and the compressed one reports 80%, simply because, as said above, I saved it back at 80% after saving it at 30% quality in the first place, which makes the reported "quality" factor unreliable.
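The ImageMagick checks I ran were roughly these (filenames are placeholders):

    identify -format "%Q\n" original.jpg degraded.jpg    # reported JPEG quality estimate: 71 vs 80
    identify -verbose original.jpg                       # per-channel statistics, including min/max
    convert original.jpg -format %c histogram:info:-     # color histogram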
Before anyone asks, I haven't written any code yet. I'm doing some research, just looking for tips to eventually find a solution, but I don't really know what this phenomenon is called nor which algorithm(s) would serve the purpose. The field of image and signal analysis is huge, so I'd really appreciate it if you could help me narrow it down.