
I am pretty new to signal and image processing. I attached a picture of what I am supposed to do from a paper (https://iopscience.iop.org/article/10.1088/1361-6501/ab7f79/meta).


Basically, an interferogram is recorded in B/W by a CMOS sensor; each vertical and horizontal pixel line is then taken individually and "associated" with a signal representing the intensity of light reaching the sensor. The signals are then Fourier-transformed to extract frequency and phase information (the phase still needs to be unwrapped). I have understood the final step, the DFT of the signal, but I am stuck on extracting a pixel line and building the signal associated with it. Ideally, in Matlab the workflow would be:

  • extract each pixel line
  • assign a "colormap" to the line (white = 1, black = 0, all the other shades in between?)
  • build my signal by interpolating the values of the pixels
  • DFT the signal to extract frequency and phase

Is there a compact way to do so?
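The workflow above can be sketched quite compactly in Matlab. This is a minimal sketch, not the paper's exact method; the file name `interferogram.png` and the row index are placeholders, and the dominant-peak search is one simple way to read off the fringe frequency:

```matlab
% Load the interferogram and normalize to [0, 1] (white = 1, black = 0)
I = im2double(imread('interferogram.png'));   % placeholder file name
if ndims(I) == 3
    I = rgb2gray(I);                          % collapse an RGB snapshot to grayscale
end

row = 124;                                    % pixel line to analyse (placeholder)
s = I(row, :);                                % intensity signal along that horizontal line

% DFT of the line: the magnitude peak gives the fringe frequency,
% the angle at that bin gives the (wrapped) phase
N = numel(s);
S = fft(s - mean(s));                         % remove the DC term so the fringe peak stands out
f = (0:N-1) / N;                              % normalized spatial frequency (cycles/pixel)
[~, k] = max(abs(S(2:floor(N/2))));           % dominant bin in the positive-frequency half
fringeFreq  = f(k + 1);                       % +1 compensates for starting the search at bin 2
fringePhase = angle(S(k + 1));
```

Note that no interpolation step is needed: the pixel values along a line are already uniformly spaced samples, so they can be fed to `fft` directly. Looping `row` over `1:size(I,1)` (and the same over columns) processes every line.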

So far I have managed to do this: I imported the image of the interferogram (248x320 pixels, just a snapshot from the paper), and for the 124th horizontal line I obtained the signal, its frequency, and its phase.
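On the unwrapping mentioned above: once a wrapped phase has been computed along a line (e.g. via `angle` of a Fourier-filtered signal), Matlab's `unwrap` removes the 2π jumps. A self-contained synthetic example, unrelated to any particular interferogram:

```matlab
% Recovering a continuous phase from its wrapped version
x = linspace(0, 10, 200);
truePhase = 2*pi*1.5*x;                % phase ramp growing well past 2*pi
wrapped   = angle(exp(1i*truePhase));  % wrapped into (-pi, pi]
recovered = unwrap(wrapped);           % remove the 2*pi discontinuities
% recovered matches truePhase here because the phase step between
% adjacent samples stays below pi (the condition unwrap relies on)
```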

Gianluca
  • Try looking at [imread](https://www.mathworks.com/help/matlab/ref/imread.html), it returns (for an input B/W image) a matrix with the grayscale level of each pixel. – Matteo V Feb 02 '21 at 10:28
  • [Does that help ?](https://stackoverflow.com/a/65902499/4363864) – obchardon Feb 02 '21 at 10:29
  • Ok, I tried to use imread on the interferogram in the picture. I followed this (https://uk.mathworks.com/matlabcentral/answers/261707-how-to-convert-an-image-to-frequency-domain-in-matlab), considering the green channel in this case. What I got is a matrix whose rows and columns are the horizontal and vertical lines I need, is that correct? I have just plotted a row and got a signal quite similar to the one in the picture, though I don't know if that's just luck. – Gianluca Feb 02 '21 at 10:43
  • To normalize the pixel intensities between **0 (black)** and **1 (white)**, the `im2double()` function can be used on the image returned from `imread()`. – MichaelTr7 Feb 03 '21 at 06:15

0 Answers