I am developing an application in C# with spectrogram drawing functionality.
For my first try, I used MathNet.Numerics, and now I am continuing development with alglib. When I switched from one to the other, I noticed that their outputs differ: MathNet applies some kind of correction by default, which alglib seems to omit. I am not really into signal processing, and I am also a newbie to programming, so I could not figure out exactly where the difference comes from.
With MathNet, the default output (raw magnitude) values range from ~0.1 to ~274 in my case, while with alglib I get values ranging from ~0.2 to ~6220.
I found that MathNet's Fourier.Forward uses a default scaling option. Here it says that FourierOptions.Default is "Universal; Symmetric scaling and common exponent (used in Maple)": https://numerics.mathdotnet.com/api/MathNet.Numerics.IntegralTransforms/FourierOptions.htm If I use FourierOptions.NoScaling, the output is the same as what alglib produces.
In MathNet, I used the Fourier.Forward function: https://numerics.mathdotnet.com/api/MathNet.Numerics.IntegralTransforms/Fourier.htm#Forward In the case of alglib, I used the fftr1d function: https://www.alglib.net/translator/man/manual.csharp.html#sub_fftr1d
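Here is a stripped-down sketch of the comparison I am doing (the test signal, its length, and the bin index are made up just for illustration; the comments about scaling reflect my reading of the docs linked above):

```csharp
using System;
using System.Numerics;
using MathNet.Numerics.IntegralTransforms;

class FftCompare
{
    static void Main()
    {
        // Made-up test signal just for the comparison.
        const int n = 512;
        var signal = new double[n];
        var rng = new Random(1);
        for (int i = 0; i < n; i++)
            signal[i] = rng.NextDouble() - 0.5;

        // MathNet: Fourier.Forward transforms the Complex[] buffer in place.
        var withDefault = Array.ConvertAll(signal, s => new Complex(s, 0));
        Fourier.Forward(withDefault, FourierOptions.Default);      // "symmetric scaling"

        var withNoScaling = Array.ConvertAll(signal, s => new Complex(s, 0));
        Fourier.Forward(withNoScaling, FourierOptions.NoScaling);  // matches alglib for me

        // alglib: fftr1d takes the real signal and outputs the complex spectrum.
        alglib.complex[] spectrum;
        alglib.fftr1d(signal, out spectrum);

        // Compare the magnitude of one arbitrary bin.
        const int k = 10;
        double magDefault = withDefault[k].Magnitude;
        double magNoScaling = withNoScaling[k].Magnitude;
        double magAlglib = Math.Sqrt(spectrum[k].x * spectrum[k].x
                                     + spectrum[k].y * spectrum[k].y);

        Console.WriteLine($"MathNet Default:   {magDefault}");
        Console.WriteLine($"MathNet NoScaling: {magNoScaling}");
        Console.WriteLine($"alglib fftr1d:     {magAlglib}");
    }
}
```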
- What exactly is the difference in their calculations?
- What function could I use to convert alglib's output magnitudes to MathNet's, or vice versa? (My current guess is sketched below the list.)
- In what cases should I use these different "scalings"? What are they for exactly?
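For the second question, my guess from the "symmetric scaling" wording is that MathNet's default simply divides the raw DFT by sqrt(N), where N is the transform length, so the conversion would be something like the helpers below. I have not verified this against either library's documentation, though:

```csharp
using System;

static class FftScaleGuess
{
    // Guess: MathNet's "symmetric" scaling divides the raw DFT by sqrt(N),
    // so dividing an alglib magnitude by sqrt(N) should give MathNet's default.
    public static double ToMathNetDefault(double alglibMagnitude, int n)
        => alglibMagnitude / Math.Sqrt(n);

    // And the other direction: multiply a MathNet-default magnitude by sqrt(N).
    public static double ToAlglibScale(double mathNetMagnitude, int n)
        => mathNetMagnitude * Math.Sqrt(n);
}
```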
Please share your knowledge. Thanks in advance!