
I'm writing a C++ program that takes the FFT of a real input signal containing double values and returns a vector X containing std::complex<double> values. Once I have the vector of results, I then attempt to calculate the magnitude and phase of each element.

I am running into an issue with calculating the phase angle when one of the outputs is "zero". Zero is in quotes because when a calculation that should mathematically result in 0 is performed with doubles, the returned value will be very near zero, but not exactly zero.

For example, at index 3 my output array has the calculated "zero" value:

X[3] = 3.0531133177191805e-16 - i*5.5511151231257827e-17

I am trying to use the standard library std::arg function, which is supposed to return the phase angle of a complex number: std::arg(X[3]).

While X[3] is essentially 0, it is not EXACTLY 0, and the way phase is calculated this causes a problem: the calculation uses the ratio of the imaginary part to the real part, and that ratio is far from 0!

Doing the actual calculation gives a far from desirable result: std::arg(X[3]) evaluates atan2(-5.5511151231257827e-17, 3.0531133177191805e-16), which comes out to approximately -0.18 rad rather than the expected 0.
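
Here is a minimal snippet that reproduces the problem with the X[3] value above:

#include <complex>
#include <iostream>

int main()
{
    // The "zero" output bin from above.
    std::complex<double> x(3.0531133177191805e-16, -5.5511151231257827e-17);

    // std::arg computes atan2(imag, real); the components are tiny, but their
    // ratio is about -0.18, so the result is nowhere near the expected 0.
    std::cout << std::arg(x) << '\n'; // prints approximately -0.18
}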

How can I make C++ realize that the result is really 0 so I can get the correct phase angle?

I'm looking for a more elegant solution than using an arbitrary hard-coded "epsilon" value to compare the double to, but so far searching online I haven't had any luck coming up with something better.

John Kugelman
tjwrona1992
  • I'm sorry to say but it sounds like you're asking "how can I have this special case without having to implement it"? – Patrick Roberts Aug 09 '19 at 00:08
  • There's nothing arbitrary about the unit roundoff (std::numeric_limits<double>::epsilon()/2) – user14717 Aug 09 '19 at 00:09
  • Oh is there a standard library "epsilon"? That would be much better than hard coding some arbitrary value myself – tjwrona1992 Aug 09 '19 at 00:09
  • Take the sup norm of the output and use if (|x| < std::numeric_limits<double>::epsilon()*max_x) – user14717 Aug 09 '19 at 00:09
  • There's a right way to do this using the rounding model of both floating point arithmetic and the FFT. I suspect the result will be something like: values < N*eps*sup_norm are garbage, but I haven't done the analysis. – user14717 Aug 09 '19 at 00:14
  • "While X[3] is essentially 0," take a look at the LIGO data. It's all on the order 10^-17. There's no such thing as close to zero. – user14717 Aug 09 '19 at 00:17
  • Well the input signal I am using is a calculated signal, not a sampled one, and if you do the math the 3rd output is supposed to come out to zero, so that 10^-17 is just as close as this compiler gets, I guess. But I want this code to work for many different compilers so I don't want to settle for an arbitrary value to compare to. – tjwrona1992 Aug 09 '19 at 00:19
  • The compiler does not do this calculation. That's your floating point unit. But all modern floating point units should obey the rounding model of floating point arithmetic. – user14717 Aug 09 '19 at 00:21
  • It looks like `std::numeric_limits<double>::epsilon()` is `2.2204460492503131e-16` but only one of these values is smaller than that so I can't simply compare to that value. What exactly do you mean by sup_norm? – tjwrona1992 Aug 09 '19 at 00:22
  • Be careful with the epsilon in the standard library. It's not some universal value that you can use for all floating point comparisons. Different computations generate different amounts of error, so the tolerance you use will depend on your situation. In fact, in most cases you wouldn't want to use it directly. – eesiraed Aug 09 '19 at 00:23
  • sup_norm = maximum value of your array. Also scale by number of elements in your array. – user14717 Aug 09 '19 at 00:25
  • @AlexanderZhang: It is a universal value. It is defined by the IEEE 754 standard. – user14717 Aug 09 '19 at 00:25
  • @user14717 I meant universal in the sense that you can use it anywhere directly for all floating point comparisons with tolerances. – eesiraed Aug 09 '19 at 00:26
  • condition_number(computation)*eps is basically universal. – user14717 Aug 09 '19 at 00:27
  • Okay, so you mean `N * epsilon * max_value`? That still feels a little arbitrary though and getting the max value makes the calculation much less efficient because I would need to extract the real component from each complex number and get the max of just the real components. – tjwrona1992 Aug 09 '19 at 00:28
  • If you don't want arbitrary, you'll need to go through the rounding analysis of the FFT. There is literature on this, but I think N*eps*max_value is a good guess as to what that literature concludes. – user14717 Aug 09 '19 at 00:29
  • Don't use just real components, use the max of both real and imaginary components. – user14717 Aug 09 '19 at 00:30
  • https://www.ams.org/journals/mcom/1971-25-116/S0025-5718-1971-0300488-0/S0025-5718-1971-0300488-0.pdf – user14717 Aug 09 '19 at 00:31
  • The year is 1944. You are running a secret experiment for the government of the United States. Your computer is a room full of engineers with slide rules. Can you explain to *them* what you want to do? – n. m. could be an AI Aug 09 '19 at 04:13
  • @PatrickRoberts, The `std::arg` function already implements the special case for calculating the phase when the result is 0. The issue is not that I need to implement the special case; it's that my output value is supposed to be exactly 0, but instead it is just very close to 0 due to rounding errors inherent in the C++ language. – tjwrona1992 Aug 09 '19 at 16:16
  • "rounding errors inherent in the C++ language" These are not inherent in the language, they are inherent in floating-point computations. You'll get exactly the same problem with any programming language, on any computer. – Cris Luengo Aug 09 '19 at 17:46

3 Answers


If you are computing the floating-point FFT of an input signal, then that signal will include noise and thus have a finite signal-to-noise ratio: sensor noise, thermal noise, quantization noise, timing jitter, etc.

Thus the threshold for discarding FFT results as below your noise floor most likely isn't a matter of computational mathematics, but part of your physical or electronic data acquisition analysis. You will have to plug that number in and set the phase to 0.0, NaN, or whatever your default flagging value is for a non-useful (at or below the noise floor) FFT result.
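
A minimal sketch of that flagging step (PhaseOrFlag, full_scale, and the -100.0 dB default are illustrative placeholders, not from any library; the real floor comes from your acquisition analysis):

#include <cmath>
#include <complex>
#include <limits>

// Phase of one FFT bin, flagged as NaN when the bin's power falls at or
// below a caller-supplied noise floor (in dB relative to full scale).
double PhaseOrFlag(std::complex<double> bin, double full_scale,
                   double noise_floor_db = -100.0)
{
    const double level_db =
        10.0 * std::log10(std::norm(bin) / (full_scale * full_scale));
    return level_db > noise_floor_db
               ? std::arg(bin)
               : std::numeric_limits<double>::quiet_NaN();
}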

hotpaw2
  • While I 100% agree with this answer, this doesn't quite fit the current situation. In this case my input signal is completely calculated (not measured) from a `sin` equation and thus has no noise whatsoever. The only reason the output is not quite exactly zero is due to rounding errors when doing math with `double`s within the C++ language itself. I'm basically trying to find a way to detect when a rounding error has occurred, not when there is noise in the actual input signal. – tjwrona1992 Aug 09 '19 at 13:53
  • That being said, adding an arbitrary noise floor to the calculation would likely solve the problem because whatever the noise floor is it will almost certainly be greater than rounding errors within the language itself. – tjwrona1992 Aug 09 '19 at 13:54
  • The results of your transcendental sin() function are numerical approximations that are quantized to fit the finite sized number format. Thus noise. You can try interval arithmetic to analyze the maximum error bounds created by this noise in your input data, if you know the max +- error of your sin() function library. – hotpaw2 Aug 09 '19 at 14:32
  • I am using the `std::sin` function from `#include <cmath>`, but the actual precision seems to be compiler dependent. How can I even go about finding the actual precision in a compiler independent manner? – tjwrona1992 Aug 09 '19 at 15:44
  • Library dependent, which is separate from the compiler. That might be a good separate question, not a comment. – hotpaw2 Aug 09 '19 at 16:05

It was brought to my attention that my original answer will not work when the input to the FFT has been scaled. I believe I have an actual valid solution now... The original answer is kept below so that the comments still make sense.

From the comments on this answer and others, I've gathered that calculating the exact rounding error in the language may technically be possible, but it is definitely not practical. The best practical solution seems to be to let the user provide their own noise threshold (in dB) and ignore any data points whose power level falls below that threshold. It would be impossible to come up with a generic threshold for all situations, but the user can provide a reasonable one based on the signal-to-noise ratio of the signal being analyzed and pass it in.

A generic phase calculation function is shown below that calculates the phase angles for a vector of complex data points.

#include <algorithm> // std::transform
#include <cmath>     // std::log10
#include <complex>
#include <vector>

std::vector<double> Phase(const std::vector<std::complex<double>>& X,
                          double threshold, double amplitude)
{
    std::vector<double> X_phase(X.size());

    std::transform(X.begin(), X.end(), X_phase.begin(),
        [threshold, amplitude](const std::complex<double>& value) {
            // Power of this bin in dB relative to the maximum possible
            // amplitude. std::norm is |value|^2, hence 10*log10 rather than 20.
            double level = 10.0 * std::log10(std::norm(value) / (amplitude * amplitude));
            return level > threshold ? std::arg(value) : 0.0;
        });

    return X_phase;
}

This function takes 3 arguments:

  1. The vector of complex signal data you want to calculate the phase of.
  2. A sensible threshold (in dB). This can be calculated from the signal-to-noise ratio of whatever measurement device was used to capture the signal. If your signal contains no noise other than the rounding errors of the language itself, you can set this to some arbitrarily low value, like -120 dB.
  3. The maximum possible amplitude of your input signal. If your signal is calculated, set this to the amplitude of your signal. If your signal is measured, set this to the maximum amplitude the measuring device is capable of measuring. (If your signal comes from reading an audio file, its data is often normalized between -1.0 and 1.0; in that case, just set the amplitude value to 1.0.)
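
For example, for the FFT of a calculated, noise-free signal of amplitude 1.0 (with X being the FFT output described above):

// -120 dB acts as the "rounding error only" threshold from point 2.
std::vector<double> X_phase = Phase(X, -120.0, 1.0);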

This new implementation still provides me with the correct results, but is much more robust. By leaving the threshold calculation to the user they can set the most sensible value themselves based on the characteristics of the measurement device used to measure their input signal.

Please let me know if you notice any errors or any ways I can further improve the design!


Original Answer

I found a solution that seems generic enough.

The #include <limits> header provides the constant std::numeric_limits<double>::digits10.

According to the documentation:

The value of std::numeric_limits<T>::digits10 is the number of base-10 digits that can be represented by the type T without change, that is, any number with this many significant decimal digits can be converted to a value of type T and back to decimal form, without change due to rounding or overflow.

Using this I can filter out any output values that have a magnitude lower than this limit:

Calculate the phase of X[3]:

size_t N = X.size();

// Treat the bin as zero when its N-normalized magnitude is below 10^-digits10.
std::complex<double> tmp = std::abs(X[3]) / N > std::pow(10, -std::numeric_limits<double>::digits10)
                         ? X[3]
                         : std::complex<double>(0.0);

double phase = std::arg(tmp);

This effectively filters out any values that are not precisely zero due to rounding errors within the C++ language itself. It will NOT however filter out garbage data caused by noise in the input signal.

After adding this to my phase calculation I get the expected results.

tjwrona1992
  • @CrisLuengo, Hmm, while I don't see why anyone would scale their fft by a value of 1e-20, I guess you're still technically right... I guess to fix that I would need to check the maximum magnitude value and then assume anything that's 15 orders of magnitude smaller than that can be assumed to be 0? – tjwrona1992 Aug 09 '19 at 17:53
  • I'm not quite following why I would need to do the sum of the input divided by the max. What does the sum give me? – tjwrona1992 Aug 09 '19 at 17:57
  • @CrisLuengo, I would think that you would probably want to take the max divided by the scale factor of N. And then anything that is 15 orders of magnitude smaller than that could safely be assumed to be zero. I will have to give it a little more thought though. – tjwrona1992 Aug 09 '19 at 18:04
  • @CrisLuengo, I have given it some thought and posted an updated answer. – tjwrona1992 Aug 11 '19 at 05:22

The map from complex numbers to magnitude and phase is discontinuous at 0.

This is a discontinuity caused by the choice of coordinates you are using.

The solution will depend on why you chose those coordinates in a situation where values near the discontinuity are possible.

It isn't "really" zero. If you factored in error bars properly, your answer would really be a small magnitude (hopefully) and a unconstrained angle.

Yakk - Adam Nevraumont
  • I am aware of that; I guess the title of the question doesn't come off quite as clearly as I had hoped. I know I need to treat 0 as a special case, but the issue is that in C++, after calculating the FFT, the result that should be 0 is instead an incredibly small number that isn't zero. This makes it difficult to check for the special case because I can't simply compare the result and see if it is equal to 0. – tjwrona1992 Aug 09 '19 at 14:06
  • @tjwron Equality is not a property that the constructive real numbers have; floating point values are usually best handled as approximations of constructive reals. If you want to treat zero as a special case, you are already doing it wrong. You can hack on top of it, but **all solutions will be hacks** when built on top of a flawed base. You need to work out *why* you need the phase when mapping near zero complex numbers, that is where your problem and solution should be. This information you chose to withhold, so that is all I can say. – Yakk - Adam Nevraumont Aug 09 '19 at 14:28
  • If phase is discontinuous at zero, there's literally no other way to calculate the phase angle at 0 other than programming in a special case, so I don't buy "all solutions will be hacks". I just need to know how to calculate the maximum rounding error that I can possibly get from my compiler so I can compare the double values with 0 using a sensible epsilon that is not just arbitrarily thought up out of thin air. – tjwrona1992 Aug 09 '19 at 15:37
  • @tjwron There is no max rounding error in general. There is a max rounding error for a specific sequence of operations. Calculating the exact max rounding error has a number of techniques, one of which is called interval math. Others can involve in-depth understanding of each operation done at the "IEEE" standard level and mathematical proofs. Of course, this ends up finding values which could be zero and doesn't find values that are separate from zero by 10^-40 (beyond accumulated rounding) - do you pretend that phase matters for that value, when it has 180 degree error bars? Hence hack – Yakk - Adam Nevraumont Aug 10 '19 at 10:50
  • Note that due to how IEEE rounds, many hacks will have a high probability of producing reasonable results. So if you pick an epsilon based on feels, it will tend to fix your symptoms most of the time. 0.00001 feels small; use that, and your phase calculation code will have fewer symptoms of the problem. Futz with it based on playing with input until it feels right. But no fixed value epsilon can possibly be right for all floating point math, so such a fixed epsilon is going to be a hack; every simple solution is going to be a hack, unless you deal with the discontinuity fundamentally. – Yakk - Adam Nevraumont Aug 10 '19 at 10:57