
I have written several implementations of a linear Kalman filter tracking a sine wave. I have a Python and a C implementation that both work.

However, I have also developed a fixed-point version. It works, but I am seeing an odd effect where it initially seems to fit and then diverges from the output of the other Kalman filter implementations.

See the plot below:

[Plot: output of the various implementations of the Kalman filter]

I was wondering if anyone has some intuition for what might be going wrong in the fixed-point implementation?

SomeRandomPhysicist
  • Kalman filters can be very sensitive to numerical issues, particularly if you implement the 'text-book' formulae. You might want to read about the square-root forms of the filter, which can be less sensitive to numerical issues. – dmuir Jan 18 '17 at 14:59

1 Answer


Hopefully you're long past this problem now, but in case not, here's what I usually have to do with a fixed-point KF. Problems arise from the limited range of values available. If we set the fixed point such that the initial covariance matrix is representable, we often do not have many bits left to the right of the decimal point to represent the gain and state update once the solution begins to converge.

That's a long way of stating the obvious: we like floating point for filters. In particular, since the covariance matrix holds squared standard deviations, it steps through orders of magnitude 'quickly', if you will, while the state update is not squared. So we are in a bind trying to select a single fixed-point location that can reasonably represent both the squared covariance elements and the un-squared state update.
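
For illustration only, here is a minimal sketch of that dynamic-range squeeze in a single Q-format. The Q8.24 layout and the notional covariance and gain values are my assumptions, not figures from the original post; they just show how one end of the range overflows the integer bits while the other quantises to zero.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Q8.24 format: 8 integer bits, 24 fractional bits in an int32_t. */
#define FRAC_BITS 24
#define TO_FIXED(x)   ((int32_t)((x) * (1 << FRAC_BITS)))
#define TO_DOUBLE(x)  ((double)(x) / (1 << FRAC_BITS))

int main(void)
{
    /* A large initial covariance (squared units) nearly fills the integer bits... */
    double p0 = 100.0;            /* needs ~7 integer bits, close to the Q8.24 limit */

    /* ...while a converged gain/state correction is tiny and needs fractional bits. */
    double small_gain = 3.0e-8;   /* below the Q8.24 resolution of 2^-24 (~6e-8)   */

    int32_t p0_fx   = TO_FIXED(p0);
    int32_t gain_fx = TO_FIXED(small_gain);

    printf("covariance %g is stored as %g\n", p0, TO_DOUBLE(p0_fx));
    printf("gain       %g is stored as %g (quantised to zero)\n",
           small_gain, TO_DOUBLE(gain_fx));
    return 0;
}
```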

So I often end up with a gear shift or two - I've used up to '5-speed' fixed-point KFs in the past. At the end of the time-update step, the covariance has the largest values it will reach for that cycle; measurement processing will only decrease it. If my fixed-point representation has more than 2 bits of headroom beyond what the covariance needs, I shift left and change the fixed point on the fly. This keeps a reasonable number of bits for the gain and state-update values.

This is in effect a poor man's floating point, but I have only one decimal-point location for all the filter elements. I use some hysteresis to keep from shifting all the time. When I get that working properly, I cannot distinguish the fixed- and floating-point results - which is more or less what we should expect, as it is "floating the point".
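
A minimal sketch of that gear-shift idea, assuming a scalar covariance for simplicity. The `gear_kf` struct, the `headroom`/`maybe_shift_up` helpers, the 2-bit headroom threshold, and the 4-cycle hysteresis are my illustrative choices, not the exact scheme from the answer.

```c
#include <stdint.h>

/* Gear-shifted fixed point: all filter quantities share one exponent (the 'gear').
 * Right after the time update the covariance is at its largest for the cycle, so
 * we check how much headroom it leaves and, with some hysteresis, shift everything
 * left to reclaim fractional bits for the gain and state update.
 */
typedef struct {
    int32_t p;      /* covariance, fixed point with 'shift' fractional bits */
    int32_t x;      /* state, sharing the same fixed-point location         */
    int     shift;  /* current number of fractional bits (the 'gear')       */
    int     ticks;  /* hysteresis counter                                   */
} gear_kf;

/* Count unused leading bits of a positive covariance value (below the sign bit). */
static int headroom(int32_t p)
{
    int n = 0;
    while (p > 0 && (p & 0x40000000) == 0) { p <<= 1; n++; }
    return n;
}

/* Call once per cycle, right after the time update. */
void maybe_shift_up(gear_kf *kf)
{
    if (headroom(kf->p) > 2) {      /* more than 2 spare bits above the covariance */
        if (++kf->ticks >= 4) {     /* hysteresis: only shift after a few cycles   */
            /* In a real filter, every element sharing this fixed point must also
             * have headroom and be rescaled together with the covariance. */
            kf->p <<= 1;
            kf->x <<= 1;
            kf->shift += 1;         /* remember the new gear */
            kf->ticks = 0;
        }
    } else {
        kf->ticks = 0;              /* reset hysteresis when headroom disappears */
    }
}
```

As the covariance shrinks toward convergence, the gear moves up and the shared fixed point gains fractional bits, which is exactly the "floating the point" behaviour described above.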

Keith Brodie
  • I did get past this problem (I wasn't using enough bits to store my values), and I get the correct behaviour as long as I use enough bits for my fixed-point representation (90 at the moment, with the A matrix shifted 24 bits to the left of the decimal point and the Q matrix shifted 10 bits to the left). However, I am trying to reduce this while keeping the correct behaviour, as that number of bits is too big to fit on the device I want to run the Kalman filter on (an FPGA). Yours is an interesting method that I hadn't considered, thanks for your help. – SomeRandomPhysicist Apr 03 '17 at 09:12
  • I will have a look at implementing your pseudo-floating-point method in order to fit my code on our FPGA. Thanks! – SomeRandomPhysicist Apr 03 '17 at 09:13