
Do libraries like libfixmath perform better than ARM's hardware floating point (VFP/NEON), or is there no gain from fixed point when floating-point hardware is available?

I'm considering converting every instance of float in my code to a fixed-point C++ class (similar to libfixmath) to reduce the run time of an algorithm running on a Cortex-A9. The question is whether anyone has experience with this.
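To make the idea concrete, I mean a wrapper roughly along these lines (a minimal Q16.16 sketch for illustration only; libfixmath's actual implementation differs):

    #include <cstdint>

    // Illustrative Q16.16 fixed-point wrapper (a sketch, not libfixmath's API):
    // 16 integer bits and 16 fractional bits stored in an int32_t.
    struct fix16
    {
        int32_t raw;

        static fix16 from_float(float f) { return { static_cast<int32_t>(f * 65536.0f) }; }
        float to_float() const           { return raw / 65536.0f; }

        fix16 operator+(fix16 o) const   { return { raw + o.raw }; }
        fix16 operator-(fix16 o) const   { return { raw - o.raw }; }

        // A multiply widens to 64 bits to avoid overflow, then shifts
        // the extra 16 fractional bits back out.
        fix16 operator*(fix16 o) const
        {
            return { static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16) };
        }
    };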

So far, my results with several fixed-point implementations, on both an Intel i5 and an ARM Cortex-A9, show no improvement of fixed point over the floating-point hardware.

madhat1
  • Isn't ARMv7 an instruction set? In that case it is impossible to answer your question, as you haven't said which specific processor you want the comparison for. Anyway, fixed-point math requires several instructions per operation and is highly unlikely to be faster than floating-point math backed by dedicated hardware instructions. – Pascal Cuoq Aug 17 '14 at 19:02
  • You are right, I meant an ARM Cortex-A9 CPU. So fixed point can only be useful when no hardware floating point exists? – madhat1 Aug 17 '14 at 21:15

1 Answer


Usually, fixed point is much faster than float because:

  • integer instructions take far fewer cycles
  • their latency is much lower
  • no conversions between integer and float are needed

However, if you are dealing with 32-bit source data, which forces 64-bit intermediate math, you may be better served by float, since long (64-bit) integer operations require more cycles, more registers, and more instructions.

It really depends on the source/target data types: when both are integer, fixed point is much better. If not, stick to float.
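To illustrate the cost of the widening (a hedged sketch, not a benchmark): even a single Q16.16 multiply needs a 32×32→64-bit product plus a shift, which on a 32-bit ARM core compiles to a widening multiply (e.g. SMULL) and shift instructions, while the equivalent float multiply is a single VMUL.F32 on the Cortex-A9's VFP/NEON unit.

    #include <cstdint>

    // Q16.16 multiply: widen to 64 bits to keep the full product,
    // then shift out the extra 16 fractional bits.
    static inline int32_t q16_mul(int32_t a, int32_t b)
    {
        return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> 16);
    }

    // The float equivalent is one hardware multiply instruction.
    static inline float float_mul(float a, float b)
    {
        return a * b;
    }

With 32-bit source data the fixed-point values themselves typically have to be stored in 64 bits, so the full product can be up to 128 bits wide and must be synthesized from several integer instructions, which is exactly the case where float wins.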

Jake 'Alquimista' LEE
  • I'm considering converting all instances of _float_ in my code to a fixed-point C++ class (similar to libfixmath) to reduce the run time of an algorithm running on a Cortex-A9. The question is whether someone has experience with this and whether this approach is a waste of time... – madhat1 Aug 18 '14 at 05:56