I think you don't know enough about floating point to know that your requirements don't make sense. But let's take it at face value and see what's possible.
> I don't wanna use rounding technique, I need to save in a memory exactly floating point for example 1.623
If you never want any rounding, you can't use floating point. You could use something like arbitrary-precision rational numbers, as long as you don't need any transcendental functions (sin/cos/tan, log/exp, or whatever). i.e. represent your numbers as an infinite-precision numerator and infinite-precision denominator.
But that uses an unbounded amount of storage for each number; you can normalize by cancelling common factors after operations, but after many operations there's no limit to how large it grows. This is probably not what you want for PIC.
But maybe you're ok with rounding the result of 10. / 3. (an infinitely repeating fraction, 3.33333...) to something that can be stored in a fixed 4 bytes.
If you don't want any rounding for decimal constants with only a few significant figures like 1.623, you could use decimal floating point, where the exponent field represents a power of 10, so you can represent short decimal fractions like 1.623 exactly. Since PIC doesn't have hardware FP, you'd have to implement it in software anyway.
The mantissa can be stored as a binary integer (or BCD, or whatever you like), but with limited range: for a P-digit format it's only allowed to be in the range [0, 10^P - 1].
IEEE 754-2008 does define some standard decimal-FP formats like decimal32; some CPUs, like IBM POWER6 and later, actually implement it in hardware as well as normal binary floating point.
Normal binary floating point:
Mainstream CPUs with FPUs (and most software FP implementations) typically use IEEE binary floating point, where numbers are represented as m * 2^e (where m is the mantissa, aka significand, in the range [1.0, 2.0), represented with an implicit leading 1). See https://en.wikipedia.org/wiki/Single-precision_floating-point_format. This format can't exactly represent 1.623. Try it for example on an online IEEE 754 converter.
The closest representable single-precision float is 1.62300002574920654296875, represented by the bit-pattern 0x3fcfbe77. That's what a C compiler will store in memory when you compile float foo = 1.623; (assuming the C implementation uses IEEE binary32 for its float; this is not required by ISO C11).
Fixed point:
You may not want floating point at all: without a hardware FPU, fixed-point is often a more efficient way to represent fractional numbers.
A binary fixed-point system can't represent 1.623 exactly either, but a decimal-based one can: for example, let the value represented by the signed 2's complement integer i be i / 10000. A 16-bit integer then represents values from -3.2768 to +3.2767 with that format.