
Demonstrator of compiler-selected 'Small

Being new to fixed-point types in Ada, I was surprised to learn that the default value of 'Small is a power of 2 less than or equal to the specified delta. Here is a short snippet to introduce the problem:

with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type foo is delta 0.1 range 0.0..1.0;
   x : foo := foo'delta;
begin
   Put (x'Image);
   while true loop
      x := x + foo'delta;
      Put (x'Image);
   end loop;
end Main;

The output shows that 'Small is indeed the largest power of 2 not exceeding 0.1 (here 0.0625), as some printed values appear twice:

 0.1 0.1 0.2 0.3 0.3 0.4 0.4 0.5 0.6 0.6 0.7 0.8 0.8 0.9 0.9 1.0

raised CONSTRAINT_ERROR : main.adb:9 range check failed

Solution: specify them as the same value

If we really wanted 0.1 as the delta, we could say so:

   real_delta : constant := 0.1;
   type foo is delta real_delta range 0.0..1.0
      with Small => real_delta;
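
For completeness, a quick sketch of the opening program with this aspect applied, and the loop bounded to avoid the range check; every addition now steps by exactly 0.1, so each value should print once:

with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   real_delta : constant := 0.1;
   type foo is delta real_delta range 0.0 .. 1.0
      with Small => real_delta;
   x : foo := foo'Delta;
begin
   Put (x'Image);            --   0.1
   while x < 1.0 loop
      x := x + foo'Delta;
      Put (x'Image);         --   0.2  0.3 ... 1.0, each exactly once
   end loop;
end Main;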

Question: is it ever useful to specify both, but with different values?

If optimization were the only use-case for this difference, it could have been a boolean attribute, or even just a warning "selected delta is not a power of two (suggest 2**-4 instead)". Could there be any reason to specify both as separate values, such as:

   type foo is delta 0.1 range 0.0..1.0
      with Small => 0.07;
   x : foo := 0.4 + 0.4;  -- equals 0.7 rather than 0.8

This seems only to confuse the poor reader who encounters it later. The following example is taken from John Barnes' Programming in Ada 2012, section 17.5 on page 434. He doesn't explain why the delta is a much larger value, and not a multiple of the actual 'Small used.

π : constant := Ada.Numerics.π;
type Angle is delta 0.1 range -4*π .. 4*π;
for Angle'Small use π * 2.0**(-13);

The only difference I see is that 'Image prints just one digit of precision now. Is that the only difference?

Also, why does my compiler reject for foo'Small use foo'Delta?

I encountered code that does the above directly, without the constant:

type foo is delta 0.1 range 0.0..1.0;
for foo'Small use foo'Delta;

but GNAT complains that foo is frozen immediately after its declaration:

main.adb:6:04: representation item appears too late

Was that changed in some version of Ada? Should it be valid Ada 2012?

TamaMcGlinn
  • The representation clause is rejected because you request the compiler to "assign" the value `foo'Delta` to `foo'Small`. As a result, the compiler will freeze the type `foo` (incl. the type's small) in order to compute (fixate) the value of `foo'Delta`, but once the value `foo'Delta` has been determined, assignment is no longer possible as `foo` has been frozen. – DeeDee Oct 22 '20 at 14:24
  • I can think of no reason to have different small and delta. Tiresome to need to specify both, and the Barnes example is weird – Simon Wright Oct 22 '20 at 16:42
  • @SimonWright the reason is to prevent underflow, more details below if you are interested. – Raffles Sep 15 '22 at 07:28
  • @SimonWright I.e. you shouldn't normally specify both as the same; you should specify 'Small to use all the available bits, as John Barnes has done. I hope this helps. Thanks – Raffles Sep 15 '22 at 07:36
  • Interesting that your failing rep clause code is the same as that given by Barnes on the page before Raffles’ reference below. I’d declare a constant `Foo_Delta := 0.1;` (0.1 is a ridiculously large value, ofc) – Simon Wright Sep 16 '22 at 15:56

2 Answers


Disclaimer: fixed-point arithmetic is quite a special topic. This is what I understand of it, but I must issue a warning here: I might be incorrect in what I write below. So for everyone who reads it: please correct me if I'm wrong.

In Ada, real types are defined by their accuracy. This is opposed to most other languages, which define real types by their implementation (i.e. their representation in hardware). The choice to use accuracy properties instead of representation aspects in the definition of real types is in line with the language philosophy: accuracy, as a concept, is strongly related to correctness, the objective of the language. Defining real types in terms of accuracy is also more natural in the sense that you let the compiler choose the optimal representation given your requirements on accuracy (on a computer, all values are approximations anyway, and you have to deal with that fact one way or another).

The Delta attribute defines a requirement on the absolute error bound (accuracy) associated with the underlying fixed-point type (see also Ada 83 Rationale, section 5.1.3). The advantage is two-fold:

  • The programmer specifies the numeric type using requirements, and delegates the optimal choice of representation on the hardware to the compiler.

  • The absolute error bound, typically used in numerical analysis to analyze and predict the effect of arithmetic operations on accuracy, is directly stated in the type definition. Numerical analysis (accuracy and range analysis) is an important aspect when it comes to implementing computational algorithms, in particular when using fixed-point types.

Update 24-10-2020: The preceding paragraphs should be read in the context of the original language specification, Ada 83. Moreover, the Ada 83 language had a second important objective that seems to have influenced the choice to define numeric real types using accuracy: the separation principle. See the Ada 83 Rationale, chapter 15, for a clear statement of what this meant. When Ada 95 was developed, however, the separation between logical type properties (like accuracy) and machine representation was (at least for fixed-point types) reviewed and found to be not as useful in practice as had been hoped (see the Ada 95 Rationale, section G.4.2). Hence, as of Ada 95, the role of the Delta attribute has been diminished and the Small attribute is used instead in the formulation of how fixed-point types and operations should work (see, for example, RM G.2.3).

As an example, consider the program below. It defines a numeric type and specifies that the absolute difference between a "true" value and the underlying representation must not exceed 0.07:

type Fix is delta 0.07 range 0.0 .. 10.0;     --  0.07 is just a random value here

In other words, when a given "true" value is converted to type Fix, it obtains an uncertainty of +/- 0.07. Hence, the three named constants X, Y and Z in the program below, when converted to type Fix, become:

X : constant := 5.6;       --  Becomes 5.6 +/- 0.07 when converted to type Fix.
Y : constant := 0.3;       --  Becomes 0.3 +/- 0.07 when converted to type Fix.
Z : constant := 2.5;       --  Becomes 2.5 +/- 0.07 when converted to type Fix.

Given these uncertainties, one can compute the uncertainty of the result of some sequence of arithmetic operations (see also this excellent answer on SO). This is actually demonstrated in the program.

Update 24-10-2020: In retrospect, this doesn't seem to be correct and there are complications. The computations of uncertainty in the program do not take into account the intermediate and final conversions (quantization) of numbers that may occur during the computation and the final assignment. Hence, the computed uncertainties are not correct and are too optimistic (i.e. they should be larger). I will not delete the example program, though, as it does provide an intuition for the original intent of the Delta attribute.

Three computations are made, using both Long_Float (Flt) and the custom fixed-point type Fix. The result of the computation using Long_Float is, of course, also an approximation, but for the sake of demonstration we can assume it is exact. The result of the fixed-point computation, however, has a (very) limited accuracy, as we specified a rather large error bound for type Fix. On the other hand, the fixed-point values require less space (here: only 8 bits per value) and the arithmetic operations require no specialized floating-point hardware.

The fact that you can tweak the Small attribute is just to allow programmers to control the model numbers available in the set defined by the fixed-point type. It might be tempting to always make the Small representation aspect equal to the Delta attribute, but making them equal does not (in general) remove the need to perform some numerical (error) analysis when using fixed-point numbers together with arithmetic operations.

Update 24-10-2020: This statement is only partly correct, I think. The Small attribute does allow programmers to control the model numbers (i.e. the numbers that can be represented exactly by the data type), but it's not just that. As of Ada 95, the Small attribute plays a major role in how fixed-point arithmetic is supposed to work (RM G.2.3) and, moreover, most documentation on fixed-point arithmetic and software for analyzing fixed-point algorithms (see, for example, here) assumes that the actual representation of the type in hardware is known; their treatment of the subject does not start from an absolute error bound, but always starts from the representation of a fixed-point value.

In the end, it's all about trading resources (memory, floating-point hardware) with numerical accuracy.

Update 24-10-2020: This statement also requires a remark: not requiring floating-point operations for executing fixed-point operations in Ada depends on the context. Fixed-point operations, in particular multiplication and division, can be done using only integer operations if the types of the operands and of the result of the operation have particular values for Small. It's too much detail to put here, but some interesting information can be found in the well-documented source code of GNAT itself; see, for example, exp_fixd.adb.

Update 24-10-2020: So, in conclusion, given the changes in Ada 95 and given the current state of the art in tools for performing fixed-point analysis, there seems to be no strong argument for choosing the values of Delta and Small differently. The Delta attribute still represents the absolute error bound, but its value is not as useful as originally thought. Its only major use seems, as you already mentioned, to be in the I/O of fixed-point data types (RM 3.5.10 (5) and Ada.Text_IO.Fixed_IO).

main.adb

pragma Warnings (Off, "static fixed-point value is not a multiple of Small");
pragma Warnings (Off, "high bound adjusted down by delta (RM 3.5.9(13))");


with Ada.Text_IO; use Ada.Text_IO;

procedure Main is

   type Flt is new Long_Float;
   type Fix is delta 0.07 range 0.0 .. 10.0;

   ---------
   -- Put --
   ---------

   procedure Put (Nominal, Uncertainty : Flt; Result : Fix) is

      package Fix_IO is new Fixed_IO (Fix);
      use Fix_IO;

      package Flt_IO is new Float_IO (Flt);
      use Flt_IO;

   begin
      Put ("   Result will be within     : ");
      Put (Nominal, Fore => 2, Aft => 4, Exp => 0);
      Put (" +/-");
      Put (Uncertainty, Fore => 2, Aft => 4, Exp => 0);
      New_Line;

      Put ("   Actual fixed-point result : ");
      Put (Result, Fore => 2);
      New_Line (2);

   end Put;

   X : constant := 5.6;
   Y : constant := 0.3;
   Z : constant := 2.5;

   D : constant Flt := Fix'Delta;

begin

   Put_Line ("Size  of fixed-point type : " & Fix'Size'Image);
   Put_Line ("Small of fixed-point type : " & Fix'Small'Image);
   New_Line;

   --  Update 24-10-2020: Uncertainty computation is too optimistic. It omits
   --                     the effect of quantization in intermediate and final
   --                     variable assignments.

   Put_Line ("X + Y = ");
   Put (Nominal     => Flt (X) + Flt (Y),
        Uncertainty => D + D,
        Result      => Fix (X) + Fix (Y));

   Put_Line ("X * Y = ");
   Put (Nominal     => Flt (X) * Flt (Y),
        Uncertainty => (D / X + D / Y) * X * Y,
        Result      => Fix (X) * Fix (Y));

   Put_Line ("X * Y + Z = ");
   Put (Nominal     => Flt (X) * Flt (Y) + Flt (Z),
        Uncertainty => (D / X + D / Y) * X * Y + D,
        Result      => Fix (X) * Fix (Y) + Fix (Z));

end Main;

output

Size  of fixed-point type :  8
Small of fixed-point type :  6.25000000000000000E-02

X + Y = 
   Result will be within     :  5.9000 +/- 0.1400
   Actual fixed-point result :  5.81

X * Y = 
   Result will be within     :  1.6800 +/- 0.4130
   Actual fixed-point result :  1.38

X * Y + Z = 
   Result will be within     :  4.1800 +/- 0.4830
   Actual fixed-point result :  3.88
DeeDee
  • but isn't the error actually bounded by 'Small, rather than by 'Delta? Substituting that in your code gives a smaller bound but correct, although I can't prove there are no failure cases. Part of my question was: is there any difference aside from the representation that `Put ()` gives by default, when specifying both rather than only Delta? – TamaMcGlinn Oct 23 '20 at 13:14
  • I thank you and applaud your efforts, and while this answer contains a lot of useful explanation about error bounds, it doesn't answer my question. The answer would be a list of use-cases for either specifying delta, small, or both. – TamaMcGlinn Oct 23 '20 at 13:18
  • You mention floating point hardware, but is that correct? I would think specifying a non-power-of-2 'Small would cause extra integer multiplication instructions for scaling, but never floating point instructions. – TamaMcGlinn Oct 23 '20 at 13:20
  • @TamaMcGlinn So I admit: with this answer I went down a rabbit-hole. The subject of fixed-point arithmetic in Ada is quite involved given its history. I updated my answer with additional comments and some sort of conclusion of which I hope somewhat answers your question. – DeeDee Oct 24 '20 at 15:32
  • @TamaMcGlinn Yes, the actual error bound is `Small`, but the original intent was that the programmer would specify the *required* absolute error bound using `Delta` based on some one-time analysis. The compiler would then choose a value for `Small` (and hence an actual absolute error bound) given the available hardware. Separating the requirement from the actual representation would allow for some sort of portability of algorithms regarding numerical accuracy. However, as I stated in the updated answer, this did not work as well as expected. – DeeDee Oct 24 '20 at 15:32
  • @TamaMcGlinn By floating-point hardware (I guess you refer to the last sentence) I refer to the trade-off of using floating-point arithmetic and the associated hardware to perform a computation and achieve typically good accuracy, versus not using it and using fixed-point arithmetic instead, resulting in typically lesser accuracy. Fixed-point operations can be implemented using only integer operations, but this might not always be the case due to the flexibility of the fixed-point types in Ada (see updated answer). – DeeDee Oct 24 '20 at 15:33

Yes. To prevent underflow and/or to avoid wasting bits.

Consider if you multiply 0.2 by 0.2 and then at some later point divide by 0.1.

The correct answer is 0.4. However, if your 'Small is the same as your 'Delta (i.e. 0.1), when 0.2 is squared, the true value 0.04 will underflow and so will be calculated as zero. When you then divide by 0.1 you get zero as the answer, instead of 0.4.

If, on the other hand, you have specified your 'Small as 0.01, the answer will be calculated correctly. The first part of the calculation will be reported as zero if you try to access it, because 0.04 is closest to zero and that is the correct representation to the nearest 0.1; however, when that value is then divided by 0.1, the correct value emerges: 0.4.
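
A minimal sketch of this scenario (the type names Coarse and Fine and the small example program are mine; the exact rounding of the lost intermediate is compiler-dependent, but with GNAT the coarse intermediate collapses to zero):

with Ada.Text_IO; use Ada.Text_IO;

procedure Underflow_Demo is
   --  Both types promise the same accuracy (delta 0.1); they differ only
   --  in the precision that is actually stored ('Small).
   type Coarse is delta 0.1 range 0.0 .. 1.0 with Small => 0.1;
   type Fine   is delta 0.1 range 0.0 .. 1.0 with Small => 0.01;

   --  0.2 * 0.2 = 0.04 is not a multiple of 0.1, so converting the exact
   --  product to Coarse loses it; Fine can hold 0.04 exactly.
   CP : constant Coarse := Coarse (Coarse'(0.2) * Coarse'(0.2));
   FP : constant Fine   := Fine   (Fine'(0.2)   * Fine'(0.2));

   --  Dividing the stored intermediate by 0.1.
   CR : constant Coarse := Coarse (CP / Coarse'(0.1));
   FR : constant Fine   := Fine   (FP / Fine'(0.1));
begin
   Put_Line ("Coarse result:" & CR'Image);   --  0.0 : the 0.04 was lost
   Put_Line ("Fine result  :" & FR'Image);   --  0.4 : the expected answer
end Underflow_Demo;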

Consider: if you have a 16-bit value and the range is as specified in your example, range 0.0..1.0, there are only 11 possible values. These can be represented in 4 bits, which leaves the other 12 bits completely wasted. Why not use them to hold extra precision, so that in the event there is any arithmetic, the answer will be accurate? I have noticed that a lot of engineers battle with themselves over these kinds of things and find it difficult to specify the extra precision unless they can see a reason why a calculation might be needed. However, I think that's the wrong question to ask. A better question is: are you going to waste the other bits? It doesn't cost anything to make use of them, you future-proof yourself against any unforeseen calculations that may occur on the type, and you avoid a bug.

Here's how the bug happens. Engineer A thinks "I can't see any reason why this measurement would ever be involved in a calculation; it just gets logged and reported, so I'll set 'Small to 'Delta". Unfortunately, Engineer A is only thinking about the current project. Five years later Engineer A has left the company, Engineer B is asked to add a feature / re-use the existing code on a new project / turn it into a product, and Engineer C then ends up having to do some arithmetic on it... not realising that this 16-bit value actually only has 4 bits of precision. Bang. Mr Barnes is clearly well aware of these kinds of issues!
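
To make the wasted-bits point concrete, here is a small sketch (the type names are mine, and the reported sizes are what GNAT typically chooses for these declarations):

with Ada.Text_IO; use Ada.Text_IO;

procedure Size_Demo is
   --  Same delta and range; only 'Small differs.
   type Wasteful is delta 0.1 range 0.0 .. 1.0 with Small => 0.1;
   type Thrifty  is delta 0.1 range 0.0 .. 1.0 with Small => 2.0 ** (-14);
begin
   --  11 values (0.0 .. 1.0 in steps of 0.1) need only 4 bits.
   Put_Line ("Wasteful'Size =" & Wasteful'Size'Image);
   --  16_385 values (steps of 2**(-14)) still fit a 16-bit field.
   Put_Line ("Thrifty'Size  =" & Thrifty'Size'Image);
end Size_Demo;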

One final point, going back to your initial investigation: well done. Yes, that's the way to do it. In fact, because of these types of issues (i.e. the very non-intuitive default behaviour), Ada also has decimal fixed-point types, so you can write e.g. type Foo2 is delta 0.1 digits 2; which specifies a fixed-point type with an actual delta of 0.1 (not a binary fraction smaller than it), and so behaves much more intuitively. Specifying 2 digits gives a range of -9.9 to +9.9, specifying digits 1 gives -0.9 to +0.9, etc. The delta can be any power of ten.
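
A quick sketch of the loop from your question using such a type (the type name Foo2 comes from the declaration above; object'Image assumes GNAT, as in your original snippet):

with Ada.Text_IO; use Ada.Text_IO;

procedure Decimal_Demo is
   type Foo2 is delta 0.1 digits 2;   --  decimal fixed point: 'Small = 'Delta = 0.1
   X : Foo2 := Foo2'Delta;
begin
   while X < 1.0 loop
      Put (X'Image);                  --  prints 0.1, 0.2, ..., 0.9, each exactly once
      X := X + Foo2'Delta;
   end loop;
end Decimal_Demo;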

I hope this is useful.

Raffles
  • engineer D might also come along later and try to sum up 100 values and then divide the total - which would require the opposite use of the extra bits, as it seems to me – TamaMcGlinn Sep 15 '22 at 14:19
  • so, does `π * 2.0**(-13)` in Barnes' example mean 'maximum precision for 32 bits' or something similar? How did you work that out? – TamaMcGlinn Sep 15 '22 at 14:34
  • Hi @TamaMcGlinn regarding your first point, this is specified with the type's range, not the 'small, so I didn't mention it, but you are absolutely correct. Usually you have set the range to be what it is for good reasons. But if not - yes, you should definitely consider what the best use of the bits is in BOTH directions. Bear in mind though that you will lose the safety of the compile time checks and runtime constraint error on your original bounds. Ada has no equivalent protection against underflow, so you don't lose anything if you just extend the precision rather than the range. – Raffles Sep 16 '22 at 07:33
  • @TamaMcGlinn regarding the Barnes example - firstly, I think perhaps Mr Barnes has taken a leaf out of your book - the obvious range would be +/- 2pi, but +/-4pi allows for addition. I didn't read it too closely but it looked to me like a range of 8, i.e. 3 bits needed above the binary point, and 13 bits below the binary point - 16 bits in total, which is of course the word size in many commonly used embedded processors and microcontrollers, the kind of platform for which Ada is often used. However I am just reading between the lines here - you would have to ask Mr Barnes to be sure. – Raffles Sep 16 '22 at 07:50
  • The floating-point code in Barnes p.237 had range 0.0 .. 2.0 * pi, and I think the example on p.434 misremembered this as -2.0 * pi .. 2.0 * pi. Using fixed-point (well, in any circumstances really) I would initially specify the range as -pi .. pi and then realise the addition problem and double it; this comes from a low-level hardware-related viewpoint. I now realise that when I said small should equal delta, what I meant was that delta should equal small - the accuracy of the measurement will no doubt be less than the precision, but can the type system be used to handle this issue? – Simon Wright Sep 16 '22 at 16:14
  • @Raffles Although Ada does not have protection against underflow, SPARK does. Even without modifying the Ada code (to add contracts, for example), running gnatprove will tell you of any possible underflow. – TamaMcGlinn Sep 20 '22 at 07:17