Disclaimer: fixed-point arithmetic is quite a specialized topic. What follows is my understanding of it, but I may well be wrong on some points, so please correct me if I am.
In Ada, real types are defined by their accuracy, as opposed to most other languages, which define real types by their implementation (i.e. their representation in hardware). The choice to use accuracy requirements instead of representation aspects in the definition of real types is in line with the language philosophy: accuracy, as a concept, is strongly related to correctness, the objective of the language. Defining real types in terms of accuracy is also more natural in the sense that you let the compiler choose the optimal representation given your accuracy requirements (on a computer, all values are approximations anyway, and you have to deal with that fact in one way or another).
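As a brief illustration (the type names below are my own), both kinds of real types are declared by stating an accuracy requirement rather than a machine format; the compiler selects a representation that satisfies it:

type My_Float is digits 12;                          -- at least 12 decimal digits of precision
type My_Fixed is delta 0.01 range -100.0 .. 100.0;   -- absolute error bound of at most 0.01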
The Delta attribute defines a requirement on the absolute error bound (accuracy) associated with the underlying fixed-point type (see also the Ada 83 Rationale, section 5.1.3). The advantage is two-fold:
The programmer specifies the numeric type in terms of requirements and delegates the choice of an optimal hardware representation to the compiler.
The absolute error bound, which is typically used in numerical analysis to analyze and predict the effect of arithmetic operations on accuracy, is stated directly in the type definition. Numerical analysis (accuracy and range analysis) is an important aspect of implementing computational algorithms, in particular when using fixed-point types.
Update 24-10-2020: The paragraphs above should be read in the context of the original language specification, Ada 83. Moreover, the Ada 83 language had a second important objective that seems to have influenced the choice to define numeric real types by their accuracy: the separation principle. See the Ada 83 Rationale, chapter 15, for a clear statement of what this meant. When Ada 95 was developed, however, the separation between logical type properties (like accuracy) and machine representation was reviewed (at least for fixed-point types) and found to be not as useful in practice as was hoped (see the Ada 95 Rationale, section G.4.2). Hence, as of Ada 95, the role of the Delta attribute has been diminished and the Small attribute is used instead in the formulation of how fixed-point types and operations should behave (see, for example, RM G.2.3).
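For completeness, this is roughly how the two aspects can be combined since Ada 95: Delta states the accuracy requirement, while an explicit Small clause pins down the actual representation. The type name and the value 2.0 ** (-6) below are just an arbitrary choice of mine (the specified Small must not exceed the delta):

type Scaled is delta 0.07 range 0.0 .. 10.0;   -- accuracy requirement (Delta)
for Scaled'Small use 2.0 ** (-6);              -- explicit representation: one step is 0.015625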
As an example, consider the program below. It defines a numeric type and specifies that the absolute difference between a "true" value and its underlying representation must not exceed 0.07:
type Fix is delta 0.07 range 0.0 .. 10.0; -- 0.07 is just a random value here
In other words, when a given "true" value is converted to type Fix, it acquires an uncertainty of +/- 0.07. Hence, the three named constants X, Y and Z in the program below become, when converted to type Fix:
X : constant := 5.6; -- Becomes 5.6 +/- 0.07 when converted to type Fix.
Y : constant := 0.3; -- Becomes 0.3 +/- 0.07 when converted to type Fix.
Z : constant := 2.5; -- Becomes 2.5 +/- 0.07 when converted to type Fix.
Given these uncertainties, one can compute the uncertainty of the result of a sequence of arithmetic operations (see also this excellent answer on SO). This is demonstrated in the program.
Update 24-10-2020: In retrospect, this doesn't seem to be correct; there are complications. The uncertainty computations in the program do not take into account the intermediate and final conversions (quantization) that may occur during the computation and the final assignment. Hence, the computed uncertainties are incorrect and too optimistic (i.e. they should be larger). I will not delete the example program, though, as it does provide an intuition for the original intent of the Delta attribute.
Three computations are performed, both with Long_Float (Flt) and with the custom fixed-point type Fix. The result of the Long_Float computation is, of course, also an approximation, but for the sake of demonstration we can assume it to be exact. The result of the fixed-point computation, however, has a (very) limited accuracy, as we specified a rather large error bound for type Fix. On the other hand, the fixed-point values require less space (here: only 8 bits per value) and the arithmetic operations require no specialized floating-point hardware.
The fact that you can tweak the Small attribute just allows programmers to control which model numbers are available in the set defined by the fixed-point type. It might be tempting to always make the Small representation aspect equal to the Delta attribute, but making them equal does not (in general) remove the need to perform some numerical (error) analysis when using fixed-point numbers in arithmetic operations.
Update 24-10-2020: This statement is only partly correct, I think. The Small attribute does allow programmers to control the model numbers (i.e. the numbers that can be represented exactly by the data type), but it is not just that. As of Ada 95, the Small attribute plays a major role in how fixed-point arithmetic is supposed to work (RM G.2.3). Moreover, most documentation on fixed-point arithmetic and most software for analyzing fixed-point algorithms (see, for example, here) assume that the actual hardware representation of the type is known; their treatment of the subject does not start from an absolute error bound, but always from the representation of a fixed-point value.
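To make the model-number aspect concrete, here is a small sketch (the type names are mine): with the default, power-of-two Small, the value 0.07 itself is not a model number, whereas equating Small with Delta makes every multiple of 0.07 exactly representable.

type Fix_Pow2  is delta 0.07 range 0.0 .. 10.0;   -- Small defaults to a power of two <= 0.07 (e.g. 0.0625)
type Fix_Exact is delta 0.07 range 0.0 .. 10.0;
for Fix_Exact'Small use 0.07;                     -- model numbers are now exact multiples of 0.07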
In the end, it's all about trading resources (memory, floating-point hardware) against numerical accuracy.
Update 24-10-2020: This statement also requires a remark: whether fixed-point operations in Ada can be executed without floating-point operations depends on the context. Fixed-point operations, in particular multiplication and division, can be done using only integer operations if the types of the operands and of the result have particular values for Small. It is too much detail to go into here, but some interesting information can be found in the well-documented source code of GNAT itself; see, for example, exp_fixd.adb.
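As a hedged sketch of such a context (the types are mine, and whether integer-only code is actually generated is up to the compiler): when the operand and result types all have power-of-two smalls, a multiplication like the one below can boil down to an integer multiply plus a shift, whereas unrelated smalls may force the compiler to insert extra scaling.

type Volt is delta 2.0 ** (-8) range 0.0 .. 255.0;
type Amp  is delta 2.0 ** (-8) range 0.0 .. 255.0;
type Watt is delta 2.0 ** (-8) range 0.0 .. 65_535.0;

function Power (V : Volt; A : Amp) return Watt is
begin
   return Watt (V * A);  -- rescaling by a power of two: integer multiply plus shift
end Power;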
Update 24-10-2020: In conclusion, given the changes in Ada 95 and the current state of the art in tools for fixed-point analysis, there seems to be no strong argument for choosing different values for Delta and Small. The Delta attribute still represents the absolute error bound, but its value is not as useful as originally thought. Its only major use seems, as you already mentioned, to be in the I/O of fixed-point data types (RM 3.5.10 (5) and Ada.Text_IO.Fixed_IO).
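To illustrate that remaining role in I/O (this sketch is mine): the default number of digits printed after the decimal point by an instance of Ada.Text_IO.Fixed_IO is Fix'Aft, which is derived from Delta, i.e. the number of decimal digits after the point needed to accommodate the delta (RM 3.5.10 (5)).

with Ada.Text_IO;

procedure Show_Aft is
   type Fix is delta 0.07 range 0.0 .. 10.0;
   package Fix_IO is new Ada.Text_IO.Fixed_IO (Fix);
begin
   Ada.Text_IO.Put_Line ("Aft =" & Fix'Aft'Image);   -- 2 decimal digits for a delta of 0.07
   Fix_IO.Put (Fix (2.5));                           -- printed with Aft = 2 digits by default
   Ada.Text_IO.New_Line;
end Show_Aft;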
main.adb
pragma Warnings (Off, "static fixed-point value is not a multiple of Small");
pragma Warnings (Off, "high bound adjusted down by delta (RM 3.5.9(13))");

with Ada.Text_IO; use Ada.Text_IO;

procedure Main is

   type Flt is new Long_Float;
   type Fix is delta 0.07 range 0.0 .. 10.0;

   ---------
   -- Put --
   ---------

   procedure Put (Nominal, Uncertainty : Flt; Result : Fix) is

      package Fix_IO is new Fixed_IO (Fix);
      use Fix_IO;

      package Flt_IO is new Float_IO (Flt);
      use Flt_IO;

   begin
      Put (" Result will be within : ");
      Put (Nominal, Fore => 2, Aft => 4, Exp => 0);
      Put (" +/-");
      Put (Uncertainty, Fore => 2, Aft => 4, Exp => 0);
      New_Line;

      Put (" Actual fixed-point result : ");
      Put (Result, Fore => 2);
      New_Line (2);
   end Put;

   X : constant := 5.6;
   Y : constant := 0.3;
   Z : constant := 2.5;

   D : constant Flt := Fix'Delta;

begin
   Put_Line ("Size of fixed-point type : " & Fix'Size'Image);
   Put_Line ("Small of fixed-point type : " & Fix'Small'Image);
   New_Line;

   --  Update 24-10-2020: Uncertainty computation is too optimistic. It omits
   --  the effect of quantization in intermediate and final variable
   --  assignments.

   Put_Line ("X + Y = ");
   Put (Nominal     => Flt (X) + Flt (Y),
        Uncertainty => D + D,
        Result      => Fix (X) + Fix (Y));

   Put_Line ("X * Y = ");
   Put (Nominal     => Flt (X) * Flt (Y),
        Uncertainty => (D / X + D / Y) * X * Y,
        Result      => Fix (X) * Fix (Y));

   Put_Line ("X * Y + Z = ");
   Put (Nominal     => Flt (X) * Flt (Y) + Flt (Z),
        Uncertainty => (D / X + D / Y) * X * Y + D,
        Result      => Fix (X) * Fix (Y) + Fix (Z));

end Main;
output
Size of fixed-point type : 8
Small of fixed-point type : 6.25000000000000000E-02
X + Y =
Result will be within : 5.9000 +/- 0.1400
Actual fixed-point result : 5.81
X * Y =
Result will be within : 1.6800 +/- 0.4130
Actual fixed-point result : 1.38
X * Y + Z =
Result will be within : 4.1800 +/- 0.4830
Actual fixed-point result : 3.88