
My program has a method that receives floats, and in this method I want to multiply or add them. But not simply with `a * b`: I want to break each float down into its components, i.e. the sign bit, the 8 exponent bits, and the remaining bits as the mantissa.

I want to implement / emulate software floating-point add and multiply (to learn more about what FP hardware has to do).


In the head of the program there are the breakdowns:

    #define  SIGN(x) ((x) >> 31)
    #define  MANT(x) ((x) & 0x7FFFFF)
    #define  EXPO(x) (((x) >> 23) & 0xFF)

    #define SPLIT(x, s, m, e) do {  \
    s = SIGN(x);                    \
    m = MANT(x);                    \
    e = EXPO(x);                    \
    if ( e != 0x00 && e != 0xFF ) { \
      m |= 0x800000;                \
    }                               \
    } while ( 0 )  

    #define BUILD(x, s, m, e) do {               \
        x = (s << 31) | (e<<23) | (m&0x7FFFFF);  \
    } while ( 0 )

The main looks as follows:

    float f = 2.3; 
    float g = 1.8; 
    float h = foo(&f, &g);

And the method for the calculation looks like:

    float foo(float *a, float *b) {
        uint32_t ia = *(unsigned int *)a;
        uint32_t ib = *(unsigned int *)b;
        uint32_t result = 0;
        uint32_t signa, signb, signr;
        uint32_t manta, mantb, mantr;
        uint32_t expoa, expob, expor;
        SPLIT(ia, signa, manta, expoa);
        SPLIT(ib, signb, mantb, expob);

I already tried the multiply by adding the exponents and multiplying their mantissas as follows:

    expor = (expoa - 127) + (expob - 127) + 127;
    mantr = manta * mantb;
    signr = signa ^ signb;

The return and rebuild of the new float:

    BUILD(result, signr, mantr, expor);
    return *(float *)&result;

The problem is that the result is wrong: `mantr` even takes on a large negative value (if `foo` gets 1.5 and 2.4, `mantr` becomes -838860800 and the result is 2.0000000).

Peter Cordes
Chickenman
    `return *(float *)&result;` violates strict aliasing. It's only safe on MSVC. Use a union or memcpy for type punning in C99 / C11. – Peter Cordes Apr 06 '19 at 17:00
  • You overlooked the implied bit 24 of the significand, and, normalisation. – Weather Vane Apr 06 '19 at 17:04
  • @WeatherVane: and **rounding**. People tend to forget that, and it is not that simple either. – Rudy Velthuis Apr 06 '19 at 18:30
  • Voted to close as too broad. If you want to know how floating point multiplication is performed, then buy a book on the topic. It's far from trivial. – Bathsheba Apr 06 '19 at 18:39
    @Batsheba: what makes it far from trivial are the special numbers. Normals are actually quite easy and more or less as described in the answer by Peter Cordes (except the rounding). This can be quite instructive, IMO. – Rudy Velthuis Apr 06 '19 at 18:41
  • In addition to Peter Cordes' and njuffa's answers the [Handbook of Floating-Point Arithmetic](https://www.springer.com/us/book/9780817647056) by Muller et al. is also quite interesting. See chapter 10 for: Software Implementation of Floating-Point Arithmetic. – wim Apr 08 '19 at 13:23

3 Answers


You can't just truncate the result of the mantissa multiply; you need to take the top 24 bits (after using the low half for rounding) and renormalize (i.e. adjust the exponent).

Floating point operations keep the top significand bits. The most significant part of the integer product is the high bits; the low bits are further places after the decimal. (Terminology: it's a "binary point", not "decimal point", because binary floats use radix 2 (binary), not 10 (decimal).)


For normalized inputs, the implicit leading 1 in the input significands means the 32x32 => 64-bit uint64_t product that you use to implement 24 x 24 => 48-bit mantissa multiplication will have its high bit in one of 2 possible locations, so you don't need a bit-scan to find it. A compare or single-bit-test will do.

For subnormal inputs, that's not guaranteed so you need to check where the MSB is, e.g. with GNU C __builtin_clzll. (There are many special cases to handle for one or both inputs being subnormal, and/or the output being subnormal.)

See https://en.wikipedia.org/wiki/Single-precision_floating-point_format for more about the IEEE-754 binary32 format, including the implied leading 1 of the significand.

And see @njuffa's answer for an actual tested + working implementation that does 64-bit operations as two 32-bit halves for some reason, instead of letting C do that efficiently.


Also, return *(float *)&result; violates strict aliasing. It's only safe on MSVC. Use a union or memcpy for type punning in C99 / C11.

Peter Cordes

Emulating the multiplication of two IEEE-754 (2008) binary32 operands is a bit more complex than the question suggests. In general, we have to distinguish the following operand classes: zeros, subnormals (0 < |x| < 2^-126), normals (2^-126 ≤ |x| < 2^128), infinities, and NaNs. Normals use biased exponents in [1, 254], while the special operand classes use biased exponents in {0, 255}. The following assumes we want to implement floating-point multiply with all floating-point exceptions masked, using the round-to-nearest-or-even rounding mode.

First, we check whether any of the arguments belongs to a special operand class. If so, we check the special cases in sequence. If one of the arguments is a NaN, we turn that NaN into a QNaN and return it. If one of the operands is zero, we return an appropriately signed zero, unless the other argument is an infinity, in which case we return a special QNaN INDEFINITE since this is an invalid operation. After that we check for any argument of infinity, returning an appropriately signed infinity. This leaves subnormals, which we normalize. In case there are two subnormal arguments, we only need to normalize one of them as the result will underflow to zero.

The multiplication of normals proceeds as the asker envisioned in the question. The sign of the result is the exclusive-OR of the signs of the arguments, the exponent of the result is the sum of the exponents of the arguments (adjusted for exponent bias), and the significand of the result is generated from the product of the significands of the arguments. We need the full product for rounding. We can either use a 64-bit type for that, or represent it as a pair of 32-bit numbers. In the code below I have chosen the latter representation. Rounding to nearest-or-even is straightforward: if we have a tie case (the result is exactly in the middle between the two closest binary32 numbers), we need to round up if the least significant bit of the mantissa is 1. Otherwise, we need to round up if the most significant discarded bit (the round bit) is 1.

Three cases need to be considered for the result, based on the result exponent prior to rounding: the exponent is in the normal range, the result overflows (too large in magnitude), or it underflows (too small in magnitude). In the first case, the result is a normal, or an infinity if overflow occurs during rounding. In the second case, the result is infinity. In the last case, the result is either zero (severe underflow), a subnormal, or the smallest normal (if round-up occurs).

The following code, with a simple framework for light testing via gobs of random test cases and several thousand interesting patterns shows an exemplary ISO-C implementation written in a couple of hours for reasonable clarity and reasonable performance. I let the test framework run for an hour or so on an x64 platform and no errors were reported. If you plan to use the code in production, you would want to construct a more stringent test framework, and may need additional performance tuning.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <limits.h>

#define FLOAT_MANT_BITS    (23)
#define FLOAT_EXPO_BITS    (8)
#define FLOAT_EXPO_BIAS    (127)
#define FLOAT_MANT_MASK    (~((~0u) << (FLOAT_MANT_BITS+1))) /* incl. integer bit */
#define EXPO_ADJUST        (1)   /* adjustment for performance reasons */
#define MIN_NORM_EXPO      (1)   /* minimum biased exponent of normals */
#define MAX_NORM_EXPO      (254) /* maximum biased exponent of normals */
#define INF_EXPO           (255) /* biased exponent of infinities */
#define EXPO_MASK          (~((~0u) << FLOAT_EXPO_BITS))
#define FLOAT_SIGN_MASK    (0x80000000u)
#define FLOAT_IMPLICIT_BIT (1 << FLOAT_MANT_BITS)
#define RND_BIT_SHIFT      (31)
#define RND_BIT_MASK       (1u << RND_BIT_SHIFT)
#define FLOAT_INFINITY     (0x7f800000)
#define FLOAT_INDEFINITE   (0xffc00000u)
#define MANT_LSB           (0x00000001)
#define FLOAT_QNAN_BIT     (0x00400000)
#define MAX_SHIFT          (FLOAT_MANT_BITS + 2)

uint32_t fp32_mul_core (uint32_t a, uint32_t b)
{
    uint64_t prod;
    uint32_t expoa, expob, manta, mantb, shift;
    uint32_t r, signr, expor, mantr_hi, mantr_lo;

    /* split arguments into sign, exponent, significand */
    expoa = ((a >> FLOAT_MANT_BITS) & EXPO_MASK) - EXPO_ADJUST;
    expob = ((b >> FLOAT_MANT_BITS) & EXPO_MASK) - EXPO_ADJUST;
    manta = (a | FLOAT_IMPLICIT_BIT) & FLOAT_MANT_MASK;
    mantb = (b | FLOAT_IMPLICIT_BIT) & FLOAT_MANT_MASK;
    /* result sign bit: XOR of argument signs */
    signr = (a ^ b) & FLOAT_SIGN_MASK;
    if ((expoa >= (MAX_NORM_EXPO - EXPO_ADJUST)) || /* at least one argument is special */
        (expob >= (MAX_NORM_EXPO - EXPO_ADJUST))) { 
        if ((a & ~FLOAT_SIGN_MASK) > FLOAT_INFINITY) { /* a is NaN */
            /* return quietened NaN */
            return a | FLOAT_QNAN_BIT;
        }
        if ((b & ~FLOAT_SIGN_MASK) > FLOAT_INFINITY) { /* b is NaN */
            /* return quietened NaN */
            return b | FLOAT_QNAN_BIT;
        }
        if ((a & ~FLOAT_SIGN_MASK) == 0) { /* a is zero */
            /* return NaN if b is infinity, else zero */
            return (expob != (INF_EXPO - EXPO_ADJUST)) ? signr : FLOAT_INDEFINITE;
        }
        if ((b & ~FLOAT_SIGN_MASK) == 0) { /* b is zero */
            /* return NaN if a is infinity, else zero */
            return (expoa != (INF_EXPO - EXPO_ADJUST)) ? signr : FLOAT_INDEFINITE;
        }
        if (((a & ~FLOAT_SIGN_MASK) == FLOAT_INFINITY) || /* a or b infinity */
            ((b & ~FLOAT_SIGN_MASK) == FLOAT_INFINITY)) {
            return signr | FLOAT_INFINITY;
        }
        if ((int32_t)expoa < (MIN_NORM_EXPO - EXPO_ADJUST)) { /* a is subnormal */
            /* normalize significand of a */
            manta = a & FLOAT_MANT_MASK;
            expoa++;
            do {
                manta = 2 * manta;
                expoa--;
            } while (manta < FLOAT_IMPLICIT_BIT);
        } else if ((int32_t)expob < (MIN_NORM_EXPO - EXPO_ADJUST)) { /* b is subnormal */
            /* normalize significand of b */
            mantb = b & FLOAT_MANT_MASK;
            expob++;
            do {
                mantb = 2 * mantb;
                expob--;
            } while (mantb < FLOAT_IMPLICIT_BIT);
        }
    }
    /* result exponent: add argument exponents and adjust for biasing */
    expor = expoa + expob - FLOAT_EXPO_BIAS + 2 * EXPO_ADJUST;
    mantb = mantb << FLOAT_EXPO_BITS; /* preshift to align result significand */
    /* result significand: multiply argument significands */
    prod = (uint64_t)manta * mantb;
    mantr_hi = (uint32_t)(prod >> 32);
    mantr_lo = (uint32_t)(prod >>  0);
    /* normalize significand */
    if (mantr_hi < FLOAT_IMPLICIT_BIT) {
        mantr_hi = (mantr_hi << 1) | (mantr_lo >> (32 - 1));
        mantr_lo = (mantr_lo << 1);
        expor--;
    }
    if (expor <= (MAX_NORM_EXPO - EXPO_ADJUST)) { /* normal, may overflow to infinity during rounding */
        /* combine biased exponent, sign and significand */
        r = (expor << FLOAT_MANT_BITS) + signr + mantr_hi;
        /* round result to nearest or even; overflow to infinity possible */
        r = r + ((mantr_lo == RND_BIT_MASK) ? (mantr_hi & MANT_LSB) : (mantr_lo >> RND_BIT_SHIFT));
    } else if ((int32_t)expor > (MAX_NORM_EXPO - EXPO_ADJUST)) { /* overflow */
        /* return infinity */
        r = signr | FLOAT_INFINITY;
    } else { /* underflow */
        /* return zero, subnormal, or smallest normal */
        shift = 0 - expor;
        if (shift > MAX_SHIFT) shift = MAX_SHIFT;
        /* denormalize significand */
        mantr_lo = mantr_hi << (32 - shift) | (mantr_lo ? 1 : 0);
        mantr_hi = mantr_hi >> shift;
        /* combine sign and significand; biased exponent known to be zero */
        r = mantr_hi + signr;
        /* round result to nearest or even */
        r = r + ((mantr_lo == RND_BIT_MASK) ? (mantr_hi & MANT_LSB) : (mantr_lo >> RND_BIT_SHIFT));
    }
    return r;
}

uint32_t float_as_uint (float a)
{
    uint32_t r;
    memcpy (&r, &a, sizeof r);
    return r;
}

float uint_as_float (uint32_t a)
{
    float r;
    memcpy (&r, &a, sizeof r);
    return r;
}

float fp32_mul (float a, float b)
{
    return uint_as_float (fp32_mul_core (float_as_uint (a), float_as_uint (b)));
}

/* Fixes via: Greg Rose, KISS: A Bit Too Simple. http://eprint.iacr.org/2011/007 */
static unsigned int z=362436069,w=521288629,jsr=362436069,jcong=123456789;
#define znew (z=36969*(z&0xffff)+(z>>16))
#define wnew (w=18000*(w&0xffff)+(w>>16))
#define MWC  ((znew<<16)+wnew)
#define SHR3 (jsr^=(jsr<<13),jsr^=(jsr>>17),jsr^=(jsr<<5)) /* 2^32-1 */
#define CONG (jcong=69069*jcong+13579)                     /* 2^32 */
#define KISS ((MWC^CONG)+SHR3)

#define ISNAN(x) ((float_as_uint (x) << 1) > 0xff000000)
#define QNAN(x)  (x | FLOAT_QNAN_BIT)

#define PURELY_RANDOM  (0)
#define PATTERN_BASED  (1)

#define TEST_MODE      (PURELY_RANDOM)

uint32_t v[8192];

int main (void)
{
    unsigned long long count = 0;
    float a, b, res, ref;
    uint32_t i, j, patterns, idx = 0, nbrBits = sizeof (uint32_t) * CHAR_BIT;

    /* pattern class 1: 2**i */
    for (i = 0; i < nbrBits; i++) {
        v [idx] = ((uint32_t)1 << i);
        idx++;
    }
    /* pattern class 2: 2**i-1 */
    for (i = 0; i < nbrBits; i++) {
        v [idx] = (((uint32_t)1 << i) - 1);
        idx++;
    }
    /* pattern class 3: 2**i+1 */
    for (i = 0; i < nbrBits; i++) {
        v [idx] = (((uint32_t)1 << i) + 1);
        idx++;
    }
    /* pattern class 4: 2**i + 2**j */
    for (i = 0; i < nbrBits; i++) {
        for (j = 0; j < nbrBits; j++) {
            v [idx] = (((uint32_t)1 << i) + ((uint32_t)1 << j));
            idx++;
        }
    }
    /* pattern class 5: 2**i - 2**j */
    for (i = 0; i < nbrBits; i++) {
        for (j = 0; j < nbrBits; j++) {
            v [idx] = (((uint32_t)1 << i) - ((uint32_t)1 << j));
            idx++;
        }
    }
    /* pattern class 6: MAX_UINT/(2**i+1) rep. blocks of i zeros and i ones */
    for (i = 0; i < nbrBits; i++) {
        v [idx] = ((~(uint32_t)0) / (((uint32_t)1 << i) + 1));
        idx++;
    }
    patterns = idx;
    /* pattern class 7: one's complement of pattern classes 1 through 6 */
    for (i = 0; i < patterns; i++) {
        v [idx] = ~v [i];
        idx++;
    }
    /* pattern class 8: two's complement of pattern classes 1 through 6 */
    for (i = 0; i < patterns; i++) {
        v [idx] = ~v [i] + 1;
        idx++;
    }
    patterns = idx;

#if TEST_MODE == PURELY_RANDOM
    printf ("using purely random test vectors\n");
#elif TEST_MODE == PATTERN_BASED
    printf ("using pattern-based test vectors\n");
    printf ("#patterns = %u\n", patterns);
#endif // TEST_MODE

    do {
#if TEST_MODE == PURELY_RANDOM
        a = uint_as_float (KISS);
        b = uint_as_float (KISS);
#elif TEST_MODE == PATTERN_BASED
        i = KISS % patterns;
        j = KISS % patterns;
        a = uint_as_float ((v[i] & 0x7fffff) | (KISS & ~0x7fffff));
        b = uint_as_float ((v[j] & 0x7fffff) | (KISS & ~0x7fffff));
#endif // TEST_MODE
        res = fp32_mul (a, b);
        ref = a * b;
        /* check for bit pattern mismatch between result and reference */
        if (float_as_uint (res) != float_as_uint (ref)) {
            /* if both a and b are NaNs, either could be returned quietened */
            if (! (ISNAN (a) && ISNAN (b) &&
                   ((QNAN (float_as_uint (a)) == float_as_uint (res)) ||
                    (QNAN (float_as_uint (b)) == float_as_uint (res))))) {
                printf ("err: a=% 15.8e (%08x)  b=% 15.8e (%08x)  res=% 15.8e (%08x)  ref=%15.8e (%08x)\n",
                        a, float_as_uint(a), b, float_as_uint (b), res, float_as_uint (res), ref, float_as_uint (ref));
                return EXIT_FAILURE;
            }
        }
        count++;
        if (!(count & 0xffffff)) printf ("\r%llu", count);
    } while (1);
    return EXIT_SUCCESS;
}
njuffa
  • This isn't an asm question, no real need to avoid `uint64_t` as pseudocode for 32-bit ISAs.. (And if you care about actual compiler code-gen quality, I'd assume that `<<` on `uint64_t` would work at least as well as emulating it. e.g. maybe not all compilers would spot that they could just add/adc to implement the carry when normalizing the significand on a 32-bit ISA, or especially on a 64-bit ISA that they could just use the single-register 64-bit product on a 64-bit ISA for that step, before extracting the high half.) – Peter Cordes Apr 08 '19 at 06:24
    @PeterCordes Without additional context, its hard to know what the best implementation style is. I have worked on other questions before where asker mentioned belatedly that they are actually using a 16-bit microcontroller whose compiler doesn't support 64-bit integer types. Some compilers for 32-bit platforms often generate poor code for 64-bit operations, e.g. CUDA on GPUs. So I chose a conservative style that should work quite well across all platforms and requires minimal adaptation effort should there be no 64-bit support at all. – njuffa Apr 08 '19 at 15:12

It is much more complicated. Take a look at the source of the SoftFloat library (for example https://github.com/riscv/riscv-pk/blob/master/softfloat/f64_mul.c). Clone it and analyze it.

0___________
  • I would have expected a little more explanation of why it is much more complicated (subnormals, other special numbers, rounding, overflow, underflow, etc.), and not just a link to a worked out piece of software. – Rudy Velthuis Apr 06 '19 at 18:36
  • @RudyVelthuis it is not about linking, it about the multiplication algorithm. – 0___________ Apr 06 '19 at 19:50
    You obviously misunderstood my comment. You posted a **link** (=URL) to an external site, and not much more. I had expected more **explanations**. I am not talking about the linking process. – Rudy Velthuis Apr 06 '19 at 21:36