
I've seen various programs that convert a given decimal floating-point number to its binary representation in the IEEE 754 format.
Now, given such a binary number, how can I write a program in C to convert it back to a floating-point number?

Example:

Input:  01000001010010000000000001111111
Output: 12.50012111663818359375
Pedro Cabaço
    Mostly, we start by writing some code. Have you written any yet? –  Apr 01 '15 at 08:49
  • 1
    Are you sure about *decimal floating point number* and *binary in the IEEE 754 format*? Your input looks like a binary representation, possibly as a string, you could use `strtol` to parse it and the output may be the string conversion performed by `printf("%f", f)`, so you have some hints to start coding... – chqrlie Apr 01 '15 at 09:06
  • Hmmm This works for the title problem. `01000001010010000000000001111111 binary` --> `1095237759 decimal`. – chux - Reinstate Monica Apr 01 '15 at 17:17

1 Answer


I'm assuming you have the binary representation as a string.

#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <stdint.h>

/* Set bit i (counting from the least-significant bit) in *f. */
void setBit(uint32_t* f, int i) {
    uint32_t mask = ((uint32_t)1) << i;
    *f |= mask;
}

float readFloat(const char* bits) {
    assert(strlen(bits) == 8 * sizeof(float));
    assert(sizeof(uint32_t) == sizeof(float));
    float f = 0;
    int size = strlen(bits);
    for (int i = 0; i < size; i++) {
        /* The string is written most-significant bit first,
           so bit i of the word is character size-i-1. */
        int bit = bits[size - i - 1] - '0';
        if (bit) {
            setBit((uint32_t*)&f, i);
        }
    }
    return f;
}

int main(void) {
    const char* bits = "01000001010010000000000001111111";
    float f = readFloat(bits);
    printf("%s -> %.20f\n", bits, f);
    return 0;
}

Gives

01000001010010000000000001111111 -> 12.50012111663818359375

T.Gounelle
  • Should you care about portability to 16-bit platforms (prevalent in the embedded world in 2015), `uint32_t mask = 1u << i;` is a problem as `unsigned` may be only 16-bit. Multiple ways to cope: `uint32_t mask = ((uint32_t)1) << i;` or `uint32_t mask = 1LU << i;` or `uint32_t mask = UINT32_C(1) << i;`. I like the first. See http://stackoverflow.com/q/19451101/2410359 – chux - Reinstate Monica Apr 01 '15 at 21:35