I'm working on a Digital Design project (Verilog) involving the IEEE double-precision floating-point standard.
I have a query regarding IEEE 754 floating-point representation. Normal numbers are stored in normalized format, which means the leading bit of the significand is an implicit 1 (the so-called hidden bit).
When a number is de-normalized (subnormal), the hidden bit is treated as 0 and the exponent field is forced to 0 by shifting the significand to the right (i.e. moving the binary point to the left).
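To make the field layout concrete, here is a minimal sketch of how I picture unpacking a binary64 word (the module name `fp_unpack` and the signal names are just illustrative, not from any library):

```verilog
// Minimal sketch: unpack an IEEE 754 binary64 word and recover the 53-bit
// significand, prepending the hidden bit (1 for normal numbers, 0 when the
// exponent field is all zeros, i.e. zero/subnormal).
module fp_unpack (
    input  wire [63:0] fp_in,        // raw double-precision word
    output wire        sign,
    output wire [10:0] exp_field,    // biased exponent field
    output wire [52:0] significand   // {hidden bit, 52 fraction bits}
);
    wire [51:0] fraction = fp_in[51:0];

    assign sign      = fp_in[63];
    assign exp_field = fp_in[62:52];

    // Hidden bit is 1 unless the exponent field is all zeros.
    wire hidden = (exp_field != 11'd0);

    assign significand = {hidden, fraction};
endmodule
```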
My query is regarding the de-normalization procedure. For example, if the exponent, and hence the required right shift, can be as high as 120, how do we treat the fraction bits (52 bits for IEEE double precision)?
Do we do one of the following?
1) Increase the width of the fraction? i.e. 52 fraction bits + de-normalization shift => 52 + 120 = 172 bits?
2) Simply shift the bits and keep the width of the fraction as it is, i.e. discard the excess bits? (Both options are sketched below.)
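Here is a rough Verilog sketch of how I currently picture the two options in hardware. The module name `denorm_shift`, the signal names, and the 120-place maximum shift are just assumptions taken from my example above, not anything mandated by IEEE 754:

```verilog
// Illustrative sketch only; widths and names are assumptions for this example.
module denorm_shift (
    input  wire [52:0]  sig_in,     // {hidden bit, 52 fraction bits}
    input  wire [6:0]   shift_amt,  // right-shift amount, up to 120 in this example
    output wire [172:0] sig_wide,   // Option 1: widened result, 53 + 120 = 173 bits
    output wire [52:0]  sig_fixed,  // Option 2: result kept at the original width
    output wire         sticky      // Option 2: OR of every bit shifted out
);
    // Place the significand at the top of a 173-bit vector and shift it right.
    wire [172:0] shifted = {sig_in, 120'd0} >> shift_amt;

    // Option 1: keep everything; no information is lost, at the cost of width.
    assign sig_wide = shifted;

    // Option 2: keep only the top 53 bits and collapse the discarded bits into
    // a single sticky bit so that rounding logic can still see them.
    assign sig_fixed = shifted[172:120];
    assign sticky    = |shifted[119:0];
endmodule
```

Option 1 loses no bits but widens the whole datapath; option 2 keeps the width fixed and only retains the discarded bits as a sticky OR. Which of these is the intended/standard approach?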