
I have a question that requires finding the largest representable integer using (6 exponent and 9 mantissa).

I know that there is a split of 5 bits for the exponent and 10 bits for the mantissa with 1 sign bit.

I know how to find the low and high digits for the mantissa. I use the function (k/(2^(mantissa))). But how do I find the high and low digits for the exponent? Does it have something to do with the number of bits (16)?

I am looking at examples that say the high and low for a 5-bit exponent are −16 and 15. But how they got there is where I am confused.

Thanks

John Doee
  • There's not enough information here. Giving us the number of exponent bits and significand (mantissa) bits for a floating-point format does _not_ fully determine that floating-point format. So there's no way to determine the largest representable integer from the information you've given. Do you have a complete description of the format? (It's a bit like asking: "What's the minimum integer representable in a 5-bit integer format?". The answer differs depending on whether the format is signed or unsigned, or uses sign-magnitude versus two's complement, etc.) – Mark Dickinson Sep 05 '18 at 19:21
  • Also, can you clarify what you mean by "6 exponent and 9 mantissa" followed by "5 bits for the exponent and 10 bits for the mantissa". It's not clear whether you're allowing 6 bits or 5 bits for the exponent. – Mark Dickinson Sep 05 '18 at 19:32
  • The preferred term for the fraction part of a floating-point object is “significand.” A mantissa is the fraction part of a logarithm. – Eric Postpischil Sep 05 '18 at 21:24
  • Looks a lot like [binary16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format#Exponent_encoding) whose max value is 65504. – chux - Reinstate Monica Sep 06 '18 at 01:33
  • @EricPostpischil: yes and no. Mantissa is also very often used for the part of a floating point type that contains the significant digits. I know that "significand" is preferred by IEEE and some others, but that doesn't mean it is a better word. The word "significand" is a misnomer, IMO. – Rudy Velthuis Sep 06 '18 at 16:47

1 Answer


If your floating-point format follows the pattern set by IEEE 754, then the encoded exponent is biased by half its maximum value, rounded down. Thus, 5 exponent bits can hold codes from 0 to 31; half of 31, rounded down, is 15. So an exponent code of 1 represents a mathematical exponent of 1−15 = −14, and an exponent code of, say, 27, represents a mathematical exponent of 27−15 = 12.
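That bias arithmetic can be sketched in a few lines of Python (the names here are illustrative, not part of any standard API):

```python
# IEEE-754-style exponent decoding for a format with 5 exponent bits.
EXP_BITS = 5
max_code = 2**EXP_BITS - 1   # 31: largest value the exponent field can hold
bias = max_code // 2         # half of 31, rounded down: 15

def decode_exponent(code):
    """Map a stored exponent code to its mathematical exponent."""
    return code - bias

print(bias)                # 15
print(decode_exponent(1))  # -14
print(decode_exponent(27)) # 12
```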

Additionally, in IEEE 754 binary floating-point, the maximum exponent code is reserved to represent infinities and NaNs. So the maximum exponent code for finite values in your case would be 30, representing a mathematical exponent of 30−15 = 15.
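Putting those pieces together for a standard binary16 layout (10 significand bits, maximum finite exponent 15), the largest finite value, which is also the largest representable integer, falls out directly:

```python
# Largest finite value of IEEE 754 binary16:
# all-ones significand is 1 + (1 - 2**-10) = 2 - 2**-10,
# scaled by 2 to the maximum finite exponent, 30 - 15 = 15.
SIG_BITS = 10
max_exponent = 30 - 15              # 15
max_significand = 2 - 2**-SIG_BITS  # binary 1.1111111111
largest = max_significand * 2**max_exponent
print(largest)  # 65504.0
```

This matches the 65504 mentioned in the comments for binary16.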

However, there is no law stating anybody must use IEEE 754. So the mere fact that your format has 1 sign bit, 5 exponent bits, and 10 significand bits does not tell us what the actual mathematical exponent values are. Somebody could choose to bias the exponent code by another value or to use the maximum value for regular numbers, not infinities and NaNs. And, given your information that the mathematical exponent range is from −15 to 16, it seems like the specification might be that all exponent codes represent numbers, and there are no infinities or NaNs. This suggests the exponent is biased by 15, and the maximum exponent code of 31 represents a mathematical exponent of 16.
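If that hypothetical no-infinities reading is correct (bias still 15, but the top exponent code 31 represents a finite exponent of 16), the largest value would instead come out larger than binary16's; a sketch under that assumption:

```python
# Hypothetical format: bias 15, no infinities/NaNs, so the
# maximum exponent code 31 encodes a mathematical exponent of 16.
SIG_BITS = 10
largest = (2 - 2**-SIG_BITS) * 2**16
print(largest)  # 131008.0
```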

It also suggests there are no subnormal numbers. Subnormal numbers would usually be encoded by an exponent code of 0, which means the implicit leading bit of the significand is 0 (instead of 1 for normal numbers) and the mathematical exponent clamps at −14 instead of decreasing to −15. The fact that your information says the minimum exponent is −15 suggests this is not occurring, so there are no subnormal numbers in this format.
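For comparison, here is what the smallest positive value looks like under that no-subnormals assumption versus standard binary16 (again a hypothetical sketch, not a statement about your actual format):

```python
# Hypothetical format with no subnormals and minimum exponent -15:
# smallest positive value is 1.0 * 2**-15.
smallest_no_subnormals = 1.0 * 2**-15
print(smallest_no_subnormals)  # 3.0517578125e-05

# Standard binary16 for comparison: smallest subnormal is
# 2**-14 (minimum normal exponent) * 2**-10 (lowest significand bit) = 2**-24.
smallest_ieee_subnormal = 2.0**-24
print(smallest_ieee_subnormal)  # exactly 1/16777216
```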

Again, though, there is no law about what floating-point formats have to be. Somebody could make other choices. There should be a specification that describes this floating-point format, and that is where the necessary information should come from.

Eric Postpischil