Moving a discussion on the relative merits of integers and floats into a separate question. Here it is: what is your preference between an integer type and a floating-point type in situations that are neither inherently integral nor inherently floating-point? For example, when developing a geometric engine for a well-controlled range of scales, would you prefer integer coordinates in the smallest feasible units, or float/double coordinates?
0
- Do you ever need to divide two numbers? – ilent2 Aug 25 '13 at 21:54
- It depends. What is the dynamic range involved? What kinds of operations need to be performed? etc. – Oliver Charlesworth Aug 25 '13 at 21:54
- In your example I would templatize the geometric engine on the numeric type and instantiate it as needed. I haven't come across a real-world case where I could choose either integers or floating point: they are entirely different, and one works where the other doesn't. It's like asking "Do you prefer bricks or potatoes where both could work?" – Dietmar Kühl Aug 25 '13 at 22:03
- I'm trying to create a general discussion here, regardless of my specific needs. It originated as a sequence of comments on one of the answers to a rigged question; I think this discussion is worth keeping visible. Of course most of us use both types; it's interesting to see the various rationales for the choices where there is some justification for either choice. – Michael Aug 25 '13 at 22:04
- @Michael "of course most of us use both types": that's wrong; unless the application is dealing with numeric values (most don't), it shouldn't use floating point. – James Kanze Aug 25 '13 at 22:07
- Dietmar, ints and floats have considerable intersection in their applicability. I worked with specialized geometries at 3 different companies, and in virtually identical situations one was using floats for coordinates and the other two were using integers. All these companies are still in business and thriving, which hints that sometimes either choice is viable. – Michael Aug 25 '13 at 22:09
- @Michael "I'm trying to create a general discussion here": the about page for SO says: Not all questions work well in our format. Avoid questions that are primarily opinion-based, or that are likely to generate discussion rather than answers. http://stackoverflow.com/about – ilent2 Aug 25 '13 at 22:11
- possible duplicate of [Why is floating point preferred over long for precise tasks?](http://stackoverflow.com/questions/18314811/why-is-floating-point-preferred-over-long-for-precise-tasks) – Eric Postpischil Aug 27 '13 at 00:27
2 Answers
2
Some reasons to prefer floating-point are:
- When you multiply in a fixed-point format, the product has a new scale, so it must be adjusted, or the code must be written to account for the changed scale. For example, if you adopt a format scaled by 100, so that .3 is represented as 30 and .4 as 40, then multiplying 30 by 40 produces 1200, but the correct answer at the same scale is 12 (representing .12). Division needs a similar adjustment.
- When the integer format overflows, many machines and programming languages do not have good support for getting the most significant portion of the result. Floating-point automatically produces the most significant portion of the result and rounds the discarded bits.
- Integer arithmetic usually truncates fractions, but floating-point rounds them (unless requested otherwise).
- Some calculations involve a large range of numbers, including both numbers that are very large and very small. A fixed-point format has a small range, but a floating-point format has a large range. You could manually track the scale with a fixed-point format, but then you are merely implementing your own floating-point using integers.
- Many machines and/or programming languages ignore integer overflow, but floating-point can handle these gracefully and/or provide notifications when they occur.
- Floating-point arithmetic is well defined and generally well implemented; bugs in it have been reduced (sometimes by painful experience). Building new do-it-yourself arithmetic is prone to bugs.
- For some functions, it is difficult to predict the scale of the result in advance, so it is awkward to use a fixed-point format. For example, consider sine. Whenever the input is near a multiple of π, sine is near zero. Because π is irrational (and transcendental), the pattern of which integers or fixed-point numbers are near multiples of π is very irregular. Some fixed-point numbers are not near multiples of π, and their sines are around .1, .5, .9, et cetera. Some fixed-point numbers are very near multiples of π, and their sines are close to zero. A few are very close to multiples of π, and their sines are tiny. Because of this, there is no fixed-point format of reasonable precision that can always return the result of sine without either underflowing or overflowing.
Some reasons to prefer integers are:
- Integer arithmetic may be faster or have greater throughput on particular hardware.
- Integer arithmetic provides greater precision for the same number of bits.
- Support for integer arithmetic may be better in some language implementations. For example, default settings or low-quality software with high-precision settings may display floating-point values incorrectly, but software rarely prints integer values incorrectly.
I considered ways to list certain “features” of integer arithmetic as reasons to use it, but, upon examination, they are not actual features:
- One might say that integer arithmetic is exact until it overflows. But this is false because integer arithmetic, or fixed-point arithmetic (integer arithmetic with a scale), is not exact. Calculating monthly interest given an annual rate is usually inexact. Converting between currencies is not exact. Physical calculations are not exact. Coordinate scaling is not exact.
- To the extent that integer arithmetic is exact until it overflows, it is not a feature. Most machines allow integer arithmetic to overflow without warning. So, when integer arithmetic fails, it fails spectacularly. (With IEEE 754 floating-point, you can design exact arithmetic and request a trap or flag if inexactness occurs.)

Eric Postpischil
-1
Here are some situations in which NOT to use floats/doubles and to stick with integers/fixed-point:
- You need to compare for equality.
- You need predictable rounding errors, or no rounding errors (as when handling money).
- The precision must be absolute, not relative to the magnitude of the value (sometimes the case when handling dates or spatial coordinates; time intervals or distances can normally use floats).

youdontneedtothankme
- What's wrong with floating-point equality (that is not equally wrong with fixed-point equality)? – Pascal Cuoq Aug 26 '13 at 10:02
- @PascalCuoq, `1.0+2.0==3.0` can't be trusted when using floats, not even `1.0+2.0==1.0+2.0`. Fixed point may work (depending on where the point is). – youdontneedtothankme Aug 26 '13 at 14:24
- It is superstition that `1.0+2.0!=3.0` or `1.0+2.0!=1.0+2.0` can happen, for most programming languages. In the case of C, David Monniaux has made a rather exhaustive list of caveats: http://hal.archives-ouvertes.fr/docs/00/28/14/29/PDF/floating-point-article.pdf . No combination of the dangers he points out can make any of your putative counter-examples compute an unexpected result on a platform with IEEE 754 floating-point. – Pascal Cuoq Aug 26 '13 at 15:02
- To be more constructive, you could have said "there is a problem with floating-point equality because `1.0 + 1e-20 + (-1.0) != 1.0 + (-1.0) + 1e-20`". However, in that example, the problem is with `+`, not with `==`. Your list could include "You need addition to be associative (or exact), barring overflows". Note that fixed-point multiplication is not associative in general either: the result of a fixed-point multiplication may be in range but have too many digits to be represented exactly in the fixed-point system. – Pascal Cuoq Aug 26 '13 at 16:29