Questions tagged [floating-point-precision]

Anything related to the precision of a floating-point number representation. The term "precision" refers to the number of significant digits a representation can hold. This is not the same as "accuracy", which concerns the error made in performing calculations, although the two are often related.

551 questions
4
votes
3 answers
Exhausting floating point precision in a (seemingly) infinite loop
I've got the following Python script:
x = 300000000.0
while (x < x + x):
    x = x + x
    print "exec: " + str(x)
print "terminated " + str(x)
This seemingly infinite loop terminates pretty quickly if x is a floating-point number. But if I change…

Joe Tam
- 564
- 5
- 16
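A minimal Python 3 sketch of why the loop above halts (the question uses Python 2 print statements): repeated doubling eventually overflows the float to inf, and inf < inf is false, so the condition x < x + x stops holding. With a Python int, which has arbitrary precision, the loop really would run forever.

# Doubling a float overflows to inf after roughly a thousand iterations; inf < inf is False.
x = 300000000.0
while x < x + x:
    x = x + x
print("terminated", x)   # prints: terminated inf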
3
votes
3 answers
Efficient way to store a fixed range float
I have a (big) array of floats; each float takes 4 bytes.
Is there a way, given the fact that my floats are ranged between 0 and 255, to store each float in less than 4 bytes?
I can do any amount of computation on the whole array.
I'm using C.

cojocar
- 1,782
- 2
- 18
- 22
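A sketch of one common answer to the question above, assuming the values only need a couple of decimal digits of precision: because the range is fixed at [0, 255], each value can be stored as 16-bit (or even 8-bit) fixed point and expanded back on use. The NumPy array a below is hypothetical; in C the same idea maps to a uint16_t array.

import numpy as np

a = np.array([0.0, 3.14159, 127.5, 255.0], dtype=np.float32)  # hypothetical data in [0, 255]

# 16-bit fixed point with 8 fractional bits: round(value * 256) fits in 0..65280.
q = np.round(a * 256.0).astype(np.uint16)    # 2 bytes per value instead of 4
back = q.astype(np.float32) / 256.0          # worst-case round-trip error is 1/512

print(np.max(np.abs(back - a)))

numpy.float16 (half precision) is another 2-byte option, trading this fixed error bound for a relative one.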
3
votes
1 answer
SQL Server built-in function Stdevp calculating incorrectly
I am using SQL Server 2008, and one of my requirements is to calculate the population standard deviation. SQL Server provides the built-in function STDEVP for this. I am using it, but I am befuddled by the result I am getting. Population standard deviation…

rirhs
- 297
- 2
- 4
- 9
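One common cause of a population standard deviation looking wrong is catastrophic cancellation in the one-pass E[x²] − (E[x])² shortcut when the mean is much larger than the spread. A small Python sketch of that effect (illustrative only; it says nothing about how SQL Server actually implements STDEVP):

import math

# Hypothetical data: large mean, tiny spread; the population variance is exactly 22.5.
xs = [1e9 + d for d in (4.0, 7.0, 13.0, 16.0)]
n = len(xs)
mean = sum(xs) / n

# Two-pass formula (numerically stable)
var_two_pass = sum((x - mean) ** 2 for x in xs) / n

# One-pass shortcut E[x^2] - E[x]^2 (catastrophic cancellation)
var_one_pass = sum(x * x for x in xs) / n - mean * mean

print(var_two_pass)             # 22.5
print(var_one_pass)             # noticeably off, and can even come out negative
print(math.sqrt(var_two_pass))  # population standard deviation, ~4.7434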
3
votes
1 answer
Adding 32-bit floating-point numbers
I'm learning more than I ever wanted to know about floating-point numbers.
Let's say I needed to add:
1 10000000 00000000000000000000000
1 01111000 11111000000000000000000
2’s complement form.
The first bit is the sign, the next 8 bits are the…

Snow_Mac
- 5,727
- 17
- 54
- 80
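Worth noting for the question above: IEEE-754 single precision is sign-magnitude with a biased exponent, not two's complement. A Python sketch that decodes the two quoted bit patterns and adds them (the helper name decode_single is made up for illustration):

import struct

def decode_single(bits: str) -> float:
    """Interpret a 32-character '0'/'1' string as an IEEE-754 single."""
    return struct.unpack('>f', int(bits, 2).to_bytes(4, 'big'))[0]

a = decode_single('1' '10000000' '00000000000000000000000')  # -1.0     * 2**1  = -2.0
b = decode_single('1' '01111000' '11111000000000000000000')  # -1.96875 * 2**-7 ≈ -0.0153809

print(a, b, a + b)   # -2.0 -0.015380859375 -2.015380859375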
3
votes
2 answers
How to divide two integers and get a result in float in Forth?
I am looking for a way to translate between single precision and double precision.
One example would be to divide 2 integers and get a floating result. How is that possible?

Marci-man
- 2,113
- 3
- 28
- 76
3
votes
1 answer
release mode uses double precision even for float variables
My algorithm is calculating the epsilon for single precision floating point arithmetic. It is supposed to be something around 1.1921e-007. Here is the code:
static void Main(string[] args) {
    // start with some small magic number
    float a =…

user492238
- 4,094
- 1
- 20
- 26
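For reference, the single-precision machine epsilon is 2⁻²³ ≈ 1.1920929e-07; the release-mode discrepancy in questions like this is usually attributed to the JIT keeping float intermediates in higher-precision registers. A Python/NumPy analogue (not the asker's C#) that forces every step into 32-bit floats:

import numpy as np

# Halve `a` until adding a/2 to 1.0f no longer changes it.
a = np.float32(1.0)
while np.float32(1.0) + a / np.float32(2.0) > np.float32(1.0):
    a = a / np.float32(2.0)

print(a)                         # 1.1920929e-07
print(np.finfo(np.float32).eps)  # same value, for comparison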
3
votes
2 answers
Confusion with floating point numbers
#include <stdio.h>

int main()
{
    float x = 3.4e2;
    printf("%f", x);
    return 0;
}
Output:
340.000000 // It's ok.
But if I write x=3.1234e2 the output is 312.339996, and if x=3.12345678e2 the output is 312.345673.
Why are the outputs like this? I think if I write…

Parikshita
- 1,297
- 5
- 15
- 23
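The short explanation: 312.34 has no exact binary representation, so the float stores the nearest representable value and printf("%f") prints six decimal digits of that value. A Python sketch that exposes the stored value (using float32 to match the C float):

from decimal import Decimal
import numpy as np

x = np.float32(3.1234e2)
print(x)                  # 312.34  (the repr is the shortest string that round-trips)
print(Decimal(float(x)))  # 312.339996337890625, the value actually stored
print("%f" % x)           # 312.339996, matching printf("%f")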
3
votes
0 answers
Rounding error in generating perfect squares python
I am trying to find all the values of n such that (2^n-7) is a perfect square. My code is:
import math
def isperfect(n):
    return math.sqrt(n) % 1 == 0
print [i for i in range(3,200) if isperfect(2**(i)-7)]
Here is what I am getting:
[3, 4,…

overflow
- 33
- 4
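math.sqrt works in double precision, so once 2**i - 7 exceeds 2**53 the % 1 == 0 test becomes unreliable. A sketch that avoids floats entirely by using the exact integer square root (math.isqrt, Python 3.8+):

import math

def isperfect(n: int) -> bool:
    """Exact perfect-square test: no floating point involved."""
    return n >= 0 and math.isqrt(n) ** 2 == n

print([i for i in range(3, 200) if isperfect(2**i - 7)])   # [3, 4, 5, 7, 15]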
3
votes
1 answer
MATLAB force num2str to format
I'm searching for the reverse of this request
So when I type num2str(foo,3), I get this:
foo=0.00781
foo=0.0313
foo=0.125
foo=0.5
foo=2
foo=8
However I want to make them all of the same length, so something like this…

jeff
- 13,055
- 29
- 78
- 136
3
votes
2 answers
Loss of precision in float subtraction with Swift
I'm trying to make a function to create the fraction version of a floating point number in my app with Swift. It's working perfectly right now, except when it has to build a mixed number (whole and fraction part).
As an example below, when I call…

Sergio Daniel L. García
- 302
- 3
- 16
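Not the asker's Swift, but a Python analogue of building a mixed number from a float: the robust route is to convert to an exact rational first (here fractions.Fraction with a limited denominator) rather than subtracting the whole part as a float. The helper mixed_number and the denominator limit 64 are made up for illustration.

from fractions import Fraction

def mixed_number(x: float, max_den: int = 64):
    """Split a non-negative float into (whole part, proper fraction)."""
    frac = Fraction(x).limit_denominator(max_den)
    whole, rem = divmod(frac.numerator, frac.denominator)
    return whole, Fraction(rem, frac.denominator)

print(mixed_number(2.25))  # (2, Fraction(1, 4))
print(mixed_number(1.1))   # (1, Fraction(1, 10)), even though 1.1 is not exact in binary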
3
votes
1 answer
Tiny numerical difference in sum-of-squares depending on numpy procedure call used
I am writing a function which computes sum of squares of errors. x and y are vectors of the same length; y is observation data, x is data computed by my model.
The code is like:
>> res = y.ravel() - x.ravel()
>> np.dot(res.T, res)
>>…

mrad
- 46
- 5
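For the sum-of-squares question above: np.dot (BLAS), np.sum over the squared array (pairwise summation), and np.einsum accumulate in different orders, so differences in the last digit or two are expected. A quick sketch with synthetic data:

import numpy as np

rng = np.random.default_rng(0)
res = rng.standard_normal(1_000_000)

a = np.dot(res, res)                     # BLAS dot product
b = np.sum(res ** 2)                     # pairwise summation of the squared array
c = float(np.einsum('i,i->', res, res))

print(a, b, c)   # agree to roughly 15 significant digits
print(a == b)    # may be False: same math, different accumulation order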
3
votes
1 answer
numpy.power() and math.pow() don't give the same result
Is numpy.power() less accurate than math.pow()?
Example:
Given A = numpy.array([6.66655333e+12,6.66658000e+12,6.66660667e+12,3.36664533e+12])
I define
result = numpy.power(A,2.5)
So
>> result = [ 1.14750185e+32 1.14751333e+32 1.14752480e+32 …

farhawa
- 10,120
- 16
- 49
- 91
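For float64 input, numpy.power and math.pow both end up calling the C library's pow, so they normally agree to within an ULP or two; apparent discrepancies are often just NumPy's abbreviated printing (or a float32 dtype). A sketch that compares them elementwise on the array from the question:

import math
import numpy as np

A = np.array([6.66655333e+12, 6.66658000e+12, 6.66660667e+12, 3.36664533e+12])

np_res = np.power(A, 2.5)
py_res = np.array([math.pow(x, 2.5) for x in A])

print(np_res - py_res)                             # usually all zeros
print(np.max(np.abs((np_res - py_res) / py_res)))  # relative difference ~1e-16 at worst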
3
votes
2 answers
Floating point precision issue in PHP and MySQL for storing occasional decimal values
I want to store occasional decimal values in my MySQL database and display them in my PHP application. Let me explain what I mean by occasional decimal values. The numbers are whole numbers most of the time, like an integer. For example, 160…

Debiprasad
- 5,895
- 16
- 67
- 95
3
votes
3 answers
Error Propagation upon Summing Single-Precision (float) Values
I'm learning about single precision and would like to understand error propagation. According to this nice website, addition is a dangerous operation.
So I wrote a small C program to test how quickly the errors add up. I'm not entirely sure if this…

Sebastian
- 1,408
- 1
- 20
- 28
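A small Python sketch of the effect described above, using numpy.float32 scalars to mimic C floats: a naive running sum picks up roughly one rounding error per term, while Kahan (compensated) summation keeps the total error near one ULP.

import math
import numpy as np

xs = [np.float32(0.1)] * 100_000        # exact sum is ~10000.000149 (0.1 is not exact in float32)

naive = np.float32(0.0)
kahan = np.float32(0.0)
comp = np.float32(0.0)                  # Kahan compensation term
for v in xs:
    naive = naive + v                   # plain running sum: errors accumulate
    y = v - comp                        # compensated (Kahan) summation
    t = kahan + y
    comp = (t - kahan) - y
    kahan = t

print(naive)                            # drifts noticeably away from the exact sum
print(kahan)                            # stays very close to it
print(math.fsum(float(v) for v in xs))  # correctly rounded reference: 10000.000149011612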
3
votes
1 answer
How to round float to the upper/lower representation
When I compute:
float res = 1.123123123123;
I suppose the res variable will be approximated to the nearest possible float representation of 1.123123123123.
Is it possible to approximate to the lower/upper possible float representation instead?

user2443456
- 199
- 1
- 1
- 7
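In C the direct answer to the question above is nextafterf / nextafter from <math.h>, which step to the adjacent representable value. The same idea in Python (3.9+) with math.nextafter, where floats are doubles:

import math

x = 1.123123123123                   # already rounded to the nearest double when parsed
down = math.nextafter(x, -math.inf)  # greatest double strictly below x
up = math.nextafter(x, math.inf)     # smallest double strictly above x

print(down < x < up)                 # True
print(x - down, up - x)              # one ULP on each side (~2.2e-16 here)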