Questions tagged [fixed-point]

Questions about fixed-point arithmetic, which performs calculations with a fixed number of digits after the radix point. For the combinators used to encode recursion, use [fixpoint-combinators] instead. For the numerical method, use [fixed-point-iteration] instead. For the fixedpoint engine of Z3, use [z3-fixedpoint] instead.

Fixed-point arithmetic keeps the radix point in a fixed position during calculations, rather than the variable position used by floating point. Instead of the IEEE mantissa-and-exponent format, numbers are represented as integers scaled by a fixed factor. It can be faster and, within its range, more precise than floating point.

Most questions deal with finding a suitable fixed-point library for [insert language here], as most languages lack fixed-point arithmetic in their standard libraries (although some have it natively). Also, different types of fixed-point number exist, depending on how many fractional digits they carry (and thus their precision) - see the Q format.
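As a concrete illustration of the scaled-integer idea, here is a minimal sketch in C (the Q16.16 layout and the helper names are just one common choice used for illustration, not anything mandated by the tag):

```c
#include <stdint.h>
#include <stdio.h>

/* Q16.16: 16 integer bits, 16 fractional bits, stored in a plain int32_t. */
typedef int32_t q16_16;
#define Q_FRAC_BITS 16
#define Q_ONE       (1 << Q_FRAC_BITS)

static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

/* Addition is plain integer addition; multiplication needs a wider
 * intermediate and a shift to drop the extra fractional bits. */
static q16_16 q_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> Q_FRAC_BITS);
}

int main(void) {
    q16_16 a = q_from_double(3.25), b = q_from_double(1.5);
    printf("%f\n", q_to_double(a + b));       /* 4.750000 */
    printf("%f\n", q_to_double(q_mul(a, b))); /* 4.875000 */
}
```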

511 questions
6
votes
2 answers

Fixed point on a compression algorithm widely used nowadays

I was wondering if there is a compression algorithm, in common use today, that contains a fixed point, i.e., an identity file. To explain, let's call C : byte[] -> byte[] a function that represents the compression algorithm. I want to know if there…
Bruno Reis
  • 37,201
  • 11
  • 119
  • 156
6
votes
4 answers

Fixed-point arithmetic in Java with fast performance

I need to represent some numbers in Java with perfect precision and a fixed number of digits after the decimal point; beyond that point, I don't care. (More concretely - money and percentages.) I have been using Java's own BigDecimal, but I found…
Karel Bílek
  • 36,467
  • 31
  • 94
  • 149
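The usual non-BigDecimal approach for money is to store an integer count of the smallest unit (cents) and only format at output. A minimal sketch of that idea, in C for brevity (the cents_t name, the basis-point rate, and the half-up rounding are assumptions for illustration):

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Store money as a signed count of cents; exact for 2 decimal places. */
typedef int64_t cents_t;

static cents_t cents_add(cents_t a, cents_t b) { return a + b; }

/* Apply a percentage given in basis points (1% == 100 bp), rounding
 * half up on the last cent (positive amounts). */
static cents_t cents_pct(cents_t amount, int64_t basis_points) {
    return (amount * basis_points + 5000) / 10000;
}

int main(void) {
    cents_t price = 1999;                                     /* $19.99       */
    cents_t total = cents_add(price, cents_pct(price, 825));  /* + 8.25% tax  */
    printf("$%" PRId64 ".%02" PRId64 "\n", total / 100, total % 100);
}
```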
5
votes
2 answers

How to display two decimal places in Python, when a number is perfectly divisible?

Currently I am trying to solve a problem where I am supposed to print the answer up to two decimal places without rounding off. I have used the code below for this purpose: import math a=1.175 #value of a after some…
DrinkandDerive
  • 67
  • 1
  • 1
  • 7
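Independent of Python, the underlying trick for "two decimal places without rounding off" is to truncate: scale by 100, drop the fraction, then split the digits. A sketch in C (positive inputs only; the helper name is illustrative):

```c
#include <stdio.h>

/* Print x with exactly two decimal places, truncated (not rounded).
 * Positive inputs only. */
static void print_truncated_2dp(double x) {
    long long scaled = (long long)(x * 100.0);   /* cast truncates toward zero */
    printf("%lld.%02lld\n", scaled / 100, scaled % 100);
}

int main(void) {
    print_truncated_2dp(1.175);   /* 1.17, not 1.18 */
    print_truncated_2dp(45.0);    /* 45.00 */
}
```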
5
votes
1 answer

Log2 approximation in fixed-point

I've already implemented a fixed-point log2 function using a lookup table and a low-order polynomial approximation, but I am not quite happy with the accuracy across the entire 32-bit fixed-point range [-1,+1). The input format is s0.31 and the output format is…
Ali
  • 288
  • 3
  • 10
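One well-known alternative to a table-plus-polynomial design is the bit-by-bit method: normalize the argument into [1, 2), then square it repeatedly, extracting one fractional result bit per iteration. A sketch for Q16.16 input and output (the question's s0.31 format would need the shifts adjusted; this only shows the general idea):

```c
#include <stdint.h>
#include <stdio.h>

/* log2 of an unsigned Q16.16 value, result in Q16.16 (bit-by-bit method). */
static int32_t log2_q16_16(uint32_t x) {
    if (x == 0) return INT32_MIN;   /* log2(0) undefined; saturate */
    int32_t y = 0;
    /* Normalize x into [1, 2) in Q16.16, accumulating the integer part. */
    while (x < 0x10000u)  { x <<= 1; y -= 0x10000; }
    while (x >= 0x20000u) { x >>= 1; y += 0x10000; }
    /* Each squaring yields one more fractional bit of the result. */
    uint64_t z = x;
    for (int32_t bit = 0x8000; bit != 0; bit >>= 1) {
        z = (z * z) >> 16;
        if (z >= 0x20000u) { z >>= 1; y += bit; }
    }
    return y;
}

int main(void) {
    printf("%f\n", log2_q16_16(10u << 16) / 65536.0);  /* ~3.321928 */
}
```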
5
votes
2 answers

Doing exact real number arithmetic with PHP

What is the fastest and easiest way to perform basic arithmetic operations on strings representing real numbers in PHP? I have a string representing a MySQL DECIMAL value on which I want to operate and then return the result to the database. I…
Dariusz Walczak
  • 4,848
  • 5
  • 36
  • 39
5
votes
2 answers

Mapping [-1,+1] floats to Q31 fixed-point

I need to convert float to Q31 fixed-point, Q31 meaning 1 sign bit, 0 bits for the integer part, and 31 bits for the fractional part. This means that Q31 can only represent numbers in the range [-1,0.9999]. By definition, when converting from float to…
Danijel
  • 8,198
  • 18
  • 69
  • 133
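The usual recipe is to scale by 2^31 and saturate, since +1.0 itself has no Q31 representation. A sketch in C (the function name and the round-to-nearest choice are assumptions; truncation is also common):

```c
#include <stdint.h>
#include <inttypes.h>
#include <math.h>
#include <stdio.h>

/* Convert a float in [-1, +1] to Q31, saturating at the ends. */
static int32_t float_to_q31(float x) {
    double scaled = (double)x * 2147483648.0;         /* x * 2^31 */
    if (scaled >= 2147483647.0)  return INT32_MAX;    /* +1.0 -> 0x7FFFFFFF */
    if (scaled <= -2147483648.0) return INT32_MIN;    /* -1.0 -> 0x80000000 */
    return (int32_t)lrint(scaled);                    /* round to nearest   */
}

int main(void) {
    printf("%08" PRIx32 "\n", (uint32_t)float_to_q31(1.0f));   /* 7fffffff */
    printf("%08" PRIx32 "\n", (uint32_t)float_to_q31(-1.0f));  /* 80000000 */
    printf("%08" PRIx32 "\n", (uint32_t)float_to_q31(0.5f));   /* 40000000 */
}
```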
5
votes
1 answer

PHP bcmath versus Python Decimal

I am using PHP's bcmath library to perform operations on fixed-point numbers. I was expecting to get the same behaviour as Python's Decimal class, but I was quite surprised to find the following behaviour instead: // PHP: $a = bcdiv('15.80',…
Simone Bronzini
  • 1,057
  • 1
  • 12
  • 23
5
votes
4 answers

How to use Python to convert a float number to fixed point with a predefined number of bits

I have float32 numbers (let's say positive numbers) in NumPy format. I want to convert them to fixed-point numbers with a predefined number of bits to reduce precision. For example, the number 3.1415926 becomes 3.25 in MATLAB by using the function…
tuming1990
  • 89
  • 3
  • 8
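The MATLAB-style behaviour described above (3.1415926 becoming 3.25) is simply rounding to the nearest multiple of 2^-f for f fractional bits. A sketch of that quantization step, in C rather than NumPy (names are illustrative):

```c
#include <math.h>
#include <stdio.h>

/* Quantize x to a fixed-point grid with frac_bits fractional bits,
 * returning the result as a float again (precision reduced, type unchanged). */
static float quantize(float x, int frac_bits) {
    float scale = (float)(1 << frac_bits);
    return roundf(x * scale) / scale;
}

int main(void) {
    printf("%f\n", quantize(3.1415926f, 2));  /* 3.250000: nearest multiple of 0.25  */
    printf("%f\n", quantize(3.1415926f, 8));  /* 3.140625: nearest multiple of 1/256 */
}
```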
5
votes
2 answers

Synthesisable Fixed/Floating points in VHDL's IEEE Library

I'm creating a VHDL project (Xilinx ISE for Spartan-6) that will be required to use decimal "real-style" numbers in either fixed or floating point (I'm hoping fixed point will be sufficient). Being quite new to VHDL, I found out the hard way that the…
davidhood2
  • 1,367
  • 17
  • 47
5
votes
2 answers

Is it better to use GL_FIXED or GL_FLOAT on Android

I would have assumed that GL_FIXED was faster, but the iPhone docs actually say to use GL_FLOAT because GL_FIXED has to be converted to GL_FLOAT. Is it the same on Android? I suppose it varies by phone, but what about recent popular ones (Nexus One,…
Timmmm
  • 88,195
  • 71
  • 364
  • 509
5
votes
5 answers

Restrict Float Precision in JavaScript

I'm working on a function in JavaScript. I take two variables, x and y, divide them, and display the result on the screen: x=9; y=110; x/y; I'm getting the result as 0.08181818181818181. I need to do it using something…
Sai Avinash
  • 4,683
  • 17
  • 58
  • 96
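Aside from formatting functions, the fixed-point way to get a two-decimal quotient is to scale the numerator before an integer division, so the result is already expressed in hundredths. A sketch in C (rounded to nearest, positive inputs; the helper name is illustrative):

```c
#include <stdio.h>

/* Divide x by y and print the result with exactly two decimal digits,
 * using only integer arithmetic (rounded to nearest; positive inputs). */
static void print_ratio_2dp(long x, long y) {
    long hundredths = (x * 100 + y / 2) / y;   /* x/y in units of 0.01 */
    printf("%ld.%02ld\n", hundredths / 100, hundredths % 100);
}

int main(void) {
    print_ratio_2dp(9, 110);   /* 0.08 */
}
```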
5
votes
2 answers

Compile-time metaprogramming-based fixed-point arithmetic. Multiplication overflow?

I'm currently implementing a compile-time 3d raster through template metaprogramming. After implementing the algebraic basics (2d/3d/4d vectors, 3x3/4x4 matrix arithmetic, aabb2d/3d for culling purposes, etc.), I noticed that integer arithmetic is not…
Manu343726
  • 13,969
  • 4
  • 40
  • 75
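The overflow in question is the classic one: the intermediate product of two 32-bit fixed-point values needs 64 bits. When widening to a larger type is not an option, the product can be assembled from 16x16 partial products instead. A sketch for Q16.16 (the function name is illustrative; this assumes the true result fits in Q16.16):

```c
#include <stdint.h>
#include <stdio.h>

/* Q16.16 multiply without a 64-bit intermediate: split each operand into
 * its integer and fractional halves and add the partial products.
 * Matches (int32_t)(((int64_t)a * b) >> 16) while the result is in range. */
static int32_t q16_16_mul(int32_t a, int32_t b) {
    int32_t  ah = a >> 16;               /* signed integer part */
    uint32_t al = (uint32_t)a & 0xFFFF;  /* fractional bits     */
    int32_t  bh = b >> 16;
    uint32_t bl = (uint32_t)b & 0xFFFF;
    return ah * bh * 65536               /* high*high lands above the point    */
         + ah * (int32_t)bl              /* cross terms land on the point      */
         + (int32_t)al * bh
         + (int32_t)((al * bl) >> 16);   /* only the top bits of low*low survive */
}

int main(void) {
    int32_t a = (int32_t)( 2.5 * 65536);   /*  2.5 in Q16.16 */
    int32_t b = (int32_t)(-3.0 * 65536);   /* -3.0 in Q16.16 */
    printf("%f\n", q16_16_mul(a, b) / 65536.0);   /* -7.500000 */
}
```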
5
votes
1 answer

Fixed Point Multiplication of Unsigned numbers

I am trying to solve a multiplication problem with fixed-point numbers. The numbers are 32-bit. My architecture is 8-bit. So here goes: I am using 8.8 notation, i.e., 8 bits for the integer part and 8 for the fraction. I have A78, which is 10.468. I take its two's…
user1343318
  • 2,093
  • 6
  • 32
  • 59
5
votes
2 answers

Is there a way to force PMULHRSW to treat 0x8000 as 1.0 instead of -1.0?

To process 8-bit pixels, to do things like gamma correction without losing information, we normally upsample the values, work in 16 bits or whatever, and then downsample them to 8 bits. Now, this is a somewhat new area for me, so please excuse…
Alex
  • 846
  • 6
  • 16
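For reference, PMULHRSW computes, per signed 16-bit lane, roughly ((a*b) >> 14 + 1) >> 1 and keeps the low 16 bits, so 0x8000 * 0x8000 wraps back to 0x8000 by definition. A scalar C model of that arithmetic (not the intrinsic itself), showing why -1.0 * -1.0 comes back as -1.0:

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar model of one PMULHRSW lane: Q15 multiply with rounding,
 * keeping only the low 16 bits of the result (as the instruction does). */
static int16_t mulhrs(int16_t a, int16_t b) {
    int32_t t = ((int32_t)a * (int32_t)b) >> 14;  /* drop all but 1 extra bit */
    t = (t + 1) >> 1;                             /* round to nearest         */
    return (int16_t)t;                            /* wraps for 0x8000*0x8000  */
}

int main(void) {
    printf("%d\n", mulhrs(0x4000, 0x4000));        /*  0.5 *  0.5 ->  8192 (0x2000)   */
    printf("%d\n", mulhrs(INT16_MIN, INT16_MIN));  /* -1.0 * -1.0 -> -32768, not +1.0 */
}
```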
5
votes
2 answers

Reproducibility of floating point operation result

Is it possible for a floating-point arithmetic operation to yield different results on different CPUs? By CPUs I mean all of x86 and x64, and by different results I mean even a single least significant bit being different. I need to know if…
user1316208
  • 667
  • 1
  • 5
  • 12