9

How do I divide an int by 100?

eg:

int x = 32894;
int y = 32894 / 100;

Why does this result in y being 328 and not 328.94?

Craig Johnston
  • You're doing integer divisions which are magically rounded. Try (32894.0/100.0). On a totally relevant note, how do you expect to store 328.94 in an integer data type? – sisve Mar 09 '11 at 07:08
  • @Ravi if the expected answer is 328.94, then ***do not*** use `double`... – Marc Gravell Mar 09 '11 at 07:12
  • @Marc: why can't `double` be used? – Craig Johnston Mar 09 '11 at 07:15
  • @Craig: The number 328.94 can't be represented exactly as a double. see the links at the end of my answer. The closest exact double is 328.93999999999999772626324556767940521240234375. – Jon Skeet Mar 09 '11 at 07:17
  • @Craig because floating point arithmetic (IEEE754) won't necessarily give you what you expect, due to how the numbers are represented/rounded. `decimal` uses base-10 arithmetic/rounding, which will preserve your crisp `.94`. – Marc Gravell Mar 09 '11 at 07:18
  • @Marc @Jon: That was a great point. Thanks for the pointers – rkg Mar 09 '11 at 07:25

5 Answers

19

When one integer is divided by another, the arithmetic is performed as integer arithmetic.

If you want it to be performed as float, double or decimal arithmetic, you need to cast one of the values appropriately. For example:

decimal y = ((decimal) x) / 100;

Note that I've changed the type of y as well - it doesn't make sense to perform decimal arithmetic but then store the result in an int. The int can't possibly store 328.94.

You only need to force one of the values to the right type, as then the other will be promoted to the same type - there's no operator defined for dividing a decimal by an integer, for example. If you're performing arithmetic using several values, you might want to force all of them to the desired type just for clarity - it would be unfortunate for one operation to be performed using integer arithmetic, and another using double arithmetic, when you'd expected both to be in double.

If you're using literals, you can just use a suffix to indicate the type instead:

decimal a = x / 100m; // Use decimal arithmetic due to the "m"
double b = x / 100.0; // Use double arithmetic due to the ".0"
double c = x / 100d; // Use double arithmetic due to the "d"
double d = x / 100f; // Use float arithmetic due to the "f"

As for whether you should be using decimal, double or float, that depends on what you're trying to do. Read my articles on decimal floating point and binary floating point. Usually double is appropriate if you're dealing with "natural" quantities such as height and weight, where any value will really be an approximation; decimal is appropriate with artificial quantities such as money, which are typically represented exactly as decimal values to start with.
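To make the contrast concrete, here's a small self-contained sketch (not part of the original answer) showing the three kinds of arithmetic side by side:

```csharp
using System;

class DivisionDemo
{
    static void Main()
    {
        int x = 32894;

        int i = x / 100;              // integer arithmetic: fractional part truncated
        decimal m = (decimal)x / 100; // decimal arithmetic: exact base-10 result
        double d = (double)x / 100;   // double arithmetic: nearest binary approximation

        Console.WriteLine(i); // 328
        Console.WriteLine(m); // 328.94
        Console.WriteLine(d); // 328.94
    }
}
```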

Jon Skeet
3

Because an int can only hold a whole number. Try this instead.

int x = 32894;
double y = x / 100.0;
Cameron
Øyvind Bråthen
3

328.94 is not an integer. Integer division truncates the fractional part; that is how it works.

I suggest you cast to decimal:

decimal y = 32894M / 100;

or with variables:

decimal y = (decimal)x / 100;
Marc Gravell
0

Because you're doing integer division. Add a period behind the 100 and you'll get a double instead.

When you divide two integers, the result is an integer. Integers can't hold decimal places, so the fractional part is simply discarded.
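A minimal sketch of that fix; note that `x / 100` (no period) would still truncate, because the integer division happens before any conversion to double:

```csharp
using System;

class Demo
{
    static void Main()
    {
        int x = 32894;

        double wrong = x / 100;   // integer division happens first (328), then converts to 328.0
        double right = x / 100.0; // 100.0 is a double, so the division keeps the fraction

        Console.WriteLine(wrong); // 328
        Console.WriteLine(right); // 328.94
    }
}
```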

mpen
0

It's a programming fundamental that int (integer) division works differently from float (floating-point) division.

If you want the .94, use float or double:

var num = 32894F / 100F;
Bonshington