13
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int power(int first,int second) {
    int counter1 = 0;
    long ret = 1;

    while (counter1 != second){
        ret *= first;
        counter1 += 1;
    }
    return ret;
}


int main(int argc,char **argv) {

    long one = atol(argv[1]);
    long two = atol(argv[2]);
    char word[30];
    long finally;

    printf("What is the operation? 'power','factorial' or 'recfactorial'\n");
    scanf("%20s",word);

    if (strcmp("power",word) == 0){
        finally = power(one,two);
        printf("%ld\n",finally);
        return 0;
    } 

}

This function is intended to do the "power of" operation, like on a calculator, so if I write ./a.out 5 3 it will give me 5 to the power of 3 and print out 125.

The problem is, in cases where the numbers are larger, like ./a.out 20 10 (20 to the power of 10), I expect to see the result 1.024 x 10^13, but it instead outputs 797966336.

What is the cause of the current output I am getting?

Note: I assume this has something to do with atol() and the long data type. Are these not big enough to store the information? If not, any idea how to make it run for bigger numbers?

OJFord
Charana
  • What are `INT_MAX` and `LONG_MAX` defined as for your program if you include `limits.h`? – Andrew Henle Oct 02 '15 at 11:04
  • `4'294'967'295` is the maximum with `uint32_t`. `uint32_t` cannot hold `200'000'000'000'000'000'000` – Jarod42 Oct 02 '15 at 11:06
  • Why `scanf("%20s",word);` instead of `scanf("%29s",word);`? – Spikatrix Oct 02 '15 at 11:08
  • @Jarod42: Where do you see `uint32_t`? – too honest for this site Oct 02 '15 at 12:16
  • @Olaf: Size of `int` may vary. A common size is 32 bits, so I used the fixed-size type to spot a problem when `int` is 32 bits. But indeed, there is no `uint32_t` in OP's code. – Jarod42 Oct 02 '15 at 12:20
  • @Jarod42: 16 bit `int` is still more common. And `long` is also most commonly 32 bits. – too honest for this site Oct 02 '15 at 13:20
  • @Olaf: I posit that claiming that 16-bit `int` is still "more common" than 32-bit `int` is ludicrous. However, we do still see it quite a lot from time travellers (that is to say, Turbo C++ programmers). – Lightness Races in Orbit Oct 02 '15 at 13:23
  • @LightnessRacesinOrbit: By far most CPUs are still 8 and 16 bit. You completely forget about the embedded world. – too honest for this site Oct 02 '15 at 13:32
  • @Olaf: I don't pretend to know which one is the most common. But anyway, if a number doesn't fit into `uint32_t`, it doesn't fit into `uint16_t` either. :-) – Jarod42 Oct 02 '15 at 13:41
  • @Jarod42: I did not say the contrary. Problem is that `long` is also often 32 bit only. (btw. OP uses signed, not unsigned) – too honest for this site Oct 02 '15 at 13:46
  • @Olaf: Forget about it? I work in it. – Lightness Races in Orbit Oct 02 '15 at 14:35
  • @LightnessRacesinOrbit: Hmm, so it is even stranger that you think 32 bit is more common (I will likely agree in some years, however, but not for now, and I wish you were right already). – too honest for this site Oct 02 '15 at 14:38
  • @Olaf: My point stands: you are exaggerating. Of course there are still 8-bit and 16-bit CPUs in the wild, but to suggest that they are in the main "most common" is ridiculous! Consider that even in embedded tech, the consumer and military markets have mostly moved on to 32-bit and 64-bit CPUs. In civil industry not so much yet. – Lightness Races in Orbit Oct 02 '15 at 14:42
  • @LightnessRacesinOrbit: Just think about the billions of smart-cards. Most of them still use 8 or 16 bit CPUs. Then all hidden controllers: fridges, washing machines, even TVs for specific functions. Then you have the whole world of DSPs which are still often 16 bit (audio processors, etc.). Automotive also uses many small CPUs, e.g. for LIN endpoints and very high temperature. PC keyboards, mice, ... And there are still many new(!) industrial control projects which use 8 or 16 bit CPUs. (I did not say that I support this; yet there are cost and reliability reasons to use larger die-structures). – too honest for this site Oct 02 '15 at 14:51
  • @Olaf: Ah I see — assuming for a moment that we can call all those things "CPUs" (which I vehemently dispute, but let's leave that aside), you were talking about raw quantity of manufactured units. I was talking about quantity of models... which is really what we consider when we weigh up the pros and cons of portable programming! – Lightness Races in Orbit Oct 02 '15 at 14:56
  • @LightnessRacesinOrbit: You seem to have a quite uncommon definition of "CPU". Regarding "models": the 8/16 bit market also provides many more variants. There are not that many 32/64 bit CPU families, not even if counting family members separately. – too honest for this site Oct 02 '15 at 15:05
  • @Olaf: Okay, reasonable: so let's at least agree that the majority of computer programming work worldwide no longer takes place on 8-bit or 16-bit chipsets? – Lightness Races in Orbit Oct 02 '15 at 15:42
  • @LightnessRacesinOrbit: Hmm, sorry for being nit-picky on that. The last statistics I read were ~2-3 (IIRC) years old and still not near 50/50, but showed a clear tendency shifting to 32 bits. As industry is quite conservative about using "something new" and the Cortex-M0 (which makes the vast majority of 8/16 replacements) _is_ quite different from the older MCU-CPUs, I suspect it is still not on par. However, as we both seem to have no recent numbers, why not just leave it at 50/50 for now? And restart the discussion in 2-5 years again ;-) - fair? – too honest for this site Oct 02 '15 at 17:48
  • I think the argument is moot when "Most commonly used CPU" is defined as "CPU that most people program with", because for the purpose of this site, the "use" of a CPU is writing programs for it. There's probably 1,000,000 programs my PC can run, but most embedded applications just run one program written by a very small team. Anyway, interesting discussion. – JPhi1618 Oct 02 '15 at 17:53
  • @JPhi1618: Depends. Embedded MCUs are sold in volumes the whole PC-CPU industry can only dream of. And if you refer to professional programmers, I'm still not sure that holds true. (I really have not the slightest idea if there are more embedded programmers or more PC-alike programmers.) The teams are not necessarily **that** small, and do not forget that in a single car, e.g., there are dozens of MCUs nowadays, even in something like the VW Polo or Golf. – too honest for this site Oct 02 '15 at 17:58
  • Right, my point is that the x86 32-bit architecture has been around a long time and people write new programs for it every day. Re-use is very, very high. Going by sales or device numbers isn't a good metric because the MCUs in a car are going to be designed and programmed once (maybe small updates here and there to bypass emissions testing...), and then end up in a million cars, never to be touched again (by a programmer). How long till this gets moved to chat? – JPhi1618 Oct 02 '15 at 18:06
  • @JPhi1618: But there is a plethora of different devices and applications. Just count the number of small MCUs in your household and garage. And regarding PC software: sure, high reuse, so the same program runs on millions of PCs. Isn't that the same as for MCUs then? For the latter, however, you often have to rewrite code after a small change in the underlying hardware (e.g. a new MCU), while for the PC the same app still runs on the next CPU generation. It is not as simple as you state. (I would have moved it already to chat, but the message disappeared now). – too honest for this site Oct 02 '15 at 19:31
  • @Olaf The point is that while there are billions of devices out there, the amount of code written for them is only a fraction of general-purpose code. Hell, go and check how many people look for embedded C devs and how many for app developers. Then most general-purpose programs are orders of magnitude larger (by necessity, really). And in this case of a clear beginner writing to the console, there's not even any contest about the odds of this being run on x86. – Voo Oct 02 '15 at 21:52

6 Answers

23

Sure, your inputs are long, but your power function takes and returns int! Apparently, that's 32-bit on your system … so, on your system, 1.024×10^13 is more than int can handle.

Make sure that you pick a type that's big enough for your data, and use it consistently. Even long may not be enough — check your system!
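For illustration only (not part of the original answer), a minimal sketch that uses unsigned long long consistently for the parameters, the accumulator, and the return value; the standard guarantees it is at least 64 bits, which is plenty for 20^10:

#include <stdio.h>

/* One type everywhere: parameters, accumulator, return value. */
unsigned long long power_ull(unsigned long long base, unsigned long long exp) {
    unsigned long long ret = 1;
    while (exp-- > 0)
        ret *= base;   /* unsigned: wraps instead of UB, and only past 2^64 - 1 */
    return ret;
}

int main(void) {
    printf("%llu\n", power_ull(20, 10));   /* prints 10240000000000 */
    return 0;
}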

Lightness Races in Orbit
  • Note that on Windows `long` is also 32-bit. You would want to use a `long long` or a floating point type to work with numbers as large as your example. Here is a [MSDN page](https://msdn.microsoft.com/en-us/library/s3f49ktz.aspx) with more info – JPhi1618 Oct 02 '15 at 13:46
  • @JPhi1618: Fair point (floating point probably isn't appropriate though) – Lightness Races in Orbit Oct 02 '15 at 14:35
  • @JPhi1618 Agree about using `long long` or even `uintmax_t`. A detail about "Windows long is also 32-bit": having used 64-bit `long` on Windows, whether `long` is 32 bits, 64 bits or whatever is certainly dependent on the compiler (e.g. VS), which in turn is influenced by the OS/CPU. – chux - Reinstate Monica Oct 02 '15 at 15:59
  • @chux, totally valid. I should have phrased as "when using Visual Studio for Windows applications". Of course there are other compilers and details to consider. The OS doesn't have a concept of 'long' or 'int' after its compiled, so that was a bad way to phrase it. – JPhi1618 Oct 02 '15 at 16:06
  • @JPhi1618: Yes and no; CPUs do have word sizes and variants, and a vaguely C-based OS (e.g. Linuxes, Windows) will know at the OS layer what C types it is mapping to those word sizes. But I'm nitpicking :) – Lightness Races in Orbit Oct 02 '15 at 19:17
9

First and foremost, you need to change the return type and the parameter types of power() from int to long. Otherwise, on a system where long and int have different sizes,

  1. The long arguments you pass in may get truncated to int.

  2. The returned value will be converted to int before returning, which can truncate the actual value.

After that, 1.024×10^13 (10,240,000,000,000) cannot be held by an int or a long if they are 32 bits wide. You need a wider data type, like long long.
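As a sketch of both changes together (illustrative only, with hypothetical names; the original answer gives no code), using long long end to end:

#include <stdio.h>
#include <stdlib.h>

/* long long parameters and return type: no truncating conversions on the
   way in or out, and at least 64 bits of range for the result. */
long long power_ll(long long first, long long second) {
    long long ret = 1;
    for (long long i = 0; i < second; i++)
        ret *= first;
    return ret;
}

int main(int argc, char **argv) {
    if (argc < 3) return 1;
    long long one = atoll(argv[1]);        /* C99 counterpart of atol() */
    long long two = atoll(argv[2]);
    printf("%lld\n", power_ll(one, two));  /* ./a.out 20 10 -> 10240000000000 */
    return 0;
}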

John Kugelman
Sourav Ghosh
4

one and two are long.

long one = atol(argv[1]);
long two = atol(argv[2]);

You call this function with them:

int power(int first, int second);

But your function takes int, so there is an implicit conversion here, and it returns int. So now your longs are ints, which causes undefined behaviour (see the comments).
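To see the truncation concretely, here is a small standalone demonstration (illustrative only, assuming 64-bit long and 32-bit int, as on a typical LP64 system):

#include <stdio.h>

/* The long argument is implicitly converted to int at the call. */
int takes_int(int x) {
    return x;
}

int main(void) {
    long big = 10240000000000L;   /* 1.024 * 10^13, needs more than 32 bits */
    /* Converting an out-of-range value to a signed type is
       implementation-defined; typically you get the low 32 bits,
       which here is exactly 797966336. */
    printf("%d\n", takes_int(big));
    return 0;
}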

MokaT
  • It will overflow on 32 bit. –  Oct 02 '15 at 11:03
  • No, it IS in fact undefined behaviour! Signed integer overflow is undefined behaviour. Unsigned integer overflow causes wraparound. Since a `long` is used instead of an `unsigned long`, undefined behaviour results in this case. (see http://stackoverflow.com/a/3679123/31945) – Artelius Oct 02 '15 at 21:41
2

Quick answer:

The values passed to your power function get implicitly converted.

Change the function parameters to a type other than int that can hold larger values; one possible type would be long (a sketch follows below).

  • The input values get converted and truncated to match the parameter types of your function.

  • The result of the computation in the body of the function is again converted to match the return type, in your case int, which is not able to handle the size of the values.

Note: as the more experienced members have pointed out, the exact type sizes are machine-specific; on your machine, int is simply not wide enough to hold these values.
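A minimal sketch of that change (illustrative, not from the original answer), using the fixed-width int64_t so the result does not depend on the platform's int/long sizes:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* int64_t for parameters and return value: guaranteed 64 bits wherever
   the type exists, regardless of the width of int or long. */
int64_t power_i64(int64_t base, int64_t exp) {
    int64_t ret = 1;
    while (exp-- > 0)
        ret *= base;
    return ret;
}

int main(void) {
    printf("%" PRId64 "\n", power_i64(20, 10));   /* prints 10240000000000 */
    return 0;
}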



Ziezi
2

The code mixes int and long, and hopes for an answer that exceeds the range of long.

The answer is simply the result of trying to put 10 pounds of potatoes in a 5-pound sack.


... idea how to make it run for bigger numbers.

  1. Use the widest integer available. Examples: uintmax_t, unsigned long long.

With C99 onward, normally the greatest representable integer will be UINTMAX_MAX.

#include <stdint.h>

uintmax_t power_a(long first, long second) {
  long counter1 = 0;
  uintmax_t ret = 1;

  while (counter1 != second){  // number of iterations could be in the billions
    ret *= first;
    counter1 += 1;
  }
  return ret;
}

But let us avoid problematic behavior with negative numbers and improve the efficiency of the calculation from linear to logarithmic (exponentiation by squaring).

// return x raised to the y power
uintmax_t pow_jululu(unsigned long x, unsigned long y) {
  uintmax_t z = 1;
  uintmax_t base = x;
  while (y) {   // max number of iterations is the bit width of y, e.g. 64
    if (y & 1) {
      z *= base;
    }
    y >>= 1;
    base *= base;
  }
  return z;
}

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc,char **argv) {
    assert(argc >= 3);
    unsigned long one = strtoul(argv[1], 0, 10);
    unsigned long two = strtoul(argv[2], 0, 10);
    uintmax_t finally = pow_jululu(one,two);
    printf("%ju\n",finally);
    return 0;
}

This approach has limits too. 1) z *= base can mathematically overflow for calls like pow_jululu(2, 1000). 2) base*base may mathematically overflow in the uncommon situation where unsigned long is more than half the width of uintmax_t. 3) Some other nuances too. (A guarded variant that detects overflow is sketched after the list below.)

  2. Resort to other types, e.g. long double or arbitrary-precision arithmetic. This is likely beyond the scope of this simple task.
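For the overflow limits in 1) and 2), a guarded variant is sketched below (illustrative only, with hypothetical names; not from the original answer). It tests each multiplication against UINTMAX_MAX before performing it and reports failure instead of silently wrapping:

#include <stdbool.h>
#include <stdint.h>

// Would a * b exceed UINTMAX_MAX?
static bool mul_overflows(uintmax_t a, uintmax_t b) {
    return b != 0 && a > UINTMAX_MAX / b;
}

// Returns true and stores x**y in *out, or false if it does not fit.
bool pow_checked(unsigned long x, unsigned long y, uintmax_t *out) {
    uintmax_t z = 1;
    uintmax_t base = x;
    while (y) {
        if (y & 1) {
            if (mul_overflows(z, base)) return false;
            z *= base;
        }
        y >>= 1;
        if (y) {   // square only if more rounds remain
            if (mul_overflows(base, base)) return false;
            base *= base;
        }
    }
    *out = z;
    return true;
}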
chux - Reinstate Monica
  • Good tip about `uintmax_t`, I was about to write it as well. Anyway, C99's got `strtoumax` which is specifically for `uintmax_t`. – edmz Oct 02 '15 at 16:50
  • @chux woops, sorry, I thought it was C++ not C. –  Oct 03 '15 at 16:24
-1

You could use a long long, which is 8 bytes in length instead of the 4-byte length of long and int.

long long will provide you values between –9,223,372,036,854,775,808 and 9,223,372,036,854,775,807. This, I think, should just about cover every value you may encounter just now.
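A minimal sketch along those lines (illustrative only, assuming a 64-bit long long, which is the minimum the standard guarantees):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc < 3) return 1;
    long long base = strtoll(argv[1], 0, 10);   /* parses straight into long long */
    long long exp  = strtoll(argv[2], 0, 10);
    long long ret  = 1;
    while (exp-- > 0)
        ret *= base;
    printf("%lld\n", ret);   /* ./a.out 20 10 -> 10240000000000 */
    return 0;
}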

Stephen Ross
  • C++ does not tell us how long a `long long`, a `long` or an `int` shall be. That is implementation specific. It can (and _does_) vary across systems. This may seem like nitpicking, but it's actually a really crucial thing to understand. This question is a perfect example of that! :) – Lightness Races in Orbit Oct 02 '15 at 11:04
  • @LightnessRacesinOrbit that is true. But it should be at least 64 bits, as defined by ISO C99. But as with all things C++ it entirely depends on the compiler. – Stephen Ross Oct 02 '15 at 11:08
  • Most things in C++ do not depend on the compiler whatsoever. – Lightness Races in Orbit Oct 02 '15 at 11:21