
Regardless of how a multiplication (or division) operation is implemented (i.e. whether it is a software function or a hardware instruction), it cannot be solved in O(1) time. For big values of n, the processor cannot even compute it with a single instruction.

In such algorithms, why are these operations considered constant, and not dependent on n?

for (i = 1; i <= n; i++) {
    j = n;
    while (j > 1)
        j = j / 3;    //constant operation
}
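To make the counting concrete (a Python sketch of the loop above; the helper name is made up for illustration): the inner loop divides j by 3 until it reaches 1, so it runs roughly log₃ n times, and the nested loops perform Θ(n log n) of these "constant" divisions in total.

```python
def count_divisions(n):
    """Count how many times the inner while-loop body runs
    for a single outer iteration starting from j = n."""
    j, steps = n, 0
    while j > 1:
        j = j // 3   # the operation the question treats as O(1)
        steps += 1
    return steps

# The full nested loop performs n * count_divisions(n) divisions,
# i.e. Theta(n log n) operations *if* each division counts as one step.
```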
  • It is up to the underlying architecture how this is implemented. In most practical cases it is irrelevant and can be bounded by a large enough constant. In fact, the constant could be the maximal possible value, in which case the operation is trivially bounded. In any case, it is usually not relevant to the algorithm being analyzed and would only bias what we are really interested in. – Zabuzard Oct 29 '18 at 20:14
  • Typically, multiplication of native types is O(1) (constant time). Big integer types such as found in Java, .NET and other environments typically have higher complexity. See https://www.javaspecialists.eu/archive/Issue236.html, for example. – Jim Mischel Oct 29 '18 at 20:17
  • @JimMischel We are calculating the complexity for big values of n, so we should let n be as big as possible. Why, then, is multiplication by n constant? (As far as I know, it is usually assumed to be constant.) – Mehran Ghofrani Oct 29 '18 at 20:26
  • Just take it as an assumption (sometimes made) that shows the gap between theory and practice. From a theoretical view you could also question the constant complexity of addition, given physics: the speed of light, and an upper bound on information density per area/volume (before everything collapses into a black hole). – sascha Oct 29 '18 at 20:44
  • Please re-read my comment. If your `n` is a native type (like `long` or `double`), then the value is irrelevant. The system can multiply by 9324568078 as fast as it can multiply by 7. If `n` is some other, non-native type (like `BigInteger`), then multiplication has higher complexity, as described in the article I linked. – Jim Mischel Oct 29 '18 at 21:54
  • Arbitrary precision multiplication in a single machine step is a physically unrealistic loophole that adds a surprising amount of power to the machine presumed to have it - see https://cs.stackexchange.com/questions/76797/random-access-machines-with-only-addition-multiplication-equality. The small print of most theoretical machines outlaws this, typically by saying that numbers can't be presumed to get too big, or not providing multiplication, which is enough to keep numbers within a reasonable range. – mcdowella Oct 30 '18 at 05:49
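To illustrate the point made in the comments about non-native multiplication (a sketch only; real BigInteger implementations use faster algorithms such as Karatsuba): schoolbook multiplication of two d-digit numbers performs on the order of d² single-digit multiplications, which is why arbitrary-precision multiplication cannot be a single O(1) step.

```python
def schoolbook_mul_ops(a, b):
    """Multiply two non-negative integers digit by digit, counting the
    single-digit multiplications -- the 'basic operations' at this level."""
    digits_a = [int(c) for c in str(a)]
    digits_b = [int(c) for c in str(b)]
    ops = 0
    result = 0
    for i, x in enumerate(reversed(digits_a)):
        for j, y in enumerate(reversed(digits_b)):
            result += x * y * 10 ** (i + j)  # add partial product in place
            ops += 1
    return result, ops
```

Doubling the number of digits quadruples the operation count, so the cost grows with the size of the operands rather than staying constant.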

1 Answer


Time complexity is not a measure of time. It's a measure of "basic operations", which can be defined however you like. Often, any arithmetic operation is considered a basic operation. Sometimes (for example, when considering the time complexity of sorting algorithms, or hash table operations), the basic operations are comparisons. Sometimes, "basic operations" are operations on single bits (in which case j = j / 3 would have time complexity O(log j)).
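As a concrete contrast between the two cost models (a hypothetical Python sketch): treating j = j / 3 as one arithmetic operation costs exactly 1, while carrying it out by binary long division costs one step per bit of j, i.e. Θ(log j) bit operations.

```python
def div3_bitwise(n):
    """Divide n by 3 via binary long division, counting per-bit steps.
    Returns (quotient, steps); steps equals the bit length of n."""
    q, r, steps = 0, 0, 0
    for bit in bin(n)[2:]:        # most-significant bit first
        r = r * 2 + int(bit)      # shift the next bit into the remainder
        q = q * 2                 # shift the quotient
        if r >= 3:
            r -= 3
            q += 1
        steps += 1
    return q, steps
```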

The rules that tend to be followed are:

  • if you're talking about sorting or hash tables, basic operations are comparisons
  • if you're talking about any other practical algorithm, basic operations are arithmetic operations and assignments.
  • if you're talking about P/NP classes, basic operations are the number of steps of a deterministic Turing machine. (I think this is equivalent to bit operations).
  • if you're talking about practical algorithms as a complexity theory expert, you'll often assume that basic types have ~log n bits, and that basic operations are arithmetic operations and assignments on these ~log n bit words.
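The last convention, sometimes called the word-RAM (or transdichotomous) model, can be made concrete with a small sketch (the helper name is invented for illustration): with word size w ≈ log₂ n, any value up to n fits in one machine word, so arithmetic on it counts as one operation, while n² already spans two words.

```python
import math

def words_needed(value, n):
    """Number of w-bit machine words needed to store value, under the
    word-RAM convention that the word size is w = ceil(log2 n)."""
    w = max(1, math.ceil(math.log2(n)))
    return max(1, math.ceil(value.bit_length() / w))
```

For n = 1000 this gives w = 10, so any index or value up to n occupies a single word (one operation per arithmetic step), while n * n needs two words and would genuinely cost more than one word-operation.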
Paul Hankin