
I was searching for the answer to this question but was not able to find one.

What is the time complexity of new Array(n).fill('apple')?

For n=5, this will create an array with 5 'apple' strings: ['apple', 'apple', 'apple', 'apple', 'apple']

My assumption is that new Array(5) will first create an array with 5 empty slots and then iterate through it to put 'apple' in each slot. In that case, would the time complexity be O(n), where n is the length of the array?

However, I have also heard some say that since it's a built-in method, it will only take O(1).

Bergi
danny jung
    First off, there's no free lunch with filling an N length array. It will be O(n) at some level. It might be a really fast O(n) if it's a very efficient operation with native code, but it's going to be proportional to the length of the array at some level. Second, the actual performance all depends upon the implementation and the ONLY way for you to know anything about the actual performance is to measure/benchmark. That's an integral part of answering ANY performance question. And finally, why do you want to know? What would you do differently based on knowing this? What is the real question? – jfriend00 Sep 26 '21 at 14:50
  • your assumption is right, `Array(5)` creates an array of undefined's, `.fill('apple')` fills them; calling it like `Array(5).fill('apple')` won't magically make it do something else – Lawrence Cherone Sep 26 '21 at 14:53
  • @LawrenceCherone It doesn't create an array "*of undefined's*". The OP's description "*an array with 5 empty slots*" is by far more accurate. – Bergi Sep 26 '21 at 15:41
  • "_since it's a built-in method, it will take only O(1)_" – this generalization is wrong. There are plenty of "built-in" methods that are O(n): `Array.prototype.map`, `Array.prototype.filter`, `Array.prototype.find`, among many more. – Mulan Sep 26 '21 at 15:51
  • @Bergi semantics https://playcode.io/815709/ – Lawrence Cherone Sep 26 '21 at 16:21
  • @LawrenceCherone Yes, and it's important to get semantics right (which the console.log in your link does not) :-D – Bergi Sep 26 '21 at 16:22
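
The distinction debated in the comments can be checked directly. A minimal sketch (runnable in Node.js) showing that `new Array(5)` produces holes rather than `undefined` values, and that `.fill()` turns them into real elements:

```javascript
// new Array(5) creates a sparse array: length 5, but no indexed properties.
const sparse = new Array(5);
console.log(sparse.length);       // 5
console.log(0 in sparse);         // false — index 0 is a hole, not undefined
console.log(Object.keys(sparse)); // [] — no own indexed keys exist yet

// .fill() writes to every index from 0 to length-1, so the slots now exist.
const filled = new Array(5).fill('apple');
console.log(0 in filled);                // true
console.log(Object.keys(filled).length); // 5
```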

2 Answers


With an array of only 5 strings, you can safely ignore algorithmic complexity. What matters here is that you are filling the array with the same value, and for that

Array(5).fill('apple')

is the more elegant solution, purely for code-style reasons.

P.S. O(1) + O(n) => O(n)
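
To see why the O(n) bound holds: per the language spec, `Array.prototype.fill` walks every index from 0 to length-1 and writes the value once per slot. A hedged sketch of that loop (not the engine's actual native code, which may have faster constant factors but is still linear in n):

```javascript
// Simplified model of Array.prototype.fill: one write per element → O(n).
function fillSketch(arr, value) {
  for (let i = 0; i < arr.length; i++) {
    arr[i] = value;
  }
  return arr;
}

console.log(fillSketch(new Array(5), 'apple'));
// [ 'apple', 'apple', 'apple', 'apple', 'apple' ]
```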

Jackkobec

The time complexity should be O(n), as the work scales linearly with the length of the array.


To test out that theory, I ran some benchmarks just to see if it appears to actually be O(n).

For larger arrays, nodejs shows approximately O(n) when you measure it (see code and results below).

I ran this test with five sizes for the array.

const sizes = [1000, 10000, 100000, 1000000, 1000000];

I then timed how long each takes with process.hrtime.bigint() and output the total time for each array size in nanoseconds along with the per-element time.

This is the output I get:

Array sizes:
 [ 1000, 10000, 100000, 1000000, 1000000 ]
Ns per pass:
 [ 36700n, 48600n, 553000n, 5432700n, 5268600n ]
Ns per element:
 [ 36.7, 4.86, 5.53, 5.4327, 5.2686 ]

You can see that the last three sizes are very close to O(n) with around a 5% variation from a fixed time per element (which would be exactly O(n)). The first one is way off and the second one is slightly faster per element than the others, though in the same general ball park as the last three.

The very first pass must have some sort of interpreter overhead (perhaps optimizing the code path) or perhaps just the overall overhead of the operation is so much more than actually filling the array that it distorts what we're trying to measure.

Here's the code:

class Measure {
    start() {
        this.startTime = process.hrtime.bigint();
    }
    end() {
        this.endTime = process.hrtime.bigint();
    }
    deltaNs() {
        return this.endTime - this.startTime;
    }

    deltaNumber() {
        return Number(this.deltaNs());
    }
}

const sizes = [1000, 10000, 100000, 1000000, 1000000];
const benchmark = new Measure();
const times = sizes.map(size => {
    benchmark.start();
    new Array(size).fill('apple');   // the operation being measured
    benchmark.end();
    return benchmark.deltaNs();      // elapsed nanoseconds for this size
});
console.log('Array sizes:\n', sizes);
console.log('Ns per pass:\n', times);
let factors = times.map((t, index) => {
    return Number(t) / sizes[index];
});
console.log('Ns per element:\n', factors);
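
One way to reduce the first-pass distortion described above is to run an untimed warm-up pass for each size before measuring, so that JIT compilation and allocator warm-up don't land in the first sample. A minimal self-contained sketch of that idea:

```javascript
// Warm-up variant: each size is filled once untimed before the measured pass.
const warmSizes = [1000, 10000, 100000, 1000000];

const nsPerElement = warmSizes.map(size => {
  new Array(size).fill('apple'); // warm-up pass, result discarded
  const start = process.hrtime.bigint();
  new Array(size).fill('apple'); // measured pass
  const end = process.hrtime.bigint();
  return Number(end - start) / size; // nanoseconds per element
});

console.log(nsPerElement);
```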
jfriend00
  • I've run this example 3 times and the results were different each time: Array sizes: [ 1000, 10000, 100000, 1000000, 1000000 ] Ns per pass: [ 55921n, 41718n, 937870n, 18406736n, 15594361n ] 1. Ns per element: [ 55.921, 4.1718, 9.3787, 18.406736, 15.594361 ] 2. Ns per element: [ 40.731, 9.852, 10.83491, 13.539558, 64.740058 ] 3. Ns per element: [ 24.96, 3.9618, 30.96081, 16.86574, 20.271241 ] Good example, but the precision is quite variable – Jackkobec Sep 27 '21 at 07:11
  • I think OP was asking for the time complexity, not for a benchmark – Bergi Sep 27 '21 at 07:49
  • @Bergi - But besides the theory for what the time complexity should be `O(N)`, doesn't a benchmark offer you an idea of what the time complexity is in real life? – jfriend00 Sep 27 '21 at 20:14
  • @jfriend00 Only then it's called "performance", "speed" or "running time" not "complexity" :-) – Bergi Sep 27 '21 at 20:24
  • @Bergi - So, to you "complexity" is entirely theoretical and one never tests at a bunch of different sizes to see if the theoretical complexity is actually correct? – jfriend00 Sep 27 '21 at 20:42
  • @jfriend00 I'm not saying you shouldn't test or benchmark your code, but it doesn't prove the computed complexity. – Bergi Sep 27 '21 at 20:52