The time complexity should be O(n), since it should scale linearly with the length of the array. To test that theory, I ran some benchmarks to see whether it actually appears to be O(n). For larger arrays, nodejs shows approximately O(n) behavior when you measure it (see code and results below).
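Conceptually, fill() has to write every slot in the array once, which is why linear scaling is the natural expectation. Roughly speaking, it has to do the equivalent of this loop (just an illustrative sketch, not how the engine actually implements it):

// What arr.fill(value) has to do in principle: one write per element,
// so the work grows linearly with arr.length.
function naiveFill(arr, value) {
    for (let i = 0; i < arr.length; i++) {
        arr[i] = value;
    }
    return arr;
}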
I ran this test app with five array sizes (the largest size is run twice):
const sizes = [1000, 10000, 100000, 1000000, 1000000];
I timed each pass with process.hrtime.bigint(), then output the total time for each array size in nanoseconds and the per-element time.
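For reference, process.hrtime.bigint() returns the current high-resolution time as a BigInt number of nanoseconds, so a single timing looks like this (a minimal standalone sketch; the full benchmark below just wraps this in a small class):

const t0 = process.hrtime.bigint();
new Array(1000000).fill('apple');
const t1 = process.hrtime.bigint();
console.log(`fill took ${t1 - t0} ns`);  // BigInt subtraction, result in nanoseconds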
This is the output I get:
Array sizes:
[ 1000, 10000, 100000, 1000000, 1000000 ]
Ns per pass:
[ 36700n, 48600n, 553000n, 5432700n, 5268600n ]
Ns per element:
[ 36.7, 4.86, 5.53, 5.4327, 5.2686 ]
You can see that the last three runs are very close to O(n), with around a 5% variation from a fixed time per element (a perfectly fixed per-element time would be exactly O(n)). The first run is way off, and the second is slightly faster per element than the others, though in the same general ballpark as the last three.
The very first pass must incur some sort of interpreter overhead (perhaps optimizing the code path), or perhaps the fixed overhead of the operation is so much larger than the cost of actually filling a 1000-element array that it distorts what we're trying to measure.
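One way to probe that theory (a quick sketch I did not fold into the benchmark below) is to time the same small fill twice in a row; if the first call is much slower than the second, the extra cost is one-time setup/optimization rather than part of fill() itself:

const first = process.hrtime.bigint();
new Array(1000).fill('apple');
const afterFirst = process.hrtime.bigint();
new Array(1000).fill('apple');
const afterSecond = process.hrtime.bigint();
console.log('first call :', afterFirst - first, 'ns');   // includes any one-time overhead
console.log('second call:', afterSecond - afterFirst, 'ns');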
Here's the code:
// Simple timer wrapper around process.hrtime.bigint()
class Measure {
    // record the start time in nanoseconds (as a BigInt)
    start() {
        this.startTime = process.hrtime.bigint();
    }
    // record the end time in nanoseconds (as a BigInt)
    end() {
        this.endTime = process.hrtime.bigint();
    }
    // elapsed time as a BigInt number of nanoseconds
    deltaNs() {
        return this.endTime - this.startTime;
    }
    // elapsed time converted to a regular Number
    deltaNumber() {
        return Number(this.deltaNs());
    }
}
const sizes = [1000, 10000, 100000, 1000000, 1000000];
const benchmark = new Measure();

// time one fill() pass for each array size
const times = sizes.map(size => {
    benchmark.start();
    new Array(size).fill('apple');
    benchmark.end();
    return benchmark.deltaNs();
});

console.log('Array sizes:\n', sizes);
console.log('Ns per pass:\n', times);

// nanoseconds per element for each pass
let factors = times.map((t, index) => {
    return Number(t) / sizes[index];
});
console.log('Ns per element:\n', factors);
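To put a number on "around a 5% variation", you could append something like this to the script above (my addition, not part of the original run):

// spread of the per-element times for the last three (largest) runs
const lastThree = factors.slice(-3);
const spread = (Math.max(...lastThree) - Math.min(...lastThree)) / Math.min(...lastThree);
console.log('Spread across last three per-element times:', (spread * 100).toFixed(1) + '%');

For the output above, that works out to (5.53 - 5.2686) / 5.2686, or roughly 5%, which is where the variation figure comes from.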