I'm wondering how you calculate how a function's running time changes when you double the input size. I'm specifically referring to the famous Algorithm Design practice problems.
Example Problem Questions Here
The solutions: Solutions
At first, it looked like he just plugged the values into the function: n^3 becomes (2n)^3, which is 8n^3, so the algorithm runs 8 times slower.
Where I begin getting confused is with nlogn and 2^n. Is there a certain trick I am missing to perform this computation, or is it just mathematical substitution? I've read through the chapter in his book and can't seem to find an explanation.
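To make the question concrete, here is the numerical check I tried for the substitution idea (a quick Python sketch of my own; the `growth_factor` helper is my name, not anything from the book):

```python
import math

def growth_factor(f, n):
    """How much slower f gets when the input size doubles: f(2n) / f(n)."""
    return f(2 * n) / f(n)

n = 1000
cubic = growth_factor(lambda n: n ** 3, n)            # (2n)^3 / n^3 = 8, for any n
nlogn = growth_factor(lambda n: n * math.log2(n), n)  # slightly more than 2; depends on n
expo = growth_factor(lambda n: 2 ** n, 20)            # 2^(2n) / 2^n = 2^n, so the ratio itself grows with n

print(cubic, nlogn, expo)
```

The n^3 case always gives exactly 8, but the nlogn ratio changes with n, and the 2^n ratio isn't even a constant, which is what makes me think plain substitution can't be the whole story.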