My query is: does declaring a variable inside a loop add any overhead for the compiler, or is declaring it once outside the loop and re-initializing it on every iteration the better option, performance-wise, for loops with over a million iterations?

For example, consider these two snippets:
int i;
for (i = 0; i < 1000000; i++)
{
    int k;
    // some code that uses k and displays the computed results
}
vs.
int i;
int k;
for (i = 0; i < 1000000; i++)
{
    k = 0;
    // read k from the user as usual, compute, and display the results
}
Here my main concern is not just the time taken by the two snippets. Looking at the bigger picture, the variable k is still accessed a million times in the second loop as well. So if I look at the compiled code for the same, whether on a RISC or a CISC processor, is "declaring and using" the same as "accessing and using" from the processor's point of view? Does declaring k inside the loop add work on top of the accesses that happen anyway? Which of the two methods is more efficient?