Your question is about what happens in .NET at a very low level, and what actually happens once the code has been JIT-compiled and optimized may be different from what you expect. However, one way to approach the question is to look at the IL that the C# compiler generates (a complete, compilable version of both snippets follows the listings below if you want to reproduce them yourself):
string hi = "HelloWorld";
int length = hi.Length;
Console.WriteLine(length);
Console.WriteLine(length);
compiles to
ldstr "HelloWorld"
callvirt System.String.get_Length
dup
call System.Console.WriteLine
call System.Console.WriteLine
and
string hi = "HelloWorld";
Console.WriteLine(hi.Length);
Console.WriteLine(hi.Length);
(where I have removed the assignment to length because it is not used) compiles to
ldstr "HelloWorld"
dup
callvirt System.String.get_Length
call System.Console.WriteLine
callvirt System.String.get_Length
call System.Console.WriteLine
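If you want to reproduce these listings yourself, here is a complete, compilable version of both snippets (my own wrapper, not part of the original question). Compile it and inspect the output with a tool such as ildasm or ILSpy; the exact instructions you see may differ slightly from the simplified listings above depending on compiler version and on whether you build in Debug or Release mode (Debug builds, for example, add stloc/ldloc pairs for the local variables).

using System;

static class LengthDemo
{
    // Version 1: read the Length property once and cache it in a local.
    static void CachedLength()
    {
        string hi = "HelloWorld";
        int length = hi.Length;
        Console.WriteLine(length);
        Console.WriteLine(length);
    }

    // Version 2: read the Length property each time it is needed.
    static void PropertyTwice()
    {
        string hi = "HelloWorld";
        Console.WriteLine(hi.Length);
        Console.WriteLine(hi.Length);
    }

    static void Main()
    {
        CachedLength();
        PropertyTwice();
    }
}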
The first version seems slightly more efficient than the second because there is only one call to System.String.get_Length, and both versions use one extra stack slot (the dup). However, the JIT compiler could conceivably inline that call, reducing it to reading a value from a memory location through one level of indirection, and then there is hardly any difference.
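If you want to convince yourself that the difference is negligible in practice, here is a rough Stopwatch-based sketch (my own illustration, not from the original question; a proper measurement would use a benchmarking library such as BenchmarkDotNet). On my understanding of the JIT, both variants should take essentially the same time once the methods have been compiled:

using System;
using System.Diagnostics;

static class LengthBenchmark
{
    const int Iterations = 100_000_000;

    // Reads the Length property once per iteration and reuses the local.
    static long CachedLength(string s)
    {
        long sum = 0;
        for (int i = 0; i < Iterations; i++)
        {
            int length = s.Length;
            sum += length;
            sum += length;
        }
        return sum;
    }

    // Reads the Length property twice per iteration.
    static long PropertyTwice(string s)
    {
        long sum = 0;
        for (int i = 0; i < Iterations; i++)
        {
            sum += s.Length;
            sum += s.Length;
        }
        return sum;
    }

    static void Main()
    {
        string hi = "HelloWorld";

        // Warm up so the JIT has compiled both methods before timing.
        CachedLength(hi);
        PropertyTwice(hi);

        var sw = Stopwatch.StartNew();
        long cached = CachedLength(hi);
        sw.Stop();
        Console.WriteLine($"cached length:  {sw.ElapsedMilliseconds} ms (checksum {cached})");

        sw.Restart();
        long twice = PropertyTwice(hi);
        sw.Stop();
        Console.WriteLine($"property twice: {sw.ElapsedMilliseconds} ms (checksum {twice})");
    }
}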
Notice that the .NET string type stores the length of the string in the string object itself, so there is no need to count the characters in the string: the length is known when the string is created.
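You can see the effect of that design directly: reading Length on a very long string costs the same as on a short one, whereas actually counting the characters scales with the string's size. A small illustration (again my own, just to demonstrate the point):

using System;
using System.Diagnostics;

static class StoredLengthDemo
{
    static void Main()
    {
        // A 100-million-character string; its length is recorded when it is created.
        string big = new string('a', 100_000_000);

        var sw = Stopwatch.StartNew();
        int viaProperty = big.Length;            // just reads the stored value
        sw.Stop();
        Console.WriteLine($"Length property: {viaProperty} in {sw.Elapsed.TotalMilliseconds} ms");

        sw.Restart();
        int counted = 0;
        foreach (char c in big)                  // walks every character
        {
            counted++;
        }
        sw.Stop();
        Console.WriteLine($"Manual count:    {counted} in {sw.Elapsed.TotalMilliseconds} ms");
    }
}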