I've read descriptions online of big and little endian. However, they all read basically the same way and I'm still confused about the actual implementation regarding the "most" and "least" significant bytes. I understand that little endian stores the least significant byte first and big endian stores the most significant byte first, but I'm unclear on what "most" and "least" significant actually mean. I think it would help me to understand if I use an actual example, which I will put forth here:
I have an integer value: 12345
If I convert it to hex using the Windows calculator, I get: 3039 (basically a two-byte value). Is the value 3039 showing the bytes of the integer 12345 stored as little or big endian, and how do I determine this based on the value?
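To make it concrete, here's a small C sketch I put together (the variable names are just mine) that I think would show me how the two bytes are actually laid out in memory on my machine. Am I right that on a little-endian machine (like x86) it would print 0x39 then 0x30, and on a big-endian machine it would print 0x30 then 0x39?

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t value = 12345;   /* 0x3039 */

    /* Look at the raw bytes of the value, lowest memory address first. */
    unsigned char *bytes = (unsigned char *)&value;
    for (size_t i = 0; i < sizeof value; i++) {
        printf("byte %zu: 0x%02X\n", i, bytes[i]);
    }
    return 0;
}
```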