You're not measuring what you think you're measuring; that's why you're getting "surprising" results.
You're timing how long it takes to format and print a string, rather than "how fast a for loop is".
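To make the difference concrete, here's a minimal Python sketch (the names `loop_only`, `loop_with_print`, and the iteration count `N` are arbitrary choices, not from your benchmark). It times the bare loop and the same loop with per-iteration printing, with stdout redirected to an in-memory buffer so the output doesn't flood the terminal:

```python
import io
import timeit
from contextlib import redirect_stdout

N = 10_000

def loop_only():
    # the bare loop: this is what "how fast a for loop is" should measure
    total = 0
    for i in range(N):
        total += i
    return total

def loop_with_print():
    # same loop, but each iteration now formats and writes a string;
    # stdout goes to an in-memory buffer just to keep the run quiet
    total = 0
    buf = io.StringIO()
    with redirect_stdout(buf):
        for i in range(N):
            total += i
            print(total)
    return total

t_loop = timeit.timeit(loop_only, number=5)
t_print = timeit.timeit(loop_with_print, number=5)
print(f"loop only:  {t_loop:.4f}s")
print(f"with print: {t_print:.4f}s")
```

On any machine the second number will dwarf the first, because string formatting and I/O dominate the cost of an integer addition.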
Also, keep in mind that the time it takes to print something depends not only on how the code is compiled or interpreted, but also on where exactly you're printing: I/O performance depends on things outside your program (terminal buffering, the OS, maybe a physical device, etc.).
Finally, if you tried to microbenchmark a loop that does absolutely nothing, a compiler may detect that and optimize the loop away entirely, leaving you measuring nothing at all.
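The usual defence is to make the loop produce a value and then consume it. CPython itself won't delete a dead loop, but an optimizing JIT (e.g. PyPy) or the Go compiler can; a hedged sketch of the pattern, with arbitrary names:

```python
import timeit

def empty_loop(n=1_000_000):
    # no observable effect: an optimizing compiler or JIT is free to
    # delete this loop entirely, so timing it proves nothing
    for _ in range(n):
        pass

def loop_with_sink(n=1_000_000):
    # accumulate a result and return it; because the value is used,
    # the work cannot legally be optimized away
    total = 0
    for i in range(n):
        total += i
    return total

t = timeit.timeit(lambda: loop_with_sink(100_000), number=10)
print(f"{t:.4f}s, checksum={loop_with_sink(100_000)}")
```

Printing (or otherwise using) the checksum at the end serves the same purpose as Go's idiom of assigning a benchmark result to a package-level sink variable.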
Micro-benchmarks like this are rarely useful in isolation. If you want to compare Python and Go on performance, it's usually a better idea to test on a realistic problem rather than something artificial, and then to compare not only raw performance but other characteristics of the code as well.
The bottom line is that there's too much wrong with this benchmark to draw any useful conclusions from it.