Reflection is much slower, even if both operations are O(1), because big-O notation deliberately doesn't capture the constant, and reflection has a large constant (its `c` is very roughly about 100, or 2 decimal orders of magnitude, here).
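For concreteness, here is a minimal runnable sketch of the two access paths discussed below. The struct definition is assumed purely for illustration, since the actual `TargetStruct` isn't shown here:

```go
// Minimal runnable sketch of the two access paths. The struct definition is
// assumed for illustration only; the real TargetStruct is not shown here.
package main

import (
	"fmt"
	"reflect"
)

type target struct {
	Field int
}

func main() {
	TargetStruct := &target{Field: 42}

	// Direct access: an ordinary field load.
	f1 := TargetStruct.Field

	// Reflection-based access: several calls into the runtime.
	v := reflect.ValueOf(TargetStruct)
	f2 := int(reflect.Indirect(v).FieldByName("Field").Int())

	fmt.Println(f1, f2) // both print 42
}
```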
I would quibble slightly (but only slightly) with Volker's comment that reflection is O(1), as this particular reflection has to look up the name at runtime, and this may or may not involve using a Go map,¹ which itself is unspecified: see What is the Big O performance of maps in golang? Moreover, as noted in the accepted answer to that question, the hash lookup isn't quite O(1) for strings anyway. But again, this is all swamped by the constant factor for reflection.
An operation of the form:

```go
f := TargetStruct.Field
```

would often compile to a single machine instruction, which would operate in anywhere from some fraction of one clock cycle to several cycles or more depending on cache hits. One of the form:
```go
v := reflect.ValueOf(TargetStruct)
f := reflect.Indirect(v).FieldByName("Field")
```
turns into calls into the runtime to:

- allocate a new reflection object to store into `v`;
- inspect `v` (in `Indirect()`, to see if `Elem()` is necessary) and then check that the result of `Indirect()` is a `struct` and has a field whose name is the one given, and obtain that field

and at this point you still have just a `reflect.Value` object in `f`, so you still have to find the actual value, if you want the integer:

```go
fv := int(f.Int())
```

for instance. This might be anywhere from a few dozen instructions to a few hundred. This is where I got my `c` ≈ 100 guess.
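If you want to put a number on the constant for your own Go version and hardware, a micro-benchmark along the following lines (again with an assumed struct definition) shows the ratio. Run it with `go test -bench=.` and compare the ns/op columns; the ratio between the two is the constant that big-O hides:

```go
// Rough micro-benchmark for the constant factor; the struct is assumed as
// above, and the absolute numbers will vary by Go version and hardware.
package main

import (
	"reflect"
	"testing"
)

type target struct {
	Field int
}

// sink prevents the compiler from optimizing the loads away entirely.
var sink int

func BenchmarkDirect(b *testing.B) {
	t := &target{Field: 42}
	for i := 0; i < b.N; i++ {
		sink = t.Field
	}
}

func BenchmarkReflect(b *testing.B) {
	t := &target{Field: 42}
	for i := 0; i < b.N; i++ {
		v := reflect.ValueOf(t)
		sink = int(reflect.Indirect(v).FieldByName("Field").Int())
	}
}
```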
¹The current implementation has a linear scan with string equality testing in it. We must test every string at least once, and for strings whose lengths match, we must do the extra testing of the individual string bytes as well, at least up until they don't match.
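To make that concrete, the lookup behaves roughly like the sketch below. This is an illustrative model only, not the actual `reflect` package code, and the helper name is made up:

```go
// Illustrative model only, not the actual reflect package code: the lookup
// amounts to a linear scan comparing each field name to the wanted one.
// Go's string comparison rejects names of a different length immediately and
// otherwise compares bytes until the first mismatch.
func findFieldIndex(fieldNames []string, want string) int {
	for i, name := range fieldNames {
		if name == want {
			return i
		}
	}
	return -1 // no such field
}
```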