This question is not about optimizing code; it's a technical question about the performance difference between short-circuit logical operators and normal (non-short-circuit) logical operators, which may come down to how they are executed at the hardware level.
Basically, a logical AND (`&`) or OR (`|`) takes one cycle, whereas short-circuit evaluation uses branching and can take a varying number of cycles. Now I know that branch predictors can make this evaluation efficient, but I don't see how it can be faster than one cycle.
Yes, if the right operand is expensive to evaluate, then skipping it is beneficial. But for simple conditions like `X & (Y | Z)`, assuming these are atomic variables, non-short-circuit logical operators would likely perform faster. Am I right?
I assumed that short-circuit logical operators use branching (no official source, just my own reasoning), because how else would you skip the right-hand operand while executing instructions in order?