In this environment we measure efficiency by the number of Service Units (SUs) consumed. I'll encode the current timestamp (⎕TS) into a single millisecond-resolution number to illustrate the bug:
0 100 100 100 100 100 1000⊥⎕TS ⍝ this statement consumes around 150 SUs
0 100 100 100 100 100 1000.0⊥⎕TS ⍝ this statement consumes around 5 SUs
What's going on here? Well, by attaching .0 to any of the terms in the left argument, we're telling the interpreter to go into float mode from the start. Without it, the interpreter first tries to handle the operation with integers, notices that the result overflows the integer range, and then retries the whole operation in float mode, paying for the failed integer pass on top of the float one.
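Conversely, when the result fits in the integer range, the integer attempt succeeds on the first try and there is nothing to retry. A minimal sketch of such a case (the no-retry claim follows from the explanation above; it is an assumption, not a measurement):

0 60 60 1000⊥3↓⎕TS ⍝ milliseconds since midnight: small enough to stay integer, so no float retry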
The same trick can be used on the right argument, or by adding 0.0, or by multiplying by 1.0, as sketched below.
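For illustration, each of these variants should make the interpreter start in float mode right away; the statements are my own sketches of the tricks just described, not measured examples:

0 100 100 100 100 100 1000⊥0.0+⎕TS ⍝ add 0.0 to the right argument
0 100 100 100 100 100 1000⊥1.0×⎕TS ⍝ or multiply it by 1.0
(0.0+0 100 100 100 100 100 1000)⊥⎕TS ⍝ adding 0.0 to the left argument works too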