The Euclidean division theorem, with which most math students and Haskellers are familiar, states that
Given two integers a and b, with b ≠ 0, there exist unique integers q and r such that a = bq + r and 0 ≤ r < |b|.
This gives the conventional definitions of quotient and remainder. This 1992 paper argues that they are the best ones to implement in a programming language. Why, then, does divMod always round the quotient toward negative infinity?
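To see the difference concretely, here is a small sketch comparing Prelude's divMod against the Euclidean convention for each sign of the divisor:

```haskell
-- Prelude's divMod rounds the quotient toward negative infinity, so the
-- remainder takes the sign of the divisor; the Euclidean convention
-- instead always keeps 0 <= r < |b|.
main :: IO ()
main = do
    print ((-7) `divMod` 2)     -- (-4,1): already Euclidean, since b > 0
    print ((-7) `divMod` (-2))  -- (3,-1): Euclidean would be (4,1)
    print (7 `divMod` (-2))     -- (-4,-1): Euclidean would be (-3,1)
```

So for a positive divisor, div already agrees with the Euclidean definition; it is only for negative divisors that the two conventions part ways.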
Exact difference between div and quot shows that divMod already does a fair bit of extra work over quotRem; it seems unlikely to be much harder to get it right.
Code
I wrote the following implementation of a Euclidean-style divMod based on the implementation in GHC.Base. I'm pretty sure it's right.
-- Needs MagicHash and UnboxedTuples, plus
-- import GHC.Exts (Int (..), Int#, isTrue#, (<#), (+#), (-#), quotRemInt#)
divModInt2 :: Int -> Int -> (Int, Int)
divModInt2 (I# x) (I# y) = case x `divModInt2#` y of
    (# q, r #) -> (I# q, I# r)

divModInt2# :: Int# -> Int# -> (# Int#, Int# #)
x# `divModInt2#` y#
    | isTrue# (x# <# 0#) = case (x# +# 1#) `quotRemInt#` y# of
        (# q, r #) -> if isTrue# (y# <# 0#)
                      then (# q +# 1#, r -# y# -# 1# #)
                      else (# q -# 1#, r +# y# -# 1# #)
    | otherwise = x# `quotRemInt#` y#
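The unboxed code is awkward to poke at in GHCi, so here is a boxed restatement of the same algorithm (the name divModE is mine, not GHC's) together with a brute-force check of the Euclidean property over a small range:

```haskell
-- Boxed restatement of the unboxed algorithm above; divModE is a
-- hypothetical name used only for this check.
divModE :: Int -> Int -> (Int, Int)
divModE x y
    | x < 0 = case (x + 1) `quotRem` y of
        (q, r)
            | y < 0     -> (q + 1, r - y - 1)
            | otherwise -> (q - 1, r + y - 1)
    | otherwise = x `quotRem` y

-- Check a = qb + r and 0 <= r < |b| for every sign combination.
main :: IO ()
main = print (and
    [ x == q * y + r && 0 <= r && r < abs y
    | x <- [-50 .. 50], y <- [-50 .. 50], y /= 0
    , let (q, r) = divModE x y
    ])
-- prints True
```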
Not only does this produce pleasantly Euclidean results, but it's actually simpler than the GHC code. It clearly performs at most two comparisons (as opposed to four for the GHC code).
In fact, this could probably be made entirely branchless without too much work by someone who knows more about primitives than I do. Here is the gist of a branchless version (presumably someone more knowledgeable could make it more efficient):
x `divMod` y = (q + yNeg, r - yNeg * y - xNeg)
where
(q,r) = (x + xNeg) `quotRem` y
xNeg = fromEnum (x < 0)
yNeg = xNeg*(2 * fromEnum (y < 0) - 1)
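Since a top-level divMod clashes with the Prelude at use sites, here is the same gist under a throwaway name of my own, plus the same brute-force Euclidean check:

```haskell
-- The branchless gist above, under a non-clashing hypothetical name.
divModB :: Int -> Int -> (Int, Int)
divModB x y = (q + yNeg, r - yNeg * y - xNeg)
  where
    (q, r) = (x + xNeg) `quotRem` y
    xNeg   = fromEnum (x < 0)                    -- 1 when x < 0, else 0
    yNeg   = xNeg * (2 * fromEnum (y < 0) - 1)   -- sign correction; 0 when x >= 0

-- Check a = qb + r and 0 <= r < |b| for every sign combination.
main :: IO ()
main = print (and
    [ x == q * y + r && 0 <= r && r < abs y
    | x <- [-50 .. 50], y <- [-50 .. 50], y /= 0
    , let (q, r) = divModB x y
    ])
-- prints True
```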