
We're trying to benchmark binary encoding with Criterion. Since the data types are strict, we can benchmark the process of packing a Request without problems.

However, going one step further and benchmarking the encoding step (Request to ByteString) via runPut, we end up with results in the microsecond range. This is probably because runPut produces a lazy ByteString that is never fully evaluated.

import Criterion.Main
import Data.Binary.Put (runPut)
import System.Entropy (getEntropy)

encode = runPut . buildRqMessage

main :: IO ()
main = do

  randBytes <- getEntropy 1000000
  let !topicA = stringToTopic "performance-0"
  let !topicB = stringToTopic "performance-1"
  let !clientId = stringToClientId "benchmark-producer"
  let !bytes = [randBytes | x <- [1..1]]
  let !head = Head 0 0 clientId
  let !prod = [ ToTopic topicA [ ToPart 0 bytes ] ]

  defaultMain [
      bgroup "encode" [
             bench "pack" $ whnf (packPrRqMessage head) prod
           , bench "pack+build" $ whnf encode (packPrRqMessage head prod)
        ]
    ]
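For a lazy ByteString, `whnf` forces only the outermost chunk, so most of the encoding work is never demanded. A minimal sketch of one way to force the full output, assuming criterion and bytestring are available (`benchEncodeForced` is a hypothetical helper, not the asker's code; `enc` stands in for the question's `encode`):

```haskell
import qualified Data.ByteString.Lazy as BL
import Criterion.Main

-- Forcing the length walks every chunk of the lazy ByteString, so the
-- benchmark pays for producing the entire encoded message rather than
-- just the first chunk.
benchEncodeForced :: (a -> BL.ByteString) -> a -> Benchmark
benchEncodeForced enc msg =
  bench "encode (fully forced)" $ whnf (BL.length . enc) msg
```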

Is there a way to benchmark the encode process appropriately?

Marc Juchli
  • Does trying to query the length of the encoded ByteString help? – MathematicalOrchid Jun 04 '15 at 09:43
  • Try using `nf` instead of `whnf`. There's a reason it's there! – Carl Jun 04 '15 at 14:06
  • "we end up getting μs-results" is that faster or slower than what you expected? Both `whnf` and `nf` are useful things to measure. I would suggest that if you use `nf` on benchmarks in the low microseconds range that you also do a benchmark like `nf (\x-> x) outputOfEncodeAbove`, which will give you an estimate of the overhead of fully evaluating your (already fully-evaluated) data. I think criterion should try to account for this itself, but doesn't appear to. – jberryman Jun 04 '15 at 14:29
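Combining the two comment suggestions into one sketch (`suggestedBenches` and its parameters are hypothetical; `enc` stands for the question's `encode`, and the `NFData` instance for lazy ByteString ships with the bytestring package):

```haskell
import qualified Data.ByteString.Lazy as BL
import Criterion.Main

-- 'nf' reduces the result to normal form, so every chunk of the lazy
-- ByteString is actually built.  The second benchmark forces the
-- already-evaluated output again, estimating the overhead that 'nf'
-- itself adds on top of the real encoding work.
suggestedBenches :: (a -> BL.ByteString) -> a -> [Benchmark]
suggestedBenches enc msg =
  [ bench "encode (nf)"         $ nf enc msg
  , bench "force-only baseline" $ nf (\x -> x) (enc msg)
  ]
```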

0 Answers