I know this is an old question but I felt I could add a bit more if you come across this today like I have.
ZFS doesn't have a built-in option for defragmentation. Because of how blocks are allocated, because ZFS is copy-on-write, and because snapshots pin old blocks in place, you can't really defragment data in place. The only workaround I know of is to create a pool of equivalent size, zfs send/receive the data over to it, destroy the old pool, and recreate it.
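A rough sketch of that send/receive rewrite, assuming a source pool named `tank` and a scratch pool named `newtank` (both names are placeholders, and you'd run this as root on a quiesced system):

```shell
# Take a recursive snapshot so there is a stable point-in-time to send.
zfs snapshot -r tank@migrate

# Replicate everything (datasets, snapshots, properties) to the new pool.
# Writing the data out fresh lays the blocks down contiguously again,
# which is the closest thing ZFS has to a defrag.
zfs send -R tank@migrate | zfs receive -F newtank

# After verifying the copy, retire the fragmented pool...
zpool destroy tank

# ...and, if you want to keep the original pool name, re-import the
# new pool under it.
zpool export newtank
zpool import newtank tank
```

For large pools you'd typically do an incremental `zfs send -R -i` pass after the initial copy to pick up changes before the final cutover.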
Also, it's worth mentioning you have your scrubs backwards. Data you read a lot is constantly having its checksums validated, since ZFS verifies checksums on every read, whereas quiescent data just sits there, potentially bit-rotting, without its block/pointer checksums ever being verified.
Generally most people scrub at least once a month for heavily used datasets (even less often if you know 90%+ of your data is regularly read, as on a web server).
For data that isn't read often, scrubbing twice a month or even once a week is good practice (depending on the number of disks, how much data there is, how old the drives are, etc.). YMMV.
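If it helps, here's what that schedule might look like as root crontab entries; the pool names `hotpool` and `coldpool` are hypothetical, so substitute your own:

```shell
# Monthly scrub for a heavily-read pool (1st of the month, 3am) --
# reads already validate most of this data's checksums day to day.
0 3 1 * * /sbin/zpool scrub hotpool

# Weekly scrub for a rarely-read pool (every Sunday, 3am) -- this is
# the data that otherwise never gets its checksums verified.
0 3 * * 0 /sbin/zpool scrub coldpool
```

You can check progress and results afterwards with `zpool status`. Some distros (e.g. Debian/Ubuntu ZFS packages) already ship a monthly scrub cron job, so check `/etc/cron.d/` before adding your own.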