In many resources on Reducers (like the canonical blog post by Rich Hickey), it's claimed that reducers are faster than the regular collection functions (`(map ... (filter ...))` etc.) because there is less overhead.
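For concreteness, here is a minimal sketch of the two pipelines I mean; the vector `v` and the `inc`/`even?` steps are just arbitrary examples I picked:

```clojure
(require '[clojure.core.reducers :as r])

(def v (vec (range 1000000)))

;; Lazy-seq version: filter and map each produce their own (lazy)
;; intermediate sequence.
(reduce + (map inc (filter even? v)))

;; Reducers version: r/filter and r/map return reducibles that fold the
;; transformation into the reducing function, so the final reduce walks
;; `v` directly.
(reduce + (r/map inc (r/filter even? v)))
```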
What is the extra overhead that's avoided? If I understand correctly, even the lazy collection functions end up walking the original sequence just once. Is the difference in the details of how the intermediate results are computed?
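My rough mental model, assuming the `CollReduce`-based design described in the blog post, is something like this hypothetical re-implementation of `r/map` (the name `my-rmap` is mine, not part of the library):

```clojure
(require 'clojure.core.protocols)

;; Hypothetical sketch: return a reducible whose reduction applies f
;; inline, instead of building a new sequence of (f x) values.
(defn my-rmap [f coll]
  (reify clojure.core.protocols/CollReduce
    (coll-reduce [_ rf init]
      (clojure.core.protocols/coll-reduce
        coll
        (fn [acc x] (rf acc (f x)))
        init))))

(reduce + 0 (my-rmap inc [1 2 3])) ;=> 9
```

If that sketch is roughly right, both versions still traverse the source once, and the saving would be the per-element allocation of the intermediate lazy seq cells. Is that the whole story?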
Pointers to the relevant places in the Clojure implementation that help explain the difference would be most helpful, too.