The book I've been reading, Structure and Interpretation of Computer Programs, presents Church numerals by defining zero and an increment function:
zero: λf. λx. x
increment: λn. λf. λx. f ((n f) x)
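For concreteness, here are the same definitions in Python, which is how I eventually convinced myself they work. The to_int helper is just something I wrote to read a numeral back as a plain integer by counting applications:

zero = lambda f: lambda x: x
increment = lambda n: lambda f: lambda x: f(n(f)(x))

# A Church numeral applies f to x n times, so feeding it the
# integer successor and 0 recovers n as a plain integer.
to_int = lambda n: n(lambda k: k + 1)(0)

one = increment(zero)
two = increment(one)
print(to_int(zero), to_int(one), to_int(two))  # prints: 0 1 2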
Even so, this seemed pretty complicated to me, and it took me a long time to work through it and derive one (λf. λx. f x) and two (λf. λx. f (f x)).
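Writing out the β-reduction for one is what finally made it click:

increment zero = (λn. λf. λx. f ((n f) x)) (λf. λx. x)
               → λf. λx. f (((λf. λx. x) f) x)
               → λf. λx. f ((λx. x) x)
               → λf. λx. f x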
Wouldn't it be much simpler to encode numbers this way instead, with zero being the empty lambda?
zero: λ
increment: λf. λ. f
Now it's trivial to derive one (λ. λ) and two (λ. λ. λ), and so on.
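Spelling out the increments, assuming the usual substitution rule still applies to these body-less lambdas:

increment zero = (λf. λ. f) λ → λ. λ = one
increment one  = (λf. λ. f) (λ. λ) → λ. λ. λ = two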
This seems like a much more obvious and intuitive way to represent numbers with lambdas. Is there some problem with this approach, and thus a good reason why Church numerals work the way they do? Is this approach already attested anywhere?