In section 1.2.2 of the classic text Structure and Interpretation of Computer Programs, there is an example of how to count the number of ways to break an amount of money into smaller denominations. Here is the relevant passage:
It takes only a bit of cleverness to come up with the iterative Fibonacci algorithm. In contrast, consider the following problem: How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a procedure to compute the number of ways to change any given amount of money?
This problem has a simple solution as a recursive procedure. Suppose we think of the types of coins available as arranged in some order. Then the following relation holds:
The number of ways to change amount a using n kinds of coins equals the number of ways to change amount a using all but the first kind of coin, plus the number of ways to change amount a − d using all n kinds of coins, where d is the denomination of the first kind of coin.
To see why this is true, observe that the ways to make change can be divided into two groups: those that do not use any of the first kind of coin, and those that do. Therefore, the total number of ways to make change for some amount is equal to the number of ways to make change for the amount without using any of the first kind of coin, plus the number of ways to make change assuming that we do use the first kind of coin. But the latter number is equal to the number of ways to make change for the amount that remains after using a coin of the first kind.
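For reference, the relation in that passage translates almost directly into a recursive procedure. SICP's own version is written in Scheme; the sketch below is just a Python rendering of the same recurrence, with the denominations given in cents (50, 25, 10, 5, 1) to match the dollar example and the name count_change chosen for illustration.

```python
def count_change(amount, coins=(50, 25, 10, 5, 1)):
    """Count the ways to make `amount` (in cents) from the given denominations."""
    if amount == 0:
        return 1                      # exactly one way to change 0: use no coins
    if amount < 0 or not coins:
        return 0                      # no way to change a negative amount, or with no coin kinds left
    first, rest = coins[0], coins[1:]
    # Ways that use none of the first kind of coin,
    # plus ways that use at least one of it (so we remove one such coin from the amount).
    return count_change(amount, rest) + count_change(amount - first, coins)

print(count_change(100))  # 292 ways to change a dollar
```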
My question is about the last statement in the quoted passage. We're talking about a dollar (though they later recommend trying the problem with a smaller amount, such as 10 cents). Why exactly is the number of ways to break a dollar always equal to the number of ways to break the amount that remains after removing a single coin of one denomination from a dollar (i.e. a dollar less the value of an individual coin)?