This seems to be covered by exercise 41 in Section 2.5 of Volume 1 of The Art of Computer Programming, by Knuth. Knuth considers requests for blocks of sizes 1, 2, ..., 2^k, with the total store in use at any time being at most n. Quote follows:
We can prove by induction on k that the total storage consumed by such split blocks never exceeds kn; for after every request to split a block of size 2^(k + 1), we are using at most kn locations in split 2^k blocks and at most n locations in unsplit ones.
(end quote)
So I think you can view this as if you had a memory allocator for blocks of size up to 2^(k+1) that guarantees to use at most (k+1)n store: it uses at most n store to deliver the blocks of size 2^(k+1) itself, and defers requests for smaller blocks to an inductively-assumed "magic" allocator that can service them using at most kn store. If you start off with a pool of (k+1)n locations, the magic allocator never needs more than kn of them, so the allocator for blocks of size 2^(k+1) always has at least n locations of untouched, unfragmented store from which to serve requests of size 2^(k+1).
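
One way to read the unrolled induction is as a dedicated pool of at most n locations per size class, so (k+1)n locations in total. Below is a rough sketch in Python of that segregated-pool reading (the names and the random workload are mine, not Knuth's): it allocates and frees power-of-two blocks at random while keeping the total store in use at or below n, and checks that no per-size pool is ever exhausted.

    import random

    def run_trial(k, n, steps=10000, seed=1):
        """Sketch: one dedicated pool of at most n locations per block
        size 2^0 .. 2^k, so at most (k+1)*n locations overall.  A random
        workload allocates and frees power-of-two blocks while keeping
        the total store in use at or below n; under that constraint no
        pool can run dry, so the assertion below never fires."""
        assert n >= 1 << k                 # even the largest size needs a nonempty pool
        rng = random.Random(seed)
        capacity = {s: n // (1 << s) for s in range(k + 1)}   # blocks per pool
        held = {s: 0 for s in range(k + 1)}                   # blocks currently handed out
        live = []                                             # sizes of live blocks
        in_use = 0                                            # total store in use

        for _ in range(steps):
            s = rng.randrange(k + 1)
            size = 1 << s
            if live and (in_use + size > n or rng.random() < 0.5):
                # Free a random live block, keeping the workload legal.
                size = live.pop(rng.randrange(len(live)))
                held[size.bit_length() - 1] -= 1
                in_use -= size
            else:
                # Here in_use + size <= n, so this pool cannot be full:
                # a full pool would by itself account for more than
                # n - size locations of in-use store.
                assert held[s] < capacity[s], "pool exhausted"
                held[s] += 1
                live.append(size)
                in_use += size

        total = sum(capacity[s] * (1 << s) for s in range(k + 1))
        print(f"pool capacities {capacity}, store owned {total} <= (k+1)*n = {(k + 1) * n}")

    run_trial(k=4, n=64)

The only way the assertion could fail is if a single request would push the total in-use store above n, which the workload constraint rules out; that is the whole content of the (k+1)n bound.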