If I have a very large contiguous file in the gigabytes that I want to copy, the file system has to allocate all of the necessary space and write a duplicate of every block.
Why can't a copy be "fast" in the sense that it copies only references to the existing blocks and writes new blocks only when a change is made?
I understand that this would decouple the amount of data a disk appears to hold from the space actually used (because blocks are shared by reference), so a disk could appear to contain more data than its actual capacity. It could also mean that writes suddenly consume a large amount of space, since entirely new blocks have to be written whenever they diverge from their source blocks.
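To make the idea concrete, here is a rough toy model of the behaviour I'm describing (in Python, with made-up names like `BlockPool` and `File`; I'm not claiming any real file system works exactly like this):

```python
BLOCK_SIZE = 4096

class BlockPool:
    """One shared pool of data blocks, with a reference count per block."""
    def __init__(self):
        self.blocks = {}     # block id -> bytes
        self.refcount = {}   # block id -> number of files referencing it
        self.next_id = 0

    def alloc(self, data):
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = data
        self.refcount[bid] = 1
        return bid

class File:
    """A file is just an ordered list of references into the pool."""
    def __init__(self, pool, block_ids):
        self.pool = pool
        self.block_ids = list(block_ids)

    def copy(self):
        # "Fast" copy: duplicate only the list of references, not the data.
        for bid in self.block_ids:
            self.pool.refcount[bid] += 1
        return File(self.pool, self.block_ids)

    def write_block(self, index, data):
        bid = self.block_ids[index]
        if self.pool.refcount[bid] > 1:
            # Block is shared with another file: write a brand-new block
            # and point only this file at it (the extra space cost I mention).
            self.pool.refcount[bid] -= 1
            self.block_ids[index] = self.pool.alloc(data)
        else:
            # Block belongs to this file alone, so overwrite it in place.
            self.pool.blocks[bid] = data

pool = BlockPool()
original = File(pool, [pool.alloc(b"\x00" * BLOCK_SIZE) for _ in range(4)])
clone = original.copy()                      # near-instant: no data copied
clone.write_block(0, b"\x01" * BLOCK_SIZE)   # only now is a new block written
```

In this model the copy costs time proportional to the number of block references, not the number of bytes, and extra space is consumed only when a shared block is rewritten.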
There would certainly be penalties unique to such a file system, but it sounds like an interesting approach.
Are there any file systems in existence today that handle data in a similar way?
Note that I am not an expert on file systems, so some of my assumptions may be embarrassingly wrong. I welcome any corrections in the comments.