The answer will depend on just how sparse the file is, as well as on the volume's cluster size (cluster size is a filesystem property, set when the volume is formatted, not a property of the disk itself).
NTFS, like most other filesystems, represents a file as an ordered list of disk clusters. That "ordered list" is a physical data structure in the filesystem, and it occupies disk space of its own. As the number of records in this list grows, the filesystem must assign more physical blocks to hold it. However, the number of blocks that it can add is ultimately limited (see the references below).
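You can actually see this list: Windows exposes a file's extent (fragment) map through the `FSCTL_GET_RETRIEVAL_POINTERS` control code. Here's a minimal sketch that counts a file's fragments; the file name comes from the command line, and error handling is kept to the bare minimum:

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    HANDLE h = CreateFileA(argv[1], GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    STARTING_VCN_INPUT_BUFFER in;
    in.StartingVcn.QuadPart = 0;

    /* Union guarantees correct alignment for the output structure. */
    union {
        RETRIEVAL_POINTERS_BUFFER rp;
        BYTE raw[64 * 1024];
    } out;

    unsigned long long fragments = 0;

    for (;;) {
        DWORD bytes;
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof in, &out, sizeof out,
                                  &bytes, NULL);
        if (!ok && GetLastError() != ERROR_MORE_DATA) {
            /* ERROR_HANDLE_EOF here means the file is resident in
               the MFT and owns no clusters at all. */
            break;
        }
        /* Batches may split a contiguous run in two, so treat the
           total as an upper bound on the fragment count. */
        fragments += out.rp.ExtentCount;
        if (ok || out.rp.ExtentCount == 0)
            break;
        /* More extents remain: resume after the last one returned. */
        in.StartingVcn = out.rp.Extents[out.rp.ExtentCount - 1].NextVcn;
    }

    printf("%llu extents (fragments)\n", fragments);
    CloseHandle(h);
    return 0;
}
```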
So, let's assume that you have a 1 TB disk, which by default has a 4 KB cluster size, and you write a 512 GB file.
- If you write that file sequentially, the system will attempt to allocate contiguous clusters, and there will be a relatively small number of entries in the list (fragments in the file).
- If you write that file randomly, you will create a sparse file: each time you write a block that hasn't been written before, the filesystem must allocate a cluster for it. Since you're writing randomly, the OS probably won't be able to allocate contiguous clusters, so you'll have more entries in the list. In the worst case, with one fragment per 4 KB cluster, your 512 GB file could require 134,217,728 fragments (assuming I've done the math correctly). A sketch of how such a sparse file gets created follows this list.
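One detail worth knowing: NTFS only treats a file as sparse if the application explicitly marks it with `FSCTL_SET_SPARSE`; otherwise, extending the file allocates clusters for the gap. A minimal sketch, where the file name, size, and write offset are all arbitrary choices of mine:

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileW(L"sparse.dat", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                           NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    /* Mark the file sparse so unwritten regions consume no clusters. */
    DWORD bytes;
    if (!DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0,
                         &bytes, NULL)) {
        CloseHandle(h);
        return 1;
    }

    /* Set EOF at 512 GB; no data clusters are allocated yet. */
    LARGE_INTEGER size;
    size.QuadPart = 512LL * 1024 * 1024 * 1024;
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    /* A write at a random offset allocates a cluster just for that
       spot, creating a new entry in the extent list. */
    LARGE_INTEGER offset;
    offset.QuadPart = 300LL * 1024 * 1024 * 1024; /* arbitrary */
    SetFilePointerEx(h, offset, NULL, FILE_BEGIN);

    char block[4096] = { 'x' };
    WriteFile(h, block, sizeof block, &bytes, NULL);

    CloseHandle(h);
    return 0;
}
```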
I don't know whether that number of fragments is beyond the capacity of the NTFS management structures, but let's assume it is. You might still be able to manage that file on a volume with a 64 KB cluster size (e.g., one formatted with `format /A:64K`), which brings the worst case down to 8,388,608 fragments.
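To see what you're actually working with, you can query the volume's cluster size with `GetDiskFreeSpace` and redo the worst-case arithmetic for your own file size. A sketch (the drive letter and the 512 GB file size are placeholders):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

    if (!GetDiskFreeSpaceW(L"C:\\", &sectorsPerCluster, &bytesPerSector,
                           &freeClusters, &totalClusters)) {
        fprintf(stderr, "GetDiskFreeSpace failed: %lu\n", GetLastError());
        return 1;
    }

    ULONGLONG clusterSize = (ULONGLONG)sectorsPerCluster * bytesPerSector;
    ULONGLONG fileSize = 512ULL * 1024 * 1024 * 1024; /* 512 GB */

    /* Worst case: every cluster becomes its own fragment. */
    printf("Cluster size: %llu bytes\n", clusterSize);
    printf("Worst-case fragments for a 512 GB file: %llu\n",
           fileSize / clusterSize);
    return 0;
}
```

For the numbers above, this is just 512 GB / 4 KB = 134,217,728 and 512 GB / 64 KB = 8,388,608.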
Aside from the possibility of running out of room for fragment records, heavily fragmented files are less efficient, because access to any particular block requires walking the list of fragments to find it (I'll assume some form of binary search is involved, but it's still worse than consulting a single fragment that holds all blocks). Moreover, on magnetic media, overall disk access will be sub-optimal, because closely numbered blocks may sit at widely separated locations on the platter. Better, in my opinion, is to pre-allocate and sequentially initialize the entire file (unless, of course, you're not planning to store much data in it); a sketch of that approach follows.
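Here's a minimal sketch of that pre-allocate-and-initialize approach. The file name, file size, and chunk size are all arbitrary; I've used 1 GB so the demo finishes in reasonable time, but the same pattern applies at 512 GB:

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const ULONGLONG fileSize = 1ULL * 1024 * 1024 * 1024; /* 1 GB demo */
    const DWORD chunk = 1024 * 1024;                      /* 1 MB writes */

    HANDLE h = CreateFileW(L"preallocated.dat", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    /* Setting EOF up front lets NTFS reserve the whole extent at once,
       giving it the best chance of finding contiguous clusters. */
    LARGE_INTEGER size, zero = { 0 };
    size.QuadPart = (LONGLONG)fileSize;
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);
    SetFilePointerEx(h, zero, NULL, FILE_BEGIN);

    /* Sequentially zero-fill so every cluster is initialized. */
    void *buf = calloc(1, chunk);
    if (!buf) { CloseHandle(h); return 1; }

    for (ULONGLONG done = 0; done < fileSize; done += chunk) {
        DWORD written;
        if (!WriteFile(h, buf, chunk, &written, NULL) || written != chunk) {
            free(buf);
            CloseHandle(h);
            return 1;
        }
    }

    free(buf);
    CloseHandle(h);
    return 0;
}
```

If your process holds the SE_MANAGE_VOLUME_NAME privilege, `SetFileValidData` can skip the zero-fill pass entirely, at the cost of exposing whatever stale data previously occupied those clusters.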
References (both from Microsoft):
- How NTFS Works - an overview of the structures in the NTFS filesystem.
- The Four Stages of NTFS File Growth - a post by a member of Microsoft's support team that details how the allocation nodes for a file grow over time. See also the follow-up post, which shows a partial workaround that increases the number of allocation records.