If it's content that doesn't change, then any transfer method will do, even NFS. If you tar the files first and unpack them on the other end, the transfer rate will be very quick.
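For example, a minimal sketch assuming the content lives in /var/www/html and the target host is called web2 (both placeholders for your own setup): tar the tree, stream it over ssh, and unpack it in one pipeline, which avoids per-file overhead entirely.

    # Pack, stream, and unpack in one go; no temporary file needed.
    # /var/www/html and web2 are placeholders.
    tar -czf - /var/www/html | ssh web2 'tar -xzf - -C /'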
Of course, if you need to transfer a lot of small files regularly, then you might not have a good solution - your web servers will always be slightly out of date with the latest files (or worse, missing required files).
If you do want to set up a way to deploy files to a central server and then have them automatically transferred to the others, you could set up an rsync job to the servers; then only the files you change will be transferred. If you use the rsync daemon, it'll use rsync's own protocol and take next to no time at all - seconds, in all likelihood.
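As a sketch, assuming each web server runs an rsync daemon exporting a module called www (the module name and hostname here are made up):

    # Push only changed files to the daemon's "www" module.
    rsync -az --delete /var/www/html/ rsync://web2/www/

    # Or over ssh, if you'd rather not run the daemon:
    rsync -az --delete /var/www/html/ web2:/var/www/html/

--delete keeps the remote copy an exact mirror, so removed files disappear too.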
A good method others use is to store the web files in a version control system and then export from it to the other servers; you'd just update and commit the files, and the VCS would replace changed files with the new versions, as in the sketch below.
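One rough way to do that with git (the repo path, remote, and branch name are all assumptions; any VCS works the same way) is to have each web server pull on a schedule, e.g. from cron:

    # crontab entry: every 5 minutes, fast-forward the web root
    # to the last committed version. Path and branch are placeholders.
    */5 * * * * cd /var/www/html && git pull --ff-only origin main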
Incidentally, NFS isn't too good with small files, but then, nothing else is - the problem is network latency more than anything else. That still shouldn't put you off using it; unless you have a heap of files to transfer, it won't be too slow. You could try a CIFS mount instead of NFS - I do this for my Windows server, as I got better performance for large files that way.
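If you want to try that, a CIFS mount looks something like this (the share name, server name, and credentials file are placeholders):

    # Mount a Windows share over CIFS instead of NFS.
    mount -t cifs //winserver/www /mnt/www -o credentials=/etc/cifs-creds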