duplicity backs up each file in the state it is in at the moment that file is processed during the backup run.
Note:
As a user-space application, duplicity cannot enforce file system consistency. If a file is readable but currently open in another application and only partially written, that inconsistent state is what gets backed up.
Suggestions
- use a file system that is snapshot capable, take a snapshot before each run and back up the snapshot (see the snapshot sketch below)
- stop services/software that might write to the data being backed up, so a consistent state exists before the backup starts
- duplicity was never developed for data sets this huge; you may run into trouble
- for big data sets, backing up to a local file system first and mirroring that to a cloud location afterwards can improve performance a lot (see the local-first sketch below)
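Snapshot sketch: a minimal example of backing up a snapshot instead of the live file system, assuming the data sits on an LVM logical volume. The volume group/volume names, snapshot size, mount point and backup target URL are placeholders to adapt; by default duplicity will ask for a GPG passphrase (or read PASSPHRASE from the environment).

    # take a read-only LVM snapshot of the data volume (names are examples)
    lvcreate --snapshot --name data-snap --size 5G /dev/vg0/data
    mkdir -p /mnt/data-snap
    mount -o ro /dev/vg0/data-snap /mnt/data-snap

    # back up the frozen snapshot instead of the live data
    duplicity /mnt/data-snap file:///mnt/backup/data

    # clean up the snapshot after the run
    umount /mnt/data-snap
    lvremove -f /dev/vg0/data-snap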
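Local-first sketch: one possible shape of the "backup locally, mirror later" strategy, assuming enough local disk for the whole backup chain. The paths and the rclone remote name are placeholders, and rclone is only one example of a mirroring tool (rsync to another host works the same way).

    # 1. back up to a fast local target, starting a fresh full chain once a month
    duplicity --full-if-older-than 1M /srv/data file:///mnt/backup/data

    # 2. mirror the finished backup chain to the cloud afterwards
    rclone sync /mnt/backup/data remote:mybucket/duplicity/data

Because the mirror copies duplicity's already-finished volume files, the slow cloud upload no longer sits in the critical path of the backup itself.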