One way to speed up this kind of transfer is to make sure the role you use for the copy can both read from the source bucket and write to the destination bucket. When the role and bucket policies are set up correctly, the data never has to leave AWS's network: the copy happens server-side, which makes it much faster and avoids consuming your own bandwidth.
See: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/ and https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
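As a rough illustration, here is a minimal IAM policy sketch for the copying role. The bucket names my-source-bucket and my-destination-bucket are placeholders, and a real setup (especially cross-account) may need additional permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::my-source-bucket",
        "arn:aws:s3:::my-source-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::my-destination-bucket",
        "arn:aws:s3:::my-destination-bucket/*"
      ]
    }
  ]
}
```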
Another way to speed up the transfer is to tweak the CLI's transfer settings: you can control the multipart chunk size, how many threads transfer simultaneously, and a few other knobs. See: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html
You probably want at least 4-10 max_concurrent_requests (the number of transfer threads), and to increase multipart_chunksize from its 8 MB default to something sized for your available memory and the number of concurrent requests you allow.
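For example, a hypothetical profile in ~/.aws/config might look like this (the profile name and values are illustrative, not tuned recommendations):

```
# illustrative values; tune for your memory and workload
[profile bucket-copy]
s3 =
  max_concurrent_requests = 10
  max_queue_size = 10000
  multipart_threshold = 64MB
  multipart_chunksize = 64MB
```

Note that 10 concurrent requests at a 64 MB chunk size can buffer several hundred MB at once, which is why the thread count and chunk size should be chosen together.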
I found the aws cli tool can consume essentially all of your CPU and memory, so be careful how high you set these values.
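Putting it together, the copy itself would then look something like this, using the hypothetical profile and placeholder bucket names from above (aws s3 sync works the same way and is handy for resuming interrupted transfers):

```
# bucket names are placeholders; bucket-copy is the hypothetical profile above
aws s3 cp s3://my-source-bucket/ s3://my-destination-bucket/ --recursive --profile bucket-copy
```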