Since aws s3 cp has an --expected-size argument to ensure that files/data larger than 5 GB are uploaded successfully, how can the equivalent be set in the Python version, boto3's upload_fileobj?
I'm trying to upload a database backup as a data stream to S3 without saving it to disk, but it fails partway through with:

InvalidArgument: Part number must be an integer between 1 and 10000, inclusive
I assume this is because the data stream is non-seekable, so the expected data size has to be set explicitly; otherwise boto3 falls back to its default part size and runs past the 10,000-part limit partway through the upload.
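If I understand the transfer defaults correctly, boto3's multipart_chunksize is 8 MiB, which caps a 10,000-part upload well below the ~1 TiB this backup can reach:

default_chunk = 8 * 1024 * 1024   # boto3's default multipart_chunksize (8 MiB)
print(default_chunk * 10000)      # 83886080000 bytes, roughly 78 GiB, far below 1 TiB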
AWS CLI example:
innobackupex --stream=xbstream --compress /backup \
| aws s3 cp - s3://backups/backup2018112 --expected-size=1099511627776
Boto3 example:
import subprocess

import boto3

innobackupexProc = subprocess.Popen([
    'innobackupex',
    '--stream=xbstream',
    '--compress',
    '/backup'
], stdout=subprocess.PIPE)

s3 = boto3.client('s3')
with innobackupexProc.stdout as dataStream:
    s3.upload_fileobj(dataStream, 'backups', 'backup2018112')
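From reading the boto3 transfer docs, I suspect the fix is to pass a TransferConfig with a multipart_chunksize large enough that the whole stream fits in 10,000 parts. This is only a sketch of what I have in mind, assuming a 1 TiB upper bound on the backup size (the expected size, bucket, and key are taken from the examples above):

import math
import subprocess

import boto3
from boto3.s3.transfer import TransferConfig

# Assumed upper bound on the stream size: 1 TiB, matching --expected-size above.
EXPECTED_SIZE = 1099511627776

# S3 multipart uploads allow at most 10,000 parts, so choose a part size
# big enough that the whole stream fits within that limit (about 105 MiB here).
config = TransferConfig(multipart_chunksize=math.ceil(EXPECTED_SIZE / 10000))

innobackupexProc = subprocess.Popen([
    'innobackupex',
    '--stream=xbstream',
    '--compress',
    '/backup'
], stdout=subprocess.PIPE)

s3 = boto3.client('s3')
with innobackupexProc.stdout as dataStream:
    s3.upload_fileobj(dataStream, 'backups', 'backup2018112', Config=config)

Would this be the right way to cap the part count, or is there a more direct equivalent of --expected-size in boto3?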