I am doing some performance testing on transferring large files (~4 GB) from an FTPS server to an SFTP server. I did some research and wrote a Python script to see whether there is any performance improvement when getting a file from FTPS and transferring it to SFTP.
FTPS connection setup
import ftplib
import socket
from datetime import datetime
from io import BytesIO

def create_connection(self):
    print('Creating session..........')
    ftp = ftplib.FTP_TLS()
    # ftp.set_debuglevel(2)
    ftp.connect(self.host, self.port)
    ftp.login(self.user, self.passwd)
    ftp.prot_p()
    # optimize socket params for the download task
    print('Optimizing socket..........')
    ftp.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    ftp.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)
    ftp.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    print('Session created successfully')
    return ftp
def get_file(self, ftp_session, dst_filename, local_filename):
    print('Starting download........', datetime.now())
    myfile = BytesIO()
    print(myfile.tell())
    # download the whole remote file into the in-memory buffer
    ftp_session.retrbinary('RETR %s' % dst_filename, myfile.write)
    print(myfile.tell())
    print('Download completed ........', datetime.now())
    return myfile
For the SFTP connection I am using Paramiko:
import paramiko

host, port = "abc.com", 22
transport = paramiko.Transport((host, port))
username, password = "user", "pwd"
transport.connect(None, username, password)
# bump the SFTP channel window size before the channel is opened
transport.default_window_size = 3 * 1024 * 1024
sftp = paramiko.SFTPClient.from_transport(transport)

myfile.seek(0)  # rewind the buffer returned by get_file()
sftp.putfo(fl=myfile, remotepath='remotepath/' + local_filename)
sftp.close()
I am using BytesIO so that I can keep the file in memory and stream it while copying. The code above does copy the file, but it takes ~20 minutes: it first downloads the whole file into memory and only then uploads it. Is there a way to transfer the file more efficiently?
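One idea I have considered but not benchmarked is to skip the intermediate buffer entirely and feed each downloaded block straight into the SFTP upload. A rough sketch (untested; ftp and sftp are the already-connected sessions from above, and the larger blocksize value is just a guess):

# Untested sketch: stream each FTPS block straight into the SFTP upload,
# so the ~4 GB file never has to sit fully in memory.
# 'ftp' is the ftplib.FTP_TLS session and 'sftp' the paramiko.SFTPClient
# from the snippets above; dst_filename / local_filename are as before.
with sftp.open('remotepath/' + local_filename, 'wb') as remote_file:
    remote_file.set_pipelined(True)          # don't wait for an ACK after every write
    ftp.retrbinary('RETR %s' % dst_filename,
                   remote_file.write,        # each downloaded block is written to SFTP
                   blocksize=1024 * 1024)    # larger blocks to cut down on round trips

Whether this actually helps probably depends on whether the FTPS download or the SFTP upload is the bottleneck, but it should at least avoid holding the whole 4 GB in memory.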