
Good Morning,

I need some help with my script. I'm a beginner in Python and I would like to know how I can add a thread to verify my downloading process.

Here is my download script:

import os

class Download:

    def __init__(self):
        self.path = "FolderFiles"
        self.target = "/var/www/folder/Output"

    def downloadFile(self):
        for root, dirs, files in os.walk(self.path, topdown=False):
            for name in files:
                print(name)
                rarFiles = os.path.join(root, name)
                # note the space between the archive name and the target folder
                unrar = "unrar x -y " + rarFiles + " " + self.target
                os.system(unrar)  # inside the loop, so every archive is extracted
                #time.sleep(10)

Additional information: I am using Python 3.x with the unrar library.

Thanks for your help.

  • Can you specify your problem? Do you want to download a file and, once the download is finished, unrar it in a new thread? Are you downloading file by file, or multiple files at the same time? How do you download the files? – Fabian Feb 28 '19 at 10:35
  • The files I download come in several parts; the total file size is about 1.3TB. The thread I would like to set up checks the download every 25 minutes, so the download can be restarted if it has stopped. – Anthony PALERMO Feb 28 '19 at 10:47
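The periodic check described in the comment above could be sketched roughly as follows. This is a minimal sketch, not the asker's actual code: the folder path is taken from the question, but the `restart` callback is an assumed placeholder for whatever restarts the stalled download.

```python
import os
import threading
import time

def folder_size(path):
    """Total size in bytes of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def watch_download(path, restart, interval=25 * 60):
    """Every `interval` seconds, compare the folder size with the previous
    check; if it has not grown, assume the download stalled and call
    `restart()` (a user-supplied callback, assumed here)."""
    last = folder_size(path)
    while True:
        time.sleep(interval)
        current = folder_size(path)
        if current <= last:
            restart()  # no progress since the last check
        last = current

# Started as a daemon thread so it exits together with the main program:
# threading.Thread(target=watch_download,
#                  args=("FolderFiles", restart_download),  # restart_download is hypothetical
#                  daemon=True).start()
```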

2 Answers

0

This could help you. If my understanding is correct, you have a bunch of zip files you would like to download while checking the current status. Of course, you could limit these print statements to every 25 minutes, or to every X MB downloaded.

import requests

# the fourth URL is intentionally broken to demonstrate the failure handling
url_list = [
    "http://file-examples.com/wp-content/uploads/2017/02/zip_10MB.zip",
    "http://file-examples.com/wp-content/uploads/2017/02/zip_10MB.zip",
    "http://file-examples.com/wp-content/uploads/2017/02/zip_10MB.zip",
    "http://file-examplesc.com/wp-content/uploads/2017/02/zip_10MB.zipdd",
    "http://file-examples.com/wp-content/uploads/2017/02/zip_10MB.zip",
]


def download_file(url, total_download_mb):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        filesize = int(r.headers["Content-Length"]) / 1024 / 1024
        downloaded_mb = 0
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                if chunk:  # filter out keep-alive chunks
                    f.write(chunk)
                    downloaded_mb += len(chunk) / 1024 / 1024
                    print("%.2fmb / %.2fmb downloaded." % (downloaded_mb, filesize))
        total_download_mb += downloaded_mb
        # the download is finished here, so the file could be unpacked ...
    return total_download_mb

def download_url_list(url_list):
    total_download_mb = 0
    failed_urls = []
    for i, url in enumerate(url_list):
        try:
            total_download_mb = download_file(url, total_download_mb)
            print("Total download: %.2fmb" % total_download_mb)
        except Exception:
            failed_urls.append(url)
            print("failed by file: " + str(i))
    print("failed downloads:")
    print(failed_urls)

download_url_list(url_list)
Fabian
  • Thank you Fabian for your help and your code. I will use it as inspiration and adapt it, because my download is not via a URL but via files on an FTP server – Anthony PALERMO Mar 01 '19 at 15:32
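Since the comment mentions FTP rather than HTTP, the same per-chunk progress reporting can be done with the standard library's ftplib. This is a sketch under assumptions: the host, credentials, and file names are placeholders, and the server must support the SIZE command for the percentage to be shown.

```python
import os
from ftplib import FTP

def format_progress(received, total):
    """Human-readable progress string, e.g. '50.00%'."""
    return "%.2f%%" % (100.0 * received / total)

def download_via_ftp(host, user, password, remote_name, local_dir):
    """Download one file over FTP, printing progress as chunks arrive."""
    ftp = FTP(host)
    ftp.login(user, password)
    total = ftp.size(remote_name)  # file size in bytes; needs server SIZE support
    received = 0
    local_path = os.path.join(local_dir, remote_name)
    with open(local_path, "wb") as f:
        def write_chunk(chunk):
            nonlocal received
            f.write(chunk)
            received += len(chunk)
            if total:
                print(format_progress(received, total), "of", remote_name)
        ftp.retrbinary("RETR " + remote_name, write_chunk)
    ftp.quit()
    return local_path
```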
0

After a lot of tests, I decided to work in this way: analyze the files after downloading, based on their size, downloading time, etc. The unrar library cannot provide any solution to verify the process during the download itself.
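A minimal sketch of such a post-download size check, assuming the expected sizes are known in advance (for example from the FTP directory listing); the dictionary shape and function name are assumptions, not part of the original script:

```python
import os

def find_incomplete(folder, expected_sizes):
    """Return the files whose on-disk size does not match the expected
    size, i.e. candidates for a re-download."""
    incomplete = []
    for name, expected in expected_sizes.items():
        path = os.path.join(folder, name)
        if not os.path.isfile(path) or os.path.getsize(path) != expected:
            incomplete.append(name)
    return incomplete
```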

Thanks for your help.