
My institute doesn't allow file downloads larger than 300 MB, so I came up with this trick of downloading files in parts. Now I want to ease the task by making a Python script, using os.system() to execute the curl commands.

My plan is to run, for each part number X:

curl --range {start}-{end} [url] -o filename.partX

with start = 300000000*X and end = 300000000*(X+1) - 1.
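For instance (byte ranges are zero-indexed and inclusive at both ends), parts 0 and 1 would be:

curl --range 0-299999999 [url] -o filename.part0
curl --range 300000000-599999999 [url] -o filename.part1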

But I have no idea how to terminate the loop. How would I know that all the available parts of the file have been downloaded? Can anyone help me out with this?

1 Answer

# Repeat the download until it succeeds; -C - tells curl to resume
# the partial file from where the previous attempt stopped.
until curl -C - -o partial_file.zip http://example.com/file.zip; do
    echo Transfer disrupted, retrying in 10 seconds...
    sleep 10
done
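Since the question drives curl from Python with os.system(), the same retry loop can be sketched in Python as well (using the example URL and filename from the shell version above):

import os
import time

# Retry until curl exits with status 0; 'curl -C -' makes curl work out
# the resume offset from the partial file already on disk.
while os.system("curl -C - -o partial_file.zip http://example.com/file.zip") != 0:
    print("Transfer disrupted, retrying in 10 seconds...")
    time.sleep(10)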
Feng
  • Hi, sorry, I think I didn't make the question clear: I cannot even request a download for a file if it is greater than 300 MB. – Kartik Gupta Sep 29 '18 at 16:56
  • I think that when you download a file bigger than 300 MB, your download process gets killed after 300 MB has been downloaded. So you should resume the download of that file with curl. – Feng Sep 29 '18 at 16:59
  • Could you add some comments on what this does and why it solves the problem? – Matt C Sep 29 '18 at 17:00
  • `curl -C -` means curl remembers the offset and continues/resumes the previous file transfer. The command in the answer means: when the curl download process is killed, it exits with an error and another curl process runs 10 seconds later, until the file has been downloaded successfully. – Feng Sep 29 '18 at 17:10
  • @Feng No, this doesn't work, I tried it, and I received an HTML page saying the request is too large. The original method I use works, but I'll have to enter the values manually, which takes too much time. :( – Kartik Gupta Sep 29 '18 at 17:17
  • I understand what you want now. You can do it with a Python function, as sketched below: 1) get the remote file size with `curl -sI URL`; 2) calculate the part count as `int(file_size / part_max_size) + 1`; 3) download each part by iterating over `part_count`; 4) merge all the parts into one big file. – Feng Sep 29 '18 at 17:35
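Putting those four steps together, a minimal Python sketch (the URL, output filename, and 300 MB part size are placeholder assumptions; step 2 uses ceiling division instead of the comment's `+ 1` so that an exact multiple of the part size doesn't produce an empty extra part):

import os
import shutil

URL = "http://example.com/file.zip"   # placeholder URL
OUT = "file.zip"                      # placeholder output name
PART_MAX = 300_000_000                # the institute's 300 MB cap

# 1) Read the file size from the Content-Length response header.
headers = os.popen(f"curl -sI {URL}").read()
size = int(next(line.split(":")[1] for line in headers.splitlines()
                if line.lower().startswith("content-length")))

# 2) Number of parts needed (ceiling division).
part_count = (size + PART_MAX - 1) // PART_MAX

# 3) Download each part with an explicit, inclusive byte range.
for x in range(part_count):
    start = PART_MAX * x
    end = min(PART_MAX * (x + 1) - 1, size - 1)
    os.system(f"curl --range {start}-{end} {URL} -o {OUT}.part{x}")

# 4) Merge the parts back into one file.
with open(OUT, "wb") as merged:
    for x in range(part_count):
        with open(f"{OUT}.part{x}", "rb") as part:
            shutil.copyfileobj(part, merged)

The loop terminates on its own because the part count is computed up front from the Content-Length header, which answers the original question of how to know when all parts have been downloaded.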