
Is there a standard protocol for transferring large blobs in chunks, where the download of each chunk is managed at a higher level than HTTP and the full download, from start to end, can span different times and network connections?

For example, say a user is downloading an app from the Apple App Store and loses the network connection halfway through. When connectivity returns, only the chunks not yet received are downloaded. Once all chunks have arrived, they are reassembled into the desired file/blob.

I know that many applications do this, but I cannot find whether there is a standard mechanism for it.

noctonura
  • Are you looking for [Range Requests](https://tools.ietf.org/html/rfc7233)? – DaSourcerer Sep 08 '17 at 18:13
  • That helps! Searching for Range Requests also brings up https://en.m.wikipedia.org/wiki/Byte_serving, which pretty much answers it. Is this used in practice? – noctonura Sep 08 '17 at 20:01
  • @RichAmberale byte-ranges are used often but do require server support. –  Sep 08 '17 at 20:10
  • Yes, it is. Download managers such as [Download Accelerator Plus](http://www.speedbit.com/dap/) have been using it for ages. In fact, that is what enables them to download several parts of a file in parallel. Regular clients are expected to resume broken or partial downloads this way according to the RFC. See also the [MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests). Please take note that the RFC goes to some length to ensure parts of the right resource *presentation* are being downloaded; the URI alone is not sufficient for that (a minimal sketch of the approach follows these comments). – DaSourcerer Sep 09 '17 at 19:36

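A minimal sketch of a resumable download built on byte-range requests (RFC 7233), in Python with the widely used `requests` library. The function name, chunk size, and the assumption that the server advertises `Accept-Ranges: bytes` are illustrative, not taken from the question or comments:

```python
import os
import requests  # assumed available; any HTTP client that lets you set headers works

def resume_download(url, dest_path, chunk_size=1 << 20):
    """Download `url` to `dest_path`, resuming from where a previous
    attempt left off by sending an HTTP Range request.
    Assumes the server supports byte ranges (Accept-Ranges: bytes)."""
    # Bytes already written to disk by earlier, interrupted attempts.
    have = os.path.getsize(dest_path) if os.path.exists(dest_path) else 0

    # Ask only for the bytes we are still missing.
    headers = {"Range": f"bytes={have}-"} if have else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        if resp.status_code == 206:       # 206 Partial Content: range was honoured
            mode = "ab"                   # append to the partial file
        else:
            resp.raise_for_status()       # error out on 4xx/5xx (e.g. 416)
            mode, have = "wb", 0          # 200 OK: no range support, start over

        with open(dest_path, mode) as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)

# Calling resume_download() again after a dropped connection simply
# continues from the last byte that reached the disk.
```

A production client would also send `If-Range` with the resource's `ETag`, so that if the representation changes between attempts the server falls back to a full 200 response instead of letting the client mix stale and fresh bytes, which is the concern DaSourcerer raises above.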