I am downloading a large number of files using the following method and I am concerned about its memory usage.
Chrome's Blob Storage System Design documentation mentions the following.
> If the in-memory space for blobs is getting full, or a new blob is too large to be in-memory, then the blob system uses the disk. This can either be paging old blobs to disk, or saving the new too-large blob straight to disk.
However, even after going through the documentation multiple times, I still have the following concerns:
- I am still unsure whether the use of fetch affects this behavior by loading the data into memory first.
- If fetch does in fact alter this behavior, is there a recommended file-size limit for this method (beyond which files shouldn't be downloaded this way)?
- What would the behavior be in other (non-Chromium-based) browsers?
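For comparison, here is a minimal sketch (function names are my own) of the no-fetch variant I am weighing this against: pointing the anchor straight at the URL so the browser's download manager handles streaming to disk. Note that browsers only honor the `download` attribute for same-origin URLs, so this is not always a drop-in replacement.

```javascript
// Pure helper: extract the file name from a URL (the part after the last '/').
const fileNameFromUrl = url => url.substring(url.lastIndexOf('/') + 1)

// Hypothetical alternative: no fetch, no Blob, no in-memory buffering.
// The browser's download manager streams the response straight to disk.
// Only works when the `download` attribute is honored (same-origin URLs).
const directDownload = url => {
  const a = document.createElement('a')
  a.href = url
  a.download = fileNameFromUrl(url)
  document.body.appendChild(a)
  a.click()
  a.remove()
}
```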
```javascript
const download = downloadLinks => {
  const _download = async downloadLink => {
    // Note: `responseType` is an axios option, not a fetch one, so I dropped
    // it here; fetch always returns a Response whose body is read as a Blob.
    const blobURL = await fetch(downloadLink)
      .then(res => res.blob())
      .then(blob => window.URL.createObjectURL(blob))
    // +1 so the file name does not include the leading slash
    const fileName = downloadLink.substring(downloadLink.lastIndexOf('/') + 1)
    const a = document.createElement('a')
    a.href = blobURL
    a.setAttribute('download', fileName)
    document.body.appendChild(a)
    a.click()
    a.remove()
    window.URL.revokeObjectURL(blobURL)
  }
  const downloadInterval = () => {
    if (downloadLinks.length === 0) return
    const url = downloadLinks.pop()
    _download(url)
    if (downloadLinks.length !== 0) setTimeout(downloadInterval, 500)
  }
  setTimeout(downloadInterval, 0)
}
```
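One variation I have been experimenting with (not part of the snippet above, names are my own) is to await each download before starting the next, instead of firing one every 500 ms. Since the 500 ms timer does not wait for the previous fetch to finish, several large Blobs can be alive at once; awaiting sequentially should bound peak memory to roughly one Blob at a time, regardless of how the browser pages blobs to disk:

```javascript
// Hypothetical sequential variant: each download is awaited before the next
// begins, so at most one Blob is in flight at any moment.
const downloadSequentially = async downloadLinks => {
  for (const downloadLink of downloadLinks) {
    const res = await fetch(downloadLink)
    const blob = await res.blob()
    const blobURL = URL.createObjectURL(blob)
    const a = document.createElement('a')
    a.href = blobURL
    a.download = downloadLink.substring(downloadLink.lastIndexOf('/') + 1)
    document.body.appendChild(a)
    a.click()
    a.remove()
    URL.revokeObjectURL(blobURL) // release the Blob before the next iteration
  }
}
```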
Here are some of the resources I went through. They partially answer all three questions, but I am still concerned about whether fetch causes the Blob to be loaded into memory first.