
I'm trying to implement file upload with the blueimp jQuery-File-Upload plugin using AngularJS.

The file upload is supposed to support chunked upload and automatically resume if a chunk upload fails.

Uploading in chunks works fine. The problem is that when an error occurs, the automatic resume (the chunkfail callback) is invoked and restarts the upload with all of the remaining data sent at once: no more chunking takes place after the resume. Here's my code, based on the example shown on GitHub:

.controller('MyFileUploadController', ['$scope', '$http',
    function ($scope, $http) {
        // setting upload properties:
        $scope.options = {
            url: myUploadUrl,
            maxChunkSize: 100 * 1024, // 100 kB
            maxRetries: 100,
            retryTimeout: 500,
            chunkfail: function (e, data) {
                if (e.isDefaultPrevented()) {
                    return false;
                }
                var that = this,
                    scope = data.scope;
                if (data.errorThrown === 'abort') {
                    scope.clear(data.files);
                    return;
                }
                var fu = $(this).data('blueimp-fileupload') || $(this).data('fileupload'),
                    retries = data.retries || 0;

                var retry = function () {
                    // ask the server how many bytes it already has for this file
                    var req = {
                        method: 'POST',
                        url: filecheckUrl,
                        headers: {
                            'Content-Type': 'application/json'
                        },
                        data: {file: data.files[0].name}
                    };

                    $http(req).success(function (respData, status, headers, config) {
                        data.uploadedBytes = respData && respData.size;
                        // clear the previous data and resume the upload:
                        data.data = null;
                        data.submit();
                    }).error(function (respData, status, headers, config) {
                        fu._trigger('fail', e, data);
                    });
                };

                if (data.errorThrown !== 'abort' &&
                        data.uploadedBytes < data.files[0].size &&
                        retries < fu.options.maxRetries) {
                    retries += 1;
                    data.retries = retries;
                    window.setTimeout(retry, retries * fu.options.retryTimeout);
                    return;
                }
                data.retries = null;
                $.blueimp.fileupload.prototype.options.fail.call(this, e, data);
            }
        };
    }]);

What am I missing to make the data.submit() call in chunkfail resume the upload with the next chunk instead of all the remaining data at once?

Vincent
  • We had another complicated blueimp question just recently, and the author of the library, Sebastian, answered it on SO when asked via email. You can contact him using the top link at https://blueimp.net/. Send him a polite email with the link to this question and ask him to comment here. The previous issue had been bugging quite a few clever folks for some time, and the answer was simple with that little input from Seb. – Vanquished Wombat Nov 19 '16 at 23:54

1 Answer


I finally found the cause myself:

The file size returned by the check call is a String, but it must be a Number.

Changing this line

data.uploadedBytes = respData && respData.size;

to

data.uploadedBytes = respData && Number(respData.size);

fixes the problem.
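
For context, this is how the corrected success handler of the retry call looks; apart from the Number() conversion it is the same code as in the question:

$http(req).success(function (respData, status, headers, config) {
    // the check endpoint returns the size as a String in its JSON response,
    // so convert it to a Number before handing it back to blueimp:
    data.uploadedBytes = respData && Number(respData.size);
    data.data = null; // clear the previously sent blob
    data.submit();    // resume with the next chunk
}).error(function (respData, status, headers, config) {
    fu._trigger('fail', e, data);
});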

========

Detailed explanation

Why it fails when data.uploadedBytes is of type String instead of Number:

Somewhere within the blueimp implementation, the File/Blob slice for the next chunk is calculated as follows:

slice.call(file,
    ub, // = data.uploadedBytes (chunk start byte pos)
    ub + mcs, // = data.uploadedBytes + data.maxChunkSize (chunk end byte pos)
    file.type
);

This results in the chunk end position

ub + mcs, // = data.uploadedBytes + data.maxChunkSize

becoming a concatenated String like

102400 + 102400 // supposed to be 100 kB + 100 kB = 200 kB

resulting in

102400102400 // about 95GB

which is larger than the size of the uploaded file.

So the next chunk slice is calculated as being all the remaining bytes of the file.
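
A quick illustration of the coercion, using the 100 kB values from above:

var ub = '102400';             // uploadedBytes, parsed from the check response as a String
var mcs = 102400;              // maxChunkSize as a Number
console.log(ub + mcs);         // "102400102400" -> an end position of roughly 95 GB
console.log(Number(ub) + mcs); // 204800 -> the intended 200 kB end position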

Vincent