#include "google/cloud/storage/internal/curl_client.h"
#include "google/cloud/storage/internal/object_requests.h"
#include <cstdint>
#include <iostream>
#include <memory>
#include <string>

namespace gcs = google::cloud::storage;
using ::google::cloud::StatusOr;

auto client_options =
    gcs::ClientOptions::CreateDefaultClientOptions();
auto client = gcs::internal::CurlClient::Create(*client_options);

StatusOr<std::unique_ptr<gcs::internal::ResumableUploadSession>> session;
std::uint64_t total_size = 0;

void PerformResumableUpload(std::string const& bucket_name,
                            std::string const& object_name,
                            std::string const& data,
                            std::uint64_t data_size, bool is_final) {
  StatusOr<gcs::internal::ResumableUploadResponse> response;
  total_size += data_size;
  if (!is_final) {
    // Create the session once, on the first chunk; reuse it for the rest.
    if (!session) {
      gcs::internal::ResumableUploadRequest request(bucket_name, object_name);
      session = client->CreateResumableSession(request);
    }
    response = (*session)->UploadChunk(data);
  } else {
    std::cout << "Uploading total size " << total_size << "\n";
    response = (*session)->UploadFinalChunk(data, total_size);
  }
  std::cout << "Response Status: " << response.status() << "\n";
}

I'm using CurlClient to upload a huge object in chunks of 5 MB (a multiple of 256 KiB), and it works. However, I'd like to be able to cancel this operation and discard the partially uploaded chunks before UploadFinalChunk has been called and the object has been committed. From the documentation here, I see that to cancel the resumable upload request I would have to send a DELETE request. However, I don't see any method available in CurlClient that would help me do the same. I'd appreciate any help.

Katayoon
Tushar Jagtap

1 Answer


We never implemented an API to cancel resumable uploads. I just created feature request #4404; if there is anything you want to add to the GitHub issue, please feel free.

Also, I am curious why you are using storage::internal::CurlClient directly? Typically the APIs in storage::Client are friendlier, and, as the namespace implies, storage::internal::CurlClient is an internal API: we might change or remove it without notice. Feel free to contact me directly (my email should be easy to find) if you prefer.

coryan
  • Thanks for opening the feature request. I'm looking forward to using it soon. I'm using CurlClient because the API layers above me already provide me parts of an object in chunks. I want to upload these chunks individually and form an object after I upload the last chunk. I think the WriteObject interface with ResumableUploadSession is meant for a different use case: continuing from the point where an upload of a huge object failed. This is different from what we want. – Tushar Jagtap Jun 16 '20 at 18:50
  • I tried to use the WriteObject interface to achieve "multi-part" upload behavior before, but I wasn't able to. When I use `gcs::ObjectWriteStream stream = client.WriteObject(bucket_name, object_name, gcs::IfGenerationMatch(0), gcs::NewResumableUploadSession());` to upload **chunks** and suspend the stream by calling `std::move(stream).Suspend();`, for chunk sizes less than 8MB the object gets committed, and the next time I restore the session to write the next chunk, the previous chunk gets overwritten. – Tushar Jagtap Jun 16 '20 at 18:52
  • For a chunk size greater than 8MB it works, but I have to figure out where the upload was suspended by using `next_expected_byte()`. CurlClient's UploadChunk and UploadFinalChunk give us exactly what AWS's Aws::S3::Model::UploadPartRequest interface provides. When we receive any errors during a "multi-part" upload, we would like to cancel it. AWS provides https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html to abort the operation. We are looking for something similar on the Google side. – Tushar Jagtap Jun 16 '20 at 18:53
  • > "I wanted to be able to upload these chunks individually and form an object after I upload the last chunk." You may want to consider uploading each chunk to a real object and then doing ComposeMany(). It has downsides (you need to delete the components), but you can upload out of order too. – coryan Jun 17 '20 at 19:09
  • Thanks! I will look into ComposeMany(). I much appreciate your quick response. I just have a couple of clarifying questions to which I did not find any answers in the ComposeMany() docs. First, would using ComposeMany have any negative effect on performance? Since now I would be uploading chunks as real objects, then composing them into a new one, and also deleting the individual chunks after composing. Another question would be around cost: wouldn't this require twice as much storage and cost twice as much? Thanks! – Tushar Jagtap Jun 17 '20 at 20:58
  • > "First, would using ComposeMany have any negative effect on performance?" That you would have to measure; I expect the compose approach uses more operations, but you can do more of them in parallel. > "Wouldn't this require twice as much storage and cost twice as much?" I guess yes, but cost is per byte per second; I expect you would have double the data for a few seconds or minutes at most. – coryan Jun 19 '20 at 11:09