I'm trying to upload large files using the GCS writer:
bucketHandle := m.Client.Bucket(bucket)
objectHandle := bucketHandle.Object(path)
writer := objectHandle.NewWriter(context.Background())
Then, for each chunk of size N, I call writer.Write(myBuffer). I'm seeing out-of-memory errors on my cluster and wondering whether this is actually buffering the entire file into memory. What are the semantics of this operation? Am I misunderstanding something?
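
For reference, here's a minimal, self-contained sketch of the upload loop I described (the bucket name, object path, local file name, and the 1 MiB chunk size are placeholders, not my real values):

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	// Placeholder names and size; the real values come from elsewhere in my code.
	const bucketName = "my-bucket"
	const objectPath = "uploads/large-file.bin"
	const chunkSize = 1 << 20 // the "N" from my description, 1 MiB here

	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage.NewClient: %v", err)
	}
	defer client.Close()

	src, err := os.Open("large-file.bin") // placeholder local file
	if err != nil {
		log.Fatalf("os.Open: %v", err)
	}
	defer src.Close()

	writer := client.Bucket(bucketName).Object(objectPath).NewWriter(ctx)

	buf := make([]byte, chunkSize)
	for {
		n, readErr := src.Read(buf)
		if n > 0 {
			// This is the call I'm asking about: does Write stream each
			// chunk out as it goes, or accumulate the whole object in memory?
			if _, err := writer.Write(buf[:n]); err != nil {
				log.Fatalf("writer.Write: %v", err)
			}
		}
		if readErr == io.EOF {
			break
		}
		if readErr != nil {
			log.Fatalf("read: %v", readErr)
		}
	}

	// The object isn't finalized until Close returns.
	if err := writer.Close(); err != nil {
		log.Fatalf("writer.Close: %v", err)
	}
}
```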