I have a custom nginx module that, in some proxying situations, backs up the proxied file to S3. The following works as expected on my development machine (macOS Sierra):

#include <aws/core/Aws.h>
#include <aws/core/auth/AWSCredentials.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/utils/memory/stl/AWSStringStream.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>

Aws::Client::ClientConfiguration config;
auto s3_client = Aws::MakeShared<Aws::S3::S3Client>(ALLOCATION_TAG,
                     Aws::Auth::AWSCredentials(s3_key_id, s3_secret),
                     config);

/* Create stream for S3 PUT from nginx ngx_buf_t *b */
auto input_data = Aws::MakeShared<Aws::StringStream>(ALLOCATION_TAG);
input_data->write(reinterpret_cast<char *>(b->pos), buffer_size);

Aws::S3::Model::PutObjectRequest put_object_request;
put_object_request.WithBucket(bucket_name).WithKey(key_name);
put_object_request.SetBody(input_data);

auto s3_put_outcome = s3_client->PutObject(put_object_request);

On a production EC2 instance, however, be it Amazon Linux or CentOS, I get nothing but "Unable to connect to endpoint ... while sending to client" errors.
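For reference, this is roughly how the failure surfaces in code (a sketch; std::cerr stands in for the module's actual nginx logging call):

#include <iostream>

if (!s3_put_outcome.IsSuccess()) {
    const auto &err = s3_put_outcome.GetError();
    /* On EC2 this reports "Unable to connect to endpoint" */
    std::cerr << "PutObject failed: " << err.GetExceptionName()
              << ": " << err.GetMessage() << std::endl;
}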

I have tried setting configuration options (as recommended in "AWS C++ S3 SDK PutObjectRequest unable to connect to endpoint"), rewinding the stream, and setting the content length explicitly, but none of these has worked:

config.scheme = Aws::Http::Scheme::HTTPS;
config.region = Aws::Region::US_EAST_1;
config.connectTimeoutMs = 30000;
config.requestTimeoutMs = 600000;
...
input_data->seekg(0);   /* rewind the stream before the request body is read */
...
put_object_request.SetContentLength(buffer_size);
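
To be clear about ordering: since the client copies its configuration at construction time (so setting fields afterwards would have no effect), the options above are applied to config before the S3Client is created, i.e. (condensed):

Aws::Client::ClientConfiguration config;
config.scheme = Aws::Http::Scheme::HTTPS;
config.region = Aws::Region::US_EAST_1;
config.connectTimeoutMs = 30000;
config.requestTimeoutMs = 600000;

auto s3_client = Aws::MakeShared<Aws::S3::S3Client>(ALLOCATION_TAG,
                     Aws::Auth::AWSCredentials(s3_key_id, s3_secret),
                     config);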

The AWS SDK and nginx versions are the same on my development machine and on EC2, so I'm out of ideas. Any insights as to why this works from macOS but not on an actual AWS host?
