I have set up two Ceph clusters, each with a RADOS Gateway running on one of its nodes. What I'm trying to achieve is to transfer all objects from a bucket "A", reachable through the endpoint in my cluster "1", to a bucket "B", reachable through the other endpoint in my cluster "2". This doesn't really matter for my issue, but at least you understand the context.
I created a script in Python using the boto3 module. The script is really simple: I just want to put an object into a bucket. The relevant part is below:
import boto3

s3 = boto3.resource('s3',
                    endpoint_url=credentials['endpoint_url'],
                    aws_access_key_id=credentials['access_key'],
                    aws_secret_access_key=credentials['secret_key'],
                    use_ssl=False)

with open('/tmp/hello.txt', 'rb') as body:
    s3.Object('my-bucket', 'hello.txt').put(Body=body)
(hello.txt just contains a single word.)
The script runs from a node in my cluster "1" (the same node that hosts that cluster's radosgw endpoint). It works fine when "endpoint_url" points to the node I'm running the script from, but it fails when I try to reach the other endpoint (the radosgw on a node in my cluster "2").
I get this error:

botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL
The weird thing is that I can create a bucket on either endpoint without any error:
s3_src.create_bucket(Bucket=bucket_name)
s3_dest.create_bucket(Bucket=bucket_name)
I can even list the buckets on both endpoints.
Do you have any idea why I can do pretty much everything except put a single object through my second endpoint?
I hope this makes sense.