
I have set up two Ceph clusters, each with a RADOS Gateway on one of its nodes. What I'm trying to achieve is to transfer all objects from a bucket "A", reachable through an endpoint in my cluster "1", to a bucket "B", reachable through another endpoint in my cluster "2". It doesn't really matter for my issue, but at least you understand the context.

I wrote a Python script using the boto3 module. The script is really simple: it just puts an object in a bucket.

The relevant part is written below:

    import boto3

    s3 = boto3.resource('s3',
                        endpoint_url=credentials['endpoint_url'],
                        aws_access_key_id=credentials['access_key'],
                        aws_secret_access_key=credentials['secret_key'],
                        use_ssl=False)

    # Upload a small local file as an object
    with open('/tmp/hello.txt', 'rb') as body:
        s3.Object('my-bucket', 'hello.txt').put(Body=body)

(hello.txt just contains a word)

Let's say this script runs from a node in my cluster 1 (the node hosting the radosgw endpoint). It works well when "endpoint_url" points to the node I'm running the script from, but it does not work when I try to reach my other endpoint (the radosgw on a node in my cluster "2").

I got this error:

    botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL
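
For reference, the client can be given explicit, short timeouts so the failure surfaces right away instead of hanging on the default read timeout. This is only a minimal sketch; the endpoint URL, port and keys below are placeholders for the cluster "2" gateway:

    import boto3
    from botocore.config import Config

    # Short timeouts and no retries make the hang fail fast.
    cfg = Config(connect_timeout=5, read_timeout=10,
                 retries={'max_attempts': 0})

    # Placeholder endpoint/credentials; 7480 is the default radosgw port.
    s3 = boto3.resource('s3',
                        endpoint_url='http://cluster2-rgw:7480',
                        aws_access_key_id='ACCESS_KEY',
                        aws_secret_access_key='SECRET_KEY',
                        use_ssl=False,
                        config=cfg)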

The weird thing is that I can create a bucket without any error:

    s3_src.create_bucket(Bucket=bucket_name)
    s3_dest.create_bucket(Bucket=bucket_name)

I can even list the buckets on both endpoints.
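
For reference, the listing that works against both endpoints is just the standard resource iteration (s3_src and s3_dest being the two resource objects built as above):

    # Listing buckets succeeds on both gateways
    for bucket in s3_src.buckets.all():
        print(bucket.name)
    for bucket in s3_dest.buckets.all():
        print(bucket.name)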

Do you have any idea why I can do pretty much everything except put a single object through my second endpoint?

I hope this makes sense.

user3561383

1 Answer


Ultimately, I found that the issue was not related to boto but to the Ceph pool that contains my data.

The bucket pool was healthy, which is why I could create buckets, whereas the data pool was unhealthy, hence the failure when I tried to put an object into a bucket.
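
In case it helps someone else, the pool state can be inspected from an admin node. A minimal sketch wrapping the ceph/rados CLIs (default.rgw.buckets.data is the default RGW data pool name and may differ in your deployment):

    import subprocess

    # Overall cluster health, including which PGs/pools are unhealthy
    subprocess.run(['ceph', 'health', 'detail'], check=True)
    subprocess.run(['ceph', 'df'], check=True)

    # Touch the data pool directly, bypassing the gateway: if this
    # hangs too, the problem is the pool rather than boto or radosgw.
    subprocess.run(['rados', '-p', 'default.rgw.buckets.data', 'ls'],
                   check=True)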

user3561383