
I followed the DevStack installation guide (https://docs.openstack.org/devstack/latest/) and then configured the Keystone auth middleware following https://docs.openstack.org/swift/latest/overview_auth.html#keystone-auth. But when I tried to list buckets using boto3 with credentials I generated from `openstack ec2 credential create`, I got the error "The AWS Access Key Id you provided does not exist in our records". Would appreciate any help.

My boto3 code is:

import boto3

s3 = boto3.resource(
    's3',
    aws_access_key_id='5d14869948294bb48f9bfe684b8892ca',
    aws_secret_access_key='ffcbcec69fb54622a0185a5848d7d0d2',
)

for bucket in s3.buckets.all():
    print(bucket)

where the two keys come from the output of `openstack ec2 credential create`:

| access     | 5d14869948294bb48f9bfe684b8892ca |
| links      | {'self': '10.180.205.202/identity/v3/users/…'} |
| project_id | c128ad4f9a154a04832e41a43756f47d |
| secret     | ffcbcec69fb54622a0185a5848d7d0d2 |
| trust_id   | None |
| user_id    | 2abd57c56867482ca6cae5a9a2afda29 |

After running the commands @larsks provided, I got `http://10.180.205.202:8080/v1/AUTH_ed6bbefe5ab44f32b4891fc5e3e55f1f` as the public URL for my swift endpoint. And just to make sure: my ec2 credential is under the user admin and also the project admin.

When I followed the boto3 code from the answer and removed everything starting from /v1 in my endpoint URL, I got the error `botocore.exceptions.ClientError: An error occurred () when calling the ListBuckets operation`.

And when I kept the AUTH part, I got `botocore.exceptions.ClientError: An error occurred (412) when calling the ListBuckets operation: Precondition Failed`.

The previous problem is resolved by adding `enable_service s3api` to local.conf and stacking again. This is likely because OpenStack needs to know it is using s3api; the documentation says "Swift will be configured to act as a S3 endpoint for Keystone so effectively replacing the nova-objectstore."
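For reference, the change amounts to one extra line in DevStack's local.conf (the section header is DevStack's standard localrc block; everything else in your file stays as it is):

```ini
[[local|localrc]]
# ...your existing DevStack settings...
# Enable Swift's S3-compatible API middleware so boto3/S3 clients work:
enable_service s3api
```

After saving, re-run ./stack.sh so the service is picked up.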

Jack Yu
    Can you share with us your `boto3` code? And perhaps also sufficient output from `openstack ec2 credential list` to demonstrate that the access key and secret you're using match values available from openstack. – larsks Jun 30 '21 at 22:08
  • My boto3 code is: `import boto3 s3 = boto3.resource('s3',aws_access_key_id='',was_secret_access_key='',) for bucket in s3.objects.all(): print(bucket)` where the 2 keys are according to below: | access | 5d14869948294bb48f9bfe684b8892ca| links | {'self': 'http://10.180.205.202/identity/v3/users/2abd57c56867482ca6cae5a9a2afda29/credentials/OS-EC2/5d14869948294bb48f9bfe684b8892ca'} | | project_id | c128ad4f9a154a04832e41a43756f47d | secret | ffcbcec69fb54622a0185a5848d7d0d2 | trust_id | None | user_id | 2abd57c56867482ca6cae5a9a2afda29 – Jack Yu Jul 01 '21 at 17:36
  • When people ask for additional information, you should in general *update the question*, because code posted in comments is just about unreadable. – larsks Jul 01 '21 at 17:44
  • sry about that, I will also post my update to the original question. Thx for the guide – Jack Yu Jul 02 '21 at 01:58

1 Answer


Your problem is probably that nowhere are you telling boto3 how to connect to your OpenStack environment, so by default it is trying to connect to Amazon's S3 service (in your example you're also not passing in your access key and secret key, but I'm assuming this was just a typo when creating your example).

If you want to connect to the OpenStack object storage service, you'll first need to get the endpoint for that service from the catalog. You can get this from the command line by running `openstack catalog list`; you can also retrieve it programmatically if you make use of the openstack Python module.

You can just inspect the output of openstack catalog list and look for the swift service, or you can parse it out using e.g. jq:

$ openstack catalog list -f json |
  jq -r '.[]|select(.Name == "swift")|.Endpoints[]|select(.interface == "public")|.url'
https://someurl.example.com/swift/v1
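If you'd rather stay in Python, the same extraction can be sketched with the standard json module. The catalog shape below is an assumption based on what the jq filter above expects, and `swift_public_url` is a hypothetical helper name:

```python
import json

def swift_public_url(catalog_json: str) -> str:
    """Return the public object-store URL from `openstack catalog list -f json` output."""
    for service in json.loads(catalog_json):
        if service["Name"] == "swift":
            for ep in service["Endpoints"]:
                if ep["interface"] == "public":
                    return ep["url"]
    raise LookupError("no public swift endpoint found in catalog")

# Hypothetical sample; a real catalog lists more services and fields.
sample = '''[{"Name": "swift", "Type": "object-store",
              "Endpoints": [{"interface": "public",
                             "url": "https://someurl.example.com/swift/v1"}]}]'''
print(swift_public_url(sample))  # https://someurl.example.com/swift/v1
```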

In any case, you need to pass the endpoint to boto3:

>>> import boto3
>>> session = boto3.session.Session()
>>> s3 = session.client(service_name='s3',
... aws_access_key_id='access_key_id_goes_here',
... aws_secret_access_key='secret_key_goes_here',
... endpoint_url='endpoint_url_goes_here')
>>> s3.list_buckets()
{'ResponseMetadata': {'RequestId': 'tx0000000000000000d6a8c-0060de01e2-cff1383c-default', 'HostId': '', 'HTTPStatusCode': 200, 'HTTPHeaders': {'transfer-encoding': 'chunked', 'x-amz-request-id': 'tx0000000000000000d6a8c-0060de01e2-cff1383c-default', 'content-type': 'application/xml', 'date': 'Thu, 01 Jul 2021 17:56:51 GMT', 'connection': 'close', 'strict-transport-security': 'max-age=16000000; includeSubDomains; preload;'}, 'RetryAttempts': 0}, 'Buckets': [{'Name': 'larstest', 'CreationDate': datetime.datetime(2018, 12, 5, 0, 20, 19, 4000, tzinfo=tzutc())}, {'Name': 'larstest2', 'CreationDate': datetime.datetime(2019, 3, 7, 21, 4, 12, 628000, tzinfo=tzutc())}, {'Name': 'larstest4', 'CreationDate': datetime.datetime(2021, 5, 12, 18, 47, 54, 510000, tzinfo=tzutc())}], 'Owner': {'DisplayName': 'lars', 'ID': '4bb09e3a56cd451b9d260ad6c111fd96'}}
>>>

Note that if the endpoint url from openstack catalog list includes a version (e.g., .../v1), you will probably want to drop that.
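As a sketch, trimming the version (and any AUTH_ project suffix, as seen in DevStack endpoints) could look like this; `strip_version` is a hypothetical helper, and the path pattern is an assumption about typical Swift endpoint URLs:

```python
import re
from urllib.parse import urlsplit, urlunsplit

def strip_version(url: str) -> str:
    """Drop a trailing /v1 (and any /AUTH_... suffix) from a Swift endpoint URL."""
    parts = urlsplit(url)
    path = re.sub(r"/v1(/AUTH_[^/]+)?/?$", "", parts.path)
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

print(strip_version("http://10.180.205.202:8080/v1/AUTH_ed6bbefe5ab44f32b4891fc5e3e55f1f"))
# http://10.180.205.202:8080
```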

larsks