
I need to access a Cloudian S3 bucket and copy certain files to my local directory. I was given five pieces of info in the following format:

•   Access key: 5x4x3x2x1xxx
•   Secret key: ssssssssssss
•   Region: us-east-1
•   S3 endpoint: https://s3-aaa.xxx.bbb.net 
•   Storage path: store/STORE1/

First, I configure and create a profile called cloudian, which prompts for the info above:

aws configure --profile cloudian
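
If the profile was created correctly, it should land in the standard AWS CLI files, roughly like this (the values here are just the placeholders from above):

```
# ~/.aws/credentials
[cloudian]
aws_access_key_id = 5x4x3x2x1xxx
aws_secret_access_key = ssssssssssss

# ~/.aws/config
[profile cloudian]
region = us-east-1
```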

Then I run a sample command, for instance to copy a file:

aws --profile=cloudian --endpoint-url= https://s3-aaa.xxx.bbb.net  s3 cp s3://store/STORE1/FILE.mp4 /home/myfile.mp4

But it just keeps waiting: no output, no errors, nothing. Am I doing anything wrong? Is anything missing?
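
One thing worth checking: the stray space after `--endpoint-url=` in the command above. A minimal sketch of how the shell tokenizes it shows the flag ends up with an empty value and the URL becomes a separate argument, so the CLI may not be talking to the endpoint at all:

```shell
# Reproduce how the shell splits the command as written:
# '--endpoint-url=' (empty value) and the URL become two separate arguments.
set -- --endpoint-url= https://s3-aaa.xxx.bbb.net s3
echo "arg1=[$1] arg2=[$2] arg3=[$3]"
# prints: arg1=[--endpoint-url=] arg2=[https://s3-aaa.xxx.bbb.net] arg3=[s3]

# Without the space, the endpoint stays attached to the flag:
set -- --endpoint-url=https://s3-aaa.xxx.bbb.net s3
echo "arg1=[$1] arg2=[$2]"
# prints: arg1=[--endpoint-url=https://s3-aaa.xxx.bbb.net] arg2=[s3]
```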

Tina J
  • Don't need a space there `s3:// store/STOR` – Marcin Mar 02 '20 at 21:15
  • Sure. That was a typo in the question! lol – Tina J Mar 02 '20 at 21:20
  • Debug your command to get more information by adding the option `--debug` (boolean) – Ajinkya Mar 02 '20 at 21:28
  • Should I write it as `--debug true`? – Tina J Mar 02 '20 at 21:31
  • Just add `--debug` – Ajinkya Mar 02 '20 at 21:34
  • Yeah, lots of output printed... every iteration it increments and says `Starting new HTTPS connection (1)`... – Tina J Mar 02 '20 at 21:36
  • Are you trying to copy from an S3 bucket to local? In that case you can get the path to your file in S3 from the UI. Also, the endpoint URL option is not required. – Ajinkya Mar 02 '20 at 21:46
  • The only thing I was given was this info. I don't have access to the UI. And I have to do this for thousands of files in code. – Tina J Mar 02 '20 at 21:47
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/208877/discussion-between-ajinkya-and-tina-j). – Ajinkya Mar 02 '20 at 21:49
  • Do your ~/.aws/credentials and ~/.aws/config files match what is shown at https://cloudian.com/blog/aws-cli-s3-compatible-storage/? – jarmod Mar 02 '20 at 22:35
  • If there are a large number of files, tens or hundreds of thousands or more, then what you are observing *might* be kind of... normal. aws-cli is painfully slow at many operations and the S3 ListObjects APIs do not allow any parallelization when iterating a bucket's objects. It's fetch 1000, next 1000, next 1000... any idea how large this bucket is? – Michael - sqlbot Mar 03 '20 at 01:23
  • Actually the bucket has thousands of folders inside. That's correct. But it looks like my current problem is not related to the number of files, though. If I wait, I get an `HTTP Timeout` error :-| – Tina J Mar 03 '20 at 02:13

1 Answer


If you have set the profile properly, this should work:

aws --profile cloudian s3 cp s3://s3-aaa/store/STORE1/FILE.mp4 /home/myfile.mp4

where `s3-aaa` is the name of the bucket.
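
That rewrite assumes a virtual-hosted-style endpoint, where the first label of the hostname is the bucket name. A minimal sketch of deriving the S3 URI from the endpoint under that assumption (the hostname here is the placeholder from the question):

```shell
# Assumption: virtual-hosted-style endpoint, bucket name = first hostname label.
endpoint="https://s3-aaa.xxx.bbb.net"
host="${endpoint#https://}"   # strip the scheme
bucket="${host%%.*}"          # first hostname label: s3-aaa
echo "s3://${bucket}/store/STORE1/FILE.mp4"
# prints: s3://s3-aaa/store/STORE1/FILE.mp4
```

If the endpoint is instead path-style (the bucket is the first path segment), keep `--endpoint-url` and use `s3://store/...` as in the question; trying both quickly narrows it down.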

chris
  • What I was given was an `endpoint` (S3 endpoint: https://s3-aaa.xxx.bbb.net) and a `storage path`. So I should grab the bucket name from the endpoint?! – Tina J Mar 02 '20 at 21:28
  • There are different types of endpoints, so it's hard to be certain given that you want to hide the actual endpoint. The command I provided should work IF the bucketname is part of the endpoint. – chris Mar 03 '20 at 16:13