
I am trying to create a Camel route to transfer a file from an FTP server to AWS S3 storage. I have written the following route:

private static class MyRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sftp://<<ftp_server_name>>?noop=true&include=<<file_name>>...")
            .process(new Processor() {
                @Override
                public void process(Exchange ex) throws Exception {
                    System.out.println("Hello");
                }
            })
            .to("aws-s3://my-dev-bucket?accessKey=ABC***********&secretKey=12abc********+**********");
    }
}

The issue is that this gives me the following exception:

Exception in thread "main" org.apache.camel.FailedToCreateRouteException: Failed to  create route route1 at: >>> To[aws-s3://my-dev-bucket?accessKey=ABC*******************&secretKey=123abc******************** <<< in route: Route(route1)[[From[sftp://<<ftp-server>>... because of Failed to resolve endpoint: aws-s3://my-dev-bucket?accessKey=ABC***************&secretKey=123abc************** due to: The request signature we calculated does not match the signature you provided. Check your key and signing method.

I then tried to do this the other way, i.e. writing a method like this:

public void boot() throws Exception {
    // create a Main instance
    main = new Main();
    // enable hangup support so you can press ctrl + c to terminate the JVM
    main.enableHangupSupport();
    // bind MyBean into the registry
    main.bind("foo", new MyBean());
    // add routes

    AWSCredentials awsCredentials = new BasicAWSCredentials("ABC*****************", "123abc*************************");
    AmazonS3 client = new AmazonS3Client(awsCredentials);
    main.bind("client", client);
    main.addRouteBuilder(new MyRouteBuilder());
    main.run();
}

and referencing the bound client in the endpoint URI as #client. This approach does not throw any exceptions, but the file transfer does not happen.
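For reference, this is roughly how the endpoint URI would reference the bound client (a sketch assuming the camel-aws-s3 component's amazonS3Client option; the bucket name is the one from the question):

    aws-s3://my-dev-bucket?amazonS3Client=#client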

To make sure that there's nothing wrong with my approach, I tried aws-sqs instead of aws-s3 and that works fine (the file successfully transfers to the SQS queue).

Any idea why this is happening? Is there some basic issue with the "aws-s3" connector for Camel?

gotz
4 Answers


Have you tried using the RAW() function to wrap the keys, e.g. RAW(secretKey) or RAW(accessKey)?

It lets you pass your keys as-is, without URL encoding.

Parth Trivedi

Any plus signs in your secret key need to be URL encoded as %2B; in your case **********+*********** becomes **********%2B***********.
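To illustrate the encoding (a minimal sketch using the JDK's URLEncoder; the key value here is made up), the + character becomes %2B while alphanumerics are left untouched:

```java
import java.net.URLEncoder;

public class EncodeKey {
    public static void main(String[] args) throws Exception {
        // A made-up secret key containing a '+' character
        String secretKey = "12abc+def";
        // URLEncoder turns '+' into "%2B" so it survives URI parsing
        String encoded = URLEncoder.encode(secretKey, "UTF-8");
        System.out.println(encoded); // prints 12abc%2Bdef
    }
}
```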


When you configure Camel endpoints using URIs, the parameter values get URL encoded by default. This can be a problem when you want to configure passwords as-is.

To avoid that, you can tell Camel to use the raw value by enclosing it in RAW(value). See "How do I configure endpoints" in the Camel documentation, which also has an example.


Your URL should look like:

aws-s3:bucketName?accessKey=RAW(XXXX)&secretKey=RAW(XXXX)
Pankaj