3

I have a Vert.x server with a worker verticle to asynchronously handle requests for S3 operations. We need a solution to transfer a file from S3 to the client through our server. A previous question, Streaming S3 object to VertX Http Server Response, is answered by tsegismont, but it appears that the recommendation would block the Vert.x thread, and the file transfer belongs in a separate verticle. That recommended solution would not work in a worker verticle because the RoutingContext cannot be sent across the event bus, as discussed in How can I send RoutingContext object from routing vertical to some other vertical using vertx.eventBus().send() method?. Note that the solution recommended there, creating a custom codec, would not help either, since the RoutingContext is still required in the worker verticle.

Another solution would be to get the object from S3, save it to a file, and then use the sendFile method on the HttpServerResponse to send it to the client. This... is not an elegant solution.
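A rough sketch of what that could look like, assuming AWS SDK v2 and a made-up event-bus address `s3.download` (bucket name, route, and class names are placeholders): the worker verticle performs the blocking S3 call and replies with a temporary file path, while the main verticle keeps the RoutingContext and calls `sendFile` when the reply arrives.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

public class S3Download {

  // Worker verticle: the blocking S3 call is acceptable here.
  public static class S3WorkerVerticle extends AbstractVerticle {
    private final S3Client s3 = S3Client.create();

    @Override
    public void start() {
      vertx.eventBus().consumer("s3.download", msg -> {
        String key = (String) msg.body();
        Path tmp = Paths.get(System.getProperty("java.io.tmpdir"), UUID.randomUUID().toString());
        s3.getObject(GetObjectRequest.builder().bucket("my-bucket").key(key).build(), tmp);
        msg.reply(tmp.toString()); // reply with the file path only, never the RoutingContext
      });
    }
  }

  // Main verticle: keeps the RoutingContext and responds when the worker replies.
  public static class MainVerticle extends AbstractVerticle {
    @Override
    public void start() {
      Router router = Router.router(vertx);
      router.get("/files/:key").handler(ctx ->
          vertx.eventBus().request("s3.download", ctx.pathParam("key"), reply -> {
            if (reply.succeeded()) {
              ctx.response().sendFile((String) reply.result().body());
            } else {
              ctx.fail(reply.cause());
            }
          }));
      vertx.createHttpServer().requestHandler(router).listen(8080);
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new S3WorkerVerticle(), new DeploymentOptions().setWorker(true));
    vertx.deployVerticle(new MainVerticle());
  }
}
```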

A third solution would be to abandon the worker verticle and use the blockingHandler method in the main verticle. This is not an async call: the thread would not be released for potentially several seconds, which is worse than the previous solution.
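For completeness, a rough sketch of the blockingHandler variant, again assuming AWS SDK v2, with the `Router`, `S3Client`, and imports set up as in the previous sketch (bucket name and route are placeholders):

```java
// The handler body runs on a worker-pool thread, so the event loop is not blocked,
// but that thread stays occupied for the whole transfer.
void mountBlockingDownloadRoute(Router router, S3Client s3) {
  router.get("/files/:key").blockingHandler(ctx -> {
    byte[] bytes = s3.getObjectAsBytes(
            GetObjectRequest.builder().bucket("my-bucket").key(ctx.pathParam("key")).build())
        .asByteArray();
    ctx.response()
        .putHeader("Content-Type", "application/octet-stream")
        .end(io.vertx.core.buffer.Buffer.buffer(bytes));
  });
}
```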

  • `blockingHandler` method works almost the same as a worker verticle and can not be "worse" than that – injecteer Jan 27 '21 at 11:23
  • I agree, unless it blocks for several seconds. See the documentation at https://vertx.io/docs/vertx-web/java/ under blockingHandler: if the operation blocks for more than a few seconds, you should use a worker verticle. –  Apr 11 '21 at 21:21

1 Answer

0

I don't know if it's an option for you, but you can use presigned URLs - https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

What I do most of the time is use the Vert.x server to handle coordination and generate presigned URLs to download/upload files from S3.

The client gets the presigned URL and can download/upload directly from/to S3, while the actual credentials stay protected behind the backend.

The presigned URL can have a limited lifetime, and you get all the scalability of AWS without dealing with file streaming, out-of-memory exceptions, etc.
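If it helps, here's a rough sketch of how this can look with AWS SDK v2's `S3Presigner` (bucket name, route, and expiry are placeholders):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;

import java.time.Duration;

public class PresignVerticle extends AbstractVerticle {
  private final S3Presigner presigner = S3Presigner.create();

  @Override
  public void start() {
    Router router = Router.router(vertx);

    // The server only coordinates: it authorizes the request and hands back a
    // short-lived URL; the client then downloads directly from S3.
    router.get("/download-url/:key").handler(ctx -> {
      GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
          .signatureDuration(Duration.ofMinutes(15))
          .getObjectRequest(GetObjectRequest.builder()
              .bucket("my-bucket")
              .key(ctx.pathParam("key"))
              .build())
          .build();

      String url = presigner.presignGetObject(presignRequest).url().toString();
      ctx.response().putHeader("Content-Type", "text/plain").end(url);
    });

    vertx.createHttpServer().requestHandler(router).listen(8080);
  }
}
```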

Tom
  • That would be a better solution, and it is where we started. The majority of our work with S3 is through presigned URLs. For some reason, we can successfully upload and download larger files, but smaller files between 4k and 10k bytes are not downloaded correctly. Our current solution is to avoid presigned URLs for smaller files. Generating a URL costs about as much as uploading or downloading a small file through S3 directly, so this works, although not as elegantly as we would have liked. –  Jan 17 '21 at 17:24