
We are using the latest Alpakka S3 connector to download a file as a stream, but for files larger than 8 MB it fails with a max-content-length error:

> exceeded content length limit (8388608 bytes)! You can configure this by setting `akka.http.[server|client].parsing.max-content-length` or calling `HttpEntity.withSizeLimit` before materializing the dataBytes stream.

The pipeline is a Source (the file being downloaded from S3) and a Flow that publishes each line to an AMQP server using the Java amqp-client library. Files smaller than 8 MB are processed fine, but larger files fail with the error above.
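For reference, the first option the error suggests is raising the client-side parsing limit. Since the Alpakka connector materializes the response entity internally, the override has to go on the ActorSystem's configuration; a minimal sketch, assuming a global override is acceptable (the `100m` value and the system name are placeholders):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // Raise the 8 MB default for HTTP client responses; the same
    // line can live in application.conf instead
    val config = ConfigFactory
      .parseString("akka.http.client.parsing.max-content-length = 100m")
      .withFallback(ConfigFactory.load())

    implicit val system: ActorSystem = ActorSystem("s3-to-amqp", config)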

  • Akka HTTP does not keep the file in memory; it streams directly from the source. Do we need to buffer the whole file in memory first and then stream it?

  • Is the downstream, i.e. the AMQP Java client library (5.3.0), causing this issue? We are using one connection and one channel for RabbitMQ.

    val source = s3Client.download(bucketName, key)._1
      .via(Framing.delimiter(ByteString("\n"), Constant.BYTE_SIZE, allowTruncation = true))
      .map(_.utf8String)

    // The Java client's Channel is not thread-safe, so keep parallelism
    // at 1 while publishing through the single shared channel
    val flow = Flow[String].mapAsync(parallelism = 1) { message =>
      // Future that posts the message to RabbitMQ
      Future {
        rabbitMQChannel.basicPublish(exchangeName, routingKey, amqpProperties,
          message.getBytes)
      }
    }

    val result = source.via(flow).runWith(Sink.ignore)
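For completeness, the second option the error names, `HttpEntity.withSizeLimit`, only applies where the response entity is consumed directly rather than through the connector; a minimal sketch, assuming a plain Akka HTTP client call (URL and limit are placeholders):

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.HttpRequest
    import akka.stream.ActorMaterializer

    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    Http()
      .singleRequest(HttpRequest(uri = "https://example.com/large-file"))
      .map { response =>
        // Raise the limit on this entity before materializing dataBytes
        response.entity.withSizeLimit(100L * 1024 * 1024).dataBytes
      }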
    
  • Some additional info: prior to this error I see this warning in the logs: ```[WARN] [08/31/2018 08:38:42.104] [default-akka.actor.default-dispatcher-3] [default/Pool(shared->https://abc.s3.amazonaws.com:443)] [1 (WaitingForEndOfResponseEntity)] Ongoing request [GET /(filename in S3) Empty] was dropped because pool is shutting down``` on dispatcher default-akka.actor.default-dispatcher-3. Could this warning cause message loss when processing large files? – Learner Aug 31 '18 at 10:27

0 Answers