
I have an undefined number of akka-http client flows downloading data from an HTTP service. I'm using akka-http host-level connection pooling because I want to customise the pool, since long-running requests go through it.

Since the number of clients is undefined and dynamic, I don't know how to configure the connection pool (max-open-requests/max-connections). Additionally, I might want the connection pool to be small (fewer connections than clients) so as not to hurt bandwidth.
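For reference, these limits can also be tuned programmatically. Below is a minimal sketch of how the connectionPoolSettings used later in the question could be built; the values and the poolSettings helper name are illustrative, not a recommendation:

import akka.actor.ActorSystem
import akka.http.scaladsl.settings.ConnectionPoolSettings

// Illustrative values: a deliberately small pool with a bounded request
// buffer. Note that max-open-requests must be a power of 2.
def poolSettings(implicit system: ActorSystem): ConnectionPoolSettings =
  ConnectionPoolSettings(system)
    .withMaxConnections(4)
    .withMaxOpenRequests(32)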

Thus, I would like to set up a client flow so that new connections and requests to the pool are backpressured:

1. Does this mean I need a single materialised client flow?

2. How do I materialise as many client flows as I want, such that if there are no available connections (no demand from downstream), requests are backpressured?

My first attempt was the Source.single pattern; however, this approach can exceed max-open-requests and throw an exception, because it creates a new flow instance each time a request is sent to the server.
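For comparison, a minimal sketch of that first attempt (the host name and the naiveSingleRequest helper are illustrative); each call materialises a brand-new stream over the shared pool, so nothing bounds the number of in-flight requests:

import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.Materializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.{ExecutionContext, Future}

// Beyond max-open-requests the pool fails the stream with a
// BufferOverflowException instead of backpressuring, because every call
// here is an independent materialisation.
def naiveSingleRequest(request: HttpRequest)(
    implicit sys: ActorSystem, mat: Materializer, ec: ExecutionContext): Future[HttpResponse] =
  Source.single(request -> NotUsed)
    .via(Http().cachedHostConnectionPool[NotUsed]("example.com"))
    .runWith(Sink.head)
    .flatMap { case (tryResponse, _) => Future.fromTry(tryResponse) }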

My second attempt was Source.queue. This method creates a single flow to which all requests are enqueued; however, despite the documentation, SourceQueue's OverflowStrategy.backpressure did not work for me: when max-connections or max-open-requests is exceeded, akka-http throws an exception.

Can I accomplish backpressure in a host-level streaming fashion, keep a single client flow, and add new requests to it with a MergeHub?

This is my solution:

  private lazy val poolFlow: Flow[(HttpRequest, Promise[HttpResponse]), (Try[HttpResponse], Promise[HttpResponse]), Http.HostConnectionPool] =
    Http().cachedHostConnectionPool[Promise[HttpResponse]](host.split("http[s]?://").tail.head, port, connectionPoolSettings)

  val serverSink =
    poolFlow.async.toMat(Sink.foreach[(Try[HttpResponse], Promise[HttpResponse])] {
      case (Success(resp), p) => p.success(resp)
      case (Failure(e), p)    => p.failure(e)
    })(Keep.left)

  // Attach a MergeHub Source to the consumer. This will materialize to a
  // corresponding Sink.
  val runnableGraph: RunnableGraph[Sink[(HttpRequest, Promise[HttpResponse]), NotUsed]] =
    MergeHub.source[(HttpRequest, Promise[HttpResponse])](perProducerBufferSize = 16).to(serverSink)


  val toConsumer: Sink[(HttpRequest, Promise[HttpResponse]), NotUsed] = runnableGraph.run()


  protected[akkahttp] def executeRequest[T](httpRequest: HttpRequest, unmarshal: HttpResponse => Future[T]): Future[T] = {
    val responsePromise = Promise[HttpResponse]()
    Source.single(httpRequest -> responsePromise).runWith(toConsumer)
    responsePromise.future.flatMap(handleHttpResponse(_, unmarshal))
  }
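A hypothetical call site for the above (the /data URI and the String unmarshalling are illustrative; handleHttpResponse, host, port, and connectionPoolSettings are assumed to be defined elsewhere in the class):

import akka.http.scaladsl.unmarshalling.Unmarshal

// All callers funnel into the single MergeHub-backed stream, so each
// producer is backpressured once its perProducerBufferSize fills up.
val body: Future[String] =
  executeRequest(HttpRequest(uri = "/data"), resp => Unmarshal(resp.entity).to[String])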
Rabzu
  • Where do your requests come from, and where do the responses go? To be fully backpressured you'd either create `Source`s and `Sink`s via the provided adapter constructors, or custom actor-based versions, which would be more involved. Just use the stream DSL as much as possible: `Source.from(...).via(poolFlow).runWith(Sink.something)`. – André Rüdiger Dec 15 '17 at 10:28
  • It's end-to-end streaming. – Rabzu Dec 15 '17 at 10:41

3 Answers


I hope I understood your issue correctly; this is my solution (an adaptation of the Akka docs):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.http.scaladsl.unmarshalling.Unmarshal
import akka.stream.OverflowStrategy
import akka.stream.QueueOfferResult.Enqueued
import akka.stream.scaladsl.{Sink, Source, SourceQueueWithComplete}
import com.typesafe.config.ConfigFactory
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.{Failure, Success}

class Service(implicit sys: ActorSystem, ec: ExecutionContext) {
  private val maxOffers = 256
  private val bufferSize = ConfigFactory.load().getInt("akka.http.host-connection-pool.max-open-requests") // default is 32
  private val poolClientFlow = Http().cachedHostConnectionPool[Promise[HttpResponse]]("localhost", 7000)
  private val queue: SourceQueueWithComplete[(HttpRequest, Promise[HttpResponse])] =
    Source
      .queue[(HttpRequest, Promise[HttpResponse])](bufferSize, OverflowStrategy.backpressure, maxOffers)
      .via(poolClientFlow)
      .to(Sink.foreach({
        case (Success(r), p) => p.success(r)
        case (Failure(e), p) => p.failure(e)
      }))
      .run()

  def makeRequest: Future[String] = {
    val response = Http().singleRequest(HttpRequest().withUri("http://localhost:7000/tommy"))
    response.map(Unmarshal(_)).flatMap(_.to[String])
  }

  def makeRequestBackpressured: Future[String] = {
    val promise = Promise[HttpResponse]()
    val request = HttpRequest().withUri("/tommy")

    val response = queue.offer(request -> promise).flatMap {
      case Enqueued => promise.future
      case other => Future.failed(new RuntimeException(s"Queue offer error: $other"))
    }

    response.map(Unmarshal(_)).flatMap(_.to[String])
  }
}

Here makeRequest will start failing after 32 parallel queries, while makeRequestBackpressured will use the backpressure strategy configured in Source.queue.
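To exercise the difference, a sketch like the following (illustrative) drives the service with 100 concurrent calls; with makeRequest the pool would start rejecting requests past 32, while here the queue absorbs them:

// Assumes an implicit ActorSystem and ExecutionContext in scope, as the
// Service constructor requires. 100 concurrent offers stay below
// maxOffers (256), so offer itself never rejects; the queue's
// backpressure strategy bounds the load on the pool instead.
val service = new Service()
val all: Future[Seq[String]] =
  Future.traverse((1 to 100).toList)(_ => service.makeRequestBackpressured)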

Mitrakov Artem

Source.queue cannot propagate backpressure to the upstream producer; backpressure only happens between the queue itself and its downstream consumer.

ZJ Lyu

From the documentation of queue.offer:

Additionally, when using the backpressure overflowStrategy: if the buffer is full, the Future won't be completed until there is space in the buffer.

This is what I observe as well. You have to wait for the Future returned by offer before offering more elements; that gives you a kind of backpressure behaviour on the caller side.
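A minimal sketch of that caller-side pattern (queue is the SourceQueueWithComplete from the first answer; offerAll is a hypothetical helper):

import akka.Done
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.Materializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.{Future, Promise}

// mapAsync(1) waits for each offer's Future to complete before pulling
// the next element, so a full queue buffer suspends this producer.
def offerAll(requests: Seq[(HttpRequest, Promise[HttpResponse])])(
    implicit mat: Materializer): Future[Done] =
  Source(requests.toList)
    .mapAsync(parallelism = 1)(queue.offer)
    .runWith(Sink.ignore)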

Jan Rudert