I'm using spray-can 1.3.3 and Akka 2.3.9. The server code looks pretty standard.
application.conf:
spray.can {
  server {
    pipelining-limit = 10
    request-timeout = 50ms
    stats-support = on
  }
  ...
}
Boot:
import akka.actor.{ActorSystem, Props}
import akka.io.IO
import spray.can.Http

object Boot extends App {
  implicit val system = ActorSystem("on-spray-can")
  val service = system.actorOf(Props[MyServiceActor], "demo-service")
  IO(Http) ! Http.Bind(service, "0.0.0.0", port = 8080)
}
MyServiceActor:
import akka.actor.Actor
import scala.concurrent.Future
import spray.http._
import spray.http.MediaTypes._
import spray.routing._

class MyServiceActor extends Actor with MyService {
  def actorRefFactory = context
  def receive = runRoute(route)
}

trait MyService extends HttpService {
  implicit def executionContext = actorRefFactory.dispatcher

  val route: Route =
    path("do") {
      post {
        detach() {
          entity(as[String]) { body =>
            complete {
              val resp = processRequestAsync(body)
              convertToHttpResponseAsync(resp)
            }
          }
        }
      }
    }

  def processRequestAsync(body: String): Future[MyContext] = {
    // do some non-blocking work with an async client API (spray-client, ...)
  }

  def convertToHttpResponseAsync(resp: Future[MyContext]): Future[HttpResponse] = {
    resp.map { ctx =>
      // ... do some non-blocking ops
      HttpResponse(StatusCodes.OK, entity = HttpEntity(`application/json`, ctx.getData))
    }
  }
}
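For completeness, processRequestAsync does some non-blocking work through an async client API (e.g. spray-client). A simplified sketch of roughly what it looks like, living inside the same trait; MyContext and the downstream URL here are placeholders, not the real ones:

import spray.client.pipelining._

case class MyContext(getData: String) // placeholder for the real type

def processRequestAsync(body: String): Future[MyContext] = {
  // sendReceive builds a non-blocking HttpRequest => Future[HttpResponse] pipeline;
  // it picks up the trait's implicit actorRefFactory and executionContext
  val pipeline: HttpRequest => Future[HttpResponse] = sendReceive
  pipeline(Post("http://backend.internal/api", body)) // placeholder URL
    .map(response => MyContext(response.entity.asString))
}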
In my localhost environment the statistics look reasonable (at least the number of open connections); I'm using the ab and wrk benchmarks. But in production I'm getting a very different picture:
Total requests : 2434475
Open requests : 1847228
Max open requests : 1847228
Total connections : 1995540
Open connections : 142
Max open connections : 1239
Requests timed out : 189196
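These numbers come from the spray-can listener's statistics (that's what stats-support = on enables). Roughly, they can be read by sending Http.GetStats to the listener, as in this minimal sketch, assuming the default listener name /user/IO-HTTP/listener-0:

import akka.actor.ActorSystem
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._
import spray.can.Http
import spray.can.server.Stats

def printStats()(implicit system: ActorSystem): Unit = {
  import system.dispatcher // ExecutionContext for the Future callback
  implicit val timeout = Timeout(5.seconds)
  (system.actorSelection("/user/IO-HTTP/listener-0") ? Http.GetStats)
    .mapTo[Stats]
    .foreach { s =>
      println(s"open requests=${s.openRequests}, open connections=${s.openConnections}, timed out=${s.requestTimeouts}")
    }
}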
There is an nginx that receives the outside requests and dispatches them to the application.
The question is: what could be the root cause of so many open connections? What parameters, headers, or anything else should I pay attention to and investigate?
My application runs in a Docker container.