
I'm tweaking my Akka HTTP server and seeing very poor results when loading it with concurrent requests. Since I wasn't sure whether I had a hidden blocking IO call somewhere, I figured it would be worth testing the example project from the Akka HTTP site:

Alternatively, you can bootstrap a new sbt project with Akka HTTP already configured using the Giter8 template:

sbt -Dsbt.version=0.13.15 new https://github.com/akka/akka-http-scala-seed.g8

I've gone ahead and bootstrapped it as per the instructions and run the server on localhost:

/path/to/bootstrap/sbt run
[info] Running com.example.QuickstartServer
Server online at http://127.0.0.1:8080/
http://127.0.0.1:8080/

I ran some very simple tests with the "ab" (ApacheBench) tool:

Simple test performing sequential requests:

ab -n 1000 http://127.0.0.1:8080/users


Server Software:        akka-http/10.1.5
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /users
Document Length:        12 bytes

Concurrency Level:      1
Time taken for tests:   0.880 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      165000 bytes
HTML transferred:       12000 bytes
Requests per second:    1136.74 [#/sec] (mean)
Time per request:       0.880 [ms] (mean)
Time per request:       0.880 [ms] (mean, across all concurrent requests)
Transfer rate:          183.17 [Kbytes/sec] received

We see that the "Time per request" is 0.880 ms (mean) in this case.

Now I bumped the concurrency up to 5:

ab -n 1000 -c 5 http://127.0.0.1:8080/users

Concurrency Level:      5
Time taken for tests:   0.408 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      165000 bytes
HTML transferred:       12000 bytes
Requests per second:    2450.39 [#/sec] (mean)
Time per request:       2.040 [ms] (mean)
Time per request:       0.408 [ms] (mean, across all concurrent requests)
Transfer rate:          394.84 [Kbytes/sec] received

Now "Time per request" has increased quite sharply, to 2.040 ms (mean), though throughput is much higher.

and again bumping up to 50 concurrent requests:

ab -n 1000 -c 50 http://127.0.0.1:8080/users

Concurrency Level:      50
Time taken for tests:   0.277 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      165000 bytes
HTML transferred:       12000 bytes
Requests per second:    3607.35 [#/sec] (mean)
Time per request:       13.861 [ms] (mean)
Time per request:       0.277 [ms] (mean, across all concurrent requests)
Transfer rate:          581.26 [Kbytes/sec] received

Here the latency is much higher: 13.861 ms vs. 0.880 ms in the sequential case, an increase of roughly a factor of 16. (Note that ab computes "Time per request (mean)" as concurrency × total time ÷ completed requests, so 50 × 0.277 s / 1000 ≈ 13.85 ms, while the figure across all concurrent requests actually decreased to 0.277 ms.)

This simple server has no blocking IO.

I am wondering what I should configure in order to keep the latency as low as possible.

Avba

1 Answer


Indirect Answer

Examining the akka-http-quickstart-scala.g8 source code shows that all of the concurrent GET requests are querying an Actor to get the Users:

get {
  // every GET performs an ask (?) round-trip to the single userRegistryActor
  val users: Future[Users] =
    (userRegistryActor ? GetUsers).mapTo[Users]
  complete(users)
}

Therefore, all of the GetUsers messages sent to the Actor are queued in its mailbox, and the Actor processes them one at a time. So all of your concurrent connections are being synchronized and processed serially, not concurrently.

Although some configuration will fine-tune the performance of the ActorSystem on your particular machine, there is no getting around the fact that this particular project was not designed to minimize latency for multiple concurrent reads.

To get the kind of performance you are looking for, the project would have to be redesigned with something like a ReadWriteLock.
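As a sketch of that redesign (the `User`, `UserRegistry`, `getUsers`, and `addUser` names are hypothetical, not part of the quickstart project): a registry guarded by a ReadWriteLock lets many request handlers read concurrently, whereas the single actor forces every read through one mailbox:

```scala
import java.util.concurrent.locks.ReentrantReadWriteLock

final case class User(name: String, age: Int)

// A registry that allows many simultaneous readers; only writers are exclusive.
class UserRegistry {
  private val lock = new ReentrantReadWriteLock()
  private var users: Vector[User] = Vector.empty

  def getUsers: Vector[User] = {
    lock.readLock().lock() // many threads may hold the read lock at once
    try users
    finally lock.readLock().unlock()
  }

  def addUser(u: User): Unit = {
    lock.writeLock().lock() // writers block readers and other writers
    try users = users :+ u
    finally lock.writeLock().unlock()
  }
}
```

A route could then `complete(registry.getUsers)` directly, with no ask round-trip and no serialization of concurrent reads.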

Direct Answer

To maximize performance on particular hardware, you would have to adjust the Akka HTTP configuration and the actor system configuration.
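As a starting point (the values below are illustrative defaults, not tuned recommendations), the relevant knobs live in `application.conf`; the setting names come from Akka's and Akka HTTP's `reference.conf`:

```hocon
akka {
  actor {
    default-dispatcher {
      # Messages an actor processes before yielding its thread;
      # lower favors latency/fairness, higher favors throughput.
      throughput = 5
      fork-join-executor {
        parallelism-min = 8
        parallelism-factor = 3.0
        parallelism-max = 64
      }
    }
  }
  http.server {
    # Maximum number of concurrently open connections.
    max-connections = 1024
    # Maximum number of pipelined requests per connection.
    pipelining-limit = 1
  }
}
```

Benchmark after each change; the right values depend on your core count and workload.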

Ramón J Romero y Vigil
  • I added a simple endpoint which responds directly without going through any actor - pathPrefix("ping") { complete("pong") } - and I see the same behavior, i.e. degradation in latency when more than one client is connected – Avba Dec 08 '18 at 17:34
  • I think the root cause in my situation has nothing to do with actors, as in the additional test I commented on here. My project doesn't use actors at all; all routes are implemented inline, including the trivial "ping" example above. Something internal seems to slow the stack – Avba Dec 08 '18 at 17:36