
I'm running a microservice on AWS Elastic Beanstalk that logs its responses internally at 1-4ms, but the AWS dashboard shows an average of 68ms (not even counting latency to/from AWS). Is this normal? It seems odd that EB/ELB would add 60ms of latency to every request.

It's configured to use a Docker container, which appears to put nginx in front of the app. nginx doesn't seem to be configured to log the TTFB in its access logs, and that configuration is auto-generated by Amazon.
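For reference, nginx itself can log per-request timing with its built-in variables, which would show whether the extra time is spent inside the container or elsewhere. A minimal sketch of a custom `log_format` (the file path and the exact mechanism for overriding Elastic Beanstalk's generated nginx config are assumptions; they varied by platform version):

```nginx
# Hypothetical nginx config fragment (e.g. dropped in via .ebextensions).
# $request_time is the total time nginx spent on the request;
# $upstream_response_time is the time the proxied Docker container took.
log_format timed '$remote_addr "$request" $status '
                 'req_time=$request_time upstream=$upstream_response_time';
access_log /var/log/nginx/timed-access.log timed;
```

If `upstream` matches the app's internal 1-4ms while `req_time` is much larger, the delay is in nginx or in front of it; if both are small, it's being added by the ELB or the dashboard's measurement.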

In testing I tried both a t2.micro and a t2.large instance, and the instance size had no effect on the results. Is there something I can tweak on my end? I really need to get this under 10-20ms (not counting rtt/ping distance) for the service to be useful.

Tracker1
  • Turn on logging for the ELB, then make some requests, and correlate the timestamps between ELB and app, and see what you see. – Michael - sqlbot Mar 04 '17 at 02:43
  • If you use Multi-Container Docker Environment, it does not put nginx between docker container and ELB. You can try with it. – Cagatay Gurturk Mar 04 '17 at 15:33
  • We have three internal datacenters for part of our app that we can't legally put in the cloud... we're moving what we can into the cloud for convenience/scale... running the entire stack in ELB isn't an option. – Tracker1 Mar 05 '17 at 03:00
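The timestamp-correlation suggestion above can be sketched concretely: once ELB access logging is enabled, each log line breaks the request into the ELB's three timing fields, so you can see directly whether the 60ms is spent in the backend or on the load balancer side. A minimal parser (the sample log line and field values below are invented for illustration; the field order follows the classic ELB access-log format):

```python
# Sketch: pull the three timing fields out of a classic ELB access-log line.
# Field order: timestamp, elb name, client:port, backend:port,
# request_processing_time, backend_processing_time, response_processing_time,
# elb_status_code, backend_status_code, received_bytes, sent_bytes, "request", ...
def parse_elb_timings(line: str) -> dict:
    fields = line.split()
    return {
        "timestamp": fields[0],
        "request_s": float(fields[4]),   # ELB -> backend (connect/queueing)
        "backend_s": float(fields[5]),   # time the backend took to respond
        "response_s": float(fields[6]),  # ELB -> client
    }

# Hypothetical sample line for illustration:
sample = ('2017-03-04T02:43:00.123456Z my-elb 203.0.113.10:54321 '
          '10.0.0.5:80 0.000042 0.068000 0.000021 200 200 0 512 '
          '"GET /api/health HTTP/1.1" "curl/7.50" - -')

t = parse_elb_timings(sample)
print(f"backend took {t['backend_s'] * 1000:.1f} ms")  # → backend took 68.0 ms
```

If `backend_s` is ~68ms while the app logs 1-4ms, the time is being lost between the ELB and the app (e.g. in nginx); if `backend_s` is small, the overhead is in the ELB itself or in how the dashboard averages.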

1 Answer


It appears to have been a problem on Amazon's side. It was averaging 69ms on Friday; today (Monday morning) it's down to 3.9ms.

Tracker1