
I am pulling a ~3GB image from a private Docker registry and it takes roughly 10 minutes.

About 80% of the time is spent extracting the layers, so download/network does not seem to be the bottleneck. This runs in an AWS environment: both instances, the one pulling and the one hosting the registry, are on the same network in AWS. Both are t2.micro.

Any idea why it takes so long? When I pull the same image from my local dev machine the "extraction" takes less than 1 minute!

Is there an EBS I/O performance bottleneck? The pulling instance is "fresh", i.e. it was set up right before the pull.

Michael
  • **edit** The problem was related to my own network performance and not to Docker or the EC2 instance. – Michael Mar 18 '18 at 05:43

2 Answers


You are likely running out of I/O to your EBS volumes. Also check whether you are using gp2 or magnetic volumes, as magnetic in at least one AZ in us-east is VERY slow. However, gp2 also has a credit bucket that you might be exhausting.

Jason Martin

How is the load on your server? This seems to be an I/O-related issue. Please answer these questions, which will aid in further troubleshooting.

What is the load average on your server? Do you have any other I/O-heavy processes running?

Can you share a screenshot of your EBS volume status, at least the operational status? You can select the volume in question and look closely at its Status Checks report.

Do you have CloudWatch metrics enabled to monitor the I/O characteristics of your EC2 instances and volumes, such as VolumeWriteOps/VolumeWriteBytes and VolumeReadOps/VolumeReadBytes?

What is your Docker engine storage driver and filesystem? Are you using aufs or devicemapper, and ext4 or btrfs?

You can also perform a few I/O tests on your instance to make sure everything is working as expected. dd can be a good starting point. Tools like vmstat, iostat, and iotop can come in handy for troubleshooting as well.
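A minimal dd-based write check might look like the following; the test file path and size are illustrative, so adjust them for your instance. `conv=fdatasync` forces a flush before dd reports its throughput, so the number reflects the EBS volume rather than the page cache:

```shell
#!/bin/sh
# Sequential-write check: write 256 MiB of zeros and report throughput.
# conv=fdatasync flushes to disk before dd prints its summary line.
TESTFILE=/tmp/ddtest.img
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync
rm -f "$TESTFILE"
```

While this runs (or during the slow docker pull itself), `iostat -x 1` on the same instance will show per-device utilization and await times, which makes an I/O bottleneck easy to spot.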

Dina Kaiser