
I've got an I/O-intensive Java Tomcat application where I want to execute HtmlUnit tests in a more modern, clustered environment. I am therefore also considering Docker, which could provide useful features such as Swarm.

The tests run against an Oracle DB and also create traffic on the local I/O. I'm curious about one question:

When I run the tests on a plain install of the product with limited resources (the load15 factor rises above 2 on a 1-CPU system), test execution is ~35% slower than on the same environment (limited resources) using a dockerized approach to the test execution. If enough resources are available to keep the load factor below 1 (on a 1-CPU system), the running times of the plain install and the dockerized install are nearly the same.

I'm looking for ways to explain this. Is it related to some overlay filesystem caching mechanism? Where should I look when investigating this?

ferdy
    The exact answer would require a full analysis of the system, but an example of how this can happen is if you create a lot of I/O work in parallel. If you run into CPU limits, your I/O will be more serialised, and therefore faster. Or in more general terms: performance degradation profiles will vary depending on which bottleneck is hit. – biziclop Jul 25 '16 at 08:53

1 Answer


The answer is very specific to your application, so all you can do is test the various Docker storage setups.

First, test the app with local data storage: either a local volume or a local directory mounted into the container. This removes most of the overhead and should be as close to host I/O speed as you will get.
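As a minimal sketch of both options (the image name, host path, and volume name below are placeholders, not taken from your setup):

    # Bind-mount a host directory into the container -- closest to host I/O speed
    docker run -d --name tomcat-tests \
      -v /srv/testdata:/opt/app/data \
      my-tomcat-tests:latest

    # Or use a Docker-managed local volume instead
    docker volume create testdata
    docker run -d --name tomcat-tests-vol \
      -v testdata:/opt/app/data \
      my-tomcat-tests:latest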

If your app runs at full speed with that setup, then the Docker storage driver is likely the culprit, so try the alternative storage-driver options for the Docker daemon and see which works best (see the example daemon configuration after this list):

  • Don't use devicemapper in loopback mode, ever! --storage-driver=devicemapper --storage-opt dm.loop*
  • AUFS: --storage-driver=aufs
  • OverlayFS: --storage-driver=overlay2, or overlay on pre-4.x kernels.
  • Direct LVM: --storage-driver=devicemapper --storage-opt dm.datadev=/dev/dockervg/datalv --storage-opt dm.metadatadev=/dev/dockervg/metadatalv
    This also requires some LVM setup.
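For example, a sketch of how the driver can be switched, either with a daemon flag or via /etc/docker/daemon.json (overlay2 is just the example value here; restart the daemon after changing it):

    # Option 1: start the daemon with the flag
    dockerd --storage-driver=overlay2

    # Option 2: set it in /etc/docker/daemon.json
    {
      "storage-driver": "overlay2"
    }

    # Apply the change
    sudo systemctl restart docker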

Test with each driver, keeping in mind that your container data will be destroyed each time you swap drivers, and then use the driver that performs best.
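A rough example of that loop (run-tests.sh below is a placeholder for whatever actually drives your HtmlUnit suite):

    # Confirm which storage driver is actually active
    docker info --format '{{.Driver}}'

    # Containers from the previous driver are gone, so re-create and re-run the suite
    docker run --rm my-tomcat-tests:latest ./run-tests.sh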

Mounting the local directory will probably be the fastest option; if you choose that, you have to deal with the data being stored outside the container.

Matt
  • Thanks for answering. I'm very sorry about this, but I mixed it up: it's the dockerized approach that is 35% faster in test execution than the plain installation on limited resources... I just want to understand why. – ferdy Jul 25 '16 at 11:47
  • @ferdy Oh, I see what you mean... but that's an odd situation. Again, this is very specific to your app and your system setup so you might not get a good answer. Could you add some more detail about your limited resources setup compared to the other setup? – Matt Jul 25 '16 at 22:57
  • You might get more help moving this question to the Unix & Linux Stack Exchange site, or possibly Server Fault, once you've added the system detail. – Matt Jul 25 '16 at 22:57