
I've been running my containerized Spring Boot app on AWS Elastic Container Service (ECS) for months without any issues. Yesterday, out of nowhere, newly started containers began to fail because the memory calculator was unable to calculate the memory configuration:

2023-06-28T12:21:32.605Z    Setting Active Processor Count to 2
2023-06-28T12:21:32.878Z    Calculating JVM memory based on 619708K available memory
2023-06-28T12:21:32.878Z    For more information on this calculation, see https://paketo.io/docs/reference/java-reference/#memory-calculator
2023-06-28T12:21:32.878Z    unable to calculate memory configuration
2023-06-28T12:21:32.878Z    fixed memory regions require 635118K which is greater than 619708K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=123118K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 250 threads
2023-06-28T12:21:32.879Z    ERROR: failed to launch: exec.d: failed to execute exec.d file at path '/layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/memory-calculator': exit status 1
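
For reference, the 635118K of "fixed" memory regions in that last line is simply the sum of the flags it lists, before any heap is sized at all, which is why it cannot fit into the 619708K the calculator detected:

-XX:MaxDirectMemorySize=10M     ->  10 * 1024K =  10240K
-XX:MaxMetaspaceSize=123118K    ->                123118K
-XX:ReservedCodeCacheSize=240M  -> 240 * 1024K = 245760K
-Xss1M * 250 threads            -> 250 * 1024K = 256000K
                                                 --------
                                        total    635118K  >  619708K available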

As the log output suggests, I built the container image with the Spring Boot Maven plugin (`spring-boot:build-image`) and have been running this exact image version for many weeks without problems.

Why is the memory calculator suddenly unable to calculate the memory configuration when it has done so successfully many times before? Naturally, the container configuration/task definition hasn't changed...

IggyBlob
  • Nothing has changed recently with the memory calculator, and if you haven't changed buildpack versions then it would be the same memory calculator code running now as before. Does 619708K (~620M) seem right for your environment? I can say that the standard memory calculator configuration is going to need 1G of RAM, which is why it's failing. You can tune things down, but that requires manual configuration (see the environment-variable sketch at the end of this thread). If you've been running with less than 1G, maybe your tuning isn't being applied? Are you setting `JAVA_TOOL_OPTIONS` (that's where you'd tune JVM memory), and is it being applied in your environment? – Daniel Mikusa Jun 29 '23 at 13:47
  • Thanks for clarifying. I indeed haven't touched the image since April 24, so the memory calculator implementation could not have changed. For me, the only explanation is that AWS changed something under the hood. I've been running the container (ECS task) with 500M allocated memory ever since, so the ~620M seems a bit odd. However, before the failure, the logs show 3.5G available memory (despite having 500M configured). This could be an indicator that AWS didn't respect the task config, but does now. – IggyBlob Jun 30 '23 at 12:25
  • To fix the failing memory calculator, I increased the task memory to 1G; now the logs show ~1.6G available memory. Still not the exact configured amount, but at least it works. I do not do any manual tuning, like setting `JAVA_TOOL_OPTIONS`. Would you recommend doing so to align the logged available memory (1.6G) to the actual configured task memory (1G)? – IggyBlob Jun 30 '23 at 12:30
  • I don't know why it would report more than 1G if that is the container memory limit you assigned. Maybe that is a reporting issue, like it is including OS-level memory in that report? If you give it a 1G memory limit, it will configure the JVM to use as close to 1G as possible. – Daniel Mikusa Jun 30 '23 at 13:25
  • Since there seems to be a discrepancy between the configured memory (1G) and the available memory (1.6G), do you think it would make sense to set some 40% headroom via `BPL_JVM_HEAD_ROOM=40`? – IggyBlob Jun 30 '23 at 13:53
  • It's hard to say without knowing why there is a difference. You can set head room and tell the memory calculator to use less than all of the available memory, but if you don't know why it's that way then you could be underprovisioning your apps and wasting memory. – Daniel Mikusa Jul 01 '23 at 02:47
  • I see. To investigate this further, please confirm that the memory calculator detects the available memory as follows: First, it checks `/sys/fs/cgroup/memory/memory.limit_in_bytes`. If that fails, it checks `/proc/meminfo`, and if that fails too it falls back to 1G. – IggyBlob Jul 01 '23 at 14:50
  • Yes, basically. It checks `/sys/fs/cgroup/memory/memory.limit_in_bytes` for cgroups v1; if that's not set, then `/sys/fs/cgroup/memory.max` for cgroups v2. If still not set, it looks at `/proc/meminfo`, and if all else fails it defaults to 1G. If there's something else we could check to help in your env, feel free to open a GitHub issue here -> https://github.com/paketo-buildpacks/libjvm/issues – Daniel Mikusa Jul 05 '23 at 03:00
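
To make that lookup order concrete, below is a rough sketch of it in Java. It only illustrates the order described in the comment above and is not the actual implementation (the real helper is part of the paketo-buildpacks/libjvm project and is not written in Java); the "unlimited" threshold and the /proc/meminfo field used here are assumptions.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MemoryLimitLookup {

    // cgroup v1 reports a value near Long.MAX_VALUE when no limit is set; the threshold is an assumption
    static final long UNLIMITED_THRESHOLD = Long.MAX_VALUE / 2;
    static final long ONE_GIB = 1024L * 1024L * 1024L;

    public static void main(String[] args) {
        System.out.println("available memory (bytes): " + availableMemory());
    }

    static long availableMemory() {
        // 1. cgroup v1 hard limit
        Long v1 = readBytes(Path.of("/sys/fs/cgroup/memory/memory.limit_in_bytes"));
        if (v1 != null && v1 < UNLIMITED_THRESHOLD) {
            return v1;
        }

        // 2. cgroup v2 hard limit ("max" means no limit and is skipped)
        Long v2 = readBytes(Path.of("/sys/fs/cgroup/memory.max"));
        if (v2 != null) {
            return v2;
        }

        // 3. host memory from /proc/meminfo (field name is an assumption)
        Long memInfoKb = readMemInfoKb("MemAvailable");
        if (memInfoKb != null) {
            return memInfoKb * 1024L;
        }

        // 4. last resort: default to 1G
        return ONE_GIB;
    }

    static Long readBytes(Path path) {
        try {
            String value = Files.readString(path).trim();
            if (value.isEmpty() || value.equals("max")) {
                return null; // not set / unlimited
            }
            return Long.parseLong(value);
        } catch (IOException | NumberFormatException e) {
            return null; // file missing or unreadable -> try the next source
        }
    }

    static Long readMemInfoKb(String field) {
        try {
            for (String line : Files.readAllLines(Path.of("/proc/meminfo"))) {
                if (line.startsWith(field + ":")) {
                    // e.g. "MemAvailable:    3567890 kB" -> 3567890
                    return Long.parseLong(line.split("\\s+")[1]);
                }
            }
        } catch (IOException | NumberFormatException e) {
            // fall through to the default
        }
        return null;
    }
}

In a container with a correctly applied memory limit, step 1 or 2 should win, so a change in how the platform exposes cgroup limits directly changes the "available memory" figure the calculator logs.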

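The manual tuning mentioned in the comments happens entirely through environment variables on the running container (e.g. in the ECS task definition's environment section). A minimal sketch, with purely illustrative values rather than recommendations:

JAVA_TOOL_OPTIONS=-XX:ReservedCodeCacheSize=64M -Xss512k
BPL_JVM_THREAD_COUNT=50
BPL_JVM_HEAD_ROOM=40

Lowering the code cache size, stack size, or assumed thread count shrinks the fixed regions shown in the error, while `BPL_JVM_HEAD_ROOM` tells the calculator to leave a percentage of the detected memory unused.
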
0 Answers