2

I'm using the latest CoreOS AMI (ami-0fc25a0b6bd986d03) on a small t2.nano instance.

This instance only has about 500MB of memory. Unfortunately, CoreOS immediately consumes ~240MB for a tmpfs, which it mounts at /tmp as shown below. This seems to completely eat my shared memory, and I cannot launch containers. Is there any way to reduce the size of this tmpfs? Or perhaps some way to mount /tmp onto the root filesystem instead?
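
For example, I was hoping something like the following systemd drop-in would shrink it (an untested sketch on my part, assuming /tmp here is provided by systemd's standard tmp.mount unit):

$ sudo mkdir -p /etc/systemd/system/tmp.mount.d
$ cat /etc/systemd/system/tmp.mount.d/size.conf
[Mount]
# Cap /tmp at 64M instead of the tmpfs default of half of RAM
Options=mode=1777,strictatime,size=64M
$ sudo systemctl daemon-reload && sudo systemctl restart tmp.mount

but I don't know whether Container Linux supports overriding tmp.mount this way.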

I'm considering abandoning CoreOS solely because I cannot get it to work with small instance sizes, which is a shame since I chose it specifically because it was supposed to be a tiny OS that gets out of the way and lets me run containers...

$ free -h
              total        used        free      shared  buff/cache   available
Mem:          479Mi       232Mi       7.0Mi       199Mi       238Mi        34Mi
Swap:            0B          0B          0B

$ df -h
Filesystem       Size  Used Avail Use% Mounted on
devtmpfs         219M     0  219M   0% /dev
tmpfs            240M     0  240M   0% /dev/shm
tmpfs            240M  488K  240M   1% /run
tmpfs            240M     0  240M   0% /sys/fs/cgroup
/dev/xvda9        14G  2.8G  9.9G  22% /
/dev/mapper/usr  985M  791M  143M  85% /usr
none             240M  200M   41M  84% /run/torcx/unpack
tmpfs            240M     0  240M   0% /media
tmpfs            240M     0  240M   0% /tmp
/dev/xvda6       108M  112K   99M   1% /usr/share/oem
/dev/xvda1       127M   53M   74M  42% /boot
tmpfs             48M     0   48M   0% /run/user/500

Edit: Perhaps relevant: RancherOS apparently requires a minimum of 1GB of RAM to launch, although their GitHub discussions mention values from 512MB up to 2GB. It's unclear to me why these "tiny OSes" have such relatively high RAM requirements. For context, Debian's minimum is 256MB for a headless install.

Hamy
  • Try checking which process is using your shared memory: /proc/sysvipc/shm – c4f4t0r Aug 11 '19 at 21:26
  • Just had a similar experience: recent CoreOS AMI on a t3a.nano with just one openjdk11 container and a rather small Spring Boot app, which was killed almost instantly (Error 137). My first thought was that the limits were somehow imposed by Docker or managed improperly by the JVM, but it was indeed just lack of memory on the host. I switched to Amazon Linux 2, where the same app runs perfectly; it seems to be a good choice for these tiny instance sizes. – Till Kuhn Nov 22 '19 at 09:35
  • A tmpfs only uses memory when it actually has to store data, and `/tmp` is empty according to the `df` you posted. That suggests that whatever problem you have, the cause is not what you appear to think it is. – womble Dec 24 '19 at 07:19

1 Answer

1

That's due to torcx, which lets you select the version of Docker you want on the system. torcx unpacks the selected Docker image into a tmpfs. It's technically possible to get around this, e.g. by disabling torcx and providing your own container runtime, but there aren't any officially supported ways to do so.
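
If you want to experiment anyway, torcx decides what to unpack based on a profile manifest, so an empty profile should leave nothing to put in that tmpfs. A rough, unsupported sketch (the profile name "empty" is my own, and you'd then have to supply Docker or another container runtime yourself):

$ cat /etc/torcx/profiles/empty.json
{
  "kind": "profile-manifest-v0",
  "value": { "images": [] }
}
$ echo empty | sudo tee /etc/torcx/next-profile
$ sudo reboot

After the reboot, /run/torcx/unpack should be empty, but so is your container runtime, so treat this only as a starting point.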

bgilbert