
I've been using LXC containers for a few years and have recently expanded the types of applications that run inside of container environments.

I'm starting to limit resources at the container level now with configuration parameters like:

lxc.cgroup.cpuset.cpus                 = 16-23
lxc.cgroup.memory.limit_in_bytes       = 30720M
lxc.cgroup.memory.memsw.limit_in_bytes = 32768M
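For context, the arithmetic behind these limits is simple: `memsw.limit_in_bytes` is the ceiling for memory plus swap, so the swap allowance is the difference between the two values (a small sketch using the numbers from the snippet above):

```shell
# Sketch of the limit arithmetic from the config above:
# cpuset.cpus 16-23 pins the container to 8 cores, and
# memsw (memory + swap) minus memory gives the swap allowance.
mem_mib=30720        # lxc.cgroup.memory.limit_in_bytes
memsw_mib=32768      # lxc.cgroup.memory.memsw.limit_in_bytes
swap_mib=$((memsw_mib - mem_mib))
echo "RAM limit: ${mem_mib} MiB, swap allowance: ${swap_mib} MiB"
# -> RAM limit: 30720 MiB, swap allowance: 2048 MiB
```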

I'm working with a developer who's using a "tuning" tool (pgtune) to generate a configuration for a Postgres database that will run inside of the LXC environment. This tool is older and is not quite VM or container-aware. It makes sizing recommendations based on the RAM visible to the system.

That's when I realized that having all of the host system's RAM (96GB) visible to the container instance could be harmful in some cases.

Is there any workaround for this, or is it just a given when using LXC?

ewwhite

2 Answers


Currently the proc filesystem is not "container aware" in mount namespaces, so tools that base their logic on it will read host-wide values instead of container-specific ones.

But work is in progress: the project is called lxcfs, and a few releases are already available. It is a user-space workaround that bind-mounts files over /proc so that values look consistent inside a container.
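To see the discrepancy lxcfs addresses, you can compare what /proc reports with the actual cgroup ceiling from inside the container. A rough sketch; the cgroup path below assumes a v1 hierarchy and may differ on your host:

```shell
# What a naive tool sees: MemTotal in /proc/meminfo
# (host-wide when lxcfs is not in place).
awk '/^MemTotal:/ {print "/proc/meminfo MemTotal: " $2 " kB"}' /proc/meminfo

# What actually applies: the cgroup memory ceiling (cgroup v1 path assumed;
# skipped silently if this layout is not present).
limit_file=/sys/fs/cgroup/memory/memory.limit_in_bytes
[ -r "$limit_file" ] && echo "cgroup limit: $(cat "$limit_file") bytes" || true
```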

Xavier Lucas

There appears to be no way around it. LXC uses cgroups for its RAM limiting, but non-virtualization-aware tools read stats from files like /proc/meminfo, which is not namespaced by LXC and therefore reports the host's total RAM. You can see the same behavior with free or top when run inside the container.
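One pragmatic mitigation, given the above, is to read the cgroup ceiling yourself and pass it to the tuning tool instead of letting it probe /proc/meminfo. The `-M` override flag shown here is from older pgtune builds and the cgroup path assumes a v1 hierarchy, so treat this as a sketch and verify both against your versions:

```shell
# Read the real memory ceiling from the cgroup (v1 path assumed),
# falling back to the 30 GiB limit from the question if unreadable.
limit_file=/sys/fs/cgroup/memory/memory.limit_in_bytes
limit_bytes=$( (cat "$limit_file" 2>/dev/null) || echo $((30720 * 1024 * 1024)) )

# Hand the explicit figure to the tuning tool rather than letting it
# autodetect (guarded so the sketch is a no-op where pgtune is absent).
command -v pgtune >/dev/null &&
  pgtune -i postgresql.conf -o postgresql.conf.tuned -M "$limit_bytes"
echo "tuning for ${limit_bytes} bytes"
```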

Source: http://fabiokung.com/2014/03/13/memory-inside-linux-containers/

Nathan C