I'm trying to understand what the whole point of containers is. I'd say it has to do with the concept of isolation, which would bring these benefits:
- Malicious users who take control of a process, such as a vulnerable web server, are kept from damaging the rest of the machine, so that's for security;
- If a process crashes due to a bug, it wouldn't be able to take down the whole machine;
- Each process thinks it's using the whole machine's resources for itself, and that is somehow good (I suppose it enables it to perform optimally?)
But nowadays all these things are built into any decent operating system. Was this not the case before these tools were developed? Aren't these fundamental concerns when developing multitasking systems?
We already have virtual memory, so whenever I try to access another process's memory I get a segfault. AFAIK a Linux process already thinks it's using all the machine's resources for itself; it's just the kernel fooling it into thinking so.
What does LXC bring to the table then? Could it be thought of as a way to "group" processes and control them as if they were one?