I'm using Docker to provide a CentOS 7 integration-testing environment and need an image with systemd running. Everything seemed fine until a coworker tried to use the setup inside a CentOS 7 VM. I was able to reproduce the issue, but I have been unable to determine why Docker behaves differently on my physical CentOS 7 box than in a CentOS 7 VMware VM.

I'm running VMware Workstation 15.5.7 and am following this page as a guide: Docker CentOS

My Dockerfile:

$ cat dockerfiles/centos7_test.dockerfile 
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
$ docker build --rm -t local/c7-systemd -f dockerfiles/centos7_test.dockerfile .

Running the image on my physical box:

$ docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro local/c7-systemd </dev/null &>/tmp/docker_c7.log &
[2] 6129
$ docker exec -it 64b /bin/bash
[root@64bd3992ceaf /]# systemctl status
● 64bd3992ceaf
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Fri 2021-04-23 13:41:40 UTC; 36s ago
   CGroup: /system.slice/docker-64bd3992ceaf536f43569b86071e143800133d0415fc8a51f039b587af1c2516.scope
           ├─ 1 /usr/sbin/init
           ├─21 /bin/bash
           ├─36 systemctl status
           ├─37 more
           └─system.slice
             └─systemd-journald.service
               └─19 /usr/lib/systemd/systemd-journald
[root@64bd3992ceaf /]# journalctl
-- Logs begin at Fri 2021-04-23 13:41:40 UTC, end at Fri 2021-04-23 13:41:40 UTC. --
Apr 23 13:41:40 64bd3992ceaf systemd-journal[19]: Runtime journal is using 4.0M (max allowed 8.0M, trying to leave 9.6M free of 59.9M available → current limit 8.0M).
Apr 23 13:41:40 64bd3992ceaf systemd-journal[19]: Journal started
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Started Create Volatile Files and Directories.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Update UTMP about System Boot/Shutdown is not active.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Dependency failed for Update UTMP about System Runlevel Changes.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Job systemd-update-utmp-runlevel.service/start failed with result 'dependency'.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Reached target System Initialization.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Started Daily Cleanup of Temporary Directories.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Reached target Timers.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Listening on D-Bus System Message Bus Socket.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Reached target Sockets.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Reached target Basic System.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Reached target Multi-User System.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Startup finished in 44ms.
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Starting Cleanup of Temporary Directories...
Apr 23 13:41:40 64bd3992ceaf systemd[1]: Started Cleanup of Temporary Directories.
[root@64bd3992ceaf /]# 

Running the image in my VM:

$ docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro local/c7-systemd </dev/null &>/tmp/docker_c7.log &
[3] 50751
$ docker exec -it 622 /bin/bash
[root@62213a94ce67 /]# systemctl status
● 62213a94ce67
    State: degraded
     Jobs: 0 queued
   Failed: 4 units
    Since: Fri 2021-04-23 13:39:56 UTC; 24s ago
   CGroup: /system.slice/docker-62213a94ce677c6fdef8af6f5f4dc65b1ae8dd8853745705f0efccbe15270310.scope
           ├─ 1 /usr/sbin/init
           ├─23 /bin/bash
           ├─36 systemctl status
           └─37 systemctl status
[root@62213a94ce67 /]# journalctl 
No journal files were found.
-- No entries --
[root@62213a94ce67 /]# 

I'm trying to understand why systemd ends up degraded when Docker runs in the VM but works correctly on my physical box.
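For anyone debugging something similar: the four failed units can be listed from inside the container with a standard systemctl invocation (a general diagnostic step, not something I captured in my original session):

$ docker exec -it 622 systemctl --failed    # same as: systemctl list-units --state=failed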


1 Answer


In writing this question, I discovered the answer: I had disabled SELinux on my CentOS 7 box at some point in the past, but had not done so in the VM.
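A quick way to spot a host-level difference like this is to compare the SELinux mode on both machines (getenforce is standard SELinux tooling; it prints one of Enforcing, Permissive, or Disabled):

$ getenforce    # run on both the physical box and the VM and compare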

The key was to look at the journal log (on the VM itself, not in the Docker container) as root (I had run it as non-root before, and this message wasn't shown).

$ sudo journalctl -r
...
Apr 23 06:57:26 localhost.localdomain python[51693]: SELinux is preventing /usr/lib/systemd/systemd from add_name access on the directory systemd-tmpfiles-setup.service.
                                                     
                                                     *****  Plugin catchall_boolean (89.3 confidence) suggests   ******************
                                                     
                                                     If you want to allow container to manage cgroup
                                                     Then you must tell SELinux about this by enabling the 'container_manage_cgroup' boolean.
                                                     
                                                     Do
                                                     setsebool -P container_manage_cgroup 1
                                                     
                                                     *****  Plugin catchall (11.6 confidence) suggests   **************************
                                                     
                                                     If you believe that systemd should be allowed add_name access on the systemd-tmpfiles-setup.service directory by default.
                                                     Then you should report this as a bug.
                                                     You can generate a local policy module to allow this access.
                                                     Do
                                                     allow this access for now by executing:
                                                     # ausearch -c 'systemd' --raw | audit2allow -M my-systemd
                                                     # semodule -i my-systemd.pp
...

I set SELinux to permissive mode with the command below, and now I get the same result in the VM as on my physical box:

$ sudo setenforce 0
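Note that setenforce 0 only lasts until the next reboot. To make the change persistent (an extra step, not part of what I originally ran), the mode can be set in /etc/selinux/config:

$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config    # takes effect at boot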

Others might want to use the fix suggested in the log message above instead, which keeps SELinux enforcing.
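That would be the following (the boolean name is taken straight from the SELinux alert; -P makes it persistent across reboots):

$ sudo setsebool -P container_manage_cgroup 1
$ sudo setenforce 1    # re-enable enforcing mode if it was lowered earlier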
