I have installed 64-bit Ubuntu on my Raspberry Pi 4, and it seems to me that each pod restarts frequently:
microk8s.kubectl describe pod redis-c49fd5d65-g8ghn
Name: redis-c49fd5d65-g8ghn
Namespace: default
Priority: 0
Node: raspberrypi4-docker1/192.168.0.45
Start Time: Thu, 10 Sep 2020 08:11:38 +0000
Labels: app=redis
pod-template-hash=c49fd5d65
Annotations: <none>
Status: Running
IP: 10.1.42.201
IPs:
IP: 10.1.42.201
Controlled By: ReplicaSet/redis-c49fd5d65
Containers:
redis:
Container ID: containerd://9b8300e456691025ccbfbee588a52069a1fa25ffa6f0c1b5f5f652227a1172f3
Image: hypriot/rpi-redis:latest
Image ID: sha256:2e0128f189c5b19a15001e48fac1d0326326cebb4195abf6a56519e374636f1f
Port: 6379/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 07 Mar 2021 10:15:57 +0000
Last State: Terminated
Reason: Unknown
Exit Code: 255
Started: Sun, 07 Mar 2021 09:24:16 +0000
Finished: Sun, 07 Mar 2021 10:14:43 +0000
Ready: True
Restart Count: 4579
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dn4bk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-dn4bk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dn4bk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 8d kubelet Pod sandbox changed, it will be killed and re-created.
Normal SandboxChanged 8d kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 8d kubelet Container image "hypriot/rpi-redis:latest" already present on machine
Normal Created 8d kubelet Created container redis
Normal Started 8d kubelet Started container redis
Normal SandboxChanged 8d kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 8d kubelet Container image "hypriot/rpi-redis:latest" already present on machine
Normal Created 8d kubelet Created container redis
Normal Started 8d kubelet Started container redis
Normal SandboxChanged 8d kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 8d kubelet Container image "hypriot/rpi-redis:latest" already present on machine
Normal Created 8d kubelet Created container redis
Normal Started 8d kubelet Started container redis
...
Normal SandboxChanged 108m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 107m kubelet Container image "hypriot/rpi-redis:latest" already present on machine
Normal Created 107m kubelet Created container redis
Normal Started 107m kubelet Started container redis
Normal SandboxChanged 101m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 101m kubelet Container image "hypriot/rpi-redis:latest" already present on machine
Normal Created 101m kubelet Created container redis
Normal Started 101m kubelet Started container redis
Normal SandboxChanged 49m kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 49m kubelet Container image "hypriot/rpi-redis:latest" already present on machine
Normal Started 49m kubelet Started container redis
Normal Created 49m kubelet Created container redis
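For completeness, these are the diagnostics I intend to run next (the pod and node names match the output above; `logs --previous` only works while the logs of the last terminated container are still retained):

```shell
# Logs of the previously terminated container instance (exit code 255 above)
microk8s.kubectl logs redis-c49fd5d65-g8ghn --previous

# Restart counts across all namespaces, to see whether the problem is cluster-wide
microk8s.kubectl get pods -A -o wide

# Node conditions (memory/disk pressure can also trigger sandbox restarts)
microk8s.kubectl describe node raspberrypi4-docker1

# Kubelet and containerd logs around the time of a restart
journalctl -u snap.microk8s.daemon-kubelet -u snap.microk8s.daemon-containerd --since "-2h"
```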
I have read that this error can be the result of a networking failure. What I could find are DNS error messages in my journalctl logs:
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:52 raspberrypi4-docker1 systemd-resolved[1760]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Mar 07 11:24:55 raspberrypi4-docker1 microk8s.daemon-kubelet[4953]: E0307 11:24:55.190320 4953 summary_sys_containers.go:47] Failed to get system container stats for "/systemd/system.slice": failed to get cgroup stats for "/systemd/system.slice": failed to get container info for "/systemd/system.slice": unknown container "/systemd/system.slice"
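To check whether DNS inside the cluster is actually broken (as opposed to just the host's resolver), I plan to resolve the in-cluster API service name from a throwaway pod. The busybox image here is an arbitrary choice; any image that ships nslookup would do (busybox 1.28 is often suggested because later versions have a broken nslookup):

```shell
# One-off pod that resolves the in-cluster service name and is removed afterwards
microk8s.kubectl run dnstest --rm -it --restart=Never \
  --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local

# What systemd-resolved on the node itself is configured with
resolvectl status
```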
The output of microk8s inspect:
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Building the report tarball
Report tarball is at /var/snap/microk8s/2038/inspection-report-20210307_113359.tar.gz
How can I prevent the containers from restarting?