
I have a cluster of four Raspberry Pi 4 Model B boards, each running Docker and Kubernetes. The versions of both are the same on every node and are as follows:

Docker:

Client:
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.11.6
 Git commit:        4c52b90
 Built:             Fri, 13 Sep 2019 10:45:43 +0100
 OS/Arch:           linux/arm
 Experimental:      false

Server:
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.6
  Git commit:       4c52b90
  Built:            Fri Sep 13 09:45:43 2019
  OS/Arch:          linux/arm
  Experimental:     false

Kubernetes:

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm"}

My problem occurs when a Kubernetes pod is deployed on machine "02". Only on that machine does the pod never reach the Running state, and the logs say:

standard_init_linux.go:207: exec user process caused "exec format error"

On the other hand, when the same pod is deployed on any of the other three Raspberry Pis, it reaches the Running state correctly and does what it is supposed to do. I have looked at similar questions, but none seem to match my problem. My Dockerfile and .yaml file are below.
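Before that, a note on the error itself: "exec format error" almost always means the process being launched was built for a different CPU architecture than the node it runs on. A minimal cross-check, assuming shell access to the nodes and kubectl access from a control machine, could look like this:

# Architecture each node reports to the API server
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture

# Kernel architecture, run directly on each Raspberry Pi
uname -m

# OS/architecture of the image actually present on machine 02
docker image inspect ohserk/mongodb:latest --format '{{.Os}}/{{.Architecture}}'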

Dockerfile

FROM ubuntu@sha256:f3113ef2fa3d3c9ee5510737083d6c39f74520a2da6eab72081d896d8592c078
CMD ["bash"]

YAML file

apiVersion: v1
kind: Pod
metadata:
  labels:
    name: mongodb
  name: mongodb
spec:
  nodeName: diamond02.xxx.xx
  containers:
    - name: mongodb
      image: ohserk/mongodb:latest
      imagePullPolicy: "IfNotPresent"
      ports:
        - containerPort: 27017
          protocol: TCP
      command:
        - "sleep"
        - "infinity"

In closing: when I run kubectl apply -f file.yaml with the pod pinned to machine 02, it never reaches Running, while on any other machine it starts correctly. I watch the result with:

kubectl get pod -w -o wide

I could work around this by always specifying exactly which Raspberry Pi the pod should be deployed to, but that doesn't seem like a proper solution. Do you know what I could do in this case?
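
For completeness, one stop-gap that avoids hard-coding node names in every manifest is to take the suspect node out of scheduling while it is being investigated; this is only a sketch of a workaround, not a fix for the underlying problem:

# Mark machine 02 unschedulable so the scheduler avoids it (already-running pods are not touched)
kubectl cordon diamond02.xxx.xx

# Put it back once the node behaves again
kubectl uncordon diamond02.xxx.xx

Note that this only helps for scheduler-placed pods; a pod that sets nodeName directly bypasses the scheduler and would still land on the cordoned node.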

EDIT 1

Here is the journalctl output from machine 02 just after the deploy:

Nov 05 08:33:39 diamond02.xxx.xx kubelet[1563]: I1105 08:33:39.744957    1563 topology_manager.go:200] "Topology Admit Handler"
Nov 05 08:33:39 diamond02.xxx.xx systemd[1]: Created slice libcontainer container kubepods-besteffort-pod6a0d621a_55ab_449a_91cb_a88ac10df0cf.slice.
Nov 05 08:33:39 diamond02.xxx.xx kubelet[1563]: I1105 08:33:39.906608    1563 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqs4\" (UniqueName: \"kubernetes.io/projected/6a0d621a-55ab-449a-91cb-a88ac10df0cf-kube-api-access-trqs4\") pod \"mongodb\" (UID: \"6a0d621a-55ab-449a-91cb-a88ac10df0cf\") "
Nov 05 08:33:40 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-03b99c20a2e9dd9b6f06a99625272c899d6e7a36e2071e268b326dfee54476c8\x2dinit-merged.mount: Succeeded.
Nov 05 08:33:40 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:40.702427163Z" level=info msg="shim docker-containerd-shim started" address=/containerd-shim/moby/a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4/shim.sock debug=false pid=15599
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: libcontainer-15607-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: libcontainer-15607-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15607_systemd_test_default.slice.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15607_systemd_test_default.slice.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Started libcontainer container a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15648-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15648-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15648_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15648_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15654-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15654-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15654_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15654_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15661-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15661-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15661_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15661_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx kubelet[1563]: I1105 08:33:41.673178    1563 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4"
Nov 05 08:33:41 diamond02.xxx.xx kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth27f79edb: link becomes ready
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered blocking state
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered disabled state
Nov 05 08:33:41 diamond02.xxx.xx kernel: device veth27f79edb entered promiscuous mode
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered blocking state
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered forwarding state
Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: IAID 58:9b:78:38
Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: adding address fe80::5979:f76a:862:765a
Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: Joining mDNS multicast group on interface veth27f79edb.IPv6 with address fe80::5979:f76a:862:765a.
Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: New relevant interface veth27f79edb.IPv6 for mDNS.
Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: Registering new address record for fe80::5979:f76a:862:765a on veth27f79edb.*.
Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: IAID 58:9b:78:38
Nov 05 08:33:41 diamond02.xxx.xx kubelet[1563]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.244.3.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xcaa76c), "name":"cbr0", "type":"bridge"}
Nov 05 08:33:41 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b\x2dinit-merged.mount: Succeeded.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b\x2dinit-merged.mount: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:42.283254485Z" level=info msg="shim docker-containerd-shim started" address=/containerd-shim/moby/1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234/shim.sock debug=false pid=15718
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15725-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15725-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15725_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15725_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Started libcontainer container 1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15749-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15749-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15749_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15749_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: soliciting an IPv6 router
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15755-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15755-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15755_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15755_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: docker-1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.scope: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: docker-1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.scope: Consumed 39ms CPU time.
Nov 05 08:33:42 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: soliciting a DHCP lease
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15766-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15766-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15766_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15766_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15778-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15778-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15778_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15778_systemd_test_default.slice.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: libcontainer-15784-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: libcontainer-15784-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15784_systemd_test_default.slice.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15784_systemd_test_default.slice.
Nov 05 08:33:43 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:43.097966208Z" level=info msg="shim reaped" id=1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234
Nov 05 08:33:43 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:43.107322948Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 05 08:33:43 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:43 diamond02.xxx.xx avahi-daemon[389]: Registering new address record for fe80::cc12:58ff:fe9b:7838 on veth27f79edb.*.
Nov 05 08:33:44 diamond02.xxx.xx kubelet[1563]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.3.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}I1105 08:33:44.040009    1563 scope.go:110] "RemoveContainer" containerID="1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234"
  • My first guess would've been that you are pulling the x86 image instead of the ARM one, but the commit hash is identical to the ARM image. Do you see anything interesting in the machine's syslog or in the kubelet journalctl on machine 2 when it tries to run the pod? – meaningqo Nov 04 '21 at 19:44
  • I edited the post to add the journalctl logs – OhSerk Nov 05 '21 at 08:50
  • @OhSerk, you didn't reply to meaningqo's question about the image type. From the logs it can't be seen which image you're trying to pull. It's very strange to get this kind of error when you do the same thing on the other 3 machines, so the problem could be with your Docker image. Also try troubleshooting that specific machine for Kubernetes issues – Bazhikov Nov 05 '21 at 15:24
  • The error really was strange and bizarre, because the machines do not differ in anything (neither hardware nor software). But I found a solution: thinking the problem was Docker, I uninstalled and reinstalled both Kubernetes and Docker, and now the problem is gone – OhSerk Nov 08 '21 at 09:57

1 Answer


Posting the comment above as a community wiki answer for better visibility:

Reinstalling both Kubernetes and Docker solves the issue
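
For anyone hitting the same dead end, a rough sketch of what resetting and rejoining a single worker node might look like follows; the package names and the kubeadm-based setup are assumptions, since the answer does not say how the cluster was originally installed:

# From a control machine: evacuate and remove the broken node
kubectl drain diamond02.xxx.xx --ignore-daemonsets --delete-emptydir-data
kubectl delete node diamond02.xxx.xx

# On the node itself: wipe kubeadm state, then reinstall the runtime and Kubernetes packages
sudo kubeadm reset
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl
sudo apt-get install -y docker-ce docker-ce-cli containerd.io kubelet kubeadm kubectl

# Rejoin the cluster with a fresh token printed on the control plane by:
#   kubeadm token create --print-join-command
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>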
