
I have a problem with the rancher-logging-root-fluentbit DaemonSet. Some pods start correctly, while others fail with this error:

Error: Error response from daemon: Duplicate mount point: /var/lib/docker/containers

Any ideas on how to solve this? Thanks.

I have tried checking for suspended pods and for Helm applications stuck in an update or error state. I have not tried restarting the affected nodes; I would like to avoid that.

Here is the pod description (kubectl describe pod output):

Name:             rancher-logging-root-fluentbit-scjn4
Namespace:        cattle-logging-system
Priority:         0
Service Account:  rancher-logging-root-fluentbit
Node:             xxxx
Start Time:       xxxx
Labels:           app.kubernetes.io/managed-by=rancher-logging-root
                  app.kubernetes.io/name=fluentbit
                  controller-revision-hash=5b6c67854b
                  pod-template-generation=3
Annotations:      checksum/fluent-bit.conf: 2b08687b2f14ac5fece45523412a2ba2669a33cc4e0e2c4479b752e92e511045
                  cni.projectcalico.org/containerID: fa706d155fa5e571893dfcf92bab107d0f2a3aeb0df0b3c7817223d2a757f949
                  cni.projectcalico.org/podIP: 10.42.0.84/32
                  cni.projectcalico.org/podIPs: 10.42.0.84/32
Status:           Pending
IP:               10.42.0.84
IPs:
  IP:           10.42.0.84
Controlled By:  DaemonSet/rancher-logging-root-fluentbit
Containers:
  fluent-bit:
    Container ID:   
    Image:          rancher/mirrored-fluent-fluent-bit:1.9.3
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  100M
    Requests:
      cpu:        100m
      memory:     50M
    Environment:  <none>
    Mounts:
      /buffers from buffers (rw)
      /fluent-bit/etc/fluent-bit.conf from config (rw,path="fluent-bit.conf")
      /tail-db from positiondb (rw)
      /var/lib/docker/containers from varlibcontainers (ro)
      /var/lib/docker/containers/ from extravolumemount0 (ro)
      /var/log/ from varlogs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvr65 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  varlibcontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers/
    HostPathType:  
  varlogs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
  extravolumemount0:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers/
    HostPathType:  
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rancher-logging-root-fluentbit
    Optional:    false
  positiondb:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  buffers:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-hvr65:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 cattle.io/os=linux:NoSchedule
                             node-role.kubernetes.io/controlplane=true:NoSchedule
                             node-role.kubernetes.io/etcd=true:NoExecute
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  4m51s                   default-scheduler  Successfully assigned cattle-logging-system/rancher-logging-root-fluentbit-scjn4 to xxx
  Warning  Failed     2m49s (x12 over 4m50s)  kubelet            Error: Error response from daemon: Duplicate mount point: /var/lib/docker/containers
  Normal   Pulled     2m35s (x13 over 4m50s)  kubelet            Container image "rancher/mirrored-fluent-fluent-bit:1.9.3" already present on machine

  • It seems that the kubelet is not working correctly on the nodes where you are experiencing the error. Have you tried looking at the kubelet logs on one of the problem nodes? Run *journalctl -u kubelet* and see if there is anything interesting. – glv Mar 22 '23 at 10:48
  • Thanks @glv. There are no significant logs. I have updated the post with the pod description. – Marco Brunet Mar 22 '23 at 14:42

1 Answer

Solved. When I installed the Helm chart, I specified /var/lib/docker as the Docker root, and this setting caused the problem at pod startup because it set the extra volume property (visible as extravolumemount0 in the pod description). I left the Docker root entry blank and the problem was solved (the Docker root property defaults to /var/lib/docker anyway).
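
The failure is visible in the Mounts section above: both varlibcontainers (the chart's built-in mount) and extravolumemount0 (generated from the Docker root setting) target /var/lib/docker/containers, and the Docker daemon rejects a container with two mounts at the same path. As a minimal sketch of the values involved, assuming a recent rancher-logging chart where this setting is exposed as global.dockerRootDirectory (verify the key for your chart version with helm show values):

  # values.yaml passed to the rancher-logging chart
  #
  # Problematic: pointing the Docker root at its default location makes
  # the chart add an extra hostPath volume for /var/lib/docker/containers,
  # duplicating the mount fluent-bit already has by default:
  #
  #   global:
  #     dockerRootDirectory: /var/lib/docker
  #
  # Fix: leave the value empty so only the built-in varlibcontainers
  # mount is created. Set it only when Docker really uses a non-default
  # root directory.
  global:
    dockerRootDirectory: ""

After correcting the value (or clearing the Docker Root Directory field in the Rancher UI), run a helm upgrade so the DaemonSet is regenerated without the duplicate extra volume, and the pending pods will be recreated cleanly.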