
I start a Kubernetes replication controller. When the container in its single pod has a gcePersistentDisk specified, the pod starts very slowly. After 5 minutes the pod is still in the Pending state.
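The replication controller looks roughly like this (the controller name, image, and disk name below are placeholders, not my actual values):

apiVersion: v1
kind: ReplicationController
metadata:
  name: app-1                # placeholder, matches the pod prefix below
spec:
  replicas: 1
  selector:
    app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/app:latest   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        gcePersistentDisk:
          pdName: my-10gb-disk   # placeholder disk name
          fsType: ext4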

kubectl get po will tell me:

NAME          READY     STATUS    RESTARTS   AGE
app-1-a4ni7   0/1       Pending   0          5m

Without the gcePersistentDisk the pod reaches Running in at most 30 seconds.

(I am using a 10 GB Google Compute Engine persistent disk, and I know that these disks have lower performance at lower capacities, but I am not sure this is the issue.)

What could be the cause of this?

– Gabriel Petrovay

2 Answers


We've seen GCE PD attach calls take upwards of 10 minutes to complete, so this is more or less expected. See, for example, https://github.com/kubernetes/kubernetes/issues/15382#issuecomment-153268655, where PD tests were timing out before the GCE PD attach/detach calls could complete. We're working with the GCE team to improve performance and reduce latency.

If the pod never gets out of the Pending state, then you might have hit a bug. In that case, grab your kubelet log and open an issue at https://github.com/kubernetes/kubernetes/

– Saad Ali

At least in my experience, using PersistentVolumeClaims works much faster. You can destroy and recreate replication controllers almost instantly.

See: http://kubernetes.io/v1.1/docs/user-guide/persistent-volumes/README.html
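A rough sketch of what I mean (all resource names are placeholders): wrap the existing disk in a PersistentVolume, bind a PersistentVolumeClaim to it, and reference the claim from the pod template instead of the disk directly:

# PersistentVolume backed by the existing GCE PD (disk name is a placeholder)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-10gb-disk
    fsType: ext4
---
# Claim that the pod template references instead of the disk
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

In the replication controller's pod template, the volume then becomes:

      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pd-claim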

– stvnwrgs
  • Thanks! This improves the issue, but there is still a delay of 20-60 seconds until a pod reaches the `Running` state. See: http://stackoverflow.com/q/34854472/454103 – Gabriel Petrovay Jan 18 '16 at 12:11