
For context, we have a cluster of about a dozen nodes or so. Many parts of our project utilize iSCSI persistent volumes & claims to mount very large read only databases.
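To make the setup concrete, here is roughly what our volume definitions look like (all names, sizes, and addresses below are hypothetical placeholders, not our real config). Since the databases are read-only and shared by many pods, the volume uses `ReadOnlyMany`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-iscsi-pv          # hypothetical name
spec:
  capacity:
    storage: 500Gi           # hypothetical size
  accessModes:
    - ReadOnlyMany           # many pods mount the same volume read-only
  iscsi:
    targetPortal: 10.0.0.1:3260          # node A, hypothetical address
    iqn: iqn.2016-01.com.example:storage.db  # hypothetical IQN
    lun: 0
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-iscsi-claim       # hypothetical name
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 500Gi
```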

The only issue is that launching numerous pods concurrently, all trying to use the same persistent volume, causes many of the mounts to fail with the error "failed to get any path for iscsi disk".

Eventually all of them succeed, but each pod often takes a couple of minutes or more to finally resolve. This is a problem for us, as our consumers expect quicker launch times.

As an example, we have one such stateful set that launches 100+ pods. We could start the pods sequentially to relieve some of this, but launching that many pods one at a time would itself be time-consuming.
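Each replica in that set mounts the same claim read-only; the pod template looks roughly like this fragment (container name, image, and paths are hypothetical):

```yaml
# Pod template fragment: every replica mounts the same shared claim.
spec:
  containers:
    - name: db-reader                # hypothetical
      image: example/db-reader:1.0   # hypothetical
      volumeMounts:
        - name: db
          mountPath: /data/db        # hypothetical
          readOnly: true
  volumes:
    - name: db
      persistentVolumeClaim:
        claimName: db-iscsi-claim    # hypothetical claim name
        readOnly: true
```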

I am not super well versed in iSCSI, but is there anything we can do to ensure availability of the claim is consistent across concurrent pod launches? Would NFS or another alternative better suit our needs? Is multipathing a possible solution?
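To make the multipathing part of the question concrete: my understanding is that it would involve dm-multipath on the initiator (worker) nodes, configured with something like the sketch below. We do not run this today, so treat it as an illustration of what I'm asking about rather than a working config:

```
# /etc/multipath.conf — minimal sketch of a dm-multipath setup
defaults {
    user_friendly_names yes   # name devices mpathN instead of WWIDs
}
```

Would pointing the initiators at multiple portals on the target and letting dm-multipath aggregate the paths actually help with the concurrent-mount failures, or does it only help with path failover?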

Our iSCSI targets are hosted on node A. We then have our cluster master node on node B and nodes B1-BX make up the rest of the cluster.

Kubernetes 1.2

Note: I can confirm this is not an issue with the hostname used in the target, nor an authentication issue. The mount does resolve given enough time.
