
I have installed a Kubernetes cluster with a single node; the node works as both the master (control-plane) node and the worker node. Now I have run into a problem:

When the number of pods exceeds 255, Kubernetes fails to deploy new pods. After checking the node, it shows there are no available IP addresses left.
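
For example, the failure shows up in the pod's events (the pod name below is just a placeholder):

```
# Inspect a stuck pod's events to confirm the IP exhaustion
kubectl describe pod <pod-name>

# Or look at recent cluster events
kubectl get events --sort-by=.metadata.creationTimestamp
```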

After checking the configuration, I found the cause: the node's podCIDR is set to 10.244.0.0/24.

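This is how the podCIDR can be checked (the node name is a placeholder):

```
# Show the pod CIDR assigned to the node
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'

# Or view it in the full node spec
kubectl get node <node-name> -o yaml | grep -A2 podCIDR
```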

A /24 only provides 254 usable pod IP addresses, which causes the problem.

I did a lot of searching, and all the results said I need to delete the old node and use kubeadm init to create a new one (the commonly suggested steps are sketched below), but I only have one node, and if I delete it, my data will be lost too.
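
For context, the approach those search results describe is roughly the sketch below. It assumes the cluster was created with kubeadm; `<node-name>`, the CIDR, and the mask size are placeholders, flag names may vary slightly with the kubectl/kubeadm version, and the per-node pod CIDR size is governed by the controller manager's `node-cidr-mask-size`, not only the overall pod network.

```
# 1. Drain and remove the existing node (this is exactly what I want to avoid)
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>

# 2. Reset the kubeadm state on the host
sudo kubeadm reset

# 3. Re-initialise with a larger per-node pod CIDR.
#    A /22 per node gives ~1022 usable pod IPs instead of 254.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
controllerManager:
  extraArgs:
    node-cidr-mask-size: "22"
EOF
sudo kubeadm init --config kubeadm-config.yaml
```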

I am wondering if there is a way to update the existing node's podCIDR without recreating it. Thanks in advance!


flyingfox
  • It seems worth trying to read [how to change PODcidr on specifiec nodes Kubernetes · Issue #87150 · kubernetes/kubernetes](https://github.com/kubernetes/kubernetes/issues/87150). And, after the cluster is created, ```kubectl edit``` can be used to edit it – Tom Newton Jun 13 '23 at 12:56
  • @TomNewton Thanks for your reply. I had read this article before posting my question and it did not work; also, I only have one node, which works as both the master node and the worker node – flyingfox Jun 14 '23 at 01:51
  • So, according to the above-mentioned link, this may be the definitive answer: **You can't really update the node's PodCIDR. It generally gets cached in the node, and any running pods need to be restarted. The most complete answer is to drain, delete, and re-init the node.** I hope you will resolve this issue by re-creating the node, @flyingfox, or will find another way – Tom Newton Jun 14 '23 at 09:44

0 Answers