
The certificates in my Kubernetes cluster have expired. What are the steps to redeploy the certificates? After redeployment, pod health is affected. How do I overcome this?

[mdupaguntla@iacap067 K8S_HA_Setup_Post_RPM_Installation_With_RBAC]$ sudo kubectl logs elasticsearch-logging-0
+ export NODE_NAME=elasticsearch-logging-0
+ NODE_NAME=elasticsearch-logging-0
+ export NODE_MASTER=true
+ NODE_MASTER=true
+ export NODE_DATA=true
+ NODE_DATA=true
+ export HTTP_PORT=9200
+ HTTP_PORT=9200
+ export TRANSPORT_PORT=9300
+ TRANSPORT_PORT=9300
+ export MINIMUM_MASTER_NODES=2
+ MINIMUM_MASTER_NODES=2
+ chown -R elasticsearch:elasticsearch /data
+ ./bin/elasticsearch_logging_discovery
F0323 07:18:25.043962       8 elasticsearch_logging_discovery.go:78] kube-system namespace doesn't exist: Unauthorized
goroutine 1 [running]:
k8s.io/kubernetes/vendor/github.com/golang/glog.stacks(0xc4202b1200, 0xc42020a000, 0x77, 0x85)
        /go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:766 +0xcf
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).output(0x1a38100, 0xc400000003, 0xc4200ba2c0, 0x1994cf4, 0x22, 0x4e, 0x0)
        /go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:717 +0x322
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).printf(0x1a38100, 0x3, 0x121acfe, 0x1e, 0xc4206aff50, 0x2, 0x2)
        /go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:655 +0x14c
k8s.io/kubernetes/vendor/github.com/golang/glog.Fatalf(0x121acfe, 0x1e, 0xc4206aff50, 0x2, 0x2)
        /go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:1145 +0x67
main.main()
        /go/src/k8s.io/kubernetes/cluster/addons/fluentd-elasticsearch/es-image/elasticsearch_logging_dis
Samuil Petrov

1 Answer


The certificates in my Kubernetes cluster have expired. What are the steps to redeploy the certificates? After redeployment, pod health is affected. How do I overcome this?

...

F0323 07:18:25.043962 8 elasticsearch_logging_discovery.go:78] kube-system namespace doesn't exist: Unauthorized

It seems you must have regenerated the private key for the certificates, rather than just issuing new certs from CSRs generated with the existing keys of the cluster.
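One quick way to confirm that, assuming kubeadm-style paths under /etc/kubernetes/pki (adjust for your layout): the two modulus hashes below only match if the certificate was issued for that key.

# compare the cert's public modulus against the key's; a mismatch means the key was regenerated
openssl x509 -noout -modulus -in /etc/kubernetes/pki/apiserver.crt | sha256sum
openssl rsa -noout -modulus -in /etc/kubernetes/pki/apiserver.key | sha256sum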

If that is true, then you will need to do (at least) one of the following two things:

Dig the old private key files out of a backup, generate CSRs from them, re-issue the API certificates (a rough openssl sketch follows at the end of this answer), and chalk this up to a valuable lesson not to delete private keys again without careful thought.

Or:

Delete all the ServiceAccounts named in any Pod's serviceAccountName, in every namespace, followed by a deletion of those Pods themselves to get their volumeMounts rebound (sketched below). Additional information is in the admin guide.

If all goes well, the ServiceAccountController will recreate those ServiceAccount secrets, allowing those Pods to start back up, and you are back in business.
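A hedged sketch of that second approach; the namespace, ServiceAccount, and Pod names here are illustrative, not taken from your cluster:

# list the serviceAccountName used by each pod in a namespace (kube-system as an example)
kubectl get pods -n kube-system \
    -o jsonpath='{range .items[*]}{.spec.serviceAccountName}{"\n"}{end}' | sort -u

# delete a ServiceAccount; the ServiceAccountController recreates "default" ones
# automatically, minting a fresh token secret signed with the current service-account key
kubectl delete serviceaccount -n kube-system default

# then delete the pods so they restart and remount the regenerated secret
kubectl delete pod -n kube-system elasticsearch-logging-0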

The concrete steps to manage the X.509 certificates for a cluster are too numerous to fit into a single answer box, but that is the high level overview of what needs to happen.
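That said, the re-issue step at the heart of the first approach is short. A rough sketch for the apiserver cert, again assuming kubeadm-style paths; note that a real apiserver certificate also needs its subjectAltName entries (passed via -extfile), omitted here for brevity:

# generate a CSR from the EXISTING private key, not a freshly generated one
openssl req -new -key /etc/kubernetes/pki/apiserver.key \
    -subj "/CN=kube-apiserver" -out apiserver.csr

# sign the CSR with the cluster CA to produce a fresh certificate
openssl x509 -req -in apiserver.csr \
    -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
    -CAcreateserial -days 365 -out /etc/kubernetes/pki/apiserver.crt

# restart the control-plane components afterwards so they pick up the new certificate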

mdaniel
  • Tried with the first approach. Got the solution for my problem. Thanks. May I know the reason behind using the old private keys for generating new certificates? – vamsi krishna Apr 04 '18 at 11:23
  • _May I know the reason behind using the old private keys for generating new certificates_ Because the private key represents the true contract between the apiserver and the rest of the cluster; the certificate is, as you saw, more "ephemeral" details like the `CN`, expiry, etc. That process is exactly the same as how "normal" SSL renewals work, and for the same reason. – mdaniel Apr 05 '18 at 03:38
  • Hi Matthew L Daniel, can you please help me to solve this problem? https://stackoverflow.com/questions/51303819/rbac-error-in-kubernetes – vamsi krishna Jul 12 '18 at 10:56