Manually Renewing kubeadm-Managed Certificates
Problem
Certificates managed by kubeadm expire one year after deployment. When the certificates expire, pods fail to come up and their logs show errors about invalid certificates.
Solution
The Paragon Automation Kubernetes cluster uses self-generated, kubeadm-managed certificates. These certificates expire one year after deployment unless the Kubernetes version is upgraded or the certificates are renewed manually.
To renew the certificates manually, perform the following steps:
-
Check the current expiration dates of the certificates by using the
kubeadm certs check-expiration
command on each primary node of your cluster.

root@primary1-node:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 13, 2023 13:20 UTC   328d                                    no
apiserver                  Dec 13, 2023 13:20 UTC   328d            ca                      no
apiserver-etcd-client      Dec 13, 2023 13:20 UTC   328d            etcd-ca                 no
apiserver-kubelet-client   Dec 13, 2023 13:20 UTC   328d            ca                      no
controller-manager.conf    Dec 13, 2023 13:20 UTC   328d                                    no
etcd-healthcheck-client    Dec 13, 2023 13:20 UTC   328d            etcd-ca                 no
etcd-peer                  Dec 13, 2023 13:20 UTC   328d            etcd-ca                 no
etcd-server                Dec 13, 2023 13:20 UTC   328d            etcd-ca                 no
front-proxy-client         Dec 13, 2023 13:20 UTC   328d            front-proxy-ca          no
scheduler.conf             Dec 13, 2023 13:20 UTC   328d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Nov 27, 2032 21:31 UTC   9y              no
etcd-ca                 Nov 27, 2032 21:31 UTC   9y              no
front-proxy-ca          Nov 27, 2032 21:31 UTC   9y              no
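Optionally, you can cross-check a certificate independently of kubeadm by reading its expiry date directly with openssl. This is only a sketch; it assumes the default kubeadm PKI location (/etc/kubernetes/pki), so adjust the path if your installation differs.

# Optional cross-check: print the expiry (notAfter) date of the API server certificate.
# Assumes the default kubeadm PKI directory; adjust the path for non-default layouts.
root@primary1-node:~# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate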
-
To renew the certificates, use the
kubeadm certs renew all
command on each primary node of your Kubernetes cluster.

root@primary1-node:~# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
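Note that the renewal also rewrites the client certificate embedded in /etc/kubernetes/admin.conf. If you previously copied that file to a user kubeconfig, refresh the copy so that kubectl continues to work with the renewed credentials. The following is a minimal sketch that assumes the conventional $HOME/.kube/config location; skip it if your kubectl points directly at /etc/kubernetes/admin.conf.

# Refresh a copied admin kubeconfig so it carries the renewed client certificate.
# Only needed if you copied admin.conf earlier; the target path is the common default.
root@primary1-node:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config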
-
Verify the new expiration dates with the
kubeadm certs check-expiration
command on each primary node of your cluster.

root@primary1-node:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 18, 2024 21:40 UTC   364d                                    no
apiserver                  Jan 18, 2024 21:40 UTC   364d            ca                      no
apiserver-etcd-client      Jan 18, 2024 21:40 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 18, 2024 21:40 UTC   364d            ca                      no
controller-manager.conf    Jan 18, 2024 21:40 UTC   364d                                    no
etcd-healthcheck-client    Jan 18, 2024 21:40 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 18, 2024 21:40 UTC   364d            etcd-ca                 no
etcd-server                Jan 18, 2024 21:40 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 18, 2024 21:40 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 18, 2024 21:40 UTC   364d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Nov 27, 2032 21:31 UTC   9y              no
etcd-ca                 Nov 27, 2032 21:31 UTC   9y              no
front-proxy-ca          Nov 27, 2032 21:31 UTC   9y              no
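As an additional, optional confirmation, you can list the expiry dates of all certificate files on disk. The loop below is only a sketch and assumes the default kubeadm PKI layout under /etc/kubernetes/pki.

# Print the notAfter date of every certificate file in the default kubeadm PKI directories.
root@primary1-node:~# for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do echo -n "$crt: "; openssl x509 -in "$crt" -noout -enddate; done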
-
Restart the following pods from one of the primary nodes so that they use the new certificates.
root@primary1-node:~# kubectl delete pod -n kube-system -l component=kube-apiserver
root@primary1-node:~# kubectl delete pod -n kube-system -l component=kube-scheduler
root@primary1-node:~# kubectl delete pod -n kube-system -l component=kube-controller-manager
root@primary1-node:~# kubectl delete pod -n kube-system -l component=etcd
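After deleting the pods, you can confirm that the control-plane components are back in the Running state. This check is optional; the label selector simply reuses the component labels from the delete commands above.

# Confirm that the restarted control-plane pods are Running again.
root@primary1-node:~# kubectl get pods -n kube-system -l 'component in (kube-apiserver,kube-scheduler,kube-controller-manager,etcd)' -o wide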