
update the apiserver etcd ca ssl certificate #7104

Closed
luo964973791 opened this issue Jan 6, 2021 · 21 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@luo964973791

How does kubespray update the SSL certificates? Or is there a script or tool that can set the certificate validity period to 10 or 100 years?

@floryut
Member

floryut commented Jan 6, 2021

How does kubespray update the SSL certificates? Or is there a script or tool that can set the certificate validity period to 10 or 100 years?

They are updated on upgrade; that's what the k8s team recommends, as after a year a Kubernetes version is no longer supported and HAS TO be upgraded.

Anyway, this may help you => #6403
Merged recently; it will be in the next release.

@luo964973791
Author

luo964973791 commented Jan 6, 2021 via email

@floryut
Member

floryut commented Jan 6, 2021

Is it possible to write a shell script to update all the keys under /etc/kubernetes/ssl? I am not familiar with the logic of kubespray's Ansible code.

That's more something to ask on the Kubernetes end; I wouldn't advise something like that ;)
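
(For what it's worth, just inspecting the expiry dates of the certificates under /etc/kubernetes/ssl needs nothing more than plain openssl. A minimal sketch, assuming the default kubespray certificate directory; the exact file extensions on disk may differ between setups:)

# print the notAfter date of every certificate file in the directory;
# private key files mixed into the same directory simply produce no output here
for crt in /etc/kubernetes/ssl/*.crt /etc/kubernetes/ssl/*.pem; do
  [ -f "$crt" ] || continue
  echo "== $crt"
  openssl x509 -noout -enddate -in "$crt" 2>/dev/null
done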

@luo964973791
Author

So if I run kubespray in a production environment, what do I do if the certificate expires after one year? Will it be risky?

@floryut
Member

floryut commented Jan 6, 2021

So if I run kubespray in a production environment, what do I do if the certificate expires after one year? Will it be risky?

As I said, after a year you should have already updated your Kubernetes components; if not, you are already at risk, as the Kubernetes team supports only 3 versions (one version every 3 or 4 months, so 9 to 12 months of support).
But we added the possibility to regenerate without upgrading, even if I think that's a bad idea, with the PR #6403.

@floryut
Member

floryut commented Jan 6, 2021

We definitely don't dare to upgrade a production environment casually; as long as it still works normally after a year, that's fine. Thank you.

After translating, I think I get what you mean. We are not talking about casual upgrades; we are talking about CVEs and security. After 3 versions, the Kubernetes team stops supporting a release and stops backporting bug and CVE fixes; that's why you should not keep an old Kubernetes version running.

But as said, the PR allows that, as we also understand that depending on the context some people might still have to run an old version (but I won't talk about that again, I'm strongly against it 😄).

@luo964973791
Author

Does kubespray provide a script or tool for a one-click upgrade of the Kubernetes version?

@floryut
Member

floryut commented Jan 6, 2021

Does kubespray provide a script or tool for a one-click upgrade of the Kubernetes version?

We don't provide a script or tool, we provide Ansible playbooks, and yes, with the PR you can renew the certificates in one run with the correct setup.

Explanation in #6403

I added a simple variable named force_certificate_regeneration (defaulting to false) that users may set to true during a subsequent run of cluster.yml in order to force the apiserver certificate regeneration flow.
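
(As a rough illustration of that "subsequent run", assuming the sample inventory layout at inventory/mycluster that appears later in this thread; adjust the paths to your own inventory. The extra-var form in the second command is an assumption of this sketch, not something stated in the PR:)

# option 1: set the flag in the inventory group vars and re-run the cluster playbook
#   inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml:
#     force_certificate_regeneration: true
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

# option 2: leave group vars alone and pass the flag as a JSON extra var (a real boolean)
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root \
  -e '{"force_certificate_regeneration": true}' cluster.yml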

@luo964973791
Author

Thanks, I have no problem here.

@luo964973791
Author

I set force_certificate_regeneration: true in k8s-cluster.yml, then changed the server time forward, and found that k8s still broke with an x509 certificate-expired error; the certificates were not renewed automatically.

[root@node1 pki]# kubectl get node
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   3h56m   v1.19.6
node2   Ready    master   3h55m   v1.19.6
node3   Ready    <none>   3h54m   v1.19.6
[root@node1 pki]# kubeadm alpha certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0107 14:29:39.928152 23532 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]

CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jan 07, 2022 02:33 UTC 364d no
apiserver Jan 07, 2022 02:32 UTC 364d ca no
apiserver-kubelet-client Jan 07, 2022 02:32 UTC 364d ca no
controller-manager.conf Jan 07, 2022 02:33 UTC 364d no
front-proxy-client Jan 07, 2022 02:32 UTC 364d front-proxy-ca no
scheduler.conf Jan 07, 2022 02:33 UTC 364d no

CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Jan 05, 2031 02:32 UTC 9y no
front-proxy-ca Jan 05, 2031 02:32 UTC 9y no
[root@node1 pki]# date
Thu Jan 7 14:30:02 CST 2021
[root@node1 pki]#
[root@node1 pki]# date
Sat Jan 8 02:35:19 CST 2022
[root@node1 pki]# kubectl get node
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2022-01-08T02:35:23+08:00 is after 2022-01-07T02:32:38Z
[root@node1 pki]# cat /root/kubespray/inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml | grep "force_certificate_regeneration"
force_certificate_regeneration: true
[root@node1 pki]#

@floryut
Member

floryut commented Jan 7, 2021

/ping @pestebogdan

@pestebogdan
Contributor

Hello,

From what I remember, this works only if the cluster's certificates are currently valid (and it will prolong their validity for 1 year from the day you run it).

If they are already expired, this may not work. I have been in the second scenario (certificates already expired) and followed a manual procedure, a sequence of tasks, to renew them.

I had it documented somewhere; if I find it I will post it here, if it's of interest. As to the newly added variable, it should be considered an action you need to take before the certificates expire.

Also, etcd certificates were not included in the rotation because, as I remember, they have a really long validity period (~100 years), so they're not really relevant here.
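
(To check that on your own nodes: a minimal sketch, assuming kubespray's default etcd certificate directory /etc/ssl/etcd/ssl, i.e. the default etcd_cert_dir; adjust the path if you changed it:)

# show the validity window of the etcd CA and per-node certificates,
# skipping the private key files that live in the same directory
for crt in /etc/ssl/etcd/ssl/*.pem; do
  case "$crt" in *-key.pem) continue ;; esac
  echo "== $crt"
  openssl x509 -noout -dates -in "$crt"
done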

@luo964973791
Author

I set force_certificate_regeneration: true before deployment and then deployed. After deploying kubespray, I changed the server time forward and got an x509 certificate error. It seems that changing force_certificate_regeneration: false to force_certificate_regeneration: true did not work. The etcd certificate is valid for 10 years, the apiserver certificate for one year.

@floryut
Member

floryut commented Jan 7, 2021

I set force_certificate_regeneration: true before deployment and then deployed. After deploying kubespray, I changed the server time forward and got an x509 certificate error. It seems that changing force_certificate_regeneration: false to force_certificate_regeneration: true did not work. The etcd certificate is valid for 10 years, the apiserver certificate for one year.

Do you have the latest master branch? Are you sure that you have the PR #6403 in your local codebase?

@pestebogdan
Contributor

Not sure I understand the sequence of events you followed or the reason for the server time change, but let me give you the sequence I used to test the new functionality before submitting the PR:

  • deployed a k8s cluster with kubespray (the kubespray version at that time was v2.12.7)
  • after deployment, checked the dates on the server and client certificates (either by checking the timestamp on the files on disk or by decoding the certs with openssl and looking at the expiration date)
  • ran kubespray again (cluster.yml), this time setting the force_certificate_regeneration variable to true
  • after completion, checked the certificates again (plus a curl to the apiserver to see the certificate served there) and saw that a new timestamp, based on a more recent time than the one registered at deployment time in the first step, was set on the certificates (see the sketch after this list)
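
(A minimal sketch of the checks in steps 2 and 4, assuming you run it on a control-plane node, that the apiserver certificate sits at /etc/kubernetes/ssl/apiserver.crt, and that the apiserver listens on the default secure port 6443:)

# steps 2 and 4: expiry of the on-disk apiserver certificate, before and after the re-run
openssl x509 -noout -dates -in /etc/kubernetes/ssl/apiserver.crt

# step 4: the certificate actually served by the running apiserver
echo | openssl s_client -connect 127.0.0.1:6443 2>/dev/null | openssl x509 -noout -dates
# roughly the same check with curl: curl -vk https://127.0.0.1:6443/healthz 2>&1 | grep -i 'expire date'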

@luo964973791
Author

Yes, I git cloned the kubespray repository on the morning of January 7, 2021.

@luo964973791
Author

"Ran kubespray again (cluster.yml)": do I need to run the cluster.yml playbook again? How do I do it? I only executed ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml once.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 7, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 7, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
