feat: change default blockSize for calico #9055
Conversation
Force-pushed 91416fc to 0c142e3.
Hi @cyclinder, good idea, but it looks like you might have broken some assumptions in the CI tests. Please address the CI job failures so the PR can be accepted.
Force-pushed 0c142e3 to 6806c0c.
Thanks! Let me see what's wrong.
@cristicalin I found a CI bug in tests/testcases/040_check-network-adv.yml (lines 2 to 10 at 9ca5632).
There are two changes needed here. The reason CI kept working before was that the default [...]. If you agree, I'll fix it.
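For context, here is a hedged paraphrase of the kind of check in that test file (not verbatim; the exact tasks may differ): the advanced network test greps node routes for the /24 block prefix, an assumption that breaks once the default blockSize becomes /26.

```yaml
# Hedged paraphrase of tests/testcases/040_check-network-adv.yml, not verbatim:
# the grep hardcodes the /24 block prefix, so it fails once blocks become /26.
- hosts: kube_node
  tasks:
    - name: Test tunl0 routes
      shell: "! /sbin/ip ro | grep '/24 via' | grep -v tunl0"
```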
Force-pushed 6806c0c to fba2d7c.
https://github.com/kubernetes-sigs/kubespray/pull/9055/checks?check_run_id=7217609171
I don't know why the cluster created by this CI job still uses the 24-bit blockSize. I checked the job for packet_debian11-calico, which already uses a blockSize of 26 bits.
Force-pushed fba2d7c to 15f8bb7.
Hi @cristicalin, all the tests passed, PTAL :)
Nice work! /lgtm
```diff
@@ -19,7 +19,7 @@ calico_cni_name: k8s-pod-network
 # calico_pool_name: "default-pool"

 # add default ippool blockSize (defaults kube_network_node_prefix)
-# calico_pool_blocksize: 24
+calico_pool_blocksize: 26
```
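For reference, here is a sketch of the IPPool this default produces. It is illustrative only: the cidr, pool name, and encapsulation values assume stock kubespray defaults (for example the 10.233.64.0/18 pod subnet) and may differ in a real cluster.

```yaml
# Illustrative IPPool as kubespray would create it with the new default.
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-pool          # kubespray's default calico_pool_name
spec:
  cidr: 10.233.64.0/18        # assumed: kubespray's default pod subnet
  blockSize: 26               # was 24 before this PR
  ipipMode: Always            # assumed encapsulation mode
  natOutgoing: true
```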
Is it appropriate to modify this value directly in an existing environment when the kubespray version is upgraded?
```yaml
kubelet_max_pods: 250
```
I think it's appropriate in most scenarios, but there may be some variables that can't be updated in place, such as `calico_pool_blocksize`.
Thanks @cyclinder for this work! /approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cristicalin, cyclinder
@cyclinder @cristicalin It looks like this causes upgrades to kubespray 2.20 to fail when running the kubespray/upgrade-cluster.yml playbook.

For any cluster built with the default config before this change, calico_pool_conf.spec.blockSize is 24, but with this PR changing calico_pool_blocksize to 26, the assertion fails here: https://github.com/kubernetes-sigs/kubespray/blob/release-2.20/roles/network_plugin/calico/tasks/check.yml#L158

How are clusters meant to be upgraded to Kubespray 2.20.0? Is there a bug fix that can be cherry-picked, or is there a manual procedure required to update the block size of the Calico pool? There is nothing in the 2.20 release notes about this.
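For readers following along, the failing check is an Ansible assert that compares the configured variable against the live pool. A hedged paraphrase (the exact task text in check.yml may differ):

```yaml
# Paraphrase of the blockSize check in roles/network_plugin/calico/tasks/check.yml,
# not verbatim: it fails when the inventory value no longer matches the live pool.
- name: Check that calico_pool_blocksize matches the existing pool
  assert:
    that:
      - calico_pool_conf.spec.blockSize | int == calico_pool_blocksize | int
    msg: "blockSize of an existing Calico IPPool cannot be changed by kubespray"
```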
Looks like #10516 could fix it, but that isn't even in 2.24 yet.
Yes, you should configure the value of calico_pool_blocksize when you upgrade your cluster; otherwise calico_pool_conf.spec.blockSize won't match calico_pool_blocksize. We can consider cherry-picking #10516.
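For example, pinning the variable in your inventory before upgrading keeps the check passing for a cluster deployed with the old /24 default. The file path below assumes the usual kubespray inventory layout:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml (assumed path)
# Pin the pre-change default so the upgrade check matches the existing pool.
calico_pool_blocksize: 24
```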
What type of PR is this?
/kind feature
What this PR does / why we need it:
According to the official Calico recommendation, the default value of `calico_blocksize_ipv4` is 26 and `calico_blocksize_ipv6` is 122. The current default value of `calico_blocksize_ipv4` in kubespray is 24. With an 18-bit cluster pod CIDR mask, that means at most 2^(24-18) = 64 blocks of 256 addresses each, which also equates to a maximum of 64 nodes in the cluster; beyond that, problems appear (see projectcalico/calico#6160). Also, by default each node runs at most 110 pods (`kubelet_max_pods`), so a node cannot use up all 256 addresses in its block.
So we should change the default value of `calico_pool_blocksize` to 26: with an 18-bit mask there will then be at most 2^(26-18) = 256 blocks with 64 addresses per block. When a node has more than 64 pods, Calico assigns it a second block (Calico assigns one or more blocks to each node); see the sketch below.
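To make the capacity math concrete, here is a small illustrative Ansible task. It is not part of kubespray; the variable names and values are assumptions matching the example above.

```yaml
# Illustrative only: block count and per-block capacity for a /18 pod CIDR.
- name: Show Calico block capacity
  vars:
    pod_cidr_prefix: 18        # assumed /18 pod CIDR mask length
    blocksize: 26              # calico_pool_blocksize
  debug:
    msg: >-
      blocks={{ 2 ** (blocksize - pod_cidr_prefix) }},
      addresses_per_block={{ 2 ** (32 - blocksize) }}
```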
Special notes for your reviewer:
Does this PR introduce a user-facing change?: