This upgrade guide is intended for Spiderpool running on Kubernetes. If you have questions, feel free to ping us on the Spiderpool Community.
- Read the full upgrade guide to understand all the necessary steps before performing them.
- When rolling out an upgrade with Kubernetes, Kubernetes first terminates the Pod, then pulls the new image version, and finally spins up the new image. To reduce agent downtime and to prevent ErrImagePull errors during the upgrade, you can pull the corresponding image versions in advance:

    ```bash
    # Taking docker as an example, replace [upgraded-version] with the version you are upgrading to.
    docker pull ghcr.io/spidernet-io/spiderpool/spiderpool-agent:[upgraded-version]
    docker pull ghcr.io/spidernet-io/spiderpool/spiderpool-controller:[upgraded-version]

    # If you are a mainland China user who cannot access ghcr.io, you can use the mirror ghcr.m.daocloud.io.
    docker pull ghcr.m.daocloud.io/spidernet-io/spiderpool/spiderpool-agent:[upgraded-version]
    docker pull ghcr.m.daocloud.io/spidernet-io/spiderpool/spiderpool-controller:[upgraded-version]
    ```
It is recommended to always upgrade to the latest maintained patch version of Spiderpool. Check Stable Releases to learn about the latest supported patch versions.
- Make sure you have Helm installed.
- Set up the Helm repository and update it:

    ```bash
    helm repo add spiderpool https://spidernet-io.github.io/spiderpool
    helm repo update spiderpool
    ```
- Remove the `spiderpool-init` Pod

    The `spiderpool-init` Pod helps initialize environment information and is left in a Completed state after each run. During `helm upgrade`, since `spiderpool-init` is essentially a Pod, patching some of its resources will fail. So delete it via `kubectl delete pod spiderpool-init -n kube-system` before upgrading; otherwise the upgrade reports:

    ```
    Error: UPGRADE FAILED: cannot patch "spiderpool-init" with kind Pod: Pod "spiderpool-init" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
    ```
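To keep repeated upgrade runs from failing when the Pod is already gone, the deletion can be made idempotent. A minimal sketch, assuming Spiderpool is installed in `kube-system`; the `delete_init_pod` helper name is my own:

```shell
# Delete the spiderpool-init Pod only if it still exists, so the step is safe to re-run.
delete_init_pod() {
  if kubectl get pod spiderpool-init -n kube-system >/dev/null 2>&1; then
    kubectl delete pod spiderpool-init -n kube-system
  else
    echo "spiderpool-init not found, nothing to do"
  fi
}
```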
- Upgrade via `helm upgrade`:

    ```bash
    # -n specifies the namespace where your Spiderpool is installed; replace [upgraded-version] with the version you want to upgrade to.
    helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version]
    ```
You can use `--set` to update the Spiderpool configuration when upgrading. For the available values parameters, please see the values documentation. The following example shows how to enable Spiderpool's SpiderSubnet function:

```bash
helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] --set ipam.spiderSubnet.enable=true
```
You can also use `--reuse-values` to reuse the values from the previous release and merge in any overrides from the command line. However, it is only safe to use the `--reuse-values` flag if the Spiderpool chart version remains unchanged, e.g. when using `helm upgrade` to change the Spiderpool configuration without upgrading the Spiderpool components. For `--reuse-values` usage, see the following example:

```bash
helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] --set ipam.spiderSubnet.enable=true --reuse-values
```
Conversely, if the Spiderpool chart version has changed and you want to reuse the values from the existing installation, save the old values in a values file, check that file for any renamed or deprecated values, and then pass it to the `helm upgrade` command. You can retrieve and save the values from an existing installation with:

```bash
helm get values spiderpool --namespace=kube-system -o yaml > old-values.yaml
helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] -f old-values.yaml
```
Occasionally, it may be necessary to undo the rollout because a step was missed or something went wrong during the upgrade. To undo the rollout, run:

```bash
helm history spiderpool --namespace=kube-system
helm rollback spiderpool [REVISION] --namespace=kube-system
```
The following upgrade notes are updated on a rolling basis as new versions are released, ordered from oldest to newest. If your current version matches any of them, you need to check every note in order, from that item through to the latest, when upgrading.
In versions lower than 0.3.6, `-` is used as the delimiter in SpiderSubnet auto-pool names, which made it difficult to trace back the namespace and name of the application an auto pool corresponded to. The SpiderSubnet functionality in these releases was flawed by design; it has been reworked and optimised in the latest patch releases, and releases from 0.3.6 onwards also support multiple network interfaces for the SpiderSubnet feature. As part of this, the names of auto pools created by the new releases have changed: for example, the IPv4 auto pool corresponding to the application `kube-system/test-app` is `auto4-test-app-eth0-40371`. At the same time, the auto pool is marked with labels as follows:
```yaml
metadata:
  labels:
    ipam.spidernet.io/interface: eth0
    ipam.spidernet.io/ip-version: IPv4
    ipam.spidernet.io/ippool-cidr: 172-100-0-0-16
    ipam.spidernet.io/ippool-reclaim: "true"
    ipam.spidernet.io/owner-application-gv: apps_v1
    ipam.spidernet.io/owner-application-kind: DaemonSet
    ipam.spidernet.io/owner-application-name: test-app
    ipam.spidernet.io/owner-application-namespace: kube-system
    ipam.spidernet.io/owner-application-uid: 2f78ccdd-398e-49e6-a85b-40371db6fdbd
    ipam.spidernet.io/owner-spider-subnet: vlan100-v4
spec:
  podAffinity:
    matchLabels:
      ipam.spidernet.io/app-api-group: apps
      ipam.spidernet.io/app-api-version: v1
      ipam.spidernet.io/app-kind: DaemonSet
      ipam.spidernet.io/app-name: test-app
      ipam.spidernet.io/app-namespace: kube-system
```
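With these labels in place, the auto pools owned by an application can be selected directly by label instead of being parsed out of pool names. A sketch, using the `sp` short name for the pool resource as elsewhere in this guide; the `list_auto_pools` helper name is my own:

```shell
# List the auto pools that belong to the application kube-system/test-app via its owner labels.
list_auto_pools() {
  kubectl get sp -l "ipam.spidernet.io/owner-application-namespace=kube-system,ipam.spidernet.io/owner-application-name=test-app"
}
```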
Upgrading from below 0.3.6 to the latest patch version is an incompatible upgrade. If the SpiderSubnet feature is enabled, you need to add the series of labels described above to each existing auto pool so that it remains usable, for example:

```bash
kubectl patch sp ${auto-pool} --type merge --patch '{"metadata": {"labels": {"ipam.spidernet.io/owner-application-name": "test-app"}}}'
kubectl patch sp ${auto-pool} --type merge --patch '{"metadata": {"labels": {"ipam.spidernet.io/owner-application-namespace": "kube-system"}}}'
...
```
Since SpiderSubnet now supports multiple network interfaces, you also need to add the corresponding network interface label to each auto pool, as follows:

```bash
kubectl patch sp ${auto-pool} --type merge --patch '{"metadata": {"labels": {"ipam.spidernet.io/interface": "eth0"}}}'
```
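The per-label patch commands above can be collected into one loop per pool. A sketch; the pool name and label values are illustrative (taken from the example application above), so substitute your own:

```shell
# Apply all required owner/interface labels to one existing auto pool.
label_auto_pool() {
  pool="$1"
  for kv in \
    "ipam.spidernet.io/owner-application-name=test-app" \
    "ipam.spidernet.io/owner-application-namespace=kube-system" \
    "ipam.spidernet.io/interface=eth0"
  do
    key=${kv%%=*}
    value=${kv#*=}
    kubectl patch sp "$pool" --type merge \
      --patch "{\"metadata\": {\"labels\": {\"${key}\": \"${value}\"}}}"
  done
}
```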
Due to an architecture adjustment, the `SpiderEndpoint.Status.OwnerControllerType` property has changed from `None` to `Pod`. Therefore, find all SpiderEndpoint objects whose `Status.OwnerControllerType` is `None` and change that property from `None` to `Pod`.
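This find-and-replace can be scripted. A sketch only: the JSON field name `status.ownerControllerType` and the `--subresource=status` flag (available in kubectl 1.24+) are assumptions to verify against your cluster before use:

```shell
# Patch every SpiderEndpoint whose ownerControllerType is still None over to Pod.
migrate_endpoints() {
  kubectl get spiderendpoints -A \
    -o jsonpath='{range .items[?(@.status.ownerControllerType=="None")]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
  while read -r ns name; do
    [ -n "$name" ] || continue
    kubectl patch spiderendpoint "$name" -n "$ns" --subresource=status \
      --type merge --patch '{"status": {"ownerControllerType": "Pod"}}'
  done
}
```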
In versions higher than 0.5.0, the SpiderMultusConfig and Coordinator functions were added. However, `helm upgrade` cannot automatically install the corresponding CRDs: `spidercoordinators.spiderpool.spidernet.io` and `spidermultusconfigs.spiderpool.spidernet.io`. Therefore, before upgrading, you can obtain the latest stable version with the following commands, unpack the chart package, and apply all the CRDs:

```bash
~# helm search repo spiderpool --versions

# Please replace [upgraded-version] with the version you want to upgrade to.
~# helm fetch spiderpool/spiderpool --version [upgraded-version]
~# tar -xvf spiderpool-[upgraded-version].tgz && cd spiderpool/crds
~# ls | grep '\.yaml$' | xargs -I {} kubectl apply -f {}
```
In versions below 0.7.3, Spiderpool deploys a DaemonSet, `spiderpool-multus`, to manage Multus related configurations. In later versions, that DaemonSet was deprecated and the Multus configuration was moved into `spiderpool-agent` for management. At the same time, a function that automatically cleans up the Multus configuration during uninstallation was added, which is enabled by default. Disable it with `--set multus.multusCNI.uninstall=false` when upgrading, to avoid CNI configuration files, CRDs, etc. being deleted during the upgrade phase and causing Pod creation to fail.
Due to the addition of the `txQueueLen` field to the SpiderCoordinator CRD in version 0.9.0, you need to update the CRD manually before upgrading, as Helm does not support upgrading or deleting CRDs during the upgrade process. (We suggest skipping version 0.9.0 and upgrading directly to version 0.9.1.)
In versions below 0.9.4, when a StatefulSet is rapidly scaling up or down, the Spiderpool GC may mistakenly reclaim IP addresses in an IPPool, causing the same IP to be assigned to multiple Pods in the Kubernetes cluster and resulting in IP address conflicts. This issue has been fixed, see Fix, but after the upgrade, Spiderpool cannot automatically correct the conflicting IP addresses: you need to manually restart the Pods holding conflicting IPs to help resolve the issue. In the new version, IP conflicts caused by incorrectly GC'ed IPs no longer occur.
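To locate the Pods that need restarting after such an upgrade, duplicate Pod IPs can be listed cluster-wide. A sketch; note that hostNetwork Pods share their node's IP and will show up as false positives, so review the output before restarting anything:

```shell
# Print every Pod IP that is currently assigned to more than one Pod.
find_duplicate_pod_ips() {
  kubectl get pods -A -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' |
    grep -v '^$' | sort | uniq -d
}
```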
In versions lower than 0.9.5, the SpiderSubnet key in the Spiderpool chart's values.yaml changed from `ipam.spidersubnet` to `ipam.spiderSubnet`, so you cannot safely use the `--reuse-values` flag when upgrading from versions < 0.9.5 to 0.9.5 and above. Please modify the values.yaml file, or use the `--set ipam.spiderSubnet.enable=true` flag to override the value in the values.yaml file.
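If you exported the old values with `helm get values`, the old lowercase key must be renamed before the file can be reused with the new chart. A minimal sketch; the values fragment below is hypothetical and written to a local file only for illustration (GNU sed syntax assumed):

```shell
# Hypothetical old-values fragment, as it might look when exported from a pre-0.9.5 release.
cat > old-values.yaml <<'EOF'
ipam:
  spidersubnet:
    enable: true
EOF

# Rename the old key to the new camelCase form expected by 0.9.5+.
sed -i 's/spidersubnet:/spiderSubnet:/' old-values.yaml
```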
TODO.
To meet high availability requirements for Spiderpool, you may have set multiple replicas of the spiderpool-controller Pod via `--set spiderpoolController.replicas=5` during installation. By default, spiderpool-controller Pods occupy certain port addresses on the node; please refer to System Configuration for the default port occupancy. If your number of replicas is exactly the same as the number of nodes, the new Pods will fail to start during the upgrade because the nodes have no available ports. You can work around this in either of the following two ways:

- When executing the upgrade command, change the port by appending the helm parameter `--set spiderpoolController.httpPort`; check the helm values.yaml and System Configuration for the ports that need to be modified.
- spiderpool-controller is a `Deployment`: you can reduce the number of replicas and restore the replica count after the Pods start normally.
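The second option can be done with `kubectl scale`. A sketch; the `scale_controller` helper name is my own, and the replica counts shown are illustrative (5 matches the example above):

```shell
# Temporarily scale spiderpool-controller so upgraded Pods can bind their host ports.
scale_controller() {
  kubectl scale deployment spiderpool-controller -n kube-system --replicas="$1"
}

# scale_controller 2   # before the upgrade, to free up node ports
# scale_controller 5   # after the new Pods are Running, to restore HA
```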