[bitnami/redis-cluster] Update script uses old node IP address #4064
Comments
Hi, I was unable to reproduce the issue. However, I see that the default number of nodes is 6 (not 3), so the command above should specify a different number of nodes (like 9) in order to perform the upgrade. Could you confirm that's the case?
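For reference, a 6-to-9 scale-up would look something like the command below (a sketch based on the upgrade flags used in the reproduction steps further down; the release and chart names are assumed to match the original install):
helm upgrade redis-cluster bitnami/redis-cluster --set 'cluster.nodes=9,cluster.replicas=0,cluster.init=false,cluster.update.addNodes=true,cluster.update.currentNumberOfNodes=6'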
@javsalgar I didn't want to paste the entire values override here, just to keep it readable, but I deployed with 3 nodes and 0 replicas initially, and then scaled up to 6 nodes. Here's the full overrides file I used:
Hi, I believe the minimum number of nodes for the cluster to work properly is 6. That could explain the issues when scaling from 3 to 6. However, could you try again going from 6 to 9 to see if the same issue happens?
However, I would like @miguelaeh to confirm this.
Same issue.
Hi,
@miguelaeh Hello, we're using Redis as a cache layer for an application, and we use a cluster of only master nodes with sharding (slots) to distribute the data across the different master nodes. I would imagine the 6-node rule applies if you expect to get 3 master nodes and 3 replicas (one for each master), but in our case a 3-node cluster was totally fine, because we only use master nodes.
Hi @tpolekhin , regarding using only 3 masters, I am not totally sure but it could work. From the official docs:
So it seems that having 3 replicas as well is a recommendation, not a mandatory requirement.
@miguelaeh we still use PVCs to keep the config, but we have disabled any type of Redis data persistence via the config:
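The exact override isn't shown above, but the redis.conf directives that disable both RDB snapshots and AOF persistence look roughly like this (an illustrative sketch, not the poster's actual config):
# disable RDB snapshots (no automatic dump.rdb saves)
save ""
# disable the append-only file
appendonly no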
Regarding running only 3 masters:
I was able to successfully create and test this configuration with our load-testing tool, so it works as expected. Even when I delete one of the master pods, it comes back and joins the cluster fine.
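For context, a masters-only cluster like this can be bootstrapped with redis-cli roughly as follows (a sketch with placeholder hostnames, not the chart's actual init command):
# create a 3-master cluster, splitting the 16384 hash slots across the masters, with no replicas
redis-cli --cluster create redis-0:6379 redis-1:6379 redis-2:6379 --cluster-replicas 0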
Ok, I understand the issue.
This issue is about the cluster-update Kubernetes job pod, which is used to add new Redis nodes to the existing cluster when you run a scale-up upgrade.
Hi @tpolekhin ,
@miguelaeh not exactly. I'm seeing that the helm post-upgrade hook starts as soon as the StatefulSet has been modified, not after all pods in the StatefulSet have been rolled. Since the StatefulSet rolls pods in reverse order, pod
Hi @tpolekhin ,
@miguelaeh I honestly do not understand why you need to do this DNS resolve in the first place. Why not just use
This would watch the rolling upgrade of the StatefulSet and exit when it's done.
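A command matching that description would be something along these lines (the exact command from the comment isn't shown above; the release name redis-cluster is assumed):
# block until the StatefulSet finishes its rolling update, then exit
kubectl rollout status statefulset/redis-cluster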
Hi @tpolekhin ,
That is the pod name; it will not be resolved to an IP. In any case, we could use something like
So I guess we can add the command you shared before the DNS resolution, to wait until the deployment is upgraded.
…eIp IP address again (#4362)
* fix #4064
* bump chart version
Co-authored-by: Javier J. Salmerón-García <[email protected]>
Which chart:
redis-cluster-3.2.8
Describe the bug
When helm upgrade is run with a cluster scale-up, all redis pods in the StatefulSet are rolled because the env variable that lists the nodes changes. The cluster-update Kubernetes job resolves each pod's DNS name to an IP address only once, so when a pod is rolled and its IP address changes, the update job can no longer perform any operations.
To Reproduce
Steps to reproduce the behavior:
helm install redis-cluster bitnami/redis-cluster --set 'cluster.replicas=0'
helm upgrade redis-cluster bitnami/redis-cluster --set 'cluster.nodes=6,cluster.replicas=0,cluster.init=false,cluster.update.addNodes=true,cluster.update.currentNumberOfNodes=3'
Expected behavior
The cluster update job should account for pod IP address changes during the rolling upgrade process.
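One way to do that, sketched below, is to re-resolve the pod's DNS name on every attempt instead of caching the IP once when the job starts (hypothetical pod and headless-service names; this is not the chart's actual update script):
# re-resolve the pod's headless-service DNS name on every retry,
# so a pod that was rolled (and received a new IP) is still reachable
pod_dns="redis-cluster-0.redis-cluster-headless"
until redis-cli -h "$(getent hosts "$pod_dns" | awk '{print $1}')" cluster info | grep -q cluster_state:ok; do
  sleep 5
done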
Version of Helm and Kubernetes:
helm version:
kubectl version:
Additional context
The redis-cluster-0 pod had an IP address of 172.21.190.236 before the upgrade. During the rolling upgrade of the StatefulSet its IP address changed to 172.21.190.237, and the cluster update job never finished.