
How to create an 8-node Redis cluster with 4 masters and 4 slaves? It is not creating 4 shards #3816

Closed
EswarRams opened this issue Sep 29, 2020 · 9 comments

Comments

@EswarRams

I created the Redis cluster using Helm. It always forms a 6-node cluster even though it starts 8 Redis server pods. When I logged in to one of the Redis k8s pods and ran the cluster nodes command, I noticed there are only 3 shards, and the extra master was added with no slave for it.
pod/my-redis-cluster-0 1/1 Running 0 8m28s
pod/my-redis-cluster-1 1/1 Running 0 8m28s
pod/my-redis-cluster-2 1/1 Running 0 8m27s
pod/my-redis-cluster-3 1/1 Running 0 8m27s
pod/my-redis-cluster-4 1/1 Running 0 8m27s
pod/my-redis-cluster-5 1/1 Running 0 8m27s
pod/my-redis-cluster-6 1/1 Running 0 8m27s
pod/my-redis-cluster-7 1/1 Running 0 8m27s
pod/my-redis-cluster-cluster-create-26zrd 0/1 Completed 0 8m27s
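For reference, the topology below was obtained with commands along these lines (the secret name and key are my best guess at the Bitnami chart defaults for this release name):

# assumption: the chart stores the generated password in a secret named after the release, under the key redis-password
$ export REDIS_PASSWORD=$(kubectl get secret my-redis-cluster -o jsonpath="{.data.redis-password}" | base64 --decode)
$ kubectl exec -it my-redis-cluster-0 -- redis-cli -a "$REDIS_PASSWORD" cluster nodes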

d6557744ef30862c8f09957f023279a647e4606b 10.12.0.57:6379@16379 slave 0d5197448db103acdf39c2c7a33b3e9e929cbbee 0 1601416394000 8 connected
0d5197448db103acdf39c2c7a33b3e9e929cbbee 10.12.0.82:6379@16379 master - 0 1601416397786 8 connected 10923-16383
67413dfab2075a97fdfd0877051f01ebd4a05eb6 10.12.0.11:6379@16379 slave cfa99d6bc724e3024b2d6fd97ec958425f7d95ea 0 1601416399792 7 connected
3c8e6eb1696febc9b51f048590a9adc406256cbb 10.12.0.72:6379@16379 slave 1b8f0bdbd77ee767968d60430cca71b48585f0ff 0 1601416398789 9 connected
1811c44ac19dc0f75d8e7a93b51e449a341f12b2 10.12.0.17:6379@16379 master - 0 1601416397000 0 connected
1b8f0bdbd77ee767968d60430cca71b48585f0ff 10.12.0.78:6379@16379 master - 0 1601416398000 9 connected 0-5460
cfa99d6bc724e3024b2d6fd97ec958425f7d95ea 10.12.0.45:6379@16379 myself,master - 0 1601416396000 7 connected 5461-10922

I would like to create a bigger Redis cluster with multiple partitions. How should I do it?

@FraPazGal
Contributor

Hi @EswarRams,

I'm not sure if I understood your issue correctly. The problem is that your 8-node cluster is leaving one master without its slave, is that right? Could you please share the chart version you are using, as well as the steps you followed in your chart installation?

I haven't been able to reproduce the issue; as a test, I deployed the following cluster:

$ helm install red-clus bitnami/redis-cluster --set cluster.nodes=8
$ kubectl get pods
NAME                                          READY   STATUS      RESTARTS   AGE
red-clus-redis-cluster-0                      1/1     Running     1          4m53s
red-clus-redis-cluster-1                      1/1     Running     1          4m53s
red-clus-redis-cluster-2                      1/1     Running     0          4m53s
red-clus-redis-cluster-3                      1/1     Running     1          4m53s
red-clus-redis-cluster-4                      1/1     Running     1          4m53s
red-clus-redis-cluster-5                      1/1     Running     0          4m52s
red-clus-redis-cluster-6                      1/1     Running     0          4m52s
red-clus-redis-cluster-7                      1/1     Running     0          4m52s
red-clus-redis-cluster-cluster-create-zvmpd   0/1     Completed   0          4m53s
red-clus-redis-cluster:6379> cluster nodes
6ba7abee7bf953aa213dc5dbd8151e433f5d0c89 172.17.0.8:6379@16379 slave 1079b2b70d3a540be244a6bb0a1af7424264c4ac 0 1601478763000 1 connected
3d099237e8692b24bce7a3d8096e8e9b4c3ebfbb 172.17.0.10:6379@16379 slave 2c6074e53eaa9b2c65a0db34e9bb0d9024787a61 0 1601478763018 2 connected
2c6074e53eaa9b2c65a0db34e9bb0d9024787a61 172.17.0.5:6379@16379 myself,master - 0 1601478761000 2 connected 4096-8191
91f80d794130eaccf01470fe8cd044e963758b62 172.17.0.6:6379@16379 master - 0 1601478762007 4 connected 12288-16383
7195d2d45a00bfd32e632b9c304ad39b20f4ed75 172.17.0.9:6379@16379 master - 0 1601478764021 3 connected 8192-12287
1079b2b70d3a540be244a6bb0a1af7424264c4ac 172.17.0.4:6379@16379 master - 0 1601478759000 1 connected 0-4095
04fa23723ad5e49927701e264c756a1b8b376100 172.17.0.11:6379@16379 slave 7195d2d45a00bfd32e632b9c304ad39b20f4ed75 0 1601478761000 3 connected
1bb0246f09fe9ab9b85c52df656a60f764871929 172.17.0.7:6379@16379 slave 91f80d794130eaccf01470fe8cd044e963758b62 0 1601478761001 4 connected
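As a quick sanity check on that output, counting rows by role confirms the 4 masters / 4 replicas split (just a sketch, assuming REDIS_PASSWORD holds the cluster password):

# the 3rd column of CLUSTER NODES is the flags field (master / myself,master / slave)
$ kubectl exec red-clus-redis-cluster-0 -- redis-cli -a "$REDIS_PASSWORD" cluster nodes | awk '$3 ~ /master/' | wc -l
$ kubectl exec red-clus-redis-cluster-0 -- redis-cli -a "$REDIS_PASSWORD" cluster nodes | awk '$3 ~ /slave/' | wc -l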

@EswarRams
Author

EswarRams commented Sep 30, 2020 via email

@EswarRams
Author

EswarRams commented Sep 30, 2020 via email

@FraPazGal
Contributor

Hi @EswarRams,

That is strange behaviour. Could you be using a custom values.yaml? I have done a couple more chart installations and haven't encountered the issue. Please make sure you are doing the helm install in a fresh cluster, and could you also tell me where you are deploying your K8s cluster?

Regarding your other question, if you want to upgrade your chart to go from 6 nodes to 8, you will need to use:

helm upgrade red-clus --set password=${REDIS_PASSWORD},cluster.nodes=8,cluster.update.currentNumberOfNodes=6,cluster.update.addNodes=true,cluster.init=false bitnami/redis-cluster

Please keep in mind that the chart currently has a known issue (#3743) where every node added during an upgrade is configured as a master.
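If you prefer a values file over --set flags, the same upgrade could be expressed roughly like this (a sketch; the file name is just an example and the keys simply mirror the flags above):

$ cat > upgrade-values.yaml <<'EOF'
# mirrors the --set flags used in the helm upgrade command above
cluster:
  nodes: 8
  init: false
  update:
    currentNumberOfNodes: 6
    addNodes: true
EOF
$ helm upgrade red-clus -f upgrade-values.yaml --set password=${REDIS_PASSWORD} bitnami/redis-cluster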

@EswarRams
Author

EswarRams commented Oct 1, 2020 via email

@EswarRams
Author

EswarRams commented Oct 1, 2020 via email

@FraPazGal
Contributor

Hi @EswarRams,

Yes, you can use your custom values.yaml for this chart. To install it in a custom namespace, you just need to add the --namespace option to your helm install. Only a few options related to the namespaces used by auxiliary charts are configured in the values.yaml itself.
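For example, something along these lines (the release name, namespace and values file are placeholders):

# placeholders: my-redis-cluster, my-namespace and my-values.yaml; --create-namespace needs Helm 3.2+
$ helm install my-redis-cluster bitnami/redis-cluster \
    --namespace my-namespace --create-namespace \
    -f my-values.yaml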

@EswarRams
Author

EswarRams commented Oct 2, 2020 via email

@FraPazGal
Contributor

Hi @EswarRams!

There are indeed some known problems with this cluster and failover, like the ones you describe in #3876, and we are currently working on them.

I see you have already commented in that issue, so please continue the discussion there so that every problem has its own specific issue. As the original problem was solved, I'll proceed to close this one. Please do open a new issue if you run into any new problems!
