How to create an 8-node Redis cluster with 4 masters and 4 slaves? It is not creating 4 shards #3816
Comments
Hi @EswarRams, I'm not sure if I understood your issue correctly. The problem is that your 8-node cluster is leaving one master without its slave, is that right? Could you please share the chart version you are using as well as the steps you followed in your chart installation? I haven't been able to reproduce the issue and, as a test, deployed the following cluster:

$ helm install red-clus bitnami/redis-cluster --set cluster.nodes=8

$ kubectl get pods
NAME                                          READY   STATUS      RESTARTS   AGE
red-clus-redis-cluster-0                      1/1     Running     1          4m53s
red-clus-redis-cluster-1                      1/1     Running     1          4m53s
red-clus-redis-cluster-2                      1/1     Running     0          4m53s
red-clus-redis-cluster-3                      1/1     Running     1          4m53s
red-clus-redis-cluster-4                      1/1     Running     1          4m53s
red-clus-redis-cluster-5                      1/1     Running     0          4m52s
red-clus-redis-cluster-6                      1/1     Running     0          4m52s
red-clus-redis-cluster-7                      1/1     Running     0          4m52s
red-clus-redis-cluster-cluster-create-zvmpd   0/1     Completed   0          4m53s

red-clus-redis-cluster:6379> cluster nodes
6ba7abee7bf953aa213dc5dbd8151e433f5d0c89 ***@***.*** slave 1079b2b70d3a540be244a6bb0a1af7424264c4ac 0 1601478763000 1 connected
3d099237e8692b24bce7a3d8096e8e9b4c3ebfbb ***@***.*** slave 2c6074e53eaa9b2c65a0db34e9bb0d9024787a61 0 1601478763018 2 connected
2c6074e53eaa9b2c65a0db34e9bb0d9024787a61 ***@***.*** myself,master - 0 1601478761000 2 connected 4096-8191
91f80d794130eaccf01470fe8cd044e963758b62 ***@***.*** master - 0 1601478762007 4 connected 12288-16383
7195d2d45a00bfd32e632b9c304ad39b20f4ed75 ***@***.*** master - 0 1601478764021 3 connected 8192-12287
1079b2b70d3a540be244a6bb0a1af7424264c4ac ***@***.*** master - 0 1601478759000 1 connected 0-4095
04fa23723ad5e49927701e264c756a1b8b376100 ***@***.*** slave 7195d2d45a00bfd32e632b9c304ad39b20f4ed75 0 1601478761000 3 connected
1bb0246f09fe9ab9b85c52df656a60f764871929 ***@***.*** slave 91f80d794130eaccf01470fe8cd044e963758b62 0 1601478761001 4 connected
Thanks for your quick response.
$ helm install --timeout 600s red-clus bitnami/redis-cluster --set cluster.nodes=8
c3339b2e1ec55e2a8e3f2bf6f4acd46c4f28093 10.12.0.55:6379@16379 master - 0 1601489793727 8 connected
86c93e35de6ee8c7cd2bec3df4ff86c00e95e80c 10.12.0.31:6379@16379 master - 0 1601489788712 5 connected
a79687a7d7ee201166b48bbfaaddaddc15cc541d 10.12.0.41:6379@16379 myself,master - 0 1601489790000 1 connected 0-4095
f2fcd8615fc4f5d97707b4ecc6b39d6adb9beb7c 10.12.0.81:6379@16379 master - 0 1601489793000 6 connected
055776340487e138aef2e3bb8401eea4baf12dea 10.12.0.57:6379@16379 master - 0 1601489792725 3 connected 8192-12287
26d0c0cebf0f54c8816d010aa851214b59b3f378 10.12.0.16:6379@16379 master - 0 1601489793000 7 connected
7c9bc937de44a1d8462d11736ba625cfe8114f83 10.12.0.10:6379@16379 master - 0 1601489794730 2 connected 4096-8191
ee43b9809fc2a05982f768303e36a1b2df3b0608 10.12.0.77:6379@16379 master - 0 1601489791722 4 connected 12288-16383
I don't see any slaves now; all nodes are running as masters.
Thanks,
Eswar
Another question: if I want to scale up, will it scale up masters only, or both masters and slaves? How do I do that?
Sorry, I forgot to include the chart version:
sources:
- https://github.com/bitnami/bitnami-docker-redis
- http://redis.io/
version: 3.2.5
Hi @EswarRams, that is strange behaviour, could you be using a custom values.yaml? I have done a couple more chart installations and I haven't encountered the issue. Please make sure you are doing helm install in a new cluster, and also, could you tell me where you are deploying your K8s cluster?
Regarding your other question, if you want to upgrade your chart to go from having 6 nodes to 8, you will need to use:

$ helm upgrade red-clus --set password=${REDIS_PASSWORD},cluster.nodes=8,cluster.update.currentNumberOfNodes=6,cluster.update.addNodes=true,cluster.init=false bitnami/redis-cluster

Please keep in mind that currently the chart has a known issue #3743 where every added node after an upgrade is configured as master.
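To double-check the resulting topology after the upgrade, something along these lines can be used (a sketch, not part of the chart docs; the pod name follows the release naming shown in the pod listing above, and REDIS_PASSWORD is assumed to hold the cluster password):

# pod name taken from the 'kubectl get pods' output above
$ kubectl exec -it red-clus-redis-cluster-0 -- redis-cli -a "$REDIS_PASSWORD" cluster nodes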
Thanks for the quick response. I am able to create it successfully now. I tried with a different name for the cluster; it looks like the old one had some issues.
So adding a new node not working as expected is a known issue; I got that.
Here is my question: can we use values.yaml to change these values as well, or are there any known issues?
Thanks,
Eswar
How do I change the namespace from default to a custom namespace? I don't see anything in values.yaml to pass it.
Hi @EswarRams, yes, you can use your custom values.yaml for this chart. To install it in a custom namespace, you just need to add the --namespace option in your helm install. Only a few options related to the namespaces used by auxiliary charts are configured in the .yaml.
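For example, assuming a namespace called redis (the namespace name here is just an illustration):

# create the target namespace first, then point the release at it
$ kubectl create namespace redis
$ helm install red-clus bitnami/redis-cluster --namespace redis --set cluster.nodes=8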
Hi Francisco,
Thanks for your reply.
When I had a 7-node cluster with 3 masters and 4 slaves, I restarted one pod to see how the failover works. I see the restarted pod in a failed state, and the slave of that master became a master. I see the cluster rebalance happened, but when the new pod came up with a new IP, I didn't see that pod join the cluster.
Is there anything needed to make the new pod join the cluster and the old IPs go away?
localhost:6379> cluster nodes
0db4de7fdbc3c9e85dc732cb52093e1772d9b073 10.12.0.58:6379@16379 slave d769595b6b9d4944654e38db2663c66e5e8898d8 0 1601663403680 8 connected
d769595b6b9d4944654e38db2663c66e5e8898d8 10.12.0.72:6379@16379 master - 0 1601663404682 8 connected 10923-16383
0adea034634c4a36b13edcc8294ad2e024c36dd1 10.12.0.16:6379@16379 myself,master - 0 1601663402000 1 connected 0-5460
d69ac08a88de3a16eddba733e404c0ae36b404bb 10.12.0.81:6379@16379 slave 0adea034634c4a36b13edcc8294ad2e024c36dd1 0 1601663401000 1 connected
68c2b9ceddbb784f6ea70a619db49d94af7d1b6f 10.12.0.13:6379@16379 master,fail - 1601663271273 1601663267000 3 connected
b1a56616ab8fe66132b4b1cdc56799363e328f43 10.12.0.66:6379@16379 master - 0 1601663401000 2 connected 5461-10922
ffccffa90272778d8eb32d1cc1a71d34abbfecce 10.12.0.18:6379@16379 slave b1a56616ab8fe66132b4b1cdc56799363e328f43 0 1601663403000 2 connected
After the pod restart, 10.12.0.26 is my new IP. I don't see it in the cluster nodes output as having joined the cluster.
NAME              READY   STATUS    RESTARTS   AGE     IP           NODE                                NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0          3m48s   10.12.0.16   aks-agentpool-27766319-vmss000000   <none>           <none>
redis-cluster-1   1/1     Running   0          3m48s   10.12.0.66   aks-agentpool-27766319-vmss000001   <none>           <none>
redis-cluster-2   1/1     Running   0          15s     10.12.0.26   aks-agentpool-27766319-vmss000000   <none>           <none>
redis-cluster-3   1/1     Running   0          3m48s   10.12.0.81   aks-agentpool-27766319-vmss000002   <none>           <none>
redis-cluster-4   1/1     Running   0          3m48s   10.12.0.58   aks-agentpool-27766319-vmss000001   <none>           <none>
redis-cluster-5   1/1     Running   0          3m47s   10.12.0.18   aks-agentpool-27766319-vmss000000   <none>           <none>
redis-cluster-6   1/1     Running   0          3m47s   10.12.0.72   aks-agentpool-27766319-vmss000002   <none>           <none>
How is this dynamic restart handled?
Thanks,
Eswar
Hi @EswarRams! There are indeed some known problems with this chart and failover, like the one you describe in #3876, and we are currently working on them. I see you have already commented on that issue, so, if you could, please continue the discussion there so every problem has its own specific issue. As the original problem was solved, I'll proceed to close this issue. Please do open a new issue if you run into any new problems!
I created the Redis cluster using Helm. It always creates 6 cluster nodes even when it creates 8 Redis servers. When I logged in to one of the Redis k8s pods and ran the cluster nodes command, I noticed it had only 3 shards, and 1 master was added with no slave for it.
pod/my-redis-cluster-0 1/1 Running 0 8m28s
pod/my-redis-cluster-1 1/1 Running 0 8m28s
pod/my-redis-cluster-2 1/1 Running 0 8m27s
pod/my-redis-cluster-3 1/1 Running 0 8m27s
pod/my-redis-cluster-4 1/1 Running 0 8m27s
pod/my-redis-cluster-5 1/1 Running 0 8m27s
pod/my-redis-cluster-6 1/1 Running 0 8m27s
pod/my-redis-cluster-7 1/1 Running 0 8m27s
pod/my-redis-cluster-cluster-create-26zrd 0/1 Completed 0 8m27s
d6557744ef30862c8f09957f023279a647e4606b 10.12.0.57:6379@16379 slave 0d5197448db103acdf39c2c7a33b3e9e929cbbee 0 1601416394000 8 connected
0d5197448db103acdf39c2c7a33b3e9e929cbbee 10.12.0.82:6379@16379 master - 0 1601416397786 8 connected 10923-16383
67413dfab2075a97fdfd0877051f01ebd4a05eb6 10.12.0.11:6379@16379 slave cfa99d6bc724e3024b2d6fd97ec958425f7d95ea 0 1601416399792 7 connected
3c8e6eb1696febc9b51f048590a9adc406256cbb 10.12.0.72:6379@16379 slave 1b8f0bdbd77ee767968d60430cca71b48585f0ff 0 1601416398789 9 connected
1811c44ac19dc0f75d8e7a93b51e449a341f12b2 10.12.0.17:6379@16379 master - 0 1601416397000 0 connected
1b8f0bdbd77ee767968d60430cca71b48585f0ff 10.12.0.78:6379@16379 master - 0 1601416398000 9 connected 0-5460
cfa99d6bc724e3024b2d6fd97ec958425f7d95ea 10.12.0.45:6379@16379 myself,master - 0 1601416396000 7 connected 5461-10922
I would like to create a bigger Redis cluster with multiple partitions. How should I do it?