
redis-ha-4.27.0 - split brain #283

Open
Pride1st1 opened this issue Jun 26, 2024 · 8 comments
Assignees
Labels
bug Something isn't working

Comments

@Pride1st1

Describe the bug
I deployed the chart with default values. While it was in operation we hit a condition where redis-0 and redis-2 are replicas of redis-1, and redis-1 is a replica of redis-0. The split-brain-fix container wasn't able to fix the problem.

172.20.75.109 - redis-0
172.20.181.236 - redis-1
172.20.198.17 - redis-2

redis-0:

1:S 18 Jun 2024 15:23:36.849 * Connecting to MASTER 172.20.181.236:6379
1:S 18 Jun 2024 15:23:36.849 * MASTER <-> REPLICA sync started
1:S 18 Jun 2024 15:23:36.850 # Error condition on socket for SYNC: Connection refused
1:S 18 Jun 2024 15:23:37.852 * Connecting to MASTER 172.20.181.236:6379
1:S 18 Jun 2024 15:23:37.852 * MASTER <-> REPLICA sync started

redis-1 (sentinel tries to restart it):

1:S 18 Jun 2024 15:26:55.109 * Ready to accept connections tcp
1:S 18 Jun 2024 15:26:55.109 * Connecting to MASTER 172.20.75.109:6379
1:S 18 Jun 2024 15:26:55.109 * MASTER <-> REPLICA sync started
1:S 18 Jun 2024 15:26:55.110 * Non blocking connect for SYNC fired the event.
1:S 18 Jun 2024 15:26:55.111 * Master replied to PING, replication can continue...
1:S 18 Jun 2024 15:26:55.112 * Trying a partial resynchronization (request 8605e4e1a74e2a74a8ad3742efb5784ad4b0ce41:1).
1:S 18 Jun 2024 15:26:55.113 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 18 Jun 2024 15:26:56.113 * Connecting to MASTER 172.20.75.109:6379
1:S 18 Jun 2024 15:26:56.114 * MASTER <-> REPLICA sync started

sentinel-1 (leader):

1:X 18 Jun 2024 15:26:55.883 * +reboot master mymaster 172.20.181.236 6379
1:X 18 Jun 2024 15:28:09.960 # +new-epoch 21
1:X 18 Jun 2024 15:28:09.960 # +try-failover master mymaster 172.20.181.236 6379
1:X 18 Jun 2024 15:28:09.963 * Sentinel new configuration saved on disk
1:X 18 Jun 2024 15:28:09.963 # +vote-for-leader aa33680947f52ae19df761ea8f26a4285d4910c1 21
1:X 18 Jun 2024 15:28:09.969 * d4ca60ac0fa2353d3c6a5684df1401f8faccf6ef voted for aa33680947f52ae19df761ea8f26a4285d4910c1 21
1:X 18 Jun 2024 15:28:09.969 * d21ee95d5d45a94a9deb59bd2b2797a4bddedf53 voted for aa33680947f52ae19df761ea8f26a4285d4910c1 21
1:X 18 Jun 2024 15:28:10.039 # +elected-leader master mymaster 172.20.181.236 6379
1:X 18 Jun 2024 15:28:10.039 # +failover-state-select-slave master mymaster 172.20.181.236 6379
1:X 18 Jun 2024 15:28:10.115 # -failover-abort-no-good-slave master mymaster 172.20.181.236 6379
1:X 18 Jun 2024 15:28:10.187 * Next failover delay: I will not start a failover before Tue Jun 18 15:34:10 2024
1:X 18 Jun 2024 15:32:53.936 * +reboot master mymaster 172.20.181.236 6379

split-brain-fix-1:

2024-06-18 18:20:30.025  Could not connect to Redis at 127.0.0.1:6379: Connection refused
2024-06-18 18:20:30.025  Could not connect to Redis at 127.0.0.1:6379: Connection refused
2024-06-18 18:21:30.027  Identifying redis master (get-master-addr-by-name)..
2024-06-18 18:21:30.027  using sentinel (hewi-redis-ha), sentinel group name (mymaster)
2024-06-18 18:21:30.043  Tue Jun 18 15:21:30 UTC 2024 Found redis master (172.20.181.236)
2024-06-18 18:21:30.046  Could not connect to Redis at 127.0.0.1:6379: Connection refused
2024-06-18 18:21:30.049  Tue Jun 18 15:21:30 UTC 2024 Start...
2024-06-18 18:21:30.057  Initializing config..
2024-06-18 18:21:30.057  Copying default redis config..
2024-06-18 18:21:30.057  to '/data/conf/redis.conf'
2024-06-18 18:21:30.061  Copying default sentinel config..
2024-06-18 18:21:30.061  to '/data/conf/sentinel.conf'
2024-06-18 18:21:30.063  Identifying redis master (get-master-addr-by-name)..
2024-06-18 18:21:30.063  using sentinel (hewi-redis-ha), sentinel group name (mymaster)
2024-06-18 18:21:30.083  Tue Jun 18 15:21:30 UTC 2024 Found redis master (172.20.181.236)
2024-06-18 18:21:30.083  Identify announce ip for this pod..
2024-06-18 18:21:30.083  using (hewi-redis-ha-announce-1) or (hewi-redis-ha-server-1)
2024-06-18 18:21:30.088  identified announce (172.20.181.236)
2024-06-18 18:21:30.088  Verifying redis master..
2024-06-18 18:21:30.088  ping (172.20.181.236:6379)
2024-06-18 18:21:30.091  Could not connect to Redis at 172.20.181.236:6379: Connection refused
2024-06-18 18:21:34.102  Could not connect to Redis at 172.20.181.236:6379: Connection refused
2024-06-18 18:21:39.125  Could not connect to Redis at 172.20.181.236:6379: Connection refused
2024-06-18 18:21:45.137  Tue Jun 18 15:21:45 UTC 2024 Can't ping redis master (172.20.181.236)
2024-06-18 18:21:45.137  Attempting to force failover (sentinel failover)..
2024-06-18 18:21:45.137  on sentinel (hewi-redis-ha:26379), sentinel grp (mymaster)
2024-06-18 18:21:45.144  Tue Jun 18 15:21:45 UTC 2024 Failover returned with 'NOGOODSLAVE'
2024-06-18 18:21:45.144  Setting defaults for this pod..
2024-06-18 18:21:45.144  Setting up defaults..
2024-06-18 18:21:45.144  using statefulset index (1)
2024-06-18 18:21:45.144  Getting redis master ip..
2024-06-18 18:21:45.144  blindly assuming (hewi-redis-ha-announce-0) or (hewi-redis-ha-server-0) are master
2024-06-18 18:21:45.161  identified redis (may be redis master) ip (172.20.75.109)
2024-06-18 18:21:45.161  Setting default slave config for redis and sentinel..
2024-06-18 18:21:45.161  using master ip (172.20.75.109)
2024-06-18 18:21:45.161  Updating redis config..
2024-06-18 18:21:45.162  we are slave of redis master (172.20.75.109:6379)
2024-06-18 18:21:45.162  Updating sentinel config..
2024-06-18 18:21:45.162  evaluating sentinel id (${SENTINEL_ID_1})
2024-06-18 18:21:45.162  sentinel id (aa33680947f52ae19df761ea8f26a4285d4910c1), sentinel grp (mymaster), quorum (2)
2024-06-18 18:21:45.163  redis master (172.20.75.109:6379)
2024-06-18 18:21:45.164  announce (172.20.181.236:26379)
2024-06-18 18:21:45.165  Tue Jun 18 15:21:45 UTC 2024 Ready...

split-brain-fix-0:

2024-06-18 18:21:56.044  using sentinel (hewi-redis-ha), sentinel group name (mymaster)
2024-06-18 18:21:56.052  Tue Jun 18 15:21:56 UTC 2024 Found redis master (172.20.181.236)
2024-06-18 18:22:56.056  Identifying redis master (get-master-addr-by-name)..
2024-06-18 18:22:56.056  using sentinel (hewi-redis-ha), sentinel group name (mymaster)
2024-06-18 18:22:56.063  Tue Jun 18 15:22:56 UTC 2024 Found redis master (172.20.181.236)
2024-06-18 18:23:56.067  Identifying redis master (get-master-addr-by-name)..

To Reproduce
I tried node/pod deletion and redis-cli replicaof, but had no success reproducing this bug.

Expected behavior
The split-brain-fix container should fix even this rare case.

Additional context
The script's logic was broken by the sentinel's inability to fail over. Maybe the script should have an additional condition that checks the role of the potential default master. I would really appreciate any help with this. Please let me know if you need any additional logs/checks.
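To illustrate the suggested extra condition: before the script "blindly assumes" server-0 is master, it could ask that node for its actual role via `redis-cli role`. The sketch below is a hypothetical helper, not the chart's code; `role_is_master` only parses a ROLE reply, and `DEFAULT_MASTER`/`REDIS_PORT` are placeholders for values the real script derives.

```shell
#!/bin/sh
# Hypothetical guard for the "blindly assuming ... are master" step.
# role_is_master succeeds only when the first line of a `redis-cli role`
# reply is exactly "master".
role_is_master() {
    printf '%s\n' "$1" | head -n 1 | grep -q '^master$'
}

# Sketch of how the split-brain-fix script could use it (DEFAULT_MASTER
# and REDIS_PORT are placeholders, not the chart's actual variables):
#
#   ROLE_REPLY="$(redis-cli -h "$DEFAULT_MASTER" -p "$REDIS_PORT" role)"
#   if role_is_master "$ROLE_REPLY"; then
#       echo "confirmed ($DEFAULT_MASTER) really is master, following it"
#   else
#       echo "($DEFAULT_MASTER) is not master, refusing default slave config"
#   fi
```

With a guard like this, the redis-1 pod in the logs above would have refused to configure itself as a slave of redis-0, since redis-0 was itself replicating from redis-1.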

Pride1st1 added the bug label Jun 26, 2024
@mhkarimi1383
Contributor

+1

@tschirmer

I've had this too.

I've found that after we added a descheduler to the stack (https://github.com/kubernetes-sigs/descheduler) to balance nodes automatically, this kind of issue frequently took the Redis service down.

Can the master allocation be done with kubernetes lease locks? https://kubernetes.io/docs/concepts/architecture/leases/
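For reference, a Lease is just a small coordination.k8s.io object that one holder keeps renewing; a lock for this chart could hypothetically look like the fragment below. All names and values here are illustrative, nothing the chart creates today.

```yaml
# Hypothetical Lease a master-election sidecar could hold; names and
# timings are placeholders, not part of the chart.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: redis-ha-master-lock
  namespace: default
spec:
  holderIdentity: hewi-redis-ha-server-0    # pod currently claiming master
  leaseDurationSeconds: 15                  # others may take over after this
  renewTime: "2024-06-18T15:28:10.000000Z"  # must be refreshed by the holder
```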

@DandyDeveloper
Owner

@tschirmer I'm trying to work out why this would happen unless the podManagementPolicy of the STS is set to Parallel?

Is this happening in either of your cases, @tschirmer?

Because in theory, on first rollout, the first pod should start up and become master, way before -1/-2 start.

@mhkarimi1383
Contributor

@DandyDeveloper
Hi
I'm having this problem when my network becomes a bit unstable (for example, pods are unable to reach each other for a second) and my Redis pods can't see each other.

@tschirmer

@tschirmer I'm trying to work out why this would happen unless the podManagementPolicy of the STS is set to Parallel?

Is this happening in either of your cases? @tschirmer ??

Because in theory, on first rollout, the first pod should start up and become master, way before -1/-2 start.

Haven't set it to Parallel. I suspect it's something like: a pod, when evicted, isn't completing trigger-failover-if-master.sh. We are running it with sentinel, which might add some complexity here. I haven't debugged it yet.

So far we're getting a load of issues with the liveness probe not containing the SENTINELAUTH env from the secret, even though it's clearly defined in the spec; a restart of the pod works around it. It's happening very frequently though, so I'm wondering if there needs to be a grace period defined on startup and shutdown to prevent both of these things from happening.
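For what it's worth, the kind of startup/shutdown grace meant here is expressible with plain Pod-spec fields. A sketch follows; the container name, script path, and timings are placeholders, not the chart's actual values.

```yaml
# Illustrative Pod-spec fragment only; values and paths are placeholders.
spec:
  terminationGracePeriodSeconds: 60   # room for the preStop hook to finish
  containers:
    - name: redis
      lifecycle:
        preStop:
          exec:
            # let the failover script run before SIGTERM reaches redis
            command: ["/bin/sh", "-c", "/path/to/trigger-failover-if-master.sh; sleep 10"]
      startupProbe:
        # keep liveness checks away until redis actually answers
        exec:
          command: ["redis-cli", "ping"]
        failureThreshold: 30
        periodSeconds: 5
```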

@mhkarimi1383
Contributor

I think having separate StatefulSets for the Redis servers and the sentinels would make this chart more stable and manageable: create two StatefulSets and point the sentinel monitor config at an external host.

@tschirmer

I like the idea of separate StatefulSets; I've been thinking of doing that and making a PR.

I suspect this comes from preStop hooks not firing or not completing successfully: trigger-failover-if-master.sh occasionally doesn't run as expected. When we had the descheduler running, there was ~2 min between each pod being turned off and on, and every now and again that would fail. The rate of failure is low, so it's unlikely to occur unless you're hammering it (we haven't had an issue with the cluster since we turned off the descheduler).

@mhkarimi1383
Contributor

I wanted to make a PR too, but there are a lot of configs that this change would have to propagate through.
