Error bringing back up failed master #63

mperham opened this issue Dec 30, 2013 · 8 comments

mperham commented Dec 30, 2013

I have master localhost:7777 and slave localhost:8888

I take down 7777 and everything switches over to 8888 without a problem. When I bring 7777 back up, Redis properly switches itself to a slave of 8888, but I see this error in the log output of the leader redis_node_manager. Any idea what's wrong?

2013-12-30 06:55:01 UTC RedisFailover 12178 INFO: RedisFailover::NodeStrategy::Majority marking localhost:7777 as unavailable. Snapshot: Node localhost:7777 available by {}, unavailable by ["mikes-macbook-air.local-12374", "mikes-macbook-air.local-12178"] (0 up, 2 down)
2013-12-30 06:55:06 UTC RedisFailover 12178 INFO: Reconciling node localhost:7777
2013-12-30 06:55:06 UTC RedisFailover 12178 INFO: localhost:7777 is now a slave of localhost:8888
2013-12-30 06:55:06 UTC RedisFailover 12178 ERROR: Error handling state report for [<RedisFailover::Node localhost:7777>, :available]: #<RedisFailover::NodeUnavailableError: Node: RedisFailover::NodeUnavailableError: Node: Redis::CommandError: READONLY You can't write against a read only slave.>
2013-12-30 06:55:11 UTC RedisFailover 12178 INFO: Handling available node: localhost:7777
@ryanlecompte (Owner)

Hmm, I haven't seen that particular error before. Which version of redis is this? In this scenario, the resurrected node (previous master) should be made a slave of the newly promoted master node. The node manager does this by sending a "slaveof" command. After that happens, we issue periodic "state reports" for each node. A state report is really just the result of sending an "INFO" command to the redis node. I wonder if somehow Redis is in a "read-only" mode and rejects INFO commands when in that mode?
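(For illustration only, here's a minimal sketch of that flow using the redis-rb client against the hosts from the report above; this is not the gem's actual code.)

require 'redis'

# Re-point the resurrected node (the old master) at the newly promoted master.
old_master = Redis.new(host: 'localhost', port: 7777)
old_master.slaveof('localhost', 8888)

# A periodic "state report" is essentially the parsed output of INFO.
replication = old_master.info('replication')
puts replication['role']               # "slave" once SLAVEOF has taken effect
puts replication['master_link_status'] # "up" once it has synced with the new master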

mperham commented Dec 30, 2013

2.8.3

arohter commented Dec 31, 2013

You must have 'slave-read-only yes' set in your redis.conf: https://github.com/antirez/redis/blob/unstable/redis.conf#L223

Your old master process is coming back up thinking it's master (probably set in redis.conf), at which point:

node_manager::update_master_state is called.
It's :available and not syncing_with_master (since it still thinks it's master), so handle_available is called.
The first thing handle_available does is call reconcile().
reconcile correctly determines that the redis node needs to be made a slave, so it calls node.make_slave!
make_slave! issues the slaveof command, BUT then calls the wakeup() method...
wakeup does an LPUSH, which is a write op ( https://github.com/antirez/redis/blob/unstable/src/redis.c#L138 ), which is where this error comes from.

Additionally, while I haven't spent much time looking at the node manager code, it seems to me that the node.rb wait() health-checking functionality simply won't work against read-only slaves (the default since 2.6), since BLPOP and DEL ops are writes. I can confirm that only the master node is receiving redis_failover* ops. What is weird is that there should be a NodeWatcher thread attempting (and failing) this against the slaves as well, yet I'm not seeing those read-only exceptions in the logs.
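(To make the failure concrete, here's a minimal sketch of both cases using redis-rb against a node with slave-read-only yes; the key names are illustrative, not the gem's actual keys.)

require 'redis'

slave = Redis.new(host: 'localhost', port: 7777) # the demoted old master

begin
  # wakeup() does an LPUSH; Redis classifies LPUSH as a write command,
  # so a read-only slave rejects it.
  slave.lpush('redis_failover_wakeup', 'WAKEUP')
rescue Redis::CommandError => e
  puts e.message # READONLY You can't write against a read only slave.
end

begin
  # The wait()-style health check has the same problem: BLPOP and DEL
  # are also write commands and are rejected by a read-only slave.
  slave.del('redis_failover_wait')
  slave.blpop('redis_failover_wait', timeout: 5)
rescue Redis::CommandError => e
  puts e.message
end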

mperham commented Dec 31, 2013

I believe this might be something new in 2.8. Slaves are now read-only by default.

@ryanlecompte (Owner)

That sounds like exactly what's happening here, @arohter, given that slaves are read-only by default in 2.8. The Node Watcher uses the blpop/del/lpush redis commands when watching a node's status. I went that route to avoid a busy-wait loop, but maybe polling is what we should do now that slaves default to read-only. @arohter, is this something that you'd be interested in modifying? I think getting redis_failover working with 2.8 and Ruby 2.0 would be great, along with the other fixes that you've already submitted. I'll create an issue for it.
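(Roughly, the blocking pattern in question looks like this; a sketch assuming redis-rb, not the gem's actual implementation.)

require 'redis'

WAIT_KEY = 'redis_failover_node_wait' # illustrative key name

# wait: instead of busy-waiting, block on BLPOP until something is pushed
# onto the key (or the connection drops, which surfaces as an error).
def wait_on_node(node)
  node.del(WAIT_KEY)
  node.blpop(WAIT_KEY, timeout: 0) # 0 = block indefinitely
end

# wakeup: push a sentinel value (from a separate connection) so the
# blocked BLPOP returns immediately.
def wakeup_node(node)
  node.lpush(WAIT_KEY, 'WAKEUP')
end

# All three commands (DEL, BLPOP, LPUSH) are writes, hence the problem
# against read-only slaves.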

arohter commented Jan 7, 2014

Actually, slaves became read-only by default in 2.6, not just 2.8 :(

We're investigating this issue now. There are really two issues here: 1) the one reported above, where the wakeup() method is called on cluster reconfig, and 2) slave node_watcher health checking in general.

This leads to a couple of questions.

Is there a reason for using BLPOP beyond avoiding the busy-wait loop? It can be handy for immediately detecting node process failures, but other parts of the code (specifically the snapshot processing loop) still rely on relatively long (5s) intervals (unless there's some zk Watch code I've failed to see), so I'm not sure this is an active feature. The same comment applies to the wakeup() method. I just want to make sure there are no other reasons/advantages for using BLPOP.

I vaguely recall (from digging into the history a while back) that you were using a polling loop before switching to BLPOP. What was the main motivation for the change? Anything we should be on the alert for? I'm assuming a switch to a simple INFO or PING polling health check, though I'll consider whether there's anything more sophisticated/better.

@ryanlecompte (Owner)

I think switching back to a polling approach using PING or INFO should be fine. I can't recall at this point why I switched, other than to avoid hitting the redis servers with commands every N seconds, but like you said, it's probably not a big deal at all, and it sounds like what we need to do now that slaves are read-only by default as of 2.6. Thanks for taking this on!
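(A rough sketch of what that polling loop might look like, assuming redis-rb; the node list, timeout, and 5s interval are placeholders.)

require 'redis'

NODES = [['localhost', 7777], ['localhost', 8888]]

# Probe a node with a read-only command instead of blocking on BLPOP.
def node_available?(host, port)
  Redis.new(host: host, port: port, timeout: 2).ping == 'PONG'
rescue Redis::BaseConnectionError, Redis::CommandError
  false
end

loop do
  NODES.each do |host, port|
    puts "#{host}:#{port} available=#{node_available?(host, port)}"
  end
  sleep 5 # polling interval (placeholder)
end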

arohter commented Jan 7, 2014

I think we'll use ECHO, since it properly fails when slave-serve-stale-data is set to no.
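(A drop-in variant of the probe from the sketch above, assuming redis-rb; per the comment above, ECHO is rejected by a slave that is refusing stale data, which a plain PING or INFO probe would not catch.)

require 'redis'

# ECHO-based probe: the node must echo the payload back exactly. A slave
# with slave-serve-stale-data no that has lost its master rejects ECHO
# with an error, so this check fails for it as well.
def node_available?(host, port)
  payload = 'redis_failover_health_check' # illustrative payload
  Redis.new(host: host, port: port, timeout: 2).echo(payload) == payload
rescue Redis::BaseConnectionError, Redis::CommandError
  false
end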
