masters fight over the same range #3721
@kostyrev the latest code (not in a release version) should recover from this much better, but in the meantime take a look at #3310 (comment). Also consider whether you have anything that might be abruptly killing the Weave Net containers, such as a memory limit or a liveness probe; that could be the trigger for the problem.
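Not part of the original comment, but a rough way to check for the triggers described above (restarts from OOM kills or failed probes) might look like the following. It assumes the standard weave-net DaemonSet in `kube-system` with the `name=weave-net` label, and uses a pod name from this thread as an example:

```sh
# List weave-net pods with their restart counts; frequent restarts suggest
# something is killing the containers
kubectl get pods -n kube-system -l name=weave-net \
  -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount

# Inspect why a container last terminated (OOMKilled, liveness probe failure, ...)
kubectl describe pod -n kube-system weave-net-pj8pg | grep -A 5 "Last State"
```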
thanks! these are newly recreated masters and the weave-net pods were not restarted
You mean you removed and recreated the entire machine? Yes, that should make the file go away. I guess it's possible that each time you recreated one node, you left two nodes running with an inconsistent state, and each time the new node came up it accepted one of those states, so the inconsistency was carried across all three restarts. Something like the story of the fox, the chicken and the corn. Hopefully there is an ordering where you can remove the state, restart, and lose the inconsistency.
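For context (not spelled out in the thread): the state being discussed is Weave Net's persisted IPAM data, kept in a file on each host. A minimal sketch of "remove the state, restart", assuming the usual host path `/var/lib/weave/weave-netdata.db` and the standard DaemonSet in `kube-system` — verify the path on your own nodes before deleting anything:

```sh
# 1. On the affected master, remove the persisted IPAM state
#    (path is the common Weave Net default; confirm it on your hosts)
sudo rm /var/lib/weave/weave-netdata.db

# 2. Delete that node's weave-net pod so the DaemonSet recreates it
#    without the stale state
kubectl delete pod -n kube-system weave-net-pj8pg
```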
Yes, I'll try to recreate them all at the same time tomorrow. Thank you!
So I found verify-weave.sh from this issue and ran it as the issue suggests.
Output from verify-weave.sh (collapsed in the original) for the pods weave-net-pj8pg, weave-net-24nkq, and weave-net-hd9dv, followed by logs from the same three pods.
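The contents of verify-weave.sh are not reproduced here, but the underlying check is comparing what each peer believes about IPAM ownership. A hand-rolled version of that comparison, assuming the standard weave-net DaemonSet in `kube-system`, could look like this:

```sh
# Print the IPAM view of every weave-net pod; ranges that are claimed by
# different peers, or stuck as "unreachable", indicate the split-brain
# described in this issue
for pod in $(kubectl get pods -n kube-system -l name=weave-net -o name); do
  echo "=== ${pod} ==="
  kubectl exec -n kube-system "${pod#pod/}" -c weave -- \
    /home/weave/weave --local status ipam
done
```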
I've tried to recreate the masters one by one (it's an ASG-based kops installation with three masters), but those messages do not go away.
What's the right way to resolve this issue?