I already asked this question here, but I thought I'd open a separate issue:
Currently, when the cluster leader goes down and an election happens, all consul lock processes which have previously acquired locks will terminate immediately.
Is that really the intended behaviour?
If you use consul lock to implement a hot standby for singleton services (as described in the docs), a leader election will trigger a failover for every such service at once. I just don't think that's very practical.
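For context, the hot-standby pattern from the docs looks roughly like this; the KV prefix and script name are placeholders, and the same command runs on both the active and standby hosts:

```shell
# consul lock blocks until the lock at the given KV prefix is acquired,
# then runs the child command; only the current lock holder runs it.
# "service/my-app/leader" and "./run-my-app.sh" are placeholder names.
consul lock service/my-app/leader ./run-my-app.sh
```

The point of the issue is that today this child process is killed whenever a leader election occurs, even though the lock itself is still held.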
@jeinwag It is not intended behavior, we just need special-case handling of the "no cluster leader" return. The consul lock command errs on the side of caution and aborts if it encounters any error, but the lock is still "held" even if a leader election happens.
From the perspective of the servers, that client still holds the lock, but the client is not teasing apart the different errors to determine if it's safe to continue. Does that make sense?