Feature Request

Describe your feature request related problem

Currently, we rely on (*AllocatorManager).PriorityChecker() to ensure that a Local TSO Allocator is elected from its corresponding DC whenever possible. However, when a new cluster starts up, or when new PD instances from the same DC join, the Local TSO Allocator role may end up held by a PD node from another DC. This is because the priority is only enforced by PriorityChecker, which periodically verifies that the current Local TSO Allocator belongs to the right DC, so there is some lag depending on the inspection period.
pd/server/tso/allocator_manager.go, lines 784 to 791 at b157862:

```go
// PriorityChecker is used to check the election priority of a Local TSO Allocator.
// In the normal case, if we want to elect a Local TSO Allocator for a certain DC,
// such as dc-1, we need to follow these priority rules:
// 1. The PD server with dc-location="dc-1" needs to be elected as the allocator
//    leader with the highest priority.
// 2. If all PD servers with dc-location="dc-1" are down, then the PD servers
//    of other DCs can be elected.
func (am *AllocatorManager) PriorityChecker() {
```
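The check described above is reactive: the allocator's DC is only verified once per inspection period, so a wrong-DC allocator can persist for up to one full interval. A minimal sketch of that loop, under assumptions of my own (shouldTransfer, the DC strings, and the 50 ms interval are illustrative, not PD's real API):

```go
package main

import (
	"fmt"
	"time"
)

// shouldTransfer reports whether the Local TSO Allocator leadership for
// targetDC should be moved: the current holder is from another DC while a
// healthy PD server from targetDC exists. Hypothetical helper for illustration.
func shouldTransfer(currentHolderDC, targetDC string, healthyInTargetDC bool) bool {
	return currentHolderDC != targetDC && healthyInTargetDC
}

func main() {
	// The inspection period drives how long a wrong-DC allocator can linger;
	// 50 ms here only keeps the example fast, real PD uses a longer interval.
	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < 2; i++ {
		<-ticker.C
		if shouldTransfer("dc-2", "dc-1", true) {
			fmt.Println("transfer Local TSO Allocator leadership back to dc-1")
		}
	}
}
```

The point of the sketch is the worst case: a PD node from dc-2 can hold dc-1's allocator for up to one tick before the checker notices.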
Describe the feature you'd like
We need to make sure that PD nodes from the corresponding DC become its Local TSO Allocator as soon as possible, rather than being corrected by PriorityChecker later. This should let the cluster provide a more stable and faster TSO service to TiDB.
Describe alternatives you've considered
Maybe we can trigger a DC's Local TSO Allocator election earlier on its own PD nodes than on PD nodes in other DCs.