
mcs, tso: PD/API forward streaming is expensive #6659

Closed
binshi-bing opened this issue Jun 23, 2023 · 2 comments · Fixed by #6664
Labels
type/enhancement The issue or PR belongs to an enhancement.

Comments

@binshi-bing
Contributor

Enhancement Task

profile004.pdf
[2023/06/23 16:23:04.959 +00:00] [INFO] [trace.go:152] ["trace[529495468] linearizableReadLoop"] [detail="{readStateIndex:52252203; appliedIndex:52252203; }"] [duration=565.10434ms] [start=2023/06/23 16:23:04.394 +00:00] [end=2023/06/23 16:23:04.959 +00:00] [steps="["trace[529495468] 'read index received' (duration: 565.085558ms)","trace[529495468] 'applied index is now lower than readState.Index' (duration: 3.208µs)"]"]
[2023/06/23 16:23:04.962 +00:00] [WARN] [util.go:163] ["apply request took too long"] [took=567.79578ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:"/pd/7187976276065784319/gc/safe_point" "] [response="range_response_count:1 size:79"] []
[2023/06/23 16:23:04.962 +00:00] [INFO] [trace.go:152] ["trace[1260142690] range"] [detail="{range_begin:/pd/7187976276065784319/gc/safe_point; range_end:; response_count:1; response_revision:49649625; }"] [duration=567.883329ms] [start=2023/06/23 16:23:04.394 +00:00] [end=2023/06/23 16:23:04.962 +00:00] [steps="["trace[1260142690] 'agreement among raft nodes before linearized reading' (duration: 567.767596ms)"]"]
[2023/06/23 16:23:04.966 +00:00] [WARN] [util.go:163] ["apply request took too long"] [took=572.019237ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:"/pd/7187976276065784319/gc/safe_point" "] [response="range_response_count:1 size:79"] []
[2023/06/23 16:23:04.966 +00:00] [INFO] [trace.go:152] ["trace[96759071] range"] [detail="{range_begin:/pd/7187976276065784319/gc/safe_point; range_end:; response_count:1; response_revision:49649625; }"] [duration=572.104726ms] [start=2023/06/23 16:23:04.394 +00:00] [end=2023/06/23 16:23:04.966 +00:00] [steps="["trace[96759071] 'agreement among raft nodes before linearized reading' (duration: 571.971197ms)"]"]
[2023/06/23 16:23:04.968 +00:00] [WARN] [util.go:121] ["failed to apply request"] [took=24.352µs] [request="header:<ID:1294372468269364274 > lease_revoke:id:11f688e6ea3db3f8"] [response=size:30] [error="lease not found"]
[2023/06/23 16:23:06.279 +00:00] [WARN] [util.go:121] ["failed to apply request"] [took=41.321µs] [request="header:<ID:1294372468269364300 > lease_revoke:id:11f688e6ea3db390"] [response=size:30] [error="lease not found"]
[2023/06/23 16:23:06.807 +00:00] [INFO] [grpc_service.go:1742] ["update service GC safe point"] [service-id=gc_worker] [expire-at=-9223372035167238423] [safepoint=442377643433918464]
[2023/06/23 16:23:12.698 +00:00] [WARN] [grpclog.go:60] ["transport: http2Server.HandleStreams failed to read frame: read tcp 10.0.102.147:2379->10.0.121.190:42872: read: connection reset by peer"]
[2023/06/23 16:23:14.325 +00:00] [INFO] [trace.go:152] ["trace[760380547] linearizableReadLoop"] [detail="{readStateIndex:52252313; appliedIndex:52252313; }"] [duration=442.316662ms] [start=2023/06/23 16:23:13.883 +00:00] [end=2023/06/23 16:23:14.325 +00:00] [steps="["trace[760380547] 'read index received' (duration: 442.312855ms)","trace[760380547] 'applied index is now lower than readState.Index' (duration: 2.946µs)"]"]
[2023/06/23 16:23:14.326 +00:00] [WARN] [util.go:163] ["apply request took too long"] [took=442.662924ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:"/pd/7187976276065784319/gc/safe_point" "] [response="range_response_count:1 size:79"] []
[2023/06/23 16:23:14.326 +00:00] [INFO] [trace.go:152] ["trace[548987313] range"] [detail="{range_begin:/pd/7187976276065784319/gc/safe_point; range_end:; response_count:1; response_revision:49649702; }"] [duration=442.759416ms] [start=2023/06/23 16:23:13.883 +00:00] [end=2023/06/23 16:23:14.326 +00:00] [steps="["trace[548987313] 'agreement among raft nodes before linearized reading' (duration: 442.467644ms)"]"]
[2023/06/23 16:23:14.978 +00:00] [WARN] [util.go:121] ["failed to apply request"] [took=43.11µs] [request="header:<ID:1294372468269364437 > lease_revoke:id:11f688e6ea3db3fe"] [response=size:30] [error="lease not found"]
[2023/06/23 16:23:25.639 +00:00] [WARN] [raft.go:440] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=bb2483bf620467b7] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=295.792928ms]
[2023/06/23 16:23:25.639 +00:00] [WARN] [raft.go:440] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=1594bb0eb9d011f6] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=295.878113ms]
[2023/06/23 16:23:32.128 +00:00] [WARN] [raft.go:440] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=bb2483bf620467b7] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=525.556745ms]
[2023/06/23 16:23:32.129 +00:00] [WARN] [raft.go:440] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=1594bb0eb9d011f6] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=525.601339ms]
[2023/06/23 16:23:36.075 +00:00] [WARN] [grpclog.go:60] ["transport: http2Server.HandleStreams failed to read frame: read tcp 10.0.102.147:2379->10.0.121.190:33884: read: connection reset by peer"]

@binshi-bing binshi-bing added the type/enhancement The issue or PR belongs to an enhancement. label Jun 23, 2023
@binshi-bing
Contributor Author

[2023/06/23 16:38:34.508 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.509 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.511 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.523 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.524 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.525 +00:00] [INFO] [coordinator.go:320] ["coordinator starts to collect cluster information"]
[2023/06/23 16:38:34.525 +00:00] [INFO] [cluster.go:331] ["memory info"] [total-mem=7527096320]
[2023/06/23 16:38:34.525 +00:00] [INFO] [tuner.go:117] ["new gctuner"] [threshold=4516257792]
[2023/06/23 16:38:34.525 +00:00] [INFO] [cluster.go:344] ["update gc tuner"] [enable-gc-tuner=false] [gc-threshold-bytes=4516257792]
[2023/06/23 16:38:34.525 +00:00] [INFO] [memory_limit_tuner.go:50] [debug.SetMemoryLimit] [limit=9223372036854775807] [ret=9223372036854775807]
[2023/06/23 16:38:34.525 +00:00] [INFO] [server.go:212] ["establish sync region stream"] [requested-server=serverless-cluster-pd-0] [url=http://serverless-cluster-pd-0.serverless-cluster-pd-peer.tidb-serverless.svc:2379]
[2023/06/23 16:38:34.526 +00:00] [INFO] [server.go:230] ["requested server has already in sync with server"] [requested-server=serverless-cluster-pd-0] [server=serverless-cluster-pd-1] [last-index=17212310]
[2023/06/23 16:38:34.525 +00:00] [INFO] [cluster.go:351] ["update gc memory limit"] [memory-limit-bytes=0] [memory-limit-gc-trigger-ratio=0.7]
[2023/06/23 16:38:34.526 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.526 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.529 +00:00] [INFO] [id.go:174] ["idAllocator allocates a new id"] [new-end=297984570] [new-base=297983570] [label=idalloc] [check-curr-end=true]
[2023/06/23 16:38:34.529 +00:00] [INFO] [util.go:41] ["load cluster version"] [cluster-version=6.5.0-cse-16-ge4c1515]
[2023/06/23 16:38:34.529 +00:00] [INFO] [server.go:1578] ["API service leader is ready to serve"] [leader-name=serverless-cluster-pd-1]
[2023/06/23 16:38:34.531 +00:00] [INFO] [store_config.go:200] ["sync the store config successful"] [store-address=tikv-r6gdlarge-25v230526-p0-tikv-0.tikv-r6gdlarge-25v230526-p0-tikv-peer.tidb-serverless.svc:20180] [store-config="{\n "coprocessor": {\n "region-max-size": "750MiB",\n "region-split-size": "500MiB",\n "region-max-keys": 75000000,\n "region-split-keys": 50000000,\n "enable-region-bucket": true,\n "region-bucket-size": "96MiB"\n }\n}"]
[2023/06/23 16:38:34.581 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.603 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.702 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.743 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.832 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.832 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:34.887 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:35.043 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:35.152 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:35.165 +00:00] [INFO] [tso_keyspace_group.go:201] ["all keyspace groups have equal or more than default replica count, stop to alloc node"]
[2023/06/23 16:38:35.387 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:35.451 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]
[2023/06/23 16:38:35.593 +00:00] [WARN] [peer.go:267] ["dropped internal Raft message since sending buffer is full (overloaded network)"] [message-type=MsgHeartbeat] [local-member-id=1594bb0eb9d011f6] [from=1594bb0eb9d011f6] [remote-peer-id=5d7de14a2e2790a0] [remote-peer-active=false]

@binshi-bing
Contributor Author

[2023/06/23 17:04:54.626 +00:00] [INFO] [coordinator.go:320] ["coordinator starts to collect cluster information"]
[2023/06/23 17:04:54.628 +00:00] [INFO] [cluster.go:331] ["memory info"] [total-mem=7527096320]
[2023/06/23 17:04:54.628 +00:00] [INFO] [cluster.go:344] ["update gc tuner"] [enable-gc-tuner=false] [gc-threshold-bytes=4516257792]
[2023/06/23 17:04:54.628 +00:00] [INFO] [memory_limit_tuner.go:50] [debug.SetMemoryLimit] [limit=9223372036854775807] [ret=9223372036854775807]
[2023/06/23 17:04:54.628 +00:00] [INFO] [cluster.go:351] ["update gc memory limit"] [memory-limit-bytes=0] [memory-limit-gc-trigger-ratio=0.7]
[2023/06/23 17:04:54.631 +00:00] [INFO] [id.go:174] ["idAllocator allocates a new id"] [new-end=297993570] [new-base=297992570] [label=idalloc] [check-curr-end=true]
[2023/06/23 17:04:54.631 +00:00] [INFO] [util.go:41] ["load cluster version"] [cluster-version=6.5.0-cse-16-ge4c1515]
[2023/06/23 17:04:54.631 +00:00] [INFO] [server.go:1578] ["API service leader is ready to serve"] [leader-name=serverless-cluster-pd-1]
[2023/06/23 17:04:54.633 +00:00] [INFO] [store_config.go:200] ["sync the store config successful"] [store-address=tikv-r6gdlarge-25v230526-p0-tikv-1.tikv-r6gdlarge-25v230526-p0-tikv-peer.tidb-serverless.svc:20180] [store-config="{\n "coprocessor": {\n "region-max-size": "750MiB",\n "region-split-size": "500MiB",\n "region-max-keys": 75000000,\n "region-split-keys": 50000000,\n "enable-region-bucket": true,\n "region-bucket-size": "96MiB"\n }\n}"]
[2023/06/23 17:04:55.217 +00:00] [INFO] [tso_keyspace_group.go:201] ["all keyspace groups have equal or more than default replica count, stop to alloc node"]
[2023/06/23 17:04:55.241 +00:00] [INFO] [server.go:212] ["establish sync region stream"] [requested-server=serverless-cluster-pd-0] [url=http://serverless-cluster-pd-0.serverless-cluster-pd-peer.tidb-serverless.svc:2379]
[2023/06/23 17:04:55.242 +00:00] [INFO] [server.go:295] ["sync the history regions with server"] [server=serverless-cluster-pd-0] [from-index=17284423] [last-index=17284425] [records-length=2]
[2023/06/23 17:04:55.243 +00:00] [INFO] [server.go:212] ["establish sync region stream"] [requested-server=serverless-cluster-pd-2] [url=http://serverless-cluster-pd-2.serverless-cluster-pd-peer.tidb-serverless.svc:2379]
[2023/06/23 17:04:55.244 +00:00] [INFO] [server.go:295] ["sync the history regions with server"] [server=serverless-cluster-pd-2] [from-index=17284423] [last-index=17284425] [records-length=2]
[2023/06/23 17:04:56.739 +00:00] [WARN] [v3_server.go:814] ["waiting for ReadIndex response took too long, retrying"] [sent-request-id=1294372468269500937] [retry-timeout=500ms]
[2023/06/23 17:04:56.740 +00:00] [INFO] [trace.go:152] ["trace[1864104980] linearizableReadLoop"] [detail="{readStateIndex:52344330; appliedIndex:52344332; }"] [duration=896.725028ms] [start=2023/06/23 17:04:55.843 +00:00] [end=2023/06/23 17:04:56.740 +00:00] [steps="["trace[1864104980] 'read index received' (duration: 896.720548ms)","trace[1864104980] 'applied index is now lower than readState.Index' (duration: 3.397µs)"]"]
[2023/06/23 17:04:56.740 +00:00] [INFO] [trace.go:152] ["trace[905983731] put"] [detail="{key:/ms/7187976276065784319/tso/registry/http://pd-tso-server-1.tso-service.tidb-serverless.svc:2379; req_size:188; response_revision:49734889; }"] [duration=896.669569ms] [start=2023/06/23 17:04:55.843 +00:00] [end=2023/06/23 17:04:56.740 +00:00] [steps="["trace[905983731] 'process raft request' (duration: 896.574965ms)"]"]
[2023/06/23 17:04:56.740 +00:00] [WARN] [util.go:163] ["apply request took too long"] [took=897.097505ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:"/pd/7187976276065784319/gc/safe_point" "] [response="range_response_count:1 size:79"] []
[2023/06/23 17:04:56.740 +00:00] [INFO] [trace.go:152] ["trace[1140742346] range"] [detail="{range_begin:/pd/7187976276065784319/gc/safe_point; range_end:; response_count:1; response_revision:49734889; }"] [duration=897.159994ms] [start=2023/06/23 17:04:55.843 +00:00] [end=2023/06/23 17:04:56.740 +00:00] [steps="["trace[1140742346] 'agreement among raft nodes before linearized reading' (duration: 897.02859ms)"]"]
[2023/06/23 17:04:56.753 +00:00] [WARN] [util.go:163] ["apply request took too long"] [took=908.03464ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:"/pd/7187976276065784319/gc/safe_point" "] [response="range_response_count:1 size:79"] []
[2023/06/23 17:04:56.753 +00:00] [INFO] [trace.go:152] ["trace[1545188436] range"] [detail="{range_begin:/pd/7187976276065784319/gc/safe_point; range_end:; response_count:1; response_revision:49734889; }"] [duration=908.252945ms] [start=2023/06/23 17:04:55.844 +00:00] [end=2023/06/23 17:04:56.753 +00:00] [steps="["trace[1545188436] 'agreement among raft nodes before linearized reading' (duration: 907.866634ms)"]"]
[2023/06/23 17:04:56.753 +00:00] [WARN] [util.go:163] ["apply request took too long"] [took=908.436836ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:"/pd/7187976276065784319/gc/safe_point" "] [response="range_response_count:1 size:79"] []
[2023/06/23 17:04:56.753 +00:00] [INFO] [trace.go:152] ["trace[189176557] range"] [detail="{range_begin:/pd/7187976276065784319/gc/safe_point; range_end:; response_count:1; response_revision:49734889; }"] [duration=908.526428ms] [start=2023/06/23 17:04:55.844 +00:00] [end=2023/06/23 17:04:56.753 +00:00] [steps="["

ti-chi-bot bot pushed a commit that referenced this issue Jun 26, 2023
…hanism. (#6664)

ref #6659

Fix expensive async forwardTSORequest() and its timeout mechanism.

In order to handle the timeout case for forwardStream send/recv, the existing logic creates a
context.WithTimeout(forwardCtx, ...) for every request and then starts a new goroutine, forwardTSORequest,
which is very expensive, as shown by the profiling in #6659.

This change creates a watchDeadline routine per forward stream and reuses it for all forwarded requests,
with forwardTSORequest called synchronously. Compared to the existing logic, the new approach
is much cheaper and the latency is much more stable.

Signed-off-by: Bin Shi <[email protected]>
@binshi-bing binshi-bing changed the title PD/API forward streaming is expensive mcs, tso: PD/API forward streaming is expensive Jun 29, 2023
rleungx pushed a commit to rleungx/pd that referenced this issue Aug 2, 2023
…hanism. (tikv#6664)

ref tikv#6659

rleungx pushed a commit to rleungx/pd that referenced this issue Aug 2, 2023
…hanism. (tikv#6664)

ref tikv#6659