
Improve sync speed #3989

Merged
merged 2 commits into unstable on Jul 19, 2022

Conversation

dapplion
Contributor

@dapplion dapplion commented May 8, 2022

Motivation

Lodestar doesn't have great sync speed compared to other clients. I've been analyzing why, and what we can do better.

Verifying blocks requires 3 separate tasks:

  • Verify signatures:
    • Collect signatures: 5 ms (index attestations)
    • Aggregate pubkeys: 45 ms
    • Verify signatures: 100-150 ms (in workers)
  • Run state transition:
    • processBlock: 5 ms pure logic, 5 ms commitViewDU
    • hashTreeRoot: 22 ms
  • Execution payload: (not benchmarked)

The same info in a table:

| Op | ms per block | ms per block ×32 |
| --- | --- | --- |
| processBlock | 32 | 1024 |
| getBlockSignatures | 45 | 1440 |
| verifySigs (102 s) | 100-200 | 3200-6400 |

As you can see, the bulk of the work is signature verification, and currently only the verifySigs part is done in workers; getBlockSignatures happens on the main thread.

Current master processes blocks in series, which submits ~100 sigs per block to the workers. Since that's not a lot, all signatures are submitted as a single job to worker id 0, so there is no parallelization at all.

Description

This PR attempts to better parallelize the tasks above:

  • Submit all signatures at once to ensure full utilization of workers
  • Ensure signatures and execution payloads are submitted first, before the state transition "blocks" the main thread
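
The "submit all signatures at once" idea can be sketched roughly as follows. This is illustrative only and not Lodestar's actual worker API; `splitIntoJobs` is a hypothetical helper showing how a whole segment's signature sets could be round-robined across all workers instead of being sent as one job to worker id 0:

```typescript
// Hypothetical sketch: distribute signature sets over N workers round-robin,
// so BLS verification can saturate all cores instead of a single worker.
function splitIntoJobs<T>(sets: T[], numWorkers: number): T[][] {
  const jobs: T[][] = Array.from({length: numWorkers}, () => [] as T[]);
  // Round-robin assignment: set i goes to worker i % numWorkers
  sets.forEach((set, i) => jobs[i % numWorkers].push(set));
  return jobs;
}
```

For example, a 32-block segment with ~100 signatures per block (~3200 sets) split over 4 workers yields 4 jobs of ~800 sets each, rather than one worker doing everything while the others idle.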

As a very rough estimate, this PR reduces the time to process 32 blocks from 8 s to 4 s, per the benchmark data below:

| Benchmark suite | Current: d7a22cb | Previous: d7acdcf | Ratio |
| --- | --- | --- | --- |
| altair verifyImport mainnet_s3766816:31 | 3.6543 s/op | 6.6473 s/op | 0.55 |

Here are some CPU profiles that show this data plotted

1: Full block segment profile

Signatures are collected first (wide blue sections), with some state transition runs in between (narrow tall pink sections). Then the rest of the blocks' state transitions run. Then the main thread is idle, waiting for workers to finish verifying signatures.

Screenshot from 2022-05-08 22-33-48

2: state transition run detail

hashTreeRoot dominates

Screenshot from 2022-05-08 22-34-18

TODO

  • Do a preliminary PR with a perf test on master for reference: Benchmark initial sync #3995
  • Benchmark with execution payload HTTP roundtrip time (just simulate)
  • Long term: explore how to aggregate pubkeys in workers

@dapplion
Contributor Author

dapplion commented May 12, 2022

@github-actions
Contributor

github-actions bot commented May 13, 2022

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: bae8e9e Previous: a4634c7 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 2.4413 ms/op 2.1790 ms/op 1.12
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 89.402 us/op 67.664 us/op 1.32
BLS verify - blst-native 2.2267 ms/op 2.1654 ms/op 1.03
BLS verifyMultipleSignatures 3 - blst-native 4.5546 ms/op 4.4701 ms/op 1.02
BLS verifyMultipleSignatures 8 - blst-native 9.7958 ms/op 9.6708 ms/op 1.01
BLS verifyMultipleSignatures 32 - blst-native 35.629 ms/op 35.268 ms/op 1.01
BLS aggregatePubkeys 32 - blst-native 46.742 us/op 46.664 us/op 1.00
BLS aggregatePubkeys 128 - blst-native 182.98 us/op 182.22 us/op 1.00
getAttestationsForBlock 56.397 ms/op 43.122 ms/op 1.31
isKnown best case - 1 super set check 516.00 ns/op 481.00 ns/op 1.07
isKnown normal case - 2 super set checks 500.00 ns/op 470.00 ns/op 1.06
isKnown worse case - 16 super set checks 506.00 ns/op 470.00 ns/op 1.08
CheckpointStateCache - add get delete 10.741 us/op 9.1860 us/op 1.17
validate gossip signedAggregateAndProof - struct 5.1199 ms/op 5.0093 ms/op 1.02
validate gossip attestation - struct 2.4229 ms/op 2.3853 ms/op 1.02
altair verifyImport mainnet_s3766816:31 9.8105 s/op 12.493 s/op 0.79
pickEth1Vote - no votes 2.5369 ms/op 2.1741 ms/op 1.17
pickEth1Vote - max votes 26.547 ms/op 22.837 ms/op 1.16
pickEth1Vote - Eth1Data hashTreeRoot value x2048 13.687 ms/op 13.550 ms/op 1.01
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 24.316 ms/op 22.099 ms/op 1.10
pickEth1Vote - Eth1Data fastSerialize value x2048 1.8527 ms/op 1.6469 ms/op 1.12
pickEth1Vote - Eth1Data fastSerialize tree x2048 18.734 ms/op 16.749 ms/op 1.12
bytes32 toHexString 1.3530 us/op 1.1710 us/op 1.16
bytes32 Buffer.toString(hex) 849.00 ns/op 830.00 ns/op 1.02
bytes32 Buffer.toString(hex) from Uint8Array 1.1270 us/op 1.0360 us/op 1.09
bytes32 Buffer.toString(hex) + 0x 847.00 ns/op 832.00 ns/op 1.02
Object access 1 prop 0.47500 ns/op 0.43500 ns/op 1.09
Map access 1 prop 0.35100 ns/op 0.29900 ns/op 1.17
Object get x1000 21.887 ns/op 11.448 ns/op 1.91
Map get x1000 1.2190 ns/op 0.94500 ns/op 1.29
Object set x1000 147.35 ns/op 91.569 ns/op 1.61
Map set x1000 86.803 ns/op 56.143 ns/op 1.55
Return object 10000 times 0.44780 ns/op 0.44200 ns/op 1.01
Throw Error 10000 times 7.0513 us/op 6.1580 us/op 1.15
enrSubnets - fastDeserialize 64 bits 3.3580 us/op 3.4960 us/op 0.96
enrSubnets - ssz BitVector 64 bits 1.0230 us/op 868.00 ns/op 1.18
enrSubnets - fastDeserialize 4 bits 496.00 ns/op 471.00 ns/op 1.05
enrSubnets - ssz BitVector 4 bits 919.00 ns/op 886.00 ns/op 1.04
prioritizePeers score -10:0 att 32-0.1 sync 2-0 118.47 us/op 96.836 us/op 1.22
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 150.56 us/op 114.83 us/op 1.31
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 270.06 us/op 213.13 us/op 1.27
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 580.72 us/op 435.21 us/op 1.33
prioritizePeers score 0:0 att 64-1 sync 4-1 545.64 us/op 406.60 us/op 1.34
RateTracker 1000000 limit, 1 obj count per request 258.70 ns/op 201.97 ns/op 1.28
RateTracker 1000000 limit, 2 obj count per request 197.18 ns/op 151.37 ns/op 1.30
RateTracker 1000000 limit, 4 obj count per request 166.83 ns/op 127.11 ns/op 1.31
RateTracker 1000000 limit, 8 obj count per request 146.88 ns/op 113.56 ns/op 1.29
RateTracker with prune 5.1640 us/op 4.9690 us/op 1.04
array of 16000 items push then shift 3.7837 us/op 51.596 us/op 0.07
LinkedList of 16000 items push then shift 28.423 ns/op 16.476 ns/op 1.73
array of 16000 items push then pop 269.96 ns/op 246.44 ns/op 1.10
LinkedList of 16000 items push then pop 24.963 ns/op 14.761 ns/op 1.69
array of 24000 items push then shift 5.4532 us/op 77.365 us/op 0.07
LinkedList of 24000 items push then shift 29.615 ns/op 21.398 ns/op 1.38
array of 24000 items push then pop 232.64 ns/op 197.94 ns/op 1.18
LinkedList of 24000 items push then pop 24.878 ns/op 16.626 ns/op 1.50
intersect bitArray bitLen 8 14.092 ns/op 10.805 ns/op 1.30
intersect array and set length 8 204.19 ns/op 156.53 ns/op 1.30
intersect bitArray bitLen 128 86.395 ns/op 55.520 ns/op 1.56
intersect array and set length 128 2.7425 us/op 1.9721 us/op 1.39
pass gossip attestations to forkchoice per slot 3.7506 ms/op 5.1573 ms/op 0.73
computeDeltas 3.6698 ms/op 3.2401 ms/op 1.13
computeProposerBoostScoreFromBalances 1.1065 ms/op 806.25 us/op 1.37
altair processAttestation - 250000 vs - 7PWei normalcase 4.5043 ms/op 3.9376 ms/op 1.14
altair processAttestation - 250000 vs - 7PWei worstcase 6.9486 ms/op 5.6776 ms/op 1.22
altair processAttestation - setStatus - 1/6 committees join 241.98 us/op 176.51 us/op 1.37
altair processAttestation - setStatus - 1/3 committees join 460.57 us/op 343.82 us/op 1.34
altair processAttestation - setStatus - 1/2 committees join 650.18 us/op 496.05 us/op 1.31
altair processAttestation - setStatus - 2/3 committees join 837.81 us/op 643.70 us/op 1.30
altair processAttestation - setStatus - 4/5 committees join 1.1605 ms/op 900.36 us/op 1.29
altair processAttestation - setStatus - 100% committees join 1.4031 ms/op 1.0869 ms/op 1.29
altair processBlock - 250000 vs - 7PWei normalcase 31.557 ms/op 27.088 ms/op 1.16
altair processBlock - 250000 vs - 7PWei normalcase hashState 43.237 ms/op 34.189 ms/op 1.26
altair processBlock - 250000 vs - 7PWei worstcase 103.16 ms/op 87.746 ms/op 1.18
altair processBlock - 250000 vs - 7PWei worstcase hashState 115.23 ms/op 101.73 ms/op 1.13
phase0 processBlock - 250000 vs - 7PWei normalcase 5.1969 ms/op 4.2340 ms/op 1.23
phase0 processBlock - 250000 vs - 7PWei worstcase 60.458 ms/op 53.119 ms/op 1.14
altair processEth1Data - 250000 vs - 7PWei normalcase 1.1603 ms/op 805.46 us/op 1.44
Tree 40 250000 create 981.05 ms/op 701.15 ms/op 1.40
Tree 40 250000 get(125000) 347.29 ns/op 239.65 ns/op 1.45
Tree 40 250000 set(125000) 2.7648 us/op 2.4745 us/op 1.12
Tree 40 250000 toArray() 36.631 ms/op 28.838 ms/op 1.27
Tree 40 250000 iterate all - toArray() + loop 36.727 ms/op 28.870 ms/op 1.27
Tree 40 250000 iterate all - get(i) 129.91 ms/op 113.80 ms/op 1.14
MutableVector 250000 create 17.708 ms/op 15.076 ms/op 1.17
MutableVector 250000 get(125000) 17.776 ns/op 10.679 ns/op 1.66
MutableVector 250000 set(125000) 669.50 ns/op 557.01 ns/op 1.20
MutableVector 250000 toArray() 7.8315 ms/op 6.2896 ms/op 1.25
MutableVector 250000 iterate all - toArray() + loop 7.9939 ms/op 6.4755 ms/op 1.23
MutableVector 250000 iterate all - get(i) 3.9438 ms/op 2.6905 ms/op 1.47
Array 250000 create 7.2360 ms/op 6.2047 ms/op 1.17
Array 250000 clone - spread 3.7443 ms/op 3.2119 ms/op 1.17
Array 250000 get(125000) 1.6530 ns/op 1.5360 ns/op 1.08
Array 250000 set(125000) 1.6680 ns/op 1.5730 ns/op 1.06
Array 250000 iterate all - loop 201.59 us/op 150.97 us/op 1.34
effectiveBalanceIncrements clone Uint8Array 300000 95.674 us/op 219.68 us/op 0.44
effectiveBalanceIncrements clone MutableVector 300000 1.2450 us/op 625.00 ns/op 1.99
effectiveBalanceIncrements rw all Uint8Array 300000 303.07 us/op 246.12 us/op 1.23
effectiveBalanceIncrements rw all MutableVector 300000 228.05 ms/op 134.11 ms/op 1.70
phase0 afterProcessEpoch - 250000 vs - 7PWei 224.05 ms/op 196.26 ms/op 1.14
phase0 beforeProcessEpoch - 250000 vs - 7PWei 111.84 ms/op 57.544 ms/op 1.94
altair processEpoch - mainnet_e81889 677.02 ms/op 546.18 ms/op 1.24
mainnet_e81889 - altair beforeProcessEpoch 145.98 ms/op 127.04 ms/op 1.15
mainnet_e81889 - altair processJustificationAndFinalization 38.304 us/op 16.918 us/op 2.26
mainnet_e81889 - altair processInactivityUpdates 12.292 ms/op 9.1814 ms/op 1.34
mainnet_e81889 - altair processRewardsAndPenalties 108.43 ms/op 81.924 ms/op 1.32
mainnet_e81889 - altair processRegistryUpdates 7.7760 us/op 2.5090 us/op 3.10
mainnet_e81889 - altair processSlashings 2.0410 us/op 651.00 ns/op 3.14
mainnet_e81889 - altair processEth1DataReset 2.0650 us/op 645.00 ns/op 3.20
mainnet_e81889 - altair processEffectiveBalanceUpdates 2.8960 ms/op 1.9840 ms/op 1.46
mainnet_e81889 - altair processSlashingsReset 11.912 us/op 4.3240 us/op 2.75
mainnet_e81889 - altair processRandaoMixesReset 13.245 us/op 3.9100 us/op 3.39
mainnet_e81889 - altair processHistoricalRootsUpdate 2.0090 us/op 616.00 ns/op 3.26
mainnet_e81889 - altair processParticipationFlagUpdates 8.3190 us/op 2.3630 us/op 3.52
mainnet_e81889 - altair processSyncCommitteeUpdates 1.7050 us/op 541.00 ns/op 3.15
mainnet_e81889 - altair afterProcessEpoch 234.11 ms/op 196.56 ms/op 1.19
phase0 processEpoch - mainnet_e58758 619.28 ms/op 488.96 ms/op 1.27
mainnet_e58758 - phase0 beforeProcessEpoch 263.65 ms/op 182.33 ms/op 1.45
mainnet_e58758 - phase0 processJustificationAndFinalization 36.394 us/op 16.486 us/op 2.21
mainnet_e58758 - phase0 processRewardsAndPenalties 163.48 ms/op 120.43 ms/op 1.36
mainnet_e58758 - phase0 processRegistryUpdates 19.390 us/op 7.2720 us/op 2.67
mainnet_e58758 - phase0 processSlashings 1.9020 us/op 571.00 ns/op 3.33
mainnet_e58758 - phase0 processEth1DataReset 1.8810 us/op 553.00 ns/op 3.40
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 2.6520 ms/op 1.7589 ms/op 1.51
mainnet_e58758 - phase0 processSlashingsReset 11.575 us/op 3.5620 us/op 3.25
mainnet_e58758 - phase0 processRandaoMixesReset 14.504 us/op 4.0800 us/op 3.55
mainnet_e58758 - phase0 processHistoricalRootsUpdate 2.4070 us/op 606.00 ns/op 3.97
mainnet_e58758 - phase0 processParticipationRecordUpdates 13.796 us/op 3.5710 us/op 3.86
mainnet_e58758 - phase0 afterProcessEpoch 192.10 ms/op 161.78 ms/op 1.19
phase0 processEffectiveBalanceUpdates - 250000 normalcase 3.2858 ms/op 1.9877 ms/op 1.65
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 3.5734 ms/op 2.2981 ms/op 1.55
altair processInactivityUpdates - 250000 normalcase 46.676 ms/op 41.062 ms/op 1.14
altair processInactivityUpdates - 250000 worstcase 60.008 ms/op 33.578 ms/op 1.79
phase0 processRegistryUpdates - 250000 normalcase 16.055 us/op 5.9810 us/op 2.68
phase0 processRegistryUpdates - 250000 badcase_full_deposits 519.51 us/op 379.08 us/op 1.37
phase0 processRegistryUpdates - 250000 worstcase 0.5 248.96 ms/op 174.17 ms/op 1.43
altair processRewardsAndPenalties - 250000 normalcase 157.41 ms/op 78.753 ms/op 2.00
altair processRewardsAndPenalties - 250000 worstcase 98.208 ms/op 115.86 ms/op 0.85
phase0 getAttestationDeltas - 250000 normalcase 15.874 ms/op 11.484 ms/op 1.38
phase0 getAttestationDeltas - 250000 worstcase 15.881 ms/op 11.849 ms/op 1.34
phase0 processSlashings - 250000 worstcase 6.4563 ms/op 5.0643 ms/op 1.27
altair processSyncCommitteeUpdates - 250000 335.59 ms/op 294.00 ms/op 1.14
BeaconState.hashTreeRoot - No change 555.00 ns/op 518.00 ns/op 1.07
BeaconState.hashTreeRoot - 1 full validator 65.365 us/op 66.053 us/op 0.99
BeaconState.hashTreeRoot - 32 full validator 769.44 us/op 717.21 us/op 1.07
BeaconState.hashTreeRoot - 512 full validator 7.1562 ms/op 6.8966 ms/op 1.04
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 94.705 us/op 89.456 us/op 1.06
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.4954 ms/op 1.2759 ms/op 1.17
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 18.024 ms/op 17.157 ms/op 1.05
BeaconState.hashTreeRoot - 1 balances 73.520 us/op 63.256 us/op 1.16
BeaconState.hashTreeRoot - 32 balances 684.57 us/op 637.37 us/op 1.07
BeaconState.hashTreeRoot - 512 balances 6.6674 ms/op 6.3753 ms/op 1.05
BeaconState.hashTreeRoot - 250000 balances 101.83 ms/op 113.60 ms/op 0.90
aggregationBits - 2048 els - zipIndexesInBitList 41.348 us/op 26.491 us/op 1.56
regular array get 100000 times 80.969 us/op 60.555 us/op 1.34
wrappedArray get 100000 times 80.988 us/op 60.576 us/op 1.34
arrayWithProxy get 100000 times 34.757 ms/op 28.748 ms/op 1.21
ssz.Root.equals 612.00 ns/op 489.00 ns/op 1.25
byteArrayEquals 612.00 ns/op 485.00 ns/op 1.26
shuffle list - 16384 els 14.110 ms/op 11.464 ms/op 1.23
shuffle list - 250000 els 197.76 ms/op 166.32 ms/op 1.19
processSlot - 1 slots 14.177 us/op 13.826 us/op 1.03
processSlot - 32 slots 1.9847 ms/op 1.9508 ms/op 1.02
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 459.57 us/op 377.01 us/op 1.22
getCommitteeAssignments - req 1 vs - 250000 vc 6.4167 ms/op 5.3638 ms/op 1.20
getCommitteeAssignments - req 100 vs - 250000 vc 8.8009 ms/op 7.8369 ms/op 1.12
getCommitteeAssignments - req 1000 vs - 250000 vc 9.4071 ms/op 8.4067 ms/op 1.12
computeProposers - vc 250000 21.750 ms/op 18.618 ms/op 1.17
computeEpochShuffling - vc 250000 200.59 ms/op 169.63 ms/op 1.18
getNextSyncCommittee - vc 250000 321.48 ms/op 285.48 ms/op 1.13

by benchmarkbot/action

@codecov

codecov bot commented May 13, 2022

Codecov Report

Merging #3989 (ea38bf3) into unstable (d34bcfa) will not change coverage.
The diff coverage is n/a.

❗ Current head ea38bf3 differs from pull request most recent head d230415. Consider uploading reports for the commit d230415 to get more accurate results.


@dapplion dapplion marked this pull request as ready for review May 13, 2022 12:33
@dapplion dapplion requested a review from a team as a code owner May 13, 2022 12:33
@wemeetagain
Member

failing tests and merge conflicts

@dapplion dapplion changed the base branch from master to unstable May 27, 2022 04:33
@wemeetagain
Member

I know this is lower priority, but it would be nice to not let this rot

@philknows philknows added the scope-performance Performance issue and ideas to improve performance. label Jun 6, 2022
@dapplion
Contributor Author

dapplion commented Jun 9, 2022

  • TODO: Ensure that the new verify+import benchmark runs last, and/or that all resources are cleaned up, to prevent messing with other benchmarks

@philknows philknows added the status-blocked This is blocked by another issue that requires resolving first. label Jun 29, 2022
@dapplion dapplion marked this pull request as draft July 12, 2022 13:44
@dapplion
Contributor Author

Still needs a lot of work to be review-able; putting it as draft till then

@dapplion dapplion marked this pull request as ready for review July 18, 2022 15:16
@dapplion dapplion removed the status-blocked This is blocked by another issue that requires resolving first. label Jul 18, 2022
@@ -69,8 +69,15 @@ export class QueuedStateRegenerator implements IStateRegenerator {
     // Check the checkpoint cache (if the pre-state is a checkpoint state)
     if (parentEpoch < blockEpoch) {
       const checkpointState = this.checkpointStateCache.getLatest(parentRoot, blockEpoch);
-      if (checkpointState) {
+      if (checkpointState && computeEpochAtSlot(checkpointState.slot) === blockEpoch) {
         // TODO: Miss-use of checkpointStateCache here
Member

What does this comment mean?

Contributor Author

It doesn't conform to the method description: "state dialed to block.slot"

});
}

postStates[i] = postState;
Member

Do we need to keep all states?

Contributor Author

Yes, the current importBlock flow needs the state of every block. To only require the last one we would need to refactor importBlock

@dapplion
Contributor Author

dapplion commented Jul 18, 2022

I've restarted two contabos to sync Prater from genesis:

With unstable

Heavy use of a single worker. It does not compete for resources, so the time per signature is lower, doing 480 sigs/sec

Screenshot from 2022-07-18 18-38-18

Screenshot from 2022-07-18 18-45-07

With this branch

The node is using all four cores with all the workers; the cost per signature is higher, but it can do 700 sigs/sec

Screenshot from 2022-07-18 18-36-12

Screenshot from 2022-07-18 18-46-42

try {
const [{postStates, proposerBalanceDeltas}, , {executionStatuses, mergeBlockFound}] = await Promise.all([
// Run state transition only
// TODO: Ensure it yields to allow flushing to workers and engine API
Contributor

remove this TODO?

Contributor Author

This goal is not accomplished; it's just a hope. I haven't found a way to ensure from JS land that the data has been sent to the worker, which is the ultimate issue that caps throughput.
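
The ordering concern being discussed can be sketched like this. All names here are hypothetical simplifications, not the PR's actual functions: the idea is to kick off the worker and engine API jobs first, yield once to the event loop as a best effort to let their messages flush, and only then run the CPU-bound state transition:

```typescript
// Yield one macrotask so queued postMessage / HTTP writes get a chance to flush.
// Note: JS cannot *guarantee* the data reached the worker; this is best effort.
const yieldToEventLoop = (): Promise<void> => new Promise((resolve) => setTimeout(resolve, 0));

// Hypothetical shape of the segment-processing flow discussed in this thread.
async function verifyBlocksSketch(
  verifySignatures: () => Promise<boolean>, // posts signature jobs to BLS workers
  verifyExecutionPayloads: () => Promise<boolean>, // engine API round-trip
  runStateTransitions: () => void // synchronous: "blocks" the main thread
): Promise<boolean> {
  // Submit async work first so workers / engine start while we do CPU work
  const sigsPromise = verifySignatures();
  const execPromise = verifyExecutionPayloads();
  await yieldToEventLoop();
  runStateTransitions();
  const [sigsOk, execOk] = await Promise.all([sigsPromise, execPromise]);
  return sigsOk && execOk;
}
```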

assertValidTerminalPowBlock(chain.config, mergeBlock, {executionStatus, powBlock, powBlockParent});
if (mergeBlockFound !== null) {
// merge block found and is fully valid = state transition + signatures + execution payload.
// TODO: Will this banner be logged during syncing?
Contributor

It'd be nice to log the merge block, so remove this TODO too?

Contributor Author

I don't think it should be logged during sync, though I'd prefer to debate that in a separate issue. This TODO was more to note that this can happen

* - if all valid, await all and return
* - if one invalid, abort immediately and return index of invalid
*/
export function rejectFirstInvalidResolveAllValid(isValidPromises: Promise<boolean>[]): Promise<AllValidRes> {
Contributor

could you move this to utils module or any utils file? this could be reused

Contributor Author

The counter-argument is that if we stop using this function it will stay in utils forever, unused. It's very specific to this code; if it gets reused in the future, then we can move it to utils

Contributor

@twoeths twoeths left a comment

this looks great 👍

@dapplion dapplion merged commit 321eaff into unstable Jul 19, 2022
@dapplion dapplion deleted the dapplion/optimize-sync branch July 19, 2022 08:29