
[aggregator] Move shardID calculation out of lock #3179

Merged: 2 commits merged into master from v/agglocking on Feb 6, 2021

Conversation

@vdarulis (Collaborator) commented Feb 5, 2021

What this PR does / why we need it:

[Screenshot attached: Screen Shot 2021-02-05 at 4 17 48 PM]

Placement/shard lookup is fairly expensive for every incoming metric and already involves a lot of locking, yet there is no need to run a hash over the metric ID while holding a read lock.
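A minimal Go sketch of the idea (type and function names here are illustrative, not the actual m3aggregator code): the metric-ID hash has no shared state, so it can be computed before the read lock is taken, leaving only the placement lookup inside the critical section.

```go
package placement

import (
	"hash/fnv"
	"sync"
)

// shardFn maps a metric ID to a shard index. It is pure CPU work with no
// shared state, so nothing requires it to run under a lock.
// (Assumes numShards > 0; the real shard function may differ.)
func shardFn(id []byte, numShards uint32) uint32 {
	h := fnv.New32a()
	h.Write(id)
	return h.Sum32() % numShards
}

// shardLookup is a hypothetical stand-in for the aggregator's placement state.
type shardLookup struct {
	numShards uint32 // fixed at construction; safe to read without the lock

	mu    sync.RWMutex
	owned map[uint32]struct{} // shards owned by this instance; guarded by mu
}

// Before: hashing the metric ID inside the read lock keeps the lock held
// for the duration of the hash on every incoming metric.
func (s *shardLookup) shardForLocked(id []byte) (uint32, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	shard := shardFn(id, s.numShards) // CPU work done while holding the lock
	_, ok := s.owned[shard]
	return shard, ok
}

// After: compute the shard ID first, then take the read lock only for the
// cheap ownership lookup against the current placement.
func (s *shardLookup) shardFor(id []byte) (uint32, bool) {
	shard := shardFn(id, s.numShards) // hash outside the critical section
	s.mu.RLock()
	_, ok := s.owned[shard]
	s.mu.RUnlock()
	return shard, ok
}
```

Under contention, the second form shortens the read-side hold time, so writers (placement updates) and other readers block for less time per metric.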

Special notes for your reviewer:

Does this PR introduce a user-facing and/or backwards incompatible change?:


Does this PR require updating code package or user-facing documentation?:


codecov bot commented Feb 5, 2021

Codecov Report

Merging #3179 (f83cd61) into master (f83cd61) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #3179   +/-   ##
=======================================
  Coverage    72.3%    72.3%           
=======================================
  Files        1086     1086           
  Lines      100673   100673           
=======================================
  Hits        72822    72822           
  Misses      22800    22800           
  Partials     5051     5051           
Flag         Coverage Δ
aggregator   75.7% <0.0%> (ø)
cluster      85.0% <0.0%> (ø)
collector    84.3% <0.0%> (ø)
dbnode       78.7% <0.0%> (ø)
m3em         74.4% <0.0%> (ø)
m3ninx       73.1% <0.0%> (ø)
metrics      20.0% <0.0%> (ø)
msg          74.2% <0.0%> (ø)
query        67.3% <0.0%> (ø)
x            80.4% <0.0%> (ø)

Flags with carried forward coverage won't be shown.


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update f83cd61...f9393aa.

@vdarulis vdarulis changed the title [aggregator] Move shardID calculation out of critical section [aggregator] Move shardID calculation out of lock Feb 6, 2021
@vdarulis vdarulis merged commit f6450c1 into master Feb 6, 2021
@vdarulis vdarulis deleted the v/agglocking branch February 6, 2021 21:48
soundvibe added a commit that referenced this pull request Feb 10, 2021
* master: (30 commits)
  [dbnode] Use go context to cancel index query workers after timeout (#3194)
  [aggregator] Fix change ActivePlacement semantics on close (#3201)
  [aggregator] Simplify (Active)StagedPlacement API (#3199)
  [aggregator] Checking if metadata is set to default should not cause copying (#3198)
  [dbnode] Remove readers and writer from aggregator API (#3122)
  [aggregator] Avoid large copies in entry rollup comparisons by making them more inline-friendly (#3195)
  [dbnode] Re-add aggregator doc limit update (#3137)
  [m3db] Do not close reader in filterFieldsIterator.Close() (#3196)
  Revert "Remove disk series read limit (#3174)" (#3193)
  [instrument] Improve sampled timer and stopwatch performance (#3191)
  Omit unset fields in metadata json (#3189)
  [dbnode] Remove left-over code in storage/bootstrap/bootstrapper (#3190)
  [dbnode][coordinator] Support match[] in label endpoints (#3180)
  Instrument the worker pool with the wait time (#3188)
  Instrument query path (#3182)
  [aggregator] Remove indirection, large copy from unaggregated protobuf decoder (#3186)
  [aggregator] Sample timers completely (#3184)
  [aggregator] Reduce error handling overhead in rawtcp server (#3183)
  [aggregator] Move shardID calculation out of critical section (#3179)
  Move instrumentation cleanup to FetchTaggedResultIterator Close() (#3173)
  ...