add: limit while scheduling promises #307

Merged: 2 commits into resonatehq:main on May 6, 2024

Conversation

aneessh18
Contributor

This PR addresses the concerns raised in issue #169.

More information: the kernel currently processes schedules from the database every 10 ms and creates promises for execution. It builds the list of promises to schedule by running the READ_ALL_SCHEDULES query against the respective database. Under high load this query can degrade performance, because there may be a huge number of promises due to be scheduled at once. This PR instead limits the number of schedules read per cycle, so only a bounded number of promises are scheduled at a time and the system is not overwhelmed.
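
To make the change concrete, here is a minimal sketch of the idea (not the actual resonate code; the table columns, function signature, and limit plumbing are assumptions): the due-schedules query is bounded by a LIMIT so a single 10 ms cycle can never pull an unbounded number of rows from the database.

package store

import (
	"database/sql"
	"time"
)

// Hypothetical bounded variant of the READ_ALL_SCHEDULES query: same idea,
// but capped by a LIMIT parameter.
const readSchedulesSQL = `
	SELECT id, cron, next_run_time
	FROM schedules
	WHERE next_run_time <= $1
	ORDER BY next_run_time
	LIMIT $2`

type Schedule struct {
	Id          string
	Cron        string
	NextRunTime int64
}

// readSchedules returns at most limit schedules that are due, so the kernel
// only schedules a bounded number of promises per cycle.
func readSchedules(db *sql.DB, now time.Time, limit int) ([]*Schedule, error) {
	rows, err := db.Query(readSchedulesSQL, now.UnixMilli(), limit)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	schedules := []*Schedule{}
	for rows.Next() {
		s := &Schedule{}
		if err := rows.Scan(&s.Id, &s.Cron, &s.NextRunTime); err != nil {
			return nil, err
		}
		schedules = append(schedules, s)
	}
	return schedules, rows.Err()
}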

Testing

  1. I ran a local build with log statements recording the number of results returned by the readSchedules method, and verified that the number of scheduled promises is capped at the limit.
  2. Unit tests

codecov bot commented May 4, 2024

Codecov Report

Attention: Patch coverage is 40.00000% with 3 lines in your changes missing coverage. Please review.

Project coverage is 57.79%. Comparing base (c73a3b4) to head (6f043d2).
Report is 1 commit behind head on main.

Files                                                 | Patch % | Lines
internal/kernel/system/system.go                      | 0.00%   | 2 Missing ⚠️
...rnal/app/subsystems/aio/store/postgres/postgres.go | 0.00%   | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #307      +/-   ##
==========================================
- Coverage   58.46%   57.79%   -0.68%     
==========================================
  Files         113      113              
  Lines        9776     9778       +2     
==========================================
- Hits         5716     5651      -65     
- Misses       3693     3766      +73     
+ Partials      367      361       -6     


Member

@dfarr dfarr left a comment


Thank you so much again @aneessh18!!!

Do you mind adding the schedule batch size configuration to our deterministic simulation testing (DST) flags? We would need to add it here; the range flag generates a random number for the config value between the min and the max. It would also be useful to print out this config value here, which helps compare two runs of our DST and ensure that the configuration value is the same.

You can try the DST locally!

go run ./... dst run
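
A rough sketch of the range-flag idea described above (the type and field names are illustrative assumptions, not the actual resonate DST code): the flag declares a [min, max) range, the DST harness resolves it to a random value using the seeded RNG, and the chosen value is printed so that two runs with the same seed can be compared.

package main

import (
	"fmt"
	"math/rand"
)

// rangeIntFlag models a DST flag given as a range rather than a fixed value.
type rangeIntFlag struct {
	Min, Max int
}

// resolve picks a value in [Min, Max) from the seeded RNG, so the result is
// deterministic for a given seed.
func (f rangeIntFlag) resolve(r *rand.Rand) int {
	return f.Min + r.Intn(f.Max-f.Min)
}

func main() {
	// The DST seeds its RNG, keeping every run reproducible.
	r := rand.New(rand.NewSource(0))

	scheduleBatchSize := rangeIntFlag{Min: 1, Max: 1000}.resolve(r)

	// Printing the resolved config value makes it easy to confirm that two
	// DST runs used the same configuration.
	fmt.Printf("scheduleBatchSize=%d\n", scheduleBatchSize)
}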

cmd/serve/serve.go (review comment resolved, outdated)
cmd/serve/serve.go (review comment resolved, outdated)
internal/kernel/system/system.go (review comment resolved, outdated)
@aneessh18
Contributor Author


Thanks for reviewing this PR. Added the flags in the DST.

Member

@dfarr dfarr left a comment


🎉

@dfarr dfarr merged commit 465563a into resonatehq:main May 6, 2024
3 of 5 checks passed