
update batch sizes to minimize idle cores #295

Merged · 2 commits into kernc:master on Mar 31, 2021
Conversation

@binarymason (Contributor) commented on Mar 31, 2021

Previously, batch sizes during optimization could leave quite a few idle cores. The changes in this PR ensure better CPU utilization.

Fixes #293

Before

Here are a few examples of what batch sizes looked like before this change (note the number of idle cores):

| length | cpu_count | minimum | before_clip | after_clip | batches | idle |
|-------:|----------:|--------:|------------:|-----------:|--------:|-----:|
|   4000 |        38 |       5 |         106 |        106 |      38 |    0 |
|   4000 |        16 |       5 |         250 |        250 |      16 |    0 |
|   1000 |        38 |       5 |          27 |         27 |      38 |    0 |
|     60 |        38 |       5 |           2 |          5 |      12 |   26 |
|     60 |        16 |       5 |           4 |          5 |      12 |    4 |

After

Same test dataset, but with proposed changes (minimizing idle cores):

| length | cpu_count | minimum | before_clip | after_clip | batches | idle |
|-------:|----------:|--------:|------------:|-----------:|--------:|-----:|
|   4000 |        38 |       1 |         106 |        106 |      38 |    0 |
|   4000 |        16 |       1 |         250 |        250 |      16 |    0 |
|   1000 |        38 |       1 |          27 |         27 |      38 |    0 |
|     60 |        38 |       1 |           2 |          2 |      30 |    8 |
|     60 |        16 |       1 |           4 |          4 |      15 |    1 |
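For context, here is a minimal sketch of how the table columns relate to each other. This is a reconstruction for illustration only, not the actual `backtesting.py` code; the `batch_stats` helper and the upper clip of 300 are assumptions.

```python
import math

def batch_stats(length: int, cpu_count: int, minimum: int, maximum: int = 300):
    """Hypothetical reconstruction of the columns shown in the tables above."""
    before_clip = math.ceil(length / cpu_count)           # naive per-core share of runs
    after_clip = min(max(before_clip, minimum), maximum)  # clip batch size to [minimum, maximum]
    batches = math.ceil(length / after_clip)               # number of batches submitted to the pool
    idle = -batches % cpu_count                            # cores with no batch in the last "wave"
    return before_clip, after_clip, batches, idle

# 60 runs on a 38-core machine:
print(batch_stats(60, 38, minimum=5))  # before this PR -> (2, 5, 12, 26)
print(batch_stats(60, 38, minimum=1))  # after this PR  -> (2, 2, 30, 8)
```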

@kernc (Owner) commented on Mar 31, 2021

I agree: spawning extra processes, particularly by forking, does not add enough overhead to justify maintaining a minimum of 5 jobs per core, when single runs can take a long time by themselves.

Thanks for finding this!

@kernc merged commit fd61d49 into kernc:master on Mar 31, 2021
kernc pushed a commit referencing this pull request on Mar 31, 2021: "update batch sizes to minimize idle cores" (refs #293)

Benouare pushed a commit to Benouare/backtesting.py referencing this pull request on Jun 21, 2021: "update batch sizes to minimize idle cores" (refs kernc#293)
2 participants