SCons: Default num_jobs to max CPUs minus 1 if not specified #63087
Conversation
Oh well, I had missed this note in the docs: https://docs.python.org/3/library/os.html#os.sched_getaffinity
Not available on macOS and Windows... so back to `os.cpu_count()`.
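For illustration, here is a minimal sketch (not the actual PR code, and the helper name is made up) of a portable query that prefers `os.sched_getaffinity` where the platform provides it and falls back to `os.cpu_count()` on macOS/Windows:

```python
import os

def usable_cpu_count() -> int:
    # os.sched_getaffinity is Linux-only; it reflects limits set by the
    # user or administrator (e.g. taskset/cpusets), so prefer it when present.
    if hasattr(os, "sched_getaffinity"):
        return len(os.sched_getaffinity(0))
    # macOS and Windows: fall back to the raw core count.
    return os.cpu_count() or 1
```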
Force-pushed from 66db73a to b23913d
Force-pushed from b23913d to fb857c2
FWIW, I agree with this change and it is what I expect out of the box.
If any Python gurus want to double check what's the absolute best portable way to query CPU count, feel free. I saw …
I think this could be a great proposal, though I'd limit the default to num_of_cores - 1 so it's not clogging the entire bandwidth.
Yeah, that's what I tend to do myself: I have 8 cores on the laptop but use 7 so it doesn't interfere too much with my always-on Firefox. Got a couple more echoes along this line. And one suggesting using half the cores, but that's pretty arbitrary and unexpected IMO; I prefer "all or nothing" defaults. I think for low-CPU situations I might stick to max CPUs, as that's where it makes the biggest difference (between e.g. 1 and 2 cores), and that matches the kind of config users have on server VMs like CI runners. So maybe up to 4 cores we use max, above that we use max - 1.
Force-pushed from fb857c2 to eed306f
Changed the title from "SCons: Default num_jobs to max available CPUs if not specified" to "SCons: Default num_jobs to max CPUs minus 1 if not specified"
This doesn't change the behavior when `--jobs`/`-j` is specified as a command-line argument or in `SCONSFLAGS`. The SCons hack used to know if `num_jobs` was set by the user is derived from the MongoDB setup. We use `os.cpu_count()` for portability (available since Python 3.4). With 4 CPUs or less, we use the max. With more than 4 we use max - 1 to preserve some bandwidth for the user's other programs.
Force-pushed from eed306f to ea21122
N-1 is a good default, even though I tend to go to N or N+1 sometimes. To me the biggest problem is not really the CPU usage but the RAM usage. Each job you add can take a decent amount of RAM, so if you have a high-core system with low memory available you will end up swapping, which is much worse than the slowness induced by the increased CPU usage. I don't know what the RAM usage is like while compiling Godot though, but that might be something to look at.
It's gonna be hard to estimate RAM usage; it depends a lot on the specific toolchain used and the build settings, especially for LTO-enabled builds (GCC LTO sometimes uses 10 times more RAM than Clang LTO).
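Purely as a sketch of the kind of heuristic being discussed here (not something this PR implements), one could also cap the job count by available memory. The per-job budget below is an arbitrary placeholder and the `sysconf` names are POSIX/Linux-specific:

```python
import os

def ram_limited_jobs(bytes_per_job: int = 2 * 1024**3) -> int:
    """Cap parallel jobs by available RAM; 2 GiB/job is a made-up budget."""
    cpu_jobs = os.cpu_count() or 1
    try:
        avail = os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
    except (AttributeError, ValueError, OSError):
        # No reliable memory info (e.g. on Windows): fall back to CPU count.
        return cpu_jobs
    return max(1, min(cpu_jobs, avail // bytes_per_job))
```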
Cherry-picked for 3.5.
This doesn't change the behavior when `--jobs`/`-j` is specified as a command-line argument or in `SCONSFLAGS`.

The SCons hack used to know if `num_jobs` was set by the user is derived from the MongoDB setup.

We use `os.sched_getaffinity` to respect potential limits set by the user or system administrator. We use `os.cpu_count()` for portability (available since Python 3.4). With 4 CPUs or less, we use the max. With more than 4 we use max - 1 to preserve some bandwidth for the user's other programs.
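Putting the description above together, here is a hedged sketch of how such a default can be applied in an SConstruct. It is illustrative, not the exact Godot code, and it relies on the documented SCons behavior that `SetOption()` does not override a value passed on the command line, which is what the MongoDB-derived check exploits:

```python
# SConstruct excerpt (sketch). GetOption/SetOption are provided by SCons.
import os

initial_num_jobs = GetOption("num_jobs")
altered_num_jobs = initial_num_jobs + 1
SetOption("num_jobs", altered_num_jobs)

if GetOption("num_jobs") == altered_num_jobs:
    # The bump took effect, so the user did not pass -j/--jobs (or set it
    # in SCONSFLAGS): pick a default based on the machine's CPU count.
    cpu_count = os.cpu_count() or 1
    # 4 CPUs or less: use them all (biggest win on small CI runners).
    # More than 4: leave one core free for the user's other programs.
    safer_cpu_count = cpu_count if cpu_count <= 4 else cpu_count - 1
    SetOption("num_jobs", safer_cpu_count)
```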
Edit: godot-cpp equivalent: godotengine/godot-cpp#788
Redo of #9972; 5 years later I've warmed up to the idea (CC @capnm).
It's up for discussion though. I think some modern buildsystems do this automatically now, so users are used to them just using all the available capacity. Notably, I saw this recent example of someone mistakenly building with one core. It's still possible to specify `-j1` when you want to compile sequentially (useful to check a build error without the noise of other jobs).

This change would enable us to simplify the compiling documentation (e.g. here) a bit, so we don't have to drag along system-specific instructions to get the number of CPU cores. And I noticed that GitHub CI runners for macOS actually have 3-core CPUs, so removing our `--jobs=2` should speed up CI significantly: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources

Edit: CI build logs on this PR confirm 3 CPUs for macOS and 2 CPUs for Windows/Linux runners.