[core] Limit `typescript:ci` step memory limit #8796
Conversation
Netlify deploy preview: https://deploy-preview-8796--material-ui-x.netlify.app/
Updated pages: no updates.
Thank you!
Would it make sense, in the future, to run `git diff` to determine which packages a PR modifies and run TypeScript only on those packages?
Hm, an interesting idea. I think it would definitely make sense to optimize this case, but I don't think a diff alone would be enough, because of the relationships between packages. Identifying when a certain package needs to be rechecked might not be doable with simple static analysis.
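The diff-based idea could look roughly like this (a sketch only; the branch names and package paths are assumptions, and, as noted above, it would miss packages that merely depend on the changed ones):

```shell
# Hypothetical sketch: map files changed by a PR to top-level package
# directories. In CI the file list would come from something like
#   git diff --name-only origin/master...HEAD
# Here it is inlined so the pipeline is easy to follow.
changed_files='packages/grid/x-data-grid/src/row.ts
packages/x-date-pickers/src/picker.ts
packages/grid/x-data-grid/src/cell.ts'

# Keep the first two path components and de-duplicate.
printf '%s\n' "$changed_files" | cut -d/ -f1-2 | sort -u
# Note: this lists only directly modified packages; downstream dependents
# would still need a dependency-graph walk to be type-checked too.
```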
A continuation of #6850. This is an effort to reduce the flakiness of the `typescript:ci` step, which sometimes errors out with exit code 137, signifying an out-of-memory kill. It might not help in the long term.
Reducing concurrency to the number of CPU cores (2) seems to help the most with memory usage, as it avoids the extra work the OS scheduler needs to do to run 4 concurrent processes, but it increases the pipeline runtime. A middle ground between the two is 3 concurrent processes, which produces both a decent memory usage graph and essentially the same pipeline runtime as concurrency `4`.

Bumping the `node` container to `medium+` (3 CPU cores and 6GB of RAM) might help a bit with the runtime performance. The underlying problem is that the Docker container runs out of memory because `tsc` consumes quite a lot of resources, and garbage collection does kick in.

But for now, we can try sticking with a cheaper container and see whether the reduced node process heap size and the further reduced concurrency work for us. 🙏
Set the maximum allowed node process memory limit to 3GB and `concurrency` to `3` instead of `4`.

P.S. Simply limiting the allowed memory for a node process is not the final remedy, because other OS processes, and especially GC, also consume quite a lot of memory.
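Concretely, the change amounts to something like the following CI step (a sketch only; the script and task names are assumptions, not necessarily the repository's actual ones):

```shell
# Sketch of the adjusted typescript:ci step.
# --max-old-space-size=3072 caps the V8 old-generation heap at ~3GB per
# node process; --concurrency 3 limits how many tsc processes lerna
# spawns at once (down from 4).
NODE_OPTIONS="--max-old-space-size=3072" \
  lerna run --concurrency 3 typescript
```

Note that `--max-old-space-size` limits only the JS heap, so the resident memory of each process (native buffers, GC bookkeeping, the OS itself) will still sit somewhat above that figure, which is the caveat in the P.S. above.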