Run kpack-image-builder, job-task-runner and statefulset-runner as standalone Deployments #3522
Conversation
Overall, looks good, minor comments.
I played a bit with the change:
- All tests pass
- Manual testing looks good
- Upgrade from 0.13 seems seamless; Helm takes care of the new deployments nicely
- Kind installer works
I was trying to reason about resource requirements and how they are affected by splitting controllers into multiple deployments:
Monolith controllers deployment:
- Requests: cpu: 50m, memory: 100Mi
- Limits: cpu: 1000m, memory: 1Gi

Multiple controllers deployments:

Controllers:
- Requests: cpu: 50m, memory: 100Mi
- Limits: cpu: 1000m, memory: 1Gi

Statefulset runner:
- Requests: cpu: 50m, memory: 100Mi
- Limits: cpu: 1000m, memory: 1Gi

Kpack builder:
- Requests: cpu: 50m, memory: 100Mi
- Limits: cpu: 1000m, memory: 1Gi

Task runner:
- Requests: cpu: 50m, memory: 100Mi
- Limits: cpu: 1000m, memory: 1Gi

Total:
- Requests: cpu: 200m, memory: 400Mi
- Limits: cpu: 4000m, memory: 4Gi
What I see is that with the default request/limit values, the multiple-deployment model requires four times more resources in comparison with the monolith deployment model. I wonder whether we could come up with reasonable defaults that total up to similar numbers.
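One rough sketch of what such defaults could look like, expressed as hypothetical Helm values: the per-component keys and the `resources` layout below are assumptions about the chart rather than its confirmed schema, and the numbers simply split the former monolith limits across the four deployments.

```yaml
# Hypothetical values.yaml sketch: divide the old monolith limits (1000m / 1Gi)
# across the four deployments so the limit totals stay where they were.
# Key names (controllers, kpackImageBuilder, jobTaskRunner, statefulsetRunner)
# and the resources layout are assumptions, not the chart's actual schema.
controllers:
  resources:
    requests: { cpu: 50m, memory: 100Mi }
    limits: { cpu: 250m, memory: 256Mi }
kpackImageBuilder:
  resources:
    requests: { cpu: 50m, memory: 100Mi }
    limits: { cpu: 250m, memory: 256Mi }
jobTaskRunner:
  resources:
    requests: { cpu: 50m, memory: 100Mi }
    limits: { cpu: 250m, memory: 256Mi }
statefulsetRunner:
  resources:
    requests: { cpu: 50m, memory: 100Mi }
    limits: { cpu: 250m, memory: 256Mi }
# Totals: requests 200m / 400Mi, limits 1000m / 1Gi. Limits match the previous
# single deployment; requests still add up to 4x and would need to be lowered
# per component as well if that matters.
```

Whether limits of roughly 250m / 256Mi are actually enough headroom for each controller would of course need to be verified under load before changing the defaults.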
@pbusko any thoughts on the comment by @danail-branekov above regarding limits and requests?
@@ -12,8 +12,6 @@ RUN --mount=type=cache,target=/go/pkg/mod \
COPY model model
COPY controllers controllers
COPY kpack-image-builder kpack-image-builder
COPY job-task-runner job-task-runner
COPY statefulset-runner statefulset-runner
If you are splitting the statefulset-runner into its own deployment with its own Docker image, this line should go. Currently, if it is removed, there are compilation errors because controllers refer to a constant defined by statefulset-runner. However, this constant is a duplicate, so we can simply refer to the one in the korifi types instead. With this change we should be able to get rid of statefulset-runner in the controllers Dockerfile.
@@ -12,8 +12,6 @@ RUN --mount=type=cache,target=/go/pkg/mod \
COPY model model
COPY controllers controllers
COPY kpack-image-builder kpack-image-builder
COPY job-task-runner job-task-runner
COPY statefulset-runner statefulset-runner
This line should go, see above comment for more details.
Is there a related GitHub Issue?
Fixes #3519
Fixes #3495
What is this change about?
Implementation for #3519 (and a fix for #3495)
Does this PR introduce a breaking change?
no
Acceptance Steps
Tag your pair, your PM, and/or team