Systemd-managed pods that don't kill all containers when one dies #14546
@vrothberg PTAL
Thanks for reaching out, @nivekuil. Can you share a reproducer? From the systemd.unit man page:
That sounds like sane behavior to me. If one of the container units fails to start, I don't want the pod unit to be marked as active; it should reflect the error. Your comment suggests that the pod unit fails if a container unit starts successfully but then fails at some later point. Do I read that correctly?
yes
This actually doesn't reflect how Podman works without systemd. For example, if I do
and then
I think the point of having pods in systemd is to get this kind of (more robust) behavior. If a container that is part of a larger service fails, the service should be restarted IMHO. To match Podman's behavior you could try with
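For reference, a sketch of the kind of per-container restart policy being suggested; the unit name and values below are hypothetical, not actual `podman generate systemd` output:

```ini
# container-a.service (hypothetical name), [Service] section:
# restart the failed container itself rather than relying on the pod
# unit's dependency handling to recover it.
[Service]
Restart=on-failure
RestartSec=5
```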
No, the point is to have robust services such that container uptime is as high as possible.
A friendly reminder that this issue had no activity for 30 days.
@vrothberg any new thoughts on this?
I agree with @nivekuil, having the pod-X unit
@vrothberg any update on this?
We're also running into this problem. It is quite unexpected.
Thanks for the ping(s). I'll take a stab at it.
Change the dependencies from a pod unit to its associated container units from `Requires` to `Wants` to prevent the entire pod from transitioning to a failed state. Restart policies for individual containers can be configured separately. Also make sure that the pod's RunRoot is always set. Fixes: containers#14546 Signed-off-by: Valentin Rothberg <[email protected]>
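Illustratively, the dependency change described in that commit amounts to the following in a generated pod unit (the container unit names here are examples, not real output):

```ini
# Before: a container unit entering a failed state fails the pod unit,
# which in turn stops the sibling containers.
Requires=container-a.service container-b.service

# After: container failures no longer propagate to the pod unit;
# recovery is left to each container's own Restart= policy.
Wants=container-a.service container-b.service
```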
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind feature
Description
Currently `podman generate systemd` for a pod includes a `Requires=` for each container service. This means when one container dies, the entire pod service is considered failed and will kill every other container in the pod. Not sure if this is the intended behavior, but it is not ideal for robustness. An easy workaround is to change this to `Wants=`. I also wonder if this functionality should be done in the reverse direction with `RequiredBy=` or `WantedBy=` instead, which matches the semantics of `podman create --pod ...`
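The easy workaround mentioned above can be applied by hand to a generated unit file. A minimal sketch, assuming a generated pod unit named `pod-mypod.service`; the file contents and container unit names are illustrative, not actual `podman generate systemd` output:

```shell
# Illustrative pod unit with the hard Requires= dependency.
cat > pod-mypod.service <<'EOF'
[Unit]
Description=Podman pod-mypod.service
Requires=container-a.service container-b.service
Before=container-a.service container-b.service
EOF

# Downgrade the hard dependency: with Wants=, a failing container unit no
# longer fails the pod unit (and thus no longer stops its siblings).
sed -i 's/^Requires=/Wants=/' pod-mypod.service

grep '^Wants=' pod-mypod.service
```

After editing a unit file this way, `systemctl daemon-reload` is needed for systemd to pick up the change.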