Containers started using socket-activated APIv2 die from systemd activation timeout #7294
Comments
@baude @jwhonce @vrothberg PTAL. I don't think we've seen this elsewhere in our testing, which makes me think this is Debian-specific. Does this happen as rootless, root, or both? Suspicion:
@mheon I have not seen this on Fedora or RHEL using systemd.
I'd believe that this is Debian-specific. I have not had a chance to try rootless yet; I'll see if I can get a repro. I just tried adding

Separately, I noticed that systemd is not too happy about the podman shutdown in general. Here's several states the unit goes through:

While the unix socket is active and in use
Just after the timeout, after the containers have already died; the unit is 'deactivating' for some time
Minutes later: the unit is failed

On any further request the unit recovers back to active and services the request. Not sure how relevant that is.. in fact I see

So, possibly some missed config in the Debian packaging? I've repro'd using packages from both kubic and debian-testing.
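For reference, the unit states described above can be checked with standard systemd tooling (the exact commands used in this comment are not preserved); something like:

systemctl status podman.service podman.socket   # shows the active / deactivating / failed states described above
journalctl -u podman.service -n 50 --no-pager   # recent service log, including systemd's complaints at shutdown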
Can you try
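The exact suggestion was lost above; judging from the rest of the thread (the KillMode discussion and the eventual fix), it was presumably a KillMode=process override. A minimal sketch of testing that with a drop-in on the system-level unit, assuming that is what was meant:

# Hypothetical drop-in; KillMode=process is inferred from later comments, not quoted from this one.
sudo mkdir -p /etc/systemd/system/podman.service.d
sudo tee /etc/systemd/system/podman.service.d/override.conf <<'EOF'
[Service]
KillMode=process
EOF
sudo systemctl daemon-reload
sudo systemctl restart podman.socket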
Ah crap, I patched the user unit instead of the system unit. Adding

Seems like nothing unintentional leaked?
systemd noted what leaked, too:
Do we have an explanation for why this would be Debian-specific? I do not see KillMode in podman.service.
This was quite a recent change - @vrothberg PTAL, I think your KillMode change may have broken things.
Without this flag, systemd will tear down our containers when stopping the podman API server due to inactivity. Upstream ticket: containers/podman#7294
We changed to using the default killmode (i.e., cgroups) because of #7021 (Cc @martinpitt). FWIW, I can reproduce on Fedora as well. The problem I think we're having is that the conmons and containers are in a sub-cgroup and are hence killed when the service stops. @mheon @baude, I guess that's something we need to fix before 2.0.5. I suspect that #7021 happened in the user service because of Podman re-execing to join the user namespace? We don't really have a mechanism yet for Podman to either write its PID or send it via sd-notify. What I think we should do, now that we have nice sd-notify support, is to check in podman system service whether we're running in a systemd service and then send READY and PID. @giuseppe WDYT?
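A rough sketch of the READY/PID idea in shell terms (podman itself is Go, so this only illustrates the notify protocol, not the actual implementation): when NOTIFY_SOCKET is set, announce readiness and the main PID, then clear the variable so conmon and the OCI runtime cannot send their own messages.

# Illustrative only; systemd-notify is the stock CLI wrapper around sd_notify().
if [ -n "$NOTIFY_SOCKET" ]; then
    systemd-notify --ready --pid=$$   # sends READY=1 plus MAINPID for a Type=notify unit
    unset NOTIFY_SOCKET               # keep child runtimes from "jumping in between"
fi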
If we specify

IMO we should use
Implementing sd-notify for
Let's attempt the sd-notify.
Commit 2b6dd3f set the killmode of the podman.service to the systemd default, which ultimately led to the problem that systemd will kill *all* processes inside the unit's cgroup and hence kill all containers whenever the service is stopped. Fix it by setting the type to sdnotify and the killmode to process. `podman system service` will send the necessary notify messages when the NOTIFY_SOCKET is set and unset it right after to prevent the backend and container runtimes from jumping in between and sending messages as well. Fixes: containers#7294 Signed-off-by: Valentin Rothberg <[email protected]>
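After updating to a build containing this commit, the two settings it describes should be visible in the unit; note that the commit's "sdnotify" type corresponds to Type=notify in systemd unit syntax:

# Assumes a packaged unit that includes the fix; expect Type=notify and KillMode=process in the output
systemctl cat podman.service | grep -E '^(Type|KillMode)='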
The killing of

# (podman-remote run -d --name foo busybox sleep 60;sleep 11;echo here we go;podman-remote ps -a)|ts
Aug 13 13:16:33 01da3fa8a9e8e7ce0ac470f8ff5856676d4a6906837eb87b8b55d8118b48b342
Aug 13 13:16:44 here we go
Aug 13 13:17:33 CONTAINER ID  IMAGE                             COMMAND   CREATED             STATUS                             PORTS  NAMES
Aug 13 13:17:33 01da3fa8a9e8  docker.io/library/busybox:latest  sleep 60  About a minute ago  Exited (0) Less than a second ago         foo

Translation: the podman-remote ps -a issued just after 13:16:44 did not return until 13:17:33, i.e. it hung for roughly 50 seconds, by which point the sleep 60 container had already exited.
As best I can tell, #7312 solves this behavior, but I'm still looking into it further. I am reporting this because others may experience this different symptom and search for "podman remote ps hang".
more about flaky cache:
- containers/podman#7021
- containers/podman#7294
using mv does not work across boots, running rm -rf seems too adventurous
/kind bug
Description
When creating and starting containers specifically via the /run/podman/podman.sock endpoint, the containers are stopped when podman.service gets shut down due to its socket-activation timeout. This manifests as containers stopping 10s after the last APIv2 request is made. The shutdown is clean; podman ps -a shows exit code 0 for them. Opening an APIv2 events stream, or issuing APIv2 requests every ~5 seconds, keeps the containers alive longer, but once those requests stop, the containers stop as well.
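For context, the packaged units and the server's idle timeout can be inspected directly (a generic check, not something from the original report; the timeout comes from podman system service, via its --time option or its built-in default when the unit passes none):

systemctl cat podman.socket podman.service   # inspect the packaged socket unit and the service's ExecStart line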
Steps to reproduce the issue:
Minimal case assuming you have podman's systemd units installed already (a verification sketch follows these steps):
Check you have the socket unit ready:
systemctl enable --now podman.socket
Create a sleep container:
curl --unix-socket /run/podman/podman.sock -XPOST http://d/v1.0.0/libpod/images/pull'?reference=k8s.gcr.io/pause:3.2'
curl --unix-socket /run/podman/podman.sock -XPOST http://d/v1.0.0/libpod/containers/create --header 'content-type: application/json' --data '{"image":"k8s.gcr.io/pause:3.2","name":"bg-test"}'
Start watching events:
podman events &
Start the container:
curl --unix-socket /run/podman/podman.sock -XPOST http://d/v1.0.0/libpod/containers/bg-test/start
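One way to observe the reported behavior (a generic check, not a step from the original report): wait past the idle timeout and look at the container state.

sleep 15        # longer than the reported ~10s idle timeout
podman ps -a    # reported symptom: bg-test shows as Exited (0) shortly after the timeout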
Describe the results you received:
Describe the results you expected:
Alternative repro within docker:
I also constructed a self-contained repro within Docker. This isn't too minimal, but it should show the problem on any machine that has Docker available (I'm not sure how relevant the Linux distro is in this issue). You can just copy the whole block and it should show the container dying during the final sleep call.

I also tried to repro on the fedora:32 image, but Fedora's podman wouldn't run within Docker; it said something about overlay-on-overlay. I didn't debug further.

Additional information you deem important (e.g. issue happens only occasionally):
Pod infra containers don't seem affected when the /pods/.../start API is used.

I wasn't able to repro when podman system service is run manually from an interactive session. I've only seen this happen specifically when the podman daemon's lifecycle is managed by systemd.

Output of podman version:

Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):

Additional environment details (AWS, VirtualBox, physical, etc.):