podman generate kube results in broken ports configuration #3408
To be certain, it appears this is rootless usage? Can you confirm?
Yes, rootless.
slirp will try to bind the specified ports on the host. Could you please verify that ports 5432 and 9187 on the host are free and not already in use?
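A quick way to verify this (a minimal sketch, not part of the original thread — it simply tries to bind each port and reports the result):

```python
import socket

def port_is_free(port: int) -> bool:
    """Return True if a TCP listener can currently bind this port on all interfaces."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("", port))
            return True
        except OSError:
            return False

for port in (5432, 9187):
    print(port, "free" if port_is_free(port) else "in use")
```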
They are free and available. Please note that the issue is not about binding these ports, but about the configuration from Podman, which configures both ports for both containers. If I leave only the relevant port for each container, then it works just fine.
Thanks for the clarification. @baude shouldn't we create a single network namespace for the infra container and let the other containers join it?
I think that we're looking up the ports for each container individually during …
@haircommander Is this something you changed?
I got this one.
This likely broke when we made containers able to detect that they shared a network namespace and grab ports from the dependency container - prior to that, we could grab ports without concern for conflict, only the infra container had them. Now, all containers in a pod will return the same ports, so we have to work around this. Fixes containers#3408 Signed-off-by: Matthew Heon <[email protected]>
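The workaround described in the commit above can be illustrated with a small sketch (function and variable names are hypothetical, not from the actual Podman code, which is in Go): once every container in the pod reports the full shared port list, the generator has to keep each published port only once.

```python
def dedupe_pod_ports(containers):
    """Keep each published host port only on the first container that reports it.

    containers: list of (container_name, [host_ports]) pairs. Because every
    container in the pod now reports the same shared port list, duplicates
    must be dropped before emitting the kube YAML.
    Hypothetical helper; the real fix lives in Podman's Go code.
    """
    seen = set()
    deduped = {}
    for name, ports in containers:
        deduped[name] = [p for p in ports if p not in seen]
        seen.update(deduped[name])
    return deduped

# Both containers report the pod's full shared port list:
shared = [("postgresql", [5432, 9187]), ("postgres-exporter", [5432, 9187])]
print(dedupe_pod_ports(shared))
# -> {'postgresql': [5432, 9187], 'postgres-exporter': []}
```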
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
YAML generated by podman generate kube contains a broken ports configuration. There is also a bonus bug found while removing the pod.

Steps to reproduce the issue:
The last command throws this error:
But it's probably okay, because the pod is removed and all is good. Let's continue on to the primary bug of this issue:
This throws:
Describe the results you received:
A faulty postgresql.yaml that is not usable due to an incorrect port configuration. Somehow the command resulted in this port config for both containers:
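The actual generated snippet is missing from this capture; a reconstruction from the description in this thread (container names assumed) would look roughly like this — both containers list both published ports, so podman play kube tries to bind 5432 and 9187 twice:

```yaml
# Hedged reconstruction of the broken generated config, not the original file
spec:
  containers:
  - name: postgresql
    ports:
    - containerPort: 5432
      hostPort: 5432
    - containerPort: 9187
      hostPort: 9187
  - name: postgres-exporter
    ports:
    - containerPort: 5432
      hostPort: 5432
    - containerPort: 9187
      hostPort: 9187
```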
If I leave 5432 for PostgreSQL and 9187 for the exporter, then podman play kube works. But then there is another issue: for postgres_exporter, Podman generated the following config:

Notice the command part: this is wrong, because the image itself has /postgres_exporter as an entrypoint. So when I run podman play kube, the container tries to run /postgres_exporter /postgres_exporter and fails to start. If I remove the generated command, then it starts just fine. And if I run podman pod rm postgresql -f on the pod started with podman play kube with a fixed postgresql.yaml, then I don't hit the bonus bug described earlier.

Describe the results you expected:
Correct YAML that can be used straight away.
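For reference, a minimal sketch (container names assumed from the thread, not taken from the original YAML) of what a usable version would look like: each container publishes only its own port, and there is no command field overriding the image entrypoint.

```yaml
# Hedged sketch of a working layout, not the actual generated file
spec:
  containers:
  - name: postgresql
    ports:
    - containerPort: 5432
      hostPort: 5432
  - name: postgres-exporter
    ports:
    - containerPort: 9187
      hostPort: 9187
    # no command here - the image entrypoint /postgres_exporter is enough
```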
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info --debug:

Additional environment details (AWS, VirtualBox, physical, etc.):
Just my personal laptop with Fedora 30.