
Starting a profile using the podman driver fails with "volume already exists" using a random profile name #16755

Closed
nirs opened this issue Jun 22, 2023 · 21 comments
Labels

  • co/podman-driver: podman driver issues
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@nirs
Contributor

nirs commented Jun 22, 2023

What Happened?

Running minikube in github actions started to fail with:

Error: volume with name test-af672ccebb7d698c8cd745eb7d14911b-cluster already exists:
volume already exists

We use a random profile name, so the volume cannot already exist:

minikube start --profile test-af672ccebb7d698c8cd745eb7d14911b-cluster \
    --driver podman \
    --container-runtime cri-o \
    --disk-size 20g \
    --nodes 1 \
    --cni auto \
    --cpus 2 \
    --memory 2g \
    --alsologtostderr
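
For context, the profile name is generated fresh for each run. A minimal shell sketch of how such a unique name can be produced (illustrative only; this is not the actual drenv code):

    # hypothetical: derive a unique profile name from a random UUID
    profile="test-$(uuidgen | tr -d '-')-cluster"
    minikube start --profile "$profile" --driver podman --container-runtime cri-o ...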

The failures started this week; this code had been running without errors for several months
as part of the ramen GitHub actions.

Example failing run:
https://github.com/RamenDR/ramen/actions/runs/5348065862/jobs/9697442318

Attach the log file

Attached log from our tests - this is a log from the drenv tool, showing the
minikube command, exitcode, output, and error captured during the failed run.

logs.txt

Operating System

Ubuntu

Driver

Podman

@nirs
Contributor Author

nirs commented Jun 22, 2023

For reference, switching to the docker driver fixes this issue, but we are already using
podman for building and managing container images and would like to avoid the docker
dependency.
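
Concretely, the workaround is the same invocation with only the driver flag changed (all other flags as in the original command):

    # same command as above, only the driver flag changes
    minikube start --profile test-af672ccebb7d698c8cd745eb7d14911b-cluster \
        --driver docker \
        --container-runtime cri-o \
        --disk-size 20g \
        --nodes 1 \
        --cni auto \
        --cpus 2 \
        --memory 2g \
        --alsologtostderr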

@mascenzi80

@nirs were you able to make podman work with minikube?

@afbjorklund
Collaborator

afbjorklund commented Aug 15, 2023

You can see sudo -n podman volume create test-af672ccebb7d698c8cd745eb7d14911b-cluster --label name.minikube.sigs.k8s.io=test-af672ccebb7d698c8cd745eb7d14911b-cluster --label created_by.minikube.sigs.k8s.io=true in the log, so the volume is created by minikube.

It seems to be some log spam from a previous error; the real issue is "Error validating CNI config file".
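
If a stale volume really were left behind, it would show up via the labels minikube sets; a quick check/cleanup sketch using the labels from the log above:

    # list any volumes created by minikube (rootful, matching the sudo podman calls in the log)
    sudo podman volume ls --filter label=created_by.minikube.sigs.k8s.io=true
    # remove a leftover volume by name if one shows up
    sudo podman volume rm test-af672ccebb7d698c8cd745eb7d14911b-cluster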

@afbjorklund added the co/podman-driver, kind/bug, and priority/awaiting-more-evidence labels on Aug 15, 2023
@nirs
Contributor Author

nirs commented Aug 15, 2023

@nirs were you able to make podman work with minikube?

We switched to docker for our GitHub actions tests and have not tried again yet.

@nirs
Contributor Author

nirs commented Aug 15, 2023

It seems to be some log spam from a previous error, the real issue is Error validating CNI config file

@afbjorklund Any clue on debugging this issue?

@afbjorklund
Collaborator

afbjorklund commented Aug 15, 2023

All I saw was this log line:
"Error validating CNI config file /etc/cni/net.d/test-af672ccebb7d698c8cd745eb7d14911b-cluster.conflist: [plugin bridge does not support config version \"1.0.0\" plugin portmap does not support config version \"1.0.0\" plugin firewall does not support config version \"1.0.0\" plugin tuning does not support config version \"1.0.0\"]"

Wonder what "auto" picks up...

      I0622 16:35:41.107655    1891 cni.go:84] Creating CNI manager for "auto"
      I0622 16:35:41.107672    1891 cni.go:142] "podman" driver + "crio" runtime found, recommending kindnet

EDIT: But maybe I am misreading this, and it is the podman network that is broken?

The crio network should have a different name. Maybe the host just needs new CNI plugins...

The error is weird though, since podman 3 should be using cniVersion 0.4.0 for the network config:

// Current reports the version of the CNI spec implemented by this library
func Current() string {
	return "0.4.0"
}
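
One way to see which spec version actually ended up in the generated config, and which CNI plugin package the host has (assuming the plugins come from Ubuntu's containernetworking-plugins package):

    # check the cniVersion minikube/podman wrote for this profile
    sudo grep cniVersion /etc/cni/net.d/test-af672ccebb7d698c8cd745eb7d14911b-cluster.conflist
    # check the CNI plugin package installed on the host
    dpkg -l containernetworking-plugins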

@nirs
Contributor Author

nirs commented Aug 18, 2023

I created a repo for debugging this:
https://github.com/nirs/test-minikube-podman

Turns out that you get a pretty old podman (3.4) on Ubuntu 22.04, while the current version
is 4.6. You can install the latest version via the Kubic project:
https://build.opensuse.org/package/show/devel:kubic:libcontainers:unstable/podman

With podman 4.6 minikube works fine:
https://github.com/nirs/test-minikube-podman/actions/runs/5898563462/job/15999784769

So I think we can close this issue and add a known issue for using podman on Ubuntu here:
https://minikube.sigs.k8s.io/docs/drivers/podman/#known-issues
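
For reference, the packaged version can be checked directly; on a stock Ubuntu 22.04 runner this reports 3.4.x:

    podman --version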

@rmsilva1973
Contributor

@afbjorklund take a look at the details I added to issue #17154 regarding podman CNI plugin.

@mascenzi80

I created a repo for debugging this: https://github.com/nirs/test-minikube-podman

Turns out that you get pretty old podman (3.4) in Ubuntu 22.04 while current version is 4.6. You can install the latest version via the Kubic project: https://build.opensuse.org/package/show/devel:kubic:libcontainers:unstable/podman

With podman 4.6 minikube works fine: https://github.com/nirs/test-minikube-podman/actions/runs/5898563462/job/15999784769

So I think we can close this issue and add a known issue for using podman on Ubuntu here: https://minikube.sigs.k8s.io/docs/drivers/podman/#known-issues

@nirs when I attempt to install via kubic I only get version 4.4. I'm running Ubuntu 20, so I'm assuming that because you are running 22.04, you have access to 4.6. Any thoughts?

@nirs
Contributor Author

nirs commented Sep 5, 2023

@nirs when I attempt to install via kubic I only get version 4.4. I'm running Ubuntu 20, so I'm assuming that because you are running 22.04, you have access to 4.6. Any thoughts?

I tried to add an ubuntu-20.04 build here:
https://github.com/nirs/test-minikube-podman/actions/runs/6088573062/job/16519507180

It seems there is no podman build for Ubuntu 20.04 in the Kubic repo.
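
One way to check which Ubuntu releases the Kubic unstable repo publishes builds for is to list the repository index (a rough sketch; the index is plain HTML, so the grep is only approximate):

    curl -s https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/ \
      | grep -o 'xUbuntu_[0-9.]*' | sort -u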

@mascenzi80

mascenzi80 commented Sep 6, 2023

@nirs not sure what I'm doing wrong, but I set up Ubuntu 22.04 in a VM to verify we can make this work before updating our servers. I'm still only getting podman 3.4. I'm following your instructions from your GitHub test repo:

  # create a keyring directory for the Kubic repository key
  sudo mkdir -p /etc/apt/keyrings

  # download and dearmor the Kubic "unstable" repository key
  curl -fsSL "https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/Release.key" \
    | gpg --dearmor \
    | sudo tee /etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg > /dev/null

  # add the Kubic unstable repository for this Ubuntu release
  echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/devel_kubic_libcontainers_unstable.gpg] \
      https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_$(lsb_release -rs)/ /" \
    | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:unstable.list > /dev/null

  # refresh the package index and install podman from the new repository
  sudo apt-get update -qq
  sudo apt-get -qq -y install podman
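
After adding the repo and installing, a quick way to confirm which version apt picked up (the exact version depends on what the Kubic unstable repo currently publishes):

  apt-cache policy podman   # shows the candidate version from the Kubic repo
  podman --version          # should now report a 4.x release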

@nirs
Contributor Author

nirs commented Sep 7, 2023

@mascenzi80 It would be better to go directly to the source:
https://podman.io/docs/installation#ubuntu

While looking for it I found this interesting page:
https://computingforgeeks.com/how-to-install-podman-on-ubuntu/

@mascenzi80

@nirs
Going directly to the Podman instructions has a couple of issues. Getting only 3.4 is the first; to get around that you need to build from source, and the instructions for building from source have you install Go using a deprecated installer. I'm posting to Go's GitHub issue forum shortly to ask for advice on getting around this.

I also came across your second link, but it doesn't provide instructions for 4.6, only 3.4.

@nirs
Contributor Author

nirs commented Sep 7, 2023

@mascenzi80 why build from source? The link I posted explains how to install a package from
the Kubic project, and this is what I used in my test repo to install on ubuntu-22.04.

@mascenzi80

@nirs when I attempted to install from the Kubic project repo, it only installed 3.4. I could not get it to install anything higher.

@mascenzi80

@nirs, I was finally able to get the Kubic repo set up on my system so that it pulls 4.6.2. The mistake was that the instructions at https://computingforgeeks.com/how-to-install-podman-on-ubuntu/ pull from the stable branch. I needed to pull from the unstable branch in order to get the latest version of Podman.
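
For anyone hitting the same thing, the only difference is the branch in the repository path (assuming the usual Kubic/OBS layout):

    # stable branch (older podman):
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_22.04/
    # unstable branch (what the snippet above uses, currently 4.x):
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:unstable/xUbuntu_22.04/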

@tschf

tschf commented Nov 10, 2023

I get this error on OL8

W1110 05:12:01.042725 510883 out.go:239] ❌ Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125

I tried podman system reset to make sure I'm in a clean environment.

I'm just running with minikube start --driver=podman.

Podman version is 4.4.1

╰> dnf info podman
Last metadata expiration check: 0:00:18 ago on Fri 10 Nov 2023 13:26:51 AEDT.
Installed Packages
Name         : podman
Epoch        : 3
Version      : 4.4.1
Release      : 16.module+el8.8.0+21191+109ddc60
Architecture : x86_64
Size         : 49 M
Source       : podman-4.4.1-16.module+el8.8.0+21191+109ddc60.src.rpm
Repository   : @System

If I run rootfully, it works fine.

logs.txt
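
To narrow this down, the failing step from the error above can be reproduced outside minikube by running the same volume create rootless; whatever podman prints there is the underlying cause:

    # re-run the volume create that minikube attempted, rootless
    podman volume create minikube \
      --label name.minikube.sigs.k8s.io=minikube \
      --label created_by.minikube.sigs.k8s.io=true
    # clean up afterwards if it succeeded
    podman volume rm minikube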

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 9, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Apr 8, 2024