"podman system reset" does not return #9075

Closed
eriksjolund opened this issue Jan 24, 2021 · 27 comments · Fixed by #14466
Labels:
  kind/bug: Categorizes issue or PR as related to a bug.
  locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
  parkinglot: Not actively worked on, but should remain open.

Comments

@eriksjolund
Contributor

eriksjolund commented Jan 24, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

podman system reset does not return. At first there is some output, for instance:

ERRO[0073] Error removing image ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab: Image used by 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99: image is in use by a container

but soon nothing more is written to the terminal. At least 60 minutes have now passed since the last output was written to the terminal.

I also have some systemd user services running podman run ... (maybe those are causing the problem?).
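
For reference, a purely illustrative way to see which containers those user services keep running (and therefore which images are still in use); this is an added diagnostic sketch, not part of the original report:

# illustrative only: list the containers kept alive by the user services
podman ps --format '{{.ID}} {{.Names}} {{.Image}}'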

This bug report might be a bit of a nightmare as there is no clear description of how to reproduce the bug. Feel free to close it.

Steps to reproduce the issue:

  1. Install Ubuntu 20.04 (long time ago)
  2. Install Podman from Kubic (long time ago)
esjolund@ubuntu:~$ cat /etc/apt/sources.list.d/devel\:kubic\:libcontainers\:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /
esjolund@ubuntu:~$ 
  3. Use podman to create containers and container images. (Different versions of podman were used because
    sudo apt-get update && sudo apt-get dist-upgrade -y changes the podman version over time). Create and enable systemd user services that are using podman.
  4. Remove the repository entry for stable and instead use testing. (This happened yesterday)
esjolund@ubuntu:~$ cat /etc/apt/sources.list.d/devel\:kubic\:libcontainers\:testing.list 
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/testing/xUbuntu_20.04/ /
esjolund@ubuntu:~$ 
  5. Run sudo apt-get update && sudo apt-get dist-upgrade -y. (This happened yesterday)
  6. Reboot the computer. (This happened yesterday)
  7. Run some podman commands. (This happened yesterday. Unfortunately I don't remember the details).
  8. Run podman images --sort size. This command was successfully executed.
  9. Type commands on the terminal. (This happened today)
esjolund@ubuntu:~/gunicorn-fedora$ podman unshare du -sh ~/.local/share/containers/
154G	/home/esjolund/.local/share/containers/
esjolund@ubuntu:~/gunicorn-fedora$ podman rmi --all
137a1eee74b336ff8e33e220c7859c9bcdc7cc27d65519cf9b592086adb1cd71
7118911d3b0e5816c5ba07c3f65c7b8cfb5b62ec72f536f3d65ea0aeba8a14d4
79996618c1cfea8dc0e758a16b69808756c085636a69426e68b8e94c1d2fe74f
90db1f709b6011e8fe2f93467017c6fed6a7d6200b902742da664ef00392ea82
3ae54cc676aff77b29ddf85f2cce6cabc202971c79bb74f55dc310767e7418cd
647730e92367a0427b514b116394f7a2249ab37ff6d329361e7b95598ee37e61
9e2939a494baf216129be1aa9783124f21a5973fadb024537582f82befe98ab6
38f8f5b8751695c0e4a1da031ab36573361937abbe2eaa45cc043b1af084dbd7
5216cf4855295884e916d4ec77f5fbb70ab39728c5d79f349d0525d2ee4cdbbb
7da4e99ca404481f43a7ece017f06702ba47311d4babf2317c063253620a0d39
5ae54105a45474fbfc83eb3cdfe7f78df98ad39b7d984399aa0c54e6b16658de
c60a93c2f94d4a51f52f4353e2afa6c3e01aafc8a57122ad028c085f46432f38
a656a9fefa997dbc3ce40a813d4ad701190aef11e58962f1dc652fc4930a9b01
82c6172f5d6c766c5e4460ba154837fbad6dfd7e3a0df3f54efcd620636d27a2
1b94250f26d8f1cca514b4c3137a5aa4bd981bce516d3270a2956814b267187f
4d8a917a20323c3983bf42417415e22bfe4aa942f19c9018d5112018bf4603f1
19ab252b9852e453f04704ed443adec5d2fb5215bb4146570b44e7fa71fb82f9
cbbcc218c9d267a457b3276bcc9d60506b9819b8dabdde849017f698232855d6
5cebde3ea43953b07adec4c9bec3e8cbe8540500d680311ee6b5d2358413a6c7
63c3b9ef180cca3959315db0c1f22f4f2b6b279df8dd649e7c64f5b7c4a115a5
e63b628d68f442d3653fcf7c89576cb2b1d0ccaf673df35ed784a4c26982c702
3f04fe802e02b24ca4b298df8082614a7098f7dfcf95b3e6ccc8ef4284f02c74
48fbd3f2a5ae9cc1b6170656c9103643874c51de06fe025a9baf109bd6c90407
61814b1fac959bcc535dfe5f02400a2ca715c67522da8d3cb92ee048594fe013
d4761096b2e863c542da4c2477c389e38e52c4c1c479d92aa0e816b9d924ca30
8ee26a2f1dcb7519b0b58918e77635548d2bb1817e692990961f0a2236de936a
52c75f18f876df66b088658a02cd5157b106d0017b7145cd9bda184b862aa1be
9ed2983be82951d2aca8fafafdbf52bb43661e0121e860b51a49b0e8e55eda12
ac1b50ce4f9751140a0e3118711cd63feef5fcb728a283bf1bedf8b2b495ce0a
9fe92eb54ed8493906677f00c510307fd8617b070c1eee571d3fb6db70029bce
31db7ef1abcfc6f9a51c9c00c0467e06b7a10c7ebe3cff84f6c908e810810742
c28f20840128227fd199e99fabae67065f80e186205219eecf13d82d71eab1eb
96dcae37e27eaf1b466f97b7defa62a7e9f5915e7c2165adf0c8a4fb9f0a0f8a
1a667f8e1d5af99efbb4cbaaa403d4fef7c3422583169ef2afe9d97a79461609
c487564ac0a69b37da850f93e7e32d6ea551c089ef280c52ba7b65406d65b6ff
987e2c5189043cee2b4af75e263c3683102d86ae82d86f5b1b5949eb74e9ce20
c5162fbb6f6583fd705168ffa2013fdd135bd629e8f072a0d94a50180c161689
1c47769ae1827ec0f74294a68091ca8a35523a4f0ef7bafc3f691f38c332776c
e7c49074d79297f79d903b0e59f4a05fbf3d15e7c03b1e3c94a1d5235564baa9
8139d457c9b81a6bd0eb78defe5eae191f6dee16866057fd789ac07a713bddf3
564b4f156f3e5004aad53efb7f46350f93c16f609da268e084498dd786d63a5e
00dc752aff18bc57898207239c1a73cea82b4441b10b1bd1eec4069a032e4aa3
7db2bca50eca15e9a3e4e02dcb158313dd2246e105e5600cf76201c6a04943a9
7c14e18d57f3e5f764622ef32904d615177f180a4c851fb9a9b3aa27d9360b03
0a5312869dfd6d918c81f3dd61de53312ab9f9aefdd12c389ddc0b49a265aba3
39411022dccf29d10dbbafaf9f29829fc386e573bca1845b4026f282134132f7
05ad0204ccb525f3b7104c330b1fbdcdd9670c23efdfcf366120d04ad0fa8025
523d3260d000ce2b59bc42a6c5af701cb1b1bda2723b08d09f8ddf3aa942d9d1
f5703192e501bb6172a31f4f846f60214cb17e5c6fad524abc7379dfab8ceddb
f530fff0965caae38ac83c3ff0652758c71cc7a53a8d8456360dce63b10d4d3c
ff45d9a0bdc9660d35edaea56676248fee4b7c0305f3bd898574c977f9e5202a
91f63ba0b84f80649b6a754f3561975711687dc065069ae3043f1467f7c2161e
356bcc0ee8c9e614cd267c360911ad3fb20c9a725f518a223f4bbba690ffaaec
b7591f3b2a18dd2d611324e2a6f59237cf463913c63776f1e86299cd96e13660
c96ae64744af23cac22e7c0fe9124a28f193def87f01b5e96d4813490180072c
592bf6bd57ecd623f1d3d200a0d154ed26b867fae7b8ac199cb81c944629647b
ca3b47d3eabdf78ae441ed0e57067b3a79a7d66eb56d5c1f91250fc63d156f32
11fd6968fc5529be89d879d378e6c1486922f6eacd20585f8a202cba114e90bb
320662b7f3811c50693a47056d7f83d39c68c9e467f8aa852e0c33f849c5863f
fa5438fa99dd11ec5b39ebebbbd93c47b1ef4adf475bcc79db4387f6df43e156
5d3aacb3325fdb910d5e2141f2fc6320eefab466ce4bf046f3c49b1083e0a072
831691599b88ad6cc2a4abbd0e89661a121aff14cfa289ad840fd3946f274f1f
7dfa15bfdd4957d558ff6da513131c5f6c825e2446024ef0d9935b0c584acc61
9c5980c7f1d0770009ba7f4ee7e1f2658ceb3b724d78aa5d05cccc5b1550eb29
0d91334944a8f63e7f6be9bfe4e5118aadcd84acab668855ff8b94843216413c
40891197dc023775e438a8d1ba95b949164fe457fbc2d1565b1dec269de6ff70
56d9769e539a4f8c21de454efeba2cfab4eac6219ca2ae37e2173d002c605e57
b24e63bff42d9737ee0e5332d64e888bf3ac6a72c1909db9b86db88017b19eed
d8230deb51061faa4e25695adcf819cd74d9253dc0f6aab94d9e0056b9e569d4
e54cf9f70a4c26d035e9a462a926bfa2a7963ba918d7bfbfe65b8e6c80f48b7c
c430257e7cff1d67d52635ed6ea7d4f7f0333b65be51efb94290ef1ad4d6ee8c
eae97a7d3505b9a30f180a5b97b262fa994a41dbd8d48b8260c34b5ec0f99edd
9a1f48362a28733f831039be88f85b1667acb3b9d9eaa301aa65e46503ad92b5
0e294252d3ca319b9a7cff12e00dd0b4c811578c84a98ae3556b870c8ef6d0d9
704821a4c77e06a17519e4c74fa8bb7a1d928a31ddf0727d12c9330fe49e3b81
96baa4b59cf5870edc19b7317e205145ea90f5ae2417ea23786a6a59610b9d36
c382ca36896a2095aab318a9cd586f03a111f5b82e78d66c6dec78064826e09f
dd29ee80d2e4b9eb561906ccbd0f6a824376f1063ead21b89551ef22eb9a1399
d60ea47b0c67126ea681207fc379b8ea7624f0e4d0ddaea2bd5a82fd13fe0373
698284e6651c54292f90ef530bc677cd70f981a54ab1589e5270305ba34224e7
a50abd4f3bcfbcdb51adf0b738cf6c865e33daecdb51899ad9d382aec6bf2ba8
15c167a9d1bb5fdd544cc1d5ea16771237c57b9b59bd977790164ffc9590881f
a0cd641ff3f599794d52fb73fbeae48bd858c56efc65fa3d1068d249ce000e0a
df2b6caf06bbf5f739ee879de4f42220b4304cf2af53cc3614826590993f2400
c35c832f4e6eab403ce6c7f1b72b5cb1bb2b688d7b4605a1406c6688619bdda7
d0d3d7e0c9bd27d99a9c771110869757e1cc67063b72a915c654ad094606f1e6
9ae6537deb50442ef3b9682d5b4b81667df33f7172cc17a7897887a98e973181
25a2f5ff35dc9815ad4f4ffc267472f175ce2a8edfefd84221246f2601e78f7e
d9091a898133ac126b4a11d4aa192cebcb7530d50f3495d1855431a675005234
333070279dbbbf9fb7a7f4f6347c09c4714aba0a436641fdb15e436905eee20c
1826e181abc41fe7b76f397937a04e7a7fbbbd4f6ae24eaf06eeb99b66d3a16b
8024eaaf252f3e34b20435d6dbacff4d1d1513736b73b3090fba4c2be40dcccd
0344075ef6b91cd2499d7a12bdee23808163e820ffa06da51fd7262bd2ce0e61
a8ad2cff49e37b088fadbec1c81948d8ff0a916fedec05e0b9cdf241c0481b16
48ce366e63caef5a8013aa66e473f93216bba943f184fb57a309564e1a118c89
73ea8854b40cb11bfe5b45fc79e936cd5ecc9115cbfac4a6db8d7bb64b3ed555
f3d705ddcb5b17bdc71d4aee79f471ad0b69fa71a0e7bba16564b47f6db216b3
7ab7e3771f4776339b447097c9d67b6506db040a94e9a03acdba9174fa701e86
830e4384e160be40087722290f509e0f75352d3a094e5d41476745151bf0ebc7
4526213b5fcf3258241b7d498bf7f0959d7cf143174379829fc18bf48dc7a00a
e09a66f5eee4d5a99996b00a4a8b1c0ea307aaa19e4b92e2acb48d0736f6ddf6
10f9654f6d6e1deb353559fbab0fcae25663482b2fc56d60adf79c088d33bb01
6c24d7ec2e35056c2e60e91dc9caa3d109a5acf5fd1c8094d18743e6513b9289
c2f56fe81235cc504306c49edee828d22b5163104a76ade6e463ba8af6496558
c571f13cde84a5351ad7504020ae0ee9c5a7a374e5d698f453b6918cae898dd5
d6b245d18d56c8bb70f2f3247856f5b9566cc3401adb3ec2f40061b0ff40a170
a9da963a67e354bc82e55eaf6f6fc69c772083440c8e0282de558973b82d5dea
a24b7700891f60cedee2d85de0f7a987a4e1a4f3c47b6496901681468e4ebae0
2036a686ec0689e8259f87106b214abbd47a87e325ef8f6e3662452f28d3a9c7
7cef057ed76603850b2c98ecbc884e4d06bfb221b3b948bdd4fbfe5f447cd0fc
38a70f8f52ea5310239a10936f705b9bc7c67048ba2ae7dd033f38c9537c0d71
f007db76147015a0a050afa9ffdbab8d7dfef6c6d8d231556b236c4dc7d3e2fd
0b46b7c724b6a591c5610e3401ac0f1f74c511596814eea51a2037e54819f651
a671bf421ffc13825a16d02b519fbc27fcb9661d8776d739e2a4d3d481078a7f
f8dfd4f7e6410ad95644e06accbb6fc3d259b3e50985bea07b1a6ddfadf60d84
ca540d3b24e38c1decc735dcd163d5fc8d3873171a0e00d1aa3fd0261b88b5d7
a93d2260c79ac0ca6948eb1e71febaf96e795c60f20e08a6d2f48696a11a0b90
b5017ddc009a75d623bb23d91ba3acbc9223916637ef05e5d0f55cb292a7886b
36dbcaf4abc92d85d10ae57893892fd82d642feb13191ad9c6ab5357904591c4
30c02da5951f60e514adbc0a37d4efd89711fbdf37965fec5f3fa32170af80cc
44f429ec640d53a3bbf22d2d294613dfb5d1c10cc2b4a1c7206e5f5dcdb794b4
b2d3e091046a785abeb236e814327ce980f08fb9712f34deb6736a8d1c877da6
bf3003c07778c03dd144ed9321db1f4a18c715c3364de3e69e86f3b6d2ce1b1f
65f3eaf23dbd315b060b44e7224703d37df0ddd8141028e297d95deafcbac191
4416dc951b8f9d41bd71eac912a84ef1d700c14fab5ef4d41dfdf6fb067ea097
f66bfcde7dc0af65af934d04349cface0b6b1504b2a3f3b3c11aacc9fe90a7b0
317c9def49ee6a92a1ee8c046b96a15627f5f98b13926a6bcad29ab08dbbda85
e46a0f191a4ac96b1b62dad52959e7a9b352168d5d1ada580813a5993260c2ce
644957f7b24e3385a67e052a022423a8f12927519ee824eb70707ce6f6bc8aa4
a1679bc0dce2cec1035fa0367115665379466eb52f5e464478a837c526e3c91e
bb8710ed40b1296ae72d22ce376cab13ca2adc051b2dd656c1bcb18991b65d4c
62aba83f7b228b96b89bbeb316188bb9c9226f61088035c0d607e3d61297533f
d0ad2eac4a12e0d889def7618cd80bb6c7f04ec03d81b0768c6688098670306a
3e35454af8db6f21d612cb72e6b5388658d8b54c2c180cd6caaaf2518e1f68cb
b8455f68907c5cf97e99b6e2345197b1ebc4a3979dd457959917997a9e90f563
351015056d39fa3187f6501cf2694b7ac274500eb1cc74a30c0cef5da8b6aa7d
28db28826e7c4354bb791cb9e9e27609e4efc5bfc915d48f53e116a08dc4f164
Untagged: quay.io/buildah/stable:latest
Untagged: quay.io/podman/stable:latest
Untagged: quay.io/podman/stable:v2.1.1
Untagged: localhost/mip:latest
Untagged: localhost/slurm-container-cluster-mip:latest
Untagged: docker.io/nextflow/examples:latest
Untagged: localhost/openmpi:latest
Untagged: localhost/testar:latest
Untagged: docker.io/library/mysql:latest
Untagged: docker.io/eriksjolund/mysql-with-norouter:mysql-5.7-norouter-v0.6.1
Untagged: localhost/proxy-frontend:latest
Untagged: docker.io/grafana/grafana:latest
Untagged: docker.io/library/redis:latest
Untagged: registry.access.redhat.com/ubi8/ubi-minimal:latest
Untagged: docker.io/library/mariadb:latest
Untagged: docker.io/sebp/lighttpd:latest
Untagged: localhost/fedoralighttpd2:latest
Untagged: localhost/gunicorn-fedora:latest
Untagged: docker.io/giovtorres/slurm-docker-cluster:latest
Untagged: docker.io/library/perl:5.26
Untagged: docker.io/library/python:3.8-alpine
Untagged: localhost/fedoralighttpd:latest
Deleted: 64333d7a727019044dfaece2887d886630eb761b40be72ee2f3ce2b232dd25eb
Deleted: a3def042137ca66fe35464e317e9ddd5fab0c1d2db1efd6975f2cdf640f0787c
Deleted: 775840ae7e5bd20146f6334429a9e66b135f3e21e8794e5aee6e851ed5dbf2cc
Deleted: 1c1e28edebdbb7d30c95fed050881a0924701bd4630f65f2b042d013ae33249a
Deleted: 2337474f464eb49a746684a05cdd918131bee8a6e9d98aadd334da18cc8b1e0a
Deleted: 26c0024516b448f28369ca26bb49efd493a27276f5181f12e96e7c42f05c90a8
Deleted: e7b023c0f8564de27f60a913c16f1484670eb33cdfdce61af502414239b64051
Deleted: 306ba975bfa9848862e689b161b65f9b47b0ff3dada669ddf9614b1b5ec1f40f
Deleted: aedee719ca71befbfccb36af94503ee72c05e1f62e0f9fffbd7f6b904192ce9d
Deleted: cfa0b94e818159b724d583d8d69584f14d1a4e7a761ba5db11662ab0c7cd3ba7
Deleted: 30117ceed4ea55483e219001d2543bea4ed4cc3fa86fbf4d7cb6f2bd3688563e
Deleted: 5c1ec2f6346ec4c4c27f6c3856300a6bf3c191af6ab288f3c6b348c34beddd6c
Deleted: 318c7d0648b82816613a5927abb13bfb4be6fe3a85c72a957fb3f3b66c70d1e9
Deleted: 479bbe9e1a8aeebd98842e8e098311d7da4accb59f35544eedb02ef04df568b0
Deleted: 542ddbbad60a1a11a3a0d9f958d698396e48c88125c890a1adffd2960dcff0d6
Deleted: b4f30594cd8b571bef504f07ae0206e15136b0a03391ed12e1183ac58500b753
Deleted: 631f4b22af18cf49c2c9053e18d2d50e91211cdf74384b4f9b5677013dbdae8d
Deleted: a316c41225f96e25bdfd41e497b18e71b33a90b6a96133b8b52b650830c05d1e
Deleted: 39595608686537dfbb723710bb2dce24674d0e930d1038eba3f6d10516e0bb98
Deleted: 989286f77175eb0070fbc71469d713c9aa215407b638351620469bf22d95e92c
Deleted: ab2f358b86124c477cc1f91066d42ca15fb2da58f029aa3c4312de5b3ca02018
Deleted: 23a30e3c2c81e00e87be908a8760395694e3a392ec846b1e933ad2728d0967e5
Deleted: 73154b3c390e3b5c8fd9825b768e6d23945070804261b4586210713aa4ee80e6
Deleted: 198072c2da614008018f3f72cf2c257491d8f51e7c97a86ceddd5b3a3009e3a3
Deleted: f966636e5b907aa3bd8bc345d7692a4df8e4e715ccb12f06d5e68cca4b03a713
Deleted: 2cccdde49a31602d125daf7c68ffd9ad649aa6db316235f350385c25b5a1b8ae
Deleted: bc59a014df8f0f10bd81f2b6f69ca9c209d3d9875a28207964c15600cfae3839
Deleted: 751377198a73433324be2e4ebce9811ef2900c0361633d0463c4eaba29c3e024
Deleted: 2a167fc9fbb5ee5f8683993a6f59e0356fe0c6e41624d1ce824fcce194a28c86
Deleted: c97fc66f638c68a34a6f4f188f054d9f526d87f3a9891977c140dffea7b43e3a
Deleted: 18ed848c544adf93cf6789e28df586f647c55c4b3acd8f71a9d863c825b07114
Deleted: 0f626b2f483bee29e62addb8eb64d3ea98abad4c45dc7157c4aaa8de762bc593
Deleted: 4edb90363afb63ede37e570e27e002c9f83983d65b0d92ff212a0887d6897318
Deleted: bb0f021d34a45fe77a6b019582afffe666cb22fa0d1effb4abcd6659037ae3e1
Deleted: b3719b23be015d728335db15541ffc5afc44062e28b0ea86fe72a75e883c47e3
Deleted: ba6d85f2b2dfd61bf89626c2f01c82323e99ce874961d07cbbefa5b78bb2c8c5
Deleted: cecd78f3e7dbe89f807b24a6a6a9fd4ebd3b53971971f8b9a2f51bb425f8fc82
Deleted: 4ec5d5b55954b4e7fe12bc72c24f12e8a63c4ad8025875fb6d1a400bfa3510a3
Deleted: 60cd08620bae90c42461109b860c2ef68f9a9950a9d1970895ceb8cd7390f323
Deleted: 8a86d07b46dc45e66dee00064d9a94859f7065ecd70f03c1cb8f0f22b8ec4096
Deleted: 8485bfa2f4c2ab6ba02e08798e46225bbba588c4e265f9485d8edb9ed71fdb5d
Deleted: 626b47a6d9b49c037c9e9f1b932afc252aaa78b6bf2fe24a6a166947eb57f700
Deleted: 3c7f996712896e42940e010623c44d35754d8bb447a777971578dea5050186f7
Deleted: 621ceef7494adfcbe0e523593639f6625795cc0dc91a750629367a8c7b3ccebb
Deleted: 7331d26c1fdfb4c5cc19fd8fdf039c8efe6e5df7c46f02d027bf685c8614107f
Deleted: 3a348a04a8159339ed3ca053ea925f854252e6a6c3df6fa82c17625d1026f18b
Deleted: 8c46e542506bcb5db43a5aeb9e444207c888a29c93cbd696625d38dcf13dbe53
Deleted: 06926c6a7dad56f9ccd103d59121d398d99b770cf922535e494122d3a50d32d8
Deleted: 89de47888c9df1a97c4b0ace3fde476232549a48d76506b792edae08a48c9067
Deleted: 3bd9bf3735c8301d8357f2f4b0ac2283d14935cf5f9d08a29cfe605d93c6642c
Deleted: dc3d91f84c2b87f41b2be2ff03cbef157d48cb83eabdf7b42f4fc3ba9772bf62
Deleted: ff1a37bb2e6af54125a74326cd28a62eea910ffabc4bb6cc99c60fedc59f112a
Deleted: 8a3fa8608b24ed07d18afa610fefffa2ea716c0e64af1a7ef43cb88240b0fda8
Deleted: 53b271d170d32a2c13ee22f986da8a36c6fefa83a499433f1c2fea5108e0386f
Deleted: 0047e5343a77280993a358d27337fcfeb34658d30d57cb784e145de0818341ca
Deleted: bd6e559ae148e095788887c5a6a61b431622bb838e6e4c6c59074d1048992488
Deleted: 64df5e2068e389326fc91858380a7a308ceb5943364686ba40427aa4cdbcf57b
Deleted: 6de16b74300e0b8033f95ace7298e5104354a50d0fc2eea9288251736cf51e85
Error: 31 errors occurred:
	* could not remove image 5fa78203c8f7c5126842b4cab3241cba620e86f0f6b7645ebd3c5db7629693af as it is being used by 1 containers: image is being used
	* unable to delete d6e46aa2470df1d32034c6707c8041158b652f38d2a9ae3d7ad7e7532d22ebe0 (must force) - image is referred to in multiple tags: image is being used
	* could not remove image 56179057dcc3ce62f701cfd7b3ada536b7d30925dec2f623d9eabdf6753a1d15 as it is being used by 1 containers: image is being used
	* could not remove image ddf66ebce79b0c850c7e8af72210beeb64360478911bcd0be4b9d675eff2f034 as it is being used by 1 containers: image is being used
	* could not remove image aedc30f38677249e3b71c3f69810d1dab7e9ee639ae50d9e256326f87de052e8 as it is being used by 1 containers: image is being used
	* image is in use by a container
	* could not remove image ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab as it is being used by 3 containers: image is being used
	* could not remove image ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd as it is being used by 1 containers: image is being used
	* could not remove image d7f396029a550165fac51e599086c66cce1ded37ac4c8ce4fe9b9283bdcf87a2 as it is being used by 1 containers: image is being used
	* could not remove image 5fa78203c8f7c5126842b4cab3241cba620e86f0f6b7645ebd3c5db7629693af as it is being used by 1 containers: image is being used
	* unable to delete d6e46aa2470df1d32034c6707c8041158b652f38d2a9ae3d7ad7e7532d22ebe0 (must force) - image is referred to in multiple tags: image is being used
	* could not remove image 56179057dcc3ce62f701cfd7b3ada536b7d30925dec2f623d9eabdf6753a1d15 as it is being used by 1 containers: image is being used
	* could not remove image ddf66ebce79b0c850c7e8af72210beeb64360478911bcd0be4b9d675eff2f034 as it is being used by 1 containers: image is being used
	* could not remove image aedc30f38677249e3b71c3f69810d1dab7e9ee639ae50d9e256326f87de052e8 as it is being used by 1 containers: image is being used
	* image is in use by a container
	* could not remove image ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab as it is being used by 3 containers: image is being used
	* unable to delete a488b622b94021ea66002df2cac4914a762fd9bb5b2e3444bfa0477a23452cea (must force) - image is referred to in multiple tags: image is being used
	* could not remove image ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd as it is being used by 1 containers: image is being used
	* could not remove image d7f396029a550165fac51e599086c66cce1ded37ac4c8ce4fe9b9283bdcf87a2 as it is being used by 1 containers: image is being used
	* could not remove image 5fa78203c8f7c5126842b4cab3241cba620e86f0f6b7645ebd3c5db7629693af as it is being used by 1 containers: image is being used
	* unable to delete d6e46aa2470df1d32034c6707c8041158b652f38d2a9ae3d7ad7e7532d22ebe0 (must force) - image is referred to in multiple tags: image is being used
	* could not remove image 56179057dcc3ce62f701cfd7b3ada536b7d30925dec2f623d9eabdf6753a1d15 as it is being used by 1 containers: image is being used
	* could not remove image ddf66ebce79b0c850c7e8af72210beeb64360478911bcd0be4b9d675eff2f034 as it is being used by 1 containers: image is being used
	* could not remove image aedc30f38677249e3b71c3f69810d1dab7e9ee639ae50d9e256326f87de052e8 as it is being used by 1 containers: image is being used
	* unable to delete b3048463dcefbe4920ef2ae1af43171c9695e2077f315b2bc12ed0f6f67c86c7 (must force) - image is referred to in multiple tags: image is being used
	* image is in use by a container
	* could not remove image ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab as it is being used by 3 containers: image is being used
	* unable to delete a488b622b94021ea66002df2cac4914a762fd9bb5b2e3444bfa0477a23452cea (must force) - image is referred to in multiple tags: image is being used
	* could not remove image ae2feff98a0cc5095d97c6c283dcd33090770c76d63877caa99aefbbe4343bdd as it is being used by 1 containers: image is being used
	* could not remove image d7f396029a550165fac51e599086c66cce1ded37ac4c8ce4fe9b9283bdcf87a2 as it is being used by 1 containers: image is being used
	* unable to delete all images, check errors and re-run image removal if needed
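
The errors above come from images that are still referenced by existing containers. A hedged sketch of the usual workaround would be to remove the containers first and then retry the image removal; this is not what was done here:

# sketch only: remove all containers (forcing removal of running ones), then retry
podman rm --all --force
podman rmi --all
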
esjolund@ubuntu:~/gunicorn-fedora$ podman system reset

WARNING! This will remove:
        - all containers
        - all pods
        - all images
        - all build cache
Are you sure you want to continue? [y/N] y
ERRO[0047] Error removing image 5fa78203c8f7c5126842b4cab3241cba620e86f0f6b7645ebd3c5db7629693af: Image used by cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0: image is in use by a container 
ERRO[0047] Error removing image 0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566: Image used by 5149928c38d3597d85f797f0b25c894cdd12b16cc8e157214e15d4d3d6f552e5: image is in use by a container 
ERRO[0072] Error removing image 6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237: Image used by 5414c10e95c6898bf76549dd24e25daff8d981e97a38ecdab81af275943ed514: image is in use by a container 
ERRO[0072] Error removing image 3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb: Image used by 1b75220d53ef2f34a10ca2f4f60a0efc74a8a47fdc6f0506bef5117b76190f93: image is in use by a container 
ERRO[0073] Error removing image ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab: Image used by 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99: image is in use by a container 

Describe the results you received:

Nothing more is written to the terminal. At least 60 minutes have now passed since the last output was written to the terminal.
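
As a purely illustrative diagnostic (not something that was run here): podman is a Go binary, so sending SIGQUIT to the hung process should make the Go runtime dump all goroutine stacks to the process's stderr before it exits, which might show where it is stuck. The PID is taken from the ps output further down:

# illustrative only: dump goroutine stacks of the hung process (this terminates it)
kill -QUIT 116589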

Describe the results you expected:

I would expect the command podman system reset to return.

Additional information you deem important (e.g. issue happens only occasionally):

Some disk space has been freed up.

I checked that ~/.local/share/containers now consumes 45 GB of disk space; before it was 154 GB.

esjolund@ubuntu:~$ sudo du -sh /home/esjolund/.local/share/containers
45G	/home/esjolund/.local/share/containers
esjolund@ubuntu:~$ 

The network does not seem to be the problem

I'm logged in via SSH, so in theory the network connection could influence what is happening.
It does not seem to be related to the network, though, as I tested pressing Ctrl-Z and then typing fg:

 ^Z
[1]+  Stopped                 podman system reset
esjolund@ubuntu:~/gunicorn-fedora$ fg
podman system reset

There is no special user config

esjolund@ubuntu:~$ ls -l ~/.config/containers/
total 0
-rw-r--r-- 1 esjolund esjolund 0 dec 10 16:41 short-name-aliases.conf.lock
esjolund@ubuntu:~$ 

ps axuw | grep podman

esjolund@ubuntu:~$ ps axuw | grep podman
esjolund  116579  0.0  0.3 1418996 49024 pts/0   Sl+  08:43   0:00 podman system reset
esjolund  116589  124  0.3 1567056 60560 pts/0   Sl+  08:43 109:59 podman system reset
esjolund  116794  0.0  0.0  88948  2016 ?        Ssl  08:43   0:00 /usr/libexec/podman/conmon --api-version 1 -c cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0 -u cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0 -r /usr/bin/crun -b /home/esjolund/.local/share/containers/storage/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata -p /run/user/1000/containers/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata/pidfile -n mysql --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/esjolund/.local/share/containers/storage/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata/oci-log --conmon-pidfile /run/user/1000/slurm-mysql.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/esjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0
esjolund  116971  0.0  0.0  88948  2120 ?        Ssl  08:44   0:00 /usr/libexec/podman/conmon --api-version 1 -c 830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4 -u 830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4 -r /usr/bin/crun -b /home/esjolund/.local/share/containers/storage/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata -p /run/user/1000/containers/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata/pidfile -n slurmdbd --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/esjolund/.local/share/containers/storage/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata/oci-log --conmon-pidfile /run/user/1000/slurm-slurmdbd.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/esjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4
esjolund  117055  0.0  0.1  49784 23436 ?        S    08:44   0:00 /usr/bin/podman
esjolund  117082  0.0  0.0  88948  1940 ?        Ssl  08:44   0:00 /usr/libexec/podman/conmon --api-version 1 -c 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99 -u 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99 -r /usr/bin/crun -b /home/esjolund/.local/share/containers/storage/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata -p /run/user/1000/containers/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata/pidfile -n slurmctld --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/esjolund/.local/share/containers/storage/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata/oci-log --conmon-pidfile /run/user/1000/slurm-slurmctld.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/esjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99
esjolund  122786  0.0  0.2 1418912 48124 pts/1   Sl   09:29   0:00 podman version
esjolund  122996  0.0  0.2 1418656 48516 pts/2   Sl   09:30   0:00 podman info --debug
esjolund  128334  0.5  0.2 1271448 48312 pts/2   Sl   10:11   0:00 podman info --debug
esjolund  128361  0.0  0.0  11900  2904 pts/2    S+   10:11   0:00 grep --color=auto podman
esjolund@ubuntu:~$
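
Note that the second podman system reset process (PID 116589) has accumulated roughly 110 minutes of CPU time at 124%, so it appears to be busy-looping rather than blocked. A hedged sketch of commands that could confirm this (PID taken from the listing above):

# illustrative only: per-thread CPU usage and what the main thread is waiting on
top -b -n 1 -H -p 116589
cat /proc/116589/wchan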

ps axuw | grep conmon

esjolund@ubuntu:~$ ps axuw | grep conmon
esjolund  116794  0.0  0.0  88948  2016 ?        Ssl  08:43   0:00 /usr/libexec/podman/conmon --api-version 1 -c cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0 -u cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0 -r /usr/bin/crun -b /home/esjolund/.local/share/containers/storage/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata -p /run/user/1000/containers/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata/pidfile -n mysql --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/esjolund/.local/share/containers/storage/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/vfs-containers/cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0/userdata/oci-log --conmon-pidfile /run/user/1000/slurm-mysql.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/esjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg cfe0285dbbc520bd1e2e2ae6860cdf229ff60f940af5cadc2afdccdd43b220c0
esjolund  116971  0.0  0.0  88948  2120 ?        Ssl  08:44   0:00 /usr/libexec/podman/conmon --api-version 1 -c 830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4 -u 830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4 -r /usr/bin/crun -b /home/esjolund/.local/share/containers/storage/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata -p /run/user/1000/containers/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata/pidfile -n slurmdbd --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/esjolund/.local/share/containers/storage/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/vfs-containers/830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4/userdata/oci-log --conmon-pidfile /run/user/1000/slurm-slurmdbd.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/esjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 830fcb2a988890eea06ab94822fb3dd1e244e3bf62486dbb10ef1ac93ec7acd4
esjolund  117082  0.0  0.0  88948  1940 ?        Ssl  08:44   0:00 /usr/libexec/podman/conmon --api-version 1 -c 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99 -u 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99 -r /usr/bin/crun -b /home/esjolund/.local/share/containers/storage/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata -p /run/user/1000/containers/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata/pidfile -n slurmctld --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -l k8s-file:/home/esjolund/.local/share/containers/storage/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata/ctr.log --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/vfs-containers/7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99/userdata/oci-log --conmon-pidfile /run/user/1000/slurm-slurmctld.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/esjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 7fdd72318ad28781f2771b122beef42ef6cb61e89a62a281d22df2386305fd99
esjolund  129843  0.0  0.0  11900  2920 pts/2    S+   10:23   0:00 grep --color=auto conmon
esjolund@ubuntu:~$ 
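
These conmon processes belong to the containers started by the systemd user services, which is presumably why the images are still "in use". A hedged cleanup sketch would be to stop those services first so the containers go away, then re-run the reset; the unit names are the ones listed in the next section:

# sketch only: stop the podman-backed user services, then retry the reset
systemctl --user stop slurm-mysql.service slurm-slurmdbd.service slurm-slurmctld.service
podman system reset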

systemctl --user list-units

esjolund@ubuntu:~$ systemctl --user list-units
  UNIT                                                                                     LOAD   ACTIVE SUB       DESCRIPTION                                                             
  sys-devices-pci0000:00-0000:00:01.0-0000:01:00.1-sound-card1.device                      loaded active plugged   GP108 High Definition Audio Controller                                  
  sys-devices-pci0000:00-0000:00:16.3-tty-ttyS4.device                                     loaded active plugged   100 Series/C230 Series Chipset Family KT Redirection                    
  sys-devices-pci0000:00-0000:00:17.0-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda1.device loaded active plugged   SAMSUNG_MZ7LN256HCHP-000H1 EFI\x20System\x20Partition                   
  sys-devices-pci0000:00-0000:00:17.0-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda2.device loaded active plugged   SAMSUNG_MZ7LN256HCHP-000H1 2                                            
  sys-devices-pci0000:00-0000:00:17.0-ata1-host0-target0:0:0-0:0:0:0-block-sda.device      loaded active plugged   SAMSUNG_MZ7LN256HCHP-000H1                                              
  sys-devices-pci0000:00-0000:00:17.0-ata2-host1-target1:0:0-1:0:0:0-block-sr0.device      loaded active plugged   hp_HLDS_DVDRW_GUD1N                                                     
  sys-devices-pci0000:00-0000:00:17.0-ata3-host2-target2:0:0-2:0:0:0-block-sdb-sdb1.device loaded active plugged   SAMSUNG_MZ7PD128HCFV-000H1 EFI\x20System\x20Partition                   
  sys-devices-pci0000:00-0000:00:17.0-ata3-host2-target2:0:0-2:0:0:0-block-sdb-sdb2.device loaded active plugged   SAMSUNG_MZ7PD128HCFV-000H1 2                                            
  sys-devices-pci0000:00-0000:00:17.0-ata3-host2-target2:0:0-2:0:0:0-block-sdb-sdb3.device loaded active plugged   SAMSUNG_MZ7PD128HCFV-000H1 3                                            
  sys-devices-pci0000:00-0000:00:17.0-ata3-host2-target2:0:0-2:0:0:0-block-sdb.device      loaded active plugged   SAMSUNG_MZ7PD128HCFV-000H1                                              
  sys-devices-pci0000:00-0000:00:1f.3-sound-card0.device                                   loaded active plugged   100 Series/C230 Series Chipset Family HD Audio Controller               
  sys-devices-pci0000:00-0000:00:1f.6-net-eno1.device                                      loaded active plugged   Ethernet Connection (2) I219-LM                                         
  sys-devices-platform-serial8250-tty-ttyS1.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS1                              
  sys-devices-platform-serial8250-tty-ttyS10.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS10                             
  sys-devices-platform-serial8250-tty-ttyS11.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS11                             
  sys-devices-platform-serial8250-tty-ttyS12.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS12                             
  sys-devices-platform-serial8250-tty-ttyS13.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS13                             
  sys-devices-platform-serial8250-tty-ttyS14.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS14                             
  sys-devices-platform-serial8250-tty-ttyS15.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS15                             
  sys-devices-platform-serial8250-tty-ttyS16.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS16                             
  sys-devices-platform-serial8250-tty-ttyS17.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS17                             
  sys-devices-platform-serial8250-tty-ttyS18.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS18                             
  sys-devices-platform-serial8250-tty-ttyS19.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS19                             
  sys-devices-platform-serial8250-tty-ttyS2.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS2                              
  sys-devices-platform-serial8250-tty-ttyS20.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS20                             
  sys-devices-platform-serial8250-tty-ttyS21.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS21                             
  sys-devices-platform-serial8250-tty-ttyS22.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS22                             
  sys-devices-platform-serial8250-tty-ttyS23.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS23                             
  sys-devices-platform-serial8250-tty-ttyS24.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS24                             
  sys-devices-platform-serial8250-tty-ttyS25.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS25                             
  sys-devices-platform-serial8250-tty-ttyS26.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS26                             
  sys-devices-platform-serial8250-tty-ttyS27.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS27                             
  sys-devices-platform-serial8250-tty-ttyS28.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS28                             
  sys-devices-platform-serial8250-tty-ttyS29.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS29                             
  sys-devices-platform-serial8250-tty-ttyS3.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS3                              
  sys-devices-platform-serial8250-tty-ttyS30.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS30                             
  sys-devices-platform-serial8250-tty-ttyS31.device                                        loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS31                             
  sys-devices-platform-serial8250-tty-ttyS5.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS5                              
  sys-devices-platform-serial8250-tty-ttyS6.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS6                              
  sys-devices-platform-serial8250-tty-ttyS7.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS7                              
  sys-devices-platform-serial8250-tty-ttyS8.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS8                              
  sys-devices-platform-serial8250-tty-ttyS9.device                                         loaded active plugged   /sys/devices/platform/serial8250/tty/ttyS9                              
  sys-devices-pnp0-00:08-tty-ttyS0.device                                                  loaded active plugged   /sys/devices/pnp0/00:08/tty/ttyS0                                       
  sys-devices-virtual-block-loop0.device                                                   loaded active plugged   /sys/devices/virtual/block/loop0                                        
  sys-devices-virtual-block-loop1.device                                                   loaded active plugged   /sys/devices/virtual/block/loop1                                        
  sys-devices-virtual-block-loop10.device                                                  loaded active plugged   /sys/devices/virtual/block/loop10                                       
  sys-devices-virtual-block-loop11.device                                                  loaded active plugged   /sys/devices/virtual/block/loop11                                       
  sys-devices-virtual-block-loop12.device                                                  loaded active plugged   /sys/devices/virtual/block/loop12                                       
  sys-devices-virtual-block-loop13.device                                                  loaded active plugged   /sys/devices/virtual/block/loop13                                       
  sys-devices-virtual-block-loop14.device                                                  loaded active plugged   /sys/devices/virtual/block/loop14                                       
  sys-devices-virtual-block-loop15.device                                                  loaded active plugged   /sys/devices/virtual/block/loop15                                       
  sys-devices-virtual-block-loop16.device                                                  loaded active plugged   /sys/devices/virtual/block/loop16                                       
  sys-devices-virtual-block-loop17.device                                                  loaded active plugged   /sys/devices/virtual/block/loop17                                       
  sys-devices-virtual-block-loop2.device                                                   loaded active plugged   /sys/devices/virtual/block/loop2                                        
  sys-devices-virtual-block-loop3.device                                                   loaded active plugged   /sys/devices/virtual/block/loop3                                        
  sys-devices-virtual-block-loop4.device                                                   loaded active plugged   /sys/devices/virtual/block/loop4                                        
  sys-devices-virtual-block-loop5.device                                                   loaded active plugged   /sys/devices/virtual/block/loop5                                        
  sys-devices-virtual-block-loop6.device                                                   loaded active plugged   /sys/devices/virtual/block/loop6                                        
  sys-devices-virtual-block-loop7.device                                                   loaded active plugged   /sys/devices/virtual/block/loop7                                        
  sys-devices-virtual-block-loop8.device                                                   loaded active plugged   /sys/devices/virtual/block/loop8                                        
  sys-devices-virtual-block-loop9.device                                                   loaded active plugged   /sys/devices/virtual/block/loop9                                        
  sys-devices-virtual-misc-rfkill.device                                                   loaded active plugged   /sys/devices/virtual/misc/rfkill                                        
  sys-devices-virtual-net-br\x2d34cb800a426f.device                                        loaded active plugged   /sys/devices/virtual/net/br-34cb800a426f                                
  sys-devices-virtual-net-br\x2d5fa62d9924b3.device                                        loaded active plugged   /sys/devices/virtual/net/br-5fa62d9924b3                                
  sys-devices-virtual-net-br\x2da1dd9338508a.device                                        loaded active plugged   /sys/devices/virtual/net/br-a1dd9338508a                                
  sys-devices-virtual-net-br\x2dd48dd43f0ec0.device                                        loaded active plugged   /sys/devices/virtual/net/br-d48dd43f0ec0                                
  sys-devices-virtual-net-br\x2de81c73dc9ddd.device                                        loaded active plugged   /sys/devices/virtual/net/br-e81c73dc9ddd                                
  sys-devices-virtual-net-docker0.device                                                   loaded active plugged   /sys/devices/virtual/net/docker0                                        
  sys-devices-virtual-net-virbr0.device                                                    loaded active plugged   /sys/devices/virtual/net/virbr0                                         
  sys-devices-virtual-net-virbr0\x2dnic.device                                             loaded active plugged   /sys/devices/virtual/net/virbr0-nic                                     
  sys-devices-virtual-tty-ttyprintk.device                                                 loaded active plugged   /sys/devices/virtual/tty/ttyprintk                                      
  sys-module-configfs.device                                                               loaded active plugged   /sys/module/configfs                                                    
  sys-module-fuse.device                                                                   loaded active plugged   /sys/module/fuse                                                        
  sys-subsystem-net-devices-br\x2d34cb800a426f.device                                      loaded active plugged   /sys/subsystem/net/devices/br-34cb800a426f                              
  sys-subsystem-net-devices-br\x2d5fa62d9924b3.device                                      loaded active plugged   /sys/subsystem/net/devices/br-5fa62d9924b3                              
  sys-subsystem-net-devices-br\x2da1dd9338508a.device                                      loaded active plugged   /sys/subsystem/net/devices/br-a1dd9338508a                              
  sys-subsystem-net-devices-br\x2dd48dd43f0ec0.device                                      loaded active plugged   /sys/subsystem/net/devices/br-d48dd43f0ec0                              
  sys-subsystem-net-devices-br\x2de81c73dc9ddd.device                                      loaded active plugged   /sys/subsystem/net/devices/br-e81c73dc9ddd                              
  sys-subsystem-net-devices-docker0.device                                                 loaded active plugged   /sys/subsystem/net/devices/docker0                                      
  sys-subsystem-net-devices-eno1.device                                                    loaded active plugged   Ethernet Connection (2) I219-LM                                         
  sys-subsystem-net-devices-virbr0.device                                                  loaded active plugged   /sys/subsystem/net/devices/virbr0                                       
  sys-subsystem-net-devices-virbr0\x2dnic.device                                           loaded active plugged   /sys/subsystem/net/devices/virbr0-nic                                   
  -.mount                                                                                  loaded active mounted   Root Mount                                                              
  boot-efi.mount                                                                           loaded active mounted   /boot/efi                                                               
  dev-hugepages.mount                                                                      loaded active mounted   /dev/hugepages                                                          
  dev-mqueue.mount                                                                         loaded active mounted   /dev/mqueue                                                             
  oldhdd.mount                                                                             loaded active mounted   /oldhdd                                                                 
  proc-sys-fs-binfmt_misc.mount                                                            loaded active mounted   /proc/sys/fs/binfmt_misc                                                
  run-user-1000-gvfs.mount                                                                 loaded active mounted   /run/user/1000/gvfs                                                     
  run-user-1000.mount                                                                      loaded active mounted   /run/user/1000                                                          
  snap-chromium-1444.mount                                                                 loaded active mounted   /snap/chromium/1444                                                     
  snap-chromium-1461.mount                                                                 loaded active mounted   /snap/chromium/1461                                                     
  snap-core-10577.mount                                                                    loaded active mounted   /snap/core/10577                                                        
  snap-core-10583.mount                                                                    loaded active mounted   /snap/core/10583                                                        
  snap-core18-1932.mount                                                                   loaded active mounted   /snap/core18/1932                                                       
  snap-core18-1944.mount                                                                   loaded active mounted   /snap/core18/1944                                                       
  snap-gnome\x2d3\x2d26\x2d1604-100.mount                                                  loaded active mounted   /snap/gnome-3-26-1604/100                                               
  snap-gnome\x2d3\x2d26\x2d1604-98.mount                                                   loaded active mounted   /snap/gnome-3-26-1604/98                                                
  snap-gnome\x2d3\x2d28\x2d1804-128.mount                                                  loaded active mounted   /snap/gnome-3-28-1804/128                                               
  snap-gnome\x2d3\x2d28\x2d1804-145.mount                                                  loaded active mounted   /snap/gnome-3-28-1804/145                                               
  snap-gnome\x2d3\x2d34\x2d1804-60.mount                                                   loaded active mounted   /snap/gnome-3-34-1804/60                                                
  snap-gnome\x2d3\x2d34\x2d1804-66.mount                                                   loaded active mounted   /snap/gnome-3-34-1804/66                                                
  snap-gnome\x2dsystem\x2dmonitor-145.mount                                                loaded active mounted   /snap/gnome-system-monitor/145                                          
  snap-gnome\x2dsystem\x2dmonitor-148.mount                                                loaded active mounted   /snap/gnome-system-monitor/148                                          
  snap-gtk\x2dcommon\x2dthemes-1513.mount                                                  loaded active mounted   /snap/gtk-common-themes/1513                                            
  snap-gtk\x2dcommon\x2dthemes-1514.mount                                                  loaded active mounted   /snap/gtk-common-themes/1514                                            
  snap-snap\x2dstore-498.mount                                                             loaded active mounted   /snap/snap-store/498                                                    
  snap-snap\x2dstore-518.mount                                                             loaded active mounted   /snap/snap-store/518                                                    
  sys-fs-fuse-connections.mount                                                            loaded active mounted   /sys/fs/fuse/connections                                                
  sys-kernel-config.mount                                                                  loaded active mounted   /sys/kernel/config                                                      
  sys-kernel-debug.mount                                                                   loaded active mounted   /sys/kernel/debug                                                       
  sys-kernel-tracing.mount                                                                 loaded active mounted   /sys/kernel/tracing                                                     
  ubuntu-report.path                                                                       loaded active waiting   Pending report trigger for Ubuntu Report                                
  init.scope                                                                               loaded active running   System and Service Manager                                              
  podman-116589.scope                                                                      loaded active running   podman-116589.scope                                                     
  podman-pause.scope                                                                       loaded active running   podman-pause.scope                                                      
  dbus.service                                                                             loaded active running   D-Bus User Message Bus                                                  
  gvfs-afc-volume-monitor.service                                                          loaded active running   Virtual filesystem service - Apple File Conduit monitor                 
  gvfs-daemon.service                                                                      loaded active running   Virtual filesystem service                                              
  gvfs-goa-volume-monitor.service                                                          loaded active running   Virtual filesystem service - GNOME Online Accounts monitor              
  gvfs-gphoto2-volume-monitor.service                                                      loaded active running   Virtual filesystem service - digital camera monitor                     
  gvfs-mtp-volume-monitor.service                                                          loaded active running   Virtual filesystem service - Media Transfer Protocol monitor            
  gvfs-udisks2-volume-monitor.service                                                      loaded active running   Virtual filesystem service - disk device monitor                        
● [email protected]                                                              loaded failed failed    Podman slurm-slurmd.service                                             
  slurm-mysql.service                                                                      loaded active running   Podman slurm-mysql.service                                              
  slurm-slurmctld.service                                                                  loaded active running   Podman slurm-slurmctld.service                                          
  slurm-slurmdbd.service                                                                   loaded active running   Podman slurm-slurmdbd.service                                           
  tracker-miner-fs.service                                                                 loaded active running   Tracker file system data miner                                          
  -.slice                                                                                  loaded active active    Root Slice                                                              
  slurm\x2dcomputenode.slice                                                               loaded active active    slurm\x2dcomputenode.slice                                              
  user.slice                                                                               loaded active active    user.slice                                                              
  dbus.socket                                                                              loaded active running   D-Bus User Message Bus Socket                                           
  dirmngr.socket                                                                           loaded active listening GnuPG network certificate management daemon                             
  gpg-agent-browser.socket                                                                 loaded active listening GnuPG cryptographic agent and passphrase cache (access for web browsers)
  gpg-agent-extra.socket                                                                   loaded active listening GnuPG cryptographic agent and passphrase cache (restricted)             
  gpg-agent-ssh.socket                                                                     loaded active listening GnuPG cryptographic agent (ssh-agent emulation)                         
  gpg-agent.socket                                                                         loaded active listening GnuPG cryptographic agent and passphrase cache                          
  pk-debconf-helper.socket                                                                 loaded active listening debconf communication socket                                            
  pulseaudio.socket                                                                        loaded active listening Sound System                                                            
  snapd.session-agent.socket                                                               loaded active listening REST API socket for snapd user session agent                            
  swapfile.swap                                                                            loaded active active    /swapfile                                                               
  basic.target                                                                             loaded active active    Basic System                                                            
  default.target                                                                           loaded active active    Main User Target                                                        
  paths.target                                                                             loaded active active    Paths                                                                   
  sockets.target                                                                           loaded active active    Sockets                                                                 
  timers.target                                                                            loaded active active    Timers                                                                  

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

146 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
esjolund@ubuntu:~$ 

systemctl --user cat 'slurm-*'

esjolund@ubuntu:~$ systemctl --user cat  'slurm-*'
# /home/esjolund/.config/systemd/user/slurm-create-datadir.service
[Unit]

Description=Podman slurm-create-datadir.service
Wants=network.target
After=network-online.target
ConditionPathIsDirectory=!%S/slurm-container-cluster/var_lib_mysql

[Service]
Type=oneshot
RemainAfterExit=yes
Environment=PODMAN_SYSTEMD_UNIT=%n
StateDirectory=slurm-container-cluster
StateDirectoryMode=0700
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/etc_munge
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/etc_slurm
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/extra-containerimages
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/slurm_jobdir
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/var_lib_mysql
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/var_run_mysqld
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/var_log_slurmdbd
ExecStart=/bin/mkdir -p %S/slurm-container-cluster/var_log_slurmctld
ExecStart=podman unshare /bin/chmod 700 %S/slurm-container-cluster/etc_munge
ExecStart=podman unshare /bin/chown 993:992 %S/slurm-container-cluster/etc_munge

ExecStart=podman unshare /bin/chmod 700 %S/slurm-container-cluster/var_log_slurmdbd
ExecStart=podman unshare /bin/chmod 700 %S/slurm-container-cluster/var_log_slurmctld
ExecStart=podman unshare /bin/chown 992:991 %S/slurm-container-cluster/var_log_slurmdbd
ExecStart=podman unshare /bin/chown 992:991 %S/slurm-container-cluster/var_log_slurmctld

KillMode=control-group

[Install]
WantedBy=multi-user.target

# /home/esjolund/.config/systemd/user/slurm-mysql.service
[Unit]
Description=Podman slurm-mysql.service
Wants=network.target
After=network-online.target

Wants=slurm-create-datadir.service
After=slurm-create-datadir.service

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
StateDirectory=slurm-container-cluster
StateDirectoryMode=0700
ExecStartPre=/bin/mkdir -p %S/slurm-container-cluster/var_run_mysqld
ExecStartPre=/usr/bin/podman unshare chown 999:999 %S/slurm-container-cluster/var_run_mysqld
ExecStartPre=/usr/bin/podman unshare chmod 777 %S/slurm-container-cluster/var_run_mysqld

ExecStartPre=/bin/rm -f %t/slurm-mysql.pid %t/slurm-mysql.ctr-id

ExecStart=/usr/bin/podman run --cgroups=no-conmon \
                              --cidfile %t/slurm-mysql.ctr-id \
                              --conmon-pidfile %t/slurm-mysql.pid \
                              --detach \
                              --name mysql \
                              --replace \
                              --volume=%S/slurm-container-cluster/var_run_mysqld:/var/run/mysqld:z \
                              --volume=%S/slurm-container-cluster/var_lib_mysql:/var/lib/mysql:Z \
                              -e MYSQL_DATABASE=slurm_acct_db \
                              -e MYSQL_PASSWORD=password \
                              -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
                              -e MYSQL_USER=slurm \
                              localhost/mysql-with-norouter

ExecStop=/usr/bin/podman stop --cidfile %t/slurm-mysql.ctr-id \
                               --ignore \
                               --time 10
ExecStopPost=/usr/bin/podman rm --cidfile %t/slurm-mysql.ctr-id \
                                --force \
                                --ignore
PIDFile=%t/slurm-mysql.pid
KillMode=control-group
Type=forking

[Install]
WantedBy=multi-user.target default.target

# /home/esjolund/.config/systemd/user/slurm-slurmd@.service
[Unit]
Description=Podman slurm-slurmd.service
Wants=network.target
After=network-online.target

Wants=slurm-create-datadir.service
After=slurm-create-datadir.service

# AssertFileNotEmpty=%S/slurm-container-cluster/etc_munge/munge.key (other subuid UID)
AssertFileNotEmpty=%S/slurm-container-cluster/etc_slurm/slurm.conf
AssertFileNotEmpty=%S/slurm-container-cluster/etc_slurm/slurmdbd.conf

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
#Restart=on-failure
Restart=no

StateDirectory=slurm-container-cluster
StateDirectoryMode=0700
ExecStartPre=rm -f %t/slurm-slurmd%i.pid %t/slurm-slurmd%i.ctr-id
ExecStartPre=mkdir -p %S/slurm-container-cluster/computenode/%i/var_lib_containers
ExecStartPre=mkdir -p %S/slurm-container-cluster/computenode/%i/var_log_slurm
ExecStartPre=podman unshare chmod 700 %S/slurm-container-cluster/computenode/%i/var_log_slurm
ExecStartPre=podman unshare chown 992:991 %S/slurm-container-cluster/computenode/%i/var_log_slurm
ExecStartPre=podman unshare rm -rf %S/slurm-container-cluster/computenode/%i/var_log_munge
ExecStartPre=podman unshare mkdir -p %S/slurm-container-cluster/computenode/%i/var_log_munge
ExecStartPre=/usr/bin/podman unshare chmod 755 %S/slurm-container-cluster/computenode/%i/var_log_munge
ExecStartPre=/usr/bin/podman unshare chown 993:992 %S/slurm-container-cluster/computenode/%i/var_log_munge
ExecStart=/usr/bin/podman run --cgroups=no-conmon \
                              --cidfile %t/slurm-slurmd%i.ctr-id \
                              --conmon-pidfile %t/slurm-slurmd%i.pid \
                              --detach \
                              --name c%i \
                              --hostname c%i \
                              --privileged=true \
                              --replace \
                              --security-opt label=disable \
                              --volume=%S/slurm-container-cluster/adjusting_ports_for_norouter/slurmd:/etc/slurm/adjusting_ports_for_norouter:z \
                              --volume=%S/slurm-container-cluster/computenode/%i/var_lib_containers:/var/lib/containers:Z \
                              --volume=%S/slurm-container-cluster/computenode/%i/var_log_munge:/var/log/munge:z \
                              --volume=%S/slurm-container-cluster/computenode/%i/var_log_slurm:/var/log/slurm:Z \
                              --volume=%S/slurm-container-cluster/etc_munge:/etc/munge:z \
                              --volume=%S/slurm-container-cluster/etc_slurm:/etc/slurm:z \
                              --volume=%S/slurm-container-cluster/extra-containerimages:/var/lib/shared:ro \
                              --volume=%S/slurm-container-cluster/slurm_jobdir:/data:slave \
                              localhost/slurm-with-norouter slurmd
ExecStop=/usr/bin/podman stop --cidfile %t/slurm-slurmd%i.ctr-id \
                              --ignore \
                              --time 10
ExecStopPost=/usr/bin/podman rm --cidfile %t/slurm-slurmd%i.ctr-id \
                                --force \
                                --ignore
PIDFile=%t/slurm-slurmd%i.pid
KillMode=control-group
Type=forking

[Install]
WantedBy=multi-user.target default.target

# /home/esjolund/.config/systemd/user/slurm-slurmdbd.service
[Unit]
Description=Podman slurm-slurmdbd.service
Wants=network.target
After=network-online.target

Wants=slurm-create-datadir.service
After=slurm-create-datadir.service

Wants=slurm-mysql.service
After=slurm-mysql.service

# AssertFileNotEmpty=%S/slurm-container-cluster/etc_munge/munge.key (other subuid UID)
AssertFileNotEmpty=%S/slurm-container-cluster/etc_slurm/slurm.conf
AssertFileNotEmpty=%S/slurm-container-cluster/etc_slurm/slurmdbd.conf

#BindsTo=slurm-pod.service
#After=slurm-pod.service 

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
StateDirectory=slurm-container-cluster
StateDirectoryMode=0700
ExecStartPre=/bin/rm -f %t/slurm-slurmdbd.pid %t/slurm-slurmdbd.ctr-id
ExecStart=/usr/bin/podman run --cgroups=no-conmon \
                              --cidfile %t/slurm-slurmdbd.ctr-id \
                              --conmon-pidfile %t/slurm-slurmdbd.pid \
                              --detach \
                              --name slurmdbd \
                              --replace \
                              --volume=%S/slurm-container-cluster/adjusting_ports_for_norouter/slurmdbd:/etc/slurm/adjusting_ports_for_norouter:z \
                              --volume=%S/slurm-container-cluster/etc_munge:/etc/munge:z \
                              --volume=%S/slurm-container-cluster/etc_slurm:/etc/slurm:z \
                              --volume=%S/slurm-container-cluster/var_log_slurmdbd:/var/log/slurm:Z \
                              --volume=%S/slurm-container-cluster/var_run_mysqld:/var/run/mysqld:z \
                              --env MYSQL_UNIX_PORT=/var/run/mysqld/mysqld.sock \
                              localhost/slurm-with-norouter slurmdbd
ExecStop=/usr/bin/podman stop --cidfile %t/slurm-slurmdbd.ctr-id \
                              --ignore \
                              --time 10
ExecStopPost=/usr/bin/podman rm --cidfile %t/slurm-slurmdbd.ctr-id \
                             --force \
                             --ignore
PIDFile=%t/slurm-slurmdbd.pid
KillMode=control-group
Type=forking

[Install]
WantedBy=multi-user.target default.target

# /home/esjolund/.config/systemd/user/slurm-slurmctld.service
No files found for slurm-create-munge-key.service.
No files found for slurm-copy-default-slurm-configuration.service.
[Unit]
Description=Podman slurm-slurmctld.service
Wants=network.target
After=network-online.target

Wants=slurm-copy-default-slurm-configuration.service
After=slurm-copy-default-slurm-configuration.service

Wants=slurm-create-datadir.service
After=slurm-create-datadir.service

Wants=slurm-create-munge-key.service
After=slurm-create-munge-key.service

Wants=slurm-mysql.service
After=slurm-mysql.service

#Wants=slurm-container-cluster-network-create.service
#After=slurm-container-cluster-network-create.service

Wants=slurm-slurmdbd.service
After=slurm-slurmdbd.service

# AssertFileNotEmpty=%S/slurm-container-cluster/etc_munge/munge.key (other subuid UID)
AssertFileNotEmpty=%S/slurm-container-cluster/etc_slurm/slurm.conf
AssertFileNotEmpty=%S/slurm-container-cluster/etc_slurm/slurmdbd.conf

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
StateDirectory=slurm-container-cluster
StateDirectoryMode=0700
ExecStartPre=/bin/rm -f %t/slurm-slurmctld.pid %t/slurm-slurmctld.ctr-id

ExecStart=/usr/bin/podman run \
                              --cgroups=no-conmon \
                              --cidfile %t/slurm-slurmctld.ctr-id \
                              --conmon-pidfile %t/slurm-slurmctld.pid \
                              --detach \
                              --hostname slurmctld \
                              --name slurmctld \
                              --privileged \
                              --replace \
                              --ulimit host \
                              --volume /dev/fuse:/dev/fuse:rw \
                              --volume=%S/slurm-container-cluster/adjusting_ports_for_norouter/slurmctld:/etc/slurm/adjusting_ports_for_norouter:z \
                              --volume=%S/slurm-container-cluster/etc_munge:/etc/munge:z \
                              --volume=%S/slurm-container-cluster/etc_slurm:/etc/slurm:z \
                              --volume=%S/slurm-container-cluster/extra-containerimages:/var/lib/shared:ro \
                              --volume=%S/slurm-container-cluster/slurm_jobdir:/data:z \
                              --volume=%S/slurm-container-cluster/var_log_slurmctld:/var/log/slurm:Z \
                              localhost/slurm-with-norouter slurmctld
ExecStop=/usr/bin/podman stop --cidfile %t/slurm-slurmctld.ctr-id \
                              --ignore \
                              --time 10
ExecStopPost=/usr/bin/podman rm --cidfile %t/slurm-slurmctld.ctr-id \
                                --force \
                                --ignore
PIDFile=%t/slurm-slurmctld.pid
KillMode=control-group
Type=forking

[Install]
WantedBy=multi-user.target default.target

esjolund@ubuntu:~$

Output of podman version:

Nothing, the command podman version does not return

The command sudo podman version gives

Version:      3.0.0-rc1
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

Nothing, the command podman info --debug does not return

The command sudo podman info --debug | grep -v hostname: gives

host:
  arch: amd64
  buildahVersion: 1.19.2
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.24, commit: '
  cpus: 4
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.0-64-generic
  linkmode: dynamic
  memFree: 2604425216
  memTotal: 16666775552
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.16.3-fd58-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2146430976
  swapTotal: 2147479552
  uptime: 11h 9m 48.26s (Approximately 0.46 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.0.0-rc1

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 3.0.0~0.rc1 amd64 [installed]
podman/unknown 3.0.0~0.rc1 arm64
podman/unknown 3.0.0~0.rc1 armhf
podman/unknown 3.0.0~0.rc1 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 24, 2021
@eriksjolund
Contributor Author

The SSH connection was lost because I had to close the lid of my laptop for a few hours.
I logged in to the other computer again with SSH and tried podman system reset once more, but this time with --log-level=debug:

esjolund@ubuntu:~$ podman images
REPOSITORY                     TAG     IMAGE ID      CREATED       SIZE
localhost/slurm-with-norouter  latest  ba890554b267  4 weeks ago   2.55 GB
<none>                         <none>  3d5e89df0988  5 weeks ago   222 MB
localhost/mysql-with-norouter  latest  5fa78203c8f7  2 months ago  490 MB
docker.io/library/centos       8       0d120b6ccaa8  5 months ago  222 MB
esjolund@ubuntu:~$ podman --log-level=debug  system reset
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called reset.PersistentPreRunE(podman --log-level=debug system reset) 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.33.1 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/esjolund/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/esjolund/.local/share/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/esjolund/.config/cni/net.d}} 
DEBU[0000] Reading configuration file "/etc/containers/containers.conf" 
DEBU[0000] Merged system config "/etc/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.33.1 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/esjolund/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/esjolund/.local/share/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/esjolund/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"   
DEBU[0000] Initializing boltdb state at /home/esjolund/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/esjolund/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/esjolund/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/esjolund/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/runc"                
INFO[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Setting parallel job count to 13             
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called reset.PersistentPreRunE(podman --log-level=debug system reset) 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.33.1 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/esjolund/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/esjolund/.local/share/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/esjolund/.config/cni/net.d}} 
DEBU[0000] Reading configuration file "/etc/containers/containers.conf" 
DEBU[0000] Merged system config "/etc/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.33.1 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/esjolund/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/esjolund/.local/share/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/esjolund/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"   
DEBU[0000] Initializing boltdb state at /home/esjolund/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver vfs                       
DEBU[0000] Using graph root /home/esjolund/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/esjolund/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/esjolund/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "vfs"   
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/runc"                
INFO[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Setting parallel job count to 13             

WARNING! This will remove:
        - all containers
        - all pods
        - all images
        - all build cache
Are you sure you want to continue? [y/N] y
DEBU[0002] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0002] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.33.1 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/esjolund/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/esjolund/.local/share/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/esjolund/.config/cni/net.d}} 
DEBU[0002] Reading configuration file "/etc/containers/containers.conf" 
DEBU[0002] Merged system config "/etc/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.33.1 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:true Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NetworkCmdOptions:[] NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/esjolund/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/esjolund/.local/share/containers/storage/volumes VolumePlugins:map[]} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/esjolund/.config/cni/net.d}} 
DEBU[0002] Using conmon: "/usr/libexec/podman/conmon"   
DEBU[0002] Initializing boltdb state at /home/esjolund/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0002] Using graph driver vfs                       
DEBU[0002] Using graph root /home/esjolund/.local/share/containers/storage 
DEBU[0002] Using run root /run/user/1000/containers     
DEBU[0002] Using static dir /home/esjolund/.local/share/containers/storage/libpod 
DEBU[0002] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0002] Using volume path /home/esjolund/.local/share/containers/storage/volumes 
DEBU[0002] Set libpod namespace to ""                   
DEBU[0002] Initializing event backend journald          
DEBU[0002] using runtime "/usr/bin/crun"                
DEBU[0002] using runtime "/usr/bin/runc"                
INFO[0002] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0002] Removing container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
DEBU[0002] Stopping ctr 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 (timeout 10) 
DEBU[0002] Stopping container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 (PID 176958) 
DEBU[0002] Sending signal 15 to container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
INFO[0012] Timed out stopping container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7, resorting to SIGKILL: given PIDs did not die within timeout 
DEBU[0012] Sending signal 9 to container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
DEBU[0012] Removing all exec sessions for container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
DEBU[0012] Cleaning up container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
DEBU[0012] Tearing down network namespace at /run/user/1000/netns/cni-e6ad9684-a37a-ee76-b8e6-e816a003d6c8 for container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
DEBU[0012] Successfully cleaned up container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 
ERRO[0012] Storage for container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 has been removed 
DEBU[0012] Container 91ec5928306e7df71c8ac68ce59b40e9ade1c1954899384f452a4610ca7d9eb7 storage is already unmounted, skipping... 
DEBU[0013] Removing container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
DEBU[0013] Stopping ctr 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 (timeout 10) 
DEBU[0013] Stopping container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 (PID 177161) 
DEBU[0013] Sending signal 15 to container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
INFO[0023] Timed out stopping container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032, resorting to SIGKILL: given PIDs did not die within timeout 
DEBU[0023] Sending signal 9 to container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
DEBU[0023] Removing all exec sessions for container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
DEBU[0023] Cleaning up container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
DEBU[0023] Tearing down network namespace at /run/user/1000/netns/cni-328bca5d-75b6-4a99-cd94-f8dfe3aaaa6e for container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
DEBU[0023] Successfully cleaned up container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 
DEBU[0023] unmounted container "985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032" 
DEBU[0024] Container 985aaab11311df27c35529c687c45a0b54e899a1b5d31c37d4d58a8b6c468032 storage is already unmounted, skipping... 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566" 
DEBU[0024] exporting opaque data as blob "sha256:0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566" 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237" 
DEBU[0024] exporting opaque data as blob "sha256:6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237" 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb" 
DEBU[0024] exporting opaque data as blob "sha256:3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb" 
ERRO[0024] Error removing image 0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566: Image used by 5149928c38d3597d85f797f0b25c894cdd12b16cc8e157214e15d4d3d6f552e5: image is in use by a container 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237" 
DEBU[0024] exporting opaque data as blob "sha256:6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237" 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566" 
DEBU[0024] exporting opaque data as blob "sha256:0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566" 
ERRO[0024] Error removing image 6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237: Image used by 5414c10e95c6898bf76549dd24e25daff8d981e97a38ecdab81af275943ed514: image is in use by a container 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb" 
DEBU[0024] exporting opaque data as blob "sha256:3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb" 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566" 
DEBU[0024] exporting opaque data as blob "sha256:0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566" 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237" 
DEBU[0024] exporting opaque data as blob "sha256:6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237" 
ERRO[0024] Error removing image 3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb: Image used by 1b75220d53ef2f34a10ca2f4f60a0efc74a8a47fdc6f0506bef5117b76190f93: image is in use by a container 
DEBU[0024] parsed reference into "[vfs@/home/esjolund/.local/share/containers/storage+/run/user/1000/containers]@ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab" 
DEBU[0024] exporting opaque data as blob "sha256:ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab" 
ERRO[0024] Error removing image ba890554b267616e8dbbe10198ec6d51294af4449517a5bd3e3d3ef1cb79e3ab: Image used by 7b82f227678d9417b79265af03f7208e42f6c643aca0f29969d9b52e6c0d6cf1: image is in use by a container 

Okay, the same thing is happening as last time. The command does not return.

I didn't mention it in my previous comment, but I also saw a high CPU load back then.
This time, too, there is a podman process that consumes a lot of CPU:

top - 17:03:57 up 18:39,  2 users,  load average: 1,08, 1,19, 0,98
Tasks: 230 total,   1 running, 214 sleeping,   0 stopped,  15 zombie
%Cpu(s): 15,8 us, 13,6 sy,  0,0 ni, 65,9 id,  0,1 wa,  0,0 hi,  4,7 si,  0,0 st
MiB Mem :  15894,7 total,   4654,7 free,    914,8 used,  10325,2 buff/cache
MiB Swap:   2048,0 total,   2047,0 free,      1,0 used.  14680,8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                                                
 177919 esjolund  20   0 1566800  57112  29060 S 121,9   0,4  22:21.87 podman    

PID 177919 is the process podman --log-level=debug system reset

esjolund@ubuntu:~$ cat /proc/177919/cmdline | tr '\0' '\n'
podman
--log-level=debug
system
reset
esjolund@ubuntu:~$ 

Check open file handles

esjolund@ubuntu:~$ ls -l /proc/177919/fd
ls: cannot access '/proc/177919/fd/10': No such file or directory
total 0
lrwx------ 1 esjolund esjolund 64 jan 24 16:45 0 -> /dev/pts/0
lrwx------ 1 esjolund esjolund 64 jan 24 16:45 1 -> /dev/pts/0
l????????? ? ?        ?         ?            ? 10
lrwx------ 1 esjolund esjolund 64 jan 24 16:45 2 -> /dev/pts/0
lrwx------ 1 esjolund esjolund 64 jan 24 16:45 3 -> 'socket:[522212]'
lrwx------ 1 esjolund esjolund 64 jan 24 16:47 4 -> 'anon_inode:[eventpoll]'
lr-x------ 1 esjolund esjolund 64 jan 24 16:47 5 -> 'pipe:[524072]'
l-wx------ 1 esjolund esjolund 64 jan 24 16:47 6 -> 'pipe:[524072]'
lrwx------ 1 esjolund esjolund 64 jan 24 16:47 7 -> /home/esjolund/.local/share/containers/storage/storage.lock
lrwx------ 1 esjolund esjolund 64 jan 24 16:47 8 -> /home/esjolund/.local/share/containers/storage/vfs-layers/layers.lock
lrwx------ 1 esjolund esjolund 64 jan 24 16:47 9 -> /run/user/1000/containers/vfs-layers/mountpoints.lock
esjolund@ubuntu:~$ for i in `seq 0 9 `; do echo ; echo fd=$i; cat /proc/177919/fdinfo/$i ;done

fd=0
pos:	0
flags:	02
mnt_id:	28

fd=1
pos:	0
flags:	02
mnt_id:	28

fd=2
pos:	0
flags:	02
mnt_id:	28

fd=3
pos:	0
flags:	02004002
mnt_id:	10

fd=4
pos:	0
flags:	02000002
mnt_id:	15
tfd:        3 events: 8000201d data:     7f64ac73bbf8  pos:0 ino:7f7e4 sdev:9
tfd:        5 events:       19 data:     564fbb7a10f8  pos:0 ino:7ff28 sdev:d

fd=5
pos:	0
flags:	02004000
mnt_id:	14

fd=6
pos:	0
flags:	02004001
mnt_id:	14

fd=7
pos:	0
flags:	02100002
mnt_id:	1087
lock:	1: POSIX  ADVISORY  WRITE 177919 08:02:13501691 0 EOF

fd=8
pos:	64
flags:	02100002
mnt_id:	1087
lock:	1: POSIX  ADVISORY  WRITE 177919 08:02:13502174 0 EOF

fd=9
pos:	64
flags:	02100002
mnt_id:	1095
lock:	1: POSIX  ADVISORY  WRITE 177919 00:35:74 0 EOF
esjolund@ubuntu:~$ 
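
One more diagnostic that can help when podman hangs like this: podman is a Go binary, and the Go runtime by default prints a full goroutine stack dump when it receives SIGQUIT (unless the program installs its own handler for that signal; note the dump also terminates the process). A sketch, reusing the PID from above:

kill -QUIT 177919
# the goroutine stacks are written to the stderr of the hanging process,
# i.e. the terminal where "podman --log-level=debug system reset" was started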

@mheon
Member

mheon commented Jan 24, 2021

Are the systemd units set to attempt to restart on failure? That could very well cause this.

podman system reset does not grab and hold the Alive lock right now, which means that other Podman processes can race against it. If we changed it to act more like podman system renumber, this would no longer be a risk.
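
Until reset holds that lock, one workaround is to stop the auto-restarting units before resetting, so systemd cannot race new containers into existence mid-reset. A rough sketch, assuming the slurm-* unit names from this report:

# stop the Podman-backed user services so systemd cannot restart containers during the reset
systemctl --user stop 'slurm-*.service'
# make sure no podman or conmon processes are left
pgrep podman
pgrep conmon
# then reset without other Podman processes racing against it
podman system reset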

@eriksjolund
Contributor Author

Yes, I see a few Restart=on-failure entries in the ~/.config/systemd/user/*.service files:

esjolund@ubuntu:~$ grep ^Restart= .config/systemd/user/slurm-*.service
.config/systemd/user/slurm-slurmd@.service:Restart=no
.config/systemd/user/slurm-mysql.service:Restart=on-failure
.config/systemd/user/slurm-slurmctld.service:Restart=on-failure
.config/systemd/user/slurm-slurmdbd.service:Restart=on-failure
esjolund@ubuntu:~$ 

I disabled all the systemd user services, rebooted the computer, and tested once more:

esjolund@ubuntu:~$ pgrep podman
esjolund@ubuntu:~$ pgrep conmon
esjolund@ubuntu:~$ podman images
REPOSITORY                     TAG     IMAGE ID      CREATED       SIZE
localhost/slurm-with-norouter  latest  ba890554b267  4 weeks ago   2.55 GB
<none>                         <none>  3d5e89df0988  5 weeks ago   222 MB
docker.io/library/centos       8       0d120b6ccaa8  5 months ago  222 MB
esjolund@ubuntu:~$ podman system reset

WARNING! This will remove:
        - all containers
        - all pods
        - all images
        - all build cache
Are you sure you want to continue? [y/N] y
ERRO[0002] Error removing image 0d120b6ccaa8c5e149176798b3501d4dd1885f961922497cd0abef155c869566: Image used by 5149928c38d3597d85f797f0b25c894cdd12b16cc8e157214e15d4d3d6f552e5: image is in use by a container 
ERRO[0002] Error removing image 6b7e696e96cf673de3a9dd8436d5441bf8fd33518a2722982a9db829c9d6f237: Image used by 5414c10e95c6898bf76549dd24e25daff8d981e97a38ecdab81af275943ed514: image is in use by a container 
ERRO[0002] Error removing image 3d5e89df098859e4e4a054e9c9e5a3cd876aeaea4d81c74f55e9f132619d54bb: Image used by 1b75220d53ef2f34a10ca2f4f60a0efc74a8a47fdc6f0506bef5117b76190f93: image is in use by a container 
esjolund@ubuntu:~$ 

This time it worked!

@mheon
Member

mheon commented Jan 24, 2021

I'm going to go ahead and re-open - we really need to make the system reset command race-safe.

@mheon mheon reopened this Jan 24, 2021
@edsantiago edsantiago added the parkinglot Not actively worked on, but should remain open label Feb 3, 2021
@github-actions

github-actions bot commented Mar 6, 2021

A friendly reminder that this issue had no activity for 30 days.

@rhatdan rhatdan removed their assignment Jun 16, 2021
@github-actions

A friendly reminder that this issue had no activity for 30 days.

@vrothberg
Member

@mheon, you seem to have a good idea of what needs to be done. Could you write a brain dump?

@mheon
Member

mheon commented Aug 6, 2021

Alright. There are two core issues with system reset right now.

  1. Resetting the system is currently a method of Libpod's Runtime. This means that you require a valid runtime to proceed - if any misconfiguration of the system prevents a Runtime from being spawned (usually a storage misconfiguration in the database), the podman system reset command is nonfunctional. Given the intent of the command is to factory-reset the system and fix this type of problem, it is very bad that system reset does not work at the exact time you most want to use it.
  2. The Reset method does not grab the Alive lock, as described here. Libpod's Alive file is used to determine whether the system has recently restarted: we put a file on tmpfs the first time Libpod runs, and if we no longer see that file we assume the system restarted. The Alive lock protects access to the alive file, so if we did restart, only one Podman process actually performs the post-reboot cleanup tasks. Since every Podman process grabs the Alive lock immediately after launching to check whether a restart occurred, holding onto this lock inhibits other Podman processes from starting, which would let us perform a storage reset in peace, without risk of other Podman processes using the files and directories we are in the process of removing.

As such, what needs to happen: the current method for resetting storage needs to move from a method on a Libpod Runtime into the NewRuntime() call itself, similar to how podman system renumber and podman system migrate work right now - a special option is passed to NewRuntime that invokes the alternative behavior. This avoids scenario 1: if we know we're resetting, configuration errors that would normally be fatal can be treated as non-fatal. Further, NewRuntime() already holds the alive lock, so we can add a function call inside that critical section to perform the reset if we were so configured.
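
For illustration only, here is a minimal Go sketch of that flow. Everything in it is a hypothetical stand-in (the paths, the flock-based lock, and the reset flag are not Libpod's real API); it only shows the shape of a reset performed during runtime construction while the alive lock is held:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// newRuntime sketches the proposed flow: runtime construction that, when a
// reset is requested, wipes storage while still holding the "alive" lock so
// no other process can start using the directories being removed.
func newRuntime(tmpDir, storageRoot string, reset bool) error {
	// The alive lock, sketched here as an flock(2) on a file in the runtime tmp dir.
	lockFile, err := os.OpenFile(filepath.Join(tmpDir, "alive.lck"), os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return fmt.Errorf("opening alive lock: %w", err)
	}
	defer lockFile.Close()
	if err := syscall.Flock(int(lockFile.Fd()), syscall.LOCK_EX); err != nil {
		return fmt.Errorf("taking alive lock: %w", err)
	}
	defer syscall.Flock(int(lockFile.Fd()), syscall.LOCK_UN)

	// Reboot detection: the "alive" sentinel lives on tmpfs, so if it is
	// missing, the system has restarted since the last run.
	alive := filepath.Join(tmpDir, "alive")
	if _, err := os.Stat(alive); os.IsNotExist(err) {
		// post-reboot refresh tasks would run here
	}

	if reset {
		// Still inside the alive-lock critical section: no other process can
		// get past its own startup check while storage is being removed, and
		// configuration errors that would normally be fatal can be tolerated.
		return os.RemoveAll(storageRoot)
	}

	// Normal initialization: recreate the sentinel and continue.
	f, err := os.Create(alive)
	if err != nil {
		return err
	}
	return f.Close()
}

func main() {
	tmpDir := "/tmp/reset-sketch"
	if err := os.MkdirAll(tmpDir, 0o700); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := newRuntime(tmpDir, filepath.Join(tmpDir, "storage"), true); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Because the wipe happens before any of the normal, failure-prone initialization, a broken storage configuration can no longer prevent podman system reset from running.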

vrothberg (Member) commented

Thanks a lot for the great summary, @mheon!

github-actions bot commented Sep 6, 2021

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Sep 7, 2021

@vrothberg @eriksjolund @mheon What should we do with this issue now?

mheon (Member) commented Sep 7, 2021

Still needs to be worked on per my comment above. Might want to get a card written for it?

github-actions bot commented Oct 8, 2021

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Oct 8, 2021

@mheon Did you ever produce a card?

github-actions bot commented

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Dec 20, 2021

@mheon Ping again.

mheon (Member) commented Dec 20, 2021

github-actions bot commented

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Jan 24, 2022

@cdoern since you are in this area now, could you look at this?

cdoern (Contributor) commented Jan 26, 2022

sure, I can look at this @rhatdan

github-actions bot commented

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Mar 26, 2022

@mheon I think it is time you work on this one, since you understand it the best.

mheon (Member) commented Mar 26, 2022

Ack, sure.

mheon self-assigned this and unassigned cdoern Mar 26, 2022
github-actions bot commented May 6, 2022

A friendly reminder that this issue had no activity for 30 days.

mheon (Member) commented Jun 2, 2022

On this one now

mheon added a commit to mheon/libpod that referenced this issue Jun 3, 2022
Firstly, reset is now managed by the runtime itself as a part of
initialization. This ensures that it can be used even with
runtimes that would otherwise fail to be created - most notably,
when the user has changed a core path
(runroot/root/tmpdir/staticdir).

Secondly, we now attempt a best-effort removal even if the store
completely fails to be configured.

Third, we now hold the alive lock for the entire reset operation.
This ensures that no other Podman process can start while we are
running a system reset, and removes any possibility of a race
where a user tries to create containers or pull images while we
are trying to perform a reset.

[NO NEW TESTS NEEDED] we do not test reset last I checked.

Fixes containers#9075

Signed-off-by: Matthew Heon <[email protected]>
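
To illustrate the "best-effort removal" part of that commit message (again a hypothetical sketch, not the actual patch), the idea is simply to attempt every removal and report failures instead of aborting on the first error:

```go
package main

import (
	"fmt"
	"os"
)

// bestEffortReset removes each path independently, logging failures but never
// aborting, so a partially broken storage configuration cannot stop the reset
// from cleaning up whatever it still can. The paths are illustrative only.
func bestEffortReset(paths []string) {
	for _, p := range paths {
		if err := os.RemoveAll(p); err != nil {
			fmt.Fprintf(os.Stderr, "error removing %s: %v\n", p, err)
		}
	}
}

func main() {
	bestEffortReset([]string{
		"/tmp/reset-sketch/storage",
		"/tmp/reset-sketch/run",
		"/tmp/reset-sketch/volumes",
	})
}
```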
github-actions bot added the locked - please file new issue/PR label Sep 20, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023