First time run: Error starting kubernetes (v0.4.1) #562
Comments
Getting the same error.
Could you please verify that this is not a duplicate of #535:

$ id
uid=501(jan) gid=20(staff) groups=20(staff), [...]

If this is not the problem, then please attach the lima log files:

$ ls ~/Library/State/rancher-desktop/lima/rancher-desktop/*.log
/Users/jan/Library/State/rancher-desktop/lima/rancher-desktop/ha.stderr.log
/Users/jan/Library/State/rancher-desktop/lima/rancher-desktop/ha.stdout.log
/Users/jan/Library/State/rancher-desktop/lima/rancher-desktop/serial.log
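If the files are long, the tail of each one is usually enough; for example (adjust the line count as needed):

# Print the last 50 lines of each lima log, with a per-file header.
$ tail -n 50 ~/Library/State/rancher-desktop/lima/rancher-desktop/*.log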
Same issue. My uid is 501.

~/Library/State/rancher-desktop/lima/rancher-desktop/ha.stderr.log:
~/Library/State/rancher-desktop/lima/rancher-desktop/ha.stdout.log:
~/Library/State/rancher-desktop/lima/rancher-desktop/serial.log:
My uid is 503 and this is on a 2019 13" MacBook with a Core i5:

~/Library/State/rancher-desktop/lima/rancher-desktop/ha.stderr.log
~/Library/State/rancher-desktop/lima/rancher-desktop/ha.stdout.log
~/Library/State/rancher-desktop/lima/rancher-desktop/serial.log
Same error and user id as @mooneye14.
@mpptx This looks similar to #532, which we couldn't figure out because the error suddenly went away. That issue has several instructions for running lima and qemu on their own, and for trying different CPU settings. Could you see if any of that applies to your setup?

@mooneye14 Your logs look different, and I'm still working through them, but there is most likely no point in performing the steps from #532. I'll update if I have further ideas...
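For more detail, you can also run the bundled limactl by hand to see its full output. This is just a sketch, assuming Rancher Desktop keeps its lima home under the State directory used throughout this thread:

# Point limactl at Rancher Desktop's lima state, then start the instance.
$ export LIMA_HOME="$HOME/Library/State/rancher-desktop/lima"
$ "/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl" start rancher-desktop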
@mooneye14 Your logs very much look like the error in #535, but your uid is lower, so that issue should not apply.

For the following check you need the socat command installed on the host. Your VM seems to be running, and the SSH daemon is listening on the port, but it rejects your key. Since we can't connect via ssh, connect to the serial console instead:

$ socat stdio ~/Library/State/rancher-desktop/lima/rancher-desktop/serial.sock
root
root
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
lima-rancher-desktop:~# cat /mnt/lima-cidata/user-data
#cloud-config
# vim:syntax=yaml
growpart:
  mode: auto
  devices: ['/']
users:
  - name: "jan"
    uid: "501"
    homedir: "/home/jan.linux"
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFGNCQ9U3k8ErTkhLHJP+8bCgvi56GJshb6q0cdhWKHw jan@mactop
[...]

The key should match the public key stored on the host:

$ cat ~/Library/State/rancher-desktop/lima/_config/user.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFGNCQ9U3k8ErTkhLHJP+8bCgvi56GJshb6q0cdhWKHw jan@mactop

Since it is a public key, it is safe to share it. If you have concerns, you can delete the key files on the host and Rancher Desktop will generate a new one on the next start.

While you are still connected via the serial console, please also check the authorized keys inside the VM:

lima-rancher-desktop:~# cat /home/*/.ssh/authorized_keys
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFGNCQ9U3k8ErTkhLHJP+8bCgvi56GJshb6q0cdhWKHw jan@mactop

Finally, please capture the output of the commands above.
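If you want to compare the two keys mechanically rather than by eye, something like this works on the host (vm_key.txt is a hypothetical file containing the authorized_keys line copied out of the VM):

# Compare the host key with the key captured from the VM.
$ diff ~/Library/State/rancher-desktop/lima/_config/user.pub vm_key.txt && echo "keys match" || echo "keys differ"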
To add to the thread, I'm also seeing this on a fresh install of Big Sur on an M1 Mac. Please let me know if you'd like any logs or further troubleshooting steps carried out. My id is 502; here's the log output:

~/Library/State/rancher-desktop/lima/rancher-desktop/ha.stderr.log:
~/Library/State/rancher-desktop/lima/rancher-desktop/ha.stdout.log
~/Library/State/rancher-desktop/lima/rancher-desktop/serial.log (is empty)
2019 Intel Core i9 MBP
@pulberg Thanks! That shows that the VM is still using the old ISO.
I have applied the updated lima .iso as well
@pulberg Try selecting "Reset Kubernetes" on the "Kubernetes Settings" page. That should delete the old VM and create a new one using the new ISO.
I also found that there appears to be a typo in the code looking for the alpine image name: it references "alpline-lima-…" rather than "alpine-lima-…".
After resetting Kubernetes I get the error pretty much instantly now.
There is also no ISO file under that (misspelled) name.
Yes, there is, but it is inconsequential: the downloaded ISO is renamed to that name during packaging, and will be loaded from that name at runtime. The name can really be anything.
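If you want to double-check what the app bundle actually ships, you can list its resources directory (path taken from the error logs further down; adjust for your install):

# List any ISO images bundled with the app.
$ ls -l "/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/" | grep -i iso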
That's not what I expected. Let's start over: Make sure RD is not running, and then delete the whole State directory (it will be recreated automatically when RD runs):
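Assuming the default State location used throughout this thread, that would be something like:

# Removes all Rancher Desktop VM state; it is recreated on the next launch.
$ rm -rf ~/Library/State/rancher-desktop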
Then run RD again, and see if it now works. If it doesn't, and there are still no logs in ~/Library/State/rancher-desktop/lima/rancher-desktop, let me know.
OK, reset everything; RD is running now, mostly... it's been in "Waiting for image manager to be ready" for about 15 minutes now.
@pulberg Can you run kim commands from a terminal? Maybe try kim builder install --force.
Trying the kim command now.
On a Mac, you need to use the --endpoint-addr=127.0.0.1 option.
I'm getting this output. It is trying to connect to AWS even though the command specifies localhost:

$ kim builder install --force --endpoint-addr=127.0.0.1
Having the same error on macOS 12.0 (Monterey beta). I'm seeing the following in the log:

time="2021-09-07T09:32:08+02:00" level=fatal msg="open /Users/lars/Library/State/rancher-desktop/lima: no such file or directory"
Could not parse lima status, assuming machine is unavailable.
time="2021-09-07T09:32:09+02:00" level=fatal msg="open /Users/lars/Library/State/rancher-desktop/lima: no such file or directory"
Could not parse lima status, assuming machine is unavailable.
time="2021-09-07T09:32:13+02:00" level=info msg="Terminal is not available, proceeding without opening an editor"
time="2021-09-07T09:32:13+02:00" level=info msg="Attempting to download the image from \"/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/alpline-lima-v0.1.0-std-3.13.5.iso\""
time="2021-09-07T09:32:13+02:00" level=info msg="Downloaded image from \"/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/alpline-lima-v0.1.0-std-3.13.5.iso\""
time="2021-09-07T09:32:13+02:00" level=fatal msg="failed to run [qemu-img create -f qcow2 /Users/lars/Library/State/rancher-desktop/lima/rancher-desktop/diffdisk 107374182400]: \"dyld[74677]: Library not loaded: @executable_path/../opt/glib/lib/libglib-2.0.0.dylib\\n Referenced from: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/Cellar/qemu/6.0.0/bin/qemu-img\\n Reason: tried: '/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/Cellar/qemu/6.0.0/bin/../opt/glib/lib/libglib-2.0.0.dylib' (no such file), '/usr/lib/libglib-2.0.0.dylib' (no such file)\\n\": signal: abort trap"
Error starting lima: Error: /Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl exited with code 1
at ChildProcess.<anonymous> (/Applications/Rancher Desktop.app/Contents/Resources/app.asar/dist/app/background.js:1:3150)
at ChildProcess.emit (events.js:315:20)
at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12)

What can one do? And thank you!
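In case it helps triage: the "Library not loaded" message means the bundled qemu-img cannot resolve a dylib. One way to inspect which libraries it expects is otool (path copied from the log above):

# List the dynamic libraries qemu-img links against, to spot the missing glib.
$ otool -L "/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/Cellar/qemu/6.0.0/bin/qemu-img"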
@larssb Your error seems to be a duplicate of #518. It will be fixed in the next release, but there is also a workaround documented in #518 (comment).
Thanks @mooneye14, that confirms the problem. This will be fixed by a new ISO image via lima-vm/alpine-lima#13.
This issue is getting too crowded, with so many people reporting the same symptoms but different underlying causes. We know about 3 issues, and they have already been fixed for the next release. I've created separate issues for the remaining problems. And there is one problem left where qemu exits with "signal: abort trap", which is still not understood.

I'm going to close this issue now; if you still get the "limactl exited with code 1" error, and it is not covered by any of the scenarios listed above, please open a new separate issue. This includes any further instances of #532.
I'm getting the same :(
Happened in v0.4.1