
Allowing NixOS VM's to be run on macOS #108984

Closed
infinisil opened this issue Jan 10, 2021 · 99 comments · Fixed by #206951
Labels
6.topic: darwin Running or building packages on Darwin

Comments

@infinisil
Member

infinisil commented Jan 10, 2021

People using macOS currently can't interactively run NixOS VMs as described e.g. here, even with a remote Linux builder. In this issue I'm describing some ways this could be made to work. Note that speed is important as well, so kvm/hvf and co. should be used if possible.

Relevant issue is #64578. Ping @zupo @matthewbauer @nmattia @roberth. Any pointers/help with this is appreciated.

This issue is sponsored by Niteo :)

Using qemu directly (doesn't work)

With this change it's possible to build a NixOS VM run script that is executable on macOS (note that this requires a remote Linux builder):

$ nix-build '<nixpkgs/nixos>' \
  --argstr system x86_64-linux \
  --arg configuration '{ virtualisation.qemu.pkgs = import <nixpkgs> { system = "x86_64-darwin"; }; }' \
  -A vm \
  -I nixpkgs=https://github.com/infinisil/nixpkgs/archive/4d244410ee0f3e3ece5494533217bbafbd95d9b3.tar.gz
/nix/store/2a7cbyp9xp12ddc4lxb4h93dxa5yfndy-nixos-vm

However running the VM doesn't actually work:

$ result/bin/run-nixos-vm                                                          
qemu-system-x86_64: -virtfs local,path=/nix/store,security_model=none,mount_tag=store: There is no option group 'virtfs'
qemu-system-x86_64: -virtfs local,path=/nix/store,security_model=none,mount_tag=store: virtfs support is disabled

This is because qemu doesn't support virtfs on macOS.

It was suggested that this patch could be used to add virtfs support. This was attempted here, but the build doesn't succeed:

$ nix-build https://github.com/infinisil/nixpkgs/archive/57562282ab34fc44a492f32a13939d77e29d0d9b.tar.gz -A qemu
[...]
fsdev/virtfs-proxy-helper.c:16:10: fatal error: 'sys/fsuid.h' file not found
#include <sys/fsuid.h>
         ^~~~~~~~~~~~~
1 error generated.
make: *** [/private/var/folders/l6/7v9ppmg12q90382m246y3g5c0000gn/T/nix-build-qemu-5.1.0.drv-0/qemu-5.1.0/rules.mak:69: fsdev/virtfs-proxy-helper.o] Error 1
make: *** Waiting for unfinished jobs....
builder for '/nix/store/z8mkhjvaz3r7dixv94c1jprwx8gvl0gh-qemu-5.1.0.drv' failed with exit code 2
error: build of '/nix/store/z8mkhjvaz3r7dixv94c1jprwx8gvl0gh-qemu-5.1.0.drv' failed

Attempting to remove all the missing header files from virtfs-proxy-helper.c, including <sys/fsuid.h>, <sys/vfs.h>, <linux/fs.h> and <cap-ng.h>, just leads to compilation errors, seemingly indicating that this is Linux-only functionality.

This page however mentions that:

FSDRIVER: Either "local", "proxy" or "synth". This option specifies the filesystem driver backend to use. In short: you want to use "local". In detail:

  • local: Simply lets QEMU call the individual VFS functions (more or less) directly on host.
  • proxy: this driver was supposed to dispatch the VFS functions to be called from a separate process (by virtfs-proxy-helper), however the "proxy" driver is currently not considered to be production grade.

And indeed, the run-nixos-vm script only uses local. Quoting the script:

-virtfs local,path=/nix/store,security_model=none,mount_tag=store \
-virtfs local,path=$TMPDIR/xchg,security_model=none,mount_tag=xchg \
-virtfs local,path=${SHARED_DIR:-$TMPDIR/xchg},security_model=none,mount_tag=shared \

Attempting to skip compiling that tool with this commit also doesn't work, failing again on seemingly Linux-specific headers:

$ nix-build https://github.com/infinisil/nixpkgs/archive/5ed493ed6ae957aa510d5292eb373b8b1f4a3db1.tar.gz -A qemu
[...]
/private/var/folders/l6/7v9ppmg12q90382m246y3g5c0000gn/T/nix-build-qemu-5.1.0.drv-0/qemu-5.1.0/fsdev/file-op-9p.h:19:10: fatal error: 'sys/vfs.h' file not found
#include <sys/vfs.h>
         ^~~~~~~~~~~
1 error generated.
make: *** [/private/var/folders/l6/7v9ppmg12q90382m246y3g5c0000gn/T/nix-build-qemu-5.1.0.drv-0/qemu-5.1.0/rules.mak:69: fsdev/qemu-fsdev.o] Error 1
builder for '/nix/store/zfnmaj3jh16x2qlryzyf3m2gy3zcww53-qemu-5.1.0.drv' failed with exit code 2
error: build of '/nix/store/zfnmaj3jh16x2qlryzyf3m2gy3zcww53-qemu-5.1.0.drv' failed

So it seems that qemu just doesn't support virtfs on macOS.

Using libvirt (might work)

The above qemu documentation also has this section, which describes how the same can be achieved with libvirt. While I believe it just uses qemu underneath, there might be some additional libvirt magic happening. And it seems that macOS Mojave supports virtio-9p; in the linked post the author also uses libvirt successfully.

Currently the NixOS VM runner just passes arguments to qemu. In order to use libvirt we'll have to translate all these arguments to their libvirt equivalents in its XML configuration. There is even a libvirt page describing the equivalents of qemu arguments, which will be very useful. There is also the virsh domxml-from-native qemu-argv command, which can supposedly do this transformation automatically, though I haven't had any success with it yet.
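For reference, a minimal sketch of how one might try that automatic conversion (the file name and the qemu arguments here are made up for illustration):

```
$ echo '/run/current-system/sw/bin/qemu-system-x86_64 -m 1024 -drive file=nixos.qcow2,if=virtio' > qemu.args
$ virsh domxml-from-native qemu-argv qemu.args > nixos.xml
```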

In the above-linked document describing qemu argument equivalents, the -virtfs option (or the -fsdev and -device options it's a shorthand for) is notably missing. It should be replaced with a <filesystem> section as described in the qemu documentation link above.
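As a hedged sketch, the <filesystem> equivalent of the first -virtfs line might look like this (accessmode='passthrough' is my guess at the counterpart of security_model=none):

```xml
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/nix/store'/>
  <target dir='store'/>
</filesystem>
```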

It would be a good idea to first manually create a libvirt configuration and verify that it works for a NixOS VM. libvirt also has a graphical interface which could be used.

More relevant links:

  • libvirt supports virtio-9p since version 6.9.0
  • haxm might be needed for the end result to be fast
  • virtio-9p can be slow. There is a faster replacement virtio-fs. However as far as I know, macOS doesn't have a driver for that.
  • A blogpost describing how to use libvirt on macOS to start an Ubuntu VM
  • The NixOS Wiki page on libvirt

Removing the need for filesystem mapping (might work)

The main reason the -virtfs argument is used at all is so that the guest machine in the VM can access the host machine's /nix/store, which is where the whole system the guest runs resides.

An alternative approach could be to create a /nix/store image that qemu can use as the /nix/store directly, removing the dependency on filesystem mapping to the host.

See the various make-* files in https://github.com/NixOS/nixpkgs/tree/master/nixos/lib, which could be useful for this.
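As a rough sketch (untested, and assuming the storeContents argument that nixos/lib/make-squashfs.nix exposes), such an image could be built like this:

```nix
let
  pkgs = import <nixpkgs> { system = "x86_64-linux"; };
  nixos = import <nixpkgs/nixos> {
    system = "x86_64-linux";
    configuration = { };
  };
in pkgs.callPackage <nixpkgs/nixos/lib/make-squashfs.nix> {
  # Pack the closure of the whole system into a read-only image that
  # the VM could use as its /nix/store instead of a host mount.
  storeContents = [ nixos.config.system.build.toplevel ];
}
```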

@infinisil infinisil added 0.kind: bug Something is broken 6.topic: darwin Running or building packages on Darwin and removed 0.kind: bug Something is broken labels Jan 10, 2021
@zupo
Contributor

zupo commented Jan 11, 2021

Another benefit if we get the macOS + QEMU testing story working: ATM, running NixOS tests on various CI providers such as CircleCI and GitHub Actions is very slow, since these services do not provide /dev/kvm on their Linux runners. But they do provide macOS runners, which come with the Mac's KVM alternative. So if the macOS + QEMU story is improved, even people who don't really care about macOS but do run CI in the cloud (looking at @domenkozar) stand to gain lots of speed improvements.

@infinisil
Member Author

I don't have a lot of time right now, but here's a basic skeleton that could be used to create a libvirt xml config for seeing if that approach is viable:

let
  pkgs = import <nixpkgs> {};
  config = (import <nixpkgs/nixos> {
    system = "x86_64-linux";
    configuration = <nixpkgs/nixos/modules/virtualisation/qemu-vm.nix>;
  }).config;
in pkgs.writeText "nixos.xml" ''
  <domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    ...
    <os>
      ...
      <kernel>${config.system.build.toplevel}/kernel</kernel>
      ...
    </os>
    ...
    <filesystem type='mount' accessmode='$security_model'>
      <source dir='/nix/store'/>
      <target dir='store'/>
    </filesystem>
    ...
  </domain>
''

@domenkozar
Member

Refs #5241

@domenkozar
Member

virtio-9p can be slow. There is a faster replacement virtio-fs. However as far as I know, macOS doesn't have a driver for that.

Did you see https://passthroughpo.st/mac-os-adds-early-support-for-virtio-qemu/

@infinisil
Member Author

@domenkozar Yeah, click the "uncovered by Qemu developer" link there, it leads to a blogpost I linked above, which only mentions virtio-9p, not virtio-fs. I'm now just seeing that that news article mistakenly used virtio-fs in its picture..

@domenkozar
Member

Ah! That's what confused me.

@domenkozar
Member

Benchmarks show that this should also speed up the NixOS tests: https://matrix.org/_matrix/media/r0/download/johnguant.com/qgAVstnDkCydsABgKmOXRPGW

cc @JJJollyjim do you have a branch for this?

@Gaelan
Contributor

Gaelan commented Feb 15, 2021

The patch you linked seems to be only one of a series - presumably you need all of them for it to work. The series is visible here: https://lore.kernel.org/qemu-devel/[email protected]/. Not sure if there's a way to get the whole series from Patchwork.

@JJJollyjim
Member

Oh hey, just catching up here, I'm a little confused - the stuff where apple has added support for something is all about osx guests, not osx hosts which is what we care about here right?

I don't believe the current virtiofsd (host component) is able to run on osx (e.g. it uses namespaces to sandbox itself), but I don't think there is any technical limitation stopping it from being ported?

I recommend patchew if you want a nice way to search for qemu patches and their status: https://patchew.org/QEMU/[email protected]/ (wish the kernel had something like this... -_-).

Anyway, let me know if you find that switching to virtio-fs will make OSX support work:

I have a branch, but haven't worked on cleaning it up and posting it because for some reason the DAX patches still haven't been submitted to qemu (last I checked), and without DAX the tiny performance improvement isn't worth the complexity. That calculus changes if it will fix OSX though :)

@JJJollyjim
Member

On second thoughts, I patch out the namespace stuff (so it works in the nix sandbox), so it's possible it will run on osx without any further changes. I don't have an osx machine to test on :(

@Gaelan
Contributor

Gaelan commented Feb 15, 2021

I have a macOS machine I'm happy to test stuff on if it'd be helpful for you.

Oh hey, just catching up here, I'm a little confused - the stuff where apple has added support for something is all about osx guests, not osx hosts which is what we care about here right?

Not quite sure what you're referring to, but here's a quick run-down of Apple's recent-ish features for macOS hosts:

  • Hypervisor.framework: 3-5 years old. Provides low-level userspace access to the CPU's virtualization features. I think QEMU has a backend for this, called HVF.
  • Virtualization.framework: New with Big Sur, this year's release. Much higher-level - you give it a kernel, initrd, and disk, and it goes from there. Pretty limited - it only emulates a few devices (the virtio devices for disk, network, serial, RNG and memory balloons), and only supports raw images (though APFS's sparse files negate the need for qcow2 a bit). vftool provides a thin command-line wrapper over this.

(As a side note, I've been experimenting with using Virtualization.framework to implement a simple Linux VM, using a tiny linux kernel with u-root as a simple "bootloader" to find and exec the real kernel image off a Linux filesystem; I've got it working with Ubuntu, but not NixOS; I suspect there's some issue with virtio-console support on the install ISO.)

@JJJollyjim
Member

@Gaelan I was referring to Domen's Passthru Post link about virtio :)

@Gaelan
Contributor

Gaelan commented Feb 15, 2021

Ah, my bad.

@mroi
Contributor

mroi commented Mar 23, 2021

I ported the original 9p patchset (mentioned earlier in this thread) to QEMU 5.2.0. In initial tests, 9p support on Darwin appears to be working.

But this is a large change that should probably go to upstream and not to Nixpkgs. If you want to try anyways: mroi@839559b

@anthr76
Contributor

anthr76 commented Apr 8, 2021

@mroi It would help to at least submit the patch to Homebrew. Thanks for the work! I'll look into giving it a try.

@SCOTT-HAMILTON
Contributor

SCOTT-HAMILTON commented Apr 14, 2021

I would really love the test driver to run on macOS with HAXM enabled (-enable-hax).

I tried to merge both patches from @mroi and @infinisil and ran the same command as above:

$ nix-build '<nixpkgs/nixos>' \
            --argstr system x86_64-linux \
            --arg configuration '{ virtualisation.qemu.pkgs = import <nixpkgs> { system = "x86_64-darwin"; }; }' \
            -A vm \
            -I nixpkgs=http://github.com/SCOTT-HAMILTON/NixPkgs/archive/230f823cd985e499ff2fd450b419bc5b44c6dbf8.tar.gz

But I get this error:

error: a 'x86_64-linux' with features {} is required to build '/nix/store/qbr4wjfz9lzrqha4gl6mjp2l0imwqdgh-append-initrd-secrets.drv', but I am a 'x86_64-darwin' with features {benchmark, big-parallel, nixos-test}

@domenkozar
Member

For testing patches you'll need a remote builder to build the Linux bits. The quickest way is to follow https://nix.dev/tutorials/continuous-integration-github-actions.html and get them from your binary cache.
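For reference, a remote Linux builder is registered on the Mac via /etc/nix/machines; a hedged example entry (hostname, user, key path and feature list are placeholders):

```
# /etc/nix/machines: URI  system  ssh-key  max-jobs  speed  supported-features
ssh://builder@my-linux-box x86_64-linux /Users/me/.ssh/id_ed25519 4 1 kvm,big-parallel
```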

@domenkozar
Member

Steps left:

@mroi
Contributor

mroi commented May 9, 2021

make a PR for mroi/nixpkgs@839559b

You mean as a PR toward Nixpkgs or toward upstream QEMU? I can easily do the former. For the latter I would need to familiarise myself with their e-mail-based workflow. (Anyone here with experience in that?)

@domenkozar
Member

Former, although upstream patch would be really nice.

@anthr76
Contributor

anthr76 commented May 9, 2021

make a PR for mroi/nixpkgs@839559b

You mean as a PR toward Nixpkgs or toward upstream QEMU? I can easily do the former. For the latter I would need to familiarise myself with their e-mail-based workflow. (Anyone here with experience in that?)

The Linux kernel has a bot explaining their email workflow, which you might find similar to QEMU's:

torvalds/linux#803 (comment)

@mroi
Contributor

mroi commented May 10, 2021

I updated the 9p patches for current QEMU 6.0: #122420

I’ll look into proposing this to upstream. (Thanks @anthr76 for the link.)

@domenkozar
Member

@r2r-dev did you manage to get it working?

@r2r-dev
Contributor

r2r-dev commented May 17, 2021

Yup. I've prepared 2 test branches:

  1. Based mostly on stuff mentioned in this issue, as well as my fix for virtfs.
  2. With additional patches for M1, and a vmnet device adapter:

Obviously, neither of these branches is pretty, and both will surely need some cleanup.

Some remarks:

  • Qemu's v9fs requires additional patching (see the 1st branch). Without it, any getdents or getdents64 syscalls on the shared filesystem will result in a deadlock.
  • overlayfs won't work on top of v9fs mounts. It seems that v9fs cannot handle xattr querying. This can be fixed by mounting the lower dir with -o version=9p2000.u. However, even though I managed to boot into the VM, it was quite unstable. To mitigate that I used a squashfs-based /nix/store instead of one mounted from the host.

Refs: Mic92/nixos-shell#16
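For context, the version override mentioned above corresponds to a guest-side mount along these lines (mount_tag store as in the qemu -virtfs options quoted earlier; exact option spelling is a sketch):

```
mount -t 9p -o trans=virtio,version=9p2000.u store /nix/store
```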

@bergkvist
Member

bergkvist commented Jan 18, 2023

When I try to build github:NixOS/nixpkgs/594b94b4c3038f5c2cfb2f5d9c10ef30c7070a4c#darwin.builder inside of my builder (nix run github:NixOS/nixpkgs/af89d3a2be6f70edb187dd817377d6c4360134fa#darwin.builder), I get:

% nix build github:NixOS/nixpkgs/594b94b4c3038f5c2cfb2f5d9c10ef30c7070a4c#darwin.builder
error: build of '/nix/store/rnwbjdkq9wz43nlsmggr7ysn7k9h9z2w-nixos-disk-image.drv' on 'ssh-ng://builder@localhost' failed: builder for '/nix/store/rnwbjdkq9wz43nlsmggr7ysn7k9h9z2w-nixos-disk-image.drv' failed with exit code 32;
       last 10 log lines:
       > copying path '/nix/store/65cvxfd36l9cawzj1gkx3zpzj0bdfg0b-unit-serial-getty-.service' to 'local'...
       > copying path '/nix/store/8izgxj2nzpwkd0hpm3r38zkiklm1jkzr-unit-nixos-activation.service' to 'local'...
       > copying path '/nix/store/82nc5jrzri2w4hw6rhffis9m6n25xnbv-unit-systemd-fsck-.service' to 'local'...
       > copying path '/nix/store/r9h4h2j299iz32gnnr2qksz2hsi6i8cw-unit-systemd-udevd.service' to 'local'...
       > copying path '/nix/store/klkbaapdbcs35bxyp0l97ic3lpp0njnb-user-units' to 'local'...
       > copying path '/nix/store/c04f008fdyqmarjl6nf35fdrzfm9h6lk-system-units' to 'local'...
       > copying path '/nix/store/786ah3bs0n5lcvjq2qhp4laiqkg60wr0-etc' to 'local'...
       > copying path '/nix/store/mj26i1zj0bjv1b732bafrg2ffly34adj-nixos-system-nixos-23.05pre-git' to 'local'...
       > mount: /build/root/build/root: must be superuser to use mount.
       >        dmesg(1) may have more information after failed mount system call.
       For full logs, run 'nix log /nix/store/rnwbjdkq9wz43nlsmggr7ysn7k9h9z2w-nixos-disk-image.drv'.
error: builder for '/nix/store/rnwbjdkq9wz43nlsmggr7ysn7k9h9z2w-nixos-disk-image.drv' failed with exit code 1
error: 1 dependencies of derivation '/nix/store/wr12pkszc3gj021fy08cch9syhqvwyj2-run-nixos-vm.drv' failed to build
error: 1 dependencies of derivation '/nix/store/p5pa1imyb5yc6l3wy1cda5bz38dc430f-nixos-vm.drv' failed to build
error: 1 dependencies of derivation '/nix/store/ld4rlr7sf599qi53bmd4d2xrjsm53n11-create-builder.drv' failed to build

#210812 also seems to have broken other builds - which triggered this emergency fix: #211218. Doesn't seem to have made it to the unstable branch yet though. https://nixpk.gs/pr-tracker.html?pr=211218

@dhess
Contributor

dhess commented Jan 19, 2023

Is it possible to add NixOS modules to the darwin.builder's config? If so, it's not clear how.

I ask because I'd like to run this on our aarch64-darwin CI machines, but enable the NixOS Tailscale module to make each aarch64-darwin's darwin.builder available to our Hydra as a remote aarch64-linux builder, as well. (I believe this would also get around any host port 22 and/or macOS firewall issues.)

@Gabriella439
Contributor

@dhess: Yes. If you copy the code from here:

builder =
  let
    toGuest = builtins.replaceStrings [ "darwin" ] [ "linux" ];
    nixos = import ../../nixos {
      configuration = {
        imports = [
          ../../nixos/modules/profiles/macos-builder.nix
        ];
        virtualisation.host = { inherit pkgs; };
      };
      system = toGuest stdenv.hostPlatform.system;
    };
  in
  nixos.config.system.build.macos-builder-installer;

… then you can add additional modules of your own.
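For example, a hedged sketch of that same expression with an extra module spliced in (the Tailscale example is illustrative, not part of the original snippet):

```nix
nixos = import ../../nixos {
  configuration = {
    imports = [
      ../../nixos/modules/profiles/macos-builder.nix
      # Any additional NixOS modules of your own go here, e.g.:
      ({ ... }: { services.tailscale.enable = true; })
    ];
    virtualisation.host = { inherit pkgs; };
  };
  system = toGuest stdenv.hostPlatform.system;
};
```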

@bergkvist
Member

bergkvist commented Jan 23, 2023

In case you need more than 3GB of memory for your builder:

# Starts darwin.builder VM with 8GB RAM
QEMU_OPTS="-m 8192" nix run nixpkgs#darwin.builder

@misuzu
Contributor

misuzu commented Jan 25, 2023

Does qemu support passing the rosetta binary to the guest system? It would be awesome to enable #202847 to support building for x86_64-linux too.

@YorikSar
Contributor

@misuzu The Rosetta "drive" is mounted via the higher-level Virtualisation.framework, while QEMU uses the lower-level Hypervisor.framework. The latter is more like kvm, while the former is more like QEMU itself, so I doubt support for it will happen in QEMU. I think #5241 is relevant to bringing virtualisation backends other than QEMU for running NixOS VMs.

@roberth
Member

roberth commented Jan 25, 2023

UTM does support it, is supposedly based on QEMU, is packaged, and we have a NixOS module to support it on its guests (#202847). If it has a qemu-compatible command line, it might be close to a drop-in replacement.

#5241

If it's like qemu, you can set qemu.package at the test level if you use the documented entrypoint.

@YorikSar
Contributor

@roberth UTM docs clearly state https://docs.getutm.app/advanced/rosetta/:

Rosetta allows you to run Intel Linux executables in an Apple Silicon Linux virtual machine (using Apple Virtualization backend).

QEMU VMs don't have this setting in UTM.

@ElvishJerricco
Contributor

@roberth IIUC, UTM only supports rosetta if you tell it to use Apple Virtualisation instead of QEMU.

@willcohen
Contributor

I love QEMU, but this is awesome. I still don't quiiite have copy/paste working this way (not sure if spice + wayland is a thing), but using Apple Virtualization solves the other major pain point of having normal-ish resolutions relative to my laptop and external monitor working with wayland/sway on a graphical NixOS VM. I feel like there should be a flashing exclamation point somewhere that this works so well.

@roberth
Member

roberth commented Jan 25, 2023

supposedly based on

This was outdated information then. Thanks for correcting me.

I was thinking about the VM tests in my last comment. It'd be far easier to add Apple Virtualisation support if it's just for the darwin.builder, as that's a simple entrypoint. I guess you'd have to figure out:

  • how to invoke Apple Virtualization from the command line. Some apple command? UTM? Custom program?
  • create a new module with a new option that holds the Apple Virtualization invocation
    • figure out which NixOS options hold the image and kernel
  • change the darwin.builder package to get that option instead of the qemu one

@misuzu
Contributor

misuzu commented Jan 25, 2023

how to invoke Apple Virtualization from the command line. Some apple command? UTM? Custom program?

Maybe this could be adapted to our use-case:
https://github.com/Code-Hex/vz/tree/main/example/linux
lima-vm/lima#1155

Another approach might be to mount /Library/Apple/usr/libexec/oah/RosettaLinux into the QEMU VM, but we have to figure out how to enable TSO mode.
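If the 9p patches from earlier in this thread were used for that, the share would presumably look something like this (hypothetical invocation, trimmed to the relevant flag):

```
qemu-system-aarch64 ... \
  -virtfs local,path=/Library/Apple/usr/libexec/oah/RosettaLinux,security_model=none,mount_tag=rosetta
```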

@ElvishJerricco
Contributor

I do wish the kernel had support for enabling TSO mode per process. I don't even know if that's possible in a VM, but it'd be very nice for Asahi Linux to be able to use the rosetta binary via hacks and with TSO enabled.

@YorikSar
Contributor

There's a vmcli project: https://github.com/gyf304/vmcli#vmcli-1 - it allows running VMs with Virtualisation.framework from the CLI. We could use it to start the VM and add Rosetta support to it.

@fkorotkov

There is also Tart, which has Rosetta support. One just needs to:

brew install cirruslabs/cli/tart
tart clone ghcr.io/cirruslabs/ubuntu:20.04 ubuntu
tart run --rosetta="rosetta" ubuntu

And make sure Rosetta is configured inside the VM according to these docs.

@dhess
Contributor

dhess commented Jan 28, 2023

I've gotten pretty far with Tart. Thanks to @fkorotkov for making me aware of it in the above comment!

  1. Create a new, empty Virtualization.Framework VM named nixos with tart create nixos --linux
  2. Build an aarch64-linux NixOS configuration as a raw-efi image using https://github.com/nix-community/nixos-generators. For bonus points, set virtualisation.rosetta.enable = true and nix.settings.extra-platforms = [ "x86_64-linux" ] in the VM's config, so that the VM will be able to run x86_64 Linux binaries, as well.
  3. Overwrite the Tart VM's disk.img with the nixos.img file from step 2.
  4. Run the VM with tart run nixos --rosetta rosetta
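The bonus-point settings from step 2, as a NixOS configuration fragment (option names as mentioned above):

```nix
{
  # Let the guest run x86_64 Linux binaries via Rosetta
  virtualisation.rosetta.enable = true;
  nix.settings.extra-platforms = [ "x86_64-linux" ];
}
```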

I've spent most of my time on this so far just learning how Virtualization.Framework works, and not much time experimenting with the running VM, but everything I've tried so far works great, including building and running x86_64-linux derivations. Additionally, I think it would be straightforward to automate steps 1-3 in a Nix derivation.

Even better would be to use the Mac's /nix/store in the VM. To experiment with this, I did the following:

  1. Run the VM with tart run nixos --rosetta rosetta --dir="store:/nix/store"
  2. In the running VM, run mount -t virtiofs com.apple.virtio-fs.automount /mnt. This mounts the Mac's /nix/store on /mnt/store in the running VM. I can see all the files there, though performance is a bit slow. Maybe mounting it with some cache= options would help.
  3. Bind mount the virtiofs-mounted Mac Nix store to the VM's /nix/store via mount --bind /mnt/store /nix/store. (I think this extra step is necessary due to the way that Virtualization.Framework makes all shared filesystems available under a single mount point, but I might be missing something.) Confirm that everything still works.

So far, so good. However, my particular use case for this VM is to use it as an aarch64-linux remote builder, so I want to be able to build derivations in it, while re-using the Mac's own /nix/store for disk and network savings. Now I try to build something that's not in the store: nix run nixpkgs#hello. This fails with error: cannot open connection to remote store 'daemon': error: reading from file: Connection reset by peer

Looking at the logs, the problem is apparent: unexpected Nix daemon error: error: changing ownership of path '/nix/store': Operation not permitted

This makes sense: Nix is running in multi-user mode on my Mac, so my Mac's /nix/store is owned by root and managed by the local nix-daemon running as root. I'm running tart as my local macOS user, not root, so tart doesn't have permissions to change ownership on the Mac's /nix/store.

Anyway, besides that small catch, this route looks very promising. I think the main issue for our use case is that Virtualization.Framework doesn't support nested virtualization, so we won't be able to use this virtualized remote aarch64-linux builder to run NixOS tests.

@Gabriella439
Contributor

@dhess: You probably don't want to share the host's /nix/store with the builder anyway, for the reason outlined in this comment:

# If we don't enable this option then the host will fail to delegate builds
# to the guest, because:
#
# - The host will lock the path to build
# - The host will delegate the build to the guest
# - The guest will attempt to lock the same path and fail because
#   the lockfile on the host is visible on the guest
#
# Snapshotting the host's /nix/store as an image isolates the guest VM's
# /nix/store from the host's /nix/store, preventing this problem.

@dhess
Contributor

dhess commented Jan 28, 2023

@Gabriella439 Thanks, I did see that comment linked from elsewhere, either in this discussion, or in a related one.

However, if I understand the comment correctly, I don't think it applies in our use case. We have several dedicated macOS remote builders (let's call them mac1, mac2, etc.), which appear in our x86_64-linux NixOS dedicated builder's /etc/nix/machines file; let's call that machine nixos1. mac1 etc. are not for interactive use: they only build derivations that are delegated by nixos1.

nixos1 cannot build aarch64-linux derivations, but virtualized aarch64-linux NixOS VMs running on mac1 etc. could. So what we'd like to do is have mac1 etc. run these aarch64-linux VMs (let's call them arm1 etc.), list those VMs in nixos1's /etc/nix/machines file, and then have nixos1 delegate aarch64-linux jobs to those VMs. Therefore, unless I'm missing something, the comment does not apply in our case, because the host is not delegating any jobs to the guest, only nixos1 is.

Perhaps it's possible that nixos1 could ask arm1 to build platform-independent derivation foo at the same time as it's asking mac1 to build it? But even in this case, I assume that whichever remote builder wins the race will go first while the other waits for the lock, then proceeds (and presumably sees that the derivation has magically appeared in the store in the meantime). In any case, I wouldn't expect a deadlock to occur in this scenario.

@ghost

ghost commented Nov 9, 2023

Over in this podman issue, I see that virtiofs on macOS doesn't bypass open file limits, which leads to unexpected behavior in the guest OS: containers/podman#16106

I'm addressing the issue directly in podman, but I was hoping someone from this issue might have an opinion on whether qemu should bump its own ulimits when it is acting as the virtiofs daemon?
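As a sketch of that idea (a wrapper script rather than a change to qemu itself; the qemu binary name is a placeholder), raising the soft open-file limit to the hard limit before handing off to qemu could look like this:

```shell
#!/usr/bin/env bash
set -eu
# Raise the soft open-file limit to the hard limit, if possible, so the
# virtiofs-serving qemu process isn't capped at the macOS default (256).
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
if [ "$hard" != "unlimited" ] && [ "$soft" != "unlimited" ] && [ "$soft" -lt "$hard" ]; then
  ulimit -Sn "$hard"
fi
echo "open files soft limit: $(ulimit -Sn)"
# exec qemu-system-aarch64 "$@"   # placeholder: hand off to qemu here
```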

Gabriella439 added a commit that referenced this issue Mar 2, 2024
Closes #193336
Closes #261694
Related to #108984

The goal here was to get the following flake to build and run on
`aarch64-darwin`:

```nix
{ inputs.nixpkgs.url = <this branch>;

  outputs = { nixpkgs, ... }: {
    checks.aarch64-darwin.default =
      nixpkgs.legacyPackages.aarch64-darwin.nixosTest {
        name = "test";

        nodes.machine = { };

        testScript = "";
      };
  };
}
```

… and after this change it does.  There's no longer a need for the
user to set `nodes.*.nixpkgs.pkgs` or
`nodes.*.virtualisation.host.pkgs` as the correct values are inferred
from the host system.
@Gabriella439
Contributor

You can now run NixOS tests on macOS, too. See: #282401

Note that you still need a Linux builder to build the test VM, though
