WALDEMAR KOZACZUK edited this page Dec 29, 2022 · 72 revisions

The AArch64 port of OSv has been progressing over the years since 2015 and this ongoing effort can be broken down into three waves. The initial and most fundamental work was done by Claudio Fontana and others from Huawei Technologies Duesseldorf GmbH in 2015. The second "wave" contributions to add Xen support came from Sergiy Kibrik in 2017. The latest effort has been picked up by Waldemar Kozaczuk in late 2019. Many contributions also came from Stewart Hildebrand from DonnerWorks.

State of the AArch64 Port as of December 2022 (release 0.57)

As of this writing, OSv can boot in emulated AArch64 mode on QEMU on x86_64 hardware, as well as on QEMU and Firecracker on AArch64 hardware with KVM enabled. The latter has been tested on Raspberry Pi 4B and Odroid N2+ hardware running Ubuntu. The Xen support has not been tested recently.

On both QEMU and Firecracker, the virtio-blk, virtio-net, virtio-rng, and serial console devices are recognized and supported; on QEMU the virtio support is PCI-based (though without the MSI/MSI-X extension), while on Firecracker it is MMIO-based. Furthermore, one can run most applications, as well as all unit tests, loaded from a RAMFS, ROFS, or ZFS disk, or over Virtio-FS (note that tst-rcu-hashtable occasionally fails). Networking appears to function as well: DHCP works and the OSv guest responds to ping from the host. Many advanced applications, including Python, Java, Golang, Lua, iperf3, nginx, lighttpd, and web servers implemented in Rust, have been tested and seem to behave fine.

Overall, the AArch64 support is pretty much on par with x86_64.

Recent Improvements

Some Missing Features

  • No GICv2m or GICv3 support, therefore no MSI or MSI-X at the moment
  • Lack of dynamic TLS support for applications
  • Other open issues

Building, Running, Debugging

As of this writing, the AArch64 kernel can be cross-compiled on Fedora, Ubuntu, and CentOS 7, with the AArch64 artifacts (libraries and headers) downloaded. If you happen to use a different Linux distribution, you can always use the Fedora OSv development container or the Ubuntu OSv development container. It should also be possible to build the AArch64 version of OSv natively on Ubuntu on ARM hardware such as a Raspberry Pi 4, Odroid N2, or RockPro64.

You can build RAMFS, ZFS, or ROFS images like so:

./scripts/build image=empty fs=rofs -j4 arch=aarch64 #When cross-compiling
./scripts/build image=nginx-from-host                 #On native ARM hardware

You can run OSv either in emulated AArch64 mode on QEMU on X64 hardware or on QEMU and Firecracker on AArch64 hardware with KVM enabled. Here is an example of using run.py to run OSv in emulated mode on QEMU:

./scripts/run.py --arch=aarch64 -e '/tests/tst-hello.so'

On Firecracker:

./scripts/firecracker.py

One can debug OSv on QEMU simply by pointing gdb at the AArch64 version of loader.elf, like so:

gdb build/release.aarch64/loader.elf

Setting Up a Raspberry Pi 4 for OSv

The easiest way is to use non-SSD boot, which simply requires installing the Raspberry Pi version of Ubuntu 20.04 from https://ubuntu.com/download/raspberry-pi on an SD card. However, the performance is not going to be great, as disk I/O with an SD card is much worse than with an SSD. Setting up a Raspberry Pi 4 so it can boot Ubuntu from an SSD is more involved, but you get much better performance.

Steps for SSD boot (somewhat based on https://tynick.com/blog/05-22-2020/raspberry-pi-4-boot-from-usb/, https://www.raspberrypi.org/forums/viewtopic.php?t=275291 and https://www.raspberrypi.org/forums/viewtopic.php?f=131&t=268476#p1634061):

  1. Install 64-bit Raspberry Pi OS on an SD card.
  2. Boot from the SD card and upgrade the boot loader, following the first article.
    • At this point you should be able to boot from an SSD without the SD card if you installed 64-bit Raspberry Pi OS on the SSD, but we want Ubuntu 20.04. The 64-bit Raspberry Pi OS is fairly outdated in terms of available development packages such as gcc (only 8.3) and QEMU, which is pretty much unavailable in a usable form.
  3. Using a laptop with Ubuntu, get the Raspberry Pi 64-bit version of Ubuntu 20.04 and install it on the SSD, for example by using the “Disks” app (restore the whole disk, not a partition).
  4. Mount the 64-bit Raspberry Pi OS SD card from step 1 on the same laptop.
  5. Copy (overwrite) start4.elf and fixup4.dat from the Raspberry Pi OS boot partition to the Ubuntu boot partition.
  6. Edit config.txt on the Ubuntu boot partition by following https://www.raspberrypi.org/forums/viewtopic.php?f=131&t=268476#p1634061.
  7. Uncompress vmlinuz on the Ubuntu boot partition and overwrite it with its uncompressed copy.
  8. After each “apt-get upgrade”, repeat step 7 if the kernel was updated.

To prepare for OSv:

  • sudo apt-get install qemu-kvm - to install QEMU
  • sudo usermod -aG kvm <user> - to enable KVM

Various useful ARMv8/AArch64 documentation

The documentation below was written by Claudio Fontana and describes the state of the AArch64 port of OSv as of circa 2015.

The AArch64 port of OSv is ongoing, initially targeting the QEMU mach-virt platform, running on the ARM Foundation Model v8 or on the APM X-Gene Mustang development board.

Functional Status

mainline contains AArch64 support for the loader image (loader.img), which means it is possible to embed programs inside the loader itself. Manual modification of the bootfs.manifest.skeleton is necessary.

SMP is supported, but the SMP work for AArch64 has exposed a bug in the virtual counter in QEMU/kvm which still needs solving. In the meantime, you need to apply the following workaround for kvm:

"[RFC PATCH] KVM: arm/arm64: Don't let userspace update CNTVOFF once guest is running"

https://lists.cs.columbia.edu/pipermail/kvmarm/2015-June/015198.html

There are some limitations, mostly in the libc support; you can read the details below.

Mainline OSv already includes all the features, so there is no need to look at special branches.

Experimental work-in-progress can sometimes be found here:

https://github.com/hw-claudio/osv_aarch64 "aarch64-next"

https://github.com/hw-claudio/osv_aarch64/tree/aarch64-next

Beware: aarch64-next is a rebasing branch.

Upstream QEMU (git mainline) is now usable for aarch64, including PCI support, since February 13th, 2015.

While the loader.img can be built, including adding your own programs to the bootfs image, building the usr.img is not possible yet, due to issues in the current build system.

Most of the problems are due to the build step that requires running an OSv VM in order to build the OSv image with the ZFS file system, but there are other major issues, including the framework loosely defined by the Python scripts in scripts/ and the general lack of cross-compilability in everything beyond the kernel proper.

To address these challenges, a user-space ZFS image creation tool has been sketched to enable building the ZFS-based image without running OSv, and then the scripts should be reworked (or avoided as much as possible) for the usr.img creation.

For development purposes, you can find a pre-built tentative usr.img image to use for virtio-blk and ZFS mounting tests at:

https://github.com/hw-claudio/osv_aarch64.git "usr.img"

In the usr.img branch, look for a file in the top source directory called "usr.img.aarch64"

Component Status

* build system: the first pass (loader.img) mostly works through cross-compilation, but there are issues proceeding any further, as mentioned.

* qemu-system-aarch64 tcg software system emulation: currently the AArch64 image runs on the Foundation Model but also successfully on the QEMU tcg software system emulation.

* tests with available hardware (development boards): preliminary tests with the currently available hardware have been done, in particular with the APM X-Gene Mustang board.

* external dependencies: Avi has successfully added the Fedora packages for AArch64, and they seem ok, although they still contain broken stap information, causing warnings.

* devices support: support for PCI is available, as well as virtio-rng, virtio-blk and virtio-net. Simple small functional tests have been performed with success.

* smp support: work is now upstream. SMP is implemented via PSCI.

* tls support: currently tls works only for in-kernel tls variables. It is not possible to run external applications (.so ELF files) which make use of tls variables.

* libc: We need to implement setjmp, longjmp, ucontext, and the architecture signal code, and also fix the broken/missing floating point support for the libc math functions that depend on floating point representation.

* musl: also related to the preceding point, as libc is implemented partly with musl and partly with our own code for the OSv-specific parts. AArch64 support for musl has now landed in mainline musl, but the issues with floating point are still there (results of double precision are completely wrong).

* hardware information passing from the host to the guest is currently based on device trees, with fallback defaults for the mach-virt platform. No ACPI in OSv yet though.

* ELF64: initial relocations for get_init() are supported, plus basic relocs necessary to run applications. Additional relocations will be implemented if/when we hit missing ones while enabling more and more applications for AArch64.

* console: pl011 UART output and input is now available, no FIFO (or, FIFO depth = 1).

* backtrace: basic functionality now available.

* power: implemented via PSCI.

* exception handling: seems to work well.

* MMU: done, but need to revisit for the sigfault detection speedup feature added to x64 (and stubbed on AArch64). We also need to clean up parameter passing through the MMU code, and remove some code duplication.

* page fault and VMA handling: basic functionality now available.

* interrupts: basic functionality now available.

* GIC: more or less done, v2. Note that we don't support GICv2m or GICv3, so we cannot have MSI or MSI-X at the moment.

* Generic Timers: functionality available.

* scheduling: support for task switching available (switch-to-first, switch-to)

* signals: work started, but architecture-generated signals work has not been submitted yet.

* arch trace: nothing available.

* sampler: sampler support missing for AArch64.

* scripts: most scripts have not even been looked at for AArch64

* management tools: management tools have not been looked at yet for AArch64

* tests: some tests build, but most don't because of other missing components. No attempt has been made to run any tests besides tst-hello.so.

* string optimizations: imported from newlib (based on BSD-licensed Linaro patches). This includes memcpy, memset and memcpy_backwards, slightly changed from the original due to different API entry point.

* hypervisor and firmware detection: not a priority, not implemented.

Build instructions

These are brief instructions on how to cross-compile OSv's loader.img (still very incomplete) on an x86_64 build host machine.

At the time of writing, the available functionality is minimal: the loader image boots, the GIC is initialized, timers are initialized, etc., and a simple hello world application is started on top of OSv (in the case of aarch64-next), or you get an abort with a backtrace in ZFS (for master).

Crosscompiler Tools and Host Image from Linaro

You can find a 32-bit (needs multilib 😟) cross compiler from Linaro, in particular the package gcc-linaro-aarch64-linux-gnu-4.8-2013.12_linux, which is not distro-specific 😃 and includes all the tools needed for building.

http://releases.linaro.org/13.12/components/toolchain/binaries

For the host root filesystem for AArch64, a good option is the Linaro LEG Image

linaro-image-leg-java-genericarmv8-20131215-598.rootfs.tar.gz

http://releases.linaro.org/13.12/openembedded/aarch64/

You can experiment with other images and compilers from Linaro, but those are the ones I am using right now.

Crosscompiler Tools for Ubuntu

For Ubuntu, AArch64 cross compilers are available in the official repositories as well. The packages are named g++-4.8-aarch64-linux-gnu and gcc-4.8-aarch64-linux-gnu.

Ubuntu is also used over here, and works ok.

Preparing the AArch64 Host

You will need to have or build an AArch64 Linux kernel for the host, which will run on top of the Foundation v8 Model. In addition, you will need the boot wrapper, which you can get from:

http://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/boot-wrapper-aarch64.git

Use the foundation-v8.dts; my suggestion is to use nfsroot to mount the root filesystem (the Linaro LEG image). The boot wrapper takes the Linux kernel Image as input and produces linux-system.axf, which is the input for the Foundation Model.

Running ARMv8 Foundation Model

Start the Foundation model:

./Foundation_v8 --image=linux-system.axf --cores=1 --network=nat --network-nat-ports=1234=1234

The latter option exposes port 1234 on the host side as the same port number in the guest running inside the model. You can add additional mappings as needed.

If you are skipping user-space initialization with something like init=/bin/sh for speedup (edit the boot wrapper Makefile), you might need to run the following inside the Foundation Model:

/sbin/udhcpc eth0

Preparing the guest: External dependencies

Nothing to do anymore, since they are now part of the mainline tree.

Preparing the guest: Environment Variables for make

In addition to the general requirements for building OSv (see README.md), note that the simple build system recognizes the ARCH and CROSS_PREFIX environment variables and looks for the following build tools:

CXX=$(CROSS_PREFIX)g++
CC=$(CROSS_PREFIX)gcc
LD=$(CROSS_PREFIX)ld
STRIP=$(CROSS_PREFIX)strip
OBJCOPY=$(CROSS_PREFIX)objcopy
HOST_CXX=g++

In order to build for AArch64, contrary to the past when the target architecture was automatically detected by running the supplied compiler, you need to explicitly say make ARCH=aarch64; otherwise the build system will try to detect ARCH by running uname on the host machine and will try to build for x64.

At the beginning of the build process, look for this message:

build.mk:
build.mk: building arch=aarch64, override with ARCH env
build.mk:

If the message does not say arch=aarch64, the cross compiler could not be found or run correctly. In this case, check the CROSS_PREFIX variable, or the compiler binary name if it is not canonical (for example, do you need to add a symlink from g++-4.8.3 to g++?).

Running the guest

An example QEMU command line that works when running on top of the Foundation Model with KVM, using an AArch64 qemu-system-aarch64 binary, is:


$ qemu-system-aarch64 -nographic -M virt -enable-kvm \
    -kernel ./loader.img -cpu host -m 1024M -append "--nomount /tools/uush.so"

An example QEMU command line for system emulation on an x86_64 host with an x86_64 qemu-system-aarch64 binary is:


$ qemu-system-aarch64 -nographic -M virt \
    -kernel ./loader.img -cpu cortex-a57 -m 1024M -append "--nomount /tools/uush.so"

Jani Kokkonen <[email protected]>
Claudio Fontana <[email protected]>